I have just read an interesting piece by Rebecca Brooks published in Research World (ESOMAR’s own publication, November/December 2015 issue, pages 20-22) where she proposes at least a couple of reasons why a research project might yield a far-from-satisfactory result: organisational fragmentation and narcissistic objectives.
While I am not going to discuss here the impact of having multiple clients – and sometimes contrasting needs – to satisfy within organisations when undertaking a market research project, I would like to focus on the misalignment between the client’s objectives and the respondent’s goals during the survey interaction. Speaking from experience, when market researchers write customer satisfaction questionnaires they very often ask respondents to express what they think of brand X, candidate Y, service Z, etc., typically at the client’s request. That is probably because it sounds like the most logical way of measuring the likelihood that current customers will continue to purchase what they purchased in the past.
However, it is true – and market researchers are very aware of this – that “the consumer has changed” and “technology is changing [customer] behavior”. That also means that, increasingly, “consumers aren’t interested in helping a brand improve itself”, undermining the efficacy of the traditional customer satisfaction survey.
Consequently, market researchers should switch from client-focussed questions – e.g. “how do you evaluate X?”, “how likely are you to recommend Y to friends?”, etc. – to a more respondent-focussed discussion, with questions of the sort “in what way is X functional to your well-being?”, “how deep would you say your relationship with service Z is?”, etc.
Yes, this means more qualitative data, surely harder to analyse and report than numerical scales, but of greater value because it is part of a more meaningful relationship between the company and the client.
PS: As you might have guessed by now, English is not my mother tongue, so please excuse any grammar/syntax faults.
I have just read a paper from uSamp titled “Are Mobile Insights Better Than Online?”, which I downloaded from http://pages.usamp.com/WP-MobilevsOnline-US300.html. Excellent job by Ben Leet and Edward Appleton, focusing on a very relevant topic.
Here I would like to share/discuss a few points which might be useful to improve my understanding – and perhaps that of others:
- On page 2 there is a chart suggesting that (in 2015?) “83% of internet usage [will be] from mobile”. That is quite an impressive figure – can anyone support that data? The paper does not specify the source: what could it be? Are we talking about units of time spent connected to the internet, or about something else?
- On page 5 I read “what was not anticipated was the extent to which mobile methodology experiences higher drop out relative to online.” and “the reason for the high mobile drop-out rates is that mobile respondents are far more impatient.” I wonder: could that have been caused by a substantially different socio-demographic profile between online and mobile users? I understand that “Other factors, such as interview length, locations, and survey experience, were all measured as almost identical across modes, and therefore had no bearing on the drop-off rate”, but it is unclear whether socio-demographics were also controlled for.
- On page 6 I read “Where were the Lays Cheesy Garlic Bread Chips on the shelf?” followed by “It is therefore clear that online respondents ‘guessed’ the answer to this question”, and I am thinking how biased many recall-based questionnaires must be. Context becomes a must during survey administration, and mobile is indeed a very useful instrument in this respect.
- Also on page 6, the authors report quite significantly different answer patterns between mobile and online respondents to the question “From the moment you entered the store, how long did it take you to find the Walkers Prawn Cocktail crisps?”, but again such a difference could be caused by likely differences in the socio-demographic profiles of mobile vs online respondents. To elaborate, my suspicion is that mobile respondents were (on average) probably younger and more dynamic than online respondents, and age is quite likely a determinant of one’s perception of time-related tasks – that is: the younger one is, the faster one perceives oneself to be.
- On page 7 I read “The impact on accuracy of recall is likely negative”, and I could not agree more with that statement. Every minute separating the experience from its recall is filled with thousands, if not millions, of (unconscious) stimuli our mind processes and our memory deals with, so once again context is king.
- Also on page 7, the authors report only small differences in the sensory evaluation of the crisps by online and mobile respondents. That sounds like good news to me, as it indirectly supports mixed-mode research approaches which – in my opinion – are quite useful and should be further deployed.
- On page 8 I read “With evidence showing that mobile respondents were much closer to the moment of consumption, it’s very possible that an element of satisficing [was] contained within the online sample”, and I wonder if that too could be an effect of different socio-demographic profiles between the online and mobile samples. I am guessing that younger, more mobile targets could be less accommodating and less prone to satisficing than older, more online-oriented respondents.
- On page 9 I read something very revealing to me: “It is a clear sign that the next generation is more comfortable typing a response on a smart phone than on a desktop/laptop keyboard.” It makes very much sense: users can type on their smartphone wherever they happen to be, and keyboards are at risk of becoming obsolete (!), especially with younger, on-the-go users. On a side note, I wonder how the likely increased usage of abbreviations – quite common when typing on smartphones – is going to impact the comprehension and (automatic) decoding/analysis of mobile free text…
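On that last point about decoding abbreviations, the simplest automatic approach I can imagine is a dictionary lookup before any text analysis. Here is a toy sketch – the abbreviation list, function name and example answer are purely illustrative, and a real system would need a far larger, locale-aware lexicon (or a statistical model):

```python
# Toy normaliser for smartphone-style abbreviations in open-ended answers.
# The mapping below is purely illustrative, not a real research lexicon.
ABBREVIATIONS = {
    "u": "you", "r": "are", "gr8": "great",
    "b4": "before", "imo": "in my opinion", "thx": "thanks",
}

def normalise(answer: str) -> str:
    """Expand known abbreviations token by token; leave other words as-is."""
    return " ".join(ABBREVIATIONS.get(token, token)
                    for token in answer.lower().split())

print(normalise("imo the crisps r gr8 thx"))
# in my opinion the crisps are great thanks
```

A lookup like this obviously fails on misspellings, punctuation and context-dependent shorthand, which is exactly why mobile free text may complicate automated coding.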
Finally, while I agree with the conclusions of the study on page 11, I would like to see a similar study with an additional control for the socio-demographic profile of respondents, as that could even help sort out some of the issues related to the timing and costing of mobile market research applications.
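The kind of socio-demographic control I have in mind could be as simple as post-stratification: reweight one sample so its age profile matches the other’s before comparing answers. A minimal sketch – all age bands, timings and figures are hypothetical, not taken from the paper:

```python
from collections import Counter

def poststratify_weights(sample_bands, reference_bands):
    """Weight each respondent in sample_bands so that the weighted
    age distribution matches that of reference_bands."""
    sample = Counter(sample_bands)
    reference = Counter(reference_bands)
    n_s, n_r = len(sample_bands), len(reference_bands)
    # weight = (reference share of the band) / (sample share of the band)
    return [(reference[b] / n_r) / (sample[b] / n_s) for b in sample_bands]

def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Hypothetical data: perceived time (minutes) to find a product on the shelf.
mobile_bands = ["18-34", "18-34", "18-34", "35-54"]
mobile_times = [2.0, 2.5, 2.0, 4.0]
online_bands = ["18-34", "35-54", "35-54", "35-54"]

naive = sum(mobile_times) / len(mobile_times)
weights = poststratify_weights(mobile_bands, online_bands)
adjusted = weighted_mean(mobile_times, weights)
print(round(naive, 2), round(adjusted, 2))
```

If the adjusted mobile figure moves towards the online one, age (rather than mode) was driving at least part of the gap – which is exactly the confound I suspect in the drop-out and time-perception comparisons above.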
To end my commentary, a big thanks to Ben Leet and Edward Appleton for sharing their research-on-research on this very interesting topic.
Of course, I would welcome a continued discussion on the topic.
The Professional Researcher Certification Board of the Marketing Research Association announced that Luca Meyer has achieved the professional designation of Professional Researcher Certification (PRC) by successfully completing all the application, continuing education and examination requirements. Candidates must meet rigorous eligibility requirements to be able to take the examination. Achieving the PRC designation demonstrates a desire to uphold the professional standard and a commitment to career development in the marketing research industry.
Luca Meyer has been working in the field since 1994, holding both domestic and international roles on both the final-client and market research agency sides. Luca is a native Italian speaker and is fluent in English, with a basic understanding of French and German.
Professional Researcher Certification (PRC) is a powerful tool for individual researchers of all levels of work experience and education. MRA’s goal is to encourage high standards within the profession, raise competency, establish an objective measure of an individual’s knowledge and proficiency, and encourage continued professional development. Additionally, these certification standards are designed to increase consumer understanding of research and foster premier professional standards in the profession.
PRC is a career-long commitment that shows your peers, your employees and your organization that you have mastered the core PRC principles and that you are dedicated to staying current in your profession.
For more information, visit www.marketingresearch.org/certification or call MRA at 202.800.2545.