Bastion Latitude’s Research Director and comms expert, Mehdi Khallouk, explores the strategy behind impactful comms.
No, comms testing isn’t about asking whether people “like” the ad: it’s about nurturing the potential of a good idea… and identifying strategic and creative mistakes before they become too costly.
Investing in a new campaign is a big step for brand, marketing or comms professionals: it is understandable that they want certainty it will meet their objectives, and that they have done everything they can to maximise the likelihood it will work in market. This is where research, or ‘comms testing’, is often sought: to provide a ‘prediction’ of whether the comms will work and guidance on how best to refine the idea to maximise impact.
Researchers tend to be viewed with suspicion, if not downright condescension, by planners and creatives when it comes to comms testing: aren’t researchers just miserly geeks whose risk-averse mindset and out-of-context metrics kill any chance of a smart, disruptive creative idea ever seeing the light of day?
Is comms testing always terrible?
Certainly, I believe some methodologies, such as standardised quantitative pre-testing based on normative databases, should be treated with caution, as they tend to put all comms in the same boat instead of providing ad hoc metrics. Qualitative testing is also riddled with risk if moderators are inexperienced, unskilled in comms or not attuned to group dynamics.
However, I have also seen great work over the years, especially qualitative. The kind that really opened up new avenues for planners, that helped creatives amplify the power of their big idea, that successfully guided clients towards selecting risky but worthy concepts, and that indeed canned ineffective ideas to successfully redirect investment towards others.
So what makes for great qualitative comms testing?
Well for a start, I would encourage every researcher to consider it an ‘evaluation’ rather than ‘testing’ exercise. Testing speaks to a binary ‘proceed / don’t proceed’ mechanic that clearly doesn’t suit the planning and creative process or clients’ own decision-making. Evaluation requires an understanding of the elements that drive effectiveness as much as the level of effectiveness itself.
The first step is to differentiate between strategy, creative idea and executional elements. This seems obvious, but I have indeed seen great creative ideas penalised for poor execution. It is a researcher’s duty to use laddering techniques to get to the bottom of why a particular executional element creates unease, then take it out of the picture and move on to the broader idea. A recent example was a creative concept aimed at older Australians that introduced so much new tech language it alienated the audience – yet the overall idea, which used an ingenious visual and verbal device to get the message across, resonated highly.
Another key duty of the researcher is not to be swayed by what people like or dislike, and not to confuse the stated response with the outtake. Behavioural insight tells us that people are subject to ‘confirmation bias’, the tendency to want to reinforce their own view of the world… when advertising is often there to challenge it. What matters more than a like or a dislike is what people do with the information they’re given and how they’re left feeling.
This is why first impressions are so important. Long-form questioning is unlikely to uncover true effectiveness: listening to how people have internalised the message, what implications they draw from it, which direction they push the conversation in and what kind of energy is in the room after the first five minutes of exposure to a concept is more likely to tell you all you need to know.
This requires us to set up as natural an environment as possible. From creating energy at the outset of a group discussion – where participants exchange with one another rather than with the moderator – to not broaching the topic before exposing participants to the concepts, everything needs to be done to avoid creating false build-ups or biases.
Great evaluation also involves an element of comparison. We can’t expect everyone to be emotionally fluent or articulate. Having points of comparison between different concepts can greatly help participants to make a point.
Finally, great researchers are knowledgeable about comms. They either have a comms background, have taken a strong interest over the years or are intuitive enough to understand the discipline’s intricacies. They can tell strategy from creative, interpret the rejection of a paradigm-shifting idea for what it is, and have the courage to argue for difficult and unpopular choices with clients.
Are all strategic and creative ideas always that brilliant…?
Well, I have seen as many flat, uninspiring “creative ideas” as I have seen bad research projects. Inexperience and incompetence abound in all industries: planning, creative and research. We are all guilty of sometimes producing average work, myself included.
Let’s cut everyone some slack. Research shouldn’t be viewed as the enemy. Good comms research plays a critical role in shaping brilliant creative work.