Brand tracking studies, whether run continuously or periodically, represent one of the largest research investments a company makes. With such a large investment, and with outcomes that shape significant business decisions, it is critical that companies maximise the insight their research delivers, insight that creates competitive advantage and brand growth.
Based on my experience as both an insight manager and a consultant designing brand studies across different industries, here are seven ways I have learned to improve the insight and reliability of your brand studies without driving up costs.
Focus on Source of Growth
A number of years ago I managed a brand tracker that had been in place for almost ten years. Things were good and our tracking results showed this. Then things went bad. Strangely, our results improved. At the heart of the problem was that our tracker focused on our communications target audience rather than the market, and our measures did not adequately capture incidence and usage. Because our results did not reflect the market, when the market moved we could not provide insights to change our direction. Our results were also weighting light and medium buyers as heavily as frequent buyers. When a new competitor entered, our frequent buyers were leaving, yet our measured brand equity was improving.
Not only should your sample frame reflect the market in which you compete, your study should also include the measures needed to determine where your market growth is coming from and how any changes could impact your brand. Depending on your category and business, your revenue model or path to purchase (brand funnel) will give you strong guidance on which measures to include.
Design with brand performance insight in mind.
Benchmark Against the Trend
When looking into a mirror, if the only thing you have to compare yourself to is yourself, you will develop a distorted view of yourself and what you are capable of achieving. The same happens with brand research. To develop a healthy view of your brand and its potential you need to benchmark yourself against competitors. By comparing across brands we dramatically increase the strategic insight we get from our research, shifting our focus from individual brand dynamics to market-level dynamics.
To get the best from benchmarking, don't just compare or rank raw scores against other brands. This is depressing for small brands and can breed complacency in larger ones. Your benchmarks should take into account some context measure to allow fair comparisons. With national statistics we use 'per capita' and '% of GDP' to compare small and large countries; we should use the same approach in brand studies.
An approach that is effective, requires no data modification, and directly shows the relationship between a dependent measure (our focus) and an independent measure (our context) is to plot the context factor against the performance measure and compare brands to the trend. The scatter plot below shows this approach with brand awareness and consideration. To understand how well our brand's consideration is performing we need to control for its level of awareness, since awareness is generally needed before a person can consider a brand. If we were one of the brands sitting along the curve, we could see we have no brand-specific issue, and by using the link between awareness and consideration we could set realistic objectives for increasing our brand awareness. If, however, we were the Ascendency Brand, whose consideration is above the benchmark for its awareness, we would need to know why before investing further in awareness-building communications. For the Legacy Brand we have the opposite problem: we need to understand why our awareness is not converting to consideration. This position is common for once-strong brands that are in decline or have been superseded by newer offers.
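The trend comparison above can be sketched in a few lines: fit the awareness-to-consideration relationship across competitor brands, then look at each brand's residual from the trend. The brand names and scores below are illustrative, and a simple straight-line fit stands in for whatever curve best fits your data.

```python
import numpy as np

# Illustrative data: (awareness %, consideration %) per brand.
brands = {
    "Brand A": (90, 50),
    "Brand B": (80, 44),
    "Brand C": (70, 38),
    "Brand D": (60, 33),
    "Brand E": (50, 27),
    "Brand F": (40, 22),
    "Ascendency": (35, 32),  # consideration outperforms its awareness
    "Legacy": (85, 28),      # awareness not converting to consideration
}

awareness = np.array([v[0] for v in brands.values()], dtype=float)
consideration = np.array([v[1] for v in brands.values()], dtype=float)

# Fit the market trend (a straight line here; use a curve if it fits better).
slope, intercept = np.polyfit(awareness, consideration, 1)

# A brand's residual is its consideration minus what the trend predicts
# for its awareness: positive = above trend, negative = below trend.
residuals = consideration - (slope * awareness + intercept)

for name, resid in sorted(zip(brands, residuals), key=lambda t: -t[1]):
    print(f"{name:12s} {resid:+.1f}")
```

A brand near zero has no brand-specific issue; a strongly negative residual, like the Legacy Brand's, says awareness is not converting, so further awareness spend is unlikely to help.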
A good place to start for intelligent benchmarking is your brand funnel and path to purchase measures.
Learn from your competitors; use intelligent benchmarking.
Mobile Devices Are Small; Be Brief
Do you like reading detailed emails on your phone? Like you, survey participants dislike long and detailed tasks on their phones. With more than 20% of participants now completing surveys on a phone, long surveys drive down value by increasing recruitment costs and reducing quality. In a recent study we conducted, the average interview length for smartphone users was twice that of PC users. Smartphone users were also more likely to put the survey aside and come back to it later, if at all.
Smartphones also offer new opportunities for research, giving us the ability to reach people we previously could not and to measure behaviour that is time- and place-dependent.
We need to get out of the mindset that survey length means depth and more insight. Instead of trying to cram all your questions into one survey, use a modular design and take advantage of smartphones' expanding capabilities.
Be focused, be brief; improve insight through engagement.
Go Modular

When I recently audited a brand tracker it was like doing an archaeological dig. Measurements of fashionable ideas past and objectives long abandoned stared hollowly back at me, many unreported and unloved. We needed to add new measures, but the survey was already approaching 40 minutes. Because the old questions were woven through the survey, removing them was likely to affect several core measures.
Some things in your market will change less often than others; some will matter only once. Likewise, the issues you need to address will shift depending on where you are in your strategy development cycle. A modular design has a core set of questions followed by a section that is rotated or changed each survey. Keeping core and module elements separate gives you the flexibility to respond to new issues without affecting key measures.
Modular surveys not only reduce average survey length, they also increase the potential coverage of the study, allowing the inclusion of topics such as media usage, consumption experience, product development, and issue awareness. When creating a modular design, be very clear on what is core and what is modular; otherwise module elements will creep into the core survey and the module section can become too small to be of value.
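One way to picture a modular design is as a fixed core plus one rotating module chosen by wave. The block and module names below are purely illustrative.

```python
# Illustrative modular survey plan: the same core every wave,
# plus one module rotated in per wave.
CORE = ["brand_awareness", "consideration", "usage", "key_image_statements"]
MODULES = ["media_usage", "consumption_experience",
           "product_development", "issue_awareness"]

def questionnaire(wave):
    """Return the question blocks fielded in the given wave."""
    return CORE + [MODULES[wave % len(MODULES)]]

for wave in range(3):
    print(wave, questionnaire(wave)[-1])
```

Because the core never changes, trend lines stay intact; each module still returns on a predictable cycle, so modular topics can be trended too, just at a lower frequency.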
Go modular; improve insight flexibility.
Grids Kill; Kill the Grid
Once a great research innovation, grids are now overused, and their excessive length encourages flat-line responses that can render 25% or more of your responses useless. Not only do large grids reduce your effective sample size, they also produce results that barely change or vary between brands. A flat-line response is when a person simply ticks down a single column; some even answer in zig-zag patterns. When a grid covers more than one brand, such as in brand image matrix questions, survey length increases dramatically: twenty-five statements across five brands is 125 separate judgements you are asking a person to make. Mobile phones also struggle to display large grid questions.
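Flat-line and zig-zag responses are straightforward to screen for: a straight-liner's answers have zero variance across the grid, and a zig-zag alternates direction with a constant step. A minimal sketch on made-up 1-5 scale grid responses:

```python
import numpy as np

# Each row = one respondent's answers to a 10-statement grid (1-5 scale).
responses = np.array([
    [3, 4, 2, 5, 3, 4, 2, 3, 4, 3],   # varied, engaged answers
    [4, 4, 4, 4, 4, 4, 4, 4, 4, 4],   # straight-liner: one column throughout
    [1, 5, 1, 5, 1, 5, 1, 5, 1, 5],   # zig-zag pattern
    [2, 3, 4, 3, 2, 4, 5, 3, 2, 4],
])

# Straight-liners: answers never vary (zero standard deviation).
flat = responses.std(axis=1) == 0

# Zig-zags: consecutive differences flip sign every time, constant magnitude.
diffs = np.diff(responses, axis=1)
zigzag = ((np.abs(diffs) == np.abs(diffs[:, :1])).all(axis=1)
          & (np.sign(diffs[:, :-1]) == -np.sign(diffs[:, 1:])).all(axis=1)
          & (diffs[:, 0] != 0))

suspect = flat | zigzag
print(f"{suspect.sum()} of {len(responses)} respondents flagged")  # 2 of 4
```

Flagged respondents can be reviewed or excluded before analysis, which is how a large grid quietly shrinks your effective sample size.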
Rather than rely on grid questions it is better to focus on the critical image statements that discriminate between brands and measure these as separate questions or using a format that encourages individual responses.
Kill the grid; improve your insight.
Track Performance, Not Random Noise
Random sampling allows us to generalise; however, sample composition can vary from survey to survey in ways that directly impact results and lead to false conclusions. Customers are more likely to notice advertising from companies they buy from, more likely to agree with positive brand image statements, and more knowledgeable about your offer. Swings in the proportion of customers in the sample flow directly into your total market results, leading to incorrect estimates of campaign impact. The swings you think you are seeing are no more than changes in sample composition.
If you track results month to month this is bad enough, but if you run less often, such as post-campaign, the results can lead to disastrous brand decisions, such as pulling spend out of a campaign's second burst.
Using stratified sampling, where you quota or weight your sample to reflect the market and hold it consistent across studies with periodic reviews, greatly reduces the chance of drawing false conclusions about your performance. The key is to base it only on the few structural factors that directly impact results. Studies with many interlocking quotas significantly increase cost and complexity while delivering marginal benefit, if any, over a simpler approach.
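The effect of holding sample structure constant can be shown with two numbers. Suppose customers recall your advertising at 60% and non-customers at 20%, the market is 30% customers, but this wave's sample came back 45% customers. All figures are illustrative:

```python
# Known market structure, held constant across waves (e.g. from sales data).
market_share = {"customer": 0.30, "non_customer": 0.70}

# This wave's achieved sample over-represents customers.
sample_counts = {"customer": 450, "non_customer": 550}
recall_rate = {"customer": 0.60, "non_customer": 0.20}

n = sum(sample_counts.values())

# Unweighted estimate moves with whoever happened to be sampled.
unweighted = sum(sample_counts[g] * recall_rate[g] for g in sample_counts) / n

# Weight each group back to its fixed market proportion.
weights = {g: market_share[g] / (sample_counts[g] / n) for g in sample_counts}
weighted = sum(sample_counts[g] * weights[g] * recall_rate[g]
               for g in sample_counts) / n

print(f"unweighted recall: {unweighted:.0%}")  # 38%: inflated by extra customers
print(f"weighted recall:   {weighted:.0%}")    # 32%: the market-level figure
```

The six-point gap here is pure sample composition, exactly the kind of swing that gets misread as campaign performance.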
Reduce random noise; improve insight reliability.
Make the Story about Cause and Effect
The story, if clearly communicated, is often what we remember from our research. Many studies have a clear objective stating why the research is being done; the story is built into the objectives. For campaign evaluation studies, the link between what we did and what happened in the marketplace is clearly stated in the study's objectives. For brand tracking research, the objectives are more abstract, and after running the research for months or years we can easily drift away from the key reason for doing it and end up with presentations that grope for meaning. At every stage, from design and implementation through analysis and reporting, the researcher needs a clear understanding of what is happening in the market.
Part of the cause-and-effect story often involves multiple data sources. Some supply the cause, such as media spend, seasonal change, or policy and legislative change; others capture the effect the brand research is used to understand, such as sales, changes in distribution, donations, or account openings. Keep in mind that not all causes are planned or driven by the business.
Make the link; uncover insights that drive change.
The seven ways to improve your brand tracking listed above provide just an overview of some of the ways to get value from the significant investment made in tracking. Brand tracking studies ultimately need to reflect the markets in which brands operate, and how brands compete in those markets, to provide actionable insight.