In Part 1 of this topic (Testing the Overall Product/Service Concept), we looked at the very first stage of testing a product or service concept quantitatively: Testing its most rudimentary parameters, and gauging interest in that basic product or service among its target audience.
So let’s assume that you’ve tested the various new cornflakes flavors your client has been considering, via an online survey administered to 1,000 male and female adult consumers across the United States who currently eat cereal at least twice a week. According to your research findings, there is a definite interest in peanut butter-flavored and – surprise! – bacon-flavored cornflakes among these consumers; however, they express very little interest in the chocolate cornflake cereal that was being considered by your client. You’ve presented these findings to the client, and they’ve subsequently stated their willingness to consider manufacturing and marketing both the peanut butter and bacon flavors, given the high interest in both.
It’s time to descend from that bird’s-eye view of your client’s new product: to move from focusing on the overall concept to homing in on the various attributes comprising the “skeleton” of the product/service. These “attributes” are the characteristics that give the nebulous overall concept its “shape” and nuance. A specific combination of product attributes serves as the “fingerprint” of the concept, making your client’s product/service unique.
The specific types of attributes on which you’ll focus in this phase of iterative research tend to be category-specific. For example, there is a general list of attributes that you’ll tend to ask respondents to evaluate if your client is launching a new prescription drug (e.g., efficacy; side effects; adverse events; half-life; onset of action; formulation, etc.), and a different list that typically comes into play when considering a television show (e.g., program length; filmed using multi-camera versus single-camera; featured actors and actresses; genre; time period represented; continuing “serial” storylines versus independent episodes, etc.). Using our example of a new peanut butter- and/or bacon-flavored cereal, attributes on which you may ask respondents to weigh in will likely include:
As you can see, it’s possible to develop lists of multiple, diverse attributes for even the most basic products. For more complex products/services (e.g., automobiles; hotels or vacation destinations), you can easily create attribute lists of 50 items or more. Bear in mind as well that, depending upon the product information your client already has available, there may be specific attributes that they ask you to include.
As mentioned in Part 1, it’s advisable to steer clear of yes/no questions for these attribute evaluations. Instead, ask respondents to rate the importance of each attribute to their purchase decision, and then ask them to rank the attributes in order of importance, from least to most important. If you’re using an online or CATI (computer-assisted telephone interview) survey to conduct this research, it’s especially important that your programming team includes an instruction to randomly rotate the order of each attribute list from respondent to respondent; this programming strategy helps prevent respondent fatigue and/or order bias from affecting your findings. Follow up key areas with open-ended questions that ask respondents for the reasons behind their ratings/rankings; these open-ended responses add depth and “texture” to your data. Open ends also serve to break up seemingly endless pages of a single task, such as rating scales.
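To make the randomization instruction concrete, here is a minimal sketch of per-respondent attribute rotation. The attribute names and the respondent-ID seeding scheme are illustrative assumptions, not part of the original study design.

```python
import random

# Hypothetical attribute list for the cereal study; names are illustrative only.
ATTRIBUTES = [
    "Flavor intensity",
    "Sweetness level",
    "Crunchiness / texture",
    "Price per box",
    "Package size",
    "Nutritional content",
]

def attribute_order_for(respondent_id: int) -> list:
    """Return this respondent's randomized presentation order.

    Seeding with the respondent ID keeps the order reproducible, so the
    same respondent sees the same order if they resume the survey later.
    """
    rng = random.Random(respondent_id)
    order = ATTRIBUTES[:]  # copy; never shuffle the master list in place
    rng.shuffle(order)
    return order

# Different respondents generally see different orders, mitigating order bias.
print(attribute_order_for(101))
print(attribute_order_for(102))
```

In practice your survey platform will have its own randomization option; the point is simply that each respondent gets an independent, reproducible ordering of the same underlying list.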
In addition to breaking up any monotony posed by repetitive tasks with a couple of well-placed open-ended questions, you can also use multivariate techniques, such as Max-Diff, as an alternative way to approach a plethora of attributes that need to be ranked. Max-Diff arrives at essentially the same “end-point” as a straight ranking exercise, but it approaches the task a bit differently; it is likely a novelty to most respondents, so it may hold their attention longer.
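To illustrate how Max-Diff reaches a ranking, here is a sketch of the simplest scoring approach: count-based best-minus-worst scores. The tasks and attribute names are hypothetical, and real Max-Diff studies typically use model-based (e.g., hierarchical Bayes) estimation rather than raw counts.

```python
from collections import defaultdict

# Each Max-Diff task shows a subset of attributes; the respondent marks
# one "best" (most important) and one "worst" (least important).
# Illustrative responses; attribute names are hypothetical.
tasks = [
    {"shown": ["Flavor", "Price", "Texture", "Packaging"],
     "best": "Flavor", "worst": "Packaging"},
    {"shown": ["Price", "Nutrition", "Flavor", "Texture"],
     "best": "Price", "worst": "Texture"},
    {"shown": ["Nutrition", "Packaging", "Flavor", "Price"],
     "best": "Flavor", "worst": "Packaging"},
]

def maxdiff_count_scores(tasks):
    """Best-minus-worst count for each attribute, normalized by times shown."""
    best, worst, shown = defaultdict(int), defaultdict(int), defaultdict(int)
    for t in tasks:
        for attr in t["shown"]:
            shown[attr] += 1
        best[t["best"]] += 1
        worst[t["worst"]] += 1
    return {a: (best[a] - worst[a]) / shown[a] for a in shown}

scores = maxdiff_count_scores(tasks)
for attr, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{attr:10s} {s:+.2f}")
```

Sorting the scores yields the same kind of importance ordering a direct ranking exercise would produce, which is why the two approaches share an “end-point.”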
Sometimes, your client will want to get a sense of respondents’ perceptions of combinations of attributes. This happens frequently in pharmaceutical research, where there may be a direct correlation between a drug’s efficacy and the type or severity of its side effects, for example. However, it is possible to evaluate perceptions of specific combinations of attributes in virtually any market sector. If it’s necessary to consider a number of specific bundles of attributes, you’ll most likely need to incorporate a multivariate component into your study, such as a Discrete Choice exercise. Although this type of exercise drives up the complexity, cost, and timeline of the project, it can yield invaluable insights that would be virtually impossible to glean in any other manner.
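The mechanics of bundling attributes can be sketched quickly. Below is a toy example of building product profiles (attribute bundles) for a Discrete Choice exercise; the attribute levels are hypothetical, and a real study would use an efficient experimental design rather than a raw full factorial.

```python
import itertools
import random

# Hypothetical attribute levels for a discrete-choice exercise.
LEVELS = {
    "flavor": ["peanut butter", "bacon"],
    "price":  ["$3.49", "$4.29", "$4.99"],
    "size":   ["12 oz", "18 oz"],
}

# Full-factorial design: every combination of levels is one product profile.
profiles = [dict(zip(LEVELS, combo))
            for combo in itertools.product(*LEVELS.values())]
print(len(profiles))  # 2 flavors * 3 prices * 2 sizes = 12 profiles

# A choice task shows a small set of profiles; the respondent picks one.
rng = random.Random(7)  # fixed seed for a reproducible illustration
choice_task = rng.sample(profiles, 3)
for p in choice_task:
    print(p)
```

With more attributes the full factorial explodes combinatorially, which is one reason Discrete Choice studies require careful design work and drive up project complexity.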
All of the aforementioned tasks – list randomization, Max-Diff, and Discrete Choice – require planning and upfront communication with your programming team, to ensure that the proper resources for your project are available within the relevant timeframe. You’ll also need to verify that your client’s budget can accommodate these multivariate techniques.
If the client revises the product attribute list significantly after reviewing the findings from this phase of research, you may need to repeat the study using the revised list of attributes. However, once you are confident that the attribute list is final, and that the client understands the pros and cons of the specific attribute profile that has been selected, you will move into the next phases of pre-launch Concept Testing. You will likely segue into Message Testing – in which you design research to drill down through a laundry list of marketing content to uncover the most compelling promotional messages for your client’s product – and then glide into Package Testing – in which various packaging types, labeling, colors, and logos are evaluated. Pricing Sensitivity research may also be suggested at this juncture. But that’s a topic for another post!