Measuring Consumer Preferences: Grids or MaxDiff?
When it comes to measuring consumer preferences, there are a variety of research methods available. From conjoint analysis to simple survey questions, it can be difficult to know which option is right for your research objectives. To help, we've outlined guidelines for several of the most commonly used methods of measuring consumer preferences, particularly the recurring question of whether to use MaxDiff or grids.
Methods of Measuring Consumer Preferences
1. Ranking Questions
Possibly the most straightforward means of measuring consumer preferences, ranking questions ask respondents to place a set of items (e.g., features, benefits, etc.) in order relative to a specific metric (e.g., importance, appeal, etc.). Preference scores are then derived by aggregating the individual rankings and sorting the items by the percentage of respondents who placed each one at each rank.
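To illustrate the aggregation step, here is a minimal Python sketch (the item names and rankings are hypothetical) that tallies how often each item lands at each rank and reports the share of respondents who ranked it first:

```python
from collections import Counter

# Hypothetical raw data: each respondent's ranking, most important item first.
rankings = [
    ["Price", "Quality", "Brand", "Packaging"],
    ["Quality", "Price", "Packaging", "Brand"],
    ["Price", "Brand", "Quality", "Packaging"],
]

# Count how often each item lands at each rank position (1 = highest).
position_counts = {}
for ranking in rankings:
    for position, item in enumerate(ranking, start=1):
        position_counts.setdefault(item, Counter())[position] += 1

# Report the share of respondents who ranked each item first.
n_respondents = len(rankings)
for item, counts in sorted(position_counts.items(),
                           key=lambda kv: kv[1][1], reverse=True):
    share_first = counts[1] / n_respondents
    print(f"{item}: ranked #1 by {share_first:.0%} of respondents")
```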
Ranking questions are useful in that they force respondents to compare and decide between items in the context of a specific attribute. As a result, they tend to produce differentiated results, since multiple items can't be ranked in the same position.
However, ranking questions don't provide a means to understand the relative difference in opinion between items (e.g., how much more important the top-ranked item is than the second-ranked item). Further, the more items you add to the list, the more difficult and less effective the exercise becomes, so it's best used with a list of 5-7 items.
2. Forced Choice
In a forced choice question, respondents must select the single item from a list that best answers the question being asked, such as their favorite item, similar to a single-select question. Consumer preferences are then measured by aggregating the individual responses and sorting the items by how often each was selected.
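As a simple illustration of that tally, the sketch below (using hypothetical feature names) counts each respondent's single selection and sorts items by how often they were chosen:

```python
from collections import Counter

# Hypothetical single selections, one per respondent.
choices = ["Feature A", "Feature C", "Feature A", "Feature B", "Feature A"]

selection_counts = Counter(choices)
total = len(choices)

# Sort items by how often they were chosen as the favorite.
for item, count in selection_counts.most_common():
    print(f"{item}: selected by {count / total:.0%} of respondents")
```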
Forced choice questions avoid scale-use bias by requiring respondents to select a single option, but they can also force a decision between items when there is actually no real difference of opinion. Providing an alternative answer option (e.g., no preference) can help, though the resulting data won't be as valuable.
3. Rating Scales
Rating scales can be administered either as single-select questions or in a grid format. Respondents react to a list of items and score each one on a metric. This is similar to ranking, except that positions aren't exclusive: multiple items can receive the same score. The scores are then used to measure consumer preferences by aggregating individual ratings into the percentage of respondents who gave each item a particular rating.
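One common way to summarize such ratings is a top-two-box score, i.e., the share of respondents who rate an item a 4 or 5 on a 5-point scale. The sketch below, using hypothetical claims and made-up ratings, shows the basic calculation; the specific cutoff is an assumption, not a fixed rule:

```python
# Hypothetical grid responses: each respondent rates every item on a 1-5 scale.
ratings = {
    "Claim A": [5, 4, 5, 3, 4],
    "Claim B": [3, 3, 2, 4, 3],
    "Claim C": [5, 5, 4, 5, 4],
}

def top_two_box(scores, cutoff=4):
    """Share of respondents rating the item at or above the cutoff."""
    return sum(score >= cutoff for score in scores) / len(scores)

for item, scores in sorted(ratings.items(),
                           key=lambda kv: top_two_box(kv[1]), reverse=True):
    mean = sum(scores) / len(scores)
    print(f"{item}: top-two-box {top_two_box(scores):.0%}, mean rating {mean:.1f}")
```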
The benefits of this kind of method are numerous:
- It’s easily understood by respondents since scale questions are commonly used in a variety of scenarios
- Respondents can answer the questions quickly (especially in a grid format), which opens up time to fit other questions into the questionnaire
- It’s good for items that are likely to be similar to each other because it doesn’t force artificial differences
- It’s respondent friendly, as it allows a range of answer options
While the benefits are many, there are some drawbacks to using rating scales. For example, respondents in different countries or with different levels of survey-taking experience may use the scale differently. We'll cover additional considerations for grid rating scales specifically when we compare them to the MaxDiff method below.
4. MaxDiff
MaxDiff, or best-worst scaling, shows respondents a series of randomized sets of items and asks them to select the "best" and "worst" item in each set. MaxDiff uses software to generate the randomized sets for each respondent before the survey is fielded. This design allows the researcher to derive preference scores from the raw choices respondents provide.
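The exact scoring approach depends on the tool and analysis plan (more formal approaches, such as hierarchical Bayes estimation, go beyond this sketch), but a simple count-based score, best picks minus worst picks divided by the number of times an item was shown, illustrates the idea. The items and tasks below are hypothetical:

```python
from collections import defaultdict

# Hypothetical MaxDiff tasks: each task shows a subset of items and records
# which item the respondent picked as best and which as worst.
tasks = [
    {"shown": ["A", "B", "C", "D"], "best": "A", "worst": "D"},
    {"shown": ["B", "C", "E", "A"], "best": "A", "worst": "C"},
    {"shown": ["D", "E", "B", "C"], "best": "B", "worst": "D"},
]

counts = defaultdict(lambda: {"best": 0, "worst": 0, "shown": 0})
for task in tasks:
    for item in task["shown"]:
        counts[item]["shown"] += 1
    counts[task["best"]]["best"] += 1
    counts[task["worst"]]["worst"] += 1

# Simple best-minus-worst score, normalized by how often each item appeared.
def score(c):
    return (c["best"] - c["worst"]) / c["shown"]

for item, c in sorted(counts.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{item}: score {score(c):+.2f}")
```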
MaxDiff avoids scale-use bias because respondents make repeated choices rather than using a rating scale. It also capitalizes on the fact that respondents find it easier to identify extremes, like best and worst, than to differentiate between the items in the middle. The drawbacks of this method are that it
- Requires creating a balanced experimental design so that all items are compared fairly
- Requires different methods of analyzing results depending on the usage scenario
When to Use Grids vs. MaxDiff
Many market researchers debate the question of grids versus MaxDiff and claim that MaxDiff is better. We find that, depending on the situation, either can be used. However, there are some important considerations when choosing between grids and MaxDiff.
Grids allow you to analyze items on multiple metrics rather than just one. For example, when evaluating claims you may want to analyze multiple metrics, such as believability, uniqueness, and fit with the brand, in order to better understand the results. Further, grids don't force artificial differences in preference. A respondent may view an entire list of claims as unappealing, and in a grid study they can rate them all as such; in a MaxDiff study, they are forced to pick a most and a least appealing option, which artificially inflates differences in the data. Similarly, a respondent could like every option yet still be forced to rate one of them as the worst. So while MaxDiff provides a convenient ranking of items, it isn't always reflective of actual respondent opinions.
Considerations of Using Grids
Before jumping the gun and using grids for everything, keep a few considerations in mind. Use grids when there are claims, names, logos, reasons-to-believe, and similar items to compare. Using this method with larger images or concepts is not advised, since they don't convert easily to mobile formats. Further, more complex answer options are difficult for respondents to assess, as most spend only 3-4 seconds evaluating each item.
Manage the number of grid questions used per study. Grid questions should be used sparingly because they are taxing for survey respondents and can lead to survey fatigue and bad data. Some research literature indicates that grid questions are among the hardest question types to process. Question types that are difficult for respondents are likely to result in early fatigue, decreased motivation, and lower data quality.
But when used properly, grid questions can be valuable sources for understanding consumer perceptions. Interested in being a part of these industry discussions? You could join the conversation as a researcher, engineer, or product and marketing professional. Check out our careers page and you’ll also see some of the perks we offer here at GutCheck.