MaxDiff is a quantitative technique that many researchers have heard of but few understand in depth. In this article, we explain how it works, describe its uses and applications in survey-based market research, and offer recommendations for applying and implementing it well.
WHAT IS MAXDIFF AND WHO IS THE CREATOR OF THIS TECHNIQUE?
MaxDiff, also known as "Best/Worst Scaling," is a variant of the classic single-choice, multiple-choice, or ranking question that we have all seen in marketing questionnaires. For example, a respondent might be asked: "From the list of brands shown below, select the one that best meets your needs," followed by a list of brands to choose from. Or: "Which of the following attributes do you consider most important when deciding to buy a smartphone?", followed by a list of attributes such as price, quality, battery life, warranty, durability, and design.
The problem identified by Professor Jordan Louviere, a marketing professor at the University of Sydney, Australia, and creator of the MaxDiff technique in the 1990s, is that buyers' decision-making is more complex than simply selecting an item from a closed list. Louviere argued that the human mind makes complex comparisons among attributes such as price, brand, quality, convenience, warranty, and value, discarding options until only a final pair or triplet remains, so that the final decision is made between just 2 or 3 elements.
For example, a customer evaluating cars might compare Toyota against Ford, reasoning that Toyota offers better long-term quality while Ford has a better price, and choose Toyota for its quality. However, when comparing Toyota against Hyundai, the same customer might discard Toyota for its higher fuel consumption and choose Hyundai for being more economical while offering similar quality. This "one against one" and "one against all" style of comparison gave rise to the MaxDiff technique.
With the advancement of personal computing in the 1990s, applying the technique became feasible, initially in small focus groups; from the year 2000 onward it became a standard technique administered through online and offline questionnaires. Today, MaxDiff is offered by online survey software tools such as RotatorSurvey, QuestionPro, Qualtrics, SurveyMonkey, and SurveyGizmo, among others, and is part of the service portfolio of thousands of market research companies.
IMPLEMENTATION OF THE MAXDIFF TECHNIQUE
To study this technique in detail, we will evaluate the attributes that customers value most when buying a new car. Suppose we have a closed list of attributes provided by the client of the study (TOYOTA OF MEXICO) and validated in several focus groups conducted by our company. These attributes could include, for example, price, quality, warranty, resale value, fuel consumption, and design.
We then create combinations of these attributes in groups of 2 or 3 and ask the respondent to indicate which is the most important and which is the least important for the purchase decision. For example:
"When buying a new car, from the 3 attributes shown below, indicate which is the most important to you and which is the least important":
The respondent might indicate that the most important attribute is "Warranty" and the least important is "Resale value."
Then we ask:
"When buying a new car, from the 3 attributes shown below, indicate which is the most important to you and which is the least important":
The respondent might select "Price" as the most important and "Warranty" as the least important.
In this way, the attributes are presented on the computer screen in groups of 3 while the respondent selects the best and worst of those shown. Internally, the MaxDiff tool assigns +1 to the attribute selected as most important and -1 to the one selected as least important, while unselected attributes receive a score of zero. This calculation gives the technique its name: MaxDiff highlights the attributes that are most different, either because they accumulate many positive scores or because they accumulate many negative ones.
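To make this scoring concrete, here is a minimal sketch that tallies the +1/-1 counts for a couple of hypothetical tasks. The `tasks` data structure, the attribute names, and the responses are illustrative assumptions, not the output or API of any particular survey tool.

```python
from collections import defaultdict

# Hypothetical responses: each task records the attributes shown,
# the one picked as most important ("best") and least important ("worst").
tasks = [
    {"shown": ["Warranty", "Resale value", "Fuel consumption"],
     "best": "Warranty", "worst": "Resale value"},
    {"shown": ["Price", "Warranty", "Design"],
     "best": "Price", "worst": "Warranty"},
]

scores = defaultdict(int)   # +1 for "best", -1 for "worst", 0 otherwise
counts = defaultdict(int)   # how many times each attribute was shown

for task in tasks:
    for attr in task["shown"]:
        counts[attr] += 1
    scores[task["best"]] += 1
    scores[task["worst"]] -= 1

# Attributes with the largest positive or negative totals are the
# "most different" ones that give the technique its name.
for attr in sorted(counts, key=lambda a: scores[a], reverse=True):
    print(f"{attr}: score={scores[attr]}, times shown={counts[attr]}")
```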
PLANNING AND CONFIGURATION OF MAXDIFF
During the MaxDiff planning process, several parameters must be defined that are critical to the quality and reliability of the results (a design-generation sketch follows the list):
1. Number of attributes to evaluate: Empirical evidence suggests the list of attributes or items should be neither too long nor too short, as long lists require large samples to cover all the necessary combinations.
2. Number of sets or "tasks" per respondent: The longer the attribute list, the more combinations must be presented, which can cause mental and visual fatigue among respondents and thus reduce the reliability of the results.
3. Number of elements per set: If each set shown contains too many attributes, it will cause mental and visual fatigue for the respondent; the recommended number is between 2 and 4 elements per set.
4. Frequency of appearance of each attribute: Showing the same attribute too many times can cause confusion and cognitive fatigue; empirical experience recommends 3 to 4 appearances at most.
5. Sample size: It must be large enough for the results to be reliable and projectable to the universe under study; recommended samples range from about 200 respondents for a few attributes to more than 1,000 respondents for many attributes.
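As a rough illustration of how these parameters interact, the sketch below generates a randomized design for a hypothetical attribute list, keeping the number of appearances per attribute roughly balanced. Commercial tools use formally optimized experimental designs; the function name and parameter choices here are assumptions for illustration only.

```python
import random

def build_maxdiff_design(attributes, items_per_set=3, num_sets=8, seed=7):
    """Illustrative randomized design: each set draws the attributes that have
    appeared least often so far, keeping appearance counts balanced.
    This is a sketch, not a formally optimized experimental design."""
    rng = random.Random(seed)
    appearances = {a: 0 for a in attributes}
    design = []
    for _ in range(num_sets):
        # Sort by how often each attribute has been shown, breaking ties randomly.
        ranked = sorted(attributes, key=lambda a: (appearances[a], rng.random()))
        chosen = ranked[:items_per_set]
        for a in chosen:
            appearances[a] += 1
        design.append(chosen)
    return design

# Hypothetical attribute list: 8 sets of 3 items over 6 attributes
# gives 4 appearances per attribute, within the recommended 3-4 range.
attributes = ["Price", "Quality", "Warranty", "Resale value",
              "Fuel consumption", "Design"]
for i, task in enumerate(build_maxdiff_design(attributes), start=1):
    print(f"Task {i}: {task}")
```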
ANALYSIS OF MAXDIFF RESULTS
Once the data has been collected, we proceed to analyze it. Although each software tool offers different analysis options, the basic analysis common to all of them is the percentage obtained by dividing each attribute's net score (the sum of its +1 and -1 scores) by the number of times it was shown, known as the MaxDiff percentage. Plotting this percentage produces a horizontal bar chart with positive and negative values: the most positive bars correspond to the most important attributes and the most negative to the least important. From these figures, a ranking can be built that will be of great value to the client of the study.
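A minimal sketch of this count-based analysis is shown below. It reuses the hypothetical `tasks` structure from the scoring example, computes the MaxDiff percentage per attribute, and prints a ranking ready to be plotted as a horizontal bar chart; it implements only the simple counting approach described in this article.

```python
# Hypothetical collected responses (same structure as in the scoring sketch).
tasks = [
    {"shown": ["Warranty", "Resale value", "Fuel consumption"],
     "best": "Warranty", "worst": "Resale value"},
    {"shown": ["Price", "Warranty", "Design"],
     "best": "Price", "worst": "Warranty"},
]

def maxdiff_percentages(tasks):
    """MaxDiff percentage per attribute: (best picks - worst picks) divided by
    times shown, expressed as a percentage. Positive values mark the most
    important attributes, negative values the least important."""
    best, worst, shown = {}, {}, {}
    for task in tasks:
        for attr in task["shown"]:
            shown[attr] = shown.get(attr, 0) + 1
        best[task["best"]] = best.get(task["best"], 0) + 1
        worst[task["worst"]] = worst.get(task["worst"], 0) + 1
    return {attr: 100.0 * (best.get(attr, 0) - worst.get(attr, 0)) / shown[attr]
            for attr in shown}

# Rank attributes from most to least important.
for attr, pct in sorted(maxdiff_percentages(tasks).items(),
                        key=lambda kv: kv[1], reverse=True):
    print(f"{attr:>16}: {pct:+.1f}%")
```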
Videos 1 and 2 provide a more in-depth discussion of the technique.