Introduction to Partial Profile CBC
In a previous post I discussed different ways of handling large numbers of attributes in choice-based conjoint (CBC) experiments (Chrzan 2025). In addition to adaptive CBC, the post discussed three flavors of partial profile CBC:
- Implicit partial profile: we show each respondent a different subset of the attributes in each of their choice sets
- Explicit partial profile: we show each respondent all the attributes, but in each question, a different subset varies from profile to profile while the rest of the attributes completely overlap (confusingly and non-specifically, this is also called an "overlap" design)
- Bespoke CBC: in each CBC question we show only those attributes the respondent identifies as important (Peitz and Lerner 2021)
Research Motivation and New Method Discovery
Because I've had more clients asking for bespoke CBC lately, I want to experiment with it a little to make sure I can justify its use. Later this year we'll be fielding an empirical study, but in the meantime I can put some artificial respondents to work doing some testing. We program the artificial respondents to make choices using utilities from human respondents, and we add human levels of response error to those choices.
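To make that data-generation step concrete, here is a minimal sketch assuming we already have a respondent's total utility for each profile in a question. The function name and error scale are illustrative assumptions of mine, not Lighthouse Studio's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_choice(profile_utilities, error_scale=1.0):
    """Return the index of the chosen profile: true utility plus Gumbel error.

    Adding i.i.d. Gumbel errors to deterministic utilities and taking the
    maximum is exactly the data-generating process behind the logit model.
    """
    errors = rng.gumbel(loc=0.0, scale=error_scale, size=len(profile_utilities))
    return int(np.argmax(profile_utilities + errors))

# Example: a forced choice among three profiles
chosen = simulate_choice(np.array([1.2, 0.4, 0.9]))
```

Dialing `error_scale` up or down is one simple way to match the noise in the artificial choices to human levels of response error.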
As I was doing this, though, one more type of partial profile conjoint came to mind: one where we randomly select the subset of attributes to show in each respondent's CBC questions. In other applications we rely on the "magic" of HB analysis to fill in the utilities for the missing items or attributes, so I wanted to see how well it would work in this case.
Methodology: Artificial Data Study Design
For this artificial data study, I use utilities from 1,000 human respondents who answered a commercial CBC with 20 attributes. The humans were highly engaged with the survey and paid attention to more of the attributes than I would have expected: the average respondent attended to 15.3 of the 20 attributes, meaning attribute non-attendance (ANA) was only (20 − 15.3)/20 = 23.5%. I divided the respondents into a "test" group of 600 respondents and a holdout sample of 400 respondents.
Three CBC Approaches
To test the utilities recovered from, and the predictive accuracy of, my 600 test respondents, I had each of them complete three CBC experiments using the data generation functionality of our Lighthouse Studio software.
1. Standard Full-Profile CBC
The first experiment was a standard full-profile CBC in which each respondent received 12 questions, each posing a forced choice among 3 profiles, with every profile containing all 20 attributes.
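A toy sketch of one such full-profile task, assuming a profile's utility is the sum of the respondent's part-worths for the 20 levels shown (the part-worths and design here are random placeholders, not the commercial study's):

```python
import numpy as np

rng = np.random.default_rng(7)
n_alts, n_attrs, n_levels = 3, 20, 3

# Placeholder part-worths: one utility per attribute level for this respondent
partworths = rng.normal(size=(n_attrs, n_levels))

# One full-profile task: every alternative shows a level of all 20 attributes
task = rng.integers(0, n_levels, size=(n_alts, n_attrs))

# Profile utility = sum of the shown levels' part-worths; choose with Gumbel error
totals = np.array([partworths[np.arange(n_attrs), alt].sum() for alt in task])
chosen = int(np.argmax(totals + rng.gumbel(size=n_alts)))
```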
2. Bespoke CBC Method
The second experiment was a bespoke CBC, also with 12 questions per respondent, in which we showed each respondent only the 10 attributes most important to that respondent (even though we knew respondents attended to an average of about 15 of the attributes).
3. Random Attribute Selection
Finally, in the third experiment each respondent again received 12 questions, but this time saw a randomly selected 10 attributes in all of their questions. Both attribute-selection rules are sketched below.
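Here is a sketch of the two per-respondent selection rules, assuming `partworths` is a list of each attribute's level utilities and defining importance as the range of an attribute's part-worths. Both the importance definition and the names are my assumptions, not necessarily what the study used:

```python
import numpy as np

rng = np.random.default_rng(1)

def bespoke_attributes(partworths, n_show=10):
    """The 10 attributes with the largest utility range for this respondent."""
    importance = np.array([u.max() - u.min() for u in partworths])
    return np.sort(np.argsort(importance)[-n_show:])

def random_attributes(n_attrs=20, n_show=10):
    """One random draw of 10 attributes, shown in all of a respondent's questions."""
    return np.sort(rng.choice(n_attrs, size=n_show, replace=False))
```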
Results: Utility Correlation Analysis
After generating the response data as described above, we ran HB-MNL analyses to generate utilities for all three experiments. Comparing these to the known utilities of the 600 respondents, we can see that all three methods do well. Correlations of each with the known utilities were:
- Full-profile: 0.98
- Bespoke: 0.96
- Random 10 attributes: 0.96
While I wasn't surprised that we could recover all 20 known utilities at a 0.96 correlation from the bespoke questionnaire, I was very surprised we could do the same using 10 randomly chosen attributes per respondent.
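One plausible way to compute such figures, assuming per-respondent correlations averaged across the sample (the post doesn't specify pooled versus per-respondent correlations), is:

```python
import numpy as np

def mean_utility_correlation(estimated, known):
    """Average per-respondent Pearson correlation between two
    (n_respondents x n_parameters) utility arrays."""
    corrs = [np.corrcoef(e, k)[0, 1] for e, k in zip(estimated, known)]
    return float(np.mean(corrs))
```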
Predictive Validity Testing Results
We also generated responses for the 400 holdout respondents, again using their known utilities and Gumbel errors. We can use these to test the out-of-sample predictive validity of the three CBC models. Here, in terms of the mean absolute error (MAE) of prediction, the full-profile CBC did a little better than the bespoke CBC, and so did the partial profile with 10 random attributes per respondent:
- Full profile: 6.6 percentage points
- Bespoke: 10.1 percentage points
- Random 10 attributes: 7.2 percentage points
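A sketch of this validity check, assuming predicted shares come from a logit simulation of each holdout task and actual shares are the holdout respondents' observed choice frequencies; the array names are mine:

```python
import numpy as np

def logit_shares(profile_utilities):
    """Predicted choice shares for one task via the logit rule."""
    expu = np.exp(profile_utilities - profile_utilities.max())  # stable softmax
    return expu / expu.sum()

def mae_points(predicted, actual):
    """Mean absolute error in percentage points; both arrays are
    (n_tasks x n_alternatives) choice shares as proportions."""
    return 100.0 * np.mean(np.abs(np.asarray(predicted) - np.asarray(actual)))
```

With HB utilities, a common approach is to compute each respondent's logit shares for a task and average them across respondents before taking the MAE.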
Discussion and Future Research Implications
Note that I didn't do any tuning of the utilities to optimize prediction; these figures reflect the share simulations from the raw utilities. I suspect tuning would improve both the full-profile and bespoke MAEs. Also, human respondents are more prone to cognitive overload and fatigue, which I believe gave bespoke CBC its edge in Peitz and Lerner's study. In addition, I have some reason to believe that the random 10 attributes method won't pan out as well among human respondents, but it's an unexpected finding that I'll want to test when I move on to the empirical study.
And that wraps up today's episode of "Fun with Robots."
References
Chrzan, K. (2025). "Conjoint methods for large attribute sets: Adding bespoke CBC to the list." LinkedIn. https://www.linkedin.com/pulse/conjoint-methods-large-attribute-sets-adding-bespoke-cbc-keith-chrzan-7ez8f/
Peitz, M. and A. Lerner (2021). "Bespoke Choice-Based Conjoint: When to Use & Why." Paper presented at the Turbo Choice Modeling Event.