|[Priming experiments](#priming-experiments)| Estimate effect of prime on behavior/attitudes (typical) | Use prime as diagnostic to infer knowledge/beliefs (rarer) | Confusing the effect of the prime with the effect of the thing being primed. For example thinking you are finding the effects of exposure to violence by reminding people about past exposure. |
|[Conjoints](#conjoints)| Estimate effect of feature on choices, given a distribution of other fixed features (rare?) | Make inferences about preferences, classification rules, or ideal points (typical?) | Confusing the effects of a controlled change in question wording with the effects of intervening on the thing itself. For example thinking you are finding the effects of regime type on willingness to go to war or a candidate's gender on their vote share. |
| [List experiments](#list-experiments) | Estimate effect of list length or content on response patterns (rare) | Infer prevalence of sensitive beliefs/behaviors (typical) | Using an experiment for a descriptive quantity might mean accepting too much error in order to reduce bias. |
: Summary of different uses for survey experiments {#tbl-survey-experiments}
I use question marks in the conjoints row because I am not sure what some of these designs are trying to do.
The rest of this note just unpacks these ideas, many of which are also developed in @blair2023research (see for example the discussion of the [Conjoint design](https://book.declaredesign.org/library/experimental-descriptive.html#sec-ch17s3)).
In the many cases in which the goal is to measure preferences, interpretations, or classification rules, conjoint experiments may be best thought of as focused on descriptive inference and using causal inference to make those descriptive inferences.
For example, in @hartmann2024trading, we use a conjoint to measure policy preferences. We combine the conjoint results with a choice model to estimate ideal points. Although we use the language of effects a good deal, we are really trying to measure something, and we use the conjoint to make those inferences.
Another example: say a bank uses a rule to decide whether to give loans or not. You want to figure out the rule. You do so using a conjoint to assess which profiles are more likely to get loans given different attributes. The estimand of interest is not a set of causal effects, it is a rule. But you try to figure it out by seeing whether notional features "affect" the classification.
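To make this concrete, here is a minimal simulation sketch (the rule, attribute names, and thresholds are all made up): attributes that enter the rule show large differences in approval rates across randomized profiles, while attributes outside the rule show none.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical rule (unknown to the analyst): approve iff income > 50 and debt < 30.
income = rng.uniform(0, 100, n)  # randomized profile attribute
debt = rng.uniform(0, 60, n)     # randomized profile attribute
age = rng.uniform(20, 70, n)     # randomized, but not part of the rule
approved = ((income > 50) & (debt < 30)).astype(float)

# Crude "effect" of each attribute: difference in approval rates
# above vs. below the attribute's midpoint.
for name, x in [("income", income), ("debt", debt), ("age", age)]:
    mid = (x.min() + x.max()) / 2
    diff = approved[x > mid].mean() - approved[x <= mid].mean()
    print(f"{name}: {diff:+.2f}")  # roughly: income +0.50, debt -0.50, age 0.00
```

The "effects" here are just a device for reverse-engineering the rule: the descriptive target is the rule itself.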
Two implications from recognizing that the goal here is in fact descriptive inference:
* Opportunity. You might find out that a more effective strategy would be to figure out the rule from archival sources, such as regulations or instructions to staff. Maybe it is measurable, in which case measure it.
* Risk. You might fall into the trap of thinking the relation between feature values and outcomes corresponds to the causal effects of changing the feature (or confuse the direct/controlled effect within the experimental regime with the average effect). This is a little trickier, but to think through a simple example: Say in truth we have $X_1 \rightarrow X_2 \rightarrow Y$, and $X_1$ affects $Y$ via $X_2$ but not conditional on $X_2$. Then a conjoint might pick up that $X_1$ is not part of the classification rule for $Y$ and $X_2$ is. But it would be wrong to infer from this that actually changing $X_1$ will not affect classifications (since it might via changes in $X_2$). The problem here is confusing "how the rule determines outcomes given features" with "the effect of changing features, given the rule."
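The $X_1 \rightarrow X_2 \rightarrow Y$ point can be seen in a small simulation sketch (a hypothetical setup in which $Y$ depends only on $X_2$, and outside the experiment $X_2$ is fully determined by $X_1$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Conjoint world: X1 and X2 randomized independently; the rule is Y = X2.
x1 = rng.integers(0, 2, n)
x2 = rng.integers(0, 2, n)
y = x2.astype(float)

# Controlled "effect" of X1 holding X2 fixed (what the conjoint recovers):
controlled = np.mean([
    y[(x1 == 1) & (x2 == v)].mean() - y[(x1 == 0) & (x2 == v)].mean()
    for v in (0, 1)
])

# Outside the experiment, X2 responds to X1 (here X2 = X1 exactly),
# so intervening on X1 does move Y.
x1_new = rng.integers(0, 2, n)
y_new = x1_new.astype(float)  # Y = X2 = X1 under the intervention
total = y_new[x1_new == 1].mean() - y_new[x1_new == 0].mean()

print(controlled)  # 0.0: X1 "does nothing" within the conjoint
print(total)       # 1.0: but actually changing X1 changes outcomes
```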
I think when @schwarz2022have talk about learning about discrimination they are focused on uncovering preferences in this way; but the language of describing "the average effect of *being* a woman" (emphasis added) seems to suggest an interest in the effect of the attribute itself.
### Conjoints for causal inference
Even still, conjoints can also be used when the primary target is a causal estimand. Say you really are interested in whether the presence of a given feature on a list of features makes it more likely that an outcome will be selected from the list.
You might have an application where people are electing candidates and know nothing about the candidates other than what they get in a flyer. You want to know how a given feature of the flyer affects the choice, perhaps conditional on all other features. Then you are pretty close to the conjoint. You have to worry about external validity (is there too much control and all that) but these are common worries for any experiment.
The risk above remains: the effect you are getting is the effect of the attribute on the list, not the average (total) effect of the attribute itself on the outcomes. For example you might find that a powerful candidate does well *given* different values of corruption (even for different distributions of corruption), but this does not give you the effect of power itself, since, after all, power corrupts.
I think this is close to the sort of setting @bansak2023using have in mind (though, maybe not: they do use the language of "the effect of a change in an attribute on a candidate’s or party’s expected vote share" which could be confused for the effect of an intervention on the attribute itself rather than an intervention on a listed feature within a list of features).
## List experiments
List experiments might also be done for either reason, but the typical use is for descriptive inference.
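As a reminder of the mechanics (a sketch with made-up numbers): the standard difference-in-means estimator targets the prevalence of the sensitive item, a descriptive quantity, with randomization of the list content doing the inferential work.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
true_prevalence = 0.30  # hypothetical quantity we want to recover

# Respondents report only the *count* of items they endorse.
baseline = rng.binomial(3, 0.5, n)               # 3 innocuous items
sensitive = rng.binomial(1, true_prevalence, n)  # the sensitive item
treated = rng.integers(0, 2, n)                  # half see the longer list
count = baseline + treated * sensitive

# Difference in mean counts estimates the prevalence of the sensitive item.
estimate = count[treated == 1].mean() - count[treated == 0].mean()
print(round(estimate, 2))  # close to 0.30
```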