1. Is a more general explanation always a causal history rather than a reason? Example: “He is knowledgeable” as compared to “He knows that immigrants have difficulties”?
Yes, that would typically be correct. An actual reason must have a specific content – being knowledgeable doesn’t.
2. Should knowledge and lack of knowledge (or any other explanation that can be described in a positive or negative manner) be coded the same way, or could they have different codes?
Definitely can have different codes. “He went to Severance Hall because he thought that this is where the Psychology Department is.” vs. “He used Hwy 12 because he didn’t know there was construction going on.” It’s not about the words themselves but about the meaning and function of the words.
3. Is everything that denotes the agent thinking wrongly / lacking competence causal history?
Not necessarily. A false belief can be a reason (e.g., she looked for the keys in the closet because she thought they were there [but actually she had them in her pocket]). Even something undesirable can be a reason (He jumped at his throat because he wanted to kill him). As mentioned above, lack of knowledge – the actual absence of relevant beliefs – is often a CHR (I went left because I didn’t know it was a dead-end street), but it can be a reason when the agent becomes aware of the lack (I bought a book about WWI because I [realized that I] didn’t really know why it all started).
4. If it can be assumed that the explainer sees the agent from an outside (more critical, distanced) perspective, would that favor CHR codes?
Yes, sometimes explainers take a dismissive “psychoanalytic” position (She just wants some attention), which typically counts as a CHR explanation. But you can obviously be critical and still be ascribing undesirable reasons, especially desires (She wanted to hurt him).
5. Is there a general rule for how to distinguish valuings from beliefs or opinions or attitudes as explanations? Is a value used to describe a general characteristic of the agent automatically a valuing? Or perhaps a causal history?
We must not mistake values for valuings. Values are abstract principles/ideals coded as CHRs (honesty, loyalty, …). Valuings are much more concrete states of liking/disliking something, enjoying something, missing someone. Attitudes are also typically CHRs. Opinions are typically beliefs, but deeply held convictions are like attitudes and therefore often CHRs.
6. Do you have any rules of thumb for working out whether or not an explanation actually went through the agent's head and informed their action – that is, how does one make the call as to CHR vs. reason explanations? A tricky example is 'we're good friends': this is given in the F.Ex guidelines as an example of a CHR but at least twice in the practice files was coded as a 452.
“Being friends” can be either one, depending on the action and the context. “She invited him for lunch because they're friends (and because she hadn’t seen him in a while)” suggests a CHR (and a reason for the second explanation) because people normally don't think “We are friends, I should invite her for lunch.” But consider “She decided to lend her the money because they're friends,” where it's likely that being friends was actually a consideration on her mind. You see here that the coders have to “simulate” and make a subjective judgment about the likelihood that the agent actually had the explanatory content on her mind or not, given what you know about action, context, and what people generally think or don't think. Our coder judgments were validated in a couple of studies (Malle, 1999; Malle et al., 2000), but of course any given explanation can have some slippage.
There is another issue. When an explanation stands by itself, and it could be coded either as a reason or a CHR, we often pick the reason code (because reasons have the higher base rate). When the explainer offers more than one explanation, it sometimes gets easier to pick apart those that are merely background (CHR) and those that are the reasons for which the agent acted. In spoken language, the CHRs tend to come first as the setup of the entire explanation.
7. In O’Laughlin & Malle (2002) and Malle et al. (2000) you considered DVs such as CHR, Beliefs, and Unmarked reasons to be continuous. However, two colleagues speculated that I might need to consider the DVs to be ordinal (and use some kind of rank-ordered model or probit).
The variables are more like metric because they represent the number of explanations per relevant unit -- number of CHRs per intentional behaviors explained, number of beliefs per behaviors explained by reasons, etc. We can’t be certain whether the distances between 1 and 2 and between 2 and 3 are psychologically equal, but it’s a good working hypothesis. The variables also have a true 0 point, because, say, no CHR explanations per intentional behavior explained is a meaningful value.
8. Are percentages of CHR and percentages of reasons better than absolute numbers?
Percentages are not well distributed and make one of the two scores (reasons or CHRs) redundant, so I don’t recommend using them.
9. How do the various explanation features (CHRs, reasons, beliefs, etc.) get analyzed statistically?
In O’Laughlin and Malle (2002) we propose that CHRs and reasons can be treated as correlated measures in a MANOVA because they can in principle increase or decrease independently (though in reality they are negatively correlated). This MANOVA approach identifies the unique contributions that CHRs and reasons make to the total effect (e.g., a group difference). Nowadays we often adopt the simpler approach of treating CHRs and reasons as levels of a Rea-CHR within-subject factor – measuring how many more reasons than CHRs explainers offer.
The same situation holds for beliefs and desires — they can be treated as correlated measures but also as levels of a Bel-Des within-subject factor. Belief markers (or mental state markers in general) can only be analyzed as levels of a within-subject factor because a reason can only be marked or unmarked, nothing else.
When we analyze all explanation parameters simultaneously (e.g., in our actor-observer paper, in preparation 2006), we tend to form within-subject factors (CHR-Rea, Bel-Des, Marked, etc.) to conduct uniform analyses.
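The difference-score logic behind a within-subject factor can be sketched in a few lines. This is a minimal illustration, not the authors' actual analysis pipeline; the data, the explainer labels, and the variable names are all hypothetical.

```python
# Sketch of forming a Rea-CHR difference score per explainer.
# Each tuple is (explainer, explanation mode); data are hypothetical.
from collections import Counter

data = [
    ("p1", "reason"), ("p1", "reason"), ("p1", "chr"),
    ("p2", "reason"), ("p2", "chr"),
]

counts = Counter(data)                      # (explainer, mode) -> count
n_int = Counter(expl for expl, _ in data)   # intentional behaviors explained

# Rea-CHR: reasons minus CHRs, each expressed per intentional behavior
# explained by that explainer (positive = more reasons than CHRs).
rea_chr = {
    expl: (counts[(expl, "reason")] - counts[(expl, "chr")]) / n
    for expl, n in n_int.items()
}
```

The same pattern would apply to a Bel-Des factor, with belief and desire reasons in place of reasons and CHRs.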
10. Are the various explanation features (CHR-reason, belief-desires, etc.) independent?
Yes, we always compute the features as orthogonal: Rea-CHR for intentional behaviors, Bel-Des for behaviors explained by reasons, MarkedBel for behaviors explained by belief reasons. In more detail, we calculate them as below:
Int = #intentional behaviors explained (i.e., behaviors explained by reasons or CHRs [there are so few enabling factor (EF) explanations that we usually skip instances in which someone explains a behavior with EFs])
CHR = #CHRs per intentional behaviors explained (only excludes behaviors that were not explained by either reasons or CHRs, hence unintentional behaviors)
REA = …
The two parameters, CHR and REA, are set to missing value (not 0) for all explanations of unintentional behaviors.
(If one is interested:
CHR-P = #person CHRs per behaviors explained by CHRs
CHR-S = ...)
These two parameters are set to missing value for all explanations that are not CHRs.
B = #belief reasons per behavior explained by reasons
D = ...
These two parameters are set to missing value for all explanations that are not reasons.
MarkB = marked beliefs per behavior explained by belief reasons
UnmB = ...
These two parameters are set to missing value for all explanations that are not belief reasons.
This approach creates very different Ns for each of the explanation parameters, and the smallest Ns for MarkB and UnmB.
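The recipe above can be sketched as follows. This is a hedged illustration, assuming each coded explanation is represented as a dict with hypothetical keys ("type", "reason_kind", "marked"); it computes Int, CHR, REA, B, and MarkB, and the remaining parameters (D, UnmB, CHR-P, CHR-S) would follow the same pattern. A rate is NaN when its denominator is zero, mirroring the "missing value, not 0" rule.

```python
import math

def explanation_parameters(explanations):
    """Compute explanation parameters from a list of coded explanations.

    Each explanation is a dict with a "type" key ("reason", "chr", or "ef");
    reasons additionally carry "reason_kind" ("belief"/"desire") and a
    boolean "marked". These keys are hypothetical, not the F.Ex format.
    """
    n_int = sum(1 for e in explanations if e["type"] in ("reason", "chr"))
    n_chr = sum(1 for e in explanations if e["type"] == "chr")
    n_rea = sum(1 for e in explanations if e["type"] == "reason")
    n_bel = sum(1 for e in explanations
                if e["type"] == "reason" and e["reason_kind"] == "belief")
    n_mark = sum(1 for e in explanations
                 if e["type"] == "reason" and e["reason_kind"] == "belief"
                 and e["marked"])

    def rate(num, den):
        # Missing value (NaN), not 0, when the denominator is empty.
        return num / den if den else math.nan

    return {
        "Int": n_int,              # intentional behaviors explained
        "CHR": rate(n_chr, n_int), # CHRs per intentional behavior explained
        "REA": rate(n_rea, n_int), # reasons per intentional behavior explained
        "B": rate(n_bel, n_rea),   # belief reasons per reason explanation
        "MarkB": rate(n_mark, n_bel),  # marked beliefs per belief reason
    }
```

Because each denominator is a subset of the previous one, the Ns shrink from Int down to MarkB, which is the point made above about very different Ns across parameters.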