1. General remarks about content coding
1.1. Traditions of sociology, conversation analysis, descriptive/functional linguistics. Relatively rare in psychology (but, if done well and appropriate to the research question, perfectly accepted).
1.2. Language is a social behavior, so content coding is a method of observing and classifying behavior. In that sense it requires intersubjective criteria: an explicit category system and checks for reliability.
1.3. Content coding is both qualitative and quantitative. It is qualitative during the original classification but becomes quantitative once we count, form ratios, differences, etc. of the numbers of instances that fall into certain categories.
2. Collecting the explanations
2.1. What trigger?
2.1.1. Spontaneous explanations in continuous narratives or conversations. Must “extract” explanations and explained events (on the basis of marker words such as “because,” “since,” “why” and syntactic structures such as “(in order) to,” “so that”).
2.1.2. Elicited explanations as answers to why-questions (by experimenter or conversation partner). The events explained could either be held constant across participants (everybody explains “last time somebody stood you up…”) or kept idiosyncratic. In the latter case, the eliciting instruction could be “Recall the last time you puzzled over something somebody else/you did, said, felt, thought, etc. Why did the person/you?” Or the participant could be invited to first tell a story (e.g., a conflict narrative or some nonroutine things they did the past week) and then the experimenter selects mentioned behavioral events and asks the participant to explain them.
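A first pass at extracting spontaneous explanations (2.1.1) can be scripted before human review. The sketch below is a minimal, hypothetical Python helper; the marker list and the sentence splitter are assumptions to be adapted to the corpus, and every hit still needs a human codability decision.

```python
import re

# Hypothetical marker list (adapt to the corpus); "why" is omitted here
# because it usually marks the question, not the explanation itself.
CAUSAL_MARKERS = re.compile(r"\b(because|since|so that|in order to)\b",
                            re.IGNORECASE)

def find_explanation_candidates(text):
    """Return sentences containing a causal marker word.

    These are only *candidates*: each hit still requires a human
    decision on codability (e.g., to weed out claim backings).
    """
    sentences = re.split(r"(?<=[.?!])\s+", text.strip())
    return [s for s in sentences if CAUSAL_MARKERS.search(s)]
```

Note that markers overgenerate (e.g., temporal "since"), so this can only narrow the search, never replace the coders.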
2.2. What audience?
2.2.1. Private explanations are answers to one’s own why-questions (in diaries, internal speech, etc.)
2.2.2. Public explanations are answers to someone else’s why questions or spontaneous explanations given for the benefit of an audience or conversation partner.
2.3. What events?
2.3.1. Select events that are truly ascribed to a person — behaviors, experiences, etc. These typically come in the structure of noun + “psychological” verb.
2.3.2. Decide whether to include plural agents (we, they) or not. If they are to be excluded, also exclude interactive relationships among individuals (e.g., “their relationship got worse and worse”).
2.3.3. Decide how to treat stable dispositions among explained events. Some are trends, that is, repeated instances of intentional actions (“she goes shopping every week”; “he sits in front of the TV all the time”) that are explainable, but will almost guarantee a CHR explanation (if it’s an endorsed, intentional trend) or cause explanation (if it’s an unwelcome trend, such as “I never get my reviews in on time”). Trends that include more than one person are complicated and may need to be excluded (e.g., “I don’t really have many conflicts.”)
2.3.4. The explained events can be coded with the B.Ev coding scheme, if that is of interest.
3. Transcribing spoken explanations
3.1. When explanations are uttered in conversation or vis-à-vis an audience, they should be transcribed before coding (because they are normally too complex to be coded directly from tape). Transcriptions should be detailed enough that all words are captured. A decision has to be made whether “uhs” and “ums” (and perhaps pauses) are transcribed. They may be of interest psychologically (e.g., to classify hesitance vs. confidence).
3.2. Sometimes a word or passage is difficult to understand for the transcriber. If so, a second person should listen to the passage and try to identify the word(s). Words that remain incomprehensible are marked with “(unidentifiable)” or “(best guess).”
4. Steps of Coding
4.1. We first “unitize” the given text (especially in the case of continuous narratives) into separable units, which usually have the structure “agent + behavioral/mental verb.” If this unitizing step is nontrivial, it has to be performed by two coders and checked for reliability. A method that has worked well is to use clear separators (//, **) or the color highlighting tool to indicate which clauses are units.
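When separators such as // are used to mark unit boundaries, the split and a rough agreement check can be scripted. This is a hypothetical sketch; the agreement index below (shared units over the mean unit count per coder) is a crude stand-in for a proper reliability statistic, not the lab's procedure.

```python
def units(marked_text, sep="//"):
    """Split one coder's marked-up text into units at the separator."""
    return [u.strip() for u in marked_text.split(sep) if u.strip()]

def unitizing_agreement(units_a, units_b):
    """Crude agreement index: number of units identical across the two
    coders, divided by the mean number of units per coder."""
    shared = len(set(units_a) & set(units_b))
    return shared / ((len(units_a) + len(units_b)) / 2)
```

A stricter comparison would align unit boundaries by character offset rather than by exact string match.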
4.2. Of these separated units, some are codable, some are not. In some texts, this decision of codability is performed after the decision of which text passages constitute units. In more structured texts (e.g., answers to why-questions), the unitizing and the decision of codability can be performed in the same step.
4.3. One clause can have separate codable units, such as “She was friendly and competent.” However, (near-)synonymous expressions are not coded separately (e.g., “She was nice, like, really friendly”).
4.4. If the B.Ev system is used, units are codable that refer to behavioral events of one agent (with certain exclusion criteria as decided by the researchers; see above). If the F.Ex system is used, actual explanations of actual behavioral events are codable — which excludes explanations of nonpsychological events (physical processes) and statements that use the word “because” but are not explanations.
4.4.1. The primary nonexplanation that is used with the word because is the so-called claim backing — justification or evidence for one’s prediction or claim (not a causal explanation of that claim). For example, “It was 4pm because I was looking at my watch right then”; or “He must have been doing alright because the teacher didn’t say anything to him.” If the statement isn’t clearly recognizable as a claim backing, a good test is to turn the typical order of behavior+explanation around and see whether explanation+behavior makes any sense. It does not for claim backings (e.g., “Because the teacher didn’t say anything to him, he must have been doing alright.”).
4.4.2. Sometimes a behavior is explained by a few clauses while additional clauses provide no explanatory content — they may repeat the behavior or stray from the main topic and are classified as uncodable.
4.5. After settling disagreements among coders, the codable units are subjected to the next coding steps.
4.6. Usually, we next code for perspective (actor/observer), plural, perhaps social desirability.
4.7. A decision has to be made about how to handle the intentionality of the behavior explained. One can either leave the intentionality decision up to the F.Ex coder (the distinction between 1__ and all the other codes), or one can insert a separate coding step. (In the beginning, such a step can help reduce the confusion over the numerous coding categories.)
4.8. Finally, F.Ex codes are assigned.
5. The coding table
5.1. A very convenient format of coding is to create a table in which clauses are separated into rows and those clauses that are codable receive various codes placed in separate columns.
5.2. The raw (transcribed) text file can be processed in MSWord such that the table falls out easily. (a) After each separable clause, insert a hard line break. (b) After the entire text or a relevant section has been broken up in this way, run a Replace All of ^p [the line break] with ^t^t^t^t^t^p [5 tabs and the line break]. The number of tabs depends on the number of columns needed for coding (e.g., codable, perspective, plural, intentionality, F.Ex). (c) Finally, select the entire text so marked and use Table > Convert > Text to Table to turn it into a real table. Later on, columns can always be added or removed with the Table drop-down menu.
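The same table can of course be generated outside MSWord. A minimal Python sketch, assuming one clause per input item and the five coding columns named in (b):

```python
import csv
import io

# Hypothetical column names; adjust to the coding steps actually used.
CODING_COLUMNS = ["codable", "perspective", "plural", "intentionality", "F.Ex"]

def coding_table(clauses, columns=CODING_COLUMNS):
    """Return a tab-separated table: one clause per row followed by
    empty coding columns (the equivalent of the ^t^t^t^t^t^p trick)."""
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter="\t", lineterminator="\n")
    writer.writerow(["clause"] + list(columns))
    for clause in clauses:
        writer.writerow([clause] + [""] * len(columns))
    return buf.getvalue()
```

The resulting TSV opens directly in EXCEL or imports into SPSS.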
5.3. Typically, there will be multiple versions of this table — the original one with empty coding columns; ones with both coders’ classifications of the relevant coding category (e.g., codability); and, once disagreements have been settled, tables with only one “final” column per coding category.
6. F.Ex coding
6.1. Brief history and purpose
6.2. Modes of explanation (see separate F.Ex coding scheme document)
6.3. Types within each mode (see separate F.Ex coding scheme document)
6.4. The general path diagram (see separate F.Ex coding scheme document)
6.5. Practice sets 1 and 2
6.6. Complex practice set
6.7. Special practice sets: Reasons and causal histories, beliefs and desires, marked and unmarked beliefs
7. Calculating reliability
7.1. To compute reliability we use the table that contains both coders' classifications. The two coders’ codes become variables in a data file (e.g., by turning MSWord Table columns into EXCEL columns). One can then use SPSS to analyze the data columns (CROSSTABS has inter-rater reliability measures) or run them in EXCEL itself (by first creating a Pivot-Table report and then running my kappa file over it).
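For those working outside SPSS or EXCEL, Cohen’s kappa for two coders is simple to compute directly. A self-contained sketch, assuming the two code lists are aligned unit by unit:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders' nominal codes on the same units.

    kappa = (p_observed - p_expected) / (1 - p_expected),
    where p_expected comes from the coders' marginal frequencies.
    """
    if len(codes_a) != len(codes_b):
        raise ValueError("coders must rate the same units")
    n = len(codes_a)
    p_obs = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_exp = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)  # undefined if p_exp == 1
```

This reproduces what SPSS CROSSTABS or a Pivot-Table-plus-kappa-file computation yields for a two-coder table.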
7.2. We typically compute reliability for the following parameters: Unitizing, Codability, Intentionality (if separately coded) or All modes, Rea-CHR, Reason type (desire, belief, valuing), Mental State Markers, Reason Content (person, situation, interactions), CHR/cause type (person, situation, interactions).
8. Statistical Analysis
8.1. From the coding table one can easily create a data file (e.g., turning the MSWord table into an EXCEL table, which can be read into SPSS or similar programs). The data file can be merged with other variables collected on the participants.
8.2. Depending on the type of data (multiple behaviors per person, between- or within-factors of perspective or condition...), various SPSS program files are available to produce the final explanation parameters from the raw F.Ex codes (soon to be posted on the web). Typically we analyze Reasons-CHR, belief-desire, belief markers, All-PS, CHR-PS, Cau-PS, All traits, CHR-traits, Cau-traits. Our program files make sure that the explanation parameters are independent, so that, for example, the reason-CHR comparison is orthogonal to the belief-desire comparison.
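To illustrate the independence point, here is a hypothetical sketch of two such parameters computed from one explainer’s raw category counts. The category labels and formulas are illustrative only, not the lab’s actual SPSS program files.

```python
def explanation_parameters(counts):
    """Two illustrative, mutually orthogonal parameters from raw counts.

    `counts` maps F.Ex categories (here 'belief', 'desire', 'valuing',
    'chr'; hypothetical labels) to frequencies for one explainer.
    """
    reasons = (counts.get("belief", 0) + counts.get("desire", 0)
               + counts.get("valuing", 0))
    return {
        # reason vs. CHR: total reasons against causal histories
        "reason_chr": reasons - counts.get("chr", 0),
        # belief vs. desire: a contrast *within* reasons, hence
        # orthogonal to the reason-CHR comparison above
        "belief_desire": counts.get("belief", 0) - counts.get("desire", 0),
    }
```

Because the second parameter contrasts categories that the first parameter only ever sums, the two comparisons do not confound each other.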
8.3. The parameter pair reason-CHR can be treated as a difference score (i.e., a within-subjects factor) in an Anova or as a pair of correlated dependent measures in a Manova. The same option holds for all PS analyses. For the other parameters, the difference score conceptualization makes more sense (because there really is a choice between, say, using a marker for a reason or not).
8.4. The data can be analyzed with behavior as unit of analysis, with explainer as units of analysis, or with both in a multi-level model. We typically focus on per-explainer analyses and form reliable per-behavior scores across the multiple behaviors that each explainer explained. Here we treat the raw data as counts (a continuous variable), inviting an Anova approach. With behaviors as units of analysis, a loglinear approach (numbers of explanation types as frequencies) is suitable as well.
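Forming the per-explainer count scores can be sketched as a simple aggregation over coded units. The record format assumed here, (explainer_id, F.Ex code) pairs, is hypothetical:

```python
from collections import defaultdict

def per_explainer_counts(coded_units):
    """Aggregate (explainer_id, fex_code) pairs into per-explainer
    count vectors, the unit of analysis for the ANOVA approach."""
    table = defaultdict(lambda: defaultdict(int))
    for explainer, code in coded_units:
        table[explainer][code] += 1
    return {e: dict(codes) for e, codes in table.items()}
```

For the per-behavior (loglinear) approach, the same pairs would instead be cross-tabulated as frequencies without collapsing across behaviors.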
8.5. As in all statistical analyses, care should be taken in graphing all data to detect outliers, unusual values, and potential data entry errors.
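Graphing can be complemented with a quick numeric screen. A minimal sketch that flags values far from the mean (the 3-SD cutoff is an arbitrary assumption and only behaves sensibly for larger samples):

```python
import statistics

def flag_outliers(values, z_cut=3.0):
    """Return values more than z_cut sample standard deviations from
    the mean; a rough screen, not a substitute for inspecting plots."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    if sd == 0:
        return []
    return [v for v in values if abs(v - mean) / sd > z_cut]
```

Flagged values should be traced back to the coding table to distinguish genuine extreme responders from data entry errors.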
9. Interpretation of explanation parameters
9.1. Reason explanations: More often used in actor explanations than in observer explanations. More often used when explaining a prototypical agent’s behavior (an individual or jointly acting group rather than an aggregate group). Normally indicates that observers attempt to take the agent’s perspective and attempt charitable portrayals of the agent (if observers are instructed to make the actor look good, they increase their number of reason explanations). From the actor perspective, reasons increase when the actor tries to appear rational.
Causal history of reason (CHR) explanations: Used when reasons are unknown, obvious, or would be unflattering. CHR explanations also downplay intentionality, control, and “agency” and therefore increase in explanations of negative behavior (from both the actor and the observer perspective). Also used when explaining behavior trends that would otherwise require a long list of reasons to make sense of the trend (trends can be across multiple behaviors of one agent — “why I go shopping many times per week” — or multiple agents within an aggregate group — “why most women voted for Clinton in the 1996 election”).
9.2. Desire reasons: Easiest among reasons to infer or guess because they are culturally available. They portray the agent as “wanting” — i.e., currently lacking something. How unflattering the desire reason is depends on the social desirability of the desired end (contrast “She wanted to help her mom…” with “She wanted to have a fur”). When used with mental state markers, desire reasons can make the agent appear self-centered. When explaining mildly negative actions, explainers can use desire reasons to emphasize the honorable goal they had (even though the action may have led to a negative outcome), as in “I stole from the store because I didn’t want to look like a coward in front of my friends.”
Belief reasons: More idiosyncratic and less “visible” in behavior, hence more difficult to infer from the observer perspective. As a result, they are more often used by actors than by observers and by close, intimate observers than by stranger observers. They are used to highlight the agent’s deliberation and rationality and can weaken blame for more significantly negative actions.
9.3. Mental state markers. In the case of beliefs, markers highlight that the reason is the agent’s reason but not necessarily endorsed by the explainer (e.g., “Why were they kicking the bum?—They thought it was fun”). Actors, too, can distance themselves from their own past beliefs by using a marker (“I watered the orchids because I thought they were dry”). When observers were not present at the time of the action, they are more likely to use marked belief reasons because they didn’t share the agent’s reality with respect to the action. In the case of desires, the use of markers seems to serve a “parsimony” function — using the fewest words to explain a behavior (e.g., “Why did he drink a lot last night?—To feel better”). It also emphasizes the agent’s self-centeredness (“she wanted this, she wanted that”). Finally, it can imply the obviousness of the reason (e.g., “Why did she go to the butcher?”—“Well, to buy some meat”).
9.4. Traits in person cause and person CHR explanations. Rare, especially among actors, but tend to be more often used by intimates than by strangers (if you know someone well you can subsume a highly typical behavior under the agent’s personality). When strangers use traits at all they frequently derogate the agent (e.g., “Why didn’t she invite you to the party?—She’s just mean”).
9.5. Situation vs. person causes/CHRs. We don’t find any systematic differences in any of our data. Classically, situation causes have been described as used more by actors than by observers (but there is no evidence for this hypothesis); they have also been described as serving exonerating purposes (little to no evidence for this hypothesis either).