Saturday, October 25, 2014

Key Points: Chapter 6 Measurement

I’m going to talk about two key facets of measurement: 1) conceptualization and operationalization, and 2) validity and reliability. I cannot stress enough how important it is that these concepts are clear in your heads.

One question had to do with the difference between conceptualization and a conceptual definition. Conceptualization is the process of developing or “fleshing out” a theoretical construct by giving it a working definition. The book uses the example of “prejudice” (108-9). The discussion is thorough so I won’t repeat it here, but it may help to think of conceptualization as a process that culminates in a conceptual definition. This “process,” incidentally, doesn’t mean settling on a haphazard personal definition; it means doing one’s homework by consulting multiple sources to produce an informed conceptual definition of a construct that other people can clearly understand.

In turn, the conceptual definition informs the next step of the process, operationalization. When operationalizing, we determine an appropriate method for measuring our original construct. Depending on how specific our conceptual definition is and what we want to know, a researcher could use a survey, field observation, personal interviews, or any number of other methods to measure a construct.
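If it helps to see this in a more concrete form, here is a rough sketch in Python of what the move from conceptual definition to operational indicators might look like. The construct is the book’s prejudice example, but the definition wording and the survey items below are ones I invented for illustration, not the text’s:

```python
# Toy illustration (not from the textbook): operationalizing "prejudice"
# by translating a conceptual definition into concrete, measurable survey items.

conceptual_definition = (
    "Prejudice: a negative prejudgment of a group and its individual members."
)

# Operationalization: specific indicators of the construct, each tapping a
# different facet. All items are hypothetical and answered on a 1-5 scale.
operational_indicators = {
    "attitude": "I would be uncomfortable having a member of group X as a neighbor.",
    "belief":   "Members of group X are less trustworthy than other people.",
    "behavior": "In the past year, I avoided interacting with members of group X.",
}

likert_scale = {1: "strongly disagree", 2: "disagree", 3: "neutral",
                4: "agree", 5: "strongly agree"}

if __name__ == "__main__":
    print(conceptual_definition)
    for facet, item in operational_indicators.items():
        print(f"  [{facet}] {item} (answered on a 1-5 agreement scale)")
```

The point of the sketch is simply that the conceptual definition comes first, and the measurable items are derived from it, not the other way around.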

Another question had to do with the difference between conceptual and empirical hypotheses. A conceptual hypothesis is a researcher’s surmise that a relationship exists between variables, whereas an empirical hypothesis is a definite, testable claim about how the variables are related or influence one another. In the conceptual stage we think through what our variables mean and how they might be related; in the empirical stage we assert a definite claim about how they interact.

Think of it like betting on horses at the racetrack (something I’m sure all of you do regularly). First, you would conceptually review the options: “3-Legged Nag” doesn’t sound very promising, but “Thunderbolt” just screams big-money winner. Next you observe a few races and indeed find that Nag loses every time while Thunderbolt consistently places in the top three. Eventually you put money down on the horse you think will win based on what you’ve studied and reflected on. You’ve gone from a conceptual process to an empirical venture.
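For those who like things spelled out, here is a small Python sketch of that same move from conceptual hunch to empirical claim. The horses and their finish positions are invented, obviously:

```python
# Conceptual hypothesis (a hunch about a relationship):
#   "A horse's past performance is related to its chance of winning."
#
# Empirical hypothesis (a definite, testable claim):
#   "Thunderbolt's average finish position is better (lower) than 3-Legged Nag's."

# Invented finish positions from the races we "observed" (1 = first place).
thunderbolt = [1, 3, 2, 1, 2]
three_legged_nag = [8, 7, 9, 8, 7]

avg_thunderbolt = sum(thunderbolt) / len(thunderbolt)
avg_nag = sum(three_legged_nag) / len(three_legged_nag)

print(f"Thunderbolt average finish:  {avg_thunderbolt:.1f}")
print(f"3-Legged Nag average finish: {avg_nag:.1f}")

# The empirical claim is supported if the observed averages line up with it.
print("Empirical hypothesis supported:", avg_thunderbolt < avg_nag)
```

Notice that the empirical hypothesis is specific enough that the observations can actually confirm or disconfirm it; the conceptual hunch, by itself, is not.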

I recommend rereading page 111, paragraph 1 as it walks you through the conceptual-empirical process. Reread that paragraph and then actually map out the stages on a piece of paper. Yes, I’m serious—drawing diagrams is a great way to learn this stuff!

One last question was about why internal consistency, or reliability, matters. I guess the most basic answer is that it matters if you care about whether your measure gives dependable results. Given the complexity of social phenomena, we want our measures to be as reliable, or dependable, as possible. The text notes that we improve reliability by clearly defining constructs, using precise levels of measurement, using multiple indicators, and pilot testing.

Specifically, multiple indicators enable us to measure a construct in different ways. Returning to the text’s example, prejudice does not exist in people’s attitudes and actions in a single way. Rather, it is manifest in different feelings and behaviors. It is therefore much more informative if we can measure multiple facets of prejudice, such as attitudes, popular beliefs, ideology, and behavior. By developing multiple indicators, we increase reliability because we measure more of the construct’s content. Using multiple indicators also helps us root out weaker measures. For example, say that 3 of 4 measures are highly correlated while 1 is not. It is likely, then, that the lone measure is either a bad indicator of the construct or somehow flawed (maybe it’s ambiguously worded, which leads people to respond erratically instead of reliably).
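If you want to see how a researcher might actually spot that lone weak measure, here is a rough Python sketch using invented responses to four hypothetical items. It checks the inter-item correlations and computes Cronbach’s alpha, one common index of internal consistency (the textbook doesn’t ask you to compute this; it’s just to make the idea concrete):

```python
import numpy as np

# Invented responses: 6 people answering 4 prejudice items on a 1-5 scale.
# Items 1-3 are meant to hang together; item 4 is deliberately "off".
responses = np.array([
    [4, 4, 5, 2],
    [2, 2, 1, 5],
    [5, 4, 4, 1],
    [1, 2, 1, 4],
    [3, 3, 4, 3],
    [4, 5, 5, 2],
])

# Inter-item correlation matrix: a weak indicator shows low (or negative)
# correlations with the other items.
print("Inter-item correlations:\n", np.round(np.corrcoef(responses.T), 2))

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

print("Alpha, all 4 items:    ", round(cronbach_alpha(responses), 2))
print("Alpha, dropping item 4:", round(cronbach_alpha(responses[:, :3]), 2))
```

With all four (made-up) items included, alpha is dismal; drop the odd item and it jumps dramatically. That is exactly the “3 of 4” situation described above: the correlation pattern tells you which indicator to rework or throw out.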