Wednesday, June 13, 2007

Final FAQ #2

Q2: Do we need to know the strengths & weaknesses of survey research, experiments, and content analysis?

A: Yes.


Q3: How much are we required to know about interaction analysis?


A: I'm guessing that you'll only need to understand it at a basic level (i.e., that it's the quantitative version of conversation analysis), but because there's more info than just that in the textbook, it's possible that Prof. Mullin would ask something a little more specific.


Q4: How does the cross-lagged panel design help reduce the directionality problem?

A: Recall that in survey/correlational research, if you find a strong correlation between variable X and variable Y, you can't tell whether it's X that's causing Y or Y that's causing X (i.e., there's a directionality / direction of causation problem). But if we know which variable came first, X or Y, then we'll know which variable is causing the other. (That's a law of causality -- the cause has to come before the effect.)

Experiments can establish which variable came first, because experimenters make sure that the treatment (IV/cause) comes before they measure the response (DV/effect). But no such luck when you're measuring two variables at the same time and testing their association, as is done in most survey/correlational research; if you find a strong correlation between X and Y at a particular time, you don't know if X or Y came first.

But what if you could measure the two variables at two (or more) different times? Suppose that X at time 1 strongly predicts Y at time 2, but that Y at time 1 is unrelated to X at time 2. That strongly suggests that X came first -- X had a later effect on Y, while Y did not have a later effect on X.

For example, if TV exposure at age 8 predicts aggression at age 20, but aggression at age 8 does NOT predict TV exposure at age 20, then it's most likely that TV exposure preceded aggression, and therefore TV exposure probably causes aggression rather than the other way around.

That's what cross-lagged panel designs allow us to demonstrate. Measuring the same variables (X and Y) in the same participants (panel design), you cross X over a time lag with Y, and Y over a time lag with X, to see which came first. (If you find all strong or all weak correlations, then you're stuck -- the design won't help you figure out what came first, and you'll have to try an experiment, if possible.)
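To make this concrete, here's a tiny simulation of a cross-lagged panel design (all numbers, effect sizes, and variable names are made up for illustration, not from any real study):

```python
# Toy simulation: X at time 1 causes Y at time 2, but not vice versa.
import numpy as np

rng = np.random.default_rng(42)
n = 1000

x1 = rng.normal(size=n)              # X at time 1 (e.g., TV exposure at age 8)
y1 = rng.normal(size=n)              # Y at time 1 (e.g., aggression at age 8)

y2 = 0.6 * x1 + rng.normal(size=n)   # Y at time 2 partly caused by earlier X
x2 = rng.normal(size=n)              # X at time 2 NOT caused by earlier Y

r_x1_y2 = np.corrcoef(x1, y2)[0, 1]  # cross-lagged correlation: X1 with Y2
r_y1_x2 = np.corrcoef(y1, x2)[0, 1]  # cross-lagged correlation: Y1 with X2

print(f"r(X1, Y2) = {r_x1_y2:.2f}")  # strong: X predicts later Y
print(f"r(Y1, X2) = {r_y1_x2:.2f}")  # near zero: Y doesn't predict later X
```

The asymmetry between the two cross-lagged correlations is exactly the pattern that suggests X came first.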


Q5: How do you calculate data at different levels of measurement?

A: Simply speaking, the following types of calculations are used to determine results.

[Note: variables at the nominal or ordinal level are generally considered "categorical" or "discrete" variables, and variables at the interval or ratio level of measurement are generally considered "continuous."]

- IV: [no real IV]; DV: nominal/ordinal (e.g., gender); Calculation: percentages. Example: 50% of people are male and 50% are female.
- IV: nominal/ordinal (e.g., gender); DV: nominal/ordinal (e.g., yes/no); Calculation: percentages. Example: 40% of females said yes and 60% of females said no, but 20% of males said yes and 80% of males said no.
- IV: nominal/ordinal (e.g., gender); DV: interval/ratio (e.g., hair length); Calculation: means. Example: the mean hair length for males is 1.5 inches, and the mean hair length for females is 4 inches.
- IV: interval/ratio (e.g., hair length); DV: interval/ratio (e.g., # of incarcerations); Calculation: correlations. Example: the correlation between hair length and incarcerations is -.40.


Q6: How is statistical regression a threat to internal validity?

A: Recall that "statistical regression," or "regression toward the mean," is the phenomenon whereby people with extreme scores at one time will have less extreme scores the next time the scores are measured. It's a threat to internal validity (= the ability to conclude that a study's results accurately reflect true relationships among variables) because in regression toward the mean, people's scores don't become less extreme because the people have actually gotten "better" or "worse" on the measure, but rather because of a numerical/statistical law. It's an artifact of measurement, not a true change in the score.

E.g., odds are, people who scored super low on the midterm for this class will do better (percentage-wise) on the final exam. Why? Not because they got any smarter, or necessarily know the material any better this time (though that would also help!), but rather just because of the numerical law that says their scores will become less extreme.
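You can watch this happen in a small simulation: give everyone a stable "true" ability plus random luck on each test, pick out the lowest scorers on test 1, and their test 2 average comes out higher even though nobody actually improved (all numbers here are invented for the demonstration):

```python
# Simulating regression toward the mean: extreme scores become less extreme.
import random

random.seed(1)
n = 5000

# True ability is stable; each test adds independent luck/noise.
ability = [random.gauss(70, 10) for _ in range(n)]
test1 = [a + random.gauss(0, 10) for a in ability]
test2 = [a + random.gauss(0, 10) for a in ability]

# Select the people who scored in roughly the bottom 10% on test 1.
cutoff = sorted(test1)[n // 10]
low_idx = [i for i, score in enumerate(test1) if score <= cutoff]

mean1 = sum(test1[i] for i in low_idx) / len(low_idx)
mean2 = sum(test2[i] for i in low_idx) / len(low_idx)

print(f"bottom scorers, test 1 mean: {mean1:.1f}")
print(f"same people,   test 2 mean: {mean2:.1f}")  # noticeably higher
```

The group's ability never changed; only the unlucky noise that put them at the bottom failed to repeat itself.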


Q7: What's the setup for the Solomon four group design?

A: One group gets the experimental treatment and a posttest.
Another group does not get the experimental treatment but still gets the posttest.
A third group gets a pretest, the experimental treatment, and the posttest.
A fourth group does not get the experimental treatment but still gets a pretest and the posttest.
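If it helps to see the whole layout at once, here's the design written out as a little table (just a sketch; the group labels are mine):

```python
# Solomon four-group design: (group, pretest?, treatment?, posttest?)
solomon = [
    ("Group 1", False, True,  True),   # treatment + posttest
    ("Group 2", False, False, True),   # posttest only
    ("Group 3", True,  True,  True),   # pretest + treatment + posttest
    ("Group 4", True,  False, True),   # pretest + posttest
]

for name, pre, treat, post in solomon:
    steps = [label for label, used in
             (("pretest", pre), ("treatment", treat), ("posttest", post)) if used]
    print(f"{name}: " + " -> ".join(steps))
```

Comparing the pretested and unpretested pairs lets you check whether the pretest itself changed how people responded to the treatment.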


Q8: What's the difference between evaluation research and market research?

A: My understanding is that evaluation research is done to test the effectiveness of real-world treatment programs (e.g., an AIDS education campaign), and market research is done to test the "saleability" of products (so market research involves studying customers, competing businesses, product campaigns, the economy, etc.). Evaluation research usually uses quasi-experimental designs (as your textbook glossary says), while market research generally uses methods like focus groups.


Q9: What does "fairly reductionistic" mean in the context of content analysis?

A: The content of (e.g.) a TV show is rich with visual and aural detail, and everything is created and orchestrated deliberately -- dialog, musical scores, laugh tracks, sound effects, gestures, tone of voice, volume & dynamic contrasts, edits, camera angles & movement, color schemes, sets & backdrops, clothing, props, characters, stunts, storylines, story arcs, themes, genres, depicted action, implied action, morality, etc., etc. And all that stuff is continually going on on multiple channels 24-7.

Any given content analysis takes all of that huge array of rich & complex artistic stuff and squeezes it all into a few numbers. Content analysis focuses on a tiny slice of all that content, e.g., violence, and reduces all the content to, e.g., the number of violent scenes per primetime hour. And content analysis researchers can't rely on coders' personal interpretation of the complex meaning, either, because in order to ensure inter-coder reliability (remember that term? :] ), variables (e.g., violence) have to be standardized into specific, measurable units that everyone is able to code the same way.

So some scholars (generally humanities scholars, who value subjective analysis of deep, rich, complex, idiosyncratic meaning) criticize content analysis as being too "reductionistic" -- the criticism is that social science content analysis reduces complex content into small, measurable units, and forces coders to basically leave their "humanity" behind and become robot number-crunchers.


Q10: If random assignment is used, is the method always an experiment? For example, if convenience sampling is used to gather a group of people and then random assignment is used to separate the participants into the control and treatment groups, it's still an experiment, right?

A: First issue: what does random sampling have to do with random assignment? Random sampling (= random selection) is completely separate from random assignment. Sampling tells how to get your sample, and assignment tells what to do with your sample once you have it. Sampling is random if everyone in your population has an equal chance of being chosen for your sample. Assignment is random if everyone in your sample has an equal chance of being put into any given condition.
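The distinction is easy to see in code (a sketch with made-up participant names and group sizes):

```python
# Random sampling vs. random assignment: two separate steps.
import random

random.seed(7)

population = [f"person_{i}" for i in range(1000)]

# Random SAMPLING: everyone in the population has an equal chance
# of being chosen for the sample.
sample = random.sample(population, 20)

# Random ASSIGNMENT: everyone in the sample has an equal chance
# of being put into any given condition.
shuffled = sample[:]
random.shuffle(shuffled)
treatment_group = shuffled[:10]
control_group = shuffled[10:]

print(len(treatment_group), len(control_group))  # 10 10
```

You could replace the `random.sample` line with a convenience sample and the assignment step would work exactly the same way, which is the point of the question.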

Second issue: what makes an experiment an experiment? The "3 C's" tell you whether a study is a true experiment or not.

Conditions: manipulating the independent variable to create different conditions (an experimental group, a control group, etc.)

Control: keeping everything other than the manipulated variable constant (IMPORTANT: the main way to control for 3rd variables is random assignment of participants to the different conditions; if you've done random assignment, then you can assume that all initial differences among participants have been equalized & therefore any differences among groups that you see after the treatment happened because of the treatment)

Causality: If the other 2 C's have taken place, and you've found a significant difference among DV means in the different conditions, then (and only then) you can determine that the IV has caused the changes in the DV (if there's causal language involved in a conclusion, that suggests that the conclusion was drawn from an experiment).

So the answer to your question is that as long as there has been random assignment to different manipulated conditions and control of extraneous factors, yes, it's an experiment, whether your sample is a random sample or a convenience sample.

NOTE: random sampling is the only way that you can accurately generalize to a larger population (the population from which you randomly sampled), so random sampling increases external validity (how true your results are for people other than the people you sampled). But external validity doesn't just involve people, it also involves situations. So another way to increase external validity is to make your study environment, measures, etc., as similar as possible to real-life conditions. That type of external validity refers to "how true results are for situations other than the study conditions."

Generally, the purpose of surveys is to describe a population (e.g., 50% of California voters are planning to vote for Proposition 30), so the goal of surveying a sample is to accurately represent the population -- that is, the goal is external validity. Generally, the purpose of experiments is to determine cause and effect (regardless of the sample), so the goal of experiments is to say that the relationship that exists between variables in a study is actually a true causal relationship -- that is, the goal of an experiment is internal validity.

HOWEVER, a survey-method study isn't automatically externally valid, and an experiment isn't automatically internally valid. The studies have to be properly conducted (survey: random sampling; experiment: 3 C's) to make those conclusions.

AND, on the other hand, you can have a survey-method study that is internally valid, and an experiment that's externally valid. For example, if you randomly sample from a large population, and then randomly assign your sample to conditions (and do everything else that's required of a true experiment), plus you keep study conditions really similar to real-life conditions, then you can (a) make causal conclusions (because of the 3 C's), (b) generalize your conclusions to your larger population (because of the random sampling), and (c) generalize your results to real-world conditions (because of the real-life conditions).
