Monday, June 18, 2007

Paper & Final Exam Comments

Final Exam Comments

Good job on the final exam, folks! Most people in my sections improved their % of correct answers between the midterm and final, and actually, the mean change was +3 percentage points, even though the class median % went down. Two students in my sections actually improved their scores by more than 30 percentage points! (Wow!) I'd like to take credit for your achievements, of course, but I know that you are the ones who put in the time and effort to study for this class, so this was your achievement. Great job!

Overall Paper Comments

Overall, I was also really impressed with your papers. First of all, I was really happy about the quality of your writing (for the most part :] ). Based on other precomm papers I've graded in the last few years, I was expecting a lot more trouble understanding what you had to say, but it seems that most of you really picked up on the straightforward social-science style. I'd recommend that you try the same style in your other comm papers! Secondly, most of you did a really good job at following the assignment prompt and including most, if not all, of the elements you needed. Good for you!

On the flip side, that means that papers that didn't include everything pretty much automatically looked worse than the papers that did. Remember: paper grades are based on the median paper. Your papers weren't competing against some arbitrary standard that I thought would be adequate; they were competing against other papers, so your grades are relative to how much time, effort, knowledge, understanding, etc. other students put into their papers -- and in this case, it seems everyone else did put in a substantial amount of time, effort, etc. If your paper got above around a 55/80, then you got the main point of the assignment. If you got below that score, that probably means you were missing something major, and I'd welcome your meeting with me to discuss what you can do to improve for your next precomm class. And if you got a "check-minus" or below on the "Writing style" criterion, I'd STRONGLY recommend that you visit CLAS before turning in your next comm paper -- it means that I had to spend a significant amount of time struggling to decode your writing before I could understand what you were trying to say (if I understood it at all).

As you probably noticed, I didn't have the time to put written summary comments on most of your papers. Instead, most of your feedback was in the form of written comments on the paper, plus that half-sheet at the end. But before I explain what my codes mean, here's a reminder of what I said about the paper in section (taken directly from the slides)...

Section Slides

Paper objectives
  • Learn how to conduct research
    • Understand the research process & criteria for evaluation
    • Understand your own study
    • Learn how to follow procedure & directions
  • Learn how to report research
    • Understand what should be included in each section
    • Learn to write well in a scientific manner & in APA style
General paper-writing tips
  • 1. Put in time & effort.
  • 2. Follow directions (esp pp.11-12 of reader).
  • 3. Evaluate your study & writing based on the “steps” criteria.
  • 4. Use course terminology & show what you’ve learned.
  • 5. Focus on content.
  • Note: I will follow pp. 11-12 of the reader & my handout CLOSELY when grading
Specific paper-writing tips
  • “Introduction” is a section, not a paragraph
  • Relate your results to your hypotheses
  • APA style
    • See p. 17
    • Paraphrase, don’t quote
    • Cite everything that’s not your group’s idea
  • Group section of this class basically over
    • Still OK to confer re: specific things, but
    • DON’T READ ANY PORTION OF ANY GROUP MEMBER’S WRITING
  • Important: Write your name on the back of your back page & nowhere else
  • P.11
Grading Scheme

So how did I decide how well you'd completed each of the parts of this assignment, relative to other students? Just as I said I would -- by seeing to what degree you followed directions, included everything you were supposed to, showed an ability to discuss your study using correct course terminology & other knowledge gained from the class, and otherwise met the paper objectives I showed you. And remember how, when I said, "I will follow pp. 11-12 of the reader & my handout CLOSELY when grading," I threatened to make a checklist and check your papers to make sure you'd included everything? Well, as you probably noticed, I (eventually) decided to do just that. So here's what the codes mean:

✓++ (check-plus-plus): This means that you gave a brilliant, knowledgeable, thorough, clear, nuanced, insightful answer; you demonstrated a thorough understanding of course material and its relationship to your study; you clearly met the criteria in the handout for good completion of applicable research steps.
✓+ (check-plus): This means that you gave something more than the basic requirement -- you gave more detail; you were able to link important, relevant outside class knowledge to your discussion; you noticed important things about your study; etc.
✓ (check): This means that you completed the task at about the median level. You "did the job," but nothing more.
✓- (check-minus): This means that something important was missing from your answer, or your answer was very vague, or you got something a little wrong.
✓-- (check-minus-minus): This means that either you didn't include this element in your paper, or your discussion showed significant misunderstanding of the element.

Here are some examples of probably the simplest element of the paper, the "Overall design," ordered from weakest (✓--) to strongest (✓++):

"We used the survey method, because after we randomly assigned people to groups and showed them websites with different levels of interactivity, we had them fill out a questionnaire."
"We used an experiment with a survey."
"The overall design of our study was an experiment."
"The overall design of our study was an experiment, and in particular, a posttest-only control group design."
"Because this study's hypotheses predicted a cause-and-effect relationship between interactivity and clicking behavior, the methodology of this study was an experiment -- we manipulated our independent variable, website interactivity; we flipped a coin to randomly assign participants to our experimental and control groups; and we attempted to control all other extraneous variables by treating all participants equally. The particular design of this study was a posttest-only control group design, because we observed clicking behavior in our randomly assigned experimental and control groups only after participants viewed the stimulus webpage."

People's paper scores were based on how far above or below the median their paper was overall, balancing all the sections and weighting more heavily the "most important section!" (p.12 reader), the discussion section. So if your paper got more minuses than pluses, you probably got below the median score, and if your paper had a lot more pluses than minuses, you probably got a lot higher than the median score. The student with the top paper has generously given me a copy of her paper, so if you'd like to see what it would've taken you to get the top score, you can come to my office and I can show the paper to you.

Written Comments in Paper

Comments within the paper can help you figure out why you got the pluses & minuses, but a lot of the written comments were mainly aimed at helping you improve your paper-writing skills. A lot of you probably read my comments and thought, "Wow -- that's really nitpicky!" If that's the case, then that probably means I thought highly enough of your abilities to believe you could handle detailed critiques. My goal with the nitpicky stuff wasn't to be overly harsh with your grade; it was to push your writing to the next level. You can get a better idea of what your grade was based on by looking at the half-sheet than by looking at the detailed comments.

FYI, stuff that was underlined or put in parentheses was either wrong or awkwardly phrased. Comments/changes of my own that are in parentheses are recommendations to improve your writing; they don't necessarily mean that you got the thing wrong, they're only suggestions that you might want to consider. Arrows generally mean that your subject-verb agreement or subject-pronoun agreement was off, but sometimes they're just arrows. Circles were for my own reference; I usually used them to identify your hypotheses/RQ's, if you didn't clearly identify them yourself.

Remaining Points of Confusion

Here are some concepts that a lot of you still seem confused about...

1. Social science does not answer philosophical or moral questions (re: values, appropriateness, social policy, etc.); social science doesn't say what should or shouldn't be, it just shows what is. What is can inform what you think should be, but doesn't itself recommend anything.

2. The only way you can generalize to a larger population than your sample is if you get a representative sample of that population. The only way you can get a representative sample of a population is through random sampling techniques. Methodology has little to do with external validity (ability to generalize) -- it's not true that all surveys have external validity and all experiments don't; it's only true that most survey/correlational studies use representative sampling techniques, and most experiments do not, and it's those sampling techniques that determine generalizability. You can't generalize results to a larger population with an improperly conducted survey that used no random sampling, while you can properly generalize results to a larger population with an experiment that used random sampling techniques. To what population can you properly generalize? To the population from which you randomly selected your sample.

3. Internal validity is not the same thing as the ability to make causal claims. "Internal validity" refers to the ability to conclude that the results of your study accurately reflect what went on in your study; it refers to how well you conducted your study. So experiments and surveys both have some level of internal validity (just as they both have some level of external validity), but it's not true that "experiments" automatically have internal validity, any more than it's true that "surveys" automatically have external validity. BUT it is true that if an experiment has been performed correctly, i.e., with true random assignment to conditions based on a manipulated independent variable, and with all variables other than the manipulated IV held constant across groups, then it has internal validity, and if an experiment has been performed correctly, then causal claims can be made. So if and only if an experiment has been performed correctly, then (a) the experiment has internal validity and (b) causal claims can be made.

4. Random sampling and random selection are not the same thing as random assignment (see Final FAQ #2, Q10), so you can't use representative sampling methods to randomly assign people to conditions. That means that any sort of systematic assignment to groups is just that -- systematic (so it'll produce systematic error), not random.

5. Hypotheses are the basis for everything else that goes on in your study, so (a) your rationale for making your predictions should be clear and well thought out, and (b) everything should be linked to your hypotheses. Hypotheses say what variables you're interested in, which variables (if any) you need to manipulate and which you need to measure, what relationship among variables is predicted, what type of research methodology you need to use, what type of sample you need to collect, what type of statistical analyses you need to perform, and what conclusions you can make. So your results section should be focused on testing your predictions, which means that the analyses you perform should be based on the variables in your hypotheses, not on the questions in your questionnaire. Because results should show relationships among variables, rather than people's responses to particular questions, if you've used multiple questions to measure a variable (for triangulation of measurement), then you need to create a scale of those questions to turn multiple items into one variable.
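For instance, here's a minimal sketch (not from the course materials; the item names and numbers are made up) of turning three questionnaire items that all measure the same variable into one scale, assuming pandas is available:

```python
import pandas as pd

# Hypothetical data: three 1-5 questionnaire items that all measure
# the same variable, "enjoyment" (triangulation of measurement).
df = pd.DataFrame({
    "enjoy_q1": [4, 2, 5, 3],
    "enjoy_q2": [5, 1, 4, 3],
    "enjoy_q3": [4, 2, 5, 2],
})

# Average the items into ONE variable, so your analyses test the
# hypothesis about the variable, not people's answers to single questions.
df["enjoyment"] = df[["enjoy_q1", "enjoy_q2", "enjoy_q3"]].mean(axis=1)
print(df["enjoyment"])  # 4.33, 1.67, 4.67, 2.67
```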

Most Common Writing Errors

Here are the most common writing errors...

1. Errors with amount vs. number terms. Use number terms (e.g., "many," "few") when quantities are measured as individual, discrete items, and amount terms (e.g., "much," "little") when quantities are massed and collective. People are always referred to with number terms (e.g., "a large number of people preferred interactive to noninteractive websites") and not with amount terms. You can think of the difference in this way (warning: this example is gruesome, but probably memorable): in order to talk about people in amount terms, you'd need to mass them all together, so you'd need to, e.g., put a bunch of people into a wood mulcher; only then could you select "a little amount" or "a large amount" of people -- you could select, e.g., 100 oz of people, or 1 gallon of people, etc. Otherwise, you can only measure people as discrete, complete individuals.

On a related note, "between" is used when entities are distinct individuals (e.g., "a traitor stood between the two patriots"), and "among" is used when items are a mass or collective group (e.g., "there was a traitor among them"); in most cases, "between" is used for two entities, and "among" is used for more than two entities or masses.

On another related note, people are "whos" and not "whats," so you should say, "Participants who looked at the interactive website..." rather than "Participants that looked at the interactive website..." That's especially true in social science -- it's important for ethical reasons to treat participants as people rather than lab rats (which is the reason that they're now called "participants" instead of "subjects").

2. Errors with comparison terms. If you use comparison terms, you need to specify what's being compared, so you need to say e.g., more than what, less than what, as ______ as what, different from what (also notice the preposition -- "different" always takes "from" and not "than"), etc.

3. Errors with homonyms, homophones, and plurals, e.g., its (possessive form of "it") vs. it's (contraction of "it is"), whose (possessive form of "who") vs. who's (contraction of "who is"), effects (noun) vs. affects (verb), media (modes of communication) vs. mediums (people who communicate with the dead), etc.

Wow, that was long! :] In the next few days, I plan to make one last blog entry about how you can use the knowledge you've gained in this class.

Thursday, June 14, 2007

Paper comments

First of all, congratulations! You've now finished taking your first big step toward becoming great researchers (or at least toward understanding and evaluating communication research). And it's finally summertime!

I'm sure a lot of you are wondering about what all those scrawls and arrows and lines and things on your paper mean, and what all those checks and check-pluses and check-minus-minuses on the attached half-sheet mean, so I plan to post an explanation of my paper comments tonight or tomorrow. We TA's are still on campus with Prof. Mullin working on inputting your grades...

Final FAQ #3

Q11: In the names of factorial designs, does it matter what order the numbers go in (e.g., is a 2x2x3 design different from a 2x3x2 design)?

A: No. Order of the numbers doesn't matter, so 2x2x3 = 2x3x2 = 3x2x2. They all just say that there are 3 IV's, two with two levels each and one with 3 levels. (But obviously 2x3x3 is different from 2x2x3x2 and from 2x2x1.)


Q12: How do I graph 2x2 factorial designs to look for interaction effects?

A: The goal of the graph is to plot a dependent variable mean (the numbers in the boxes) for each combination of levels of the independent variables. To create the graph, you basically just need to remember which variable goes where.

The dependent variable always goes on the vertical axis.
One of the IV's (it doesn't matter which) goes on the horizontal axis.
The other IV is plotted as separate lines on the graph.

Once you've labelled everything, you just need to match the numbers in the boxes (the DV values) with lines on the graph. (FYI, if you have a 2x2 factorial design, then you'll need to plot 2x2=4 points on the graph, and the 4 points will be joined by 2 lines.) Once you've plotted the two lines, imagine that the lines go on forever. If the lines would eventually cross, then there's an interaction effect (between the IV's), but if the lines are parallel, there's no interaction effect.
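If it helps, here's a minimal sketch of such a graph in Python (the cell means and variable names are invented for illustration):

```python
import matplotlib.pyplot as plt

# Hypothetical DV means (mean clicks) for a 2x2 design:
# IV1 = website interactivity (low/high), IV2 = gender (male/female).
interactivity = ["low", "high"]        # IV1 goes on the horizontal axis
clicks_male = [2.0, 6.0]               # IV2, level 1: one line
clicks_female = [3.0, 4.0]             # IV2, level 2: the other line

plt.plot(interactivity, clicks_male, marker="o", label="male")
plt.plot(interactivity, clicks_female, marker="o", label="female")
plt.ylabel("Mean clicks (DV)")         # the DV always goes on the vertical axis
plt.xlabel("Interactivity (IV1)")
plt.legend(title="Gender (IV2)")
plt.show()

# These two lines have different slopes, so extended far enough they would
# cross -- that pattern indicates an interaction effect between the IVs.
```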


Q13: How do I graph the marginal means?

A: You don't. You only graph the numbers inside the "boxes" (cells). The marginal means don't tell you anything about interactions, but they do tell you about main effects. If the marginal means of different "rows" are different from each other, then there's a main effect of the variable whose levels are in different rows. If the marginal means of the different columns are different from each other, then there's a main effect of the variable whose levels are in different columns.
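Here's a tiny numeric sketch (cell means invented) of how marginal means relate to main effects:

```python
import numpy as np

# Hypothetical 2x2 table of DV cell means:
# rows = levels of IV "A", columns = levels of IV "B".
cells = np.array([[2.0, 6.0],
                  [3.0, 4.0]])

row_marginals = cells.mean(axis=1)  # [4.0, 3.5]: rows differ -> main effect of A
col_marginals = cells.mean(axis=0)  # [2.5, 5.0]: columns differ -> main effect of B
print(row_marginals, col_marginals)
```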


Q14: In practice final qu. 10, why is "a" the correct answer?

A: Here's the practice question:
If a study finds that a measure of “prejudicial attitudes” is correlated with a measure of “discriminatory behaviors” at r = .85, what would be appropriate to conclude?
a. The more discriminatory behaviors a person shows, the more prejudicial attitudes that person is likely to hold.
b. There is a weak relationship between prejudicial attitudes and discriminatory behaviors.
c. Prejudicial attitudes are likely to lead to discriminatory behaviors.
d. both a and c

This question is complicated because it's testing a few different areas of knowledge at once (particularly, knowledge about correlations and knowledge about causality) -- it requires thorough understanding of the course material. Here are the things you need to know in order to answer this question correctly:

1. r = .85 indicates a positive (direct) correlation, meaning that as one variable goes up, the other goes up. (If it were r = -.85, that would mean that as one variable goes up, the other goes down.)

2. r = .85 indicates a strong correlation; as a correlation gets farther from 0, it gets stronger. (If it were r = -.85, then the magnitude would be the same -- to tell how strong a correlation is, ignore the + or - in front of the number.)

3. If you have a simple correlation between variables, you can't rule out the possibility that instead of the "independent variable" causing the "dependent variable", the direction could be the reverse -- the DV could cause the IV. (Recall that the other main causality problem with correlations is the "3rd-variable problem" -- it could be a third, unmeasured variable that's changing both the IV and the DV.)

4. Causality cannot be established by correlational/survey research (see 3. above), so in correlational/survey research, you can't appropriately conclude that one of the variables has caused changes in another, even if the two variables are strongly correlated.
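To make points 1 and 2 concrete, here's a little sketch; note that the strength cutoffs in it are rough rules of thumb I picked for illustration, not definitions from the course:

```python
def describe_r(r: float) -> str:
    """Summarize a correlation coefficient's direction and strength."""
    direction = "positive (direct)" if r > 0 else "negative (inverse)" if r < 0 else "zero"
    # Illustrative cutoffs only -- different textbooks draw these lines differently.
    strength = "strong" if abs(r) >= 0.7 else "moderate" if abs(r) >= 0.3 else "weak"
    return f"{strength} {direction} correlation"

print(describe_r(0.85))   # strong positive (direct) correlation
print(describe_r(-0.85))  # same strength -- only the direction differs
```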

Answer choice (a) is really asking 2 things:
(I) Does r= .85 between prejudiced attitudes (PA) and discriminatory behavior (DB) mean there's a positive relationship between the variables (as one variable goes up, the other variable goes up)?
Answer: yes -- the sign is positive.
(II) Is it the same thing to say that PA is associated with DB as to say that DB is associated with PA?
Answer: yes -- directionality can't be established.

Answer choice (b) is asking:
Is r=.85 a weak correlation?
Answer: no -- the magnitude of .85 is far from 0 and close to 1, so it's a strong correlation.

Answer choice (c) is asking:
(I) Does r= .85 between PA and DB mean there's a positive relationship between the variables (as one goes up, the other goes up)?
Answer: yes -- the sign is positive.
(II) Based on correlational research, can you conclude that one variable causes ("leads to") the other?
Answer: NO -- correlation doesn't equal causation.

So the correct answer is (a).

Wednesday, June 13, 2007

Office hours

I'm going to be about 10 minutes late for office hours today -- sorry! I'll stay past 6 PM to make up for it...

Final FAQ #2

Q2: Do we need to know the strengths & weaknesses of survey research, experiments, and content analysis?

A: Yes.


Q3: How much are we required to know about interaction analysis?


A: I'm guessing that you'll only need to understand it at a basic level (i.e., that it's the quantitative version of conversation analysis), but because there's more info than just that in the textbook, it's possible that Prof. Mullin would ask something a little more specific.


Q4: How does the cross-lagged panel design help reduce the directionality problem?

A: Recall that in survey/correlational research, if you find a strong correlation between variable X and variable Y, you can't tell whether it's X that's causing Y or Y that's causing X (i.e., there's a directionality / direction of causation problem). But if we know which variable came first, X or Y, then we'll know which variable is causing the other. (That's a law of causality -- the cause has to come before the effect.)

Experiments can establish which variable came first, because experimenters make sure that the treatment (IV/cause) comes before they measure the response (DV/effect). But no such luck when you're measuring two variables at the same time and testing their association, as is done in most survey/correlational research; if you find a strong correlation between X and Y at a particular time, you don't know if X or Y came first.

But what if you could measure the two variables at two (or more) different times? Suppose that X at time 1 strongly predicts Y at time 2, but that Y at time 1 is unrelated to X at time 2. That strongly suggests that X came first -- X had a later effect on Y, while Y did not have a later effect on X.

For example, if TV exposure at age 8 predicts aggression at age 20, but aggression at age 8 does NOT predict TV exposure at age 20, then it's most likely that TV exposure preceded aggression, and therefore TV exposure probably causes aggression rather than the other way around.

That's what cross-lagged panel designs allow us to demonstrate. Measuring the same variables (X and Y) in the same participants (panel design), you cross X over a time lag with Y, and Y over a time lag with X, to see which came first. (If you find all strong or all weak correlations, then you're stuck -- the design won't help you figure out what came first, and you'll have to try an experiment, if possible.)
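Here's a toy numeric sketch of that cross-lagged logic (every number is invented purely for illustration):

```python
import numpy as np

# Same (hypothetical) participants measured at two times:
# X = TV exposure, Y = aggression.
x_t1 = np.array([1, 2, 3, 4, 5, 6])   # X at time 1
y_t1 = np.array([3, 1, 4, 1, 2, 3])   # Y at time 1
x_t2 = np.array([2, 2, 4, 4, 6, 6])   # X at time 2
y_t2 = np.array([1, 3, 2, 4, 6, 5])   # Y at time 2

r_x1_y2 = np.corrcoef(x_t1, y_t2)[0, 1]  # does early X predict later Y?
r_y1_x2 = np.corrcoef(y_t1, x_t2)[0, 1]  # does early Y predict later X?
print(round(r_x1_y2, 2), round(r_y1_x2, 2))  # ~0.89 vs ~0.18

# Early X strongly predicts later Y, but early Y barely predicts later X,
# which suggests X came first (and so may be the cause).
```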


Q5: How do you calculate data at different levels of measurement?

A: Simply speaking, the following types of calculations are used to determine results.

[Note: variables at the nominal or ordinal level are generally considered "categorical" or "discrete" variables, and variables at the interval or ratio level of measurement are generally considered "continuous."]

  • No real IV; nominal/ordinal DV (e.g., gender): use percentages
    Example: 50% of people are male and 50% are female
  • Nominal/ordinal IV (e.g., gender); nominal/ordinal DV (e.g., yes/no): use percentages
    Example: 40% of females said yes and 60% of females said no, but 20% of males said yes and 80% of males said no
  • Nominal/ordinal IV (e.g., gender); interval/ratio DV (e.g., hair length): use means
    Example: The mean hair length for males is 1.5 inches, and the mean hair length for females is 4 inches.
  • Interval/ratio IV (e.g., hair length); interval/ratio DV (e.g., # of incarcerations): use correlations
    Example: The correlation between hair length and incarcerations is -.40
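If you're curious what those calculations look like in practice, here's a minimal sketch (hypothetical data; pandas assumed):

```python
import pandas as pd

# Invented data matching the examples above.
df = pd.DataFrame({
    "gender": ["m", "m", "f", "f", "f", "m"],       # nominal
    "said_yes": [1, 0, 1, 1, 0, 0],                 # nominal (yes = 1, no = 0)
    "hair_length": [1.0, 2.0, 4.0, 5.0, 3.0, 1.5],  # ratio
    "incarcerations": [3, 2, 1, 0, 1, 2],           # ratio
})

print(df["gender"].value_counts(normalize=True) * 100)  # percentages (no IV)
print(df.groupby("gender")["said_yes"].mean() * 100)    # percentage "yes" by group
print(df.groupby("gender")["hair_length"].mean())       # means by group
print(df["hair_length"].corr(df["incarcerations"]))     # correlation
```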


Q6: How is statistical regression a threat to internal validity?

A: Recall that "statistical regression," or "regression toward the mean," is the phenomenon whereby people with extreme scores at one time will have less extreme scores the next time the scores are measured. It's a threat to internal validity (= the ability to conclude that a study's results accurately reflect true relationships among variables) because in regression toward the mean, people's scores don't become less extreme because the people have actually gotten "better" or "worse" on the measure, but rather because of a numerical/statistical law. It's an artifact of measurement, not a true change in the score.

E.g., odds are, people who scored super low on the midterm for this class will do better (percentage-wise) on the final exam. Why? Not because they got any smarter, or necessarily know the material any better this time (though that would also help!), but rather just because of the numerical law that says their scores will become less extreme.
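Here's a toy simulation of that numerical law (everything invented; Python's standard library only):

```python
import random

# Toy simulation of regression toward the mean.
# Each exam score = stable true ability + random luck (re-rolled per exam).
random.seed(1)
ability = [random.gauss(70, 8) for _ in range(1000)]
midterm = [a + random.gauss(0, 10) for a in ability]
final = [a + random.gauss(0, 10) for a in ability]

# Take the 50 students who scored lowest on the midterm...
lowest = sorted(range(1000), key=lambda i: midterm[i])[:50]
mid_avg = sum(midterm[i] for i in lowest) / 50
fin_avg = sum(final[i] for i in lowest) / 50
print(round(mid_avg, 1), round(fin_avg, 1))

# Their final average is reliably higher than their midterm average, even
# though "ability" never changed: the bad luck that pushed them to the
# bottom on the midterm doesn't repeat itself on the final.
```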


Q7: What's the setup for the Solomon four group design?

A: The four groups are set up as follows:
  • One group gets the experimental treatment and a posttest.
  • Another group does not get the experimental treatment but still gets the posttest.
  • A third group gets the experimental treatment, plus a pretest, plus the posttest.
  • A fourth group does not get the experimental treatment but still gets a pretest and the posttest.


Q8: What's the difference between evaluation research and market research?

A: My understanding is that evaluation research is done to test the effectiveness of real-world treatment programs (e.g., an AIDS education campaign), and market research is done to test the "saleability" of products (so market research involves studying customers, competing businesses, product campaigns, the economy, etc.). Evaluation research usually uses quasiexperimental designs (as your textbook glossary says), while market research generally uses methods like focus groups.


Q9: What does "fairly reductionistic" mean in the context of content analysis?

A: The content of (e.g.) a TV show is rich with visual and aural detail, and everything is created and orchestrated deliberately -- dialog, musical scores, laugh tracks, sound effects, gestures, tone of voice, volume & dynamic contrasts, edits, camera angles & movement, color schemes, sets & backdrops, clothing, props, characters, stunts, storylines, story arcs, themes, genres, depicted action, implied action, morality, etc., etc. And all that stuff is continually going on on multiple channels 24-7.

Any given content analysis takes all of that huge array of rich & complex artistic stuff and squeezes it all into a few numbers. Content analysis focuses on a tiny slice of all that content, e.g., violence, and reduces all the content to, e.g., the number of violent scenes per primetime hour. And content analysis researchers can't rely on coders' personal interpretation of the complex meaning, either, because in order to ensure inter-coder reliability (remember that term? :] ), variables (e.g., violence) have to be standardized into specific, measurable units that everyone is able to code the same way.

So some scholars (generally humanities scholars, who value subjective analysis of deep, rich, complex, idiosyncratic meaning) criticize content analysis as being too "reductionistic" -- the criticism is that social science content analysis reduces complex content into small, measurable units, and forces coders to basically leave their "humanity" behind and become robot number-crunchers.


Q10: If random assignment is used, is the method always an experiment? For example, if convenience sampling is used to gather a group of people and then random assignment is used to separate the participants into the control and treatment groups, it's still an experiment, right?

A: First issue: what does random sampling have to do with random assignment? Random sampling (= random selection) is completely separate from random assignment. Sampling tells how to get your sample, and assignment tells what to do with your sample once you have it. Sampling is random if everyone in your population has an equal chance of being chosen for your sample. Assignment is random if everyone in your sample has an equal chance of being put into any given condition.
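Here's a minimal sketch of that distinction (numbers and names invented):

```python
import random

population = list(range(1000))          # everyone you COULD study
random.seed(42)

# SAMPLING = how you get your sample FROM the population.
sample = random.sample(population, 40)  # random sampling (random selection)

# ASSIGNMENT = what you do with the sample ONCE YOU HAVE IT.
random.shuffle(sample)                  # random assignment to conditions
treatment, control = sample[:20], sample[20:]

# Random assignment works identically whether `sample` came from random
# sampling (as above) or from a convenience sample (e.g., volunteers).
```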

Second issue: what makes an experiment an experiment? The "3 C's" tell you whether a study is a true experiment or not.

Conditions: manipulating the independent variable to create different conditions (an experimental group, a control group, etc.)

Control: keeping everything other than the manipulated variable constant (IMPORTANT: the main way to control for 3rd variables is random assignment of participants to the different conditions; if you've done random assignment, then you can assume that all initial differences among participants have been equalized & therefore any differences among groups that you see after the treatment happened because of the treatment)

Causality: If the other 2 C's have taken place, and you've found a significant difference among DV means in the different conditions, then (and only then) you can determine that the IV has caused the changes in the DV (if there's causal language involved in a conclusion, that suggests that the conclusion was drawn from an experiment).

So the answer to your question is that as long as there has been random assignment to different manipulated conditions and control of extraneous factors, yes, it's an experiment, whether your sample is a random sample or a convenience sample.

NOTE: random sampling is the only way that you can accurately generalize to a larger population (the population from which you randomly sampled), so random sampling increases external validity (how true your results are for people other than the people you sampled). But external validity doesn't just involve people; it also involves situations. So another way to increase external validity is to make your study environment, measures, etc., as similar as possible to real-life conditions. That type of external validity is "how true results are for situations other than the study conditions."

Generally, the purpose of surveys is to describe a population (e.g., 50% of California voters are planning to vote for Proposition 30), so the goal of surveying a sample is to accurately represent the population -- that is, the goal is external validity. Generally, the purpose of experiments is to determine cause and effect (regardless of the sample), so the goal of experiments is to say that the relationship that exists between variables in a study is actually a true causal relationship -- that is, the goal of an experiment is internal validity.

HOWEVER, a survey-method study isn't automatically externally valid, and an experiment isn't automatically internally valid. The studies have to be properly conducted (survey: random sampling; experiment: 3 C's) to make those conclusions.

AND, on the other hand, you can have a survey-method study that is internally valid, and an experiment that's externally valid. For example, if you randomly sample from a large population, and then randomly assign your sample to conditions (and do everything else that's required of a true experiment), plus you keep study conditions really similar to real-life conditions, then you can (a) make causal conclusions (because of the 3 C's), (b) generalize your conclusions to your larger population (because of the random sampling), and (c) generalize your results to real-world conditions (because of the real-life conditions).

Tuesday, June 12, 2007

Update

Hi, all!

I've been busy grading your papers, so I haven't had the chance to get to most of your exam questions yet. I hope to get to them tonight.

For those of you who didn't get Prof. Mullin's e-mail, here's a link to the final exam review guide.

Sunday, June 10, 2007

Final FAQ #1: Types of designs

Q1: What types of designs have we studied again?

A: Here are the ones I mentioned in section:
  • Preexperimental designs: one-shot case study, one-group pretest-posttest, static group comparison
  • Experiment-type designs:
    • Quasiexperimental designs: nonequivalent control group, single-group interrupted time series, multiple time series
      • Longitudinal designs (usually time series): panel, trend, cohort
        • Type of panel design for reducing 3rd-variable problem: cross-lagged panel design
    • Within-subjects designs (vs. between-subjects designs)
    • Factorial designs (used for data analysis in survey/correlational method or true experiments)
  • True/full experimental designs: posttest-only control-group, pretest-posttest control-group, Solomon four-group (also know the difference between lab experiments and field experiments)
When you're trying to tell the difference among all of these, think especially about:
  1. How they treat time (One observation? Pretest and posttest? Time series?)
  2. How they treat comparison groups (Do they involve any comparison groups? What type -- arbitrary comparison group, comparison group chosen based on pretreatment similarity to treatment group, true randomly assigned control group, or people acting as their own control group? How many control groups & treatment groups-- one of each, or two of each, like in Solomon?)
Also, please note that the nonequivalent control group design involves both a pretest and a posttest -- I think that in one of my sections I implied that it was posttest only (when we were looking at that graph).

Thursday, June 7, 2007

Also, please bring...

...the practice final exam that's on eres.

Bring stuff to section

Please bring your notes, reader, textbook, etc. to section tomorrow.

Monday, June 4, 2007

Homework

The final homework for this class (due at the beginning of section on Friday) will be Exercise #6. Remember to check eres for a legible version of the assignment.