Chapter 11 Notes
Psych 2300
This 4 page Class Notes was uploaded by Emma Dahlin on Saturday November 14, 2015. The Class Notes belongs to Psych 2300 at Ohio State University taught by Seth Miller in Fall 2015. Since its upload, it has received 41 views. For similar materials see Research Methods in Psychology at Ohio State University.
Chapter 11 Notes: More on Experiments: Confounding and Obscuring Variables

Threats to Internal Validity
Did the IV really cause the difference?
3 most common threats to internal validity = design confounds, selection effects, order effects
There are numerous threats to internal validity that a one-group, pretest/posttest design is vulnerable to

The Detrimental Dozen
1. Design Confound - there is an alternative explanation b/c the experiment was poorly designed; another variable systematically varies with the IV
2. Selection Effect - a confound exists b/c different IV groups have different types of participants
3. Order Effect - the outcome might be caused by the order in which levels of the variable are presented (carryover, fatigue, practice, boredom)
4. Maturation Threat - a change in the participants over time, due to natural development or spontaneous improvement, leading to a change in the DV (solution = include comparison/control group)
5. History Threat - scores on the DV change over time b/c of an external factor or event that affects most members of the treatment group around the time of treatment (solution = include comparison/control group)
6. Regression Threat - a change in the DV due to regression to the mean
   o Regression to the mean: statistical phenomenon describing the tendency for extreme scores at one measurement to be less extreme (closer to average) at a later measurement
   o Occurs only in pretest/posttest designs, and only when a group has extreme scores at pretest
   o Problem: extreme scores sometimes motivate an attempt at treatment, and regression then looks like a treatment effect
   o Problem: policy changes are typically a response to poor conditions, but poor conditions often regress to normal over time, even under existing policies...we may see an improvement when in fact the policy has no benefit
   o Solution: include a comparison (control) group and look at the pattern of results
7.
Attrition Threat - people drop out of studies b/t pretest and posttest measurements
   o Problem: if attrition is systematic (ex: suppliers of extreme scores drop out most often), its effects may look like a treatment effect
   o Solution: remove the pretest data of participants who drop out
8. Testing Threat - a participant's score changes b/c of repeated testing (familiarity, boredom, fatigue, practice)
   o Solution: use an alternative format for the posttest measurement, use a comparison (control) group, or use a posttest-only design
9. Instrumentation Threat - the instrument or measuring device changes over time (or at least how the instrument is used, applied, or interpreted changes)
   o Ex: observers may change their standards for judging behavior over time, or alternate versions of a test may be too dissimilar
   o Solution: use posttest-only designs, ensure pre/posttest equivalence in rating criteria, counterbalance versions of the instrument, and ensure good coding standards throughout the experiment (check for reliability/validity)
10. Observer Bias - observers' expectations influence their interpretation of participant behaviors, thus influencing recorded results (also affects construct validity)
   o Solution: run a double-blind experiment: neither participants nor researchers know who is in the treatment or comparison (control) group
   o Masked/blind design: participants may have some knowledge about what condition they are in, but the observer(s) should not
11. Demand Characteristics - participants respond as they believe the researcher expects them to respond, due to their beliefs about the researcher's expectations
   o Solution: use a double-blind or masked/blind design
12.
Placebo Effect - participants have their own expectations about the impact of treatment, and their outcomes/responses become consistent with these expectations
   o Placebo = sham drug/procedure
   o Solution: use a double-blind placebo control study: include a placebo control group and ensure that neither the participants nor the researchers observing/treating them know who is assigned to each group...also include a no-treatment control group to help identify placebo effects

Combined Threats
   o Selection-History Threat - an outside event/factor systematically affects people in the study, but only those at one level of the IV
   o Selection-Attrition Threat - there is attrition for some groups, but not others

Interrogating Null Effects: What if the IV doesn't make a difference?
Null effect - occurs when we do not detect a statistically significant association b/t 2 variables or a significant difference b/t 2 levels of an IV (on the DV)
   o There may not be enough between-groups difference
   o Within-groups variability might obscure group differences
   o There may truly be no difference

Not Enough Between-Groups Difference
   o Weak manipulations
   o Insensitive measures - want to use dependent measures that have detailed, quantitative increments (not just 2-3 levels)
   o Ceiling and floor effects - ceiling effect: all scores cluster at the high end; floor effect: all scores cluster at the low end
      - Can occur for the IV and/or DV
      - Solution: use a manipulation check (a separate DV that experimenters include in the study) or a pilot study
   o Design confounds can counteract an actual effect of the IV, leading to a null result

Within-Groups Variability Can Obscure Group Differences
   o Lower within-groups variability is better b/c it makes it easier to detect differences b/t independent variable groups
   o Sometimes null effects happen b/c of too much unsystematic variance or error variance
      - Also called noise
      - Scores within a group may vary greatly b/c of measurement error, individual differences, and situational noise
   o Measurement error: any factor that can inflate/deflate a person's true score on the dependent measure
      - Solution: use reliable, precise measurements and measure more instances (collect a larger sample size)
   o Individual differences: characteristics of participants add noise in between-groups designs
      - Solution: use a within-groups design (or matched-pairs design), or add more participants
   o Situation noise: may occur when external distractions or other factors increase variability in the data
      - Solution: conduct studies in a controlled environment

Power: the ability to find a statistically significant effect when that effect actually exists
   o Using a sufficiently strong manipulation, reducing noise (using any of the methods discussed), or increasing the # of participants in a study are all ways to increase power
   o When researchers design a study with a lot of power, they are more likely to detect true patterns - even small ones

Maybe the IV really doesn't affect the DV
   o A "failure to reject the null hypothesis" can be seen as uninformative. After all, this simply says that we don't have enough evidence to reject the claim that there is no association or difference.
   o However, a very high-powered study that results in a failure to reject the null can be informative. A string of high-powered studies that fail to replicate an existing finding starts to cast doubt on the original finding.
   o Science involves replication for good reason
   o Null results/failures tend to be harder to publish
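The claims about power above (more participants and less within-group noise make a true effect easier to detect) can be checked with a small simulation. This is a sketch I've added, not part of the course material: the effect size, noise levels, and sample sizes are made-up illustrative numbers, and the simulation approximates the t-test cutoff with the large-sample critical value 1.96.

```python
# Hypothetical power simulation (illustrative values, not from the notes):
# run many fake two-group experiments with a real treatment effect, and
# count how often a two-sample t-test detects it.
import math
import random

def t_statistic(a, b):
    """Welch two-sample t statistic for lists of scores a and b."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (mb - ma) / se

def simulate_power(n_per_group, true_effect, noise_sd, n_sims=2000):
    """Fraction of simulated experiments where the t-test rejects the null
    (|t| > 1.96, roughly alpha = .05), given a real group difference."""
    hits = 0
    for _ in range(n_sims):
        control = [random.gauss(0.0, noise_sd) for _ in range(n_per_group)]
        treatment = [random.gauss(true_effect, noise_sd) for _ in range(n_per_group)]
        if abs(t_statistic(control, treatment)) > 1.96:
            hits += 1
    return hits / n_sims

if __name__ == "__main__":
    random.seed(1)
    # Same true effect each time; power rises with sample size,
    # and with lower within-group noise.
    print(simulate_power(n_per_group=20, true_effect=0.5, noise_sd=1.0))
    print(simulate_power(n_per_group=80, true_effect=0.5, noise_sd=1.0))
    print(simulate_power(n_per_group=20, true_effect=0.5, noise_sd=0.5))
```

Running it shows the pattern the notes describe: a low-powered study frequently "fails to reject the null" even though the effect is real, which is why a single null result is uninformative while a string of high-powered nulls is not.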