PSY 313 Final Exam Review Sheet
Note that every exam in this class is cumulative. Therefore, this review sheet simply combines the review sheets for Exam 1 and Exam 2. The only new content that has been added is the Stanford Prison Experiment (the video we watched in our last class).
There are four types of questions on this exam.
D: Those that require you to know the definition of a term.
I: Those that require you to be able to generate an example or identify an example when given a scenario. Know how to use and apply these terms, not simply report the definition (although obviously defining the terms is the first step toward identifying or generating).
W: Things for which you need to know what the big point is: why did we talk or read about this?
C: Calculations. These are the equations you should know how to use. A copy of each (exactly as it looks below) will be on the exam for your use. I may ask you to compute the entire equation or only part of an equation (for example, I might ask for X - M, etc.). You may not use a calculator.
SOURCES OF BELIEFS
Method of Tenacity
● believing something b/c it has been around for a long time
○ Ex) opposites attract, chicken soup cures a cold
■ you can use science to see if they’re true or not
Method Of Authority/Faith
● believing something b/c an authority figure/expert/someone you trust says it's true
○ Ex) parent: the tooth fairy put money under the pillow
Method of Intuition
● believing something b/c it just feels right (gut feeling)
○ Ex) the coin toss will land on tails
○ Ex) while driving and not sure where to go, you end up at the place by gut feeling
Rational Method
● use logic and reason to infer
○ Ex) if…then statements
○ Ex) the sun will rise tomorrow b/c it does every day
■ Ex) bird / feather / flamingo (all birds have feathers; a flamingo is a bird; so flamingos have feathers)
Empirical Method
● beliefs based on direct observation
● evidence-based (based on findings you have seen)
○ Ex) I am short
○ Ex) it snows in Syracuse
Myth Busters video:
are elephants afraid of mice?
shows the thought process of conducting experiments
how to think logically
deals with Method of Tenacity
CHARACTERISTICS OF SCIENCE
Empirical
● driven by evidence in the form of real-life (systematic) observation
● based on what is observed/experienced, not on theory alone
Objective
● NOT SUBJECTIVE
● recognizes and avoids bias
○ statistics
○ replication
○ collaboration
Public
● write papers, attend and present at conferences, distribute to policy makers via popular media
○ accessible (online sources: National Academy of Sciences)
○ scientific advisors to the president
Cycle of Science: Research & Revision
idea → define variables → participants → design exp. → collect data → evaluate data → report results (publish) → refine idea → define variables → …
( look in notebook)
this cycle shows the process of a research study from the idea → the results, then refining the idea and conducting the study again
we talked about this to show that if studies are conducted again, you should get similar results
Project ADAM Legos Lab
the purpose of this lab was to show the importance of instructions, so the experiment can be repeated and get the same results every time
Structure of a Scientific Paper
Abstract
main ideas, what study is on, hypothesis
Introduction
background info
hypothesis
● needs to be justified
● funnel approach
Methods
participants
● who took the study (age, gender, # of part.)
materials
● description of the questionnaires/ instruments used
procedure
● description of how the study was conducted
● operational definitions
Results
● researchers statistical analysis
● tables, graphs, figures,
● DO NOT INTERPRET RESULTS YET
● quantitative/ qualitative results of observ.
Discussion
● data interpreted
○ implications of findings explored
○ limitations
○ what went wrong
○ future direction of exp.
RESEARCH IDEAS
What makes a good research idea?
1) Logical
○ follows from facts or observations
■ Ex) evidence-based/logical thoughts from concrete facts
2) Testable
● all variables can be measured
3) Refutable/Falsifiable
● can be wrong
● specifies direction????
Casual Vs Formal Sources
1. Casual
● interests, curiosity, fleeting thoughts
○ ex. an "I wonder if…?" moment
● casual observations
○ ex. people watching
● beliefs
○ ex. intuition, tenacity, authority
2. Informed/Formal Sources
● solving practical problems
○ ex. fake bus stop for Dementia/ Alzheimer's patients
● replication of published research
○ ex. testing the same idea with diff. details
○ ex. using diff. operational definitions
● testing predictions of theories about human behavior
DEFINING & MEASURING VARIABLES
1. Theory
● an idea about how the world works, based on empirical data
● explains how scientific laws fit together
● Psychological Theories: a set of integrated statements that explain behavior
○ describe behavior and its underlying causes, predict behavior, and control/manipulate/change behavior
Developing a Theory:
● a theory describes how constructs are related
○ construct: the concept of interest (not directly observable)
■ ex.) stress, attention, love, memory, knowledge
2. Developing a Hypothesis:
● a hypothesis is a specific prediction about the relation between constructs, derived from the theory
○ must be TESTABLE & REFUTABLE
■ Testable: constructs must be clear and defined
● if not directly observable, turn constructs into something measurable/observable
■ Refutable: can be proven wrong
3. Construct (idea/general)
● concepts of interest
● not directly observed
○ Ex) stress, love
Operational Definitions (specific)
● specifies how each construct will be measured
○ turns constructs into something measurable/observable
■ ex.) NASA built a space shuttle, the measurements were messed up, and it fell out of orbit
Independent Variable (IV)
• the cause under investigation; what the experimenter manipulates
• Treatment conditions: 2 or more levels of the IV
• ex. temperature: 60 vs. 80
Dependent Variable (DV)
• the effect; what the experimenter measures
3 TYPES OF RESEARCH METHODS
1) Descriptive (one variable)
● provides a snapshot of the world
● not concerned w/ the relationship bet. variables, but the description of the variable itself
Ex) how many people drive drunk on a college campus
Ex) how aggressive are children on the playground
2) Correlational (2 variables)
● how are 2 variables related?
● CORRELATION does NOT equal CAUSATION
○ ex) do people who sleep longer have better memory?
3)Experimental ( cause/ effect)
● establishing a cause/effect relationship between 2 or more variables
○ ex. does sleeping longer → better memory?
● test the hypothesis
● has constructs (operational definitions : IV, DV)
○ evaluate the impact of the one variable on another
Manipulate something → measure the outcome
*variables: measurable attributes (IV, DV) that vary
● IV (Independent Variable)
○ manipulated by the experimenter
○ controlled
○ predictor
● DV (Dependent Variable)
○ measured
○ depends on the IV
○ outcome
OBSERVATIONAL RESEARCH
Types of Observations:
1) Naturalistic: try not to disrupt subjects being observed (hide, habituate)
Ex) Jane Goodall study on chimps; observed behaviors & took careful notes
● discovered: chimps make tools, chimps hunt & eat meat, tribes go to war
with one another
2) Participant: become one of them
Ex) “being sane in insane places”
Rosenhan 1973 Study: wondered how accurate medical staff were at diagnosing mental illnesses
Results: participants pretended to have schizophrenia; once admitted, they resumed normal behavior and tried to get released; some took 7–52 days to get released
3) Contrived/Structured
● construct situation so you can measure relevant behaviors
● Ex) Bandura study bobo doll
○ transmission of aggression through imitation of aggressive models
Data Collection:
What?
● Behavioral Categories:
○ identify every category of behaviors prior to observation
○ list everything that quantifies as a member of each category
○ needs clear operational definitions
When?
1. Frequency Method: count the # of times the observable behavior occurs in a fixed amount of time
○ only used if there is a consistent time frame
2. Duration Method: measure the amount of time spent engaging in the behavior
3. Interval Method: set time intervals and note whether the behavior is observed during each interval (yes/no)
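Not from lecture, but a small Python sketch (with made-up timing numbers) of how these three "when" methods score the same observation log differently:

```python
# Hypothetical observation log: (start_second, end_second) for each time the target
# behavior occurred during a 60-second observation session (made-up numbers).
events = [(5, 9), (22, 24), (31, 40), (55, 56)]
session_length = 60

# Frequency method: count the # of times the behavior occurred in the fixed time frame.
frequency = len(events)

# Duration method: total amount of time spent engaging in the behavior.
duration = sum(end - start for start, end in events)

# Interval method: split the session into fixed intervals and note yes/no per interval.
interval_size = 10
observed_per_interval = [
    any(start < (i + 1) * interval_size and end > i * interval_size
        for start, end in events)
    for i in range(session_length // interval_size)
]

print(frequency)               # 4 occurrences
print(duration)                # 16 seconds total
print(observed_per_interval)   # [True, False, True, True, False, True]
```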
How?
1. Event Sampling: observe behavior #1 & observe behavior #2
2. Individual Sampling: all behaviors from person #1 & all behaviors from person #2
3. Time Sampling: Person #1 observe → record & Person #2 observe → record
Types of Data:
1) Quantitative
● quantifiable usually numerical
● objectively measured
○ Ex) amount, money (measuring how many tries it takes a dog to learn its name)
2) Qualitative
● quality of data
● subjective and descriptive
● types of things that you observe from your exp.
○ Ex) gender, kind
ANALYSIS:
Rosenhan 1973 Study
1. Qualitative Data:
● other patients ( but not staff) seemed to notice
○ “You’re not crazy! You’re a journalist or a reporter, You’re checking up on the hospital”
● staff often responded generically to a specific question
○ "Good morning, how are you doing?" (too casual of a question if it were actually a patient)
2. Quantitative Data
● # of days spent in the hospital
● # of time staff ignored a question asked by experimenter
3) Contrived/Structured: construct a situation so you can measure relevant behaviors
Ex) Bandura Study (aggression)
transmission of aggression through imitation of aggressive models (Bobo doll)
Results: exp. group displayed more aggressive behaviors than control
POTENTIAL PROBLEMS WITH OBSERVATIONS:
1.Interrater reliability
• same observation, different raters
• correlation between ratings of different judges
• degree of agreement between two observers who simultaneously record measurements of the behaviors.
• correlation does NOT mean exact same values must be obtained
• same relative score is important
2.Reactivity
● people modify their natural behavior when they know they're being watched
** avoid problems by picking the proper method
choose appropriate method ( natural)
also can replicate the study
Solomon Asch Study (1963)
social pressure/conformity
Results: 75% of participants went along w/ confederates at least once
30% of the time people agreed w/ incorrect answers
3.Demand Characteristics
● people might do what they feel is expected of them based on clues from the researcher or research design
4.Experimenter Expectancy Effect
(Rosenthal)
● students worked in lab training rats to go through a maze
● some RAs were told they were working w/ animals bred to learn quickly
● other RAs were told they were working w/ dumb rats
**** in reality, same set of rats
● Results: “ smart” rats learned quicker. “dumb” rats slower
TO AVOID THIS….
● single blind study: experimenter does not know the hypothesis or the condition the participant is in
● double blind study: neither experimenter nor participant knows the condition
SELF REPORT & SURVEY RESEARCH
Survey Structure:
○ open with a non-threatening, interesting question as a warm-up
■ good place for an open-ended question
○ remember that the respondent will be reacting to your questions
■ do you approve of women’s right to choose?
■ do you approve of abortion?
○ put general question before specific ones
■ how do you feel about the economy in general?
Type of Questions:
● Open-Ended (broad questions) ** NOT limited
○ PROS
■ ability to provide complex answers
■ elaborate on answers to restricted questions
■ exploratory research where you may not know the most appropriate responses
■ participant's answer is not biased by the provided responses
○ CONS
■ possibly too much info
■ potentially ambiguous responses
■ people don’t have to respond
● Closed/Restricted
○ PROS
■ easy to quantify
■ answer is always related to the question
■ quick
■ response scale is meaningful ( can be interpreted)
○ CONS
■ basis for response is unknown
■ the participant may wish to give a response that is not available
■ different conception of ranking (my 10 isn’t your 10)
Types of Restricted Questions:
1. Likert type scales
● “I like apple martinis”
● 1. strongly disagree
○ 2. disagree 3. neutral 4. agree 5. strongly agree
2. Semantic Differential
● “ The weather at Syracuse is…?”
○ good x_______x horrible
3. Categorical
● “My favorite city is?”
○ 1. Chicago 2. NYC 3. LA 4. Seattle
4. Quantitative
● “What is your current weight?”
○ <100, 100–149, 150–199, …
● Potential Problems
1. Vocab:
● appropriate for the sample (surveying economists vs. everybody else)
2. Emotional content:
● avoid words with emotional baggage
3. Avoid leading questions
● (ex. Do you agree that…?)
4. Avoid tactless questions
● (ex. Do you have a real job?)
5. Clarity: be clear
● Holocaust Study Ex
6. Avoid ambiguous answers
7. Response set:
■ people tend to pick a response and stick with it if possible
■ give a rating of agree to every question
■ solution: use both positive and negative statements
8. Establish a frame of reference
■ you want to know why someone answers the way they do
■ ask broad questions or specifically ask
9. Memory: is easily altered
■ ex. dream study
● subjects are asked to report events from their life
● none of them reported being lost in a mall
● 10–15 days later they are called to what they think is a different study of dream interpretation; their dream is interpreted as getting lost in a mall
● 10–15 days later the same participants fill out the same questionnaire from the beginning; 60 to 80% reported being lost
SAMPLING RESEARCH PARTICIPANTS
1.Population
● the entire group of interest
2.Sample
● a subset of the population
● part of the population
3.Representativeness
*developing a sampling plan can establish representativeness
Bias
● systematic difference between sample and population
● bias means we have INACCURATELY sampled our data
UNBIASED=GOOD=RELIABLE SAMPLE
Stability
● how much noise in our data?
● spread or variance of the sample
HIGH STABILITY=LOW SPREAD=GOOD THING
* Unstable sample means our sample is NOT RELIABLE**
Sampling Bias
● avoid unstable samples by…. having a large sample size (N)
SAMPLING PLANS
1) Non-probability sampling
● not drawn from the entire population
○ Convenience: take whatever you can get
○ Quota: selectively take what is available
■ Ex) take first 10 people you see
2) Probability Sampling
● each member of the population has a known & non-zero chance of being selected
○ Ex) Census, Nielsen ratings (entire population)
■ Known: must be able to access and identify each member
■ Non-zero: everyone has a chance of being sampled
1)Simple Random
● everyone in pop. has an equal chance of being selected
○ without replacement: each person can only be sampled ONCE
○ with replacement: people can be selected MULTIPLE times
2) Stratified
● break the pop. into subsamples (strata) and choose randomly from each subsample
○ stratified random sample: an equal # from each stratum
○ proportionate stratified random sample: each stratum in proportion to its size in the population
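Not part of the course material, but a quick Python sketch (hypothetical population and strata names) showing how these sampling plans differ in code:

```python
import random

population = [f"person_{i}" for i in range(100)]

# Simple random sample WITHOUT replacement: each person can be picked only once.
without_replacement = random.sample(population, k=10)

# Simple random sample WITH replacement: the same person can be picked multiple times.
with_replacement = random.choices(population, k=10)

# Stratified sampling: break the population into strata, then sample within each stratum.
strata = {
    "freshman": [f"fr_{i}" for i in range(40)],
    "senior": [f"sr_{i}" for i in range(60)],
}

# Stratified random sample: an equal number from each stratum.
equal_n = {name: random.sample(group, k=5) for name, group in strata.items()}

# Proportionate stratified random sample: each stratum in proportion to its size.
total = sum(len(g) for g in strata.values())
proportionate = {
    name: random.sample(group, k=round(10 * len(group) / total))
    for name, group in strata.items()
}

print(without_replacement, with_replacement, equal_n, proportionate, sep="\n")
```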
CENTRAL TENDENCY
1. Mean: average
● best in most situations (common)
2. Median: middle number
● best if there are extreme values
3. Mode: most common observation
● used when decimals don't make sense & for categorical response scales
a. Positively Skewed
tail points towards the + end
b. Negatively Skewed
tail points towards the - end
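A small Python sketch (made-up scores) showing how one extreme value pulls the mean but not the median, while the mode stays the most common observation:

```python
from statistics import mean, median, mode

scores = [2, 3, 3, 4, 5, 5, 5, 40]   # one extreme value (40) skews the data

print(mean(scores))    # 8.375 - average; pulled toward the extreme value
print(median(scores))  # 4.5   - middle number; better when there are extreme values
print(mode(scores))    # 5     - most common observation; useful for categorical data
```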
DISPERSION
● measures of stability/dispersion
1. Variance 2. Standard Deviation
***as the mean grows the variance grows
Error bars and significance (I, D)
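A quick Python sketch (made-up scores) of variance and standard deviation; note it uses the population formulas, which may differ slightly from the (n - 1) sample formulas if that is what was covered in lecture:

```python
from statistics import mean, pvariance, pstdev

scores = [4, 8, 6, 5, 7]

m = mean(scores)          # central tendency
var = pvariance(scores)   # variance: average squared deviation from the mean
sd = pstdev(scores)       # standard deviation: square root of the variance

print(m, var, sd)         # 6, 2, ~1.41
```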
STANDARD SCORES
1. Standardized scores ( z scores)
○ convert any measure (X) to a standardized score and then compare it to other scores on a standardized/fixed scale
z = (X - M) / S
● What does it mean? (I)
○ the number of standard deviations an observation is above or below the mean
○ positive standard score: observation is above the mean
○ negative standard score: observation is below the mean
● Why use a standard score? (D)
● % above or below the standard score (see SAT example) (I)
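A minimal Python sketch (made-up scores) of the z-score formula above; it assumes the population standard deviation is the S used in class:

```python
from statistics import mean, pstdev

scores = [70, 75, 80, 85, 90]
x = 85                      # the observation we want to standardize

m = mean(scores)            # M = 80
s = pstdev(scores)          # S (population standard deviation here)

z = (x - m) / s             # z = (X - M) / S
print(z)                    # positive -> above the mean; negative -> below the mean
```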
CORRELATION
● purpose is to examine relationship bet. 2 variables
● relationship does NOT IMPLY CAUSATION
1. Strength: (-1 to +1)
1. the magnitude of correlation coefficient
2. fuzziness of cloud on scatterplot
a. no relationship:
i. x and y are not related
ii. correlation coefficient near 0
iii. cloud is maximally fuzzy
** the fuzzier the graph the weaker**
2.Form:
pattern in the data
correlation coefficients: either positive or negative not both
assume linear or monotonic relation
EXPERIMENTATION
3.Causation:
● does the IV CAUSE a change in the DV
4.Direction:
1) Positive
● as x increases y increases
● x and y vary in SAME direction
● Pearson's r > 0
2) Negative
● as x increases y decreases
● x and y vary in different directions
● Pearson's r < 0
5.3rd Variable
● when a variable relates to 2 variables of interest
● problem b/c don’t know which causes which
● only a problem if it can explain relationship bet. 2 variables
6. Pearson's r Coefficient: calculating a correlation statistic
● X is a continuous variable
● Y is a continuous variable
1. standard scores allow us to compare variables on different scales
2. (+) or (-) matters; range is -1 → +1
○ allows prediction (predict X when given Y & vice versa)
3. CORRELATION does NOT = CAUSATION
○ causation requires:
■ correlation: the 2 variables are related
■ temporal precedence: the cause comes before the effect
■ ruling out all 3rd variables: when a 3rd variable correlates with the 2 variables of interest
● we do NOT know which causes which
● only a 3rd-variable problem if it explains the relationship between the 2
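Not from class, but a short Python sketch (hypothetical sleep/memory numbers) showing Pearson's r computed as the average product of the paired standard scores:

```python
from statistics import mean, pstdev

x = [2, 4, 5, 6, 8]   # e.g., hours of sleep (hypothetical)
y = [1, 3, 5, 4, 7]   # e.g., memory score (hypothetical)

mx, my = mean(x), mean(y)
sx, sy = pstdev(x), pstdev(y)

# Pearson's r: average product of the paired standard scores (z_x * z_y)
r = sum(((xi - mx) / sx) * ((yi - my) / sy) for xi, yi in zip(x, y)) / len(x)

print(r)   # ~0.95; ranges from -1 to +1, near 0 means no linear relationship
```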
7.Scatter Plots
each point = 1 observation (ex. 1 participant)
8.Temporal Precedence
the cause (precedes) comes before the effect
we can design experiment to ensure this is the case
experiment allows you to control the whole thing
RELIABILITY
Reliability
● is the measurement consistent and stable?
● will it provide the same result again?
● the stability or consistency of the measurement
● precision of data based on measurement
– Random vs. Systematic error
– Types of reliability
• Interrater reliability
• Test retest reliability
• Splithalf reliability
1.Interrater reliability
• same observation, different raters
• correlation between ratings of different judges
• degree of agreement between two observers who simultaneously record measurements of the behaviors.
• correlation does NOT mean exact same values must be obtained
• same relative score is important
2. Test-retest reliability
• same measurement, different times
• correlation between a test on different trials/days/weeks
• comparing scores obtained from 2 successive measurements of the same individuals and calculating a correlation.
3. Split-half reliability (Parallel Forms Reliability)
• same test, different items which assess the same thing
• design a test that has different items that assess the same construct
• if your test is RELIABLE the results will be (+) positively correlated
• splitting items on a test in half, computing a separate score for each and calculating degree of consistency between the 2 scores for a group of participants.
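A small Python sketch (hypothetical questionnaire data) of the split-half procedure described above; statistics.correlation requires Python 3.10 or newer:

```python
from statistics import correlation  # available in Python 3.10+

# Each row: one participant's answers to a 6-item questionnaire (made-up data).
responses = [
    [4, 5, 4, 5, 3, 4],
    [2, 1, 2, 2, 1, 2],
    [5, 5, 4, 5, 5, 4],
    [3, 2, 3, 3, 2, 3],
]

# Split the items in half (here: odd-numbered vs. even-numbered items)
# and compute a separate score for each half, per participant.
half1 = [sum(r[0::2]) for r in responses]
half2 = [sum(r[1::2]) for r in responses]

# Split-half reliability = degree of consistency (correlation) between the two half-scores.
print(correlation(half1, half2))  # a reliable test gives a strong positive correlation
```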
VALIDITY
Validity:
● does the experiment test what the experimenter says it tests?
● the degree to which the measurement process measures the variable that it claims to measure.
● *** reliability is necessary for validity
○ can’t answer question with bad data
Threats:
1. Extraneous Variable(EV)
○ any variable that is not a DV or IV
2. Confound
○ an EV that systematically varies with IV & explains the data
○ like a 3rd variable but in an experimental situation
3 Sources of Validity :
1.External Validity
● 3 examples of external validity:
○ (are the results more specific than suggested?)
• Population other participants, cultures, gender, age, etc.
• Ecological from lab situation to real world, can it generalize it?
• Temporal to other periods of time ( of the day, of the year) or generations.
• all research is conducted at a specific time, in a specific place, with a specific group
• how far can we generalize our results to different times, places, & people?
• we want to be broad and global in our conclusions
Question is…..
• does the specific time/place matter?
• could there be something about the situation that influences our results
• Usually, we want to be broad and global in our conclusions
– You *must* state how that special property affects the results (if you have no reason to suspect that it does, then it probably doesn't matter)
• Sometimes, we study a specific subgroup/time/place → then external validity is not a concern
• replicate experiment in a different time/population/outside lab to know if the study lacks external validity.
2. Construct Validity
• Does this experiment really measure what the researcher claims it does?
• Are the operational definitions reasonable measures of the construct?
Is an elevation in heart rate a good operational definition of stress?
Is a standardized test score a good measurement for teacher effectiveness?
Precautions:
use logic
• convergent validity– show that 2 measures of the same construct are correlated
• ex.
• hypothesis: boys more aggressive than girls
• operational definition of aggression: number of times a child hits or kicks another person during recess
• convergent measure: a different operational definition of aggression (e.g. ask the teacher for overall ratings of aggressiveness for each child)
• divergent validity– show a lack of correlation with a different, unrelated (potentially explanatory) construct
• ex.
• hypothesis: boys more aggressive than girls
• operational definition of aggression: number of times a child hits or kicks another person during recess
• Divergent measure: want to rule out the possibility that our measure is measuring a related construct
• e.g. activity level: measure activity level by counting the total duration of running during recess
• should find no correlation; this is good!
*Rorschach lacks construct validity (does not measure personality disorders, performance is not correlated with mental health problems)
3.Internal Validity:
1. Selection/Assignment
○ Self-selection or improper assignment to condition
3. History
○ uncontrolled events that happen midexperiment
4. Maturation
○ participant changes over time
5. Instrumentation
○ change in the ability to use instrumentation or in the measurement device itself
6. Testing effects
○ change in performance due to practice or fatigue with the material
*************
EXPERIMENTAL CONTROL
Extraneous Variable
• a variable that is not controlled or manipulated
Confound
• systematically varies with the IV and can explain the results
Control Group
• held constant (equal, same) across levels of the independent variable because it is a suspected confound
• eliminates confound
• help in comparing results (are they significant?)
• useful in self selection situations
Matching Stimuli
● A tightly controlled way to cancel out differences in a potential confounding variable
● if you have identified a possible confound:
○ match (participants or stimulus material) on that confounding variable across the levels of the IV
○ for every level of the IV there exists one item/person with the same value of the potential confound
■ Ex) shoe size/reading level
■ size 4: 4 yr, 5 yr, 6 yr
■ size 5: 4 yr, 5 yr, 6 yr
Holding Constant
● hold the value of a potential confound constant across ALL levels of IV
● all items have same value or restricted range
Random Assignment
● for each participant from the sample, randomly assign them to a condition (flip a coin, random # generator)
● any differences between individuals "should be" equally spread across conditions by chance
○ sampling: how we select people from the population
○ assignment: how we place that sample into the conditions of the experiment (levels of the IV)
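A minimal Python sketch (hypothetical participant IDs and conditions) of random assignment that also keeps the group sizes equal:

```python
import random

sample = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]
conditions = ["60 degrees", "80 degrees"]     # levels of the IV (hypothetical)

random.shuffle(sample)                        # like flipping a coin for each person
assignment = {p: conditions[i % len(conditions)] for i, p in enumerate(sample)}

# Individual differences "should be" spread evenly across conditions by chance,
# and each condition ends up with the same number of participants.
print(assignment)
```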
BETWEEN-SUBJECT DESIGN
(Independent Measures)
Advantages & Disadvantages
Advantages
● clean measures
● no practice or fatigue effects
● participants are naive to the experiment
○ can ask WHO questions (individual differences)
Disadvantages
● needs lots of resources ( more participants)
● risk differences between participants in each group that are not related to the IV
○ dealing with individual differences
○ potential confounds
Individual Difference Study
(also called quasi-experimental):
• exploits the differences that naturally exist between individuals
• Ex) male vs. female, young vs. old, disabled vs. non-disabled, athletes vs. non-athletes
** individuals are different !!
** participants are different!!
Controlled Difference Study
● attempts to eliminate or minimize differences between groups
● use experimental control (matching, holding constant, or random assignment)
After-Only Design
● we are only measuring the DV once at the end of the experiment
Before-After Design
(pretest/posttest)
○ test before and after IV manipulation
○ measure differences and compare across groups
Using Matching to Control a Variable
● safest to match a variable when…
○ you have a small # of participants
○ concern about pot. confounding variable
■ pitfalls to avoid…
● diff. between environmental conditions
● control all other aspects of experiment
● ONLY difference between groups should be IV
Experimenter Bias
● If the experimenter is aware of the groups, he or she may inadvertently influence the results:
○ gestures or tone of voice
○ reinforce desired behaviors
○ misinterpret behavior in the direction of what is expected
Single Blind: experimenter unaware which group each participant is in
Double Blind: experimenter & participant both unaware of group
WITHIN-SUBJECT DESIGN
(Repeated Measures)
○ Each participant is in every condition/ receives every level of the IV
● remaining variance is random, unsystematic variance caused by individual differences or sampling error
Advantages & Disadvantages
Advantages
• no worries about differences between groups (because there is a single group)
• statistically more powerful because differences between people are controlled for (each person is their own control)
• individual differences are eliminated because everyone is in the same group
• ex. every participant contributes to every level of the IV
Disadvantages
• prior behavior/decisions may affect later behavior/decisions
• changes over time ( see maturation in internal validity)
• testing effects
• carryover effects
• participant attrition
1)Testing effect
● Repeated practice with the DV can help or harm experiment
Ex) a 2-hour task where you press z for even #'s and m for odd
● mere act of testing someone’s memory will strengthen the memory, regardless of whether there is feedback
2) Carryover effects
● an effect that "carries over" from one experimental condition to another
3) Participant Attrition
● rate of decline of participants in a long term study due to various reasons
Carryover Effects
● an effect that carries over from one experimental condition to another
Counterbalancing
● used to avoid carryover and testing effects
1)Random Method for Counterbalancing
● subjects get the different conditions ( levels of IV) in a random order
2)Balanced Method of Counterbalancing
● an equal number of subjects receive each fixed order of conditions (levels of the IV)
**Both of these distribute any order effects evenly between the conditions (levels of the IV)
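A short Python sketch (hypothetical conditions and subject IDs) contrasting the random and balanced counterbalancing methods:

```python
import random
from itertools import permutations

conditions = ["A", "B", "C"]          # levels of the IV, e.g., three task versions (hypothetical)
subjects = [f"S{i}" for i in range(1, 13)]

# Random method: each subject receives the conditions in a freshly shuffled order.
random_orders = {s: random.sample(conditions, k=len(conditions)) for s in subjects}

# Balanced method: every possible fixed order is used by an equal number of subjects.
all_orders = list(permutations(conditions))               # 3! = 6 fixed orders
balanced_orders = {s: all_orders[i % len(all_orders)] for i, s in enumerate(subjects)}

print(random_orders)      # order effects spread out by chance
print(balanced_orders)    # 12 subjects / 6 orders = 2 subjects per fixed order
```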
FACTORIAL DESIGN
● manipulate 2 IVs in the same experiment
○ Factor: Independent Variable (IV)
○ Level: the # of conditions for that factor
Terminology of factors and levels
• 2x2 design
• means 2 IVs each with 2 levels → 4 conditions in total
• 3x2 design
• means 2 IVS, one with 2 levels and one with 3 levels → 6 conditions
• 2x2x4 design
• means 3 IVs, 2 with 2 levels each, and one with 4 levels → 16 conditions
• 5 x 2 x 7 x 2 design
• means 4 IVs → 5 × 2 × 7 × 2 = 140 conditions
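A quick Python sketch (hypothetical factor names and levels) showing why a 2 x 2 x 4 design produces 16 conditions:

```python
from itertools import product

# Hypothetical 2 x 2 x 4 design: 3 IVs (factors), each with its own levels.
factors = {
    "caffeine": ["none", "200mg"],                  # 2 levels
    "sleep": ["4 hours", "8 hours"],                # 2 levels
    "task": ["easy", "medium", "hard", "expert"],   # 4 levels
}

# Every condition is one combination of a level from each factor.
conditions = list(product(*factors.values()))
print(len(conditions))   # 2 * 2 * 4 = 16 conditions in total
for c in conditions:
    print(c)
```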
Advantages
● Allows you to look for main effects & interactions between variables
main effect = effect of one variable ALONE
Interaction = effect of the two variables TOGETHER
● one factor modifies the effects of the second factor
Disadvantages
• the disadvantage of a factorial design is that you test all possible options
• in the case of a 2 × 2 × 2 × 4 design, there are 32 different conditions
Interaction
● effect of the two variables TOGETHER
Main Effect
● effect of one variable ALONE
Shapes of Factorial Designs:
1) Parallel
● at least 1 main effect
● no interaction
2) V-Shaped
● at least 1 main effect, plus interaction
3) X-Shaped
● only 1 interaction
● no main effect
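Not from lecture, but a small Python sketch (made-up cell means) of how a main effect and an interaction are read off a 2 x 2 design; when the caffeine effect is the same at both sleep levels, the lines are parallel and there is no interaction:

```python
# Hypothetical cell means for a 2 x 2 design (IV1: sleep, IV2: caffeine); DV = memory score
means = {
    ("4 hours", "none"): 60, ("4 hours", "200mg"): 70,
    ("8 hours", "none"): 80, ("8 hours", "200mg"): 90,
}

# Main effect of sleep: compare the marginal (averaged) means for each level of sleep ALONE.
sleep_4 = (means[("4 hours", "none")] + means[("4 hours", "200mg")]) / 2
sleep_8 = (means[("8 hours", "none")] + means[("8 hours", "200mg")]) / 2
print("main effect of sleep:", sleep_8 - sleep_4)           # 20 -> there is a main effect

# Interaction: does the effect of caffeine change depending on the level of sleep?
caffeine_effect_4 = means[("4 hours", "200mg")] - means[("4 hours", "none")]   # 10
caffeine_effect_8 = means[("8 hours", "200mg")] - means[("8 hours", "none")]   # 10
print("interaction:", caffeine_effect_8 - caffeine_effect_4)  # 0 -> parallel, no interaction
```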
ETHICS
Principles of Ethics
1. Autonomy
● the participant must have the unbiased /uncoerced right:
○ to know what they are participating in
○ to decide whether or not to participate
Informed Consent:
● informed consent ensures autonomy
Special concerns:
● pregnant women, children, institutionalized people, non-native speakers
● incentives
○ withholding of incentives as coercion
2. Nonmaleficence (no harm) & Beneficence
“to avoid bringing harm to research participants and to take steps to maximize the benefits of research and minimize risks”
● always costs of research
● always benefits
● benefit must outweigh the costs
3. Justice
burdens and rewards are distributed equally (demographics)
4. Trust
ensure a relationship of trust between participants and researchers
● Confidentiality: your identity is linked to your data, but we keep that link secret
● Anonymity: your identity is not linked to your data
● Deception: use only when absolutely necessary
● Debriefing: educating participants about the design and purpose of the experiment
5. Fidelity & Scientific Integrity
● researchers must be honest
○ Misconduct/Fraud
■ making up data
■ leaving out relevant data
Goal of peer review is to ensure fidelity & scientific integrity
Tuskegee Study
● done on Black males with syphilis
● violated almost all principles of ethics
● were not given adequate treatment when it was available
● told they were being treated when given a spinal tap
● bribed to continue participation
● weren't allowed to leave the study
Jewish Chronic Disease Hospital (W)(1963)
(Violation of Autonomy)
● hypothesis: debilitated patients will reject foreign cells
● 22 chronically ill patients who did not have cancer were injected with live human cancer cells
● the physicians did not inform the patients as to what they were doing because:
○ they did not want to scare the patients
○ they thought the cells would be rejected
Animals in Research
3 R’s of animal research
1. reduce the number of animals used
2. refine to cause least stress
3. replace animals with other models
Who Determines if Researchers Follow Guidelines?
IRB
● Institutional Review Board
● people
IACUC
● Institutional Animal Care and Use Committee
● animals
● Oversight Process
○ composed of unaffiliated member, prisoner advocate, child advocate, religious affiliate, faculty, vet ( as appropriate)
■ submit detailed proposals yearly
■ the government may visit and inspect animal labs
Stanford Prison Study
contrived/structured observation
● took place at Stanford University
● college students were randomly assigned to be prisoners or prison guards
● entire experiment was stopped after 6 days
● participants took on their assigned roles
○ prison guards: mean, psychologically abusive, authoritarian
○ prisoners: abused, harassed
Types of Error:
• Observed score (measured score) = true value/score ± error
ex.) exam score = knowledge (true score) + stress (error)
1.Random Error
• can be in the instrument or in the person being measured
• because it is random, it cancels out with repeated measure
ex.) weight
● even with a perfectly sound measure:
– Intrinsic noise
• drink a litre of water before one of 2 measurements and you might weigh more
– Measurement / observer error
• e.g. reading from a scale
** because it is random it cancels out with repeated measures of same device
** Individual who makes the measurements can introduce simple human error into the measurement process.
2.Systematic Error
• consistent error
• Flaw in equipment or design
• ex.) scale always adds 21 lbs to the real weight
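A small Python simulation (made-up true weight and error sizes) showing why random error cancels out with repeated measures but systematic error does not:

```python
import random

true_weight = 150.0   # the "true score" we are trying to measure (hypothetical)

# Random error: varies unpredictably from measurement to measurement,
# so it tends to cancel out when you average many repeated measurements.
random_readings = [true_weight + random.gauss(0, 2) for _ in range(1000)]
print(sum(random_readings) / len(random_readings))   # close to 150

# Systematic error: a consistent flaw (e.g., a scale that always adds 21 lbs),
# so averaging repeated measurements does NOT remove it.
biased_readings = [true_weight + 21 + random.gauss(0, 2) for _ in range(1000)]
print(sum(biased_readings) / len(biased_readings))   # close to 171, not 150
```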
Calculations: These are the equations you should know how to use. A copy of each (exactly as it looks below) will be on the exam for your use.
Disclaimer: Please note that you are responsible for everything I covered during lecture and all assigned readings, regardless of whether that material is listed on this review sheet or not. The review sheet is a guide to where you should place the majority of your efforts while studying.