PSY 335 Week 8 Notes
These 4 pages of class notes were uploaded by Bria Harris on Thursday, October 22, 2015. The notes belong to PSY 313 at Syracuse University, taught by Amy Criss in Summer 2015. Since upload, they have received 28 views. For similar materials see Intro. to Research Methodology in Psychology at Syracuse University.
PSY 313 Introduction to Research Methods
Week 8 Lecture Notes, October 19th and 21st
Reliability and Validity: Will Your Data Test Your Hypothesis?

Reliability: Is the measurement consistent & stable? Will it produce the same result again and again?
Validity: Does the experiment test what the experimenter says it does?

Reliability: Types of Error
- Observed score (measure) = true value + error
  - E.g., exam score = knowledge + stress
- Random error: just exists; it is fine to have. It can be in the instrument or in the person being measured. Because it is random, it cancels out with repeated measures. E.g., weight.

Reliability Problems: Error
- Random error: even a perfectly sound measure (e.g., weight) may produce different values.
  - Intrinsic noise: drink half a gallon of water between 2 measurements and you may weigh more.
  - Measurement error: e.g., reading from a scale.
  - Because it is random, it presumably cancels out with repeated measures on the same device.
- Consistent (systematic) error:
  - E.g., a scale always adds 2 pounds to the real weight.

How Do You Measure Reliability in a Research Study?
1. Interrater reliability: same observations, different raters.
   - Interrater reliability is the correlation between the ratings of different judges.
   - Correlation doesn't mean the exact same values must be obtained, but the same relative order is important.
   - Take the scores from 2 judges and correlate them.
2. Test-retest reliability: same measurement, different times.
   - Correlation between the same test on different trials (days, weeks).
   - E.g., measure a group's ability to block an air hockey puck 20 times to assess hand-eye coordination. Measure again 6 months later. Correlate the values: r = 0.27 (weak).
3. Split-half reliability: same test, different items that assess the same thing.
   - Design a test that has different items assessing the same construct.
   - E.g., a 10th grade math test with 10 questions assessing trigonometry: is it reliable? Randomly split the test into 2 halves (questions 1-5 and questions 6-10) and correlate the scores against one another. If the test is reliable, the results will be positively correlated: r = 0.89.

Reliability & Validity
- Reliability refers to the precision of the data: are there systematic errors?
- Validity refers to whether the data answer the research question.
- Reliability is necessary for validity.

Validity
- Does the experiment answer the research question being asked?
- Anything that makes you say "umm, maybe not" is a threat to validity.
- Types:
  - External: do the results generalize?
  - Construct: do the operational definitions address the constructs?
  - Internal: are alternative causes ruled out?

External Validity
- All research is conducted at a specific time, in a specific place, with a specific group of people. How far can we generalize our results to different times, places, & people?
- We want to be broad & global in our conclusions, so the question is: does the specific time or place matter? Could there be something that influences our results?
- Sometimes we study a specific group & don't want to generalize; then external validity isn't a concern.
- Are the results more specific than suggested?
  - Population: other participants, cultures, genders, ages, etc.
  - Ecological: from the lab to the real world.
  - Temporal: to other periods of time (of the day, of the year) or to other generations.
- Does the study lack external validity? This is best answered by replication: run the experiment at a different time, with a different population, or outside the lab.
- There must be a logical reason that some property of the situation affects the results.
  - Threat: e.g., recruiting volunteers at 2 AM, or volunteering at an unsafe location, where the time period & environment could plausibly affect the results.

Construct Validity
- Does this experiment really measure what the researcher claims it measures? Are the operational definitions reasonable measures of the construct?
- E.g., is an elevation in heart rate a good measure of stress? Is a standardized test a good measure of teacher effectiveness?
- Precautions: use logic.
  - Convergent validity shows that 2 measures of the same construct are correlated.
  - Divergent validity shows a lack of correlation with unrelated constructs.
- Convergent validity: show that your operational definition is correlated with another measure of the same construct.
  - E.g., hypothesis: boys are more aggressive than girls.
  - Operational definition of aggression: the number of times a child hits or kicks another person.
  - Convergent measure (a different operational definition): ask the teacher for an overall rating of aggressiveness for each child; it should positively correlate.
- Divergent validity: show a lack of correlation with different, potentially explanatory constructs.
  - Same hypothesis & operational definition as above. We want to rule out the possibility that our measure is really measuring a related construct (activity level), so count the total duration of running during recess; we should find no correlation.

Internal Validity
- Goal: to say that manipulating the IV causes a change in the DV.
- A threat to internal validity is anything that prevents making that causal conclusion; address these early in experimental design.
- Extraneous variable: any variable that is not an IV or a DV (random error).
- Confound: an extraneous variable that systematically varies with the manipulated variable AND explains the data. (Like a 3rd variable, but in an experimental situation.)
- E.g., Pepsi wins the taste test, but the label was confounded with the beverage: cup L always held Coke & cup S always held Pepsi. In an experiment where the labels L and S were randomly assigned, people chose S 85% of the time regardless of the beverage.
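Every reliability and validity check in these notes (interrater, test-retest, split-half, convergent, divergent) comes down to computing a Pearson correlation r between two sets of scores. A minimal sketch in Python; the `pearson_r` function and the judge ratings are illustrative examples, not data from the lecture:

```python
# Illustrative sketch: Pearson's r, the statistic behind interrater,
# test-retest, and split-half reliability. Scores are made-up example data.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Interrater reliability: same observations rated by two judges.
# The exact values differ, but the relative ordering largely agrees,
# so the correlation is high.
judge_a = [7, 5, 9, 3, 8]
judge_b = [6, 5, 8, 2, 9]
print(round(pearson_r(judge_a, judge_b), 2))  # → 0.95
```

The same function applies to test-retest reliability (scores at time 1 vs. time 2) and split-half reliability (questions 1-5 vs. questions 6-10): a strongly positive r suggests a reliable measure, while a divergent-validity check expects r near 0.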