Reliability and Validity for Research Methods in Psychology
These 5-page class notes were uploaded by Emily Notetaker on Monday, February 22, 2016. The notes belong to PSY 3213 at the University of South Florida, taught by Dr. Brannick in Summer 2015. For similar materials, see Research Methods in Psychology in Psychology at the University of South Florida.
Reliability and Validity

Evaluating the relationship between the CONSTRUCT and the OPERATIONALIZED variables:
o Question: How do we know whether our measures assess/correspond to the LATENT CONSTRUCT?
o Answer: Determine the degree to which the measure is RELIABLE and VALID.

Reliability: the degree to which scores show consistency or repeatability (freedom from error).
Validity: the degree to which scores support inferences or the intended meaning.

Bathroom scale example:
o Gives a consistent measure of weight (step on, step off, step on again).
o Provides good data for inferring who is heavier, what luggage will cost, or whether we are gaining or losing weight over time.
o Not so good for determining a person's height.
o Good reliability; good validity for weight; poor validity for height.

Types of Reliability Estimates
o Internal reliability
  - Frequently also referred to as internal consistency (not to be confused with internal validity).
  - Its purpose is to verify whether items that are supposed to measure the same construct actually produce similar scores.
  - How do we measure this? Fancy math: Cronbach's alpha.
    - Estimates the correlation between the observed (total) score and the universe score, where the universe is based on all possible items of the same kind.
    - Alpha is derived from the inter-item correlations.
  - Implications of internal consistency: why does this matter?
    - If your internal consistency is in the acceptable range or higher, your test should converge with another similar test.
    - If your internal consistency is low, the study results usually won't come out (e.g., if your marital-satisfaction scale is bad, it won't correlate with anything).
  - What to do if your internal consistency is not in the acceptable range or higher? Consider:
    - There may be too few items; you may need more of them. Multiple-choice tests in educational contexts are like that.
    - If many items have poor inter-correlations, your scale could be measuring multiple things instead of one underlying construct.
    - Revise the items.
    - It may also be that all the people have the same standing on the thing being measured. If everyone gets all the items right on an educational test, the correlations between items will be zero, and the estimated reliability will be zero as well.
o Inter-rater reliability
  - Its purpose is to measure the degree of agreement or consistency between raters (people).
  - If inter-rater reliability is poor, it suggests that:
    - the operationalization of the construct is defective, OR
    - the raters need to be re-trained, OR
    - some raters can be excused (fired).
  - Calculating inter-rater reliability:
    - Percentage agreement (e.g., agreeing on 30 of 32 aggressive acts).
    - Cohen's kappa:
      - calculates the degree of agreement;
      - the equation takes into account agreement occurring by chance;
      - arguably a better measure than simple percentage agreement.
    - An adequate level of inter-rater reliability is 70% or above.
o Test-retest reliability
  - Its purpose is to determine the variation in a specific measure over different time points.
  - Computed as the correlation between scores at the different time points.
  - Usually used for measures that should not change drastically over a short period of time.
  - Can be used over a long period of time; however, reliability may be lower (developmental changes).
  - Why might your test-retest reliability be low?
    - Poor items.
    - Real changes:
      - If it is short-term reliability (less than a year), this is more of a concern. Consider environmental factors that may have influenced your measures (e.g., a perceived-stress scale given to college students may show elevated scores during finals and midterms).
      - If it is long-term reliability (more than a year), this is less of a concern. Consider developmental changes (e.g., testing an impulsivity scale in adolescence and then again in adulthood).
o Take-Home Points!
  - Reliability is important because you need to know whether you are tapping consistently into the latent construct (e.g., shyness, from the example at the beginning).
  - Internal reliability is concerned with the amount of error within items.
  - Inter-rater reliability is concerned with the amount of error between judges' or raters' scores.
  - Test-retest reliability is concerned with the amount of error between time points.