Psychology 124, Week 5 Notes
These 3 pages of class notes were uploaded by Layne Franklin on Friday, February 12, 2016. The notes are for PSY 124-03 (Fndtns/Psyc Science I: Methods) at the University of Indianapolis, taught by Jordan Sparks Waldron in Fall 2015.
Reliability

- Reliability: How consistent or dependable is the measure?
- Measurement error reduces the reliability of a measure; reliability is an inverse function of measurement error.
- Reliability = Variance in observed scores due to true scores / Total variance in observed scores
- Observed Score = True Score + Measurement Error
  - True Score: the participant's score if the measure were perfect.
  - Measurement Error: variability in observed scores due to extraneous factors.
- Total variance in observed scores = Variance due to true scores + Variance due to measurement error (i.e., systematic variance + error variance)
- There are many different potential sources of error.

Assessing Reliability

- Correlation Coefficient: expresses the strength of the relationship between two variables.
- Test-Retest Reliability: consistency of participants' responses on a measure over time. If the characteristic being measured is supposed to be stable, there should be a high correlation between scores at Time 1 and scores at Time 2.
- Interitem Reliability: consistency among items on a scale (measures with more than one item, where a composite summary score is created). Including items that aren't measuring what they should be increases measurement error.
  - Item-Total Correlation: one item compared to the sum of the remaining items.
  - Split-Half Reliability: one set of items compared to another set.
- Interrater Reliability: consistency between two or more researchers who observe and code participants' behaviors. Examine the degree of agreement among raters; we want observers to make similar ratings.

Validity

- Validity: the degree to which a measurement procedure actually measures what it is intended to measure, rather than measuring something else.
- Measures can be highly reliable but not valid. A reliable clock tells us it is 2 pm at the same time every day; a valid clock tells us it is 2 pm when it is actually 2 pm. What about vice versa?
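The reliability checks above all come down to computing a correlation coefficient between two sets of scores. Below is a minimal, self-contained Python sketch of test-retest and split-half reliability; the participant scores and the six-item scale are invented solely for illustration:

```python
# Sketch: test-retest and split-half reliability via Pearson correlation.
# All data values below are invented for illustration only.

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Test-retest: the same participants measured at Time 1 and Time 2.
# A stable characteristic should yield a high correlation.
time1 = [10, 14, 8, 12, 15, 9]
time2 = [11, 13, 9, 12, 16, 8]
print("test-retest r =", round(pearson_r(time1, time2), 3))

# Split-half: each row is one participant's responses on a 6-item scale.
# Compare the sum of the odd-numbered items to the sum of the even ones.
scale = [
    [4, 5, 4, 3, 5, 4],
    [2, 1, 2, 2, 1, 3],
    [5, 5, 4, 5, 4, 5],
    [3, 2, 3, 3, 2, 2],
    [1, 2, 1, 2, 2, 1],
]
odd_half = [sum(row[0::2]) for row in scale]
even_half = [sum(row[1::2]) for row in scale]
print("split-half r =", round(pearson_r(odd_half, even_half), 3))
```

The same `pearson_r` helper could be reused for an item-total correlation by comparing one item's scores against the sum of the remaining items.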
Assessing Validity

- Face Validity: do the questions in our measure LOOK like they are measuring what they are supposed to measure?
- Construct Validity: does the measure of a hypothetical construct relate as it should to other measures?
  - Convergent validity: the measure correlates with the measures it should.
  - Discriminant validity: the measure does not correlate with the measures it should not.
- Criterion Validity: does the measure allow us to distinguish among participants on the basis of a particular behavioral criterion?
  - Concurrent Validity
  - Predictive Validity

Bias in Measurement

- Test Bias: a measure is not equally valid (or reliable) for different groups; there is more error in how the test measures for members of a particular group (race, ethnicity, gender, age, etc.).
- Just because there are gender, racial, or ethnic differences on a measure does not mean that bias exists: true differences between groups may exist.
- Examples of bias; ways to investigate bias.
- Avoid the armchair analysis!