310 Test Three Study Guide
This 12-page study guide was uploaded by Grace Gibson on Thursday, April 7, 2016. It covers course 3100 (Advanced Experimental Psychology) at Clemson University, taught by Dr. Thomas Britt in Winter 2016.
Test Three

Qualitative Research
● Qualitative research is inductive; we rely on the experiences of individuals to develop our theory
● The data are usually not numbers, but rather words and themes
● Our class mostly does quantitative research
● The key issues are defined by the participant, not the researchers
● This is grounded in personal experience
● Qualitative research provides rich, in-depth information
● Examples: semi-structured interviews, focus groups, journal analysis, open-ended survey questions, life stories, etc.
● Gergen wrote an article on qualitative research
  ○ It was about qualitative research becoming a complementary alternative to quantitative research
  ○ After this article, qualitative research was more accepted
  ○ Mixed methods: using both qualitative and quantitative research
  ○ Qualitative research can help shed light on quantitative findings
  ○ There's now a journal titled "Qualitative Psychology"
● Qualitative research has a structure
● NASA doesn't like people doing qualitative research on astronauts, but astronauts will keep journals each night
  ○ Only one person was allowed to analyze these journals
● Strengths of qualitative research
  ○ Useful in exploratory research, especially in an area where there isn't much prior research
  ○ Helps understand a specific context
  ○ You get a deeper understanding of participants
  ○ Helps researchers avoid their own preconceived ideas
  ○ Gives greater voice to the participants
● Weaknesses of qualitative research
  ○ There are unclear standards for data quality
  ○ Answers can be ambiguous and hard to code
  ○ Requires sophisticated researchers and participants
  ○ Possibility of reactivity (people might not want to tell you what you're asking about)
  ○ People are less reactive in a survey than in person
  ○ Inherently subjective
● Interrater reliability for the coding of qualitative data should be 0.8 or higher
● Don't use qualitative research when you're measuring physiological parameters, or when it's a mature area where a lot of research has already been done
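The 0.8 interrater reliability benchmark above can be made concrete with Cohen's kappa, a standard agreement index for two coders that corrects for chance. This is an illustrative sketch only; the theme labels and coded excerpts are made up, not from the course.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: agreement between two coders, corrected for chance."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of items both coders labeled the same.
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical theme codes assigned by two coders to ten interview excerpts.
a = ["stress", "coping", "stress", "support", "stress",
     "coping", "support", "stress", "coping", "stress"]
b = ["stress", "coping", "stress", "support", "coping",
     "coping", "support", "stress", "coping", "stress"]
print(round(cohens_kappa(a, b), 2))  # → 0.84, above the 0.8 benchmark
```

With nine agreements out of ten excerpts, kappa lands at about 0.84 here, which would meet the guideline; raw percent agreement alone would overstate reliability because some agreement happens by chance.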
● Benefits of the mixed-method approach: gives you both the experiences of the participants and the numerical data
● Information saturation: people are coming up with the same themes over and over again
  ○ The same content keeps being mentioned by the participants who are interviewed
  ○ This means you have enough participants

Sampling
● Our goal in our studies is to discover a law of nature (find something true of humans)
● Population: the entire group of people we want our findings to generalize to
● Sample: a small subset of the population chosen to represent the population
● You want your sample to represent all demographics
● You want to draw an inference about the population from your sample
  ○ Use the sample to infer the responses of the population
  ○ It is critical that the sample is representative of the population
  ○ They must be similar in key characteristics, which vary from study to study
  ○ Unrepresentative samples are not similar in key characteristics
● Random sampling promotes representativeness
● Simple random sampling: each person in the population has an equal chance of being chosen for the sample
● Stratified random sampling: divide the population based on a characteristic, then randomly sample within the divisions
  ○ Use this when you want to ensure your sample is representative on a key characteristic
● Cluster sampling: take advantage of existing clusters and randomly sample within clusters
  ○ This can make it easier to accomplish random sampling
● Psychologists often don't have the time, finances, or desire to do random sampling
● Nonrandom sampling
  ○ Convenience sampling: use whoever is available
  ○ Snowball sampling: find people for your sample, then they recruit others
    ■ E.g. drug addicts, homeless people, etc.
  ○ Purposive sampling: sampling people with unique knowledge or attributes
  ○ Quota sampling: ensuring sample characteristics match the population of interest
    ■ Like stratified sampling, but nonrandom
    ■ You are trying to make sure your sample is representative
● Convenience sampling is the method of choice in psychology (it's easy)
  ○ We justify this by saying we're just trying to discover laws of nature, not population means
  ○ The sample is interchangeable because the law will apply to everyone
  ○ This justification is stronger in physical, objective, physiological areas
● You can empirically examine whether the results generalize
● Self-serving bias: if something good happens, it is due to internal factors; if something bad happens, it is due to external factors
  ○ We initially thought this applied to everyone
  ○ However, a researcher looked at both Eastern and Western cultures and found this occurred in the USA more than in China
● Because we never know how our sample will generalize, we should include a statement in our final paper about generalization
● Gergen wrote an article saying findings might not replicate over time
● Should we use undergrad students as participants?
  ○ Psychology students clearly aren't a random sample
  ○ College students have less crystallized attitudes (attitudes are still developing)
  ○ They don't have a strong self-concept (sense of self)
  ○ They have stronger cognitive skills
  ○ They have a stronger tendency to comply with authority
  ○ They have more unstable peer-group relationships
  ○ These differences might influence studies
  ○ Studies of college students suggest that all people are compliant and easily influenced, that their attitudes are easily changed, that their behavior is inconsistent with their attitudes, that they are more materially interested, and that they are victims of group norms

Problem of Non-Response
● Non-response: when people don't respond
● There's potential for bias (i.e. why did certain people respond or not respond?)
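The contrast between simple and stratified random sampling above can be sketched in a few lines. Everything here is hypothetical for illustration: the population size, the class-year strata, and the 10% sampling fraction are made up.

```python
import random

random.seed(0)  # fixed seed so the example is reproducible

# Hypothetical population: 1,000 students, each tagged with a class year.
population = [{"id": i, "year": random.choice(["Fr", "So", "Jr", "Sr"])}
              for i in range(1000)]

# Simple random sampling: every person has an equal chance of selection.
simple = random.sample(population, 100)

# Stratified random sampling: divide by class year, then randomly sample
# within each stratum so the sample is representative on that characteristic.
stratified = []
for year in ["Fr", "So", "Jr", "Sr"]:
    stratum = [p for p in population if p["year"] == year]
    stratified.extend(random.sample(stratum, len(stratum) // 10))  # ~10% of each

print(len(simple))       # 100
print(len(stratified))   # close to 100; exactly proportional per stratum
```

A simple random sample of 100 could, by bad luck, contain very few seniors; the stratified version guarantees each class year appears in proportion to its share of the population, which is exactly the "representative on a key characteristic" point above.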
● There was a famous case where Consumer Reports did a study of perceptions of mental health treatments
● They reached out to all readers of Consumer Reports, and 7,000 responded (this is good)
● They found that psychotherapy helped 9 out of 10 people, and that those in treatment for more than six months were helped the most
● They found no form of therapy helped more than another for any disorder
● The problem is that people who were helped by psychotherapy are more likely to respond, and the researchers didn't know how many people with mental health issues didn't respond
● How to reduce the problem of non-response:
  ○ First try to increase the response rate by contacting people again
  ○ Maybe offer incentives this time
  ○ If you can't get a higher response rate, see if your sample is similar to the population in key characteristics
  ○ It is difficult to completely remove the bias, so acknowledge it and move on
● Effects of survey parameters on accuracy
  ○ Response scales can affect survey responses
    ■ Schwarz: how successful have you been in your life?
    ■ 34% report high success when the scale runs from −5 to 5, but 13% report high success when the scale runs from 0 to 10
  ○ Effects of prior questions
    ■ Strack: have people recall three positive or three negative life events
    ■ The events are either recent or in the past
    ■ Recent events get included in the mental representation of their current life
    ■ Past events are used as a standard against which to judge their current life
  ○ Question order can determine correlations between variables
  ○ Relationship between marital satisfaction and life satisfaction
    ■ Condition one: first asked general, then marital (the two judgments correlated .32)
    ■ Condition two: first marital, then general (the two judgments correlated .67)
    ■ In condition two, participants used marital information to form an overall estimate of life satisfaction
    ■ This is called priming the participant

Analysis of Data

The Role of Statistics in the Research Process
● Statistics are always used in the service of testing a specific research hypothesis
● If you are trying to answer a complex question, the statistics will be complex
● Avoid unnecessary analyses that do not contribute
● Do not use complicated statistics when simpler statistics will suffice
● Statistics cannot fix the problems of a poor research design
● Which statistic you use may have implications for how many people you include in your study
  ○ E.g. for factor analysis, you want at least five people for each item

Describing Data
● Measures of central tendency
  ○ Create a graph to assess whether your variable is skewed or not
  ○ Use the mean when the data are continuous and not highly skewed
  ○ Use the median (or another descriptor) when data are highly skewed
● Example of a skewed distribution: participants asked how many times they investigated presidential candidates on the internet (80% didn't look at the candidates at all)
● Huff: How to Lie with Statistics (you can make a small effect seem big)
● Effects of tutoring on test performance
  ○ Plotting test scores on a y-axis marked in increments of 10 makes the difference look small, but increments of 2 make it look big
  ○ What should the range be on your y-axis?
    ■ It depends on the response measurements
    ■ Base it on the distribution of scores on your measures
    ■ Use the lowest score obtained to the highest score obtained (for a Likert scale, do NOT include 0)

Measures of Variability
● Range: highest score − lowest score
  ○ This is limited because it only takes two scores into account
● Variance: the average of the squared deviations from the mean
  ○ s² = Σ(X − X̄)² / N
  ○ Used when variables are approximately normal
● Standard deviation: the square root of the variance
  ○ Most of us will report the standard deviation
  ○ Use two decimal places

Reporting Descriptive Information in a Manuscript
● Goes in Method, under Participants:
  ○ "The average age of this sample was 18.18 years (SD = 2.56), and the gender distribution was 44.7% male and 55.3% female. In terms of ethnicity, the sample was composed of 8.3% African American, 1.9% Asian American, 0.8% Hispanic American, 86.5% Caucasian American, and 1.5% Other."
● Goes in Results:
  ○ The first paragraph should be devoted to descriptive statistics, where you indicate the central tendency and variability of your measures
  ○ "The scores on self-esteem were relatively high for the sample (M = 4.0, SD = 1.05, on a 1–5 point scale)."

Correlational Data Analysis
● The defining feature of correlational analysis is measuring the variables of interest
● Change in one variable is associated with change in another
● Measured variables can be continuous (e.g. intelligence) or discrete (e.g. gender, religion, etc.)
● Key point: correlational research is not the same thing as a correlation coefficient
● If both measured variables are continuous, use a correlation coefficient
● If both measured variables are discrete, use chi-square
● If one variable is discrete and one is continuous: t-test or ANOVA
  ○ A t-test can only have two groups, but an ANOVA can handle any number
● Reporting correlations in a report:
  ○ Avoid causal language (e.g. "the results showed that study hours led to higher GPA" or "the results showed that having a poor body image contributed to low self-esteem")
  ○ Report the direction, size, and significance level
  ○ E.g. "The correlation between warriorism and job engagement was r = .30, p < .01, suggesting that a higher level of warriorism was associated with higher job engagement."
  ○ E.g. "The results revealed a negative correlation between GPA and binge drinking (r = −.26, p < .05), such that a higher GPA was associated with lower binge drinking."
  ○ E.g. "The correlation between parental arguments and children's depression, although significant, was smaller than expected (r = .18, p < .05)."

Advanced Correlational Techniques

Partial Correlation
● Partial correlation: examining the relationship between two variables after controlling for another variable
● Can be used to test for mediation, or to make sure the correlation isn't due to a third variable
● E.g. examining the correlation between intergroup anxiety and contact with African Americans after controlling for modern racism
● Reporting it: "The correlation between contact with African Americans and anxiety towards interacting with African Americans remained significant when controlling for modern racism, partial r = .21, p < .05."
● This is basically the same thing as analysis of covariance, but with correlational designs rather than experimental ones

Multiple Regression
● Example of standard multiple regression
  ○ Predicting engagement in voting
  ○ Criterion: engagement in voting
  ○ Predictors: four predictors that are all continuous variables
  ○ We wanted to include all the predictor variables, so we did a standard multiple regression
  ○ All four predictors accounted for unique variance in the outcome
● Example of stepwise multiple regression
  ○ Leary wanted to predict tendency to blush
  ○ Criterion: tendency to blush
  ○ There were eight predictor variables
  ○ Embarrassability was the predictor variable entered first because it had the highest correlation with the tendency to blush

Meta-Analysis
● Let's say there are five studies looking at the relationship between stress and GPA
● Each study will use a sample that is an imperfect representation of the population
● The idea is that combining the results from many studies can give you a more accurate estimate of the "true" relationship than any single study
● Meta-analyses tend to be cited a lot by others
● A meta-analysis does not rely on human judgment, with its biases, to detect patterns in the literature
● It lets you see if the effects are stronger under certain circumstances
● What you do:
  ○ Obtain the effect size from each study (a correlation or a difference between two means)
  ○ Combine the effects into an overall average, which is seen as the best estimate of the true relationship
● Effect sizes vary from study to study because samples are imperfect representations of the population
● Sampling error: the difference between the true relationship and what the sample shows
● Every study has sampling error, so combining all the effect sizes helps cancel out the sampling error
● One way to deal with sampling error is to combine studies and weight them by sample size
  ○ The higher the sample size, the lower the sampling error, so you weight those studies more heavily

Analyzing Experimental Data
● IV: time management training vs. relaxation training
● DV: GPA
● There are ten people in each condition
● The statistical test will tell us whether the results are due to chance or reflect a real difference (was my experiment effective?)
● We want to make sure the differences aren't due to chance, so we have to measure random variation
● Compare the difference in the means to a measure of random error (the variability in the study, or the average within-condition variability across all conditions)
● t-test: difference between means / average error within conditions
  ○ If this value is high enough, the experiment is significant
● How can we increase our t-value?
  ○ Decrease the amount of variability by running the study at the same time every day, using the same research assistant, etc. (standardize the experiment)
  ○ Drive the means further apart (maybe do weeks of time management training rather than an hour)

Hypothesis Testing
● Your null hypothesis says the IV will not have an effect on the DV
● The alternative hypothesis says the IV will have an effect on the DV
● We want to reject the null and accept the alternative
● You may fail to reject the null, which means the IV did not have a detectable effect on the DV
  ○ This does not mean you've proved the null!
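The t-test described under Analyzing Experimental Data (difference between means divided by a measure of average error) can be sketched directly. This uses the standard pooled-variance formula; the GPA values for the two training conditions are hypothetical, not data from the course.

```python
import math
from statistics import mean, variance  # variance() is the sample variance (n − 1)

def independent_t(group1, group2):
    """t = difference between means / standard error of that difference."""
    n1, n2 = len(group1), len(group2)
    # Pooled variance: weighted average of the within-condition variability.
    pooled = ((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    se = math.sqrt(pooled * (1 / n1 + 1 / n2))
    return (mean(group1) - mean(group2)) / se

# Hypothetical GPAs: time-management training vs. relaxation training,
# ten people per condition, as in the example above.
time_mgmt = [3.2, 3.5, 3.1, 3.8, 3.4, 3.6, 3.3, 3.7, 3.5, 3.4]
relax     = [3.0, 3.1, 2.9, 3.3, 3.2, 3.0, 3.1, 2.8, 3.2, 3.0]
t = independent_t(time_mgmt, relax)
print(round(t, 2))  # well above ~2, so the difference is unlikely to be chance
```

The two ways of boosting t listed above map straight onto the formula: standardizing the experiment shrinks the pooled variance in the denominator, and strengthening the manipulation widens the mean difference in the numerator.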
● Directional hypothesis: basically saying the means will differ in a certain direction (one-tailed)
● Nondirectional hypothesis: not sure how they will differ (two-tailed)
  ○ We will almost always use two-tailed tests in this class
  ○ They are more exploratory
● Always use p < 0.05, which means that less than 5% of the time would a difference this large occur by chance if the two means were really the same

Errors in Inferences
● We want to detect differences that are actually present
● We want to not detect a relationship that is not present
● Type I error: we say the relationship is present when it really is not
  ○ Defined by the alpha level we set
  ○ If p = 0.05, fewer than five times out of a hundred will we claim a relationship that isn't really there
  ○ The lower the alpha, the lower the likelihood of making this error
  ○ We might be more worried about this error in an established field
● Type II error: we say the relationship does not exist when it really does
  ○ In part defined by the alpha level
  ○ The lower the alpha, the higher the chance of this error
  ○ We might be more worried about this error when the study is exploratory
  ○ The power of the design to detect the effect is increased by:
    ■ A lower amount of error in the study
    ■ A higher number of participants
    ■ Stronger effects between groups
  ○ If the power of the design to detect the effect is reduced, the likelihood of a Type II error increases
● We are most concerned about Type I error when there are major consequences for finding an effect, as in a highly controversial area
● We are most concerned about Type II error when there are big consequences for a missed discovery, such as the effects of secondhand smoke
● Type II error rate = beta (β)
● Power = 1 − β
● There is a way to calculate the power of your design

ANOVA (Analysis of Variance)
● Paired t-test: used when the same participants are compared under two conditions (e.g. stress at the beginning and end of the semester)
● The same logic underlies the ANOVA, but you can have more than two conditions
● E.g. compare stress after time management training, relaxation training, and no training
● A within-subjects ANOVA: two or more conditions with the same participants
  ○ E.g. stress at the beginning, middle, and end of the semester
  ○ Also called a repeated-measures ANOVA
● First you assess whether your F-value is significant
  ○ i.e. is your F big enough to indicate a significant difference between groups?
  ○ F = difference between conditions / average error
● If F is significant, do follow-up comparisons to see which means differ from one another
● Only after these post-hoc comparisons are you done with your analysis
● Be sure to indicate which means differ from the other means

Two-Way Designs (manipulating two IVs)
● Two-way ANOVAs look at the effects of two IVs on a DV
● Interaction: the effect of one IV on the DV differs depending on the level of the second IV
● E.g. we take a test, get positive or negative feedback, and are told the test either is or isn't in our major
  ○ Hypothesis: if the test is in our major, our well-being will be lower whenever we get negative feedback
  ○ In this study, there was an interaction
● Main effect: the effect of one IV when ignoring the other IV (collapse across the other IV)
● Main effect of the second IV: the effect of that IV when ignoring the first one
● Further analysis
  ○ If either of your IVs has more than two levels and a significant main effect, you have to do more tests (follow-up contrasts to determine the source of your differences)
  ○ For any significant interaction, you must do a simple effects test
    ■ Simple effects test: comparing the effects of each IV at each level of the other IV (do a one-way ANOVA for one IV at each level of the second)
    ■ This shows you the source of your interaction

Planned Contrasts in Experiments
● Let's say you have five conditions and you expect one condition to differ from the rest
● This is a planned contrast
● Terkel and Rosenblatt study
  ○ Wanted to know if virgin rats would show maternal behavior
  ○ The experimental group was injected with blood plasma from rats who had just given birth
  ○ Control groups: plasma from other rats taken at two stages of the reproductive cycle, a saline group, and a no-treatment group
  ○ DV: how many days does it take for the injected rat to approach a baby rat?
  ○ Compared the average of the other four groups to the experimental group
● When you can, you should do these planned contrasts

Issues in Using Inferential Statistics
● Statistical significance vs. magnitude of effect
● Statistical significance: the probability that the effect in our study generalizes to the population
  ○ This doesn't tell us how big the difference in means is
● Effect size: how much of the variability in your DV is due to your manipulation?
  ○ Small = 0.02, Medium = 0.13, Large = 0.28
● You should report both the significance and the effect size
● All else being equal, the greater the number of participants, the greater the likelihood your t-test, F-test, or r is significant

Statistical Analyses: Summed Up
● Correlation
  ○ Finding the relationship between two continuous variables
● Partial correlation
  ○ Finding the relationship between two continuous variables while controlling for a third continuous variable
● Independent-samples t-test
  ○ Manipulate a variable so there are two conditions, or measure a variable with two conditions
  ○ The groups (conditions) are independent
  ○ The dependent variable is a continuous measure
● Paired-samples t-test
  ○ Manipulate a variable so there are two conditions, or measure a variable with two conditions
  ○ The groups (conditions) are dependent
  ○ The dependent variable is a continuous measure
● One-way ANOVA
  ○ Manipulate a variable so there are more than two conditions
  ○ OR measure a variable with more than two discrete conditions
  ○ The dependent variable is a continuous measure
  ○ Must follow with post-hoc comparisons
● Factorial ANOVA
  ○ Manipulate two or more variables
  ○ OR measure two or more discrete variables
  ○ The dependent variable is a continuous measure
  ○ Must follow with post-hoc comparisons
  ○ Must do a simple effects test if the interaction is significant
● Analysis of covariance
  ○ Manipulate or measure a discrete IV
  ○ Measure a continuous DV
  ○ Want to examine the effect of your IV while controlling for another continuous variable
● Chi-square test
  ○ Manipulate or measure a discrete IV
  ○ Measure a discrete DV
● Multiple regression
  ○ Measure more than one continuous predictor
  ○ Measure a continuous outcome measure
● Factor analysis
  ○ Measure a large number of continuous items and want to understand how many dimensions underlie the items
● Meta-analysis
  ○ Want to calculate the average effect size for a certain finding across a number of studies that have tested the same effect
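A minimal sketch of the one-way ANOVA logic summarized above: F is between-condition variability over average within-condition error, and eta-squared is the share of DV variability due to the manipulation. The stress scores for the three training conditions are made up for illustration.

```python
from statistics import mean

def one_way_anova(groups):
    """Return (F, eta-squared) for a list of condition score lists."""
    all_scores = [x for g in groups for x in g]
    grand = mean(all_scores)
    # Between-groups sum of squares: distance of each condition mean
    # from the grand mean, weighted by condition size.
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-groups sum of squares: variability inside each condition (error).
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    eta_squared = ss_between / (ss_between + ss_within)
    return f, eta_squared

# Hypothetical stress scores after time-management, relaxation, or no training.
time_mgmt = [4, 5, 3, 4, 5]
relax     = [5, 6, 5, 6, 5]
none      = [7, 8, 7, 6, 8]
f, eta2 = one_way_anova([time_mgmt, relax, none])
print(round(f, 2), round(eta2, 2))  # → 20.12 0.77
```

Note that a significant F here would only say the three condition means are not all equal; as the summary above stresses, you would still need post-hoc comparisons to say which means differ, and reporting eta-squared alongside F gives the magnitude of the effect, not just its significance.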