
# FINAL Stats Study Guide PSYCH-UA 10 - 001

NYU

## About this Document


This 9 page Study Guide was uploaded by Julia_K on Sunday May 8, 2016. The Study Guide belongs to PSYCH-UA 10 - 001 at New York University taught by Elizabeth A. Bauer in Spring 2016. Since its upload, it has received 129 views.


Statistics for the Behavioral Sciences FINAL Study Guide

## Chi-Square

Chi-square tests are nonparametric. Nonparametric tests:

- work with categorical (nominal/ordinal) data and use frequencies per category
- make few assumptions about the population distribution
- are less powerful than parametric tests

Chi-square distributions (X^2) are always positively skewed, and their shape depends on the degrees of freedom (df = k - 1, where k is the number of categories). Note: as df increases, the critical value also increases.

Values that fall inside the critical value indicate that the observed and expected frequencies are relatively similar. Values that fall past the critical value indicate that the observed and expected frequencies are different.

- **Goodness of fit:** used to determine how well observed frequencies match expected values.
- **Test of independence:** used to determine whether there is a significant association between two categorical variables (e.g., gender and voting preference).

## One-Way ANOVA (Analysis of Variance)

Used to compare more than 2 means in a way that reduces Type 1 error.

- **MS_between:** shows how far the group means are spread out from each other; the variability of the group means.
- **MS_within:** the average of the group variances; shows how far scores are generally spread out around their group means. Known as the error term.
- **F ratio:** F = MS_between / MS_within
- **Degrees of freedom:**
  - df_bet = k - 1 (k = number of groups)
  - df_w = N_T - k
  - df_total = N_T - 1 (N_T = total number of participants)

## One-Group T-Test Components

1. State the null (mu = 0) and alternative (mu ≠ 0) hypotheses.
2. Find the standard error.
3. Decide on z or t (use t if N < 40).
4. Degrees of freedom = n - 1. If t-calc > t-crit, reject the null; if t-calc < t-crit, fail to reject the null.
5. Confidence intervals estimate how confident you are that your results reflect the true population mean. The confidence interval should agree with your decision about the null.
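The one-way ANOVA components above can be sketched from scratch. The three groups of scores below are hypothetical example data, not from the course:

```python
from statistics import mean

# A minimal sketch of the one-way ANOVA components described above.
# The three groups of scores are hypothetical example data.
groups = [
    [4, 5, 6, 5],   # group 1
    [7, 8, 9, 8],   # group 2
    [5, 6, 7, 6],   # group 3
]

k = len(groups)                        # number of groups
N_T = sum(len(g) for g in groups)      # total number of participants
grand_mean = mean(x for g in groups for x in g)

# SS_between: variability of the group means around the grand mean
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
# SS_within: variability of scores around their own group means (error term)
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

df_between = k - 1        # df_bet = k - 1
df_within = N_T - k       # df_w = N_T - k

ms_between = ss_between / df_between
ms_within = ss_within / df_within
F = ms_between / ms_within             # F = MS_between / MS_within
print(round(F, 2))
```

Compare the resulting F to the critical value at (df_between, df_within) to decide about the null.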
If you rejected the null, the null value (0) should fall outside your confidence interval.

## Two-Group T-Test Components

1. State the null (mu1 - mu2 = 0) and alternative (mu1 - mu2 ≠ 0) hypotheses.
2. Find the variances (SS/df) and the standard error. Note: df = (n1 + n2) - 2.
3. Decide on the appropriate test to calculate t, asking these questions in order:
   a. Are both sample sizes large (each sample size > 100)? Yes: use the large-sample test for independent means (a z test). No: go to question b.
   b. Are the sample sizes equal? Yes: use the pooled-variances test for equal sample sizes. No: go to question c to check for homogeneity of variance (for there to be HoV, one variance must be no more than twice as large as the other).
   c. Can the population variances be assumed equal? Yes: use the pooled-variances test. No: use the separate-variances t-test.
   Note: (mu1 - mu2) in the formulas is replaced with 0.
4. If t-calc > t-crit, reject the null; if t-calc < t-crit, fail to reject the null.
5. The confidence interval should agree with your decision: if you rejected the null, the null value (0) should fall outside the interval.

## Matched T-Tests

Matched pairs take the correlation between 2 items into consideration ("items" can mean either 2 different subjects, or 1 subject tested twice). The point is to decrease the variability between the 2 items, which decreases error.

Matched t-test process:

- Say we use an independent t-test and end up failing to reject the null based on our results.
- This conclusion can still be wrong: if there was a lot of variability between the individual items, it can obscure the true pattern and prevent accurate results.
- So we turn this into a matched t-test (a test combining both items).
- The null: mu_D = 0. The alternative: mu_D ≠ 0.
- Find D-bar, the mean difference: the mean of the difference scores.
- Then find s_D-bar: the standard error of the mean difference.
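The pooled-variances branch of the decision tree above can be sketched as follows. The sample scores and the critical value are hypothetical examples:

```python
from statistics import mean

# A minimal sketch of the pooled-variances t test from the steps above.
# The sample data and the t critical value are hypothetical.
g1 = [10, 12, 11, 13, 14, 12]
g2 = [8, 9, 10, 9, 11, 7]

n1, n2 = len(g1), len(g2)
m1, m2 = mean(g1), mean(g2)

# SS = sum of squared deviations; variance = SS / df
ss1 = sum((x - m1) ** 2 for x in g1)
ss2 = sum((x - m2) ** 2 for x in g2)
df = n1 + n2 - 2                       # df = (n1 + n2) - 2

pooled_var = (ss1 + ss2) / df          # pooled variance
se = (pooled_var * (1 / n1 + 1 / n2)) ** 0.5
t_calc = (m1 - m2) / se                # (mu1 - mu2) replaced with 0

t_crit = 2.228                         # assumed two-tailed value, df = 10, alpha = .05
print("reject the null" if abs(t_calc) > t_crit else "fail to reject")
```

The equal sample sizes here are what justify pooling without the homogeneity-of-variance check.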
- The direct difference formula takes the mean of the difference scores (D-bar) and divides it by the standard error of the mean difference (s_D-bar). That gives the new calculated t.
- Then find the t critical value. Note: because we turned this into a one-sample test, we use df = n - 1.
- After that, we can make a new, more accurate comparison of t-calc and t-crit.
- Find the confidence interval for the difference of the 2 population means. APA style: "I am 95% confident that the interval from ____ to ____ contains the population mean difference in scores between ____ and ____."

Advantages and disadvantages of a matched t-test:

Pros:

- It has more power: a matched test means a higher correlation, which means a higher calculated t.
- We can subtract unnecessary (extraneous) variance, so we are left with the necessary (treatment) variance.

Cons:

- df goes down, which makes the null harder to reject.

## Counterbalancing and Designs

The two types of matched t-tests:

1. **Repeated measures (1 subject):** your go-to design; measuring the same person twice.
   - Simultaneous measurement: a random presentation of conditions.
   - Successive measurement: conditions are presented one after another. This raises the problem of order: what if order affects the results?
     - Before/after designs.
     - Counterbalancing: randomly varying the order of presentation. It gets rid of order problems but does NOT get rid of carryover effects (carrying the effects/impression from one condition into another; only time can fix this).
   - Pros: more power, because fewer subjects are needed. Cons: no control group.
2. **Matched pairs design (2 subjects):** comparing 2 conditions on different yet similar subjects (e.g., twins), but with less power.
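The direct-difference procedure above can be sketched as follows. The before/after scores are hypothetical:

```python
from statistics import mean, stdev

# A minimal sketch of the direct difference formula described above.
# The before/after scores are hypothetical repeated-measures data.
before = [20, 18, 25, 22, 19, 24]
after = [23, 19, 27, 26, 20, 27]

d = [a - b for a, b in zip(after, before)]   # difference scores
n = len(d)
d_bar = mean(d)                              # D-bar: mean of the difference scores
s_d_bar = stdev(d) / n ** 0.5                # standard error of the mean difference

t_calc = d_bar / s_d_bar                     # direct difference formula
df = n - 1                                   # one-sample test now
print(round(t_calc, 2), df)
```

Compare t_calc to the t critical value at df = n - 1, exactly as for a one-group test.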
## Post Hoc Tests: Pairwise Comparisons (LSD and HSD)

If you have a significant F and 3 groups, follow up with Fisher's LSD:

LSD = t_cv * sqrt(2 * MS_W / n)

Note: use df_w to find t_cv in the t table.

Checking for significance: if the difference between a pair of means is bigger than LSD, it is significant. For example, if LSD = 1.87:

- Counseling - Systematic Desensitization = 10 - 6 = 4 (Sig)
- Counseling - Counter Conditioning = 10 - 5 = 5 (Sig)
- Systematic Desensitization - Counter Conditioning = 6 - 5 = 1 (Not Sig)

But if you don't have a significant F and have more than 3 groups, follow up with pairwise Tukey's HSD:

HSD = q_cv * sqrt(MS_W / n)

Note: look up q_cv in table A.11 ("k" across the top, "df_w" down the side).

Checking for significance: first, compare ONLY the smallest and largest means.

- If the difference between them is greater than HSD, keep going and compare the other pairs of means.
- If not, stop.

With HSD, it is possible to find pairs of means significantly different even when the overall ANOVA is not significant.

## A Priori Test: Bonferroni t (Dunn's Test)

Alpha_PC = alpha / (number of comparisons); this is your adjusted alpha. Compare it to the significance values from the regular t tests in SPSS. Problem: Bonferroni is very conservative; it can make alpha really small.

## Complex Comparisons

Still comparing two things, but unlike pairwise comparisons, the comparison doesn't have to be between 2 single groups. It's a difference score involving group means, and we "weight" these means with coefficients.

1. Find L using the means, e.g., L = mu1 - (mu2 + mu3)/2. This also yields your coefficients.
2. To test this for significance, convert your L into an SS_contrast. Note: MS_contrast will always equal SS_contrast.
3. Calculate the F ratio to test the contrast: F = MS_contrast / MS_within.
4. Find F_cv at (1, df_within).
5. If F > F_cv and the comparison was planned, it is significant.
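The Fisher's LSD check above can be sketched as follows. The group means (10, 6, 5) echo the worked example, but MS_W, n, and t_cv are hypothetical stand-ins, so the LSD here differs from the 1.87 in the text:

```python
# A minimal sketch of the Fisher's LSD check described above. The group
# means (10, 6, 5) echo the worked example, but ms_w, n, and t_cv are
# hypothetical stand-ins, so the LSD differs from the text's 1.87.
ms_w = 4.9
n = 7                      # participants per group (hypothetical)
t_cv = 2.101               # assumed t critical value at df_w = 18

lsd = t_cv * (2 * ms_w / n) ** 0.5     # LSD = t_cv * sqrt(2 * MS_W / n)

means = {
    "Counseling": 10,
    "Systematic Desensitization": 6,
    "Counter Conditioning": 5,
}
names = list(means)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        diff = abs(means[names[i]] - means[names[j]])
        verdict = "Sig" if diff > lsd else "Not Sig"
        print(f"{names[i]} - {names[j]} = {diff} ({verdict})")
```

With these invented inputs the 4-point and 5-point differences clear the LSD and the 1-point difference does not, matching the pattern in the example.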
But if the comparison was not planned (you looked at the data first), use Scheffé's test instead of F_cv:

## Scheffé's F Test

F_S = (k - 1) * F_cv, where k = number of groups and F_cv is evaluated at (k - 1, N_T - k) degrees of freedom (N_T = total number of subjects).

Compare the initial F to F_S. If F > F_S, there is significance and you reject the null. Problems: it is very conservative, and you lose a lot of power with Scheffé's test, so only use it when comparisons haven't been planned.

## Two-Way ANOVA

A two-way ANOVA works with not just the main effects (e.g., Arousal and Task) but also the interaction between them.

Steps for completing a two-way ANOVA summary table (Task and Arousal example):

1. Degrees of freedom:
   - df_between = r(c) - 1
   - df_row = r - 1
   - df_column = c - 1
   - df_interaction = (r - 1)(c - 1)
   - df_within = N_T - r(c)
   - df_total = N_T - 1
2. MS_w (same formula as in one-way ANOVA, but k = number of cells).
3. SS_w = MS_w * df_w
4. SS_between = N_T * sigma^2 (of the cell means); list the cell means in your calculator and compute sigma^2.
5. SS_row = N_T * sigma^2 (of the row means)
6. MS_row = SS_row / df_row
7. SS_column = N_T * sigma^2 (of the column means)
8. MS_column = SS_column / df_column
9. SS_interaction = SS_between - (SS_row + SS_column)
10. MS_interaction = SS_interaction / df_interaction
11. Now ask: how much is due to Task, how much is due to Arousal, and how much is due to the interaction (the combination of those 2 factors)? Create F ratios, e.g., for low, medium, and high arousal:
    - F = MS_bet(low) / MS_w
    - F = MS_bet(med) / MS_w
    - F = MS_bet(high) / MS_w
12. F_cv (df_between, df_within), where df_between corresponds to its specific category.
13. Compare the F ratios to F_cv to find significance.
14. Is the interaction significant?
    - If the interaction is not significant and a significant main effect involves more than two levels, use post-hoc tests (either HSD or LSD).
    - If the interaction IS significant, do simple effects:
      - If you have a common 2x3 table, split it into 2 one-way ANOVAs.
      - First, find the differences in AROUSAL (columns): find the F ratio for both types of tasks, then follow up with LSD or HSD if the F ratio is significant.
      - Second, find the differences in TASK (rows).
To do this, subtract the different arousal-type means from each other and compare the differences to the LSD or HSD you calculated before.

## Interactions

An interaction occurs when the effects of one independent variable change across the levels of the other IV.

- Parallel lines = no interaction. If the lines are not parallel, you have an interaction (though you don't yet know whether it's significant).
- When the lines are not parallel but go in the same direction, it's called an ordinal interaction.
- When the lines crisscross, it's called a disordinal interaction (the effects shift).

As a rule, a significant interaction tells you not to take the main effects at face value, meaning there's some underlying relationship to consider.

## Power

- **Beta (B):** the probability of making a Type 2 error, taken from the alternative-hypothesis distribution. Failing to reject when you should have rejected.
- **Power (1 - B):** the probability that we correctly reject the null hypothesis and correctly find significant results; in other words, finding a difference when there really is a difference. We want bigger power (0.8 and higher is good power).
- **Delta:** the expected t value. It is found in table A.4 using power and alpha.
- **Effect size:** a measure of the overlap between two population distributions; the difference between the 2 population means in terms of standard deviations: d = (mu1 - mu2) / standard deviation.
  - Small effect size: 0.2
  - Medium: 0.5
  - Large: 0.8

## Type 1 and Type 2 Errors

Type 2 errors are more dangerous to make.

| What we CHOSE to do | Reality: fail to reject the null | Reality: reject the null |
| --- | --- | --- |
| Fail to reject null | Correct conclusion | Type 2 error |
| Reject null | Type 1 error | Correct conclusion |

## Coefficients of Determination and Non-determination

**Coefficient of determination (r^2):** how much of the total variance depends on the predictor variable; in other words, explaining the variability using the explained variance.
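The effect-size calculation and benchmarks from the Power section above can be sketched as follows. The population means and standard deviation are hypothetical numbers chosen for illustration:

```python
# A minimal sketch of the effect-size (d) calculation from the Power
# section above. The population means and standard deviation here are
# hypothetical numbers chosen for illustration.
mu1, mu2 = 104, 100
sigma = 15

d = abs(mu1 - mu2) / sigma   # (mu1 - mu2) / standard deviation

# The guide's benchmarks: 0.2 small, 0.5 medium, 0.8 large
if d >= 0.8:
    label = "large"
elif d >= 0.5:
    label = "medium"
elif d >= 0.2:
    label = "small"
else:
    label = "below small"
print(round(d, 2), label)
```

A larger effect size means less overlap between the two population distributions, and so more power for a given sample size.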
Note: this doesn't imply causality; it's about estimating how good our predictions are.

**Coefficient of non-determination (k^2, or 1 - r^2):** the portion of the total variance not accounted for by the predictor variable.

Note: r^2 + k^2 = 1.

## Correlation

**Perfect correlation:** if there is a change in variable x, there will be a proportional change in variable y. Correlation is NOT about 2 variables having the exact same values; it's about 2 variables having the same position/location in the data set. Z scores show the location of scores in a distribution, so perfectly correlated variables will have the same z scores.

**Pearson's correlation coefficient (r):** a measurement of linear correlation. r = +/-1 when there is a perfect correlation, but as r approaches 0, there is a lack of correlation (more error). Note: as the sample size increases, r becomes a more accurate representation of the relationship between the 2 variables.

Correlation values:

- 0.1 to 0.29 = small correlation
- 0.3 to 0.49 = medium correlation
- 0.5 and up = strong correlation

Correlation problems:

1. **Curvilinear relationships** follow a curved (e.g., bell-shaped) pattern. Since this isn't linear, it can't be measured by Pearson's r.
2. **Restricted/truncated range:** a reasonably strong r can weaken depending on the range you choose to look at in your distribution. To avoid this, make sure the sample is representative of the population (e.g., if you're measuring years of education correlating with cost, then 4 college years would yield different results than a full education measure).
3. **Bivariate outliers:** an extreme combination of variables (e.g., super tall and super wide). Outliers can change the shape of the envelope and weaken the correlation (reduce r).
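Pearson's r can be sketched from z-score products, echoing the point above that correlation is about matching positions (z scores) rather than matching values. The x and y data are hypothetical:

```python
from statistics import mean

# A minimal sketch of Pearson's r computed from z-score products.
# The x and y data are hypothetical example pairs.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 6]

n = len(x)
mx, my = mean(x), mean(y)
sx = (sum((v - mx) ** 2 for v in x) / n) ** 0.5   # population SD of x
sy = (sum((v - my) ** 2 for v in y) / n) ** 0.5   # population SD of y

zx = [(v - mx) / sx for v in x]
zy = [(v - my) / sy for v in y]
r = sum(a * b for a, b in zip(zx, zy)) / n        # mean z-score product

r2 = r ** 2          # coefficient of determination
k2 = 1 - r2          # coefficient of non-determination; r2 + k2 = 1
print(round(r, 3))
```

A perfectly correlated pair would have identical z scores, so every product z_x * z_y would be z^2 and r would come out exactly 1.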
