Exam 2 Study Guide PSYCH 2220 - 0020
This 9 page Study Guide was uploaded by Emma Dahlin on Tuesday October 27, 2015. The Study Guide belongs to PSYCH 2220 - 0020 at Ohio State University taught by Anna Yocom in Summer 2015. Since its upload, it has received 198 views. For similar materials see Data Analysis in Psychology in Psychlogy at Ohio State University.
PSYCH 2220 STUDY GUIDE: EXAM 2

Z / The Normal Curve
- Normal curve: a specific bell-shaped curve that is unimodal, symmetric, and defined mathematically.
- As sample size increases (approaches the size of the population), the distribution more and more closely resembles the normal curve.
- When data are normally distributed, we can compare a score to an entire distribution of scores by converting raw scores to standardized scores.
- The z distribution always has a mean of 0 and an SD of 1.
- Standardization: a way to convert individual scores from different normal distributions to a shared normal distribution with a known mean, standard deviation, and percentiles.
- z score: the number of standard deviations a particular score is from the mean: z = (X - μ) / σ
- Raw score formula: X = z(σ) + μ
- The normal curve also allows us to convert scores to percentiles, because 100% of the population is represented under the bell-shaped curve; the middle point is the 50th percentile.
- The standard z distribution allows us to:
  1. Transform raw scores into standardized scores (z scores)
  2. Transform z scores back into raw scores
  3. Compare z scores to each other
  4. Transform z scores into percentiles that can be more easily understood
- (Figure: normal curve showing roughly 34% of scores between the mean and 1 SD on each side, 14% between 1 and 2 SD, and 2% beyond 2 SD.)

Central Limit Theorem
- Refers to how a distribution of sample means is a more normal distribution than a distribution of scores.
- Repeated sampling approximates a normal curve, even when the original population is not normally distributed.
- The distribution of means is less variable and more tightly clustered (smaller SD) than the distribution of scores: spread decreases and outliers are eliminated.
- For hypothesis testing, the distribution of means is more useful than the distribution of scores.
- The distribution of means has 3 important characteristics:
  - μ_M = μ (same mean as the population of scores)
  - σ_M = the standard error, i.e., the standard deviation of the distribution of means. Because the distribution of means is narrower than the distribution of scores, it has a smaller SD (the standard error): σ_M = σ / √N
  - Its shape is approximately normal.
- z statistic: tells us how many standard deviations (standard errors) a sample mean is from the population mean: z = (M - μ_M) / σ_M
- The z table helps us transition from one way of naming a score to another.
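The score conversions above can be sketched in code. This is a minimal sketch; the function names are my own, not from the course:

```python
import math

def z_score(x, mu, sigma):
    """z = (X - mu) / sigma: how many SDs a raw score lies from the mean."""
    return (x - mu) / sigma

def raw_score(z, mu, sigma):
    """Invert the transformation: X = z * sigma + mu."""
    return z * sigma + mu

def standard_error(sigma, n):
    """SD of the distribution of means: sigma_M = sigma / sqrt(N)."""
    return sigma / math.sqrt(n)

# A score of 130 on a scale with mu = 100, sigma = 15:
print(z_score(130, 100, 15))    # 2.0
print(raw_score(2.0, 100, 15))  # 130.0
```

Note how the two transformations undo each other, which is exactly why the z table can be read in either direction.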
The z Table
- It provides the % of scores between a given z score and the mean, AND the % in the tail of the distribution.
- Negative z statistics are not included in the table; because the normal curve is symmetric (one side mirrors the other), all we have to do is change the sign from negative to positive.
- To get the percentile, we can simply add the % between the mean and z to the 50% that falls below the mean.
- To calculate the percent that is at least as extreme as the z score, simply take 2x the amount in the tail, because the other tail contains scores just as extreme.
- If you are trying to find a score from a percentile, look at the percent in the tail, which should be 100 - percentile.
  - Ex: a score in the 77th percentile would have 23% in the tail; look for this in the table to get the z.

Parametric vs. Nonparametric Tests
- Parametric tests: inferential statistical tests based on assumptions about a population.
- Nonparametric tests: inferential statistical tests not based on assumptions about the population.
- Three assumptions for hypothesis testing:
  1. The dependent variable is on a scale measure.
  2. Participants are randomly selected.
  3. The population must have an approximately normal distribution (usually okay if the sample > 30).

The Six Steps of Hypothesis Testing (we use the same six steps with each type of hypothesis test):
1. Identify the populations, distribution, and assumptions, and then choose the appropriate hypothesis test.
2. State the null and research hypotheses, in both words and symbolic notation.
3. Determine the characteristics of the comparison distribution.
4. Determine the critical values, or cutoffs, that indicate the points beyond which we will reject the null hypothesis.
5. Calculate the test statistic.
6. Decide whether to reject or fail to reject the null hypothesis.

One- vs. Two-Tailed Tests
- We use a two-tailed test when we are predicting a difference in EITHER direction (nondirectional).
- One-tailed tests are used for directional hypotheses: predicting either an increase or a decrease, but not both.
- Critical value: the test statistic value beyond which we reject the null hypothesis; often called the cutoff.
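The z-table lookups described above can be reproduced with the standard normal cumulative distribution function, which Python's standard library supports via `math.erf`. A minimal sketch (function names are mine):

```python
import math

def normal_cdf(z):
    """Proportion of the standard normal distribution falling below z
    (a stand-in for a z-table lookup)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def percentile_from_z(z):
    """Percentile = the 50% below the mean plus the % between the mean and z
    (for positive z); normal_cdf handles both signs automatically."""
    return 100.0 * normal_cdf(z)

def pct_at_least_as_extreme(z):
    """Two-tailed extremeness: 2x the proportion beyond |z|."""
    return 2.0 * (1.0 - normal_cdf(abs(z)))
```

For example, `percentile_from_z(1.96)` comes out to about 97.5, and `pct_at_least_as_extreme(1.96)` to about 0.05, matching the conventional cutoffs.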
- Critical region: the area in the tails of the comparison distribution in which the null hypothesis can be rejected.
- p level: the probability used to determine the critical values, or cutoffs, in hypothesis testing; often called alpha.
- Statistically significant: a finding is statistically significant if the data differ from what we would expect by chance if there were, in fact, no actual difference.
- Research convention is to set the cutoff at a p level of 0.05:
  - 2.5% in both tails, or 5% in one tail.
  - Critical values are -1.96 and +1.96.
  - There is a 5% or less chance we would find results this extreme if the null were true.
- (Figure: two-tailed normal curve with 47.50% between the mean and each cutoff, 2.50% in each tail, and critical values at -1.96 and +1.96.)
- If something is statistically significant, it means it is unlikely to have occurred by chance.
- Reject or fail to reject the null:
  - If the test statistic is more extreme than the critical value, reject.
  - If the test statistic is NOT more extreme, fail to reject.

Confidence Intervals
- Point estimate: a summary statistic (one number) used as an estimate of the population (ex: the mean).
- Interval estimate: based on our sample statistic, the range of sample statistics we would expect if we repeatedly sampled from the same population (i.e., a confidence interval).
- Confidence interval: an interval estimate that includes the mean we would expect for the sample statistic a certain percentage of the time were we to sample from the same population repeatedly.
  - Typically set at 95%.
  - The range around the mean when we add and subtract a margin of error.
  - Confirms the findings of hypothesis testing and adds more detail.
  - If you run a 95% confidence interval, it should match the results of a hypothesis test at p = .05; likewise, a 99% CI should match the results of a hypothesis test at p = .01.

Calculating Confidence Intervals
1. Draw a picture of the distribution that will include the confidence interval (use the sample mean).
2. Indicate the bounds of the CI on the drawing.
3. Determine the z statistics that fall at each line marking the middle 95%.
4. Turn the z statistics back into raw means.
5. Check that the CIs make sense.
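The reject / fail-to-reject rule can be written as a one-line decision, sketched here with the conventional two-tailed cutoff of 1.96 as a default (the function name is mine):

```python
def decide(test_statistic, critical_value=1.96):
    """Two-tailed decision at alpha = .05: reject H0 only if the test
    statistic is more extreme than the critical value in either tail."""
    if abs(test_statistic) > critical_value:
        return "reject the null hypothesis"
    return "fail to reject the null hypothesis"

print(decide(2.30))  # reject the null hypothesis
print(decide(1.50))  # fail to reject the null hypothesis
```

Using `abs()` captures "more extreme in either direction"; a one-tailed test would instead compare the signed statistic against a single cutoff.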
Example: IQ Scores
- IQ scores are designed to have a mean of 100 and a standard deviation of 15. A school psychologist is convinced that the mean IQ score of the high school seniors in her district is different from 100. She administered an IQ test to a random sample of 50 seniors in her district and found their mean IQ was 104.
- 95% CI: we want to get a range of values and see if it matches up with 100.
- Look up 2.5% in the table to get z scores of -1.96 and +1.96.
- (Figure: curve with 47.5% between the mean and each cutoff, 2.5% in each tail, and cutoffs at z = -1.96 and +1.96; standard error σ_M = 15/√50 ≈ 2.12.)
- How is a 99% CI different?
  - It would include more values in that interval.
  - The probability would be higher that the true mean is included.
- Step 4: Calculate the raw means (remember to use the sample mean and the standard error):
  - M_lower = -z(σ_M) + M_sample = -1.96(2.12) + 104 = 99.84
  - M_upper = +z(σ_M) + M_sample = +1.96(2.12) + 104 = 108.16
  - These are the values that mark off the upper and lower boundaries.
- Step 5: Is the sample mean in the middle, right in between the 2 values?
  - 104 is exactly in between 99.84 and 108.16.
- The 95% CI is [99.84, 108.16]: the probability is 95% that an interval such as 99.84 to 108.16 contains the true average IQ score.
- Her district could possibly have a mean IQ score of 100; it is possible they have the same exact mean.
  - This tells us to FAIL TO REJECT the null, because the means could be the same (aka not different).

Effect Size
- Just how big is the difference?
- Increasing sample size makes us more likely to find a statistically significant effect.
- Effect size is the size of the difference, unaffected by sample size; it allows standardization across studies.
- Represented on a graph: a bigger effect shows less overlap (less variability) between two different distributions.
- To increase effect size, we decrease the amount of overlap between two distributions:
  - Means are farther apart.
  - Variations within each population are smaller.
- Cohen's d estimates effect size; it uses the STANDARD DEVIATION instead of the standard error: d = (M - μ) / σ
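The IQ confidence-interval example above can be checked numerically. All values here come from the example itself:

```python
import math

# Population parameters and sample values from the IQ example.
mu, sigma = 100, 15
n, m_sample = 50, 104
z_crit = 1.96  # cutoffs for the middle 95% of the z distribution

sigma_m = sigma / math.sqrt(n)         # standard error, about 2.12
lower = -z_crit * sigma_m + m_sample   # about 99.84
upper = +z_crit * sigma_m + m_sample   # about 108.16
print(round(lower, 2), round(upper, 2))  # 99.84 108.16
```

Because 100 falls inside [99.84, 108.16], the CI agrees with the hypothesis-test conclusion: fail to reject the null.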
Cohen's Conventions for Effect Sizes (d): Jacob Cohen published guidelines, based on the overlap between two distributions, to help researchers determine whether an effect is small, medium, or large. These numbers are not cutoffs, merely rough guidelines to aid researchers in their interpretation of results:
- Small: d = 0.2
- Medium: d = 0.5
- Large: d = 0.8

- A small effect size means there is a lot of overlap between the 2 groups; likewise, a large effect size means there is not a lot of overlap between the 2 groups.
- Effect size tells you NOTHING about statistical significance: large effects might not be statistically significant.

Statistical Power
- A measure of our ability to reject the null hypothesis, given that the null is false.
- Calculated statistical power ranges from a probability of 0.00 to 1.00 (0% to 100%).
- It is the probability that we will:
  - Reject the null when we should.
  - Find an effect (difference) when it really exists.
  - Avoid a Type II error (β); so power = 1 - β.

Factors Affecting Power
1. Alpha level: a higher alpha increases power.
   - Potential problem: increases the chance of a Type I error.
2. One- or two-tailed test: a 1-tailed test increases power.
   - Potential problem: only helpful if CERTAIN of the direction of the effect.
3. Sample size and variability: a larger sample size and a smaller standard deviation (less variability) reduce noise and increase power.
4. Actual difference (effect size): increasing the difference between means (a stronger manipulation) gives a more pronounced effect.

- We want studies with high power because we want research that has a good chance of being successful (correctly rejecting H0). Ideally, a researcher only conducts a study when there is 80% statistical power; that is, at least 80% of the time the researcher will correctly reject the null hypothesis.

Ways to Increase Statistical Power
1. Increase alpha.
2. Turn a two-tailed hypothesis into a one-tailed hypothesis.
3. Increase N.
4. Exaggerate the mean difference between levels of the independent variable.
5. Decrease the standard deviation.

t Distributions
- Use when population parameters are NOT known; use them to estimate the population standard deviation from a sample.
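The key computational difference for t distributions is dividing by N - 1 instead of N when estimating the population spread. A minimal sketch of both formulas (function names are mine):

```python
import math

def sample_sd(scores):
    """SD = sqrt(sum((X - M)^2) / N): describes only the sample itself."""
    m = sum(scores) / len(scores)
    return math.sqrt(sum((x - m) ** 2 for x in scores) / len(scores))

def estimated_population_sd(scores):
    """s = sqrt(sum((X - M)^2) / (N - 1)): dividing by N - 1 corrects the
    tendency of small samples to underestimate the population's spread."""
    m = sum(scores) / len(scores)
    return math.sqrt(sum((x - m) ** 2 for x in scores) / (len(scores) - 1))
```

For the scores [2, 4, 6, 8], the sum of squared deviations is 20, so `sample_sd` gives √(20/4) ≈ 2.24 while `estimated_population_sd` gives the slightly larger √(20/3) ≈ 2.58.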
- t-distributions help us specify how confident we can be about research findings: we want to know if we can generalize from the sample to the larger population.
- Sample standard deviation: SD = sqrt( Σ(X - M)² / N )
- Estimated population standard deviation: s = sqrt( Σ(X - M)² / (N - 1) )
- Standard error: s_M = s / √N
- The t statistic: the distance of a sample mean from the population mean in terms of the estimated standard error: t = (M - μ_M) / s_M
- t statistics are more conservative than z statistics (less extreme).
- (Figure: the standard normal z distribution compared with t distributions for smaller samples, e.g., 8 individuals and 2 individuals; smaller samples give flatter, fatter-tailed curves.)
- As sample size increases, s approaches σ, and t and z become more equal: as the sample gets larger, the t-distribution begins to merge with the z-distribution, because we gain more confidence as more participants are added to the study.

The Single-Sample t Test
- Use when we know the population mean but NOT the standard deviation.
- Degrees of freedom: df = N - 1, where N is the sample size; the number of scores free to vary in the estimate of the population.
- Two-tailed tests are more conservative because you are putting less in each tail, and a smaller tail makes it harder to reject.

HYPOTHESIS TEST STEPS
1. Identify the populations, distributions, and assumptions.
2. State the null and research hypotheses.
3. Determine the characteristics of the comparison distribution.
4. Determine the critical values or cutoffs (using df).
5. Calculate the test statistic.
6. Make a decision.

CONFIDENCE INTERVAL STEPS
1. Draw a picture of a t-distribution that includes the confidence interval (center = sample mean).
2. Indicate the bounds of the confidence interval on the drawing (% in each tail).
3. Look up the t statistics that fall at each line marking the middle 95% (t critical).
4. Convert the t statistics back into raw means:
   - M_lower = -t(s_M) + M_sample
   - M_upper = +t(s_M) + M_sample
5. Verify that the confidence interval makes sense: the sample mean should fall exactly in the middle of the interval.

CALCULATING EFFECT SIZE
- Cohen's d is based on the spread of the distribution of individual scores, rather than the distribution of means.
- This tells us how many standard deviations apart the sample mean is from the population mean: d = (M - μ) / s
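The single-sample t test and its effect size can be pulled together in one sketch. The sample scores and function name below are my own illustration, not data from the course:

```python
import math

def single_sample_t(scores, mu):
    """Single-sample t test: t = (M - mu) / s_M, with the population SD
    estimated from the sample (denominator N - 1), so df = N - 1.
    Also returns Cohen's d = (M - mu) / s, which uses s rather than
    the standard error and so is unaffected by sample size."""
    n = len(scores)
    m = sum(scores) / n
    s = math.sqrt(sum((x - m) ** 2 for x in scores) / (n - 1))
    s_m = s / math.sqrt(n)          # estimated standard error
    t = (m - mu) / s_m              # test statistic
    d = (m - mu) / s                # effect size
    return t, n - 1, d

# Hypothetical sample of 5 IQ scores tested against mu = 100:
t, df, d = single_sample_t([101, 105, 98, 110, 96], 100)
# t ≈ 0.80, df = 4, d ≈ 0.36 (a smallish effect by Cohen's conventions)
```

Note how t shrinks toward z's behavior as N grows (s_M gets smaller), while d stays on the scale of individual scores.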