Psychological Statistics PSYC 2101
These 15 pages of class notes for PSYC 2101 at East Carolina University, taught by Karl Wuensch in Fall 2015, were uploaded by Lane Schuster on Sunday, October 11, 2015.
Bivariate Linear Correlation

One way to describe the association between two variables is to assume that the value of the one variable is a linear function of the value of the other variable. If this relationship is perfect, then it can be described by the slope-intercept equation for a straight line, Y = a + bX. Even if the relationship is not perfect, one may be able to describe it as nonperfect linear.

Scatter Plots

One way to describe a bivariate association is to prepare a scatter plot, a plot of all the known paired (X, Y) values (dots in Cartesian space). X is traditionally plotted on the horizontal dimension (the abscissa) and Y on the vertical (the ordinate). If all the dots fall on a straight line with a positive slope, the relationship is perfect positive linear: every time X goes up one unit, Y goes up b units. If all the dots fall on a negatively sloped line, the relationship is perfect negative linear.

[Figure: scatter plots of a perfect positive linear relationship and a perfect negative linear relationship.]

A linear relationship is monotonic (of one direction); that is, the slope of the line relating Y to X is either always positive or always negative. A monotonic relationship can, however, be nonlinear if the slope of the line changes magnitude, but not direction, as in the plots below.

Copyright 2009, Karl L. Wuensch. All rights reserved.

[Figure: scatter plots of a perfect positive monotonic relationship and a perfect negative monotonic relationship.]

Nonmonotonic Relationship

A nonlinear relationship may, however, not be monotonic, as where we have a quadratic relationship between level of test anxiety and performance on a complex cognitive task. We shall not cover in this course the techniques available to analyze such a relationship, such as polynomial regression.

[Figure: an inverted-U plot of performance against test anxiety.]

Do note that a linear relationship is a monotonic relationship, but a monotonic relationship is not necessarily a linear relationship. If I tell you that every time X goes up, Y also goes up, then you know the relationship is monotonic, but you do not know whether or not it is linear. Please read the document at http://core.ecu.edu/psyc/wuenschk/docs/IfAThenB.doc.

Of course, with real data the dots are not likely all to fall on any one simple line, but they may be approximately described by a simple line. We shall learn how to compute correlation coefficients that describe how well a straight line fits the data. If your plot shows that the line that relates X and Y is linear, you should use the Pearson correlation coefficient, discussed below. If the plot shows that the relationship is monotonic (not a straight line, but a line whose slope is always positive or always negative), you can use the Spearman correlation coefficient, discussed below. If your plot shows that the relationship is curvilinear but not monotonic, you need advanced techniques, such as polynomial regression, not covered in this class.

Let us imagine that variable X is the number of hamburgers consumed at a cookout and variable Y is the number of beers consumed. We wish to measure the relationship between these two variables and develop a regression equation that will enable us to predict how many beers a person will consume given that we know how many burgers that person will consume. Our data:

Subject    X (Burgers)   Y (Beers)    XY
   1            5            8        40
   2            4           10        40
   3            3            4        12
   4            2            6        12
   5            1            2         2
Sum            15           30       106
Mean            3            6
St. Dev.    1.581        3.162

[Figure: a scatter plot of these data.]

Covariance

One way to measure the linear association between two variables is covariance, an extension of the unidimensional concept of variance into two dimensions. The sum of squares cross products is

SSCP = Σ(X - Mx)(Y - My) = ΣXY - (ΣX)(ΣY)/N = 106 - (15)(30)/5 = 16.

If most of the dots in the scatter plot are in the lower left and upper right quadrants, most of the cross products will be positive, so SSCP will be positive: as X goes up, so does Y. If most are in the upper left and lower right quadrants, SSCP will be negative: as X goes up, Y goes down. Just as variance is an average sum of squares, SS/N, or, to estimate the population variance from sample data, SS/(N - 1), covariance is an average SSCP. We shall compute covariance as an estimate of that in the population from which our data were randomly sampled; that is, COV = SSCP/(N - 1) = 16/4 = 4.
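If you want to check this arithmetic, the SSCP and covariance can be computed in a few lines of Python (a sketch using only built-ins; the variable names are merely descriptive labels):

```python
# Hand-check of SSCP and covariance for the burger/beer data above.
burgers = [5, 4, 3, 2, 1]   # X
beers = [8, 10, 4, 6, 2]    # Y
n = len(burgers)

sum_x, sum_y = sum(burgers), sum(beers)
sum_xy = sum(x * y for x, y in zip(burgers, beers))

# SSCP = sum(XY) - (sum X)(sum Y) / N
sscp = sum_xy - sum_x * sum_y / n
# COV as an estimate of the population covariance: SSCP / (N - 1)
cov = sscp / (n - 1)
print(sscp, cov)  # 16.0 4.0
```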
A major problem with COV is that it is affected not only by the degree of linear relationship between X and Y but also by the standard deviations in X and in Y. In fact, the maximum absolute value of COVxy is the product σx·σy. Imagine that you and I each measured the height and weight of the individuals in our class and then computed the covariance between height and weight. You use inches and pounds, but I use miles and tons. Your numbers would be much larger than mine, so your covariance would be larger than mine, but the strength of the relationship between height and weight should be the same for both of our data sets. We need to standardize the unit of measure of our variables.

Computing Pearson r

We can get a standardized index of the degree of linear association by dividing COV by the two standard deviations, removing the effect of the two univariate standard deviations. This index is called the Pearson product-moment correlation coefficient (r for short) and is defined as

r = COVxy / (sx·sy) = 4 / (1.581 x 3.162) = .80.

Pearson r may also be defined as a mean cross product of z scores, r = Σ(Zx·Zy)/N, where the z scores are computed using population standard deviations. Pearson r may also be computed as

r = SSCP / sqrt(SSx·SSy) = 16 / sqrt((4)(1.581²)(4)(3.162²)) = 16 / sqrt(10 x 40) = .80.

Pearson r will vary from -1 through 0 to +1. If r = +1, the relationship is perfect positive, and every pair of (X, Y) scores has Zx = Zy. If r = 0, there is no linear relationship. If r = -1, the relationship is perfect negative, and every pair of (X, Y) scores has Zx = -Zy.

If we have (X, Y) data sampled randomly from some bivariate population of interest, we may wish to test H0: ρ = 0, the null hypothesis that the population correlation coefficient (rho) is zero; that is, X and Y are independent of one another, and there is no linear association between X and Y. This is quite simply done with Student's t:

t = r·sqrt(n - 2) / sqrt(1 - r²) = .8·sqrt(3) / sqrt(1 - .64) = 2.309, with df = n - 2 = 3.

You should remember that we used this formula earlier to demonstrate that the independent-samples t test is just a special case of a correlation analysis: if one of the variables is dichotomous and the other continuous, computing the point-biserial r and testing its significance is absolutely equivalent to conducting an independent-samples t test. Keep this in mind when someone tells you that you can make causal inferences from the results of a t test but not from the results of a correlation analysis; the two are mathematically identical, so it does not matter which analysis you did. What does matter is how the data were collected. If they were collected in an experimental manner, manipulating the independent variable with adequate control of extraneous variables, you can make a causal inference. If they were gathered in a nonexperimental manner, you cannot.

Putting a Confidence Interval on r or r²

It is a good idea to place a confidence interval around the sample value of r or r², but it is tedious to compute by hand. Fortunately, there is now available a free program for constructing such confidence intervals. Please read my document "Putting Confidence Intervals on R² or R." For our beer and burger data, a 95% confidence interval for r extends from -.28 to .99. This should be reported in the summary statement.

Reporting Pearson r

For our beer and burger data, our APA summary statement could read like this: The correlation between my friends' burger consumption and their beer consumption fell short of statistical significance, r(n = 5) = .80, p = .10. A 95% confidence interval for ρ runs from -.28 to .99. For some strange reason, the value of the computed t is not generally given when reporting a test of the significance of a correlation coefficient. You might want to warn your readers that a Type II error is quite likely here, given the small sample size. Were the result significant, your summary statement might read something like this: Among my friends, burger consumption was significantly positively related to beer consumption.
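The r of .80, the t of 2.309, and the claimed equivalence between the point-biserial r and the pooled-variances t test can all be verified numerically. A minimal sketch using only the Python standard library; the two small groups in the second half are made-up toy data, not from the text:

```python
import math
import statistics as stats

# 1) Pearson r and its t for the burger/beer data.
x = [5, 4, 3, 2, 1]
y = [8, 10, 4, 6, 2]
n = len(x)
sscp = sum(a * b for a, b in zip(x, y)) - sum(x) * sum(y) / n
r = sscp / ((n - 1) * stats.stdev(x) * stats.stdev(y))
t = r * math.sqrt(n - 2) / math.sqrt(1 - r * r)
print(round(r, 2), round(t, 3))  # 0.8 2.309

# 2) With a dichotomous X (0/1 group codes), the t from testing rho = 0
#    equals the pooled-variances independent-samples t (toy data).
g0, g1 = [3.0, 5.0, 4.0, 6.0], [7.0, 9.0, 8.0, 6.0]
n0, n1 = len(g0), len(g1)
sp2 = ((n0 - 1) * stats.variance(g0)
       + (n1 - 1) * stats.variance(g1)) / (n0 + n1 - 2)
t_pooled = (stats.mean(g1) - stats.mean(g0)) / math.sqrt(sp2 * (1/n0 + 1/n1))

codes = [0] * n0 + [1] * n1          # the dichotomous "group" variable
scores = g0 + g1
m = len(scores)
mc, ms = stats.mean(codes), stats.mean(scores)
sscp_pb = sum((a - mc) * (b - ms) for a, b in zip(codes, scores))
r_pb = sscp_pb / ((m - 1) * stats.stdev(codes) * stats.stdev(scores))
t_from_r = r_pb * math.sqrt(m - 2) / math.sqrt(1 - r_pb * r_pb)
print(abs(t_pooled - t_from_r) < 1e-9)  # True
```

The two t values agree to machine precision, which is the point of the equivalence claim above.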
Assumptions When Testing Hypotheses About ρ or Putting a Confidence Interval on r

There are no assumptions if you are simply using the correlation coefficient to describe the strength of the linear association between X and Y in your sample. If, however, you wish to use t or F to test hypotheses about ρ, or to place a confidence interval about your estimate of ρ, there are assumptions.

Bivariate Normality. It is assumed that the joint distribution of (X, Y) is bivariate normal. To see what such a distribution looks like, try the Java applet at http://ucs.kuleuven.be/java/version2.0/Applet030.html. Use the controls to change various parameters and rotate the plot in three-dimensional space. In a bivariate normal distribution the following will be true:
1. The marginal distribution of Y ignoring X will be normal.
2. The marginal distribution of X ignoring Y will be normal.
3. Every conditional distribution of Y|X will be normal.
4. Every conditional distribution of X|Y will be normal.

Homoscedasticity.
1. The variance in the conditional distributions of Y|X is constant across values of X.
2. The variance in the conditional distributions of X|Y is constant across values of Y.

Shrunken r²

Please note that these procedures require the same assumptions made for testing the null hypothesis that ρ is zero. There are, however, no assumptions necessary to use r as a descriptive statistic to describe the strength of the linear association between X and Y in the data you have. For a relatively unbiased estimate of the population r², requiring no assumptions, compute the shrunken r²:

shrunken r² = 1 - (1 - r²)(n - 1)/(n - 2) = 1 - (1 - .64)(4)/(3) = .52.

This corrects for the tendency to get overestimates of ρ² from small samples. What is the value of r if n = 2? How well can you fit any two points in Cartesian space with a straight line? See my document "What is R² When N = p + 1 and df = 0?" for the answer to this question.
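The shrinkage arithmetic above is a one-liner to check:

```python
# Shrunken (adjusted) r-squared for the burger/beer data: r^2 = .64, n = 5.
r2, n = 0.64, 5
shrunken_r2 = 1 - (1 - r2) * (n - 1) / (n - 2)
print(round(shrunken_r2, 2))  # 0.52
```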
Spearman rho

When one's data are ranks, one may compute the Spearman correlation for ranked data, also called the Spearman ρ, which is computed and significance-tested exactly as is Pearson r (if n < 10, find a special table for testing the significance of the Spearman ρ). The Spearman ρ measures the linear association between pairs of ranks. If one's data are not ranks, but one converts the raw data into ranks prior to computing the correlation coefficient, the Spearman ρ measures the degree of monotonicity between the original variables. If every time X goes up, Y goes up (that is, the slope of the line relating Y to X is always positive), there is a perfect positive monotonic relationship, but not necessarily a perfect linear relationship, for which the slope would have to be constant. Consider the following data:

X:  1.0   1.9   2.0    2.9    3.0    3.1     4.0     4.1       5.0
Y:   10    99   100    999   1000   1001   10000   10001   100000

I used SPSS to plot these data and to compute the simple Pearson r between X and Y, between X and the base-10 log of Y, and between the rank of X and the rank of Y (the Spearman ρ). Here is the output:

Pearson r between X and Y:         .678, two-tailed p = .045, N = 9
Pearson r between X and log10(Y):  .999, two-tailed p < .001, N = 9
Spearman ρ:                       1.000, N = 9

[Figure: SPSS scatter plot of Y against X, nearly flat at first and then shooting upward.]

As you can see, the relationship between X and Y is perfectly monotonic and nearly perfectly exponential (the correlation between X and the log of Y is almost perfect), so the Spearman coefficient is 1.000. The Pearson linear coefficient does not really adequately describe how strongly X and Y are related.

How Do Behavioral Scientists Use Correlation Analyses?

1. To measure the linear association between two variables without establishing any cause-effect relationship.

2. As a necessary, and suggestive, but not sufficient, condition to establish causality (see the online document "When Does Correlation Imply Causation?"). If changing X causes Y to change, then X and Y must be correlated, but the correlation is not necessarily linear. X and Y may, however, be correlated without X causing Y. It may be that Y causes X. Maybe increasing Z causes increases in both X and Y, producing a correlation between X and Y with no cause-effect relationship between X and Y.
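Returning for a moment to the Spearman example above: since the Spearman coefficient is just Pearson r computed on ranks, the SPSS output is easy to reproduce. A minimal sketch using only the standard library; these data contain no ties, so simple integer ranks suffice (with ties you would assign mean ranks):

```python
import statistics as stats

def ranks(values):
    # Rank 1 = smallest value. These data have no ties, so a plain
    # ordinal ranking is all we need.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def pearson(xs, ys):
    n = len(xs)
    mx, my = stats.mean(xs), stats.mean(ys)
    sscp = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sscp / ((n - 1) * stats.stdev(xs) * stats.stdev(ys))

x = [1.0, 1.9, 2.0, 2.9, 3.0, 3.1, 4.0, 4.1, 5.0]
y = [10, 99, 100, 999, 1000, 1001, 10000, 10001, 100000]

r_linear = pearson(x, y)            # about .678, as in the SPSS output
rho = pearson(ranks(x), ranks(y))   # Spearman rho: perfectly monotonic data
print(round(r_linear, 3), round(rho, 3))  # 0.678 1.0
```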
For example, smoking cigarettes is well known to be correlated with health problems in humans, but we cannot do experimental research on the effect of smoking upon humans' health. Experimental research with rats has shown a causal relationship, but we are not rats. One alternative explanation of the correlation between smoking and health problems in humans is that there is a third variable, or constellation of variables (genetic disposition or personality), that is causally related to both smoking and the development of health problems. That is, if you have this disposition, it causes you to smoke and it causes you to have health problems, creating a spurious correlation between smoking and health problems; the disposition that caused the smoking would have caused the health problems whether or not you smoked. No, I do not believe this model, but the data on humans cannot rule it out.

As another example of a third-variable problem, consider the strike by PATCO, the union of air traffic controllers, back during the Reagan years. The union cited statistics showing that air traffic controllers had a much higher than normal incidence of stress-related illnesses (hypertension, heart attacks, drug abuse, suicide, divorce, etc.). They said that this was caused by the stress of the job and demanded better benefits to deal with the stress (no mandatory overtime, rotation between high-stress and low-stress job positions, etc.). The government crushed the strike (fired all the controllers), invoking a third-variable explanation of the observed correlation between working in air traffic control and these illnesses. They said that the air traffic controller profession attracted persons of a certain disposition, Type A individuals who are perfectionists and who seem always to be under time pressure, and that these individuals would get those illnesses whether they worked in air traffic control or not. Accordingly, the government said, the problem was the fault of the individuals, not the job. Maybe the government would
prefer that we hire only Type B controllers, folks who take it easy and don't get so upset when they see two blips converging on the radar screen.

3. To establish an instrument's reliability. A reliable instrument is one which will produce about the same measurements when the same objects are measured repeatedly, in which case the scores at one time should be well correlated with the scores at another time (and have equivalent means and variances as well).

4. To establish an instrument's criterion-related validity. A valid instrument is one which measures what it says it measures. One way to establish such validity is to show that there is a strong positive correlation between scores on the instrument and an independent measure of the attribute being measured. For example, the Scholastic Aptitude Test was designed to measure individuals' ability to do well in college. Showing that scores on this test are well correlated with grades in college establishes the test's validity.

5. To do independent-groups t tests. If the independent variable, X (groups), is coded 0, 1 (or any other two numbers) and X is correlated with the dependent variable Y, a significance test of the hypothesis that ρ = 0 will yield exactly the same t and p as the traditional pooled-variances independent-groups t test. In other words, the independent-groups t test is just a special case of correlation analysis, where the X variable is dichotomous. The r is called a point-biserial r. It can also be shown that the 2 x 2 Pearson chi-square test is a special case of r; when both X and Y are dichotomous, the r is called phi.

6. One can measure the correlation between Y and an optimally weighted set of two or more X's. Such a correlation is called a multiple correlation. A model with multiple predictors might well predict a criterion variable better than would a model with just a single predictor variable. Consider the research reported by McCammon, Golden, and Wuensch in the Journal of Research in Science Education, 1988, 25, 501-510. Subjects were students in
freshman- and sophomore-level physics courses, only those courses that were designed for science majors (no general education "football physics" courses). The mission was to develop a model to predict performance in the course. The predictor variables were CT, the Watson-Glaser Critical Thinking Appraisal; PMA, Thurstone's Primary Mental Abilities Test; ARI, the College Entrance Exam Board's Arithmetic Skills Test; ALG, the College Entrance Exam Board's Elementary Algebra Skills Test; and ANX, the Mathematics Anxiety Rating Scale. The criterion variable was subjects' scores on course examinations. Our results indicated that we could predict performance in the physics classes much better with a combination of these predictors than with any one of them alone. At Susan McCammon's insistence, I also separately analyzed the data from female and male students. Much to my surprise, I found a remarkable sex difference: among female students every one of the predictors was significantly related to the criterion; among male students none of the predictors was. A posteriori searching of the literature revealed that Anastasi (Psychological Testing, 1982) had noted a relatively consistent finding of sex differences in the predictability of academic grades, possibly due to women being more conforming and more accepting of academic standards (better students), so that women put maximal effort into their studies whether or not they like the course, and accordingly they work up to their potential. Men, on the other hand, may be more fickle, putting forth maximum effort only if they like the course, thus making it difficult to predict their performance solely from measures of ability. ANOVA, which we shall cover later, can be shown to be a special case of multiple correlation/regression analysis.

7. One can measure the correlation between an optimally weighted set of Y's and an optimally weighted set of X's. Such an analysis is called canonical correlation, and almost all inferential statistics in common use can be shown to be special
cases of canonical correlation analysis. As an example of a canonical correlation, consider the research reported by Patel, Long, McCammon, and Wuensch in the Journal of Interpersonal Violence, 1995, 10, 354-366. We had two sets of data on a group of male college students. The one set was personality variables from the MMPI. One of these was the Pd (psychopathically deviant) scale, Scale 4, on which high scores are associated with general social maladjustment and hostility. The second was the Mf (masculinity/femininity) scale, Scale 5, on which low scores are associated with stereotypical masculinity. The third was the Ma (hypomania) scale, Scale 9, on which high scores are associated with overactivity, flight of ideas, low frustration tolerance, narcissism, irritability, restlessness, hostility, and difficulty with controlling impulses. The fourth MMPI variable was Scale K, which is a validity scale on which high scores indicate that the subject is clinically defensive, attempting to present himself in a favorable light, and low scores indicate that the subject is unusually frank. The second set of variables was a pair of homonegativity variables. One was the IAH, the Index of Attitudes Towards Homosexuals, designed to measure affective components of homophobia. The second was the SBS, the Self-Report of Behavior Scale, designed to measure past aggressive behavior towards homosexuals, an instrument specifically developed for this study. Our results indicated that high scores on the SBS and the IAH were associated with stereotypical masculinity (low Scale 5), frankness (low Scale K), impulsivity (high Scale 9), and general social maladjustment and hostility (high Scale 4). A second relationship found showed that having a low IAH but a high SBS (not being homophobic but nevertheless aggressing against gays) was associated with being high on Scale 5 (not being stereotypically masculine) and Scale 9 (impulsivity). This relationship seems to reflect a general (not directed towards homosexuals) aggressiveness; in the words of one of my graduate students, being an equal opportunity bully.
Factors Which Can Affect the Size of r

Range restrictions. If the range of X is restricted, r will usually fall (it can rise if X and Y are related in a curvilinear fashion and a linear correlation coefficient has inappropriately been used). This is very important when interpreting criterion-related validity studies, such as one correlating entrance exam scores with grades after entrance.

Extraneous variance. Anything causing variance in Y but not in X will tend to reduce the correlation between X and Y. For example, with a homogeneous set of subjects all run under highly controlled conditions, the r between alcohol intake and reaction time might be 0.95, but if subjects were very heterogeneous and testing conditions variable, r might be only 0.50. Alcohol might still have just as strong an effect on reaction time, but the effects of many other extraneous variables (such as sex, age, health, time of day, day of week, etc.) upon reaction time would dilute the apparent effect of alcohol as measured by r.

Interactions. It is also possible that extraneous variables might interact with X in determining Y. That is, X might have one effect on Y if Z = 1 and a different effect if Z = 2. For example, among experienced drinkers (Z = 1), alcohol might affect reaction time less than among novice drinkers (Z = 2). If such an interaction is not taken into account by the statistical analysis (a topic beyond the scope of this course), the r will likely be smaller than it otherwise would be.

Power Analysis

Power analysis for r is exceptionally simple: δ = ρ·sqrt(n - 1), assuming that the df are large enough for t to be approximately normal. Cohen's benchmarks for effect sizes for r are: .10 is small (but not necessarily trivial), .30 is medium, and .50 is large (Cohen, J. A power primer. Psychological Bulletin, 1992, 112, 155-159). For our burger/beer data, how much power would we have if the effect size was large in the population, that is, ρ = .50? δ = .5·sqrt(4) = 1.00. From our power table, using the traditional .05 criterion of significance, we then see that power is only .17.
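The power arithmetic, and the sample-size calculation that follows, amount to two one-liners (δ = 3.60 is the tabled value for 95% power with a two-tailed .05 test):

```python
import math

# delta = rho * sqrt(n - 1): the effect-size parameter for the power table.
rho, n = 0.50, 5
delta = rho * math.sqrt(n - 1)        # 1.0 -> tabled power is only about .17

# Required n for 95% power to detect a small effect (rho = .10):
# invert the formula, using delta = 3.60 (tabled value for power = .95).
needed_n = (3.60 / 0.10) ** 2 + 1
print(delta, round(needed_n))  # 1.0 1297
```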
As stated earlier, a Type II error is quite likely here. How many subjects would we need to have 95% power to detect even a small effect? Lots: n = (δ/ρ)² + 1 = (3.60/.10)² + 1 = 1297. That is a lot of burgers and beer.

Two-Way Independent Samples ANOVA with SPSS

Obtain the file ANOVA2.SAV on my SPSS Data page. The data are those that appear in Table 17.3 of Howell's Fundamental Statistics for the Behavioral Sciences (6th ed.) and in Table 13.2 of Howell's Statistical Methods for Psychology (6th ed.). The independent variables are age of participant (young or old) and depth of cognitive processing (manipulated by the instructions given to participants prior to presentation of a list of words). The dependent variable is the number of words correctly recalled later.

Bring the data file ANOVA2.SAV into SPSS. To conduct the factorial analysis, click Analyze, General Linear Model, Univariate. Scoot Items into the Dependent Variable box and Age and Condition into the Fixed Factors box. Click Plots and scoot Condition into the Horizontal Axis box and Age into the Separate Lines box. Click Add, then Continue. Click Post Hoc and scoot Condition into the "Post Hoc Tests for" box. Check REGWQ and click Continue. Click Options, check Descriptive Statistics and Estimates of Effect Size, and click Continue. Click OK.

Look at the plot. The plot makes it pretty clear that there is an interaction here. The difference between the oldsters and the youngsters is quite small when the experimental condition is one with little depth of cognitive processing (counting or rhyming), but much greater with higher levels of depth of cognitive processing. With the youngsters, recall performance increases with each increase in depth of processing. With the oldsters, there is an interesting dip in performance in the intentional condition. Perhaps that is a matter of motivation, with the oldsters just refusing to follow instructions that ask them to memorize a silly list of words.

Do note that the means plotted here are least squares means; SPSS calls
them estimated means. For our data, these are the same as the observed means, because we had the same number of scores in each cell of our design. If we had unequal numbers of scores in our cells, then our independent variables would be correlated with one another, and the observed means would be "contaminated" by the correlations between the independent variables. The estimated means represent an attempt to estimate what the cell means would be if the independent variables were not correlated with one another. These estimated means are also available in the Options dialog box.

Look at the output from the omnibus ANOVA. We generally ignore the F for the "Corrected Model," that is, the F that would be obtained if we were to do a one-way ANOVA in which the groups are our cells. Here it simply tells us that our cell means differ significantly from one another. The two-way factorial ANOVA is really just an orthogonal partitioning of the treatment variance from such a one-way ANOVA: that variance is partitioned into three components, the two main effects and the one interaction. We also ignore the test of the intercept, which tests the null hypothesis that the mean of all the scores is zero.

If you divide each effect's SS by the total SS, you see that the condition effect accounts for a whopping 57% of the total variance, with the age effect accounting for only 9% and the interaction for only 7%. Despite the fact that all three of these effects are statistically significant, one really should keep that in mind and point out to the readers of the research report that the age and interaction effects are much smaller in magnitude than the effect of recall condition (depth of processing).

Look at the within-cell standard deviations. In the textbook Howell says that "it is important to note that the data themselves are approximately normally distributed with acceptably equal variances." I beg to differ (Fmax = 4.5²/1.4² > 10), but I am going to ignore
that here. The interpretation of the effect of age is straightforward: the youngsters recalled significantly more items than did the oldsters (3.1 items more, on average). The pooled within-age standard deviation is computed by taking the square root of the mean of the two groups' variances: s_pooled = 4.977. The standardized difference, d, is then 3.1/4.977 = .62. Using Cohen's guidelines, that is a medium-to-large effect. In terms of percentage of variance explained,

η² = SS_Age / SS_Corrected Total = 240.25 / 2667.79 = .09.

The interpretation of the recall condition means is also pretty simple: with greater depth of processing, recall is better, but the difference between the intentional condition and the imagery condition is too small to be significant, as is the difference between the rhyming condition and the counting condition. The pooled standard deviation within the intentional and counting conditions is s_pooled = 3.65, so the standardized effect size d for the difference between those two conditions (15.65 - 6.75 = 8.90) is 8.90/3.65 = 2.44, an enormous effect. In terms of percentage of variance explained by recall condition, η² = .57.

Although the significant interaction effect is small (η² = .07) compared to the main effect of recall condition, we shall investigate it by examining simple main effects. For pedagogical purposes, we shall obtain the simple main effects of age at each level of recall condition, as well as the simple main effects of recall condition for each age. Notice that SPSS gives you values of partial eta-squared; note also that these can sum to more than 100% of the variance. If you want to place confidence intervals on the obtained values of eta-squared, you must compute an adjusted F for each effect, as I have shown you elsewhere. To place confidence intervals on partial eta-squared, you need only the F and df values that SPSS reports; the confidence intervals reported below were obtained with the NoncF script.

Return to the Data Editor. Click Data, Split File. Tell SPSS to organize the output by groups based on the Condition variable. OK. Click Analyze, Compare Means, One-Way ANOVA. Scoot Items into the Dependent List and Age into the Factor box. OK.
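Before looking at those results, the effect-size arithmetic above can be verified in a few lines; every number here is taken from the text and the SPSS output quoted in it:

```python
# Effect-size arithmetic for the two-way ANOVA section.
d_age = 3.1 / 4.977            # age mean difference / pooled within-age SD
eta2_age = 240.25 / 2667.79    # SS_Age / SS_Corrected_Total
d_condition = 8.9 / 3.65       # (intentional - counting means) / pooled SD
print(round(d_age, 2), round(eta2_age, 2), round(d_condition, 2))
# 0.62 0.09 2.44
```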
The results show that the youngsters recalled significantly more items than did the oldsters at the higher levels of processing (adjective, imagery, and intentional), but not at the lower levels (counting and rhyming). The tests we have obtained here employ individual error terms, each based on only 18 df. If heterogeneity of variance is not a problem, we might want to get a little more power by using a pooled error term. What we would have to do is take the treatment MS for each of these tests, divide it by the error MS from the overall factorial analysis, and evaluate each resulting F with the same error df used in the overall ANOVA. Our error df would then be 90 instead of 18, which would give us a little more power.

Return to the Data Editor. Click Data, Split File. Tell SPSS to organize the output by groups based on the Age variable. OK. Click Analyze, Compare Means, One-Way ANOVA. Leave Items in the Dependent List and replace Age with Condition in the Factor box. OK. Note that the effect of Condition is significant for both age groups, but is larger in magnitude for the youngsters (η² = .83) than for the oldsters (η² = .45). Among the oldsters, mean recall in the imagery condition was greater than mean recall in the adjective and intentional conditions.

Writing up the Results

Here is an example. A 2 x 5 factorial ANOVA was employed to determine the effects of age group and recall condition on participants' recall of the items. A .05 criterion of statistical significance was employed for all tests. The main effects of age, F(1, 90) = 29.94, p < .001, η² = .25, CI.95 = .11, .38, and recall condition, F(4, 90) = 47.19, p < .001, η² = .68, CI.95 = .55, .74, were statistically significant, as was their interaction, F(4, 90) = 5.93, p < .001, η² = .21, CI.95 = .05, .32 (MSE = 8.03 for each effect). Overall, younger participants recalled more items (M = 13.16) than did older participants (M = 10.06). The REGWQ procedure was employed to conduct pairwise comparisons on the marginal means for recall condition. As shown in the table below, recall was better for the conditions which involved greater depth of processing than for the
conditions that involved less cognitive processing.

Table 1. The Main Effect of Recall Condition

Recall Condition:   Counting   Rhyming   Adjective   Imagery   Intentional
Mean:                 6.75A     7.25A      12.90B     15.50C       15.65C

Note: Means sharing a letter in their superscript are not significantly different from one another according to REGWQ tests.

The interaction is displayed in the figure below. Recall condition had a significant simple main effect in both the younger participants, F(4, 45) = 53.06, MSE = 6.38, p < .001, η² = .83, CI.95 = .70, .87, and the older participants, F(4, 45) = 9.08, MSE = 9.68, p < .001, η² = .45, CI.95 = .18, .57, but the effect was clearly stronger in the younger participants than in the older participants. The younger participants recalled significantly more items than did the older participants in the adjective condition, F(1, 18) = 7.85, MSE = 9.2, p = .012, η² = .30, CI.95 = .02, .55, the imagery condition, F(1, 18) = 6.54, MSE = 13.49, p = .020, η² = .27, CI.95 = .005, .52, and the intentional condition, F(1, 18) = 25.23, MSE = 10.56, p < .001, η² = .58, CI.95 = .23, .74, but the effect of age fell well short of significance in the counting condition, F(1, 18) = 0.46, MSE = 2.69, p = .50, η² = .03, CI.95 = .00, .25, and in the rhyming condition, F(1, 18) = 0.59, MSE = 4.18, p = .45, η² = .03, CI.95 = .00, .27.

[Figure: Recall in Young and Old Participants. Mean items recalled plotted against recall condition (counting, rhyming, adjective, imagery, intentional), with separate lines for the young and old participants.]

Copyright 2007, Karl L. Wuensch. All rights reserved.