Quantitative Methods of Social Research (SOC 355)
These 3-page class notes were uploaded by Ryan Watsica on Monday, October 5, 2015. The notes belong to SOC 355 at California State University - Long Beach, taught by Staff in Fall. Since upload, they have received 12 views. For similar materials see /class/218743/soc-355-california-state-university-long-beach in Sociology at California State University - Long Beach.
Soc 355, Methods of Social Research (Hicks Marlowe)

STATISTICS: THE BIGGER PICTURE

Effective use of statistical techniques depends on your ability to understand the material you learned in your statistics class on two rather different levels. The first level deals with the detailed analysis and calculation of specific procedures and is the subject matter of statistics classes. This information is obviously necessary to provide you with an understanding of what the formulas are actually doing to the data. However, in conducting research you will seldom calculate even the simplest descriptive statistical procedures by hand. Calculators, spreadsheets, and particularly statistical software programs like SPSS (Statistical Package for the Social Sciences) handle this detailed work much more effectively and efficiently than you ever could.

The second and broader level provides the logic and rationale underlying the use and structure of the specific techniques. That is, it helps you understand (1) why the operations used most commonly in statistical procedures are necessary for accurately describing group data and making valid decisions about them, and (2) why statistical measures are put together the way they are. Let me briefly summarize this logic and rationale.

The basic objectives of statistical techniques and operations are to (1) describe group data in the most compact and reliable way and (2) make valid comparisons of these data. The principles most commonly used in accomplishing these goals are condensation, representation, and standardization. In descriptive statistics, the first goal has resulted in frequency distributions of various kinds as a basic unit for representing a collection of individual scores. Since frequency distributions may vary in a number of ways (e.g., how many scores are being considered, the numerical range of the scores, how similar the individual scores are to one another), a first problem is to develop some common way to represent them. The most basic ways frequency distributions
differ are in the dimensions of central tendency, variability, and shape. In your statistics class you discussed a variety of specific summary measures that allow us to describe the first two of these dimensions, as well as ways to graphically display the shape of a distribution, but the measures used most often are the mean and some version of the idea of standard deviation.

Having identified which general characteristics of distributions may be most useful to compare, we still have the problem of how to use them to make valid comparisons between distributions. You would expect, for example, that a distribution based on a range of possible scores from 1 to 100 will have a larger mean than one based on scores ranging from 1 to 5. This is where standardization becomes essential. In this context, by "standardization" we mean that the amount of total space occupied by the combined scores of any distribution is the same, regardless of the possible range or number of scores considered. This requirement is met in the measure called the standard (or "z") score by converting the original score into one that takes into account the mean and standard deviation of the particular distribution in question. The resulting score represents a percentage or proportion of the whole distribution, the total area of which naturally has to add up to 100% (or 1.0), and half of which will be located on either side of the mean.

So far, so good. We are now able to describe the scores making up distributions in a standard way. All together, the original scores translated into z scores will add up to 100% of the area occupied by the distribution. But how do we standardize the shape of the distribution so that the physical location of our reference points (the mean and specified proportions, e.g., lower 25%, upper 75%) is constant? As you know, for example, the means of two distributions, one skewed to the right (i.e., more low scores) and one skewed to the left, will intersect the baseline of the distribution curve at different points. This obviously presents a
problem for standardization. As a concrete example of this problem, take a look at the two distributions shown below. The mean for Distribution 1 would fall almost directly in the physical center of the distribution curve, while the mean in Distribution 2 would fall more to the left. Similarly, in the distribution on the right you would have to go further to the right of the mean to find the score that separates the lower 75% and upper 25%.

[Figure: two frequency distribution curves, Distribution 1 (roughly symmetric) and Distribution 2 (skewed), plotted on score axes running from 10 to 100.]

The solution statisticians have come up with for this problem is the normal curve. There are two main reasons the normal curve is so valuable in statistics. The first is that it has structural properties that make it an ideal reference standard for comparison purposes. Most important, its characteristics are stable and known. Its shape is symmetrical, which means that the lower and upper halves of the distribution are identical and, more specifically, that the score representing the mean, median, and mode is the same. Further, the proportion of the distribution falling between the mean and specified standard deviation units is constant: one, two, and three standard deviation units on each side (i.e., negative and positive) of the mean account for about 68%, 95%, and 99.7% of the total cases.

For descriptive purposes, these constant and known characteristics of the normal curve allow us to determine what proportion of a relatively normal distribution of scores falls above or below a particular score or range of scores, or, conversely, to determine what particular score or range marks the boundary for a particular proportion of the distribution. Both depend on our knowledge that the total area under the curve is 1.0 (or 100%), that the mean marks the point that separates the lower and upper 50% of the distribution, and that the relation between particular z scores and proportions of area is constant. It may occur to you that
this is all nice and neat as long as the shape of your distributions does in fact look like a normal curve. But what if they don't fall into a convenient "bell" shape? That possibility brings up the second reason the normal curve is so useful. After examining the shape of many, many, many real distributions, it turns out that the normal curve is a close approximation of the distribution curve that would actually be generated by repeated random samples of numerous real phenomena. Perhaps even more useful is the fact that, if sample size is not too small (i.e., at least 25), it can be used as a comparison model even if the population from which random samples are drawn is not distributed normally. This is due to something that has been called the Central Limit Theorem (or Law of Large Numbers). If you are uneasy about accepting the validity of this notion, I suppose you could eventually check it out empirically. However, I can assure you that a lot of folk more obsessively concerned with getting it right have verified that it works. Anyway, this idea is invaluable when we move into the techniques of inferential statistics, because the more powerful parametric techniques assume that the target population approximates a normal distribution.

To summarize: the principal goals of descriptive statistics are simple: to find ways of representing the most important dimensions of group data in a form that is compact, comprehensible, and comparable. Each of the procedures you covered in the statistics class contributes to these goals in different ways. And while the concrete operations necessary to make this possible are admittedly technical and complex, the reasoning and objectives behind the operations are not.

The most basic goal of inferential statistics is even simpler: comparisons, to discover similarities and differences that will help in decision making. Sometimes the comparison is between an established standard and empirical results. An example would be a comparison between a desired level of
quality set by an organization for an item it manufactures and a sample of the current output. If there is a statistically significant difference between them, that organization will probably start trying to find out why the difference exists, then start making some changes that will reduce the difference. Other times the comparison will be between the characteristics of some known group, as documented by historical or national norms, and samples from a particular population. This sort of comparison was a major factor that generated concern over declining grade point averages of American high school students a few years ago. That general finding also prompted additional comparisons to try and discover what school, teacher, or parent variables were most closely associated (i.e., correlated) with higher grade point averages. Frequently, comparisons are made between two groups who have been intentionally exposed to different conditions (e.g., medical treatments, teaching techniques, supervisory styles). This is the basic design of experiments.

There is a very practical reason for all these comparisons: implementing a major change in the structure or policies of an organization can be very expensive and wasteful if it doesn't improve things. On the other hand, it may be well worth the effort and expense if the change produces significant improvement. Statistical comparisons of group data allow us to draw conclusions that reduce the risks involved in making such decisions. At least they do if we know what specific comparisons are needed to draw these conclusions, what constitutes a valid comparison, and the practical implications of statistically significant differences (or similarities) for our particular substantive problem.

In terms of overall priorities, this broader understanding of statistical techniques (what they attempt to accomplish, why we use the ones we do, and how they relate to the real questions we have) is more important than technical expertise in carrying out the mathematical operations required for
specific procedures. This is particularly true today, when statistical packages for computers like SPSS make it impractical to do manual calculations for anything but very small samples, as I mentioned earlier. However, even if we didn't have these time savers it would still be true, because this more general knowledge affects many of the more specific aspects of your use of statistics, such as your ability to evaluate the appropriateness of particular techniques, your awareness of possible alternative techniques for dealing with a particular problem, and your skill in arriving at more precise interpretations of data.

Bottom line: SPSS is fast, tireless, and totally ignorant of the purpose of your study or what you are trying to discover or decide with your research. It's like a big slobbery puppy who only wants to please. It will create beautiful graphs and tables and precise-looking correlations and significance tests using incorrect data and inappropriate comparisons, if you ask it to. Don't ask it to. Get familiar enough with your topic area to identify clearly relevant variables and design questionnaire items that will validly capture those variables. Keep the data analysis portion of your research work in mind from the beginning of the design of your study. Ultimately you are going to have to relate or compare variables that are represented by numbers. Make sure that your questionnaire includes all the items you need to produce the quantitative comparisons you'll need to answer your research questions. It's a stone bummer to get to the data analysis portion of your study and discover that you didn't collect some data that you now realize you need to draw conclusions.

Along similar lines, spend some time with the "help" files in SPSS. They provide a good, brief description of what each procedure is designed to do. If you have a clear idea of the comparisons you want to make, I would be very surprised if SPSS didn't have a procedure to make those comparisons. Finally, and very
importantly, be aware that each phase of the research process (study design, instrument construction, data collection, coding and setting up the data, and data analysis) will probably take longer than you plan for. So get into the concrete details of your study early. Use all the help you can get: from the literature review, Babbie's text, the SPSS help files, and me. That way you, the puppy, and I will all be a lot happier at the end of the semester.
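As a rough illustration of the z-score standardization discussed in these notes, here is a minimal Python sketch (Python stands in here for what SPSS would do automatically; the example scores and variable names are invented for illustration, and the population standard deviation is used, since the notes do not specify n vs. n-1):

```python
import statistics

def z_scores(scores):
    """Convert raw scores to standard (z) scores: (x - mean) / sd."""
    mean = statistics.mean(scores)
    sd = statistics.pstdev(scores)  # population SD (an assumption for this sketch)
    return [(x - mean) / sd for x in scores]

# Hypothetical exam scores; after conversion, the mean score maps to z = 0
# and the z scores always sum to 0, whatever the original range was.
exam = [40, 50, 60, 70, 80]
print(z_scores(exam))
```

Whatever the original numerical range, the converted scores describe each case only by its distance from the mean in standard-deviation units, which is what makes distributions with different ranges comparable.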
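The notes suggest you could "check out empirically" the Central Limit Theorem idea, that means of repeated random samples (n of at least 25) look roughly normal even when the population is skewed. Here is a small simulation sketch of that check (the exponential population and all the numbers are invented for illustration):

```python
import random
import statistics

random.seed(1)

# A strongly right-skewed population: most values low, a few very high.
population = [random.expovariate(1.0) for _ in range(100_000)]

# Means of repeated random samples of size 25, as in the notes.
sample_means = [
    statistics.mean(random.sample(population, 25)) for _ in range(2_000)
]

mu = statistics.mean(population)
# The sample means cluster around the population mean, and roughly 95%
# fall within two standard deviations of it, as the normal curve predicts.
print(round(mu, 2), round(statistics.mean(sample_means), 2))
```

Even though no individual score distribution here is bell-shaped, the distribution of the sample means is close to one, which is why the normal curve works as a comparison model in inferential statistics.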
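The notes describe comparing two groups intentionally exposed to different conditions (the basic experimental design). One common statistic behind such comparisons is a t statistic; the sketch below computes Welch's version by hand (the teaching-method data are invented, and in practice SPSS or a statistics library would also supply the p-value):

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic for comparing the means of two independent groups."""
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    se = math.sqrt(va / len(a) + vb / len(b))  # standard error of the difference
    return (statistics.mean(a) - statistics.mean(b)) / se

# Hypothetical test scores under two teaching techniques.
method_a = [72, 75, 78, 80, 74, 77]
method_b = [65, 68, 70, 66, 69, 71]
print(round(welch_t(method_a, method_b), 2))
```

The larger the statistic relative to its reference distribution, the less plausible it is that the difference between group means arose from sampling variation alone, which is exactly the kind of comparison that reduces the risk of an expensive wrong decision.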