BTM8106 Week 1,2,3,4,5,7,8 Complete Solutions
This 52-page study guide was uploaded by NUMBER1TUTOR Notetaker on Thursday, November 5, 2015. The study guide belongs to the nurs department at California State University - Dominguez Hills, Fall 2015. Since its upload, it has received 354 views.
2. What are degrees of freedom? How are they calculated?
3. What do inferential statistics allow you to infer?
4. What is the General Linear Model (GLM)? Why does it matter?
5. Compare and contrast parametric and nonparametric statistics. Why and in what types of cases would you use one over the other?
6. Why is it important to pay attention to the assumptions of the statistical test? What are your options if your dependent variable scores are not normally distributed?

Part II

Part II introduces you to a debate in the field of education between those who support Null Hypothesis Significance Testing (NHST) and those who argue that NHST is poorly suited to most of the questions educators are interested in. Jackson (2012) and Trochim and Donnelly (2006) largely follow this model, as does Northcentral. But, as the authors of the readings for Part II argue, statistical analyses based on this model may yield very misleading results. You may or may not propose a study that uses alternative models of data analysis and presentation of findings (e.g., confidence intervals and effect sizes) or that supplements NHST with another model. In any case, by learning about alternatives to NHST, you will better understand both NHST itself and the culture of the field of education. Answer the following questions:
1. What does p = .05 mean? What are some misconceptions about the meaning of p = .05? Why are they wrong? Should all research adhere to the p = .05 standard for significance? Why or why not?
2. Compare and contrast the concepts of effect size and statistical significance.
3. What is the difference between a statistically significant result and a clinically or "real world" significant result? Give examples of both.
4. What is NHST? Describe the assumptions of the model.
5. Describe and explain three criticisms of NHST.
6. Describe and explain two alternatives to NHST. What do their proponents consider to be their advantages?
7. Which type of analysis would best answer the research question you stated in Activity 1? Justify your answer.

Inferential Statistics

2. What are degrees of freedom? How are they calculated?
Answer: The degrees of freedom equal the number of independent observations (or subjects) in the data, minus the number of parameters estimated. A parameter to be estimated is related to the value of an independent variable and is included in a statistical equation. A researcher may estimate parameters using different pieces of information, and the number of independent pieces of information used to estimate a statistic or a parameter is called the degrees of freedom.
Calculation:
Step 1: Determine which statistical test you need to run. Both t-tests and chi-squared tests use degrees of freedom and have distinct degrees-of-freedom tables. T-tests are used to compare means on continuous variables, while chi-squared tests are used with categorical (count) variables. Each test relies on its own distributional assumptions.
Step 2: Identify how many independent values contribute to the statistic. If a sample has N random values, the calculation starts with N degrees of freedom; each parameter estimated from the data costs one degree of freedom. For example, if the sample mean must be subtracted from each data point, as in a one-sample t-test, N - 1 degrees of freedom remain.
Step 3: Look up the critical value for the test using a critical-value table. Knowing the degrees of freedom for a population or sample does not give much insight in and of itself. Rather, the correct degrees of freedom and the chosen alpha together give a critical value, which allows you to determine the statistical significance of your results.
3. What do inferential statistics allow you to infer?
Answer: Inferential statistics is concerned with making predictions or inferences about a population from observations and analyses of a sample.
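As a rough sketch of the counting rules described in the Calculation steps above, the two degrees-of-freedom formulas can be written out in Python (the function names are illustrative, not from any particular library):

```python
def df_one_sample_t(n):
    """One-sample t-test: n observations minus one estimated
    parameter (the sample mean) leaves n - 1 degrees of freedom."""
    return n - 1

def df_chi_square(rows, cols):
    """Chi-square test of independence on an r x c contingency
    table: (r - 1) * (c - 1) degrees of freedom."""
    return (rows - 1) * (cols - 1)

print(df_one_sample_t(25))  # 24
print(df_chi_square(3, 4))  # 6
```

With the degrees of freedom in hand, the critical value for a chosen alpha is then read from the appropriate t or chi-square table, as Step 3 describes.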
That is, we can take the results of an analysis using a sample and generalize them to the larger population that the sample represents. To do this, however, it is imperative that the sample be representative of the group to which it is being generalized. To address this issue of generalization, we have tests of significance. A chi-square or t-test, for example, can tell us the probability that the results of our analysis on the sample are representative of the population that the sample represents. In other words, these tests of significance tell us the probability that the results of the analysis could have occurred by chance when there is no relationship at all between the variables we studied in the population we studied.
4. What is the General Linear Model (GLM)? Why does it matter?
Answer: The General Linear Model (GLM) underlies most of the statistical analyses used in applied and social research. It is the foundation for the t-test, Analysis of Variance (ANOVA), Analysis of Covariance (ANCOVA), regression analysis, and many of the multivariate methods including factor analysis, cluster analysis, multidimensional scaling, discriminant function analysis, canonical correlation, and others. Because of its generality, the model is important for students of social research. Although a deep understanding of the GLM requires some advanced statistics training, I will attempt here to introduce the concept and provide a non-statistical description. When there is a linear relationship among variables, it can be expressed in the form of the general linear model.
5. Compare and contrast parametric and nonparametric statistics. Why and in what types of cases would you use one over the other?
Answer: Nonparametric statistics (also called "distribution-free statistics") are those that can describe some attribute of a population, test hypotheses about that attribute, its relationship with some other attribute, or differences on that attribute across populations, across time, or across related constructs, without requiring assumptions about the form of the population data distribution(s) or interval-level measurement. In the literal meaning of the terms, a parametric statistical test is one that makes assumptions about the parameters (defining properties) of the population distribution(s) from which one's data are drawn, while a nonparametric test is one that makes no such assumptions. In this strict sense, "nonparametric" is essentially a null category, since virtually all statistical tests assume one thing or another about the properties of the source population(s). In practice, parametric statistics are used when their assumptions (normality, interval-level measurement) are reasonably met, because they offer more statistical power; nonparametric statistics are used with ordinal data, small samples, or clearly non-normal distributions.
6. Why is it important to pay attention to the assumptions of the statistical test? What are your options if your dependent variable scores are not normally distributed?
When you do a statistical test, you are, in essence, testing whether the assumptions are valid. We are typically only interested in one, the null hypothesis: the assumption that the difference is zero (though in principle it could test whether the difference is any specified amount). But the null hypothesis is only one of many assumptions. A second assumption is that the data are normally distributed. One convenient thing about the 'real' world is that data are often normally distributed: height, IQ, and many, many other variables are. In general, if a variable is affected by many different factors, it will tend to be normally distributed. We even have tests to determine whether the data are normal. Unfortunately, almost all variables show at least a slight departure from normality.
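As an illustrative sketch of such a check (not a formal normality test like Shapiro-Wilk, which real work would use), sample skewness is one simple statistic that can flag an asymmetric departure from normality:

```python
import statistics

def sample_skewness(xs):
    """Crude skewness estimate: third central moment divided by
    the cube of the (population) standard deviation."""
    n = len(xs)
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (n * s ** 3)

symmetric = [1, 2, 3, 4, 5]
skewed = [1, 1, 1, 2, 10]
print(round(sample_skewness(symmetric), 3))  # 0.0 (no skew)
print(sample_skewness(skewed) > 1)           # True (strong right skew)
```

A skewness far from zero is one warning sign that the normality assumption is doubtful; the next paragraph discusses the options in that case.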
When the dependent variable scores are not normally distributed, the options include transforming the variable (for example, a log transformation) so that it more closely follows a normal distribution, or switching to a nonparametric test that does not assume normality.
NHST
1. What does p = .05 mean? What are some misconceptions about the meaning of p = .05? Why are they wrong? Should all research adhere to the p = .05 standard for significance? Why or why not?
Answer: p = .05 means that, if the null hypothesis were true, results at least as extreme as those observed would occur only 5% of the time. The .05 threshold was popularized by Fisher as a convention for judging significance or non-significance. Fisher never explained where .05 came from or what concepts lie behind it, so confusion about the choice persists; it is essentially an arbitrary cutoff. A common misconception is that p = .05 means there is a 5% chance the null hypothesis is true, or a 95% chance the result will replicate; both readings are wrong, because the p-value is computed assuming the null hypothesis is true. No, it is not necessary for all research to adhere to the .05 standard; since the value has no proven origin, a different threshold can be adopted with proper reasoning for the context.
2. Compare and contrast the concepts of effect size and statistical significance.
Answer: Effect size is a simple way of quantifying the size of the difference between two groups, and it has many advantages over the use of tests of statistical significance alone. Effect size emphasizes the magnitude of the difference rather than confounding it with sample size. Statistical significance, by contrast, depends on both the effect size and the sample size: a tiny effect can reach p < .05 with a very large sample, and a large effect can fail to reach it with a small one.
3. What is the difference between a statistically significant result and a clinically or "real world" significant result? Give examples of both.
Answer: A statistically significant result is one that is unlikely under the null hypothesis given the model's assumptions, while a clinically or "real world" significant result is one large enough to matter in practice.
For example, in the financial sector the performance of a company is projected in advance on the basis of statistical analysis of previous years, while the realized result can differ: US GDP growth might be predicted at 2.3% but come in at 1.0%. Conversely, a drug that lowers blood pressure by a statistically significant but tiny amount may still be clinically insignificant.
4. What is NHST? Describe the assumptions of the model.
Answer: Null Hypothesis Significance Testing (NHST) is a family of statistical techniques used to test whether a certain factor has an effect on our observations.
Assumptions used in NHST: The null hypothesis typically asserts the absence of an effect (for example, no selection, no drift). The null hypothesis is not assumed to describe reality; rather, it is formulated as a well-informed hypothesis based on tested a priori assumptions. Hypotheses of this kind allow the analysis and reconstruction of models.
5. Describe and explain three criticisms of NHST.
Answer: First, NHST collapses two independent dimensions of measurement, (1) the strength of an effect, measured as the distance of a point estimate from zero, and (2) the uncertainty we have about the effect's true strength, measured by something like the expected variance of our measurement device, into a single p-value, in a way that discards much of the meaning of the original data. Second, nobody knows the origin of the .05 criterion, so decisions hinge on an essentially arbitrary convention. Third, the p-value is widely misinterpreted (for example, as the probability that the null hypothesis is true), which encourages overconfident conclusions.
6. Describe and explain two alternatives to NHST. What do their proponents consider to be their advantages?
Answer: There are various alternatives (and supplements) to NHST. Two alternatives to NHST:
Power Analysis: Statistical power analysis gives the long-run probability of rejecting the null hypothesis, given the population effect size, alpha, and sample size, when the null hypothesis is in fact false.
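A minimal sketch of that definition, using a normal approximation to the power of a one-tailed two-sample test (a real analysis would use the noncentral t distribution or dedicated software such as G*Power; the function name is illustrative):

```python
import math
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate one-tailed power of a two-independent-sample test:
    the probability of rejecting H0 when the true effect size is d."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha)  # one-tailed critical value
    return nd.cdf(d * math.sqrt(n_per_group / 2) - z_alpha)

# A medium effect (d = 0.5) with 50 participants per group:
print(round(approx_power(0.5, 50), 2))  # about 0.8
```

Note how power rises with the sample size and the effect size, which is exactly the dependence the definition above names.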
The influence of sample size on significance has long been understood. Proponents argue that power analysis improves on bare NHST because it makes the long-run error rates and required sample sizes explicit.
Plot-Plus-Error-Bar (PPE) Procedure: In this method, group means are plotted with error bars, so the reader can judge both the size of effects and their uncertainty directly from the graph. Proponents consider it reliable because it keeps the magnitude and variability of the data visible over the course of a study, rather than reducing them to a single p-value.
7. Which type of analysis would best answer the research question you stated in Activity 1? Justify your answer.
Answer: You are a researcher interested in addressing the question: does smiling cause mood to rise (i.e., become more positive)? Sketch between-participants, within-participants, and matched-participants designs that address this question, and discuss the advantages and disadvantages of each for helping you answer the question. Describe and discuss each design in 4-5 sentences.
To research whether smiling has an impact on mood change, a researcher can employ different designs to reach his or her findings. The impact of a smile on mood change will be the hypothesis of the research, and the researcher can use between-participants, within-participants, and matched-participants designs. These designs are applied differently, but they all address the same question.
In a between-participants design, the respondents are divided into two groups. One group tests whether a smile changes mood (McKenney & Reeves, 2012): they smile at different people and the findings are recorded. The other group does not smile at people, and those findings are recorded. After the sample period is over, the findings from both groups answer whether smiling is a mood changer.
A within-participants design is the process by which the same respondents are used at every stage of a research study. In this study, to find out whether smiling causes mood to change, the same respondents are used throughout, in two stages. First, the respondents smile at people to see whether their mood will change (McKenney & Reeves, 2012).
Then, after some time, the same respondents do not smile, to see whether that has an impact on a person's mood. Those findings are also recorded.
A matched-participants design is the process by which participants are matched with comparable respondents. In this case, the participants who smile are matched with ones who prefer to be smiled at.
From the findings of all three designs, it can be determined whether smiling has an impact on a person's mood: when a respondent smiles at a person, he or she changes that person's mood.
The advantages of the between-participants design are that it gives the study variety and takes a short time, since the study is done concurrently by all participants. The disadvantage is that it costs more, because more people are needed. The advantage of the within-participants design is that it costs less, as the same participants are used; a disadvantage is that it takes longer, since the same participants take part in two stages. The advantage of the matched-participants design is that it takes the best of both within-participants and between-participants designs.
Since we have to check the significance level across the three designs, in the present scenario NHST is the most straightforward tool for predicting the significance of events: it lets us judge which design performs better and whether mood changes at all.
Jackson, even-numbered Chapter Exercises, pp. 335-337.
Experimental Designs
2. Explain the difference between multiple independent variables and multiple levels of independent variables. Which is better?
Answer: The general purpose of multivariate analysis of variance (MANOVA) is to determine whether multiple levels of independent variables, on their own or in combination with one another, have an effect on the dependent variables. MANOVA requires that the dependent variables meet parametric requirements.
MANOVA is used under the same circumstances as ANOVA, but when there are multiple dependent variables as well as independent variables within the model that the researcher wishes to test. MANOVA is also considered a valid alternative to the repeated-measures ANOVA when sphericity is violated. Like an ANOVA, MANOVA examines the degree of variance within the independent variables and determines whether it is smaller than the degree of variance between the independent variables. If the within-subjects variance is smaller than the between-subjects variance, it means the independent variable has had a significant effect on the dependent variables. There are two main differences between MANOVAs and ANOVAs. The first is that MANOVAs are able to take into account multiple independent and multiple dependent variables within the same model, permitting greater complexity. Second, rather than using the F value as the sole indicator of significance, a number of multivariate measures (such as Wilks' lambda) are used. In MANOVAs, the independent variables relevant to each main effect are weighted to give them priority in the calculations performed. In interactions, the independent variables are equally weighted to determine whether or not they have an additive effect in terms of the combined variance they account for in the dependent variable(s). The main effects of the independent variables and of the interactions are examined with all else held constant. The effect of each of the independent variables is tested separately. Any multiple interactions are tested separately from one another and from any significant main effects. Assuming there are equal sample sizes both in the main effects and the interactions, each test performed will be independent of the next or previous calculation (except for the error term, which is calculated across the independent variables).
3. What is blocking and how does it reduce "noise"? What is a disadvantage of blocking?
Sol: The randomized block design is research design's equivalent to stratified random sampling. Like stratified sampling, randomized block designs are constructed to reduce noise or variance in the data (see Classifying the Experimental Designs). How do they do it? They require that the researcher divide the sample into relatively homogeneous subgroups or blocks (analogous to "strata" in stratified sampling). Then, the experimental design you want to implement is implemented within each block or homogeneous subgroup. The key idea is that the variability within each block is less than the variability of the entire sample. Thus each estimate of the treatment effect within a block is more efficient than estimates across the entire sample. And, when we pool these more efficient estimates across blocks, we should get an overall more efficient estimate than we would without blocking.
How Blocking Reduces Noise
So how does blocking work to reduce noise in the data? To see how it works, you have to begin by thinking about the non-blocked study. The figure shows the pretest-posttest distribution for a hypothetical pre-post randomized experimental design. We use the 'X' symbol to indicate a program group case and the 'O' symbol for a comparison group member. You can see that for any specific pretest value, the program group tends to outscore the comparison group by about 10 points on the posttest. That is, there is about a 10-point posttest mean difference. Now, let's consider an example where we divide the sample into three relatively homogeneous blocks. To see what happens graphically, we'll use the pretest measure to block. This will assure that the groups are very homogeneous. Let's look at what is happening within the third block. Notice that the mean difference is still the same as it was for the entire sample, about 10 points, within each block. But also notice that the variability of the posttest is much less than it was for the entire sample.
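This claim, that pretest-based blocks are individually much less variable than the pooled sample, can be illustrated with simulated data (the block baselines and noise level below are invented purely for the sketch):

```python
import random
import statistics

random.seed(1)
# Three homogeneous blocks with different baselines (e.g., pretest bands),
# sharing the same within-block noise.
blocks = [[base + random.gauss(0, 2) for _ in range(30)]
          for base in (40, 60, 80)]
pooled = [score for block in blocks for score in block]

pooled_sd = statistics.pstdev(pooled)  # includes block-to-block spread
within_sd = statistics.fmean(statistics.pstdev(b) for b in blocks)  # noise only

print(pooled_sd > within_sd)  # True: each block isolates far less variability
```

The spread within any one block reflects only the noise, while the pooled spread also carries the block-to-block differences that blocking removes from the treatment-effect estimate.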
Remember that the treatment effect estimate is a signal-to-noise ratio. The signal in this case is the mean difference; the noise is the variability. The two figures show that we haven't changed the signal in moving to blocking: there is still about a 10-point posttest difference. But we have changed the noise: the variability on the posttest is much smaller within each block than it is for the entire sample. So, the treatment effect will have less noise for the same signal. It should be clear from the graphs that the blocking design in this case will yield the stronger treatment effect. But this is true only because we did a good job assuring that the blocks were homogeneous. If the blocks weren't homogeneous, so that their variability was as large as the entire sample's, we would actually get worse estimates than in the simple randomized experimental case.
4. What is a factor? How can the use of factors benefit a design?
Sol: Probably the easiest way to begin understanding factorial designs is by looking at an example. Let's imagine a design for an educational program where we would like to look at a variety of program variations to see which works best. For instance, we would like to vary the amount of time the children receive instruction, with one group getting 1 hour of instruction per week and another getting 4 hours per week. And we'd like to vary the setting, with one group getting the instruction in-class (probably pulled off into a corner of the classroom) and the other group being pulled out of the classroom for instruction in another room. We could think about having four separate groups to do this. With factorial designs, we don't have to compromise when answering these questions. We can have it both ways if we cross each of our two time-in-instruction conditions with each of our two settings. Let's begin by defining some terms. In factorial designs, a factor is a major independent variable.
In this example we have two factors: time in instruction and setting. A level is a subdivision of a factor. In this example, time in instruction has two levels and setting has two levels. Sometimes we depict a factorial design with a numbering notation. In this example, we can say that we have a 2x2 (spoken "two-by-two") factorial design. In this notation, the number of numbers tells you how many factors there are, and the number values tell you how many levels each factor has. If I said I had a 3x4 factorial design, you would know that I had 2 factors and that one factor had 3 levels while the other had 4. The order of the numbers makes no difference, and we could just as easily term this a 4x3 factorial design. The number of different treatment groups that we have in any factorial design can easily be determined by multiplying through the number notation. For instance, in our example we have 2 x 2 = 4 groups. In our notational example, we would need 3 x 4 = 12 groups.
5. Explain main effects and interaction effects.
Sol: The Main Effects
A main effect is an outcome that is a consistent difference between levels of a factor. For instance, we would say there's a main effect for setting if we find a statistical difference between the averages for the in-class and pull-out groups at all levels of time in instruction. The first figure depicts a main effect of time: for all settings, the 4 hour/week condition worked better than the 1 hour/week one. It is also possible to have a main effect for setting (and none for time). In the second main effect graph we see that in-class training was better than pull-out training for all amounts of time. Finally, it is possible to have a main effect on both variables simultaneously, as depicted in the third main effect figure. In this instance 4 hours/week always works better than 1 hour/week, and the in-class setting always works better than pull-out.
Interaction Effects
Even if we could only look at main effects, factorial designs would be useful.
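The group-counting rule above ("multiply through the number notation") is simple to sketch in code (the function name is illustrative):

```python
import math

def treatment_groups(levels_per_factor):
    """Number of treatment groups in a factorial design: the product
    of the number of levels of each factor."""
    return math.prod(levels_per_factor)

print(treatment_groups([2, 2]))  # 2x2 design: 4 groups
print(treatment_groups([3, 4]))  # 3x4 design: 12 groups
```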
But, because of the way we combine levels in factorial designs, they also enable us to examine the interaction effects that exist between factors. An interaction effect exists when differences on one factor depend on the level you are on of another factor. It's important to recognize that an interaction is between factors, not levels. We wouldn't say there's an interaction between 4 hours/week and in-class treatment; instead, we would say that there's an interaction between time and setting, and then go on to describe the specific levels involved. How do you know if there is an interaction in a factorial design? There are three ways you can determine there's an interaction. First, when you run the statistical analysis, the statistical table will report on all main effects and interactions. Second, you know there's an interaction when you can't talk about the effect of one factor without mentioning the other factor. If you can say at the end of your study that time in instruction makes a difference, then you know that you have a main effect and not an interaction (because you did not have to mention the setting factor when describing the results for time). On the other hand, when you have an interaction it is impossible to describe your results accurately without mentioning both factors. Finally, you can always spot an interaction in the graphs of group means: whenever there are lines that are not parallel, there is an interaction present! If you check the main effect graphs above, you will notice that all of the lines within a graph are parallel. In contrast, for all of the interaction graphs, you will see that the lines are not parallel.
6. How does a covariate reduce noise?
Sol: One of the most important ideas in social research is how we make a statistical adjustment: adjusting one variable based on its covariance with another variable. If you understand this idea, you'll be well on your way to mastering social research.
What I want to do here is to show you a series of graphs that illustrate pictorially what we mean by adjusting for a covariate. Let's begin with data from a simple ANCOVA design as described above. The first figure shows the pre-post bivariate distribution. Each "dot" on the graph represents the pretest and posttest score for an individual. We use an 'X' to signify a program or treated case and an 'O' to describe a control or comparison case. You should be able to see a few things immediately. First, you should be able to see a whopping treatment effect! It's so obvious that you don't even need statistical analysis to tell you whether there's an effect (although you may want to use statistics to estimate its size and probability). How do I know there's an effect? Look at any pretest value (value on the horizontal axis). Now, look up from that value: you are looking up the posttest scale from lower to higher posttest scores. Do you see any pattern with respect to the groups? It should be obvious to you that the program cases (the 'X's) tend to score higher on the posttest at any given pretest value. Second, you should see that the posttest variability has a range of about 70 points. Now, let's fit some straight lines to the data. The lines on the graph are regression lines that describe the pre-post relationship for each of the groups. The regression line shows the expected posttest score for any pretest score. The treatment effect is even clearer with the regression lines: you should see that the line for the treated group is about 10 points higher than the line for the comparison group at any pretest value.
7. Describe and explain three trade-offs present in experiments.
Sol: 1. People Make Trade-offs: Economic goods and services are limited, while the need to use these goods and services seems limitless. There are simply not enough goods and services to satisfy even a small fraction of everyone's consumption desires.
Thus, societies must decide how to use these limited resources and distribute them among different people. This means that, to get one thing we like, we usually have to give up another thing we also like: making decisions requires trading off one goal against another. Consider a society that decides to spend more on national defense to protect its shores from foreign aggressors: the more the society spends on national defense, the less it can spend on personal goods to raise its standard of living at home. Or consider the trade-off between clean air and a high level of income. Laws that require firms to reduce pollution have the cost of reducing the incomes of the firms' owners, workers, and customers, while pollution regulations give society the benefit of a cleaner environment and the improved health that comes with it. Another trade-off society faces is between efficiency and equity. Efficiency deals with a society's ability to get the most effective use of its resources in satisfying people's wants and needs. Equity denotes the fair distribution of the benefits of those resources among society's members.
1-Compare and Contrast Internal and External Validity
Internal validity concerns the accuracy of a study's results. If the sample is not selected by a random method, internal validity can be threatened, because internal validity depends in part on the data collection method. In the same manner, external validity involves generalization: knowing whether the results hold for the larger population (Herek, 2012). It, too, depends on the data collection method. There are, however, differences between external and internal validity. Internal validity deals with the research study itself, without reference to outside settings or populations.
External validity, on the other side, concerns whether one can take the outcomes of the research study and generalize them to a broader setting. Generalization works through external validity (Kimmel, 2007). It can be better understood with an example: when researchers conduct a study to test a drug, the test population cannot consist of the entire target population. Suppose a research question is "To identify the impact of changes in ecology on the living style of Hispanics." For this research question, external validity is a primary concern, because the question focuses on the whole Hispanic population rather than a particular group. But if the research question states "To identify the impact of changes in ecology on the living style of Hispanic teenagers," then it would be very difficult to generalize the outcomes of this question to the whole Hispanic community; in this situation internal validity is the primary concern, because without it the results of the study cannot be trusted (Herek, 2012). To make strong claims about the applicability of findings to a target population, researchers use random sampling, systematic sampling, and cluster sampling to gather information and data from the selected population (Kimmel, 2007). This strategy plays an effective role in the generalization of study outcomes, because probability-based sampling with a high response rate helps researchers apply the results of a study more broadly.
2-Compare and Contrast Random Selection and Random Assignment
Random selection and random assignment are closely related terms.
Random selection describes how the researcher draws the sample from the whole population for the research study, while random assignment describes the process researchers use to assign the sampled individuals to different groups (Trochim, 2006). Because of this relationship, both random selection and random assignment can be applied in the same study. It can be better understood with an example: if a researcher draws 50 people at random from a group of 500 people, that is random selection. If the researcher then randomly decides which of those 50 people receive the new treatment rather than the control condition, that is random assignment. In some situations, a researcher does not use random selection to obtain the sample but still randomly decides which participants receive which treatment; there, random selection is not used but random assignment is (McNabb, 2010). There are also differences between the two. Random selection concerns the sampling process and is directly related to the external validity of the study's results (Trochim, 2006). Random assignment, on the other hand, concerns the study design and is related to the internal validity of the study's results.
3-Sample Size and Likelihood of Statistical Significance
In statistics, significance is the term that helps establish that an observed difference or relationship between two groups is unlikely to be due to chance, and it depends on sample size (StatPac Inc., 2013). For example, in a study, 50 people are selected for a test and scored by male and female category. The males average 100 and the females 98; a t-test shows the difference is significant at p = .001, even though there is no huge practical difference between 100 and 98.
The example shows that a difference between groups can be small and still be detected without undermining the study's conclusions. Sample size therefore has a drastic impact on a study's outcomes, whatever the population or purpose. Sample size also governs error. With a small sample, possible errors cannot be adequately accounted for, which lowers the likelihood of a statistically significant result and can distort the outcomes of the study; these errors shrink when the researcher takes a larger group (StatPac Inc., 2013). Hence there is a direct relationship between sample size and the likelihood of statistical significance when comparing two groups: a larger sample reduces the standard error, which lowers the risk of Type II error while alpha remains fixed, provided the rest of the study is constructed carefully. A larger sample also gives researchers room to demand a stricter significance level for the study's outcomes (Biau, Kernéis & Porcher, 2008). Therefore, as sample size increases, the likelihood of finding a statistically significant relationship increases as well, because a large sample better represents the characteristics of the population.
4-Probability and Non-probability Sampling
Probability Sampling: In this method, participants are chosen by a completely random process (Hackley, 2003), which ensures that the sample is not hand-picked but genuinely random.
Advantages
The main advantage of this sampling method is its fairness: every member of the population has an equal chance of being chosen before the sample is gathered, which improves the validity of the research outcomes.
It is very effective for smaller populations, and the sample can be kept free from bias (McNabb, 2010).
Disadvantages
Because the method depends entirely on the people selected, participants may respond dishonestly, and flaws in execution can undermine the fairness of the sampling model (Hackley, 2003). It is also time-consuming: a large sample group requires considerable patience and time.
Non-probability Sampling: Non-probability sampling differs from probability sampling in that the sample is selected by a deliberate, non-random method rather than by chance. Like probability sampling, it is effective for small populations (McNabb, 2010). Its main advantages and disadvantages follow.
Advantages
This method is efficient because it lets the researcher target a specific group of people (Hackley, 2003).
Disadvantages
Its main disadvantage is bias: because participants are drawn from a similar group, their views may not represent those of the whole population.
1. Calculate the sample size needed given these factors:
• one-tailed t-test with two independent groups of equal size
• small effect size (see Piasta, S. B., & Justice, L. M., 2010)
• alpha = .05
• beta = .2
Assume that the result is a sample size beyond what you can obtain. Use the compromise function to compute alpha and beta for a sample half the size. Indicate the resulting alpha and beta. Present an argument that your study is worth doing with the smaller sample.
Solution
a) In G*Power, under Test family, select "t tests". Under Statistical test, choose "Means: Difference between two independent means (two groups)". Under Type of power analysis, choose "A priori: Compute required sample size – given α, power, and effect size". Then enter the input parameters:
• Tail(s): one
• Effect size d: 0.2 (a small effect size)
• α err prob: 0.05
• Power (1 − β err prob): 1 − 0.2 = 0.8, since beta = 0.2
• Allocation ratio N2/N1: 1
G*Power returns the required sample size of 310.
b) Next, under Type of power analysis, choose "Compromise: Compute implied α & power – given β/α ratio, sample size, and effect size", enter half the sample size (310/2 = 155), and click Calculate. G*Power then displays the resulting alpha and beta for the half-size sample.
2. a. Calculate the sample size needed given these factors:
• ANOVA (fixed effects, omnibus, one-way)
• small effect size
• alpha = .05
• beta = .2
• 3 groups
b. Assume that the result is a sample size beyond what you can obtain. Use the compromise function to compute alpha and beta for a sample approximately half the size. Give your rationale for your selected beta/alpha ratio. Indicate the resulting alpha and beta. Give an argument that your study is worth doing with the smaller sample.
3. In a few sentences, describe two designs that can address your research question. The designs must involve two different statistical analyses. For each design, specify and justify each of the four factors and calculate the estimated sample size you'll need. Give reasons for any parameters you need to specify for G*Power.
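The sample size in part (a) can be cross-checked without G*Power using the standard large-sample approximation n = 2(z₁₋α + z₁₋β)² / d² per group. This is only a sketch using the normal approximation, not G*Power's exact noncentral-t computation, though for these inputs it yields the same 310 per group:

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha, power, tails=1):
    """Normal-approximation sample size per group for a two-sample t-test."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / tails)  # critical value of the test
    z_beta = nd.inv_cdf(power)               # quantile for the desired power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

print(n_per_group(d=0.2, alpha=0.05, power=0.8))  # 310
```

The formula also makes the power trade-offs visible: doubling the effect size d cuts the required n roughly fourfold, which is why small effects demand such large samples.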
Solution
In G*Power:
• Test family: F tests
• Statistical test: ANOVA (fixed effects, omnibus, one-way)
• Type of power analysis: A priori
Input parameters:
• Effect size f: 0.10
• α err prob: 0.05
• Power (1 − β err prob): 0.8
• Number of groups: 3
Output parameters:
• Noncentrality parameter λ: 9.9375000
• Critical F: 3.0540042
• Numerator df: 2
• Denominator df: 156
• Total sample size: 159
• Actual power: 0.8048873
(Note: since λ = f²·N, the output values shown correspond to an effect size f of 0.25; with the small effect f = 0.10 entered above, the required total sample size would be substantially larger.)

Concept Paper
Inventory Management KPIs
Name of the School
Version: February 2015
© XXXXXXXXUniversity, 2015

Contents
Introduction
Statement of the Problem
Purpose of the Study
Research Questions
Research Method
Operational Definition of Variables/Constructs
Measurement
Summary
Appendix A
Annotated Bibliography
References

Introduction
For any organization, key performance indicators (KPIs) are central to measuring efficiency; they help organizations set goals and devise corporate strategies. KPIs provide quantifiable outputs that act as a single point of truth, show the performance of different aspects of the company, and can be evaluated against industry benchmarks to ensure consistent growth. Operational KPIs focus on product or service quality and on the production details of the organization. They help optimize the production output of individual factories, manage inventory, and ensure optimal lead times. This data in turn helps top management take the critical business decisions that drive company growth.
In the past decade, markets have become much more volatile and competition has intensified with globalization, dramatically increasing the need for organizations to optimize their inventories in order to reduce costs. The major risks organizations face if they fail to optimize inventory are:
Excess inventory: inventory needs storage space, so producing more than required raises inventory levels and, in turn, storage costs. For perishable goods especially, high inventory means products spend more time in the warehouse and are closer to expiration by the time they reach retail stores.
Low inventory: this is an equally serious concern, because the company may fail to deliver products to customers on time.
This gives competitors the opportunity to gain market share. Implementing emergency strategies to replenish stock when inventory runs low is also expensive and has a direct impact on profits. As stated by Sayed, H., the ability to minimize stock while maximizing its availability is the holy grail of managing inventory, and accomplishing this takes a great deal of planning. Hence it is pivotal for organizations to identify the key inventory management parameters (KPIs), understand them, define the tracking mechanism, and take decisions based on these values.
Statement of the Problem
This concept paper focuses on identifying the key performance indicators for inventory and operations management within an organization. The scope of the study is to identify and evaluate the nature, prospects, need, and challenges in identifying the key inventory KPIs, and how each KPI determines and helps improve the production efficiency of the organization. The paper provides a research design for identifying and capturing these inventory KPIs. It explains the approach to be taken in answering three key questions:
1. Which KPIs help define the total inventory to be maintained?
2. How frequently does each KPI need to be monitored, and how will this affect inventory management?
3. Who in the organization needs to be notified and updated with respect to each KPI?
The documented problem for the company is not knowing which KPIs to track and how frequently to track each of them. Because the inability to manage inventory effectively affects the organization's entire production and operations and has a direct impact on organizational profits, it has been chosen as the key issue for the organization.
Optimized inventory helps companies reduce inventory storage costs and optimize their products (Ilies, L., Turdean, A., & Crisan, E., 2009). R. Anupindi and R. Akella, in the article entitled 'Diversification under supply chain uncertainty', describe how different operational sourcing strategies help companies hedge against the uncertainty of delivery. This matters because sourcing strategies have a direct impact on inventory levels: too much raw material inflates inventory, while too little slows production and results in diminishing inventories.
Purpose of the Study
This study is being conducted to assess the awareness level of middle management with respect to inventory KPIs and to identify the KPIs that need to be quantitatively measured and reported. The study also aims to define the frequency with which each KPI should be assessed. This will help the company not only capture KPI values to manage inventory, but also push notifications in a timely manner, so that the right information reaches the right individual in time to take action and optimize inventory. The main objectives of the study are:
• To brainstorm with middle and top management and understand the KPIs they perceive as important for operational decisions.
• To list all KPIs that need to be tracked in real time based on each individual's job role, along with the frequency with which management members need to be notified.
• To define the KPI benchmarks the organization follows.
The research uses both quantitative and qualitative methods. Focus group discussions will surface the variables; a questionnaire built on those variables will then capture data for analysis. Mass observation will also be used to verify that the outputs of the statistical analysis are justified, as stated in Mayer, A. (2014).
The key KPIs for the manufacturing industry, based on industry standards, have been selected as the constructs for this market research (e.g., inventory accuracy, rate of return, order tracking), and the variable scales are built on these constructs. The research method is mixed, combining qualitative and quantitative elements. For example, to study how the construct 'rate of return' affects the factory's inventory stock, its variables will be measured on a Likert scale with respect to cost incurred for one set of respondents, and with respect to level of impact (low, medium, or high) for another set of users. The data must come from employees actively involved in the organization's sales, distribution, and operations, as well as from external vendors/suppliers and distributors. The research will focus on the organization's factories and inventory warehouses within the United States; the same research can then be conducted for other factories and inventory stores worldwide, which will help evaluate geographic differences and differences in benchmarks.
Research Questions
The research hypothesis focuses on identifying the key inventory KPIs to be captured and their importance to each stakeholder in the organization. It states that there is a correlation between factory production, the raw material sourced from suppliers, and the organization's inventory levels, and that these have a direct impact on the organization's cash flow.
Qualitative questions: These will be used to understand which KPIs middle and top management perceive as important for operational decisions, and to list all KPIs that need to be tracked in real time based on each individual's job role, along with the frequency with which management members need to be notified.
Quantitative questions: To define the probable revenue impact of each KPI on the organization, and to determine quantitatively the KPI benchmarks the organization follows.
Hypotheses:
H10 (null hypothesis): A higher frequency of operations (inventory) KPI data is NOT positively related to the efficiency of inventory management or to the efficiency of production.
H1a (alternative hypothesis): A higher frequency of operations (inventory) KPI data is positively related to the efficiency of inventory management or to the efficiency of production.
Definition of Key Terms
Inventory Turnover: The amount of sales relative to inventory; provided by store stakeholders.
Order Tracking: Tracking of goods and raw materials moving in and out; provided by the internal stakeholders handling the information system.
Inventory Accuracy: Comparison of the inventory levels recorded by bookkeepers with the actual stock levels in the warehouse.
Order Status: Real-time tracking of all orders, categorized by the action taken.
Mass Observation: Performed by the researcher to validate the inputs gathered during the earlier approaches and confirm that the direction of the research is appropriate.
Research Method
Research Objective: Any activity has certain measuring parameters, and key performance indicators generally serve as the measuring scale for assessing these activities. Each KPI can be mapped to specific process elements in the organization and needs to be benchmarked against the organization's historical data. Once each KPI is defined and mapped to the inventory management process, the measurement protocol needs to be set.
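Once a measurement protocol is set, several of the KPIs defined above reduce to simple ratios. The following sketch uses standard textbook formulas; the function names and all figures are hypothetical, not drawn from the study:

```python
def inventory_turnover(cogs, avg_inventory_value):
    """How many times inventory is sold and replaced over the period."""
    return cogs / avg_inventory_value

def inventory_accuracy(recorded_units, counted_units):
    """Share of recorded stock confirmed by a physical count."""
    return min(recorded_units, counted_units) / max(recorded_units, counted_units)

def back_order_rate(backordered_orders, total_orders):
    """Share of orders that could not be fulfilled when placed."""
    return backordered_orders / total_orders

# Hypothetical figures for one warehouse over one quarter:
print(inventory_turnover(1_200_000, 300_000))   # 4.0
print(inventory_accuracy(9_800, 10_000))        # 0.98
print(back_order_rate(15, 500))                 # 0.03
```

Each value can then be compared against the organization's benchmark to decide who needs to be notified and how often.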
Hence, to ensure that the proper KPIs have been chosen, research must be conducted that takes into account the viewpoints of all stakeholders related to the processes in question.
Research Methodology: The methodology covers the systematic process of allocating KPIs to the inventory management and improvement processes under consideration.
Definition: The identification of inventory management key performance indicators aligned with the inventory management process flow.
Research Characteristics: The research will be controlled, as the target respondents are the stakeholders involved in the inventory management process flow. The validity of the identified KPIs will be confirmed by the management team assigned to govern the processes. Once the data has been gathered, the key parameters will be quantified (Sayed, H., 2013).
Research Type: The research is descriptive in nature, as the KPIs cannot be identified at the initial stage; once inputs from the different stakeholders are aggregated, convergence toward the key parameters becomes feasible. The data gathered from respondents will be quantitative, which is necessary for applying statistical techniques and drawing accurate deductions. This is possible if the key factors can be derived from the initial variables, whose selection is based on the attributes of the inventory management processes under consideration.
Research Approach: When selecting a research approach, researchers generally have two options: a quantitative approach or a qualitative approach, depending on the type of results needed.
Because the results of the current research should be KPIs used to measure processes, a quantitative approach is necessary. The types of responses considered in this study are as follows:
1. Questionnaire: Respondents are given a set of questions designed to quantify the variables and eventually help identify the factors for KPI selection.
2. Focus group discussion: A streamlined interview process for the internal stakeholders involved in executing the inventory management processes.
3. Mass observation: Performed by the researcher to validate the inputs gathered during the previous approaches and confirm that the direction of the research is appropriate.
Operational Definition of Variables/Constructs
This refers to the identification of the variables and constructs needed for the research; they form the basis of the questionnaire as well as the focus group discussion. The major constructs are as follows:
1. Carrying Cost of Inventory: The inventory storage cost borne by the organization; provided by top management.
2. Inventory Turnover: The amount of sales relative to inventory; provided by store stakeholders.
3. Order Tracking: Tracking of goods and raw materials moving in and out; provided by the internal stakeholders handling the information system.
4. Inventory to Sales Ratio: The ratio of in-stock items to sales orders; provided by internal stakeholders.
5. Units Per Transaction: The average number of units purchased per order, compared against target values.
6. Rate of Return: The rate at which customers or distributors return shipped items.
These returns are categorized by reason for return.
7. Order Status: Real-time tracking of all orders, categorized by the action taken.
8. Inventory Accuracy: Comparison of the inventory levels recorded by bookkeepers with the actual stock levels in the warehouse.
9. Back Order Rate: The total number of orders that are not fulfilled at the moment a customer or distributor places them.
10. Perfect Order Rate: The number of orders shipped to customers without any incident.
Measurement
Once the appropriate stakeholders in the organization have been selected, data will be measured using three main techniques. The first is the focus group discussion, which will help narrow down the variables to be selected for the constructs listed above. This step is necessary so that the researcher obtains the correct variables from the relevant stakeholders, who can properly identify the variables that lead to the selection of the final KPIs (Sayed, H., 2013). Once the focus group discussion is complete, a questionnaire will be used to collect data for the selected variables. The questions will cover discrete data sets as well as Likert scales for qualitative data. The data will be stored in quantitative tabular form.
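The tabular storage step described above can be sketched with the standard library's csv module. The respondent IDs, question labels, and Likert values below are invented for illustration; the real schema would follow the final questionnaire:

```python
import csv
import io

# Hypothetical Likert-scale responses (1-5) for three KPI-related questions.
responses = [
    {"respondent": "R01", "q_inventory_accuracy": 5, "q_rate_of_return": 3, "q_order_tracking": 4},
    {"respondent": "R02", "q_inventory_accuracy": 4, "q_rate_of_return": 2, "q_order_tracking": 5},
    {"respondent": "R03", "q_inventory_accuracy": 5, "q_rate_of_return": 4, "q_order_tracking": 4},
]

# Store the responses in quantitative tabular (CSV) form.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(responses[0]))
writer.writeheader()
writer.writerows(responses)
table = buf.getvalue()

# A simple per-question mean, the kind of summary the analysis step would use.
mean_accuracy = sum(r["q_inventory_accuracy"] for r in responses) / len(responses)
print(round(mean_accuracy, 2))
```

Storing every response as one row per respondent keeps the data ready for the statistical techniques the research type calls for.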