1. Jackson (2012) even-numbered Chapter Exercises (p. 244). You read in a health magazine about a study in which a new therapy technique for depression was examined. A group of depressed individuals volunteered to participate in the study, which lasted 9 months. There were 50 participants at the beginning of the study and 29 at the end of the 9 months. The researchers claimed that of those who completed the program, 85% improved. What possible confounds can you identify in this study?
What are internal validity and external validity, and why are they so important to researchers?
The main confound is attrition (mortality): 21 of the 50 volunteers dropped out, and the claimed 85% improvement applies only to the 29 who completed the program, so the result is biased toward those most likely to benefit. The study also lacks a control group, so improvement over 9 months could reflect spontaneous remission or regression to the mean rather than the therapy itself. Complete and valid data on the whole sample are needed for sound analysis, and it is up to researchers to report such results with integrity, both to protect the trust of the public and the reputation of the institution.
2.What is the purpose of conducting an experiment? How does an experimental design accomplish its purpose?
In 1793, Kant examined a saying that is now known worldwide:
“Some things are fine in theory, but do not work in practice”
This observation still applies centuries later. Identifying something in theory is comparatively easy; it requires only logic. Applying that theory in real life is very different: when a person starts putting a theory into actual practice, the real test begins, and it is not easy to pass. There are many hurdles to clear along the way.
3. What are the advantages and disadvantages of an experimental design in an educational study?
The biggest advantage is that an experimental design lets the researcher distinguish between what holds in theory and what holds in practical implementation. Typical purposes of such designs include:
• To describe the characteristics of relevant groups, such as consumers, salespeople, organizations, or market areas.
• To estimate the percentage of units in a specified population exhibiting a certain behavior.
• To determine the perceptions of product characteristics.
• To determine the degree to which marketing variables are associated.
• To make specific predictions.
4.What is more important in an experimental study, designing the study in order to make strong internal validity claims or strong external validity claims? Why?
It is important to balance the two. According to StatsDirect:
This is about the validity of results within, or internal to, a study. It usually concerns causality, i.e. the strength of assigning causes to outcomes. For laboratory experiments with tightly controlled conditions, it is usually easy to achieve high internal validity. For studies in difficult-to-control environments, e.g. health services research, it can be difficult to claim high internal validity. When you claim high internal validity you are saying that in your study, you can assign causes to effects unambiguously. Randomisation is a powerful tool for increasing internal validity (see confounding).
In the context of questionnaires the term criterion validity is used to mean the extent to which items on a questionnaire are actually measuring the real-world states or events that they are intended to measure. This type of internal validity could be assessed by comparing questionnaire responses with objective measures of the states or events to which they refer; for example, comparing the self-reported amount of cigarette smoking with some objective measure such as cotinine levels in breath.
This is about the validity of applying your study conclusions outside, or external to, the setting of your study. Another term for this is generalisability. Sometimes this is obvious, for example a public opinion poll taken at the entrance to a football match would not be properly representative of the general population. Often it is less obvious, for example a study in medical settings on a Monday morning will not be representative of the pattern of illnesses seen at other times of the week. A key to improving external validity is to understand the setting thoroughly before you embark upon the study.
5. In an experiment, what is a control? What is the purpose of a control group? Of single or multiple comparison groups?
A control is a condition or group in which the treatment of interest is absent (or held constant), providing a baseline against which the experimental condition is compared.
The control group allows the researcher to attribute differences in outcome to the independent variable rather than to extraneous factors. With a single comparison group, one treatment is tested against that baseline; with multiple comparison groups, several treatments or treatment levels can be compared within the same experiment.
6.What are confounds? Give an example of a design that has three confounds. Describe three ways to alter the design to address these confounds and explain the advantages and disadvantages of each.
Confounds are extraneous variables that vary systematically along with the independent variable. When such a variable is mixed into the design, the researcher cannot tell whether the outcome was produced by the treatment or by the confound.
In the words of the University of South Alabama:
Potential confounding variables can be controlled for by using one or more of a variety of techniques that eliminate the differential influence an extraneous variable may have for the comparison groups in a research study.
∙ Differential influence occurs when the influence of an extraneous variable is different for the various comparison groups.
∙ For example, if one group is mostly females and the other group is mostly males, then gender may have a differential effect on the outcome. As a result, you will not know whether the outcome is due to the treatment or due to the effect of gender.
∙ If the comparison groups are the same on all extraneous variables at the start of the experiment, then differential influence is unlikely to occur.
∙ In experiments, we want our groups to be the same (or “equivalent”) on all potentially confounding extraneous variables. The control techniques are essentially attempts to make the groups similar or equivalent.
7. What does “cause” mean and why is it an important concept in research? How are correlation and causation related?
“Cause” means that changes in one variable actually produce changes in another; establishing cause is the central goal of experimental research. Correlation only shows that two variables change together, and correlation is necessary but not sufficient for causation: a correlation may reflect a causal link, a reversed direction of influence, or a third variable affecting both. A good hypothesis states the proposed causal relationship in a practical, testable form, and the research then tries to support or refute it with the data collected.
8. You are a researcher interested in addressing the question: does smiling cause mood to rise (i.e., become more positive)? Sketch between-participants, within-participants, and matched-participants designs that address this question and discuss the advantages and disadvantages of each to yielding data that help you answer the question.
In a between-participants design, different participants are assigned to each condition and their data are compared. In a within-participants design, the same participants serve in every condition, so repeated measurements and appropriate statistical tools are the norm. In a matched-participants design, different participants serve in each group, but they are matched on relevant characteristics. Reference: http://www.ablongman.com/graziano6e/text_site/MATERIAL/sg/sg11su.htm
2. What are degrees of freedom? How are they calculated?
3. What do inferential statistics allow you to infer?
4. What is the General Linear Model (GLM)? Why does it matter?
5. Compare and contrast parametric and nonparametric statistics. Why and in what types of cases would you use one over the other?
6. Why is it important to pay attention to the assumptions of the statistical test? What are your options if your dependent variable scores are not normally distributed?
Part II Part II introduces you to a debate in the field of education between those who support Null Hypothesis Significance Testing (NHST) and those who argue that NHST is poorly suited to most of the questions educators are interested in. Jackson (2012) and Trochim and Donnelly (2006) pretty much follow this model. Northcentral follows it. But, as the authors of the readings for Part II argue, using statistical analyses based on this model may yield very misleading results. You may or may not propose a study that uses alternative models of data analysis and presentation of findings (e.g., confidence intervals and effect sizes) or supplements NHST with another model. In any case, by learning about alternatives to NHST, you will better understand it and the culture of the field of education.
Answer the following questions:
1. What does p = .05 mean? What are some misconceptions about the meaning of p =.05? Why are they wrong? Should all research adhere to the p = .05 standard for significance? Why or why not?
2. Compare and contrast the concepts of effect size and statistical significance.
3. What is the difference between a statistically significant result and a clinically or “real world” significant result? Give examples of both.
4. What is NHST? Describe the assumptions of the model.
5. Describe and explain three criticisms of NHST.
6. Describe and explain two alternatives to NHST. What do their proponents consider to be their advantages?
7. Which type of analysis would best answer the research question you stated in Activity 1? Justify your answer.
2. What are degrees of freedom? How are they calculated?
Answer: Degrees of freedom equal the number of independent observations (the number of subjects in the data) minus the number of parameters estimated. A parameter to be estimated is related to the value of an independent variable and included in a statistical equation. A researcher may estimate parameters using different amounts or pieces of information, and the number of independent pieces of information used to estimate a statistic or a parameter is called the degrees of freedom.
First, determine what type of statistical test to run. Both t-tests and chi-squared tests use degrees of freedom and have distinct critical-value tables. t-tests are used when the variable of interest is continuous; chi-squared tests are used when the data are categorical (counts). The t-test additionally assumes an approximately normal population distribution.
Next, identify how many independent values I have in my population or sample. If I have a sample of N random values, then the equation has N degrees of freedom. If my data set required me to subtract the mean from each data point (as when computing a sample variance), then I will have N - 1 degrees of freedom.
Finally, look up the critical values for my equation using a critical-value table. Knowing the degrees of freedom for a population or sample does not give me much insight in and of itself. Rather, the correct degrees of freedom and my chosen alpha together give me a critical value. This value allows me to determine the statistical significance of my results.
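The steps above can be sketched in a few lines of Python. This is a minimal illustration using a made-up sample of 10 scores (the data and the one-sample t-test scenario are assumptions for the example, not from the text):

```python
from scipy import stats

# Hypothetical sample of N = 10 scores; for a one-sample t-test, df = N - 1
# because one parameter (the mean) is estimated from the data.
scores = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]
n = len(scores)
df = n - 1
alpha = 0.05

# Critical value for a two-tailed test: the cutoff the observed t must exceed.
t_crit = stats.t.ppf(1 - alpha / 2, df)
print(df, round(t_crit, 3))   # df = 9, t_crit ≈ 2.262
```

Together, the degrees of freedom and the chosen alpha determine the critical value, exactly as described above.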
3. What do inferential statistics allow you to infer?
Answer: Inferential statistics is concerned with making predictions or inferences about a population from observations and analyses of a sample. That is, we can take the results of an analysis using a sample and can generalize it to the larger population that the sample represents. In order to do this, however, it is imperative that the sample is representative of the group to which it is being generalized.
To address this issue of generalization, we have tests of significance. A chi-square or t-test, for example, can tell us the probability that the results of our analysis on the sample are representative of the population that the sample represents. In other words, these tests of significance tell us the probability that the results of the analysis could have occurred by chance when there is no relationship at all between the variables we studied in the population we studied.
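As a concrete sketch of such a significance test, the following Python snippet runs an independent-samples t-test on two hypothetical groups of exam scores (the data are invented for illustration):

```python
from scipy import stats

# Hypothetical exam scores from two independently sampled classrooms.
group_a = [78, 85, 90, 72, 88, 84, 79, 91]
group_b = [70, 75, 68, 74, 73, 71, 77, 69]

t_stat, p_value = stats.ttest_ind(group_a, group_b)
# A small p-value suggests the observed difference between the samples
# would be unlikely if there were no difference in the population.
print(round(t_stat, 2), round(p_value, 4))
```

Here the p-value is the probability of seeing a difference at least this large purely by chance, which is what lets us infer from the sample to the population.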
4. What is the General Linear Model (GLM)? Why does it matter? Answer:
The General Linear Model (GLM) underlies most of the statistical analyses that are used in applied and social research. It is the foundation for the t-test, Analysis of Variance (ANOVA), Analysis of Covariance (ANCOVA), regression analysis, and many of the multivariate methods including factor analysis, cluster analysis, multidimensional scaling, discriminant function analysis, canonical correlation, and others. Because of its generality, the model is important for students of social research. Although a deep understanding of the GLM requires some advanced statistics training, I will attempt here to introduce the concept and provide a non-statistical description.
When there is a linear relationship among the variables, it can be expressed with the general linear model, which writes each score as a weighted sum of predictors plus error: y = b0 + b1x1 + b2x2 + ... + e.
5. Compare and contrast parametric and nonparametric statistics. Why and in what types of cases would you use one over the other?
Answer: Nonparametric statistics (also called “distribution-free statistics”) are those that can describe some attribute of a population, test hypotheses about that attribute, its relationship with some other attribute, or differences on that attribute across populations, across time, or across related constructs, while requiring no assumptions about the form of the population data distribution(s) and no interval-level measurement.
In the literal meaning of the terms, a parametric statistical test is one that makes assumptions about the parameters (defining properties) of the population distribution(s) from which one's data are drawn, while a non-parametric test is one that makes no such assumptions. In this strict sense, "non-parametric" is essentially a null category, since virtually all statistical tests assume one thing or another about the properties of the source population(s).
In practice, parametric statistics are used when the data are measured at the interval or ratio level and the distributional assumptions (such as normality) are plausible; non-parametric statistics are used with ordinal or nominal data, small samples, or clear violations of those assumptions.
6. Why is it important to pay attention to the assumptions of the statistical test? What are your options if your dependent variable scores are not normally distributed?
When you do a statistical test, you are, in essence, testing if the assumptions are valid. We are typically only interested in one, the null hypothesis. That is, the assumption that the difference is zero (actually it could test if the difference were any amount). But the null hypothesis is only one of many assumptions.
A second assumption is that the data are normally distributed. One remarkable thing about the ‘real’ world is that data are often normally distributed. Height, IQ and many, many other variables are. In general, if a variable is affected by many, many different factors, it will be normally distributed. We even have tests to determine if the data are normal. Unfortunately, almost all variables have a slight departure from normality.
When the dependent-variable scores are not normally distributed, the main options are to transform the variable (for example with a log or square-root transform) so that it better approximates a normal distribution, or to use a non-parametric test that does not assume normality.
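Both options can be sketched in Python. The snippet below simulates a right-skewed dependent variable (an assumption for the example; reaction times often look like this), checks normality with the Shapiro-Wilk test, and tries a log transform:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical right-skewed dependent variable (e.g., reaction times).
skewed = rng.lognormal(mean=0.0, sigma=0.8, size=200)

# Shapiro-Wilk: a small p-value indicates a departure from normality.
_, p_raw = stats.shapiro(skewed)

# Option 1: transform the scores; a log transform often tames right skew.
_, p_log = stats.shapiro(np.log(skewed))

# Option 2 (not shown): switch to a non-parametric test such as the
# Mann-Whitney U test, which does not assume normality.
print(round(p_raw, 4), round(p_log, 4))
```

The raw scores fail the normality test, while the log-transformed scores are consistent with a normal distribution, illustrating why transformation is a common first remedy.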
1. What does p = .05 mean? What are some misconceptions about the meaning of p =.05? Why are they wrong? Should all research adhere to the p = .05 standard for significance? Why or why not?
p = 0.05 means that, if the null hypothesis were true, results at least as extreme as those observed would occur by chance only 5% of the time. It is a threshold popularized by Fisher for judging the significance or non-significance of a result. A common misconception is that p = .05 gives the probability that the null hypothesis is true, or the probability that the result is due to chance; in fact it is a conditional probability computed assuming the null hypothesis. Fisher never derived 0.05 from theory; it is essentially an arbitrary convention. Therefore it is not necessary for all research to adhere to the 0.05 standard: with proper reasoning, a stricter or more lenient threshold can be justified for a given study.
2. Compare and contrast the concepts of effect size and statistical significance. Answer:
Effect size is a simple way of quantifying the difference between two groups that has many advantages over the use of tests of statistical significance alone. Effect size emphasizes the size of the difference rather than confounding it with sample size. Statistical significance, by contrast, depends heavily on sample size: with a large enough sample, a trivially small effect can be statistically significant, while in a small sample a large effect may fail to reach significance. Effect sizes are not judged against the .05 threshold; that threshold applies to p-values.
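A standard effect-size measure is Cohen's d, the standardized mean difference. The sketch below implements it with made-up scores (the data are assumptions for illustration):

```python
import numpy as np

def cohens_d(x, y):
    """Standardized mean difference using the pooled standard deviation."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

# Hypothetical scores: d depends on the size of the difference relative to
# the spread, not on how many participants were tested -- unlike the p-value.
treatment = [84, 86, 88, 90, 92]
control   = [80, 82, 84, 86, 88]
print(round(cohens_d(treatment, control), 2))   # ≈ 1.26, a large effect
```

Reporting d alongside the p-value separates "how big is the difference" from "how sure are we it is not chance".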
3. What is the difference between a statistically significant result and a clinically or “real world” significant result? Give examples of both.
Answer: A statistically significant result is one that is unlikely to have occurred by chance under the null hypothesis; a clinically or “real world” significant result is one large enough to matter in practice. For example, a very large trial might find that a new teaching method raises test scores by a statistically significant half a point, a gain too small to have any real-world importance. Conversely, a small pilot study might observe a substantial, clinically meaningful improvement in patients that nevertheless fails to reach statistical significance because of the small sample.
4. What is NHST? Describe the assumptions of the model.
Null Hypothesis Significance Testing (NHST) is a family of statistical methods used to decide whether an observed effect of some factor on our data is larger than would be expected by chance alone.
Assumptions used in NHST:
The null hypothesis specifies the absence of an effect (for example, no selection, no drift), even though such exact null hypotheses are rarely true in reality.
The null hypothesis is a well-informed hypothesis formulated a priori, before the data are examined.
The sampling is random and the test's distributional assumptions hold, which allows the analysis and comparison of models.
5. Describe and explain three criticisms of NHST.
For NHST, the two independent dimensions of measurement are (1) the strength of an effect, measured by the distance of a point estimate from zero; and (2) the uncertainty we have about the effect's true strength, measured by something like the expected variance of our measurement device. A first criticism is that these two dimensions are reduced into a single p-value in a way that discards much of the meaning of the original data. A second is that the 0.05 cutoff has no theoretical derivation; it is an arbitrary convention. A third is that NHST encourages dichotomous accept/reject thinking, treating results just above and just below the threshold as fundamentally different.
6. Describe and explain two alternatives to NHST. What do their proponents consider to be their advantages?
Answer: Today there is a need to supplement NHST with methods that convey more about the data. There are various alternatives available to NHST.
Two alternatives to NHST
The first is statistical power analysis. Power is the long-run probability of rejecting the null hypothesis, given the population effect size, alpha, and sample size, when the null hypothesis is in fact false. The influence of sample size on significance has long been understood, and proponents argue that planning for power gives results a clearer long-run interpretation than a bare p-value.
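The dependence of power on effect size, alpha, and sample size can be sketched directly. The function below uses a normal approximation for a two-tailed two-sample test (the approximation and the example values are my own choices, not from the text):

```python
import math
from scipy import stats

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-tailed two-sample test (normal approximation)
    for standardized effect size d with n observations per group."""
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    ncp = d * math.sqrt(n_per_group / 2)   # noncentrality of the test statistic
    return 1 - stats.norm.cdf(z_alpha - ncp) + stats.norm.cdf(-z_alpha - ncp)

# Power grows with sample size for a fixed "medium" effect (d = 0.5).
for n in (20, 50, 100):
    print(n, round(power_two_sample(0.5, n), 2))
```

For d = 0.5 the power climbs from roughly a third at n = 20 per group to above 0.9 at n = 100 per group, which is exactly the long-run perspective power analysis offers.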
The second method uses graphical presentation: a bar graph of the data across a number of years is used to examine and predict results. Proponents consider this reliable because it tracks the data over time. This procedure is also known as PPE.
7. Which type of analysis would best answer the research question you stated in Activity 1? Justify your answer.
You are a researcher interested in addressing the question: does smiling cause mood to rise (i.e., become more positive)? Sketch between-participants, within-participants, and matched-participants designs that address this question and discuss the advantages and disadvantages of each to yielding data that help you answer the question. Describe and discuss each design in 4-5 sentences. To research whether smiling has an impact on mood, a researcher can employ different designs to reach his or her findings. The impact of a smile on mood change will be the hypothesis of the research, and the researcher can use between-participants, within-participants, and matched-participants designs. These designs are applied differently, but they all address the same question. In a between-participants design, the respondents are divided into two groups. One group tests whether a smile changes the mood (McKenney & Reeves, 2012): they smile at different people and the findings are recorded. The other group does not smile at people, and those findings are recorded. After the sample period is over, the findings from both groups indicate whether smiling is a mood changer.
In a within-participants design, the same respondents are used in every condition of the study. In this study, to find out if smiling causes mood to change, the same respondents are used throughout, and the study is done in two stages. First, the respondents smile at people to see whether their mood will change (McKenney & Reeves, 2012). Then, after some time, the same respondents do not smile, to see whether that has an impact on a person's mood. These findings are also recorded. In a matched-participants design, different participants serve in each group, but each participant in one group is matched with a participant in the other group on relevant characteristics, so the groups are comparable before the smiling manipulation is applied.
The findings of the three designs can then be compared to see whether smiling has an impact on a person's mood. The advantage of the between-participants design is that it gives the study variety and takes a short time, as the study is done concurrently by all participants; the disadvantage is that it costs more because more people are needed. The advantage of the within-participants design is that it costs less, since the same participants are used; a disadvantage is that it takes longer, as the same participants take part in two stages. The advantage of the matched-participants design is that it combines the best of the within-participants and between-participants designs.
Since we have to check the significance of the effect in each of the three designs, NHST is, in the present scenario, the most practical tool for judging significance: it lets us test whether mood changes and which design yields the stronger evidence.
Jackson, even-numbered Chapter Exercises, pp. 308-310.
2. What is an F-ratio? Define all the technical terms in your answer.
Answer: The ratio used to test whether the variances of two independent samples are equal is known as the F-ratio.
The F-ratio is given as F = s1^2 / s2^2,
where s1^2 is the larger sample variance and s2^2 is the smaller sample variance.
If the F-ratio is statistically nonsignificant, we take the variances to be homogeneous and can apply the standard t-test for the difference of means. However, if the F-ratio turns out to be statistically significant, we apply an alternative t-test, such as the Cochran and Cox method.
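The F-ratio described above can be computed directly. This sketch uses two invented samples (the data and equal group sizes are assumptions for the example):

```python
import numpy as np
from scipy import stats

# Hypothetical samples; F is the larger sample variance over the smaller one.
sample1 = np.array([23., 25, 28, 30, 22, 27, 26, 29])
sample2 = np.array([20., 21, 22, 20, 23, 21, 22, 19])

s1, s2 = sample1.var(ddof=1), sample2.var(ddof=1)
f_ratio = max(s1, s2) / min(s1, s2)

# Compare against the F distribution with (n1 - 1, n2 - 1) degrees of freedom.
df1, df2 = len(sample1) - 1, len(sample2) - 1
p_value = 2 * (1 - stats.f.cdf(f_ratio, df1, df2))   # two-tailed
print(round(f_ratio, 2), round(p_value, 3))
```

If this p-value is large, the homogeneity-of-variance assumption stands and the standard t-test applies; if it is small, an alternative t-test is warranted.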
3. What is error variance and how is it calculated?
Error variance is the outcome of non-systematic differences between participants; it is the part of the total variance in a group of data that remains unaccounted for even when the systematic variance is taken away.
Error variance = Total variance - Systematic variance
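This partition can be verified numerically. The sketch below uses made-up scores for three groups (the data are assumptions for illustration) and shows that the total sum of squares splits exactly into a systematic (between-group) part and an error (within-group) part:

```python
import numpy as np

# Hypothetical scores for three treatment groups.
groups = [np.array([4., 5, 6]), np.array([7., 8, 9]), np.array([10., 11, 12])]
all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()

# Total variability partitions into systematic (between-group)
# and error (within-group) components.
ss_total = ((all_scores - grand_mean) ** 2).sum()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

print(ss_total, ss_between, ss_within)   # 60.0 = 54.0 + 6.0
```

Here the within-group (error) component is what remains after the systematic differences between group means are removed, matching the formula above.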
4. Why would anyone ever want more than two (2) levels of an independent variable?
A researcher might want more than two levels of an independent variable in order to compare several amounts of a treatment (for example, several dosage levels) rather than just its presence or absence, and to detect non-linear relationships between the independent and dependent variables that a two-level design would miss. Categorical attributes such as sex (male or female) naturally have only two levels and can be entered into a regression with dummy coding.
5. If you were doing a study to see if a treatment causes a significant effect, what would it mean if within groups, variance was higher than between groups variance? If between groups variance was higher than within groups variance? Explain your answer
If within-groups variance is higher than between-groups variance, the differences among group means are small relative to error variance, meaning the treatment had no detectable effect. If between-groups variance is higher than within-groups variance, the group means differ by more than error alone would produce, which is evidence that the treatment had a real effect.
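A one-way ANOVA makes this comparison explicit: the F statistic is the ratio of between-groups to within-groups variance. This sketch uses invented dosage groups (the data are assumptions for the example):

```python
from scipy import stats

# Hypothetical treatment groups. Group means differ a lot relative to
# the spread inside each group, so between-groups variance dominates.
placebo   = [3, 4, 3, 5, 4]
low_dose  = [6, 7, 6, 8, 7]
high_dose = [9, 10, 9, 11, 10]

f_stat, p_value = stats.f_oneway(placebo, low_dose, high_dose)
# A large F means between-groups variance >> within-groups variance,
# i.e. evidence that the treatment had a real effect.
print(round(f_stat, 1), p_value < 0.05)
```

Had the group means been similar while scores inside each group varied widely, F would fall below 1 and no effect could be claimed.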
6. What is the purpose of a post-hoc test with analysis of variance? Answer:
Post hoc tests in the analysis of variance are used in situations where the researcher has already found a significant omnibus F-test for a factor made up of 3 or more means, and further exploration of the differences among means is required to learn which means are significantly different from each other.
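One simple post-hoc approach is pairwise t-tests with a Bonferroni correction (Tukey's HSD is another common choice). The sketch below uses invented group data, with "drug_a" and "drug_b" as hypothetical labels:

```python
from itertools import combinations
from scipy import stats

# Hypothetical groups after a significant omnibus F-test.
groups = {
    "control": [3, 4, 3, 5, 4],
    "drug_a":  [6, 7, 6, 8, 7],
    "drug_b":  [6, 8, 7, 7, 6],
}

pairs = list(combinations(groups, 2))
alpha_corrected = 0.05 / len(pairs)   # Bonferroni correction for 3 comparisons

for a, b in pairs:
    p = stats.ttest_ind(groups[a], groups[b]).pvalue
    print(f"{a} vs {b}: significant = {p < alpha_corrected}")
```

Both drugs differ from control, but not from each other; that is exactly the finer-grained information the omnibus F-test alone cannot provide.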
7. What is probabilistic equivalence? Why is it important?
Probabilistic equivalence means that we know perfectly the odds of finding a pretest difference between the two groups. It is important when dealing with human beings because no two individuals or groups are ever exactly equal; random assignment gives us equivalence in a probabilistic sense rather than a literal one.
Jackson, even-numbered Chapter Exercises, pp. 335-337.
2. Explain the difference between multiple independent variables and multiple levels of independent variables. Which is better?
The general purpose of multivariate analysis of variance (MANOVA) is to determine whether multiple levels of independent variables on their own or in combination with one another have an effect on the dependent variables. MANOVA requires that the dependent variables meet parametric requirements.
MANOVA is used under the same circumstances as ANOVA but when there are multiple dependent variables as well as independent variables within the model which the researcher wishes to test. MANOVA is also considered a valid alternative to the repeated measures ANOVA when sphericity is violated.
Like an ANOVA, MANOVA examines the degree of variance within the independent variables and determines whether it is smaller than the degree of variance between the independent variables. If the within-subjects variance is smaller than the between-subjects variance, it means the independent variable has had a significant effect on the dependent variables. There are two main differences between MANOVAs and ANOVAs. The first is that MANOVAs are able to take into account multiple independent and multiple dependent variables within the same model, permitting greater complexity. Secondly, rather than using the F value as the indicator of significance, a number of multivariate measures are used.
In MANOVAs, the independent variables relevant to each main effect are weighted to give them priority in the calculations performed. In interactions, the independent variables are equally weighted to determine whether or not they have an additive effect in terms of the combined variance they account for in the dependent variable(s).
The main effects of the independent variables and of the interactions are examined with all else held constant. The effect of each of the independent variables is tested separately. Any multiple interactions are tested separately from one another and from any significant main effects. Assuming there are equal sample sizes both in the main effects and the interactions, each test performed will be independent of the next or previous calculation (except for the error term, which is calculated across the independent variables).
3. What is blocking and how does it reduce “noise”? What is a disadvantage of blocking?
The Randomized Block Design is research design's equivalent to stratified random sampling. Like stratified sampling, randomized block designs are constructed to reduce noise or variance in the data (see Classifying the Experimental Designs). How do they do it? They require that the researcher divide the sample into relatively homogeneous subgroups or blocks (analogous to "strata" in stratified sampling). Then, the experimental design you want to implement is implemented within each block or homogeneous subgroup. The key idea is that the variability within each block is less than the variability of the entire sample. Thus each estimate of the treatment effect within a block is more efficient than estimates across the entire sample. And, when we pool these more efficient estimates across blocks, we should get an overall more efficient estimate than we would without blocking.
How Blocking Reduces Noise
So how does blocking work to reduce noise in the data? To see how it works, you have to begin by thinking about the non-blocked study. The figure shows the pretest-posttest distribution for a hypothetical pre-post randomized experimental design. We use the 'X' symbol to indicate a program group case and the 'O' symbol for a comparison group member. You can see that for any specific pretest value, the program group tends to outscore the comparison group by about 10 points on the posttest. That is, there is about a 10-point posttest mean difference.
Now, let's consider an example where we divide the sample into three relatively homogeneous blocks. To see what happens graphically, we'll use the pretest measure to block. This will assure that the groups are very homogeneous. Let's look at what is happening within the third block. Notice that the mean difference is still the same as it was for the entire sample: about 10 points within each block. But also notice that the variability of the posttest is much less than it was for the entire sample. Remember that the treatment effect estimate is a signal-to-noise ratio. The signal in this case is the mean difference. The noise is the variability. The two figures show that we haven't changed the signal in moving to blocking: there is still about a 10-point posttest difference. But we have changed the noise: the variability on the posttest is much smaller within each block than it is for the entire sample. So, the treatment effect will have less noise for the same signal.
It should be clear from the graphs that the blocking design in this case will yield the stronger treatment effect. But this is true only because we did a good job assuring that the blocks were homogeneous. If the blocks weren't homogeneous (if their variability was as large as the entire sample's), we would actually get worse estimates than in the simple randomized experimental case.
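The noise-reduction argument above can be demonstrated with a small simulation. This is a hedged sketch (the distributions and block counts are my own choices): posttest scores are driven mostly by pretest scores, and blocking on the pretest shrinks the spread inside each block:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical data: pretest scores drive most of the posttest variability.
pretest = rng.normal(50, 10, 300)
posttest = pretest + rng.normal(0, 2, 300)   # small extra noise

# Whole-sample posttest spread vs. spread inside pretest-based blocks.
whole_sd = posttest.std()
cuts = np.quantile(pretest, [1 / 3, 2 / 3])
blocks = np.digitize(pretest, cuts)          # 3 homogeneous blocks (0, 1, 2)
block_sds = [posttest[blocks == b].std() for b in range(3)]

# Each block's spread sits well below the whole sample's spread, so a
# treatment effect estimated within blocks faces less "noise".
print(round(whole_sd, 1), [round(s, 1) for s in block_sds])
```

The same logic runs in reverse: if the blocking variable were unrelated to the posttest, the within-block spread would match the whole-sample spread and blocking would buy nothing.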
4. What is a factor? How can the use of factors benefit a design? Answer:
Probably the easiest way to begin understanding factorial designs is by looking at an example. Let's imagine a design where we have an educational program where we would like to look at a variety of program variations to see which works best. For instance, we would like to vary the amount of time the children receive instruction, with one group getting 1 hour of instruction per week and another getting 4 hours per week. And, we'd like to vary the setting, with one group getting the instruction in-class (probably pulled off into a corner of the classroom) and the other group being pulled out of the classroom for instruction in another room. We could think about having four separate groups to do this.
With factorial designs, we don't have to compromise when answering these questions. We can have it both ways if we cross each of our two time-in-instruction conditions with each of our two settings. Let's begin by doing some defining of terms. In factorial designs, a factor is a major independent variable. In this example we have two factors: time in instruction and setting.
A level is a subdivision of a factor. In this example, time in instruction has two levels and setting has two levels. Sometimes we depict a factorial design with a numbering notation. In this example, we can say that we have a 2 × 2 (spoken "two-by-two") factorial design. In this notation, the number of numbers tells you how many factors there are and the number values tell you how many levels. If I said I had a 3 × 4 factorial design, you would know that I had 2 factors and that one factor had 3 levels while the other had 4. Order of the numbers makes no difference and we could just as easily term this a 4 × 3 factorial design. The number of different treatment groups that we have in any factorial design can easily be determined by multiplying through the number notation. For instance, in our example we have 2 × 2 = 4 groups. In our notational example, we would need 3 × 4 = 12 groups.
5. Explain main effects and interaction effects.
The Main Effects
A main effect is an outcome that is a consistent difference between levels of a factor. For instance, we would say there's a main effect for setting if we find a statistical difference between the averages for the in-class and pull-out groups at all levels of time in instruction. The first figure depicts a main effect of time: for all settings, the 4 hours/week condition worked better than the 1 hour/week condition. It is also possible to have a main effect for setting (and none for time).
In the second main effect graph we see that in-class training was better than pull-out training for all amounts of time.
Finally, it is possible to have a main effect on both variables simultaneously, as depicted in the third main effect figure. In this instance 4 hours/week always works better than 1 hour/week, and the in-class setting always works better than pull-out.
If we could only look at main effects, factorial designs would be useful. But, because of the way we combine levels in factorial designs, they also enable us to examine the interaction effects that exist between factors. An interaction effect exists when differences on one factor depend on which level you are on of another factor. It's important to recognize that an interaction is between factors, not levels. We wouldn't say there's an interaction between 4 hours/week and in-class treatment. Instead, we would say that there's an interaction between time and setting, and then we would go on to describe the specific levels involved.
How do you know if there is an interaction in a factorial design? There are three ways. First, when you run the statistical analysis, the statistical table will report on all main effects and interactions. Second, you know there's an interaction when you can't talk about the effect of one factor without mentioning the other factor. If you can say at the end of your study that time in instruction makes a difference, then you know that you have a main effect and not an interaction (because you did not have to mention the setting factor when describing the results for time). On the other hand, when you have an interaction it is impossible to describe your results accurately without mentioning both factors. Finally, you can always spot an interaction in the graphs of group means: whenever there are lines that are not parallel, an interaction is present! If you check out the main effect graphs above, you will notice that all of the lines within a graph are parallel. In contrast, in all of the interaction graphs, you will see that the lines are not parallel.
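The parallel-lines rule can be checked numerically. A minimal sketch for a 2 × 2 design, using made-up cell means (the function name and the numbers are illustrative, not from any study):

```python
def interaction_present(cell_means, tol=1e-9):
    """cell_means[time][setting] holds the group mean for one cell
    of a 2 x 2 design. The plotted lines are parallel exactly when
    the setting difference is the same at both levels of time."""
    diff_at_time1 = cell_means[0][0] - cell_means[0][1]
    diff_at_time2 = cell_means[1][0] - cell_means[1][1]
    return abs(diff_at_time1 - diff_at_time2) > tol

# Main effects only: the setting difference is 10 at both 1 hr and 4 hr.
parallel = [[60, 50], [80, 70]]
# Interaction: the pull-out group catches up at 4 hours/week.
crossed = [[60, 40], [70, 70]]
print(interaction_present(parallel))  # False
print(interaction_present(crossed))   # True
```

This mirrors the graphical test in the text: equal differences mean parallel lines (main effects only), unequal differences mean non-parallel lines (an interaction).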
6.How does a covariate reduce noise?
One of the most important ideas in social research is how we make a statistical adjustment: adjusting one variable based on its covariance with another variable. If you understand this idea, you'll be well on your way to mastering social research. What I want to do here is show you a series of graphs that illustrate pictorially what we mean by adjusting for a covariate.
Let's begin with data from a simple ANCOVA design as described above. The first figure shows the pre-post bivariate distribution. Each "dot" on the graph represents the pretest and posttest score for an individual. We use an 'X' to signify a program or treated case and an 'O' to describe a control or comparison case. You should be able to see a few things immediately.
First, you should be able to see a whopping treatment effect! It's so obvious that you don't even need statistical analysis to tell you whether there's an effect (although you may want to use statistics to estimate its size and probability). How do I know there's an effect? Look at any pretest value (a value on the horizontal axis). Now, look up from that value; you are looking up the posttest scale from lower to higher posttest scores. Do you see any pattern with respect to the groups? It should be obvious to you that the program cases (the 'X's) tend to score higher on the posttest at any given pretest value. Second, you should see that the posttest variability has a range of about 70 points.
Now, let's fit some straight lines to the data. The lines on the graph are regression lines that describe the pre-post relationship for each of the groups. The regression line shows the expected posttest score for any pretest score. The treatment effect is even clearer with the regression lines: the line for the treated group is about 10 points higher than the line for the comparison group at any pretest value.
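The per-group regression lines can be fitted with ordinary least squares. A minimal sketch on fabricated pre/post scores in which the treated cases sit about 10 points higher, mirroring the description above:

```python
def fit_line(xs, ys):
    """Least-squares intercept and slope for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept, slope

pre = [40, 50, 60, 70, 80]
post_control = [42, 51, 62, 70, 81]   # 'O' comparison cases
post_program = [52, 61, 72, 80, 91]   # 'X' program cases, ~10 points higher
a_c, b_c = fit_line(pre, post_control)
a_p, b_p = fit_line(pre, post_program)
# With parallel lines, the vertical gap between them at any pretest
# value is the estimate of the treatment effect (~10 points here).
print(abs(round(b_p - b_c, 6)))  # 0.0 (parallel lines)
print(round(a_p - a_c, 1))       # 10.0 (treatment effect)
```

The data here are invented for illustration; ANCOVA proper additionally pools the two groups to estimate a common slope before comparing adjusted means.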
7.Describe and explain three trade-offs present in experiments
Sol:
1. People Make Trade-offs:
Economic goods and services are limited, while the desire to use these goods and services seems limitless. There are simply not enough goods and services to satisfy even a small fraction of everyone's consumption desires. Thus, societies must decide how to use these limited resources and distribute them among different people. This means that to get one thing we like, we usually have to give up another thing we also like. Making decisions requires trading off one goal against another.
Consider a society that decides to spend more on national defense to protect its shores from foreign aggressors: the more the society spends on national defense, the less it can spend on personal goods to raise its standard of living at home. Or consider the trade-off between clean air and a high level of income. Laws that require firms to reduce pollution have the cost of reducing the incomes of the firms' owners, workers, and customers, while pollution regulations give society the benefit of a cleaner environment and the improved health that comes with it.
Another trade-off society faces is between efficiency and equity. Efficiency is a society's ability to get the most effective use of its resources in satisfying people's wants and needs. Equity denotes the fair distribution of the benefits of those resources among society's members.
Quasi-Experiments
This study addresses the question of whether French-English-speaking children respond as positively as English-speaking children to the advertisements on an American TV program.
Answer 2
On the basis of the research analysis, it can be stated that Goldberg conducted this study to support his view that commercial TV programming has a considerable impact on children. In conducting this study, Goldberg drew on his previous research, which examined children's snack-food choices in response to commercial TV advertising. That earlier study was designed to contribute to the theory that could be used in conducting the next study. Its results also contribute to theory because they provide a base for further study of children's behavior and of the impact of commercial TV messages on children's willingness to select snacks (Goldberg, 1990). Goldberg's next study also covered children's behavior toward toy advertisements and the impact of the media. Hence, the results of the previous study contributed to the theory behind the second piece of research.
This study addresses how the TV program and its commercial advertising manipulate children. Goldberg's approach held that toy commercials are more persuasive than the children's mothers, which is how this commercial advertising makes its impact. In analyzing this approach, he formed a sample group of five 8-year-old children and showed them a cartoon network each day with 5-minute commercial ads for orange juice and snacks (Goldberg, 1990). He collected the children's responses continuously for two weeks. At the end of every day, he offered snacks and juices from different companies for the children to choose from. Hence, their choices of drinks and snacks reflected their TV experience.
Several dependent and independent variables are used in this research study. The children's language and the ACTV program are the independent variables: language is an independent variable between two groups, and the ACTV program works as an independent variable within a group (Goldberg, 1990). The study includes children's toys and cereals as dependent variables, because the purchase of both products depends on the children and their selection process.
In conducting this research study, Goldberg used a quasi-experimental design with a multiple-choice recognition test to assess children's awareness of the toys shown on the American network. This research design also helped Goldberg formulate the hypotheses needed to obtain the study's results. Moreover, this design was well suited to producing a significant result, because it allows researchers to determine causal relationships and to conduct the research process with the help of a survey.
The researchers addressed several threats to the external and internal validity of their design. They addressed the threat of testing, in which the results of a second test are affected by the first test; for this threat, they used the results of their past study, conducted in the same context, to analyze children's behavior toward TV commercials. The researchers also addressed instrumentation threats related to the observation of the children, because a change in observation could change the outcomes of the research (Ohlund & Yu, n.d.); hence, they used the same research process, without any change, for two weeks. To address external threats, the researchers also used a unique TV program feature with commercial ads, which reduced the effects of history and of the research setting.
However, the researchers did not address the reactive effects of the experimental process and its arrangements in comparison with natural settings. This threat affected the interpretation of their findings: a natural setting for this study might involve a large group rather than a small one, and it would then be a very tough task to observe the children's casual behavior in response to the TV program, given their different cultures and languages (Goldberg, 1990).
This study describes the causal effect of exposing children to TV commercial advertisements under laboratory conditions, but the same effect could also be observed entirely within a natural process, without the help of any introduced causal agents, because the external environment also influences and manipulates people's minds. For children in natural experiments, this process could change as well, because they learn from the acts of others: if other children are using something, they may select that product whether they have seen the ads or not. Goldberg's conclusion is therefore not convincing.
Goldberg, M. E. (1990). A quasi-experiment assessing the effectiveness of TV advertising directed to children. Journal of Marketing Research, 27(4), 445-454.
Ohlund, B., & Yu, C. (n.d.). Threats to validity of research design. Retrieved from http://web.pdx.edu/~stipakb/download/PA555/ResearchDesign.html
1-Compare and Contrast Internal and External Validity
Internal validity depends on the accuracy of the results. If the sample is not selected by a random collection method, internal validity can suffer, because internal validity depends on the data collection method. External validity likewise depends on the data collection method, but it involves the concept of generalization: whether the results hold for the larger population (Herek, 2012). There are differences between the two. Internal validity deals with the research study itself, without reference to outside research elements. External validity, on the other side, looks at whether one can take the outcomes of the research study and generalize them to a broader perspective; generalization works through external validity (Kimmel, 2007). An example makes this clearer: when researchers conduct a study to test a drug, the test population may not match the target population.
Suppose the research question is "To identify the impact of changes in ecology on the living style of Hispanics." For this research question, external validity is the primary concern, because the question focuses on the whole Hispanic group rather than on a particular subgroup.
But if the research question states "To identify the impact of changes in ecology on the living style of Hispanic teenagers," then it would be very difficult to generalize and simplify the outcomes of this question to the whole Hispanic community. In this situation internal validity is the primary concern, because without it the results of the study cannot be effective (Herek, 2012). To make strong claims about the applicability of findings to a target population, researchers use random sampling, systematic sampling, and cluster sampling to gather information and data from the selected population (Kimmel, 2007). This strategy plays an effective role in generalizing study outcomes, because probability-based sampling tends to yield a high response rate, which helps researchers apply the results of the study more broadly.
2-Compare and Contrast Random Selection and Random Assignment
Random selection and random assignment are closely related terms. Random selection describes how the researcher draws the sample from the whole population relevant to the research study, while random assignment describes the process researchers use to place the sampled people into different groups (Trochim, 2006). Because of this similarity, both random selection and random assignment can be applied in conducting a study. An example makes this clearer: if a researcher draws 50 people from a group of 500 by a random method, that is random sampling (random selection). If the researcher then randomly decides which of those 50 people receive the new treatment, that is random assignment.
In some situations, when the researcher does not use a random sample to select the participants but does randomly allocate the selected people to different treatments, random selection is not applied but random assignment still is (McNabb, 2010).
There are, however, differences between random selection and random assignment. Random selection concerns the sampling process and is directly related to the external validity of the study's results (Trochim, 2006). Random assignment, on the other hand, concerns the study design and is related to the internal validity of the study's results.
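The distinction can be sketched with Python's standard `random` module, using the population and group sizes from the example above (the names are made up):

```python
import random

random.seed(42)  # reproducible illustration

population = [f"person_{i}" for i in range(500)]

# Random selection: draw a sample of 50 from the population of 500.
# This supports external validity (the sample represents the population).
sample = random.sample(population, 50)

# Random assignment: split the 50 sampled people between treatment
# and control. This supports internal validity (comparable groups).
random.shuffle(sample)
treatment, control = sample[:25], sample[25:]

print(len(sample), len(treatment), len(control))  # 50 25 25
```

A study can use either step, both, or neither; the two operations answer different validity questions.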
3-Sample Size and the Likelihood of Statistical Significance
In statistics, significance is the term used to establish whether a difference or relationship between two groups is real, and it depends on the sample size (StatPac Inc., 2013). For example, suppose 50 people are selected for a test and scores are compared by sex: the males score 100 and the females score 98. A t-test reports the difference as significant at .001, even though 100 and 98 are not far apart. This result shows that even a small difference between groups can be statistically significant without necessarily making a negative impact on the outcomes of the study.
Therefore, sample size has a drastic impact on the outcomes of a study, whatever population or cause the study is conducted for. Sample size is also tied to error. With a small sample, the likelihood of finding a statistically significant result suffers, because possible errors cannot be accounted for, and this affects the outcomes of the study. The error can be decreased when the researcher takes a large group (StatPac Inc., 2013).
Hence, there is a relationship between sample size and the likelihood of statistical significance when comparing two groups. With a large sample size, Type I and Type II errors can be accounted for and reduced, provided the rest of the study is constructed carefully. A large sample also allows researchers to work at a more demanding significance level (Biau, Kernéis & Porcher, 2008). Therefore, as the sample size increases, the likelihood of finding a statistically significant relationship increases as well, because a large sample better represents the characteristics of the population.
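The point can be illustrated with a quick standard-library calculation: the same 2-point mean difference (100 vs. 98, with an assumed standard deviation of 5, a number chosen purely for illustration) is far from significant with 10 people per group but highly significant with 200. A two-sample z-test is used here as a simplified stand-in for the t-test:

```python
from math import sqrt
from statistics import NormalDist

def two_sample_p(mean_diff, sd, n_per_group):
    """Two-sided p-value for a two-sample z-test with equal group
    sizes and a common (known) standard deviation -- a simplification
    of the t-test that is adequate for showing the role of n."""
    z = mean_diff / (sd * sqrt(2 / n_per_group))
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Same 2-point difference (100 vs. 98), sd = 5, different sample sizes.
print(two_sample_p(2, 5, 10))    # ~0.37: not significant
print(two_sample_p(2, 5, 200))   # well below .001: highly significant
```

Only the sample size changes between the two calls; the observed difference is identical.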
4-Probability and Non-probability Sampling
Probability Sampling: This method selects participants through a completely random process (Hackley, 2003). It is used to ensure that the collected sample is not hand-picked but truly random.
The main advantage of this sampling method is its fairness: every participant has an equal opportunity of being selected before the sample is gathered. It thereby improves the validity of the research outcomes. It is very effective for smaller populations, and the sample can be free from bias (McNabb, 2010).
Because this sampling method depends entirely on the selected people, they could cheat, or the research could face possible flaws that affect the fairness of the sampling model (Hackley, 2003). It is also time-consuming, so if the sample group is large it requires a great deal of patience and time.
Non-probability Sampling: Non-probability sampling differs from probability sampling in that the sample is selected by a systematic, non-random method. Like probability sampling, it is effective for small populations (McNabb, 2010). Some advantages and disadvantages of the non-probability sampling method follow.
This sampling method is effective because it helps the researcher target a specific group of people (Hackley, 2003).
The main disadvantage of this method is its bias: because people are selected from a similar group, their views may not represent the views of the whole population.
Calculate the sample size needed given these factors:
one-tailed t-test with two independent groups of equal size
small effect size (see Piasta, S.B., & Justice, L.M., 2010)
beta = .2
Assume that the result is a sample size beyond what you can obtain. Use the compromise function to compute alpha and beta for a sample half the size. Indicate the resulting alpha and beta. Present an argument that your study is worth doing with the smaller sample.
First, in Test family we select: t tests.
In Statistical test we choose: Means: Difference between two independent means (two groups).
In Type of power analysis we choose: A priori: Compute required sample size, given alpha, power, and effect size.
The input parameters are:
Tail(s): One
Effect size d: 0.2 (a small effect size)
α err prob: 0.05
Power (1 − β err prob): 1 − 0.2 = 0.8, since beta = 0.2
Allocation ratio N2/N1: 1
Clicking Calculate gives the required sample size, which is 310 per group.
Assuming this is beyond what we can obtain, we switch the Type of power analysis to Compromise: Compute implied α & power, given the β/α ratio, sample size, and effect size, and enter half the sample size, 310/2 = 155 per group. Clicking Calculate then gives the implied alpha and beta for the half-size sample; with a β/α ratio of 1, the two error probabilities come out equal. The study is still worth doing with the smaller sample: balancing alpha and beta treats both kinds of error as equally costly rather than privileging a fixed .05 alpha, the halved sample greatly reduces cost and recruitment burden, and the results can serve as a pilot estimate of the effect for a later, larger study.
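The a priori and compromise calculations can be approximated in plain Python with normal-distribution quantiles. This is a sketch of the logic under a normal approximation, not G*Power's exact t-based algorithm, and the function names are made up:

```python
from math import ceil, sqrt
from statistics import NormalDist

_z = NormalDist().inv_cdf
_cdf = NormalDist().cdf

def n_per_group(d, alpha=0.05, power=0.80, tails=1):
    """A priori analysis: per-group n for a two-independent-means
    test, via the normal approximation to the t-test."""
    return ceil(2 * ((_z(1 - alpha / tails) + _z(power)) / d) ** 2)

def compromise_alpha(d, n, q=1.0, tails=1):
    """Compromise analysis: bisect for alpha such that beta = q * alpha
    at the given per-group n (normal approximation)."""
    delta = d * sqrt(n / 2)            # noncentrality of the test statistic
    lo, hi = 1e-6, 0.5
    for _ in range(200):
        a = (lo + hi) / 2
        beta = _cdf(_z(1 - a / tails) - delta)
        if beta > q * a:
            lo = a                     # beta still too large: loosen alpha
        else:
            hi = a
    return a

print(n_per_group(0.2))        # 310 per group, matching the value above
a = compromise_alpha(0.2, 155)
print(round(a, 2))             # ~0.19: alpha and beta equalized at half n
```

So halving the sample while keeping the beta/alpha ratio at 1 pushes both error rates to roughly 0.19 under this approximation, which is the trade-off the compromise analysis makes explicit.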
a. Calculate the sample size needed given these factors:
• ANOVA (fixed effects, omnibus, one-way)
• small effect size
• alpha =.05
• beta = .2
• 3 groups
b. Assume that the result is a sample size beyond what you can obtain. Use the compromise function to compute alpha and beta for a sample approximately half the size. Give your rationale for your selected beta/alpha ratio. Indicate the resulting alpha and beta. Give an argument that your study is worth doing with the smaller sample.
3. In a few sentences, describe two designs that can address your research question. The designs must involve two different statistical analyses. For each design, specify and justify each of the four factors and calculate the estimated sample size you’ll need. Give reasons for any parameters you need to specify for G*Power.
Test family: F tests
Statistical test: ANOVA (fixed effects, omnibus, one-way)
Type of power analysis: A priori
Input parameters:
Effect size f: 0.25 (the value implied by the output below; note that Cohen's conventional small effect is f = 0.10, which would require a far larger sample)
α err prob: 0.05
Power (1 − β err prob): 0.8
Number of groups: 3
Output parameters:
Noncentrality parameter λ: 9.9375000
Critical F: 3.0540042
Numerator df: 2
Denominator df: 156
Total sample size: 159
Actual power: 0.8048873
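These G*Power figures can be sanity-checked by simulation. The reported noncentrality parameter equals f²N, and 9.9375 = f² × 159 implies f = 0.25, so the sketch below simulates three groups of 53 whose means are spread to give Cohen's f = 0.25 and counts how often the one-way ANOVA F statistic exceeds the reported critical value (seed and simulation count are arbitrary choices):

```python
import random

def f_statistic(groups):
    """One-way ANOVA F: between-group MS over within-group MS."""
    all_x = [x for g in groups for x in g]
    grand = sum(all_x) / len(all_x)
    means = [sum(g) / len(g) for g in groups]
    df_b = len(groups) - 1
    df_w = len(all_x) - len(groups)
    ss_b = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_w = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_b / df_b) / (ss_w / df_w)

random.seed(1)
# Group means (-c, 0, c) with within-group SD 1.0 give Cohen's
# f = SD(means)/SD(error) = c * sqrt(2/3) = 0.25 for c below.
c = 0.25 * (1.5 ** 0.5)
n, crit_f, sims = 53, 3.0540042, 2000   # 3 x 53 = 159 total
rejections = 0
for _ in range(sims):
    groups = [[random.gauss(m, 1.0) for _ in range(n)]
              for m in (-c, 0.0, c)]
    if f_statistic(groups) > crit_f:
        rejections += 1
print(rejections / sims)   # ~0.80, close to the reported actual power
```

Monte Carlo noise means the estimate will wobble around 0.80 from seed to seed, but it should land close to the reported 0.8049.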
Inventory Management KPIs
Name of the School
Version: February 2015
© XXXXXXXXUniversity, 2015
Introduction
For any organization, key performance indicators (KPIs) are the key to determining efficiency; they help organizations set their goals and devise their corporate strategies. KPIs provide quantifiable outputs that act as a single point of truth, help in understanding the performance of different aspects of the company, and can be evaluated against industry benchmarks to ensure consistent growth. Operational KPIs focus on product or service quality and on the production details of the organization. They help optimize the production output of individual factories, manage inventory, and ensure optimum lead times. This data in turn helps top management take the critical business decisions that drive company growth. In the past decade, markets have become much more volatile and competition has intensified with globalization, which has dramatically increased the need for organizations to optimize their inventories in order to reduce costs. The major risks that organizations face if they fail to optimize inventory are:
Inventory needs storage space, so higher-than-required production results in higher inventory, which in turn increases storage costs.
Especially for perishable goods, high inventory means the product spends more time in the warehouse, so products are closer to expiration by the time they reach the retail stores.
Low inventory is also a major concern for organizations: the company may fail to deliver the product to customers on time, giving competitors the opportunity to gain market share.
Also, implementing emergency strategies to replenish stock in cases of low inventory is expensive and has a direct impact on profits. As stated by Sayed, H., the ability to minimize stock while maximizing its availability is the holy grail of managing inventory, and accomplishing it takes a great deal of planning. Hence it is pivotal for organizations to identify the key inventory management parameters (KPIs), understand them, define the tracking mechanism, and take decisions based on these values.
Statement of the Problem
This concept paper is focused entirely on identifying the key performance indicators for inventory and operations management within an organization. The scope of the study is to identify and evaluate the nature, prospects, need, and challenges of identifying the key inventory KPIs, and how each KPI determines and helps improve the production efficiency of the organization.
This concept paper focuses on providing the research design that will define the process of identifying and capturing these inventory KPIs. It explains the approach to be taken in order to answer three key questions:
‘What KPIs help in defining the total inventory to be maintained?’
‘With what frequency does each of these KPIs need to be monitored, and how will this impact inventory management?’
‘Who in the organization needs to be notified and updated with respect to which KPI?’
The documented problem for the company is not knowing which key KPIs to track and the frequency with which each KPI needs to be tracked. Since the inability to effectively manage inventory affects the organization's entire production and operations and has a direct impact on organizational profits, it has been chosen as the key issue for the organization. Optimized inventory helps companies reduce inventory storage costs and optimize their products (Ilies, L., Turdean, A., & Crisan, E., 2009).
R. Anupindi and R. Akella, in the article entitled ‘Diversification under supply chain uncertainty’, define how different operational sourcing strategies help companies hedge against the uncertainty of delivery. This is important because sourcing strategies have a direct impact on inventory levels: too much raw material increases the inventory, while too little slows down production and results in diminishing inventories.
Purpose of the Study
This study is being conducted to assess the middle management's level of awareness of the inventory KPIs and to determine which KPIs need to be quantitatively measured and reported. The study also aims to define the frequency with which each KPI needs to be assessed. This will help the company not only capture the KPI values to manage inventory, but also push notifications in a timely manner, so that the right information reaches the right individual in time to take action and optimize inventory. The main objectives of the study are:
To brainstorm with middle and top management and understand the key KPIs they perceive to be important for operational decisions.
To enlist all the key KPIs that need to be tracked in real time, based on the job role of each individual, and the frequency with which management members need to be notified.
To define the KPI benchmarks that the organization follows.
The research uses both quantitative and qualitative methods, making it a mixed-method study. It will use focus-group discussions to pull out the variables, then build a questionnaire based on those variables to capture data for analysis. We will also use the mass-observation technique to verify that the outputs of the statistical analysis are justified, as stated in Mayer, A. (2014). The key KPIs based on industry standards for the manufacturing industry, such as inventory accuracy, rate of return, and order tracking, have been selected as the constructs for this market research, and the variable scales are built on these constructs. For example, to ask how the construct ‘rate of return’ impacts the inventory stock of the factory, the variables for this construct will be measured on a Likert scale with respect to the cost incurred for one set of respondents, and with respect to the level of impact (low, medium, or high) for another set. The data must come from employees who are actively involved with the organization's sales, distribution, and operations, as well as from external vendors/suppliers and distributors. The research will focus on the organization's factories and inventory warehouses within the United States. The same research can then be conducted for other factories and inventory stores worldwide, which will help evaluate geographic differences and differences in benchmarks.
The research hypothesis focuses on identifying the key inventory KPIs that need to be captured and their importance to each of the stakeholders in the organization. The hypothesis states that there is a correlation between production in the factory, the raw material sourced from suppliers, and the inventory levels in the organization, and that these have a direct impact on the organization's cash flow.
Qualitative questions will be used to understand which key KPIs middle and top management perceive to be important for taking operational decisions, and to enlist all the key KPIs that need to be tracked in real time based on each individual's job role, along with the frequency with which management members need to be notified. Quantitative questions will be used to define the probable revenue impact of each KPI on the organization and to determine the KPI benchmarks that the organization follows.
Hypotheses:
H10 (null hypothesis): A higher frequency of operations (inventory) KPI data is NOT positively related to the efficiency of inventory management or to the efficiency of production.
H1a (alternative hypothesis): A higher frequency of operations (inventory) KPI data is positively related to the efficiency of inventory management or to the efficiency of production.
Definition of Key Terms
Inventory Turnover: The number of times inventory is sold or used in a period, given by the ratio of sales (or cost of goods sold) to average inventory; these figures can be provided by the store stakeholders.
Order Tracking: This enables tracking of goods and raw materials moving in and out. This information can be provided by the internal stakeholder handling the information system.
Inventory Accuracy: Comparison of the inventory levels recorded by bookkeepers with the actual stock levels in the warehouse.
Order Status: Tracking of the real-time status of all orders and categorizing them based on the action taken.
Mass Observation: Carried out by the researcher to validate the inputs gathered during the previous approaches, to make sure that the direction taken in the research is appropriate.
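The first two quantitative definitions translate directly into formulas. A minimal sketch with made-up figures (the function names and numbers are illustrative, not company data):

```python
def inventory_turnover(cost_of_goods_sold, avg_inventory_value):
    """How many times inventory is sold or used in a period."""
    return cost_of_goods_sold / avg_inventory_value

def inventory_accuracy(recorded_counts, actual_counts):
    """Share of items whose recorded stock level matches the
    physical count in the warehouse."""
    matches = sum(r == a for r, a in zip(recorded_counts, actual_counts))
    return matches / len(recorded_counts)

# Illustrative figures for one warehouse and one quarter:
print(inventory_turnover(1_200_000, 300_000))                    # 4.0 turns
print(inventory_accuracy([120, 75, 60, 10], [120, 75, 58, 10]))  # 0.75
```

Benchmarking then amounts to comparing these computed values against the organization's historical standards, as described below.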
Any activity has certain measuring parameters, and key performance indicators are generally taken up as the measuring scale for assessing these activities. Each KPI can be mapped to certain process elements in the organization and needs to be benchmarked against the organization's historical data standard. Once a KPI is defined and mapped to the inventory management process, the measurement protocol needs to be set. Hence, to ensure that the proper KPIs have been chosen, research needs to be conducted that takes into consideration the viewpoints of all stakeholders related to the processes in question.
Research methodology would be covering the systematic process of handling the complete research process of the KPI allocation as per the processes in consideration for inventory management and improvement.
Definition: The identification of inventory management key performance indicators according to the inventory management process flow. Research characteristics: The research would be controlled, as the target respondents would be the stakeholders involved in the inventory management process flow. The validity of the identified KPIs would be judged by the management team allocated to govern the processes. Once the data has been gathered, the key parameters would be quantified (Sayed, 2013).
Research type: The research would be descriptive in nature, as the KPIs cannot be identified at the initial stage; only once the inputs from the different stakeholders are aggregated does convergence toward the key parameters become feasible. The data gathered from the respondents would be quantitative, which is necessary for applying statistical techniques and drawing accurate deductions. This depends on identifying the key factors from the initial variables under consideration, and the variable selection would be based on the attributes of the inventory management processes being studied.
Generally, when it comes to selecting a research approach, researchers have two options: a quantitative approach or a qualitative approach, depending on the type of results needed. Since the result of the current research should be the KPIs used for measuring processes, a quantitative approach is necessary. The types of responses considered in the current research study would be as follows:
1. Questionnaire approach: The respondents would be given a set of questions designed to quantify the variables and eventually help in identifying the factors for KPI selection.
2. Focused group discussion: A streamlined interview process for the internal stakeholders who are involved in executing the inventory management processes.
3. Mass observation: Performed by the researcher to validate the inputs gathered during the previous approaches, to make sure that the direction taken by the research is appropriate.
Operational Definition of Variables/Constructs
This refers to the identification of the variables and constructs necessary for the research; they form the basis of the questionnaire as well as the focus group discussion. The major constructs taken into consideration are as follows:
Sl No. | Variable/Construct | Description and Source
1 | Carrying Cost of Inventory | Inventory storage cost borne by the organization; to be obtained from top management.
2 | Inventory Turnover | Amount of sales relative to inventory held; could be given by the store stakeholders.
3 | Order Tracking | Tracks goods and raw materials moving in and out; information from the internal stakeholders handling the information system.
4 | Inventory to Sales Ratio | Ratio of in-stock items to sales orders; could be taken up from internal stakeholders.
5 | Units Per Transaction | Average number of units purchased per order, compared against target values.
6 | Rate of Return | Rate at which customers or distributors return shipped items; returns are categorized by reason for return.
7 | Order Status | Real-time status of all orders, categorized by the action taken.
8 | Inventory Accuracy | Comparison of inventory levels recorded by bookkeepers with the actual stock levels in the warehouse.
9 | Back Order Rate | Total number of orders that are not fulfilled at the time a customer or distributor places them.
10 | Perfect Order Rate | Number of orders shipped to customers without any incident.
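Several of the constructs above reduce to simple ratios once the underlying figures are available. The sketch below illustrates three of them; the function names and figures are hypothetical, chosen only for demonstration.

```python
# The table's definitions expressed as simple formulas. All figures are
# hypothetical; field names are illustrative, not an actual data model.

def inventory_turnover(cost_of_goods_sold, avg_inventory_value):
    """Sales moved per unit of inventory held (construct 2)."""
    return cost_of_goods_sold / avg_inventory_value

def inventory_accuracy(recorded_units, counted_units):
    """Share of book inventory confirmed by a physical count (construct 8)."""
    return counted_units / recorded_units

def back_order_rate(back_orders, total_orders):
    """Share of orders not fulfilled when placed (construct 9)."""
    return back_orders / total_orders

print(inventory_turnover(500_000, 100_000))  # 5.0 turns per period
print(inventory_accuracy(1_000, 970))        # 0.97
print(back_order_rate(12, 400))              # 0.03
```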
Once the appropriate stakeholders in the organization are selected, the data would be measured using three main techniques. The first would be the focused group discussion, which would help streamline the variables to be selected for the constructs above. This step is necessary so that the researcher gets the variables right from the relevant stakeholders, who are best placed to identify the variables that support selection of the final KPIs (Sayed, 2013).
Once the focus group discussion is done, the questionnaire method would be used to collect data on the selected variables through a list of questions. The questions would cover discrete data sets as well as a Likert scale for qualitative items. The data would be stored in a quantitative tabular sheet to serve as input for statistical methods such as factor analysis; multi-step regression would then be run on the data to see whether a chosen factor actually affects the inventory management results. The target would be to fit the regression equation between the independent and dependent variables as closely as possible (Sayed, 2013).
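The regression step can be sketched for the single-predictor case: fit an ordinary least-squares line relating one candidate KPI to an inventory-management outcome, then check the fit with R². The variable pairing and data below are hypothetical; the full study would use a statistics package for factor analysis and multi-variable regression.

```python
# Minimal OLS sketch for one predictor. All data is hypothetical.

def ols_fit(x, y):
    """Return (slope, intercept) minimising squared error for y ≈ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def r_squared(x, y, slope, intercept):
    """Proportion of variance in y explained by the fitted line."""
    my = sum(y) / len(y)
    ss_res = sum((b - (slope * a + intercept)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1 - ss_res / ss_tot

# Hypothetical pairing: inventory accuracy (%) vs. perfect order rate (%).
accuracy = [90, 92, 94, 95, 97, 98]
perfect_orders = [80, 83, 86, 88, 91, 93]

slope, intercept = ols_fit(accuracy, perfect_orders)
r2 = r_squared(accuracy, perfect_orders, slope, intercept)
print(f"perfect_orders ≈ {slope:.2f}*accuracy + {intercept:.2f}, R² = {r2:.3f}")
```

A high R² would indicate that the chosen KPI explains most of the variation in the outcome; a low R² would send the researcher back to the factor-selection stage.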
Apart from these techniques, one more method, mass observation, would be applied during variable definition to make sure the selected variables are properly validated.
The main agenda of the research study is to identify the key performance indicators for analyzing and streamlining the inventory management system. To ensure that the correct KPIs are identified, the target research group would be the company's internal stakeholders who are involved in the inventory management process flow in their day-to-day work. Data would be gathered from these internal stakeholders using the different research approaches. Before finalizing the type of data and variables to be selected, a focused group discussion would be initiated to ensure the variable selection moves in the correct direction.
Once the correct variables are selected, a questionnaire would be prepared as the research instrument for data collection. It would be circulated to the different stakeholders, who would be the respondents of the research study. The data collected would be quantitative data for the variables.
A Likert scale would be used to measure the qualitative items in the questionnaire. Once data collection is done, statistical techniques would be applied to identify the factors and the dependency equation that most closely matches the inventory management system. Once the dependency equation is ready, real-world data would be used to validate the result. Further, the mass observation technique would be used for general validation of the effectiveness of the result.
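The Likert-scale measurement described above amounts to mapping textual answers onto a numeric scale before analysis. A minimal sketch, with a hypothetical five-point scale and invented responses:

```python
# Encode Likert responses as quantitative data. The five-point scale and
# the example answers are hypothetical illustrations.

LIKERT = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def encode(responses):
    """Map textual Likert answers to their numeric scores."""
    return [LIKERT[r.lower()] for r in responses]

# Hypothetical answers to "Real-time order tracking improves my decisions."
answers = ["Agree", "Strongly agree", "Neutral", "Agree", "Strongly agree"]
scores = encode(answers)
print(scores, sum(scores) / len(scores))  # per-respondent scores and mean
```

The resulting numeric columns are what feed the factor analysis and regression steps described above.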
The complete research study would ensure that the final conclusion is useful to the organization in measuring the effectiveness of its inventory management. If inventory management is found to be ineffective, the problem areas can be identified using the key performance indicators. If a certain KPI is found to be the cause of low efficiency, the stakeholder associated with that KPI would be informed and proper measures could be taken to resolve the performance issue.
References
1. Cochran, W.G. (1977). Sampling Techniques (3rd ed.). Toronto: Wiley.
This work presents the theory of sampling techniques and shows how that theory can be put to practical use. It also illustrates that a working level of statistical knowledge is needed to draw sound deductions from the data collected.
2. Hájek, J. (1981). Sampling From a Finite Population. New York: Marcel Dekker.
This work proposes methods for approximation from collected data, with supporting information on theoretical concepts and numerical calculations. It also suggests which sampling methods to use and the sample-correction techniques needed.
3. Hedayat, A.S., & Sinha, B.K. (1991). Design and Inference in Finite Population Sampling. New York: John Wiley & Sons.
This work helps in designing a survey or sampling methodology and in assessing the design against theoretical results. It covers probability sampling and finite-population sampling techniques, and is an effective study for understanding estimation and resampling methodologies.
4. Kish, L. (1965). Survey Sampling. New York: John Wiley & Sons.
This work provides the working knowledge applicable to practical sampling, stating the necessary formulas and their underlying assumptions. It also discusses the biases and non-sampling errors that can come up during a study.
Krishnaiah, P.R., & Rao, C.R. (1988). Handbook of Statistics, Vol. 6: Sampling. Amsterdam: Elsevier Science Publishers.
Anupindi, R., & Akella, R. (1993). Diversification under supply chain uncertainty. Management Science, 39, 944–963.
Sayed, H. (2013). Supply Chain Key Performance Indicators Analysis. International Journal of Application or Innovation in Engineering & Management. Retrieved from: http://www.ijaiem.org/volume2Issue1/IJAIEM20130128 059.pdf
Sturgeon, T.J., Memedovic, O., & Van Biesebroeck, J. (2009). Globalisation of the automotive industry: main features and trends. Int. J. Technological Learning, Innovation and Development, 2(1/2). Retrieved from: http://feb.kuleuven.be/public/n07057/cv/smvg09ijtlid.pdf
Ilies, L., Turdean, A., & Crisan, E. (2009). Warehouse performance management. University of Oradea.
Handfield, R., Straube, F., Pfohl, H., & Wieland, A. (2013). Trends and strategies in logistics management. DVV Media Group.
Mayer, A. (2014). Supply Chain Metrics That Matter. Kinaxis Publications. Retrieved from: https://www.kinaxis.com/Global/resources/papers/metricsthat matteraerospaceanddefensesupplychaininsightsresearch.pdf