Week 4 Notes: CRMJ 303
These 4-page class notes were uploaded by Luppino70 on Thursday, September 29, 2016. They belong to CRMJ 303 (Crime Prevention) at the University of Wisconsin - Eau Claire, taught by Dr. Jason Spraitz in Fall 2016.
Ch. 3 Evaluations
Saturday, September 17, 2016 1:21 PM

Evaluation: investigating the usefulness of a phenomenon; the target can be a policy or a program. Two main emphases: implementation and impact/outcome.

Impact and Outcome Evaluations
- Ask what happened after the introduction of a policy or program: what changes have been made?
- These can be hard, since many approaches are multi-faceted; which part had the most impact?
- It is hard to isolate neighborhoods, since no area is definitively defined as a neighborhood, and outside influences can affect people too.
- Displacement: crime just moves someplace else. Diffusion: what happens in other places is heard about in your area and has an impact there as well.
- Measurement difficulties: neighborhood-level data is rarely kept, and victimization surveys may suffer from telescoping and response error, wrongly suggesting the program was not working.

Process Evaluations
- Look at the implementation of a program and ask what process is needed to put the program into action.
- The social context in which the program is attempting to work must be taken into account.
- This is sometimes the only evaluation performed, since it is the easiest.
- Little is learned about the effect on crime, but it tells us whether the program is properly implemented, answers questions of generalizability, and shows the accomplishments of the implementation.

Cost-Benefit Evaluation
- Is the money spent on the program worth the outcome being produced?

General difficulties
- Evaluations have a built-in source of bias: to find out how well something is working you must ask those implementing it, and they may make it sound better or worse than it really is.
- Evaluations are performed at the end, so changes may not be known, and data is hard to obtain.
- What is the goal: to reduce crime, or to reduce fear levels?

Evidence-Based Crime Prevention Evaluation
- Experimental design: an experimental group and a control group, with random assignment to create equal groups.
- Benefits: better overall experiments, closer to the gold standard.
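The random-assignment step of an experimental design can be sketched in a few lines of Python. This is only an illustration of the idea from the notes; the function name, participant list, and seed are invented for the example.

```python
import random

def random_assignment(participants, seed=None):
    """Shuffle participants and split them into two groups of equal size."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)  # randomization makes the groups comparable on average
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (treatment group, control group)

treatment, control = random_assignment(range(100), seed=4)
print(len(treatment), len(control))  # two groups of 50
```

Because membership is decided by chance alone, any pre-existing differences between people are spread evenly across the two groups, which is what makes the design close to the gold standard.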
These types of evaluations are rarely done.

Maryland Scale of Scientific Methods
- Used to determine the effectiveness of a program; the evidence it has produced suggests that very few programs are effective.
- A 5-point scale that judges the methodological rigor of the evaluation, not the policy or program itself.
- Level 5: the gold standard, very rarely achieved. A true experimental design that controls every variable except one: control group, experimental group, random assignment, etc.
- Level 4: roughly quasi-experimental design; no random assignment, or non-equivalent groups, but the analysis controls for the differences.
- Level 3: still non-equivalent groups, but with no control for differences and more threats to internal validity. This is the minimum for an acceptable evaluation.
- Level 2: a before-and-after evaluation with no control group; it only shows what happened and cannot be used to determine effectiveness.
- Level 1: mostly a look at a program at a single point in time. No control group and hardly enough evidence even for correlation.

University of Maryland evidence rule
- To show that a program works requires at least two evaluations at level 3 or higher with statistical significance; the same standard applies to proving a policy does not work.
- Most programs that receive a truly good evaluation are shown not to work.
- One evaluation at level 3 to 5 with statistical significance can show a program to be promising.
- Nothing at lower levels counts as evidence either way.

Is there an overemphasis on experimental design?
- Other research is valuable as well; qualitative research such as interviews and surveys can be beneficial.
- Realistic Evaluation: not just experimental methods, but observation of the phenomenon itself; this shows how the phenomenon unfolds. These things are not counted in a process evaluation.
- Mechanism: what element of the policy is making it effective?
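The University of Maryland evidence rule described above can be expressed as a small decision function. This is a sketch for clarity only; the function name and the (level, significant, crime_reduced) record format are invented, not part of the notes.

```python
def classify_program(evaluations):
    """Apply the Maryland-style evidence rule to a list of evaluations.

    Each evaluation is a (level, significant, crime_reduced) tuple: its
    Maryland Scale level (1-5), whether the result was statistically
    significant, and whether it found a reduction in crime.
    """
    # Only evaluations at level 3 or above with statistical significance count.
    qualifying = [(lvl, red) for lvl, sig, red in evaluations if lvl >= 3 and sig]
    positive = sum(1 for _, red in qualifying if red)
    negative = sum(1 for _, red in qualifying if not red)
    if positive >= 2:
        return "works"          # two or more qualifying positive findings
    if negative >= 2:
        return "does not work"  # same standard for a negative verdict
    if positive == 1:
        return "promising"      # one qualifying positive finding
    return "unknown"            # lower-level evidence counts for nothing
```

For example, `classify_program([(5, True, True)])` returns `"promising"`, while two level-2 studies return `"unknown"` no matter what they found.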
- The mechanism is the cause of the effect.
- Obtain the context of how things operate in the environment: the natural flow of the program working, within its time and place. Something may be unique to one area under certain circumstances; such effects are usually small and therefore not found in an experiment.
- No one method of evaluation fits all.

Threats to Internal Validity
Things that could cause the results to be the way they are, other than the program itself.
- Selection bias: the treatment and comparison groups differ at the beginning or end of the study. This is usually handled by random assignment, but random assignment cannot always be achieved. Some people may also drop out of the study, which is called mortality.
- Endogenous change: a change within the individual, such as changing answers on a test. Experimental group members know what you are looking for and give the answers they think you want. Maturation can occur in longitudinal studies (e.g., juveniles eventually are no longer juveniles). Regression may occur: individuals getting better on their own (regression to the mean).
- History effect: changes in the environment that are outside the researcher's control, such as a media event or a local event like a violent crime.
- Contamination: the control and experimental groups communicate, and the control group finds out about the treatment given to the experimental group. This can cause compensatory rivalry, where control-group members work harder because they did not receive the intervention, or demoralization, where they become worse than before or leave the study. Either can throw off your data and suggest a difference that is not really there.
- Treatment misidentification: the outcome comes from something else that happened during the experiment, usually an unidentified third variable.

Generalizability: can these results be applied in a broader sense?

Threats to External Validity: anything that threatens attempts to replicate the findings of a program evaluation elsewhere.
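Regression to the mean, one of the endogenous-change threats listed above, can be demonstrated with a small simulation. This is purely illustrative; the population size, means, and noise levels are invented.

```python
import random

rng = random.Random(1)

# Each person has a stable "true" level plus random measurement noise.
true_level = [rng.gauss(50, 10) for _ in range(1000)]
first = [t + rng.gauss(0, 10) for t in true_level]   # pre-program measurement
second = [t + rng.gauss(0, 10) for t in true_level]  # post-program measurement

# Select the 100 most extreme people based on the FIRST measurement only,
# as a program targeting the "worst cases" would.
extreme = sorted(range(1000), key=lambda i: first[i], reverse=True)[:100]

m1 = sum(first[i] for i in extreme) / 100
m2 = sum(second[i] for i in extreme) / 100
# m2 drifts back toward the population mean (50) with no intervention at all,
# so a before-and-after (level 2) design would wrongly credit the program.
print(round(m1, 1), round(m2, 1))
```

The selected group "improves" on remeasurement simply because part of their extreme first score was noise, which is why a design without a control group cannot determine effectiveness.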