SPSY 8024: Academic Assessment and Intervention Study Guide Quiz Week 1-7

by: Hannah Mecaskey

About this Document

This is a thorough study guide for the 50-point quiz covering weeks 1-7. Articles are also available upon request, and specific quiz questions will be uploaded for review later this week.
This 10-page study guide was uploaded by Hannah Mecaskey on Monday, February 29, 2016. The study guide belongs to SPSY 8024 at the University of Cincinnati in Spring 2016. Since its upload, it has received 76 views. For similar materials see Academic Assessment and Intervention in Behavioral Sciences at the University of Cincinnati.


Reviews for SPSY 8024: Academic Assessment and Intervention Study Guide Quiz Week 1-7

5/5 stars

What an unbelievable resource! I probably needed a course on how to decipher my own handwriting, but not anymore...

-Carroll Keebler MD


Date Created: 02/29/16
There will be a couple of multiple-choice questions; however, the majority of the week 7 quiz will consist of short answers. Focus on producing brief responses. You will also want to be familiar with Wayman et al. (2007) and McMaster and Espin (2007). You will not need to score or interpret any CBM assessments during the quiz.

Learning Objectives

Recall five hypotheses for academic difficulties and how they can help in intervention planning.

Figure 1. Reasonable hypotheses for academic deficits.

REASON #1: They do not want to do it. The student is not motivated to respond to the instructional demands.
REASON #2: They have not spent enough time doing it. Insufficient active student responding in curricular materials.
REASON #3: They have not had enough help to do it. Insufficient prompting and feedback for active responding:
- Student displays poor accuracy in target skill(s)
- Student displays poor fluency in target skill(s)
- Student does not generalize use of the skill to the natural setting and/or to other materials/settings
REASON #4: They have not had to do it that way before. The instructional demands do not promote mastery of the curricular objective.
REASON #5: It is too hard. Student's skill level is poorly matched to the difficulty of the instructional materials.

THEY DO NOT WANT TO DO IT

Is the student not able to perform the skill (a skill deficit) or is the student able to perform the skill, but "just doesn't want to?" The distinction between skill and performance deficits was clarified by Lentz (1988, p. 354), who stated, "Skill problems will require interventions that produce new behavior; performance problems may require interventions involving manipulation of 'motivation' through contingency management." It is relatively easy to test the hypothesis of a performance deficit.
Incentives for reading (Ayllon & Roberts, 1974; Staats & Butterfield, 1964; Staats, Minke, Finley, Wolf, & Brooks, 1964) and math (Broughton & Lahey, 1978) have been effective in improving students' motivation and performance (i.e., increasing active participation and decreasing disruptive behaviors). If a student fails to respond to incentives for increased academic performance, then either the wrong incentives were used or the student does not have the skills to perform the task. The literature contains numerous examples of interventions that can be used to test this hypothesis. For example, Lovitt, Eaton, Kirkwood, and Pelander (1971) improved students' oral reading fluency by offering incentives for reading faster. Another strategy for improving students' motivation that is relatively easy to implement for most teachers is offering students a choice of work to be performed (e.g., a story about baseball versus a fairy tale) or the order in which work is performed (e.g., allowing the child the choice of a vocabulary drill or a silent reading first). Students' on-task behavior has been improved by giving students a choice of instructional activities (Dunlap et al., 1994; Seybert, Dunlap, & Ferro, 1996), a strategy that can be easily adapted to most instructional formats. It is noteworthy that in some of this research students displayed high rates of on-task behavior on those very assignments that they refused to do previously. The only difference between assignments completed and assignments refused was that students were allowed to choose among several instructional assignments during seatwork. When students were allowed to choose the assignment, their compliance and on-task behavior improved.

THEY HAVE NOT SPENT ENOUGH TIME DOING IT

A student may not be progressing academically because he or she simply has not spent enough time actively practicing the skill.
There are large differences in the amount of time students spend actively engaged in academic responding (Rosenshine, 1980). Large differences also have been observed across socioeconomic levels (Greenwood, 1991). For instance, longitudinal studies conducted by researchers at the Juniper Gardens Children's Project have identified large cumulative differences across socioeconomic levels in the amount of time students are actively responding. These differences amount to greater than 1.5 years more schooling by the end of middle school for students of higher socioeconomic levels than for students of lower socioeconomic levels (Greenwood, Hart, Walker, & Risley, 1994). In a review of the literature on academic engaged time, Ysseldyke and Christenson (1993) concluded that variability across classrooms and schools leads to large differences in the amount of time that students are academically engaged. These differences increase the salience of engaged time as an important variable in the investigation of a student's academic problems and underscore the importance of examining this factor on an individual basis. The implications for intervention are obvious. As a first step, a student's current rate of active responding in the problematic subject area or time of day should be estimated. This task can be accomplished through recent advances in observation techniques such as the Ecobehavioral Assessment Systems Software (Greenwood, Carta, Kamps, & Delquadri, 1995) and the Behavior Observation of Students in Schools (Shapiro, 1996), two observation tools that provide estimates of student active engagement. The second step involves increasing the student's active responding. 
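The first step described here, estimating a student's current rate of active engagement from interval observation data, reduces to a simple percentage. The sketch below is illustrative only; it is not the EBASS or BOSS scoring procedure, and the function name is invented for this example:

```python
def engagement_rate(intervals):
    """Percent of observation intervals in which the student was
    actively engaged (True = engaged), e.g., from momentary time
    sampling. Illustrative calculation, not a published protocol."""
    if not intervals:
        raise ValueError("no observation intervals recorded")
    return 100.0 * sum(intervals) / len(intervals)


# Hypothetical intervals from one observation session:
print(engagement_rate([True, True, False, True]))  # 75.0
```

With a rate in hand, the second step (increasing active responding) can then be evaluated by comparing rates before and after the intervention.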
Various strategies such as providing highly structured tasks, allocating sufficient time for instruction, providing continuous and active instruction, maintaining high success rates, and providing immediate feedback have been shown to improve student engagement rates (Denham & Lieberman, 1980; Stallings, 1975; Ysseldyke & Christenson, 1993). Even simpler solutions may be equally effective; for instance, allocating more time for student responding and decreasing intrusions (e.g., transition time) into instructional time (Gettinger, 1995; Rosenshine, 1980).

THEY HAVE NOT HAD ENOUGH HELP TO DO IT

Feedback. Ysseldyke and Christenson (1993) warn that engaged time is only moderately (though significantly) related to student achievement. Increasing time for engagement may not be sufficient if a student needs more help to perform instructional tasks successfully. Feedback for student responses may be necessary to assist a student to respond accurately and quickly (Heward, 1994). Feedback is an integral part of the learning trial and consists of an instructional antecedent (e.g., "Who was the first president of the United States?"), an active student response (e.g., "George Washington."), and a consequence (e.g., "Correct!"). When teachers actively provide feedback to students for responding, they increase the likelihood of student achievement (Rosenshine & Berliner, 1978). Belfiore, Skinner, and Ferkis (1995) showed that complete learning trials were more effective in helping students to master sight words than merely having students repeatedly say the correct response. A learning trial consists of an antecedent (e.g., a flashcard with "3 × 3") prior to a response and a consequence (e.g., "Correct!" or "No, the correct answer is 9.") following a response. Another strategy for increasing feedback via complete learning trials is choral responding. Choral responding involves having all students respond verbally during group lessons.
Choral responding has been shown to improve learning rates for diverse groups of students, including preschool children with developmental disabilities, children identified as Severe Behavior Handicap, first grade Chapter 1 students, general education students, and students identified as Developmentally Handicapped in special education classrooms (Heward, 1994). Choral responding has been shown to be more effective at improving learning rates when compared to on-task instruction in which the teacher praised students for paying attention while asking the same number and type of questions (Sterling, Barbetta, Heron, & Heward, 1997). Another strategy for increasing feedback to students for nonverbal responses is response cards. To use response cards, teachers can instruct students to write the correct response on laminated cards during group instruction for math, spelling, or content lessons. When the teacher asks a question, the students are expected to write their answers on the cards and to hold up the correct response. The teacher scans students' responses and provides feedback to students. Heward (1994) provided guidelines for implementing the use of response cards. Response cards have been shown to improve (a) rates of responding and quiz scores relative to hand-raising during fourth-grade recitation social studies lessons (Narayan, Heward, Gardner, Courson, & Omness, 1990), (b) on-task behavior of students with disruptive and off-task behavior during social studies lessons (Gardner, Heward, & Grossi, 1994), and (c) quiz scores in earth science classes for high school students (Cavanaugh, Heward, & Donelson, 1996). The common strand of these strategies is that they increase the amount of feedback given to students immediately following responding. It is an opportunity to provide positive feedback for correct responses and to correct errors immediately rather than allowing a student to practice the wrong answer.

The Instructional Hierarchy.
In the event that increasing feedback during time allocated for instruction is not sufficient for improving student performance, it may be necessary to look more carefully at a student's skill level as a basis for developing instructional interventions. How much assistance a student requires is dependent upon his or her level of skill mastery. Mastery, in turn, develops in a sequence of stages which lead to proficiency and use of the skill across time and contexts (Daly, Lentz, & Boyer, 1996; Haring, Lovitt, Eaton, & Hansen, 1978; Howell, Fox, & Morehead, 1993). Initially, effective instruction promotes accurate performance of the skill. At this stage, modeling the skill and observing the learner are critical components of good instruction. Also, explicit feedback about performance is necessary. After the learner becomes accurate, the next step is to become fluent in the use of the skill. For a skill (e.g., "4 × 2 = 8") to be useful in the future (e.g., with long division), the learner must be able to respond rapidly when presented with the problem. Practice improves skill fluency. Accurate and fluent skill use increases the chances that the learner will generalize the skill across time, settings, and other skills (Daly, Martens, Kilmer, & Massie, 1996; LaBerge & Samuels, 1974; Wolery, Bailey, & Sugai, 1988). Daly, Lentz, and Boyer (1996) used the heuristic notion of an "instructional hierarchy" developed by Haring et al. (1978) to show that in many studies of academic interventions the effectiveness of instructional strategies for improving student accuracy and fluency could be predicted based on the strategies used (e.g., modeling versus practice). 
Although other instructional hierarchies have been developed during the past century, Haring et al.'s (1978) model is particularly useful because it explains patterns of results and allows us to predict which interventions are most likely to be effective based on the components of instruction and where students are in the learning sequence. The instructional hierarchy suggests that strategies that incorporate modeling, prompting, and error correction can be expected to improve accuracy and that strategies that incorporate practice and reinforcement for rapid responding can be expected to improve fluency. In a demonstration of the predictive power of this particular instructional hierarchy, Daly and Martens (1994) accurately predicted the pattern of results of three reading interventions based on the instructional strategies incorporated by each.

THEY HAVE NOT HAD TO DO IT THAT WAY BEFORE

Thus far, we have considered contingencies for performance, the amount of time students are actively engaged in schoolwork, how much feedback teachers give, and some important teaching strategies. If interventions based on these factors do not lead to improvements in academic performance, then the next factor to examine is the role of the instructional materials in improving student outcomes. When analyzing the instructional tasks and assignments as possible factors related to poor student performance, there are at least two reasons why they may be hindering student learning: either they are not helping the student to practice actual skill use (Vargas, 1984) (Reason #4) or they are too difficult (Reason #5) (Gickling & Armstrong, 1978; Gickling & Rosenfield, 1995). The goal of instruction is to bring student responding under the control of the instructional materials (i.e., they do not need help to get the correct answer) so that students can apply their skills to real life demands (Heward, Barbetta, Cavanaugh, & Grossi, 1996).
For instance, when teaching a student to read the word "Name," the goal is for the printed word "Name" to elicit the student response (i.e., reads "Name"). In this way, the student will be able to respond to many demands for use of the skill such as completing a job application. Practicing the skill means that instructional materials provide enough examples and nonexamples of the target skill so students know when to use the skill (Englemann & Carnine, 1991). For instance, if you are teaching colors to students, you want to be sure to present enough examples of the colors being taught using several different shapes so that they do not confuse the color red with the shape of the objects you are using. Therefore, the instructional materials are critical for providing students enough opportunities to practice important academic skills while helping them recognize when to use the skill and when not to use the skill. Practicing the skill also means that students need to know how to obtain the correct answer. Unfortunately, many instructional materials allow students to achieve the correct answer for the wrong reason (Vargas, 1984). Take Bill, for instance, who knows how to complete his vocabulary worksheets by looking at the number of blanks where he is supposed to write the response and counting the number of letters in each of the vocabulary words. He is not learning his vocabulary words. Instead, he has devised a strategy for obtaining the right answer for the wrong reasons. Finally, the instructional materials might not have the student respond in the way that he or she would be expected to respond according to the curriculum. Therefore, teaching spelling by having students circle spelling words in word puzzles or having them circle the correct answer among four options is inadequate preparation for spelling words for which dictation is required. 
Interventions aimed at improving the instructional materials should be directed toward using materials that elicit the kinds of student responses that would be expected of students who have mastered the curriculum. As a first step, the student's objectives should be defined (e.g., will spell words with common consonant combinations accurately). Then, there must be assurance that the instructional materials provide enough practice in actual use of the skill (e.g., have students practice spelling words with common consonant combinations rather than having them circle correct spellings). Worksheets that only require the student to respond to a few items (e.g., each vocabulary word appears once) in a limited format (e.g., circling correct words versus spelling words) are not likely to improve student performance. Therefore, whereas Hypothesis #2 suggested that a student might not be spending enough time practicing the skill to perform it better, Hypothesis #4 suggests that the student might not be spending time on the right kinds of instructional activities to perform the skill better, indicating a need for a different type of intervention.

IT IS TOO HARD

Finally, the student might not be successful because the instructional materials are too difficult. Gickling and Armstrong (1978) improved students' accuracy of assigned work and on-task behavior by changing instructional materials to assure that they were not too difficult or too easy. Students are more likely to generalize what they have just learned to other similar instructional materials when they are instructed at their instructional level. Daly, Martens, Kilmer, and Massie (1996) found that providing reading instruction at a level that was more appropriately matched to students' skill levels resulted in greater generalization than when instruction was provided in more difficult materials. The importance of the effects of task difficulty on student learning is obvious and cannot be overstated.
Unfortunately, teachers often have in their classrooms learners with multiple skill levels, and changing instructional materials to match each student's instructional level is a difficult task. Therefore, it is one of the last things that they may be willing to change. For this reason, we present it last. If, however, instructional changes based on the other factors are not effective, teachers will be hard pressed to insist that they do not want to change the difficulty level of the materials to meet the student's needs.

(Copied from "A Model for Conducting a Functional Analysis of Academic Performance Problems" by Edward J. Daly III, Joseph C. Witt, Brian K. Martens, and Eric J. Dool, School Psychology Review, v26, no. 4, pp. 554-74, 1997.)

Define the five critical skills necessary for reading (revisit the big ideas):

- Phonemic Awareness: Ability to hear and manipulate the sounds in spoken words; recognition that words are made up of units of sound (phonemes); an auditory skill that does not involve words in print. It is the best predictor of reading difficulty in kindergarten or first grade and plays a causal role in the development of beginning reading. There is evidence that a primary difference between good and poor readers is phonemic awareness skills.
- Alphabetic Principle: Understanding that words are made of letters that represent sounds; the ability to associate sounds with letters and use the sounds to pronounce words (read) and form words (spell). Prerequisite to effective word identification; a primary difference between good and poor readers is the ability to use letter-sound correspondence to identify words.
Combination of instruction in phonological awareness and letter-sound correspondence is critical for successful early reading.

- Fluency: Ability to read accurately and quickly; automaticity in reading words with no effort. Children who are fluent readers identify letter-sound correspondences accurately and quickly and use spelling patterns to decode efficiently. Reading fluency allows students to focus on the meaning of the text and needs to be well developed to support comprehension. Students who read fluently read more.
- Vocabulary: Ability to understand and use words to acquire and convey meaning. Vocabulary development is a fundamental goal in elementary school. Children who are exposed to more vocabulary through speech and reading will have significantly greater vocabulary growth rates.
- Comprehension: Complex interaction between reader and text to convey meaning. Good comprehension is linked to good decoding skills. Time spent reading is highly correlated with comprehension.

Summarize the research regarding the reliability and validity of CBM reading [from Major Conclusions of Wayman et al. (2007)]:

- CBM reading aloud is a valid measure of reading performance for students in grades 2-5
- More research is needed on use of reading aloud with younger and older students
- Word ID may be more appropriate for beginning readers
- Maze may be more appropriate for older readers
- Much more research is needed on measuring and predicting growth with CBM

Technical Adequacy of reading CBM:

- Strong relationship between reading aloud and reading proficiency
- Reading aloud is a better indicator of comprehension than typical comprehension measures
- Reading aloud may not be the best choice for very young or older students
- Floor effects and correlations with criterion measures decrease at intermediate grades
- Maze selection may be a more appropriate task for secondary students
- Little research on high school students
- More research is needed to understand performance differences by population groups (minority groups, ESL)
- Strongly correlated with performance on state achievement tests

Effects of Text Materials:

- CBM reading measures function consistently across curricular materials
- Do not have to develop CBM probes from materials being used for instruction
- Measure is still strong when more difficult or easier than the student's instructional level, but:
  - If too hard, growth rates may be slower
  - Beginning readers may be more affected by difficulty than advanced readers
  - The more closely the measure is tied to instruction, the more sensitive it is to growth
- Passages should be of equivalent difficulty

Identify an assessment measure for each of the five critical reading skills:

1. Measure for Phonemic Awareness: The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) assessment system provides two measures that can be used to assess phonemic segmentation skills, Initial Sounds Fluency (ISF) and Phonemic Segmentation Fluency (PSF).
The DIBELS Initial Sounds Fluency (ISF) measure is a standardized, individually administered measure of phonological awareness that assesses a child's ability to recognize and produce the initial sound in an orally presented word. The DIBELS Phoneme Segmentation Fluency (PSF) measure is a standardized, individually administered test of phonological awareness that assesses a student's ability to segment three- and four-phoneme words into their individual phonemes fluently. The PSF measure has been found to be a good predictor of later reading achievement (Kaminski & Good, 1996).

2. Measure for Alphabetic Principle: The DIBELS Nonsense Word Fluency (NWF) measure probes student knowledge of letter-sound correspondences and sounding out words. It is a standardized, individually administered test of the alphabetic principle, including letter-sound correspondence and the ability to blend letters into words in which letters represent their most common sounds (Kaminski & Good, 1996).

3. Measure for Fluency: Fluency skills can be assessed using standardized measures. The DIBELS assessment system provides a measure that can be used to assess fluency, called Oral Reading Fluency (ORF). The DIBELS Oral Reading Fluency measure is a standardized, individually administered test of accuracy and fluency with connected text. The DORF passages and procedures are based on the program of research and development of Curriculum-Based Measurement of reading by Stan Deno and colleagues at the University of Minnesota, using the procedures described in Shinn (1989). A version of CBM reading also has been published as The Test of Reading Fluency (TORF) (Children's Educational Services, 1987).

4. Measure for Vocabulary:

5.
Measure for Comprehension: Comprehension skills can be assessed using standardized measures. The DIBELS assessment system provides a measure that can be used to assess students' comprehension level: the Retell Fluency (RTF) measure, which is intended to provide a comprehension check for the Oral Reading Fluency (ORF) assessment. In general, oral reading fluency provides one of the best measures of reading competence, including comprehension, for children in first through third grades. The purpose of the RTF measure is to (a) prevent inadvertently learning or practicing a misrule, (b) identify children whose comprehension is not consistent with their fluency, (c) provide an explicit linkage to the core components in the National Reading Panel (2000) report, and (d) increase the face validity of the ORF.
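An ORF-style score is conventionally reported as words correct per minute. The sketch below assumes the common CBM convention of subtracting errors from words attempted and prorating to one minute; the function name and defaults are illustrative, so consult the specific measure's manual for its actual scoring rules:

```python
def words_correct_per_minute(words_attempted, errors, seconds=60):
    """ORF-style score: words read correctly, prorated to one minute.
    Illustrative of the common CBM convention (attempted minus errors);
    not the official scoring procedure of any particular measure."""
    return (words_attempted - errors) * 60.0 / seconds


print(words_correct_per_minute(104, 4))             # 100.0 WCPM
print(words_correct_per_minute(50, 5, seconds=30))  # 90.0 WCPM
```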
Define the critical skills for writing. Components of Written Expression:

- Syntactic maturity
  - Sentence structure
  - Summarize types of sentences (incomplete, simple, compound, complex, run-on, fragmented)
  - T-units: minimal terminal unit; the shortest unit that can stand alone within a sentence
    - Example: "Howard rides his bicycle / and Thad rides in a seat on the back." = 2 T-units
  - Evaluate: topic sentences; relation between sentences and topic; logical order of sentences; use of transition sentences
- Semantic maturity or vocabulary
  - Variety of words: type/token ratio = number of different words / total words
  - Maturity of words: words with 7 or more letters; number of mature words / total words
- Content
  - Analytic rating scales to evaluate: original and relevant responses; includes basic elements of a minimal story; writing on topic and including basic elements of appropriate text structure; organization
- Conventions
  - Number of words spelled correctly
  - Proportion of errors in categories (Howell & Nolet: list of 50 capitalization and punctuation conventions; spelling, capitalization, punctuation)
  - Number of correct writing sequences: each correct pair of adjacent words or punctuation marks that fits the context of the overall sentence; the most technically sound CBM metric for progress monitoring
- Legibility/penmanship: spacing between letters and words, letter size, alignment, line quality, letter slant, letter formation, style
- Fluency
  - Number of words written (CBM)
  - Average length of sentences
- Writing process (knowledge of the writing process): planning, transcribing or drafting, reviewing and revising, editing, publishing

Identify an assessment measure for important writing skills:

- Test of Written Language (TOWL)
- Standard Frequency Index: number of words written (WW), words spelled correctly (WSC), and correct letter sequences (CLS; any two adjacent letters that are correct according to the spelling of the word)
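The countable metrics above (WW, type/token ratio, mature-word ratio) are simple enough to sketch in code. The functions below are illustrative, with invented names; WSC and CLS are omitted because they require a reference spelling for each word:

```python
def words_written(sample):
    """WW: total words in the writing sample."""
    return len(sample.split())


def type_token_ratio(sample):
    """Variety of words: number of different words / total words."""
    words = [w.strip(".,!?;:").lower() for w in sample.split()]
    return len(set(words)) / len(words)


def mature_word_ratio(sample, min_letters=7):
    """Maturity of words: proportion of words with at least
    `min_letters` letters (7 or more, per the criterion above)."""
    words = [w.strip(".,!?;:") for w in sample.split()]
    return sum(len(w) >= min_letters for w in words) / len(words)


sample = "The sailboat drifted past the enormous lighthouse"
print(words_written(sample))  # 7
```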
Summarize the research regarding the reliability and validity of CBM writing:

- IRLD Studies:
  - Established validity for students in 3rd through 6th grades
  - Reliability evidence was not as strong as validity evidence
  - Reliability was lower for students diagnosed with LD and with low achievement
- Elementary Studies:
  - Reliability continues to be an issue for some CBM writing variables
  - Weaker validity coefficients than in IRLD studies
- Secondary Studies:
  - Sufficient reliability
  - Validity unclear; CBM writing variables may not be sufficient for assessing writing at this level
  - More complex measures may be needed
- Studies Across Grade Level:
  - Validity decreases with age/grade level
  - Longer writing samples increase validity of scores
- General Implications:
  - WW or WSC are often the primary measures of writing; use caution
  - More complex measures are better at secondary levels
  - Should also attend to qualitative features of writing

Identify the critical foundations for math:

- Fluency with whole numbers: By the end of Grade 3, students should be proficient with the addition and subtraction of whole numbers. By the end of Grade 5, students should be proficient with multiplication and division of whole numbers.
- Fluency with fractions: By the end of Grade 4, students should be able to identify and represent fractions and decimals, and compare them on a number line or with other common representations of fractions and decimals. By the end of Grade 5, students should be proficient with comparing fractions, decimals, and common percents, and with the addition and subtraction of fractions and decimals. By the end of Grade 6, students should be proficient with multiplication and division of fractions and decimals, and with all operations involving positive and negative integers. By the end of Grade 7, students should be proficient with all operations involving positive and negative fractions.
By the end of Grade 7, students should be able to solve problems involving percent, ratio, and rate, and extend this work to proportionality.

- Geometry and measurement: By the end of Grade 5, students should be able to solve problems involving perimeter and area of triangles and all quadrilaterals having at least one pair of parallel sides (i.e., trapezoids). By the end of Grade 6, students should be able to analyze the properties of two-dimensional shapes and solve problems involving perimeter and area, and analyze the properties of three-dimensional shapes and solve problems involving surface area and volume. By the end of Grade 7, students should be familiar with the relationship between similar triangles and the concept of the slope of a line.

Describe methods for establishing academic performance goals:

- Normative approach: using local or national norms (i.e., simply a description of students' performance at the time of testing)
- Criterion- or competency-based approach: targets are based on attainment of scores that predict with high probability a successful outcome for future performance (i.e., benchmarks)

Describe the strengths and limitations associated with the use of local norms, national norms, and empirically-based benchmarks:

1. Local Norms - look at other students in the same context. This is beneficial because it is a fair comparison; however, a limitation is that local norms don't always equate to best performance or to performance at a level that is predictive of future success.
Strengths:
a. Decreased likelihood of bias
   i. Meaningful comparison group
   ii. Objective and accountable process
b. Greater teaching/testing overlap
c. Utility across educational decisions
Limitations:
a. Must be careful that people interpret accurately
b. Do not tell us how to teach
c. Local norms do not always equate to acceptable performance
   i. Must not rely solely on local norms
   ii. Look at empirically derived benchmarks

2.
National Norms – which are the distribution of the population at a national level. Strengths: a. Provide comparison between your student and national sample of same grade/age peers b. Utility across educational decisions Limitations: a. May have little contextual overlap with the learning environment of your student(s) (i.e., curriculum, context); b. Performance may not equate acceptable performance (over or underestimate levels needed for success) i. Have to look at benchmarks as well 3. Empirically Derived Benchmarks – strongest source which is based on research and highlight a minimum level of performance that predicts future success in an area. This is where we want our students to be. But, sometimes these goals are too ambitious when working with low achieving students in underachieving context, then you may need to set some intermediate goals by looking at local norms and national norms while working toward these benchmarks. Strengths: a. Data based goals for performance based on research examining learning outcomes b. Not linked to a specific curriculum or instruction Limitations: a. May have to set intermediate goals if performance is significantly below benchmark b. Do not inform how to change instruction Describe methods for establishing decision rules (Be prepared to give examples of decision rules – Be specific!): Decision rules are used to create accountability and add objectivity to the assessment process to make sure that once we put an intervention into place, we are really engaging in data-based decision making, rather than just saying lets give it some more time. Instead we set a rule, and then are able to make a decision, based on the data if we need to change the intervention. 
Decision rules are needed to guide the evaluator in determining progress:

- To determine whether the intervention needs to be changed
- To decide when the intervention can be faded out (i.e., the student meets the goal)

They contribute to objectivity and accountability, and can be based on time, trend estimation, or a combination of the two.

Example decision rules:

- For changing the intervention:
  - Time: if, after 6 weeks, the student has not reached the goal, the intervention will be changed.
  - Trend: if the student has 3 data points below the aim line, the intervention will be changed.
  - Combination: if, after 5 weeks, the student has 3 consecutive data points below the aim line, the intervention will be changed.
- For fading the intervention:
  - If the student meets the goal for 3 consecutive weeks, fade the intervention.
  - If the student meets the goal on 3 of the last 5 data points, fade the intervention.

Email me if you would like more specific questions, as I got 100% on the quiz!
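For readers who track progress-monitoring data in a spreadsheet or script, the combined time-plus-trend decision rule described above can be sketched in a few lines of code. This is a hypothetical illustration only; the function name, thresholds, and data shapes are assumptions for the example, not part of the course materials.

```python
# Minimal sketch (hypothetical) of a combined time + trend decision rule:
# after a minimum number of weeks, change the intervention if the last 3
# consecutive data points fall below the aim line, or fade it if the last
# 3 consecutive points meet or exceed the aim line.

def decide(scores, aim_line, min_weeks=5, consecutive=3):
    """Return 'change', 'fade', or 'continue' for weekly monitoring scores.

    scores   -- list of weekly progress-monitoring scores
    aim_line -- list of goal values, one per week (same length as scores)
    """
    if len(scores) < min_weeks:
        return "continue"          # too early to judge the intervention
    recent = list(zip(scores, aim_line))[-consecutive:]
    if all(score < goal for score, goal in recent):
        return "change"            # 3 consecutive points below the aim line
    if all(score >= goal for score, goal in recent):
        return "fade"              # student meets the goal 3 weeks running
    return "continue"

# Example: 6 weeks of data plotted against a rising aim line
scores = [10, 12, 11, 13, 12, 12]
aim    = [10, 12, 14, 16, 18, 20]
print(decide(scores, aim))  # -> change
```

Writing the rule down in advance, whether in prose or in code, is what makes the process objective: the decision follows mechanically from the data rather than from a judgment made in the moment.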

