PSY 310 Exam 2 material

by: Jessica Poland

About this Document

These are the combined notes focusing on the main topics of what will be included on exam 2.
Course: Behavior Modification
Professor: Dr. Cornelius
Type: Study Guide
Tags: Psychology, Behavior Modification





This 13-page Study Guide was uploaded by Jessica Poland on Monday, February 29, 2016. The Study Guide belongs to psy 310-03 at Grand Valley State University, taught by Dr. Cornelius in Summer 2015. Since its upload, it has received 46 views. For similar materials, see Behavior Modification in Psychology at Grand Valley State University.


Date Created: 02/29/16
PSY 310 Week 5 Notes

Behavior modification generally relies on operant conditioning.

- What is a reinforcer? It is a consequence (a stimulus or event) that, when provided contingent on a behavior, will increase the future frequency of that behavior.
- The A-B-C method is the best method to test which reinforcer works best for a specific person.
- You do not really know if something is a reinforcer until you put it to use and see how it affects behavior. We usually make educated guesses as to what is a reinforcer for that person.
- Types of reinforcers: social (praise/smile/hugs), activity (reading, Netflix, rollerblading), and possessional (a toy).
- Different reinforcers work for different people. Try to find the one that works best; this is why we use the ABC model, which lets us compare the results of different reinforcers.
- How do items become reinforcers?
  - Unconditioned: direct biological reinforcing effects.
  - Conditioned: pairing with other reinforcers (ice cream and ice cream truck music).
  - Generalized: tokens, praise; these are maintained because they are usually paired with other reinforcers.
- Make sure you have a probable reinforcer before you waste too much time.
- How do we identify a probable reinforcer? Indirect assessment (ask the person, use a checklist, use an interview) OR direct assessment (what do they enjoy?).
- Premack principle: we can use the opportunity to engage in a probable behavior as a reinforcer for an improbable behavior. (Ex: eating enough protein is improbable; watching Netflix is probable. Use Netflix to reinforce the behavior of eating enough protein.)
- Preference assessment:
  - Single stimulus: present each stimulus individually. The client is given the opportunity to briefly consume/approach the stimulus.
  - Paired stimulus: present 2 items at once and see which one the client approaches. You must present all possible pairs = total items x (total items - 1) / 2, so this takes a lot of trials.
  - Multiple stimulus: present all stimuli at once.
    The client gets to engage with one item, which is then taken away, and they move on to the next item. Just because they pick an item does not mean it is a reinforcer! It is just a good indication that it probably is. This method is also good because it gives you an order of preference.
- Motivating operations: deprivation and satiation.
  - Deprivation: temporarily increases how reinforcing we find something and temporarily increases the behavior to get it. (Carolyn is hungry, so donuts would be really reinforcing.)
  - Satiation: after exposure to the reinforcer, it temporarily decreases how reinforcing it is and decreases the behavior to get that reinforcer. (Carolyn ate 10 donuts; she doesn't want any more.)
- Factors affecting the effectiveness of a reinforcer:
  - The amount of it (versus the "effort" of the behavior).
  - The latency between behavior and reinforcer:
    - In non-verbal organisms, within 30 seconds.
    - It can be extended by rules (indirect-acting effects). (Ex: study for the test on Monday, do better on the test on Friday.)

Positive reinforcement contingencies
- Immediate response-contingent presentation of a reinforcer that results in an increase in the future frequency of that behavior.
- By definition, the reinforcer must immediately follow the behavior (but we know that with humans it can be delayed; see the test example).
- Contingent: press lever, get food. This increases the frequency of lever pressing.
- Non-contingent: every 5 seconds, give food. There is no increase in lever pressing. It will not change behavior, except for coming back to that environment because of the presence of food.
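As a sanity check on the paired-stimulus pair-count formula mentioned earlier (total items x (total items - 1) / 2), here is a minimal sketch; the function name is my own, not from the course materials:

```python
def paired_stimulus_trials(total_items: int) -> int:
    """Number of distinct pairs needed for a full paired-stimulus
    preference assessment: n * (n - 1) / 2."""
    return total_items * (total_items - 1) // 2

# Even a modest item pool needs a lot of trials:
for n in (4, 8, 12):
    print(n, "items ->", paired_stimulus_trials(n), "pairs")
# 4 items -> 6 pairs
# 8 items -> 28 pairs
# 12 items -> 66 pairs
```

With just 12 candidate reinforcers you would already need 66 presentations, which is why the notes describe this method as taking a lot of trials.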
Problem: non-contingent reinforcement can accidentally increase undesired behaviors (adventitious reinforcement).
- Reinforcement is everywhere.
- Positive reinforcement program steps:
  - Choose and define the target behavior.
  - Choose a reinforcer.
  - Administer the probable reinforcer contingent on the behavior.
  - Wean from contrived reinforcers to natural reinforcers.
- Reinforcement done wrong:
  - Accidentally reinforcing inappropriate behavior. (Kid head-butts Aunt -> gets attention -> the behavior increases.)
  - The reinforcer isn't immediate.
  - Assuming that "negative attention" is punishing.

PSY 310 Week 6 Notes

Negative reinforcement: taking something away to increase behavior. It is NOT punishment!!
- 2 varieties: escape and avoidance.
  - Escape: immediate response-contingent removal of an aversive condition that increases the future frequency of a behavior.
  - Avoidance: immediate response-contingent prevention of an aversive condition that increases the future frequency of a behavior.
- Example of escape: the alarm goes off in the morning -> hit snooze -> the noise stops.
- Example of avoidance: wake up before the alarm sounds -> turn off the alarm -> no noise is heard.
  - This example is "indirect acting": you delay the consequence of hearing the alarm because you understand the "rule" that the alarm will sound if you do not turn it off right away.
- A cascade of problem behavior usually happens when kids try to escape their demands: passive, verbal, motor, and then termination. (Ex: ignore, complain, stomp feet, tantrum.)
- Conditioned aversive stimuli (CAS): the behavior may change, but stimuli or people in the environment may become CAS. You may come to avoid the people or other stimuli in the environment because of the negative reinforcement. (Ex: you are on the basketball team, and the coach yells at you for a certain technique. You come to hate the sport of basketball.)
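Positive and negative reinforcement (and their punishment counterparts) fit one 2x2 grid: whether a stimulus is added or removed contingent on the behavior, and whether the behavior's future frequency increases or decreases. A toy sketch of that grid (my own illustration, not part of the course materials):

```python
def classify_contingency(stimulus_change: str, behavior_trend: str) -> str:
    """Map (what happens to the stimulus, what happens to the behavior)
    onto the four operant contingencies."""
    table = {
        ("added",   "increases"): "positive reinforcement",
        ("removed", "increases"): "negative reinforcement",
        ("added",   "decreases"): "positive punishment",
        ("removed", "decreases"): "negative punishment",
    }
    return table[(stimulus_change, behavior_trend)]

# Hitting snooze removes the alarm noise and makes snooze-hitting more likely:
print(classify_contingency("removed", "increases"))  # negative reinforcement
```

The "positive/negative" labels describe what happens to the stimulus, not whether the outcome is pleasant; the grid makes that explicit.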
- Sick social cycle: in escaping the perpetrator's aversive behavior, the victim unintentionally reinforces that aversive behavior. Example:
  - Mom's perspective: kid yelling -> give pacifier -> no more yelling.
  - Kid's perspective: no pacifier -> yell -> get pacifier. (This may increase the behavior of yelling.)
- Punisher: a consequence we provide contingent on the behavior. It can be physical, a reprimand, a time out, or a response cost.
- We do not know for sure if something is a punisher until we observe whether or not it decreases behavior.
- Positive punishment: immediate response-contingent presentation of an aversive condition that decreases the future frequency of the behavior that preceded it.
  - Physical punisher: painful/uncomfortable.
  - Reprimands: yelling/scolding.
- Negative punishment: immediate response-contingent removal of a reinforcing condition that decreases the future frequency of the behavior that preceded it.
  - Time out: moving from a highly reinforcing environment to a less reinforcing environment.
  - Response cost: removing a tangible reinforcer/activity that they already have or were going to get.
- Time out is often done incorrectly (it caters to the function of the behavior), or it is hard to do.
- Time-out tips:
  - Speak on the child's level.
  - Don't argue; use no more than ten words at a time.
  - 1 minute per year of age.
  - Set a timer.
  - Don't talk to the child while they are in time out.
  - After the time out is done, tell them what they did wrong.
- Enhancing punishment effectiveness:
  - Increase the frequency of desirable behaviors at the same time.
  - Try to identify the reinforcers that are maintaining the undesirable behavior.
  - Don't assume something is a punisher.
  - It should be administered right after the behavior.
  - Pairing it with a rule may also be helpful.
- Problems with punishment:
  - Punishment may elicit affective (emotional) responses, like crying and fear.
  - It tends to elicit aggression.
  - Stimuli associated with the punisher may become conditioned punishers as well. (Ex: a child yelled at for peeing their pants gets
scared of the sensation of having to pee.)
  - Observational learning: they learn by watching you.
  - Punishment does not facilitate the development of appropriate behaviors.

PSY 310 Week 7 Notes

Exam 2 will cover chapters 4, 5, 14, 13, 6, 12, 8 and the reserve readings.

Physical punishment: is it ethical?
- Gonnerman, "School of Shock": shock used as punishment.
  - A few people died.
  - There was no research to back up their methods.
  - People were punished for disobeying small rules, such as swearing.
  - Electric shock was administered long after the undesired behavior.
- Video watched: intellectually challenged people were administered shock when engaging in self-abuse. The shock relieves the self-abusive behavior, and that behavior could inflict much more damage than a small shock would. Many patients are able to lead better, more independent lives with the shock treatment. Is this ethical? Is spanking ethical?
- What about sports? Imagine the coach uses punishers to suppress behavior (yelling at players, deriding them, grabbing them, making them run laps, etc.).
  - Potential problems: it takes the fun out of the sport, drives some people to quit, you do not necessarily learn the correct behavior (you just know the incorrect behavior), and it causes fear and anxiety toward the coach, fear of failure, decreased self-confidence, an unpleasant environment, and wasted practice time.
- Using physical activity as a punisher is a BAD idea!! (Ex: making players run laps.) It teaches people that physical activity is unpleasant and should be avoided at all costs. The coach's job is to promote lifelong activity, not to make people fear engaging in physical activity for the rest of their lives.
- Punishment can still be effective if provided consistently, contingently, and immediately. Particularly for verbal organisms, you must transition from external punishment to internal punishment. (Ex: instead of a time out, have them describe why hitting their friends is not nice.)
- Try other methods, such as reinforcement, before you initiate punishment.
Consider the use of punishment only when:
- The behavior is very maladaptive and it is in the client's best interest to suppress it quickly.
- The intervention meets ethical standards, including consent.
- Punishment is applied under strict guidelines.
- The program includes safeguards to protect the client.

Extinction takes into account the function of the behavior.
- It is another way to alter the frequency of behavior.
- Extinction usually occurs when we stop reinforcing a previously reinforced response, which decreases the frequency of the response; but we can also do extinction for a punishment contingency, which would increase the frequency of the response.
- Extinction is not the same as punishment: it involves withholding the consequence that previously maintained the behavior.
- Extinction is not "just ignoring" the behavior.
- Example: no popsicle -> hits brother -> gets popsicle (positive reinforcement gone awry).
  - No popsicle -> hits brother -> no popsicle (extinction; the frequency of hitting brother decreases).
  - No yelling -> hits brother -> yelling (positive punishment; the frequency of hitting brother decreases).
  - Video games -> hits brother -> no video games (negative punishment; the frequency of hitting brother decreases).
- Extinction means withholding the consequence that used to maintain the behavior. (Ex: a kid throws a tantrum because he wants a cookie; you have to withhold the cookie.)
- You can do extinction for things that were previously punished in order to increase a behavior in the long term.
  - Child will not be scolded -> child babbles in "baby talk" -> child is not scolded.
  - Ex: does not hear coach screaming -> kicks ball -> hears coach screaming (punishment).
  - Does not hear coach screaming -> kicks ball -> does not hear coach screaming (extinction of the punishment contingency).

Extinction is easier said than done.
Challenges:
- Everyone in the environment must be on board. If you reinforce or punish intermittently, you have strengthened the behavior, and you can also shape up more severe behaviors.
- The reinforcers/punishers you withhold must be the ones maintaining the behavior.
- If it is not possible to withhold the consequence (for example, if it is physiological), you cannot do extinction.

With extinction, behavior gets better in the long term but worse in the short term (the extinction burst):
- There is a temporary increase in the frequency of the behavior.
- The behavior will often change or escalate.
- Aggressive/destructive behavior may follow.
- Do not reinforce the behavior if any of these happen!!! That reinforces the worse behavior.
- Spontaneous recovery: the behavior that has been extinguished reappears after some time has passed.

Extinction means that we WITHHOLD THE CONSEQUENCE THAT PREVIOUSLY MAINTAINED THE BEHAVIOR. If it was previously maintained by a reinforcer, we withhold THAT reinforcer. If it was previously punished, we withhold THAT punisher. Extinction is NOT the same as punishment: it involves withholding the consequence that previously maintained the behavior.

So, let's talk about the article that you read for today: the Lovaas article. This is a classic; it is arguably THE article that changed everything about how we approach the treatment of developmental disabilities. To give you context, prior to this series of articles, the overwhelming approach to "treatment" for developmental disabilities, including autism spectrum disorders, was containment. The belief was that if someone was diagnosed with autism, nothing would help at all. Obviously, that is a very different story now. There ARE treatments (in fact, insurers are REQUIRED to cover them) that have DRAMATIC impacts on the trajectory of the disorder. They are very effective in making real, clinically significant changes in an individual's language, social behavior, functional behavior, etc.
THIS article (and the ones that followed by this research team) changed EVERYTHING about how we approach autism. That is why I have you read it; it is such an important historical read. Let's talk about how to read a scholarly article like this. I have given you some questions to guide you, and I would encourage you to work through them with the article.

1. The Lovaas (1987) study contended that autistic individuals had deficits in their learning repertoire. What were the deficits, and how was the intervention designed to address them? The individuals were displaying a range of behavioral deficits, including deficits in language, play behavior, socialization, and IQ score. They were also evidencing some behavioral excesses, including self-injurious behavior. The interventions were deliberately designed to address these deficits through discrete-trial training of the deficient skills.

2. Who were the subjects for this study, and what experimental groups were they assigned to? The subjects were individuals who met diagnostic criteria for autism spectrum disorder and were young (less than 2 1/2 years old). They were assigned to 1 of 2 groups: the Control Group (who received 10 hours a week of intensive, 1-on-1 treatment) or the Experimental Group (who received 40 hours a week of intensive, 1-on-1 treatment). Both groups received the treatment for several years.

3. Explain the interventions in place for the treatment group. The interventions they conducted were those that we have talked about in class: contingent reinforcement, punishment, time-out, and extinction. Basically, they were doing the types of things we have discussed in class as effective for behavior change.

4. What measures did they use to assess changes in the experimental condition? They assessed a variety of outcome measures, including IQ score, play behavior, language behavior, grade placement, etc.

5.
What were the specific results, and what are the implications of the results? The results were dramatic: the participants who were assigned to the Experimental Group (40 hours/week of intensive treatment) performed DRAMATICALLY better on all outcome measures. When they looked at one of the variables, first-grade placement, 47% of the students in the Experimental Group were assigned to a normal first-grade classroom, compared to 2% of the Control Group. Further, those children who improved were indistinguishable from their peers who were not diagnosed with autism: they did not show the signs of autism anymore; they had appropriate play, socialization, language, etc. This was AMAZING for the time. It was a time when the prevailing view was that NO improvement was possible for these patients, and this study proved that wrong. Several subsequent studies further provided evidence for the importance of intervening EARLY AND INTENSIVELY in autism spectrum disorders. These studies set the standard for current treatments for autism and dramatically changed the entire approach to this disorder.

Another way to decrease the frequency of behavior is Differential Reinforcement.
• Differential reinforcement uses reinforcement to decrease the frequency of some behavior while increasing the frequency of some other behavior.
• This seems a little counter-intuitive, I know, since we have talked extensively about how reinforcement is used to INCREASE behavior; so how could it be used to DECREASE behavior?
• Really, what we are doing with differential reinforcement is COMBINING reinforcement and extinction: we do them both at the same time, with the objective of decreasing the frequency of a problem behavior. So differential reinforcement involves procedures for reinforcing one set of responses and withholding reinforcement for another set of responses.
• Differential reinforcement versus plain old reinforcement: how does differential reinforcement differ from plain vanilla reinforcement?
The answer is that they are PROCEDURALLY very similar but slightly different in their GOALS. When do we use plain reinforcement? When we just want to increase the frequency of a behavior but don't really care much about its details. For example, I might reinforce any response that occurs in class, even if it is a not-so-smart response (plain vanilla reinforcement). But we may use differential reinforcement when we wish to increase the frequency of one set of behaviors and decrease the frequency of another subset. So, I might differentially reinforce smart comments and not reinforce the not-so-smart responses (i.e., I put those on extinction). Of course, plain reinforcement always involves differential reinforcement to some degree, because some responses will be reinforced and others will not. However, in plain reinforcement, the unreinforced class is defined by exclusion: any behavior that is not eligible for reinforcement. With differential reinforcement, we more often are trying to explicitly increase one behavior and decrease the frequency of another behavior. That is, the GOAL of the treatment is really to decrease the frequency of a behavior (call it "Behavior X"). So my GOAL is really to decrease the frequency of Behavior X, so I fail to reinforce Behavior X (i.e., I put it on extinction) and reinforce other behaviors instead (maybe Behavior Y and Behavior Z). The idea here is that we are clearly defining the unreinforced behavior: we are deliberately putting Behavior X on extinction BECAUSE we want to decrease its frequency. So the way that differential reinforcement works is that you put the problem behavior on extinction (you fail to reinforce it) and then you reinforce other behaviors instead with the same reinforcer that maintained the problem behavior.
The idea is that you are decreasing the problem behavior using extinction AT THE SAME TIME that you are increasing other, more appropriate behaviors using reinforcement. So, differential reinforcement is really the COMBINATION of extinction and reinforcement. Why would we use differential reinforcement instead of other methods to reduce the frequency of behavior (like punishment or extinction)? Differential reinforcement is able to minimize some of the problems that we know exist with other methods, like the extinction burst and the aggression that happens with punishment. It also gives us a way to increase the frequency of a prosocial behavior at the same time it is reducing the frequency of a problem behavior. There is a great video that walks through the procedures of differential reinforcement.

Differential reinforcement of low rates (DRL): Sometimes, a behavior is acceptable or tolerable if it occurs at a low rate, but problematic if it happens at too high a frequency. For example, in a third-grade classroom, it might be OK if a student talks without raising his hand occasionally, but if he is doing it too much it becomes a problem. So, what you would do is reinforce him (in the form of attention or answering his questions) if the behavior happens at a low rate (maybe less than 3 times during the day), but anything above 3 times is put on extinction (i.e., you do not reinforce it). Similarly, if we were working with someone who was constantly complaining, we might reinforce (with attention and comments) if it occurred at a low rate (maybe 2 or fewer times an hour), but anything over 2 would be put on extinction (i.e., you would withhold attention). We also sometimes do this with people who text us: if they text us at a low rate (maybe less than 3 times a day) we reinforce them by responding, but anything above 3 is on extinction (i.e., we do not respond).
Another type of differential reinforcement is Differential Reinforcement of Other behavior (DRO). Sometimes, a behavior is so detrimental that even low rates of the behavior are unacceptable. In DRO, reinforcement occurs for any behavior OTHER THAN the problem behavior. So, what you are doing is reinforcing ANY OTHER BEHAVIOR besides the problem behavior, and putting the problem behavior on extinction. For example, we might design a DRO where we want to decrease the frequency of head-banging behavior, and we do so by reinforcing ANYTHING OTHER THAN head-banging. Sometimes, people will refer to this procedure as Differential Reinforcement of Zero behavior, which is somewhat problematic because it violates the dead-man rule: a dead man could NOT engage in a zero rate of behavior. So, I prefer to think of DRO as Differential Reinforcement of Other behavior, in which the person is reinforced for doing anything other than the very problematic behavior. If the problem behavior is Behavior X, you would put Behavior X on extinction (i.e., withhold the reinforcer) and reinforce ALL OTHER BEHAVIORS. However, sometimes it is neither practical nor appropriate to reinforce ALL OTHER BEHAVIORS (for example, what if one of those other behaviors was a different, also inappropriate behavior?), so a reasonable alternative is Differential Reinforcement of Alternative behavior (DRA), where we reinforce ONE alternative behavior to the problem behavior.
• Finally, we can also do Differential Reinforcement of Incompatible behavior (DRI), where we try to decrease the frequency of a problem behavior by reinforcing a behavior that is INCOMPATIBLE with the problem behavior; that is, it is physically impossible to be doing both at the same time.
In DRI, you are specifying an incompatible response that is to be reinforced, where the incompatible behavior and the undesirable behavior cannot occur simultaneously. So, if a person is yelling out obscenities (the undesirable behavior), you could reinforce sitting with the mouth closed (the incompatible behavior). Or, if someone is running around the classroom, you might reinforce the incompatible response of sitting quietly at one's desk. When we decide to decrease a target response by withholding reinforcers for it and by reinforcing an incompatible response, this is DRI. Other examples would be DRI for taking a cab instead of driving drunk (you can't do both), keeping your hands in your lap instead of biting your nails, and arriving to class on time instead of arriving late (you can't do both!).

Differential reinforcement in general: in medicine, specifically in pediatrics, it can target children's lack of compliance with treatment, i.e., tantrums or general refusal of treatment/needles/more painful and uncomfortable treatments.
- DRL: a child complaining about pain. We want to treat the patient and make sure they are as comfortable as possible, taking their illness into consideration, and give them treatment for what is actually hurting them. But if they are complaining about every little thing, it will be difficult to treat them, and that type of pointless complaining is detrimental to their own health.
- DRO/DRA: a child throwing tantrums that make it impossible to treat them. That is not acceptable, because it is not only making the jobs of the hospital employees more difficult but, much more importantly, preventing the patient from receiving care and getting better. So reinforce good behavior and compliance with treatment.
- DRI: refusal to take their medicine at the scheduled times is the problem behavior; when they take their medicine correctly, reinforce that behavior.
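Each differential-reinforcement variant boils down to a rule for which responses earn the maintaining reinforcer. A minimal sketch of those decision rules (the thresholds and behavior names are my own examples, not clinical values):

```python
def drl_reinforce(count_so_far: int, limit: int = 3) -> bool:
    """DRL: reinforce only while the behavior stays at a low rate;
    responses beyond the limit go on extinction."""
    return count_so_far <= limit

def dro_reinforce(behavior: str, problem: str = "head-banging") -> bool:
    """DRO: reinforce any behavior other than the problem behavior."""
    return behavior != problem

def dra_reinforce(behavior: str, alternative: str = "asking for help") -> bool:
    """DRA: reinforce one chosen alternative to the problem behavior."""
    return behavior == alternative

def dri_reinforce(behavior: str, incompatible: str = "sitting quietly") -> bool:
    """DRI: reinforce a response physically incompatible with the problem."""
    return behavior == incompatible

# A 4th hand-raise-free call-out in the day is put on extinction:
print(drl_reinforce(2), drl_reinforce(4))  # True False
```

Note how DRO, DRA, and DRI differ only in how narrowly they define the reinforced class: everything but the problem, one named alternative, or one physically incompatible response.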
Video on the procedures of differential reinforcement: https://www.youtube.com/watch?v=PQE1RyFziv4

• The frequency with which reinforcement occurs makes a difference in the strength of the response. We know that if we fail to reinforce a behavior, it will stop occurring. It also matters what schedule of reinforcement the behavior is being maintained on.
• There are various schedules on which reinforcement can be administered. Incidentally, this applies not only to reinforcement but also to punishment. A schedule of reinforcement is a rule specifying which occurrences of a given behavior, if any, will be reinforced.
• Types of schedules:
• Continuous reinforcement (CRF): this is the simplest schedule of reinforcement, and it involves administering the reinforcement every time the behavior occurs. Let's say we were doing an intervention with a kid in the schools: you praise him every time he successfully completes a math problem. Or on a first date, you laugh every time the person tells a joke. That is continuous reinforcement: every instance of the target behavior gets reinforced.
• Lots of behaviors in our lives are on a continuous schedule of reinforcement. Every time I turn the knob on the radio, the radio comes on; every time I lift the handle of the faucet, the water comes out; every time I take a sip of Diet Mountain Dew, I get the sweet, delicious taste; etc.
• However, when we are implementing behavior modification interventions, particularly when we or another human is responsible for delivering the reinforcement, it is not always practical to administer the reinforcement every time the behavior occurs. Often, we will provide continuous reinforcement while the behavior is being acquired (the acquisition phase), but will fade to a schedule of less reinforcement once the behavior has been established (the maintenance phase).
• Intermittent reinforcement: this specifies a rule or procedure for occasionally reinforcing a behavior. Intermittent reinforcement involves reinforcing certain instances of the behavior, but not every behavior emitted.
• In general, when we talk about intermittent reinforcement, we are referring to free-operant procedures, in which a person is allowed to respond repeatedly, in the sense that there are no restraints on responding. This is contrasted with discrete-trial procedures, in which a stimulus is presented, the experimenter waits for the person to respond, reinforcement is provided, and then another stimulus is presented, and so on in successive trials. In discrete-trial procedures, the rate of responding is constrained by the number of opportunities to respond. So, in general, when we talk about the response patterns produced by intermittent schedules of reinforcement, we are talking about free-operant procedures, in which the person is free to respond at will.
• On an intermittent schedule, a reinforcer follows the response only once in a while. There are several advantages of intermittent reinforcement: the reinforcer remains effective longer than with continuous reinforcement because there is less satiation; intermittent schedules are more resistant to extinction; behavior is more consistent on intermittent schedules; and this is more like what will occur in the natural environment. For example, if we are trying to train social skills and give the child a smile every time they say something socially appropriate, that is not analogous to what happens in the real world: appropriate behavior is reinforced intermittently, not continuously. This is how reinforcement and punishment work in the real world. We are not reinforced every time we get a math problem right, or give a compliment, or do exemplary work. Similarly, we are not punished every time we talk back to our parents, run a red light, or swear.
This is usually the reason why parents say that "reinforcement or punishment does not work": they are providing the consequences so intermittently that the child does not learn the relationship between a given response and the outcome, or is counting on the fact that a consequence following that behavior is unlikely. In general, responses maintained on intermittent schedules of reinforcement take longer to extinguish than those on continuous reinforcement.
• There are different types of intermittent schedules of reinforcement:
• Fixed Ratio (FR): you provide reinforcement only after a fixed number of correct responses. It might be after every 10 correct responses, or 20, or 100, etc. So, a worker might get paid (reinforcement) after picking a certain number of bushels. Similarly, some factories pay workers based on the number of completed parts (piece-rate pay). This is denoted, for example, FR 16, where FR stands for fixed ratio and the number corresponds to the number of responses needed for reinforcement to occur.
• FR responding occurs such that after a response is reinforced, no responding occurs for a period of time, and then responding occurs at a high, steady rate until the next reinforcer is delivered. These schedules tend to produce a rapid and steady response rate, as long as the number of responses required for reinforcement is not too large.
• There is a post-reinforcement pause after the reinforcement is delivered, and the length of this pause is proportional to the value of the FR: the higher the value, the longer the pause. This schedule is resistant to extinction.
• You generally work up to a thinner FR schedule of reinforcement gradually, or else you will experience ratio strain, in which there is a deterioration of responding because you have made the schedule too thin too quickly.
• Variable Ratio (VR): reinforcement is provided after a variable number of correct responses.
The number of responses required for each reinforcement in a VR schedule varies around some mean value.
• So, for a VR 10, for example, the reinforcement comes after an average of 10 responses, but may come after 1, 3, 19, or 17 correct responses.
• VR schedules produce a high rate of responding, with almost no post-reinforcement pausing. Gambling slot machines are maintained on variable-ratio schedules: the gambler receives a payout on an irregular basis for some responses and not for others. The response of dropping coins into slot machines is maintained at a high, steady level by the payoff that is delivered only after an unknown, variable number of coins has been deposited. Asking someone out on a date is reinforced on a VR schedule. Children's whining and complaining is often maintained on this schedule, such that the behavior sometimes results in reinforcement.
• This schedule is very resistant to extinction (more so than FR) and produces almost no post-reinforcement pause. VR schedules can also be maintained at a higher ratio of responses to reinforcement, and can progress to that high ratio more quickly, without causing ratio strain (in which the organism stops responding at higher VR values).
• VR is useful, but FR is more commonly used in behavioral programs because it is easier to administer.
• Fixed Interval (FI): provides reinforcement for the first correct response made after a specified time interval. The size of the FI schedule is the amount of time that must elapse before reinforcement becomes available. So, you might decide that you are going to praise a child every five minutes for working on his math problems. This is an FI 5 schedule.
• Note that it is not just the passage of time that matters with an FI schedule: the organism must make a response sometime after the specified time interval. Responses occurring before the specified interval is up have absolutely no effect on the occurrence of the reinforcer.
With this schedule of reinforcement, most animals and humans learn to pause after each reinforcement and begin to respond again only as the end of the time interval approaches.
• The FI schedule often produces a scallop: a gradual increase in the rate of responding, with responding occurring at a high rate just before reinforcement becomes available, and no responding for some time after each reinforcement. Say you have a hot date at 6 pm every night and you can hardly wait; your behavior of looking at your watch follows this typical pattern. You don't start responding at a high frequency until the time for reinforcement is near, and you pause for a long time after reinforcement occurs before starting to respond again. Or, if you flip to another channel during TV commercials, your responses will increase in frequency as you get closer to the reinforcement (when your show will be back on), assuming the commercial breaks are the same length.
• Going to pick up a paycheck may be an example of an FI schedule, but only if the check is ready only after a certain period of time and going to the pay window earlier does not get it any sooner. Working and getting paid by the hour is not an FI schedule, since you have to work (respond) throughout the interval (one hour) in order to get paid. That is, responses made before the time interval has elapsed do matter when you are paid by the hour, so it is not an FI schedule.
• FI schedules tend to produce a scallop, with a post-reinforcement pause that is longer when the FI value is larger.
• Variable Interval (VI): reinforcement becomes available after a variable amount of time has elapsed. The length of the interval varies around some mean value, and this value is specified in the designation of the particular VI schedule.
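The FI rule (only the first response after the interval elapses is reinforced, and the timer then restarts) can be sketched as follows. The function name and the timing convention are invented for illustration:

```python
def fixed_interval(interval, response_times):
    """Simulate an FI schedule: the first response at or after each
    `interval` time units since the last reinforcer is reinforced;
    earlier responses have no effect at all."""
    reinforced = []
    available_at = interval  # reinforcement first becomes available at t = interval
    for t in sorted(response_times):
        if t >= available_at:
            reinforced.append(t)
            available_at = t + interval  # the timer restarts after each reinforcer
    return reinforced

# FI 5: responses at t = 1..4 earn nothing; the first response at or after
# t = 5 pays off (here at t = 6), and the interval then restarts from there.
print(fixed_interval(5, [1, 2, 3, 4, 6, 7, 11, 12]))  # [6, 11]
```

Note that the early responses at t = 1 through 4 change nothing, which is exactly why organisms learn to pause after each reinforcer.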
For example, if a mean of 25 minutes is required before reinforcement becomes available, the schedule is abbreviated VI 25; individual intervals might be 2 minutes, 7 minutes, or 13 minutes.
• Responses on this schedule tend to occur at a moderate, steady rate with very small post-reinforcement pauses. If you call a friend and the line is busy, your redial behavior is likely on this type of schedule: you do not know when your friend will be off the phone (and thus when reinforcement becomes available), so you call again every few minutes. Checking email is also on a VI schedule; reinforcement, in the form of a new email, arrives after variable amounts of time.
• Simple interval schedules of reinforcement are rarely used in Behavior Modification because they tend to produce long post-reinforcement pauses and lower rates of responding, and because you have to continuously monitor the behavior after the interval has expired, which is a pain.
• Sometimes a limited hold is added to an interval schedule, which makes it much more practical. A limited hold is a finite time, after a reinforcer becomes available, during which a response will still produce the reinforcer. It is basically a deadline for meeting the response requirement of a schedule of reinforcement. If the response does not occur within that time frame, the reinforcement for that instance is lost forever. That is, once the reinforcement is "set up," its availability is "held" only for a limited period.
• These schedules produce effects similar to those produced by ratio schedules, especially if the hold is small. Waiting for the bus is an example of an interval schedule with a limited hold: the bus comes at relatively fixed intervals (every twelve minutes, say), but the reinforcement (being able to get on the bus) is only available for a limited amount of time, like a minute.
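The limited-hold idea from the bus example can be sketched directly: a reinforcer "set up" at some time can only be collected by a response inside its hold window. The names and the window convention here are assumptions for illustration:

```python
def limited_hold(setup_times, hold, response_times):
    """Interval schedule with a limited hold: a reinforcer set up at time s
    can be collected only by a response in the window [s, s + hold]."""
    responses = sorted(response_times)
    collected = []
    for s in setup_times:
        # First response falling inside this reinforcer's window, if any
        # (windows are assumed non-overlapping in this sketch).
        hit = next((t for t in responses if s <= t <= s + hold), None)
        if hit is not None:
            collected.append(s)
    return collected

# Buses every 12 minutes with a 1-minute hold: the t=12 and t=36 buses are
# caught, but arriving at t=25.5 misses the t=24 bus's window [24, 25].
print(limited_hold([12, 24, 36], 1, [12.5, 25.5, 36]))  # [12, 36]
```

With a small `hold`, only responses timed closely to each setup pay off, which is why these schedules produce ratio-like behavior.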
If you are not there before the hold expires, you do not get the reinforcement. Calling someone whose line is busy is similar: if you do not respond within a finite amount of time, reinforcement may no longer be available (the person may start talking to someone else or leave).
• A limited hold is useful when you want to produce ratio-like behavior but don't have the time to count behaviors.
• Duration schedules: reinforcement occurs after the behavior has been engaged in for a continuous period of time. On a fixed duration (FD) schedule, the behavior must be engaged in for a fixed amount of time for reinforcement to occur. On a variable duration (VD) schedule, the interval of time the behavior must be engaged in changes unpredictably from reinforcement to reinforcement. A worker who is paid by the hour is on an FD schedule. Waiting to cross the street is a VD schedule: the behavior of waiting must be engaged in continuously, and the time until reinforcement (a clearing in traffic) varies. Rubbing two sticks together to produce fire is another VD example; you have to do it continuously, but the amount of time required varies.
• Duration schedules are sometimes useful, but only when the behavior can be measured continuously and reinforced based on its duration, and that can be difficult to tell. For example, reinforcing a person for studying for an hour may seem like a good idea, but something could look like studying (daydreaming, playing computer games) without really being studying.
• In Behavior Modification programs, eye contact with developmentally disabled (DD) clients is commonly reinforced on a duration schedule.
• Concurrent schedules of reinforcement:
• In most situations we have the option of behaving in a variety of ways, and each of these behaviors is likely reinforced on a different schedule of reinforcement.
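An FD schedule can be sketched as counting how many full required-duration units each continuous bout of behavior contains, so an hourly worker earns one pay unit per full hour worked. This is a hypothetical sketch; the name and the per-unit counting convention are assumptions:

```python
def fixed_duration(required, bouts):
    """FD schedule: each continuous bout of behavior earns one reinforcer
    per full `required` time units it lasts (a 130-minute bout on FD 60
    earns 2; a 45-minute bout earns 0)."""
    return sum(bout // required for bout in bouts)

# Hourly pay (FD 60, in minutes) across three continuous work bouts:
print(fixed_duration(60, [60, 45, 130]))  # 1 + 0 + 2 = 3 pay units
```

A VD version would simply redraw `required` after each reinforcer, just as the VR sketch redraws its response requirement.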
The different schedules of reinforcement that exist all at the same time are called concurrent schedules of reinforcement.
• The matching law was proposed as an explanation for why a person chooses one activity over another. It states that the response rate on one schedule in a concurrent arrangement is proportional to the rate of reinforcement on that schedule relative to the rates of reinforcement on the other concurrent schedules.
• Other factors also affect which behavior we choose: the types of schedules that are operating, the immediacy of the reinforcer, the magnitude of the reinforcement, and the response effort required.
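The matching law's proportionality can be written as a one-liner: the predicted share of responses on each schedule equals that schedule's share of the total reinforcement. A sketch with invented names, showing the proportion only (the law as stated; it ignores the other choice factors listed above):

```python
def matching_law(reinforcement_rates):
    """Matching law: the predicted proportion of responses allocated to each
    concurrent schedule equals that schedule's share of total reinforcement."""
    total = sum(reinforcement_rates)
    return [r / total for r in reinforcement_rates]

# Two concurrent schedules paying 30 and 10 reinforcers per hour:
# matching predicts 75% of responses on the first, 25% on the second.
print(matching_law([30, 10]))  # [0.75, 0.25]
```

The predicted proportions always sum to 1, since each is just one schedule's fraction of the total reinforcement rate.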