Study Guide for Exam 3: Psyc 4450
This 10-page study guide was uploaded by Elizabeth Heitmann on Thursday, April 7, 2016. It belongs to Psyc 4450 (Learning) at Rensselaer Polytechnic Institute, taught by Christopher L. Hubbell in Winter 2016.
Instrumental Conditioning: Introduction

I. Basic Procedures
    a. Mazes: discrete trials
        i. Looks at behavioral changes between trials
        ii. Straight alley
        iii. T-maze or Y-maze
        iv. Hampton Court Maze
    b. Skinner boxes
        i. Measure steady-state behaviors
    c. Shaping
        i. The method of successive approximation
II. Primary, Secondary, and Social Reinforcers
    a. Primary reinforcer
        i. No special training is required for it to be effective in triggering behavior
            1. Food, water, or sex
            2. Access to complex sensory stimulation
        ii. Premack principle
            1. More probable responses will reinforce less probable responses
    b. Secondary reinforcer
        i. Also called a conditioned reinforcer
        ii. Training is required for the reinforcer to acquire meaning and be effective in triggering behavior
    c. Social reinforcer
        i. Also a conditioned reinforcer
III. Delay of Reinforcement
    a. Does delay matter?
        i. Early studies
            1. T-maze with a delay box
                a. Trap the rat in the delay box for a period of time, then allow it access to the goal
                b. Learning occurred with delays of up to 60 seconds
                c. Violated the law of contiguity
                d. Hull: characteristics of the delay box became secondary reinforcers
        ii. Spence's hypothesis
            1. Exteroceptive stimuli: stimuli that originate outside the body
            2. Interoceptive stimuli: stimuli that originate inside the body
                a. Proprioceptive stimuli: stimuli that originate in muscular movement
                b. Believed that a muscle movement left a trace of that movement in the brain after the movement occurred
        iii. Grice's test
            1. Looked to see who was right: Hull or Spence
            2. Food was available after the white stimulus
            3. The side that was white changed, so it was on the right 50% of the time and on the left 50% of the time
                a. Equal reinforcement of going right or left, so there is no proprioceptive reinforcement
            4. Introduced various time delays in different groups
                a. Delay boxes did not gain secondary reinforcement properties
            5. Delays of 1 or 2 seconds impaired learning
            6. Supports Spence's hypothesis
    b. The Role of Interference
        i. Interference from competing responses
            1. Why delay had such a devastating effect in Grice's test
IV. Schedules of Reinforcement
    a. Ratio and Interval Schedules
        i. The schedules
            1. Continuous Reinforcement (CRF) schedule: reward is given every time the response is performed
            2. Fixed or variable
                a. Fixed Ratio (FR): the number of responses required for reinforcement is always the same
                b. Variable Ratio (VR): the number of responses required varies around an average (e.g., an average of 10 presses)
                c. Fixed Interval (FI): the length of time required before reinforcement is always the same
                d. Variable Interval (VI): the length of time required varies around an average (e.g., an average of 10 sec.)
        ii. The partial reinforcement effect
            1. Demonstrated by observing what happens during extinction
            2. Partial reinforcement increases resistance to extinction
    b. DRL and DRO Schedules
        i. Differential Reinforcement of a Low Rate (DRL)
            1. A response is reinforced only after a certain amount of time has passed
            2. If the rat responds during that time, the clock starts over
        ii. Differential Reinforcement of Other Behavior (DRO)
            1. Used to eliminate a response or unwanted behavior
            2. The absence of the response is rewarded when the behavior has not occurred within a time limit
    c. Concurrent Schedules
        i. Two schedules run at the same time
        ii. 2 levers: VI 30 and VI 60
        iii. Over time, rats adjust their behavior to get 2 pellets from the VI 30 lever for every 1 pellet from the VI 60 lever
        iv. Herrnstein's Matching Law: B1/(B1 + B2) = R1/(R1 + R2). If B1 = VI 30 and B2 = VI 60, then B1/(B1 + B2) = R1/(R1 + R2) = 2/3
V. Motivation
    a. Drive
        i. Clark (1958)
            1. Phase I: rats bar press for food on a VI 1 min schedule
            2. Phase II: deprive different groups of rats of food for between 1 and 23 hours
            3. The hungrier the rats are, the more bar presses they perform (greater drive)
    b. Incentive
        i. Crespi (1942)
            1. 3 groups of rats that are equally food deprived
                a. 1 group gets 1 food pellet at the end of the alley
                b. Another group gets 16 food pellets
                c. Another group gets 256 food pellets at the end of the alley
            2. Measure running speeds of rats in each group
            3. Adjust the number of pellets at the end of the alley to 16 for all groups
        ii. Learning or motivation?
            1. Thorndike's Law of Effect: more learning is occurring
            2. Crespi: the behavior seen is based on motivation
            3. The real reason is motivation
        iii. Runway experiment
            1. 2 rat groups: high consumption and low consumption
                a. High averaged 1.42 g/kg of 6% ethanol in 2 hours with an average preference of 46%
                b. Low averaged 0.08 g/kg of 6% ethanol in 2 hours with an average preference of 2%
            2. Rats in each group drink water for 15 minutes, so they are no longer thirsty
            3. Place in an alley with ethanol at the end
            4. Measure running speed in the alley with a 120 sec. time limit
            5. 10 trials
            6. Alcoholic rats ran faster in the alley
    c. Learning and Motivation
        i. In general, an increase in motivation increases learning
        ii. Sometimes a high motivational level impairs learning if the task is hard
        iii. Yerkes-Dodson Law
            1. There is a peak motivation level that produces the best performance
VI. Stimulus Control
    a. The Concept of Stimulus Control
        i. The idea that when a reinforcer appears it automatically strengthens the behavior that precedes it is misleading
        ii. Guttman and Kalish (1956)
            1. Pigeons trained to peck a key for food on a VI schedule
            2. 30-minute training sessions
            3. Key illuminated a yellowish-orange color (580 nm)
            4. Test: vary the wavelength of light in the key and measure responses
            5. Pigeons demonstrated stimulus-controlled behavior
        iii. Jenkins and Harrison (1960)
            1. Used the same procedure as Guttman and Kalish, but with a 1000 Hz tone instead of a light
            2. Experiment 1:
                a. The tone had no effect on behavior; the pigeons pecked the key at the same rate at different tones
                b. Never taught to discriminate that the tone signals food will appear
                c. The tone didn't mean anything
            3. Experiment 2:
                a. Tone was on for 30 sec and off for 30 sec
                b. Pecking when the tone was on got a reinforcer (S+)
                c. Pecking when the tone was off got nothing (S-)
                d. Saw stimulus control
    b. Encouraging Generalization
        i. Provide training in multiple settings so the response occurs wherever the stimulus occurs
    c. Encouraging Discrimination
        i. Confine reinforcement of the response to a particular situation
VII. Preliminary Applications
    a. Dicky's Glasses
        i. 9-month-old with cataracts; needed glasses to see but wouldn't wear them
        ii. Diagnosed with childhood schizophrenia (autism)
        iii. Wolf, Risley, and Mees (1964)
            1. Used reinforcement to get Dicky to wear his glasses
            2. Associated a noise with candy
            3. The noise was presented when Dicky made a response toward putting on his glasses
            4. Needed to deprive him of food completely in order to get him to put them on
            5. Shaped drive
    b. The Importance of Gradual Change
        i. Sidman and Stoddard (1967)
            1. Taught mentally disabled children to tell the difference between a circle and an ellipse
            2. Group 1: trained with just the final task
                a. Pick the circle in a grid filled with ellipses and get the reward
            3. Group 2: experienced a fading procedure
                a. First shown a mostly blacked-out grid with one square containing a circle
                    i. The rest of the squares fade to white as the position of the circle changes
                b. Ellipse outlines faded into the blank grid with the circle still changing positions
            4. Results:
                a. Group 1: 1 in 10 kids could discriminate between a circle and an ellipse after 180 trials
                b. Group 2: 7 in 10 kids could discriminate between a circle and an ellipse after 20 trials

Instrumental Conditioning: Details I

I. Punishment
    a. Methodological Issues
        i. Positive reinforcement: an increase in the probability of a response due to the presentation of an appetitive stimulus
        ii. Negative reinforcement: an increase in the probability of a response due to the removal of an aversive stimulus
        iii. Positive punishment: a decrease in the probability of a response due to the presentation of an aversive stimulus
        iv. Negative punishment: a decrease in the probability of a response due to the removal of an appetitive stimulus
    b. Punishment in Animals
        i. If punishment occurs during extinction, extinction happens faster
            1. Boe and Church: shocked rats during extinction, which led to faster extinction
        ii. Intensity: the more intense the punishment, the faster the extinction
        iii. Delay: the shorter the delay, the faster the extinction
        iv. Schedule: CRF
        v. Stimulus control: the animal will eventually respond only in certain circumstances
    c. Punishment in Humans
        i. Follows the same rules as in animals
        ii. More intense
        iii. Immediate
        iv. Consistent
        v. An explanation helps the person understand the punishment and stop the behavior
II. Side Effects
    a. Traumatic Example of Punishment
        i. Masserman and Pechtel (1953)
            1. Rhesus and spider monkeys
            2. Took monkeys from a colony and taught them to bar press for food
            3. Punished bar pressing by waving a snake over the food when it appeared after the bar press
                a. Monkeys would no longer bar press and wouldn't accept food in the room
                b. Eating habits in the colony were affected
                c. Lost interest in sex
                d. Became submissive to weaker members of the colony
    b. Fear
        i. Animals become afraid and anxious around the place where punishment occurs
    c. Aggression
        i. Pain-elicited aggression
            1. Rats in the same box will start to box if the floor is shocked
        ii. Modeled aggression
            1. Children are more likely to show aggression toward something that adults have shown aggression toward
            2. Copying a certain behavior
    d. Evaluating Punishment
        i. Punishment is an effective tool in shaping behavior
        ii. Punishment produces unwanted side effects
        iii. Sometimes it is better to encourage a related good behavior rather than punish the bad behavior
III. Extinction
    a. Practical Application
        i. Williams (1950)
            1. A child was sick for 2 years, and his parents doted on him
            2. The child threw tantrums at bedtime in order to get both parents to sit in the room with him
            3. The parents were told to ignore the tantrums, and the tantrums stopped
    b. Extinction as Punishment
        i. Not getting a reward can be seen as punishment
        ii. Extinction-induced aggression
            1. Put two pigeons in a chamber together
            2. One was free and could peck the key; one was chained
            3. When the light was off and pecking the key didn't produce a reward, the free pigeon started to attack the chained pigeon
        iii. Frustration effect
            1. When extinction starts, there is a short period when responding suddenly increases, then drops off into extinction
                a. Called the extinction burst, or frustrative non-rewarded responding
IV. The Partial Reinforcement Effect
    a. The Discrimination Hypothesis
        i. The amount of responding during extinction depends on the similarity of the stimuli present to those present during training
        ii. The more demanding the reinforcement schedule, the harder it is to extinguish the behavior
    b. Capaldi's Sequential Model
        i. An extension of the discrimination hypothesis
        ii. A non-reward is a stimulus event
        iii. Memory as a stimulus event
            1. S^N: the non-rewarded stimulus
            2. Group 1: RRR (CRF)
            3. Group 2: NRNRNR (FR2)
            4. Group 3: NNNRNNNRNNNR (FR4)
            5. Group 1 has no association between S^N and reward
            6. Groups 2 and 3 have strengthened this connection
        iv. N-R transition
            1. How often a non-reward was followed by a reward
        v. N length
            1. A run of non-reinforced trials creates a memory event that is associated with the reinforcement
            2. A different number of non-reinforced trials produces a different memory
        vi. New predictions
            1. Group 1: RNR
            2. Group 2: RRN
            3. The discrimination hypothesis predicts that each group will extinguish at the same rate
            4. Capaldi predicted that group 2 would extinguish faster because its non-reinforced trial was never followed by reinforcement

Instrumental Conditioning: Details II

I. Reinforcement in the Classroom
    a. Classroom Behavior
        i. Hall, Lund, and Jackson (1968)
            1. Robby: a 3rd grader, a troublemaker in the classroom
            2. Baseline measurement (7 days)
                a. Robby spent 25% of the time doing schoolwork
                b. Good behavior was ignored, while bad behavior was punished
            3. Reinforcement Phase I (9 days)
                a. Appropriate behavior was praised; bad behavior was ignored
                b. Spent 80% of the time on schoolwork
            4. Reversal phase (5 days)
                a. Good behavior ignored, bad behavior punished
                b. Time spent on schoolwork decreased to 50%
            5. Reinforcement Phase II (10 days)
                a. Same as Phase I
                b. Spent 80% of time on schoolwork
            6. Follow-up 14 weeks later
                a. Still spending 80% of time on schoolwork
                b. Robby's good behavior had reinforced a new behavior in the teacher: reinforcing good behavior and ignoring bad
    b. Teaching Sports
        i. Allison and Ayllon (1980)
            1. How to get college students interested in physical education
            2. Taught them how to serve a tennis ball
                a. Method 1: the traditional approach
                    i. Explanation and demonstration, then a practice period
                b. Method 2: reinforcement
                    i. Explanation and demonstration, then practice
                    ii. Correct serves were reinforced and incorrect techniques punished
                        1. The class stopped while the incorrect technique was corrected
                            a. Humiliation in front of the class acted as punishment
                            b. Created secondary reinforcement by showing the correct way to do it
    c. The Token Economy
II. The Problem of Maintaining Behavior
    a. To be successful, a program must continue until natural reinforcement takes over
    b. Techniques to maximize persistence of behavior:
        i. Intermittent reinforcement
        ii. A variety of settings
        iii. Gradual, rather than abrupt, cessation of the program
III. Harmful Effects of Reinforcement
    a. Moral Objections
        i. Looks like bribery
        ii. Promotes greed
    b. Undermining Intrinsic Motivation
        i. Behavior should be self-satisfying
        ii. Use of reinforcers may devalue the activity
        iii. Gold stars and children drawing
            1. Children allowed to draw
            2. 2 groups: one is rewarded for drawing, the other is not
            3. Phase I: Baseline (test how much the children draw before reinforcement)
            4. Phase II: Reinforcement (the reward group is given gold stars for drawing)
            5. Phase III: Test (how much the children draw without being rewarded)
            6. The experimental group spent less time drawing once they were no longer being rewarded
        iv. Aversion to being controlled
            1. A child will engage in a task to get a reward
            2. The sense of being controlled can be seen as aversive
            3. The overjustification effect: rewarding a behavior that already occurs decreases that behavior
IV. Alternatives to Reinforcement: Modeling
    a. Modeling in the Treatment of Phobias
        i. Bandura, Blanchard, and Ritter (1969)
            1. Trained people not to be afraid of snakes
            2. Procedure
                a. Group 1: control (no treatment)
                b. Group 2: systematic desensitization (relaxation training, then relaxing through a fear hierarchy)
                c. Group 3: film (watched a film of people handling snakes without fear)
                d. Group 4: modeling (live model with active participation)
                e. 29-stage test
                    i. Stage 1: approach the caged snake
                    ii. Stage 29: let the snake crawl all over them
                f. % of subjects reaching stage 29
                    i. Control: 0%
                    ii. Systematic desensitization: 25%
                    iii. Film: 33%
                    iv. Live modeling: 92%
    b. Determinants of Imitation
        i. Characteristics of the model
        ii. Consequences of the model's behavior
V. Alternatives to Reinforcement: Self-control
    a. Seen as a behavior that alters future behavior
    b. Techniques of Self-control
        i. Stimulus control
            1. Insomnia
                a. Worries and problems become associated with the bed
                b. Need to associate only sleep with the bed
        ii. Distraction
            1. Concentrate on something else when doing something aversive
        iii. Self-reinforcement
            1. Reinforcement is sometimes too delayed to work
            2. When the environment does not provide reinforcement, do it yourself
            3. Bandura and Perloff (1967)
                a. Task: children had to turn a wheel
                b. Group 1: control (no reinforcement for turning the wheel)
                c. Group 2: self-reinforcement (take as many tokens as desired for turning the wheel x times, with no experimenter present)
                d. Group 3: reinforcement (the experimenter matches the tokens taken by group 2 for the children in group 3)
                e. Groups 2 and 3 were equally likely to spin the wheel
                    i. Self-reinforcement is just as effective
    c. Development of Self-control
        i. A reinforcement analysis
            1. Why didn't group 2 cheat?
                a. Skinner would say that they reinforced themselves appropriately because they had previously learned that self-control is positively reinforced
        ii. Self-control and modeling
            1. Mischel and Liebert (1966)
                a. Children brought in to test a toy
                b. A model demonstrated the toy and congratulated himself when he received a good score
                c. Stringent group: saw the model call his score good only when he scored a 20
                d. Lenient group: saw the model call scores between 15 and 20 good
                e. Children could take tokens as often as they wanted
                    i. The stringent group only took tokens when they scored a 20
                    ii. The lenient group took tokens when they scored between 15 and 20
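The difference between ratio and interval schedules in the Schedules of Reinforcement section can be made concrete with a short simulation. This is a minimal sketch, not from the notes: the function names, the constant press rate, and the exponential spacing of VI intervals are all illustrative assumptions. It shows why responding faster earns proportionally more reinforcers on FR but barely helps on VI.

```python
import random

def run_fr(presses, ratio):
    """Fixed Ratio: a reinforcer is delivered after every `ratio` responses,
    so reinforcers earned scale directly with responding."""
    return presses // ratio

def run_vi(session_s, mean_interval_s, press_rate_hz, rng):
    """Variable Interval: a reinforcer is 'armed' after a randomly timed
    interval, and the first response after that collects it."""
    rewards = 0
    t = 0.0
    next_armed = rng.expovariate(1 / mean_interval_s)
    while t < session_s:
        t += 1 / press_rate_hz                    # time of the next response
        if t >= next_armed:                       # reward was armed before this press
            rewards += 1
            next_armed = t + rng.expovariate(1 / mean_interval_s)
    return rewards

rng = random.Random(0)
print(run_fr(100, 10))            # 10 reinforcers; 200 presses would earn 20
print(run_vi(1800, 30, 0.5, rng)) # roughly 1800/30 = 60 reinforcers at most
```

Doubling the press rate doubles `run_fr`'s return value, while `run_vi` stays near the ceiling set by the interval timer, which is why VI schedules produce steady, moderate response rates.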
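Herrnstein's matching law from the Concurrent Schedules section can also be checked numerically. A minimal sketch, assuming steady-state reinforcement rates of 2 per minute on the VI 30 lever and 1 per minute on the VI 60 lever; the function name is illustrative:

```python
def matching_proportion(r1, r2):
    """Herrnstein's matching law: B1/(B1 + B2) = R1/(R1 + R2).

    r1, r2: reinforcement rates earned on each alternative.
    Returns the predicted proportion of behavior allocated to alternative 1.
    """
    return r1 / (r1 + r2)

# VI 30 pays off at most every 30 s (2/min); VI 60 at most every 60 s (1/min),
# so the rat should allocate 2/3 of its responses to the VI 30 lever.
print(matching_proportion(2, 1))
```

This reproduces the 2/3 figure in the notes: the rats' 2:1 split of responding matches the 2:1 ratio of obtainable reinforcement.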