
School: Ohio State University
Department: Psychology
Course: Psychobiology of Learning and Memory
Professor: Derek Lindquist
Term: Spring 2016
Tags: Psychobiology of Learning and Memory
Name: Midterm 2 Psychobiology Study Guide
Description: Includes descriptions and examples of all the terms Dr. Lindquist provided us with. I added the Ch.6 notes towards the end because he has not posted the terms for that chapter yet. In addition I added 4 sample essay questions that I think would be the hardest to respond to. Best of luck on the exam!
Uploaded: 02/21/2016


Midterm 2 Psychobiology Study Guide. Dr. Lindquist. Created by Alexandra Alvey.

Ch. 4 Terms


Appetitive Conditioning

1) The US is positive in this classical conditioning paradigm

2) The quail sex experiment was an example of appetitive conditioning. A male quail was placed in a chamber with a door. Behind the door was a female quail in estrous, and above the door was a light. The US would be the door because it allows access to the female, and the UR would be sex. The CS was the light. After many trials of pairing the light with the opening of the door, the quail began to stand by the door more often (CR), waiting for access to the female.

Aversive Conditioning

1) The US is negative  

2) The odor and fruit fly experiment is an example of aversive conditioning. The flies were exposed to two odors. One odor (CS) was paired with an electrical shock (US) and the other was not. When placed in a chamber with the two odors, the flies moved toward the side that had the neutral odor (CR).



Blocking

1) When one pre-trained CS-US association blocks the ability for a second CS to elicit the CR.

2) Participants in a study were taught to categorize different shapes (CS) into group A or group B (CR). After a few trials of this, the experimenters taught the participants a different way to categorize: by dots presented on the tops of circles and on the bottoms of triangles. The participants were then shown new shapes that they had to categorize by dots (CR), and none of the participants could do it because they were all using the original CS of categorizing by shape.


Cerebellum

1) Important in both delay and trace eye blink conditioning.

2) Thought to contain the engram for eye blink conditioning.

Classic/Pavlovian Conditioning

1) Associative Learning

2) Components are CS, US, UR and CR.


3) Pavlov discovered classical conditioning while doing an experiment on the dog's digestive system. He noticed that the dogs were salivating even before the food was presented. He called these psychic secretions because the dogs seemed to know the food was coming even before it was presented.

Compound Stimulus

1) Two CSs are presented at the same time, or at different times, with the same US. The association between each conditioned stimulus and the US depends on which stimulus was presented first and how many times that stimulus was paired with the US.

2) If the first stimulus was paired with US many more times, then the association strength  will be a lot stronger than the second stimulus and US.

3) The compound stimulus is used in Kamin’s blocking effect.  If a mouse is trained to  associate a light (CS) with a shock (US) then when the compound stimulus of light/tone  is presented the light elicits a strong CR and the tone elicits a much smaller CR.

4) If the compound stimulus is originally associated with the shock, the CR produced by both the light and the tone will be equal in strength.

5) The association strength between the compound stimulus and US cannot be greater than 100 (or 1, depending on the model you are working with). The higher the number, the more predictive the CS is of the US. If you have two CSs, then the prediction accuracy of the expected US will be split between them. Example) A shock (US) is administered to a mouse after the presentation of a light/tone compound stimulus (CS). The actual US is 100 because the shock occurred, and the expected US is 0 for both the light and the tone because the mouse was not expecting the US to occur after the stimuli. Beta, the salience of the stimuli, is set to 0.3. So the change in the light cue being predictive of the shock is 0.3 multiplied by 100 = 30. The tone will have the same value. 30 + 30 = 60, and 60 will be the expected US for the next trial.
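The compound-stimulus arithmetic above can be sketched in a few lines of code. This is a minimal illustration of the Rescorla-Wagner update applied to two cues at once, assuming the values from the example (beta = 0.3, actual US = 100); the function and cue names are my own, not from the notes.

```python
# Sketch of the Rescorla-Wagner update for a light/tone compound stimulus,
# using the values from the example above (beta = 0.3, actual US = 100).

def rw_trial(weights, cues, actual_us, beta):
    """Run one trial: every presented cue changes by beta * (actual - expected)."""
    expected_us = sum(weights[c] for c in cues)  # the prediction is summed over all cues
    error = actual_us - expected_us              # prediction error
    for c in cues:
        weights[c] += beta * error               # the same change is applied to each cue
    return weights

weights = {"light": 0.0, "tone": 0.0}            # no expectation before trial 1
rw_trial(weights, ["light", "tone"], actual_us=100, beta=0.3)
print(weights["light"], weights["tone"])         # each cue gains 0.3 * 100 = 30
print(sum(weights.values()))                     # expected US for the next trial: 60
```

Because the prediction is summed over both cues, neither cue can grow to 100 on its own, which is the "split between them" point made above.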

Conditioned Stimulus

1) A stimulus, such as a light or tone, that is paired with a more salient stimulus like a shock or air puff (US). This pairing of CS and US over many trials elicits a strong CR.

2) Be able to determine the CS and US in any example.

Unconditioned Stimulus

1) The salient stimulus that elicits a UR. Can be paired with the CS so many times that it is eventually not needed, and the CR is elicited by the CS alone.

2) Ask yourself what stimulus would naturally produce the response. This will help you determine what the US is. For example, an air puff will automatically make you close your eye and a shock will automatically make you jump.

Conditioned Compensatory Response

1) When the US is presented in the same context or environment the context becomes the  CS.  When the context or environment becomes the CS the nervous system will  compensate for the US with just the presentation of the CS.  This is a predictive  response that helps the organism adapt to the oncoming US.

2) When a dog is given an adrenaline shot (US) in the same environment many times, the dog's heart rate will decrease right before the shot is administered. The environment serves as the predictive CS and helps the dog's body adjust to the adrenaline.

3) This happens in humans with drugs as well. If you do cocaine (US) in the same room (CS) for a while, then your body becomes tolerant of the cocaine in that room. You will have to increase your dose for the same high in that particular room because your body will always try to compensate for the predicted effects of the cocaine. If you were to do the same amount of cocaine in a different setting, like a party, then you could possibly overdose because the new CS was not predictive of the US.


Contiguity

1) Closeness in space and time. Aristotle believed contiguity always created associations between events and objects. This was shown to be false by the blocking effect with a compound stimulus.

2) See blocking effect/compound stimulus above.  Even though the compound stimulus  (light/tone) was presented right before the shock the tone alone is not as predictive as  the light because the light was the CS for many more trials before the compound  stimulus.

CR (eye blink) timing

1) The rabbits have to produce a strong CR at the right time for it to be truly predictive of  the air puff.

2) The conditioned response is the eye blink.  On day 1 of training the rabbits blink after  the US, this means that the blink is the UR.  On day 2 the rabbits start to weakly blink  (CR) just before the presentation of the US.  On day 3 the rabbits have a strong eye blink  (CR) starting during the tone and with the presentation of the US.  Humans learn the eye  blink response faster than rabbits.

CS modulation theories

1) The RW model does not account for latent inhibition because it is a US dependent  model.  

2) Models that explain the diminishing effect of the CS for predicting the US when the CS is  presented alone.  Models also explain the blocking effect.

3) Mackintosh Model/Pearce-Hall Model. You do not have to know the specifics of each model.

4) The general idea is that when the CS is presented alone it becomes a neutral stimulus. Afterward, when the CS is paired with the US, it takes longer for the animal to learn the predictive value of the CS because the CS was first learned as a neutral stimulus. In other words, it takes longer for the animal to pay attention to the CS. In blocking, the second CS is redundant and the animal has fewer attentional resources to attend to the second stimulus.

Delay Conditioning

1) Classical conditioning when the US is presented right after the CS.  Easier to learn  because the interstimulus interval is shorter.

2) Requires the cerebellum.


Disinhibition

1) If the inferior olive could be disinhibited, then a US could be paired with multiple CSs.

2) Blocking occurs because the inferior olive is inhibited via GABA by the interpositus nucleus, so the US cannot be paired with another CS once the CR to the first CS is already formed.


Engram

1) The interpositus nucleus, located in the cerebellum, is thought to be the engram of classical conditioning. It combines the CS and US information and eventually produces the conditioned response by inhibiting the inferior olive and exciting the muscles involved with the conditioned response.

2) When the interpositus nucleus is lesioned the animal can not learn conditioned  responses or perform previously learned conditioned responses.


Extinction

1) When the CS is not followed by the US for many trials, the association and CR are inhibited. Remember that extinction is a form of learning, not forgetting. Extinction competes with the former association to produce no response to the CS.

2) Extinction occurs in drug addiction when the CS, the room or context they usually do  drugs in, is presented without the US (drugs) over and over.

3) Evidence that extinction is inhibitory learning

a) Spontaneous recovery- After extinction learning and the passage of some time, the CR happens again in response to the CS. Extinction training happens again, but it doesn't usually take as long. If extinction were simply forgetting, spontaneous recovery wouldn't happen because the animal would have to relearn the excitatory association.

b) Rapid reacquisition- Faster learning rate of excitatory association after  extinction

c) Renewal- The CR only happens in a certain context and extinction of CR only  happens in a certain context.

d) Reinstatement- Presenting the US alone triggers the CR again.

Eye Blink Conditioning Neural Circuit

1) Classical conditioning is homosynaptic. This means that eye blink conditioning involves only the sensory neurons of the eye and the motor neurons that produce the blink.

2) The CS (tone) information is relayed through the pontine nuclei. Afferents from the pontine nuclei, called mossy fibers, synapse on the interpositus nucleus of the inner cerebellum and the Purkinje cells of the cerebellar cortex. Both of these synaptic contacts are excitatory and use glutamate.

3) The US (air puff) information is relayed through the inferior olive of the brainstem. The afferents from the brainstem, called climbing fibers, synapse on the interpositus nucleus and Purkinje cells. Both synapses are excitatory, using glutamate as a neurotransmitter.

4) The purkinje cells project an inhibitory signal, using GABA, about the CS and US timing to  the interpositus nucleus.  This helps establish the optimal time for the animal to  produce the eye blink to the tone (CR).

5) The interpositus nucleus combines excitatory signals from the pontine nuclei (CS) and inferior olive (US) while also receiving the inhibitory signals from the Purkinje cells. The interpositus nucleus projects an inhibitory synaptic connection (GABA) onto the inferior olive to reduce the production of the UR (eye blink to air puff) as the CR (eye blink to tone) is established.

6) The combination of CS and US in the interpositus nucleus is projected via excitatory  (Glutamate) efferents to the muscles of the eye to produce the CR.

Eye blink conditioning

1) Mainly performed on rabbits because they have a low spontaneous eye blink rate and  they can hold still for long amounts of time.

2) US is air puff, the UR is the blink, the CS is the tone and the CR is the blink to the tone.


Homeostasis

1) The body's ability to adapt to external factors or stimuli like temperature, noise and sensations.

2) Homeostasis plays a role in habituation.

Human eye blink conditioning

1) The same as rabbit eye blink conditioning except humans learn the CR a lot quicker so  the slope of the graph is much steeper.  Trials are on the x axis and CR acquisition is on  the y axis.

2) Humans are especially better than rabbits at trace eye blink conditioning.

Inferior Olive

1) Located in the brainstem; relays information about the US to the interpositus nucleus and the Purkinje cells of the cerebellum. Helps develop the CS-US association in the interpositus nucleus.

2) The inferior olive will not fire and send US information once the CS-CR association is made, because the interpositus nucleus inhibits the inferior olive with GABA.

3) This inhibition component explains the effects of compound conditioning. When the CS-CR connection is made, another CS cannot produce as strong a CR because there is not as strong firing in the inferior olive, which would normally connect the new CS and US together in the interpositus nucleus and Purkinje cells.

Interpositus Nucleus

1) The engram of classical conditioning.  

2) Combines CS and US information and produces the CR.  Located in the cerebellar nuclei.

3) Receives an inhibitory afferent from the Purkinje cells and excitatory afferents from the pontine nucleus and inferior olive. Sends an excitatory efferent to the muscles of the eye and an inhibitory efferent to the inferior olive.

Interstimulus Interval

1) The time between the CS and US.

2) Learning rate depends on the ISI.  Generally, the shorter the ISI the quicker the CR  acquisition.  

3) You have to eliminate the US to test whether the CS triggers the CR. The CR should be happening slightly before and during the time when the US was previously presented.

4) The longer the interval, the less accurate the CR becomes. The graph of the optimal ISI should look like an inverted U, with the CR occurring right around where the US occurred.

Latent Inhibition

1) When the CS is presented without the US for many trials, associative learning takes longer because the CS is blocked from being predictive of a stimulus.

2) Performed by the hippocampus. The salience of the CS is also controlled by the hippocampus, so when it is damaged, latent-inhibition hamsters perform the same in classical conditioning because the CS is not registered as distinguishable from one training session to the next.

Prediction error

1) Part of the Rescorla Wagner Model

2) (Actual US- Expected US)

3) Actual US is assigned either 100 or 0, meaning the US either occurred or did not.

4) Expected US is assigned to every cue (light, tone, food, etc.) and increases with excitatory association or decreases with inhibitory association (extinction) on every trial.

5) The prediction error should get smaller with each passing trial until it reaches 0.

Pontine Nucleus

1) Synthesizes CS information and carries it to the interpositus nucleus and the purkinje  cells via excitatory afferents (Glutamate).

2) Located in the brain stem.

Purkinje Cells

1) Receive excitatory afferents from the pontine nucleus and inferior olive.

2) Send inhibitory efferents to the interpositus nucleus in the cerebellar nuclei.

3) Located in the cerebellar cortex.

Rescorla-Wagner model

1) Driven by whether the US is present or not. An error-correction learning model: if the animal did not react to the CS in prediction of the US as intended, then a correction needs to be made. If the animal is correcting its behavior and the prediction error is decreasing, then the animal is learning.

2) Change in cue = beta (Actual US – Expected US).

3) Beta is a constant assigned by the experimenter that stands for the learning rate of the association between the CS and US. The change in cue on each trial is added to the expected US for the next trial; all changes in cue are summed across trials.

4) Example) Training rats to associate a tone (CS) with a shock (US). Beta = .20

A) Trial 1- change in cue = .20 (100 − 0) = 20. 0 is the expected US because the rats were not exposed to the shock at all until the first trial. The expected US will be 20 for the next trial.

B) Trial 2- change in cue = .20 (100 − 20) = 16. Now add the changes in cue from the first and second trials: 20 + 16 = 36. 36 stands for how predictive the CS is of the US.
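The two trials above can be reproduced with a short loop. This is a minimal sketch of the update rule, assuming a single tone cue and the example's values (beta = 0.20, actual US = 100); the variable names are my own.

```python
# Trial-by-trial Rescorla-Wagner updates for a single tone cue,
# reproducing the worked example above (beta = 0.20, actual US = 100).

beta = 0.20
expected_us = 0.0                      # the rats have no expectation before trial 1
for trial in (1, 2):
    error = 100 - expected_us          # prediction error: actual US - expected US
    change = beta * error              # change in cue strength on this trial
    expected_us += change              # summed changes become the next trial's expected US
    print(trial, change, expected_us)  # trial 1: 20.0, 20.0; trial 2: 16.0, 36.0
```

Running more trials shows the prediction error shrinking toward 0, which is the learning-curve behavior the model is meant to capture.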


Tolerance

1) The conditioned compensatory response creates tolerance in certain contexts or CSs.

2) See the examples of drug addiction and adrenaline administration above under conditioned compensatory response.

Trace Conditioning

1) The CS is presented and then there is a delay before the US is presented.  The CS and US  are not presented continuously.

2) A form of explicit learning because it requires the subject to use working memory to  create a memory trace.  This memory trace helps the subject perform the CR at the right  time on the trials afterward.  

3) Requires the hippocampus and cerebellum.

4) Humans perform much better on this task than animals.

Unconditioned Response

1) The response that is automatically elicited by the US without prior training.

2) Examples) Running when shocked, copulation when presented with a mate, eating when presented with food.

Unconditioned Stimulus

1) The stimulus that will automatically elicit a response from the organism without prior training.

2) Examples) Sex, food, sleep, exercise

US Modulation Theory

1) Learning is proportional to the difference between the actual US and the expected US.

2) An animal will change its behavior if the outcome is not what it predicted. If you got a B (actual US) on the last test and you were expecting an A (expected US), you probably changed your behavior (more reading, studying, going to class) so that the next exam's grade will better match your prediction.

Ch. 5 Terms

Anhedonia hypothesis

1) Hedonia means "the goodness of something." An- means "without." So anhedonia means "without goodness."

2) One of the hypotheses proposed to explain extinction mimicry.

3) Stated that dopamine gives objects their "goodness," and when dopamine is blocked in an animal it loses the ability to recognize the goodness of an object and doesn't work to obtain it. Example) pressing a lever to obtain food. After the rats press the lever and receive food, the "goodness" of this action is not associated.

4) Parkinson’s patients have low levels of dopamine in their basal ganglia but they still  enjoy obtaining certain things.  Also rats that were lesioned still showed pleasurable and  aversive responses.  This disproved the Anhedonia hypothesis.


Chaining

1) Taking multiple S-R associations and chaining them together to produce multiple responses with one consequence at the end of the chain of responses.

2) First a dog is trained to sit.  When the owner says sit he gets a treat.  After the dog can  perform sit with intermittent reward the owner teaches him to stay and when the dog  stays he gets a treat.  Eventually the owner should be able to say sit and stay to the dog  and reward the dog when he stays for the required amount of time.

Cocaine/meth and dopamine

1) Cocaine blocks the reuptake of dopamine on the presynaptic terminal.  This allows more  dopamine to stay in the synaptic cleft giving the high.

2) Meth produces more dopamine release from the synaptic vesicles.

3) Physiological changes at the synapse become structural changes and the body needs  more cocaine/meth to get the same high.

4) The high is a positive reinforcement because the dopamine is reinforcing the act of taking the drugs. The withdrawal symptoms are a negative reinforcement because the uncomfortable sensations of losing the high reinforce the act of taking more drugs. It's a vicious cycle.

Continuous Reinforcement

1) When every response from an organism is given a consequence.

2) 1 response: 1 reinforcement

Discrete trial paradigm

1) The experimenter decides when to start the S-R-C trial with the animal.  It requires the  experimenter to give the stimulus and the consequence and each response by the  animal is a trial.

2) Used to experiment with S-R-C before the Skinner box was invented.

Discriminative Stimulus

1) The stimulus used in operant conditioning to let the animal know when to respond.   Commonly used are tones and lights.

2) When the discriminative stimulus is absent or another stimulus is presented the animal  should not respond.

3) A rat is trained to respond by pressing a lever to only a green light to receive food. If a  tone or red light is presented the rat should not press the lever if the operant  conditioning was successful.

Dopamine and Reinforcement

1) Electrical stimulation of rat’s ventral tegmental area (VTA) in the brainstem made them  stay in a corner where they were being electrically stimulated.  Eventually the rats would stay in the corner and ignore food and water just to be stimulated.

2) The VTA uses dopamine to communicate with the nucleus accumbens of the basal ganglia. The nucleus accumbens communicates with the dorsal striatum via dopamine.

3) A dopamine antagonist was placed in the nucleus accumbens of rats to see if they would press the lever (R) at the sight of the light (S) at the same rate as controls. The rats that received the dopamine antagonist made fewer lever presses at the sight of the light, comparable to rats that received no reinforcement after the light. There was a negative slope between trial number and lever presses in both groups. The controls that received reinforcement after the light performed far more lever presses than both groups and maintained a fairly constant rate of lever pressing. They called this phenomenon extinction mimicry.

4) The ventral tegmental projection to the nucleus accumbens is important in relaying  reinforcement information.

Edward Thorndike

1) Used the discrete trial paradigm in instrumental conditioning.

2) Documented trial and error learning by cats that were trying to escape a box to receive  food.

3) Law of effect states that the consequence is not a part of the S-R model but just  modulates the strength of the association between stimulus and response.  We now  know that consequence is an important part of the S-R-C model and Thorndike was  incorrect in his theory.

Extinction Mimicry

1) The phenomenon observed when a dopamine antagonist was placed in the nucleus accumbens of rats after they were operantly conditioned to press a lever when a light comes on to receive food. These rats performed fewer and fewer lever presses with each trial after the antagonist was administered. They performed comparably to rats that were undergoing extinction training (the lever-press response is not followed by food).

2) Three hypotheses were proposed to try to explain dopamine's role in extinction mimicry: the anhedonia hypothesis, the incentive salience hypothesis, and the reward prediction hypothesis.

Fixed Interval Schedule

1) The organism has to wait a certain amount of time before a response is rewarded.

2) The first response after the allotted amount of time is rewarded.

3) Say you are baking a cake and you don’t have a clock to time how long it is in the oven.   So you check every so often until you see the cake has browned and risen.  At this  moment you can take out the cake but before you had to check the cake many times  before actually being rewarded with the final product.

Fixed ratio schedule

1) The organism has to perform the response a set number of times before being rewarded.

2) 2 responses: 1 reward, 3 responses: 1 reward, 4 responses: 1 reward…

Free Operant Paradigm

1) A form of instrumental conditioning, but it lets the animal control when it responds to the stimulus instead of the experimenter. Operant as in the animal is operating the apparatus itself.

2) A more efficient way to study S-R-C, developed by Skinner. The Skinner box was designed by Skinner to study operant conditioning.

3) A discrete stimulus is used to let the animal know when to make the response.

Habit Slip

1) A rat is trained on a maze (S) to run from one corner to the next (R) to receive food (C).   When food is placed in the middle of the maze the rat runs right past it to get to the  food it expects.

2) The S-R is so hard wired that it becomes a habit.

Incentive Salience Hypothesis

1) A hypothesis to explain the role of dopamine in the extinction mimicry observation.

2) The "wanting" of the food is damaged, so the rats don't push the lever as much.

3) Dopamine provides the incentive to work for reward.

4) You can think of the rats that have dopamine antagonized as couch potatoes because  they will eat more freely available food than controls but will not work for it as much as  controls.

5) Reverse of the Protestant-ethic effect.

Instrumental Conditioning

1) The consequence is dependent on the response the organism makes to the stimulus.

2) Thorndike's cats in a box.

3) Different from classical conditioning in that it requires the animal to make the response  in order to receive the consequence.  In classical conditioning the consequence (US)  happens whether the organism responds or not (CR).

4) Learning curve reverse from classical conditioning learning curve because time to make  response decreases with passing trials.

Law of Effect

1) Proposed by Thorndike

2) Satisfying consequences strengthen S-R

3) Unsatisfying consequences weaken S-R

Learned Helplessness

1) Previous exposure to classical conditioning (tone (CS) and shock (US)) inhibits learning of  S-R-C.

2) A dog is placed in a cage with a barrier that he can jump over.  Every time a tone is  sounded he is shocked even when he jumps over to the other side of the cage. This is  the classical conditioning stage.  When one side is now a safe zone where the dog will  not get shocked he will not move because he previously learned that he could not  escape the shock.

Negative Contrast

1) Babies were given regular water to suckle (control) or sugar water to suckle (experimental). The babies that received sugar water sucked more than the controls, but when presented with regular water they sucked far less than the controls.

2) This experiment shows that responding is modulated by reward.

Negative Punishment

1) Punishment means trying to decrease a response.  Negative means removing the  consequence.

2) Removing a consequence to decrease a response.  

3) When trying to figure out what terms like this mean, ask: are you trying to increase (reinforcement) the response or decrease (punishment) it? Then, are you adding a consequence (positive) or taking a consequence away (negative) to alter the response?

4) An example of this is a child is hitting other children on the playground so you take away  their recess time as a consequence.

5) A rat presses a lever and every time he presses it you do not give him food.

Negative Reinforcement

1) You are trying to increase the behavior by taking a consequence away.

2) A rat is being shocked, but when he presses a lever the shock stops.

Nucleus Accumbens

1) A part of the basal ganglia and is involved in S-R acquisition.

2) Inferior to the dorsal striatum of the basal ganglia.

3) Sends excitatory dopaminergic signals to the dorsal striatum. Receives excitatory dopaminergic afferents from the ventral tegmental area of the brainstem.

4) When a dopamine antagonist is placed in the NA, extinction mimicry happens because the discriminative stimulus does not trigger the response. The nucleus accumbens is important for the S-R-C association.

Orbitofrontal Cortex

1) Receives the highly processed sensory input from the basal ganglia via dopamine and  opioids.

2) Sends efferents to the basal ganglia to initiate response based on error prediction  (Actual US- Expected US).  The OFC contains the expected US or prediction based on  certain sensory signals and the basal ganglia communicates the actual US to the OFC.

3) Important for the R-C aspect of the S-R-C model. Alters the response the organism makes to the reinforcer by altering dopamine release in the dorsal striatum.

4) Determines if the consequence is reinforcing or punishing and strengthens or weakens the S-R association accordingly. Sends information about the response to the basal ganglia and motor cortex.

Positive punishment

1) You are trying to decrease the response by adding a consequence.

2) Spanking a child when they are bad.

3) Punishment is not as salient as reinforcement because the behavior will continue once  the punisher is out of sight.

Positive reinforcement

1) You are trying to increase a response by adding a consequence.

2) Giving a child ice cream when they make good grades.

3) Giving a rat food when they press a lever.

Post-reinforcement pause

1) Happens in fixed ratio and fixed interval reinforcement.

2) The organism pauses after reinforced for response.

Premack principle

1) Rats drink less water than they run on a wheel, but if you take away the wheel (preferred behavior) until they consume a certain amount of water (less preferred behavior), then you can increase the less preferred behavior by using the preferred behavior as a reinforcing consequence.

2) Taking away a child’s video games until they do all their homework.

Primary Reinforcers

1) Reinforcers that have intrinsic value like sex, food, water and sleep.

2) The problem with primary reinforcers is that they depend on the organism's state. If the organism is not hungry, then it will not be reinforced by food.

Protestant Ethic Effect

1) An organism will continue to work for a consequence even when that consequence is freely available.

2) Pigeon taught to peck on a button to receive food.  After many trials of the pigeon  pecking the button and receiving food the experimenter places freely available food in  the cage.  Regardless of the freely available food the pigeon continues to peck to receive  food.

3) Learned positive S-R association that inhibits other responses because of the  accumulated experience of pecking at the button and receiving positive reinforcement.


Punishment

1) The act of decreasing a behavior either by adding consequences (positive) or taking away consequences (negative).


Reinforcement

1) The act of increasing a behavior either by adding consequences (positive) or taking away consequences (negative).

Reinforcement Schedule

1) Fixed ratio, fixed interval, variable ratio, variable interval

2) Variable refers to an average period of responses (ratio) or time (interval) until  reinforcement.

3) Fixed refers to a set period of responses (ratio) or time (interval) until reinforcement.
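The four schedules can be summarized as simple decision rules. The sketch below is a hypothetical illustration (the function names and parameters are mine, not from the course notes): ratio schedules count responses, interval schedules count elapsed time, and "variable" replaces the fixed requirement with a value drawn around an average.

```python
import random

# Hypothetical decision rules for the four reinforcement schedules.
# Each function answers: is the current response reinforced?

def fixed_ratio(response_count, ratio):
    """Reinforce every `ratio`-th response (e.g. every 4th lever press)."""
    return response_count % ratio == 0

def variable_ratio(avg_ratio):
    """Reinforce with probability 1/avg_ratio: on average one reward per avg_ratio responses."""
    return random.random() < 1.0 / avg_ratio

def fixed_interval(elapsed, interval):
    """Reinforce the first response after a set amount of time has passed."""
    return elapsed >= interval

def variable_interval(elapsed, current_wait):
    """Reinforce the first response after a wait drawn around some average."""
    return elapsed >= current_wait

print(fixed_ratio(4, ratio=4))          # True: the 4th response is reinforced
print(fixed_ratio(3, ratio=4))          # False: the ratio requirement is not yet met
print(fixed_interval(35, interval=30))  # True: first response after the 30 s interval
```

The post-reinforcement pause described below falls out of the fixed rules: right after a reward, the organism is as far as possible from the next count or deadline, so responding briefly stops.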

Reinforcer devaluation

1) When an undesirable consequence is paired with a once-favored reinforcer.

2) You get sick after eating your favorite food and now you can't eat it anymore because of the undesirable consequence.

Reward Prediction Hypothesis

1) Dopamine in the VTA is involved in the expectation of forthcoming reinforcement.

2) If a monkey is given juice without prior expectation, you see neurons in the VTA firing after it receives the juice. If you pair a tone with the presentation of the juice, then you start to see firing in the VTA after the sound of the tone. The monkeys are predicting the arrival of juice. At this point the neural firing only changes if the juice is not presented after the tone. You will see an increase in firing after the tone but an inhibition of firing when no juice is presented.

3) This is very similar to the incentive salience hypothesis and both are considered valid  hypotheses for what dopamine communicates in the basal ganglia.

4) The “liking” aspect of instrumental conditioning is controlled by endogenous opioids in  the basal ganglia.  The dopaminergic system and the opioid system are two distinct  pathways but communicate to produce S-R.
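The monkey-and-juice pattern above can be summarized as a prediction-error signal: dopamine firing tracks the difference between the actual and the expected US.  A minimal sketch (illustrative, not from the course; the 0/1 values are assumptions standing in for "no juice"/"juice"):

```python
def prediction_error(actual, expected):
    """Dopamine firing tracks (actual US - expected US)."""
    return actual - expected

# Unexpected juice: no tone, so nothing was expected -> burst at juice delivery.
assert prediction_error(actual=1.0, expected=0.0) == 1.0

# Fully predicted juice: the tone raised expectation to 1 -> no change at juice.
assert prediction_error(actual=1.0, expected=1.0) == 0.0

# Tone but no juice: expectation unmet -> firing dips below baseline.
assert prediction_error(actual=0.0, expected=1.0) == -1.0
```

The three cases map directly onto the firing patterns described above: a burst for surprising reward, no change for predicted reward, and inhibition for omitted reward.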

Secondary Reinforcer

1) Does not have an intrinsic value like primary reinforcers.  Money, fame and status are  good examples.  

2) Better for consistent reinforcing capabilities in humans because it does not depend on  the organism’s homeostatic needs.  In other words, most humans always want  secondary reinforcers no matter their condition.


Shaping

1) Training that is required to teach an organism an S-R-C association.

2) You have to teach the rat to get closer and closer to the lever by rewarding him with  food every time he is near the lever.  Then when he finally presses the lever on accident  you give him food and reinforce only that behavior.  Next you only reinforce the times  when he steps on the lever after a light signal.

3) Demonstrates how instrumental conditioning takes way more time than classical  conditioning.

Skinner Box

1) Primarily used in operant conditioning.  

2) Allows for continuous trials to be performed at the will of the animal operating the  lever.


Superstition

1) Superstitions develop when a neutral stimulus is followed by a desirable consequence.

2) Every time a desirable consequence happens alongside the neutral stimulus, it reaffirms  the individual's belief in the association between the two things.

3) No actual correlational data to prove that the association is reliable.

4) Rain dance and batting routine are examples.

Variable-interval schedule

1) Average amount of time before an action is reinforced.

2) First response after average amount of time is reinforced.

3) You don’t see a post-reinforcement pause.

Variable-ratio schedule

1) Average amount of responses before reinforcement.

2) A rat has to press a bar an average number of times before being reinforced.

3) Gambling is an example as well, because you play a random number of times before  actually winning.

4) No post-reinforcement pause.

Ventral Tegmental Area

1) Stimulation of the VTA produces a “wanting” response.

2) Projects to the nucleus accumbens via dopamine.

Ch. 6 Notes 2/18/16

Similar Stimuli Predict Similar Results

An experiment with pigeons indicated that when trained to peck at a yellow light to receive  food, the pigeon will peck at similar colors expecting the same consequence.  The colors lie on a  continuum from most similar to least similar (called a generalization gradient), and the amount  of pecking drops as you move away from yellow.  This is an example of generalization of  knowledge that was favorable to the organism.

Two models were proposed to explain the generalization of similar stimuli:

1) Discrete component representation

2) Distributed representation

Discrete component representation

1) Each input is a stimulus that projects to a single response node.

2) Each input is its own node.

3) So each color has its own node in the pigeon experiment.

4) The response node stands for expected US because the pigeon responds based on its  perception.  Responding is indicated by 1.

5) The stimuli nodes stand for actual US because it is the objective color regardless of the  pigeon’s perception.  

6) The weight between each stimulus node and the response node is modified by (Actual  US- Expected US)

7) The problem with this model is that it only leaves room for one perfect response to a  single color (a 1-to-1 mapping).  There can be no generalization beyond the trained  stimulus.

8) This model does work well for a compound stimulus in a blocking paradigm, because the  stimuli have little similarity and one stimulus will elicit a response while the other will  not because of redundancy.
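The weight update in point 6 can be sketched in a few lines of Python (an illustrative sketch, not from the course; the color names, learning rate, and epoch count are made up).  Because each stimulus has its own node, training yellow leaves the weights for other colors untouched, which is exactly the no-generalization problem:

```python
def train_discrete(weights, stimulus, actual_us, lr=0.2, epochs=25):
    """Delta rule: the weight change is proportional to
    (actual US - expected US), and only the presented
    stimulus's single node is updated."""
    for _ in range(epochs):
        expected = weights[stimulus]            # one node per stimulus
        weights[stimulus] += lr * (actual_us - expected)
    return weights

w = {"yellow": 0.0, "orange": 0.0, "green": 0.0}
train_discrete(w, "yellow", actual_us=1.0)
print(round(w["yellow"], 2))  # ~1.0: strong response to the trained color
print(w["orange"])            # 0.0: no generalization to a similar color
```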

Distributed representation  

1) Still a single response node but the input sensory nodes overlap

2) Three-layer node representation

A) Sensory input layer

B) Internal representation layer- physiology of nervous system

C) Response output layer- one node

3) The weights between the input and internal representation layers are fixed.  The color  yellow activates multiple internal representation nodes, so each activated node in the  internal representation is set to 1.

4) The weight between internal representation nodes and the output node is modifiable by  (Actual outcome-expected outcome).  Weights must sum to 1 or 100.

5) A color close to yellow will elicit a response but not as strong as yellow because they  share only a couple of internal representation nodes.

6) This is a good model for generalization.
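The three-layer idea above can also be sketched in Python (illustrative only; the color names, overlap pattern, learning rate, and epoch count are assumptions).  Similar colors share internal nodes, so training yellow partially strengthens the response to orange but not to blue:

```python
# Fixed input -> internal weights: similar colors activate overlapping nodes.
features = {
    "yellow": {1, 2, 3},
    "orange": {2, 3, 4},   # shares nodes 2 and 3 with yellow
    "blue":   {5, 6, 7},   # shares nothing with yellow
}

def response(out_w, color):
    """Output node = sum of modifiable weights on the active internal nodes."""
    return sum(out_w.get(n, 0.0) for n in features[color])

def train(out_w, color, actual_us=1.0, lr=0.2, epochs=50):
    """Modifiable internal -> output weights, updated by (actual - expected)."""
    for _ in range(epochs):
        delta = actual_us - response(out_w, color)
        for n in features[color]:
            out_w[n] = out_w.get(n, 0.0) + lr * delta / len(features[color])
    return out_w

w = train({}, "yellow")
print(round(response(w, "yellow"), 2))  # ~1.0: full response to yellow
print(round(response(w, "orange"), 2))  # ~0.67: partial generalization
print(response(w, "blue"))              # 0.0: no shared internal nodes
```

The graded responses are the generalization gradient from the pigeon experiment: the more internal nodes a test color shares with the trained color, the stronger the response.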

Similar Stimuli Predict Different Consequences

1) Discrimination training is required to differentiate between similar stimuli.

2) Individual cues can be combined to represent a different stimulus, but a new CR must be  learned to respond to the compound stimulus.

3) XOR: the organism responds to the light alone or the tone alone, but not to both  together.
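The XOR problem above is why an internal (configural) layer matters: no single pair of per-cue weights can satisfy all four trial types, but adding a node that fires only for the compound solves it.  A minimal sketch (illustrative; the weight values and threshold are made-up numbers that happen to work):

```python
# XOR discrimination: respond to light alone or tone alone, not to both.
trials = {
    (1, 0): 1,  # light only -> respond
    (0, 1): 1,  # tone only  -> respond
    (1, 1): 0,  # compound   -> withhold
    (0, 0): 0,  # nothing    -> withhold
}

# With only per-cue weights, w_light >= theta and w_tone >= theta would force
# w_light + w_tone >= theta on compound trials, so XOR is unsolvable.
# A configural node active only for the compound carries a negative weight.
def respond(light, tone, w_light=1.0, w_tone=1.0, w_both=-2.0, theta=0.5):
    activation = light * w_light + tone * w_tone + (light and tone) * w_both
    return int(activation >= theta)

assert all(respond(l, t) == target for (l, t), target in trials.items())
```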

Possible Essay Questions (at least the ones that would be most difficult)

1) What are all the neural components to eye blink conditioning?  Explain the structure and function of the four main neural groups that are responsible for eye blink  conditioning.  How do they connect and communicate and where are they located?

2) What are the neural components of instrumental conditioning?  Explain the structure  and function of the four brain substrates.  How do they connect and communicate?   Name a behavior that would be produced by the interaction of these neural  mechanisms.

3) Describe the difference between distributed representation model and discrete  component representation model.  What were these models proposed to represent?   Which model actually represents the phenomenon?  What does the other model  represent?

4) What is the difference between reinforcement and punishment?  What are the  differences between the positive and negative versions of these consequences?

