by: Alfonso Grady PhD



About this Document

Jb Appel
Class Notes





This 133-page Class Notes document was uploaded by Alfonso Grady PhD on Monday, October 26, 2015. The Class Notes belong to PSYC 400 at the University of South Carolina - Columbia, taught by Jb Appel in Fall. Since its upload, it has received 22 views. For similar materials see /class/229640/psyc-400-university-of-south-carolina-columbia in Psychology at the University of South Carolina - Columbia.





Date Created: 10/26/15
Psyc 400: Survey of Learning & Memory, Section 001
MWF 1:25-2:15 pm, MM 214
Lecture Notes, Fall 2009

INTRODUCTION

A. ADMINISTRATIVE MATTERS

See Syllabus.

B. PROLOGUE: WHAT CONDITIONING AND LEARNING ARE ABOUT

The July 1999 issue of American Psychologist, the professional journal of the American Psychological Association (APA), begins with the following quotation:

"I don't know why I did it. But today I can recognize that events back then were part of a lifelong pattern in which thinking and doing have either come together or failed to come together. I think, I can reach a conclusion, I turn the conclusion into a decision, and then I discover that acting on the decision is something else entirely, and that doing so may proceed from the decision, but then again it may not. Often enough in my life I have done things I had decided not to do. Something, whatever it might be, goes into action; it goes to the woman I don't want to see anymore, it makes the remark to the boss that costs me my head, it keeps on smoking although I have decided to quit, and then quits smoking just when I've accepted the fact that I'm a smoker and always will be. I do not mean to say that thinking and reaching decisions have no influence on behavior. However, behavior does not merely enact whatever has already been thought through and decided. It has its own sources, and is my behavior, quite independently, just as my thoughts are my thoughts and my decisions are my decisions." (Schlink, B., 1997, p. 20)

While this statement introduces a series of four articles on intention, unconscious motivation, and volition (or will), it is also as good a justification for studying learning and memory directly as any I have seen in recent years. To understand how people reach decisions, or even how they think (cogitate), may or may not enable us to understand how they behave, when they change their behavior, or why they do what they do.

1. The behavior of organisms as a scientific datum

a. Experimental analysis

It is our belief (metatheory) that behavior, and more
particularly the relatively permanent changes in behavior we define as learning (below), can be analyzed functionally, scientifically, or experimentally. Minimally, this means that behavior, verbal or otherwise, must be:

(1) Observable, either directly or indirectly. However, there are many problems with limiting experimental analysis to events that are observable. Among these are:

i. The distinction between unobservable and observable is not fixed. What is unobservable today is not necessarily unobservable tomorrow, when, for example, technology advances.

ii. Including microbehavioral (physiological) events, which are not observable under all conditions, may sometimes increase the precision of a functional or experimental analysis. For example, measuring changes in autonomic nervous system (ANS) activity may enable us to predict subsequent emotional behavior better (i.e., more reliably) than simply exposing the organism to an external, presumably aversive, stimulus.

(2) Measurable. Things or events that cannot be measured in principle (dependent variables) cannot be related to other events in an organism's past or present environment (independent variables), and therefore cannot be the subject matter of science.

(3) Specifiable. Independent observers must be able to agree as to what is occurring in any situation that is being investigated.

b. Definitions and views of science

Defining science is surprisingly difficult. Most definitions involve method rather than content. Whatever else it might be, method boils down to the behavior of people calling themselves scientists. Minimally, this involves "a concern with things and events whose traits (structures, relations, actions) they [the scientists] are interested in knowing, explaining, predicting and controlling, plus the products of such activities (investigations) in the form of descriptions, theories and laws" (J. R. Kantor, 1963, The Scientific Evolution of Psychology, p. 4).

Definitions are important because they predict behavior, in this case the behavior of the individual using the definition. If
you call apples or oranges "fruits," you will probably eat them; if you call them balls, you are more likely to throw them.

(1) Naturalistic (methodological) views of science. Many, if not most, people hold that there is a fixed foundation to all human knowledge, and that it is the job of natural philosophers (i.e., scientists) to discover what this foundation is. In other words, something called "nature," natural law, or "reality" exists. Positivists, e.g., Auguste Comte (1798-1857), Ernst Mach (1838-1916), and their Empiricist predecessors such as Francis Bacon (1561-1626), argued that scientists should focus on what can be known with certainty through their own experience. That is, they should first observe events (collect facts) without prejudice or preconception and then, as facts accumulate, extract generalizations (scientific or descriptive laws) that apply, presumably without exception, to them. Once laws have been formulated, scientists move on to prediction; that is, if we know the laws of nature and the state of a physical system at time t, it should be possible to predict events that will occur at time t + 1. Although we commonly expect science to explain things, positivistic scientific explanation is nothing more than observation, description, generalization, and prediction.

Among the many problems with naturalistic views of science is that they tend to underestimate the complex relationship between theory, method, and data:

i. Since no theory can encompass everything, the particular theory an investigator uses in effect tells him or her what to look for.

ii. Crude fact gathering is never as powerful as theoretically guided research, because to the fact gatherer all facts are equally meaningful (or meaningless).

iii. Most importantly, naturalistic views of science ignore the influence of historical as well as socio-psychological variables on the theory-constructing behavior of the scientist.

(2) Socio-cultural (experiential) views. By contrast, socio-cultural views hold that doing (inventing) science is a human
enterprise, one which is profoundly influenced by history, culture, circumstances, and context (SSc). Thus all scientific theories and "laws" are more than formal axiomatic statements (for example, about the conditions under which the probability of a response increases); they incorporate hidden assumptions made by the scientist and his or her interpreters about the nature of reality (e.g., response strength increases only when reinforcement occurs). Such assumptions have been called the scientist's worldview, or Weltanschauung (Suppe, 1977). The best-known socio-cultural view of science is probably that of T. S. Kuhn (The Structure of Scientific Revolutions, 1970).

i. Normal (or working) science. To Kuhn (1922-1996), what a community of scientists does at any given period of time is called working or normal science. To understand the history of science, we must understand the nature of these communities and how historical forces influence them.

ii. Paradigms. The basic set of assumptions that provides the framework within which a given community of scientists works is called a paradigm. Paradigms have two components: the disciplinary matrix and shared exemplars.

The disciplinary matrix consists of a set of fundamental assumptions that are usually unstated, often unconscious, and typically not subject to empirical test. These assumptions provide the basis for specific hypotheses that are subjected to empirical tests. For example, consider psychological atomism, the assumption that all mental or behavioral events can be reduced to assemblages of simple sensations, images, or ideas that are said to be "held together" by the principles of association (below). This assumption is metaphysical and therefore untestable; however, once atomism is assumed to be true, the scientist can explore particular hypotheses about how specific complex acts are compounded out of simpler acts. (Is our perception of a house something more, or other, than the sum of our sensations of color, hardness, texture, etc.?)

Shared exemplars are models
that provide agreed-upon methods for investigation. For example, consider the standard operant conditioning situation: a hungry rat is placed in a chamber with a lever and food trough at one end. When the rat presses the lever accidentally, it gets rewarded with food. After several such experiences, the rat presses the lever regularly. To the operant psychologist, all learning occurs in much the same way: "correct" responses are acquired because they have been rewarded (followed by reinforcing consequences).

Sharing exemplars (or, for that matter, paradigms) has many interesting effects, the most important of which may be that they cause the "sharers" to have a response bias, that is, to see or interpret the world in a certain way. Indeed, all perception, whether scientific or otherwise, is at least in part a matter of interpretation or bias, as has been shown repeatedly in many experiments. Stated differently, what one "sees" depends on the paradigm one uses as well as the stimuli in the visual environment of the observer.

iii. Crises, anomalies, and revolutions. Kuhn believed that the paradigms of normal science, such as those of Ptolemy and later Copernicus and Newton, are so powerful that scientific changes occur only with great difficulty and are therefore relatively rare; indeed, they constitute scientific revolutions. What might be called the first stage of a scientific revolution is the occurrence of important, insoluble problems, which Kuhn called anomalies. Consider, for example, Brahe's observations that the planets do not travel either in circular orbits or at constant speeds. A large number of anomalies may eventually lead to a loss of confidence or faith in the existing (Copernican) paradigm. Loss of faith, in turn, results in insecurity, confusion, and crisis. However, anomalies do not by themselves cause so-called paradigm shifts or scientific revolutions. There must also be (1) a new, alternative paradigm and, perhaps more often than not, (2) a change in the zeitgeist, culture, or, in our
terms (below), context (SSc), such as what occurred in cosmology in the 17th century. Finally, after much struggle, a new paradigm will emerge and there is a revolution, in which some (usually young) scientists "accept" the new paradigm. Thereafter, the new paradigm defines "normal science," until new anomalies arise, and so it goes, ad infinitum.

While many philosophers and historians of science do not necessarily accept Kuhn's view that scientific change must always be revolutionary (towards the end of his life, Kuhn himself modified this position), it is generally believed that the importance of contextual (socio-cultural, psychological, etc.) variables can no longer be dismissed from the business of understanding science. As you will probably notice, the view taken in this course is primarily socio-cultural.

c. Learning. The process that we call learning is usually defined as a relatively permanent change in behavior that results from experience, and not from some other factor (e.g., fatigue, sensory adaptation, or maturation). For convenience, two aspects of learning can be distinguished: acquisition (habit, reaction potential) and maintenance (performance, reaction occurrence).

Learning is the result of many variables. These include, but are not limited to: (1) the genetic "potential" of the organism; (2) the nature of its anatomy, especially its brain; (3) events that may have occurred in the womb or prior to conception (e.g., the organism's "racial unconscious"); and (4) the ability to retain learned material, i.e., memory. However, this course will be concerned primarily with:

(1) How behavior is affected by environments, that is:

[Diagram: the boundary of the psychological event-field, enclosing setting factors (SSc), the responding organism, the stimulating object, and the contact medium.]

AND with:

(2) How different observers (scientists) interpret behavior-environment interactions, that is:

[Diagram: the boundary of the investigative event-field (time), enclosing the boundary of the psychological event-field with its setting factors (context), the primary responding observer, the stimulating organism-object, the contact medium, and setting factors (culture, history).]

The question of where such interpretations originate involves both stimulus and contextual variables, and leads us directly to:

C. HISTORICAL BACKGROUND

Four major influences on the Psychology of Learning will be considered: Mechanism/Materialism, Empiricism/Associationism, Functionalism (Darwinism), and Nativism/Rationalism.

1. Mechanism / Materialism

a. Rene Descartes (1596-1650)

(1) Place in history. Descartes is probably best known for:

i. His invention of analytic geometry (La Geometrie);

ii. His description of an introspective method (Discours de la methode) involving a quest for what can and cannot be known with certainty (doubt);

iii. The conclusion of this quest, the dictum Cogito ergo sum ("I think [or doubt], therefore I am"); and

iv. His insistence that certain ideas (e.g., God exists; time and space exist; the Pythagorean Theorem is correct) cannot be derived from experience but rather are known a priori (prior to, or before, experience). Thus they are innate, immortal, and eternal. For this reason, Descartes was a Nativist as well as a Rationalist.

However, here we are concerned primarily with the Cartesian dualistic model, which is sometimes called Psychophysiological (or Psychophysical) Interactionism.

(2) The Cartesian Model. Descartes argued that the soul or mind (l'ame, or res cogitans, "thinking thing"), which is unextended, free, and lacking in substance, can both influence bodily function and act independently. By contrast, the body is extended, limited, has substance, and can only act reflexively, in response to other factors.

Like Plato, Leibniz, and many others, Descartes divided the psychological world into a hierarchy of functions. At the bottom of this hierarchy is the reflex, which is involuntary, automatic, and mechanical; it involves nothing but material (body). Descartes thought that the reflex is mediated by nerves (which we now call sensory and motor) that worked hydraulically, much like the brakes of an automobile; that is, they are hollow tubes filled with "animal spirits."
When disturbed by a pressure (a mechanical stimulus), the spirits flow through the nerves to the brain and on to muscles. In describing the operation of the reflex, Descartes anticipated the notion of change in synaptic resistance, which is a mechanism still used to "explain" psychological phenomena such as learning:

"It is to be observed that the machine of our body is so constructed that all the changes which occur in the motion of the spirits may cause them to open certain pores in the brain rather than others, and reciprocally that when any of these pores is opened in the least degree more or less than is usual by the actions of nerves which serve the senses, this changes somewhat the motion of the spirits, and causes them to be conducted into the muscles which serve to move the body in the way in which it is commonly moved on account of such an action." (Passions of the Soul, 1629, pp. 172-173)

[Diagram: Descartes' Psychophysiological Interactionism. Sense organs feed sensory nerves along an involuntary (reflexive) pathway to motor nerves and muscle response; the mind (l'ame) intervenes along a voluntary (non-reflexive) pathway.]

However, in contrast to other animals, which are to Descartes mere machines, many if not most human actions are not reflexive. Rather, they are voluntary, because they involve some degree of volition (will), that is, the intervention of the mind, soul, or, in French, l'ame. Sensation and perception are relatively base functions that involve both body and mind, which interact (above). This interaction occurs through an unknown, and necessarily indescribable, mechanism "in" the pineal gland, which is a small substance in the center of the brain. Remembering consists of traces left in the brain and nerves by the previous flow of animal spirits. Thinking, reasoning, etc. are purely mental or cognitive acts that originate and continue to occur solely in the human mind.

The most important thing about Descartes, at least to this observer, is neither his advocacy of interactionistic dualism nor the problems this position inevitably
causes (how can unextended substance, l'ame, which by definition does not occupy space, affect extended substance, which does occupy space?), but his allowance that at least some human, and all animal, behavior is reflexive. Such behavior is not only automatic and mechanical but is, among other things, subject to observation and experimentation; this could, at least in principle, result in the discovery (or invention) of natural scientific laws.

b. Thomas Hobbes (1588-1679)

Hobbes agreed with Descartes that voluntary actions are caused by the mind, but in addition argued that they can be explained and are therefore lawful. His argument involved the principle of hedonism, that is, that people and other animals do things in order to obtain pleasure and avoid pain. (The similarity of this view to that of Thorndike, and later Skinner and other operant psychologists, should be obvious.) Thus, unlike Descartes, Hobbes believed that both reflexive and non-reflexive behaviors are knowable and governed by the laws of physics (mechanics).

(1) The mechanical model. Many of Hobbes' ideas (e.g., his concern with motion) were influenced by his acquaintance Galileo (1564-1642). That is, like the French materialist Julien de La Mettrie (1709-1751) some 75 years later, Hobbes viewed the body as a machine in which the heart is a spring and the nerves are strings, all of which are set in motion by God.

(2) Sensory psychology. For example, to Hobbes, sense is produced by the pressure of external bodies or objects, which arouses in the organism a sensation, that is, a counter-pressure, fancy, quality, or endeavor, that gives the appearance of an object. But let us let Hobbes speak for himself:

"Concerning the Thoughts of man, I will consider them first Singly, and afterwards in Trayne, or dependence upon one another. Singly, they are every one a Representation or Appearance of some quality, or other Accident of a body without us, which is commonly called an Object. Which Object worketh on the Eyes, Eares, and other parts of mans body, and by diversity of
working, produceth diversity of Appearances. The Original of them all is that which we call SENSE. For there is no conception in a mans mind, which hath not at first, totally or by parts, been begotten upon the organs of Sense. The rest are derived from that original. The cause of Sense is the Externall Body, or Object, which presseth the organ proper to each Sense, either immediately, as in the Taste and Touch, or mediately, as in Seeing, Hearing, and Smelling; which pressure, by the mediation of Nerves, and other strings and membranes of the body, continued inwards to the Brain and Heart, causeth there a resistance, or counter-pressure, or endeavour of the Heart to deliver itself; which endeavour, because Outward, seemeth to be some matter without. And this seeming, or fancy, is that which men call Sense; and consisteth, as to the Eye, in a Light or Colour figured; To the Eare, in a Sound; To the Nostrill, in an Odour; To the Tongue and Palat, in a Savour; And to the rest of the body, in Heat, Cold, Hardnesse, Softnesse, and such other qualities as we discern by Feeling. All which qualities, called Sensible, are in the object that causeth them but so many several motions of the matter, by which it presseth our organs diversely. Neither in us that are pressed are they anything else but diverse motions (for motion produceth nothing but motion). But their appearance to us is Fancy, the same waking that dreaming. And as pressing, rubbing, or striking the Eye makes us fancy a light, and pressing the Eare produceth a dinne, so do the bodies also we see or hear produce the same by their strong, though unobserved, actions. For if those Colours and Sounds were in the Bodies or Objects that cause them, they could not be severed from them, as by glasses, and in Echoes by reflection, we see they are; where we know the thing we see is in one place, the appearance in another. And though at some certain distance the real and very object seem invested with the fancy it begets in us, Yet still the object is one thing, the image or fancy is
another. So that Sense, in all cases, is nothing else but original fancy, caused (as I have said) by the pressure, that is, by the motion, of external things upon our Eyes, Ears, and other organs thereunto ordained." (Hobbes, Leviathan, pp. 3-4)

This is nothing less than the theory of the specific energies of nerves (later, brain), which was articulated in the 19th century by Johannes Müller (1801-1858). This theory asserts that we do not directly sense external objects or events (stimuli), but rather our own bodily (nervous) activities or "energies." Note that Hobbes, like Descartes, is a dualist in the sense that he argues there are two kinds of realities or qualities: physical or primary (external objects or bodies, the origins of sense) and secondary or psychological (appearances, the sensations, Representations, or Appearances). He was, however, much more of an Empiricist (below) than Descartes.

An interesting, more or less contemporary, example of this way of thinking, and some of its implications, comes from no less a philosopher of science than T. S. Kuhn (above):

"If two people stand at the same place and gaze in the same direction, we must, under pain of solipsism, conclude that they receive closely similar stimuli [Hobbes' external objects]. If both could put their eyes at the same place, the stimuli would be identical. But people do not see stimuli; our knowledge of them is highly theoretical and abstract. Instead they have sensations [Hobbes' appearances], and we are under no compulsion to suppose that the sensations of our two viewers are the same. On the contrary, much neural processing takes place between the receipt of a stimulus and the awareness of a sensation. Among the few things that we know about it with assurance are that very different stimuli can produce the same sensations, that the same stimulus can produce very different sensations, and, finally, that the route from stimulus to sensation is in part conditioned by education [i.e., learning]. Individuals raised in different societies behave on some
occasions as though they saw different things. If we were not tempted to identify stimuli one-to-one with sensations, we might recognize that they actually do so." (Kuhn, T. S., 1970, The Structure of Scientific Revolutions, 2nd edition. Chicago: The University of Chicago Press, pp. 192-193; italics JBA)

2. Empiricism / Associationism: The Newtonian Paradigm

The philosophical movements which came to be called Empiricism and, later, Associationism, as well as the formulation of the "laws" or principles of association, date back to Aristotle (384-322 B.C.) and Plato (427-347 B.C.).

a. Empiricism. The need for something like Associationism arose from the philosophical position known as Empiricism, which held that all knowledge comes from sensory experience, that is, the particular elemental sensations of particular individuals or "minds." In contrast to Nativists such as Descartes, Empiricists such as Hobbes and, more particularly, John Locke (1632-1704) argued that at birth the mind is empty, a "tabula rasa" (blank slate) upon which sensations or "ideas" are "written" or "impressed." There are no innate ideas which exist a priori, prior to or before experience (Descartes and, later, Kant, below).

b. Associationism. Aristotle was the first to define the "primary" laws of association: similarity, contrast, and, most importantly, contiguity in both time and space. (Parenthetically, Aristotle also distinguished between what he called recollection, or "active" recall, which resembles the more contemporary construct of long-term memory [LTM], and remembering, which is similar to "passive" recall [short-term or working memory, STM].) However, "modern" Associationism began in the 17th century, when it was realized that individual simple sensations must be unified or "held together" to form more complex perceptions (reflections, apperceptions, Hobbes' Traynes of Thought, above) and ultimately knowledge and consciousness or understanding (Locke). What does this "holding"? It is the
principles or laws of association, which are to the mental or psychological world what gravity is to the physical world: the force that pulls things together (or repels them, pushes them apart). Much later, Thomas Brown (1778-1820) described nine "secondary" laws of association (e.g., frequency, recency, exercise, etc.), which inspired the work of Ebbinghaus, Thorndike, and others. Even more importantly, John Stuart Mill (1806-1873) argued that compound ideas need not have the same qualities as the sensations that produce them. The new science of mind was to be more like chemistry than physics: 2H2 + O2 <-> 2H2O (gas + gas <-> liquid on earth at room temperature, etc.).

Empiricism/Associationism is reductionistic (above) and atomistic. It is also fixed and timeless. The model or paradigm upon which it is based is Newtonian physics.

3. Darwinism and evolutionary biology: The Functional Paradigm

a. History. The Darwinian, or more accurately functional, view of the world can also be traced to Aristotle, in his treatise De Anima. This does not mean that Aristotle was an evolutionist; indeed, like most if not all of the Greeks, he believed that species were fixed and immutable. However, De Anima was functional and biological in that it argued that the nature or essence of any object, being, or event cannot be understood until one knows its purpose or function as well as its structure. Stated otherwise, to know what something is, one must also know what that something does, or what it is for.

This view is perhaps best illustrated by Aristotle's idea of cause. To Aristotle, there are four causes of the existence of things, or of changes in the existence of things:

- The material cause: that out of which the thing comes to be and persists.
- The efficient cause: the work of bringing material to the shape or perfection of the object.
- The formal cause: definition or shape.
- The final cause: the plan, function, or purpose for which the thing is used.

Thus the material cause of a bowl might be the silver out of which it is made; the efficient cause is
the hammering of the silversmith; the formal cause is its shape (bowl-like rather than, say, flat); and the final cause is the function or use for which it is intended (holding wine or water).

Aristotelian biopsychology was abandoned not long after the Alexandrian conquests and the emergence of Roman hegemony, for a variety of historico-cultural reasons. When science reemerged in the 16th and 17th centuries, into a very different world, physical (structural or material) as well as spiritistic models dominated Western thought, including biology. This lasted until the second half of the 19th century when, among other things, Darwin's Origin of Species by Means of Natural Selection was published (1859). (It should be realized, however, that most of the relevant data in support of his theory were collected by Darwin during or prior to the voyage of the HMS Beagle, 1831-1836.) The important point is that in the environment which existed from the 5th through at least the 17th century, any kind of functionalism was impossible. Why this occurred is one of the most important questions in the history of Western culture, and it cannot be easily answered here.

b. Natural selection. In addition to its emphasis on the continuity between human and animal species and on the importance of changes over time, there are at least three principles of evolutionary biology that profoundly influenced the psychology of learning and memory: diversity (variation), natural selection (adaptation), and retention.

(1) Variation. Organisms differ from one another in structure and, as we now know, in the genes that direct the building of these structures. Similarly, single organisms vary in their behavior from one environment to the next, and in the states of the nervous system which regulate that behavior.

(2) Selection. Particular environments favor ("naturally select") some characteristics of organisms over others. In biological evolution, those organisms having the selected characteristics are more likely to survive and reproduce in that environment than
organisms that do not have these characteristics. Learned behavior is also selected, naturally, by the environment, probably by processes involving the contiguity or contingencies among stimuli (classical conditioning: S -> S) or between responses and their consequences (operant conditioning: R -> S) (see Skinner, 1981).

(3) Retention. Once a characteristic has been selected, it must persist, at least in a given environment, if it is to be available for further selection. Enduring changes in behavior (memory) form the basis of further learning. Their underlying mechanisms probably involve neuronal (cellular or sub-cellular) adaptations that are only now beginning to be understood (see Kandel, 2001; Brembs et al., 2002).

Whatever else they may be, both evolutionary biology and selectionistic psychology emphasize the importance of the environment, both inside and outside the skin, in shaping and modifying behavior.

4. The Nativist / Rationalist Tradition: The Cognitive Paradigm

Nativists such as Kant believe that the soul ("psyche") or mind selects, organizes, or "processes" experience or "raw sensation," and thereby imposes order on a putative, unknowable world of "noumena" to create "phenomena." To the Nativist, we see, hear, etc. in certain ways because of what we are and what we know at birth: how we are "hard wired," or how the nervous system is constructed. It is not that we learn nothing from experience, but that we do not learn everything from experience. This is because the Empiricist's elements of "mind" (sensations, which may have either external or internal "origins") are organized by so-called mental categories (see diagram below), by or in the acts of perceiving, apperceiving, and ultimately knowing. These categories, as well as the essentially similar aesthetic principles, are ways of organizing, thinking, or knowing that precede experience or, in more modern terms, are determined genetically. Thus no experience can teach us immortal and eternal Truths, such as that God
exists, or that all material "things" occupy space and time, etc.

[Diagram: Kant's epistemological machinery. The noumena (the thing-in-itself) give rise to sensation, which the phenomenal mind organizes through the aesthetic principles and the categories (quality, quantity, relation, modality), producing the phenomenal thing and the apperceptive mass.]

As one example of how a priori categories operate, consider Kant's "answer" to Hume's experiential account of causality. For Hume, "cause" is a mental habit inferred from the observation of the constant conjunction of events, that is, the contiguity of A and B in space, or the succession of A and B in time (whenever A occurs, B occurs). There is no necessary connection, that is, causal relation, between A and B in the world outside the mind of the observer. However, how do we know the meaning of concepts such as constant conjunction, contiguity, or succession? Surely not from experience. For the Nativist, these are necessary ways of structuring or organizing experience, and they are themselves known a priori, that is, before the occurrence of particular events that may occur contiguously or successively.

We will see that each of these traditions has influenced the psychology of learning. At the risk of oversimplification: recent work in classical conditioning and verbal learning is strongly reminiscent of the Mechanistic/Materialistic and Empiricist/Associationist traditions, while behavior analysis (operant conditioning) is more functionally oriented. Cognitive psychology appears to be both Associationistic (as in expectancy theory) and Nativistic (as in information processing).

SINGLE EVENT LEARNING: HABITUATION AND SENSITIZATION

A. INTRODUCTION

1. Relationships: S -> S, R -> S, and SSc

As we progress through this course, we will argue that often it is not the intrinsic properties of a stimulus that make it important or meaningful, but rather the relationship between the occurrence of that stimulus and other environmental events. That is:

a. A stimulus may be paired with, or predictive of, another stimulus, as in Pavlovian or
classical conditioning, thus creating an S → S relationship; or

b. It may follow a response, as in instrumental or operant conditioning, thus creating an R → S relationship.

Note: In much of this course, the symbol "→" will denote a conditional, contingent, or "if ... then" relationship; the symbol "+" will denote a conjunctive or "and" relationship. Thus, the notation (a + b) → c should be read "if a and b, then c."

These relationships will be "mapped" in some detail when we outline the classical and operant conditioning paradigms below. First, however, we must consider what happens when a stimulus is presented repeatedly to an organism without any observable or programmed relationship to other environmental events (non-associative learning), that is, when the stimulus has no known predictive value or consequence.

B. HABITUATION

Habituation is one of the most pervasive and important forms of learning. It occurs in species as simple as protozoa and plant-like hydra and as complex as humans. Consider only the effects of the "noise" of descending aircraft on a family that lives next to a busy airport.

1. Definition. Habituation refers to a learned or acquired decrement, diminution, or waning of a response to constant or repeated stimulation. It cannot be the result of fatigue (we may not respond readily to loud noise because we are tired) or of sensory adaptation or damage (we may not respond readily to noise if we are hearing impaired or deaf).

2. Distinguishing true habituation from motor fatigue. True habituation is stimulus-specific; that is, a response such as salivation (or a rating of pleasure) may habituate to a taste such as lemon but reappear when the taste is lime. Consider also:

a. Surprise: adding a new stimulus.

b. Stimulus change: changing the stimulus to which the response has been habituated. Consider, for example, "The Coolidge Effect": in this situation, changing one's mate keeps the loves of both chickens and people alive.

c. Other: change in context (SSc), such as moving the position of the mate in (b) above.

3. Distinguishing true habituation from sensory adaptation. Habituation is also response-specific. That is, you may turn your head towards the professor when he makes an announcement during a test, but this orienting response will soon habituate; however, other attentional responses, such as listening, may continue.

4. Several conditions maximize habituation:

a. Stimulus intensity. Perhaps surprisingly, habituation occurs more rapidly to intense stimuli than to weak stimuli (Davis & Wagner, 1968). However, it may also be true that the more people are made to suffer by exposing them to intense stressors, the more suffering they can endure.

b. Sequence of stimulus intensities. When organisms are exposed to a gradually increasing series of stimulus intensities, more habituation occurs than when they are exposed to a maximum-intensity stimulus from the beginning. The situation is analogous to that of a baby who has relatively little fear of the noise of a vacuum cleaner while her mother vacuums a distant room in the house (low-intensity stimulus), then near the baby's room (moderate-intensity stimulus), then finally around the baby's bed. Had her mother vacuumed the baby's room first, the baby would have been frightened from the beginning of the housecleaning operation; that is, habituation would not have occurred.

c. Interstimulus interval. Short inter-stimulus intervals maximize both the acquisition and extinction of habituation.

5. Complexities. While habituation is a relatively simple form of learning, it is by no means free of complexities:

a. Different responses habituate at different rates; the rate at which a given response habituates depends on "motivational" and other factors (see below).

b. Habituation of different responses (e.g., the startle response and maze running by rats) involves different underlying physiological mechanisms.

In addition, there are two kinds of habituation:

a. Short-term habituation occurs when stimuli are presented relatively rapidly (e.g., every two seconds). Recovery is usually rapid
and complete.

b. In long-term habituation, stimuli are presented at longer inter-trial intervals (e.g., every 16 seconds), and recovery is less rapid and less complete.

C. SENSITIZATION

1. Definitions. Sensitization refers to the augmentation of a pre-existing response by a stimulus. There are two kinds:

a. Incremental sensitization. Incremental sensitization refers to the enhancement of a response due to repeated stimulus presentation. Behaviorally, it is the opposite of habituation, although it probably involves a different and independent physiological (neuronal) mechanism (below).

b. Priming. Priming is said to occur when a habituated or extinguished (inhibited) response suddenly re-occurs following an unexpected, intense, or noxious stimulus. This phenomenon is synonymous with dishabituation and similar to disinhibition, an associative-learning phenomenon (below).

The amount of sensitization appears to be a direct function of stimulus intensity: the more intense the stimulus, the more sensitization occurs. In addition, sensitization declines as a function of the number of stimulus presentations; that is, sensitization habituates.

D. THEORIES OF HABITUATION AND SENSITIZATION

1. Dual-process theory. Groves and Thompson (1970) suggest that habituation reflects a decrease in responsivity of "innate reflexes," that is, a decreased tendency of a specific stimulus to elicit a specific response (an S → R process, called Process 1). Note that even the simplest, so-called involuntary reflexes, such as the myotatic (knee-jerk) reflex, can be modulated by other events because of the existence and operation of interneurons, which synapse with, and can therefore deliver information from, other parts of the nervous system or bodily organs.

[Diagram: the spinal reflex arc, showing sensory neurons, excitatory and inhibitory interneurons, and motor neurons controlling flexor and extensor muscles.]

On the other hand, sensitization, to Groves and Thompson, reflects an increased readiness to respond to all stimuli, a more general increase in activation or arousal in the central nervous system, that is, a State process (called Process 2). Repeated stimulation results in a decline in the ability of an S to evoke an R (habituation) but also leads to a general activating effect that increases responsivity (sensitization). The observed response on any trial is determined by these two competing (inferred) processes.

A considerable amount of evidence, largely involving the acoustic startle response, appears to support the Groves-Thompson theory. In one experiment, two groups of rats received a brief (90 ms), 110 dB tone. The tone occurred in a background noise of 60 dB in one group of animals and in an 80 dB noise in the other group. Over the course of 100 trials, the startle response habituated in the group exposed to the relatively quiet (60 dB) background and grew more intense (became sensitized) in the group exposed to the louder (80 dB) noise. The explanation given was that the nervous systems of the rats in the second group were more aroused than those of the first group (Davis, 1974).

2. Opponent-process theory. What is probably a more popular model is opponent-process theory (Solomon and Corbit, 1974).

a. a- and b-processes; A and B states. This theory contains four basic concepts:

1. Any stimulus that produces an immediate hedonic (emotional) effect (the a-process) also produces a later effect that is opposite in direction to the initial effect (the b-process).

2. The magnitude and duration of the a-process is fixed: it is determined by the particular stimulus being experienced. However, the b-process is dynamic: with repeated exposure to the stimulus, the b-process begins earlier, has greater magnitude, and lasts longer.

3. These changes in the b-process reverse themselves as time passes without exposure to the stimulus. Whether the b-process grows with repeated stimulation depends critically on the time interval between stimulations; if the stimulation is widely spaced, there is no change in the b-process.

4. The actual hedonic or emotional state experienced by an organism is determined by the difference in magnitude between the a- and b-processes at any given moment (a - b). If a > b, the result is an A-state, while if a < b, the result is a B-state.

[Diagram: the manifest affective response during and after stimulation ("On"/"Off"): an A-state during the stimulus, followed at stimulus offset by an opposite B-state (after-reaction) that decays over time.]

b. Some phenomena "explained" by opponent-process theory are:

1. Drug tolerance. Repeated exposure to certain events (e.g., injections of morphine or heroin), which are initially pleasant, causes these effects to lose some of their pleasure-enhancing qualities. According to opponent-process theory, this occurs because of the increase in strength of the opponent (B) state, since the amount of overall effect is the arithmetic difference of A and B (A - B).

2. Withdrawal and addiction. When drug administration ends, there is no A-state, but an intense B-state persists; this is known as withdrawal. The addictive process, according to Solomon and Corbit, may be, at least in part, a coping response to the aversive B-state, that is, an attempt to terminate (escape from) or prevent (avoid) withdrawal. However, it should be noted that:

i. Addiction does not always occur to potentially addictive drugs such as alcohol and morphine.

ii. Withdrawal symptoms and addiction do not necessarily accompany tolerance.

iii. Neither tolerance nor addiction explains the unconditioned, conditioned, discriminative, and reinforcing stimulus effects of many psychoactive substances (below).

E. NEURAL MECHANISMS

Note: Some knowledge of neuroscience, cell biology, and biochemistry is necessary to understand all of this section completely, so do not be too concerned if you are unable to do so. The material discussed on pages 29-33 is included for the benefit of those of you who plan to continue your studies in a graduate program in experimental psychology or a related neuroscience; it will NOT be covered on quizzes or exams.

Habituation and sensitization, as well as basic associative (Pavlovian conditioning, below) procedures, in primitive animals with limited behavioral repertoires have proven to be useful in increasing our understanding of the cellular basis
of more complex learning and memory processes in humans.

1. Synaptic plasticity in Aplysia.

a. A simple animal model. In 2000, Eric Kandel won the Nobel Prize in Physiology or Medicine for a lifetime of research he and his colleagues at Columbia University had been conducting on the marine mollusk (sea snail or slug) Aplysia californica. The reason Kandel chose to study this creature is informative: Aplysia has a relatively simple nervous system with large neurons, from which he could record both electrical and chemical activity; moreover, this system contains only about 20,000 neurons, whereas the human brain contains at least 100 billion such cells, a large percentage of which may receive as many as 10,000 inputs. (In addition, the brain contains more than 300 billion glial cells, which modulate neuronal activity in various ways.)

[Figure: Aplysia californica.]

b. Habituation and sensitization of gill withdrawal. What Kandel and his colleagues measured was withdrawal of the gill into the mantle (R) in response to a tactile stimulus (S): a touch of the siphon, the organ through which water passes, enabling the animal to breathe (absorb oxygen). This response habituates readily when repeatedly stimulated (Trials 1-13 in the figure below). However, if a single instance of another stimulus, such as a shock to the tail of the animal, is delivered along with the siphon touch (e.g., Trial 14), a large and rapid gill contraction (sensitization, or dishabituation) occurs.

[Figure: gill-withdrawal responses over time (s) on Trial 1, Trial 6, and Trial 13 (touch siphon; the response wanes) and on Trial 14 (shock tail and touch siphon; the response returns in full).]

This response enhancement, which lasts up to an hour, is called short-term sensitization (or "short-term memory"). If the animal is exposed to repeated pairings of the tail shock (US) and siphon stimuli (CS), the withdrawal response can be altered for days or weeks; this is known as long-term sensitization (or memory), which is an instance of learning, or associative (classical) conditioning (below).

[Figure: short-term sensitization (enhanced responding for about two hours after a single tail shock) versus long-term sensitization (enhanced responding for days after four trains of tail shocks per day for four days); no-shock controls show no change.]

c. Cellular mechanisms (cell signaling). The small number of neurons in the nervous system of Aplysia made it possible for Kandel and his colleagues to identify the neural circuits controlling the gill-withdrawal response and its plasticity (i.e., its ability to change as a function of experience). Four types of neurons are involved in the gill-withdrawal response: mechanosensory neurons that innervate the siphon, motor neurons that innervate muscles in the gill, interneurons that receive inputs from the siphon, and modulatory interneurons that receive inputs from other areas, such as the tail.

[Figure: the gill-withdrawal circuit: siphon stimulus → sensory neuron → interneuron and motor neuron → gill; tail shock → modulatory interneuron → sensory-neuron terminals.]

Touching the siphon activates the mechanosensory neurons, which form excitatory synapses that release the neurotransmitter glutamate onto both the interneurons and the motor neurons; this increases the probability that both of these postsynaptic targets will produce action potentials ("fire"). The interneurons also form excitatory synapses on motor neurons, further increasing the likelihood of the motor neuron firing in response to mechanical stimulation of the siphon. When the motor neurons are activated by the summed synaptic excitation of the sensory neurons and interneurons, they release another neurotransmitter, acetylcholine (ACh), which excites the muscle cells of the gill, producing gill withdrawal.

Synaptic activity in this circuit is modified during habituation and sensitization. During habituation, transmission at the glutamatergic synapse between sensory and motor neurons is decreased (synaptic depression); this is thought to be caused by a reduction in the number of synaptic vesicles available for release from the axon terminal of the presynaptic sensory neuron, resulting in a reduction in the amount of glutamate available to
excite both the interneuron and the motor neuron.

Sensitization modifies the function of the circuit by recruiting additional neurons; that is, the tail shock causes sensory neurons in the tail to fire. These neurons excite modulatory interneurons that release yet another neurotransmitter, serotonin (5-HT), onto the presynaptic terminals of the sensory neurons of the siphon. This enhances glutamate release from the siphon sensory-neuron terminals and leads to increased synaptic excitation of the motor neuron, an effect that lasts approximately one hour (the duration of short-term sensitization of the gill-withdrawal response).

[Figure: motor-neuron EPSP amplitude relative to control over 0-40 minutes following the tail shock.]

More specifically, the mechanism responsible for short-term sensitization is as follows. Serotonin released by the facilitatory interneurons binds to G-protein-coupled receptors on the cell membranes of presynaptic terminals of the siphon sensory neurons (Step 1 below); this stimulates the production of a so-called second (or intracellular) messenger, cyclic adenosine monophosphate (cAMP) (Step 2). cAMP binds to a regulatory subunit of an enzyme called protein kinase A (PKA) (Step 3), which phosphorylates several proteins. The phosphorylated proteins reduce the probability that K+ channels open and thus prolong the presynaptic action potential. This causes more presynaptic Ca2+ channels to open (Step 5) and increases the amount of transmitter (glutamate) released onto the motor neuron (Step 6).

In summary, short-term sensitization of the gill-withdrawal response is mediated by a signal-transduction cascade that involves neurotransmitters (serotonin, glutamate, ACh), intracellular messengers (cAMP, PKA), and ion channels (K+, Ca2+). This cascade ultimately enhances synaptic transmission between sensory and motor neurons within the gill-withdrawal circuit.

The same serotonin-induced enhancement of glutamate release that mediates short-term sensitization is also thought to control long-term sensitization; however, long-term sensitization lasts for at least several weeks. The prolonged duration is probably caused by changes in gene expression (transcription) and hence protein synthesis; recall the "dogma" of modern biology:

DNA → (transcription) → RNA → (translation) → Protein

With repeated conditioning trials (tail-shock presentations in the presence of the siphon touch), the serotonin-activated PKA involved in short-term sensitization now phosphorylates, and thereby activates, the transcription activator cAMP response element binding protein (CREB) in the cell nucleus. This increases the rate of transcription of RNA from DNA in downstream genes, resulting in the increased synthesis of ubiquitin and other (unknown) proteins, which ultimately causes changes in cell structure and function.

[Figure: the facilitatory (serotonergic) signaling cascade in the sensory-neuron terminal.]

2. Plasticity in the mammalian CNS. These studies of Aplysia, and related work on other invertebrates such as the fruit fly Drosophila melanogaster, have led to several generalizations about the neural mechanisms underlying learning, memory, and other forms of plasticity in the adult nervous system that presumably extend to mammals and other vertebrates:

a. Plasticity can arise from changes in the efficacy of synaptic transmission.

b. These changes can be either short-term effects that rely on modifications of existing synaptic proteins, or long-term effects that require changes in gene expression (new protein synthesis) and perhaps even the growth of new synapses or the elimination of existing ones.

Probably the best-known example of plasticity in the mammalian nervous system is long-term potentiation (LTP) of the excitability of granule cells in the dentate gyrus of the hippocampus, part of the limbic system of the forebrain. Behavioral data indicate that these so-called "place cells" are involved in the formation of memories based on spatial cues, at least in rats: lesions (damage) to this brain area interfere with learning new tasks (a failure of memory formation, or anterograde amnesia) but have no effect on the ability to perform tasks acquired
prior to injury (i.e., there is no retrograde amnesia).

Work on LTP, which began in the 1960's in Norway, demonstrated that a few seconds of high-frequency electrical stimulation (tetanic shock) of the hippocampus can enhance neurotransmission for days or even weeks. More recent studies have demonstrated the mechanism involved (Malinow et al., 1989).

[Figure: diagram of a section through the rodent hippocampus, which shows the major regions (dentate gyrus, CA3, CA1), excitatory pathways (perforant path, mossy fibers, Schaffer collaterals), and synaptic connections. LTP (potentiation) has been observed at each of the three connections shown.]

Although LTP occurs in regions other than the hippocampus (e.g., cortex, amygdala, and cerebellum), most of the work on this phenomenon has focused on the synaptic connections between the Schaffer collaterals and CA1 pyramidal cells (A below). Electrical stimulation of the collaterals generates excitatory postsynaptic potentials (EPSPs) in the postsynaptic CA1 cells. If stimulation occurs only two or three times per minute, the size of the evoked potential in the CA1 neurons remains constant (B). However, a brief high-frequency train of stimuli to the same axons (tetanus) causes LTP, which appears as a long-lasting increase in EPSP amplitude (C).

[Figure: (A) the Schaffer collateral → CA1 pathway; (B) EPSPs in pathway 1 and pathway 2 before and after a tetanus to pathway 1; (C) EPSP amplitude over time (min), showing a long-lasting increase after high-frequency stimulation.]

Despite the fact that LTP was discovered more than 30 years ago, its molecular mechanisms were not well understood until recently. During low-frequency stimulation, glutamate released by the Schaffer collaterals binds to both NMDA and AMPA glutamate receptors. If the postsynaptic neuron is at its normal resting membrane potential, the voltage-sensitive NMDA channels are blocked by Mg2+ ions and no current will flow (left diagram below). However, if the postsynaptic neuron is depolarized by high-frequency stimulation, Mg2+ is expelled from the NMDA channels, allowing Ca2+ to enter the neuron and cause heightened excitability, LTP (right diagram below).

[Figure: the NMDA receptor at resting potential (channel blocked by Mg2+) and during postsynaptic depolarization (block relieved; Ca2+ enters).]

The Ca2+ ions that enter the cell through the NMDA channel are also thought to activate protein kinases, which act postsynaptically to insert new AMPA receptors into the dendritic spine of the postsynaptic neuron, thereby increasing the sensitivity of that cell to glutamate.

III. EVENT-EVENT (S → S) LEARNING: CLASSICAL CONDITIONING

A. INTRODUCTION

1. "Associative" learning. Classical conditioning is generally considered to be the most basic kind of associative learning, that is, learning in accordance with the "laws of association." However, this does not mean that it is entirely Associationistic in the classic sense of the term (above).

2. Simplicity. According to writers such as Domjan, classical conditioning is "the simplest mechanism whereby organisms learn about relationships between stimuli and come to alter their behavior accordingly." (Italics added by JBA.)

3. Reductionism. Pavlov (1927), Bechterev (1928), and Watson (1916) believed that even the most complex behaviors could be "reduced" to concatenations of unconditioned and conditioned responses. Therefore, to these and other investigators, understanding the laws of conditioning was the basis of understanding all of psychology.

4. Procedure and Process. Classical conditioning can be defined procedurally (below) in terms of the association, pairing, or contingent presentation of two (or more) stimuli: a conditional (conditioned) stimulus (CS) and an unconditional (unconditioned) stimulus (US). However, the same term ("conditioning") also is used to refer to a mechanism, process, or, more accurately, inferred process: an increase (excitation) or decrease (inhibition) in response strength, latency, magnitude, or probability which results from, and is
presumably caused by, the pairing of the CS and the US.

It is important to understand the difference between a procedure and a process. We assert categorically, for example, that classical and instrumental (operant) conditioning are clearly different procedures, involving S → S and R → S contingencies, respectively. Whether or not they are different processes (involving different neuronal mechanisms), or are different "kinds" of learning, are questions we cannot yet answer.

5. "Psychic" secretions.

a. I. P. Pavlov. Ivan Petrovitch Pavlov won the Nobel Prize for studies of the physiology of digestive processes in 1904. It should be noted, parenthetically, that this achievement resulted from the analysis of simple functional relationships between two variables in individual organisms, usually dogs (or, as we say down heah, "dawgs").

b. Serendipity. In studying the functional relationship between the latency of salivation and the amount of meat powder placed into a particular dog's mouth, Pavlov noticed that dogs which were familiar with the experimental situation began to salivate before the food (US) was presented. Since this result was unexpected, to say the least, it was called "psychic," that is, mental or unknown.

c. Lawfulness. However, unlike most previous investigators of "psychic" phenomena, Pavlov decided to study the secretions he observed (and related reflexes) systematically and experimentally, and did so for the rest of his long life, perhaps because he considered them to be:

1. Objective (within the province of "natural" science)
2. Physiological
3. Cortical (i.e., a "higher mental process")
4. Adaptive

Fortunately for the future of scientific psychology, this study resulted in much of what we know of the laws of learning (i.e., acquisition, extinction, generalization, discrimination, inhibition, etc.). Perhaps less fortunately, Pavlov's particular physiological biases left contemporary psychology with a somewhat less than accurate view of what is involved in many if not most classical conditioning situations (see Rescorla, 1988).

d. Variety. While Pavlovian conditioning may have originally been concerned with a glandular response (salivation) in one species (dog), it is no longer confined to these responses or species. More frequently used preparations (experimental situations) studied in the United States include:

1. Eyeblink conditioning in rabbits and rats
2. Conditioned suppression or "fear" (the conditioned emotional response, or CER) in rats and other animals
3. Autoshaping (sometimes called sign tracking), or conditioned key pecking, in pigeons
4. Taste aversion in rats and other animals
5. Interoceptive conditioning (conditioned drug effects)

e. Importance. Correctly defined and understood, Pavlovian conditioning is ubiquitous; while its main importance probably lies in the modification of emotional states such as fear (below), it is by no means limited to this important area.

B. DEFINITIONS: THE CLASSICAL CONDITIONING PARADIGM

1. The unconditional (unconditioned) reflex. Because they are more meaningful, we prefer that the terms "conditional" and "unconditional" be used when discussing classical conditioning. However, "unconditioned" and "conditioned," which arose from a mistranslation of the Russian, have become traditional; for this not very good reason, the two nomenclatures will be used interchangeably herein.

a. Definitions.

1. The US and the UR. Certain stimuli (or, more accurately, classes of stimuli), when presented to an organism in a certain context (under some specifiable set of conditions, SSc), elicit (i.e., are followed reliably, or unconditionally, by) certain specific responses.
powder US gt Salivation UR Insulin injection US gt Hypoglycemia UR 2 The defense aversive re ex Bechterev Electric shock US gt Leg Flexion UR Comments The number of possible US s UR s and unconditional relationships is relatively small There is nothing in our de nitions which assert that unconditional responses or re exes are quotautomaticquot quotinnatequot involuntary or even quotre exivequot in the sense that the response re ects the stimulus Indeed such language has no place in the psychology of learning Recall Cartesian dualism Similarly we have said nothing about the neuronal mechanisms that supposedly mediate underlie or quotsubservequot the UR re exive behavior conditional conditioned re ex De nitions 1 The Conditional Stimulus CS Any quotneutralquot stimulus that is any stimulus that does not elicit either the UR in which case the stimulus would be a US or a response that is physically incompatible with the Learning 38 UR in which case conditioning would be impossible can become a conditional or conditioned stimulus CS 2 The orienting investigatory response Before and during early stages of conditioning below such a stimulus elicits orienting or investigatory responses r s which normally habituate above Such responses are excellent indicators of the experimental subject s attention and ability to detect the potential CS and thus its conditionability if that is a word 3 The Conditional Response CR A conditional conditioned response CR is de ned as any response that occurs following the CS as a function of contingent or conditional on exposure to a conditioning procedure in some context That is a CR is any response which 0 Does not to our knowledge occur in the presence of the CS prior to exposure to the conditioning procedure or experimental situation and 0 Does occur following such exposure 4 The Conditional Re ex The so called conditioned re ex is the functional relationship CS gt CR between the CR and exposure to the conditioning procedure 0 Note 1 It is 
important to note that these de nitions do not say anything about the relationship between the CR and the UR Empirically these responses are sometimes similar eg salivation and sometimes not similar eg leg exion fear or even may be opponent they are rarely if ever identical For example conditioned and unconditioned salivation can be differentiated both qualitatively chemically and quantitatively 0 Note 2 In addition these de nitions assert nothing about the need for contiguity between the CS and the US which may be relevant to but is neither necessary nor suf cient for conditioning to occur see Rescorla 1988 Learning 39 b Examples 1 Appetitive classical conditioning CS I r Metronome Orienting Ear Prick US gtUR Meat Powder Salivation 1 CS gt CR Metronome Salivation 2 SSc 2 Defense aversive classical conditioning CS gt r Tone Orienting Ear Prick US UR Shock FleXion Tone Fear SSc c Comments Although the particular US may limit the kind of CR that is likely to occur in a given situation the number of potential CS s CR s and conditional relationships is essentially in nite o In addition conditioning may be exteroceptive or interoceptive skeletal or autonomic and observable or unobservable to the naked eye indeed there are few responses that cannot be conditioned Learning 40 Classical conditioning involves 1 A CS 2 An US 3 A contingent conditional relationship between them if CS then US While the CR may be biologically adaptive there is no programmed contingency between the subject s behavior CR and the occurrence of the US sometimes erroneously called the reinforcer in classical as opposed to operant conditioning Perhaps a simpler way to state this is if CR then US if no CR then US d The Classical Conditioning Paradigm We may summarize what we have said thus far as follows SSC History The importance of context contextual or background cues SSc when discussing functional relationships or indeed anything else cannot be overemphasized Organisms are exposed to conditioning 
procedures under particular sets of conditions, or contexts, and generalizations based on such exposures must specify, or at least state, the conditions under which they supposedly hold true.

Finally, learning, by definition, involves the effects of past experience, which necessarily occur over time. Thus, stimulus → stimulus and stimulus → response relationships at any given moment must be considered in the context of historical conditions, that is, time. For this reason, the laws and theories of learning, as well as those of all of science, are limited to particular sets of circumstances, or to a particular Paradigm. Personally, we are more than willing to accept such a limitation. If you seek absolute, unlimited, immortal, eternal Truth, you must seek it elsewhere. In addition, we remind you, in the words of the late Lord Russell (Russell's Paradox!): "There are no absolutes."

C. BASIC CONDITIONING PHENOMENA

1. Excitatory classical conditioning. The process of conditioning takes place when:

- The CR occurs in the presence of the CS and not in the absence of the CS, and
- The CR occurs because of the association (or pairing) of the CS with a US; that is, the "CR" is not the result of other processes, such as sensitization (above) or pseudoconditioning (below).

This reminds us that the manner in which the CS and US are paired is an important determinant of how much conditioning (acquisition) occurs.

a. CS → US relationships. At least five (5) different temporal relationships can exist between the CS and the US:

[Diagram: classical conditioning procedures: the temporal arrangement of CS and US in short-delayed, trace, long-delayed, simultaneous, and backward conditioning.]
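The five arrangements differ only in the relative timing of CS and US onset and offset. As an illustration, the hypothetical helper below classifies a trial from its event times; the function name, structure, and the 20-sec "long delay" cutoff are assumptions for this sketch, following the rough definitions given in the text.

```python
# Hypothetical helper (an illustration, not from the notes): classify a
# classical-conditioning trial by the relative timing of CS and US.
# Times are in seconds. The 20-sec "long delay" cutoff follows the text's
# rough figure; as the text notes, the true boundary depends on the response.

def classify_procedure(cs_on, cs_off, us_on, long_delay=20.0):
    if us_on == cs_on:
        return "simultaneous"            # CS and US begin together
    if us_on < cs_on:
        return "backward"                # CS begins after the US
    if cs_off < us_on:
        return "trace"                   # CS ends before the US begins
    if us_on - cs_on > long_delay:
        return "long delayed"
    return "short delayed"               # CS still on when the US starts

print(classify_procedure(0.0, 1.0, 0.5))   # short delayed
print(classify_procedure(0.0, 1.0, 5.0))   # trace
```

Temporal conditioning (item 6 below) is omitted from the sketch because it has no external CS at all.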
will occur however the length of the CS gt US interval has different effects on different CR s Indeed the evidence suggests that there may be at least three quotfamiliesquot of conditional responses i The best conditioning of simple skeletal re exes such as the eyeblink seems to occur with a CS gt US interval of less than 05 sec Learning 43 ii Responses mediated by the autonomic nervous system such as the galvanic skin response GSR heart rate and conditioned suppression or quot fearquot seem to require a longer time between CS onset and occurrence of the US from 5 10 sec to 1 2 min iii When the US involves gastric distress as for example taste aversion the CS gt US interval may be as long as 24 hours However the best conditioning usually occurs at an interval of about 30 min In addition such conditioning may occur in as few as 1 or 2 trials 2 Trace conditioning Trace procedures are similar to delayed conditioning procedures in that CS onset occurs before US onset but differ in that the CS ends before the US begins the trace interval Relatively good acquisition also occurs with some trace conditioning procedures though not as much as under short delayed conditioning Other things being equal amount or rate of acquisition is inversely related to the trace interval the longer the interval the less the acquisition 3 Long Delayed Conditioning Intervals greater than about 20 sec are usually called long however the precise time depends on the response being conditioned above 4 Simultaneous conditioning In this procedure the onset of the CS and US occurs at the same time simultaneously Offset times vary in different experiments but are usually also simultaneous The most interesting thing about this procedure is that in spite of the contiguity between the CS and US little or no acquisition occurs A thought question why 5 Backward conditioning In this procedure CS onset occurs after US offset Learning 44 Although there is evidence to the contrary little or no acquisition 
usually occurs in this situation, which is often used as a (not very satisfactory) control for forward conditioning in delayed and trace procedures.

6. Temporal conditioning. In this procedure there is no external CS; the US is presented at regular intervals (e.g., every 30 sec). Considerable amounts of conditioning can occur under this procedure. After many trials of long delayed or temporal conditioning, the CR begins to occur just before the US occurs; such "anticipatory" responding is called temporal discrimination. In a sense, "time," or events associated with the passage of time (including the occurrence of the US), becomes the internal CS or SD which elicits, or more accurately sets the occasion for, the CR.

b. Contiguity. The fact that relatively short delayed and, to a lesser extent, trace procedures maximize conditioning (above) suggests that contiguity between events (stimuli) is an important determinant of amount of conditioning. However, the occurrence of phenomena such as taste (flavor) aversions and the blocking effect (below) casts serious doubt that such contiguity between the CS and US is necessary.

c. Contingency. We will see that what is necessary for conditioning to occur is that there be a differential contingency between the CS and the US (Rescorla, 1967). That is, the CS must be a differential predictor of, or provide information about, the US. Ideally:

  p(US|CS) = 1 and p(US|no CS) = 0, or
  p(US|CS) = 0 and p(US|no CS) = 1.

Of course, in the real world these relationships are rarely this simple, but if the CS provides absolutely no information about the US, that is, if p(US|CS) = p(US|no CS), conditioning does not occur. Consider the following situations:

  A: p(US|CS) = 1.00, p(US|no CS) = 0.00
  B: p(US|CS) = p(US|no CS) = 0.50

[Figure: timelines of CS and US occurrences under situations A and B]

While the numbers of CSs and USs are identical in A and B, considerable conditioning occurs in A but not in B. Why?

d. Locating the US in time. Another determinant of the information value of the CS, and hence of conditioning, is the extent to which the CS enables the organism to localize the US in time.
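The contingency idea above lends itself to simple arithmetic. Below is a minimal sketch (the trial counts and the `delta_p` helper are hypothetical, chosen only to reproduce situations A and B) computing the difference p(US|CS) − p(US|no CS), the quantity that conditioning appears to track rather than the raw number of pairings:

```python
# Contingency between a CS and US from simple (hypothetical) counts.
# Conditioning tracks p(US|CS) - p(US|no CS), not the number of pairings.

def delta_p(us_with_cs, cs_trials, us_without_cs, no_cs_trials):
    """Return p(US|CS) - p(US|no CS) from trial counts."""
    return us_with_cs / cs_trials - us_without_cs / no_cs_trials

# Situation A: the US occurs on every CS trial and never otherwise.
a = delta_p(us_with_cs=20, cs_trials=20, us_without_cs=0, no_cs_trials=20)

# Situation B: the same numbers of CSs and USs, but the US is as likely
# without the CS as with it -- the CS carries no information.
b = delta_p(us_with_cs=10, cs_trials=20, us_without_cs=10, no_cs_trials=20)

print(a)  # 1.0 -> strong excitatory conditioning expected
print(b)  # 0.0 -> little or no conditioning expected
```

The same arithmetic, with a negative result, describes inhibitory conditioning: a CS whose presence makes the US *less* likely than its absence.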
That is, as we have seen in the case of delay and trace conditioning (above), the longer the CS (or CS → US interval), the less the conditioning. Interestingly, it is not the absolute duration of the CS or CS → US interval that is important, but the ratio of the time the CS is on to the time it is off (the inter-trial or time-out period). What this means is that it is the ability of the CS, relative to other cues (e.g., SSc), to provide information as to when the US is going to occur that is critical for conditioning.

e. Stimulus intensity and salience.

1. US intensity. While the nature of the US (e.g., shocks vs. air puffs) may sometimes be the most important determinant of both the amount and the rate of conditioning, CR strength is usually a direct function of US intensity, at least up to a point.

2. CS intensity. When an organism experiences only one CS in a particular experimental situation, the strength of the CS does not appear to have much effect on amount of conditioning; that is, an intense CS does not usually produce an appreciably greater CR than a weak CS. However, if both a weak and a strong CS are experienced in the same or similar situation, the strong CS will produce a significantly greater CR than the weak CS.

3. CS "salience." Both humans and other animals appear to have an evolutionary, species-specific predisposition, or preparedness, to associate particular conditional stimuli with specific unconditional stimuli. In addition, certain stimuli are difficult if not impossible to condition to particular unconditional stimuli, a phenomenon known as counterpreparedness. For example, in taste aversion, proximal or internal stimuli such as odors or saccharin-induced sweetness are relatively easy to condition to drug- or X-ray-induced sickness (interoceptive conditioning) but are difficult to condition to shock in leg flexion experiments (exteroceptive conditioning). On the other hand, distal (external) stimuli such as lights or tones are easy to condition to shock but are difficult to condition to sickness. The likelihood that
a particular stimulus will be able to elicit a CR after it has been paired with a US defines the salience of the CS. Thus odors are salient stimuli in taste aversion, and lights or tones are salient stimuli in exteroceptive conditioning situations.

2. Compound conditioning. In compound conditioning, two or more stimuli (CSs) are paired with the same US. In such situations, the question arises as to which CS is associated with the US.

a. Overshadowing. Usually the more salient or intense stimulus overshadows the less salient or weaker CS; that is, more CR develops to the more salient cue.

b. The blocking effect. Studies by Kamin and his associates (1968) demonstrate that the presence of a predictive cue (CS1) will prevent, or block, the development of an association between a second cue (CS2) and the same US. Thus, for conditioning to occur, a CS must not only predict the US but also provide information not signaled by other cues. More importantly, the contiguity of a CS and a US does not guarantee that conditioning will occur. Consider the following situation:

  Phase 1: CS1 → US
  Phase 2: CS1 + CS2 → US
  Phase 3: CS2 alone (test)

How much conditioning is observed during Phase 3? Why?

c. Higher (second-order) conditioning. It is possible to use a well-established CS to condition a new CS. Usually this can occur through two (second-order) or possibly three (third-order) steps:

  Phase I:
    CS1 (tone) → orienting ("ear prick")
    US (shock) → UR (leg flexion)
    result: CS1 → CR1 (tone → fear)

  Phase II:
    CS2 (light) → orienting ("ear prick")
    CS1 → CR1 (tone → fear)
    result: CS2 → CR2 (light → fear)

Among other things, higher-order conditioning demonstrates that it is possible to condition a response to a particular CS without directly pairing that CS with a US.

d. Sensory preconditioning. In a related procedure, sensory preconditioning, CSs are paired with each other before being associated with a known US.

3. Inhibitory conditioning. In inhibitory conditioning, a CS (CS−) comes to predict the absence of the US and eventually leads to a decrease in CR strength; such a stimulus is as informative as an
excitatory CS. For inhibition to occur, excitation must also occur; that is, some CS (CS+) must predict a US, either at the same time as or prior to inhibition, so that the CR can be inhibited by the CS−. Thus, inhibitory conditioning is necessarily more complex than excitatory conditioning.

a. Acquisition. As shown below, inhibitory conditioning can be established in at least three ways:

- Standard procedure: CS+ → US; CS+ presented together with CS− → no US.
- Differential inhibition (discrimination): CS+ → US; CS− → no US.
- Negative CS–US contingency: the US occurs only in the absence of the CS−.

b. Latent inhibition. Latent inhibition, which is also called the CS preexposure effect, occurs when a potential CS is presented repeatedly, without being followed by any known US, before conditioning. (Some thought questions: What happens during this procedure, and why?)

c. Extinction and inhibition.

1. Definitions. Like "conditioning," the term "extinction" refers to both a procedure and a theoretical process.

i. The extinction procedure. In classical conditioning, or more accurately following classical conditioning, extinction refers to the repeated presentation of a CS without its being followed by any known US. That is: CS → no US.

ii. The extinction process. The extinction process refers to the results (effects) of the extinction procedure (above). This is usually, but not always, a decrease in the strength of the CR.

2. Internal inhibition. Extinction is sometimes said to be, or to be the result of, internal inhibition (Pavlov, 1927). That is, what is presumed to happen during the extinction procedure is that the CS becomes associated with (Pavlov), or comes to predict (Rescorla), the absence of the US (no US).

3. External inhibition and disinhibition. Any unusual, distracting, novel, or unexpected stimulus (change in context, SSc) that may not be paired with either the CS or US may produce a decrease, or decrement, in the CR during conditioning (external inhibition) or an increase in the CR during extinction. Such inhibition of (internal) inhibition is called disinhibition. Consider the
following:

[Figure: External Inhibition and Disinhibition — percent CR across trials during conditioning and extinction, showing a decrement in the CR when a novel stimulus occurs during conditioning and an increment when one occurs during extinction]

4. Spontaneous recovery. If a subject is removed from the apparatus for a period of time during an extinction procedure, the magnitude of the CR may increase when it is subsequently returned to the experimental situation. This is called spontaneous recovery, which can be viewed as a type of disinhibition in which the time during which the animal is removed from the experimental apparatus (SSc) acts as the disinhibiting stimulus.

[Figure: Spontaneous Recovery — percent CR across trials on Days 1, 2, and 3, with the CR partially recovering at the start of each day]

INTEROCEPTIVE CONDITIONING

1. Variety of conditioned responses. Many different kinds of responses can be, and have been, conditioned. These include, but are not limited to, responses that are mediated primarily by the peripheral autonomic nervous system (so-called "involuntary" or "automatic" responses) and those that are mediated centrally (skeleto-muscular responses). Among other things, this means that different conditioning "processes" (e.g., classical and instrumental) cannot be differentiated by the neural mechanisms that subserve them (involving, for example, the peripheral (PNS) vs. the central nervous system (CNS)), any more than they can be said to be involuntary or voluntary.

a. Conditioned "autonomic" responses include: GSR, heart rate, emotional responses (CER), pupillary changes, interoceptive changes, allergic responses.

b. Conditioned "skeletal" responses include: sucking, eyeblink, jaw movements, leg flexion, verbal responses.

In addition, as we have noted, CRs may be either interoceptive (autonomic, emotional, etc.) or exteroceptive (skeletal). Since we have already considered exteroceptive paradigms in some detail, we will discuss one interesting example of interoceptive conditioning.

2. Conditioned drug effects. Among the stimulus properties of many psychoactive drugs is their ability to function as unconditional and conditional stimuli in classical conditioning paradigms, as well as
contextual and motivational cues (or "states") in both classical and operant conditioning paradigms, and as discriminative and reinforcing stimuli in operant conditioning paradigms.

a. Drugs as Unconditional Stimuli. Cameron and Appel (1972) used a conditioned suppression procedure similar to that of Estes and Skinner (1941), as well as Kamin and Rescorla (above). Rats were trained to press a bar for water under a VI 30-sec schedule of reinforcement (below) in the presence of either a red or a white light (red is normally not visible to rats). The white light was then followed (CS → US interval = 3 min) by an intraperitoneal (ip) injection of an inactive substance (NaCl; habituation sessions). The inactive substance was then replaced by an active drug, either chlorpromazine (8–10 mg/kg) or LSD (0.2 mg/kg). What occurred can be described as follows:

  CS (white light) → orienting ("ear prick")
  US (LSD) → UR ("hallucination")
  result: CS → CR (light → "hallucination"), in context SSc

Subsequent experimental work has demonstrated that this sort of conditioning (1) is specific to certain centrally acting drugs, (2) generalizes to other values of the CS, and (3) varies with the intensity (dose) of the US.

Similar, and better known, work by S. Siegel (1976) is particularly interesting because of its relevance to drug tolerance and withdrawal (above). Consider the following situation:

  CS (injection environment) → orienting ("ear prick")
  US (morphine) → UR (analgesia)
  result: CS → CR (injection environment → hyperalgesia), in context SSc

The CR is hyperalgesia (increased sensitivity to pain). What happens over trials? Recall Opponent-Process Theory (above): if a is the US (morphine), A is the UR (analgesia: decreased sensitivity to pain). The opponent (B) response, hyperalgesia (increased sensitivity to pain), which occurs to the CS (b, the injection environment), increases over trials (days). This provides a behavioral explanation for tolerance, which is the sum of the UR (analgesia) and the CR (hyperalgesia).
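The arithmetic behind this account can be illustrated with toy numbers. Everything below is a hypothetical illustration, not data; the growth rule for the CR is a generic negatively accelerated update, and the constants are arbitrary:

```python
# Toy illustration of the conditioning account of tolerance (arbitrary units):
# net drug effect = UR (analgesia, constant) + CR (compensatory hyperalgesia,
# which grows across conditioning trials and so shrinks the observed effect).

UR = 10.0                 # unconditional analgesic effect of the drug
K, ASYMPTOTE = 0.3, 8.0   # rate and ceiling for growth of the opponent CR

cr = 0.0
for trial in range(1, 6):
    cr += K * (ASYMPTOTE - cr)   # negatively accelerated growth of the CR
    net_effect = UR - cr         # hyperalgesia subtracts from analgesia
    print(trial, round(net_effect, 2))
# The observed (net) drug effect shrinks trial by trial -- "tolerance" --
# even though the UR itself never changes.
```

On this account, testing the drug outside the usual injection environment (no CS, hence no compensatory CR) should restore much of the original drug effect, which is one of the findings that made Siegel's analysis influential.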
It should be noted that Siegel and his colleagues have modified this conditioning analysis of tolerance in response to criticisms by several authors, including Dworkin (1993) and Eikelboom & Stewart (1982):

"It is now apparent the initial application of the Pavlovian conditioning paradigm to drug administration was somewhat superficial. The UR to a pharmacological stimulus, in common with reflex responses to other stimuli, consists of responses generated by the central nervous system (CNS). The drug effect that initiates these CNS-mediated responses is the US, not the UR. For many effects of drugs, the UR consists of responses that compensate for drug-induced perturbations. These unconditionally elicited compensatory responses are responsible for acute tolerance. After some pairings with the predrug CS and pharmacological US, drug compensatory responses can be elicited by pre-drug cues. It has been speculated that such conditional compensatory responses (CCRs) mediate the development of tolerance by counteracting the drug effect." (Siegel et al., 2000, p. 277)

  CS (injection environment) → orienting ("ear prick")
  US (drug-induced perturbation) → UR (compensatory hyperalgesia)
  result: CS → CR (injection environment → hyperalgesia), in context SSc

In other words, the UR and the CR are in fact similar, or at least in the same direction; as Pavlov originally argued, they are NOT opposing. In this regard, Dworkin (1993) notes:

"Conditioned drug responses, when adequately isolated, dissected, and understood, exemplify in an uncomplicated way the phenomenon first described by Pavlov. The conditioned reflex resembles the unconditioned reflex, and as it develops it augments the effect of the unconditioned reflex." (Dworkin, 1993, p. 38)

b. Drugs as Conditional Stimuli. d-Amphetamine is a central nervous system "stimulant" which normally (unconditionally) increases locomotor activity and the rate of at least some schedule-controlled behaviors. Turner and Altshuler (1976), and more recently Overton, demonstrated that Pavlovian conditioning could reverse this effect. Rats were trained to press a bar for food under a VI 1 schedule. Training sessions were then suspended, and animals were given either:

1. Intraperitoneal injections of d-amphetamine (0.8 mg/kg; CS)
followed 15 min later by inescapable shock (2.0 mA, 0.5 sec in duration, with a 45-sec inter-shock interval; US), or

2. Unpaired amphetamine injections and shock.

All animals were then given amphetamine during additional VI sessions. In the paired, but not the unpaired (control), group, amphetamine unaccompanied by shock significantly suppressed responding:

  CS (amphetamine) → hyperactivity
  US (shock) → UR (fear)
  result: CS → CR (amphetamine → fear), in context SSc

Note that the conditioned "fear" (or whatever caused suppression) was sufficient to overcome the unconditioned stimulating effects of the drug. A thought question: What are the implications of this experiment for the "treatment" of drug-taking behavior, and perhaps drug dependence (addiction)?

THEORIES OF PAVLOVIAN CONDITIONING

1. Introduction. Lack of time prevents a detailed discussion of general theories of learning. However, we will summarize a few current theories which specifically address classical conditioning. None of them is free of difficulties; or, stated otherwise, each rests on tenuous empirical foundations. Conditioning theories can be divided (not so conveniently) into those involving either S → S or, perhaps surprisingly, R → S associations.

2. S → S Theories.

a. Stimulus substitution. One example of an S → S theory is called stimulus substitution. Pavlov believed that a neural trace or representation of a US becomes attached, through spatially contiguous association in the cortex, to a representation of a CS. Thus the CS becomes a substitute for the US in the elicitation of the UR. Tolman, Guthrie, Hilgard & Marquis, and Hull all have voiced the most important criticism of this theory (which was supported by Bechterev and Watson, among others): substitution implies that the CR and UR must be very similar or identical. As we have seen, this is not necessarily the case.

b. Sometimes Opponent Process (SOP) response theory. While the CR and UR might sometimes be similar, they also might be, or appear to be, different, or even opposite in direction. For
example, shock produces tachycardia (UR), while a CS paired with shock produces bradycardia (CR). In an attempt to explain these phenomena, Wagner (1981) and his colleagues developed SOP, an extension of Solomon and Corbit's opponent-process theory (above).

SOP theory holds that the US elicits two URs, A1 and A2. The A1 response is elicited rapidly by the US and decays quickly after the US ends; both the onset and decay of the A2 response are gradual. The A2 response may be the same as, or different from, the A1 response. Conditioning occurs only to the A2 response; that is, the CR is always the same as the A2 response. The UR and the CR will appear to be the same only when A1 and A2 are the same.

Suppose a rat is exposed to a brief electric shock (US), as is usually the case in conditioned suppression (CER) experiments. The initial response to shock (A1) is agitated hyperactivity, which is followed by a longer-lasting hypoactivity, or "freezing" (A2). It is the freezing response (CR) that appears to be conditioned to the stimulus that is paired with the shock.

c. The Rescorla–Wagner associative model. We have seen that conditioning occurs to any given CS only to the extent that that CS is a good predictor of the US. Stated more cautiously, conditioning occurs only to the extent that the CS is the best available predictor of the US in a particular context or background (SSc). The Rescorla–Wagner (1972) model, which was developed to explain these and related phenomena, has four main ideas:

1. There is a maximum, or asymptotic, associative strength (λ) that can develop between a CS and a US. The US determines this asymptote; that is, different USs support different maximal levels of conditioning.

2. While the associative strength increases with each conditioning trial (ΔVn), the level of prior training (Vn−1) affects the amount by which it increases on a particular training trial. That is, acquisition is not uniform but is "negatively accelerated." In other words, the amount we learn is inversely proportional
to the amount we already know; stated somewhat differently, the more we know, the less we learn.

[Figure: The Rescorla–Wagner Model — conditioning strength over blocks of trials, a negatively accelerated curve approaching the asymptote]

3. Rate of conditioning depends on the particular CS and US used. Thus some stimuli acquire associative strength more quickly than others.

4. The amount of conditioning on a given trial is also influenced by the level of previous conditioning to other stimuli associated with the US.

Mathematically, the Rescorla–Wagner model states the following:

  ΔVn = K(λ − Vn−1)

V is the associative strength between the conditioned stimulus and the US; ΔVn is the change in associative strength that develops between trial n−1 and trial n, when the CS and the US are paired; K is a reflection of the salience of the CS used in a particular experiment; and λ is the asymptotic level of conditioning supported by a particular US.

While a considerable number of studies have supported this model, many others, involving CS preexposure, cue predictiveness, and cue deflation, do not. Nevertheless, the model has been influential in that it has stimulated a considerable amount of research.

3. R → S Theories.

a. "Superstitious" reinforcement. According to this theory, any response that happens to occur in the presence of the CS will be followed by a "reinforcer," or more accurately the US, such as food or shock termination. Thus the response is acquired because of fortuitous (accidental) operant conditioning. The situation is similar to one described by Skinner (1948), who found that presenting a grain magazine periodically to deprived pigeons would maintain, temporarily, any response that happened to occur immediately prior to grain delivery.

b. Adaptation or preparedness theory. In this theory, responses that occur in the presence of a CS are conditioned only if they somehow "prepare" the subject for the US.
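Returning for a moment to the Rescorla–Wagner rule: extended so that all CSs present on a trial share the same prediction error (idea 4 above), it is easy to simulate. The values of K and λ below are arbitrary illustrative choices, and `rw_trial` is a hypothetical helper name; this is a sketch of the model, not of any particular experiment:

```python
# Rescorla-Wagner rule: on each reinforced trial,
#   dV = K * (lam - V_total), applied to every CS present,
# where V_total is the summed strength of all CSs on that trial.

def rw_trial(v, present, K=0.2, lam=1.0):
    """One CS-US pairing: update each present CS by K * (lam - total V)."""
    total = sum(v[cs] for cs in present)
    for cs in present:
        v[cs] += K * (lam - total)

# Simple acquisition is negatively accelerated: each increment is smaller.
v = {"tone": 0.0}
increments = []
for _ in range(10):
    before = v["tone"]
    rw_trial(v, ["tone"])
    increments.append(v["tone"] - before)
# increments[0] = 0.2, and each later increment is 0.8x the previous one.

# Blocking (Kamin): pretrain the tone, then pair tone + light with the US.
v = {"tone": 0.0, "light": 0.0}
for _ in range(30):
    rw_trial(v, ["tone"])            # Phase 1: tone -> US
for _ in range(30):
    rw_trial(v, ["tone", "light"])   # Phase 2: tone + light -> US
# The tone already predicts the US, so the error term is near zero and the
# light gains almost no strength: v["light"] stays close to 0 (blocking).
```

Note how both the negatively accelerated acquisition curve and the blocking effect fall out of the single error-correction equation, which is the model's main appeal.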
Thus, salivation is conditioned because it prepares the stomach for food delivery (or reduces the aversiveness of an acid US); leg flexion reduces the aversiveness of shock; eyeblinks avoid the unpleasant effects of an air blast; etc. In response to such an hypothesis, one might ask the following: if only adaptive responses are conditioned, are such responses in fact conditioned classically?

APPLICATIONS OF CLASSICAL CONDITIONING

The fact that many different kinds of responses can be, and have been, conditioned suggests that classical conditioning might have many applications beyond the analysis of why doggies drool when they see food, or become frightened when they are exposed to a stimulus that signals shock delivery. The principal, but by no means only, application of classical conditioning to human affairs concerns the acquisition and treatment of what can become pathological affective or emotional states, such as Post-Traumatic Stress Disorder (PTSD), and related behavioral problems such as phobias.

1. The conditioning of affective states.

a. Experimental Neurosis. A phenomenon which came to be called experimental neurosis was first reported by N. Shenger-Krestovnikova, a student of Pavlov who was studying the limits of discrimination learning (differential conditioning). A dog was trained on a Pavlovian procedure in which a circle (CS+) was followed by food and an ellipse (CS−) was not followed by food. The ellipse was then changed such that its appearance became increasingly circular. When the difference between the CS+ and the CS− was too small to be easily detected, that is, was beyond the animal's sensory "capacity," the discrimination broke down and performance accuracy deteriorated markedly. More interestingly, other responses (whining, squealing, restless movements, defecation, urination, etc.) began to occur as soon as the animal was placed in the experimental situation. These "distress" responses had all the features of human anxiety. Pavlov believed that the constellation of inappropriate distress or anxiety reactions that he named experimental neurosis was the result of strong conflicting tendencies toward excitation and
inhibition produced by the same (to the dog) stimulus. (Note: While such responses also occur when other species, including humans, confront "difficult" discrimination problems, experimental neurosis is by no means universal. For example, it does not usually occur when fading techniques are used, such that discriminations are acquired with relatively few errors, i.e., "errorlessly.")

b. Phobias. One of the best known experiments in the history of psychology was conducted by the "father of Behaviorism," John B. Watson, and his student (and future wife) Rosalie Rayner. Watson and Rayner (1920) were interested in the extent to which human fears are innate (heritable) or are determined by the environment (conditioned). One of the few events they believed was naturally fearful to children is a sudden loud noise (US). They therefore used noise as a US to establish a conditioned fear in a human infant, Albert B. ("little Albert"), by pairing it with a white rat (CS). Not only did Albert become frightened of the rat, but his fear generalized to objects that resembled white rats in one way or another (e.g., rabbits, Santa Claus masks, etc.).

c. Fear and anxiety: the CER revisited. In clinical psychology, psychiatry, and common English, it is sometimes useful to distinguish between fear, which is usually focused on particular objects or situations, and anxiety, which is more unfocused, "free-floating," and diffuse; we are often unaware of what we are anxious about. Both fear and anxiety can be analyzed experimentally with the conditioned suppression, or conditioned emotional response (CER), paradigm (above).

1. Recall that if a stimulus such as a tone (CS) is presented, say, 1 min prior to a foot shock (US), a fear response becomes conditioned to the tone. This CS has at least some information value, in that it enables the subject to predict, and possibly minimize or at least prepare for, the shock. What is interesting about this situation is what happens in the absence of the tone. The non-occurrence of the tone has information value;
that is, it is an inhibitory stimulus for fear. In a sense, it "tells the dog not to be afraid"; or, stated another way, it is a safety signal for a presumably more normal affective state: relaxation.

2. Suppose, however, there were no tone in a situation in which shocks occurred unpredictably, or that a tone occurred but did not predict the shock reliably. What might then develop is free-floating ("neurotic") anxiety, such as that seen in patients suffering from Generalized Anxiety Disorder (GAD), rather than fear. That is, the animal (or patient) never knows when the shock (or stressor) is coming; the entire experimental situation (SSc), or social environment, therefore becomes the only stimulus that is informative (predictive) of shock. Anxiety occurs because there is no safety signal: there is never any time to relax.

2. The Pavlovian treatment of psychopathology. While little is known of Albert's fate, because he left the hospital shortly after Watson and Rayner conducted their conditioning experiment, another baby, "little Peter," was conditioned and later "unconditioned" by one of Watson's students, Mary Cover Jones (1924), who should probably be recognized as the first behavior therapist. We will consider briefly three "unconditioning," or therapeutic, techniques: counterconditioning, flooding, and systematic desensitization.

a. Counterconditioning. In this procedure, a CS that had originally been paired with, say, a fear-producing stimulus (US1) is presented with a new stimulus (US2) that is different from, and more powerful than, the original US1. Hopefully, the responses that develop to US2 will be incompatible with the ones produced by US1, which therefore stop occurring. For example, we might pair a tone that had previously been a CS for salivation with a strong electric shock. The shock would produce fear responses to the tone, and conditioned salivation would cease. Another interesting example of counterconditioning occurs in the treatment of alcohol abuse, where the sight and taste of alcohol (CS) is paired
with a drug, disulfiram (Antabuse; US2), which makes people sick when they drink anything containing alcohol. Finally, on a more positive note, we could pair Albert's rat, which had been conditioned to the noise and therefore elicited fear, with the sight of Albert's mother's breast and presentation of her nipple. The rat should then elicit hope, joy, salivation, anticipatory sucking responses, etc., all of which are incompatible with fear.

b. Flooding. Flooding is essentially an extinction technique in which the patient agrees to be exposed to a fear-producing CS, continuously or over many trials, without the US being presented. The logic behind this technique is simple enough: to extinguish (or inhibit) fears, people must experience the CS without the US.

c. Systematic desensitization. A technique that depends on counterconditioning, systematic desensitization, was developed by Joseph Wolpe, primarily to treat phobias and anxieties (Wolpe and Lazarus, 1969). In this procedure, the patient tries to identify a hierarchy of situations that might elicit whatever response (e.g., fear) s/he wishes to eliminate, from the least to the most fearful. The therapist then asks the patient to imagine the least fearful of these situations and trains him or her to relax by a procedure that resembles hypnosis. When complete relaxation is achieved, the patient is asked to imagine the next least fearful situation, and the relaxation therapy is repeated. The process continues until the patient has been conditioned to relax in what s/he imagines to be the most fearful situation. In theory at least, some generalization of this treatment occurs outside the therapeutic situation, so that the patient can relax whenever the real, as well as the imagined, fear-producing event occurs.

IV. BEHAVIOR–EVENT (R → S) LEARNING: OPERANT CONDITIONING

A. INTRODUCTION AND OVERVIEW

(Note: In this section we will define a relatively large number of terms, which will be discussed in more detail later, in Sections IV B–D.)

1. Operant behavior. When we use
the terms operant or instrumental, we mean any behavior that is "goal directed," "purposive," "consequential," "voluntary," or "meaningful"; that is, any behavior which effects, or operates on, the environment. More specifically, the term operant, like many others in this area of psychology (including, for example, reinforcement and punishment), is defined functionally rather than structurally or topographically. That is, such terms are defined by their effects (what they do) rather than what they are (i.e., their structure, topography, or anatomy).

- In common English, we also define things or events functionally, or as being functionally equivalent, when they are used for the same purpose or have a common effect. For example, the following are functionally, but certainly not structurally, equivalent in that they are all means of transportation:

[Figure: pictures of various means of transportation]

- Similarly, events such as tasting food, feeling "high" after smoking crack cocaine, and stimulating one's own hypothalamus are all positive reinforcers (below), because they increase the probability of the behavior that produces them.

A functional definition of Operant Behavior: When behavior, or more accurately a class of behaviors (R), is followed by (→) a consequence (S*), and thereafter the probability of the same or similar R is functionally related to that S*, the behavior is defined as operant behavior.

Note: S* follows R; that is, presentation of the consequence (reinforcing or punishing stimulus) is conditional, or contingent, on the occurrence of R. If R, then S*; if not R, then S* does not occur. This is operationally (or procedurally) different from classical conditioning, in which the response (UR or CR) is conditional, or contingent, on the occurrence of a stimulus (US or CS).

a. R, S*, and → contingencies.

1. The response class (R). R is any class of behaviors which has a given consequence, effect, or function (e.g., any R which closes a switch results in reinforcement or punishment, etc.).

2. The stimulus class (S*). Reinforcing and punishing stimuli are also defined functionally (below); for example, food is
a reinforcer to a hungry rat only when it increases the probability of the response that precedes its delivery.

3. Contingencies (→). In R → S*, the relationship denoted by "→" may be accidental. If this is the case, any R that is followed closely in time by a potentially reinforcing or punishing stimulus may increase or decrease in probability (below). Indeed, Skinner (1948) called such a situation superstition (e.g., dance → rain). If "→" is programmed, or scheduled, the relationship is called contingent.

i. Contingency vs. contiguity. It is becoming increasingly clear that what is required for operant conditioning to occur is a contingent, rather than merely a contiguous, relationship between two events, in this case the R and its consequences. That is, the organism must learn that its behavior controls, or more accurately appears to control, changes in its environment. Consider the following experiment (Hammond, 1980). An experimental session is divided into 1-sec periods. In each period, rats can press a lever or not press the lever. If the rat presses the lever, food is delivered with a probability of 0.05. That is:

  p(food | lever press) = 0.05
  p(food | no lever press) = 0.00

Thus there is a contingency between lever pressing and food, and rats rapidly learn to press the lever. Next, the probability of food following a lever press is made equal to the probability of food following no lever press:

  p(food | lever press) = p(food | no lever press) = 0.05

That is, the contiguity between the response and its consequence (or lack of consequence) remains the same, but the contingency is removed. What do you think eventually happens to the probability of pressing the lever? (Note that in real life we do not always know whether a given relationship is accidental or contingent. Presumably, functional relationships are contingent; accidents do not happen too often.)

ii. Emotional effects of contingency learning. Two interesting side effects of contingency learning are Locus of Control and Learned Helplessness.
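Hammond's manipulation can be sketched as a small Monte-Carlo simulation. The per-second press probability below is an arbitrary stand-in (the real dependent variable is, of course, the rat's own pressing, which the schedule does not fix), and the helper name is hypothetical:

```python
# Monte-Carlo sketch of Hammond's (1980) procedure: the session is divided
# into 1-sec bins; in each bin the subject presses or not, and food is
# delivered with a probability that may or may not depend on pressing.
import random

random.seed(1)  # reproducible illustration

def estimate_contingency(p_food_press, p_food_no_press,
                         p_press=0.5, bins=100_000):
    """Estimate p(food | press) - p(food | no press) from simulated bins."""
    food_p = n_p = food_q = n_q = 0
    for _ in range(bins):
        if random.random() < p_press:      # a bin containing a press
            n_p += 1
            food_p += random.random() < p_food_press
        else:                               # a bin with no press
            n_q += 1
            food_q += random.random() < p_food_no_press
    return food_p / n_p - food_q / n_q

phase1 = estimate_contingency(0.05, 0.00)  # contingency present
phase2 = estimate_contingency(0.05, 0.05)  # contingency removed
print(round(phase1, 3), round(phase2, 3))
# phase1 is near +0.05 (pressing is acquired); phase2 is near 0, even though
# individual presses are still sometimes followed -- contiguously -- by food.
```

The point of the sketch is that in phase 2 nothing about the moment-to-moment pairing of presses and food has changed; only the response's differential predictiveness has been removed, and with it, responding declines.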
• Locus of Control. Consider the following experiment involving 3-month-old infants (Watson, 1967, 1971). An infant is placed in a crib with a switch under its pillow. Whenever its head moves and closes the switch, a mobile over the crib moves for a few seconds. The child quickly learns to turn its head and move the mobile; moreover, it smiles, giggles, and seems delighted. For a second, "yoked" child in another crib, a similar mobile moves whenever the first infant makes it move. Not surprisingly, the second child fails to learn to move its head, because such movements have no consequences. What is more interesting is that, although this child seems to like the movement of the mobile at first, it quickly loses interest in it. Apparently, the reinforcing properties of the moving mobile depend on the extent to which it can be controlled by the infant. This is a special case of one of the most important principles of operant conditioning: reinforcement is not an intrinsic property of a stimulus or event, but a dependent property of a relationship between that event and the response that causes its occurrence.

• Learned Helplessness. A classic series of experiments by Seligman (1975 ff.) involves a similar paradigm, with dogs and aversive rather than positively reinforcing stimuli. In this kind of research, dogs are typically trained to jump over a barrier to escape from, or avoid, painful stimulation (below). Before training, however, a dog (A) can turn off a shock by pushing a panel. A yoked dog (B) is shocked in an identical manner but has no control over the shock: whenever dog A turns off the shock, dog B's shock is also removed. Dog A quickly learns to jump over the barrier, and in a short time receives few if any shocks. Untrained dogs, which were never exposed to shock, act similarly. However, dog B, the yoked dog, fails to learn. Instead it whines, grows passive, and eventually lies down and makes no effort to escape or avoid the shock. This "learned helplessness" effect has become a popular animal model of human depression
and related disorders, and a useful screening tool for antidepressant drugs.

2. The Operant Paradigm

A more precise definition of operant behavior involves specifying all the variables and relationships of which R is a function. An approximation to this, in the case of a single response, is:

History, SiD, SD/SΔ, PV, SSc, Time: R → S±

where:
R = the operant response or, as defined above, more accurately, response class
S± = consequences: positive and negative reinforcement, and sometimes punishment (below)
SD and SΔ = discriminative stimuli
SiD = instructional control
PV = potentiating variables
SSc = contextual (background) stimuli

a. Consequences (S±)

Loose definitions:
(i) Reinforcement: strengthening or "selecting" behavior by attaching a consequence to it.
(ii) Punishment: weakening behavior by attaching a consequence to it.
(iii) Extinction: removing consequences; behavior returns to prior (operant) level.

More precise definitions:

(i) Reinforcement. Given that a stimulus (S) occurs following some response (R), that there is a contingent relationship between the presentation of the S and R (R → S), and p(R) is increased (p(R)↑), the occurrence or removal of S is defined functionally as reinforcement.

Note: Reinforcement requires three (3) things:
• the occurrence of S following an R,
• a contingent relationship between R and S (R → S), and
• p(R)↑.

Positive reinforcement (S+): If the consequence of the response is the presentation of some stimulus, and p(R) is subsequently increased, the procedure is defined as positive reinforcement and the stimulus is called a positively reinforcing stimulus, or reinforcer (not "a reinforcement").

Negative reinforcement (S−): If the consequence of a response is the elimination, removal, or postponement of the stimulus, and p(R) is subsequently increased, the procedure is defined as negative reinforcement and the stimulus is called aversive.

There are two kinds of negative reinforcement:
• Escape: removal of S
• Avoidance: postponement of S

(ii) Punishment. Given that a stimulus (S) occurs following some response (R), that there is a
contingent relationship between the presentation of S and R (R → S), and p(R) is decreased (p(R)↓), the occurrence or removal of S is defined functionally as punishment.

Note: Punishment also requires three (3) things:
• the occurrence of S following an R,
• a contingent relationship between R and S (R → S), and
• p(R)↓.

Positive punishment: If the consequence of the response is the presentation of the stimulus, and p(R) is subsequently decreased, the procedure is defined as positive punishment. The stimulus is called an aversive stimulus, a punishing stimulus, or simply a punisher.

Negative punishment: If the consequence of the response is the elimination or removal of the stimulus, and p(R) is subsequently decreased, the procedure is defined as negative punishment. The stimulus which is removed is called a reinforcer; its removal is aversive.

There are two kinds of negative punishment:
• Time out (elimination): reinforcer removal
• Omission: reinforcer postponement

Note 1: Terms such as "non-contingent reinforcement" and "non-contingent punishment" are self-contradictory and therefore meaningless. Stated otherwise, both reinforcement and punishment are always response-contingent. Among other things, this means that we reinforce or punish particular responses of organisms (or children); we do not reinforce or punish organisms (or children).

Note 2: Punishment and negative reinforcement are NOT synonymous. They can be, and are, differentiated by their behavioral effects. In punishment, the probability of the response on which the presentation of the punisher is contingent (p(R)) decreases; in negative reinforcement, the probability of the response that removes the aversive stimulus (p(R)) increases.

(iii) Extinction. Given the occurrence of an R which once → S but no longer → S, the procedural change from past to present contingencies (i.e., consequences to no consequences) is defined as extinction. Whether p(R) decreases or increases during extinction depends on previous conditions. That is, when R → 0 and p(R) decreases, the
procedure is extinction following reinforcement; when R → 0 and p(R) increases, the procedure is extinction following punishment. However, this procedure has another, more popular name: recovery.

We can summarize what has been said thus far about consequences as follows:

The Consequence Matrix

             p(R)↑                      p(R)↓
R → S+       Positive Reinforcement     Positive Punishment
R → S−       Negative Reinforcement     Negative Punishment
             (Escape, Avoidance)        (Time Out, Omission)
R → 0        Recovery                   Extinction
             (prior punishment)         (prior reinforcement)

b. Stimulus control: discrimination and generalization

(1) Background. Operant behavior is usually appropriate (i.e., reinforced) under a specific set of conditions (e.g., stop the car on a red light, go on green; come to class at 1:25 PM on M, W, and F; etc.). Thus environmental events can become "signals" or "cues" for behavior; that is, they set the occasion for the reinforcement or punishment of specific responses. They do not "elicit" anything.

(2) Definitions.

(i) Discriminative stimuli (SD and SΔ). More formally, discriminative stimuli are environmental events in the presence of which behavior is differentially reinforced, that is, stimuli that define reinforcement contingencies.

An SD (pronounced "ess dee") is a stimulus in the presence of which a response, if it occurs, is followed by reinforcing or punishing consequences.

An SΔ (pronounced "ess delta") is a stimulus in the presence of which a response is not followed by either reinforcement or punishment.

Note 1: Neither SD nor SΔ is defined functionally by the responses of the subject; rather, they are procedural rules which determine reinforcement contingencies given the occurrence of appropriate behavior.

Note 2: SD and SΔ are interdependent; that is, you cannot have one without the other.

Note 3: Discrimination is said to occur when responding becomes differentially related to the occurrence of (comes under the control of) SD and SΔ. That is, we say an organism can discriminate between SD and SΔ when p(R) is high in the presence of SD and p(R) is low or zero in the presence of SΔ.
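As a concrete illustration of Note 3, discrimination is often quantified with a discrimination index: the proportion of all responses emitted in the presence of SD. This is a standard measure in the operant literature, not a formula given in these notes, and the function name and example counts are invented for illustration:

```python
def discrimination_index(responses_sd, responses_sdelta):
    """Proportion of responding controlled by SD.
    0.5 = no discrimination; 1.0 = perfect discrimination."""
    total = responses_sd + responses_sdelta
    if total == 0:
        return None  # no responding in either condition; index undefined
    return responses_sd / total

# A rat that presses 90 times during SD but only 10 times during SDelta:
print(discrimination_index(90, 10))  # 0.9 (strong stimulus control)
print(discrimination_index(50, 50))  # 0.5 (no discrimination)
```

An index near 0.5 means the animal responds regardless of which stimulus is present; values approaching 1.0 mean responding has come almost entirely under the control of SD.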
(ii) Instructions (SiD). Instructions are discriminative stimuli (rules for obtaining reinforcement) that are presented in advance, e.g., before an experimental or "test" session. Such stimuli are used primarily with human subjects and will not be discussed further in this course.

c. Contextual (constant, background) stimuli (SSc). Discussed previously.

d. Potentiating variables ("motivation"). Potentiating variables are procedures for regulating the effectiveness of consequences; they include:

(1) Deprivation / satiation. For food to be an effective reinforcer, we must deprive the animal of food. The concept of "hunger," which may or may not be useful as an intervening variable, is unnecessary. Conversely, the reinforcing effectiveness of food declines if it is presented continually or repeatedly (think of the Thanksgiving turkey on Sunday evening). These relationships hold also for punishment: how long will a wife's nagging continue to punish her husband's football-watching behavior effectively?

(2) Functional equivalents of deprivation and satiation. To make water an effective reinforcer, we can deprive an animal (or human), but we can also feed it salt or dry food, put it in the sun, ask it to give a lecture, etc. Why do smart bartenders provide pretzels or peanuts to their customers during "happy hour"?

(3) Linkage. See conditioned (generalized) reinforcement, below.

(4) Consequence variables.

(i) Delay of reinforcement. In general, a reinforcer is as effective as it is immediate.

(ii) Magnitude or amount of reinforcement. In general, the greater the mass or number of reinforcers an animal receives following a response, the more reinforcing that stimulus is. However, are two food pellets necessarily twice as reinforcing as one food pellet?

(iii) Response requirements (amount of work / amount of pay). Sometimes a stimulus will be reinforcing if not much effort is required to obtain it, but is not reinforcing (worth the effort) if a lot of work is required. This is, of course, the story of the fox and the sour grapes.
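One common way to formalize the delay effect in (i), not given in these notes but standard in the operant literature, is Mazur's hyperbolic discounting model, in which a reinforcer's effective value falls off with delay: V = A / (1 + kD), where A is amount, D is delay, and k is a fitted sensitivity parameter. The sketch below uses an assumed k purely for illustration:

```python
def discounted_value(amount, delay, k=0.1):
    """Mazur's hyperbolic discounting: V = A / (1 + k*D).
    amount: reinforcer magnitude; delay: time until delivery (sec);
    k: discounting-rate parameter (k = 0.1 is an assumed value)."""
    return amount / (1 + k * delay)

# An immediate small reinforcer can outweigh a delayed larger one:
small_now = discounted_value(1, 0)     # 1.0
large_later = discounted_value(3, 30)  # 3 / (1 + 3) ≈ 0.75
```

On this account, the statement "a reinforcer is as effective as it is immediate" becomes a quantitative trade-off: a one-pellet reinforcer delivered now can sustain more behavior than a three-pellet reinforcer delayed by 30 seconds.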
Parenthetically, it is also the idea behind Hull's (1943) concepts of response inhibition (IR) and conditioned inhibition (sIR), which subtract from performance, or response strength (sER); thus, in this theory, sER is a multiplicative function of habit strength (sHR), motivation or drive (D), and incentive (K), minus the sum of the inhibitory factors:

sER = sHR × D × K − (sIR + IR)

B. POSITIVE REINFORCEMENT (S+)

Because time will not permit us to consider all aspects of the operant paradigm in detail, we will consider only some of its most important components.

1. Introduction

a. Classification

(1) Primary and secondary reinforcement. Given that sufficient potentiation is present, positive reinforcement may be classified into at least two types, each of which has several names:

Unlearned        Learned
SR               Sr
Unconditioned    Conditioned
Primary          Secondary
Natural          Acquired

As we will see below:
• Some of these terms are similar to those used in classical conditioning, and
• The number of SR's, like US's, is limited, while the number of possible Sr's, like CS's, is virtually infinite.

There are, however, certain kinds or categories of reinforcement that do not fit neatly into this or any other classification scheme. These include:

(2) The Premack Principle. Premack (1961) introduced another way of conceptualizing reinforcers. This is the idea that behavior itself, or more accurately the opportunity to engage in behavior, is reinforcing or punishing. The Premack Principle, or Prepotent Response Theory, states that the opportunity to engage in behavior that normally occurs at a high frequency will reinforce behavior that occurs at a lower frequency. This explains, among other things, apparent anomalies such as satiated rats eating or drinking in order to run in a wheel. In addition, the opportunity or necessity to engage in less frequently occurring behavior can be punishing.

The Premack principle is often the best, and sometimes the only, way to identify what is reinforcing for a given subject or patient under a given set of conditions. To
determine what is reinforcing, one need only observe what an organism does most often and make the opportunity to do it contingent on the behavior one wishes to encourage.

(3) Species-specific reinforcers. Certain events maintain behavior only during so-called critical periods in the lives of various species. For example, at about 1–2 days of age, a duckling will follow any moving object that happens to cross its field of vision. In other, more biologically oriented systems such as ethology, the stimuli which the organism follows ("imprints" on) are known as releasing stimuli, or releasers. Since moving objects will reinforce any response (e.g., key pecking) if they cross a particular organism's visual field at a certain stage of development, it can be argued that such objects are simply reinforcers under severely restricted conditions of potentiation.

2. Unconditioned (primary) reinforcement (SR)

a. Definition. Given adequate potentiation (motivation or incentive), there appears to be a limited number of events (stimuli) that maintain behavior without any known pairing or association with other events. Such stimuli are called unconditional, unconditioned, or primary reinforcers (SR's), presumably because they reinforce behavior unconditionally.

We cannot answer the question of why a given event is or is not an SR under a given set of conditions. Such relationships may be determined biologically (genetically), culturally, historically, or by The Will of God. We also cannot answer the theoretically interesting questions of (1) whether reinforcement necessarily involves need or drive reduction (Hull), (2) whether or not organisms must be aware or conscious of what is reinforcing or motivating their own behavior (Freud), or (3) whether or not a particular brain circuit (e.g., the mesocorticolimbic DA circuit) subserves primary reinforcing events.

What we can do is identify and say a few things about some of the more salient SR's that have been studied over the years. It should be noted that the value of even these primary
reinforcers is affected by many variables, for example the elasticity of their demand (below).

b. Some unconditioned reinforcers

(1) Food. The most widely used SR, at least in animal studies, is food. However, this stimulus is reinforcing only when an organism has been deprived ("hungry"). In humans there is a "disease," anorexia nervosa, in which food loses its ability to reinforce consummatory responses, to the extent that patients die of starvation.

One question concerning food and its relation to ingestive behavior is what aspect of the stimulus is reinforcing. Is it its taste, the act of chewing, swallowing, filling the stomach, the conversion of food to blood sugar, the activation of certain neuronal circuitry (such as the mesocorticolimbic DA system), or some combination of all (or none) of these events? While interesting work has been done in these areas (e.g., Miller and Kessen, 1952), the answer to this question remains unknown.

(2) Brain stimulation (ESB; "pleasure" centers). In 1954, Olds & Milner and Delgado et al., working independently, reported that response-contingent electrical stimulation of certain regions of the brain (e.g., septum, lateral hypothalamus, VTA, MFB) maintains considerable amounts of operant behavior in the rat. Stimulation of other areas (e.g., in the thalamus) does not have this reinforcing effect. This finding has been replicated many times in diverse species (fish, pigeons, cats, dolphins, monkeys, and humans), maintained under schedule control, and analyzed anatomically. There is, in short, no doubt that ESB is a robust, reliable phenomenon.

(3) Drugs. In diverse species, including rats, cats, monkeys, and humans, various psychotropic drugs effectively maintain a great many operant behaviors on which their self-administration or ingestion is contingent. Suffice it to say here that animals other than humans will acquire behavior such as bar pressing in order to obtain intravenous or intracerebral injections of several classes of compounds (Roberts & Goeders, 1989). Interestingly, those
agents that are most reinforcing to rhesus monkeys, for example, are those that are most often self-administered and abused by humans; they include:

CNS stimulants: amphetamine, cocaine
Opiates: heroin, morphine
Alcohol: ethanol
Barbiturates: pentobarbital
PCP ("angel dust") and Δ9-THC (marijuana)

Other psychotropic drugs, such as the hallucinogen LSD and the dysphorigenic opiate cyclazocine, as well as anti-psychotic medications that are used to treat schizophrenia (Thorazine, Haldol, and Risperdal), are NOT reinforcing in humans.

3. Conditioned (secondary) reinforcement (Sr)

a. Defining characteristics. Essentially any "neutral" event can acquire reinforcing properties by being paired with one or more primary reinforcers. However, unlike primary reinforcers, such events lose their ability to maintain behavior when their association with the primary reinforcer is terminated, i.e., when they are no longer "backed up." Because of the similarity of Sr → SR to CS → US relationships, the term conditioned reinforcement is sometimes used to describe the procedure of pairing potentially reinforcing stimuli; the term conditioned reinforcer is used to describe the Sr.

b. Establishing a conditioned reinforcer. We have already discussed many of the laws of conditioned reinforcement, because they parallel those of classical conditioning. Thus, one way to establish an event as a conditioned reinforcer is by pairing it with an unconditioned reinforcer. As we have seen, the best acquisition occurs when short-delay or trace procedures are used (above).

A thought question: How do we know when a given stimulus is in fact a conditioned reinforcer or a CS? (A conditioned reinforcer is not necessarily a CS, and vice versa.)

• Consider pairing a light with the presentation of grain to a food-deprived pigeon. Make presentation of the light contingent on a response. If the animal will acquire this response (i.e., work for the light), the light is, by definition, a reinforcer. However, if the light is not followed by food occasionally, the
response will extinguish.

As in Pavlovian conditioning, the more informative a stimulus is, the more effective a conditioned reinforcer that stimulus will be, at least when the stimulus predicts "good news." Indeed, animals will make observing responses to obtain an Sr, such as a dim light, which informs them of the reinforcing or punishing contingencies in effect at a given moment (for example, will food come after 10 or after 90 responses?).

c. Chaining. Conditioned reinforcers can be established in ways that do not involve classical conditioning directly. For example, time and other responses may intervene between the Sr and the SR. Indeed, even the simplest operant always involves conditioned reinforcement. The following situation describes more precisely what has been called "chaining." That is:

R1 → Sr:D → R2 → SR
(Bar press → Sight of food → Going to food → Food delivery)

In such a response sequence, or chain, the stimuli ultimately associated with food are both Sr's for the R that precedes them and SD's for the R that follows them; that is, they are Sr:D's.

Note that the best way to establish such a chain is to proceed backwards. That is, the first response we must establish is the last link in the chain, the one most directly associated with unconditioned reinforcement (e.g., eating in the goal box). When training occurs in this manner, even in complex educational systems, questions of "relevance" never arise, because the goal is clear from the beginning.

d. Generalized conditioned reinforcers. Any Sr may be paired with more than one unconditioned reinforcer. Examples of such generalized conditioned reinforcers in the "real world" are money, tokens, grades, and less tangible or quantifiable social events, including attention, affection, and changes in facial expression such as smiles. The more SR's with which a given Sr has been associated, the more powerful and enduring the Sr will be, since extinction to all Sr's is unlikely to occur simultaneously. Thus, "money makes the world go
round," and the management of institutionalized psychotics and of classroom behavior has been revolutionized (for better or worse) through the use of "token economies" (Ayllon and Azrin, 1968).

e. Properties of conditioned reinforcers. Conditioned reinforcers have a variety of different functions. These include, but are not limited to:
(1) providing feedback (Sr), by telling the organism that it has done the right thing;
(2) telling the organism what to do next (SD);
(3) bridging long periods of time, or many responses, that might occur or be required between unconditioned reinforcers;
(4) hedonic functions: they make the organism "feel good."

4. Schedules of Reinforcement

a. Introduction. We have already mentioned that the amount of behavior required to produce a consequence (e.g., a reinforcer) determines the effectiveness of that consequence. This is true of many aspects of the behavior involved. However, we will consider only one such aspect: its frequency, or rate.

We have mentioned that in classical conditioning, the strength of conditioning is directly related to the number of times a CS has been paired or associated with a US. In operant conditioning, however, the situation is quite different. Reinforcement is, in fact, almost always intermittent or partial. Indeed, the precise relationship between a response and its consequences (SR) defines various programs, or schedules, of reinforcement, which can maintain considerable amounts of behavior for long periods of time.

b. The Cumulative Record. A useful way to picture what is going on under different schedules is by plotting cumulative records.

[Figure: A Cumulative Record — cumulative responses plotted against time in minutes, with positively and negatively accelerated curves.]

Note: (1) the slope of the curve at any point is proportional to the rate of responding, and (2) no responding produces a flat, horizontal line.

c. "Simple" schedules: the analysis of response rate or pattern. Certain schedules are referred to as simple because only one contingency or program is involved.

(1) Continuous reinforcement (CRF, or
FR 1). In this schedule, every response is followed by reinforcement. All other schedules involve so-called "partial" or "intermittent" reinforcement.

• CRF (FR 1) is "basic" in the sense that it precedes all other schedules; that is, it is usually the first schedule under which an organism is trained.
• Otherwise it is not very interesting, because it sustains relatively little behavior, due to the rapid development of satiation to most common reinforcers such as food or water (but not to ESB).

(2) Fixed ratio (FR N). Under this schedule, every Nth response is reinforced, where N is a positive integer that defines the value of the ratio.

(i) In the "real world," FR is like piecework; that is, pay-off depends on amount of work completed. Note that CRF (above) can be considered a fixed ratio in which N = 1.
(ii) Properly trained ratio schedules, in which the ratio value is raised gradually, generate very rapid response rates, probably because rate of responding determines rate of reinforcement.
(iii) If reinforcement requirements are raised too rapidly, or if the ratio is too high, post-reinforcement (pre-ratio) pausing occurs. This is called fixed-ratio straining. The duration of the pause varies as a function of the value of the ratio (N). However, "local" rates always remain high and appear to be "locked in." Thus FR performance typically is a make-break, all-or-none affair ("all or nothing at all").

[Figure: Fixed Ratio Responding — cumulative responses vs. time in minutes.]

(3) Fixed interval (FI). Under this schedule, the first response after t seconds (or, more commonly, minutes) is followed by reinforcement. Thus pay-off depends upon elapsed time as well as the occurrence of a single response.

[Figure: Fixed Interval Responding — cumulative responses vs. time in minutes.]

At the beginning of our discussion of excitatory Pavlovian conditioning, we mentioned that a particular pattern of anticipatory or "timed" responding sometimes appears. That is, more and more
responding may occur as the time approaches for the US to be delivered. When responding is under the control of an FI schedule, a similar pattern emerges; this is called the fixed-interval scallop, because of its resemblance to the shell of that shape. However, how do we know that "timing," or worse, an internal "timing mechanism," is involved? Perhaps the scallop represents the sum of excitatory and inhibitory influences caused by, say, the delivery of the reinforcer, which may be both an SR for responding that precedes its delivery and an SD for subsequent non-responding.

(4) Variable schedules: contingency vs. frequency. In the "real world," ratios and intervals are rarely fixed. Thus, a variable ratio (VR) schedule can be defined in terms of the average number of responses (N) required for reinforcement. VR schedules generate very high response rates. Pausing, which occurs as a direct function of the value of the VR, may occur at any time during the ratio; i.e., pausing does not always follow reinforcement, as is the case under FR.

Under a variable interval (VI) schedule, reinforcement follows a response on an average of once every t seconds. VI schedules generate stable patterns of responding in which rate varies as a function of the value of t, at least when independent groups of subjects are used.

Under schedules such as VR and VI, how do we determine the extent to which the contingency, or the frequency, of reinforcement governs response rate? One way to examine this question is by using a "yoked" control, in which reinforcement becomes available to subject B whenever subject A is reinforced (Reynolds, 1961). Thus subject A is under a VR schedule and subject B is under a VI schedule, but frequency of reinforcement is the same for both animals.

[Figure: Contingency vs. Frequency — cumulative records of Pigeon A (VR) and Pigeon B (yoked VI) over 60 minutes.]

The results of such an experiment are shown in the graph above. Because Pigeon A has a much higher response rate than
Pigeon B, even though both birds are reinforced equally often, we might argue that the contingency of (or ability to control) reinforcement is more important than the frequency of reinforcement.

d. Extinction after "simple" schedules. The pattern of behavior during extinction sessions, as well as the amount of resistance to its occurrence, is determined by the schedule of reinforcement that had been maintaining behavior during conditioning. For example, consider behavior that had previously been reinforced under either an FR or a VI schedule. What usually happens is:

[Figure: Extinction After FR and After VI — cumulative responses vs. time in minutes.]

e. "Complex" schedules. It is possible to design different complex schedules by exposing subjects successively or simultaneously to various simple schedules (Ferster & Skinner, 1957). If this is done, behavior appropriate to more than one schedule can be studied at the same time, within the same session, in a single organism; thus considerable information can be gained at relatively little cost.

(1) Examples of successive complex schedules include:

(i) Multiple schedules (Mult). Under a multiple schedule involving a single response manipulandum, two or more independent schedules (components) are presented successively, each in the presence of a discriminable stimulus. The simplest possible multiple schedule, Mult FR 1 (light) Ext (dark), is often called successive discrimination learning.
• After many sessions, performance characteristic of the individual components of the schedule occurs. Occasionally the subject might perform on one component as if the other were present; that is, an interaction is observed.

(ii) Mixed schedules (Mix). Under a mixed schedule, two or more schedules are presented without any correlated exteroceptive stimulus.
• Such schedules can be learned easily if the individual components alternate regularly, probably because the occurrence of the reinforcer, as well as the subject's own behavior, functions as an SD.

(iii) Chained
schedules (Chain). Under a chained schedule, two or more schedules are presented successively, each in the presence of a discriminable stimulus. However, the only "reinforcement" for the completion of the first component (link) of the chain is the presentation of the stimulus associated with the second link.
• Chained schedules have been useful in the experimental analysis of conditioned reinforcement (Kelleher and Gollub, 1962).

(2) Concurrent schedules: the analysis of response choice. Other complex schedules involve the use of more than one response, and sometimes more than one reinforcer. For example, under concurrent schedules, reinforcement for responding on each manipulandum is programmed independently. Such schedules have been used in the analysis of choice behavior. Indeed, in many laboratories choice has replaced rate as the principal dependent variable of operant conditioning. This may be because rate under a particular schedule is itself shaped or "selected by reinforcement," and is therefore too sensitive to the effects of such variables as drugs. Thus, even extremely high (possibly toxic) doses of psychoactive drugs either have no effect on the local rate of responding under an FR schedule or disrupt responding completely.

(i) The matching law. An important result of the analysis of choice is the so-called matching law, which states that the relative frequency of choosing a given response in a 2-choice situation "matches" the relative frequency of reinforcement of responding in that situation. That is:

responses on A / (responses on A + responses on B) = reinforcers on A / (reinforcers on A + reinforcers on B)

This "law" seems to hold in many situations, involving many different kinds of reinforcers and organisms, and measures of reinforcement such as its magnitude, delay, and frequency (rate). However, the matching law is coming to be viewed as a special case of economic principles that are far more general.
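The matching relation is easy to compute. A minimal sketch (the function name and response/reinforcer counts are invented for illustration, not data from a cited experiment):

```python
def relative_rate(a, b):
    """Relative frequency of alternative A: a / (a + b)."""
    return a / (a + b)

# Matching law: the relative response rate on key A should track
# the relative reinforcement rate earned on key A.
responses = relative_rate(1500, 500)   # 0.75 of responses on key A
reinforcers = relative_rate(30, 10)    # 0.75 of reinforcers on key A
print(responses == reinforcers)        # True: behavior "matches"
```

A pigeon earning three-quarters of its reinforcers on key A is thus predicted to allocate about three-quarters of its pecks to key A; departures from this equality (undermatching, bias) are what the more general economic accounts below try to explain.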
(ii) Behavioral economics.

• Demand. The demand for a given commodity is the amount of that commodity that will be purchased or consumed at a given price. When consumption of a commodity (e.g., bread) is relatively unaffected by its price, its demand is said to be inelastic; when consumption is dramatically affected by price (e.g., watching movies), demand is said to be elastic.

In the laboratory, the equivalent of price is the schedule of reinforcement: the number of responses, or amount of time, that must be "spent" to "purchase" a reinforcer (or different reinforcers). When animals are confronted with a choice between levers that produce either ESB or food, they press the lever associated with ESB more often when both of these reinforcers are cheap (e.g., FR 2). However, when the price increases (to, e.g., FR 8), they prefer the lever associated with food. Similar relationships occur when saccharine is used rather than ESB. Thus, what distinguishes between these reinforcers is not their "value" (whatever that might be) but their elasticity. That is, food is inelastic, while ESB and saccharine (sweet taste) are elastic. Thus, biologically necessary events might be less elastic than other, less necessary but pleasurable (reinforcing) events. It turns out that the matching law holds only when the demand curves for both reinforcers are the same or similar in form (elastic or inelastic).

• Demand and income. In determining the demand for a commodity, absolute price is actually less important than relative price, that is, price as a proportion of income.

In an operant conditioning experiment, the rat's or pigeon's "income" is the number of responses it has available. If price goes up, the subject can adjust to this change by increasing its rate of responding. Conditions like this minimize the effects of demand on choice, because increases in absolute price need not imply changes in relative price. To create a situation more like the human one, a procedure in which animals have only a fixed number of responses available per session, i.e., a fixed
income, is required. One experiment is of particular interest. Baboons were given a choice between food and heroin infusion. Under conditions in which there was no constraint on the number of responses that could occur in a session, neither demand for food nor demand for heroin was elastic. That is, neither food choices nor heroin choices were dramatically affected by price (FR requirement). Conditions were then changed such that the baboons were given a fixed "income" of responses per day that could be allocated to the purchase of either food or heroin. When both reinforcers were cheap, the baboons chose them equally often. However, as the cost (schedule) increased, demand for heroin dropped, while demand for food stayed roughly the same. Thus heroin demand was elastic, while food demand remained inelastic.

• Substitutability of commodities. The matching law precisely specifies how alternative sources of a given reinforcer will affect the power of that reinforcer to sustain responding. If the reinforcers for two alternatives are the same or very similar (e.g., gasoline and gasohol), they are said to be substitutable: the more the price of gasoline increases, the more people will purchase gasohol. If they interact with each other (e.g., food and water), they are said to be complementary: the more dry food an animal eats, the more it will drink. In general, the matching law appears to apply to cases involving substitutable, but not complementary, reinforcers.

(iii) A paradigm shift in operant conditioning? One implication of (or impetus for) a continuing interest in Behavioral Economics, as well as in a somewhat related area, Foraging Behavior, is that its findings tend to support the view that choice behavior can best be studied over much longer periods of time between operant responses and their consequences than researchers previously thought it was possible for subjects to process. Thus such writers as Herrnstein and Baum (2001) argue for a molar, rather than a more molecular, paradigm. In such a view, animals are able to
adjust their responding to maximize the probability of reinforcement (or minimize the probability of being shocked; below) over an entire experimental session, rather than relying exclusively on moment-to-moment changes in R → S contingencies.

C. AVERSIVE CONTROL

The analyses of negative reinforcement and punishment are often treated together under the more general title of Aversive Control. We will deal with them in a similar manner herein.

1. Introduction

a. Prevalence. Skinner and others have often argued that in the "real world," most human (as well as animal) behavior is controlled by aversive rather than reinforcing events (fight, flight, and fright). Response cost (below) is the modus operandi of our legal system and of our schools, and many behaviorally oriented practitioners use so-called aversion therapy. Indeed, the use of aversive control is so widespread in the context of the regulation of behavior that commentators equate aversive control with any and all kinds of control. Nevertheless, the effects of only a limited number of aversive events have been analyzed systematically in the laboratory, and generalizations from such studies to complex human social situations should be made cautiously.

b. Definitions. Aversive control is relatively easy to define as the study of the functional relationship between behavior and aversive stimuli (SA); that is, R = f(SA). Unfortunately, this leaves us with the problem of defining aversive stimuli, which turns out to be difficult, if not impossible.

(1) Ostensive definitions: SA's are unpleasant, noxious, or "painful" events.

(2) Operational definitions: SA's are events
(i) from which organisms will flee or escape,
(ii) which organisms avoid,
(iii) which suppress behavior.

In addition, the removal of a reinforcer is often considered to be aversive, and events associated with the presentation of SA's (or removal of SR's) are conditioned aversive stimuli.

But an organism may work to remove a stimulus, such as electric shock, in one situation and work to produce the
same stimulus in another situation (Kelleher et al., 1963). Similarly, a response-contingent stimulus that suppresses a response (i.e., is an effective punisher) may not always maintain escape or avoidance behavior. In other words, the aversive, like the reinforcing, efficacy of any event is always relative; that is, such properties depend upon the prevailing state of the environment (context, SSc), the history of the organism, etc. Indeed, the intrinsic properties of events often tell us little about their behavioral effects (Azrin, 1958). Consider, for example, a time out: time out from shock or from a boring TV commercial may be "reinforcing," but the same time out from a low FR schedule of ESB or from an exciting football game may be aversive.

2. Nosology of aversive control. The field of aversive control is still concerned primarily with consequence; in terms of the consequence matrix, we have considered the upper left and lower right quadrants; we are now concerned with the remaining four.

The Consequence Matrix

        R → S (presented)    R → S (removed)             R → 0 (no consequence)
pR↑     Positive Rein.       Negative Rein.              Recovery
                             (Escape, Avoidance)         (prior punishment)
pR↓     Positive Pun.        Negative Pun.               Extinction
                             (Response Cost, Omission)   (prior reinforcement)

To review, recall the following definitions (above):

a. Negative Reinforcement. If the consequence of a response is the elimination, removal, or postponement of a stimulus and pR is subsequently increased, the procedure is defined as negative reinforcement, and the stimulus that has been eliminated or removed is called aversive. More specifically, when the consequence is elimination of an ongoing aversive or conditioned aversive stimulus, the procedure is known as escape conditioning. When the consequence is postponement of the aversive or conditioned aversive stimulus, the procedure is known as avoidance conditioning (the conditioned avoidance response, CAR).

b. Punishment. Given that a stimulus S occurs, that there is a contingent relationship between the presentation of S and some response (R → S), and pR is decreased (pR↓), the
occurrence or removal of S is defined functionally as punishment.

Note: Punishment also requires three (3) things:
1. The occurrence of S following an R,
2. A contingent relationship between S and R, and
3. pR↓.

o Positive punishment. If the consequence of a response is the presentation of some stimulus and pR is subsequently decreased, the procedure is defined as positive punishment. The stimulus is called an aversive stimulus, a punishing stimulus, or simply a punisher.

o Negative punishment. If the consequence of a response is the elimination or removal of a stimulus and pR is subsequently decreased, the procedure is defined as negative punishment. The stimulus which is removed is called a reinforcer; its removal is aversive. There are two kinds of negative punishment:
   o Time out (elimination, reinforcer removal), and
   o Omission (reinforcer postponement).

c. Conditioned suppression: The Conditioned Emotional Response (CER). Another procedure involving aversive control has already been considered during our discussion of classical conditioning, because it is in fact a classical conditioning procedure. In the CER, a "neutral" stimulus (CS) is paired with an aversive event (US) and is, either simultaneously or successively, superimposed non-contingently on some baseline of operant behavior, usually a VI schedule of food or water reinforcement.

Escape conditioning

a. Positive and Negative Reinforcement. Intuitively, it would seem that whatever differences might be observed between positive and negative reinforcement result from the properties of the stimuli involved therein. Thus, pleasing events appear to have intrinsically different characteristics than painful events; for example, pleasing sounds are usually less intense and of lower frequency than non-pleasing or harsh ones. However, recall that by definition positive reinforcement involves the response-contingent presentation of a stimulus, while negative reinforcement involves the response-contingent removal or postponement of a stimulus,
regardless of the affective qualities of these events This means that the two procedures necessarily impose different temporal relationships among responses and the prevailing states of the environment at the time they occur Any stimulus in addition to its programmed use in a particular paradigm in this case as a reinforcer elicits a variety of unconditional discriminative conditional and other competing responses at least when it is initially presented For example the presence of food is normally a positive reinforcer for bar pressing but it is also among other things an unconditional stimulus for salivating and a discriminative stimulus for eating In positive reinforcement these competing responses have no effect on the response we are trying to condition eg bar pressing because they do not occur until after the reinforced response has been emitted and the food is presented below Positive Reinforcement Negative Reinforcement R S R S Learning 96 However in negative reinforcement these relationships are quite different in that the to be conditioned escape response UR and the competing response R must occur at about the same time when the aversive stimulus is present Such response competition is the principle reason why escape conditioning can be difficult to establish and maintain no one has yet written quotSchedules of Negative Reinforcementquot It should be pointed out that competing responses may also be facilitative for example if the unconditioned response to an electric shock in a runway is running shock might increase the probability that the escape response running toward a goal box will occur It should also be mentioned that escape behavior can be conditioned in an operant conditioning chamber if adequate experimental control is established a dif cult empirical problem Indeed schedules involving termination of shock unconditioned escape and especially termination of stimuli associated with shock conditioned escape have many of the properties of similar 
schedules involving other reinforcers o For example in one interesting study Azrin et al 1962 a drug d amphetamine was found to have the same effects on bar pressing maintained under an FR schedule of food reinforcement or of escape from a stimulus during which brief shocks sometimes occurred conditioned escape Interestingly neither motivation nor incentive how hungry the animals were in uenced the outcome b Aversive stimulus variables Three factors play particularly important roles in determining both the amount and rate of escape conditioning 1 Intensity Other things being equal the greater the intensity of the aversive stimulus the faster will be the conditioning of the escape response and the higher the asymptotic level of responding 2 Amount The greater the decrease in the severity of S A following the occurrence of the escape response the faster the Learning 97 acquisition and the higher the nal level of escape responding 3 Delay The longer the delay in the termination of SA the slower the acquisition of the escape response and the lower will be the asymptotic response level Avoidance Conditioning The Conditioned Avoidance Response a Procedures Many of the problems encountered in the analysis of avoidance conditioning as well as the theories designed to explain its acquisition maintenance and extinction or lack thereof have resulted from the use of a particular experimental procedure discrete trial avoidance conditioning derived from Bechterevian classical defense conditioning This is important not only in psychology but in any science because it is a good example of how our ideas concepts hypotheses theories and paradigms are often determined if not driven by the procedures we use to study them and vice versa 1 Discrete trial discriminated avoidance This procedure always involves not only discrete trials as the name implies and sometimes intertrial intervals but more importantly a discriminative stimulus SD which is often erroneously called a conditioned or 
conditioned aversive stimulus (CAS). That is, S → S pairings appear to occur, as in classical conditioning. However, avoidance conditioning is fundamentally different from classical conditioning in that the occurrence of the Si (US) depends critically on the occurrence, or more accurately the non-occurrence, of a response (the so-called avoidance response); that is, as we have stated elsewhere, there is an R → S relationship in operant but not in classical conditioning. Indeed, in discrete-trial avoidance procedures there are in fact three kinds of responses, and hence three possible R → S relationships: (1) avoidance responses, (2) escape responses, and sometimes (3) inter-trial responses (R → 0).

[Timing diagrams: Escape Response (the R occurs during the SA and terminates it); Avoidance Response (the R occurs during the SD and the SA is omitted); Intertrial Response (the R occurs before the SD, with no programmed consequence); horizontal axis: Time.]

Note that avoidance responses (middle panel above) have at least two consequences in this situation: (1) they end the SD (CS), and (2) they prevent or delay (i.e., reduce the frequency of) the SA (US). Discrete-trial avoidance has a long history, which dates back to 1907 and has involved hundreds of different SDs, responses, and subjects; SA has usually been shock.

i. Parameters of discrete-trial avoidance. SD variables: The frequency, intensity, and schedule of SD presentation usually are not important determinants of the rate of discrete-trial avoidance acquisition or extinction. SA variables: As is often the case in psychology, the functional relationship between amount of avoidance responding and intensity of the SA takes the form of an inverted U; this is the so-called Yerkes-Dodson effect.

[Figure: The Yerkes-Dodson Effect; per cent responding as an inverted-U function of SA intensity (volts).]

2. Free-operant ("Sidman") avoidance. This procedure, the first major alternative to discrete-trial avoidance, was first described by Sidman in 1953, and for that reason such avoidance is usually called free-operant or Sidman; sometimes, however, it is referred to as continuous or, less often, nondiscriminated avoidance. It differs from discrete-trial
avoidance (above) in that:
o All responses have the same consequence (delay of shock).
o There are no "trials."
o While a discriminative or "warning" stimulus (SD) may or may not be used, this stimulus does not define the procedure.
o Escape responses from the SD or CS cannot occur, because SA is always short (e.g., less than 0.5 sec).
o Response rate, rather than latency, is usually the principal dependent variable.

i. S-S and R-S intervals. An S (usually shock) occurs every t seconds (the S-S interval). However, the occurrence of a response postpones the next scheduled S by t seconds (the R-S interval). Thus, in the absence of an R, the S-S interval is in effect; each R removes the S-S interval and establishes an R-S interval. If a sufficient number of Rs occur, S never occurs.

ii. Properties and parameters.
a. Acquisition is difficult and variable in the rat; rapid in monkeys and "higher" species.
b. The best acquisition occurs when the S-S interval is relatively short (2-5 sec) and the R-S interval is long (20 sec). This is probably because the animal learns to respond during a high frequency of shock in order to produce a lower frequency of shock (below). Most animals learn at least this stage of "escape" behavior; they may or may not subsequently learn to respond in the absence of shock. Well-conditioned animals also learn a "temporal" discrimination, that is, to withhold responding until the latter portion of the R-S interval, when the shock is about to occur (Anger, 1963).

3. Shock-frequency reduction. Recall that in discrete-trial avoidance a reinforced response (that is, a response during a "trial") has two consequences: (1) it terminates an SD, and (2) it prevents or delays SA. In free-operant avoidance, responses may have the same two consequences if we consider time after shock to be a conditioned aversive "stimulus" (Anger, 1963). To separate the contribution of shock prevention (that is, true avoidance) from that of termination of a conditioned aversive event (that is, escape), a
procedure presumably involving no intero- or exteroceptive conditionable stimuli was designed (Herrnstein & Hineline, 1966).

[Diagrams: the Sidman avoidance procedure (S-S and R-S intervals) contrasted with the shock-frequency-reduction procedure, in which shocks occur at a higher random rate on Schedule A and a lower random rate on Schedule B.]

In this procedure, which most animals (including rats) learn surprisingly rapidly, the only programmed consequence of a response is a reduction in the probability (or frequency) of shock (SA) presentation. In the absence of a response, shock is presented randomly on Schedule A (above); if a response occurs, shock is delivered for some period of time on Schedule B. Response probability increases when A > B and decreases when B > A.

b. Extinction of Avoidance Responding. An interesting question concerns what happens when an aversive stimulus is not presented after an animal has learned (or over-learned) an avoidance response, that is, is avoiding something effectively. Answer: Nothing; that is, the animal continues to respond for hours, days, weeks, or even months after the need to avoid no longer exists. If we did not know the animal's history of reinforcement, we might consider the occurrence of such apparently unreinforced behavior to be bizarre, neurotic, or even traumatic (Solomon & Wynne, 1953). However, under these conditions, how does the animal "know" that contingencies have been changed from response-contingent non-presentation of SA to extinction? If effective avoidance behavior has been acquired, SA rarely if ever occurs, by definition. Removal of the aversive stimulus may be an inappropriate extinction procedure under certain circumstances; a better procedure might be to present the SA but remove the response → stimulus (delay) relationship. Note that this problem may also occur in situations involving positive reinforcement. That is, "extinction" is often confounded, because when it is introduced two things actually happen: (1) a stimulus (e.g., food) is removed, so that, among other things, the organism can no longer eat in the experimental chamber (SSc), and (2) the
response → reinforcement contingency is removed. Perhaps certain "extinction" procedures cause increases in responding (often accompanied by "aggressive" behavior towards objects or other organisms in the environment) because they alter stimulus conditions (SSc) rather than change the R → S contingency.

c. Theories of avoidance conditioning. Historically, the fundamental problem of all theories of "avoidance" conditioning (and of extinction following avoidance conditioning) has been how the non-occurrence of an event (SA) can be reinforcing (Rescorla & Solomon, 1967). For this reason, most theories of avoidance are theories of escape from a conditioned aversive stimulus. Recall that under a discrete-trial procedure, avoidance responses have two consequences: (1) Escape: the response removes a stimulus paired with an aversive event; and (2) Avoidance: it reduces the frequency of occurrence of an aversive event. To most theorists, only the first of these consequences (escape) has been of interest; indeed, the second has not even been considered.

Consider so-called two-process theories. According to such theories, so-called "avoidance" learning occurs in two stages or processes:

1. Stage 1 involves classical conditioning. A "CS" (e.g., tone) is paired with a "US" (e.g., shock). This procedure results in the conditioning of a diffuse autonomic pattern of responses, collectively called "fear," to the CS.

[Diagram: CS (tone) → orienting response; US (shock) → UR ("fear"); after pairing, CS (tone) → CR ("fear"), in context SSc.]

2. The second stage involves operant conditioning. This stage occurs after Stage 1 has been completed. A specific skeletal response (e.g., bar pressing) is acquired during the CS (SD) because it is negatively reinforced by escape from the stimuli associated with the responses elicited by the "CS" ("fear"), as well as by the termination of the CS (tone).

[Diagram: SD (tone) → R (bar press) → tone removal.]

Note that no mention need be made of avoidance of anything. Therefore, in two-process theory, all responses other than inter-trial
responses are escape responses either from the unconditioned US the shock or the conditioned CS the tone In support of this theory the important parameter of Stage 1 is the quotCS gt USquot interval while that of Stage 2 is the R gt CS termination interval ie the immediacy of quotreinforcementquot However recall that a quotCSquot is neither necessary nor suf cient for avoidance to occur Herrnstein amp Hineline 1966 Therefore all theories in which escape from such a stimulus plays a critical role are untenable Responding may well occur in the absence of any signal expectation or quotfearquot simply because such responding reduces the overall frequency of aversive stimulation Herrnstein 1969 Thus when all is said and done avoidance behavior may be just that avoidance and the non occurrence of an event indeed may be reinforcing in a particular context Such speculations have in uenced the paradigm shif from a molecular to a more molar position mentioned at the end of our discussion of choice behavior 5 Punishment a Introduction 1 De nitions revisited Recall that there are two kinds of punishment Positive Punishment R gt S pR L Learning Negative punishment Response Cost R gt S 39 pR L Basic Properties of Punishment R gt S 1 Complexity All situations involving punishment are necessarily complex because the response we chose to punish must exist at some level or strength if we are interested in suppressing it reducing its probability or frequency Therefore if it is an operant the response must have some maintaining presumably desirable consequences to the organism or once have had such consequences Thus in all procedures involving punishment response strength is always a function of both reinforcing events or variables presumably responsible for its maintenance and punishing events presumably responsible for its suppression R f SR SA under c 2 Aversive Stimulus Variables To study SA variables we must hold the effects of SR variables constant and vice versa When this is 
done, amount of suppression of the rate of a single food- or water-reinforced operant response is a direct function (or, stated differently, rate of responding is an inverse function) of such variables as:

i. The intensity of the aversive (punishing) stimulus.

[Figure: per cent responding as a decreasing function of shock intensity (milliamperes).]

ii. The duration of SA. Other things being equal, longer durations of punishment produce more behavioral suppression than shorter durations.

iii. Frequency of presentation of SA. The more often we punish, the more effective punishment of a given intensity, etc., is.

iv. Delay of presentation of SA. Immediate (i.e., response-contingent) punishment is more effective than delayed punishment.

v. Abruptness of SA. An SA of a given intensity will cause more suppression if it is introduced immediately at its maximal value than if it is introduced gradually, at weaker intensities.

vi. Predictability of SA. Predictable (e.g., FI) punishment of a given frequency is generally less effective than non-predictable (e.g., VI) punishment.

vii. The correlation between the SA and the SR. The more the punishing stimulus is associated (or correlated) with the reinforcing stimulus, the less the punisher will suppress the punished response. Thus, if response-contingent shock is given in a goal box, or at the end of a fixed ratio rewarded by food, it can be expected to be less effective than if it were given closer to the start box, or earlier in the ratio.

3. Reinforcer Variables. Given comparable SA parameters (intensity, duration, frequency, etc.), the amount of suppression is determined by the reinforcing and potentiating variables that maintain the response. These include:

i. Schedule of reinforcement. Under an FR schedule, concurrent punishment further exacerbates the "strain" (increases the pause duration) seen under relatively high ratios, but has no effect on "local" response rates.

[Figure: Effects of Punishment on FR Responding; cumulative records, with and without punishment, over time (minutes).]

However, when
responding is maintained under a VI schedule, punishment generally lowers rate uniformly.

[Figure: Effects of Punishment on VI Responding; cumulative records, with and without punishment, over time (minutes).]

ii. Under any particular schedule, the more often reinforcement occurs, the less are the effects of punishment.

iii. Potentiation. The extent to which punishment suppresses food-reinforced behavior depends upon amount of food deprivation; that is, "hungry" animals will be less susceptible to punishment than satiated animals.

iv. Alternative Responses. The whole picture changes dramatically when an alternative, equally reinforced response is made available. In such a situation, very mild "punishment" can suppress behavior completely, or at least to a considerable extent.

[Figure: Effects of an Alternative Response; cumulative records of a punished response with and without an unpunished alternative available.]

A thought question: why is the availability of an alternative response such a powerful determinant of the effects of punishment on a reinforced operant? (See Appel, 1969; Azrin & Holz, 1966.)

v. Recovery. One of the major problems with the use of punishment is the occurrence of recovery; this phenomenon appears to have at least three aspects:

a. With mild (and sometimes moderately intense) punishment, the rate or frequency of punished responding gradually returns to normal (pre-punishment) levels in the presence of continued exposure to the punishing stimulus. This may occur even when the initial effect of the SA is to reduce response rate to near zero.

b. When the punishing stimulus is subsequently removed, responding may immediately return to, or even "overshoot," pre-punishment levels. Such overshooting is sometimes called punishment contrast.

c. With severe or suddenly introduced punishment, suppression may be long-lasting or even "permanent." Moreover, if punishment is removed, the animal may never learn that this is the case, since it is not responding and "sampling" its environment. This is very similar to, yet in
a sense the opposite of traumatic avoidance learning above Response Cost Ferster 195711 has shown that time out TO the response contingent removal of reinforcement or stimuli associated with reinforcement can function as an effective punisher under certain conditions in species ranging from pigeons to humans The effectiveness of this punisher depends on many variables including its duration the schedule of reinforcement upon which it is superimposed and the number of alternative responses that may lead to reinforcement above d Theories of Punishment Three kinds of theories will be considered brie y 1 quotHedonisticquot theories Thorndike s original law of effect 1911 stated that satis ers reinforcers quotstamp inquot strengthen learned associations connections while annoyers punishers quotstamp outquot weaken such associations Learning llO Thorndike later abandoned this theory mainly because he found that verbal punishers such as the word wrong to be ineffective in modifying the test taking behavior of young children in the classroom However more recent investigations found that the original law is essentially correct that is reinforcement and punishment are similar processes response consequence contingencies that alter behavior in opposite though not necessarily equal directions 2 Conditioned quotfearquot motivation theories Estes Skinner and others have argued that the response suppression that occurs during punishment procedures has nothing to do with punishment per 56 that is with the R gt SA contingency Instead they maintain that suppression is caused by conditioning of stimuli CS s which are produced during the emission of the punished response R and the delivery of the SA US Thus from this point of view what happens is that fear or anxiety is classically conditioned CER The fear response eg freezing is usually but not necessarily incompatible with appetitive punished behavior eg bar pressing and thus decreases the frequency of its occurrence Note This 
theory is in many people s view untestable How can we distinguish between a response and a response produced stimulus 3 The Avoidance Theory of Punishment Dinsmoor 1954 contends that what occurs in punishment paradigms is that all responses other than the one being punished are eventually negatively reinforced by the non occurrence or avoidance of the SA Thus quotpunishmentquot doesn t weaken suppress a punished response but indirectly strengthens other non punished responses This theory implies that punishment should work very slowly because many other responses have to occur before they can be negatively reinforced as we have seen this is not the case punishment sometimes suppresses behavior very rapidly indeed The theory is however compatible with the results of experiments in which clear unpunished Learning 111 alternative routes to reinforcement are made available above e Effectiveness For some reason people often like to ask the question does punishment quotworkquot Besides being the kind of question that can never be answered does reinforcement or stimulus control quotworkquot one must at least inquire as to what is being asked Work for whom The organism being punished The punisher Society The question may be stated better as follows Does punishment suppress reduce the frequency of the behavior upon which its administration is contingent The answer is as is often the case in psychology it depends Fortunately it is possible to list the conditions that maximize the effectiveness of punishers Some of these are that the punisher should be intense immediate long frequent unpredictable and be introduced abruptly Potentiation of the reinforcement of the punished response should be minimized and perhaps most importantly alternatives to the punished response should be provided Another approach is to compare punishment with other procedures that have been used to reduce or suppress behavior According to Holz and Azrin 1963 punishment is more effective ie 
quotbetterquot than satiation extinction stimulus change and physical restraint in terms of its immediacy endurance completeness and ease of being reversed f Desirability In spite of its effectiveness there are reasons to question the desirability of using punishment in most quotreal worldquot situations For example 1 It is often impractical if not impossible to meet the conditions that maximize the effectiveness of the punisher above 2 Mild or moderate punishment may lead to recovery 3 Intense quotpunishmentquot can sometimes lead to i quotPermanentquot suppression Azrin amp Holz 1966 to the contrary not withstanding Learning ii Emotional or physiological changes quotstressquot iii Aggression 4 Punishers can also cause escape responses such as eeing from the punishing situation If this happens no behavior desirable or undesirable can be studied or modi ed in that situation since the situation no longer exists STIMULUS CONTROL 1 Introduction Recall that because operant behavior is usually appropriate under specific sets of conditions as well as in a particular context we have defined the following a Discriminatiue stimuli are environmental events in the presence of which behavior is differentially reinforced that is stimuli that de ne reinforcement contingencies b An SD is a stimulus in the presence of which a response is reinforced SD R Si l c An SA is a stimulus in the presence of which a response is not reinforced SA R 0 d A discrimination occurs when responding becomes differentially related to the presentation comes under the control of SD and SA that is when pR is high in the presence of SD and low or zero in the presence of SA e Generalization occurs to the extent that discrimination does not occur Perhaps the easiest if not the best way to begin a discussion of stimulus control is to return to classical conditioning 2 Stimulus control and classical conditioning In classical conditioning Pavlov noted that both generalization and discrimination are extremely 
common for example both processes occur when a new CR is acquired Learning 1 1 3 a Generalization Thus when a dog is conditioned to respond to a tone of a particular frequency it will also respond to tones of different frequencies and intensities Similarly a baby often learns to say quotda daquot in the presence of men other than its male parent often to the embarrassment of its mother 1 The excitatory generalization gradient If we plot some measure of response quotstrengthquot say amount of salivation CR as a function of a physical dimension of a training stimulus CS say a tone of 500 Hz we obtain what is called a generalization gradient Hypothetical Generalization Gradients Per cent CR Frequency Hz Maximal responding salivation usually but not always occurs at the frequency of the training stimulus in this case 500 Hz As we will see the steepness slope of the gradient is an indication of the degree of stimulus control which is a function of many variables the most important of which is probably training conditions Note Such gradients are usually obtained under extinction conditions Why Learning b Discrimination Differential conditioning We have already considered differential conditioning when we introduced the concept of inhibition Recall the following Differential Inhibition Discrimination C 8 Tone cs No tone Us Food After several trials the CS will elicit salivation and the CS will not When this happens discrimination differential conditioning or inhibition is said to have occurred Stimulus control and operant conditioning 39 History SiD sD SA R gt s 0 SSc Time With respect to the Operant Paradigm we are now concerned with SD SA and SSc time will not permit us to consider SiD a quotAttentionquot Contextual constant or background stimuli SSc are always present when behavior occurs but are not differentially related to the R gt Si contingency This means among other things that the subject may or may not quotattendquot to them Learning 115 Unfortunately the 
distinction between SSc and SD or stated in a slightly different way between what is programmed to be contextual and what is programmed to be discriminated is not always clear Indeed it may be determined at least in part by the organism rather than by the experimental design Consider for example an experiment by Reynolds 1961 appropriately called quotAttention in the pigeonquot Two pigeons were reinforced under a VI 3 schedule for pecking at a key on which a compound stimulus consisting of a white triangle on a red background was projected They were then given test trials on which each element white triangle or red background was projected separately one bird pecked signi cantly more at the red key than at the white triangle and the other bird did exactly the opposite Why did this happen How long will such differences in responding last Note In this situation it can be said either that 1 Differential responding to the red key or white triangle occurred 2 That the two birds attended to or discriminated different stimuli or 3 That the bird s behavior came under the control of different stimuli b Operant generalization As mentioned previously stimulus generalization is the opposite of stimulus discrimination that is generalization is said to exist whenever the subject fails to respond differentially to different stimuli or stimulus characteristics dimensions etc 1 Generalization Consider the following situation Guttrnan and Kalish 1956 pigeons are trained to peck at a yellowish light 580 nm projected on a response key They are then exposed to lights of different wavelengths 530 560 nm or colors during extinction test periods Guttman and Kalish 1956 350 Learning Maximal responding occurs at the training wavelength SD 580 nm as the wavelength color becomes more and more different from less and less similar to the SD the amount of responding during testing decreases 2 Assessment of stimulus control The slope or steepness of generalization gradients can be used to 
determine the extent to which a particular aspect of the environment or stimulus quotdimensionquot controls behavior below Relatively quot atquot gradients are generated by color blind birds or birds reared in monochromatic light Peterson 1962 3 Mechanisms Pavlov 1927 believed that the training stimulus activated a spot on the cortex from which excitation quotirradiatedquot outwards in a circular pattern much like that of a pebble CS being thrown into a calm lake Lashley and Wade 1946 were among the rst to argue convincingly that generalization re ects failure to discriminate That is animals have to learn to respond to stimuli as being quotsimilarquot or quotdifferentquot Such responses depend among other things on amount of differential training that the animal experiences ie amount of exposure to SD and SA Learning 117 Discrimination 1 Sensory capacity While the range of our abilities to see hear touch taste or smell are major determinants of the extent to which we can detect or discriminate objects in our environment these capacities are not fixed Rather they are known to depend on developmental and experiential as well as biological variables capacity 2 Training Perhaps the most important experiential variable is training or more accurately training history i Meaning SD and SA make stimuli meaningful important etc by linking them to particular consequences ii Successive differential training procedures The simplest successive discrimination training procedure is a special case of a multiple schedule of reinforcement eg Mult FR 1 SD Extinction SA For example Jenkins and Harrison 1960 1962 trained three groups Differential training SD VI 1 1000 Hz tone SA Ext 950 Hz tone No tone SD 1000 Hz tone VI 1 SA Ext No tone No differential training tone on continuously Animals that received the differential training exhibited the steepest gradients that is Learning Jenkins and Harrison 1960 950 Hz INo Tone No Di Trn Per cent total 200 400 600 800 1000 1200 1400 1600 1800 
iii What is learned during discrimination training? There are three possibilities:

a Animals can respond to SD in order to obtain reinforcement and ignore the SΔ.
b Animals can ignore the SD and inhibit responding to SΔ.
c Animals can learn both to respond to SD and to inhibit responding to SΔ.

Spence (1936) and others have demonstrated that the third possibility is correct. That is, gradients of excitation occur around the SD and gradients of inhibition occur around SΔ. This is nicely demonstrated in an experiment by Honig et al. (1963). These investigators trained one group of pigeons to discriminate a 90° line (SD) from no line (SΔ) and another group to discriminate no line (SD) from a 90° line (SΔ).

[Figure: Honig et al. (1963), excitatory and inhibitory gradients: mean total responses as a function of degree of tilt (30, 60, 90, 120)]

3 "Errorless" learning

It should be noted that inhibition does not always occur in SΔ. One instance of this is "errorless" learning (Terrace 1964), that is, procedures in which incorrect responses do not occur because SΔ is introduced for a brief duration at low intensity and is incremented gradually to its final value.

Complexities

i The peak shift

Jenkins and Harrison's Group 2 received what is known as intradimensional training, i.e., training within a single stimulus quality or dimension (frequency). A phenomenon that often occurs in such situations is called the peak shift. Hanson (1959) trained three groups. For all animals the SD was a 550 nm light. In Group 1, SΔ was a 590 nm light; in Group 2, SΔ was a 555 nm light; Group 3 had no differential training.

The results were as follows: maximal responding (the "peak" of the obtained gradient) shifted away from the SD in a direction opposite to that of SΔ. The amount of this shift was a function of the similarity of SD and SΔ: the greater the similarity, the greater the shift.

[Figure: Hanson (1959), responses as a function of wavelength (nm), showing the peak shift]

ii Transposition and relational learning

On the basis of results of experiments in which chickens were
trained to peck at the brighter of two stimuli, Köhler (1939) proposed that animals learn relationships ("brighter than") rather than the individual properties of stimuli (absolute brightness).

[Figure: Köhler's transpositional learning]

4 Theories of discrimination learning

i Excitation and inhibition

Spence (1936) proposed a model that can, at least in part, "explain" phenomena such as the peak shift and transposition. Gradients of excitation and inhibition were said to occur around SD and SΔ, respectively. The obtained response strength was postulated to be the difference in excitation generated by the two gradients.

[Figure: Spence's explanation of the peak shift: excitatory and inhibitory gradients as a function of wavelength (nm)]

Staddon (1983) put forth a similar idea. This theory proposes that the instrumental (excitatory) response is conditioned to the SD and that other, interim responses are conditioned to the SΔ. The extent to which a test stimulus is followed by the reinforcer-relevant response depends on how much competition this response receives from interim responses elicited by the test stimulus.

REFERENCES

Note: The following list contains (1) specific references that have been cited in these notes and (2) other suggested readings, usually reviews and textbooks, that may or may not have been mentioned in class.

Abrams, T. W., and Kandel, E. R. (1988). Is contiguity detection in classical conditioning a system or a cellular property? Learning in Aplysia suggests a possible molecular site. Trends in Neurosciences, 11, 128-135.

Ader, R., and Cohen, N. (1985). CNS-immune system interactions: Conditioning phenomena. Behavioral and Brain Sciences, 8, 378-395.

Alkon, D. L. (1989). Memory storage and neural systems. Scientific American, 261, 42-50.

Anger, D. (1963). The role of temporal discrimination in the reinforcement of Sidman avoidance behavior. Journal of the Experimental Analysis of Behavior, 6, 477-506.

Appel, J. B. (1964). Analysis of aversively motivated behavior. Archives of General Psychiatry, 10, 71-83.

Appel, J. B. (1969). On punishment. Midway, 9, 3-13.
Axelrod, S., and Apsche, J. (1983). The effects of punishment on human behavior. New York: Academic Press.

Ayllon, T., and Azrin, N. H. (1968). The token economy: A motivational system for therapy and rehabilitation. New York: Appleton-Century-Crofts.

Azrin, N. H. (1956). Some effects of two intermittent schedules of immediate and nonimmediate punishment. Journal of Psychology, 42, 3-21.

Azrin, N. H. (1958). Some effects of noise on human behavior. Journal of the Experimental Analysis of Behavior, 1, 183-200.

Azrin, N. H., and Holz, W. C. (1966). Punishment. In W. K. Honig (Ed.), Operant behavior: Areas of research and application (pp. 380-447). New York: Appleton-Century-Crofts.

Azrin, N. H., Holz, W. C., and Hake, D. (1962). Intermittent reinforcement by removal of a conditioned aversive stimulus. Science, 136, 781-782.

Baum, W. M. (2001). Molar versus molecular as a paradigm clash. Journal of the Experimental Analysis of Behavior, 75, 338-341.

Baum, W. M. (2002). From molecular to molar: A paradigm shift in behavior analysis. Journal of the Experimental Analysis of Behavior, 78, 95-116.

Bechterev, V. M. (1928). General principles of human reflexology. New York: International.

Brembs, B., Lorenzetti, F. D., Reyes, F. D., Baxter, D. A., and Byrne, J. H. (2002). Operant reward learning in Aplysia: Neuronal correlates and mechanisms. Science, 296, 1706-1709.

Brogden, W. J. (1939). Sensory pre-conditioning. Journal of Experimental Psychology, 25, 323-332.

Brown, P. L., and Jenkins, H. M. (1968). Auto-shaping of the pigeon's key peck. Journal of the Experimental Analysis of Behavior, 11, 1-8.

Cameron, O. G., and Appel, J. B. (1972). Conditioned suppression of bar pressing by stimuli associated with drugs. Journal of the Experimental Analysis of Behavior, 17, 127-137.

Campbell, B. A., and Church, R. M. (Eds.) (1969). Punishment and aversive behavior. New York: Appleton-Century-Crofts.

Carlsson, A. (2001). A paradigm shift in brain research. Science, 294, 1021-1024.

Catania, A. C. (Ed.) (1968). Contemporary research in operant behavior. Glenview, IL: Scott, Foresman.

Churchland, P. M. (1988). The significance of neuroscience for philosophy. Trends in Neurosciences, 1
1, 304-307.

Colwill, R. M., and Rescorla, R. A. (1988). Associations between the discriminative stimulus and the reinforcer in instrumental conditioning. Journal of Experimental Psychology: Animal Behavior Processes, 14, 155-164.

Davis, M. (1974). Sensitization of the rat startle response by noise. Journal of Comparative and Physiological Psychology, 87, 571-581.

Delgado, J. M. R., Roberts, W. W., and Miller, N. E. (1954). Learning motivated by electrical stimulation of the brain. American Journal of Physiology, 179, 587-593.

DiCara, L. V. (1970). Learning in the autonomic nervous system. Scientific American, 222, 30-39.

Dickinson, A. (1980). Contemporary animal learning theory. Cambridge: Cambridge University Press.

Dinsmoor, J. A. (1954). Punishment: I. The avoidance hypothesis. Psychological Review, 61, 34-46.

Dinsmoor, J. A. (1968). Escape from shock as a conditioning technique. In M. R. Jones (Ed.), Miami Symposium on the Prediction of Behavior, 1967: Aversive Stimulation (pp. 33-75). Coral Gables, FL: University of Miami Press.

Dinsmoor, J. A. (2001). Stimuli inevitably generated by behavior that avoids electric shock are inherently reinforcing. Journal of the Experimental Analysis of Behavior, 75, 311-333.

Domjan, M. (1987). Animal learning comes of age. American Psychologist, 42, 556-564.

Domjan, M. (2003). The principles of learning and behavior (5th ed.). Belmont, CA: Wadsworth/Thomson.

Donahoe, J. W., and Palmer, D. (1994). Learning and complex behavior. Boston: Allyn and Bacon.

Doty, R. W., and Giurgea, C. (1961). Conditioned reflexes established by coupling electrical excitation of two cortical areas. In A. Fessard, R. W. Gerard, and J. Konorski (Eds.), Brain mechanisms and learning (pp. 133-151). London: Blackwell Scientific Publications Ltd.

Dworkin, B. R. (1993). Learning and physiological regulation. Chicago: University of Chicago Press.

Eikelboom, R., and Stewart, J. (1982). Conditioning of drug-induced physiological responses. Psychological Review, 89, 507-528.

Ellis, H. P., and Hunt, R. R. (1989). Fundamentals of human memory and cognition (4th ed.). Dubuque, IA: W. C. Brown.

Estes, W. K. (1944). An experimental study of
punishment. Psychological Monographs, 57 (Whole No. 263).

Estes, W. K. (1969). Outline of a theory of punishment. In B. A. Campbell and R. Church (Eds.), Punishment and aversive behavior. New York: Appleton-Century-Crofts.

Estes, W. K., and Skinner, B. F. (1941). Some quantitative properties of anxiety. Journal of Experimental Psychology, 29, 390-400.

Ferster, C. B., and Skinner, B. F. (1957). Schedules of reinforcement. New York: Appleton-Century-Crofts.

Flaherty, C. F. (1985). Animal learning and cognition. New York: Alfred A. Knopf.

Garcia, J., Kimmeldorf, D. J., and Koelling, R. A. (1955). Conditioned aversion to saccharin resulting from exposure to gamma radiation. Science, 122, 157-158.

Garcia, J., and Koelling, R. A. (1966). Relation of cue to consequence in avoidance learning. Psychonomic Science, 4, 123-124.

Gormezano, I. (1969). Classical conditioning. In J. B. Sidowski (Ed.), Experimental methods and instrumentation in psychology (pp. 385-420). New York: McGraw-Hill.

Greengard, P. (2001). The neurobiology of slow synaptic transmission. Science, 294, 1024-1030.

Guthrie, E. R. (1935). The psychology of learning. New York: Harper.

Guttman, N., and Kalish, H. I. (1956). Discriminability and stimulus generalization. Journal of Experimental Psychology, 51, 79-88.

Groves, P. M., and Thompson, R. F. (1970). Habituation: A dual-process theory. Psychological Review, 77, 419-450.

Hake, D. F., and Azrin, N. H. (1965). Conditioned punishment. Journal of the Experimental Analysis of Behavior, 6, 297.

Hanson, H. M. (1959). Effects of discrimination training on stimulus generalization. Journal of Experimental Psychology, 58, 321-333.

Hearst, E. (1991). Psychology and nothing. American Scientist, 79, 432-443.

Herrnstein, R. J. (1969). Method and theory in the study of avoidance. Psychological Review, 76, 49-69.

Herrnstein, R. J. (1990). Rational choice theory: Necessary but not sufficient. American Psychologist, 45, 356-367.

Herrnstein, R. J., and Hineline, P. N. (1966). Negative reinforcement as shock-frequency reduction. Journal of the Experimental Analysis of Behavior, 9, 421-430.

Hollis, K. (1982). Pavlovian conditioning of signal-centered action patterns and autonomic
behavior: A biological analysis of function. In J. S. Rosenblatt, R. A. Hinde, C. Beer, and M. Busnel (Eds.), Advances in the Study of Behavior (Vol. 12). New York: Academic Press.

Holz, W. C., and Azrin, N. H. (1961). Discriminative properties of punishment. Journal of the Experimental Analysis of Behavior, 4, 225-232.

Honig, W. K., Boneau, C. A., Burstein, K. R., and Pennypacker, H. S. (1963). Positive and negative generalization gradients obtained under equivalent training conditions. Journal of Comparative and Physiological Psychology, 56, 111-116.

Honig, W. K., and Urcuioli, P. J. (1981). The legacy of Guttman and Kalish (1956): Twenty-five years of research on stimulus generalization. Journal of the Experimental Analysis of Behavior, 36, 405-445.

Hull, C. L. (1943). Principles of behavior. New York: Appleton-Century-Crofts.

Humphreys, L. G. (1939). The effect of random alternation of reinforcement on the acquisition and extinction of conditioned eyelid reactions. Journal of Experimental Psychology, 25, 141-158.

Jenkins, H. M., and Harrison, R. H. (1960). Effects of discrimination training on auditory generalization. Journal of Experimental Psychology, 59, 246-253.

Jenkins, H. M., and Harrison, R. H. (1962). Generalization gradients of inhibition following auditory discrimination learning. Journal of the Experimental Analysis of Behavior, 5, 435-441.

Jones, M. C. (1924). The elimination of children's fears. Journal of Experimental Psychology, 7, 382-390.

Kamin, L. (1968). "Attention-like" processes in classical conditioning. In M. R. Jones (Ed.), Miami Symposium on the Prediction of Behavior, 1967: Aversive Stimulation (pp. 9-31). Coral Gables, FL: University of Miami Press.

Kandel, E. R. (1970). Nerve cells and behavior. Scientific American, 223, 57-70.

Kandel, E. R. (2001). The molecular biology of memory storage: A dialog between genes and synapses. Science, 294, 1030-1038.

Karni, A., Tanne, D., Rubenstein, B. S., Askenasy, J. J. M., and Sagi, D. (1994). Dependence on REM sleep of overnight improvement of a perceptual skill. Science, 265, 679-682.

Kelleher, R. T., and Gollub, L. R. (1962). A review of positive
conditioned reinforcement. Journal of the Experimental Analysis of Behavior, 5, 543-597.

Kelleher, R. T., Riddle, W. C., and Cook, L. (1963). Persistent behavior maintained by unavoidable shocks. Journal of the Experimental Analysis of Behavior, 6, 507-517.

Keller, F. S. (1941). Light aversion in the white rat. Psychological Record, 4, 235-250.

Klein, S. B. (1991). Learning: Principles and applications (2nd ed.). New York: McGraw-Hill.

Köhler, W. (1939). Simple structural functions in the chimpanzee and in the chicken. In W. D. Ellis (Ed.), A source book of Gestalt psychology. New York: Harcourt Brace Jovanovich.

Konorski, J. (1948). Conditioned reflexes and neuron organization. Cambridge: Cambridge University Press.

Kuhn, T. S. (1970). The structure of scientific revolutions (2nd ed.). Chicago: The University of Chicago Press.

Lashley, K. S., and Wade, M. (1946). The Pavlovian theory of generalization. Psychological Review, 53, 72-87.

Leaton, R. L. (1981). Habituation of startle response, lick suppression, and exploratory behavior in rats with hippocampal lesions. Journal of Comparative and Physiological Psychology, 95, 813-826.

Leaton, R. N., and Tighe, T. (Eds.) (1976). Habituation: Perspectives from child development, animal behavior, and neurophysiology. Hillsdale, NJ: Erlbaum.

Locurto, C. M., Terrace, H. S., and Gibbon, J. (Eds.) (1981). Autoshaping and conditioning theory. New York: Academic Press.

Mackintosh, N. J. (1974). The psychology of animal learning. London: Academic Press.

Mackintosh, N. J. (1975). A theory of attention: Variations in the associability of stimuli with reinforcement. Psychological Review, 82, 276-298.

Mackintosh, N. J. (1983). Conditioning and associative learning. Oxford: Oxford University Press.

Malmo, R. B. (1965). Classical and instrumental conditioning with septal stimulation as reinforcement. Journal of Comparative and Physiological Psychology, 60, 1-8.

Mazur, J. E. (1981). Optimization theory fails to predict performance of pigeons in a two-response situation. Science, 214, 823-825.

Mednick, S., and Freeman, J. L. (1960). Stimulus generalization. Psychological Bulletin, 57, 169-200.
Miller, N. E. (1948). Studies of fear as an acquirable drive. Journal of Experimental Psychology, 38, 89-101.

Miller, N. E., and Kessen, M. L. (1952). Reward effects of food via stomach fistula compared with those of food via mouth. Journal of Comparative and Physiological Psychology, 45, 555-564.

Miller, R. R., and Spear, N. E. (Eds.) (1985). Information processing in animals: Conditioned inhibition. Hillsdale, NJ: Erlbaum.

Mineka, S. (1985). The frightful complexity of the origins of fears. In F. R. Brush and J. B. Overmier (Eds.), Affect, conditioning, and cognition (pp. 55-73). Hillsdale, NJ: Erlbaum.

Moore, J. W. (1972). Stimulus control: Studies of auditory generalization in rabbits. In A. H. Black and W. F. Prokasy (Eds.), Classical conditioning II (pp. 206-230). New York: Appleton-Century-Crofts.

Morris, R. G. M., Kandel, E. R., and Squire, L. R. (1988). The neuroscience of learning and memory. Trends in Neurosciences, 11, 125-127.

Muenzinger, K. F. (1944). Motivation in learning: I. Electric shock for correct responses in the visual discrimination habit. Journal of Comparative Psychology, 17, 267-277.

Newman, F. L. (1967). Differential eyelid conditioning as a function of the probability of reinforcement. Journal of Experimental Psychology, 75, 412-417.

Nicholls, M. F., and Kimble, G. A. (1964). Effects of instructions upon eyelid conditioning. Journal of Experimental Psychology, 67, 400-402.

Notterman, J. M., Schoenfeld, W. N., and Bersh, P. J. (1952). Conditioned heart rate responses in human beings during experimental anxiety. Journal of Comparative and Physiological Psychology, 45, 1-8.

Olds, J., and Milner, P. (1954). Positive reinforcement produced by electrical stimulation of septal area and other regions of the rat brain. Journal of Comparative and Physiological Psychology, 47, 419-427.

Park, D. C. (1999). Acts of will. American Psychologist, 54, 461.

Patterson, M. M., Cegavske, C. R., and Thompson, R. F. (1973). Effects of a classical conditioning paradigm on hind limb flexion nerve response in immobilized spinal cats. Journal of Comparative and Physiological Psychology, 84, 88-97.

Pavlov, I. P. (1927). Conditioned reflexes. Translated by G.
V. Anrep. New York: Dover.

Peterson, N. J. (1960). Control of behavior by presentation of an imprinted stimulus. Science, 132, 1395-1396.

Peterson, N. J. (1962). Effect of monochromatic rearing on the control of responding by wavelength. Science, 136, 774-775.

Premack, D. (1961). Predicting instrumental performance from the independent rate of the contingent response. Journal of Experimental Psychology, 61, 163-171.

Rachlin, H., and Herrnstein, R. J. (1969). Hedonism revisited: On the negative law of effect. In B. A. Campbell and R. M. Church (Eds.), Punishment and aversive behavior (pp. 83-109). New York: Appleton-Century-Crofts.

Razran, G. (1957). The dominance-contiguity theory of the acquisition of classical conditioning. Psychological Bulletin, 54, 1-46.

Razran, G. (1961). The observable unconscious and the inferable conscious in current Soviet psychophysiology: Interoceptive conditioning and the orienting reflex. Psychological Review, 68, 81-147.

Reese, E. P. (1978). Human operant behavior (2nd ed.). Dubuque, IA: W. C. Brown.

Rescorla, R. A. (1967). Pavlovian conditioning and its proper control procedures. Psychological Review, 74, 71-80.

Rescorla, R. A. (1969). Pavlovian conditioned inhibition. Psychological Bulletin, 72, 77-94.

Rescorla, R. A. (1988). Pavlovian conditioning: It's not what you think it is. American Psychologist, 43, 151-160.

Rescorla, R. A., and Solomon, R. L. (1967). Two-process learning theory: Relationships between Pavlovian conditioning and instrumental learning. Psychological Review, 74, 151-182.

Rescorla, R. A., and Wagner, A. R. (1972). A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and non-reinforcement. In A. H. Black and W. F. Prokasy (Eds.), Classical conditioning II (pp. 64-99). New York: Appleton-Century-Crofts.

Reynolds, G. S. (1961). Attention in the pigeon. Journal of the Experimental Analysis of Behavior, 4, 57-71.

Rizley, R. C., and Rescorla, R. A. (1972). Associations in higher-order conditioning and sensory preconditioning. Journal of Comparative and Physiological Psychology, 81, 1-11.

Ross, L. E. (1959). The decrement effects of partial
reinforcement during acquisition of the conditioned eyelid response. Journal of Comparative and Physiological Psychology, 57, 74-82.

Russell, M., Dark, K. A., Cummins, R. W., Ellman, G., Callaway, E., and Peeke, H. V. S. (1984). Learned histamine release. Science, 225, 733-734.

Schneiderman, N. (1973). Classical (Pavlovian) conditioning. Morristown, NJ: General Learning Press.

Schuster, C. R., and Johanson, C. E. (1981). An analysis of drug-seeking behavior in animals. Neuroscience and Biobehavioral Reviews, 5, 315-323.

Seligman, M. E. P. (1975). Helplessness. San Francisco: Freeman.

Sidman, M. (1953). Two temporal parameters of the maintenance of avoidance responding by the white rat. Journal of Comparative and Physiological Psychology, 46, 253-261.

Siegel, S. (1976). Morphine analgesic tolerance: Its situation specificity supports a Pavlovian conditioning model. Science, 193, 323-325.

Siegel, S. (1983). Classical conditioning, drug tolerance, and drug dependence. In Research advances in alcohol and drug problems (Vol. 7, pp. 207-246). New York: Plenum.

Siegel, S., Baptista, M. A. S., Kim, J. A., McDonald, R. V., and Weise-Kelly, L. (2000). Pavlovian psychopharmacology: The associative basis of tolerance. Experimental and Clinical Psychopharmacology, 8, 276-293.

Schlink, B. (1997). The Reader. New York: Random House.

Skinner, B. F. (1938). The behavior of organisms. New York: Appleton-Century-Crofts.

Skinner, B. F. (1948). Superstition in the pigeon. Journal of Experimental Psychology, 38, 168-172.

Skinner, B. F. (1953). Science and human behavior. New York: Macmillan.

Skinner, B. F. (1957). Verbal behavior. New York: Appleton-Century-Crofts.

Skinner, B. F. (1981). Selection by consequences. Science, 213, 501-504.

Skinner, B. F. (1987). What ever happened to psychology as the science of behavior? American Psychologist, 42, 780-786.

Skinner, B. F. (1989). The origins of cognitive thought. American Psychologist, 44, 13-18.

Sokolov, E. N. (1963). Higher nervous functions: The orienting reflex. Annual Review of Physiology, 25, 545-580.

Solomon, R. L., and Brush, E. S. (1956). Experimentally derived concepts of anxiety and aversion. In M. R. Jones (Ed.), Nebraska Symposium on
Motivation (pp. 212-305). Lincoln: University of Nebraska Press.

Solomon, R. L., and Corbit, J. D. (1974). An opponent-process theory of motivation: I. The temporal dynamics of affect. Psychological Review, 81, 119-145.

Solomon, R. L., and Wynne, L. C. (1953). Traumatic avoidance learning: Acquisition in normal dogs. Psychological Monographs, 67, No. 4 (Whole No. 354).

Spence, K. W. (1936). The nature of discrimination learning in animals. Psychological Review, 44, 430-444.

Sperry, R. W. (1988). Psychology's mentalist paradigm and the religion/science tension. American Psychologist, 43, 607-613.

Staddon, J. E. R. (1983). Adaptive behavior and learning. Cambridge: Cambridge University Press.

Suppe, F. (Ed.) (1977). The structure of scientific theories (2nd ed.). Urbana, IL: University of Illinois Press.

Terrace, H. (1963). Discrimination learning with and without "errors." Journal of the Experimental Analysis of Behavior, 6, 1-27.

Terrace, H. (1964). Wavelength generalization after discrimination learning with and without errors. Science, 154, 1677-1680.

Thompson, R. F. (1986). The neurobiology of learning and memory. Science, 233, 941-947.

Thompson, R. F. (1988). The neural basis of basic associative learning of discrete behavioral responses. Trends in Neurosciences, 11, 153-155.

Thompson, R. F., and Spencer, W. A. (1966). Habituation: A model phenomenon for the study of neuronal substrates of behavior. Psychological Review, 73, 16-43.

Thompson, T., and Pickens, R. (1971). Stimulus properties of drugs. New York: Appleton-Century-Crofts.

Thorndike, E. L. (1911). Animal intelligence: Experimental studies. New York: Macmillan.

Tolman, E. C. (1932). Purposive behavior in animals and men. New York: Appleton-Century-Crofts.

Toulmin, S. (1972). Human understanding. Princeton, NJ: Princeton University Press.

Turner, E. G., and Altshuler, H. L. (1976). Conditioned suppression of an operant response using d-amphetamine as the conditioned stimulus. Psychopharmacology, 50, 139-143.

Wagner, A. R. (1976). Priming in STM: An information-processing mechanism for self-generated or retrieval-generated depression in performance. In T. J. Tighe
and R. N. Leaton (Eds.), Habituation. Hillsdale, NJ: Erlbaum.

Watson, J. B. (1916). The place of the conditioned reflex in psychology. Psychological Review, 23, 89-117.

Watson, J. B., and Rayner, R. (1920). Conditioned emotional reactions. Journal of Experimental Psychology, 3, 1-14.

Watson, J. S. (1967). Memory and "contingency analysis" in infant learning. Merrill-Palmer Quarterly, 13, 55-76.

Watson, J. S. (1971). Cognitive-perceptual development in infancy: Setting for the seventies. Merrill-Palmer Quarterly, 12, 139-152.

Weiner, H. (1962). Some effects of response cost upon human operant behavior. Journal of the Experimental Analysis of Behavior, 5, 201-208.

White, N. M., and Milner, P. M. (1992). The psychobiology of reinforcers. Annual Review of Psychology, 43, 443-471.

Wilson, M. A., and McNaughton, B. L. (1994). Reactivated hippocampal ensemble memories during sleep. Science, 265, 676-679.

Wolpe, J., and Lazarus, A. A. (1969). The practice of behavior therapy. New York: Pergamon.

Woods, P. J. (1974). A taxonomy of instrumental conditioning. American Psychologist, 29, 584-596.

Worden, F. G. (1973). Auditory habituation. In H. V. S. Peeke and M. J. Herz (Eds.), Habituation: Vol. II. Physiological substrates. New York: Academic Press.
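Appendix: A numerical sketch of Spence's model. The difference-of-gradients account described under "Theories of discrimination learning" (response strength = excitation around SD minus inhibition around SΔ) can be illustrated with a few lines of Python. This is only a sketch: the Gaussian gradient shapes and every parameter value below are assumptions chosen for demonstration, not fits to Hanson's (1959) data.

```python
# Illustrative sketch of Spence's (1936) difference-of-gradients account of the
# peak shift. Gradient shapes and parameter values are assumed, not from data.
import math

def gaussian(x, center, height, width):
    """A bell-shaped generalization gradient around a training stimulus."""
    return height * math.exp(-((x - center) ** 2) / (2.0 * width ** 2))

S_D, S_DELTA = 550.0, 555.0   # training wavelengths (nm), as in Hanson's Group 2

def net_strength(wavelength):
    """Obtained response strength: excitation around SD minus inhibition around S-delta."""
    excitation = gaussian(wavelength, S_D, height=1.0, width=20.0)
    inhibition = gaussian(wavelength, S_DELTA, height=0.8, width=15.0)
    return excitation - inhibition

# Probe test wavelengths from 500 to 600 nm and locate the peak of the net gradient.
probes = list(range(500, 601))
peak = max(probes, key=net_strength)
print(peak)   # lies below the 550 nm SD, i.e. shifted away from S-delta
```

With these assumed parameters the peak of the net gradient falls several nanometers below the 550 nm SD, shifted away from the 555 nm SΔ, which is the direction of Hanson's peak shift; making SΔ more similar to SD, or its inhibitory gradient stronger, pushes the peak farther away.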

