Exam Two Study Guide (Psych 360)
This 29-page study guide was uploaded by Winny Lu on Saturday, April 2, 2016. The study guide belongs to Psych 360 at the University of Maryland, Baltimore, taught by Bernard Rabin in Spring 2016. Since its upload, it has received 70 views. For similar materials, see Motivational Psychology in Psychology at the University of Maryland, Baltimore.
Exam 2 Study Guide

Arousal and Activation

I. Arousal Theory
   a. Developed in response to the difficulty of distinguishing between motivation and emotion
      i. The differences between motivation and emotion are quantitative (in degree, not in type)
   b. Concept of level of arousal
      i. A general arousal factor
      ii. Not related to goal direction
      iii. Depends on internal and environmental factors
         1. Ex: one does not have to be hungry to enjoy food
II. Definition of Arousal
   a. Physiological measures (objective!)
      i. EEG: alpha blocking
      ii. GSR: sweating
      iii. Heart rate / blood pressure
   b. Physiological theory (defines the relationship between arousal and performance)
      i. Optimal-level theory
   c. Defines arousal in physiological terms, but this is still a psychological theory
III. Yerkes-Dodson Law
   a. Describes the relationship between arousal and performance; not a physiological theory
   b. As the level of arousal increases, performance increases to an optimal level and then decreases
IV. Support: Belanger and Feldman (1962)
   a. Took rats and measured heart rate as a function of deprivation
   b. High deprivation = high heart rate
   c. Results support the theory on both sides:
      i. Physiological response: used physiological measures (heart rate) to define arousal
      ii. Psychological theory: performance rose to an optimal level and then decreased
V. Evidence Against
   a. Effects on heart rate require an instrumental response
      i. Heart rate does not increase if there are no bar presses
   b. Poor correlations between different measures of arousal
      i. Different physiological measures correlate differently with behavior
         1. Ex: the arousal level measured by EEG does not equal the arousal level measured by sweating (GSR)
VI. Optimal Level of Arousal (PSYCHological theory)
   a. The optimal level of arousal is related to the task at hand
   b. Lower optimal level for complex tasks
   c. Higher optimal level for simple tasks
VII. Other Factors
   a. Individual differences
      i. Need for external motivation
   b. Cognitive factors
      i. Situational factors
         1. On a rollercoaster:
            a. Pounding heart and shaky knees = good, because that means the rollercoaster was good
         2. At the Dean's office:
            a. Pounding heart and shaky knees = bad
VIII. Physiological Theory (Hebb, 1955)
   a. Ascending Reticular Activating System (ARAS)
      i. A physiological mechanism in the brain
   b. Sensory input follows 2 pathways through this mechanism:
      i. Indirect, through the ARAS, to alert the organism
         1. Ex: to wake up
      ii. Direct, through the thalamus and cortex, to direct behavior
         1. Ex: where to go or what to do
   c. Each sensory stimulus functions both to arouse and to direct behavior
IX. Hebb (1955)
   a. Efficiency of cue: the ability of the brain to process sensory information
   b. The level of ARAS activity is what makes this a physiological theory
X. Evaluation
   a. Support:
      i. Goodman (1970): ARAS activity is correlated with performance. Took monkeys, implanted recording electrodes in the ARAS, and engaged them in a vigilance task (stare at a screen and make a quick response whenever a stimulus appears). Found an optimum level of ARAS activity that was correlated with the optimal level of performance.
   b. Against:
      i. Lesions of the ARAS: the EEG shows that the cat is sleeping, but in reality the organism is still able to learn
      ii. Lesions of the posterior hypothalamus: the rat's EEG shows that the rat is alert, but in reality it is in a coma, unable to learn
      iii. There are dissociations between ARAS activity and behavior
XI. Environmental Stimulation
   a. Responsiveness of the organism to environmental stimulation
      i. Indicative of a need for stimulation (sensory deprivation)
      ii. Behavior released by stimulation (stimulus-seeking behavior)
XII. Sensory Deprivation
   a. Bexton, Heron and Scott (1954)
      i. General results confirmed
         1. College students volunteered to lie on a couch in a soundproof room, wearing goggles that prevented any patterned vision
            a. Despite significant reward, they terminated the experiment quickly
            b. Aversive situation
         2. The experimenters also tried it
            a. Lasted longer, but still aversive
      ii. Aversiveness varies with motivation (for pay? for science?) and with the specific procedures (how is the deprivation produced?)
XIII. Sensory Deprivation: Effects
   a. Perceptual effects
      i. Hallucinations
   b. Cognitive deficits
      i. Lower performance on complex reasoning tasks
   c. Emotional changes
      i. Boredom, restlessness, irritability
   d. Changes in arousal
      i. EEG indicated high arousal
      ii. GSR indicated low arousal
I. Animal Studies
   a. Thompson and Melzack (1956)
      i. Isolation-reared dogs show cognitive deficits
      ii. Sensory deprivation affects a number of different species, not just humans
II. Sensory Deprivation: Evaluation
   a. Reduction in stimulation is aversive (leads to the arousal of goal-directed behavior)
   b. BUT!!
      i. What are the antecedents?
         1. The absence of environmental stimulation
         2. Not internal to the organism
III. Stimulus-Seeking Behavior
   a. The presence of certain types of stimuli in the environment leads to the arousal of goal-directed behavior
   b. Range of behaviors (responses to stimuli):
      i. Play
      ii. Exploration and curiosity
      iii. Manipulation (ex: moving an object)
   c. NOT related to need systems
      i. Example: one does not need Sudoku to survive
IV. Play
   a. Definitions
      i. Unmotivated behavior
         1. Behavior done for its own sake
      ii. Human play
         1. Games with rules
         2. "As if" behavior
      iii. Dr. Rabin's definition of play: behavior lacking a consummatory response
V. Classical Theories
   a. The content of play is independent of the cause of the play
      i. Surplus-of-energy theories (cause)
         1. The young are taken care of by their parents, so they have excess energy to burn off by playing
         2. The contents are determined by the parents/environment
      ii. Growth theories (content)
         1. Play facilitates mastery over the environment
            a. Ex: monkey bars, Legos
VI. Specific Theories
   a. Freud
      i. Cause: fantasy and wish fulfillment
      ii. Content: deny grounds for anxiety and promote active coping devices
   b. Piaget
      i. Piaget's definition of play: behavior in which one bends reality to fit existing forms of thought
         1. Age-dependent thought processes (cause)
      ii. Games with rules
      iii. Function: to fix and retain new abilities, the net result of play (content)
VII. Animal Play
   a. Cause: a longer period of dependency on parents; surplus of energy
   b. Content: survival is dependent on learned behavior
      i. Generalist species
         1. Ex: wolves
         2. Play is related to hunting
            a. Pretending to hunt a sibling
            b. Wrestling
   c. In humans, parents and peers determine the content
VIII. Motivation
   a. Boredom drive
      i. Play as a response to boredom: play to relieve boredom
   b. BUT!
      i. Drive is an internal source of arousal
         1. Physiological changes in the organism maintain homeostasis, but in play there is no physiological arousal
      ii. Play is external: a response to an environmental stimulus
         1. Not internal, so there is nothing for a drive to be a drive of
IX. Exploration (Montgomery, 1953)
   a. The tendency of organisms to move about their environment
   b. Exploration increases as a function of
      i. Novelty
      ii. Complexity
      iii. Amount of change
   c. Exploration decreases
      i. As time in the maze increases, exploration decreases
X. Curiosity (Berlyne, 1960)
   a. Visual exploration
      i. Curiosity increases as the number of contours increases (checkerboard patterns get more attention)
      ii. Incongruity in college students
         1. Example: running through a series of slides can get boring, so we are surprised by uncertainty and complexity
      iii. Surprise, uncertainty and complexity
XI. Curiosity/Exploration: Motivation
   a. Relationship to survival
      i. Exploration of the environment can increase the chance of survival
      ii. BUT!
         1. Increasing hunger or fear decreases exploration
      iii. Hull: reactive inhibition
         1. sEr = (sHr x D) - (Ir x sIr)
            a. Every time one makes a response, another response builds up that prevents one from making that response again
      iv. Learned inhibition
         1. Example: one touches a hot plate and then learns not to touch it again
XII. Glanzer (1953)
   a. An experiment that supports Montgomery
   b. In trial 1, the rat goes to the left with the top portion of the maze blocked; in trial 2, the rat starts from the opposite direction and goes to the right
      i. If the rat goes to the right (the same place as in trial 1), then Berlyne's theory is correct; if the rat goes to the left (a different place), then Montgomery's theory is correct
XIII. Manipulation (Harlow, 1950)
   a. The tendency of organisms to manipulate (play with) objects in the environment
   b. Most readily observed in primates
      i. Harlow made a puzzle with locks
      ii. If the researcher gave the monkeys food for doing the puzzle, performance was disrupted
      iii. If the researcher simply let the monkey keep doing the puzzle and reset it, performance was not disrupted
      iv. Incentive = low performance
      v. No incentive = high performance
      vi. Since the monkeys were only 20-30 days old, this behavior was not learned
         1. Shows that the ability to manipulate is itself reinforcing
XIV. Motivation for Stimulus-Seeking Behavior
   a. Antecedent conditions
      i. The presence of certain types of stimuli
      ii. Non-homeostatic
      iii. External, not physiological
   b. Drive theory
      i. Boredom drive
         1. Not related to an internal physiological drive
         2. Problem: no longer internal; related to stimuli
   c. Incentive theory
      i. Stimulus-seeking behavior is an incentive
      ii. Reinforces behavior
      iii. Problem: it does not identify the underlying motivation
   d. Arousal theory
      i. Stimulus-seeking behavior changes the arousal level
         1. Modulates the arousal level
      ii. Movement away from the optimum is aversive
      iii. Movement towards the optimum is reinforcing

Reinforcement

I. Why Study Reinforcement?
   a. Motivation is defined in terms of factors that produce an immediate change in performance
      i. Reinforcement falls within that category
   b. Behavior is a function of the conditions of reinforcement = INCENTIVE
      i. In order to understand incentive, one has to understand reinforcement
II. Law of Effect
   a. The consequences of a behavior determine whether or not that behavior will recur
      i. Positive consequences (rewards) = increased tendency for the behavior to repeat
      ii. Negative consequences = decreased tendency for the behavior to repeat
   b. Reinforcement functions to "strengthen" the connection so that the stimulus comes to elicit the specific response in a reflex fashion
      i. A general response tendency is strengthened
         1. Example: handwriting on paper and on a board looks the same, but each action uses different muscles
            a. The general response is handwriting
III. Approaches to the Law of Effect
   a. Weak law
      i. Reinforcement is sufficient for learning
      ii. The theory does not specify the factors that make a stimulus a reinforcer
   b. Strong law
      i. Reinforcement is necessary for learning
      ii. The theory specifies the criteria that a stimulus must meet to be a reinforcer
   c. Example:
      i. Hull's theory is a strong law
         1. A reinforcer is necessary
         2. It defines a reinforcer as the reduction in drive
IV. 5 Major Theories of Reinforcement
   a. Functional approaches
   b. Drive reduction
   c. Stimulus complexity/change
   d. Glickman-Schiff
   e. Premack Principle
V. Functional Approaches
   a. Reinforcement is defined in terms of its functional effects
      i. If a stimulus increases responding, then it is a reinforcer
   b. Types of reinforcers
      i. Positive reinforcer
         1. A desired stimulus is gained
      ii. Negative reinforcer
         1. An undesirable stimulus is removed
VI. Drive Reduction
   a. Hull (1943)
      i. Drive reduction is a necessary condition for reinforcement
      ii. Distinguishes between drive-stimulus reduction and need reduction
      iii. Not all biological needs give rise to a psychological drive that arouses behavior
      iv. Support:
         1. Miller and Kessen (1952)
            a. Trained rats on a task
            b. There were 2 reinforcement groups
               i. Group 1: when there was a correct response, the rats were able to drink milk
                  1. Need reduction present (they drank the same amount of milk)
                  2. Drive reduction present (the act of drinking)
               ii. Group 2: when there was a correct response, milk was delivered directly into the stomach
                  1. Need reduction present (they received the same amount of milk), but no drive reduction from the act of drinking
            c. Results:
               i. Group 1 learned faster
            d. Need reduction and drive reduction are not identical
         2. Pain reduction
         3. Fear reduction (reduction of any intense stimulus is reinforcing)
         4. Fistula rewards: the organism is sensitive to the amount of drive reduction independently of its behavior
            a. Less drive reduction = more behavior
            b. More drive reduction = less behavior
      v. Against:
         1. Hedonic reinforcers
            a. If the basis of reinforcement is drive reduction, then why is the absence of drive reduction still reinforcing?
               i. Example: rats still consume saccharin (a no-calorie sweet solution) even though it reduces no drive
               ii. Example: eating food for taste, not just to satisfy hunger
         2. Stimulus complexity
            a. An increase in fear can be reinforcing
               i. Example: going on a rollercoaster because it generates fear and excitement
      vi. Evaluation
         1. Drive reduction is sufficient but not necessary
            a. There can be reinforcement in the absence of drive reduction
VII. Stimulus Complexity
   a. Arousal theory
      i. Increases or decreases in arousal level towards the optimum
      ii. Dember and Richman (1989)
         1. Exposure to complex, changing stimuli is reinforcing
      iii. Bower et al. (1966)
         1. Information can be reinforcing
            a. Studied pigeons
            b. There were two keys that led to food
               i. 1st key: signals the pigeon that the food is coming
               ii. 2nd key: food only, with no signal
               iii. Results: the pigeons picked the key that signaled that the food was coming
VIII. Glickman and Schiff
   a. Activation of the neural systems underlying a response is a sufficient condition for reinforcement
      i. Support:
         1. Electrical stimulation of the brain can be used as a reinforcer
            a. Rats that were given electrical brain stimulation learned faster
            b. Reinforcement keeps the brain circuit active for a longer period of time
IX. Premack Principle (1959)
   a. "Any response A will reinforce any other response B, if and only if, the independent rate of A is greater than that of B"
   b. "Preference value" (PV)
      i. A high-PV behavior reinforces a low-PV behavior
         1. Example: the professor's son had his ears pierced in high school to spite him (rebelling against his dad had a high preference value), but when he was interviewing for a job he took the earring out (getting the job had a greater preference value than rebelling against his dad)
   c. Support:
      i. When rats have unlimited access to water and a running wheel, the rats will drink water in order to gain access to the running wheel
      ii. BUT!
      iii. Take the same rats and deprive them of water, and they will run in order to receive water
      iv. In water-deprived rats, drinking reinforces running, while in unrestricted rats, running can reinforce drinking as an instrumental response
   d. Evaluation
      i. The most generally accepted view of reinforcement today
      ii. Stresses the hedonic nature of the reinforcer and of the response, rather than the response rate alone
      iii. Assumes that the response rate reflects the hedonic value of the reinforcer and the emotional response to it
      iv. Deprivation changes PV!
X. Secondary Reinforcement
   a. "Primary" reinforcement
      i. Biologically based reinforcers associated with the maintenance of homeostasis
   b. "Secondary" reinforcement
      i. Previously neutral stimuli which, by association with a primary reinforcer, come to serve some of the same functions as a primary reinforcer
         1. Neutral stimulus + association with a primary reinforcer = same function as the primary reinforcer
      ii. The concept of secondary reinforcement is important only when reinforcement is defined as drive reduction!
   c. Functions of secondary reinforcers
      i. Acquisition of new responses
      ii. Maintaining behavior during extinction
      iii. Mediating delay of reinforcement
      iv. Establishing and maintaining schedules of reinforcement
         1. Example: rats bar-press to get food
            a. The rat bar-presses
            b. It hears a click
            c. It gets food
            d. When the rats no longer get food, their bar-pressing is prolonged as long as they still hear the click
            e. Click = secondary reinforcer
   d. Factors affecting the strength of a secondary reinforcer
      i. Number of associations with the primary reinforcer
      ii. Amount of primary reinforcement
      iii. Probability that the secondary reinforcer will be followed by primary reinforcement
      iv. Temporal association between the primary and secondary reinforcement
         1. The closer they are, the stronger the association
      v. Secondary reinforcers extinguish rapidly if they are not paired with primary reinforcers
I. Theories of Secondary Reinforcement
   a. Stimulus theories
   b. Response theories
   c. Motivational theories
   (The need for secondary reinforcement follows from the definition of reinforcement as drive reduction. If reinforcement is defined differently, there is no need for the concept of secondary reinforcement.)
II. Stimulus Theories
   a. A secondary reinforcer is a:
      i. Conditioned stimulus
         1. A previously neutral stimulus is paired with a primary reinforcer, so the neutral stimulus comes to elicit the primary response
      ii. Discriminative stimulus
         1. Linked to the primary reinforcer
            a. Secondary reinforcer present = primary reinforcement present
            b. Secondary reinforcer not present = primary reinforcement NOT present
            c. The presence of the secondary reinforcer discriminates between the presence and the absence of primary reinforcement
      iii. Informative stimulus
         1. Provides unique and reliable information about the forthcoming primary reinforcement
III. Motivational Theories
   a. A secondary reinforcer may lead to a reduction in drive-stimulus intensity
   b. Mowrer (1966)
      i. Each drive-producing condition has an associated emotional component
         1. Example: food deprivation = "hunger-fear"
      ii. The secondary reinforcer functions to reduce the emotional component by signaling forthcoming primary reinforcement
IV. Secondary Reinforcement: Evaluation
   a. Do we need the concept of "secondary reinforcement" at all?
      i. It is only necessary when reinforcement is defined as primary drive reduction
      ii. Example:
         1. There is a rat in a box
         2. It has to press a bar for food, and every time it presses, it hears a click
         3. It gets food
            a. How do we know that the click is a secondary reinforcer?
               i. The rat will continue to press the bar longer in the absence of food when it still hears the click
   b. Confounding factor: frustrative nonreward
      i. When the rats went to the box with no food, there was a significant increase in speed (frustrative nonreward)
V. Secondary Reinforcement: Evaluation
   a. The same procedure of withholding expected reinforcement is called secondary reinforcement or frustrative nonreward depending upon the experimental design
   b. The distinction between "primary" and "secondary" may not be meaningful
      i. The distinction is based upon the specific definition of reinforcement as drive reduction
   c. Other definitions of reinforcement do not require a distinction between "primary" and "secondary"

Incentive

I. Drive and Incentive
   a. Incentive: behavior is a function of the conditions of reinforcement
      Drive: innate, unlearned; momentary, transient; a process that energizes behavior
      Incentive: learned; a product of the history of reinforcement; an associative phenomenon
   b. For drive: if one manipulates deprivation conditions, it changes the value of a given set of reinforcers
II. Tolman and Expectancies
   a. Cognitive psychology
      i. On the molar level, behavior is always purposive
         1. Purpose is an inferred determinant of behavior
            a. It cannot be observed directly
      ii. Behavior is always oriented towards the approach or avoidance of a particular goal
   b. Types of expectancies
      i. S1 -> S2
         1. Because S1 occurred, S2 will occur (classical conditioning)
      ii. S1-R1 -> S2
         1. Because of R1 (a response) to S1, S2 will occur
            a. S2 is contingent upon behavior
               i. Example: one hates getting rained on (S1), so there is always an umbrella in the car (R1), and one takes the umbrella out of the car when it rains (S2)
   c. Expectancies have a value attached to them
      i. The value can be positive or negative, which leads to approach or avoidance
III. Cognitive Maps
   a. Components:
      i. The character/nature of the goal object
      ii. The location of the goal object
      iii. The means of achieving the goal
   b. As a function of experience, the organism develops expectancies about the outcomes of its own behavior
   c. The cognitive map develops out of experience
IV. Tolman's Theory
   a. Performance tendency = f(expectancy, drive stimulation, incentive valence)
      i. Expectancy
         1. Develops as a result of learning and experience
      ii. Drive stimulation
         1. The internal state of the organism related to maintaining homeostasis
      iii. Incentive valence
         1. The expected value of the behavior
         2. Learned from experience with a particular goal
            a. Can be positive or negative
      iv. Example: when a rat presses the bar, there is
         1. The expectation of food (expectancy)
         2. Hunger (drive stimulation)
         3. The good feeling after eating (incentive valence)
V. Blodgett (1929): Latent Learning
   a. Rats learned a maze
      i. Group 1 was rewarded every day and followed a normal learning curve
      ii. Group 2 was first rewarded on day 10, and its performance immediately increased to the same level as the group with continuous reinforcement
VI. Crespi (1942/1944)
   a. Measured the time it takes to go from the start box to the end box
      i. Group 1: given 16 pieces of cheese each time the goal was reached
      ii. Group 2: given 256 pieces of cheese each time the goal was reached
      iii. Group 3: given 1 piece of cheese each time the goal was reached
   b. On day 20, every group got 16 pieces of cheese for reaching the goal
   c. Results:
      i. Group 1: no change in performance (control)
      ii. Group 2: decrease in performance, below the control
      iii. Group 3: increase in performance, above the control
   d. Shows that performance depends not only on the present reinforcement but on the history of reinforcement as well
   e. Positive contrast: the group that performs above the control group (Group 3)
   f. Negative contrast: the group that performs below the control group (Group 2)
   g. Positive and negative contrast are void of emotion. DO NOT use "elation effect" (+) or "depression effect" (-) to describe positive and negative contrast.
I. Incentive Variables
   a. Behavior is a function of the conditions of reinforcement
   b. Variables:
      i. Quantity of reinforcement
      ii. Quality of reinforcement
      iii. Delay of reinforcement
   c. Deprivation is not a reinforcement variable
II. Quantity of Reinforcement
   a. The Crespi experiment described above shows an incentive shift
      i. Manipulated the amount of cheese
   b. Beach and Jordan (1956)
      i. History of reinforcement
         1. The level of reinforcement on the immediately preceding trial matters
III. Quality of Reinforcement
   a. Basic work with sucrose or saccharin and nondeprived rats (no homeostatic factors)
      i. The taste factor (sweetness) and its concentration are critical
   b. More reliable contrast effects
IV. Delay of Reinforcement
   a. The interval between the performance of an instrumental response and the reinforcement of that response
   b. The longer the interval, the poorer the performance
   c. Immediate reinforcement trumps delayed punishment
   d. Grice (1948): discrimination task; rats learned a task with varying delays of reinforcement
      Delay -> trials to reach 100% correct:
      0 sec: 50 trials
      0.5 sec: 150 trials
      1.2 sec: 275 trials
      2 min: 500 trials
      5 min: can only get up to 80%
      10 min: can only get up to 50%
V. Theoretical Perspectives: Relationship between Drive and Incentive
   a. Drive and incentive are independent sources of motivation
      i. sER = f(sHr x D x K)
      ii. K is incentive
   b. The concept of drive is not needed to account for the arousal of goal-directed behavior
   c. Hull
      i. sER = f(sHr x D x K)
      ii. BUT if any of the values is 0, then there will be no behavior
   d. Spence
      i. sER = f(sHr x (D + K))
   e. Both are right
      i. If D and K > 0: Hull
      ii. If D or K = 0: Spence
VI. Theoretical Perspective (Hull): Fractional Anticipatory Goal Response (rG)
   a. There is a problem with the idea of reinforcement: how can the conditions of reinforcement (which occur at the end of the behavior) affect behavior at the start of the instrumental response?
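Before turning to rG, the Hull vs. Spence contrast above can be made concrete with a short numerical sketch. The function names and the specific numbers below are illustrative assumptions, not values from the lecture; the point is only that the multiplicative and additive forms diverge when D or K is zero.

```python
# Sketch contrasting Hull's and Spence's formulations of reaction potential.
# Function names and numeric values are illustrative assumptions.

def hull_sER(sHr, D, K):
    # Hull: sER = f(sHr x D x K); if D or K is 0, there is no behavior
    return sHr * D * K

def spence_sER(sHr, D, K):
    # Spence: sER = f(sHr x (D + K)); behavior persists if either D or K > 0
    return sHr * (D + K)

# A sated animal (D = 0) offered an attractive incentive (K = 2), habit sHr = 1:
print(hull_sER(1, 0, 2))    # 0 -> Hull predicts no responding
print(spence_sER(1, 0, 2))  # 2 -> Spence predicts responding
```

This mirrors the "both are right" point above: with D and K both positive, the two formulations agree qualitatively, but when either factor is zero, only Spence's additive version predicts responding.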
   b. This is the theory Hull came up with to "solve" this issue. Setup: a runway from a start box to a goal box. The theory is built starting at the goal box (where the delay of reinforcement is shortest) and working backwards to the start (where the delay is longest).
      i. Stimuli from the world: SS (stimulus of the start), SR (stimulus of the runway), SG (stimulus of the goal box, i.e., food)
      ii. Responses: RS (response at the start), RR (response of running the runway), RG (response in the goal box, i.e., eating)
      iii. Stimulus in the organism: SD (drive stimuli), present throughout the sequence
      iv. Conditioning chains: SS leads to RS; SR leads to RR; SG leads to RG; and SD (as well as SS and SR, through classical conditioning) comes to elicit rG
      v. rG = fractional anticipatory goal response: a fraction of the goal response that can occur without the goal object (e.g., salivation)
   c. sG is the feedback consequence of rG
      i. This linkage occurs through classical conditioning
   d. rG can vary when the goal response varies
      i. High RG = high rG
      ii. Low RG = low rG
   e. Example, using Crespi's study discussed above: 256 pieces of cheese -> high rG -> high feedback -> the rats moved faster; 1 piece of cheese -> low rG -> low feedback -> the rats moved slower
   f. The theory basically shows classical conditioning
      i. The organism produces a response to a stimulus or a drive stimulus, and rG can produce a response even without the goal stimulus
   g. Hull showed that a response that varies with the conditions of reinforcement can occur at the start of an instrumental sequence
   h. The goal response itself (RG) can only occur when the goal object is present
VII. Problems with rG
   a. Attempts to manipulate it have not been successful
      i. Cannot distinguish the response by extinguishing rG
         1. Kraeling (1961)
            a. Used sugar solutions of different concentrations
               i. There was a higher licking speed for the 20% sugar solution than for the 5% solution
                  1. There is a higher rG for the 20% solution
            b. Forced the rats to drink at the same rate regardless of whether it was a 20% or a 5% solution
               i. This leads to the same rG
                  1. Eliminates the rG factor
               ii. Hypothetical prediction: if the motor response is the same, then rG must be the same; if rG is the same, then the running speed must also be the same
            c. Results:
               i. The running speed of the rats varied with the sugar concentration: the 20% group was faster
                  1. Showed that there is a taste factor
         2. From a different experiment: there is an rE, an emotional response
            a. BUT! Not as widely accepted, because people are reluctant to believe that rats have emotions
   b. Alternate approaches
      i. Tolman
      ii. Young
VIII. Theoretical Perspectives: Tolman
   a. Performance tendency = f(expectancy, drive stimulation, incentive valence)
      i. Uses commas: does not specify the nature of the interaction between the variables
   b. Incentive valence is subject to a range of factors
      i. Level of deprivation
      ii. Level of reinforcement
IX. Theoretical Perspectives: Young
   a. Young (1956): hedonic theory
   b. Motivation results from affective (emotional) arousal
      i. Hedonic continuum
         1. Behaviors are categorized as negative, indifferent, or positive
         2. Maximize the positive (behaviors that are enjoyable)
         3. Minimize the negative (behaviors that are not enjoyable)
X. Principles of Hedonic Theory
   a. Stimuli have affective as well as sensory consequences
      i. Approach stimuli that are associated with positive affect
   b. Motivation develops from affective arousal
      i. Primary affective arousal is directly produced by the stimulus
      ii. Conditioned affective arousal leads to anticipatory arousal
   c. Affective processes affect behavior by influencing choice: approach or avoidance
   d. The affective values of stimuli can be modified by internal states
      i. Deprivation can increase the acceptability of a wider range of behaviors
         1. Example: one usually does not eat grasshoppers
            a. But after starving for 3 days, one would most likely eat the grasshoppers
XI. Theoretical Perspectives: Opponent-Process Theory
   a. Every affective state tends to arouse its opponent (opposite) state
      i. There is an A state and a B state
         1. A state: the process directly aroused by a stimulus situation; dominant while the situation lasts
         2. B state: the opponent process; becomes dominant when the primary stimulus for emotional arousal changes
   b. The effect is non-associative
      i. Does not involve learning

Aversion and Avoidance

I. Escape Learning
   a. Experimental design
      i. A CS+ (conditioned stimulus) is paired with the aversive stimulus
      ii. A CS- is not paired with the aversive stimulus
      iii. The organism learns to escape the CS+
      iv. Reinforcement: the reduction in aversive stimulation
II. Factors Affecting Escape Learning
   a. Amount of drive reduction
      i. Relative reduction, not absolute
         1. Example: reduce the voltage inflicted on a rat by 100 V
            a. Going from 300 V to 200 V is one thing
            b. It is quite different if the original voltage was 1000 V
      ii. The greater the reduction, the faster the learning
   b. Delay in reinforcement
   c. Incentive shift effects
III. Theories of Escape Learning
   a. Drive
      i. Shock produces a drive
      ii. Drive reduction is reinforcing
      iii. Problem: rats can anticipate reductions
         1. It is difficult to explain how rats learn to anticipate reductions; how does drive explain this?
   b. Incentive
      i. The answer to the above question is incentive
      ii. "Anticipatory relaxation"
         1. The rats can anticipate the intensity of the stimulation
         2. Cues associated with the reduction in the painful stimulus become conditioned to the environment
IV. Avoidance Learning
   a. Experiment:
      i. Rats were placed in a runway with a shock section and a safety section
      ii. Shock section -> safety section
      iii. After a number of trials in the same compartment, the rat does not wait to get the shock.
         The rat avoids it by moving into the safety zone first.
      iv. Problem: what is reinforcing the avoidance behavior? (The rat left before the aversive stimulus occurred)
V. Two-Factor Theory
   a. Part 1: cues elicit fear
   b. Part 2: the reduction in fear reinforces the avoidance response
   c. Administration of shock classically conditions fear to the cues in the white compartment
   d. Fear elicited by the cues in the white compartment elicits running to the colored compartment (the safety zone)
   e. Reduction in fear reinforces the avoidance response
   f. The basis of avoidance responding is fear
      i. If fear is prevented, avoidance learning is prevented
VI. Support for Two-Factor Theory
   a. Effect of tranquilizers
      i. Tranquilizers reduce fear by lowering levels of anxiety
         1. Diazepam suppresses avoidance behavior but NOT approach behavior
            a. It does not affect learning, just avoidance behavior
   b. Solomon and Wynne (1950)
      i. Cut the sympathetic nervous system (no fear) before training
         1. Poor learning of the avoidance response
      ii. Cut the sympathetic nervous system (no fear) after training
         1. No effect on the avoidance response
            a. Because it had already been learned with the fear emotion before the sympathetic nervous system was cut
   c. Solomon and Turner (1962): curare learning
      i. Trained dogs to make avoidance responses by pairing a tone and a shock
         1. Gave the dogs curare (a drug that paralyzes the muscles) and paired a tone with a shock
         2. Let the dogs recover
         3. Presented the dogs with the tone but without the shock
            a. The dogs ran, because they had learned that the tone was associated with the shock
            b. They learned the avoidance response even without the physical experience of responding while exposed to the aversive stimulus
            c. Showed that the dogs learned avoidance through fear, because running away is a sign of fear in dogs
VII. Problems with Two-Factor Theory
   a. Dissociation of fear and avoidance responding
      i. Using the above study (Solomon and Turner)
         1. Conditioned an avoidance response to a light
CS used to condition the second avoidance response
   2. Associated the tone with the light
   3. The dogs responded to the light as if it were the tone, BUT the light was not conditioned to that aversive stimulus; it was conditioned to another stimulus
 b. Avoidance responses can be extinguished without extinguishing fear
VIII. Cognitive Interpretations
 a. The CS becomes a signal for responding
 b. Expectancies
 c. Time to respond increases

Example Exam Question: The two-factor theory of avoidance proposes that ___________.
Answer: fear elicits avoidance, which is in turn reinforced by a reduction in fear.

Punishment
I. Definition
 a. Stimulus:
  i. Delivery of an aversive stimulus following some response
  ii. The organism is expected to suppress responding in order to avoid more punishment
  iii. Problem: one has to know whether the stimulus is aversive to the individual (subjective)
 b. Response:
  i. Delivery of a stimulus that suppresses the behavior that precedes it
  ii. Independent of whether or not the stimulus is shown to be aversive
II. Relationship to Behavior
 a. Traditional view
  i. Reward strengthens the S-R (stimulus-response) bond
  ii. Punishment weakens the S-R bond
 b. A moral and ethical view adopted by psychology
 c. Thorndike
  i. Split a group of college students into two groups
  ii. In one group, when the answers to the questions were right, he told the students they were correct
  iii. In the second group, when the answers were right, he told the students they were wrong (punishment)
  iv. Result: both groups learned at the same rate
   1. Thorndike's conclusion was initially accepted but later rejected, because telling college students they were wrong is not necessarily a punishment
    a. Biphasic "law of effect"
III. Experiments
 a. Estes (1944)
  i. Split rats into two groups
   1. One group received electric shocks during the process of extinction
   2. The second group did NOT receive electric shocks during extinction
   3. Results:
    a. The electric shocks did not affect the speed of extinction
     i.
It took the same number of trials to reach extinction
    b. Punishment does not accelerate extinction
 b. Holz and Azrin (1961)
  i. Pigeons were trained to peck a key for food
   1. When they pecked the key, they received an immediate shock
  ii. The pigeons were then split into 2 groups
   1. One group received electric shocks during the process of extinction
   2. The second group received NO shocks during extinction
  iii. Result:
   1. Group 1: maintained responding for a longer time
   2. Group 2: maintained responding for a shorter time
  iv. The electric shock functioned as a secondary reinforcer, leading the first group to maintain responding for a longer time
   1. The pigeons associated the shock with the primary reinforcer of food
   2. The punishment facilitated responding in this situation
  v. Punishment does not weaken the stimulus-response bond
IV. Alternate Response Hypothesis
 a. "A punished response is less likely to occur because it has been replaced, at least temporarily, by some other response that is more likely to occur"
  i. The response to severe punishment is fear
  ii. Fear competes with the punished response → the punishment is effective in altering behavior
 b. Fowler and Miller (1963)
  i. Placed rats in a straight runway
   1. Rats were food deprived
   2. In the goal box was food
  ii. There was an electric grid between the start box and the goal box
   1. Electrify the front paws → the rat jumps back (performance is impaired)
   2. Electrify the back paws → the rat jumps forward faster (performance is improved)
  iii. Whether or not punishment hinders performance depends on the nature of the punishment
   1. If the response to punishment competes with running forward → impaired behavior
   2. Electric shock can hinder or facilitate responses
  iv. If the punishment leads to fear, then the reduction of fear is reinforcing
V. Punishment Paradigms
 a. Response contingent
  i. Immediate presentation of an aversive stimulus following a response
 b. Stimulus contingent (noncontingent)
  i. Presentation of an aversive stimulus following some specified stimulus
   1.
The punishment depends on the presence of the stimulus in the environment
    a. Example:
     i. A child broke a vase; the mother said, "Wait until your father comes back"
      1. The punishment depends on the presence of the father
  ii. Conditioned Emotional Response (CER)
   1. CS: the presence of a light
   2. Light on:
    a. Response contingent: when the light turned on, there was an immediate shock
    b. Stimulus contingent: the light turned on and there was a random electric shock
    c. The response-contingent group responded a little more than the stimulus-contingent group
   3. No light:
    a. Response contingent: there was an increase in responses, because the rats learned that the shock occurs only in the presence of the light
    b. Stimulus contingent: there was no significant increase, because the rats did not know what caused the shock
   4. Extinction: light present but no shocks
    a. Response contingent: a good extinction curve; the rats learned that the shock had stopped and was no longer associated with the light (increase in responses)
    b. Stimulus contingent: less responding, because the rats never learned what caused the shock and therefore did not know that the shocks had really stopped
VI. Learned Helplessness
 a. Seligman (1967)
  i. Gave dogs shocks with no possible escape
  ii. When tested later (the dogs could run to the other side of the runway to escape the shock), only 33% of these dogs attempted to escape, compared to 94% of naïve dogs
   1. They learned that escape is not possible and do not attempt it
  iii. Helplessness is a function of lack of control
  iv. Characteristics of dogs that did not escape:
   1. Passivity
   2. Associative retardation (learning problems)
   3. Decreased aggressiveness
VII.
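The relative-reduction point from the escape-learning section above (a 100 V drop matters more when the starting level is 300 V than when it is 1000 V) can be sketched numerically. This is my own illustration of the arithmetic, not a formula from the course; the helper name `relative_reduction` is hypothetical:

```python
def relative_reduction(v_before, v_after):
    """Relative (not absolute) drive reduction: the fraction of the
    original aversive stimulation that is removed."""
    return (v_before - v_after) / v_before

# The same absolute 100 V drop from the notes' example:
print(relative_reduction(300, 200))   # 0.333... (a third of the shock removed)
print(relative_reduction(1000, 900))  # 0.1 (only a tenth removed)
```

The same absolute reduction yields a much larger relative reduction from the lower starting voltage, which is why the notes predict faster escape learning in the 300 V → 200 V case.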
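The two-factor account above (fear classically conditioned to cues, then avoidance reinforced by fear reduction) can also be caricatured as a toy simulation. Everything below, including the update rules, parameter names, and values, is my own assumption for illustration; it is not a model presented in lecture:

```python
def two_factor_toy(n_trials=30, alpha=0.3, beta=0.2):
    """Toy sketch of two-factor theory (illustrative assumptions only).

    Factor 1: pairing cue and shock conditions fear to the cue
              (simple Rescorla-Wagner-style update toward 1.0).
    Factor 2: the avoidance response is strengthened in proportion
              to the fear reduction it produces (fear drops to ~0
              once the animal reaches the safety compartment).
    """
    fear = 0.0       # conditioned fear elicited by the warning cue (0..1)
    avoidance = 0.0  # strength of the avoidance response (0..1)
    for _ in range(n_trials):
        fear += alpha * (1.0 - fear)          # factor 1: fear conditioning
        fear_reduction = fear                 # escaping the cue removes the fear
        avoidance += beta * fear_reduction * (1.0 - avoidance)  # factor 2
    return fear, avoidance

fear, avoidance = two_factor_toy()
print(f"fear = {fear:.2f}, avoidance strength = {avoidance:.2f}")
```

Note the deliberate simplification: this caricature re-conditions fear on every trial, so fear and avoidance always rise together. The dissociation findings above (avoidance persisting even when fear is extinguished) are exactly what this simple coupling cannot capture, which is the problem with the two-factor theory.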