PSY 201 MIDTERM Chapters 6-8 STUDY GUIDE
University of Oregon, Psychology, PSY 201: Mind and Brain
Professor: Dassonville, P. | Term: Fall 2015
Chapter 6  





Terms to Know:  

• Learning: a relatively enduring change in behavior, resulting from experience
• Nonassociative learning: Responding after repeated exposure to a single stimulus, or event
◦Habituation: A decrease in behavioral response after repeated exposure to a stimulus
‣ If something around us is neither rewarding nor harmful, habituation leads us to ignore it. Unlike sensory adaptation, you can still perceive the stimuli; you just don't respond to them.

◦Sensitization: An increase in behavioral response after exposure to a stimulus
• Associative learning: Linking two stimuli, or events, that occur together

• Classical Conditioning (Pavlovian Conditioning): A type of associative learning in which a neutral  stimulus comes to elicit a response when it is associated with a stimulus that already produces a  response  





◦Behaviorism: founded by John B. Watson in 1913 as a way of establishing the credibility of  Psychology as a scientific discipline (a backlash to Introspectionism)

◦Ivan Pavlov: studied dogs to understand classical conditioning

◦Salivary reflex: This automatic, unlearned response occurs when a food stimulus is  presented to a hungry animal, including a human.  

◦Neutral Stimulus: A stimulus that elicits no reflexive response

◦Unconditioned stimulus: A stimulus that elicits a response, such as a reflex, without any  prior learning  

◦Unconditioned response: A response that does not have to be learned, such as a reflex
◦Conditioned stimulus: A stimulus that elicits a response only after learning has taken place
◦Conditioned response: A response to a conditioned stimulus; a response that has been learned

◦Acquisition: The gradual formation of an association between the conditioned and  unconditioned stimulus  





◦Extinction: A process in which the conditioned response is weakened when the  conditioned stimulus is repeated without the unconditioned stimulus  

◦Spontaneous recovery: A process in which a previously extinguished conditioned response  reemerges after the presentation of the conditioned stimulus  

◦Stimulus generalization: Learning that occurs when stimuli that are similar but not identical to the conditioned stimulus produce the conditioned response

◦Stimulus discrimination: A differentiation between two similar stimuli when only one of  them is associated with the unconditioned stimulus  

◦Second-order conditioning: The CR can be learned without the learner ever associating the  CS with the original US  

◦John B. Watson & Little Albert: Using "Little Albert" as a subject, Watson tested whether a  phobia (acquired fear that is out of proportion to the real threat posed by the object or  situation) could be created with classical conditioning (fear conditioning).

◦Phobia: Acquired fear that is out of proportion to the real threat posed by the object or  situation  

◦Fear conditioning: Animals can be classically conditioned to fear neutral objects
◦Counterconditioning: When a person suffers from a phobia, a clinician might expose the patient to small doses of the feared stimulus while having the client engage in an enjoyable task

◦Conditioned taste (or food) aversion: When one eats a particular food and later becomes ill with nausea, upset stomach, and vomiting. Whether or not the food caused the illness, most people don't like to eat that food anymore

◦Rescorla-Wagner model: A cognitive model of classical conditioning; the strength of the CS-US association is determined by the extent to which the unconditioned stimulus is expected

◦Positive prediction error: strengthens the association between the CS and US (after a  stimulus appears, something better than expected occurs)  

◦Negative prediction error: Weakens the CS-US relationship (an expected event does not  happen)  

◦Blocking effect: First, an animal will more easily associate an unconditioned stimulus with a novel stimulus than with a familiar stimulus (ex. a dog can be conditioned more easily with a sound new to it like a metronome vs. a sound it knows like a whistle). Second, once a CS is learned, it can prevent the acquisition of a new conditioned stimulus.
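The Rescorla-Wagner update is often written as ΔV = αβ(λ − V), where V is the current strength of the CS-US association, λ is the maximum strength the US supports (0 on trials where the US is omitted), and α and β are learning-rate parameters. The short Python sketch below only illustrates that update (the function name and parameter values are assumptions, not course code); it shows acquisition followed by extinction.

# Illustrative Rescorla-Wagner sketch (not course code): the prediction
# error (lam - V) drives learning. A positive error strengthens the
# CS-US association; a negative error weakens it.
def rescorla_wagner(trials, alpha=0.3, beta=1.0):
    """Return the association strength V after each trial.
    `trials` is a sequence of lambda values: 1.0 = US present, 0.0 = US absent."""
    V = 0.0
    history = []
    for lam in trials:
        V += alpha * beta * (lam - V)   # prediction-error update
        history.append(round(V, 3))
    return history

# 10 acquisition trials (CS paired with US), then 10 extinction trials (CS alone):
print(rescorla_wagner([1.0] * 10 + [0.0] * 10))  # V rises toward 1.0, then decays back toward 0.0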

• Operant (Instrumental) Conditioning: A learning process in which the consequences of an  action determine the likelihood that it will be performed in the future  

◦Thorndike's puzzle box: a small cage with a trapdoor. Thorndike would put animals in the  cage and see if they could escape. The trapdoor would open if the animal performed a  certain task  

◦Law of Effect: Thorndike's general theory of learning: Any behavior that leads to a "satisfying state of affairs" is likely to occur again, and any behavior that leads to an "annoying state of affairs" is less likely to occur again.

◦B.F. Skinner formalized the methods and theories of operant conditioning using his operant chamber (aka "Skinner Box"), which is modeled after, but is more complex than, Thorndike's puzzle box

◦Operant chamber (Skinner box): Inside a small cage, one lever is connected to a food  supply and a second lever is connected to a water supply. An animal placed in that cage  learns to press one lever to receive food and the other lever to receive water  

◦Shaping: A process of operant conditioning; it involves reinforcing behaviors that are  increasingly similar to the desired behavior  

◦Primary vs. secondary reinforcers  

‣ Primary reinforcers: reinforcers that are necessary for survival  

‣ Secondary reinforcers: Events or objects that serve as reinforcers but do not satisfy biological needs

◦Premack principle: a more-valued activity can be used to reinforce the performance of a less-valued activity (ex. Parents telling their children, "Eat your spinach and then you'll get dessert.")

◦Reinforcement vs. Punishment  

‣ Reinforcement is something that increases the probability of a behavior's being  repeated  

‣ Punishment is something that decreases the probability of a behavior's being repeated
◦Positive vs. Negative conditioning

‣ "Positive" means that something is being added, not whether the reinforcement is  good

‣ "Negative" means that something is being removed, not whether the reinforcement is  bad  

◦Positive reinforcement: Often called reward; The administration of a stimulus to increase  the probability of a behavior's being repeated  

◦Negative reinforcement: The removal of an unpleasant stimulus to increase the probability  of a behavior's being repeated  

◦Positive punishment: The administration of a stimulus to decrease the probability of a  behavior's recurring  

◦Negative punishment: The removal of a stimulus to decrease the probability of a behavior's  recurring  
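The four consequence types above boil down to two independent questions: is a stimulus added or removed, and does the target behavior become more or less likely? A toy sketch (the helper function is hypothetical, just to make the 2 x 2 explicit):

# Toy classifier for the 2x2 of operant consequences (illustrative only).
def classify(stimulus_added: bool, behavior_increases: bool) -> str:
    kind = "positive" if stimulus_added else "negative"               # stimulus added vs. removed
    effect = "reinforcement" if behavior_increases else "punishment"  # behavior more vs. less likely
    return f"{kind} {effect}"

print(classify(True, True))    # giving a treat     -> positive reinforcement
print(classify(False, True))   # shock stops        -> negative reinforcement
print(classify(True, False))   # spanking           -> positive punishment
print(classify(False, False))  # taking a toy away  -> negative punishment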

◦Continuous vs. Partial reinforcement  

‣ Continuous reinforcement: A type of learning in which behavior is reinforced each  time it occurs  

◦Ratio vs. Interval schedule  

‣ Ratio schedule: Based on the number of times the behavior occurs  

‣ Interval schedule: based on a specific unit of time  

◦Fixed vs. Variable schedule  

‣ Fixed schedule: predictable  

‣ Variable schedule: less predictable than fixed schedule  

◦Partial-reinforcement extinction effect: The greater persistence of behavior under partial  reinforcement than under continuous reinforcement  

◦Behavior modification: The use of operant conditioning techniques to eliminate unwanted  behaviors and replace them with desirable ones  

• Gambling  

◦Losses Disguised as Wins (LDWs): payouts smaller than the original bet are revealed just  like wins (flashing lights, bells & music)  

◦Near misses: displayed with higher probability than chance alone would cause; an "almost win"
‣ Nucleus accumbens reacts to a near miss just as much as (or even more than) a win
• Latent learning: Learning that takes place in the absence of reinforcement
• Mental representation (mental model)

◦Cognitive map: A visual/spatial mental representation of an environment
• Observational learning

◦Modeling: The imitation of observed behavior  

◦Vicarious learning: Learning the consequences of an action by watching others being  rewarded or punished for performing the action  

◦Mirror neurons: Neurons in the brain that are activated when one observes another  individual engage in an action and when one performs a similar action  

• Brain mechanisms of learning  

◦Reward/Pleasure center: Positive reinforcement is associated with the release of the  neurotransmitter dopamine in the "Reward Centers" (or "Pleasure Centers") of the brain (eg.  the nucleus accumbens, ventral tegmental area).  

‣ Food, sex, $$$  

‣ Amount of dopamine released sets a reinforcer's reward value  

• more dopamine=more effective reward  

◦Intracranial self-stimulation: rats press a lever to self-administer electrical stimulation to the reward centers of the brain. Stimulation is so rewarding that they will do it to the point of exhaustion.

◦Nucleus accumbens: one of the reward centers of the brain  

◦Dopamine: reward neurotransmitter  

◦Hebbian Rule: a synapse is strengthened if it is repeatedly active when the postsynaptic  neuron fires

◦Synaptic plasticity: the basis of learning involving a change in synaptic structure or biochemistry that alters the efficiency of the synapse in a positive or negative way. (Plasticity = the capacity for being molded or altered)

◦Long-term potentiation (LTP): Strengthening of a synaptic connection, making the  postsynaptic neurons more easily activated by presynaptic neurons

◦NMDA receptor: a type of glutamate receptor that opens only if a nearby neuron fires at the  same time  

◦Doogie mice: Mice genetically-enhanced to have more NMDA receptors have better  learning abilities  
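A toy illustration of the Hebbian rule and LTP ideas above (a sketch with made-up parameters, not the lecture's formalism): a synaptic weight grows only on trials where the presynaptic and postsynaptic neurons are active at the same time, so repeated coincident firing leaves a strengthened connection.

# Toy Hebbian sketch (illustrative): a synapse strengthens only when
# pre- and postsynaptic activity coincide ("fire together, wire together").
def hebbian_update(weight, pre_active, post_active, rate=0.1):
    if pre_active and post_active:
        weight += rate   # coincident firing strengthens the synapse (LTP-like)
    return weight

w = 0.0
activity = [(1, 1), (1, 0), (0, 1), (1, 1), (1, 1)]   # (pre, post) activity per trial
for pre, post in activity:
    w = hebbian_update(w, pre, post)
print(round(w, 2))   # 0.3 -- only the three coincident trials strengthened the synapse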

Questions to Consider:  

1. How does classical conditioning work?  

A. An unconditioned stimulus (food) produces an unconditioned response (salivation). This  stimulus and response do not have to be conditioned. Then, the unconditioned stimulus  is paired with a conditioned stimulus (such as a metronome). After the conditioned  stimulus is paired with the unconditioned stimulus for a while, the conditioned stimulus  alone will produce a conditioned response (salivation in response to the metronome).  

2. How can counterconditioning and extinction be used to minimize the effects of a phobia?
A. Counterconditioning works similarly to classical conditioning, but the feared stimulus is paired with a positive stimulus instead of an aversive one, so the fear response is gradually replaced. Extinction weakens the phobia by repeatedly presenting the feared stimulus without any aversive outcome, so the conditioned fear response fades.

3. What is the role of classical conditioning in drug addiction?  

A. There is a neutral stimulus (such as the environment stimuli associated with a drug  addiction). The US is the drug's effects on the brain and the UR is the reactions to the  drugs. The neutral stimulus is paired with the US and then becomes a CS. The CS leads to  the CR of the reactions to the drugs even if the drug is not present, and makes the user  crave the drug and want more.  

4. How can operant conditioning be used to shape an animal's behaviors?
A. If an animal does something well, it can be rewarded with a positive reinforcement (something that is given to increase the behavior that precedes the reinforcement, such as a treat) or a negative reinforcement (something that is taken away to increase the behavior that precedes the reinforcement, such as a shock being stopped after the animal performs the correct behavior). If it does something bad, it could be given a positive punishment (something that is given to decrease the behavior that precedes the punishment, such as a shock) or a negative punishment (something that is taken away to decrease the behavior that precedes the punishment, such as taking away a toy).
5. What are some examples of positive reinforcement, negative reinforcement, positive punishment and negative punishment?

A. Positive reinforcements administer a desirable stimulus (ex. treats, money, praise). Positive punishments administer an undesirable stimulus (ex. spanking, parking ticket); these are not as effective. Negative reinforcements remove an undesirable stimulus (ex. an annoying buzzer stops when the seatbelt is latched). Negative punishments remove a desirable stimulus (ex. removing privileges: grounding, time-outs)

6. Which is more effective in operant conditioning, reinforcement or punishment, and why?
A. Reinforcements are more effective because punishments can cause fear, anxiety, and resentment.

7. Why is immediate gratification more effective for conditioning than delayed gratification?
A. If the reward is immediate, the behavior being reinforced is still fresh in the learner's mind; if the reward is delayed, the learner may no longer connect it to what they did or did not do.

8. How do the different reinforcement schedules vary in terms of rate of learning, persistence of  learning, and rate of responding?  

A. Schedule of reinforcement modulates effectiveness  

a. Continuous reinforcement: reinforcement occurs with every occurrence of desired  behavior  

b. Partial reinforcement: reinforcement occurs intermittently  

1. Ratio schedule: reinforcement based on number of times behavior performed
2. Interval schedule: reinforcement provided after a specific amount of time
3. Fixed schedule: reinforcement provided after a fixed, predictable number of occurrences or amount of time

4. Variable schedule: reinforcement provided after a variable, unpredictable  number of occurrences or amount of time  

5. Faster learning with continuous reinforcement vs. partial reinforcement
6. More persistent learning occurs with partial reinforcement vs. continuous reinforcement (partial-reinforcement extinction effect)
7. Faster rate of responding with ratio schedules vs. interval schedules
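As a rough sketch of how the four partial schedules decide when a reinforcer is delivered (the rule definitions follow the terms above; the function, thresholds, and random draws are illustrative assumptions, not from the lecture):

import random

# Illustrative sketch of partial-reinforcement schedules.
# Ratio schedules count responses; interval schedules count elapsed time.
# "Fixed" uses a constant requirement; "variable" varies it around an average.
def reinforced(schedule, responses, seconds, n=5, t=30.0):
    """Return True if a reinforcer would be delivered right now."""
    if schedule == "fixed-ratio":
        return responses >= n                          # every n-th response
    if schedule == "variable-ratio":
        return responses >= random.randint(1, 2 * n)   # on average every n responses
    if schedule == "fixed-interval":
        return seconds >= t                            # first response after t seconds
    if schedule == "variable-interval":
        return seconds >= random.uniform(0, 2 * t)     # on average after t seconds
    raise ValueError(schedule)

print(reinforced("fixed-ratio", responses=5, seconds=0))      # True: 5th response
print(reinforced("fixed-interval", responses=1, seconds=10))  # False: too soon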

9. How is operant conditioning related to the development of superstitions?
A. If a reinforcement or punishment is accidentally delivered to a person/animal a couple of times, it can create superstitions (false beliefs) that one thing will lead to another, even if the two are not related at all.

10. What role does the nucleus accumbens play in operant conditioning?  

A. The nucleus accumbens is one of the reward centers of the brain. When a behavior is positively reinforced, dopamine is released in the nucleus accumbens, and the amount released sets the reinforcer's reward value.

11. How do addictive drugs (e.g., cocaine) affect the reward centers of the brain?  A. Addictive drugs provide a shortcut to the brain’s reward system by flooding the nucleus  accumbens with dopamine, which affects the reward centers in the brain in the long run.  12. Why are slot machines so effective at causing gambling addictions.  

A. Slot machines use operant conditioning to efficiently train addictive behaviors
a. Partial reinforcement to maximize persistence of behavior

b. Ratio schedules are used to maximize the rate of responding  

c. Flashing lights, bells & music heighten excitement with each win  

d. Losses Disguised as Wins (LDWs): payouts smaller than the original bet are revealed  just like wins (flashing lights, bells & music)  

e. Near misses are displayed with higher probability than chance alone would cause
f. Nucleus accumbens reacts to a near miss just as much as (or even more than) a win

13. What is some of the evidence for and against the idea that violence in the media can cause  violent behaviors in viewers?  

A. Bandura's Bobo doll experiment demonstrated that people learn a great deal just from observing others. If viewers watch violent TV shows or play violent games, observational learning can make them more violent. However, people's surroundings can also cause violence and aggression, so aggressive behavior might not be due to violence in the media.
14. What evidence demonstrates that LTP is involved in learning?

A. Evidence for Role of LTP in Learning  

a. LTP prominent in brain areas involved in learning  

b. Drugs that block LTP also impair learning abilities  

c. Drugs that facilitate LTP enhance learning abilities  

d. Mice genetically-enhanced to have more NMDA receptors have better learning  abilities (Doogie mice).

15. What does the phrase "Neurons that fire together, wire together" mean?
A. It means that when two neurons fire at the same time, the synapse between them is strengthened, so activity in one neuron is more likely to activate the other in the future, producing a stronger signal.

Chapter 7  

Terms to Know:  

• Memory: The nervous system's capacity to retain and retrieve skills and knowledge
• Encoding, storage, consolidation & retrieval

◦Encoding: the processing of information so that it can be stored  

◦Storage: the retention of encoded representation over time  

◦Consolidation: the neural process by which encoded information becomes stored in  memory  

◦Retrieval: the act of recalling or remembering stored information when it is needed
• Engram: refers to the physical site of memory storage; the place where memories "live" (term established by Karl Lashley)

• Karl Lashley: spent much of his career trying to figure out where in the brain memories are stored
• Long-term potentiation (LTP): Strengthening of a synaptic connection, making the postsynaptic neurons more easily activated by presynaptic neurons

◦(see also the related Ch. 6 lecture slides)  

◦Donald Hebb & the Hebbian Rule: a synapse is strengthened if it is repeatedly active when  the postsynaptic neuron fires

◦Synaptic plasticity: the basis of learning involving a change in synaptic structure of  biochemistry that alters the efficiency of the synapse in a positive or negative way.  (Plasticity=the capacity for being molded or altered)

◦NMDA receptor: a type of glutamate receptor that opens only if a nearby neuron fires at the  same time  

◦Doogie mice: Mice genetically-enhanced to have more NMDA receptors have better  learning abilities  

• Morris water maze  

◦A tub of water with a hidden platform. Animals were placed in the tub and had to find the  hidden platform. Over time, animals could find the platform easily by looking at  environmental cues in the room.

◦Environmental cues in room provide information that permits animals to orient themselves  in space and learn the location of a hidden platform  

◦Performance of normal vs. hippocampectomized rats: lack of a hippocampus impairs performance

• Medial temporal lobes & hippocampus  

◦Medial temporal lobes: middle section of the temporal lobes responsible for the formation  of new memories  

◦Hippocampus: neural structure involved in spatial memory  

• Patient H.M. (Henry Molaison)  

◦Underwent surgery for severe epileptic seizures in 1953 at age 27.  

◦Removal of both medial temporal lobes, including hippocampus.  

◦Almost total loss of the ability to encode new long term memories.  

◦But preserved short-term memory: holds normal conversations, intact reading ability, and other skills that require only temporary maintenance of information.

• Clive Wearing  

◦In 1985, damage to the medial temporal lobe due to an encephalitis infection.
◦Profound anterograde and retrograde amnesia.

◦Can still play the piano, although he has no recollection of ever having been trained.
◦Amnesia: A profound impairment of memory function as the result of brain injury.
‣ Retrograde Amnesia: cannot remember events prior to the brain damage; can still remember childhood memories
‣ Anterograde Amnesia: cannot form new memories for events that occur after the brain damage

‣ Declarative memory affected by amnesia  

• Consolidation & reconsolidation

◦Consolidation: the neural process by which encoded information becomes stored in  memory  

◦Reconsolidation: Neural process involved when memories are recalled and then stored  again for retrieval  

◦Reconsolidation view of memory:  

• What would happen if a person was given a drug that prevented the process of reconsolidation for a 6-hour period? Memories recalled during that 6-hour period, or shortly before it, would be lost.

• Rat undergoes classical fear conditioning. Anisomycin prevents reconsolidation only after recollection of the information, so that the information is forgotten.
• Sensory memory: a memory system that briefly stores sensory information in close to its original sensory format. Only lasts a fraction of a second

◦Iconic and echoic memory

‣ Iconic memory: visual sensory memory  

‣ Echoic memory: auditory sensory memory  

◦High capacity and very short duration (up to a few seconds).  

◦Easily accessible, but vulnerable  

• Working (Short-term) memory: A memory storage system that briefly holds a limited amount of  information in awareness  

◦Small capacity and short duration (seconds, or minutes with active rehearsal).
◦Easily accessible, but vulnerable

◦Retrieval, transformation, substitution: three processes that make distinct and independent contributions to updating the contents of working memory

◦Memory span: limited amount of information that working memory can hold is generally 7  items, plus or minus 2 (George Miller's 7±2 items)  

◦Chunking: Organizing information into meaningful units to make it easier to remember
• Long-term memory: The relatively permanent storage of information

◦Large capacity and indefinite duration (up to decades/lifetime).  

◦Difficult to access, but durable

◦Serial position effect: The idea that the ability to recall items from a list depends on the  order of presentation, with items presented early or late in the list remembered better than  those in the middle.  

‣ Primacy & recency effects  

• Primacy effect: the better memory that people have for items presented at the  beginning of the list  

• Recency effect: the better memory that people have for the most recent items, the  ones at the end of the list  

◦Mental representation: the concept of something (ex. a dog; when your visual system senses a shaggy, four-legged animal and your auditory system senses barking, you perceive a "dog")

◦Levels of processing model: the more deeply an item is encoded, the more meaning it has and the better it is remembered

‣ Maintenance vs. elaborative rehearsal  

• Maintenance rehearsal: simply repeating the item over and over  

• Elaborative rehearsal: encodes the information in more meaningful ways, such as thinking about the item conceptually or deciding whether it refers to oneself.
◦Schemas: Cognitive structures that help us perceive, organize, process and use information
◦Networks of association: an item's distinctive features are linked to identify the item
◦Spreading activation model: stimuli in working memory activate specific nodes in long-term memory
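A toy version of the spreading activation idea (the network, node names, and weights below are made up for illustration): activating one node passes a fraction of its activation to its associated nodes, which is how a retrieval cue can bring related items in long-term memory to mind.

# Toy spreading-activation sketch (illustrative network, not course material).
associations = {
    "dog": [("bark", 0.8), ("fur", 0.6), ("cat", 0.4)],
    "cat": [("fur", 0.7), ("meow", 0.8)],
}

def spread(start_node, steps=1):
    activation = {start_node: 1.0}
    for _ in range(steps):
        for node, level in list(activation.items()):
            for neighbor, weight in associations.get(node, []):
                # Each associated node receives a share of the source node's activation.
                activation[neighbor] = activation.get(neighbor, 0.0) + level * weight
    return activation

print(spread("dog"))  # "bark" and "fur" become strongly active; "cat" only weakly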

◦Retrieval cue: Anything that helps a person (or a nonhuman animal) recall information  stored in long-term memory  

◦Encoding specificity principle: The idea that any stimulus that is encoded along with an experience can later trigger memory for the experience

‣ Context-dependent memory: when the recall situation is similar to the encoding situation

‣ State-dependent memory: Memory can be enhanced when a person's internal states  match during encoding and recall

◦Mnemonics: Learning aids, strategies and devices that improve recall through the use of retrieval cues.

• Explicit (Declarative) learning: learning facts and information of which we can be aware (what  we normally think of as memory)  

◦Episodic memory: autobiographical memories (ex. childhood memories, memories from  episodes of your life)  

◦Semantic memory: generalized memory of facts  

• Implicit (Nondeclarative) learning: memory about perceptual and motor procedures of which  we are unaware  

◦Priming: exposure to one stimulus alters response to another  

◦Procedural (motor or skill) memory: involves motor skills and behavioral habits (also known  as motor memory)  

◦Conditioning: learning the relationship between stimuli (classical conditioning), or between  behaviors and outcomes (operant conditioning)  

• Prospective memory: Remembering to do something at some future time
• Forgetting

◦Transience: memory decay, forgetting over time  

‣ Proactive vs. retroactive interference  

• Proactive interference: interference that occurs when prior information inhibits the  ability to remember new information  

• Retroactive interference: interference that occurs when new information inhibits  the ability to remember old information  

◦Blocking: The temporary inability to remember something

‣ Tip-of-the-tongue phenomenon: in which people experience great frustration as they try to recall specific, somewhat obscure words. As in the saying "it's on the tip of my tongue" when you can't quite remember a word.

◦Absentmindedness: the inattentive or shallow encoding of events  

◦Persistence: The continuous recurrence of unwanted memories  

◦Source misattribution: Memory distortions that occur when people misremember the time, place, person, or circumstances involved with a memory

‣ Source amnesia: A type of misattribution that occurs when a person shows memory for an event but cannot remember where he or she encountered the information
‣ Cryptomnesia: A type of misattribution that occurs when a person thinks he or she has come up with a new idea, yet has only retrieved a stored idea and failed to attribute the idea to its proper source

◦Bias: the distorting influence of present knowledge, beliefs, and feelings on memory of past events

◦Suggestibility: The development of biased memories from misleading information
• Amnesia: A profound impairment of memory function as the result of brain injury.
◦Declarative memory affected by amnesia

◦Retrograde vs. anterograde amnesia  

‣ Retrograde Amnesia: cannot remember events prior to the brain damage; can still remember childhood memories
‣ Anterograde Amnesia: cannot form new memories for events that occur after the brain damage

• Flashbulb memories: Vivid episodic memories for the circumstances in which people first learned of a surprising, consequential, or emotionally arousing event

• Eyewitness testimony  

◦Confirmation bias: people tend to remember evidence that confirms their beliefs.
• False memories: When a person imagines an event happening, he or she forms a mental image of the event. The person might later confuse that mental image with a real memory.
• Repressed memories: the claim that memories of traumatic events can be repressed; people hold strong and passionate beliefs about whether such memories are accurate.

Questions to Consider:  

1. How do the three stages of the Information Processing Model of Memory (Sensory, Short-term and Long-term) differ in characteristics such as capacity, duration, accessibility and durability?
A. Sensory memory has high capacity and very short duration (up to a few seconds). It is easily accessible, but vulnerable. Short-term memory has small capacity and short duration (seconds, or minutes with active rehearsal). It is easily accessible, but vulnerable. Long-term memory has a large capacity and indefinite duration (up to decades/a lifetime). It is difficult to access, but durable.

2. What are the effects of damage to the medial temporal lobes and the hippocampus on memory? Would someone with damage to these structures forget how to ride a bike? Why or why not?

A. The medial temporal lobes and hippocampus are involved mainly with memories of facts and events (explicit memory), not motor memories, so damage to these structures will not cause someone to forget how to ride a bike.

3. How does remembering a distant memory introduce an opportunity for distortions of that  memory?  

A. Recalling a distant memory triggers reconsolidation: the memory has to be stored again, and during that process information encoded since the original event can distort it, or parts of it may simply be forgotten.

4. Why is working memory likened to a mental "workbench"?  

A. It is called a mental "workbench" because it is where we can examine, evaluate, transform  and compare different pieces of information  

5. Describe how chunking can be used to expand the capacity of working memory?
A. By grouping individual items into larger, meaningful units (chunks), more information can fit within working memory's span of about 7 ± 2 items (for example, remembering a 10-digit phone number as three chunks rather than ten separate digits).
6. What are some ways to foster the storage of information in long-term memory?
A. You could link the information to something you already know. You can use retrieval cues and mnemonics to help remember the information in a more meaningful way.
7. What are some of the ways in which eyewitness testimony can be flawed?
A. It can be flawed because of confirmation bias, when people tend to remember evidence that confirms their beliefs, and that information may or may not be true. Also, the eyewitness could have a false memory and remember something that didn't actually happen. If the event was emotionally scarring, the eyewitness could have repressed memories that they try to ignore, and when those memories resurface, the information could be false.
8. What are the seven sins of memory?

A. Transience, blocking, absentmindedness, persistence, misattribution, bias, suggestibility
9. What evidence demonstrates that LTP is involved in learning?

A. Evidence for Role of LTP in Learning  

a. LTP prominent in brain areas involved in learning  

b. Drugs that block LTP also impair learning abilities

c. Drugs that facilitate LTP enhance learning abilities  

d. Mice genetically-enhanced to have more NMDA receptors have better learning  abilities (Doogie mice).

10. What does the phrase "Neurons that fire together, wire together" mean?
A. It means that when two neurons fire at the same time, the synapse between them is strengthened, so activity in one neuron is more likely to activate the other in the future, producing a stronger signal.

Chapter 8  

Terms to Know:  

• Cognition: The mental activity that includes thinking and the understandings that result from  thinking.  

• Thinking: The mental manipulation of representations of knowledge about the world.
• Analogical representations: Mental representations that have some of the physical characteristics of objects; they are analogous to the objects

• Symbolic representations: Abstract mental representations that do not correspond to the  physical features of objects or ideas  

• Categorization: grouping things based on shared properties

◦Concept: A category, or class, of related items; it consists of mental representations of  those items  

‣ Prototype model: A way of thinking about concepts: Within each category is a best  example-a prototype-for that category  

‣ Exemplar model: A way of thinking about concepts: All members of a category are examples (exemplars); together they form the concept and determine category membership
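One way to see the contrast between the two models (a sketch with made-up feature vectors; the scoring functions are illustrative assumptions, not the textbook's math): the prototype model compares a new item to the category's average member, while the exemplar model compares it to every stored member.

import math

# Illustrative prototype vs. exemplar categorization (toy feature vectors).
def prototype_score(item, category):
    # Compare the item to the category's average member (its "prototype").
    prototype = [sum(vals) / len(vals) for vals in zip(*category)]
    return -math.dist(item, prototype)

def exemplar_score(item, category):
    # Compare the item to every stored example and average the similarities.
    return -sum(math.dist(item, ex) for ex in category) / len(category)

birds = [(1.0, 0.9), (0.9, 1.0), (0.2, 0.1)]   # e.g., robin, sparrow, penguin
new_item = (0.95, 0.95)
print(prototype_score(new_item, birds), exemplar_score(new_item, birds))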

• Schemas (see also Chapter 7)  

◦Stereotypes: Cognitive Schemas that allow for easy, fast processing of information about  people based on their membership in certain groups  

◦Scripts: Schemas that direct behavior over time within a situation

• Decision making: Attempting to select the best alternative from among several options
• Normative decision theory: Attempts to define how people should make decisions
◦Expected utility theory: People make decisions by considering the possible alternatives and choosing the most desirable one

‣ Expected utility = (value of the outcome) x (probability of obtaining it)
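The formula above applies outcome by outcome; a minimal sketch (illustrative code, reusing the gamble numbers worked out in Question 3 of the Questions to Consider below):

# Minimal expected-utility sketch:
# expected utility = sum over outcomes of (value of outcome) x (probability of obtaining it).
def expected_utility(outcomes):
    """`outcomes` is a list of (value, probability) pairs."""
    return sum(value * prob for value, prob in outcomes)

option_a = [(50, 0.5), (0, 0.5)]       # 50% chance of $50, 50% chance of nothing
option_b = [(200, 0.5), (-100, 0.5)]   # 50% chance of $200, 50% chance of losing $100
print(expected_utility(option_a))      # 25.0
print(expected_utility(option_b))      # 50.0 -- higher, yet many people still pick A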

• Descriptive decision theory: Attempts to predict how people actually make choices, not to  define ideal choices. Tries to account for actual behavior  

• Heuristics: Shortcuts (rules of thumb or informal guidelines) used to reduce the amount of  thinking that is needed to make decisions  

◦Anchoring: The tendency, in making judgments, to rely on the first piece of information  encountered or information that comes most quickly to mind.  

◦Framing: In decision making, the tendency to emphasize the potential losses or potential gains from at least one alternative

◦Availability heuristic: Making a decision based on the answers that most easily come to mind

◦Representativeness heuristic: Placing a person or object in a category if that person or  object is similar to one's prototype for that category

◦Affective heuristic: The tendency for people to overestimate how events will make them  feel in the future.  

◦Paradox of choice: too much choice leads to frustration, indecision and dissatisfaction with  the eventual selection.

• Maximizers vs. Satisficers

◦Maximizers seek to identify the perfect choice among a set of options  

◦Satisficers seek to find a “good enough” choice that meets their minimum requirements
• Problem solving

◦Subgoals: intermediate steps you need to complete in order to achieve the larger, whole goal

◦Restructuring: A new way of thinking about a problem that aids its solution
‣ Functional fixedness: In problem solving, having fixed ideas about the typical functions of objects

◦Algorithm: A guideline that, if followed correctly, will always yield the correct answer
◦Working backward: When the appropriate steps for solving a problem are not clear, proceeding from the goal state back to the initial state can help yield a solution

◦Appropriate analogies: using the solution to a similar, previously solved problem to help solve the current one

◦Insight: The sudden realization of a solution to a problem  

Questions to Consider:  

1. How do analogical representations differ from symbolic representations?
A. Analogical representations have some of the physical characteristics of objects (ex. images; typically faster to process, since no decoding is needed). Symbolic representations are abstract, with no resemblance to objects (ex. words; typically slower to process, since decoding is required).

2. How does the use of schemas sometimes lead to stereotyping?  

A. Stereotypes are made up of schemas that allow for easy, fast processing of information  about people based on their membership in certain groups. Sometimes, we group  people without having all of the information about them.  

3. What are some examples of behaviors that demonstrate that our decisions are not always as  rational as normative decision theories would have you think?  

A. Which would you prefer:  

a. a 50% chance of winning $50, and a 50% chance of getting nothing  

1. expected utility = $50 x 0.5 + $0 x 0.5 = $25 + $0 = $25  

b. a 50% chance of winning $200, and a 50% chance of losing $100  

1. expected utility = $200 x 0.5 - $100 x 0.5 = $100 - $50 = $50  

c. Expected utility theory suggests that option B has the greatest expected utility, so people should prefer B, but most choose A because we are not always rational.
4. Why does having a very expensive item listed on a restaurant's menu cause people to spend more money, even if they don't buy that item?

A. It is related to the heuristic called anchoring. When people see the restaurant's expensive item first, the other items (even though they are also expensive) seem more reasonably priced.

5. How is the recognition of subgoals useful in completing a complex task?
A. It makes the complex task seem more manageable. Instead of having one huge goal to accomplish at once, subgoals allow you to break the large goal into smaller ones, making it seem more attainable.

6. How can functional fixedness prevent the completion of complex tasks?
A. Sometimes we are so set on the typical function of an object that we don't look "outside the box" for a new, unique way to solve a problem. For example, if someone's window were stuck and they only had a hammer, the first thought would be to break the glass, because hammers are usually used to pound things; but the hammer could instead be used as a lever to pry the window open more easily.

7. What is some of the evidence that chimpanzees have insights similar to humans?
A. Chimpanzees try to solve problems, such as reaching bananas that are too high. One chimp stacked several boxes on top of each other and stood on them to reach the bananas. This behavior suggested that the chimp solved the problem through insight, just like a human would.

Chapter 6  

Terms to Know:  

• Learning: a relatively enduring change in behavior, resulting from experience • Nonassociative learning: Responding after repeated exposure to a single stimulus, or event.  ◦Habituation: A decrease in behavioral response after repeated exposure to a stimulus  ‣ If something around us is neither rewarding nor harmful, habituation leads us to ignore  it. Unlike sensory adaptation in hat you can still perceive stimuli. You just don't respond  to them.  

◦Sensitization: An increase in behavioral response after exposure to a stimulus  • Associative learning: Linking two stimuli, or events, that occur together  

• Classical Conditioning (Pavlovian Conditioning): A type of associative learning in which a neutral  stimulus comes to elicit a response when it is associated with a stimulus that already produces a  response  

◦Behaviorism: founded by John B. Watson in 1913 as a way of establishing the credibility of  Psychology as a scientific discipline (a backlash to Introspectionism)

◦Ivan Pavlov: studied dogs to understand classical conditioning

◦Salivary reflex: This automatic, unlearned response occurs when a food stimulus is  presented to a hungry animal, including a human.  

◦Neutral Stimulus: A stimulus that elicits no reflexive response

◦Unconditioned stimulus: A stimulus that elicits a response, such as a reflex, without any  prior learning  

◦Unconditioned response: A response that does not have to be learned, such as a reflex  ◦Conditioned stimulus: A stimulus that elicits a response only after learning has taken place  ◦Conditioned response: A response to a conditioned stimulus; a response that has been  learned  

◦Acquisition: The gradual formation of an association between the conditioned and  unconditioned stimulus  

◦Extinction: A process in which the conditioned response is weakened when the  conditioned stimulus is repeated without the unconditioned stimulus  

◦Spontaneous recovery: A process in which a previously extinguished conditioned response  reemerges after the presentation of the conditioned stimulus  

◦Stimulus generalization: Learning that occurs when stimuli that are similar but not identical  to the conditioned stimulus produce the conditioned response  

◦Stimulus discrimination: A differentiation between two similar stimuli when only one of  them is associated with the unconditioned stimulus  

◦Second-order conditioning: The CR can be learned without the learner ever associating the  CS with the original US  

◦John B. Watson & Little Albert: Using "Little Albert" as a subject, Watson tested whether a  phobia (acquired fear that is out of proportion to the real threat posed by the object or  situation) could be created with classical conditioning (fear conditioning).

◦Phobia: Acquired fear that is out of proportion to the real threat posed by the object or  situation  

◦Fear conditioning: Animals can be classically conditioned to fear neutral objects  ◦Counterconditioning: When a person suffers from a phobia, a clinician might expose the  patient to small doses of the feared stimulus while having the client engage in an enjoyable

task

◦Conditioned taste (or food) aversion: When one eats a particular food and later became ill  with nausea, upset stomach and vomiting. Whether or not the foot caused the illness, most  people don't like to eat that food anymore  

◦Rescorla-Wagner model: A cognitive model of classical conditioning; the strength of the  CS-US association is determined by the extent to which the unconditioned stimulus is  expected  

◦Positive prediction error: strengthens the association between the CS and US (after a  stimulus appears, something better than expected occurs)  

◦Negative prediction error: Weakens the CS-US relationship (an expected event does not  happen)  

◦Blocking effect: First, an animal will more easily associate an unconditioned stimulus with a  novel stimulus than with a familiar stimulus (ex. a dog can be conditioned more easily with  a sound new to it like a metronome vs. a sound it knows like a whistle). Second, once a CS is  learned, it can prevent the acquisition of a new conditioned stimulus.  

• Operant (Instrumental) Conditioning: A learning process in which the consequences of an  action determine the likelihood that it will be performed in the future  

◦Thorndike's puzzle box: a small cage with a trapdoor. Thorndike would put animals in the  cage and see if they could escape. The trapdoor would open if the animal performed a  certain task  

◦Law of Effect: Thorndike's general theory of learning: Any behavior that leads to a  "satisfying state of affairs" is likely to occur again, and any behavior that leads to an  "annoying state of affairs" is less likely to occur again.  

◦B.F. Skinner formalized the methods and theories of operant conditioning using his operant  chamber (aka "Skinner Box"), which is modeled after, but more more complex than  Thorndike's puzzle box  

◦Operant chamber (Skinner box): Inside a small cage, one lever is connected to a food  supply and a second lever is connected to a water supply. An animal placed in that cage  learns to press one lever to receive food and the other lever to receive water  

◦Shaping: A process of operant conditioning; it involves reinforcing behaviors that are  increasingly similar to the desired behavior  

◦Primary vs. secondary reinforcers  

‣ Primary reinforcers: reinforcers that are necessary for survival  

‣ Secondary reinforcers: Events or objects that serve as reinforcers but do not satisfy  biological needs  

◦Premack principle: a more-valued activity can be used to reinforce the performance of a  less-valued activity (ex. Parents telling their children, "Eat your spinach and then you'll get  dessert."  

◦Reinforcement vs. Punishment  

‣ Reinforcement is something that increases the probability of a behavior's being  repeated  

‣ Punishment is something that decreases the probability of a behavior's being repeated  ◦Positive vs. Negative conditioning  

‣ "Positive" means that something is being added, not whether the reinforcement is  good

‣ "Negative" means that something is being removed, not whether the reinforcement is  bad  

◦Positive reinforcement: Often called reward; The administration of a stimulus to increase  the probability of a behavior's being repeated  

◦Negative reinforcement: The removal of an unpleasant stimulus to increase the probability  of a behavior's being repeated  

◦Positive punishment: The administration of a stimulus to decrease the probability of a  behavior's recurring  

◦Negative punishment: The removal of a stimulus to decrease the probability of a behavior's  recurring  

◦Continuous vs. Partial reinforcement  

‣ Continuous reinforcement: A type of learning in which behavior is reinforced each  time it occurs  

◦Ratio vs. Interval schedule  

‣ Ratio schedule: Based on the number of times the behavior occurs  

‣ Interval schedule: based on a specific unit of time  

◦Fixed vs. Variable schedule  

‣ Fixed schedule: predictable  

‣ Variable schedule: less predictable than fixed schedule  

◦Partial-reinforcement extinction effect: The greater persistence of behavior under partial  reinforcement than under continuous reinforcement  

◦Behavior modification: The use of operant conditioning techniques to eliminate unwanted  behaviors and replace them with desirable ones  

• Gambling  

◦Losses Disguised as Wins (LDWs): payouts smaller than the original bet are revealed just  like wins (flashing lights, bells & music)  

◦Near misses: displayed with higher probability than chance alone would cause. Almost win  ‣ Nucleus accumbens reacts to near miss just as much (or even more than) a win • Latent learning: Learning that takes place in the absence of reinforcement  • Mental representation (mental model)  

◦Cognitive map: A visual/spatial mental representation of an environment  • Observational learning  

◦Modeling: The imitation of observed behavior  

◦Vicarious learning: Learning the consequences of an action by watching others being  rewarded or punished for performing the action  

◦Mirror neurons: Neurons in the brain that are activated when one observes another  individual engage in an action and when one performs a similar action  

• Brain mechanisms of learning  

◦Reward/Pleasure center: Positive reinforcement is associated with the release of the  neurotransmitter dopamine in the "Reward Centers" (or "Pleasure Centers") of the brain (eg.  the nucleus accumbens, ventral tegmental area).  

‣ Food, sex, $$$  

‣ Amount of dopamine released sets a reinforcer's reward value  

• more dopamine=more effective reward  

◦Intracranial self-stimulation: rats press a lever to self-administer electrical stimulation to the

reward centers of the brain. Stimulation is so rewarding that they will do it to the point of  exhaustion.  

◦Nucleus accumbens: one of the reward centers of the brain  

◦Dopamine: reward neurotransmitter  

◦Hebbian Rule: a synapse is strengthened if it is repeatedly active when the postsynaptic  neuron fires

◦Synaptic plasticity: the basis of learning involving a change in synaptic structure of  biochemistry that alters the efficiency of the synapse in a positive or negative way.  (Plasticity=the capacity for being molded or altered)

◦Long-term potentiation (LTP): Strengthening of a synaptic connection, making the  postsynaptic neurons more easily activated by presynaptic neurons

◦NMDA receptor: a type of glutamate receptor that opens only if a nearby neuron fires at the  same time  

◦Doogie mice: Mice genetically-enhanced to have more NMDA receptors have better  learning abilities  

Questions to Consider:  

1. How does classical conditioning work?  

A. An unconditioned stimulus (food) produces an unconditioned response (salivation). This  stimulus and response do not have to be conditioned. Then, the unconditioned stimulus  is paired with a conditioned stimulus (such as a metronome). After the conditioned  stimulus is paired with the unconditioned stimulus for a while, the conditioned stimulus  alone will produce a conditioned response (salivation in response to the metronome).  

2. How can counterconditioning and extinction be used to minimize the effects of a phobia?  A. Counterconditioning works similarly to classical conditioning, but phobias are paired with  positive stimuli instead of feared stimuli in order to make it less scary.  

3. What is the role of classical conditioning in drug addiction?  

A. There is a neutral stimulus (such as the environment stimuli associated with a drug  addiction). The US is the drug's effects on the brain and the UR is the reactions to the  drugs. The neutral stimulus is paired with the US and then becomes a CS. The CS leads to  the CR of the reactions to the drugs even if the drug is not present, and makes the user  crave the drug and want more.  

4. How can operant conditioning be used to shape an animal's behaviors?  A. If an animal does something well, they can be rewarded with a positive reinforcement  (something that is given to increase the behavior that precedes the reinforcement, such as  a treat) or even a negative reinforcement (something that is taken away to increase the  behavior that precedes the reinforcement, such as a shock being stopped after they do  the correct behavior). If they do something bad, they could be given a positive  punishment (something that is given to decrease the behavior that precedes the  punishment, such as a shock) or a negative punishment (something that is taken away to  decrease the behavior that precedes the punishment, such as taking away a toy).  5. What are some examples of positive reinforcement, negative reinforcement, positive  punishment and negative punishment?  

A. Positive reinforcements administer a desirable stimulus (ex. treats, money praise). Positive  punishments administer an undesirable stimulus (ex. spanking, parking ticket); not as

effective. Negative reinforcements remove an undesirable stimulus (ex. annoying, buzzer  stops when seatbelt latched). Negative punishments remove a desirable stimulus (ex.  remove privileges: grounding, time outs)  

6. Which is more effective in operant conditioning, reinforcement or punishment, and why?  A. Reinforcements are more effective because punishments can cause fear, anxiety and  resentment.  

7. Why is immediate gratification more effective for conditioning than delayed gratification?  A. If the reward is immediate, then the person/animal being conditioned will have the  wanted behavior fresh in their mind rather than if the reward is given later, they might not  remember what they did/did not do.  

8. How do the different reinforcement schedules vary in terms of rate of learning, persistence of  learning, and rate of responding?  

A. Schedule of reinforcement modulates effectiveness  

a. Continuous reinforcement: reinforcement occurs with every occurrence of desired  behavior  

b. Partial reinforcement: reinforcement occurs intermittently  

1. Ratio schedule: reinforcement based on number of times behavior performed  2. Interval schedule: reinforcement provided after a specific amount of time  3. Fixed schedule: reinforcement provided after a fixed, predictable number of  occurrences or amount of time  

4. Variable schedule: reinforcement provided after a variable, unpredictable  number of occurrences or amount of time  

5. Faster learning with continuous reinforcement vs. partial reinforcement  6. More persistent learning occurs with partial reinforcement vs. continuous  reinforcement (partial-reinforcement extinction effect)  

7. Faster rate of responding with ration schedules vs. interval schedules  

9. How is operant conditioning related to the development of superstitions?  A. If a reinforcement of punishment is given to a person/animal accidentally a couple times,  it can cause superstitions (which are false beliefs) that one thing will lead to another, even  if they are not related whatsoever.  

10. What role does the nucleus accumbens play in operant conditioning?  

A. The nucleus accumbens is one of the reward centers in the brain. If one is given a positive  reinforcement (reward), then the nucleus accumbens plays a role in that.  

11. How do addictive drugs (e.g., cocaine) affect the reward centers of the brain?  A. Addictive drugs provide a shortcut to the brain’s reward system by flooding the nucleus  accumbens with dopamine, which affects the reward centers in the brain in the long run.  12. Why are slot machines so effective at causing gambling addictions.  

A. Slot machines use operant conditioning to efficiently train addictive behaviors  a. Partial reinforcement to maximize persistence of behavior  

b. Ratio schedules are used to maximize the rate of responding  

c. Flashing lights, bells & music heighten excitement with each win  

d. Losses Disguised as Wins (LDWs): payouts smaller than the original bet are revealed  just like wins (flashing lights, bells & music)  

e. Near misses are displayed with higher probability than chance alone would cause  f. Nucleus accumbens reacts to near miss just as much (or even more than) a win

13. What is some of the evidence for and against the idea that violence in the media can cause  violent behaviors in viewers?  

A. It is proven that people learn a lot just from observing (proved by Bandura's Bobo doll  experiment). If viewers watch violent TV shows or play violent games, it can cause the  observer to become more violent. However, sometimes people's surroundings can cause  violence and aggression and it might not be because of the violence in the media.  14. What evidence demonstrates that LTP is involved in learning?  

A. Evidence for Role of LTP in Learning  

a. LTP prominent in brain areas involved in learning  

b. Drugs that block LTP also impair learning abilities  

c. Drugs that facilitate LTP enhance learning abilities  

d. Mice genetically-enhanced to have more NMDA receptors have better learning  abilities (Doogie mice).

15. What does the phrase "Neurons that fire together, wire together," mean?  A. It means that when neurons fire at the same time, they are more likely to fire at the same  time again in the future, producing a stronger signal.  

Chapter 7  

Terms to Know:  

• Memory: The nervous system's capacity to retain and retrieve skills and knowledge.  • Encoding, storage, consolidation & retrieval  

◦Encoding: the processing of information so that it can be stored  

◦Storage: the retention of encoded representation over time  

◦Consolidation: the neural process by which encoded information becomes stored in  memory  

◦Retrieval: the act of recalling or remembering stored information when it is needed  • Engram: refers to the physical site of memory storage; the place where memories "live" (term  established by Karl Lashley)  

• Karl Lashley: spent a lot of his time figuring out where in the brain memories are stored  • Long-term potentiation (LTP): Strengthening of a synaptic connection, making the postsynaptic  neurons more easily activated by presynaptic neurons

◦(see also the related Ch. 6 lecture slides)  

◦Donald Hebb & the Hebbian Rule: a synapse is strengthened if it is repeatedly active when  the postsynaptic neuron fires

◦Synaptic plasticity: the basis of learning involving a change in synaptic structure of  biochemistry that alters the efficiency of the synapse in a positive or negative way.  (Plasticity=the capacity for being molded or altered)

◦NMDA receptor: a type of glutamate receptor that opens only if a nearby neuron fires at the  same time  

◦Doogie mice: Mice genetically-enhanced to have more NMDA receptors have better  learning abilities  

• Morris water maze  

◦A tub of water with a hidden platform. Animals were placed in the tub and had to find the  hidden platform. Over time, animals could find the platform easily by looking at  environmental cues in the room.

◦Environmental cues in room provide information that permits animals to orient themselves  in space and learn the location of a hidden platform  

◦Performance of normal vs. hoppocampectomized rats. Lack of hippocampus impairs  performance  

• Medial temporal lobes & hippocampus  

◦Medial temporal lobes: middle section of the temporal lobes responsible for the formation  of new memories  

◦Hippocampus: neural structure involved in spatial memory  

• Patient H.M. (Henry Molaison)  

◦Underwent surgery for severe epileptic seizures in 1953 at age 27.  

◦Removal of both medial temporal lobes, including hippocampus.  

◦Almost total loss of the ability to encode new long term memories.  

◦But preserved short term memory...holds normal conversations, intact reading ability, and  other skills that require only temporary maintenance information.  

• Clive Wearing  

◦In 1985, damage to the medial temporal lobe due to an encephalitis infection.  ◦Profound anterograde and retrogade amnesia.  

◦Can still play the piano, although he has no recollection of ever having being trained.  ◦Amnesia: A profound impairment of memory function as the result of brain injury.  ‣ Retrograde Amnesia: cannot remember events prior to brain damage. Can remember  childhood memories  

‣ Anterograde Amnesia: cannot later remember events that occur after brain damage.  Can form new memories  

‣ Declarative memory affected by amnesia  

• Consolidation & reconsolidation

◦Consolidation: the neural process by which encoded information becomes stored in  memory  

◦Reconsolidation: Neural process involved when memories are recalled and then stored  again for retrieval  

◦Reconsolidation view of memory:  

• What would happen if a person was given a certain drug that prevented the process of reconsolidation for a 6-hour period? Memories recalled during that 6-hour period, or shortly before it, would be lost.

• Rat example: a rat undergoes classical fear conditioning. Anisomycin prevents reconsolidation, but only if given after recollection of the information, so that the recalled information is forgotten.

• Sensory memory: a memory system that briefly stores sensory information in close to its original sensory format. Lasts only a fraction of a second to a few seconds.

◦Iconic and echoic memory

‣ Iconic memory: visual sensory memory  

‣ Echoic memory: auditory sensory memory  

◦High capacity and very short duration (up to a few seconds).  

◦Easily accessible, but vulnerable  

• Working (Short-term) memory: A memory storage system that briefly holds a limited amount of  information in awareness  

◦Small capacity and short duration (seconds, or minutes with active rehearsal).

◦Easily accessible, but vulnerable

◦Retrieval, transformation, substitution: three processes that make distinct and independent contributions to updating the contents of working memory

◦Memory span: limited amount of information that working memory can hold is generally 7  items, plus or minus 2 (George Miller's 7±2 items)  

◦Chunking: Organizing information into meaningful units to make it easier to remember

• Long-term memory: The relatively permanent storage of information

◦Large capacity and indefinite duration (up to decades/lifetime).  

◦Difficult to access, but durable

◦Serial position effect: The idea that the ability to recall items from a list depends on the  order of presentation, with items presented early or late in the list remembered better than  those in the middle.  

‣ Primacy & recency effects  

• Primacy effect: the better memory that people have for items presented at the  beginning of the list  

• Recency effect: the better memory that people have for the most recent items, the  ones at the end of the list  

◦Mental representation: the concept of something (ex. a dog; when your visual system senses a shaggy, four-legged animal and your auditory system senses barking, you perceive a "dog")

◦Levels of processing model: the more deeply an item is encoded, the more meaning it has and the better it is remembered

‣ Maintenance vs. elaborative rehearsal  

• Maintenance rehearsal: simply repeating the item over and over  

• Elaborative rehearsal: encodes the information in more meaningful ways, such as thinking about the item conceptually or deciding whether it refers to oneself.

◦Schemas: Cognitive structures that help us perceive, organize, process and use information

◦Networks of association: an item's distinctive features are linked together to identify the item

◦Spreading activation model: stimuli in working memory activate specific nodes in long-term memory, and activation then spreads to associated nodes (a toy sketch of this idea appears after the Mnemonics entry below)

◦Retrieval cue: Anything that helps a person (or a nonhuman animal) recall information  stored in long-term memory  

◦Encoding specificity principle: The idea that any stimulus that is encoded along with an experience can later trigger memory for the experience

‣ Context-dependent memory: Memory can be enhanced when the recall situation is similar to the encoding situation

‣ State-dependent memory: Memory can be enhanced when a person's internal states  match during encoding and recall

◦Mnemonics: Learning aids, strategies and devices that improve recall through the use of retrieval cues.
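A toy sketch of the spreading activation idea referenced above. The semantic network, the 0.5 decay factor, and the 0.2 activation threshold are all made-up assumptions for illustration (real models are far more elaborate); it just shows activation starting at one node ("dog") and weakening as it spreads to associated nodes.

# Hypothetical semantic network: each node lists its associated nodes.
network = {
    "dog": ["bark", "fur", "pet"],
    "pet": ["cat", "dog"],
    "cat": ["fur", "meow", "pet"],
}

def spread(start, activation=1.0, decay=0.5, threshold=0.2, activations=None):
    """Recursively pass a weakened amount of activation to linked nodes."""
    if activations is None:
        activations = {}
    # Stop if the activation is too weak or the node is already at least this active.
    if activation < threshold or activations.get(start, 0.0) >= activation:
        return activations
    activations[start] = activation
    for neighbor in network.get(start, []):
        spread(neighbor, activation * decay, decay, threshold, activations)
    return activations

print(spread("dog"))
# {'dog': 1.0, 'bark': 0.5, 'fur': 0.5, 'pet': 0.5, 'cat': 0.25}

Related nodes (pet, cat) end up partially activated even though only "dog" was cued, which is one way to picture why a retrieval cue can bring associated items to mind.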

• Explicit (Declarative) learning: learning facts and information of which we can be aware (what  we normally think of as memory)  

◦Episodic memory: autobiographical memories (ex. childhood memories, memories from  episodes of your life)  

◦Semantic memory: generalized memory of facts  

• Implicit (Nondeclarative) learning: memory about perceptual and motor procedures of which  we are unaware  

◦Priming: exposure to one stimulus alters response to another  

◦Procedural (motor or skill) memory: involves motor skills and behavioral habits (also known  as motor memory)  

◦Conditioning: learning the relationship between stimuli (classical conditioning), or between  behaviors and outcomes (operant conditioning)  

• Prospective memory: Remembering to do something at some future time

• Forgetting

◦Transience: memory decay, forgetting over time  

‣ Proactive vs. retroactive interference  

• Proactive interference: interference that occurs when prior information inhibits the  ability to remember new information  

• Retroactive interference: interference that occurs when new information inhibits  the ability to remember old information  

◦Blocking: The temporary inability to remember something

‣ Tip-of-the-tongue phenomenon: a form of blocking in which people experience great frustration as they try to recall specific, somewhat obscure words, as in the saying "it's on the tip of my tongue" when you can't quite remember a word.

◦Absentmindedness: the inattentive or shallow encoding of events  

◦Persistence: The continuous recurrence of unwanted memories  

◦Source misattribution: Memory distortions that occur when people misremember the time, place, person, or circumstances involved with a memory

‣ Source amnesia: A type of misattribution that occurs when a person shows memory for an event but cannot remember where he or she encountered the information

‣ Cryptomnesia: A type of misattribution that occurs when a person thinks he or she has come up with a new idea, yet has only retrieved a stored idea and failed to attribute the idea to its proper source

◦Bias: the changing of memories over time so that they become consistent with a person's current beliefs or attitudes

◦Suggestibility: The development of biased memories from misleading information

• Amnesia: A profound impairment of memory function as the result of brain injury.

◦Declarative memory affected by amnesia

◦Retrograde vs. anterograde amnesia  

‣ Retrograde Amnesia: cannot remember events from before the brain damage, but can still form new memories

‣ Anterograde Amnesia: cannot remember events that occur after the brain damage (cannot form new long-term memories), but can still remember old memories, such as childhood memories

• Flashbulb memories: Vivid episodic memories for the circumstances in which people first learned of a surprising, consequential, or emotionally arousing event

• Eyewitness testimony  

◦Confirmation bias: people tend to remember evidence that confirms their beliefs.

• False memories: When a person imagines an event happening, he or she forms a mental image of the event. The person might later confuse that mental image with a real memory.

• Repressed memories: the idea that memories of traumatic events are repressed (pushed out of awareness). People hold strong and passionate beliefs about whether such memories are real and accurate.

Questions to Consider:  

1. How do the three stages of the Information Processing Model of Memory (Sensory, Short-term  and Long-term) differ in characteristics such as capacity, duration, accessibility and durability?  A. Sensory memory has high capacity and very short duration (up to a few seconds). It is  easily accessible, but vulnerable. Short-term memory has small capacity and short  duration (seconds, or minutes with active rehearsal). It is easily accessible, but vulnerable.  Long-term memory has a large capacity and indefinite duration (up to decades/lifetime).  It is difficult to access, but durable.  

2. What are the effects of damage to the medial temporal lobes and the hippocampus on memory? Would someone with damage to these structures forget how to ride a bike? Why or why not?

A. The medial temporal lobes and hippocampus are involved mainly with memories of facts and events (explicit memory), not motor memories, so damage to these structures will not cause someone to forget how to ride a bike.

3. How does remembering a distant memory introduce an opportunity for distortions of that  memory?  

A. When a distant memory is recalled, it has to be reconsolidated (stored again). During that window the memory is open to change, so misleading information, suggestion, or more recently encoded memories can be incorporated into it, distorting the original memory.

4. Why is working memory likened to a mental "workbench"?  

A. It is called a mental "workbench" because it is where we can examine, evaluate, transform  and compare different pieces of information  

5. Describe how chunking can be used to expand the capacity of working memory.

A. By organizing individual items into larger, meaningful units (chunks), each chunk counts as roughly one item in working memory, so more total information can be held within the limit of about 7±2 items (for example, remembering a 10-digit phone number as three chunks rather than ten separate digits).

6. What are some ways to foster the storage of information in long-term memory?

A. You could link the information to something you already know. You can also use retrieval cues and mnemonics to help encode the information in a more meaningful way.

7. What are some of the ways in which eyewitness testimony can be flawed?

A. It can be flawed because of confirmation bias: people tend to remember evidence that confirms their beliefs, and that information may or may not be true. The eyewitness could also have a false memory and remember something that didn't actually happen. If the event was emotionally scarring, the eyewitness could have repressed memories, and when those resurface the information could be false.

8. What are the seven sins of memory?

A. Transience, blocking, absentmindedness, persistence, misattribution, bias, suggestibility

9. What evidence demonstrates that LTP is involved in learning?

A. Evidence for Role of LTP in Learning  

a. LTP prominent in brain areas involved in learning  

b. Drugs that block LTP also impair learning abilities

c. Drugs that facilitate LTP enhance learning abilities  

d. Mice genetically-enhanced to have more NMDA receptors have better learning  abilities (Doogie mice).

10. What does the phrase "Neurons that fire together, wire together," mean?  A. It means that when neurons fire at the same time, they are more likely to fire at the same  time again in the future, producing a stronger signal.  

Chapter 8  

Terms to Know:  

• Cognition: The mental activity that includes thinking and the understandings that result from  thinking.  

• Thinking: The mental manipulation of representations of knowledge about the world.

• Analogical representations: Mental representations that have some of the physical characteristics of objects; they are analogous to the objects

• Symbolic representations: Abstract mental representations that do not correspond to the  physical features of objects or ideas  

• Categorization: grouping things based on shared properties

◦Concept: A category, or class, of related items; it consists of mental representations of  those items  

‣ Prototype model: A way of thinking about concepts: within each category there is a best example (a prototype) for that category

‣ Exemplar model: A way of thinking about concepts: All members of a category are examples (exemplars); together they form the concept and determine category membership

• Schemas (see also Chapter 7)  

◦Stereotypes: Cognitive Schemas that allow for easy, fast processing of information about  people based on their membership in certain groups  

◦Scripts: Schemas that direct behavior over time within a situation

• Decision making: Attempting to select the best alternative from among several options

• Normative decision theory: Attempts to define how people should make decisions

◦Expected utility theory: People make decisions by considering the possible alternatives and choosing the most desirable one

‣ Expected utility = value of the outcome x probability of obtaining it (a short worked sketch follows this entry)
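A short worked sketch of the expected utility formula in Python, using the two hypothetical gambles from Question 3 at the end of this chapter. The function name and list layout are my own, but the arithmetic matches the formula above.

def expected_utility(outcomes):
    """outcomes: list of (value, probability) pairs for one option."""
    return sum(value * probability for value, probability in outcomes)

option_a = [(50, 0.5), (0, 0.5)]      # 50% chance of winning $50, 50% chance of nothing
option_b = [(200, 0.5), (-100, 0.5)]  # 50% chance of winning $200, 50% chance of losing $100

print(expected_utility(option_a))  # 25.0
print(expected_utility(option_b))  # 50.0 -> higher expected utility, yet many people still pick option A

Normative theory says to pick the option with the higher expected utility (option B here); descriptive decision theories try to account for why people often pick A anyway.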

• Descriptive decision theory: Attempts to predict how people actually make choices, not to  define ideal choices. Tries to account for actual behavior  

• Heuristics: Shortcuts (rules of thumb or informal guidelines) used to reduce the amount of  thinking that is needed to make decisions  

◦Anchoring: The tendency, in making judgments, to rely on the first piece of information  encountered or information that comes most quickly to mind.  

◦Framing: In decision making, the tendency to emphasize the potential losses or potential gains from at least one alternative

◦Availability heuristic: Making a decision based on the answer that comes most easily to mind

◦Representativeness heuristic: Placing a person or object in a category if that person or  object is similar to one's prototype for that category

◦Affective heuristic: The tendency for people to overestimate how events will make them  feel in the future.  

◦Paradox of choice: too much choice leads to frustration, indecision and dissatisfaction with  the eventual selection.

• Maximizers vs. Satisficers

◦Maximizers seek to identify the perfect choice among a set of options  

◦Satisficers seek to find a “good enough” choice that meets their minimum requirements

• Problem solving

◦Subgoals: the smaller steps you need to complete in order to achieve the larger, overall goal

◦Restructuring: A new way of thinking about a problem that aids its solution

‣ Functional fixedness: In problem solving, having fixed ideas about the typical functions of objects

◦Algorithm: A guideline that, if followed correctly, will always yield the correct answer

◦Working backward: When the appropriate steps for solving a problem are not clear, proceeding from the goal state back to the initial state can help yield a solution

◦Appropriate analogies: using the solution to a similar, previously solved problem as a guide for solving the current problem

◦Insight: The sudden realization of a solution to a problem  

Questions to Consider:  

1. How do analogical representations differ from symbolic representations?  A. Analogical representations have some of the physical characteristics of objects (ex. images; typically faster to process and need no decoding). Symbolic representations are abstract, with no resemblance to objects (ex. words; typically slower to process and require decoding).

2. How does the use of schemas sometimes lead to stereotyping?  

A. Stereotypes are made up of schemas that allow for easy, fast processing of information  about people based on their membership in certain groups. Sometimes, we group  people without having all of the information about them.  

3. What are some examples of behaviors that demonstrate that our decisions are not always as  rational as normative decision theories would have you think?  

A. Which would you prefer:  

a. a 50% chance of winning $50, and a 50% chance of getting nothing  

1. expected utility = $50 x 0.5 + $0 x 0.5 = $25 + $0 = $25  

b. a 50% chance of winning $200, and a 50% chance of losing $100  

1. expected utility = $200 x 0.5 - $100 x 0.5 = $100 - $50 = $50  

c. Expected utility theory suggests that option B has the greater expected utility, so people should prefer B, but most choose A because we are not always rational.

4. Why does having a very expensive item listed on a restaurant's menu cause people to spend more money, even if they don't buy that item?

A. This is related to the anchoring heuristic. When people see the restaurant's expensive item first, the other items (even though they are expensive) seem more reasonably priced by comparison.

5. How is the recognition of subgoals useful in completing a complex task?  A. It makes the complex task seem more manageable. Instead of having a huge goal to accomplish at once, subgoals allow you to break up the large goal into smaller ones and make it seem more reasonable.

6. How can functional fixedness prevent the completion of complex tasks?  A. Sometimes we are so set on the typical functions of certain objects that we don't think "outside the box" to find a new, unique way to solve a problem. For example, if someone's window was stuck and they only had a hammer, the first thought would be to break the glass because hammers are usually used to pound things, but the hammer could instead be used as a lever to pry the window open more easily.

7. What is some of the evidence that chimpanzees have insights similar to humans?  A. Chimpanzees try to solve problems, such as reaching bananas that are too high. One  chimp stacked several boxes on top of each other and stood on them to reach the  bananas. This behavior suggested that the chimp solved the problem through insight, just  like how a human would.
