Memory and Cognition Notes for 1st Exam!
This 52-page bundle was uploaded by Rachel Brotman on Tuesday, September 27, 2016. The bundle belongs to a course at George Washington University taught by Sohn in Fall 2015.
CHAPTER 1: The Science of Cognition

Topics:
- Brain: neuronal level, functional organization, signals of brain activity
- Cognitive Neuroscience: the study of how cognition is realized in the brain

Information-Processing Analyses:
- The information-processing approach breaks a cognitive task down into a set of abstract information-processing steps.
- Information processing is discussed without any reference to the brain.

Sternberg Paradigm: you are shown a set of numbers (the memory set). The set goes away. Then you are shown a single number (the "probe") that may or may not have been in the set, and you have to say whether it was or wasn't.
- Memory set: set size varies
- Probe:
  o Target: "yes" response. This is a technical term for a probe that was part of the set.
  o Foil: "no" response. This is a technical term for a probe that was not part of the set.
- Response time effects:
  o Set size effect
  o Target-foil effect
  o Serial position effect
- Variables that can be manipulated in the experiment: the length of the string of numbers ("set size") and the order of the numbers.

Serial Position Effect: does it matter in what order the items in the list are presented? It depends on the type of search.
- Serial search: looking at each item in the set one by one. The more items there are in a serial search, the longer the reaction time will be.
- Parallel search: taking in all items in the set at once, looking at everything. If the search is parallel, the set size doesn't matter: there is no slope, and no serial position effect is predicted, because every item in the set is processed at the same time.
- Self-terminating search: for "yes" items, you stop the search as soon as you find the item.
- Exhaustive search: for "no" items, you have to go through every item in order to be sure the probe was not in the set.
- If people perform self-terminating searches on "yes" trials, then the slope for targets will not be as steep as the slope for foils on "no" trials.
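The serial search variants above can be sketched as a comparison counter. This is a minimal illustration, not a model from the notes; the example memory set and probe are made up:

```python
import random

def comparisons(memory_set, probe, self_terminating=True):
    """Count the comparisons a serial search makes before it can respond."""
    count = 0
    for item in memory_set:
        count += 1
        if self_terminating and item == probe:
            break  # "yes" found: a self-terminating search stops early
    return count

# An exhaustive search always makes set-size comparisons.
assert comparisons([3, 7, 9, 1], 9, self_terminating=False) == 4
# A self-terminating search stops at the match (here, the 3rd item).
assert comparisons([3, 7, 9, 1], 9, self_terminating=True) == 3
```

On average, a self-terminating search through n items makes about (n+1)/2 comparisons on target trials versus n for an exhaustive search, which is why the predicted target slope is roughly half the foil slope.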
Set Size Effect:
- If people perform a serial search, then as memory set size increases, reaction time increases: a positive slope. The larger the memory set, the longer the reaction time.
- Reaction time is the dependent variable. If the minimum reaction time is greater in one condition than in another, we can infer that that condition requires more cognitive resources, or a different kind of cognitive resources, than the other.
- Reaction time in the Sternberg paradigm is measured from the onset of the probe, covering encoding, comparison, and judgment.
- If people perform parallel searches, the number of comparisons does not change regardless of the number of items.
- If the search is self-terminating, "yes" trials involve fewer comparisons, since the search can stop once the item is found. Therefore, we can predict that the slope will be steeper for foils than for targets.
- Some people perform an exhaustive search, meaning that even after they find a match they keep looking through everything. In this case, the slope will be the same for foils and targets.

Information Search:
- Serial vs. parallel
  o Serial: one item after another
  o Parallel: multiple items at one time
  o Limited capacity or capacity-free?
- Exhaustive vs. self-terminating
  o Exhaustive: all of the items in the search set
  o Self-terminating: search until the match is found
  o This distinction gives insight into different processing stages.

Information Processing:
- Slope:
  o Reflects time per comparison
  o Differs by set size, not by target vs. foil
- Intercept:
  o Reflects encoding, decision making, and judgment
  o Does not change with set size
  o May differ with stimulus quality and complexity of the response

Brain (preview):
- Neuron, synapse, neurotransmitter: how information is transmitted in the brain
- Organization of the brain: basal ganglia, hippocampus, cortical subdivisions, hemispheric asymmetry
- Research methods: how to determine which method is good for which type of research.
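The slope/intercept decomposition above amounts to a linear model of reaction time. A minimal sketch, with illustrative placeholder values for the intercept and per-item slope (they are not figures from the notes):

```python
def predicted_rt(set_size, intercept_ms=400.0, slope_ms=38.0):
    """Serial-search prediction: RT grows linearly with memory set size.
    The intercept reflects encoding and judgment; the slope reflects the
    time per comparison. Both numeric values here are assumptions."""
    return intercept_ms + slope_ms * set_size

# Each added item costs exactly one comparison's worth of time.
assert predicted_rt(4) - predicted_rt(3) == 38.0
```

Under this model, a steeper target-foil slope difference shows up in `slope_ms`, while changes in stimulus quality or response complexity would shift only `intercept_ms`.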
Neuron Structure:
- From an information-processing point of view, neurons are the most important component of the nervous system.
- Neuron: a cell that receives and transmits signals through electrochemical activity. Neurons are the smallest units that make up the brain. The human brain contains about 100 billion neurons, each of which has roughly the processing capacity of a small computer.
- Neurons come in many shapes and sizes, but there is a general prototype. The body of the neuron is called the soma. Attached to the soma are dendrites. Extending from the soma is a long tube called the axon. Axons vary in length and provide a fixed path by which neurons can communicate with one another.
- The axon of one neuron extends toward the dendrites of another neuron. The near contact between axon and dendrite is called a synapse. Neurons typically communicate by releasing chemicals called neurotransmitters from the axon terminal on one side of the synapse.
- Neurons receive information through the dendrites, integrate it in the cell body, and send it out through the axon. The axon terminal then releases neurotransmitters into the synapse, where they reach the postsynaptic neuron.

Action Potential:
- Incoming signals can create electrical activity called an action potential. The charge inside the axon rises from -70 mV to +40 mV; this is called the rising phase of the action potential.
- Once the charge inside the neuron reaches +40 mV, the sodium channel closes and the potassium channel opens. Positively charged potassium rushes out of the axon, causing the charge inside the axon to become more negative. The charge falls from +40 mV back toward rest, and once it has returned to resting potential, the potassium stops flowing out.

Transmitting Information Across a Gap: what happens when an action potential reaches the end of an axon? There is a small space between neurons called the synapse.
- When action potentials reach the end of a neuron, they trigger the release of chemicals called neurotransmitters, which are stored in synaptic vesicles in the sending neuron.
- The neurotransmitter molecules flow across the synapse to small areas on the receiving neuron called receptor sites. Receptor sites are sensitive to specific neurotransmitters: they come in a variety of shapes that match the shapes of particular neurotransmitter molecules.
- When a neurotransmitter makes contact with a receptor site matching its shape, it activates the receptor site and triggers a voltage change in the receiving neuron. In other words, when an electrical signal reaches the synapse, it triggers a chemical process that causes a new electrical signal in the receiving neuron.
- Two types of responses can happen at the receptor sites:
  o Excitatory response: the inside of the neuron becomes more positive, a process called depolarization.
  o Inhibitory response: the inside of the neuron becomes more negative, a process called hyperpolarization.
- Excitation increases the chance that a neuron will generate action potentials and is associated with increased rates of firing. Inhibition decreases that chance. A typical neuron receives both excitation and inhibition, and its response is determined by their interplay.
- Action potential: there is a difference in charge between the inside and outside of the neuron. At rest the inside of the neuron is negatively charged (resting potential, about -70 mV). During resting potential the neuron is ready to fire.
- Depolarization: the difference between the negative and positive charges is reduced. This lasts only about a millisecond.
- Refractory period: the interval between the time one nerve impulse occurs and the next one can be generated in the axon. The refractory period for most neurons is about 1 ms, which puts the upper limit of a neuron's firing rate at about 500 to 800 impulses per second.
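The interplay of excitation and inhibition described above can be sketched as a toy threshold rule. This is a deliberately simplified integrate-and-fire caricature; the -55 mV firing threshold is a common textbook value used here as an assumption, not a number from the notes:

```python
def fires(excitatory_mv, inhibitory_mv, resting_mv=-70.0, threshold_mv=-55.0):
    """Toy rule: the neuron fires an action potential when summed
    excitatory (depolarizing) and inhibitory (hyperpolarizing) input
    pushes the membrane potential past threshold."""
    membrane = resting_mv + excitatory_mv - inhibitory_mv
    return membrane >= threshold_mv

assert fires(excitatory_mv=20.0, inhibitory_mv=0.0)       # strong excitation fires
assert not fires(excitatory_mv=20.0, inhibitory_mv=10.0)  # inhibition cancels it
```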
Rate of firing: the number of action potentials (nerve impulses) that an axon transmits per second. Myelination speeds up information transmission.
- The dendrites of the postsynaptic neuron must have a receptor; presynaptic neurons can only release.
- Flow of transmission: nerve impulse -> presynaptic axon -> vesicle -> neurotransmitters -> postsynaptic dendrites.
- The process within a neuron is different from the process between neurons.
- A connection can be either excitatory or inhibitory. If a neuron receives an inhibitory input from another, it does nothing with the information: it receives it and simply stops.

Information Transmission:
- Between neurons
  o Chemical
  o Neurotransmitters
  o Excitatory and inhibitory

Neurotransmitters:
- Acetylcholine:
  o Memory function
  o When it's too low, memory decays
  o When there's too much, there is excessive arousal
- Dopamine:
  o Attention, learning, motivation
  o When it's too high, schizophrenia can occur
  o When it's too low in the basal ganglia, Parkinson's disease occurs

Brain: Gross Anatomy
- The central nervous system (CNS) consists of the brain and the spinal cord. The major function of the spinal cord is to carry neural messages from the brain to the muscles, and sensory messages from the body to the brain.
- Lower parts of the brain appear to be responsible for more basic functions:
  o Medulla: controls breathing, swallowing, digestion, and heartbeat
  o Hypothalamus: regulates the expression of basic drives
  o Cerebellum: plays an important role in motor coordination and voluntary movement
  o Thalamus: a relay station for motor and sensory information from lower areas to the cortex
- The cerebral cortex accounts for a large fraction of the human brain. Because it is such a large sheet, the human cortex is heavily folded and wrinkled. A bulge of the cortex is called a gyrus; a crease passing between the gyri is called a sulcus.
- The neocortex is divided into left and right hemispheres. The right side of the body tends to be connected to the left hemisphere of the brain, and the left side of the body to the right hemisphere. For example, the left hemisphere controls motor function and sensation in the right hand, and the right ear is most strongly connected to the left hemisphere.
- The cortical regions are organized into four lobes:
  o Frontal: two functions. The back portion of the frontal lobe is involved with motor functions; the front part, called the prefrontal cortex, controls higher-level processes such as planning.
  o Parietal: handles some perceptual functions, including spatial processing and representations of the body. It is also involved in the control of attention.
  o Occipital: contains the primary visual areas.
  o Temporal: receives input from the occipital area and is involved in object recognition.
- Humans are distinguished by having disproportionately larger anterior portions of the prefrontal cortex than other animals.
- The neocortex is not the only region that plays a significant role in higher-level cognition. Other important circuits run from the cortex to subcortical structures and back again.
- The limbic system is a particularly significant area for memory. It contains a structure called the hippocampus, which is critical for human memory: damage to the hippocampus can cause amnesia.

Hippocampus:
- Helps create associations and consolidate information into long-term memory
- Is in charge of explicit, declarative, associative knowledge
- When impaired:
  o New memory traces cannot be formed
  o Memory of things that happened before the injury is intact
  o Implicit expression of learning is intact

Basal Ganglia:
- Involved in basic motor control and in complex cognition.
- Damage to the basal ganglia causes Parkinson's disease and Huntington's disease. People who suffer from these disorders have dramatic motor-control problems and difficulty with cognitive tasks.
- When the basal ganglia are impaired, cognitive learning is intact but expression of that learning is a problem.
- Parkinson's disease: a lack of dopamine in the basal ganglia (part of the motor-control, procedural-learning, and reward systems). If the dopamine level gets lower in the basal ganglia, you have trouble controlling your movements.

Brain Organization:
- Localization
  o Primary areas: motor cortex, somatosensory cortex, visual, auditory, spatial attention, reasoning/language
- Lateralization
- Topography

In general, the left hemisphere of the brain is associated with linguistic and analytic processing, and the right hemisphere with perceptual and spatial processing. The two hemispheres are connected by a broad band of fibers called the corpus callosum. In some patients, the corpus callosum is severed in order to prevent epileptic seizures; such patients are called split-brain patients, and much of what we know about the differences between the hemispheres comes from them.

There are areas in the left hemisphere of the brain that are involved in speech. Damage to these regions results in aphasia, an impairment of speech. The two areas are:
- Broca's area: production of speech is poor. Patients speak in short, ungrammatical sentences.
- Wernicke's area: comprehension of speech is poor. Patients can speak, but cannot form a coherent sentence; they may sound fluent, but the actual words they say often do not make sense.

There is some independence of brain structures: we can lose one ability without losing another.

Amnesic Brain: in the movie Memento, a man suffers a concussion and develops amnesia, forgetting everything that happens to him. Forming associations helps move material from short-term to long-term memory.
Lateralization:
- Hemispheric asymmetry:
  o Mostly contralateral control
  o Partially ipsilateral control
  o Corpus callosum, split brain
  o Example: language
- Each hemisphere of the brain processes slightly different information from the other, so the two sides of the brain need to communicate with each other. They do this through the corpus callosum.
- Contralateral control: the left side of the brain controls the right side of the body, and the right side of the brain controls the left side of the body. Information from the right visual field ends up in the left hemisphere, and information from the left visual field ends up in the right hemisphere.

Topographic Organization:
- Map-like organization
- Motor and somatosensory cortex
- Over-represented areas:
  o Precise motor control
  o Acute sensation
- Visual and auditory cortex: the center of the visual/auditory field
  o Over-represented in the cortex
  o Precise perception at the center
- Functionally exaggerated, not 1-to-1.
- A large part of our somatosensory and motor cortex is devoted to movement of the fingers. We feel more in our fingers than in our abdominal area because a bigger part of the cortex is devoted to them.
- Adjacent cells in the cortex tend to process sensory stimuli from adjacent areas of the body.

Signals from the Brain:
- EEG/ERP: measures electrical activity at the skull. Temporal resolution is very good: if one of two processes takes a few milliseconds longer than the other, an EEG can pick that up.
- PET: requires an injection of a radioactive tracer into the bloodstream. Can produce a very clear image.
- fMRI: detects the level of oxygen in the brain. When the brain is working, blood flows into the active area. fMRI is basically a huge magnet: when blood oxygen levels change, the magnetic signal changes. Temporal resolution is not very good for fMRI. An ideal setup would combine EEG and fMRI.
- Neither PET nor fMRI measures neural activity directly. They measure metabolic rate or blood flow in various areas of the brain, relying on the fact that more active areas of the brain require greater metabolic expenditure and receive greater blood flow.
- fMRI is used more often than PET because it offers better spatial resolution and is less intrusive: fMRI does not require any injection.

EEG/ERP:
- Electrical activity on the scalp
  o Measuring activity in the underlying brain regions
- Because the signal moves so fast, it is hard to tell where in the brain the activity is coming from.

fMRI:
- Provides relatively good information about the location of neural activity but poor information about the time course of that activity.
- Consumption of oxygen:
  o Radio waves are passed through the brain, causing iron in the hemoglobin to produce a local magnetic field that is detected by magnetic sensors surrounding the head. This offers a measure of the amount of energy being spent in a particular brain region.
  o BOLD (Blood Oxygenation Level Dependent) signal: the signal fMRI is concerned with.
- If you have certain types of metal in your body, you cannot take part in an fMRI study.
- The BOLD response is slow: it takes a full 9-12 seconds. If something else happens within those 9-12 seconds, you cannot tell which event led to the result; the two events get lumped together. This is another reason fMRI gives poor temporal resolution.

In-class experiment on contralateral control: a student is asked to read a sentence aloud as fast as she can while typing the letter "u" as many times as she can. She does this twice, once typing with her left hand and once with her right hand. The number of u's typed differs. Why? She typed more u's with her left hand: the language center is in the left hemisphere, which also controls the right hand, so reading aloud interferes with typing by the right hand.
She is right-handed, and typed more u's with her left hand. Perhaps that is partly because she did the exercise with her right hand first: having already done it once, she may have been faster the second time due to practice effects.

Sentence used: "Four score and seven years ago our fathers brought forth on this continent a new nation, conceived in liberty, and dedicated to the proposition that all men are created equal."

What if we did this experiment with somebody who was left-handed? We would expect the opposite results. Why? For a right-handed person, the language center is nearly always in the left hemisphere, but about 10-15% of left-handed people have a language center on the right side. It is harder to assume that a left-handed person's language center is located in the left hemisphere.

CHAPTER 2: Perception

Early Visual Information Processing:
- Light passes through the lens and falls on the retina at the back of the eye.
- The retina contains photoreceptor cells with light-sensitive molecules that undergo structural changes when light hits them.
- The image that falls on the retina is not perfectly sharp; one function of early visual processing is to sharpen the image.
- There are two types of photoreceptors in the eye: rods and cones.
  o Cones are involved in color vision and produce high resolution and acuity.
  o Rods require less light energy to trigger a response but produce poorer resolution. The rods are responsible for the black-and-white vision we have at night.
- Cones are concentrated in a small area of the retina known as the fovea. When we look at an object, we move our eyes so that the image of the object falls on the fovea, because the fovea's cones produce the sharpest image. Foveal vision is responsible for fine detail; peripheral vision detects more global information, such as movements in the background.
- The receptor cells synapse onto bipolar cells, and these onto ganglion cells, whose axons leave the eye and form the optic nerve, which connects to the brain. There are about 800,000 ganglion cells in the optic nerve of each eye.
- Each ganglion cell encodes information from a small region of the retina called the cell's receptive field.
- The optic nerves from both eyes meet at the optic chiasma. The optic chiasma is where nerves from the inside of each retina cross over and go to the opposite side of the brain; nerves from the outside of each retina continue to the same side of the brain as the eye. As a result, the left hemisphere of the brain processes information about the right part of the world and the right hemisphere processes information about the left part.
- Subcortical means that the structures are located below the cortex.
- From the primary visual cortex, information tends to follow two pathways, a "what" pathway and a "where" pathway. The what pathway goes to regions of the temporal cortex that are specialized for identifying objects; the where pathway goes to parietal regions that are specialized for representing spatial information and coordinating vision with action.

Information Coding in Visual Cells:
- Information is encoded by the ganglion cells.
- For some ganglion cells, if light falls on a small region of the retina at the center of the cell's receptive field, the cell's spontaneous firing rate increases; if light falls on the region around this sensitive center, the firing rate decreases; light falling even farther from the center produces no change. Ganglion cells that respond this way are called on-off cells.
- Off-on ganglion cells respond in the opposite way: light in the center of the receptive field causes a decrease in firing rate, and light around the center causes an increase.
- Visual cortical cells respond in a more complex way than ganglion cells. Ganglion on-off cells and off-on cells have circular receptive fields.
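The on-off (center-surround) response described above can be sketched as a weighted sum: center light excites, surround light inhibits. The weights and the baseline firing rate are illustrative assumptions, not measured values:

```python
def ganglion_response(center_light, surround_light, baseline=10.0):
    """Toy on-off cell: light on the receptive-field center raises the
    firing rate, light on the surround lowers it. Swapping the signs of
    the two weights would give an off-on cell instead."""
    rate = baseline + 2.0 * center_light - 1.0 * surround_light
    return max(rate, 0.0)  # firing rates cannot go negative

# Light confined to the center excites the cell above baseline...
assert ganglion_response(center_light=5, surround_light=0) > 10.0
# ...while uniform light over center and surround cancels out.
assert ganglion_response(center_light=5, surround_light=10) == 10.0
```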
- Cortical cells have elongated receptive fields:
  o Edge detectors respond positively to light on one side of a line and negatively to light on the other side. They respond most when an edge of light lines up with that boundary.
  o Bar detectors respond positively to light in the center and negatively on the sides, or vice versa.
- We have hundreds of regions of space represented separately for each eye, and different cells code for different sizes and widths of line.
- Our visual system can also perceive the colors of objects and whether they are moving. In sum, our visual system has three separate processes for the dimensions of form, color, and movement.
- The visual system analyzes a stimulus into many independent features at specific locations. These representations of visual features are known as feature maps; we have separate maps for color, orientation, and movement.

Early Processing vs. Late Processing:
- Early processing: processing of simple shapes, sizes, colors, features
- Late processing: integration of features and pattern recognition

Visual Agnosia: individuals with damage to certain parts of the brain are able to see but cannot recognize anything visually. They know that an object is there and can detect light, but they cannot make out complex figures.
- Apperceptive agnosia: the individual cannot recognize simple shapes such as circles or triangles. Such patients are generally believed to have problems with early processing of information in the visual system.
- Associative agnosia: the individual can recognize simple shapes and can draw them, but cannot make out complex objects. Early processing is generally intact, but pattern recognition later on is impaired.

Perception: how we perceive the stimuli we are looking at.
The Problem of Perception: distal information is not the same as proximal information.
- Goal of perception: making sense of the external world and building stable representations.
- Topics: visual processing, pattern recognition, speech perception, context effects.
- Distal stimulus: what is actually out there. Distal information is rich, unlimited, early information.
- Proximal stimulus: what is near to you; what actually reaches our sensory organs. It is constrained, selected, late information.
- Spiral illusion example: the stimulus is really a set of circles broken up in an odd manner, yet it looks like one big spiral.

Goals of Perception:
- Goal 1: making sense out of ambiguous, insufficient information.
- Goal 2: making a stable representation of the external world:
  o Illusion: the same sensation sometimes needs to be perceived differently.
  o Constancy.
- The Ponzo illusion is an example of illusion: we compensate for distance, when in reality the lines are the same length in both images. Sometimes we need to perceive the same stimulus in a different way.

Constancy:
- Distal stimulation constantly changes.
- Size constancy: distance cue
- Shape constancy: distance cues of parts
- Color constancy: tied to object constancy

Depth Cues:
- Monocular cues:
  o Texture gradients (e.g., a rainy-day image): a change of texture can give a feeling of distance, even in a 2D image.
  o Relative size
  o Interposition
  o Linear perspective
  o Aerial perspective
  o Location in the picture plane
  o Motion parallax: near objects appear to move faster and in the opposite direction of the viewer; far objects move slower and in the same direction as the viewer.
- Binocular depth cues:
  o Binocular convergence: convergence of the eyeballs; the muscular movement is the cue.
  o Binocular disparity: the degree of disparity between the two eyes' views. Stereopsis is the ability to perceive 3D depth based on the fact that each eye receives a slightly different view of the world.
Object Perception:
- Basic tasks:
  o Recovery of meaningful objects from preattentively available features
  o Figure-ground segregation
  o Boundary assignment
- Gestalt principles of organization

Gestalt principles of organization:
- Proximity: elements close together tend to organize into units.
- Similarity: objects that look alike tend to be grouped together.
- Good continuation: we perceive two lines, one from A to B and the other from C to D, even though there is no reason the lines can't run from A to D and from C to B.
- Closure: we see objects with missing or hidden parts as closed objects.
- Good form: we perceive an occluded part as a circle, not as an arc.

Visual Pattern Recognition: the last step is to recognize what the objects we have put together actually are. We do so through pattern recognition.

Palmer (1977):
- Studied unfamiliar objects
- Recognition test on components
- Found better memory for components that follow the Gestalt principles

Theories of Perception:
- Bottom-up theories:
  o Explain how an external stimulus changes our knowledge about the world
  o Data-driven processes
- Top-down, constructive perception:
  o Explains how existing knowledge, expectations, etc., change the way we perceive
  o Concept-driven processes

Template-matching theory (a bottom-up theory): a retinal image of an object is transmitted to the brain, and the brain attempts to compare the image directly to various stored patterns called templates.
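Template matching can be sketched as an exact comparison over a tiny binary "retinal image". The letter pattern below is invented for illustration; the point is that any shift, resize, or font change defeats an exact match:

```python
def template_match(image, template):
    """Toy template matcher: the input is recognized only if it matches
    a stored template pixel for pixel."""
    return image == template

# A hypothetical stored template for the letter "T".
T_TEMPLATE = ["XXX",
              ".X.",
              ".X."]

# An exact match is recognized...
assert template_match(["XXX", ".X.", ".X."], T_TEMPLATE)
# ...but the same "T" shifted down one row is not, illustrating why
# pure template matching cannot explain constancy.
assert not template_match(["...", "XXX", ".X."], T_TEMPLATE)
```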
- Allows recognition only of an exact match
- Not flexible; cannot explain constancy
- Good for automatic recognition systems:
  o Limited or fixed input patterns
  o The bank account number on a check, a fingerprint
- Problems arise when the image is a different size than the template, is depicted at a different orientation, or takes a different form (such as different fonts that shape the same letter differently).

Feature Theory: another method of pattern recognition that fixes some of the limitations of template matching. In feature analysis, a stimulus is thought of as a combination of elemental features.
- Recognizing features
- Relations among features
- No need for templates
- Evidence:
  o Featural confusion, e.g., C vs. G
  o Stabilized images
  o Neural adaptation
  o Feature-based adaptation

Recognition-by-Components Theory (Biederman): a bottom-up theory stating that there are three stages in our recognition of an object as a configuration of simpler components:
1. The object is segmented into a set of basic subobjects.
2. Each subobject is classified into one of 36 categories of subobjects, known as geons. Geons are like an alphabet for composing objects: all objects are made out of different combinations of the 36 geons.
3. Having identified the pieces from which the object is made, we recognize what the object is.

We have feature detectors in our brain for various orientations, colors, etc.

Face Perception:
- We have a special module for processing faces:
  o Context effects
  o Prosopagnosia (the inability to recognize faces): caused by damage to the temporal lobe
  o FFA: the fusiform face area of the temporal cortex, which becomes active when observing faces
- A module for fine-grained distinctions:
  o Car experts
  o Bird experts
  o The FFA may be needed for highly fine-grained distinctions among complex objects
- Fusiform gyrus: a particular region of the temporal lobe that responds when faces are present in the visual field.
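Biederman's three stages can be sketched as matching a segmented set of geons against stored object descriptions. The geon labels and the two-object inventory below are invented purely for illustration; real geons are 3D volumetric primitives, not strings:

```python
# Hypothetical object descriptions as sets of geon categories.
KNOWN_OBJECTS = {
    "mug": frozenset({"cylinder", "curved_handle"}),
    "flashlight": frozenset({"cylinder", "cone"}),
}

def recognize(segmented_geons):
    """Stage 3 of recognition-by-components: identify the object whose
    stored geon configuration matches the segmented subobjects."""
    geons = frozenset(segmented_geons)
    for name, description in KNOWN_OBJECTS.items():
        if geons == description:
            return name
    return None  # no stored configuration matches

assert recognize(["cylinder", "curved_handle"]) == "mug"
assert recognize(["cylinder", "cone"]) == "flashlight"
```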
- There is a difference between showing an upside-down picture of a building and an upside-down picture of a face: people have an easier time recognizing the upside-down building than the upside-down face.
- The upside-down face effect and the phenomenon of prosopagnosia suggest that there must be a special module for face recognition. Otherwise, people with prosopagnosia would be unable to recognize buildings as well.

Speech Recognition:
- Phonemes: the basic units of speech recognition. A phoneme is the minimal unit of speech that can result in a difference in the spoken message.
- Damage to the left side of the temporal lobe can result in difficulties recognizing speech.

Feature Analysis of Speech:
- Consonantal feature: features specific to consonants (as opposed to vowels).
- Voicing: a feature of phonemes produced by vibration of the vocal cords. You can detect whether a sound is voiced by placing your hand on your larynx as you make it.
- Place of articulation: the location at which the vocal tract is closed or constricted in the production of the phoneme:
  o Bilabial: the lips are closed while generating the sound (b, p, m, w)
  o Labiodental: the bottom lip is pressed against the front teeth (f, v)
  o Dental: the tongue presses against the teeth (th)
  o Alveolar: the tongue presses against the alveolar ridge of the gums just behind the upper front teeth (t, d, s, z, n, l, r)
  o Palatal: the tongue presses against the roof of the mouth just behind the alveolar ridge (sh, ch, j, y)
  o Velar: the tongue presses against the soft palate, or velum, in the rear roof of the mouth (k, g)

Categorical Perception: the perception of stimuli as belonging to distinct categories, and the failure to perceive the gradations among stimuli within a category.
- People are able to discriminate between two sounds only if they fall on different sides of a phonetic boundary.
- Voice onset time (VOT): the time from the release of air to voicing. The factor controlling the perception of a phoneme is the delay between the release of air and the vibration of the vocal cords.
- There are two different views about what exactly categorical perception means:
  1. That we experience stimuli as coming from distinct categories.
  2. A stronger view: that we cannot discriminate among stimuli within a category.
- In sum, there is increased discriminability between categories and decreased discriminability within categories.
- There is also an adaptation paradigm for the voicing feature in speech recognition.
- Babies can distinguish all kinds of phonemes from all different languages when they are born. Eventually they become tuned to the phonemes of their own language and stop being able to naturally distinguish phonemes from other languages.
- Detecting within-category differences is less critical as long as speech is confined to one language. Category knowledge guides auditory perception to reduce the variance in speech utterances.

Context and Pattern Recognition:
- Top-down processing: high-level general knowledge contributes to the interpretation of the low-level perceptual units. The perceiver builds up a cognitive understanding of the stimulus, considering both sensory information and prior experience.
- Bottom-up processing: processing without regard to the general context.
- Context effect: if knowing something about the upcoming stimulus makes a difference in how the object is perceived, a context effect is in play: existing knowledge affects your perception. Top-down processing is at work when context effects occur. Context effects can exist even at a subliminal level.
- Context priming: processing the relevant context facilitates subsequent processing of an object. Example: we are primed with a photo of a kitchen.
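The VOT boundary account of categorical perception can be sketched as a simple threshold on voice onset time. The 25 ms boundary is a rough illustrative value for English voiced/voiceless stops, not a figure from the notes:

```python
def perceived_phoneme(vot_ms, boundary_ms=25.0):
    """Toy categorical-perception rule for voicing: stimuli are heard as
    /b/ below the voice-onset-time boundary and /p/ above it."""
    return "b" if vot_ms < boundary_ms else "p"

# A physical difference within a category is not heard as a new phoneme...
assert perceived_phoneme(5) == perceived_phoneme(15) == "b"
# ...but the same-sized difference straddling the boundary is.
assert perceived_phoneme(20) != perceived_phoneme(30)
```

This captures both readings listed above: stimuli map to discrete categories, and within-category gradations are lost at the level of the perceived label.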
Then we are shown images of a loaf of bread, a mailbox, and a drum set. We will recognize the loaf of bread most quickly because it relates to kitchens and we were primed with a photo of a kitchen.
We might expect that processing less information should take less time, but this isn't always the case. Consider the word superiority effect:
Example: participants are briefly shown either the letter "D" or the word "WORD". If they were shown the letter D, they are asked whether they saw a D or a K. If they were shown the word "WORD", they are asked whether they saw "WORD" or "WORK". Even though both judgments hinge on a single letter (D vs. K), participants were about 10% better at identifying the word they were presented than the letter, even though the word required processing more letters.
Perception can proceed successfully when only some of the features are recognized, with context filling in the remaining features. For example, to read a string of words we do not necessarily need to perceive every letter.
Top-Down Theory:
Evidence:
Configural superiority
3D superiority over 2D
Word superiority
Processing more can take less time than processing the components alone: processing of individual line segments can take longer than processing of shapes that include those line segments.
Other examples of context and recognition effects:
Phoneme restoration effect: sometimes when a phoneme is left out, the listener does not even notice, because context allows them to infer what was said. For example, if you say "it was found that the eel was on the orange", some listeners will hear "peel" and won't notice that you actually said "eel".
Scene Perception
o Post-cue object identification is better with a coherent scene
o Contextual cues facilitate processing of individual components
Change blindness
o People are unable to keep track of all the information in a typical complex scene.
If elements of the scene change at the same time as some retinal disturbance occurs (such as an eye movement), people often do not notice the change.
Inattention to goal-irrelevant stimuli.
Context and Features
Massaro:
o Argues that perceptual information and context provide two independent sources of information about the identity of the stimulus, and that they are combined to provide a best guess of what the stimulus might be.
o The context effect is the same regardless of featural evidence.
o Features and context make independent contributions toward object recognition.
CHAPTER 3: Attention
Attention:
Space-based attention
Feature processing and binding
Object-based attention
Central attention
Attention as selection:
Information overload
Knowledge overload
Bottom-up, stimulus-directed
Top-down, goal-directed
Serial Bottlenecks:
Serial bottleneck: the point in the path from perception to action at which people cannot process all the incoming information in parallel.
It is easier to do two things at once that involve two different motor systems (such as walking and chewing gum) than two things that use the same motor system.
There are various theories regarding when the bottlenecks occur.
Early selection theories: theories of attention stating that serial bottlenecks occur early in information processing.
Late selection theories: theories of attention stating that serial bottlenecks occur late in information processing.
Whenever there is a bottleneck, our cognitive system has to decide which pieces of information to attend to and which to disregard.
Goal-directed attention: allocation of processing resources in response to one's goals.
Stimulus-driven attention: allocation of processing resources in response to a salient stimulus.
Different brain systems control goal-directed and stimulus-driven attention: the goal-directed system is more left-lateralized and the stimulus-driven system is more right-lateralized.
Dichotic Listening Task: participants wear a set of headphones. They hear two messages at the same time, one in each ear, and are asked to repeat back the words from only one of the messages. Most participants are able to attend to only one message and to tune out the other. Very little information is processed from the unattended message. Shadowing requires semantic analysis.
Participants can pick up some physical characteristics from the unattended ear. They can usually say whether the unattended message was a human voice or a noise such as an instrument. They also can say whether the voice was male or female and whether the gender of the voice switched during the test. They cannot tell what the message was about or what language it was spoken in.
Early Selection Model: Filter Theory:
Broadbent (1958)
o Selection on the basis of physical characteristics
o Selected information (further processed for perceptual, semantic information)
o Unattended information (only sensory input)
Filter theory: sensory information comes through the system until some bottleneck is reached. At that point, a person chooses which message to process on the basis of some physical characteristic (for example, the pitch of the speaker's voice) and filters out the other information. This is an example of an early selection theory.
It is possible for participants to shadow a message on the basis of meaning rather than on the basis of what each ear hears: if the words spoken to each individual ear are real words but don't form real sentences, the person may combine the words across ears so that they make sense.
Attenuation Theory:
Treisman:
o Incomplete filtering
o Only attenuation on the basis of physical information
o Semantic information from the ignored channel can be processed to some extent.
Treisman's studies show that sometimes people follow a physical characteristic (e.g., which ear) to select which message to listen to, and sometimes people choose to follow semantic content.
The Attenuation Theory and Late-Selection Theory:
The attenuation theory hypothesized that certain messages would be weakened, but not filtered out entirely, on the basis of their physical properties (the physical property being the ear the message is spoken into). This means that in a dichotic listening task, the ear that is not attended to is not completely blocked out.
Deutsch & Deutsch
o Parallel processing up to perceptual processes
o The product of perception is a meaning (object, person, concept)
o Selection is on the basis of this meaning
o Response bottleneck
J. A. Deutsch and D. Deutsch provide a different explanation in their late selection theory. They believed that all information is processed completely, without attenuation, and that the capacity limitation lies in the response system rather than in the perceptual system.
If people were using meaning as a criterion, they would switch ears to follow the message if the message switched ears. If they were using ear of origin as a criterion, they would not switch ears.
Both early selection and late selection theories assume that there is a filter or bottleneck in processing. Early selection theories hold that the filter selects which message to attend to; late selection theories hold that the filter operates after the perceptual stimulus has been analyzed for verbal content.
Evidence of attenuation:
Treisman & Geffen (1967)
o Shadowing and monitoring both ears
o Better monitoring in the shadowed ear
o Performance difference between channels
Evidence for late selection:
Lewis (1970)
o Semantic relationship between shadowed and non-shadowed words affected shadowing rate
Corteen & Wood (1972)
o Galvanic skin response (GSR) to unattended words
Semantic information from an unattended channel can be processed.
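The contrast between complete filtering and attenuation can be sketched as a toy model. This is my own illustration, not a published model: the threshold, the attenuation factor, and the salience values are all made-up parameters chosen only to show the qualitative prediction.

```python
# Toy sketch contrasting filter theory with attenuation theory.
# All numbers here are illustrative, not from any published model.
RECOGNITION_THRESHOLD = 1.0

def recognized(salience, attended, attenuation=0.3):
    """A word is recognized if its (possibly attenuated) signal
    strength reaches the recognition threshold."""
    strength = salience if attended else salience * attenuation
    return strength >= RECOGNITION_THRESHOLD

# Under attenuation theory the unattended channel is weakened, not
# blocked (a pure filter would set attenuation to 0). So a highly
# salient word (e.g., your own name, salience 4.0) can still break
# through, while an ordinary word (salience 1.5) does not.
print(recognized(1.5, attended=True))    # True
print(recognized(1.5, attended=False))   # 0.45 < 1.0 -> False
print(recognized(4.0, attended=False))   # 1.2 >= 1.0 -> True
```

The design choice to multiply rather than zero out the unattended signal is exactly what separates Treisman's attenuation account from Broadbent's all-or-none filter.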
Visual Attention:
Posner, Nissen & Ogden (1978)
o Attention is related to fixation but not the same thing.
Spotlight metaphor
o Small spotlight: detailed, serial processing
o Large spotlight: parallel, shallow processing
Neisser & Becklen (1975)
o Physical cues and content cues can guide each other to stay focused.
The bottleneck in visual attention is even more evident than the bottleneck in auditory information processing. In choosing where to focus our vision, we devote our most powerful visual processing resources to a particular part of the visual field and limit the resources allocated to other parts of the field.
The focus of visual attention is not always identical with the part of the visual field being processed by the fovea. People can be instructed to fixate on one part of the visual field (using the fovea) while attending to another, nonfoveal region. People can attend to regions of the visual field as far as 24 degrees away from the fovea.
Successful control of eye movements requires us to attend to places outside the fovea: we must attend to and identify an interesting nonfoveal region so that we can guide our eyes to fixate on it and achieve the greatest acuity in processing it. A shift in attention precedes the eye movement.
When processing a complex visual scene we must move our attention around the visual field to track visual information, using both content cues and physical cues. In a study where participants watch a movie, their fovea fixates on one area, and they know to redirect it only when the content of the movie tells them to.
When people are observing places, the parahippocampal area of the temporal cortex becomes activated. When people are observing faces, the fusiform area of the temporal cortex becomes activated.
Participants are shown an image of a face superimposed over houses. They are asked to look for repetition of either the houses or the face.
Which part of the brain gets activated depends on which image they choose to pay attention to.
Neural Basis of Visual Attention
Mangun, Hillyard, & Luck (1993)
o Visual attention = enhanced neural processing in V4 but not in V1 (the primary visual cortex)
o Delay between stimulation and the enhanced neural signal
o Attention requires integration of information
The neural mechanisms underlying visual attention are very similar to those underlying auditory attention. Just as auditory attention directed to one ear enhances the cortical signal from that ear, visual attention directed to a spatial location seems to enhance the cortical signal from that location. When a person attends to a particular spatial location, a distinct neural response in the visual cortex occurs within 70 to 90 ms after the stimulus is presented. When a person attends to a particular object, the response does not appear for about 200 ms.
The visual cortex (located at the back of the head) is topographically organized, with each visual field (right or left) represented in the opposite hemisphere. Therefore there is enhanced neural processing in the portion of the visual cortex corresponding to the location of visual attention.
Visual Search:
Conjunction search: looking for a target that differs from the distractors only by a conjunction of more than one feature. Identification of the target requires serial examination.
Feature search: looking for a target that differs from the distractors by a single feature.
Feature detection may not necessarily be tied to spatial location.
Detailed object processing requires spatial information.
The serial slope for conjunction search suggests that if we cannot afford the attention to combine the features into an object, all we have left are unbound features with no representation of an object.
Neisser's search experiment: multiple lines of random letters are presented, and you are asked to find one specific letter among all of the lines, such as "K".
Brain imaging experiments have found strong activation in the parietal cortex during Neisser's search task.
Treisman studied pop-out effects. In one experiment, she placed one "T" in a field of 30 "I"s. It took participants about 400 ms to find the T; it was easy because all they had to do was look for the crossbar of the T. She repeated the experiment, this time hiding the T among "I"s and "Z"s. Since "Z"s have crossbars too, participants now had to look for a conjunction of vertical and horizontal lines to find the T. This took about 800 ms.
It is necessary to search serially through a visual array for an object only when no unique visual feature distinguishes that object.
Binding:
Visual object identification = spatial or temporal conjunction of features. After binding, you recognize objects.
There are different types of neurons in the visual system that respond to different features such as colors, line orientations, and motion.
Illusory conjunction: features can be combined illusorily if the binding process is disturbed.
Individual features are processed before location, preattentively.
They are then conjoined to form an object, an attentive process.
Features are independent of locations.
For example, if you are shown an orange triangle and a blue circle, you might claim that you saw a blue triangle. People still have an accurate idea of the individual features; for example, they will know the orientation of the triangle.
Visual Neglect:
Posner, Cohen & Rafal (1982)
o Damage to the right parietal region produces distinctive patterns of deficit. Because the right parietal lobe processes the left visual field, damage to the right lobe impairs the ability to draw attention back to the left visual field once attention is focused on the right visual field.
o Asymmetry in attention shift
Unilateral visual neglect:
o Patients with damage to the right hemisphere completely ignore the left side of the visual field.
Patients with damage to the left hemisphere ignore the right side of the visual field.
Hemispheric asymmetry
o The left parietal lobe is more responsible for attending to local features.
o The right parietal lobe is more responsible for global spatial attention.
Object-Based Attention:
Space-based attention: people allocate their attention to a region of space.
Object-based attention: people focus their attention on particular objects rather than on regions of space.
Same-Object Benefit:
o Within-object comparisons are better than between-object comparisons.
o The object, rather than the location, may be the basis of attention.
Example with bumps: it is easier to compare features when they are on the same object than when they are on different objects, even if the two features on the single object are farther apart.
Inhibition of return: if we have looked at a particular region of space, we find it a little harder to return our attention to that region. If we look at location A, then look at location B, we are slower to return our eyes to location A than to some new location C. This can work as an advantage in some situations, since it discourages re-examining locations that have already been searched.
o People respond faster to a target that appears at the uncued location when the cue has no predictive validity.
o Inhibition of return is a temporary inefficiency in maintaining attention at a cued location.
o It is delay sensitive: with 200-1000 ms between cue and target, target detection is slower at the cued location.
o At shorter delays there is instead facilitation at the cued location.
o In a dynamic display, location and object can be dissociated.
o Inhibition can be object based, not just location based.
Central Attention:
What about thinking?
o Which line of thinking are we going to maintain?
o One line of thought at a time on a given input
o Selection bottleneck
Three possibilities:
o Perfect time-sharing: we have no resource limitation; we can do multiple things at the same time.
o No time-sharing at all: with two tasks, you have to wait for one task to be completely over before you can start the next.
o Selection bottleneck: you may be able to encode multiple stimuli at the same time and produce multiple responses at the same time if those responses involve different modalities. A mixture of parallel and serial processing.
Central Attention:
Byrne & Anderson. Task 1: verify the addition 3 + 4 = 7. Task 2: what is 3 x 7?
o Two arithmetic tasks
o Dual-task deficit
o Selection bottleneck
Schumacher et al. Task 1: visual-manual. Task 2: auditory-verbal.
o No dual-task deficit
o No bottleneck
o Perfect time-sharing
In Byrne & Anderson's experiment there was a relatively long selection time; in Schumacher's there was a relatively short one. Parallel processing is possible when the different processes can overlap without competing for the same central resource.
Automaticity: Expertise through Practice
The general effect of practice is to reduce the central cognitive component of information processing.
Automaticity: when the central cognitive component of a task has been practiced so much that the task requires little or no thought, the task is automatic. The degree to which a task is automatic is its automaticity.
Practice can enable parallel processing. Think about your ability to change stations on the radio or carry on a conversation while you are driving.
The Stroop Test:
Automatic processes are difficult to prevent. For example, it is nearly impossible to look at a common word and not read it. The more practiced dimension will be dominant and will interfere with the other dimension; dominance is relative to practice level.
The Stroop effect: the task requires participants to say the ink color in which words are printed. The words and colors are presented in three ways:
1. Control condition: the words presented are not color names.
2. Congruent condition: the words presented are color names and the ink colors match the printed color names.
3. Conflict condition: the words presented are color names but the ink colors do not match the printed color names.
Participants were much slower in the conflict condition, yet they had no trouble reading a word even when its ink color differed from the word's meaning. This shows how automatic reading is: reading is so automatic that participants are unable to inhibit it, and the reading interferes with naming colors.
Executive control: the direction of central cognition, carried out mainly by prefrontal regions of the brain. Damage to prefrontal regions results in deficits of executive control.
Damage to the prefrontal cortex:
Inefficient goal-oriented processing
Utilization behavior: dominant stimulus-driven processing
Two prefrontal regions are especially important in executive control: the dorsolateral prefrontal cortex and the anterior cingulate cortex.
Dorsolateral prefrontal cortex: the upper portion of the prefrontal cortex; it is high (dorsal) and to the side (lateral). This region is important for setting intentions and for control of behavior.
Anterior cingulate cortex: folded under the visible surface of the brain along the midline. This region is particularly active when people must monitor conflict between competing tendencies, as in the Stroop task, the Simon task, and the flanker task.
CHAPTER 2: Perception
Visual agnosia: individuals with damage to certain parts of the brain are able to see but cannot recognize anything visually. They know that there is an object there and can detect light, but they cannot make out complex figures.
Apperceptive agnosia: the individual cannot recognize even simple shapes such as circles or triangles.
Associative agnosia: the individual can recognize simple shapes and can draw them, but cannot make out complex objects.
Patients with apperceptive agnosia are generally believed to have problems with early processing of information in the visual system.
Patients with associative agnosia generally have intact early processing but have difficulty with pattern recognition later on.
Early Visual Information Processing:
Light passes through the lens and falls on the retina at the back of the eye.
The retina contains the photoreceptor cells, which have light-sensitive molecules that undergo structural changes when light hits them.
The image that falls on the retina is not perfectly sharp; one function of early visual processing is to sharpen the image.
There are two kinds of photoreceptors in the eye: rods and cones. Cones are involved in color vision and produce high resolution and acuity. Less light energy is necessary to trigger a response from the rods, but rods produce poorer resolution. The rods are responsible for the black-and-white vision we have at night.
Cones are concentrated in a small area of the retina known as the fovea. When we look at an object, we move our eyes so that the image of the object falls on the fovea, because the fovea's cones produce the highest resolution and we want to see the sharpest image we can. Foveal vision is responsible for fine details; peripheral vision detects more global information, such as movement in the background.
The receptor cells synapse onto bipolar cells, and these onto ganglion cells, whose axons leave the eye and form the optic nerve, which connects to the brain. There are about 800,000 ganglion cells in the optic nerve of each eye. Each ganglion cell encodes information from a small region of the retina called the cell's receptive field.
The optic nerves from both eyes meet at the optic chiasma, where nerves from the inside half of each retina cross over to the opposite side of the brain; nerves from the outside half of each retina continue to the same side of the brain as the eye. As a result, the left hemisphere of the brain processes information about the right half of the visual world and the right hemisphere processes information about the left half.
Subcortical means that the structures are located below the cortex.
From the primary visual cortex, information tends to follow two pathways: a "what" pathway and a "where" pathway. The "what" pathway goes to regions of the temporal cortex that are specialized for identifying objects. The "where" pathway goes to parietal regions of the brain that are specialized for representing spatial information and coordinating vision with action.
Information Coding in Visual Cells:
Information is encoded by the ganglion cells.
For some ganglion cells, if light falls on a small region of the retina at the center of the cell's receptive field, the cell's spontaneous rate of firing increases. If light falls on the region around this sensitive center, the rate of firing decreases. Light that falls even farther from the center produces no change. Ganglion cells that respond this way are called on-off cells.
There are also off-on ganglion cells that respond in the opposite way: light in the center of the receptive field causes a decrease in firing rate, and light around the center causes an increase.
Visual cortical cells respond in a more complex way than ganglion cells. On-off and off-on ganglion cells have circular receptive fields; cortical cells' receptive fields are elongated. Some cortical cells, known as edge detectors, respond positively to light on one side of a line and negatively to light on the other side. They respond most strongly when an edge of light is aligned along that boundary. Bar detectors are another type: they respond positively to light in the center and negatively on the sides, or vice versa.
We have hundreds of regions of space represented separately for each eye. Different cells code for lines of different sizes and widths.
Our visual system can also perceive the colors of objects and whether they are moving. In sum, our visual system has three different processes for the dimensions of form, color, and movement.
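The center-surround behavior of an on-off cell can be sketched as a toy computation. This is an illustration, not a physiological model: the 3x3 patch size, the baseline firing rate, and the weights are made-up values chosen so that uniform light leaves firing unchanged.

```python
# Toy sketch of an on-off ganglion cell receptive field (illustrative
# only): firing increases with light in the center of the receptive
# field and decreases with light in the surround.
def on_off_response(patch, baseline=10.0):
    """patch: 3x3 grid of light intensities (0 = dark, 1 = light).
    The center pixel excites; the 8 surround pixels inhibit."""
    center = patch[1][1]
    surround = sum(patch[r][c] for r in range(3) for c in range(3)) - center
    # Weights chosen so uniform illumination cancels out exactly.
    return baseline + 8.0 * center - 1.0 * surround

dark = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
spot = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]   # light on the center only
ring = [[1, 1, 1], [1, 0, 1], [1, 1, 1]]   # light on the surround only
full = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]   # uniform illumination

print(on_off_response(dark))  # baseline: 10.0
print(on_off_response(spot))  # above baseline: 18.0
print(on_off_response(ring))  # below baseline: 2.0
print(on_off_response(full))  # back at baseline: 10.0
```

An off-on cell would simply flip the signs of the center and surround weights, and an elongated (non-circular) version of the same center-surround idea gives the edge and bar detectors described above.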
The visual system analyzes a stimulus into many independent features in specific locations. These representations of visual features are known as feature maps. We have separate maps for color, orientation, and movement.
Depth and Surface Perception:
Even once edges and bars have been identified, a great deal of information still needs to be processed before we can gain a visual perception of the world.
Changes of texture can give the feeling of distance, even when viewing a 2D image.
Stereopsis is the ability to perceive 3D depth based on the fact that each eye receives a slightly different view of the world.
Motion parallax provides information about 3D structure when the person and/or the objects are in motion: distant objects move across the retina more slowly than close-up objects.
Object Perception:
Gestalt principles of organization:
o The principle of proximity: elements close together tend to organize into units.
o The principle of similarity: objects that look alike tend to be grouped together.
o The principle of good continuation: we perceive two lines, one from A to B and the other from C to D, even though there is no reason the lines couldn't be from A to D and from C to B.
o The principle of closure: we see objects with missing or hidden parts as closed objects. The principle of good form says that we perceive an occluded part as a circle, not as an arc.
These principles help us organize new stimuli into units. It is currently believed that the ability to identify the position and shape of an object in 3D is innate.
Visual Pattern Recognition:
The last step is to recognize what the objects we have put together are. We do so through pattern recognition.
Template-Matching Models:
Template matching: a retinal image of an object is transmitted to the brain, and the brain attempts to compare the image directly to various stored patterns called templates.
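A template-matching model can be sketched in a few lines. This is a minimal illustration, not how the brain actually stores templates: the 3x3 binary "retinal images", the two letter templates, and the agreement-count score are all my own simplifications.

```python
# Minimal sketch of template matching (illustrative): compare a binary
# "retinal image" against stored templates and pick the best overlap.
def match_score(image, template):
    """Count positions where image and template agree."""
    return sum(i == t for row_i, row_t in zip(image, template)
                      for i, t in zip(row_i, row_t))

# Hypothetical 3x3 templates for the letters "T" and "L".
TEMPLATES = {
    "T": [[1, 1, 1],
          [0, 1, 0],
          [0, 1, 0]],
    "L": [[1, 0, 0],
          [1, 0, 0],
          [1, 1, 1]],
}

def recognize(image):
    """Return the name of the template with the highest agreement."""
    return max(TEMPLATES, key=lambda name: match_score(image, TEMPLATES[name]))

noisy_t = [[1, 1, 1],
           [0, 1, 0],
           [0, 0, 0]]   # a "T" with one feature missing
print(recognize(noisy_t))  # T
```

The sketch also makes the model's weaknesses easy to see: shrink, rotate, or restyle the input and the stored grid no longer lines up with it, which is exactly the class of problems raised for template matching below.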
Problems with template matching can arise when the image is a different size than the template, is depicted at a different orientation, or takes a different form (such as a different font).