Visual Perception Midterm #1 Study Guide (PSYC 3124)
This 64 page Study Guide was uploaded by Freddi Marsillo on Tuesday February 16, 2016. The Study Guide belongs to PSYC 3124 at George Washington University taught by Dr. John Philbeck in Spring 2016. Since its upload, it has received 214 views. For similar materials see Visual Perception in Psychology at George Washington University.
Date Created: 02/16/16
Visual Perception Notes – Weeks 1-3 History 1860 – “Elements of Psychophysics” – Gustav Fechner Stimulus vs. Perception Experience vs. Performance • Experience: “Did you see a light?” • Performance: “Press the button when you see a light” Threshold: Minimum amount of a stimulus that it takes to produce a sensation Methods: • Detection: Is something there? • Discrimination: Is A different from B? • Scaling: How much of A is there? • Recognition/Identification: What is A? • This graph shows predicted responses during a psychophysics experiment to determine the absolute threshold (the lowest level of a stimulus that one is able to detect) • The proportion of “yes” responses versus stimulus intensity should yield a step function (a function that increases or decreases abruptly from one constant value to another) • Below absolute threshold, there should be no instances where a stimulus is detected. Above absolute threshold, the stimulus should always be detected • The transition between these two sets of responses represents the absolute threshold of the stimulus Detection Experiments • Method of Adjustment – Adjust a knob until you can detect an image (stimulus) o Very fast method (a positive aspect) but can be imprecise (a negative aspect) • Method of Limits – Experimenter changes the stimulus intensity in small steps o Increase intensity until visible, and vice versa • Method of Constant Stimuli – A fixed set of stimulus intensities presented in random order Classical Psychophysics Absolute threshold: • Humans are not ideal detectors of stimuli, as there are many different sources of variability o The stimulus itself is a factor o Sensory system adds noise o Judgment of stimulus perception can be influenced by a number of physical, emotional, and cognitive factors o Factors may vary with time and within an individual subject • This graph shows actual responses during a psychophysics experiment to determine the absolute threshold • The S shape of the response profile is due to 
various physical, biological, and cognitive factors – this curve is known as a psychometric function • The absolute threshold is the stimulus intensity that produces 50% “yes” responses Difference Threshold The difference threshold is the extra bit of physical intensity that you would need to add in order to make a stimulus just noticeably brighter • Difference threshold example with light intensities: o Point A = point of perceptual equivalence o Point B = conventional response level as the measure for a noticeable increment in the stimulus o Difference threshold = ΔI = b – a • This psychometric function for a difference threshold experiment shows a similar S-shaped curve • The subject must determine whether a test stimulus is more intense or less intense than a reference stimulus • Difference threshold (ΔI) is the amount of extra intensity (b minus a) needed to produce a just noticeable difference in sensation Weber’s Law • The difference threshold increases in a linear fashion with stimulus intensity • ΔI = k × I OR: ΔI/I = k o ΔI = difference threshold (or JND for just noticeable difference) o k = Weber’s fraction (a constant) o I = stimulus intensity • The shape of the function changes when intensity of reference light changes • Difference threshold (b – a) will increase as intensity level increases ▯ Weber studied how the difference threshold varies for different intensity levels ▯ Three different psychometric functions are shown here for progressively higher intensity values ▯ Bar lengths along the x-axis show that the higher the intensity (I), the greater the difference threshold (ΔI) Fechner’s Law • Goal is to obtain the relationship between sensory magnitude and stimulus intensity • Fechner assumed that a constant change in sensory magnitude (ΔS) was needed to produce a just noticeable difference regardless of the starting level of sensation • Higher intensity levels require a greater change in the physical stimulus (ΔI) to produce identical changes in 
sensation (ΔS) • Deriving the stimulus-sensation relationship: S = k × log(I) o S = sensation magnitude o k = a constant that is related to but not identical to the constant in Weber’s law o At low levels of intensity, the magnitude of our sensations can change dramatically with small changes in stimulus intensity o At high levels of intensity, less change is seen in the magnitude of our sensations with comparable changes in stimulus intensity Method of Constant Stimuli • A target was always present, BUT: • Sometimes people would say “target present” when equipment malfunctioned • People could detect fainter targets if you asked them to look harder Signal Detection Theory (SDT) • Criterion effects: o SDT assumes that each subject establishes a criterion such that sensory magnitudes above that point will lead to a YES response, and sensory magnitudes below that point will lead to a NO response o Criterion is affected by expectations, motivations, payoffs, costs, etc. o The random nature of background noise can be represented as a normal distribution o Presence of a stimulus adds to sensory magnitude produced by the noise o Result is a combined normal distribution shifted to the right o y-axis: the probability of any particular value of sensation magnitude 
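The noise and signal + noise distributions described above can be turned into a small numerical sketch. This is my own illustration of the standard equal-variance Gaussian SDT model (the function name and parameter values are assumptions, not from the notes): noise is N(0, 1), signal + noise is N(d', 1), and the subject says “yes” whenever sensory magnitude exceeds the criterion.

```python
from statistics import NormalDist

def sdt_rates(d_prime, criterion):
    """Hit and false-alarm rates under equal-variance Gaussian SDT.

    Noise ~ N(0, 1); signal + noise ~ N(d_prime, 1).
    A trial yields a "yes" response whenever the sensory
    magnitude on that trial exceeds the criterion.
    """
    noise = NormalDist(0.0, 1.0)
    signal = NormalDist(d_prime, 1.0)
    hit_rate = 1.0 - signal.cdf(criterion)          # P("yes" | signal present)
    false_alarm_rate = 1.0 - noise.cdf(criterion)   # P("yes" | signal absent)
    return hit_rate, false_alarm_rate

# An unbiased criterion sits halfway between the two distribution means.
hits, fas = sdt_rates(d_prime=1.0, criterion=0.5)
```

Sliding the criterion left (liberal) raises both rates; sliding it right (conservative) lowers both, tracing out one point per criterion on the ROC curve, while a larger d' pulls the hit rate further above the false-alarm rate at every criterion.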
• This graph shows criterion level (β) established by each subject for sensation magnitude necessary to produce a “yes” response • Liberal criterion would move β to the left, and conservative criterion would move it to the right • Relative proportions of four possible outcomes in a Signal Detection Theory experiment are determined by interaction of β with noise and signal + noise distributions o Correct rejection (when the participant correctly says “no”) and false alarm (when the participant incorrectly says “yes” to seeing a signal when there is no signal present) ▯ there is no signal in the experiment o Hit & miss ▯ there was a signal there (miss = participant doesn’t see it; hit = participant detects it) Signal Detection Theory (SDT) • The ROC Curve (“Receiver Operating Characteristic” Curve): o The curve plots the probabilities of hits and false alarms, showing how these values change with respect to each other at all possible criterion levels o ROC Curve is a plot of hit versus false alarm rate o Shift in β produces different values for hit and false alarm rates o Liberal criterion produces high hit and false alarm rate; the opposite is true for conservative criterion • Signal intensity and detection sensitivity: o Reflected in the signal + noise distribution o d’ = a measure of separation between the two distributions o d’ is greater when the signal is stronger and/or when the subject is a better detector of the signal o Noise and signal + noise distributions become more separated with increasing signal strength and detector sensitivity o Separation of peaks of both distributions is termed d’ • Sensitivity and d’: o Provides a numerical estimate of a person’s sensitivity and is unaffected by non-sensory factors o Shows ROC curves for different d’ values using a continuously variable criterion for each distribution: ▯ Larger values of d’ cause ROC curve to be bowed upward o ROC profile depends on the value of d’ o Greater values of d’ produce ROC curves that 
are bowed toward upper left o ROC curve is a straight line when noise and signal + noise curves overlap exactly; equal probability of hit and false alarm regardless of criterion • “Catch trials” are trials where there is no stimulus How does SDT allow us to estimate how people are responding when they’re guessing? • Criterion = some level of sensation magnitude; if above that level, ▯ “yes, I saw it” and if below that level, ▯ “no, I didn’t see it.” Experimenter can manipulate this level o High criterion: experimenter tells participant to say “yes” only if they’re 100% sure they saw it o Low criterion: experimenter tells participant to go ahead and say “yes” if they think they saw it Modern Psychophysics began in the 1930s with Stevens’ work • Proposed a set of direct methods for studying sensation • Revealed psychophysical relationships that in some cases did not resemble logarithmic functions • Magnitude estimation and the power law: o Stevens asked participants to provide a direct rating of the sensation that they experienced: Technique is called magnitude estimation ▯ Example: Participant is presented with the modulus (standard stimulus) and is told that it represents a certain value (e.g., 10) ▯ Participant is then presented with stimuli that vary randomly along some dimension ▯ Participant provides a relative numerical rating for the other stimuli o Sensory data from magnitude estimation fit into mathematical functions that were consistent with a power law: S = k × I^b ▯ S = sensation experienced by the subject ▯ I = physical intensity ▯ k = scaling constant ▯ b = power value ▯ Magnitude estimation experiments showed that sensory magnitude is related to stimulus intensity by a power function (S = k × I^b) ▯ Most power law functions for sensory perception have exponent values that are less than 1.0 ▯ Some sensory functions have greater exponent values (e.g., electric shock) o This graph is an example of a cross-modal matching experiment relating loudness perception 
to 10 other sensory stimuli o Adjust sound level until it matches with perceived intensity from another sensory domain o The slope of each function is determined by power law exponent values The Nature of Light General properties of light: • Wavelength: o This is the physical distance between two identical points in a wave, thereby spanning a complete cycle o Variations in wavelength set apart the different types of electromagnetic radiation o The electromagnetic spectrum contains radiation spanning a very large range of wavelengths, from gamma rays to ELF waves o Visible light represents a very tiny component of this spectrum, spanning 400 to 700 nm o The actual wavelength of light in turn produces different color sensations Refraction • Light bends when it travels from one medium to another due to changes in the speed of light in different media • The refractive index indicates the speed of light in a particular medium • Two important properties of refraction: o The greater the difference between the refractive indices of two media, the more refraction o Light ray must strike the boundary at an angle away from perpendicular (“the normal”) o Bends TOWARD normal when entering a higher refractive index o Bends AWAY from normal when entering a lower refractive index o The graph shows how light travels at different speeds through different media; the higher the refractive index of the medium, the lower the light speed o Light bends (refracts) at the interface between two media of different refractive indices o Bending is toward the normal when entering medium with higher refractive index; the opposite happens when entering medium with lower refractive index Convergence and Divergence of Light The figure on the top panel represents a concave lens, where light rays are diverged away from the optic axis The figure on the bottom panel represents a convex lens (which forms an image), where light rays are converged toward the optic axis • Glass lenses refract light either toward 
or away from the optic axis, depending on type of lens • Lens power (or thickness of lens) affects image location o Curvature of lens determines its refractive power o Relatively flat convex lens (top panel) has weak power and therefore image is created farther away o Strongly curved convex lens (bottom panel) has greater power and therefore image appears closer to lens Object distance affects image location • Object at optical infinity produces parallel incident light, which is focused at fixed point for this lens (known as focal point) • The closer the object moves to the lens, the farther the image moves away The Human Eye This cross-section of the eye shows its various internal components Cornea and crystalline lens serve as the two refractive elements of the eye Light rays are focused onto the retina Accommodation: • Changing the surface curvature of the lens in response to changing object distances o Neural blur processor triggers ciliary muscle that causes the lens to become more rounded o This increases refractive power of the lens o Near point = limit of maximum accommodation o Retinal blur occurs when an object moves closer from optical infinity o Blur stimulus produces a fast, reflexive change in the crystalline lens that causes it to become more rounded o Greater curvature of its surfaces provides added refracting power to the eye; this places the image on retina o Accommodation can take place over a certain range – from far point to near point o Near point moves farther away with increasing age due to reduced accommodative ability o Greatest decline starts around age 40-45 when lens starts to harden and lose elasticity ▯ This condition is known as presbyopia – it can be corrected with reading glasses Optical Disorders of the Eye Presbyopia: • A decrease in the elasticity of the lens that causes a progressive loss in accommodative ability, as depicted in the graph above • Treatable with additional refractive power provided by a convex lens or reading 
glasses Refractive Errors Hyperopia (farsightedness) • Refractive errors of the eye that occur because the eyeball is too short • Ability to see far objects, but near objects are out of focus • Corrective lens used to correct the refractive error Myopia (nearsightedness): • Refractive errors of the eye that occur because the eyeball is too long • Ability to see near objects, but far objects out of focus • Corrective lens used to correct the refractive error • This is a schematic top-down view of dynamic range of vision for normal and optically-abnormal eyes • Far point (FP) is not affected in presbyopia, but near point (NP) moves progressively farther away • FP is not affected in hyperopia, but NP is farther away depending on severity of condition • Both FP and NP are affected in myopia; magnitude depends on severity of condition Astigmatism: • When the cornea is more curved along one direction than another • Results in blurring of parts of the retinal image The Photoreceptor Array Photoreceptors: • Light-absorbing neurons in the retina that serve as the first element in the visual pathway • Convert light energy into a neural signal Pigment epithelium: • A layer of pigmented cells lying adjacent to the photoreceptor array • Captures light that is not absorbed by the photoreceptors • This image shows how an array of photoreceptors line up along the inside back surface of the retina • Photoreceptors are embedded into fibrous matrix of eyeball • The two different types of photoreceptors are rods and cones • Two important landmarks in the retina are the fovea (pit in the center of retina) and optic disk (hole through which retinal fibers leave eyeball) • The retina can be divided into nasal half and temporal half on either side of the fovea • Rods and cones have different distribution profiles across the retina (bottom panel) • Rods are maximum in periphery; cones are maximum in fovea; there are no photoreceptors in the optic disk Rods and Cones • Rod and cone 
photoreceptors mediate different visual functions: o Rods mediate night vision o Cones mediate day vision • Photoreceptor distribution across the retina: o Fovea: ▯ A small pit in the center of the retina that contains no rods and many cones ▯ Rod density increases in periphery of the retina o Optic disk: ▯ The point where nerve fibers from the retina come together and exit the eyeball • Rods and cones: o The foveal retina is specialized for detailed vision: ▯ We move our eyes so that the image of an object of interest falls on the fovea ▯ Cones in fovea are dedicated to processing image details o The peripheral retina is specialized for low light detection: ▯ Fovea is handicapped under low light conditions ▯ Peripheral retina with high concentration of rods functions best in low light o Visual transduction: ▯ Photoreceptors contain photopigment material ▯ Rhodopsin = photopigment in all mammalian photoreceptors ▯ Key role in phototransduction Visual Transduction: • Phototransduction involves a cascade of biochemical elements o With light exposure, the energy of the photon activates the rhodopsin, which interacts with a G-protein, which activates a specific enzyme, which converts cGMP within the outer segment into GMP. 
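The cascade just described can be caricatured as a toy calculation. This is my own illustrative sketch with made-up numbers, not a biophysical model from the notes: more absorbed photons mean more activated rhodopsin, which (via the G-protein) means more enzyme activity, which destroys more cGMP, so fewer membrane channels stay open.

```python
def phototransduction(photons_absorbed, cgmp_baseline=100.0, gain=0.5):
    """Toy sketch of the rod phototransduction cascade (illustrative numbers).

    Each absorbed photon activates a rhodopsin molecule; the G-protein
    relays that activation to an enzyme that breaks down cGMP in the
    outer segment. Less cGMP means fewer open channels, which is the
    graded electrical signal. Returns the fraction of channels open.
    """
    active_rhodopsin = photons_absorbed          # photon -> activated rhodopsin
    active_enzyme = active_rhodopsin             # G-protein relays activation
    cgmp = max(0.0, cgmp_baseline - gain * active_enzyme)
    return cgmp / cgmp_baseline                  # fraction of channels open

# In darkness all channels stay open; increasing light closes more of them.
```

At very high inputs the toy function bottoms out at zero, loosely echoing how rods saturate (bleach) at intensities beyond the scotopic range.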
Spectral relationships: • Absorption spectrum of rod rhodopsin: o Spectrophotometry: a way of measuring how much light of different wavelengths is being absorbed by rhodopsin o This figure shows how photopigment material can be extracted from outer segment and placed in a test tube o Absorption spectrum can then be determined by giving light of a particular wavelength and detecting how much passes through o This process is repeated for other wavelengths in the visible spectrum • Absorption spectrum of rod rhodopsin o Absorption spectrum: A curve showing how much light is absorbed by rod rhodopsin across the entire visible spectrum o Maximum absorption by rods at 500 nm (bluish-green light) • Absorption spectrum (left graph) shows relative efficiency of rhodopsin absorption in rods across the visible spectrum • Spectral sensitivity of rod-based vision (right graph) shows detection ability in humans across much of the visible spectrum Spectral sensitivity of rod (scotopic) vision: • Scotopic vision: Visual processes that are mediated only by rod photoreceptors • Scotopic spectral sensitivity function: Curve showing the rod’s sensitivity to light of different wavelengths by finding the absolute threshold of different wavelengths of light when only rod vision is tested o Peak sensitivity at 500 nm Absorption spectrum of cone rhodopsins: • Microspectrophotometry: Allows light absorption estimation at the single photoreceptor level • Showed each of three types of cones has own absorption spectrum: o S-cone = peak absorbance at 440 nm o M-cone = peak absorbance at 530 nm o L-cone = peak absorbance at 560 nm • Absorption spectrum of the three classes of cone photoreceptors (left graph) shows that each absorbs maximally at a different wavelength • Spectral sensitivity for cone-based vision in humans (right graph) peaks at 550 nm • The nature of the sensitivity function suggests that it is largely mediated by M and L cones Spectral sensitivity of cone (photopic) vision: • 
Photopic vision: day vision • Regulated by the collective output of three types of cones • Peak sensitivity of cones at around 550 nm Sensitivity, bleaching, and recovery: • Rods become bleached at moderate intensities: temporarily unable to absorb any more photons until the photopigment molecule is regenerated o Rod photoreceptors are blind beyond the upper limit of scotopic vision • Cones continue to respond at all daylight intensity levels o Activation threshold for cones occurs at the upper limit of scotopic vision o Extremely intense light is needed for cones to become bleached • Visual recovery from bleaching: o Dark adaptation: detection sensitivity to a light spot when measured in the dark at various times after photoreceptor bleaching ▯ The process of visual recovery after photopigment bleaching o Cones recover from bleaching faster than rods ▯ Cones recover very quickly after bleaching and show their lowest detection thresholds within about five minutes Neural Processes in the Retina The retinal ganglion cell (RGC): • The last set of neurons in the retina, whose output is sent to the brain • The first site within the retina where action potentials are generated • Emergence of parallel visual streams Structure and layout of RGCs in the retina: • Midget ganglion cell: Relatively small size and compact dendritic field • Parasol ganglion cell: Relatively large size and dendritic field • Axons from midget and parasol ganglion cells project to separate segments within the next structure along the visual pathway Electrical activity in ganglion cells: • Retinal ganglion cells sometimes fire action potentials in the absence of light or in uniform illumination • Receptive field: An area of the retina that, when stimulated, influences the firing rate of a cell • A microelectrode can be used to record electrical activity from ganglion cells • Signals are amplified and displayed on a monitor or oscilloscope • Area of retina that influences the activity of a visual 
neuron (such as a ganglion cell) is known as its receptive field (RF) Receptive field structure of ganglion cells: • ON/OFF cell: Center circular region of receptive field is excited by light stimulation, whereas the surrounding zone is inhibited by light • OFF/ON cell: Center circular region of receptive field is inhibited by light stimulation, whereas the surrounding zone is excited by light • An ON/OFF ganglion cell is shown on the left, and an OFF/ON ganglion cell is shown on the right • Receptive field (RF) of a ganglion cell can be projected onto a screen due to the geometrically precise ways in which images are formed • RFs can have one of two arrangements: central excitatory area with surround inhibitory zone (ON/OFF cell) or vice versa (OFF/ON cell) • A small light stimulus falling on the screen that is imaged to either part of the receptive field will generate a corresponding response • Retinal ganglion cells are optimized for detecting contrast: this is a consequence of the center-surround organization • Firing rate of ganglion cell depends entirely on how a light stimulus aligns with the RF of the cell; in the absence of any light stimulus, ganglion cells fire action potentials at a spontaneous rate • Ganglion cells begin to fire as soon as light falls on an excitatory area, as shown in this example of center stimulation of an ON/OFF cell (top panel) • Maximum firing occurs when a light spot fills an excitatory area (second panel) • As the light spot expands and encroaches into the inhibitory surround, the firing rate becomes progressively less (third panel) • Spontaneous firing occurs when light fills the entire RF (bottom panel) Electrical circuits in the retina: • Bipolar cells mediate the center response: o Each photoreceptor is linked to two different bipolar cells (On and OFF), and each bipolar cell is linked to a separate ganglion cell o This leads to response property of the two types of ganglion cells • Transmission of signals from 
photoreceptors occurs through so-called ON or OFF bipolar cells • The two cell types behave differently in response to glutamate release by photoreceptors • Differential effect is responsible for producing either an excitatory (ON) or inhibitory (OFF) center response within RFs of ganglion cells • Horizontal and amacrine cells mediate the surround response: o Horizontal cells: Make contact with adjacent photoreceptors to gather signals from a larger area of the retina o Amacrine cells: Lateral connections at the bipolar and ganglion level; responsible for lateral inhibition • Center response in ganglion cell is mediated by direct vertical connection with bipolar cells (B) • Surround response is mediated by lateral interactions through horizontal (H) cells at the photoreceptor level and amacrine (A) cells at the level of bipolar and ganglion cells • Lateral interactions are always opposite in nature to the center response and therefore set up the center-surround antagonistic nature of ganglion cell receptive fields • Retinal processing of rod photoreceptor signals: o Like cone circuits, rod signals are transmitted to bipolar cells and also have horizontal cells o At the ganglion level, rod signals merge with cone signals to arrive at the same set of ganglion cells Perceptual Aspects of Retinal Function Sensitivity: • Scotopic vs. photopic sensitivity: o Two curves are shifted both horizontally and vertically with respect to each other • Photochromatic interval: o The sensitivity difference between scotopic vision and photopic vision o Greatest at short wavelengths and decreasing at longer wavelengths • Scotopic and photopic spectral sensitivity curves are shifted with respect to each other • Vertical difference at any given wavelength is known as the photochromatic interval • Horizontal shift of peak sensitivity is known as the Purkinje shift • Scotopic system is more sensitive at all wavelengths except beyond 650 nm Scotopic vs. 
photopic sensitivity: • Purkinje shift: Objects whose colors are associated with shorter wavelengths appear brighter in low illumination because the scotopic system is more sensitive at those wavelengths Sensitivity: • Rods “converge” more than cones • This explains greater sensitivity of rods • Also explains greater acuity of cones • Rods have greater convergence onto a ganglion cell than cones • Rod vision is therefore more sensitive because of the greater pooling (summation) • Cone vision has better resolution because smaller stimuli can be more separately detected • Tradeoff between resolution and sensitivity represents a fundamental feature of central (cone dominated) versus peripheral (rod dominated) vision Resolution: • Physical and biological limits to resolution: o Point spread function (PSF): the blurred light distribution in the image of a point, due to physical factors – two PSFs must be sufficiently spaced apart before they can be physically distinguished o The limit to resolution in biological terms is stipulated by the packing density of the photoreceptor array • Increment threshold experiment reveals how much extra intensity is needed to distinguish stimulus from background • As background intensity increases, so does increment threshold • Rod and cone vision display different increment threshold curves; boxed region of each shows a constant response as specified by Weber’s law Resolution and retinal eccentricity: • The graph below shows visual acuity vs. 
retinal eccentricity o Sharp peak in the fovea, and a rapid and symmetrical decline on either side o Due to decrease in cone density and an increase in signal convergence • Plot of visual acuity (above) as a function of retinal eccentricity shows a sharp rise at the fovea, accompanied by rapid decline on either side • Loss of spatial resolution in periphery means that alphabetic characters have to be progressively larger in order to be distinguishable Resolution: • Measures of spatial resolution – Snellen chart (an eye chart that can be used to measure visual acuity) – see image on the left side below • Strokes of each letter subtend a precise angle when measured at a testing distance of 20 feet • Landolt rings (see the image on the right side below) – same as Snellen chart except you see rings instead of letters and you have to indicate which side the opening is on (ideal for people who cannot identify letters, such as young children) • Measures of spatial resolution – contrast sensitivity function: o Use grating approach o Can specify the spatial frequency of each grating 20/20 vision = you can see at 20 feet what a person with normal vision can see at 20 feet • 6/6 vision (same except uses meters, not feet) 20/100 vision = you can see something clearly at 20 feet that a person with normal vision can see at 100 feet 20/10 is better than normal vision – you can see something at 20 feet that a person with normal vision can see at 10 feet • Sine-wave gratings have a sinusoidal intensity profile as a function of space • Spatial frequency is specified as the number of cycles per degree of visual angle at the eye • Contrast of each grating can be modulated for any given frequency • Spatial vision can be assessed by obtaining minimum contrast (threshold) necessary to detect gratings of different spatial frequencies Measures of spatial resolution – contrast sensitivity function: • Find the minimum contrast that is needed to make a grating of a particular spatial frequency just visible • Repeat across 
multiple spatial frequencies • Plot threshold data in terms of sensitivity • Human contrast sensitivity function (CSF) shows peak at 8-10 cycles/degree • Contrast sensitivity declines at lower and higher spatial frequencies • Detection of grating at a particular spatial frequency is likely mediated by appropriately sized receptive fields (inset) • S = 1/T ▯ sensitivity = 1/threshold Properties of the CSF in terms of retinal functions: • Small decline in grating visibility at very low spatial frequencies: o Due to increasing width of the bars encroaching into the OFF areas of the biggest receptive fields • Rapid decline in grating visibility at higher frequencies: o Due to a limit to the density of the photoreceptor array Factors that affect contrast sensitivity • CSF is affected by light level, age, and disease • Photopic CSF shows highest range and sensitivity; scotopic CSF has the least • High-frequency cutoff decreases with age Center-surround effects: • Lightness constancy: the similarity in apparent lightness of objects despite large changes in environmental illumination Mach bands • Mach band illusion can be seen at the borders separating each of the bars • Narrow dark band appears to the left of the border; narrow white band appears to the right • The illusion can be explained by considering the relative output of ganglion cells whose receptive fields interact with the stimulus at different points The Retinal Projection to the Brain Subcortical targets of the retinal output: • Lateral geniculate nucleus = in thalamus • Superior colliculus • LGN to visual cortex • Signal splitting at the optic chiasm: o Nasal fibers cross over General layout of the retinal projection: • Signal splitting at the optic chiasm: o Each optic nerve carries signals from each eye o Each optic tract carries signals from half of the retina of each eye o Each hemisphere processes visual information from the opposite side of the visual field • The foveal representation: o It is unlikely 
that projections from the fovea are partitioned in a perfect midline o It is more likely that projection patterns along entire midline, including the fovea, are fuzzy – some ganglion cells project along ipsilateral pathway and others along contralateral pathway The lateral geniculate nucleus (LGN): • Structural and functional properties: o STRUCTURE: Six layers: ▯ Bottom two layers contain magnocellular neurons ▯ Top four layers contain parvocellular neurons ▯ FUNCTION: LGN neurons have receptive fields with concentric circular pattern (ON/OFF and OFF/ON) • LGN is a paired neural structure located deep inside the brain • Magnified view shows that neurons are largely confined to six major layers • Layers are numbered from the bottom up; bottom two layers are distinct from the top four layers • Organization of visual signals – retinotopy: o LGN neurons in each layer are monocular o Topographic layout of retina is formed in each layer of LGN o Foveal retina has greater representation in LGN, while peripheral parts of the retina have less • Organization of visual signals – functional segregation: o Parasol ganglion cells project to magnocellular layers of LGN, and midget ganglion cells project to parvocellular layers of LGN o Magnocellular neurons are more tuned to light contrast levels o Parvocellular neurons convey information about color contrast • Regulation of information flow: o LGN neurons not only send signals to visual cortex o They also receive signals from visual cortex as to which signals to amplify in response to a greater attentional interest The superior colliculus: • Resides below LGN • Pathway through pulvinar projects to visual cortex: o Can account for blindsight • Receives input from visual cortex, as well as somatosensory and auditory systems • Major output to motor areas of the brainstem to control saccades The Primary Visual Cortex • Located in the occipital lobe • First cortical area to process visual information • Also known as striate cortex, 
area 17, and area V1
• Structure and layout of area V1 – six layers
• The retinotopic layout of area V1:
o The fovea enjoys greater cortical area than peripheral parts of the retina
Properties of area V1 neurons:
• The emergence of binocularity:
o Many area V1 neurons can be activated by light stimulation of either eye or of both eyes together
o Ocular dominance scale:
▪ Individual neurons show different degrees of preference for one particular eye or for both eyes
▪ Most neurons in area V1 show some degree of binocular influence
• Orientation selectivity:
o ON and OFF subfields in area V1 are rectangular rather than circular
o An elongated light stimulus will trigger maximum activity only in those neurons with a similar receptive field (RF) orientation
o For a neuron with a vertically elongated receptive field, firing rate will increase as the light bar becomes more vertical
• Orientation tuning curve:
o Plot the neuron's response profile as a function of the light bar's orientation
o The neural response of an orientation-selective neuron depends on the relationship of the light bar's orientation to that of the RF
o Firing profiles show that a vertical bar optimally stimulates a neuron with a vertical RF orientation; minimum firing occurs when the bar is horizontal
o The summary of firing rate as a function of bar orientation is known as the orientation tuning curve
Properties of V1 Neurons
Directional motion selectivity:
• Some neurons in area V1 show significantly greater firing to stimulus movement in one direction in comparison to all others
• However, some neurons are non-directional
Functional architecture of area V1:
• Ocular dominance columns:
o A vertically oriented collection of neurons spanning the entire thickness of area V1 that shows a preference for light stimulation of a particular eye
o Neural projections from the LGN arrive into discrete interdigitated sectors of layer 4C in area V1
o The patches alternate in eye preference due to right-eye versus left-eye projections
o These anatomical patterns produce vertically oriented columns of eye preference, known as ocular dominance columns (ODCs)
• Orientation columns:
o Neurons with similar orientation preferences are clustered together in vertically oriented columns
o There are discrete shifts in orientation preference from one column to the next
o A series of columns together represents all possible orientations
• The ice cube model proposes that ODCs and orientation columns are situated perpendicular to each other: ocular dominance columns run in one direction, whereas the orientation columns are arrayed along a perpendicular axis
• Hypercolumn: a cortical module that encompasses one pair of ocular dominance columns and a complete series of orientation columns
Visualizing the architecture of area V1 – functional anatomy:
• Optical imaging:
o Generates maps that display areas of high neural activity in the visual cortex in response to activation by a stimulus
o More active brain areas reflect less light than inactive areas
o Can be used to examine how activity changes with different stimuli
• Ocular dominance columns do not run in straight lines as in the ice cube model but have a meandering quality
• Orientation columns likewise do not run in straight lines but appear as a series of pinwheel patterns
• Optical imaging can be used to reveal both ocular dominance and orientation columns
• An enlarged view of the orientation columns shows that they appear as a pinwheel pattern on top of the cortex, where multiple columns converge into a central core
• Pinwheel segments extend down through the thickness of the cortex to make up a series of vertically oriented orientation columns
Visual information processing beyond the occipital lobe splits into two major pathways:
• Dorsal stream: the "where" pathway in the parietal lobe
• Ventral stream: the "what" pathway in the temporal lobe
Higher Cortical Functions and
Object Perception
Dorsal cortical stream:
• Signal output from area MT to the parietal lobe
• Involved in the visual coordination of body and eye movements and in encoding spatial relationships
• The "where" pathway
Ventral cortical stream:
• Signal output from area V4 to the temporal lobe
• Involved in processing object detail and identity
• The "what" pathway
• Includes visual areas in the temporal lobe
Object perception:
• Agnosia: a disorder in which people have difficulty recognizing objects due to selective damage to the ventral visual pathways
o Patients with agnosia are unable to compare and match different structures
o Another type of agnosia spares perceptual function but produces an inability to name the structure
• Structuralism: the psychological theory that mental experiences result from the assembly of elemental structural units that can be deduced through careful introspection
• Gestalt theory of object perception: object perception has an intrinsic quality based on the wholeness of structure that cannot be reduced to its constituent parts
• Law of similarity: items that are similar in nature appear to be grouped together to create form, because they share common features
• Closure: interacting items can create a single closed pattern that obscures its components
• Law of proximity: near items appear to be grouped together
• Law of simplicity: items are organized into figures in the simplest way possible
• Law of good continuation: the tendency to perceive clusters of individual elements as forming a single contour
• Law of common fate: items moving in the same direction are grouped together
• Kanizsa figures: show the importance of holistic mechanisms; sensory analysis at the structural level cannot account for certain features of the figures
o Kanizsa figures show structure when it is not explicitly defined
o Neither the white triangle nor the cube actually exists in either image; they are instead created by the mind
• Figure-ground segregation: a salient figure or foreground impression stands out and is distinguishable from background stimulation
o The relationship between figure and ground can alternate in ambiguous situations
o Rubin's vase-face figure has two possible interpretations: it alternates between a white vase and two opposing black faces
o Past experiences also affect perceptual organization
o The vase is more salient if the figure is inverted (the faces are upside down)
o The Dalmatian is instantly perceived once you have seen it
Modern structural theories
• Gestalt principles are regarded not as laws but as heuristics: the most plausible solution to a particular problem given the circumstances
Feature integration theory (FIT) is based on the assembly of low-level features into a complex visual object:
1) Pre-attentive stage: basic characteristics of the features in the pre-attentive map can be identified via the pop-out phenomenon – some features immediately pop out regardless of the number of distracters
• Differences in elementary features such as orientation or contrast immediately pop out and can be quickly detected regardless of figure density
• More complex forms, such as characters or numerals, require an attention-driven search; the greater the number of distracters, the longer the search time
2) Attentional stage: at this stage, feature integration is believed to take place
• The binding problem = how elementary tokens are assembled into a visual object (binding requires attention and takes time)
• Once the features are bound together, the resulting object is compared to memory – a positive match then leads to identification
Recognition-by-components theory (RBC) says that visual objects are initially parsed into simple geometric volumes that are later assembled to create a 3D representation
• The basic features are volumetric primitives called geons
• Common objects can be assembled from various combinations of these geons
Analytical approaches – Marr's computational algorithm:
• Early stages of vision: an edge detection algorithm creates spatial primitives composed of edges, lines, blobs, and terminations
o This fits nicely with the contrast-detecting functions of early visual neurons
• Next stage: links primitive features into larger ones and groups similar elements together:
o Produces a representation of an object's surface and layout, which is then transformed into a 3D representation
• Edge detection algorithms can be applied at different spatial scales: broad changes in intensity are captured at coarse resolution, while abrupt intensity changes are identified at fine resolution
• The three spatial levels of analysis allow an algorithm to pick out sharp borders as well as broad intensity changes (e.g., shadows and highlights)
Face perception
Neural processing of faces:
• Prosopagnosia = an inability to recognize faces caused by impairment in sensory processing of high-level visual functions in the temporal lobe
• Single neurons in the monkey temporal lobe are responsive to face stimuli in a highly specific manner
• fMRI studies with humans show increased activity in the fusiform face area (FFA)
o Functional activity in the human brain in response to faces shows a focus of activation in the fusiform gyrus
Perceptual aspects of face processing:
• Mental representations of faces are fundamentally holistic in nature (supports Gestalt theory)
• Faces and objects may be treated differently by our visual system (familiarity?)
• Inverted faces are hard to recognize
• The holistic nature of face perception can be demonstrated with overlapped face images
• Even though the features of the two faces blend into each other, each face is distinctly visible as a whole – the same is not true with non-face objects, such as overlapped houses
Depth and Size
Perception changes based on state – for example, a hill might appear to be steeper if you are fatigued
Equivalent Configurations
There is an infinite variety of things in the world that can give rise to the pattern of light on the back of our eye (an infinite variety of things can create the same or similar images to our eyes) – how are we able to narrow it down?
• How does our brain figure it out? → Distance and depth information are extremely important
• The number of equivalent configurations is infinite if you can't perceive distances
• Two different things in the real world can create the same pattern of light on the back of the eye
• Once you can see the depth information, you can narrow down the number of equivalent configurations, which allows you to differentiate between images
Egocentric distance (or absolute distance) – the distance between yourself and some object
Exocentric distance (or relative distance) – the distance between two things out in the world, not including yourself
There is a distinction between distance and depth – they are related, but not exactly the same
Cues we can use to gauge distance:
Absolute Distance
Monocular cues – cues that are available even with only one eye
• Accommodation
o The process by which the eye changes optical power to maintain a clear image of, or focus on, an object as its distance varies
• Absolute motion parallax (parallax = seeing something from two different locations)
o Motion parallax comes about when your head or the object moves from one location to another, i.e. from side to side
• Familiar size
o Knowing the typical size of an object can help you gauge how far away it is
• Angular elevation
o How far down in your visual field you have to look to see an object – how high up in the visual field does the object appear? If the object is very low in your visual field, it must be close; if it is very high up in your visual field, it must be farther away
Binocular cues – cues that require both eyes
• Convergence
o There is a muscular part and a visual part to the cue
o Visual part: parallax – you have two eyes at different locations
o Muscular part: you have to literally use your eye muscles and turn your eyes to see the object – a separate signal for your brain to gauge distance; this is the signal your brain uses to decide how much you need to turn your eyes in
o Convergence is not as good as angular elevation, but it is better than the others
Relative Distance
Monocular cues
• Relative motion parallax
o Also called "optic flow" – a depth perception cue in which objects that are closer appear to move faster than objects that are farther away
o Focus of expansion (FOE) = the point in the optic flow from which all visual motion seems to emanate and which lies in the direction of forward motion; the single point on the projected image where the motion appears to be coming from
• Relative angular extent, "linear perspective," texture gradient
o Linear perspective: the appearance of lines tending to converge – the relative size, shape, and position of objects are determined by visible or imagined lines converging at a point on the horizon
o Texture gradient: the difference in apparent size between closer objects and objects farther away; groups of objects appear denser as they recede into the distance
• Angular elevation
o Objects appearing closer to the horizon tend to seem farther away, and objects farther from the horizon tend to seem closer to you
• Aerial perspective
o Useful for
great distances – has to do with the amount of water vapor in the air: things that are very far away will tend to seem more blue in color and lower in contrast
o This is because light from something farther away has to pass through more of the water vapor (almost like looking through an ocean)
o For example, on a very humid day, objects will tend to seem farther away
• Interposition
• Lighting and shading
Binocular cues
• Disparity
o Our eyes get slightly different views of the world; our two eyes are separated just enough to get slightly different perspectives of the world
o We have two eyes, but somehow our brain merges these two images into a single viewpoint that has depth to it
o Disparity = the physical cue, a quantity that can be measured
o Stereopsis = the perception of depth based on that cue (binocular disparity is the physical cue; stereopsis is the percept)
o If you keep increasing disparity, you will still experience depth, but you will start to see double images
o For example, hold out two fingers, one on each hand. Now close one eye, and then the other. One eye sees a bigger distance between your two fingers than the other eye does; this difference in distance is disparity
Lighting and shading can also provide cues for depth perception:
• They give us a sense of 3D – parts of a figure that are light at the top seem to pop out of the screen, and parts that are darker at the top seem to recede into the screen
• Interposition – one thing appearing to be in front of another; e.g.
the clouds are between us and the moon; the clouds are in front of the moon and are therefore closer to us than the moon is
Corresponding Points
• A representation of the retinal surface from behind the eyes shows the relative image locations of four peripheral objects
• Image location can be specified by the distance (d) from the fovea and by whether the image is situated in the nasal (n) or temporal (t) retina
• The four pairs of images are formed on corresponding retinal points; the distance from the fovea is identical in the two eyes (d = d)
• If you are getting the same image of an object in your two eyes – that is, if the image is formed on corresponding points between the two eyes – then the disparity is 0
Binocular Depth Perception
Stereoscopic cues and binocular disparity:
Horopters:
• Vieth-Muller circle = a theoretical circle in space on which all objects produce optical images at analogous retinal points in the two eyes
• Horopter = the set of environmental points that produce an image at analogous retinal sites for a given fixated object → contains all those points in space whose images fall on corresponding points of the retinas of the two eyes
• Horopter is a general term; the Vieth-Muller circle is a type of horopter
• With a pair of eyes fixated on an object F, all objects situated on the Vieth-Muller circle project to corresponding points in the two retinas
• Objects located behind the horopter create binocular images on non-corresponding retinal points
Random-Dot Stereograms
• A stereo pair of random-dot images which, when viewed with the eyes focused on a point in front of or behind the images, produces a sensation of depth, with objects appearing to be in front of or behind the display level
• Correspondence problem = the problem of ascertaining which parts of one image correspond to which parts of the other image
Relationship between size, distance, and visual angle: if you change one of these variables, another one must also change
• Change SIZE, hold DISTANCE constant → the visual angle must change
• Change DISTANCE, hold SIZE constant → the visual angle must change
• Change VISUAL ANGLE, hold DISTANCE constant → the size must change
Emmert's Law
• Relationship: the visual angle is held constant while the perceived distance of the object changes, so the perceived size also appears to change
• When the moon is on the horizon, it looks a lot bigger than it does when it's up in the sky. This is an illusion, because the retinal image size is actually the same regardless of where the moon is positioned
• Explanation: when the moon is on the horizon, you actually perceive it to be farther away than when it's directly overhead – consistent with the idea that we have more distance cues when we're looking at the horizon; there are lots of distance cues near the horizon moon, so we perceive it to be farther away
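The size–distance–visual angle trade-offs above can be checked with a little geometry. As a rough sketch (the formulas are standard visual-angle geometry rather than anything specific to the lecture, and the sizes and distances are made-up examples): an object of size s at distance d subtends a visual angle of 2·arctan(s / 2d), and Emmert's-law reasoning runs the relationship backward – a fixed visual angle combined with a larger perceived distance yields a larger perceived size.

```python
import math

def visual_angle_deg(size, distance):
    """Visual angle (in degrees) subtended by an object of a given
    size at a given viewing distance (same units for both)."""
    return math.degrees(2 * math.atan(size / (2 * distance)))

def perceived_size(angle_deg, perceived_distance):
    """Emmert's-law-style estimate: for a fixed visual angle,
    perceived size scales with perceived distance."""
    return 2 * perceived_distance * math.tan(math.radians(angle_deg) / 2)

# A hypothetical 1.7 m object at 10 m subtends about 9.7 degrees
angle = visual_angle_deg(1.7, 10.0)

# Same retinal angle, but judged twice as far away -> looks twice as big
near = perceived_size(angle, 10.0)  # recovers 1.7 m
far = perceived_size(angle, 20.0)   # scales up to 3.4 m
```

This is the logic behind the moon illusion as described above: the retinal angle is unchanged, but the extra distance cues near the horizon increase perceived distance, and perceived size scales up with it.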