Philosophy and Neuroscience Study Guide: Test 1
Churchland Neurophilosophy Introduction
∙ Excitable Cell – a cell that can pass a tiny electrical effect down its extent and that can be appropriately excited so that the organism may move, thereby feeding, fleeing, fighting, or reproducing
∙ Our own brains are massive mounds of excitable cells
∙ In an attempt to understand our brains, certain intriguing problems arise such as how to study the brain, how to conceive of what it is up to, and how our commonsense conceptions of ourselves might fit with what we discover (examples of philosophical problems)
∙ Philosophical questions such as these span a range of subject areas, so it may make no difference whether they are asked by a philosopher or a neuroscientist – they are all part of the same general investigation
∙ What can or cannot be imagined about the empirical world is not independent of what is already understood and believed about the empirical world
∙ Sustaining conviction of the book: that top-down strategies and bottom-up strategies for solving the mysteries of mind-brain function should not be pursued in icy isolation from one another
∙ For neuroscientists, a sense of how to get a grip on the big questions and of the appropriate overarching framework with which to pursue hands-on research is essential
∙ For philosophers, an understanding of what progress has been made in neuroscience is essential to sustain and constrain theories about such things as how representations relate to the world, whether representations are propositional in nature, how organisms learn, whether mental states are emergent with respect to brain states, whether conscious states are a single type of state, etc.
∙ Guiding aim: to paint in broad strokes the outlines of a very general framework suited to the development of a unified theory of the mind-brain.
∙ Dualists argue: the mind is a separate and distinct entity from the brain
∙ Want: a unified theory of how the mind-brain works. We want a theory of how the mind-brain represents whatever it represents and of the nature of the computational processes underlying behavior
∙ A characterization of the nature of representations is fundamental to answering how it is that we can see or intercept a target or solve problems
Philosophy Meets the Neurosciences
∙ Philosopher Herbert Feigl proposed the idea of an autocerebroscope through which people could examine the activities of their own brains.
∙ Positron Emission Tomography (PET) and functional magnetic resonance imaging (fMRI) were tools for studying the brain that provided avenues for revealing which brain areas are usually active when individuals perform specific tasks. This can help us better understand which mental processes are involved in performing a given task.
∙ The idea that neurons constitute the basic functional, cellular units of the brain was not widely accepted until the beginning of the twentieth century, but it provided the foundation for subsequent micro-level research
∙ Until recently, psychologists didn’t have access to tools for examining brain activity and so had to rely on indirect measures
o 1. One was to measure the time it took for a subject to respond to a specific stimulus (known as reaction time, RT).
o 2. Another was to note the error patterns that could be induced by manipulating the conditions under which the stimuli were presented, from which researchers could hypothesize about what operations the brain must be performing
∙ Psychologist George Miller and neurobiologist Michael Gazzaniga coined the term cognitive neuroscience to designate the collaborative inquiry that integrates the behavioral tools of the psychologist with the techniques for revealing brain function to determine how the brain carries out the information processing that generates the behavior.
∙ As a result of the diminished scientific content of the field, philosophy became identified primarily with inquiries into values (ethics) and attempts to address foundational and general questions about ways of knowing (epistemology) and about the nature of what there is (metaphysics).
o Thus, epistemology became preoccupied with whether justified true belief sufficed for knowledge, and metaphysics addressed questions such as whether events, or objects and properties, are the basic constituents of reality.
Epistemic Issues in Procuring Evidence About the Brain: The Importance of Research Issues and Techniques
∙ Evidence for scientific theories stems from observation
∙ Question: The fact that evidence consists of altered phenomena raises the question: to what degree is what is taken as evidence just the product of the alteration or in what respects does it reflect the original phenomena for which it is taken to be evidence?
∙ A variety of indirect measures are used for evaluating instruments and techniques:
o 1. Whether the instrument or technique is producing well-defined or determinate results
o 2. The degree to which the results from one instrument or technique agree with results generated in other ways
o 3. The degree to which the purported evidence coheres with theories that are taken to be plausible
∙ Goal: To promote an awareness of an epistemic challenge that is central to scientific practice
∙ One issue arises because these various techniques are used to study the brains of members of different species
o Issue arises due to significant differences in the organization of brains across species, rendering the task of making inferences across species challenging
∙ Some classic discoveries of neuroanatomy include:
o The discovery that neurons are the functional units of the nervous system
o The identification of areas of the brain with different neural composition, patterns of connectivity, etc.
∙ As critical as neuroanatomy is for understanding the brain, understanding function requires techniques that directly intervene in the functioning of the brain and render the functional processes salient
∙ One of the oldest approaches to identifying the function of brain components is analysis of the deficits resulting from lesions (localized damage) to those components
o The goal of this approach is to identify a psychological deficit associated with it and to infer from that what contribution the damaged area made to normal psychological function
∙ One challenge in lesion research is determining precisely what areas of the brain are injured
o The most general inference is that the damaged area was in some way necessary to normal performance. Still, this inference is problematic because an organism can sometimes recover, or develop an alternative way of performing a function, over time after brain injury. The challenge is even greater when one tries to specify just what role the damaged part played in the task.
∙ One strategy: to attempt to dissociate 2 mental functions by showing that damage to a given brain part may interfere with one function but not the other – typically shown through double dissociation, but this method is not foolproof
∙ Another method of probing brain function is electrical stimulation of targeted areas. However, just as with lesion studies, the challenge is to constrain the interpretation of the contribution of the stimulated site to normal mental function
Neuroanatomical Foundations of Cognition: Connecting the Neural Level with the Study of Higher Brain Areas
∙ Important to recognize that many neuroscientists approach the brain as a stratified system
o Generally acknowledged that there are a wide range of levels at which to investigate the brain (ranging from macro to micro)
o Levels from micro to macro: neurotransmitters, synapses, neurons, pathways, brain areas, systems, the brain, and central nervous system
o Each level has its own methods and problem domains
o Furthermore, some levels lend themselves more easily than others to interdisciplinary involvement
∙ Goal (1): To show that even if one’s philosophical interests in neuroscience are confined mainly to its role in cognitive explanations, understanding some important issues at the micro level can enhance one’s ability to draw from this science
∙ Goal (2): To encourage the philosophical appreciation of neuroscience as a science, apart from its contributions to other fields
∙ This chapter seeks to emphasize the neuron doctrine (the view that neurons are visibly discrete, cellular units) and the relation of the doctrine to higher-level theories
∙ Goal (3): Facilitate a deeper, more informed perspective on the grosser level, the level from which philosophers most commonly draw in relating neuroscience and philosophy
∙ Neurons = nerve cells
o Consist of major parts: cell body (soma), axon, and dendrites
o Axons and dendrites are collectively known as processes
o Dendrites: receive incoming signals and carry them to the cell body
o Axons: carry signals (action potentials) away from the cell body and toward the synapse
o Synapse: a junction between 2 neurons, that is, between the axon terminal of the presynaptic cell and the dendrite of the postsynaptic cell
Can be chemical or electrical, but most are chemical
Signal transmission between the 2 cells is mediated by neurotransmitters
Synaptic Cleft: the open space between the presynaptic cell and the postsynaptic cell
Electrical synapses are mediated by the flow of electrical current from one cell to the next
At these, the presynaptic and postsynaptic cells are linked by a gap junction
∙ Key Concepts of Neural Structure:
o In chemical synapses there is no physical contact between presynaptic and postsynaptic neurons
o Neurons are discretely bounded, physically separate, and distinct cells
o Axonal and dendritic processes are physically integral to the nerve cell and continuous with the cell body
o Functionally relevant differences among neuronal types can be microscopically observed
∙ The Cell Theory: the theory that animal and plant tissues are composed of and generated from cells
o Attraction to Schwann’s 1839 Theory
All living tissue is composed of the same, basic, cellular elements
He proposed a general mechanism for cellular generation whereby cells are formed from the inside out, by the gradual accretion of new material in a manner analogous to crystallization
Later in the 1850s, others proposed that cells are generated through cellular division
Despite this, Schwann’s theory is still credited with establishing cells as basic building blocks of life
∙ The Neuron Doctrine: the view that neurons are discrete, individual cells, one physically discontinuous from another
Can Cognitive Processes be Inferred from Neuroimaging Data?
∙ Reverse Inference: the engagement of a particular cognitive process is inferred from the activation of a particular brain region
Introduction
∙ Functional neuroimaging techniques, such as fMRI, provide a measure of local brain activity in response to cognitive tasks undertaken during scanning
∙ Contrary to typical use of this tool, there is increasing use of neuroimaging data to infer the engagement of particular cognitive functions based on activation in particular brain regions
∙ Goal: To analyze reverse inference and to characterize some limitations on the effectiveness of this strategy
∙ Goal of cognitive psychology: to understand the underlying mental architecture that supports cognitive functions
o They examine the effects of task manipulation on behavioral variables and use this data to test models of cognitive function
Inference in Neuroimaging
∙ Reverse inference reasons backwards from the presence of brain activation to the engagement of a particular cognitive function
∙ Cognitive neuroscience is generally interested in mechanistic understanding of the neural processes that support cognition
Estimating Selectivity Using the BrainMap Database
∙ The greatest determinant of the strength of a reverse inference is the degree to which the region of interest is selectively activated by the cognitive process of interest
o Unfortunately, it’s quite difficult to determine clearly the selectivity of activation in a particular brain region
Improving Reverse Inferences
∙ 2 ways to improve confidence in reverse inferences
o Increase the selectivity of response in the brain region of interest
o Increase the prior probability of the cognitive process in question
∙ Selectivity is outside of the control of the experimenter
∙ But an estimate of selectivity can at least be obtained
∙ The size of the region of interest will affect selectivity, suggesting reverse inference to smaller regions will provide more confidence
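∙ A toy Bayesian calculation (numbers invented for illustration, not taken from the paper) showing how selectivity and the prior combine to set the strength of a reverse inference:

```python
# Toy reverse inference: P(process | activation) from Bayes' rule.
# "Selectivity" is high when the region rarely activates without the process.
def reverse_inference(p_act_given_proc, p_act_given_not_proc, prior):
    evidence = p_act_given_proc * prior + p_act_given_not_proc * (1 - prior)
    return p_act_given_proc * prior / evidence

# Low selectivity: the region often activates even without the process.
print(reverse_inference(0.8, 0.5, prior=0.5))   # ~0.62 -> weak inference
# High selectivity: the region rarely activates without the process.
print(reverse_inference(0.8, 0.1, prior=0.5))   # ~0.89 -> stronger inference
# A higher prior that the task engages the process also raises the posterior.
print(reverse_inference(0.8, 0.5, prior=0.8))   # ~0.86
```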
Conclusion
∙ There is substantial excitement about the ability of functional neuroimaging to help researchers discover the organization of cognitive functions
∙ Caution should be exercised in the use of reverse inference, esp. in cases where the prior belief in the engagement of a cognitive process and selectivity of activation in the region of interest are low
Mining the Brain for a New Taxonomy of the Mind
∙ Topic: The article summarizes a debate over the right taxonomy for understanding cognition and the proper role of neuroscientific evidence in specifying this taxonomy
∙ Everyone in the psychological sciences agrees that the mind is organized
o The debate is over how this organization is best described
∙ Concepts and categories of psychology stem from
o Folk-psychological notions
o Metaphysical assumptions
o Analytic frameworks
o The need to explain specific observation
∙ Mechanistic perspective – every machine needs its driver
∙ Ontology – a set of assumptions about what elementary building blocks of the reality to be represented actually are
∙ Brain’s native ontology – categories it uses to interpret the world
∙ Goals: Review some researchers’ efforts to explore the brain’s native ontology and their use of these explorations to motivate revisions to the basic categories of psychology; attempt to clarify their motivations; and reflect on some possible implications
The Cognitive Ontology: Why Worry?
∙ Motivation of debate: the growing realization that cognitive processes do not map to the brain in a particularly straightforward way
∙ Many-to-many mapping – any given cognitive process generally requires a number of different neural regions, and any given brain region typically supports multiple different cognitive processes
∙ Three groups with differing views:
o Conservatives – expect that it should be possible to specify a set of fundamental operations that will allow cognitive theories and process models to map more cleanly onto the brain than is currently evident
They advocate the consideration of neurobiological evidence during the analysis and decomposition of a cognitive process into its components
o Moderates – suspect that elements in the current ontology are composites, and some elements may not reflect any aspect of psychological reality
Arguing the brain can and should act as one arbiter of the psychologically real
Like conservatives, they expect the end result to be a set of mental operations that will often map to brain regions onetoone
o Radicals – ready to rethink the very foundations of psychology in light of evidence from neuroscience and evolutionary biology
They expect the end result to lead to very few onetoone mappings between brain regions and the psychological primitives
∙ These groups differ by degree of revision and amount of one-to-one function–structure mapping
∙ All the groups suppose we reject the autonomy of psychology from neuroscience
∙ The debate is about the ontological requirements for a unified science of the mind and the proper role of neurobiological evidence in the construction of such an ontology
∙ Visual word form area – responds not just when words are viewed, but also when they are heard and when they are read in Braille, and responds not just to words but to other kinds of visual objects as well
∙ For Price and Friston, the goal of functional labels should be to explain and predict how an area responds in different contexts
∙ 2 inferences:
o Reverse Inference – the ability to conclude that a particular mental operation is occurring given the observation of brain activity
o Forward Inference – the complement: the ability to predict which brain region(s) will be engaged by a particular psychological process
o Generic functional attributions don’t sufficiently preserve and reflect what we know about the functional differentiation of different regions of the brain
∙ What the functional import of any instance of neural firing is will depend on the larger neural and environmental context
∙ Poldrack argued – if we generate a proper ontology, it will be possible to map component operations to specific regions of the brain
∙ Penner-Wilger and Anderson have advocated for cross-domain modeling:
o The aim of which is the specification of “workings” – low-level, domain-general component operations that can be put to many different uses
∙ Conservative approach requires neural functional segregation sufficient to support robust reverse inference
∙ Lenartowicz found that all constructs were discriminable from one another, with the interesting exception that task switching was discriminable from neither response selection nor response inhibition
o Requires that the overall pattern of activity is relatively more similar within than it is between categories
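∙ A hedged sketch of that similarity criterion with made-up activation patterns (the data, functions, and numbers here are purely illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_within_between(patterns, labels):
    """Mean pattern correlation within vs. between task categories."""
    within, between = [], []
    for i in range(len(patterns)):
        for j in range(i + 1, len(patterns)):
            r = np.corrcoef(patterns[i], patterns[j])[0, 1]
            (within if labels[i] == labels[j] else between).append(r)
    return np.mean(within), np.mean(between)

# Fake "activation patterns": three tasks per construct, shared signal plus noise.
base_a, base_b = rng.normal(size=50), rng.normal(size=50)
patterns = [base_a + 0.5 * rng.normal(size=50) for _ in range(3)] + \
           [base_b + 0.5 * rng.normal(size=50) for _ in range(3)]
labels = ["construct A"] * 3 + ["construct B"] * 3

w, b = mean_within_between(patterns, labels)
print(f"within = {w:.2f}, between = {b:.2f}")  # discriminable if within >> between
```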
∙ Clear implication: failures of neural selectivity render psychological constructs suspect
∙ To accurately predict from brain data the category of tasks being performed, and to predict brain activation patterns from the task category, often requires a dimensional space that does not align well with our current psychological categories
∙ The author urges that we should seriously consider the likelihood that the best way to understand the brain’s native ontology is in evolutionarily inspired, ecological, and enactive terms
Conclusions
∙ Original question of exploration: whether, how, and to what degree neuroscience data ought to influence the concepts and categories used to structure psychology
∙ Looks at the relationship between scientific and folk concepts of the mind and experience
Predictive coding explains binocular rivalry: An epistemological review
Abstract
∙ Binocular rivalry occurs when the eyes are presented with different stimuli and subjective perception alternates between them.
∙ This review takes an epistemological approach to rivalry that considers the brain as engaged in probabilistic unconscious perceptual inference about the causes of its sensory input.
∙ The core of the explanation is that selection of one stimulus, and subsequent alternation between stimuli in rivalry occur when:
o (i) there is no single model or hypothesis about the causes in the environment that enjoys both high likelihood and high prior probability and
o (ii) when one stimulus dominates, the bottom–up, driving signal for that stimulus is explained away while, crucially, the bottom–up signal for the suppressed stimulus is not, and remains as an unexplained but explainable prediction error signal.
Introduction
∙ Binocular Rivalry: If one stimulus is shown to one eye and another stimulus to the other, then subjective experience alternates between them. For example, when an image of a house is presented to one eye and an image of a face to the other, then subjective experience alternates between the house and the face.
o Binocular rivalry is a challenge to our understanding of the visual system, and it is of special importance for studies of phenomenal consciousness in humans and monkeys, because the stimulus presented to subjects can be held constant while the phenomenal percept changes
∙ Most approaches to rivalry stress the role of inhibition, adaptation and stochastic noise.
o We take the approach of epistemology—the theory of knowledge—to go behind these approaches and ask the more fundamental theoretical question: ‘‘why should a perceptual system, such as the brain, have and exploit such mechanisms in the first place?”
o The motivation behind this approach is the idea that binocular rivalry is an epistemic response to a seemingly incompatible stimulus condition where two distinct objects occupy the same spatiotemporal location.
o Our intent: to describe a unifying framework for the burgeoning class of data already in hand concerning binocular rivalry
Core properties of predictive coding
∙ A core task for the brain is to represent the environmental causes of its sensory input. This is computationally difficult; it is difficult to compute the causes when only the effects are known:
o As Hume reminded us, causes and effects are distinct existences and, in principle, many different environmental events could be causes of the same sensory effect.
o Conversely, the same environmental causes can occur in different contexts, so the same environmental event can be the cause of many different sensory effects.
∙ Rather than trying to work backwards from sensory effects to environmental causes, neuronal computational systems work with models, or as we shall say hypotheses, that predict what the sensory input should be
∙ The hypothesis that generates the best predictions then determines perceptual content. The hierarchical inversion of the generative models needed to finesse this inverse problem can be reduced to quite simple processes that, in principle, can be implemented by the brain.
∙ Bayesian Perceptual Inference: According to this kind of Bayesian theory, the hypothesis with the highest posterior probability (i.e., most probable given the input) wins and gets to determine the perceptual content of the system. The posterior probability depends on the likelihood (i.e., how well the hypothesis predicts the input) and on the prior probability of the hypothesis (i.e., how probable the hypothesis was before the input). These prior expectations are constructed hierarchically and are context-sensitive.
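∙ In symbols (the standard statement of Bayes’ rule, given here for reference rather than quoted from the paper), the posterior for a hypothesis h given sensory input I is

\[
P(h \mid I) = \frac{P(I \mid h)\, P(h)}{\sum_{h'} P(I \mid h')\, P(h')}
\]

and the hypothesis with the largest posterior P(h | I) determines perceptual content.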
∙ In a hierarchical setting that uses empirical Bayes, priors are not extracted directly from the natural scene statistics, nor are they free parameters. They emerge naturally through interaction with the world as learning suppresses prediction errors at all levels of a hierarchical model. The hierarchical nature of these models is central to empirical Bayes because priors on lower levels are themselves constrained by, and accountable to, higher levels.
∙ The prediction error signal plays a crucial role in inference since it helps to update hypotheses at higher levels, such that better predictions can be issued and the prediction error continually minimized.
∙ A system can minimize free energy by changing its configuration to change the way it samples the environment, or to change its expectations. These changes correspond to action and perception, respectively, and lead to an adaptive exchange with the environment that is characteristic of biological systems. In short, any change to the brain’s state or connection parameters that reduces free energy renders sensory input less surprising.
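∙ A toy scalar sketch (not the paper’s model; numbers and update rules are invented) of the point that prediction error can be reduced either by revising the hypothesis (perception) or by changing how the environment is sampled (action):

```python
# Two routes to shrinking prediction error (toy illustration only).
def perceive(hypothesis, sensory, rate=0.2):
    """Perception: nudge the hypothesis toward what is sensed."""
    return hypothesis + rate * (sensory - hypothesis)

def act(sensory, hypothesis, rate=0.2):
    """Action: change how the world is sampled so input matches the prediction."""
    return sensory + rate * (hypothesis - sensory)

hypothesis, sensory = 0.0, 3.0
for _ in range(30):
    hypothesis = perceive(hypothesis, sensory)   # perceptual inference
    # sensory = act(sensory, hypothesis)         # alternatively: active sampling

print(abs(sensory - hypothesis))  # prediction error has been driven toward zero
```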
∙ What ultimately determines the resulting conscious perception is the best hypothesis: the one that makes the best predictions and that, taking priors into consideration, is consequently assigned the highest posterior probability.
Two problems concerning rivalry: Selection and alternation
∙ In dichoptic viewing conditions, where one stimulus is shown to one eye and another to the other eye, binocular matching fails because two different objects seem to occupy the same spatiotemporal position.
∙ The epistemological task for the system, given this incompatible or ‘‘unecological” condition is then to explain the combined bottom–up signal stemming from the two stimuli: it does this rather elegantly by selecting only one stimulus at a time and then alternating between them. To account for binocular rivalry, two things must then be explained
∙ The selection problem: why is there a perceptual decision to select one stimulus for perception rather than the other, and, further why is one of the two stimuli selected rather than some conjunction or blend of them?
∙ The alternation problem: why does perceptual inference alternate between the two stimuli rather than stick with the selected one?
∙ The core of these various approaches to rivalry is that selection and alternation in rivalry must be explained in terms of two mechanisms: inhibition of the incoming signal from the stimulus which is not dominant, which is meant to explain selection; and adaptation of the inhibitory influence of the relevant neural populations, which is meant to explain alternation.
A Bayesian approach to the selection problem
∙ Assume the stimuli are a house and a face and that the percept currently experienced by the subject is the face. Then the question, from a Bayesian perspective, is why the face hypothesis (F) has the highest probability, given the conjoint evidence (I) of a house and a face. The question splits into two: (i) why is F favored over the hypothesis that it is a house (H)? (ii) Why is F selected over some kind of conjunctive or blended hypothesis that it is a ‘house-face’ (F AND H)?
o If, for some reason, F has a higher prior, then it will be selected for perceptual dominance.
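∙ A toy numerical version of this question (likelihoods and priors invented for the sketch): each hypothesis is scored as likelihood times prior, and the blended ‘house-face’ hypothesis loses despite predicting the combined input well, because two objects in one place has a very low prior.

```python
# Unnormalized posterior = likelihood * prior for each hypothesis (toy numbers).
hypotheses = {
    "face (F)":             (0.5, 0.45),
    "house (H)":            (0.5, 0.45),
    "house-face (F and H)": (0.9, 0.01),  # predicts both signals, but very improbable
}

scores = {h: lik * prior for h, (lik, prior) in hypotheses.items()}
total = sum(scores.values())
for h, s in scores.items():
    print(f"{h}: posterior ~ {s / total:.2f}")
# F and H tie until something (e.g., noise or context) breaks the symmetry;
# the blended hypothesis never wins because of its low prior.
```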
Solving the alternation problem
∙ The key point is that even though F successfully explains the face-signal, there remains a large error signal, stemming from the house-stimulus
∙ Structural instability can be mediated using deterministic mechanisms. Another possible mechanism for perceptual alternations relies on stochastic or random effects. Because the brain is trying to minimize its free energy, it has to explore the free-energy landscape. A generic scheme for this exploration relies on random or stochastic effects (cf. random mutations in evolutionary selection or random noise in simulated annealing).
∙ In multistable dynamical systems, this can be expressed as stochastic resonance.
∙ Put simply, random changes, due to neuronal noise, in the brain’s state can occasionally push it over the free-energy barrier separating the house and face wells. This mechanism does not involve changes in, or adaptation of, the free-energy landscape but rests on dynamical instability introduced by random fluctuations in the brain’s state.
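∙ A hedged simulation of this idea: the brain’s state as a particle in a double-well ‘free-energy’ landscape with wells for the house and face hypotheses, kicked by noise until it crosses the barrier (the landscape, noise level, and time step are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def free_energy_gradient(x):
    # Gradient of the double well F(x) = x**4/4 - x**2/2:
    # wells at x = -1 ("house") and x = +1 ("face"), barrier at x = 0.
    return x**3 - x

x, dt, noise = 1.0, 0.01, 0.35
percepts = []
for _ in range(20000):
    x += -free_energy_gradient(x) * dt + noise * np.sqrt(dt) * rng.normal()
    percepts.append("face" if x > 0 else "house")

switches = sum(a != b for a, b in zip(percepts, percepts[1:]))
print(switches)  # noise alone is enough to produce crossings of the barrier
```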
∙ In short, either structural or dynamic mechanisms of predictive coding, or a combination, can explain perceptual alternation. Alternation ensues in rivalry conditions specifically where there is a large unexplained but explainable error signal. In Bayesian terms, in this situation no one hypothesis has both high likelihood and high prior, and inference becomes unstable.
∙ In this framework, the inhibition is not of the bottom–up, incoming signal per se. Rather it is inhibition of the competing high-level hypotheses that could explain away the sensory signal.
o In other words, inhibition decreases top–down predictions of the suppressed stimulus. The epistemological motivation for this is that the best performing hypothesis dominates perceptual content. The explanation for competition among high level explanations is simple; our experience of the world tells us that only one object can exist in the same place at the same time. This hyperprior is learnt and engrained in our neuronal circuits as an empirical prior.
∙ In sum, the proposal therefore motivates inhibition and adaptation in a more principled way than nonepistemological accounts, and thus explains rivalry as an unavoidable and emergent outcome of representational systems like the brain. It rests on the recurrent dynamics required by hierarchical inference and positions itself in direct opposition to conventional heuristics that frame perception in terms of feedforward dynamics
Integrating psychophysical evidence under the predictive coding framework
∙ Rivalry tends to occur when there is an increasing incompatibility between the stimuli presented to the two eyes. More consistent stimuli will tend to fuse. This fits within the predictive coding framework because it is a case where the conjoint hypothesis does have high prior.
o That is, were the stimuli a mouthless face and a mouth, then the updated, dominant hypothesis F* (‘‘it’s a face with a mouth”) would have a substantial prior. Fusion would then be allowed since the most likely hypothesis will have a high prior and the system will settle in a deep third well.
∙ Often, there is no clearcut shift between percepts in binocular rivalry. Dominance breaks through in small patches of the visual field and gradually spreads before completely or partially suppressing the competing image
∙ So there are periods where the subject experiences some of the face and some of the house. This is explained by the attempts to update the currently dominating hypothesis by exploring the free-energy landscape in response to the prediction error signal. The system does not stabilize with these patches because much prediction error is still unaccounted for and because the conjoint hypothesis has a very low prior.
∙ Interocular Grouping: Subjects may also experience rivalry where they perform visual grouping of items presented to both eyes. For example, if there is an image of half a face and half a house presented to one eye, and an image of the other halves of the face and of the house presented to the other eye, then there may be perceptual rivalry between a house and a face; the two halves of the two images have been grouped together, and it is the regrouped percepts that are rivalling, not the original segmented images
∙ In a different type of paradigm, Blake and colleagues allowed one stimulus to achieve dominance before they gradually decreased the intensity of the stimuli and swapped them. When the suppressed stimulus is swapped to the eye of the dominant stimulus, it becomes dominant, suggesting a role for eye-dominance rather than pattern competition in rivalry. This is also consistent with predictive coding. With respect to processing for the dominant eye stimulus, it is a situation where there is successful prediction of (gradual or non-rapid) changes in the world.
∙ On the assumption that there is less adaptation effect for such changing stimuli the system should remain relatively stable, and one should expect the dominant eye to continue its domination.
∙ Rivalry can also occur for a single stimulus presented to one or both eyes. The experience of monocular rivalry is less stable than in binocular rivalry and seems to occur mostly for fairly rudimentary stimuli such as a mesh of blurred green and red gratings.
o We think this reflects dynamically stable priors for the hypothesis that the environment has line segments of different distinct orientations. In other words, this is something the visual system is always expecting
∙ Our account of Levelt’s Second Proposition is that, when the fixed eye stimulus is dominant, changes in the unexplained prediction error from the suppressed stimulus in the variable eye induces changes in the overall energy landscape, such that the perceptual decision for the fixed stimulus is brought away from or towards transitions over the free energy barrier.
∙ (i) When one stimulus is viewed in a congruent context and the other in a noncongruent context, the dominance duration of the former tends to increase. Introducing a congruent context does not increase bottom–up error signal strength when suppressed, so the predictive coding framework can explain why context modulation does not give shorter dominance periods for the noncongruent stimulus.
o On the other hand, context increases the prior for a congruent stimulus relative to a noncongruent stimulus, so it would take longer for the posterior for the dominant, updated hypothesis to be destabilized. This would explain the increased dominance periods.
∙ (ii) With practice, voluntary (endogenous) attention can prolong dominance periods for the attended stimulus without, however, being able to extinguish rivalry (on the other hand, endogenous attention to properties of the suppressed stimulus will not bring that stimulus out of suppression).
o This is an example of a top–down process modulating dominance. In the predictive coding framework we can view endogenous selective attention as increasing or enforcing priors for a certain hypothesis.
Accounting for conflicting neurophysiological and imaging evidence
∙ It is important to remember that neuronal implementations of predictive coding require both the representation of the prediction and the prediction error in hierarchically ordered pairs of levels in the brain. It is the hierarchical deployment of reciprocal changes among these that will offer an explanation for diverse empirical findings.
∙ Single-unit studies in monkeys yield the following consistent picture. Starting with the LGN, there seems to be no evidence of rivalry-related changes in the geniculostriate system
∙ Successive stages of the visual cortex show increasing levels of activity in phase with the animal’s reports of dominance: at low levels (V1) only a few units selective for a given stimulus will fire in phase with dominance and suppression. At middle levels (V4, MT) more will, but the picture is somewhat mixed, with some cells more active than others, almost no cells completely suppressed, and some cells even active when their preferred stimulus was suppressed.
o This suggests that single unit recording can selectively sample either the predicting neurons or the prediction error neurons.
∙ In general, fMRI studies in humans furnish a different picture. These studies have found that activity during rivalry corresponds to activity during physical alternations of stimuli over a large posterior portion of the brain ranging from temporal (fusiform and parahippocampal) areas, over V1, including monocular areas such as the blind spot representation and extending all the way to the lateral geniculate nucleus.
o Thus, in these areas of the brain, fMRI activity during dominance is comparable to activity during monocular viewing, and activity during suppression is comparable to when the stimulus is not presented to the subject.
∙ Perceptual rivalry presents a particular challenge to interpreting fMRI results in terms of predictive coding. This is because low-level areas that represent the elemental features of both stimuli will always express prediction error, because only one set of sensory signals can be explained away at any time.
o This means that there may be no difference in fMRI signals between the two perceptual states in these regions.
∙ In short, fMRI correlates of rivalry may be driven by top–down predictions, whereas electrophysiological responses may reflect predictions or prediction error, depending on which population or unit is recorded.
∙ Irrespective of these considerations, the highest prediction error (free energy and BOLD signal) would be anticipated during perceptual transitions, when neither stimulus is explained away.
o In summary, generative models and predictive coding therefore provide a framework that is capable of unifying the apparently conflicting findings on binocular rivalry.
Discussion
∙ Under the account described here, an empirical Bayes framework with generative models and implemented with predictive coding or free-energy minimization explains many aspects of binocular rivalry, because dichoptic viewing of mutually inconsistent stimuli creates a situation where no hypothesis about the environmental causes of the incoming sensory signal has both a high prior and high likelihood.
Conclusions
∙ Core properties of a theoretical framework for perceptual inference in the brain based on generative models and predictive coding can be described in fairly basic probabilistic terms. The framework can explain and unify many aspects of binocular rivalry, in particular why one stimulus is selected for perception and why there is alternation between stimuli.
∙ The framework also accommodates many of the major psychophysical findings on rivalry and provides a unified interpretation of the apparently conflicting single-unit and fMRI studies of rivalry.
What is it like to be a Bat?
∙ Consciousness is what makes the mind-body problem really intractable
∙ It occurs at many levels of animal life, though we cannot be sure of its presence in the simpler organisms, and it is very difficult to say in general what provides evidence of it
∙ The fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism.
∙ Fundamentally, an organism has conscious mental states if and only if there is something that it is like to be that organism – something it is like for the organism.
∙ We may call this the subjective character of experience.
∙ It is impossible to exclude the phenomenological features of experience from a reduction in the same way that one excludes the phenomenal features of an ordinary substance from a physical or chemical reduction of it – namely, by explaining them as effects on the minds of human observers
∙ If physicalism is to be defended, the phenomenological features must themselves be given a physical account. But when we examine their subjective character it seems that such a result is impossible. The reason is that every subjective phenomenon is essentially connected with a single point of view, and it seems inevitable that an objective, physical theory will abandon that point of view
∙ Bats used as an example:
o The essence of the belief that bats have experience is that there is something that it is like to be a bat. Most bats perceive the external world primarily by sonar, or echolocation, detecting the reflections, from objects within range, of their own rapid, subtly modulated, highfrequency shrieks.
o Their brains are designed to correlate the outgoing impulses with the subsequent echoes, and the information thus acquired enables bats to make precise discriminations of distance, size, shape, motion, and texture comparable to those we make by vision.
o But bat sonar, though clearly a form of perception, is not similar in its operation to any sense that we possess, and there is no reason to suppose that it is subjectively like anything we can experience or imagine.
∙ Our own experience provides the basic material for our imagination, whose range is therefore limited
∙ I want to know what it is like for a bat to be a bat. Yet if I try to imagine this, I am restricted to the resources of my own mind, and those resources are inadequate to the task.
∙ So if extrapolation from our own case is involved in the idea of what it is like to be a bat, the extrapolation must be incompletable. We cannot form more than a schematic conception of what it is like
∙ This brings us to the edge of a broader topic: namely, the relation between facts on the one hand and conceptual schemes or systems of representation on the other.
∙ My realism about the subjective domain in all its forms implies a belief in the existence of facts beyond the reach of human concepts
o One might believe that there are facts which could not ever be represented or comprehended by human beings, even if the species lasted forever – simply because our structure does not permit us to operate with concepts of the requisite type.
∙ Reflection on what it is like to be a bat seems to lead us, therefore, to the conclusion that there are facts that do not consist in the truth of propositions expressible in a human language.
o We can be compelled to recognize the existence of such facts without being able to state or comprehend them.
∙ The point of view in question is a type
o It is often possible to take up a point of view other than one's own, so the comprehension of such facts is not limited to one's own case.
There is a sense in which phenomenological facts are perfectly objective: one person can know or say of another what the quality of the other's experience is.
They are subjective, however, in the sense that even this objective ascription of experience is possible only for someone sufficiently similar to the object of ascription to be able to adopt his point of view – to understand the ascription in the first person as well as in the third, so to speak
∙ In our own case we occupy the relevant point of view, but we will have as much difficulty understanding our own experience properly if we approach it from another point of view as we would if we tried to understand the experience of another species without taking up its point of view.
∙ This bears directly on the mind-body problem
o For if the facts of experience – facts about what it is like for the experiencing organism – are accessible only from one point of view, then it is a mystery how the true character of experiences could be revealed in the physical operation of that organism.
o The latter is a domain of objective facts par excellence – the kind that can be observed and understood from many points of view and by individuals with differing perceptual systems
∙ A Martian scientist with no understanding of visual perception could understand the rainbow, or lightning, or clouds as physical phenomena, though he would never be able to understand the human concepts of rainbow, lightning, or cloud, or the place these things occupy in our phenomenal world.
∙ The objective nature of the things picked out by these concepts could be apprehended by him because, although the concepts themselves are connected with a particular point of view and a particular visual phenomenology, the things apprehended from that point of view are not: they are observable from the point of view but external to it; hence they can be comprehended from other points of view also, either by the same organisms or by others
∙ We appear to be faced with a general difficulty about psychophysical reduction. In other areas the process of reduction is a move in the direction of greater objectivity, toward a more accurate view of the real nature of things.
o This is accomplished by reducing our dependence on individual or species specific points of view toward the object of investigation
∙ Experience itself, however, does not seem to fit the pattern.
o If the subjective character of experience is fully comprehensible only from one point of view, then any shift to greater objectivity – that is, less attachment to a specific viewpoint – does not take us nearer to the real nature of the phenomenon: it takes us farther away from it.
∙ The reduction can succeed only if the speciesspecific viewpoint is omitted from what is to be reduced.
∙ It would be truer to say that physicalism is a position we cannot understand because we do not at present have any conception of how it might be true
∙ Donald Davidson has argued that if mental events have physical causes and effects, they must have physical descriptions. He holds that we have reason to believe this even though we do not – and in fact could not – have a general psychophysical theory
o Davidson's position is that certain physical events have irreducibly mental properties, and perhaps some view describable in this way is correct
∙ We cannot genuinely understand the hypothesis that their nature is captured in a physical description unless we understand the more fundamental idea that they have an objective nature (or that objective processes can have a subjective nature)
∙ At present we are completely unequipped to think about the subjective character of experience without relying on the imagination – without taking up the point of view of the experiential subject.
o This should be regarded as a challenge to form new concepts and devise a new method – an objective phenomenology not dependent on empathy or the imagination. Though presumably it would not capture everything, its goal would be to describe, at least in part, the subjective character of experiences in a form comprehensible to beings incapable of having those experiences.
Brain Mechanisms of Vision
∙ In man, the cerebral cortex almost completely envelops the rest of the brain
∙ The degree to which an animal depends on an organ is an index of the organ’s importance that is even more convincing than size, and dependence on the cortex has increased rapidly as mammals have evolved
o i.e. a man without a cortex is almost a vegetable: speechless, sightless, senseless
∙ The cerebral cortex is complex in its structure as well as its functions
∙ Goal: To sketch out the present state of knowledge of one subdivision of the cortex: the primary visual cortex, the most elementary of the cortical regions concerned with vision
o This will bear on the biological purpose of visual perception
∙ The cerebral cortex: a highly folded plate of neural tissue about 2 mm thick
∙ In the late 19th century, it was noted that a brain injury, depending on its location, could cause paralysis or blindness or numbness or speech loss
∙ Systematic mapping of the cortex led to a fundamental realization: most of the sensory and motor areas contained systematic two-dimensional maps of the world they represented
∙ An important feature of cortical maps is their distortion
o The scale of the map varies as it does in a Mercator projection, the rule for the cortex being that the regions for highest discrimination or delicacy of function occupy relatively more cortical area
∙ Issue that was left neglected by the advances in mapping cortical projections: how the brain analyzes information
∙ Goal 2: Show that, for vision at least, the world is represented in a far more distorted way
∙ The 1st major insight into cortical organization was the recognition of this subdivision into areas having widely different functions, with a tendency to ordered mapping
o Important basic notion: Information on any given modality such as sight or sound is transported first to a primary cortical area and from there, either directly or via the thalamus, to successions of higher areas
∙ The 2nd major insight: the realization that the operations that the cortex performs on the information it receives are local
o Sets of fibers bring information to the cortex; by the time several synapses have been traversed, the influence of the input has spread vertically to all cell layers; finally several other sets of fibers carry modified messages out of the area
o What is common to all regions is the local nature of the wiring
Whatever any given region of the cortex does, it does locally. At stages where there is any kind of detailed, systematic topographical mapping the analysis must be piecemeal
One can only assume that as visual, tactile, or auditory information is relayed from one cortical area to the next, the map becomes progressively more blurred and the information carried more abstract
o The 1st understanding of the local analysis that the cortex must perform came in the primary visual area, which is now the best understood of any cortical region and is still the only one where the analysis and consequent transformations of information are known in any detail
o Main point: Primary cortex is in no sense the end of the visual path
o The retinal ganglion cells and the cells of the lateral geniculate are primarily concerned with making a comparison between the light level in one small area of the visual scene and the average illumination of the immediate surround
∙ The 1st of the two major transformations accomplished by the visual cortex is the rearrangement of incoming information so that most of its cells respond to specifically oriented line segments
o A typical cell responds only when light falls in a particular part of the visual world, but illuminating that area diffusely has little effect or none, and small spots of light are not much better
o The simplest cells behave as though they received their input directly from several cells with center-surround, circularly symmetrical fields – the type found in layer IV (a toy sketch of this idea appears a few bullets below)
∙ The 2nd major group of orientation-specific neurons are the far more numerous complex cells
o Their main feature is that they are less particular about the exact position of a line
∙ The 2nd major function of the monkey visual cortex is to combine the inputs from the 2 eyes
∙ Cells of like complexity tend to be grouped in layers, with the circularly symmetrical cells lower in layer IV, simple cells just above them, and complex cells in layers II, III, V, and VI.
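∙ A toy numerical sketch of the Hubel and Wiesel idea that a simple cell can be built by summing center-surround fields along a line (the field sizes, spacing, and weights here are arbitrary, chosen only for illustration):

```python
import numpy as np

def center_surround(y, x, cy, cx, sc=1.0, ss=2.0):
    """Difference-of-Gaussians: a circularly symmetric center-surround field
    (surround weight chosen so the center dominates; illustrative only)."""
    d2 = (y - cy) ** 2 + (x - cx) ** 2
    return np.exp(-d2 / (2 * sc**2)) - 0.25 * np.exp(-d2 / (2 * ss**2))

size = 21
ys, xs = np.mgrid[0:size, 0:size]

# "Simple cell" receptive field: center-surround fields stacked along a vertical line.
simple_rf = sum(center_surround(ys, xs, cy, size // 2) for cy in range(4, 17, 3))

vertical_bar = (xs == size // 2).astype(float)     # bar at the preferred orientation
horizontal_bar = (ys == size // 2).astype(float)   # bar at the orthogonal orientation

print((simple_rf * vertical_bar).sum())    # strong response
print((simple_rf * horizontal_bar).sum())  # much weaker response
```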
∙ The relation between layer and projection site probably deserves to be ranked as a 3rd major insight into cortical organization
∙ The next stimulus variable to be considered is the position of the receptive field in the visual field
∙ Eccentricity – the distance of a cell’s receptive field from the center of gaze
∙ Aggregate field – the pile of superimposed fields that are mapped in a penetration beginning at any point on the cortex
o The size is a function of eccentricity
∙ Regularity: moving the electrode about 1 or 2 mm always produces a displacement in the visual field that’s roughly enough to take one into an entirely new region
o This observation suggests how the visual cortex solves a basic problem: how to analyze the visual scene in detail in the central part and much more crudely in the periphery
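∙ A toy walk across the cortex illustrating that point (every number below is invented; the only assumptions are that the aggregate field grows roughly linearly with eccentricity and that each ~2 mm step displaces it by about its own size):

```python
# Hypothetical scaling of aggregate receptive-field size with eccentricity.
def aggregate_field_size(eccentricity_deg):
    return 0.1 + 0.25 * eccentricity_deg

eccentricity = 0.1  # degrees from the center of gaze
for step_mm in range(0, 21, 2):
    print(f"{step_mm:2d} mm from foveal cortex -> ~{eccentricity:4.1f} deg eccentric")
    eccentricity += aggregate_field_size(eccentricity)
# Equal cortical steps cover tiny regions centrally and huge regions peripherally,
# i.e., fine analysis at the center of gaze and coarse analysis in the periphery.
```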
∙ 2-deoxyglucose technique for assessing brain activity: capitalizes on the fact that brain cells depend mainly on glucose as a source of metabolic energy and that the closely similar compound 2-deoxyglucose can to some extent masquerade as glucose (Sokoloff)
o The Sokoloff procedure is to inject an animal with deoxyglucose that’s been labeled with the radioactive isotope carbon-14, stimulate the animal in a way calculated to activate certain neurons, and then immediately examine the brain for radioactivity, which reveals active areas where cells will have taken up more deoxyglucose than those in quiescent areas.
∙ For most physiologically defined systems mentioned, there are no anatomical correlates so far
o On the other hand, in the past few years several anatomists have shown that the connections from one region of the cortex to another terminate in patches that have a regular periodicity of about a millimeter.
∙ Fine periodic subdivisions are a general feature of the cerebral cortex
∙ For the visual cortex: particular stimuli turn neurons on or off; groups of neurons do indeed perform particular transformations
Decomposing and Localizing Vision: An Exemplar for Cognitive Neuroscience
∙ Exemplar: an example of successful research which provides a model to be emulated
∙ We aren’t aware of the intermediate operations that the brain is performing
Getting Started: Identifying the Locus of Control
∙ More popular locus: occipital lobe
∙ Ferrier’s later claim (1881): both the angular gyrus and the occipital lobe figured in vision, and only lesions to both could produce complete and enduring blindness, but he continued to emphasize the angular gyrus
o In retrospect, Ferrier’s lesions produced deficits because his incisions cut deeply and severed the nerve pathways from the thalamus to the occipital cortex
o Moreover, his failure to eliminate vision with occipital lobe lesions was due to incomplete removal of the visual processing areas in the occipital lobe
∙ By the beginning of the 20th century, the striate cortex had become generally accepted as the locus of visual processing
From Simple Localization to Mechanistic Explanation
∙ Henschen (1893)
o Showed that damage in different parts of the occipital lobe produced blindness in different parts of the visual field, and proposed that the occipital lobe must be topographically organized so that different parts of the retina projected onto different areas of the visual cortex (leading him to refer to it as the cortical retina).
o His occipital lobe map was the reverse of the accepted map
∙ Microlesion studies could reveal topographical organization, but not the actual function performed by cells in the striate cortex, since the result of lesions was complete blindness
∙ Hubel and Wiesel made the discovery that cells in the striate cortex responded most vigorously not to spots of light but to oriented lines or bars
o What they termed simple cells had receptive fields with spatially distinct on and off areas along a line at a particular orientation
o What they termed complex cells were responsive to bars of light at a particular orientation anywhere within the receptive field
o Hypercomplex cells: responded maximally only to bars extending just the width of their receptive field
o Hubel and Wiesel proposed a decomposition of processing within striate cortex, with one type of cell supplying information to other cells and each carrying out its own information processing
o Also proposed the discovery of the primary function of striate cortex, but with a prophetic caveat: “The elaboration of simple cortical fields from geniculate concentric fields, complex from simple, and hypercomplex from complex is probably the prime function of the striate cortex – unless there are still other as yet unidentified cells there”
o Proposed that in one direction successive columns were dominated by alternative eyes while in the other direction successive columns were responsive to different orientations of the stimulus
o Consequence(s) of research:
to reveal complexity of striate cortex
demonstrate that the striate cortex is not the sole locus of visual processing, since detecting oriented bars of light is not yet perception
o Question for further research: Where else is visual information processed, and what does each of these areas contribute?
Beyond Direct Localization: Identifying Prestriate Visual Areas
∙ The second means of moving beyond direct localization is to discover additional components that contribute to the function
∙ In the 1st half of the 20th century, brain research was dominated by an anti-localizationist sentiment that construed most of the cortex as jointly subserving cognitive capacities, without any particular part playing a specialized role
∙ The anti-localizationists (who derided localizationist proposals as neo-phrenological) suggested that individual parts of the cortex could be removed without the loss of any particular cognitive ability
∙ Lashley insisted “visual habits were dependent upon striate cortex and upon no other part of the cerebral cortex”
∙ Achromatopsia – the inability to see objects as colored (patients presumably suffered lesions in V4)
Beyond Direct Localization: Expanding Visual Analysis into Temporal and Parietal Cortices
∙ Parietal cells also linked to arm and hand manipulation
∙ Andersen and colleagues demonstrated that cells in the posterior parietal cortex mapped stimuli in terms of spatial location, a feature to which temporal lobe cells are relatively unresponsive
Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science
Abstract
∙ Brains – essentially prediction machines; bundles of cells that support perception and action by constantly attempting to match incoming sensory inputs with top-down expectations or predictions
o Achieved using a hierarchical generative model
Introduction: Prediction Machines
∙ “The whole function of the brain is summed up in: error correction” – W. Ross Ashby
o One of the brain’s key tricks is to implement dumb processes that correct a certain kind of error: error in the multilayered prediction of input
∙ Helmholtz model of perception: a process of probabilistic, knowledge-driven inference
o Sensory systems must infer the worldly causes of sensory input from their bodily effects
This in turn involves computing multiple probability distributions, since a single such effect will be consistent with many different sets of causes distinguished only by their relative (and context-dependent) probability of occurrence.
∙ “Analysis-by-synthesis”: the brain tries to predict the current suite of cues from its best models of the possible causes.
o In this way, the mapping from low- to high-level representation (e.g. from acoustic to word-level) is computed using the reverse mapping, from high- to low-level representation.
∙ The Helmholtz Machine sought to learn new representations in a multilevel system (thus capturing increasingly deep regularities within a domain) without requiring the provision of copious preclassified samples of the desired inputoutput mapping.
o In this respect, it aimed to improve upon standard backpropagation driven learning. It did this by using its own topdown connections to provide the desired states for the hidden units, thus (in effect) selfsupervising the development of its perceptual “recognition model” using a generative model that tried to create the sensory patterns for itself
∙ A generative model aims to capture the statistical structure of some set of observed inputs by tracking the causal matrix responsible for that very structure.
∙ The strategy of using topdown connections to try to generate, using highlevel knowledge, a kind of “virtual version” of the sensory data via a deep multilevel cascade lies at the heart of “hierarchical predictive coding” approaches to perception
∙ Such approaches form the main focus of the present treatment. These approaches combine the use of topdown probabilistic generative models with a specific vision of one way such downward influence might operate. That way (borrowing from work in linear predictive coding – see below) depicts the topdown flow as attempting to predict and fully “explain away” the driving sensory signal, leaving only any residual “prediction errors” to propagate information forward within the system
∙ Predictive coding itself was first developed as a data compression strategy in signal processing
∙ The code for a rich image can be compressed by encoding only the “unexpected” variation: the cases where the actual value departs from the predicted one. What needs to be transmitted is therefore just the difference (a.k.a. the “prediction error”) between the actual current signal and the predicted one.
∙ The information that needs to be communicated “upward” under all these regimes is just the prediction error: the divergence from the expected signal.
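∙ A minimal sketch of prediction-error (difference) encoding as a compression strategy, using a deliberately dumb predictor ("the next sample equals the previous one") and made-up numbers:

```python
# Only the divergence from the predicted signal needs to be transmitted;
# wherever the prediction is right, the transmitted value is zero.

signal = [10, 10, 10, 11, 12, 12, 12, 12, 30, 30]

def encode(xs):
    prev, errors = 0, []
    for x in xs:
        errors.append(x - prev)   # transmit only the "unexpected" variation
        prev = x
    return errors

def decode(errors):
    prev, out = 0, []
    for e in errors:
        prev += e                 # prediction + error reconstructs the signal
        out.append(prev)
    return out

errs = encode(signal)             # mostly zeros where the prediction succeeds
assert decode(errs) == signal
```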
∙ Later, when we consider predictive processing in the larger setting of information theory and entropy, we will see that prediction error reports the “surprise” induced by a mismatch between the sensory signals encountered and those predicted. This is known as surprisal.
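∙ For concreteness, the standard information-theoretic definition of surprisal (standard usage, not quoted from the text): the less probable, i.e. the more poorly predicted, a signal is, the more surprising it is:

```python
import math

def surprisal(p, base=2):
    # Surprisal of an event with probability p; entropy is its long-term average.
    return -math.log(p, base)

print(surprisal(0.5))    # 1 bit   -- a well-predicted signal
print(surprisal(0.01))   # ~6.6 bits -- a large mismatch with expectations
```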
∙ Perception and action are intimately related and work together to reduce prediction error by sculpting and selecting sensory inputs.
∙ Black Box: All that the brain “knows”, in any direct sense, are the ways its own states (e.g., spike trains) flow and alter. In that (restricted) sense, all the system has direct access to is its own states. The world itself is thus offlimits (though the box can, importantly, issue motor commands and await developments).
o The task is to infer the nature of the signal source (the world) from just the varying input signal itself.
∙ The beauty of the bidirectional hierarchical structure is that it allows the system to infer its own priors (the prior beliefs essential to the guessing routines) as it goes along. It does this by using its best current model – at one level – as the source of the priors for the level below, engaging in a process of “iterative estimation” that allows priors and models to coevolve across multiple linked layers of processing so as to account for the sensory data.
∙ Rao and Ballard’s (1999) model of predictive coding in the visual cortex
o At the lowest level, there is some pattern of energetic stimulation, transduced by sensory receptors from ambient light patterns produced by the current visual scene. These signals are then processed via a multilevel cascade in which each level attempts to predict the activity at the level below it via backward connections. The backward connections allow the activity at one stage of the processing to return as another input at the previous stage
o Where there is a mismatch, “prediction error” occurs and the ensuing (error indicating) activity is propagated to the higher level. This automatically adjusts probabilistic representations at the higher level so that topdown predictions cancel prediction errors at the lower level (yielding rapid perceptual inference).
In the visual cortex, such a scheme suggests that backward connections from V2 to V1 would carry a prediction of expected activity in V1, while forward connections from V1 to V2 would carry forward the error signal indicating residual (unpredicted) activity
∙ For immediate purposes, what matters is that the predictive coding approach, given only the statistical properties of the signals derived from the natural images, was able to induce a kind of generative model of the structure of the input data: It learned about the presence and importance of features such as lines, edges, and bars, and about combinations of such features, in ways that enable better predictions concerning what to expect next, in space or in time.
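∙ A schematic illustration (not Rao and Ballard's actual model; a single linear level with random weights, purely for intuition) of how a higher-level representation is iteratively adjusted so that its top-down prediction cancels the prediction error at the level below:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(16, 4))                  # top-down (backward) generative weights
r_true = np.array([1.0, -0.5, 0.3, 0.8])      # hidden "cause" of the input (made up)
sensory = G @ r_true + 0.01 * rng.normal(size=16)   # noisy driving signal

r = np.zeros(4)                               # higher-level estimate, starts empty
lr = 0.02
for _ in range(500):
    prediction = G @ r                        # backward connections predict the input
    error = sensory - prediction              # only the residual error flows forward
    r += lr * G.T @ error                     # higher level adjusts to explain it away

print(np.round(r, 2))                         # close to r_true; the error is "cancelled"
```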
∙ Hosoya et al.’s (2005) account of dynamic predictive coding by the retina:
o What this means, in each case, is that neural circuits predict, on the basis of local image characteristics, the likely image characteristics of nearby spots in space and time (basically, assuming that nearby spots will display similar image intensities) and subtract this predicted value from the actual value. What gets encoded is thus not the raw value but the differences between raw values and predicted values.
o In this way, “Ganglion cells signal not the raw visual image but the departures from the predictable structure, under the assumption of spatial and temporal uniformity.” This saves on bandwidth, and also flags what is (to use Hosoya et al.’s own phrase) most “newsworthy” in the incoming signal.
∙ Hosoya et al. predicted that, in the interests of efficient, adaptively potent, encoding, the behavior of the retinal ganglion cells (specifically, their receptive field properties) should vary as a result of adaptation to the current scene or context, exhibiting what they term “dynamic predictive coding.”
∙ Putting salamanders and rabbits into varying environments, and recording from their retinal ganglion cells, Hosoya et al. confirmed their hypothesis: Within a space of several seconds, about 50% of the ganglion cells altered their behaviors to keep step with the changing image statistics of the varying environments.
o In sum, retinal ganglion cells seem to be engaging in a computationally and neurobiologically explicable process of dynamic predictive recoding of raw image inputs, whose effect is to “strip from the visual stream predictable and therefore less newsworthy signals.”
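∙ A toy sketch of the general idea (not Hosoya et al.'s circuit; a made-up 3×4 "image" and a crude nearest-neighbour predictor): predict each spot from its neighbours and pass on only the departure from that prediction:

```python
import numpy as np

# Uniform regions are predictable and yield near-zero output; only the
# "newsworthy" departure from spatial uniformity is signalled.
image = np.array([[5., 5., 5., 5.],
                  [5., 5., 9., 5.],
                  [5., 5., 5., 5.]])

padded = np.pad(image, 1, mode="edge")
neighbour_mean = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0

prediction_error = image - neighbour_mean   # what gets encoded
print(np.round(prediction_error, 2))        # ~0 everywhere except around the odd pixel
```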
∙ Hohwy et al.’s (2008) hierarchical predictive coding model of binocular rivalry:
o Binocular rivalry is a striking form of visual experience that occurs when, using a special experimental setup, each eye is presented (simultaneously) with a different visual stimulus.
o Pursues an “epistemological” approach: one whose goal is to reveal binocular rivalry as a reasonable (knowledgeoriented) response to an ecologically unusual stimulus condition.
o In the binocular rivalry case, however, the driving (bottomup) signals contain information that suggests two distinct, and incompatible, states of the visually presented world
∙ Actionoriented predictive processing
o perception and action both follow the same deep “logic” and are even implemented using the same computational strategies. A fundamental attraction of these accounts thus lies in their ability to offer a deeply unified account of perception, cognition, and action.
o Perception is here depicted as a process that attempts to match incoming “driving” signals with a cascade of topdown predictions (spanning multiple spatial and temporal scales) that aim to cancel them out.
o This unifying perspective on perception and action suggests that action is both perceived and caused by its perception.
o Very roughly – see Todorov (2009) for a detailed account – you treat the desired (goal) state as observed and perform Bayesian inference to find the actions that get you there.
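∙ A toy illustration of this "goal state as observed" move (hypothetical numbers; a single-step choice rather than a full control problem): the action selected is the one that best "explains" the goal under Bayes' rule:

```python
# Planning as inference, schematically: P(action | goal) ∝ P(goal | action) * P(action)

actions = ["reach-left", "reach-right", "stay"]
prior = {"reach-left": 1 / 3, "reach-right": 1 / 3, "stay": 1 / 3}
p_goal_given_action = {"reach-left": 0.1, "reach-right": 0.8, "stay": 0.05}

unnorm = {a: p_goal_given_action[a] * prior[a] for a in actions}
z = sum(unnorm.values())
posterior = {a: v / z for a, v in unnorm.items()}

best = max(posterior, key=posterior.get)
print(posterior, best)   # "reach-right" is inferred because it makes the goal most probable
```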
∙ Freeenergy minimization framework:
o Thermodynamic free energy is a measure of the energy available to do useful work
Transposed to the cognitive/informational domain, it emerges as the difference between the way the world is represented as being, and the way it actually is
o Entropy, in this informationtheoretic rendition, is the longterm average of surprisal, and reducing informationtheoretic free energy amounts to improving the world model so as to reduce prediction errors, hence reducing surprisal (since better models make better predictions).
o The overarching rationale:
good models help us to maintain our structure and organization, hence (over extended but finite timescales) to appear to resist increases in entropy and the second law of thermodynamics. They do so by rendering us good predictors of sensory unfoldings, hence better poised to avoid damaging exchanges with the environment.
o The “freeenergy principle” itself then states that “all the quantities that can change; i.e. that are part of the system, will change to minimize freeenergy”
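∙ For reference, the standard variational identity behind this claim (a textbook decomposition, not spelled out in these notes; here q(c) is the system's current model over hidden causes c and s is the sensory input) shows why minimizing free energy F, which the system can evaluate, implicitly reduces surprisal, which it cannot access directly:

```latex
F \;=\; \underbrace{-\log p(s)}_{\text{surprisal of sensory input } s}
   \;+\; \underbrace{D_{\mathrm{KL}}\!\big(q(c)\,\big\|\,p(c \mid s)\big)}_{\ge\, 0,\ \text{divergence between the model's guess and the true posterior}}
   \;\;\ge\;\; -\log p(s).
```

Because the KL term is non-negative, F is an upper bound on surprisal, so driving F down (better predictions, smaller prediction errors) keeps surprisal low, in line with the rationale above.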
Representation, inference, and the continuity of perception, cognition, and action
∙ Depicted as a kind of duplex architecture: one that at each level combines quite traditional representations of inputs with representations of error.
∙ What is most distinctive about this duplex architectural proposal (and where much of the break from tradition really occurs) is that it depicts the forward flow of information as solely conveying error, and the backward flow as solely conveying predictions.
From actionoriented predictive processing to an architecture of mind
∙ Despite that truly impressive list of virtues, both the hierarchical predictive processing family of models and their recent generalizations to action face a number of important challenges, ranging from the evidential to the conceptual to the more methodological.
∙ The best current evidence tends to be indirect, and it comes in two main forms. The first (which is highly indirect) consists in demonstrations of precisely the kinds of optimal sensing and motor control that the “Bayesian brain hypothesis” suggests.
∙ Another example is the Bayesian treatment of color perception, which again accounts for various known effects (here, color constancies and some color illusions) in terms of optimal cue combination.
∙ A general worry, connected to Mumford’s notion of the “ultimate stable state”:
o The worry is sometimes raised in connection with the largescale claim that cortical processing fundamentally aims to minimize prediction error, thus quashing the forward flow of information and achieving what Mumford evocatively describes as the “ultimate stable state.”
It can be put like this: How can a neural imperative to minimize prediction error by enslaving perception, action, and attention accommodate the obvious fact that animals don’t simply seek a nice dark room and stay in it? Surely staying still inside a darkened room would afford easy and nigh perfect prediction of our own unfolding neural states? Doesn’t the story thus leave out much that really matters for adaptive success: things like boredom, curiosity, play, exploration, foraging, and the thrill of the hunt?
Simple Response: animals like us live and forage in a changing and challenging world, and hence “expect” to deploy quite complex “itinerant” strategies to stay within our speciesspecific window of viability.
∙ If what we want to understand is the specific functional architecture of the human mind, the distance between these very general principles of predictionerror minimization and the specific solutions to adaptive needs that we humans have embraced remains daunting.
Content and consciousness
∙ It might be suggested that merely accommodating the range of human personallevel experiences is one thing, while truly illuminating them is another.
∙ It seems correct (see, e.g., Coltheart 2007) to stress that perceptual anomalies alone will not typically lead to the strange and exotic belief complexes found in delusional subjects. But must we therefore think of the perceptual and doxastic components as effectively independent?
o A possible link emerges if perception and beliefformation, as the present story suggests, both involve the attempt to match unfolding sensory signals with top down predictions.
o Importantly, the impact of such attempted matching is precisionmediated in that the systemic effects of residual prediction error vary according to the brain’s confidence in the signal
o With this in mind, Fletcher and Frith (2009) canvass the possible consequences of disturbances to a hierarchical Bayesian system such that prediction error signals are falsely generated and – more important – highly weighted (hence accorded undue salience for driving learning).
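∙ A minimal sketch of precision-weighting (a simplified, Kalman-style gain with made-up precisions, used only to illustrate the point): the same prediction error has very different systemic impact depending on how much confidence is assigned to the incoming signal:

```python
def weighted_update(belief, observation, precision_obs, precision_belief):
    # Precision = inverse variance = confidence. The error (observation - belief)
    # drives belief revision in proportion to how much the signal is trusted.
    error = observation - belief
    gain = precision_obs / (precision_obs + precision_belief)
    return belief + gain * error

belief = 0.0
print(weighted_update(belief, 1.0, precision_obs=0.1, precision_belief=1.0))   # ~0.09: error mostly discounted
print(weighted_update(belief, 1.0, precision_obs=10.0, precision_belief=1.0))  # ~0.91: error dominates
```

On the Fletcher and Frith proposal, falsely generated and overly highly weighted prediction errors would behave like the second call: spurious mismatches are given undue salience and so drive learning.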
∙ Another area in which these models are suggestive of deep facts about the nature and construction of human experience concerns the character of perception and the relations between perception and imagery/visual imagination.
∙ Predictiondriven processing schemes, operating within hierarchical regimes of the kind described above, learn probabilistic generative models in which each neural population targets the activity patterns displayed by the neural population below. What is crucial here – what makes such models generative as we saw in section 1.1 – is that they can be used “topdown” to predict activation patterns in the level below.
∙ The practical upshot is that such systems, simply as part and parcel of learning to perceive, develop the ability to selfgenerate perceptionlike states from the top down, by driving the lower populations into the predicted patterns.
Taking stock
∙ Concerning representation, the stories on offer are potentially radical in at least two respects.
o First, they suggest that probabilistic generative models underlie both sensory classification and motor response.
o And second, they suggest that the forward flow of sensory data is replaced by the forward flow of prediction error.
∙ Actionoriented predictive processing models come tantalizingly close to overcoming some of the major obstacles blocking previous attempts to ground a unified science of mind, brain, and action.
∙ They take familiar elements from existing, wellunderstood, computational approaches (such as unsupervised and selfsupervised forms of learning using recurrent neural network architectures, and the use of probabilistic generative models for perception and action) and relate them, on the one hand, to a priori constraints on rational response (the Bayesian dimension), and, on the other hand, to plausible and (increasingly) testable accounts of neural implementation.
Consciousness and Neuroscience
∙ Main Purposes: to set out for neuroscientists one possible approach to the problem of consciousness and to describe the relevant ongoing experimental work.
Clearing the Ground
∙ Two reasons why neuroscientists do not attempt to study consciousness:
o They consider it to be a philosophical problem, and so best left to philosophers
o They concede that it is a scientific problem, but think it is premature to study it now
∙ Major question that neuroscience must first answer:
o It is probable that at any moment some active neuronal processes in your head correlate with consciousness, while others do not: what is the difference between them?
o In particular, are the neurons involved of any particular neuronal type? What is special (if anything) about their connections? And what is special (if anything) about their way of firing? The neuronal correlate of consciousness is often referred to as the NCC. Whenever some information is represented in the NCC it is represented in consciousness.
∙ Tentative assumption (Crick and Koch, 1990):
o All the different aspects of consciousness (pain, visual awareness, self consciousness, and so on) employ a basic common mechanism or perhaps a few such mechanisms. If one could understand the mechanism for one aspect, then, we hope, we will have gone most of the way towards understanding them all.
∙ Until the problem of consciousness is understood much better, any attempt at a formal definition is likely to be either misleading or overly restrictive, or both
∙ It is plausible that some species of animals — in particular the higher mammals — possess some of the essential features of consciousness, but not necessarily all. For this reason, appropriate experiments on such animals may be relevant to finding the mechanisms underlying consciousness. It follows that a language system (of the type found in humans) is not essential for consciousness — that is, one can have the key features of consciousness without language.
∙ It is probable, however, that consciousness correlates to some extent with the degree of complexity of any nervous system. When one clearly understands, both in detail and in principle, what consciousness involves in humans, then will be the time to consider the problem of consciousness in much simpler animals.
∙ There are many forms of consciousness, such as those associated with seeing, thinking, emotion, pain, and so on.
o Selfconsciousness — that is, the selfreferential aspect of consciousness — is probably a special case of consciousness.
Visual Consciousness
∙ Our visual percepts are especially vivid and rich in information. In addition, the visual input is often highly structured yet easy to control.
Why Are We Conscious?
∙ We have suggested (Crick and Koch, 1995a) that the biological usefulness of visual consciousness in humans is to produce the best current interpretation of the visual scene in the light of past experience, either of ourselves or of our ancestors (embodied in our genes), and to make this interpretation directly available, for a sufficient time, to the parts of the brain that contemplate and plan voluntary motor output, of one sort or another, including speech
∙ Zombie concept: a person uses current visual input to produce a relevant motor output, without being able to say what was seen.
∙ As pointed out to us by Ramachandran and Hirstein (1997), it is sensible to have a single conscious interpretation of the visual scene, in order to eliminate hesitation
∙ Milner and Goodale (1995) suggest that in primates there are two systems, which we shall call the online system and the seeing system. The latter is conscious, while the former, acting more rapidly, is not.
The Nature of the Visual Representation
∙ To be aware of an object or event, the brain has to construct a multilevel, explicit, symbolic interpretation of part of the visual scene.
o By multilevel, we mean, in psychological terms, different levels such as those that correspond, for example, to lines or eyes or faces. In neurological terms, we mean, loosely, the different levels in the visual hierarchy (Felleman and Van Essen, 1991). The important idea is that the representation should be explicit
∙ We postulate that one set of such neurons will be all of one type (say, one type of pyramidal cell in one particular layer or sublayer of cortex), will probably be fairly close together, and will all project to roughly the same place.
∙ As a working hypothesis we have assumed that only some types of specific neurons will express the NCC. It is already known that the firing of many cortical cells does not closely correspond to what the animal is currently seeing.
∙ An alternative possibility is that the NCC is necessarily global (Greenfield, 1995). In one extreme form this would mean that, at one time or another, any neuron in cortex and associated structures could express the NCC. At this point we feel it more fruitful to explore the simpler hypothesis — that only particular types of neurons express the NCC — before pursuing the more global hypothesis.
Where is the Visual Representation?
∙ The conscious visual representation is likely to be distributed over more than one area of the cerebral cortex and possibly over certain subcortical structures as well.
∙ We have argued that in primates, contrary to most received opinion, it is not located in cortical area V1 (also called the striate cortex or area 17)
∙ The suggestion is that the neural activity there is not directly correlated with what is seen.
What is Essential for Visual Consciousness?
∙ When one is actually looking at a visual scene, the experience is very vivid. This should be contrasted with the much less vivid and less detailed visual images produced by trying to remember the same scene. (A vivid recollection is usually called a hallucination.)
∙ We are concerned here mainly with the normal vivid experience. (It is possible that our dimmer visual recollections are mainly due to the back pathways in the visual hierarchy acting on the random activity in the earlier stages of the system.)
∙ If we do not pay attention to some part or aspect of the visual scene, our memory of it is very transient and can be overwritten (masked) by the following visual stimulus.
∙ Our impression that at any moment we see all of a visual scene very clearly and in great detail is illusory, partly due to everpresent eye movements and partly due to our ability to use the scene itself as a readily available form of memory, since in most circumstances the scene usually changes rather little over a short span of time
∙ It seems to us that working memory is a mechanism for bringing an item, or a small sequence of items, into vivid consciousness, by speech, or silent speech
∙ Consciousness, then, is enriched by visual attention, though attention is not essential for visual consciousness to occur
∙ Attention is broadly of two types: bottomup, caused by the sensory input; and topdown, produced by the planning parts of the brain.
∙ It is important to discover what distinguishes the online system, which is unconscious, from the seeing system, which is conscious.
o Milner and Goodale (1995) suggest that the online system mainly uses the dorsal visual stream. They propose that rather than being the ‘where’ stream, as suggested by Ungerleider and Mishkin (1982), it is really the ‘how’ stream. This might imply that all activity in the dorsal stream is unconscious.
o The ventral stream, on the other hand, they consider to be largely conscious. An alternative suggestion, due to Steven Wise (personal communication and Boussaoud et al. 1996), is that direct projections from parietal cortex into premotor areas are unconscious, whereas projections to them via prefrontal cortex are related to consciousness.
Bistable Percepts
∙ At present, the most important experimental approach to finding the NCC is to study the behavior of single neurons in the monkey’s brain while it is looking at something that produces a bistable percept.
∙ The visual input, apart from minor eye movements, is constant; but the subject’s percept can take one of two alternative forms.
∙ Allman suggested a more practical alternative: to study the responses in the visual system during binocular rivalry.
o If the visual input into each eye is different, but perceptually overlapping, one usually sees the visual input as received by one eye alone, then by the other one, then by the first one, and so on. The input is constant, but the percept changes.