# THEORETICAL NEUROSCIENCE CAAM 415

Rice University

These 239 pages of class notes were uploaded by Walker Witting on Monday, October 19, 2015, and have since received 26 views. The notes belong to CAAM 415 at Rice University, taught by Staff in the Fall. For similar materials in Applied Mathematics at Rice University, see /class/225003/caam-415-rice-university.


## CAAM 415 — 2008 — Gabbiani: Problem set

Instructions: Solve the following two exercises.

### 1. Spatiotemporal Gabor filter receptive fields

a) Plot the one-dimensional spatial receptive field of the following odd and even Gabor filters:

$$f_{se}(x) = \frac{1}{\sqrt{2\pi}\,\sigma_x}\,e^{-x^2/2\sigma_x^2}\cos(\kappa_x x), \qquad f_{so}(x) = \frac{1}{\sqrt{2\pi}\,\sigma_x}\,e^{-x^2/2\sigma_x^2}\sin(\kappa_x x).$$

Use the following values: $\sigma_x = 0.1$ deg, $\kappa_x/2\pi = 4.2$ (i.e., 4.2 cycles/deg), with a sampling step $dx = 0.004$ deg and, in MATLAB notation, `x = (-nx/2+1 : nx/2) * dx`, with $n_x = 128$ points.

b) Plot the one-dimensional temporal receptive field of the following odd and even Gabor transfer functions:

$$f_{te}(t) = \frac{1}{\sqrt{2\pi}\,\sigma_t}\,e^{-t^2/2\sigma_t^2}\cos(\kappa_t t), \qquad f_{to}(t) = \frac{1}{\sqrt{2\pi}\,\sigma_t}\,e^{-t^2/2\sigma_t^2}\sin(\kappa_t t).$$

Use the following parameter values: $\kappa_t/2\pi = 8$ cycles/sec, $\sigma_t = 31$ msec. Compute the temporal response of each of the filters for a vector $t$ consisting of $n_t = 128$ points, in MATLAB notation `t = (-nt/2+1 : nt/2) * dt`, with $dt = 2$ msec.

c) Plot the 3-d spatiotemporal profile of the following two filters:

$$g_1(x,t) = f_{se}(x)f_{te}(t) + f_{so}(x)f_{to}(t), \qquad g_2(x,t) = f_{se}(x)f_{te}(t) - f_{so}(x)f_{to}(t).$$

Hint: Use the MATLAB function `meshc`. Try to optimize the viewing angle. If you are not familiar with these functions, consult the MATLAB Help section on 3-D visualization.

d) Plot the Fourier-domain profiles of the two filters $g_1(x,t)$ and $g_2(x,t)$. Hints: Use the formulas given in the notes. Use an appropriate range of spatial and temporal frequency values.

e) Explain in your own words how the differences in frequency-domain characteristics will impact their responses to drifting sinusoidal gratings.

### 2. Response of Gabor filters to moving gratings

Compute and plot the responses $R(t)$ of the filters $g_1(x,t)$ and $g_2(x,t)$ defined in the previous exercise to a one-dimensional sinusoidal grating drifting in their receptive field. Use a grating moving to the right with spatial frequency $\kappa_x/2\pi = 4$ cycles/deg and temporal frequency $\kappa_t/2\pi = 8$ cycles/sec. Simulate 2000 ms with a step $\Delta t = 2$ ms. Compute and plot the response of the same filters to the grating moving to the left.

Hints: Represent the grating as a matrix with columns and rows corresponding to different space and time values, respectively. Take
advantage of the fact that the filters are separable, which allows you to compute the spatial and temporal components of the responses successively. Use matrix multiplication to compute the spatial component first, and convolution (i.e., the MATLAB function `conv`) to compute the temporal component.

## CAAM 415 — Gabbiani: Solution to exercise 2

The steady-state cable equation is

$$\lambda^2\,\frac{d^2V}{dx^2} = V.$$

We use the Ansatz $V(x) = C_1 e^{x/\lambda} + C_2 e^{-x/\lambda}$ to obtain

$$\frac{dV}{dx} = \frac{C_1}{\lambda}e^{x/\lambda} - \frac{C_2}{\lambda}e^{-x/\lambda}, \qquad \frac{d^2V}{dx^2} = \frac{C_1}{\lambda^2}e^{x/\lambda} + \frac{C_2}{\lambda^2}e^{-x/\lambda} = \frac{1}{\lambda^2}\,V(x),$$

so that the Ansatz is indeed a solution. From the boundary conditions,

$$\left.\frac{dV}{dx}\right|_{x=0} = -\gamma \quad\text{and}\quad \left.\frac{dV}{dx}\right|_{x=l} = 0,$$

we obtain

$$\frac{C_1}{\lambda} - \frac{C_2}{\lambda} = -\gamma \quad\text{and}\quad \frac{C_1}{\lambda}e^{l/\lambda} - \frac{C_2}{\lambda}e^{-l/\lambda} = 0.$$

Rearranging gives $C_1 - C_2 = -\gamma\lambda$ and $C_2 = C_1 e^{2l/\lambda}$. Combining these two equations, we obtain first

$$C_1 = \frac{\gamma\lambda}{e^{2l/\lambda} - 1} \quad\text{and then}\quad C_2 = \frac{\gamma\lambda\,e^{2l/\lambda}}{e^{2l/\lambda} - 1}.$$

We now plug the values of $C_1$ and $C_2$ into the Ansatz:

$$V(x) = \frac{\gamma\lambda}{e^{2l/\lambda}-1}\,e^{x/\lambda} + \frac{\gamma\lambda\,e^{2l/\lambda}}{e^{2l/\lambda}-1}\,e^{-x/\lambda} = \gamma\lambda\,\frac{e^{(x-l)/\lambda} + e^{-(x-l)/\lambda}}{e^{l/\lambda} - e^{-l/\lambda}} = \gamma\lambda\,\frac{\cosh((l-x)/\lambda)}{\sinh(l/\lambda)}.$$

We can now compute $V(0) = \gamma\lambda\,\cosh(l/\lambda)/\sinh(l/\lambda)$ and rewrite

$$V(x) = V(0)\,\frac{\cosh((l-x)/\lambda)}{\cosh(l/\lambda)}.$$

To take the limit $l \to \infty$, we rewrite

$$\frac{\cosh((l-x)/\lambda)}{\cosh(l/\lambda)} = \frac{e^{-x/\lambda} + e^{-2l/\lambda}\,e^{x/\lambda}}{1 + e^{-2l/\lambda}}.$$

Since $e^{-2l/\lambda} \to 0$ as $l \to \infty$, the second additive terms in both the numerator and denominator of this last equation tend to zero. Thus,

$$V(x) = V(0)\,e^{-x/\lambda}.$$

**Example of cable constant.** From the values given in the exercise statement and the formula for the space constant in the lecture notes, we obtain

$$\lambda = \sqrt{\frac{R_m}{R_i}\cdot\frac{d}{4}} = \sqrt{\frac{100{,}000\ \Omega\,\mathrm{cm}^2}{100\ \Omega\,\mathrm{cm}}\cdot\frac{10^{-4}\ \mathrm{cm}}{4}} \approx 0.1581\ \mathrm{cm} = 1581\ \mu\mathrm{m}.$$

Since the dendrites of pyramidal cells are of smaller length ($\sim 300\ \mu$m) than the space constant, the infinite-cable approximation is not appropriate.

## CAAM 415 — 2008 — Gabbiani: Problem set

Instructions: Solve the following two exercises.

### 1. Integration of synaptic inputs by a leaky integrate-and-fire neuron

a) Implement the MATLAB code for two alpha-function conductances representing AMPA and GABA synapses, respectively. Use the following parameter values:

1. Peak activation time: 1 msec (AMPA) and 7 msec (GABA).
2. Peak conductance: 1 nS (AMPA) and 4 nS (GABA).
3. Reversal potential: 0 mV (AMPA) and −70 mV (GABA).

Plot the time course of each of these conductances, $g_{AMPA}(t)$ and $g_{GABA}(t)$. Hints:
The alpha function decays to less than 1 percent of its peak value within $8\,t_{peak}$; you therefore need only consider this interval of time.

b) Generate a 1-second sequence of excitatory synaptic conductance activation to be fed to the LIF neuron that you constructed during the previous exercise series. Assume $n_{syn} = 1000$ independent excitatory synapses activated at the same rate $p_{syn}$, following a Poisson distribution. Plot the conductance activation for $p_{syn}$ equal to 5.5, 6.5, 7.5, and 8.5 activations/sec.

Hints: Since the synapses are assumed to be independent, we can replace them by a single Poisson synapse activated at a rate $\nu = n_{syn}\,p_{syn}$. Generate a sequence of zeros and ones sampled at 0.01-msec resolution, with ones corresponding to an activation, as a homogeneous Poisson process, using the `exprnd` function. If this sequence is called `syn_vec`, you can obtain the corresponding synaptic conductance activation with the MATLAB `conv` function, where the second parameter is set to the unitary synaptic conductance computed above, $g_{AMPA}(t)$. Note that `conv` will generate a vector longer than your original 1-sec-long `syn_vec` sequence; truncate its tail to the appropriate length.

c) Feed this input to your LIF model for a 1-sec-long simulation. Compute the mean ISI and the coefficient of variation (CV) of the ISI. Change the single-synapse activation rate between 5.5 and 7.5 activations/sec in steps of 0.5, and plot the CV as a function of the mean ISI. Hint: Modify your LIF model so that it accepts a synaptic conductance vector instead of a constant current as input; the corresponding current is then computed from the conductance and membrane potential at each time step, as explained in the lecture notes. To make sure that your modified LIF model is working, compare its steady-state potential output to the exact solution given in the lecture notes for a constant synaptic conductance.

d) Repeat in the case where 100 inhibitory conductances are also activated independently, according to a Poisson process with
a constant rate of 1 activation/sec. Vary the excitatory synapse activation rate between 6 and 8 activations/sec in steps of 0.5. Compare the CV obtained in this case with the CV obtained when excitation is present alone. Hint: In both (c) and (d), your excitatory inputs should generate mean interspike intervals between 40 and 140 ms, approximately. In (d), for the lowest excitatory input rate, the inhibitory conductance averaged over time should be about half the excitatory conductance averaged over time.

e) Read the Shadlen and Newsome paper given on the Lecture web site and answer the following questions:

1. How many inputs (typical range) do you expect a cortical neuron to receive?
2. How many of those do you expect to be inhibitory?
3. How many of those inputs are expected to come from neurons within a radius of 200 μm of the target cell?
4. Why would you expect neurons within a 100 μm radius of the target cell to respond to similar stimuli?
5. What is the typical impact of a single EPSP on the membrane potential of a cortical cell (i.e., typical depolarization as a percentage of action potential threshold)?
6. What are the arguments in favor of inhibitory inputs playing a relatively important role in determining the subthreshold behavior of the target cell (3 arguments)?

### 2. Retinal ganglion cell receptive field

The contrast sensitivity function depicted in Fig. 15.4 belongs to an RGC neuron whose spatial receptive field is described by the following parameters: $r_c = 0.24$ deg, $r_s = 0.96$ deg, $k_s/k_c = 0.06$.

a) Plot the Fourier transform of the receptive field in the frequency domain using eq. 15.10. Scale the Fourier transform so as to have a peak value of 50, as in Fig. 15.4, left. Hints: Use $\Delta f = 0.01$ cycles/deg in the frequency domain. Use the `loglog` function to plot in double logarithmic coordinates.

b) Plot the corresponding spatial receptive field using eq. 15.8 of the lecture notes and the scaling factor determined in (a). Hints: Use $\Delta x = 0.006$ deg in the space domain and $N = 2048$ points: $N/2 - 1$ negative spatial
positions, $x = 0$, and $N/2$ positive spatial positions.

c) Fast-Fourier-transform the spatial receptive field and verify that it matches the theoretical result by plotting them together. Hints: This is essentially similar to the exercise on the Fourier transform of the LIF, except for one additional complication: the function values at negative positions ($x < 0$) should be placed appropriately, in wrap-around order. Let $f_{-1}$ be the value of $f$ at the first negative spatial position, $x = -\Delta x$. Since the FFT assumes that $f$ is periodic, we actually have $f_{-1} = f_{N-1}$, and similarly $f_{-2} = f_{N-2}$, etc., until $f_{-(N/2-1)} = f_{N/2+1}$. First rewrite the spatial receptive field in this order (illustrate with a plot) and then Fourier transform it. Plot the real part only, to avoid the small imaginary parts introduced by the finite precision of the calculation. Why can you ignore the imaginary part? Plot positive frequencies only.

d) Generate three one-dimensional cosine gratings (maximal contrast ±1) with spatial frequencies $f_x = 0.1, 1, 2$ cycles/deg drifting across the receptive field for 1000 msec, sampled at 1-msec resolution, at a temporal frequency $f_t = 3$ cycles/sec. Plot the drifting grating (contrast vs. position) at various times over the receptive field and show that it is moving in the right direction. Compute numerically the response of the LGN cell to the drifting grating from the spatial receptive field. Plot the time-varying firing rate and verify that the maximal amplitude modulation matches the theoretical prediction obtained from the Fourier transform of the receptive field in (a). What is the translation speed of the three gratings? Hints: Generate the moving gratings by filling an array of 1000 × 2048 points; the first dimension is time and the second dimension is space. Use MATLAB matrix multiplication to compute the response of the LGN cell.

## CAAM 415 — Gabbiani: Problem set

Instructions: Solve the following four exercises.

### 1. Proximal vs. distal inhibition

Reproduce the graphs of the Vu and Krasne (1992) Science paper (their figures 1A1 and 1A2) using the notes on proximal vs.
distal inhibition and the appendix to lecture 3 (integration of synaptic inputs in dendritic trees). The paper is available in PDF format from the Lecture web site. Hint: Be sure to read the Krasne and Vu paper, and in particular the legend of Fig. 1, for appropriate parameters.

### 2. Steady-state cable equation

Solve, by hand, the steady-state cable equation in the case of a finite cylinder of length $l$:

1. Verify that the Ansatz given in the lecture notes is indeed a solution.
2. Determine the value of the two constants $C_1$ and $C_2$ from the boundary conditions given in the lecture notes.
3. Find the value of $V(0)$ and show that eq. 3 of the lecture notes is the correct solution.
4. Justify the result given in eq. 4 of the lecture notes in the limit $l \to \infty$.

Hints: Use the definitions of the hyperbolic sine and cosine,

$$\sinh x = \tfrac{1}{2}\left(e^{x} - e^{-x}\right), \qquad \cosh x = \tfrac{1}{2}\left(e^{x} + e^{-x}\right).$$

Compute the space constant of a dendritic cable of diameter 1 μm, for $R_m = 100{,}000\ \Omega\,\mathrm{cm}^2$ and $R_i = 100\ \Omega\,\mathrm{cm}$. The apical dendrites of pyramidal cells have typical lengths of 300 μm. Is the infinite-cable approximation justified in this context? Explain.

### 3. Fast Fourier Transform

Read the Fourier transform chapter from the lecture notes (Chapter 5). Solve the following problem: a vector consisting of 2048 points was obtained by Fourier transformation of a corresponding 2048-point-long time-domain discrete signal. The Nyquist frequency of sampling was $f_{Nyquist} = 2.5$ kHz. Only 4 components of this vector are different from 0, and they are all equal to 1. The components are numbers 120, 326, 1724, and 1930.

1. Compute the original sampling step in the time domain, $\Delta t$.
2. Compute the corresponding sampling step in the frequency domain, $\Delta f$.
3. Compute the frequency values assigned to the 4 points with Fourier coefficient different from 0 given above.
4. Can you make any statements about the properties of the signal in the time domain from which this Fourier transform was obtained? Specifically, what do its real and imaginary parts look like, and what are its symmetry properties? For
example, is $f(t)$ an odd ($f(-t) = -f(t)$) or an even ($f(-t) = f(t)$) function of time? Justify your answer.

### 4. Transfer function of the subthreshold leaky integrate-and-fire neuron

1. Plot the real and imaginary parts of the transfer function of the leaky integrate-and-fire neuron with parameters $\tau = 30$ msec and $C = 1$. Plot for frequency values between 0 and 500 Hz. Hints: (1) use the function `semilogx` to plot in semi-logarithmic coordinates, as in the lecture notes; (2) use a consistent set of units, seconds for time and Hz for frequency, and don't forget that circular frequency is related to temporal frequency through $\omega = 2\pi\nu$; (3) compute the real and imaginary parts from eq. 5.6 with $R = \tau$, since $C = 1$.
2. Compute and plot the transfer function $G(t)$ in the time domain between $t = 0$ and $t = 511$ ms at a temporal resolution of 1 ms, for a total of 512 sample points. Use `fft` to compute the discrete Fourier transform of $G(t)$.
3. Compute, for the transfer function $G(t)$ that you plotted above, the corresponding Nyquist frequency and the sampling frequency in the frequency domain when you use the MATLAB `fft` function. Which components of the transformed vector correspond to zero frequency ($\omega = 0$), which components correspond to positive frequencies, and which components correspond to negative frequencies? Explain.
4. Can you compute directly, without using the function `fft`, the discrete Fourier component at zero frequency from $G(t)$? How does it compare with the value obtained from `fft` and with the theoretical value obtained from eq. 5.6 in Chapter 5?
5. Verify that the numerical result (from `fft`) matches the exact result of 1 by plotting real and imaginary parts on the same graph.

## Decision-making (lecture slides)

### Types of decisions

- Reward-based decisions: reward/cost → decision → action.
- Perceptual decisions: sensory evidence → decision → action; time course of evidence accumulation; (1) what to decide, (2) when to decide; multiple alternatives.

(Slide: photograph of a Chinese restaurant menu, illustrating a decision among many alternatives.)

### Physiology: binary choice

- (a) Perceptual discrimination task (left vs. right motion); (c) free-choice task. (Sugrue, Corrado & Newsome, 2005.)

### Types of experiments

- Response-time procedure (free-response protocol): unlimited time.
- Response-signal procedure (interrogation protocol): limited time.

### Motion discrimination: response time

- (Slide: random-dot stimuli at 0%, 25%, and 50% motion coherence; Palmer, Huk & Shadlen, 2005.)
- (Slide: response time (ms) and proportion correct vs. motion strength (% coherence) for Monkeys N and B; Palmer, Huk & Shadlen, 2005.)

### Types of models

- Diffusion models, race (accumulator) models, Bayesian models; behavioral models vs. neural models.

### Pros and cons

- Diffusion models (Wald; Gold & Shadlen; Ratcliff): simple, good
description of psychophysics; but what is the decision variable?
- Accumulator models (Usher & McClelland; McMillen & Holmes): neural flavor; more parameters; worse fit to the data.

### Diffusion model

- (Slide: relative evidence vs. time; sample paths with a given mean and starting point drift toward a bound for response $H_a$ or $H_b$, and the decision is made when a bound is crossed; Palmer, Huk & Shadlen, 2005.)

### Binary choice: response time

- (Slide: response time and proportion correct as a function of motion strength; Palmer, Huk & Shadlen, 2005.)

### Leaky accumulator model

- Integration, white noise, leakage (rate $k$), slope $s$:

$$dx_i = \Big(s_i - k\,x_i - w\sum_{j \neq i} x_j\Big)\,dt + \sigma\,dW_i,$$

where $dx_i$ is the $i$-th unit's change in time $dt$, $s_i$ is the $i$-th signal (drift), $-k\,x_i$ is the $i$-th unit's leakage, $-w\sum_{j\neq i} x_j$ is global inhibition, and $\sigma\,dW_i$ is white noise into the $i$-th unit.
- Typical behavior: no leakage; with leakage; with leakage and global inhibition. (Slide: activity vs. time.)

### When to decide?

- Absolute threshold: unit $> \theta$.
- Relative threshold: max vs. average $> \theta$.
- Relative threshold: max vs. next $> \theta$.

### Binary choice: response signal

- (Slide: proportion correct vs. time, 0–2000 ms, at several coherences; Usher and McClelland, 2001.)

### Problems with existing models

- Limited to binary or discrete choice.
- Uncertainty not encoded: what if certainty changes over time or over trials?
- No normative solution.

### Stages of decision-making

- Two stages: (1) accumulating the evidence; (2) action selection.
- Can we build a neural network that performs accumulation and action selection optimally, for any number of choices? No neural model is known beyond two choices. Would this network account for existing data? (Beck, Ma et al., 2008.)

### Action selection: attractor
dynamics

- Integration: 100 spiking LNP neurons with lateral connections and a long time constant.
- Termination rule: stop integration when peak activity reaches a preset bound.
- Sensory data: 100 direction-selective neurons.
- (Slide: single-trial activity and spike counts at $t = 300$ ms vs. preferred saccade direction (LIP, with decision boundary) and vs. preferred motion direction (sensory input).)

### Why is such a network near optimal?

- Probability distributions from neural activity: $p(s \mid \mathbf{r}) \propto p(\mathbf{r} \mid s)\,p(s)$.
- Evidence arrives over time ($p(s \mid \mathbf{r}_1)$, $p(s \mid \mathbf{r}_2)$, $p(s \mid \mathbf{r}_3)$, …); optimal evidence accumulation combines it into $p_{\mathrm{optimal}}(s \mid \mathbf{r}_1, \dots, \mathbf{r}_t)$.

### Bayesian evidence accumulation

- Given the pattern of activity from MT since the beginning of a trial, the optimal decision should be based on the posterior distribution

$$p(s \mid \mathbf{r}_t^{MT}, \dots, \mathbf{r}_1^{MT}) \propto p(\mathbf{r}_t^{MT} \mid s)\cdots p(\mathbf{r}_1^{MT} \mid s)\,p(s).$$

- How can LIP compute and represent this probability distribution?

### Poisson-like neural variability

- $p(\mathbf{r} \mid s) = \phi(\mathbf{r})\,e^{\mathbf{h}(s)^{\mathsf T}\mathbf{r}}$.
- Includes independent Poisson; allows Fano factors different from 1; allows for correlated variability.
- The kernel $\mathbf{h}(s)$ is determined by the tuning curves and the correlation structure of the population.
- Makes optimal cue integration easy to implement.

### Integration over time is optimal in the decision area

- $p(s \mid \mathbf{r}_1, \dots, \mathbf{r}_t) \propto p(s \mid \mathbf{r}_1)\cdots p(s \mid \mathbf{r}_t)$. Integrating neural activity over time corresponds to
multiplying the encoded probability distributions, which is the condition for optimality. If LIP performs temporal integration, it encodes the optimal posterior distribution at all times.

### Binary choice: distribution encoded in LIP

- (Slide: LIP firing rate (Hz) vs. preferred saccade direction, and the corresponding Bayesian posterior probability over saccade direction, at 50, 100, 150, and 200 ms; IN and OUT neurons.)

### Binary choice: activity of IN and OUT neurons

- (Slide: firing rate vs. time at coherences 51.2, 25.6, 12.8, and 3.2%; data from Roitman and Shadlen, 2002.)

### Log odds grow linearly in time

- $\log\big[p(s = 0^\circ \mid \mathbf{r}_t, \dots, \mathbf{r}_1)\,/\,p(s = 180^\circ \mid \mathbf{r}_t, \dots, \mathbf{r}_1)\big]$ grows approximately linearly in time. (Slide: log odds vs. time, 0–200 ms, at coherences 51.2, 25.6, and 12.8%.)

### Optimal action selection

- LIP encodes the optimal posterior distribution at all times, but how is it read out optimally? Which action is the most likely to be right? → maximum-likelihood estimate.
- Can the SC take LIP activity at decision time and generate the maximum-likelihood saccade? For general noise distributions the answer is no. However, if the noise is Poisson-like, then a solution exists: a network with a line attractor can be tuned to recover the maximum-likelihood estimate (Deneve, Latham & Pouget, 1999).
- (Slide: maximum-likelihood estimate in SC; action selection via attractor dynamics.)

### Binary choice: performance and reaction time

- (Slide: probability correct and reaction time (ms) vs. % coherence, data vs. model; data from Mazurek, Roitman, Ditterich & Shadlen, 2003.)

### Beyond binary

- Binary (e.g., Roitman and Shadlen, 2002) vs. continuous choice.

### Continuous choice: distribution encoded in LIP

- (Slide: firing rate vs. preferred saccade direction, and posterior probability over saccade direction, at 50, 100, 150, and 200 ms.)

### Experimental predictions

- Continuous case: the width of the population activity does not change over time.
- LIP encodes a probability distribution over the stimulus; this distribution reflects both the reliability of the evidence and the performance of the animal.
- Same weights regardless of coherence or number of choices.

### Log odds at decision time

- (Slide: log odds at decision time vs. % coherence for 2-choice and 4-choice tasks, model vs. data; data from Roitman and Shadlen, 2002, and Churchland et al., 2008.)

### The role of inhibition

- Bayesian model: inhibition is needed only to keep neurons in their dynamical range.
- Other models: mutual inhibition essential.
- Open question.

### Conclusions

- Decision making: reward-based vs. perceptual; response time vs. response signal; binary vs. multiple alternatives/continuous; uncertainty can change over time or trials.
- Diffusion models, accumulator models, Bayesian models.

## Effect of correlations on information: example calculation

We are interested in the impact of uniform correlations of magnitude $c$ on the Fisher information in a population, assuming correlated Poisson variability.

Correlation matrix ($N$-dimensional):

$$C = \begin{pmatrix} 1 & c & \cdots & c \\ c & 1 & & \vdots \\ \vdots & & \ddots & c \\ c & \cdots & c & 1 \end{pmatrix} = (1-c)\,I + c\,\mathbf{1}\mathbf{1}^{\mathsf T},$$

where $I$ is the identity matrix and $\mathbf{1} = (1, \dots, 1)^{\mathsf T}$.

Fisher information in terms of the covariance matrix:

$$I_F = \mathbf{f}'(s)^{\mathsf T}\,\Sigma(s)^{-1}\,\mathbf{f}'(s).$$

Relation between covariance matrix and correlation matrix: since we assume Poisson variability (variance = mean), the covariance can be written as

$$\Sigma = \begin{pmatrix} \sqrt{f_1} & & 0 \\ & \ddots & \\ 0 & & \sqrt{f_N} \end{pmatrix} C \begin{pmatrix} \sqrt{f_1} & & 0 \\ & \ddots & \\ 0 & & \sqrt{f_N} \end{pmatrix},$$

and therefore the Fisher information is

$$I_F = \mathbf{u}^{\mathsf T} C^{-1} \mathbf{u}, \qquad \text{where we denote } \mathbf{u} = \left(\frac{f_1'}{\sqrt{f_1}}, \dots, \frac{f_N'}{\sqrt{f_N}}\right)^{\mathsf T}.$$

The vector $\mathbf{1}$ is an eigenvector of $C$ with eigenvalue $1 + (N-1)c$. Now observe that $C$ is symmetric; therefore its eigenvectors are all orthogonal, and right and left eigenvectors are the same. Any eigenvector other than $\mathbf{1}$ must be orthogonal to $\mathbf{1}$. Conversely, all vectors that are
orthogonal to $\mathbf{1}$ are eigenvectors:

$$C\mathbf{x} = (1-c)\,\mathbf{x} + c\,\mathbf{1}(\mathbf{1}^{\mathsf T}\mathbf{x}) = (1-c)\,\mathbf{x}.$$

All these eigenvectors have eigenvalue $1-c$. The eigenspace of eigenvalue $1-c$ has dimension $N-1$, and this space is orthogonal to the vector $\mathbf{1}$. All vectors orthogonal to $\mathbf{1}$ are eigenvectors, and any orthogonal set of vectors which spans the eigenspace can be used.

If the tuning curves are symmetric, then $\mathbf{u}$ is orthogonal to $\mathbf{1}$; therefore it is an eigenvector with eigenvalue $1-c$. We normalize it to obtain $\mathbf{x}_0 = \mathbf{u}/\lVert\mathbf{u}\rVert$.

We can diagonalize $C$ as $C = X\Lambda X^{\mathsf T}$, with $X$ a matrix of orthonormal eigenvectors and $\Lambda$ the diagonal matrix of eigenvalues. Because the eigenvectors are orthonormal, $X^{\mathsf T}X = I$, and $C = \sum_i \lambda_i\,\mathbf{x}_i\mathbf{x}_i^{\mathsf T}$; similarly, $C^{-1} = \sum_i \lambda_i^{-1}\,\mathbf{x}_i\mathbf{x}_i^{\mathsf T}$.

The information is therefore

$$I_F = \mathbf{u}^{\mathsf T} C^{-1}\mathbf{u} = \frac{1}{1-c}\,\mathbf{u}^{\mathsf T}\mathbf{u}.$$

For $c = 0.2$, the minimal variance of estimates will be 20% smaller than if $c = 0$. Note also that

$$I_{\mathrm{shuffled}} = \sum_i \frac{f_i'^2}{f_i} = \mathbf{u}^{\mathsf T}\mathbf{u}.$$

This is the amount of information contained in a population in which individual neurons have the same response distributions as above, but are uncorrelated. This is achieved by shuffling the trials. Furthermore,

$$I_{\mathrm{diag}} = \frac{(\mathbf{u}^{\mathsf T}\mathbf{u})^2}{\mathbf{u}^{\mathsf T} C\,\mathbf{u}} = \frac{\mathbf{u}^{\mathsf T}\mathbf{u}}{1-c}.$$

This is the amount of information that would be extracted from the correlated population when using a suboptimal decoder that is optimal for the shuffled (decorrelated) responses.

# Integration of synaptic inputs in dendritic trees

Theoretical Neuroscience
Fabrizio Gabbiani
Division of Neuroscience, Baylor College of Medicine
One Baylor Plaza, Houston, TX 77030
e-mail: gabbiani@bcm.tmc.edu

## 1 Introduction

In intact animals, neurons process information by integrating synaptic inputs originating from the presynaptic neurons connected to them. Subsequently, they generate action potentials to communicate with their postsynaptic target neurons. Thus, a substantial part of the action takes place in the dendrites of a cell, where synaptic inputs are integrated. Our understanding of how synaptic inputs are processed within dendritic trees is much more recent than the understanding of how action potentials are generated and propagate along axons.
One reason is that synaptic potentials represent much smaller electrical signals than action potentials and are thus more difficult to record. A second reason is that dendritic structures are small and less accessible to recording electrodes than, for example, the giant squid axon. Furthermore, dendrites are very diverse in their shape, synaptic density, and distribution of ion channels.

In this lecture we want to describe some of the characteristics of synaptic integration in dendritic trees. In particular, we will see that:

1. Both excitatory and inhibitory subthreshold conductance changes due to synaptic transmission interact non-linearly.
2. The summation of postsynaptic potentials depends both on the location of synaptic inputs in a dendritic tree and on their temporal pattern of activation.
3. Activation of synaptic inputs can be expected to have a large effect on the electrotonic structure of a cell, which in turn will contribute to changing the integration of synaptic inputs in a dynamic way.
4. Active membrane conductances located in the dendrites significantly affect the integration of synaptic inputs.

These aspects of synaptic integration are by now fairly well established. However, it is important to stress that the way in which dendritic integration properties combine to determine how information is processed by neurons is still poorly understood and constitutes an active area of research.

One important ingredient of synaptic integration that we have not yet discussed is inhibition. We therefore start by describing inhibition originating from GABA$_A$ synaptic conductances and then analyse some simple models for the interaction of synaptic inputs in dendritic trees.

## 2 Synaptic inhibition

The predominant chemical inhibitory synaptic transmitter substance is GABA (γ-aminobutyric acid), although other substances, like glycine in the vertebrate spinal cord, histamine in arthropod photoreceptors, and acetylcholine in molluscan neurons, also have inhibitory effects. GABA binds onto
several distinct receptor channels that mediate different types of postsynaptic signals. Fast inhibitory synaptic transmission is mediated by GABA$_A$ receptors, while slower, second-messenger-mediated inhibitory synaptic transmission occurs through GABA$_B$ and GABA$_C$ receptors. We will here concentrate on fast GABA$_A$ transmission.

Upon binding onto GABA$_A$ receptors, GABA causes the opening of Cl⁻-selective ion channels, allowing an inward flow of Cl⁻ and thus an outward flow of positive current. An important property of these channels is that the reversal potential for chloride ions is close to the resting potential of the cell. Indeed, as we have seen in previous lectures, Cl⁻ ions are thought to be in large part responsible for the resting conductance of neurons. Thus the effect of GABA is more easily detected when a neuron is depolarized above rest, because the driving force of the chloride reversal potential then tends to bring the membrane potential back towards its resting value. When the membrane potential is close to rest, as is the case most of the time during normal conditions, the main effect of GABA is to increase the conductance of the neuronal membrane. Excitatory synaptic transmission increases the conductance of the membrane as well, but because the reversal potential is far from rest (typically 80 mV positive with respect to rest), the most conspicuous effect of excitation is to shift the membrane potential away from its resting value. For this reason, fast inhibitory synaptic transmission is often said to be silent, or of the shunting type. We will see in the next sections that this asymmetry between excitation and inhibition has several consequences for the integration of synaptic inputs by neurons.

The dynamics of synaptic transmission, both excitatory and inhibitory, can be obtained from detailed biophysical models of the opening and closing of the corresponding ion channels in response to neurotransmitter substances. As we have seen in a previous lecture, this is
particularly important in the case of NMDA-receptor-mediated synaptic transmission, since the conductance of the corresponding channel is voltage-dependent. In contrast, most other ionic synaptic channels behave to a good approximation as perfect ohmic resistors, and we can therefore restrict ourselves to a phenomenological description of the time course of the conductance change corresponding to the activation of a few hundred such channels simultaneously, in response to release of a synaptic vesicle. A choice that works quite well in practice is the α-function model,

$$g_{syn}(t) = C\,t\,e^{-t/t_{peak}}.$$

This function increases from a value of 0 at $t = 0$ and peaks at $t_{peak}$ (as may be seen by setting the derivative $g_{syn}'(t) = 0$) before decaying towards rest. The constant $C$ is chosen such that the peak conductance $g_{peak}$ is reached at $t_{peak}$, i.e., $C = g_{peak}\,e/t_{peak}$.

## 3 Steady-state patch model of synaptic integration

A simple way to investigate synaptic integration is to use the patch membrane model of the previous lectures (Fig. 1). In addition to the membrane capacitance $c_m$ and resting conductance $g_r$ (reversal potential $v_r$), we introduce two synaptic conductances, $g_e$ and $g_i$, with reversal potentials $v_e$ and $v_i$, respectively. These conductances will be interpreted as excitatory and inhibitory synaptic conductances, respectively. Because our calculations are general, we can also interpret $g_i$ as a second excitatory input, by setting its reversal potential equal to $v_e$.

The membrane voltage $v_m$ obeys the familiar differential equation

$$c_m \frac{dv_m}{dt} = -g_r(v_m - v_r) - g_e(v_m - v_e) - g_i(v_m - v_i) + i_m,$$

where $i_m$ is an external current, for example applied through an electrode or flowing from a different dendritic compartment. We make an important simplification by taking $g_e$, $g_i$, and $i_m$ constant. The assumption of time-independent synaptic activation is of course unrealistic since, as we have seen above, usually an α-function is needed to describe the kinetics of synaptic activation. The advantage of this approximation is that it will allow us to solve exactly for
This yields useful insight into the effect of synaptic inputs on membrane voltage dynamics. We define the membrane time constant τ_m = c_m/g_r, the ratio of excitatory to resting membrane conductance c_e = g_e/g_r, and the ratio of inhibitory to resting membrane conductance c_i = g_i/g_r. If we set

τ = τ_m / (1 + c_e + c_i),   v_ss = (v_r + c_e v_e + c_i v_i + i_m/g_r) / (1 + c_e + c_i),

we can rewrite the above equation as

dv_m/dt = -(v_m - v_ss)/τ,

and the solution is

v_m(t) = v_ss - (v_ss - v_m(0)) e^{-t/τ}.   (1)

Thus the membrane potential relaxes exponentially to its steady-state value v_ss. We can now make several interesting observations from this simple model.

4 Local summation of excitatory and inhibitory inputs

Single excitatory input. We first look at the simplest case of a single excitatory input on the patch: assume i_m = 0 and c_i = 0. The steady-state potential is given by

v_ss = (v_r + c_e v_e) / (1 + c_e),

and, rearranging,

v_ss - v_r = [c_e / (1 + c_e)] (v_e - v_r).   (2)

This last equation tells us that the steady-state membrane depolarization depends in a sigmoidal way on the fraction of activated excitatory conductances c_e and saturates at v_e - v_r. This makes intuitive sense, since once v_e - v_r is reached the driving force of the excitatory input cannot drive further current into the cell.

Two excitatory inputs. What happens if two identical excitatory synaptic conductances are activated in the same compartment? In our previous patch model this corresponds to setting v_i = v_e, c_i = c_e and i_m = 0. We then have

v_ss = (v_r + 2 c_e v_e) / (1 + 2 c_e).

We can again rearrange the equation to see how the activation of the synaptic conductances shifts the membrane potential from rest,

v_ss - v_r = [2 c_e / (1 + 2 c_e)] (v_e - v_r).

How does this compare to the depolarization predicted by the linear sum of two inputs? According to eq. (2) we expect

v_ss - v_r = 2 [c_e / (1 + c_e)] (v_e - v_r).

Thus, we see that summation of two excitatory inputs is highly non-linear (Fig. 2).

One excitatory and one inhibitory input. We now investigate what happens when the second input corresponds to shunting inhibition (v_i = v_r and i_m = 0). We obtain

v_ss = (v_r + c_e v_e + c_i v_r) / (1 + c_e + c_i).
Rearranging,

v_ss - v_r = [c_e / (1 + c_e + c_i)] (v_e - v_r).

This is essentially the same result as obtained for two excitatory inputs above. When c_i ≫ 1 + c_e, or equivalently when g_i ≫ g_r + g_e, we can simplify to obtain

v_ss - v_r ≈ (c_e / c_i) (v_e - v_r).

In other words, shunting inhibition has a divisive effect on membrane potential (Fig. 2).

5 Impact of electrotonic parameters on synaptic activation

Membrane time constant. Another important aspect of synaptic activation is its effect on the passive properties of a membrane patch or a dendritic cable. Of course, activation of additional conductances reduces the input resistance of the patch or, equivalently, increases its conductance. One immediate consequence is seen in eq. (1) for the time-dependent convergence towards steady state. As usual, the membrane potential relaxes exponentially towards steady state, but the time constant of relaxation is no longer the membrane time constant τ_m. It is rather the membrane time constant scaled by the relative value of the conductances opened by synaptic activation: if we define c_syn = c_e + c_i, we have

τ = τ_m / (1 + c_syn).

Thus the opening of additional channels causes the membrane potential response of the cell to be faster than at rest. To get a better feeling for the magnitude of the effect, we study a specific example. Assume that the specific membrane resistance is R_m = 50,000 Ω cm², a plausible value for a dendritic membrane patch. The corresponding conductance per square micrometer is G_m = 2 × 10⁻⁴ nS/μm², and if we consider a spherical patch of membrane of radius r = 15 μm, we obtain a total area 4πr² ≈ 2,827 μm² and therefore an input conductance of ≈ 0.57 nS. Since a typical synaptic activation results in a peak conductance change of approximately 0.5 nS, we see that a single synapse would nearly halve the membrane time constant in such a case.

Dendritic cable constant. A characteristic property of dendritic cables is their space constant, defined as

λ = sqrt( R_m d / (4 R_i) ) = sqrt( G_i d / (4 G_m) ).
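Before unpacking the symbols in the cable formula, the arithmetic of the spherical-patch example above is easy to reproduce. A minimal sketch (the 0.5 nS synaptic conductance is the value quoted above; unit conversions are noted in comments):

```python
import math

R_m = 50_000.0                  # specific membrane resistance, ohm * cm^2
G_m = (1.0 / R_m) * 1e9 / 1e8   # convert S/cm^2 -> nS/um^2 (1 cm^2 = 1e8 um^2)
r = 15.0                        # patch radius, um

area = 4.0 * math.pi * r**2     # sphere area, um^2
g_input = G_m * area            # resting input conductance, nS

g_syn = 0.5                     # typical peak synaptic conductance, nS
c_syn = g_syn / g_input         # relative conductance increase
tau_ratio = 1.0 / (1.0 + c_syn) # tau' / tau_m = 1/(1 + c_syn)

print(round(g_input, 3))    # ≈ 0.565 nS
print(round(tau_ratio, 2))  # ≈ 0.53: the time constant nearly halves
```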
Here G_m = R_m^{-1} is the specific membrane conductance (conductance per unit area; the typical units of the specific membrane resistance R_m are Ω cm²) of the cable membrane. The constant G_i = R_i^{-1} is the specific intracellular conductivity, in units of conductance per unit length (the typical units of the intracellular resistivity R_i are Ω cm). Note that R_i is often called R_a in the literature, and we have used this alternative notation in previous lectures. The number d is the diameter of the cable. The space constant characterizes the distance over which the membrane potential is attenuated along the cable. For example, during injection of a constant current in the middle of an infinite cable, the membrane potential decays exponentially along the cable and reaches 37% (= e^{-1}) of its value at a distance from the injection point equal to the space constant. Therefore a doubling of the specific membrane conductance will decrease the space constant by a factor √2. This means that the electrotonic length of a cable (its real length measured in units of the space constant) will increase by the same factor. Thus activation of synaptic conductances will also modify the net effect of currents at a distance and increase the relative electrical separation of points along a cable. To justify these statements we consider again the cable equation,

C_m ∂V/∂t = -G_m V + (G_i d/4) ∂²V/∂x².

We first multiply both sides by R_m. By using the definition of the membrane time constant, τ_m = R_m C_m, and the above definition of the space constant, we can rewrite

τ_m ∂V/∂t = -V + λ² ∂²V/∂x².

It is now easy to see that if V(x, t) is a solution of the cable equation above, then the scaled version Ṽ(x̂, t̂) = V(x̂ λ, t̂ τ_m) is a solution of

∂Ṽ/∂t̂ = -Ṽ + ∂²Ṽ/∂x̂²,

where x̂ = x/λ and t̂ = t/τ_m. Consider two dendritic cables extending over [0, l] and [0, 2l] that have cable constants λ and 2λ, respectively. If current is injected at one end of both cables, then a point x = αl with α ∈ [0, 1] on the first cable will experience the same depolarization as the point x = α · 2l on the second cable. Thus, λ is a measure of the electrical length of the cable.
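The space-constant formula is easy to evaluate numerically. In the sketch below, R_m is the value used in the patch example above, while the diameter d and intracellular resistivity R_i are illustrative values not taken from the notes; the second computation checks the √2 scaling just described:

```python
import math

def space_constant(R_m, R_i, d):
    """Cable space constant lambda = sqrt(R_m * d / (4 * R_i)).
    R_m in ohm*cm^2, R_i in ohm*cm, d in cm; result in cm."""
    return math.sqrt(R_m * d / (4.0 * R_i))

R_m = 50_000.0   # ohm*cm^2 (value used in the patch example)
R_i = 100.0      # ohm*cm  -- illustrative intracellular resistivity
d = 2e-4         # cm (2 um) -- illustrative cable diameter

lam = space_constant(R_m, R_i, d)
# Doubling the membrane conductance amounts to halving R_m:
lam_2 = space_constant(R_m / 2.0, R_i, d)

print(round(lam * 1e4))       # space constant in um
print(round(lam_2 / lam, 4))  # ≈ 0.7071 = 1/sqrt(2)
```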
To see this in a specific example, let us consider a finite cable of extent [0, l] that receives a constant current pulse I_0 at x = 0. We concentrate on the steady-state solution of the cable equation, i.e., the membrane potential V(x, t → ∞) once transients have settled down. This steady-state membrane potential, V_ss(x), satisfies a differential equation obtained from the cable equation by setting the time derivative equal to zero,

λ² d²V_ss/dx² = V_ss   (steady-state cable equation)

with boundary conditions

dV_ss/dx (l) = 0,   dV_ss/dx (0) = -γ,

where γ = I_0 R_i / A_c is the injected current scaled by the intracellular resistivity R_i and the cross-sectional area A_c of the cable. The solution of this equation is obtained from the Ansatz V_ss(x) = C_1 e^{x/λ} + C_2 e^{-x/λ}:

V_ss(x) = V_ss(0) cosh((l - x)/λ) / cosh(l/λ).   (3)

We see directly from this equation that if λ → 2λ and l → 2l, then the same steady-state membrane potential is obtained at x_0 in the first cable and at 2x_0 in the second cable. In the limit of an infinite cable (l → ∞) we obtain

V_ss(x) = V_ss(0) e^{-x/λ}.

Thus, in this limit the cable constant uniquely characterizes the decay of the membrane potential as a function of distance from the current injection point. The approximation l → ∞ is only valid if the length of a dendritic cable is much larger than the cable constant, which is usually not the case. Thus λ gives only an approximate, but intuitively simple, characterization of the electrical length of the cable. All the effects described above (decrease in input resistance, decrease in time constant and increase in electrotonic length) are expected to play a role in synaptic integration. In vivo, neurons receive constant background synaptic activity from the other neurons to which they are connected. Spontaneous activity rates are on the order of 5 spk/s. If each neuron receives 5,000 synaptic inputs (typical cortical neurons receive from 5,000 to 20,000 such inputs), this corresponds to 25,000 activated synapses per second.
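As a numerical sanity check of eq. (3), the sketch below (λ = 1 and l = 2 in arbitrary units) verifies the sealed-end boundary condition dV_ss/dx(l) = 0 by finite differences, the scaling property (λ, l) → (2λ, 2l), and the exponential limit for a long cable:

```python
import math

def v_ss(x, lam, l, v0=1.0):
    """Steady-state potential of a finite cable, eq. (3):
    V_ss(x) = V_ss(0) * cosh((l - x)/lam) / cosh(l/lam)."""
    return v0 * math.cosh((l - x) / lam) / math.cosh(l / lam)

lam, l, h = 1.0, 2.0, 1e-6

# Sealed-end boundary condition: dV/dx vanishes at x = l.
dv_end = (v_ss(l, lam, l) - v_ss(l - h, lam, l)) / h
print(abs(dv_end) < 1e-5)  # True

# Scaling: doubling both lambda and l maps V at x0 onto V at 2*x0.
x0 = 0.7
print(abs(v_ss(x0, lam, l) - v_ss(2 * x0, 2 * lam, 2 * l)) < 1e-12)  # True

# Long cable: the profile approaches V(0) * exp(-x/lambda).
long_l = 50.0
print(abs(v_ss(x0, lam, long_l) - math.exp(-x0)) < 1e-6)  # True
```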
Although each such activation affects the electrotonic properties of a neuron only minimally, the combined effect can be expected to be large. Simulations show that cortical pyramidal cells can be expected to see their membrane time constant decrease by a factor of five due to spontaneous activity (Fig. 4). Similarly, their electrotonic length will increase by a factor of about 2. Similar, but considerably larger, changes are expected during sensory processing, when many more inputs are activated simultaneously. Thus one should not think of a neuron as having static electrical parameters: its electrical properties change dynamically as a function of time, due to spontaneous or evoked activity.

6 Proximal vs. distal inhibition

The relative location of synaptic inputs can also be expected to play a role in their integration. We illustrate this by considering a model consisting of two compartments in series and investigating the interaction between two inputs, one excitatory and the other inhibitory, depending on their location (Fig. 1). The first compartment should be thought of as being distal to the spike initiation zone of the neuron; it could represent the dendritic tree of the neuron, for example. The second compartment should be thought of as being more proximal to the spike initiation zone and could, for example, represent the soma of the neuron. We investigate the effect of inhibition, placed either proximally or distally, on the activation of an excitatory distal input. Anatomical evidence suggests that inhibition is often placed in these two different positions with respect to excitatory inputs.

Proximal inhibition. The circuit diagram is illustrated in Fig. 1. The circuit leads to the following two equations, obtained by applying Kirchhoff's law:

c_p dv_p/dt = g_p (v_r - v_p) + g_i (v_i - v_p) + g_dp (v_d - v_p),
c_d dv_d/dt = g_d (v_r - v_d) + g_e (v_e - v_d) + g_dp (v_p - v_d),

where g_p, g_d and c_p, c_d are the conductances and capacitances of the proximal and distal compartments, g_e, g_i are the conductances of the excitatory and inhibitory synapses, and g_dp is the axial conductance between the two compartments. We now assume that dv_p/dt = dv_d/dt = 0 and that the activated conductances are time-independent. Of course
these steady-state assumptions are not realistic, but they will allow us to solve for the membrane potential in the proximal compartment exactly, and therefore to gain insight into the effect of proximal inhibition on distal excitation. Under these assumptions, the equation system above reduces to a pair of algebraic equations for v_p and v_d:

g_p (v_r - v_p) + g_i (v_i - v_p) + g_dp (v_d - v_p) = 0,
g_d (v_r - v_d) + g_e (v_e - v_d) + g_dp (v_p - v_d) = 0.   (5)

We further assume that v_i = v_r (i.e., inhibition is of the shunting type) and redefine the potentials with respect to rest: v̄_p = v_p - v_r, v̄_d = v_d - v_r, v̄_e = v_e - v_r. The algebraic system of equations (5) may be rewritten as

(g_p + g_i + g_dp) v̄_p - g_dp v̄_d = 0,
-g_dp v̄_p + (g_d + g_e + g_dp) v̄_d = g_e v̄_e.

Setting a = g_p + g_i + g_dp, b = c = -g_dp and d = g_d + g_e + g_dp, this reads A (v̄_p, v̄_d)ᵀ = (0, g_e v̄_e)ᵀ with

A = ( a  b ; c  d ),   A⁻¹ = (1/(ad - b²)) ( d  -b ; -c  a ).

A straightforward calculation yields ad - b² = α + β g_e + γ g_i + g_e g_i, with α = g_d g_p + g_d g_dp + g_p g_dp, β = g_p + g_dp, γ = g_d + g_dp, and we can now solve for v̄_p:

v̄_p = g_e g_dp v̄_e / (α + β g_e + γ g_i + g_e g_i).

The best way to understand the significance of this equation is to plot a specific case (see Appendix A1). We can nevertheless get an understanding of its significance by looking at the behavior of v̄_p when the excitatory conductance is large:

lim_{g_e → ∞} v̄_p = g_dp v̄_e / (g_i + β).   (6)

This equation tells us that, no matter how large the excitatory input is, the inhibitory input can effectively act as a veto on excitation: it is always possible to increase the inhibitory conductance and overcome the effect of excitation.

Distal inhibition. When inhibition is located in the same distal compartment as excitation, Kirchhoff's law yields the following current conservation equations:

c_p dv_p/dt = g_p (v_r - v_p) + g_dp (v_d - v_p),
c_d dv_d/dt = g_d (v_r - v_d) + g_e (v_e - v_d) + g_i (v_i - v_d) + g_dp (v_p - v_d).

We can now rearrange, using the same assumptions as in the proximal inhibition case:

(g_p + g_dp) v̄_p - g_dp v̄_d = 0,
-g_dp v̄_p + (g_d + g_e + g_i + g_dp) v̄_d = g_e v̄_e.

Defining a = g_p + g_dp, b = c = -g_dp, d = g_d + g_e + g_i + g_dp, we obtain ad - b² = α + β (g_e + g_i) and therefore

v̄_p = g_dp g_e v̄_e / (α + β g_e + β g_i).

Once again, this equation is best understood by plotting it (Appendix A1).
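The closed-form expressions above are easy to cross-check by solving the 2×2 steady-state system directly. A minimal sketch; all conductance values are arbitrary illustrative numbers in consistent units (the g_dp = g_p/9 ratio anticipates the appendix):

```python
def vbar_p(g_p, g_d, g_dp, g_e, g_i, vbar_e, proximal=True):
    """Proximal depolarization of the two-compartment model, obtained by
    solving the 2x2 steady-state system [[a, b], [b, d]] [vp, vd]^T =
    [0, g_e * vbar_e]^T by Cramer's rule."""
    if proximal:  # inhibition on the proximal compartment
        a, d = g_p + g_i + g_dp, g_d + g_e + g_dp
    else:         # inhibition on the distal compartment
        a, d = g_p + g_dp, g_d + g_e + g_i + g_dp
    b = -g_dp
    det = a * d - b * b
    return -b * g_e * vbar_e / det

g_p = g_d = 1.0; g_dp = 1.0 / 9.0; vbar_e = 100.0
alpha = g_d * g_p + g_d * g_dp + g_p * g_dp
beta = g_p + g_dp

g_e, g_i = 5.0, 3.0
# closed-form expressions from the text
prox = g_e * g_dp * vbar_e / (alpha + beta * g_e + (g_d + g_dp) * g_i + g_e * g_i)
dist = g_dp * g_e * vbar_e / (alpha + beta * g_e + beta * g_i)
assert abs(vbar_p(g_p, g_d, g_dp, g_e, g_i, vbar_e, True) - prox) < 1e-12
assert abs(vbar_p(g_p, g_d, g_dp, g_e, g_i, vbar_e, False) - dist) < 1e-12

# Veto vs. no veto: for very large g_e, proximal inhibition still caps the
# response at g_dp*vbar_e/(g_i + beta); the distal limit does not involve g_i.
big = 1e9
print(round(vbar_p(g_p, g_d, g_dp, big, g_i, vbar_e, True), 3))   # -> g_dp*vbar_e/(g_i + beta)
print(round(vbar_p(g_p, g_d, g_dp, big, g_i, vbar_e, False), 3))  # -> g_dp*vbar_e/beta
```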
Taking the same limit as in the proximal inhibition case, we obtain

lim_{g_e → ∞} v̄_p = g_dp v̄_e / (g_p + g_dp).

The important difference with eq. (6) is that distal inhibition cannot veto excitation. In other words, an increase of excitation can always overcome distal inhibition, since the limiting value is independent of g_i.

7 Spatio-temporal activation patterns of synaptic inputs

By now we have seen that synaptic inputs interact non-linearly and that these interactions depend on relative spatial location. We have also seen that background synaptic inputs can significantly affect the electrotonic properties of a neuron. What about temporal effects? All our calculations have concentrated on steady-state conductance changes because they are amenable to exact solution, but realistic inputs have both spatial and temporal activation profiles. The first important temporal effect that we describe has to do with the spread of the membrane potential along a dendritic cable following activation of an excitatory synaptic input. As one moves further away from the site of EPSP activation, the peak membrane potential depolarization decreases, due to loss of current through membrane conductances along the cable. The second effect is a spread of the membrane potential depolarization in time. Thus, the farther away one is from the site of activation, the more spread out in time the depolarization is. This is due to low-pass filtering caused by the passive cable. Simulations also show that the spatiotemporal pattern of activation of excitatory synaptic inputs along a dendritic cable can have a significant effect on the depolarization observed at the soma of a neuron. This is illustrated in Fig. 3. In this example, the same excitatory inputs are activated along a dendritic cable, but their sequence of activation differs. The inputs are activated from the dendritic periphery towards the trunk in the first case, and using the opposite sequence in the second case. As one
can see from the figure, this results in distinctly different temporal shapes of the postsynaptic potential at the soma, and in a reduced peak depolarization in the trunk-to-distal case. Such spatiotemporal activation patterns along a dendritic cable are thought to contribute to the direction-selective responses observed in certain retinal ganglion cells.

8 Effect of active conductances on synaptic integration

In the past decades, it has become increasingly clear that the dendrites of neurons are not well approximated as passive cables. A number of voltage-dependent conductances are present in dendrites and could potentially influence the integration of synaptic inputs. One early conductance that was reported to be localized in the dendritic tree of cerebellar Purkinje cells is a transient potassium current called the A-current (I_A). More recently it has become clear that many cortical neurons possess an H-current (I_H) in their dendritic trees, and that the density of these channels increases with distance from the soma. This current is very unusual because it is a mixed Na⁺/K⁺ conductance with a reversal potential above the resting membrane potential (typically between -20 and -40 mV with respect to the extracellular potential). Furthermore, the conductance opens fully when the membrane potential is hyperpolarized, and will therefore tend to bring the membrane potential back towards rest in such cases. The H-current is estimated to be active at about 7-8% of its peak conductance at rest. One important consequence of the higher density of I_H in the dendrites is that the membrane conductance is increased, leading to a shorter time window for summation of excitatory inputs. Drugs that block I_H show increased summation of excitatory inputs, particularly for dendritic EPSPs.

A Appendix

A1 Plot of v̄_p as a function of the excitatory conductance

We plot v̄_p using values proposed by Vu and Krasne in an article investigating the effect of proximal and distal inhibition on the generation of
escape responses in crayfish. We assume v̄_e = 100 mV, g_p = g_d, and R_p/(R_p + R_dp) = 0.1, or equivalently g_dp = g_p/9. From these assumptions we can first compute α = α₀ g_d² with α₀ = 1 + 2/9, β = β₀ g_d with β₀ = 1 + 1/9, and γ = β. In the proximal inhibition case, dividing numerator and denominator by g_d² and setting x = g_e/g_d, y = g_i/g_d, we obtain

v̄_p = (v̄_e x / 9) / (α₀ + β₀ (x + y) + x y).

In the distal inhibition case, using the same conventions, we have

v̄_p = (v̄_e x / 9) / (α₀ + β₀ (x + y)).

Figure Legends

Figure 1: Schematics of a single-patch and of two-compartment electrotonic models with proximal and distal inhibitory synapses, respectively.

Figure 2: Top: summation of two excitatory inputs at steady state in a local patch of membrane. Bottom: divisive effect of inhibition on excitatory postsynaptic responses.

Figure 3: Effect of spatiotemporal sequences of excitatory inputs on a soma-dendrite model (10 compartments, compartment 1 is the soma). An excitatory conductance pulse is activated successively at two locations (top insets) for the duration indicated at the bottom on the abscissa. Membrane depolarization is reported normalized with respect to the rest (E_r) and excitatory reversal potentials. The dotted curve is the response to activation of the same total conductance distributed over compartments 2 through 9 over the same time period (t/τ from 0 to 1). Adapted from Rall, 1964.

Figure 4: Effect of mean background activity ⟨f_b⟩ on input resistance, membrane time constant, electrotonic length and resting potential in a single cable, a passive and an active pyramidal cell model. Adapted from Koch, 1999.

[Figures 1-4]
CAAM 415, 2008, Gabbiani

Instructions: solve the following three exercises.

1 Inhomogeneous Poisson process

Plot 10 spike trains, 1 sec long, from a Poisson process with constant rate ρ = 40 spk/sec, and 10 spike trains, 1 sec long, from an inhomogeneous Poisson process with time-dependent rate

ρ(t) = ρ_mean + ρ_var sin(2π f_m t),

with ρ_mean = 40 spk/sec, ρ_var = 40 spk/sec and f_m = 10 Hz. How many spikes do you expect over a 1 sec interval in both cases? Explain. Plot a histogram of the probability density of ISIs obtained over the 10 one-second-long spike trains in each case. Do you see any differences? Can you explain them? Hints: Use the integration algorithm explained in the lecture notes and forward Euler with a time step of 0.05 ms. Use the hist function to generate a histogram of ISIs with bins of 2 msec centered at 0 ms, 2 ms, etc., up to 250 msec. Normalize to obtain a probability density and use bar to plot the ISIs.

2 Tsodyks-Markram model

(a) Plot the steady-state release probability ⟨P_r⟩_ss and the postsynaptic rate f_post = ρ ⟨P_r⟩_ss of the TM model for τ_P = 500 msec, f_D = 0.4 and rates ρ between 0 and 100 Hz.

(b) There is a typographical error in the legend of Figure 5.18B of Dayan and Abbott. Can you find it and explain?

(c) Reproduce Fig. 5.19 of Dayan and Abbott by integrating the equation for the average instantaneous rate of release given in the lecture notes. Use the same parameters as in the figure (P_r0 = 1, f_D = 0.4, τ_P = 500 ms) and presynaptic rates as given in the figure. Hints: Use forward Euler with a time step of 0.05 ms.
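A sketch of the kind of simulation asked for in exercise 1 (in Python rather than the MATLAB suggested by the hints, and using one standard scheme — a spike in each bin with probability ρ(t)·Δt — which may differ in detail from the algorithm in the lecture notes):

```python
import math, random

def poisson_train(rate_fn, T=1.0, dt=5e-5, seed=0):
    """Draw one spike train on [0, T] from an (in)homogeneous Poisson
    process: in each bin of width dt a spike occurs with probability
    rate_fn(t) * dt (valid as long as rate_fn(t) * dt << 1)."""
    rng = random.Random(seed)
    spikes = []
    t = 0.0
    while t < T:
        if rng.random() < rate_fn(t) * dt:
            spikes.append(t)
        t += dt
    return spikes

const_rate = lambda t: 40.0                                    # spk/sec
mod_rate = lambda t: 40.0 + 40.0 * math.sin(2 * math.pi * 10.0 * t)

n_const = len(poisson_train(const_rate, seed=1))
n_mod = len(poisson_train(mod_rate, seed=2))
# The sine modulation averages to zero over the 10 full cycles in 1 sec,
# so both trains have the same expected count (40 spikes).
print(n_const, n_mod)
```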
3 Integration of synaptic inputs by a leaky integrate-and-fire neuron

Implement the Matlab code for a leaky integrate-and-fire neuron with the following parameters:

1. Resting membrane potential: -70 mV.
2. Time constant: τ = 30 msec.
3. Input resistance: R = 20 MΩ.
4. Spiking threshold: -54 mV. Reset at -70 mV after a spike.
5. Refractory period: t_ref = 2 msec. Implement an absolute refractory period, i.e., the membrane potential stays equal to the rest potential during the refractory period, and integration of inputs starts only after the refractory period is over.

(a) Compute first the theoretical f-I curve from eq. 9.2 of the lecture notes, then compute the f-I curve numerically by simulating the IF model, and compare the two. Hints: Use either forward or backward Euler with a time step of 0.01 msec. For simulations, use injected currents between 1 and 111 nA in steps of 5 nA. Re-calibrate the membrane potential so that 0 mV is the resting membrane potential; this will make computations easier. Be sure to adopt a consistent set of electrical units, for example mV, nA and MΩ.

8 Correlating Spikes and Behavior

Theoretical Neuroscience
Fabrizio Gabbiani
Division of Neuroscience, Baylor College of Medicine
One Baylor Plaza, Houston, TX 77030
e-mail: gabbiani@bcm.tmc.edu

1 Introduction

Studying how sensory perception arises from the encoding and processing of information by nerve cells and neuronal networks is probably one of the most fascinating and challenging aspects of systems neuroscience. The sensory stimuli to which animal species respond, and the behaviors that they elicit, are so diverse that a multitude of approaches and techniques have been devoted to this goal. In this lecture, we will focus on a very restricted set of sensory perception tasks involving the detection of signals embedded in noise. These tasks have been studied at the level of individual human subjects, a field called psychophysics. Many of the methods used in psychophysics are closely related to those originally developed for signal detection in an engineering context.
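A sketch of the simulation asked for in exercise 3 (in Python rather than MATLAB). The theoretical curve below is the usual leaky integrate-and-fire f-I relation with an absolute refractory period, f = 1/(t_ref + τ ln[RI/(RI - V_th)]) for RI > V_th; whether this is exactly eq. 9.2 should be checked against the lecture notes. The voltage is recalibrated so that rest is 0 mV, as the hints suggest, and the test currents are illustrative:

```python
import math

TAU, R, V_TH, T_REF = 30.0, 20.0, 16.0, 2.0  # ms, MOhm, mV (16 mV above rest), ms

def f_theory(i_nA):
    """Firing rate (spk/s) of a leaky IF neuron with absolute refractoriness."""
    ri = R * i_nA                  # MOhm * nA = mV
    if ri <= V_TH:
        return 0.0
    return 1000.0 / (T_REF + TAU * math.log(ri / (ri - V_TH)))

def f_simulated(i_nA, t_total=2000.0, dt=0.01):
    """Forward-Euler simulation with reset to rest and absolute refractoriness."""
    v, t_last_spike, n_spikes = 0.0, -1e9, 0
    for k in range(int(t_total / dt)):
        t = k * dt
        if t - t_last_spike < T_REF:
            v = 0.0                           # clamped at rest while refractory
            continue
        v += dt / TAU * (-v + R * i_nA)       # dv/dt = (-v + R*I)/tau
        if v >= V_TH:
            v = 0.0
            t_last_spike = t
            n_spikes += 1
    return n_spikes / (t_total / 1000.0)      # spk/s

for i in (1.0, 2.0, 4.0):                     # illustrative currents, nA
    print(i, round(f_theory(i)), round(f_simulated(i)))
```

For suprathreshold currents the simulated and theoretical rates agree to within the count-quantization error of the finite simulation window.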
We will see how these methods can also be applied to study perception in animals and to analyze neuronal signals, thus opening a way to relate perception to neuronal processing.

2 Single photon detection in the visual system

The Hecht, Shlaer and Pirenne (HSP) experiment. In 1942, Hecht, Shlaer and Pirenne published a study in which they investigated the threshold of human subjects for detecting brief, weak light flashes. The experimental conditions were carefully optimized to maximize the sensitivity of the human subjects. Prior to the task, subjects were kept in the dark for at least 30 mins to ensure full dark adaptation. The flashes were delivered at a horizontal distance 20 degrees away from the fovea, in a region where the density of rod photoreceptors is high. The area covered by the stimulus (10 minutes of arc) was also optimized to yield the highest sensitivity. Stimuli were presented for 1 msec, and the wavelength of the light stimulus was 510 nm (green), a value at which the eye is known to be most sensitive for dim vision. In the experiments, the energy of the light flash, or equivalently the mean number of photons delivered at the cornea, was varied, and the frequency at which the observers detected the flashes was recorded. The results of the experiments are presented in Fig. 1 for 3 subjects. Typically, the number of photons at the cornea needed to detect 60 percent of the flashes ranged between an average of 54 and 148 light quanta. Based on the data available at the time, Hecht, Shlaer and Pirenne estimated that 4 percent of the light would be reflected by the cornea, 50 percent of the remaining photons would be absorbed by the ocular media before reaching the retina, and 80 percent of the light would pass through the retina without being absorbed by photoreceptors. Thus, only about 9.6 percent of the photons available at the cornea could be responsible for light detection in these experiments. This corresponds to an average of 5-14 light quanta. This number is surprisingly
small, and suggests that absorption of two photons by the same photoreceptor is highly improbable, since the area covered by the light stimulus corresponded to approximately 500 rod photoreceptors (≈ 4 percent probability). Thus, one predicts that rods should be sensitive to single photons, and that the simultaneous absorption of a small number of them leads to conscious sensation. Because the average number of absorbed photons is so small, one expects considerable fluctuations in the number of photons absorbed from trial to trial. Thus it is conceivable that a large fraction of the subject response variability is caused by fluctuations in the absorbed photon number. If we assume that photon absorptions are independent random events of constant probability, we expect their distribution to follow a Poisson distribution, just as the number of photons emitted by the light source and observed at the cornea. Let a be the average number of absorbed photons for a given average flash intensity. Hecht, Shlaer and Pirenne assumed that a = αn, where n is the average number of photons measured at the cornea and α is an attenuation factor related to the optical properties of the eye and retina. Let P(k) denote the probability of k photons being absorbed; then

P(k) = (a^k / k!) e^{-a}.

If a human observer sees the experimental light flash only when a fixed threshold number of photons k_0 is absorbed, we expect a probability of seeing the stimulus given by

P_D(a) = Σ_{k ≥ k_0} (a^k / k!) e^{-a}.   (1)

The curves P_D are plotted as a function of log₁₀ a for various values of k_0 in Fig. 2a. The average number of absorbed photons a for a given average number of corneal photons is of course unknown. If the probability of seeing P_D is plotted as a function of log₁₀ n, the curve becomes identical in shape to that determined by P_D as a function of log₁₀ a, except for a shift along the horizontal axis, since log₁₀ a = log₁₀ n + log₁₀ α. Fitting the appropriate value of k_0 to the experimental data then becomes very easy: it simply amounts to matching the curve's shape to that of the cumulative Poisson distributions of eq. (1).
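Eq. (1) is straightforward to evaluate numerically. The sketch below computes the probability of seeing for a few mean absorbed-photon numbers (the particular values of a and k₀ are illustrative):

```python
import math

def p_seeing(a, k0):
    """P_D(a) = sum_{k >= k0} a^k e^{-a} / k!  (cumulative Poisson),
    computed as one minus the lower tail."""
    lower = sum(a**k * math.exp(-a) / math.factorial(k) for k in range(k0))
    return 1.0 - lower

# The probability of seeing grows with the mean number of absorbed photons a
for a in (2.0, 6.0, 10.0):
    print(round(p_seeing(a, 6), 3))

# and, at fixed a, decreases as the threshold k0 is raised.
assert p_seeing(6.0, 2) > p_seeing(6.0, 6) > p_seeing(6.0, 10)
```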
Thus, the two parameters of the model, k_0 and α, are determined by the slope of the frequency-of-seeing curve and by its shift along the abscissa, respectively. The fits obtained in Fig. 1 for the probability of seeing as a function of the average number of corneal photons match this expectation well, for values of k_0 between 5 and 7.

Barlow's dark light hypothesis. The HSP experiment suggests that most of the variability in the observers' responses is due to noise in the physical stimulus, rather than to biological noise. As pointed out a decade later, the experimental design and its interpretation have, however, several shortcomings: (1) If rods are indeed sensitive to single photons, why would observers not be as well, given that biological noise is assumed to be nonexistent? (2) The HSP experiment is by itself somewhat ambiguous: an observer could always lower his threshold and thus give the appearance of "seeing" better. Barlow proposed a solution to these two problems by interpreting the results differently and by proposing a modified model of photon absorption. Following Hecht, Shlaer and Pirenne, he proposed that rods are sensitive to single photons, but he postulated that several rods must be activated simultaneously when a weak flash is detected, to overcome biological noise. One plausible source of noise is the random spontaneous decay of the rod photopigment (rhodopsin) in the absence of light. This decay would give the illusion of photon arrival, and thus the registration of a single photon would in turn be unreliable as a signal of the presence of weak light flashes. Other sources of noise might result from central nervous system processing and were lumped together with spontaneous rhodopsin decay in the Barlow model. Let us assume that in the absence of light the mean number of absorbed photons ("dark light") is x and follows a Poisson distribution. When presented with "blank" trials where no flash occurs, an observer is expected to report a light flash even
if none occurred in a fraction of the trials, because of the noise. If we call P_FA the probability of such "false alarms," it is given by

P_FA = Σ_{k ≥ k_0} (x^k / k!) e^{-x}.   (2)

It depends both on the amount of noise and on the detection threshold k_0 of the observer. In the presence of a light flash, the mean number of absorbed photons will be due both to absorption related to the light flash, αn, and to the noise, x. If both processes follow independent Poisson distributions, their sum is also Poisson, with mean a = αn + x. Thus,

P_D = Σ_{k ≥ k_0} ((αn + x)^k / k!) e^{-(αn + x)}.   (3)

The model has three parameters, instead of two in the HSP formulation: the threshold level k_0, the fraction of absorbed photons α, and the "dark light" level x. Fitting the model to the data now becomes more complex, because the parameters cannot be simply interpreted geometrically. The additional parameter can be fit to the data by using the false-alarm rate obtained from presenting "blank" trials. As illustrated in Fig. 2, the model offers good fits to the HSP data. Typically, the fraction of absorbed photons α is predicted to be higher in the presence of noise (x ≠ 0); this is consistent with later estimates of the probability of photon absorption, predicted to be higher (≈ 20%) than at the time of the HSP experiment. Furthermore, by encouraging subjects to report less probable stimuli, the threshold is observed to decrease in parallel with an increase in the probability of false alarms. This is in agreement with point (2) above, and emphasizes the need to monitor thresholds with independent data.

Detection of light in dark-adapted retinal ganglion cells. How does the performance of neurons in detecting weak light flashes compare with the observers' performance? Since retinal ganglion cells are the first spiking neurons that convey information to the central nervous system, it is natural to investigate their responses to such weak light flashes. The experiments were performed in the cat, using ON-center retinal ganglion cells, and a representative experimental result is illustrated in Fig. 3.
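Eqs. (2) and (3) can be evaluated with the same cumulative-Poisson routine. The sketch below uses the parameter values α = 0.18 and x = 6.5 quoted further down for the fit in Fig. 3C, together with the 5-photon flash of the ganglion-cell experiment described next:

```python
import math

def poisson_tail(mean, k0):
    """P(K >= k0) for K ~ Poisson(mean)."""
    return 1.0 - sum(mean**k * math.exp(-mean) / math.factorial(k)
                     for k in range(k0))

alpha, x = 0.18, 6.5  # fitted absorption fraction and dark-light level
n = 5.0               # mean number of corneal photons in the flash

for k0 in (1, 2, 3):
    p_fa = poisson_tail(x, k0)              # eq. (2): blank trials
    p_d = poisson_tail(alpha * n + x, k0)   # eq. (3): flash trials
    print(k0, round(p_fa, 3), round(p_d, 3))
```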
The stimulus consisted either of a weak light flash (5 photons on average, of 10 ms duration) or of a "blank" trial. The spiking response of the retinal ganglion cell was recorded during a time window of 200 ms starting at flash onset. In the absence of light the cell was spontaneously active, with an average of 4.14 spk, whereas in the presence of light the mean spike count was increased to 6.62. Does the distribution of spike counts match the Barlow model described above? If this were the case, one would expect the spike counts to be Poisson distributed, both for the spontaneous and the evoked response, with a difference in means equal to the mean number of absorbed photons, Δm = αn, and a difference in variance Δσ² = αn, so that the Fano factor Δσ²/Δm = 1. However, the experimentally measured difference in variance is usually larger than that expected from a Poisson distribution. Let us assume that for each absorbed photon an average of A spikes is produced. Then Δm = A αn and Δσ² = A² αn, so that Δσ²/Δm = A. The variance in the evoked spike count distributions is consistent with the assumption that between 2 and 3 spikes are fired in response to each absorbed photon, i.e., 2 ≤ A ≤ 3. Thus the response of retinal ganglion cells is consistent with a process of amplification of the absorbed photons at low light levels. The performance of retinal ganglion cells at detecting light can be assessed by choosing a fixed threshold spike count N_thres and computing the corresponding probability of detecting the light flash in the above experiment. In a trial that consists, with equal probability, of a light flash or a "blank," the observer will report that a light flash occurred if N_thres or more spikes are counted. Otherwise, the observer reports that no flash occurred ("blank" trial). As pointed out above, the probability of detection, P_D = P(n ≥ N_thres | flash), will of course depend on the selected threshold: decreasing N_thres leads to higher probabilities of detection. This is, however, offset by an increase in the probability of false alarms.
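An ROC curve of the kind shown in Fig. 3C can be traced numerically. Purely for illustration, the sketch below treats both spike-count distributions as Poisson with the means quoted above (4.14 and 6.62), even though, as just discussed, the evoked counts are in fact over-dispersed:

```python
import math

def count_tail(mean, n_thres):
    """P(count >= n_thres) for a Poisson-distributed spike count."""
    return 1.0 - sum(mean**k * math.exp(-mean) / math.factorial(k)
                     for k in range(n_thres))

m_blank, m_flash = 4.14, 6.62  # mean counts without / with the flash

roc = [(count_tail(m_blank, n), count_tail(m_flash, n)) for n in range(16)]
for p_fa, p_d in roc:
    # every operating point lies on or above the diagonal: P_D >= P_FA
    assert p_d >= p_fa
print(roc[5])  # the (P_FA, P_D) operating point at threshold N_thres = 5
```

Sweeping the threshold from 0 upwards traces the curve from (1, 1) down towards (0, 0), exactly as the labelled dots in the figure do.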
The probability of false alarms, P_FA = P(n ≥ N_thres | blank), is the probability of reporting a flash in a "blank" trial. A plot of P_D as a function of P_FA is called an ROC curve (Receiver Operating Characteristic, a term originating from early applications to radar during WWII). Such a plot is illustrated in Fig. 3C for the retinal ganglion cell of Fig. 3A, B. Each labelled dot (1, 2, 3, ...) represents the performance for a spike count threshold N_thres = 1, 2, 3, ... Plotting P_D as a function of P_FA, instead of using directly the threshold N_thres, is a better representation of the data, because it fully characterizes the performance of the observer and is independent of the particular way in which the classification decision was made. This allows a comparison of performance with that of an observer based on the Barlow model: the Roman numerals correspond to the (P_FA, P_D) values obtained from the Barlow model with parameters α = 0.18, x = 6.5, at detection thresholds of 1, 2 and 3 absorbed quanta, respectively. Because the Barlow model describes accurately the psychophysical performance of human observers, this suggests that the performance of single retinal ganglion cells is comparable to that of humans. This conclusion is based on the assumption that cats would report light flash occurrences in a similar manner to humans or, vice versa, that human retinal ganglion cells respond like cat retinal ganglion cells to light flashes.

Single photon sensitivity in rods. Do rods really respond to single light quanta? This question was finally answered at the beginning of the eighties, when Dennis Baylor and colleagues developed a technique that allowed them to record the responses of single rods, isolated from the retina of salamanders, to weak flashes of light. Their results unambiguously demonstrated responses to single photons, thus verifying the claim that dark-adapted rod photoreceptors are highly sensitive detection devices, 40 years after the original HSP experiment.

3 Signal detection theory and psychophysics

Psychophysics is the subfield of psychology
devoted to the study of physical stimuli and their interaction with sensory systems. Psychophysical tasks using weak visual stimuli, or stimuli embedded in noise, have been extensively used to draw conclusions on how sensory information is processed by the visual system. These tasks can be analyzed using methods originally developed in the context of engineering for the detection of weak physical signals. In this section, we introduce the formal framework used to describe and analyze the experiments reported in the previous section. This will allow us to show that discrimination performed in terms of a threshold on spike number, or on the number of absorbed photons as in sect. 2, is ideal under certain circumstances. We also introduce a second type of psychophysical task, different from that considered in the previous section. This task, the 2-alternative forced-choice task, will give us a better understanding of the significance of ROC curves.

Yes/no rating experiments. Experiments like those described in sect. 2 are called yes/no rating experiments. In these experiments, one of two stimuli, $s_0$ and $s_1$, is randomly presented with equal probability. An observer is to report after each stimulus presentation which one of $s_0$ or $s_1$ was presented. In a typical situation, $s_0$ is noise and $s_1$ corresponds to a signal presented simultaneously with the noise (signal plus noise). In sect. 2, the noise condition would correspond to the blank stimulus and the signal plus noise condition to the flash stimulus. The responses are denoted by $r = 0$ or $1$, depending on whether noise or signal plus noise is chosen by the observer.

Two-alternative forced-choice (2AFC) experiments. A 2-alternative forced-choice experiment is one in which the subject is required to respond only after two successive stimulus presentations. Both $s_0$ and $s_1$ are presented exactly once, with equal probability, in the two presentation intervals. After the second interval, the subject is asked to report in which interval $s_1$ (signal plus noise) was
presented. In the flash detection experiments described above, this corresponds to presenting the blank stimulus in one interval and the flash in the other, and subsequently asking the subject to report in which of the two intervals the flash appeared. In this case, responses $r = 0$ or $1$ indicate the first or second interval, respectively.

Correct detection and false-alarm probabilities. In a yes/no rating experiment, the probability of correct detection, $P_D$, is the probability of reporting the signal when it was indeed present, i.e., $P_D = P(r = 1 | s_1)$, and the probability of false alarm is the probability of incorrectly reporting the signal when it was absent, i.e., $P_{FA} = P(r = 1 | s_0)$. The total error rate of the observer is given by averaging both types of errors by their probability of occurrence,

$$\epsilon = \frac{1}{2} P_{FA} + \frac{1}{2}\left(1 - P_D\right).$$

In a 2AFC experiment, we define $P_C$ as the probability of a correct response, i.e., $r = 0$ when $s_1$ was presented in the first interval and $r = 1$ when $s_1$ was presented in the second interval.

Psychometric curves. When the strength of the signal is continuously varied over a range of values, a plot of the detection probability as a function of signal strength is called a psychometric curve. Such psychometric curves can be computed either for yes/no rating or 2AFC experiments. It is usual to define a detection threshold from a psychometric curve, to be able to compare the responses of subjects across different conditions. Typically, detection thresholds are defined as 50% correct performance for yes/no rating experiments and 75% correct performance for 2AFC experiments. These definitions are somewhat arbitrary, and some authors define detection thresholds using different values, such as 68% correct performance for 2AFCs.

ROC curves. For a yes/no rating experiment, the ROC curve is a plot of $P_D$ as a function of $P_{FA}$ for a fixed signal strength. In psychophysical experiments, ROC curves are often plotted for a signal strength equal to the psychophysical threshold. As explained below, such ROC curves fully characterize
the performance of the observer for a fixed set of physical stimulus conditions.

Statistical distribution of stimuli or responses. When a psychophysical detection experiment is carried out, one typically has access either to the probability distributions of the "noise" and "signal plus noise", or to some physiological variable such as the number of spikes fired by a neuron in response to "noise" and "signal plus noise". The question then arises as to how that information can be used to "optimally" decide which of the two stimuli was presented. As we will see later on, this question can be given a precise answer if we specify what "optimal" means. For concreteness, we will start by considering two examples. The first one is closely related to the model used in sect. 2 to describe photon absorption by rods. The second example is formulated in terms of stimulus noise properties instead of spike count responses. It will play an important role in the next section, in a slightly different context.

Example 1. We assume that our response variable is Poisson distributed with mean values $m_0$ and $m_1$ ($m_1 > m_0$) for stimuli $s_0$ and $s_1$. The response variable could, for example, represent the distributions of spikes generated by a neuron in response to the two stimuli. If $m_0 = x$ and $m_1 = \alpha m + x$, we obtain the distributions of activated rods for the "blank" and flash stimuli in Barlow's model. Thus,

$$P(k|s_0) = e^{-m_0}\frac{m_0^k}{k!} \quad \text{and} \quad P(k|s_1) = e^{-m_1}\frac{m_1^k}{k!}. \qquad (4)$$

Let us assume that $k$ spikes have been observed. The probabilities $P(k|s_{0,1})$ can then be thought of as the likelihoods of this observation under conditions $s_{0,1}$, respectively. Thus, a natural quantity to consider is the likelihood ratio,

$$\Lambda(k) = \frac{P(k|s_1)}{P(k|s_0)} = \left(\frac{m_1}{m_0}\right)^k e^{-(m_1 - m_0)}.$$

The ratio $\Lambda(k)$ will be large when $k$ is much more likely to originate from $s_1$ than from $s_0$, and vice versa. Thus, a plausible decision rule is to opt for $s_1$ when $\Lambda(k)$ exceeds a threshold $\eta$, i.e.,

$$\Lambda(k) \geq \eta \Rightarrow s_1, \qquad \Lambda(k) < \eta \Rightarrow s_0.$$

Equivalently, one may consider the threshold $\log\eta$ on the log-likelihood ratio $\log\Lambda$, since the logarithm is
monotone increasing. Because

$$\log\Lambda(k) = k\left(\log m_1 - \log m_0\right) - \left(m_1 - m_0\right),$$

this decision rule is equivalent to imposing a threshold on the number of spikes:

$$k \geq k_{th} = \frac{\log\eta + m_1 - m_0}{\log m_1 - \log m_0} \Rightarrow s_1, \qquad k < k_{th} \Rightarrow s_0.$$

The probabilities of correct detection and false alarm are given, as in sect. 2, by

$$P_D = \sum_{k \geq k_{th}} e^{-m_1}\frac{m_1^k}{k!} \quad \text{and} \quad P_{FA} = \sum_{k \geq k_{th}} e^{-m_0}\frac{m_0^k}{k!}. \qquad (5)$$

Thus, fixing a threshold $k_0$ gives a probability of false alarm $P_{FA0}$ and a corresponding probability of correct detection $P_{D0}$, as determined by eqs. (5). If $k_1 = k_0 + 1$, then $P_{FA1} < P_{FA0}$ and $P_{D1} < P_{D0}$. What if we would like to obtain a probability of correct detection between $P_{D1}$ and $P_{D0}$, say $(P_{D1} + P_{D0})/2$? This may be achieved by the following strategy: if $k \geq k_0 + 1$, choose $s_1$, and if $k < k_0$, choose $s_0$. If $k = k_0$, choose $s_0$ with probability $1/2$ and $s_1$ with probability $1/2$. This corresponds to using the decision rule determined by $k_0$ and the one determined by $k_0 + 1$, each with probability $1/2$, and yields a probability of correct detection that is the average of those two decision rules, i.e., $(P_{D1} + P_{D0})/2$. Such a decision rule is called a randomized decision rule. Although this may seem rather artificial at this point, we will see later on how this example helps understand a fundamental result on optimal decision rules. The ROC curves for such decision rules are plotted in Fig. 5.

Example 2. Let us consider a slightly different task than that considered in the previous section. A dot of fixed contrast $c_0$ is presented on a background whose value is drawn from a Gaussian distribution with zero mean and standard deviation $\sigma_n$. The task is to detect the presence or absence of the dot. The distribution of the contrast "noise" $s_0 = n$ has zero mean and standard deviation $\sigma_n$,

$$p(c|s_0) = \frac{1}{\sqrt{2\pi}\,\sigma_n}\, e^{-c^2/2\sigma_n^2}.$$

Signal and noise are assumed to add independently, yielding the distribution of $s_1 = c_0 + n$ from the value of $c_0$ and the distribution of $n$,

$$p(c|s_1) = \frac{1}{\sqrt{2\pi}\,\sigma_n}\, e^{-(c - c_0)^2/2\sigma_n^2}.$$

For a given observed value of the contrast $c$, we compute the log-likelihood ratio,

$$\log\Lambda(c) = \log\frac{p(c|s_1)}{p(c|s_0)} = \frac{c_0 c}{\sigma_n^2} - \frac{c_0^2}{2\sigma_n^2}.$$

As in the previous example, the
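Returning to example 1, the spike-count thresholds and the randomized rule can be sketched numerically. The following is a minimal sketch; the means $m_0 = 6.5$ and $m_1 = 10$ are illustrative values, not taken from the data. It tabulates the $(P_{FA}, P_D)$ pairs of eqs. (5) for integer thresholds and the intermediate operating point obtained by mixing two neighboring thresholds with probability $1/2$.

```python
import math

def poisson_tail(mean, kth):
    """P(K >= kth) for a Poisson random variable with the given mean."""
    term, cdf = math.exp(-mean), 0.0
    for k in range(kth):
        cdf += term
        term *= mean / (k + 1)   # recurrence: P(k+1) = P(k) * mean / (k+1)
    return 1.0 - cdf

# Hypothetical means for the "blank" (s0) and "flash" (s1) conditions
m0, m1 = 6.5, 10.0

# One ROC point (P_FA, P_D) per integer spike-count threshold k_th, eqs. (5)
roc = [(poisson_tail(m0, k), poisson_tail(m1, k)) for k in range(25)]

# Randomized rule: use thresholds k0 and k0 + 1, each with probability 1/2;
# the resulting operating point is the midpoint of the two deterministic ones.
k0 = 10
pfa = 0.5 * (poisson_tail(m0, k0) + poisson_tail(m0, k0 + 1))
pd = 0.5 * (poisson_tail(m1, k0) + poisson_tail(m1, k0 + 1))
```

Sweeping the mixing probability from 0 to 1 traces out the straight segment between the two neighboring ROC dots, which is exactly the linear interpolation shown in Fig. 5 (right).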
log-likelihood ratio depends linearly on contrast; using a threshold value $\log\eta$ is therefore equivalent to setting a threshold on contrast to decide between $s_0$ and $s_1$. For convenience, we formulate the decision rule in terms of the normalized contrast $l = c/\sigma_n$ and the normalized distance $d = c_0/\sigma_n$ between the means of the two distributions $p(c|s_0)$ and $p(c|s_1)$:

$$l \geq \frac{\log\eta}{d} + \frac{d}{2} \Rightarrow s_1, \qquad l < \frac{\log\eta}{d} + \frac{d}{2} \Rightarrow s_0.$$

Because $l \sim N(0, 1)$ for $s_0$ and $l \sim N(d, 1)$ for $s_1$, the probabilities of false alarm and correct detection are given by

$$P_{FA} = \int_{\log\eta/d + d/2}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-l^2/2}\, dl = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{\log\eta/d + d/2}{\sqrt{2}}\right),$$

$$P_D = \int_{\log\eta/d + d/2}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-(l - d)^2/2}\, dl = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{\log\eta/d - d/2}{\sqrt{2}}\right),$$

where $\mathrm{erfc}(x) = \frac{2}{\sqrt{\pi}}\int_x^\infty e^{-z^2}\, dz$ is the complementary error function. Thus, both $P_D$ and $P_{FA}$ depend on $d$ in this example. ROC curves are plotted for different values of $d$ in Fig. 5.

Ideal decision rules (ideal observers). We are now ready to define more precisely the decision rules introduced above and to state the basic result asserting that optimal (ideal) decisions are always based on the likelihood ratio. Let $X$ be the set of values that can be taken under $s_0$ and $s_1$, irrespective of whether stimulus 0 or 1 is presented. In the first example above, $X = \mathbb{N}$ (integers), and in example 2, $X = \mathbb{R}$ (real numbers). In the context of a yes/no rating experiment, a decision rule (or, equivalently, a test) is a map $\phi: X \to \{0, 1\}$ assigning to each possible observation $x \in X$ either stimulus $s_0$ or stimulus $s_1$. In the context of a 2AFC experiment, a decision rule is a map $\phi: X \times X \to \{0, 1\}$ that assigns to each pair of responses $(x_1, x_2)$ a number 0 or 1, corresponding to the interval (first or second) in which the "signal plus noise" appeared. There are many ways of defining ideal or optimal decision rules, depending on the optimality criterion chosen. We focus on the Neyman-Pearson and minimum error criteria. In the context of a yes/no rating experiment, a Neyman-Pearson ideal observer is one that maximizes the probability of detection $P_D$ for a fixed value, say $\alpha$, of the probability of false alarm $P_{FA}$. Such a decision rule is called a most powerful test of
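The closed-form expressions for $P_{FA}$ and $P_D$ in example 2 are easy to evaluate. A small sketch (the discriminability $d = 1$ is an arbitrary illustration, not a value from the text):

```python
import math

def roc_point(d, log_eta):
    """(P_FA, P_D) for two unit-variance Gaussians a normalized
    distance d apart, at log-likelihood threshold log_eta."""
    t = log_eta / d + d / 2.0            # threshold on normalized contrast l
    pfa = 0.5 * math.erfc(t / math.sqrt(2.0))
    pd = 0.5 * math.erfc((t - d) / math.sqrt(2.0))
    return pfa, pd

d = 1.0                                  # illustrative discriminability
pfa, pd = roc_point(d, 0.0)              # eta = 1: the minimum-error threshold
```

At $\eta = 1$ the contrast threshold sits halfway between the two means, so $P_D = 1 - P_{FA}$ and the operating point lies on the minor diagonal of the ROC plot; sweeping $\log\eta$ traces out the full ROC curve for that $d$.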
size $\alpha$. In the context of both yes/no rating and 2AFC experiments, a minimum error ideal observer is one that minimizes the probability of error or, equivalently, maximizes the probability of correct decisions $P_C$. The fundamental result is the following.

Neyman-Pearson Lemma. Let $P_0$ and $P_1$ be two probability distributions with densities $p_0$ and $p_1$ corresponding to two conditions $s_0$ and $s_1$. A test of the form

$$\phi(x) = \begin{cases} 1 & \text{if } p_1(x) > k\,p_0(x), \\ \gamma & \text{if } p_1(x) = k\,p_0(x), \\ 0 & \text{if } p_1(x) < k\,p_0(x), \end{cases}$$

for some threshold $k \geq 0$ and a number $0 \leq \gamma \leq 1$, is the most powerful test of size $\alpha > 0$. When $\phi(x) = 0$ choose $s_0$, and when $\phi(x) = 1$ choose $s_1$. If $\phi(x) = \gamma$, flip a "$\gamma$-coin" and choose $s_1$ with probability $\gamma$, the probability that the coin turns up heads. The test defined above is essentially unique, up to changes on a subset of values $x \in X$ with zero probability of occurrence. The test may also be formulated in terms of the likelihood ratio, i.e.,

$$\phi(x) = \begin{cases} 1 & \text{if } \Lambda(x) > k, \\ \gamma & \text{if } \Lambda(x) = k, \\ 0 & \text{if } \Lambda(x) < k. \end{cases}$$

In most cases, the probability that $\Lambda(x) = k$ is effectively zero. In example 2 above, for instance, both probability densities are Gaussians, and thus probabilities are only non-zero over intervals of finite length. In such cases, the threshold $k$ is determined by

$$\alpha = \int_k^\infty p_\Lambda(z|s_0)\, dz, \qquad (6)$$

where $p_\Lambda(z|s_0)$ is the probability distribution of the likelihood ratio when $s_0$ is in effect. The probability of correct detection is similarly given by

$$P_D = \int_k^\infty p_\Lambda(z|s_1)\, dz.$$

In the case of example 1, the probability of false alarm $\alpha$ may lie between two values $\alpha_0$ and $\alpha_1$ determined by discrete thresholds $k_0$ and $k_1$. When this occurs, one sets $k = k_1$ and a randomized test is needed.

Minimum error test. Assume that $s_0$ and $s_1$ are presented with equal probability $1/2$. The minimum error test is a likelihood ratio test with threshold $k = 1$. Alternatively, the minimum error test can be determined from the ROC curve by computing $\frac{1}{2}P_{FA} + \frac{1}{2}(1 - P_D)$ as a function of $P_{FA}$ and selecting the minimum value.

Minimum error in a 2AFC experiment. If the observer's response is not biased towards one of the two presentation intervals, the minimum
error test in a 2AFC experiment is to compare the likelihood ratios of the two presentations $x_1$, $x_2$ and select the response corresponding to the presentation interval with the higher likelihood ratio:

$$\frac{p(x_1|s_1)}{p(x_1|s_0)} > \frac{p(x_2|s_1)}{p(x_2|s_0)} \Rightarrow r = 0, \qquad \frac{p(x_1|s_1)}{p(x_1|s_0)} < \frac{p(x_2|s_1)}{p(x_2|s_0)} \Rightarrow r = 1.$$

Note that in this case, no threshold is needed. This can be understood intuitively from the fact that one presentation interval effectively serves as the threshold for the other one.

Properties of ROC curves. We state without proof some of the most important geometric properties of ROC curves.

Convexity of ROC curves. The fact that ROC curves are convex follows by an argument similar to that used in example 1. If we have two points (tests) $(P_{FA1}, P_{D1})$ and $(P_{FA2}, P_{D2})$ on an ROC curve, the randomized tests built as linear combinations of these two tests yield a straight line connecting the two points. The most powerful tests of the Neyman-Pearson lemma have to perform at least as well as the randomized tests, i.e., they have to lie above the straight line connecting $(P_{FA1}, P_{D1})$ and $(P_{FA2}, P_{D2})$. By definition, this means that an ROC curve is convex.

Slope of ROC curves. The slope of an ROC curve is the threshold value of the corresponding Neyman-Pearson test. This means that

$$\frac{dP_D}{dP_{FA}}(\alpha) = k_\alpha,$$

where $k_\alpha$ is determined by eq. (6).

Area under an ROC curve. The area under an ROC curve for a yes/no rating task equals the expected ideal observer performance in the corresponding 2AFC task. The area under an ROC curve is thus often used as a measure of discrimination performance, since it is independent of the chosen threshold and since it predicts performance in the corresponding 2AFC task.

4 Motion detection by MT neurons and psychophysical performance

We now present a series of electrophysiological and psychophysical experiments aimed at better understanding the relation between the activity of single neurons and perception, in the context of a motion detection task. These experiments are similar in spirit to the ones described in sect. 2. An important
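The equality between the area under the yes/no ROC curve and ideal 2AFC performance can be checked by simulation. A minimal sketch, using the Gaussian model of example 2 with an assumed distance $d$ (for that model the area under the ROC curve has the closed form $\Phi(d/\sqrt{2})$, where $\Phi$ is the standard normal cumulative distribution):

```python
import math
import random

random.seed(0)
d = 1.5        # assumed normalized distance between the two distributions
n = 200_000   # number of simulated 2AFC trials

# Ideal 2AFC observer: draw one sample from the noise interval, N(0, 1),
# and one from the signal interval, N(d, 1). Since the likelihood ratio is
# monotone in the observation, choosing the interval with the larger sample
# maximizes the probability of a correct response (no threshold needed).
correct = sum(random.gauss(d, 1.0) > random.gauss(0.0, 1.0)
              for _ in range(n)) / n

# Closed-form area under the ROC curve: Phi(d / sqrt(2)) = erfc(-d/2) / 2
auc = 0.5 * math.erfc(-d / 2.0)
```

Up to sampling noise, `correct` matches `auc`, illustrating why the area under the ROC curve is reported as a threshold-free measure of discrimination performance.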
difference is that both electrophysiological recordings and behavioral measurements were performed simultaneously in trained, awake, behaving monkeys, thus allowing a direct comparison of single neuron responses with behavior.

Experimental configuration. Monkeys were trained to perform a rating task in which dots moved within a circular window on a video screen (Fig. 6). A fraction of the dots was updated from frame to frame in such a way as to move coherently in a specified direction, while the remaining dots were updated randomly. The stimulus was presented for a period of 2 sec, and the animal was to report the direction of motion of the dots by making an eye movement towards one of two lights at the end of the trial. By changing the level of coherent motion, the difficulty of the task could be varied from easy (100% coherence) to difficult (close to 0% coherence, i.e., random motion of the dots in any possible direction). In most experiments, the activity of neurons in the middle temporal area (MT) was recorded simultaneously during the task. MT neurons receive inputs from V1, and most of them (about 90%) are directionally selective. Their responses are thought to be well described by the motion energy model described earlier. The receptive fields of MT neurons are typically considerably larger than those of V1 neurons, suggesting a convergence of information from V1 cells with different receptive fields. During the recordings, the direction of preferred motion of the recorded cell was first determined, and the stimulus was displayed in a circular region optimally covering the cell's receptive field. The direction of dot motion was matched to the preferred or anti-preferred direction of the cell, to maximize the likelihood that the recorded cell contributed to the motion detection task.

Average neuronal and behavioral performance. Figure 7a illustrates the responses of an MT neuron during the presentation of stimuli of increasing coherence. Responses are typically variable, both for the preferred and anti-preferred
motion directions. The number of spikes per trial is well fitted by a Gaussian distribution, a situation similar to that of example 2 in sect. 3. As the coherence of dot motion is increased, the two distributions of spike counts become better separated, thus conveying more information on the presence of preferred vs. anti-preferred motion stimuli. The animal's performance in detecting the preferred motion direction during the task is illustrated in Fig. 7b (open dots and dashed line). In this experiment, correct performance reached the 82% threshold at 6.1% motion coherence. To compare the neuron's performance with the observer's performance, the authors computed the area under the ROC curve obtained from the spike number distributions for preferred vs. anti-preferred motion at each coherence level. These data are plotted as filled dots connected by a solid line in Fig. 7b. The neuron's performance reaches 82% at a coherence level of 4.4% and is thus performing the task better than the monkey observer! This result is similar to that reported in sect. 2 for retinal ganglion cells. Typically, the neuronal performance determined by this method was within a factor of 2 of the psychophysical performance for 76% of the neurons. As we have seen in sect. 3, the area under the ROC curve is a measure of correct performance for a 2AFC task rather than a rating task. Thus, the neurometric curve plotted in Fig. 7b should ideally be compared with the psychophysical performance in the corresponding 2AFC task. It is, however, considerably more difficult to train animals to do 2AFC tasks rather than rating tasks. Another interpretation is to assume that during a single rating trial, neuronal decisions are based on two neurons with opposite preferred motion directions, but otherwise identical spike number distributions in response to the motion signals. If this were the case, listening to these two neurons and basing a decision on differences in their spike numbers would be sufficient to reproduce or exceed the animal's
performance.

Lesion and microstimulation studies. The visual cortex consists of a large number of areas besides V1 and MT, and neurons sensitive to motion stimuli are found in many of these areas. Thus, it is entirely possible that the correlation between average neuronal performance and behavior described above is not due to a causal relation. An alternative possibility is that behavior is determined in another brain area and that MT neurons merely reflect the outcome of computations carried out in that area. Two methods can be used to make this alternative unlikely. The first one consists in making a brain lesion restricted to area MT and measuring the behavioral performance of the animal before and after the lesion. Such an experiment is illustrated in Fig. 8. Fig. 8A shows the threshold for 82% correct performance in the motion coherence task before and immediately after the lesion to MT, for a range of dot motion speeds. The threshold is increased by about a factor of 10 over a large range of speeds. Fig. 8B shows that this effect is specific: if the motion stimulus is presented in the opposite half of the visual field, motion information will be represented in the MT area located on the opposite side of the brain. Since that area was not lesioned, one would expect unchanged performance if the effect of the lesion were specific to the lesioned area. This is indeed the case. This experiment shows that area MT is necessary to perform the psychophysical task: although decisions may not be taken in MT, they are based on information computed there. Another experiment consists in electrically stimulating neurons in area MT while the monkey is performing the psychophysical task. In such an experiment, a cell is recorded in MT, its preferred motion direction is determined, and the stimulus direction of motion is adjusted to match the preferred direction of the cell. During the task, a small electrical current is passed on a random subset of the trials. Such an electrical current is expected to excite
neurons around the recording electrode. Since neighboring neurons have similar response properties, this manipulation is expected to increase the activity in a population of cells with the same motion direction preference as the recorded cell. Fig. 9 A and B show two examples of such microstimulation experiments. In both cases, the psychometric curve during microstimulation is shifted towards lower positive coherence levels. Thus, during microstimulation a higher fraction of correct responses is obtained even when the coherence is lower. This suggests that microstimulation can replace coherence as a factor contributing to increased performance. In Fig. 9A, the effect of microstimulation on psychophysical thresholds is equivalent to adding 7.7% coherence to the dots, while in B the effect was even higher (20.1% coherence). This experiment suggests that altering the response properties of a small pool of cells (probably on the order of hundreds of cells) influences motion perception. Thus, the psychophysical decisions are expected to be taken using the signals from this pool of neurons.

Trial-by-trial correlation of neuronal responses and behavior. The analysis presented in Fig. 7 compares the average performance across trials of single neurons with animal performance. It does not, however, address the question of whether single neuron responses covaried with behavioral responses on a trial-by-trial basis. In other words, at a fixed level of coherence, such as 3.2% in Fig. 8a, the animal makes a sizable number of mistakes when the stimulus is moving in the preferred direction of the cell. If one breaks down the distribution of firing rates to that stimulus (stacked histogram bars) according to the animal's responses, is there a correlation between responses and firing rate? Such an analysis is presented in Fig. 10. As may be seen from the figure, at each of the levels of coherence considered, the distribution of spikes for preferred-direction decisions is slightly shifted towards the right as
compared to the distribution of spikes for anti-preferred motion direction decisions. However, in the 6.4% coherence case, the separation between the two distributions seems considerably smaller than that for 3.2% coherence in Fig. 8a. In these three cases, the area under the ROC curve is given by 0.57, 0.75 and 0.78, respectively. Across neurons, the area under the ROC curve for such distributions is equal to 0.56 on average, and thus the correlation between single neuron responses and behavior on a trial-by-trial basis is weak. Thus, no single neuron appears to have a large impact on a single decision, in spite of the fact that many of them are on average as reliable as the animal. This suggests that psychophysical decisions are based on pooling information from hundreds of neurons, rather than relying on the neurons that are most reliable for the task.

Figure Legends

Figure 1. Left: Cumulative Poisson distributions for count thresholds from 1 to 9. Right: Psychometric curves for 3 observers, fitted with the cumulative Poisson distributions on the left. Adapted from Hecht, Shlaer and Pirenne, J Gen Physiol 25:819-840, 1942.

Figure 2. Left: Fit of the Barlow model to the HSP data (α = 0.13, x = 8.9, k0 = 21). Right: Performance of one observer under two conditions: one in which the subject was encouraged to report only seen flashes (k0 = 19) and one in which the subject was encouraged to report possible or seen flashes (k0 = 17). False alarm rates were 0% and 1%, respectively. Other parameters as on left. Adapted from Barlow, J Opt Soc Am 46:634-639, 1956.

Figure 3. Responses of a single retinal ganglion cell to flashes of 5 quanta (average) of light. A. PSTH, 10 ms bin width, 100 repetitions. B. Pulse count distribution in the presence (solid outline) and absence (dotted area) of the stimulus. C. ROC curve, i.e., probability of c or more spikes in the presence of the dark light only vs. probability of c or more spikes in the presence of the flash plus dark light. Arabic numerals indicate threshold values for c. Roman numerals and crosses indicate values for an ideal detector assuming
α = 0.18 and x = 6.5, and k0 = 1-4, respectively. Adapted from Barlow et al., Vision Res Suppl 3:87-101, 1971.

Figure 4. Two types of psychophysical tasks used to investigate the detection of weak signals. In the yes/no rating task, only a single instance of the stimulus is presented (either s0 or s1), and the subject is asked to choose the stimulus presented. During the feedback period, the subject is informed of his performance; for example, a high tone is used to indicate a correct choice and a low tone an incorrect choice. In the 2AFC task, both s0 and s1 are presented, but their order of presentation is randomized. The subject is asked in which of the two intervals s1 was presented.

Figure 5. Left: Receiver operating characteristic (ROC) curves for the ideal observer of two Gaussian distributions s0 and s1 with equal variance and unequal means. The various curves correspond to different values of the normalized distance d. Right: Performance of the ideal observer of two Poisson distributions with different means, m0 and m1, respectively. The circles and squares are labeled with the corresponding threshold values kth; linear interpolation between these values corresponds to randomized tests built using the two nearest threshold values, as explained in example 1.

Figure 6. Schematics of the experimental configuration used in the Newsome experiments. Dots are presented in an aperture matched to the receptive field of the recorded MT neuron (bottom). The fraction of dots moving coherently in one direction is varied from zero coherence (top left, dots move in all possible directions) to 100% coherence (top right, dots all move in the same direction). The stimulus is presented for 2 seconds and the subject is asked to answer the trial by making an eye movement towards one of 2 LEDs on the visual screen. The animal is required to fixate the fixation point throughout stimulus presentation. Adapted from Britten et al., J Neurosci 12:4745-4765, 1992.

Figure 7. a. Distribution of the number of spikes of an MT neuron for three
different correlation levels (60 trials per correlation level; hatched bars: preferred direction, black bars: anti-preferred direction). b. Neurometric curve (solid points and line) derived from a. The significance of this curve is explained in the text. The open dots and dashed line indicate the psychometric curve of the subject during the same trials. In this case, 82% performance is reached at 4.4% coherence for the neuron and 6.6% coherence for the animal. Adapted from Newsome et al., Nature 341:52-54.

Figure 8. Effect of lesions of area MT on the threshold coherence needed to detect the motion stimulus reliably (defined as 82% correct performance). The left panel illustrates thresholds before and immediately after the lesion. On the right side are the thresholds when the visual stimuli are presented in the opposite side of the visual field, which is processed by the MT area on the opposite side of the brain. Adapted from Newsome and Pare, J Neurosci 8:2201-2211, 1988.

Figure 9. Effect of electrical microstimulation in MT on the performance of a rhesus monkey in the direction discrimination task. Correlations on the abscissa are positive if they correspond to the preferred direction of the neuron and negative otherwise. The filled dots correspond to decisions taken during microstimulation trials and the open dots to decisions taken during trials without stimulation. In A, microstimulation leads to a leftward shift of the psychometric function corresponding to 7.7% correlated dots. In B, the shift is much larger, 20.1% correlated dots. Adapted from Salzman et al., Nature 346:174-177, 1990.

Figure 10. Distribution of spikes of an MT neuron at 3 different coherence levels, as a function of the psychophysical responses. Correct responses correspond to the upward stacked spike distributions and incorrect responses to the downward spike distributions. Adapted from Britten et al., Vis Neurosci 13:87-100, 1996.

[Figures 1-10 appear here; see the Figure Legends above.]
Simplified models of neural activity

Theoretical Neuroscience

Fabrizio Gabbiani
Division of Neuroscience, Baylor College of Medicine
One Baylor Plaza, Houston, TX 77030
e-mail: gabbiani@bcm.tmc.edu

1 Introduction

A principle that has proven to be very fruitful in modeling neural systems is to consider the simplest model capable of predicting the experimental phenomenon under consideration. This approach allows one to capture the essential points of a particular phenomenon without obscuring the picture with
unnecessary details. This is precisely the approach taken by Hodgkin and Huxley to model action potential propagation along the squid giant axon in terms of sodium and potassium conductances. We have also seen how a simplification of the Hodgkin-Huxley model to a two-variable reduced (FitzHugh) model allows one to characterize in more detail the firing properties of the Hodgkin-Huxley system in terms of phase plane analysis. When considering issues of synaptic integration or processing of sensory inputs by neurons, a set of simplified models is often used as a first pass to attack these problems. We present in the following several models that are universally used as first approximations.

2 The leaky integrate-and-fire neuron

A simplified model often used to describe the activity of single neurons in response to various inputs is the leaky integrate-and-fire neuron (LIF neuron). In this model, the conductances responsible for spike generation ($g_{Na}$ and $g_K$ in the Hodgkin-Huxley model) are ignored, and the spiking mechanism is replaced by a voltage threshold, $V_{thres}$. This means that the membrane potential follows the differential equation

$$C\frac{dV}{dt} = -\frac{V}{R} + I(t), \qquad t > 0, \qquad (1)$$

where $I(t)$ is some stimulation current, and we adopt the initial condition $V(0) = V_0$. When $V(t_1) = V_{thres}$ reaches threshold, a spike is emitted at $t_1$ and the voltage is reset to zero. Note that at steady state and without input current, the membrane voltage is equal to zero, which corresponds to the resting membrane potential value of the model.

Subthreshold behavior. Below threshold, the membrane voltage satisfies a linear differential equation that is none other than the passive patch equation of our second lecture. Thus, the approximation made in the leaky integrate-and-fire model amounts to neglecting all active membrane conductances, as well as the electrotonic structure of the dendritic tree of a neuron. We can immediately solve for the subthreshold membrane potential until the next spike by using the methods of lecture 2. Dividing by $C$ and using the definition
$\tau = RC$ of the membrane time constant, we may rewrite

$$\frac{dV}{dt} + \frac{V}{\tau} = \frac{I(t)}{C},$$

and use the integrating factor $e^{t/\tau}$ to obtain

$$\frac{d}{dt}\left(e^{t/\tau} V(t)\right) = e^{t/\tau}\,\frac{I(t)}{C}.$$

Integrating between 0 and $t$ yields

$$e^{t/\tau} V(t) - V_0 = \int_0^t e^{s/\tau}\,\frac{I(s)}{C}\, ds,$$

and after multiplication by $e^{-t/\tau}$,

$$V(t) = V_0\, e^{-t/\tau} + \int_0^t e^{-(t - s)/\tau}\,\frac{I(s)}{C}\, ds.$$

This solution consists of two parts. The first one is a simple exponential decay from the starting value of the membrane potential to its steady-state value, which we will ignore in the following (i.e., we set $V_0 = 0$). The second part of the solution reflects the contribution of the current $I(t)$. If we define

$$G(t) = \begin{cases} 0 & t < 0, \\ \frac{1}{C}\, e^{-t/\tau} & t \geq 0, \end{cases}$$

we may rewrite

$$V(t) = \int_0^t G(t - s)\, I(s)\, ds. \qquad (2)$$

Eq. (2) is the convolution of the functions $G$ and $I$. The function $G(t)$ is called the Green function associated with the differential equation (1). The Green function is the solution of eq. (1) when a delta current pulse is injected, $I(t) = \delta(t)$. This can be seen by plugging $I(t) = \delta(t)$ into eq. (2) and using the definition of the delta function, $\int_0^\infty \delta(t - t_0)\, f(t)\, dt = f(t_0)$. Note that the integral in eq. (2) can be extended from $-\infty$ to $\infty$, since $I(t) = 0$ and $G(t) = 0$ for $t < 0$. Eq. (2) states that for a linear differential equation such as (1), the solution at time $t$ is obtained by linear superposition of the responses to delta current pulses, i.e., the Green function $G(t - s)$, scaled by $I(s)$, relative to their time of occurrence. We can now gain further insight into how the injected current is transformed into subthreshold membrane potential by taking the Fourier transform of $G(t)$, since convolutions correspond to multiplications in frequency space. The Fourier transform (see appendix A1) of $G(t)$ is given by

$$\hat{G}(\omega) = \frac{1}{C}\,\frac{1}{1/\tau + i\omega} = \frac{R}{1 + i\omega\tau} = A(\omega)\, e^{i\phi(\omega)}, \qquad (3)$$

with

$$A(\omega) = \frac{R}{\sqrt{1 + \omega^2\tau^2}}, \qquad \phi(\omega) = -\tan^{-1}(\omega\tau).$$

Equation (3) tells us that if a subthreshold sinusoidal current $I(t) = I_1\sin(\omega t)$ is injected into a leaky integrate-and-fire model starting at time $t = 0$, then once the transients have vanished, the membrane potential is given by $V(t) = I_1 A(\omega)\sin(\omega t + \phi(\omega))$ (see appendix A2). In other words, $V(t)$ is simply scaled and phase-shifted with respect to the driving sinusoidal current, as expected for a linear
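The gain and phase of eq. (3) are straightforward to tabulate. A short sketch with illustrative membrane parameters (R = 100 MΩ and C = 100 pF, i.e. τ = 10 ms, are assumed values, not taken from the text):

```python
import math

R = 100e6      # membrane resistance (ohm), illustrative value
C = 100e-12    # membrane capacitance (farad), illustrative value
tau = R * C    # membrane time constant: 10 ms

def gain(omega):
    """A(w) = R / sqrt(1 + w^2 tau^2), the amplitude factor of eq. (3)."""
    return R / math.sqrt(1.0 + (omega * tau) ** 2)

def phase(omega):
    """phi(w) = -atan(w tau), in radians."""
    return -math.atan(omega * tau)

# At the corner frequency w = 1/tau the gain drops by a factor sqrt(2)
corner = gain(1.0 / tau)
```

The monotone decay of A(ω) toward zero and the drift of φ(ω) from 0 to −π/2 reproduce the low-pass behavior plotted in Fig. 1.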
system. The attenuation factor, or gain, A(ω) is frequency dependent, such that the higher the frequency of the current the stronger the attenuation. Thus the leaky integrate-and-fire neuron acts as a low-pass filter. Similarly, the phase factor φ changes from 0 deg to -90 deg as frequency increases; both functions A and φ are plotted in Fig. 1.

f-I curve for constant current injection. We now compute the steady-state firing rate in response to a constant current pulse starting at t = 0. If V_0 = 0, eq. (2) implies an exponential relaxation to the steady-state value V_∞ = IR:

V(t) = IR (1 - e^{-t/τ}).

Thus the injected current I has to be larger than the threshold current I_thres = V_thres/R if the cell is to fire. For currents above this value, the threshold voltage is reached at a time t_thres that satisfies the equation

V_thres = IR (1 - e^{-t_thres/τ}),

and therefore

t_thres = -τ log(1 - V_thres/(IR)).

If the model is endowed with an absolute refractory period t_ref, the firing rate is obtained from

f = 1/(t_ref + t_thres) = 1/(t_ref - τ log(1 - V_thres/(IR))).   (4)

The firing rate saturates at a frequency f_∞ = 1/t_ref in the limit of large injected currents.

Perfect integrator limit. In the limit of very high membrane resistance we obtain a perfect integrator governed by the differential equation

C dV/dt = I(t).

In this case past inputs are not forgotten over time and sum up perfectly. Under constant current injection the membrane potential grows at a rate I/C and thus reaches threshold when t = V_thres C/I or, equivalently, the firing rate without refractory period is given by f = I/(C V_thres). The perfect integrator is a good approximation to the leaky integrate-and-fire neuron when the rate of inputs (and thus the output firing rate of the model) is large compared to the inverse membrane time constant. In this case the capacitance does not have time to discharge significantly, so that inputs do not get forgotten. In this limit it is clear that integrate-and-fire neurons average their inputs and reduce variability: if n Poisson inputs are needed to fire the model, then the output spike train will be
√n times less variable than the input, as seen in the lecture on spontaneous activity and the quantification of neuronal variability.

3 Variants of the LIF neuron

The basic principle of the leaky integrate-and-fire neuron is to replace the dynamics of fast spike-generating conductances by a voltage threshold. A large number of variants are possible on this main theme. In the following, we list some of the variants that have proven useful to fit the LIF model to experimental data. We start by listing several possible ways to represent synaptic inputs.

Synaptic inputs. Synaptic inputs to a LIF neuron have been simulated either as delta current pulses,

I_syn(t) = Σ_i Σ_j ε_i δ(t - t_j^{(i)}),

or as true conductance changes,

I_syn(t) = Σ_i g_i^{exc}(t) (E_exc - V) + Σ_j g_j^{inh}(t) (E_inh - V).

The time-dependent conductances are usually taken as α-functions. Sometimes an intermediate approach is taken where the time-dependent conductance change is replaced by an average value over the time course of activation, for example g_syn(t) → ḡ_syn.

Reset voltage after an action potential. The usual choice for the reset membrane potential V_reset after an action potential is the resting membrane potential V_rest, which is equal to zero in our formulation. However, nothing forbids us to choose a different reset value. If V_reset is different from V_rest, then the f-I curve of eq. (4) depends on δ = V_thres - V_reset, and its slope for high step currents is given by f ≈ I/(Cδ) (assuming t_ref = 0 and using the approximation log(1 + x) ≈ x for x small). Thus, the value of the reset voltage allows one to control the slope of the f-I curve independently of V_thres. A second consequence of a high reset value is that the membrane potential hovers close to threshold and is thus much more sensitive to transient coincident inputs. This increases the variability of the spike train under random inputs.

Relative refractory period. Sometimes a relative refractory period is implemented by incrementing the threshold after each action potential and letting it decay towards its steady-state value:

dV_thres/dt = (V_thres,0 - V_thres)/τ_thres, and
V_thres → V_thres + δ_thres after each action potential.

Additional subthreshold conductances. Another common practice is to add additional subthreshold conductances to the LIF neuron to study their effect on the firing characteristics of the model. For example, instead of modeling a relative refractory period as explained above, one can introduce an after-hyperpolarization conductance (AHP) with a dependence on a slowly varying variable like the calcium concentration.

4 The inhomogeneous Poisson process

Definition. The inhomogeneous Poisson process is a model of spike train generation that is often used when studying the encoding of sensory stimuli by neurons. In contrast to the leaky integrate-and-fire neuron, it does not assume a mechanistic model of spike generation and emphasizes the random aspects of spike generation that are often encountered in vivo. The inhomogeneous Poisson process is a generalization of the homogeneous Poisson process that we first encountered while describing the spontaneous activity of nerve cells and the time dependence of spontaneous neurotransmitter release. At that time, the rate at which spikes were generated (or vesicles released from the presynaptic terminal) was constant in time and given by ρ. We can generalize this model by assuming a time-dependent rate of spike generation according to a function ρ(t). Formally, the definition of the inhomogeneous Poisson process is identical to the one given in lecture 2, except for the replacement of ρ by ρ(t):

1. No two or more spikes can occur at the same moment in time.

2. Spikes are generated randomly and independently of each other with a mean rate ρ(t) that is time dependent. We formalize this by assuming that for each interval (a, b] the mean number of spikes generated is given by ∫_a^b ρ(t) dt and that it follows a Poisson distribution,

P(N(a,b) = k) = e^{-∫_a^b ρ(t) dt} (∫_a^b ρ(t) dt)^k / k!.

3. Independence is again guaranteed if, for any two separate intervals (a, b] and (c, d] with void intersection, the numbers of spikes generated in both intervals are independent, i.e.,

P(N(a,b), N(c,d)) = P(N(a,b)) P(N(c,d)).
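The defining property above can be checked directly by simulation. Below is a Python sketch (the course exercises use MATLAB; the sinusoidal rate function, bin size, and trial count here are hypothetical choices for illustration): spikes are drawn in small bins with probability ρ(t)Δt, and the trial-averaged spike count over [0, T] should approach ∫_0^T ρ(t) dt.

```python
import math
import random

def simulate_inhom_poisson(rate, T, dt=2e-4, rng=None):
    """Bin-based simulation of an inhomogeneous Poisson process:
    a spike occurs in each small bin [t, t+dt) with probability rate(t)*dt."""
    rng = rng or random.Random(0)
    spikes = []
    t = 0.0
    while t < T:
        if rng.random() < rate(t) * dt:
            spikes.append(t)
        t += dt
    return spikes

# Hypothetical sinusoidally modulated rate, in spikes/s.
rate = lambda t: 20.0 * (1.0 + math.sin(2 * math.pi * 2.0 * t))

# Over T = 1 s the sine integrates to zero, so the expected count is 20.
T = 1.0
counts = [len(simulate_inhom_poisson(rate, T, rng=random.Random(seed)))
          for seed in range(100)]
mean_count = sum(counts) / len(counts)
```

The Poisson standard deviation of a single-trial count is √20 ≈ 4.5, so the mean over 100 trials should land within a fraction of a spike of the expected value of 20.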
Probability density of firing times. Repeating exactly the same steps presented in lecture 2 and using the fact that for a sufficiently small interval (a - Δt/2, a + Δt/2],

∫_{a-Δt/2}^{a+Δt/2} ρ(t) dt ≈ ρ(a) Δt,

we obtain the probability density of observing spikes at times t_1, …, t_n during the observation interval [0, T]:

p_{[0,T]}(t_1, …, t_n) = ρ(t_1) ⋯ ρ(t_n) e^{-∫_0^T ρ(t) dt}.   (5)

This equation reflects the fact that at each time point t_1, …, t_n the probability density of observing a spike is proportional to the instantaneous rate, and the probabilities multiply together since they are independent of each other. The final exponential factor is a normalization constant ensuring that the probability of observing an arbitrary number of spikes within the interval at arbitrary times sums up to 1.

Equivalence with a random-threshold IF model. Equation (5) provides one way to simulate an inhomogeneous Poisson process: split the time axis in small intervals of length Δt and flip a coin for each interval. The probability of spiking should then be proportional to ρ(t)Δt. This method works but is inefficient because it requires use of a random number generator once per interval Δt. We now describe a more accurate method based on computing the probability distribution of ISIs. This will also establish a connection between the inhomogeneous Poisson process and the perfect integrator model of sect. 2. Assume that a spike is generated at time a. Just as in lecture 2, the probability density of the next interspike interval is obtained from

P(Δt_0 < b - a) = 1 - P(Δt_0 ≥ b - a) = 1 - P(N(a,b) = 0) = 1 - e^{-∫_a^b ρ(t) dt} = 1 - e^{-∫_0^{b-a} ρ(t+a) dt}.

Setting again b - a = Δt and taking the derivative, we obtain

p_a(Δt) = ρ(a + Δt) e^{-∫_0^{Δt} ρ(t+a) dt}.

This equation directly generalizes the formula that we derived in lecture 2 for the homogeneous Poisson process. To make effective use of this formula, we define

y = y(Δt) = ∫_0^{Δt} ρ(t+a) dt.

The probability density of y is obtained from the probability density of Δt and the transformation law for probability densities under a smooth variable change,

q(y) = p_a(Δt) |dy/dΔt|^{-1} = e^{-y}

(see appendix A3). This
equation tells us that y is an exponentially distributed random variable with rate 1. We can thus simulate a spike train corresponding to the inhomogeneous process with rate ρ(t) using the following procedure:

1. Set a = 0.

2. Select an exponentially distributed random threshold value y_i (for the starting index i = 0).

3. Integrate

Y(t) = ∫_0^t ρ(a + s) ds

until the threshold (Y(t_i) = y_i) is reached. Call this time point t_i.

4. Generate a spike at t_i, set a = t_i, and repeat from point 2 for index i + 1 (i.e., i = 1, 2, 3, and so forth).

Note that we have implicitly assumed in the previous algorithm that a spike occurs at time point zero, by setting a = 0 as a starting value. The algorithm presented in steps 1-4 is identical to that used to simulate a perfect integrate-and-fire neuron satisfying the differential equation dV/dt = ρ(t), with a threshold V_thres that is updated randomly according to an exponential distribution after each spike.

5 Simplified models of bursting neurons

An important property of many neurons is their ability to generate short bursts of spikes and/or exhibit low-frequency oscillations in their subthreshold membrane potential under constant current injection (typically between 1-16 times per second). This points to the existence of ionic conductances that are able to activate and deactivate periodically on a time scale much slower than the action potential, causing small excursions of the membrane potential from rest. These excursions are important because they raise the membrane potential close to threshold, thus allowing action potentials to be fired in small "packets", or bursts.

Intrinsic properties lead to different bursting behaviors. The ionic conductances responsible for bursting and/or oscillations have been investigated in several different cell types, and it is now clear that several distinct mechanisms are at play. We briefly enumerate some of the salient points from these analyses.

1. Bursting can be caused by the activation of low-threshold conductances that are usually inactivated or closed at rest, such as
the calcium-permeable T conductance, I_T. Hyperpolarization of the membrane potential removes the inactivation (or opens the channels) and allows a depolarizing current to turn on, leading to "rebound excitation". A prominent example of this bursting mechanism are relay neurons in the thalamus (Fig. 2). In other neurons, different low-threshold conductances, such as for example the I_h current, play a similar role.

2. A second widespread mechanism of bursting involves spatial interactions between the soma and dendritic compartments of a neuron (Fig. 3). An example is given by CA3 pyramidal cells of the hippocampus, which possess calcium channels localized in the dendritic compartments but not the soma. Upon current injection in the soma, the depolarization of the dendritic compartment is delayed in time with respect to the soma, causing significant current flow to and from the soma. Delayed activation of dendritic calcium conductances eventually sustains the depolarization of the soma, causing the cell to burst. A similar mechanism of bursting has been described in cortical neurons called chattering cells (Fig. 2). In these neurons the burst frequency can be unusually high (up to 40 Hz) and relies also on a "ping-pong" effect between soma and dendrites, based on fast sodium conductances instead of calcium conductances.

3. Dendritic morphology can play an important role in determining firing characteristics, given a fixed set and distribution of conductances in the various functional compartments of a neuron. An example that has been investigated in some detail includes various types of excitatory neurons of the cerebral cortex (pyramidal cells of different sizes, smooth and stellate cells). It has been shown by simulations that larger neurons with decoupled somatic and dendritic compartments are much more prone to bursting than more compact neurons (Fig. 4).

In many neurons, the frequency of spikes during a burst is initially high and gradually decreases over the course of the burst. At least in one case,
pyramidal cells of the electrosensory lateral line lobe of weakly electric fish, exactly the opposite behavior has been observed in vivo. Although our discussion focuses on intrinsic properties that can cause cells to burst or oscillate, it is important to remember that such effects are typically observed within networks of cells, and that the properties of bursts and/or oscillations are thus in part determined by interactions among different neurons of a network.

Functional role of bursts. Bursts and oscillatory neuronal membrane potentials are thought to fulfill various functional roles in the nervous system. These include:

1. Rhythm generation. Many tasks, such as for example locomotion, swimming, or digestion of food, involve the rhythmic activation of muscles. Such rhythms are typically generated by networks of oscillating neurons activated in definite sequences. Thus, both the intrinsic properties of nerve cells and their pattern of synaptic connections influence the generation of rhythms. One system that has been analyzed in great detail in this respect is the stomatogastric ganglion of crabs, responsible for rhythmic digestive behavior. Other favorite systems include swimming in the leech or lamprey, the scratch reflex of turtles, or the activation of nervous circuitry during flying in locusts.

2. Safety against unreliable synapses. Bursts have long been thought to be effective at safely signalling important events, sometimes over long distances. One reason is that synaptic transmission is stochastic and therefore unreliable, as we have seen in earlier lectures. Thus, stimulating a synaptic target repetitively offers a way to overcome this problem and assure that a message is delivered reliably. Therefore, bursts of spikes could represent a "safety factor" in synaptic transmission. In cortex, for example, layer 5 pyramidal cells are those most prone to burst and typically send long-range connections toward other cortical or subcortical areas.

3. Detection of sensory events. Bursts could be
used in sensory systems to signal important events, for example the occurrence of a salient object in the visual field.

6 Tsodyks-Markram model of synaptic release

As we have seen in lecture 1, synaptic transmission is often characterized by short-term changes in synaptic efficacy, such as depression or facilitation. We study a model of synaptic transmission well suited to describe such short-term changes and that allows us to draw some interesting conclusions on how they could affect transmission of information between neurons. We focus on short-term synaptic depression because it is the most interesting case, but short-term facilitation can be handled similarly. We assume a synapse consisting of N independent release sites (active zones) and denote by P the probability that a synaptic vesicle is docked at the active site, ready to be released. In the absence of activity, we assume that P relaxes exponentially to its steady-state value:

dP/dt = (P_0 - P)/τ_rec,   (6)

where τ_rec is the time constant of recovery of the releasable vesicle pool. During an action potential, we assume that a vesicle at an active site is released with probability p_v, so that the total probability of vesicle release is P_r = p_v P. After an action potential, the value of P is thus decremented:

P → P - p_v P.   (7)

Multiplying both sides of eqs. (6) and (7) with p_v, we can rewrite in terms of the single variable P_r:

dP_r/dt = (P_r0 - P_r)/τ_rec,   (8)

P_r → P_r - p_v P_r = f_d P_r after each action potential,   (9)

with f_d = 1 - p_v < 1. We can now derive a differential equation for the average rate of release ⟨P_r⟩(t) of such a synapse when it is stimulated with spike trains generated according to a homogeneous Poisson process with rate ρ. Let 0 < t_1 < t_2 < … be a sequence of such action potentials. We combine eqs. (8) and (9) into a single differential equation:

dP_r/dt = (P_r0 - P_r)/τ_rec - p_v P_r Σ_i δ(t - t_i).

Taking the average at each time point t over all such Poisson spike trains with rate ρ, we obtain a differential equation for the average rate,

d⟨P_r⟩/dt = (P_r0 - ⟨P_r⟩)/τ_rec - p_v ⟨P_r Σ_i δ(t - t_i)⟩.

The last
term is the average of P_r(t), conditional on the occurrence of a spike at time t. By assumption, this average is independent of the occurrence of a spike at time t, because P_r(t) depends only on earlier spike occurrences, which are generated independently of each other (homogeneous Poisson process). We may therefore rewrite

d⟨P_r⟩/dt = (P_r0 - ⟨P_r⟩)/τ_rec - p_v ⟨P_r⟩ ⟨Σ_i δ(t - t_i)⟩.

The last term is the average rate of spike occurrences at time t, i.e., ρ (see appendix A4). We therefore obtain a simple differential equation for the average rate,

d⟨P_r⟩/dt = (P_r0 - ⟨P_r⟩)/τ_rec - p_v ρ ⟨P_r⟩.   (10)

We can now use eq. (10) to compute the steady-state average rate of release in response to Poisson spike trains of rate ρ by setting the left-hand side equal to zero:

(P_r0 - ⟨P_r⟩_ss)/τ_rec - p_v ρ ⟨P_r⟩_ss = 0.

Solving for the steady-state rate ⟨P_r⟩_ss yields

⟨P_r⟩_ss = P_r0 / (1 + τ_rec p_v ρ).   (11)

The average postsynaptic rate f_post is given by f_post = ρ ⟨P_r⟩_ss. At low presynaptic firing rates ρ, the postsynaptic rate is roughly proportional to ρ: f_post ≈ P_r0 ρ (Fig. 5). In this range the synapse effectively implements a rate code. However, at high firing rates the postsynaptic rate becomes independent of ρ: f_post ≈ P_r0/(τ_rec p_v). In that regime, changes in the postsynaptic rate are proportional to relative changes in the presynaptic rate: when ρ abruptly changes to ρ + Δρ, the postsynaptic rate switches transiently from

ρ ⟨P_r⟩_ss ≈ P_r0/(τ_rec p_v)

to

(ρ + Δρ) ⟨P_r⟩_ss ≈ (P_r0/(τ_rec p_v)) (1 + Δρ/ρ),

since immediately after the change ⟨P_r⟩ is still at its old steady-state value. Therefore the change in postsynaptic rate is proportional to the relative change Δρ/ρ. This point is illustrated in Fig. 5, where the two transients in response to changes from 25 to 100 Hz and from 10 to 40 Hz are nearly of the same size (relative change Δρ/ρ equal to (100-25)/25 = 3 and (40-10)/10 = 3, respectively).

A Appendices

A1 Fourier transform

Definition. The Fourier transform of a function g(x) is a decomposition in the frequency domain in terms of sinusoidal components. The definition of the Fourier transform ĝ(ω) is given by

ĝ(ω) = ∫_{-∞}^{∞} g(x) e^{-iωx} dx.

The Fourier variable ω is called the circular frequency of the signal. The inverse Fourier
transform is given by

g(x) = (1/2π) ∫_{-∞}^{∞} ĝ(ω) e^{iωx} dω.

Not all functions possess a well-defined Fourier transform, and there are many important technical details involved in their proper definition. We will, however, ignore for the most part these technical issues in our treatment. In numerical work, the Fourier transform is often expressed in terms of the variable f = ω/2π instead of the circular frequency ω. Under the change of variable ω → f we can rewrite the above two equations as

ĝ(f) = ∫_{-∞}^{∞} g(x) e^{-2πifx} dx,   g(x) = ∫_{-∞}^{∞} ĝ(f) e^{2πifx} df.

If x has units of length (for example, meters), then f has units of cycles/meter. Similarly, for x in units of seconds, f would have units of cycles/second, or Hertz. An important property of the Fourier transform is that it maps a convolution into a multiplication in the frequency domain, and vice versa: the inverse Fourier transform of a multiplication is a convolution. As we will see in later lectures, linear systems are characterized by such convolutions or, equivalently, by multiplications in the frequency domain; thus the Fourier transform is a natural tool to describe such systems. Let g_1(x) and g_2(x) be two functions whose convolution is defined as

g_3(x) = ∫_{-∞}^{∞} g_1(x - y) g_2(y) dy.

Then the Fourier transform ĝ_3(ω) is equal to the product of ĝ_1(ω) and ĝ_2(ω). This result is known as the convolution theorem.

A2 Subthreshold response of the LIF neuron to a sinusoidal current pulse

Using the solution derived in eq. (2), we have

V(t) = (1/C) ∫_0^t e^{-(t-s)/τ} sin(ωs) ds = (1/C) e^{-t/τ} ∫_0^t e^{s/τ} sin(ωs) ds.   (12)

Performing the change of variables x = s/τ in this last integral and defining α = t/τ, β = τω, we obtain

∫_0^t e^{s/τ} sin(ωs) ds = τ ∫_0^α e^x sin(βx) dx.

Integration by parts yields

∫_0^α e^x sin(βx) dx = [e^x sin(βx)]_0^α - β ∫_0^α e^x cos(βx) dx = e^α sin(βα) - β ∫_0^α e^x cos(βx) dx,

∫_0^α e^x cos(βx) dx = [e^x cos(βx)]_0^α + β ∫_0^α e^x sin(βx) dx = e^α cos(βα) - 1 + β ∫_0^α e^x sin(βx) dx.

Combining these two results gives

∫_0^α e^x sin(βx) dx = (1/(1 + β²)) (e^α sin(βα) - β e^α cos(βα) + β).

Converting to the original variables and plugging this result into eq. (12), we obtain

V(t) = (R/(1 + τ²ω²)) (sin(ωt) - τω cos(ωt) + τω e^{-t/τ}).

In the limit t
→ ∞ this gives

V(t) = (R/(1 + τ²ω²)) (sin(ωt) - τω cos(ωt)).

If we define the vector V = (1, -τω) and φ = angle of V with the x-axis, we have

tan φ = -τω,  cos φ = 1/√(1 + τ²ω²),  sin φ = -τω/√(1 + τ²ω²),

so that

V(t) = (R/√(1 + τ²ω²)) (cos φ sin(ωt) + sin φ cos(ωt)) = (R/√(1 + τ²ω²)) sin(ωt + φ).

A3 Transformation of probability densities under smooth variable change

Assume that X is a random variable with probability density p(x) over the interval I = (a, b), with possibly infinite boundary values a and b. By definition,

P(X ≤ x_0) = ∫_a^{x_0} p(x) dx.

Let Y = g(X) be a new random variable obtained by a smooth transformation of X, such that dg/dx ≠ 0 (for example, Y = 4X), and set c = g(a). We are looking for an explicit formula for the probability density q(y) of Y,

P(c ≤ Y ≤ y_0) = ∫_c^{y_0} q(y) dy,   (13)

in terms of the probability density of X. Define the indicator function for the interval (c, y_0] as follows:

1_{(c,y_0]}(y) = 1 if c ≤ y ≤ y_0, and 0 otherwise.

The probability that Y ≤ y_0 is then given by the probability, over the random variable X, that the indicator function 1_{(c,y_0]}(g(x)) is equal to 1:

P(Y ≤ y_0) = ∫ 1_{(c,y_0]}(g(x)) p(x) dx.

We now perform the change of integration variable x → y = g(x), so that dx → (dx/dy) dy = (dg/dx)^{-1} dy:

P(Y ≤ y_0) = ∫_c^{y_0} p(g^{-1}(y)) (dg/dx)^{-1} dy.

Comparing with eq. (13), we obtain

q(y) = p(x) |dg/dx|^{-1}.

Generation of an exponentially distributed random variable. Random number generators are usually designed to draw numbers uniformly in the interval (0, 1). We can now apply the previous result to generate an exponentially distributed random variable from a uniformly distributed one. The trick presented in the following is general and can be used to generate other distributions. Let Y be exponentially distributed with unit rate between 0 and ∞, i.e., p(y) = e^{-y}. Define

Z = g(Y) = ∫_0^Y p(y) dy = 1 - e^{-Y}.   (14)

The distribution of Z is obtained by the law of transformation of probability densities derived above,

q(z) = p(y) |dg/dy|^{-1} = e^{-y} (e^{-y})^{-1} = 1,

that is, Z is uniformly distributed between 0 and 1. Note that Y may be obtained from Z by inverting eq. (14): Y = -log(1 - Z). This leads to the following algorithm to generate Y exponentially distributed: (1) draw numbers z uniformly distributed between 0 and 1; (2) transform the obtained numbers according to y = -log(1 - z). The
resulting numbers are exponentially distributed.

A4 Campbell's theorem

Campbell's theorem states that

⟨Σ_i δ(t - t_i)⟩ = ρ,   (15)

where the average is taken over all possible spike trains of a homogeneous Poisson process of rate ρ. To justify this result, let Δt be a sufficiently small interval such that the probability of two spikes occurring in the interval Δt is exceedingly small (ρΔt ≪ 1). We then have

∫_{t-Δt/2}^{t+Δt/2} Σ_i δ(t′ - t_i) dt′ = 1 if t_i ∈ (t - Δt/2, t + Δt/2] for some i, and 0 otherwise.

On average, the first alternative occurs with frequency ρΔt, while the second alternative occurs with frequency 1 - ρΔt. Thus,

⟨∫_{t-Δt/2}^{t+Δt/2} Σ_i δ(t′ - t_i) dt′⟩ = ρΔt.

Dividing by Δt and taking the limit Δt → 0 yields eq. (15).

Figure Legends

Figure 1: Phase and normalized gain (as functions of frequency, in Hz) of a leaky integrate-and-fire neuron with a membrane time constant of 30 ms (see eq. 3).

Figure 2: Left: Rebound bursts are observed in thalamic relay neurons in response to hyperpolarizing current pulses from a depolarizing holding potential. Adapted from Jahnsen and Llinas, J Physiol 349:227-247, 1984. Right: Response of a chattering cell to a depolarizing current pulse. Adapted from Gray and McCormick, Science 274:109-113, 1996.

Figure 3: Two-compartment model of burst generation for pyramidal cells of the electrosensory lateral line lobe in weakly electric fish. Adapted from Doiron et al., J Comput Neurosci 12:5-25, 2002.

Figure 4: Effect of neuronal morphology on firing properties. Neurons with identical densities of active conductances exhibit qualitatively different spiking behaviors as morphology is changed. Adapted from Mainen and Sejnowski, Nature, 1996.

Figure 5: Top: Plot of the steady-state probability of release and of the steady-state postsynaptic firing rate (see eq. 11) of the Tsodyks-Markram model, as functions of the presynaptic rate. Bottom: Illustration of the postsynaptic firing rate dynamics in response to three changes in presynaptic firing rate (e.g., steps to 10 Hz and 40 Hz). Adapted from Dayan and Abbott, MIT Press, 2001.
Correlated activity in populations (Lecture 2)

Last lecture:
1. Population response noise distributions
2. Population decoders: winner-take-all, center-of-mass, template-matching, maximum-likelihood, Bayesian
3. Goodness of decoders → Fisher information
4. Decoding uncertainty: probability distributions over the stimulus

Today:
- Quantifying the effect of correlations on information
- Modeling correlated population activity
- Averbeck, Latham, Pouget (2006). Neural correlations, population coding and computation. Nat Rev Neurosci 7(5):358-66.
- Pillow et al. (2008). Spatio-temporal correlations and visual signalling in a complete neural population. Nature 454.

Population codes: the activity r = (r_1, r_2, …, r_N) of N neurons with different preferred stimuli; a given stimulus s elicits r, from which a stimulus estimate ŝ is decoded.

Quality of a population code: How much information about a stimulus s does a population r contain? → How well does the best possible decoder do in decoding s from r? → Fisher information,

I_F(s) = ⟨(∂ log p(r|s)/∂s)²⟩,

where p(r|s) is the response distribution of the population. For independent Poisson variability,

I_F(s) = Σ_{i=1}^N f_i′(s)²/f_i(s),

which depends on the slopes of the tuning curves.

Population response distribution: conditional independence means p(r|s) = Π_{i=1}^N p(r_i|s). Noise correlations: p(r|s) ≠ Π_i p(r_i|s). These are different from signal correlations.

Signal correlations: take the tuning curves of two neurons, change the stimulus, and ignore the trial-to-trial variability. Neurons with similar tuning show positively correlated mean activities across stimuli; neurons with dissimilar tuning show negatively correlated mean activities. [Plots: tuning curves over the stimulus, and the activity of neuron 2 against the activity of neuron 1.]

Noise correlations: fix the stimulus and examine variability across trials. [Scatter plots: joint spike counts of two neurons at a fixed stimulus, without and with noise correlation.]
[Panels c, d: population activity as a function of preferred stimulus, for uncorrelated versus correlated noise.]

Noise versus signal correlations:
- Noise correlations: p(r|s) ≠ Π_i p(r_i|s)
- Signal correlations: p(r) ≠ Π_i p(r_i)

How do noise correlations affect information? It can go either way; general conclusions about redundancy or synergy are not justified. How, then, to quantify the impact of correlations on information, and what is your control?

Shuffling responses: take the spike counts in response to a fixed stimulus and shuffle the trials separately for each neuron. [Table: spike counts of two neurons over six trials; the original counts are positively correlated, while the trial-shuffled counts show no correlation.]

ΔI_shuffled: shuffling preserves the variability of each neuron individually, p(r_i|s), but destroys the correlations. A measure of the information in the correlations is

ΔI_shuffled = I - I_shuffled.

Information relates to discriminability. [Panels a-c: joint spike counts of two neurons for two stimuli, comparing the information I in the unshuffled responses with the information I_shuffled in the shuffled responses; depending on the geometry of the response distributions, ΔI_shuffled < 0, ΔI_shuffled > 0, or ΔI_shuffled = 0.]

Interaction between signal and noise correlations:
- If signal correlations are positive, positive noise correlations decrease information.
- If signal correlations are negative, positive noise correlations increase information.
- Correlations can depend on the stimulus.

In cortex, |ΔI_shuffled| < 10% for pairs of neurons (rat barrel cortex; macaque V1, prefrontal, somatosensory cortex). But small effects of pair correlations can have large effects in populations.

Encoding versus decoding perspective. So far: how do correlations affect the total amount of information in a
population? This is the encoding perspective: redundancy, synergy. The decoding perspective asks: given a correlated population, how much worse would you do when ignoring the correlations?

ΔI_diag: train a decoder on the shuffled (uncorrelated) data, then apply the same decoder to the true (correlated) data → this extracts information I_diag, and

ΔI_diag = I - I_diag.

[Panels a, b: estimate the decoder w_diag on shuffled responses and apply it to unshuffled responses; when w_diag = w_optimal, ΔI_diag = 0, and when w_diag ≠ w_optimal, ΔI_diag > 0.]

Ignoring correlations: I_diag cannot be greater than I (unlike I_shuffled). Estimating correlations is data intensive, so there is a trade-off between decoding performance and the data needed to measure the correlations. ΔI_diag ≈ 10% in experiments on pairs of neurons (mouse retina; rat barrel cortex; macaque SMA, V1, and motor cortex).

Modeling correlated populations: complete populations are different from pairs of neurons, and so far we have seen no model-based characterization of correlations. Pillow et al. (2008): a complete population; an encoding model for spike times → allows examining temporal correlations; parameters can be fit to physiological data. Stimuli: binary white noise — not a single number, but a time series for every pixel i, x_i(t). Neural data: 27 retinal ganglion cells recorded in vitro, ON and OFF cells forming nearly complete mosaics. [ON mosaic and OFF mosaic; scale bar 120 μm.]

Poisson neurons: stochastic firing rates. At each time step, the spike count y(t) is drawn stochastically with a probability determined by the rate f(t). LNP neurons: inputs (stimuli or other neurons) pass through a linear combination and then a nonlinearity to yield a spiking probability,

p(spike) = f(θ + K·x) Δt,

from which stochastic spike trains y_i(t) are drawn at each step.

Fitting the parameters of the model to the data: given an observed binary spike train y = 000000011000001001…,

Likelihood: p(data|model) = Π_t p(y_t | model parameters).
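The likelihood above can be made concrete with a small sketch. Below is a hypothetical Python toy version (not the Pillow et al. code): a one-pixel LNP model with exponential nonlinearity, a hand-picked filter k and offset θ, and a Bernoulli likelihood per time bin. The true generating parameters should score a higher log-likelihood than a mismatched filter.

```python
import math
import random

def lnp_loglik(stims, spikes, k, theta, dt=0.001):
    """Bernoulli log-likelihood of a binary spike train under an LNP model:
    p(spike in bin t) = f(theta + k . x_t) * dt, with f = exp here."""
    ll = 0.0
    for x_t, y_t in zip(stims, spikes):
        drive = theta + sum(ki * xi for ki, xi in zip(k, x_t))
        p = min(math.exp(drive) * dt, 1.0 - 1e-12)  # guard against p >= 1
        ll += math.log(p) if y_t == 1 else math.log(1.0 - p)
    return ll

# Hypothetical toy data: a single binary-noise "pixel" and generating
# parameters k = [1.5], theta = 2.0 (spike probabilities stay well below 1).
rng = random.Random(1)
stims = [[rng.choice([-1.0, 1.0])] for _ in range(1000)]
spikes = [1 if rng.random() < math.exp(2.0 + 1.5 * x[0]) * 0.001 else 0
          for x in stims]

ll_true = lnp_loglik(stims, spikes, [1.5], 2.0)   # generating filter
ll_bad = lnp_loglik(stims, spikes, [-1.5], 2.0)   # sign-flipped filter
```

In a real fit one would maximize this log-likelihood over K and θ (e.g., by gradient ascent; with an exponential nonlinearity the problem is concave, which is the point made by Paninski 2003).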
Log-likelihood:

log p(data|model) = Σ_t log p(y_t | model parameters)
= Σ_{spiking t} log p(y_t = 1 | K, θ) + Σ_{non-spiking t} log p(y_t = 0 | K, θ)
= Σ_{spiking t} log(f(θ + K·x_t) Δt) + Σ_{non-spiking t} log(1 - f(θ + K·x_t) Δt)

(Paninski 2003).

Coupled spiking model: stimulus → stimulus filter → nonlinearity → stochastic spiking, for each neuron, with coupling filters between neurons (e.g., from neuron 2 to neuron 1). The "uncoupled model" ignores the correlations.

Cross-correlation functions (retinal data, e.g., ON-ON pairs): the full model reproduces the measured cross-correlations, while the uncoupled model does not. Other tests: triplet correlations; peri-stimulus time histograms (average single-cell responses to new stimuli); predicting a single cell's spike train from the stimulus and the activity of the rest of the population.

So far: the encoding perspective. What about the decoding perspective? Bayesian decoding uses Bayes' rule,

p(s|r) ∝ p(r|s) p(s),

posterior ∝ likelihood × prior. Bayesian decoding of a single pixel: 18 stimulus samples → 2^18 candidate stimuli s.

Decoder performance: the Bayesian decoder under the coupled (full) model extracts about 20% more information than under the uncoupled model (with the Poisson model performing worst). Pairwise analysis → just 10%, as in previous studies.

Summary:
- Signal versus noise correlations.
- Shuffling trials is a way to study the effect of correlations on information.
- Correlations can increase or decrease information; if signal and noise correlations have the same sign, they tend to decrease information.
- Encoding versus decoding perspective.
- Ignoring pairwise correlations reduces information by ~10%; the effect of pairwise correlations on an entire population is not clear.
- Pillow et al. → ~20% more information when exploiting the full correlations.
- LNP and coupled spiking models are convenient phenomenological models for correlated populations.

Exercises: all in the notes of lecture 1 on the class website. Due
Saturday, March 21, end of day. "Bonus" exercises are optional.

Gestalt psychology, Bayesian networks, and Bayesian model comparison

Done so far:
- Population encoding and decoding
- Role of correlations in populations
- Perception as Bayesian inference: explaining visual illusions
- Cue combination: a simple Bayesian computation

This lecture:
- Gestalt psychology: a cornerstone of higher-level vision in psychology; beyond sensory uncertainty
- Bayesian models in practice: how to compute probabilities when it gets hard; how to generate behavioral predictions
- Bayesian model comparison: how to show that model A is better than model B; Occam's razor

Gestalt psychology: observers tend to order their experience in a manner that is regular, orderly, symmetric, and simple. "The whole is different from the sum of its parts." Gestalt psychologists attempt to discover refinements of this idea → the Gestalt "laws of grouping".

Law of closure: the mind tends to complete incomplete figures, that is, to increase regularity. We may experience elements that are not physically present.

Law of proximity: spatial or temporal proximity of elements may induce the mind to perceive a collective entity.

Law of similarity: the mind groups similar elements into collective entities. This similarity might depend on relationships of form, color, size, or brightness.

Law of continuity: the mind continues visual, auditory, and kinetic patterns. When something is introduced as a series, the mind tends to perpetuate the series.

Law of common fate: when elements move in the same direction, we tend to see them as a collective entity.

Criticisms: "vague and inadequate" (Bruce et al., 1996); "redundant and uninformative" (Wikipedia); "haphazard" (Trevor Holland, March 29, 2009); descriptive rather than explanatory.

Gestalt as Bayesian inference: compare p(single object | x_1, x_2, …, x_9) with p(independent objects | x_1, x_2, …, x_9). There is no sensory uncertainty, but there is uncertainty about the higher-level structure.

How to compute Bayesian probabilities when it gets
## How to compute Bayesian probabilities when it gets hard: Bayesian networks

Exercise: compute p(A | E, F) based on the conditional probabilities indicated in this Bayesian network. [Figure: the example network.]

### How to compute probabilities in practice

- Markov chain: $p(A,B,C) = p(A)\,p(B \mid A)\,p(C \mid B)$.
- Conditional independence (common cause): $p(A,B,C) = p(A)\,p(B \mid A)\,p(C \mid A)$, so

$$p(A \mid B, C) = \frac{p(A,B,C)}{p(B,C)} = \frac{p(A)\,p(B \mid A)\,p(C \mid A)}{\sum_{A'} p(A')\,p(B \mid A')\,p(C \mid A')}, \qquad p(A \mid B) = \frac{p(A)\,p(B \mid A)}{\sum_{A'} p(A')\,p(B \mid A')}.$$

- Independent sources: $p(A,B,C) = p(A)\,p(B)\,p(C \mid A, B)$.

### How to predict behavioral data: example, change localization

Where was the change? [Two displays, 1000 ms and 100 ms.]

**Step 1: what are the parameters?**

- Number of items N (assumed known).
- Where did the change occur: L ∈ {1, …, N}.
- How big was the change: Δ.
- The original features θ₁, …, θ_N and the new features φ₁, …, φ_N.
- Internal representations of the original features x₁, …, x_N and of the new features y₁, …, y_N.

**Step 2: draw the generative model; write down the prior and conditional probabilities.**

$$p(L) = \tfrac{1}{N}, \qquad p(\Delta),\ p(\theta_i)\ \text{constant}, \qquad \varphi_i = \theta_i + \Delta\,\delta_{iL},$$

$$p(x_i \mid \theta_i) = \frac{1}{\sqrt{2\pi}\,\sigma_x}\, e^{-\frac{(x_i - \theta_i)^2}{2\sigma_x^2}}, \qquad p(y_i \mid \varphi_i) = \frac{1}{\sqrt{2\pi}\,\sigma_y}\, e^{-\frac{(y_i - \varphi_i)^2}{2\sigma_y^2}}.$$

**Step 3: compute the posterior over the task variable using probability calculus.**

$$p(L \mid \mathbf{x}, \mathbf{y}) \propto p(L, \mathbf{x}, \mathbf{y}) = \iiint p(L)\,p(\boldsymbol\theta)\,p(\Delta)\,p(\boldsymbol\varphi \mid \boldsymbol\theta, \Delta, L)\,p(\mathbf{x} \mid \boldsymbol\theta)\,p(\mathbf{y} \mid \boldsymbol\varphi)\, d\Delta\, d\boldsymbol\theta\, d\boldsymbol\varphi \propto \iint p(\mathbf{x} \mid \boldsymbol\theta)\, p(\mathbf{y} \mid \boldsymbol\theta + \Delta\,\mathbf{1}_L)\, d\boldsymbol\theta\, d\Delta,$$

where $\mathbf{1}_L$ is the unit vector with a 1 in entry L.

**Step 4: pick a decoder, e.g. MAP.**

$$\hat L(\mathbf{x}, \mathbf{y}) = \arg\max_L\, e^{\frac{(x_L - y_L)^2}{2(\sigma_x^2 + \sigma_y^2)}} = \arg\max_L\, |x_L - y_L|.$$

**Step 5: Monte Carlo simulation.**

- Draw many sets (x, y) ("trials") from the generative model, but with the priors given by the experiment, in each experimental condition separately.
- Compute $\hat L(\mathbf{x}, \mathbf{y})$ on each trial → histograms $p(\hat L \mid \text{experimental condition})$.

### How to compare models to data

What makes model A better than model B? That it describes the data better. What do we mean by "describing better"? Lower error, higher goodness of fit. What is the right error or goodness-of-fit measure to use? Option 1: look it up in a statistics book / pull it out of a hat (t-test, R², χ², SSE).

### Maximum-likelihood fitting

Data D, model M:

$$p(M \mid D) \propto p(D \mid M)\, p(M)$$

(model likelihood × flat model prior). Find the model with the highest likelihood: $\arg\max_M p(D \mid M)$.

With model parameters θ, find the parameters that work best for the given model:

$$\hat\theta_{ML} = \arg\max_\theta p(D \mid M, \theta), \qquad p(D \mid M) \approx p(D \mid M, \hat\theta_{ML}).$$
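As a minimal numerical sketch of maximum-likelihood fitting (the synthetic data and the Gaussian model are chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data D: draws from a Gaussian with unknown mean and sd.
D = rng.normal(loc=3.0, scale=2.0, size=10_000)

def log_likelihood(D, mu, sigma):
    """log p(D | M, theta) for the Gaussian model, theta = (mu, sigma)."""
    return np.sum(-0.5 * ((D - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi)))

# Closed-form ML estimates for this model...
mu_ml = D.mean()
sigma_ml = D.std()  # ML uses 1/N, numpy's default ddof=0

# ...and a sanity check that they beat nearby parameter values on a grid.
best = max(
    (log_likelihood(D, m, s), m, s)
    for m in np.linspace(2.0, 4.0, 41)
    for s in np.linspace(1.0, 3.0, 41)
)
print(f"ML estimates:    mu = {mu_ml:.2f}, sigma = {sigma_ml:.2f}")
print(f"grid maximum at: mu = {best[1]:.2f}, sigma = {best[2]:.2f}")
```

The approximation $p(D \mid M) \approx p(D \mid M, \hat\theta_{ML})$ then simply plugs the maximizer back into the likelihood; the grid search here is only a check on the closed-form estimates.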
Repeat for all candidate models.

### Example: linear regression

Data D = (X, Y); model M: y = ax + b, with Gaussian noise of fixed variance.

$$p(D \mid M, \theta) = p(X, Y \mid a, b, \sigma) = p(Y \mid X, a, b, \sigma)\, p(X) \propto \prod_i e^{-\frac{(Y_i - aX_i - b)^2}{2\sigma^2}}$$

$$(\hat a, \hat b) = \arg\min_{a,b} \sum_i (Y_i - aX_i - b)^2$$

### Example: probability distributions

Data: a histogram (n₁, n₂, …, n_B). Model M: n is drawn from a multinomial with probabilities p_i(θ):

$$p(D \mid M, \theta) = p(\mathbf{n} \mid \mathbf{p}(\theta)) = \frac{N!}{n_1! \cdots n_B!}\; p_1(\theta)^{n_1} \cdots p_B(\theta)^{n_B}$$

$$\log p(D \mid M, \theta) = \sum_{i=1}^{B} n_i \log p_i(\theta) + \text{constant}$$

### Is a better fit always better?

[Figure: the same data fit with a line (y = 1.43x + 66), a quadratic, and a high-order polynomial that passes through every point with R² = 1; R² increases with the number of parameters.] Why is the perfect-fitting polynomial not a good model?

### Occam's razor (parsimony)

- "Simpler models are better": fewer assumptions, fewer parameters.
- But this is not a rigorous formulation, and it can only decide between two models that fit the data equally well.
- What is needed is a balance between complexity and power → Bayesian model comparison.

### Bayesian model comparison

$$p(\theta \mid D, M) \propto p(D \mid M, \theta)\, p(\theta \mid M), \qquad p(D \mid M) = \int p(D \mid M, \theta)\, p(\theta \mid M)\, d\theta,$$

i.e., the goodness of fit averaged over all possible parameter combinations. How does this help? Assume p(θ|M) is flat:

$$p(\theta \mid M) = \frac{1}{\text{volume of parameter space}}, \qquad p(\theta \mid D, M) \propto p(D \mid M, \theta) \cdot \frac{1}{\text{volume of parameter space}}.$$

Many parameters → large volume. In the likelihood landscape, p(D|M,θ) is high if the data are fit well compared to other possible data; the evidence is this landscape, unnormalized over θ but averaged.

Bayesian model comparison penalizes:

- Poorly fitting models: p(D|M,θ) is low overall.
- Non-specific models: the peak of p(D|M,θ) is low, since it is normalized over D.
- Models that have to be finely tuned: the width of p(D|M,θ) is low.
- Models with many parameters: low p(θ|M).
- Models with a poor choice of prior: the range of parameters in p(θ|M) doesn't overlap with p(D|M,θ).

### How to compute the integral

$$p(D \mid M) = \int p(D \mid M, \theta)\, p(\theta \mid M)\, d\theta$$

Sum over all possible parameter combinations. Say there are 4 parameters, each taking 50 values, and each model simulation takes 10 ms → 50⁴ × 10 ms ≈ 17 hours. An approximation would be useful.

### Approximating it

The peak of p(D|M,θ) is p(D|M,θ_MAP); the width of p(D|M,θ) is σ_{θ|D}; the width of p(θ|M) is σ_θ. Then

$$p(D \mid M) = \int p(D \mid M, \theta)\, p(\theta \mid M)\, d\theta \approx p(D \mid M, \theta_{MAP})\, \frac{\sigma_{\theta \mid D}}{\sigma_\theta}.$$

The factor σ_{θ|D}/σ_θ is the Occam factor; compare with p(D|M) ≈ p(D|M, θ_ML). A more careful version is the Laplace approximation.
### Laplace approximation

$$p(D \mid M) \approx p(D \mid M, \theta_{MAP})\, p(\theta_{MAP} \mid M)\, \sqrt{\frac{(2\pi)^d}{\det H}},$$

where H is the Hessian of the negative log posterior,

$$H = -\nabla\nabla \log p(\theta \mid D, M)\,\big|_{\theta = \theta_{MAP}}.$$

Exercises: prove this. What is H when the posterior is a multivariate Gaussian centered at θ_MAP?

### Goodness of a model

$$p(M \mid D) \propto p(D \mid M)\, p(M), \qquad p(D \mid M) = \int p(D \mid M, \theta)\, p(\theta \mid M)\, d\theta.$$

Relative goodness of two models:

$$\log \frac{p(M_1 \mid D)}{p(M_2 \mid D)} = \log \frac{p(D \mid M_1)\, p(M_1)}{p(D \mid M_2)\, p(M_2)}, \qquad p(D \mid M_i) = \int p(D \mid M_i, \theta)\, p(\theta \mid M_i)\, d\theta.$$

### Exercises

- Exercise 28.1: Random variables x come independently from a probability distribution P(x). According to model H0, P(x) is a uniform distribution; according to model H1, P(x) is a nonuniform distribution with an unknown parameter m. Compare the evidence for the two models.
- Exercise 28.2: Datapoints (x_n, y_n) are believed to come from a straight line. The experimenter chooses x_n, and y_n is Gaussian-distributed about y = w₀ + w₁x with variance σ². According to model H1 the straight line is horizontal, so w₁ = 0; according to model H2, w₁ is a parameter with prior distribution Normal(0, 1). Both models assign the same prior to w₀. Given the dataset D, and assuming the noise level is σ = 1, what is the evidence for each model?

(David MacKay, Information Theory, Inference, and Learning Algorithms, 2003.)

### Bayesian model comparison and Gestalt laws

The "law of continuity" via Bayesian model comparison. [Figure: a pattern interpreted either as two continuous lines or as two angles meeting at a point.]

- Model 1: 2 lines; each line has 2 free parameters → 4 free parameters.
- Model 2: 2 angles; each angle has 4 free parameters → 8 free parameters.
- Assume each parameter takes 50 values, with uniform priors.

If both models fit the data equally well,

$$\frac{p(D \mid M_1)\, p(M_1)}{p(D \mid M_2)\, p(M_2)} \approx \frac{(1/50)^4}{(1/50)^8} = 50^4 = 6{,}250{,}000$$

in favor of the two-line interpretation.

### Open questions

- Can the Gestalt laws be written as outcomes of Bayesian model comparison?
- Can such Bayesian models be tested by changing parameters and measuring human behavior?
- How is hierarchical inference implemented in neural networks?

### Small project

Auditory-visual speech perception data: identify a syllable as "ba" or "da". Factorial design: auditory level × visual level; in each condition, responses "ba" and "da" are counted. [Figure: factorial design matrix, auditory levels × visual levels, from "ba" to "da".] (Massaro et al., 1993; httpmamboucscedupsldatama5593ahtml)

### Approach
1. **Model structure.**
   a. Inference model vs. modeler's model.
   b. What are the free parameters?
   c. First pass: fix the feature values of the intermediate stimuli, equidistant and equal between modalities. [Figure: feature axis from "ba" to "da" with matched intermediate levels V2/A2, V3/A3, V4/A4.]
2. **Predict responses using the Bayesian model.**
   a. Assume conditional independence.
   b. Collapse onto two categories.
   c. Assume the variances are independent of s.
   d. Make other assumptions if necessary. [Figure: auditory, visual, and auditory-visual likelihoods.]
3. **Is the Bayesian model better than the established model?**
   a. Work out the alternative model, FLMP (which multiplies response frequencies).
   b. Maximum-likelihood fitting.
   c. Bayesian comparison: integrate over the free parameters, approximating where necessary.
4. **Discuss results and caveats.**

Due by Saturday, April 11.

## Cue combination

### Why study cue combination?

- Very common, within and between modalities.
- A simple computation, but still a computation.
- Illustrates key notions of Bayesian optimality.
- Can be linked to a neural basis.

[Slide of example studies; titles recovered from the residue:]

- "Humans integrate visual and haptic information in a statistically optimal fashion."
- "How humans combine simultaneous proprioceptive and visual position information" (van Beers, Sittig, Denier van der Gon).
- "Optimal integration of texture and motion cues to depth" (Jacobs).
- "The ventriloquist effect results from near-optimal bimodal integration" (Alais & Burr, Current Biology 14, 257–262, 2004).
- "Motion illusions as optimal percepts" (Weiss, Simoncelli, Adelson).
- "Humans integrate stereo and texture information for judgments of surface slant" (Knill & Saunders).
- "Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space" (Ma, Zhou, Ross, Foxe, Parra).

### What is he saying?

Perceived: auditory "ba" + visual "ga" → "da" (McGurk and MacDonald, Nature, 1976). Demo from httpwwwmediauionopersonerarntmMcGurkengishhtm

### Why does this happen?

- The syllables are very similar → the conflict is not noticed.
- Both stimuli come with uncertainty.
- Integrating sound and vision is normally useful.
- The brain interprets observations in terms of their causes: perception as inference.

[Cartoon slide: "Let's start with the forensic evidence… umm hmmm… D'oh!" — "It was ba" / "It was ga."]

### Generative model

Speech s → sound → x_A (auditory, corrupted by noise); s → lips → x_V (visual, corrupted by noise). Task: infer s.

Exercise: what is the posterior over s given this generative model?

$$p(s \mid x_A, x_V) \propto p(x_A, x_V \mid s)\, p(s) = p(x_A \mid s)\, p(x_V \mid s)\, p(s).$$

Conditional independence → multiplying likelihood functions.

### Single source or two sources?

- This generative model assumes that there is a single source.
- In most cue integration experiments there are in fact two sources; however, these are kept close enough for the subject to believe that the conflict is due to noise and that there is really one source.
- Later we will examine the case when there can be one or two sources.

### Assumptions about these distributions

$$p(s \mid x_A, x_V) \propto p(x_A \mid s)\, p(x_V \mid s)\, p(s),$$

$$p(x_A \mid s) = \frac{1}{\sqrt{2\pi}\,\sigma_A}\, e^{-\frac{(x_A - s)^2}{2\sigma_A^2}}, \qquad p(x_V \mid s) = \frac{1}{\sqrt{2\pi}\,\sigma_V}\, e^{-\frac{(x_V - s)^2}{2\sigma_V^2}}, \qquad p(s) = \text{constant}.$$

### Cue integration without artificial conflict

$$p(s \mid x_A, x_V) \propto p(x_A \mid s)\, p(x_V \mid s)$$

[Figure: auditory and visual likelihoods and the combined, narrower belief.]

### Cue integration with artificial conflict

Not really different. [Figure: auditory and visual likelihoods centered on slightly different values x_A and x_V, and the combined belief.]

### Exercise

Given $p(s \mid x_A, x_V) \propto p(x_A \mid s)\, p(x_V \mid s)$ with the Gaussian likelihoods above, show that $p(s \mid x_A, x_V)$ is a normal distribution over s with mean

$$\mu = \frac{w_A x_A + w_V x_V}{w_A + w_V},$$
where $w_A = 1/\sigma_A^2$ and $w_V = 1/\sigma_V^2$, and with standard deviation

$$\sigma_{AV} = \frac{\sigma_A \sigma_V}{\sqrt{\sigma_A^2 + \sigma_V^2}}, \qquad \text{or equivalently} \qquad \frac{1}{\sigma_{AV}^2} = \frac{1}{\sigma_A^2} + \frac{1}{\sigma_V^2}.$$

### Weighting by reliability

- Urban legend about Bayesian inference: that it is "all about the prior". Here we assumed a flat prior, yet this is still Bayesian inference.
- The key is taking into account the uncertainty (σ_A, σ_V) on a single trial → this allows weighting by reliability.
- It requires knowledge of uncertainty, which is automatic in Bayesian coding: the posterior distribution p(s|r).
- Bayesian inference is about keeping track of probability distributions over stimuli, instead of just single values.

### Another urban legend

Bayesian inference is the same as Bayesian decoding? No:

- Over many trials, the internal representation follows a distribution when conditioned on a particular stimulus value: p(x_V|s). [Figure: probability of the internal representation x.]
- However, on a single trial, the brain has to perform inference over the stimulus based on a single set of noisy internal representations x_A, x_V: what was s?
- On a single trial we have a posterior distribution over s, p(s|x_A, x_V). This is a belief, not an empirical distribution. [Figure: posterior over the stimulus s.]
- On a single trial, the posterior produces a single response:

$$\hat s(x_A, x_V) = \frac{w_A x_A + w_V x_V}{w_A + w_V}.$$

- Across many repetitions of the same stimulus s, the responses form a response distribution $p(\hat s \mid s)$. This distribution can be measured experimentally.

### Experimental techniques

Estimation; discrimination → psychometric curve. (These can also be regarded as an extra step in the generative model.)

### Exercise

Given $\hat s = \frac{w_A x_A + w_V x_V}{w_A + w_V}$ with $w_A = 1/\sigma_A^2$ and $w_V = 1/\sigma_V^2$, calculate the mean and variance of $p(\hat s \mid s)$. What if x_A is drawn from $p(x_A \mid s_A) = \frac{1}{\sqrt{2\pi}\,\sigma_A} e^{-\frac{(x_A - s_A)^2}{2\sigma_A^2}}$ and x_V from $p(x_V \mid s_V) = \frac{1}{\sqrt{2\pi}\,\sigma_V} e^{-\frac{(x_V - s_V)^2}{2\sigma_V^2}}$, with $s_A \neq s_V$ (a cue conflict)?

### The posterior wiggles around from trial to trial

[Figure: posterior distributions on several individual trials.]

- The variance of the response distribution is equal to the variance of the posterior.
- The relation between a single-trial estimate and the observations is the same as that between the mean estimate and the true stimuli.
- Is this generally true?
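For the Gaussian model above, the equality of the two variances can be checked by simulation (the σ values and the stimulus below are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

sigma_A, sigma_V, s = 2.0, 1.0, 5.0   # assumed values
w_A, w_V = 1 / sigma_A**2, 1 / sigma_V**2

# Many trials: noisy internal representations, one estimate per trial.
n_trials = 200_000
x_A = rng.normal(s, sigma_A, n_trials)
x_V = rng.normal(s, sigma_V, n_trials)
s_hat = (w_A * x_A + w_V * x_V) / (w_A + w_V)

# Variance of the single-trial posterior (the same on every trial here).
var_posterior = 1 / (w_A + w_V)  # = sigma_A^2 sigma_V^2 / (sigma_A^2 + sigma_V^2)

print(f"posterior variance:             {var_posterior:.4f}")
print(f"response-distribution variance: {s_hat.var():.4f}")
print(f"mean estimate: {s_hat.mean():.3f} (true s = {s})")
```

The empirical variance of $\hat s$ across trials matches $1/(w_A + w_V)$, the single-trial posterior variance.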
No: it is a consequence of the Gaussian distributions and the multiplicative operation. In general, there are three completely different distributions:

- **Response distribution** (many trials): $p(\hat s \mid s)$.
- **Noise distribution** (many trials): $p(x \mid s)$.
- **Posterior distribution** (single trial): $p(s \mid x)$.

[Figure: the three distributions, plotted over estimates $\hat s$, internal representations x, and stimuli s, respectively.]

Do not confuse them — a common mistake in Bayesian modeling. There is no direct way to measure the posterior on a single trial.

### Neural version

- Response distribution (many trials): $p(\hat s \mid s)$.
- Noise distribution (many trials): $p(\mathbf{r} \mid s)$.
- Posterior distribution (single trial): $p(s \mid \mathbf{r}_A, \mathbf{r}_V)$.

[Figure: activity versus preferred stimulus; probability versus stimulus s.]

### Exercises

- What is the general equation for the response distribution $p(\hat s \mid s)$, assuming some decoder, in terms of the posterior distribution?
- Work out a case where the posterior distribution and the response distribution are both continuous but very different from each other. For example, choose non-Gaussian distributions and/or a more complex generative model.
- Bonus: make as general as possible the conditions under which the variances of the posterior and of the response distribution are the same.

### Fisher information

$$I_{AV}(s) = \left\langle -\frac{\partial^2}{\partial s^2} \log p(x_A, x_V \mid s) \right\rangle = \left\langle -\frac{\partial^2}{\partial s^2} \log p(x_A \mid s)\, p(x_V \mid s) \right\rangle = \left\langle -\frac{\partial^2}{\partial s^2} \log p(x_A \mid s) \right\rangle + \left\langle -\frac{\partial^2}{\partial s^2} \log p(x_V \mid s) \right\rangle = I_A(s) + I_V(s).$$

Optimal cue integration preserves Fisher information. What does this mean in the Gaussian cue integration case?

### Non-optimal cue integration

What was s? Example: suppose $\sigma_A^2 = 100$ and $\sigma_V^2 = 1$. Averaging the cues, $\hat s = \frac{x_A + x_V}{2}$, gives an estimate with variance $\frac{\sigma_A^2 + \sigma_V^2}{4} = \frac{101}{4} \approx 25$. The optimal estimate is $\hat s = \frac{0.01\, x_A + x_V}{1.01}$, with variance $\frac{\sigma_A^2 \sigma_V^2}{\sigma_A^2 + \sigma_V^2} = \frac{100}{101} \approx 0.99$.

### Multisensory bias

In the presence of a cue conflict between s_A and s_V, what is the mean multisensory estimate? [Figure: Alais and Burr (2004); mean estimate versus audiovisual conflict Δ (degrees) for visual blobs of different sizes; the lines are the slopes predicted from the unisensory experiments.]

### Exercise

Even when a single stimulus has to be inferred from a single cue, a bias can arise due to a prior. Assuming a Gaussian noise model and a Gaussian prior with specified mean and variance,
compute the bias as a function of the stimulus.

## Causal inference

- You don't always integrate cues: two cues often have two different sources.
- How to decide whether there are one or two sources? Bayesian inference on the number of sources.

### Generative model

Number of sources C: with probability p(C = 1), a single source s generates both x_A and x_V; with p(C = 2), separate sources s_A and s_V generate x_A and x_V.

$$p(s_A \mid x_A, x_V) = \sum_C p(s_A \mid x_A, x_V, C)\, p(C \mid x_A, x_V)$$

$$p(C \mid x_A, x_V) \propto p(x_A, x_V \mid C)\, p(C) = p(C) \iint p(x_A, x_V \mid s_A, s_V)\, p(s_A, s_V \mid C)\, ds_A\, ds_V = p(C) \iint p(x_A \mid s_A)\, p(x_V \mid s_V)\, p(s_A, s_V \mid C)\, ds_A\, ds_V$$

With $p(s_A, s_V \mid C = 1) = k\, \delta(s_A - s_V)$ (a flat prior of height k along the diagonal):

$$p(C = 1 \mid x_A, x_V) \propto k\, p(C = 1) \iint p(x_A \mid s_A)\, p(x_V \mid s_V)\, \delta(s_A - s_V)\, ds_A\, ds_V = k\, p(C = 1) \int p(x_A \mid s_A)\, p(x_V \mid s_A)\, ds_A$$

$$= k\, p(C = 1)\, \frac{1}{\sqrt{2\pi(\sigma_A^2 + \sigma_V^2)}}\, e^{-\frac{(x_A - x_V)^2}{2(\sigma_A^2 + \sigma_V^2)}}.$$

### Ventriloquist effect

Bayesian explanation. [Figure.]

### Small project

Auditory-visual speech perception data (Massaro et al., 1993): identify a syllable as "ba" or "da". Factorial design: auditory level × visual level; in each condition, responses "ba" and "da" are counted. Predict the responses using a Bayesian model, and compare the predictions with those of the established model, FLMP. (httpmamboucscedupsldatama5593ahtml)
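The marginalization for $p(x_A, x_V \mid C = 1)$ can be checked numerically (the σ values, the observations, and the flat-prior range below are assumptions for illustration):

```python
import numpy as np

sigma_A, sigma_V = 1.0, 1.5   # assumed noise levels
x_A, x_V = 0.3, 1.1           # one trial's observations

# p(x_A, x_V | C=1) = integral of p(x_A|s) p(x_V|s) p(s) ds,
# with a flat prior p(s) = k over a wide grid.
s = np.linspace(-30.0, 30.0, 200_001)
k = 1.0 / (s[-1] - s[0])      # flat prior density over the grid
lik = (np.exp(-0.5 * ((x_A - s) / sigma_A) ** 2) / (sigma_A * np.sqrt(2 * np.pi))
       * np.exp(-0.5 * ((x_V - s) / sigma_V) ** 2) / (sigma_V * np.sqrt(2 * np.pi)))
numeric = k * np.sum(lik) * (s[1] - s[0])

# Gaussian closed form (as derived above): k * N(x_A - x_V; 0, sigma_A^2 + sigma_V^2)
var = sigma_A**2 + sigma_V**2
closed = k * np.exp(-0.5 * (x_A - x_V) ** 2 / var) / np.sqrt(2 * np.pi * var)

print(f"numeric integral: {numeric:.6e}")
print(f"closed form:      {closed:.6e}")
```

Doing the same for C = 2 (independent s_A and s_V under the same flat prior) and weighting by p(C) yields the posterior over the number of sources.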
