THEORETICAL NEUROSCIENCE CAAM 415
Lecture 8: Learning

Examples of learning:
- Learning to play the piano
- Learning to win at chess
- Learning to cross the street safely
- Learning to catch prey
- Learning theoretical neuroscience
- The brain developing its representation of objects

Three types of learning:
- Supervised learning: using feedback, learn to produce the correct output given a new input
- Reinforcement learning: learn to act in a way that maximizes future rewards
- Unsupervised learning: finding structure in data

Hebbian learning
- How neural networks change in response to input: activity-dependent synaptic plasticity
- The neural basis of learning and memory: "Neurons that fire together wire together"
- [Diagram: example association for crossing the street: arrive at curb -> look to both sides]
- Can be supervised or unsupervised
- Presynaptic activity u, postsynaptic activity v, weights w
- Simple Hebb rule: \tau_w \, dw/dt = v u
- Averaged Hebb rule: \tau_w \, dw/dt = \langle v u \rangle, where \langle \cdot \rangle is an average over the ensemble of inputs
- Unstable: a constraint is needed to prevent the weights from growing indefinitely (see the first sketch below)
- Simple (linear) case: v = w \cdot u, so the averaged Hebb rule becomes
    \tau_w \, dw/dt = \langle u u^T \rangle w = Q w
  a correlation-based rule
- With subtractive normalization and a constraint, this rule can predict ocular dominance columns
  [Figure: ocular dominance columns; Adams et al., 2007]

Supervised Hebbian learning
- Paired samples (u^s, v^s)
- With decay: \tau_w \, dw/dt = \langle v^s u^s \rangle - \alpha w
- Steady state: w = \langle v^s u^s \rangle / \alpha
- Weights are proportional to the input-output cross-correlation

Neural networks for function approximation
- Computing a new function: the output neuron v should have tuning h(s)
- Input tuning curves u = f(s); output v = w \cdot u = w \cdot f(s)
- Learning rule: minimize the error E = \langle (h(s) - v(s))^2 \rangle over the training data
- Gradient descent: w \to w - \epsilon \nabla_w E, i.e. w \to w + \epsilon \langle (h(s) - v(s)) f(s) \rangle
- Stochastic gradient descent (delta rule): w \to w + \epsilon (h(s) - v(s)) f(s), one training sample at a time (see the second sketch below)

Representational learning
- How do neurons acquire their response selectivities?
- Natural images are richly structured and highly constrained
- The system learns the statistical structure of visual images and builds a model that reproduces this structure -> generative model
- It uses this model to identify objects in particular images -> recognition model

Types of representational learning:
- Mixture of Gaussians
- Factor analysis
- Principal component analysis
- Independent component analysis
- Sparse coding
- Helmholtz machine

Sparse coding
- Inference on retinal images: model images as a linear superposition of basis functions
- Sparseness gives a simple representation, minimizes interference between different patterns of input, and saves energy
- Statistical independence reduces redundancy

Image model:
    I(x, y) = \sum_i a_i \phi_i(x, y) + \eta(x, y)
with image I, basis functions \phi_i, coefficients (output activities) a_i, and noise \eta with variance \sigma^2. There are infinitely many solutions for a when the basis set is overcomplete (more output units than input units).

Questions:
- When a given image is presented, what should the output activity a be? (recognition model)
- Across all images, what is the best choice of basis functions? (generative model)

Prior over output activities:
    p(a) = \prod_i p(a_i), with p(a_i) \propto e^{-S(a_i)}
Small coefficients are favored (sparseness). [Plot: the prior p(a_i), peaked at zero]

Collect the basis functions as the columns of a matrix \Phi, so that I = \Phi a + \eta, and assume for now that \Phi is fixed and known. The posterior over the coefficients given an image I is
    p(a | I, \Phi) \propto p(I | a, \Phi) \, p(a | \Phi)
    p(I | a, \Phi) \propto e^{-|I - \Phi a|^2 / (2\sigma^2)}
    p(a | \Phi) \propto \prod_i e^{-S(a_i)}

Best possible coefficients:
    \hat{a} = \arg\max_a p(a | I, \Phi) = \arg\max_a \log p(a | I, \Phi)
            = \arg\min_a [ |I - \Phi a|^2 / (2\sigma^2) + \sum_i S(a_i) ]

Learning the coefficients through gradient descent:
    \Delta a \propto \Phi^T (I - \Phi a) / \sigma^2 - S'(a)
where (I - \Phi a) is the residual image. This can be implemented in a recurrent neural network (see the third sketch below).
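First sketch: a minimal numerical illustration of the averaged Hebb rule above and its instability. The weight vector grows without bound and aligns with the dominant eigenvector of the input correlation matrix Q = \langle u u^T \rangle. All variable names and parameter values here are illustrative assumptions, not from the lecture.

    import numpy as np

    rng = np.random.default_rng(0)

    # Correlated 2-D presynaptic inputs u; Q = <u u^T> is their correlation matrix.
    cov = np.array([[1.0, 0.8],
                    [0.8, 1.0]])
    U = rng.multivariate_normal(np.zeros(2), cov, size=10_000)
    Q = U.T @ U / len(U)

    # Averaged Hebb rule, tau_w dw/dt = Q w, integrated with Euler steps.
    w = 0.01 * rng.standard_normal(2)
    dt, tau_w = 0.1, 1.0
    for _ in range(200):
        w += (dt / tau_w) * (Q @ w)

    # |w| diverges; the direction of w aligns with the top eigenvector of Q.
    vals, vecs = np.linalg.eigh(Q)
    print("|w| =", np.linalg.norm(w))
    print("alignment with top eigenvector:", abs(w @ vecs[:, -1]) / np.linalg.norm(w))

The growth factor per step is roughly (1 + dt \lambda_{max} / \tau_w), so after 200 steps |w| has blown up by many orders of magnitude; this is why a constraint such as subtractive normalization is needed.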
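Second sketch: the delta rule for function approximation. Learn weights w so that the output v(s) = w \cdot f(s) matches a target tuning curve h(s). The Gaussian input tuning curves, the sine target, and all parameters are my assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(1)

    centers = np.linspace(-2, 2, 20)                 # preferred stimuli of the input neurons
    width = 0.3

    def f(s):
        """Input population tuning curves f(s)."""
        return np.exp(-(s - centers) ** 2 / (2 * width ** 2))

    def h(s):
        """Target tuning curve for the output neuron."""
        return np.sin(s)

    w = np.zeros(len(centers))
    eps = 0.05
    for _ in range(5000):
        s = rng.uniform(-2, 2)                       # one training stimulus at a time
        v = w @ f(s)                                 # output neuron activity
        w += eps * (h(s) - v) * f(s)                 # delta rule: w -> w + eps (h - v) f

    test = np.linspace(-2, 2, 9)
    print(np.round([w @ f(s) for s in test], 2))     # learned v(s) ...
    print(np.round(h(test), 2))                      # ... vs target h(s)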
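Third sketch: coefficient inference (the recognition step) by gradient descent on a, using the update \Delta a \propto \Phi^T (I - \Phi a)/\sigma^2 - S'(a). I assume the smooth sparseness penalty S(a) = \log(1 + a^2), so S'(a) = 2a/(1 + a^2), and a synthetic "image" built from a few active coefficients; these choices are illustrative.

    import numpy as np

    rng = np.random.default_rng(2)

    n_pix, n_basis = 64, 128                   # overcomplete: more output units than pixels
    Phi = rng.standard_normal((n_pix, n_basis))
    Phi /= np.linalg.norm(Phi, axis=0)         # unit-norm basis functions

    # Synthetic image: a sparse set of active coefficients plus noise.
    a_true = np.zeros(n_basis)
    active = rng.choice(n_basis, 5, replace=False)
    a_true[active] = 2.0 * rng.standard_normal(5)
    sigma = 0.1
    I = Phi @ a_true + sigma * rng.standard_normal(n_pix)

    # Gradient descent on a: da ∝ Phi^T (I - Phi a)/sigma^2 - S'(a).
    a = np.zeros(n_basis)
    step = 1e-3
    for _ in range(2000):
        residual = I - Phi @ a                 # the residual image
        a += step * (Phi.T @ residual / sigma ** 2 - 2 * a / (1 + a ** 2))

    print("truly active coefficients:", np.sort(active))
    print("largest inferred |a_i|:   ", np.sort(np.argsort(-np.abs(a))[:5]))

The residual-driven term can be computed by feedback connections, which is why this inference maps naturally onto a recurrent neural network.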
Learning the basis functions
- So far \Phi was fixed and the coefficients a were learned; what about different images I?
- Maximize the average log likelihood of the parameters (equivalently, minimize the KL distance)
- Objective function for learning: the log likelihood of the model \Phi,
    L = \langle \log p(I | \Phi) \rangle
  averaged over input images, with
    p(I | \Phi) = \int p(I | a, \Phi) \, p(a | \Phi) \, da
- Gradient descent on \Phi:
    \Delta\Phi \propto \partial L / \partial \Phi = \langle \langle (I - \Phi a) a^T \rangle_{p(a | I, \Phi)} \rangle
  This is Hebbian learning: residual image times output activity
- Approximate the posterior average by its maximum \hat{a}:
    \Delta\Phi \propto \langle (I - \Phi \hat{a}) \hat{a}^T \rangle
- A constraint is needed to prevent growth without bound

Exercise: Reproduce this. Using sparse coding, learn Gabor-like basis functions from any set of photos. Make assumptions where necessary. Due April 26, by email. (A starter sketch appears at the end of these notes.)

Expectation maximization
- Objective function with two parameter sets: the free energy F = \langle \log p(a, I | \Phi) \rangle
- Step 1: fix \Phi, find a (expectation)
- Step 2: fix a, optimize \Phi (maximization)
- Repeat; this converges to a local maximum

References
- Olshausen BA, Field DJ (1996). Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 381: 607-609.
- Olshausen BA (2002). Sparse codes and spikes. In: Probabilistic Models of the Brain. MIT Press.
- Dayan P, Abbott LF (2001). Theoretical Neuroscience, Chapter 10. MIT Press.

Lectures so far
- Neural population coding: how to decode
- Role of correlations in information processing
- Perception as Bayesian inference
- Cue combination
- Bayesian models of behavioral data
- Bayesian model comparison
- Neural implementation of Bayesian inference
- Models of perceptual decision-making
- Representational learning: sparse coding
Thursday: optimal inference in higher-level cognition
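Starter sketch for the exercise above: alternate coefficient inference (the expectation-like step) with the Hebbian basis update \Delta\Phi \propto (I - \Phi \hat{a}) \hat{a}^T, renormalizing the basis functions so they cannot grow without bound. Random patches stand in for real data here; to follow the exercise, replace them with whitened patches cut from your photos, in which case the columns of Phi should become Gabor-like (Olshausen & Field, 1996). All hyperparameters are assumptions.

    import numpy as np

    rng = np.random.default_rng(3)

    def infer(I, Phi, sigma=0.1, step=1e-3, n_iter=300):
        """Recognition step: gradient descent on the coefficients a."""
        a = np.zeros(Phi.shape[1])
        for _ in range(n_iter):
            a += step * (Phi.T @ (I - Phi @ a) / sigma ** 2 - 2 * a / (1 + a ** 2))
        return a

    n_pix, n_basis, eta = 64, 128, 0.01        # 8x8 patches, 2x overcomplete
    Phi = rng.standard_normal((n_pix, n_basis))
    Phi /= np.linalg.norm(Phi, axis=0)

    # Stand-in for whitened natural-image patches; use real photo patches instead.
    patches = rng.standard_normal((500, n_pix))

    for I in patches:
        a_hat = infer(I, Phi)                            # expectation-like step
        Phi += eta * np.outer(I - Phi @ a_hat, a_hat)    # Hebbian update: residual x activity
        Phi /= np.linalg.norm(Phi, axis=0)               # constraint against unbounded growth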