THEORETICAL NEUROSCIENCE
CAAM 415, Applied Mathematics, Rice University
Class notes by Walker Witting

About this Document

These 25 pages of class notes were uploaded by Walker Witting on Monday, October 19, 2015. They cover CAAM 415 at Rice University, taught by Staff in the fall term.

Learning (Lecture 8)

Examples of learning:
- Learning to play the piano
- Learning to win at chess
- Learning to cross the street safely (arrive at curb, look to both sides)
- Learning to catch prey
- Learning theoretical neuroscience
- The brain developing its representation of objects

Three types of learning:
- Supervised learning: using feedback, learn to produce the correct output given a new input
- Reinforcement learning: learn to act in a way that maximizes future rewards
- Unsupervised learning: find structure in data

Hebbian learning

How do neural networks change in response to input? Activity-dependent synaptic plasticity is the neural basis of learning and memory: "Neurons that fire together wire together." Hebbian learning can be supervised or unsupervised.

Notation: presynaptic activity u, postsynaptic activity v, weights w.

Simple Hebb rule:
    \tau_w \frac{dw}{dt} = v u

Averaged Hebb rule (average over the ensemble of inputs):
    \tau_w \frac{dw}{dt} = \langle v u \rangle

This rule is unstable: a constraint is needed to prevent the weights from growing indefinitely.

Simple case: v = w \cdot u. The averaged Hebb rule then becomes the correlation-based rule
    \tau_w \frac{dw}{dt} = \langle u u^T \rangle w = Q w
where Q = \langle u u^T \rangle is the input correlation matrix. With subtractive normalization and a constraint on the weights, this rule can predict ocular dominance columns.

[Figure: measured ocular dominance column pattern; Adams et al., 2007]

Supervised Hebbian learning

Train on paired samples (u^s, v^s). With decay:
    \tau_w \frac{dw}{dt} = \langle v^s u^s \rangle - \alpha w
Steady state:
    w = \frac{1}{\alpha} \langle v^s u^s \rangle
The weights are proportional to the input-output cross-correlation.

Neural networks for function approximation

Goal: compute a new function of the stimulus s. The output neuron v should acquire the tuning h(s). With input tuning curves u = f(s), the output is
    v = w \cdot u = w \cdot f(s) = \sum_a w_a f_a(s)

Learning rule: minimize the squared error over the training data,
    E = \langle (h(s) - v(s))^2 \rangle

Gradient descent:
    w \to w - \epsilon \nabla_w E = w + \epsilon \langle (h(s) - v(s)) f(s) \rangle

Stochastic gradient descent (the delta rule), applied one training sample at a time (see the sketch below):
    w \to w + \epsilon (h(s) - v(s)) f(s)
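The delta rule above translates directly into code. Below is a minimal sketch, assuming Gaussian input tuning curves for f(s) and sin(s) as the target h(s); the number of units, tuning width, learning rate, and sample count are illustrative choices, not values from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

n_units = 50                            # number of input tuning curves (assumed)
centers = np.linspace(-np.pi, np.pi, n_units)
sigma = 0.3                             # tuning-curve width (assumed)

def f(s):
    """Input tuning curves u = f(s): Gaussian bumps over the stimulus."""
    return np.exp(-(s - centers) ** 2 / (2 * sigma ** 2))

def h(s):
    """Target function the output neuron should acquire (assumed)."""
    return np.sin(s)

w = np.zeros(n_units)                   # feedforward weights
eps = 0.05                              # learning rate (assumed)

# Delta rule: w -> w + eps * (h(s) - v(s)) * f(s), one sample at a time.
for _ in range(5000):
    s = rng.uniform(-np.pi, np.pi)      # random training stimulus
    u = f(s)
    v = w @ u                           # output v = w . f(s)
    w += eps * (h(s) - v) * u           # stochastic gradient step

# Compare learned output to the target on a test grid.
s_test = np.linspace(-np.pi, np.pi, 7)
print(np.round([w @ f(s) for s in s_test], 2))
print(np.round(h(s_test), 2))
```

Because each update uses a single random stimulus, this is the stochastic version of the rule; averaging the updates over many samples recovers the batch gradient-descent rule above.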
Representational learning

How do neurons acquire their response selectivities? Natural images are richly structured and highly constrained. The system learns the statistical structure of visual images and builds a model that reproduces that structure (a generative model), then uses it to identify objects in particular images (a recognition model).

Types of representational learning: mixture of Gaussians, factor analysis, principal component analysis, independent component analysis, sparse coding, Helmholtz machine.

Sparse coding

Inference on retinal images: model images as a linear superposition of basis functions. Why sparseness? A simple representation minimizes interference between different patterns of input and saves energy; statistical independence reduces redundancy.

Image model:
    I(x, y) = \sum_i a_i \phi_i(x, y) + \eta(x, y)
with image I, basis functions \phi_i, coefficients (output activities) a_i, and noise \eta with variance \sigma^2. When the basis set is overcomplete (more output units than input units), there are infinitely many choices of the coefficients a for a given basis.

Questions:
- When a given image is presented, what should the output activity a be? (recognition model)
- Across all images, what is the best choice of basis functions? (generative model)

Prior over output activities:
    p(a) = \prod_i p(a_i), \quad p(a_i) \propto e^{-S(a_i)}
Small coefficients are favored (sparseness).

[Figure: heavy-tailed prior p(a_i) over the coefficients]

Collect the basis functions as the columns of a matrix \Phi. Assume for now that \Phi is fixed and known. The posterior over the coefficients given an image I is
    p(a | I, \Phi) \propto p(I | a, \Phi) \, p(a | \Phi)
    p(I | a, \Phi) \propto e^{-|I - \Phi a|^2 / 2\sigma^2}, \quad p(a | \Phi) \propto \prod_i e^{-S(a_i)}

Best possible coefficients (MAP estimate):
    \hat{a} = \arg\max_a p(a | I, \Phi) = \arg\max_a \log p(a | I, \Phi) = \arg\min_a \left[ \frac{|I - \Phi a|^2}{2\sigma^2} + \sum_i S(a_i) \right]

Learning the coefficients through gradient descent:
    \Delta a \propto -\nabla_a \left[ \frac{|I - \Phi a|^2}{2\sigma^2} + \sum_i S(a_i) \right] = \frac{\Phi^T (I - \Phi a)}{\sigma^2} - S'(a)
where (I - \Phi a) is the residual image. This can be implemented in a recurrent neural network.

Learning the basis functions

So far \Phi was fixed and the coefficients a were learned. What about different images I? Maximize the average log likelihood of the parameters (equivalently, minimize the KL distance between model and data). Objective function for learning, the log likelihood of the model:
    L = \langle \log p(I | \Phi) \rangle, \quad p(I | \Phi) = \int p(I | a, \Phi) \, p(a) \, da
(average over input images p(I)). Gradient descent on \Phi:
    \Delta\Phi \propto \frac{\partial L}{\partial \Phi} = \frac{1}{\sigma^2} \left\langle \langle (I - \Phi a) a^T \rangle_{p(a | I, \Phi)} \right\rangle
which is a Hebbian learning rule. Approximating the posterior by its maximum:
    \Delta\Phi \propto \langle (I - \Phi \hat{a}) \hat{a}^T \rangle
A constraint is needed to prevent growth without bound.

Exercise: Reproduce this. Using sparse coding, learn Gabor-like basis functions from any set of photos. Make assumptions where necessary. Due April 26 by email. (A starter sketch appears at the end of these notes.)

Expectation maximization

Objective function with two parameter sets, the free energy:
    F = \langle \log p(a, I | \Phi) \rangle
- Step 1 (expectation): fix \Phi, find a.
- Step 2 (maximization): fix a, optimize \Phi.
Repeat. This converges to a local maximum.

References
- Olshausen BA, Field DJ (1996). Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 381:607-609.
- Olshausen BA (2002). Sparse codes and spikes. In: Probabilistic Models of the Brain. MIT Press.
- Dayan P, Abbott LF (2001). Theoretical Neuroscience. MIT Press. Chapter 10.

Lectures so far:
- Neural population coding: how to decode
- Role of correlations in information processing
- Perception as Bayesian inference
- Cue combination
- Bayesian models of behavioral data
- Bayesian model comparison
- Neural implementation of Bayesian inference
- Models of perceptual decision-making
- Representational learning: sparse coding
- Thursday: optimal inference in higher-level cognition
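As a starting point for the exercise above, here is a minimal sketch of the full sparse-coding loop under stated assumptions: MAP inference of the coefficients by gradient descent on \Phi^T(I - \Phi a)/\sigma^2 - S'(a), followed by the basis update \Delta\Phi \propto \langle (I - \Phi \hat{a}) \hat{a}^T \rangle with column renormalization as the constraint against unbounded growth. The patch size, degree of overcompleteness, penalty S(a) = \lambda \log(1 + a^2), noise variance, and learning rates are all assumptions; Gabor-like basis functions emerge only from whitened natural-image patches with far more training than shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions (illustrative): 8x8 image patches, 2x-overcomplete basis.
n_pix, n_basis = 64, 128
sigma2 = 0.1   # assumed noise variance sigma^2
lam = 0.2      # assumed sparseness weight lambda

# Random initial basis functions, one per column, normalized to unit length.
Phi = rng.normal(size=(n_pix, n_basis))
Phi /= np.linalg.norm(Phi, axis=0)

def S_prime(a):
    """Derivative of the assumed sparseness penalty S(a) = lam * log(1 + a^2)."""
    return lam * 2.0 * a / (1.0 + a ** 2)

def infer(I, Phi, n_steps=300, dt=0.02):
    """MAP coefficients by gradient descent:
    da ~ Phi^T (I - Phi a) / sigma^2 - S'(a)."""
    a = np.zeros(Phi.shape[1])
    for _ in range(n_steps):
        residual = I - Phi @ a                 # residual image
        a += dt * (Phi.T @ residual / sigma2 - S_prime(a))
    return a

def learn(patches, Phi, n_epochs=10, eta=0.01):
    """Basis update DeltaPhi ~ <(I - Phi a_hat) a_hat^T>, with column
    renormalization as the constraint against unbounded growth."""
    for _ in range(n_epochs):
        for I in patches:
            a_hat = infer(I, Phi)
            Phi = Phi + eta * np.outer(I - Phi @ a_hat, a_hat)
        Phi /= np.linalg.norm(Phi, axis=0)
    return Phi

# Usage: `patches` should be whitened image patches flattened to vectors;
# random vectors stand in here so the sketch runs end to end.
patches = rng.normal(size=(100, n_pix))
Phi = learn(patches, Phi)
print(Phi.shape)  # (64, 128)
```

Alternating `infer` (coefficients with the basis fixed) and the basis update (with the inferred coefficients fixed) is exactly the expectation-maximization scheme sketched above, so the loop converges to a local maximum of the likelihood.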

