
# Machine Learning CMPSCI 689

UMass



This 55-page set of class notes was uploaded by Roman McCullough on Friday, October 30, 2015. The notes belong to CMPSCI 689 at the University of Massachusetts, taught by Sridhar Mahadevan in Fall. Since its upload, it has received 10 views. For similar materials see /class/232264/cmpsci-689-university-of-massachusetts in Computer Science at the University of Massachusetts.


Date Created: 10/30/15

## CMPSCI 689: Learning in Matrix Vector Spaces

Sridhar Mahadevan (mahadeva@cs.umass.edu), University of Massachusetts

### Outline

- Matrix vector spaces
- Basic matrix theory: eigenvalues and eigenvectors
- The multivariate Gaussian and the Mahalanobis distance
- Singular value decomposition and Google
- Discriminant functions and risk minimization

### Applications of Linear Algebra

- Image compression (JPEG, HDTV)
- Face recognition using eigenfaces (Moghaddam and Pentland):
  - solves the face recognition problem using linear algebra and multivariate Gaussian density estimation
  - eigenvectors produce a reduced representation of faces
- Information retrieval (Google): construct a singular value decomposition of the adjacency matrix underlying the web by doing a random walk, and look for dominant eigenvectors
- Early layers in the visual cortex compute reduced representations of high-dimensional sensory data (*Theoretical Neuroscience*, Dayan et al., MIT Press)

Reference for face recognition with eigenvectors: M. Turk and A. Pentland, "Eigenfaces for Recognition," *Journal of Cognitive Neuroscience* 3(1), 1991.

### Matrix Vector Spaces

(Figure: the four fundamental subspaces of an $m \times n$ matrix $A$. In $\mathbb{R}^n$: the row space $C(A^T)$, of dimension $r$, and the nullspace $N(A)$, of dimension $n - r$. In $\mathbb{R}^m$: the column space $C(A)$, of dimension $r$, and the left nullspace $N(A^T)$, of dimension $m - r$.)

### Column and Row Spaces

- The column space $C(A)$ of an $m \times n$ matrix $A$ is the set of all linear combinations of the columns of $A$: $Ax = A_1 x_1 + A_2 x_2 + \cdots + A_n x_n$, where $A_1$ is the first column of $A$, and so on.
- The row space $C(A^T)$ of an $m \times n$ matrix $A$ is the column space of the transpose $A^T$.
- The system of equations $Ax = b$ is solvable if and only if $b$ is in the column space of $A$.

### Matrix Null Spaces

- The null space $N(A)$ of an $m \times n$ matrix $A$ is the set of all vectors $x$ such that $Ax = 0$.
- The left null space $N(A^T)$ is the set of all vectors $x$ such that $A^T x = 0$.
- To find the nullspace of a given matrix, we solve the linear system $Ax = 0$.
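As a minimal numerical sketch of the column space and null space ideas above (the 2x2 matrix is an assumed illustrative example, not one from the slides):

```python
import numpy as np

# Assumed example: the second column is twice the first,
# so the column space is a line and the null space is nontrivial.
A = np.array([[1.0, 2.0],
              [3.0, 6.0]])

rank = np.linalg.matrix_rank(A)          # dimension of the column space C(A)

# A basis for the null space N(A): the right-singular vectors whose
# singular values are (numerically) zero.
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[rank:].T                 # columns span N(A)

print(rank)                              # 1
print(np.allclose(A @ null_basis, 0))    # every null-space vector satisfies Ax = 0: True
```

Here the nullspace is one-dimensional, consistent with the dimension formula $\dim N(A) = n - r$.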
For such a rank-deficient matrix, the nullspace $N(A)$ consists of all scalar multiples of the special solution of $Ax = 0$.

### Example of Matrix Vector Spaces

- Consider the matrix $A = \begin{pmatrix} 1 & 2 & 3 \end{pmatrix}$.
  - What is the column space $C(A)$?
  - What is the row space $C(A^T)$?
  - What is the null space $N(A)$?
  - What is the left null space $N(A^T)$?
- Repeat the above for a second example matrix.

### Independence

- The columns of a matrix $A$ are linearly independent if the only solution to $Ax = 0$ is $x = 0$; e.g., $(1, 0)$ and $(0, 1)$ are independent.
- Any set of $n$ vectors in $\mathbb{R}^m$ is dependent if $n > m$.
- A set of vectors *spans* a space if their linear combinations fill the space.
- A *basis* for a vector space is a set of linearly independent vectors that span the space.

### Subspaces and Dimension

- A subspace of a vector space is a set of vectors that contains the $0$ element and is closed under addition and scalar multiplication; e.g., all vectors on a line passing through $(0, 0)$ constitute a subspace of $\mathbb{R}^2$.
- The dimension of a vector space is the number of vectors in every basis; e.g., the dimension of $\mathbb{R}^n$ is $n$.
- The dimension of the row space is $r$, the number of independent rows; the dimension of the column space is also $r$, the number of independent columns.
- If the dimension of the row/column space equals the number of rows (= columns), the matrix $A$ is full rank and has an inverse $A^{-1}$.
- The dimension of the nullspace is $n - r$; the dimension of the left nullspace is $m - r$.
- Exercise: give the dimensions of the vector spaces associated with the matrix $A = \begin{pmatrix} 1 & 2 & 3 \end{pmatrix}$, and repeat for a second example matrix.

### Orthogonal Spaces

- Two subspaces $W$ and $V$ are orthogonal if every $w \in W$ is perpendicular to every $v \in V$.
- A subspace $W$ forms an *orthogonal complement* to another subspace $V$ if for each vector $v \in V$ there exists a unique perpendicular vector $v^{\perp} \in W$.
- The nullspace $N(A)$ forms an orthogonal complement to the row space $C(A^T)$.
- The left nullspace $N(A^T)$ forms an orthogonal complement to the column space $C(A)$.

### Orthonormal Vectors and Matrices

- A set of vectors is orthonormal if $q_i^T q_j = \delta_{ij}$, which is $1$ exactly when $i = j$ and $0$ otherwise.
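The defining property of orthonormal matrices, $Q^T Q = I$ (and hence $Q^{-1} = Q^T$ for square $Q$), can be checked numerically; the rotation angle below is an arbitrary assumed value:

```python
import numpy as np

# Rotation by alpha: a standard orthonormal matrix (alpha = 0.3 is an
# arbitrary illustrative choice).
alpha = 0.3
Q = np.array([[np.cos(alpha), -np.sin(alpha)],
              [np.sin(alpha),  np.cos(alpha)]])

print(np.allclose(Q.T @ Q, np.eye(2)))        # Q^T Q = I: True
print(np.allclose(np.linalg.inv(Q), Q.T))     # Q^{-1} = Q^T: True

# A permutation matrix is also orthonormal.
P = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 1.]])
print(np.allclose(P.T @ P, np.eye(3)))        # True
```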
- So orthonormal vectors are both perpendicular to each other and unit vectors.
- Show that a matrix $Q$ whose columns are orthonormal vectors has the property $Q^T Q = I$.
- If $Q$ is a square matrix, show that this implies $Q^T = Q^{-1}$.
- Example of an orthonormal matrix: the rotation $Q = \begin{pmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{pmatrix}$, which corresponds to a coordinate rotation by $\alpha$.
- Show that the permutation matrix $\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}$ is orthonormal.

### Eigenvalues and Eigenvectors

- Generally, when we multiply a vector $x$ by a matrix $A$, the resulting vector $Ax$ is not in the same direction as $x$.
- Occasionally, however, this does happen, and the resulting vector is then called an *eigenvector*. Formally, an eigenvector $x$ of a matrix satisfies $Ax = \lambda x$, where $\lambda$ is a scalar called an *eigenvalue*.
- Eigenvectors lie in the nullspace of $A - \lambda I$, because they satisfy $(A - \lambda I)x = 0$.
- Algorithm: solve $\det(A - \lambda I) = 0$ to find the eigenvalues; for each resulting eigenvalue $\lambda_i$, solve $(A - \lambda_i I)x = 0$ to find the corresponding eigenvector.
- Exercise: find the eigenvalues and eigenvectors of a small $2 \times 2$ example matrix.

### Eigenspace Properties

- The product of the $n$ eigenvalues gives the determinant of the matrix, so if the matrix is singular, $0$ must be an eigenvalue.
- The sum of the eigenvalues equals the *trace*, the sum of the diagonal entries of the matrix.
- If a square $n \times n$ matrix $A$ has $n$ linearly independent eigenvectors, then $S^{-1} A S = \Lambda$; equivalently, $A = S \Lambda S^{-1}$, where $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$.
- The columns of $S$ are the eigenvectors. To prove this, first show that $AS = S\Lambda$; the result then follows directly.
- Exercise: suggest a fast way to compute $A^k$ given the decomposition above.

### Eigenvalues of Symmetric Matrices

- If a matrix $A$ is symmetric, that is $A^T = A$, then all its eigenvalues are real.
- This leads to a fundamental *spectral theorem*: every symmetric matrix $A$ has the factorization $A = Q \Lambda Q^T$, where the eigenvector matrix $Q$ is comprised of an orthonormal set of vectors.
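The eigenvalue facts above (determinant = product of eigenvalues, trace = sum, and fast powers via $A^k = S \Lambda^k S^{-1}$) can be verified on a small assumed symmetric matrix:

```python
import numpy as np

# Assumed 2x2 symmetric example; its eigenvalues are 3 and 1.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, vecs = np.linalg.eig(A)

# Each column of `vecs` is an eigenvector: A v = lambda v.
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)

print(np.isclose(np.prod(vals), np.linalg.det(A)))  # product = determinant: True
print(np.isclose(np.sum(vals), np.trace(A)))        # sum = trace: True

# Diagonalization: A = S Lambda S^{-1}, so A^k = S Lambda^k S^{-1},
# which only requires powering the diagonal entries.
S, Lam = vecs, np.diag(vals)
A5 = S @ np.linalg.matrix_power(Lam, 5) @ np.linalg.inv(S)
print(np.allclose(A5, np.linalg.matrix_power(A, 5)))  # True
```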
- Interesting property: if $Q$ is orthonormal, then $Q^T = Q^{-1}$.
- Exercise: find the orthonormal set of eigenvectors for the previous matrix.
- Show that any symmetric matrix can be written as a sum of projection matrices: $A = \lambda_1 x_1 x_1^T + \cdots + \lambda_n x_n x_n^T$. Each of these matrices is a projection along an eigenvector direction.

### Positive Definite Matrices

- A symmetric matrix $A$ is *positive definite* if $x^T A x > 0$ for every nonzero vector $x$; if we allow equality, we get the property of positive *semidefiniteness*.
- Symmetric matrices with positive eigenvalues are positive definite: if $Ax = \lambda x$, then $x^T A x = \lambda x^T x > 0$ when $\lambda > 0$.
- For $A = \begin{pmatrix} a & b \\ b & c \end{pmatrix}$: $x^T A x = a x^2 + 2bxy + c y^2 > 0$. Note that $x^T A x = 1$ is the equation of an ellipse; what we need to determine is its orientation and parameters.
- Result: given a positive definite matrix $A = Q \Lambda Q^T$, the graph of $x^T A x = 1$ is an ellipse (or ellipsoid). Writing $(X, Y) = Q^T (x, y)^T$ for the coordinates in the eigenvector basis, $x^T A x = \lambda_1 X^2 + \lambda_2 Y^2 = 1$, and the half-lengths of the major and minor axes are $\frac{1}{\sqrt{\lambda_1}}$ and $\frac{1}{\sqrt{\lambda_2}}$.

### Multivariate Normal Distribution

(Figure: surface and contour plots of a 2-D normal density.)

The multivariate normal distribution in $d$ dimensions is given by

$$p(x) = \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\left(-\frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu)\right)$$

- $\mu = E[x]$ is a $d$-dimensional mean vector.
- $\Sigma = E[(x - \mu)(x - \mu)^T]$ is a $d \times d$ covariance matrix.
- $|\Sigma|$ is the determinant of $\Sigma$.

### Covariance Matrix

- The covariance matrix in 2 dimensions is
  $$\Sigma = \begin{pmatrix} E[(x_1 - \mu_1)^2] & E[(x_1 - \mu_1)(x_2 - \mu_2)] \\ E[(x_2 - \mu_2)(x_1 - \mu_1)] & E[(x_2 - \mu_2)^2] \end{pmatrix} = \begin{pmatrix} \sigma_1^2 & \sigma_{12} \\ \sigma_{12} & \sigma_2^2 \end{pmatrix}$$
- The covariance matrix is symmetric, which means the eigenvalues are real and the eigenvectors form an orthonormal set (independent, unit length): if $\Sigma u_i = \lambda_i u_i$, then $u_i^T u_j = \delta_{ij}$.
- Since $\Sigma$ is symmetric, it can be written as $\Sigma = Q \Lambda Q^{-1}$, where $\Lambda$ is the eigenvalue diagonal matrix and $Q$ is the orthonormal eigenvector matrix. It immediately follows that the inverse covariance matrix can also be written in such a form (why?).
- This implies that the inverse covariance matrix $\Sigma^{-1}$ can be expressed as a sum of projection matrices: $\Sigma^{-1} = \sum_{i=1}^d \frac{1}{\lambda_i} u_i u_i^T$.
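A small numerical sketch of the spectral theorem and the positive-definiteness test above (the matrix is an assumed example with eigenvalues 9 and 1):

```python
import numpy as np

# Assumed symmetric positive definite example.
A = np.array([[5.0, 4.0],
              [4.0, 5.0]])

# eigh: eigendecomposition for symmetric matrices; Q is orthonormal.
vals, Q = np.linalg.eigh(A)

print(np.allclose(Q @ np.diag(vals) @ Q.T, A))   # spectral theorem A = Q Lambda Q^T: True
print(np.all(vals > 0))                          # positive definite iff all eigenvalues > 0: True

# Equivalently, x^T A x > 0 for any nonzero x.
x = np.array([1.0, -2.0])
print(x @ A @ x > 0)                             # True
```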
### Mahalanobis Distance

- The exponent in the multivariate normal distribution, $\Delta^2 = (x - \mu)^T \Sigma^{-1} (x - \mu)$, is called the *Mahalanobis distance*. If the covariance matrix is diagonal, $\Sigma = \sigma^2 I$, it reduces to (scaled) Euclidean distance.
- Consider a change of coordinates from the original $x$ to $y = U(x - \mu)$, where the rows of $U$ are the eigenvectors of $\Sigma$.
- Then, using the above expansion for $\Sigma^{-1}$, it can be shown that $\Delta^2 = \sum_{j=1}^d \frac{y_j^2}{\lambda_j}$. Notice this shows that the inverse covariance matrix is positive semidefinite.

(Figure: contours of constant Mahalanobis distance are ellipses aligned with the eigenvectors of $\Sigma$.)

### Transformed Multivariate Gaussian

- In the transformed coordinates $y = U(x - \mu)$, the multivariate Gaussian can be written in the simplified form
  $$p(y) = \prod_{j=1}^d \frac{1}{\sqrt{2\pi\lambda_j}} \exp\left(-\frac{y_j^2}{2\lambda_j}\right)$$
- This is a product of $d$ independent normal distributions with mean $0$ and variance $\lambda_j$, the $j$th eigenvalue.
- Thus the transformation $y = U(x - \mu)$ *decorrelates* the original components.

### Singular Value Decomposition

- So far we have seen how to decompose symmetric matrices into a product of orthogonal matrices (the eigenvector matrix) and a diagonal matrix (the eigenvalue matrix).
- What about nonsymmetric matrices? The key insight behind SVD is to find two orthogonal basis representations $U$ and $V$ such that for any matrix $A$ the following decomposition holds:
  $$A = U \Sigma V^T$$
  where $U$ and $V$ are orthogonal matrices, so $U^T U = I$ and $V^T V = I$.
- $\Sigma$ is a diagonal matrix whose entries are called *singular values*, hence the meaning behind the term SVD.
- We want to choose the orthogonal matrices in such a way that not only are the $v_i$ an orthonormal basis set (unit vectors), but the images $A v_i$ are also orthogonal to each other.
- We can then construct $u_i = \frac{A v_i}{\sigma_i}$ to give us the second orthonormal basis set:
  $$A \begin{pmatrix} v_1 & v_2 \end{pmatrix} = \begin{pmatrix} u_1 & u_2 \end{pmatrix} \begin{pmatrix} \sigma_1 & 0 \\ 0 & \sigma_2 \end{pmatrix}$$
- So we get the following relationships:
  $$AV = U\Sigma \quad\Rightarrow\quad U^{-1} A V = U^T A V = \Sigma \quad\Rightarrow\quad A = U \Sigma V^{-1} = U \Sigma V^T$$

### Orthonormal Basis in SVD

- How do we go about finding $U$ and $V$? Here is the trick: we can eliminate $U$ from the equation $AV = U\Sigma$ by premultiplying $A$ by its transpose.
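The Mahalanobis identity from the section above, $\Delta^2 = \sum_j y_j^2 / \lambda_j$ with $y = Q^T(x - \mu)$, can be checked numerically; the mean, covariance, and query point below are all assumed illustrative values:

```python
import numpy as np

# Assumed 2-D Gaussian parameters and query point.
mu = np.array([0.0, 0.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
x = np.array([1.0, 1.0])

# Mahalanobis distance: Delta^2 = (x - mu)^T Sigma^{-1} (x - mu).
d = x - mu
delta2 = d @ np.linalg.inv(Sigma) @ d

# Equivalent form via the eigendecomposition Sigma = Q Lambda Q^T:
# Delta^2 = sum_j y_j^2 / lambda_j with y = Q^T (x - mu).
lam, Q = np.linalg.eigh(Sigma)
y = Q.T @ d
print(np.isclose(delta2, np.sum(y**2 / lam)))   # True
```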
$$A^T A = (U \Sigma V^T)^T (U \Sigma V^T) = V \Sigma^T U^T U \Sigma V^T = V \Sigma^T \Sigma V^T$$

- Since $A^T A$ is always symmetric (why?), the above expression is exactly the familiar spectral decomposition $Q \Lambda Q^T$ we have seen before, except that now $V$ represents the orthonormal eigenvector set of $A^T A$.
- In a similar fashion, we can eliminate $V$ from the equation $AV = U\Sigma$ by postmultiplying $A$ by its transpose:
  $$A A^T = (U \Sigma V^T)(U \Sigma V^T)^T = U \Sigma V^T V \Sigma^T U^T = U \Sigma \Sigma^T U^T$$
- The matrix $\Sigma^T \Sigma$ is diagonal with entries $\sigma_1^2, \sigma_2^2, \ldots$, the squares of the singular values.
- Exercise: find the SVD of some small example matrices.

### SVD Underlies Google

Google uses SVD-style computations to accelerate finding relevant web pages. Here is a brief explanation of the general idea:

- Define a web site as an *authority* if many sites link to it; define a web site as a *hub* if it links to many sites.
- We want to compute a ranking $x_1, \ldots, x_N$ of authorities and $y_1, \ldots, y_M$ of hubs.
- As a first pass, we can compute the ranking scores directly: $x_i^{(0)}$ is the number of links pointing to site $i$, and $y_i^{(0)}$ is the number of links going out of site $i$.
- But not all links should be weighted equally. For example, links from authorities or hubs should count more, so we can revise the rankings as follows:
  $$x_i^{(1)} = \sum_{j \text{ links to } i} y_j^{(0)} = (A^T y^{(0)})_i, \qquad y_i^{(1)} = \sum_{i \text{ links to } j} x_j^{(0)} = (A x^{(0)})_i$$
- Here $A$ is the adjacency matrix, where $a_{ij} = 1$ if site $i$ links to site $j$.
- Clearly we can keep iterating, and compute $x^{(k)} = A^T A \, x^{(k-1)}$ and $y^{(k)} = A A^T y^{(k-1)}$.
- This is basically doing an iterative SVD computation: as $k$ increases, the largest eigenvalues dominate.
- Google is doing such a computation over a matrix $A$ of size roughly $4 \times 10^9$ by $4 \times 10^9$.

## CMPSCI 689: Linear Estimation in Hilbert Spaces

Sridhar Mahadevan (mahadeva@cs.umass.edu), University of Massachusetts

### Motivation

- We have covered three approaches to parameter estimation: maximum likelihood, Bayesian estimation, and least squares.
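Before moving on, the SVD factorization and the hub/authority iteration described above can be sketched numerically. The matrices below are assumed toy examples; the iteration $x^{(k)} = A^T A\, x^{(k-1)}$ (with normalization) is just power iteration, converging to the dominant right-singular vector:

```python
import numpy as np

# SVD of an assumed non-square example matrix.
A = np.array([[3.0, 0.0],
              [4.0, 5.0],
              [0.0, 2.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(np.allclose(U @ np.diag(s) @ Vt, A))       # A = U Sigma V^T: True

# Authority scores on an assumed 3-site link matrix L (L[i, j] = 1 if
# site i links to site j): iterate x <- L^T L x with normalization.
L = np.array([[0., 1., 1.],
              [0., 0., 1.],
              [1., 0., 0.]])
x = np.ones(3)
for _ in range(100):
    x = L.T @ L @ x
    x /= np.linalg.norm(x)

# The iterate converges to the top right-singular vector of L
# (up to sign), i.e. the dominant eigenvector of L^T L.
v1 = np.linalg.svd(L)[2][0]
print(np.allclose(np.abs(x), np.abs(v1), atol=1e-6))  # True
```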
- In complex problems, maximum likelihood and Bayesian estimation are difficult to use, since they require knowledge of the full joint distribution.
- A common simplifying assumption is that the variables are Gaussian, where the first-order and second-order moments fully define the distribution.
- This assumption coincides with the least-squares approach, which forms the richest set of practical applications of statistical estimation.
- To deal with time-series problems, least-squares estimation needs to be modified to work incrementally.

### Outline

- Recursive weighted least-squares estimation (also essential to next week's lecture on logistic regression)
- Gauss-Markov theorem: minimum-variance unbiased estimators
- Hilbert spaces of random variables
- Kalman filter
- Applications

### Motivation

- Many real-world applications involve the following problem: a series of incremental observations is made using some noisy measurement system, and the goal is to continually estimate some underlying parameters or hidden state.
- The aim is to solve this problem recursively, so that the estimates can be updated extremely rapidly, online.
- There are thousands of real-world applications: tracking a spacecraft, monitoring the US economy, robot navigation, automatic landing of a 747.
- This problem was solved in 1960, for a very general class of estimation problems involving time-varying processes, by R. E. Kalman. The solution, now called a *Kalman filter*, is a beautiful instance of minimum-variance unbiased estimation in Hilbert spaces.
- Kalman filters are used in many real-world systems and remain one of the most widely used estimation procedures.

(Figure: a Kalman filter tracking example, comparing the true state, noisy observations, and filtered estimates over time.)

### Least-Squares Estimation

- Assume a robot measures the distance $x$ to an object using a sonar range sensor, obtaining the measured values $50, 73, 81$.
- The robot's goal is to estimate the true distance $x$ by some approximation $\hat{x}$ that minimizes the squared error:
  $$\mathrm{Error}(x) = (x - 50)^2 + (x - 73)^2 + (x - 81)^2$$
- We know from statistics that the average minimizes the mean-squared error. We can derive the same result trivially using calculus:
  $$\frac{\partial \mathrm{Error}(x)}{\partial x} = 2(x - 50) + 2(x - 73) + 2(x - 81) = 0 \;\Rightarrow\; \hat{x} = \frac{50 + 73 + 81}{3} = 68$$
- We can also derive the same result using geometry.
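The sonar example above can be reproduced directly: fitting the model $x \cdot (1,1,1)^T \approx b$ by least squares recovers the mean of the readings.

```python
import numpy as np

# The three sonar readings from the example above.
b = np.array([50.0, 73.0, 81.0])

# Least-squares model: x * [1,1,1]^T ~ b.  The normal equations
# x = (A^T A)^{-1} A^T b reduce here to the sample mean.
A = np.ones((3, 1))
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

print(x_hat[0])                          # 68.0
print(np.isclose(x_hat[0], b.mean()))    # True
```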
### Projection onto a Subspace

- Given a subspace $M$ generated by $n$ linearly independent vectors $a_1, \ldots, a_n$, the vector closest to $b$ minimizes the orthogonal error $\|b - Ax\|^2$:
  $$A^T A \hat{x} = A^T b \;\Rightarrow\; \hat{x} = (A^T A)^{-1} A^T b$$
- In our simple robotics problem, $A = (1, 1, 1)^T$ and $b = (50, 73, 81)^T$, giving $\hat{x} = \frac{50 + 73 + 81}{3} = 68$.

### Weighted Least Squares

- What if the measurements were not identical, in the sense that some measurements were more reliable than others?
- In this case we can modify the least-squares criterion by adding a weighting coefficient: minimize $\|W(Ax - b)\|^2$, which immediately leads to the weighted least-squares estimate
  $$(WA)^T W A \hat{x} = (WA)^T W b \;\Rightarrow\; A^T W^T W A \hat{x} = A^T W^T W b$$
- If we define $C = W^T W$, we can simplify this to get $\hat{x} = (A^T C A)^{-1} A^T C b = Lb$.
- This is an example of a *linear estimator*, which is of profound importance in practice.

### Minimum-Variance Unbiased Estimation

- How do we choose $C = W^T W$ optimally? We need to introduce a model of the errors made in the measurement process. Define $e$ to be the error vector, where $b = Ax + e$.
- In many practical applications, the errors made by the measurement system are such that the average error is $0$: $E[e] = 0$.
- In addition to modeling the mean error, we also need to model its variance. Define the covariance matrix $V = E[e e^T]$.
- Our goal is to determine a linear estimator $\hat{x} = Lb$ that is unbiased and minimizes the variance. Recall that an unbiased estimator is one where $E[x - \hat{x}] = 0$:
  $$E[x - Lb] = E[x - LAx - Le] = E[(I - LA)x] = 0$$
- The estimator will be unbiased if $LA = I$. How do we pick $C$ so that the estimator has minimum variance?

### Gauss-Markov Theorem

- Gauss-Markov theorem: pick $C = V^{-1}$, so that
  $$\hat{x} = (A^T V^{-1} A)^{-1} A^T V^{-1} b$$
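The weighted least-squares formula $\hat{x} = (A^T C A)^{-1} A^T C b$ above can be tried on the same sonar readings; the weighting (the third sensor counted four times as reliable) is an assumed choice for illustration:

```python
import numpy as np

# Same three sonar readings; the diagonal of C = W^T W holds the
# (assumed) reliability weights, with the third sensor weighted 4x.
A = np.ones((3, 1))
b = np.array([50.0, 73.0, 81.0])
C = np.diag([1.0, 1.0, 4.0])

# Weighted least squares: x = (A^T C A)^{-1} A^T C b.
x_hat = np.linalg.solve(A.T @ C @ A, A.T @ C @ b)

print(x_hat[0])   # weighted mean: (50 + 73 + 4*81) / 6 = 74.5
```

The estimate is pulled toward the more trusted measurement, as expected.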
- Proof: since we are minimizing the variance, consider
  $$E[(x - \hat{x})(x - \hat{x})^T] = E[(x - Lb)(x - Lb)^T] = E[(Le)(Le)^T] = L\, E[e e^T]\, L^T = L V L^T$$
  using $b = Ax + e$ and $LA = I$.
- We need to show that $L_0 = (A^T V^{-1} A)^{-1} A^T V^{-1}$ is a minimum-variance unbiased estimator. Note that $L_0 A = I$, so it is unbiased. For any estimator $L$, write $L = (L - L_0) + L_0$; then
  $$P = L V L^T = L_0 V L_0^T + (L - L_0) V (L - L_0)^T + (L - L_0) V L_0^T + L_0 V (L - L_0)^T$$
  The cross terms vanish, because $(L - L_0) V L_0^T = (L - L_0) A (A^T V^{-1} A)^{-1} = (LA - L_0 A)(A^T V^{-1} A)^{-1} = 0$. Hence $P$ is minimized at $L = L_0$.
- Note that $P = L_0 V L_0^T = (A^T V^{-1} A)^{-1}$, so when $V = I$ this reduces to standard least squares.

### Recursive Least Squares

- Suppose we estimate $x_0$ from measurements $b_0$ by approximating $A_0 x_0 \approx b_0$. Given a new dataset $b_1$, how do we compute a revised estimate $x_1$ from $A_1 x_1 \approx b_1$ without redoing the previous computations?
- The *information matrix*, the inverse of the error covariance matrix, updates as
  $$P_1^{-1} = \begin{pmatrix} A_0 \\ A_1 \end{pmatrix}^T \begin{pmatrix} V_0 & 0 \\ 0 & V_1 \end{pmatrix}^{-1} \begin{pmatrix} A_0 \\ A_1 \end{pmatrix} = A_0^T V_0^{-1} A_0 + A_1^T V_1^{-1} A_1 = P_0^{-1} + A_1^T V_1^{-1} A_1$$
- Since the normal equations are $A^T V^{-1} A \hat{x} = A^T V^{-1} b$, we get
  $$P_1^{-1} x_1 = A_0^T V_0^{-1} b_0 + A_1^T V_1^{-1} b_1 = P_0^{-1} x_0 + A_1^T V_1^{-1} b_1$$
  which simplifies to
  $$x_1 = x_0 + P_1 A_1^T V_1^{-1} (b_1 - A_1 x_0)$$

### General Form of RLS

- The general recursion is
  $$P_i^{-1} = P_{i-1}^{-1} + A_i^T V_i^{-1} A_i, \qquad x_i = x_{i-1} + K_i (b_i - A_i x_{i-1})$$
  where the gain matrix is $K_i = P_i A_i^T V_i^{-1}$.
- The error $b_i - A_i x_{i-1}$ can be viewed as the projection of the new data into the subspace formed by the previous data: it represents the component that cannot be explained by the old data and prediction.
- The key idea in Kalman filters is to add this orthogonal error component, amplified by the gain matrix, to the previous estimate.
- We still have to generalize RLS to nonstationary processes.

### Hilbert Spaces of Random Variables

- Suppose $x_1, \ldots, x_n$ is a set of scalar random variables. The Hilbert space defined by them is the set of all linear combinations $y = \sum_i a_i x_i$, where the inner product between $y_1$ and $y_2$ is defined as
  $$\langle y_1, y_2 \rangle = E(y_1 y_2) = E\Big(\sum_i a_i x_i \sum_j b_j x_j\Big)$$
- We assume $E[y^2] = \|y\|^2 < \infty$, so variables are of finite length.
- Now let $z_1, \ldots, z_m$ be a set of vector random variables, each of dimension $n$. The Hilbert space consists of linear combinations of these variables.
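The RLS recursion above can be sketched in the simplest scalar case ($A_i = 1$, $V_i = 1$), estimating a constant from noisy measurements one at a time. The true value, noise level, and initial uncertainty are all assumed illustrative choices:

```python
import numpy as np

# Scalar RLS sketch: estimate a constant x from b_i = x + e_i,
# processed incrementally (all parameters assumed for illustration).
rng = np.random.default_rng(0)
x_true = 68.0
bs = x_true + rng.normal(0.0, 2.0, size=200)

P = 1e6        # large initial uncertainty (weak prior)
x_hat = 0.0
for b in bs:
    P = 1.0 / (1.0 / P + 1.0)          # P_i^{-1} = P_{i-1}^{-1} + A^T V^{-1} A
    K = P                              # gain K_i = P_i A_i^T V_i^{-1}
    x_hat = x_hat + K * (b - x_hat)    # add the gain-scaled innovation

# With a weak prior, the recursive estimate matches the batch
# least-squares answer (the running mean) almost exactly.
print(abs(x_hat - bs.mean()) < 1e-3)   # True
```

No past measurement is ever revisited: each update costs the same constant amount of work, which is the whole point of the recursive form.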
- These combinations have the form $y = K_1 z_1 + \cdots + K_m z_m$, where each $K_i$ is a real-valued $n \times n$ matrix, and the inner product is defined as $\langle y_1, y_2 \rangle = \mathrm{Trace}\, E[y_1 y_2^T]$.
- The length (norm) of a vector random variable is defined by $\|y\|^2 = \mathrm{Trace}\, E[y y^T]$.

### Gauss-Markov Revisited

- Let us bring in the perspective of Hilbert spaces. The task is to find the minimum-variance unbiased linear estimator $\hat{x} = Lb$, where the measurements are noisy and produced by the process $b = Ax + e$.
- Geometrically speaking, we find the vector $\hat{x}$ of minimum expected length:
  $$E\|x - \hat{x}\|^2 = E\|x - LAx - Le\|^2 = \mathrm{Trace}(L V L^T)$$
  using $LA = I$.
- Let $l_i^T$ be the $i$th row of $L$ and $a_j$ the $j$th column of $A$. We obtain a set of minimization problems: minimize $l_i^T V l_i$ subject to $l_i^T a_j = \delta_{ij}$.
- If we define the inner product $\langle x, y \rangle_V = x^T V y$, we get a set of minimum-norm problems with constraints: minimize $\|l_i\|_V$ subject to $\langle l_i, V^{-1} a_j \rangle_V = \delta_{ij}$.

### Projection onto Affine Subspaces

- Recall the projection theorem that we proved for Hilbert spaces: given any finite-dimensional subspace $M$ of a Hilbert space $H$ and a vector $x \in H$ not in $M$, we want to find the closest vector to $x$ that is in the subspace. Let $M$ be generated by vectors $y_1, \ldots, y_n$, and denote the closest vector by $x_0 \in M$. Then $\langle x - x_0, y_j \rangle = 0$, which implies $\langle x - \sum_i \alpha_i y_i, y_j \rangle = 0$, i.e. $\langle x, y_j \rangle = \sum_i \alpha_i \langle y_i, y_j \rangle$. But if $\langle y_i, y_j \rangle = \delta_{ij}$, then $\alpha_i = \langle x, y_i \rangle$, which gives the inner product expansion
  $$x_0 = \sum_i \langle x, y_i \rangle \, y_i$$
- What if the subspace $M$ is not finite-dimensional? There is an important special case where the projection theorem can be easily extended. Define an *affine subspace* to be the set of all vectors of the form $M_x = \{x + y : y \in M\}$; that is, $M_x$ is the subspace $M$ translated by the vector $x$.

### Projection Theorem for Affine Subspaces

- Consider now an affine subspace $M_c$, which consists of all vectors $x \in H$ satisfying constraints of the form $\langle x, y_1 \rangle = c_1, \ldots, \langle x, y_n \rangle = c_n$ (as we will see soon, kernel methods use this formulation).
- Here $y_1, \ldots, y_n$ are $n$ linearly independent vectors forming an $n$-dimensional subspace $Y$ of $H$. Suppose $c_i = 0$ for all $i$; then $M_c = Y^{\perp}$ is the orthogonal subspace of $Y$. So in the general case of nonzero $c_i$, the affine subspace $M_c$ is the orthogonal subspace $Y^{\perp}$ translated by a vector determined by $(c_1, \ldots, c_n)$.
- We say that an affine subspace is of *codimension* $n$ if it is the orthogonal complement (perhaps translated) of a subspace of dimension $n$.
- Projection theorem: let $H$ be a Hilbert space and $Y$ a finite-dimensional subspace generated by $n$ linearly independent vectors $y_1, \ldots, y_n$. Among all vectors $x$ satisfying the above constraints, there exists a unique vector $x_0 = \sum_i \beta_i y_i$ with minimum norm, whose coefficients satisfy the equations
  $$\langle y_1, y_1 \rangle \beta_1 + \cdots + \langle y_n, y_1 \rangle \beta_n = c_1$$
  $$\vdots$$
  $$\langle y_1, y_n \rangle \beta_1 + \cdots + \langle y_n, y_n \rangle \beta_n = c_n$$

### Improving on Gauss-Markov

- In the Gauss-Markov formulation, we assumed that the vector $x$ to be estimated was completely unknown but fixed. We now show that if we instead treat $x$ as a random vector whose statistics are known, we can get a minimum-variance estimator with lower variance than that produced by the Gauss-Markov estimator.
- Theorem: let $b$ and $x$ be random vectors, where $b = Ax + e$. The minimum-variance linear estimate $\hat{x}$ of $x$ is given by
  $$\hat{x} = E[x b^T] \, (E[b b^T])^{-1} \, b$$
  and the covariance matrix of the error is given by
  $$E[(\hat{x} - x)(\hat{x} - x)^T] = E[x x^T] - E[x b^T] (E[b b^T])^{-1} E[b x^T]$$

### Minimum-Variance Estimate

- Since there are no constraints, the solution becomes just a normal instance of applying the projection theorem.
- We are finding the vector $\hat{x}$ closest to $x$ that lies in the subspace spanned by the elements of $b$, the observations. We know that the error vector must be orthogonal to the subspace, so
  $$\langle b_j, x_i - l_i^T b \rangle = 0 \;\Rightarrow\; E[b_j x_i] = E[b_j \, l_i^T b]$$
  where $l_i^T$ is the $i$th row of the linear estimator matrix $L$ (recall that $\hat{x} = Lb$).
- Written out in vector format, the complete set of normal equations becomes
  $$E[b x^T] = E[b b^T] L^T \;\Rightarrow\; L = E[x b^T] (E[b b^T])^{-1}$$
- The proof now follows by plugging in the form of the estimator $\hat{x} = Lb = E[x b^T](E[b b^T])^{-1} b$.
### Minimum-Variance Estimate (continued)

- How do we know that we have indeed found a lower-variance estimator?
- Corollary: let $b = Ax + e$, where $b$ is an observed $m$-dimensional data vector, $x$ is an unknown $n$-dimensional random vector with known covariance $R = E[x x^T]$, and $e$ is an unknown $m$-dimensional random vector with known covariance $V = E[e e^T]$. Further assume that $E[e x^T] = 0$, so the errors are uncorrelated with the data.
- The minimum-variance linear estimate $\hat{x}$ minimizing $E\|x - \hat{x}\|^2$ is given by
  $$\hat{x} = R A^T (A R A^T + V)^{-1} b, \qquad E[(\hat{x} - x)(\hat{x} - x)^T] = R - R A^T (A R A^T + V)^{-1} A R$$
- Proof: note that the covariance of $b$ is now $E[b b^T] = E[(Ax + e)(Ax + e)^T] = A R A^T + V$, and that $E[x b^T] = E[x (Ax + e)^T] = R A^T$. Applying the above minimum-variance theorem gives the form of the linear estimator.

### Minimum-Variance Estimator: Alternate Form

- To check that we have a lower-variance estimator, we rewrite the above corollary in an alternate format. Note the matrix identity
  $$R A^T (A R A^T + V)^{-1} = (A^T V^{-1} A + R^{-1})^{-1} A^T V^{-1}$$
  which can be shown by postmultiplying both sides by $(A R A^T + V)$ and premultiplying by $(A^T V^{-1} A + R^{-1})$.
- This implies that the minimum-variance estimator can be written as
  $$\hat{x} = R A^T (A R A^T + V)^{-1} b = (A^T V^{-1} A + R^{-1})^{-1} A^T V^{-1} b$$
- The error covariance can be rewritten similarly:
  $$E[(\hat{x} - x)(\hat{x} - x)^T] = R - R A^T (A R A^T + V)^{-1} A R = (A^T V^{-1} A + R^{-1})^{-1}$$
  after some simplification.
- The Gauss-Markov estimate is the limiting case of the minimum-variance estimate as $R^{-1} \to 0$.

### Properties of Minimum-Variance Estimators

- There are several other useful properties of minimum-variance estimators.
- Theorem: the minimum-variance linear estimate of a linear function of $x$, i.e. $Tx$, is equal to the same linear function applied to the minimum-variance linear estimate of $x$, i.e. $T\hat{x}$.
- Theorem: if $\hat{x} = Lb$ is the linear minimum-variance estimator of $x$, then $\hat{x}$ is also the linear estimate that minimizes $E[(x - \hat{x})^T Z (x - \hat{x})]$ for any positive semidefinite $n \times n$ matrix $Z$.

### Orthogonal Projections

- A fundamental property of Hilbert spaces is that if $Y_1$ and $Y_2$ are two subspaces, the projection of a vector $x$ onto the combined subspace $Y_1 + Y_2$ is equal to the projection of $x$ onto $Y_1$ plus the projection onto the part of $Y_2$ orthogonal to $Y_1$.
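The matrix identity above, and the claim that Gauss-Markov is the $R^{-1} \to 0$ limit, can both be checked numerically on random matrices of assumed sizes and (assumed) isotropic covariances:

```python
import numpy as np

# Assumed shapes and covariances for a numerical check of
# R A^T (A R A^T + V)^{-1} = (A^T V^{-1} A + R^{-1})^{-1} A^T V^{-1}.
rng = np.random.default_rng(2)
A = rng.normal(size=(5, 3))
R = np.eye(3) * 2.0          # prior covariance of x (assumed)
V = np.eye(5) * 0.5          # measurement noise covariance (assumed)

lhs = R @ A.T @ np.linalg.inv(A @ R @ A.T + V)
rhs = np.linalg.inv(A.T @ np.linalg.inv(V) @ A + np.linalg.inv(R)) @ A.T @ np.linalg.inv(V)
print(np.allclose(lhs, rhs))   # True

# As R^{-1} -> 0 (no prior information), the minimum-variance estimator
# approaches the Gauss-Markov estimator L0 = (A^T V^{-1} A)^{-1} A^T V^{-1}.
R_big = np.eye(3) * 1e8
L_mv = R_big @ A.T @ np.linalg.inv(A @ R_big @ A.T + V)
L_gm = np.linalg.inv(A.T @ np.linalg.inv(V) @ A) @ A.T @ np.linalg.inv(V)
print(np.allclose(L_mv, L_gm, atol=1e-5))   # True
```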
- This result has significant ramifications for sequential updating of a minimum-variance estimator, which is formalized in the theorem below.
- Theorem: let $x$ be an element of a Hilbert space of random variables. Let $\hat{x}_1$ denote the orthogonal projection of $x$ onto a closed subspace $Y_1$. Let $y_2$ be a random vector which generates a subspace $Y_2$, let $\hat{y}_2$ denote the projection of $y_2$ onto $Y_1$, and let the error vector be $\tilde{y}_2 = y_2 - \hat{y}_2$. Then the projection of $x$ onto the subspace $Y_1 + Y_2$, denoted by $\hat{x}$, can be computed as
  $$\hat{x} = \hat{x}_1 + E[x \tilde{y}_2^T] \, (E[\tilde{y}_2 \tilde{y}_2^T])^{-1} \, \tilde{y}_2$$

### Recursive Estimation Using Orthogonality

- Suppose that an optimal estimate $\hat{x}$ of a random vector $x$ has been formed from past measurements, where the covariance of the estimate is $R = E[(x - \hat{x})(x - \hat{x})^T]$.
- New measurements become available in the form $b = Ax + e$, where $e$ is a random error vector with mean $0$, uncorrelated with the true vector $x$ and with past measurements. We can exploit orthogonality to compute an updated estimate, as well as an updated error covariance.
- The best estimate of $b$ from past measurements is $\hat{b} = A\hat{x}$, so the "surprising" component is $\tilde{b} = b - A\hat{x}$.
- Applying the above orthogonality theorem, the revised estimate of $x$ is
  $$\hat{x}' = \hat{x} + E[x \tilde{b}^T]\,(E[\tilde{b}\tilde{b}^T])^{-1}\,\tilde{b} = \hat{x} + R A^T (A R A^T + V)^{-1} (b - A\hat{x})$$
- The updated error covariance is
  $$E[(\hat{x}' - x)(\hat{x}' - x)^T] = R - R A^T (A R A^T + V)^{-1} A R$$

### Random Processes

- Assume a random process is modeled as follows:
  - $x(k)$ is an $n$-dimensional state vector, each component of which is a random variable.
  - $\Phi(k)$ is a known $n \times n$ transition matrix.
  - $u(k)$ is an $n$-dimensional random input vector with mean $0$, whose variance is given by $E[u(k)\,u^T(j)] = Q(k)\,\delta_{kj}$.
  - The process dynamics are modeled as $x(k+1) = \Phi(k)\, x(k) + u(k)$.
- Measurements of the process are modeled as $b(k) = M(k)\, x(k) + w(k)$, where $b(k)$ is an $m$-dimensional vector, $M(k)$ is an $m \times n$ matrix, and $w(k)$ is an $m$-dimensional random measurement error vector with mean $0$ and covariance $E[w(k)\,w^T(j)] = R(k)\,\delta_{kj}$, where $R(k)$ is positive definite.
- The initial random vector is $x(0)$, whose estimate is $\hat{x}_0$, with error covariance $E[(\hat{x}_0 - x(0))(\hat{x}_0 - x(0))^T] = P(0)$.
- Finally, the random vectors $x(0)$, $u(j)$, $w(k)$ are all uncorrelated for $j \geq 0$, $k \geq 0$.
- Problem: determine the linear minimum-variance estimate of the state vector $x$ from the measurements $b$.

### Kalman Filter

- The Kalman filter solves the above estimation problem and consists of the following recursive steps.
- Define $\hat{x}(k|j)$ to be the optimal estimate of $x(k)$ given observations up to time instant $j$. As we saw for HMMs, if $j = k$ the problem is one of *filtering*, if $j < k$ the problem is one of *prediction*, and if $j > k$ the problem is one of *smoothing*. We restrict ourselves to the filtering problem.
- The update equations are
  $$\hat{x}(k|k) = \hat{x}(k|k-1) + P(k)\, M^T(k) \left[ M(k)\, P(k)\, M^T(k) + R(k) \right]^{-1} \left( b(k) - M(k)\,\hat{x}(k|k-1) \right)$$
  $$\hat{x}(k|k-1) = \Phi(k-1)\,\hat{x}(k-1|k-1)$$
  where $P(k)$ is the covariance of $\hat{x}(k|k-1)$, which is itself recursively generated by
  $$P(k+1) = \Phi(k)\left[ P(k) - P(k)\, M^T(k) \left( M(k)\, P(k)\, M^T(k) + R(k) \right)^{-1} M(k)\, P(k) \right] \Phi^T(k) + Q(k)$$

### Kalman Filter Example

(Figure: tracking a moving object. The state is the 2-D location and velocity, $x = (x, y, v_x, v_y)$; state transitions follow $x_{t+1} = A x_t + \epsilon$, $\epsilon \sim N(0, Q)$, and observations follow $y_{t+1} = C x_{t+1} + \eta$, $\eta \sim N(0, R)$.)

### KF for Weather Prediction

- Victoria Manfredi's synthesis project.
- Problem: track a storm cell or tornado over time, with the goal of obtaining information to intelligently target radars.
- Approach: use Kalman filters and switching Kalman filters to get location predictions.

### Kalman Filter Prediction

(Figure: one-step prediction using a Kalman filter on hurricane data from http://weather.unisys.com/hurricane/atlantic/1990/data.)
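The recursion above can be sketched in the simplest scalar case, $\Phi = 1$ and $M = 1$ (a random-walk state observed in noise); all model parameters below are assumed illustrative values:

```python
import numpy as np

# Minimal scalar Kalman filter sketch (parameters assumed):
# state x(k+1) = x(k) + u(k), measurement b(k) = x(k) + w(k).
rng = np.random.default_rng(1)
Q, R = 0.01, 1.0          # process / measurement noise variances
n = 100

# Simulate the true random-walk state and noisy measurements.
x = 0.0
xs, zs = [], []
for _ in range(n):
    x += rng.normal(0.0, np.sqrt(Q))
    xs.append(x)
    zs.append(x + rng.normal(0.0, np.sqrt(R)))

# Filter: predict, then correct with gain K = P M^T (M P M^T + R)^{-1}.
x_hat, P = 0.0, 1.0
est = []
for z in zs:
    P = P + Q                              # predicted covariance
    K = P / (P + R)                        # Kalman gain
    x_hat = x_hat + K * (z - x_hat)        # correct with the innovation
    P = (1.0 - K) * P                      # updated covariance
    est.append(x_hat)

err_raw = np.mean((np.array(zs) - np.array(xs))**2)
err_kf = np.mean((np.array(est) - np.array(xs))**2)
print(err_kf < err_raw)   # filtering reduces mean-squared error: True
```

Each step costs constant work regardless of how many measurements have been seen, which is exactly the recursive structure the slides emphasize.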
