
# AMATH 301 Midterm 1 Study Guide


This 7 page Study Guide was uploaded by Jeremy Dao on Sunday May 1, 2016. The Study Guide belongs to AMATH 301 at University of Washington taught by HETMANIUK,ULRICH L. in Spring 2016. Since its upload, it has received 447 views. For similar materials see Beginning Scientific Computing in Applied Mathematics at University of Washington.


**Matrices/Vectors**

- Matrices in Matlab: use commas (`,`) or spaces to separate the elements of a row, and semicolons (`;`) to end a row and start a new one. Example:

  $$A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}$$

  is written as `A = [1, 2, 3; 4, 5, 6; 7, 8, 9];`

**Norms**

- A norm is an abstract notion of a certain kind of distance.
- $\|x\|$ is a function from $\mathbb{R}^n \to \mathbb{R}$, and it is a norm when it is non-negative ($\|x\| \ge 0$, with $\|x\| = 0$ only for $x = 0$), scales with the vector ($\|cx\| = |c|\,\|x\|$), and satisfies the triangle inequality ($\|x + y\| \le \|x\| + \|y\|$).
- The $1, 2, 3, \ldots, \infty$ norms (called the $p$-norms) are given by the equation $\|x\|_p = \left(\sum_{i=1}^{n} |x_i|^p\right)^{1/p}$.
- For $p = 1$ we get the 1-norm (Manhattan norm), the sum of the absolute values of all the elements of $x$:

  $$\|x\|_1 = \sum_{i=1}^{n} |x_i|, \qquad \left\|\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\right\|_1 = |x_1| + |x_2| + |x_3|$$

- For $p = 2$ we get the 2-norm, which is our traditional understanding of the norm in that it gives the Euclidean distance (the magnitude of the vector) — distance "as the crow flies":

  $$\|x\|_2 = \sqrt{\sum_{i=1}^{n} |x_i|^2}, \qquad \left\|\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\right\|_2 = \sqrt{x_1^2 + x_2^2 + x_3^2}$$

  Note that if the subscript is omitted ($\|x\|$), this is generally taken to mean the 2-norm.
- The $\infty$-norm (max norm) is the maximum of the absolute values of the elements:

  $$\|x\|_\infty = \max_{1 \le i \le n} |x_i|, \qquad \left\|\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\right\|_\infty = \max(|x_1|, |x_2|, |x_3|)$$

- Note: $\|x\|_1 \ge \|x\|_2 \ge \|x\|_\infty$.
- In Matlab, to find the $p$-norm of `a`, use the command `norm(a, p)`. By default, Matlab calculates the 2-norm, so `norm(a)` gives the 2-norm.

**Matlab commands**

- `rand(# rows, # cols)` → produces a matrix of the given size with random entries (between 0 and 1)
- `zeros(# rows, # cols)` → produces a matrix of the given size with all entries zero
- `ones(# rows, # cols)` → produces a matrix with all entries 1
- `eye(# rows, # cols)` → produces an identity matrix (1's along the diagonal, zeros everywhere else)
- `size(matrix)` → returns the dimensions of the matrix
- To save a result to a file, use the `save` command: `save a1.dat tmp -ascii` — here `a1.dat` is the file name that `tmp` is saved to, and `-ascii` specifies the format it will be saved in.
- `[maxvector, location] = max(A)` →
returns the maximum element of each column of a matrix and its location (index) within that column. `max(A, [], 2)` finds the max of each row; `max(A(:))` finds the max of the whole matrix.

**Matlab logic, loops, and iterations**

- Loops: `for` statements → do something a certain number of times. Example:

  ```matlab
  a = 0;
  for j = 1:5   % j will iterate from 1 to 5
      a = a + j;
  end           % ends the loop
  ```

  - If instead we use `for j = 1:2:5`, `j` will iterate from 1 to 5 with a step of size 2 (1 → 3 → 5).
  - We can also iterate `j` through a set array of numbers: `for j = [3, 0, 6]`, so `j` will equal 3, then 0, then 6.
  - The `break` command breaks out of a loop.
  - The `continue` command skips the rest of the current iteration and moves on to the next one.
- Logic: `if` statements → do something if a condition is true. Example:

  ```matlab
  if (logical statement)
      % execute
  elseif (logical statement)
      % execute
  else
      % execute
  end
  ```

**Function handles**: define a function inline. Example: `f = @(x) exp(x) - tan(x);` — now we can call `f(2)` and it will evaluate the function at $x = 2$.

**Solving linear systems**

- Gaussian elimination
  - We can write a system of linear equations as a matrix equation. For example, take the system:

    $$\begin{aligned} 2x_1 + x_2 + x_3 &= 2 \\ x_1 - x_2 - x_3 &= 1 \\ x_1 + 2x_2 + 0x_3 &= 3 \end{aligned}$$

    This is equivalent to $Ax = b$, where

    $$A = \begin{bmatrix} 2 & 1 & 1 \\ 1 & -1 & -1 \\ 1 & 2 & 0 \end{bmatrix}, \qquad x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}, \qquad b = \begin{bmatrix} 2 \\ 1 \\ 3 \end{bmatrix}$$

  - To solve this, we subtract multiples of equations (rows) from one another to get an upper triangular system, from which we can then back-substitute to find $x_1, x_2, x_3$.
  - In Matlab, we use the "backslash" command: `x = A\b`
  - Gaussian elimination has cost $O(n^3)$.
- LU factorization
  - We can decompose a matrix into the product of a lower triangular matrix ($L$), an upper triangular matrix ($U$), and a permutation matrix ($P$): $PA = LU$. We can then use this to solve $Ax = b$:

    $$PAx = Pb \;\Rightarrow\; LUx = Pb$$

    Let $Ux = y$; then $Ly = Pb$. Since $L$ is lower triangular, we just need to forward-substitute to get $y$ → $O(n^2)$. Since $U$ is upper triangular, we just need to back-substitute to get $x$ →
$O(n^2)$.
  - Advantages: though computing $L$, $U$, and $P$ costs $O(n^3)$, we only need to do that once; we can then solve $Ax = b$ using $L$, $U$, and $P$ in $O(n^2)$.
  - In Matlab, to solve $Ax = b$ using the LU decomposition:

    ```matlab
    [L, U, P] = lu(A);
    x = U\(L\(P*b));
    ```

- Inverses
  - The inverse of $A$ is a matrix $A^{-1}$ such that $AA^{-1} = I$, where $I$ is the identity matrix.
  - If we have $A^{-1}$, we can solve $Ax = b$ → $x = A^{-1}b$.
  - The inverse only exists if $A$ is non-singular.
  - We can check the singularity of $A$ by checking its determinant. If the determinant is zero, then $A$ is singular (has no inverse). In Matlab, use the command `det(A)` to get the determinant.
  - Calculating the determinant may be expensive and prone to error, so using the condition number may be more robust. If the condition number of $A$ is high, it may be (numerically) singular. `cond(A)` in Matlab gives the condition number of $A$.

**Eigenvectors and eigenvalues**

- For any square matrix $A \in \mathbb{R}^{n \times n}$, a non-zero vector $x$ is called an eigenvector if $Ax = \lambda x$ for some constant $\lambda$, which is the eigenvalue associated with $x$.
- The eigenvalues are calculated by finding the roots of the "characteristic polynomial", which is the determinant of $A - \lambda I$.
- The eigenvectors are then found by finding the nullspace of $A - \lambda I$ for each eigenvalue $\lambda$.
- Example: find the eigenvalues and eigenvectors of $A = \begin{bmatrix} 2 & 7 \\ -1 & -6 \end{bmatrix}$.

  $$\det(A - \lambda I) = \begin{vmatrix} 2-\lambda & 7 \\ -1 & -6-\lambda \end{vmatrix} = (2-\lambda)(-6-\lambda) + 7 = \lambda^2 + 4\lambda - 5 = (\lambda+5)(\lambda-1) = 0$$

  $$\Rightarrow \lambda_1 = -5, \quad \lambda_2 = 1$$

  For $\lambda_1 = -5$:

  $$(A + 5I \mid 0) = \begin{bmatrix} 7 & 7 & 0 \\ -1 & -1 & 0 \end{bmatrix} \to \begin{bmatrix} 1 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix} \;\Rightarrow\; x_1 = -x_2 \;\Rightarrow\; v^{(1)} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$$

  (Any vector that satisfies $x_1 = -x_2$ is also an eigenvector for $\lambda = -5$.)

  For $\lambda_2 = 1$:

  $$(A - 1I \mid 0) = \begin{bmatrix} 1 & 7 & 0 \\ -1 & -7 & 0 \end{bmatrix} \to \begin{bmatrix} 1 & 7 & 0 \\ 0 & 0 & 0 \end{bmatrix} \;\Rightarrow\; x_1 = -7x_2 \;\Rightarrow\; v^{(2)} = \begin{bmatrix} -7 \\ 1 \end{bmatrix}$$

  (Any vector that satisfies $x_1 = -7x_2$ is also an eigenvector for $\lambda = 1$.)

  So the eigenvalues of $A$ are $-5$ and $1$, with corresponding eigenvectors $\begin{bmatrix} 1 \\ -1 \end{bmatrix}$ and $\begin{bmatrix} -7 \\ 1 \end{bmatrix}$ respectively.
- If $x$ is an eigenvector of $A$ with eigenvalue $\lambda$, then $cx$ is also an eigenvector with the same eigenvalue for any non-zero $c \in \mathbb{R}$. So every eigenvalue has infinitely many eigenvectors.
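The worked example can be double-checked by direct multiplication. Below is a minimal pure-Python sanity check (the course itself uses Matlab; the helper `matvec` here is just an illustrative stand-in) that $Av = \lambda v$ holds for both eigenpairs:

```python
# Sanity check for the worked example: A = [[2, 7], [-1, -6]]
# with eigenpairs (lambda, v) = (-5, [1, -1]) and (1, [-7, 1]).

def matvec(A, v):
    """Multiply a 2x2 matrix (given as a list of rows) by a length-2 vector."""
    return [A[0][0]*v[0] + A[0][1]*v[1],
            A[1][0]*v[0] + A[1][1]*v[1]]

A = [[2, 7], [-1, -6]]
pairs = [(-5, [1, -1]), (1, [-7, 1])]

for lam, v in pairs:
    Av = matvec(A, v)                 # A*v
    scaled = [lam * vi for vi in v]   # lambda*v
    assert Av == scaled, (lam, Av, scaled)
```

Both assertions pass, confirming the eigenpairs computed by hand above.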
- If $A \in \mathbb{R}^{n \times n}$, then $A$ has at most $n$ unique eigenvalues. Note that $A$ can have repeated eigenvalues (an eigenvalue that appears more than once as a root of the characteristic polynomial).
- $A$ is non-singular (its inverse exists) exactly when 0 is not an eigenvalue.
- If $A$ is a triangular matrix (upper or lower), then its eigenvalues are just its diagonal entries.
- In Matlab: `[V, D] = eig(A);` — `V` contains the eigenvectors of $A$ in its columns, and `D` is a diagonal matrix whose diagonal elements are the corresponding eigenvalues (in the same column order).
- If $A$ has no repeated eigenvalues, then it can be diagonalized, meaning $A$ can be written as $A = VDV^{-1}$, where $V, D$ are as above. This makes taking powers of $A$ very easy: $A^n = VD^nV^{-1}$.

**Iterative methods for solving $Ax = b$**

- Jacobi iteration method
  - Decompose $A$ into a lower triangular part without the diagonal ($L^*$), an upper triangular part without the diagonal ($U^*$), and a diagonal part ($D$), so that $A = L^* + D + U^*$. In Matlab this is done with:

    ```matlab
    Lstar = tril(A, -1);
    D = diag(diag(A));
    Ustar = triu(A, 1);
    ```

    For `Lstar` and `Ustar`, the `-1` and `1` respectively exclude the diagonal from the lower and upper triangular parts. For `D`, the inner `diag` extracts the diagonal elements of `A`, then the outer `diag` puts those elements along the diagonal of a matrix.
  - We then iterate from an initial guess $x_0$ by the formula

    $$x_{k+1} = x_k + D^{-1}(b - Ax_k)$$

    and continue until the iterates converge to an answer.
- Gauss-Seidel
  - Decompose $A$ the same way as above for Jacobi iteration.
  - Iterate $x$ by the equation

    $$x_{k+1} = x_k + (L^* + D)^{-1}(b - Ax_k)$$

- Iterative methods for solving $Ax = b$ are used when $A$ is large and sparse (mostly zeros).

**Interpolation**: the function goes exactly through the data points.

- Interpolating between two consecutive $x$ values means finding a function that passes through those two data points and using that function to estimate the values between them. We can use a polynomial of any degree to interpolate.
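As a minimal sketch of the idea, interpolating with a degree-1 polynomial between two points can be done by hand. This pure-Python version is just an illustrative stand-in (the course uses Matlab's `polyfit`/`polyval` for this, and the function name here is made up for the example):

```python
# Linear interpolation through two data points (x1, y1) and (x2, y2):
# the degree-1 polynomial p(x) = y1 + (y2 - y1)/(x2 - x1) * (x - x1).

def interp_linear(x1, y1, x2, y2, x):
    """Evaluate the straight line through (x1, y1) and (x2, y2) at x."""
    slope = (y2 - y1) / (x2 - x1)
    return y1 + slope * (x - x1)

# Halfway between (0, 0) and (2, 4), the line gives 2.0.
print(interp_linear(0.0, 0.0, 2.0, 4.0, 1.0))  # -> 2.0
```

Higher-degree interpolation works the same way in principle: fit a polynomial through the points, then evaluate it wherever an estimate is needed.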
- In Matlab, to interpolate between points `x1` and `x2` using a degree-`n` polynomial:

  ```matlab
  p = polyfit([x1, x2], [y1, y2], n);
  ```

  This returns the coefficients of the interpolating polynomial in `p`. To evaluate the polynomial at a point `x` we use:

  ```matlab
  y = polyval(p, x);
  ```

- Lagrange interpolation is interpolation using all the data points, so for $n$ data points we fit a degree-$(n-1)$ polynomial.
- However, this is susceptible to the Runge phenomenon: when you try to fit a high-degree polynomial through evenly spaced $x$ values, there is a high risk of overshooting and undershooting at the edges of the interval.

**Curve fitting**

- Least-squares fit with a linear polynomial: try to draw a line that minimizes the residuals $\varepsilon_i$ (the distance between the line and each data point):

  $$y_i = p_1 x_i + p_2 + \varepsilon_i \quad (i = 1, \ldots, n) \qquad \Longleftrightarrow \qquad \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} = \begin{bmatrix} x_1 & 1 \\ x_2 & 1 \\ \vdots & \vdots \\ x_n & 1 \end{bmatrix} \begin{bmatrix} p_1 \\ p_2 \end{bmatrix} + \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{bmatrix}$$

  $$\min_{(p_1, p_2)} \sum_{i=1}^{n} \varepsilon_i^2 = \min_{(p_1, p_2)} \varepsilon^T \varepsilon = \min_{p} (y - Xp)^T (y - Xp)$$

  Solution: $p = (X^T X)^{-1} X^T y$. Note that minimizing the sum of squared errors is equivalent to minimizing the root-mean-square error.

- In Matlab (for a general nonlinear model):
  1. Define `f` (a function handle): `f = @(p, x) p(1)*exp(-p(2)*x) + p(3);`
  2. Define the sum-of-squares error: `err = @(p) sum((yin - f(p, xin)).^2);`
  3. Use `fminsearch` to find `p`: `[pFit, errMin] = fminsearch(err, pGuess);` where `pGuess` is an initial guess.

**Splines**: smoothly connect the data points, making the first and second derivatives match at each data point. In Matlab, use the `spline` command: `ysp = spline(xdata, ydata, interpolation values);`

**Optimization: iterative algorithms**

- Fixed-point iteration (Jacobi and Gauss-Seidel) to solve $Ax = b$: start with an initial guess $x_0$ and iterate until $\|x_{n+1} - x_n\| < \varepsilon$ for some tolerance value $\varepsilon$.
- Finding a zero of $f$ — that is, solving $f(x) = 0$ where $f : \mathbb{R} \to \mathbb{R}$ — with the bisection method: requires a starting interval on which $f$ changes sign at the zero (e.g. $f(x) = x^2$ is problematic, since it touches zero without changing sign).
- Newton's algorithm → uses both values and derivatives of $f$.
  Start from an initial guess $x_0$ with $f'(x_0) \ne 0$ and use the Taylor series expansion to first order:

  $$f(x_1) \approx f(x_0) + (x_1 - x_0)f'(x_0)$$

  Define $x_1$ such that $f(x_0) + (x_1 - x_0)f'(x_0) = 0$:

  $$x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}$$

**Minimizing a function**: minimize $f(x)$ over some domain $S$.

- Case $S \subset \mathbb{R}$, $f : \mathbb{R} \to \mathbb{R}$, initial guess interval $[x_1, x_2]$: `fminbnd(f, x1, x2)` → uses only values of $f$.
- Case $S = \mathbb{R}^m$ ($m > 1$), $f : \mathbb{R}^m \to \mathbb{R}$, initial guess $x_0$: `fminsearch(f, x0)` — can have 3 outputs, `[xmin, fmin, exitflag]`; uses only values of $f$.
- Case $S = \mathbb{R}^n$, $f : \mathbb{R}^n \to \mathbb{R}$, initial guess $x_0$: `fminunc(f, x0)` — uses values and derivatives of $f$ and requires the Optimization Toolbox.
- All of these algorithms can find the global minimum if given a small interval around it (or a close initial guess). But if the initial guess is close to a local minimum, or the initial interval contains a local minimum, there is a chance the algorithm will find the local minimum instead of the global one. So take care when choosing the initial guess/interval, and be aware that the answer may not be the global minimum.
- Note that maximizing a function $f(x)$ is equivalent to minimizing $-f(x)$.

**Linear programming**: minimization of a linear function with constraints.

- To solve

  $$\min_{x \in S} p^T x, \qquad S = \left\{ x \in \mathbb{R}^n : Ax \le b,\; A_{eq}x = b_{eq},\; l_b \le x \le u_b \right\},$$

  use the `linprog` command:

  ```matlab
  [xopt, fopt, exitflag] = linprog(p, A, b, Aeq, beq, lb, ub)
  ```

  where `p`, `A`, `b` define the objective and the inequality constraints, `Aeq` and `beq` are the equality constraints, `lb` is the lower bound, `ub` is the upper bound, and the outputs are `xopt` (optimal $x$ value), `fopt` (optimal objective value, $f(\texttt{xopt})$), and `exitflag` (indicates whether or not the algorithm found an optimum).
- If some constraint is absent (e.g. no `Aeq` or no `lb`), use an empty matrix `[]` in its place.
- Example: a company produces 2 types of heaters, S and L. They sell S heaters for \$40 and L heaters for \$88. Machine M1 needs 2 minutes to make an S heater and 8 minutes to make an L heater. Machine M2 needs 5 minutes for an S and 2 minutes for an L. Maximize the hourly revenue.
  Hourly revenue: $f(x) = 40x_1 + 88x_2$. Minimize $-f(x)$ with constraints

  $$2x_1 + 8x_2 \le 60, \qquad 5x_1 + 2x_2 \le 60 \qquad (Ax \le b)$$

  Code:

  ```matlab
  p = [-40; -88];
  A = [2, 8; 5, 2];
  b = [60; 60];
  Aeq = []; beq = [];    % no equality constraints
  lb = [0; 0]; ub = [];  % define bounds
  [xopt, fopt, exitflag] = linprog(p, A, b, Aeq, beq, lb, ub);
  ```

  And we find that $x_1 = 10$, $x_2 = 5$, and `fopt` $= -840$ → the maximum hourly revenue is \$840/hr.
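Since the optimum of a linear program lies at a corner of the feasible region, this small example can also be double-checked by brute force. A pure-Python sketch (an illustrative stand-in for `linprog`; the corner coordinates below are worked out by hand from the two constraint lines and the axes):

```python
# Evaluate the revenue 40*x1 + 88*x2 at every corner of the feasible
# region defined by 2*x1 + 8*x2 <= 60, 5*x1 + 2*x2 <= 60, x1, x2 >= 0.
corners = [(0, 0), (12, 0), (0, 7.5), (10, 5)]

def feasible(x1, x2):
    return (2*x1 + 8*x2 <= 60 and 5*x1 + 2*x2 <= 60
            and x1 >= 0 and x2 >= 0)

revenues = {(x1, x2): 40*x1 + 88*x2 for x1, x2 in corners if feasible(x1, x2)}
best = max(revenues, key=revenues.get)
print(best, revenues[best])  # -> (10, 5) 840
```

This agrees with the `linprog` result above: producing 10 S heaters and 5 L heaters per hour yields the maximum revenue of \$840/hr.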
