by: Jeremy Dao

# AMATH 301 Final Exam Study Guide AMATH 301

Study guide for the final exam. It covers all material after midterm 1: numerical integration and differentiation, solving ODEs (first-order, second-order, and stiff), a review of costs, singular value decomposition, PCA, and the discrete Fourier transform.

COURSE: Beginning Scientific Computing
PROF: Hetmaniuk, Ulrich L.
TYPE: Study Guide
PAGES: 7
CONCEPTS: Linear Algebra, Matlab, Numerical Integration, Differential Equations, Fourier transform, SVD, PCA


This 7 page Study Guide was uploaded by Jeremy Dao on Thursday June 2, 2016. The Study Guide belongs to AMATH 301 at University of Washington taught by HETMANIUK,ULRICH L. in Spring 2016. Since its upload, it has received 312 views. For similar materials see Beginning Scientific Computing in Applied Mathematics at University of Washington.

- **Numerical Integration:** evaluate $\int_a^b f(t)\,dt$ numerically.
  - Midpoint Rule:
    $$\int_a^b f(t)\,dt = (b-a)\,f\!\left(\frac{a+b}{2}\right) + E$$
    where $E$ is the error.
    - The midpoint rule integrates constant functions and polynomials of degree 1 exactly.
    - Composite version: $m$ sub-intervals, $H = \frac{b-a}{m}$:
      $$\int_a^b f(t)\,dt = H\sum_{j=1}^{m} f\!\left(a + \left(j - \tfrac{1}{2}\right)H\right) + \tilde{E}$$
    - When $f$ is $C^2$:
      $$E = \frac{b-a}{24}\,H^2\,\frac{d^2f}{dt^2}(\xi), \qquad \xi \in [a,b]$$
  - Trapezoidal Rule:
    $$\int_a^b f(t)\,dt = (b-a)\left[\tfrac{1}{2}f(a) + \tfrac{1}{2}f(b)\right] + E$$
    where $E$ is the error.
    - Integrates constant functions and polynomials of degree 1 exactly.
    - Composite version: $m$ sub-intervals, $H = \frac{b-a}{m}$:
      $$\int_a^b f(t)\,dt = H\left[\tfrac{1}{2}f(a) + \sum_{j=1}^{m-1} f(a + jH) + \tfrac{1}{2}f(b)\right] + \tilde{E}$$
    - When $f$ is $C^2$:
      $$E = -\frac{b-a}{12}\,H^2\,\frac{d^2f}{dt^2}(\xi), \qquad \xi \in [a,b]$$
    - In Matlab: `trapz`
  - Simpson's Rule:
    - Interpolates a 2nd-degree polynomial through $a$, $b$, and the midpoint.
    $$\int_a^b f(t)\,dt = \frac{b-a}{6}\left[f(a) + 4f\!\left(\frac{a+b}{2}\right) + f(b)\right] + E$$
    where $E$ is the error.
    - Integrates polynomials of degree 0, 1, 2, 3 exactly.
    - Composite version: $m$ sub-intervals, $H = \frac{b-a}{m}$, $t_k = a + k\frac{H}{2}$:
      $$\int_a^b f(t)\,dt = \frac{H}{6}\left[f(a) + 2\sum_{k=1}^{m-1} f(t_{2k}) + 4\sum_{k=0}^{m-1} f(t_{2k+1}) + f(b)\right] + \tilde{E}$$
    - When $f$ is $C^4$:
      $$E = -\frac{b-a}{2880}\,H^4\,\frac{d^4f}{dt^4}(\xi), \qquad \xi \in [a,b]$$
    - In Matlab: `quad` is a more advanced version of Simpson's Rule that dynamically adapts $H$.
  - Note: numerical integration is well-conditioned.
  - In general, the more intervals you take, the less error. In order from least to most accurate: Trapezoid, Midpoint, Simpson's.
  - All of these integration techniques interpolate the data.
- **Numerical Differentiation:**
  - Numerical differentiation is ill-conditioned.
  - Using the Taylor series expansion and the limit definition of the derivative:
    $$f'(x) = \begin{cases} \dfrac{f(x+h) - f(x)}{h} + O(h) & \text{forward difference} \\[2ex] \dfrac{f(x+h) - f(x-h)}{2h} + O(h^2) & \text{centered difference} \\[2ex] \dfrac{f(x) - f(x-h)}{h} + O(h) & \text{backward difference} \\[2ex] \dfrac{-f(x+2h) + 4f(x+h) - 3f(x)}{2h} + O(h^2) & \text{one-sided difference} \end{cases}$$
  - When the derivative (slope) is positive, the forward difference over-predicts and the backward difference under-predicts. The reverse is true when the derivative is negative.
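The three composite rules can be sketched in a few lines of Python/NumPy (the course itself uses MATLAB's `trapz`/`quad`; these helper names are illustrative, not part of any library):

```python
import numpy as np

def composite_midpoint(f, a, b, m):
    # H times the sum of f at the m sub-interval midpoints
    H = (b - a) / m
    mids = a + (np.arange(1, m + 1) - 0.5) * H
    return H * np.sum(f(mids))

def composite_trapezoid(f, a, b, m):
    # half-weight the two endpoints, full weight on interior nodes
    H = (b - a) / m
    t = a + np.arange(m + 1) * H
    return H * (0.5 * f(t[0]) + np.sum(f(t[1:-1])) + 0.5 * f(t[-1]))

def composite_simpson(f, a, b, m):
    # each sub-interval is sampled at its endpoints and its midpoint
    H = (b - a) / m
    total = 0.0
    for j in range(m):
        left = a + j * H
        total += f(left) + 4.0 * f(left + H / 2) + f(left + H)
    return H / 6.0 * total

# Simpson is exact for polynomials up to degree 3
print(composite_simpson(lambda t: t**2, 0.0, 1.0, 4))  # ~0.3333 (exact: 1/3)
```

Running all three on the same integrand shows the accuracy ordering stated above: trapezoid worst, midpoint roughly half its error, Simpson far better.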
  - Second derivative:
    $$f''(x) = \frac{f(x+h) - 2f(x) + f(x-h)}{h^2} + O(h^2)$$
  - The centered difference is generally better when possible, though it is not possible when computing in real time (no access to future data) or at the boundaries of the data.
- **Ordinary Differential Equations (ODEs):**
  - $y' = f(t, y)$; the unknown is a function of $t$.
  - Order of the ODE: the highest derivative appearing in the equation.
  - Any ODE of order higher than 1 can be decomposed into a system of order-1 ODEs.
    Example: Newton's law $m\frac{d^2u}{dt^2} = f$ is order 2. Let $v = \frac{du}{dt}$; then
    $$\frac{d}{dt}\begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} v \\ f/m \end{bmatrix}$$
  - General case, order $k$:
    $$\frac{d^k y}{dt^k} = f\!\left(t, y, \frac{dy}{dt}, \ldots, \frac{d^{k-1}y}{dt^{k-1}}\right)
    \quad\Longrightarrow\quad
    \frac{d}{dt}\begin{bmatrix} y \\ \frac{dy}{dt} \\ \vdots \\ \frac{d^{k-1}y}{dt^{k-1}} \end{bmatrix}
    = \begin{bmatrix} \frac{dy}{dt} \\ \frac{d^2y}{dt^2} \\ \vdots \\ f\!\left(t, y, \frac{dy}{dt}, \ldots, \frac{d^{k-1}y}{dt^{k-1}}\right) \end{bmatrix}$$
- **Numerical methods for solving ODEs:** approximate the solution $y$ with discrete values $y_k$ at discrete time samples $t_k$.
  - Forward Euler Method (first-order method; global error is $O(\Delta t)$):
    $$\frac{dg}{dt}(t) = \frac{g(t + \Delta t) - g(t)}{\Delta t} + O(\Delta t)$$
    - Iterative method: $y_{k+1} = y_k + \Delta t\, f(t_k, y_k)$ with $t_k = t_0 + k\Delta t$.
    - Forward Euler is an explicit method: there is no equation to solve.
    - For $y' = \lambda y$, the F.E. method is stable when the time step satisfies $0 < \Delta t < \frac{2}{|\lambda|}$.
  - Backward Euler Method (first-order method; global error is $O(\Delta t)$):
    $$\frac{dg}{dt}(t) = \frac{g(t) - g(t - \Delta t)}{\Delta t} + O(\Delta t)$$
    - Iterative method: $y_{k+1} = y_k + \Delta t\, f(t_{k+1}, y_{k+1})$ with $t_{k+1} = t_0 + (k+1)\Delta t$.
    - Backward Euler is an implicit method: there is an extra equation to solve to get $y_{k+1}$. Can call `fzero` to solve $w - \Delta t\, f(t_{k+1}, w) - y_k = 0$.
    - For $y' = \lambda y$ with $\lambda < 0$, the B.E. method is stable for all positive time steps ($0 < \Delta t$).
  - Trapezoid Method (second-order accuracy):
    $$y_{k+1} = y_k + \frac{\Delta t}{2}\left(f(t_k, y_k) + f(t_{k+1}, y_{k+1})\right)$$
    - The Trapezoid Method is an implicit method.
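For the linear test problem $y' = \lambda y$, both Euler updates are one line each. A Python sketch (for this linear right-hand side the backward-Euler equation has a closed form, so no root finder like `fzero` is needed):

```python
import numpy as np

lam, dt, steps = -5.0, 0.1, 50   # dt < 2/|lam| = 0.4, so forward Euler is stable here
y_fe, y_be = 1.0, 1.0
for _ in range(steps):
    # Forward Euler: explicit update y_{k+1} = y_k + dt*f(t_k, y_k)
    y_fe += dt * lam * y_fe
    # Backward Euler: y_{k+1} = y_k + dt*lam*y_{k+1}  =>  y_{k+1} = y_k / (1 - dt*lam)
    y_be /= (1.0 - dt * lam)

print(y_fe, y_be, np.exp(lam * dt * steps))  # all decay toward 0, like the exact solution
```

Raising `dt` above $2/|\lambda| = 0.4$ makes the forward-Euler iterates blow up while backward Euler stays bounded, matching the stability statements above.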
    - The Trapezoid Method is stable when
      $$\left|\frac{1 + \lambda\frac{\Delta t}{2}}{1 - \lambda\frac{\Delta t}{2}}\right| < 1,$$
      which holds for any $\Delta t$ when $\mathrm{Re}(\lambda) < 0$.
  - Runge-Kutta, order 2 (second-order accuracy):
    $$y_{k+1} = y_k + \Delta t\, f\!\left(t_k + \frac{\Delta t}{2},\; y_k + \frac{\Delta t}{2} f(t_k, y_k)\right)$$
    Note that $y_k + \frac{\Delta t}{2} f(t_k, y_k)$ is a linearization (a half Euler step) from $y_k$.
    - Explicit method.
    - In Matlab: `ode23()`
  - Runge-Kutta, order 4 (fourth-order accuracy):
    $$y_{n+1} = y_n + \frac{\Delta t}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right)$$
    where
    $$k_1 = f(t_n, y_n), \qquad k_2 = f\!\left(t_n + \tfrac{\Delta t}{2},\, y_n + \tfrac{\Delta t}{2}k_1\right), \qquad k_3 = f\!\left(t_n + \tfrac{\Delta t}{2},\, y_n + \tfrac{\Delta t}{2}k_2\right), \qquad k_4 = f\!\left(t_n + \Delta t,\, y_n + \Delta t\, k_3\right)$$
    - Explicit method.
    - In Matlab: `ode45()`
- **2nd-Order ODEs:**
  - Example: $y'' + y' - 6y = 0$, $y(0) = 2$, $y'(0) = -1$. `ode23`, `ode45`, etc. cannot solve a 2nd-order ODE as a scalar equation.
  - Vectorize: turn it into a system of 1st-order ODEs:
    $$\frac{d}{dt}\begin{bmatrix} y \\ y' \end{bmatrix} = \begin{bmatrix} y' \\ -y' + 6y \end{bmatrix}
    \quad\Longleftrightarrow\quad
    \frac{d}{dt}Y = AY = \begin{bmatrix} 0 & 1 \\ 6 & -1 \end{bmatrix} Y$$

    ```matlab
    f = @(t, x) [x(2); -x(2) + 6*x(1)];
    [t_out, y_out] = ode23(f, [0; 1], [2; -1]);
    ```
- **Absolute Stability:**
  - A numerical method is absolutely stable (for $y' = \lambda y$) if, for fixed $\Delta t$, $y_n$ remains bounded as $t_n \to \infty$.
  - Forward Euler: $|1 + \lambda\Delta t| < 1$, i.e. $0 < \Delta t < \frac{2}{|\lambda|}$.
  - Runge-Kutta 2nd order: $\left|1 + \lambda\Delta t + \frac{(\lambda\Delta t)^2}{2}\right| < 1$
  - Runge-Kutta 4th order: $\left|1 + \lambda\Delta t + \frac{(\lambda\Delta t)^2}{2} + \frac{(\lambda\Delta t)^3}{6} + \frac{(\lambda\Delta t)^4}{24}\right| < 1$
- **Stiff ODEs:**
  - Stiff differential equations do not have stable approximations with standard explicit methods unless the time step is very small; they require special solvers.
  - Sometimes ordinary methods like Euler can solve them with many, many timesteps, but stiff solvers can solve them with far fewer timesteps and much more efficiently.
  - Stiff solvers use "multi-step" methods, which look at several previous points to find future points.
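The RK4 update can be hand-rolled in a few lines. A Python sketch (illustrative; in the course you would just call `ode45`) applied to the vectorized example $y'' + y' - 6y = 0$, whose closed-form solution $y = e^{2t} + e^{-3t}$ follows from the characteristic roots $r = 2, -3$:

```python
import numpy as np

def rk4_step(f, t, y, dt):
    # classic fourth-order Runge-Kutta stages
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# vectorized form of y'' + y' - 6y = 0, y(0) = 2, y'(0) = -1
f = lambda t, x: np.array([x[1], -x[1] + 6.0 * x[0]])

t, dt = 0.0, 0.01
y = np.array([2.0, -1.0])
for _ in range(100):          # integrate from t = 0 to t = 1
    y = rk4_step(f, t, y, dt)
    t += dt

exact = np.exp(2.0) + np.exp(-3.0)  # closed-form solution at t = 1
print(y[0], exact)
```

With $\Delta t = 0.01$ the fourth-order global error is far below the print precision, consistent with the $O(\Delta t^4)$ accuracy claimed above.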
  - Adams-Bashforth: approximate
    $$y(t_{k+1}) = y(t_k) + \int_{t_k}^{t_{k+1}} f(s, y(s))\,ds
    \quad\text{as}\quad
    y_{k+1} = y_k + \int_{t_k}^{t_{k+1}} p(s)\,ds$$
    where $p(s)$ is a polynomial that interpolates $f$ at the previous points $(t_{k+1-n}, y_{k+1-n}), \ldots, (t_k, y_k)$:
    $$y_{n+1} = y_n + \frac{\Delta t}{2}\left(3f(t_n, y_n) - f(t_{n-1}, y_{n-1})\right) \qquad (p \text{ linear})$$
    $$y_{n+1} = y_n + \frac{\Delta t}{12}\left(23f(t_n, y_n) - 16f(t_{n-1}, y_{n-1}) + 5f(t_{n-2}, y_{n-2})\right) \qquad (p \text{ quadratic})$$
  - A multi-step method whose interpolating polynomial has degree $q$ (a $(q+1)$-step method) has order $q+1$ accuracy.
  - To initialize, first use one-step methods to get a few points, then switch to the multi-step method.
- **(Reduced) Singular Value Decomposition:**
  - Consider $A \in \mathbb{C}^{n \times c}$ ($n \ge c$). Then $A$ can be decomposed as $A = \hat{U}\hat{\Sigma}V^*$ where:
    - $\hat{\Sigma}$ is a $c \times c$ diagonal matrix,
      $$\hat{\Sigma} = \begin{bmatrix} \sigma_1 & & 0 \\ & \ddots & \\ 0 & & \sigma_c \end{bmatrix}, \qquad \sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_c \ge 0,$$
      where the $\sigma_i$ are the singular values of $A$.
    - $\hat{U}$ is an $n \times c$ orthonormal matrix, meaning $\hat{U}^*\hat{U} = I$: the columns of $\hat{U}$ are orthonormal.
    - $V$ is a $c \times c$ unitary matrix, meaning $V^*V = I$, so $V^{-1} = V^*$.
  - The SVD always exists, though it may not be unique.
- **(Full) Singular Value Decomposition:**
  - Consider $A \in \mathbb{C}^{n \times c}$ ($n \ge c$). Then $A = U\Sigma V^*$ where:
    - $\Sigma$ is an $n \times c$ matrix,
      $$\Sigma = \begin{bmatrix} \hat{\Sigma} \\ 0 \end{bmatrix} = \begin{bmatrix} \sigma_1 & & 0 \\ & \ddots & \\ 0 & & \sigma_c \\ & 0 & \end{bmatrix}$$
    - $U$ is an $n \times n$ square unitary matrix.
    - $V$ is a $c \times c$ square unitary matrix.
  - In the case $n < c$: $A \in \mathbb{C}^{n \times c} \Rightarrow A^T \in \mathbb{C}^{c \times n}$. Then:
    $$A^T = \tilde{U}\tilde{\Sigma}\tilde{V}^T \quad\Longrightarrow\quad A = \tilde{V}\tilde{\Sigma}^T\tilde{U}^T$$
  - In Matlab:
    ```matlab
    [U, S, V] = svd(A);       % full SVD, A = U*S*V', size(A) ~= size(U)
    [Ur, Sr, Vr] = svd(A, 0); % reduced SVD, A = Ur*Sr*Vr', size(A) = size(Ur)
    S = svd(A);               % vector of singular values
    ```
  - Given the SVD:
    $$A = \sum_{i=1}^{c} \sigma_i\, u_i v_i^T$$
    where $u_i, v_i$ are the columns of $U, V$ respectively. Recall that the $\sigma_i$ are decreasing, so if they decrease quickly you can keep only the first few terms and still get a good approximation of $A$ at lower cost (less to store).
  - This leads to the Low-Rank Approximation: the truncated sum
    $$A_p = \sum_{i=1}^{p} \sigma_i\, u_i v_i^T$$
    is the best rank-$p$ approximation of $A$, using only $p$ terms (vectors):
    $$\|A - A_p\|_2 = \min_{\mathrm{rank}(B) = p} \|A - B\|_2 = \sigma_{p+1}$$
  - Complete storage: $mn$ complex numbers; rank-$p$ storage: $p(m + n + 1)$ numbers.
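The low-rank statement above is easy to verify numerically. A NumPy sketch (note `numpy.linalg.svd` returns $V^T$ rather than $V$; the spectral-norm error of the rank-$p$ truncation should come out equal to $\sigma_{p+1}$):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))

U, s, Vt = np.linalg.svd(A, full_matrices=False)  # reduced SVD: A = U @ diag(s) @ Vt

p = 5
Ap = U[:, :p] @ np.diag(s[:p]) @ Vt[:p, :]        # rank-p truncation A_p

err = np.linalg.norm(A - Ap, 2)                   # spectral (2-)norm of the error
print(err, s[p])                                  # these match (Eckart-Young)
```

Storing `U[:, :p]`, `s[:p]`, and `Vt[:p, :]` takes $p(m + n + 1)$ numbers versus $mn$ for the full matrix, which is the compression argument above.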
- **Review of Costs:**
  - $O(n^3)$:
    - LU decomposition ($A$ is $n \times n$): `[L, U, P] = lu(A)`, $PA = LU$.
    - Eigendecomposition ($A$ is $n \times n$): `[V, D] = eig(A)`, $AV = VD$. Note the above does NOT imply diagonalizability: you cannot always go from $AV = VD$ to $A = VDV^{-1}$.
    - SVD ($A$ is $n \times n$): $A = U\Sigma V^*$ with $U^*U = I$, $V^*V = I$, $V$ square, and $\Sigma$ diagonal. $V$ is unitary, meaning $V^{-1} = V^*$ (in Matlab, `V'`). (If $A$ is $m \times n$, the cost is $O(mn^2)$.)
  - $O(n^2)$:
    - `L\x`, `U\x` for $L$ lower triangular and $U$ upper triangular (forward/back substitution).
    - `A*x` for $A$ an $n \times m$ matrix.
- **Principal Component Analysis (PCA):**
  - One set of data $x = [x_1, x_2, \ldots, x_N]$:
    $$\text{mean: } \bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad
    \text{variance: } \mathrm{var}(x) = s^2 = \frac{\sum_{i=1}^{N} (x_i - \bar{x})^2}{N - 1}, \qquad
    \text{std. dev.: } s = \sqrt{\mathrm{var}(x)}$$
  - Two sets of data $X$, $Y$:
    $$\mathrm{cov}(X, Y) = \frac{\sum_{i=1}^{N} (x_i - \bar{x})(y_i - \bar{y})}{N - 1}$$
    We are mostly interested in the sign of the covariance: $(+)$ tells us the dimensions increase together; $(-)$ tells us one increases while the other decreases.
  - PCA goals: extract information from data, compress the size of the data (keep only the important information), simplify the description of the data, analyze its structure.
  - PCA algorithm/procedure:
    1. Organize the data as an $n \times m$ matrix $X$, where $m$ is the number of measurement types and $n$ is the number of trials.
    2. Subtract off the mean of each row of $X$ (so each row has zero mean).
    3. Compute the SVD: $X = U\Sigma V^T$.
    4. The principal components are $Y = U^T X$ (take only as many columns of $U$ as principal components you want).
- **Discrete Fourier Transform (DFT):**
  - We will use the Fast Fourier Transform, FFT (an algorithm), to compute the DFT.
  - A Fourier series is a trigonometric interpolation of a function (it uses $\sin$, $\cos$, and $1$ instead of powers of $x$). This is more appropriate for cyclic or periodic functions. We can represent data as a linear combination of sines and cosines, decomposing the data into its frequency components.
  - Recall that $e^{i\theta} = \cos(\theta) + i\sin(\theta)$. Let $\omega_n = \exp\!\left(-\frac{2i\pi}{n}\right)$ and $\bar{\omega}_n = \frac{1}{\omega_n}$.
  - DFT: $[x_0, \ldots, x_{n-1}]^T \xrightarrow{\text{DFT}} [y_0, \ldots, y_{n-1}]^T$, where $y$ is in general complex:
    $$y_k = \sum_{j=0}^{n-1} x_j \exp\!\left(-\frac{2i\pi jk}{n}\right) = \sum_{j=0}^{n-1} x_j\, \omega_n^{jk}$$
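Steps (1)-(4) of the PCA procedure can be sketched with NumPy on an assumed synthetic data set (two correlated measurement types; rows of `X` are measurement types here, matching the row-mean subtraction in step 2):

```python
import numpy as np

rng = np.random.default_rng(1)
# two measurement types over 200 trials; y is strongly correlated with x
x = rng.standard_normal(200)
y = 2.0 * x + 0.1 * rng.standard_normal(200)
X = np.vstack([x, y])                  # step 1: each row is one measurement type

X = X - X.mean(axis=1, keepdims=True)  # step 2: zero-mean each row
U, s, Vt = np.linalg.svd(X, full_matrices=False)  # step 3: SVD
Y = U.T @ X                            # step 4: principal components

# the first singular value dominates: the data is nearly one-dimensional
print(s[0], s[1])
```

The dominant singular direction captures the shared $y \approx 2x$ trend; the second component carries only the small residual noise.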
  - Inverse DFT: $[y_0, \ldots, y_{n-1}]^T \xrightarrow{\text{iDFT}} [x_0, \ldots, x_{n-1}]^T$:
    $$x_k = \frac{1}{n}\sum_{j=0}^{n-1} y_j \exp\!\left(\frac{2i\pi jk}{n}\right) = \frac{1}{n}\sum_{j=0}^{n-1} y_j\, \bar{\omega}_n^{jk}$$
  - Let $\hat{f}$ be the Fourier coefficients (output) and $f$ the data (input); then
    $$\begin{bmatrix} \hat{f}_0 \\ \hat{f}_1 \\ \hat{f}_2 \\ \vdots \\ \hat{f}_{n-1} \end{bmatrix}
    = \begin{bmatrix}
    1 & 1 & 1 & \cdots & 1 \\
    1 & \omega_n & \omega_n^2 & \cdots & \omega_n^{n-1} \\
    1 & \omega_n^2 & \omega_n^4 & \cdots & \omega_n^{2(n-1)} \\
    \vdots & \vdots & \vdots & \ddots & \vdots \\
    1 & \omega_n^{n-1} & \omega_n^{2(n-1)} & \cdots & \omega_n^{(n-1)^2}
    \end{bmatrix}
    \begin{bmatrix} f_0 \\ f_1 \\ f_2 \\ \vdots \\ f_{n-1} \end{bmatrix}$$
  - Both the DFT and the inverse DFT are matrix-vector multiplications and thus have cost $O(n^2)$.
  - However, the FFT (you don't need to know how the algorithm works) has cost $O(n\log(n))$, which is much better!
  - In Matlab:
    - If `length(x)` is a power of 2:
      ```matlab
      y = fft(x);
      x = ifft(y);
      ```
    - If `length(x)` is not a power of 2:
      ```matlab
      y = fft(x, 2^nextpow2(length(x)));  % length(y) > length(x)
      xnew = ifft(y);
      x = xnew(1:length(x));
      ```
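The $O(n^2)$ matrix form and the FFT compute the same thing, which is easy to check in NumPy (a sketch; `numpy.fft.fft` uses the same $e^{-2i\pi jk/n}$ sign convention as the formulas above):

```python
import numpy as np

def naive_dft(x):
    # O(n^2) matrix-vector form: y_k = sum_j x_j * omega^(jk), omega = exp(-2*pi*i/n)
    n = len(x)
    j, k = np.meshgrid(np.arange(n), np.arange(n))
    F = np.exp(-2j * np.pi * j * k / n)
    return F @ x

x = np.array([1.0, 2.0, 3.0, 4.0])
y_naive = naive_dft(x)
y_fft = np.fft.fft(x)        # same result, at O(n log n) cost
x_back = np.fft.ifft(y_fft)  # inverse DFT recovers the data

print(np.allclose(y_naive, y_fft), np.allclose(x_back, x))  # both comparisons hold
```

Timing the two on large `n` makes the $O(n^2)$ versus $O(n\log n)$ gap very visible.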
