### Create a StudySoup account

#### Be part of our community, it's free to join!

Already have a StudySoup account? Login here

# NUMERICAL ANALYSIS MTH 654

OSU

GPA 3.79

### View Full Document

## 42

## 0

## Popular in Course

## Popular in Mathematics (M)

This 67 page Class Notes was uploaded by Mrs. Dedric Little on Monday October 19, 2015. The Class Notes belongs to MTH 654 at Oregon State University taught by Staff in Fall. Since its upload, it has received 42 views. For similar materials see /class/224442/mth-654-oregon-state-university in Mathematics (M) at Oregon State University.

## Popular in Mathematics (M)

## Reviews for NUMERICAL ANALYSIS

### What is Karma?

#### Karma is the currency of StudySoup.

#### You can buy or earn more Karma at anytime and redeem it for class notes, study guides, flashcards, and more!

Date Created: 10/19/15

Electromagnetic characterization of damage in Space Shuttle foam Nathan L Gibson gibsonnmath oregonstate edu In Collaboration with Prof H T Banks CRSC Dr W P Winfree NASA N IA NATIEINAL INSTITUTE UF AEHDEF39AEE Department of Mathematics 7 p 1 Motivating Application Hard outer skin Water settling in crack External tank skin Ice or water may become trapped between foam and tank Cracked foam Cracks or other defects suak up water fmm rain or high humidity such as was prevalent during Columbia39s stay an the pad The water in direct contact with the super ccld tank cab ring the This process can widen the cracks causmg mere ice chunks r put the er to freeze er to seep in forming The particular motivation for this research is the detection of defects in the insulating foam on the space shuttle fuel tanks in order to help eliminate the separation of foam during shuttle ascent V 39 r U HM UNEHNMUIIIVHSM Department of Mathematics 7 D 2 Outline 0 1D Gap Detection Model 0 Numerical Methods for Forward Problem 0 Inverse Problem 0 Ordinary Least Squares Alternate Approach 0 Computational Results Department of Mathematics 7 p 3 Picometrix T Ray Setup Stepblock can be used to interrogate varying thicknesses of foam It can be turned upside down to sample varying gap sizes H Department of Mathematics 7 p 4 THz through foam THZ signal recorded after passing through foam of varying thickness in a pitchecho eX periment gt500 TDD in an 1m amunnSiahUIwusny Department ofMaihemaiicyus FFT of THz Signal FFT of waveforms through various thicknesses of foam i i i i Reference 7 L0 5quot Spectral Amplitude arb units L39IIIIIIIIIIIIIII L 3 4 Frequency Hz 11 Oregon State Universi y c0 d m k D 0 v1 P n O on C e t e D p a G pg IllIIIIIIIIIIIH Model mme 01915 MOO9E E MOJ8 in Q U 20 TPP 0 s EOOE inQ E cE Z0 0 Ez1 0 E0 z E390 z 0 P07 z 0 Where 18757 z 6zsmwt10tft and er 11 Qgt0 1IQ Department of Mathematics 7 p 8 Numerical Discretization 0 Second order FEM in space 0 piecewise linear splines 0 Second order FD in time CrankNicholson P 0 Central differences E en pn en gtZ9n1 gt o E equation implicit LU factorization used Department of Mathematics 7 p 9 Finite Element Method in Space The resulting system of differential equations in semidiscrete form can be written M1 Mg M3 A2pZ 770J 25 A15 6dAMQe 2 Where 770 uoeo 6d65 600 A1CT e and p are vectors representing the approximate values of E and P respectively at the nodes 2 15M9p Where M 9 is the mass matrix integrated only over 2 Department of Mathematics 7 p 10 Finite Difference in Time 19 Our nite difference approximation for 2 is AAt pn1 pn l mlt dMQ n6 pm where enjEtn zj pnjM9Ptm zj ijjh The value eng en 1 6en1 is a weighted average of an and enH for relaxation to help with stability of the method Note we take 6 12 Department of Mathematics 7 p 11 Finite Difference in Time 6 Applying second order central differencing with averaging to 1 gives A1 n2 A2 nl l A3 n l AtZUOJnJrl A2At2pnh 4 As an depends explicitly on an and an we could substitute 3 here and have one implicit equation for the update of 6 Note we use LU factorization as A1 does not change over time m Mathematicng Sample Problem Signal at t01182 ns Signal at t0141841 ns i electric eld voltsmeter I electric eld voltsmeter I 200 0005 001 0015 002 0 0005 001 0015 002 z m t 2 meters e ers Signal at t0165481 ns Signal at t0189122 ns electric eld voltsmeter I electric t39eld voltsmeter I Computed solutions at different times of a Windowed electromagnetic pulse at f100GHz incident on a Debye medium With a crack 60002m Wide located d02m into the material V 39 I i HM UNEHNMUIIIVHSM Department of Mathematics 7 D 13 Sample Problem C0nt E a E 2 3 a E a c electric Re ected signal received at 20 1m amunnSiahUIwusny Department ofMaxhemaxicsrp 14 Sample Problem Cont Signal received at z0 E a E 2 3 gt I 2 s 2 a Close up look at re ected signal received at 20 Shows important parts of the signal 1m amunnSiahUIwusny Department ofMaihemaiicsrp 15 Gap Detection Inverse Problem 0 Assume we have data E recorded at 20 0 Given d and 6 we can simulate the electric eld Need a fast numerical method 0 Estimate d and 6 by solving an inverse problem Find qd 6 E Qad such that the following objective function is minimized S 1 A j1Q E E Eti70 Q Ez l2 21 ie nd the value of q that results in E g which is a best match to the data E in a least squares sense m Mammy j1q Surface Plot function demonstrating peaks in j 1 and exhibiting many local minima Hm DreunnSiahUIwusny Department ofMathematics 7p 17 Out of phase Comparison of s1 for different deltas using 215 sample points l l delta 02 delta 0181 01 O cu I electric field voltsmeter 01 O 1OO 15O Improved Objective Function Consider the following formulation of the Inverse Problem Find qd 6 E Qad such that the following objective function is minimized S 1 A 2 M E Z lIEmwl IEil 2 1 Department of Mathematics 7 p 19 q Surface Plot de lh Close up surface plot of our Modi ed Least Squares objective function demonstrating lack of peaks in jg but still exhibiting many local minima Hm DreunnSiahUIwusny Department ofMathematics 7p 20 Erroneous Local Minima Signal received at E a E 2 s 3 2 a I 2 s 2 a 035 0355 036 0365 Data from d 6 and a simulation from the check point d 204 6 20 ET The simulated signal s largest peak matches with that of the data 1m amunnSiahUIwusny Department ofMathemaiicsrp 21 Check Point Method The diagonal trench occurs approximately along the line 1 a Also the minima occur every gm along this line Therefore if our optimization routine detects a local minima we test 2 on either side of the local minima to see if there is a smaller minima nearby If so we restart our optimizer at the new smallest point d 6 6d m Mame LevenbergMarquardt Method We rewrite the objective function as J iRTR q 28 Where R Et 0 q is the residual To update our approximation to q we make the GaussNewton update step qqc 50 Where Sc R ltqcgtTR ltqcgt VCI 1R ltqCTRqC is the step qc is the current approximation and q is the resulting approximation The value 10 is called the LevenbergMarquardt parameter m Mmmcwza Con dence Intervals for d 9 0 6 d 02 N 2048 0002 200005 i 930284 X 10 7 X 10 2 0004 200001 i 650411 X 107 X 10 2 0008 200001 i 491232 X 107 X 10 2 6 d 04 N 4096 0002 400013 i 162162 x 106 x 102 0004 400001 i 119064 x 10 6 x 10 2 0008 400002 i 905240 x 107 x 102 Con dence intervals for the OLS estimate of d when the data is generated with no noise ie V500 Department of Mathematics 7 p 24 Con dence Intervals for d 9 1 6 d 02 N 2048 0002 200000 i 472903 x 10 5 X 10 2 0004 200003 i 339327 x 105 X 10 2 0008 200003 i 279911 X 105 X 10 2 6 d 04 N 4096 0002 400014 i 548283 x 105 X 10 2 0004 400002 i 387474 X 10 5 X 10 2 0008 400003 i 319526 x 105 X 10 2 Con dence intervals for the OLS estimate of d when the data is generated with noise level Ur01 Department of Mathematics 7 p 25 Con dence Intervals for 6 9 0 5 d 02 N 2048 0002 0004 0008 5 199272 i 0000182978 x 10 4 400035 i 0000201885 X 10 4 799833 i 0000136586 x 10 4 d 04 N 4096 0002 0004 0008 198142 i 0000317616 X 10 4 400737 i 0000369841 X 10 4 800332 i 0000251291 X 10 4 Con dence intervals for the OLS estimate of 5 when the data is generated with no noise ie V500 Department of Mathematics 7 p 26 Con dence Intervals for 6 y 1 5 d 02 N 2048 0002 0004 0008 5 200017 i 000932701 X 10 4 400070 i 00105331 X 10 4 799698 i 000778563 x 10 4 d 04 N 4096 0002 0004 0008 197674 i 00107203 X 10 4 401229 i 00120445 X 10 4 800361 i 000886925 X 10 4 Con dence intervals for the OLS estimate of 5 when the data is generated with noise level Ur01 Department of Mathematics 7 p 27 Comments on 1D Gap Problem Our modi ed Least Squares objective function xes peaks in j Can test on both sides of detected minima to ensure global minimization We are able to detect a 2mm Wide crack behind a 200m deep slab Even adding random noise equivalent to 20 relative noise does not signi cantly hinder our inverse problem solution method and only slightly broadens the con dence intervals in a sensitivity analysis Department of Mathematics 7 p 28 t hang H I 4 u Ej i iru gt vi m 574 quot tl w e quotnampr ti J if a e i flv Q a allva fit H1 1 M tallMi Y my 39 39 f f tztgaamw l 39 L a it 1 1 a a 5 W g mn I L Yl l k Material heterogeniety may have signi cant effects on the output of an interrogating signal especially pulsed UWB signals Consider distributed parameters homogenization or distributions Department of Mathematics 71129 MW VQDJJSLLQIHDLE Vammnoe Relative Noise vs Constant Variance o o E a E 2 3 a 20 a c 2 U 2 a 032 0325 033 0335 t ns The difference between data with relative noise added and data with constant variance noise added is clearly evident when E is close to zero or very large m mamwao Gradientbased Methods for Optimization Part 1 Nathan L Gibson gibsonnmath oregonstate edu Department of Mathematics Oregon State University M OSU AMCSW N MM Outline Unconstrained Optimization Newton s Method Inexact Newton Quasi Newton Nonlinear Least Squares Gauss Newton Method Steepest Descent Levenberg Marquardt Method m U M SW N m 2 Unconstrained Optimization Minimize function f of N variables Ie nd local minimizer 55 such that fa g f1 for all a near a Different from constrained optimization fa g f1 for all a E U near 55 Different from global minimizer fa g f1 for all a possibly in U OSU 7 AMC Seminar NOV 2007 r p 3 Sample Problem Parameter Identi cation Consider u cu ku 0 u0 uog u 0 0 1 where it represents the motion of an unforced harmonic oscillator e g spring We may assume uo is known and data u il is given for some times tj on the interval 0 T Now we can state a parameter identi cation problem to be nd a 0 MT such that the solution ut to 1 using parameters a is as close as possible to uj when evaluated at times tj M U M SW N m 4 Objective Function Consider the following formulation of the Parameter Identi cation problem Find ae MT such that the following objective function is minimized 1 M are E 2 WWW all 31 This is an example of a nonlinear least squares problem OSU 7 AMC Seminar NOV 2007 r p 5 Iterative Methods An iterative method for minimizing a function f usually has the following parts Choose an initial iterate 310 Fork0l If 13k optimal stop Determine a search direction d and a step size A Sew61 CUk OSU 7 AMC Seminar NOV 2007 r p 6 Convergence Rates The sequence 1 is said to converge to 55 With rate p and rate constant C if 1 w C k gtoo 56Hp Linear p l and 0 lt C lt 1 such that error decreases Quadratic p 2 doubles correct digits per iteration Superlinear If p l C 0 Faster than linear Includes quadractic convergence but also intermediate rates M U M SW N m 7 Necessary Conditions Theorem 1 Let f be twice continuously di erentiable and let 513 be a local minimizer off Then Vf 0 2 and the Hessian off V2f is positive semide nite Recall A positive semide nite means xTAxZO VJJERN Equation 2 is called the rstorder necessary condition OSU 7 AMC Seminar NOV 2007 r p 8 Hessian Let f RN gt R be twice continuously differentiable C2 then The gradient of f is Vf 9817 The Hessian of f is if if 78N OSU 7 AMC Seminar NOV 2007 r p 9 Suf cient Conditions Theorem 2 Let f be twice continuously di erentiable in a neighborhood of 513 and let Wm 0 and the Hessian off V2f be positive semide nite Then 13 is a local minimizer off Note second derivative information is required to be certain forinstance if f 313 OSU 7AMC Seminar Nov 2007 7p 10 Quadratic Objective Functions Suppose fI vTHSU IJTb then we have that V2fa H and if H is symmetric assume it is Vf H 1 17 Therefore if H is positive semide nite then the unique minimizer a is the solution to Ha b OSU 7 AMC Seminar NOV 2007 7 p 11 Newton s Method Newton s Method solves for the minimizer of the local quadratic model of f about the current iterate 13k given by mm HM V wk x wk kTV2fc If V2f is positive de nite then the minimizer n1 of mk is the unique solution to 0 mG v Vfak V2fvkv 3 mm OSU AMCSW N MW Newton Step The solution to 3 is computed by solving V2fvksk for the Newton Step 8 Then the Newton update is de ned by k1 13k 81 Note the step 8 has both direction and length Variants of Newton s Method modify one or both of these um OSU AMCSW N 2m Standard Assumptions Assume that f and 55 satisfy the following 1 Let f be twice continuously differentiable and llV2fv V2fyl S 7sz yll 2 Vf1 0 3 V2f 513 is positive de nite OSU 7AMC Seminar NOV 2007 7p 14 Convergence Rate Theorem 3 Let the Standard Assumptions hold Then there exists a 6 gt 0 such that if 5130 E 85CU the Newton iteration converges quadratically to 513 16 Hek1H S KHekHZ If 10 is not Close enough Hessian may not be positive de nite If you start Close enough you stay Close enough OSU 7 AMC Seminar NOV 2007 7 p 15 Problems and solutions Need derivatives Use nite difference approximations Needs solution of linear system at each iteration Use iterative linear solver like CG Inexact Newton Hessians are expensive to nd and factor 0 Use chord factor once or Shamanskii Use Quasi Newton update H k to get H H1 0 Use Gauss Newton rst order approximate Hessian m OSU AMCSW N 2m Nonlinear Least Squares Recall 1 M m E 2 ijv w j1 Then for 1 0 MT T uglim uwj7 x uj R Where RI ut1 13 ul utM 13 uMT is called the residual OSU 7AMC Seminar NOV 2007 7p 17 Approximate Hessian In terms of the residual R the Hessian of f becomes M V21 13 R IITR II Z mil WZMJI 31 Where rja is the jth element of the vector The second term requires the computation of M Hessians each size N X N However if we happen to be solving a zero residual problem this second order term goes to zero One can argue that for small residual problems and good initial iterates the second order term is neglibible um OSU AMCSW N 2m GaussNewton Method The equation de ning the Newton step V2fk8k Vfk becomes RkTRUk8k RUkTRUk We de ne the Gauss Newton step as the solution skGN to this equation You can expect close to quadratic convergence for small residual problems Otherwise not even linear is guaranteed OSU 7 AMC Seminar NOV 2007 7 p 19 Numerical Example Recall uquot cu ku 0 110 uO uO 0 Let the true parameters be 55 0 MT 1 1T Assume we have M 100 data uj from equally spaced time points on 0 10 We will use the initial iterate 10 11 105T with Newton s Method and Gauss Newton We compute gradients With forward differences analytical 2 X 2 matrix inverse and use ode 1 5 s for time stepping the ODE m U M 3mquot N m 2 Comparison of initial iterate Data Eilnitial iterate 39 quotwin 7 mmm mm 39IK39JIW it in Oregon State University Gradient Norm Function Va ue Gradient Norm Function Va ue rzt xit39lfC E nmm 1mm ma a Oregon State Unwers y Newton Gauss Newton mem aws mem aws 2330e01 7881e 01 2330e01 7881e 01 6852e00 9817e 02 l767e00 6748e 03 4577e 01 6573e 04 1016e 02 4656e 07 3242e 03 3852e 08 1844e 06 2626e 13 4213e 07 2471e 13 mewow Table 1 Parameter identi cation problem locally convergent iterations CPU time Newton 345 Gauss Newton ls m U M 3mquot N m 23 Iteration history A 39I39 Newton s Method 3 Gauss Newton V 096 098 1 102 104 106 C 39 mu 7 Mai m1mw m mm 7 n regnn Slate University 39 Search Direction 1 7 7 M4110 win 1M mm 7 n regnn Slate University 7 Search Direction 39i39 Newton s Method ZZZGauss Newton 39 WU r E hl ll NM mm 7 BL 236 Oregon Slate University 7 Global Convergence Newton direction may not be a descent direction if Hessian not positive de nite Thus Newton or any Newton based method may increase f if 3130 not close enough Not globally convergent Globally convergent methods ensure suf cient decrease in f The steepest descent direction is always a descent direction M U M 3mquot N m 27 Steepest Descent Method We de ne the steepest descent direction to be dk Vf This de nes a direction but not a step size We de ne the Steepest Descent update step to be 873D Akdk fOI SOIIIC Ak gt 0 We will talk later about ways of choosing A OSU 7 AMC Seminar NOV 2007 7 p 28 Iteration history Newton s Method ijGauss Newton A 993 106 108 11 112 096 098 39 WU 7 MNKC mnuwm NEW HJIW39 7 BL 3 3 regnn Slate University 7 Search Direction I I39 Newton s Method 2 Gauss Newton 39 r mmmm mm mm 7 w M regnn Slate University 39 Search Direction i Newton s Method 3 Gauss Newton 39eDnt 5 6 7 39 r mmmm mm mm 7 w n regnn Slate University 39 Steepest Descent Comments Steepest Descent direction is best direction locally The negative gradient is perpendicular to level curves Solving for 8719 is equivalent to assuming In general you can only expect linear convergence Would be good to combine global convergence property of Steepest Descent with superlinear convergence rate of Gauss Newton OSU 7 AMC Seminar Nov 2007 7 p 32 LevenbergMarquardt Method Recall the objective function 1 fv RltwgtTRltwgt Where R is the residual We de ne the Levenberg Marquardt update step s M to be the solution of RxkTRlak Vkl 8k RUkTRCUk Where the regularization parameter Vk is called the Levenberg Marquardt parameter and it is chosen such that the approximate Hessian R wkTR W is positive de nite OSU 7 AMC Seminar NOV 2007 7 p 33 Search Direction 39i39 Newton s Method quot39I139Gauss Newton 7 Steepest Descent Levenberg Marquardt 39 r mmmm NM am 7 w Dragon Slate University 39 Search Direction I Newton s Method K lJGauss Newton 39 Steepest Descent 4 quotaq ad 4 5 6 7 C r mmmm NM mm 7 p Dragon Slate University S s LevenbergMarquardt Notes Robust with respect to poor initial conditions and larger residual problems Varying V involves interpolation between GN direction V 0 and SD direction large V We will talk later on strategies for choosing V See doc lsqnonlin for MATLAB instructions for LM and GN OSU 7 AMC Seminar NOV 2007 7 p 36 Summary Taylor series with remainder fr M wltxkgtTltx ask 9 askTvrr gtltx ask Newton mivcr ask WltxkgtTltx ask as wav rmxx ask Steepest Descent meCIE fk VfkT Irk 501quot 113kTi Ak1r rk Gauss Newton mkGNr ask VfrkTr ask 3C1 xkTH rkTR39 33 ask Levenberg Marquardt m Mcr mWltxkgtTltx xkgt ltx xkgtT Wm R39m quotmas ask OSU 7 AMC Seminar NOV 2007 7 p 37

### BOOM! Enjoy Your Free Notes!

We've added these Notes to your profile, click here to view them now.

### You're already Subscribed!

Looks like you've already subscribed to StudySoup, you won't need to purchase another subscription to get this material. To access this material simply click 'View Full Document'

## Why people love StudySoup

#### "I was shooting for a perfect 4.0 GPA this semester. Having StudySoup as a study aid was critical to helping me achieve my goal...and I nailed it!"

#### "I made $350 in just two days after posting my first study guide."

#### "Knowing I can count on the Elite Notetaker in my class allows me to focus on what the professor is saying instead of just scribbling notes the whole time and falling behind."

#### "It's a great way for students to improve their educational experience and it seemed like a product that everybody wants, so all the people participating are winning."

### Refund Policy

#### STUDYSOUP CANCELLATION POLICY

All subscriptions to StudySoup are paid in full at the time of subscribing. To change your credit card information or to cancel your subscription, go to "Edit Settings". All credit card information will be available there. If you should decide to cancel your subscription, it will continue to be valid until the next payment period, as all payments for the current period were made in advance. For special circumstances, please email support@studysoup.com

#### STUDYSOUP REFUND POLICY

StudySoup has more than 1 million course-specific study resources to help students study smarter. If you’re having trouble finding what you’re looking for, our customer support team can help you find what you need! Feel free to contact them here: support@studysoup.com

Recurring Subscriptions: If you have canceled your recurring subscription on the day of renewal and have not downloaded any documents, you may request a refund by submitting an email to support@studysoup.com

Satisfaction Guarantee: If you’re not satisfied with your subscription, you can contact us for further help. Contact must be made within 3 business days of your subscription purchase and your refund request will be subject for review.

Please Note: Refunds can never be provided more than 30 days after the initial purchase date regardless of your activity on the site.