UW - AMATH 301 - Study Guide - Midterm

Description

Reviews

Better than the professor's notes. I could actually understand what the heck was going on. Will be back for help in this class.

AMATH 301 Midterm 1 Study Guide

• Matrices/Vectors:

– Matrices in Matlab: Use commas (",") or spaces to separate elements of a row, and use semicolons (";") to separate rows and start a new one.

Example:

A =

1 2 3
4 5 6
7 8 9

is written as:

A = [1, 2, 3; 4, 5, 6; 7, 8, 9];

• Norms:

– A norm is an abstract notion of a certain kind of distance.

– ‖x‖ is a function from Rn → R and is a norm when: (1) ‖x‖ ≥ 0, with ‖x‖ = 0 only for x = 0; (2) ‖cx‖ = |c| ‖x‖ for any scalar c; (3) ‖x + y‖ ≤ ‖x‖ + ‖y‖ (the triangle inequality).

– The 1, 2, 3, . . . , ∞ norms (called the p-norms) are given by the equation ‖x‖p := (Σ_{i=1}^n |xi|^p)^(1/p).

– For p = 1 we get the 1-norm (Manhattan norm), the sum of the absolute values of all the elements of x:

‖x‖1 = Σ_{i=1}^n |xi|, e.g. ‖(x1, x2, x3)ᵀ‖1 = |x1| + |x2| + |x3|

– For p = 2 we get the 2-norm, which is our traditional understanding of the norm in that it gives the Euclidean distance (or the magnitude of the vector). Distance "as the bird flies":

‖x‖2 = (Σ_{i=1}^n |xi|²)^(1/2), e.g. ‖(x1, x2, x3)ᵀ‖2 = √(x1² + x2² + x3²)

Note that if the subscript is omitted from the norm (‖x‖), this is generally taken to be the 2-norm.

– The ∞-norm (max norm) is the maximum of the absolute values of the elements:

‖x‖∞ = max_{1≤i≤n} |xi|, e.g. ‖(x1, x2, x3)ᵀ‖∞ = max(|x1|, |x2|, |x3|)

– Note: ‖x‖∞ ≤ ‖x‖2 ≤ ‖x‖1

– In Matlab, to find the p-norm of a vector a, use the command norm(a, p). By default, Matlab calculates the 2-norm, so norm(a) gives the 2-norm.
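As a quick check of the definitions above, a short Matlab sketch (the vector values are chosen arbitrarily):

```matlab
% Compare the built-in norm command against the formulas above
a = [3; -4; 12];

n1   = norm(a, 1);    % 1-norm:   |3| + |-4| + |12| = 19
n2   = norm(a, 2);    % 2-norm:   sqrt(9 + 16 + 144) = 13
ninf = norm(a, inf);  % inf-norm: max(|3|, |-4|, |12|) = 12

% The ordering noted above holds: ninf <= n2 <= n1
fprintf('1-norm = %g, 2-norm = %g, inf-norm = %g\n', n1, n2, ninf);
```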

• Matlab commands:

– rand(# rows, # cols) → produces matrix of given size with random entries (between 0 & 1)

– zeros(# rows, # cols) → produces matrix of given size with all entries zero

– ones(# rows, # cols) → produces matrix of given size with all entries 1

– eye(# rows, # cols) → produces identity matrix (1's along the diagonal, zeros everywhere else)

– size(matrix) → returns dimensions of matrix

– To save a result in a file use the save command:

save a1.dat tmp -ascii

a1.dat is the file name that tmp is saved to. -ascii specifies the format that it will be saved in.


– [maxvector, location] = max(A) → returns the maximum element of each column of a matrix, and their location (index) in the column.

max(A, [ ], 2) ← finds max in each row

max(A(:)) ← finds max in whole matrix
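A small sketch of the max variants (the matrix is chosen arbitrarily):

```matlab
A = [1 9 2;
     8 3 7];

[maxvector, location] = max(A);  % maxvector = [8 9 7], location = [2 1 2]
rowMax   = max(A, [], 2);        % max of each row: [9; 8]
totalMax = max(A(:));            % max of the whole matrix: 9
```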

• Matlab logic, loops, and iterations:

– Loops: for statements → Do something a certain number of times.

∗ Example:

a = 0;

for j = 1:5 %j will iterate from 1 to 5

a = a+j;

end %ends the loop

∗ If instead use:

for j = 1:2:5

j will iterate from 1 to 5 with step of size 2 (1 → 3 → 5)

∗ Can also iterate j through a set array of numbers:

for j = [3, 0, 6]

so j will equal 3, then 0, then 6.

∗ "break" command breaks out of a loop

∗ "continue" command skips the next iteration

– Logic: if statements: Do something if a condition is true. Example:

if (logical statement)

execute

elseif (logical statement)

execute

else

execute

end
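A concrete runnable version of the pattern above (the threshold values are arbitrary):

```matlab
x = 7;
if x > 10
    label = 'big';
elseif x > 5
    label = 'medium';   % this branch runs for x = 7
else
    label = 'small';
end
disp(label)  % prints: medium
```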

• Function handles: Define a function inline

Example:

f = @(x) exp(x)-tan(x);

So now we can call f(2) and it will evaluate the function at x = 2.


• Solving Linear Systems:

– Gaussian Elimination

∗ We can write a system of linear equations in matrix form. For example, take the system:

2x1 + x2 + x3 = 2
x1 − x2 − x3 = 1
x1 + 2x2 + 0x3 = 3

This is equivalent to:

[2  1  1] [x1]   [2]
[1 -1 -1] [x2] = [1]
[1  2  0] [x3]   [3]

or Ax = b, where

A = [2  1  1
     1 -1 -1
     1  2  0],  x = [x1; x2; x3],  b = [2; 1; 3]

∗ To solve this, we subtract equations (rows) to get an upper triangular system from which we can then back-substitute to find x1, x2, x3.

∗ In Matlab, we use the "backslash" command: x = A\b

∗ Gaussian Elimination has cost: O(n3)
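Solving the example system above with the backslash command:

```matlab
A = [2  1  1;
     1 -1 -1;
     1  2  0];
b = [2; 1; 3];

x = A \ b;   % Gaussian elimination under the hood; x = [1; 1; -1]
```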

– LU factorization

∗ We can decompose a matrix into the product of a lower triangular matrix (L), an upper triangular matrix (U), and a permutation matrix (P) such that PA = LU. We can then use this to solve Ax = b:

PA~x = P~b    (get the LU factorization of PA)
LU~x = P~b
Let U~x = ~y
then L~y = P~b

Since L is lower triangular, we just need to forward-substitute to solve L~y = P~b for ~y → O(N²)
Since U is upper triangular, we just need to back-substitute to solve U~x = ~y for ~x → O(N²)

∗ Advantages: Though getting L, U, and P is O(N³), we only need to do that once; afterwards we can solve Ax = b for any new b using L, U, and P in O(N²)

∗ In Matlab to solve Ax = b using LU decomposition:

[L, U, P] = lu(A);

x = U\(L\(P*b));
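A sketch showing the reuse advantage: factor once, then solve for several right-hand sides (the matrix and vectors are arbitrary):

```matlab
A = [2  1  1;
     1 -1 -1;
     1  2  0];
[L, U, P] = lu(A);          % O(N^3), done once

b1 = [2; 1; 3];
b2 = [0; 1; 1];

x1 = U \ (L \ (P*b1));      % each solve is only O(N^2)
x2 = U \ (L \ (P*b2));

% Sanity check against the direct solve:
% norm(x1 - A\b1) and norm(x2 - A\b2) should be at round-off level
```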

– Inverses:

∗ The inverse of A is a matrix A⁻¹ such that AA⁻¹ = I, where I is the identity matrix.

∗ If we have A⁻¹, we can solve Ax = b → x = A⁻¹b.

∗ The inverse only exists if A is non-singular.

∗ Can check singularity of A by checking its determinant. If the determinant is zero, then A is singular (has no inverse). In Matlab use the command det(A) to get the determinant.

∗ Calculating the determinant can be expensive and prone to round-off error, so using the condition number may be more robust. If the condition number of A is very high, A is singular or nearly singular. cond(A) in Matlab gives the condition number of A.

• Eigenvectors and eigenvalues:


– For any square matrix A ∈ Rn×n, a non-zero vector x is called an eigenvector if Ax = λx for some constant λ, which is the eigenvalue associated with x.

– The eigenvalues are calculated by finding the roots of the "characteristic polynomial", which is the determinant of A − λI

– The eigenvectors are then found by finding the nullspace of A−λI for each λ (eigenvalue)

– Example: Find the eigenvalues and eigenvectors of A = [2 7; −1 −6]

det(A − λI) = det [2−λ, 7; −1, −6−λ] = (2−λ)(−6−λ) + 7 = λ² + 4λ − 5 = (λ+5)(λ−1) = 0

∴ λ1 = −5, λ2 = 1

λ1 = −5:

(A + 5I | 0) = [7 7 0; −1 −1 0] → [1 1 0; 0 0 0], so x1 = −x2 and ξ(1) = (1, −1)ᵀ

(Any vector that satisfies x1 = −x2 is also an eigenvector for λ1 = −5)

λ2 = 1:

(A − 1I | 0) = [1 7 0; −1 −7 0] → [1 7 0; 0 0 0], so x1 = −7x2 and ξ(2) = (−7, 1)ᵀ

(Any vector that satisfies 7x2 = −x1 is also an eigenvector for λ2 = 1)

So the eigenvalues of A are −5 and 1 with corresponding eigenvectors (1, −1)ᵀ and (−7, 1)ᵀ, respectively.

– If x is an eigenvector of A with eigenvalue λ, then cx is also an eigenvector with the same eigenvalue for any c ∈ R, c ≠ 0. So every eigenvalue has infinitely many eigenvectors.

– If A ∈ Rn×n, then A has at most n unique eigenvalues. Note that A can have repeated eigenvalues (an eigenvalue that appears more than once as a root of the characteristic polynomial).

– A is non-singular (its inverse exists) exactly when 0 is not an eigenvalue.

– If A is a triangular matrix (upper or lower), then its eigenvalues are just the diagonal entries.

– In Matlab:

[V, D] = eig(A);

V contains the eigenvectors of A in its columns, and D is a diagonal matrix whose diagonal elements are the corresponding eigenvalues, in the same column order.

– If A has no repeated eigenvalues then it can be diagonalized, meaning A can be written as A = V DV⁻¹ where V and D are as above. This makes taking powers of A very easy: Aⁿ = V DⁿV⁻¹
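Checking the worked example above in Matlab:

```matlab
A = [2  7;
    -1 -6];

[V, D] = eig(A);

% diag(D) contains -5 and 1 (the order may vary).
% Each column of V is a unit-length multiple of the hand-computed
% eigenvectors (1, -1)' and (-7, 1)'.
lambda = diag(D);
```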

• Iterative Methods of solving Ax = b:

– Jacobi Iteration Method

∗ Decompose A into a lower triangular part w/o the diagonal (L∗), an upper triangular part w/o the diagonal (U∗), and a diagonal part (D) such that A = L∗ + D + U∗. In Matlab this is done with:

Lstar = tril(A, -1);

D = diag(diag(A));

Ustar = triu(A,1);

For Lstar and Ustar, the -1 and 1 respectively tell Matlab not to include the diagonal in the lower and upper triangular parts. For D, the inner diag extracts the diagonal elements from A, then the outer diag takes those elements and puts them along the diagonal of a matrix.

∗ We then iterate from an initial guess xk by the formula:

xk+1 = xk + D−1(b − Axk)

Iterate until the sequence converges to an answer.
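A minimal Jacobi iteration sketch using the update formula above (the matrix, right-hand side, and tolerance are chosen for illustration; A is diagonally dominant, which guarantees convergence):

```matlab
A = [4 1 0;
     1 5 2;
     0 2 6];            % diagonally dominant, so Jacobi converges
b = [1; 2; 3];

D = diag(diag(A));

x = zeros(3, 1);        % initial guess
for k = 1:100
    xnew = x + D \ (b - A*x);    % Jacobi update x_{k+1} = x_k + D^{-1}(b - A x_k)
    if norm(xnew - x) < 1e-10    % stop once the iterates converge
        break;
    end
    x = xnew;
end
% x now approximates A\b
```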

– Gauss-Seidel:

∗ Decompose A the same way as above in Jacobi Iteration

∗ Iterate x by the equation:

xk+1 = xk + (L∗ + D)−1(b − Axk)

– Iterative methods for solving Ax = b are used when A is large and sparse (mostly zeros)

• Interpolation: Function goes exactly through the data points.

– To interpolate between two consecutive x values means that we find a function that passes through those two points and use the function to estimate the values between the data points. We can use any degree polynomial for interpolation.

– In Matlab, to interpolate between points x1 and x2 using an n-degree polynomial:

p = polyfit([x1, x2], [y1, y2], n);

This will return the coefficients of the interpolating polynomial in p. To evaluate the polynomial at a point x we use:

y = polyval(p, x);

– Lagrange interpolation is interpolation using all data points, so for n data points we fit an (n−1)-degree polynomial.

– However, this is susceptible to the Runge phenomenon: when you try to fit a high-degree polynomial to evenly spaced x values, there is a high risk of overshooting and undershooting at the edges.
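A short sketch of fitting and evaluating an interpolating polynomial (the data points are arbitrary):

```matlab
xdata = [0 1 2 3];
ydata = [1 3 2 5];

p = polyfit(xdata, ydata, 3);   % 4 points -> degree-3 Lagrange interpolant

xq = 1.5;
yq = polyval(p, xq);            % evaluate the interpolant at x = 1.5

% The interpolant passes exactly through the data:
% polyval(p, xdata) returns [1 3 2 5] (up to round-off)
```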

• Curve Fitting

– Least-squares fit with a linear polynomial: try to draw a line that minimizes the residuals (the distances between the line and the data points):

y1 = p1x1 + p2 + ε1
y2 = p1x2 + p2 + ε2
...
yn = p1xn + p2 + εn

In matrix form, y = Xp + ε:

[y1]   [x1 1]        [ε1]
[y2] = [x2 1] [p1] + [ε2]
[..]   [.. .] [p2]   [..]
[yn]   [xn 1]        [εn]

min over (p1, p2) of Σ_{i=1}^n |εi|² = min εᵀε = min over p of (y − Xp)ᵀ(y − Xp)

solution: p = (XᵀX)⁻¹Xᵀy

Note that minimizing the sum of the squared residuals also minimizes the root-mean-square error.


– In Matlab:

(1) Define f (function handle):
f = @(p,x) p(1)*exp(-p(2)*x) + p(3);
(2) Define the sum-of-squares error:
err = @(p) sum((yin - f(p, xin)).^2);
(3) Use "fminsearch" to find p:
[pFit, errMin] = fminsearch(err, p0); % p0 = initial guess for p
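Putting the three steps together on synthetic data (all values are chosen for illustration):

```matlab
% Synthetic data generated from the model y = 2*exp(-0.5*x) + 1
xin = linspace(0, 10, 50);
yin = 2*exp(-0.5*xin) + 1;

f   = @(p, x) p(1)*exp(-p(2)*x) + p(3);    % model function
err = @(p) sum((yin - f(p, xin)).^2);      % sum-of-squares error

p0 = [1; 1; 0];                            % initial guess
[pFit, errMin] = fminsearch(err, p0);

% pFit should come back near [2; 0.5; 1] and errMin near 0
```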

• Splines: Smoothly connect data points; the first and second derivatives match at each interior data point. In Matlab use the spline command: ysp = spline(xdata, ydata, interpolation values);

• Optimization Iterative Algorithms

– Fixed-point iteration (Jacobi and Gauss-Seidel) to solve Ax = b:

Start with an initial guess x0, and iterate until ‖xn+1 − xn‖ < ε for some tolerance ε.

– Finding a zero of f: f(x) = 0, where f : R → R. Uses the bisection method. Requires a starting interval over which f changes sign at the zero. (Ex: f(x) = x² → problematic, since f touches zero without a sign change.)

– Newton algorithm → uses values and derivatives of f.

Start from an initial guess x0 with f′(x0) ≠ 0 and use the Taylor series expansion to first order:

f(x1) ≈ f(x0) + (x1 − x0)f′(x0)

Define x1 such that f(x0) + (x1 − x0)f′(x0) = 0:

x1 = x0 − f(x0)/f′(x0)
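A minimal Newton iteration sketch, using f(x) = x² − 2 so the root is √2 (the function, guess, and tolerance are chosen for illustration):

```matlab
f  = @(x) x.^2 - 2;    % root at sqrt(2)
fp = @(x) 2*x;         % derivative f'(x)

x = 1;                 % initial guess
for k = 1:50
    xnew = x - f(x)/fp(x);      % Newton update x1 = x0 - f(x0)/f'(x0)
    if abs(xnew - x) < 1e-12
        break;
    end
    x = xnew;
end
% x is now ~1.41421356, i.e. sqrt(2)
```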

• Minimizing a function: minimize a function f(x) over some domain S

– Case S = R, f : R → R, initial interval [x1, x2]:
fminbnd(f, x1, x2) → uses only values of f

– Case S = Rm (m > 1), f : Rm → R, initial guess x0:
fminsearch(f, x0) → uses only values of f; can have 3 outputs: [xmin, fmin, exitflag]

– Case S = Rn, f : Rn → R, initial guess x0:
fminunc(f, x0) → uses values and derivatives of f; requires the Optimization Toolbox

– All algorithms can find the global minimum if given a small interval around it (or a close initial guess). But if the initial guess is close to a local minimum, or the initial interval contains one, there is a chance the algorithm will find that local minimum instead of the global one. So take care when choosing the initial guess/interval, and be aware that the answer may not be right.
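A short sketch of the value-only minimizers (the test functions are arbitrary):

```matlab
% 1-D: minimize (x - 2)^2 over the interval [0, 5]
g = @(x) (x - 2).^2;
xmin1 = fminbnd(g, 0, 5);          % ~2

% Multi-dimensional: minimize (x1 - 1)^2 + (x2 + 3)^2
h = @(x) (x(1) - 1)^2 + (x(2) + 3)^2;
[xmin2, fmin, exitflag] = fminsearch(h, [0; 0]);   % xmin2 ~ [1; -3]
```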

– Note that maximizing a function f(x) is equivalent to minimizing −f(x)

• Linear Programming: Minimization of a linear function with constraints

– To solve

min over x ∈ S of pᵀx, where S = {x ∈ Rn : Ax ≤ b, Aeq x = beq, lb ≤ x ≤ ub}

use the "linprog" command as:

[xopt, fopt, exitflag] = linprog(p, A, b, Aeq, beq, lb, ub)

where p is the cost vector, A and b define the inequality constraints, Aeq and beq define the equality constraints, lb is the lower bound, ub is the upper bound, and the outputs are xopt (optimal x value), fopt (optimal f value, f(xopt)), and exitflag (description of whether or not the algorithm found an optimum).

– If there are no constraints (like no Aeq or lb, etc.) use an empty matrix "[ ]" in place of the constraint.

– Example: A company produces 2 types of heaters, S and L. They sell S heaters for $40 and L heaters for $88. Machine M1 needs 2 minutes to make an S heater, and 8 minutes to make an L. Machine M2 needs 5 minutes for S and 2 minutes for L. Maximize the hourly revenue.

Hourly revenue: f(x) = 40x1 + 88x2

Minimize −f(x) with the constraints:

2x1 + 8x2 ≤ 60
5x1 + 2x2 ≤ 60      (Ax ≤ b)
x1, x2 ≥ 0

Code:

p = [-40; -88];                 % minimize -f(x)
A = [2, 8; 5, 2];
b = [60; 60];                   % 60 minutes per hour on each machine
Aeq = [ ]; beq = [ ];           % no equality constraints
lb = [0; 0]; ub = [ ];          % define bounds
[xopt, fopt, exitflag] = linprog(p, A, b, Aeq, beq, lb, ub);

And we find that x1 = 10, x2 = 5, and fopt = −840 → the maximum hourly revenue is $840/hr