7.3.1: Calculate e^{tA} and use your answer to solve dx/dt = Ax, x(0) = x_0. a...
7.3.2: Solve d^2x/dt^2 = Ax, x(0) = x_0, dx/dt(0) = ẋ_0. a. A = [1 5; 2 4]...
7.3.3: Find the motion of the two-mass, three-spring system in Example 8 w...
7.3.4: Let J = [2 1 0; 0 2 1; 0 0 2], the 3 × 3 Jordan block with eigenvalue 2. Calculate e^{tJ}.
 7.3.5: By mimicking the proof of Theorem 3.4, convert the following second...
7.3.6: Check that if A is an n × n matrix and the n × n differentiable matrix ...
7.3.7: Verify that d/dt sin t = cos t and d/dt cos t = −sin t by differenti...
7.3.8: a. Consider the n × n matrix B with 1's on the superdiagonal and 0's elsewhere. Calculat...
 7.3.9: Use the results of Exercise 8 and Theorem 3.4 to give the general s...
7.3.10: Let a, b ∈ R. Convert the constant-coefficient second-order different...
7.3.11: By introducing the vector function z(t) = (x1(t), x2(t), ẋ1(t), ẋ...
7.3.12: Find the solutions of the systems d^2x/dt^2 = Ax(t) in Exercise 2 by ...
7.3.13: Let A be a square matrix. a. Prove that Ae^{tA} = e^{tA}A. b. Prove that ...
7.3.14: Prove that det(e^A) = e^{tr A}. (Hint: First assume A is diagonalizable....
7.3.15: (For those who've thought about convergence issues) Check that the p...
7.3.16: (For those who've thought about convergence issues) Check that the p...
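Several of the exercises above ask for e^{tA}, which is defined by the power series e^{tA} = I + tA + (tA)^2/2! + ... . A minimal sketch of that series for a 2 × 2 matrix (the helper names and the diagonal sample matrix are my own; for a diagonal A the answer is known in closed form, which makes it easy to check):

```python
import math

def mat_mult(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(A, t, terms=30):
    """Approximate e^{tA} by the truncated series sum_k (tA)^k / k!."""
    tA = [[t * A[i][j] for j in range(2)] for i in range(2)]
    result = [[1.0, 0.0], [0.0, 1.0]]   # k = 0 term: the identity
    power = [[1.0, 0.0], [0.0, 1.0]]    # running (tA)^k
    fact = 1.0                          # running k!
    for k in range(1, terms):
        power = mat_mult(power, tA)
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(2)]
                  for i in range(2)]
    return result

# For diagonal A, e^{tA} is diagonal with entries e^{t * a_ii}.
A = [[1.0, 0.0], [0.0, 2.0]]
E = mat_exp(A, 0.5)
print(abs(E[0][0] - math.exp(0.5)) < 1e-9)   # True
print(abs(E[1][1] - math.exp(1.0)) < 1e-9)   # True
```

The solution of dx/dt = Ax, x(0) = x_0 is then x(t) = e^{tA} x_0; a production code would use a Padé-based routine rather than the raw series.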
Solutions for Chapter 7.3: Matrix Exponentials and Differential Equations
Full solutions for Linear Algebra: A Geometric Approach, 2nd Edition
ISBN: 9781429215213

Adjacency matrix of a graph.
Square matrix with a_ij = 1 when there is an edge from node i to node j; otherwise a_ij = 0. A = A^T when edges go both ways (undirected).
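A short sketch of this definition (the edge list is a made-up example): build A from directed edges and test A = A^T to decide whether the graph is undirected.

```python
# Build the adjacency matrix of a directed graph from an edge list.
edges = [(0, 1), (1, 2), (2, 0), (0, 2)]   # example edges (i, j)
n = 3
A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = 1                            # edge from node i to node j

# The graph is undirected exactly when A equals its transpose.
is_undirected = all(A[i][j] == A[j][i] for i in range(n) for j in range(n))
print(A)              # [[0, 1, 1], [0, 0, 1], [1, 0, 0]]
print(is_undirected)  # False: edge (0, 1) has no reverse edge (1, 0)
```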

Associative Law (AB)C = A(BC).
Parentheses can be removed to leave ABC.

Basis for V.
Independent vectors v_1, ..., v_d whose linear combinations give each vector in V as v = c_1 v_1 + ... + c_d v_d. V has many bases; each basis gives unique c's.

Cholesky factorization
A = C^T C = (L√D)(L√D)^T for positive definite A.
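A minimal sketch of the factorization (function name and sample matrix are my own; it computes a lower-triangular C with A = C C^T, which is the transpose convention of the entry above):

```python
import math

def cholesky(A):
    """Return lower-triangular C with A = C C^T, for positive definite A."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(C[i][k] * C[j][k] for k in range(j))
            if i == j:
                C[i][j] = math.sqrt(A[i][i] - s)   # diagonal pivot
            else:
                C[i][j] = (A[i][j] - s) / C[j][j]
    return C

A = [[4.0, 2.0], [2.0, 3.0]]          # symmetric positive definite
C = cholesky(A)
# Check that C C^T reproduces A.
prod = [[sum(C[i][k] * C[j][k] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(C[0][0])   # 2.0; C[1][1] is sqrt(2)
```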

Condition number
cond(A) = c(A) = ||A|| ||A^{-1}|| = σ_max/σ_min. In Ax = b, the relative change ||δx||/||x|| is less than cond(A) times the relative change ||δb||/||b||. Condition numbers measure the sensitivity of the output to change in the input.
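A sketch of ||A|| ||A^{-1}|| for a 2 × 2 matrix. Note one assumption: the entry above uses the 2-norm (σ_max/σ_min), but to stay within hand arithmetic this sketch uses the infinity norm (maximum absolute row sum), which gives a different but equally legitimate condition number.

```python
def inf_norm(A):
    """Infinity norm: maximum absolute row sum."""
    return max(sum(abs(x) for x in row) for row in A)

def inv2(A):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1.0, 2.0], [3.0, 4.0]]
cond = inf_norm(A) * inf_norm(inv2(A))   # 7 * 3
print(cond)   # 21.0
```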

Echelon matrix U.
The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.

Elimination.
A sequence of row operations that reduces A to an upper triangular U or to the reduced form R = rref(A). Then A = LU with multipliers l_ij in L, or PA = LU with row exchanges in P, or EA = R with an invertible E.
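A sketch of elimination producing A = LU (function name and sample matrix are my own; no row exchanges, so it assumes nonzero pivots):

```python
def lu(A):
    """Elimination without row exchanges: A = LU with unit lower-triangular L."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]       # multiplier l_ik, stored in L
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]  # row i minus m times pivot row k
    return L, U

A = [[2.0, 1.0], [6.0, 8.0]]
L, U = lu(A)
print(L)   # [[1.0, 0.0], [3.0, 1.0]]
print(U)   # [[2.0, 1.0], [0.0, 5.0]]
```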

Four Fundamental Subspaces C(A), N(A), C(A^T), N(A^T).
Use A^H for complex A.

Free columns of A.
Columns without pivots; these are combinations of earlier columns.

Identity matrix I (or In).
Diagonal entries = 1, off-diagonal entries = 0.

Krylov subspace Kj(A, b).
The subspace spanned by b, Ab, ..., A^{j-1}b. Numerical methods approximate A^{-1}b by x_j with residual b - Ax_j in this subspace. A good basis for K_j requires only multiplication by A at each step.
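A sketch of generating the spanning vectors b, Ab, ..., A^{j-1}b (helper names and the sample A, b are my own); note that only matrix-vector products are needed, as the entry says:

```python
def mat_vec(A, x):
    """Matrix-vector product for nested-list matrices."""
    return [sum(A[i][k] * x[k] for k in range(len(x))) for i in range(len(A))]

def krylov(A, b, j):
    """Return the vectors b, Ab, ..., A^(j-1) b spanning K_j(A, b)."""
    vecs = [b]
    for _ in range(j - 1):
        vecs.append(mat_vec(A, vecs[-1]))   # one multiplication by A per step
    return vecs

A = [[2.0, 0.0], [1.0, 3.0]]
b = [1.0, 0.0]
vecs = krylov(A, b, 3)
print(vecs)   # [[1.0, 0.0], [2.0, 1.0], [4.0, 5.0]]
```

In practice these raw powers become nearly parallel, which is why methods like Arnoldi orthogonalize them as they are generated.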

Least squares solution x̂.
The vector x̂ that minimizes the error ||e||^2 solves A^T A x̂ = A^T b. Then e = b - Ax̂ is orthogonal to all columns of A.
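A sketch of solving the normal equations A^T A x̂ = A^T b for a two-column A (function name and the three data points are my own illustration, fitting a line c + dt):

```python
def solve_normal_equations(A, b):
    """Least squares x-hat for a 2-column A: solve (A^T A) x = A^T b."""
    m = len(A)
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(2)]
           for i in range(2)]
    Atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(2)]
    det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
    # Cramer's rule on the 2x2 normal equations.
    x0 = (AtA[1][1] * Atb[0] - AtA[0][1] * Atb[1]) / det
    x1 = (AtA[0][0] * Atb[1] - AtA[1][0] * Atb[0]) / det
    return [x0, x1]

# Fit a line c + dt to the points (0, 6), (1, 0), (2, 0).
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]   # columns: ones, then t
b = [6.0, 0.0, 0.0]
xhat = solve_normal_equations(A, b)
print(xhat)   # [5.0, -3.0], i.e. the best line is 5 - 3t
```

The residual e = b - Ax̂ = [1, -2, 1] is indeed orthogonal to both columns of A.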

Matrix multiplication AB.
The i, j entry of AB is (row i of A)·(column j of B) = Σ_k a_ik b_kj. By columns: column j of AB = A times column j of B. By rows: row i of A multiplies B. Columns times rows: AB = sum of (column k)(row k). All these equivalent definitions come from the rule that AB times x equals A times Bx.
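Two of these equivalent definitions, written out as a sketch (function names are my own) to show they produce the same product:

```python
def matmul_entries(A, B):
    """Entry i,j of AB = (row i of A) dot (column j of B)."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def matmul_col_times_row(A, B):
    """AB = sum over k of (column k of A)(row k of B), a sum of rank-one pieces."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for k in range(m):
        for i in range(n):
            for j in range(p):
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_entries(A, B))                                 # [[19, 22], [43, 50]]
print(matmul_entries(A, B) == matmul_col_times_row(A, B))   # True
```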

Multiplication Ax
= x_1(column 1) + ... + x_n(column n) = combination of columns.

Normal matrix.
If NN^T = N^T N, then N has orthonormal (complex) eigenvectors.

Right inverse A+.
If A has full row rank m, then A^+ = A^T(AA^T)^{-1} has AA^+ = I_m.
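A sketch of the formula A^+ = A^T(AA^T)^{-1} for a 2 × 3 matrix of full row rank (function name and sample matrix are my own; the sample is chosen so AA^T is the identity, which keeps the check exact):

```python
def right_inverse(A):
    """A+ = A^T (A A^T)^{-1} for a 2x3 matrix A with full row rank 2."""
    m, n = 2, 3
    AAt = [[sum(A[i][k] * A[j][k] for k in range(n)) for j in range(m)]
           for i in range(m)]
    det = AAt[0][0] * AAt[1][1] - AAt[0][1] * AAt[1][0]
    inv = [[AAt[1][1] / det, -AAt[0][1] / det],
           [-AAt[1][0] / det, AAt[0][0] / det]]
    At = [[A[j][i] for j in range(m)] for i in range(n)]   # A^T is 3x2
    return [[sum(At[i][k] * inv[k][j] for k in range(m)) for j in range(m)]
            for i in range(n)]

A = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
Ap = right_inverse(A)
# Check that A A+ = I_2.
prod = [[sum(A[i][k] * Ap[k][j] for k in range(3)) for j in range(2)]
        for i in range(2)]
print(prod)   # [[1.0, 0.0], [0.0, 1.0]]
```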

Standard basis for Rn.
Columns of the n by n identity matrix (written i, j, k in R^3).

Sum V + W of subspaces.
Space of all (v in V) + (w in W). Direct sum: V ∩ W = {0}.

Symmetric matrix A.
The transpose is A^T = A, and a_ij = a_ji. A^{-1} is also symmetric.

Unitary matrix U: U^H = Ū^T = U^{-1}.
Orthonormal columns (complex analog of Q).