- 7.4.1: Prove the statement following Theorem 7.4.1 for an arbitrary value ...
- 7.4.2: In this problem we outline a proof of Theorem 7.4.3 in the case n =...
- 7.4.3: Show that the Wronskians of two fundamental sets of solutions of the...
- 7.4.4: If x1 = y and x2 = y', then the second order equation y'' + p(t)y' + q...
- 7.4.5: Show that the general solution of x' = P(t)x + g(t) is the sum of any...
- 7.4.6: Consider the vectors x(1)(t) = (t, 1)^T and x(2)(t) = (t^2, 2t)^T. (a) Com...
- 7.4.7: Consider the vectors x(1)(t) = (t^2, 2t)^T and x(2)(t) = (e^t, e^t)^T, and a...
- 7.4.8: Let x(1), ..., x(m) be solutions of x' = P(t)x on the interval < t...
- 7.4.9: Let x(1), ..., x(n) be linearly independent solutions of x' = P(t)...
Solutions for Chapter 7.4: Basic Theory of Systems of First Order Linear Equations
Full solutions for Elementary Differential Equations and Boundary Value Problems | 9th Edition
Complete solution x = x_p + x_n to Ax = b.
(Particular x_p) + (x_n in nullspace).
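As a quick numerical check of this decomposition (the matrix and vectors below are illustrative choices, not from the glossary), any particular solution plus any multiple of a nullspace vector still solves Ax = b:

```python
import numpy as np

# A singular matrix: column 2 is twice column 1, so A has a nullspace.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
b = np.array([3.0, 6.0])

xp = np.array([3.0, 0.0])    # one particular solution: A @ xp = b
xn = np.array([-2.0, 1.0])   # nullspace vector: A @ xn = 0

# Every xp + c*xn also solves Ax = b.
for c in (0.0, 1.0, -2.5):
    print(np.allclose(A @ (xp + c * xn), b))  # True each time
```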
Cramer's Rule for Ax = b.
B_j has b replacing column j of A; x_j = det B_j / det A.
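A minimal numerical sketch of Cramer's Rule in NumPy (the 2-by-2 system is an arbitrary example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x = np.empty(2)
for j in range(2):
    Bj = A.copy()
    Bj[:, j] = b                                # B_j: column j of A replaced by b
    x[j] = np.linalg.det(Bj) / np.linalg.det(A)

# Agrees with a direct solve of Ax = b.
print(np.allclose(x, np.linalg.solve(A, b)))    # True
```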
Diagonal matrix D.
d_ij = 0 if i ≠ j. Block-diagonal: zero outside square blocks D_ii.
Elimination matrix = Elementary matrix E_ij.
The identity matrix with an extra −ℓ_ij in the i, j entry (i ≠ j). Then E_ij A subtracts ℓ_ij times row j of A from row i.
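The elimination step can be sketched as follows (the 2-by-2 matrix is an illustrative choice):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [6.0, 8.0]])

# E_21 subtracts l_21 = 3 times row 1 from row 2 (the usual elimination step).
l21 = A[1, 0] / A[0, 0]    # multiplier 3.0
E21 = np.eye(2)
E21[1, 0] = -l21           # identity with an extra -l_21 in the 2,1 entry

U = E21 @ A                # row 2 becomes (0, 5): upper triangular
print(U)
```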
Ellipse (or ellipsoid) x T Ax = 1.
A must be positive definite; the axes of the ellipse are eigenvectors of A, with lengths 1/√λ. (For ‖x‖ = 1 the vectors y = Ax lie on the ellipse ‖A⁻¹y‖² = yᵀ(AAᵀ)⁻¹y = 1 displayed by eigshow; axis lengths σ_i.)
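The axis-length claim can be verified numerically (the particular matrix below, with eigenvalues 1 and 9, is an assumed example):

```python
import numpy as np

A = np.array([[5.0, 4.0],
              [4.0, 5.0]])    # positive definite: eigenvalues 1 and 9

lam, Q = np.linalg.eigh(A)
# x^T A x = 1 is an ellipse whose axes point along the eigenvectors,
# with semi-axis lengths 1/sqrt(lambda).
axis_lengths = 1.0 / np.sqrt(lam)

# Check: (axis length) * (unit eigenvector) lies on the ellipse.
for i in range(2):
    x = axis_lengths[i] * Q[:, i]
    print(np.isclose(x @ A @ x, 1.0))  # True
```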
Hilbert matrix hilb(n).
Entries H_ij = 1/(i + j − 1) = ∫₀¹ x^(i−1) x^(j−1) dx. Positive definite but extremely small λ_min and large condition number: H is ill-conditioned.
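The ill-conditioning shows up already for small n; a sketch building the matrix directly in NumPy (hilb is named after the MATLAB function the glossary cites):

```python
import numpy as np

def hilb(n):
    """Hilbert matrix: H[i, j] = 1/(i + j + 1) with 0-based indices."""
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

H = hilb(6)
eigs = np.linalg.eigvalsh(H)
print(eigs.min() > 0)        # True: positive definite
print(np.linalg.cond(H))     # roughly 1.5e7 already for n = 6
```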
Incidence matrix of a directed graph.
The m by n edge-node incidence matrix has a row for each edge (node i to node j), with entries -1 and 1 in columns i and j .
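A small incidence matrix can be built directly from an edge list (the 3-node graph below is an assumed example); note that every row sums to zero, so the all-ones vector is in the nullspace:

```python
import numpy as np

# Directed graph with 3 nodes and 3 edges (0-based): 0->1, 1->2, 0->2.
edges = [(0, 1), (1, 2), (0, 2)]
m, n = len(edges), 3

A = np.zeros((m, n))
for row, (i, j) in enumerate(edges):
    A[row, i] = -1.0   # edge leaves node i
    A[row, j] = 1.0    # edge enters node j

print(A)
print(np.allclose(A @ np.ones(n), 0))  # True: rows sum to zero
```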
|A⁻¹| = 1/|A| and |Aᵀ| = |A|.
The big formula for det(A) has a sum of n! terms, the cofactor formula uses determinants of size n − 1, volume of box = |det(A)|.
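Both determinant identities are easy to confirm numerically (the 2-by-2 matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [2.0, 5.0]])   # det = 13

detA = np.linalg.det(A)
print(np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / detA))  # True
print(np.isclose(np.linalg.det(A.T), detA))                   # True
# |det A| is the volume (area in 2D) of the box spanned by the rows.
```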
Least squares solution x̂.
The vector x̂ that minimizes the error ‖e‖² solves AᵀAx̂ = Aᵀb. Then e = b − Ax̂ is orthogonal to all columns of A.
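A sketch of the normal equations and the orthogonality of the error (the line-fitting data is an assumed example):

```python
import numpy as np

# Overdetermined system: fit a line to three points.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 2.0, 4.0])

# Normal equations A^T A xhat = A^T b.
xhat = np.linalg.solve(A.T @ A, A.T @ b)
e = b - A @ xhat
# The error is orthogonal to every column of A.
print(np.allclose(A.T @ e, 0))  # True
```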
Linear transformation T.
Each vector v in the input space transforms to T(v) in the output space, and linearity requires T(cv + dw) = cT(v) + dT(w). Examples: matrix multiplication Av, differentiation and integration in function space.
Multiplicities AM and GM.
The algebraic multiplicity AM of λ is the number of times λ appears as a root of det(A − λI) = 0. The geometric multiplicity GM is the number of independent eigenvectors for λ (= dimension of the eigenspace).
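The classic case where AM and GM differ is a Jordan-type block; a numerical sketch (the matrix is an illustrative choice):

```python
import numpy as np

# lambda = 2 is a double root (AM = 2) but has only one
# independent eigenvector (GM = 1).
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
lam = 2.0

# GM = dimension of the nullspace of A - lambda*I.
gm = 2 - np.linalg.matrix_rank(A - lam * np.eye(2))
# AM = number of times lambda appears among the eigenvalues.
am = np.count_nonzero(np.isclose(np.linalg.eigvals(A), lam))
print(am, gm)  # 2 1
```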
Plane (or hyperplane) in Rn.
Vectors x with aᵀx = 0. The plane is perpendicular to a ≠ 0.
Positive definite matrix A.
Symmetric matrix with positive eigenvalues and positive pivots. Definition: xᵀAx > 0 unless x = 0. Then A = LDLᵀ with diag(D) > 0.
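Two equivalent numerical checks (the matrix is an arbitrary example); Cholesky succeeds exactly for positive definite matrices, giving the rescaled form A = LLᵀ of A = LDLᵀ:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])   # symmetric

# All eigenvalues positive  <=>  positive definite.
print(np.all(np.linalg.eigvalsh(A) > 0))  # True

# Cholesky factorization A = L L^T only exists for positive definite A.
L = np.linalg.cholesky(A)
print(np.allclose(L @ L.T, A))            # True
```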
Rotation matrix R.
R = [c −s; s c] rotates the plane by θ and R⁻¹ = Rᵀ rotates back by −θ. Eigenvalues are e^(iθ) and e^(−iθ); eigenvectors are (1, ±i). c, s = cos θ, sin θ.
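Both properties can be confirmed numerically (θ = π/6 is an arbitrary choice):

```python
import numpy as np

theta = np.pi / 6
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s],
              [s,  c]])

# R^{-1} = R^T: rotating back by -theta undoes the rotation.
print(np.allclose(R.T @ R, np.eye(2)))  # True

# Eigenvalues are e^{i theta} and e^{-i theta}.
expected = np.array([np.exp(1j * theta), np.exp(-1j * theta)])
print(np.allclose(np.sort(np.linalg.eigvals(R)), np.sort(expected)))  # True
```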
Saddle point of f(x₁, ..., xₙ).
A point where the first derivatives of f are zero and the second derivative matrix (∂²f/∂xᵢ∂xⱼ = Hessian matrix) is indefinite.
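A concrete instance (f(x, y) = x² − y² is a standard example, not from the glossary): the gradient vanishes at the origin and the Hessian there is indefinite, so the origin is a saddle.

```python
import numpy as np

# f(x, y) = x^2 - y^2: gradient (2x, -2y) is zero at the origin,
# and the (constant) Hessian is indefinite.
H = np.array([[2.0,  0.0],
              [0.0, -2.0]])

eigs = np.linalg.eigvalsh(H)
print(eigs)                   # one negative, one positive -> indefinite
print(eigs[0] < 0 < eigs[1])  # True
```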
Singular Value Decomposition (SVD).
A = UΣVᵀ = (orthogonal)(diagonal)(orthogonal). The first r columns of U and V are orthonormal bases of C(A) and C(Aᵀ), with Avᵢ = σᵢuᵢ and singular value σᵢ > 0. The last columns are orthonormal bases of the nullspaces.
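NumPy's SVD makes the relation Avᵢ = σᵢuᵢ easy to verify (the matrix is an arbitrary full-rank example):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

U, s, Vt = np.linalg.svd(A)  # s holds singular values, descending
Sigma = np.diag(s)

print(np.allclose(U @ Sigma @ Vt, A))  # A = U Sigma V^T
# A v_i = sigma_i u_i for each singular value (rows of Vt are the v_i).
for i in range(2):
    print(np.allclose(A @ Vt[i], s[i] * U[:, i]))  # True
```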
Symmetric matrix A.
The transpose is Aᵀ = A, and a_ij = a_ji. A⁻¹ is also symmetric.
Triangle inequality ‖u + v‖ ≤ ‖u‖ + ‖v‖.
For matrix norms, ‖A + B‖ ≤ ‖A‖ + ‖B‖.
Vandermonde matrix V.
Vc = b gives the coefficients of p(x) = c₀ + ... + c_(n−1)x^(n−1) with p(xᵢ) = bᵢ. V_ij = (xᵢ)^(j−1) and det V = product of (x_k − x_i) for k > i.
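A sketch of polynomial interpolation via the Vandermonde matrix (the sample points come from p(x) = 1 + x², an assumed example):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 5.0, 10.0])   # values p(x_i) = b_i for p(x) = 1 + x^2

# V[i, j] = x_i^j, so Vc = b recovers the coefficients c_0..c_{n-1}.
V = np.vander(x, increasing=True)
c = np.linalg.solve(V, b)
print(np.allclose(c, [1.0, 0.0, 1.0]))  # True: p(x) = 1 + x^2

# det V = product of (x_k - x_i) for k > i.
detV = np.prod([x[k] - x[i] for k in range(3) for i in range(k)])
print(np.isclose(np.linalg.det(V), detV))  # True
```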
Wavelets w_jk(t).
Stretch and shift the time axis to create w_jk(t) = w₀₀(2^j t − k).
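The glossary does not fix a particular mother wavelet w₀₀; taking the Haar wavelet as an illustrative choice, the stretch-and-shift construction looks like this:

```python
import numpy as np

def w00(t):
    """Mother Haar wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    return np.where((t >= 0) & (t < 0.5), 1.0,
                    np.where((t >= 0.5) & (t < 1.0), -1.0, 0.0))

def w(j, k, t):
    """Stretched and shifted wavelet w_jk(t) = w00(2^j t - k)."""
    return w00(2.0**j * t - k)

t = np.linspace(0, 1, 8, endpoint=False)
# w_11 is supported on [1/2, 1): 2t - 1 runs over [0, 1) there.
print(w(1, 1, t))
```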