 2.2.1E: Find the inverses of the matrices in Exercises 1–4.
 2.2.2E: Find the inverses of the matrices in Exercises 1–4.
 2.2.3E: Find the inverses of the matrices in Exercises 1–4.
 2.2.4E: Find the inverses of the matrices in Exercises 1–4.
 2.2.5E: Use the inverse found in Exercise 1 to solve the system Exercise 1:...
 2.2.6E: Use the inverse found in Exercise to solve the system 8x1 + 5x2 = 25...
 2.2.7E: a. Find A^-1 and use it to solve the four equations b. The four equa...
 2.2.8E: Use matrix algebra to show that if A is invertible and D satisfies ...
 2.2.9E: In Exercises 9 and 10, mark each statement True or False. Justify e...
 2.2.10E: In Exercises 9 and 10, mark each statement True or False. Justify each answe...
 2.2.11E: Let A be an invertible n × n matrix, and let B be an n × p matrix. ...
 2.2.12E: Let A be an invertible n × n matrix, and let B be an n × p matrix. ...
 2.2.13E: Suppose AB = AC, where B and C are n × p matrices and A is invertib...
 2.2.14E: Suppose (B – C)D = 0 where B and C are m × n matrices and D is inve...
 2.2.15E: Suppose A, B, and C are invertible n × n matrices. Show that ABC is...
 2.2.16E: Suppose A and B are n × n matrices, B is invertible, and AB is inve...
 2.2.17E: Solve the equation AB = BC for A, assuming that A, B, and C are squ...
 2.2.18E: Suppose P is invertible and A = PBP^-1. Solve for B in terms of A.
 2.2.19E: If A, B and C are n × n invertible matrices, does the equation have...
 2.2.20E: Suppose A, B, and X are n × n matrices with A, X, and A – AX invert...
 2.2.21E: Explain why the columns of an n × n matrix A are linearly independe...
 2.2.22E: Explain why the columns of an n × n matrix A span R^n when A is inve...
 2.2.23E: Suppose A is n × n and the equation Ax = 0 has only the trivial sol...
 2.2.24E: Suppose A is n × n and the equation Ax = b has a solution for each ...
 2.2.25E: Exercises 25 and 26 prove Theorem 4 for Show that if ad – bc = 0, t...
 2.2.26E: Exercises 25 and 26 prove Theorem 4 for Show that if ad – bc ≠ 0, t...
 2.2.27E: Exercises 27 and 28 prove special cases of the facts about elementa...
 2.2.28E: Exercises 27 and 28 prove special cases of the facts about elementary matrice...
 2.2.29E: Find the inverses of the matrices in Exercises 29–32, if they exist. Use ...
 2.2.30E: Find the inverses of the matrices in Exercises 29–32, if they exist. Use ...
 2.2.31E: Find the inverses of the matrices in Exercises 29–32, if they exist...
 2.2.32E: Find the inverses of the matrices in Exercises 29–32, if they exist. Use ...
 2.2.33E: Use the algorithm from this section to find the inverses of A be th...
 2.2.34E: Repeat the strategy of Exercise to guess the inverse of . Prove tha...
 2.2.35E: Let . Find the third column of A^-1 without computing the other colu...
 2.2.36E: [M] Let Find the second and third columns of A^-1 without computing ...
 2.2.37E: Let Construct a 2 × 3 matrix C (by trial and error) using only 1, –...
 2.2.38E: Let . Construct a 4 × 2 matrix D using only 1 and 0 as entries, suc...
 2.2.39E: Let be a flexibility matrix, with flexibility measured in inches pe...
 2.2.40E: [M] Compute the stiffness matrix D^-1 for D in Exercise 39. List the...
 2.2.41E: [M] Let be a flexibility matrix for an elastic beam with four point...
 2.2.42E: [M] With D as in Exercise, determine the forces that produce a defl...
Solutions for Chapter 2.2: Linear Algebra and Its Applications 5th Edition
ISBN: 9780321982384
Linear Algebra and Its Applications (5th Edition) is associated with ISBN 9780321982384. All 42 problems in Chapter 2.2 have been answered, and more than 43,669 students have viewed full step-by-step solutions from this chapter. This textbook survival guide was created for the textbook Linear Algebra and Its Applications, 5th Edition.

Affine transformation
T(v) = Av + v0 = linear transformation plus shift.

Associative Law (AB)C = A(BC).
Parentheses can be removed to leave ABC.

Back substitution.
Upper triangular systems are solved in reverse order, x_n back to x_1.
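The reverse-order sweep can be sketched in a few lines of plain Python (an illustrative sketch, not from the text; the function name back_substitute is ours):

```python
def back_substitute(U, b):
    """Solve Ux = b for upper triangular U, working from x_n back to x_1."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):            # last equation first
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]           # pivot U[i][i] must be nonzero
    return x

# 2x1 + x2 = 4 and 3x2 = 6  =>  x2 = 2, then x1 = 1
print(back_substitute([[2.0, 1.0], [0.0, 3.0]], [4.0, 6.0]))  # [1.0, 2.0]
```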

Basis for V.
Independent vectors v1, ..., v_d whose linear combinations give each vector in V as v = c1v1 + ... + c_dv_d. A vector space has many bases; each basis gives unique c's.

Cholesky factorization
A = C^T C = (L√D)(L√D)^T for positive definite A.
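As a sketch of the factorization in code: the standard algorithm computes a lower triangular L with A = L L^T, so C = L^T in the glossary's notation (plain Python, illustrative only; assumes A is positive definite):

```python
import math

def cholesky(A):
    """Lower triangular L with A = L L^T (so C = L^T gives A = C^T C)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # fails if A is not positive definite
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

print(cholesky([[4.0, 2.0], [2.0, 5.0]]))  # [[2.0, 0.0], [1.0, 2.0]]
```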

Dot product = Inner product x^T y = x1y1 + ... + xnyn.
The complex dot product is x̄^T y. Perpendicular vectors have x^T y = 0. (AB)ij = (row i of A) · (column j of B).
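The last formula says matrix multiplication is just dot products of rows with columns; a minimal sketch (our function names):

```python
def dot(x, y):
    """x^T y = x1y1 + ... + xnyn."""
    return sum(xi * yi for xi, yi in zip(x, y))

def matmul(A, B):
    """(AB)ij = (row i of A) . (column j of B)."""
    cols = list(zip(*B))                               # columns of B
    return [[dot(row, col) for col in cols] for row in A]

print(dot([1, 2], [3, 4]))                             # 11
print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))      # [[19, 22], [43, 50]]
```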

Four Fundamental Subspaces C(A), N(A), C(A^T), N(A^T).
Use the conjugate transpose A^H in place of A^T for complex A.

Gauss-Jordan method.
Invert A by row operations on [A I] to reach [I A^-1].
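The row-operation recipe on [A I] can be sketched directly (an illustrative sketch, not production code; this version also adds partial pivoting for stability):

```python
def invert(A):
    """Invert A by row operations on the augmented matrix [A I] -> [I A^-1]."""
    n = len(A)
    # build the augmented matrix [A I]
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # partial pivoting: bring the largest entry in this column to the diagonal
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        if M[p][col] == 0.0:
            raise ValueError("matrix is singular")
        M[col], M[p] = M[p], M[col]
        pivot = M[col][col]
        M[col] = [v / pivot for v in M[col]]           # scale pivot row to 1
        for r in range(n):                             # clear the column above and below
            if r != col and M[r][col] != 0.0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]                      # right half is now A^-1

print(invert([[4.0, 7.0], [2.0, 6.0]]))  # approx [[0.6, -0.7], [-0.2, 0.4]]
```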

Multiplicities AM and GM.
The algebraic multiplicity AM of λ is the number of times λ appears as a root of det(A - λI) = 0. The geometric multiplicity GM is the number of independent eigenvectors for λ (= dimension of the eigenspace).

Orthonormal vectors q1, ..., qn.
Dot products are qi^T qj = 0 if i ≠ j and qi^T qi = 1. The matrix Q with these orthonormal columns has Q^T Q = I. If m = n then Q^T = Q^-1 and q1, ..., qn is an orthonormal basis for R^n: every v = Σ (v^T qj) qj.

Permutation matrix P.
There are n! orders of 1, ..., n. The n! P's have the rows of I in those orders. PA puts the rows of A in the same order. P is even or odd (det P = +1 or -1) according to the number of row exchanges needed to reach I.
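Both facts — rows of I in a chosen order, and the ±1 sign from counting exchanges — are easy to check in code (a sketch; perm_matrix and parity are our names, and orders are 0-based here):

```python
def perm_matrix(order):
    """Rows of I in the given (0-based) order; PA reorders A's rows the same way."""
    n = len(order)
    return [[1 if j == order[i] else 0 for j in range(n)] for i in range(n)]

def parity(order):
    """det P = +1 or -1: count the row exchanges needed to sort back to I."""
    order = list(order)
    swaps = 0
    for i in range(len(order)):
        while order[i] != i:
            j = order[i]
            order[i], order[j] = order[j], order[i]
            swaps += 1
    return -1 if swaps % 2 else 1

print(perm_matrix([2, 0, 1]))  # [[0, 0, 1], [1, 0, 0], [0, 1, 0]]
print(parity([2, 0, 1]))       # 1  (even: two exchanges)
```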

Pivot columns of A.
Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.

Rank r(A)
= number of pivots = dimension of column space = dimension of row space.

Rotation matrix
R = [c -s; s c] rotates the plane by θ and R^-1 = R^T rotates back by -θ. Eigenvalues are e^(iθ) and e^(-iθ); eigenvectors are (1, ±i). c, s = cos θ, sin θ.
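A quick numerical check of R^-1 = R^T (a sketch using only the standard library):

```python
import math

def rotation(theta):
    """R = [c -s; s c] with c = cos(theta), s = sin(theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

R = rotation(math.pi / 3)
Rt = [list(col) for col in zip(*R)]     # transpose of R
# R^T R should be the 2x2 identity (up to rounding), confirming R^-1 = R^T
prod = [[sum(Rt[i][k] * R[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(prod)
```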

Saddle point of f(x1, ..., xn).
A point where the first derivatives of f are zero and the second-derivative matrix (∂²f/∂xi∂xj = Hessian matrix) is indefinite.

Special solutions to As = 0.
One free variable is s_i = 1, other free variables = 0.

Subspace S of V.
Any vector space inside V, including V and Z = {zero vector only}.

Symmetric matrix A.
The transpose is A^T = A, and a_ij = a_ji. A^-1 is also symmetric.

Vector space V.
Set of vectors such that all combinations cv + dw remain within V. Eight required rules are given in Section 3.1 for scalars c, d and vectors v, w.

Wavelets w_jk(t).
Stretch and shift the time axis to create w_jk(t) = w_00(2^j t - k).
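For the Haar mother wavelet w_00 this stretch-and-shift reads as (an illustrative sketch; w_00 taken as +1 on [0, 1/2) and -1 on [1/2, 1), which is one common convention):

```python
def w00(t):
    """Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    if 0.0 <= t < 0.5:
        return 1.0
    if 0.5 <= t < 1.0:
        return -1.0
    return 0.0

def w(j, k):
    """w_jk(t) = w00(2^j t - k): compress the time axis by 2^j, then shift by k."""
    return lambda t: w00(2**j * t - k)

f = w(1, 1)                      # supported on [1/2, 1)
print(f(0.6), f(0.8), f(0.2))    # 1.0 -1.0 0.0
```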