 3.3.1: compute the indicated matrices (if possible). A + 2D
 3.3.2: solve the equation for X, given that A = [ ] and B = [ ]
 3.3.3: find the inverse of the given matrix (if it exists) using Theorem 3.8.
 3.3.4: solve the system Ax = b using the given LU factorization of A.
 3.3.5: let S be the collection of vectors [ ] in R^2 that satisfy the giv...
 3.3.6: Let T_A : R^2 → R^2 be the matrix transformation corresponding ...
 3.3.7: Let P = [ ] be the transition matrix for a Markov chain...
Solutions for Chapter 3: Matrices
Full solutions for Linear Algebra: A Modern Introduction  1st Edition
ISBN: 9781285463247
Since 7 problems in Chapter 3: Matrices have been answered, more than 612 students have viewed full step-by-step solutions from this chapter. This textbook survival guide covers the listed chapters and their solutions, and was created for the textbook Linear Algebra: A Modern Introduction, edition 1, associated with the ISBN 9781285463247. Chapter 3: Matrices includes 7 full step-by-step solutions.

Adjacency matrix of a graph.
Square matrix with a_ij = 1 when there is an edge from node i to node j; otherwise a_ij = 0. A = A^T when edges go both ways (undirected).
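As a minimal sketch, here is the adjacency matrix of a hypothetical undirected graph on three nodes with edges 0–1 and 1–2 (the graph is an assumption for illustration):

```python
import numpy as np

# Adjacency matrix of a small undirected graph on 3 nodes
# with edges 0-1 and 1-2.
A = np.zeros((3, 3), dtype=int)
for i, j in [(0, 1), (1, 2)]:
    A[i, j] = 1
    A[j, i] = 1  # undirected: edges go both ways, so A = A^T
```

Since every edge is entered in both directions, the matrix comes out symmetric, matching the A = A^T condition above.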

Change of basis matrix M.
The old basis vectors v_j are combinations Σ m_ij w_i of the new basis vectors. The coordinates of c1 v1 + ... + cn vn = d1 w1 + ... + dn wn are related by d = M c. (For n = 2 set v1 = m11 w1 + m21 w2, v2 = m12 w1 + m22 w2.)

Cross product u xv in R3:
Vector perpendicular to u and v, length ||u|| ||v|| |sin θ| = area of parallelogram, u × v = "determinant" of [i j k; u1 u2 u3; v1 v2 v3].
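A quick numerical check of these properties, using two made-up vectors:

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])

w = np.cross(u, v)            # perpendicular to both u and v
area = np.linalg.norm(w)      # ||w|| = area of the parallelogram
```

Here u and v are unit vectors at a right angle, so the parallelogram is a unit square and w points along the third axis.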

Dimension of vector space
dim(V) = number of vectors in any basis for V.

Echelon matrix U.
The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.

Gauss-Jordan method.
Invert A by row operations on [A I] to reach [I A^-1].
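The row-reduction of [A I] to [I A^-1] can be sketched as below; this is a bare-bones implementation (with partial pivoting) that assumes A is invertible, and the 2×2 example matrix is made up:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by row-reducing the augmented matrix [A | I] to [I | A^-1]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        # swap in the row with the largest pivot candidate
        p = col + np.argmax(np.abs(M[col:, col]))
        M[[col, p]] = M[[p, col]]
        M[col] /= M[col, col]               # scale so the pivot is 1
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]  # clear the rest of the column
    return M[:, n:]                          # right half is now A^-1

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
A_inv = gauss_jordan_inverse(A)
```

When the left half reaches I, the same row operations have turned the right half into A^-1.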

Hypercube matrix P.
Row n + 1 counts corners, edges, faces, ... of a cube in R^n.

Iterative method.
A sequence of steps intended to approach the desired solution.

Krylov subspace K_j(A, b).
The subspace spanned by b, Ab, ..., A^(j-1) b. Numerical methods approximate A^-1 b by x_j with residual b - A x_j in this subspace. A good basis for K_j requires only multiplication by A at each step.
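Building that basis can be sketched as follows; note that each new column costs only one matrix-vector product (the 2×2 example matrix is an assumption for illustration):

```python
import numpy as np

def krylov_basis(A, b, j):
    """Columns b, Ab, ..., A^(j-1) b spanning K_j(A, b).
    Each step multiplies the previous column by A once."""
    cols = [b]
    for _ in range(j - 1):
        cols.append(A @ cols[-1])
    return np.column_stack(cols)

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
b = np.array([1.0, 1.0])
K = krylov_basis(A, b, 2)   # columns b and Ab
```

In practice this raw basis becomes ill-conditioned as j grows, which is why methods like Arnoldi orthogonalize it; the sketch only shows the subspace definition.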

Least squares solution x.
The vector x that minimizes the error ||e||^2 solves A^T A x = A^T b. Then e = b - A x is orthogonal to all columns of A.
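A small worked instance of the normal equations, fitting a line through three made-up points:

```python
import numpy as np

# Overdetermined system: 3 equations, 2 unknowns.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([0.0, 1.0, 1.0])

x = np.linalg.solve(A.T @ A, A.T @ b)  # normal equations A^T A x = A^T b
e = b - A @ x                          # residual
```

The defining property is easy to verify numerically: A^T e = 0, i.e. the residual is orthogonal to every column of A. (For larger problems, `np.linalg.lstsq` is preferred over forming A^T A explicitly.)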

Left inverse A^+.
If A has full column rank n, then A^+ = (A^T A)^-1 A^T has A^+ A = I_n.
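A numerical check with a made-up 3×2 matrix of full column rank:

```python
import numpy as np

# Full column rank n = 2 (columns are independent).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

A_plus = np.linalg.inv(A.T @ A) @ A.T  # left inverse (A^T A)^-1 A^T
```

A^+ A recovers the 2×2 identity, but A A^+ is only a projection, not I_3: a tall matrix has a left inverse, not a two-sided one.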

Normal matrix.
If N N^T = N^T N, then N has orthonormal (complex) eigenvectors.

Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; the error e = b - Pb is perpendicular to S. P^2 = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S^⊥. If the columns of A are a basis for S then P = A (A^T A)^-1 A^T.
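These properties can be checked directly. In this sketch S is the xy-plane in R^3 and both the basis matrix and the vector b are made up:

```python
import numpy as np

# Columns of A form a basis for S = the xy-plane in R^3.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
P = A @ np.linalg.inv(A.T @ A) @ A.T   # projection matrix onto S

b = np.array([1.0, 2.0, 3.0])
p = P @ b          # closest point to b in S
e = b - p          # error, perpendicular to S
```

Projecting b = (1, 2, 3) onto the xy-plane simply drops the z-component, and projecting twice changes nothing (P^2 = P).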

Rank r (A)
= number of pivots = dimension of column space = dimension of row space.
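The equalities above can be illustrated numerically on a made-up matrix with one dependent row:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # twice row 1, so it adds no rank
              [0.0, 1.0, 1.0]])

r = np.linalg.matrix_rank(A)
```

Row rank equals column rank, so the rank of A^T is the same r; elimination on A would likewise find exactly r pivots.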

Right inverse A^+.
If A has full row rank m, then A^+ = A^T (A A^T)^-1 has A A^+ = I_m.

Schwarz inequality
|v · w| ≤ ||v|| ||w||. Then |v^T A w|^2 ≤ (v^T A v)(w^T A w) for positive definite A.

Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). The minimum cost is attained at a corner.

Spectral Theorem A = QΛQ^T.
Real symmetric A has real eigenvalues λ and orthonormal eigenvectors q.
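A numerical illustration of the factorization, using a made-up 2×2 symmetric matrix:

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # real symmetric

lam, Q = np.linalg.eigh(S)        # real eigenvalues, orthonormal eigenvectors
S_rebuilt = Q @ np.diag(lam) @ Q.T  # reassemble A = Q Lambda Q^T
```

`eigh` is the symmetric/Hermitian eigensolver: it guarantees real eigenvalues (returned in ascending order) and an orthonormal Q, which the general `eig` does not.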

Subspace S of V.
Any vector space inside V, including V and Z = {zero vector only}.

Toeplitz matrix.
Constant down each diagonal = time-invariant (shift-invariant) filter.
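The constant-diagonal structure can be sketched as a small constructor (the example first column and row are made up; SciPy users would reach for `scipy.linalg.toeplitz` instead):

```python
import numpy as np

def toeplitz(first_col, first_row):
    """Entry (i, j) depends only on i - j, so every diagonal is constant."""
    n, m = len(first_col), len(first_row)
    T = np.empty((n, m))
    for i in range(n):
        for j in range(m):
            T[i, j] = first_col[i - j] if i >= j else first_row[j - i]
    return T

T = toeplitz([1.0, 2.0, 3.0], [1.0, 4.0, 5.0])
```

Multiplying a vector by T applies the same weights at every shift, which is exactly the time-invariant-filter interpretation above.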