 1.4.1: Use elementary operations to find the general solution of each of t...
 1.4.2: Decide which of the following matrices are in echelon form, which a...
 1.4.3: For each of the following matrices A, determine its reduced echelon...
 1.4.4: Give the general solution of the equation Ax = b in standard form.
 1.4.5: For the following matrices A, give the general solution of the equa...
 1.4.6: For the following matrices A, give the general solution of the equa...
 1.4.7: One might need to find solutions of Ax = b for several different bs...
 1.4.8: Find all the unit vectors x ∈ R^3 that make an angle of π/3 with each o...
 1.4.9: Find all the unit vectors x ∈ R^3 that make an angle of π/4 with (1, 0,...
 1.4.10: Find a normal vector to the hyperplane in R^4 spanned by a. (1, 1, 1...
 1.4.11: Find all vectors x ∈ R^4 that are orthogonal to both a. (1, 0, 1, 1) a...
 1.4.12: Find all the unit vectors in R^4 that make an angle of π/3 with (1, 1...
 1.4.13: Let A be an m × n matrix, let x, y ∈ R^n, and let c be a scalar. Show th...
 1.4.14: Let A be an m × n matrix, and let b ∈ R^m. a. Show that if u and v ∈ R^n ar...
 1.4.15: a. Prove or give a counterexample: If A is an m × n matrix and x ∈ R^n s...
 1.4.16: Prove that the reduced echelon form of a matrix is unique, as follo...
 1.4.17: In rating the efficiency of different computer algorithms for solvi...
Solutions for Chapter 1.4: Systems of Linear Equations and Gaussian Elimination
Full solutions for Linear Algebra: A Geometric Approach, 2nd Edition
ISBN: 9781429215213

Adjacency matrix of a graph.
Square matrix with a_ij = 1 when there is an edge from node i to node j; otherwise a_ij = 0. A = Aᵀ when edges go both ways (undirected).
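As a quick illustration (not from the text), a small undirected graph and its adjacency matrix can be built directly from this definition; the three-node edge list below is hypothetical:

```python
# Hypothetical undirected graph on 3 nodes.
edges = [(0, 1), (1, 2), (0, 2)]
n = 3
A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = 1
    A[j][i] = 1  # undirected: edges go both ways

# For an undirected graph, A equals its transpose.
A_T = [[A[j][i] for j in range(n)] for i in range(n)]
```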

Cholesky factorization
A = CᵀC = (L√D)(L√D)ᵀ for positive definite A.
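A minimal sketch of the factorization; the plain-Python routine and the 2×2 matrix are illustrative, not from the text:

```python
import math

def cholesky(A):
    """Return lower-triangular L with A = L Lᵀ (A must be positive definite)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)  # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below-diagonal entry
    return L

A = [[4.0, 2.0], [2.0, 3.0]]  # hypothetical positive definite matrix
L = cholesky(A)
# Reassemble L Lᵀ; it should reproduce A.
LLt = [[sum(L[i][k] * L[j][k] for k in range(2)) for j in range(2)] for i in range(2)]
```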

Commuting matrices AB = BA.
If diagonalizable, they share n eigenvectors.

Condition number
cond(A) = c(A) = ‖A‖ ‖A⁻¹‖ = σ_max/σ_min. In Ax = b, the relative change ‖δx‖/‖x‖ is less than cond(A) times the relative change ‖δb‖/‖b‖. Condition numbers measure the sensitivity of the output to change in the input.
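A diagonal matrix makes the definition easy to check by hand, since its singular values are just the absolute diagonal entries; the numbers below are hypothetical:

```python
import math

# Diagonal A with entries 100 and 1: cond(A) = sigma_max / sigma_min = 100.
d = [100.0, 1.0]
cond = max(d) / min(d)

# Solve Ax = b for a diagonal A, then perturb b slightly.
b = [1.0, 1.0]
x = [b[0] / d[0], b[1] / d[1]]
db = [0.0, 0.01]                      # small change in b
dx = [db[0] / d[0], db[1] / d[1]]     # resulting change in x

# The relative change in x is bounded by cond(A) times the relative change in b.
rel_x = math.hypot(*dx) / math.hypot(*x)
rel_b = math.hypot(*db) / math.hypot(*b)
```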

Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing ½xᵀAx − xᵀb over growing Krylov subspaces.
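A bare-bones sketch of the method (pure Python, hypothetical 2×2 system); in exact arithmetic, n steps reach the exact solution of an n×n positive definite system:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def cg(A, b, steps):
    """Conjugate gradients for positive definite A, starting from x = 0."""
    x = [0.0] * len(b)
    r = b[:]              # residual b - Ax (x starts at 0)
    p = r[:]              # first search direction
    for _ in range(steps):
        Ap = matvec(A, p)
        alpha = dot(r, r) / dot(p, Ap)            # step length along p
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r_new = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        beta = dot(r_new, r_new) / dot(r, r)      # makes directions A-conjugate
        p = [rn + beta * pi for rn, pi in zip(r_new, p)]
        r = r_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # hypothetical positive definite matrix
b = [1.0, 2.0]
x = cg(A, b, 2)                # exact solution is (1/11, 7/11)
```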

Diagonalizable matrix A.
Must have n independent eigenvectors (in the columns of S; automatic with n different eigenvalues). Then S⁻¹AS = Λ = eigenvalue matrix.

Hilbert matrix hilb(n).
Entries H_ij = 1/(i + j − 1) = ∫₀¹ x^(i−1) x^(j−1) dx. Positive definite but extremely small λ_min and large condition number: H is ill-conditioned.
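A short sketch (illustrative, not from the text) builds the 3×3 Hilbert matrix in exact arithmetic; its determinant 1/2160 is already tiny, a hint of the ill-conditioning:

```python
from fractions import Fraction

def hilbert(n):
    # H[i][j] = 1/(i + j - 1) with 1-based i, j: exactly the integral of x^(i-1) x^(j-1) on [0, 1].
    return [[Fraction(1, i + j - 1) for j in range(1, n + 1)] for i in range(1, n + 1)]

H = hilbert(3)
Ht = [[H[j][i] for j in range(3)] for i in range(3)]  # H is symmetric

# Exact determinant by cofactor expansion along the first row.
det = (H[0][0] * (H[1][1] * H[2][2] - H[1][2] * H[2][1])
     - H[0][1] * (H[1][0] * H[2][2] - H[1][2] * H[2][0])
     + H[0][2] * (H[1][0] * H[2][1] - H[1][1] * H[2][0]))
```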

Length ‖x‖.
Square root of xᵀx (Pythagoras in n dimensions).

Orthogonal matrix Q.
Square matrix with orthonormal columns, so Qᵀ = Q⁻¹. Preserves lengths and angles: ‖Qx‖ = ‖x‖ and (Qx)ᵀ(Qy) = xᵀy. All |λ| = 1, with orthogonal eigenvectors. Examples: rotation, reflection, permutation.
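A plane rotation is the simplest example; the angle below is an arbitrary choice:

```python
import math

t = math.pi / 6  # hypothetical rotation angle
Q = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]

def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(len(v))) for i in range(len(M))]

# Rotating a 3-4-5 vector preserves its length.
x = [3.0, 4.0]
Qx = matvec(Q, x)
length = math.hypot(*Qx)   # should equal 5 = ‖x‖

# The columns are orthonormal, so QᵀQ = I.
QtQ = [[sum(Q[k][i] * Q[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
```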

Orthonormal vectors q_1, ..., q_n.
Dot products are q_iᵀq_j = 0 if i ≠ j and q_iᵀq_i = 1. The matrix Q with these orthonormal columns has QᵀQ = I. If m = n, then Qᵀ = Q⁻¹ and q_1, ..., q_n is an orthonormal basis for R^n: every v = Σ (vᵀq_j)q_j.

Outer product uvᵀ.
Column times row = rank one matrix.

Projection p = a(aᵀb/aᵀa) onto the line through a.
P = aaᵀ/aᵀa has rank 1.
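A small sketch of projection onto a line; the vectors a and b are hypothetical:

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

a = [1.0, 2.0, 2.0]   # direction of the line
b = [3.0, 0.0, 0.0]   # vector to project

c = dot(a, b) / dot(a, a)          # scalar aᵀb / aᵀa
p = [c * ai for ai in a]           # projection p = a (aᵀb / aᵀa)
e = [bi - pi for bi, pi in zip(b, p)]  # error b - p

# The error is orthogonal to the line through a: aᵀe = 0.
```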

Pseudoinverse A⁺ (Moore-Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A⁺) = N(Aᵀ). A⁺A and AA⁺ are the projection matrices onto the row space and column space. rank(A⁺) = rank(A).
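For a diagonal matrix the pseudoinverse can be written down by hand (invert the nonzero entries, leave the zeros), which makes the projection properties easy to verify; the example is hypothetical:

```python
A  = [[2.0, 0.0],
      [0.0, 0.0]]   # rank one diagonal matrix
Ap = [[0.5, 0.0],
      [0.0, 0.0]]   # its pseudoinverse A⁺

def matmul(X, Y):
    n, m, p = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(p)] for i in range(n)]

# AA⁺ and A⁺A project onto the column space and row space (the x-axis here).
AAp = matmul(A, Ap)
ApA = matmul(Ap, A)
# And A A⁺ A recovers A.
AApA = matmul(AAp, A)
```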

Rank one matrix A = uvᵀ ≠ 0.
Column and row spaces = lines cu and cv.
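A two-line sketch, with u and v chosen arbitrarily:

```python
u = [1, 2]
v = [3, 4, 5]
A = [[ui * vj for vj in v] for ui in u]  # A = uvᵀ, a 2×3 rank one matrix
# Every row is a multiple of v (here row 2 = 2 · row 1, since u = (1, 2)).
```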

Rayleigh quotient q(x) = xᵀAx / xᵀx for symmetric A: λ_min ≤ q(x) ≤ λ_max.
Those extremes are reached at the eigenvectors x for λ_min(A) and λ_max(A).
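A sketch with a symmetric 2×2 matrix whose eigenvalues 1 and 3 (eigenvectors (1, −1) and (1, 1)) are known; the matrix is chosen purely for illustration:

```python
A = [[2.0, 1.0],
     [1.0, 2.0]]  # symmetric, eigenvalues 1 and 3

def q(x):
    """Rayleigh quotient xᵀAx / xᵀx."""
    Ax = [A[0][0] * x[0] + A[0][1] * x[1],
          A[1][0] * x[0] + A[1][1] * x[1]]
    return (x[0] * Ax[0] + x[1] * Ax[1]) / (x[0] * x[0] + x[1] * x[1])

# q(x) stays between λ_min = 1 and λ_max = 3 for every nonzero x ...
vals = [q([1.0, t]) for t in (-2.0, -0.5, 0.0, 0.7, 5.0)]
# ... and the extremes are reached exactly at the eigenvectors.
```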

Special solutions to As = O.
One free variable is s_i = 1, other free variables = 0.
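A sketch using a small echelon matrix (chosen for illustration) with pivots in columns 1 and 3, so x2 and x4 are free:

```python
A = [[1, 2, 0, 3],
     [0, 0, 1, 4]]  # echelon form; free variables x2 and x4

# One free variable set to 1, the other to 0, back-substituting for the pivots:
s1 = [-2, 1, 0, 0]   # x2 = 1, x4 = 0
s2 = [-3, 0, -4, 1]  # x2 = 0, x4 = 1

def matvec(A, x):
    return [sum(row[k] * x[k] for k in range(len(x))) for row in A]

# Both special solutions satisfy As = 0.
```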

Subspace S of V.
Any vector space inside V, including V and Z = {zero vector only}.

Triangle inequality ‖u + v‖ ≤ ‖u‖ + ‖v‖.
For matrix norms, ‖A + B‖ ≤ ‖A‖ + ‖B‖.

Vandermonde matrix V.
Vc = b gives the coefficients of p(x) = c_0 + ... + c_{n−1}x^(n−1) with p(x_i) = b_i. V_ij = (x_i)^(j−1) and det V = product of (x_k − x_i) for k > i.
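A sketch in exact arithmetic; the interpolation points and coefficients are hypothetical:

```python
from fractions import Fraction as F

xs = [F(0), F(1), F(2)]    # interpolation points
c  = [F(1), F(-2), F(3)]   # coefficients of p(x) = 1 - 2x + 3x²

# V_ij = (x_i)^(j-1); Python's 0-based j gives powers 0, 1, 2.
V = [[x**j for j in range(3)] for x in xs]

# Vc reproduces the values p(x_i).
b = [sum(V[i][j] * c[j] for j in range(3)) for i in range(3)]
p = lambda x: c[0] + c[1] * x + c[2] * x * x

# det V = product of (x_k - x_i) for k > i = (1-0)(2-0)(2-1) = 2.
det = (V[0][0] * (V[1][1] * V[2][2] - V[1][2] * V[2][1])
     - V[0][1] * (V[1][0] * V[2][2] - V[1][2] * V[2][0])
     + V[0][2] * (V[1][0] * V[2][1] - V[1][1] * V[2][0]))
```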

Wavelets w_jk(t).
Stretch and shift the time axis to create w_jk(t) = w_00(2^j t − k).
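A sketch using the Haar wavelet as w_00; this particular choice of mother wavelet is an assumption, since the entry does not fix one:

```python
def w00(t):
    """Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    if 0 <= t < 0.5:
        return 1.0
    if 0.5 <= t < 1:
        return -1.0
    return 0.0

def w(j, k, t):
    # Stretch by 2^j and shift by k: w_jk(t) = w_00(2^j t - k).
    return w00(2**j * t - k)

# w_10 lives on [0, 1/2) and w_11 on [1/2, 1): the stretched-and-shifted
# copies have disjoint supports, so their inner product is zero.
samples = [i / 8 + 1 / 16 for i in range(8)]  # midpoints of 8 bins on [0, 1]
ip = sum(w(1, 0, t) * w(1, 1, t) for t in samples) / 8
```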