 9.1.1: Find the two pivots with and without row exchange to maximize the p...
 9.1.2: Compute the exact inverse of the Hilbert matrix A by elimination. T...
 9.1.3: For the same A compute b = Ax for x = (1, 1, 1) and x = (0, 6, 3.6)....
 9.1.4: Find the eigenvalues (by computer) of the 8 by 8 Hilbert matrix a_ij =...
 9.1.5: For back substitution with a band matrix (width w), show that the n...
 9.1.6: If you know L and U and Q and R, is it faster to solve LUx = b or QR...
 9.1.7: Show that the number of multiplications to invert an upper triangul...
 9.1.8: Choosing the largest available pivot in each column (partial pivoti...
 9.1.9: Put 1's on the three central diagonals of a 4 by 4 tridiagonal mat...
 9.1.10: (Suggested by C. Van Loan.) Find the LU factorization and solve by...
 9.1.11: (a) Choose sin θ and cos θ to triangularize A, and find R: Givens...
 9.1.12: When A is multiplied by a plane rotation Q_ij, which n² entries of A...
 9.1.13: How many multiplications and how many additions are used to compute...
 9.1.14: (Turning a robot hand) The robot produces any 3 by 3 rotation A fro...
 9.1.15: Create the 10 by 10 second difference matrix K = toeplitz([2 -1 ze...
 9.1.16: Another ordering for this matrix K colors the meshpoints alternatel...
 9.1.17: Jeff Stuart has created a student activity that brilliantly demonst...
Solutions for Chapter 9.1: Gaussian Elimination in Practice
Full solutions for Introduction to Linear Algebra  4th Edition
ISBN: 9780980232714

Cholesky factorization
A = CCᵀ = (L√D)(L√D)ᵀ for positive definite A.
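
A quick numerical check of the factorization, using a small positive definite matrix chosen for illustration (not from the text); note that NumPy returns the lower-triangular factor:

```python
import numpy as np

# Illustrative 3x3 positive definite matrix (not from the text)
A = np.array([[4.0, 2.0, 0.0],
              [2.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# numpy gives the lower-triangular Cholesky factor C with A = C Cᵀ
C = np.linalg.cholesky(A)
assert np.allclose(C @ C.T, A)          # A is recovered exactly
assert np.allclose(C, np.tril(C))       # C is lower triangular
```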

Covariance matrix Σ.
When random variables x_i have mean = average value = 0, their covariances Σ_ij are the averages of x_i x_j. With means x̄_i, the matrix Σ = mean of (x − x̄)(x − x̄)ᵀ is positive (semi)definite; Σ is diagonal if the x_i are independent.
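
A minimal sketch of this definition on sampled data (random samples chosen for illustration): form Σ from centered samples and confirm it is positive semidefinite.

```python
import numpy as np

rng = np.random.default_rng(0)
# 3 random variables, 1000 samples each (illustrative data, not from the text)
X = rng.standard_normal((3, 1000))
X -= X.mean(axis=1, keepdims=True)       # subtract each mean x̄_i

Sigma = (X @ X.T) / X.shape[1]           # Σ = average of (x − x̄)(x − x̄)ᵀ
assert np.allclose(Sigma, Sigma.T)       # Σ is symmetric
# positive semidefinite: no eigenvalue is negative (up to roundoff)
assert np.all(np.linalg.eigvalsh(Sigma) >= -1e-12)
```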

Cross product u xv in R3:
Vector perpendicular to u and v, length ‖u‖‖v‖|sin θ| = area of parallelogram, u × v = "determinant" of [i j k; u1 u2 u3; v1 v2 v3].
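
Both properties can be checked numerically (vectors chosen for illustration):

```python
import numpy as np

u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 3.0])
w = np.cross(u, v)

# w is perpendicular to both u and v
assert abs(w @ u) < 1e-12 and abs(w @ v) < 1e-12

# its length is ‖u‖‖v‖|sin θ| = area of the parallelogram
area = np.linalg.norm(u) * np.linalg.norm(v) * np.sqrt(
    1 - (u @ v) ** 2 / ((u @ u) * (v @ v)))
assert np.isclose(np.linalg.norm(w), area)
```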

Diagonal matrix D.
d_ij = 0 if i ≠ j. Block-diagonal: zero outside square blocks D_ii.

Echelon matrix U.
The first nonzero entry (the pivot) in each row comes in a later column than the pivot in the previous row. All zero rows come last.

Four Fundamental Subspaces C(A), N(A), C(Aᵀ), N(Aᵀ).
Use Aᴴ in place of Aᵀ for complex A.

Fourier matrix F.
Entries F_jk = e^(2πijk/n) give orthogonal columns: FᴴF = nI. Then y = Fc is the (inverse) Discrete Fourier Transform y_j = Σ_k c_k e^(2πijk/n).
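
A short sketch building F explicitly and checking both claims; the match with NumPy's `ifft` (up to the factor n) is this document's "(inverse) DFT" statement under NumPy's sign convention:

```python
import numpy as np

n = 8
jj, kk = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(2j * np.pi * jj * kk / n)     # F_jk = e^(2πijk/n)

# orthogonal columns: FᴴF = nI
assert np.allclose(F.conj().T @ F, n * np.eye(n))

# y = Fc is n times numpy's inverse DFT of c
c = np.arange(n, dtype=float)
assert np.allclose(F @ c, n * np.fft.ifft(c))
```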

Free variable x_i.
Column i has no pivot in elimination. We can give the n − r free variables any values; then Ax = b determines the r pivot variables (if solvable!).

Full row rank r = m.
Independent rows, at least one solution to Ax = b, column space is all of Rᵐ. Full rank means full column rank or full row rank.

Hermitian matrix Aᴴ = Āᵀ = A.
Complex analog ā_ji = a_ij of a symmetric matrix.

Least squares solution x̂.
The vector x̂ that minimizes the error ‖e‖² solves AᵀAx̂ = Aᵀb. Then e = b − Ax̂ is orthogonal to all columns of A.
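
A minimal sketch with an illustrative overdetermined system (not from the text): solve the normal equations and verify the orthogonality of the error.

```python
import numpy as np

# Illustrative 3x2 system with no exact solution
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 2.0, 4.0])

# solve AᵀA x̂ = Aᵀ b
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
e = b - A @ x_hat
# the error e is orthogonal to every column of A
assert np.allclose(A.T @ e, 0)
```

The same x̂ comes from `np.linalg.lstsq(A, b, rcond=None)`, which avoids forming AᵀA explicitly.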

Lucas numbers
L_n = 2, 1, 3, 4, ... satisfy L_n = L_{n−1} + L_{n−2} = λ_1ⁿ + λ_2ⁿ, with λ_1, λ_2 = (1 ± √5)/2 from the Fibonacci matrix [1 1; 1 0]. Compare L_0 = 2 with F_0 = 0.
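
The recurrence and the closed form can be compared directly:

```python
# Recurrence L_n = L_{n-1} + L_{n-2}, starting from L_0 = 2, L_1 = 1
L = [2, 1]
for _ in range(10):
    L.append(L[-1] + L[-2])

# Closed form L_n = λ1^n + λ2^n with λ1, λ2 = (1 ± √5)/2
lam1 = (1 + 5 ** 0.5) / 2
lam2 = (1 - 5 ** 0.5) / 2
assert all(round(lam1 ** n + lam2 ** n) == L[n] for n in range(12))
```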

Multiplicities AM and GM.
The algebraic multiplicity AM of λ is the number of times λ appears as a root of det(A − λI) = 0. The geometric multiplicity GM is the number of independent eigenvectors for λ (= dimension of the eigenspace).
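
A standard example where the two multiplicities differ is a Jordan block (chosen for illustration):

```python
import numpy as np

# Jordan block: λ = 5 is a double root, but there is only one eigenvector
A = np.array([[5.0, 1.0],
              [0.0, 5.0]])

lam = np.linalg.eigvals(A)
assert np.allclose(lam, [5, 5])                          # AM = 2
# rank(A − 5I) = 1, so the eigenspace has dimension GM = 2 − 1 = 1
assert np.linalg.matrix_rank(A - 5 * np.eye(2)) == 1
```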

Multiplier ℓ_ij.
The pivot row j is multiplied by ℓ_ij and subtracted from row i to eliminate the i, j entry: ℓ_ij = (entry to eliminate) / (jth pivot).
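
One elimination step, sketched on a small illustrative matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [6.0, 8.0]])

# multiplier ℓ_21 = (entry to eliminate) / (1st pivot) = 6 / 2 = 3
l21 = A[1, 0] / A[0, 0]
A[1, :] -= l21 * A[0, :]      # subtract ℓ_21 × (pivot row 1) from row 2
assert A[1, 0] == 0.0         # the (2, 1) entry is eliminated
```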

Normal equation AᵀAx̂ = Aᵀb.
Gives the least squares solution to Ax = b if A has full rank n (independent columns). The equation says that (columns of A)·(b − Ax̂) = 0.

Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; error e = b − Pb is perpendicular to S. P² = P = Pᵀ, eigenvalues are 1 or 0, eigenvectors are in S or S⊥. If columns of A = basis for S then P = A(AᵀA)⁻¹Aᵀ.
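
A sketch verifying these properties, with an illustrative basis matrix A (not from the text):

```python
import numpy as np

# columns of A are a basis for the subspace S
A = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 2.0]])
P = A @ np.linalg.inv(A.T @ A) @ A.T

# P is a projection: P² = P = Pᵀ
assert np.allclose(P @ P, P) and np.allclose(P, P.T)
# eigenvalues are 0 or 1 (here rank 2, so 0, 1, 1)
assert np.allclose(np.sort(np.linalg.eigvalsh(P)), [0, 1, 1])
```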

Reduced row echelon form R = rref(A).
Pivots = 1; zeros above and below pivots; the r nonzero rows of R give a basis for the row space of A.

Schwarz inequality
|v·w| ≤ ‖v‖‖w‖. Then |vᵀAw|² ≤ (vᵀAv)(wᵀAw) for positive definite A.
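
A numerical spot check of both inequalities (random vectors and an illustrative positive definite A, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
v, w = rng.standard_normal(3), rng.standard_normal(3)

# ordinary Schwarz inequality
assert abs(v @ w) <= np.linalg.norm(v) * np.linalg.norm(w)

# generalized form in the A-inner product, A positive definite
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
assert (v @ A @ w) ** 2 <= (v @ A @ v) * (w @ A @ w)
```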

Subspace S of V.
Any vector space inside V, including V and Z = {zero vector only}.

Toeplitz matrix.
Constant down each diagonal = time-invariant (shift-invariant) filter.
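
A small sketch building a Toeplitz matrix from its first column and first row (values chosen to echo the second difference matrix of Problem 9.1.15) and checking the constant diagonals:

```python
import numpy as np

# first column c and first row r; T[i, j] depends only on i - j
c = [2, -1, 0, 0]
r = [2, -1, 0, 0]
n = len(c)
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)]
              for i in range(n)])

# every diagonal is constant (shift invariance)
assert all(len(set(np.diag(T, k))) == 1 for k in range(-n + 1, n))
```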