 8.4.1E: Let L be the line in R2 through the points Find a linear functional...
 8.4.2E: Let L be the line in R2 through the points Find a linear functional...
 8.4.3E: In Exercises 3 and 4, determine whether each set is open or closed ...
 8.4.4E: In Exercises 3 and 4, determine whether each set is open or closed ...
 8.4.5E: In Exercises 5 and 6, determine whether or not each set is compact ...
 8.4.6E: In Exercises 5 and 6, determine whether or not each set is compact ...
 8.4.7E: In Exercises 7–10, let H be the hyperplane through the listed point...
 8.4.8E: In Exercises 7–10, let H be the hyperplane through the listed point...
 8.4.9E: In Exercises 7–10, let H be the hyperplane through the listed point...
 8.4.10E: In Exercises 7–10, let H be the hyperplane through the listed point...
 8.4.11E: and let H be the hyperplane in R4 with normal n and passing through...
 8.4.12E: with normal n that separates A and B. Is there a hyperplane paralle...
 8.4.13E:
 8.4.14E: Let F1 and F2 be 4-dimensional flats in R6, and suppose that F1 F2 ...
 8.4.15E: In Exercises 15–20, write a formula for a linear functional f and s...
 8.4.16E: In Exercises 15–20, write a formula for a linear functional f and s...
 8.4.17E: In Exercises 15–20, write a formula for a linear functional f and s...
 8.4.18E: In Exercises 15–20, write a formula for a linear functional f and s...
 8.4.19E: In Exercises 15–20, write a formula for a linear functional f and s...
 8.4.20E: In Exercises 15–20, write a formula for a linear functional f and s...
 8.4.21E: In Exercises 21 and 22, mark each statement True or False. Justify ...
 8.4.22E: In Exercises 21 and 22, mark each statement True or False. Justify ...
 8.4.23E: Let v1 = v2 = v3 = and p = Find a hyperplane [f : d] (in this case, ...
 8.4.24E: Repeat Exercise 23 for v1 = v2 = v3 = and p = Reference Exercise 23:...
 8.4.25E:
 8.4.27E: Give an example of a closed subset S of R2 such that conv S is not ...
 8.4.28E: Give an example of a compact set A and a closed set B in R2 such th...
 8.4.29E: Prove that the open ball is a convex set. [Hint: Use the Triangle I...
 8.4.30E: Prove that the convex hull of a bounded set is bounded.
Solutions for Chapter 8.4: Linear Algebra and Its Applications 4th Edition
ISBN: 9780321385178

Cayley-Hamilton Theorem.
p(λ) = det(A - λI) has p(A) = zero matrix.

Change of basis matrix M.
The old basis vectors vj are combinations Σ mij wi of the new basis vectors. The coordinates of c1 v1 + ... + cn vn = d1 w1 + ... + dn wn are related by d = M c. (For n = 2, set v1 = m11 w1 + m21 w2, v2 = m12 w1 + m22 w2.)
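A minimal numpy sketch of the d = M c relation, with illustrative basis vectors (not from the text):

```python
import numpy as np

# New basis w1, w2 of R^2 (example vectors)
w1, w2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
# Express the old basis in the new one: v_j = m1j*w1 + m2j*w2 -> columns of M
v1 = 2*w1 + 1*w2                  # m11 = 2, m21 = 1
v2 = 0*w1 + 3*w2                  # m12 = 0, m22 = 3
M = np.array([[2.0, 0.0],
              [1.0, 3.0]])

c = np.array([1.0, 2.0])          # coordinates in the old basis
d = M @ c                         # coordinates in the new basis: d = M c

# Both coordinate vectors describe the same point
x_old = c[0]*v1 + c[1]*v2
x_new = d[0]*w1 + d[1]*w2
assert np.allclose(x_old, x_new)
```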

Cholesky factorization.
A = C^T C = (L√D)(L√D)^T for positive definite A.
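As a quick check of this identity, numpy's `cholesky` returns a lower-triangular L with A = L L^T; that L plays the role of L√D above. The matrix values are illustrative:

```python
import numpy as np

# A small positive definite matrix (illustrative choice)
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

# numpy convention: A = L @ L.T with L lower triangular
L = np.linalg.cholesky(A)

assert np.allclose(A, L @ L.T)
assert np.allclose(L, np.tril(L))   # L really is lower triangular
```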

Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2) x^T A x - x^T b over growing Krylov subspaces.
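The steps can be sketched as follows; this is the textbook form of the iteration, with an example system chosen here for illustration:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve Ax = b for symmetric positive definite A (textbook CG sketch)."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x          # residual = -gradient of (1/2)x^T A x - x^T b
    p = r.copy()           # first search direction
    for _ in range(max_iter):
        rr = r @ r
        if np.sqrt(rr) < tol:
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)      # exact line search along p
        x = x + alpha * p
        r = r - alpha * Ap
        beta = (r @ r) / rr        # keep new direction A-conjugate to p
        p = r + beta * p
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
assert np.allclose(A @ x, b)
```

In exact arithmetic CG terminates in at most n steps, one per dimension of the growing Krylov subspace.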

Distributive Law.
A(B + C) = AB + AC. Add then multiply, or multiply then add.

Dot product = Inner product x^T y = x1 y1 + ... + xn yn.
Complex dot product is x̄^T y. Perpendicular vectors have x^T y = 0. (AB)ij = (row i of A) · (column j of B).

Full row rank r = m.
Independent rows, at least one solution to Ax = b, column space is all of Rm. Full rank means full column rank or full row rank.

Gauss-Jordan method.
Invert A by row operations on [A I] to reach [I A^-1].
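A minimal sketch of this reduction in numpy (with partial pivoting added for stability; the example matrix is illustrative):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Row-reduce [A I] to [I A^-1]; a sketch, not production code."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        # partial pivot: swap in the largest entry in this column
        pivot = np.argmax(np.abs(M[col:, col])) + col
        M[[col, pivot]] = M[[pivot, col]]
        M[col] = M[col] / M[col, col]           # scale pivot row to 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]  # eliminate above and below
    return M[:, n:]

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
Ainv = gauss_jordan_inverse(A)
assert np.allclose(A @ Ainv, np.eye(2))
```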

Linear combination cv + dw or Σ cj vj.
Vector addition and scalar multiplication.

Markov matrix M.
All mij ≥ 0 and each column sum is 1. Largest eigenvalue λ = 1. If mij > 0, the columns of M^k approach the steady state eigenvector: M s = s > 0.
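A small numpy illustration of the powers M^k converging to the steady state (entries chosen for the example):

```python
import numpy as np

# Column-stochastic matrix with strictly positive entries (example values)
M = np.array([[0.8, 0.3],
              [0.2, 0.7]])
assert np.allclose(M.sum(axis=0), 1.0)   # each column sums to 1

# The columns of M^k approach the steady state s with M s = s
Mk = np.linalg.matrix_power(M, 50)
s = Mk[:, 0]
assert np.allclose(M @ s, s)             # eigenvector for lambda = 1
assert np.all(s > 0)                     # steady state is positive
```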

Permutation matrix P.
There are n! orders of 1, ..., n. The n! P's have the rows of I in those orders. PA puts the rows of A in the same order. P is even or odd (det P = 1 or -1) based on the number of row exchanges to reach I.
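These facts are easy to verify in numpy (0-indexed here, and the order is an arbitrary example):

```python
import numpy as np

order = [2, 0, 1]                 # one of the n! orders of 0, ..., n-1
P = np.eye(3)[order]              # rows of I in that order

A = np.arange(9.0).reshape(3, 3)
assert np.allclose(P @ A, A[order])         # PA reorders the rows of A
assert round(np.linalg.det(P)) in (1, -1)   # even or odd permutation
```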

Pivot columns of A.
Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.

Polar decomposition A = Q H.
Orthogonal Q times positive (semi)definite H.
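One standard way to compute Q and H is through the SVD, A = U Σ V^T, giving Q = U V^T and H = V Σ V^T; a numpy sketch with an illustrative matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
U, S, Vt = np.linalg.svd(A)
Q = U @ Vt                        # orthogonal factor
H = Vt.T @ np.diag(S) @ Vt        # symmetric positive semidefinite factor

assert np.allclose(A, Q @ H)
assert np.allclose(Q.T @ Q, np.eye(2))            # Q is orthogonal
assert np.all(np.linalg.eigvalsh(H) >= -1e-12)    # H is PSD
```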

Pseudoinverse A+ (Moore-Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A+) = N(A^T). A+A and AA+ are the projection matrices onto the row space and column space. Rank(A+) = rank(A).
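numpy's `pinv` computes A+; a quick check of the projection and rank properties on an illustrative rank-deficient matrix:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 0.0]])        # rank 1, not invertible

Aplus = np.linalg.pinv(A)         # Moore-Penrose pseudoinverse

# A+ A projects onto the row space; A A+ projects onto the column space
P_row, P_col = Aplus @ A, A @ Aplus
assert np.allclose(P_row @ P_row, P_row)   # projections are idempotent
assert np.allclose(P_col @ P_col, P_col)
assert np.linalg.matrix_rank(Aplus) == np.linalg.matrix_rank(A)
```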

Saddle point of f(x1, ..., xn).
A point where the first derivatives of f are zero and the second derivative matrix (∂²f/∂xi∂xj = Hessian matrix) is indefinite.
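For example, f(x, y) = x² - y² is stationary at the origin and its Hessian there is indefinite; checking the eigenvalue signs in numpy:

```python
import numpy as np

# Hessian of f(x, y) = x^2 - y^2 at the origin: diag(2, -2)
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])
eig = np.linalg.eigvalsh(H)
is_indefinite = (eig.min() < 0) and (eig.max() > 0)
assert is_indefinite     # mixed-sign eigenvalues => saddle point
```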

Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!

Spectral Theorem A = QΛQ^T.
Real symmetric A has real λ's and orthonormal q's.
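numpy's `eigh` is built for exactly this case; a check on an illustrative symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # real symmetric

lam, Q = np.linalg.eigh(A)        # real eigenvalues, orthonormal eigenvectors

assert np.allclose(Q @ np.diag(lam) @ Q.T, A)   # A = Q Lambda Q^T
assert np.allclose(Q.T @ Q, np.eye(2))          # columns of Q are orthonormal
assert np.allclose(lam.imag if np.iscomplexobj(lam) else 0.0, 0.0)
```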

Subspace S of V.
Any vector space inside V, including V and Z = {zero vector only}.

Tridiagonal matrix T: tij = 0 if |i - j| > 1.
T^-1 has rank 1 above and below the diagonal.

Vandermonde matrix V.
Vc = b gives the coefficients of p(x) = c0 + c1 x + ... + c_{n-1} x^{n-1} with p(xi) = bi. Vij = (xi)^(j-1) and det V = product of (xk - xi) for k > i.
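Both facts can be checked with numpy's `vander` (0-indexed, so Vij = xi**j; the sample points and values are illustrative):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])     # distinct sample points
b = np.array([1.0, 3.0, 7.0])     # target values p(x_i) = b_i

V = np.vander(x, increasing=True) # columns are 1, x, x^2
c = np.linalg.solve(V, b)         # coefficients of p(x) = c0 + c1 x + c2 x^2

assert np.allclose(np.polyval(c[::-1], x), b)   # p interpolates the data

# det V = product of (x_k - x_i) for k > i
det_formula = (x[1]-x[0]) * (x[2]-x[0]) * (x[2]-x[1])
assert np.isclose(np.linalg.det(V), det_formula)
```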