 1.4.1E: Compute the products in Exercises 1–4 using (a) the definition, as ...
 1.4.2E: Compute the products in Exercises 1–4 using (a) the definition, as ...
 1.4.3E: Compute the products in Exercises 1–4 using (a) the definition, as ...
 1.4.4E: Compute the products in Exercises 1–4 using (a) the definition, as ...
 1.4.5E: In Exercises 5–8, use the definition of Ax to write the matrix equa...
 1.4.6E: In Exercises 5–8, use the definition of Ax to write the matrix equa...
 1.4.7E: In Exercises 5–8, use the definition of Ax to write the matrix equa...
 1.4.8E: In Exercises 5–8, use the definition of Ax to write the matrix equa...
 1.4.9E: In Exercises 9 and 10, write the system first as a vector equation ...
 1.4.10E: In Exercises 9 and 10, write the system first as a vector equation ...
 1.4.11E: Given A and b in Exercises 11 and 12, write the augmented matrix fo...
 1.4.12E: Given A and b in Exercises 11 and 12, write the augmented matrix fo...
 1.4.13E: Is u in the plane in ℝ³ spanned by the columns of A? (See the figur...
 1.4.14E: Is u in the subset of ℝ³ spanned by the columns of A? Why or why not?
 1.4.15E: Show that the equation Ax = b does not have a solution for all poss...
 1.4.16E: Repeat the requests from Exercise 15 with Exercise 15: Show that th...
 1.4.17E: Exercises 17–20 refer to the matrices A and B below. Make appropria...
 1.4.18E: Exercises 17–20 refer to the matrices A and B below. Make appropria...
 1.4.19E: Exercises 17–20 refer to the matrices A and B below. Make appropria...
 1.4.20E: Exercises 17–20 refer to the matrices A and B below. Make appropria...
 1.4.21E: Does {v1, v2, v3} span ℝ⁴? Why or why not?
 1.4.22E: Does {v1, v2, v3} span ℝ³? Why or why not?
 1.4.23E: a. The equation Ax = b is referred to as a vector equation.b. A vec...
 1.4.24E: a. Every matrix equation Ax = b corresponds to a vector equation wi...
 1.4.25E: Note that Use this fact (and no row operations) to find scalars c1,...
 1.4.26E: It can be shown that 2u – 3v – w = 0. Use this fact (and no row ope...
 1.4.27E: Rewrite the (numerical) matrix equation below in symbolic form as a...
 1.4.28E: Let q1, q2, q3, and v represent vectors in ℝ⁵, and let x1, x2, and ...
 1.4.29E: Construct a 3 × 3 matrix, not in echelon form, whose columns span ?...
 1.4.30E: Construct a 3 × 3 matrix, not in echelon form, whose columns do not...
 1.4.31E: Let A be a 3 × 2 matrix. Explain why the equation Ax = b cannot be ...
 1.4.32E: Could a set of three vectors in ℝ⁴ span all of ℝ⁴? Explain. What ab...
 1.4.33E: Suppose A is a 4 × 3 matrix and b is a vector in ℝ⁴ with the prope...
 1.4.34E: Let A be a 3 × 4 matrix, let v1 and v2 be vectors in ℝ³, and let w ...
 1.4.35E: Let A be a 5 × 3 matrix, let y be a vector in ℝ³, and let z be a ve...
 1.4.36E: Suppose A is a 4 × 4 matrix and b is a vector in ℝ⁴, with the prope...
 1.4.37E: [M] In Exercises 37–40, determine if the columns of the matrix span...
 1.4.38E: [M] In Exercises 37–40, determine if the columns of the matrix span...
 1.4.39E: [M] In Exercises 37–40, determine if the columns of the matrix span...
 1.4.40E: [M] In Exercises 37–40, determine if the columns of the matrix span...
 1.4.41E: [M] Find a column of the matrix in Exercise 39 that can be deleted ...
 1.4.42E: [M] Find a column of the matrix in Exercise 40 that can be deleted ...
Solutions for Chapter 1.4: Linear Algebra and Its Applications 4th Edition
ISBN: 9780321385178

Augmented matrix [A b].
Ax = b is solvable when b is in the column space of A; then [A b] has the same rank as A. Elimination on [A b] keeps equations correct.
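The rank test above can be sketched in plain Python; a minimal illustration (the `rank` helper and the sample A, b are mine, not from the text), using exact `Fraction` arithmetic so the pivot count is reliable:

```python
# Sketch: Ax = b is solvable exactly when rank([A b]) == rank(A).
from fractions import Fraction

def rank(M):
    """Row-reduce a copy of M (list of rows) and count the pivots."""
    M = [[Fraction(v) for v in row] for row in M]
    pivots, row = 0, 0
    for col in range(len(M[0])):
        pivot = next((r for r in range(row, len(M)) if M[r][col] != 0), None)
        if pivot is None:
            continue                      # no pivot in this column
        M[row], M[pivot] = M[pivot], M[row]
        for r in range(row + 1, len(M)):
            f = M[r][col] / M[row][col]
            M[r] = [a - f * b for a, b in zip(M[r], M[row])]
        pivots += 1
        row += 1
    return pivots

A = [[1, 2], [2, 4], [0, 1]]
b = [3, 6, 1]                             # lies in the column space of A
augmented = [row + [bi] for row, bi in zip(A, b)]
solvable = rank(A) == rank(augmented)     # True exactly when b is in C(A)
print(solvable)
```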

Back substitution.
Upper triangular systems are solved in reverse order, x_n back to x_1.
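A minimal back-substitution sketch (plain Python; the helper name and sample system are illustrative):

```python
def back_substitute(U, b):
    """Solve Ux = b for upper triangular U, working from x_n back to x_1."""
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):               # last equation first
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]              # pivot U[i][i] must be nonzero
    return x

U = [[2.0, 1.0, 1.0],
     [0.0, 3.0, 1.0],
     [0.0, 0.0, 4.0]]
b = [5.0, 7.0, 8.0]
print(back_substitute(U, b))
```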

Block matrix.
A matrix can be partitioned into matrix blocks, by cuts between rows and/or between columns. Block multiplication of AB is allowed if the block shapes permit.
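A small sketch (plain Python, with made-up 4 × 4 data) checking that a block of AB equals the block formula A11·B11 + A12·B21 when both matrices are cut between rows/columns 2 and 3:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matadd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def block(M, rows, cols):
    return [[M[i][j] for j in cols] for i in rows]

A = [[1, 2, 0, 1], [3, 1, 1, 0], [0, 2, 2, 1], [1, 0, 1, 3]]
B = [[2, 0, 1, 1], [1, 1, 0, 2], [0, 3, 1, 0], [2, 1, 0, 1]]

top, bottom = range(0, 2), range(2, 4)
# The (1,1) block of AB equals A11*B11 + A12*B21 when the cuts match.
AB11 = matadd(matmul(block(A, top, top), block(B, top, top)),
              matmul(block(A, top, bottom), block(B, bottom, top)))
full = matmul(A, B)
print(AB11 == block(full, top, top))
```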

Complete solution x = x_p + x_n to Ax = b.
(Particular x_p) + (x_n in nullspace).

Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2)x^T Ax − x^T b over growing Krylov subspaces.
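A bare-bones conjugate-gradient sketch in plain Python (the function and sample matrix are illustrative, not the book's code); for an n × n positive definite system it converges in at most n steps in exact arithmetic:

```python
def cg(A, b, steps=10):
    """Conjugate gradients for symmetric positive definite A (lists of rows)."""
    n = len(b)
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    matvec = lambda M, v: [dot(row, v) for row in M]
    x = [0.0] * n
    r = b[:]                              # residual b - Ax (x starts at 0)
    p = r[:]                              # first search direction
    for _ in range(steps):
        rr = dot(r, r)
        if rr < 1e-20:
            break                         # residual is (numerically) zero
        Ap = matvec(A, p)
        alpha = rr / dot(p, Ap)           # exact line search along p
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        beta = dot(r, r) / rr             # make next direction A-conjugate
        p = [ri + beta * pi for ri, pi in zip(r, p)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]              # symmetric positive definite
b = [1.0, 2.0]
x = cg(A, b)
print(x)
```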

Cyclic shift S.
Permutation with S_21 = 1, S_32 = 1, ..., finally S_1n = 1. Its eigenvalues are the nth roots e^(2πik/n) of 1; eigenvectors are columns of the Fourier matrix F.
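A quick numeric check of this eigenvalue claim for n = 4 (plain Python with `cmath`; the construction is mine, and the eigenvector below uses λ^(−j), a Fourier column up to conjugation):

```python
import cmath

n = 4
# S sends e1 -> e2 -> e3 -> e4 -> e1: entry S[i][j] = 1 when i = j+1 (mod n).
S = [[1 if i == (j + 1) % n else 0 for j in range(n)] for i in range(n)]

ok = True
for k in range(n):
    lam = cmath.exp(2j * cmath.pi * k / n)      # an nth root of 1
    v = [lam ** (-j) for j in range(n)]         # candidate eigenvector
    Sv = [sum(S[i][j] * v[j] for j in range(n)) for i in range(n)]
    ok = ok and all(abs(Sv[i] - lam * v[i]) < 1e-12 for i in range(n))
print(ok)
```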

Diagonalizable matrix A.
Must have n independent eigenvectors (in the columns of S; automatic with n different eigenvalues). Then S^(−1)AS = Λ = eigenvalue matrix.

Elimination.
A sequence of row operations that reduces A to an upper triangular U or to the reduced form R = rref(A). Then A = LU with multipliers l_ij in L, or PA = LU with row exchanges in P, or EA = R with an invertible E.
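A compact LU sketch (plain Python, no row exchanges; assumes nonzero pivots, and the helper name is illustrative): elimination stores each multiplier l_ij in L and leaves the upper triangular U.

```python
def lu(A):
    """Factor A = LU by elimination without row exchanges."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for col in range(n):
        for row in range(col + 1, n):
            m = U[row][col] / U[col][col]        # multiplier l_(row,col)
            L[row][col] = m
            U[row] = [a - m * b for a, b in zip(U[row], U[col])]
    return L, U

A = [[2.0, 1.0], [6.0, 8.0]]
L, U = lu(A)
print(L, U)
```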

Full column rank r = n.
Independent columns, N(A) = {0}, no free variables.

Hypercube matrix.
Row n + 1 counts corners, edges, faces, ... of a cube in R^n.

Jordan form J = M^(−1)AM.
If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J_1, ..., J_s). The block J_k is λ_k I_k + N_k where N_k has 1's on diagonal 1. Each block has one eigenvalue λ_k and one eigenvector.

Kirchhoff's Laws.
Current Law: net current (in minus out) is zero at each node. Voltage Law: Potential differences (voltage drops) add to zero around any closed loop.

Lucas numbers.
L_n = 2, 1, 3, 4, ... satisfy L_n = L_(n−1) + L_(n−2) = λ_1^n + λ_2^n, with λ_1, λ_2 = (1 ± √5)/2 from the Fibonacci matrix [[1, 1], [1, 0]]. Compare L_0 = 2 with F_0 = 0.
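A short sketch (plain Python, illustrative) checking that the recurrence and the closed form λ_1^n + λ_2^n produce the same numbers:

```python
import math

def lucas(n):
    """Lucas numbers by the recurrence, starting L0 = 2, L1 = 1."""
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

l1 = (1 + math.sqrt(5)) / 2               # eigenvalues of [[1, 1], [1, 0]]
l2 = (1 - math.sqrt(5)) / 2
closed = [round(l1 ** n + l2 ** n) for n in range(10)]
print([lucas(n) for n in range(10)], closed)
```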

Normal matrix.
If N NT = NT N, then N has orthonormal (complex) eigenvectors.

Particular solution x p.
Any solution to Ax = b; often x_p has free variables = 0.

Projection p = a(a^T b / a^T a) onto the line through a.
P = aa^T / a^T a has rank 1.
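A worked projection sketch (plain Python; the vectors are illustrative). The key property to check is that the error b − p is perpendicular to a:

```python
def project(a, b):
    """Project b onto the line through a: p = a (aTb / aTa)."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    c = dot(a, b) / dot(a, a)
    return [c * ai for ai in a]

a = [1.0, 2.0, 2.0]
b = [3.0, 0.0, 3.0]
p = project(a, b)
print(p)
```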

Pseudoinverse A^+ (Moore-Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A^+) = N(A^T). A^+A and AA^+ are the projection matrices onto the row space and column space. Rank(A^+) = rank(A).
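For the rank-1 case the pseudoinverse has a closed form worth checking by hand: if A is a single nonzero column a, then A^+ = a^T/(a^T a). A hedged sketch (plain Python; data is mine) verifying that A^+A ≈ [1] and that AA^+ is idempotent, i.e. a projection:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1.0], [2.0], [2.0]]                  # 3 by 1, rank 1
aTa = sum(row[0] ** 2 for row in A)
A_plus = [[row[0] / aTa for row in A]]     # 1 by 3: a^T / (a^T a)
print(matmul(A_plus, A))                   # A+A, close to [[1.0]]
P = matmul(A, A_plus)                      # AA+: projection onto C(A)
```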

Rank r(A).
r = number of pivots = dimension of column space = dimension of row space.

Schwarz inequality.
|v·w| ≤ ||v|| ||w||. Then |v^T A w|^2 ≤ (v^T A v)(w^T A w) for positive definite A.
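A quick numeric check of both forms (plain Python; the vectors and the positive definite A below are illustrative examples of mine):

```python
import math

dot = lambda u, v: sum(x * y for x, y in zip(u, v))
matvec = lambda M, v: [dot(row, v) for row in M]

v = [1.0, -2.0, 3.0]
w = [4.0, 0.0, -1.0]
# Plain Schwarz: |v.w| <= ||v|| ||w||
assert abs(dot(v, w)) <= math.sqrt(dot(v, v)) * math.sqrt(dot(w, w))

A = [[2.0, 1.0, 0.0],
     [1.0, 2.0, 1.0],
     [0.0, 1.0, 2.0]]                     # positive definite (minors 2, 3, 4)
lhs = dot(v, matvec(A, w)) ** 2           # |vT A w|^2
rhs = dot(v, matvec(A, v)) * dot(w, matvec(A, w))
print(lhs <= rhs)
```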

Triangle inequality ||u + v|| ≤ ||u|| + ||v||.
For matrix norms ||A + B|| ≤ ||A|| + ||B||.