5.3.1: In Exercises 1–2, find , , , and .
5.3.2: In Exercises 1–2, find , , , and .
5.3.3: In Exercises 3–4, show that u, v, and k satisfy Theorem 5.3.1.
5.3.4: In Exercises 3–4, show that u, v, and k satisfy Theorem 5.3.1.
 5.3.5: Solve the equation for x, where u and v are the vectors in Exercise 3.
 5.3.6: Solve the equation for x, where u and v are the vectors in Exercise 4.
5.3.7: In Exercises 7–8, find , , , , and .
5.3.8: In Exercises 7–8, find , , , , and .
 5.3.9: Let A be the matrix given in Exercise 7, and let B be the matrix Co...
 5.3.10: Let A be the matrix given in Exercise 8, and let B be the matrix Co...
5.3.11: In Exercises 11–12, compute , , and , and show that the vectors sati...
5.3.12: In Exercises 11–12, compute , , and , and show that the vectors sati...
 5.3.13: Compute for the vectors u, v, and w in Exercise 11.
 5.3.14: Compute for the vectors u, v, and w in Exercise 12.
5.3.15: In Exercises 15–18, find the eigenvalues and bases for the eigenspac...
5.3.16: In Exercises 15–18, find the eigenvalues and bases for the eigenspac...
5.3.17: In Exercises 15–18, find the eigenvalues and bases for the eigenspac...
5.3.18: In Exercises 15–18, find the eigenvalues and bases for the eigenspac...
5.3.19: In Exercises 19–22, each matrix C has the form (15). Theorem 5.3.7 implies...
5.3.20: In Exercises 19–22, each matrix C has the form (15). Theorem 5.3.7 implies...
5.3.21: In Exercises 19–22, each matrix C has the form (15). Theorem 5.3.7 implies...
5.3.22: In Exercises 19–22, each matrix C has the form (15). Theorem 5.3.7 implies...
5.3.23: In Exercises 23–26, find an invertible matrix P and a matrix C of fo...
5.3.24: In Exercises 23–26, find an invertible matrix P and a matrix C of fo...
5.3.25: In Exercises 23–26, find an invertible matrix P and a matrix C of fo...
5.3.26: In Exercises 23–26, find an invertible matrix P and a matrix C of fo...
 5.3.27: Find all complex scalars k, if any, for which u and v are orthogona...
 5.3.28: Show that if A is a real matrix and x is a column vector in , then ...
 5.3.29: The matrices called Pauli spin matrices, are used in quantum mechan...
 5.3.30: If k is a real scalar and v is a vector in , then Theorem 3.2.1 sta...
5.3.31: Prove part (c) of Theorem 5.3.1.
5.3.32: Prove Theorem 5.3.2.
 5.3.33: Prove that if u and v are vectors in , then
 5.3.34: It follows from Theorem 5.3.7 that the eigenvalues of the rotation ...
 5.3.35: The two parts of this exercise lead you through a proof of Theorem ...
5.3.36: In this problem you will prove the complex analog of the Cauchy–Sch...
5.3.a: In parts (a)–(f) determine whether the statement is true or false, a...
5.3.b: In parts (a)–(f) determine whether the statement is true or false, a...
5.3.c: In parts (a)–(f) determine whether the statement is true or false, a...
5.3.d: In parts (a)–(f) determine whether the statement is true or false, a...
5.3.e: In parts (a)–(f) determine whether the statement is true or false, a...
5.3.f: In parts (a)–(f) determine whether the statement is true or false, a...
Solutions for Chapter 5.3: Complex Vector Spaces
Full solutions for Elementary Linear Algebra: Applications Version, 10th Edition
ISBN: 9780470432051
Chapter 5.3: Complex Vector Spaces includes 42 full step-by-step solutions, created for the textbook Elementary Linear Algebra: Applications Version, 10th edition (ISBN: 9780470432051). Since all 42 problems in this chapter have been answered, more than 13,834 students have viewed full step-by-step solutions from it.

Commuting matrices AB = BA.
If diagonalizable, they share n eigenvectors.

Dimension of vector space
dim(V) = number of vectors in any basis for V.

Distributive Law
A(B + C) = AB + AC. Add then multiply, or multiply then add.

Eigenvalue λ and eigenvector x.
Ax = λx with x ≠ 0, so det(A − λI) = 0.
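As a quick check on this definition, here is a minimal pure-Python sketch (the 2×2 example matrix and the helper name `eig2x2` are mine, not from the textbook): eigenvalues of a 2×2 matrix come from the quadratic det(A − λI) = λ² − (trace)λ + det = 0.

```python
import math

def eig2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] from det(A - lam*I) = 0,
    i.e. lam^2 - (a + d)*lam + (a*d - b*c) = 0 (real roots assumed)."""
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

# A = [[2, 1], [1, 2]] has eigenvalues 3 and 1.
lam1, lam2 = eig2x2(2, 1, 1, 2)

# Eigenvector for lam1 = 3 is x = (1, 1): check that A x = 3 x.
x = (1, 1)
Ax = (2 * x[0] + 1 * x[1], 1 * x[0] + 2 * x[1])
assert Ax == (lam1 * x[0], lam1 * x[1])
```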

Elimination matrix = Elementary matrix E_ij.
The identity matrix with an extra −ℓ_ij in the (i, j) entry (i ≠ j). Then E_ij A subtracts ℓ_ij times row j of A from row i.
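A small illustration of that row operation, assuming 0-based indexing (the function name and example matrix are hypothetical, chosen for this sketch):

```python
def elim_apply(mult, i, j, A):
    """Mimic E_ij A: subtract mult times row j of A from row i."""
    B = [row[:] for row in A]
    B[i] = [B[i][k] - mult * B[j][k] for k in range(len(B[i]))]
    return B

A = [[2, 4], [1, 5]]
# Subtract (1/2) * row 0 from row 1 to zero out the (1, 0) entry.
B = elim_apply(0.5, 1, 0, A)
assert B == [[2, 4], [0.0, 3.0]]
```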

Free variable x_i.
Column i has no pivot in elimination. We can give the n − r free variables any values; then Ax = b determines the r pivot variables (if solvable!).

Fundamental Theorem.
The nullspace N(A) and row space C(A^T) are orthogonal complements in R^n (perpendicular, from Ax = 0), with dimensions n − r and r. Applied to A^T: the column space C(A) is the orthogonal complement of N(A^T) in R^m.
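The orthogonality half of this theorem is easy to verify numerically; a small sketch with a rank-2 matrix of my own choosing (so dim N(A) = 3 − 2 = 1):

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

A = [[1, 2, 1], [0, 1, 1]]   # rank 2, so dim N(A) = 3 - 2 = 1
x = [1, -1, 1]               # solves Ax = 0, so x is in N(A)

# x is perpendicular to every row of A, i.e. to the row space C(A^T).
assert all(dot(row, x) == 0 for row in A)
```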

Inverse matrix A^-1.
Square matrix with A^-1 A = I and A A^-1 = I. No inverse if det A = 0, rank(A) < n, and Ax = 0 for some nonzero vector x. The inverses of AB and A^T are B^-1 A^-1 and (A^-1)^T. Cofactor formula: (A^-1)_ij = C_ji / det A.

Left inverse A^+.
If A has full column rank n, then A^+ = (A^T A)^-1 A^T satisfies A^+ A = I_n.

Linear combination cv + dw or Σ c_j v_j.
Vector addition and scalar multiplication.

Norm ‖A‖.
The ℓ2 norm of A is the maximum ratio ‖Ax‖/‖x‖ = σ_max. Then ‖Ax‖ ≤ ‖A‖ ‖x‖, ‖AB‖ ≤ ‖A‖ ‖B‖, and ‖A + B‖ ≤ ‖A‖ + ‖B‖. Frobenius norm: ‖A‖_F^2 = Σ Σ a_ij^2. The ℓ1 and ℓ∞ norms are the largest column sum and row sum of |a_ij|.
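The ℓ1 and ℓ∞ matrix norms are the easiest to compute directly; a minimal sketch (function names are mine):

```python
def norm_l1(A):
    """Largest absolute column sum of A."""
    return max(sum(abs(A[i][j]) for i in range(len(A)))
               for j in range(len(A[0])))

def norm_linf(A):
    """Largest absolute row sum of A."""
    return max(sum(abs(x) for x in row) for row in A)

A = [[1, -7], [2, 3]]
assert norm_l1(A) == 10    # column sums of |a_ij|: 3 and 10
assert norm_linf(A) == 8   # row sums of |a_ij|: 8 and 5
```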

Nullspace N(A)
= all solutions to Ax = 0. Dimension n − r = (# columns) − rank.

Pivot.
The diagonal entry (first nonzero) at the time when a row is used in elimination.

Projection p = a(a^T b / a^T a) onto the line through a.
P = a a^T / a^T a has rank 1.
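The projection formula in pure Python, with a check that the error b − p is perpendicular to a (the vectors a and b are my own example):

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def project_onto_line(a, b):
    """p = a * (a.b / a.a): projection of b onto the line through a."""
    c = dot(a, b) / dot(a, a)
    return [c * ai for ai in a]

a, b = [1, 1], [3, 1]
p = project_onto_line(a, b)   # a.b = 4, a.a = 2, so p = 2a = [2, 2]
assert p == [2.0, 2.0]

# The error b - p is perpendicular to a.
e = [bi - pi for bi, pi in zip(b, p)]
assert dot(a, e) == 0
```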

Row picture of Ax = b.
Each equation gives a plane in Rn; the planes intersect at x.

Row space C (AT) = all combinations of rows of A.
Column vectors by convention.

Trace of A
= sum of diagonal entries = sum of eigenvalues of A. Tr AB = Tr BA.
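Both claims (trace = sum of eigenvalues, and Tr AB = Tr BA) can be checked on a small example (the matrices below are mine):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

A = [[2, 1], [1, 2]]     # eigenvalues 3 and 1
B = [[0, 1], [1, 0]]

assert trace(A) == 3 + 1                          # sum of eigenvalues
assert trace(matmul(A, B)) == trace(matmul(B, A)) # Tr AB = Tr BA
```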

Unitary matrix U^H = (conjugate of U)^T = U^-1.
Orthonormal columns (complex analog of Q).
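Python's built-in complex numbers are enough to verify U^H U = I for a small unitary matrix (the particular U below is my own example, a complex analog of a 45° rotation):

```python
def conj_transpose(U):
    """U^H: transpose U and conjugate every entry."""
    return [[U[j][i].conjugate() for j in range(len(U))]
            for i in range(len(U[0]))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

s = 2 ** -0.5
U = [[s, s * 1j], [s * 1j, s]]     # columns are orthonormal in C^2

P = matmul(conj_transpose(U), U)   # U^H U should equal the identity
assert abs(P[0][0] - 1) < 1e-12 and abs(P[1][1] - 1) < 1e-12
assert abs(P[0][1]) < 1e-12 and abs(P[1][0]) < 1e-12
```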

Vector space V.
Set of vectors such that all combinations cv + dw remain within V. Eight required rules are given in Section 3.1 for scalars c, d and vectors v, w.

Vector v in R^n.
Sequence of n real numbers v = (v_1, ..., v_n) = point in R^n.