
# Solutions for Chapter 6.3: The Adjoint of a Linear Operator

## Full solutions for Linear Algebra | 4th Edition

ISBN: 9780130084514


Linear Algebra (4th edition) is associated with ISBN 9780130084514. Chapter 6.3: The Adjoint of a Linear Operator includes 24 full step-by-step solutions, and more than 10981 students have viewed solutions from this chapter. This expansive textbook survival guide covers the remaining chapters and their solutions as well.

## Key Math Terms and definitions covered in this textbook
• Adjacency matrix of a graph.

Square matrix with aij = 1 when there is an edge from node i to node j; otherwise aij = 0. A = A^T when edges go both ways (undirected).
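
For instance (a small NumPy sketch, not part of the original guide), the undirected graph on nodes 0, 1, 2 with edges (0,1) and (1,2) has the symmetric adjacency matrix below.

```python
import numpy as np

# Undirected graph on nodes 0, 1, 2 with edges (0,1) and (1,2).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])

# Edges go both ways, so A equals its transpose.
print(np.array_equal(A, A.T))   # True
```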

• Basis for V.

Independent vectors v1, ..., vd whose linear combinations give each vector in V as v = c1v1 + ... + cdvd. V has many bases; each basis gives unique c's. A vector space has many bases!

• Block matrix.

A matrix can be partitioned into matrix blocks, by cuts between rows and/or between columns. Block multiplication of AB is allowed if the block shapes permit.

• Circulant matrix C.

Constant diagonals wrap around as in the cyclic shift S. Every circulant is c0 I + c1 S + ... + c_{n-1} S^{n-1}. Cx = convolution c * x. Eigenvectors are in the Fourier matrix F.
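
A quick numerical check (my own NumPy sketch with an arbitrary c and x, not from the text): build C from powers of the cyclic shift S, confirm that Cx is the circular convolution c * x, and confirm that the Fourier columns are eigenvectors with eigenvalues given by the DFT of c.

```python
import numpy as np

n = 4
c = np.array([2.0, 5.0, 1.0, 3.0])           # first column of the circulant
S = np.roll(np.eye(n), 1, axis=0)            # cyclic shift: S sends e_k to e_{k+1 (mod n)}

# C = c0 I + c1 S + ... + c_{n-1} S^{n-1}
C = sum(c[k] * np.linalg.matrix_power(S, k) for k in range(n))

# Cx equals the circular convolution c * x (computed here with the FFT).
x = np.array([1.0, 0.0, 2.0, -1.0])
conv = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))
print(np.allclose(C @ x, conv))              # True

# Columns of the Fourier matrix are eigenvectors; eigenvalues are the DFT of c.
lam = np.fft.fft(c)
idx = np.arange(n)
for k in range(n):
    f_k = np.exp(2j * np.pi * k * idx / n)      # k-th Fourier column
    print(np.allclose(C @ f_k, lam[k] * f_k))   # True for each k
```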

• Cross product u xv in R3:

Vector perpendicular to u and v, with length ||u|| ||v|| |sin θ| = area of parallelogram, u × v = "determinant" of [i j k; u1 u2 u3; v1 v2 v3].
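
A worked example (an illustrative NumPy sketch; the vectors are chosen arbitrarily):

```python
import numpy as np

u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 3.0])

w = np.cross(u, v)                 # the "determinant" expansion of [i j k; u1 u2 u3; v1 v2 v3]
print(w)                           # [ 6. -3.  1.]

# w is perpendicular to both u and v.
print(np.dot(w, u), np.dot(w, v))  # 0.0 0.0

# Its length ||u|| ||v|| |sin θ| equals the area of the parallelogram on u and v.
cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
area = np.linalg.norm(u) * np.linalg.norm(v) * np.sqrt(1 - cos_theta**2)
print(np.isclose(np.linalg.norm(w), area))   # True
```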

• Diagonalizable matrix A.

Must have n independent eigenvectors (in the columns of S; automatic with n different eigenvalues). Then S^{-1} A S = Λ = eigenvalue matrix.
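
A small numerical illustration (NumPy sketch; the matrix is chosen here, not taken from the text): the matrix below has two different eigenvalues, so S built from its eigenvectors gives S^{-1} A S = Λ.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Two different eigenvalues (5 and 2), so the eigenvectors in the columns of S are independent.
eigvals, S = np.linalg.eig(A)
Lambda = np.diag(eigvals)

# S^{-1} A S recovers the eigenvalue matrix Λ.
print(np.allclose(np.linalg.inv(S) @ A @ S, Lambda))   # True
```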

• Gauss-Jordan method.

Invert A by row operations on [A I] to reach [I A^{-1}].
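
A minimal sketch of the method (my own NumPy code with partial pivoting, not the textbook's presentation): row-reduce the block matrix [A I] until the left block becomes I; the right block is then A^{-1}.

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Row-reduce [A I] to [I A^{-1}] by Gauss-Jordan elimination with partial pivoting."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])    # the block matrix [A I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))
        M[[col, pivot]] = M[[pivot, col]]          # swap up the largest pivot
        M[col] /= M[col, col]                      # scale the pivot row so the pivot is 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]     # clear the rest of the column
    return M[:, n:]                                # right block is now A^{-1}

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
print(gauss_jordan_inverse(A))                              # [[ 3. -1.] [-5.  2.]]
print(np.allclose(gauss_jordan_inverse(A) @ A, np.eye(2)))  # True
```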

• Inverse matrix A-I.

Square matrix with A^{-1} A = I and A A^{-1} = I. No inverse if det A = 0 and rank(A) < n and Ax = 0 for a nonzero vector x. The inverses of AB and A^T are B^{-1} A^{-1} and (A^{-1})^T. Cofactor formula: (A^{-1})ij = Cji / det A.

• Left inverse A+.

If A has full column rank n, then A^+ = (A^T A)^{-1} A^T has A^+ A = I_n.
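
A quick NumPy check (the tall matrix here is illustrative):

```python
import numpy as np

# A tall matrix with full column rank n = 2.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])

# Left inverse A+ = (A^T A)^{-1} A^T.
A_plus = np.linalg.inv(A.T @ A) @ A.T
print(np.allclose(A_plus @ A, np.eye(2)))   # A+ A = I_2  ->  True

# Note: A A+ is not I_3; it is the projection onto the column space of A.
```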

• Linear transformation T.

Each vector v in the input space transforms to T(v) in the output space, and linearity requires T(cv + dw) = cT(v) + dT(w). Examples: matrix multiplication Av, differentiation and integration in function space.

• Linearly dependent VI, ... , Vn.

A combination other than all ci = 0 gives Σ ci vi = 0.

• Lucas numbers

Ln = 2, 1, 3, 4, ... satisfy Ln = L_{n-1} + L_{n-2} = λ1^n + λ2^n, with λ1, λ2 = (1 ± √5)/2 from the Fibonacci matrix [1 1; 1 0]. Compare L0 = 2 with F0 = 0.
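
A short NumPy check of both formulas (my own sketch):

```python
import numpy as np

# Lucas recurrence: L0 = 2, L1 = 1, Ln = L_{n-1} + L_{n-2}.
L = [2, 1]
for _ in range(10):
    L.append(L[-1] + L[-2])
print(L)   # [2, 1, 3, 4, 7, 11, 18, 29, 47, 76, 123, 199]

# Closed form: Ln = λ1^n + λ2^n with λ1, λ2 = (1 ± √5)/2,
# the eigenvalues of the Fibonacci matrix [1 1; 1 0].
lam1, lam2 = (1 + np.sqrt(5)) / 2, (1 - np.sqrt(5)) / 2
closed = [round(lam1**n + lam2**n) for n in range(12)]
print(closed == L)   # True
```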

• Normal equation A^T A x̂ = A^T b.

Gives the least squares solution to Ax = b if A has full rank n (independent columns). The equation says that (columns of A) · (b - Ax̂) = 0.
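
A worked example (the matrix and right side below are illustrative, not taken from this guide):

```python
import numpy as np

# Least squares: Ax = b has no exact solution because b is not in the column space of A.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

# Solve the normal equation A^T A x̂ = A^T b.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
print(x_hat)                                   # [ 5. -3.]

# The residual b - A x̂ is perpendicular to every column of A.
print(np.allclose(A.T @ (b - A @ x_hat), 0))   # True
```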

• Particular solution x p.

Any solution to Ax = b; often xp has free variables = 0.

• Polar decomposition A = Q H.

Orthogonal Q times positive (semi)definite H.
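
One common way to compute Q and H is through the SVD; this NumPy sketch (my own, not a method stated in the glossary) takes Q = U V^T and H = V Σ V^T from A = U Σ V^T.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

U, sigma, Vt = np.linalg.svd(A)
Q = U @ Vt                            # orthogonal factor
H = Vt.T @ np.diag(sigma) @ Vt        # positive (semi)definite factor

print(np.allclose(Q @ H, A))                    # A = Q H
print(np.allclose(Q.T @ Q, np.eye(2)))          # Q is orthogonal
print(np.allclose(H, H.T), np.all(np.linalg.eigvalsh(H) >= 0))   # H is symmetric PSD
```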

• Rayleigh quotient q(x) = x^T A x / x^T x for symmetric A: λ_min ≤ q(x) ≤ λ_max.

Those extremes are reached at the eigenvectors x for λ_min(A) and λ_max(A).
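
A numerical illustration (NumPy sketch with an example symmetric matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # symmetric, eigenvalues 1 and 3

def q(x):
    return (x @ A @ x) / (x @ x)      # Rayleigh quotient x^T A x / x^T x

eigvals, eigvecs = np.linalg.eigh(A)  # eigenvalues in ascending order
lam_min, lam_max = eigvals[0], eigvals[-1]

# q(x) stays between λ_min and λ_max for random nonzero x ...
rng = np.random.default_rng(0)
samples = [q(rng.standard_normal(2)) for _ in range(1000)]
print(lam_min, min(samples), max(samples), lam_max)   # samples stay inside [1.0, 3.0]

# ... and the extremes are reached at the corresponding eigenvectors.
print(np.isclose(q(eigvecs[:, 0]), lam_min), np.isclose(q(eigvecs[:, -1]), lam_max))   # True True
```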

• Special solutions to As = O.

One free variable is si = 1, other free variables = 0.

• Symmetric matrix A.

The transpose is A^T = A, and aij = aji. A^{-1} is also symmetric.

• Trace of A

= sum of diagonal entries = sum of eigenvalues of A. Tr AB = Tr BA.
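
A quick NumPy check (example matrices chosen here):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
B = np.array([[0.0, 2.0],
              [1.0, 5.0]])

# Trace = sum of diagonal entries = sum of eigenvalues.
print(np.trace(A))                                            # 7.0
print(np.isclose(np.trace(A), np.sum(np.linalg.eigvals(A))))  # True

# Tr AB = Tr BA even though AB != BA in general.
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))           # True
```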

• Vector space V.

Set of vectors such that all combinations cv + dw remain within V. Eight required rules are given in Section 3.1 for scalars c, d and vectors v, w.
