
# Solutions for Chapter 8.5: Linear Algebra and Its Applications 5th Edition

## Full solutions for Linear Algebra and Its Applications | 5th Edition

ISBN: 9780321982384

Chapter 8.5 includes 22 full step-by-step solutions. Since all 22 problems in chapter 8.5 have been answered, more than 46,727 students have viewed full step-by-step solutions from this chapter. This expansive textbook survival guide was created for Linear Algebra and Its Applications, 5th edition (ISBN: 9780321982384), and covers every chapter and its solutions.

## Key Math Terms and Definitions Covered in This Textbook
• Adjacency matrix of a graph.

Square matrix with a_ij = 1 when there is an edge from node i to node j; otherwise a_ij = 0. A = A^T when edges go both ways (undirected).
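
The definition above can be sketched in a few lines; the graph and its edges are invented for illustration.

```python
# Build the adjacency matrix of a small directed graph:
# a_ij = 1 when there is an edge from node i to node j, else 0.
edges = [(0, 1), (1, 2), (2, 0), (0, 2)]
n = 3
A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = 1

# The graph is undirected exactly when A equals its transpose.
AT = [[A[j][i] for j in range(n)] for i in range(n)]
print(A == AT)  # False: edge (0, 1) has no reverse edge (1, 0)
```

Adding the reverse of every edge would make `A == AT` true, matching the undirected case in the definition.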

• Augmented matrix [A b].

Ax = b is solvable when b is in the column space of A; then [A b] has the same rank as A. Elimination on [A b] keeps equations correct.
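
The rank test above can be checked directly with a small hand-rolled row reduction (the matrices are invented for illustration; exact `Fraction` arithmetic avoids round-off):

```python
from fractions import Fraction

def rank(M):
    """Row-reduce a copy of M (list of rows) and count pivots."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2], [2, 4]]   # rank 1: second row is twice the first
b_good = [3, 6]        # in the column space of A
b_bad = [3, 7]         # not in the column space of A
aug = lambda A, b: [row + [bi] for row, bi in zip(A, b)]
print(rank(A), rank(aug(A, b_good)), rank(aug(A, b_bad)))  # 1 1 2
```

`Ax = b_good` is solvable because `[A b]` keeps the same rank as `A`; appending `b_bad` raises the rank, so that system has no solution.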

• Companion matrix.

Put c_1, ..., c_n in row n and put n − 1 ones just above the main diagonal. Then det(A − λI) = ±(c_1 + c_2 λ + c_3 λ^2 + ... + c_n λ^(n−1) − λ^n).
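
A minimal 2×2 sketch, with coefficients chosen for illustration: taking c_1 = c_2 = 1 gives characteristic polynomial λ^2 − λ − 1, whose positive root is the golden ratio, so (1, λ) should be an eigenvector.

```python
import math

# Companion matrix for n = 2: c1, c2 in the last row, one 1 above the diagonal,
# so det(A - lam*I) = -(c1 + c2*lam - lam**2).
c1, c2 = 1, 1
A = [[0, 1],
     [c1, c2]]

lam = (1 + math.sqrt(5)) / 2       # root of lam^2 - lam - 1 = 0
v = [1, lam]                       # candidate eigenvector (1, lam)
Av = [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]
print(Av, [lam * x for x in v])    # the two vectors should agree
```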

• Complex conjugate

z̄ = a − ib for any complex number z = a + ib. Then z z̄ = |z|^2.
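
Python's built-in complex type makes the identity easy to check (the number 3 + 4i is chosen for illustration):

```python
z = 3 + 4j
zbar = z.conjugate()                 # a - ib
print(zbar, z * zbar, abs(z) ** 2)   # (3-4j) (25+0j) 25.0
```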

• Cramer's Rule for Ax = b.

B_j has b replacing column j of A; x_j = det(B_j) / det(A).
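
Cramer's Rule for a 2×2 system can be written out directly (the system is invented for illustration):

```python
def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[2, 1],
     [1, 3]]
b = [5, 10]

x = []
for j in range(2):
    Bj = [row[:] for row in A]
    for i in range(2):
        Bj[i][j] = b[i]            # B_j: replace column j of A by b
    x.append(det2(Bj) / det2(A))   # x_j = det(B_j) / det(A)
print(x)                           # [1.0, 3.0] solves Ax = b
```

Substituting back: 2(1) + 1(3) = 5 and 1(1) + 3(3) = 10, so the rule recovers the solution.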

• Fibonacci numbers

0, 1, 1, 2, 3, 5, ... satisfy F_n = F_(n−1) + F_(n−2) = (λ_1^n − λ_2^n) / (λ_1 − λ_2). Growth rate λ_1 = (1 + √5)/2 is the largest eigenvalue of the Fibonacci matrix [[1, 1], [1, 0]].
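
Both descriptions above can be checked against each other: powers of the Fibonacci matrix contain F_n, and the eigenvalue formula (Binet's form) gives the same number.

```python
import math

def matmul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

F = [[1, 1], [1, 0]]
P = [[1, 0], [0, 1]]
for _ in range(10):
    P = matmul(P, F)               # P = F^10, and P[0][1] = F_10

lam1 = (1 + math.sqrt(5)) / 2
lam2 = (1 - math.sqrt(5)) / 2
binet = (lam1 ** 10 - lam2 ** 10) / (lam1 - lam2)
print(P[0][1], round(binet))       # both give F_10 = 55
```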

• Hilbert matrix hilb(n).

Entries H_ij = 1/(i + j − 1) = ∫_0^1 x^(i−1) x^(j−1) dx. Positive definite but extremely small λ_min and large condition number: H is ill-conditioned.

• Incidence matrix of a directed graph.

The m by n edge-node incidence matrix has a row for each edge (node i to node j), with entries −1 and 1 in columns i and j.
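
A small sketch of the construction (the directed graph is invented for illustration); note that every row sums to zero, so the all-ones vector lies in the null space.

```python
# Edge-node incidence matrix: one row per edge, -1 in column i, +1 in column j.
edges = [(0, 1), (1, 2), (0, 2)]   # directed edges (node i -> node j)
n = 3
A = []
for i, j in edges:
    row = [0] * n
    row[i], row[j] = -1, 1
    A.append(row)
print(A)
print([sum(row) for row in A])     # [0, 0, 0]
```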

• Krylov subspace Kj(A, b).

The subspace spanned by b, Ab, ..., A^(j−1) b. Numerical methods approximate A^(−1) b by x_j with residual b − Ax_j in this subspace. A good basis for K_j requires only multiplication by A at each step.
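
As the definition notes, only matrix-vector products are needed to generate the (unorthogonalized) Krylov vectors; a minimal sketch with an invented 3×3 matrix:

```python
def matvec(A, x):
    """Matrix-vector product for A stored as a list of rows."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[2, 1, 0],
     [1, 2, 1],
     [0, 1, 2]]
b = [1, 0, 0]

j = 3
basis = [b]
for _ in range(j - 1):
    basis.append(matvec(A, basis[-1]))   # b, Ab, A^2 b
print(basis)
```

Practical Krylov methods (Arnoldi, Lanczos) would orthogonalize these vectors as they are generated, since the raw powers quickly become nearly parallel.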

• Length ||x||.

Square root of x T x (Pythagoras in n dimensions).

• Markov matrix M.

All m_ij ≥ 0 and each column sum is 1. Largest eigenvalue λ = 1. If m_ij > 0, the columns of M^k approach the steady state eigenvector s with Ms = s > 0.
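
A quick sketch of the convergence claim, with a 2×2 Markov matrix invented for illustration (its steady state works out to s = (2/3, 1/3)):

```python
# Repeatedly apply M to the first column of the identity;
# the result is the first column of M^k, which approaches s with Ms = s.
M = [[0.9, 0.2],
     [0.1, 0.8]]

col = [1.0, 0.0]
for _ in range(100):
    col = [M[0][0] * col[0] + M[0][1] * col[1],
           M[1][0] * col[0] + M[1][1] * col[1]]

s = [2 / 3, 1 / 3]                 # steady state: Ms = s, entries sum to 1
print(col, s)
```

The second eigenvalue here is 0.7, so the error shrinks by that factor each step; after 100 steps the column agrees with s to machine precision.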

• Norm ||A||.

The "ℓ^2 norm" of A is the maximum ratio ||Ax|| / ||x|| = σ_max. Then ||Ax|| ≤ ||A|| ||x||, ||AB|| ≤ ||A|| ||B||, and ||A + B|| ≤ ||A|| + ||B||. The Frobenius norm has ||A||_F^2 = Σ Σ a_ij^2. The ℓ^1 and ℓ^∞ norms are the largest column and row sums of |a_ij|.
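
The ℓ^1, ℓ^∞, and Frobenius norms are simple sums and are easy to compute directly (the matrix is invented for illustration; σ_max would need an SVD and is omitted here):

```python
import math

A = [[1, -2],
     [3, 4]]

l1 = max(sum(abs(A[i][j]) for i in range(2)) for j in range(2))  # largest column sum
linf = max(sum(abs(a) for a in row) for row in A)                # largest row sum
fro = math.sqrt(sum(a * a for row in A for a in row))            # Frobenius norm
print(l1, linf, fro)   # 6 7 sqrt(30)
```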

• Orthogonal subspaces.

Every v in V is orthogonal to every w in W.

• Plane (or hyperplane) in R^n.

Vectors x with a^T x = 0. The plane is perpendicular to a ≠ 0.

• Rank one matrix A = uv^T ≠ 0.

Column and row spaces = lines cu and cv.

• Reflection matrix (Householder) Q = I − 2uu^T.

Unit vector u is reflected to Qu = −u. All x in the mirror plane u^T x = 0 have Qx = x. Notice Q^T = Q^(−1) = Q.
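
Both properties can be verified numerically in 2D (the unit vector u is chosen for illustration):

```python
import math

# Householder reflection Q = I - 2 u u^T for a unit vector u.
u = [1 / math.sqrt(2), 1 / math.sqrt(2)]
Q = [[(1 if i == j else 0) - 2 * u[i] * u[j] for j in range(2)]
     for i in range(2)]

def matvec(Q, x):
    return [sum(q * xi for q, xi in zip(row, x)) for row in Q]

Qu = matvec(Q, u)                           # should be -u
x = [1 / math.sqrt(2), -1 / math.sqrt(2)]   # in the mirror plane: u^T x = 0
Qx = matvec(Q, x)                           # should be x
print(Qu, Qx)
```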

• Row space C (AT) = all combinations of rows of A.

Column vectors by convention.

• Simplex method for linear programming.

The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!

• Tridiagonal matrix T: t_ij = 0 if |i − j| > 1.

T^(−1) has rank 1 above and below the diagonal.

• Wavelets Wjk(t).

Stretch and shift the time axis to create w_jk(t) = w_00(2^j t − k).
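
The stretch-and-shift construction can be sketched with the Haar mother wavelet standing in for w_00 (an assumed choice; the definition itself does not fix a particular w_00):

```python
# Haar mother wavelet, used here as an assumed w00:
# +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere.
def w00(t):
    if 0 <= t < 0.5:
        return 1
    if 0.5 <= t < 1:
        return -1
    return 0

def wjk(j, k):
    """Stretch and shift: wjk(t) = w00(2^j * t - k)."""
    return lambda t: w00(2 ** j * t - k)

w11 = wjk(1, 1)                    # supported on [1/2, 1)
print([w11(t) for t in (0.25, 0.55, 0.8)])   # [0, 1, -1]
```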
