# Solutions for Chapter 2.3: The Inverse of a Matrix

## Full solutions for Elementary Linear Algebra | 8th Edition

ISBN: 9781305658004


Chapter 2.3: The Inverse of a Matrix includes 166 full step-by-step solutions, and more than 44,270 students have viewed them. This textbook survival guide was created for Elementary Linear Algebra, 8th edition (ISBN 9781305658004).

## Key Math Terms and Definitions Covered in This Textbook
• Associative Law (AB)C = A(BC).

Parentheses can be removed to leave ABC.

• Basis for V.

Independent vectors v1, ..., vd whose linear combinations give each vector in V as v = c1v1 + ... + cdvd. V has many bases; each basis gives unique c's.

• Column picture of Ax = b.

The vector b becomes a combination of the columns of A. The system is solvable only when b is in the column space C(A).

• Complete solution x = xp + xn to Ax = b.

(Particular solution xp) + (any xn in the nullspace).

• Condition number

cond(A) = c(A) = ‖A‖‖A⁻¹‖ = σmax/σmin. In Ax = b, the relative change ‖δx‖/‖x‖ is less than cond(A) times the relative change ‖δb‖/‖b‖. Condition numbers measure the sensitivity of the output to changes in the input.
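As a rough numeric sketch, a condition number can be computed with the easy-to-evaluate infinity norm (maximum absolute row sum) in place of the 2-norm; the helper names `inv2` and `norm_inf` are made up for this example:

```python
from fractions import Fraction

def inv2(A):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

def norm_inf(A):
    """Infinity norm: maximum absolute row sum."""
    return max(sum(abs(x) for x in row) for row in A)

A = [[Fraction(1), Fraction(2)],
     [Fraction(3), Fraction(4)]]
cond = norm_inf(A) * norm_inf(inv2(A))   # condition number in the infinity norm
print(cond)                              # 21
```

A well-conditioned matrix has cond(A) near 1; here cond(A) = 21, so a 1% relative error in b could become roughly a 21% relative error in x.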

• Elimination matrix = Elementary matrix Eij.

The identity matrix with an extra −eij in the (i, j) entry (i ≠ j). Then EijA subtracts eij times row j of A from row i.
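The action of an elimination matrix can be sketched in a few lines of Python; the helper names `elimination_matrix` and `matmul` are illustrative:

```python
def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def elimination_matrix(n, i, j, e):
    """Identity matrix with an extra -e in entry (i, j), 0-based, i != j."""
    E = [[1 if r == c else 0 for c in range(n)] for r in range(n)]
    E[i][j] = -e
    return E

A = [[2, 1],
     [4, 5]]
E = elimination_matrix(2, 1, 0, 2)   # subtracts 2 * (row 0) from row 1
print(matmul(E, A))                  # [[2, 1], [0, 3]]
```

Multiplying by E zeroes the entry below the first pivot, which is exactly one step of Gaussian elimination.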

• Ellipse (or ellipsoid) x T Ax = 1.

A must be positive definite; the axes of the ellipse are eigenvectors of A, with lengths 1/√λ. (For ‖x‖ = 1 the vectors y = Ax lie on the ellipse ‖A⁻¹y‖² = yᵀ(AAᵀ)⁻¹y = 1 displayed by eigshow; axis lengths σᵢ.)

• Four Fundamental Subspaces C(A), N(A), C(Aᵀ), N(Aᵀ).

Use the conjugate transpose Aᴴ for complex A.

• Fourier matrix F.

Entries Fjk = e^(2πijk/n) give orthogonal columns: FᴴF = nI. Then y = Fc is the (inverse) Discrete Fourier Transform yj = Σ ck e^(2πijk/n).
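A small sketch, using only the standard-library `cmath` module, that builds the n = 4 Fourier matrix and checks the column orthogonality FᴴF = nI numerically:

```python
import cmath

n = 4
# F[j][k] = e^(2*pi*i*j*k/n)
F = [[cmath.exp(2j * cmath.pi * j * k / n) for k in range(n)] for j in range(n)]

# Entry (j, k) of F^H F: conjugate column j dotted with column k
FH_F = [[sum(F[m][j].conjugate() * F[m][k] for m in range(n)) for k in range(n)]
        for j in range(n)]

for j in range(n):
    for k in range(n):
        expected = n if j == k else 0
        assert abs(FH_F[j][k] - expected) < 1e-9   # n on the diagonal, 0 off it
print("columns are orthogonal: F^H F = nI for n =", n)
```

Because FᴴF = nI, the inverse transform is simply Fᴴ/n, which is why the DFT and its inverse cost the same.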

• Full column rank r = n.

Independent columns, N(A) = {0}, no free variables.

• Hankel matrix H.

Constant along each antidiagonal; hij depends on i + j.

• Hypercube matrix pl.

Row n + 1 counts corners, edges, faces, ... of a cube in Rⁿ.

• Multiplication Ax

= x1(column 1) + ... + xn(column n) = combination of columns.
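The "combination of columns" view translates directly into code; this sketch (the helper name `matvec_by_columns` is made up) accumulates xj times column j instead of taking row-by-row dot products:

```python
def matvec_by_columns(A, x):
    """Ax computed as x1*(column 1) + ... + xn*(column n)."""
    m, n = len(A), len(A[0])
    result = [0] * m
    for j in range(n):           # one column at a time
        for i in range(m):
            result[i] += x[j] * A[i][j]
    return result

A = [[1, 2],
     [3, 4],
     [5, 6]]
x = [10, 1]
print(matvec_by_columns(A, x))   # 10*[1,3,5] + 1*[2,4,6] = [12, 34, 56]
```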

• Multiplicities AM and G M.

The algebraic multiplicity AM of λ is the number of times λ appears as a root of det(A − λI) = 0. The geometric multiplicity GM is the number of independent eigenvectors for λ (= dimension of the eigenspace).

• Nilpotent matrix N.

Some power of N is the zero matrix, Nᵏ = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.
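A minimal check of the triangular-with-zero-diagonal example: a strictly upper triangular 3×3 matrix cubes to zero.

```python
def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# Strictly upper triangular (zero diagonal) => nilpotent
N = [[0, 1, 2],
     [0, 0, 3],
     [0, 0, 0]]

N2 = matmul(N, N)
N3 = matmul(N2, N)
print(N2)   # [[0, 0, 3], [0, 0, 0], [0, 0, 0]] -- not yet zero
print(N3)   # [[0, 0, 0], [0, 0, 0], [0, 0, 0]] -- N^3 = 0
```

Each multiplication pushes the nonzero entries one diagonal further up, so an n×n strictly triangular matrix satisfies Nⁿ = 0.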

• Pascal matrix

Ps = pascal(n) = the symmetric matrix with binomial entries C(i + j − 2, i − 1). Ps = PL PU; all contain Pascal's triangle, with det = 1 (see Pascal in the index).
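A short sketch using Python's `math.comb` (with 0-based indices, so entry (i, j) is C(i + j, i)); it verifies the symmetric/lower/upper factorization for n = 4. The helper names are made up for this example:

```python
from math import comb

def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def pascal_matrices(n):
    """Symmetric, lower, and upper Pascal matrices (0-based indexing)."""
    PS = [[comb(i + j, i) for j in range(n)] for i in range(n)]  # symmetric
    PL = [[comb(i, j) for j in range(n)] for i in range(n)]      # lower triangular
    PU = [[comb(j, i) for j in range(n)] for i in range(n)]      # upper triangular
    return PS, PL, PU

PS, PL, PU = pascal_matrices(4)
print(PS == matmul(PL, PU))   # True: the factorization PS = PL * PU holds
```

PL and PU are triangular with unit diagonals, so det(PL) = det(PU) = 1, which gives det(PS) = 1 for free.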

• Right inverse A+.

If A has full row rank m, then A⁺ = Aᵀ(AAᵀ)⁻¹ has AA⁺ = Im.
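A small worked example, assuming a 2×3 matrix with full row rank; `inv2` applies the 2×2 adjugate formula and the other helper names are illustrative:

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

# A is 2x3 with full row rank m = 2
A = [[Fraction(1), Fraction(0), Fraction(1)],
     [Fraction(0), Fraction(1), Fraction(1)]]

AT = transpose(A)
A_plus = matmul(AT, inv2(matmul(A, AT)))   # A+ = A^T (A A^T)^(-1)
print(matmul(A, A_plus))                   # the 2x2 identity, as Fractions
```

A⁺ is a right inverse only: A⁺A is a 3×3 projection, not the identity, since a wide matrix cannot have a two-sided inverse.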

• Similar matrices A and B.

Every B = M⁻¹AM has the same eigenvalues as A.
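For 2×2 matrices the characteristic polynomial is determined by the trace and determinant, so a quick similarity check can compare those. A sketch with exact `Fraction` arithmetic (the helper names are made up):

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

A = [[Fraction(2), Fraction(1)],
     [Fraction(0), Fraction(3)]]
M = [[Fraction(1), Fraction(1)],
     [Fraction(1), Fraction(2)]]

B = matmul(matmul(inv2(M), A), M)          # B = M^(-1) A M

trace = lambda X: X[0][0] + X[1][1]
det = lambda X: X[0][0] * X[1][1] - X[0][1] * X[1][0]
# Same trace and determinant => same characteristic polynomial => same eigenvalues
print(trace(A) == trace(B), det(A) == det(B))   # True True
```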

• Singular matrix A.

A square matrix that has no inverse: det(A) = 0.
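Tying this back to the chapter topic, here is a minimal 2×2 inverse routine that returns `None` when det(A) = 0; the helper name `inverse_2x2` is illustrative:

```python
from fractions import Fraction

def inverse_2x2(A):
    """Return A^(-1) for a 2x2 matrix, or None when A is singular."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        return None          # singular: no inverse exists
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

singular = [[Fraction(1), Fraction(2)],
            [Fraction(2), Fraction(4)]]   # second row = 2 * first row
print(inverse_2x2(singular))              # None

regular = [[Fraction(1), Fraction(2)],
           [Fraction(3), Fraction(4)]]
print(inverse_2x2(regular))               # the inverse, as exact Fractions
```

The singular example fails because its rows are dependent, which forces det(A) = 1·4 − 2·2 = 0.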

• Vector v in Rn.

Sequence of n real numbers v = (v1, ..., vn) = point in Rⁿ.
