 1.3.1: If A = 3 1 4 2 0 1 1 2 2 and B = 1 0 2 3 1 1 2 4 1 compute (a) 2A (...
 1.3.2: For each of the pairs of matrices that follow, determine whether it...
 1.3.3: For which of the pairs in Exercise 2 is it possible to multiply the...
 1.3.4: Write each of the following systems of equations as a matrix equati...
 1.3.5: If A = 3 4 1 1 2 7 verify that (a) 5A = 3A + 2A (b) 6A = 3(2A) (c) ...
 1.3.6: If A = 4 1 6 2 3 5 and B = 1 3 0 2 2 4 verify that (a) A + B = B + ...
 1.3.7: If A = 2 1 6 3 2 4 and B = 2 4 1 6 verify that (a) 3(AB) = (3A)B = ...
 1.3.8: If A = 2 4 1 3 , B = 2 1 0 4 ,C = 3 1 2 1 verify that (a) (A + B) +...
 1.3.9: Let A = 1 2 1 2 , b = 4 0 , c = 3 2 (a) Write b as a linear combina...
 1.3.10: For each of the choices of A and b that follow, determine whether t...
1.3.11: Let A be a 5 × 3 matrix. If b = a1 + a2 = a2 + a3, then what can you c...
1.3.12: Let A be a 3 × 4 matrix. If b = a1 + a2 + a3 + a4, then what can you c...
1.3.13: Let Ax = b be a linear system whose augmented matrix (A | b) has redu...
1.3.14: Let A be an m × n matrix. Explain why the matrix multiplications A^T A ...
1.3.15: A matrix A is said to be skew symmetric if A^T = −A. Show that if a m...
 1.3.16: In Application 2, suppose that we are searching the database of sev...
1.3.17: Let A be a 2 × 2 matrix with a11 ≠ 0 and let α = a21/a11. Show that A ...
Solutions for Chapter 1.3: Matrix Arithmetic
Full solutions for Linear Algebra with Applications, 8th Edition
ISBN: 9780136009290
Chapter 1.3: Matrix Arithmetic includes 17 full step-by-step solutions.

Adjacency matrix of a graph.
Square matrix with aij = 1 when there is an edge from node i to node j; otherwise aij = 0. A = A^T when edges go both ways (undirected).
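A quick numerical illustration of this definition (the 3-node graph here is my own example, not from the text):

```python
import numpy as np

# Directed graph on 3 nodes with edges 0->1, 1->2, 2->0 (hypothetical example).
A = np.zeros((3, 3), dtype=int)
for i, j in [(0, 1), (1, 2), (2, 0)]:
    A[i, j] = 1

# A one-way cycle is not symmetric: A != A^T.
directed_symmetric = np.array_equal(A, A.T)

# Adding every reverse edge makes the graph undirected, so A = A^T.
U = A | A.T
undirected_symmetric = np.array_equal(U, U.T)
```

Here `directed_symmetric` is False and `undirected_symmetric` is True, matching the definition.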

Cayley–Hamilton Theorem.
p(λ) = det(A − λI) has p(A) = zero matrix.

Complex conjugate
z̄ = a − ib for any complex number z = a + ib. Then zz̄ = |z|^2.

Fast Fourier Transform (FFT).
A factorization of the Fourier matrix Fn into ℓ = log2 n matrices Si times a permutation. Each Si needs only n/2 multiplications, so Fn x and Fn^{−1} c can be computed with nℓ/2 multiplications. Revolutionary.
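A sketch of the key fact behind the entry: the FFT computes the same product Fn x as the dense Fourier matrix, just faster. This uses NumPy's sign convention for the DFT matrix (w = e^{−2πi/n}); the text's Fn may use the conjugate.

```python
import numpy as np

n = 8
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
# Dense DFT matrix in NumPy's convention: F[j, k] = e^{-2*pi*i*j*k/n}.
F = np.exp(-2j * np.pi * j * k / n)

x = np.random.default_rng(0).standard_normal(n)
# np.fft.fft computes F @ x in O(n log n) instead of the O(n^2) matrix multiply.
match = np.allclose(F @ x, np.fft.fft(x))
```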

Full column rank r = n.
Independent columns, N(A) = {0}, no free variables.

Gram–Schmidt orthogonalization A = QR.
Independent columns in A, orthonormal columns in Q. Each column qj of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.
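A minimal check of these properties with NumPy (the matrix A is an arbitrary example; note `np.linalg.qr` does not itself guarantee the sign convention, so the signs are flipped by hand):

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])  # independent columns
Q, R = np.linalg.qr(A)

# Enforce the convention diag(R) > 0 by flipping matched column/row signs.
signs = np.sign(np.diag(R))
Q, R = Q * signs, signs[:, None] * R

orthonormal = np.allclose(Q.T @ Q, np.eye(2))   # orthonormal columns of Q
upper = np.allclose(np.triu(R), R)              # R upper triangular
factors = np.allclose(Q @ R, A)                 # A = QR
positive_diag = bool(np.all(np.diag(R) > 0))
```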

Linear transformation T.
Each vector v in the input space transforms to T(v) in the output space, and linearity requires T(cv + dw) = cT(v) + dT(w). Examples: matrix multiplication Av, differentiation and integration in function space.
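The linearity requirement is easy to verify numerically for the matrix-multiplication example (the matrix and vectors below are arbitrary test values):

```python
import numpy as np

A = np.array([[2.0, 0.0], [1.0, 3.0]])
T = lambda v: A @ v                      # matrix multiplication as a linear map

v, w = np.array([1.0, 2.0]), np.array([3.0, -1.0])
c, d = 2.0, -0.5

# Linearity: T(cv + dw) = c T(v) + d T(w)
linear = np.allclose(T(c * v + d * w), c * T(v) + d * T(w))
```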

Lucas numbers
Ln = 2, 1, 3, 4, ... satisfy Ln = Ln−1 + Ln−2 = λ1^n + λ2^n, with λ1, λ2 = (1 ± √5)/2 from the Fibonacci matrix [1 1; 1 0]. Compare L0 = 2 with F0 = 0.
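A short check that the recurrence and the eigenvalue formula agree:

```python
import numpy as np

# Recurrence: L_0 = 2, L_1 = 1, L_n = L_{n-1} + L_{n-2}
L = [2, 1]
for _ in range(18):
    L.append(L[-1] + L[-2])

# Closed form from the Fibonacci-matrix eigenvalues: L_n = lam1**n + lam2**n
lam1, lam2 = (1 + np.sqrt(5)) / 2, (1 - np.sqrt(5)) / 2
closed = [round(lam1**n + lam2**n) for n in range(20)]
```

Both lists start 2, 1, 3, 4, 7, 11, ... and agree term by term.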

Norm ‖A‖.
The "ℓ2 norm" of A is the maximum ratio ‖Ax‖/‖x‖ = σmax. Then ‖Ax‖ ≤ ‖A‖‖x‖, ‖AB‖ ≤ ‖A‖‖B‖, and ‖A + B‖ ≤ ‖A‖ + ‖B‖. Frobenius norm: ‖A‖F^2 = ΣΣ aij^2. The ℓ1 and ℓ∞ norms are the largest column and row sums of |aij|.
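These identities and inequalities can all be spot-checked with `np.linalg.norm` (random matrices as test data; the small tolerance guards floating-point roundoff):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 5))
x = rng.standard_normal(3)
tol = 1e-12

# l2 norm = largest singular value sigma_max
sigma_max = np.linalg.svd(A, compute_uv=False)[0]
spectral_ok = np.isclose(np.linalg.norm(A, 2), sigma_max)

# Submultiplicative inequalities
vec_ineq = np.linalg.norm(A @ x) <= np.linalg.norm(A, 2) * np.linalg.norm(x) + tol
mat_ineq = np.linalg.norm(A @ B, 2) <= np.linalg.norm(A, 2) * np.linalg.norm(B, 2) + tol

# Frobenius, l1 (largest column sum), l-infinity (largest row sum)
fro_ok = np.isclose(np.linalg.norm(A, 'fro'), np.sqrt((A**2).sum()))
l1_ok = np.isclose(np.linalg.norm(A, 1), np.abs(A).sum(axis=0).max())
linf_ok = np.isclose(np.linalg.norm(A, np.inf), np.abs(A).sum(axis=1).max())
```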

Normal matrix.
If NN^T = N^T N, then N has orthonormal (complex) eigenvectors.

Nullspace matrix N.
The columns of N are the n − r special solutions to As = 0.
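SymPy's `nullspace` returns exactly these n − r special solutions (the matrix below is an arbitrary rank-2 example):

```python
from sympy import Matrix

A = Matrix([[1, 2, 2, 4], [1, 2, 3, 6]])   # 2x4 example matrix, rank 2
N = A.nullspace()                          # the n - r special solutions

n, r = A.cols, A.rank()
count_ok = (len(N) == n - r)               # here 4 - 2 = 2 solutions
all_in_nullspace = all(A * s == Matrix.zeros(A.rows, 1) for s in N)
```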

Partial pivoting.
In each column, choose the largest available pivot to control roundoff; all multipliers have |ℓij| ≤ 1. See condition number.

Pivot.
The diagonal entry (first nonzero) at the time when a row is used in elimination.

Projection p = a(a^T b / a^T a) onto the line through a.
P = aa^T / a^T a has rank 1.
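A quick sketch of both formulas (a and b are arbitrary example vectors):

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([3.0, 0.0, 3.0])

p = a * (a @ b) / (a @ a)            # projection of b onto the line through a
P = np.outer(a, a) / (a @ a)         # projection matrix aa^T / a^T a

matrix_matches = np.allclose(P @ b, p)
rank_one = (np.linalg.matrix_rank(P) == 1)
idempotent = np.allclose(P @ P, P)   # projecting twice changes nothing
```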

Rank one matrix A = uv^T ≠ 0.
Column and row spaces = lines cu and cv.

Rayleigh quotient q(x) = x^T Ax / x^T x for symmetric A: λmin ≤ q(x) ≤ λmax.
Those extremes are reached at the eigenvectors x for λmin(A) and λmax(A).
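The bounds and their attainment at eigenvectors can be checked numerically (random symmetric matrix as test data; `np.linalg.eigh` returns eigenvalues in ascending order):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                          # symmetrize

evals, evecs = np.linalg.eigh(A)           # ascending eigenvalues
q = lambda x: (x @ A @ x) / (x @ x)

tol = 1e-10
bounded = all(evals[0] - tol <= q(rng.standard_normal(4)) <= evals[-1] + tol
              for _ in range(100))

# Extremes attained at the eigenvectors for lambda_min and lambda_max.
min_attained = np.isclose(q(evecs[:, 0]), evals[0])
max_attained = np.isclose(q(evecs[:, -1]), evals[-1])
```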

Rotation matrix
R = [c −s; s c] rotates the plane by θ and R^{−1} = R^T rotates back by −θ. Eigenvalues are e^{iθ} and e^{−iθ}, eigenvectors are (1, ±i). c, s = cos θ, sin θ.
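Both claims are easy to confirm for a sample angle (θ = 0.7 is arbitrary):

```python
import numpy as np

theta = 0.7
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s], [s, c]])

# R^{-1} = R^T: rotating back undoes the rotation.
inverse_is_transpose = np.allclose(np.linalg.inv(R), R.T)

# Eigenvalues are e^{+/- i theta}; sort by imaginary part to fix the order.
evals = sorted(np.linalg.eigvals(R), key=lambda z: z.imag)
eigs_ok = np.allclose(evals, [np.exp(-1j * theta), np.exp(1j * theta)])
```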

Row space C(A^T) = all combinations of rows of A.
Column vectors by convention.

Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!

Subspace S of V.
Any vector space inside V, including V and Z = {zero vector only}.