- 2.1.1: Let A = […], B = […], C = […], and D = […]. Compute the...
- 2.1.2: Let A = […], B = […], C = […], and D = […]. Compute the foll...
- 2.1.3: Let A = […], B = […], C = […], and D = […]. Compute the following (i...
- 2.1.4: Let A = […], B = […], C = […], and D = […]. Compute the f...
- 2.1.5: Let A = […], B = […], C = […], and D = […]. Compute the foll...
- 2.1.6: Let A = […] and B = […]. Let O3 and I3 be ...
- 2.1.7: (a) Let A be an n × n matrix and X be an n × 1 column matrix of 1s....
- 2.1.8: Let A be a 3 × 5 matrix, B a 5 × 2 matrix, C a 3 × 4 matrix, D a 4 × 2...
- 2.1.9: Let A be a 2 × 2 matrix, B a 2 × 2 matrix, C a 2 × 3 matrix, D a 3 × 2...
- 2.1.10: Let C = AB and D = BA for the following matrices A and B. […]...
- 2.1.11: Let R = PQ and S = QP, where P = […] and Q = […]. Determi...
- 2.1.12: If A = […], B = […], and C = […], determine the following el...
- 2.1.13: If A = […] and B = […], determine the follo...
- 2.1.14: Let A = […], B = […], C = […]. Compute the following products...
- 2.1.15: Let A = […], B = […], P = […], Q = […]. (a) Express the ...
- 2.1.16: Let A and B be the following matrices. Compute row 2 of the matrix ...
- 2.1.17: Let A be a matrix whose third row is all zeros. Let B be any matrix ...
- 2.1.18: Let D be a matrix whose second column is all zeros. Let C be any ma...
- 2.1.19: Let A be an m × r matrix, B an r × n matrix, and C = AB. Let the colu...
- 2.1.20: Let A and B be the following matrices. Use the result of Exercise 1...
- 2.1.21: Use the given partitions of A and B below to compute AB. (a) A = […]...
- 2.1.22: Let A = […] and B = […]. For each partition of A given below find all...
- 2.1.23: Let A = […] and B = […]. For each partition of B given below find all ...
- 2.1.24: Suggest suitable partitions involving zero and identity submatrices...
- 2.1.25: State (with a brief explanation) whether the following statements a...
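Several of the facts these exercises point at (2.1.7: A times a column of 1s sums the rows; 2.1.17: a zero row of A forces a zero row in AB; 2.1.19: column j of AB is A times column j of B) can be checked numerically. A minimal sketch with matrices of my own choosing, not the ones from the text:

```python
import numpy as np

# Exercise 2.1.7: multiplying A by a column of ones sums each row.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
X = np.ones((3, 1))
row_sums = A @ X                     # column of row sums: 6, 15, 24

# Exercise 2.1.17: a zero third row of A forces a zero third row in AB.
A[2, :] = 0.0
B = np.random.default_rng(0).standard_normal((3, 4))
AB = A @ B                           # third row of AB is all zeros

# Exercise 2.1.19: column j of AB equals A times column j of B.
col1_direct = AB[:, 1]
col1_by_rule = A @ B[:, 1]
```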
Solutions for Chapter 2.1: Addition, Scalar Multiplication, and Multiplication of Matrices
Full solutions for Linear Algebra with Applications | 8th Edition
Back substitution.
Upper triangular systems are solved in reverse order x_n to x_1.
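A minimal back-substitution sketch (illustrative code, not from the text): solve Ux = b for an upper triangular U by computing x_n first, then x_{n-1}, and so on.

```python
import numpy as np

def back_substitute(U, b):
    """Solve Ux = b for upper triangular U, from x_n down to x_1."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # subtract the already-known later components, then divide by the pivot
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0, 1.0],
              [0.0, 3.0, 2.0],
              [0.0, 0.0, 4.0]])
b = np.array([5.0, 8.0, 8.0])
x = back_substitute(U, b)       # should agree with np.linalg.solve(U, b)
```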
Change of basis matrix M.
The old basis vectors v_j are combinations Σ m_ij w_i of the new basis vectors. The coordinates of c_1 v_1 + ... + c_n v_n = d_1 w_1 + ... + d_n w_n are related by d = M c. (For n = 2, set v_1 = m_11 w_1 + m_21 w_2, v_2 = m_12 w_1 + m_22 w_2.)
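A small n = 2 check of d = Mc, with a hypothetical basis pair of my own choosing (column j of M holds the w-coordinates of v_j):

```python
import numpy as np

w1 = np.array([1.0, 0.0])    # new basis (illustrative)
w2 = np.array([1.0, 1.0])
M = np.array([[2.0, 1.0],    # v1 = 2*w1 + 3*w2,  v2 = 1*w1 + 1*w2
              [3.0, 1.0]])
v1 = M[0, 0] * w1 + M[1, 0] * w2
v2 = M[0, 1] * w1 + M[1, 1] * w2

c = np.array([4.0, -1.0])    # coordinates in the old basis
d = M @ c                    # coordinates in the new basis
vec_old = c[0] * v1 + c[1] * v2
vec_new = d[0] * w1 + d[1] * w2   # same vector either way
```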
Column space C(A).
The space of all combinations of the columns of A.
Commuting matrices AB = BA.
If diagonalizable, they share n eigenvectors.
Covariance matrix Σ.
When random variables x_i have mean = average value = 0, their covariances Σ_ij are the averages of x_i x_j. With means x̄_i, the matrix Σ = mean of (x − x̄)(x − x̄)^T is positive (semi)definite; Σ is diagonal if the x_i are independent.
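A sketch of this definition on simulated data (independent variables chosen by me for illustration): the sample covariance matrix is nearly diagonal and its eigenvalues are nonnegative.

```python
import numpy as np

rng = np.random.default_rng(1)
# 10000 samples of two independent variables with standard deviations 1 and 2
samples = rng.standard_normal((10000, 2)) * np.array([1.0, 2.0])
xbar = samples.mean(axis=0)
centered = samples - xbar
Sigma = centered.T @ centered / len(samples)   # mean of (x - x̄)(x - x̄)^T

eigenvalues = np.linalg.eigvalsh(Sigma)        # all >= 0: positive semidefinite
```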
Determinant |A| = det(A).
Defined by det I = 1, sign reversal for a row exchange, and linearity in each row. Then |A| = 0 when A is singular. Also |AB| = |A||B| and |A^T| = |A|.
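A quick numerical check of the product rule and the singular case, with small matrices of my own choosing:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])          # one row exchange away from I: det = -1

det_product = np.linalg.det(A @ B)
det_rule = np.linalg.det(A) * np.linalg.det(B)   # |AB| = |A||B|

singular = np.array([[1.0, 2.0],
                     [2.0, 4.0]])   # dependent rows: determinant 0
det_singular = np.linalg.det(singular)
```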
Diagonalizable matrix A.
Must have n independent eigenvectors (in the columns of S; automatic with n different eigenvalues). Then S^-1 A S = Λ = eigenvalue matrix.
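A sketch of the diagonalization S^-1 A S = Λ, using a 2 × 2 matrix of my own choosing with distinct eigenvalues:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # eigenvalues 5 and 2 (trace 7, det 10)
eigenvalues, S = np.linalg.eig(A)   # eigenvectors in the columns of S
Lambda = np.linalg.inv(S) @ A @ S   # diagonal eigenvalue matrix
```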
Fast Fourier Transform (FFT).
A factorization of the Fourier matrix F_n into ℓ = log2 n matrices S_i times a permutation. Each S_i needs only n/2 multiplications, so F_n x and F_n^-1 c can be computed with nℓ/2 multiplications. Revolutionary.
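An illustrative check that the FFT computes the same product F_n x as direct multiplication by the Fourier matrix (here using numpy's FFT rather than writing out the S_i factors):

```python
import numpy as np

n = 8
x = np.random.default_rng(2).standard_normal(n)

# Direct multiplication by the Fourier matrix F_n costs n^2 operations...
k = np.arange(n)
F = np.exp(-2j * np.pi * np.outer(k, k) / n)
direct = F @ x

# ...the FFT gets the same answer via log2(n) sparse stages.
fast = np.fft.fft(x)
```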
Least squares solution x̂.
The vector x̂ that minimizes the error ||e||^2 solves A^T A x̂ = A^T b. Then e = b − A x̂ is orthogonal to all columns of A.
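A small least-squares sketch (data of my own choosing): the residual e is orthogonal to every column of A, i.e. A^T e = 0.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

xhat = np.linalg.lstsq(A, b, rcond=None)[0]   # minimizes ||b - Ax||^2
e = b - A @ xhat                              # error vector
normal_eq_residual = A.T @ e                  # zero: e is orthogonal to C(A)
```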
Left inverse A^+.
If A has full column rank n, then A^+ = (A^T A)^-1 A^T has A^+ A = I_n.
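A direct check of the formula on a tall full-column-rank matrix of my own choosing:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])                  # full column rank n = 2
A_plus = np.linalg.inv(A.T @ A) @ A.T       # A^+ = (A^T A)^-1 A^T
left_identity = A_plus @ A                  # equals I_2
```

For full column rank this agrees with the pseudoinverse `np.linalg.pinv(A)`.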
Linearly dependent v_1, ..., v_n.
A combination other than all c_i = 0 gives Σ c_i v_i = 0.
Norm ||A||.
The "ℓ^2 norm" of A is the maximum ratio ||Ax||/||x|| = σ_max. Then ||Ax|| ≤ ||A|| ||x|| and ||AB|| ≤ ||A|| ||B|| and ||A + B|| ≤ ||A|| + ||B||. The Frobenius norm is ||A||_F^2 = Σ Σ a_ij^2. The ℓ^1 and ℓ^∞ norms are the largest column and row sums of |a_ij|.
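The four norms can be compared on one small matrix (my own example):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

l2 = np.linalg.norm(A, 2)            # sigma_max, the largest singular value
fro = np.linalg.norm(A, 'fro')       # sqrt of the sum of squared entries
l1 = np.linalg.norm(A, 1)            # largest absolute column sum: 6
linf = np.linalg.norm(A, np.inf)     # largest absolute row sum: 7

x = np.array([1.0, 1.0])
bound_holds = np.linalg.norm(A @ x) <= l2 * np.linalg.norm(x)
```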
Outer product uv^T.
Column times row = rank one matrix.
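A one-line illustration that column times row gives a rank-one matrix (vectors chosen for illustration):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0])
uvT = np.outer(u, v)                 # 3 x 2 matrix: column u times row v^T
rank = np.linalg.matrix_rank(uvT)    # rank one
```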
Particular solution x_p.
Any solution to Ax = b; often x_p has free variables = 0.
Pascal matrix P_S.
P_S = pascal(n) = the symmetric matrix with binomial entries C(i + j − 2, i − 1). P_S = P_L P_U all contain Pascal's triangle with det = 1 (see Pascal in the index).
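A sketch that builds the symmetric Pascal matrix from the binomial formula (my own helper, named `pascal_sym` for illustration) and checks det = 1:

```python
import numpy as np
from math import comb

def pascal_sym(n):
    """Symmetric Pascal matrix with entries C(i + j - 2, i - 1), 1-indexed."""
    return np.array([[comb(i + j - 2, i - 1) for j in range(1, n + 1)]
                     for i in range(1, n + 1)], dtype=float)

Ps = pascal_sym(4)          # rows/columns hold Pascal's triangle
det_Ps = np.linalg.det(Ps)  # determinant 1 for every n
```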
Pivot columns of A.
Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.
Polar decomposition A = Q H.
Orthogonal Q times positive (semi)definite H.
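One standard way to compute the polar factors is via the SVD (a sketch with a matrix of my own choosing): from A = UΣV^T, take Q = UV^T and H = VΣV^T.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
U, s, Vt = np.linalg.svd(A)
Q = U @ Vt                    # orthogonal factor
H = Vt.T @ np.diag(s) @ Vt    # symmetric positive (semi)definite factor
recon = Q @ H                 # recovers A
```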
Rayleigh quotient q(x) = x^T A x / x^T x for symmetric A: λ_min ≤ q(x) ≤ λ_max.
Those extremes are reached at the eigenvectors x for λ_min(A) and λ_max(A).
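A numerical check on a small symmetric matrix (my own example, eigenvalues 1 and 3): the quotient hits the extreme eigenvalues at the eigenvectors and stays between them elsewhere.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                 # symmetric, eigenvalues 1 and 3
eigenvalues, eigenvectors = np.linalg.eigh(A)

def rayleigh(A, x):
    return (x @ A @ x) / (x @ x)

q_min = rayleigh(A, eigenvectors[:, 0])    # attains lambda_min = 1
q_max = rayleigh(A, eigenvectors[:, -1])   # attains lambda_max = 3
q_mid = rayleigh(A, np.array([1.0, 0.0]))  # any x stays between the extremes
```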
Stiffness matrix K.
If x gives the movements of the nodes, Kx gives the internal forces. K = A^T C A where C has spring constants from Hooke's law and Ax = stretching.
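A minimal two-spring chain (an illustrative setup of my own, not from the text): row i of A gives the stretching of spring i in terms of the node movements x, and K = A^T C A assembles the stiffness matrix.

```python
import numpy as np

# Chain: wall - spring 1 - node 1 - spring 2 - node 2 (free end)
A = np.array([[ 1.0, 0.0],    # spring 1 stretch = x1
              [-1.0, 1.0]])   # spring 2 stretch = x2 - x1
C = np.diag([4.0, 2.0])       # spring constants from Hooke's law
K = A.T @ C @ A               # symmetric stiffness matrix

x = np.array([0.5, 1.0])      # node movements
f = K @ x                     # internal forces at the nodes
```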
Vector space V.
Set of vectors such that all combinations cv + dw remain within V. Eight required rules are given in Section 3.1 for scalars c, d and vectors v, w.