- 8.6.1: Which line C t + D is the best fit to the three independent measure...
- 8.6.2: In 1, suppose that the third measurement is totally unreliable. The...
- 8.6.3: In 1, suppose that the third measurement is totally reliable. The v...
- 8.6.4: A single flip of a fair coin (0 or 1) has mean m = 1/2 and variance...
- 8.6.5: Instead of adding the flip results, make them two independent exper...
- 8.6.6: Change Example 1 so that the coin flip can be unfair. The probabili...
- 8.6.7: For two independent measurements x = b1 and x = b2, the best x shou...
- 8.6.8: The least squares estimate correctly weighted by Σ^{-1} is x̂ = (A^T Σ...
- 8.6.9: Change the grades to 3, 1, -1, -3 for A, B, C, F. Show that the SVD...
Solutions for Chapter 8.6: Linear Algebra for Statistics and Probability
Full solutions for Introduction to Linear Algebra | 4th Edition
Adjacency matrix of a graph.
Square matrix with a_ij = 1 when there is an edge from node i to node j; otherwise a_ij = 0. A = A^T when edges go both ways (undirected).
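The definition above can be sketched in NumPy (a minimal example; the 3-node graph and its edge list are made up for illustration):

```python
import numpy as np

# Hypothetical undirected graph on 3 nodes with edges 0-1 and 1-2.
edges = [(0, 1), (1, 2)]
A = np.zeros((3, 3), dtype=int)
for i, j in edges:
    A[i, j] = 1   # edge from node i to node j
    A[j, i] = 1   # undirected: edge goes both ways, so A = A^T
```

For a directed graph, only `A[i, j]` would be set, and symmetry would generally be lost.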
Basis for V.
Independent vectors v1, ..., vd whose linear combinations give each vector in V as v = c1 v1 + ... + cd vd. V has many bases; each basis gives unique c's.
Cayley-Hamilton Theorem.
p(λ) = det(A - λI) has p(A) = zero matrix.
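A quick numerical check of p(A) = zero matrix for a small symmetric matrix (the matrix is an arbitrary example; the polynomial is evaluated at A by Horner's rule):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3
coeffs = np.poly(A)                      # characteristic polynomial coefficients, highest degree first
P = np.zeros_like(A)
for c in coeffs:                         # Horner's rule with matrix argument: P = p(A)
    P = P @ A + c * np.eye(2)
```

Here p(λ) = λ² - 4λ + 3, and substituting A gives A² - 4A + 3I = zero matrix.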
Change of basis matrix M.
The old basis vectors v_j are combinations Σ m_ij w_i of the new basis vectors. The coordinates of c1 v1 + ... + cn vn = d1 w1 + ... + dn wn are related by d = M c. (For n = 2: v1 = m11 w1 + m21 w2, v2 = m12 w1 + m22 w2.)
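A small 2-D sketch of d = M c (the particular M and c are invented for illustration; column j of M holds the new-basis coordinates of v_j):

```python
import numpy as np

# Hypothetical: v1 = w1, v2 = w1 + 2*w2, so m11=1, m21=0, m12=1, m22=2.
M = np.array([[1.0, 1.0],
              [0.0, 2.0]])
c = np.array([3.0, 1.0])   # coordinates in the old basis v1, v2
d = M @ c                  # coordinates of the same vector in the new basis w1, w2
```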
Characteristic equation det(A - λI) = 0.
The n roots are the eigenvalues of A.
cond(A) = c(A) = ||A|| ||A^{-1}|| = σ_max/σ_min. In Ax = b, the relative change ||δx||/||x|| is less than cond(A) times the relative change ||δb||/||b||. Condition numbers measure the sensitivity of the output to changes in the input.
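The ratio σ_max/σ_min can be computed directly from the singular values (the diagonal matrix here is an arbitrary example chosen so the answer is easy to see):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 0.01]])
sigma = np.linalg.svd(A, compute_uv=False)   # singular values, largest first
c = sigma.max() / sigma.min()                # cond(A) = sigma_max / sigma_min
```

This agrees with NumPy's built-in `np.linalg.cond(A)` in the default 2-norm.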
Eigenvalue A and eigenvector x.
Ax = λx with x ≠ 0, so det(A - λI) = 0.
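The defining equation Ax = λx can be verified for each computed eigenpair (the symmetric matrix is a made-up example):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
lam, X = np.linalg.eig(A)       # eigenvalues lam[k], eigenvectors in columns X[:, k]
for k in range(2):
    # A x = lambda x holds for each eigenpair
    assert np.allclose(A @ X[:, k], lam[k] * X[:, k])
```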
Full row rank r = m.
Independent rows, at least one solution to Ax = b, column space is all of R^m. Full rank means full column rank or full row rank.
Incidence matrix of a directed graph.
The m by n edge-node incidence matrix has a row for each edge (node i to node j), with entries -1 and 1 in columns i and j.
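Building the incidence matrix for a small directed graph (the 3-edge graph is a hypothetical example):

```python
import numpy as np

# Directed edges: 0->1, 1->2, 0->2. One row per edge: -1 in column i, +1 in column j.
edges = [(0, 1), (1, 2), (0, 2)]
A = np.zeros((len(edges), 3))
for row, (i, j) in enumerate(edges):
    A[row, i] = -1
    A[row, j] = 1
```

Each row sums to zero (-1 + 1), which is why the all-ones vector is always in the nullspace.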
Least squares solution x̂.
The vector x̂ that minimizes the error ||e||^2 solves A^T A x̂ = A^T b. Then e = b - Ax̂ is orthogonal to all columns of A.
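A sketch of the normal equations for fitting C + Dt to three measurements (the data values are made up; this mirrors the setup of Problem 8.6.1):

```python
import numpy as np

t = np.array([0.0, 1.0, 2.0])
b = np.array([1.0, 2.0, 4.0])
A = np.column_stack([np.ones_like(t), t])   # columns: 1 and t
xhat = np.linalg.solve(A.T @ A, A.T @ b)    # normal equations: A^T A xhat = A^T b
e = b - A @ xhat                            # error vector
```

The residual e is orthogonal to both columns of A, so A^T e = 0.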
Multiplier ℓ_ij.
The pivot row j is multiplied by ℓ_ij and subtracted from row i to eliminate the i, j entry: ℓ_ij = (entry to eliminate) / (jth pivot).
Partial pivoting.
In each column, choose the largest available pivot to control roundoff; all multipliers have |ℓ_ij| ≤ 1. See condition number.
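One elimination step with partial pivoting, showing the multiplier (a minimal sketch on an invented 2×2 system, not library code):

```python
import numpy as np

A = np.array([[0.001, 1.0],
              [1.0,   1.0]])
# Partial pivoting: swap rows so the largest entry in column 0 becomes the pivot.
p = np.argmax(np.abs(A[:, 0]))
A[[0, p]] = A[[p, 0]]
l10 = A[1, 0] / A[0, 0]     # multiplier l_ij = (entry to eliminate) / (pivot)
A[1] = A[1] - l10 * A[0]    # subtract l_ij times the pivot row
```

Without the swap the multiplier would be 1000; with it, |l10| ≤ 1, which controls roundoff growth.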
Pivot.
The diagonal entry (first nonzero) at the time when a row is used in elimination.
Plane (or hyperplane) in Rn.
Vectors x with a^T x = 0. The plane is perpendicular to a ≠ 0.
Random matrix rand(n) or randn(n).
MATLAB creates a matrix with random entries: uniformly distributed on [0, 1] for rand, standard normal distribution for randn.
Rank r(A).
The number of pivots = dimension of column space = dimension of row space.
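A quick numerical illustration with a rank-deficient matrix (the matrix is an arbitrary example with one row a multiple of the other):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])       # row 2 = 2 * row 1: only one independent row
r = np.linalg.matrix_rank(A)          # one pivot, so 1-D column space and row space
```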
Reflection matrix (Householder) Q = I - 2uu^T.
Unit vector u is reflected to Qu = -u. All x in the plane mirror u^T x = 0 have Qx = x. Notice Q^T = Q^{-1} = Q.
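The three stated properties can be checked directly (the unit vector u is a hypothetical example):

```python
import numpy as np

u = np.array([1.0, 0.0])                 # unit vector normal to the mirror plane
Q = np.eye(2) - 2 * np.outer(u, u)       # Householder reflection Q = I - 2 u u^T
x = np.array([0.0, 3.0])                 # x lies in the mirror plane: u^T x = 0
assert np.allclose(Q @ u, -u)            # u reflects to -u
assert np.allclose(Q @ x, x)             # mirror-plane vectors are unchanged
assert np.allclose(Q.T @ Q, np.eye(2))   # Q^T = Q^{-1} = Q
```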
Rotation matrix R.
R = [c -s; s c] rotates the plane by θ, and R^{-1} = R^T rotates back by -θ. Eigenvalues are e^{iθ} and e^{-iθ}; eigenvectors are (1, ±i). c, s = cos θ, sin θ.
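A numerical check of the rotation properties (θ = π/3 is an arbitrary choice):

```python
import numpy as np

theta = np.pi / 3
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s],
              [s,  c]])
lam = np.linalg.eigvals(R)               # e^{i theta} and e^{-i theta}
assert np.allclose(R.T @ R, np.eye(2))   # R^{-1} = R^T
```

Both eigenvalues sit on the unit circle, |λ| = 1, as expected for e^{±iθ}.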
Semidefinite matrix A.
(Positive) semidefinite: all x^T A x ≥ 0, all λ ≥ 0; A = any R^T R.
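The form A = R^T R guarantees x^T A x = ||Rx||^2 ≥ 0 for every x, as a quick test shows (R is an arbitrary 1×2 example, chosen rank-deficient so A is semidefinite but not definite):

```python
import numpy as np

R = np.array([[1.0, 2.0]])
A = R.T @ R                              # semidefinite with eigenvalues 0 and 5
rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.standard_normal(2)
    assert x @ A @ x >= -1e-12           # x^T A x = ||Rx||^2, nonnegative up to roundoff
```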
Vandermonde matrix V.
V c = b gives the coefficients of p(x) = c0 + ... + c_{n-1} x^{n-1} with p(x_i) = b_i. V_ij = (x_i)^{j-1}, and det V = product of (x_k - x_i) for k > i.
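Solving V c = b for an interpolating polynomial (the points and values are made up; `np.vander` with `increasing=True` gives columns 1, x, x²):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])
b = np.array([1.0, 2.0, 5.0])          # values to interpolate
V = np.vander(x, increasing=True)      # V_ij = x_i^j, columns 1, x, x^2
c = np.linalg.solve(V, b)              # coefficients of p(x) = c0 + c1 x + c2 x^2
```

Here the interpolant turns out to be p(x) = 1 + x², and p(x_i) = b_i at all three points.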