 1.6.1: Determine the dot products of the following pairs of vectors. (a) (...
 1.6.2: Determine the dot products of the following pairs of vectors. (a) (...
 1.6.3: Determine the dot products of the following pairs of vectors. (a) (...
 1.6.4: Determine the dot products of the following pairs of column vectors...
 1.6.5: Find the norms of the following vectors. (a) ( 1, 2) (b) (3, 4) (c...
 1.6.6: Find the norms of the following vectors. (a) (1,3,1) (b) (3,0,4) (...
 1.6.7: Find the norms of the following vectors. (a) (5, 2) (b) ( 4, 2, 3)...
 1.6.8: Find the norms of the following column vectors. (a) ... (b) ...
 1.6.9: Normalize the following vectors. (a) (1, 3) (b) (2, 4) (c) (1, 2, ...
 1.6.10: Normalize the following vectors. (a) (4, 2) (b) (4, 1, 1) (c) (7, 2...
 1.6.11: Normalize the following column vectors. (a) ... (b) ... (c) ... (d) ...
 1.6.12: Determine the angles between the following pairs of vectors. (a) (...
 1.6.13: Determine the cosines of the angles between the following pairs of ...
 1.6.14: Determine the cosines of the angles between the following pairs of ...
 1.6.15: Show that the following pairs of vectors are orthogonal. (a) (1,3),...
 1.6.16: Show that the following pairs of vectors are orthogonal. (a) (3, 5...
 1.6.17: Show that the following pairs of column vectors are orthogonal. (a)...
 1.6.18: Determine nonzero vectors that are orthogonal to the following vect...
 1.6.19: Determine nonzero vectors that are orthogonal to the following vect...
 1.6.20: Determine a vector that is orthogonal to both ( 1, 2,  1) and ( 3,...
 1.6.21: Let W be the subspace of vectors in R3 that are orthogonal to w = (...
 1.6.22: Let W be the subspace of vectors in R3 that are orthogonal to w = (...
 1.6.23: Let W be the subspace of vectors in R3 that are orthogonal to w = (...
 1.6.24: Let W be the subspace of vectors in R4 that are orthogonal to w = (1,...
 1.6.25: Find the distances between the following pairs of points. (a) (6, 5...
 1.6.26: Find the distances between the following pairs of points. (a) (4, 1...
 1.6.27: Prove the following two properties of the dot product. (a) {u + v) ...
 1.6.28: Prove that if v is a nonzero vector, then the following vector u is...
 1.6.29: Prove that two nonzero vectors u and v are orthogonal if and only i...
 1.6.30: Show that if v and w are two vectors in a vector space U and u · v = u ...
 1.6.31: Let u, v1, ..., vn be vectors in a given vector space. Let a1, ...
 1.6.32: Let u, v, and w be vectors in a given Euclidean space and let c and...
 1.6.33: Find all the values of c such that ‖c(3, 0, 4)‖ = 15.
 1.6.34: Prove that u and v are orthogonal vectors if and only if ‖u + v‖² = ‖u‖² + ‖v‖².
 1.6.35: Let (a, b) be a vector in R2. Prove that the vector (−b, a) is orthogonal to (a, b).
 1.6.36: Let u and v be vectors in Rn. Prove that ‖u‖ = ‖v‖ if and only if ...
 1.6.37: Let u be a vector in Rn and c a scalar. Prove that the norm of a vec...
 1.6.38: Consider the vector space Rn. Let u = (u1, ... , un) be a vector in...
 1.6.39: Let x, y, and z be points in Rn. Prove that distance has the follow...
Solutions for Chapter 1.6: Dot Product, Norm, Angle, and Distance
Full solutions for Linear Algebra with Applications, 8th Edition
ISBN: 9781449679545
Chapter 1.6 of Linear Algebra with Applications (8th edition, ISBN 9781449679545) includes 39 full step-by-step solutions.
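The four computations named in this chapter — dot product, norm, angle, and distance — can be sketched in a few lines of Python. The vectors below are illustrative choices, not taken from any exercise above.

```python
import math

# Illustrative vectors (not from a specific exercise above).
u = (1.0, 2.0, 2.0)
v = (3.0, 0.0, 4.0)

dot = sum(a * b for a, b in zip(u, v))                     # dot product u . v
norm_u = math.sqrt(sum(a * a for a in u))                  # norm ||u||
norm_v = math.sqrt(sum(b * b for b in v))                  # norm ||v||
cos_theta = dot / (norm_u * norm_v)                        # cosine of the angle
dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))  # distance between points

print(dot, norm_u, norm_v)  # 11.0 3.0 5.0
```

Normalizing a vector, as in exercises 1.6.9–1.6.11, is then just dividing each component by the norm.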

Augmented matrix [A b].
Ax = b is solvable when b is in the column space of A; then [A b] has the same rank as A. Elimination on [A b] keeps equations correct.
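The rank test above can be checked numerically; here is a minimal NumPy sketch with an illustrative matrix and right-hand sides (not taken from the text).

```python
import numpy as np

# Toy 3x2 system; the values are illustrative only.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 1.0]])
b_good = A @ np.array([1.0, 3.0])   # in the column space by construction
b_bad = np.array([1.0, 0.0, 0.0])   # not a combination of the columns

def solvable(A, b):
    # Ax = b is solvable exactly when rank([A b]) equals rank(A).
    Ab = np.column_stack([A, b])
    return np.linalg.matrix_rank(Ab) == np.linalg.matrix_rank(A)

print(solvable(A, b_good), solvable(A, b_bad))  # True False
```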

Column picture of Ax = b.
The vector b becomes a combination of the columns of A. The system is solvable only when b is in the column space C (A).

Column space C (A) =
space of all combinations of the columns of A.

Cyclic shift S.
Permutation with S21 = 1, S32 = 1, ..., finally S1n = 1. Its eigenvalues are the nth roots e^(2πik/n) of 1; eigenvectors are columns of the Fourier matrix F.
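A small numerical check of the eigenvalue claim, using the 4×4 cyclic shift (the size is an arbitrary illustrative choice):

```python
import numpy as np

n = 4
# Cyclic shift: S[1,0] = 1, S[2,1] = 1, ..., S[0,n-1] = 1 (0-based indices).
S = np.zeros((n, n))
for i in range(n):
    S[(i + 1) % n, i] = 1.0

eigs = np.linalg.eigvals(S)
print(np.allclose(np.abs(eigs), 1.0))  # all eigenvalues on the unit circle
print(np.allclose(eigs**n, 1.0))       # each one is an nth root of 1
```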

Diagonalization Λ = S^-1 A S.
Λ = eigenvalue matrix and S = eigenvector matrix of A. A must have n independent eigenvectors to make S invertible. All A^k = S Λ^k S^-1.
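The factorization and the power formula can be verified numerically. The matrix below is an illustrative symmetric example (chosen so that real independent eigenvectors are guaranteed):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, S = np.linalg.eig(A)   # eigenvalues, and eigenvector matrix S (columns)
Lam = np.diag(lam)          # the eigenvalue matrix Lambda

# A = S Lambda S^-1, and A^k = S Lambda^k S^-1.
print(np.allclose(A, S @ Lam @ np.linalg.inv(S)))
k = 3
print(np.allclose(np.linalg.matrix_power(A, k),
                  S @ np.linalg.matrix_power(Lam, k) @ np.linalg.inv(S)))
```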

Elimination.
A sequence of row operations that reduces A to an upper triangular U or to the reduced form R = rref(A). Then A = LU with multipliers ℓij in L, or PA = LU with row exchanges in P, or EA = R with an invertible E.

Hilbert matrix hilb(n).
Entries Hij = 1/(i + j − 1) = ∫₀¹ x^(i−1) x^(j−1) dx. Positive definite but extremely small λmin and large condition number: H is ill-conditioned.
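Both properties are easy to observe numerically; this sketch builds hilb(n) from the entry formula (n = 6 is an arbitrary illustrative size):

```python
import numpy as np

def hilb(n):
    # H[i, j] = 1/(i + j - 1) with 1-based i and j, as in the definition above.
    i, j = np.indices((n, n)) + 1
    return 1.0 / (i + j - 1)

H = hilb(6)
print(np.all(np.linalg.eigvalsh(H) > 0))  # positive definite: True
print(np.linalg.cond(H))                  # huge condition number, roughly 1e7
```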

Identity matrix I (or In).
Diagonal entries = 1, off-diagonal entries = 0.

Nilpotent matrix N.
Some power of N is the zero matrix, N^k = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.
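A quick check on a strictly upper triangular example (the nonzero entries are arbitrary illustrative values):

```python
import numpy as np

# Strictly upper triangular, so nilpotent: N^3 = 0 for this 3x3 example.
N = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])

print(np.linalg.matrix_power(N, 3))  # the zero matrix
print(np.linalg.eigvals(N))          # every eigenvalue is 0
```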

Orthogonal matrix Q.
Square matrix with orthonormal columns, so Q^T = Q^-1. Preserves length and angles: ‖Qx‖ = ‖x‖ and (Qx)^T(Qy) = x^T y. All |λ| = 1, with orthogonal eigenvectors. Examples: rotation, reflection, permutation.
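These properties can be confirmed on a rotation matrix (the angle below is an arbitrary illustrative value):

```python
import numpy as np

theta = 0.7  # arbitrary rotation angle
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x = np.array([3.0, 4.0])

print(np.allclose(Q.T, np.linalg.inv(Q)))                     # Q^T = Q^-1
print(np.allclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))  # length preserved
print(np.allclose(np.abs(np.linalg.eigvals(Q)), 1.0))         # all |lambda| = 1
```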

Pivot columns of A.
Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.

Plane (or hyperplane) in Rn.
Vectors x with a^T x = 0. The plane is perpendicular to a ≠ 0.

Positive definite matrix A.
Symmetric matrix with positive eigenvalues and positive pivots. Definition: x^T Ax > 0 unless x = 0. Then A = LDL^T with diag(D) > 0.
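The eigenvalue test and the x^T Ax condition can both be spot-checked; the matrix below is an illustrative example, and the random test vector is only a sample, not a proof.

```python
import numpy as np

A = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])  # illustrative symmetric matrix

print(np.all(np.linalg.eigvalsh(A) > 0))  # positive eigenvalues: True

# Spot-check x^T A x > 0 for a random nonzero x (a sample, not a proof).
rng = np.random.default_rng(0)
x = rng.standard_normal(2)
print(x @ A @ x > 0)  # True
```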

Random matrix rand(n) or randn(n).
MATLAB creates a matrix with random entries, uniformly distributed on [0, 1] for rand and standard normal for randn.

Rayleigh quotient q(x) = x^T Ax / x^T x for symmetric A: λmin ≤ q(x) ≤ λmax.
Those extremes are reached at the eigenvectors x for λmin(A) and λmax(A).
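The bounds can be sampled numerically; the matrix and the random test vectors below are illustrative:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])         # illustrative symmetric matrix
lam = np.linalg.eigvalsh(A)        # sorted: [lambda_min, lambda_max]

rng = np.random.default_rng(1)
qs = []
for _ in range(5):
    x = rng.standard_normal(2)
    qs.append((x @ A @ x) / (x @ x))  # Rayleigh quotient q(x)

print(all(lam[0] <= q <= lam[-1] for q in qs))  # True
```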

Singular Value Decomposition (SVD).
A = UΣV^T = (orthogonal)(diagonal)(orthogonal). The first r columns of U and V are orthonormal bases of C(A) and C(A^T), with Av_i = σ_i u_i and singular values σ_i > 0. The last columns are orthonormal bases of the nullspaces.
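A numerical sketch with an illustrative 3×2 matrix of rank 2 (note that NumPy returns V^T, whose rows are the v_i):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0],
              [0.0, 0.0]])    # illustrative 3x2 matrix of rank 2

U, s, Vt = np.linalg.svd(A)   # s holds the singular values, in decreasing order
Sigma = np.zeros_like(A)
Sigma[:2, :2] = np.diag(s)

print(np.allclose(A, U @ Sigma @ Vt))          # A = U Sigma V^T
print(np.allclose(A @ Vt[0], s[0] * U[:, 0]))  # A v_1 = sigma_1 u_1
```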

Skew-symmetric matrix K.
The transpose is −K, since Kij = −Kji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, and e^(Kt) is an orthogonal matrix.
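The eigenvalue claim is easy to check on a small illustrative example:

```python
import numpy as np

K = np.array([[ 0.0, 2.0],
              [-2.0, 0.0]])   # K^T = -K; illustrative 2x2 example

print(np.allclose(K.T, -K))                         # skew-symmetric
print(np.allclose(np.linalg.eigvals(K).real, 0.0))  # eigenvalues are pure imaginary
```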

Sum V + W of subspaces.
Space of all (v in V) + (w in W). Direct sum: V ∩ W = {0}.

Symmetric matrix A.
The transpose is A^T = A, and aij = aji. A^-1 is also symmetric.

Vector v in Rn.
Sequence of n real numbers v = (v1, ..., vn) = point in Rn.