 3.1.1: Consider the vectors x1 = (8, 6)T and x2 = (4, 1)T in R2. (a) Deter...
 3.1.2: Repeat Exercise 1 for the vectors x1 = (2, 1)T and x2 = (6, 3)T
 3.1.3: Let C be the set of complex numbers. Define addition on C by (a + b...
 3.1.4: Show that Rmn, together with the usual addition and scalar multipli...
 3.1.5: Show that C[a, b], together with the usual scalar multiplication an...
 3.1.6: Let P be the set of all polynomials. Show that P, together with the...
 3.1.7: Show that the element 0 in a vector space is unique.
 3.1.8: Let x, y, and z be vectors in a vector space V. Prove that if x + y...
 3.1.9: Let V be a vector space and let x ∈ V. Show that (a) 0 = 0 for each s...
 3.1.10: Let S be the set of all ordered pairs of real numbers. Define scala...
 3.1.11: Let V be the set of all ordered pairs of real numbers with addition...
 3.1.12: Let R+ denote the set of positive real numbers. Define the operatio...
 3.1.13: Let R denote the set of real numbers. Define scalar multiplication ...
 3.1.14: Let Z denote the set of all integers with addition defined in the u...
 3.1.15: Let S denote the set of all infinite sequences of real numbers with...
 3.1.16: We can define a one-to-one correspondence between the elements of P...
Solutions for Chapter 3.1: Definition and Examples
Full solutions for Linear Algebra with Applications, 9th Edition
ISBN: 9780321962218
Chapter 3.1: Definition and Examples includes 16 full step-by-step solutions.

Basis for V.
Independent vectors v1, ..., vd whose linear combinations give each vector in V as v = c1 v1 + ... + cd vd. A vector space has many bases, and each basis gives unique c's.

Circulant matrix C.
Constant diagonals wrap around as in the cyclic shift S. Every circulant is C = c0 I + c1 S + ... + c_{n-1} S^{n-1}. Cx = convolution c * x. Eigenvectors are the columns of the Fourier matrix F.
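
The identity Cx = c * x can be checked numerically. A minimal sketch in plain Python (the helper names and the 4-by-4 size are my own, chosen for illustration):

```python
def circulant(c):
    """Circulant matrix with first column c: entry (i, j) is c[(i - j) mod n]."""
    n = len(c)
    return [[c[(i - j) % n] for j in range(n)] for i in range(n)]

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def circular_convolution(c, x):
    """(c * x)_i = sum over j of c[(i - j) mod n] * x[j]."""
    n = len(c)
    return [sum(c[(i - j) % n] * x[j] for j in range(n)) for i in range(n)]

c = [1, 2, 0, 3]
x = [4, 5, 6, 7]
# Multiplying by the circulant equals circular convolution with its first column
assert matvec(circulant(c), x) == circular_convolution(c, x)
```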

Complex conjugate
The conjugate of z = a + ib is z̄ = a - ib. Then z z̄ = |z|^2.

Cramer's Rule for Ax = b.
B_j has b replacing column j of A; x_j = det(B_j) / det(A).
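
For a 2-by-2 system this rule is short enough to spell out. A sketch (function names and the sample system are illustrative assumptions):

```python
def det2(A):
    """Determinant of a 2x2 matrix."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def cramer2(A, b):
    """Solve Ax = b via Cramer's Rule: x_j = det(B_j) / det(A)."""
    B0 = [[b[0], A[0][1]], [b[1], A[1][1]]]  # b replaces column 0
    B1 = [[A[0][0], b[0]], [A[1][0], b[1]]]  # b replaces column 1
    d = det2(A)
    return [det2(B0) / d, det2(B1) / d]

A = [[2.0, 1.0], [1.0, 3.0]]
b = [3.0, 5.0]
x = cramer2(A, b)
# Check that Ax = b holds for the computed solution
assert abs(A[0][0] * x[0] + A[0][1] * x[1] - b[0]) < 1e-12
assert abs(A[1][0] * x[0] + A[1][1] * x[1] - b[1]) < 1e-12
```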

Cyclic shift
S. Permutation with S_21 = 1, S_32 = 1, ..., finally S_1n = 1. Its eigenvalues are the nth roots e^{2πik/n} of 1; eigenvectors are the columns of the Fourier matrix F.

Elimination matrix = Elementary matrix Eij.
The identity matrix with an extra -e_ij in the (i, j) entry (i ≠ j). Then E_ij A subtracts e_ij times row j of A from row i.
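
A small sketch of this row operation (helper names and the 2-by-2 example are my own):

```python
def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def elimination_matrix(n, i, j, e):
    """Identity with an extra -e in the (i, j) entry, i != j."""
    E = identity(n)
    E[i][j] = -e
    return E

def matmul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(len(B)))
             for c in range(len(B[0]))] for r in range(len(A))]

A = [[2.0, 1.0], [4.0, 3.0]]
E = elimination_matrix(2, 1, 0, 2.0)     # subtract 2 * (row 0) from row 1
# E A has a zero in the (1, 0) position, as elimination intends
assert matmul(E, A) == [[2.0, 1.0], [0.0, 1.0]]
```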

Four Fundamental Subspaces C(A), N(A), C(A^T), N(A^T).
Use A^H (conjugate transpose) for complex A.

Fourier matrix F.
Entries F_jk = e^{2πijk/n} give orthogonal columns, so F^H F = nI. Then y = Fc is the (inverse) Discrete Fourier Transform: y_j = Σ c_k e^{2πijk/n}.
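
The orthogonality F^H F = nI can be verified directly. A sketch with an assumed small size n = 4:

```python
import cmath

n = 4
# F_jk = e^{2*pi*i*j*k/n}
F = [[cmath.exp(2j * cmath.pi * j * k / n) for k in range(n)] for j in range(n)]

for a in range(n):
    for b in range(n):
        # (F^H F)_{ab} = sum over j of conj(F_ja) * F_jb
        s = sum(F[j][a].conjugate() * F[j][b] for j in range(n))
        expected = n if a == b else 0
        assert abs(s - expected) < 1e-9   # n on the diagonal, 0 elsewhere
```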

Iterative method.
A sequence of steps intended to approach the desired solution.

Linear combination cv + dw or Σ c_j v_j.
Vector addition and scalar multiplication.

Lucas numbers
L_n = 2, 1, 3, 4, 7, 11, ... satisfy L_n = L_{n-1} + L_{n-2} = λ1^n + λ2^n, with λ1, λ2 = (1 ± √5)/2 from the Fibonacci matrix [[1, 1], [1, 0]]. Compare L_0 = 2 with F_0 = 0.
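
The recurrence and the closed form agree, which a short sketch can confirm (the function name is my own):

```python
import math

def lucas(n):
    """L_n by the recurrence, starting from L_0 = 2, L_1 = 1."""
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

lam1 = (1 + math.sqrt(5)) / 2   # eigenvalues of the Fibonacci matrix
lam2 = (1 - math.sqrt(5)) / 2
assert [lucas(n) for n in range(7)] == [2, 1, 3, 4, 7, 11, 18]
for n in range(7):
    # Closed form: L_n = lam1**n + lam2**n
    assert abs(lucas(n) - (lam1 ** n + lam2 ** n)) < 1e-9
```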

Nullspace N (A)
= All solutions to Ax = 0. Dimension n - r = (# columns) - rank.

Orthonormal vectors q_1, ..., q_n.
Dot products are q_i^T q_j = 0 if i ≠ j and q_i^T q_i = 1. The matrix Q with these orthonormal columns has Q^T Q = I. If m = n then Q^T = Q^{-1} and q_1, ..., q_n is an orthonormal basis for R^n: every v = Σ (v^T q_j) q_j.
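
As a quick numeric sketch of the expansion v = Σ (v^T q_j) q_j, using an assumed 2-by-2 rotation whose columns are orthonormal:

```python
import math

t = 0.7                              # arbitrary rotation angle (assumption)
q1 = [math.cos(t), math.sin(t)]
q2 = [-math.sin(t), math.cos(t)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Orthonormality: q1 . q2 = 0 and q1 . q1 = 1
assert abs(dot(q1, q2)) < 1e-12 and abs(dot(q1, q1) - 1) < 1e-12

v = [3.0, -2.0]
# Expand v in the orthonormal basis: v = (v . q1) q1 + (v . q2) q2
expansion = [dot(v, q1) * q1[i] + dot(v, q2) * q2[i] for i in range(2)]
assert all(abs(expansion[i] - v[i]) < 1e-12 for i in range(2))
```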

Reflection matrix (Householder) Q = I - 2uu^T.
The unit vector u is reflected to Qu = -u. All x in the mirror plane u^T x = 0 have Qx = x. Notice that Q^T = Q^{-1} = Q.
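
Both properties can be checked on a small example. A sketch (the particular unit vector is an illustrative assumption):

```python
u = [3 / 5, 4 / 5]                   # unit vector: (3/5)^2 + (4/5)^2 = 1
# Q = I - 2 u u^T
Q = [[(1 if i == j else 0) - 2 * u[i] * u[j] for j in range(2)]
     for i in range(2)]

def apply(M, x):
    return [sum(M[i][j] * x[j] for j in range(2)) for i in range(2)]

Qu = apply(Q, u)
assert all(abs(Qu[i] + u[i]) < 1e-12 for i in range(2))   # Qu = -u

x = [-4 / 5, 3 / 5]                  # in the mirror plane: u^T x = 0
Qx = apply(Q, x)
assert all(abs(Qx[i] - x[i]) < 1e-12 for i in range(2))   # Qx = x
```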

Row space C(A^T) = all combinations of rows of A.
Column vectors by convention.

Semidefinite matrix A.
(Positive) semidefinite: all x^T A x ≥ 0, all λ ≥ 0; A = any R^T R.

Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). The minimum cost is attained at a corner.

Subspace S of V.
Any vector space inside V, including V and Z = {zero vector only}.

Trace of A
= sum of diagonal entries = sum of eigenvalues of A. Tr AB = Tr BA.
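
The identity Tr(AB) = Tr(BA) holds even when AB ≠ BA, which a small sketch can verify (the matrices below are illustrative):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(M):
    """Sum of diagonal entries."""
    return sum(M[i][i] for i in range(len(M)))

A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]
AB, BA = matmul(A, B), matmul(B, A)
assert AB != BA                       # the products differ...
assert trace(AB) == trace(BA)         # ...but their traces agree
```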

Tridiagonal matrix T: t_ij = 0 if |i - j| > 1.
T^{-1} has rank 1 above and below the diagonal.