 8.1: What is the difference between a vector and a scalar?
 8.2: How do you add two vectors geometrically? How do you add them algeb...
 8.3: If a is a vector and c is a scalar, how is ca related to a geometri...
 8.4: How do you find the vector from one point to another algebraically?
 8.5: How do you find the dot product a · b of two vectors if you know th...
 8.6: How are dot products useful?
 8.7: Write expressions for the scalar and vector projections of b onto a...
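As a numerical companion to Question 8.7, the two projection formulas can be checked with NumPy (the vectors a and b below are arbitrary choices, not from the text):

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([5.0, 0.0])

# Scalar projection of b onto a: comp_a b = (a . b) / |a|
comp = np.dot(a, b) / np.linalg.norm(a)

# Vector projection of b onto a: proj_a b = ((a . b) / (a . a)) a
proj = (np.dot(a, b) / np.dot(a, a)) * a
```

Here comp is 3 and proj is (1.8, 2.4): the piece of b that points along a.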
 8.8: What is the equation of a sphere?
 8.9: a) How do you tell if two vectors are parallel? (b) How do you tell...
 8.10: What is a symmetric matrix?
 8.11: If a matrix A rotates vectors counterclockwise by θ degrees, what doe...
 8.12: If a 2 × 2 matrix has complex eigenvalues, what does this matrix do...
 8.13: Suppose A is a matrix and k is a positive integer. What does the no...
 8.14: What is the relationship between the inverse of a matrix and the de...
 8.15: Explain what eigenvalues and eigenvectors are.
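A quick numerical illustration of eigenvalues and eigenvectors for Question 8.15 (the 2 × 2 matrix below is an arbitrary example; NumPy is assumed):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Each eigenvector v is only scaled by A: A v = lambda v
eigenvalues, eigenvectors = np.linalg.eig(A)
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```

For this matrix the eigenvalues are 3 and 1, with eigenvectors along (1, 1) and (1, -1).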
 8.16: Why does a 2 × 2 matrix have two eigenvalues?
 8.17: Suppose a 2 × 2 matrix with real entries has complex eigenvalues. W...
 8.18: Why is it sometimes useful to write a matrix A in the form A = PDP^-1...
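One payoff of the factorization A = PDP^-1 asked about in Question 8.18 is that powers of A only require powers of the diagonal D; a sketch with an arbitrary symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# A = P D P^-1 with the eigenvalues on the diagonal of D
eigvals, P = np.linalg.eig(A)

# A^5 = P D^5 P^-1: only the diagonal entries get raised to the power
A5 = P @ np.diag(eigvals**5) @ np.linalg.inv(P)
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
```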
 8.19: What does the Perron-Frobenius Theorem say, and why is it useful?
 8.20: Stage-structured population Suppose a population contains juveniles...
 8.21: 21-24 Solve the system of equations. 21. 3x - y = 2, ...
 8.22: 21-24 Solve the system of equations. 22. 2x + y = 2, x + 7y = 4, 4x - y = 4
 8.23: 21-24 Solve the system of equations. 23. 7x + y = 0
 8.24: 21-24 Solve the system of equations. 24. x - 2y = -1, 4x + 2y = 0, 3x - 6y = 4
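Small linear systems like those in Exercises 21-24 can be checked numerically; a sketch using a made-up system x + 2y = 5, 3x - y = 1 (not one of the textbook's):

```python
import numpy as np

# Coefficients of  x + 2y = 5
#                 3x -  y = 1
A = np.array([[1.0, 2.0],
              [3.0, -1.0]])
b = np.array([5.0, 1.0])

x = np.linalg.solve(A, b)   # unique solution since det(A) != 0
```

The solution is x = 1, y = 2; substituting back into both equations confirms it.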
 8.25: Leslie matrix Consider the following model for an age-structured po...
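The Leslie-matrix idea can be sketched numerically; the fecundities and survival rate below are invented for illustration, not the values in Exercise 25:

```python
import numpy as np

# Hypothetical 2-class Leslie matrix: fecundities on the top row,
# juvenile survival probability on the subdiagonal.
L = np.array([[1.0, 4.0],
              [0.5, 0.0]])
n = np.array([10.0, 10.0])   # initial juveniles, adults

for _ in range(50):
    n = L @ n                # project the population one step forward

# The long-run growth factor approaches the dominant eigenvalue of L
growth = (L @ n)[0] / n[0]
lam_max = max(np.linalg.eigvals(L).real)
```

Here the dominant eigenvalue is 2, so the population eventually doubles each time step regardless of the starting vector.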
 8.26: CAT scans Computed Axial Tomography uses narrow X-ray beams directe...
 8.27: 27-30 Find the eigenvectors and eigenvalues for the matrix. 27. A = ...
 8.28: 27-30 Find the eigenvectors and eigenvalues for the matrix. 28. A = ...
 8.29: 27-30 Find the eigenvectors and eigenvalues for the matrix. 29. A = ...
 8.30: 27-30 Find the eigenvectors and eigenvalues for the matrix. 30. A = ...
 8.31: Suppose A is a 2 × 2 matrix whose columns sum to one. Show that 1 i...
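Exercise 31's claim (columns summing to one force an eigenvalue 1, since the row vector (1, 1) is then a left eigenvector) can be spot-checked numerically; the entries below are arbitrary:

```python
import numpy as np

A = np.array([[0.7, 0.2],
              [0.3, 0.8]])
assert np.allclose(A.sum(axis=0), 1.0)   # each column sums to one

# (1, 1) A = (1, 1), so 1 is an eigenvalue of A^T and hence of A
eigvals = np.linalg.eigvals(A)
assert np.isclose(eigvals, 1.0).any()
```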
 8.32: 32-35 Express the solution to the recursion n_{t+1} = A n_t in terms of the...
 8.33: 32-35 Express the solution to the recursion n_{t+1} = A n_t in terms of the...
 8.34: 32-35 Express the solution to the recursion n_{t+1} = A n_t in terms of the...
 8.35: 32-35 Express the solution to the recursion n_{t+1} = A n_t in terms of the...
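For Exercises 32-35, the solution of n_{t+1} = A n_t has the form n_t = c_1 λ_1^t v_1 + c_2 λ_2^t v_2, where the c_i expand n_0 in the eigenvector basis; a numerical check with an arbitrary A and n_0:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
n0 = np.array([1.0, 1.0])

# Expand n0 in the eigenvector basis: n0 = c1 v1 + c2 v2
lams, V = np.linalg.eig(A)
c = np.linalg.solve(V, n0)

t = 6
n_closed = V @ (c * lams**t)                 # c1 lam1^t v1 + c2 lam2^t v2
n_iter = np.linalg.matrix_power(A, t) @ n0   # apply A six times directly
assert np.allclose(n_closed, n_iter)
```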
 8.36: Dispersal In Exercise 8.5.9 we modeled the dispersal of dragonflies...
 8.37: Succession In Exercise 8.5.15 we modeled ecological succession usin...
 8.38: Inbreeding In Exercise 8.5.19 we modeled inbreeding using the recur...
 8.39: A sequence is given recursively by the equation a_n = (a_{n-1} + a_{n-2})/2 ...
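Assuming the garbled recursion in 8.39 reads a_n = (a_{n-1} + a_{n-2})/2, its characteristic roots are 1 and -1/2, so the sequence converges to (a_0 + 2 a_1)/3. A quick check with a_0 = 0, a_1 = 1:

```python
# Iterate a_n = (a_{n-1} + a_{n-2}) / 2 starting from a_0 = 0, a_1 = 1
a_prev, a_curr = 0.0, 1.0
for _ in range(100):
    a_prev, a_curr = a_curr, (a_curr + a_prev) / 2

# The limit (a_0 + 2*a_1) / 3 = 2/3; the error shrinks like (1/2)^n
assert abs(a_curr - 2 / 3) < 1e-12
```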
Solutions for Chapter 8: Vectors and Matrix Models
Full solutions for Biocalculus: Calculus for Life Sciences, 1st Edition
ISBN: 9781133109631
Chapter 8: Vectors and Matrix Models includes 39 full step-by-step solutions.

Adjacency matrix of a graph.
Square matrix with a_ij = 1 when there is an edge from node i to node j; otherwise a_ij = 0. A = A^T when edges go both ways (undirected).
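A small sketch of building an adjacency matrix for a directed graph (the node count and edges are arbitrary):

```python
import numpy as np

n = 3
edges = [(0, 1), (1, 2), (2, 0), (0, 2)]   # directed edges i -> j

A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1

# A handy consequence: (A^k)[i, j] counts walks of length k from i to j
walks2 = np.linalg.matrix_power(A, 2)
```

Here walks2[0, 2] is 1, coming from the single two-step walk 0 -> 1 -> 2.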

Basis for V.
Independent vectors v_1, ..., v_d whose linear combinations give each vector in V as v = c_1 v_1 + ... + c_d v_d. V has many bases; each basis gives unique c's.

Block matrix.
A matrix can be partitioned into matrix blocks, by cuts between rows and/or between columns. Block multiplication of AB is allowed if the block shapes permit.

Condition number
cond(A) = c(A) = ||A|| ||A^-1|| = sigma_max / sigma_min. In Ax = b, the relative change ||δx|| / ||x|| is less than cond(A) times the relative change ||δb|| / ||b||. Condition numbers measure the sensitivity of the output to change in the input.
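NumPy computes this directly; np.linalg.cond defaults to the 2-norm, where cond(A) is the ratio of largest to smallest singular value (the matrix below is an arbitrary example):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 0.01]])

c = np.linalg.cond(A)                      # sigma_max / sigma_min = 100
s = np.linalg.svd(A, compute_uv=False)     # singular values of A
assert np.isclose(c, s.max() / s.min())
```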

Free columns of A.
Columns without pivots; these are combinations of earlier columns.

Free variable Xi.
Column i has no pivot in elimination. We can give the n - r free variables any values; then Ax = b determines the r pivot variables (if solvable!).

Gram-Schmidt orthogonalization A = QR.
Independent columns in A, orthonormal columns in Q. Each column q_j of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.
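np.linalg.qr returns this factorization (its sign convention need not give diag(R) > 0); a sketch with an arbitrary 3 × 2 matrix:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0],
              [1.0, 0.0]])

Q, R = np.linalg.qr(A)                     # A = Q R
assert np.allclose(Q.T @ Q, np.eye(2))     # columns of Q are orthonormal
assert np.allclose(R, np.triu(R))          # R is upper triangular
assert np.allclose(Q @ R, A)               # factorization reproduces A
```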

Hermitian matrix A^H = Ā^T = A.
Complex analog a_ji = ā_ij of a symmetric matrix.

Inverse matrix A^-1.
Square matrix with A^-1 A = I and A A^-1 = I. No inverse if det A = 0, which happens exactly when rank(A) < n, i.e. Ax = 0 for some nonzero vector x. The inverses of AB and A^T are B^-1 A^-1 and (A^-1)^T. Cofactor formula: (A^-1)_ij = C_ji / det A.

Least squares solution x̂.
The vector x̂ that minimizes the error ||e||^2 solves A^T A x̂ = A^T b. Then e = b - A x̂ is orthogonal to all columns of A.
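A numeric sketch of the normal equations, with an arbitrary overdetermined system (np.linalg.lstsq solves the same problem more stably):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 0.0, 2.0])

# Normal equations: A^T A xhat = A^T b
xhat = np.linalg.solve(A.T @ A, A.T @ b)
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(xhat, x_lstsq)

# The error e = b - A xhat is orthogonal to every column of A
e = b - A @ xhat
assert np.allclose(A.T @ e, 0.0)
```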

Orthonormal vectors q_1, ..., q_n.
Dot products are q_i^T q_j = 0 if i ≠ j and q_i^T q_i = 1. The matrix Q with these orthonormal columns has Q^T Q = I. If m = n then Q^T = Q^-1 and q_1, ..., q_n is an orthonormal basis for R^n: every v = Σ (v^T q_j) q_j.

Particular solution x_p.
Any solution to Ax = b; often x_p has free variables = 0.

Positive definite matrix A.
Symmetric matrix with positive eigenvalues and positive pivots. Definition: x^T A x > 0 unless x = 0. Then A = L D L^T with diag(D) > 0.
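A quick way to test positive definiteness in practice is attempting a Cholesky factorization A = L L^T, which exists exactly for positive definite matrices (the matrix below is an arbitrary example):

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

assert (np.linalg.eigvalsh(A) > 0).all()   # all eigenvalues positive

L = np.linalg.cholesky(A)                  # succeeds only if A is pos. def.
assert np.allclose(L @ L.T, A)
```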

Rank one matrix A = uv^T ≠ 0.
Column and row spaces = lines cu and cv.

Similar matrices A and B.
Every B = M^-1 A M has the same eigenvalues as A.

Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!

Skew-symmetric matrix K.
The transpose is -K, since K_ij = -K_ji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, e^{Kt} is an orthogonal matrix.

Spanning set.
Combinations of v_1, ..., v_m fill the space. The columns of A span C(A)!

Stiffness matrix
If x gives the movements of the nodes, Kx gives the internal forces. K = A^T C A where C has spring constants from Hooke's Law and Ax = stretching.

Vandermonde matrix V.
Vc = b gives the coefficients of p(x) = c_0 + ... + c_{n-1} x^{n-1} with p(x_i) = b_i. V_ij = (x_i)^{j-1} and det V = product of (x_k - x_i) for k > i.
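np.vander builds V (pass increasing=True to get the V_ij = x_i^(j-1) convention used here); solving Vc = b recovers interpolation coefficients. The points below are an arbitrary example:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])
b = np.array([1.0, 3.0, 7.0])            # values to interpolate

V = np.vander(x, increasing=True)        # V[i, j] = x[i] ** j
c = np.linalg.solve(V, b)                # coefficients of c0 + c1 x + c2 x^2
assert np.allclose(V @ c, b)             # p(x_i) = b_i at every point
```

The interpolating polynomial here is p(x) = 1 + x + x^2.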