- Chapter 1: Functions and Sequences
- Chapter 1.1: Four Ways to Represent a Function
- Chapter 1.2: A Catalog of Essential Functions
- Chapter 1.3: New Functions from Old Functions
- Chapter 1.4: Exponential Functions
- Chapter 1.5: Logarithms; Semilog and Log-Log Plots
- Chapter 1.6: Sequences and Difference Equations
- Chapter 10: Systems of Linear Differential Equations
- Chapter 10.1: Qualitative Analysis of Linear Systems
- Chapter 10.2: Qualitative Analysis of Linear Systems
- Chapter 10.3: Applications
- Chapter 10.4: Systems of Nonlinear Differential Equations
- Chapter 2: Limits
- Chapter 2.1: Limits of Sequences
- Chapter 2.2: Limits of Functions at Infinity
- Chapter 2.3: Limits of Functions at Finite Numbers
- Chapter 2.4: Limits: Algebraic Methods
- Chapter 2.5: Continuity
- Chapter 3: Derivatives
- Chapter 3.1: Derivatives and Rates of Change
- Chapter 3.2: The Derivative as a Function
- Chapter 3.3: Basic Differentiation Formulas
- Chapter 3.4: The Chain Rule
- Chapter 3.5: The Chain Rule
- Chapter 3.6: Exponential Growth and Decay
- Chapter 3.7: Derivatives of the Logarithmic and Inverse Tangent Functions
- Chapter 3.8: Linear Approximations and Taylor Polynomials
- Chapter 4: Applications of Derivatives
- Chapter 4.1: Maximum and Minimum Values
- Chapter 4.2: How Derivatives Affect the Shape of a Graph
- Chapter 4.3: L'Hospital's Rule: Comparing Rates of Growth
- Chapter 4.4: Optimization Problems
- Chapter 4.5: Recursions: Equilibria and Stability
- Chapter 4.6: Antiderivatives
- Chapter 5: Integrals
- Chapter 5.1: Areas, Distances, and Pathogenesis
- Chapter 5.2: The Definite Integral
- Chapter 5.3: The Fundamental Theorem of Calculus
- Chapter 5.4: The Substitution Rule
- Chapter 5.5: Integration by Parts
- Chapter 5.6: Partial Fractions
- Chapter 5.7: Integration Using Tables and Computer Algebra Systems
- Chapter 5.8: Improper Integrals
- Chapter 6: Applications of Integrals
- Chapter 6.1: Areas Between Curves
- Chapter 6.2: Average Values
- Chapter 6.3: Further Applications to Biology
- Chapter 6.4: Volumes
- Chapter 7: Differential Equations
- Chapter 7.1: Modeling with Differential Equations
- Chapter 7.2: Phase Plots, Equilibria, and Stability
- Chapter 7.3: Direction Fields and Euler's Method
- Chapter 7.4: Separable Equations
- Chapter 7.5: Phase Plane Analysis
- Chapter 7.6: Phase Plane Analysis
- Chapter 8: Vectors and Matrix Models
- Chapter 8.1: Coordinate Systems
- Chapter 8.2: Vectors
- Chapter 8.3: The Dot Product
- Chapter 8.4: Matrix Algebra
- Chapter 8.5: Matrices and the Dynamics of Vectors
- Chapter 8.6: Eigenvectors and Eigenvalues
- Chapter 8.7: Eigenvectors and Eigenvalues
- Chapter 8.8: Iterated Matrix Models
- Chapter 9: Multivariable Calculus
- Chapter 9.1: Functions of Several Variables
- Chapter 9.2: Partial Derivatives
- Chapter 9.3: Tangent Planes and Linear Approximations
- Chapter 9.4: The Chain Rule
- Chapter 9.5: Directional Derivatives and the Gradient Vector
- Chapter 9.6: Maximum and Minimum Values
Biocalculus: Calculus for Life Sciences 1st Edition - Solutions by Chapter
Back substitution.
Upper triangular systems are solved in reverse order, x_n to x_1.
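A minimal NumPy sketch of back substitution (the matrix and right-hand side are made-up examples):

```python
import numpy as np

def back_substitute(U, b):
    """Solve Ux = b for upper triangular U, working backward from x_n to x_1."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # subtract the contributions of the already-known x's, then divide by the pivot
        x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0],
              [0.0, 3.0]])
b = np.array([4.0, 6.0])
x = back_substitute(U, b)   # x_2 = 2 first, then x_1 = (4 - 1*2)/2 = 1
```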
Big formula for n by n determinants.
det(A) is a sum of n! terms. For each term: multiply one entry from each row and column of A, rows in order 1, ..., n and column order given by a permutation P. Each of the n! P's has a + or − sign.
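The big formula can be spelled out directly with `itertools.permutations` (practical only for small n, since there are n! terms); the sign of each P is found by counting row exchanges:

```python
from itertools import permutations

def big_formula_det(A):
    """det(A) as a sum of n! signed products, one entry from each row and column."""
    n = len(A)
    total = 0.0
    for perm in permutations(range(n)):
        # parity of perm: count the exchanges needed to sort it back to identity
        p = list(perm)
        sign = 1
        for i in range(n):
            if p[i] != i:
                j = p.index(i)
                p[i], p[j] = p[j], p[i]
                sign = -sign
        prod = 1.0
        for i in range(n):
            prod *= A[i][perm[i]]   # rows in order 1..n, columns given by perm
        total += sign * prod
    return total

# 2-by-2 check: det = 1*4 - 2*3 = -2
d = big_formula_det([[1, 2], [3, 4]])
```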
A matrix can be partitioned into matrix blocks, by cuts between rows and/or between columns. Block multiplication ofAB is allowed if the block shapes permit.
Characteristic equation det(A − λI) = 0.
The n roots are the eigenvalues of A.
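A small illustration with NumPy and a made-up symmetric 2-by-2 matrix: the roots of the characteristic polynomial agree with the computed eigenvalues.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
# det(A - lam*I) = lam^2 - 4*lam + 3 = (lam - 1)(lam - 3)
roots = np.roots([1.0, -4.0, 3.0])   # roots of the characteristic polynomial
eigs = np.linalg.eigvals(A)          # eigenvalues computed directly
```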
cond(A) = c(A) = ||A|| ||A^-1|| = σ_max/σ_min. In Ax = b, the relative change ||δx||/||x|| is less than cond(A) times the relative change ||δb||/||b||. Condition numbers measure the sensitivity of the output to changes in the input.
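A sketch of σ_max/σ_min with NumPy, on a deliberately ill-scaled (made-up) diagonal matrix:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 0.001]])
sigma = np.linalg.svd(A, compute_uv=False)   # singular values, largest first
cond = sigma[0] / sigma[-1]                  # sigma_max / sigma_min = 1000 here
```

This ratio is exactly what `np.linalg.cond(A, 2)` returns for the 2-norm.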
Cramer's Rule for Ax = b.
B_j has b replacing column j of A; x_j = det(B_j)/det(A).
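Cramer's Rule translates directly into code (a sketch with a made-up 2-by-2 system; it is far more expensive than elimination for large n):

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b by x_j = det(B_j)/det(A), with b replacing column j of A."""
    n = len(b)
    dA = np.linalg.det(A)
    x = np.empty(n)
    for j in range(n):
        Bj = A.copy()
        Bj[:, j] = b                      # column j of A replaced by b
        x[j] = np.linalg.det(Bj) / dA
    return x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 4.0])
x = cramer(A, b)
```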
Fourier matrix F.
Entries F_jk = e^{2πijk/n} give orthogonal columns: conj(F)^T F = nI. Then y = Fc is the (inverse) Discrete Fourier Transform y_j = Σ_k c_k e^{2πijk/n}.
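Building F for a small n with NumPy shows both properties (n = 4 is an arbitrary choice):

```python
import numpy as np

n = 4
J, K = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(2j * np.pi * J * K / n)   # F_jk = e^{2*pi*i*j*k/n}
G = F.conj().T @ F                   # orthogonal columns: conj(F)^T F = n*I
c = np.array([1.0, 0.0, 0.0, 0.0])
y = F @ c                            # inverse DFT of c: here the all-ones vector
```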
Gauss-Jordan method.
Invert A by row operations on [A I] to reach [I A^-1].
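A minimal sketch of the row reduction (no row exchanges, so it assumes nonzero pivots; the test matrix is made up):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Row-reduce [A I] to [I A^-1]; assumes every pivot is nonzero."""
    n = len(A)
    M = np.hstack([A.astype(float), np.eye(n)])
    for i in range(n):
        M[i] /= M[i, i]                  # scale the pivot row so the pivot is 1
        for r in range(n):
            if r != i:
                M[r] -= M[r, i] * M[i]   # clear column i in every other row
    return M[:, n:]                      # right half is now A^-1

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
Ainv = gauss_jordan_inverse(A)
```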
Hessenberg matrix H.
Triangular matrix with one extra nonzero adjacent diagonal.
Hypercube matrix P.
Row n + 1 counts corners, edges, faces, ... of a cube in R^n.
Iterative method.
A sequence of steps intended to approach the desired solution.
Krylov subspace Kj(A, b).
The subspace spanned by b, Ab, ..., A^{j-1}b. Numerical methods approximate A^-1 b by x_j with residual b − Ax_j in this subspace. A good basis for K_j requires only multiplication by A at each step.
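A sketch of the idea with NumPy and a made-up matrix; production Krylov solvers (CG, GMRES) also orthogonalize the basis at each step, which this sketch skips:

```python
import numpy as np

def krylov_basis(A, b, j):
    """Columns b, Ab, ..., A^{j-1} b: each new column costs one multiplication by A."""
    cols = [b]
    for _ in range(j - 1):
        cols.append(A @ cols[-1])
    return np.column_stack(cols)

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 0.0])
K2 = krylov_basis(A, b, 2)
# best x_j = K2 @ y in the subspace: minimize the residual b - A @ (K2 @ y)
y, *_ = np.linalg.lstsq(A @ K2, b, rcond=None)
xj = K2 @ y
```

For this 2-by-2 example K_2 already spans the whole plane, so x_j is the exact solution.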
Least squares solution x̂.
The vector x̂ that minimizes the error ||e||^2 solves A^T A x̂ = A^T b. Then e = b − Ax̂ is orthogonal to all columns of A.
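The normal equations in NumPy, on a made-up tall system (3 equations, 2 unknowns):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])
xhat = np.linalg.solve(A.T @ A, A.T @ b)   # normal equations A^T A x = A^T b
e = b - A @ xhat                           # error, orthogonal to every column of A
```

In practice `np.linalg.lstsq` (QR/SVD based) is preferred over forming A^T A explicitly.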
Markov matrix M.
All m_ij ≥ 0 and each column sum is 1. Largest eigenvalue λ = 1. If m_ij > 0, the columns of M^k approach the steady state eigenvector: Ms = s > 0.
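Repeated multiplication by a (made-up) positive Markov matrix drives any starting vector to the steady state:

```python
import numpy as np

M = np.array([[0.8, 0.3],
              [0.2, 0.7]])      # all entries positive, each column sums to 1
x = np.array([1.0, 0.0])
for _ in range(100):
    x = M @ x                   # powers of M pull x toward the steady state s
# steady state here: s = (0.6, 0.4), satisfying M s = s (eigenvalue 1)
```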
Orthogonal matrix Q.
Square matrix with orthonormal columns, so Q^T = Q^-1. Q preserves lengths and angles: ||Qx|| = ||x|| and (Qx)^T (Qy) = x^T y. All |λ| = 1, with orthogonal eigenvectors. Examples: rotation, reflection, permutation.
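A rotation matrix (with an arbitrary angle) checks all three properties numerically:

```python
import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation: orthonormal columns
x = np.array([3.0, 4.0])
# Q^T = Q^-1, lengths are preserved, and every eigenvalue lies on the unit circle
```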
Permutation matrix P.
There are n! orders of 1, ..., n. The n! P's have the rows of I in those orders. PA puts the rows of A in the same order. P is even or odd (det P = 1 or −1) based on the number of row exchanges to reach I.
Polar decomposition A = Q H.
Orthogonal Q times positive (semi)definite H.
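One standard way to get the polar factors is through the SVD A = UΣV^T: then Q = UV^T and H = VΣV^T. A sketch with a made-up matrix:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
U, s, Vt = np.linalg.svd(A)
Q = U @ Vt                        # orthogonal factor
H = Vt.T @ np.diag(s) @ Vt        # symmetric positive (semi)definite factor
# A = Q H recovers the original matrix
```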
Reflection matrix (Householder) Q = I − 2uu^T.
Unit vector u is reflected to Qu = −u. All x in the mirror plane u^T x = 0 have Qx = x. Notice Q^T = Q^-1 = Q.
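A quick check with a made-up unit vector u (here the first coordinate axis, so the mirror is the plane x_1 = 0):

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])              # unit vector normal to the mirror
Q = np.eye(3) - 2.0 * np.outer(u, u)       # Householder reflection I - 2uu^T
v = np.array([0.0, 2.0, 5.0])              # lies in the mirror: u^T v = 0
# Q flips u, fixes v, and is its own transpose and inverse
```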
Similar matrices A and B.
Every B = M^-1 AM has the same eigenvalues as A.
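A numerical spot check with made-up A and invertible M:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])        # triangular, so eigenvalues 2 and 3
M = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.linalg.inv(M) @ A @ M      # similar to A: same eigenvalues
```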
Stiffness matrix K.
If x gives the movements of the nodes, Kx gives the internal forces. K = A^T C A where C has spring constants from Hooke's Law and Ax = stretching.
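A sketch for a hypothetical line of two springs fixed at the top (A is the stretching/difference matrix, C holds made-up spring constants):

```python
import numpy as np

A = np.array([[ 1.0, 0.0],
              [-1.0, 1.0]])       # stretching of each spring from node movements
C = np.diag([2.0, 3.0])           # spring constants (Hooke's Law)
K = A.T @ C @ A                   # stiffness matrix, symmetric positive definite
x = np.array([0.1, 0.2])          # node movements
f = K @ x                         # internal forces at the nodes
```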