- Chapter 1: First-Order Differential Equations
- Chapter 2: Linear Second-Order Equations
- Chapter 3: The Laplace Transform
- Chapter 4: Series Solutions
- Chapter 5: Approximation of Solutions
- Chapter 6: Vectors and Vector Spaces
- Chapter 7: Matrices and Linear Systems
- Chapter 8: Determinants
- Chapter 9: Eigenvalues, Diagonalization, and Special Matrices
- Chapter 10: Systems of Linear Differential Equations
- Chapter 11: Vector Differential Calculus
- Chapter 12: Vector Integral Calculus
- Chapter 13: Fourier Series
- Chapter 14: Fourier Series
- Chapter 15: Special Functions and Eigenfunction Expansions
- Chapter 16: Wave Motion on an Interval
- Chapter 17: The Heat Equation
- Chapter 18: The Potential Equation
- Chapter 19: Complex Numbers and Functions
- Chapter 20: Complex Integration
- Chapter 21: Complex Integration
- Chapter 22: The Residue Theorem
- Chapter 23: Conformal Mappings and Applications
Advanced Engineering Mathematics 7th Edition - Solutions by Chapter
Affine transformation Tv = Av + v0 = linear transformation plus shift.
Companion matrix. Put c1, ..., cn in row n and put n − 1 ones just above the main diagonal. Then det(A − λI) = ±(c1 + c2λ + c3λ^2 + ··· + cnλ^(n−1) − λ^n).
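As a quick sanity check on this formula, a small companion matrix can be built in NumPy (a minimal sketch, not from the book; the coefficients below are illustrative, chosen so that the characteristic polynomial has roots 1, 2, 3):

```python
import numpy as np

c = np.array([6.0, -11.0, 6.0])  # illustrative: det(A - xI) = -(x^3 - 6x^2 + 11x - 6)
n = len(c)
A = np.zeros((n, n))
A[np.arange(n - 1), np.arange(1, n)] = 1.0  # n - 1 ones just above the main diagonal
A[-1, :] = c                                # c1, ..., cn in row n
eigs = np.sort(np.linalg.eigvals(A).real)   # roots of the characteristic polynomial
```

The eigenvalues come out as 1, 2, 3, the roots of λ^3 − 6λ^2 + 11λ − 6.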
Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2)x^T Ax − x^T b over growing Krylov subspaces.
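The steps above can be sketched in NumPy (a minimal illustration under the stated assumption that A is symmetric positive definite; the example matrix is chosen here, not taken from the text):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Minimize (1/2) x^T A x - x^T b for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x            # residual = negative gradient
    p = r.copy()             # first search direction
    rs_old = r @ r
    for _ in range(len(b)):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # exact line search along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # keep directions A-conjugate
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # small SPD example
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

In exact arithmetic the method converges in at most n steps, since the Krylov subspace fills out R^n.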
Diagonalization Λ = S^(-1)AS. Λ = eigenvalue matrix and S = eigenvector matrix of A. A must have n independent eigenvectors to make S invertible. All A^k = SΛ^k S^(-1).
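The power formula A^k = SΛ^k S^(-1) is easy to verify numerically (a small NumPy check with an illustrative symmetric matrix, which is guaranteed to have independent eigenvectors):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])  # symmetric, so S is invertible
lam, S = np.linalg.eig(A)               # columns of S are eigenvectors of A
k = 5
Ak = S @ np.diag(lam**k) @ np.linalg.inv(S)  # A^k = S Lambda^k S^{-1}
```

Diagonalizing once makes every power cheap: only the eigenvalues are raised to the k-th power.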
Dot product = Inner product x^T y = x1y1 + ··· + xnyn.
Complex dot product is x̄^T y. Perpendicular vectors have x^T y = 0. (AB)ij = (row i of A) · (column j of B).
Factorization A = LU. If elimination takes A to U without row exchanges, then the lower triangular L with multipliers ℓij (and ℓii = 1) brings U back to A.
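A minimal elimination sketch in NumPy that records the multipliers ℓij (an illustration of the idea, assuming no row exchanges are needed; the example matrix is chosen here):

```python
import numpy as np

def lu_no_exchanges(A):
    """Gaussian elimination without row exchanges: A = L U."""
    U = A.astype(float).copy()
    n = U.shape[0]
    L = np.eye(n)                          # unit diagonal: l_ii = 1
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]    # multiplier l_ij = entry / pivot
            U[i, :] -= L[i, j] * U[j, :]   # subtract l_ij * (pivot row j)
    return L, U

A = np.array([[2.0, 1.0], [6.0, 8.0]])
L, U = lu_no_exchanges(A)
```

Multiplying L times U reassembles A exactly, since each row operation is undone by the stored multiplier.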
Full row rank r = m.
Independent rows, at least one solution to Ax = b, column space is all of R^m. Full rank means full column rank or full row rank.
Hypercube matrix pl.
Row n + 1 counts corners, edges, faces, ... of a cube in R^n.
Kronecker product (tensor product) A ® B.
Blocks aij B, eigenvalues λp(A)λq(B).
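The eigenvalue rule for A ⊗ B can be checked directly with NumPy's `kron` (a small numerical illustration; the matrices are chosen here, triangular so their eigenvalues are visible on the diagonal):

```python
import numpy as np

A = np.array([[1.0, 2.0], [0.0, 3.0]])  # eigenvalues 1, 3
B = np.array([[4.0, 0.0], [1.0, 5.0]])  # eigenvalues 4, 5
K = np.kron(A, B)                       # blocks a_ij * B, a 4x4 matrix

eigs_K = np.sort(np.linalg.eigvals(K).real)
products = np.sort(np.outer(np.linalg.eigvals(A),
                            np.linalg.eigvals(B)).real.ravel())
```

Every eigenvalue of the Kronecker product is a product λp(A)λq(B), here 4, 5, 12, 15.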
Least squares solution x̂.
The vector x̂ that minimizes the error ‖e‖^2 solves A^T A x̂ = A^T b. Then e = b − A x̂ is orthogonal to all columns of A.
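The normal equations and the orthogonality of the error can be verified in a few NumPy lines (an illustrative overdetermined system, fitting a line through three points chosen here):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])          # 3 equations, 2 unknowns
b = np.array([1.0, 2.0, 2.0])

xhat = np.linalg.solve(A.T @ A, A.T @ b)  # normal equations A^T A xhat = A^T b
e = b - A @ xhat                          # error vector
orth = A.T @ e                            # should be zero: e is orthogonal to C(A)
```

`np.linalg.lstsq` returns the same x̂; solving the normal equations just makes the projection explicit.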
Multiplier ℓij. The pivot row j is multiplied by ℓij and subtracted from row i to eliminate the i, j entry: ℓij = (entry to eliminate) / (jth pivot).
Orthogonal subspaces. Every v in V is orthogonal to every w in W.
Pivot columns of A.
Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.
Random matrix rand(n) or randn(n).
MATLAB creates a matrix with random entries, uniformly distributed on [0, 1] for rand and drawn from the standard normal distribution for randn.
Right inverse A^+.
If A has full row rank m, then A^+ = A^T(AA^T)^(-1) has AA^+ = I_m.
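A short NumPy check of this formula (the wide matrix below is an illustrative example with full row rank m = 2):

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])          # 2x3, independent rows
A_plus = A.T @ np.linalg.inv(A @ A.T)    # right inverse A^+ = A^T (A A^T)^{-1}
check = A @ A_plus                       # should be the 2x2 identity
```

Note that A^+ A is not the identity here: a wide matrix can have a right inverse but not a left inverse.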
Row space C(A^T) = all combinations of rows of A.
Column vectors by convention.
Singular Value Decomposition
(SVD) A = UΣV^T = (orthogonal)(diagonal)(orthogonal). First r columns of U and V are orthonormal bases of C(A) and C(A^T), with Av_i = σ_i u_i and singular value σ_i > 0. Last columns are orthonormal bases of the nullspaces.
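The relation Av_i = σ_i u_i can be confirmed with NumPy's `svd` (a quick check on an illustrative 2x2 matrix chosen here):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])
U, s, Vt = np.linalg.svd(A)   # A = U @ diag(s) @ Vt, singular values s[i] > 0

# Each right singular vector v_i (row i of Vt) maps to sigma_i times u_i
pairs_ok = all(np.allclose(A @ Vt[i], s[i] * U[:, i]) for i in range(len(s)))
```

Because this A is invertible, r = 2 and both U and V are full orthonormal bases with no nullspace columns left over.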
Symmetric factorizations A = LDL^T and A = QΛQ^T.
Signs in Λ = signs in D.
Trace of A
= sum of diagonal entries = sum of eigenvalues of A. Tr AB = Tr BA.
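Both trace identities are easy to confirm numerically (a minimal NumPy check on matrices chosen for illustration):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [5.0, 2.0]])

t_AB = np.trace(A @ B)                    # Tr AB
t_BA = np.trace(B @ A)                    # Tr BA, same value
eig_sum = np.linalg.eigvals(A).sum().real # sum of eigenvalues of A
```

Here Tr A = 1 + 4 = 5 equals the sum of the eigenvalues, and Tr AB = Tr BA even though AB ≠ BA.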
Tridiagonal matrix T: tij = 0 if |i − j| > 1.
T^(-1) has rank 1 above and below diagonal.
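This rank-1 structure can be seen numerically: any block of T^(-1) lying strictly off the diagonal has rank 1 (a NumPy illustration using the standard −1, 2, −1 tridiagonal matrix as an assumed example):

```python
import numpy as np

# Illustrative invertible tridiagonal matrix: 2 on the diagonal, -1 beside it
T = (np.diag([2.0] * 4)
     + np.diag([-1.0] * 3, 1)
     + np.diag([-1.0] * 3, -1))
Tinv = np.linalg.inv(T)

# Off-diagonal blocks of T^{-1} (entirely above / below the diagonal)
rank_above = np.linalg.matrix_rank(Tinv[:2, 2:])
rank_below = np.linalg.matrix_rank(Tinv[2:, :2])
```

Even though T^(-1) is a full (dense) matrix, its off-diagonal blocks collapse to rank 1.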