Advanced Engineering Mathematics 7th Edition - Solutions by Chapter

- Chapter 1: First-Order Differential Equations
- Chapter 2: Linear Second-Order Equations
- Chapter 3: The Laplace Transform
- Chapter 4: Series Solutions
- Chapter 5: Approximation of Solutions
- Chapter 6: Vectors and Vector Spaces
- Chapter 7: Matrices and Linear Systems
- Chapter 8: Determinants
- Chapter 9: Eigenvalues, Diagonalization, and Special Matrices
- Chapter 10: Systems of Linear Differential Equations
- Chapter 11: Vector Differential Calculus
- Chapter 12: Vector Integral Calculus
- Chapter 13: Fourier Series
- Chapter 14: Fourier Series
- Chapter 15: Special Functions and Eigenfunction Expansions
- Chapter 16: Wave Motion on an Interval
- Chapter 17: The Heat Equation
- Chapter 18: The Potential Equation
- Chapter 19: Complex Numbers and Functions
- Chapter 20: Complex Integration
- Chapter 21: Complex Integration
- Chapter 22: The Residue Theorem
- Chapter 23: Conformal Mappings and Applications
Cofactor C_ij.
Remove row i and column j; multiply the determinant of what remains by (-1)^(i+j).
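This rule gives a small recursive determinant by cofactor expansion along the first row; a minimal sketch, with a function name and test matrices of my own choosing (exponential cost, illustration only):

```python
# Determinant by cofactor expansion along row 0.
# The minor deletes row 0 and column j; the cofactor
# multiplies det(minor) by the sign (-1)^(0 + j).

def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]  # remove row 0, column j
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))  # prints -2
```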
Complex conjugate z̄.
z̄ = a - ib for any complex number z = a + ib. Then z z̄ = |z|^2.
Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2)x^T A x - x^T b over growing Krylov subspaces.
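A minimal NumPy sketch of those steps, assuming A is symmetric positive definite (the function and the small test system are my own, not from the text):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Solve Ax = b for SPD A by minimizing (1/2) x^T A x - x^T b."""
    x = np.zeros_like(b)
    r = b - A @ x            # residual = negative gradient
    p = r.copy()             # first search direction
    for _ in range(len(b)):  # at most n steps in exact arithmetic
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)        # exact line search along p
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)  # keeps directions A-conjugate
        p = r_new + beta * p
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```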
Diagonal matrix D.
d_ij = 0 if i ≠ j. Block-diagonal: zero outside square diagonal blocks D_ii.
Dimension of vector space
dim(V) = number of vectors in any basis for V.
Elimination.
A sequence of row operations that reduces A to an upper triangular U or to the reduced form R = rref(A). Then A = LU with multipliers l_ij in L, or PA = LU with row exchanges in P, or EA = R with an invertible E.
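A sketch of elimination that records the multipliers l_ij in L, assuming no row exchanges are needed (the example matrix is my own):

```python
import numpy as np

def lu_no_exchanges(A):
    """Elimination to upper triangular U; multipliers l_ij stored in L."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]    # multiplier l_ij
            U[i, :] -= L[i, j] * U[j, :]   # row i minus l_ij times row j
    return L, U

A = np.array([[2.0, 1.0], [6.0, 8.0]])
L, U = lu_no_exchanges(A)   # L = [[1, 0], [3, 1]], U = [[2, 1], [0, 5]]
```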
Four Fundamental Subspaces C(A), N(A), C(A^T), N(A^T).
Use Ā^T for complex A.
Hermitian matrix A^H = Ā^T = A.
Complex analog a_ji = ā_ij of a symmetric matrix.
Inverse matrix A^-1.
Square matrix with A^-1 A = I and A A^-1 = I. No inverse if det A = 0 and rank(A) < n and Ax = 0 for a nonzero vector x. The inverses of AB and A^T are B^-1 A^-1 and (A^-1)^T. Cofactor formula: (A^-1)_ij = C_ji / det A.
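The cofactor formula can be checked numerically; a sketch with an example matrix of my own (factorial-style cost, illustration only):

```python
import numpy as np

def inverse_by_cofactors(A):
    """(A^-1)_ij = C_ji / det A, with C the matrix of cofactors."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # Minor: delete row i and column j, then take its determinant
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T / np.linalg.det(A)   # note the transpose: C_ji, not C_ij

A = np.array([[4.0, 7.0], [2.0, 6.0]])
```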
Kirchhoff's Laws.
Current Law: net current (in minus out) is zero at each node. Voltage Law: potential differences (voltage drops) add to zero around any closed loop.
Matrix multiplication AB.
The i, j entry of AB is (row i of A) · (column j of B) = Σ_k a_ik b_kj. By columns: column j of AB is A times column j of B. By rows: row i of A multiplies B. Columns times rows: AB = sum of (column k of A)(row k of B). All these equivalent definitions come from the rule that (AB)x = A(Bx).
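The equivalent pictures can be checked side by side; the matrices here are my own small example:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

# Entry picture: (AB)_ij = (row i of A) . (column j of B)
entries = np.array([[A[i, :] @ B[:, j] for j in range(2)] for i in range(2)])

# Column picture: column j of AB is A times column j of B
columns = np.column_stack([A @ B[:, j] for j in range(2)])

# Columns times rows: AB = sum over k of (column k of A)(row k of B)
outer_sum = sum(np.outer(A[:, k], B[k, :]) for k in range(2))
```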
Multiplicities AM and GM.
The algebraic multiplicity AM of λ is the number of times λ appears as a root of det(A - λI) = 0. The geometric multiplicity GM is the number of independent eigenvectors for λ (= dimension of the eigenspace).
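A small NumPy check on an example of my own: λ = 5 is a double root (AM = 2), but the eigenspace is only a line (GM = 1), so this matrix is not diagonalizable:

```python
import numpy as np

A = np.array([[5.0, 1.0], [0.0, 5.0]])

# AM: lambda = 5 is a double root of det(A - lambda I) = (5 - lambda)^2
eigvals = np.linalg.eigvals(A)

# GM: dimension of the eigenspace N(A - 5I) = n - rank(A - 5I)
GM = 2 - np.linalg.matrix_rank(A - 5 * np.eye(2))   # GM = 1 < AM = 2
```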
Normal matrix.
If N N^T = N^T N, then N has orthonormal (complex) eigenvectors.
Orthogonal subspaces V and W.
Every v in V is orthogonal to every w in W.
Pivot columns of A.
Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.
Polar decomposition A = Q H.
Orthogonal Q times positive (semi)definite H.
Rank one matrix A = uv^T ≠ 0.
Column and row spaces = lines cu and cv.
Reduced row echelon form R = rref(A).
Pivots = 1; zeros above and below pivots; the r nonzero rows of R give a basis for the row space of A.
Rotation matrix R.
R = [c -s; s c] rotates the plane by θ, and R^-1 = R^T rotates back by -θ. Eigenvalues are e^{iθ} and e^{-iθ}; eigenvectors are (1, ±i). Here c, s = cos θ, sin θ.
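A quick numerical check of these facts (θ = π/3 is my own choice):

```python
import numpy as np

theta = np.pi / 3
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s], [s, c]])   # rotation of the plane by theta

back = R.T                        # = R^-1, rotation by -theta
eigvals = np.linalg.eigvals(R)    # e^{i theta}, e^{-i theta} on the unit circle
```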
Special solutions to As = O.
One free variable is s_i = 1, other free variables = 0.
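A concrete example (the matrix is my own): after elimination the pivot columns are 1 and 3, the free variables are x2 and x4, and each special solution sets one free variable to 1 and the other to 0:

```python
import numpy as np

A = np.array([[1.0, 2.0, 2.0, 4.0],
              [1.0, 2.0, 3.0, 6.0]])

# rref(A) = [[1, 2, 0, 0],
#            [0, 0, 1, 2]]   ->   x1 = -2 x2  and  x3 = -2 x4

s1 = np.array([-2.0, 1.0, 0.0, 0.0])   # free x2 = 1, free x4 = 0
s2 = np.array([0.0, 0.0, -2.0, 1.0])   # free x4 = 1, free x2 = 0
```

Together s1 and s2 are a basis for the nullspace N(A).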