Complex Variables and Applications, 9th Edition - Solutions by Chapter

- Chapter 1: Complex Numbers
- Chapter 2: Analytic Functions
- Chapter 3: Elementary Functions
- Chapter 4: Integrals
- Chapter 5: Series
- Chapter 6: Residues and Poles
- Chapter 7: Applications of Residues
- Chapter 8: Mapping by Elementary Functions
- Chapter 9: Conformal Mapping
- Chapter 10: Applications of Conformal Mapping
- Chapter 11: The Schwarz-Christoffel Transformation
- Chapter 12: Integral Formulas of the Poisson Type

Associative Law (AB)C = A(BC).
Parentheses can be removed to leave ABC.
Back substitution.
Upper triangular systems are solved in reverse order x_n to x_1.
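As an illustration, a minimal NumPy sketch of back substitution (the function name back_substitute and the example system are mine, not from the text):

```python
import numpy as np

def back_substitute(U, b):
    """Solve Ux = b for an upper triangular U, working from x_n back to x_1."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0, 1.0],
              [0.0, 3.0, 2.0],
              [0.0, 0.0, 4.0]])
b = np.array([5.0, 8.0, 4.0])
print(back_substitute(U, b))   # [1. 2. 1.], same as np.linalg.solve(U, b)
```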
Characteristic equation det(A - λI) = 0.
The n roots are the eigenvalues of A.
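A quick numeric illustration (NumPy; the 2 by 2 example matrix is my own): np.poly gives the characteristic polynomial coefficients, and its roots match np.linalg.eigvals.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
coeffs = np.poly(A)            # coefficients of det(lambda*I - A)
print(np.roots(coeffs))        # roots of the characteristic equation: 5 and 2
print(np.linalg.eigvals(A))    # the eigenvalues of A: the same numbers
```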
Column space C(A) = space of all combinations of the columns of A.
Cramer's Rule for Ax = b.
B_j has b replacing column j of A; x_j = det(B_j) / det(A).
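A small sketch of Cramer's Rule in NumPy (the helper name cramer and the 2 by 2 example are mine):

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b by Cramer's Rule: x_j = det(B_j) / det(A)."""
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for j in range(len(b)):
        B_j = A.copy()
        B_j[:, j] = b              # B_j has b replacing column j of A
        x[j] = np.linalg.det(B_j) / det_A
    return x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(cramer(A, b))                # [0.8 1.4], agrees with np.linalg.solve(A, b)
```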
Ellipse (or ellipsoid) x^T A x = 1.
A must be positive definite; the axes of the ellipse are eigenvectors of A, with lengths 1/√λ. (For ||x|| = 1 the vectors y = Ax lie on the ellipse ||A^{-1}y||^2 = y^T (A A^T)^{-1} y = 1 displayed by eigshow; axis lengths σ_i.)
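As a sketch of reading off the axes, an eigendecomposition of a small positive definite matrix (my own example) gives the axis directions and the lengths 1/√λ:

```python
import numpy as np

A = np.array([[5.0, 4.0],
              [4.0, 5.0]])                # positive definite, eigenvalues 9 and 1
lam, V = np.linalg.eigh(A)                # eigenvalues and orthonormal eigenvectors
print("axis directions:\n", V)            # columns of V are the ellipse axes
print("axis lengths:", 1 / np.sqrt(lam))  # 1/sqrt(lambda) for x^T A x = 1
```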
Full row rank r = m.
Independent rows, at least one solution to Ax = b, column space is all of R^m. Full rank means full column rank or full row rank.
Gauss-Jordan method.
Invert A by row operations on [A I] to reach [I A^{-1}].
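A bare-bones sketch of that row reduction in NumPy (no row exchanges or pivoting safeguards, illustrative only):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
n = len(A)
M = np.hstack([A, np.eye(n)])             # the augmented matrix [A I]
for i in range(n):
    M[i] = M[i] / M[i, i]                 # scale the pivot row
    for k in range(n):
        if k != i:
            M[k] = M[k] - M[k, i] * M[i]  # eliminate column i in the other rows
A_inv = M[:, n:]                          # the right half is now A^{-1}
print(A_inv @ A)                          # close to the identity matrix
```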
Hermitian matrix A^H = Ā^T = A.
Complex analog a_ji = ā_ij of a symmetric matrix.
Incidence matrix of a directed graph.
The m by n edge-node incidence matrix has a row for each edge (node i to node j), with entries -1 and 1 in columns i and j.
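A small sketch of building such a matrix for a three-node directed graph (the edge list is my own example):

```python
import numpy as np

edges = [(0, 1), (1, 2), (0, 2)]        # directed edges: node i -> node j
n_nodes = 3
A = np.zeros((len(edges), n_nodes))
for row, (i, j) in enumerate(edges):
    A[row, i] = -1                      # the edge leaves node i
    A[row, j] = 1                       # the edge enters node j
print(A)                                # one row per edge, one column per node
```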
Jordan form J = M^{-1} A M.
If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J_1, ..., J_s). The block J_k is λ_k I_k + N_k, where N_k has 1's on diagonal 1. Each block has one eigenvalue λ_k and one eigenvector.
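SymPy can produce this decomposition; a minimal sketch, assuming Matrix.jordan_form (which returns M and J with A = M J M^{-1}) and using a small example of mine with a repeated eigenvalue:

```python
from sympy import Matrix

A = Matrix([[3, 1],
            [-1, 1]])          # eigenvalue 2 repeated, only one eigenvector
M, J = A.jordan_form()         # A = M * J * M**(-1)
print(J)                       # Matrix([[2, 1], [0, 2]]): a single 2x2 Jordan block
print(M * J * M.inv())         # recovers A
```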
Determinant |A| = det(A).
|A^{-1}| = 1/|A| and |A^T| = |A|. The big formula for det(A) has a sum of n! terms, the cofactor formula uses determinants of size n-1, volume of box = |det(A)|.
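A quick numeric check of these determinant identities (NumPy, random example of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
print(np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A)))  # True
print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))                   # True
```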
Linearly dependent v_1, ..., v_n.
A combination other than all c_i = 0 gives Σ c_i v_i = 0.
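One convenient numeric test (a sketch, with example vectors of mine): stack the vectors as columns and compare the rank with the number of vectors.

```python
import numpy as np

v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([2.0, 4.0, 6.0])          # v2 = 2*v1, so the set is dependent
v3 = np.array([0.0, 1.0, 0.0])
V = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(V) < 3)     # True: some nonzero combination gives zero
```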
Nilpotent matrix N.
Some power of N is the zero matrix, N^k = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.
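For instance, a strictly upper triangular matrix (a quick NumPy check with my own example):

```python
import numpy as np

N = np.array([[0, 1, 2],
              [0, 0, 3],
              [0, 0, 0]])               # zero diagonal, strictly upper triangular
print(np.linalg.matrix_power(N, 3))     # the zero matrix: N^3 = 0
print(np.linalg.eigvals(N))             # every eigenvalue is 0
```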
Orthonormal vectors q_1, ..., q_n.
Dot products are q_i^T q_j = 0 if i ≠ j and q_i^T q_i = 1. The matrix Q with these orthonormal columns has Q^T Q = I. If m = n then Q^T = Q^{-1} and q_1, ..., q_n is an orthonormal basis for R^n: every v = Σ (v^T q_j) q_j.
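A short NumPy sketch: take Q from a QR factorization of a random matrix, check Q^T Q = I, and expand a vector in the orthonormal basis.

```python
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # columns of Q are orthonormal
print(np.allclose(Q.T @ Q, np.eye(3)))             # True
v = np.array([1.0, 2.0, 3.0])
expansion = sum((v @ Q[:, j]) * Q[:, j] for j in range(3))  # v = sum (v^T q_j) q_j
print(np.allclose(expansion, v))                   # True
```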
Schwarz inequality |v · w| ≤ ||v|| ||w||.
Then |v^T A w|^2 ≤ (v^T A v)(w^T A w) for positive definite A.
Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!
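For a tiny made-up LP, SciPy's linprog solves the problem (a sketch; it uses SciPy's default solver rather than a hand-rolled simplex, but the optimum still sits at a corner of the feasible set):

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])                # cost to minimize: x1 + 2*x2
A_eq = np.array([[1.0, 1.0]])           # constraint x1 + x2 = 4
b_eq = np.array([4.0])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))   # x >= 0
print(res.x, res.fun)                   # corner (4, 0) with cost 4
```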
Skew-symmetric matrix K.
The transpose is -K, since K_ij = -K_ji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, e^{Kt} is an orthogonal matrix.
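A quick check with NumPy and SciPy (scipy.linalg.expm for the matrix exponential; the 2 by 2 example is mine):

```python
import numpy as np
from scipy.linalg import expm

K = np.array([[0.0, 2.0],
              [-2.0, 0.0]])             # K^T = -K
print(np.linalg.eigvals(K))             # pure imaginary: 2j and -2j
Q = expm(K * 0.5)                       # e^{Kt} with t = 0.5
print(np.allclose(Q.T @ Q, np.eye(2)))  # True: e^{Kt} is orthogonal
```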
Triangle inequality ||u + v|| ≤ ||u|| + ||v||.
For matrix norms ||A + B|| ≤ ||A|| + ||B||.
Vandermonde matrix V.
Vc = b gives the coefficients of p(x) = c_0 + ... + c_{n-1} x^{n-1} with p(x_i) = b_i. V_ij = (x_i)^{j-1} and det V = product of (x_k - x_i) for k > i.
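np.vander with increasing=True builds such a matrix (NumPy indexes from 0, so V[i, j] = x_i**j); a small interpolation sketch with points of my own:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 3.0, 6.0])           # required values p(x_i) = b_i
V = np.vander(x, increasing=True)       # V[i, j] = x_i ** j
c = np.linalg.solve(V, b)               # coefficients c_0, ..., c_{n-1}
print(c)                                # [ 3. -2.  1.]: p(x) = 3 - 2x + x^2
print(np.linalg.det(V))                 # 2.0 = (x2-x1)(x3-x1)(x3-x2)
```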