Associative Law (AB)C = A(BC).
Parentheses can be removed to leave ABC.
Basis for V.
Independent vectors v1, ..., vd whose linear combinations give each vector in V as v = c1 v1 + ... + cd vd. V has many bases; each basis gives unique c's.
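A quick numerical sketch of the unique coefficients for a given basis (the basis and vector below are illustrative examples; numpy assumed available):

```python
import numpy as np

# Hypothetical basis for R^2: the two independent columns of B
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])
v = np.array([3.0, 4.0])

c = np.linalg.solve(B, v)        # the unique c's for this basis
assert np.allclose(B @ c, v)     # v = c1*v1 + c2*v2
```

A different basis for the same space would give different (but again unique) coefficients.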
Circulant matrix C.
Constant diagonals wrap around as in the cyclic shift S. Every circulant is c0 I + c1 S + ... + c_{n-1} S^{n-1}. Cx = convolution c * x. Eigenvectors in the Fourier matrix F.
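A numerical check of both facts in this entry (the vectors c and x are illustrative; numpy assumed): build C from powers of the cyclic shift S, and compare Cx with the circular convolution computed by FFT.

```python
import numpy as np

c = np.array([1.0, 2.0, 3.0, 4.0])
n = len(c)

# Cyclic shift S and circulant C = c0*I + c1*S + ... + c_{n-1}*S^{n-1}
S = np.roll(np.eye(n), 1, axis=0)
C = sum(c[k] * np.linalg.matrix_power(S, k) for k in range(n))
assert np.allclose(C[:, 0], c)            # first column of C is c

x = np.array([5.0, 6.0, 7.0, 8.0])
# Cx equals the circular convolution c * x, computable via FFT
fft_conv = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))
assert np.allclose(C @ x, fft_conv)
```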
Determinant |A| = det(A).
Defined by det I = 1, sign reversal for row exchange, and linearity in each row. Then |A| = 0 when A is singular. Also |AB| = |A||B| and |A^T| = |A|.
Diagonalizable matrix A.
Must have n independent eigenvectors (in the columns of S; automatic with n different eigenvalues). Then S^{-1} A S = Λ = eigenvalue matrix.
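A small numerical check (the matrix A is an illustrative example with two different eigenvalues; numpy assumed):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # 2 distinct eigenvalues -> diagonalizable
lam, S = np.linalg.eig(A)           # eigenvectors in the columns of S
Lambda = np.linalg.inv(S) @ A @ S   # S^{-1} A S is the eigenvalue matrix
assert np.allclose(Lambda, np.diag(lam))
```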
Ellipse (or ellipsoid) x^T A x = 1.
A must be positive definite; the axes of the ellipse are eigenvectors of A, with lengths 1/√λ. (For ||x|| = 1 the vectors y = Ax lie on the ellipse ||A^{-1} y||^2 = y^T (A A^T)^{-1} y = 1 displayed by eigshow; axis lengths σ_i.)
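A numerical sketch of the axis lengths 1/√λ (the positive definite matrix is an illustrative example; numpy assumed): each eigenvector scaled by 1/√λ lands exactly on the ellipse x^T A x = 1.

```python
import numpy as np

A = np.array([[5.0, 4.0],
              [4.0, 5.0]])          # positive definite, eigenvalues 1 and 9
lam, V = np.linalg.eigh(A)
for l, v in zip(lam, V.T):
    endpoint = v / np.sqrt(l)       # semi-axis endpoint, length 1/sqrt(lambda)
    assert np.isclose(endpoint @ A @ endpoint, 1.0)
```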
Factorization A = LU.
If elimination takes A to U without row exchanges, then the lower triangular L with multipliers ℓ_ij (and ℓ_ii = 1) brings U back to A.
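A minimal 2 by 2 sketch of this elimination (numbers are illustrative; numpy assumed): one multiplier takes A to U, and putting it into L recovers A.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [6.0, 8.0]])
# Elimination without row exchanges: subtract multiplier * row 1 from row 2
l21 = A[1, 0] / A[0, 0]             # multiplier 3
U = A.copy()
U[1] -= l21 * U[0]                  # U = [[2, 1], [0, 5]]
L = np.array([[1.0, 0.0],
              [l21, 1.0]])          # unit diagonal, multiplier below
assert np.allclose(L @ U, A)        # L brings U back to A
```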
Hilbert matrix hilb(n).
Entries H_ij = 1/(i + j - 1) = ∫0^1 x^{i-1} x^{j-1} dx. Positive definite but extremely small λ_min and large condition number: H is ill-conditioned.
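A numerical illustration (n = 8 chosen arbitrarily; numpy assumed): the Hilbert matrix is positive definite, yet already badly conditioned at modest size.

```python
import numpy as np

n = 8
i, j = np.indices((n, n))
H = 1.0 / (i + j + 1)               # H_ij = 1/(i + j - 1) with 1-based i, j

assert np.all(np.linalg.eigvalsh(H) > 0)   # positive definite
assert np.linalg.cond(H) > 1e9             # ill-conditioned even for n = 8
```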
Hypercube matrix pl.
Row n + 1 counts corners, edges, faces, ... of a cube in R^n.
Length ||x||.
Square root of x^T x (Pythagoras in n dimensions).
Linear combination cv + dw or Σ c_j v_j.
Vector addition and scalar multiplication.
Network.
A directed graph that has constants c1, ..., cm associated with the edges.
Nullspace N(A)
= All solutions to Ax = 0. Dimension n - r = (# columns) - rank.
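A numerical check of the dimension count n - r (the rank-one matrix is an illustrative example; numpy assumed), with a nullspace basis read off from the SVD:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])     # rank r = 1, n = 3 columns
r = np.linalg.matrix_rank(A)

# Right singular vectors for zero singular values span N(A)
_, _, Vt = np.linalg.svd(A)
N = Vt[r:].T                        # columns form a basis of the nullspace
assert N.shape[1] == A.shape[1] - r # dimension n - r = 2
assert np.allclose(A @ N, 0.0)
```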
Polar decomposition A = Q H.
Orthogonal Q times positive (semi)definite H.
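A sketch of computing the polar factors from the SVD A = UΣV^T, taking Q = UV^T and H = VΣV^T (the matrix A is an illustrative example; numpy assumed):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
U, s, Vt = np.linalg.svd(A)
Q = U @ Vt                          # orthogonal factor
H = Vt.T @ np.diag(s) @ Vt          # positive (semi)definite factor

assert np.allclose(Q @ H, A)                      # A = QH
assert np.allclose(Q.T @ Q, np.eye(2))            # Q is orthogonal
assert np.all(np.linalg.eigvalsh(H) >= -1e-12)    # H is semidefinite
```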
Pseudoinverse A+ (Moore-Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A^+) = N(A^T). A^+ A and A A^+ are the projection matrices onto the row space and column space. rank(A^+) = rank(A).
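A numerical check of the projection facts (the rank-one matrix is an illustrative example; numpy assumed):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 0.0]])          # rank 1
Ap = np.linalg.pinv(A)

P_row = Ap @ A                      # projection onto the row space
P_col = A @ Ap                      # projection onto the column space
assert np.allclose(P_row @ P_row, P_row)          # projections are idempotent
assert np.allclose(P_col @ P_col, P_col)
assert np.allclose(P_col @ A, A)                  # A A^+ A = A
assert np.linalg.matrix_rank(Ap) == np.linalg.matrix_rank(A)
```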
Rank one matrix A = uv^T ≠ 0.
Column and row spaces = lines cu and cv.
Row space C(A^T) = all combinations of rows of A.
Column vectors by convention.
Schur complement S = D - C A^{-1} B.
Appears in block elimination on [A B; C D].
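A numerical sketch of that block elimination (the blocks are illustrative examples; numpy assumed): multiplying [A B; C D] by the block elimination matrix leaves S = D - CA^{-1}B in the lower right corner.

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[2.0], [0.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[5.0]])

S = D - C @ np.linalg.inv(A) @ B    # Schur complement of A

# Block elimination: [I 0; -CA^{-1} I] [A B; C D] = [A B; 0 S]
M = np.block([[A, B], [C, D]])
E = np.block([[np.eye(2), np.zeros((2, 1))],
              [-C @ np.linalg.inv(A), np.eye(1)]])
R = E @ M
assert np.allclose(R[2:, :2], 0)    # C block eliminated
assert np.allclose(R[2:, 2:], S)    # S appears in the corner
```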
Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!
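A toy illustration of "minimum cost at a corner" (not the simplex method's edge-following itself, just a brute-force check over the corners; the tiny LP is an invented example; numpy assumed):

```python
import numpy as np
from itertools import combinations

# Minimize c.x subject to Ax = b, x >= 0 by checking every corner
# (basic feasible solution) of the feasible set
c = np.array([3.0, 1.0, 2.0])
A = np.array([[1.0, 1.0, 1.0]])     # one equality constraint (m = 1)
b = np.array([4.0])

best = None
for cols in combinations(range(3), 1):   # choose m = 1 basic columns
    x = np.zeros(3)
    try:
        x[list(cols)] = np.linalg.solve(A[:, cols], b)
    except np.linalg.LinAlgError:
        continue                          # singular basis: not a corner
    if np.all(x >= 0) and (best is None or c @ x < c @ best):
        best = x

assert np.isclose(c @ best, 4.0)    # minimum cost, at corner x = (0, 4, 0)
```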
Skew-symmetric matrix K.
The transpose is -K, since K_ij = -K_ji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, e^{Kt} is an orthogonal matrix.
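A numerical check of all three claims (the matrix K is an illustrative example; numpy assumed, with e^K approximated by a truncated Taylor series to avoid extra dependencies):

```python
import numpy as np

K = np.array([[ 0.0, -2.0, 1.0],
              [ 2.0,  0.0, 3.0],
              [-1.0, -3.0, 0.0]])
assert np.allclose(K.T, -K)                    # skew-symmetric

lam = np.linalg.eigvals(K)
assert np.allclose(lam.real, 0.0)              # pure imaginary eigenvalues

# e^{K} via Taylor series (t = 1); the result should be orthogonal
Q = np.eye(3)
term = np.eye(3)
for k in range(1, 30):
    term = term @ K / k
    Q = Q + term
assert np.allclose(Q.T @ Q, np.eye(3), atol=1e-8)
```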