- Chapter 1: QUADRATIC EQUATIONS
- Chapter 2: RELATIONS AND FUNCTIONS
- Chapter 3: EXPONENTS
- Chapter 4: LOGARITHMS IN BASE 10
- Chapter 5: GRAPHING FUNCTIONS
- Chapter 6: NUMBER SEQUENCES
- Chapter 7: BINOMIAL EXPANSIONS
- Chapter 8: RADIAN MEASURE
- Chapter 9: AREAS OF TRIANGLES
- Chapter 10: PERIODIC BEHAVIOUR
- Chapter 11: TRIGONOMETRIC EQUATIONS
- Chapter 12: VECTORS AND SCALARS
- Chapter 13: PROBLEMS INVOLVING VECTOR OPERATIONS
- Chapter 14: LIMITS
- Chapter 15: RULES OF DIFFERENTIATION
- Chapter 16: PROPERTIES OF CURVES
- Chapter 17: APPLICATIONS OF DIFFERENTIAL CALCULUS
- Chapter 18: INTEGRATION
- Chapter 19: APPLICATIONS OF INTEGRATION
- Chapter 20: DESCRIPTIVE STATISTICS
- Chapter 21: LINEAR MODELLING
- Chapter 22: PROBABILITY
- Chapter 23: DISCRETE RANDOM VARIABLES
- Chapter 24: THE NORMAL DISTRIBUTION
- Chapter 25: MISCELLANEOUS QUESTIONS
Mathematics for the International Student: Mathematics SL 3rd Edition - Solutions by Chapter
Basis for V.
Independent vectors v_1, ..., v_d whose linear combinations give each vector in V as v = c_1 v_1 + ... + c_d v_d. V has many bases; each basis gives unique c's.
Circulant matrix C.
Constant diagonals wrap around as in the cyclic shift S. Every circulant is c_0 I + c_1 S + ... + c_{n-1} S^{n-1}. Cx = convolution c * x. Eigenvectors are the columns of the Fourier matrix F.
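The rule Cx = c * x can be checked directly. A minimal plain-Python sketch (the 4-vectors here are hypothetical example data, not from the text):

```python
# Build the n x n circulant matrix whose first column is c:
# entry (i, j) holds c[(i - j) mod n], so the diagonals are constant
# and wrap around cyclically.
def circulant(c):
    n = len(c)
    return [[c[(i - j) % n] for j in range(n)] for i in range(n)]

def matvec(A, x):
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

# Cyclic convolution: (c * x)_i = sum_j c[j] * x[(i - j) mod n]
def cyclic_convolution(c, x):
    n = len(c)
    return [sum(c[j] * x[(i - j) % n] for j in range(n)) for i in range(n)]

c = [1, 2, 0, 3]       # hypothetical first column of C
x = [4, -1, 2, 5]      # hypothetical input vector
assert matvec(circulant(c), x) == cyclic_convolution(c, x)
```

Multiplying by the circulant and convolving with its first column are the same index shuffle, which is why the assertion holds for any c and x of equal length.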
Commuting matrices AB = BA.
If diagonalizable, they share n eigenvectors.
Determinant |A| = det(A).
Defined by det I = 1, sign reversal for row exchange, and linearity in each row. Then |A| = 0 when A is singular. Also |AB| = |A||B| and |A^T| = |A|.
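These defining properties and their consequences can be spot-checked for 2 x 2 matrices, where the determinant is ad - bc (the matrices below are hypothetical examples):

```python
def det2(A):
    # 2 x 2 determinant: ad - bc
    (a, b), (c, d) = A
    return a * d - b * c

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[3, 1], [4, 2]]
B = [[2, 5], [0, 1]]

assert det2([[1, 0], [0, 1]]) == 1                # det I = 1
assert det2([A[1], A[0]]) == -det2(A)             # row exchange reverses sign
assert det2(matmul2(A, B)) == det2(A) * det2(B)   # |AB| = |A||B|
AT = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
assert det2(AT) == det2(A)                        # |A^T| = |A|
```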
Fast Fourier Transform (FFT).
A factorization of the Fourier matrix F_n into l = log_2 n matrices S_i times a permutation. Each S_i needs only n/2 multiplications, so F_n x and F_n^{-1} c can be computed with nl/2 multiplications. Revolutionary.
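The log_2 n splitting can be sketched as the standard radix-2 recursion, checked against the n^2-work direct transform. This uses the forward sign convention e^{-2πijk/n}; n must be a power of 2, and the sample vector is hypothetical:

```python
import cmath

def dft(x):
    # Direct transform: y_j = sum_k x_k e^{-2*pi*i*j*k/n}, about n^2 multiplications
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def fft(x):
    # Radix-2 Cooley-Tukey: split into even/odd halves, log2(n) levels,
    # about (n/2) * log2(n) multiplications in total.
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    twiddled = [cmath.exp(-2j * cmath.pi * j / n) * odd[j] for j in range(n // 2)]
    return ([even[j] + twiddled[j] for j in range(n // 2)] +
            [even[j] - twiddled[j] for j in range(n // 2)])

x = [1, 2, 0, -1, 3, 0, 1, 2]   # hypothetical sample, n = 8 = 2^3
assert all(abs(a - b) < 1e-9 for a, b in zip(fft(x), dft(x)))
```

Each recursion level does n/2 twiddle multiplications, and there are log_2 n levels, matching the nl/2 count in the definition.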
Fourier matrix F.
Entries F_jk = e^{2πijk/n} give orthogonal columns: F̄^T F = nI. Then y = Fc is the (inverse) Discrete Fourier Transform y_j = Σ_k c_k e^{2πijk/n}.
Least squares solution x̂.
The vector x̂ that minimizes the error ||e||^2 solves A^T A x̂ = A^T b. Then e = b - A x̂ is orthogonal to all columns of A.
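A small worked instance: fitting a line C + Dt through four hypothetical data points by solving the 2 x 2 normal equations, then confirming the residual is orthogonal to both columns of A:

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def solve2(M, r):
    # Cramer's rule for the 2 x 2 system M xhat = r
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(r[0] * M[1][1] - M[0][1] * r[1]) / det,
            (M[0][0] * r[1] - r[0] * M[1][0]) / det]

# Fit b ~ C + D*t at hypothetical times t = 0, 1, 2, 3
A = [[1, 0], [1, 1], [1, 2], [1, 3]]
b = [1, 3, 4, 8]

At = transpose(A)
AtA = matmul(At, A)
Atb = [sum(a * bi for a, bi in zip(col, b)) for col in At]
xhat = solve2(AtA, Atb)          # solves A^T A xhat = A^T b

# Residual e = b - A xhat is orthogonal to every column of A
e = [bi - sum(a * xi for a, xi in zip(row, xhat)) for row, bi in zip(A, b)]
for col in At:
    assert abs(sum(c * ei for c, ei in zip(col, e))) < 1e-9
```

For this data the best line comes out as 0.7 + 2.2t, and both dot products with the residual vanish as the definition promises.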
Length ||x||.
Square root of x T x (Pythagoras in n dimensions).
Markov matrix M.
All m_ij ≥ 0 and each column sum is 1. Largest eigenvalue λ = 1. If m_ij > 0, the columns of M^k approach the steady-state eigenvector M s = s > 0.
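The approach of M^k to the steady state is easy to watch numerically. A sketch with a hypothetical 2-state Markov matrix, whose steady state works out to s = (0.6, 0.4):

```python
# Hypothetical 2-state Markov matrix: columns sum to 1, all entries > 0
M = [[0.8, 0.3],
     [0.2, 0.7]]

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

v = [1.0, 0.0]                # any starting distribution
for _ in range(100):          # repeated multiplication: v -> M^k v
    v = apply(M, v)

# The limit satisfies M s = s (eigenvalue 1); here s = (0.6, 0.4)
assert abs(v[0] - 0.6) < 1e-9 and abs(v[1] - 0.4) < 1e-9
assert all(abs(si - vi) < 1e-9 for si, vi in zip(apply(M, v), v))
```

Convergence is geometric at the rate of the second eigenvalue (0.5 here), so 100 steps are far more than enough.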
Multiplier ℓ_ij.
The pivot row j is multiplied by ℓ_ij and subtracted from row i to eliminate the i, j entry: ℓ_ij = (entry to eliminate) / (jth pivot).
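One full elimination pass on a hypothetical 3 x 3 matrix shows the multipliers at work, reducing A to upper triangular form:

```python
# Hypothetical 3 x 3 matrix to reduce to upper triangular U
A = [[2.0, 1.0, 1.0],
     [4.0, -6.0, 0.0],
     [-2.0, 7.0, 2.0]]

def eliminate(A, i, j):
    # l_ij = (entry to eliminate) / (jth pivot); subtract l_ij * row j from row i
    l = A[i][j] / A[j][j]
    A[i] = [aik - l * ajk for aik, ajk in zip(A[i], A[j])]
    return l

l21 = eliminate(A, 1, 0)   # multiplier 4/2 = 2
l31 = eliminate(A, 2, 0)   # multiplier -2/2 = -1
l32 = eliminate(A, 2, 1)   # multiplier 8/(-8) = -1

# All below-diagonal entries are now zero
assert A[1][0] == 0 and A[2][0] == 0 and A[2][1] == 0
```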
Norm ||A||.
The "ℓ^2 norm" of A is the maximum ratio ||Ax||/||x|| = σ_max. Then ||Ax|| ≤ ||A|| ||x||, ||AB|| ≤ ||A|| ||B||, and ||A + B|| ≤ ||A|| + ||B||. Frobenius norm: ||A||_F^2 = ΣΣ a_ij^2. The ℓ^1 and ℓ^∞ norms are the largest column and row sums of |a_ij|.
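The ℓ^1 and ℓ^∞ cases are simple enough to compute by hand. A sketch with a hypothetical 2 x 2 matrix, also checking the induced-norm bound ||Ax|| ≤ ||A|| ||x|| in the ℓ^1 norm:

```python
def one_norm(A):
    # l^1 induced norm: largest absolute column sum
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

def inf_norm(A):
    # l^infinity induced norm: largest absolute row sum
    return max(sum(abs(a) for a in row) for row in A)

A = [[1, -2],
     [3, 4]]                 # hypothetical matrix
assert one_norm(A) == 6      # column sums 4 and 6
assert inf_norm(A) == 7      # row sums 3 and 7

# The bound ||Ax||_1 <= ||A||_1 * ||x||_1 for a sample x
x = [5, -1]
Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
assert sum(abs(v) for v in Ax) <= one_norm(A) * sum(abs(v) for v in x)
```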
Particular solution x_p.
Any solution to Ax = b; often x_p has free variables = 0.
Pivot columns of A.
Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.
Positive definite matrix A.
Symmetric matrix with positive eigenvalues and positive pivots. Definition: x^T A x > 0 unless x = 0. Then A = LDL^T with diag(D) > 0.
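Both tests, positive pivots and positive energy x^T A x, can be run on a hypothetical symmetric 2 x 2 example:

```python
# Hypothetical symmetric 2 x 2 matrix
A = [[2.0, 1.0],
     [1.0, 3.0]]

# Pivots from elimination: 2, then 3 - (1/2)*1 = 2.5 -- both positive
p1 = A[0][0]
p2 = A[1][1] - (A[1][0] / A[0][0]) * A[0][1]
assert p1 > 0 and p2 > 0

# Energy test: x^T A x > 0 for sample nonzero x's
def energy(A, x):
    return sum(x[i] * A[i][j] * x[j] for i in range(2) for j in range(2))

for x in [(1, 0), (0, 1), (1, -1), (-3, 2)]:
    assert energy(A, x) > 0
```

A few sample x's are of course not a proof of x^T A x > 0 for all x; the pivot (or eigenvalue) test is the conclusive one.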
Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; the error e = b - Pb is perpendicular to S. P^2 = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S^⊥. If the columns of A are a basis for S then P = A(A^T A)^{-1} A^T.
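For the simplest subspace, a line through one vector a, the formula collapses to P = a a^T / (a^T a). A sketch with hypothetical exact data (Fraction avoids rounding, so equality checks are exact):

```python
from fractions import Fraction

# Projection onto the line through a (a one-column A): P = a a^T / (a^T a)
a = [Fraction(1), Fraction(2), Fraction(2)]      # hypothetical direction
aTa = sum(ai * ai for ai in a)                   # a^T a = 9
P = [[ai * aj / aTa for aj in a] for ai in a]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

assert matmul(P, P) == P                                             # P^2 = P
assert all(P[i][j] == P[j][i] for i in range(3) for j in range(3))   # P = P^T

b = [Fraction(3), Fraction(0), Fraction(3)]
p = [sum(P[i][j] * b[j] for j in range(3)) for i in range(3)]
e = [bi - pi for bi, pi in zip(b, p)]
assert sum(ei * ai for ei, ai in zip(e, a)) == 0   # error is perpendicular to S
```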
Schur complement S = D - C A^{-1} B.
Appears in block elimination on the block matrix [A B; C D].
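With 1 x 1 blocks the block elimination is ordinary elimination, and the (2,2) entry that remains is exactly S. A sketch with hypothetical scalar blocks:

```python
# Block elimination on [[A, B], [C, D]] with hypothetical scalar blocks:
# subtract C A^{-1} times the first block row from the second; the
# Schur complement S = D - C A^{-1} B is what remains in position (2,2).
A, B, C, D = 2.0, 4.0, 6.0, 15.0

M = [[A, B],
     [C, D]]
l = C / A                        # the block multiplier C A^{-1}
M[1] = [M[1][0] - l * M[0][0], M[1][1] - l * M[0][1]]

S = D - C * (1 / A) * B          # Schur complement, computed directly
assert M[1] == [0.0, S]          # elimination left [0, S] in the second row
```

For genuine matrix blocks the same identity holds with A^{-1} a matrix inverse and the products taken in the order written.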
Solvable system Ax = b.
The right side b is in the column space of A.
Stiffness matrix K.
If x gives the movements of the nodes, Kx gives the internal forces. K = A^T C A where C has spring constants from Hooke's Law and Ax = stretching.
Transpose matrix AT.
Entries (A^T)_ij = A_ji. A^T is n by m, A^T A is square, symmetric, and positive semidefinite. The transposes of AB and A^{-1} are B^T A^T and (A^T)^{-1}.
Vector addition.
v + w = (v_1 + w_1, ..., v_n + w_n) = diagonal of parallelogram.