 11.5.1: Use the Piecewise Linear Algorithm to approximate the solution to t...
 11.5.2: Use the Piecewise Linear Algorithm to approximate the solution to t...
 11.5.3: Use the Piecewise Linear Algorithm to approximate the solutions to ...
 11.5.4: Use the Cubic Spline Algorithm with n = 3 to approximate the soluti...
 11.5.5: Repeat Exercise 3 using the Cubic Spline Algorithm.
 11.5.6: Show that the boundary-value problem -(d/dx)(p(x)y') + q(x)y = f(x),...
 11.5.7: Use Exercise 6 and the Piecewise Linear Algorithm with n = 9 to app...
 11.5.8: Repeat Exercise 7 using the Cubic Spline Algorithm.
 11.5.9: Show that the boundary-value problem -(d/dx)(p(x)y') + q(x)y = f(x),...
 11.5.10: Show that the piecewise-linear basis functions {φ_i}, i = 1, ..., n, are linear...
 11.5.11: Show that the cubic spline basis functions {φ_i}, i = 0, ..., n+1, are linearly...
 11.5.12: Show that the matrix given by the piecewise linear basis functions ...
 11.5.13: Show that the matrix given by the cubic spline basis functions is p...
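The exercises above all build on the Piecewise Linear (Rayleigh-Ritz) Algorithm. As a rough illustration only, here is a minimal sketch for the simplified case p(x) = 1, q(x) = 0 on [0, 1] with hat-function basis; the test problem -y'' = π² sin(πx), y(0) = y(1) = 0 (exact solution y = sin(πx)) is a hypothetical choice, not one of the textbook's exercises, and the load integrals are approximated by a one-point rule rather than the textbook's quadratures.

```python
import numpy as np

def piecewise_linear_ritz(f, n):
    """Rayleigh-Ritz with hat functions for -y'' = f(x), y(0) = y(1) = 0.
    Simplified case p(x) = 1, q(x) = 0; the load integrals int f*phi_i dx
    are approximated by the one-point rule h*f(x_i)."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)        # interior nodes x_1, ..., x_n
    # int phi_i' phi_j' dx gives the tridiagonal (-1, 2, -1)/h stiffness matrix
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
    b = h * f(x)                          # approximate load vector
    c = np.linalg.solve(A, b)             # c_i = approximate y(x_i)
    return x, c

# Hypothetical test problem: exact solution y = sin(pi x)
x, c = piecewise_linear_ritz(lambda t: np.pi**2 * np.sin(np.pi * t), n=9)
err = np.max(np.abs(c - np.sin(np.pi * x)))
```

With n = 9 the nodal error is on the order of h², consistent with the piecewise-linear approximation theory.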
Solutions for Chapter 11.5: The Rayleigh-Ritz Method
Full solutions for Numerical Analysis, 9th Edition
ISBN: 9780538733519

Cayley-Hamilton Theorem.
p(λ) = det(A - λI) has p(A) = zero matrix.

Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2)x^T Ax - x^T b over growing Krylov subspaces.
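A minimal sketch of the conjugate gradient iteration for a small SPD system (the 2-by-2 matrix below is just an illustrative choice, not from the text):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Minimal CG sketch for SPD A: minimizes (1/2) x^T A x - x^T b."""
    x = np.zeros_like(b)
    r = b - A @ x                  # residual = negative gradient
    p = r.copy()                   # first search direction
    rs = r @ r
    for _ in range(len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # A-conjugate update of the direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

In exact arithmetic CG reaches the minimizer in at most n steps, one per Krylov dimension.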

Diagonalizable matrix A.
Must have n independent eigenvectors (in the columns of S; automatic with n different eigenvalues). Then S^-1 AS = Λ = eigenvalue matrix.
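A quick numerical check of diagonalization on a small matrix with distinct eigenvalues (the matrix is an arbitrary example):

```python
import numpy as np

# A has distinct eigenvalues (5 and 2), so it is diagonalizable.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam, S = np.linalg.eig(A)            # eigenvalues, eigenvector columns
Lambda = np.linalg.inv(S) @ A @ S    # should be diag(lam), up to roundoff
```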

Full column rank r = n.
Independent columns, N(A) = {0}, no free variables.

Hermitian matrix A^H = Ā^T = A.
Complex analog a_ji = ā_ij of a symmetric matrix.

Incidence matrix of a directed graph.
The m by n edge-node incidence matrix has a row for each edge (node i to node j), with entries -1 and 1 in columns i and j.
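A sketch of the incidence matrix for a small hypothetical directed graph (the edge list is made up for illustration):

```python
import numpy as np

# Edge (i -> j) puts -1 in column i and +1 in column j of its row.
edges = [(0, 1), (1, 2), (0, 2)]      # 3 edges on 3 nodes
m, n = len(edges), 3
A = np.zeros((m, n))
for row, (i, j) in enumerate(edges):
    A[row, i] = -1.0
    A[row, j] = 1.0
# Every row sums to zero, so the all-ones vector lies in the nullspace.
```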

Kirchhoff's Laws.
Current Law: net current (in minus out) is zero at each node. Voltage Law: Potential differences (voltage drops) add to zero around any closed loop.

|A^-1| = 1/|A| and |A^T| = |A|.
The big formula for det(A) has a sum of n! terms, the cofactor formula uses determinants of size n - 1, volume of box = |det(A)|.
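A quick numerical check of these determinant identities on an arbitrary 2-by-2 example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
d = np.linalg.det(A)                                           # 2*3 - 1*1 = 5
inv_ok = np.isclose(np.linalg.det(np.linalg.inv(A)), 1.0 / d)  # |A^-1| = 1/|A|
tr_ok = np.isclose(np.linalg.det(A.T), d)                      # |A^T| = |A|
# |det A| is also the area of the box spanned by the rows.
```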

Length II x II.
Square root of x^T x (Pythagoras in n dimensions).

Markov matrix M.
All m_ij ≥ 0 and each column sum is 1. Largest eigenvalue λ = 1. If m_ij > 0, the columns of M^k approach the steady state eigenvector Ms = s > 0.
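A sketch of convergence to the steady state, using a made-up 2-by-2 Markov matrix whose steady state can be found by hand from (M - I)s = 0:

```python
import numpy as np

M = np.array([[0.8, 0.3],
              [0.2, 0.7]])             # positive entries, columns sum to 1
x = np.array([1.0, 0.0])               # any starting probability vector
for _ in range(50):
    x = M @ x                          # x -> M^k x
# Hand calculation: 0.2*s1 = 0.3*s2 with s1 + s2 = 1 gives s = [0.6, 0.4].
```

The second eigenvalue here is 0.5, so the iterates converge geometrically to s.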

Matrix multiplication AB.
The i, j entry of AB is (row i of A)·(column j of B) = Σ_k a_ik b_kj. By columns: column j of AB = A times column j of B. By rows: row i of A multiplies B. Columns times rows: AB = sum of (column k)(row k). All these equivalent definitions come from the rule that AB times x equals A times Bx.
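The equivalent descriptions of AB can be checked numerically on a small arbitrary example:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])
AB = A @ B

entry = A[0, :] @ B[:, 1]        # (row 0 of A)·(column 1 of B) = AB[0, 1]
col = A @ B[:, 1]                # column 1 of AB = A times column 1 of B
outer = sum(np.outer(A[:, k], B[k, :]) for k in range(2))  # columns times rows
```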

Multiplier ℓ_ij.
The pivot row j is multiplied by ℓ_ij and subtracted from row i to eliminate the i, j entry: ℓ_ij = (entry to eliminate)/(jth pivot).
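One elimination step on an arbitrary 2-by-2 example, showing the multiplier clearing the (2,1) entry:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [6.0, 8.0]])
l21 = A[1, 0] / A[0, 0]          # multiplier = 6 / 2 = 3
A[1, :] -= l21 * A[0, :]         # row 2 minus 3 times row 1
# A is now upper triangular: [[2, 1], [0, 5]]
```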

Network.
A directed graph that has constants c_1, ..., c_m associated with the edges.

Orthogonal subspaces.
Every v in V is orthogonal to every w in W.

Random matrix rand(n) or randn(n).
MATLAB creates a matrix with random entries, uniformly distributed on [0, 1] for rand and from the standard normal distribution for randn.

Skew-symmetric matrix K.
The transpose is -K, since K_ij = -K_ji. Eigenvalues are pure imaginary, eigenvectors are orthogonal, e^{Kt} is an orthogonal matrix.
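Both properties can be checked on the simplest skew-symmetric matrix; here e^{Kt} is summed as a truncated power series rather than with a library matrix exponential:

```python
import numpy as np

K = np.array([[0.0, -1.0],
              [1.0,  0.0]])             # K^T = -K
lam = np.linalg.eigvals(K)              # should be +i and -i

t = 0.7
E, term = np.eye(2), np.eye(2)
for k in range(1, 30):                  # e^{Kt} = sum over k of (Kt)^k / k!
    term = term @ (K * t) / k
    E += term
# E is the rotation by t radians, so E^T E = I (orthogonal).
```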

Standard basis for Rn.
Columns of n by n identity matrix (written i ,j ,k in R3).

Stiffness matrix K.
If x gives the movements of the nodes, Kx gives the internal forces. K = A^T CA where C has spring constants from Hooke's Law and Ax = stretching.
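A sketch of K = A^T CA for a hypothetical chain of two springs fixed at one end, with two free nodes (the geometry and spring constants are made up):

```python
import numpy as np

A = np.array([[ 1.0, 0.0],     # spring 1: stretch = x1 - 0 (fixed support)
              [-1.0, 1.0]])    # spring 2: stretch = x2 - x1
C = np.diag([3.0, 2.0])        # spring constants from Hooke's Law
K = A.T @ C @ A                # stiffness matrix: symmetric positive definite
```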

Transpose matrix AT.
Entries (A^T)_ij = A_ji. A^T is n by m, A^T A is square, symmetric, positive semidefinite. The transposes of AB and A^-1 are B^T A^T and (A^T)^-1.
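Both transpose rules can be verified numerically on small arbitrary matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 5.0]])     # det = -1, so A is invertible
B = np.array([[2.0, 0.0],
              [1.0, 4.0]])
rule1 = np.allclose((A @ B).T, B.T @ A.T)                    # (AB)^T = B^T A^T
rule2 = np.allclose(np.linalg.inv(A).T, np.linalg.inv(A.T))  # (A^-1)^T = (A^T)^-1
```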

Tridiagonal matrix T: t_ij = 0 if |i - j| > 1.
T^-1 has rank 1 above and below the diagonal.
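The rank-1 structure can be seen numerically on the -1, 2, -1 tridiagonal matrix: any submatrix of T^-1 taken entirely above (or below) the diagonal has rank 1.

```python
import numpy as np

n = 5
T = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # tridiag(-1, 2, -1)
Tinv = np.linalg.inv(T)
upper_block = Tinv[:2, 3:]                  # block strictly above the diagonal
rank = np.linalg.matrix_rank(upper_block)   # expect 1
```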