- 10.6.1: Suppose that a game has a payoff matrix A = 4 6 4 1 5 738 806 2 (a)...
- 10.6.2: Construct a simple example to show that optimal strategies are not ...
- 10.6.3: For the strictly determined games with the following payoff matrice...
- 10.6.4: For the 2 × 2 games with the following payoff matrices, find optimal ...
- 10.6.5: Player R has two playing cards: a black ace and a red four. Player ...
- 10.6.6: Verify Equations (6), (7), and (8).
- 10.6.7: Verify the statement in the last paragraph of Example 3.
- 10.6.8: Show that the entries of the optimal strategies p and q given in Th...
- 10.6.T1: Consider a game between two players where each player can make up t...
- 10.6.T2: . Consider a game between two players where each player can make up...
Solutions for Chapter 10.6: Games of Strategy
Full solutions for Elementary Linear Algebra, Binder Ready Version: Applications Version | 11th Edition
Cayley–Hamilton theorem.
p(λ) = det(A − λI) has p(A) = zero matrix.
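A quick numerical check of this identity, using a hypothetical 2 × 2 example where p(λ) = λ² − (trace)λ + det:

```python
import numpy as np

# Hypothetical example matrix; for 2x2, p(lambda) = lambda^2 - trace*lambda + det
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
trace, det = np.trace(A), np.linalg.det(A)
# Substituting A for lambda gives the zero matrix:
p_of_A = A @ A - trace * A + det * np.eye(2)
assert np.allclose(p_of_A, np.zeros((2, 2)))
```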
Change of basis matrix M.
The old basis vectors v_j are combinations Σ_i m_ij w_i of the new basis vectors. The coordinates of c_1 v_1 + ... + c_n v_n = d_1 w_1 + ... + d_n w_n are related by d = Mc. (For n = 2 set v_1 = m_11 w_1 + m_21 w_2, v_2 = m_12 w_1 + m_22 w_2.)
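A small sketch of the relation d = Mc, with hypothetical bases chosen for illustration:

```python
import numpy as np

# Hypothetical n = 2 example: columns of W are the new basis vectors w1, w2,
# and column j of M holds the coefficients m_ij of v_j = sum_i m_ij w_i.
M = np.array([[2.0, 1.0],
              [1.0, 3.0]])
W = np.array([[1.0, 1.0],
              [0.0, 1.0]])
V = W @ M                        # old basis vectors v_j as columns
c = np.array([4.0, -1.0])        # coordinates in the old basis
d = M @ c                        # coordinates in the new basis: d = Mc
assert np.allclose(V @ c, W @ d) # both describe the same vector
```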
Cholesky factorization.
A = CᵀC = (L√D)(L√D)ᵀ for positive definite A.
cond(A) = c(A) = ||A|| ||A⁻¹|| = σ_max/σ_min. In Ax = b, the relative change ||δx|| / ||x|| is less than cond(A) times the relative change ||δb|| / ||b||. Condition numbers measure the sensitivity of the output to changes in the input.
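The sensitivity bound can be observed numerically; the nearly singular matrix below is a hypothetical example chosen to make the condition number large:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])        # nearly singular, so badly conditioned
cond = np.linalg.cond(A)             # sigma_max / sigma_min in the 2-norm
b  = np.array([2.0, 2.0001])
db = np.array([0.0, 0.0001])         # tiny change in the input b
x  = np.linalg.solve(A, b)
dx = np.linalg.solve(A, b + db) - x
rel_out = np.linalg.norm(dx) / np.linalg.norm(x)
rel_in  = np.linalg.norm(db) / np.linalg.norm(b)
# The bound ||dx||/||x|| <= cond(A) * ||db||/||b|| holds:
assert rel_out <= cond * rel_in
assert cond > 1000                   # a tiny input change can swing the output
```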
Eigenvalue A and eigenvector x.
Ax = λx with x ≠ 0, so det(A − λI) = 0.
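Both conditions can be verified directly for a hypothetical symmetric example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lams, X = np.linalg.eig(A)           # columns of X are eigenvectors
for lam, x in zip(lams, X.T):
    assert np.allclose(A @ x, lam * x)                       # Ax = lambda x
    assert abs(np.linalg.det(A - lam * np.eye(2))) < 1e-9    # det(A - lambda I) = 0
assert np.allclose(np.sort(lams), [1.0, 3.0])
```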
A = LU. If elimination takes A to U without row exchanges, then the lower triangular L with multipliers ℓ_ij (and ℓ_ii = 1) brings U back to A.
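A hand-rolled sketch of one elimination step on a hypothetical 2 × 2 matrix, showing that the multipliers rebuild A:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [6.0, 8.0]])
l21 = A[1, 0] / A[0, 0]        # multiplier used to clear the (2,1) entry
U = A.copy()
U[1] -= l21 * U[0]             # row 2 minus l21 times row 1 -> upper triangular
L = np.array([[1.0, 0.0],
              [l21, 1.0]])     # multipliers below a unit diagonal
assert np.allclose(L @ U, A)   # L brings U back to A
```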
Fourier matrix F.
Entries F_jk = e^(2πijk/n) give orthogonal columns: F̄ᵀF = nI. Then y = Fc is the (inverse) Discrete Fourier Transform y_j = Σ_k c_k e^(2πijk/n).
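Both properties can be checked numerically; the comparison against NumPy's `ifft` (which includes a 1/n factor) is a hypothetical illustration:

```python
import numpy as np

n = 4
idx = np.arange(n)
F = np.exp(2j * np.pi * np.outer(idx, idx) / n)   # F_jk = e^(2*pi*i*j*k/n)
# Orthogonal columns: conjugate transpose times F equals n*I
assert np.allclose(F.conj().T @ F, n * np.eye(n))
c = np.array([1.0, 2.0, 3.0, 4.0])
y = F @ c                                # y_j = sum_k c_k e^(2*pi*i*j*k/n)
assert np.allclose(y, n * np.fft.ifft(c))  # matches numpy's inverse DFT times n
```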
Hessenberg matrix H.
Triangular matrix with one extra nonzero adjacent diagonal.
Hypercube matrix P_L.
Row n + 1 counts corners, edges, faces, ... of a cube in Rn.
Jordan form J = M⁻¹AM.
If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J_1, ..., J_s). The block J_k is λ_k I_k + N_k where N_k has 1's on diagonal 1. Each block has one eigenvalue λ_k and one eigenvector.
Determinant |A| = det(A).
|A⁻¹| = 1/|A| and |Aᵀ| = |A|. The big formula for det(A) has a sum of n! terms, the cofactor formula uses determinants of size n − 1, volume of box = |det(A)|.
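The determinant identities and the n!-term "big formula" can be checked on a hypothetical 3 × 3 example; the `sign` helper (permutation parity by inversion count) is illustrative:

```python
import numpy as np
from itertools import permutations

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A))
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))

def sign(p):
    # Permutation parity via inversion count (hypothetical helper name)
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

# Big formula: sum over all n! permutations with signs
big = sum(sign(p) * np.prod([A[i, p[i]] for i in range(3)])
          for p in permutations(range(3)))
assert np.isclose(big, np.linalg.det(A))
```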
Matrix multiplication AB.
The i, j entry of AB is (row i of A)·(column j of B) = Σ_k a_ik b_kj. By columns: column j of AB = A times column j of B. By rows: row i of A multiplies B. Columns times rows: AB = sum of (column k)(row k). All these equivalent definitions come from the rule that AB times x equals A times Bx.
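The "columns times rows" definition can be verified against the built-in product on hypothetical random matrices:

```python
import numpy as np

A = np.random.default_rng(0).standard_normal((3, 4))
B = np.random.default_rng(1).standard_normal((4, 2))
# Columns times rows: AB = sum over k of (column k of A)(row k of B)
AB = sum(np.outer(A[:, k], B[k, :]) for k in range(4))
assert np.allclose(AB, A @ B)
```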
Outer product uvᵀ
= column times row = rank one matrix.
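A one-line check, with hypothetical nonzero vectors, that a column times a row has rank one:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0])
A = np.outer(u, v)                   # column times row
assert np.linalg.matrix_rank(A) == 1 # nonzero outer products have rank one
```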
Polar decomposition A = Q H.
Orthogonal Q times positive (semi)definite H.
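One standard way to compute the factors is via the SVD (Q = UVᵀ, H = VΣVᵀ); the example matrix is hypothetical:

```python
import numpy as np

A = np.array([[1.0, -3.0],
              [2.0, -1.0]])          # hypothetical invertible matrix
U, s, Vt = np.linalg.svd(A)
Q = U @ Vt                           # orthogonal factor
H = Vt.T @ np.diag(s) @ Vt           # positive semidefinite factor
assert np.allclose(Q @ H, A)
assert np.allclose(Q.T @ Q, np.eye(2))          # Q is orthogonal
assert np.all(np.linalg.eigvalsh(H) >= -1e-12)  # H is positive semidefinite
```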
Rank one matrix A = uvᵀ ≠ 0.
Column and row spaces = lines cu and cv.
R = [c −s; s c] rotates the plane by θ and R⁻¹ = Rᵀ rotates back by −θ. Eigenvalues are e^(iθ) and e^(−iθ), eigenvectors are (1, ±i). c, s = cos θ, sin θ.
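A numerical check of the inverse and the complex eigenvalues, for a hypothetical angle θ:

```python
import numpy as np

theta = 0.3
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s],
              [s,  c]])
assert np.allclose(np.linalg.inv(R), R.T)   # R^{-1} = R^T rotates back by -theta
lams = np.linalg.eigvals(R)
expected = [np.exp(-1j * theta), np.exp(1j * theta)]
assert np.allclose(sorted(lams, key=np.imag), sorted(expected, key=np.imag))
```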
Schur complement S = D − CA⁻¹B.
Appears in block elimination on [A B; C D].
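Block elimination can be sketched on hypothetical small blocks: subtracting CA⁻¹ times the first block row leaves S in the (2, 2) position:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
B = np.array([[2.0],
              [0.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[5.0]])
M = np.block([[A, B],
              [C, D]])
S = D - C @ np.linalg.inv(A) @ B         # Schur complement of A
E = np.block([[np.eye(2),              np.zeros((2, 1))],
              [-C @ np.linalg.inv(A),  np.eye(1)]])
# Eliminating the C block leaves S in the bottom-right corner:
assert np.allclose(E @ M, np.block([[A, B],
                                    [np.zeros((1, 2)), S]]))
```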
Solvable system Ax = b.
The right side b is in the column space of A.
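The column-space condition is equivalent to rank([A b]) = rank(A); a hypothetical rank-one example shows both a solvable and an unsolvable right side:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])             # rank one: column space is the line through (1, 2, 3)
b_good = np.array([5.0, 10.0, 15.0])   # a multiple of (1, 2, 3) -> solvable
b_bad  = np.array([5.0, 10.0, 16.0])   # off the line -> unsolvable
rank_A = np.linalg.matrix_rank(A)
assert np.linalg.matrix_rank(np.column_stack([A, b_good])) == rank_A  # b in C(A)
assert np.linalg.matrix_rank(np.column_stack([A, b_bad]))  >  rank_A  # b not in C(A)
```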
Trace of A
= sum of diagonal entries = sum of eigenvalues of A. Tr AB = Tr BA.
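All three trace facts can be verified on hypothetical example matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
assert np.isclose(np.trace(A), A[0, 0] + A[1, 1])                  # diagonal sum
assert np.isclose(np.trace(A), np.sum(np.linalg.eigvals(A)).real)  # eigenvalue sum
B = np.array([[0.0, 1.0],
              [5.0, 2.0]])
assert np.isclose(np.trace(A @ B), np.trace(B @ A))                # Tr AB = Tr BA
```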
Wavelets w_jk(t).
Stretch and shift the time axis to create w_jk(t) = w_00(2^j t − k).