- 10.7.1: Suppose that a game has a payoff matrix (a) If players R and C use ...
- 10.7.2: Construct a simple example to show that optimal strategies are not ...
- 10.7.3: For the strictly determined games with the following payoff matrice...
- 10.7.4: For the games with the following payoff matrices, find optimal stra...
- 10.7.5: Player R has two playing cards: a black ace and a red four. Player ...
- 10.7.6: Verify Equations 6, 7, and 8.
- 10.7.7: Verify the statement in the last paragraph of Example 3
- 10.7.8: Show that the entries of the optimal strategies and given in Theore...
- 10.7.T1: Consider a game between two players where each player can make up t...
- 10.7.T2: Consider a game between two players where each player can make up t...
Solutions for Chapter 10.7: Games of Strategy
Full solutions for Elementary Linear Algebra: Applications Version | 10th Edition
Affine transformation Tv = Av + v0 = linear transformation plus shift.
Associative Law (AB)C = A(BC).
Parentheses can be removed to leave ABC.
Augmented matrix [A b].
Ax = b is solvable when b is in the column space of A; then [A b] has the same rank as A. Elimination on [A b] keeps equations correct.
Back substitution.
Upper triangular systems are solved in reverse order xn to x1.
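A minimal pure-Python sketch of back substitution (the matrix entries below are illustrative, not from the text):

```python
# Back substitution for an upper triangular system Ux = b,
# solving for x_n first and working back to x_1.

def back_substitute(U, b):
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):          # reverse order: x_n down to x_1
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]         # pivot U[i][i] must be nonzero
    return x

U = [[2.0, 1.0, 1.0],
     [0.0, 3.0, 1.0],
     [0.0, 0.0, 4.0]]
b = [5.0, 7.0, 8.0]
x = back_substitute(U, b)
```

Each step uses only the unknowns already found, so the loop runs over rows n, n−1, …, 1.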
Big formula for n by n determinants.
Det(A) is a sum of n! terms. For each term: multiply one entry from each row and column of A, rows in order 1, …, n and column order given by a permutation P. Each of the n! P's has a + or − sign.
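The big formula can be computed directly for a small matrix; a sketch using permutations (the 3 by 3 matrix is illustrative):

```python
# Determinant by the "big formula": sum over all n! permutations P of
# sign(P) * a[0][P(0)] * ... * a[n-1][P(n-1)].
from itertools import permutations

def sign(p):
    # Parity of a permutation, found by counting inversions.
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def big_formula_det(A):
    n = len(A)
    total = 0
    for p in permutations(range(n)):        # n! terms
        term = sign(p)
        for row, col in enumerate(p):       # one entry from each row and column
            term *= A[row][col]
        total += term
    return total

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
d = big_formula_det(A)   # 3! = 6 terms for a 3 by 3 matrix; here d = -3
```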
Covariance matrix Σ.
When random variables xi have mean = average value = 0, their covariances Σij are the averages of xixj. With means x̄i, the matrix Σ = mean of (x − x̄)(x − x̄)ᵀ is positive (semi)definite; Σ is diagonal if the xi are independent.
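A small sketch of the sample version: subtract the means, then average the products (the sample values are illustrative):

```python
# Sample covariance matrix: center the data, then Sigma[i][j] is the
# average of (x_i - mean_i)(x_j - mean_j) over the samples.

samples = [[2.0, 1.0], [4.0, 3.0], [6.0, 5.0]]   # three samples of (x1, x2)
n, d = len(samples), len(samples[0])
means = [sum(s[i] for s in samples) / n for i in range(d)]
centered = [[s[i] - means[i] for i in range(d)] for s in samples]

Sigma = [[sum(c[i] * c[j] for c in centered) / n for j in range(d)]
         for i in range(d)]                       # symmetric, diagonal >= 0
```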
Dot product = Inner product xᵀy = x1y1 + … + xnyn.
Complex dot product is x̄ᵀy. Perpendicular vectors have xᵀy = 0. (AB)ij = (row i of A)ᵀ(column j of B).
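The identity (AB)ij = (row i of A)ᵀ(column j of B) can be checked entry by entry; a quick sketch with illustrative 2 by 2 matrices:

```python
# Each entry of AB is a dot product: row i of A with column j of B.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]
AB = [[dot(A[i], [B[0][j], B[1][j]]) for j in range(2)] for i in range(2)]
```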
Free columns of A.
Columns without pivots; these are combinations of earlier columns.
Fundamental Theorem.
The nullspace N(A) and row space C(Aᵀ) are orthogonal complements in Rⁿ (perpendicular, from Ax = 0, with dimensions r and n − r). Applied to Aᵀ: the column space C(A) is the orthogonal complement of N(Aᵀ) in Rᵐ.
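The orthogonality is easy to see numerically: every row of A has zero dot product with every nullspace vector. A sketch with an illustrative rank-1 matrix:

```python
# Row space is perpendicular to nullspace: Ax = 0 says exactly that
# each row of A dots to zero with x.

A = [[1, 2],
     [2, 4]]            # rank 1
x = [-2, 1]             # in the nullspace: Ax = 0
dots = [sum(row[j] * x[j] for j in range(2)) for row in A]
```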
Inverse matrix A⁻¹.
Square matrix with A⁻¹A = I and AA⁻¹ = I. No inverse if det A = 0 (equivalently rank(A) < n, equivalently Ax = 0 for a nonzero vector x). The inverses of AB and Aᵀ are B⁻¹A⁻¹ and (A⁻¹)ᵀ. Cofactor formula: (A⁻¹)ij = Cji / det A.
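For a 2 by 2 matrix the cofactor formula gives the familiar closed form; a sketch (the matrix values are illustrative):

```python
# Cofactor formula (A^-1)_ij = C_ji / det A for 2 by 2:
# A = [[a, b], [c, d]]  ->  A^-1 = [[d, -b], [-c, a]] / (ad - bc).

def inverse_2x2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("det A = 0: no inverse")
    return [[d / det, -b / det],
            [-c / det, a / det]]

A = [[4.0, 7.0],
     [2.0, 6.0]]
Ainv = inverse_2x2(A)      # check: Ainv @ A should be the identity
```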
Left inverse A⁺.
If A has full column rank n, then A⁺ = (AᵀA)⁻¹Aᵀ has A⁺A = In.
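A sketch of A⁺ = (AᵀA)⁻¹Aᵀ for an illustrative 3 by 2 matrix with independent columns, using the 2 by 2 cofactor inverse for (AᵀA)⁻¹:

```python
# Left inverse of a full-column-rank A: A+ = (A^T A)^-1 A^T, so A+ A = I.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]
At = [list(col) for col in zip(*A)]    # A^T
G = matmul(At, A)                      # A^T A is 2x2 and invertible
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
Ginv = [[G[1][1] / det, -G[0][1] / det],
        [-G[1][0] / det, G[0][0] / det]]
Aplus = matmul(Ginv, At)               # A+ = (A^T A)^-1 A^T
I2 = matmul(Aplus, A)                  # should be the 2x2 identity
```

Note that AA⁺ is not the 3 by 3 identity here; only the left product recovers I.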
Linear transformation T.
Each vector v in the input space transforms to T(v) in the output space, and linearity requires T(cv + dw) = cT(v) + dT(w). Examples: matrix multiplication Av, differentiation and integration in function space.
Network.
A directed graph that has constants c1, …, cm associated with the edges.
Orthogonal subspaces V and W.
Every v in V is orthogonal to every w in W.
Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; the error e = b − Pb is perpendicular to S. P² = P = Pᵀ, eigenvalues are 1 or 0, eigenvectors are in S or S⊥. If the columns of A are a basis for S then P = A(AᵀA)⁻¹Aᵀ.
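For a one-column A = a the formula reduces to P = aaᵀ/(aᵀa); a sketch with illustrative vectors, checking P² = P and e ⟂ a:

```python
# Projection onto the line through a: P = a a^T / (a^T a).

a = [1.0, 2.0]
aa = sum(ai * ai for ai in a)                        # a^T a
P = [[a[i] * a[j] / aa for j in range(2)] for i in range(2)]

b = [3.0, 1.0]
p = [sum(P[i][j] * b[j] for j in range(2)) for i in range(2)]   # p = Pb
e = [b[i] - p[i] for i in range(2)]                  # error e = b - Pb
perp = sum(e[i] * a[i] for i in range(2))            # should be 0
```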
Rank r(A)
= number of pivots = dimension of column space = dimension of row space.
Reflection matrix (Householder) Q = I − 2uuᵀ.
Unit vector u is reflected to Qu = −u. All x in the mirror plane uᵀx = 0 have Qx = x. Notice Qᵀ = Q⁻¹ = Q.
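Both properties can be verified directly; a sketch in the plane with an illustrative unit vector u:

```python
# Householder reflection Q = I - 2 u u^T for a unit vector u.
# Qu = -u, and Qx = x for any x in the mirror u^T x = 0.
import math

u = [1 / math.sqrt(2), 1 / math.sqrt(2)]             # unit vector
Q = [[(1.0 if i == j else 0.0) - 2 * u[i] * u[j] for j in range(2)]
     for i in range(2)]

Qu = [sum(Q[i][j] * u[j] for j in range(2)) for i in range(2)]   # = -u
x = [1 / math.sqrt(2), -1 / math.sqrt(2)]            # u^T x = 0
Qx = [sum(Q[i][j] * x[j] for j in range(2)) for i in range(2)]   # = x
```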
Rotation matrix R = [c −s; s c] rotates the plane by θ and R⁻¹ = Rᵀ rotates back by −θ. Eigenvalues are e^(iθ) and e^(−iθ), eigenvectors are (1, ±i). c, s = cos θ, sin θ.
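The orthogonality RᵀR = I is a one-line check; a sketch with an illustrative angle:

```python
# Rotation by theta: R = [[c, -s], [s, c]]. R^T rotates back, so R^T R = I.
import math

theta = 0.7
c, s = math.cos(theta), math.sin(theta)
R = [[c, -s],
     [s, c]]
Rt = [[R[j][i] for j in range(2)] for i in range(2)]  # transpose
I2 = [[sum(Rt[i][k] * R[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]                              # R^T R = identity
```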
Schwarz inequality.
|v · w| ≤ ‖v‖ ‖w‖. Then |vᵀAw|² ≤ (vᵀAv)(wᵀAw) for positive definite A.
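A numerical check of the basic inequality with illustrative vectors:

```python
# Schwarz inequality: |v . w| <= ||v|| ||w||, with equality only
# when v and w are parallel.
import math

v = [1.0, 2.0, 2.0]
w = [3.0, 0.0, 4.0]
dot_vw = sum(vi * wi for vi, wi in zip(v, w))        # 11
norms = (math.sqrt(sum(vi * vi for vi in v))
         * math.sqrt(sum(wi * wi for wi in w)))      # 3 * 5 = 15
```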
Symmetric matrix A.
The transpose is Aᵀ = A, and aij = aji. A⁻¹ is also symmetric.