- 10.5.1: Find the product (p. 122): [2 -1; -2 3][2 -3 4 0; 4 5 -9 3]
- 10.5.2: In a certain game, you will win $6 with probability 0.15, lose $2 w...
- 10.5.3: For the game of Example 1 find the expected payoff E if player I ch...
- 10.5.4: For the game of Example 2 find the expected payoff E if player I ch...
- 10.5.5: In 5–8, find the expected payoff of the game matrix for the given st...
- 10.5.6: In 5–8, find the expected payoff of the game matrix for the given st...
- 10.5.7: In 5–8, find the expected payoff of the game matrix for the given st...
- 10.5.8: In 5–8, find the expected payoff of the game matrix for the given st...
- 10.5.9: In 9–12, find the expected payoff of each game matrix. [4 0; -3 6]; ...
- 10.5.10: In 9–12, find the expected payoff of each game matrix. [1 -1; -2 3]; P =...
- 10.5.11: In 9–12, find the expected payoff of each game matrix. [1 0 0; 0 1 0; 0 0 1]; ...
- 10.5.12: In 9–12, find the expected payoff of each game matrix. [4 -1 0; 2 3 1]; P...
- 10.5.13: Matching Pennies In the children's game matching pennies, each playe...
- 10.5.14: Matching Pennies In the children's game matching pennies, each playe...
- 10.5.15: Show that in a game matrix the only games that are not strictly dete...
Solutions for Chapter 10.5: Mixed Strategies
Full solutions for Finite Mathematics, Binder Ready Version: An Applied Approach | 11th Edition
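The expected payoff of a zero-sum game with payoff matrix A under mixed strategies P (row player) and Q (column player) is the product E = P A Q. A minimal numpy sketch, using an illustrative 2x2 matrix and strategies (not taken from any specific exercise above):

```python
import numpy as np

# Expected payoff E = P A Q for a zero-sum game.
# Matrix and strategies are illustrative choices, not exercise data.
A = np.array([[4.0, 0.0],
              [-3.0, 6.0]])      # payoff matrix (row player's winnings)
P = np.array([[0.5, 0.5]])       # row player's mixed strategy (1 x 2)
Q = np.array([[0.25], [0.75]])   # column player's mixed strategy (2 x 1)

E = (P @ A @ Q).item()           # expected payoff, a scalar
print(E)
```

P A Q is (1 x 2)(2 x 2)(2 x 1), so the product collapses to a single number: the long-run average winnings per play.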
Affine transformation.
Tv = Av + v0 = linear transformation plus shift.
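The rule Tv = Av + v0 can be sketched directly; the matrix A and shift v0 below are illustrative choices:

```python
import numpy as np

# T(v) = A v + v0: a linear part A plus a shift v0 (illustrative values).
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
v0 = np.array([1.0, -1.0])

def T(v):
    return A @ v + v0

print(T(np.array([1.0, 1.0])))   # A maps [1,1] to [2,3]; the shift gives [3, 2]
```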
Basis for V.
Independent vectors v1, ..., vd whose linear combinations give each vector in V as v = c1v1 + ... + cdvd. V has many bases; each basis gives unique c's.
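Finding the unique coefficients c for a given basis amounts to solving a linear system: put the basis vectors into the columns of B and solve Bc = v. The basis below is an illustrative choice:

```python
import numpy as np

# Columns of B form a basis v1, v2 for R^2 (illustrative choice).
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
v = np.array([3.0, 2.0])

# Unique coefficients c with v = c1*v1 + c2*v2: solve B c = v.
c = np.linalg.solve(B, v)
print(c)
```

Because the basis vectors are independent, B is invertible and the solution c is unique.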
Cayley–Hamilton theorem.
p(λ) = det(A - λI) has p(A) = zero matrix.
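The identity p(A) = zero matrix can be checked numerically: build the characteristic polynomial's coefficients and substitute A for λ. The 2x2 matrix is an illustrative choice:

```python
import numpy as np

# Cayley-Hamilton: A satisfies its own characteristic polynomial.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])           # illustrative 2x2 matrix

coeffs = np.poly(A)                  # char. poly coefficients, highest power first
n = A.shape[0]
pA = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))
print(pA)                            # zero matrix, up to rounding
```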
Cholesky factorization.
A = C^T C = (L√D)(L√D)^T for positive definite A.
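A quick check with numpy, on an illustrative positive definite matrix. Note that `np.linalg.cholesky` returns the lower-triangular factor L with A = L L^T; taking C = L^T gives the A = C^T C form:

```python
import numpy as np

# Cholesky factorization of a positive definite matrix (illustrative values).
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

L = np.linalg.cholesky(A)     # lower triangular, A = L @ L.T
C = L.T                       # upper triangular, so A = C.T @ C
print(np.allclose(A, C.T @ C))
```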
Diagonal matrix D.
dij = 0 if i ≠ j. Block-diagonal: zero outside square blocks Dii.
Diagonalizable matrix A.
Must have n independent eigenvectors (in the columns of S; automatic with n different eigenvalues). Then S^-1 A S = Λ = eigenvalue matrix.
Diagonalization Λ = S^-1 A S.
Λ = eigenvalue matrix and S = eigenvector matrix of A. A must have n independent eigenvectors to make S invertible. All A^k = S Λ^k S^-1.
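Both identities, S^-1 A S = Λ and A^k = S Λ^k S^-1, can be verified with numpy's eigendecomposition; the matrix below is an illustrative choice with two distinct eigenvalues:

```python
import numpy as np

# Diagonalization: S^-1 A S = Λ, and fast powers A^k = S Λ^k S^-1.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])           # illustrative; eigenvalues 5 and 2

eigvals, S = np.linalg.eig(A)        # S has eigenvectors in its columns
Lam = np.diag(eigvals)               # Λ = eigenvalue matrix

print(np.allclose(np.linalg.inv(S) @ A @ S, Lam))

# A^3 without multiplying A three times: cube the eigenvalues instead.
A3 = S @ np.diag(eigvals**3) @ np.linalg.inv(S)
print(np.allclose(A3, np.linalg.matrix_power(A, 3)))
```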
Four Fundamental Subspaces C(A), N(A), C(A^T), N(A^T).
Use A^H (conjugate transpose) for complex A.
Hankel matrix H.
Constant along each antidiagonal; hij depends on i + j.
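The defining property, entries depending only on i + j, makes a Hankel matrix easy to build directly; the entry sequence below is an illustrative choice:

```python
import numpy as np

# Hankel matrix: h_ij depends only on i + j, so each antidiagonal is constant.
c = [1, 2, 3, 4, 5]     # illustrative entries, indexed by i + j
n = 3
H = np.array([[c[i + j] for j in range(n)] for i in range(n)])
print(H)
# [[1 2 3]
#  [2 3 4]
#  [3 4 5]]
```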
Jordan form J = M^-1 A M.
If A has s independent eigenvectors, its "generalized" eigenvector matrix M gives J = diag(J1, ..., Js). The block Jk is λkIk + Nk where Nk has 1's on diagonal 1. Each block has one eigenvalue λk and one eigenvector.
Left nullspace N(A^T).
Nullspace of A^T = "left nullspace" of A because y^T A = 0^T.
Lucas numbers.
Ln = 2, 1, 3, 4, ... satisfy Ln = Ln-1 + Ln-2 = λ1^n + λ2^n, with λ1, λ2 = (1 ± √5)/2 from the Fibonacci matrix [1 1; 1 0]. Compare L0 = 2 with F0 = 0.
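Since λ1^n + λ2^n is the sum of the eigenvalues of the n-th matrix power, the Lucas numbers are the traces of the powers of the Fibonacci matrix:

```python
import numpy as np

# Lucas numbers from powers of the Fibonacci matrix [[1, 1], [1, 0]]:
# L_n = trace of the n-th power = λ1^n + λ2^n.
F = np.array([[1, 1],
              [1, 0]])

def lucas(n):
    return int(np.trace(np.linalg.matrix_power(F, n)))

print([lucas(n) for n in range(1, 8)])   # 1, 3, 4, 7, 11, 18, 29
```

Note that lucas(0) returns the trace of the identity, which is 2, matching L0 = 2.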
Minimal polynomial of A.
The lowest degree polynomial with m(A) = zero matrix. This is p(λ) = det(A - λI) if no eigenvalues are repeated; always m(λ) divides p(λ).
Norm ||A||.
The "ℓ2 norm" of A is the maximum ratio ||Ax||/||x|| = σmax. Then ||Ax|| ≤ ||A|| ||x|| and ||AB|| ≤ ||A|| ||B|| and ||A + B|| ≤ ||A|| + ||B||. Frobenius norm: ||A||F^2 = ΣΣ aij^2. The ℓ1 and ℓ∞ norms are largest column and row sums of |aij|.
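All four norms are available through `np.linalg.norm`; the matrix is an illustrative choice:

```python
import numpy as np

# Matrix norms for an illustrative A.
A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

two  = np.linalg.norm(A, 2)        # ℓ2 norm = largest singular value σmax
fro  = np.linalg.norm(A, 'fro')    # Frobenius: sqrt of the sum of aij^2
one  = np.linalg.norm(A, 1)        # largest column sum of |aij|
inf_ = np.linalg.norm(A, np.inf)   # largest row sum of |aij|

print(one, inf_)                   # column sums are 4 and 6; row sums are 3 and 7
```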
Normal equation A^T A x̂ = A^T b.
Gives the least squares solution x̂ to Ax = b if A has full rank n (independent columns). The equation says that (columns of A)·(b - A x̂) = 0.
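Solving the normal equation directly, on an illustrative full-column-rank A, and checking that the residual is orthogonal to the columns of A:

```python
import numpy as np

# Least squares via the normal equation A^T A x = A^T b.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])          # full column rank (illustrative)
b = np.array([1.0, 2.0, 4.0])

x_hat = np.linalg.solve(A.T @ A, A.T @ b)

# The residual b - A x_hat is orthogonal to every column of A.
print(A.T @ (b - A @ x_hat))        # ~ [0, 0]
```

In practice `np.linalg.lstsq` is preferred numerically, but it returns the same solution here.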
Pivot columns of A.
Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.
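Row reduction exposes the pivot columns; a minimal sketch using sympy (an assumed dependency), whose `rref` method returns the pivot column indices alongside the reduced matrix:

```python
from sympy import Matrix

# Pivot columns via row reduction; the matrix is an illustrative choice.
A = Matrix([[1, 2, 3],
            [2, 4, 7],
            [1, 2, 4]])
R, pivots = A.rref()
print(pivots)          # (0, 2): column 1 is 2 * column 0, so it has no pivot
```

The pivot columns of the original A (columns 0 and 2 here) are a basis for its column space.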
Simplex method for linear programming.
The minimum cost vector x* is found by moving from corner to lower-cost corner along the edges of the feasible set (where the constraints Ax = b and x ≥ 0 are satisfied). Minimum cost at a corner!
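A toy linear program using `scipy.optimize.linprog` (an assumed dependency; its default solver is HiGHS rather than the classical tableau simplex, but it finds the same corner optimum). Variables are nonnegative by default, matching x ≥ 0:

```python
from scipy.optimize import linprog

# Toy problem: minimize x0 + 2*x1 subject to x0 + x1 >= 1 and x >= 0.
# linprog takes "<=" constraints, so x0 + x1 >= 1 becomes -x0 - x1 <= -1.
res = linprog(c=[1.0, 2.0], A_ub=[[-1.0, -1.0]], b_ub=[-1.0])
print(res.x, res.fun)    # optimum at the corner x = (1, 0), cost 1
```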
Singular matrix A.
A square matrix that has no inverse: det(A) = 0.
Standard basis for R^n.
Columns of the n by n identity matrix (written i, j, k in R^3).
Volume of box.
The rows (or the columns) of A generate a box with volume |det(A)|.
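For a diagonal A the box is an axis-aligned rectangle, so the formula is easy to confirm by eye:

```python
import numpy as np

# The box generated by the rows of A has volume |det(A)|.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])          # a 2-by-3 rectangle (illustrative)
print(abs(np.linalg.det(A)))        # |det A| = 2 * 3 = 6
```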