 6.3.1: Draw the decision tree for sequential search on a list of three ele...
 6.3.2: Draw the decision tree for sequential search on a list of six eleme...
 6.3.3: Draw the decision tree for binary search on a sorted list of seven ...
 6.3.4: Draw the decision tree for binary search on a sorted list of four e...
 6.3.5: Consider a search algorithm that compares an item with the last ele...
 6.3.6: Consider a search algorithm that compares an item with an element o...
 6.3.7: a. Given the data 9, 5, 6, 2, 4, 7 construct the binary search tree...
 6.3.8: a. Given the data g, d, r, s, b, q, c, m construct the binary searc...
 6.3.9: a. For a set of six data items, what is the minimum worst-case numb...
 6.3.10: a. For a set of nine data items, what is the minimum worst-case num...
 6.3.11: An inorder tree traversal of a binary search tree produces a listin...
 6.3.12: Construct a binary search tree for In the high and far-off times th...
 6.3.13: Use the theorem on the lower bound for sorting to find lower bounds...
 6.3.14: Contrast the number of comparisons required for selection sort and ...
 6.3.15: One of five coins is counterfeit and is lighter than the other four. ...
 6.3.16: One of five coins is counterfeit and is either too heavy or too lig...
 6.3.17: One of four coins is counterfeit and is either too heavy or too lig...
 6.3.18: One of four coins is counterfeit and is either too heavy or too lig...
 6.3.19: Devise an algorithm to solve the problem of Exercise 18 using three...
 6.3.20: One of eight coins is counterfeit and is either too heavy or too li...
 6.3.21: In the decision tree for the binary search algorithm (and the binar...
 6.3.22: Our existing binary search algorithm (Chapter 3, Example 13) contai...
 6.3.23: To prove that log n! = Θ(n log n), we can use the definition of orde...
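Several of these exercises (6.3.3, 6.3.9, 6.3.21) turn on the depth of the binary search decision tree: each comparison is a node, so searching n sorted items takes at most ceil(log2(n + 1)) comparisons. A minimal Python sketch (the sample list and helper name are illustrative, not from the text):

```python
import math

def binary_search_comparisons(sorted_list, target):
    """Binary search that also counts three-way comparisons,
    one per node visited in the decision tree."""
    lo, hi, count = 0, len(sorted_list) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        count += 1
        if sorted_list[mid] == target:
            return mid, count
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, count

# For n = 7 sorted items the decision tree has depth ceil(log2(8)) = 3,
# so no successful search needs more than 3 comparisons.
data = [2, 4, 5, 6, 7, 9, 11]
worst = max(binary_search_comparisons(data, x)[1] for x in data)
print(worst, math.ceil(math.log2(len(data) + 1)))  # 3 3
```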
Solutions for Chapter 6.3: Decision Trees
Full solutions for Mathematical Structures for Computer Science, 7th Edition
ISBN: 9781429215107
Chapter 6.3: Decision Trees includes 23 full step-by-step solutions.

Change of basis matrix M.
The old basis vectors v_j are combinations sum_i m_ij w_i of the new basis vectors. The coordinates of c_1 v_1 + ... + c_n v_n = d_1 w_1 + ... + d_n w_n are related by d = M c. (For n = 2: v_1 = m_11 w_1 + m_21 w_2, v_2 = m_12 w_1 + m_22 w_2.)
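A quick numeric check of d = M c for n = 2, with a hypothetical matrix M (entries chosen only for illustration):

```python
# Columns of M hold the new-basis coordinates of the old basis vectors:
# v1 = 1*w1 + 2*w2 and v2 = 3*w1 + 4*w2, so M = [[1, 3], [2, 4]].
M = [[1, 3], [2, 4]]
c = [5, 6]  # the vector x = 5*v1 + 6*v2, in old-basis coordinates

# d = M c gives the same vector's coordinates in the new basis:
# x = 5*(w1 + 2*w2) + 6*(3*w1 + 4*w2) = 23*w1 + 34*w2
d = [M[0][0]*c[0] + M[0][1]*c[1],
     M[1][0]*c[0] + M[1][1]*c[1]]
print(d)  # [23, 34]
```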

Column space C (A) =
space of all combinations of the columns of A.

Complete solution x = x_p + x_n to Ax = b.
(Particular x_p) + (x_n in nullspace).
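A small sketch of the particular-plus-nullspace structure, using a hypothetical 1x2 system (data invented for illustration):

```python
A = [[1, 2]]          # one equation, two unknowns: x + 2y = 5
b = [5]
xp = [5, 0]           # a particular solution: 1*5 + 2*0 = 5
xn = [-2, 1]          # spans the nullspace: 1*(-2) + 2*1 = 0

def matvec(A, x):
    return [sum(a*xi for a, xi in zip(row, x)) for row in A]

# Every x_p + t*x_n is a complete solution of Ax = b.
for t in range(-3, 4):
    x = [p + t*n for p, n in zip(xp, xn)]
    assert matvec(A, x) == b
```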

Conjugate Gradient Method.
A sequence of steps (end of Chapter 9) to solve positive definite Ax = b by minimizing (1/2) x^T A x - x^T b over growing Krylov subspaces.
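A minimal pure-Python conjugate gradient sketch (dense matrices, no preconditioning); the test matrix is an invented 2x2 positive definite example, and in exact arithmetic the method finishes in at most n steps:

```python
def cg(A, b, tol=1e-12):
    """Conjugate gradient for symmetric positive definite A."""
    n = len(b)
    matvec = lambda v: [sum(a*vi for a, vi in zip(row, v)) for row in A]
    dot = lambda u, v: sum(ui*vi for ui, vi in zip(u, v))
    x = [0.0]*n
    r = b[:]              # residual b - Ax with x = 0
    p = r[:]
    rs = dot(r, r)
    for _ in range(n):    # at most n steps in exact arithmetic
        Ap = matvec(p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha*pi for xi, pi in zip(x, p)]
        r = [ri - alpha*api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new/rs)*pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # positive definite
b = [1.0, 2.0]
x = cg(A, b)                   # exact solution is (1/11, 7/11)
```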

Exponential e^{At} = I + At + (At)^2/2! + ...
has derivative A e^{At}; e^{At} u(0) solves u' = Au.
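The series can be summed directly for small matrices. A sketch (truncated series, so only an approximation in general; the nilpotent test matrix is chosen so the series terminates exactly):

```python
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k]*Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm_series(A, t, terms=20):
    """Approximate e^{At} by the truncated power series."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # I
    term = [row[:] for row in result]
    At = [[t*a for a in row] for row in A]
    for k in range(1, terms):
        term = mat_mul(term, At)                 # build (At)^k / k!
        term = [[x/k for x in row] for row in term]
        result = [[r+s for r, s in zip(rr, sr)]
                  for rr, sr in zip(result, term)]
    return result

A = [[0, 1], [0, 0]]   # nilpotent, so e^{At} = [[1, t], [0, 1]] exactly
E = expm_series(A, 3.0)
print(E)  # [[1.0, 3.0], [0.0, 1.0]]
```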

Full row rank r = m.
Independent rows, at least one solution to Ax = b, column space is all of R^m. Full rank means full column rank or full row rank.

Gauss-Jordan method.
Invert A by row operations on [A I] to reach [I A^{-1}].
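A sketch of the method with exact arithmetic via the standard-library Fraction type (the 2x2 test matrix is an invented example):

```python
from fractions import Fraction

def gauss_jordan_inverse(A):
    """Row-reduce [A | I] to [I | A^-1]; raises if A is singular."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(i == j) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        pivot_row = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot_row] = M[pivot_row], M[col]   # partial row swap
        pivot = M[col][col]
        M[col] = [x / pivot for x in M[col]]          # scale pivot row to 1
        for r in range(n):                            # clear the column
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f*y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]                     # right half is A^-1

A = [[2, 1], [5, 3]]   # det = 1, so the inverse is [[3, -1], [-5, 2]]
print(gauss_jordan_inverse(A))
```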

Independent vectors v_1, ..., v_k.
No combination c_1 v_1 + ... + c_k v_k = zero vector unless all c_i = 0. If the v's are the columns of A, the only solution to Ax = 0 is x = 0.
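Independence of the columns is equivalent to rank(A) = number of vectors. A sketch checking this by exact elimination (the two sample matrices are invented):

```python
from fractions import Fraction

def matrix_rank(A):
    """Count pivots after elimination (exact, using Fractions)."""
    M = [[Fraction(x) for x in row] for row in A]
    rows, cols = len(M), len(M[0])
    pivots, col = 0, 0
    while pivots < rows and col < cols:
        r = next((i for i in range(pivots, rows) if M[i][col] != 0), None)
        if r is None:
            col += 1          # no pivot in this column
            continue
        M[pivots], M[r] = M[r], M[pivots]
        for i in range(rows):
            if i != pivots and M[i][col] != 0:
                f = M[i][col] / M[pivots][col]
                M[i] = [x - f*y for x, y in zip(M[i], M[pivots])]
        pivots += 1
        col += 1
    return pivots

# Columns of each matrix are the vectors v_1, ..., v_k.
independent = [[1, 0, 1], [0, 1, 1], [0, 0, 1]]   # 3 pivots: independent
dependent   = [[1, 2, 3], [2, 4, 6], [0, 0, 0]]   # columns all parallel
print(matrix_rank(independent), matrix_rank(dependent))  # 3 1
```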

Inverse matrix A^{-1}.
Square matrix with A^{-1} A = I and A A^{-1} = I. No inverse if det A = 0 and rank(A) < n and Ax = 0 for a nonzero vector x. The inverses of AB and A^T are B^{-1} A^{-1} and (A^{-1})^T. Cofactor formula: (A^{-1})_ij = C_ji / det A.

Kirchhoff's Laws.
Current Law: net current (in minus out) is zero at each node. Voltage Law: Potential differences (voltage drops) add to zero around any closed loop.

|A^{-1}| = 1/|A| and |A^T| = |A|.
The big formula for det(A) has a sum of n! terms, the cofactor formula uses determinants of size n - 1, volume of box = |det(A)|.
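The big formula can be written out directly for small n, and used to confirm |A^T| = |A|. A sketch over an invented 3x3 example (n! = 6 terms):

```python
from itertools import permutations
from fractions import Fraction

def det(A):
    """Big formula: sum of n! signed products, one per permutation."""
    n = len(A)
    total = Fraction(0)
    for perm in permutations(range(n)):
        # sign = (-1)^(number of inversions in the permutation)
        inversions = sum(perm[i] > perm[j]
                         for i in range(n) for j in range(i + 1, n))
        prod = Fraction(1)
        for i in range(n):
            prod *= A[i][perm[i]]
        total += (-1)**inversions * prod
    return total

A  = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
At = [list(col) for col in zip(*A)]     # transpose
print(det(A), det(At))  # 8 8  -- |A^T| = |A|
```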

Linear transformation T.
Each vector v in the input space transforms to T(v) in the output space, and linearity requires T(cv + dw) = c T(v) + d T(w). Examples: matrix multiplication Av, differentiation and integration in function space.
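A quick check that matrix multiplication satisfies the linearity requirement, with an invented 2x2 matrix:

```python
def T(v):
    """T(v) = A v for a fixed (hypothetical) matrix A."""
    A = [[1, 2], [3, 4]]
    return [sum(a*x for a, x in zip(row, v)) for row in A]

v, w, c, d = [1, 0], [0, 1], 5, 7
lhs = T([c*vi + d*wi for vi, wi in zip(v, w)])              # T(cv + dw)
rhs = [c*tv + d*tw for tv, tw in zip(T(v), T(w))]           # cT(v) + dT(w)
assert lhs == rhs
print(lhs)  # [19, 43]
```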

Multiplier l_ij.
The pivot row j is multiplied by l_ij and subtracted from row i to eliminate the (i, j) entry: l_ij = (entry to eliminate)/(jth pivot).
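One elimination step on an invented 2x2 example, showing the multiplier at work:

```python
A = [[2.0, 1.0],
     [6.0, 5.0]]

# l_21 = (entry to eliminate) / (first pivot) = 6/2 = 3
l21 = A[1][0] / A[0][0]

# subtract l_21 times the pivot row from row 2 to zero out the (2,1) entry
A[1] = [a - l21 * p for a, p in zip(A[1], A[0])]
print(l21, A[1])  # 3.0 [0.0, 2.0]
```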

Pivot.
The diagonal entry (first nonzero) at the time when a row is used in elimination.

Polar decomposition A = Q H.
Orthogonal Q times positive (semi)definite H.

Saddle point of f(x_1, ..., x_n).
A point where the first derivatives of f are zero and the second derivative matrix (d^2 f / dx_i dx_j = Hessian matrix) is indefinite.
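The classic example is f(x, y) = x^2 - y^2 at the origin: both first derivatives vanish, and the Hessian [[2, 0], [0, -2]] is indefinite, so the point is neither a minimum nor a maximum:

```python
def f(x, y):
    return x*x - y*y   # saddle at (0, 0); Hessian = [[2, 0], [0, -2]]

eps = 0.1
# f rises along the x-axis and falls along the y-axis from the origin
assert f(eps, 0) > f(0, 0) > f(0, eps)
print(f(eps, 0), f(0, 0), f(0, eps))
```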

Singular matrix A.
A square matrix that has no inverse: det(A) = 0.

Spectrum of A = the set of eigenvalues {λ_1, ..., λ_n}.
Spectral radius = max of |λ_i|.
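For a 2x2 matrix the eigenvalues are the roots of λ^2 - (trace)λ + det = 0, so the spectral radius is easy to compute directly. A sketch assuming real eigenvalues (symmetric test matrix invented for illustration):

```python
import math

def eigenvalues_2x2(A):
    """Roots of lambda^2 - trace*lambda + det = 0 (real case assumed)."""
    (a, b), (c, d) = A
    tr, det = a + d, a*d - b*c
    disc = math.sqrt(tr*tr - 4*det)   # real for symmetric A
    return (tr + disc) / 2, (tr - disc) / 2

A = [[2, 1], [1, 2]]                  # symmetric: eigenvalues 3 and 1
lams = eigenvalues_2x2(A)
spectral_radius = max(abs(l) for l in lams)
print(lams, spectral_radius)  # (3.0, 1.0) 3.0
```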

Standard basis for Rn.
Columns of the n by n identity matrix (written i, j, k in R^3).

Wavelets Wjk(t).
Stretch and shift the time axis to create w_jk(t) = w_00(2^j t - k).
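A sketch using the Haar mother wavelet as w_00 (an assumption; the entry does not name a specific mother wavelet), showing how 2^j t - k stretches and shifts its support:

```python
def w00(t):
    """Haar mother wavelet on [0, 1): +1 on the first half, -1 on the second."""
    if 0 <= t < 0.5:
        return 1
    if 0.5 <= t < 1:
        return -1
    return 0

def w(j, k, t):
    # stretch and shift: w_jk(t) = w00(2^j * t - k)
    return w00(2**j * t - k)

# w_{1,1} is supported on [0.5, 1): +1 on [0.5, 0.75), -1 on [0.75, 1)
print(w(1, 1, 0.6), w(1, 1, 0.8), w(1, 1, 0.2))  # 1 -1 0
```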