 3.3.1: Modify the algorithm of Example 27 so that in addition to dropping ...
 3.3.2: What is the total number of arithmetic operations done in the algor...
 3.3.3: The following algorithm adds all the entries in a square n × n array ...
 3.3.4: The following algorithm adds all the entries in the upper triangula...
 3.3.5: Analyze the following algorithm where the work unit is the output s...
 3.3.6: Analyze the following algorithm where the work unit is the output s...
 3.3.7: a. Write the body of an iterative function to compute n! for n ≥ 1. b...
 3.3.8: a. Write a recursive function to compute n! for n ≥ 1. b. Write a rec...
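Exercises 7 and 8 ask for iterative and recursive versions of the factorial computation; a minimal sketch of both (function names are my own, not the textbook's):

```python
def factorial_iterative(n):
    # Iterative n! for n >= 1: accumulate the product 1 * 2 * ... * n.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def factorial_recursive(n):
    # Recursive n!: base case 1! = 1, otherwise n * (n - 1)!.
    if n <= 1:
        return 1
    return n * factorial_recursive(n - 1)
```

The iterative version does n − 1 multiplications; the recursive version does the same work but uses n stack frames.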
 3.3.9: A straightforward algorithm to evaluate a polynomial is given by th...
 3.3.10: An alternative to the polynomial evaluation algorithm in Exercise 9...
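The alternative evaluation method in Exercise 10 is commonly Horner's method, which reduces the multiplication count from order n² to n. A sketch of both evaluation orders (assuming `coeffs[i]` holds the coefficient of x^i; names are mine):

```python
def poly_naive(coeffs, x):
    # Evaluate a_n x^n + ... + a_1 x + a_0 term by term:
    # the x^i term costs i multiplications by x.
    total = 0
    for i, a in enumerate(coeffs):
        term = a
        for _ in range(i):
            term *= x
        total += term
    return total

def poly_horner(coeffs, x):
    # Horner's form: (...((a_n x + a_{n-1}) x + ...) x + a_0,
    # one multiplication and one addition per coefficient.
    result = 0
    for a in reversed(coeffs):
        result = result * x + a
    return result
```

Both return the same value; only the operation counts differ.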
 3.3.11: For the algorithm of Example 27, count the total number of assignme...
 3.3.12: a. Write a function to convert a binary string b_n b_{n-1} ... b_1 b_0 to its de...
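One standard way to do the conversion in Exercise 12, sketched here under the assumption that the bits are processed left to right (function name is mine):

```python
def binary_to_decimal(bits):
    # Convert a binary string b_n ... b_1 b_0 to its decimal value
    # by accumulating value = value * 2 + next bit, left to right.
    value = 0
    for b in bits:
        value = value * 2 + (1 if b == '1' else 0)
    return value
```

Each bit costs one multiplication and one addition, so the work is linear in the length of the string.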
 3.3.13: Algorithm BubbleSort works by making repeated passes through a list...
 3.3.14: In algorithm BubbleSort, suppose we include exchanges of list eleme...
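Exercises 13 and 14 count comparisons and exchanges in BubbleSort. A minimal sketch that tallies both quantities (the counters are my addition for illustration):

```python
def bubble_sort(lst):
    # BubbleSort: repeated passes; each pass bubbles the largest
    # remaining element to the end of the unsorted portion.
    # Returns (sorted list, comparisons, exchanges).
    a = list(lst)
    n = len(a)
    comparisons = exchanges = 0
    for i in range(n - 1):
        for j in range(n - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                exchanges += 1
    return a, comparisons, exchanges
```

The comparison count is fixed at (n − 1) + (n − 2) + ... + 1 = n(n − 1)/2, while the exchange count depends on the input order.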
 3.3.15: In one part of algorithm SelectionSort, the index of the maximum it...
 3.3.16: Defining the basic operation as the comparison of list elements and...
 3.3.17: Solve the recurrence relation of Exercise 16.
 3.3.18: Assume that the exchange of L[i] and L[j] takes place even if i = ...
 3.3.19: The merge part of algorithm MergeSort requires comparing elements f...
 3.3.20: Under what circumstances will the maximum number of comparisons tak...
 3.3.21: Write a recurrence relation for the number of comparisons between l...
 3.3.22: Solve the recurrence relation of Exercise 21.
 3.3.23: Use the results of Exercises 18 and 22 to compare the worst-case be...
 3.3.24: Use the results of Exercises 14 and 22 to compare the worst-case be...
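Exercises 19 through 24 analyze the comparisons MergeSort makes while merging. A sketch that returns the comparison count alongside the sorted list (the counter is my addition):

```python
def merge_sort(lst):
    # MergeSort: split in half, sort each half recursively, then merge.
    # Returns (sorted list, number of comparisons between list elements).
    if len(lst) <= 1:
        return list(lst), 0
    mid = len(lst) // 2
    left, cl = merge_sort(lst[:mid])
    right, cr = merge_sort(lst[mid:])
    merged, comparisons = [], cl + cr
    i = j = 0
    while i < len(left) and j < len(right):
        comparisons += 1            # one comparison per merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])         # leftover elements need no comparisons
    merged.extend(right[j:])
    return merged, comparisons
```

Merging two halves of total length r takes at most r − 1 comparisons, which is the source of the recurrence in Exercise 21.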
 3.3.25: Illustrate QuickSort as above using the list 9, 8, 3, 13
 3.3.26: Illustrate QuickSort as above using the list 8, 4, 10, 5, 9, 6, 14,...
 3.3.27: How many comparisons between list elements are required for pass 1 ...
 3.3.28: How many comparisons between list elements are required for pass 1 ...
 3.3.29: Suppose that for each pass, each pivot element splits its sublist i...
 3.3.30: Solve the recurrence relation of Exercise 29.
 3.3.31: Suppose that for each pass, each pivot element splits its sublist (...
 3.3.32: Solve the recurrence relation of Exercise 31.
 3.3.33: Unlike the situation described in Exercise 29 where each pivot elem...
 3.3.34: Exercise 29 describes the best case of QuickSort and Exercise 31 de...
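Exercises 25 through 34 trace QuickSort and count its comparisons. A minimal sketch, assuming the first element of each sublist serves as the pivot (a common convention for these exercises; the counter is mine):

```python
def quicksort(lst):
    # QuickSort with the first element as pivot.
    # Returns (sorted list, comparisons), where each of the other
    # elements in a sublist is charged one comparison against the pivot.
    if len(lst) <= 1:
        return list(lst), 0
    pivot = lst[0]
    smaller = [x for x in lst[1:] if x < pivot]
    larger = [x for x in lst[1:] if x >= pivot]
    comparisons = len(lst) - 1      # one pass comparing each element to the pivot
    left, cl = quicksort(smaller)
    right, cr = quicksort(larger)
    return left + [pivot] + right, comparisons + cl + cr
```

A sublist of length r costs r − 1 comparisons for its partition pass, which feeds the best-case and worst-case recurrences of Exercises 29 and 31.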
 3.3.35: Assume that x is in the list and is equally likely to be found at any of t...
 3.3.36: Find the average number of comparisons under the assumption that x ...
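Exercises 35 and 36 concern the average number of comparisons in sequential search. A sketch that returns the comparison count directly (name and return convention are mine):

```python
def sequential_search(lst, x):
    # Sequential search: compare x against each element in turn.
    # Returns the number of comparisons used: i + 1 if x is first
    # found at index i, or len(lst) if x is absent.
    for i, item in enumerate(lst):
        if item == x:
            return i + 1
    return len(lst)
```

If x is equally likely to occupy any of the n positions, the average is (1 + 2 + ... + n)/n = (n + 1)/2 comparisons.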
 3.3.37: Suppose that m divisions are required to find gcd(a, b). Prove by i...
 3.3.38: Suppose that m divisions are required to find gcd(a, b), with m ≥ 4. ...
 3.3.39: Suppose that m divisions are required to find gcd(a, b), with m ≥ 4. ...
 3.3.40: a. Compute gcd(89, 55) and count the number of divisions required. ...
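Exercises 37 through 40 count the divisions in Euclid's algorithm. A sketch that tallies them, counting every division performed including the one that produces remainder 0 (the counter and convention are mine):

```python
def gcd_with_count(a, b):
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    # until the remainder is 0. Returns (gcd, number of divisions).
    divisions = 0
    while b != 0:
        a, b = b, a % b
        divisions += 1
    return a, divisions
```

The pair in Exercise 40, 89 and 55, are consecutive Fibonacci numbers, which force the maximum number of divisions for operands of that size.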
Solutions for Chapter 3.3: Analysis of Algorithms
Full solutions for Mathematical Structures for Computer Science, 7th Edition
ISBN: 9781429215107

Adjacency matrix of a graph.
Square matrix with a_ij = 1 when there is an edge from node i to node j; otherwise a_ij = 0. A = Aᵀ when edges go both ways (undirected).

Associative Law (AB)C = A(BC).
Parentheses can be removed to leave ABC.

Cayley-Hamilton Theorem.
p(λ) = det(A − λI) has p(A) = zero matrix.

Change of basis matrix M.
The old basis vectors v_j are combinations Σ m_ij w_i of the new basis vectors. The coordinates of c_1 v_1 + ... + c_n v_n = d_1 w_1 + ... + d_n w_n are related by d = Mc. (For n = 2, set v_1 = m_11 w_1 + m_21 w_2, v_2 = m_12 w_1 + m_22 w_2.)

Column space C(A).
The space of all combinations of the columns of A.

Condition number
cond(A) = c(A) = ‖A‖ ‖A⁻¹‖ = σ_max/σ_min. In Ax = b, the relative change ‖δx‖/‖x‖ is less than cond(A) times the relative change ‖δb‖/‖b‖. Condition numbers measure the sensitivity of the output to change in the input.

Hessenberg matrix H.
Triangular matrix with one extra nonzero adjacent diagonal.

Kronecker product (tensor product) A ⊗ B.
Blocks a_ij B; eigenvalues λ_p(A) λ_q(B).

Length ‖x‖.
Square root of xᵀx (Pythagoras in n dimensions).

Multiplier ℓ_ij.
The pivot row j is multiplied by ℓ_ij and subtracted from row i to eliminate the (i, j) entry: ℓ_ij = (entry to eliminate)/(jth pivot).

Network.
A directed graph that has constants c_1, ..., c_m associated with the edges.

Norm ‖A‖.
The "ℓ² norm" of A is the maximum ratio ‖Ax‖/‖x‖ = σ_max. Then ‖Ax‖ ≤ ‖A‖ ‖x‖ and ‖AB‖ ≤ ‖A‖ ‖B‖ and ‖A + B‖ ≤ ‖A‖ + ‖B‖. Frobenius norm: ‖A‖²_F = Σ Σ a²_ij. The ℓ¹ and ℓ^∞ norms are the largest column and row sums of |a_ij|.

Normal equation AᵀAx̂ = Aᵀb.
Gives the least-squares solution to Ax = b if A has full rank n (independent columns). The equation says that (columns of A)·(b − Ax̂) = 0.
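The normal equations can be checked numerically; a minimal sketch using NumPy with made-up data (illustrative only; in practice a routine like `numpy.linalg.lstsq` is more numerically stable than forming AᵀA):

```python
import numpy as np

# Least-squares solution via the normal equations A^T A x = A^T b.
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])  # full column rank
b = np.array([1.0, 2.0, 2.0])
x = np.linalg.solve(A.T @ A, A.T @ b)

# The residual b - A x is orthogonal to every column of A.
residual = b - A @ x
```

The assertion (columns of A)·(b − Ax̂) = 0 is exactly the statement that `A.T @ residual` vanishes.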

Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; error e = b − Pb is perpendicular to S. P² = P = Pᵀ, eigenvalues are 1 or 0, eigenvectors are in S or S⊥. If columns of A = basis for S then P = A(AᵀA)⁻¹Aᵀ.
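The defining properties P² = P = Pᵀ can also be verified numerically; a minimal sketch with made-up data (illustrative only):

```python
import numpy as np

# Projection onto the column space of A: P = A (A^T A)^{-1} A^T,
# which requires A to have independent columns.
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
P = A @ np.linalg.inv(A.T @ A) @ A.T

b = np.array([1.0, 2.0, 2.0])
p = P @ b                  # closest point to b in the column space
e = b - p                  # error, perpendicular to the column space
```

P is symmetric and idempotent, and its eigenvalues are 1 (on S) and 0 (on S⊥).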

Pseudoinverse A⁺ (Moore-Penrose inverse).
The n by m matrix that "inverts" A from column space back to row space, with N(A⁺) = N(Aᵀ). A⁺A and AA⁺ are the projection matrices onto the row space and column space. Rank(A⁺) = rank(A).

Schwarz inequality
|v·w| ≤ ‖v‖ ‖w‖. Then |vᵀAw|² ≤ (vᵀAv)(wᵀAw) for positive definite A.

Semidefinite matrix A.
(Positive) semidefinite: all xᵀAx ≥ 0, all λ ≥ 0; A = any RᵀR.

Similar matrices A and B.
Every B = M⁻¹AM has the same eigenvalues as A.

Solvable system Ax = b.
The right side b is in the column space of A.

Subspace S of V.
Any vector space inside V, including V and Z = {zero vector only}.