 3.3.1: Modify the algorithm of Example 27 so that in addition to dropping ...
 3.3.2: What is the total number of arithmetic operations done in the algor...
 3.3.3: The following algorithm adds all the entries in a square n × n array ...
 3.3.4: The following algorithm adds all the entries in the upper triangula...
 3.3.5: Analyze the following algorithm where the work unit is the output s...
 3.3.6: Analyze the following algorithm where the work unit is the output s...
 3.3.7: a. Write the body of an iterative function to compute n! for n ≥ 1. b...
 3.3.8: a. Write a recursive function to compute n! for n ≥ 1. b. Write a rec...
 3.3.9: A straightforward algorithm to evaluate a polynomial is given by th...
 3.3.10: An alternative to the polynomial evaluation algorithm in Exercise 9...
 3.3.11: For the algorithm of Example 27, count the total number of assignme...
 3.3.12: a. Write a function to convert a binary string b_n b_{n-1} ... b_1 b_0 to its de...
 3.3.13: Algorithm BubbleSort works by making repeated passes through a list...
 3.3.14: In algorithm BubbleSort, suppose we include exchanges of list eleme...
 3.3.15: In one part of algorithm SelectionSort, the index of the maximum it...
 3.3.16: Defining the basic operation as the comparison of list elements and...
 3.3.17: Solve the recurrence relation of Exercise 16
 3.3.18: Assume that the exchange of L[i] and L[ j] takes place even if i = ...
 3.3.19: The merge part of algorithm MergeSort requires comparing elements f...
 3.3.20: Under what circumstances will the maximum number of comparisons tak...
 3.3.21: Write a recurrence relation for the number of comparisons between l...
 3.3.22: Solve the recurrence relation of Exercise 21.
 3.3.23: Use the results of Exercises 18 and 22 to compare the worst-case be...
 3.3.24: Use the results of Exercises 14 and 22 to compare the worst-case be...
 3.3.25: Illustrate QuickSort as above using the list 9, 8, 3, 13
 3.3.26: Illustrate QuickSort as above using the list 8, 4, 10, 5, 9, 6, 14,...
 3.3.27: How many comparisons between list elements are required for pass 1 ...
 3.3.28: How many comparisons between list elements are required for pass 1 ...
 3.3.29: Suppose that for each pass, each pivot element splits its sublist i...
 3.3.30: Solve the recurrence relation of Exercise 29.
 3.3.31: Suppose that for each pass, each pivot element splits its sublist (...
 3.3.32: Solve the recurrence relation of Exercise 31.
 3.3.33: Unlike the situation described in Exercise 29 where each pivot elem...
 3.3.34: Exercise 29 describes the best case of QuickSort and Exercise 31 de...
 3.3.35: Assume that x is in the list and is equally likely to be found at any of t...
 3.3.36: Find the average number of comparisons under the assumption that x ...
 3.3.37: Suppose that m divisions are required to find gcd(a, b). Prove by i...
 3.3.38: Suppose that m divisions are required to find gcd(a, b), with m ≥ 4. ...
 3.3.39: Suppose that m divisions are required to find gcd(a, b), with m ≥ 4. ...
 3.3.40: a. Compute gcd(89, 55) and count the number of divisions required. ...
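Exercises 37-40 count the divisions performed by the Euclidean algorithm. As a minimal sketch (the function name is illustrative), the division count can be tracked directly:

```python
def gcd_with_count(a, b):
    """Euclidean algorithm: repeatedly replace (a, b) by (b, a mod b).
    Returns (gcd, number of divisions performed)."""
    divisions = 0
    while b != 0:
        a, b = b, a % b   # one division step
        divisions += 1
    return a, divisions

# Exercise 40a's pair: gcd(89, 55) == 1, reached after 9 divisions.
# 89 and 55 are consecutive Fibonacci numbers, the slowest inputs
# for the Euclidean algorithm relative to their size.
```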
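Exercises 9 and 10 compare straightforward term-by-term polynomial evaluation with a cheaper alternative; assuming the alternative is Horner's method, a minimal sketch of both (function names are illustrative):

```python
def poly_eval_naive(coeffs, x):
    """Evaluate a_0 + a_1 x + ... + a_n x^n term by term,
    recomputing each power of x from scratch (O(n^2) multiplications).
    coeffs[i] is the coefficient of x^i."""
    total = 0
    for i, a in enumerate(coeffs):
        power = 1
        for _ in range(i):      # compute x^i with i multiplications
            power *= x
        total += a * power
    return total

def poly_eval_horner(coeffs, x):
    """Horner's method: ((a_n x + a_{n-1}) x + ...) x + a_0.
    Only n multiplications and n additions."""
    result = 0
    for a in reversed(coeffs):
        result = result * x + a
    return result
```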
Solutions for Chapter 3.3: Analysis of Algorithms
Full solutions for Mathematical Structures for Computer Science, 7th Edition
ISBN: 9781429215107
Chapter 3.3: Analysis of Algorithms includes 40 full step-by-step solutions.

Basis for V.
Independent vectors v_1, ..., v_d whose linear combinations give each vector in V as v = c_1 v_1 + ... + c_d v_d. A vector space has many bases; each basis gives unique c's.

Commuting matrices AB = BA.
If diagonalizable, they share n eigenvectors.

Covariance matrix Σ.
When random variables x_i have mean = average value = 0, their covariances Σ_ij are the averages of x_i x_j. With means x̄_i, the matrix Σ = mean of (x - x̄)(x - x̄)^T is positive (semi)definite; Σ is diagonal if the x_i are independent.

Distributive Law
A(B + C) = AB + AC. Add then multiply, or multiply then add.

Eigenvalue λ and eigenvector x.
Ax = λx with x ≠ 0, so det(A - λI) = 0.

Factorization
A = LU. If elimination takes A to U without row exchanges, then the lower triangular L with multipliers ℓ_ij (and ℓ_ii = 1) brings U back to A.
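The elimination step described above can be sketched in Python; a minimal illustration assuming no row exchanges and nonzero pivots (the function name is hypothetical):

```python
def lu_factor(A):
    """Gaussian elimination without row exchanges.
    Returns (L, U) with L unit lower triangular, so that L times U
    reconstructs A. Assumes every pivot U[j][j] is nonzero."""
    n = len(A)
    U = [row[:] for row in A]   # working copy, reduced to U
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n):          # eliminate below pivot column j
        for i in range(j + 1, n):
            m = U[i][j] / U[j][j]      # multiplier l_ij = entry / pivot
            L[i][j] = m
            for k in range(j, n):
                U[i][k] -= m * U[j][k] # subtract m * (pivot row j)
    return L, U
```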

Full row rank r = m.
Independent rows, at least one solution to Ax = b, column space is all of R^m. Full rank means full column rank or full row rank.

Graph G.
Set of n nodes connected pairwise by m edges. A complete graph has all n(n - 1)/2 edges between nodes. A tree has only n - 1 edges and no closed loops.

Left nullspace N(A^T).
Nullspace of A^T = "left nullspace" of A because y^T A = 0^T.

Length ||x||.
Square root of x^T x (Pythagoras in n dimensions).

Lucas numbers
L_n = 2, 1, 3, 4, 7, 11, ... satisfy L_n = L_{n-1} + L_{n-2} = λ_1^n + λ_2^n, with λ_1, λ_2 = (1 ± √5)/2 from the Fibonacci matrix [[1, 1], [1, 0]]. Compare L_0 = 2 with F_0 = 0.
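The recurrence and the eigenvalue formula for the Lucas numbers can be checked with a short sketch (names are illustrative):

```python
import math

def lucas(n):
    """Lucas numbers: L_0 = 2, L_1 = 1, L_n = L_{n-1} + L_{n-2}."""
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Eigenvalue form: L_n = lam1**n + lam2**n,
# with lam1, lam2 = (1 ± sqrt(5))/2 from the Fibonacci matrix.
lam1 = (1 + math.sqrt(5)) / 2
lam2 = (1 - math.sqrt(5)) / 2
```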

Multiplier ℓ_ij.
The pivot row j is multiplied by ℓ_ij and subtracted from row i to eliminate the i, j entry: ℓ_ij = (entry to eliminate) / (jth pivot).

Nilpotent matrix N.
Some power of N is the zero matrix, N^k = 0. The only eigenvalue is λ = 0 (repeated n times). Examples: triangular matrices with zero diagonal.

Norm ||A||.
The "ℓ^2 norm" of A is the maximum ratio ||Ax||/||x|| = σ_max. Then ||Ax|| ≤ ||A|| ||x||, ||AB|| ≤ ||A|| ||B||, and ||A + B|| ≤ ||A|| + ||B||. The Frobenius norm satisfies ||A||_F^2 = Σ Σ a_ij^2. The ℓ^1 and ℓ^∞ norms are the largest column and row sums of |a_ij|.

Normal equation A^T A x̂ = A^T b.
Gives the least squares solution x̂ to Ax = b if A has full rank n (independent columns). The equation says that (columns of A)·(b - Ax̂) = 0.
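As a minimal sketch of the normal-equations recipe above, for the special case of fitting a line y = c + dx (so A has a column of 1's and a column of x_i's; the function name is illustrative):

```python
def least_squares_line(xs, ys):
    """Fit y = c + d*x by the normal equations A^T A [c, d]^T = A^T b.
    A^T A is only 2x2 here, so we solve it by Cramer's rule."""
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    sy = sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    # A^T A = [[n, sx], [sx, sxx]],  A^T b = [sy, sxy]
    det = n * sxx - sx * sx
    c = (sy * sxx - sx * sxy) / det
    d = (n * sxy - sx * sy) / det
    return c, d
```

On data lying exactly on a line, the least squares solution recovers that line; otherwise it minimizes the squared error ||b - Ax||^2.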

Nullspace N(A).
All solutions to Ax = 0; dimension n - r = (# columns) - rank.

Orthonormal vectors q_1, ..., q_n.
Dot products are q_i^T q_j = 0 if i ≠ j and q_i^T q_i = 1. The matrix Q with these orthonormal columns has Q^T Q = I. If m = n then Q^T = Q^{-1} and q_1, ..., q_n is an orthonormal basis for R^n: every v = Σ (v^T q_j) q_j.
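Orthonormal q's as described above can be produced from independent vectors by the Gram-Schmidt process (not part of this glossary entry; a minimal Python sketch with illustrative names):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    n = dot(v, v) ** 0.5
    return [a / n for a in v]

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: from each v, subtract its projections
    onto the earlier q's, then normalize what remains."""
    qs = []
    for v in vectors:
        w = v[:]
        for q in qs:
            c = dot(v, q)  # component of v along q
            w = [wi - c * qi for wi, qi in zip(w, q)]
        qs.append(normalize(w))
    return qs
```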

Polar decomposition A = Q H.
Orthogonal Q times positive (semi)definite H.

Projection matrix P onto subspace S.
Projection p = Pb is the closest point to b in S; the error e = b - Pb is perpendicular to S. P^2 = P = P^T, eigenvalues are 1 or 0, eigenvectors are in S or S⊥. If the columns of A are a basis for S then P = A(A^T A)^{-1} A^T.
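For a one-dimensional subspace S spanned by a single vector a, the formula P = A(A^T A)^{-1} A^T reduces to P = a a^T / (a^T a). A minimal sketch of applying that projection (function name is illustrative):

```python
def project_onto_line(a, b):
    """Project b onto the line through a:
    p = P b with P = a a^T / (a^T a), i.e. p = (a.b / a.a) * a."""
    aa = sum(x * x for x in a)
    ab = sum(x * y for x, y in zip(a, b))
    c = ab / aa          # coefficient of a in the projection
    return [c * x for x in a]
```

The error e = b - p is then perpendicular to a, which can be checked by taking the dot product e·a.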

Semidefinite matrix A.
(Positive) semidefinite: all x^T Ax ≥ 0, all λ ≥ 0; A = any R^T R.