Textbook: Data Structures & Algorithms - Alfred V Aho (1st Edition)

by: Kevin Walton

"These are great! I definitely recommend anyone to follow this notetaker"
Mrs. Muhammad Greenfelder


This 620 page Bundle was uploaded by Kevin Walton on Monday January 11, 2016. The Bundle belongs to CS 260 at Drexel University, taught by Kurt Schmidt in Fall 2012. Since its upload, it has received 29 views. For similar materials see Data Structures in Computer Science at Drexel University.


Data Structures and Algorithms: Table of Contents

Data Structures and Algorithms
Alfred V. Aho, Bell Laboratories, Murray Hill, New Jersey
John E. Hopcroft, Cornell University, Ithaca, New York
Jeffrey D. Ullman, Stanford University, Stanford, California

PREFACE
Chapter 1 Design and Analysis of Algorithms
Chapter 2 Basic Data Types
Chapter 3 Trees
Chapter 4 Basic Operations on Sets
Chapter 5 Advanced Set Representation Methods
Chapter 6 Directed Graphs
Chapter 7 Undirected Graphs
Chapter 8 Sorting
Chapter 9 Algorithm Analysis Techniques
Chapter 10 Algorithm Design Techniques
Chapter 11 Data Structures and Algorithms for External Storage
Chapter 12 Memory Management
Bibliography

Preface

This book presents the data structures and algorithms that underpin much of today's computer programming. The basis of this book is the material contained in the first six chapters of our earlier work, The Design and Analysis of Computer Algorithms. We have expanded that coverage and have added material on algorithms for external storage and memory management. As a consequence, this book should be suitable as a text for a first course on data structures and algorithms. The only prerequisite we assume is familiarity with some high-level programming language such as Pascal.

We have attempted to cover data structures and algorithms in the broader context of solving problems using computers. We use abstract data types informally in the description and implementation of algorithms. Although abstract data types are only starting to appear in widely available programming languages, we feel they are a useful tool in designing programs, no matter what the language.

We also introduce the ideas of step counting and time complexity as an integral part of the problem solving process.
This decision reflects our long-held belief that programmers are going to continue to tackle problems of progressively larger size as machines get faster, and that consequently the time complexity of algorithms will become of even greater importance, rather than of less importance, as new generations of hardware become available.

The Presentation of Algorithms

We have used the conventions of Pascal to describe our algorithms and data structures primarily because Pascal is so widely known. Initially we present several of our algorithms both abstractly and as Pascal programs, because we feel it is important to run the gamut of the problem solving process from problem formulation to a running program. The algorithms we present, however, can be readily implemented in any high-level programming language.

Use of the Book

Chapter 1 contains introductory remarks, including an explanation of our view of the problem-to-program process and the role of abstract data types in that process. Also appearing is an introduction to step counting and "big-oh" and "big-omega" notation.

Chapter 2 introduces the traditional list, stack and queue structures, and the mapping, which is an abstract data type based on the mathematical notion of a function. The third chapter introduces trees and the basic data structures that can be used to support various operations on trees efficiently.

Chapters 4 and 5 introduce a number of important abstract data types that are based on the mathematical model of a set. Dictionaries and priority queues are covered in depth. Standard implementations for these concepts, including hash tables, binary search trees, partially ordered trees, tries, and 2-3 trees are covered, with the more advanced material clustered in Chapter 5.

Chapters 6 and 7 cover graphs, with directed graphs in Chapter 6 and undirected graphs in 7.
These chapters begin a section of the book devoted more to issues of algorithms than data structures, although we do discuss the basics of data structures suitable for representing graphs. A number of important graph algorithms are presented, including depth-first search, finding minimal spanning trees, shortest paths, and maximal matchings.

Chapter 8 is devoted to the principal internal sorting algorithms: quicksort, heapsort, binsort, and the simpler, less efficient methods such as insertion sort. In this chapter we also cover the linear-time algorithms for finding medians and other order statistics.

Chapter 9 discusses the asymptotic analysis of recursive procedures, including, of course, recurrence relations and techniques for solving them.

Chapter 10 outlines the important techniques for designing algorithms, including divide-and-conquer, dynamic programming, local search algorithms, and various forms of organized tree searching.

The last two chapters are devoted to external storage organization and memory management. Chapter 11 covers external sorting and large-scale storage organization, including B-trees and index structures. Chapter 12 contains material on memory management, divided into four subareas, depending on whether allocations involve fixed or varying sized blocks, and whether the freeing of blocks takes place by explicit program action or implicitly when garbage collection occurs.

Material from this book has been used by the authors in data structures and algorithms courses at Columbia, Cornell, and Stanford, at both undergraduate and graduate levels. For example, a preliminary version of this book was used at Stanford in a 10-week course on data structures, taught to a population consisting primarily of Juniors through first-year graduate students. The coverage was limited to Chapters 1-4, 9, 10, and 12, with parts of 5-7.
Exercises

A number of exercises of varying degrees of difficulty are found at the end of each chapter. Many of these are fairly straightforward tests of the mastery of the material of the chapter. Some exercises require more thought, and these have been singly starred. Doubly starred exercises are harder still, and are suitable for more advanced courses. The bibliographic notes at the end of each chapter provide references for additional reading.

Acknowledgments

We wish to acknowledge Bell Laboratories for the use of its excellent UNIX™-based text preparation and data communication facilities that significantly eased the preparation of a manuscript by geographically separated authors. Many of our colleagues have read various portions of the manuscript and have given us valuable comments and advice. In particular, we would like to thank Ed Beckham, Jon Bentley, Kenneth Chu, Janet Coursey, Hank Cox, Neil Immerman, Brian Kernighan, Steve Mahaney, Craig McMurray, Alberto Mendelzon, Alistair Moffat, Jeff Naughton, Kerry Nemovicher, Paul Niamkey, Yoshio Ohno, Rob Pike, Chris Rouen, Maurice Schlumberger, Stanley Selkow, Chengya Shih, Bob Tarjan, W. Van Snyder, Peter Weinberger, and Anthony Yeracaris for helpful suggestions. Finally, we would like to give our warmest thanks to Mrs. Claire Metzger for her expert assistance in helping prepare the manuscript for typesetting.

A.V.A.
J.E.H.
J.D.U.

CHAPTER 1: Design and Analysis of Algorithms

There are many steps involved in writing a computer program to solve a given problem. The steps go from problem formulation and specification, to design of the solution, to implementation, testing and documentation, and finally to evaluation of the solution. This chapter outlines our approach to these steps. Subsequent chapters discuss the algorithms and data structures that are the building blocks of most computer programs.
1.1 From Problems to Programs

Half the battle is knowing what problem to solve. When initially approached, most problems have no simple, precise specification. In fact, certain problems, such as creating a "gourmet" recipe or preserving world peace, may be impossible to formulate in terms that admit of a computer solution. Even if we suspect our problem can be solved on a computer, there is usually considerable latitude in several problem parameters. Often it is only by experimentation that reasonable values for these parameters can be found.

If certain aspects of a problem can be expressed in terms of a formal model, it is usually beneficial to do so, for once a problem is formalized, we can look for solutions in terms of a precise model and determine whether a program already exists to solve that problem. Even if there is no existing program, at least we can discover what is known about this model and use the properties of the model to help construct a good solution.

Almost any branch of mathematics or science can be called into service to help model some problem domain. Problems essentially numerical in nature can be modeled by such common mathematical concepts as simultaneous linear equations (e.g., finding currents in electrical circuits, or finding stresses in frames made of connected beams) or differential equations (e.g., predicting population growth or the rate at which chemicals will react). Symbol and text processing problems can be modeled by character strings and formal grammars. Problems of this nature include compilation (the translation of programs written in a programming language into machine language) and information retrieval tasks such as recognizing particular words in lists of titles owned by a library.

Algorithms

Once we have a suitable mathematical model for our problem, we can attempt to find a solution in terms of that model.
Our initial goal is to find a solution in the form of an algorithm, which is a finite sequence of instructions, each of which has a clear meaning and can be performed with a finite amount of effort in a finite length of time. An integer assignment statement such as x := y + z is an example of an instruction that can be executed in a finite amount of effort. In an algorithm instructions can be executed any number of times, provided the instructions themselves indicate the repetition. However, we require that, no matter what the input values may be, an algorithm terminate after executing a finite number of instructions. Thus, a program is an algorithm as long as it never enters an infinite loop on any input.

There is one aspect of this definition of an algorithm that needs some clarification. We said each instruction of an algorithm must have a "clear meaning" and must be executable with a "finite amount of effort." Now what is clear to one person may not be clear to another, and it is often difficult to prove rigorously that an instruction can be carried out in a finite amount of time. It is often difficult as well to prove that on any input, a sequence of instructions terminates, even if we understand clearly what each instruction means. By argument and counterargument, however, agreement can usually be reached as to whether a sequence of instructions constitutes an algorithm. The burden of proof lies with the person claiming to have an algorithm. In Section 1.5 we discuss how to estimate the running time of common programming language constructs that can be shown to require a finite amount of time for their execution.
In addition to using Pascal programs as algorithms, we shall often present algorithms using a pseudo-language that is a combination of the constructs of a programming language together with informal English statements. We shall use Pascal as the programming language, but almost any common programming language could be used in place of Pascal for the algorithms we shall discuss. The following example illustrates many of the steps in our approach to writing a computer program.

Example 1.1. A mathematical model can be used to help design a traffic light for a complicated intersection of roads. To construct the pattern of lights, we shall create a program that takes as input a set of permitted turns at an intersection (continuing straight on a road is a "turn") and partitions this set into as few groups as possible such that all turns in a group are simultaneously permissible without collisions. We shall then associate a phase of the traffic light with each group in the partition. By finding a partition with the smallest number of groups, we can construct a traffic light with the smallest number of phases.

For example, the intersection shown in Fig. 1.1 occurs by a watering hole called JoJo's near Princeton University, and it has been known to cause some navigational difficulty, especially on the return trip. Roads C and E are one-way, the others two-way. There are 13 turns one might make at this intersection. Some pairs of turns, like AB (from A to B) and EC, can be carried out simultaneously, while others, like AD and EB, cause lines of traffic to cross and therefore cannot be carried out simultaneously. The light at the intersection must permit turns in such an order that AD and EB are never permitted at the same time, while the light might permit AB and EC to be made simultaneously.

Fig. 1.1. An intersection.
We can model this problem with a mathematical structure known as a graph. A graph consists of a set of points called vertices, and lines connecting the points, called edges. For the traffic intersection problem we can draw a graph whose vertices represent turns and whose edges connect pairs of vertices whose turns cannot be performed simultaneously. For the intersection of Fig. 1.1, this graph is shown in Fig. 1.2, and in Fig. 1.3 we see another representation of this graph as a table with a 1 in row i and column j whenever there is an edge between vertices i and j.

Fig. 1.2. Graph showing incompatible turns.

Fig. 1.3. Table of incompatible turns.

The graph can aid us in solving the traffic light design problem. A coloring of a graph is an assignment of a color to each vertex of the graph so that no two vertices connected by an edge have the same color. It is not hard to see that our problem is one of coloring the graph of incompatible turns using as few colors as possible.

The problem of coloring graphs has been studied for many decades, and the theory of algorithms tells us a lot about this problem. Unfortunately, coloring an arbitrary graph with as few colors as possible is one of a large class of problems called "NP-complete problems," for which all known solutions are essentially of the type "try all possibilities." In the case of the coloring problem, "try all possibilities" means to try all assignments of colors to vertices using at first one color, then two colors, then three, and so on, until a legal coloring is found. With care, we can be a little speedier than this, but it is generally believed that no algorithm to solve this problem can be substantially more efficient than this most obvious approach.

We are now confronted with the possibility that finding an optimal solution for the problem at hand is computationally very expensive. We can adopt one of three approaches. If the graph is small, we might attempt to find an optimal solution exhaustively, trying all possibilities. This approach, however, becomes prohibitively expensive for large graphs, no matter how efficient we try to make the program. A second approach would be to look for additional information about the problem at hand. It may turn out that the graph has some special properties, which make it unnecessary to try all possibilities in finding an optimal solution. The third approach is to change the problem a little and look for a good but not necessarily optimal solution. We might be happy with a solution that gets close to the minimum number of colors on small graphs, and works quickly, since most intersections are not even as complex as Fig. 1.1. An algorithm that quickly produces good but not necessarily optimal solutions is called a heuristic.

One reasonable heuristic for graph coloring is the following "greedy" algorithm. Initially we try to color as many vertices as possible with the first color, then as many as possible of the uncolored vertices with the second color, and so on. To color vertices with a new color, we perform the following steps.

1. Select some uncolored vertex and color it with the new color.
2. Scan the list of uncolored vertices. For each uncolored vertex, determine whether it has an edge to any vertex already colored with the new color. If there is no such edge, color the present vertex with the new color.

This approach is called "greedy" because it colors a vertex whenever it can, without considering the potential drawbacks inherent in making such a move. There are situations where we could color more vertices with one color if we were less "greedy" and skipped some vertex we could legally color. For example, consider the graph of Fig. 1.4, where having colored vertex 1 red, we can color vertices 3 and 4 red also, provided we do not color 2 first.
The greedy algorithm would tell us to color 1 and 2 red, assuming we considered vertices in numerical order.

Fig. 1.4. A graph.

As an example of the greedy approach applied to Fig. 1.2, suppose we start by coloring AB blue. We can color AC, AD, and BA blue, because none of these four vertices has an edge in common. We cannot color BC blue because there is an edge between AB and BC. Similarly, we cannot color BD, DA, or DB blue because each of these vertices is connected by an edge to one or more vertices already colored blue. However, we can color DC blue. Then EA, EB, and EC cannot be colored blue, but ED can.

Now we start a second color, say by coloring BC red. BD can be colored red, but DA cannot, because of the edge between BD and DA. Similarly, DB cannot be colored red, and DC is already blue, but EA can be colored red. Each other uncolored vertex has an edge to a red vertex, so no other vertex can be colored red.

The remaining uncolored vertices are DA, DB, EB, and EC. If we color DA green, then DB can be colored green, but EB and EC cannot. These two may be colored with a fourth color, say yellow. The colors are summarized in Fig. 1.5. The "extra" turns are determined by the greedy approach to be compatible with the turns already given that color, as well as with each other. When the traffic light allows turns of one color, it can also allow the extra turns safely.

Fig. 1.5. A coloring of the graph of Fig. 1.2.

The greedy approach does not always use the minimum possible number of colors. We can use the theory of algorithms again to evaluate the goodness of the solution produced. In graph theory, a k-clique is a set of k vertices, every pair of which is connected by an edge. Obviously, k colors are needed to color a k-clique, since no two vertices in a clique may be given the same color.

In the graph of Fig. 1.2 the set of four vertices AC, DA, BD, EB is a 4-clique. Therefore, no coloring with three or fewer colors exists, and the solution of Fig. 1.5 is optimal in the sense that it uses the fewest colors possible. In terms of our original problem, no traffic light for the intersection of Fig. 1.1 can have fewer than four phases. Therefore, consider a traffic light controller based on Fig. 1.5, where each phase of the controller corresponds to a color. At each phase the turns indicated by the row of the table corresponding to that color are permitted, and the other turns are forbidden. This pattern uses as few phases as possible.

Pseudo-Language and Stepwise Refinement

Once we have an appropriate mathematical model for a problem, we can formulate an algorithm in terms of that model. The initial versions of the algorithm are often couched in general statements that will have to be refined subsequently into smaller, more definite instructions. For example, we described the greedy graph coloring algorithm in terms such as "select some uncolored vertex." These instructions are, we hope, sufficiently clear that the reader grasps our intent. To convert such an informal algorithm to a program, however, we must go through several stages of formalization (called stepwise refinement) until we arrive at a program the meaning of whose steps are formally defined by a language manual.

Example 1.2. Let us take the greedy algorithm for graph coloring part of the way towards a Pascal program. In what follows, we assume there is a graph G, some of whose vertices may be colored. The following program greedy determines a set of vertices called newclr, all of which can be colored with a new color. The program is called repeatedly, until all vertices are colored. At a coarse level, we might specify greedy in pseudo-language as in Fig. 1.6.
    procedure greedy ( var G: GRAPH; var newclr: SET );
        { greedy assigns to newclr a set of vertices of G that may be
          given the same color }
        begin
    (1)     newclr := Ø; †
    (2)     for each uncolored vertex v of G do
    (3)         if v is not adjacent to any vertex in newclr then begin
    (4)             mark v colored;
    (5)             add v to newclr
                end
        end; { greedy }

Fig. 1.6. First refinement of greedy algorithm.

We notice from Fig. 1.6 certain salient features of our pseudo-language. First, we use boldface lower case keywords corresponding to Pascal reserved words, with the same meaning as in standard Pascal. Upper case types such as GRAPH and SET‡ are the names of "abstract data types." They will be defined by Pascal type definitions and the operations associated with these abstract data types will be defined by Pascal procedures when we create the final program. We shall discuss abstract data types in more detail in the next two sections.

The flow-of-control constructs of Pascal, like if, for, and while, are available for pseudo-language statements, but conditionals, as in line (3), may be informal statements rather than Pascal conditional expressions. Note that the assignment at line (1) uses an informal expression on the right. Also, the for-loop at line (2) iterates over a set.

To be executed, the pseudo-language program of Fig. 1.6 must be refined into a conventional Pascal program. We shall not proceed all the way to such a program in this example, but let us give one example of refinement, transforming the if-statement in line (3) of Fig. 1.6 into more conventional code.

To test whether vertex v is adjacent to some vertex in newclr, we consider each member w of newclr and examine the graph G to see whether there is an edge between v and w. An organized way to make this test is to use found, a boolean variable to indicate whether an edge has been found.
We can replace lines (3)-(5) of Fig. 1.6 by the code in Fig. 1.7.

    procedure greedy ( var G: GRAPH; var newclr: SET );
        begin
    (1)     newclr := Ø;
    (2)     for each uncolored vertex v of G do begin
    (3.1)       found := false;
    (3.2)       for each vertex w in newclr do
    (3.3)           if there is an edge between v and w in G then
    (3.4)               found := true;
    (3.5)       if found = false then begin
                    { v is adjacent to no vertex in newclr }
    (4)             mark v colored;
    (5)             add v to newclr
                end
            end
        end; { greedy }

Fig. 1.7. Refinement of part of Fig. 1.6.

We have now reduced our algorithm to a collection of operations on two sets of vertices. The outer loop, lines (2)-(5), iterates over the set of uncolored vertices of G. The inner loop, lines (3.2)-(3.4), iterates over the vertices currently in the set newclr. Line (5) adds newly colored vertices to newclr.

There are a variety of ways to represent sets in a programming language like Pascal. In Chapters 4 and 5 we shall study several such representations. In this example we can simply represent each set of vertices by another abstract data type LIST, which here can be implemented by a list of integers terminated by a special value null (for which we might use the value 0). These integers might, for example, be stored in an array, but there are many other ways to represent LIST's, as we shall see in Chapter 2.

We can now replace the for-statement of line (3.2) in Fig. 1.7 by a loop, where w is initialized to be the first member of newclr and changed to be the next member, each time around the loop. We can also perform the same refinement for the for-loop of line (2) in Fig. 1.6. The revised procedure greedy is shown in Fig. 1.8. There is still more refinement to be done after Fig. 1.8, but we shall stop here to take stock of what we have done.
    procedure greedy ( var G: GRAPH; var newclr: LIST );
        { greedy assigns to newclr those vertices that may be given
          the same color }
        var
            found: boolean;
            v, w: integer;
        begin
            newclr := Ø;
            v := first uncolored vertex in G;
            while v <> null do begin
                found := false;
                w := first vertex in newclr;
                while w <> null do begin
                    if there is an edge between v and w in G then
                        found := true;
                    w := next vertex in newclr
                end;
                if found = false then begin
                    mark v colored;
                    add v to newclr
                end;
                v := next uncolored vertex in G
            end
        end; { greedy }

Fig. 1.8. Refined greedy procedure.

Summary

In Fig. 1.9 we see the programming process as it will be treated in this book. The first stage is modeling using an appropriate mathematical model such as a graph. At this stage, the solution to the problem is an algorithm expressed very informally.

At the next stage, the algorithm is written in pseudo-language, that is, a mixture of Pascal constructs and less formal English statements. To reach that stage, the informal English is replaced by progressively more detailed sequences of statements, in the process known as stepwise refinement.

Fig. 1.9. The problem solving process.

At some point the pseudo-language program is sufficiently detailed that the operations to be performed on the various types of data become fixed. We then create abstract data types for each type of data (except for the elementary types such as integers, reals and character strings) by giving a procedure name for each operation and replacing uses of each operation by an invocation of the corresponding procedure.

In the third stage we choose an implementation for each abstract data type and write the procedures for the various operations on that type.
We also replace any remaining informal statements in the pseudo-language algorithm by Pascal code. The result is a running program. After debugging it will be a working program, and we hope that by using the stepwise development approach outlined in Fig. 1.9, little debugging will be necessary.

1.2 Abstract Data Types

Most of the concepts introduced in the previous section should be familiar ideas from a beginning course in programming. The one possibly new notion is that of an abstract data type, and before proceeding it would be useful to discuss the role of abstract data types in the overall program design process.

To begin, it is useful to compare an abstract data type with the more familiar notion of a procedure. Procedures, an essential tool in programming, generalize the notion of an operator. Instead of being limited to the built-in operators of a programming language (addition, subtraction, etc.), by using procedures a programmer is free to define his own operators and apply them to operands that need not be basic types. An example of a procedure used in this way is a matrix multiplication routine.

Another advantage of procedures is that they can be used to encapsulate parts of an algorithm by localizing in one section of a program all the statements relevant to a certain aspect of a program. An example of encapsulation is the use of one procedure to read all input and to check for its validity. The advantage of encapsulation is that we know where to go to make changes to the encapsulated aspect of the problem. For example, if we decide to check that inputs are nonnegative, we need to change only a few lines of code, and we know just where those lines are.

Definition of Abstract Data Type

We can think of an abstract data type (ADT) as a mathematical model with a collection of operations defined on that model. Sets of integers, together with the operations of union, intersection, and set difference, form a simple example of an ADT.
In an ADT, the operations can take as operands not only instances of the ADT being defined but other types of operands, e.g., integers or instances of another ADT, and the result of an operation can be other than an instance of that ADT. However, we assume that at least one operand, or the result, of any operation is of the ADT in question.

The two properties of procedures mentioned above -- generalization and encapsulation -- apply equally well to abstract data types. ADT's are generalizations of primitive data types (integer, real, and so on), just as procedures are generalizations of primitive operations (+, -, and so on). The ADT encapsulates a data type in the sense that the definition of the type and all operations on that type can be localized to one section of the program. If we wish to change the implementation of an ADT, we know where to look, and by revising one small section we can be sure that there is no subtlety elsewhere in the program that will cause errors concerning this data type. Moreover, outside the section in which the ADT's operations are defined, we can treat the ADT as a primitive type; we have no concern with the underlying implementation. One pitfall is that certain operations may involve more than one ADT, and references to these operations must appear in the sections for both ADT's.

To illustrate the basic ideas, consider the procedure greedy of the previous section which, in Fig. 1.8, was implemented using primitive operations on an abstract data type LIST (of integers). The operations performed on the LIST newclr were:

1. make a list empty,
2. get the first member of the list and return null if the list is empty,
3. get the next member of the list and return null if there is no next member, and
4. insert an integer into the list.
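To make the four LIST operations and their use concrete, here is a hedged modern-language sketch of the refined greedy of Fig. 1.8, not the book's code: Python stands in for Pascal, the GRAPH becomes an adjacency-set dictionary plus a set of colored vertices, and the LIST newclr becomes a plain Python list. The edges assumed for Fig. 1.4 are inferred from the text's description, since the figure itself is not reproduced here.

```python
# Sketch of the refined greedy of Fig. 1.8 (a translation, with
# assumed representations: graph = {vertex: set of neighbors},
# colored = set of already-colored vertices).

def greedy(graph, colored):
    """Return a list of uncolored vertices that may share one new
    color, marking them colored, as the procedure of Fig. 1.8 does."""
    newclr = []                      # newclr := empty LIST
    for v in graph:                  # first/next uncolored vertex in G
        if v in colored:
            continue
        found = False
        for w in newclr:             # first/next vertex in newclr
            if w in graph[v]:        # is there an edge between v and w?
                found = True
        if not found:
            colored.add(v)           # mark v colored
            newclr.append(v)         # add v to newclr
    return newclr

# The graph described for Fig. 1.4: vertex 2 adjacent to 3 and 4,
# vertex 1 adjacent to nothing (an assumption consistent with the text).
g = {1: set(), 2: {3, 4}, 3: {2}, 4: {2}}
colored = set()
print(greedy(g, colored))   # [1, 2] -- the greedy choice the text describes
print(greedy(g, colored))   # [3, 4] -- the second color class
```

Repeated calls play the role of "the program is called repeatedly, until all vertices are colored"; note the first call colors 1 and 2 together, exactly the greedy behavior discussed for Fig. 1.4.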
There are many data structures that can be used to implement such lists efficiently, and we shall consider the subject in depth in Chapter 2. In Fig. 1.8, if we replace these operations by the statements

1. MAKENULL(newclr);
2. w := FIRST(newclr);
3. w := NEXT(newclr);
4. INSERT(v, newclr);

then we see an important aspect of abstract data types. We can implement a type any way we like, and the programs, such as Fig. 1.8, that use objects of that type do not change; only the procedures implementing the operations on the type need to change.

Turning to the abstract data type GRAPH we see need for the following operations:

1. get the first uncolored vertex,
2. test whether there is an edge between two vertices,
3. mark a vertex colored, and
4. get the next uncolored vertex.

There are clearly other operations needed outside the procedure greedy, such as inserting vertices and edges into the graph and making all vertices uncolored. There are many data structures that can be used to support graphs with these operations, and we shall study the subject of graphs in Chapters 6 and 7.

It should be emphasized that there is no limit to the number of operations that can be applied to instances of a given mathematical model. Each set of operations defines a distinct ADT. Some examples of operations that might be defined on an abstract data type SET are:

1. MAKENULL(A). This procedure makes the null set be the value for set A.
2. UNION(A, B, C). This procedure takes two set-valued arguments A and B, and assigns the union of A and B to be the value of set C.
3. SIZE(A). This function takes a set-valued argument A and returns an object of type integer whose value is the number of elements in the set A.
An implementation of an ADT is a translation, into statements of a programming language, of the declaration that defines a variable to be of that abstract data type, plus a procedure in that language for each operation of the ADT. An implementation chooses a data structure to represent the ADT; each data structure is built up from the basic data types of the underlying programming language using the available data structuring facilities. Arrays and record structures are two important data structuring facilities that are available in Pascal. For example, one possible implementation for variable S of type SET would be an array that contained the members of S.

One important reason for defining two ADT's to be different if they have the same underlying model but different operations is that the appropriateness of an implementation depends very much on the operations to be performed. Much of this book is devoted to examining some basic mathematical models such as sets and graphs, and developing the preferred implementations for various collections of operations.

Ideally, we would like to write our programs in languages whose primitive data types and operations are much closer to the models and operations of our ADT's. In many ways Pascal is not well suited to the implementation of various common ADT's but none of the programming languages in which ADT's can be declared more directly is as well known. See the bibliographic notes for information about some of these languages.

1.3 Data Types, Data Structures and Abstract Data Types

Although the terms "data type" (or just "type"), "data structure" and "abstract data type" sound alike, they have different meanings. In a programming language, the data type of a variable is the set of values that the variable may assume.
For example, a variable of type boolean can assume either the value true or the value false, but no other value. The basic data types vary from language to language; in Pascal they are integer, real, boolean, and character. The rules for constructing composite data types out of basic ones also vary from language to language; we shall mention how Pascal builds such types momentarily.

An abstract data type is a mathematical model, together with various operations defined on the model. As we have indicated, we shall design algorithms in terms of ADT's, but to implement an algorithm in a given programming language we must find some way of representing the ADT's in terms of the data types and operators supported by the programming language itself. To represent the mathematical model underlying an ADT we use data structures, which are collections of variables, possibly of several different data types, connected in various ways.

The cell is the basic building block of data structures. We can picture a cell as a box that is capable of holding a value drawn from some basic or composite data type. Data structures are created by giving names to aggregates of cells and (optionally) interpreting the values of some cells as representing connections (e.g., pointers) among cells.

The simplest aggregating mechanism in Pascal and most other programming languages is the (one-dimensional) array, which is a sequence of cells of a given type, which we shall often refer to as the celltype. We can think of an array as a mapping from an index set (such as the integers 1, 2, . . . , n) into the celltype. A cell within an array can be referenced by giving the array name together with a value from the index set of the array. In Pascal the index set may be an enumerated type, such as (north, east, south, west), or a subrange type, such as 1..10. The values in the cells of an array can be of any one type.
Thus, the declaration

    name: array[indextype] of celltype;

declares name to be a sequence of cells, one for each value of type indextype; the contents of the cells can be any member of type celltype.

Incidentally, Pascal is somewhat unusual in its richness of index types. Many languages allow only subrange types (finite sets of consecutive integers) as index types. For example, to index an array by letters in Fortran, one must simulate the effect by using integer indices, such as by using index 1 to stand for 'A', 2 to stand for 'B', and so on.

Another common mechanism for grouping cells in programming languages is the record structure. A record is a cell that is made up of a collection of cells, called fields, of possibly dissimilar types. Records are often grouped into arrays; the type defined by the aggregation of the fields of a record becomes the "celltype" of the array. For example, the Pascal declaration

    var
        reclist: array[1..4] of record
            data: real;
            next: integer
        end

declares reclist to be a four-element array, whose cells are records with two fields, data and next.

A third grouping method found in Pascal and some other languages is the file. The file, like the one-dimensional array, is a sequence of values of some particular type. However, a file has no index type; elements can be accessed only in the order of their appearance in the file. In contrast, both the array and the record are "random-access" structures, meaning that the time needed to access a component of an array or record is independent of the value of the array index or field selector. The compensating benefit of grouping by file, rather than by array, is that the number of elements in a file can be time-varying and unlimited.
Pointers and Cursors

In addition to the cell-grouping features of a programming language, we can represent relationships between cells using pointers and cursors. A pointer is a cell whose value indicates another cell. When we draw pictures of data structures, we indicate the fact that cell A is a pointer to cell B by drawing an arrow from A to B.

In Pascal, we can create a pointer variable ptr that will point to cells of a given type, say celltype, by the declaration

    var ptr: ↑ celltype

A postfix up-arrow is used in Pascal as the dereferencing operator, so the expression ptr↑ denotes the value (of type celltype) in the cell pointed to by ptr.

A cursor is an integer-valued cell, used as a pointer to an array. As a method of connection, the cursor is essentially the same as a pointer, but a cursor can be used in languages like Fortran that do not have explicit pointer types as Pascal does. By treating a cell of type integer as an index value for some array, we effectively make that cell point to one cell of the array. This technique, unfortunately, works only when cells of arrays are pointed to; there is no reasonable way to interpret an integer as a "pointer" to a cell that is not part of an array.

We shall draw an arrow from a cursor cell to the cell it "points to." Sometimes, we shall also show the integer in the cursor cell, to remind us that it is not a true pointer. The reader should observe that the Pascal pointer mechanism is such that cells in arrays can only be "pointed to" by cursors, never by true pointers. Other languages, like PL/I or C, allow components of arrays to be pointed to by either cursors or true pointers, while in Fortran or Algol, there being no pointer type, only cursors can be used.

Example 1.3. In Fig.
1.10 we see a two-part data structure that consists of a chain of cells containing cursors to the array reclist defined above. The purpose of the field next in reclist is to point to another record in the array. For example, reclist[4].next is 1, so record 4 is followed by record 1. Assuming record 4 is first, the next field of reclist orders the records 4, 1, 3, 2. Note that the next field is 0 in record 2, indicating that there is no following record. It is a useful convention, one we shall adopt in this book, to use 0 as a "NIL pointer," when cursors are being used. This idea is sound only if we also make the convention that arrays to which cursors "point" must be indexed starting at 1, never at 0.

Fig. 1.10. Example of a data structure.

The cells in the chain of records in Fig. 1.10 are of the type

    type
        recordtype = record
            cursor: integer;
            ptr: ↑ recordtype
        end

The chain is pointed to by a variable named header, which is of type ↑ recordtype; header points to an anonymous record of type recordtype.† That record has a value 4 in its cursor field; we regard this 4 as an index into the array reclist. The record has a true pointer in field ptr to another anonymous record. The record pointed to has an index in its cursor field indicating position 2 of reclist; it also has a nil pointer in its ptr field.

1.4 The Running Time of a Program

When solving a problem we are faced frequently with a choice among algorithms. On what basis should we choose? There are two often contradictory goals.

1. We would like an algorithm that is easy to understand, code, and debug.
2. We would like an algorithm that makes efficient use of the computer's resources, especially, one that runs as fast as possible.

When we are writing a program to be used once or a few times, goal (1) is most important.
The cost of the programmer's time will most likely exceed by far the cost of running the program, so the cost to be minimized is the cost of writing the program. When presented with a problem whose solution is to be used many times, the cost of running the program may far exceed the cost of writing it, especially if many of the program runs are given large amounts of input. Then it is financially sound to implement a fairly complicated algorithm, provided that the resulting program will run significantly faster than a more obvious program. Even in these situations it may be wise first to implement a simple algorithm, to determine the actual benefit to be had by writing a more complicated program. In building a complex system it is often desirable to implement a simple prototype on which measurements and simulations can be performed, before committing oneself to the final design. It follows that programmers must not only be aware of ways of making programs run fast, but must know when to apply these techniques and when not to bother.

Measuring the Running Time of a Program

The running time of a program depends on factors such as:

1. the input to the program,
2. the quality of code generated by the compiler used to create the object program,
3. the nature and speed of the instructions on the machine used to execute the program, and
4. the time complexity of the algorithm underlying the program.

The fact that running time depends on the input tells us that the running time of a program should be defined as a function of the input. Often, the running time depends not on the exact input but only on the "size" of the input. A good example is the process known as sorting, which we shall discuss in Chapter 8. In a sorting problem, we are given as input a list of items to be sorted, and we are to produce as output the same items, but smallest (or largest) first. For example, given 2, 1, 3, 1, 5, 8 as input we might wish to produce 1, 1, 2, 3, 5, 8 as output.
The latter list is said to be sorted smallest first. The natural size measure for inputs to a sorting program is the number of items to be sorted, or in other words, the length of the input list. In general, the length of the input is an appropriate size measure, and we shall assume that measure of size unless we specifically state otherwise.

It is customary, then, to talk of T(n), the running time of a program on inputs of size n. For example, some program may have a running time T(n) = cn², where c is a constant. The units of T(n) will be left unspecified, but we can think of T(n) as being the number of instructions executed on an idealized computer.

For many programs, the running time is really a function of the particular input, and not just of the input size. In that case we define T(n) to be the worst case running time, that is, the maximum, over all inputs of size n, of the running time on that input. We also consider Tavg(n), the average, over all inputs of size n, of the running time on that input. While Tavg(n) appears a fairer measure, it is often fallacious to assume that all inputs are equally likely. In practice, the average running time is often much harder to determine than the worst-case running time, both because the analysis becomes mathematically intractable and because the notion of "average" input frequently has no obvious meaning. Thus, we shall use worst-case running time as the principal measure of time complexity, although we shall mention average-case complexity wherever we can do so meaningfully.

Now let us consider remarks (2) and (3) above: that the running time of a program depends on the compiler used to compile the program and the machine used to execute it. These facts imply that we cannot express the running time T(n) in standard time units such as seconds.
Rather, we can only make remarks like "the running time of such-and-such an algorithm is proportional to n²." The constant of proportionality will remain unspecified since it depends so heavily on the compiler, the machine, and other factors.

Big-Oh and Big-Omega Notation

To talk about growth rates of functions we use what is known as "big-oh" notation. For example, when we say the running time T(n) of some program is O(n²), read "big oh of n squared" or just "oh of n squared," we mean that there are positive constants c and n₀ such that for n equal to or greater than n₀, we have T(n) ≤ cn².

Example 1.4. Suppose T(0) = 1, T(1) = 4, and in general T(n) = (n+1)². Then we see that T(n) is O(n²), as we may let n₀ = 1 and c = 4. That is, for n ≥ 1, we have (n + 1)² ≤ 4n², as the reader may prove easily. Note that we cannot let n₀ = 0, because T(0) = 1 is not less than c·0² = 0 for any constant c.

In what follows, we assume all running-time functions are defined on the nonnegative integers, and their values are always nonnegative, although not necessarily integers. We say that T(n) is O(f(n)) if there are constants c and n₀ such that T(n) ≤ cf(n) whenever n ≥ n₀. A program whose running time is O(f(n)) is said to have growth rate f(n).

Example 1.5. The function T(n) = 3n³ + 2n² is O(n³). To see this, let n₀ = 0 and c = 5. Then, the reader may show that for n ≥ 0, 3n³ + 2n² ≤ 5n³. We could also say that this T(n) is O(n⁴), but this would be a weaker statement than saying it is O(n³).

As another example, let us prove that the function 3ⁿ is not O(2ⁿ). Suppose that there were constants n₀ and c such that for all n ≥ n₀ we had 3ⁿ ≤ c2ⁿ. Then c ≥ (3/2)ⁿ for any n ≥ n₀. But (3/2)ⁿ gets arbitrarily large as n gets large, so no constant c can exceed (3/2)ⁿ for
When we say T(n) is O(f(n)), we know that f(n) is an upper bound on the growth rate of T(n). To specify a lower bound on the growth rate of T(n) we can use the notation T(n) is Ω(g(n)), read "big omega of g(n)" or just "omega of g(n)," to mean that there exists a positive constant c such that T(n) ≥ cg(n) infinitely often (for an infinite number of values of n).† Example 1.6. To verify that the function T(n)= n + 2n is Ω(n ), let c = 1. Then T(n) ≥ cn 3 for n = 0, 1, . . .. For another example, let T(n) = n for odd n ≥ 1 and T(n) = n 2/100 for even n ≥ 0. To 2 verify that T(n) is Ω (n ), let c = 1/100 and consider the infinite set of n's: n = 0, 2, 4, 6, . . .. The Tyranny of Growth Rate We shall assume that programs can be evaluated by comparing their runnin g-time functions, with constants of proportionality neglected. Under this assum ption a program with running time O(n ) is better than one with running time O(n ), for example. Besides constant factors due to the compiler and machine, however, there is a co nstant factor due to the nature of the program itself. It is possible, for example, that with a particular compiler- machine combination, the first program takes 100n milliseconds, while the second takes 5n milliseconds. Might not the 5n program be better than the 100n program? The answer to this question depends on the sizes of inputs the prog rams are expected to 3 process. For inputs of size n < 20, the program with running time 5n will be faster than the one with running time 100n . Therefore, if the program is to be run mainly on inputs of small size, we would indeed prefer the program whose running time was O(n ). However, as n gets large, the ratio of the running times, which is 5n /100n = n/20, gets arbitrarily large. Thus, as the size of the input increases, the O(n ) program will take significantly 2 more time than the O(n ) program. 
If there are even a few large inputs in the mix of problems these two programs are designed to solve, we can be much better off with the program whose running time has the lower growth rate.

Another reason for at least considering programs whose growth rates are as low as possible is that the growth rate ultimately determines how big a problem we can solve on a computer. Put another way, as computers get faster, our desire to solve larger problems on them continues to increase. However, unless a program has a low growth rate such as O(n) or O(nlogn), a modest increase in computer speed makes very little difference in the size of the largest problem we can solve in a fixed amount of time.

Example 1.7. In Fig. 1.11 we see the running times of four programs with different time complexities, measured in seconds, for a particular compiler-machine combination. Suppose we can afford 1000 seconds, or about 17 minutes, to solve a given problem. How large a problem can we solve? In 10³ seconds, each of the four algorithms can solve roughly the same size problem, as shown in the second column of Fig. 1.12.

Fig. 1.11. Running times of four programs.

Suppose that we now buy a machine that runs ten times faster at no additional cost. Then for the same cost we can spend 10⁴ seconds on a problem where we spent 10³ seconds before. The maximum size problem we can now solve using each of the four programs is shown in the third column of Fig. 1.12, and the ratio of the third and second columns is shown in the fourth column. We observe that a 1000% improvement in computer speed yields only a 30% increase in the size of problem we can solve if we use the O(2ⁿ) program. Additional factors of ten speedup in the computer yield an even smaller percentage increase in problem size.
In effect, the O(2ⁿ) program can solve only small problems no matter how fast the underlying computer.

Fig. 1.12. Effect of a ten-fold speedup in computation time.

In the third column of Fig. 1.12 we see the clear superiority of the O(n) program; it returns a 1000% increase in problem size for a 1000% increase in computer speed. We see that the O(n³) and O(n²) programs return, respectively, 230% and 320% increases in problem size for 1000% increases in speed. These ratios will be maintained for additional increases in speed.

As long as the need for solving progressively larger problems exists, we are led to an almost paradoxical conclusion. As computation becomes cheaper and machines become faster, as will most surely continue to happen, our desire to solve larger and more complex problems will continue to increase. Thus, the discovery and use of efficient algorithms, those whose growth rates are low, becomes more rather than less important.

A Few Grains of Salt

We wish to re-emphasize that the growth rate of the worst case running time is not the sole, or necessarily even the most important, criterion for evaluating an algorithm or program. Let us review some conditions under which the running time of a program can be overlooked in favor of other issues.

1. If a program is to be used only a few times, then the cost of writing and debugging dominate the overall cost, so the actual running time rarely affects the total cost. In this case, choose the algorithm that is easiest to implement correctly.
2. If a program is to be run only on "small" inputs, the growth rate of the running time may be less important than the constant factor in the formula for running time. What is a "small" input depends on the exact running times of the competing algorithms.
There are some algorithms, such as the integer multiplication algorithm due to Schonhage and Strassen [1971], that are asymptotically the most efficient known for their problem, but have never been used in practice even on the largest problems, because the constant of proportionality is so large in comparison to other simpler, less "efficient" algorithms.
3. A complicated but efficient algorithm may not be desirable because a person other than the writer may have to maintain the program later. It is hoped that by making the principal techniques of efficient algorithm design widely known, more complex algorithms may be used freely, but we must consider the possibility of an entire program becoming useless because no one can understand its subtle but efficient algorithms.
4. There are a few examples where efficient algorithms use too much space to be implemented without using slow secondary storage, which may more than negate the efficiency.
5. In numerical algorithms, accuracy and stability are just as important as efficiency.

1.5 Calculating the Running Time of a Program

Determining, even to within a constant factor, the running time of an arbitrary program can be a complex mathematical problem. In practice, however, determining the running time of a program to within a constant factor is usually not that difficult; a few basic principles suffice. Before presenting these principles, it is important that we learn how to add and multiply in "big oh" notation.

Suppose that T₁(n) and T₂(n) are the running times of two program fragments P₁ and P₂,

