# Models of Computation: Exploring the Power of Computing — CS 201 Intro to Computer Science, UNM

This 698-page set of class notes was uploaded by Beatriz Mitchell on Monday, November 2, 2015. The notes belong to CS 201 Intro to Computer Science at the University of New Mexico, taught by Mr. Burns in Fall 2015. Since its upload, it has received 29 views. For similar materials see CS 201 Intro to Computer Science in Computer Science at the University of New Mexico.


Date Created: 11/02/15

**Models of Computation: Exploring the Power of Computing**

John E. Savage, Brown University

*To Patricia, Christopher, and Timothy*

## Preface

Theoretical computer science treats any computational subject for which a good model can be created. Research on formal models of computation was initiated in the 1930s and 1940s by Turing, Post, Kleene, Church, and others. In the 1950s and 1960s, programming languages, language translators, and operating systems were under development and therefore became both the subject and basis for a great deal of theoretical work. The power of computers of this period was limited by slow processors and small amounts of memory, and thus theories (models, algorithms, and analysis) were developed to explore the efficient use of computers as well as the inherent complexity of problems. The former subject is known today as algorithms and data structures, the latter as computational complexity.

The focus of theoretical computer scientists in the 1960s on languages is reflected in the first textbook on the subject, *Formal Languages and Their Relation to Automata* by John Hopcroft and Jeffrey Ullman. This influential book led to the creation of many language-centered theoretical computer science courses; many introductory theory courses today continue to reflect the content of this book and the interests of theoreticians of the 1960s and early 1970s. Although the 1970s and 1980s saw the development of models and methods of analysis directed at understanding the limits on the performance of computers, this attractive new material has not been made available at the introductory level. This book is designed to remedy this situation.

This book is distinguished from others on theoretical computer science by its primary focus on real problems, its emphasis on concrete models of machines and programming styles, and the number and variety of models and styles it covers.
These include the logic circuit, the finite-state machine, the pushdown automaton, the random-access machine, memory hierarchies, the PRAM (parallel random-access machine), the VLSI (very large-scale integrated) chip, and a variety of parallel machines.

The book covers the traditional topics of formal languages and automata and complexity classes, but also gives an introduction to the more modern topics of space-time tradeoffs, memory hierarchies, parallel computation, the VLSI model, and circuit complexity. These modern topics are integrated throughout the text, as illustrated by the early introduction of P-complete and NP-complete problems. The book provides the first textbook treatment of space-time tradeoffs and memory hierarchies as well as a comprehensive introduction to traditional computational complexity. Its treatment of circuit complexity is modern and substantive, and parallelism is integrated throughout.

### Plan of the Book

The book has three parts. Part I (Chapter 1) is an overview. Part II, consisting of Chapters 2–7, provides an introduction to general computational models. Chapter 2 introduces logic circuits and derives upper bounds on the size and depth of circuits for important problems. The finite-state, random-access, and Turing machine models are defined in Chapter 3, and circuits are presented that simulate computations performed by these machines. From such simulations arise results of two kinds. First, computational inequalities of the form C(f) ≤ κST are derived for problems f run on the random-access machine, where C(f) is the size of the smallest circuit for f, κ is a constant, and S and T are storage space and computation time. If ST is too small relative to C(f), the problem f cannot be solved. Second, the same circuit simulations are interpreted to identify P-complete and NP-complete problems.
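Written out, the computational inequality of the first kind and its contrapositive, the form in which it yields lower bounds, are (this restatement only rearranges the inequality already given above):

```latex
% RAM computational inequality (Chapter 3): circuit size versus
% the space-time product of a RAM computation.
C(f) \;\le\; \kappa\, S\, T
\qquad\Longleftrightarrow\qquad
S\, T \;\ge\; \frac{C(f)}{\kappa}
```

That is, any RAM program computing f in space S and time T must have a space-time product of at least C(f)/κ, so a provably large circuit size forces a space-time lower bound.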
P-complete problems can all be solved in polynomial time but are believed hard to solve fast on parallel machines. The NP-complete problems include many important scheduling and optimization problems and are believed not solvable in polynomial time on serial machines.

Part II also contains traditional material on formal languages and automata. Chapter 4 explores the connection between two machine models (the finite-state machine and the pushdown automaton) and language types in the Chomsky hierarchy. Chapter 5 examines Turing machines. It shows that the languages recognized by them are the phrase-structure languages, the most expressive of the language types in the Chomsky hierarchy. This chapter also examines universal Turing machines, reducibility, unsolvable problems, and the functions computed by Turing machines.

Finally, Part II contains Chapters 6 and 7, which introduce algebraic and combinatorial circuits and parallel machine models, respectively. Algebraic and combinatorial circuits are graphs of straight-line programs of the kind typically used for matrix multiplication and inversion, solving linear systems of equations, computing the fast Fourier transform, performing convolutions, and merging and sorting. Chapter 6 contains reference material on problems used in later chapters to illustrate models and lower-bound arguments. Parallel machine models such as the PRAM and networks of computers organized as meshes and hypercubes are studied in Chapter 7. A framework is given for the design of algorithms and derivation of lower bounds on performance.

Part III, a comprehensive treatment of computational complexity, consists of Chapters 8–12. Chapter 8 provides a comprehensive survey of traditional computational complexity. Using serial and parallel machine models, it examines time- and space-bounded complexity classes, including the P-complete, NP-complete, and PSPACE-complete languages as well as the circuit complexity classes NC and P/poly.
This chapter also establishes the connections between deterministic and nondeterministic space complexity classes and shows that the nondeterministic space classes are closed under complements.

Circuit complexity is the topic of Chapter 9. Methods for deriving lower bounds on circuit size and depth are given for general circuits, formulas, monotone circuits, and bounded-depth circuits. This modern treatment of circuit complexity complements Chapter 2, which derives tight upper bounds on circuit size and depth.

Space–time tradeoffs are studied in Chapter 10 using two computational models, the branching program and the pebble game, which capture the notions of space and time for programs in which branching is and is not allowed, respectively. Methods for deriving lower bounds on the exchange of space for time are presented and applied to a representative set of problems.

Chapter 11 examines models for memory-hierarchy systems. It uses the pebble game with pebbles of multiple colors to designate storage locations at different levels of a hierarchy, and also employs block and RAM-based models. Again, lower bounds on performance are derived and compared with the performance of algorithms. This chapter also has a brief treatment of the LRU and FIFO memory-management algorithms that uses competitive analysis to compare their performance to that of the optimal algorithm.

The book closes with Chapter 12 on the VLSI model for integrated circuits. In this model both chip area A and time T are important, and methods are given for deriving lower bounds on measures such as AT². Chip layouts and VLSI algorithms are also exhibited whose performance comes close to matching the lower bounds.

### Use of the Book

Many different courses can be designed around this book.
A core undergraduate computer science course can be taught using Parts I and II and some material from Chapter 8. The first course on theoretical computer science for majors at Brown uses most of Chapters 1–5 except for the advanced material in Chapters 2 and 3. It also uses a few lessons from Chapters 10 and 11 to emphasize space–time tradeoffs, which play a central role in Chapter 3 and lead into the study of formal languages and automata in Chapter 4. After the material of Chapter 5, a few lectures are given on NP-complete problems from Chapter 8.

This introductory course has four programming assignments in Scheme that illustrate the ideas embodied in Chapters 2, 3, and 5. The first program solves the circuit-value problem; that is, it executes a straight-line program, thereby producing the outputs defined by this program. The second program writes a straight-line program simulating T steps by a finite-state machine. The third program writes a straight-line program simulating T steps by a one-tape Turing machine (this is the reduction involved in the Cook-Levin theorem), and the fourth one simulates a universal Turing machine.

Several different advanced courses can be assembled from the material of Part III and introductory material of Part II. For example, a course on concrete computational complexity can be assembled around Chapters 10 and 11, which examine tradeoffs between space and time in primary and secondary memory. This course would presume or include introductory material from Chapter 3.

An advanced course emphasizing traditional computational complexity can be based primarily on computability (Chapter 5) and complexity classes (Chapter 8) and some material on circuit complexity from Chapter 9.

An advanced course on circuit complexity can be assembled from Chapter 2 on logic circuits and Chapter 9 on circuit complexity.
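The first of the programming assignments mentioned above, solving the circuit-value problem, can be sketched as follows. This is a minimal illustration in Python rather than the Scheme used in the course, and the program encoding (a list of `(target, gate, arguments)` triples) is a hypothetical choice, not the course's actual assignment format:

```python
# Circuit-value problem: execute a straight-line program over Boolean
# gates and return the values it defines.  Each step assigns to a new
# variable the result of a gate applied to earlier variables or inputs.

def evaluate(program, inputs):
    """Run a straight-line program; `inputs` maps input names to bits."""
    env = dict(inputs)
    for target, op, args in program:
        vals = [env[a] for a in args]
        if op == "AND":
            env[target] = vals[0] & vals[1]
        elif op == "OR":
            env[target] = vals[0] | vals[1]
        elif op == "NOT":
            env[target] = 1 - vals[0]
        else:
            raise ValueError(f"unknown gate {op!r}")
    return env

# A three-gate straight-line program computing (x AND y) OR (NOT x):
prog = [("g1", "AND", ["x", "y"]),
        ("g2", "NOT", ["x"]),
        ("g3", "OR",  ["g1", "g2"])]
print(evaluate(prog, {"x": 1, "y": 0})["g3"])  # prints 0
```

Executing the program gate by gate in order is exactly what makes it "straight-line": there is no branching, so the running time equals the circuit size.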
Chapter 2 describes efficient circuits for a variety of functions, while Chapter 9 surveys methods for deriving lower bounds on circuit complexity. The titles of sections containing advanced material carry an asterisk.

### Acknowledgments

The raw material for this book is the fruit of the labors of many hundreds of people who have sought to understand computation. It is a great privilege to have the opportunity to convey this exciting body of material to a new audience.

Because the writing of a book involves years of solitary work, it is far too easy for authors to lose sight of their audience. For this reason I am indebted to a number of individuals who have read my book critically. José G. Castaños, currently a Brown Ph.D. candidate and my advisee, has been of immense help to me in this regard. He has read many drafts of the book and has given me the benefit of his keen sense of what is acceptable to my readers. José has also served as a teaching assistant for the undergraduate theory course for which this book was used and contributed importantly to the course and the book in this capacity. Dimitrios Michailidis, also a Brown Ph.D. candidate, has also been a great help; he has read several drafts of the book and has spotted many errors and lacunae. Bill Smart, a third Brown Ph.D. candidate, also carefully read the first nine chapters.

I have also benefited greatly from the evaluations done for my publisher by Richard Chang, University of Maryland, Baltimore County; Michael A. Keenan, Columbus State University; Philip Lewis, State University of New York, Stony Brook; George Lukas, University of Massachusetts at Boston; Stephen R. Mahaney, Rutgers University; Friedhelm Meyer auf der Heide, University of Paderborn, Germany; Boleslaw Mikolajczak, University of Massachusetts, Dartmouth; Ramamohan Paturi, University of California, San Diego; Professor Gabriel Robins; and Jeffery Westbrook, AT&T Labs–Research.
Others, including Ray Greenlaw of the University of New Hampshire, read an early version of the manuscript for other publishers and offered valuable advice. Gary Rommel of the Eastern Connecticut State College and the Hartford Graduate Center provided feedback on classroom use of the book. Finally, I am indebted to students in my undergraduate and graduate courses at Brown whose feedback has been invaluable.

I very much appreciate advice on the content and organization of the book provided by many individuals, including my faculty colleagues the late Paris Kanellakis, Philip Klein, and Franco Preparata, as well as Akira Maruoka, a visitor to Brown. Together Franco and I also produced the brief analysis of circuit depth given in Section 2.12.2. Alan Selman offered valuable comments on Chapter 8. Akira Maruoka and Johan Håstad read and commented on the sections of Chapter 9 containing their work. Alan Selman and Ken Regan provided help in identifying references, and Allan Borodin commented on many of the chapter notes.

I wish to thank Jun Tarui for suggesting that I consider rewriting my 1976 book, *The Complexity of Computing*, which led to my writing this book. I also extend my sincere thanks to Andy Yao for his generous comments on the book for the publisher. Many others contributed to this book in one way or another, including Chuck Fiduccia, Paul Fischer, Bill McColl, Tony Medeiros, Mike Paterson, Eric Rudder, Elizabeth and Kevin Savage, Mark Snir, and many students in my courses.

I express my gratitude to Carter Shanklin, Executive Editor for Corporate & Professional Publishing at Addison Wesley Longman, for his confidence in me and this project.
I also thank Alwyn Velasquez for his attractive design of the book; Patricia Unubun, Production Editor on this project, for her secure guidance of the book in its final stages; and Dimitrios Michailidis, an expert in LaTeX, for his preparation of the macros used to typeset the book and his very close reading of the complete, formatted document, for which I am most appreciative. I offer my sincere thanks to Katrina Avery for her high-quality copyediting, Rosemary Simpson for her excellent index, which is a great addition to this book, and Cynthia Benn for her careful proofreading of the manuscript. The attractive cover of this book was designed by Michael LeGrand and Scott Klemmer, two very talented juniors at Brown University.

Finally, this book would not have been written without the loving support of my wife Patricia and our young sons, Christopher and Timothy. Their patience and understanding during my long absences over the four and one-half years this project was in process is deeply appreciated.

### Addendum to the Third Printing

I am very much indebted to Prof. Layne Watson of the Department of Computer Science at Virginia Tech for carefully and thoroughly reading many chapters of this book. He has discovered many small and a few important errors. Authors are very fortunate to have readers such as Layne who generously devote their valuable time to a careful reading of their books and manuscripts. I wish also to express my thanks to the following individuals who have sent me corrections: Robert Altshuler, Geert Janssen, Robert Legenstein, Eli S. Mostrales, Amanda Silver, Volker Strehl, and Jacobo Torán. Errata pages for the various printings of the book by Addison Wesley are maintained and available on the web at http://www.modelsofcomputation.org. I am very appreciative of the LaTeX help Manos Renieris provided to prepare the third printing of this book.
### Addendum to the Electronic Version of the Book

This version of the book, in which all known errors have been corrected, is released under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States license. This license is described at http://creativecommons.org/licenses/by-nc-nd/3.0/us/.

## Contents

- Preface
- **Part I: Overview of the Book**
- **1 The Role of Theory in Computer Science**
  - 1.1 A Brief History of Theoretical Computer Science (1.1.1 Early Years; 1.1.2 1950s; 1.1.3 1960s; 1.1.4 1970s; 1.1.5 1980s and 1990s)
  - 1.2 Mathematical Preliminaries (1.2.1 Sets; 1.2.2 Number Systems; 1.2.3 Languages and Strings; 1.2.4 Relations; 1.2.5 Graphs; 1.2.6 Matrices; 1.2.7 Functions; 1.2.8 Rate of Growth of Functions)
  - 1.3 Methods of Proof
  - 1.4 Computational Models (1.4.1 Logic Circuits; 1.4.2 Finite-State Machines; 1.4.3 Random-Access Machine; 1.4.4 Other Models; 1.4.5 Formal Languages)
  - 1.5 Computational Complexity (1.5.1 A Computational Inequality; 1.5.2 Tradeoffs in Space, Time, and I/O Operations; 1.5.3 Complexity Classes; 1.5.4 Circuit Complexity)
  - 1.6 Parallel Computation
  - Problems; Chapter Notes
- **Part II: General Computational Models**
- **2 Logic Circuits**
  - 2.1 Designing Circuits
  - 2.2 Straight-Line Programs and Circuits (2.2.1 Functions Computed by Circuits; 2.2.2 Circuits That Compute Functions; 2.2.3 Circuit Complexity Measures; 2.2.4 Algebraic Properties of Boolean Functions)
  - 2.3 Normal-Form Expansions of Boolean Functions (2.3.1 Disjunctive Normal Form; 2.3.2 Conjunctive Normal Form; 2.3.3 SOPE and POSE Normal Forms; 2.3.4 Ring-Sum Expansion; 2.3.5 Comparison of Normal Forms)
  - 2.4 Reductions Between Functions
  - 2.5 Specialized Circuits (2.5.1 Logical Operations; 2.5.2 Shifting Functions; 2.5.3 Encoder; 2.5.4 Decoder; 2.5.5 Multiplexer; 2.5.6 Demultiplexer)
  - 2.6 Prefix Computations (2.6.1 An Efficient Parallel Prefix Circuit)
  - 2.7 Addition (2.7.1 Carry-Lookahead Addition)
  - 2.8 Subtraction
  - 2.9 Multiplication (2.9.1 Carry-Save Multiplication; 2.9.2 Divide-and-Conquer Multiplication; 2.9.3 Fast Multiplication; 2.9.4 Very Fast Multiplication; 2.9.5 Reductions to Multiplication)
  - 2.10 Reciprocal and Division (2.10.1 Reductions to the Reciprocal)
  - 2.11 Symmetric Functions
  - 2.12 Most Boolean Functions Are Complex
  - 2.13 Upper Bounds on Circuit Size
  - Problems; Chapter Notes
- **3 Machines with Memory**
  - 3.1 Finite-State Machines (3.1.1 Functions Computed by FSMs; 3.1.2 Computational Inequalities for the FSM; 3.1.3 Circuits Are Universal for Bounded FSM Computations; 3.1.4 Interconnections of Finite-State Machines; 3.1.5 Nondeterministic Finite-State Machines)
  - 3.2 Simulating FSMs with Shallow Circuits* (3.2.1 A Shallow Circuit Simulating Addition)
  - 3.3 Designing Sequential Circuits (3.3.1 Binary Memory Devices)
  - 3.4 Random-Access Machines (3.4.1 The RAM Architecture; 3.4.2 The Bounded-Memory RAM as FSM; 3.4.3 Unbounded-Memory RAM Programs; 3.4.4 Universality of the Unbounded-Memory RAM)
  - 3.5 Random-Access Memory Design
  - 3.6 Computational Inequalities for the RAM
  - 3.7 Turing Machines (3.7.1 Nondeterministic Turing Machines)
  - 3.8 Universality of the Turing Machine
  - 3.9 Turing Machine Circuit Simulations (3.9.1 A Simple Circuit Simulation of TM Computations; 3.9.2 Computational Inequalities for Turing Machines; 3.9.3 Reductions from Turing to Circuit Computations; 3.9.4 Definitions of P-Complete and NP-Complete Languages; 3.9.5 Reductions to P-Complete Languages; 3.9.6 Reductions to NP-Complete Languages; 3.9.7 An Efficient Circuit Simulation of TM Computations*)
  - 3.10 Design of a Simple CPU (3.10.1 The Register Set; 3.10.2 The Fetch-and-Execute Cycle; 3.10.3 The Instruction Set; 3.10.4 Assembly-Language Programming; 3.10.5 Timing and Control; 3.10.6 CPU Circuit Size and Depth; 3.10.7 Emulation)
  - Problems; Chapter Notes
- **4 Finite-State Machines and Pushdown Automata**
  - 4.1 Finite-State Machine Models
  - 4.2 Equivalence of DFSMs and NFSMs
  - 4.3 Regular Expressions
  - 4.4 Regular Expressions and FSMs (4.4.1 Recognition of Regular Expressions by FSMs; 4.4.2 Regular Expressions Describing FSM Languages; 4.4.3 grep: Searching for Strings in Files)
  - 4.5 The Pumping Lemma for FSMs
  - 4.6 Properties of Regular Languages
  - 4.7 State Minimization* (4.7.1 Equivalence Relations on Languages and States; 4.7.2 The Myhill-Nerode Theorem; 4.7.3 A State Minimization Algorithm)
  - 4.8 Pushdown Automata
  - 4.9 Formal Languages (4.9.1 Phrase-Structure Languages; 4.9.2 Context-Sensitive Languages; 4.9.3 Context-Free Languages; 4.9.4 Regular Languages)
  - 4.10 Regular Language Recognition
  - 4.11 Parsing Context-Free Languages
  - 4.12 CFL Acceptance with Pushdown Automata*
  - 4.13 Properties of Context-Free Languages (4.13.1 CFL Pumping Lemma; 4.13.2 CFL Closure Properties)
  - Problems; Chapter Notes
- **5 Computability**
  - 5.1 The Standard Turing Machine Model (5.1.1 Programming the Turing Machine)
  - 5.2 Extensions to the Standard Turing Machine Model (5.2.1 Multi-Tape Turing Machines; 5.2.2 Nondeterministic Turing Machines; 5.2.3 Oracle Turing Machines; 5.2.4 Representing Restricted Models of Computation)
  - 5.3 Configuration Graphs
  - 5.4 Phrase-Structure Languages and Turing Machines
  - 5.5 Universal Turing Machines
  - 5.6 Encodings of Strings and Turing Machines
  - 5.7 Limits on Language Acceptance (5.7.1 Decidable Languages; 5.7.2 A Language That Is Not Recursively Enumerable; 5.7.3 Recursively Enumerable but Not Decidable Languages)
  - 5.8 Reducibility and Unsolvability (5.8.1 Reducibility; 5.8.2 Unsolvable Problems)
  - 5.9 Functions Computed by Turing Machines (5.9.1 Primitive Recursive Functions; 5.9.2 Partial Recursive Functions; 5.9.3 Partial Recursive Functions Are RAM-Computable)
  - Problems; Chapter Notes
- **6 Algebraic and Combinatorial Circuits**
  - 6.1 Straight-Line Programs
  - 6.2 Mathematical Preliminaries (6.2.1 Rings and Fields; 6.2.2 Matrices)
  - 6.3 Matrix Multiplication (6.3.1 Strassen's Algorithm)
  - 6.4 Transitive Closure
  - 6.5 Matrix Inversion (6.5.1 Symmetric Positive Definite Matrices; 6.5.2 Schur Factorization; 6.5.3 Inversion of Triangular Matrices; 6.5.4 LDLᵀ Factorization of SPD Matrices; 6.5.5 Fast Matrix Inversion*)
  - 6.6 Solving Linear Systems
  - 6.7 Convolution and the FFT Algorithm (6.7.1 Commutative Rings*; 6.7.2 The Discrete Fourier Transform; 6.7.3 Fast Fourier Transform; 6.7.4 Convolution Theorem)
  - 6.8 Merging and Sorting Networks (6.8.1 Sorting Via Bitonic Merging; 6.8.2 Fast Sorting Networks)
  - Problems; Chapter Notes
- **7 Parallel Computation**
  - 7.1 Parallel Computational Models
  - 7.2 Memoryless Parallel Computers
  - 7.3 Parallel Computers with Memory (7.3.1 Flynn's Taxonomy; 7.3.2 The Data-Parallel Model; 7.3.3 Networked Computers)
  - 7.4 The Performance of Parallel Algorithms (7.4.1 Amdahl's Law; 7.4.2 Brent's Principle)
  - 7.5 Multidimensional Meshes (7.5.1 Matrix-Vector Multiplication on a Linear Array; 7.5.2 Sorting on Linear Arrays; 7.5.3 Matrix Multiplication on a 2D Mesh; 7.5.4 Embedding of 1D Arrays in 2D Meshes)
  - 7.6 Hypercube-Based Machines (7.6.1 Embedding Arrays in Hypercubes; 7.6.2 Cube-Connected Cycles)
  - 7.7 Normal Algorithms (7.7.1 Summing on the Hypercube; 7.7.2 Broadcasting on the Hypercube; 7.7.3 Shifting on the Hypercube; 7.7.4 Shuffle and Unshuffle Permutations on Linear Arrays; 7.7.5 Fully Normal Algorithms on Two-Dimensional Arrays; 7.7.6 Normal Algorithms on Cube-Connected Cycles; 7.7.7 Fast Matrix Multiplication on the Hypercube)
  - 7.8 Routing in Networks (7.8.1 Local Routing Networks; 7.8.2 Global Routing Networks)
  - 7.9 The PRAM Model (7.9.1 Simulating Trees, Arrays, and Hypercubes on the PRAM; 7.9.2 The Power of Concurrency; 7.9.3 Simulating the PRAM on a Hypercube Network; 7.9.4 Circuits and the CREW PRAM)
  - 7.10 The BSP and LogP Models
  - Problems; Chapter Notes
- **Part III: Computational Complexity**
- **8 Complexity Classes**
  - 8.1 Introduction
  - 8.2 Languages and Problems (8.2.1 Complements of Languages and Decision Problems)
  - 8.3 Resource Bounds
  - 8.4 Serial Computational Models (8.4.1 The Random-Access Machine; 8.4.2 Turing Machine Models)
  - 8.5 Classification of Decision Problems (8.5.1 Space and Time Hierarchies; 8.5.2 Time-Bounded Complexity Classes; 8.5.3 Space-Bounded Complexity Classes; 8.5.4 Relations Between Time- and Space-Bounded Classes; 8.5.5 Space-Bounded Functions)
  - 8.6 Complements of Complexity Classes (8.6.1 The Complement of NP)
  - 8.7 Reductions
  - 8.8 Hard and Complete Problems
  - 8.9 P-Complete Problems
  - 8.10 NP-Complete Problems (8.10.1 NP-Complete Satisfiability Problems; 8.10.2 Other NP-Complete Problems)
  - 8.11 The Boundary Between P and NP
  - 8.12 PSPACE-Complete Problems (8.12.1 A First PSPACE-Complete Problem; 8.12.2 Other PSPACE-Complete Problems)
  - 8.13 The Circuit Model of Computation (8.13.1 Uniform Families of Circuits; 8.13.2 Uniform Circuits Are Equivalent to Turing Machines)
  - 8.14 The Parallel Random-Access Machine Model (8.14.1 Equivalence of the CREW PRAM and Circuits; 8.14.2 The Parallel Computation Thesis)
  - 8.15 Circuit Complexity Classes (8.15.1 Efficiently Parallelizable Languages; 8.15.2 Circuits of Polynomial Size)
  - Problems; Chapter Notes
- **9 Circuit Complexity**
  - 9.1 Circuit Models and Measures (9.1.1 Circuit Models; 9.1.2 Complexity Measures)
  - 9.2 Relationships Among Complexity Measures (9.2.1 Effect of Fan-Out on Circuit Size; 9.2.2 Effect of Basis Change on Circuit Size and Depth; 9.2.3 Formula Size Versus Circuit Depth)
  - 9.3 Lower-Bound Methods for General Circuits (9.3.1 Simple Lower Bounds; 9.3.2 The Gate-Elimination Method for Circuit Size)
  - 9.4 Lower-Bound Methods for Formula Size (9.4.1 The Neciporuk Lower Bound; 9.4.2 The Krapchenko Lower Bound)
  - 9.5 The Power of Negation
  - 9.6 Lower-Bound Methods for Monotone Circuits (9.6.1 The Path-Elimination Method; 9.6.2 The Function Replacement Method; 9.6.3 The Approximation Method; 9.6.4 Slice Functions)
  - 9.7 Circuit Depth (9.7.1 Communication Complexity; 9.7.2 General Depth and Communication Complexity; 9.7.3 Monotone Depth and Communication Complexity; 9.7.4 The Monotone Depth of the Clique Function; 9.7.5 Bounded-Depth Circuits)
  - Problems; Chapter Notes
- **10 Space–Time Tradeoffs**
  - 10.1 The Pebble Game (10.1.1 The Pebble Game Versus the Branching Program; 10.1.2 Playing the Pebble Game)
  - 10.2 Space Lower Bounds
  - 10.3 Extreme Tradeoffs
  - 10.4 Grigoriev's Lower-Bound Method (10.4.1 Flow Properties of Functions; 10.4.2 The Lower-Bound Method in the Basic Pebble Game; 10.4.3 First Matrix Multiplication Bound)
  - 10.5 Applications of Grigoriev's Method (10.5.1 Convolution; 10.5.2 Cyclic Shifting; 10.5.3 Integer Multiplication; 10.5.4 Matrix Multiplication; 10.5.5 Discrete Fourier Transform; 10.5.6 Merging Networks)
  - 10.6 Worst-Case Tradeoffs for Pebble Games*
  - 10.7 Upper Bounds on Space*
  - 10.8 Lower Bound on Space for General Graphs*
  - 10.9 Branching Programs (10.9.1 Branching Programs and Other Models)
  - 10.10 Straight-Line Versus Branching Programs (10.10.1 Efficient Branching Programs for Cyclic Shift; 10.10.2 Efficient Branching Programs for Merging)
  - 10.11 The Borodin-Cook Lower-Bound Method
  - 10.12 Properties of "nice" and "ok" Matrices
  - 10.13 Applications of the Borodin-Cook Method (10.13.1 Convolution; 10.13.2 Integer Multiplication; 10.13.3 Matrix-Vector Product; 10.13.4 Matrix Multiplication*; 10.13.5 Matrix Inversion; 10.13.6 Discrete Fourier Transform; 10.13.7 Unique Elements; 10.13.8 Sorting)
  - Problems; Chapter Notes
- **11 Memory-Hierarchy Tradeoffs**
  - 11.1 The Red-Blue Pebble Game (11.1.1 Playing the Red-Blue Pebble Game; 11.1.2 Balanced Computer Systems)
  - 11.2 The Memory-Hierarchy Pebble Game (11.2.1 Playing the MHG)
  - 11.3 I/O-Time Relationships
  - 11.4 The Hong-Kung Lower-Bound Method
  - 11.5 Tradeoffs Between Space and I/O Time (11.5.1 Matrix-Vector Product; 11.5.2 Matrix-Matrix Multiplication; 11.5.3 The Fast Fourier Transform; 11.5.4 Convolution)
  - 11.6 Block I/O in the MHG
  - 11.7 Simulating a Fast Memory in the MHG
  - 11.8 RAM-Based I/O Models (11.8.1 The Block-Transfer Model)
  - 11.9 The Hierarchical Memory Model (11.9.1 Lower Bounds for the HMM; 11.9.2 Upper Bounds for the HMM)
  - 11.10 Competitive Memory Management (11.10.1 Two-Level Memory-Management Algorithms)
  - Problems; Chapter Notes
- **12 VLSI Models of Computation**
  - 12.1 The VLSI Challenge (12.1.1 Chip Fabrication; 12.1.2 Design and Layout)
  - 12.2 VLSI Physical Models
  - 12.3 VLSI Computational Models
  - 12.4 VLSI Performance Criteria
  - 12.5 Chip Layout (12.5.1 The H-Tree Layout; 12.5.2 Multi-dimensional Mesh Layouts; 12.5.3 Layout of the CCC Network)
  - 12.6 Area–Time Tradeoffs (12.6.1 Planar Circuit Size; 12.6.2 Computational Inequalities; 12.6.3 The Planar Separator Theorem)
  - 12.7 The Performance of VLSI Algorithms (12.7.1 The Performance of VLSI Algorithms on Functions; 12.7.2 The Performance of VLSI Algorithms on Predicates)
  - 12.8 Area Bounds
  - Problems; Chapter Notes
- Bibliography
- Index

# Part I
Overview of the Book

## 1 The Role of Theory in Computer Science

Computer science is the study of computers and programs, the collections of instructions that direct the activity of computers. Although computers are made of simple elements, the tasks they perform are often very complex. The great disparity between the simplicity of computers and the complexity of computational tasks offers intellectual challenges of the highest order. It is the models and methods of analysis developed by computer science to meet these challenges that are the subject of theoretical computer science.

Computer scientists have developed models for machines, such as the random-access and Turing machines; for languages, such as regular and context-free languages; for programs, such as straight-line and branching programs; and for systems of programs, such as compilers and operating systems. Models have also been developed for data structures, such as heaps, and for databases, such as relational and object-oriented databases. Methods of analysis have helped to study the efficiency of algorithms and their data structures, the expressibility of languages and the capacity of computer architectures to recognize them, the classification of problems by the time and space required to solve them, their inherent complexity, and limits that hold simultaneously on computational resources for particular problems.

This book examines each of these topics in detail except for the first, analysis of algorithms and data structures, which it covers only briefly. This chapter provides an overview of the book. Except for the mathematical preliminaries, the topics introduced here are revisited later.

### 1.1 A Brief History of Theoretical Computer Science

Theoretical computer science uses models and analysis to study computers and computation.
It thus encompasses the many areas of computer science sufficiently well developed to have models and methods of analysis. This includes most areas of the field.

#### 1.1.1 Early Years

**TURING AND CHURCH:** Theoretical computer science emerged primarily from the work of Alan Turing and Alonzo Church in 1936, although many others, such as Russell, Hilbert, and Boole, were important precursors. Turing and Church introduced formal computational models (the Turing machine and the lambda calculus), showed that some well-stated computational problems have no solution, and demonstrated the existence of universal computing machines, machines capable of simulating every other machine of their type.

Turing and Church were logicians; their work reflected the concerns of mathematical logic. The origins of computers predate them by centuries, going back at least as far as the abacus, if we call any mechanical aid to computation a computer. A very important contribution to the study of computers was made by Charles Babbage, who in 1836 completed the design of his first programmable Analytical Engine, a mechanical computer capable of arithmetic operations under the control of a sequence of punched cards (an idea borrowed from the Jacquard loom). A notable development in the history of computers, but one of less significance, was the 1938 demonstration by Claude Shannon that Boolean algebra could be used to explain the operation of relay circuits, a form of electromechanical computer. He was later to develop his profound "mathematical theory of communication" in 1948, as well as to lay the foundations for the study of circuit complexity in 1949.

**FIRST COMPUTERS:** In 1941 Konrad Zuse built the Z3, the first general-purpose program-controlled computer, a machine constructed from electromagnetic relays. The Z3 read programs from a punched paper tape. In the mid-1940s the first programmable electronic computer (using vacuum tubes), the ENIAC, was developed by Eckert and Mauchly.
Von Neumann, in a very influential paper, codified the model that now carries his name. With the invention of the transistor in 1947, electronic computers were to become much smaller and more powerful than the 30-ton ENIAC. The microminiaturization of transistors continues today to produce computers of ever-increasing computing power in ever-shrinking packages.

EARLY LANGUAGE DEVELOPMENT: The first computers were very difficult to program (cables were plugged and unplugged on the ENIAC). Later, programmers supplied commands by typing in sequences of 0's and 1's, the machine language of computers. A major contribution of the 1950s was the development of programming languages, such as FORTRAN, COBOL, and LISP. These languages allowed programmers to specify commands in mnemonic code and with high-level constructs such as loops, arrays, and procedures. As languages were developed, it became important to understand their expressiveness as well as the characteristics of the simplest computers that could translate them into machine language. As a consequence, formal languages and the automata that recognize them became an important topic of study in the 1950s. Nondeterministic models, models that may have more than one possible next state for the current state and input, were introduced during this time as a way to classify languages.

1.1.2 1950s

FINITE-STATE MACHINES: Occurring in parallel with the development of languages was the development of models for computers. The 1950s also saw the formalization of the finite-state machine (also called the sequential machine), the sequential circuit (the concrete realization of a sequential machine), and the pushdown automaton. Rabin and Scott pioneered the use of analytical tools to study the capabilities and limitations of these models.

FORMAL LANGUAGES: The late 1950s and 1960s saw an explosion of research on formal languages.
By 1964 the Chomsky language hierarchy, consisting of the regular, context-free, context-sensitive, and recursively enumerable languages, was established, as was the correspondence between these languages and the memory organizations of the machine types recognizing them, namely the finite-state machine, the pushdown automaton, the linear-bounded automaton, and the Turing machine. Many variants of these standard grammars, languages, and machines were also examined.

1.1.3 1960s

COMPUTATIONAL COMPLEXITY: The 1960s also saw the laying of the foundation for computational complexity with the classification by Hartmanis, Lewis, Stearns, and others of languages and functions by the time and space needed to compute them. Hierarchies of problems were identified and speed-up and gap theorems established. This area was to flower and lead to many important discoveries, including that by Cook (and independently Levin) of NP-complete languages, languages associated with many hard combinatorial and optimization problems, including the Traveling Salesperson problem, the problem of determining the shortest tour of cities for which all intercity distances are given. Karp was instrumental in demonstrating the importance of NP-complete languages. Because problems whose running time is exponential are considered intractable, it is very important to know whether strings in NP-complete languages can be recognized in time polynomial in their length. This is called the P = NP problem, where P is the class of deterministic polynomial-time languages. The P-complete languages were also identified in the 1970s; these are the hardest languages in P to recognize on parallel machines.

1.1.4 1970s

COMPUTATION TIME AND CIRCUIT COMPLEXITY: In the early 1970s the connection between computation time on Turing machines and circuit complexity was established, thereby giving
the study of circuits renewed importance and offering the hope that the P = NP problem could be resolved via circuit complexity.

PROGRAMMING LANGUAGE SEMANTICS: The 1970s were a very productive period for formal methods in the study of programs and languages. The area of programming language semantics was very active; models and denotations were developed to give meaning to the phrase "programming language," thereby putting language development on a solid footing. Formal methods for ensuring the correctness of programs were also developed and applied to program development. The 1970s also saw the emergence of the relational database model and the development of the relational calculus as a means for the efficient reformulation of database queries.

SPACE-TIME TRADEOFFS: An important byproduct of the work on formal languages and semantics in the 1970s is the pebble game. In this game, played on a directed acyclic graph, pebbles are placed on vertices to indicate that the value associated with a vertex is located in the register of a central processing unit. The game allows the study of tradeoffs between the number of pebbles (or registers) and time (the number of pebble placements) and leads to space-time product inequalities for individual problems. These ideas were generalized in the 1980s to branching programs.

VLSI MODEL: When the very large-scale integration (VLSI) of electronic components onto semiconductor chips emerged in the 1970s, VLSI models for them were introduced and analyzed. Ideas from the study of pebble games were applied and led to tradeoff inequalities relating the complexity of a problem to products such as AT^2, where A is the area of a chip and T is the number of steps it takes to solve a problem. In the late 1970s and 1980s the layout of computers on VLSI chips also became an important research topic.
ALGORITHMS AND DATA STRUCTURES: While algorithms (models for programs) and data structures were introduced from the beginning of the field, they experienced a flowering in the 1970s and 1980s. Knuth was most influential in this development, as later were Aho, Hopcroft, and Ullman. New algorithms were invented for sorting, data storage and retrieval, problems on graphs, polynomial evaluation, solving linear systems of equations, computational geometry, and many other topics on both serial and parallel machines.

1.1.5 1980s and 1990s

PARALLEL COMPUTING AND I/O COMPLEXITY: The 1980s also saw the emergence of many other theoretical computer science research topics, including parallel and distributed computing, cryptography, and I/O complexity. A variety of concrete and abstract models of parallel computers were developed, ranging from VLSI-based models to the parallel random-access machine (PRAM), a collection of synchronous processors alternately reading from and writing to a common array of memory cells and computing locally. Parallel algorithms and data structures were developed, as were classifications of problems according to the extent to which they are parallelizable. I/O complexity, the study of data movement among memory units in a memory hierarchy, emerged around 1980. Memory hierarchies take advantage of the temporal and spatial locality of problems to simulate fast, expensive memories with slow and inexpensive ones.

DISTRIBUTED COMPUTING: The emergence of networks of computers brought to light some hard logical problems that led to a theory of distributed computing, that is, computing with multiple and potentially asynchronous processors that may be widely dispersed. The problems addressed in this area include reaching consensus in the presence of malicious adversaries, handling processor failures, and efficiently coordinating the activities of agents when interprocessor latencies are large.
Although some of the problems addressed in distributed computing were first introduced in the 1950s, this topic is associated with the 1980s and 1990s.

CRYPTOGRAPHY: While cryptography has been important for ages, it became a serious concern of complexity theorists in the late 1970s and an active research area in the 1980s and 1990s. Some of the important cryptographic issues are a) how to exchange information secretly without having to exchange a private key with each communicating agent, b) how to identify with high assurance the sender of a message, and c) how to convince another agent that you have the solution to a problem without transferring the solution to him or her.

As this brief history illustrates, theoretical computer science speaks to many different computational issues. As the range of issues addressed by computer science grows in sophistication, we can expect a commensurate growth in the richness of theoretical computer science.

1.2 Mathematical Preliminaries

In this section we introduce basic concepts used throughout the book. Since it is presumed that the reader has already met most of this material, this presentation is abbreviated.

1.2.1 Sets

A set A is a non-repeating and unordered collection of elements. For example, A_50s = {Cobol, Fortran, Lisp} is a set of elements that could be interpreted as the names of languages designed in the 1950s. Because the elements in a set are unordered, {Cobol, Fortran, Lisp} and {Lisp, Cobol, Fortran} denote the same set. It is very convenient to recognize the empty set ∅, a set that does not have any elements. The set B = {0,1} containing 0 and 1 is used throughout this book. The notation a ∈ A means that element a is contained in set A. For example, Cobol ∈ A_50s means that Cobol is a language invented in the 1950s. A set can be finite or infinite. The cardinality of a finite set A, denoted |A|, is the number of elements in A.
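Python's built-in set type behaves like the sets defined above; the following is a minimal sketch (the variable names are illustrative, not from the text):

```python
# A set is an unordered collection without repetition.
a_50s = {"Cobol", "Fortran", "Lisp"}
assert a_50s == {"Lisp", "Cobol", "Fortran"}   # order is irrelevant

# Membership (Cobol is in A_50s) and cardinality (|A_50s|).
assert "Cobol" in a_50s
assert len(a_50s) == 3

# The empty set has no elements; note that {} in Python
# denotes an empty dict, so the empty set is written set().
empty = set()
assert len(empty) == 0
```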
We say that a set A is a subset of a set B, denoted A ⊆ B, if every element of A is an element of B. If A ⊆ B but B contains elements not in A, we say that A is a proper subset of B and write A ⊂ B.

The union of two sets A and B, denoted A ∪ B, is the set containing elements that are in A, B, or both. For example, if A_0 = {1,2,3} and B_0 = {4,3,5}, then A_0 ∪ B_0 = {1,2,3,4,5}. The intersection of sets A and B, denoted A ∩ B, is the set containing elements that are in both A and B. Hence, A_0 ∩ B_0 = {3}. If A and B have no elements in common, denoted A ∩ B = ∅, they are said to be disjoint sets. The difference between sets A and B, denoted A − B, is the set containing the elements that are in A but not in B. Thus, A_0 − B_0 = {1,2}. (See Fig. 1.1.)

[Figure 1.1 A Venn diagram showing the intersection and difference of sets A and B. Their union is the set of elements in A, B, or both.]

The following simple properties hold for arbitrary sets A and B and the operations of set union, intersection, and difference:

A ∪ B = B ∪ A        A ∩ B = B ∩ A
A ∪ ∅ = A            A ∩ ∅ = ∅
A − ∅ = A

The power set of a set A, denoted 2^A, is the set of all subsets of A, including the empty set. For example, 2^{2,5,9} = {∅, {2}, {5}, {9}, {2,5}, {2,9}, {5,9}, {2,5,9}}. We use 2^A to denote the power set of A as a reminder that it has 2^|A| elements. To see this, observe that for each subset B of the set A there is a binary tuple (e_1, e_2, ..., e_|A|) where e_i is 1 if the ith element of A is in B and 0 otherwise. Since there are 2^|A| ways to assign 0's and 1's to (e_1, e_2, ..., e_|A|), 2^A has 2^|A| elements.

The Cartesian product of two sets A and B, denoted A × B, is another set, the set of pairs {(a, b) | a ∈ A, b ∈ B}. For example, when A_0 = {1,2,3} and B_0 = {4,3,5}, A_0 × B_0 = {(1,4),(1,3),(1,5),(2,4),(2,3),(2,5),(3,4),(3,3),(3,5)}. The Cartesian product of k sets A_1, A_2, ..., A_k, denoted A_1 × A_2 × ··· × A_k, is the set of k-tuples {(a_1, a_2, ..., a_k) | a_1 ∈ A_1, a_2 ∈ A_2, ...
, a_k ∈ A_k} whose components are drawn from the respective sets. If for each 1 ≤ i ≤ k, A_i = A, the k-fold Cartesian product A × A × ··· × A is denoted A^k. An element of A^k is a k-tuple (a_1, a_2, ..., a_k) where a_i ∈ A. The binary |A|-tuple (e_1, e_2, ..., e_|A|) of the preceding paragraph is an element of {0,1}^|A|.

1.2.2 Number Systems

Integers are widely used to describe problems. The infinite set consisting of 0 and the positive integers {1,2,3,...} is called the set of natural numbers. The set of positive and negative integers and zero consists of the integers {0, 1, −1, 2, −2, ...}.

In the standard decimal representation of the natural numbers, each integer n is represented as a sum of powers of 10. For example, 867 = 8 × 10^2 + 6 × 10^1 + 7 × 10^0. Since computers today are binary machines, it is convenient to represent integers over base 2 instead of 10. The standard binary representation for the natural numbers represents each integer as a sum of powers of 2. That is, for some k ≥ 0 each integer n can be represented as a k-tuple x = (x_{k−1}, x_{k−2}, ..., x_1, x_0), where each of x_{k−1}, x_{k−2}, ..., x_1, x_0 has value 0 or 1 and n satisfies the following identity:

n = x_{k−1} 2^{k−1} + x_{k−2} 2^{k−2} + ··· + x_1 2^1 + x_0 2^0

The largest integer that can be represented with k bits is 2^{k−1} + 2^{k−2} + ··· + 2^1 + 2^0 = 2^k − 1. (See Problem 1.1.) Also, the k-tuple representation for n is unique; that is, two different integers cannot have the same representation. When leading 0's are suppressed, the standard binary representations for 1, 15, 32, and 97 are (1), (1,1,1,1), (1,0,0,0,0,0), and (1,1,0,0,0,0,1), respectively. We denote with x + y, x − y, x ∗ y, and x/y the results of addition, subtraction, multiplication, and division of integers.

1.2.3 Languages and Strings

An alphabet A is a finite set with at least two elements. A string x is an element (a_1, a_2, ..., a_k) of the Cartesian product A^k in which we drop the commas and parentheses.
Thus, we write x = a_1 a_2 ··· a_k and say that x is a string over the alphabet A. A string x in A^k is said to have length k, denoted |x| = k. Thus, 011 is a string of length three over A = {0,1}.

Consider now the Cartesian product A^k × A^l = A^{k+l}, which is the (k+l)-fold Cartesian product of A with itself. Let x = a_1 a_2 ··· a_k ∈ A^k and y = b_1 b_2 ··· b_l ∈ A^l. Then a string z = c_1 c_2 ··· c_{k+l} ∈ A^{k+l} can be written as the concatenation of strings x and y of length k and l, denoted z = x · y, where

x · y = a_1 a_2 ··· a_k b_1 b_2 ··· b_l

That is, c_i = a_i for 1 ≤ i ≤ k and c_i = b_{i−k} for k + 1 ≤ i ≤ k + l.

The empty string, denoted ε, is a special string with the property that when concatenated with any other string x it returns x; that is, x · ε = ε · x = x. The empty string is said to have zero length. As a special case of A^k, we let A^0 denote the set containing the empty string; that is, A^0 = {ε}.

The concatenation of sets of strings A and B, denoted A · B, is the set of strings formed by concatenating each string in A with each string in B. For example, {00, 1} · {a, bb} = {00a, 00bb, 1a, 1bb}. The concatenation of a set A with the empty set ∅, denoted A · ∅, is the empty set because the empty set contains no elements; that is, A · ∅ = ∅ · A = ∅. When no confusion arises, we write AB instead of A · B.

A language L over an alphabet A is a collection of strings of potentially different lengths over A. For example, {00, 010, 1110, 1001} is a finite language over the alphabet {0,1}. (It is finite because it contains a bounded number of strings.) The set of all strings of all lengths over the alphabet A, including the empty string, is denoted A* and called the Kleene closure of A. For example, {0}* contains ε, the empty string, as well as 0, 00, 000, 0000, .... Also, {00 ∪ 1}* = {ε, 1, 00, 001, 100, 0000, ...}. It follows that a language L over the alphabet A is a subset of A*, denoted L ⊆ A*. The positive closure of a set A, denoted A+, is the set of all strings over A except for the empty string.
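These string operations translate directly to Python, where strings over {0,1} are ordinary character strings. Below is a rough sketch; since A* is infinite, the helper (whose name `kleene_closure` is illustrative, not from the text) enumerates only strings up to a fixed length:

```python
from itertools import product

# Concatenation of strings: z = x . y, with |z| = |x| + |y|.
x, y = "011", "10"
z = x + y
assert len(z) == len(x) + len(y)

# The empty string is the identity for concatenation.
empty = ""
assert x + empty == empty + x == x

def kleene_closure(alphabet, max_len):
    """All strings over `alphabet` of length at most `max_len`,
    a finite slice of the infinite Kleene closure A*."""
    strings = [""]                       # the empty string
    for k in range(1, max_len + 1):
        strings += ["".join(t) for t in product(alphabet, repeat=k)]
    return strings

# {0}* up to length 3: the empty string, 0, 00, 000.
assert kleene_closure("0", 3) == ["", "0", "00", "000"]

# The positive closure A+ omits only the empty string.
assert kleene_closure("01", 2)[1:] == ["0", "1", "00", "01", "10", "11"]
```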
For example, 0(0*10*)+ is the set of binary strings beginning with 0 and containing at least one 1.

1.2.4 Relations

A subset R of the Cartesian product of sets is called a relation. A binary relation R is a subset of the Cartesian product of two sets. Three examples of binary relations are R_0 = {(0,0), (1,1), (2,4), (3,9), (4,16)}, R_1 = {(red, 0), (green, 1), (blue, 2)}, and R_2 = {(small, short), (medium, middle), (medium, average), (large, tall)}. Note that R_0 is a function because for each first component of a pair there is a unique second component. R_1 is also a function, but R_2 is not a function.

A binary relation R over a set A is a subset of A × A; that is, both components of each pair are drawn from the same set. We use two notations to denote membership of a pair (a,b) in a binary relation R over A, namely (a,b) ∈ R and the new notation aRb. Often it is more convenient to say aRb than to say (a,b) ∈ R.

A binary relation R is reflexive if for all a ∈ A, aRa. It is symmetric if for all a,b ∈ A, aRb if and only if bRa. It is transitive if for all a,b,c ∈ A, if aRb and bRc, then aRc. A binary relation R is an equivalence relation if it is reflexive, symmetric, and transitive. For example, the set of pairs (a,b) of integers for which both a and b have the same remainder on division by 3 is an equivalence relation. (See Problem 1.3.)

If R is an equivalence relation and aRb, then a and b are said to be equivalent elements. We let E[a] be the set of elements in A that are equivalent to a under the relation R and call it the equivalence class of elements equivalent to a. It is not difficult to show that for all a,b ∈ A, E[a] and E[b] are either equal or disjoint. (See Problem 1.4.) Thus, the equivalence classes of an equivalence relation over a set A partition the elements of A into disjoint sets. For example, the partition {0*, 0(0*10*)+, 1(0 + 1)*} of the set (0 + 1)* of binary strings defines an equivalence relation R.
The equivalence classes consist of strings containing zero or more 0's, strings starting with 0 and containing at least one 1, and strings beginning with 1. It follows that 00R000 and 1001R11 hold but not 10R01.

1.2.5 Graphs

A directed graph G = (V, E) consists of a finite set V of distinct vertices and a finite set of pairs of distinct vertices E ⊆ V × V called edges. Edge e is incident on vertex v if e contains v. A graph is said to be undirected if for each edge (v_1, v_2) in E the edge (v_2, v_1) is also in E. Figure 1.2 shows two examples of directed graphs, some of whose vertices are labeled with symbols denoting gates, a topic discussed in Section 1.2.7. In such graphs the edge (v_1, v_2) is directed from the vertex v_1 to the vertex v_2, shown with an arrow from v_1 to v_2. The in-degree of a vertex in a directed graph is the number of edges directed into it; its out-degree is the number of edges directed away from it; its degree is the sum of its in- and out-degree. In a directed graph an input vertex has in-degree zero, whereas an output vertex either has out-degree zero or is simply any vertex specially designated as an output vertex.

A walk in a graph (directed or undirected) is a tuple of vertices (v_1, v_2, ..., v_p) with the property that (v_i, v_{i+1}) is in E for 1 ≤ i ≤ p − 1. A walk (v_1, v_2, ..., v_p) is closed if v_1 = v_p. A path is a walk with distinct vertices. A cycle is a closed walk with p − 1 distinct vertices, p ≥ 3. The length of a path is the number of edges on the path. Thus, the path (v_1, v_2, ..., v_p) has length p − 1. A directed acyclic graph (DAG) is a directed graph that has no cycles.

[Figure 1.2 Two directed acyclic graphs representing logic circuits.]

Logic circuits are DAGs in which all vertices except input vertices carry the labels of gates.
Input vertices carry the labels of Boolean variables, variables assuming values over the set B = {0,1}. The graph in Fig. 1.2(a) is the logic circuit of Fig. 1.3(c), whereas the graph of Fig. 1.2(b) is the logic circuit of Fig. 1.4. (The figures are shown in Section 1.4.1, Logic Circuits.) The set of labels of logic gates used in a DAG is called the basis Ω for the DAG. The size of a circuit is the number of non-input vertices that it contains. Its depth is the length of the longest directed path from an input vertex to an output vertex.

1.2.6 Matrices

An m × n matrix is an array of elements containing m rows and n columns. (See Chapter 6.) The adjacency matrix of a graph G with n vertices is an n × n matrix whose entries are 0 or 1. The entry in the ith row and jth column is 1 if there is an edge from vertex i to vertex j and 0 otherwise. The adjacency matrix A for the graph in Fig. 1.2(a) is
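An adjacency matrix is easy to build from an edge list. The sketch below uses a small made-up four-vertex DAG, not the graph of Fig. 1.2(a):

```python
def adjacency_matrix(n, edges):
    """n-by-n 0/1 adjacency matrix for a directed graph on vertices 1..n."""
    matrix = [[0] * n for _ in range(n)]
    for (i, j) in edges:
        matrix[i - 1][j - 1] = 1     # edge directed from vertex i to vertex j
    return matrix

# A hypothetical 4-vertex DAG with edges 1->3, 2->3, 3->4.
m = adjacency_matrix(4, [(1, 3), (2, 3), (3, 4)])
assert m == [[0, 0, 1, 0],
             [0, 0, 1, 0],
             [0, 0, 0, 1],
             [0, 0, 0, 0]]
```

Because the graph is acyclic, every 1 sits off the diagonal, and vertices 1 and 2 (all-zero columns) are the input vertices while vertex 4 (all-zero row) is an output vertex.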
