This 26-page set of class notes (Algorithm Design) was uploaded by Jonas Bartell on Monday, October 26, 2015. The notes are for CS1510 at the University of Pittsburgh, taught by Kirk Pruhs in Fall. Since its upload, it has received 46 views. For similar materials see /class/229385/cs1510-university-of-pittsburgh in Computer Science at University of Pittsburgh.
1 Dynamic Programming, Take 1

We examine problems that can be solved by dynamic programming, perhaps the single most useful algorithmic design technique. Dynamic programming can be explained many ways. Rather than explain what a dynamic programming algorithm is, we explain how one might develop one:

1. Find a recursive algorithm for the problem. It often helps to first find a recursive algorithm to count the number of feasible solutions.
2. Spot redundancy in the calculation.
3. Eliminate the redundancy. This can often be best accomplished by performing the calculations bottom up instead of top down.

2 Fibonacci Numbers and Binomial Coefficients

See section 1.2.2 and section 3.1 from the text.

3 Longest Common Subsequence Problem

The input to this problem is two sequences A = a1, ..., am and B = b1, ..., bn. The problem is to find the longest sequence that is a subsequence of both A and B. For example, if A = ccdedcdec and B = cdedcdec, then cddec is a subsequence of length 5 of both sequences.

Let T(i, j) be the length of the longest common subsequence of a1, ..., ai and b1, ..., bj. One can see that if ai = bj, then T(i, j) = T(i-1, j-1) + 1. Otherwise, one can see that T(i, j) = max(T(i, j-1), T(i-1, j)). Note that there are only nm possible subproblems, since there are only m choices for i and n choices for j. Hence, by treating the T(i, j)'s as array entries instead of recursive calls, and updating the table in the appropriate way, we can get the following O(nm) time algorithm:

    For i = 0 to m do T[i][0] = 0
    For j = 0 to n do T[0][j] = 0
    For i = 1 to m do
       For j = 1 to n do
          if a[i] = b[j] then T[i][j] = T[i-1][j-1] + 1
          else T[i][j] = max( T[i][j-1], T[i-1][j] )
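The LCS table above can be sketched directly in Python; this is a minimal version under the same recurrence (the function name is mine, not from the notes):

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of sequences a and b."""
    m, n = len(a), len(b)
    # T[i][j] = LCS length of a[:i] and b[:j]; row 0 and column 0 are base cases.
    T = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                T[i][j] = T[i - 1][j - 1] + 1
            else:
                T[i][j] = max(T[i][j - 1], T[i - 1][j])
    return T[m][n]
```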
4 Computing the Number of Binary Search Trees, and Optimal Binary Search Trees

See section 3.5 from the text.

5 Chained Matrix Multiplication

See section 3.4 from the text.

6 Maximum Weight Independent Set in a Tree

The input to this problem is a tree T with weights on the vertices. The goal is to find the independent set in T with maximum aggregate weight. An independent set is a collection of mutually nonadjacent vertices. Root the tree at an arbitrary node r and process the tree in postorder. We generalize the induction hypothesis. Consider an arbitrary node v with branches to k descendants w1, w2, ..., wk. We can create an independent set for the subtree rooted at v in essentially two ways, depending on whether or not we include v in the independent set. If we decide not to include v, we can combine any independent sets for the subtrees rooted at w1, w2, ..., wk to create an independent set for the subtree rooted at v, since there are no edges between the subtrees. On the other hand, if we do include v in the independent set, we can only use independent sets for the subtrees that do not include their respective roots w1, w2, ..., wk; otherwise we would have both v and some wi in the set, and it would not be an independent set anymore. Therefore, for each node v the algorithm computes the following information:

1. big(v), the maximum weight of an independent set for the subtree rooted at v, and
2. bignotroot(v), the maximum weight of an independent set for the subtree rooted at v that does not include v.

At node v, the algorithm first recursively computes big(wi) and bignotroot(wi) for each descendant subtree wi. It then computes bignotroot(v) and big(v) using the following recurrence relations, which correspond to the two cases identified above:

    bignotroot(v) = big(w1) + big(w2) + ... + big(wk)
    big(v) = max( bignotroot(v), weight(v) + bignotroot(w1) + ... + bignotroot(wk) )

If v is a leaf, then bignotroot(v) = 0 and big(v) = weight(v).
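A minimal Python sketch of this postorder computation, assuming the tree is given as a child-list dictionary (the names are mine, not from the notes):

```python
def max_weight_independent_set(tree, weight, root):
    """tree: dict mapping a node to its list of children; weight: dict of
    node weights. Returns big(root), the maximum weight of an independent
    set in the tree rooted at root."""
    def solve(v):
        # Returns (big(v), bignotroot(v)) for the subtree rooted at v.
        big_not_root = 0          # v excluded: children may be anything
        with_v = weight[v]        # v included: children must be excluded
        for w in tree.get(v, []):
            big_w, big_not_root_w = solve(w)
            big_not_root += big_w
            with_v += big_not_root_w
        return max(big_not_root, with_v), big_not_root
    return solve(root)[0]
```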
7 Longest Increasing Subsequence

The longest increasing subsequence (LIS) problem is defined as follows.
INPUT: A sequence X = x1, ..., xn of integers.
OUTPUT: The longest increasing subsequence of X. A sequence is increasing if each number in the sequence is larger than the previous number.

Let us try to develop a recursive algorithm for this problem. Let LIS(k) be the longest increasing subsequence of the first k integers in X. Consider the following first stab at an induction hypothesis.

Induction Hypothesis 1: We know how to compute LIS(k).

Assume that we have computed LIS(k-1), and we now want to compute LIS(k). The sequence 1, 4, 2, 3 shows that we can't just know any longest sequence: before we consider 3, we need to know the LIS 1, 2, not the LIS 1, 4. The obvious solution is to remember the LIS that ends in the smallest number. Hence we change the definition of LIS as follows: let LIS(k) be the longest increasing subsequence of the first k integers in X that ends in the smallest number.

Induction Hypothesis 2: We know how to compute LIS(k).

The sequence 12, 13, 14, 1, 2, 4 shows that knowing the LIS that ends in the smallest number is not sufficient information. Notice that in this example the length of the LIS doesn't change, but a new one is formed with a smaller last number. The most obvious way to fix this problem is to also remember the best (that is, the one that ends in the smallest number) sequence of length one less than the LIS. Let LIS2(k) be the increasing subsequence of the first k integers in X that has length one less than the length of LIS(k), and that ends in the smallest number.

Induction Hypothesis 3: We know how to compute LIS(k) and LIS2(k).

The sequence 12, 13, 14, 15, 1, 2, 3 shows that there is still not sufficient inductive information to compute LIS2(k): we would also need to know the increasing subsequence of the first k-1 integers in X of length two shorter than LIS(k-1) that ends in the smallest number. One can see that eventually we have to inductively know, for each length, the sequence of that length that ends in the smallest number. Hence we define LIS(k, s) as the smallest last number of any increasing subsequence of length s from among the first k numbers of X. This leads us to the following algorithm:

    For k = 0 to n do
       for s = 1 to n do LIS[k][s] = +infinity
    For k = 0 to n do LIS[k][0] = -infinity
    For k = 1 to n do
       for s = 1 to n do
          if LIS[k-1][s-1] < X[k] < LIS[k-1][s] then LIS[k][s] = X[k]
          else LIS[k][s] = LIS[k-1][s]
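The LIS(k, s) table above can be sketched in Python with a single rolling array, since row k depends only on row k-1 (the function name is mine):

```python
def lis_length(xs):
    """Length of the longest strictly increasing subsequence of xs.
    M[s] = smallest possible last number of an increasing subsequence of
    length s among the numbers seen so far (the strengthened hypothesis)."""
    INF = float('inf')
    n = len(xs)
    M = [INF] * (n + 1)
    M[0] = -INF
    for x in xs:
        for s in range(1, n + 1):
            # x extends a length s-1 subsequence and improves length s.
            if M[s - 1] < x < M[s]:
                M[s] = x
    # The answer is the largest length whose slot was ever filled.
    return max(s for s in range(n + 1) if M[s] < INF)
```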
This is an example of a case where, when trying to compute something recursively/inductively, it is sometimes easier to compute more information than you need. This is sometimes called strengthening the inductive hypothesis. This is not paradoxical, because the strengthening of the inductive hypothesis also gives you more information to work with when the recursion returns from the smaller subproblem. The most common mistake made here is to use the additional information returned from the recursion to solve the original problem, not the more generalized problem. One point that I want you to draw from this example is that it is not always easy to determine how one should strengthen the inductive hypothesis.

8 Dynamic Programming, Take 2

In this section we discuss another method for developing dynamic programming algorithms that avoids having to develop a recursive algorithm, which is almost always the most difficult part of the previously outlined method. This method is particularly applicable when the feasible solutions are subsets or subsequences. The steps to developing a dynamic programming algorithm using this method are as follows:

1. Determine how to generate all possible feasible solutions. Once again, it might be easier to first determine how to count the number of feasible solutions; enumeration usually follows easily from counting.
2. Develop a pruning rule to eliminate partial feasible solutions that either are redundant, or that cannot be extended to an optimal solution.
3. Transform the algorithm into an iterative, table-based algorithm.

9 Subset Sum Example

The subset sum problem is defined as follows.
INPUT: A collection x1, ..., xn of positive integers, and a positive integer L.
OUTPUT: A subset of the xi's that sums to L, or a statement that no such subset exists.

We can generate the subsets inductively using a binary tree as follows. The nodes at depth i are all subsets of x1, ..., xi. The children of a node S at depth i-1 are S and S ∪ {xi}. Thus all 2^n subsets can be found at the leaves. The depth of this tree is n+1.
The two pruning rules are:

1. Eliminate the subtree rooted at any subset with sum greater than L.
2. If there are two subsets S and T at the same depth with the same aggregate sum, then one can arbitrarily select one of the two nodes and eliminate the subtree rooted at that node.

Note that these two pruning rules mean that there are at most L+1 nodes left unpruned at any level. Hence, this gives us an algorithm whose running time is polynomial in n and L. We now wish to derive an iterative algorithm. Let Sum(k, S) be true if there is a subset of the first k numbers that sums to S. We compute Sum(k, S) as follows (taking the second term to be False when S < x_k):

    Sum[0][0] = True
    For S = 1 to L do Sum[0][S] = False
    For k = 1 to n do
       For S = 0 to L do
          Sum[k][S] = Sum[k-1][S] or Sum[k-1][S - x_k]

10 Knapsack Example

The knapsack problem can be defined as follows.
INPUT: A collection of objects o1, ..., on with positive integer weights w1, ..., wn and positive integer values v1, ..., vn, and a positive integer L.
OUTPUT: The highest-valued subset of the objects with weight at most L.

We can generate the subsets inductively using a binary tree as in the subset sum problem. The two pruning rules are:

1. Eliminate the subtree rooted at any subset with weight greater than L.
2. If there are two subsets S and T at the same depth with the same total weight, then eliminate the one of lesser value.

Note that these two pruning rules mean that there are at most L+1 nodes left unpruned at any level. Let Value(k, S) be the highest value one can obtain from a subset of the first k objects with aggregate weight S. We compute Value(k, S) as follows:

    Value[0][0] = 0
    For S = 1 to L do Value[0][S] = -infinity
    For k = 1 to n do
       For S = 0 to L do
          Value[k][S] = max( Value[k-1][S], Value[k-1][S - w_k] + v_k )
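The Value table above can be sketched in Python with a rolling one-dimensional array (the function name is mine, not from the notes):

```python
def knapsack(weights, values, L):
    """Highest value of a subset of objects with total weight at most L.
    value[S] = best value among subsets of the objects seen so far whose
    weight is exactly S; -inf marks weights no subset achieves."""
    NEG = float('-inf')
    value = [0] + [NEG] * L
    for w, v in zip(weights, values):
        # Either skip the object, or add it to a subset of weight S - w.
        value = [max(value[S], value[S - w] + v) if S >= w else value[S]
                 for S in range(L + 1)]
    return max(value)
```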
11 Longest Increasing Subsequence Problem, Take 2

We now apply this method to the longest increasing subsequence problem. We can generate feasible solutions (subsequences) as in the subset sum problem. One obvious pruning rule is:

1. If you have two subsequences S and T at the same depth that have the same length, prune the one that ends in the larger number.

This pruning rule will give you at most n unpruned nodes at any level. If you turn this into iterative code, you will get the same code as we got before. Another possible pruning rule would be:

2. If two subsequences S and T at the same depth have the same last number, prune the shorter one.

This pruning rule will also give you at most n unpruned nodes at any depth, and an O(n^2) time algorithm. Note how much easier it was to develop an algorithm for the LIS problem this way, as opposed to trying recursion and generalizing the induction hypothesis.

12 The Single Source Shortest Path Problem

The input to this problem is a directed, positively edge-weighted graph with a designated vertex s. The problem is to find the shortest simple path from s to each other vertex. For more information see section 4.2 of the text. Here feasible solutions are simple paths that start from the source vertex s. The nodes in level k of the tree represent all paths from s of k or fewer hops. Note that by simplicity we need only consider the tree to depth n, the number of vertices. The pruning rule is that we need only remember the shortest path to a particular vertex. We thus get the following code, where D(k, i) is the length of the shortest path from s to i of k or fewer hops:

    For k = 1 to n do
       For i = 1 to n do
          D[k][i] = D[k-1][i]
          for each edge e = (i, j) do
             D[k][j] = min( D[k][j], D[k-1][i] + the length of e )

This is the Bellman-Ford shortest path algorithm, and it has running time O(VE).
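The Bellman-Ford table above collapses to one distance array per round, since round k reads only round k-1; a minimal sketch (names and the edge-list format are my own choices):

```python
def bellman_ford(n, edges, s):
    """Shortest-path distances from s. D[i] after round k equals the
    shortest path from s to i using at most k hops.
    edges: list of (i, j, length) for a directed edge i -> j; vertices 0..n-1."""
    INF = float('inf')
    D = [INF] * n
    D[s] = 0
    for _ in range(n - 1):          # a simple path has at most n-1 hops
        new = D[:]                  # round k is computed from round k-1
        for i, j, w in edges:
            if D[i] + w < new[j]:
                new[j] = D[i] + w
        D = new
    return D
```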
13 The Shortest Path Problem with Negative Edge Weights

The input to this problem is a directed, edge-weighted graph with a designated vertex s. The edge weights may be positive or negative. The problem is to find the shortest simple path from s to each other vertex. For more information see section 4.2 of the text. Here feasible solutions are simple paths that start from the source vertex s. The nodes in level k of the tree represent all paths from s of k or fewer hops. Note that by simplicity we need only consider the tree to depth n, the number of vertices. The pruning rule is that if two paths end at the same vertex and contain the same vertices, then we may prune the longer one. Make sure you understand why the pruning rule that we used for positive weights does not work here. We thus get the following code, where D(k, S, i) is the length of the shortest path from s to i of k or fewer hops that visits exactly the vertices in S:

    For k = 1 to n do
       For i = 1 to n do
          For each subset S of the vertices do
             D[k][S][i] = D[k-1][S][i]
             for each edge e = (i, j) do
                if j is not in S then
                   D[k][S ∪ {j}][j] = min( D[k][S ∪ {j}][j], D[k-1][S][i] + the length of e )

The running time of this code is O(VE 2^V), where V is the number of vertices and E is the number of edges.

14 The Traveling Salesman Problem

See section 3.6 from the text. The input to this problem is a directed, edge-weighted graph with a designated vertex s. The edge weights may be positive or negative. The problem is to find the shortest simple path from s that visits all of the vertices. Here feasible solutions are simple paths that start from the source vertex s. The nodes in level k of the tree represent all paths from s of exactly k hops. Note that by simplicity we need only consider the tree to depth n, the number of vertices. The pruning rule is that if two paths end at the same vertex and contain the same vertices, then we may prune the longer one. We thus get the following code, where D(k, S, i) is the length of the shortest path from s to i of k or fewer hops that visits exactly the vertices in S:

    For k = 1 to n do
       For i = 1 to n do
          For each subset S of the vertices do
             for each edge e = (i, j) do
                if j is not in S then
                   D[k][S ∪ {j}][j] = min( D[k][S ∪ {j}][j], D[k-1][S][i] + the length of e )

The running time of this code is O(VE 2^V), where V is the number of vertices and E is the number of edges.
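The D(k, S, i) table above can be sketched compactly: once the visited set S is fixed, the hop count is just |S| - 1, so the index k is redundant and the table collapses to D(S, i). This is essentially the Held-Karp algorithm; the sketch below (names mine) assumes a complete distance matrix and at least two vertices:

```python
from itertools import combinations

def held_karp_path(dist, s=0):
    """Shortest simple path from s that visits every vertex.
    D[(S, i)] = length of the shortest path from s to i visiting exactly
    the vertex set S. dist is an n x n distance matrix."""
    n = len(dist)
    D = {(frozenset([s]), s): 0}
    for size in range(2, n + 1):
        for subset in combinations(range(n), size):
            if s not in subset:
                continue
            S = frozenset(subset)
            for j in subset:
                if j == s:
                    continue
                # Extend a path over S - {j} ending at some i by edge (i, j).
                D[(S, j)] = min(D.get((S - {j}, i), float('inf')) + dist[i][j]
                                for i in S - {j})
    full = frozenset(range(n))
    return min(D[(full, j)] for j in range(n) if j != s)
```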
1 Basic Definitions

An optimization problem has the following form: output a best solution S satisfying some property P. "Best" usually means least cost (a minimization problem) or most benefit (a maximization problem). A best solution is called an optimal solution. Note that for many problems there may be many different optimal solutions. A feasible solution is a solution that satisfies the property P. Most of the problems that we consider can be viewed as optimization problems.

An algorithm A for a minimization problem P has performance ratio c if, for every input I, the output of A on input I has cost at most c times the cost of the least-cost feasible solution to I. An algorithm A for a maximization problem P has performance ratio c if, for every input I, the benefit of the most beneficial feasible solution to I is at most c times the benefit of the output of A on input I.

2 MST Doubling Algorithm for TSP with Triangle Inequality

See section 9.5.1.

3 Christofides' Algorithm for TSP with Triangle Inequality

See section 9.5.1.

4 Without the Triangle Inequality There Is No Algorithm with Constant Performance Ratio

Theorem: For all c, if there is a polynomial time algorithm for TSP with a constant performance ratio c, then every NP-complete problem has a polynomial time algorithm.

Proof: We use a many-to-one reduction from the Hamiltonian cycle problem, i.e., the problem of deciding whether a graph contains a simple spanning cycle. Let H be a weighted complete graph constructed from an unweighted graph G with n vertices in the following manner:

- each edge in G is given weight 1, and
- an edge (x, y) with weight cn is added between each pair of nonadjacent vertices in G.

Note that if G has a Hamiltonian cycle then H has a tour of weight n, and if G does not have a Hamiltonian cycle then every tour in H has weight greater than cn. Hence, if the approximation algorithm for TSP finds a tour with length at most cn, then we may be sure that G has a Hamiltonian cycle; and if the approximation algorithm for TSP finds a tour with length more than cn, then we may be sure that G does not have a Hamiltonian cycle.

5 Analysis of First Fit for Bin Packing

6 Analysis of First Fit Decreasing for Bin Packing

See section 9.5.2.
1 Definition of Reduction

Problem A is reducible (or, more technically, Turing reducible) to problem B, denoted A ≤ B, if there is a main program M to solve problem A that lacks only a procedure to solve problem B. The program M is called the reduction from A to B. Note that we use the book's notation ∝_T and ≤ interchangeably. A is polynomial time reducible to B if M runs in polynomial time. A is linear time reducible to B if M runs in linear time and makes at most a constant number of calls to the procedure for B. Assume that the reduction runs in time A(n), exclusive of R(n) calls to B on inputs of size n. Then if one plugs in an algorithm for B that runs in time B(n), one gets an algorithm for A that runs in time A(n) + R(n) B(n).

A decision problem is a problem where the output is 0/1. Let A and B be decision problems. We say A is many-to-one reducible to B if there is a reduction M from A to B of the following special form:

- M contains exactly one call to the procedure for B, and this call is the last step in M, and
- M returns what the procedure for B returns.

Equivalently, A is many-to-one reducible to B if there exists a computable function f from instances of A to instances of B such that, for all instances I of A, I has output 1 in problem A if and only if f(I) has output 1 in problem B.

Reductions can be used both to find efficient algorithms for problems, and to provide evidence that finding particularly efficient algorithms for some problems will likely be difficult. We will mainly be concerned with the latter use.

2 Using Reductions to Develop Algorithms

Assume that A is some new problem that you would like to develop an algorithm for, and that B is some problem that you already know an algorithm for. Then showing A ≤ B will give you an algorithm for problem A.

Let A be the problem of finding the convex hull of a collection of points (x1, y1), ..., (xn, yn) in the plane. The convex hull is the corners of the minimum-perimeter polygon enclosing the points, in clockwise order starting from an arbitrary corner. Let B be the problem of sorting. We now give Graham's Scan algorithm, which can be viewed as a reduction from convex hull to sorting.
Algorithm Graham Scan:

- Read (x1, y1), ..., (xn, yn). In linear time, find the points with maximum and minimum x-coordinates, and the points with minimum and maximum y-coordinates; these points are on the convex hull. Then average each coordinate to find a point O in the interior of the convex hull. (O is probably not one of the input points.)
- Sort the points by the angle between the ray from O through the point and the ray starting from O and passing through a known point on the convex hull.
- Push the first three points onto a stack.
- For each of the remaining points do:
  - While the new point and the top two points on the stack form a counterclockwise turn, pop the stack.
  - Push the new point onto the stack.
- The convex hull is the stack.

The running time for this code is O(n) plus the time for sorting.

3 Using Reductions to Show that a Problem is Hard

Absolutely, positively, you must read the first two paragraphs of chapter 9. This explains the general idea.

4 Matrix Squaring

You want to find an O(n^2) time algorithm to square an n by n matrix. The obvious algorithm runs in time O(n^3). You know that lots of smart computer scientists have tried to find an O(n^2) time algorithm to multiply two matrices, but have not been successful. But it is at least conceivable that squaring a matrix (multiplying identical matrices) might be an easier problem than multiplying two arbitrary matrices. To show that in fact matrix squaring is not easier than matrix multiplication, we linear time reduce matrix multiplication to matrix squaring.

Theorem: If there is an O(n^2) time algorithm to square an n by n matrix, then there is an O(n^2) time algorithm to multiply two arbitrary n by n matrices.

Proof: We show that matrix multiplication is linear time reducible to matrix squaring. We exhibit the following reduction program for matrix multiplication:

- Read X and Y, the two matrices we wish to multiply.
- Let Z be the 2n by 2n block matrix [[0, X], [Y, 0]].
- Call the squaring procedure to compute Z^2 = [[XY, 0], [0, YX]].
- Read XY off from the top-left quarter of Z^2.

If you plug in an O(B(n)) time algorithm for squaring, the running time of the resulting algorithm for multiplication is O(n^2) + B(2n). Thus an O(n^2) time algorithm for
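The block-matrix trick in this reduction can be sketched directly; here a cubic-time squaring routine stands in for the hypothetical fast one (all names are mine):

```python
def multiply_via_squaring(X, Y, square):
    """Multiply n x n matrices X and Y using only a squaring procedure:
    [[0, X], [Y, 0]]^2 = [[XY, 0], [0, YX]], so XY is the top-left
    quarter of the square of the 2n x 2n block matrix."""
    n = len(X)
    Z = [[0] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        for j in range(n):
            Z[i][n + j] = X[i][j]      # top-right block = X
            Z[n + i][j] = Y[i][j]      # bottom-left block = Y
    Z2 = square(Z)
    return [[Z2[i][j] for j in range(n)] for i in range(n)]

def naive_square(A):
    # Stand-in for the assumed fast squaring routine.
    n = len(A)
    return [[sum(A[i][k] * A[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
```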
squaring yields an O(n^2) time algorithm for matrix multiplication. End Proof.

So the final conclusion is: since lots of smart computer scientists have tried to find an O(n^2) time algorithm to multiply two matrices but have not been successful, you probably shouldn't waste a lot of time looking for an O(n^2) time algorithm to square a matrix.

5 Convex Hull

The convex hull problem is defined below.
INPUT: n points p1, ..., pn in the Euclidean plane.
OUTPUT: The smallest (either area or perimeter, it doesn't matter) convex polygon that contains all n points. The polygon is specified by a linear ordering of its vertices.

You would like to find an efficient algorithm for the convex hull problem. You know that lots of smart computer scientists have tried to find a linear time algorithm for sorting but have not been successful. We want to use that fact to show that it will be hard to find a linear time algorithm for the convex hull problem.

Theorem: If there is an O(n) time algorithm for convex hull, then there is an O(n) time algorithm for sorting.

Proof: We need to show that sorting ≤ convex hull via a linear time reduction. Consider the following reduction algorithm for sorting:

- Read x1, ..., xn.
- Let pi = (xi, xi^2), and let pn+1 = (0, 0).
- Call the procedure to compute the convex hull C of the points p1, ..., pn+1.
- In linear time, read the sorted order of the first coordinates off of C by traversing C counterclockwise.

If you plug in an O(B(n)) time algorithm for the convex hull problem, the running time of the resulting algorithm for sorting is O(n) + B(n). Thus an O(n) time algorithm for the convex hull problem yields an O(n) time algorithm for sorting. End Proof.

So the final conclusion is: since lots of smart computer scientists have tried to find an O(n) time algorithm for sorting but have not been successful, you probably shouldn't waste a lot of time looking for an O(n) time algorithm to solve the convex hull problem.
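The parabola trick in this reduction can be sketched concretely. As the stand-in hull procedure I use gift wrapping (which does not sort internally, so the demonstration is not circular); all names are mine. Every lifted point lies on the strictly convex parabola y = x^2, so every point is a hull vertex, and a counterclockwise traversal from the leftmost vertex reads the numbers off in sorted order:

```python
def cross(o, a, b):
    # Cross product of vectors o->a and o->b; negative means b is clockwise of a.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def gift_wrap_hull(points):
    """Gift-wrapping convex hull, counterclockwise from the leftmost point.
    Stands in for the assumed O(n) hull procedure."""
    start = min(points)
    hull, p = [], start
    while True:
        hull.append(p)
        q = points[0] if points[0] != p else points[1]
        for r in points:
            if r != p and cross(p, q, r) < 0:
                q = r              # r wraps tighter than the current candidate
        p = q
        if p == start:
            return hull

def sort_via_convex_hull(xs):
    """Sort distinct numbers by lifting them onto y = x^2 and reading
    the hull vertices back off in traversal order."""
    pts = [(x, x * x) for x in xs]
    return [p[0] for p in gift_wrap_hull(pts)]
```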
6 NP-complete/NP-equivalent Problems

There is a class of problems called NP-complete problems. For our purposes we use NP-complete and NP-equivalent interchangeably, although there is a technical difference that is not really relevant to us. The following facts about these NP-complete problems are relevant:

1. If any NP-complete problem has a polynomial time algorithm, then they all do. Another way to think of this is that all pairs of NP-complete problems are reducible to each other via reductions that run in polynomial time.
2. There are thousands of known natural NP-complete problems, from every possible area of application, that people would like to find polynomial time algorithms for.
3. No one knows of a polynomial time algorithm for any NP-complete problem.
4. No one knows of a proof that no NP-complete problem can have a polynomial time algorithm.
5. Almost all computer scientists believe that NP-complete problems do not have polynomial time algorithms.

A problem L is NP-hard if a polynomial time algorithm for L yields a polynomial time algorithm for any/all NP-complete problems, or equivalently, if there is an NP-complete problem C that polynomial time reduces to L. A problem L is NP-easy if a polynomial time algorithm for any NP-complete problem yields a polynomial time algorithm for L, or equivalently, if there is an NP-complete problem C such that L is polynomial time reducible to C. A problem is NP-equivalent (or, for us, NP-complete) if it is both NP-hard and NP-easy.

Fact: The decision version of the CNF-SAT problem (determining if a satisfying assignment exists) is NP-complete.

7 Self-Reducibility

An optimization problem O is polynomial time self-reducible to a decision problem D if there is a polynomial time reduction from O to D. Almost all natural optimization problems are self-reducible. Hence, there is no great loss of generality in considering only decision problems. As an example, we now show that the CNF-SAT problem is self-reducible.

Theorem: If there is a polynomial time algorithm for the decision version of CNF-SAT, then there is a polynomial time algorithm to find a satisfying assignment.
Proof: Consider the following polynomial time reduction from the problem of finding a satisfying assignment to the decision version of the CNF-SAT problem. We consider a formula F. One call to the procedure for the decision version of CNF-SAT will tell us whether F is satisfiable or not. Assuming that F is satisfiable, the following procedure will produce a satisfying assignment. Pick an arbitrary variable x that appears in F. Create a formula F' by removing the clauses in F that contain the literal x, and removing all occurrences of the literal ¬x. Call the procedure to see if F' is satisfiable. If F' is satisfiable, make x true, and recurse on F'. If F' is not satisfiable, then create a formula F'' by removing the clauses in F that contain the literal ¬x and removing all occurrences of the literal x; make x false, and recurse on F''. End Proof.
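The self-reduction in this proof can be sketched as follows; a brute-force decision procedure stands in for the assumed polynomial time oracle, only to make the sketch runnable (all names and the clause encoding are mine):

```python
from itertools import product

def brute_force_sat(clauses):
    """Stand-in decision oracle (exponential time). A clause is a list of
    nonzero ints: v means variable v, -v means its negation."""
    vars_ = sorted({abs(l) for c in clauses for l in c})
    for bits in product([False, True], repeat=len(vars_)):
        val = dict(zip(vars_, bits))
        if all(any(val[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def find_assignment(clauses, sat_oracle):
    """Recover a satisfying assignment from a decision oracle by fixing
    one variable at a time, exactly as in the self-reduction."""
    if not sat_oracle(clauses):
        return None
    assignment, remaining = {}, clauses
    while remaining:
        x = abs(remaining[0][0])      # an arbitrary variable appearing in F
        # F': set x true -> drop clauses containing x, delete literal -x.
        f_true = [[l for l in c if l != -x] for c in remaining if x not in c]
        if sat_oracle(f_true):
            assignment[x] = True
            remaining = f_true
        else:
            assignment[x] = False
            remaining = [[l for l in c if l != x] for c in remaining if -x not in c]
    return assignment
```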
8 SAT is NP-hard

The SAT problem is defined as follows.
INPUT: A Boolean formula F, not necessarily in CNF form.
OUTPUT: 1 if F is satisfiable, 0 otherwise.

Theorem: SAT is NP-hard.

Proof: We exhibit the following polynomial time many-to-one reduction from CNF-SAT to SAT. Main program for CNF-SAT:

- Read F.
- Call the procedure for SAT on input F, and give the same answer that this procedure does.

End of Proof.

This type of reduction, where the reduction function f is the identity function f(x) = x, is called a restriction. So the rough idea is that any problem more general than an NP-complete problem is also NP-complete.

9 3SAT is NP-hard

The 3SAT problem is defined as follows.
INPUT: A Boolean formula F in CNF with exactly 3 literals per clause.
OUTPUT: 1 if F is satisfiable, 0 otherwise.

Theorem: 3SAT is NP-hard.

Proof: We exhibit the following polynomial time many-to-one reduction from CNF-SAT to 3SAT. Main program for CNF-SAT: read the formula F, and create a new formula G by replacing each clause C in F with a collection g(C) of clauses. We get several cases, depending on the number of literals in C:

1. If C contains one literal x, then g(C) contains the clauses (x ∨ aC ∨ bC), (x ∨ aC ∨ ¬bC), (x ∨ ¬aC ∨ bC), and (x ∨ ¬aC ∨ ¬bC).
2. If C contains two literals x and y, then g(C) contains the clauses (x ∨ y ∨ aC) and (x ∨ y ∨ ¬aC).
3. If C contains 3 literals, then g(C) = C.
4. If C = l1 ∨ ... ∨ lk contains k ≥ 4 literals, then g(C) contains the clauses (l1 ∨ l2 ∨ aC,1), (l3 ∨ ¬aC,1 ∨ aC,2), (l4 ∨ ¬aC,2 ∨ aC,3), ..., (lk-2 ∨ ¬aC,k-4 ∨ aC,k-3), and (lk-1 ∨ lk ∨ ¬aC,k-3).

Note that in each case aC, bC, aC,1, ..., aC,k-3 are new variables that appear nowhere outside of g(C). Clearly G can be computed in polynomial time, and G has exactly 3 literals per clause. Furthermore, F is satisfiable if and only if G is satisfiable. End Proof.

This type of reduction, in which each part of the instance is replaced or modified independently, is called local replacement.

10 3SAT ≤ Vertex Cover

The Vertex Cover problem is defined as follows.
INPUT: A graph G and an integer k.
OUTPUT: 1 if G has a vertex cover of size k or less. A vertex cover is a collection S of vertices such that every edge in G is incident to at least one vertex in S.

Theorem: Vertex Cover is NP-hard.

Proof: We show 3SAT is polynomial time many-to-one reducible to Vertex Cover. Given the formula F in 3SAT form, we create a G and a k as follows. For each variable x in F we create two vertices x and ¬x and connect them with an edge. Then for each clause C we create three vertices C1, C2, and C3, connected by three edges into a triangle. We then put an edge between Ci, 1 ≤ i ≤ 3, and the vertex corresponding to the i-th literal in C. Let k = the number of variables in F plus twice the number of clauses in F. We now claim that G has a vertex cover of size k if and only if F is satisfiable. To see this, note that a triangle corresponding to a clause requires at least 2 vertices to cover it, and an edge between a variable and its negation requires at least 1 vertex.

11 3SAT ≤ Subset Sum

Theorem: Subset Sum is NP-hard.

Proof: We show that 3SAT is polynomial time many-to-one reducible to Subset Sum. We do this by example. Consider the formula (x ∨ y ∨ ¬z) ∧ (¬x ∨ y ∨ z) ∧ (x ∨ ¬y ∨ z). We create one number for each literal and two numbers for each clause, and set the target L, as shown in the following base-10 table.
    Number           x  y  z  C1 C2 C3
    x                1  0  0  1  0  1
    not x            1  0  0  0  1  0
    y                0  1  0  1  1  0
    not y            0  1  0  0  0  1
    z                0  0  1  0  1  1
    not z            0  0  1  1  0  0
    C1 slack (two)   0  0  0  1  0  0
    C2 slack (two)   0  0  0  0  1  0
    C3 slack (two)   0  0  0  0  0  1
    target L         1  1  1  3  3  3

Each variable digit of L forces us to pick exactly one of x and not-x; each clause digit can reach 3 if and only if at least one chosen literal satisfies the clause (the literals contribute at most 3, and the two slack numbers contribute at most 2).

12 Why the Dynamic Programming Algorithm for Subset Sum is not a Polynomial Time Algorithm

Recall that the input size is the number of bits required to write down the input, and a polynomial time algorithm should run in time bounded by a polynomial in the input size. The input is n integers x1, ..., xn and an integer L. The input size is then about n + log L + the sum over i of log xi, which is at least n + log2 L. The running time of the algorithm is O(nL). Note that this time estimate assumes that all arithmetic operations can be performed in constant time, which may be a bit optimistic for some inputs. Nevertheless, if n is constant, then the input size is I ≈ log L, and the running time L = 2^I is exponential in the input size. A most concrete way to see this is to consider an instance with n = 3 and L = 2^1000. The input size is about 1002 bits, or roughly 125 bytes. But any algorithm that requires 2^1000 steps is not going to finish in your lifetime.
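The Sum table from the subset sum section makes the O(nL) pseudo-polynomial bound concrete; a rolling one-dimensional sketch (the function name is mine):

```python
def subset_sum(xs, L):
    """True iff some subset of xs sums to L. reachable[S] plays the role
    of Sum(k, S) for the numbers processed so far. The two nested loops
    give O(n * L) time: polynomial in n and the *value* L, but exponential
    in the number of bits of L."""
    reachable = [True] + [False] * L      # Sum(0, S): only S = 0 works
    for x in xs:
        reachable = [reachable[S] or (S >= x and reachable[S - x])
                     for S in range(L + 1)]
    return reachable[L]
```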
13 3SAT ≤ 3-Coloring

The 3-Coloring problem is defined as follows.
INPUT: A graph G.
OUTPUT: 1 if the vertices of G can be colored with 3 colors so that no pair of adjacent vertices is colored the same, and 0 otherwise.

Theorem: 3-Coloring is NP-hard.

Proof: We show that 3SAT is polynomial time many-to-one reducible to 3-Coloring. Once again we do this by example. Consider the formula F = (x ∨ y ∨ ¬z) ∧ (¬x ∨ y ∨ z) ∧ (x ∨ ¬y ∨ z). We create the graph G as follows. For each variable in F we create two vertices, one for the variable and one for its negation, and connect them. We then create a triangle with three new vertices True, False, and C (whichever color the False vertex receives, we call that color false), and connect C to each literal vertex. So for the example above, vertex C is connected to each of the literal vertices x, ¬x, y, ¬y, z, ¬z. We then create the gadget/subgraph shown in Figure 1 for each clause. We claim that this clause gadget can be 3-colored if and only if at least one of literal 1, literal 2, literal 3 is colored true. Hence, it follows that G is 3-colorable if and only if F is satisfiable. The final graph G for the above formula is shown in Figure 2.

[Figure 1: the clause gadget, with inputs literal 1, literal 2, and literal 3, and a connection to the True vertex.]

[Figure 2: the final graph.]

The Halting Problem

The Halting Problem is:
INPUT: A string P and a string I. We will think of P as a program.
OUTPUT: 1 if P halts on I, and 0 if P goes into an infinite loop on I.

Theorem (Turing, circa 1940): There is no program to solve the Halting Problem.

Proof: Assume, to reach a contradiction, that there exists a program Halt(P, I) that solves the halting problem; Halt(P, I) returns True if and only if P halts on I. Then, given this program for the Halting Problem, we could construct the following string/code Z:

    Program Z(String X)
       If Halt(X, X) then
          Loop Forever
       Else
          Halt
    End

Consider what happens when the program Z is run with input Z.

Case 1: Program Z halts on input Z. Hence, by the correctness of the Halt program, Halt returns true on input (Z, Z). Hence program Z loops forever on input Z. Contradiction.

Case 2: Program Z loops forever on input Z. Hence, by the correctness of the Halt program, Halt returns false on input (Z, Z). Hence program Z halts on input Z. Contradiction.

End Proof.

One can now show that there is no program for some new problem U by showing that Halting ≤ U, i.e., that the Halting Problem is reducible to U.