# Algorithm Analysis and Design CPSC 5115G

This 20-page set of class notes was uploaded by Earlene Cremin III on Sunday, October 11, 2015. The notes belong to CPSC 5115G at Columbus State University, taught by Edward Bosworth in Fall. Since its upload, it has received 17 views. For similar materials see /class/221205/cpsc-5115g-columbus-state-university in Computer Science at Columbus State University.


## Chapter 9: Greedy Algorithms

We now discuss another of the major strategies used to design algorithms: the greedy strategy. Greedy is best summarized as follows: make the choice that looks best at present, and never change your mind. Algorithms designed according to the greedy strategy are applied to problems in which the solution can be viewed as a sequence of decisions. In a greedy algorithm, once a decision has been made it is never changed. Other algorithms use backtracking to reconsider some decisions already made and explore alternate solutions.

There is a class of problems for which an algorithm designed according to the greedy strategy can be shown to produce an optimal answer. We shall investigate these problems and close the chapter with an example in which the greedy algorithm produces a good answer that is only occasionally optimal.

The change-making problem is one of the simplest for which a good greedy algorithm exists and can be shown to produce optimal results. The goal is to make change for a given amount using the minimum number of coins. For example, consider the problem of giving change for 93 cents using the standard U.S. coins:

1. One could give 93 pennies, for a total of 93 coins, or
2. One could give 3 quarters, 1 dime, 1 nickel, and 3 pennies, for a total of 8 coins.

The greedy algorithm for making change for amounts less than a dollar is simply stated. Let C be the amount to be returned in change, with 1 ≤ C ≤ 99. Then:

1. Give the maximum number of quarters whose total value does not exceed C. Subtract that amount from C.
2. Give the maximum number of dimes whose total value does not exceed the new value of C. Subtract that amount from C.
3. Give the maximum number of nickels whose total value does not exceed the new value of C. Subtract that amount from C.
4. We now have 0 ≤ C ≤ 4. Give that number of pennies.

It can be proven that this algorithm produces a solution with the fewest number of coins. This proof can be applied to a number of coin sets, including the U.S. coin set with or without the 50-cent piece.
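The four steps above generalize to any list of denominations by always taking as many of the largest remaining coin as will fit. A minimal Python sketch (the function name and the coin-tuple default are ours, not the textbook's):

```python
def greedy_change(amount, coins=(25, 10, 5, 1)):
    """Greedy change-making: for each denomination, largest first,
    give as many coins as fit in the remaining amount."""
    counts = {}
    for coin in sorted(coins, reverse=True):  # largest denomination first
        counts[coin], amount = divmod(amount, coin)
    return counts

# 93 cents: 3 quarters, 1 dime, 1 nickel, 3 pennies -- 8 coins in all
print(greedy_change(93))  # {25: 3, 10: 1, 5: 1, 1: 3}
```

For U.S. coins this greedy choice is optimal; as the next paragraphs show, it need not be optimal for other coin sets.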
This author cannot reproduce the proof, but believes that it has something to do with the following fact: let d1 = 50, d2 = 25, d3 = 10, d4 = 5, and d5 = 1, and note that d4 > d5, d3 > d4 + d5, d2 > d3 + d4 + d5, and d1 > d2 + d3 + d4.

For an example of a coin set in which greedy does not produce the optimal number of coins, consider making change for 7 with d1 = 10, d2 = 5, d3 = 4, d4 = 3, and d5 = 1. The greedy solution is to give one of d2 and two of d5 (5 + 1 + 1 = 7), a total of three coins. The optimal solution is to give one of d3 and one of d4 (4 + 3 = 7), a total of two coins.

(Chapter 9, version of 7/19/2010, page 1.)

We have two important terms: (1) a feasible solution is one that satisfies the problem's constraints, and (2) an optimal solution is a feasible solution that optimizes some function, often called the objective function. At each stage, an algorithm designed according to the greedy strategy considers only partial solutions that satisfy the feasibility constraints. Under the greedy criterion, the algorithm examines all of the partial solutions to the problem that satisfy the feasibility constraints and selects the one that is locally optimal.

### Prim's Algorithm

The first algorithm designed according to the greedy strategy that we consider is Prim's algorithm. We first define a spanning tree and then define the algorithm. Let G be an (N, M)-graph; thus G has N vertices and M edges. It can be proven that G can be a connected graph only if M ≥ N − 1. A spanning tree of a connected graph is a tree containing all N of its vertices. Two definitions of such a tree are: (1) it is a connected graph on N vertices having N − 1 edges, and (2) it is a connected acyclic graph that contains all N vertices. It is an easy exercise to show that each definition implies the other, but we shall leave this for another time. A minimum spanning tree of a weighted graph G is a spanning tree that minimizes the sum of the weights of its edges. As in all optimization problems, a graph may have more than one distinct minimum spanning tree; in the extreme case of an unweighted graph, in which each edge has weight 1, any spanning tree is a minimum spanning tree.

[Figure: a small weighted graph and three of its spanning trees, with their total weights.]
Any spanning tree of the example graph can be obtained by removing appropriate edges; we conclude that the first spanning tree shown, with a total weight of 6, is optimal.

Prim's algorithm constructs the minimum spanning tree through a sequence of expanding subtrees, starting with an initial subtree comprising a single vertex, chosen arbitrarily, and no edges. In applying Prim's algorithm to a graph with N vertices, we add N − 1 edges to the tree, one at a time, according to the following criterion, stated two equivalent ways:

1. On each iteration, expand the subtree by adding to it the nearest vertex not in that subtree; or
2. On each iteration, find the shortest (least-cost) edge connecting a vertex already in the subtree to a vertex not in the subtree, and add that edge.

For a connected (N, M)-graph the algorithm terminates after adding N − 1 edges, because at that point all N vertices will be in the tree. Here is a pseudocode version of the algorithm, similar to that shown in the textbook.

```
Algorithm Prim(G)
// Prim's algorithm for constructing a minimum spanning tree.
// Input:  a weighted connected graph G = (V, E), with vertex set V and edge set E.
// Output: ET, the set of edges for the spanning tree (which, by definition,
//         must have the same vertex set as G), and WT, the total weight of that tree.
VT = {v0}           // Initialize the vertex set of T to any vertex
ET = { }            // Initialize its edge set to the empty set
WT = 0              // Weight of this tree is 0
For I = 1 to |V| - 1 Do         // |V| is the size of the vertex set
    Find a minimum-weight edge e = (u, w) with w ∈ VT and u ∈ V but u ∉ VT
    VT = VT ∪ {u}
    ET = ET ∪ {e}
    WT = WT + weight(e)
End Do
Return ET, WT
```

As an example, consider a spanning tree for the following weighted (4, 4)-graph: V(G) = {A, B, C, D}, with e1 = (A, B), e2 = (A, C), e3 = (A, D), e4 = (C, D), and W1 = 1, W2 = 5, W3 = 2, W4 = 3.
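A runnable Python sketch of the pseudocode above, applied to this (4, 4)-graph. The representation (edges as a mapping from `frozenset` endpoint pairs to weights) and the names are our choices, not the textbook's:

```python
def prim(vertices, edges, start):
    """Prim's algorithm: grow a tree from `start`, on each iteration adding
    the minimum-weight edge joining a tree vertex to a non-tree vertex.
    `edges` maps frozenset({u, w}) -> weight."""
    vt = {start}            # V_T: vertices in the tree so far
    et, wt = [], 0          # E_T and the total tree weight W_T
    while vt != set(vertices):
        # candidate edges have exactly one endpoint inside V_T
        e = min((e for e in edges if len(e & vt) == 1), key=edges.get)
        vt |= e             # the outside endpoint joins the tree
        et.append(e)
        wt += edges[e]
    return et, wt

graph = {frozenset('AB'): 1, frozenset('AC'): 5,
         frozenset('AD'): 2, frozenset('CD'): 3}
tree, weight = prim('ABCD', graph, 'A')
print(weight)  # total weight of the minimum spanning tree: 6
```

Note that `min` over an empty candidate set raises an error, which is exactly the "stall" on disconnected graphs discussed later in these notes.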
We now illustrate the execution of Prim's algorithm on this example, beginning with the tree consisting of a single vertex and no edges. Start the algorithm with VT = {A}. The shortest edge connecting A to a vertex not in VT is the edge (A, B), with edge weight 1. At this point we have VT = {A, B}, ET = {(A, B)}, and WT = 1.

Now the shortest edge connecting a vertex in the set {A, B} to a vertex in V − VT = {C, D} is the edge (A, D). We add this to the subtree, with the following results: VT = {A, B, D}, ET = {(A, B), (A, D)}, and WT = 3.

Finally, the shortest edge connecting vertex C to a vertex in the set {A, B, D} is the edge (C, D), with weight 3. We add this edge to the subtree to get the following spanning tree: VT = {A, B, C, D}, ET = {(A, B), (A, D), (C, D)}, and WT = 6. This is our spanning tree.

### Does Prim's Algorithm Work?

We now ask the simple question: given a connected weighted (N, M)-graph, does Prim's algorithm produce a minimum spanning tree? If N = 1, the problem is trivial: the graph is a tree, and it is the only possible spanning tree of itself. (If M < N − 1, the graph cannot be connected; certainly the spanning tree of a one-vertex graph is unique and trivial.)

The proof is by induction: we show that each of the subtrees Ti, 1 ≤ i ≤ N − 1, produced by the algorithm is a subgraph of some minimum spanning tree, and thus that the final tree is itself a minimum spanning tree. The initial one-vertex subtree is contained in any minimum spanning tree, as any minimum spanning tree must contain all of the vertices. For the inductive step, suppose for contradiction that Ti is the first subtree not contained in any minimum spanning tree, and let T be a minimum spanning tree containing Ti−1. Let (u, v) be the minimum-weight edge from a vertex in Ti−1 to a vertex not in Ti−1 that the algorithm added in forming Ti, so that T does not contain (u, v). If we add (u, v) to the spanning tree T we must form a cycle, as adding an edge joining two vertices of any tree forms a cycle. Since T ∪ {(u, v)} contains a cycle, there must be another edge (u′, v′) on that cycle connecting a vertex u′ ∈ Ti−1 to a vertex v′ ∉ Ti−1. By the selection rule of Prim's algorithm, this edge has weight not less than that of (u, v), so removing the edge (u′, v′) from T ∪ {(u, v)} produces another minimum spanning tree, and this one contains Ti, a contradiction. Thus Prim's algorithm is correct.

### Kruskal's Algorithm

It turns out that there is another algorithm based on greedy that solves the minimum spanning tree problem. This is Kruskal's algorithm, which builds the minimum spanning tree as a sequence of least-cost acyclic subgraphs of the given graph. When one has processed an (N, M)-graph and produced a least-cost acyclic subgraph with N − 1 edges, the result must be a minimum spanning tree.
The algorithm begins by sorting the edges in nondecreasing order of weight (smallest weight first) and then adds, from this list, one edge at a time. We now present the algorithm.

```
Algorithm Kruskal(G)
// Kruskal's algorithm for generating a minimum spanning tree.
// Input:  a weighted connected graph G = (V, E).
// Output: ET, the set of edges in a minimum spanning tree of G.
Sort the edge set E in nondecreasing order of weight.  Assume that the
sorted edge set is reindexed e1, e2, ..., em, so that
W(e1) <= W(e2) <= ... <= W(em).
ET = { }
K = 0
EdgeCount = 0      // (The textbook calls this "ecounter"; I don't like that word.)
// Recall that |V| is the size of the vertex set and |E| is the size of
// the edge set of G = (V, E).
While EdgeCount < |V| - 1 And K < |E| Do
    K = K + 1
    If ET ∪ {eK} is acyclic Then
        ET = ET ∪ {eK}
        EdgeCount = EdgeCount + 1
    End If
End Do
```

The proof of the correctness of Kruskal's algorithm is similar to the proof for Prim's, so we shall not discuss it here. If we apply Kruskal's algorithm to our (4, 4)-graph, the sorted edge set is e1 = (A, B), e2 = (A, D), e3 = (C, D), e4 = (A, C), with W1 = 1, W2 = 2, W3 = 3, W4 = 5. Then:

- {(A, B)} is acyclic (it is a single edge): ET = {(A, B)}.
- ET ∪ {(A, D)} is acyclic: ET = {(A, B), (A, D)}.
- ET ∪ {(C, D)} is acyclic: ET = {(A, B), (A, D), (C, D)}.

The edge count is 3, and we are done.

### Testing the Growing Tree for Cycles

Up to now we have been ignoring a very important part of these two algorithms. At each stage of both algorithms, we consider adding an edge and ask whether or not the addition of this edge will cause a cycle. This has begged the question of detecting cycles. Recall that the following code fragment is essential in the construction of the spanning tree:

```
If ET ∪ {eK} is acyclic Then
    ET = ET ∪ {eK}
    EdgeCount = EdgeCount + 1
End If
```

We now attempt to address this problem, beginning with a standard theorem on trees and an obvious corollary of that theorem.

Definition: A tree is a connected acyclic graph.

Theorem: The following statements are equivalent:
1. G is a tree with n vertices and m edges.
2. Every two distinct vertices of G are connected by a unique path.
3. G is connected and m = n − 1.
4. G is acyclic and m = n − 1.
5. G is acyclic, and if any two nonadjacent vertices of G are joined by an edge e, then G + e (the graph with one edge added) has exactly one cycle.

The definition is standard, and the proof of the theorem is found in every book on graph theory. We now extend this by two corollaries that are easy to prove but just stated here.

Corollary: Let G be an (n, m)-graph with m < n − 1. Then G is disconnected.

Corollary: Let G be an (n, m)-graph with m ≥ n. Then G contains a cycle.

The key feature in our detection of cycles in the growing tree is the concept of connected components. Recall that two vertices are in the same connected component if there is a path from one vertex to the other. We make the following observation, stated as a lemma.

Lemma: Let J and K be two non-adjacent vertices in a graph G. Then:

(a) If J and K are in the same connected component, addition of the edge (J, K) to the graph will cause a cycle to be formed.

(b) If J and K are not in the same component but in distinct components, call them CJ and CK, then addition of the edge (J, K) will not add a cycle to the graph, but will cause every vertex in both CJ and CK to be in the same component. Specifically, it causes J and K to be in the same component.

Proof: Suppose J and K are in the same component, but that (J, K) is not an edge in the graph G. Then there exists a path from J to K not involving that edge. Addition of the edge (J, K) to that path forms a cycle.

Suppose that J and K are not in the same connected component. Then there is no path from J to K, and addition of edge (J, K) to the graph will generate a unique path from J to K. As vertex J is connected to all vertices in its component, the addition of this edge forms a path from every vertex in CJ to vertex K, and hence to every vertex in CK.
Here are two algorithms for use in detecting cycles while using Kruskal's algorithm.

Algorithm ConnectedComp(e = (u, v)): Add edge e = (u, v) to T. If e = (u, v) is the first edge added, create component C1 = {u, v}. Otherwise, scan all of the existing component linked lists for vertices u and v:

- If one component contains both u and v, reject the edge as forming a cycle.
- If component CJ contains one vertex (call it u) and vertex v is not contained in any component, add vertex v to component CJ.
- If vertex u belongs to component CJ and vertex v belongs to component CK, then merge the two components into one component.
- If neither u nor v is in a component, create a new component CN and set CN = {u, v}.

Here is a slightly better algorithm. Like the above, it relies on the use of sets to represent the connected components in the graph. This algorithm begins with a graph T = (V(G), ∅), an (N, 0)-graph with the vertex set of the graph for which a spanning tree is to be constructed. Let V(G) = {1, 2, 3, ..., N−1, N}. Construct the connected components for graph T: a set of sets, best denoted as K(G) = {{1}, {2}, {3}, ..., {N−1}, {N}}. Each set in the collection is a singleton containing only one vertex, reflecting the fact that the graph T has no edges at this point. Denote the set representing the component containing vertex J as S(J). At the beginning we have S(J) = {J} for 1 ≤ J ≤ N.

```
Algorithm ConnectedComp(e = (I, J))
// Add edge e = (I, J) to T.  WLOG (Without Loss Of Generality), assume I < J.
If I ∈ S(J) Then        // then J ∈ S(I) also
    Reject edge e = (I, J)
Else
    Accept edge e = (I, J) as not adding a cycle
    Set S(I) to S(I) ∪ S(J), and remove S(J)
End If
```

### Spanning Trees for Unweighted Graphs

It should be obvious that either Prim's algorithm or Kruskal's algorithm can be adapted to create spanning trees in unweighted graphs. For each, the modification is simple. In Prim's, one replaces the statement "Select an edge of minimum weight between a tree node t and a fringe node u" with the statement "Select any edge between a tree node t and a fringe node u". In Kruskal's algorithm, the only requirement is to omit the sorting of edges by weight, as they all have the same weight in an unweighted graph, and begin with any listing of edges.
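Kruskal's algorithm, using the set-based component test from the previous section, can be sketched in Python as follows (the names and the representation of components as shared set objects are ours):

```python
def kruskal(vertices, edges):
    """Kruskal's algorithm. `edges` is a list of (u, v, weight) triples.
    Each vertex starts in its own singleton component; an edge is accepted
    only if its endpoints lie in different components."""
    comp = {v: {v} for v in vertices}   # S(J) = {J} initially
    et, wt = [], 0
    for u, v, w in sorted(edges, key=lambda e: e[2]):  # nondecreasing weight
        if comp[u] is comp[v]:
            continue                    # same component: edge would form a cycle
        et.append((u, v, w))
        wt += w
        merged = comp[u] | comp[v]      # merge the two components
        for x in merged:
            comp[x] = merged
        if len(et) == len(vertices) - 1:
            break                       # spanning tree is complete
    return et, wt

# The weighted (4, 4)-graph used earlier; the minimum spanning tree has weight 6.
tree, weight = kruskal('ABCD', [('A', 'B', 1), ('A', 'C', 5),
                                ('A', 'D', 2), ('C', 'D', 3)])
print(weight)  # 6
```

On a disconnected graph the loop simply runs out of edges, returning a minimum spanning forest, which matches the behavior traced in the next example.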
### More Problems

The spanning trees we have considered so far do not well illustrate the algorithms. To remedy that, consider the following (9, 6)-graph. This graph is a forest, which can be considered as a set of disconnected trees. Labeling each edge by the triple (Vertex 1, Vertex 2, Edge weight), we have the following edge set: (1, 2, 3), (1, 3, 5), (4, 5, 2), (5, 6, 4), (7, 8, 1), and (8, 9, 6).

Apply Prim's Algorithm: We shall begin with vertex 1. VT = {1}; the candidate edges are (1, 2, 3) and (1, 3, 5). Select edge (1, 2) with weight 3. Now VT = {1, 2}; the only candidate edge is (1, 3, 5). Select edge (1, 3) with weight 5; the total weight is 8. Now VT = {1, 2, 3} and V − VT = {4, 5, 6, 7, 8, 9}, but there are no edges (u, v) in G with u ∈ VT and v ∈ V − VT. The algorithm stalls.

Apply Kruskal's Algorithm: The sorted edge list is e1 = (7, 8, 1), e2 = (4, 5, 2), e3 = (1, 2, 3), e4 = (5, 6, 4), e5 = (1, 3, 5), and e6 = (8, 9, 6). The code section of interest, with the problem values supplied (|V| = 9, |E| = 6), is:

```
While EdgeCount < |V| - 1 And K < |E| Do
    K = K + 1
    If ET ∪ {eK} is acyclic Then
        ET = ET ∪ {eK}
        EdgeCount = EdgeCount + 1
    End If
End Do
```

K = 0, EdgeCount = 0; K < 6 and EdgeCount < 8. Set K = 1. Consider e1 = (7, 8, 1): vertices 7 and 8 are in different components, so it is OK to add. ET = {(7, 8, 1)} and WT = 1. S1 = {1}, S2 = {2}, S3 = {3}, S4 = {4}, S5 = {5}, S6 = {6}, S7 = {7, 8}, S9 = {9}.

K = 1, EdgeCount = 1; K < 6 and EdgeCount < 8. Set K = 2. Consider e2 = (4, 5, 2): OK to add. ET = {(7, 8, 1), (4, 5, 2)} and WT = 3. S4 = {4, 5}; the other components are unchanged.

K = 2, EdgeCount = 2; K < 6 and EdgeCount < 8. Set K = 3. Consider e3 = (1, 2, 3): OK to add. WT = 6. S1 = {1, 2}.

K = 3, EdgeCount = 3; K < 6 and EdgeCount < 8. Set K = 4. Consider e4 = (5, 6, 4): OK to add. WT = 10. S4 = {4, 5, 6}.

K = 4, EdgeCount = 4; K < 6 and EdgeCount < 8. Set K = 5. Consider e5 = (1, 3, 5): OK to add. WT = 15. S1 = {1, 2, 3}.

K = 5, EdgeCount = 5; K < 6 and EdgeCount < 8. Set K = 6. Consider e6 = (8, 9, 6): OK to add. ET = {(7, 8, 1), (4, 5, 2), (1, 2, 3), (5, 6, 4), (1, 3, 5), (8, 9, 6)} and WT = 21. S1 = {1, 2, 3}, S4 = {4, 5, 6}, S7 = {7, 8, 9}.
Now K = 6 = |E|, so the algorithm terminates, with the three sets S1 = {1, 2, 3}, S4 = {4, 5, 6}, and S7 = {7, 8, 9} representing the components of the resulting spanning forest.

### Another Example, This Time Connected

We examine a more realistic example and apply both Prim's and Kruskal's algorithms. The graph is a (5, 8)-graph, with V(G) = {1, 2, 3, 4, 5} and edge set (3, 5, 1), (4, 5, 1), (1, 3, 2), (1, 4, 2), (3, 4, 2), (2, 3, 3), (2, 4, 3), and (1, 2, 4). [Figure: the (5, 8)-graph.]

#### Prim's Algorithm Applied to This Graph

We first apply Prim's algorithm. To avoid the boredom of repetition, this author elects to begin the spanning tree at vertex 2. VT = {2}, V − VT = {1, 3, 4, 5}. The edges (x, y) with x ∈ VT and y ∈ V − VT are (1, 2, 4), (2, 3, 3), and (2, 4, 3). The algorithm allows us to select either of the two edges with weight 3; we arbitrarily select (2, 3, 3). We follow the spanning tree as it grows.

VT = {2, 3}, V − VT = {1, 4, 5}. The edges (x, y) with x ∈ VT and y ∈ V − VT are (1, 2, 4), (1, 3, 2), (2, 4, 3), (3, 4, 2), and (3, 5, 1). The obvious choice is (3, 5, 1). The total tree weight is now 4.

VT = {2, 3, 5}, V − VT = {1, 4}. The candidate edges are (1, 2, 4), (1, 3, 2), (2, 4, 3), (3, 4, 2), and (4, 5, 1). The obvious choice is (4, 5, 1). The total weight is now 5.

VT = {2, 3, 4, 5}, V − VT = {1}. The candidate edges are (1, 2, 4), (1, 3, 2), and (1, 4, 2). We may select either (1, 3, 2) or (1, 4, 2), each giving a total weight of 7.

#### Kruskal's Algorithm Applied to This Graph

In terms that we shall use for the algorithm, we have V(G) = {1, 2, 3, 4, 5} and the sorted edge set e1 = (3, 5, 1), e2 = (4, 5, 1), e3 = (1, 3, 2), e4 = (1, 4, 2), e5 = (3, 4, 2), e6 = (2, 3, 3), e7 = (2, 4, 3), and e8 = (1, 2, 4). The set of components begins as S1 = {1}, S2 = {2}, S3 = {3}, S4 = {4}, and S5 = {5}. The loop, with |V| = 5 and |E| = 8, is:

```
While EdgeCount < 4 And K < 8 Do
    K = K + 1
    If ET ∪ {eK} is acyclic Then
        ET = ET ∪ {eK}
        EdgeCount = EdgeCount + 1
    End If
End Do
```

K = 0, EdgeCount = 0; K < 8 and EdgeCount < 4. Set K = 1. Consider e1 = (3, 5, 1): 3 and 5 are in different components, so it is OK to add. ET = {(3, 5, 1)} and WT = 1. S1 = {1}, S2 = {2}, S3 = {3, 5}, S4 = {4}.

K = 1, EdgeCount = 1; K < 8 and EdgeCount < 4. Set K = 2. Consider e2 = (4, 5, 1): OK to add. ET = {(3, 5, 1), (4, 5, 1)} and WT = 2. S1 = {1}, S2 = {2}, S3 = {3, 4, 5}.

K = 2, EdgeCount = 2; K < 8 and EdgeCount < 4. Set K = 3. Consider e3 = (1, 3, 2): 1 and 3 are in different components, so it is OK to add. ET = {(3, 5, 1), (4, 5, 1), (1, 3, 2)} and WT = 4. S1 = {1, 3, 4, 5}, S2 = {2}.

K = 3, EdgeCount = 3; K < 8 and EdgeCount < 4.
Set K = 4. Consider e4 = (1, 4, 2): 1 ∈ S1 and 4 ∈ S1, so reject this edge as forming a cycle. Nothing changes: ET = {(3, 5, 1), (4, 5, 1), (1, 3, 2)}, WT = 4, S1 = {1, 3, 4, 5}, S2 = {2}.

K = 4, EdgeCount = 3; K < 8 and EdgeCount < 4. Set K = 5. Consider e5 = (3, 4, 2): 3 ∈ S1 and 4 ∈ S1, so reject this edge. Nothing changes.

K = 5, EdgeCount = 3; K < 8 and EdgeCount < 4. Set K = 6. Consider e6 = (2, 3, 3): 2 and 3 are in different components, so it is OK to add. ET = {(3, 5, 1), (4, 5, 1), (1, 3, 2), (2, 3, 3)} and WT = 7. S1 = {1, 2, 3, 4, 5}.

K = 6, EdgeCount = 4; the algorithm terminates. This is one of the two solutions found by Prim's algorithm.

### The Knapsack Problem

The knapsack problem has been discussed several times in the textbook; it is time that we discuss it. As is true of many important problems, this one has an easy statement. We consider a knapsack with capacity W > 0; this is a fixed number for the problem. We are given a set of N items of known weights w1, w2, ..., wN and known values v1, v2, ..., vN, with the following obvious constraints:

1. vK > 0.0 for all K, 1 ≤ K ≤ N (each item has a positive value);
2. wK > 0.0 for all K, 1 ≤ K ≤ N (each item has a positive weight);
3. w1 + w2 + ... + wN > W (if all the items fit into the knapsack, the problem is trivial); and
4. wK ≤ W for all K, 1 ≤ K ≤ N (individually, every item will fit into the knapsack).

There are two variants of this problem: one easy to solve, and one hard to solve. The fractional knapsack problem is the one that is easily solved by a greedy technique. In this variant of the problem, any fraction of an item may be placed into the knapsack. Let xK, 1 ≤ K ≤ N, be the fraction of item K that is placed into the knapsack; thus the amount of item K actually placed in the knapsack has weight xK·wK. Obviously, 0.0 ≤ xK ≤ 1.0.

In the discrete knapsack problem, also called the 0/1 knapsack problem, one either takes all of an item or none of it; thus xK = 0.0 or xK = 1.0 for each K. This is a more difficult problem that takes exponential time (actually T(N) ∈ Θ(2^N)) to solve completely. However, we shall see that the solution of this harder problem may be informed by that of the fractional knapsack, in that the fractional knapsack problem can place both lower and upper bounds on the optimal solution of the discrete knapsack problem.
All variants of the knapsack problem are optimization problems, in the sense of the discipline of Operations Research. The problem may be stated as follows. Given a set of N items of known weights w1, w2, ..., wN and known values v1, v2, ..., vN, and a knapsack of capacity W:

Select values x1, x2, ..., xN to maximize x1·v1 + x2·v2 + ... + xN·vN, subject to x1·w1 + x2·w2 + ... + xN·wN ≤ W.

For the continuous knapsack problem we have 0.0 ≤ xK ≤ 1.0. For the discrete knapsack problem we are further constrained, so that either xK = 0.0 or xK = 1.0 for each K.

Define ρK = vK / wK for 1 ≤ K ≤ N. Since we have specified that vK > 0.0 and wK > 0.0 for all K, 1 ≤ K ≤ N, it follows that ρK > 0.0 and is a finite number for every K. Without loss of generality, we order the items such that ρ1 ≥ ρ2 ≥ ... ≥ ρK ≥ ... ≥ ρN; thus item 1 is the item with the highest value-to-weight ratio, and the sequence of ρK forms a nonincreasing sequence. This ordering is essential for a solution to the fractional knapsack problem, but has no effect on the solution of the discrete problem.

### Solving the Fractional Knapsack Problem

Here we present a solution based on the greedy procedure and prove its optimality. Remember that this assumes the ordering ρ1 ≥ ρ2 ≥ ... ≥ ρN, with ρK as defined.

```
Algorithm FractionalKnapsack
// Input:  a set of values and weights, as described above.
// Output: a set of XK -- the fraction of item K to include --
//         and Value, the total value placed into the knapsack.
For K = 1 to N Do
    XK = 0.0                  // Initialize the output array
End Do
Value = 0.0
Capacity = W                  // Start with all of the knapsack capacity
                              // available, and gradually decrease it.
K = 1
While K <= N And Capacity > 0.0 Do
    If WK <= Capacity Then    // The item all fits in
        Value = Value + VK
        Capacity = Capacity - WK
        XK = 1.0
    Else                      // Fill up the knapsack
        XK = Capacity / WK
        Value = Value + XK*VK
        Capacity = 0.0
    End If
    K = K + 1
End Do
```
End Do We first illustrate this algorithm by applying it to the sample problem on page 116 of the textbook and then we prove its optimality by showing that any other solution must have a total value not exceeding the value produced by this algorithm Chapter 9 Version of 7192010 Page 16 Here is the problem from gure 38 on page 116 of the textbook W100 N4 W1 70 V1 420 p1 60 W 30 V2 120 p 40 W3 40 V3 400 3 100 W4 50 V4 250 4 50 Note here that we have 3 2 pl 2 4 2 32 In order to apply the algorithm discussed above I would have to renumber the four items according to the permutation 3 gt 1 1 gt 2 4 gt 3 and 2 gt 4 or 1 gt 2 2 gt 4 3 gt 1 4 gt 3 in more conventional order To be consistent with the book I shall keep the present indices and reorder the entries W3 40 V3 400 3 100 W1 70 V1 420 p1 60 W4 50 V4 250 4 50 W2 30 V2 120 p2 40 We now solve this with greedy taking note of a speci c partial solution K 3 Capacity 100 W3 40 X3 10 Value00100400 400 W H Capacity 60 W1 70 x1 60 70 Value 400 60 70 420 760 The optimal solution to the fractional knapsack problem is x1 6070 x2 00 X3 10 X4 00 Value 760 The two solutions of interest here have values of 400 and 760 respectively The value 760 is the optimal value for any variant of this problem and is the solution to the fractional knapsack problem We shall see later that the solution to the 01 knapsack problem is bounded by these two values Indeed we shall obtain an easy lower bound that is better Chapter 9 Version of 7192010 Page 17 Optimality of the Solution to Fractional Knapsack In order to prove that the solution produced by greedy is an optimal solution we assume that there is another solution and characterize that solution showing it to be no better To produce the greedy solution we rearranged the items so that pl 2 p Z Z pK Z Z pN The solution produced had M items in the knapsack with the following holding xKl0forlSKltM 00 lt xM S 10 xK 00 for K gt M N Z xK o WK W as this solution always lls the knapsack exactly K1 
Suppose another solution to the problem. This would involve removing some or all of item number I and replacing it with item J, where ρI ≥ ρJ. It is easily shown that such a swap with ρI = ρJ produces an alternate optimal solution; since this does not violate the claim of optimality for the greedy method, we assume strict inequality, thus ρI > ρJ.

Let xI·wI be the amount of item I that has been included in our solution. We consider a new solution by removing item I and adding item J, with ρI > ρJ. There are two possible cases to consider in trying to improve the solution claimed to be optimal:

1. xI·wI ≤ wJ, in which case remove the amount W′ = xI·wI of item I and add the same amount of item J; or
2. xI·wI > wJ, in which case remove the amount W′ = wJ of item I and add all of item J.

In either case, let W′ represent the amount of item I removed and replaced with item J. We lose value equal to W′·ρI and gain value equal to W′·ρJ, for a net change of W′·(ρJ − ρI). But ρI > ρJ, so this value is negative, and the total value of the solution goes down. Thus no substitution into the solution obtained by the greedy method will increase the value of the solution, and we have proven that the greedy solution is optimal.

Again, we should be clear about the claim that the solution by greedy for this problem is optimal. We mean that greedy produces an optimal solution, by which we claim that there is no better solution. It is quite common to have a number of distinct solutions, all of which are optimal; greedy will produce only one such solution.

We shall return to the discrete knapsack problem when we consider methods to obtain approximate solutions to problems that are NP-hard. For the moment, I shall present an algorithm that is a slight variant of the greedy solution and make a few claims.

### Insights into the Discrete Knapsack

This variant of the algorithm is an application of greedy to the 0/1 knapsack problem. This algorithm obviously produces a solution, but can be shown not always to produce an optimal solution.
As before, this assumes the ordering of items so that ρ1 ≥ ρ2 ≥ ... ≥ ρN.

```
Algorithm DiscreteKnapsackLowerBound
// Input:  a set of values and weights, as described above.
// Output: a set of XK -- the fraction of item K to include --
//         and Value, a lower bound on the optimal solution.
For K = 1 to N Do
    XK = 0.0                  // Initialize the output array
End Do
Value = 0.0
Capacity = W                  // Start with all of the knapsack capacity
                              // available, and gradually decrease it.
K = 1
While K <= N And Capacity > 0.0 Do
    If WK <= Capacity Then
        Value = Value + VK
        Capacity = Capacity - WK
        XK = 1.0
    End If
    K = K + 1
End Do
```

We argue that the optimal solution to the discrete knapsack problem is bounded by the result of the above algorithm as a lower bound, and the result of the greedy fractional algorithm as an upper bound. The arguments are simple:

1. The above algorithm produces an answer that meets all of the constraints of the discrete (0/1) knapsack problem. Any optimal answer must be at least as good; thus this answer is a lower bound on the optimal answer.
2. The greedy algorithm has just been proven to provide an optimal solution to the knapsack problem under the constraints 0.0 ≤ xK ≤ 1.0 for all items. The additional constraint of the discrete knapsack problem, that xK = 0.0 or xK = 1.0, obviously makes it a special case of the fractional knapsack. Thus no solution to the discrete knapsack can have value exceeding that of the solution to the associated fractional knapsack, and the greedy solution to the fractional knapsack gives an upper bound on the optimal solution of the discrete knapsack.

We now apply the lower-bound algorithm to the problem from the book. As before, we rearrange the items according to the greedy criterion but keep the original numbering:

w3 = 40, v3 = 400, ρ3 = 10.0
w1 = 70, v1 = 420, ρ1 = 6.0
w4 = 50, v4 = 250, ρ4 = 5.0
w2 = 30, v2 = 120, ρ2 = 4.0

We now attempt to solve the discrete knapsack problem with greedy:

K = 3: Capacity = 100, w3 = 40, x3 = 1.0, Value = 0.0 + 400 = 400.
K = 1: Capacity = 60, w1 = 70 > 60, so skip.
K = 4: Capacity = 60, w4 = 50, x4 = 1.0, Value = 400 + 250 = 650.
K = 2: Capacity = 10, w2 = 30 > 10, so skip. We have now run out of items in the list.

We now have two bounds on the optimal solution to the discrete knapsack problem: an upper bound of 760 and a lower bound of 650. In this case it turns out that 650 is the solution to the discrete knapsack problem, but this is an accident, as it is quite easy to create examples in which the above algorithm does not produce the optimal solution.

One should note that there is one condition under which the optimal solution to the discrete knapsack problem can easily be produced. Should the greedy algorithm, as applied to the fractional knapsack problem, produce a solution in which there is no value of xK such that 0.0 < xK < 1.0 (with the inequalities strict), that is, for which either xK = 0.0 or xK = 1.0 for all K, then the solution to the fractional knapsack problem is the exact optimal solution to the discrete knapsack problem. To illustrate this claim, consider the same problem as before, except that W = 110.
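The lower-bound variant just traced differs from the fractional algorithm only in skipping any item that does not fit whole. A Python sketch (our names), checked against the same W = 100 example:

```python
def discrete_knapsack_lower_bound(items, capacity):
    """Greedy 0/1 variant: visit items in nonincreasing value/weight
    order, taking an item only if it fits entirely in the remaining
    capacity.  Returns a feasible (not always optimal) 0/1 solution."""
    order = sorted(range(len(items)),
                   key=lambda k: items[k][1] / items[k][0], reverse=True)
    take, value = [0] * len(items), 0
    for k in order:
        w, v = items[k]
        if w <= capacity:       # take the whole item or none of it
            take[k] = 1
            value += v
            capacity -= w
    return take, value

# Same example, W = 100: items 3 and 4 are taken, for a value of 650.
take, value = discrete_knapsack_lower_bound(
    [(70, 420), (30, 120), (40, 400), (50, 250)], 100)
print(value)  # 650
```

Together with the fractional result, this brackets the optimal 0/1 value between 650 and 760, as argued above.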
