ADV AI COMBINATORIAL GAMES (CS 542)
This 10-page set of class notes was uploaded by Orrin Rutherford on Wednesday, September 2, 2015. The notes belong to CS 542 at Portland State University, taught by Staff in Fall.
CHAPTER 3
BLIND SEARCH

We argued in the last chapter that intelligent action involves search, and described a variety of specific problems where search is needed for a solution: the 8-puzzle, game playing, crossword-puzzle generation, and reasoning (or inference) generally. In this chapter and the next two, we examine search in a bit more depth. This chapter discusses blind search, the next discusses heuristic search, and Chapter 5 discusses search procedures that are used in game playing.

We have already presented algorithms for depth-first search and breadth-first search, and we begin this chapter by examining these procedures a bit more closely. Specifically, we consider the question of how much memory and time these algorithms need. In order to simplify the analysis, we will assume that we are working with a search tree of uniform branching factor b and depth d, and that this tree has a single goal node that is at depth d. A tree of this sort with b = 4 and d = 2 appears in Figure 3.1. We have denoted the root node by n, the nodes at depth 1 by n1, n2, n3, and n4, and the children of n1 by n11, n12, n13, and n14.

Except for the fact that it admits multiple solutions, the crossword-puzzle problem can be thought of in this way: We decide in advance on an order in which to fill the squares, view a completed crossword as a goal node if all of the words are legal and distinct, and view the node as a nongoal node if one of these conditions is violated. Given this interpretation, the branching factor for the crossword-puzzle problem is 26.

3.1 BREADTH-FIRST SEARCH

As discussed in Chapter 2, in breadth-first search we should view the list L of unexpanded nodes as a queue, so that newly generated children are put on the end of this list and expanded only after nodes at shallower depths have been examined. Thus, during the first few iterations through the loop in Procedure 2.2.1, L takes the following values:

    {n}
    {n1, n2, n3, n4}
    {n2, n3, n4, n11, n12, n13, n14}
    {n3, n4, n11, n12, n13, n14, n21, n22, n23, n24}

The complete ordering for the search in Figure 3.1 is shown in Figure 3.2.

FIGURE 3.1  A uniform search space.
FIGURE 3.2  Breadth-first search.

Since the total number of nodes at depth k is b^k, and we need to store all of these nodes before the first node at depth k + 1 is examined, it follows that breadth-first search needs space at least b^(d-1) to explore a tree such as that in Figure 3.1. Note that if the tree continued beyond depth d, even more space would be required, as the nodes on the fringe each generated b successors; just before the goal is found, L would contain the goal node and some fraction of the tree at depth d + 1.

How about the amount of time needed by breadth-first search? We will equate the amount of time taken by the algorithm with the number of nodes it examines; remember that a node is checked to see if it is a goal node just before the node is expanded to generate successors, and not when the node itself is added to the list L.

In order to reach the goal at depth d, the interior (that is, nonfringe) nodes that must be examined are all of the nodes at depths 0, 1, ..., d - 1. The number of such nodes is given by

    1 + b + b^2 + ... + b^(d-1) = (b^d - 1)/(b - 1).                        (3.1)

What about the fringe nodes? We can't tell exactly how many fringe nodes will be examined, since we don't know where on the fringe the goal node is located. The best we can do is to examine only one fringe node (the goal); the worst is if we need to examine all b^d of them. The average number of nodes examined on the fringe is therefore

    (b^d + 1)/2                                                             (3.2)

and the average number of total nodes is the sum of (3.1) and (3.2):

    (b^d - 1)/(b - 1) + (b^d + 1)/2 = (b^(d+1) + b^d + b - 3)/(2(b - 1)).   (3.3)

For large depth, (3.3) reduces to roughly b^d/2, which is roughly the amount of time spent at the fringe alone. This is to be expected: in these cases, most of the time spent searching will be spent at the bottom of the tree. A graphical version of this argument appears in Figure 3.3.

If b = 2, on the other hand, then (3.3) simplifies to approximately 3 * 2^(d-1). Once again, we can understand this result directly: The size of the internal tree is 1 + 2 + ... + 2^(d-1) = 2^d - 1, and the number of fringe nodes that we need to examine is approximately 2^d/2. These two values sum to roughly 3 * 2^(d-1).

FIGURE 3.3  Time needed by breadth-first search.

3.2 DEPTH-FIRST SEARCH

If L is thought of as a queue in the breadth-first approach, it is a stack in depth-first search. Its first few values as we search the tree in Figure 3.1 are:

    {n}
    {n1, n2, n3, n4}
    {n11, n12, n13, n14, n2, n3, n4}
    {n12, n13, n14, n2, n3, n4}

The complete search is shown in Figure 3.4 ("Depth-first search").

The amount of memory needed by the depth-first approach is easy to compute. The most memory is needed at the first point that the algorithm reaches depth d; in Figure 3.5 ("Memory needed by depth-first search"), the marked nodes need to be saved. In general, we will need to store b - 1 nodes at each depth (the siblings of the nodes that have already been expanded), together with one additional node at depth d, since we haven't expanded it yet. Thus the total space needed is given by d(b - 1) + 1. For fixed b, depth-first search requires an amount of memory that is linear in d, while breadth-first search requires an amount exponential in the depth of the search tree.

The time calculation is more subtle. We begin by noting that if the goal is at the far left of the tree, then depth-first search will proceed directly to it, examining a total of d + 1 nodes. If it is to the far right of the tree, the entire space will be examined, a total of

    (b^(d+1) - 1)/(b - 1)

nodes in all. If we could average these two expressions, we could conclude that the number of nodes examined in the average case is

    [(d + 1) + (b^(d+1) - 1)/(b - 1)]/2 = (b^(d+1) + db - d + b - 2)/(2(b - 1)).   (3.4)

But is it legitimate to take the average? It isn't obvious that it is, since the numbers of nodes that might be examined are not distributed uniformly over the integers between these two extremes. In Figure 3.4, for example, we might examine three, four, five, or six nodes, but can't examine exactly seven, since the seventh node we examine is n2, which is at depth 1 and not a fringe node.

To see that we can take the average, we will use an argument due to the mathematician Karl Gauss. When Gauss was ten, he was making a nuisance of himself, and the teacher, in order to quiet him, asked him to add the numbers from 1 to 100. Gauss replied almost immediately that the answer was 5050, having reasoned as follows: "I can pair 1 with 100; they add to 101. If I pair 2 with 99, I get 101 again; similarly for 3 paired with 98, and so on. The last pair is 50 with 51, so the sum of all the numbers is 50 x 101 = 5050." Not bad for a 10-year-old!(5)

Gauss's argument would apply equally well to showing that the average of all the integers between 1 and 100 is 101/2, and this argument is one that we can apply to our situation. Specifically, if we denote by B the number of nodes examined in the best case and by W the number examined in the worst, then in the tree in Figure 3.4 it is possible to examine a total of B nodes, or B + 1, B + 2, B + 3, or B + 5 (but not B + 4), and so on. On the other side of the picture, we might examine W nodes, or W - 1, W - 2, W - 3, or W - 5 (but not W - 4), and so on. As shown for the tree with b = 3 in Figure 3.6 ("Gauss's argument for a tree with b = 3 instead of b = 4"), the symmetry of the search space guarantees that we can average the best and worst cases, so that the amount of time needed by depth-first search in the average case is indeed given by (3.4).

As in the breadth-first case, we will examine large d and b = 2 a bit more closely. For large d, (3.4) is once again dominated by the fringe term, approximately b^(d+1)/(2(b - 1)).

(5) Mathematicians have a history of doing things like this. Consider the following famous problem: Two locomotives start 200 miles apart, each traveling toward the other at 50 miles per hour. A bumblebee starts on the front of one train and flies toward the other at 75 miles an hour; when he reaches the second locomotive, he turns around and heads back.
Assuming that he continues this behavior, how many miles will he fly before being squashed? One can solve this problem either by doing a few pages of algebra and summing a fairly complicated series, or by simply reasoning that since the trains are approaching one another at 100 miles an hour, they will collide in two hours, during which the bee will fly 150 miles. The mathematician von Neumann, given this problem, responded almost immediately, and was then told that although physicists typically solve the problem in this way, mathematicians typically sum the series. Von Neumann's reply: "But I did sum the series."

FIGURE 3.7  Time needed by depth-first search.
FIGURE 3.8  Cost of breadth-first search relative to depth-first search.

        b       Ratio
        2       1.5
        3       1.33
        5       1.2
        10      1.1
        25      1.04
        100     1.01

The work at the fringe thus continues to dominate the computation (see Figure 3.7). Note, however, that a term of order b^d/2 that appeared in the breadth-first result (3.3) is missing from (3.4); a slightly more accurate estimate of the time needed by breadth-first search is

    b^d (b + 1)/(2(b - 1)).

We see from this that breadth-first search is more computationally expensive than depth-first search by a factor of approximately (b + 1)/b. Typical values for this expression appear in Figure 3.8.

For b = 2, (3.4) simplifies to (2^(d+1) + d)/2, or approximately 2^d. As in the breadth-first case, we can understand this result directly: The entire search space is of size roughly 2^(d+1), and depth-first search will on average examine about half of it; in other words, 2^d nodes.

Comparing the two approaches, we see that depth-first search is somewhat more efficient in time and vastly more efficient in space than its breadth-first counterpart. In fact, the amount of space needed by breadth-first methods is typically large enough to be prohibitive.

Depth-first search is not without its difficulties, however. Many problems admit multiple goals, with the intention being to find the shallowest one; there is no guarantee that the depth-first approach will do so. Consider using depth-first search to solve a route-planning problem where the goal is to find a path from my house to my neighbor's. If my neighbor lives to the west but I decide to expand nodes to the east first, the route I find may take me all the way around the planet in order to cross the street. At the very least, I will probably make my way to New York before turning around and heading back toward California.

Depth-first search can also have trouble if the depth of the tree is infinite. Consider the problem of proving that 3 is an integer, where we know that 0 is an integer and also that successors and predecessors of integers are integers. The search space for this problem is shown in Figure 3.9 ("Is 3 an integer?"), where we have labelled the nodes with the numbers that we are trying to prove are integers; 0 therefore labels goal nodes. We have simplified the search tree somewhat by pruning any path that takes us from a node back to its predecessor; two pruned paths that return us to the root of the tree are shown in Figure 3.9.

Depth-first search may simply follow the left-hand branch in this problem, trying to prove that 4 is an integer, then that 5 is, then 6, and so on. When searching trees of infinite depth, depth-first search may not find any answer at all. In Exercise 4 at the end of this chapter, we will see that breadth-first search can also outperform depth-first search in finite spaces if the depth of the goal node is smaller than the depth of the tree as a whole.

3.3 ITERATIVE DEEPENING

Is there any way to get the best of both these approaches, using an algorithm that has the space requirements of depth-first search and the performance properties of breadth-first search? The answer is yes; the approach that does this is known as iterative deepening, and it was first thoroughly investigated by Richard Korf.

There are two separate ways to think about iterative deepening. Perhaps the simplest (although it doesn't explain the name) is that it is just the same as breadth-first search, except that instead of storing all of the nodes at intermediate depths, we regenerate them when the time comes to expand them. Why is this a viable approach? It seems as if the amount of time spent regenerating the internal nodes would dwarf the time spent looking for the answer, but this is not the case; recall that in all search problems, the bulk of the computational effort is spent examining the nodes at the fringe.

The other way to think of iterative deepening is the following: In the cases in which depth-first search performs poorly, the depth of the goal node is less than the depth of the tree as a whole, either because the first goal found is at a greater depth than the shallowest solution to the problem, or because the depth of the tree is infinite, and so on. What iterative deepening does is search the tree in a way that guarantees that the goal depth and the tree depth match. The idea is to search the tree initially with an artificial depth cutoff of 1, so that any node below depth 1 is not examined. If this approach succeeds in finding a solution at depth 1, the solution is returned; if not, the tree is searched again, but with a depth cutoff of 2. Each of these iterative searches proceeds in depth-first fashion.

PROCEDURE 3.3.1  Iterative deepening

1. Set c = 1; this is the current depth cutoff.
2. Set L to be a list of the initial nodes in the problem.
3. Let n be the first node on L. If L is empty, increment c and return to step 2.
4. If n is a goal node, stop and return it and the path from the initial node to n.
5. Otherwise, remove n from L. Provided that the depth of n is less than c, add to the front of L all of n's children, labelling each with its path from the initial node. Return to step 3.

If the shallowest solution is at depth c, the depth-first search to this depth will succeed; it follows from this that iterative deepening will always return the shallowest solution to the problem in question. Since each of the individual searches is performed depth-first, the amount of memory required by the method is the same as for the depth-first approach.
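The iterative-deepening procedure just described can be sketched in a few lines of Python. This is an illustrative sketch rather than code from the text; the function names, the max_cutoff safety bound, and the toy number-tree example are my own choices.

```python
from typing import Callable, Hashable, Iterable, List, Optional

def iterative_deepening(
    start: Hashable,
    successors: Callable[[Hashable], Iterable[Hashable]],
    is_goal: Callable[[Hashable], bool],
    max_cutoff: int = 50,
) -> Optional[List[Hashable]]:
    # Run depth-limited depth-first searches with cutoffs 1, 2, 3, ...;
    # return the path to the shallowest goal, or None if no goal is found
    # before max_cutoff (a safety bound so infinite trees still terminate).
    for cutoff in range(1, max_cutoff + 1):
        L = [(start, [start])]  # used as a stack, so each pass is depth-first
        while L:
            n, path = L.pop()
            if is_goal(n):
                return path
            if len(path) - 1 < cutoff:  # the depth of n is below the cutoff
                children = [(c, path + [c]) for c in successors(n)]
                L.extend(reversed(children))  # preserve left-to-right order
    return None

# Toy infinite tree: the root is 1 and the children of k are 2k and 2k + 1.
# Plain depth-first search would follow 1, 2, 4, 8, ... forever, but
# iterative deepening finds the shallowest path to 6.
path = iterative_deepening(1, lambda k: [2 * k, 2 * k + 1], lambda k: k == 6)
print(path)  # [1, 3, 6]
```

Because the cutoffs are tried in increasing order, the path returned is guaranteed to reach the shallowest goal, and each pass uses only the linear amount of memory computed for depth-first search above.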
How about the number of nodes examined? The order in which the nodes are expanded in our sample problem is shown in Figure 3.10 ("Node order in iterative deepening"); the nodes at depths 0 and 1 are labelled with a sequence of numbers, since they are examined multiple times.

In general, the number of nodes examined in the final, successful iteration is given by (3.4). In the previous iterations, the failing searches to depths 1, 2, ..., d - 1 will need to examine the entire tree at these depths; the size of the tree to depth l is given by

    1 + b + ... + b^l = (b^(l+1) - 1)/(b - 1).

The total number of nodes examined in the failing searches is therefore the sum of this expression for l = 1, ..., d - 1, which is

    (b^(d+1) - b^2 - (d - 1)(b - 1))/(b - 1)^2.

Combining this with (3.4) and simplifying the result, the total number of nodes examined in this approach is seen to be

    (b^(d+2) + b^(d+1) + b^2 d - 4bd - b^2 - b + 3d)/(2(b - 1)^2).          (3.5)

For large d, this is dominated by

    b^(d+1) (b + 1)/(2(b - 1)^2).

Since the time needed by depth-first search is dominated by b^(d+1)/(2(b - 1)), the ratio of the time needed by iterative deepening to that needed by depth-first search is given by

    (b + 1)/(b - 1).                                                        (3.6)

Typical values of this expression appear in Figure 3.11 ("Cost of iterative deepening"). As we can see from the table, the cost of repeating the work at shallow depths is not prohibitive. Since iterative deepening avoids the problems encountered in depth-first search, it is the method of choice in many search problems.

As usual, we examine the case b = 2 a bit more closely. In the depth-first case, we need to examine half of the entire tree; since the tree is of size roughly 2^(d+1), depth-first search needs to examine 2^d nodes. In iterative deepening, the failing searches take time roughly 2^2, 2^3, ..., 2^d, since these are the sizes of the subtrees examined. The total of these times is twice the time needed by the successful search to depth d, so the overall factor in the b = 2 case is 3, in agreement with (3.6) and Figure 3.11.

How good is iterative deepening? In a very real sense, it is an optimal blind search procedure. The reason is that for a fixed branching factor, it takes space O(d) and time O(b^d); as we are about to see, it is impossible to improve on this behavior.(7) It is clear, for example, that if the program is expected to return the path to a goal node at depth d, at least space d is required. What about time? Since the goal node is distributed randomly along the fringe and the search technique is blind, it follows that we will, on average, need to examine at least half of this fringe. Since the fringe is of size b^d, we can therefore expect any blind search procedure to take time O(b^d).

Given this, we can strengthen our remarks with regard to the space needed by the following argument. Consider any algorithm that takes time O(b^d) to run; how much space does it need in order to distinguish its own internal states? If all of the internal states of the algorithm are to remain distinct, it clearly needs at least as much space as it would need to count to b^d. But counting to b^d needs log2(b^d) bits; since log2(b^d) = d log2(b) is O(d) for fixed b, we see that any blind search algorithm needs space O(d), whether it is expected to return a path to the answer or not.

(7) By O(d) here, I mean that for large depth, the amount of space needed is dominated by a term of the form kd for some constant k. In a similar way, the amount of time needed is dominated by a term of the form k b^d.

3.4 ITERATIVE BROADENING

Although iterative deepening is optimal in the sense that O(b^d) is the best one can do in general, there is one other blind search technique that can often improve on this result in practice, not by reducing the complexity of the problem, but by making it likely that a smaller portion of the fringe is examined if multiple goal nodes exist. This technique is known as iterative broadening.

As we noted in the previous chapter, it is important, where possible, to recognize bad choices early in the search, so that we need not expand the entire search tree under these choices before finding a goal node. But how are we to do this if the search is blind? The answer lies in realizing that in most practical problems the goal nodes are not randomly distributed, because it is possible to make fatal mistakes early in the search. In Figure 3.12 ("Backtracking"), for example, it might well be that the insertion of the word TONAL basically doomed us: Although we could fill in a variety of additional words, we would eventually have to insert TOAST as described earlier, and would then be unable to fill the marked word in the figure. What iterative broadening does is to take continuing failure to find a goal below a node high in the search tree as evidence that such a fatal mistake has been made.

Just as iterative deepening imposes artificial depth limits on the search and gradually increases those limits until a solution is found, iterative broadening imposes artificial breadth limits, increasing them until a solution is found.

PROCEDURE  Iterative broadening

1. Set c = 2; this is the current breadth cutoff.
2. Set L to be the set of initial nodes in the problem.
3. Let n be the first node on L. If L is empty, increment c and return to step 2.
4. If n is a goal node, stop and return it and the path from the initial node to n.
5. Otherwise, remove n from L. Add to the front of L the first c of n's children, labelling each with its path from the initial node. Return to step 3.

We initially search the tree with a breadth cutoff of 2, then of 3, and so on. A breadth cutoff of c means that a maximum of c children of any given node can be examined before the node is abandoned as a failure. In Figure 3.13 ("Search with breadth cutoff") we show a tree with branching factor 4 but a breadth cutoff of 3. The node ordering given by iterative broadening in our usual example is shown in Figure 3.14 ("Iterative broadening").

As with iterative deepening, iterative broadening is a viable search technique because there are at most b searches, and the search with breadth cutoff c takes time O(c^d), which will be small relative to b^d if c < b. The maximum amount of time spent by the approach is approximately

    2^d + 3^d + ... + b^d,

which is approximately b^(d+1)/d for large b (a factor of b/d worse than depth-first search) and is approximately b^d (virtually no cost at all) for large d. Iterative broadening can lead to large savings in the amount of time needed to solve search problems if multiple goal nodes exist and if it is possible to make fatal errors early in the search; in the large-depth limit, the method is likely to find a solution more quickly than simple depth-first search whenever the number of goal nodes exceeds three.

Nonsystematic search

Iterative broadening is a useful search technique because it keeps us from concentrating too much of our effort in a region of the search space that contains no goal nodes. This lesson is so important that a variety of authors have recently suggested that when you fail to find a goal in solving a search problem, you should jump to a distant portion of the search space that might be better. The problem with doing this is that it is difficult to keep track of which nodes you've examined and which you haven't; you can't store all of the examined nodes without using an exponential amount of memory. So what these new approaches suggest is that you simply not worry about this, possibly searching some portions of the search space many times and other portions not at all. Because of the scattershot nature of the resulting search, these methods are called nonsystematic.

There is no guarantee that a nonsystematic search will find a goal node every time that the search space contains one. In spite of this, the observed performance of nonsystematic methods on large search problems is often comparable to, and in some cases better than, the performance of systematic approaches. Roughly speaking, the search spaces
in these examples are so large that the nonsystematic methods are unlikely to examine some area of the space over and over again, and the spaces are also so large that a complete examination in the search for a solution (where the systematic methods could be expected to shine) is simply impractical. This work is all very new, but the next few years should shed substantial light on the general question of whether systematic or nonsystematic methods should be preferred when solving hard problems.

3.5 SEARCHING GRAPHS

Throughout this chapter, we have assumed that the search space being examined is a tree and not a graph, so that it is impossible to reach the same node using each of several paths from the root node. This assumption is clearly wrong: In sliding-tile puzzles, for example, there may be many ways to arrive at a particular configuration of the tiles. The towers of Hanoi problem is similar, as is the crossword-puzzle problem. In Figure 3.15 ("Missionaries and cannibals again") we have redrawn Figure 2.11 to show explicitly that the search space in the missionaries and cannibals problem is a graph and not a tree.

We can, of course, search a graph by pretending that it is a tree, as in Figure 2.11 itself. The problem with doing this is that the search may become less efficient as a result. There are two separate ways in which graph search can be simplified.

Open and Closed Lists

The first way is to simply avoid adding a node to L if it already appears there; after all, there is never any reason to commit to searching the same node twice. More efficient still is to keep track of those nodes that have been examined and removed from L; since these nodes have already been considered, we have no need to consider them again should they be regenerated for some reason.

The basic search procedure, Procedure 2.2.1, does not keep track of the nodes that have been examined and removed from L; if we want to do this, we need to maintain a list of these nodes. This list is often referred to as the list of closed nodes; the list L of nodes still to be examined is called the list of open nodes. Of course, finding any particular node in the open or closed list takes both time and space, and these resources need to be justified via an effective reduction in the size of the graph being searched. By using hash tables, the time to find (or fail to find) a particular node in the open or closed list can be reduced to a constant; the space requirements are a bit more difficult to deal with.

FIGURE 4.1  An instance of the 8-puzzle.

CHAPTER 4
HEURISTIC SEARCH

We argued in the last chapter that any technique used to search blindly through a space of depth d and branching factor b would of necessity take time O(b^d) to find a single goal node on the fringe of the search tree. In practice, this is unacceptable. As an example, if the time needed to generate a plan grows exponentially with the length of the plan, a planner will be unable to produce plans of interesting length.

In exploring a large search space, the trick is to use additional information about the problem being considered to avoid examining most of the nodes that might conceivably lead to solutions. Instead of selecting the node to expand next in a domain-independent way, as in the previous chapter, we need to use domain-specific information for the same purpose.

Of course, the problem of finding the best node to expand next is generally no easier than the search problem that we are trying to solve. So what we typically do is apply some sort of heuristic, or "rule of thumb," to decide what to do.(8) Given a list of nodes to be expanded, we might simply guess how far each is from a goal node and expand the one we thought to be closest.

As an example, consider the instance of the 8-puzzle shown in Figure 4.1, and suppose that our goal is to arrange the tiles from 1 to 8 clockwise around the edge of the puzzle. If we estimate the distance to the goal by simply counting the number of misplaced tiles, then we will expect it to take three moves to solve the problem in Figure 4.1, since three tiles (the 6, 7, and 8) are misplaced there. If we move the 6 to the right, then only two tiles are misplaced; moving the 8 down leaves the heuristic estimate of the distance to the goal unchanged, while moving the 5 to the left actually increases the expected distance. We summarize this below, describing each move by the direction in which the blank travels:

    Blank moves    Distance
    left           2
    right          4
    up             3

We see from this that if our search heuristic is to move as quickly toward the goal as possible, we will select the move of moving the blank to the left in Figure 4.1. Extending this analysis will lead us to move the blank up next, and then to the right, arriving at the goal configuration after three moves. This technique always expands next the child of the current node that seems closest to the goal.

(8) The description of a heuristic as a "rule of thumb" is Feigenbaum's.

Of course, things may not always work out so easily, because our heuristic function estimating the distance to the goal may make mistakes in some cases. After moving the blank up in Figure 4.1, we still estimate the distance to the goal as being only three moves, although it is not too hard to see that four are actually required. Another problem is that we need to limit the amount of time spent computing the heuristic values used in selecting a node for expansion; we have already discussed this in Chapter 2, where we discussed the inevitable tradeoff between base-level and metalevel activity. There is no real solution to these problems of inaccurate or computationally expensive heuristics; the bottom line is that search problems sometimes will take an exponential amount of time to solve, and there is simply no way around this. But there is a third problem with our approach that is more tractable.

This problem can best be understood by considering an example. Suppose that we are looking for the solution to a maze, where our estimate of the value of a node is simply the Manhattan distance between our current position and the exit from the maze. Thus, at any given point, we do our best to move closer to the exit. Now consider the maze shown in Figure 4.2 ("A maze with two solutions"), where we enter the maze on the left and exit on the right. As shown, there are two solutions: a simple one that begins with a step downward, and a much more complex one that begins with a step directly toward the exit but is then deflected in other directions. The algorithm we have described will find the longer of these two paths, since it will begin by moving toward the goal and will then be committed to this choice.

If we want to find the shortest path to the goal in a problem such as this, a method that avoids this difficulty is a search algorithm known as A*, and we will discuss it later in this chapter. Before doing so, however, let me present heuristic search from a rather different perspective, drawing an analogy between these problems and the conventional problem of maximizing a function of several variables.

4.1 SEARCH AS FUNCTION MAXIMIZATION

Suppose that we have designed a robot whose purpose is to explore the surface of Mars. After being released from a landing craft of some sort, the robot is expected to wander around the surface of the planet to the most interesting location it can find, and then to take a variety of surface measurements at that point. What makes a surface location interesting is a function of a variety of factors: apparent surface makeup, geological features, alien footprints, what have you. The robot has no difficulty evaluating the interest of the point at which it finds itself, but cannot predict the interest of any other point without actually visiting it. How is it to find the most interesting point on the planet's surface?

From a formal point of view, this is a simple function-maximization problem: We have a function f(p) that measures the interestingness of a point p on the surface, and we need to find that value of p for which the function's value is maximal.

Hill Climbing

What is the functional analog to the search procedure described earlier? Informally, we always move in the direction of apparently largest f; the functional analog is to attempt to find the global maximum of a function of many variables by always moving in the direction in which the rate of change is greatest. This technique is known as hill climbing (or steepest ascent) for the obvious reason: It is as if one attempted to find the highest point in a landscape by simply walking as much uphill as possible. Here is the search version:

PROCEDURE  Hill climbing

1. Set L to be a list of the initial nodes in the problem, sorted by their expected distance to the goal; nodes expected to be close to the goal should precede those that are farther from it.
2. Let n be the first node on L. If L is empty, fail.
3. If n is a goal node, stop and return it and the path from the initial node to n.
4. Otherwise, remove n from L. Sort n's children by their expected distance to the goal, label each child with its path from the initial node, and add the children to the front of L. Return to step 2.

In this version of the algorithm, we always take a step from a child of the previously expanded node when possible; this gives hill climbing a depth-first flavor. If we drop this restriction, we get an algorithm known as best-first search.

PROCEDURE  Best-first search

1. Set L to be a list of the initial nodes in the problem.
2. Let n be the node on L that is expected to be closest to the goal. If L is empty, fail.
3. If n is a goal node, stop and return it and the path from the initial node to n.
4. Otherwise, remove n from L and add to L all of n's children, labelling each with its path from the initial node. Return to step 2.

There are three obvious problems with hill climbing, and these are depicted in Figure 4.3. The first, and most important, involves the problem of local maxima: It is all too easy to construct situations in which the rate of change is negative in all directions even though the global maximum

FIGURE 4.3  Difficulties encountered in hill climbing.