# Class Note for CMPSCI 683 at UMass(10)

These 12-page class notes were uploaded by an elite notetaker on Friday, February 6, 2015. They belong to a course at the University of Massachusetts taught in the Fall. Since upload, they have received 20 views.

## Search and Agents

CMPSCI 683, Victor Lesser, Fall 2004. Reading: Sections 4.1 and 4.2.

Topics: material from the end of the last lecture; continuation of simple search; the use of background knowledge to accelerate search; how to devise heuristics; the A* and IDA* algorithms.

### Characteristics of more complex search

- Subproblem interaction
- A more complex view of operator/control costs
- Uncertainty in search
- Non-monotonic domains
- Search redundancy

### The four phases of problem solving

1. **Goal formulation.** Based on the current world state, determine an appropriate goal; a goal often describes desirable states of the world. Goal formulation may involve general goals or specific goal instances.
2. **Problem formulation.** Formalize the problem in terms of states and actions (a state-space representation).
3. **Problem solution via search.** Find sequence(s) of actions that lead to goal state(s), possibly selecting the best of the sequences.
4. **Execution.** Carry out the actions in the selected sequence.

A completely autonomous agent would have to carry out all four phases. Often, however, goal and problem formulation are carried out prior to agent design and the agent is given specific goal instances, so the agent performs only search and execution. For non-agent problem solving, a solution may simply be a specific goal that is achievable (reachable), and there may be no execution phase. The execution phase for a real-world agent can be complex, since the agent must deal with uncertainty and errors.

### Why goals help

Adopting goals and using them to direct problem solving can simplify agent design. An intelligent (rational) agent selects the best actions relative to a performance measure (PM), but PMs may be complex, with multiple attributes that must be traded off. Goals simplify reasoning by limiting the agent's objectives while still organizing and directing its behavior. Optimal versus satisficing behavior: best possible performance versus merely achieving the goal. An agent may use both, with goals to identify acceptable states and a PM to differentiate among goals and their possible solutions.
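The state-space formulation of phase 2 can be made concrete with a tiny example. The following sketch is not from the notes: the two-location vacuum world and all names (`VacuumWorld`, `successors`, `is_goal`) are illustrative.

```python
# Illustrative state-space formulation: states, goal test, successor generation.
class VacuumWorld:
    """Two-location vacuum world: state = (robot location, dirt at A, dirt at B)."""

    def initial_state(self):
        return ("A", True, True)          # robot at A, both squares dirty

    def is_goal(self, state):
        _, dirt_a, dirt_b = state
        return not dirt_a and not dirt_b  # goal: no dirt anywhere

    def successors(self, state):
        """Yield (action, next_state) pairs: the operators of the problem."""
        loc, dirt_a, dirt_b = state
        yield ("Right", ("B", dirt_a, dirt_b))
        yield ("Left", ("A", dirt_a, dirt_b))
        if loc == "A":
            yield ("Suck", ("A", False, dirt_b))
        else:
            yield ("Suck", ("B", dirt_a, False))
```

Phase 3 then searches over these states, and phase 4 executes the action sequence the search returns.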
### Total cost

Complete search-based problem solving involves both the search process and the execution of the selected action sequence. The total cost of search-based problem solving is the sum of the search cost and the path cost (the operator-sequence cost). Dealing with total cost may require:

- combining apples and oranges, e.g., travel miles and CPU time;
- trading off search time against solution cost (optimality), i.e., resource allocation.

These issues must be handled in the performance measure.

### Problem types

Problems can vary in a number of ways that affect the details of how problem-solving search agents are built. One categorization, presented in AIMA, relates to accessibility and determinism:

- **Single-state problems.** The agent knows its initial state and the exact effect of each action; it searches over single states.
- **Multiple-state problems.** The agent cannot know its exact initial state and/or the exact effects of its actions, so it must search over sets of states; it may or may not be able to find a guaranteed solution.
- **Contingency problems.** Exact prediction is impossible, but states may be determined during execution via sensing. The agent must compute a tree of actions covering every contingency. Interleaving search and execution may be better: respond to the state of the world after executing an action with an uncertain outcome (as in RTA*).
- **Exploration problems.** The agent may have no information about the effects of its actions and must experiment and learn; this is search in the real world rather than in a model.

### Uninformed vs. informed search

Uninformed strategies do not use any information about how close a node might be to a goal (the additional cost to reach the goal); they differ only in the order in which nodes are expanded and in their operator-cost assumptions. While uninformed search methods can in principle find solutions to any state-space problem, they are typically too inefficient to do so in practice. Informed search methods use problem-specific knowledge to improve average search performance.
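The expansion order of an uninformed strategy can be seen in a minimal breadth-first search, which expands shallowest nodes first and consults no estimate of distance to the goal. This is an illustrative sketch, not code from the notes.

```python
from collections import deque

def breadth_first_search(start, successors, is_goal):
    """Uninformed search: expand nodes in FIFO (shallowest-first) order.
    successors(s) yields (action, next_state) pairs."""
    frontier = deque([(start, [])])   # (state, action sequence so far)
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        for action, nxt in successors(state):
            if nxt not in visited:    # avoid regenerating earlier states
                visited.add(nxt)
                frontier.append((nxt, path + [action]))
    return None                       # no goal reachable
```

Swapping the `deque` for a stack would give depth-first search: the strategies differ only in which frontier node is expanded next.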
### Heuristics

A heuristic is problem-specific knowledge that reduces expected search effort. Informed search uses a heuristic evaluation function that denotes the relative desirability of expanding a node (state), often including some estimate of the cost to reach the nearest goal state from the current state. In blind search techniques, such knowledge can be encoded only via the state-space and operator representation.

Examples:

- Travel planning: Euclidean distance.
- 8-puzzle: Manhattan distance; number of misplaced tiles.
- Traveling salesman problem: minimum spanning tree.

Where do heuristics come from? Heuristics can be generated via simplified models of the problem; simplification can be modeled as deleting constraints on operators. Key property: the heuristic can be calculated efficiently.

### Best-first search

1. Start with OPEN containing just the initial state.
2. Until a goal is found or there are no nodes left on OPEN, do:
   1. Pick the best node on OPEN.
   2. Generate its successors.
   3. For each successor:
      - If it has not been generated before, evaluate it, add it to OPEN, and record its parent.
      - If it has been generated before, change the parent if this new path is better than the previous one; in that case, update the cost of getting to this node and to any successors that this node may already have.

Idea: use an evaluation function for each node that estimates its desirability, and expand the most desirable unexpanded node. Implementation: the OPEN list is kept sorted in decreasing order of desirability.

Avoiding repeated states: do not regenerate the state you just came from; do not create paths with cycles; do not generate any state that was generated before (using a hash table to store all generated nodes).

### Greedy search

Greedy search is a simple form of best-first search. Its heuristic evaluation function h(n) estimates the cost from n to the closest goal (for example, the straight-line distance from n to Bucharest). Greedy search expands the node that appears to be closest to the goal. Properties: it uses a lot of space, and the resulting algorithm is complete in finite trees but not optimal.
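The OPEN-list loop above can be sketched with a priority queue. This is an illustrative reading of the pseudocode, not code from the notes; step 2.3's re-parenting is handled lazily by allowing duplicate queue entries and recording the best score seen per state.

```python
import heapq

def best_first_search(start, successors, is_goal, f):
    """Best-first search: OPEN is a priority queue ordered by evaluation f."""
    open_list = [(f(start), start)]          # step 1: OPEN = {initial state}
    parent = {start: None}
    best = {start: f(start)}                 # hash table of generated states
    while open_list:                         # step 2
        _, state = heapq.heappop(open_list)  # 2.1: pick the best node on OPEN
        if is_goal(state):
            path = []
            while state is not None:         # reconstruct path via parent links
                path.append(state)
                state = parent[state]
            return list(reversed(path))
        for nxt in successors(state):        # 2.2: generate successors
            score = f(nxt)
            if nxt not in best or score < best[nxt]:  # 2.3: new, or better path
                best[nxt] = score
                parent[nxt] = state
                heapq.heappush(open_list, (score, nxt))
    return None
```

With `f = h` this is greedy search; the same skeleton becomes A* when `f` is total path cost.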
### Overview of search strategies

- Best-first search (a.k.a. ordered search): greedy search, A*; ordered depth-first search (a.k.a. hill climbing).
- Memory-bounded search: iterative-deepening A* (IDA*), simplified memory-bounded A* (SMA*).
- Time-bounded search: anytime A*, RTA* (searching and acting).
- Iterative-improvement algorithms (generate-and-test approaches): steepest-ascent hill climbing, random-restart hill climbing, simulated annealing.
- Multi-level/multi-dimensional search.

### Choosing a search strategy

There are a variety of factors that can affect the choice of search strategy and the direction of search:

- branching factor;
- expected depth/length of the solution;
- time vs. space limitations;
- multiple goal states and/or initial states;
- implicit or explicit goal-state specification;
- uniform vs. non-uniform operator costs;
- any solution vs. an optimal solution;
- solution path vs. solution state;
- number of acceptable solution states;
- different forward vs. backward branching factors;
- whether the same state can be reached with different operator sequences.

### A*

[Figure: example of routing with A* under two different heuristics.]

A* is similar to best-first search except that the evaluation is based on total path (solution) cost: f(n) = g(n) + h(n), where g(n) is the cost of the path from the initial state to n and h(n) is an estimate of the remaining distance from n to a goal.

### Admissibility and monotonicity

An admissible heuristic never overestimates the actual cost to reach a goal. A monotone heuristic is one whose f-value never decreases along any path. When h is admissible, monotonicity can be maintained by combining it with pathmax: f(n') = max(f(n), g(n') + h(n')). Does monotonicity imply admissibility? Intuitive explanation for monotone h: if h is a lower bound, then f(n) is a lower bound on the cost of the shortest path through that node.
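The evaluation f(n) = g(n) + h(n) can be turned into a compact A* sketch. This is illustrative, not code from the notes; `successors(s)` is assumed to yield `(next_state, step_cost)` pairs.

```python
import heapq

def astar(start, successors, is_goal, h):
    """A* search with f(n) = g(n) + h(n).  With an admissible h, the first
    goal popped from OPEN is an optimal solution."""
    open_list = [(h(start), 0, start, [start])]   # entries: (f, g, state, path)
    best_g = {start: 0}                           # cheapest known g per state
    while open_list:
        f, g, state, path = heapq.heappop(open_list)
        if is_goal(state):
            return path, g                        # solution path and its cost
        for nxt, cost in successors(state):
            g2 = g + cost
            if nxt not in best_g or g2 < best_g[nxt]:  # keep only better paths
                best_g[nxt] = g2
                heapq.heappush(open_list, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")
```

Note that g is the exact cost so far while h is only an estimate; admissibility of h is what makes the first goal popped optimal.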
Therefore f never decreases along any path, and the first solution found is optimal, provided a solution is accepted only when its f-value is less than or equal to the f-value of every other open node.

### Optimality of A* (proof sketch)

Let O be an optimal solution with path cost f*. Let SO be a suboptimal goal state, so g(SO) > f*. Suppose A* terminates the search with SO, and let n be a leaf node on the optimal path to O. Then:

- f* ≥ f(n), by admissibility of h;
- f(n) ≥ f(SO), since n was not chosen for expansion;
- f(SO) = g(SO), since SO is a goal and h(SO) = 0.

Hence f* ≥ g(SO), a contradiction.

### Completeness of A*

A* is complete unless there are infinitely many nodes with f(n) < f*. In particular, A* is complete when (1) there is a positive lower bound on the cost of operators and (2) the branching factor is finite.

### Optimal efficiency

A heuristic is locally admissible (consistent) if h(i) ≤ cost(i, j) + h(j) for every successor j of i, and h(goal) = 0; each state reached then has the minimal g-value. For a given heuristic function, no optimal algorithm is guaranteed to do less work than A* in terms of nodes expanded: aside from ties in f, A* expands every node necessary for the proof that we have found the shortest path, and no other nodes.

[Figure: a node n1 with f(n1) < f* is always expanded before a node n2 with f(n2) > f*; nodes with f-values above f* need not be expanded.]

Questions: What are the implications of local monotonicity? How much storage is needed? What happens if h1(n) < h2(n) for all n (h2 dominates h1)? What are the implications of overestimating h? Suppose you can bound the overestimation.

### Measuring heuristic performance

While informed search can produce dramatic average-case improvements in complexity, it typically does not eliminate the potential for exponential worst-case performance. The performance of heuristic functions can be compared using several metrics:

- average number of nodes expanded/generated, N;
- penetrance, P = d/N;
- effective branching factor b*: if the solution depth is d, then b* is the branching factor that a uniform search tree would have to have in order to generate N nodes, i.e., N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d. The EBF tends to be relatively independent of the solution depth.
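The effective-branching-factor equation has no closed form for general d, but b* can be found numerically. A sketch using bisection (illustrative, not from the notes):

```python
def effective_branching_factor(nodes, depth, tol=1e-6):
    """Solve N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d for b* by bisection."""
    def total(b):
        return sum(b ** i for i in range(depth + 1))  # 1 + b + ... + b^d

    lo, hi = 1.0, float(nodes)   # total(1) = d+1 <= N+1 <= total(N) for N >= 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < nodes + 1:
            lo = mid             # tree too small: branching factor is larger
        else:
            hi = mid
    return (lo + hi) / 2
```

For example, a uniform tree with b = 2 and d = 2 generates N = 6 nodes beyond the root, and the solver recovers b* ≈ 2.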
Note that these definitions completely ignore the cost of applying the heuristic function.

[Table: search cost (nodes generated) and effective branching factor for iterative-deepening search vs. A* with two heuristics on the 8-puzzle, from AIMA.]

### The cost of heuristics

Search cost involves both the cost to expand nodes and the cost to apply the heuristic function, and typically there is a trade-off between the cost and the performance of a heuristic function. For example, we can always get a perfect heuristic function by having the function do a search to find the solution and then use that solution to compute h(n). This trade-off is often referred to as the meta-level vs. base-level trade-off: the base level is the operator level at which the problem will actually be solved, while the meta-level is the control level at which we decide how to solve the problem. We must evaluate the cost to execute the heuristic function relative to the cost of expanding nodes and the reduction in nodes expanded.

### IDA*: motivation

A* requires OPEN and CLOSED lists for remembering nodes, which can lead to very large storage requirements. Instead, exploit the idea that f = g + h never overestimates the actual cost f*, and create incremental subspaces that can be searched depth-first with less storage. Beginning with an f-bound equal to the f-value of the initial state, perform a depth-first search bounded by the f-bound. Unless the goal is found, increase the f-bound to the lowest f-value found in the previous search that exceeds the previous f-bound, and restart the depth-first search. The key issue is how much the f-bound grows in each iteration: how bad an underestimate is h, i.e., how many steps does it take to reach f*? In the worst case this is N node computations for A* versus N^2 for IDA*. IDA* thus uses depth-first search with an f-cost limit instead of a depth limit; it is complete and optimal, but it uses less memory, O(bf*/δ), and more time than A*.
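The bound-and-restart loop just described can be sketched as follows. This is illustrative, not code from the notes; cycle checking along the current path is an added detail, and `successors(s)` is assumed to yield `(next_state, step_cost)` pairs.

```python
def ida_star(start, successors, is_goal, h):
    """IDA*: depth-first search bounded by an f-cost threshold; the threshold
    is raised each round to the smallest f-value that exceeded the old bound."""
    bound = h(start)                  # initial f-bound: f-value of the start
    path = [start]

    def dfs(state, g, bound):
        f = g + h(state)
        if f > bound:
            return f                  # prune, reporting the overflowing f-value
        if is_goal(state):
            return "FOUND"
        minimum = float("inf")
        for nxt, cost in successors(state):
            if nxt in path:           # avoid cycles on the current path
                continue
            path.append(nxt)
            result = dfs(nxt, g + cost, bound)
            if result == "FOUND":
                return "FOUND"        # leave the solution path in `path`
            minimum = min(minimum, result)
            path.pop()
        return minimum

    while True:
        result = dfs(start, 0, bound)
        if result == "FOUND":
            return list(path)
        if result == float("inf"):
            return None               # whole space exhausted: no solution
        bound = result                # lowest f-value that exceeded the bound
```

Only the current path is stored, which is where the memory advantage over A* comes from; the price is re-expanding nodes on every restart.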
### Algorithm: Iterative-Deepening A*

1. Set THRESHOLD to the heuristic evaluation of the start state.
2. Conduct a depth-first search based on minimal cost from the current node, pruning any branch whose total cost function g + h' exceeds THRESHOLD. If a solution path is found during the search, return it.
3. Otherwise, increment THRESHOLD by the minimum amount by which it was exceeded during the previous step, and go to Step 2.

Since the threshold is derived from f-values on actual paths, the estimate never overestimates the optimal solution cost and never decreases.

### f-cost contours

Monotonic heuristics allow us to view A* as exploring increasing f-cost contours. The more informed the heuristic, the more the contours will be stretched toward the goal, i.e., the more focused they will be around the optimal path.

[Figure: Romania map with nodes labeled f = g + h, where the h-values are straight-line distances to Bucharest. What is the next contour?]

### IDA* vs. A*

- IDA* takes asymptotically the same time as A* but only O(d) space, versus O(b^d) for A*, and avoids the overhead of a sorted queue of nodes.
- IDA* is simpler to implement: no closed list and only a limited open list.
- In Korf's 15-puzzle experiments, IDA* solved all problems and ran faster even though it generated more nodes than A*; A* solved no problems due to insufficient space and ran slower than IDA*.

### Next lecture

Continuation of the discussion of IDA*; other time and space variations of A*: RBFS, SMA*, RTA*.
