# Foundations of Artificial Intelligence (CS 2710)

University of Pittsburgh

This 252-page set of class notes was uploaded by Jonas Bartell on Monday, October 26, 2015. The notes belong to CS 2710 at the University of Pittsburgh, taught by Staff in Fall.


## CS 2710 Foundations of AI, Lecture 5: Informed (heuristic) search, continued

Milos Hauskrecht, milos@cs.pitt.edu, 5329 Sennott Square

### Administration

- PS-1 is due today: the report before the class begins, the programs through ftp.
- PS-2 is out: due next week, on Wednesday, September 21, 2005 (report and programs).

### Evaluation-function-driven search

A search strategy can be defined in terms of a node evaluation function.

- Evaluation function: denoted f(n); it defines the desirability of a node to be expanded next.
- Evaluation-function-driven search: expand the node (state) with the best evaluation-function value.
- Implementation: a priority queue with nodes ordered by their evaluation-function values, best value first.

### Uniform-cost search

Uniform-cost search (Dijkstra's shortest path) is a special case of evaluation-function-driven search with f(n) = g(n).

- Path cost function g(n): the path cost from the initial state to n.
- Uniform-cost search can handle the general minimum-cost path-search problem, in which weights (costs) are associated with operators (links).
- Note: uniform-cost search relies on the problem definition only; it is an uninformed search method.

### Best-first search

Best-first search incorporates a heuristic function h(n) into the evaluation function f(n) to guide the search.

- Heuristic function: measures the potential of a state (node) to reach a goal, typically in terms of some estimate of the distance to a goal.
- Example of a heuristic function: assume a shortest-path problem with city distances on the connections. Straight-line distances between cities give additional information we can use to guide the search.

[Figure: the traveler problem with straight-line distance information; a road map with straight-line distances to Bucharest. The straight-line distances give an estimate of the cost of the path between two cities.]
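The evaluation-function-driven scheme can be sketched with a priority queue. This is an illustrative sketch, not code from the lecture; the graph, the function names, and the costs are invented for the example. With f(n) = g(n) it behaves as uniform-cost (Dijkstra) search.

```python
import heapq

def best_first_search(start, goal, neighbors, f):
    """Expand the node with the best (lowest) evaluation-function value f(g, n)."""
    frontier = [(f(0, start), 0, start, [start])]   # (f-value, g, state, path)
    best_g = {}                                     # cheapest known cost per state
    while frontier:
        fn, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        if best_g.get(state, float("inf")) <= g:
            continue                                # already expanded more cheaply
        best_g[state] = g
        for nxt, cost in neighbors[state]:
            heapq.heappush(frontier, (f(g + cost, nxt), g + cost, nxt, path + [nxt]))
    return None, float("inf")

# Uniform-cost search: f(n) = g(n), ignoring the state itself.
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)],
         "B": [("G", 1)], "G": []}
path, cost = best_first_search("S", "G", graph, f=lambda g, n: g)
```

Passing a different evaluation function f changes the strategy without changing the driver, which is the point of the evaluation-function-driven view.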
Special cases differ in the design of the evaluation function:

- Greedy search: f(n) = h(n).
- A* algorithm: f(n) = g(n) + h(n); IDA* is an iterative-deepening version of A*.

### Greedy search method

The evaluation function is equal to the heuristic function: f(n) = h(n). Idea: the node that seems to be the closest to the goal is expanded first.

Greedy search trace with f(n) = h(n) on the traveler problem (queues of cities with their straight-line-distance values):

- queue: Arad (366); expand Arad
- queue: Sibiu (253), Timisoara (329), Zerind (374); expand Sibiu
- queue: Fagaras (178), Rimnicu Vilcea (193), Timisoara (329), Arad (366), Zerind (374), Oradea (380); expand Fagaras
- queue: Bucharest (0), Rimnicu Vilcea (193), Sibiu (253), Timisoara (329), Arad (366), Zerind (374), Oradea (380); Bucharest is the goal.

### Properties of greedy search

- Completeness: No. We can loop forever; nodes that seem to be the best choices can lead to cycles. Elimination of state repeats can solve the problem.
- Optimality: No. Even if we reach the goal, we may be biased by a bad heuristic estimate; the evaluation function disregards the cost of the path built so far.
- Time complexity: O(b^m) in the worst case, but often better.
- Memory (space) complexity: O(b^m), often better.

[Figure: the greedy search result on the traveler problem with straight-line distances; the path found has total length 450, which is not optimal.]

### Greedy search, optimality, and A* search

- The problem with greedy search is that it can keep expanding paths that are already very expensive.
- The problem with uniform-cost search is that it uses only past exploration information (the path cost); no additional information is utilized.

A* search: f(n) = g(n) + h(n), where

- g(n) is the cost of reaching the state n;
- h(n) is an estimate of the cost from the current state to a goal;
- f(n) is an estimate of the total path length.
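To make the greedy-versus-A* comparison concrete, here is an illustrative sketch (not code from the lecture) that runs both strategies on a fragment of the traveler-problem map. The road and straight-line distances follow the standard textbook Romania example and may differ slightly from the numbers on the slides.

```python
import heapq

# Road distances (a fragment of the Romania map) and straight-line
# distances H to Bucharest, following the standard textbook example.
ROADS = {
    "Arad": [("Zerind", 75), ("Sibiu", 140), ("Timisoara", 118)],
    "Zerind": [("Arad", 75), ("Oradea", 71)],
    "Oradea": [("Zerind", 71), ("Sibiu", 151)],
    "Timisoara": [("Arad", 118)],
    "Sibiu": [("Arad", 140), ("Oradea", 151), ("Fagaras", 99), ("Rimnicu", 80)],
    "Fagaras": [("Sibiu", 99), ("Bucharest", 211)],
    "Rimnicu": [("Sibiu", 80), ("Pitesti", 97), ("Craiova", 146)],
    "Pitesti": [("Rimnicu", 97), ("Craiova", 138), ("Bucharest", 101)],
    "Craiova": [("Rimnicu", 146), ("Pitesti", 138)],
    "Bucharest": [],
}
H = {"Arad": 366, "Zerind": 374, "Oradea": 380, "Timisoara": 329,
     "Sibiu": 253, "Fagaras": 176, "Rimnicu": 193, "Pitesti": 100,
     "Craiova": 160, "Bucharest": 0}

def search(f):
    """Best-first search from Arad to Bucharest; f(g, n) ranks the frontier."""
    frontier = [(f(0, "Arad"), 0, "Arad", ["Arad"])]
    expanded = set()
    while frontier:
        _, g, city, path = heapq.heappop(frontier)
        if city == "Bucharest":
            return path, g
        if city in expanded:
            continue
        expanded.add(city)
        for nxt, d in ROADS[city]:
            heapq.heappush(frontier, (f(g + d, nxt), g + d, nxt, path + [nxt]))

greedy_path, greedy_cost = search(lambda g, n: H[n])        # f(n) = h(n)
astar_path, astar_cost = search(lambda g, n: g + H[n])      # f(n) = g(n) + h(n)
```

Greedy search commits to the Fagaras route (length 450), while A* finds the cheaper route through Rimnicu Vilcea and Pitesti (length 418).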
An additional condition: an admissible heuristic satisfies h(n) ≤ h*(n) for all n, where h*(n) is the true cost from n to the goal.

### A* search example

A* search trace with f(n) = g(n) + h(n) on the traveler problem (queues of cities with their f-values):

- queue: Arad (366); expand Arad
- queue: Rimnicu Vilcea (413), Fagaras (417), Timisoara (447), Zerind (449), Oradea (526), Arad (646); expand Rimnicu Vilcea and then Pitesti (417)
- queue: Fagaras (417), Bucharest (418), Timisoara (447), Zerind (449), Oradea (526), Craiova (526), Sibiu (553), Rimnicu Vilcea (607), Arad (646); expand Fagaras
- queue: Bucharest (418), Timisoara (447), Zerind (449), Oradea (526), Craiova (526), Sibiu (553), Rimnicu Vilcea (607), Arad (646); Bucharest is the goal, reached with f = 418.

### Optimality of A*

In general, a heuristic function h(n) can overestimate, be equal to, or underestimate the true distance h*(n) of a node to the goal. Is A* optimal for an arbitrary heuristic function?

[Figures: the traveler problem with a heuristic that overestimates some straight-line distances; A* then returns a path with total length 450, which is suboptimal.]

The answer is No: A* is not optimal for an arbitrary heuristic function.
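A small sketch (illustrative, not from the lecture; the graph and the numbers are invented) showing how an overestimating heuristic can make A* return a suboptimal path:

```python
import heapq

def astar(start, goal, graph, h):
    """Plain A* with f(n) = g(n) + h(n); returns (path, cost)."""
    frontier = [(h[start], 0, start, [start])]
    expanded = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in expanded:
            continue
        expanded.add(node)
        for nxt, cost in graph[node]:
            heapq.heappush(frontier, (g + cost + h[nxt], g + cost, nxt, path + [nxt]))

graph = {"S": [("A", 1), ("B", 1)], "A": [("G", 1)], "B": [("G", 3)], "G": []}
admissible = {"S": 2, "A": 1, "B": 3, "G": 0}      # h(n) <= true remaining cost
inadmissible = {"S": 2, "A": 5, "B": 3, "G": 0}    # h(A) = 5 overestimates (true cost 1)

good_path, good_cost = astar("S", "G", graph, admissible)      # optimal: cost 2 via A
bad_path, bad_cost = astar("S", "G", graph, inadmissible)      # suboptimal: cost 4 via B
```

With the admissible heuristic, the node A on the optimal path always looks at least as cheap as any suboptimal goal, which is exactly the inequality chain used in the optimality proof below; inflating h(A) breaks that chain.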
### Admissible heuristics and A* optimality

- In general, a heuristic function h(n) can overestimate, be equal to, or underestimate the true distance h*(n) of a node to the goal.
- Admissible heuristic condition: never overestimate the distance to the goal, i.e., h(n) ≤ h*(n) for all n.
- Example: the straight-line distance in the travel problem never overestimates the actual road distance.
- Is A* search with an admissible heuristic optimal?

### Optimality of A*: proof

Let G1 be the optimal goal, with the minimum path distance. Assume that we have a suboptimal goal G2, and let n be a node that is on the optimal path (start, …, n, …, G1) and is in the queue together with G2. Then:

- f(G2) = g(G2), since h(G2) = 0;
- g(G2) > g(G1), since G2 is suboptimal;
- g(G1) ≥ f(n), since h is admissible and n lies on the optimal path.

Thus f(G2) > f(n), and A* never selects G2 before n.

### Properties of A* search

- Completeness: Yes.
- Optimality: Yes, with an admissible heuristic.
- Time complexity: of the order of the number of nodes with f(n) smaller than the cost of the optimal path.
- Memory (space) complexity: the same as the time complexity; all nodes are kept in memory.

### Admissible heuristics

Heuristics are designed based on relaxed versions of problems.

Example: the 8-puzzle problem. [Figure: an initial position and the goal position of the 8-puzzle.] Admissible heuristics:

1. The number of misplaced tiles.
2. The sum of the distances of all tiles from their goal positions (the Manhattan distance).

We can have multiple admissible heuristics for the same problem.

- Dominance: heuristic function h1 dominates h2 if h1(n) ≥ h2(n) for all n.
- Combination: two or more admissible heuristics can be combined to give a new admissible heuristic. Assume two admissible heuristics h1 and h2; then h3(n) = max(h1(n), h2(n)) is admissible.

### IDA*

IDA* is an iterative-deepening version of A*.

- It progressively increases the evaluation-function limit, instead of the depth limit.
- It performs limited-cost depth-first search for the current evaluation-function limit.
- It keeps expanding nodes in a depth-first manner up to the evaluation-function limit.
- Problem: by how much should the evaluation limit be progressively increased?
- Solutions: (1) peek over the previous step boundary; (2) increase the limit by a fixed cost increment, say ε.

### IDA*: choosing the next limit

Solution 1: peek over the previous step boundary.

- Properties: the choice of the new cost limit influences how many nodes are expanded in each iteration, and we may find a suboptimal solution.
- Fix 1: increase the limit to the minimum f-value above the current limit.
- Fix 2: complete the search up to the limit to find the best solution.

Solution 2: increase the limit by a fixed cost increment ε.

- Properties: too many or too few nodes may be expanded; there is no control over the number of nodes.
- A solution within an accuracy difference of less than ε of the optimum is found.

## CS 2710 Foundations of AI, Lecture 2: Problem solving by searching

Milos Hauskrecht, milos@cs.pitt.edu, 5329 Sennott Square

### Example

Assume the problem of computing the roots of the quadratic equation ax² + bx + c = 0. Do you consider it a challenging problem? Hardly: we just apply the standard formula

x₁,₂ = (−b ± √(b² − 4ac)) / 2a.

### Solving problems by searching

Some problems have a straightforward solution: just apply the formula or follow a standardized procedure, as in the solution of the quadratic equation. That is hardly a sign of intelligence.
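The quadratic formula can be applied directly; a minimal sketch (illustrative, not from the lecture):

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0 via the standard formula."""
    d = cmath.sqrt(b * b - 4 * a * c)          # works for complex roots too
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

r1, r2 = quadratic_roots(1, -3, 2)             # x^2 - 3x + 2 = 0, roots 2 and 1
```

No exploration of alternatives is needed, which is exactly why this kind of problem does not require search.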
More interesting problems require search:

- more than one possible alternative needs to be explored before the problem is solved;
- the number of alternatives to search among can be very large, even infinite.

### Search example: the traveler problem

Find a route from one city (Arad) to another (Bucharest). [Figure: a road map of Romania.]

Another flavor of the traveler problem: find the route with the minimum length between S and T. [Figure: the road map with distances on the connections.]

### Example: Puzzle 8

Find the sequence of empty-tile moves from the initial game position to the designated target position. [Figure: an initial position and the goal position of the 8-puzzle.]

### Example: the N-queens problem

Find a configuration of N queens not attacking each other. [Figure: a valid goal configuration and a bad goal configuration.]

### A search problem is defined by

- Search space: the set of objects among which we search for the solution. Example objects: routes between cities, or N-queen configurations.
- Goal condition: the characteristics of the object we want to find in the search space. Examples: a path between cities A and B; a path between A and B with the smallest number of links; a path between A and B with the shortest distance; a non-attacking N-queen configuration.

### Search

The search process is the process of exploration of the search space. The efficiency of the search depends on:

- the search space and its size;
- the method used to explore (traverse) the search space;
- the condition used to test the satisfaction of the search objective (what it takes to determine that the desired goal object has been found).

Important to remember: a conveniently chosen search space and exploration policy can have a profound effect on the efficiency.

### Graph search

Many search problems can be naturally represented as graph-search problems. A typical example is route finding.
- The map corresponds to the graph: nodes to cities, links to the available connections between cities.
- Goal: find a route (path) in the graph from S to T.

A less obvious conversion: Puzzle 8. Find a sequence of moves from the initial configuration to the goal configuration:

- nodes correspond to states of the game;
- links correspond to valid moves of the player.

### Graph-search problem

- States: game positions, or locations in the map, represented by nodes in the graph.
- Operators: connections between cities, or valid moves.
- Initial state: the start position, or start city.
- Goal state(s): the target position(s), or target city (cities).

More complex versions of the graph-search problem:

- Find a minimal-length path: a route with the smallest number of connections, or the shortest sequence of moves that solves Puzzle 8.
- Find a minimum-cost path: a route with the shortest distance.

### Graph search: how to find the path between S and T?

A strawman solution: systematically generate all sequences of 1, 2, 3, … edges and check whether the sequence yields a path between S and T.

Can we do better? We are not interested in sequences that do not start in S and that are not valid paths. Solution: look only at valid paths starting from S.

Being smarter about the space in which we search for the solution pays off in terms of search-process efficiency.

### N-queens

Some problems can be converted to graph-search problems, but some problems are harder and less intuitive. Take, e.g., the N-queens problem: we look for a configuration, not a sequence of moves; there is no
distinguished initial state and there are no operators (moves).

### Graph search for N-queens

A trick: generate a configuration step by step, one queen per step.

- States (nodes) correspond to configurations of 0, 1, 2, 3, or 4 queens.
- Links (operators) correspond to the addition of a queen.
- Initial state: no queens placed on the board.
- Target: a complete configuration of non-attacking queens.

This is a different graph-search problem when compared to Puzzle 8 or route planning: we want to find only the target configuration, not a path.

### Two types of graph-search problems

- Path search: find a path between states S and T. Examples: the traveler problem, Puzzle 8. Additional goal criterion: a minimum-length (minimum-cost) path.
- Configuration search (constraint-satisfaction search): find a state (configuration) satisfying the goal condition. Examples: the N-queens problem, or the design of a device with a predefined functionality. Additional goal criterion: soft preferences among configurations, e.g., a minimum-cost design.

### Search problem

Search problems that can be represented as, or converted into, graph-search problems can be defined in terms of:

- Initial state: the state (configuration) we start to search from, e.g., the start city, or the initial game position.
- Operators: transform one state into another, e.g., valid connections between cities, or valid moves in Puzzle 8.
- Goal condition: defines the target state, e.g., the destination, or a winning position.

The search space (the set of objects among which we search for the solution) is now defined indirectly, through the initial state and the operators.

### Traveler problem formulation

- States: different cities.
- Initial state: the city Arad.
- Operators: moves to cities in the neighborhood.
- Goal condition: the city Bucharest.
- Type of the problem: path search.
- Possible solution cost: the path length.

### Puzzle 8 formulation

[Figure: an initial state and the goal state of the 8-puzzle.]

- States: tile configurations.
- Initial state: the initial configuration.
- Operators: moves of the empty tile.
- Goal: reach the winning configuration (the goal state).
- Type of the problem: path search.
- Possible solution cost: the number of moves.
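The Puzzle 8 operators can be sketched as a successor function. This is an illustrative sketch (the representation is my own, not from the lecture): states are 9-tuples read row by row, with 0 standing for the empty tile.

```python
def successors(state):
    """States reachable from `state` by one move of the empty tile (the 0)."""
    i = state.index(0)                        # position of the empty tile
    r, c = divmod(i, 3)
    result = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # up, down, left, right
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:       # stay on the 3x3 board
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]           # slide the neighboring tile
            result.append(tuple(s))
    return result
```

A corner blank has 2 successors, an edge blank 3, and a center blank 4, which is why the branching factor of the puzzle is at most 4.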
### N-queens problem formulation

- States: configurations of 0 to 4 queens on the board.
- Initial state: the no-queens configuration.
- Operators: add a queen to the leftmost unoccupied column.
- Goal: a configuration with 4 non-attacking queens.
- Type of the problem: configuration search.

### N-queens: an alternative formulation

[Figure: a bad goal configuration and a valid goal configuration.]

- States: different configurations of 4 queens on the board.
- Initial state: an arbitrary configuration of 4 queens.
- Operators: move one queen to a different unoccupied position.
- Goal: a configuration with non-attacking queens.
- Type of the problem: configuration search.

### Comparison of the two problem formulations

- Solution 1: the operators add a queen to the leftmost unoccupied column, giving a much smaller configuration space (at most 4⁴ complete configurations: one of 4 rows for each of the 4 columns).
- Solution 2: the operators move one of the queens, so the space consists of all placements of 4 queens.

An even better solution to the N-queens problem, with a smaller search space: let the operators add a queen to the leftmost unoccupied column such that it does not attack the already placed queens; at most 5·4·3·2 = 120 configurations altogether.

### Formulating a search problem

The search process is the process of exploration of the search space, and its efficiency depends on the search space and its size, the method used to explore it, and the condition used to test the satisfaction of the search objective. Think twice before solving a problem by search: choose the search space and the exploration policy carefully.

### Search process

Exploration of the state space through successive application of operators from the initial state. A search tree is a kind of search exploration trace, with nodes corresponding to the explored states.

### Search tree

A search tree is a search
exploration trace:

- it is different from the graph defining the problem;
- states can repeat in the search tree.

[Figure: the problem graph and a corresponding search tree.]

### General search algorithm

```
General-search(problem, strategy):
    initialize the search tree with the initial state of problem
    loop
        if there are no candidate states to explore, return failure
        choose a leaf node of the tree to expand next, according to strategy
        if the node satisfies the goal condition, return the solution
        expand the node and add all of its successors to the tree
    end loop
```
Search methods differ in how they explore the space, that is, in how they choose the node to expand next.

### Implementation of search

Search methods can be implemented using a queue structure:

```
General-search(problem, Queuing-fn):
    nodes <- Make-queue(Make-node(Initial-state(problem)))
    loop
        if nodes is empty then return failure
        node <- Remove-node(nodes)
        if Goal-test(problem) applied to State(node) is satisfied
            then return node
        nodes <- Queuing-fn(nodes, Expand(node, Operators(problem)))
    end loop
```

Candidates are added to nodes, the structure representing the queue.

A search tree node is a data structure constituting a part of the search tree, with a parent, children, and other attributes: state, value (cost), depth, and path cost. The Expand function applies the Operators to the state represented by the search tree node; together with the Queuing-fn, it fills in these attributes.

### Comparison of search methods

Properties of different search methods:

- Completeness: does the method find the solution if it exists?
- Optimality: is the solution returned by the algorithm optimal, i.e., does it give a minimum-length path?
- Space and time complexity: how much time does it take to find the solution, and how much memory is needed to do this?
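The queue-based procedure can be sketched in Python (an illustrative sketch, not code from the lecture; the problem encoding is my own). The queuing function determines the strategy: appending the newly expanded nodes at the end of the queue gives breadth-first search, putting them at the front gives depth-first search.

```python
from collections import deque

def general_search(initial, goal_test, expand, queuing_fn):
    """Queue-based search; queuing_fn merges newly expanded nodes into the queue."""
    nodes = deque([[initial]])               # queue of paths (search-tree branches)
    while nodes:
        path = nodes.popleft()               # Remove-node
        if goal_test(path[-1]):
            return path
        successors = [path + [s] for s in expand(path[-1]) if s not in path]
        nodes = queuing_fn(nodes, successors)
    return None                              # failure

bfs = lambda nodes, new: deque(list(nodes) + new)     # FIFO: breadth-first
dfs = lambda nodes, new: deque(new + list(nodes))     # LIFO: depth-first

graph = {"S": ["A", "B"], "A": ["G"], "B": ["C"], "C": ["G"], "G": []}
path = general_search("S", lambda s: s == "G", lambda s: graph[s], bfs)
```

Only the queuing function changes between strategies; the driver, the goal test, and the expansion step stay the same, mirroring the pseudocode above.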
Complexities are measured in terms of these parameters:

- b: the maximum branching factor;
- d: the depth of the optimal solution;
- m: the maximum depth of the state space.

### Uninformed search methods

Uninformed search methods rely only on the information available in the problem definition:

- breadth-first search,
- depth-first search,
- iterative deepening,
- bi-directional search,
- and, for the minimum-cost path problem, uniform-cost search.

### Breadth-first search (BFS)

The shallowest node is expanded first. [Figure: BFS expansion order on a small tree.]

### Properties of breadth-first search

- Completeness: Yes; the solution is reached if it exists.
- Optimality: Yes, for the shortest path.

BFS time complexity, counting the nodes at each depth (shown here with b = 2):

| depth | number of nodes |
| --- | --- |
| 0 | 1 |
| 1 | b = 2 |
| 2 | b² = 4 |
| 3 | b³ = 8 |
| d | b^d = 2^d |

Expanded nodes: O(b^d); total nodes: O(b^d). For the memory complexity, count the nodes kept in the tree structure or in the queue; the same table applies.

- Time complexity: 1 + b + b² + … + b^d = O(b^d), exponential in the depth of the solution d.
- Memory (space) complexity: O(b^d); every node on the fringe is kept in memory.

## CS 2710 Foundations of AI, Lecture 17: Inference in Bayesian belief networks

Milos Hauskrecht, milos@cs.pitt.edu, 5329 Sennott Square

### Modeling uncertainty with probabilities

The knowledge-based system era (the 70s and early 80s) relied on extensional, non-probabilistic models. These solved the space, time, and acquisition bottlenecks of probability-based models, but they froze the development and advancement of KB systems and contributed to the slowdown of AI in the 80s in general. The breakthrough came in the late 80s and the beginning of the 90s: Bayesian belief networks, which give solutions to the space and acquisition bottlenecks and partial solutions to the time complexities.

### Bayesian belief networks (BBNs)

Bayesian belief networks:

- represent the full joint distribution over the variables more compactly, with a smaller number of parameters;
- take advantage of conditional and marginal independences among random variables:
  - A and B are independent: P(A, B) = P(A)P(B);
  - A and B are conditionally independent given C: P(A, B | C) = P(A | C)P(B | C), equivalently P(A | C, B) = P(A | C).

### Bayesian belief networks: two components

- A directed acyclic graph: nodes correspond to random variables; missing links encode independences.
- Parameters: local conditional probability distributions P(Xi | pa(Xi)) for every variable-parent configuration, where pa(Xi) stands for the parents of Xi.

Example (the alarm network): P(B = T) = 0.001, P(B = F) = 0.999; P(E = T) = 0.002, P(E = F) = 0.998; and the conditional table P(A | B, E):

| B | E | A = T | A = F |
| --- | --- | --- | --- |
| T | T | 0.95 | 0.05 |
| T | F | 0.94 | 0.06 |
| F | T | 0.29 | 0.71 |
| F | F | 0.001 | 0.999 |

[Figure: the alarm network with nodes Burglary, Earthquake, Alarm, JohnCalls, MaryCalls.]

### Full joint distribution in BBNs

The full joint distribution is defined in terms of the local conditional distributions, obtained via the chain rule:

P(X1, X2, …, Xn) = ∏ᵢ P(Xi | pa(Xi))

Example: assume the assignment of values B = T, E = T, A = T, J = T, M = F. Then its probability is

P(B=T, E=T, A=T, J=T, M=F) = P(B=T) P(E=T) P(A=T | B=T, E=T) P(J=T | A=T) P(M=F | A=T).
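The factored joint can be evaluated directly. A minimal sketch: the P(B), P(E), and P(A | B, E) values are from the notes, while the P(J | A) and P(M | A) values are the standard textbook numbers for the alarm network and are assumptions here.

```python
# Local conditional distributions of the alarm network.
P_B = {True: 0.001, False: 0.999}
P_E = {True: 0.002, False: 0.998}
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}   # P(A=T | B, E)
P_J = {True: 0.90, False: 0.05}                      # P(J=T | A), assumed value
P_M = {True: 0.70, False: 0.01}                      # P(M=T | A), assumed value

def joint(b, e, a, j, m):
    """P(B=b, E=e, A=a, J=j, M=m) as the product of the local conditionals."""
    p_a = P_A[(b, e)] if a else 1 - P_A[(b, e)]
    p_j = P_J[a] if j else 1 - P_J[a]
    p_m = P_M[a] if m else 1 - P_M[a]
    return P_B[b] * P_E[e] * p_a * p_j * p_m

p = joint(True, True, True, True, False)   # the example assignment above
```

Summing the product over all 2⁵ assignments gives 1, a quick sanity check that the factorization defines a proper distribution.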
### Bayesian belief networks: where do the local parameterizations come from?

BBNs represent the full joint distribution over the variables compactly, using a product of local conditionals. But how did we get to the local parameterizations? Answer: the graphical structure encodes conditional and marginal independences among the random variables (A and B are independent: P(A, B) = P(A)P(B); A and B are conditionally independent given C: P(A | C, B) = P(A | C), so P(A, B | C) = P(A | C)P(B | C)). The graph structure implies the decomposition.

### Independences in BBNs

There are 3 basic independence structures. [Figure: the three basic connection patterns.] A BBN distribution models many conditional independence relations among distant variables and sets of variables. These are defined in terms of a graphical criterion called d-separation.

- D-separation and independence: let X, Y, and Z be three sets of nodes. If X and Y are d-separated by Z, then X and Y are conditionally independent given Z.
- D-separation: A is d-separated from B given C if every undirected path between them is blocked given C.
- Path blocking: 3 cases that expand on the three basic independence structures.

Examples in the alarm network:

- Earthquake and Burglary are independent given MaryCalls: False.
- Burglary and MaryCalls are independent not knowing Alarm: False.
- Burglary and RadioReport are independent given Earthquake: True.
- Burglary and RadioReport are independent given MaryCalls: False.

The decomposition P(X1, X2, …, Xn) = ∏ᵢ P(Xi | pa(Xi)) is thus implied by the set of independences encoded in the belief network.

### Full joint distribution in BBNs: derivation

Rewrite the full joint probability using the product rule and the independences of the alarm network:

P(B=T, E=T, A=T, J=T, M=F)
= P(J=T | B=T, E=T, A=T, M=F) · P(B=T, E=T, A=T, M=F)
= P(J=T | A=T) · P(B=T, E=T, A=T, M=F)
= P(J=T | A=T) · P(M=F | B=T, E=T, A=T) · P(B=T, E=T, A=T)
= P(J=T | A=T) · P(M=F | A=T) · P(A=T | B=T, E=T) · P(B=T) · P(E=T)

In the BBN, the full joint distribution is thus expressed using the set of local conditional distributions.

### Parameter complexity

In the BBN, the full joint distribution is defined as P(X1, X2, …, Xn) = ∏ᵢ P(Xi | pa(Xi)). What did we save? In the alarm example, with 5 binary (True/False) variables:

- Number of parameters of the full joint: 2⁵ = 32; one parameter is for free: 2⁵ − 1 = 31.
- Number of parameters of the BBN: 2³ + 2·2² + 2·2 = 20; with one parameter in every conditional for free: 2² + 2·2 + 2·1 = 10.

### Model acquisition

The structure of the BBN typically reflects causal relations (BBNs are also sometimes referred to as causal networks). Causal structure is intuitive in many application domains and is relatively easy for a domain expert to define. The probability parameters of a BBN are conditional distributions relating random variables and their parents; their complexity is much smaller than that of the full joint, so it is much easier to obtain such probabilities from an expert or to learn them automatically from data.

### BBNs built in practice

BBNs have been built in various areas:

- intelligent user interfaces (Microsoft);
- troubleshooting and diagnosis of technical devices;
- medical diagnosis (Pathfinder, Intellipath, CPSC, Munin, QMR-DT);
- collaborative filtering;
- military applications;
- business and finance (insurance, credit applications).

### Inference

A BBN models the full joint distribution compactly by taking advantage of the existing independences between variables, which simplifies the acquisition of a probabilistic model. But we are interested in solving various inference tasks:

- Diagnostic task, from effect to cause: e.g., P(Burglary | JohnCalls = T).
- Prediction task, from cause to effect: e.g., P(JohnCalls | Burglary = T).
- Other probabilistic queries (queries on joint distributions): e.g., P(Alarm).
probabilistic queries (queries on joint distributions), e.g. P(Alarm).
Main issue: can we take advantage of the independences to construct special algorithms that speed up the inference?

Inference in Bayesian networks
Bad news:
- The exact inference problem in BBNs is NP-hard (Cooper).
- Approximate inference is NP-hard as well (Dagum, Luby).
But very often we can achieve significant improvements. Assume our Alarm network, and assume we want to compute P(J = T).

Inference in Bayesian networks
Computing P(J = T), Approach 1 (blind approach): sum out all uninstantiated variables from the full joint, expressing the joint distribution as a product of conditionals:
P(J=T) = sum over b,e,a,m in {T,F} of P(B=b, E=e, A=a, J=T, M=m)
       = sum_b sum_e sum_a sum_m P(J=T | A=a) P(M=m | A=a) P(A=a | B=b, E=e) P(B=b) P(E=e)
Computational cost: number of additions: 15; number of products: 16 * 4 = 64.

Inference in Bayesian networks
Approach 2: interleave sums and products. Combine sums and products in a smart way; multiplications by constants can be taken out of the sums:
P(J=T) = sum_a P(J=T | A=a) [sum_m P(M=m | A=a)] [sum_b P(B=b) sum_e P(A=a | B=b, E=e) P(E=e)]
Computational cost: number of additions: 9; number of products: 16.

Inference in Bayesian networks
The smart interleaving of sums and products can help us speed up the computation of joint probability queries. What if we also want to compute P(B=T, J=T)?
P(B=T, J=T) = sum_a P(J=T | A=a) [sum_m P(M=m | A=a)] P(B=T) sum_e P(A=a | B=T, E=e) P(E=e)
There is a lot of shared computation; smart caching of results can save time across multiple queries.

Inference in Bayesian networks
When does caching of results become handy? When we want to compute a diagnostic query:
P(B=T | J=T) = P(B=T, J=T) / P(J=T)
These are exactly the probabilities we have just computed. There are other queries for which caching and the ordering of sums and products can be shared to save computation, e.g.
P(B | J=T) = alpha * P(B, J=T)
General technique: variable elimination.

CS 2710 Foundations of AI
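The two approaches above can be sketched in a few lines of Python. This is a minimal illustration rather than the lecture's code; the CPT values follow the alarm-network tables shown earlier, and the P(J|A) and P(M|A) entries (0.90/0.05 and 0.70/0.01) are the standard textbook values, assumed here.

```python
from itertools import product

# Alarm-network CPTs (P(J|A) and P(M|A) are assumed textbook values).
P_B = {True: 0.001, False: 0.999}                     # P(B=b)
P_E = {True: 0.002, False: 0.998}                     # P(E=e)
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}    # P(A=T | B=b, E=e)
P_J = {True: 0.90, False: 0.05}                       # P(J=T | A=a)
P_M = {True: 0.70, False: 0.01}                       # P(M=T | A=a)

def bern(p_true, value):
    """Probability that a binary variable takes `value`, given P(True)."""
    return p_true if value else 1.0 - p_true

def p_j_blind():
    """Approach 1: sum the full joint over all uninstantiated variables."""
    return sum(bern(P_J[a], True) * bern(P_M[a], m) *
               bern(P_A[(b, e)], a) * P_B[b] * P_E[e]
               for b, e, a, m in product([True, False], repeat=4))

def p_j_interleaved():
    """Approach 2: push each sum inside as far as the factors allow."""
    total = 0.0
    for a in (True, False):
        s_m = sum(bern(P_M[a], m) for m in (True, False))   # sums to 1
        s_be = sum(P_B[b] * sum(bern(P_A[(b, e)], a) * P_E[e]
                                for e in (True, False))
                   for b in (True, False))
        total += bern(P_J[a], True) * s_m * s_be
    return total
```

Both functions return the same value, P(J = T) ≈ 0.0521; on larger networks the interleaved version performs far fewer multiplications.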
Inference in Bayesian networks
General idea of variable elimination:
P(J=T) = sum_a P(J=T | A=a) sum_m P(M=m | A=a) sum_b P(B=b) sum_e P(A=a | B=b, E=e) P(E=e)
Fix a variable ordering; intermediate results (factors over the eliminated variables) are cached in a tree structure and reused.

Inference in Bayesian networks
Exact inference algorithms:
- Variable elimination (book)
- Symbolic inference (D'Ambrosio)
- Recursive decomposition (Cooper)
- Message passing algorithm (Pearl)
- Clustering and join tree approach (Lauritzen, Spiegelhalter)
- Arc reversal (Olmsted, Schachter)
Approximate inference algorithms:
- Monte Carlo methods: forward sampling, likelihood sampling
- Variational methods

Monte Carlo approaches
MC approximation: the probability is approximated using sample frequencies.
Example: P(B=T, J=T) is approximated by N_{B=T,J=T} / N, the number of samples with B=T and J=T divided by the total number of samples.
BBN sampling: one sample gives one assignment of values to all variables. Generate the sample in a top-down manner following the links: sample B and E from their priors, then A given (B, E), then J and M given A.
(The example slides step through one such sample using the alarm-network CPTs: P(B): T 0.001, F 0.999; P(E): T 0.002, F 0.998; P(A|B,E) as in the table above; P(J=T | A=F) = 0.05.)

Monte Carlo approaches
MC approximation of conditional probabilities: the probability is approximated using sample frequencies.
Example: P(B=T | J=T) is approximated by N_{B=T,J=T} / N_{J=T}, the number of samples with B=T and J=T divided by the number of samples with J=T.
Rejection sampling:
- Generate samples of the full joint by sampling the BBN.
- Use only the samples that agree with the condition; the remaining samples are rejected.
Problem: many samples can be rejected.

Likelihood weighting
Avoids the inefficiencies of rejection sampling.
- Idea: generate only samples consistent with the evidence (the conditioning event).
- If a variable's value is set by the evidence, no sampling (random choice) occurs for it.
Problem: using simple counts is not enough, since such samples occur with different probabilities.
Likelihood weighting: with every sample, keep a weight with which it should count towards the estimate:
P(B=T | J=T) is approximated by (sum of weights of samples with B=T and J=T) / (sum of weights of all samples, all of which have J=T)
(The example slides step through one weighted sample with J=T fixed; since A=F was sampled, the sample weight is w = P(J=T | A=F) = 0.05.)

CS 2710 Foundations of AI
Lecture 6: Constraint satisfaction search
Milos Hauskrecht
milos@cs.pitt.edu
5329 Sennott Square

Search problem
A search problem:
- Search space (or state space): a set of objects among which we conduct the search
- Initial state: an object we start to search from
- Operators (actions): transform one state in the search space to another
- Goal condition: describes the object we search for
- Possible metric on the search space: measures the quality of the object with regard to the goal
Search problems occur in planning, optimization, and learning.

Constraint satisfaction problem (CSP)
A constraint satisfaction problem (CSP) is a configuration search problem where:
- A state is defined by a set of variables
- The goal condition is represented by a set of constraints on possible variable values
Special properties of the CSP allow more specific procedures to be designed and applied for solving them.

Example of a CSP: N-queens
Goal: n queens placed in non-attacking
positions on the board.
Variables: represent queens, one for each column: Q1, Q2, Q3, Q4.
Values: row placement of each queen on the board, e.g. Q1 = 2, Q2 = 4.
Constraints:
- Two queens not in the same row: Qi != Qj
- Two queens not on the same diagonal: |Qi - Qj| != |i - j|

Satisfiability (SAT) problem
Determine whether a sentence in conjunctive normal form (CNF) is satisfiable (can evaluate to true); used in the propositional logic (covered later).
Example: (P v Q v ~R) ^ (~P v ~R v S) ^ (~P v ... v T)
Variables: propositional symbols P, Q, R, S, T.
Values: True, False.
Constraints: every conjunct must evaluate to true, i.e. at least one of its literals must evaluate to true:
(P v Q v ~R) = True, (~P v ~R v S) = True, ...

Other real-world CSP problems
Scheduling problems:
- e.g. telescope scheduling
- high-school class schedules
Design problems:
- hardware configurations
- VLSI design
More complex problems may involve:
- real-valued variables
- additional preferences on variable assignments
- seeking the optimal configuration

Map coloring
Color a map using k different colors such that no adjacent countries have the same color.
Variables: represent countries: A, B, C, D, E.
Values: the k different colors: Red, Blue, Green.
Constraints: A != B, A != C, C != E, etc.
An example of a problem with binary constraints.

Constraint satisfaction as a search problem
Formulation of a CSP as a search problem:
- States: assignments (partial or complete) of values to variables
- Initial state: no variable is assigned a value
- Operators: assign a value to one of the unassigned variables
- Goal condition: all variables are assigned and no constraints are violated
Constraints can be represented:
- explicitly, by a set of allowable values
- implicitly, by a function that tests for the satisfaction of constraints

Solving a CSP through standard search
(Figure: search tree over partial assignments of Q1..Q4, from all variables unassigned down to complete assignments.)
- Maximum depth of the tree (m): the number of variables of the CSP
- Depth of the solution (d): the number of variables of the CSP
- Branching factor (b): if we fix the order of variable assignments, the branching factor depends on the number of values of the current variable
What search algorithm to use? Depth-first search, since we know the depth of the solution. DFS in the context of a CSP is also referred to as backtracking.

Checking constraint consistency
The violation of constraints needs to be checked for each node, either during its generation or before its expansion.
Consistency of constraints: the current variable assignments, together with the constraints, restrict the remaining legal values of the unassigned variables. The remaining legal and illegal values of variables may be inferred: the effect of the constraints propagates. To prevent blind exploration it is necessary to keep track of the remaining legal values, so we know when the constraints are violated and when to terminate the search.

Constraint propagation
A state (more broadly) is defined by a set of variables and their legal and illegal assignments. Legal and illegal assignments can be represented through variable equations and variable disequations.
Example, map coloring: equation A = Red; disequation C != Red.
Constraints and assignments
can entail new equations and quot 39 ARed gt B Red F CS 2710 Foundations ofAI Constraint propagation 0 Assign ARed Red Blue Green A B C D E F I l equations x disequations CS 2710 Foundations of AI Constraint propagation Assign ARed Red Blue Green n equations x disequations CS 2710 Foundations ofAI Constraint propagation Assign EBlue Red Blue Green CS 2710 Foundations ofAI Constraint propagation Assign EBlue Red Blue Green n CS 2710 Foundations ofAI Constraint propagation Assign FG reen Red Blue Green t CS 2710 Foundations ofAI Constraint propagation Assign FG reen Red Blue Green n CS 2710 Foundations ofAI Constraint propagation Assign FG reen Red Blue Green Conflict 1 N0 legal assignments available for B and C CS 2710 Foundations ofAI Constraint propagation 0 We can derive remaining legal values through propagation E Red Blue Green n BGreen CGreen CS 2710 Foundations ofAI Constraint propagation 0 We can derive remaining legal values through propagation B Red Blue Green BGreen C Green FRed CS 2710 Foundations ofAI Constraint propagation Three known techniques for propagating the effects of past assignments and constraints Value propagation Arc consistency Forward checking Difference 7 Completeness of inferences 7 Time complexity of inferences CS 2710 Foundations ofAI tquot Equot Equot Constraint propagation Value propagation Infers 7 equations from the set of equations defining the partial assignment and a constraint Arc consistency Infers 7 disequations from the set of equations and disequations defining the partial assignment and a constraint 7 equations through the exhaustion of alternatives Forward checking Infers 7 disequations from a set of equations defining the partial assignment and a constraint 7 Equations through the exhaustion of alternatives Restricted forward checking 7 uses only active constraints active constraint 7 only one variable unassigned in the constraint CS 2710 Foundations ofAI Heuristics for CSPs Backtracking searches the space in the 
depthfirst manner But we can choose Which variable to assign next Which value to choose first Heu ristics Most constrained variable 7 Which variable is likely to become a bottleneck Least constraining value 7 Which value gives us more exibility later CS 2710 Foundations ofAI Heuristics for CSP Examples map coloring El Heuristics Most constrained variable 7 Country E is the most constrained one cannot use Red Green Least constraining value 7 Assume we have chosen variable C 7 Red is the least constraining valid color for the future CS 2710 Foundations ofAI Search for optimal configurations CS 2710 Foundations ofAI Search for the optimal configuration Con guration search problems Are often enhanced with some quality measure Quality measure re ects our preference towards each con guration or state Goal nd the con guration with the optimal quality CS 2710 Foundations ofAI Example Traveling salesman problem Problem 0 A graph with distances 0 Goal find the shortest tour which visits every city once and returns to the start An example of a valid tour ABCDEF CS 2710 Foundations of AI Example N queens Some CSP problems do not have a quality measure The quality ofa con guration in a CSP can be measured by the number of constraints violated Solving corresponds to the minimization of the number of constraint violations of violations 3 of violations 1 of violations 0 CS 2710 Foundations of AI Local search methods Are o en used to nd solutions to large con guration search problems with an additional optimality measure Properties of local search algorithms 7 Search the space of complete con gurations 7 Operators make local changes to complete con gurations 7 Keep track of just one state the current state not a memory of past states H No search tree is necessary Example N queens Local operators for generating the next state 7 Select a van39able a queen 7 Reallocate its position Example Traveling salesman problem Problem A graph with distances Goal find the shortest tour which 
Visits every city once and returns to the start An example of a valid tour ABCDEF CS 2710 Foundations ofAI Example Traveling salesman problem Local operator for generating the next state divide the existing tour into two parts reconnect the two parts in the opposite order Example ABCDEF ABCD EF 1 ABCDFE CS 2710 Foundations ofAI Example Traveling salesman problem Local operator 7 generates the next configuration state CS 2710 Foundations OQI Searching configuration space Iterative improvement algorithms keep only one con guration the current con guration active Problem How to decide about which operator to apply CS 2710 Foundations ofAI Iterative improvement algorithms Two strategies to choose the con guration state to be Visited next 7 Hill climbing 7 Simulated annealing Later Extensions to multiple current states 7 Genetic algorithms Note Maximization is inverse of the minimization min fXcgt max fX CS 2710 Foundations ofAI Hill climbing Local improvement algorithm Look around at states in the local neighborhood and choose the one with the best value Assume we want to maximize the value Better Worse states CS 2710 Foundations ofAI Hill climbing Always choose the next best successor state stop when no improvement possible lunclinn Hth twsmctpmmm ennnr a mumquot stne pm pmlvldm 4 plem rune wk nn v DrumH Mmmonzr NHALSYAYEthum h Inup an m successor m u t mm LUElerml then return Lwrcr Y mumHun end Hill climbing Local improvement algorithm 1 one with the best Value Betta Wmse What can go wrong Hill climbing Hill climbing can get trapped in the local optimum value global maximum local maximum states Hill climbing Hill climbing can get clueless on plateaus value plamzu slates Hill climbing and nqueens The quality of a con guration is given by the number of constraints violated Then Hill climbing reduces the number of constraints Mincon ict strategy heuristic 7 Choose randomly a variable with con icts 7 Choose its value such that it violates the fewest constraints Success 
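The min-conflict heuristic just described can be sketched as a short local search for n-queens. This is a minimal illustration, not the lecture's code: the board representation (one row index per column), the iteration cap, and random tie-breaking are implementation choices.

```python
import random

def conflicts(board, col, row):
    """Number of queens that would attack a queen at (col, row);
    board[c] is the row of the queen in column c."""
    return sum(1 for c, r in enumerate(board)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts(n=8, max_steps=10_000, seed=None):
    """Min-conflict local search for n-queens: repeatedly pick a conflicted
    queen at random and move it to the row in its column that violates the
    fewest constraints (ties broken at random)."""
    rng = random.Random(seed)
    board = [rng.randrange(n) for _ in range(n)]      # one queen per column
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(board, c, board[c])]
        if not conflicted:
            return board                              # non-attacking: done
        col = rng.choice(conflicted)
        costs = [conflicts(board, col, r) for r in range(n)]
        board[col] = rng.choice([r for r in range(n) if costs[r] == min(costs)])
    return None   # can fail: local optima are possible
```

For n = 8 the search usually converges within a few dozen moves, which is why min-conflicts scales to very large boards.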
But not always! The local optima problem.

Simulated annealing
- Permits bad moves to states with a lower value, and thus can escape the local optima.
- Gradually decreases the frequency of such moves and their size, via a parameter controlling it: the temperature.
(Figure: a value landscape over states, with "always up" moves and occasional "sometimes down" moves.)

CS 2710 Foundations of AI
Lecture 2: Problem solving by searching
Milos Hauskrecht
milos@cs.pitt.edu
5329 Sennott Square

Example
Assume the problem of computing the roots of the quadratic equation ax^2 + bx + c = 0. Do you consider it a challenging problem?
Hardly: we just apply the standard formula
x_{1,2} = (-b +/- sqrt(b^2 - 4ac)) / (2a)

Solving problems by searching
Some problems have a straightforward solution: just apply the formula or follow a standardized procedure (for example, the solution of the quadratic equation). This is hardly a sign of intelligence. More interesting problems require search:
- more than one possible alternative needs to be explored before the problem is solved
- the number of alternatives to search among can be very large, even infinite

Search example: traveler problem
Find a route from one city (Arad) to another (Bucharest).
(Figure: road map of Romania with cities including Arad, Fagaras, Rimnicu Vilcea, Hirsova, Urziceni, Eforie.)

Example: traveler problem
Another flavor of the traveler problem: find the route with the minimum length between S and T.
(Figure: the same road map with link distances, including Oradea, Fagaras, Vaslui, Rimnicu Vilcea, Hirsova, Giurgiu.)

Example: Puzzle 8
Find the sequence of empty-tile moves from the initial game position to the designated target position.
(Figure: an initial tile configuration and the goal configuration.)

Example: N-queens problem
Find a configuration of n queens not attacking each other.
(Figure: a valid goal configuration and a bad goal configuration.)
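A goal test for the n-queens example above can be sketched as follows. This is an illustrative helper, not code from the lecture; the representation (one row index per column) is an assumption.

```python
def is_goal(board):
    """True iff no two queens attack each other.
    board[i] is the row of the queen in column i (one queen per column,
    so same-column attacks are impossible by construction)."""
    n = len(board)
    for i in range(n):
        for j in range(i + 1, n):
            same_row = board[i] == board[j]
            same_diag = abs(board[i] - board[j]) == j - i
            if same_row or same_diag:
                return False
    return True
```

For example, `is_goal([1, 3, 0, 2])` is True (a valid 4-queens configuration), while `is_goal([0, 1, 2, 3])` is False (all queens share a diagonal).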
A search problem is de ned by Search space 7 The set of objects among which we search for the solution Example objects routes between cities or Nqueen con gurations Goal condition 7 What are the characteristics of the object we want to nd in the search space 7 Examples Path between cities A and B Path between A and B with the smallest number of links Path between A and B with the shortest distance Nonattacking nqueen con guration CS 2710 Foundations ofAI Search Search process 7 The process of exploration of the search space The ef ciency of the search depends on 7 The search space and its size 7 Method used to explore traverse the search space 7 Condition to test the satisfaction of the search objective what it takes to determine I found the desired goal object Important to remember ll 7 Conveniently chosen search space and the exploration policy can have a profound effect on the ef ciency CS 2710 Foundations ofAI Graph search Many search problems can be naturally represented as graph search problems Typical example Route nding 7 Map corresponds to the graph nodes to cities links to available connections between cities 7 Goal nd a route path in the graph from S to T Graph search Less obvious conversion Puzzle 8 Find a sequence of moves from the initial con guration to the goal con guration 7 nodes corresponds to states of the game 7 links to valid moves made by the player Graph search problem States game positions or locations in the map that are represented by nodes in the graph Operators connections between cities valid moves Initial state 7 start position start city Goal state itarget position positions target city cities T target CS 2710 Foundations ofAI Graph search More complex versions of the graph search problems 7 Find a minimal length path route with the smallest number of connections the shortest sequence of moves that solves Puzzle 8 O E O T target CS 2710 Foundations ofAI Graph search More complex versions of the graph search problems 7 Find a minimum 
cost path a route with the shortest distance CS 2710 Foundations ofAI Graph search How to nd the path between S and T A strawman solution 7 Generate systematically all sequences of 1 2 3 edges 7 Check if the sequence yields a path between S and T Can we do better E T target CS 2710 Foundations ofAI Graph search Can we do better We are not interested in sequences that do not start in S and that are not valid paths Solution 7 T target CS 2710 Foundations ofAI Graph search Can we do better We are not interested in sequences that do not start in S and that are not valid paths Solution 7 Look only on valid paths starting from S E T target CS 2710 Foundations ofAI Graph search 0 Being smarter about the space we search for the solution pays off in terms of search process efficiency CS 2710 Foundations of AI N queens Some problems can be converted to the graph search problems 0 But some problems are harder and less intuitive 7 Take eg Nqueens problem Goal con guration 0 Problem 7 We look for a con guration not a sequence of moves 7 No distinguished initial state no operators moves CS 2710 Foundations of AI Graph search A trick generate a con guration step by step one queen per step States nodes correspond to con gurations of 01234 queens Links operators correspond to the addition of a queen 1 39ti 1 t t 1 d th b d n1 a s a e no queens p ace on e oar Target 1 gtTIget 2 Graph search Nqueens problems This is a different graph search problem When compared to Puzzle 8 or Route planning We want to nd only the target con guration not a path Two types of graph search problems Path search 7 Find a path between states S and T 7 Example traveler problem Puzzle 8 7 Additional goal criterion minimum length cost path Con guration search constraint satisfaction search 7 Find a state configuration satisfying the goal condition 7 Example nqueens problem design of a device with a prede ned functionality 7 Additional goal criterion soft preferences for configurations eg minimum cost design 
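The step-by-step construction just described (one queen added per step, column by column) can be sketched as a small recursive backtracking search. A minimal sketch: the representation (one row index per column) and helper names are illustrative, and the version below already prunes attacking placements, anticipating the improved formulation discussed next.

```python
def place_queens(n):
    """Build configurations column by column, adding a queen only to squares
    that do not attack the queens already placed; return one solution
    (a list of row indices) or None if none exists."""
    def attacks(board, row):
        # Would a queen in the next column, at `row`, attack a placed queen?
        col = len(board)
        return any(r == row or abs(r - row) == col - c
                   for c, r in enumerate(board))

    def extend(board):
        if len(board) == n:
            return board                 # all n queens placed: goal reached
        for row in range(n):
            if not attacks(board, row):
                result = extend(board + [row])
                if result is not None:
                    return result
        return None                      # dead end: backtrack

    return extend([])
```

For instance, `place_queens(4)` returns `[1, 3, 0, 2]`, the first 4-queens solution found in this construction order.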
Search problem
Search problems that can be represented as (or converted into) graph search problems can be defined in terms of:
- Initial state: the state (configuration) we start to search from (e.g. the start city, the initial game position)
- Operators: transform one state to another (e.g. valid connections between cities, valid moves in Puzzle 8)
- Goal condition: defines the target state (destination, winning position)
The search space (the set of objects we search for the solution) is now defined indirectly, through the initial state and operators.

Traveler problem
Problem formulation:
- States: different cities
- Initial state: city Arad
- Operators: moves to cities in the neighborhood
- Goal condition: city Bucharest
- Type of the problem: path search
- Possible solution cost: path length

Puzzle 8 example
Search problem formulation:
- States: tile configurations
- Initial state: the initial configuration
- Operators: moves of the empty tile
- Goal: reach the winning (goal) configuration
- Type of the problem: path search
- Possible solution cost: the number of moves

N-queens problem
Problem formulation:
- States: configurations of 0 to 4 queens on the board
- Initial state: the no-queen configuration
- Operators: add a queen to the leftmost unoccupied column
- Goal: a configuration with 4 non-attacking queens
- Type of the problem: configuration search

N-queens problem: alternative formulation
- States: different configurations of 4 queens on the board
- Initial state: an arbitrary configuration of 4 queens
- Operators: move one queen to a different unoccupied position
- Goal: a configuration with non-attacking queens
- Type of the problem: configuration search

Comparison of the two problem formulations
- Solution 2 (operators: move one of the queens): all configurations of 4 queens on the board
- Solution 1 (operators: add a queen to the leftmost unoccupied column): 1 + 4 + 4^2 + 4^3 + 4^4 < 4^5 configurations altogether

Even better solution to the N-queens problem
- Solution 1 (operators: add a queen to the leftmost unoccupied column): fewer than 4^5 configurations altogether
- Improved solution with a smaller search space (operators: add a queen to the leftmost unoccupied column such that it does not attack the already placed queens): 1 + 4 + 4*3 + 4*3*2 + 4*3*2*1 = 65 configurations altogether

Formulating a search problem
Search process: the process of exploration of the search space. The efficiency of the search depends on:
- the search space and its size
- the method used to explore (traverse) the search space
- the condition used to test the satisfaction of the search objective (what it takes to determine "I found the desired goal object")
Think twice before solving the problem by search: choose the search space and the exploration policy.

CS 2710 Foundations of AI
Lecture 27: Applied AI topics
Milos Hauskrecht
milos@cs.pitt.edu
5329 Sennott Square

Topics in AI
Five main areas:
- Problem solving and search
- Logic and knowledge representations
- Planning
- Uncertainty
- Learning
Other topics:
- AI programming languages
- Speech recognition
- Natural language processing
- Image understanding
- Robotics

Speech recognition
Objective: take an acoustic signal and convert it to text.
The analog signal is sampled and quantized into a digital signal; the signal is divided into short frames (on the order of 10 ms), a feature vector (e.g. frequency content) is computed for each frame, and the features are then discretized (quantized), e.g. into 256 values.

Speech recognition
We want to determine the sequence of words that is most probable given the input signal: P(wordseq = w | signal = s). It is easier to define an acoustic model that relates P(signal = s | wordseq = w); this is like a diagnosis problem, and we can use the Bayes rule:
P(wordseq = w | signal = s) = P(signal = s | wordseq = w) P(wordseq = w) / P(signal = s)
Assume we have multiple possible word sequences w^1, w^2, ..., w^n. The best word sequence:
argmax_w P(signal = s | wordseq = w) P(wordseq = w)

Speech recognition
We need to define P(signal = s | wordseq = w) and
P(wordseq = w) for all possible word and signal sequences.

Defining the probability P(wordseq = w), with w = w1 w2 ... wn:
P(wordseq = w1 w2 ... wn) = P(w1) P(w2 | w1) ... P(wn | w1 w2 ... wn-1)    (by the chain rule)
Simplifications:
- Unigram model: the probability of each word is independent of the previous words:
  P(wordseq = w1 w2 ... wn) = P(w1) P(w2) P(w3) ... P(wn)
- Bigram model: only the previous word matters:
  P(wordseq = w1 w2 ... wn) = P(w1) P(w2 | w1) P(w3 | w2) ... P(wn | wn-1)

Speech recognition
Defining the probability P(signal = s | wordseq = w), with s = s1 s2 ... sm and w = w1 w2 ... wn. Two simplifications:
1. Define signal signatures for individual words: P(s = s1 s2 ... sm | word = wi)
2. Divide the acoustic word models into sequences of phones and define signal signature models for phones: P(p = p1 p2 ... pu | word = wi) and P(s = s1 s2 ... | phone = pq)
Conditional probabilities of sequences are modeled most often as Hidden Markov Models (HMMs).

Speech recognition
HMM models of words: P(p = p1 p2 ... pu | word = wi).
Example: the word "tomato".
(Figure: HMM state diagram with two pronunciations of "tomato", yielding different phone sequences.)

Speech recognition
HMM model of phones: P(s = s1 s2 ... | phone = pq).
Example: many possible feature sequences for one phone HMM, e.g. C1 C4 C6, or C1 C1 C4 C6, or C1 C1 C5 C4 C6.

Speech recognition
Finding the most probable path through an HMM for an observed feature sequence, e.g. C1 C3 C4 C6.

Natural language processing
Goal: analyze and interpret text in a natural language.
Input (text sentences) may come from:
- a speech recognition system
- optical character recognition (OCR)
- documents in electronic form
Output: knowledge extracted from the text that supports various inferences.
Processing is a multi-step process:
- syntactic interpretation (parsing)
- semantic interpretation
- disambiguation and incorporation

Natural language processing
Syntactic interpretation (parsing):
- Input: a sentence; output: a parse tree
- Uses grammar models for parsing the sentence into phrases and terminal symbols
Example: "The wumpus is dead"
S -> NP VP; Article Noun Verb Adjective: "The" "wumpus" "is" "dead"
Sometimes we have more than one possible parse. Stochastic
grammars quantify the goodness of possible parses CS 2710 Foundations ofAI Natural language processing Semantic interpretation 7 input a parse tree 7 output a set of meanings eg in First order logic FOL Example The wumpus is dead 7 Gives two possible semantic interpretations AliveWumpus N 0w T iredWumpus Now Disambiguation 7 chooses the most probable interpretation Incorporation 7 The extracted knowledge is checked for consistency against other pieces of knowledge before it is incorporated into the KB CS 2710 Foundations ofAI Image processing and Vision Classic image processing problem 7 Analysis of image and extraction of information from the image 7 Can be used in many applications Scene analysis Manipulation and navigation tasks Image retrieval Other image processing problems 7 Image enhancement degraded image should be improved to restore particular features 7 Storage and Compression Large amounts of data need to be archived or transmitted 7 Visualization CS 2710 Foundations ofAI Image processing Image is def39med by a light intensity function over the image plane Continuous image is typically discretized Image plane is discretized into 7 Pixels arranged on the rectangular grid 7 Resolution of the grid determines the spatial quality of the discretization Light intensity values are discretized into 7 Integers values in some interval Typical black and white image input 7 512x512 pixels 7 Light intensity 8 bits 7 512 types of gray CS 2710 Foundations ofAI Image processing Analysis of image and extraction of information from the image 0 Segmentation 7 Division of the image to meaningful entities in the scene 7 Relies heavily on edge detection algorithms CS 2710 Foundations ofAI Image processing and Vision Analysis of image and extraction of information from the image 0 To recognize identify the object from the image we need to compare it with the class pattern Problem The position orientation and the scale of the object in the scene may vary Solution Use a set of 
basic transformations 0 scaling translation 0 rotation ofthe object 0 Transformations are relatively easy for 2D objects much harder for 3D objects Other problems light sources and shadows CS 2710 Foundations ofAI Image processing and vision 39 More complex task analysis of a sequence of related images videos 39 39 39 r 39 visual motion between images 39 When this is useful 7 Video 7 commercial skip 7 Detection and tracking of objects in the real world AI programming languages 39 Focus on symbolic processing Special AI Languages 7 LISP r Prolog r Smalltalk CS 2710 Foundations of AI Lecture 11 Firstorder logic Milos Hauskrecht milos cspitt edu 5329 Sennott Square CS 2710 Foundations ofAI Administration PS S is out lVIidterm 7 October 24 2005 7 In class 7 Closed book 7 Covers Search Knowledge Representation and Planning CS 2710 Foundations ofAI Limitations of propositional logic World we want to represent and reason about consists of a number of objects with variety of properties and relations among them Propositional logic Represents statements about the world without re ecting this structure and without modeling these entities explicitly Consequence some knowledge is hard or impossible to encode in the propositional logic Two cases that are hard to represent 7 Statements about similar objects relations 7 Statements referring to groups of objects CS 2710 Foundations ofAI Firstorder logic FOL More expressive than propositional logic Eliminates de ciencies of PL by 7 Representing objects their properties relations and statements about them 7 Introducing variables that refer to an arbitrary objects and can be substituted by a specific object 7 Introducing quanti ers allowing us to make statements over groups objects without the need to represent each of them separately CS 2710 Foundations ofAI Logic Logic is de ned by A set of sentences 7 A sentence is constructed from a set of primitives according to syntax rules A set of interpretations 7 An interpretation gives a 
semantic to primitives It associates primitives with objects values in the real world The valuation meaning function V 7 Assigns a truth value to a given sentence under some interpretation V sentence gtlt interpretation gt True False CS 2710 Foundations ofAI Firstorder logic Syntax Term syntactic entity for representing objects Terms in FOL Constant symbols represent speci c objects Examples John France car89 Variables 7 represents object of speci c type defined by the universe of discourse Examples x y universe of discourse can be people students cars Functions applied to one or more terms Examples father of John father ofO ather of ohn CS 2710 Foundations ofAI First order logic Syntax Terms do not de ne propositions they cannot be evaluated to True or False Sentences in FOL 9 de ne propositions Atomic sentences 7 A predicate symbol applied to 0 or more terms Examples Red car 2 SisterAmy Jane ManagerO ather of ohn 7 t1 t2 equivalence of terms Example John father o Peter CS 2710 Foundations ofAI First order logic Syntax Sentences in FOL Complex sentences Assume 1 are sentences in FOL Then 7 WuI v1 3 w w ll and Vx 3y are sentences Symbols El V stand for the existential and the universal quanti er CS 2710 Foundations ofAI Semantics Interpretation An interpretation I is de ned by a mapping to the domain of discourse D or relations on D domain of discourse a set of objects in the world we represent and refer to An interpretation I maps Constant symbols to objects in D 1John a Predicate symbols to relations properties on D ltgt 9 Function symbols to functional relations on D 1fatherofltgt gt CS 2710 Foundations ofAI Semantics of sentences Meaning evaluation function V sentence gtlt interpretation gt True False A predicate predicateterm 1 term 2 term 3 term n is true for the interpretation I iff the objects referred to by term 1 term 2 term 3 term n are in the relation referred to by predicate 1John 3 1Paul L 1brother gt brotherJohn Paul in Ibrother VbrotherJohn Paul I 
True.

Semantics of sentences. Equality: V(term-1 = term-2, I) = True iff I(term-1) = I(term-2). Boolean expressions: standard; e.g., V(sentence-1 ∨ sentence-2, I) = True iff V(sentence-1, I) = True or V(sentence-2, I) = True. Quantifications (writing I[x/d] for the substitution of x with d): V(∀x φ, I) = True iff for all d ∈ D, V(φ, I[x/d]) = True; V(∃x φ, I) = True iff there is a d ∈ D s.t. V(φ, I[x/d]) = True.

Sentences with quantifiers. Universal quantification: "All Upitt students are smart." Assume the universe of discourse of x is Upitt students: ∀x smart(x). Assume the universe of discourse of x is students: ∀x at(x, Upitt) ⇒ smart(x).
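The evaluation rules V(∀x φ, I) and V(∃x φ, I) above can be checked mechanically on a small finite domain. A minimal sketch in Python; the domain and the interpretations of at(x, Upitt) and smart(x) are invented for illustration:

```python
# Evaluate quantified sentences over a finite universe of discourse D,
# following V(forall x. phi, I) and V(exists x. phi, I) above.
domain = ["Ann", "Bob", "Cal"]     # universe of discourse D (made up)
at_upitt = {"Ann", "Bob"}          # I(at(x, Upitt)) as a set of objects
smart = {"Ann", "Bob", "Cal"}      # I(smart(x)) as a set of objects

def forall(phi):
    # V(forall x. phi, I) = True iff phi holds for every d in D
    return all(phi(d) for d in domain)

def exists(phi):
    # V(exists x. phi, I) = True iff phi holds for some d in D
    return any(phi(d) for d in domain)

# "All Upitt students are smart": forall x. at(x, Upitt) => smart(x)
implication = lambda d: (d not in at_upitt) or (d in smart)
all_upitt_smart = forall(implication)

# Duality: forall x. phi is equivalent to not(exists x. not phi)
dual = not exists(lambda d: not implication(d))
```

With the interpretation above, both evaluations agree, illustrating the ∀/∃ connection discussed later in the notes.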
Assume the universe of discourse of x is people: ∀x student(x) ∧ at(x, Upitt) ⇒ smart(x). Typically, the universal quantifier connects with an implication.

Sentences with quantifiers. Existential quantification: "Someone at CMU is smart." Assume the universe of discourse of x is CMU affiliates: ∃x smart(x). Assume the universe of discourse of x is people: ∃x at(x, CMU) ∧ smart(x). Typically, the existential quantifier connects with a conjunction.

Order of quantifiers. The order of quantifiers of the same type does not matter. "For all x and y, if x is a parent of y then y is a child of x": ∀x,y
parent(x, y) ⇒ child(y, x), equivalently ∀y,x parent(x, y) ⇒ child(y, x). The order of different quantifiers changes the meaning: ∀x ∃y loves(x, y) — "Everybody loves somebody"; ∃y ∀x loves(x, y) — "There is someone who is loved by everyone."

Connections between quantifiers. "Everyone likes ice cream": ∀x likes(x, IceCream). Is it possible to convey the same meaning using an existential quantifier? Yes: "There is no one who does not like ice cream": ¬∃x ¬likes(x, IceCream). A universal quantifier in the sentence can be expressed using an existential quantifier. "Someone likes ice cream": ∃x likes(x, IceCream). Is it possible to convey the same meaning using a universal quantifier? Yes: "Not everyone does not like ice cream": ¬∀x ¬likes(x, IceCream). An existential quantifier in the sentence can be expressed using a universal quantifier.

Representing knowledge in FOL. Example: kinship domain. Objects: people (John, Mary, Jane). Properties: gender — Male(x), Female(x). Relations: parenthood, brotherhood, marriage — Parent(x, y), Brother(x, y), Spouse(x, y). Functions: mother-of (one for each person x) — MotherOf(x).

Kinship domain in FOL. Relations between predicates and functions: write down what we know about them and how they relate to each other. Male and female are disjoint categories: ∀x Male(x) ⇔ ¬Female(x). Parent and child relations are inverse: ∀x,y Parent(x, y) ⇔ Child(y, x). A
grandparent is a parent of a parent: ∀g,c Grandparent(g, c) ⇔ ∃p Parent(g, p) ∧ Parent(p, c). A sibling is another child of one's parents: ∀x,y Sibling(x, y) ⇔ (x ≠ y) ∧ ∃p Parent(p, x) ∧ Parent(p, y). And so on.

Inference in first-order logic. The logical inference problem: given a knowledge base KB (a set of sentences) and a sentence α, does the KB semantically entail α, i.e. KB ⊨ α? In other words: in all interpretations in which the sentences in the KB are true, is α also true? The logical inference problem in first-order logic is undecidable: no procedure can decide the entailment for all possible input sentences in a finite number of steps.

The logical inference problem in propositional logic: computational procedures that answer KB ⊨ α. Three approaches: the truth-table approach; inference rules; and conversion to a SAT problem over the negation (resolution refutation).

Inference in FOL: the truth-table approach. Is the truth-table approach viable for FOL? No. Why? It would require us to enumerate and list all possible interpretations I — all assignments of symbols to objects, predicates to relations, and functions to relational mappings. Simply, there are too many interpretations.

Inference in FOL: inference rules. Is the inference-rule approach viable for FOL? Yes. The inference rules represent sound inference patterns one can apply to sentences in the KB; what is derived follows from the KB. Caveat: we need to add rules for handling quantifiers.

Inference rules. Inference rules from propositional logic: modus ponens (from A ⇒ B and A, infer B); resolution (from A ∨ B and ¬B ∨ C, infer A ∨ C); and others: and-introduction, and-elimination, or-introduction,
negation elimination. Additional inference rules are needed for sentences with quantifiers and variables; they must involve variable substitutions.

Sentences with variables. First-order logic sentences can include variables. A variable is: bound, if it is in the scope of some quantifier, e.g. x in ∀x P(x); free, if it is not bound, e.g. y in ∃x P(y) ∧ Q(x). A sentence (formula) is: closed, if it has no free variables, e.g. ∀y ∃x P(y) ⇒ Q(x); open, if it is not closed; ground, if it does not have any variables, e.g. Likes(John, Jane).

Variable substitutions. Variables in sentences can be substituted with terms (terms: constants, variables, functions). A substitution is represented by a mapping from variables to terms, {x1/t1, x2/t2, ...}. Application of a substitution to a sentence: SUBST({x/Sam, y/Pam}, Likes(x, y)) = Likes(Sam, Pam); SUBST({x/z, y/father-of(John)}, Likes(x, y)) = Likes(z, father-of(John)).

Inference rules for quantifiers. Universal elimination: from ∀x φ(x), infer φ(a) — substitutes a variable with a constant symbol a. Example: from ∀x Likes(x, IceCream), infer Likes(Ben, IceCream). Existential elimination: from ∃x φ(x), infer φ(a) — substitutes a variable with a constant symbol that does not appear elsewhere in the KB. Example: from ∃x Kill(x, Victim), infer Kill(Murderer, Victim).

CS 2710 Foundations of AI, Lecture 16: Bayesian belief networks. Milos Hauskrecht, milos@cs.pitt.edu, 5329 Sennott Square.

Uncertainty. To make diagnostic inference possible, we need to represent knowledge (axioms) that relate symptoms and diagnoses, e.g. pneumonia. Problem: disease-symptom relations are not deterministic; they are uncertain (stochastic) and vary from patient to patient.

Modeling the uncertainty. Key challenges: how to represent the relations in the presence of uncertainty, and how to manipulate such knowledge to make inferences. Humans can reason with uncertainty.

Methods for representing uncertainty. Probability theory: a well-defined theory for modeling and
reasoning in the presence of uncertainty; a natural choice to replace certainty factors. Facts (propositional statements) are represented via random variables with two or more values. Example: Pneumonia is a random variable with values True and False. Each value can be achieved with some probability: P(Pneumonia = True) = 0.001; P(WBCcount = high) = 0.005.

Modeling uncertainty with probabilities: a probabilistic extension of propositional logic. Propositions — statements about the world — are represented by the assignment of values to random variables. Random variable values may be: Boolean (Pneumonia is either True or False); multi-valued (Pain is one of {Nopain, Mild, Moderate, Severe}); or continuous (HeartRate is a value in <0, 250>).

Probabilities. Unconditional probabilities (prior probabilities): P(Pneumonia) = 0.001, that is, P(Pneumonia = True) = 0.001 and P(Pneumonia = False) = 0.999; P(WBCcount = high) = 0.005. A probability distribution defines probabilities for all possible value assignments to a random variable; the values are mutually exclusive.

Probability distribution. Defines a probability for every possible value assignment, and the probabilities sum to 1. Example 1: P(Pneumonia = True) = 0.001, P(Pneumonia = False) = 0.999; P(Pneumonia = True) + P(Pneumonia = False) = 1. Example 2: P(WBCcount = high) = 0.005, P(WBCcount = normal) = 0.993, P(WBCcount = low) = 0.002.

Joint probability distribution. The joint probability distribution for a set of variables defines probabilities for all possible assignments of values to the variables in the set. Example: variables Pneumonia and WBCcount; P(Pneumonia, WBCcount) is represented by a 2 × 3 matrix:

WBCcount: high, normal, low
Pneumonia = True: 0.0008, 0.0001, 0.0001
Pneumonia = False: 0.0042, 0.9929, 0.0019

Joint probabilities. Marginalization reduces the dimension of
the joint distribution: it sums variables out. For P(Pneumonia, WBCcount), a 2 × 3 matrix, marginalization is the summing of columns or rows:

WBCcount: high, normal, low
Pneumonia = True: 0.0008, 0.0001, 0.0001 → P(Pneumonia = True) = 0.001
Pneumonia = False: 0.0042, 0.9929, 0.0019 → P(Pneumonia = False) = 0.999
P(WBCcount): 0.005, 0.993, 0.002

Full joint distribution: the joint distribution for all variables in the problem. It defines the complete probability model for the problem. Example: pneumonia diagnosis, with variables Pneumonia, Fever, Paleness, WBCcount, Cough. The full joint defines the probability for all possible assignments of values to these variables: P(Pneumonia = T, WBCcount = high, Fever = T, Cough = T, Paleness = T), P(Pneumonia = T, WBCcount = high, Fever = T, Cough = T, Paleness = F), etc.

Conditional probabilities. A conditional probability distribution defines probabilities for all possible assignments given a fixed assignment of some other variables' values, e.g. P(Pneumonia = true | WBCcount = high). P(Pneumonia | WBCcount) is three 2-element vectors, one per value of WBCcount; each column sums to 1:

WBCcount: high, normal, low
Pneumonia = True: 0.08, 0.0001, 0.0001
Pneumonia = False: 0.92, 0.9999, 0.9999
(column sums): 1.0, 1.0, 1.0

Conditional probabilities. Conditional probability is defined in terms of the joint probability: P(A | B) = P(A, B) / P(B), s.t. P(B) ≠ 0. Example: P(Pneumonia = true | WBCcount = high) = P(Pneumonia = true, WBCcount = high) / P(WBCcount = high), and similarly P(Pneumonia = false | WBCcount = high) = P(Pneumonia = false, WBCcount = high) / P(WBCcount = high).

Conditional probabilities. From P(A | B) = P(A, B) / P(B), s.t. P(B) ≠ 0, the product rule expresses the joint probability in terms of conditionals: P(A, B) = P(A | B) P(B). The chain rule: any joint probability can be expressed as a product of conditionals: P(X1, X2, ..., Xn) = P(Xn | X1, ..., Xn-1) P(X1, ..., Xn-1) = P(Xn | X1, ..., Xn-1) P(Xn-1 | X1, ..., Xn-2) P(X1, ..., Xn-2) = Π i=1..n P(Xi | X1, ..., Xi-1).

Bayes rule. From the definition of conditional probability: P(A | B) = P(B | A) P(A) / P(B). When is it useful? When we are interested in computing the
diagnostic query from the causal probability: P(cause | effect) = P(effect | cause) P(cause) / P(effect). Reason: it is often easier to assess the causal probability, e.g. the probability of pneumonia causing fever vs. the probability of pneumonia given fever.

Bayes rule in a simple diagnostic inference. A device (equipment) is either operating normally or malfunctioning; the operation of the device is sensed indirectly via a sensor, whose reading is either high or low. Prior: P(Device status = normal) = 0.9, P(Device status = malfunctioning) = 0.1; the model also gives P(Sensor reading | Device status) for each device status. Diagnostic inference: compute the probability of the device operating normally or malfunctioning given a sensor reading, i.e. P(Device status = normal | Sensor reading = high) and P(Device status = malfunctioning | Sensor reading = high). Note that typically the opposite conditional probabilities are given to us: they are much easier to estimate. Solution: apply Bayes rule to reverse the conditioning variables.

Probabilistic inference. Various inference tasks: the diagnostic task, from effect to cause, e.g. P(Pneumonia | Fever = T); the prediction task, from cause to effect, e.g. P(Fever | Pneumonia = T); other probabilistic queries (queries on joint distributions), e.g. P(Fever), P(Fever, ChestPain).

Inference. Any query can be computed from the full joint distribution! A joint over a subset of variables is obtained through marginalization: P(A = a, C = c) = Σi Σj P(A = a, B = bi, C = c, D = dj). A conditional probability over a set of variables, given other variables' values, is obtained through marginalization and the definition of conditionals: P(D = d | A = a, C = c) = P(A = a, C = c, D = d) / P(A = a, C = c), where P(A = a, C = c, D = d) = Σi P(A = a, B = bi, C = c, D = d) and P(A = a, C = c) = Σi Σj P(A = a, B = bi, C = c, D = dj).

Inference. Any query can be computed from the full joint distribution! Any joint probability can be expressed as a product of conditionals via the chain rule: P(X1, X2, ..., Xn) = Π i=1..n P(Xi | X1, ..., Xi-1). Sometimes it is easier to define the distribution in terms of conditional probabilities, e.g. P(Fever | Pneumonia = T) and P(Fever | Pneumonia = F).

Modeling uncertainty with probabilities. Defining the full joint distribution makes it possible to represent and reason with uncertainty in a uniform way; we are able to handle an arbitrary inference problem. Problems: space complexity — to store a full joint distribution we need to remember O(d^n) numbers, where n is the number of random variables and d the number of values; inference (time) complexity — computing some queries requires O(d^n) steps; and the acquisition problem — who is going to define all of the probability entries?

Medical diagnosis example. Space complexity: Pneumonia (2 values: T, F), Fever (2: T, F), Cough (2: T, F), WBCcount (3: high, normal, low), Paleness (2: T, F). Number of assignments: 2 · 2 · 2 · 3 · 2 = 48; we need to define at least 47 probabilities. Time complexity: assume we need to compute the marginal P(Pneumonia = T) from the full joint: P(Pneumonia = T) = Σ over Fever, Cough, WBCcount, and Paleness of P(Pneumonia = T, Fever = i, Cough = j, WBCcount = k, Paleness = u) — a sum over 2 · 2 · 3 · 2 = 24 combinations.

Modeling uncertainty with probabilities. In the knowledge-based system era (the 1970s and early 80s), extensional (non-probabilistic) models were used to sidestep the space, time, and acquisition bottlenecks of probability-based models; these bottlenecks froze the development and advancement of KB systems and contributed to the slowdown of AI in the 80s in general. Breakthrough (late 80s, beginning of the 90s): Bayesian belief networks, which give solutions to the space and acquisition bottlenecks and partial solutions to the time complexity.

Bayesian belief networks (BBNs). BBNs represent the full joint distribution over the variables more compactly, with a smaller number of parameters, by taking advantage of conditional and marginal independences among random variables: A and B are independent iff P(A, B) = P(A) P(B); A and B are conditionally independent given C iff P(A, B | C) = P(A | C) P(B | C), equivalently P(A | C, B) = P(A | C).

Alarm system example. Assume your house has an alarm
system against burglary. You live in a seismically active area, and the alarm system can occasionally be set off by an earthquake. You have two neighbors, Mary and John, who do not know each other; if they hear the alarm they call you, but this is not guaranteed. We want to represent the probability distribution of the events Burglary, Earthquake, Alarm, MaryCalls, and JohnCalls, using their causal relations.

Bayesian belief network (1): a directed acyclic graph. Nodes: random variables (Burglary, Earthquake, Alarm, MaryCalls, JohnCalls). Links: direct (causal) dependencies between variables; the chance of Alarm is influenced by Earthquake, and the chance of John calling is affected by the Alarm.

Bayesian belief network (2): local conditional distributions relate variables and their parents, e.g. P(A | B, E) and P(M | A).

Bayesian belief network (alarm example parameters): P(B = T) = 0.001, P(B = F) = 0.999; P(E = T) = 0.002, P(E = F) = 0.998; P(A | B, E): for (B = T, E = T): P(A = T) = 0.95, P(A = F) = 0.05; for (T, F): 0.94, 0.06; for (F, T): 0.29, 0.71; for (F, F): 0.001, 0.999.

Bayesian belief networks (general). Two components. A directed acyclic graph: nodes correspond to random variables; missing links encode independences. Parameters: local conditional probability distributions for every variable-parent configuration, P(Xi | pa(Xi)), where pa(Xi) stands for the parents of Xi.

Full joint distribution in BBNs. The full joint distribution is defined in terms of the local conditional distributions, obtained via the chain rule: P(X1, X2, ..., Xn) = Π i=1..n P(Xi | pa(Xi)). Example: assume the assignment of values B = T, E = T, A = T, J = T, M = F. Then its probability is P(B = T, E = T, A = T, J = T, M = F) = P(B = T) P(E = T) P(A = T | B = T, E = T) P(J = T | A = T) P(M = F | A = T).

Bayesian belief networks (BBNs). BBNs represent the full joint distribution over the variables more compactly, using the product of local conditionals. But how did we get to the local parameterizations? Answer: the graphical structure encodes conditional and
marginal independences among random variables: A and B are independent iff P(A, B) = P(A) P(B); A and B are conditionally independent given C iff P(A, B | C) = P(A | C) P(B | C), equivalently P(A | C, B) = P(A | C). The graph structure implies the decomposition!

Independences in BBNs: three basic independence structures. (1) JohnCalls is independent of Burglary given Alarm: P(J | A, B) = P(J | A), i.e. P(J, B | A) = P(J | A) P(B | A). (2) Burglary is independent of Earthquake when the Alarm is not known: P(B, E) = P(B) P(E); but Burglary and Earthquake become dependent given Alarm! (3) MaryCalls is independent of JohnCalls given Alarm: P(J | A, M) = P(J | A), i.e. P(J, M | A) = P(J | A) P(M | A).

Independences in BBNs. The BBN distribution models many conditional independence relations among distant variables and sets of variables. These are defined in terms of a graphical criterion called d-separation. D-separation and independence: let X, Y, and Z be three sets of nodes; if X and Y are d-separated by Z, then X and Y are conditionally independent given Z. D-separation: A is d-separated from B given C if every undirected path between them is blocked by C. Path blocking: three cases that expand on the three basic independence structures.

Undirected path blocking. A is d-separated from B given C if every undirected path between them is blocked. (1) Path blocking with a linear substructure: X → Z → Y, with X in A, Y in B, and Z in C. (2) Path blocking with the wedge substructure: X ← Z → Y, with X in A, Y in B, and Z in C. (3) Path blocking with the vee substructure: X → Z ← Y, with X in A, Y in B, and neither Z nor any of its descendants in C.

Independences in BBNs (the example adds a RadioReport variable that depends on Earthquake). Earthquake and Burglary
are independent given MaryCalls? False. Burglary and MaryCalls are independent when the Alarm is not known? False. Burglary and RadioReport are independent given Earthquake? True. Burglary and RadioReport are independent given MaryCalls? False.

Bayesian belief networks (BBNs). BBNs represent the full joint distribution over the variables more compactly, using the product of local conditionals: P(X1, X2, ..., Xn) = Π i=1..n P(Xi | pa(Xi)). So how did we get to the local parameterizations? The decomposition is implied by the set of independences encoded in the belief network.

Full joint distribution in BBNs. Rewrite the full joint probability using the product rule and the network's independences:
P(B = T, E = T, A = T, J = T, M = F)
= P(J = T | B = T, E = T, A = T, M = F) P(B = T, E = T, A = T, M = F)
= P(J = T | A = T) P(B = T, E = T, A = T, M = F)
= P(J = T | A = T) P(M = F | B = T, E = T, A = T) P(B = T, E = T, A = T)
= P(J = T | A = T) P(M = F | A = T) P(B = T, E = T, A = T)
= P(J = T | A = T) P(M = F | A = T) P(A = T | B = T, E = T) P(B = T, E = T)
= P(J = T | A = T) P(M = F | A = T) P(A = T | B = T, E = T) P(B = T) P(E = T).

Bayesian belief network. In the BBN, the full joint distribution is expressed using the set of local conditional distributions: P(B = T) = 0.001, P(E = T) = 0.002, and P(A | B, E): (B = T, E = T): 0.95, 0.05; (T, F): 0.94, 0.06; (F, T): 0.29, 0.71; (F, F): 0.001, 0.999.

Parameter complexity problem. In the BBN, the full joint distribution is defined as P(X1, X2, ..., Xn) = Π i=1..n P(Xi | pa(Xi)). What did we save? Alarm example: 5 binary (True/False) variables. Number of parameters of the full joint: 2^5 = 32; one parameter is for free (the distribution sums to 1): 2^5 − 1 = 31. Number of free parameters of the BBN: 1 (for B) + 1 (for E) + 4 (for A | B, E) + 2 (for J | A) + 2 (for M | A) = 10.

CS 2710 Foundations of AI, Lecture 22: Supervised learning — linear and logistic regression. Milos Hauskrecht, milos@cs.pitt.edu, 5329 Sennott Square.

Supervised learning. Data: D = {D1, D2, ..., Dn}, a set of n examples, Di = <xi, yi>, where xi = (xi1, xi2, ..., xid) is an input vector of size d and yi is the desired output given by a teacher. Objective: learn the mapping f: X → Y s.t. yi ≈ f(xi) for all i = 1, ..., n.
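As a concrete check of the alarm-network factorization P(B, E, A, J, M) = P(B) P(E) P(A | B, E) P(J | A) P(M | A), here is a minimal sketch in Python. P(B), P(E), and P(A | B, E) use the values from the notes; the P(J | A) and P(M | A) entries are illustrative placeholders, since those tables are not reproduced in the notes:

```python
from itertools import product

# Local conditional distributions of the alarm network.
P_B = 0.001                                 # P(B = T), from the notes
P_E = 0.002                                 # P(E = T), from the notes
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}  # P(A = T | B, E)
P_J = {True: 0.90, False: 0.05}             # P(J = T | A) -- assumed values
P_M = {True: 0.70, False: 0.01}             # P(M = T | A) -- assumed values

def bern(p_true, value):
    # Probability of a Boolean variable taking `value`.
    return p_true if value else 1.0 - p_true

def joint(b, e, a, j, m):
    # P(B,E,A,J,M) = P(B) P(E) P(A|B,E) P(J|A) P(M|A)
    return (bern(P_B, b) * bern(P_E, e) * bern(P_A[(b, e)], a) *
            bern(P_J[a], j) * bern(P_M[a], m))

# The assignment worked out in the notes: B=T, E=T, A=T, J=T, M=F.
p = joint(True, True, True, True, False)

# Sanity check: the joint sums to 1 over all 2^5 assignments.
total = sum(joint(*v) for v in product([True, False], repeat=5))
```

Because every factor is a normalized conditional, the product automatically defines a valid joint; the 10 numbers above replace the 31 free parameters of the full table.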
Regression: Y is continuous; example: earnings and product orders → company stock price. Classification: Y is discrete; example: a handwritten digit → the digit label.

Linear regression. The function f: X → Y is a linear combination of the input components: f(x) = w0 + w1 x1 + w2 x2 + ... + wd xd = w0 + Σj wj xj, where w0, w1, ..., wd are parameters (weights), w0 multiplies a constant bias term 1, and x = (x1, x2, ..., xd) is the input vector.

Linear regression: a shorter vector definition of the model. Include the bias constant in the input vector, x = (1, x1, x2, ..., xd); then f(x) = w0 x0 + w1 x1 + w2 x2 + ... + wd xd = wᵀx.

Linear regression: error. Data: Di = <xi, yi>. We would like to have yi ≈ f(xi) for all i = 1, ..., n. The error function measures how much our predictions deviate from the desired answers. Mean squared error: J = (1/n) Σ i=1..n (yi − f(xi))². Learning: we want to find the weights minimizing this error. (Examples: fitting a line for 1-dimensional input x = (x1); fitting a plane for 2-dimensional input x = (x1, x2).)

Linear regression: optimization. We want the weights minimizing the error J = (1/n) Σ i=1..n (yi − f(xi))² = (1/n) Σ i=1..n (yi − wᵀxi)². For the optimal set of parameters, the derivative of the error with respect to each parameter must be 0: ∂J/∂wj = −(2/n) Σ i=1..n (yi − w0 xi,0 − w1 xi,1 − ... − wd xi,d) xi,j = 0. The vector of derivatives: grad_w J(w) = ∇w J(w) = −(2/n) Σ i=1..n (yi − wᵀxi) xi = 0.

Solving linear regression. By rearranging the terms in ∂J/∂wj = 0, we get a system of linear equations with d + 1 unknowns: for each j, w0 Σi xi,0 xi,j + w1 Σi xi,1 xi,j + ... + wj Σi xi,j xi,j + ... + wd Σi xi,d xi,j = Σi yi xi,j.

Solving linear regression. The optimal set of weights satisfies ∇w J(w) = −(2/n) Σ i=1..n (yi − wᵀxi) xi = 0, which leads to a system of linear equations (SLE) with d + 1 unknowns, of the form A w = b. Solution to the SLE: matrix inversion.

Gradient descent solution. Goal: the weight
optimization in the linear regression model, J(w) = Error(w) = (1/n) Σ i=1..n (yi − f(xi, w))². An alternative to the SLE solution: gradient descent. Idea: adjust the weights in the direction that improves the error; the gradient tells us the right direction: w ← w − α ∇w Error(w)|w, where α > 0 is a learning rate that scales the gradient changes.

Gradient descent method. Descend using the gradient information ∇w Error(w)|w*, the direction of the descent. Change the value of w according to the gradient: for all j, wj ← wj − α ∂Error(w)/∂wj |w*. Iteratively, this converges to an optimum of the error function through a sequence of weights w(0), w(1), w(2), w(3), ...

Online gradient algorithm. The error function is defined for the whole dataset D: J = Error(w) = (1/n) Σ i=1..n (yi − f(xi, w))². The error for a single sample Di = <xi, yi> is Error_i(w) = (1/2)(yi − f(xi, w))². The online gradient method changes the weights after every sample: wj ← wj − α(i) ∂Error_i(w)/∂wj, or in vector form w ← w − α(i) ∇w Error_i(w), with a learning rate α(i) > 0 that depends on the number of updates.

Online gradient method for the linear model f(x) = wᵀx, with online error Error_i(w) = (1/2)(yi − f(xi, w))². The online algorithm generates a sequence of online updates; the i-th update step with Di = <xi, yi> changes the j-th weight as wj ← wj + α(i) (yi − f(xi, w)) xi,j. An annealed learning rate, α(i) ≈ 1/i, gradually rescales the changes in the weights.

Online regression algorithm:
  Online linear regression (D, number of iterations)
    initialize weights w = (w0, w1, w2, ..., wd)
    for i = 1 : number of iterations do
      select a data point Di = <xi, yi> from D
      set α = 1/i
      update the weight vector: w ← w + α (yi − f(xi, w)) xi
    end for
    return weights w
Advantages: very easy to implement; handles continuous data streams; adapts to changes in the model over time.

Logistic regression. Binary
classification. Two classes: Y = {0, 1}. Our goal is to learn how to correctly classify two types of examples: class 0 (labeled as 0) and class 1 (labeled as 1). We would like to learn f: X → {0, 1}. First step: we need to devise a model of the function f. Inspiration: neurons (nerve cells) and their activities — dendrites and synapses collect inputs at the cell body (soma), and the axon (with its arborization) carries the output to other cells.

Neuron-based binary classification model. The function we want to learn, f: X → {0, 1}, is modeled as a weighted sum of the inputs (including a bias term 1) passed through a threshold function.

Logistic regression model: a function model with smooth switching, f(x) = g(w0 + w1 x1 + ... + wk xk), where w are the parameters of the model and g(z) = 1 / (1 + e^(−z)) is the logistic function; f(x) ∈ [0, 1].

Logistic function: g(z) = 1 / (1 + e^(−z)), also referred to as the sigmoid function. It replaces the ideal threshold function with smooth switching: it takes a real number and outputs a number in the interval [0, 1]. The output of logistic regression has a probabilistic interpretation: f(x) = p(y = 1 | x), the probability of class 1.

Logistic regression: decision boundary. Classification with the logistic regression model: if f(x) = p(y = 1 | x) ≥ 0.5, then choose class 1; else choose class 0. The logistic regression model defines a linear decision boundary.

Logistic regression: parameter optimization. We construct a probabilistic version of the error function based on the likelihood of the data: L(D, w) = P(D | w) = Π i=1..n P(y = yi | xi, w), assuming independent samples (xi, yi). The likelihood measures the goodness of fit (the opposite of the error). Log-likelihood trick: l(D, w) = log L(D, w). The error is the opposite of the log-likelihood: Error(D, w) = −l(D, w).

Logistic regression: parameter learning. The error function decomposes into online error components: Error(D, w) = Σ i=1..n Error_i(Di, w) = −Σ i=1..n l(Di, w). Derivatives of the
online error component for the LR model in terms of the weights: ∂Error_i(Di, w)/∂w0 = −(yi − f(xi, w)); ∂Error_i(Di, w)/∂wj = −(yi − f(xi, w)) xi,j.

Logistic regression: parameter learning. Assume Di = <xi, yi> and let μi = P(yi = 1 | xi, w) = g(zi) = g(wᵀxi). Then L(D, w) = Π i=1..n P(y = yi | xi, w) = Π i=1..n μi^yi (1 − μi)^(1 − yi). Find the weights w that maximize the likelihood of the outputs; the optimal weights are the same for both the likelihood and the log-likelihood: l(D, w) = log Π i=1..n μi^yi (1 − μi)^(1 − yi) = Σ i=1..n [yi log μi + (1 − yi) log(1 − μi)] = Σ i=1..n l(Di, w).

Logistic regression: parameter learning. Log-likelihood: l(D, w) = Σ i=1..n yi log μi + (1 − yi) log(1 − μi). Its derivatives are nonlinear in the weights: ∇w l(D, w) = Σ i=1..n xi (yi − g(wᵀxi)) = Σ i=1..n xi (yi − f(xi, w)). Gradient ascent: w(k) ← w(k−1) + α(k) ∇w l(D, w)|w(k−1), i.e. w(k) ← w(k−1) + α(k) Σ i=1..n (yi − f(xi, w)) xi.

Derivation of the gradient. Log-likelihood: l(D, w) = Σ i=1..n yi log μi + (1 − yi) log(1 − μi). The derivative of the logistic function is ∂g(zi)/∂zi = g(zi)(1 − g(zi)), and ∂zi/∂wj = xi,j. Applying the chain rule to ∂/∂μi [yi log μi + (1 − yi) log(1 − μi)] = yi/μi − (1 − yi)/(1 − μi) and multiplying through by g(zi)(1 − g(zi)) xi,j, the terms collapse to ∂l/∂wj = Σ i=1..n xi,j (yi − g(wᵀxi)), so ∇w l(D, w) = Σ i=1..n xi (yi − g(wᵀxi)) = Σ i=1..n xi (yi − f(xi, w)).

Logistic regression: online gradient. We want to optimize the online error. The online gradient update for the i-th step is w ← w + α(i) ∇w l(Di, w)|w(i−1); the i-th update of the j-th weight with Di = <xi, yi> is wj ← wj + α(i) (yi − f(xi, w)) xi,j, where α(i) is an annealed learning rate that depends on the number of updates. This is the same update rule as used in linear regression!

Online logistic regression algorithm:
  Online logistic regression (D, number of iterations)
    initialize weights w = (w0, w1, w2, ..., wk)
    for i = 1 : number of iterations do
      select a data point Di = <xi, yi> from D
      set α = 1/i
      update the weights in parallel:
        w0 ← w0 + α (yi − f(xi, w))
        wj ← wj + α (yi − f(xi, w)) xi,j
    end for
    return weights w

Online updates: linear regression vs. logistic regression.
Linear regression: f(x) = w^T x. Logistic regression: f(x) = p(y = 1 | x, w) = g(w^T x). [Figure: both models as a weighted sum of the inputs x1 ... xd with bias term 1; the logistic model passes the sum through g]
Online gradient update (linear regression): w <- w + alpha (y - f(x, w)) x
Online gradient update (logistic regression): w <- w + alpha (y - f(x, w)) x

Limitations of basic linear units. Linear regression f(x) = w^T x and logistic regression f(x) = p(y = 1 | x, w) = g(w^T x): in both cases the function is linear in the inputs, and the classifier has a linear decision boundary.

CS 2710 Foundations of AI. Lecture 20b: Learning. Milos Hauskrecht, milos@cs.pitt.edu, 5329 Sennott Square.

Machine learning. The field of machine learning studies the design of computer programs (agents) capable of learning from past experience or adapting to changes in the environment. The need for building agents capable of learning is everywhere: predictions in medicine, text classification, speech recognition, image/text retrieval, commercial software. Machine learning is not only the deduction but the induction of rules from examples that facilitate prediction and decision making.

Learning. Learning process: a learner (a computer program) takes data D representing past experiences and tries either:
- to develop an appropriate response to future data, or
- to describe the data seen in some meaningful way.
Example: a learner sees a set of past patient cases (patient records) with corresponding diagnoses. It can either try:
- to predict the presence of a disease for future patients, or
- to describe the dependencies between diseases and symptoms (e.g. build a Bayesian network for them).

Types of learning.
- Supervised learning: learning a mapping between inputs x and desired outputs y; a teacher gives the y's for the learning purposes.
- Unsupervised learning: learning relations between data components; no specific outputs are given by a teacher.
- Reinforcement learning: learning a mapping between inputs x and desired outputs y; a critic does not give the y's, but instead a signal (reinforcement) of how good the answer was.
- Other types of learning: concept learning, explanation-based learning, etc.
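The online updates for linear and logistic regression compared in this section share the same delta-rule form, w <- w + alpha (y - f(x)) x, differing only in the output function f. A minimal Python sketch of this shared update (the toy dataset, the function names, and the fixed learning rate used in place of the lecture's annealed alpha(i) = 1/i are assumptions for illustration):

```python
import math

def sigmoid(z):
    # logistic (sigmoid) function g(z) = 1 / (1 + e^{-z})
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, x, logistic):
    # linear model: f(x) = w0 + w1*x1 + ...; the logistic model wraps it in g
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], x))
    return sigmoid(z) if logistic else z

def online_update(w, x, y, alpha, logistic):
    # shared rule: w_j <- w_j + alpha * (y - f(x)) * x_j  (the bias uses x_0 = 1)
    err = y - predict(w, x, logistic)
    w[0] += alpha * err
    for j, xj in enumerate(x):
        w[j + 1] += alpha * err * xj

# toy 1-D classification data (hypothetical): class 1 iff x > 2
data = [([0.5], 0), ([1.0], 0), ([1.5], 0), ([2.5], 1), ([3.0], 1), ([3.5], 1)]
w = [0.0, 0.0]
for _ in range(200):              # repeated passes over the data
    for x, y in data:
        online_update(w, x, y, alpha=0.1, logistic=True)
```

After training, predict(w, [x], True) lies above 0.5 on the class-1 side of the learned boundary and below 0.5 on the other side; passing logistic=False gives the linear-regression version of the same update.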
Supervised learning. Data: D = {d1, d2, ..., dn}, d_i = <x_i, y_i>, a set of n examples, where x_i is an input vector and y_i is the desired output given by a teacher. Objective: learn the mapping f: X -> Y such that y_i is approximately f(x_i) for all i = 1, ..., n. Two types of problems:
- Regression: X (discrete or continuous) -> Y is continuous.
- Classification: X (discrete or continuous) -> Y is discrete.

Supervised learning examples. Regression (Y is continuous): debt/equity and earnings -> company stock price; future product orders. Classification (Y is discrete): e.g. a handwritten digit (an array of 0/1 pixels) -> its label. [Figure: a handwritten digit image with its label]

Unsupervised learning. Data: D = {d1, d2, ..., dn}, d_i = x_i, a vector of values; no target value (output) y. Objective: learn relations between samples and between components of samples. Types of problems:
- Clustering: group together similar examples, e.g. patient cases.
- Density estimation: model probabilistically the population of samples, e.g. the relations between diseases, symptoms, lab tests, etc.

Unsupervised learning example: density estimation. We want to build a probability model of a population from which we draw samples d_i = x_i. [Figure: scatter plot of two-dimensional samples]

Unsupervised learning: density estimation. A probability density of a point in the two-dimensional space; the model used here is a mixture of Gaussians. [Figure: density surface of the Gaussian mixture]

Reinforcement learning. We want to learn f: X -> Y. We see samples of x but not y. Instead of y we get a feedback (reinforcement) from a critic about how good our output was for the input sample. The goal is to select the output that leads to the best reinforcement.

Learning. Assume we see examples of pairs (x, y) and we want to learn the mapping f: X -> Y to predict future y's for values of x. We get the data; what should we do? Problem: many
possible functions f: X -> Y exist, each representing a mapping between x and y, and many examples are still unseen. Which one to choose? [Figure: several functions consistent with the observed points]

Learning bias. The problem is easier when we make an assumption about the model, say f(x) = ax + b. Restriction to a linear model is an example of a learning bias.

Learning bias. Bias provides the learner with some basis for choosing among possible representations of the function. Forms of bias: constraints, restrictions, model preferences. Important: there is no learning without a bias.

Learning bias. Choosing a parametric model or a set of models is not enough; there are still too many functions f(x) = ax + b, one for every pair of parameters (a, b).

Fitting the data to the model. We are interested in finding the best set of model parameters. How is the best set defined? Our goal is to have the parameters that reduce the misfit between the model and the data, or in other words, that explain the data the best. Error function: gives a measure of the misfit between the data and the model. Examples of error functions:
- Mean squared error: (1/n) sum_{i=1..n} (y_i - f(x_i))^2
- Misclassification error: the average number of misclassified cases, y_i != f(x_i)

Fitting the data to the model. Linear regression: the least-squares fit with the linear model minimizes (1/n) sum_{i=1..n} (y_i - f(x_i))^2. [Figure: data points with the fitted line]

Typical learning: three basic steps.
- Select a model or a set of models with parameters, e.g. y = ax + b.
- Select the error function to be optimized, e.g. (1/n) sum_{i=1..n} (y_i - f(x_i))^2.
- Find the set of parameters optimizing the error function; the model and parameters with the smallest error represent the best fit of the model to the data.
But there are problems one must be careful about.

CS 2710 Foundations of AI. Lecture 14: Planning. Milos Hauskrecht, milos@cs.pitt.edu, 5329 Sennott Square.

Administration. Midterm exams: mean 77.75, median 78.5. FOL (translation of English to FOL and proof of theorems by resolution refutation) was Problem
6 the main problem The results of the tic tac toe competition CS 2710 Foundations ofAI Tictac toe player competition 1 Swapna Somasundaran 2 Amruta Parundare 3 Chang Liu CS 2710 Foundations ofAI Planning Planning problem nd a sequence of actions that achieves some goal An instance of a search problem Methods for modeling and solving a planning problem Classic State space search Situation calculus based on FOL 7 Search by proving the theorem 7 Inference rules or Resolution refutation approaches STRIPS a restricted FOL language 7 More efficient goal progression and goal regression 7 Partial order planners plan space search CS 2710 Foundations ofAI Planning problems Properties of many real world planning problems The description of the state of the world is very complex Many possible actions to apply in any step Actions are typically local 7 they affect only a small portion of a state description Goals are de ned as conditions referring only to a small portion of state Plans consists of a large number of actions The state space search and situation calculus frameworks may be too cumbersome and inefficient to represent and solve the planning problems CS 2710 Foundations ofAI Solutions Complex state description and local action effects 7 avoid the enumeration and inference of every state component focus on changes only Many possible actions 7 Apply actions that make progress towards the goal 7 Understand what the effect of actions is and reason with the consequences Sequences of actions in the plan can be too long 7 Many goals consists of independent or nearly independent subgoals 7 Allow goal decomposition amp divide and conquer strategies CS 2710 Foundations ofAI STRIPS planner De nes a more restricted FOLbased representation language as compared to the situation calculus Advantage leads to more efficient planning algorithms 7 Statespace search with structured representations of states actions and goals 7 Action representation avoids the frame problem STRIPS planning 
problem much like a standard search planning problem CS 2710 Foundations ofAI STRIPS planner States 7 conjunction of literals eg OnAB OMB Table ClearA 7 represent facts that are true at a specific point in time Actions operators 7 Action Move xyz 7 Preconditions conjunctions of literals with variables Onxy Clearx Clearz 7 Effects Two lists Add list Onxz Cleary Delete list Onxy Clearz Everything else remains untouched is preserved CS 2710 Foundations ofAI STRIPS planning Operator Move xyz Preconditions Onxy Clearx Clearz Add list Onxz Cleary Delete list Onxy Clearz I I I I I Move BTableC OnBTable add Clear C delete OnATable OnATable OnCTable h d OnCTable Clear A ML Clear A Clear B Clear B Clear Table Clear Table CS 2710 Foundations ofAI STRIPS planning Initial state Conjunction of literals that are true Goals in STRIPS A goal is a partially speci ed state Is defined by a conjunction of ground literals 7 N0 variables allowed in the description of the goal Example OnAB OnBC CS 2710 Foundations ofAI Search in STRIPS Objective Find a sequence of operators a plan from the initial state to the state satisfying the goal Two approaches to build a plan Forward state space search goal progression 7 Start from what is known in the initial state and apply operators in the order they are applied Backward state space search goal regression 7 Start from the description of the goal and identify actions that help to reach the goal CS 2710 Foundations ofAI Divide and conquer Divide and conquer strategy 7 divide the problem to a set of smaller subproblems 7 solve each subproblem independently 7 combine the results to form the solution In planning we would like to satisfy a set of goals Divide and conquer in planning 7 Divide the planning goals along individual goals 7 Solve nd a plan for each of them independently 7 Combine the plan solutions in the resulting plan CS 2710 Foundations ofAI State space vs plan space search An alternative to planning algorithms that search states con 
gurations of world Plan De nes a sequence of operators to be performed Partial plan 7 plan that is not complete Some plan steps are missing 7 some orderings of operators are not nalized Only relative order is given Bene ts of working with partial plans 7 We do not have to build the sequence from the initial state or the goal 7 We do not have to commit to a speci c action sequence 7 We can work on subgoals individually divide and conquer CS 2710 Foundations ofAI Statespace vs planspace search State space search STRIPS operator State set of formulas Plan space search Finish Plan transformation Start Incomplete partial plan CS 2710 Foundations ofAI Partial order planning Search the space of partial plans Start Incomplete partial plan POP is sound and complete for STRIPS CS 2710 Foundations ofAI Hierarchical planners Extension of STRIPS planners Example planner ABSTRIPS Idea Assign a criticality level to each conjunct in preconditions list of the operator Planning process re nes the plan gradually based on criticality threshold starting from the highest criticality value 7 Develop the plan ignoring preconditions of criticality less than the criticality threshold value assume that preconditions for lower criticality levels are true 7 Lower the threshold value by one and repeat previous step CS 2710 Foundations ofAI Towers of Hanoi JL gtJL Start position Goal position Hierarchical planning Assume the largest disk 7 criticality level 2 the medium disk 7 criticality level 1 the smallest disk 7 criticality level 0 CS 2710 Foundations ofAI Hierarchical planning Level 0 Level 2 Level 1 A Lem LL 1 L E J 1 A 4 Ag CS 2710 Foundations ofAI Planning with incomplete information Some conditions relevant for planning can be 7 true false or unknown Example Robot and the block is in Room 1 Goal get the block to Room 4 Problem The door between Rooml and 4 can be closed CS 2710 Foundations ofAI Planning with incomplete information Initially we do not know whether the door is opened or 
closed. Different plans:
- If not closed: pick the block, go to room 4, drop the block.
- If closed: pick the block, go to room 2, then room 3, then room 4, and drop the block.
[Figure: the four-room layout]

Conditional planners. Are capable of creating conditional plans that cover all possible situations (contingencies); also called contingency planners. Plan choices are applied when the missing information becomes available. Missing information can be sought actively through actions (sensing actions).

Sensing actions. Example: CheckDoor(d), which checks the door d.
Preconditions: Door(d, x, y) (one-way door between x and y) & At(Robot, x)
Effect: Closed(d) v not Closed(d) (one will become true)
[Figure: room layouts with the door open and closed]

Conditional plans. Sensing actions and conditions incorporated within the plan: Pick(B) -> CheckDoor(D) -> branch on Closed(door): if false, Go(R1,R4) and Drop(B); if true, Go(R1,R2) -> Go(R2,R3) -> Go(R3,R4) and Drop(B).

Tomas Singliar, for CS2710 Foundations of AI, November 8, 2005.

1. On additions and multiplications (counting)

Many people had questions about how to compute the number of products and additions when given an interleaved formula for computing joint probabilities. I will try to formulate the algorithm on the syntactic "parse" of the formula. Let A(TERM) denote the number of additions needed to compute the value of TERM. The recursive formula for computing the number of additions of a sum is

A(sum_{i in I} T) = |I| * A(T) + |I| - 1

Let us analyze this formula. To get the total number of additions, we obviously need to count the additions needed in the subterms T_i. The subterms are homogeneous, so we just multiply the cost of one with the number of values the sum variable i can take on. After we have computed the values of the subterms, we need to add them up, which contributes |I| - 1 additions (adding up 4 numbers takes 3 applications of +). For a product,

A(T_1 * T_2 * ... * T_n) = sum_{k=1..n} A(T_k)

If our formula is a product, there are no extra additions after we compute the subterms, so we just ring up the cost of
computing the subterms: A(P(.)) = 0. This is the basis of our induction: P(.) is a table lookup in the discrete-variable case, or a density function evaluation in the continuous case. You can derive a similar formula for multiplication, M:

M(sum_{i in I} T) = |I| * M(T)

M(T_1 * T_2 * ... * T_n) = (n - 1) + sum_{k=1..n} M(T_k)

Exercise: derive the recursive formulas L(TERM) for the number of probability table lookups.

CS 2710 Foundations of AI. Lecture 19: Decision making in the presence of uncertainty. Milos Hauskrecht, milos@cs.pitt.edu, 5329 Sennott Square.

Decision-making in the presence of uncertainty. Many real-world problems require us to choose future actions in the presence of uncertainty. Examples: patient management, investments. Main issues: how to model the decision process in the computer, and how to make decisions about actions in the presence of uncertainty.

Decision tree representation of the problem. Investing $100 for 6 months. [Figure: decision tree with options Stock 1 (up 0.6 -> $110, down 0.4 -> $90), Stock 2 (up -> $140, down -> $80), and Bank -> $101]

Expected value. Let X be a random variable representing the monetary outcome, with a discrete set of values Omega_X. The expected value of X is E(X) = sum_{x in Omega_X} x P(X = x). The expected value summarizes all stochastic outcomes into a single quantity. Example: the expected value for the outcome of the Stock 1 option is 0.6 x 110 + 0.4 x 90 = 66 + 36 = 102.

Selection based on expected values. The optimal action is the option that maximizes the expected outcome.

Sequential (multi-step) problems. The decision tree can be built to capture multi-step decision problems: choose an action, observe the stochastic outcome, and repeat. How to make decisions for multi-step problems: start from the leaves of the decision tree (outcome nodes), compute expectations at the chance nodes, and maximize at the decision nodes. The algorithm is sometimes called expectimax.

Multi-step problem example. Assume two investment periods and two actions: stock and bank. [Figure: two-period decision tree]

Multi-step problems: conditioning.
Notice that the probability of stock going up and down in the 2quotd step is independent of the 1st step 05 CS 2710 Foundations ofAI Conditioning in the decision tree But this may not be the case In decision trees 7 Later outcomes can be conditioned on the earlier stochastic outcomes and actions Example stock movement probabilities Assume P1 up04 P2 up1 up04 P2 quotup1 down05 CS 2710 Foundations ofAI Multistep problems Conditioning Tree Structure every observed stochastic outcome 1 branch P1 up04 P2 up1 up04 P2 quotup1 down05 1 down CS 2710 Foundations ofAI Trajectory payoffs Outcome values at leaf nodes eg monetary values 7 Rewards and costs for the path trajectory Example stock fees and gains Assume Fee per period 5 paid at the beginning Gain for up 15 loss for down 10 I 1 L down CS 2710 Foundations ofAI Constructing a decision tree The decision tree is rarely given to you directly 7 Part of the problem is to construct the tree Example stocks bonds bank for k periods Stock 7 Probability of stocks going up in the first period 03 7 Probability of stocks going up in subsequent periods Pkth stepUpi k lth step Up04 Pkth step Upi k lth stepDown05 7 Return if stock goes up 15 if down 10 7 Fixed fee per investment period 5 Bonds 7 Probability ofvalue up 05 down 05 7 Return if bond value is going up 7 if down 3 7 Fee per investment period 2 Bank 7 Guaranteed return of 3 per period no fee CS 2710 Foundations ofAI Informationgathering actions Some actions and their outcomes irreversibly change the wor Information gathering exploratory actions 7 make an inquiry about the world 7 Key bene t reduction in the uncertainty Example medicine 7 Assume a patient is admitted to the hospital with some set of initial complaints 7 We are uncertain about the underlying problem and consider a surgery or a medication to treat them 7 But there are often lab tests or observations that can help us to determine more closely the disease the patient suffers from 7 Goal of lab tests Reduce the 
uncertainty of the outcomes of treatments so that a better treatment option can be chosen.

Decision-making with exploratory actions. In decision trees, exploratory actions can be represented and reasoned about the same way as other actions. How do we capture the effect of exploratory actions in the decision tree model? Information obtained through exploratory actions may affect the probabilities of later outcomes:
- Recall that the probabilities of later outcomes can be conditioned on past observed outcomes and past actions.
- The sequence of past actions and outcomes is remembered within the decision tree branch.

Oil wildcatter problem. An oil wildcatter has to make a decision of whether to drill or not to drill on a specific site. Chance of hitting an oil deposit: oil 40%, P(Oil = T) = 0.4; no oil 60%, P(Oil = F) = 0.6. Cost of drilling: $70K. Payoffs: oil $220K, no oil $0K. Decision tree: drill and hit oil: 220 - 70 = 150; drill and no oil: -70; do not drill: 0. Expected value of drilling: 0.4 x 150 + 0.6 x (-70) = 60 - 42 = 18.

Oil wildcatter problem (with a test). Assume that in addition to the drill/no-drill choices we have an option to run a seismic resonance test. Seismic resonance test results:
- Closed pattern: more likely when the hole holds the oil.
- Diffuse pattern: more likely when empty.
P(pattern | Oil):
  Oil = True: closed 0.8, diffuse 0.2
  Oil = False: closed 0.3, diffuse 0.7
Test cost: $10K.

Oil wildcatter problem: decision tree with the test. Leaf payoffs: drill and hit oil: 220 - 70 - 10 = 140; drill and no oil: 0 - 70 - 10 = -80; do not drill after the test: 0 - 10 = -10.

Oil wildcatter problem: decision tree probabilities. By Bayes rule,
P(Oil = T | Test = closed) = P(Test = closed | Oil = T) P(Oil = T) / P(Test = closed)
P(Oil = F | Test = closed) = P(Test = closed | Oil = F) P(Oil = F) / P(Test = closed)
where P(Test = closed) = P(Test = closed | Oil = T) P(Oil = T) + P(Test = closed | Oil = F) P(Oil = F), and similarly P(Test = diffuse) = P(Test = diffuse | Oil = T) P(Oil = T) + P(Test = diffuse | Oil = F) P(Oil = F).
With the numbers above: P(Test = closed) = 0.8 x 0.4 + 0.3 x 0.6 = 0.5, so P(Oil = T | closed) = 0.64 and P(Oil = F | closed) = 0.36; likewise P(Oil = T | diffuse) = 0.16 and P(Oil = F | diffuse) = 0.84.

Oil wildcatter problem: decision tree evaluation. If the test pattern is closed, the expected value of drilling is 0.64 x 140 + 0.36 x (-80) = 60.8, which beats not drilling (-10). If the pattern is diffuse, the expected value of drilling is 0.16 x 140 + 0.84 x (-80) = -44.8, so it is better not to drill (-10). The presence of the test and its result affected our decision: if test = closed then drill; if test = diffuse then do not drill.

Value of information. When does the test make sense? Only when its result can make the decision maker change his mind, that is, decide not to drill. Value of information:
- A measure of the goodness of the information from the test.
- The difference between the expected value with and without the test information.
Oil wildcatter example:
- Expected value without the test: 18
- Expected value with the test: 0.5 x 60.8 + 0.5 x (-10) = 25.4
- Value of information for the seismic test: 25.4 - 18 = 7.4

CS 2710 Foundations of AI. Lecture 14: Planning. Milos Hauskrecht, milos@cs.pitt.edu, 5329 Sennott Square.

Administration. PS-6: due on Wednesday, October 19, 2005. Midterm: Monday, October 24, 2005; in class; closed book; covers search, knowledge representation, and planning.

Planning. Planning problem: find a sequence of actions that achieves some goal; an instance of a search problem. Methods for modeling and solving a planning problem: state space search; situation calculus based on FOL (inference rules, resolution refutation).

Planning problems. Properties of many real-world planning problems: the description of the state of the
world is very complex Many possible actions to apply in any step Actions are typically local 7 they affect only a small portion of a state description Goals are defined as conditions referring only to a small portion of state Plans consists of a large number of actions The state space search and situation calculus frameworks may be too cumbersome and inefficient to represent and solve the planning problems CS 2710 Foundations ofAI Situation calculus problems Frame problem refers to The need to represent a large number of frame axioms Solution combine positive and negative effects in one rule Onu vDO MOVE x y 2 3 ltgt 7u x A v y A Onuvs v v u x A v 2 A Onx y s A Clear x s A Clear 2 3 Inferential frame problem 7 We still need to derive properties that remain unchanged Other problems Quali cation problem 7 enumeration of all possibilities under which an action holds Rami cation problem 7 enumeration of all inferences that follow from some facts CS 2710 Foundations ofAI Solutions Complex state description and local action effects 7 avoid the enumeration and inference of every state component focus on changes only Many possible actions 7 Apply actions that make progress towards the goal 7 Understand what the effect of actions is and reason with the consequences Sequences of actions in the plan can be too long 7 Many goals consists of independent or nearly independent subgoals 7 Allow goal decomposition amp divide and conquer strategies CS 2710 Foundations ofAI STRIPS planner De nes a restricted representation language as compared to the situation calculus Advantage leads to more efficient planning algorithms 7 Statespace search with structured representations of states actions and goals 7 Action representation avoids the frame problem STRIPS planning problem much like a standard search planning problem CS 2710 Foundations ofAI STRIPS planner States 7 conjunction of literals eg OnAB OMB Table ClearA 7 represent facts that are true at a specific point in time Actions 
operators 7 Action Move xyz 7 Preconditions conjunctions of literals with variables Onxy Clearx Clearz 7 Effects Two lists Add list Onxz Cleary Delete list Onxy Clearz Everything else remains untouched is preserved CS 2710 Foundations ofAI STRIPS planning Operator Move xyz Preconditions Onxy Clearx Clearz Add list Onxz Cleary Delete list Onxy Clearz I I I I I Move BTableC OnBTable add Clear C delete OnATable OnATable OnCTable h d OnCTable Clear A ML Clear A Clear B Clear B Clear Table Clear Table CS 2710 Foundations ofAI STRIPS planning Initial state Conjunction of literals that are true Goals in STRIPS A goal is a partially speci ed state Is defined by a conjunction of ground literals 7 N0 variables allowed in the description of the goal Example OnAB OnBC CS 2710 Foundations ofAI Search in STRIPS Objective Find a sequence of operators a plan from the initial state to the state satisfying the goal Two approaches to build a plan Forward state space search goal progression 7 Start from what is known in the initial state and apply operators in the order they are applied Backward state space search goal regression 7 Start from the description of the goal and identify actions that help to reach the goal CS 2710 Foundations ofAI Forward search goal progression Idea Given a state s 7 Unify the preconditions of some operator at with s 7 Add and delete sentences from the add and delete list of an operator at from s to get a new state can be repeated mm 9 Move BTableC Clear C add delete OrlATable OrlATable OrlCTable OrlCTable Clear A m Clear A Clear B Clear B Clear Table Clear Table CS 2710 Foundations ofAI Forward search goal progression Use operators to generate new states to search Check new states whether they satisfy the goal Search tree E M H goal Move A Table B Initial state Move B Table C llIove ATable C lt CS 2710 Foundations ofAI El Backward search goal regression Idea Given a goal G Unify the add list of some operator at with a subset of G If the delete list of a 
does not remove elements of G then the goal regresses to a new goal G that is obtained from G by 7 deleting add list of a 7 adding preconditions of a I E Move A ble B New goal G On A Table Goal G Clear B precondition add On AB Clear A OnBC On B C OnCTable OnCTable Mapped from G CS 2710 Foundations ofAI Backward search goal regression Use operators to generate new goals Check whether the initial state satis es the goal Search tree ma I I Initial state Move B Table C Move A Table B goal Move A B Table CS 2710 Foundations ofAI Statespace search Forward and backward state space planning approaches 7 Work with strictly linear sequences of actions Disadvantages 7 They cannot take advantage of the problem decompositions in which the goal we want to reach consists of a set of independent or nearly independent sub goals 7 Action sequences cannot be built from the middle 7 No mechanism to represent least commitment in terms of the action ordering CS 2710 Foundations ofAI Divide and conquer Divide and conquer strategy 7 divide the problem to a set of smaller subproblems 7 solve each subproblem independently 7 combine the results to form the solution In planning we would like to satisfy a set of goals Divide and conquer in planning 7 Divide the planning goals along individual goals 7 Solve nd a plan for each of them independently 7 Combine the plan solutions in the resulting plan Is it always safe to use divide and conquer 7 No There can be interacting goals CS 2710 Foundations ofAI Sussman s anomaly An example from the blocks world in which the divide and conquer fails due to interacting goals Initial state Goal OnAB OnBC CS 2710 Foundations ofAI Sussman s anomaly 1 Assume we want to satisfy On A B rst Initial state But now we cannot satisfy On B C without undoing On A B CS 2710 Foundations ofAI Sussman s anomaly 1 Assume we want to satisfy On AB rst I I M H q El Initial state But now we cannot satisfy On B C without undoing On A B 2 Assume we want to satisfy 0quot B C rst 5 
E E Q E Initial state But now we cannot satisfy 0quot A B without undoing 0quot B C CS 2710 Foundations ofAI State space vs plan space search An alternative to planning algorithms that search states con gurations of world Plan De nes a sequence of operators to be performed Partial plan 7 plan that is not complete Some plan steps are missing 7 some orderings of operators are not nalized Only relative order is given Bene ts of working with partial plans 7 We do not have to build the sequence from the initial state or the goal 7 We do not have to commit to a speci c action sequence 7 We can work on subgoals individually divide and conquer CS 2710 Foundations ofAI Statespace vs planspace search State space search STRIPS operator State set of formulas Plan space search Finish Plan transformation Start Incomplete partial plan CS 2710 Foundations ofAI Plan transformation operators Examples of Add an operator to a plan so that it satis es some open condition Add link instantiate t rt MoveAH l s a A gt goa oveCA E Order reorder operators 8 fEOV CyDE MoveAHB3 g8 CS 2710 Foundations ofAI Partialorder planners POP also called Non linear planners Use STRIPS operators Graphical representation of an operator Movexyz add list preconditions Delete list is not shown ll Illustration of a POP on the Sussman s anomaly case CS 2710 Foundations ofAI Partial order planning Start and finish Goal Start M H CS 2710 Foundations ofAI Partial order planning Start and finish Goal Open conditions conditions yet to be satis ed H Start 5 CS 2710 Foundations ofAI Partial order planning Add operator El Goal We want to satisfy an open condition Always select an operator that helps to satisfy one of the open conditions Start M 5 CS 2710 Foundations ofAI Partial order planning Add link Goal m a Start 5 CS 2710 Foundations ofAI Partial order planning Add link El Goal Add link Satis es an open condition c Start M H CS 2710 Foundations ofAI Partial order planning Add link Goal Satis es an open condition 
instantiates y F1 H Start 5 CS 2710 Foundations ofAI Partial order planning Add operator El CS 2710 Foundations ofAI Partial order planning Add links El MoveBF1C F t H w Stan E CS 2710 Foundations ofAI Partial order planning Interactions El Deletes ClearB A was stacked on B CS 2710 Foundations ofAI Partial order planning Order operators El gt MoveBFlC F comes before MoveAFlB t H w Stan E CS 2710 Foundations ofAI Partial order planning Add operator MoveBF1C I CS 2710 Foundations ofAI gt W w CS 2710 Foundations ofAI Partial order planning Threats Goal MovefBgFl 0110351 CleaIC 4 Deletes ClearC B moved on top of C I CS 2710 Foundations ofAI I w CS 2710 Foundations ofAI MoveBF1C OnBF1 CS 2710 Foundations ofAI L CS 2710 Foundations ofAI Partial order planning Result plan Plan a topological 011 of a graph CS 2710 Foundations ofAI Partial order planning Remember we search the space of partial plans Start Incomplete partial plan POP is sound and complete CS 2710 Foundations ofAI Hierarchical planners Extension of STRIPS planners Example planner ABSTRIPS Idea Assign a criticality level to each conjunct in preconditions list of the operator Planning process re nes the plan gradually based on criticality threshold starting from the highest criticality value 7 Develop the plan ignoring preconditions of criticality less than the criticality threshold value assume that preconditions for lower criticality levels are true 7 Lower the threshold value by one and repeat previous step CS 2710 Foundations ofAI Towers of Hanoi LL M Start position Goal position Hierarchical planning Assume the largest disk 7 criticality level 2 the medium disk 7 criticality level 1 the smallest disk 7 criticality level 0 CS 2710 Foundations ofAI Hierarchical planning Level 0 Level 2 Level 1 M 1 L M Mala CS 2710 Foundations ofAI Planning with incomplete information Some conditions relevant for planning can be 7 true false or unknown Example Robot and the block is in Room 1 Goal get the block to Room 4 
Problem The door between Rooml and 4 can be closed Rooml I Room2 CS 2710 Foundations ofAI Planning with incomplete information Initially we do not know whether the door is opened or closed Different plans 7 If not closed pick the block go to room 4 drop the block 7 If closed pick the block go to room2 then room3 then room4 and drop the block CS 2710 Foundations ofAI Conditional planners Are capable to create conditional plans that cover all possible situations contingencies 7 also called contingency planners Plan choices are applied when the missing information becomes available Missing information can be sought actively through actions 7 Sensing actions Room4 Room3 Room4 I Room3 Rooml I Room2 Rooml Room2 CS 2710 Foundations ofAI Sensing actions Example CheckDoord checks the door d Preconditions Doordxy one way door between X and y amp AtRobotx Effect Closedd V39lClosedd one will become true CS 2710 Foundations ofAI Conditional plans Sensing actions and conditions incorporated within the plan F Go R1R4 DropB PickB CheckDoorD gt Closed door 7 T GoR1R2 gt Gamma 600634 Room4 I Room3 Room4 Room3 IT i j Rooml I Room2 Rooml Room2 CS 2710 Foundations ofAI CS 2710 Foundations of AI Lecture 14 Planning Milos Hauskrecht miloscspittedu 5329 Sennott Square CS 2710 Foundations ofAI Administration 0 PS 6 7 Due on Wednesday October 19 2005 Midterm Monday October 24 2005 7 In class 7 Closed book 7 Covers Search Knowledge representation and Planning CS 2710 Foundations ofAI Planning Planning problem nd a sequence of actions that achieves some goal An instance of a search problem Methods for modeling and solving a planning problem State space search Situation calculus based on FOL 7 Inference rules 7 Resolution refutation CS 2710 Foundations ofAI Planning problems Properties of many real world planning problems The description of the state of the world is very complex Many possible actions to apply in any step Actions are typically local 7 they affect only a small portion of a state 
description.
- Goals are defined as conditions referring only to a small portion of the state.
- Plans consist of a large number of actions.
The state space search and situation calculus frameworks may be too cumbersome and inefficient to represent and solve such planning problems.

Situation calculus problems
Frame problem: refers to the need to represent a large number of frame axioms.
Solution: combine positive and negative effects in one rule (a successor-state axiom):
  On(u,v,DO(MOVE(x,y,z),s)) ⇔
      [¬(u = x ∧ v = y) ∧ On(u,v,s)]
    ∨ [u = x ∧ v = z ∧ On(x,y,s) ∧ Clear(x,s) ∧ Clear(z,s)]
Inferential frame problem:
- We still need to derive the properties that remain unchanged.
Other problems:
- Qualification problem: enumeration of all possibilities under which an action holds.
- Ramification problem: enumeration of all inferences that follow from some facts.

Solutions
- Complex state descriptions and local action effects:
  avoid the enumeration and inference of every state component; focus on changes only.
- Many possible actions:
  apply actions that make progress towards the goal; understand what the effect of actions is and reason with the consequences.
- Sequences of actions in the plan can be too long:
  many goals consist of independent or nearly independent subgoals, so allow goal decomposition and divide-and-conquer strategies.

STRIPS planner
- Defines a restricted representation language as compared to the situation calculus.
- Advantage: leads to more efficient planning algorithms:
  - state-space search with structured representations of states, actions, and goals;
  - an action representation that avoids the frame problem.
- A STRIPS planning problem is much like a standard search planning problem.

STRIPS planner
States:
- Conjunctions of literals, e.g. On(A,B) ∧ On(B,Table) ∧ Clear(A).
- Represent facts that are true at a specific point in time.
Actions (operators):
- Action: Move(x,y,z)
- Preconditions: conjunctions of literals with variables: On(x,y), Clear(x), Clear(z)
- Effects, given as two lists:
  - Add list: On(x,z), Clear(y)
  - Delete list: On(x,y), Clear(z)
Everything else remains untouched (is preserved).

STRIPS planning
Example: the operator
  Move(x,y,z)
  Preconditions: On(x,y), Clear(x), Clear(z)
  Add list: On(x,z), Clear(y)
  Delete list: On(x,y), Clear(z)
instantiated as Move(B,Table,C):
  Before: On(A,Table), On(B,Table), On(C,Table), Clear(A), Clear(B), Clear(C), Clear(Table)
  After:  On(A,Table), On(B,C), On(C,Table), Clear(A), Clear(B), Clear(Table)
  (The add list contributes On(B,C) and Clear(Table); the delete list removes On(B,Table) and Clear(C).)

STRIPS planning
- Initial state: a conjunction of literals that are true.
- Goals in STRIPS:
  - A goal is a partially specified state.
  - It is defined by a conjunction of ground literals; no variables are allowed in the description of the goal.
  - Example: On(A,B) ∧ On(B,C)

Search in STRIPS
Objective: find a sequence of operators (a plan) from the initial state to a state satisfying the goal.
Two approaches to building a plan:
- Forward state space search (goal progression): start from what is known in the initial state and apply operators in the order they are applied.
- Backward state space search (goal regression): start from the description of the goal and identify actions that help to reach the goal.

Forward search (goal progression)
Idea: given a state s,
- unify the preconditions of some operator a with s;
- apply the add and delete lists of operator a to s to get a new state.
This can be repeated; e.g., applying Move(B,Table,C) to the initial blocks-world state above adds On(B,C) and deletes On(B,Table) and Clear(C).
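The progression step just described (check that the preconditions hold, then remove the delete list and union in the add list) can be sketched with sets of ground literals. This is a minimal illustration of the idea, not the original STRIPS implementation; the function name `progress` and the string encoding of literals are assumptions for the example.

```python
# Minimal sketch of one STRIPS goal-progression (forward search) step.
# A state is a frozenset of ground literals, written as plain strings.

def progress(state, precond, add, delete):
    """Apply a ground STRIPS operator to a state; None if not applicable."""
    if not precond <= state:          # every precondition literal must hold
        return None
    return (state - delete) | add     # delete list out, add list in

# Move(B,Table,C) applied to the initial blocks-world state from the text:
init = frozenset({"On(A,Table)", "On(B,Table)", "On(C,Table)",
                  "Clear(A)", "Clear(B)", "Clear(C)", "Clear(Table)"})
new = progress(init,
               precond={"On(B,Table)", "Clear(B)", "Clear(C)"},
               add={"On(B,C)", "Clear(Table)"},
               delete={"On(B,Table)", "Clear(C)"})
print(sorted(new))
```

Repeating this step from each successor state is exactly the forward state-space search: the "everything else is preserved" reading of STRIPS falls out of the set difference and union, with no frame axioms needed.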
