
# Introduction to Artificial Intelligence CS 440


These 123 pages of class notes were uploaded by Betty Kertzmann on Tuesday, September 22, 2015. The notes are for CS 440 at Colorado State University, taught by Bruce Draper in Fall. For similar materials, see /class/210179/cs-440-colorado-state-university in Computer Science at Colorado State University.



## Lecture 12: Propositional Logic, continued (10/02/08)

### Announcements

- Programming assignment 1 was due at noon.
- Writing assignment 2, part 1, is due Tuesday, Oct. 7.
- Possible midterm dates: Tuesday, Oct. 14 or Thursday, Oct. 16.
- Read Chapter 8 for Tuesday.

### The Wumpus World

- Performance measure: +1000 for the gold, -1000 for death, -1 per step, -10 for using the arrow.
- Squares adjacent to the wumpus are smelly; squares adjacent to a pit are breezy.
- Glitter iff the gold is in the same square.
- Shooting kills the wumpus if you are facing it; shooting uses up the only arrow.
- Grabbing picks up the gold if it is in the same square.
- Sensors: Stench, Breeze, Glitter, Bump.
- Actuators: TurnLeft, TurnRight, Forward, Grab, Release, Shoot.

### Wumpus Logic

- Let P(x,y) be the proposition that square (x,y) has a pit. Note that there are exactly 16 of these distinct symbols on the 4×4 board.
- Let W(x,y) be the proposition that the wumpus is in square (x,y); again, 16 distinct symbols.
- Let B(x,y) be the proposition that a breeze is felt in square (x,y).
- Let S(x,y) be the proposition that a stench is detected in square (x,y).
- Altogether, a system with a total of 64 propositional symbols.

### Wumpus Axioms I

- ¬W(1,1): the wumpus is not in square (1,1).
- ¬P(1,1): square (1,1) does not contain a pit.
- W(1,1) ∨ W(1,2) ∨ … ∨ W(4,4): a wumpus exists.
- W(1,1) ⇒ ¬W(1,2) ∧ ¬W(1,3) ∧ … ∧ ¬W(4,4): if the wumpus is somewhere, it is nowhere else. There are 16 of these rules.

### Wumpus Axioms II

- S(2,2) ⇔ W(1,2) ∨ W(3,2) ∨ W(2,1) ∨ W(2,3): if there is a stench, the wumpus is in a neighboring cell. Once again there are 16 of these rules; boundary cases involve fewer disjuncts.
- B(2,2) ⇔ P(1,2) ∨ P(3,2) ∨ P(2,1) ∨ P(2,3): if there is a breeze, a neighboring cell has a pit. Again 16 rules, with boundary cases.

### Wumpus Logic II

If the sensors in square (1,1) detect neither breeze nor smell, what do we know?

| Premises | Rule | Conclusion |
|---|---|---|
| ¬B(1,1), B(1,1) ⇔ P(1,2) ∨ P(2,1) | Modus Tollens | ¬(P(1,2) ∨ P(2,1)) |
| ¬(P(1,2) ∨ P(2,1)) | De Morgan's | ¬P(1,2) ∧ ¬P(2,1) |
| ¬P(1,2) ∧ ¬P(2,1) | Simplification | ¬P(1,2) |
| ¬P(1,2) ∧ ¬P(2,1) | Simplification | ¬P(2,1) |

### Conjunctive Normal Form

A sentence is in CNF iff it is a conjunction of disjunctions of propositions and negated propositions. Example: (A ∨ ¬B) ∧ (¬A ∨ C) ∧ B.

### Converting to CNF I

Every sentence can be
converted to CNF by sequentially eliminating all other symbols:

1. Replace α ⇔ β with (α ⇒ β) ∧ (β ⇒ α).
2. Replace α ⇒ β with ¬α ∨ β.
3. Move ¬ inward:
   1. Replace ¬¬α with α.
   2. Replace ¬(α ∧ β) with ¬α ∨ ¬β.
   3. Replace ¬(α ∨ β) with ¬α ∧ ¬β.
4. Replace α ∨ (β ∧ γ) with (α ∨ β) ∧ (α ∨ γ).

### Converting to CNF II

- While converting expressions, note that (α ∨ β) ∨ γ is equivalent to α ∨ β ∨ γ, and (α ∧ β) ∧ γ is equivalent to α ∧ β ∧ γ.
- Why does this algorithm work?
  - Because ⇒ and ⇔ are eliminated.
  - Because ¬ ends up attached directly to simple propositions.
  - Because what is left must be ∧s and ∨s, and the ∨s can be distributed over the ∧s to make CNF.

### Converting to CNF: Example

Convert (A ⇒ B) ∨ (B ∧ C):

1. Eliminate ⇒: (¬A ∨ B) ∨ (B ∧ C).
2. Distribute ∨ over ∧: (¬A ∨ B ∨ B) ∧ (¬A ∨ B ∨ C).
3. Drop the repeated term: (¬A ∨ B) ∧ (¬A ∨ B ∨ C).

Question: are the first and last lines really equivalent? Double-checking with a truth table shows they are.

### Resolution

CNF enables an inference rule called resolution:

(α ∨ β), (¬β ∨ γ) ⊢ α ∨ γ

In general, if two disjunctive clauses are true, and one clause contains a term that is the negation of a term in the other, then the disjunction of all the other terms is true. Also, if the result contains the same term twice, you may drop one; e.g., A ∨ A ∨ B becomes A ∨ B.

### Resolution & Contradictions

What happens if a CNF expression contains a contradiction? Example: (¬A ∨ B) ∧ A ∧ ¬B (yes, this is in CNF). Resolving the first and last clauses produces ¬A; another step of resolution, against A, produces the empty clause.

### Resolution & Contradictions II

- Producing an empty clause signifies that the set cannot be satisfied: "nothing" would have to be true.
- Moreover, any set of contradictory disjunctive clauses will reduce to the empty clause under repeated application of resolution.
- And only contradictory clause sets lead to the empty clause.

### Automated Theorem Proving

How do we automate the inference process?

1. Assume the negation of the consequent and add it to the axioms.
2. Convert all axioms to CNF, i.e., a collection of disjunctive clauses.
3. Repeatedly apply resolution to remove terms until either:
   - it produces an empty clause (a contradiction), in which case
the consequent is proven, or
   - no more terms can be resolved, in which case the consequent cannot be proven.

### Example

Problem: prove (A ⇔ B) ∧ (A ⇒ C) ⊢ B ⇒ C.

- Step 1, negate the consequent: (A ⇔ B) ∧ (A ⇒ C) ∧ B ∧ ¬C.
- Step 2, convert to CNF: (¬A ∨ B) ∧ (¬B ∨ A) ∧ (¬A ∨ C) ∧ B ∧ ¬C.

### Example, cont.

Now apply resolution to (¬A ∨ B), (¬B ∨ A), (¬A ∨ C), B, ¬C:

1. (¬A ∨ C) and ¬C resolve to ¬A.
2. (¬B ∨ A) and ¬A resolve to ¬B.
3. ¬B and B resolve to the empty clause: contradiction, so the consequent is proven.

### Example III

Or, alternatively, start with the first two clauses: (¬A ∨ B) and (¬B ∨ A) resolve only to tautologies such as B ∨ ¬B, which get us nowhere. The order in which clauses are resolved matters.

### What can resolution do?

- It can determine whether any given sentence can be proven from a set of axioms: it is complete.
- It cannot directly enumerate the set of all entailed sentences, although you can enumerate such an (infinite) list by enumerating all possible sentences and verifying each with resolution.

## Lecture 2: Agents (8/28/08)

### What is an agent?

According to your text: "An agent is anything that can be viewed as perceiving its environment through sensors and acting on that environment through actuators" (p. 32).

### Caveat

Your book also says: "the notion of an agent is meant to be a tool for analyzing systems, not an absolute characterization that divides the world into agents and non-agents. One could view a handheld calculator as an agent that chooses the action of displaying '4' when given the percept sequence '2 + 2 =', but such an analysis would hardly aid our understanding of the calculator" (p. 34, emphasis mine).

### Examples of agents

- Taxi driver
- Internet shopper
- Backgammon player
- Chemical plant controller
- Spam detector

### Agents

[Figure: my redrawing of Figure 2.1 in your text (Russell & Norvig): agent and environment coupled through sensors and actuators. Psychologists prefer "stimulus" and "response"; we will use AI terminology: percepts and actions.]

### The agent and the environment

An agent:

- works in a particular environment,
- has goals,
- perceives the environment, and
- performs actions to achieve its goals.

### Example: the automated taxi driver

- Environment: roads, other traffic, pedestrians, other impediments.
- Sensors: cameras, engine
sensors, laser range finders, GPS, keyboard, microphone.
- Actuators: steering, accelerator, brake, turn signal, horn.
- Possible performance measures: safety, speed, legality, comfort, profit.

### Formalizing Task Environments (PEAS)

- **P**erformance: all-important; this defines the goal.
- **E**nvironment: this defines the world the agent lives in.
- **A**ctuators: this defines how the agent can change the world.
- **S**ensors: this defines how the agent sees the world, and how much of the world the agent can see.

### Formalizing Agents

- Agent functions map sequences of percepts onto actions.
- Really? Do you believe that? How about environments?

### Environments

Key properties, compared across four example tasks:

- Fully vs. partially observable: can the sensors detect all aspects that are relevant to the choice of action?
- Deterministic vs. stochastic: is the next state completely determined by the current state and the agent's action?
- Episodic vs. sequential: can the agent's experience be divided into independent episodes, or does the right action depend on history?
- Static vs. dynamic: can the environment change while the agent is deliberating?
- Discrete vs. continuous: how are the environment, time, percepts, and actions handled?
- Single- vs. multi-agent: does the environment contain other agents with some performance measure that depends on the agent's actions?

| Property | Crossword puzzle | Backgammon | Internet shopping | Taxi |
|---|---|---|---|---|
| Observable | fully | fully | ? | partially |
| Deterministic | deterministic | stochastic | ? | stochastic |
| Episodic | sequential | sequential | sequential | sequential |
| Static | static | static | ? | dynamic |
| Discrete | discrete | discrete | discrete | continuous |
| Agents | single | multi | ? | multi |

### The vacuum world

- Environment: two squares, A and B.
- Percepts: location and content, e.g., (A, Dirty).
- Actions: Left, Right, Suck, NoOp.

### The vacuum world, cont.

The agent function as a table: (A, Clean) → Right; (A, Dirty) → Suck; (B, Clean) → Left; (B, Dirty) → Suck.

### A simple agent function

```python
def reflex_vacuum_agent(location, status):
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"
```

Is this the best agent for the job?

### Rational agents

- A rational agent is one that does the right thing.
- What is the right thing?
  - An approximation: the most successful agent.
  - This requires a measure of success.
- Performance measure: according to what is wanted in the environment, i.e., the goals.
- Performance measures for the vacuum world:
  - the amount of dirt cleaned per unit time;
  - how much energy is spent on moving and cleaning.

### Rationality

- What is rational at a given time depends on:
  - built-in knowledge about the environment, and
  - the percept sequence to date (sensors).
- A rational agent chooses an action which maximizes the expected value of the performance measure, given the percept sequence and its built-in knowledge.

### The Structure of Agents

A sequence of progressively more sophisticated agents:

1. Reflex agents
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents
5. Learning agents

### Reflex Agent

- Selects an action only on the basis of the current percept.
- Large reduction in possible percept/action combinations.

### Simple reflex agent

```python
def simple_reflex_agent(percept, rules):
    # rules maps an interpreted state (here, the raw percept) to an action
    state = percept        # INTERPRET-INPUT
    return rules[state]    # RULE-MATCH / RULE-ACTION
```

This will only work if the environment is fully observable. Our taxi-driver agent would not work as a simple reflex agent!

### Model-based reflex agent

- Maintains an internal state.
- Updates the state using information on "how the world works": the model of the world.
- A taxi-driver agent
needs to keep track of cars in its blind spot, for example.

### Goal-based agents

- The agent has a goal and chooses actions to achieve that goal.
- Search and planning are the subfields of AI devoted to finding action sequences that achieve the agent's goals.

### Utility-based agents

- A utility function maps a sequence of states onto a real number.
- Certain goals can be reached in different ways; some ways are better, i.e., have a higher utility.
- Improves on goal-based agents:
  - selects between conflicting goals;
  - selects appropriately between several goals that have varying probability of success.

### Learning agents

- All previous agent programs describe methods for selecting actions, yet none explains the origin of those programs.
- The advantage of learning is the robustness of the program toward initially unknown environments.
- Components:
  - Performance element: selects actions based on percepts (corresponds to the previous agent designs).
  - Learning element: introduces improvements in the performance element.
  - Critic: provides feedback on the agent's performance against a fixed standard.
  - Problem generator: suggests actions that lead to new and informative experiences (exploration vs. exploitation).

### For Tuesday

On Tuesday we start talking about search. Please read Sections 3.3–3.5 in your text.

## Lecture 22: Vision, Part 1 (11/18/08)

### Announcements

- We are going to jump around a bit: please read Chapter 24 for Thursday.
- Remember, programming assignment 3 is due Thursday, Dec. 11.
- No ACM meeting this week.

### Topics for today

1. What is vision? (Not as obvious as it might seem.)
2. Perspective projection and 3D recovery.
3. Reflectance and appearance.
4. The early stages of vision.

### What is vision?

What information do you extract visually? What tasks does vision support?

- Local navigation: don't bump into things.
- Global navigation: where am I?
- "Duck" reflexes.
- Who are you? Are you angry, happy, sad?
- Threat detection: should I be scared?
- Day or night? Have I seen this before?

### Myths about vision

- "I see the whole world in detail." The fovea is the high-resolution part of your eye; it covers only about 2° of visual angle.
- "I see the whole world in color." Most color-sensitive cones are in the fovea. Peripheral
color vision is poor.
- "I see depth." Stereo vision is only accurate to about 10 feet; most depth perception is qualitative/comparative.
- "I see everything in front of me." Change blindness; inattentional blindness.

### More Myths

- "I was born knowing how to see." Babies aren't born knowing how to focus their eyes; you had to learn to see.
- "Everyone sees the same" (vis-à-vis glasses). Vision is a set of skills, and some people are better than others at specific skills:
  - Prosopagnosia (Nakayama)
  - Navigators vs. identifiers (Kosslyn)
  - Image interpreters (NGA)

### Perspective Projection

Humans view the 3D world through non-parallel viewing rays that pass through a 2D plane.

### Perspective Projection II

In perspective projection:

- All rays of light pass through the focal point, a.k.a. the principal reference point (PRP).
- The image plane is an infinite plane in front of (or behind) the focal point.
- Images are formed by rays of light passing through the image plane on their way to the focal point.
- Image points are (u, v); world points are (x, y, z).

### Perspective Projection III

The key to perspective projection is that all light rays meet at the PRP (focal point). Again, let's start with some assumptions:

- VPN is the z-axis; VUP is the y-axis.
- The view plane is z = d (the focal length).
- The focal point is (0, 0, 0).

Then, for a world point P = (Px, Py, Pz), by similar triangles:

u / d = Px / Pz and v / d = Py / Pz, so u = d·Px / Pz and v = d·Py / Pz.

### What does this mean?

- Information is lost: you cannot recreate the 3D world from a single 2D image.
- Every image point corresponds to a ray in the 3D world.
- If you can identify the same point in images taken from two locations, you can infer its 3D position: stereo, motion.

### Reflectance Functions

Reflectance functions are defined in terms of:

- the position of a point light source,
- the orientation of the reflecting surface (the surface normal), and
- the viewing position.

### Reflectance Functions II

There are three major types of reflectance functions:

1. Ambient reflection: models perfectly even background light.
2. Lambertian reflection: the type of reflection displayed by matte (dull, not shiny) surfaces.
3. Specular reflection: the bright reflection seen on polished surfaces.

### Lambertian Reflection

- Consider a point light source and a diffuse (Lambertian) surface.
- Diffuse surfaces reflect light with equal intensity in all directions.

### Lambertian Surfaces, continued

- The light per unit area arriving at the surface depends on the angle to the light source.
- Light leaves in all directions, reflected off micro-surfaces.

### Diffuse Reflection

- The illuminated area drops when the surface is tilted away from a point light source; the diffuse term is proportional to N · L.
- K_d is a (usually diagonal) 3×3 matrix of reflectance parameters: a material property.

### Material Properties

- Most surfaces reflect red light as red, etc.
- Each term of K_d represents the percent of r, g, or b light reflected as r, g, or b light.

### Diffuse Reflection IV

- For a single point light source p: I = I_p K_d (N · L_p).
- For many point light sources: I = Σ_p I_p K_d (N · L_p).

### Ambient Reflection

Ambient reflection is just diffuse reflection of light from the ambient light source. Ambient light models background light:

- It is the same intensity everywhere.
- It is the same color everywhere.
- It comes from every angle.

### Specular Reflection

- Most surfaces do not act as pure diffuse reflectors. Consider two extremes: a diffuse surface and a mirror.
- Most objects have some specular reflection component; mirrors are the extreme case. Mirrors reflect around an angle.

### Specular Reflection II

- Diffuse reflection modeled "deep" reflection: light entered through micro-holes in the surface, bounced from facet to facet (changing color), and exited at a random angle.
- Specular reflection is surface reflection: light hits a single surface facet and skips off. The color of the light is unchanged, and the exit angle depends on the entry angle.

### Specular Reflection V

- If V ≠ R, there is no pure specular reflection.
- Note that k_s is a scalar measuring the percent of light reflected specularly.
- Specular reflection is always the color of the incident light.

### Idealized Specular Reflection

- The angle of incidence equals the angle of reflection.
- The idealized form only applies to mirrors.

### Total Reflectance

So the reflectance at a surface point is the sum of: diffuse reflection from the
ambient light, plus diffuse reflection off of every point light, plus specular reflection off of every point light:

I = I_A K_A + Σ_{p ∈ lights} [ I_p K_d (N · L_p) + k_s (V · R_p) ]

### Early Vision

Where do we start? An image is, say, 1000 × 1000 color pixels: too much information to handle all at once.

### Approaches to early vision

- Edge detection
- Segmentation (region detection)
- Interest points (a.k.a. focus-of-attention points)
- Context: guess what you're looking at

### Edges as Facets

- The image can be thought of as a gray-level intensity surface:
  - piecewise flat (flat facet model),
  - piecewise linear (sloped facet model),
  - piecewise quadratic,
  - piecewise cubic.
- Example images: www.mirametrics.com
- Processing implicitly or explicitly estimates the free parameters.

### Facet Edge Detection

- Facet edge detectors assume a piecewise-linear model and calculate the slope of the planar facet (a 1st derivative).
- If we assume that the noise is zero-mean and increases with the square of distance, then convolution with the Sobel edge operator is optimal.

### Examples of Facet Edges

[Figure: source image with its Dx and Dy derivative images.]

### Properties of Facet Edge Masks

- Magnitude: sqrt(dx² + dy²).
- Orientation: tan⁻¹(dy/dx).
- Dy/Dx responses are signed.
- Edges tend to be thick.
- Edge masks: the sum of weights is zero. Smoothing masks: the sum of weights is one.

### Canny Example

[Figure: source image and Canny output; sigma = 2.0, low threshold 0.40, high threshold 0.90.]

### What is an Edge?

- An edge is a description of a localized image pattern: it measures a slope in intensity.
- An edge is a symbolic feature. We need to know what it denotes:
  - a surface marking, or
  - a surface discontinuity, or
  - a shadow (illumination discontinuity).
- These things have precise positions.

## Lecture 11: Propositional Logic (9/30/08)

### Announcements

- Programming assignment 1 is due before the next class.
  - Java timer info: http://java.sun.com/docs/books/tutorial/uiswing/misc/timer.html
  - Qt timer info: http://doc.trolltech.com/4.2/timers.html
- Any questions?
- We need to schedule a midterm date. It will be scheduled on a Thursday; if you have any conflicts, let me know them by then.
- ACM Club: Wednesday, 5:00, 110 USC. "So what is this Unix thing, anyway?"

### Logic

A
logic is a formal system of reasoning. It must have:

1. Syntax, which describes the sentences of the logical language.
2. Semantics, which assign meanings to sentences.
3. Models, which are possible worlds the semantics might describe.

### Propositional Logic

Propositional logic is a simple logic based on propositions:

- Propositions are indivisible.
- Propositions are either true or false.
- Examples of propositions: "Your textbook is green." "The sky is falling."

### Syntax

- The proposition symbols P1, P2, etc. are atomic sentences.
- If S is a sentence, ¬S is a sentence (negation).
- If S1 and S2 are sentences, S1 ∧ S2 is a sentence (conjunction).
- If S1 and S2 are sentences, S1 ∨ S2 is a sentence (disjunction).
- If S1 and S2 are sentences, S1 ⇒ S2 is a sentence (implication).
- If S1 and S2 are sentences, S1 ⇔ S2 is a sentence (biconditional).

### Syntax: Backus-Naur Form

```
Sentence        → AtomicSentence | ComplexSentence
AtomicSentence  → True | False | Symbol
ComplexSentence → ¬Sentence
                | (Sentence ∧ Sentence)
                | (Sentence ∨ Sentence)
                | (Sentence ⇒ Sentence)
                | (Sentence ⇔ Sentence)
```

### Semantics

| Sentence | is true iff |
|---|---|
| ¬S | S is false |
| S1 ∧ S2 | S1 is true and S2 is true |
| S1 ∨ S2 | at least one of S1, S2 is true |
| S1 ⇒ S2 | S1 is false or S2 is true |
| S1 ⇔ S2 | S1 and S2 have the same value (true or false) |

### Semantics: Truth Tables

The semantics of any operator can be expressed as a truth table.

### Truth Tables II

- How many binary operators are there, and how do you know? 2⁴ = 16 (pigeonhole principle: each of the four input rows may independently be T or F).
- Express ⇒ as a binary operator:

| S1 | S2 | S1 ⇒ S2 |
|---|---|---|
| T | T | T |
| T | F | F |
| F | T | T |
| F | F | T |

### Models

- Models are a mapping of propositions onto {True, False}.
- Axioms are a set of simple and complex sentences that are asserted to be true without proof.
- A set of axioms entails a sentence iff the sentence is true in every model in which the axioms are true.

### Question

Do the axioms A ⇒ B, B entail A? Written: (A ⇒ B) ∧ B ⊨ A? How do you know if this is true?

### Truth Tables

| A | B | (A ⇒ B) ∧ B | A |
|---|---|---|---|
| T | T | T | T |
| T | F | F | T |
| F | T | T | F |
| F | F | F | F |

No: A is not entailed (in the third row the axioms are true but A is false).

### Entailment

The goal is to infer the truth value of a predicate
from a set of axioms:

- If A, B ⊨ P, then we know P is true.
- If A, B ⊨ ¬P, then we know P is false.
- Otherwise, the value of P is unknown.

The problem is that using truth tables to determine entailment is exponentially expensive.

### Inference (proof)

Well-known patterns of entailment are captured in inference rules.

### Inference II

- Inference rules are like macros: any consistent substitution is legal.
- If the premise is true (e.g., axioms), then the conclusion is true and can be treated like a new axiom.

### Example

Axioms: A ⇒ B, B ⇒ C, A. Is C entailed? Yes:

| Premises | Rule | Conclusion |
|---|---|---|
| A ⇒ B, A | Modus Ponens | B |
| B ⇒ C, B | Modus Ponens | C |

### Tautologies

An interesting special case of inference rules: tautologies are sentences that are always true, in every model. As a result, they can be inferred from nothing. Example: A ∨ ¬A.

### Additional Inference Rules

- Modus Tollens: A ⇒ B, ¬B ⊢ ¬A.
- Dilemma: A ∨ B, A ⇒ C, B ⇒ C ⊢ C.

### Manual Proof Techniques

There are two standard manual proof techniques for α ⊢ β:

1. Apply a sequence of inference rules to α to derive β.
2. Apply inference rules to α ∧ ¬β and derive a contradiction (proof by negation, reductio ad absurdum).

### Example of proof by negation

Prove P ∨ ¬P by reductio ad absurdum: assume ¬(P ∨ ¬P); by De Morgan's this is ¬P ∧ P, a contradiction, so P ∨ ¬P holds.

### The Wumpus World (review)

- Performance measure: +1000 for the gold, -1000 for death, -1 per step, -10 for using the arrow.
- Squares adjacent to the wumpus are smelly; squares adjacent to a pit are breezy.
- Glitter iff the gold is in the same square.
- Shooting kills the wumpus if you are facing it; shooting uses up the only arrow.
- Grabbing picks up the gold if it is in the same square.
- Sensors: Stench, Breeze, Glitter, Bump. Actuators: TurnLeft, TurnRight, Forward, Grab, Release, Shoot.

### The Wumpus World Environment

- Fully observable? No: only local perception.
- Deterministic? Yes: outcomes are exactly specified.
- Static? Yes: the wumpus and pits do not move.
- Discrete? Yes.
- Single-agent? Yes.

### Wumpus Logic

- Let P(x,y) be the proposition that square (x,y) has a pit.
- Let W(x,y) be the proposition that the wumpus is in square (x,y).
- Then we have the following axioms:
  - ¬W(1,1), ¬P(1,1)
  - B(x,y) ⇔ P(x-1,y) ∨ P(x+1,y) ∨ P(x,y-1) ∨ P(x,y+1)
  - S(x,y) ⇔ W(x-1,y) ∨ W(x+1,y) ∨ W(x,y-1) ∨ W(x,y+1)
  - W(1,1) ∨ W(1,2) ∨ … ∨ W(4,4)
  - W(1,1) ⇒ ¬W(1,2) ∧ ¬W(1,3) ∧ … ∧ ¬W(4,4)
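The entailment checks in this lecture can be mechanized exactly as the truth-table method describes: enumerate every model and verify the query wherever the axioms hold (exponential in the number of symbols, as noted above). Below is a minimal sketch; the representation of axioms and queries as Python functions over a model dictionary is my own choice for illustration, not something specified in the course.

```python
from itertools import product

def implies(p, q):
    # Material implication: p => q is false only when p is true and q is false.
    return (not p) or q

def entails(axioms, query, symbols):
    # KB |= query iff the query is true in every model that makes all axioms true.
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if all(ax(model) for ax in axioms) and not query(model):
            return False  # found a model of the axioms where the query fails
    return True
```

For example, the slide's question "do A ⇒ B and B entail A?" comes out False (the model A=F, B=T is a counterexample), while the modus ponens pattern "A ⇒ B and A entail B" comes out True.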
### Wumpus Logic II

If the sensors in square (1,1) detect neither breeze nor smell, what do we know?

| Premises | Rule | Conclusion |
|---|---|---|
| ¬B(1,1), B(1,1) ⇔ P(1,2) ∨ P(2,1) | Modus Tollens | ¬(P(1,2) ∨ P(2,1)) |
| ¬(P(1,2) ∨ P(2,1)) | De Morgan's | ¬P(1,2) ∧ ¬P(2,1) |
| ¬P(1,2) ∧ ¬P(2,1) | Simplification | ¬P(1,2) |
| ¬P(1,2) ∧ ¬P(2,1) | Simplification | ¬P(2,1) |

## Lecture 24: Information Retrieval (12/02/08)

### Announcements

- My office has moved to 442 CSB; phone and email are unchanged.
- Read Chapter 23 of your text. Concentrate on 23.2, but read the whole thing.
- Programming assignment 3.
- Wednesday, USC, 1:00–5:00: last meeting of the semester.

### Assignment 3 Hint

Here is my top-level structure:

```
prob = 0
for g in Gender:
    prob += P(C|g) * P(p|g) * P(C|p) * P(m|p) * P(W|m) * ...
```

### Hint, cont.

- What is the value of `prob` after this loop?
- To calculate the probability of a variable equaling a value, limit that variable to that value.
- To calculate P(v1 = a | v2 = b), remember that P(v1 | v2) = P(v1 ∧ v2) / P(v2).

### Information Retrieval

Given a document, retrieve similar documents from a database.

### Information Retrieval II

Or: given a set of key words, retrieve documents from a database.

### Approach 1: Deep Semantics

1. Parse every sentence in every document.
2. Look up the meanings of the words in a dictionary.
3. Understand every sentence.
4. Retrieve documents about the same subject.

Unfortunately, no one has ever succeeded at this.

### Side Note: Language Generation

- Given a corpus (body of text), we can estimate the probability of every word, p(w).
- Sample from these distributions to generate sentences. Using your text as a corpus, we get: "logical are as are confusion a may right tries agent goal the was diesel more object then informationgathering serach is"

### Language Generation II

To improve this, we can compute the probabilities of a word given its predecessor, p(w2 | w1). This generates a better sentence: "planning purely diagnostic expert system are very similar computational approach would be represented compactly using tic tac toe a predicate"

### Language Generation III

We can go further, to a trigraph: p(w3 | w2, w1). This generates: "planning and scheduling are integrated the success
of naive Bayes model is Just a possible prior source by that time" — still gibberish, but it's interesting how syntax emerges.

### Approach 2: Bag of Words

Alternatively, you can:

1. Ignore syntax altogether: treat every document as a bag of words.
2. Retrieve documents with many of the same words.

Sounds simple, but it works remarkably well.

### Word Stemming

- One problem: words appear in many forms.
  - Verbs have tense: eat, ate, eaten, eating.
  - Some verbs are irregular: to be → am, are, is, was, were.
  - A few nouns have gender: waiter, waitress, waitron. (Note: this is much worse in many other languages.)
  - Adverbs end in -ly: loudly.
- Word stemming strips off endings:
  - Language-specific rules, e.g., most plurals end in -s.
  - Dictionary look-up: "dogs" ends in -s, and "dog" is a word; "Charles" ends in -s, but "Charle" is not a word.
  - Special look-ups for exceptions.

### Stop Words

- Some words are so common as to carry no discriminating information: "the", "a"/"an", "to be" in all its forms, most pronouns.
- The most common words are therefore ignored.

### How many words?

- Your AI textbook has about 15,000 unique words.
- A small, focused corpus needs about 25,000 words.
- A standard English corpus contains 250,000 words.
- A reasonably full English corpus contains around 2.5 million words (slightly more than most languages).

### What is a document?

This is application-specific:

- In law databases, it's a case.
- For a search engine, it's a web page.
- For Google Scholar, it's a paper.
- For an encyclopedia, it's an entry.

### How is a document represented?

- The bag of words is a vector of length n, where n is the number of words in the lexicon.
- Each value is an integer recording the number of times that word appears in the document; most values are 0.
- The task is to match one document's vector to another's.

### TF-IDF

OK, but how do we compare documents? The most common method is term frequency × inverse document frequency (TF-IDF). Roughly:

Match(A, B) = Σ_w A_w · B_w · log(N / D_w)

where A_w and B_w are the term frequencies of word w in each document, N is the number of documents, and D_w is the number of documents containing w.

### Evaluation

- Let's say I had a better method. How could I prove it?
- Traditional IR evaluation begins with a known corpus and a query. Every document is judged relevant or not
relevant to the query, ideally by a committee of librarians. How do you judge the quality of the set of returned documents?

### Precision and Recall

- Precision is the percent of returned documents that are relevant to the query.
- Recall is the percent of relevant documents that are part of the returned set.
- The more documents you return, the more recall tends to increase while precision decreases.

### Precision-Recall Curves

[Figure: a precision-recall curve.]

### Precision-Recall Curve II

- This is an ugly precision-recall curve because there are so few samples.
- In general, the increases in precision with every correct sample are less visible.
- In general, only a small percent of documents are relevant, so the final precision is usually low.

### A More General Approach

- Returning an irrelevant document is a false positive: you claimed it was relevant but were wrong.
- Not returning a relevant document is a false negative: you claimed it wasn't relevant, but it was.
- IR is an example of an algorithm with a threshold determining the false-positive to false-negative ratio.

### Confusion Matrix

| | Relevant | Irrelevant |
|---|---|---|
| Returned | 4 (true pos.) | 1 (false pos., Type 1 error) |
| Not returned | 6 (false neg., Type 2 error) | 9 (true neg.) |

### ROC Curve

[Figure: an ROC curve.]

### ROC Properties

- Every ROC curve starts at (0,0) and ends at (1,1):
  - When no documents are returned, there are no true positives and no false positives.
  - When every document is returned, there is a 1.0 true-positive rate and a 1.0 false-positive rate.
- A random classifier will produce an approximately diagonal line.
- The larger the area under the curve (AUC), the better.

### ROC Properties II

[Figure: additional example ROC curves.]

## Lecture 1: CS440 Introduction to Artificial Intelligence (8/26/08)

### Administrivia

Text: Russell & Norvig, *Artificial Intelligence: A Modern Approach*, 2003, 2nd edition, ISBN 0-13-790395-2. Get to know this book; it will be your friend.

### Administrivia II

The class web site is www.cs.colostate.edu/cs440:

- All lectures posted there after presentation.
- All homework assignments posted there.
- Important news bulletins.
- Anonymous feedback page.

### Administrivia III

Contact
information:

- Instructor: Bruce A. Draper. Office: this will change to 442 CSB.
- Teaching Assistant: Jason Remington.

### Administrivia IV: Workload

- Two exams: one midterm, one final. The final is Tuesday, Dec. 16.
- Three programming assignments, all solo.
- Two written assignments, graded both on content and style. Each written assignment has three parts:
  - write your own assignment;
  - critique a colleague's written assignment;
  - a rewrite of the assignment.
  - All three parts will be graded.
- Programming assignments: 40%; written assignments: 20%.
- Curving: yes, grades are curved. During the semester you get numeric scores and class histograms; the curve is applied at the end.

### Administrivia VI

- Cheating policy:
  - It is your responsibility to know the department cheating policy.
  - Yes, we will use MOSS and other tools to help us detect cheating.
  - When in doubt, ask me.
  - Very rare at this level, but taken very seriously when it happens.
- Late policy:
  - Deadlines are deadlines: no extensions.
  - Pre-extensions may be granted for good reasons.
- Cell phones off or muted; if muted, leave the room before answering.
- Please give your name first so I can put names to faces. I may choose to take some discussions offline.

### About This Course

Divided roughly into three sections:

1. Search
2. Logic
3. Bayesian Inference

Machine learning gets its own follow-on course, CS545.

### For Thursday

- Read Chapter 1 of your text. It should remind you of today's discussion, and it will be useful for the 1st written assignment.
- Read Chapter 2 of your text: background for Thursday's lecture.

### What is AI?

- "The exciting new effort to make computers think... machines with minds, in the full and literal sense." (Haugeland, 1985)
- "The automation of activities that we associate with human thinking, activities such as decision making, problem solving, learning..." (Bellman, 1978)
- How different are these?

### What is AI? II

Definitions of artificial intelligence vary along two dimensions:

| | Human standard | Rationality standard |
|---|---|---|
| Thought | Systems that think like humans | Systems that think rationally |
| Action | Systems that act like humans | Systems that act rationally |

The definitions vary by:

- thought processes vs. action, and
- judged according to human standards vs. success according to an ideal concept of intelligence (rationality).

### AI: Act Like Humans

- "The art of creating machines that perform functions that require intelligence when performed by people." (Kurzweil, 1990)
- "The study of how to make computers do things which, at the moment, people are better at." (Rich and Knight, 1991)
- Etc.

### The Turing Test

- When does a system behave intelligently? Turing (1950), "Computing Machinery and Intelligence."
- Operational test of intelligence: can we distinguish machines from humans?
- Requires the successful application of the major fields of AI: knowledge representation, reasoning, natural language processing, machine learning.

### Role of the Turing Test

- To articulate a performance goal; to avoid defining intelligence.
- How significant is the Turing test? How would you administer it? What would you ask? Would we all agree on the outcome?
- How close are we?

### Eliza

Eliza, the computer therapist. Original version by J. Weizenbaum at MIT in 1966. (Demo here, if the internet is accessible from Gifford.)

### AI: Think Like Humans

- How do humans think? This requires understanding of brain activity: a cognitive model.
- Levels of abstraction:
  - Cognitive science / psychology
  - Regional functional anatomy
  - Cognitive neuroscience
- Note: there is no single "right" level of abstraction. Think of an onion.

### Thinking rationally

- The logicist approach: logic notation and rules of derivation for thoughts.
- Problems:
  - Not all intelligent behavior is mediated by logical deliberation.
  - What is the purpose of thinking? What thoughts should I bother to have?

### Acting rationally

- Rational behavior: doing the right thing.
- The right thing is that which is expected to maximize goal achievement, given the available information.
- The ability to alter one's actions to meet one's goals; more specifically, the ability to optimize expected rewards (chap. 2).
- Our focus: rational agents and how to construct them. I will give you
goalsvoo build the agents I Al in everyday life Al Systems Al systems we use on a daily basis CR Talking with customer service or your cellphone voice recognition NLP Applying for a loan Online map services Computer games chess etc Web searches I Deep Blue 7 Defeats Kasparov Chess Grand Masterr BM 1997 5 Deep 3 ace 7 Al planner controls space probee NASA legs 5 DARPA grand challenge 2005 a 130 mile race of driverless cars in the mojave desert 3K Mundane vs Expert Tasks 5 Foundations of Al l Mundane Identifying objects in a visual image Answering a question Picking up an e i Expert Chess Medical diagnosis W Con guring computer hardware circuit layout rn o o 3 o w 2 E E I E E Neuroscience huvvdu bralns pruness lnfunnatlun 39 heury Computer Engineering bulldlng the hardware and suftvvare that make Al I Linguistics new deal Wltn language I Prepositional Logic Firstorder Logic Lecture 13 100708 Announcements Writing Assignment 2 is due now Two printed copies one anonymous Midterm is one week from today Read Chapter 9 This will be the last chapter on the midterm Programming assignment 1 ls being graded Many of you were loose in your output some ignored the specification altogether Writing Assignment 2 What are the propositions What are the initial axioms What axioms are added with every guess How do we detect cheating Inconsistency implies lying Inconsistency detected through resolution Propositional logic Propositional logic is declarative Propositional logic is compositional meFaning of BH P 2 is derived 39om meaning of B J and o l Meaning in propositional logic is contextindependent unlike natural language where meaning depends on context Propositional logic has limited expressive power Unlike natural language E 9 cannot say quotpits cause breaes in adjacent squares exceptby Writing une sentence fureach square First Order Logic Examples of things we can say All men are mortal Vx Manx 2 Mortalx Everybody loves somebody Vx 3y Lovesx y The meaning of the word above Vx Vy 
above(x, y) ⇔ on(x, y) ∨ ∃z (on(x, z) ∧ above(z, y))

First Order Logic
Whereas propositional logic assumes the world contains facts, first-order logic (like natural language) assumes the world contains:
- Objects: people, houses, numbers, colors, ...
- Relations: red, round, prime, brother of, bigger than, part of, ...
- Functions: father of, plus, ...

Logics in General
- Ontological commitment: what exists in the world (TRUTH)
  - PL: facts hold or do not hold
  - FOL: objects, with relations between them that hold or do not hold
- Epistemological commitment: states of knowledge allowed with respect to a fact

  Language            | Ontological commitment          | Epistemological commitment
  Propositional logic | facts                           | true / false / unknown
  First-order logic   | facts, objects, relations       | true / false / unknown
  Temporal logic      | facts, objects, relations, times| true / false / unknown
  Probability theory  | facts                           | degree of belief in [0, 1]
  Fuzzy logic         | facts with degree of truth      | known interval value

Syntax of FOL
The user defines these primitives:
- Constant symbols, i.e. the individuals in the world. E.g., Mary, 3
- Function symbols, mapping individuals to individuals. E.g., father-of(Mary) = John, color-of(Sky) = Blue
- Predicate/relation symbols, mapping from individuals to truth values. E.g., greater(5, 3), green(Grass), color(Grass, Green)

Syntax (cont.)
FOL supplies these primitives:
- Variable symbols, e.g. x, y
- Connectives, same as in PL: ¬, ∧, ∨, ⇒, ⇔
- Equality: =
- Quantifiers: universal (∀) and existential (∃)

Atomic sentences
- Term = function(term1, ..., termn), or a constant, or a variable
- Atomic sentence = predicate(term1, ..., termn), or term1 = term2
- Examples: Brother(KingJohn, RichardTheLionheart); Greater(Length(LeftLegOf(Richard)), Length(LeftLegOf(KingJohn)))

Complex sentences
Complex sentences are made from atomic sentences using connectives (¬S, S1 ∧ S2, S1 ∨ S2, S1 ⇒ S2, S1 ⇔ S2) and by applying quantifiers. Examples:
- Sibling(KingJohn, Richard) ⇒ Sibling(Richard, KingJohn)
- Greater(1, 2) ∨ LessOrEqual(1, 2)

Truth in first-order logic
- Recall that a model is the set of all possible worlds under consideration; sentences are true with respect to a model and an interpretation
- A model contains objects (domain elements) and relations among them
- An interpretation specifies referents: constant symbols → objects; predicate symbols → relations; function symbols → functional relations
- An atomic sentence predicate(term1, ..., termn) is true iff the objects referred to by term1, ..., termn are in the relation referred to by the predicate

Models for FOL: Example

Models for FOL
We can enumerate the models for a given KB vocabulary:
- For each number of domain elements n, from 1 to ∞
- For each k-ary predicate in the vocabulary, for each possible k-ary relation on the n objects
- For each constant symbol in the vocabulary, a choice of referent from the n objects
Computing entailment by enumerating the models will not be easy.

Quantifiers
- Allow us to express properties of collections of objects, instead of enumerating objects by name
- Universal: "for all", ∀
- Existential: "there exists", ∃

Universal quantification
- ∀<variables> <sentence>
- Everyone at CSU is smart: ∀x At(x, CSU) ⇒ Smart(x)
- ∀x P is true in a model m iff P is true with x being each possible object in the model
- Roughly speaking, equivalent to the conjunction of instantiations of P:
  (At(KingJohn, CSU) ⇒ Smart(KingJohn)) ∧ (At(Richard, CSU) ⇒ Smart(Richard)) ∧ (At(CSU, CSU) ⇒ Smart(CSU)) ∧ ...

Using universal quantifiers
- Typically, ⇒ is the main connective with ∀
- Do not make the following mistake: ∀x At(x, CSU) ∧ Smart(x) means "Everyone is at CSU and everyone is smart"

Existential quantification
- ∃<variables> <sentence>
- Someone at CSU is smart: ∃x At(x, CSU) ∧ Smart(x)
- ∃x P is true in a model m iff P is true with x being some possible object in the model
- Roughly speaking, equivalent to the disjunction of instantiations of P:
  (At(KingJohn, CSU) ∧ Smart(KingJohn)) ∨ (At(Richard, CSU) ∧ Smart(Richard)) ∨ (At(CSU, CSU) ∧ Smart(CSU)) ∨ ...

Existential quantification (cont.)
- Typically, ∧ is the main connective with ∃
- Common mistake: using ⇒ with ∃. ∃x At(x, CSU) ⇒ Smart(x) -- when is this true?

Properties of quantifiers
- ∀x ∀y is the same as ∀y ∀x; ∃x ∃y is the same as ∃y ∃x
- ∃x ∀y is NOT the same as ∀y ∃x
  - ∃x ∀y Loves(x, y): "There is a person who loves everyone in the world"
  - ∀y ∃x Loves(x, y): "Everyone in the world is loved by at least one person"
- Quantifier duality: each can be expressed using the other:
  ∀x Likes(x, IceCream) ≡ ¬∃x ¬Likes(x, IceCream)
  ∃x Likes(x, Broccoli) ≡ ¬∀x ¬Likes(x, Broccoli)

Equality
- term1 = term2 is true under a given interpretation if and only if term1 and term2 refer to the same object
- E.g., the definition of Sibling in terms of Parent:
  ∀x,y Sibling(x, y) ⇔ [¬(x = y) ∧ ∃m,f ¬(m = f) ∧ Parent(m, x) ∧ Parent(f, x) ∧ Parent(m, y) ∧ Parent(f, y)]

Wumpus World redux
- An agent (e.g. in the wumpus world) may keep a knowledge base of objects, predicates & inference rules
- Objects in this world are grid squares [w, h]
- Some predicates are sensations, e.g. Stench(w, h), Glitter(w, h), Breeze(w, h); these are asserted over time as the agent explores
- Other predicates are facts about locations, e.g. Wumpus(w, h), Pit(w, h), Gold(w, h)

Interacting with FOL KBs
Suppose a wumpus-world agent perceives a smell and a breeze (but no glitter) in square [1, 2]:
  Tell(KB, Stench(1, 2)); Tell(KB, Breeze(1, 2)); Tell(KB, ¬Glitter(1, 2))
- This asserts a set of facts (axioms)
  Ask(KB, Safe(1, 3))
- This asks the KB whether the statement Safe(1, 3) is entailed by the known facts (axioms)

Deductions
These use a set of inference rules stored in the KB:
- ∀w,h Breeze(w, h) ⇒ [Pit(w-1, h) ∨ Pit(w+1, h) ∨ Pit(w, h-1) ∨ Pit(w, h+1)]
- ∀w,h Safe(w, h) ⇔ ¬Pit(w, h) ∧ ¬Wumpus(w, h)
- ∃w,h Wumpus(w, h)
- ∀w,h,x,y [Wumpus(w, h) ∧ ¬(w = x ∧ h = y)] ⇒ ¬Wumpus(x, y)

Creating a KB using FOL
1. Identify the task: what will the KB be used for?
2. Assemble the relevant knowledge (knowledge acquisition)
3. Decide on a vocabulary of predicates, functions, and constants: translate domain-level knowledge into logic-level names
4. Encode general knowledge about the domain (define axioms)
5. Encode a description of the specific problem instance
6. Pose queries to the inference procedure and get answers
7. Debug the knowledge base

Examples: The kinship domain
- Basic predicates: Female, Parent. Other predicates are defined in this domain:
- One's mother is one's female parent: ∀m,c Mother(c) = m ⇔ Female(m) ∧ Parent(m, c)
- ∀x,y Sibling(x, y) ⇔ [¬(x = y) ∧ ∃m,f ¬(m = f) ∧ Parent(m, x) ∧ Parent(f, x) ∧ Parent(m, y) ∧ Parent(f, y)]
- A first cousin is a child of a parent's sibling: ∀x,y FirstCousin(x, y) ⇔ ∃p,ps Parent(p, x) ∧ Sibling(ps, p) ∧ Parent(ps, y)
- These axioms are definitions, since they use
biconditionals. Some sentences are theorems: they can be derived from the axioms, e.g. Sibling is symmetric:
  ∀x,y Sibling(x, y) ⇒ Sibling(y, x)

Examples (cont.): The set domain
Notation: {x|s} is the set resulting from adding x to the set s.
- ∀s Set(s) ⇔ (s = {}) ∨ (∃x,s2 Set(s2) ∧ s = {x|s2})
- ¬∃x,s {x|s} = {}
- ∀x,s x ∈ s ⇔ ∃y,s2 (s = {y|s2} ∧ (x = y ∨ x ∈ s2))
- ∀x,s x ∈ s ⇔ s = {x|s}
- ∀s1,s2 s1 ⊆ s2 ⇔ (∀x x ∈ s1 ⇒ x ∈ s2)
- ∀s1,s2 (s1 = s2) ⇔ (s1 ⊆ s2 ∧ s2 ⊆ s1)
- ∀x,s1,s2 x ∈ (s1 ∩ s2) ⇔ (x ∈ s1 ∧ x ∈ s2)
- ∀x,s1,s2 x ∈ (s1 ∪ s2) ⇔ (x ∈ s1 ∨ x ∈ s2)

Examples (cont.): The natural numbers domain
- 0 is a natural number: NatNum(0)
- The successor of a natural number is a natural number: ∀n NatNum(n) ⇒ NatNum(S(n))
- Constraints on the successor function: ∀n ¬(0 = S(n)); ∀m,n ¬(m = n) ⇒ ¬(S(m) = S(n))
- Defining addition: ∀n NatNum(n) ⇒ +(0, n) = n; ∀m,n NatNum(m) ∧ NatNum(n) ⇒ +(S(m), n) = S(+(m, n))

Inference in First-order Logic
Lecture 14, 10/09/08

Announcements
- Critique of writing assignment 2 is due now
- The midterm is Tuesday (at your request). All material through Chapter 9. Hand out: last year's midterm.
- Programming Assignment 1 is actively being graded. More emails went out from Jason; check your CS accounts.

Practice Question 1
Given: A ⇔ B, A ∨ D, ¬B ∨ ¬C, C. Prove ¬A.
- Method one, traditional forward proof:
  C and ¬B ∨ ¬C give ¬B
  A ⇔ B gives A ⇒ B (biconditional equivalence); with ¬B, modus tollens gives ¬A
  A ∨ D and ¬A then give D
- Method two: resolution theorem proving (convert to CNF, add the negated goal, derive the empty clause)

Question 2
Convert D ⇔ (A ∧ C) to CNF:
- Biconditional elimination: (D ⇒ (A ∧ C)) ∧ ((A ∧ C) ⇒ D)
- Implication elimination and distribution: (¬D ∨ A) ∧ (¬D ∨ C) ∧ (¬A ∨ ¬C ∨ D)

Outline
Three methods of first-order inference:
1. Conversion to propositional logic (can then use propositional resolution)
2. Unification: Generalized Modus Ponens, forward/backward chaining
3. First-order resolution

FOL to PL
First-order inference can be done by converting the knowledge base to PL and using propositional inference.
- How to convert universal quantifiers: replace the variable by ground terms (assumes a finite domain of discourse)
- How to convert existential quantifiers: Skolemization

Universal instantiation (UI)
- Every instantiation of a universally quantified sentence is entailed by it: from ∀v α, infer Subst({v/g}, α) for any variable v and ground term g
- E.g., ∀x King(x) ∧ Greedy(x) ⇒ Evil(x) yields:
  King(John) ∧ Greedy(John) ⇒ Evil(John)
  King(Richard) ∧ Greedy(Richard) ⇒ Evil(Richard)
  King(Father(John)) ∧ Greedy(Father(John)) ⇒ Evil(Father(John))
  ...

Existential instantiation (EI)
- For any sentence α, variable v, and constant symbol k that does not appear elsewhere in the knowledge base: from ∃v α, infer Subst({v/k}, α)
- E.g., ∃x Crown(x) ∧ OnHead(x, John) yields Crown(C1) ∧ OnHead(C1, John), provided C1 is a new constant symbol, called a Skolem constant

EI versus UI
- UI can be applied several times to add new sentences; the new KB is logically equivalent to the old
- EI can be applied once to replace the existential sentence; the new KB is not equivalent to the old, but is satisfiable iff the old KB was satisfiable

Reduction to propositional inference
Suppose the KB contains just the following:
  ∀x King(x) ∧ Greedy(x) ⇒ Evil(x)
  King(John); Greedy(John); Brother(Richard, John)
Instantiating the universal sentence in all possible ways, we have:
  King(John) ∧ Greedy(John) ⇒ Evil(John)
  King(Richard) ∧ Greedy(Richard) ⇒ Evil(Richard)
The new KB is propositionalized.

Reduction (cont.)
- CLAIM: a ground sentence is entailed by the new KB iff it is entailed by the original KB
- CLAIM: every FOL KB can be propositionalized so as to preserve entailment
- IDEA: propositionalize KB and query, apply resolution, return result
- PROBLEM: with function symbols, there are infinitely many ground terms, e.g. Father(Father(Father(John)))

Reduction III
- THEOREM (Herbrand, 1930): if a sentence α is entailed by an FOL KB, it is entailed by a finite subset of the propositionalized KB
- IDEA: for n = 0 to ∞, create a propositional KB by instantiating with depth-n terms; see if α is entailed by this KB

Reduction IV
- PROBLEM: this works if α is entailed, but does not halt if α is not entailed
- THEOREM (Turing 1936, Church 1936): entailment for FOL is semi-decidable -- algorithms exist that say yes to every entailed sentence, but no algorithm exists that also says no to every non-entailed sentence
- With p k-ary predicates and n constants, there are p·n^k instantiations

Method 2: Unification
Instead of translating the knowledge base to PL, we can make the inference rules work in FOL. For example, given:
  ∀x King(x) ∧ Greedy(x) ⇒ Evil(x)
  King(John)
  ∀y Greedy(y)
it is intuitively clear that we can substitute John for the variable x (and John for the variable y), written {x/John, y/John}, and obtain Evil(John).

Unification
- We can make the inference if we can find a substitution θ such that King(x) and Greedy(x) match King(John) and Greedy(y)
- θ = {x/John, y/John} works
- Unify(α, β) = θ if Subst(θ, α) = Subst(θ, β)

  α                β                      θ
  Knows(John, x)   Knows(John, Jane)      {x/Jane}
  Knows(John, x)   Knows(y, OJ)           {x/OJ, y/John}
  Knows(John, x)   Knows(y, Mother(y))    {y/John, x/Mother(John)}
  Knows(John, x)   Knows(x, OJ)           fail

Variable Standardization
The previous failure was unnecessary: two unrelated universal quantifications used the same variable name by chance. Before unification, give all unrelated variables unique names.

Unification
- You may unify a universally quantified variable with a constant, or with another universally quantified variable
- Unifiers of Knows(John, x) and Knows(y, z) are {y/John, x/z} or {y/John, x/John, z/John}
- The first unifier is more general than the second
- There is a single most general unifier (MGU) that is unique up to renaming of variables: MGU = {y/John, x/z}

The unification algorithm
Two functions: Unify-Var unifies a variable with an expression; Unify unifies two expressions. Let's start with the easier one:

  function Unify-Var(var, x, θ) returns a substitution
    if {var/val} ∈ θ then return Unify(val, x, θ)
    else if {x/val} ∈ θ then return Unify(var, val, θ)
    else if Occur-Check?(var, x) then return failure
    else return add {var/x} to θ

Explaining Unify-Var
- Unify-Var and Unify take the currently developing substitution θ
- If the variable is already substituted for by a constant in θ, use it
- If the variable is already substituted for by another variable in θ, use it
- If the
variable exists in x, the expressions cannot be unified (the "occur check") -- return failure
- Else, substitute var for x

The unification algorithm

  function Unify(x, y, θ) returns a substitution to make x and y identical
    if θ = failure then return failure
    else if x = y then return θ
    else if Variable?(x) then return Unify-Var(x, y, θ)
    else if Variable?(y) then return Unify-Var(y, x, θ)
    else if Compound?(x) and Compound?(y) then
      return Unify(Args(x), Args(y), Unify(Op(x), Op(y), θ))
    else if List?(x) and List?(y) then
      return Unify(Rest(x), Rest(y), Unify(First(x), First(y), θ))
    else return failure

Generalized Modus Ponens (GMP)
Suppose that Subst(θ, pi') = Subst(θ, pi) for all i; then:

  p1', p2', ..., pn', (p1 ∧ p2 ∧ ... ∧ pn ⇒ q)
  --------------------------------------------
                 Subst(θ, q)

Example: p1' is King(John), p1 is King(x); p2' is Greedy(y), p2 is Greedy(x); θ is {x/John, y/John}; q is Evil(x); Subst(θ, q) is Evil(John). All variables are assumed universally quantified.

Uncertainty
Lecture 16, 10/23/08

Uncertainty: Stepping Back
- We have finished talking about logic. The 2nd programming assignment is due a week from Tuesday; now would be a good time to ask questions.
- We have also finished talking about search, although search is a component of everything in this class.
- Our next topic is uncertainty: Chapter 13 of your text today, Chapter 14 of your text Tuesday.

Sources of Uncertainty
Consider the following statements:
1. "It rained in Tuscaloosa last Friday." Uncertainty caused by ignorance; refers to a state of knowledge.
2. "It will rain in Ft. Collins tomorrow." Predictive uncertainty; refers to a sampling likelihood.
3. "The odds of global nuclear war are 1 in 1 million." Subjective estimate; subjective uncertainty.

The Language of Probability
- Random variables: the equivalent of a proposition in logic
- Each variable has a set of possible values: Boolean, discrete (finite list of possibilities), or continuous

LoP (cont.)
- An atomic event is the complete specification of the state of the world. The world may contain many random variables; alternatively, it may contain only one. An atomic event specifies the value of every random variable.
- Rain example: it DID rain in Tuscaloosa, and WON'T rain in Ft. Collins
- We can assign probabilities to atomic events: what is the probability that it rained in Tuscaloosa but not Ft. Collins?

Unconditional Probabilities
The unconditional (or prior) probability of an atomic event is our state of belief in the event prior to any specific information.
  P(rain in Tuscaloosa) = .3 (it rains a lot in Alabama)
  P(rain in Ft. Collins) = .1 (not so much in Colorado)

Conditional Probabilities
The conditional probability of an atomic event is its probability given that the values of some of the random variables are known.
Example: P(Ft. Collins rain | Tuscaloosa rain) = .2

The Axioms of Probability
All of probability theory can be derived from 3 axioms:
  Axiom 1: ∀a, 0 ≤ P(a) ≤ 1
  Axiom 2: P(true) = 1, P(false) = 0
  Axiom 3: P(a ∨ b) = P(a) + P(b) − P(a ∧ b)
Derivable from the above, Lemma 1: Σ P(a) = 1, summed over all possible values of a.

Bayes Rule
  P(a | b) = P(b | a) P(a) / P(b)

Bayes Rule II
- The previous equation holds for P(b) > 0. What happens when P(b) = 0?
- Bayes rule can be rewritten as the product rule: P(a ∧ b) = P(a | b) P(b)

Probability as a Venn Diagram
  P(a ∨ b) = P(a) + P(b) − P(a ∧ b)

Joint Distributions
Let T be the proposition of rain in Tuscaloosa, and C be the proposition of rain in Ft. Collins:

          C      ¬C
  T      .03    .27
  ¬T     .07    .63

Joint Distributions II
Book example: probabilities of (1) having a cavity, (2) having a toothache, and (3) having the dentist's probe catch:

            toothache          ¬toothache
            catch    ¬catch    catch    ¬catch
  cavity    0.108    0.012     0.072    0.008
  ¬cavity   0.016    0.064     0.144    0.576

Marginal Probabilities
The marginal probability is the probability of one term, summed over the others:
- The probability of cavity is 0.2. Why?
- The probability of toothache is also 0.2. Why?
- The probability of catch is 0.34. Why?
As a general rule (marginalization): P(a) = Σ_b P(a ∧ b)

Conditioning
Alternatively, you may know conditional probabilities rather than the joint probabilities. The conditioning rule: P(a) = Σ_b P(a | b) P(b)

Independence
Sometimes two random variables have nothing to do with each other:
  P(a ∧ b) = P(a) P(b);  P(a | b) = P(a);  P(b | a) = P(b)

Independence II
Why is independence important?
- Because it's useful to know when two things are unrelated
- Because joint probability tables grow exponentially with the number of variables
- Because sample-based interpretations of too many dependent variables are infeasible

Simple Bayesian Inference
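These joint-table manipulations are easy to mechanize. Below is a minimal sketch over the dentist distribution from the notes; the dictionary layout and the function names are my own, not from any course code.

```python
# Dentist joint distribution, keyed by (cavity, toothache, catch).
JOINT = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

def marginal(index, value):
    """P(variable = value): sum the joint over all other variables."""
    return sum(p for event, p in JOINT.items() if event[index] == value)

def conditional(query_index, evidence):
    """P(query = True | evidence), with evidence = {variable index: value}.
    Implements P(a | b) = P(a AND b) / P(b) directly on the table."""
    match = lambda e: all(e[i] == v for i, v in evidence.items())
    p_evidence = sum(p for e, p in JOINT.items() if match(e))
    p_joint = sum(p for e, p in JOINT.items() if match(e) and e[query_index])
    return p_joint / p_evidence

# marginal(0, True)  -> P(cavity)    = 0.2
# marginal(2, True)  -> P(catch)     = 0.34
# conditional(0, {1: True, 2: True}) -> 0.108 / 0.124, about 0.871
```

Note that the marginals answer the three "Why?" questions above: each is just a sum of four table entries.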
What is the probability of a cavity, given a toothache and that the dentist's probe caught? I.e., what is P(cavity | toothache ∧ catch)?
  P(toothache ∧ catch) = 0.124
  P(cavity ∧ toothache ∧ catch) = 0.108
  P(a | b) = P(a ∧ b) / P(b) = 0.108 / 0.124 ≈ 0.871

Inference in First-order Logic
Lecture 15, 10/21/08

Announcements
- Last Thursday we handed back: the midterm, programming assignment 1 grades, writing assignment 2 essays, and writing assignment 2 critiques
- Writing assignment 2 rewrites are due now
- Programming assignment 2 is due on Nov. 4
- ACM topic: advanced unix scripting

Unification (review)
- Instead of translating the knowledge base to PL, we can make the inference rules work in FOL. For example, given ∀x King(x) ∧ Greedy(x) ⇒ Evil(x), King(John), and ∀y Greedy(y), it is intuitively clear that we can substitute John for the variable x (and John for the variable y), written {x/John, y/John}, and obtain Evil(John).
- We can make the inference if we can find a substitution θ such that King(x) and Greedy(x) match King(John) and Greedy(y): Unify(α, β) = θ if Subst(θ, α) = Subst(θ, β)

  α                β                      θ
  Knows(John, x)   Knows(John, Jane)      {x/Jane}
  Knows(John, x)   Knows(y, OJ)           {x/OJ, y/John}
  Knows(John, x)   Knows(y, Mother(y))    {y/John, x/Mother(John)}
  Knows(John, x)   Knows(x, OJ)           fail

- Variable standardization: the failure above was unnecessary; two unrelated universal quantifications used the same variable name by chance. Before unification, give all unrelated variables unique names.
- You may unify a universally quantified variable with a constant, or with another universally quantified variable. Unifiers of Knows(John, x) and Knows(y, z) are {y/John, x/z} or {y/John, x/John, z/John}; the first is more general. There is a single most general unifier (MGU), unique up to renaming of variables: MGU = {y/John, x/z}.
- The unification algorithm: two functions, Unify-Var (unifies a variable with an expression) and Unify (unifies two expressions). Both take the currently developing substitution θ: if the variable is already substituted for by a constant in θ, use it; if it is already substituted for by another variable in θ, use it; if the variable exists in x, the expressions cannot be unified -- return failure; else substitute var for x.

Generalized Modus Ponens (GMP)
Suppose that Subst(θ, pi') = Subst(θ, pi) for all i; then:

  p1', p2', ..., pn', (p1 ∧ p2 ∧ ... ∧ pn ⇒ q)
  --------------------------------------------
                 Subst(θ, q)

Example: p1' is King(John), p1 is King(x); p2' is Greedy(y), p2 is Greedy(x); θ is {x/John, y/John}; q is Evil(x); Subst(θ, q) is Evil(John). All variables are assumed universally quantified.

But first, returning to predicate logic: an example
- The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American.
- Prove that Col. West is a criminal.

Example knowledge base (contd.)
- It is a crime for an American to sell weapons to hostile nations:
  American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Hostile(z) ⇒ Criminal(x)
- Nono has some missiles, i.e. ∃x Owns(Nono, x) ∧ Missile(x):
  Owns(Nono, M1) and Missile(M1)
- All of its missiles were sold to it by Colonel West:
  Missile(x) ∧ Owns(Nono, x) ⇒ Sells(West, x, Nono)
- Missiles are weapons: Missile(x) ⇒ Weapon(x)
- An enemy of America counts as "hostile": Enemy(x, America) ⇒ Hostile(x)
- West, who is American: American(West)
- The country Nono, an enemy of America: Enemy(Nono, America)

Forward chaining algorithm

  function FOL-FC-Ask(KB, α) returns a substitution or false
    repeat until new is empty
      new ← {}
      for each sentence r in KB do
        (p1 ∧ ... ∧ pn ⇒ q) ← Standardize-Apart(r)
        for each θ such that Subst(θ, p1 ∧ ... ∧ pn) = Subst(θ, p1' ∧ ... ∧ pn')
            for some p1', ..., pn' in KB
          q' ← Subst(θ, q)
          if q' is not a renaming of a sentence already in KB or new then
            add q' to new
            φ ← Unify(q', α)
            if φ is not fail then return φ
      add new to KB
    return false

Forward chaining example
- Iteration 1: from the base facts, derive Sells(West, M1, Nono), Weapon(M1), and Hostile(Nono)
- Iteration 2: derive Criminal(West)

Properties of forward chaining
- Sound and complete for first-order definite clauses (e.g., the crime KB)
- FC terminates for Datalog in
finite number of iterations
- Datalog = first-order definite clauses with no functions
- May not terminate in general (definite clauses with functions) if the sentence is not entailed
- This is unavoidable: entailment with definite clauses is semi-decidable

Increasing the efficiency of forward chaining
- Incremental forward chaining: no need to match a rule on iteration t if a premise wasn't added on iteration t−1; match each rule whose premise contains a newly added positive literal
- Matching itself can be expensive. Database indexing allows O(1) retrieval of known facts, e.g. the query Missile(x) retrieves Missile(M1)
- Given a rule such as Missile(x) ∧ Owns(Nono, x) ⇒ Sells(West, x, Nono), what is the order in which the conjuncts should be checked?
- Forward chaining is widely used in deductive databases

Backward chaining algorithm

  function FOL-BC-Ask(KB, goals, θ) returns a set of substitutions
    if goals is empty then return {θ}
    q' ← Subst(θ, First(goals))
    for each sentence r in KB where Standardize-Apart(r) = (p1 ∧ ... ∧ pn ⇒ q)
        and θ' ← Unify(q, q') succeeds
      answers ← FOL-BC-Ask(KB, [p1, ..., pn | Rest(goals)], Compose(θ', θ)) ∪ answers
    return answers

Backward chaining example
Working backward from Criminal(West): prove American(West), Weapon(y), Sells(West, y, z), and Hostile(z), each of which reduces to facts in the KB.

Properties of backward chaining
- Depth-first recursive proof search: space is linear in the size of the proof
- Incomplete due to infinite loops (fix by checking the current goal against every goal on the stack)
- Inefficient due to repeated subgoals, both success and failure (fix by caching previous results -- extra space)
- Widely used for logic programming

Logic programming: Prolog
- BASIS: backward chaining with Horn clauses + bells & whistles
- Program = set of clauses: head :- literal1, ..., literaln
  criminal(X) :- american(X), weapon(Y), sells(X, Y, Z), hostile(Z).

Local Search
Lecture 6, 9/11/08

Writing Assignment 1
- Was due Tuesday; handed back today. Critique is due NOW.
- Look at your grade on the essay and make a decision: if your grade was good, don't do a rewrite -- your essay grade will count twice. If it wasn't good, submit a rewrite on Tuesday.
- Either way, hand the graded version back to Jason before you leave.
- Read Chapter 5 for Tuesday; read Chapter 6 for Thursday.

Assignment 1 (cont.)
How were they graded? Points for: the goal test description, the successor function description, the path cost description, the branching factor description, and whether it is a tree or a graph. Points were removed for unnecessary words or paragraphs and for grammar or spelling errors.

How did you do?
  Avg 7.3, StDev 1.9, Min 4, Max 10

Review
- A* is a variant of best-first search (BFS): at every step, expand the node n in the fringe with the best value of f(n)
- For A*, f(n) = g(n) + h(n): g(n) is the exact cost from the root to n; h(n) is a heuristic estimate of the cost to a goal
- Admissibility: h(n) ≤ the true cost to a goal, for all n -- guarantees optimality
- Consistency: h(n) ≤ c(n, n') + h(n') for every successor n' of n -- guarantees that the first occurrence of a node will be the best, which avoids bookkeeping

Dominance
- When is one heuristic better than another? General rule: admissible heuristics are better than non-admissible heuristics.
- Heuristic h1 dominates heuristic h2 iff h1(n) ≥ h2(n) ∀n. If one admissible heuristic dominates another, it is better. Why?

Local minima vs. local maxima
- Local search: find a maximum or minimum of an objective function (cost function)
- Note: the local minima of a function f(n) are the same as the local maxima of −f(n). Both A* and local search can be cast as either reward maximization or cost minimization.
- Note: the definition of A* admissibility reverses if the problem is maximization.

Local Search
- Previously: systematic exploration of the search space; the path to the goal is the solution to the problem
- For some problems the path is irrelevant -- example: 8-queens
- In such cases we can use local-search algorithms: search the space of feasible solutions, keep a "current" state, and try to improve it by searching the space of solutions

Problem redefinition
- State space: any board with 8 queens on it
- State quality: the number of queens that are not threatened
- Improve a state by moving a queen to a position where fewer queens attack each other

Greedy local search
- Choose the action that immediately resolves the most conflicts
- Problem: can get stuck in a local minimum -- happens 86% of the time for the 8-queens problem
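The greedy scheme just described is easy to sketch concretely. Below, the board representation (one queen per column, `board[c]` = row of the queen in column c) and all function names are my own choices, not from the course materials:

```python
import random

def conflicts(board):
    """Count attacking pairs: same row, or same diagonal."""
    n = len(board)
    return sum(1 for a in range(n) for b in range(a + 1, n)
               if board[a] == board[b] or abs(board[a] - board[b]) == b - a)

def steepest_descent(board):
    """Repeatedly move one queen (within its column) to the placement that
    removes the most conflicts; stop at a local minimum of conflicts()."""
    board = list(board)
    n = len(board)
    while True:
        best, best_move = conflicts(board), None
        for col in range(n):
            original = board[col]
            for row in range(n):
                if row == original:
                    continue
                board[col] = row
                c = conflicts(board)
                if c < best:
                    best, best_move = c, (col, row)
            board[col] = original          # undo the trial move
        if best_move is None:              # local minimum (maybe not a solution)
            return board
        col, row = best_move
        board[col] = row

start = [random.randrange(8) for _ in range(8)]
end = steepest_descent(start)
```

Because each accepted move strictly decreases the (integer, non-negative) conflict count, the loop always terminates -- but, as the 86% figure above suggests, usually at a local minimum rather than at a zero-conflict board, which motivates the random-restart idea that follows.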
Hill-climbing
Apply the successor function and keep moves that improve the objective function.

  function Hill-Climbing(problem) returns a state that is a local maximum
    current ← Make-Node(Initial-State(problem))
    loop do
      neighbor ← a highest-valued successor of current
      if Value(neighbor) ≤ Value(current) then return State(current)
      current ← neighbor

This flavor of hill-climbing is known as steepest ascent (steepest descent when the objective is minimization).

TSP: 2-opt
A solution is a tour. A 2-opt move removes two edges from the tour and reconnects the two resulting paths the other way.

3-opt
Choose three edges from the tour. Remove them, and combine the three parts into a tour in the cheapest way to link them.

3-opt Branching Factor
- Note that the six endpoint nodes come in pairs that are already connected: A-B, C-D, E-F
- Node A can connect to anything but B: 4 choices (one of these merely reproduces a 2-opt move)
- Node B cannot connect to A, and cannot connect to the other half of the pair that A connects to: 2 choices
- The final two nodes connect to each other
- Eight legal possibilities, 7 of which are novel

Solving TSP (cont.)
- 3-opt moves lead to fewer local minima than 2-opt moves
- The Lin-Kernighan algorithm (1973): a λ-opt move efficiently constructs a successor that changes λ cities in a tour. It often finds optimal solutions, and was the best algorithm for TSP until 1989.

The Art of Local Search
- Hill-climbing, like A*, is a simple algorithm. The art is in defining the successor function, more commonly called a neighborhood. Goal: avoid local minima. (The art in A* is defining the heuristic function.)
- For TSP: 1. Start with any sequence of cities as an initial tour. 2. Design a successor function that yields valid tours.

Variations
- Steepest ascent/descent: the successor is the neighbor with the largest increase/decrease in the objective function
- Stochastic hill-climbing: random selection among the uphill moves; the selection probability can vary with the steepness of the uphill move
- First-choice hill-climbing: stochastic hill-climbing, generating successors randomly
until a better one is found
- Random-restart hill-climbing: start several hill-climbing runs, each from a random initial state

Random Restart
Suppose that the probability of failure in a single try is Pf. Then:
  P(failure in k trials) = Pf^k
  P(success in k trials) = 1 − Pf^k
The probability of success can be made arbitrarily close to 1 by increasing k.

Why this won't necessarily work
- NP-complete problems can have an exponential number of local minima

Problem Solving by Searching, part II
Lecture 4, 9/04/08

Announcements
- Read Chapter 4 for Tuesday
- First writing assignment

Tree search

  function Tree-Search(problem, fringe, strategy) returns a solution or failure
    fringe ← Insert(Make-Node(Initial-State(problem)))
    loop do
      if Empty?(fringe) then return failure
      node ← strategy.Select-And-Remove-Node(fringe)
      if problem.Solution?(node) then return node
      else fringe ← Append(fringe, Expand(node, problem))
    end

  Structure Node contains:
    State state
    NodePtr parent
    Operator generated-by
    Integer depth
    Number cost

Comparing search strategies
- A strategy is defined by the order of node expansion
- Strategies are compared on completeness, time complexity, space (memory) during search, and optimality
- Search complexity is measured in terms of:
  b -- maximum branching factor of the search tree
  d -- depth of the least-cost solution
  m -- maximum depth of the state space (may be ∞)

Uninformed search strategies
- a.k.a. blind search: use only information available in the problem definition
- When strategies can determine whether one non-goal state is better than another → informed search
- Categories, defined by the expansion algorithm: breadth-first search, uniform-cost search, depth-first search, depth-limited search, iterative deepening search, bidirectional search

Breadth-First Search (BFS)
- Expand all nodes at depth d before proceeding to depth d+1
- Implementation: FIFO queue

Evaluation of BFS
- Completeness: YES
- Time complexity: total number of nodes generated is b + b^2 + b^3 + ... + b^d, i.e. O(b^(d+1))
- Space complexity: the same, if each node is retained in memory
- Optimality: does it always find the least-cost solution? Only if cost is a nondecreasing function of depth.

Uniform cost search
- Extension of BFS: expand the node with the lowest path cost
- Implementation: fringe = queue ordered by path cost
- Same as BFS when all step costs are equal
- Completeness: YES, if step cost ≥ ε for some small positive constant ε
- Time complexity: assume C* is the cost of the optimal solution and every action costs at least ε; worst case O(b^(1+⌊C*/ε⌋))
- Space complexity: same as the time complexity
- Optimality: nodes are expanded in order of increasing path cost; YES, if complete

Depth First Search (DFS)
- Expand the deepest unexpanded node

DFS evaluation
- Completeness: NO, unless the search space is finite
- Time complexity: O(b^m)
- Space complexity: O(b·m)
- Optimality: NO (same issues as completeness)

Depth-limited search
- DFS with depth limit l, i.e. nodes at depth l have no successors
- Problem knowledge can be used to choose l; solves the infinite-path problem
- If l > d, then not optimal; if l = d, then complete -- still not necessarily optimal, but efficient
- Time complexity: O(b^l); space complexity: O(b·l)
- If l < d (the depth of the least-cost solution), then incomplete

Iterative Deepening Search (IDS)
- A strategy to find the best depth limit: depth-limited search to depth 1, 2, 3, ...
- Expands from the root each time; appears very wasteful, but the combinatorics can be counter-intuitive:
  N(DLS) = b^0 + b^1 + ... + b^d = O(b^d)
  N(IDS) = (d+1)b^0 + d·b^1 + (d−1)b^2 + ... + b^d = O(b^d)
  N(BFS) = b + b^2 + ... + b^d + (b^(d+1) − b) = O(b^(d+1))
- Example: for b = 10 and d = 5: N(DLS) = 111,111; N(IDS) = 123,456; N(BFS) = 1,111,100

Iterative deepening search

  function Iterative-Deepening-Search(problem) returns a solution or failure
    for depth ← 0 to ∞ do
      result ← Depth-Limited-Search(problem, depth)
      if result ≠ cutoff then return result

Evaluation of IDS
- Completeness: YES (no infinite paths)
- Time
complexity 0bd Space complexity 0bd Same as DFS Evaluation of IDS l Completeness r VE Tlrne Bumplexlty om l Space Eurnplexlty ow e SameasDFS I Optimality 7 YES ir step cost is l Iterative Deepening Search I Analogous to BFS explores a complete layer of nodes before proceeding to the next one I Combines bene ts of DFS and BFS Bidirectional Search I Two simultaneous searcnes ncim start and goal 7 Mutlyatlun W W mucn less tnan M I aerore a node is expanded it is cnecxed ir it is in tne rnnge or tne citner searcn can be done in constanttime using a nasn table I Time complexity mom I Space complexity same I Complete and uptlmal fur unirciim step costs ir bdtn searcnes are are Bidirectional Search issues in applying I ne predecessor or eacn rlode snoulo be emciently computable e Wnen acndns are easily reversible l Goal rlode sometimes not known explicitly e g in cness Comparison of search strategies Grimhm Mfr quotIli lmr Dank Dw r lm m Hidimm39n a a 5 mm in am m no co re it m M r o a b a r or in bquot Ifquot rm bl M 0 ow by no no no re re When the search graph is not a t I Need to avoid repeated states I Happens in problems with reversible operators 39 naries and Cannibals problem 3 392 m 4 3 u 2 o I Detection compare a node to be expanded to those already expanded Those are kept in the closed ist 7 in practice closed nasn table I Increases memory requirements especially for DFS bounded by the size ofthe state space Same forthe time complexity Graph Search function GRAPHVSEARCHmrobem funge strategy returns a Sumter ur faHurE funge e NSERTWAKENODEmrobem nmaxsmteoy funge loo do if EM F39TY7ange then return faHurE node e strategy SexeemnuRemuveNuue funge if probem Sumtmn7node then return node ifNOT N7node dosed closed e APPENDmoda closed funge e APPENDmmge probem EXPAN Dnode enddo Problem Solving by Searching Lecture 3 90208 Announcements By now you should have read Chapters 1 3 of your text Read Chapter 4 for Thursday The missionaries and cannibals problem1 1 Goal 
The missionaries and cannibals problem (1)
- Goal: transport 3 missionaries and 3 cannibals from the left bank of a river to the right bank.
- Constraints: whenever cannibals outnumber missionaries, the missionaries get eaten; the boat can hold at most two people; the boat can't cross the river empty. But both missionaries and cannibals can operate the boat.
(1) I apologize for the politically incorrect nature of this problem — I prefer wolves.

How do you solve this problem? It's not an equation. It's not a continuous function to regress. It's a set of discrete states connected by operators. An agent searches for an answer by trying sequences of actions, and this is what we will formalize.

Problem-solving agents
- Goal formulation: the desired state of the world.
- Problem formulation: the states and actions to consider in view of the goal.
- Search: determine the possible sequences of actions that lead to states of known value, then choose the best.
- Execute: given the solution, perform the actions.
Assumption: the environment is fully observable and deterministic, and the agent knows the effects of its actions.

Back to our example: a state description that allows us to describe our goal is (ML, CL, B), where ML = the number of missionaries on the left bank, CL = the number of cannibals on the left bank, and B = the location of the boat (L or R). Initial state: (3,3,L). Goal: (0,0,R).

Graph formulation of the search problem
- Nodes are states of the problem; there is an edge from state u to state v if there is an action that takes you from u to v.
  - What are the edges for the missionaries and cannibals problem? Can you apply all actions from any node?
- The problem is now to find a path from the start state (3,3,L) to the goal state (0,0,R).
- In general, paths have costs associated with them. What cost is appropriate for missionaries and cannibals? The problem is to find the lowest-cost path from the initial state to the goal.

The search graph: actions (operators) such as CCR (transport two cannibals to the right bank) or MCL (transport a missionary and a cannibal to the left bank). At each step:
- Check whether the resulting state is legal: you can't have more cannibals than missionaries on either bank.
- Compute the cost of the current path: how many boat trips have we made?
- Figure out which actions are possible (the possible next steps).

The partially expanded search graph: state = (ML, CL, B). Repeated states: the search graph is not necessarily a tree. [Figure: the M&C solution.]

The 8-puzzle
- States: integer locations of each tile. Initial state: any.
- Actions: move the blank in one of {Left, Right, Up, Down}.
- Goal test: check whether the goal configuration is reached.
- Path cost: the number of actions to reach the goal.
- Is the search graph a tree?

The 8-queens problem
- An incremental formulation starts from an empty state and involves operators that augment the state description.
- A complete-state formulation starts with all 8 queens on the board and moves them around.
Representation is key.
- Incremental formulation: states = any arrangement of 0 to 8 queens on the board; goal test = 8 queens on the board, none attacked. Roughly 1.8 x 10^14 possible sequences to investigate. Is the search graph a tree?
- A better incremental formulation: states = 0 to 8 queens on the board, one per column in the leftmost columns, no queens attacking each other; initial state = no queens; actions = add a queen to the leftmost empty column such that it does not attack any of the queens already on the board; goal test = 8 queens on the board. Only 2,057 possible sequences to investigate.

Real-world problems: route finding, the traveling salesperson problem (TSP), VLSI layout, robot navigation.

Navigating through a search tree: [animation frames showing successive expansions]. Unexpanded nodes form the fringe: at every point in the search process we keep track of a list of nodes that haven't been expanded yet — the fringe.
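The state encoding and legality check described above can be sketched as a successor function. The (M_left, C_left, boat) encoding follows the notes; the rest of the interface is illustrative.

```python
# Missionaries-and-cannibals state space.
# A state is (missionaries_on_left, cannibals_on_left, boat_side).

BOAT_LOADS = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # (m, c) in the boat


def is_legal(m_left, c_left):
    """Missionaries may not be outnumbered on either bank (unless absent)."""
    m_right, c_right = 3 - m_left, 3 - c_left
    if not (0 <= m_left <= 3 and 0 <= c_left <= 3):
        return False
    if m_left > 0 and c_left > m_left:
        return False
    if m_right > 0 and c_right > m_right:
        return False
    return True


def successors(state):
    """All legal states reachable by one boat trip."""
    m_left, c_left, boat = state
    sign = -1 if boat == "L" else +1            # the boat leaves its own side
    for m, c in BOAT_LOADS:
        new = (m_left + sign * m, c_left + sign * c,
               "R" if boat == "L" else "L")
        if is_legal(new[0], new[1]):
            yield new
```

Running any uninformed search over this successor function (with path cost = number of boat trips, as the notes suggest) finds the classic 11-crossing solution.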
Tree search
function TREE-SEARCH(problem, strategy) returns a solution or failure
  initialize the search tree to the initial state of the problem
  loop do
    if there are no candidates for expansion then return failure
    choose a leaf node for expansion according to strategy
    if the node contains a goal state then return the solution
    else expand the node and add the resulting nodes to the search tree
  enddo

What's in a node? State, parent, the operator used to generate the node, depth, and path cost.

function TREE-SEARCH(problem, fringe) returns a solution or failure
  fringe <- INSERT(MAKE-NODE(INITIAL-STATE(problem)), fringe)
  loop do
    if EMPTY?(fringe) then return failure
    node <- REMOVE-FIRST(fringe)
    if GOAL-TEST(problem) applied to STATE(node) succeeds then return SOLUTION(node)
    fringe <- INSERT-ALL(EXPAND(node, problem), fringe)

Comparing search strategies
- A strategy is defined by the order of node expansion.
- Strategies are evaluated by: completeness, time complexity (number of nodes generated), space complexity (number of nodes stored in memory during the search), and optimality.
- Time and space complexity are measured in terms of: b = the maximum branching factor of the search tree; d = the depth of the least-cost solution; m = the maximum depth of the state space (may be infinite).

Uninformed search strategies
- A.k.a. blind search: uses only the information available in the problem definition.
- When strategies can determine whether one non-goal state is better than another: informed search.
- Strategies are defined by the expansion order: breadth-first, uniform-cost, depth-first, depth-limited, iterative deepening, bidirectional search.

Breadth-first search (BFS): expand all nodes at depth d before proceeding to depth d+1. Implementation: FIFO queue.

Evaluation of BFS
- Completeness: does it always find a solution if one exists? YES, if the shallowest goal node is at some finite depth d.
- Time complexity: assume a state space where every state has b successors and the solution is at depth d. Worst case: expand all but the last node at depth d. Total number of nodes generated: b + b^2 + b^3 + ... + b^d + (b^(d+1) - b) = O(b^(d+1)).
- Space complexity: the same, if each node is retained in memory.
- Optimality: does it always find the least-cost solution? Only if depth = cost.

BFS evaluation: memory requirements are a bigger problem than the number of operations. Exponential-complexity search problems cannot be solved by uninformed search methods for any but the smallest instances. [Table: time and memory requirements for BFS by depth, for b = 10, 10,000 nodes/sec, 1,000 bytes/node.]

Uniform-cost search (review): expand the node with the lowest path cost; the fringe is a queue ordered by path cost; same as BFS when all step costs are equal. Complete if every step cost >= epsilon, a small positive constant — but what if some step costs are zero or negative? Time complexity O(b^(1 + floor(C*/epsilon))), where C* is the cost of the optimal solution and every action costs at least epsilon; space is the same; optimal if complete, since nodes are expanded in order of increasing path cost.

Depth-first search (DFS): expand the deepest unexpanded node. [Animation frames showing successive expansions.] Implementation: the fringe is a LIFO queue (a stack).

DFS evaluation
- Completeness: does it always find a solution if one exists? NO, unless the search space is finite and no loops are possible.
- Time complexity: O(b^m) — terrible if m (the depth of the search space) is much larger than d (the depth of the optimal solution); but if there are many solutions, it can be faster than BFS.
- Space complexity: O(bm). It is possible to use even less: expand one successor instead of all b.
- Optimality: NO; same issues as completeness.

Adversarial Search / Constraint Propagation — Lecture 9, 9/23/08
Announcements: Programming assignment 1 is due a week from Thursday; width and height were swapped in the test file (now fixed). Let's talk about timing: two minutes per move may be too long — should we shorten it? No ACM meeting this week. Google recruiting Thursday 5:30, Engineering 120 (Hammond Auditorium), open to CS/ECE/Math/Physics students; send email to JudyBrobst@colostate.edu.

Alpha-beta algorithm
function ALPHA-BETA-SEARCH(state) returns an action
  inputs: state, the current state of the game
  v <- MAX-VALUE(state, -infinity, +infinity)
  return the action in SUCCESSORS(state) with value v

function MAX-VALUE(state, alpha, beta) returns a utility value
  if TERMINAL-TEST(state) then return UTILITY(state)
  v <- -infinity
  for a, s in SUCCESSORS(state) do
    v <- MAX(v, MIN-VALUE(s, alpha, beta))
    if v >= beta then return v
    alpha <- MAX(alpha, v)
  return v

function MIN-VALUE(state, alpha, beta) returns a utility value
  if TERMINAL-TEST(state) then return UTILITY(state)
  v <- +infinity
  for a, s in SUCCESSORS(state) do
    v <- MIN(v, MAX-VALUE(s, alpha, beta))
    if v <= alpha then return v
    beta <- MIN(beta, v)
  return v

Final comments about alpha-beta pruning
- Pruning does not affect the final result; entire subtrees can be pruned, not just leaves.
- The effective branching factor depends on move ordering: with good ordering, alpha-beta can search roughly twice as deep as minimax in the same amount of time.

Games and resource limits
- Minimax and alpha-beta pruning require too much leaf-node evaluation, which may be impractical within a reasonable amount of time.
- Shannon (1950): terminate the search at a lower depth and apply a heuristic evaluation function EVAL instead of the exact UTILITY function.
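The MAX-VALUE / MIN-VALUE pseudocode above can be exercised on a small explicit game tree. This is a sketch: the list-of-lists tree encoding (a number is a terminal utility, a list is a set of successor states) is an assumption of the example, not from the notes.

```python
# Minimal alpha-beta over an explicit game tree, mirroring MAX-VALUE/MIN-VALUE.

import math


def alphabeta(tree, alpha=-math.inf, beta=math.inf, maximizing=True):
    if isinstance(tree, (int, float)):       # TERMINAL-TEST / UTILITY
        return tree
    if maximizing:                           # MAX-VALUE
        v = -math.inf
        for child in tree:
            v = max(v, alphabeta(child, alpha, beta, False))
            if v >= beta:
                return v                     # beta cutoff: MIN avoids this node
            alpha = max(alpha, v)
        return v
    else:                                    # MIN-VALUE
        v = math.inf
        for child in tree:
            v = min(v, alphabeta(child, alpha, beta, True))
            if v <= alpha:
                return v                     # alpha cutoff
            beta = min(beta, v)
        return v
```

On the three-branch tree `[[3, 12, 8], [2, 4, 6], [14, 5, 2]]` the minimax value is 3, and — as the notes stress — the cutoffs prune subtrees without ever changing that value.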
Cutting off search
- Change "if TERMINAL-TEST(state) then return UTILITY(state)" into "if CUTOFF-TEST(state, depth) then return EVAL(state)".
- This introduces a fixed depth limit, selected so that the amount of time used will not exceed what the rules of the game allow. When the cutoff occurs, the evaluation is performed.

Heuristic EVAL
- Idea: produce an estimate of the expected utility of the game from a given position.
- Performance depends on the quality of EVAL. Requirements:
  - EVAL should order terminal nodes in the same way as UTILITY.
  - It must be fast to compute.
  - For nonterminal states, EVAL should be strongly correlated with the actual chance of winning.
- Example: a weighted linear sum of features, Eval(s) = w1*f1(s) + w2*f2(s) + ... + wn*fn(s); e.g., in chess: w1*material + w2*mobility + w3*king safety + w4*center control.

How good are computers? Let's look at the state of the art in computer programs that play games such as chess, checkers, Othello, and Go.

Checkers
- Chinook: the first program to win the world champion title in a competition against a human (1994).
- Search: a variant of alpha-beta; the search space has about 10^20 states.
- Evaluation function; an endgame database for all states with 4 vs. 4 pieces (roughly 40 billion positions); an opening-book database of openings.
- Chinook can often determine the final result of the game within the first 10 moves; its author has recently shown that several openings lead to a draw.

Chess
- 1997: Deep Blue wins a 6-game match against Garry Kasparov.
- Searches using iterative-deepening alpha-beta; the evaluation function has over 8,000 features; an opening book of 4,000 positions; an endgame database.
- FRITZ plays world champion Vladimir Kramnik: an 8-game draw.

Othello: the best Othello programs can easily defeat the best humans (Logistello, 1997). Go: humans still easily beat the best computer programs.

Constraint propagation
- A new type of search problem, defined by a set of variables X1, ..., Xn (every variable has a domain of possible values) and a set of constraints C1, ..., Cm (every constraint restricts the combinations of values).
- Chapter 5 of your text (adversarial search is Chapter 6).

CSP example: map coloring. Given a map of Australia, color it using three colors such that no neighboring territories have the same color. Solutions are assignments satisfying all constraints, e.g., {WA=red, NT=green, Q=red, NSW=green, V=red, SA=blue, T=green}.

Constraint satisfaction problems
- A state is defined by an assignment of values to some or all variables.
- What is a CSP? A set of variables X1, ..., Xn with domains (possible values) D1, ..., Dn, and a set of constraints C1, ..., Cm. Each constraint limits the values that a subset of variables can take, e.g., V1 != V2.
- General-purpose algorithms with more power than standard search algorithms, including generic heuristics.
- A consistent assignment is one that does not violate the constraints; a complete assignment is one in which every variable is mentioned. Goal: a complete, consistent assignment.
- Applications: timetable problems (exam/teaching schedules), assignment problems (who teaches what).
- A CSP is a simple example of a formal representation language. Benefits: a standard representation language; generic goal and successor functions; useful general-purpose heuristics.

CSP example: map coloring, formally
- Variables: WA, NT, Q, NSW, V, SA, T. Domains: {red, green, blue}.
- Constraints: adjacent regions must have different colors, e.g., WA != NT (if the constraint language allows this), or (WA, NT) in {(red, green), (red, blue), (green, red), (green, blue), (blue, red), (blue, green)}.

Constraint graph
- Binary CSP: each constraint relates two variables. The constraint graph has variables as nodes and constraints as edges.

Varieties of CSPs
- Discrete variables with finite domains of size d: O(d^n) complete assignments. The satisfiability problem is a Boolean CSP.
- Infinite domains (integers, strings, etc.), e.g., job scheduling — variables are the start/end days for each job; we need a constraint language, e.g., StartJob1 + 5 <= StartJob3.
- Continuous variables, e.g., start times for Hubble Telescope observations; linear constraints are solvable in polynomial time by linear programming methods.
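The Australia map-coloring CSP above can be written down concretely — variables, domains, and the binary "different colors" constraints derived from an adjacency list. The dictionary representation is an illustrative choice, not the text's.

```python
# The Australia map-coloring CSP: variables, domains, binary constraints.

NEIGHBORS = {
    "WA": ["NT", "SA"],
    "NT": ["WA", "SA", "Q"],
    "SA": ["WA", "NT", "Q", "NSW", "V"],
    "Q": ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"],
    "V": ["SA", "NSW"],
    "T": [],                    # Tasmania touches nothing
}
DOMAIN = ["red", "green", "blue"]


def consistent(assignment):
    """True if no two assigned neighbors share a color (a partial
    assignment with no violated constraints is consistent)."""
    return all(
        assignment[a] != assignment[b]
        for a in assignment
        for b in NEIGHBORS[a]
        if b in assignment
    )
```

The solution given in the notes — {WA=red, NT=green, Q=red, NSW=green, V=red, SA=blue, T=green} — passes this check.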
Varieties of constraints
- Unary constraints involve a single variable, e.g., SA != green.
- Binary constraints involve pairs of variables, e.g., SA != WA.
- Higher-order constraints involve 3 or more variables.
- Preference (soft) constraints, e.g., red is better than green; often representable as a cost for each variable assignment. Not considered here.

Example: cryptarithmetic (TWO + TWO = FOUR)
- Task: assign digits to letters to make the addition correct. Additional constraint: no two letters have the same value.
- How do we set this up as a CSP? The variables we want to solve for are T, W, O, F, U, R; the values they can hold are 0, 1, 2, ..., 9; plus an all-different constraint.
- How do we express the mathematical constraints?
  - Step 1: we need hidden variables for the carry values: X1, X2, X3.
  - Step 2: each digit addition is a constraint:
    O + O = R + 10*X1
    X1 + W + W = U + 10*X2
    X2 + T + T = O + 10*X3
    X3 = F

CSP as a standard search problem
- Incremental formulation: the initial state is the empty assignment {}; the successor function assigns a value to an unassigned variable, provided there is no conflict; the goal test is that the current assignment is complete.
- The same formulation works for all CSPs. A solution is found at depth n (for n variables), and the maximum search depth is also n.
- Branching factor at the root: n*d; hence n! * d^n leaves.
- Observation: the order of assignment doesn't matter, so we can consider the assignment of a single variable at a time. This results in d^n leaves.

Backtracking search
- DFS for CSPs with single-variable assignments; backtracks when a variable has no value that can be assigned.
- The basic uninformed algorithm for CSPs.

function BACKTRACKING-SEARCH(csp) returns a solution or failure
  return RECURSIVE-BACKTRACKING({}, csp)

function RECURSIVE-BACKTRACKING(assignment, csp) returns a solution or failure
  if assignment is complete then return assignment
  var <- SELECT-UNASSIGNED-VARIABLE(VARIABLES[csp], assignment, csp)
  for each value in ORDER-DOMAIN-VALUES(var, assignment, csp) do
    if value is consistent with assignment according to CONSTRAINTS[csp] then
      add {var = value} to assignment
      result <- RECURSIVE-BACKTRACKING(assignment, csp)
      if result != failure then return result
      remove {var = value} from assignment
  return failure

Backtracking example: [animation frames coloring the Australia map step by step].

Improving backtracking efficiency: general-purpose methods can give huge gains in speed. Which variable should be assigned next? In what order should its values be tried? Can we detect inevitable failure early?

Most constrained variable: var <- SELECT-UNASSIGNED-VARIABLE chooses the variable with the fewest legal values — the most constrained variable, a.k.a. the minimum-remaining-values (MRV) or fail-first heuristic. What is the intuition behind this choice?

Most constraining variable: select the variable that is involved in the largest number of constraints on other unassigned variables. Also called the degree heuristic, because that variable has the largest degree in the constraint graph. Often used as a tie-breaker, e.g., in conjunction with MRV.

Least constraining value: this heuristic guides the choice of which value to assign next. Given a variable, choose the least constraining value: the one that rules out the fewest values in the remaining variables.

Forward checking: can we detect inevitable failure early — and avoid it later? Keep track of the remaining legal values for unassigned variables, and terminate a search direction when a variable has no legal values.
- Assign WA = red. Effects on the other variables connected by constraints to WA: NT can no longer be red; SA can no longer be red.
- Assign Q = green. Effects on the other variables connected by constraints to Q: NT can no longer be green; SA can no longer be green; NSW can no longer be green. (The MRV heuristic would automatically select NT or SA next.)
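The RECURSIVE-BACKTRACKING pseudocode above translates almost line for line into code. This is a sketch under the assumption that constraints are checked through a caller-supplied `conflicts` function — an illustrative interface, not the text's.

```python
# Backtracking search for a CSP. `variables` maps each variable to its
# domain; conflicts(var, value, assignment) reports a constraint violation.


def backtracking_search(variables, conflicts, assignment=None):
    assignment = dict(assignment or {})
    if len(assignment) == len(variables):          # complete assignment
        return assignment
    # SELECT-UNASSIGNED-VARIABLE: plain first-unassigned order here; the
    # MRV / degree heuristics above would slot in at this line.
    var = next(v for v in variables if v not in assignment)
    for value in variables[var]:                   # ORDER-DOMAIN-VALUES
        if not conflicts(var, value, assignment):
            assignment[var] = value
            result = backtracking_search(variables, conflicts, assignment)
            if result is not None:
                return result
            del assignment[var]                    # undo and try the next value
    return None                                    # triggers backtracking above
```

On a triangle of mutually adjacent regions, three colors succeed and two colors provably fail — the `None` return is the "failure" that propagates back up.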
- If V is assigned blue: SA's domain becomes empty, and NSW can no longer be blue. Forward checking has detected that the partial assignment is inconsistent with the constraints, and backtracking can occur.

Local Search II — Lecture 7, 9/16/08
Announcements: to clear up confusion about writing assignment 1's share of your final grade — the initial essay, the critique, and the rewrite are each one third of assignment 1. Programming assignment 1 is handed out Thursday. ACM meeting 5:00 Wednesday in USC 110: "Alternative Interfaces", e.g., the Wii.

Critiques
- Every critique should have 4 parts: a very brief summary (a few sentences at most), an explanation of weaknesses, an explanation of strong points, and a grade (accept/reject, etc.).
- No critique should ever insult the authors: criticize the work, not the author. "I didn't understand the part about ..." is fine; a personal attack is not.

Review: local search
- Used when the solution matters, not the path.
- General idea: a feasible solution is any syntactically correct solution; an evaluation function ranks feasible solutions; a neighborhood function produces local feasible solutions, i.e., feasible solutions that are similar to, but not the same as, the current solution.
- Start with a feasible solution; iteratively improve it by selecting the best of the neighboring solutions.

Review: hill climbing
function HILL-CLIMBING(problem) returns a state that is a local maximum
  inputs: problem, a problem
  local variables: current, a node; neighbor, a node
  current <- MAKE-NODE(INITIAL-STATE[problem])
  loop do
    neighbor <- a highest-valued successor of current
    if VALUE[neighbor] <= VALUE[current] then return STATE[current]
    current <- neighbor
This flavor of hill climbing is known as steepest ascent (steepest descent when the objective is minimization).

Variations on hill climbing — alternatives to steepest ascent:
- Stochastic hill climbing: select at random from among the better neighbors.
- First-choice hill climbing: adopt the first neighbor that is better than the current state. If successors are generated in a random order, this is stochastic hill climbing.
- Random-restart hill climbing: when an optimum is reached, record it and restart from a random position.
- Beam search: keep K states in parallel.

Beam search
- A variant of hill climbing: if the goal is among the successors, return it; else select the K best from the successors and repeat.
- Major difference with random-restart search: information is shared among the K search threads. Can suffer from a lack of diversity.

More exotic variations: many exotic search algorithms are just variations on local search. Some dynamically vary the neighborhood structure; some dynamically change the neighbor-selection rule.

Simulated annealing
- Physical systems are good at finding minimum-energy configurations: a physical system cooled down to absolute zero will settle into its minimum-energy configuration. Mimic the process using a probabilistic process.

function SIMULATED-ANNEALING(problem, schedule) returns a solution state
  inputs: problem, a problem; schedule, a mapping from time to "temperature"
  current <- MAKE-NODE(INITIAL-STATE[problem])
  for t <- 1 to infinity do
    T <- schedule[t]
    if T = 0 then return current
    next <- a randomly selected successor of current
    dE <- VALUE[next] - VALUE[current]
    if dE > 0 then current <- next
    else current <- next only with probability e^(dE/T)
The temperature T controls the probability of accepting value-decreasing steps.

Properties of simulated annealing
- Theory of Markov chains: as the number of moves goes to infinity, the probability of being in a given state is proportional to an exponential in that state's value; if the temperature is lowered slowly enough, the global optimum will be found with high probability.
- A lot of research has gone into what makes a good cooling schedule.
- Proposed in 1983 by Scott Kirkpatrick; widely used in VLSI layout, airline scheduling, etc.

Genetic algorithms: keep a population of solutions that undergo recombination and mutation. The state is the genetic material that makes an individual fit.
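The SIMULATED-ANNEALING pseudocode above can be sketched directly. The notes leave the cooling schedule abstract, so the geometric decay used here is an assumption of this sketch, as is the toy one-dimensional problem.

```python
# Simulated annealing for maximization, following the pseudocode above.

import math
import random


def simulated_annealing(value, neighbor, start, t0=10.0, decay=0.95, floor=1e-3):
    current = start
    t = t0
    while t > floor:                       # T ~ 0: stop and return current
        nxt = neighbor(current)            # a randomly selected successor
        delta = value(nxt) - value(current)
        # Uphill moves always accepted; downhill with probability e^(dE/T).
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
        t *= decay                         # lower the temperature
    return current
```

On the toy objective -(x - 3)^2 with +/-1 neighbors, the walk ratchets toward the global maximum at x = 3: early on, even downhill moves are occasionally accepted; as T falls, the state freezes.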
function GENETIC-ALGORITHM(population, FITNESS-FN) returns an individual
  inputs: population, a set of individuals; FITNESS-FN, a function measuring the quality of an individual
  repeat
    new_population <- empty set
    for i = 1 to SIZE(population) do
      x <- RANDOM-SELECTION(population, FITNESS-FN)
      y <- RANDOM-SELECTION(population, FITNESS-FN)
      child <- REPRODUCE(x, y)
      if (small random probability) then child <- MUTATE(child)
      add child to new_population
    population <- new_population
  until some individual is fit enough, or enough time has elapsed
  return the best individual

Solving TSP with a genetic algorithm
- Represent a tour as a permutation pi of 1, 2, ..., n.
- Fitness of a solution: the negative of the cost of the tour.
- Initialization: using either some heuristic or a random set of permutations.
- We need to define crossover and mutation operations.

Crossover
- Order crossover: choose a subsequence (tour segment) from one parent and preserve the relative order of the cities from the other.
- Example: with p1 = (1 2 3 | 5 4 6 7 | 8 9), the offspring keeps the segment (5 4 6 7). List the cities of the other parent starting from its second cut point, remove the cities already in the offspring, and insert the resulting partial tour after the second cut point, yielding o1 = (2 1 8 | 5 4 6 7 | 9 3).

Mutation
- Can use a 2-opt operation: select two points along the permutation, cut it at these points, and reinsert the reversed segment.
- Example: (1 2 | 3 4 5 6 | 7 8 9) -> (1 2 | 6 5 4 3 | 7 8 9).

Search in continuous spaces
- Example: we want to place a new airport in a region such that the sum of squared distances from each city is minimized; denote this objective by f(x).
- When a closed-form solution is available, solve by setting the gradient to zero: grad f(x) = 0. Otherwise, gradient ascent/descent: x <- x + alpha * grad f(x). How to choose alpha? E.g., line search.

Online search
- So far we have used deterministic actions and fully known environments, which permits offline search.
- Consider a new problem: a robot is placed in the middle of a maze; the task is to find the exit, i.e., escape. Questions: can the robot do A* search? Can it do local search?

Formalizing online search — online search problems are defined by:
- ACTIONS(s): returns the set of actions that can be executed in state s.
- Every action returns a new state given the current state, but you have to execute it to find out what that state will be.
- c(s, a, s'): the step-cost function tells you the cost of an action, but only once s' is known.
- GOAL-TEST(s): returns true if s is a goal state.

Simple online local search
- Assume that actions are reversible and deterministic (like the robot maze, with no holes, slides, etc.). Then we can perform local search with an evaluation heuristic:
  For all actions in state s: try the action, leading to state s'; evaluate s'; reverse the action, returning to state s.
  Select the best action and execute it, going to state s''. If s'' is a goal, return; else repeat the loop.

Other search strategies: we could use the same idea of reversing actions to implement breadth-first search or A* search. Note that depth-first search isn't so bad here.

Depth-first search with memory
- Keep a table, initially empty, that maps (state, action) -> state.
- Keep, for every state, a list of the actions that haven't been tried.
- Keep a list of parent nodes for every state, and whether they have been backtracked.
- Use these to implement online depth-first search, even when the search space is a graph.

LRTA*
- A similar idea can improve online A* search. Scenario: the robot is in state s, and h(s) is 1 step. Two reversible actions apply: A1: s -> s', with h(s') = 3; A2: s -> s'', with h(s'') = 4. What will the robot do on step 1? On step 2?
- After going to state s', the robot will probably return to state s, because h(s) is 1. But we can learn: since the states that followed s have heuristic values of 3 and 4, the true distance to the goal from s must be at least min(3, 4) + 1 = 4 (each action costs 1).
- Create a function H(s), initialized to h(s). After expanding s: H(s) <- MAX(H(s), MIN over successors s' of (H(s') + c(s, a, s'))).

Informed Search: A* — Lecture 5, 9/09/08
Announcements: turn in writing assignment 1 now; you should give two copies to Jason, one with your name on it and one without. At the end of class, Jason will give you back someone else's anonymous essay to critique; your evaluation will not affect their grade. Write a paragraph on your critique: what they did wrong and the grade you would have given them. Due Thursday. ACM club: Wednesday 5:00-6:00 in USC, "Automation and Revision Control".

Review: tree search.
function TREE-SEARCH(problem, fringe, strategy) returns a solution or failure
  fringe <- INSERT(MAKE-NODE(INITIAL-STATE(problem)), fringe)
  loop do
    if EMPTY?(fringe) then return failure
    node <- strategy.SelectAndRemoveNode(fringe)
    if problem.Solution?(node) then return node
    else fringe <- APPEND(fringe, problem.EXPAND(node))
  enddo

Best-first search
- Informed search strategy: expand the apparently best node.
- Factors going into the determination of "best": the current cost of the solution path, and the estimated distance to the nearest goal state.
- A node is selected for expansion based on an evaluation function f(n). Implementation: the fringe is a queue sorted by f.
- Special cases: greedy search, A* search.

Heuristics
- Heuristic: a rule of thumb, simplification, or educated guess that reduces or limits the search for solutions in domains that are difficult and poorly understood.
- The heuristic function h(n) estimates the cost of the cheapest path from node n to a goal node. If n is a goal node, h(n) = 0.

Greedy best-first search: expand the node on the frontier closest to the goal, where "closest" is determined by the heuristic function h. Example: path planning between two cities, using the straight-line-distance heuristic h_SLD(n). The greedy solution (A-C-D) is not necessarily the least-cost solution (A-B-D).

A* search
- Order states by their total estimated cost; always select the node with the lowest value f(n) = g(n) + h(n), where g(n) is the cost to reach n and h(n) is the estimated cost from n to the goal.

Avoiding repeated states
- BFS: add to the fringe only if the state has not already been visited (is absent from the closed list).
- A*: if a node represents a state already visited, update the cost when the new total estimated cost is lower.

Heuristic functions for the 8-puzzle
- h1 = the number of misplaced tiles, e.g., h1(s) = 8.
- h2 = the sum of the distances of the tiles from their goal positions (Manhattan distance), e.g., h2(s) = 3+1+2+2+2+3+3+2 = 18.
- Comparison of heuristics: even very simple heuristics like h1 and h2 can significantly reduce the search cost.

A* in Romania: get to Bucharest starting at Arad. f(Arad) = g(Arad) + h(Arad) = 0 + 366 = 366.
- Expand Arad and determine f(n) for each successor: f(Sibiu) = 140 + 253 = 393; f(Timisoara) = 118 + 329 = 447; f(Zerind) = 75 + 374 = 449. Best choice: Sibiu.
- Expand Sibiu: f(Arad) = 280 + 366 = 646; f(Fagaras) = 239 + 176 = 415; f(Oradea) = 291 + 380 = 671; f(Rimnicu Vilcea) = 220 + 193 = 413. Best choice: Rimnicu Vilcea.
- Expand Rimnicu Vilcea: f(Craiova) = 366 + 160 = 526; f(Pitesti) = 317 + 100 = 417; f(Sibiu) = 300 + 253 = 553. Best choice now: Fagaras (415).
- Expand Fagaras: f(Sibiu) = 338 + 253 = 591; f(Bucharest) = 450 + 0 = 450. Best choice: Pitesti (417).
- Expand Pitesti: f(Bucharest) = 418 + 0 = 418, now the lowest f-value on the fringe. Is the solution optimal? The path Arad - Sibiu - Rimnicu Vilcea - Pitesti - Bucharest, with cost 418, is.

Admissible heuristics
- A heuristic is admissible if it never overestimates the cost to reach the goal ("optimistic"). Formally: for all n, h(n) <= h*(n), where h*(n) is the true cost from n, and h(G) = 0 for any goal G.
- Examples: h_SLD never overestimates the actual road distance; the 8-puzzle heuristics h1 and h2.

Optimality of A*
- Let G2 be a suboptimal goal node on the fringe, and let n be an unexpanded node on an optimal-cost path to the optimal goal G. Then:
  f(G2) = g(G2), since h(G2) = 0
        > g(G), since G2 is suboptimal
        >= f(n), since h is admissible.
  Since f(G2) > f(n), A* will never select G2 for expansion.

Graph search: it turns out there is an additional way to avoid the necessary bookkeeping required for graph search — use a consistent heuristic.

Consistency
- A heuristic is consistent if h(n) <= c(n, a, n') + h(n').
- Given a consistent heuristic: f(n') = g(n') + h(n') = g(n) + c(n, a, n') + h(n') >= g(n) + h(n) = f(n).
- A consequence of consistency: f(n) is nondecreasing (monotonic) along any path.

Monotonicity implies optimality
- A* expands nodes in order of increasing f-value. The expansion can be represented as contours of states with equal f-value; A* expands all nodes with f(n) < C*.

When a heuristic is almost admissible
- Graceful decay of admissibility: if a heuristic rarely overestimates the cost by more than epsilon, then the A* algorithm will rarely find a solution whose cost is more than epsilon greater than the cost of the optimal solution.
- Meaning: so long as we undershoot almost always, and bound how much we overshoot, we seldom get in trouble — and the trouble is minor.

Evaluation of A*
- Completeness: YES, unless there are infinitely many nodes with f < f(G).
- Time complexity: the number of nodes expanded is still exponential, since A* expands all nodes with f(n) < C*.
- Space complexity: also exponential.
- Optimality: YES — A* does not expand any node with f(n) > C*.

Comparing heuristics — for the 8-puzzle: h1 = the number of misplaced tiles; h2 = the sum of the Manhattan distances of the tiles from their goal positions.

Inventing heuristics
- Admissible heuristics can be derived from the exact solution cost of a relaxed version of the problem.
  - Relaxed 8-puzzle for h1: a tile can move anywhere.
  - Relaxed 8-puzzle for h2: a tile can move to any adjacent square.
  - Another relaxation: a tile can move to any blank square.
- Admissibility: the optimal solution cost of a relaxed problem is no greater than the optimal solution cost of the real problem.
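The f(n) = g(n) + h(n) discipline above can be sketched as a priority-queue search over an explicit weighted graph. A minimal version; the adjacency-list encoding is an assumption of the sketch, not the course's implementation.

```python
# A* search. graph[u] is a list of (v, step_cost) pairs; h maps a node to
# its heuristic estimate. With an admissible h, the returned path is optimal.

import heapq


def a_star(graph, h, start, goal):
    fringe = [(h(start), 0, start, [start])]      # (f, g, node, path)
    best_g = {start: 0}
    while fringe:
        f, g, node, path = heapq.heappop(fringe)  # lowest f first
        if node == goal:
            return path, g
        for child, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(child, float("inf")):  # better route to child
                best_g[child] = g2
                heapq.heappush(fringe, (g2 + h(child), g2, child,
                                        path + [child]))
    return None, float("inf")
```

On a fragment of the Romania map with the straight-line-distance values used above, this returns the Arad - Sibiu - Rimnicu Vilcea - Pitesti - Bucharest path with cost 418, matching the hand trace.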
Inventing heuristics (continued)
- Admissible heuristics can also be derived from the solution cost of a subproblem of a given problem; this cost is a lower bound on the cost of the real problem.
- Construct a database of solutions for subproblems; use a combination of subproblems to define the heuristic.
- Having the best of all worlds: given admissible heuristics ha, hb, hc, the heuristic h(n) = max(ha(n), hb(n), hc(n)) is also admissible.
- Learning from experience: e.g., by solving lots of 8-puzzles, a learning algorithm can be used to predict costs for states that arise during search.

Vision, Part 2 — Lecture 23, 11/20/08

Warning: the example given in programming assignment 3 is a query of the form P(R | C, W). This is easy: Room depends on Cook and Weapon, so you just look up the answer in the table. But what if I reversed it, asking for P(C | R, W)? Remember: the evidence is an arbitrary set of variable assignments, and the query may be anywhere in the graph.

Where were we? Talking about vision.
- What does it do? What makes it hard? Loss of 3D, reflectance, the notion of "object".
- How do we do it? Very good question — I'd love an answer.

Traditional starting points: edge extraction, segmentation, interest points, predictions.

Canny examples: [figures: a source image and Canny edge maps for sigma = 2.0, 3.0, and 10, with low threshold 0.40 and high thresholds from 0.60 to 0.99 — larger sigma smooths away detail, and the thresholds control which gradient ridges survive].

What is an edge?
- An edge is a description of a localized image pattern; it measures a slope in intensity.
- An edge is a symbolic feature: we need to know what it denotes — a surface marking, a surface discontinuity, or a shadow/illumination discontinuity. These things have precise positions.

Segmentation: divides an image up into regions of similar pixels. [Figures: watershed segmentation examples.]
Segmentation (cont.)
- [figure: additional segmentation examples]

Segmentation: problems
- Objects are rarely a single color or texture.
- Shadows and highlights change appearances.
- Definitional: is a car one object, or a set of tires and windows and ...?
- When asked to segment an image by hand, two people rarely agree.

Interest Points
- Why look at the whole image? People attend to specific points in images (overt and covert attention).
- Interest point extractors find locations (and scales) to look at further.
- [figure] These interest points come from NVT (Itti, USC); arrows show the sequence of eye movements.

How do we directly compare two images?
- Are these images the same? Are they similar?

Pixelwise Comparison
- Σ_{x<148} Σ_{y<151} |A(x,y) − B(x,y)|, normalized by image area: about 5 grey values per pixel.

What is Correlation About? Let's play a game
- Knowing the intensity at a location in one image lets you predict the value in the other.

The results of the game
- First game: random.
- Game 2: features; nearly perfect prediction (to within one value).
- Punch line: correlation measures predictability.

Sources of Variation
- Change in scene content
- Out-of-plane rotation, scale change
- In-plane translation, rotation
- Change in illumination
- Change in mixed pixels
- Change in gain (f-stop)
- Electronic noise

Correlation
- r = Σ_{x,y} (A(x,y) − Ā)(B(x,y) − B̄) / sqrt( Σ_{x,y} (A(x,y) − Ā)² · Σ_{x,y} (B(x,y) − B̄)² )
- What is the underlying model?

Assumptions of Correlation
- Assumes the two signals vary linearly with each other.
- A constant shift to either signal (the DC component) doesn't matter, and neither does a constant gain.
- This minimizes sensitivity to changes in overall illumination, f-stop, or gain.

Informal Definition: Stereo
- The ability to infer 3D structure and distance from two or more overlapping images taken simultaneously from different viewpoints.
- Are these stereo images? Describe the viewpoints.

Scenarios
- Most common: perpendicular optical axes.
- Also common: convergent optical axes.
- More common than you might think: arbitrary axes.

Two Sub-Problems
- Image matching (correspondence): identifying which points in image 1 match which points in image 2.
  - Note: not all points in image 1 match anything in image 2. (Why not?)
  - Note: not all matching points can be found.
- Reconstruction: given point matches, determine their 3D positions; requires triangulation (implicit or explicit).

Image Matching
- Find common scene points in two images.
  - Occlusion vs. complete overlap of the visual fields.
  - Potentially strong perspective effects.

General Methods
- Correlation-based: cross-correlate every pixel in the left image with the right image; epipolar geometry can bound this search.
- Feature-based: extract points, edges, lines, etc., and match them across images (more on these features later in the course).

Reconstruction as Triangulation
- Assume that the positions and baseline of the cameras are known.
- Solve for the T's; compute the coordinates of the scene point Q by triangulating from the baseline.

Epipolar Geometry
- For any point in image 1, there is a line of points in image 2 such that its match, if one exists, must lie on that line.
- This is because there is a plane defined by the two focal points and the point in image 1. The scene point must lie in this plane. (Why?)
- Also, the matching point in image 2 must lie in this plane. (Why?)

Getting Formal about Stereo
- The fundamental matrix describes any pair of cameras with overlapping views, and can reconstruct the depth of any pair of matching points to within a scale factor.

Image-to-Image and Image-to-Data Matching
- [figures: images, features, and groups matched at increasing levels of abstraction]

Simple Example: Matching Points
- Edges and corners can both be viewed as interesting 2D points in an image.
- It might be useful to match a set of edges/corners to points in an object model.
- This creates a correspondence problem: which feature in image A matches which feature in image B?
- Similar to, but different from, correlating edge maps.

Feature Matching
- Correspondence problems: one-to-one, one-to-many, many-to-many.
- Geometric transformations: similarity, affine, perspective.
- Noise: missing and extraneous features.
- Match metrics: for individual feature pairs (can any point match any point?) and for feature sets (geometric error measures).
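The correlation measure discussed above (normalized cross-correlation) is a one-liner once both images are mean-subtracted; a minimal sketch:

```python
import numpy as np

def ncc(a, b):
    # normalized cross-correlation: +1 = perfectly linearly related, -1 = inverted
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())
```

By construction, b = 2a + 5 scores 1.0: shifting or scaling either signal does not change the score, which is exactly the illumination/gain invariance claimed in the assumptions above.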
Point Matching Examples
- [figures: example 1 (simple), example 2]

The Directed Hausdorff Metric
- A distance measure between two sets of points.
- Given two sets of points A and B: h(A, B) = max_{a ∈ A} min_{b ∈ B} ||a − b||.
- In other words, it is the maximum distance from any point in A to its nearest point in B.
- What class of transformations does this support?

Robust Hausdorff
- The Hausdorff metric is highly sensitive to noise: one outlier point in set A will ruin the match.
- A more robust version replaces the max by the K-th ranked distance: h_K(A, B) = K-th_{a ∈ A} min_{b ∈ B} ||a − b||.
- Typical values of K are 10%, 20%, or 50% of m (the number of points in A).

Hough Transform Grouping
- The idea of the Hough transform is that a change in representation converts a point-grouping problem into a peak-detection problem.
- Standard line representations:
  - y = mx + b: compact, but problems with vertical lines.
  - Two endpoints (x0, y0), (x1, y1): your raytracer used this form, but it is highly redundant: 4 free parameters.
  - ax + by + c = 0: Bresenham's uses this form; still redundant: 3 free parameters.
- How else might you represent a line?

Hough Grouping (cont.)
- Represent infinite lines as (ρ, θ), with ρ = x cos θ + y sin θ.
- Why? This representation is:
  - Small: only two free parameters (like y = mx + b).
  - Finite: 0 ≤ ρ ≤ sqrt(rows² + cols²), 0 ≤ θ < 2π.
  - Unique: only one representation per line.
- General idea: the Hough space (ρ, θ) represents every possible line. Next step: use a discrete Hough space, and let every point vote for any line it might belong to.

Hough Grouping: directed edges
- Every edge has a location and an orientation, so it can be part of only one infinitely extended line.
- Colinear edges map to one bucket in Hough space.

Hough Example
- [figure: an edge image and its Hough space]

The Generalized Hough Transform (Ballard, 1981)
- Suppose that we have edges extracted from an arbitrary 2D curve. Can we use a Hough space to determine if the curve is in another image?
- For every edge in the model curve, there is a vector from the edge to the reference point. Store this set of vectors.
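The ρ-θ voting scheme just described can be sketched in a few lines. For simplicity this version uses undirected points and θ in [0, π) rather than the directed-edge [0, 2π) variant on the slides; the grid resolutions are arbitrary assumptions.

```python
import numpy as np

def hough_votes(points, n_theta=180, n_rho=200, rho_max=100.0):
    # accumulator over the discrete (rho, theta) Hough space
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    cols = np.arange(n_theta)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)   # rho = x cos(theta) + y sin(theta)
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        ok = (idx >= 0) & (idx < n_rho)
        acc[idx[ok], cols[ok]] += 1                     # each point votes once per theta
    return acc, thetas
```

Collinear points pile their votes into one bucket, so grouping becomes peak detection in the accumulator.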
Visualizing the Generalized Hough
- [figure: model edges, their stored vectors, and the accumulator votes]

Why So Much Emphasis on Transformations?
- Because correspondence problems are hard. (Search is hard, right?)
- Trying to search for a one-to-one correspondence between model and image parts is infeasibly complex.
- Try to simplify the problem before going to the types of solutions studied in CS440.

Pose Determination
- More general: match a 3D model to 2D data.
- Find Q (a rotation and translation) such that the model projects under perspective onto the data.
- Note that scale and mirror are not allowed. (Why?)
- A harder problem: no closed-form solutions in general.
- Guess an approximate pose, then iteratively refine the estimate; use extra data to over-constrain the solution.

Formalization: A Pipeline
- Let M = (Mx, My, Mz) be a 3D model point.
- Let D = (Dx, Dy, Dz) be the 3D model point after translation and rotation.
- Let d = (x, y) be a 2D data point.
- Then D = R M + T, and under perspective projection x = f Dx / Dz, y = f Dy / Dz.

The Pose Problem
- So we formalize the pose problem as: find the R and T whose projection maps the model points onto their data points.
- We assume that the camera's focal length f is known.
- This leaves six free parameters: three translational (Tx, Ty, Tz) and three rotational (θx, θy, θz).
- In principle this is solvable from three point correspondences; in practice it is too nonlinear and too sensitive to small amounts of noise.
- A better approach: Newton's method. Over-constrain the problem with extra points, guess an initial answer, and iteratively improve it.

Error Minimization for over-constrained problems
- Let R, T be the initial guess at a rotation and translation.
- In the absence of noise, if R and T are correct, each model point projects exactly onto its data point.
- Therefore the error in the guess is the residual between the projected model points and the data points, and we want to minimize the total error.

Unsupervised Classification: K-Means & EM (Lecture 24, 12/09/08)

Announcements
- Programming Assignment 3 is due Dec 11th.
  - The tables are now corrected on the web page: "Scarlet" has one "t", all values are now exact, and all rows sum to one.
- Programming Assignment 2: grades were handed back last class. If our results don't match yours, let us know; if a simple fix is possible, talk to us.
- Today's lecture: Chapter 20.3.

Goal
- Given: a set of observed data samples, and K, the number of underlying processes.
- Find: the K Gaussian distributions that are most likely to have generated the data.
- We will present the algorithm first and a taste of the theory later.
First Attempt: K-Means (Algorithm 1)
1. Randomly select K samples as the initial cluster centers.
2. Until convergence:
   - For every sample: assign the sample to the nearest cluster (Euclidean distance).
   - For every cluster: move the cluster to the geometric center of its samples.

Demonstration
- [figure: a 2D, two-class Gaussian example]

Does K-Means recover Gaussian clusters?
- Sadly, no.
- Distance measure problems: unequal variance across dimensions; dependencies between dimensions.
- Hard assignment problems: Gaussian distributions are probabilistic.
- EM can be viewed as K-Means after the problems above have been corrected.

Problem 1a: unequal variances
- Two Gaussians, but with different variances; Euclidean distance is symmetric.
- We need to model each cluster, not just its center.

Improving K-Means
- Replace cluster centers (traditional K-Means) with Gaussian probability functions over feature space (improved K-Means).
- The cluster center is the mean point in feature space; add standard deviations σx, σy (in general, a σ for every dimension).
- Improved K-Means iteratively assigns samples to clusters based on the PDFs, then updates each PDF based on its assigned samples.

Improved K-Means (Algorithm 2)
- Initialize the PDFs: each initial mean is the position of a randomly selected sample; the initial σ's are 1.0.
- Until convergence:
  - For every sample: calculate the distance to the mean of every PDF, measured in standard deviations, and assign the sample to the closest PDF (maximum likelihood).
  - For every PDF: update the means and σ's according to its samples.
- Note: we lag by one iteration: iteration j measures distances from samples to the iteration j−1 means and σ's.

Solves unequal variance
- [figure: dashed lines represent the σ's; the blue cluster has a large σx, which attracts samples to its left, away from the red cluster, which has a small σy]

Problem 1b: dependencies among dimensions.
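Algorithm 1 above, as a minimal NumPy sketch. The optional explicit initialization (and all data in the usage) are illustrative assumptions.

```python
import numpy as np

def kmeans(samples, k, init=None, iters=100, seed=0):
    samples = np.asarray(samples, dtype=float)
    rng = np.random.default_rng(seed)
    centers = (np.asarray(init, dtype=float) if init is not None
               else samples[rng.choice(len(samples), k, replace=False)])
    labels = np.zeros(len(samples), dtype=int)
    for _ in range(iters):
        # assign every sample to the nearest cluster (Euclidean distance)
        d = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move every cluster to the geometric center of its samples
        new = np.array([samples[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels
```

On well-separated, equal-variance blobs this recovers the cluster means; the failure cases on the following slides need the covariance and soft-assignment fixes.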
- What if we rotate the previous figure? The problem returns: per-dimension deviations no longer capture meaningful variance.

Improving K-Means again
- Drop the independence assumption for the PDFs; measure the covariance Σ of each PDF.
- Let X be the D×N matrix of mean-subtracted samples; then Σ = X Xᵀ / N is the covariance matrix.

Improving K-Means (cont.)
- Now we can measure distance in terms of probability. Let S_i be a sample vector (the i-th column of X). Then the probability of S_i given PDF j is
  P(S_i | j) = (2π)^(−D/2) |Σ_j|^(−1/2) exp( −½ (S_i − μ_j)ᵀ Σ_j⁻¹ (S_i − μ_j) ).

Improved K-Means (Algorithm 3)
- Initialize the PDFs: each initial μ is the position of a randomly selected sample; each initial Σ is I.
- Until convergence:
  - For every sample: calculate the probability under PDF j (using the equation above) for all j, and assign the sample to the most likely PDF.
  - For every PDF: update the μ's and Σ's according to its samples.
- This solves the dependence problem: as long as the underlying functions are multivariate Gaussians, dependencies no longer matter.
- Question: since Gaussians have infinite tails, any sample could be created by any of the Gaussians. Why assign samples uniquely to individual PDFs?

The Hard-Assignment Problem
- [figure: overlapping Gaussians: which Gaussian generated these samples?]
- With a hard assignment, whichever cluster gets the intersection first gets larger covariances and too many samples.
- [figure: more overlapping Gaussians, true 2-cluster data] When the means of two PDFs are nearly the same, hard assignment again creates local optima that are hard to break out of, because every sample is uniquely assigned to one PDF.

Really Improved K-Means: Expectation Maximization (EM)
- EM refines our improved K-Means by introducing soft assignments.
- R_ij = P(j | s_i) is the probability of PDF j having generated s_i, given that s_i exists: the "responsibility" of PDF j for s_i (a soft assignment).
- α_j is the a priori probability of PDF j.

The EM Algorithm
- Initialize the PDFs: for every j, the initial μ_j is the position of a randomly selected sample, the initial Σ_j is I, and the initial α_j is 1/K.
- Until convergence:
  - For every sample i: calculate R_ij for every PDF j, using the old μ's, Σ's, and α's.
  - For every PDF j, using the new R's: update μ_j, update Σ_j, and update α_j.
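The EM loop just described, sketched for multivariate Gaussians: soft responsibilities in the E-step, weighted updates of α, μ, and Σ in the M-step. The optional explicit mean initialization and the small ridge added to Σ are practical assumptions, not from the slides.

```python
import numpy as np

def em_gmm(X, k, mu_init=None, iters=50, seed=0):
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    rng = np.random.default_rng(seed)
    mu = (np.asarray(mu_init, dtype=float) if mu_init is not None
          else X[rng.choice(n, k, replace=False)])
    cov = np.array([np.eye(d) for _ in range(k)])
    alpha = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities R[i, j] proportional to alpha_j * N(x_i | mu_j, cov_j)
        R = np.empty((n, k))
        for j in range(k):
            diff = X - mu[j]
            inv = np.linalg.inv(cov[j])
            norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(cov[j]))
            R[:, j] = alpha[j] * norm * np.exp(-0.5 * np.sum(diff @ inv * diff, axis=1))
        R /= R.sum(axis=1, keepdims=True) + 1e-300
        # M-step: weighted updates of alpha, mu, and Sigma
        Nj = R.sum(axis=0)
        alpha = Nj / n
        mu = (R.T @ X) / Nj[:, None]
        for j in range(k):
            diff = X - mu[j]
            cov[j] = (R[:, j, None] * diff).T @ diff / Nj[j] + 1e-6 * np.eye(d)
    return alpha, mu, cov
```

Note how no sample is ever assigned to exactly one component; each contributes to every update in proportion to its responsibility.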
EM (cont.)
- Now, instead of assigning samples to PDFs, you compute responsibilities.
- Your underlying model is {μ_j, Σ_j, α_j} for all j = 1..K.

Conclusion
- Algorithmically, EM is K-Means with a distance measure based on multivariate Gaussians and soft assignments.
- Theoretically, others have shown it converges to a local optimum; this optimum selects K Gaussians so as to maximize the probability of observing the data.
- EM can be applied to other distributions besides Gaussians.

EM Update Equations: a taste of the theory
- The data can be divided into the observed data X_i and the unobserved data Y_i (i.e., the labels).
- If Z_i = (X_i, Y_i), then the likelihood factors over the Z_i, where θ is the parameter vector for your PDFs: P(y | x, θ) is related to responsibility, and P(x | θ) can be solved for given the type of PDF.

Bayes IV: Exact Inference in Bayesian Nets (Lecture 19, 11/06/08)

Announcements
- Programming assignment 2 is now past due. Late assignments are accepted until the 10th, with a 20% penalty.

Review: a simple network
- Causes → effects: a directed acyclic graph (DAG).

Review: conditional independence
- Two variables X and Y are conditionally independent given Z if P(X = x | Y = y, Z = z) = P(X = x | Z = z) for all values x, y, z.
- That is, learning the value of Y does not change the prediction of X once we know the value of Z.
- Notation: I(X; Y | Z)

Where are we?
- We know the basics of probability: e.g., the probability of an event P(e), and conditional probability P(e | f).
- We know how to represent networks of causal events.
- But how do we infer probabilities given a causal network and a set of observations? If Mary calls and John calls, do I believe there was a burglary?

Two Steps
1. First we need to be able to calculate P(X) for an arbitrary node X in a Bayes net, without any evidence.
2. Then we will compute P(X | E) = P(X ∧ E) / P(E).

Inference in BN
- A set E of evidence variables that are observed (e.g., JohnCalls, MaryCalls).
- A query variable X (e.g., Burglary) for which we would like to know the posterior probability distribution P(X | E): the distribution conditioned on the observations made.
Inference in BN: simplest case
- A → B: P(B) = Σ_a P(B | a) P(a).
- For Boolean variables:
  P(b) = P(b | a) P(a) + P(b | ¬a) P(¬a)
  P(¬b) = P(¬b | a) P(a) + P(¬b | ¬a) P(¬a)

Inference on a chain
- A → B → C: P(C) = Σ_b Σ_a P(C | b) P(b | a) P(a) = Σ_b P(C | b) Σ_a P(b | a) P(a).
- For a chain with n nodes, where each node has k possible values, the complexity is O(n k²).

Inference on a chain: summary
- P(D) = Σ_c P(D | c) Σ_b P(c | b) Σ_a P(b | a) P(a): sum out one variable at a time, passing the resulting factor forward.
- We can perform variable elimination using a different order, e.g. P(D) = Σ_a P(a) Σ_b P(b | a) Σ_c P(c | b) P(D | c); the answer is the same, but the cost can differ.

Variable Elimination
- Suppose we're interested in P(X | E). Write the query in the form P(X, e) = Σ ... Σ Π_i P(x_i | parents(X_i)).
- Iteratively: move all irrelevant terms outside the innermost sum; perform the innermost sum, getting a new term; insert the new term into the product.

Inference Ex. 2
- P(w) = Σ_{r,s,c} P(w | r, s) P(r | c) P(s | c) P(c) = Σ_{r,s} P(w | r, s) Σ_c P(r | c) P(s | c) P(c) = Σ_{r,s} P(w | r, s) f(r, s).

A More Complex Example: the Asia network
- [figure: the Asia network]
- Suppose we want to compute P(d). The form of the joint distribution is
  P(v) P(s) P(t|v) P(l|s) P(b|s) P(a|t,l) P(x|a) P(d|a,b).
- We need to eliminate v, s, x, t, l, a, b.
- Eliminate v: compute f_v(t) = Σ_v P(v) P(t|v), leaving
  P(s) f_v(t) P(l|s) P(b|s) P(a|t,l) P(x|a) P(d|a,b).
  Note that f_v(t), the result of elimination, is not necessarily a probability term.
- Eliminate s: compute f_s(b,l) = Σ_s P(s) P(b|s) P(l|s), leaving
  f_v(t) f_s(b,l) P(a|t,l) P(x|a) P(d|a,b).
  Summing over s results in a factor with two arguments; in general, the result of elimination may be a factor over several variables.
- Eliminate x: compute f_x(a) = Σ_x P(x|a), leaving
  f_v(t) f_s(b,l) P(a|t,l) f_x(a) P(d|a,b).
  Note: f_x(a) = 1 for all values of a. In fact, every variable that is not an ancestor of the query or evidence variables can be ignored!
- Eliminate t: compute f_t(a,l) = Σ_t f_v(t) P(a|t,l), leaving
  f_s(b,l) f_t(a,l) f_x(a) P(d|a,b).
- Eliminate l: compute f_l(a,b) = Σ_l f_s(b,l) f_t(a,l), leaving
  f_l(a,b) f_x(a) P(d|a,b).
- Eliminate a and b: compute f_a(b,d) = Σ_a f_l(a,b) f_x(a) P(d|a,b), then f_b(d) = Σ_b f_a(b,d), which is P(d).

Variable Elimination
- We now understand variable elimination as a sequence of rewriting operations.
- Computation depends on the order of elimination.

Dealing with Evidence
- How do we deal with evidence? Suppose we get evidence V = t, S = f, D = t, and want to compute
  P(L | V = t, S = f, D = t) = P(L, V = t, S = f, D = t) / P(V = t, S = f, D = t).
- We start by writing the factors P(v) P(s) P(t|v) P(l|s) P(b|s) P(a|t,l) P(x|a) P(d|a,b).
- Since we know that V = t, we don't need to eliminate V. Instead, we can replace the factors P(V) and P(T|V) with f_P(V) = P(V = t) and f_P(T|V)(T) = P(T | V = t). These select the appropriate parts of the original factors given the evidence.
- Note that f_P(V) is a constant, and thus does not appear in the elimination of other variables.
- To compute P(L, V = t, S = f, D = t), the initial factors after setting the evidence are
  f_P(v) f_P(t|v)(t) f_P(l|s)(l) f_P(b|s)(b) P(a|t,l) P(x|a) f_P(d|a,b)(a,b).
- Eliminating x, we get
  f_P(v) f_P(t|v)(t) f_P(l|s)(l) f_P(b|s)(b) P(a|t,l) f_x(a) f_P(d|a,b)(a,b).
- Eliminating t, we get
  f_P(v) f_P(l|s)(l) f_P(b|s)(b) f_t(a,l) f_x(a) f_P(d|a,b)(a,b).
- Eliminating b, we get
  f_P(v) f_P(l|s)(l) f_t(a,l) f_x(a) f_b(a).
- Eliminating a, we get
  f_P(v) f_P(l|s)(l) f_a(l).
- Finally, normalize over L to obtain P(L | V = t, S = f, D = t).

Variable Elimination with evidence
- Compute the probability of X_k given values for the evidence variables: the algorithm is the same as before, with no need to perform summation with respect to the evidence variables.

Complexity of inference
- Theorem: computing P(X = x) in a Bayesian network is NP-hard.
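For chains, the elimination pattern above reduces to matrix-vector products. The CPT numbers here are made up for illustration, and the brute-force enumeration in the usage confirms the factored computation.

```python
import numpy as np

pa  = np.array([0.6, 0.4])                  # P(A)
pba = np.array([[0.7, 0.3], [0.2, 0.8]])    # P(B | A), rows indexed by a
pcb = np.array([[0.9, 0.1], [0.5, 0.5]])    # P(C | B), rows indexed by b

pb = pa @ pba   # eliminate A: P(B) = sum_a P(a) P(B | a)
pc = pb @ pcb   # eliminate B: P(C) = sum_b P(b) P(C | b)
```

Each elimination costs O(k²), giving O(n k²) for the whole chain instead of the O(kⁿ) cost of enumerating the full joint.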
Approaches to inference
- Exact inference: variable elimination; the join tree algorithm.
- Approximate inference:
  - Simplify the structure of the network to make exact inference efficient: variational methods, loopy belief propagation.
  - Probabilistic methods: stochastic simulation (sampling methods), Markov chain Monte Carlo methods.

Bayes I: Introduction to Bayes Nets (Lecture 18, 11/04/08)

Announcements
- Programming Assignment 2 is now due. Any comments?
- ACM meeting tomorrow, 5pm, USC 100: how to get into the world of computer graphics.
- Election day: vote! (Unless you voted already; this isn't Chicago.)

Probabilistic Agent
- [diagram: sensors → agent → actuators, embedded in an environment]
- "I believe that the sun will still exist tomorrow with probability 0.999999, and that it will be a sunny day with probability 0.6."

Problem
- At a certain time t, the KB of an agent is some collection of beliefs; every belief has a degree of certainty.
- At time t, the agent's sensors make an observation that changes the strength of one of its beliefs.
- How should the agent update the strength of its other beliefs?

Purpose of Bayesian Networks
- Facilitate the description of a collection of beliefs by making causality relations explicit and exploiting conditional independence.
- Provide efficient methods for representing a joint probability distribution and for updating belief strengths when new evidence is observed.

Review: Conditional Probability
- P(A | B): the probability of A given that B is true; the frequency with which A will be observed, considering only samples in which B is true. (E.g., P(toothache | cavity) from last lecture.)

Bayes Rule
- P(a | b) = P(b | a) P(a) / P(b)

Example
- I'm at work; neighbor John calls to say my alarm is ringing, but neighbor Mary doesn't call. Sometimes it's set off by a minor earthquake. Is there a burglary?
- Variables: Burglary, Earthquake, Alarm, JohnCalls, MaryCalls.
- Network topology reflects causal knowledge:
  - A burglary can set the alarm off.
  - An earthquake can set the alarm off.
  - The alarm can cause Mary to call.
  - The alarm can cause John to call.

A Simple Network
- Causes → effects: the intuitive meaning of an arc from x to y is "x has influence on y". Nodes are random variables.
Assigning Probabilities to Roots; Conditional Probability Tables
- Each root node is assigned a prior probability; every other node gets a conditional probability table (CPT).
- Size of the CPT for a Boolean node with k Boolean parents: 2^k rows.

What the BN Means: calculation of joint probability
- P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = P(J|A) P(M|A) P(A|¬B,¬E) P(¬B) P(¬E) = 0.9 × 0.7 × 0.001 × 0.999 × 0.998

Independence of Random Variables
- Two variables X and Y are independent if P(X = x | Y = y) = P(X = x) for all values x, y. That is, learning the value of Y does not change knowledge of X.
- If X and Y are independent, then P(x, y) = P(x) P(y).
- In general, if X1, ..., Xn are independent, then P(X1, ..., Xn) = P(X1) ... P(Xn); this requires only O(n) parameters.

Conditional independence
- Unfortunately, most random variables of interest are not independent. A more suitable notion is conditional independence.
- Two variables X and Y are conditionally independent given Z if P(X = x | Y = y, Z = z) = P(X = x | Z = z) for all values x, y, z: learning the value of Y does not change the prediction of X once we know the value of Z.
- Notation: I(X; Y | Z)

What the BN Encodes
- The beliefs JohnCalls and MaryCalls are independent given Alarm or ¬Alarm.
- For example, John does not observe burglaries directly: JohnCalls is independent of Burglary and Earthquake given Alarm.
- For instance, the reasons why John and Mary may not call if there is an alarm are unrelated. Note that these reasons could be other beliefs in the network; the probabilities summarize these non-explicit beliefs.

Example of conditional independence
- An individual's genotype across generations: P(Lisa | Marge, Marge's parents) = P(Lisa | Marge); each node represents a genotype.
- In general, each random variable X is conditionally independent of its non-descendents given its parents: P(X | NonDesc(X), Parents(X)) = P(X | Parents(X)).

Example: Naive Bayes Model
- A common model in disease diagnosis: symptoms are conditionally independent given the disease.
- Thus, if x1, ..., xn denote whether the symptoms are exhibited by the patient (headache, high fever, etc.) and H denotes the hypothesis about the patient's health, then
  P(H | x1, ..., xn) = P(H) Π_i P(x_i | H) / P(x1, ..., xn).
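The joint-probability slide can be checked directly. Only the five factors in the product appear on the slide; the remaining CPT entries below (0.95, 0.94, 0.29, 0.05, 0.01, and P(E) = 0.002) are the standard textbook values for this network, assumed here for completeness.

```python
# CPTs for the burglary network (partly assumed; see note above)
p_b = 0.001
p_e = 0.002
p_a = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}   # P(A | B, E)
p_j = {True: 0.90, False: 0.05}                      # P(J | A)
p_m = {True: 0.70, False: 0.01}                      # P(M | A)

def joint(j, m, a, b, e):
    # product of one CPT entry per node, as the network semantics dictate
    prob = (p_b if b else 1 - p_b) * (p_e if e else 1 - p_e)
    prob *= p_a[(b, e)] if a else 1 - p_a[(b, e)]
    prob *= p_j[a] if j else 1 - p_j[a]
    prob *= p_m[a] if m else 1 - p_m[a]
    return prob

p = joint(True, True, True, False, False)   # P(J and M and A and not-B and not-E)
```

The five CPTs above replace the 2^5 = 32 entries a full joint table would need.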
- This naive Bayes model allows a compact representation, but it makes strong independence assumptions.

Independence Relations in BN
- Given a set E of evidence nodes, two beliefs connected by an undirected path are independent if one of the following three conditions holds:
  1. A node on the path is linear and in E.
  2. A node on the path is diverging and in E.
  3. A node on the path is converging, and neither this node nor any descendant is in E.

Independence
- The expression P(X_i | X_1, ..., X_{i−1}) = P(X_i | Parents(X_i)) means that each belief is independent of its predecessors in the BN given its parents.
- Said otherwise, the parents of a belief X_i are all the beliefs that directly influence X_i.
- Usually (but not always) the parents of X are its direct causes: Alarm is directly influenced by Burglary; JohnCalls is directly influenced by Alarm.

Types of Nodes on a Path
- [figure: linear, diverging, and converging nodes]

Independence Relations in BN: examples
- Gas and Radio are independent given evidence on SparkPlugs (condition 2: a diverging node in E).
- Gas and Radio are independent given no evidence, but they are dependent given evidence on Starts or Moves (condition 3: a converging node, or one of its descendants, in E).

DAGs and Topological Ordering
- A topological ordering of a graph is an ordering of its nodes such that each node comes before all nodes it points to.
- Lemma: a directed graph is a DAG iff it has a topological ordering. (How is this relevant to Bayesian networks?)
- Proof: every DAG has a sink. Use it as the last node in the sort and remove it from the DAG; the resulting graph is still a DAG; proceed by induction. For the other direction, assume the graph has a topological order and yet is not a DAG; the existence of a cycle contradicts the topological order.

Bayesian Networks
- A simple, graphical notation for conditional independence assertions, and hence a compact representation for the full joint distribution.
- Syntax:
  - a set of nodes, one per variable;
  - a directed acyclic graph (link ≈ "directly influences");
  - a conditional distribution for each node given its parents: P(X_i | Parents(X_i)).
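The lemma's sink-removal induction has a mirror-image source-removal form (Kahn's algorithm), sketched here on the burglary network; returning None signals the cycle case from the other direction of the proof.

```python
def topo_order(graph):
    # graph: dict mapping each node to the list of its children
    indeg = {u: 0 for u in graph}
    for u in graph:
        for v in graph[u]:
            indeg[v] += 1
    ready = [u for u in graph if indeg[u] == 0]   # sources can go first
    order = []
    while ready:
        u = ready.pop()
        order.append(u)
        for v in graph[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return order if len(order) == len(graph) else None   # None: a cycle exists
```

The relevance to Bayes nets: any topological order lets you write the joint as Π P(Xi | Parents(Xi)) with every node's parents appearing before it.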
Bayes II: Introduction to Bayes Nets (Lecture 17, 10/28/08)

FOL Resolution Theorem Proving
- General idea: the same as propositional resolution theorem proving: from A(x) ∨ B(x) and ¬A(x) ∨ C(x), infer B(x) ∨ C(x).
- New issues: universally quantified variables, existentially quantified variables, constants.

Announcements
- I received multiple queries about the homework and Section 9.5 in your text. It covers FOL resolution theorem proving. (Like, why didn't I cover it? Well, I thought it was implied, but I am happy to cover it if you wish.)

Conjunctive Normal Form (CNF)
- Same rules as before:
  - Eliminate ⇔ and ⇒.
  - Move ¬ to the bottom of the parse tree.
  - Make the expression a two-level conjunction ("and") of disjunctions ("or"s) of possibly negated propositions.
- New rules:
  - Eliminate existential variables (Skolemization).
  - Move universal quantifiers to the top of the tree, where they become implicit.

Conversion to CNF
- Example: ∀x [∀y Animal(y) ⇒ Loves(x,y)] ⇒ [∃y Loves(y,x)]
1. Eliminate ⇔: nothing to do in this example.
2. Eliminate ⇒:
   ∀x ¬[∀y ¬Animal(y) ∨ Loves(x,y)] ∨ [∃y Loves(y,x)]
3. Move ¬ inwards, using ¬∀x P(x) ≡ ∃x ¬P(x) and ¬∃x P(x) ≡ ∀x ¬P(x):
   ∀x [∃y ¬(¬Animal(y) ∨ Loves(x,y))] ∨ [∃y Loves(y,x)]
   ∀x [∃y Animal(y) ∧ ¬Loves(x,y)] ∨ [∃y Loves(y,x)]
4. Standardize variables (don't reuse names across scopes):
   ∀x [∃y Animal(y) ∧ ¬Loves(x,y)] ∨ [∃z Loves(z,x)]
5. Skolemize: replace existential variables by new constants. If the existential is inside a universal, it depends on the binding of the universal, so it becomes a function of that variable:
   ∀x [Animal(F(x)) ∧ ¬Loves(x,F(x))] ∨ Loves(G(x),x)
6. Drop universal quantifiers; they become implicit:
   [Animal(F(x)) ∧ ¬Loves(x,F(x))] ∨ Loves(G(x),x)
7. Distribute ∨ over ∧, as in propositional logic:
   [Animal(F(x)) ∨ Loves(G(x),x)] ∧ [¬Loves(x,F(x)) ∨ Loves(G(x),x)]
Disheartened? Don't be!
- The initial axioms in the homework assignment never change, so calculate their CNF form by hand.
- The turns of the game only produce two types of axioms:
  - Plum(c) ∨ Revolver(c) ∨ Billiards(c)
  - ¬Plum(c) ∨ ¬Revolver(c) ∨ ¬Billiards(c)
- So only automate the conversion of this form.

Resolution Theorem Proving
- Now the resolution process is: for any pair of disjunctive clauses that contain P(q) and ¬P(q'), where q and q' may be variables or constants:
  - Unify the expressions: variables unify with variables by simple substitution; variables unify with constants by adopting the constant.
  - Apply resolution.

Announcements II
- Programming assignment 2 is due in 1 week.
- Don't forget to vote.
- No ACM meeting this Wednesday.

Review
- Probabilities represent a degree of belief.
  - Bayesian interpretation: a subjective measure of internal confidence.
  - Frequentist interpretation: based on sampling from random draws.
- Probability values: 0 ≤ P ≤ 1; P(A ∨ B) = P(A) + P(B) − P(A ∧ B).

Review: Conditional Probability
- P(A | B): the probability of A given that B is true; the frequency with which A will be observed, considering only samples in which B is true. (E.g., P(toothache | cavity) from last lecture.)
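For the ground (variable-free) game clauses above, a resolution step needs no unification and is only a few lines; here clauses are sets of string literals with '~' marking negation, a representation chosen for this sketch.

```python
def resolve(c1, c2):
    # return the resolvent of the first complementary pair found, else None
    for lit in c1:
        comp = lit[1:] if lit.startswith('~') else '~' + lit
        if comp in c2:
            return (c1 - {lit}) | (c2 - {comp})
    return None
```

Resolving the game clause against unit clauses learned from play narrows the disjunction step by step, exactly the automation the slide suggests.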
Bayes Rule
- P(a | b) = P(b | a) P(a) / P(b)
- Bayesian networks are also known as belief networks, causal networks, and graphical models.

Markov Chains (Lecture 21, 11/13/08)

Announcements
- Programming Assignment 3 is due Dec 11th; that is not as much time as it seems, given Thanksgiving break.
- Any questions?

Review: inference on a chain
- P(D) = Σ_c P(D | c) Σ_b P(c | b) Σ_a P(b | a) P(a).
- Multiple elimination orders work, e.g. P(D) = Σ_a P(a) Σ_b P(b | a) Σ_c P(c | b) P(D | c); the order changes the cost, not the answer.

Review: Variable Elimination
- Suppose we're interested in P(X | E). Write the query in the form P(X, e) = Σ ... Σ Π_i P(x_i | parents(X_i)); iteratively move all irrelevant terms outside the innermost sum, perform the innermost sum to get a new term, and insert the new term into the product.

Review: Dealing with Evidence
- Suppose we get evidence V = t, S = f, D = t, and want to compute P(L | V = t, S = f, D = t).
- We start by writing the factors. Since we know that V = t, we don't need to eliminate V; instead we replace the factors P(V) and P(T|V) with f_P(V) = P(V = t) and f_P(T|V)(T) = P(T | V = t), which select the appropriate parts of the original factors given the evidence.

Markov chain Monte Carlo sampling
- Generates events by making random changes to the current state: the next state is generated by resampling the value of one of the nonevidence variables, conditioned on the current values of the variables in its Markov blanket.
Markov chains
- A Markov chain is a random process: an infinite sequence of random variables X0, X1, X2, ... that satisfies P(X_t | X_0, ..., X_{t−1}) = P(X_t | X_{t−1}).
- The probability of a particular state at time t depends only on the state at time t−1.
- If the transition probabilities are fixed for all t, the chain is called homogeneous and is characterized by a transition matrix Q.

Markov chains (cont.)
- In order for a Markov chain to be useful for sampling from P(x), we require that for any starting state x0, lim_{t→∞} P(X_t = x) = P(x).
- Equivalently, the stationary distribution of the Markov chain must be P(x): P(x') = Σ_x P(x) Q(x → x') for all x', where Q(x → x') is the transition probability from x to x'.

Using a Markov chain to sample
- If the Markov chain indeed converges to the desired distribution P(x), we can start in an arbitrary state, use the Markov chain to do a random walk for a while, and output the resulting state; it will be sampled from P(x).
- [figure: a three-state chain with 0.7/0.3 transitions; the stationary distribution of this chain is (0.33, 0.33, 0.33)]

Markov chains for sampling
- To ensure that the chain converges to a unique stationary distribution, the following conditions are sufficient:
  - Irreducibility: every state is eventually reachable from any starting state x; there exists a t such that P(X_t = y | X_0 = x) > 0.
  - Aperiodicity: the chain doesn't get caught in cycles.
- A process that is both irreducible and aperiodic is ergodic.

Detailed balance
- To ensure that the stationary distribution of the Markov chain is P(x), it is sufficient for P and Q to satisfy the detailed balance (reversibility) condition: P(x) Q(x → x') = P(x') Q(x' → x).
- Given that detailed balance holds:
  Σ_x P(x) Q(x → x') = Σ_x P(x') Q(x' → x) = P(x') Σ_x Q(x' → x) = P(x'),
  so P is stationary.

Gibbs sampling
- Idea: transition from one assignment to another by picking a variable X_i and sampling its value from the conditional distribution of X_i given all the other variables.
- In a Bayesian network, this conditional depends only on a subset of the variables: the Markov blanket.

Markov blanket
- Variables are independent of their non-descendents given their parents, and independent of everything else in the network given their Markov blanket (their parents, children, and children's other parents).
- So to sample a node, we only need to condition on its Markov blanket: P(X_i | X_1, ..., X_{i−1}, X_{i+1}, ..., X_n) = P(X_i | MB(X_i)).
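The three-state chain on the slide converges to the uniform stationary distribution (0.33, 0.33, 0.33); the exact transition values below are assumed from the partly garbled figure (a symmetric, hence doubly stochastic, matrix with 0.7 self-loops at the ends). Power iteration shows convergence from an arbitrary start, and symmetry gives detailed balance with the uniform distribution.

```python
import numpy as np

Q = np.array([[0.7, 0.3, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.3, 0.7]])   # row-stochastic transition matrix (assumed values)

pi = np.array([1.0, 0.0, 0.0])   # arbitrary starting distribution
for _ in range(500):             # random walk "for a while": pi_t = pi_0 Q^t
    pi = pi @ Q
```

This chain is irreducible (every state reachable) and aperiodic (self-loops), so it is ergodic and the limit does not depend on the start state.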
The Gibbs sampling algorithm

function GIBBS-ASK(X, e, bn, N) returns an estimate of P(X | e)
   local variables: N[x], counts of the number of times each value of X was observed, initially zero
                    z, the current state of the network, initialized with random values for the nonevidence variables
   for j = 1 to N do
      for each nonevidence variable Z_i do
         sample the value of Z_i in z from P(Z_i | MB(Z_i))
      N[x] ← N[x] + 1, where x is the value of X in z
   return NORMALIZE(N)

Convergence of Gibbs sampling
- Gibbs sampling satisfies detailed balance. Write x = (x_i, x̄_i), where x̄_i denotes the variables other than X_i; the Gibbs transition is q(x → x') = P(x'_i | x̄_i). Then
  P(x) q(x → x') = P(x_i | x̄_i) P(x̄_i) P(x'_i | x̄_i) = P(x'_i | x̄_i) P(x̄_i) P(x_i | x̄_i) = P(x') q(x' → x).

Gibbs sampling example
- Consider a 2-variable network: initialize randomly, then sample the variables alternately.

Practical issues
- How many iterations? When to stop?

Adversarial Search / Constraint Propagation (Lecture 10, 9/25/08)

Announcements
- Programming Assignment 1 is due a week from today. New timing limit: 30 seconds.

Constraint satisfaction problems
- What is a CSP?
  - A set of variables V1, V2, ..., Vn with domains (possible values) D1, D2, ..., Dn.
  - A set of constraints C1, C2, ..., Cm; each constraint Ci limits the values that a subset of variables can take, e.g., V1 ≠ V2.
- A state is defined by an assignment of values to some or all variables.
- Consistent assignment: an assignment that does not violate the constraints.
- Complete assignment: every variable is mentioned.
- Goal: a complete, consistent assignment.

Example: cryptarithmetic
- Task: assign digits to letters to make the math true.
- Additional constraint: no two letters have the same value.

CSP as a standard search problem
- Incremental formulation:
  - Initial state: the empty assignment {}.
  - Successor function: assign a value to an unassigned variable, provided that there is no conflict.
  - Goal test: the current assignment is complete.
- The same formulation works for all CSPs!
- The solution is found at depth n (for n variables), and the maximum search depth is also n.

Backtracking search
- Observation: the order of assignment doesn't matter, so we can consider the assignment of a single variable at a time. This results in d^n leaves.
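GIBBS-ASK on the smallest possible example: the two-variable network A → B mentioned on the slide, with no evidence, so the samples should reproduce the prior P(A = 1) = 0.3. The CPT numbers are illustrative assumptions.

```python
import random

P_A = 0.3                        # P(A = 1)
P_B = {1: 0.8, 0: 0.2}           # P(B = 1 | A)

def gibbs_marginal_a(n_sweeps=50000, seed=0):
    rng = random.Random(seed)
    a, b = 1, 1                  # arbitrary initial state
    count = 0
    for _ in range(n_sweeps):
        # resample A from P(A | MB(A)) = P(A | b), proportional to P(A) P(b | A)
        w1 = P_A * (P_B[1] if b else 1 - P_B[1])
        w0 = (1 - P_A) * (P_B[0] if b else 1 - P_B[0])
        a = 1 if rng.random() < w1 / (w1 + w0) else 0
        # resample B from P(B | MB(B)) = P(B | a)
        b = 1 if rng.random() < P_B[a] else 0
        count += a
    return count / n_sweeps
```

Because the sweeps are correlated, many iterations are needed for a stable estimate; this is exactly the "how many iterations, when to stop" issue raised above.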
t matter 2 can consider assignment of a single variable at a time Results in dquot leaves I Backtracking search DFS for CSPs with single variable assignments backtracks when a variable has no value that can be assigned I The basic uninformed algorithm for CSP Improving backtracking efficiency 1 Generalpurpose methods can give huge gains in speed Which variable should be assigned next In what order should its values be tried Can we detect inevitable failure early Most constrained variable hrquot r i quotn Hi late SEL EAR RI F rr n1 Choose the variable with the fewest legal values the most constrained variable aka minimum remaining values MRV or fail first heuristic What is he intuition behind this choice Most constraining variable ri Hi a HE l Select the variable that is involved in the largest number of constraints on other unassigned variables Also called the degree heuristic because that variable has he largest degree in the constraint graph I Often used as a tie breaker e g in conjunction with MRV Least constraining value quot r A tows I vs illquot lnr 3A l l Y 39 t m n ltt V t i gt7 I Least constraining value heuristic guides the choice of which value to assign next Given a variable choose the least constraining value the one hat rules out the fewest values in the remaining variables Forward checking WA NT 0 Nsw v sA T EIlEJEZIEZIEZIEZIEII I Can we detect inevitable failure early 7 And avoidii a er Forward checking keep track of remaining legal values for unassigned var39ables Terminate search direc ion when a variable has no legal values Forward checking new SA r EZIEZI EZIEZIEZIEZI IijZIEZIZIEZI l Assign WAred Effects on other variables connected by constmints with WA 7 Ilchan no Iangerbe red 7 SA can no Iangerbe red Forward checking Forward checking H HA HT 1 lk l lp li w m a w v gt4 mmmmmc Jm mmmjmcjEjm IijjmmCj ElmWEE Asslgn wreen llllsasslgnedDue Enes z Enecls an mnev vanalzles connected lav censualnls wlh WA cl an mnev memes connected lav cunamlnls wlh 
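The pieces above (backtracking DFS, the MRV heuristic, and forward checking) can be combined into one short solver. This is a minimal illustrative sketch on the Australia map-colouring CSP, not the course's reference implementation; the function and variable names are my own:

```python
# Hypothetical sketch: backtracking search with MRV and forward checking
# on the Australia map-colouring CSP used in the slides.

NEIGHBORS = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
    "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
}
COLORS = ["red", "green", "blue"]

def backtrack(assignment, domains):
    """DFS over single-variable assignments; backtrack on an empty domain."""
    if len(assignment) == len(NEIGHBORS):        # goal test: complete assignment
        return assignment
    # MRV ("fail first"): pick the unassigned variable with the fewest legal values
    var = min((v for v in NEIGHBORS if v not in assignment),
              key=lambda v: len(domains[v]))
    for value in domains[var]:
        # Forward checking: prune `value` from unassigned neighbours' domains
        new_domains = dict(domains)
        new_domains[var] = [value]
        consistent = True
        for n in NEIGHBORS[var]:
            if n not in assignment:
                new_domains[n] = [x for x in domains[n] if x != value]
                if not new_domains[n]:           # a neighbour ran out of values
                    consistent = False
                    break
        if consistent:
            result = backtrack({**assignment, var: value}, new_domains)
            if result is not None:
                return result
    return None                                   # no value works: backtrack

solution = backtrack({}, {v: list(COLORS) for v in NEIGHBORS})
```

Because forward checking removes an assigned value from every unassigned neighbour's domain, any value still in a domain is automatically consistent with all earlier assignments, so no separate conflict check is needed.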
  - Assign V = blue. Effects: NSW and SA can no longer be blue.
  - SA now has no legal values: forward checking has detected that this partial assignment is inconsistent with the constraints, and backtracking can occur. (Note that the MRV heuristic would automatically select NT and SA next.)

Example: the 4-Queens problem
- (Worked step by step across a sequence of slides: place a queen in each column in turn, pruning attacked squares, and backtrack when a column has no remaining square.)

Constraint propagation
- Solving CSPs with a combination of heuristics plus forward checking is more efficient than either approach alone.
- Forward checking propagates information from assigned to unassigned variables, but does not provide early detection of all failures: e.g. NT and SA cannot both be blue.
- Constraint propagation repeatedly enforces constraints locally.

Arc consistency
- The arc X -> Y is consistent iff for every value x of X there is some allowed value y of Y.
- SA -> NSW is consistent, since for SA = blue there is the consistent assignment NSW = red.
- NSW -> SA is not consistent: for NSW = blue there is no consistent assignment to SA. The arc can be made consistent by removing blue from NSW.
- If X loses a value, the neighbours of X need to be rechecked.
- Arc consistency detects failure earlier than forward checking.
- It can be run as a preprocessor, or after each assignment, repeated until no inconsistency remains.

Arc consistency algorithm (AC-3)

    function AC-3(csp) returns the CSP, possibly with reduced domains
      inputs: csp, a binary CSP with variables {X1, X2, ..., Xn}
      local variables: queue, a queue of arcs, initially all the arcs in csp
      while queue is not empty do
        (Xi, Xj) <- REMOVE-FIRST(queue)
        if REMOVE-INCONSISTENT-VALUES(Xi, Xj) then
          for each Xk in NEIGHBORS[Xi] do
            add (Xk, Xi) to queue

    function REMOVE-INCONSISTENT-VALUES(Xi, Xj) returns true iff we remove a value
      removed <- false
      for each x in DOMAIN[Xi] do
        if no value y in DOMAIN[Xj] allows (x, y) to satisfy the constraint between Xi and Xj
          then delete x from DOMAIN[Xi]; removed <- true
      return removed

- Time complexity: O(n^2 d^3).
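The AC-3 pseudocode above translates almost line for line into Python. A small sketch, run here on the NSW/SA example from the slides (the data-structure choices are illustrative assumptions):

```python
from collections import deque

def ac3(domains, neighbors, constraint):
    """AC-3: make every arc consistent; overall O(n^2 d^3) time."""
    queue = deque((xi, xj) for xi in neighbors for xj in neighbors[xi])
    while queue:
        xi, xj = queue.popleft()
        if remove_inconsistent_values(domains, xi, xj, constraint):
            for xk in neighbors[xi]:     # Xi lost a value: recheck arcs into Xi
                queue.append((xk, xi))
    return domains

def remove_inconsistent_values(domains, xi, xj, constraint):
    """Delete values of Xi that have no supporting value in Xj."""
    removed = False
    for x in list(domains[xi]):
        if not any(constraint(x, y) for y in domains[xj]):
            domains[xi].remove(x)
            removed = True
    return removed

# The NSW/SA example from the slides: SA is already blue, colours must differ.
domains = {"SA": ["blue"], "NSW": ["red", "blue"]}
neighbors = {"SA": ["NSW"], "NSW": ["SA"]}
ac3(domains, neighbors, lambda x, y: x != y)   # removes blue from NSW
```

After the run, NSW's domain is reduced to red, exactly as argued on the slide.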
K-consistency
- Arc consistency does not detect all inconsistencies: e.g. the partial assignment {WA = red, NSW = red} is arc-consistent, but cannot be extended to a solution.
- Stronger forms of propagation can be defined using the notion of k-consistency.
- A CSP is k-consistent if, for any consistent assignment to k-1 variables and for any kth variable, a consistent value can always be assigned to that variable.
- Not practical for large k.

Information Retrieval
Lecture 25, 12/04/08

Information Retrieval review
- Lexicon: a list of words.
- Preprocessing:
  - Stemming: get rid of syntactic word endings.
  - Stop words: too common; ignore them.
- Bag of words: documents represented as vectors of word counts.
- Matching: TF-IDF.

Announcements
- Programming Assignment 3 due Dec 11th.
- The tables are now corrected on the web page ("scarlet" has one t): all values are now exact, and all rows sum to one.
- Programming Assignment 2: grades handed back last class. If our results don't match yours, let us know; if a simple fix is possible, talk to us.

TF-IDF
- OK, but how do we compare documents?
- The most common method: term frequency - inverse document frequency (TF-IDF).

      Match(A, B) = sum_w A_w * B_w * (log(1/D_w))^2

  where A_w and B_w are the term frequencies of word w in documents A and B, and D_w is the document frequency of w.

TF-IDF details
- Document vectors may be normalized, so long documents aren't over-represented.
- Document frequency is a percentage, so log(1/D_w) has:
  - Minimum 0: the word appears in every document.
  - Maximum log(corpus size): D_w = 1/corpus size.

How might I improve results?
- Synonyms: "chair" and "seat" mean the same thing (almost). If we can map them to one term, seat/chair, we can improve performance.
- Requires a thesaurus (a list of synonyms).

How might I improve results? (II)
- Word sense: "windows" may refer to glass-covered openings, to intervals of time, or to an operating system.
- This latter definition is the most common on the web.

Another view
- This is equivalent to thinking of a document as a point in an N-dimensional space, where N is the number of words in the lexicon: every word is a dimension.
- If some words cluster, then there will be points (documents) that are roughly aligned.

Disambiguating word sense
- Your corpus can be viewed as a (document-by-word) matrix.
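The document-by-word view and the match score above fit in a few lines of Python. This is a sketch under stated assumptions: the helper names and tiny corpus are mine, and the exact form of the match score is reconstructed from the garbled slide formula as sum_w A_w * B_w * (log(1/D_w))^2:

```python
import math

def document_frequencies(docs):
    """D_w: the fraction of documents that contain word w."""
    n = len(docs)
    counts = {}
    for doc in docs:
        for w in set(doc):
            counts[w] = counts.get(w, 0) + 1
    return {w: c / n for w, c in counts.items()}

def match(a, b, df):
    """Match(A, B) = sum_w A_w * B_w * (log(1/D_w))^2 (reconstructed from the slide)."""
    score = 0.0
    for w in set(a) & set(b):
        idf = math.log(1.0 / df[w])          # 0 when w appears in every document
        score += a.count(w) * b.count(w) * idf ** 2
    return score

# Tiny illustrative corpus of tokenized documents
corpus = [["chair", "seat", "wood"], ["chair", "table", "wood"], ["lamp", "light"]]
df = document_frequencies(corpus)
score = match(corpus[0], corpus[1], df)      # shared terms "chair" and "wood" contribute
```

Note how a word that occurs in every document gets idf = log(1) = 0 and so contributes nothing, matching the "minimum 0" bullet above.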
Latent Semantic Analysis
- I apologize to those who are rusty on their linear algebra, but: a singular value decomposition (SVD) of the document-by-word matrix is called Latent Semantic Analysis (LSA).
- The resulting eigenvectors tell you which words appear together:
  - "Window" appears often with "Blue Screen" and "Death".
  - "Window" also appears with "UV coating" and "insulation".

Visualizing LSA
- Think of the corpus as a set of high-dimensional points: every word stem is a dimension.
- Since most words don't occur in most documents, points cluster near the origin.
- If the word-count vectors are normalized, all points lie on the surface of a sphere.
- Documents that use the same words will cluster on the surface of the sphere.
- The center of a cluster is a "concept": a set of related words.

LSA
- Synonyms tend to appear together; related meanings of words (a la "Windows") tend to appear together.
- The eigenvectors define a subspace of semantics. Projecting a word vector into this space removes irrelevant terms (noise) and suggests synonyms and related terms.
- In the extreme, two documents about the same concept with no overlapping words might be grouped together.

Limitations of LSA
- LSA works well on small lexicons & corpora. (PCA, a related technique, works well for image matching in computer vision.)
- We don't know if it would work on large lexicons and corpora:
  - Not feasible to perform LSA on a million x million matrix.
  - Can subsample terms & documents.
  - Local LSA: retrieve a smaller set of similar documents using TF-IDF, then perform LSA.

Evaluation: confusion matrices

|              | Relevant                    | Irrelevant                  |
|--------------|-----------------------------|-----------------------------|
| Returned     | 4 (True Pos)                | 1 (False Pos, Type 1 error) |
| Not returned | 6 (False Neg, Type 2 error) | 9 (True Neg)                |

ROC Curve
[ROC curve figure]

ROC properties
- Every ROC curve starts at (0,0) and ends at (1,1):
  - When no documents are returned, there are no true positives and no false positives.
  - When every document is returned, there are 100% true positives and 100% false positives.
- ROC curves are monotonically increasing in both TPR and FPR:
  - Retrieving another document will never cause the true positive rate to fall (TPR = relevant docs retrieved / total relevant docs).
  - Similarly, it will never cause the false positive rate to fall (FPR = irrelevant docs retrieved / total irrelevant docs).
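The TPR/FPR definitions above, applied to the confusion matrix on the slide, give one point on the ROC curve. A small sketch (the function name is mine; the numbers are the slide's):

```python
def roc_point(tp, fp, fn, tn):
    """One (FPR, TPR) point on the ROC curve, plus precision, from a confusion matrix."""
    tpr = tp / (tp + fn)        # true positive rate: relevant docs retrieved / total relevant
    fpr = fp / (fp + tn)        # false positive rate: irrelevant retrieved / total irrelevant
    precision = tp / (tp + fp)  # fraction of returned documents that are relevant
    return fpr, tpr, precision

# The confusion matrix from the slide: TP=4, FP=1, FN=6, TN=9
fpr, tpr, precision = roc_point(4, 1, 6, 9)   # -> 0.1, 0.4, 0.8
```

Sweeping a retrieval threshold and plotting each resulting (FPR, TPR) pair traces out the full curve.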
- A random classifier will produce an approximately diagonal line.
- The larger the area under the curve (AUC), the better.

ROC properties II
[ROC curve comparison figure - source: Wikipedia]

Modern internet evaluation
- For internet search engines there is no way to measure recall: what percent of all the possible relevant documents on the web did you return?
- Reciprocal rank: the reciprocal of the position of the first relevant response.

CS540 topics for Spring 2008
- Agents
- Collaborative filtering
- Constraint satisfaction & SAT
- Data mining
- Evaluation / experimentation
- Genetic algorithms
- Information retrieval
- Planning
- Modeling
- Search
- Other: you pick!
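The reciprocal-rank measure described above (under "Modern internet evaluation") is one line of logic. A minimal sketch, with illustrative document IDs of my own:

```python
def reciprocal_rank(results, relevant):
    """Reciprocal of the 1-based position of the first relevant result; 0 if none."""
    for position, doc in enumerate(results, start=1):
        if doc in relevant:
            return 1.0 / position
    return 0.0

# The first relevant document appears at position 3, so RR = 1/3
rr = reciprocal_rank(["d7", "d2", "d5", "d1"], {"d5", "d1"})
```

Averaging this over many queries gives the mean reciprocal rank (MRR), a standard recall-free quality measure for ranked search results.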
