
# BAYESIAN NETWORK GRAPHS CSCE 582


This 114-page set of class notes was uploaded by Trace Mante MD on Monday, October 26, 2015. The notes belong to CSCE 582 at the University of South Carolina - Columbia, taught by M. Valtorta in Fall. Since its upload, it has received 25 views.


Date Created: 10/26/15

## Bayesian Networks and Decision Graphs, Chapter 9

#### A small quiz

Which of the following two lotteries would you prefer?

- Lottery A: $1 million for certain.
- Lottery B: $5 million with probability 0.1, $1 million with probability 0.89, nothing with probability 0.01.

What about these two?

- Lottery C: $1 million with probability 0.11, nothing with probability 0.89.
- Lottery D: $5 million with probability 0.1, nothing with probability 0.9.

Is this the rational choice?

#### Reverse the directions

Consider the causal model over Flu, Fever, and Sleepy. The probability distributions P(Sleepy), P(Fever | Sleepy), and P(Flu | Fever) can be calculated from the model above and used in the reversed model below, which represents the same joint distribution. So is there any difference?

#### Decisions

Taking the temperature and setting the temperature can be seen as a test decision and an action decision, respectively. The two kinds of decision have different impacts in the model:

- Tests: the impact flows in both directions.
- Actions: the impact flows with the direction of the links only.

#### Poker again

Consider the poker example again. Why request this information? The decision is to fold or call:

- Both players have placed $1.
- She has placed $1 more.
- Fold: she takes the pot.
- Call: place $1 more, and the best hand takes the pot.

#### Call or fold

This decision problem can be represented graphically by extending the Bayesian network with a decision node and a utility node. The expected utility of calling is

$$EU(\text{call} \mid e) = \sum_{BH} U(BH, \text{call})\, P(BH \mid e) = 2 \cdot P(BH = \text{me} \mid e) - 2 \cdot P(BH = \text{opp} \mid e) + 0 \cdot P(BH = \text{draw} \mid e).$$
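The call/fold computation can be sketched as below. The posterior P(BH | e) is a made-up placeholder (in practice it would come from propagating the evidence e through the network), and the fold utility of -1 (the dollar already placed) is likewise an assumption; only the call utilities follow from the pot arithmetic above.

```python
# Sketch of EU(D | e) = sum over BH of U(BH, D) * P(BH | e).
# The posterior is an assumed placeholder, not derived from a real model.

utility = {
    "call": {"me": 2.0, "opponent": -2.0, "draw": 0.0},    # pot arithmetic above
    "fold": {"me": -1.0, "opponent": -1.0, "draw": -1.0},  # assumed: lose the $1 placed
}
posterior = {"me": 0.5, "opponent": 0.4, "draw": 0.1}      # assumed P(BH | e)

def expected_utility(decision):
    """EU(D | e): utility of each best-hand outcome weighted by its posterior."""
    return sum(utility[decision][bh] * p for bh, p in posterior.items())

eu = {d: expected_utility(d) for d in utility}
best = max(eu, key=eu.get)   # here "call": EU 0.2 beats fold's -1.0
```

With these numbers calling is optimal; the decision flips as P(BH = opp | e) grows past the break-even point.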
#### Mildew

Two months before the harvest, the farmer observes the state Q of his wheat field, and he can check whether the field is attacked by mildew (M). If there is a mildew attack, he can decide on a treatment with fungicides (the decision A). The expected utility is

$$EU(A \mid e) = \sum_{\text{Harvest}} U(\text{Harvest})\, P(\text{Harvest} \mid A, e).$$

#### One action in general

Each utility function $U_i$ has a domain $X_i \subseteq \mathcal{X}$, and

$$EU(D \mid e) = \sum_{X_1} U_1(X_1) P(X_1 \mid D, e) + \cdots + \sum_{X_n} U_n(X_n) P(X_n \mid D, e).$$

Choose an action with largest expected utility:

$$\mathrm{opt}(D \mid e) = \arg\max_D EU(D \mid e).$$

#### Utilities without money

Two courses: Graph Algorithms (GA) and DSS. Marks are 0 to 5, where a mark of at least 2 is a pass. The effort can be: keep pace (kp), slow down (sd), or follow superficially (fs). The conditional distributions of the marks given effort are:

| GA mark | kp | sd | fs |
|---|---|---|---|
| 0 | 0 | 0 | 0.1 |
| 1 | 0.1 | 0.2 | 0.1 |
| 2 | 0.1 | 0.1 | 0.4 |
| 3 | 0.2 | 0.4 | 0.2 |
| 4 | 0.4 | 0.2 | 0.2 |
| 5 | 0.2 | 0.1 | 0 |

| DSS mark | kp | sd | fs |
|---|---|---|---|
| 0 | 0 | 0 | 0.1 |
| 1 | 0 | 0.1 | 0.2 |
| 2 | 0.1 | 0.2 | 0.2 |
| 3 | 0.2 | 0.2 | 0.3 |
| 4 | 0.4 | 0.4 | 0.2 |
| 5 | 0.3 | 0.1 | 0 |

What is the objective: max total score, max chance of passing, or something else?

#### Marks as utilities

Using the marks directly as utilities, the expected utility of the effort pair (kp for GA, fs for DSS) is

$$EU(kp, fs) = \sum_{m \in GA} m \cdot P(m \mid kp) + \sum_{m \in DSS} m \cdot P(m \mid fs) = 3.5 + 2.3 = 5.8.$$

Similarly, $EU(sd, sd) = 6.1$ and $EU(fs, kp) = 6.2$. However, do the marks really reflect your utilities?
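The expected-mark computation can be checked mechanically against the two tables:

```python
# Expected marks for the GA/DSS example: EU(effort pair) is the sum of the
# expected marks of the two courses, using marks directly as utilities.

P_GA = {  # P(GA mark | effort), marks 0..5
    "kp": [0.0, 0.1, 0.1, 0.2, 0.4, 0.2],
    "sd": [0.0, 0.2, 0.1, 0.4, 0.2, 0.1],
    "fs": [0.1, 0.1, 0.4, 0.2, 0.2, 0.0],
}
P_DSS = {
    "kp": [0.0, 0.0, 0.1, 0.2, 0.4, 0.3],
    "sd": [0.0, 0.1, 0.2, 0.2, 0.4, 0.1],
    "fs": [0.1, 0.2, 0.2, 0.3, 0.2, 0.0],
}

def expected_mark(table, effort):
    """Sum over marks m of m * P(m | effort)."""
    return sum(m * p for m, p in enumerate(table[effort]))

def eu(effort_ga, effort_dss):
    """Total expected mark for one effort choice per course."""
    return expected_mark(P_GA, effort_ga) + expected_mark(P_DSS, effort_dss)

# eu("kp", "fs") -> 5.8, eu("sd", "sd") -> 6.1, eu("fs", "kp") -> 6.2
```

Enumerating all nine effort pairs this way shows that maximizing the expected mark does not necessarily reward the highest-effort policy, which motivates the question about whether marks are proper utilities.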
#### Subjective lotteries

I consider a mark of 2 the worst outcome (utility 0) and a mark of 5 the best (utility 1). Now imagine the following lottery: mark 5 with probability $p$ and mark 2 with probability $1 - p$, against receiving mark 4 for certain. For which $p$ am I indifferent? That $p$ is my utility for mark 4. Answering such questions for each mark yields a subjective utility table, and the expected utilities EU(Effort) of the pairs (kp, fs), (sd, sd), and (fs, kp) can then be recomputed with these utilities in place of the raw marks, possibly changing the ranking of the strategies.
#### Instrumental rationality

1. Reflexivity: for any lottery A, A ≽ A.
2. Completeness: for any pair of lotteries A, B, either A ≽ B or B ≽ A.
3. Transitivity: if A ≽ B and B ≽ C, then A ≽ C.
4. Preference increasing with probability: if A ≽ B, then αA + (1 − α)B ≽ βA + (1 − β)B if and only if α ≥ β.
5. Continuity: if A ≽ B ≽ C, then there exists α ∈ [0, 1] such that B ∼ αA + (1 − α)C.
6. Independence: if C = αA + (1 − α)B and A ∼ D, then C ∼ αD + (1 − α)B.

**Theorem.** For an individual who acts according to a preference ordering satisfying rules 1 to 6 above, there exists a utility function over the outcomes such that the individual acts as if maximizing expected utility.

#### Are you rational?

Recall:

- Lottery A: $1 million for certain.
- Lottery B: $5 million with probability 0.1, $1 million with probability 0.89, nothing with probability 0.01.
- Lottery C: $1 million with probability 0.11, nothing with probability 0.89.
- Lottery D: $5 million with probability 0.1, nothing with probability 0.9.

Let $U(5\,\text{mill}) = 1$, $U(0) = 0$, and $U(1\,\text{mill}) = u$. If you prefer A over B, we get

$$u > 0.1 \cdot 1 + 0.89 \cdot u + 0.01 \cdot 0 \iff 0.11\,u > 0.1 \iff u > \tfrac{10}{11}.$$

Hence

$$EU(C) = 0.11\,u > 0.11 \cdot \tfrac{10}{11} = 0.1 = EU(D),$$

and C should therefore be preferred over D. (Many people prefer A over B but D over C, which no expected-utility maximizer can do.)

#### Decision trees

A decision problem can be unfolded into a decision tree of decision nodes and chance nodes; in the example on the slides, a test comes out positive or negative, after which one decides whether to pour or throw. Branches from chance nodes shall be labeled with the probability of the branch given the path down to the node. These probabilities can be found from the underlying model.

#### Solving decision trees

The decision tree can be solved by going from the leaves towards the root:

- take the weighted sum (expectation) through chance nodes;
- take the max through decision nodes.
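The algebra of the "are you rational?" argument can be verified numerically: every $u > 10/11$ that makes A preferred over B also makes C preferred over D.

```python
# Numerical check of the Allais-style argument: if EU(A) > EU(B) forces
# u = U(1 mill) > 10/11, then EU(C) = 0.11*u must exceed EU(D) = 0.1.

def eu_lottery(lottery, u_1mill):
    """EU of a lottery given U(5M) = 1, U(1M) = u, U(0) = 0.
    A lottery is a list of (probability, prize) pairs."""
    utility = {"5M": 1.0, "1M": u_1mill, "0": 0.0}
    return sum(p * utility[prize] for p, prize in lottery)

A = [(1.0, "1M")]
B = [(0.10, "5M"), (0.89, "1M"), (0.01, "0")]
C = [(0.11, "1M"), (0.89, "0")]
D = [(0.10, "5M"), (0.90, "0")]

# Any u above 10/11 ~ 0.909 that ranks A over B also ranks C over D.
for u in [0.92, 0.95, 0.99]:
    assert eu_lottery(A, u) > eu_lottery(B, u)
    assert eu_lottery(C, u) > eu_lottery(D, u)
```

So the common pattern "A over B, but D over C" is inconsistent with any single utility value for the million.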
#### Decision trees: characteristics

Advantages:

- All scenarios are represented explicitly.
- Very few restrictions on the decision problems that can be represented.

Disadvantages:

- Two separate models are used: one representing the structure and one representing the uncertainties.
- The size of the decision tree grows exponentially in the number of variables.

#### An alternative representation

But how do we represent the sequence of decisions and observations?

#### Representing the decision sequence

A possible representation: all nodes observed before a decision are parents of that decision. Assuming that the decision maker doesn't forget, some of these links are redundant.

A better representation is an influence diagram. Advantages:

- You can read the sequence of decisions.
- You can read what is known at each point of decision.

#### Influence diagrams: nodes and links

- Chance variables (ovals), with causal links.
- Decision variables (rectangles), with information links.
- Utility functions (diamonds), with utility links; the total utility is $U = U_1 + \cdots + U_m$.

We assume no-forgetting, and there must be a directed path comprising all decisions, so that the scenario is well-defined.

#### Influence diagrams and Hugin

In Hugin, chance variables are depicted as ovals, decision variables as rectangles, and utility nodes as diamonds. Note that no tables are specified for decision nodes, while a utility function is specified for each utility node.

#### Influence diagrams: characteristics

Advantages:

- The representation grows only linearly in the number of variables.
- Only one model is required, representing both the structure and the uncertainty.

Disadvantage:

- The sequence of observations and decisions must be the same in all scenarios; that is, the decision problem must be symmetric.
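The leaves-to-root procedure (expectation at chance nodes, maximization at decision nodes) can be sketched generically. The small tree below is an illustrative stand-in, not the milk-test tree from the slides.

```python
# Generic leaves-to-root decision-tree solver: weighted sum through chance
# nodes, max through decision nodes. The example tree is made up.

def solve(node):
    """Return (expected utility, optimal choice or None) for a tree node.
    A node is a number (leaf utility), ("decision", {label: child}), or
    ("chance", [(prob, child), ...])."""
    if isinstance(node, (int, float)):       # leaf: its utility
        return float(node), None
    kind, branches = node
    if kind == "chance":                     # expectation over branches
        return sum(p * solve(child)[0] for p, child in branches), None
    values = {label: solve(child)[0] for label, child in branches.items()}
    best = max(values, key=values.get)       # decision: take the max
    return values[best], best

tree = ("decision", {
    "test": ("chance", [(0.3, ("decision", {"act": 10, "skip": 0})),
                        (0.7, ("decision", {"act": -5, "skip": 0}))]),
    "no_test": ("chance", [(0.3, 10), (0.7, -5)]),
})
value, choice = solve(tree)  # testing first lets you avoid the -5 branch
```

Here testing is worth 0.3·10 + 0.7·0 = 3.0, while acting blindly yields 0.3·10 + 0.7·(−5) = −0.5, so the solver picks "test"; the same exponential blow-up discussed above is visible in how the tree literally enumerates every scenario.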
**Definition.** A decision problem is said to be symmetric if:

- in all decision tree representations, the number of scenarios is the same as the cardinality of the Cartesian product of the state spaces of all chance and decision variables; and
- in one decision tree representation, the sequence of observations and decisions is the same in all scenarios.

#### Symmetric decision trees

In a symmetric decision tree, the sequence of observations and decisions is the same in all scenarios.

#### Optimal strategy

Solution for influence diagrams:

1. Determine a policy for the last decision, $\sigma_{D_2}(D_1, A) \to D_2$. For this we need $P(C \mid D_1, A, D_2)$.
2. Use $\sigma_{D_2}$ to determine a policy for $D_1$, $\sigma_{D_1} \to D_1$. For this we need $P(A \mid D_1)$.

All probabilities can be obtained from the model without folding out the decision tree.

For the poker example, the policy for the decision D is $\sigma_D(MH0, MFC, FC, MH1, MSC, SC, MH2) \to D$, and we request $P(BH \mid MH0, MFC, FC, MH1, MSC, SC, MH2, D)$. From d-separation we can find the relevant past: only part of the history actually matters for the decision.

#### Fishing in the North Sea

Based on measurements T, a quota for the fishing volume FV for next year is decided. The amount of fish V and the quota determine the utility. Over a five-year period, the optimal policy for $FV_5$ unfortunately depends on the entire past:

$$\sigma_{FV_5}(T_1, FV_1, T_2, FV_2, T_3, FV_3, T_4, FV_4, T_5).$$

This is intractable.

#### Information blocking

The probability $P(V_2 \mid T_1, FV_1)$ is taken from the initial model.

#### The dangers of non-observed nodes

Temporal links between non-observed nodes are dangerous.
#### When are IDs suitable for repeated use?

- The sequence of decisions $D_1, D_2, \ldots, D_n$ is fixed.
- The chance variables in $I_i$ are always observed after $D_i$ and before $D_{i+1}$.
- The decision maker remembers the past.
- The decision problem is symmetric: the decision-observation sequence is independent of the actual observations and decisions.

#### A cause of asymmetry: test decisions

Take your temperature before deciding on aspirin: you only observe the test result Fever′ if you decide to take your temperature. But these problems can still be modeled in influence diagrams by giving the test-result variable TF an extra state:

- If the temperature is taken (T = y), TF copies Fever: TF = y with probability 1 when Fever = y, and TF = n with probability 1 when Fever = n.
- If the temperature is not taken (T = n), TF = no-test with probability 1.

#### Transformation of test decisions in general

A test decision T for a variable A with states $a_1, \ldots, a_n$ is transformed by introducing a result variable $T_A$ with states $a_1, \ldots, a_n$, no-test, where $P(T_A \mid A, T)$ makes $T_A$ copy A if the test is performed and puts $T_A$ in the state no-test otherwise.

## Bayesian Networks and Decision Graphs, Chapter 6

#### Learning probabilities from a database

We have:

- a Bayesian network structure, and
- a database of cases over some of the variables.

We want: a Bayesian network model with probabilities representing the database.
#### Complete data: maximum likelihood estimation

We have tossed a thumb tack 100 times; it has landed pin up 80 times, and we now look for the model that best fits the observations. Consider a family of models $M_\theta$ with $P(\text{pin up}) = \theta$, e.g. $M_{0.1}, M_{0.2}, M_{0.3}, \ldots$. We can measure how well a model fits the data $D$ using

$$P(D \mid M_\theta) = \prod_{d \in D} P(d \mid M_\theta) = \theta^{80} (1 - \theta)^{20}.$$

This is also called the likelihood of $M_\theta$ given $D$. We select the parameter $\hat\theta$ that maximizes it:

$$\hat\theta = \arg\max_\theta P(D \mid M_\theta) = \arg\max_\theta\, \theta^{80}(1-\theta)^{20}.$$

By setting

$$\frac{d}{d\theta}\left[\theta^{80}(1-\theta)^{20}\right] = 0,$$

we get the maximum likelihood estimate $\hat\theta = 80/100 = 0.8$.

In general, a maximum likelihood estimate is a fraction of counts over the total number of counts. If we want $P(A = a \mid B = b, C = c)$, we simply calculate

$$\hat P(A = a \mid B = b, C = c) = \frac{N(A = a, B = b, C = c)}{N(B = b, C = c)}.$$

So we have a simple counting problem.
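The analytic estimate can be confirmed with a simple grid search over the (log-)likelihood:

```python
# Grid check of the thumb-tack maximum-likelihood estimate: the likelihood
# theta^80 * (1 - theta)^20, equivalently its log, peaks at theta = 80/100.

from math import log

def log_likelihood(theta, n_up=80, n_down=20):
    """Log of theta^n_up * (1 - theta)^n_down."""
    return n_up * log(theta) + n_down * log(1 - theta)

candidates = [i / 1000 for i in range(1, 1000)]   # theta in (0, 1)
theta_hat = max(candidates, key=log_likelihood)   # -> 0.8
```

Working with the log-likelihood avoids underflow: $\theta^{80}(1-\theta)^{20}$ itself is on the order of $10^{-22}$ at the maximum.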
Unfortunately, maximum likelihood estimation has a drawback. Consider a table of counts of five-letter words, indexed by the first two letters and the last three letters:

| First two | aaa | aab | aba | abb | baa | bba | bab | bbb |
|---|---|---|---|---|---|---|---|---|
| aa | 2 | 2 | 2 | 2 | 5 | 7 | 5 | 7 |
| ab | 3 | 4 | 4 | 4 | 1 | 2 | 0 | 2 |
| ba | 0 | 1 | 0 | 0 | 3 | 5 | 3 | 5 |
| bb | 5 | 6 | 6 | 6 | 2 | 2 | 2 | 2 |

By using this table to estimate, e.g., $P(T_1 = b, T_2 = a, T_3 = a, T_4 = a, T_5 = a)$, we get

$$\hat P(T_1 = b, T_2 = a, T_3 = T_4 = T_5 = a) = \frac{N(b, a, a, a, a)}{N} = 0.$$

This is not reliable: an event that happens not to occur in the database is declared impossible.

An even prior distribution corresponds to adding a virtual count of 1 to every cell of the table. From the adjusted counts $N'(T_1, T_2) = N(T_1, T_2) + 1$ we then estimate, for instance,

$$\hat P(T_2 \mid T_1) = \frac{N'(T_1, T_2)}{\sum_{T_2} N'(T_1, T_2)}.$$

#### Incomplete data

How do we handle cases with missing values?

- Faulty sensor readings.
- Values that have been intentionally removed.
- Some variables may be unobservable.

Why don't we just throw away the cases with missing values? Consider a database of 20 cases: 10 cases with $A = a_1$ (all with $B = b_1$ observed) and 10 cases with $A = a_2$, of which 5 have the $B$-value missing. Using the entire database,

$$\hat P(a_1) = \frac{N(a_1)}{N(a_1) + N(a_2)} = \frac{10}{20} = 0.5,$$

but having removed the cases with missing values,

$$\hat P(a_1) = \frac{10}{10 + 5} = \frac{2}{3}.$$

Throwing cases away can therefore bias the estimates.

#### How is the data missing?

We need to take into account how the data is missing:

- **Missing completely at random (MCAR).** The probability that a value is missing is independent of both the observed and unobserved values.
- **Missing at random (MAR).** The probability that a value is missing depends only on the observed values.
- **Non-ignorable.** Neither MAR nor MCAR.

What is the type of missingness:

- in an exit poll where an extreme right-wing party is running for parliament?
- in a database containing the results of two tests, where the second test has only been performed as a backup when the result of the first test was negative?
- in a monitoring system that is not completely stable, where some sensor values are not stored properly?
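The zero-count problem and the virtual-count fix can be sketched on a toy count table (the counts below are made up and much smaller than the letter table in the notes):

```python
# Zero-count problem vs. Laplace-style virtual counts, on a toy table.

from collections import Counter

counts = Counter({("a", "a"): 2, ("a", "b"): 3, ("b", "a"): 0, ("b", "b"): 5})

def mle(counts, event):
    """Plain maximum-likelihood estimate: relative frequency."""
    return counts[event] / sum(counts.values())

def smoothed(counts, event, virtual=1):
    """Add a virtual count to every cell before normalizing."""
    total = sum(counts.values()) + virtual * len(counts)
    return (counts[event] + virtual) / total

p_mle = mle(counts, ("b", "a"))          # 0.0: the event is declared impossible
p_smooth = smoothed(counts, ("b", "a"))  # (0 + 1) / (10 + 4) = 1/14
```

The virtual count keeps every estimate strictly positive while leaving well-supported cells nearly unchanged.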
#### The EM algorithm

Consider a network in which Pr is a parent of both Bt and Ut, and the database:

| Case | Pr | Bt | Ut |
|---|---|---|---|
| 1 | ? | pos | pos |
| 2 | yes | neg | pos |
| 3 | yes | pos | ? |
| 4 | yes | pos | neg |
| 5 | ? | neg | ? |

We must estimate the required probability distributions P(Pr), P(Bt | Pr), and P(Ut | Pr) for the network. If the database were complete, we would estimate

$$\hat P(\text{Pr} = \text{yes}) = \frac{N(\text{Pr} = \text{yes})}{N}, \qquad \hat P(\text{Ut} = \text{pos} \mid \text{Pr} = \text{yes}) = \frac{N(\text{Ut} = \text{pos}, \text{Pr} = \text{yes})}{N(\text{Pr} = \text{yes})},$$

and similarly $\hat P(\text{Bt} = \text{pos} \mid \text{Pr} = \text{no}) = N(\text{Bt} = \text{pos}, \text{Pr} = \text{no}) / N(\text{Pr} = \text{no})$. So estimating the probabilities is basically a counting problem.

With missing values, cases 2, 3, and 4 each contribute 1 to $N(\text{Pr} = \text{yes})$, but what is the contribution from cases 1 and 5? Case 1 contributes $P(\text{Pr} = \text{yes} \mid \text{Bt} = \text{pos}, \text{Ut} = \text{pos})$, and case 5 contributes $P(\text{Pr} = \text{yes} \mid \text{Bt} = \text{neg})$. To find these probabilities we assume some initial distributions $P_0$ have been assigned to the network; we are then calculating the expectation of the count, $E[N(\text{Pr} = \text{yes})]$.

Using $P_0(\text{Pr}) = (0.5, 0.5)$, $P_0(\text{Bt} \mid \text{Pr} = \text{yes}) = (0.5, 0.5)$, etc., as starting distributions, we get

$$E[N(\text{Pr} = \text{yes})] = 0.5 + 1 + 1 + 1 + 0.5 = 4, \qquad E[N(\text{Pr} = \text{no})] = 0.5 + 0 + 0 + 0 + 0.5 = 1,$$

so, for example, $\hat P_1(\text{Pr} = \text{yes}) = E[N(\text{Pr} = \text{yes})]/N = 4/5 = 0.8$.

To estimate $\hat P_1(\text{Ut} \mid \text{Pr}) = E[N(\text{Ut}, \text{Pr})]/E[N(\text{Pr})]$ we need, e.g.,

$$E[N(\text{Ut} = \text{pos}, \text{Pr} = \text{yes})] = 0.5 + 1 + 0.5 + 0 + 0.25 = 2.25,$$

so that $\hat P_1(\text{Ut} = \text{pos} \mid \text{Pr} = \text{yes}) = 2.25/4 = 0.5625$.
The E-step and M-step are then iterated. E-step 1 computes the expected counts under $P_0$, e.g. $E_0[N(\text{Pr})] = (4, 1)$, $E_0[N(\text{Ut} = \text{pos}, \text{Pr} = \text{yes})] = 2.25$, and $E_0[N(\text{Bt} = \text{pos}, \text{Pr} = \text{yes})] = 0.5 + 0 + 1 + 1 + 0 = 2.5$. M-step 2 uses them as if they were actual counts, giving $P_1(\text{Pr}) = (0.8, 0.2)$, $P_1(\text{Ut} = \text{pos} \mid \text{Pr} = \text{yes}) = 0.5625$, and $P_1(\text{Bt} = \text{pos} \mid \text{Pr} = \text{yes}) = 0.625$. E-step 3 recomputes the expected counts under $P_1$, M-step 4 produces $P_2$, and so on until convergence.

In general, the EM algorithm works as follows:

1. Let $\theta^0_{ijk}$ be some start estimates of $P(X_i = j \mid pa(X_i) = k)$.
2. Repeat until convergence:
   - **E-step.** For each variable $X_i$, calculate the table of expected counts $E[N(X_i, pa(X_i)) \mid D] = \sum_{d \in D} P(X_i, pa(X_i) \mid d, \theta^t)$.
   - **M-step.** Use the expected counts as if they were actual counts:
     $$\hat P(X_i = j \mid pa(X_i) = k) = \frac{E_{\theta^t}[N(X_i = j, pa(X_i) = k) \mid D]}{\sum_h E_{\theta^t}[N(X_i = h, pa(X_i) = k) \mid D]}.$$

#### Adaptation

Adapt the tables to experience (cases). Suppose the social environment or an expert $t_1$ supplies $P_1(A \mid B, C)$, ..., and source $t_k$ supplies $P_k(A \mid B, C)$. Introduce a variable $T$ with states $t_1, \ldots, t_k$: $P(T)$ reflects the credibility of $t_1, \ldots, t_k$, and $P(A \mid B, C, T = t_i) = P_i(A \mid B, C)$. Any case $e$ will yield a posterior $P(T \mid e)$, which is used as the prior for the next case.
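One E-step and M-step of the Pr/Bt/Ut example can be reproduced directly, recovering $E[N(\text{Pr} = \text{yes})] = 4$ and $\hat P_1(\text{Pr} = \text{yes}) = 0.8$:

```python
# One EM iteration for P(Pr) in the small Pr -> Bt, Pr -> Ut network.

from itertools import product

PR, BT, UT = ("yes", "no"), ("pos", "neg"), ("pos", "neg")

# Uniform starting distributions P0, as on the slides.
p_pr = {"yes": 0.5, "no": 0.5}
p_bt = {(pr, bt): 0.5 for pr in PR for bt in BT}   # P0(Bt | Pr)
p_ut = {(pr, ut): 0.5 for pr in PR for ut in UT}   # P0(Ut | Pr)

cases = [  # (Pr, Bt, Ut); None marks a missing value
    (None, "pos", "pos"),
    ("yes", "neg", "pos"),
    ("yes", "pos", None),
    ("yes", "pos", "neg"),
    (None, "neg", None),
]

def posterior_pr_yes(case):
    """P0(Pr = yes | observed part of the case), by summing the joint over
    all completions compatible with the observations."""
    obs_pr, obs_bt, obs_ut = case
    weights = {"yes": 0.0, "no": 0.0}
    for pr, bt, ut in product(PR, BT, UT):
        if obs_pr in (None, pr) and obs_bt in (None, bt) and obs_ut in (None, ut):
            weights[pr] += p_pr[pr] * p_bt[pr, bt] * p_ut[pr, ut]
    return weights["yes"] / (weights["yes"] + weights["no"])

expected_n_yes = sum(posterior_pr_yes(c) for c in cases)   # E-step: 4.0
p1_pr_yes = expected_n_yes / len(cases)                    # M-step: 0.8
```

Cases with Pr observed contribute exactly 0 or 1, and the partially observed cases contribute their posterior, matching the hand calculation above.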
#### Fractional updating

I am uncertain about $P(A \mid B, C)$. Idea: I can represent my uncertainty by assuming that the distributions $P(A \mid b_i, c_j)$ are frequencies from a virtual sample of $n$ cases; the larger I put $n$, the more certain I am:

$$P(A \mid b_i, c_j) = \left(\frac{n_1}{n}, \frac{n_2}{n}, \ldots, \frac{n_m}{n}\right).$$

How should $P(A \mid b_i, c_j)$ be updated when a new case arrives?

(a) New case with $B = b_i$, $C = c_j$, $A = a_1$:

$$P^*(A \mid b_i, c_j) = \left(\frac{n_1 + 1}{n+1}, \frac{n_2}{n+1}, \ldots, \frac{n_m}{n+1}\right).$$

(b) New case with $B = b_i$, $C = c_j$, and $P(A \mid \text{case}) = (x_1, \ldots, x_m)$:

$$P^*(A \mid b_i, c_j) = \left(\frac{n_1 + x_1}{n+1}, \frac{n_2 + x_2}{n+1}, \ldots, \frac{n_m + x_m}{n+1}\right).$$

(c) New case with $A = a_1$ and $P(b_i, c_j \mid \text{case}) = x$:

$$P^*(A \mid b_i, c_j) = \left(\frac{n_1 + x}{n+x}, \frac{n_2}{n+x}, \ldots, \frac{n_m}{n+x}\right).$$

(d) New case in general, with $P(A \mid \text{case}) = (x_1, \ldots, x_m)$ and $P(b_i, c_j \mid \text{case}) = x$:

$$P^*(A \mid b_i, c_j) = \left(\frac{n_1 + x \cdot x_1}{n+x}, \ldots, \frac{n_m + x \cdot x_m}{n+x}\right).$$

(e) New case with $B = b_i$, $C = c_j$, and this is all we know. Then $x_k = n_k/n$, so

$$P^*(A \mid b_i, c_j) = \left(\frac{n_1 + n_1/n}{n+1}, \ldots, \frac{n_m + n_m/n}{n+1}\right) = \left(\frac{n_1}{n}, \ldots, \frac{n_m}{n}\right).$$

This is unjustified: the distribution is unchanged while the virtual sample size grows, so we thereby confirm our belief in our present distribution.
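The general rule (d), of which (a) to (c) are special cases, can be sketched as:

```python
# Fractional updating, general case (d): the virtual counts n_k absorb the
# new case with weight x = P(b_i, c_j | case) and distribution x_k = P(a_k | case).

def fractional_update(counts, x_dist, x):
    """counts: virtual counts n_1..n_m behind P(A | b_i, c_j);
    x_dist: P(A | case); x: P(b_i, c_j | case).
    Returns (updated counts, updated distribution)."""
    new_counts = [n + x * xk for n, xk in zip(counts, x_dist)]
    total = sum(new_counts)                 # equals n + x, since x_dist sums to 1
    return new_counts, [c / total for c in new_counts]

# Case (a): A = a_1 observed together with b_i, c_j, so x = 1, x_dist = (1, 0, 0).
counts = [2.0, 6.0, 2.0]                    # n = 10: P(A | b_i, c_j) = (0.2, 0.6, 0.2)
counts, dist = fractional_update(counts, [1.0, 0.0, 0.0], 1.0)
# counts -> [3, 6, 2], dist -> (3/11, 6/11, 2/11)
```

Running case (e) through the same function (x_dist set to the current distribution) leaves the distribution unchanged while the sample size grows, which is exactly the self-confirmation problem noted above.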
#### Assumptions behind fractional updating

What is the situation? We are uncertain about $P(A \mid B, C)$, and we get a new case with $B = b_1$ and $C = c_2$. When updating, $P(A \mid b_1, c_2)$ is changed, while all other $P(A \mid b_i, c_j)$ are unaffected. This involves the following two assumptions:

- **Local independence.** The second-order uncertainty on $P(A \mid b_i, c_j)$ is independent of the second-order uncertainty on $P(A \mid b_{i'}, c_{j'})$ for other parent configurations.
- **Global independence.** The second-order uncertainty for the various variables is independent.

#### Example: spoofing

Estimate $P(\text{chosen} \mid \text{in-hand}) = (0.2, 0.6, 0.2)$, with virtual sample size 20 corresponding to the counts $(4, 12, 4)$. After a few new cases the updated estimate is approximately $(0.27, 0.46, 0.27)$: apparently she plays differently than the initial estimate assumed.

#### Do I have to take the wrong past with me?

We have two situations:

- the initial probabilities are wrong, or
- the probabilities change over time.

**Fading.** Multiply the old set of counts by a fading factor $q < 1$ before each update. The counts are faded, $(n_1, n_2, n_3) \to (n_1 q, n_2 q, n_3 q)$ and $n \to nq$, and then the new case's fractional counts $(x_1, x_2, x_3)$ with weight $x = \sum_i x_i$ are added, so the probabilities become

$$P^* = \left(\frac{n_1 q + x_1}{nq + x}, \frac{n_2 q + x_2}{nq + x}, \frac{n_3 q + x_3}{nq + x}\right).$$

This technique is very efficient for implementing adaptive agents for games with perfect recall.

#### Interpreting the fading factor

If all counts are updated with the value 1, then the past fades away exponentially, and in the limit the effective sample size $n^*$ satisfies

$$n^* = n^* q + 1 \iff n^* = \frac{1}{1-q}.$$

So instead of declaring a fading factor, we can specify an effective sample size, and the fading factor is then

$$q = \frac{n^* - 1}{n^*}.$$

Marco Valtorta wrote the following notes mainly around 1982; Blaine Nelson typed them on October 24, 2002. The main reference is Bertelè, Umberto, and Francesco Brioschi, *Nonserial Dynamic Programming*, Academic Press, 1972.
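The fading scheme and its fixed point can be sketched numerically (the starting counts below are made up):

```python
# Fading: old counts are multiplied by q before each new case is added, so
# old evidence decays exponentially. With one unit of count per step, the
# total sample size converges to the effective size n* = 1 / (1 - q).

def fade_and_update(counts, case_counts, q):
    """Fade the old virtual counts by q, then add the new case's counts."""
    return [n * q + x for n, x in zip(counts, case_counts)]

q = 0.9                                # effective sample size n* = 1 / (1 - q) = 10
counts = [60.0, 40.0]                  # made-up start: sample size 100
for _ in range(200):
    counts = fade_and_update(counts, [1.0, 0.0], q)   # always observe state 1

n = sum(counts)       # has converged to n* = 10
p1 = counts[0] / n    # has converged to 1: the old 60/40 split has faded away
```

The run illustrates both claims at once: the sample size settles at $1/(1-q)$ regardless of where it started, and an initially wrong distribution is forgotten at rate $q^t$.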
Another reference, used in notes written later, is Shenoy, Prakash P., "Valuation-Based Systems for Discrete Optimization," in P.P. Bonissone, M. Henrion, L.N. Kanal, and J.F. Lemmer (eds.), *Uncertainty in Artificial Intelligence 6*, Elsevier, 1991, pp. 385-400.

## An Application of Dynamic Programming: Globally Optimum Selection of Storage Patterns (Marco Valtorta)

#### Overview

This talk has two goals:

(a) a review of the fundamentals of dynamic programming and an introduction to nonserial dynamic programming;
(b) an application of the techniques to some of the issues involved in the problem of determining globally optimum storage patterns.

#### Dynamic programming

Dynamic programming is a problem-solving method which is especially useful for the problems to which Bellman's Principle of Optimality applies: "An optimal policy has the property that, whatever the initial state and the initial decision are, the remaining decisions constitute an optimal policy with respect to the state resulting from the initial decision."

Example: the shortest path problem in a directed, staged network. The principle of optimality can be stated as follows: if the shortest path from 0 to 3 goes through X, then (1) the part from 0 to X is the shortest path from 0 to X, and (2) the part from X to 3 is the shortest path from X to 3. This statement leads to a forward and a backward algorithm for finding the shortest path in a directed staged network.

#### The nonserial dynamic programming problem

A more formal definition (Bertelè and Brioschi, 1972): the statement of the nonserial (NSDP) unconstrained dynamic programming problem is

$$\min_X f(X) = \min_X \sum_{i \in T} f_i(X^i),$$

where $X = \{X_1, X_2, \ldots, X_n\}$ is a set of discrete variables, $S_{X_j}$ being the definition set of the variable $X_j$, $T = \{1, 2, \ldots, t\}$, and $X^i \subseteq X$. The function $f(X)$ is called the objective function, and the functions $f_i(X^i)$ are the components of the objective function.

Some useful definitions are now given. Two variables $x \in X$ and $y \in X$ are said to *interact* if there exists a component $f_k(X^k)$ such that both $x$ and $y$ belong to $X^k$.
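The forward algorithm on a staged network can be sketched as follows; the small 4-stage network and its costs are a made-up illustration.

```python
# Forward shortest-path algorithm on a directed staged network: by Bellman's
# principle, best[v] = min over predecessors u of best[u] + cost(u, v).

# stages[i] maps each node in stage i to its outgoing edges {next_node: cost}
stages = [
    {"s": {"a": 2, "b": 5}},
    {"a": {"c": 4, "d": 1}, "b": {"c": 1, "d": 3}},
    {"c": {"t": 3}, "d": {"t": 4}},
]

def shortest_path(stages, source="s"):
    """Sweep stage by stage, keeping the best cost to each node so far."""
    best = {source: 0}
    for stage in stages:
        nxt = {}
        for u, edges in stage.items():
            for v, cost in edges.items():
                cand = best[u] + cost
                if v not in nxt or cand < nxt[v]:
                    nxt[v] = cand
        best = nxt
    return best

# Here the optimum is s -> a -> d -> t with cost 2 + 1 + 4 = 7.
```

Each node's best cost is computed once per stage, so the work is linear in the number of edges rather than in the number of paths.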
The *interaction graph* $G = (X, L)$ of a nonserial unconstrained problem is an undirected graph, without self-loops and parallel edges, defined by the following:

(a) the vertex set $X$ of the graph is the set of variables of the problem;
(b) two vertices are adjacent if and only if the corresponding variables interact.

Example: the interaction graph of a serial problem $\min \sum_{i=1}^{n-1} f_i(x_i, x_{i+1})$ is a chain.

Rather than formally stating the way to solve a nonserial problem, I will present an example:

$$\min_X\, f_1(X_1, X_2, X_3) + f_2(X_3, X_4, X_5) + f_3(X_4, X_5, X_6),$$

with $X = \{X_1, \ldots, X_6\}$, $X^1 = \{X_1, X_2, X_3\}$, $X^2 = \{X_3, X_4, X_5\}$, $X^3 = \{X_4, X_5, X_6\}$, all variables binary, and the component tables:

| Argument values | $f_1(X_1,X_2,X_3)$ | $f_2(X_3,X_4,X_5)$ | $f_3(X_4,X_5,X_6)$ |
|---|---|---|---|
| 000 | 1 | 4 | 0 |
| 001 | 3 | 8 | 5 |
| 010 | 5 | 0 | 6 |
| 011 | 8 | 5 | 3 |
| 100 | 2 | 3 | 5 |
| 101 | 6 | 5 | 1 |
| 110 | 2 | 8 | 4 |
| 111 | 4 | 2 | 3 |

I choose to eliminate variable $X_6$ first. To do so, I consider with which variables $X_6$ interacts: $X_4$ and $X_5$. For every assignment to $X_4$ and $X_5$, I compute the value of $X_6$ for which $f_3$ is minimal (note that $X_6$ is a member of $X^3$ only, i.e., it is only involved in the component $f_3$). This leads to the following table:

| $X_4$ | $X_5$ | $h_1$ | $X_6^*$ |
|---|---|---|---|
| 0 | 0 | 0 | 0 |
| 0 | 1 | 3 | 1 |
| 1 | 0 | 1 | 1 |
| 1 | 1 | 3 | 1 |

Bellman's Principle of Optimality holds: once the optimal values for $X_4$ and $X_5$ have been determined, the optimal value for $X_6$ is $X_6^*$. Therefore we can consider a new problem in which $X_6$ does not appear ($X_6$ has been eliminated):

$$f_1(X_1, X_2, X_3) + f_2(X_3, X_4, X_5) + h_1(X_4, X_5).$$

At this point I note that $X_1$ and $X_2$ interact only with $X_3$ (in $f_1$), so I decide to eliminate them in block by building the following table:

| $X_1^*$ | $X_2^*$ | $h_2$ | $X_3$ |
|---|---|---|---|
| 0 | 0 | 1 | 0 |
| 0 | 0 | 3 | 1 |

The new problem to be solved is

$$\min_{X_3, X_4, X_5}\, h_2(X_3) + f_2(X_3, X_4, X_5) + h_1(X_4, X_5).$$

I eliminate $X_4$ and $X_5$ in block, considering both $f_2$ and $h_1$, to build $h_3(X_3) = \min_{X_4, X_5} [h_1(X_4, X_5) + f_2(X_3, X_4, X_5)]$:

| $X_4^*$ | $X_5^*$ | $h_3$ | $X_3$ |
|---|---|---|---|
| 1 | 0 | 1 | 0 |
| 0 | 0 | 3 | 1 |

Now the problem to be solved is $\min_{X_3}\, h_2(X_3) + h_3(X_3)$, and the solution is $X_3 = 0$.
The solution $X_3 = 0$ corresponds to $h_2(0) + h_3(0) = 2 = \min_X [f_1 + f_2 + f_3]$. To find the optimal values of all the variables (an *optimizing arrangement*), we use Bellman's principle of optimality and the tables we have built: $X_3 = 0$ gives $(X_1, X_2) = (0, 0)$ from the $h_2$ table and $(X_4, X_5) = (1, 0)$ from the $h_3$ table, and then $X_6 = 1$ from the $h_1$ table.

The computational cost of solving a nonserial dynamic programming problem is the sum of two terms: functional evaluations and table lookups. However, the maximum number of interacting variables is also a reasonable index of the computing time (Bertelè and Brioschi, 1972).

#### The secondary optimization problem

An elimination ordering is the ordered sequence of the eliminated variables (the first variable to be eliminated is the first in the ordering). The dimension $D(w)$ of an ordering $w$ is the maximum of the degrees of the vertices in $w$ at the time of their elimination. (This definition must be modified slightly for the block elimination case.)

Example: the elimination ordering for the solution given in the previous example is $w_1 = (X_6, X_1, X_2, X_4, X_5, X_3)$. Its vector of degrees at elimination time is $d(w_1) = (2, 2, 1, 2, 1, 0)$, so its dimension is $D(w_1) = 2$. One could solve the same problem by eliminating variables in the order $w_2 = (X_3, X_6, X_2, X_1, X_4, X_5)$, whose dimension is $D(w_2) = 4$, as can easily be verified.

In a simplified formulation, the secondary optimization problem is the problem of finding the elimination ordering with minimal dimension. A general solution to the secondary optimization problem is computationally very heavy (Bertelè and Brioschi, chapter 3), so heuristic criteria are used instead. The simplest criterion, which often determines an optimal or near-optimal ordering, is a greedy criterion, as expressed in the minimum degree algorithm: at any step, eliminate a minimum-degree vertex in the current interaction graph.
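The elimination from the worked example can be carried out mechanically and checked against brute-force enumeration of all $2^6$ assignments:

```python
# Nonserial elimination for the example: eliminate X6, then {X1, X2} in
# block, then {X4, X5} in block, leaving a one-variable problem over X3.

from itertools import product

B = (0, 1)
f1 = dict(zip(product(B, repeat=3), [1, 3, 5, 8, 2, 6, 2, 4]))  # f1(x1,x2,x3)
f2 = dict(zip(product(B, repeat=3), [4, 8, 0, 5, 3, 5, 8, 2]))  # f2(x3,x4,x5)
f3 = dict(zip(product(B, repeat=3), [0, 5, 6, 3, 5, 1, 4, 3]))  # f3(x4,x5,x6)

# Eliminate X6 (interacts with X4, X5 only): h1(x4,x5) = min over x6 of f3.
h1 = {(x4, x5): min(f3[x4, x5, x6] for x6 in B)
      for x4, x5 in product(B, repeat=2)}
# Eliminate X1, X2 in block (they interact only with X3, in f1).
h2 = {x3: min(f1[x1, x2, x3] for x1, x2 in product(B, repeat=2)) for x3 in B}
# Eliminate X4, X5 in block, considering both f2 and h1.
h3 = {x3: min(f2[x3, x4, x5] + h1[x4, x5]
              for x4, x5 in product(B, repeat=2)) for x3 in B}

best_x3 = min(B, key=lambda x3: h2[x3] + h3[x3])
optimum = h2[best_x3] + h3[best_x3]      # 2, attained at X3 = 0

# Sanity check against exhaustive search over all 64 assignments.
brute = min(f1[x1, x2, x3] + f2[x3, x4, x5] + f3[x4, x5, x6]
            for x1, x2, x3, x4, x5, x6 in product(B, repeat=6))
assert brute == optimum
```

The intermediate tables reproduce $h_1$, $h_2$, and $h_3$ from the text, and the largest table built is over two variables, matching the dimension $D(w_1) = 2$ of the chosen ordering.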
