Class Notes for CMPSCI 683 at UMass
These 16-page class notes were uploaded on February 6, 2015, for a Fall-semester course taught at the University of Massachusetts.
PubHlth 540 Introductory Biostatistics, page 2 of 10 (Wk2 solutions.doc)

VARIANCE

Let's save ourselves the trouble of a very long brute-force formula by using the formula for grouped data. Let j index the unique values; there are 14 unique values. (Deviations are taken from the sample mean x̄ = 44.5.)

  j    x_j   f_j   (x_j - x̄)²   f_j (x_j - x̄)²
  1     31    1      182.25         182.25
  2     35    2       90.25         180.50
  3     36    2       72.25         144.50
  4     38    3       42.25         126.75
  5     40    2       20.25          40.50
  6     41    3       12.25          36.75
  7     43    1        2.25           2.25
  8     45    3        0.25           0.75
  9     46    2        2.25           4.50
 10     51    2       42.25          84.50
 11     53    2       72.25         144.50
 12     59    1      210.25         210.25
 13     60    1      240.25         240.25
 14     69    1      600.25         600.25
 TOTALS       26                   1998.50

  s² = [ Σ_{j=1}^{14} f_j (x_j - x̄)² ] / (n - 1) = 1998.50 / 25 = 79.94

Standard deviation: s = √s² = √79.94 ≈ 8.94

Victor Lesser, CMPSCI 683, Fall 2004

Topics: Instance-Based Learning; Analytical Learning (Explanation-Based Learning); begin Resource-Bounded Reasoning. (Some of the first work on learning rules of baseball was done at UMass. Material taken from Mitchell's "Machine Learning".)

Instance-Based Learning
- Encode specific experiences in memory rather than abstractions.
- Carry out generalization at retrieval time rather than at storage time ("lazy learning").
- In their most general form: based on a partial match on a similarity metric, retrieve the set of cases/instances most relevant to the present context.
- Adapt the retrieved experiences to new situations. This can be based on algorithms ranging from simple k-nearest-neighbor classification to chains of reasoning.

k-Nearest Neighbor Learning
- Key idea: just store all training examples <x_i, f(x_i)>.
- Nearest neighbor: given query instance x_q, first locate the nearest training example x_n, then estimate f̂(x_q) = f(x_n).
- k-nearest neighbor: given x_q, take a vote among its k nearest neighbors (discrete-valued target function), or take the mean of the f values of the k nearest neighbors (real-valued target function):

    f̂(x_q) = (1/k) Σ_{i=1}^{k} f(x_i)

Voronoi diagram
- With positive and negative training examples, 1-nearest neighbor classifies x_q by the label of its single nearest example. The induced decision surface is a Voronoi diagram: every query point inside a given region receives the same value.

Curse of dimensionality
- Imagine instances described by 20 attributes, but only 2 of them relevant to the target function. Nearest neighbor is easily misled when X is high-dimensional.
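The k-nearest-neighbor rule above can be sketched in a few lines of Python. This is a minimal illustration, not from the notes: the toy data set, the Euclidean distance metric, and the choice k = 3 are all assumptions made for the example.

```python
import math
from collections import Counter

def euclidean(a, b):
    # Standard Euclidean distance between two feature vectors.
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def knn_classify(examples, x_q, k=3):
    # examples: list of (x_i, f(x_i)) pairs; we simply store them all
    # ("lazy learning") and defer generalization to query time.
    nearest = sorted(examples, key=lambda ex: euclidean(ex[0], x_q))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]   # majority vote among the k neighbors

def knn_regress(examples, x_q, k=3):
    # Real-valued target function: mean of the k nearest neighbors' f values.
    nearest = sorted(examples, key=lambda ex: euclidean(ex[0], x_q))[:k]
    return sum(f for _, f in nearest) / k

# Tiny hypothetical training set with two clusters:
data = [((0, 0), '-'), ((1, 0), '-'), ((0, 1), '-'),
        ((5, 5), '+'), ((6, 5), '+'), ((5, 6), '+')]
print(knn_classify(data, (5.5, 5.2), k=3))   # -> '+'
```

Note that all work happens at query time: "training" is just storing the pairs, which is why training is fast but querying is slow.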
  This is similar to overfitting.
- One approach: stretch the j-th axis by a weight z_j, where z_1, ..., z_n are chosen to minimize prediction error; lengthen the axes that correspond to the more relevant attributes.
- Use cross-validation to automatically choose the weights z_1, ..., z_n (minimizing error in classification). Setting z_j to zero eliminates that dimension altogether.

Distance-weighted k-nearest neighbor
- We might want to weight nearer neighbors more heavily:

    f̂(x_q) = Σ_{i=1}^{k} w_i f(x_i) / Σ_{i=1}^{k} w_i,   where w_i = 1 / d(x_q, x_i)²

  and d(x_q, x_i) is the distance between x_q and x_i.
- Note that it now makes sense to use all training examples instead of just k (Shepard's method), though classification becomes much slower.

When to consider nearest neighbor
- Instances map to points in ℝⁿ (continuous real values); fewer than 20 attributes per instance; lots of training data.
- Advantages: training is very fast; robust to noisy training data; can learn complex target functions; doesn't lose information.
- Disadvantages: slow at query time; easily fooled by irrelevant attributes.

Locally weighted regression
- Note that k-NN forms a local approximation to f for each query point x_q. Why not form an explicit approximation f̂(x) for the region surrounding x_q?
- Fit a linear function to the k nearest neighbors (or a quadratic, ...). This produces a "piecewise approximation" to f.
- Several choices of error to minimize (e.g., by gradient descent):
  - Squared error over the k nearest neighbors:
      E1(x_q) = (1/2) Σ_{x ∈ k nearest nbrs of x_q} (f(x) - f̂(x))²
  - Distance-weighted squared error over all neighbors:
      E2(x_q) = (1/2) Σ_{x ∈ D} (f(x) - f̂(x))² K(d(x_q, x))

Case-based reasoning
- Instance-based learning can be applied even when X ≠ ℝⁿ; we just need a different distance metric.
- Case-Based Reasoning (CBR) is instance-based learning applied to instances with symbolic logic descriptions, e.g.:

    ((user-complaint error53-on-shutdown)
     (cpu-model PowerPC)
     (operating-system Windows)
     (network-connection PCIA)
     (memory 32meg)
     (installed-applications Excel Netscape VirusScan)
     (disk 1Gig)
     (likely-cause ???))

Key elements of problem-solving CBR
- Cases are represented as solved problems.
- Index cases under the goals they satisfy and the planning problems they avoid.
- Retrieve the prior case sharing the most goals and avoiding the most problems.
- Adapt the solution of the prior case to solve the new case; this may require resolving problems and/or repairing solutions.
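The distance-weighted rule from earlier in this section (Shepard's method, using all training examples with weights w_i = 1/d²) can be sketched as follows. The training pairs below are hypothetical samples of f(x) = x², used only to show the weighting; they are not from the notes.

```python
import math

def distance_weighted_predict(examples, x_q):
    # Shepard's method: use ALL training examples, weighting each by
    # w_i = 1 / d(x_q, x_i)^2, and return the weighted mean of f values.
    num = den = 0.0
    for x_i, f_i in examples:
        d = math.dist(x_i, x_q)
        if d == 0.0:
            return f_i          # exact match: the weight would be infinite
        w = 1.0 / d ** 2
        num += w * f_i
        den += w
    return num / den

# Hypothetical samples of f(x) = x^2:
train = [((0.0,), 0.0), ((2.0,), 4.0)]
print(distance_weighted_predict(train, (1.0,)))   # -> 2.0 (equal weights)
```

Because every example contributes, a single query touches the whole training set, which is exactly why the notes observe that classification becomes much slower than plain k-NN.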
- Index the new case and its solution under the goals satisfied and the planning problems avoided.

CADET: case-based design
- 75 stored examples of mechanical devices. Each training example: <qualitative function, mechanical structure>.
- New query: a desired function. Target value: a mechanical structure for this function.
- Distance metric: match qualitative function descriptions, e.g., the size of the largest shared subgraph between two function graphs. (Example: designing a water faucet.)
- CADET exploits domain-specific rewrite rules to modify cases and make matching more likely, e.g., rewriting A --+--> B as A --+--> x --+--> B.

CHEF: case-based planning in the cooking domain
A sample case (beef with green beans):
  Ingredients: a half pound of beef; two tablespoons of soy sauce; one teaspoon of rice wine; a half tablespoon of corn starch; one teaspoon of sugar; a half pound of green beans; one teaspoon of salt; one chunk of garlic.
  Steps: Chop the garlic into pieces the size of matchheads. Shred the beef. Marinate the beef in the garlic, sugar, corn starch, rice wine, and soy sauce. Stir-fry the spices, rice wine, and beef for one minute. Add the green beans to the spices, rice wine, and beef. Stir-fry the spices, rice wine, green beans, and beef for three minutes. Add the salt to the spices, rice wine, green beans, and beef.

CHEF consists of six processes:
- Problem anticipation: the planner anticipates planning problems by noticing features in the current input that have previously participated in past planning problems.
- Plan retrieval: the planner searches for a plan that satisfies as many of its current goals as possible while avoiding the problems that it has predicted.
- Plan modification: the planner alters the plans it has found to satisfy any goals from the input that are not already achieved.
- Plan repair: when a plan fails, the planner fixes the faulty plan by building up a causal explanation of why the failure occurred and using it to find the different strategies for repairing it.
- Credit assignment: along with repairing a failed plan, the planner wants to repair the characterization of the world that allowed it to create the failed plan in the first place.
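The CBR retrieve step (index cases under goals satisfied and problems avoided, then retrieve the case sharing the most goals and avoiding the most predicted problems) can be sketched as a simple scoring function. The case library, goal names, and dictionary layout below are invented for illustration; they are not CHEF's actual representation.

```python
def retrieve_case(library, goals, predicted_problems):
    # Score each stored case by the goals it shares with the query plus
    # the predicted problems it is indexed as avoiding; return the best.
    def score(case):
        return (len(goals & case['goals'])
                + len(predicted_problems & case['avoids']))
    return max(library, key=score)

# Hypothetical case library in the spirit of the CHEF trace:
library = [
    {'name': 'BEEF-WITH-GREEN-BEANS',
     'goals': {'include-beef', 'make-stir-fry'},
     'avoids': set()},
    {'name': 'BEEF-AND-BROCCOLI',
     'goals': {'include-beef', 'include-broccoli', 'make-stir-fry'},
     'avoids': {'vegetable-now-soggy'}},
]

query_goals = {'include-chicken', 'include-snow-pea', 'make-stir-fry'}
predicted = {'vegetable-now-soggy'}   # anticipated from the "meat sweats" rule
print(retrieve_case(library, query_goals, predicted)['name'])
# -> 'BEEF-AND-BROCCOLI'
```

The avoided-failure index is what lets the planner prefer BEEF-AND-BROCCOLI here even though it shares no more goals than the alternative, mirroring the chicken/snow-pea trace later in the notes.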
  It does this by using the causal explanation of why the failure occurred to identify the features in the input that led to the problem, and then marking them as predictive of it.
- Plan storage: the planner places successful plans in memory, indexed by the goals that they satisfy and the problems that they avoid.

CHEF example: modification
  Recipe for BEEF-AND-BROCCOLI.
  Found nearest recipe: BEEF-WITH-GREEN-BEANS.
  Modifying recipe BEEF-WITH-GREEN-BEANS to satisfy: include broccoli in the dish.
  Placing some broccoli in recipe BEEF-WITH-GREEN-BEANS.
  Considering ingredient-critic: before doing step "Stir fry the -Variable-", do "Chop the broccoli into pieces the size of chunks." Ingredient critic applied.

CHEF alters old plans to satisfy new goals using a set of modification rules and a set of critics tied to particular objects. The resulting recipe:
  Ingredients: a half pound of beef; two tablespoons of soy sauce; one teaspoon of rice wine; a half pound of broccoli; a half tablespoon of corn starch; one teaspoon of sugar; one teaspoon of salt; one chunk of garlic.
  Steps: Chop the garlic into pieces the size of matchheads. Shred the beef. Marinate the beef in the garlic, sugar, corn starch, rice wine, and soy sauce. Chop the broccoli into pieces the size of chunks. Stir-fry the spices, rice wine, and beef for one minute. Add the broccoli to the spices, rice wine, and beef. Stir-fry the spices, rice wine, broccoli, and beef for three minutes. Add the salt.
  Results: The beef is now tender. The dish now tastes savory. The broccoli is now crisp. The dish now tastes salty. The dish now tastes sweet. The dish now tastes like garlic.

Repair strategies
- ALTER-PLAN:SIDE-EFFECT: replace the step that causes the violating condition with one that does not have the same side effect but achieves the same goal.
- SPLIT-AND-REFORM: split the step into two separate steps and run them independently.
- ADJUNCT-PLAN:REMOVE: add a new step, to be run along with the step that causes a side effect, that removes the side effect as it is created.

CHEF example: indexing
  Indexing BEEF-AND-BROCCOLI under goals and problems.
  If this plan is successful, the following should be true: The beef is now tender. The broccoli is now crisp.
  Include beef in the dish. Include broccoli in the dish. Make a stir-fry dish.
  The plan avoids the failure exemplified by the state
  "The broccoli is now soggy" in recipe BEEF-AND-BROCCOLI.

CHEF example: retrieval
  Searching for a plan that satisfies input goals: Include chicken in the dish. Include snow pea in the dish. Make a stir-fry dish.
  Collecting and activating tests: Is the dish STYLE-STIR-FRY? Is the item a MEAT? Is the item a VEGETABLE? Is the TEXTURE of the item CRISP?
  Chicken + Snow Pea + Stir Frying = Failure: "Meat sweats when it is stir-fried." "Stir-frying in too much liquid makes crisp vegetables soggy."
  Reminded of a failure in the BEEF-AND-BROCCOLI plan. Failure: "The vegetable is now soggy."
  Driving down on: Make a stir-fry dish. Succeeded.
  Driving down on: Avoid failure exemplified by the state "The broccoli is now soggy" in recipe BEEF-AND-BROCCOLI. Succeeded.
  Driving down on: Include chicken in the dish. Failed. Trying more general goal.
  Driving down on: Include meat in the dish. Succeeded.
  Driving down on: Include snow pea in the dish. Failed. Trying more general goal.
  Driving down on: Include vegetable in the dish. Succeeded.
  Found recipe: REC9 BEEF-AND-BROCCOLI.

Inductive learning
- Given: a space of hypotheses H; training examples of the target concept.
- Determine: a hypothesis consistent with the training examples.
- May need a lot of training examples to distinguish relevant from irrelevant features.

Analytical learning
- Given: a space of hypotheses H; training examples of the target concept; a domain theory for explaining examples.
- Determine: a hypothesis consistent with both the training examples and the domain theory.
- May need fewer training examples to distinguish relevant attributes, because generalization can be based on logical rather than statistical reasoning.

Learning with prior knowledge
- Normal learning involves a long process of uncovering the consequences of prior knowledge, guided by specific training examples.
- Use prior knowledge to reduce the complexity of the hypothesis space to be searched, reducing sample complexity and improving generalization accuracy.

Explanation-based learning: the SafeToStack example (from Mitchell)
- Instances: pairs of objects, represented by the predicates Type, Color, Volume, Owner, Material, Density, and On.
- Target concept: SafeToStack(x, y).
- Domain theory (first rule):
    SafeToStack(x, y) ← ¬Fragile(y)
    SafeToStack(x, y) ← Lighter(x, y)
    Lighter(x, y)     ← Weight(x, wx) ∧ Weight(y, wy) ∧ LessThan(wx, wy)
    Weight(x, w)      ← Volume(x, v) ∧ Density(x, d) ∧ Equal(w, v * d)
    Weight(x, 5)      ← Type(x, Endtable)
- Hypotheses: sets of first-order Horn clauses built from these predicates.
- Training example: SafeToStack(Obj1, Obj2), with
    On(Obj1, Obj2), Type(Obj1, Box), Type(Obj2, Endtable),
    Color(Obj1, Red), Color(Obj2, Blue), Volume(Obj1, 2), Density(Obj1, 0.3), ...
- Determine: a hypothesis h consistent with the training examples and the domain theory B (i.e., B does not entail ¬h).

Prolog-EBG algorithm
  Initialize the hypothesis to the empty set.
  For each positive training example not covered by the hypothesis:
  1. Explain how the training example satisfies the target concept, in terms of the domain theory.
  2. Analyze the explanation to determine the most general conditions under which this explanation (proof) holds.
  3. Refine the hypothesis by adding a new rule whose preconditions are the above conditions and whose consequent asserts the target concept.

The new rule learned from the example above:
    SafeToStack(x, y) ← Volume(x, vx) ∧ Density(x, dx) ∧ Equal(wx, vx * dx)
                        ∧ LessThan(wx, 5) ∧ Type(y, Endtable)

Lessons from the SafeToStack example
- Justified generalization from a single example.
- Compiles relevant parts of the domain theory.
- The explanation determines feature relevance; regression determines the needed feature constraints.
- The generality of the result depends on the domain theory.
- Multiple examples are still required.

Perspectives on explanation-based learning
- Theory-guided generalization from examples; example-guided operationalization of theories.
- Or is it just restating what the learner already knows? Is it learning?
  - Are you learning when you get better over time at chess, even though you already "know" everything in principle once you know the rules of the game?
  - Are you learning when you sit in a math class, even though the theorems follow deductively from the axioms you have already learned?

Summary: machine learning
- Learning is a search for the best hypothesis (function) that matches the training data.
- Knowledge of the domain can be represented as logical rules.
- Learning technology has become very formal, and learning can be viewed as a form of reasoning closely related to statistics.
- Background knowledge is used to construct proofs or explanations of experience; these proofs are then generalized, based on a type of goal regression, and compiled into rules.
- Explanation-based learning is a process of transforming existing knowledge into another form so as to be able to apply it efficiently; this is also called "speed-up learning".
- Credit assignment is a complex problem.
- There are many different approaches to learning, based on: supervised vs. unsupervised; learning a single-step decision process vs. a multiple-step one; on-line/incremental vs. off-line learning; the character of the function that needs to be learned; the quality and character of the training data.
- We have just scratched the surface (Support Vector Machines, Hidden Markov Models, Bayesian network learning, ...); the field is evolving very quickly.

Resource-Bounded Reasoning

Motivation
- Agents have limited computational power, and they must react within an acceptable time. Computation time delays action and reduces the value of the result.
- Agents must cope with uncertainty and missing information, and have a limited planning horizon.
- The appropriate level of deliberation is situation dependent.
- It is beneficial to build systems that can trade off computational resources for quality of results.

Why is this hard?
- AI deliberation techniques have a large variability in execution time, memory, etc.: non-deterministic search, variable problem difficulty, variable amounts of uncertainty. Reasonable upper bounds are difficult to predict.
- It is not only an issue of more computing power (memory, power consumption, etc.); intelligent control of deliberation is needed.
- How do we build agents that "satisfice" rather than optimize? The computational resources required to reach an optimal decision normally reduce the overall utility of the result. How do we get the best answer within the available resources?

Satisficing
- Proposed by Herbert Simon in 1957 to denote decision making that searches until an alternative is found that meets the agent's aspiration-level criterion.
- It is not feasible computationally, or desirable economically, to compute the optimal answer.
- The appropriate level of deliberation is situation dependent; it is beneficial to build systems that can trade off computational resources for quality of results. This is key to applying AI in open environments.
- Simon: "It appears probable that, however adaptive the behavior of organisms in learning and choice situations, this adaptiveness falls far short of the ideal 'maximizing' postulated in economic theory. Evidently, organisms adapt well enough to 'satisfice'; they do not, in general, 'optimize'."

Typical resource/quality tradeoffs
- Computation time versus quality of results.
- Memory versus time versus power.
- Cost of information versus quality of results.
- Communication bandwidth versus quality.
- Many more.

Satisficing techniques
- Satisficing approximate reasoning; satisficing approximate modeling; satisficing approximate meta-reasoning; satisficing bounded optimality; or a combination of the above.

Example: vehicle monitoring (heavily garbled in the source; the recoverable gist)
- Goal: determine the types, positions, and movements of as many vehicles moving through the environment as possible, giving preference to tracking vehicles of type v1 and to determining vehicle locations with high precision.
- An exact answer, <v1 s1 l1 d1>, <v2 s2 l2 d2>: "The following events and their corresponding features are certain: a vehicle of type v1 located at l1, moving with speed s1 and direction d1; a vehicle of type v2 at l2, moving with speed s2 and direction d2. No other vehicles are present in the environment."
- An answer within a deadline: "From a cursory analysis, it is likely that there exists a vehicle of type v1 located near l1, moving with a speed between s1 and s2 in direction d1; other vehicles might be present."
- The appropriate approximation is based both on the needs of the user and on the current state.

Applications of resource-bounded reasoning
- Medical diagnosis and treatment; combinatorial optimization; evaluation of belief networks; real-time heuristic search; information gathering; mobile robot navigation; database query optimization; software validation; real-time graphics.
- Real-time graphics example (Fig. 4): the Talisman system takes advantage of spatial and temporal coherence by compositing previously rendered sprites instead of re-rendering them.
(Figure captions from Horvitz and Lengyel, Working Notes of the AAAI Fall Symposium on Flexible Computation, 1995; the images themselves are not reproduced here.)
- Fig. 4, continued: rectangles of dotted lines bound sprites warped to the new frame; rectangles of solid lines bound newly rendered sprites.
- Fig. 3: graphics degradation. The authors seek to understand the combinatorial sensitivity of viewers to visual cues as graphics degradations and spatial and temporal relationships change, highlighting opportunities for modulating attention as a function of the primary focus of attention and of such relationships as distance into the background and motion of sprites.
- Fig. 1: methods for reducing the number of vertices used to describe a geometric model can be used to generate a spectrum of models, from an ideal, fully detailed model to progressively simpler models.
- Rendering at multiple resolutions: objects can be sampled at a variety of resolutions before being composited into final scenes, allowing a tradeoff between the clarity of an object and the computation required to render it.

A methodology for building satisficing systems
Address the following four major issues, in the context of an overall system architecture:
1. Elementary algorithm construction: develop base-level computational methods that allow small quantities of resources (such as time, memory, or information) to be traded for gains in the value of computed results.
2. Performance measurement and prediction: how does the system represent and project resource/quality tradeoffs? Quality measures replace correctness:
   - Certainty: the likelihood that the answer is correct.
   - Precision: the accuracy of the answer.
   - Specificity: the level of detail in the answer.
   - Completeness: the part of the problem solved.
   - Cost: the overall solution cost.
   - Quality measures can be multi-dimensional.
3. Composability of methods and subsystems.
4. Monitoring and meta-level control: what kind of meta-level control is used, and what are the characteristics of the overall system?
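A resource/quality tradeoff of the kind item 2 above calls for can be represented as a simple table of expected quality per time allocation. The profile values and the linear time-cost model below are made-up numbers, used only to illustrate how such a representation can drive meta-level control.

```python
# A discretized performance profile: expected solution quality as a
# function of allocated time in seconds.  Values are hypothetical.
profile = {0.0: 0.00, 0.5: 0.40, 1.0: 0.62, 2.0: 0.78, 4.0: 0.88, 8.0: 0.93}

def expected_quality(profile, t):
    # Quality at the largest tabulated allocation not exceeding t
    # (a conservative, step-function reading of the profile).
    feasible = [q for alloc, q in sorted(profile.items()) if alloc <= t]
    return feasible[-1] if feasible else 0.0

def best_stopping_time(profile, value_of_quality, cost_per_second):
    # Time-dependent utility: value of the result minus the cost of the
    # time spent producing it; pick the allocation that maximizes it.
    return max(profile,
               key=lambda t: value_of_quality * profile[t] - cost_per_second * t)

print(expected_quality(profile, 3.0))        # -> 0.78
print(best_stopping_time(profile, 100, 6))   # -> 2.0
```

With these (invented) numbers, deliberating past two seconds costs more than the extra quality is worth, which is the "computation time versus quality of results" tradeoff in miniature.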
Anytime algorithms
- An anytime algorithm is an algorithm whose quality of results improves gradually as computation time increases.
- Anytime algorithms have been designed for planning, Bayesian inference, CSPs, combinatorial optimization, diagnosis, and more.
- (Performance graph: an "ideal" algorithm would deliver maximal quality in no time; a traditional algorithm maximizes quality regardless of time; an anytime algorithm maximizes utility, trading quality against time.)

Interruptible versus contract algorithms
- Interruptible algorithms are anytime algorithms whose run time need not be determined in advance: they can be interrupted at any time during execution and still return a result.
- Contract algorithms are anytime algorithms that take the deadline as input. No assumption is made about the results produced before the deadline.
- Both interruptible and contract algorithms have performance profiles Q(t) which return quality as a function of time. Note that for contract algorithms, the horizontal axis is the time that was given in advance.
- Good contract algorithms can be easier to design because they can optimize with respect to the extra time input. Examples: depth-bounded or cost-bounded tree search; discretizing a continuous problem; composition of anytime algorithms.

Using a contract algorithm without a known deadline
- What if we want to use a contract algorithm in a setting where we don't know the deadline? We can repeatedly activate the contract algorithm with increasing run times.
- When the deadline occurs, we return the result produced by the last contract to finish.

More on Resource-Bounded Reasoning next time.
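The deadline-free use of a contract algorithm described above can be sketched with a doubling schedule: each contract gets twice the run time of the previous one, and we keep the result of the last contract that finishes. `toy_contract` and its quality curve are invented stand-ins for a real contract algorithm.

```python
import time

def make_interruptible(contract_alg, deadline, first_contract=0.1):
    # Repeatedly run the contract algorithm with doubling contract times,
    # keeping the result of the last contract that can finish before the
    # deadline.  We only start a contract if it fits in the time left.
    start = time.monotonic()
    best = None
    contract = first_contract
    while time.monotonic() - start + contract <= deadline:
        best = contract_alg(contract)
        contract *= 2
    return best

# Hypothetical contract algorithm: quality improves with allotted time.
def toy_contract(t):
    time.sleep(t)                       # simulate t seconds of deliberation
    return {'quality': 1 - 1 / (1 + t)}

result = make_interruptible(toy_contract, deadline=1.0)
print(result)   # result of the last contract that fit within the deadline
```

The price of not knowing the deadline is the time spent on the earlier, discarded contracts; the doubling schedule keeps that overhead to a constant factor relative to a single contract of the right length.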