Advanced Topics in Computer Networks (EEL 6788)
University of Central Florida
This 58-page set of class notes was uploaded by Isaac Hauck on Thursday, October 22, 2015. The notes belong to EEL 6788 at the University of Central Florida, taught by Staff in Fall. Since its upload, it has received 6 views.
Date Created: 10/22/15
LECTURE 6: MULTIAGENT INTERACTIONS
An Introduction to Multiagent Systems (http://www.csc.liv.ac.uk/~mjw/pubs/imas/)

1 What are Multiagent Systems?

[Slide figure: agents embedded in an environment, each with a sphere of influence, linked by interaction and organisational relationships.]

Thus a multiagent system contains a number of agents, which:
- interact through communication;
- are able to act in an environment;
- have different "spheres of influence" (which may coincide);
- will be linked by other (organisational) relationships.

2 Utilities and Preferences

- Assume we have just two agents, Ag = {i, j}.
- Agents are assumed to be self-interested: they have preferences over how the environment is.
- Assume Ω = {ω1, ω2, ...} is the set of "outcomes" that agents have preferences over.
- We capture preferences by utility functions: u_i : Ω → R and u_j : Ω → R.
- Utility functions lead to preference orderings over outcomes:
    ω ⪰_i ω′ means u_i(ω) ≥ u_i(ω′)
    ω ≻_i ω′ means u_i(ω) > u_i(ω′)

What is Utility?
- Utility is not money (but it is a useful analogy).
- [Slide figure: the typical relationship between utility and money — a concave curve, so each extra unit of money adds progressively less utility.]

3 Multiagent Encounters

- We need a model of the environment in which these agents will act:
  - agents simultaneously choose an action to perform, and as a result of the actions they select, an outcome in Ω will result;
  - the actual outcome depends on the combination of actions;
  - assume each agent has just two possible actions that it can perform: C ("cooperate") and D ("defect").
- Environment behaviour is given by a state transformer function τ : Ac × Ac → Ω, mapping agent i's action and agent j's action to an outcome.

- Here is a state transformer function:
    τ(D, D) = ω1, τ(D, C) = ω2, τ(C, D) = ω3, τ(C, C) = ω4
  This environment is sensitive to the actions of both agents.
- Here is another:
    τ(D, D) = ω1, τ(D, C) = ω1, τ(C, D) = ω1, τ(C, C) = ω1
  Neither agent has any influence in this environment.
- And here is another:
    τ(D, D) = ω1, τ(D, C) = ω2, τ(C, D) = ω1, τ(C, C) = ω2
  This environment is controlled by j.

Rational Action
- Suppose we have the case where both agents can influence the outcome, and they have utility functions as follows:
    u_i(ω1) = 1, u_i(ω2) = 1, u_i(ω3) = 4, u_i(ω4) = 4
    u_j(ω1) = 1, u_j(ω2) = 4, u_j(ω3) = 1, u_j(ω4) = 4
- With a bit of abuse of notation:
    u_i(D, D) = 1, u_i(D, C) = 1, u_i(C, D) = 4, u_i(C, C) = 4
    u_j(D, D) = 1, u_j(D, C) = 4, u_j(C, D) = 1, u_j(C, C) = 4
- Then agent i's preferences are: C,C ⪰_i C,D ≻_i D,C ⪰_i D,D.
- C is the rational choice for i, because i prefers all outcomes that arise through C over all outcomes that arise through D.

Payoff Matrices
- We can characterise the previous scenario in a payoff matrix (each cell lists u_i / u_j):

                 i: defect    i: coop
    j: defect     1 / 1        4 / 1
    j: coop       1 / 4        4 / 4

- Agent i is the column player; agent j is the row player.

Dominant Strategies
- Given any particular strategy s (either C or D) for agent i, there will be a number of possible outcomes.
- We say s1 dominates s2 if every outcome possible by i playing s1 is preferred over every outcome possible by i playing s2.
- A rational agent will never play a dominated strategy.
- So, in deciding what to do, we can delete dominated strategies.
- Unfortunately, there isn't always a unique undominated strategy.

Nash Equilibrium
- In general, we say that two strategies s1 and s2 are in Nash equilibrium if:
  1. under the assumption that agent i plays s1, agent j can do no better than play s2; and
  2. under the assumption that agent j plays s2, agent i can do no better than play s1.
- Neither agent has any incentive to deviate from a Nash equilibrium.
- Unfortunately:
  1. Not every interaction scenario has a Nash equilibrium.
  2. Some interaction scenarios have more than one Nash equilibrium.
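The two Nash conditions above are mechanically checkable for a two-action game. The following is a minimal sketch (not part of the lecture notes; the class and method names are illustrative) that enumerates the pure-strategy Nash equilibria of the example game, with payoffs indexed as [i's action][j's action] and action 0 = defect, 1 = cooperate.

```java
// Sketch: enumerate pure-strategy Nash equilibria of a 2-action game.
public class NashCheck {
    // A profile (ai, aj) is a Nash equilibrium if neither agent can
    // improve its own payoff by unilaterally switching actions.
    static boolean isNash(int[][] ui, int[][] uj, int ai, int aj) {
        for (int a = 0; a < 2; a++)
            if (ui[a][aj] > ui[ai][aj]) return false; // i would deviate
        for (int a = 0; a < 2; a++)
            if (uj[ai][a] > uj[ai][aj]) return false; // j would deviate
        return true;
    }

    public static void main(String[] args) {
        // The lecture's example: u_i(D,D)=1, u_i(D,C)=1, u_i(C,D)=4, u_i(C,C)=4
        int[][] ui = {{1, 1}, {4, 4}};
        int[][] uj = {{1, 4}, {1, 4}};
        for (int ai = 0; ai < 2; ai++)
            for (int aj = 0; aj < 2; aj++)
                if (isNash(ui, uj, ai, aj))
                    System.out.println((ai == 1 ? "C" : "D") + ","
                            + (aj == 1 ? "C" : "D")); // prints only C,C
    }
}
```

For this game only (C, C) survives, matching the earlier observation that C is the rational choice for both agents.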
Competitive and Zero-Sum Interactions
- Where the preferences of agents are diametrically opposed, we have strictly competitive scenarios.
- Zero-sum encounters are those where utilities sum to zero: u_i(ω) + u_j(ω) = 0 for all ω ∈ Ω.
- Zero-sum implies strictly competitive.
- Zero-sum encounters in real life are very rare — but people tend to act in many scenarios as if they were zero-sum.

4 The Prisoner's Dilemma

Two men are collectively charged with a crime and held in separate cells, with no way of meeting or communicating. They are told that:
- if one confesses and the other does not, the confessor will be freed, and the other will be jailed for three years;
- if both confess, then each will be jailed for two years.
Both prisoners know that if neither confesses, then they will each be jailed for one year.

- Payoff matrix for the prisoner's dilemma (each cell lists u_i / u_j):

                 i: defect    i: coop
    j: defect     2 / 2        1 / 4
    j: coop       4 / 1        3 / 3

- Top left: if both defect, then both get the punishment for mutual defection.
- Top right: if i cooperates and j defects, i gets the sucker's payoff of 1, while j gets 4.
- Bottom left: if j cooperates and i defects, j gets the sucker's payoff of 1, while i gets 4.
- Bottom right: the reward for mutual cooperation.

- The individually rational action is defect: it guarantees a payoff of no worse than 2, whereas cooperating guarantees a payoff of no worse than only 1.
- So defection is the best response to all possible strategies: both agents defect, and get a payoff of 2.
- But intuition says this is not the best outcome: surely they should both cooperate, and each get a payoff of 3!

- This apparent paradox is the fundamental problem of multiagent interactions. It appears to imply that cooperation will not occur in societies of self-interested agents.
- Real-world examples:
  - nuclear arms reduction ("why don't I keep mine...");
  - free-rider systems: public transport;
  - in the UK: television licences.
- The prisoner's dilemma is ubiquitous.
- Can we recover cooperation?

Arguments for Recovering Cooperation
- Conclusions that some have drawn from this analysis:
  - the game-theoretic notion of rational action is wrong;
  - somehow, the dilemma is being formulated wrongly.
- Arguments to recover cooperation:
  - We are not all Machiavelli!
  - The other prisoner is my twin.
  - The shadow of the future...

4.1 The Iterated Prisoner's Dilemma
- One answer: play the game more than once. If you know you will be meeting your opponent again, then the incentive to defect appears to evaporate.
- Cooperation is the rational choice in the infinitely repeated prisoner's dilemma. (Hurrah!)

4.2 Backwards Induction
- But suppose you both know that you will play the game exactly n times. On round n − 1 you have an incentive to defect, to gain that extra bit of payoff... but this makes round n − 2 the last "real" round, and so you have an incentive to defect there, too. This is the backwards induction problem.
- Playing the prisoner's dilemma with a fixed, finite, pre-determined, commonly known number of rounds, defection is the best strategy.

4.3 Axelrod's Tournament
- Suppose you play the iterated prisoner's dilemma against a range of opponents. What strategy should you choose, so as to maximise your overall payoff?
- Axelrod (1984) investigated this problem, with a computer tournament for programs playing the prisoner's dilemma.
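To make the iterated setting concrete, here is a small sketch (not from the notes; class and strategy names are illustrative) that pits an always-defect strategy against a simple reciprocating one over a few rounds, using the payoff matrix above. A strategy is modelled as a function of the round number and the opponent's previous move.

```java
import java.util.function.BiFunction;

// Sketch: an iterated prisoner's dilemma engine.
// A move is true = cooperate, false = defect.
public class IteratedPD {
    // Payoff to the first player, from the matrix in the notes.
    static int payoff(boolean me, boolean other) {
        if (me && other) return 3;    // reward for mutual cooperation
        if (!me && !other) return 2;  // punishment for mutual defection
        return me ? 1 : 4;            // sucker's payoff vs. temptation
    }

    // Plays `rounds` rounds and returns {score of a, score of b}.
    static int[] play(BiFunction<Integer, Boolean, Boolean> a,
                      BiFunction<Integer, Boolean, Boolean> b, int rounds) {
        int[] score = new int[2];
        boolean prevA = true, prevB = true; // by convention, "cooperated" before round 0
        for (int r = 0; r < rounds; r++) {
            boolean ma = a.apply(r, prevB), mb = b.apply(r, prevA);
            score[0] += payoff(ma, mb);
            score[1] += payoff(mb, ma);
            prevA = ma;
            prevB = mb;
        }
        return score;
    }

    public static void main(String[] args) {
        BiFunction<Integer, Boolean, Boolean> allD = (r, prev) -> false;
        // Reciprocator: cooperate on round 0, then copy the opponent's last move.
        BiFunction<Integer, Boolean, Boolean> titForTat = (r, prev) -> r == 0 || prev;
        int[] s = play(titForTat, allD, 5);
        System.out.println(s[0] + " vs " + s[1]); // 9 vs 12
    }
}
```

Against a pure defector the reciprocator loses only the first round (the sucker's payoff) and then matches defection for defection; the tournament discussion that follows explains why such "nice but retaliatory" strategies nevertheless did well overall.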
Strategies in Axelrod's Tournament
- ALL-D: always defect — the "hawk" strategy.
- TIT-FOR-TAT:
  1. On round u = 0: cooperate.
  2. On round u > 0: do what your opponent did on round u − 1.
- TESTER: on the first round, defect. If the opponent retaliated, then play TIT-FOR-TAT. Otherwise, intersperse cooperation and defection.
- JOSS: as TIT-FOR-TAT, except periodically defect.

Recipes for Success in Axelrod's Tournament
Axelrod suggests the following rules for succeeding in his tournament:
- Don't be envious: don't play as if it were zero-sum!
- Be nice: start by cooperating, and reciprocate cooperation.
- Retaliate appropriately: always punish defection immediately, but use "measured" force — don't overdo it.
- Don't hold grudges: always reciprocate cooperation immediately.

5 The Game of Chicken

- Consider another type of encounter — the game of chicken (each cell lists u_i / u_j):

                 i: defect    i: coop
    j: defect     1 / 1        2 / 4
    j: coop       4 / 2        3 / 3

  (Think of James Dean in Rebel Without a Cause: swerving = coop, driving straight = defect.)
- Difference from the prisoner's dilemma: mutual defection is the most feared outcome, whereas the sucker's payoff is most feared in the prisoner's dilemma.
- The strategy pairs (C, D) and (D, C) are in Nash equilibrium.

6 Other Symmetric 2 × 2 Games

- Given the four possible outcomes of symmetric cooperate/defect games, there are 24 possible orderings on outcomes:
  - CC ≻_i CD ≻_i DC ≻_i DD: cooperation dominates;
  - DC ≻_i DD ≻_i CC ≻_i CD: deadlock — you will always do best by defecting;
  - DC ≻_i CC ≻_i DD ≻_i CD: prisoner's dilemma;
  - DC ≻_i CC ≻_i CD ≻_i DD: chicken;
  - CC ≻_i DC ≻_i DD ≻_i CD: stag hunt.

LECTURE 7: REACHING AGREEMENTS
An Introduction to Multiagent Systems

1 Reaching Agreements

- How do agents reach agreements when they are self-interested?
- In an extreme case (a zero-sum encounter) no agreement is possible — but in most scenarios, there is the potential for mutually beneficial agreement on matters of common interest.
- The capabilities of negotiation and argumentation are central to the ability of an agent to reach such agreements.

Mechanisms, Protocols, and Strategies
- Negotiation is governed by a particular mechanism, or protocol.
- The mechanism defines the "rules of encounter" between agents.
- Mechanism design is designing mechanisms so that they have certain desirable properties.
- Given a particular protocol, how can a particular strategy be designed that individual agents can use?

Mechanism Design
Desirable properties of mechanisms:
- convergence / guaranteed success;
- maximising social welfare;
- Pareto efficiency;
- individual rationality;
- stability;
- simplicity;
- distribution.

2 Auctions

- An auction takes place between an agent known as the auctioneer and a collection of agents known as the bidders.
- The goal of the auction is for the auctioneer to allocate the good to one of the bidders.
- In most settings the auctioneer desires to maximise the price; bidders desire to minimise the price.

Auction Parameters
- Goods can have: private value; public/common value; correlated value.
- Winner determination may be: first-price; second-price.
- Bids may be: open cry; sealed bid.
- Bidding may be: one shot; ascending; descending.

English Auctions
- The most commonly known type of auction: first-price, open cry, ascending.
- The dominant strategy is for an agent to successively bid a small amount more than the current highest bid until the price reaches their valuation, then withdraw.
- Susceptible to: the winner's curse; shills.
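The dominant-strategy dynamic of the English auction can be sketched in a few lines (this is illustrative, not from the notes): every bidder tops the current high bid by a small increment while the price stays within their private valuation, and drops out otherwise.

```java
// Sketch: an English (open-cry, ascending) auction where every bidder
// follows the dominant strategy described in the notes.
public class EnglishAuction {
    // Returns {winner index, final price}, given private valuations.
    static int[] run(int[] valuation, int increment) {
        int price = 0, winner = -1;
        boolean someoneBid = true;
        while (someoneBid) {
            someoneBid = false;
            for (int b = 0; b < valuation.length; b++) {
                // Top the current high bid only while it stays within valuation.
                if (b != winner && price + increment <= valuation[b]) {
                    price += increment;
                    winner = b;
                    someoneBid = true;
                }
            }
        }
        return new int[] {winner, price};
    }

    public static void main(String[] args) {
        // Valuations 60, 100, 85, bidding in steps of 5.
        int[] result = run(new int[] {60, 100, 85}, 5);
        System.out.println("winner=" + result[0] + " price=" + result[1]);
    }
}
```

With valuations 60/100/85 the highest-valuation bidder wins at a price of 85: the bidding stops roughly at the second-highest valuation, which foreshadows the Vickrey (second-price) auction discussed below.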
Dutch Auctions
Dutch auctions are examples of open-cry, descending auctions:
- the auctioneer starts by offering the good at an artificially high value;
- the auctioneer lowers the offer price until some agent makes a bid equal to the current offer price;
- the good is then allocated to the agent that made the offer.

First-Price Sealed-Bid Auctions
First-price sealed-bid auctions are one-shot auctions:
- there is a single round;
- bidders submit a sealed bid for the good;
- the good is allocated to the agent that made the highest bid;
- the winner pays the price of the highest bid.
The best strategy is to bid less than your true valuation.

Vickrey Auctions
- Vickrey auctions are second-price sealed-bid auctions.
- The good is awarded to the agent that made the highest bid, at the price of the second-highest bid.
- Bidding your true valuation is the dominant strategy in Vickrey auctions.
- Vickrey auctions are susceptible to antisocial behavior.

3 Negotiation

- Auctions are only concerned with the allocation of goods; richer techniques for reaching agreements are required.
- Negotiation is the process of reaching agreements on matters of common interest.
- Any negotiation setting will have four components:
  - a negotiation set: the possible proposals that agents can make;
  - a protocol;
  - strategies, one for each agent, which are private;
  - a rule that determines when a deal has been struck, and what this agreement deal is.
- Negotiation usually proceeds in a series of rounds, with every agent making a proposal at every round.

3.1 Negotiation in Task-Oriented Domains

Imagine that you have three children, each of whom needs to be delivered to a different school each morning. Your neighbour has four children, and also needs to take them to school. Delivery of each child can be modelled as an indivisible task. You and your neighbour can discuss the situation and come to an agreement that is better for both of you — for example, by carrying the other's child to a shared destination, saving him the trip. There is no concern about being able to achieve your tasks by yourself. The worst that can happen is that you and your neighbour won't come to an agreement about setting up a car pool, in which case you are no worse off than if you were alone. You can only benefit (or do no worse) from your neighbour's tasks.

Assume, though, that one of my children and one of my neighbour's children both go to the same school — that is, the cost of carrying out these two deliveries (two tasks) is the same as the cost of carrying out one of them. It obviously makes sense for both children to be taken together, and only my neighbour or I need make the trip to carry out both tasks.

TODs Defined
- A task-oriented domain (TOD) is a triple ⟨T, Ag, c⟩, where:
  - T is the (finite) set of all possible tasks;
  - Ag = {1, ..., n} is the set of participant agents;
  - c : ℘(T) → R⁺ defines the cost of executing each subset of tasks.
- An encounter is a collection of tasks ⟨T_1, ..., T_n⟩, where T_i ⊆ T for each i ∈ Ag.

Deals in TODs
- Given an encounter ⟨T_1, T_2⟩, a deal will be an allocation of the tasks T_1 ∪ T_2 to the agents 1 and 2.
- The cost to agent i of deal δ = ⟨D_1, D_2⟩ is c(D_i), and will be denoted cost_i(δ).
- The utility of deal δ to agent i is: utility_i(δ) = c(T_i) − cost_i(δ).
- The conflict deal, Θ, is the deal ⟨T_1, T_2⟩ consisting of the tasks originally allocated. Note that utility_i(Θ) = 0 for all i ∈ Ag.
- Deal δ is individual rational if it weakly dominates the conflict deal.

The Negotiation Set
- The set of deals over which agents negotiate are those that are:
  - individual rational;
  - Pareto efficient.
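The school-run example can be put into numbers. The sketch below (illustrative values, not from the notes) models each delivery task by the school it goes to and takes c(S) to be the number of distinct schools visited, so that two children bound for the same school cost one trip; it then evaluates utility_i(δ) = c(T_i) − cost_i(δ) for a proposed deal.

```java
import java.util.Set;

// Sketch: utilities of deals in a task-oriented domain, with tasks
// modelled as destination schools and cost = distinct schools visited.
public class TodDeals {
    // c(S): cost of a set of deliveries.
    static int cost(Set<String> schools) {
        return schools.size();
    }

    // utility_i(deal) = c(T_i) - cost_i(deal)
    static int utility(Set<String> originalTasks, Set<String> allocatedTasks) {
        return cost(originalTasks) - cost(allocatedTasks);
    }

    public static void main(String[] args) {
        Set<String> mine = Set.of("A", "B", "C");            // my three children
        Set<String> neighbours = Set.of("A", "D", "E", "F"); // school A is shared
        // Deal: I take every child bound for A, B, C; my neighbour covers D, E, F.
        Set<String> myShare = Set.of("A", "B", "C");
        Set<String> theirShare = Set.of("D", "E", "F");
        System.out.println("my utility: " + utility(mine, myShare));             // 0
        System.out.println("their utility: " + utility(neighbours, theirShare)); // 1
    }
}
```

Both utilities are non-negative, so the deal weakly dominates the conflict deal and is individual rational: I am no worse off (my trips are unchanged, but one now carries an extra child), and my neighbour saves a trip.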
The Negotiation Set Illustrated
[Slide figure: the space of all possible deals drawn as a circle in the plane of (utility for agent i, utility for agent j). The conflict deal fixes the utility floor for each agent; the deals on the arc from B to C are Pareto optimal, and hence in the negotiation set.]

The Monotonic Concession Protocol
The rules of this protocol are as follows:
- Negotiation proceeds in rounds.
- On round 1, agents simultaneously propose a deal from the negotiation set.
- Agreement is reached if one agent finds that the deal proposed by the other is at least as good as or better than its own proposal.
- If no agreement is reached, then negotiation proceeds to another round of simultaneous proposals.
- In round u + 1, no agent is allowed to make a proposal that is less preferred by the other agent than the deal it proposed at time u.
- If neither agent makes a concession in some round u > 0, then negotiation terminates, with the conflict deal.

The Zeuthen Strategy
Three problems:
- What should an agent's first proposal be? Its most preferred deal.
- On any given round, who should concede? The agent least willing to risk conflict.
- If an agent concedes, then how much should it concede? Just enough to change the balance of risk.

Willingness to Risk Conflict
- Suppose you have conceded a lot. Then:
  - your proposal is now near to the conflict deal;
  - in case conflict occurs, you are not much worse off;
  - you are more willing to risk conflict.
- An agent will be more willing to risk conflict if the difference in utility between its current proposal and the conflict deal is low.

Nash Equilibrium Again...
- The Zeuthen strategy is in Nash equilibrium: under the assumption that one agent is using the strategy, the other can do no better than use it himself.
- This is of particular interest to the designer of automated agents. It does away with any need for secrecy on the part of the programmer. An agent's strategy can be publicly known, and no other agent designer can exploit the information by choosing a different strategy. In fact, it is desirable that the strategy be known, to avoid inadvertent conflicts.

Deception in TODs
Deception can benefit agents in two ways:
- Phantom and decoy tasks: pretending that you have been allocated tasks you have not.
- Hidden tasks: pretending not to have been allocated tasks that you have been.

4 Argumentation

- Argumentation is the process of attempting to convince others of something.
- Gilbert (1994) identified four modes of argument:
  1. Logical mode: "If you accept that A, and that A implies B, then you must accept that B."
  2. Emotional mode: "How would you feel if it happened to you?"
  3. Visceral mode: "Cretin!"
  4. Kisceral mode: "This is against Christian teaching!"

Logic-Based Argumentation
The basic form of logical arguments is as follows:
    Database ⊢ (Sentence, Grounds)
where:
- Database is a (possibly inconsistent) set of logical formulae;
- Sentence is a logical formula, known as the conclusion; and
- Grounds is a set of logical formulae such that:
  1. Grounds ⊆ Database; and
  2. Sentence can be proved from Grounds.

Attack and Defeat
Let (φ1, Γ1) and (φ2, Γ2) be arguments from some database Δ. Then (φ2, Γ2) can be defeated (attacked) in one of two ways:
1. (φ1, Γ1) rebuts (φ2, Γ2) if φ1 ≡ ¬φ2.
2. (φ1, Γ1) undercuts (φ2, Γ2) if φ1 ≡ ¬ψ for some ψ ∈ Γ2.
A rebuttal or undercut is known as an attack.
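Attack relations can be evaluated without looking inside the arguments at all: call an argument "in" when all of its attackers are defeated, and "out" when it has an undefeated attacker. The following sketch (not from the notes; names are illustrative) iterates those two rules to a fixed point over an attack graph given as a map from each argument to its set of attackers. For attack graphs without cycles this settles every argument; cyclic graphs are left undecided, which is one reason richer semantics exist.

```java
import java.util.*;

// Sketch: label abstract arguments "in" (all attackers out) or
// "out" (some attacker in), iterating until nothing changes.
public class AbstractArgs {
    static Set<String> inArguments(Set<String> args, Map<String, Set<String>> attackers) {
        Set<String> in = new HashSet<>(), out = new HashSet<>();
        boolean changed = true;
        while (changed) {
            changed = false;
            for (String a : args) {
                Set<String> att = attackers.getOrDefault(a, Set.of());
                if (!in.contains(a) && out.containsAll(att)) {
                    in.add(a);                 // every attacker defeated -> in
                    changed = true;
                }
                for (String b : att)
                    if (in.contains(b) && out.add(a))
                        changed = true;        // undefeated attacker -> out
            }
        }
        return in;
    }

    public static void main(String[] args) {
        // c attacks b, b attacks a; c itself is unattacked.
        Map<String, Set<String>> attackers =
                Map.of("a", Set.of("b"), "b", Set.of("c"));
        // c is in (unattacked), so b is out, so a is back in.
        System.out.println(inArguments(Set.of("a", "b", "c"), attackers));
    }
}
```

The chain example shows the reinstatement effect: b alone would defeat a, but because c defeats b, the argument a survives.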
Abstract Argumentation
- Concerned with the overall structure of the argument, rather than the internals of individual arguments.
- Write x → y to mean "argument x attacks argument y", "x is a counterexample of y", or "x is an attacker of y" — where we are not actually concerned with what x and y are.
- An abstract argument system is a collection of arguments together with a relation "→" saying what attacks what.
- An argument is "out" if it has an undefeated attacker, and "in" if all its attackers are defeated.

An Example Abstract Argument System
[Slide figure: a directed attack graph over several named arguments; the diagram did not survive extraction.]

YAES simulator: a tutorial
Lotzi Boloni
April 27, 2008 — version 0.2

Contents
1 The big picture
2 How to run a simulation
  2.1 Running a simulation interactively
  2.2 Running a simulation without interactive control
3 The components of a simulation
  3.1 The simulation input
  3.2 The simulation output
  3.3 The simulation code
  3.4 The update function
  3.5 The context
4 How to create a video
5 Generating graphs and presenting results
  5.1 Parameter sweeps
    5.1.1 Saving your data
  5.2 Generating graphs

1 The big picture

YAES is a timestep simulator; that is, the simulator performs something at every timestep. The inputs to the simulator are:
- the input parameters (SimulationInput);
- the simulation code (a class implementing the yaes.framework.simulation.ISimulationCode interface).
The simulation code specifies what to do before the simulation starts (in the setup function), at every timestep (in the update function), and after the simulation finishes (in the postprocess function).

While the simulation is running, its current data is carried in the context (a class implementing the yaes.framework.simulation.IContext interface). It is important that you keep all the information in this class, rather than in other classes you write: this allows many cool things, like re-running only parts of a series of simulations, processing the results of the simulation later, distributing a simulation over many machines, and so on.

Finally, the output of the simulation is captured in the yaes.framework.simulation.SimulationOutput class. You don't need to overwrite this class; it is fine as it is. For your convenience, the SimulationOutput class will carry all the input parameters, the measurements you might have made, and the final version of the context. SimulationOutput is serializable, and if you kept the context serializable as well, then you can just write and load it to a file, and you have the results of the simulation stored for future inspection.

2 How to run a simulation

We will assume that you have the input parameters, the context class MyContext, and the simulation code class MyCode. In the first approximation, there are two ways to run a simulation.

In the interactive mode, you step through the simulation step by step. This makes sense during debugging or demoing. Nevertheless, you might want to have some sort of visualization (at least informative text output) happening in this case.

Alternatively, you just want to run a simulation as fast as possible, and you are only interested in the output. This is important in many scientific studies, where you need to run the same simulation many times and get the average values — or, alternatively, you need to run the simulation many times while varying one or more parameters.

2.1 Running a simulation interactively

This is what you will put somewhere in your code:

    SimulationInput si = new SimulationInput();
    si.setStepTime(100); // run for 100 cycles
    si.setParameter("Temperature", 70);

In practice, you will almost always define a constant for the string "Temperature". Now, for the run itself:

    MyContext context = new MyContext();
    SimulationOutput simulationOutput =
        SimulationControlGui.simulation(si, MyCode.class, context);

And this is it. Note that the simulation code is passed as a class, rather than as an externally instantiated object. If you run this code, it brings up the simulation control window (Figure 1).

[Figure 1: the simulation control window. You can step through the simulation timestep by timestep by pushing Step; at the end of the run, the fields of the SimulationOutput are displayed.]

2.2 Running a simulation without interactive control

To run the simulation without the interactive interface, run it through the Simulation class instead of SimulationControlGui:

    SimulationOutput simulationOutput =
        Simulation.simulation(si, MyCode.class, context);

The simulation runs at full speed for the number of timesteps specified in the simulation input, with text messages informing the user about the progress of the simulation.

3 The components of a simulation

3.1 The simulation input

The simulation input is a repository of values, addressed by name. You set and retrieve parameters by their string name:

    si.setParameter("Temperature", 70);
    ...
    double t = si.getParameterDouble("Temperature");

The simulation input can be created with an optional parameter taking another simulation input object. The effect is to copy all the parameters over to the new input. This comes in handy if you need to generate inputs where only one parameter differs (a parameter sweep).

3.2 The simulation output

The simulation output is also a repository of values addressed by name. However, its main focus is the processing and manipulation of the statistical properties of the values. Let us see an example:

    // Create a simulation output and update its variable "X"
    SimulationOutput so = new SimulationOutput();
    so.update("X", 1.0);
    so.update("X", 5.0);
    so.update("X", 4.0);

Now we can inspect this variable. We can, of course, retrieve its last value:

    double x = so.getValue("X", RandomVariableProbe.LASTVALUE);

which will be, of course, 4. But we can also retrieve the maximum, minimum, and average:

    xmax = so.getValue("X", RandomVariableProbe.MAX);
    xmin = so.getValue("X", RandomVariableProbe.MIN);
    xaverage = so.getValue("X", RandomVariableProbe.AVERAGE);

We can retrieve the sum:

    xsum = so.getValue("X", RandomVariableProbe.SUM);

the number of times the variable was updated:

    xcount = so.getValue("X", RandomVariableProbe.COUNT);

or the variance:

    xvariance = so.getValue("X", RandomVariableProbe.VARIANCE);

We can also retrieve the lower and higher ends of the 95% confidence interval. All of these come in handy in the post-simulation analysis, the generation of graphs, and so on.

3.3 The simulation code

The simulation code class needs to implement the actual simulation. It appears that this will be a very complex class, but in practice it is usually very simple. One thing to remember: you should keep the simulation code class stateless — that is, do not create any variables here. All the state of the simulation should go into the context. Here is what you would have in the setup:

    public void setup(SimulationInput sip, SimulationOutput sop, IContext theContext) {
        final DirectedDiffusionContext context = (DirectedDiffusionContext) theContext;
        context.initialize(sip, sop);
    }

So, basically, you initialize the context. The context, having state, is worth initializing; the code, not having any variables, has nothing to initialize.

The postprocess function gives you an opportunity to do some calculations after the simulation has finished. One example would be to calculate values which cannot be calculated during the simulation. In the first approximation, leave this function empty, and add code on a need-by-need basis.

3.4 The update function

This is where the real work of the simulation happens. At every timestep, this function needs to perform all the activities which advance the simulation: vehicles move, agents exchange messages and make decisions, energy gets consumed, and so on.

However, this functionality is the business of the individual objects which are part of the environment. These objects need to provide an update function which performs the work. The responsibility of the update function in the simulation code is to call the update function of all the relevant objects in the context.

The update function takes as a parameter the current time, and returns an integer value. Return 1 if you want to continue the simulation for another timestep; return 0 if you want to terminate it early (the simulation will terminate anyhow when the timesteps specified in the simulation input expire).

3.5 The context

As we said previously, the context is the repository of the current state of the simulation. There are essentially two different types of objects which you keep here:
- constants and variables, which are here for convenience and measurement purposes;
- active objects, which need to be updated (more exactly, themselves update) at every simulation step. Examples are agents, vehicles, network nodes, sensors, and so on.

One special object of this type is the World. The World object is supposed to represent the environment in which the active objects operate. There are two ready-made world objects in YAES:
- yaes.framework.world.World: a generic world for embodied agents. It contains time, a map, a list of named locations, and a directory of objects.
- yaes.framework.world.sensornetwork.SensorNetworkWorld: in addition to the regular world, it maintains a list of sensors, actuators, and mobile nodes (such as intruders). It also manages the communication and perception among the nodes.

In an ideal world, you would develop a world model for your specific application. In a lesser world, just take one of the existing ones and, if it does not cover what you need, complement it with objects in your context.

4 How to create a video

We assume that you have a running simulation of your chosen application, and that you have a working display of your simulation on the visual panel. The first step is to convince YAES to save an image of the visual panel at every simulation step. To do this, you need to go to your xxxSimulation.java class (the one which implements ISimulationCode). In the update function, you will probably find the call to repaint the visual display. Right after that, you need to enter the saving of the file, so the result should be as follows:

    context.getVisual().repaint();
    String fileName = String.format("video%03d.jpg", (int) time);
    context.getVisual().saveImage(fileName);

This will save the files video001.jpg, video002.jpg, etc. What remains is to create a video out of these files. You can use your favorite video editor program (some versions of Microsoft Windows come with a simple one). Another way is to use mencoder. Download the mplayer package from http://www.mplayerhq.hu. The command line you want to use is something like this:

    mencoder mf://*.jpg -mf w=800:h=600:fps=25:type=jpg -ovc lavc \
        -lavcopts vcodec=mpeg4:mbd=2:trell -oac copy -o output.avi

For more complete information, check http://www.mplayerhq.hu/DOCS/HTML/en/menc-feat-enc-images.html.

5 Generating graphs and presenting results

5.1 Parameter sweeps

We assume that you have your simulation running, and that you are collecting the performance results of your simulation in the SimulationOutput. Now you want to prepare a presentation or a paper with your performance results. The accepted way to do this is to show a graph which has on the X axis something which is considered an input parameter (such as the number of nodes), and on the Y axis something which is considered a performance metric (such as packets lost).

Let us first consider the data acquisition. The normal way to do this is to perform the simulation for all the values of the specific input parameter in the range, while keeping all the other parameters constant. This process is called a parameter sweep. Parameter sweeps tend to be computationally expensive, thus you don't want to run them visually; much more important than the visualization is the ability to see how far they have progressed and how much is left. This is how you perform a parameter sweep:

    List<SimulationInput> inputs = new ArrayList<SimulationInput>();
    for (int nodes = 10; nodes <= 100; nodes = nodes + 5) {
        SimulationInput sim = new SimulationInput(model);
        sim.setParameter("Nodes", nodes);
        inputs.add(sim);
    }
    List<SimulationOutput> outputs =
        Simulation.simulationSet(inputs, MySimulation.class, MyContext.class);

To put it simply: you create a list of SimulationInput objects which differ in only one parameter, feed it to YAES, and what you get is a list of SimulationOutput objects. One little thing to consider is that the context will be created with a default constructor; thus, if you need to initialize it, do it in the setup function of the simulation.

Many times, you want to compare your algorithm against its competitors (e.g. random, greedy) or against differently parametrized versions of the same algorithm. You will need to repeat the parameter sweep for each competitor. The output of this process is a collection of lists of simulation output objects.

5.1.1 Saving your data

The lists of SimulationOutputs are serializable, and can be read and written to a file using the save, saveList, restore, and restoreList functions in SimulationOutput. Use them. Do not write a long function which does all the calculations in memory and, at the end, generates the graphs and discards the data. Put the generation of the graphs and the running of the simulation in separate menu items.

You might want to check the Simulation.cachedSimulationSet() function, which allows you to rerun only parts of the simulation. Although it seems like wasted time, the effort put into running your simulation efficiently is time well spent: you will waste more time rerunning the simulation over and over.

5.2 Generating graphs

We assume that you have the lists of simulation outputs, as discussed above. Now you can generate a graph by specifying, for each individual line on the graph:
- what list of simulation outputs to take the data from;
- what value to put on the X axis (this will be the parameter you have done the parameter sweep on);
- what value to put on the Y axis (this would be the performance metric of interest, and its appropriate statistical sample: last value, average, minimum, maximum, and so on).

Here is an example:

    List<SimulationOutput> myalgorithm = ...;     // obtained from simulation
    List<SimulationOutput> randomalgorithm = ...; // obtained from simulation
    PlotDescription pd = new PlotDescription("The number of nodes", "The performance");
    pd.addPlotLine(new PlotLineDescription("Nodes", "Performance",
            "My algorithm", myalgorithm));
    pd.addPlotLine(new PlotLineDescription("Nodes", "Performance",
            "Random algorithm", randomalgorithm));
    pd.generate(new File("performancegraph.m"));

Note that a different constructor of the PlotLineDescription class allows you to plot various aggregates, such as maximum, minimum, average, sum, and so on. This segment of code will create the file performancegraph.m. You simply need to run this file in Matlab to display the graph. You can use the Matlab tools to customize the graph to your liking, and then save it in the desired format, such as EPS for inclusion in LaTeX.
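The overall pattern the tutorial describes — parameters in, a setup/update loop over stateful context objects, aggregated statistics out — does not depend on YAES itself. Here is a minimal, self-contained sketch of the same structure (all names here are illustrative; none of this is YAES API):

```java
import java.util.Map;

// YAES-free sketch of the timestep-simulator pattern:
// parameters -> setup -> update loop -> postprocess statistics.
public class MiniSim {
    interface Updateable { int update(int time); } // 1 = continue, 0 = stop early

    // Context: all simulation state lives here, as the tutorial advises.
    static class Context {
        int energy;
        void initialize(Map<String, Integer> params) { energy = params.get("Energy"); }
    }

    // The "simulation code" stays stateless: it only drives the context.
    static Map<String, Double> run(Map<String, Integer> params, int steps) {
        Context ctx = new Context();
        ctx.initialize(params);                  // setup
        Updateable agent = time -> {             // one active object in the context
            ctx.energy -= 1;                     // consume energy each timestep
            return ctx.energy > 0 ? 1 : 0;       // ask to stop when depleted
        };
        int t = 0;
        for (; t < steps; t++) {                 // the update loop
            if (agent.update(t) == 0) break;     // early termination
        }
        // postprocess: aggregate measurements, like SimulationOutput probes
        return Map.of("LASTVALUE", (double) ctx.energy,
                      "COUNT", (double) (t + 1));
    }

    public static void main(String[] args) {
        System.out.println(run(Map.of("Energy", 3), 100));
    }
}
```

Starting with 3 units of energy, the agent runs three updates and stops itself, regardless of the 100-step budget — the same early-termination contract (return 1 to continue, 0 to stop) that the YAES update function uses.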