by: Orval Funk

# STAT 991: Seminar in Advanced Applications of Statistics (SEMINADVAPPLOFSTAT)


These 23 pages of class notes were uploaded by Orval Funk on Monday, September 28, 2015. The notes belong to STAT 991 at the University of Pennsylvania, taught by Staff in Fall. Since upload they have received 21 views. For similar materials, see /class/215439/stat991-university-of-pennsylvania in Statistics at the University of Pennsylvania.

Date Created: 09/28/15
Maybe PV time is weird. Here is a spline smooth of the PV time vs. hour of day for all calls going to the service queue on 08/12/02. Note the peaks at about 9:30, 10:45, 12:00, and 1:30. These are not artifacts of the spline-smoothing method; they are real. In fact, if you look at a scatterplot of the data itself (now on our website) in the appropriate detail, you will see that there is something specific and very noticeable happening around these time periods. Try it!

I suspect that this strange something is related to the non-Poissonicity of the service-queue arrival times that we discussed. I don't yet know what the PV phenomenon is, nor what the connection might be. I haven't yet looked, so I don't even know whether it happens on other days, though I suspect this is the case. Hence, as part of the project challenge, you need to see: what this peculiarity is; whether it is happening on other days; what its relation is to time of day; more pertinently, perhaps, what its relation is to congestion in the system; and, finally, whether it is the cause of the non-Poissonicity we've observed. As a further complication, note that we have so far observed non-Poissonicity at other times of day as well, while the weirdness here is mainly associated with particular periods of the day. However, perhaps this type of extended PV time occurs at those other times of day as well, frequently enough to cause non-Poissonicity but not frequently enough to show up in the spline smooth. All these questions and issues should give you plenty to think about in trying to explain the causes of the non-Poissonicity, and its nature and extent.

[Figure: bivariate fit of PVTIME (0 to 70) against VRU entry hour (7 to 24), with a smoothing spline fit, lambda = 0.1; R-square = 0.101013, sum of squares error = 16189991.]
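Since the plots themselves are not reproduced here, here is a minimal runnable sketch of the kind of smooth described above. It uses a Nadaraya-Watson (Gaussian-kernel) smoother as a stand-in for the spline smooth, on made-up data with a single elevated period around noon; the variable names and the shape of the bump are illustrative assumptions, not the real call-center data.

```python
import numpy as np

def nw_smooth(x, y, grid, h=0.25):
    # Nadaraya-Watson kernel smooth of y against x, with a Gaussian kernel
    # of bandwidth h (in hours); a stand-in for the spline smooth.
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

# Made-up stand-in for (VRU entry hour, PV time): a flat baseline with one
# elevated period around noon, loosely shaped like the 08/12/02 plot.
rng = np.random.default_rng(0)
hours = rng.uniform(7, 24, 4000)
pv = (20
      + 15 * np.exp(-0.5 * ((hours - 12.0) / 0.3) ** 2)
      + rng.normal(0, 5, hours.size))

grid = np.linspace(7, 24, 341)
smooth = nw_smooth(hours, pv, grid)
peak_hour = grid[np.argmax(smooth)]
print(round(peak_hour, 2))  # the smooth recovers the bump near hour 12
```

A smaller bandwidth shows more of the short-lived peaks at the cost of extra wiggles, which is exactly the bandwidth tradeoff discussed later in these notes.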
[Figure: estimation-of-load plots from the Israeli data for Nov and Dec, regular (PS) calls, by day of week.]

The first plot shows the estimates of the mean arrival rate and of the mean service time. The arrival rates are multiplied by 3 so that both curves can be shown on the same overlay plot.

[Figure: overlay plot of the predicted N(day, min) x 3 and the estimated mean service time (min) against exit time (quarter hour).]

The second overlay plot shows these two curves along with the estimated load.

[Figure: overlay plot of the load, the estimated mean service time (min), and the predicted N(day, min) x 3 against exit time (quarter hour).]

[Figure: plot of load for IN calls, Nov and Dec: estimated N/hr, mean service time (min), and load (divided by 10) against exit time (quarter hour).]

## Estimation of rates in an inhomogeneous Poisson process (Cox process)

Notes for Stat 991, Spring 2005. L. Brown

#### Basic setting

Let $T_1, T_2, \ldots$ be the arrival times of a Cox process having intensity $\lambda(t)$. For notational simplicity, assume $0 \le t \le 1$. This note discusses the estimation of $\lambda(t)$. For various reasons it is often convenient to rewrite $\lambda(t)$ as

$$\lambda(t) = \Upsilon\,\ell(t), \quad\text{where } \int_0^1 \ell(r)\,dr = 1. \tag{1}$$

The estimation problem then naturally divides into two parts: estimation of $\Upsilon$ and estimation of $\ell$, where of course

$$\ell \ge 0 \quad\text{and}\quad \int_0^1 \ell(r)\,dr = 1. \tag{1′}$$

It is not required that the estimate $\hat\ell$ satisfy (1′), even though $\ell$ does, although it is often helpful to impose that constraint.

#### Estimation of Υ

Let $N = \#\{\text{arrivals in } [0,1]\}$. Then $N \sim \mathrm{Poiss}(\Upsilon)$. It is ancillary to $\ell$, since its distribution does not depend on $\ell$. Hence it makes sense to base the estimate of $\Upsilon$ only on $N$, and throughout the following we use the common-sense estimate

$$\hat\Upsilon = N. \tag{2}$$

This estimate has other nice credentials; for example, it is MVUE and admissible under squared-error loss.

#### Direct kernel estimates (kernel density estimation)

These begin by fixing the choice of a kernel $K$ satisfying

$$\int K(r)\,dr = 1. \tag{3}$$

It is usually desirable for $K$ to have compact support, say $[-1,1]$, or, if not, to have very thin tails, such as the normal density. For many situations it suffices to consider only non-negative kernels, but there can sometimes be advantages, in both theory and practice, to a choice that is not non-negative. Some conventional choices are the tricube kernel

$$K(r) = C\,\big(1 - |r|^3\big)^3\,\mathbf{1}_{\{|r| \le 1\}}, \tag{4}$$

or the Epanechnikov kernel

$$K(r) = C\,\big(1 - r^2\big)\,\mathbf{1}_{\{|r| \le 1\}}. \tag{4′}$$

The normalizing constants in (4) and (4′) are separately chosen to satisfy (3).
One also needs to choose a bandwidth $h$, and then the estimate of $\lambda$ is given by

$$\hat\lambda(t) = \sum_{i=1}^{N} \frac{1}{h}\,K\!\left(\frac{T_i - t}{h}\right). \tag{5}$$

The bandwidth can be chosen by general theory (based on rather precise assumptions about the smoothness of $\lambda$), by automatic methods such as AIC or cross-validation, or by trial-and-error methods to produce visually satisfactory results. In the latter case I suggest choosing $h$ to be the smallest value at which the visual picture does not contain many obviously extraneous small wiggles.

Other methods can be used for direct fitting, as well as for the less direct method to be discussed below. These include methods based on local linear or polynomial estimation (a mild but very useful generalization of kernel methods), smoothing splines, polynomial splines, orthogonal series methods (wavelets), and perhaps some others. At the level of the asymptotic theory to be discussed below, all of these methods, if properly implemented, can yield asymptotically equivalent performance. In non-asymptotic contexts, suitable versions of some may be preferable to others.

#### Quality of fit

The quality of fit of an estimation procedure is measured through an appropriate loss function. I will concentrate on $L_2$ (quadratic) loss, as measured by

$$L(\hat\ell, \ell) = \int \big(\hat\ell(t) - \ell(t)\big)^2\,dt, \tag{6}$$

and the corresponding risk

$$R(\hat\ell, \ell) = E_\Upsilon\big[R_N(\hat\ell, \ell)\big], \quad\text{where } R_N(\hat\ell, \ell) = E\big[L(\hat\ell, \ell)\,\big|\,N\big]. \tag{7}$$

It is often convenient to also consider quadratic loss at a fixed value $t \in (0,1)$ and its corresponding risk. These are defined as

$$L_t(\hat\ell, \ell) = \big(\hat\ell(t) - \ell(t)\big)^2 \tag{8}$$

and

$$R_t(\hat\ell, \ell) = E_\Upsilon\big[R_{t,N}(\hat\ell, \ell)\big], \quad\text{where } R_{t,N}(\hat\ell, \ell) = E\big[L_t(\hat\ell, \ell)\,\big|\,N\big]. \tag{9}$$

#### Asymptotic order

I'll use the symbol $\asymp$ for this, since my software doesn't have the correct symbol. The definition applies to two sequences of numbers, $x_n$ and $y_n$ say, and is

$$x_n \asymp y_n \iff \exists\,\varepsilon > 0 \text{ such that } \varepsilon \le \liminf_{n\to\infty} \frac{x_n}{y_n} \le \limsup_{n\to\infty} \frac{x_n}{y_n} \le \varepsilon^{-1}.$$

We'll discuss situations in which there exists a polynomial rate $0 < \beta < 1/2$ such that the risks satisfy

$$R_N(\hat\ell, \ell) \asymp N^{-2\beta}. \tag{10}$$

Note that (10) also implies

$$R(\hat\ell, \ell) \asymp \Upsilon^{-2\beta}. \tag{10′}$$

Then, since $R_N(\hat\ell, \ell) = \int R_{t,N}(\hat\ell, \ell)\,dt$, it follows that if

$$R_{t,N}(\hat\ell, \ell) = O\big(N^{-2\beta}\big) \text{ uniformly in } t, \quad\text{and}\quad \mathrm{meas}\big\{t : R_{t,N}(\hat\ell, \ell) \asymp N^{-2\beta}\big\} > 0, \tag{11}$$

then $R_N(\hat\ell, \ell) \asymp N^{-2\beta}$.
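Before turning to the theory, here is a minimal sketch of the direct kernel estimate (5), using the Epanechnikov kernel (4′) on a simulated inhomogeneous Poisson process. The intensity function and all constants below are made up for illustration; they are not from the telephone data.

```python
import numpy as np

def epanechnikov(r):
    # Epanechnikov kernel (4'), with C = 3/4 so that it integrates to 1.
    return 0.75 * (1.0 - r**2) * (np.abs(r) <= 1)

def intensity_estimate(arrivals, grid, h):
    # Direct kernel estimate (5): lambda_hat(t) = sum_i K((T_i - t)/h) / h.
    r = (arrivals[None, :] - grid[:, None]) / h
    return epanechnikov(r).sum(axis=1) / h

# Simulate an inhomogeneous Poisson process on [0, 1] by thinning; the
# intensity lam(t) = 5000*(1 + 0.5*sin(2*pi*t)) is made up for illustration.
rng = np.random.default_rng(1)
lam = lambda t: 5000.0 * (1 + 0.5 * np.sin(2 * np.pi * t))
lam_max = 7500.0
cand = rng.uniform(0, 1, rng.poisson(lam_max))
arrivals = cand[rng.uniform(0, lam_max, cand.size) < lam(cand)]

grid = np.linspace(0, 1, 201)
lam_hat = intensity_estimate(arrivals, grid, h=0.1)

# Away from the boundary the estimate should track lam to within ~10-20%.
mid = (grid > 0.15) & (grid < 0.85)
rel_err = np.max(np.abs(lam_hat[mid] - lam(grid[mid])) / lam(grid[mid]))
print(arrivals.size, round(rel_err, 3))
```

Note the degradation near $t = 0$ and $t = 1$, where the kernel window runs off the edge of the data; this is the boundary effect discussed in the remarks below.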
#### Lipschitz balls

It is necessary to restrict the class of functions being estimated in order to judge the asymptotic performance of nonparametric function-estimation procedures. Lipschitz balls provide an easily understood but still interesting class of functions. Let $\lfloor\alpha\rfloor$ denote the largest integer less than $\alpha$. For any $\alpha > 0$ and $0 < B < \infty$, define the corresponding Lipschitz ball as follows. For $\alpha \le 1$,

$$\mathrm{Lip}(\alpha, B) = \big\{ f : |f(s) - f(t)| \le B\,|s - t|^\alpha \;\;\forall\, s, t \in [0,1] \big\}. \tag{12}$$

For $\alpha > 1$, $\mathrm{Lip}(\alpha, B)$ is the set of $f$ for which the $\lfloor\alpha\rfloor$-th derivative of $f$ exists, $|f^{(i)}| \le B$ for $i = 1, \ldots, \lfloor\alpha\rfloor$, and

$$\big|f^{(\lfloor\alpha\rfloor)}(s) - f^{(\lfloor\alpha\rfloor)}(t)\big| \le B\,|s - t|^{\alpha - \lfloor\alpha\rfloor} \;\;\forall\, s, t.$$

In the current context, when we assume that $\ell \in \mathrm{Lip}(\alpha, B)$ we will also assume that $\ell$ is bounded away from 0 and $\infty$; hence, for our context, we include the statement

$$B^{-1} \le \ell \le B \tag{13}$$

as part of the assumption that $\ell \in \mathrm{Lip}(\alpha, B)$.

#### Asymptotic bias

Let $\hat\lambda$ be given by (5). Then its bias satisfies

$$\mathrm{Bias}\big(\hat\lambda(t)\big) = E\big[\hat\lambda(t)\big] - \lambda(t) = \int \frac{1}{h}\,K\!\left(\frac{r - t}{h}\right)\lambda(r)\,dr - \lambda(t). \tag{14}$$

If we now assume that $K$ satisfies

$$\int r^i K(r)\,dr = 0, \quad i = 1, \ldots, \lfloor\alpha\rfloor, \tag{14′}$$

then

$$\mathrm{Bias}\big(\hat\ell(t)\big) = O(h^\alpha) \tag{15}$$

uniformly for all $t \in [0,1]$ and $\ell \in \mathrm{Lip}(\alpha, B)$. Furthermore, it is possible to choose a sequence of $\ell \in \mathrm{Lip}(\alpha, B)$ so that

$$\mathrm{Bias}\big(\hat\ell(t)\big) \asymp h^\alpha \quad\text{as } N \to \infty \tag{15′}$$

holds on a set of positive measure. I'll leave the full derivation of (15) and (15′) as a rather hard but possible exercise. Many of the main ideas are present even in the special case $\alpha \le 1$. In that case the right side of (14), written for $\hat\ell$, is bounded by

$$\left|\int \frac{1}{h}\,K\!\left(\frac{r - t}{h}\right)\big(\ell(r) - \ell(t)\big)\,dr\right| \le \int |K(v)|\,B\,|hv|^\alpha\,dv \le C\,h^\alpha,$$

substituting $v = (r - t)/h$ and using (12). Then one must also verify (15′) for this same $\alpha$. Note that to verify (15′) it will be necessary to choose $\ell$ to depend on $N$. For $\alpha \le 1$ this is not difficult, but it is a somewhat painful step to write down in full generality. Note that the familiar kernels (4) and (4′) satisfy (14′) for $\alpha \le 2$, but not for larger values of $\alpha$.

#### Asymptotic variance

Since (5) involves a sum of independent variables (conditionally on $N$, the $T_i$ are i.i.d. with density $\ell$), it is also possible to calculate, for $\hat\ell = \hat\lambda/N$, that

$$\mathrm{Var}\big(\hat\ell(t)\,\big|\,N\big) = \frac{1}{N h^2}\int K^2\!\left(\frac{r - t}{h}\right)\ell(r)\,dr\,\big(1 + o(1)\big). \tag{16}$$

Under the condition that $h \to 0$ as $N \to \infty$, this yields

$$\mathrm{Var}\big(\hat\ell(t)\,\big|\,N\big) \asymp \frac{1}{N h}. \tag{17}$$

#### Asymptotic risk

Combining (15) and (17) yields

$$R_N(\hat\ell, \ell) = O\!\left(h^{2\alpha} + \frac{1}{N h}\right) \tag{18}$$

uniformly for $\ell \in \mathrm{Lip}(\alpha, B)$. Also, there exists a sequence of $\ell \in \mathrm{Lip}(\alpha, B)$ so that

$$R_N(\hat\ell, \ell) \asymp h^{2\alpha} + \frac{1}{N h}. \tag{18′}$$
The expression $h^{2\alpha} + (N h)^{-1}$ is minimized by the choice

$$h = h_{\mathrm{opt}} \asymp N^{-1/(2\alpha + 1)}. \tag{19}$$

This yields the optimal rate

$$\mathrm{Rate}_{\mathrm{opt}} = h_{\mathrm{opt}}^{2\alpha} \asymp N^{-2\alpha/(2\alpha+1)}, \tag{20}$$

and the kernel estimator satisfying (14′) and having $h = h_{\mathrm{opt}} \asymp N^{-1/(2\alpha+1)}$ attains this rate, in the sense that

$$\sup_{\ell \in \mathrm{Lip}(\alpha, B)} R_N(\hat\ell, \ell) \asymp N^{-2\alpha/(2\alpha+1)} \asymp \mathrm{Rate}_{\mathrm{opt}}. \tag{20′}$$

#### Asymptotic minimax risk

Let $\mathcal{D}$ denote the class of all procedures. It has been shown that the asymptotic minimax risk satisfies

$$\inf_{\hat\ell \in \mathcal{D}}\;\sup_{\ell \in \mathrm{Lip}(\alpha, B)} R_N(\hat\ell, \ell) \asymp N^{-2\alpha/(2\alpha+1)}. \tag{21}$$

(See below for references.) Hence we say that the kernel estimator, with $K$ satisfying (14′) and with $h = h_{\mathrm{opt}}$ satisfying (19), is a ratewise asymptotic minimax estimator.

#### Remarks

(a) The classic case of (21) is the situation $\alpha = 2$. In that case the asymptotic rate is the now-familiar value $N^{-4/5}$. An anatomy of the above proof, and especially of (14′) and (18), shows that this is the best possible minimax rate attainable by a non-negative kernel. That is, even if $\alpha$ is known to be larger than 2, a non-negative kernel will only attain the minimax rate $N^{-4/5}$.

(b) Results resembling (18) are contained in the path-breaking papers by Rosenblatt and by Parzen.

(c) See Farrell or Stone for early proofs of (21), and also Ibragimov and Hasminskii. Pinsker, and Pinsker and Efromovich, contain even earlier proofs of a result like this, but for Sobolev balls rather than Lipschitz balls. A more modern and cleaner approach to (21) can be found in work by Donoho and collaborators; see especially Donoho, Liu and MacGibbon (1989), Donoho and Liu (1990), and Donoho (1994). The papers by Pinsker et al. and by Donoho et al. also contain precise (or fairly precise) information about the constants present in results like (18)-(21). Such information is needed if one is to construct procedures known to really perform well. After all, a result like (21) only says that the asymptotic minimax risk lies between $c_1 N^{-2\alpha/(2\alpha+1)}$ and $c_2 N^{-2\alpha/(2\alpha+1)}$, where without further information it might be that, say, $c_1 = 0.0001$ and $c_2 = 10000$. Such knowledge by itself isn't very useful at realistic sample sizes. Another paper that provides nice insight into constants and optimality is Sacks and Ylvisaker, whose perspective provides an asymptotic optimality justification for use of the Epanechnikov kernel.
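The bandwidth tradeoff in (19) and (20) can be checked numerically. For the risk proxy $r(h) = h^{2\alpha} + 1/(Nh)$ from (18), calculus gives the exact minimizer $h^* = (2\alpha N)^{-1/(2\alpha+1)}$, which is of order $N^{-1/(2\alpha+1)}$, and the minimized risk scales as $N^{-2\alpha/(2\alpha+1)}$. The sketch below confirms both with a grid search:

```python
import numpy as np

def h_opt(N, a):
    # Exact minimizer of r(h) = h^(2a) + 1/(N*h): h* = (2*a*N)^(-1/(2a+1)),
    # which is of order N^(-1/(2a+1)) as in (19).
    return (2 * a * N) ** (-1.0 / (2 * a + 1))

def risk(h, N, a):
    # Risk proxy from (18): squared bias h^(2a) plus variance 1/(N*h).
    return h ** (2 * a) + 1.0 / (N * h)

a = 2.0  # the "classic" smoothness case; the optimal rate is N^(-4/5)
for N in (10**3, 10**5, 10**7):
    h = h_opt(N, a)
    grid = np.logspace(np.log10(h) - 1, np.log10(h) + 1, 2001)
    h_grid = grid[np.argmin(risk(grid, N, a))]  # grid search agrees with calculus
    # Rescaled risk: risk(h*) * N^(4/5) should be the same constant for every N.
    print(N, round(h, 6), round(h_grid, 6), risk(h, N, a) * N ** (4 / 5))
```

The rescaled risk in the last column is constant across four orders of magnitude in $N$, which is exactly the content of the rate statement (20).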
(d) Much detail is by now known about the practical performance of various classes of nonparametric function estimators. For example, the performance of kernel estimators such as (5) is degraded near the boundary of the range of the density. This problem can be ameliorated by the use of boundary-corrected procedures. Local linear (or local polynomial) estimators are one technique that helps improve performance near the boundary, and sometimes in other respects as well. These techniques are really more natural in the regression setting to be developed below, but they have also been adapted to the density-estimation setting above.

#### Adaptive estimation (going forward)

Examination of the preceding results, culminating in (20), shows that the most desirable bandwidths must adapt to the shape of $\ell$. This is because $h_{\mathrm{opt}} \asymp N^{-1/(2\alpha+1)}$, and hence bias and variance must be balanced by any efficient estimator. This is the so-called bias-variance tradeoff. Furthermore, the bias depends on the unknown form of $\ell$ through the unknown parameters $(\alpha, B)$ of its smoothness class. A more careful look at the consequences of the bias expression in (14) also suggests that it would be desirable to have $h$ depend on $t$ as well as on $\ell$, rather than be constant throughout $[0,1]$. Furthermore, not only $h$ but also the shape of desirable kernels should adapt to the smoothness of $\ell$.

For fixed kernels corresponding to specific assumptions about the smoothness class of $\ell$, standard data-based bandwidth-selection techniques (AIC or cross-validation) do a reasonable job of selecting good constant values of $h$. The issue of adaptively choosing the shape of $K$ is much tougher. Although the adaptation issue for kernel density estimation is not insurmountable, there appear to be better ways to proceed than to try to attack this issue directly within the kernel-density-estimation realm discussed above. Other estimation schemes (e.g., wavelets) adapt in much more natural ways.
The schemes I have in mind are more easily discussed and implemented in the nonparametric-regression context than in the density-estimation context. This motivates trying to transfer the density-estimation problem to a regression problem, so long as this is convenient and it can be shown that very little useful information is lost in the transfer. The problem of constructing confidence statements about the estimator, such as confidence bands, is another motivation for trying to transfer to a regression setting. Accurate confidence statements are a difficult and still unsettled issue in any setting, since they need to (explicitly or implicitly) involve the unknown smoothness of $\ell$. Again, this issue is more easily and naturally approached in the regression setting below than in the density setting above.

#### Binning

Our telephone data is most conveniently recorded as binned count data. Such binning is also the first step in the transfer to a regression model. The issue here is to choose the bin sizes so that useful information is not lost in the binning process. Let $b$ be an integer and let

$$N_j = \#\left\{ T_i : \frac{j-1}{b} \le T_i < \frac{j}{b} \right\}, \quad j = 1, \ldots, b. \tag{22}$$

The bin width is $1/b$. Usually we choose $b$ to depend on $N$. In that situation the values of $N_j$ actually depend on $N$ through $b = b(N)$; in order to keep the notation more compact, this dependence of $N_j$ on $N$ is suppressed. Note that $\sum_j N_j = N$. Let $\tau_j$, $j = 1, \ldots, b$, denote the midpoint of the $j$-th bin. Let $\bar\ell$ be the bin-stepped version of $\ell$; that is,

$$\bar\ell(t) = \ell(\tau_j) \quad\text{for } \frac{j-1}{b} \le t < \frac{j}{b}. \tag{23}$$

The binned problem has discrete values for the independent variable, and so it is natural to define the corresponding discrete version of the $L_2$ loss as

$$\bar L(\hat\ell, \ell) = \frac{1}{b}\sum_{j=1}^{b}\big(\hat\ell(\tau_j) - \ell(\tau_j)\big)^2, \tag{24}$$

with the corresponding conditional risk $\bar r_N(\hat\ell, \ell) = E[\bar L(\hat\ell, \ell)\,|\,N]$, etc. Now note that if $\ell \in \mathrm{Lip}(\alpha, B)$, then

$$\bar L(\hat\ell, \ell) = L(\hat\ell, \ell) + O\big(b^{-\alpha} \vee b^{-2}\big), \tag{25}$$

since the summation in (24) is a Riemann sum for the corresponding $L_2$ integral in (6), and since $\bar\ell(t) - \ell(t) = O\big(b^{-\alpha} \vee b^{-1}\big)$. Hence, if one chooses

$$\frac{1}{b} = o\big(N^{-1/(2\alpha+1)}\big), \tag{26}$$

and if $\hat\ell$ is asymptotically ratewise minimax for the binned problem, in the sense that

$$\sup_{\ell \in \mathrm{Lip}(\alpha, B)} \bar r_N(\hat\ell, \ell) \asymp N^{-2\alpha/(2\alpha+1)} \asymp \mathrm{Rate}_{\mathrm{opt}}, \tag{27}$$

then it follows that $\hat\ell$ is also asymptotically ratewise minimax, in the sense of (20), for the original unbinned problem.
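The Riemann-sum relation behind (24) and (25) is easy to check numerically: for a smooth error curve, the discrete loss converges to the integral loss as the number of bins grows. The error curve below is a made-up stand-in for $\hat\ell - \ell$; for a twice-differentiable curve, the midpoint-rule error shrinks like $b^{-2}$.

```python
import numpy as np

# f is a made-up smooth stand-in for the error curve l_hat - l.
f = lambda t: 0.3 * np.exp(t)

# "Exact" integral loss (6), via a very fine midpoint rule.
t_fine = (np.arange(10**6) + 0.5) / 10**6
L_exact = np.mean(f(t_fine) ** 2)

errs = []
for b in (10, 100, 1000):
    tau = (np.arange(b) + 0.5) / b      # bin midpoints tau_j
    L_bar = np.mean(f(tau) ** 2)        # discrete loss (24)
    errs.append(abs(L_bar - L_exact))
print(errs)  # midpoint-rule error shrinks like 1/b^2
```

Each tenfold increase in $b$ cuts the discrepancy by roughly a factor of 100, consistent with the $O(b^{-2})$ term in (25) for smooth functions.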
It is desirable to convince oneself, at this stage of the argument, that there is an estimator for the binned problem that attains (27). It can be shown that this is achieved by the natural extension of the kernel-estimation idea to the binned-data setting. This estimator is

$$\hat\ell(t) = \frac{1}{N h}\sum_{j=1}^{b} K\!\left(\frac{\tau_j - t}{h}\right) N_j. \tag{28}$$

The pattern of proof is similar to that for the unbinned case, and also involves statements like (25). I'll omit all details, since we'll construct a different asymptotically rate-minimax estimator below.

The preceding argument involving (26) is only asymptotic. It suggests that one will want to choose $1/b$ as small as practical. Certainly it should be chosen asymptotically smaller than (26), and much smaller than $h$, in any situation using kernel-type estimators. Later we will introduce a competing restriction, which allows $1/b$ to be small but not too small.

The minimax part of the above argument is couched only in terms of ratewise results. Actually, the correspondence between the binned and unbinned problems is much tighter when $b$ is chosen suitably small, as in (26). This is partly evident from (25), since the two sides of that expression are equal to within the error term indicated there. More details are beyond the scope of the current discussion. As a further remark, I note that if one wants the binned and unbinned problems to be asymptotically equivalent for any bounded loss, as in Brown and Low, then $\alpha$ and $b$ must satisfy the more restrictive conditions that $\alpha > 1/2$ and $1/b = o\big(N^{-1/(2\alpha)}\big)$.

#### Rooting the binned counts

The method to follow transforms the preceding density-estimation problem into a nonparametric-regression problem through the root-unroot technique. This begins by binning, as above, and then letting

$$Y_j = \sqrt{N_j + \tfrac14}. \tag{29}$$

To explain the reason for the $1/4$ in (29), note that for $Z \sim \mathrm{Poiss}(\xi)$,

$$E\big(\sqrt{Z + c}\big) = \sqrt{\xi} + \frac{c - \tfrac14}{2\sqrt{\xi}} + O\big(\xi^{-3/2}\big). \tag{30}$$

Hence choosing $c = 1/4$ eliminates the term of order $1/\sqrt{\xi}$ and yields

$$E\big(\sqrt{Z + \tfrac14}\big) = \sqrt{\xi} + O\big(\xi^{-3/2}\big). \tag{31}$$

We will see below why such good control over the expectation term is valuable.
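The two preprocessing steps, binning as in (22) and rooting as in (29), can be sketched in a few lines. The simulated arrivals below are homogeneous purely for simplicity; the bin count and sample size are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5000
arrivals = rng.uniform(0, 1, N)   # homogeneous case, for simplicity

b = 64
counts, _ = np.histogram(arrivals, bins=b, range=(0.0, 1.0))  # the N_j of (22)
tau = (np.arange(b) + 0.5) / b                                # bin midpoints tau_j
Y = np.sqrt(counts + 0.25)                                    # root transform (29)

# Each N_j is approximately Poiss(N/b), so by (31) E(Y_j) is close to sqrt(N/b).
print(counts.sum(), round(Y.mean(), 3), round(np.sqrt(N / b), 3))
```

The binned counts sum to $N$ exactly, and the average of the $Y_j$ sits very close to $\sqrt{N/b}$, as the expansion (31) predicts.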
First, let's validate the approximations (30) and (31), and look at their accuracy a little more carefully.

**Proof of (30).** Write the four-term Taylor expansion of $g(Z) = \sqrt{Z + c}$ about $\xi$:

$$\sqrt{Z + c} = g(\xi) + g'(\xi)(Z - \xi) + \tfrac12 g''(\xi)(Z - \xi)^2 + \tfrac16 g'''(\xi)(Z - \xi)^3 + O\big(g''''(\xi)\,(Z - \xi)^4\big). \tag{32}$$

For $Z \sim \mathrm{Poiss}(\xi)$, the central moments satisfy

$$E(Z - \xi) = 0, \quad E(Z - \xi)^2 = \xi, \quad E(Z - \xi)^3 = \xi, \quad E(Z - \xi)^4 = 3\xi^2 + \xi. \tag{33}$$

Taking the expectation of the terms in the Taylor expansion then yields (31). Note that the proof requires the order information contained in (33) about the first four moments. A little more care with the proof will yield the coefficient of the $O(\xi^{-3/2})$ term in (30); however, that coefficient is not needed for the asymptotics to follow.

**Quality of (30).** As we'll see, the best simple picture of the value of the root transform lies in a comparison of $\xi$ with $E^2\big(\sqrt{Z + c}\big)$ for $Z \sim \mathrm{Poiss}(\xi)$. 

[Figure: $E^2\big(\sqrt{Z + c}\big)$ against $\xi \in [0, 5]$ for $c = 0$ and $c = 1/4$, together with the line $y = \xi$; the lowest curve is for $c = 0$, the highest for $c = 1/4$.]

#### Nonparametric regression estimator

To produce slightly cleaner notation, let $\eta(t) = \sqrt{\ell(t)}$. Note that $\ell \in \mathrm{Lip}(\alpha, B)$ implies $\eta \in \mathrm{Lip}(\alpha, B')$, because of (13). Define a nonparametric-regression kernel estimator of $\eta(t)$, based on observation of $Y = (Y_j)$, as

$$\hat\eta(t) = \sqrt{\frac{b}{N}}\;\frac{1}{b h}\sum_{j=1}^{b} K\!\left(\frac{\tau_j - t}{h}\right) Y_j. \tag{34}$$

Then the corresponding (unnormalized) estimator of $\ell(t)$ is

$$\tilde\ell(t) = \hat\eta^{\,2}(t). \tag{35}$$

**Bias of the estimator.** To get an idea why this is a reasonable estimator, first note from (31) that $E(Y_j) = \sqrt{N/b}\;\eta(\tau_j) + O\big((b/N)^{3/2}\big)$, where (13) is used to eliminate $\eta$ from the $O$ term. Then calculate the expectation of $\hat\eta$ as

$$E\big(\hat\eta(t)\big) = \sqrt{\frac{b}{N}}\,\frac{1}{b h}\sum_{j} K\!\left(\frac{\tau_j - t}{h}\right) E(Y_j) = \int \frac{1}{h}\,K\!\left(\frac{r - t}{h}\right)\eta(r)\,dr\,\big(1 + o(1)\big), \tag{36}$$

since the sum in (36) is the Riemann sum for the integral there. (In computations of this sort, one needs to take care that the error in the Riemann-sum approximation is of the desired small order.) Reasoning as in (13)-(15) yields

$$E\big(\hat\eta(t)\big) = \eta(t) + O(h^\alpha) + r_N(t), \tag{37}$$

where $r_N(t)$ collects the remainder terms from (31) and from the Riemann-sum approximation. Hence

$$\mathrm{Bias}\big(\tilde\ell(t)\big) = E\big(\hat\eta^{\,2}(t)\big) - \eta^2(t) = \big(E\hat\eta(t)\big)^2 - \eta^2(t) + \mathrm{Var}\big(\hat\eta(t)\big) = O(h^\alpha) + O\big(r_N(t)\big) + \mathrm{Var}\big(\hat\eta(t)\big). \tag{38}$$

It will thus suffice to choose $b$ large enough, say $b \asymp N^{1/2}$, so that

$$r_N^2(t) = o\big(N^{-2\alpha/(2\alpha+1)}\big) \quad\text{for any } \alpha > 0. \tag{39}$$

**Variance of the estimator.** Reasoning as in (31)-(32), it is straightforward to calculate that

$$\mathrm{Var}(Y_j) = \tfrac14 + O\big(b/N\big). \tag{39′}$$
In this sense, $Z \mapsto \sqrt{Z + \tfrac14}$ is a variance-stabilizing transformation. If one is interested in optimizing the variance-stabilizing property of the transformation, then a better choice is Anscombe's transformation, $\sqrt{Z + \tfrac38}$: for $Z \sim \mathrm{Poiss}(\xi)$ with $\xi \asymp N/b$, one then has $\mathrm{Var}\big(\sqrt{Z + \tfrac38}\big) = \tfrac14 + O\big((b/N)^2\big)$ (see Anscombe, 1948). However, use of this transformation worsens the expectation expansion: the error term in (31) becomes of order $(b/N)^{1/2}$ rather than $(b/N)^{3/2}$. This would entail a smaller choice of $b$ in order to guarantee asymptotic minimaxity, and so is probably not desirable. In any event, the difference in variances is small whether $c = 1/4$ or $c = 3/8$ is used, as is shown by the following plot.

[Figure: $\mathrm{Var}\big(\sqrt{Z + c}\big)$ against the mean of $\mathrm{Poiss}(\xi)$, $\xi \in [0, 20]$, for $c = 0$ (red), $c = 1/4$ (green), and $c = 3/8$ (blue).]

While we're looking at pictures, it may be of interest to note that even for somewhat small values of $\xi$, the distribution of $\sqrt{Z + \tfrac14}$ is approximately normal. This isn't really required for the validity of the mean and variance calculations here, but it's reassuring to know. Here's a comparative plot for the case $\xi = 10$.

[Figure: the distribution of $\sqrt{Z + \tfrac14}$ for $Z \sim \mathrm{Poiss}(10)$, compared with the matching normal density.]

Calculations similar to those we've done before will then show that

$$\mathrm{Var}\big(\hat\eta(t)\big) = O\!\left(\frac{1}{N h}\right). \tag{40}$$

It follows that also

$$\mathrm{Var}\big(\tilde\ell(t)\big) = O\!\left(\frac{1}{N h}\right). \tag{41}$$

The detailed verification of the passage from (40) to (41) involves several steps, including checking that $E\big(\hat\eta^{\,4}\big)$ is well behaved. I'll skip the details, but note that (41) can be motivated from (40) by rewriting (40) as $\hat\eta(t) = \eta(t) + O_P\big((N h)^{-1/2}\big)$. Then

$$\tilde\ell(t) = \hat\eta^{\,2}(t) = \eta^2(t) + 2\,\eta(t)\,O_P\big((N h)^{-1/2}\big) + O_P\big((N h)^{-1}\big).$$

With some additional care, (41) then follows.

#### Asymptotic minimax rate of risk

Combining the bias bounds (37)-(39) with the variance bound (41) then yields the familiar type of expression

$$R_N(\tilde\ell, \ell) = O\!\left(h^{2\alpha} + \frac{1}{N h}\right), \tag{42}$$

with this expression holding uniformly for $\ell \in \mathrm{Lip}(\alpha, B)$. Hence, for the appropriate choice of $h$,

$$\sup_{\ell \in \mathrm{Lip}(\alpha, B)} R_N(\tilde\ell, \ell) \asymp N^{-2\alpha/(2\alpha+1)} \asymp \mathrm{Rate}_{\mathrm{opt}}. \tag{42′}$$

**Remarks.** The rate in (42′) is the same as that in (20) for the kernel density estimator. An advantage of the root-unroot approach is that it is more natural to use other nonparametric techniques, such as splines, wavelets, or other orthogonal-series methods.
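The claims behind the plots above, that $c = 1/4$ nearly removes the $O(\xi^{-1/2})$ bias in (30)-(31) while all three choices of $c$ give variance close to $1/4$, are easy to check by Monte Carlo. The value $\xi = 20$ and the sample size are illustrative choices:

```python
import numpy as np

# For Z ~ Poiss(xi), compare sqrt(Z + c) across c = 0, 1/4, 3/8:
# the variance is near 1/4 for all three, but only c = 1/4 makes
# E sqrt(Z + c) agree with sqrt(xi) to order xi^(-3/2), per (30)-(31).
rng = np.random.default_rng(3)
xi = 20.0
Z = rng.poisson(xi, size=2_000_000)

biases, variances = {}, {}
for c in (0.0, 0.25, 0.375):
    Y = np.sqrt(Z + c)
    biases[c] = Y.mean() - np.sqrt(xi)
    variances[c] = Y.var()
    print(c, round(variances[c], 4), round(biases[c], 5))
```

The run shows an order-of-magnitude smaller bias for $c = 1/4$ than for $c = 0$, while the three variances are nearly indistinguishable, which is the tradeoff the notes describe between the $1/4$ and Anscombe's $3/8$.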
Versions of these methods can be used to yield procedures that are adaptive over $B$ and also over a wide range of values of $\alpha$. These other methods also work more easily over a wider range of function spaces, and the current knowledge about confidence bands is more advanced for them. What especially helps in the nonparametric-regression context is that the variance of the $Y_j$ is approximately known and constant. It's also easier to use the regression methods in semiparametric contexts.

#### The overall estimate

The discussion at (1) suggests estimating $\lambda(t) = \Upsilon\,\ell(t)$ by

$$\hat\lambda(t) = N\,\tilde\ell(t), \tag{43}$$

with $\tilde\ell$ as described above. The information above, along with fairly straightforward asymptotic calculations, shows that $\hat\lambda(t)$ has quadratic minimax error rate

$$E\big(N\tilde\ell(t) - \Upsilon\ell(t)\big)^2 = O\big(\Upsilon^2\,\Upsilon^{-2\alpha/(2\alpha+1)}\big) + O(\Upsilon) = O\big(\Upsilon^{\,1 + 1/(2\alpha+1)}\big).$$

In spite of its unusual appearance, this is the optimal minimax rate over Lipschitz balls for this problem.

#### Remarks

1. These are minimax rates, because the $O$ term holds uniformly over the Lipschitz ball.

2. The estimator in (43), when used with an adaptive bandwidth scheme such as CV or GCV, seems a reasonable choice for moderately accurate estimation in a variety of practical settings, including our telephone-call arrival setting. As noted, it is asymptotically minimax in rate over fixed Lipschitz balls. Although there is no proof above (or elsewhere), it seems plausible to conjecture from other known results that, with optimal kernels and bandwidth choices, it will come within a modest multiplicative factor (perhaps $< 3$, and maybe even $< 2$) of being asymptotically minimax over such balls. As discussed previously, it will not be adaptive. One will need to look beyond kernel estimators to find practical adaptive procedures, but the root-unroot mechanism still appears to be a reasonable course of action.

3. Even in the non-adaptive case, the performance and appearance of standard kernel estimators can also be improved somewhat with other simple modifications and variants, such as local polynomial schemes.

#### Renormalization: an argument against

Note that

$$1 = \int \ell(r)\,dr = \int \eta^2(r)\,dr. \tag{44}$$
It therefore seems heuristically reasonable that $\hat\eta$ should be renormalized by multiplication by a constant, $\hat c$ say, so that

$$\int \hat c^{\,2}\,\hat\eta^{\,2}(r)\,dr = 1. \tag{45}$$

Denote the new estimator of $\eta$ by $\check\eta = \hat c\,\hat\eta$, and the corresponding estimator of $\ell$ by $\check\ell = \check\eta^{\,2}$, so that $\int \check\ell(r)\,dr = 1$. In view of the properties of $\tilde\ell$ that have already been established, it is not hard to show that $\check\ell$ also has the optimal asymptotic minimax rate in (42), (42′).

In spite of its heuristic plausibility, it is not clear whether $\check\ell$ is generally a better estimator than $\tilde\ell$. I have not been able to settle this question. However, note the following information, which may be relevant. Consider Hellinger loss and risk, defined as

$$L_H(\hat\eta, \eta) = \int \big(\hat\eta(r) - \eta(r)\big)^2\,dr, \qquad R_H(\hat\eta, \eta) = E\big[L_H(\hat\eta, \eta)\,\big|\,N\big]. \tag{46}$$

Consider the problem of choosing a value of $c$, depending on the data, so as to minimize $L_H(c\,\hat\eta, \eta)$. (Of course, only an oracle could actually find this value of $c$ in a practical context.) This minimizer satisfies

$$\hat c_{\mathrm{or}} = \frac{\int \eta(r)\,\hat\eta(r)\,dr}{\int \hat\eta^{\,2}(r)\,dr} \le \frac{\big(\int \eta^2(r)\,dr\big)^{1/2}\big(\int \hat\eta^{\,2}(r)\,dr\big)^{1/2}}{\int \hat\eta^{\,2}(r)\,dr} = \left(\int \hat\eta^{\,2}(r)\,dr\right)^{-1/2}, \tag{47}$$

by Cauchy-Schwarz and (44). Hence the optimal (oracle) choice $\check\eta_{\mathrm{or}} = \hat c_{\mathrm{or}}\,\hat\eta$ satisfies

$$\int \check\eta_{\mathrm{or}}^{\,2}(r)\,dr = \hat c_{\mathrm{or}}^{\,2} \int \hat\eta^{\,2}(r)\,dr \le 1, \tag{48}$$

with equality iff $\hat\eta$ is proportional to $\eta$. This suggests that, at least for Hellinger loss, it may be desirable to use estimators having $\int \check\eta^{\,2}(r)\,dr < 1$, and hence that it may not be desirable to normalize by $\hat c$ as in (45).

#### Renormalization: an argument in favor

Note that failure to normalize may, however, have at least one undesirable consequence. Without renormalization, the overall estimator of $\lambda$ is $\hat\lambda = N\tilde\ell$ from (43). If one begins from $\hat\lambda$, then using (43) and some auxiliary calculations yields that the corresponding estimator of $\Upsilon$ satisfies

$$\int \hat\lambda(r)\,dr = \Upsilon + O_P\big(\Upsilon^{(\alpha+1)/(2\alpha+1)}\big). \tag{49}$$

However, the estimator $\hat\Upsilon = N$ satisfies

$$\hat\Upsilon = \Upsilon + O_P\big(\Upsilon^{1/2}\big), \tag{50}$$

and hence achieves a better asymptotic rate, since $1/2 < (\alpha+1)/(2\alpha+1)$. The renormalized estimator $\check\lambda = N\check\ell$ will also achieve the rate in (50). For this reason it may be preferable to $\hat\lambda$, even if its integrated squared-error loss may be slightly larger asymptotically. It remains to be seen whether there is an estimation procedure that achieves very good (or even best) asymptotic values of $R$ or $R_H$, while at the same time integrating out to yield the desired properties as an estimator of $\Upsilon$.
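The oracle inequality in (47)-(48) can be checked directly on a grid. Discretizing the integrals, the oracle constant $c = \langle \eta, \hat\eta\rangle / \langle \hat\eta, \hat\eta\rangle$ always yields $\int (c\,\hat\eta)^2 \le 1$ when $\int \eta^2 = 1$, by Cauchy-Schwarz. The $\eta$ and the noise model below are made-up illustrations:

```python
import numpy as np

rng = np.random.default_rng(6)
t = (np.arange(1000) + 0.5) / 1000
eta = np.sqrt(1 + 0.5 * np.sin(2 * np.pi * t))  # a made-up eta
eta = eta / np.sqrt(np.mean(eta**2))            # enforce int eta^2 = 1 on the grid

masses = []
for _ in range(5):
    eta_hat = eta + rng.normal(0, 0.1, t.size)          # a noisy "estimate"
    c = np.mean(eta * eta_hat) / np.mean(eta_hat**2)    # oracle c from (47)
    masses.append(c**2 * np.mean(eta_hat**2))           # int (c*eta_hat)^2 dt
print([round(m, 4) for m in masses])  # all <= 1, per (48)
```

Every draw gives total mass strictly below 1, since the noisy $\hat\eta$ is never exactly proportional to $\eta$; that is the equality condition in (48).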
## The M/M/1 queue: basic theory (11/7/05)

**Arrival process.** A Poisson process with rate $\lambda_a$, with arrival times $A_1, A_2, \ldots$. Thus the inter-arrival times $A_1, A_2 - A_1, A_3 - A_2, \ldots$ are i.i.d. exponential, written $\mathrm{Exp}(\lambda_a)$, having density $f(x) = \lambda_a e^{-\lambda_a x}$.

**Service times.** Service times are i.i.d. $\mathrm{Exp}(\lambda_s)$, with $\lambda_s > \lambda_a$. This implies that the process is stable. Consequently, the mean inter-arrival and service times satisfy $1/\lambda_a > 1/\lambda_s$. There is only one server. Let $S_1, S_2, \ldots$ denote the lengths of the successive service times.

**Queue protocol.** FIFO.

**Departure process.** Let $D_1, D_2, \ldots$ denote the successive times of the departures from service. Since there is only one server and the protocol is FIFO, the $k$-th ordered arrival time $A_k$ and the $k$-th ordered departure time $D_k$ correspond to the same customer (the $k$-th). For queues with more than one server, this need not happen.

**Queue and service counts.** Let $s(t)$, $q(t)$ denote the number of customers in service and in the queue at time $t^+$, the instant just after time $t$. Because of this, $s(t)$ and $q(t)$ are right-continuous. Note that $s(t) = 0$ or $1$, since there is only one server, and $q(t) > 0 \Rightarrow s(t) = 1$. Both of these variables are recoverable from the system occupancy count $w(t) = s(t) + q(t)$. This is true in any G/G/n queue.

**Facts about the exponential distribution.** Let $X, Y$ be independent exponentially distributed variables with rates $\lambda, \eta$ respectively. Then:

1. $X \wedge Y = \min(X, Y) \sim \mathrm{Exp}(\lambda + \eta)$;
2. the event $\{X \wedge Y = X\}$ is independent of the value of $X \wedge Y$, and $P(X \wedge Y = X) = \lambda/(\lambda + \eta)$;
3. $S = X + Y$ has density $f_S(s) = \dfrac{\lambda\eta}{\lambda - \eta}\big(e^{-\eta s} - e^{-\lambda s}\big)$ if $\lambda \ne \eta$.

I'll denote the density in fact 3 as $\mathrm{ExpSum}(\lambda, \eta)$. Note that for this density $E(S) = 1/\lambda + 1/\eta$, since $S = X + Y$.

**Interdeparture distribution.**

- If $w(D_{i-1}) = 0$, then $D_i - D_{i-1} \sim \mathrm{ExpSum}(\lambda_a, \lambda_s)$ (the next departure requires an arrival and then a full service).
- If $w(D_{i-1}) > 0$, then $D_i - D_{i-1} \sim \mathrm{Exp}(\lambda_s)$.

These interdeparture times are independent of the value of $D_{i-1}$. We thus have

$$E\big(D_i - D_{i-1}\,\big|\,w(D_{i-1}) = 0\big) = \frac{1}{\lambda_a} + \frac{1}{\lambda_s}, \qquad E\big(D_i - D_{i-1}\,\big|\,w(D_{i-1}) > 0\big) = \frac{1}{\lambda_s}.$$

**Probability of an empty queue.** View the process up to the $n$-th departure time, given the starting value of $w(0)$. As a convention, let $D_0 = 0$. Then

$$D_n = \sum_{i=1}^{n} (D_i - D_{i-1}) = \sum_{i\,:\,w(D_{i-1}) = 0} (D_i - D_{i-1}) + \sum_{i\,:\,w(D_{i-1}) > 0} (D_i - D_{i-1}). \tag{4}$$

Hence

$$E(D_n) = \sum_{i=1}^{n} \Big[ P\big(w(D_{i-1}) = 0\big)\Big(\frac{1}{\lambda_a} + \frac{1}{\lambda_s}\Big) + P\big(w(D_{i-1}) > 0\big)\frac{1}{\lambda_s} \Big]. \tag{5}$$

Let

$$P_0^{(n)} = \frac{1}{n}\sum_{i=1}^{n} P\big(w(D_{i-1}) = 0\big). \tag{6}$$
$P_0 = \lim_{n\to\infty} P_0^{(n)}$ exists and does not depend on the value of $w(0)$. By definition, $E(A_n) = n/\lambda_a$. As in the reasoning leading to Little's law, stability of the process then implies $E(D_n)/n \to 1/\lambda_a$. Then (5) yields

$$\frac{E(D_n)}{n} = P_0^{(n)}\,\frac{1}{\lambda_a} + \frac{1}{\lambda_s} \;\longrightarrow\; \frac{1}{\lambda_a}.$$

This, together with (6), implies

$$P_0 = 1 - \frac{\lambda_a}{\lambda_s}. \tag{7}$$

One can also ask about $P_0^{q} = \lim_{n\to\infty} \frac{1}{n}\sum_{i=1}^{n} P\big(q(A_i^-) = 0\big)$. This is important because the event $\{q(A_i^-) = 0\}$ is the event that the $i$-th customer goes directly into service, without having to wait in queue. Now, a departing customer leaves an empty system behind if and only if some arriving customer finds the system empty, and in the long run these events occur at the same frequency. We thus get immediately from (7) that

$$P_0^{q} = 1 - \frac{\lambda_a}{\lambda_s}. \tag{7′}$$

**Corollary.** Formulas (7) and (7′) are also valid for an M/G/1 queue. (Why?) They are not necessarily valid for a G/G/1 queue, or even a G/M/1 queue. (Why not?)

**Distribution of interdeparture times.** It follows from (4) that, in the long run, $D_i - D_{i-1}$ is $\mathrm{ExpSum}(\lambda_a, \lambda_s)$ with probability $P_0$ and is $\mathrm{Exp}(\lambda_s)$ with probability $1 - P_0$. Hence, after some algebra, the long-run density of $D_i - D_{i-1}$ is

$$f_D(t) = P_0\,\frac{\lambda_a \lambda_s}{\lambda_s - \lambda_a}\big(e^{-\lambda_a t} - e^{-\lambda_s t}\big) + (1 - P_0)\,\lambda_s e^{-\lambda_s t} = \lambda_a e^{-\lambda_a t}.$$

This is exactly the same exponential distribution as the inter-arrival density. This neat fact is not in general true for G/M/1 or M/G/1 systems.

**Distribution of queue lengths.** $\{w(t),\,t \in [0,\infty)\}$ is a continuous-time birth-and-death process (a Markov jump process). It has the defining characteristics:

(a) if $w(t) = 0$, then the next jump occurs at a time $\mathrm{Exp}(\lambda_a)$ later, and is to $w = 1$; and

(b) if $w(t) > 0$, then the next jump occurs at a time $\mathrm{Exp}(\lambda_a + \lambda_s)$ later, and is to $w(t) + 1$ with probability $p_{\mathrm{up}} = \lambda_a/(\lambda_a + \lambda_s)$, and to $w(t) - 1$ with probability $p_{\mathrm{d}} = 1 - p_{\mathrm{up}} = \lambda_s/(\lambda_a + \lambda_s)$.

Just as a Markov chain corresponds to a transition matrix, there is a matrix corresponding to (a) and (b) that describes these rates and transition probabilities. It is

$$M = \begin{pmatrix}
p_{\mathrm{d}} & p_{\mathrm{up}} & 0 & 0 & \cdots \\
p_{\mathrm{d}} & 0 & p_{\mathrm{up}} & 0 & \cdots \\
0 & p_{\mathrm{d}} & 0 & p_{\mathrm{up}} & \cdots \\
0 & 0 & p_{\mathrm{d}} & 0 & \ddots \\
\vdots & & & \ddots & \ddots
\end{pmatrix}.$$

The first row of this matrix may not be obvious, and will be discussed. In a positive-recurrent Markov chain, the transition matrix has the property that the solution to $v' M = v'$, $v_j \ge 0$, $\sum_j v_j = 1$ is unique, and gives the stationary probability vector $v$ for the chain.
In the same way, the solution to

$$v' M = v', \qquad v_j \ge 0 \;\;\forall j, \qquad \sum_j v_j = 1 \tag{8}$$

describes the stationary state probabilities for the values of $w(t)$. The solution to (8) is easily derived as $v = (v_j)$, where

$$v_j = \left(\frac{p_{\mathrm{up}}}{p_{\mathrm{d}}}\right)^{\!j} v_0 = \left(\frac{\lambda_a}{\lambda_s}\right)^{\!j}\left(1 - \frac{\lambda_a}{\lambda_s}\right), \qquad j = 0, 1, 2, \ldots$$
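The birth-and-death description in (a) and (b) is easy to simulate directly, which gives a concrete check of two claims above: the long-run probability of an empty system is $1 - \lambda_a/\lambda_s$ as in (7), and the stationary occupancy probabilities are geometric as in the solution to (8). The rates below are illustrative choices, not values from the notes.

```python
import numpy as np

rng = np.random.default_rng(5)
lam_a, lam_s = 1.0, 2.0           # illustrative rates with lam_s > lam_a (stable)
rho = lam_a / lam_s

t, w = 0.0, 0
T_end = 50_000.0
time_in_state = {}
while t < T_end:
    # (a): from w = 0 the next jump is an arrival after an Exp(lam_a) wait;
    # (b): from w > 0 the next jump comes after Exp(lam_a + lam_s), and is an
    #      arrival with probability lam_a / (lam_a + lam_s).
    rate = lam_a if w == 0 else lam_a + lam_s
    dwell = rng.exponential(1.0 / rate)
    time_in_state[w] = time_in_state.get(w, 0.0) + dwell
    t += dwell
    if w == 0 or rng.random() < lam_a / (lam_a + lam_s):
        w += 1  # arrival (birth)
    else:
        w -= 1  # departure (death)

total = sum(time_in_state.values())
p_empty = time_in_state[0] / total
p_one = time_in_state.get(1, 0.0) / total
print(round(p_empty, 3), 1 - rho)          # expect ~0.5, per (7)
print(round(p_one, 3), (1 - rho) * rho)    # expect ~0.25, per the solution to (8)
```

The time-average occupancies match $v_j = (1 - \rho)\rho^j$ with $\rho = \lambda_a/\lambda_s$, which is the familiar geometric stationary distribution of the M/M/1 queue.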
