
# MODELS IN BIOLOGY (BIOL 429)

UW


These 24 pages of class notes were uploaded by Renee Lehner on Wednesday, September 9, 2015. The notes belong to BIOL 429 at the University of Washington, taught by Carl Bergstrom in Fall.


## Lecture 9: Matrix theory approach to Markov chains

Biology 497, Carl Bergstrom, November 19, 2006

*Sources: These lecture notes draw heavily on material from Taylor and Karlin, An Introduction to Stochastic Modeling, 3rd edition (1998), and Kemeny and Snell, Finite Markov Chains. In places, definitions and statements of theorems may be taken directly from these sources.*

### Matrix representation

In the previous lecture we talked about ways to represent matrices in block form, so that we write a matrix as composed of multiple subsidiary sub-matrices. By properly classifying the different states of a Markov chain as transient states, ergodic states, or terminal states, we were able to greatly reduce the complexity of the Markov matrix by working with a matrix of the following form:

$$P = \begin{pmatrix} Q & R \\ 0 & I \end{pmatrix}$$

Here $Q$ corresponds to the part of the chain that moves among transient states, and the $I$ corresponds to a set of absorbing states. One of the very convenient facts about this way of writing the matrix is that matrix products correspond to the products of the submatrices, so that

$$P^2 = \begin{pmatrix} Q^2 & R + QR \\ 0 & I \end{pmatrix}$$

We arrange the ordering so that states $1$ through $r-1$ are transient and states $r$ through $n$ are absorbing. In general, higher powers of this matrix take the form

$$P^n = \begin{pmatrix} Q^n & R + QR + Q^2 R + \cdots + Q^{n-1}R \\ 0 & I \end{pmatrix}$$

or, in another form,

$$P^n = \begin{pmatrix} Q^n & (I + Q + Q^2 + \cdots + Q^{n-1})\,R \\ 0 & I \end{pmatrix}$$

This form turns out to be extremely powerful for analyzing the properties of the Markov chain. For one thing, it allows us to compute $W^{(n)}_{ij}$, the expected number of visits to state $j$ in the first $n$ steps, starting from state $i$. We can write $W^{(n)}$ in terms of the matrix powers of $P$:

$$W^{(n)}_{ij} = \sum_{l=0}^{n} P^{l}_{ij}$$

Now here's a really cool thing. Look at the submatrix consisting of the first $r-1$ rows and columns of $P^l$. From the expression for $P^l$ above, we see that the first $r-1$ row and column block is equal to $Q^l$. So for transient states we can write

$$W^{(n)}_{ij} = \sum_{l=0}^{n} P^{l}_{ij} = \sum_{l=0}^{n} Q^{l}_{ij}$$

where $Q^0 = I$. Now we can transform this matrix equation into a one-step recursion. We have

$$W^{(n)} = I + Q + Q^2 + \cdots + Q^n$$

But look at the right-hand side: if we factor out a $Q$, this is simply equal to $I + Q(I + Q + Q^2 + \cdots + Q^{n-1})$. The sum in parentheses is, by definition, $W^{(n-1)}$. So we have

$$W^{(n)} = I + Q\,W^{(n-1)}$$

Look at this! This is a one-step, or first-step, analysis equation. For the first $r-1$ states, it says that the matrix of the numbers of visits to each state is $I$ (each state counts one visit for being the starting state) plus the product of the movement probabilities $Q$ (which are equal to $P$ for the first $r-1$ states) times the expected number of visits to each state in the subsequent $n-1$ rounds. It may help to look at this term-wise; Karlin and Taylor do so on p. 171.

Now, in the limit as $n$ goes to infinity,

$$W = I + Q + Q^2 + Q^3 + \cdots$$

or, equivalently, $W = I + QW$. Thus as $n$ gets large, the numbers of visits are given by the solution for $W$ to the above equation, which gives us $(I - Q)W = I$ and thus $W = (I - Q)^{-1}$; i.e., $W$ is equal to the matrix inverse of $I - Q$. This matrix $W$ plays a critical role in the analysis of Markov chains; we call it the **fundamental matrix** associated with $Q$.

We can use a similar approach to compute the expected time of absorption using a matrix formulation. Let $T$ be the time of absorption. Notice that until the time of absorption, the process moves through the transient states $1, \ldots, r-1$. But $W_{ij}$ gives us the expected number of visits to the $j$-th state given that we started in $i$. Thus the expected time to absorption is just the sum of the expected visits to each of the transient states, starting in $i$:

$$E[T \mid X_0 = i] = \sum_{j=1}^{r-1} W_{ij}$$

Again, we can view this as a first-step analysis, as described in Karlin and Taylor, p. 173.

Finally, we can compute $U_{ik}$, the probability of ending up in each particular absorbing state $k \ge r$, given a starting place $i$. We do this by first computing the probability $U^{(n)}_{ik}$ of landing in each absorbing state by some finite time $n$, and then taking the limit as $n$ goes to infinity. How do we get to $k$ at time $m$? By being in some other state $j$ at time $m-1$, then entering via $R$ at time $m$. Thus we can write

$$U^{(n)} = (I + Q + Q^2 + \cdots + Q^{n-1})\,R = W^{(n-1)} R$$

Now we take the limit as $n$ goes to infinity to get

$$U = WR$$

Again, as treated by Taylor and Karlin on p. 174, this can be used to write a first-step analysis expression for $U$.
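As a concrete sketch of these formulas (the four-state chain below is a made-up example, not one from the lecture), the fundamental matrix, expected absorption times, and absorption probabilities can be computed directly with NumPy:

```python
import numpy as np

# Hypothetical chain: states 0, 1 transient; states 2, 3 absorbing.
# Canonical block form P = [[Q, R], [0, I]].
Q = np.array([[0.5, 0.3],
              [0.2, 0.4]])        # transient -> transient moves
R = np.array([[0.2, 0.0],
              [0.0, 0.4]])        # transient -> absorbing moves

W = np.linalg.inv(np.eye(2) - Q)  # fundamental matrix W = (I - Q)^{-1}
t = W.sum(axis=1)                 # E[T | X0 = i]: sum expected visits over transient j
U = W @ R                         # absorption probabilities U = W R

print(W)
print(t)
print(U)   # each row of U sums to 1: absorption somewhere is certain
```

Note that `t` implements $E[T \mid X_0 = i] = \sum_j W_{ij}$ as a row sum, and each row of `U` sums to one because the chain is absorbed somewhere with probability 1.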
## Lecture 4: Continuous probability

Biology 497, Carl Bergstrom, October 17, 2006

*Sources: These lecture notes draw heavily on material from Taylor and Karlin, An Introduction to Stochastic Modeling, 3rd edition (1998), and Pitman, Probability (1993). In places, definitions and statements of theorems may be taken directly from these sources.*

We can view just about any continuous probability distribution as the limit of a sequence of discrete probability distributions. Suppose we have a random variable, such as the length of a peacock's tail, which takes on a continuous value rather than a discrete one. How can we describe and work with the probability distribution for this random variable?

One good way to start would be by "binning" the possible values of the random variable $X$. For example, we might bin the distribution into ten-centimeter intervals; we can then construct a probability distribution for a binned random variable $Y$, where $Y = 5$ means that the tail is 0–10 cm long, $Y = 15$ means that the tail is 10–20 cm long, $Y = 25$ means that the tail is 20–30 cm long, etc. This procedure would give us a discrete distribution $P(Y = y)$, and we know how to work with these.

More formally, we break the distribution up into $n$ bins of width $\Delta$, and we consider the new discrete random variable $Y$ to be the value of the bin into which the value of $X$ falls:

$$Y = y_i \quad \text{if } i\Delta \le X < (i+1)\Delta$$

By the mean value theorem, we can select a set of values $x_i$ with $i\Delta < x_i < (i+1)\Delta$ such that the probability distribution of $Y$ is given by

$$p_i = \int_{i\Delta}^{(i+1)\Delta} f(x)\,dx = f(x_i)\,\Delta$$

where $f$ is the probability density defined below. Now we can formally see the connection between this definition and the idea of continuous distributions as the limit of $n$-division discrete spaces. We take the conventional Riemann view of the integral as the limit of a sequence of discrete sums over smaller and smaller subdivisions:

$$P(a \le X \le b) = \lim_{\Delta \to 0} \sum_i f(x_i)\,\Delta = \int_a^b f(x)\,dx$$

Our definitions of probability density, expectation, and variance for continuous distributions then
follow naturally.

**Definition 1.** The *probability density function* $f(x)$ for a continuous random variable $X$ is defined such that

$$P(a \le X \le b) = \int_a^b f(x)\,dx$$

Here we must have $\int_{-\infty}^{\infty} f(x)\,dx = 1$ and $f(x) \ge 0$ for all $x$.

**Definition 2.** The *cumulative distribution function* for a continuous random variable $X$ is defined such that

$$F(x) = P(X \le x) = \int_{-\infty}^{x} f(u)\,du$$

**Definition 3.** The *expectation* of a continuous random variable $X$ with density function $f(x)$ is

$$E[X] = \int_{-\infty}^{\infty} x\,f(x)\,dx$$

**Definition 4.** The *variance* of a continuous random variable $X$ with density function $f(x)$ is

$$\mathrm{Var}(X) = E[X^2] - (E[X])^2$$

We can also develop an analogous set of definitions for joint probability distributions for continuous random variables.

**Definition 5.** The *joint probability density function* $f(x, y)$ for continuous random variables $X$ and $Y$ is defined such that

$$P(a \le X \le b,\ c \le Y \le d) = \int_a^b \int_c^d f(x, y)\,dy\,dx$$

Here we must have $\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x, y)\,dy\,dx = 1$ and $f(x, y) \ge 0$ for all $x$ and $y$.

If anything, marginal distributions are easier to understand for continuous probability distributions than for discrete ones. Think of the marginal distributions of a jointly distributed random variable $(X, Y)$ as the distribution of $X$ whatever $Y$ is, and the distribution of $Y$ whatever $X$ is.

## Lecture 2: Discrete probability

Biology 497, Carl Bergstrom, October 7, 2006

*Sources: These lecture notes draw heavily on material from Taylor and Karlin, An Introduction to Stochastic Modeling, 3rd edition (1998), and Pitman, Probability (1993). In places, definitions and statements of theorems may be taken directly from these sources.*

First, a correction: I was missing a summation sign in the definition of marginal probability distributions in last week's lecture notes; no wonder people found that confusing. The proper definition should have been: the marginal distribution

$$P(Y = y) = \sum_x P(X = x,\,Y = y)$$

gives us a single-variable distribution on the values of $Y$.

### Bernoulli distribution

The Bernoulli distribution takes only two possible values, 1 or 0, with probabilities $p$ and $1-p$ respectively. The mean of this distribution is $p \cdot 1 + (1-p) \cdot 0 = p$, and the variance is

$$E[(X - E[X])^2] = p(1-p)^2 + (1-p)(0-p)^2 = p(1 - 2p + p^2) + (1-p)p^2 = p - 2p^2 + p^3 + p^2 - p^3 = p - p^2 = p(1-p)$$

We'll often call the event $X = 1$ a "success" and $X = 0$ a "failure."
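As a quick numerical check of the Bernoulli mean and variance, here is a simulation sketch (the choice $p = 0.3$ is arbitrary; only the standard library is used):

```python
import random

random.seed(6)

p, trials = 0.3, 100_000

# Simulate Bernoulli(p): value 1 with probability p, else 0.
xs = [1 if random.random() < p else 0 for _ in range(trials)]

mean = sum(xs) / trials
var = sum((x - mean) ** 2 for x in xs) / trials
print(mean, var)   # should approach p = 0.3 and p(1 - p) = 0.21
```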
### Binomial distribution

Suppose a lynx chases $n = 20$ hares in a day, and during each chase the probability of success for the lynx (not the hare!) is $p = 0.1$. How many hares does the lynx catch on average? What is the probability that the lynx goes hungry on any given day? What is the variance in the number of catches?

The binomial distribution $B(n, p)$ represents the outcome of $n$ trials, each of which is an independent, identically distributed (iid) Bernoulli random variable with probability $p$ of success (i.e., of taking value 1 instead of value 0). The distribution function for $X \sim B(n, p)$ is

$$P(k) = \binom{n}{k} p^k (1-p)^{n-k}, \qquad \binom{n}{k} = \frac{n!}{k!\,(n-k)!}$$

This is because the probability of getting any specific sequence of $k$ successes and $n-k$ failures is $p^k (1-p)^{n-k}$, and there are $\binom{n}{k}$ different sequences with $k$ successes and $n-k$ failures. The mean of this distribution is $np$ and the variance is $np(1-p)$.

### Geometric distribution

Let us again consider our hunting lynx, who chases $n = 20$ hares a day and catches each with $p = 0.1$. Instead of asking about the distribution of catches in a given day, suppose we want to model the number of failed chases before the next catch. Here we turn to the geometric distribution, which gives us the number of failures before the first success, according to Taylor and Karlin, so I'll use that formulation. (More often I see the geometric distribution defined as giving the waiting time until the first success, which is simply the Taylor and Karlin value plus 1. But we'll use the Taylor and Karlin definition here.)

The distribution function for $X \sim G(p)$ is given by

$$P(k) = (1-p)^k\,p$$

The first term is the probability of $k$ consecutive failures; the second term is the probability of success on the $(k+1)$-th trial. The expected value of $X$ is $(1-p)/p$ and the variance is $(1-p)/p^2$. The tail probability, the probability that at least $m$ consecutive trials fail, is $P(X \ge m) = (1-p)^m$.

So the average number of failures for the lynx is 9, and the variance is 90. The probability that the lynx goes at least 20 trials without catching anything is $0.9^{20} \approx 0.12$.
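A simulation sketch of the lynx example (numbers taken from the text; only the standard library is used) confirms these figures:

```python
import random

random.seed(1)
n, p, trials = 20, 0.1, 100_000

# Daily catches: Binomial(20, 0.1). Mean should be np = 2,
# and P(no catch) should be 0.9**20 (about 0.12).
catches = [sum(random.random() < p for _ in range(n)) for _ in range(trials)]
mean_catches = sum(catches) / trials
p_hungry = sum(c == 0 for c in catches) / trials

# Failures before the first success: Geometric(0.1), mean (1-p)/p = 9.
def failures_before_success(p):
    k = 0
    while random.random() >= p:
        k += 1
    return k

mean_failures = sum(failures_before_success(p) for _ in range(trials)) / trials

print(mean_catches, p_hungry, mean_failures)
```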
### Negative binomial

This distribution tells us how many failures we see before the $r$-th success. The easiest way to think about the negative binomial is as the sum of $r$ iid random variables, each drawn from the geometric distribution. The expectation and variance then follow immediately, given our previous rules about additivity of expectations (and, for independent random variables, additivity of variances as well). The expectation is simply $r(1-p)/p$ and the variance is $r(1-p)/p^2$. The distribution function gives the probability of seeing exactly $k$ failures before the $r$-th success:

$$P(k) = \binom{r + k - 1}{r - 1}\,p^r (1-p)^k$$

(Exercise: why?)

### Hypergeometric

Suppose that you have a population of $n$ individuals, of whom $k$ are of one type and $n-k$ are of a second type. If you sample $j$ individuals from this population, how many of the first type will you get? This is closely related to the binomial distribution, but here we are sampling from a fixed population instead of a fixed frequency distribution. Another way to think about this is that in the hypergeometric distribution we are sampling from a fixed population *without* replacement, whereas in the binomial distribution we are sampling from a fixed population *with* replacement. The distribution function for the hypergeometric gives us the probability of getting $m$ samples of the first type:

$$P(m) = \frac{\binom{k}{m}\binom{n-k}{j-m}}{\binom{n}{j}}$$

Note that this is really just a formula in combinatorics: we write out the full set of equally likely members of the ensemble and look at how probable different combinations of these are; we do not even assign a probability parameter anywhere. The expectation is $jk/n$ and the variance is $j\,\frac{k}{n}\left(1 - \frac{k}{n}\right)\frac{n-j}{n-1}$.

### Poisson

We can think of the Poisson distribution as giving the distribution of successes in a large number of samples when successes are rare events:

$$P(k) = \frac{\lambda^k e^{-\lambda}}{k!}$$

The expectation is $\lambda$ and the variance is also $\lambda$. We can view the probability distribution for a Poisson distribution with parameter $\lambda$ as the limiting case of the binomial distribution with $np \to \lambda$ as $n$ gets very large and $p$ gets very small.
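The binomial-to-Poisson limit can be checked numerically. A sketch (the value $\lambda = 2$ is an arbitrary choice for illustration):

```python
from math import comb, exp, factorial

lam = 2.0

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    return lam**k * exp(-lam) / factorial(k)

# With np = lam held fixed, the binomial pmf approaches the Poisson pmf as n grows.
gaps = []
for n in (10, 100, 1000):
    p = lam / n
    gap = max(abs(binom_pmf(k, n, p) - poisson_pmf(k, lam)) for k in range(10))
    gaps.append(gap)
    print(n, gap)   # the worst-case gap over k shrinks as n grows
```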
Example: "You be the judge" (Taylor and Karlin, pp. 28–29). How would you judge this case? Where is the fallacy in the prosecutor's (or defense's) argument? Related example: Mills et al. 2006, *PLoS Medicine*.

(Figure 1: probability as a function of the expected number of events.)

### Multinomial distribution

In the binomial distribution you have two possible outcomes, success or failure. Suppose that instead we are looking at a case where you have $k$ different possible outcomes. For example, when I work on methylation patterns, a double-stranded CpG site can be methylated on both strands, unmethylated on both strands, or "hemimethylated," i.e., methylated on one strand but not the other. If the probabilities of these three states are $p_1$, $p_2$, and $p_3$ respectively, what is the distribution function for the number of sites of each type in a sample of $n$ sites?

$$P(X_1 = m_1,\,X_2 = m_2,\,X_3 = m_3) = \frac{n!}{m_1!\,m_2!\,m_3!}\,p_1^{m_1} p_2^{m_2} p_3^{m_3}, \qquad m_3 = n - m_1 - m_2$$

The expectation is $(np_1, np_2, np_3)$ and the variances are $np_1(1-p_1)$, $np_2(1-p_2)$, and $np_3(1-p_3)$. Of course, this generalizes to more than three outcomes.

## Lecture 6: Sums and Martingales

Biology 497, Carl Bergstrom, October 23, 2006

*Sources: These lecture notes draw heavily on material from Taylor and Karlin, An Introduction to Stochastic Modeling, 3rd edition (1998). In places, definitions and statements of theorems may be taken directly from these sources.*

### Random sums

The first stochastic process we'll look at is the random sum. The idea here is that we draw a discrete, integer-valued random variable $N$ and let a random variable $X$ be the sum of $N$ random variables $\xi_i$:

$$X = \xi_1 + \xi_2 + \cdots + \xi_N$$

Here we can directly follow the exposition in Taylor and Karlin. In short:

1. To find the marginal distribution function for $X$, we can compute the conditional distribution of $X$ conditional on each value of $N$, and then use the law of total probability to compute the marginal distribution for $X$.

2. Let $\mu$ be the expectation of $\xi$ and $\nu$ be the expectation of $N$. Then the expectation of $X$ is simply the product of the expectation of $N$ and the expectation of $\xi$. That is, $E[X] = \mu\nu$.

3. We can compute the variance similarly. Let $\sigma^2$ be the variance of $\xi$ and $\tau^2$ be the variance of $N$. Then $\mathrm{Var}(X) = \nu\sigma^2 + \mu^2\tau^2$.

Example 1: T&K p. 73.

Last time we worked out how to take the convolution integral to find the sum of 2 or more random variables. If we are interested in the sum $X$ of $n$ iid random variables $\xi_i$, we can write the $n$-fold convolution in a recursive form:

$$f^{(n)}(x) = \int f^{(n-1)}(x - u)\,f(u)\,du$$

Then, assuming $N > 0$, the marginal density function for $X$ is given by the law of total probability:

$$f_X(x) = \sum_{n=1}^{\infty} f^{(n)}(x)\,P(N = n)$$

Example 2: T&K pp. 74–75.

### Martingales

Casanova, of romantic fame, was also a gambling man. In his memoirs he wrote:

> Before leaving, M. M. asked me to go to her casino, to take some money, and to play, taking her for my partner. I did so. I took all the gold I found, and playing the martingale, and doubling my stakes continuously, I won every day during the remainder of the carnival. I was fortunate enough never to lose the sixth card, and if I had lost it, I should have been without money to play, for I had two thousand sequins on that card. I congratulated myself upon having increased the treasure of my dear mistress. (Giacomo Casanova)

The idea behind Casanova's strategy was simple: start by betting 1 coin. If one loses, double the bet. As soon as one wins, return to betting 1 coin. Following such a strategy, after each win (no matter how many intervening losses) the gambler will have a net gain of 1 coin. And eventually the gambler is assured a win, so the martingale should be a certain strategy for winning at games of chance. Or should it? Casanova finds out otherwise later in his tale:

> I still played on the martingale, but with such bad luck that I was soon left without a sequin. As I shared my property with M. M., I was obliged to tell her of my losses, and it was at her request that I sold all her diamonds, losing what I got for them; she had now only five hundred sequins by her. There was no more talk of her escaping from the convent, for we had nothing to live on. (Giacomo Casanova)
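Casanova's doubling strategy can be simulated. The sketch below uses hypothetical parameters (a fair coin, a 10-coin bankroll) and tracks the invariant that each completed losing streak ending in a win nets exactly +1 coin, until a streak arrives that the bankroll cannot cover:

```python
import random

random.seed(3)

def play(bankroll=10, rounds=1000, p=0.5):
    """Doubling ('martingale') strategy on a fair coin with a finite bankroll.
    Returns (wealth, wins, ruined); ruined=True once the required doubled
    bet exceeds the remaining wealth."""
    wealth, bet, wins = bankroll, 1, 0
    for _ in range(rounds):
        if bet > wealth:
            return wealth, wins, True        # losing streak we cannot cover
        if random.random() < p:
            wealth += bet                    # each completed streak nets +1...
            bet, wins = 1, wins + 1
            assert wealth == bankroll + wins # ...so profit always equals #wins
        else:
            wealth -= bet
            bet *= 2                         # loss: double the stake
    return wealth, wins, False

sessions = [play() for _ in range(100)]
ruin_rate = sum(r for *_, r in sessions) / len(sessions)
print(ruin_rate)   # nearly every session ends in ruin despite the 'sure' +1 per win
```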
The problem with the strategy is that, to be sure of winning, one needs to have infinite wealth, because one has to keep doubling bets until a win, and that could take a very long time, requiring very large bets, should the gambler experience a run of losses. Poor Casanova hit such a stretch and had to sell his mistress's jewels (and, effectively, her future with him) to get himself out of debt.

Of course, we can model this whole affair. We can view a stochastic process as a dynamic process that generates a sequence of random variables $X_1, X_2, X_3, \ldots$. If the expected value of the successive terms of the sequence, conditional on the previous terms, stays constant, we have a martingale.

**Definition 1.** A *martingale* is a stochastic process such that the unconditional expected value is always finite and

$$E[X_{n+1} \mid X_1, \ldots, X_n] = X_n$$

Thus a martingale can be thought of as a sequence of fair gambles. It has a constant mean: if $X_n$ is a martingale, $E[X_{n+1}] = E[X_n]$.

We can now show, after a bit of effort, that not only does the martingale pose a risk of bankruptcy to a gambler with merely finite wealth, but it is also overwhelmingly unlikely to generate large returns.

There's an old brain teaser that goes something like this: John has to drive 120 miles. He gets stuck in traffic and averages 15 mph for the first 30 miles. How fast does he have to drive so that he covers the entire distance at an average speed of 60 mph? The answer, of course, is that there is no finite speed that will suffice: he's already driven for two hours, so he'd have to be there already if he were to average 60 mph.

We can view the Markov inequality as an application of the same principle to probabilities and means. It allows us to make what seems at first glance a surprising statement about the tail probability of a distribution, knowing only its expectation.

**Definition 2.** The *Markov inequality* states that for a non-negative random variable $X$ and a positive constant $k$,

$$P(X \ge k) \le \frac{E[X]}{k}$$

How can we say so much about the tail probability without knowing anything about the distribution? The answer is that
we actually have made one very important assumption about the distribution, namely that it is non-negative. That means that the minimum value that $X$ can take on is 0. So, for example, if the mean of some random variable $X$ is $\mu$, the probability that $X \ge 10\mu$ is less than $1/10$.

If $k \le E[X]$, the inequality is automatically satisfied. So assume that $k > E[X]$, and let $P(X \ge k) = q$. Since $f(x) = 0$ for all $x < 0$, the smallest mean we can have is to have $X = 0$ for $1-q$ of the time and $X = k$ for $q$ of the time. But this distribution has mean $qk$, so we know that $E[X] \ge qk$, i.e.,

$$P(X \ge k) = q \le \frac{E[X]}{k}$$

That is the inequality that we aimed to prove.

Now we can apply this inequality directly to a martingale process, to bound the tail probability of a non-negative martingale. We replace $X$ in the above formula with the $i$-th value of the martingale, $X_i$, and we recall that the martingale has constant mean, so that $E[X_i] = E[X_0]$. This gives us

$$P(X_i \ge k) \le \frac{E[X_0]}{k}$$

where $X_n$ is a non-negative martingale and $k$ is a strictly positive constant. But we can actually say something much stronger than this as well.

**Theorem 1 (Maximal inequality for non-negative martingales).** If $X_n$ is a non-negative martingale with expected value $E[X_0]$, and $k$ is a positive constant, then

$$P\left(\max_{i \ge 0} X_i \ge k\right) \le \frac{E[X_0]}{k}$$

In other words, the probability that the martingale *ever* exceeds $k$, at any time in the infinite future, is bounded by $E[X_0]/k$. Thus the probability that Casanova ever doubles his mistress's money, even with arbitrarily large wealth to invest, is

$$P\left(\max_{i \ge 0} X_i \ge 2E[X_0]\right) \le \frac{1}{2}$$

## Lecture 5: Continuous distributions

Biology 497, Carl Bergstrom, October 17, 2006

*Sources: These lecture notes draw heavily on material from Taylor and Karlin, An Introduction to Stochastic Modeling, 3rd edition (1998), and Pitman, Probability (1993). In places, definitions and statements of theorems may be taken directly from these sources.*

### Uniform distribution

One of the simplest continuous distributions is the uniform, in which the random variable is equally likely to be anywhere within a fixed interval $[a, b]$.
The density function is

$$f(x) = \frac{1}{b-a} \quad \text{for } a \le x \le b$$

The cumulative distribution function is

$$F(x) = \frac{x-a}{b-a}$$

The expectation is the midway point, $(a+b)/2$.

### Normal distribution

The normal distribution is one of the most important distributions in statistics because of the Central Limit Theorem.

**Theorem 1 (Central Limit Theorem).** Let $Y$ be the sum of $n$ independent, identically distributed random variables $X_i$ with expectation $\mu$ and variance $\sigma^2$. As $n$ gets large, the distribution of $Y$ approaches a normal distribution with mean $n\mu$ and variance $n\sigma^2$.

Thus any random variable that we can view as the sum of a large number of iid elements should take on a normal distribution. Similarly, the log of any random variable that we can view as the product of a large number of iid elements should take on a normal distribution, because the log of the product is equal to the sum of the logs.

The density function for the normal $N(\mu, \sigma^2)$ is

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

We cannot compute an explicit form for the cdf, because we cannot integrate $e^{-\frac{1}{2}x^2}$ in closed form.

We can also view the normal distribution as a continuous analogue of the binomial distribution: for large $n$, the binomial with $n$ trials, each with probability $p$, is approximated by the normal distribution with mean $np$ and variance $np(1-p)$.

### Exponential distribution

The exponential distribution provides the distribution of waiting times until an event that occurs with a constant hazard rate $\lambda$; we can see it as the continuous-time analogue of the geometric distribution. On its range $(0, \infty)$, the density function is $\lambda e^{-\lambda t}$. On the same range, the cumulative distribution function is $1 - e^{-\lambda t}$. The expectation is $1/\lambda$ and the variance is $1/\lambda^2$.

### Survival given a general expression for hazard

The exponential corresponds to a constant failure rate. If the failure rate varies over time as $\lambda(t)$, we can still write down a density function:

$$f(t) = \lambda(t)\,e^{-\int_0^t \lambda(u)\,du}$$

The corresponding cumulative distribution function is

$$F(t) = 1 - e^{-\int_0^t \lambda(u)\,du}$$

### Gamma distribution

The sum of $k$ iid exponential random variables with rate parameter $\lambda$ has a Gamma distribution.
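A simulation ties the last two sections together: a sum of $k$ iid exponentials should have the gamma moments $k/\lambda$ and $k/\lambda^2$, and by the Central Limit Theorem it should look increasingly normal as $k$ grows. A sketch (the parameters $\lambda = 2$, $k = 30$ are arbitrary choices for illustration):

```python
import random

random.seed(5)

lam, k, trials = 2.0, 30, 50_000

# Each sample: sum of k iid Exponential(lam) waiting times, i.e. Gamma(k, lam).
sums = [sum(random.expovariate(lam) for _ in range(k)) for _ in range(trials)]

mean = sum(sums) / trials
var = sum((s - mean) ** 2 for s in sums) / trials
print(mean, var)   # should be close to k/lam = 15 and k/lam**2 = 7.5

# CLT check: roughly 68% of samples within one standard deviation of the mean,
# as for a normal distribution.
sd = var ** 0.5
within = sum(abs(s - mean) <= sd for s in sums) / trials
print(within)
```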
This can be seen as the waiting time until the $k$-th event under a constant-hazard model, and it is thus analogous to the negative binomial distribution from discrete probability. Its density function is

$$f(x) = \frac{\lambda^k x^{k-1} e^{-\lambda x}}{(k-1)!}$$

Notice that the gamma distribution with $k = 1$ is simply the exponential with parameter $\lambda$. The cdf is

$$F(x) = 1 - e^{-\lambda x} \sum_{i=0}^{k-1} \frac{(\lambda x)^i}{i!}$$

The above formulae hold only for integer $k$, though we can generalize this by replacing $(k-1)!$ (which requires integer $k$) with the gamma function $\Gamma(k)$ (which does not). We cannot then compute the cdf explicitly, however. As expected, given that the gamma is the sum of iid exponentials, its mean and variance are $k/\lambda$ and $k/\lambda^2$.

### Convolutions

Often, in dealing with random processes, we will be interested in the sum of independent random variables $X$ and $Y$ with probability densities $f(x)$ and $g(y)$ respectively. If $Z = X + Y$, the density $h(z)$ of $Z$ is given by the convolution integral

$$h(z) = \int_{-\infty}^{\infty} f(z - t)\,g(t)\,dt$$

The idea behind this integral is that it sums the various ways that $Z$ could equal any particular value $z$: it could do so because $X = z - t$ and $Y = t$, for any value of $t$. So we integrate the product of the densities of $X$ and $Y$ with respect to these quantities, over all possible values of $t$. We can take the product because we've assumed that $X$ and $Y$ are independent.

We can also take a convolution of $k$ random variables by integrating over $k - 1$ indices $t_i$:

$$\int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f_1(t_1)\,f_2(t_2) \cdots f_k(x - t_1 - t_2 - \cdots - t_{k-1})\,dt_1\,dt_2 \cdots dt_{k-1}$$

In this vein, the gamma distribution with shape parameter $k$ is the $k$-fold convolution of the exponential. Also notice that the sum of two gamma random variables with shape parameters $k_1$ and $k_2$ and the same rate parameter is gamma with shape parameter $k_1 + k_2$ and that same rate parameter; thus the sum of two gammas is gamma, so long as they have the same rate parameter, irrespective of their shape parameters.

### Changes of variable

Suppose that we want to rescale the $x$-axis by some scaling function $g$, such that $Y = g(X)$. We assume $g$ is a proper rescaling, i.e., monotone increasing and
differentiable. Notice that, because the integral of the pdf must be 1, the value of the pdf of a random variable changes when we change the scaling of the $x$-axis. The density function for $Y$ can be written as

$$f_Y(y) = \frac{f_X(g^{-1}(y))}{g'(g^{-1}(y))}$$

The value of the cdf does not need to be corrected in this way:

$$F_Y(y) = F_X(g^{-1}(y))$$

Recognizing this, we often do well to work with cdfs instead of pdfs when we are changing variables or rescaling the $x$-axis.
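A numerical sketch of the change-of-variable rule (hypothetical example: $X \sim$ Exponential with rate 1 and $g(x) = 2x$, so $g^{-1}(y) = y/2$ and $g'(x) = 2$) confirms that the cdf transforms with no correction factor while the pdf picks up a factor of $1/g'$:

```python
import math
import random

random.seed(4)

# X ~ Exponential(rate 1); Y = g(X) = 2X, a monotone increasing rescaling.
def f_X(x):
    return math.exp(-x)

def F_X(x):
    return 1 - math.exp(-x)

def f_Y(y):
    return f_X(y / 2) / 2   # f_X(g^{-1}(y)) / g'(g^{-1}(y))

def F_Y(y):
    return F_X(y / 2)       # F_X(g^{-1}(y)): no correction factor

# Compare the empirical cdf of simulated Y values with the prediction.
ys = [2 * random.expovariate(1.0) for _ in range(100_000)]
y0 = 1.5
ecdf = sum(y <= y0 for y in ys) / len(ys)
print(ecdf, F_Y(y0))   # the two should nearly agree
```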
