Discrete Mathematics and Probability Theory
These 4-page class notes were uploaded by Mr. Hayley Barton on Thursday, October 22, 2015. The notes belong to COMPSCI 70 at University of California - Berkeley, taught by Staff in Fall. Since their upload, they have received 19 views. For similar materials see /class/226664/compsci-70-university-of-california-berkeley in Computer Science at University of California - Berkeley.
CS 70: Discrete Mathematics for CS, Spring 2008 — David Wagner — Note 19: Variance

Question: At each time step I flip a fair coin. If it comes up Heads, I walk one step to the right; if it comes up Tails, I walk one step to the left. How far do I expect to have traveled from my starting point after $n$ steps?

Denoting a right-move by $+1$ and a left-move by $-1$, we can describe the probability space here as the set of all words of length $n$ over the alphabet $\{\pm 1\}$, each having equal probability. For instance, one possible outcome is $(1, 1, -1, -1)$. Let the r.v. $X$ denote our position (relative to our starting point 0) after $n$ moves. Thus $X = X_1 + X_2 + \cdots + X_n$, where

$$X_i = \begin{cases} 1 & \text{if the $i$th toss is Heads,} \\ -1 & \text{otherwise.} \end{cases}$$

Now obviously we have $E(X) = 0$. The easiest (rigorous) way to see this is to note that $E(X_i) = \frac{1}{2}\cdot 1 + \frac{1}{2}\cdot(-1) = 0$, so by linearity of expectation $E(X) = \sum_{i=1}^n E(X_i) = 0$. Thus after $n$ steps my expected position is 0! But of course this is not very informative, and is due to the fact that positive and negative deviations from 0 cancel out.

What the above question is really asking is: what is the expected value of $|X|$, our distance from 0? Unfortunately, computing the expected value of $|X|$ turns out to be a little awkward, due to the absolute value operator. Therefore, rather than consider the r.v. $|X|$, we will instead look at the r.v. $X^2$. Notice that this also has the effect of making all deviations from 0 positive, so it should also give a good measure of the distance traveled. However, because it is the squared distance, we will need to take a square root at the end.

Let's calculate $E(X^2)$:

$$E(X^2) = E\big((X_1 + X_2 + \cdots + X_n)^2\big) = E\Big(\sum_{i=1}^n X_i^2 + \sum_{i \ne j} X_i X_j\Big) = \sum_{i=1}^n E(X_i^2) + \sum_{i \ne j} E(X_i X_j).$$

In the last step here we used linearity of expectation. To proceed, we need to compute $E(X_i^2)$ and $E(X_i X_j)$ for $i \ne j$. Let's consider first $X_i^2$. Since $X_i$ can take on only the values $\pm 1$, clearly $X_i^2 = 1$ always, so $E(X_i^2) = 1$. What about $E(X_i X_j)$? Since $X_i$ and $X_j$ are independent, it is the case that $E(X_i X_j) = E(X_i)E(X_j) = 0$. Plugging these values into the above equation gives

$$E(X^2) = n \times 1 + 0 = n.$$

(Footnote: For independent random variables $X, Y$, we have $E(XY) = E(X)E(Y)$. You are strongly encouraged to prove this as an exercise, along similar lines to our proof of linearity of expectation in an earlier lecture. Note that $E(XY) = E(X)E(Y)$ is false for general r.v.'s $X, Y$: to see this, just look at the present example, where we have $E(X_i^2) = 1 \ne 0 = E(X_i)^2$.)

So we see that our expected squared distance from 0 is $n$. One interpretation of this is that we might expect to be a distance of about $\sqrt{n}$ away from 0 after $n$ steps. However, we have to be careful here: we cannot simply argue that $E(|X|) = \sqrt{E(X^2)} = \sqrt{n}$. (Why not?) We will see shortly (see Chebyshev's Inequality below) how to make precise deductions about $|X|$ from knowledge of $E(X^2)$.

For the moment, however, let's agree to view $E(X^2)$ as an intuitive measure of "spread" of the r.v. $X$. In fact, for a more general r.v. with expectation $E(X) = \mu$, what we are really interested in is $E((X-\mu)^2)$, the expected squared distance from the mean. In our random walk example, we had $\mu = 0$, so $E((X-\mu)^2)$ just reduces to $E(X^2)$.

Definition 19.1 (variance): For a r.v. $X$ with expectation $E(X) = \mu$, the variance of $X$ is defined to be

$$\mathrm{Var}(X) = E((X - \mu)^2).$$

The square root $\sigma(X) = \sqrt{\mathrm{Var}(X)}$ is called the standard deviation of $X$.

The point of the standard deviation is merely to undo the squaring in the variance. Thus the standard deviation is on the same scale as the r.v. itself. Since the variance and standard deviation differ just by a square, it really doesn't matter which one we choose to work with, as we can always compute one from the other immediately. We shall usually use the variance. For the random walk example above, we have that $\mathrm{Var}(X) = n$, and the standard deviation of $X$ is $\sqrt{n}$.

The following easy observation gives us a slightly different way to compute the variance that is simpler in many cases.

Theorem 19.1: For a r.v. $X$ with expectation $E(X) = \mu$, we have $\mathrm{Var}(X) = E(X^2) - \mu^2$.

Proof: From the definition of variance, we have

$$\mathrm{Var}(X) = E((X-\mu)^2) = E(X^2 - 2\mu X + \mu^2) = E(X^2) - 2\mu E(X) + \mu^2 = E(X^2) - 2\mu^2 + \mu^2 = E(X^2) - \mu^2.$$

In the third step here we used linearity of expectation. $\Box$

Let's see some examples of variance calculations.

1. Fair die. Let $X$ be the score on the roll of a single fair die. Recall from an earlier lecture that $E(X) = \frac{7}{2}$. So we just need to compute $E(X^2)$, which is a routine calculation:

$$E(X^2) = \frac{1}{6}\big(1^2 + 2^2 + 3^2 + 4^2 + 5^2 + 6^2\big) = \frac{91}{6}.$$

Thus, from Theorem 19.1,

$$\mathrm{Var}(X) = E(X^2) - (E(X))^2 = \frac{91}{6} - \frac{49}{4} = \frac{35}{12}.$$

2. Biased coin. Let $X$ be the number of Heads in $n$ tosses of a biased coin with Heads probability $p$ (i.e., $X$ has the binomial distribution with parameters $n, p$). We already know that $E(X) = np$. Let

$$X_i = \begin{cases} 1 & \text{if the $i$th toss is Heads,} \\ 0 & \text{otherwise.} \end{cases}$$

We can then write $X = X_1 + X_2 + \cdots + X_n$, and then

$$E(X^2) = E\big((X_1 + X_2 + \cdots + X_n)^2\big) = \sum_{i=1}^n E(X_i^2) + \sum_{i \ne j} E(X_i X_j) = np + n(n-1)p^2.$$

In the last step here, we have used the facts that $E(X_i^2) = p$ and that $E(X_i X_j) = E(X_i)E(X_j) = p^2$ (since $X_i, X_j$ are independent). Note that there are $n(n-1)$ pairs $(i, j)$ with $i \ne j$. Finally, we get that

$$\mathrm{Var}(X) = E(X^2) - (E(X))^2 = np + n(n-1)p^2 - n^2p^2 = np - np^2 = np(1-p).$$

As an example, for a fair coin the expected number of Heads in $n$ tosses is $\frac{n}{2}$, and the standard deviation is $\frac{\sqrt{n}}{2}$. Notice that in fact $\mathrm{Var}(X) = \sum_i \mathrm{Var}(X_i)$, and the same was true in the random walk example. This is no coincidence, and it depends on the fact that the $X_i$ are mutually independent. (Exercise: verify this!) So in the independent case we can compute variances of sums very easily. As we shall see in example 3, however, when the r.v.'s are not mutually independent, the variance is not linearly additive.

3. Number of fixed points. Let $X$ be the number of fixed points in a random permutation of $n$ items (i.e., the number of students in a class of size $n$ who receive their own homework after shuffling). We saw in an earlier lecture that $E(X) = 1$, regardless of $n$. To compute $E(X^2)$, write $X = X_1 + X_2 + \cdots + X_n$, where

$$X_i = \begin{cases} 1 & \text{if $i$ is a fixed point,} \\ 0 & \text{otherwise.} \end{cases}$$

Then as usual we have

$$E(X^2) = \sum_{i=1}^n E(X_i^2) + \sum_{i \ne j} E(X_i X_j). \qquad (1)$$

Since $X_i$ is an indicator r.v., we have that $E(X_i^2) = E(X_i) = \Pr[X_i = 1] = \frac{1}{n}$. In this case, however, we have to be a bit more careful about $E(X_i X_j)$: note that we cannot claim, as before, that this is equal to $E(X_i)E(X_j)$, because $X_i$ and $X_j$ are not independent. (Do you see why not?) But since both $X_i$ and $X_j$ are indicators, we can compute $E(X_i X_j)$ directly as follows:

$$E(X_i X_j) = \Pr[X_i = 1 \wedge X_j = 1] = \Pr[\text{both $i$ and $j$ are fixed points}] = \frac{1}{n} \cdot \frac{1}{n-1} = \frac{1}{n(n-1)}.$$

(Check that you understand the last step here.) Plugging this into equation (1), we get

$$E(X^2) = n \times \frac{1}{n} + n(n-1) \times \frac{1}{n(n-1)} = 1 + 1 = 2.$$

Thus $\mathrm{Var}(X) = E(X^2) - (E(X))^2 = 2 - 1 = 1$. In other words, the variance and the mean are both equal to 1. Like the mean, the variance is also independent of $n$. Intuitively, at least, this means that it is unlikely that there will be more than a small number of fixed points, even when the number of items $n$ is very large.

Chebyshev's Inequality

We have seen that, intuitively, the variance (or, more correctly, the standard deviation) is a measure of "spread," or deviation from the mean. Our next goal is to make this intuition quantitatively precise. What we can show is the following.
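Because every example above is a finite probability space, the variance calculations can be sanity-checked by brute-force enumeration. The sketch below (Python, not part of the original notes; the helper `variance` and the choice $n = 6$ are ours) computes each expectation with exact rational arithmetic and applies the formula $\mathrm{Var}(X) = E(X^2) - (E(X))^2$ from Theorem 19.1:

```python
from itertools import product, permutations
from fractions import Fraction

def variance(outcomes):
    # Var(X) = E(X^2) - E(X)^2 (Theorem 19.1), for equally likely outcomes.
    size = len(outcomes)
    mean = Fraction(sum(outcomes), size)
    mean_sq = Fraction(sum(x * x for x in outcomes), size)
    return mean_sq - mean ** 2

n = 6  # small enough to enumerate every outcome exactly

# Example 1 -- fair die: Var(X) = 91/6 - (7/2)^2 = 35/12.
assert variance(range(1, 7)) == Fraction(35, 12)

# Random walk: X is the sum of n independent +/-1 steps, so E(X^2) = Var(X) = n.
walks = [sum(w) for w in product([1, -1], repeat=n)]
assert variance(walks) == n

# Example 2 -- biased coin, p = 1/3: weight each Heads/Tails sequence by its
# probability and compute E(X), E(X^2) exactly; Var(X) should equal np(1-p).
p = Fraction(1, 3)
ex = exsq = Fraction(0)
for seq in product([0, 1], repeat=n):
    k = sum(seq)                    # number of Heads in this sequence
    w = p**k * (1 - p)**(n - k)     # probability of this sequence
    ex += w * k
    exsq += w * k * k
assert ex == n * p and exsq - ex**2 == n * p * (1 - p)

# Example 3 -- fixed points: over all n! permutations, mean and variance are both 1.
fixed = [sum(1 for i, v in enumerate(perm) if i == v)
         for perm in permutations(range(n))]
assert Fraction(sum(fixed), len(fixed)) == 1
assert variance(fixed) == 1
```

All of the checks use `Fraction` rather than floating point, so they confirm the algebra exactly instead of merely approximating it.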