Probabilistic Methods In Electrical And Computer Engineering
ECE 30200, Purdue University
Instructor: Ilya Pollak
Class notes by Cassidy Casper, uploaded September 19, 2015
Chapter 2: Random Signals

So far in this course we have only considered deterministic signals. In many practical situations this is not enough: it is much more convenient to model many processes as random signals. For example, images acquired with a medical imaging device always have noise. The exact intensity values vary from image to image; they depend on many different factors which are beyond our control and which therefore cannot be reliably modeled. However, what we may be able to reliably model is the average behavior of these signals. In other words, if some averaging procedure is performed over a hundred images, perhaps it will be possible to predict how the next hundred images will behave on average. Once we have a good probabilistic model, we can pose and solve various useful estimation problems in order to, e.g., remove noise and enhance image quality. Our objective for this topic will be to develop the analysis tools for random signals. We will start by reviewing some basic facts about probability. (Probabilistic modeling, i.e., assigning probabilities to events, will not be studied here: we will always assume either that a probabilistic model is given or that it can be easily constructed.)

2.1 Introduction to Random Sequences, Detection and Estimation

2.1.1 Events and Probability

The main concepts are as follows.

- An outcome of an experiment, or an elementary event.
- The sample space $\Omega$ for an experiment is the set of all possible outcomes of the experiment.
- An event is a set in the sample space. We say that event A occurs if one of the outcomes in the set A occurs. (Not every set in the sample space must necessarily be an event with a well-defined probability; we will not discuss this here, deferring to advanced courses on probability and measure theory.)
- A probability measure is a function which assigns a probability (a number $P(A)$) to each event A, and which satisfies the following properties:
  1. It is non-negative: $P(A) \ge 0$.
  2. The probability of the whole sample space is 1: $P(\Omega) = 1$.
  3. It is countably additive:
     $$P\Big(\bigcup_{i=1}^{\infty} A_i\Big) = \sum_{i=1}^{\infty} P(A_i) \quad \text{if } A_i \cap A_j = \emptyset \text{ for any } i \neq j.$$

Recall:
- $A_1 + A_2 = A_1 \cup A_2$, the union of $A_1$ and $A_2$ ("$A_1$ or $A_2$"): the union of $A_1$ and $A_2$ consists of every outcome which is in $A_1$, or in $A_2$, or in both.
- $A_1 A_2 = A_1 \cap A_2$, the intersection of $A_1$ and $A_2$ ("$A_1$ and $A_2$"): the intersection consists of every outcome which is both in $A_1$ and in $A_2$. Note that $A_1 \cap A_2 = \emptyset$ means that $A_1$ and $A_2$ are mutually exclusive events.
- $A^c$, the complement of A ("not A"): the complement of A consists of all the elements of $\Omega$ which are not contained in A.

Example 2.1. Consider a random experiment where a fair six-sided die is thrown once. Its sample space is $\Omega = \{1, 2, 3, 4, 5, 6\}$, and we define the following events:

- $A_1$ = "an even number turns up", i.e., $A_1 = \{2, 4, 6\}$;
- $A_2$ = "a prime number turns up", i.e., $A_2 = \{2, 3, 5\}$;
- $A_3$ = "one turns up", i.e., $A_3 = \{1\}$.

We can calculate $P(A_1) = \frac{3}{6} = \frac{1}{2}$ by property 3 (additivity). Furthermore:
$$A_1 + A_2 = A_1 \cup A_2 = \text{"the number which turns up is either even or prime, or both"} = \{2, 3, 4, 5, 6\},$$
$$A_1 A_2 = A_1 \cap A_2 = \text{"the number which turns up is both even and prime"} = \{2\},$$
$$A_1^c = \text{"an odd number turns up"} = \{1, 3, 5\}, \qquad A_2^c = \text{"a non-prime number turns up"} = \{1, 4, 6\},$$
$$A_1^c \cup A_2^c = \{1, 3, 4, 5, 6\} = (A_1 \cap A_2)^c, \qquad A_1^c \cap A_2^c = \{1\} = (A_1 \cup A_2)^c.$$
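Because the sample space here is finite, these probabilities and set identities can be verified by direct enumeration. Below is a minimal sketch in Python (the `prob` helper and the uniform measure on the six outcomes are our own framing, not part of the notes):

```python
from fractions import Fraction

# Sample space for one throw of a fair six-sided die
omega = {1, 2, 3, 4, 5, 6}
A1 = {2, 4, 6}          # "an even number turns up"
A2 = {2, 3, 5}          # "a prime number turns up"

def prob(event):
    """Probability of an event under the uniform measure on omega."""
    return Fraction(len(event & omega), len(omega))

print(prob(A1))              # 1/2
print(prob(A1 | A2))         # P(A1 or A2) = 5/6
print(prob(A1 & A2))         # P(A1 and A2) = 1/6
# De Morgan: complement of the union equals intersection of complements
assert omega - (A1 | A2) == (omega - A1) & (omega - A2)
```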
2.1.2 Conditional Probability

In observing the outcomes of a random experiment, we are often interested in how the outcome of one event A is influenced by that of another event B. For example, in one extreme case A may always occur whenever B does, as in Fig. 2.1(a), while in the other extreme case A never occurs if B does, as in Fig. 2.1(b).

[Figure 2.1: Two illustrations of dependence between events A and B: (a) A always occurs if B does; (b) A never occurs if B occurs.]

To characterize the relationship between A and B, we introduce the conditional probability of A given B, which is the probability of A occurring if B is known to have occurred.

Definition 2.1. The conditional probability of A given B is defined as
$$P(A \mid B) \stackrel{\text{def}}{=} \frac{P(A \cap B)}{P(B)},$$
assuming that $P(B) > 0$.

Conditional probabilities specify a probability law on the new universe B: it can be shown that they satisfy all three axioms. Therefore all general properties of probability laws remain valid for conditional probabilities.

Example 2.2. In the previous die-throwing experiment (Example 2.1), what is $P(A_1 \mid A_2)$?
$$P(A_1 \mid A_2) = P(\text{even number turns up} \mid \text{a prime number turned up}) = \frac{P(A_1 \cap A_2)}{P(A_2)} = \frac{P(\{2\})}{P(\{2, 3, 5\})} = \frac{1/6}{1/2} = \frac{1}{3}.$$
If we look at Fig. 2.2 we immediately see that $P(A_1 \mid A_2) = \frac{1}{3}$, because given that $A_2$ has occurred, only one outcome (i.e., 2) out of three equiprobable outcomes $\{2, 3, 5\}$ makes $A_1$ occur.

[Figure 2.2: A diagram for the die-throwing experiment.]

When we are conditioning on $A_2$, we are no longer operating in the whole of $\Omega$: $A_2$ becomes the sample space, and hence all probabilities conditioned on $A_2$ are normalized by $P(A_2)$. In particular, $P(A_2 \mid A_2) = 1$.

Properties of conditional probabilities:

1. $0 \le P(A \mid B) \le 1$.
2. If $A \cap B = \emptyset$, then $P(A \mid B) = 0$.
3. If $B \subset A$ (i.e., B implies A), then $P(A \mid B) = 1$.
   [Figure 2.3: When B is a subset of A, $P(A \mid B) = 1$.]
4. If $A_1, A_2, \ldots$ are mutually exclusive with $A = \bigcup_{k=1}^{\infty} A_k$, then
   $$P(A \mid B) = \sum_{k=1}^{\infty} P(A_k \mid B).$$
   [Figure 2.4.]
5. Total Probability Theorem.
   [Figure 2.5: The set B is contained in the union of mutually exclusive sets $A_1$, $A_2$, and $A_3$.]
   If the sample space is partitioned into sets $A_1$, $A_2$, and $A_3$ as in Fig. 2.5, then the probability of any event B can be computed as follows:
   $$P(B) = P(B \cap A_1) + P(B \cap A_2) + P(B \cap A_3) = P(B \mid A_1)P(A_1) + P(B \mid A_2)P(A_2) + P(B \mid A_3)P(A_3).$$
   More generally,
   $$P(B) = \sum_i P(B \mid A_i)\,P(A_i), \quad \text{if the } A_i\text{'s are mutually exclusive and } B \subset \bigcup_i A_i.$$
6. Bayes' Rule is used for combining evidence. It tells us how to compute the probabilities $P(A_i \mid B)$ if $P(A_i)$ and $P(B \mid A_i)$ are known. Here is some commonly used terminology:
   - Prior model probabilities $P(A_i)$.
   - Measurement model $P(B \mid A_i)$: the conditional probability to observe data B given that the truth is $A_i$.
   - Posterior probabilities $P(A_i \mid B)$: the conditional probability that the truth is $A_i$, given that we observed data B.
   $$P(A_i \mid B) = \frac{P(A_i \cap B)}{P(B)} = \frac{P(B \mid A_i)\,P(A_i)}{P(B)} = \frac{P(B \mid A_i)\,P(A_i)}{\sum_j P(B \mid A_j)\,P(A_j)},$$
   where the denominator follows from the Total Probability Theorem.
7. Multiplication Rule:
   $$P\Big(\bigcap_{i=1}^{n} A_i\Big) = P(A_1)\,P(A_2 \mid A_1)\,P(A_3 \mid A_1 \cap A_2) \cdots P\Big(A_n \,\Big|\, \bigcap_{j=1}^{n-1} A_j\Big),$$
   provided that all the conditioning events have positive probability.

Example 2.3. You leave point O, choosing one of the roads $OB_1$, $OB_2$, $OB_3$, $OB_4$ at random with equal probabilities (Fig. 2.6). At every subsequent crossroads you again choose a road at random with equal probabilities. What is the probability that you will arrive at A?

[Figure 2.6: Illustration of Example 2.3.]

Solution. For $k = 1, 2, 3, 4$,
$$P(\text{arrive at } B_k \text{ from } O) = \frac{1}{4}.$$
Applying property 5 of conditional probabilities, we get (the notation $X \leftarrow Y$ means that we arrive at X from Y):
$$P(\text{arrive at } A) = \sum_{k=1}^{4} P(A \leftarrow B_k \mid B_k \leftarrow O)\,P(B_k \leftarrow O) = \frac{1}{4}\left(\frac{1}{3} + \frac{1}{2} + 1 + \frac{2}{5}\right) = \frac{1}{4} \cdot \frac{10 + 15 + 30 + 12}{30} = \frac{67}{120}.$$

This property often allows us to break complicated problems into simpler steps and efficiently solve them. This basic principle is at the heart of Kalman filtering and more general techniques of estimation on graphs, as well as efficient coding algorithms.
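Example 2.3 can be double-checked numerically. In the sketch below, the per-branch conditional probabilities are assumptions reconstructed from the garbled computation (Fig. 2.6 itself is not available); they reproduce the stated answer 67/120:

```python
from fractions import Fraction as F

# Total Probability Theorem applied to Example 2.3:
# P(arrive at A) = sum_k P(A <- B_k) * P(B_k <- O).
p_branch = [F(1, 4)] * 4                               # each road O -> B_k equally likely
p_A_given_branch = [F(1, 3), F(1, 2), F(1, 1), F(2, 5)]  # assumed from Fig. 2.6

p_A = sum(pb * pa for pb, pa in zip(p_branch, p_A_given_branch))
print(p_A)   # 67/120
```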
2.1.3 Statistical Independence

Definition 2.2. If $P(A \cap B) = P(A)\,P(B)$, then the events A and B are said to be statistically independent. Otherwise, they are said to be statistically dependent. (The qualifier "statistical" is used to avoid confusion of the notion of independence of events or random variables with the notion of linear independence of vectors. When there is no danger of ambiguity, we will simply say that events A and B are independent.)

Recall that we introduced conditional probability to talk about the information that the occurrence of one event provides about another event. Independent events constitute a special case when no such information is provided. Since $P(A \cap B) = P(A \mid B)\,P(B) = P(B \mid A)\,P(A)$, independence is the same as
$$P(A \mid B) = P(A) \ \text{ if } P(B) \neq 0, \qquad P(B \mid A) = P(B) \ \text{ if } P(A) \neq 0.$$
In other words, the occurrence of B has no influence on the probability of the occurrence of A, and vice versa.

Example 2.4. Pick a card from a full deck and define the following events:

- A: the card picked at random from a full deck is a spade;
- B: the card picked at random from a full deck is a queen.

Are the events A and B independent? The probabilities of events A and B are
$$P(A) = \frac{13}{52} = \frac{1}{4}, \qquad P(B) = \frac{4}{52} = \frac{1}{13}.$$
Then the probability of their combined occurrence is
$$P(A \cap B) = P(\text{the card is the queen of spades}) = \frac{1}{52} = P(A)\,P(B).$$
Hence A and B are independent.

Example 2.5. Disjoint does not mean independent. In fact, if the events A and B are disjoint, and if $P(A) > 0$ and $P(B) > 0$, then
$$P(A \cap B) = P(\emptyset) = 0 \neq P(A)\,P(B) > 0,$$
and therefore A and B are dependent.

Statistical independence of a collection of events. A collection of events is called independent if information on some of the events tells us nothing about probabilities related to the remaining events.

Definition 2.3. $A_1, A_2, A_3, \ldots, A_n$ are called statistically independent if
$$P\Big(\bigcap_{i \in S} A_i\Big) = \prod_{i \in S} P(A_i)$$
for every subset S of $\{1, 2, \ldots, n\}$. In other words,
$$P(A_i \cap A_j) = P(A_i)\,P(A_j) \ \text{ for every pair } i \neq j,$$
$$P(A_i \cap A_j \cap A_k) = P(A_i)\,P(A_j)\,P(A_k) \ \text{ for every triplet } i \neq j \neq k,$$
and so on, up to
$$P(A_1 \cap A_2 \cap \cdots \cap A_n) = P(A_1)\,P(A_2)\cdots P(A_n).$$

Example 2.6. Throw a pair of dice and define the following events:

- $A_1$: first die turns up an odd number;
- $A_2$: second die turns up an odd number;
- $A_3$: the total of both numbers is odd.

Are these events pairwise independent? Furthermore, are they independent?

Solution. We have $P(A_1) = P(A_2) = \frac{1}{2}$. Assuming that all 36 outcomes of the pair of throws are equally likely,
$$P(A_1 \cap A_2) = \frac{9}{36} = \frac{1}{4} = P(A_1)\,P(A_2),$$
and therefore $A_1$ and $A_2$ are independent. Moreover, $P(A_3) = \frac{1}{2}$. Given that $A_1$ has occurred, $A_3$ occurs if and only if the second die turns up even:
$$P(A_3 \mid A_1) = \frac{1}{2} = P(A_3),$$
so $A_1$ and $A_3$ are independent. Similarly,
$$P(A_3 \mid A_2) = \frac{1}{2} = P(A_3),$$
so $A_2$ and $A_3$ are independent. However, $A_3$ cannot occur if $A_1$ and $A_2$ both occur, and so
$$P(A_1 \cap A_2 \cap A_3) = 0, \quad \text{whereas} \quad P(A_1)\,P(A_2)\,P(A_3) = \frac{1}{8} \neq P(A_1 \cap A_2 \cap A_3).$$
Thus $A_1$, $A_2$, and $A_3$ are pairwise independent but not independent.
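Example 2.6 can be verified by enumerating all 36 equally likely outcomes. A minimal sketch (the predicate-based `prob` helper is our own framing):

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of throwing a pair of dice
outcomes = list(product(range(1, 7), repeat=2))

def prob(pred):
    return Fraction(sum(pred(d1, d2) for d1, d2 in outcomes), len(outcomes))

A1 = lambda d1, d2: d1 % 2 == 1            # first die odd
A2 = lambda d1, d2: d2 % 2 == 1            # second die odd
A3 = lambda d1, d2: (d1 + d2) % 2 == 1     # total odd

# Pairwise independence holds...
assert prob(lambda a, b: A1(a, b) and A2(a, b)) == prob(A1) * prob(A2)
assert prob(lambda a, b: A1(a, b) and A3(a, b)) == prob(A1) * prob(A3)
assert prob(lambda a, b: A2(a, b) and A3(a, b)) == prob(A2) * prob(A3)
# ...but mutual independence fails: A3 is impossible when A1 and A2 both occur
p_all = prob(lambda a, b: A1(a, b) and A2(a, b) and A3(a, b))
print(p_all, prob(A1) * prob(A2) * prob(A3))   # 0 vs 1/8
```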
2.1.4 Random Variables

Definition 2.4. A random variable is an assignment of a value (number) to every possible outcome. We refer to this number as the numerical value or the experimental outcome of the random variable. In other words, a random variable is a function from the sample space to the real numbers, $X: \Omega \to \mathbb{R}$.

Example 2.7. Suppose our experiment consists of two independent tosses of a fair coin. Let X = the number of tails. Then X is a random variable.

Notation: we denote a random variable by a capital letter such as X, and its experimental value by a lowercase letter such as x or k (or any other letters). We will often analyze random variables without specifying the experiment which they come from. However, it is important to always remember that a random variable is a function from the sample space of some random experiment to a set of numbers.

A random variable is called discrete if its range (the set of values it can take) is finite or at most countably infinite.

Definition 2.5. The probability mass function (pmf) of a discrete random variable X is
$$p_X(x) = P(X = x).$$

X is a continuous random variable if there is a function $f_X(x) \ge 0$, called the probability density function of X (or pdf of X), such that
$$P(X \in B) = \int_B f_X(x)\,dx \quad \text{for any } B \subset \mathbb{R}.$$
Note that for a discrete random variable,
$$P(X \in B) = \sum_{x \in B} p_X(x).$$

Definition 2.6. The cumulative distribution function (cdf) of a random variable X is defined as
$$F_X(x) = P(X \le x).$$
This definition applies to any random variable X.

Example 2.8. Throw a die; X is the number which turns up.

[Figure 2.7: (a) The pmf and (b) the cdf of X for the die-throwing experiment.]

Example 2.9. Toss a point "at random" onto the interval [0, 6]; X is the point where it lands.

[Figure 2.8: (a) The pdf and (b) the cdf of X for the point-tossing experiment.]

Some properties:
- $F_X(x)$ is non-decreasing.
- $F_X(x) \to 0$ as $x \to -\infty$, and $F_X(x) \to 1$ as $x \to \infty$.
- If X is a continuous random variable, $f_X(x) = \frac{dF_X(x)}{dx}$.
- If X is a continuous random variable,
$$P(a < X < b) = P(a \le X < b) = P(a < X \le b) = P(a \le X \le b) = F_X(b) - F_X(a) = \int_a^b f_X(x)\,dx,$$
$$F_X(b) = \int_{-\infty}^{b} f_X(x)\,dx, \qquad \int_{-\infty}^{\infty} f_X(x)\,dx = 1.$$

Expectation, or expected value:
$$E[g(X)] = \int_{-\infty}^{\infty} g(x)\,f_X(x)\,dx$$
(where $f_X$ is the pdf of X) is called the expectation or expected value of the random variable $g(X)$. Note: when X is discrete, this is just
$$E[g(X)] = \sum_x g(x)\,P(X = x).$$

- Expected value of X: $E[X]$.
- Second moment of X: $E[X^2]$.
- Variance of X:
$$\mathrm{Var}(X) = E\big[(X - E[X])^2\big] = \int_{-\infty}^{\infty} (x - E[X])^2 f_X(x)\,dx.$$

The expectation is linear in the following sense:
$$E[a\,g(X) + b\,h(X) + c] = a\,E[g(X)] + b\,E[h(X)] + c.$$
Therefore,
$$\mathrm{Var}(X) = E\big[X^2 - 2X E[X] + (E[X])^2\big] = E[X^2] - 2E[X]\,E[X] + (E[X])^2 = E[X^2] - (E[X])^2. \tag{2.1}$$

Example 2.10. Throw a die and let X be the number that turns up. The expected value of X is
$$E[X] = \sum_{m=1}^{6} m\,P(X = m) = \frac{1}{6}(1 + 2 + \cdots + 6) = \frac{21}{6} = 3.5.$$
Note that $E[X]$ is not necessarily a value that X can assume. The second moment of X is
$$E[X^2] = \sum_{m=1}^{6} m^2\,P(X = m) = \frac{1}{6}(1 + 4 + 9 + 16 + 25 + 36) = \frac{91}{6}.$$
The variance of X can then be calculated using Eq. (2.1):
$$\mathrm{Var}(X) = E[X^2] - (E[X])^2 = \frac{91}{6} - \frac{49}{4} = \frac{182 - 147}{12} = \frac{35}{12}.$$

Example 2.11. Random variable X is uniformly distributed on [0, 6] (Fig. 2.9).

[Figure 2.9: The pdf of X in Example 2.11: $f_X(x) = \frac{1}{6}$ for $0 \le x \le 6$.]

The expected value of X is
$$E[X] = \int_0^6 \frac{x}{6}\,dx = \frac{x^2}{12}\Big|_0^6 = 3.$$
The second moment of X is
$$E[X^2] = \int_0^6 \frac{x^2}{6}\,dx = \frac{x^3}{18}\Big|_0^6 = 12.$$
The variance of X is
$$\mathrm{Var}(X) = 12 - 3^2 = 3.$$
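The moments in Examples 2.10 and 2.11 can be checked by Monte Carlo simulation, since sample averages approximate expectations for large N (the seed and sample size below are arbitrary choices for the demonstration):

```python
import random

random.seed(0)
N = 200_000

# Die throw: E[X] = 3.5, Var(X) = 35/12 ≈ 2.917
die = [random.randint(1, 6) for _ in range(N)]
mean = sum(die) / N
var = sum((x - mean) ** 2 for x in die) / N
print(f"die: mean ≈ {mean:.3f}, var ≈ {var:.3f}")

# Uniform on [0, 6]: E[X] = 3, Var(X) = 3
uni = [random.uniform(0.0, 6.0) for _ in range(N)]
mean = sum(uni) / N
var = sum((x - mean) ** 2 for x in uni) / N
print(f"uniform: mean ≈ {mean:.3f}, var ≈ {var:.3f}")
```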
2.1.5 Two Random Variables

Definition 2.7. The joint cumulative distribution function for two random variables X and Y is defined as
$$F_{X,Y}(x, y) = P(X \le x \text{ and } Y \le y).$$
The joint probability density function for X and Y is
$$f_{X,Y}(x, y) = \frac{\partial^2 F_{X,Y}(x, y)}{\partial x\,\partial y}.$$
We have
$$F_{X,Y}(x, y) = \int_{-\infty}^{y}\!\int_{-\infty}^{x} f_{X,Y}(\alpha, \beta)\,d\alpha\,d\beta.$$
The individual pdf's $f_X(x)$ and $f_Y(y)$ are then called marginal pdf's, to distinguish them from the joint pdf $f_{X,Y}(x, y)$. How can we get $f_X(x)$ and $f_Y(y)$ from $f_{X,Y}(x, y)$? We note that
$$P(X \le x) = P(X \le x, Y < \infty) = F_{X,Y}(x, \infty) = \int_{-\infty}^{x}\left[\int_{-\infty}^{\infty} f_{X,Y}(\alpha, \beta)\,d\beta\right] d\alpha. \tag{2.2}$$
On the other hand, in the previous section we saw that
$$P(X \le x) = \int_{-\infty}^{x} f_X(\alpha)\,d\alpha. \tag{2.3}$$
Identifying the integrands in Eq. (2.2) and Eq. (2.3), we see that
$$f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dy. \tag{2.4}$$
Similarly,
$$f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dx.$$

Example 2.12. Let
$$f_{X,Y}(x, y) = \begin{cases} A, & 0 \le y \le x \le 1, \\ 0, & \text{else.} \end{cases}$$
1. Find A. 2. Find $f_X(x)$.

[Figure 2.10: The support of the function $f_{X,Y}(x, y)$ of Example 2.12 is the shaded triangle; within the triangle, $f_{X,Y}(x, y) = A$.]

Solution. 1. To find A:
$$\int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dx\,dy = A \cdot (\text{area of the triangle}) = A \cdot \frac{1}{2} = 1 \ \Rightarrow\ A = 2.$$
2. We can find $f_X$ by using Eq. (2.4): in order to find the marginal pdf of X, we integrate y out, for each fixed x, by computing the line integral of $f_{X,Y}$ along the dashed segment in Fig. 2.10. More generally,
$$f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dy = \begin{cases} \int_0^x 2\,dy = 2x, & 0 \le x \le 1, \\ 0, & \text{otherwise.} \end{cases}$$

[Figure 2.11: The marginal density $f_X(x)$ from Example 2.12.]

Definition 2.8. The random variables X and Y are independent if
$$f_{X,Y}(x, y) = f_X(x)\,f_Y(y).$$
This definition of independence implies that any event involving X is independent of any event involving Y.

Conditional density:
$$f_{X \mid Y}(x \mid y) = \frac{f_{X,Y}(x, y)}{f_Y(y)}.$$
If X and Y are independent, then
$$f_{X \mid Y}(x \mid y) = \frac{f_X(x)\,f_Y(y)}{f_Y(y)} = f_X(x),$$
i.e., the knowledge that Y = y does not provide any information about X.

Expected value:
$$E[g(X, Y)] = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} g(x, y)\,f_{X,Y}(x, y)\,dx\,dy.$$
Note that this definition is consistent with our previous definition of the expected value of a single random variable:
$$E[g(X)] = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} g(x)\,f_{X,Y}(x, y)\,dy\,dx = \int_{-\infty}^{\infty} g(x)\left[\int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dy\right] dx = \int_{-\infty}^{\infty} g(x)\,f_X(x)\,dx.$$

The correlation of X and Y is $E[XY]$. The covariance of X and Y is
$$\lambda_{X,Y} \stackrel{\text{def}}{=} \mathrm{Cov}(X, Y) \stackrel{\text{def}}{=} E\big[(X - E[X])(Y - E[Y])\big] = E[XY] - E[X]\,E[Y].$$

Definition 2.9. X and Y are uncorrelated if $\lambda_{X,Y} = 0$.

The following are some remarks about these terms:
1. X and Y are uncorrelated if and only if $E[XY] = E[X]\,E[Y]$.
2. If X and Y are independent, then they are uncorrelated, because in this case
$$E[XY] = \int\!\!\int xy\,f_{X,Y}(x, y)\,dx\,dy = \int\!\!\int xy\,f_X(x)\,f_Y(y)\,dx\,dy = \left(\int x f_X(x)\,dx\right)\left(\int y f_Y(y)\,dy\right) = E[X]\,E[Y].$$
3. However, if X and Y are uncorrelated, it does not imply that they are independent. But if X and Y are, e.g., Gaussian random variables, the notions of independence and uncorrelatedness are equivalent.
4. $\mathrm{Var}(X) = \mathrm{Cov}(X, X)$.
5. The correlation coefficient is
$$\rho_{X,Y} = \frac{\mathrm{Cov}(X, Y)}{\sqrt{\mathrm{Var}(X)\,\mathrm{Var}(Y)}}.$$
It is possible to show that $-1 \le \rho_{X,Y} \le 1$, and that:
- $\rho_{X,Y} = 0$ if and only if X and Y are uncorrelated;
- $\rho_{X,Y} = 1$ if and only if $X = aY + b$ with $a > 0$;
- $\rho_{X,Y} = -1$ if and only if $X = aY + b$ with $a < 0$.

Why do we need the notion of uncorrelatedness in addition to independence? This is because:
- it also indicates to what extent X and Y are related, and in some cases is equivalent to independence;
- to determine whether X and Y are independent, we need to know $f_{X,Y}(x, y)$; to determine whether X and Y are uncorrelated, we only need to know the first and second moments, which are easier to estimate.

2.1.6 Random Sequences

Definition 2.10. A DT random (or stochastic) process, or DT random signal, or random sequence $X(n)$, $-\infty < n < \infty$, is a sequence of random variables defined on the same probability space. Alternative notation: $\{X(n)\} = \ldots, X(-1), X(0), X(1), \ldots$

Example 2.13. Suppose we flip a coin every second and assign
$$X(n) = \begin{cases} 1, & \text{if the } n\text{-th flip is heads}, \\ -1, & \text{if the } n\text{-th flip is tails}. \end{cases}$$
This sequence of binary random variables $\{X(n)\}$ is a random process.

The observations associated with physical processes are often most appropriately modeled as random. It is typically the case, however, that their probability distribution is unknown and has to be deduced from the data. For example, when we flip a coin, we do not know a priori that $P(\text{heads}) = \frac{1}{2}$; if the coin is unfair, then we may have $P(\text{heads}) \neq \frac{1}{2}$.

2.1.7 Estimating Distributions

Example 2.14. Repeatedly and independently flip a coin and let
$$X(n) = \begin{cases} 1, & \text{if the } n\text{-th observation is heads (with probability } p), \\ -1, & \text{if the } n\text{-th observation is tails (with probability } 1 - p). \end{cases}$$

[Figure 2.12: Probability mass function of X(n) for the coin-tossing experiment.]

Suppose we do not know p. How do we learn it from observing $\{X(n)\}$?

Solution. We construct the empirical distribution of $\{X(n)\}$ by tallying the number of occurrences of heads and the number of occurrences of tails (Fig. 2.13):
$$\hat p_N = \frac{\text{number of heads out of } N \text{ experiments}}{N}.$$

[Figure 2.13: Estimated probability distribution of the coin-tossing experiment.]

How good is this estimate? Let
$$H(n) = \begin{cases} 1, & \text{if } X(n) = 1, \\ 0, & \text{else.} \end{cases}$$
Then
$$E[H(n)] = p \cdot 1 + (1 - p) \cdot 0 = p, \qquad E[H^2(n)] = p,$$
and therefore
$$\mathrm{Var}(H(n)) = p - p^2.$$
Note that
$$\hat p_N = \frac{1}{N}\sum_{n=1}^{N} H(n),$$
and that it is a random variable. Its expected value is
$$E[\hat p_N] = \frac{1}{N}\sum_{n=1}^{N} E[H(n)] = \frac{1}{N} \cdot Np = p.$$
The expected value of our estimate $\hat p_N$ is equal to the quantity that we are trying to estimate. Estimators that have this property are called unbiased.
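A quick simulation illustrates unbiasedness: averaging the estimator $\hat p_N$ over many independent repetitions of the N-flip experiment gives a value close to the true p (the parameter values below are arbitrary choices for the sketch):

```python
import random

random.seed(1)
p_true, N, trials = 0.3, 50, 20_000

# Average p_hat_N over many independent N-flip experiments
estimates = []
for _ in range(trials):
    heads = sum(random.random() < p_true for _ in range(N))
    estimates.append(heads / N)

print(sum(estimates) / trials)   # ≈ 0.3 = p_true (unbiasedness)
```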
Let us now calculate $\mathrm{Var}(\hat p_N)$. First consider two properties of variance.

1. If we scale the random variable W by a constant $\alpha$, the resulting variance will be scaled by $\alpha^2$:
$$\mathrm{Var}(\alpha W) = E\big[(\alpha W - E[\alpha W])^2\big] = \alpha^2 E\big[(W - E[W])^2\big] = \alpha^2\,\mathrm{Var}(W).$$

2. What is the variance of a sum of random variables? It depends. For example:

(a) Suppose $W_1 = W_2 = \cdots = W_N = W$; then
$$\mathrm{Var}\Big(\sum_{n=1}^{N} W_n\Big) = \mathrm{Var}(NW) = N^2\,\mathrm{Var}(W).$$

(b) Suppose $W_1, W_2, \ldots, W_N$ are independent random variables; then
$$\mathrm{Var}\Big(\sum_{n=1}^{N} W_n\Big) = E\Big[\Big(\sum_{n=1}^{N}(W_n - E[W_n])\Big)^2\Big] = \sum_{n=1}^{N} E\big[(W_n - E[W_n])^2\big] + \sum_{n \neq m} E\big[(W_n - E[W_n])(W_m - E[W_m])\big] = \sum_{n=1}^{N} \mathrm{Var}(W_n),$$
since each cross term factors by independence into $E[W_n - E[W_n]]\,E[W_m - E[W_m]] = 0$.

Thus, if we only made one experiment but counted its result N times,
$$\mathrm{Var}(\hat p_N) = \mathrm{Var}\Big(\frac{1}{N}\sum_{n=1}^{N} H(n)\Big) = \frac{1}{N^2}\,\mathrm{Var}(N\,H(n)) = \mathrm{Var}(H(n)).$$
However, if the N experiments are independent, then
$$\mathrm{Var}(\hat p_N) = \frac{1}{N^2}\,\mathrm{Var}\Big(\sum_{n=1}^{N} H(n)\Big) = \frac{1}{N^2}\sum_{n=1}^{N}\mathrm{Var}(H(n)) = \frac{1}{N^2}\cdot N\,\mathrm{Var}(H(n)) = \frac{p(1-p)}{N}.$$
Therefore $\mathrm{Var}(\hat p_N) \to 0$ as $N \to \infty$. Estimators whose variance approaches zero as the sample size grows to infinity are called consistent estimators. We have therefore just shown that $\hat p_N$ is a consistent estimate of p. In other words, as the number of experiments increases, our estimate becomes more reliable. Since the estimate itself is a random variable, the variance of the estimate gives us a measure of its reliability. Roughly speaking, a larger variance of the estimate implies a larger spread of its distribution; hence we have a larger chance of ending up with an estimate that is far from the correct value of p. On the other hand, if the variance of the estimate is small, there is a higher chance of the estimate being in the vicinity of the correct value of p, as shown in Fig. 2.14. Note that this figure is not meant to be a precise characterization of this particular estimator, since, for example, $\hat p_N$ is a discrete random variable and therefore its pdf is actually a sequence of impulses.

[Figure 2.14: For larger values of N, the variance of $\hat p_N$ is smaller, which means that the estimate is more likely to be close to the correct value of p: (a) distribution of $\hat p_N$ for small N; (b) distribution of $\hat p_N$ for large N.]

Example 2.15. $X(1), \ldots, X(N)$ are independent, identically distributed (iid) continuous random variables with unknown pdf $f_X(x)$. How to estimate $f_X(x)$?

[Figure 2.15: An unknown distribution to be estimated.]

One possible method is:
1. Partition the x-axis into L intervals $(x_0, x_1], (x_1, x_2], \ldots, (x_{L-1}, x_L]$.
2. Estimate $p^i = P(x_{i-1} \le X(n) \le x_i)$ as follows:
$$\hat p_N^i = \frac{\text{number of outcomes that fall into the } i\text{-th interval}}{N}.$$
3. Note that if $f_X$ is approximately constant on the i-th interval $(x_{i-1}, x_i]$, then
$$p^i = \int_{x_{i-1}}^{x_i} f_X(x)\,dx \approx f_X(x_i)\,(x_i - x_{i-1}).$$
Hence we can estimate $f_X$ on the i-th interval as
$$\hat f_X(x) = \frac{\hat p_N^i}{x_i - x_{i-1}}, \qquad x_{i-1} < x \le x_i.$$

Unless something is known a priori about how fast $f_X(x)$ changes, this method will not necessarily be successful in estimating $f_X$. Let
$$H^i(n) = \begin{cases} 1, & \text{if } x_{i-1} \le X(n) \le x_i, \\ 0, & \text{else.} \end{cases}$$
Then
$$E[\hat p_N^i] = \frac{1}{N}\sum_{n=1}^{N} E[H^i(n)] = p^i,$$
which means that our estimate of $p^i$ is unbiased. We also have
$$\mathrm{Var}\left(\frac{\hat p_N^i}{x_i - x_{i-1}}\right) = \frac{\mathrm{Var}(\hat p_N^i)}{(x_i - x_{i-1})^2} = \frac{p^i(1 - p^i)}{N\,(x_i - x_{i-1})^2}.$$
Therefore, if the bin size $x_i - x_{i-1}$ is small, we need a large number N of experiments to get a reliable estimate. However, if the bin size is too large, then $f_X(x)$ may not be approximately constant within each interval, resulting in an estimate which does not track the actual $f_X(x)$ very well. This trade-off is illustrated in Fig. 2.16.

[Figure 2.16: The two upper panels (50 bins with N = 100; 10 bins with N = 100) illustrate the trade-off between the number of bins and the size of a bin: if the bins are too small, the estimate can be very noisy; if there are too few bins, the estimate may be too coarse. The bottom panel (50 bins with N = 10000) illustrates the influence of the number N of experiments: for the same bin size, increasing N leads to a less noisy estimate.]
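The histogram estimator of Example 2.15 and the trade-off of Fig. 2.16 can be reproduced with a short sketch. A standard Gaussian plays the role of the "unknown" density here, an assumption made only for the demonstration (NumPy's `density=True` option computes exactly $\hat p_N^i / (x_i - x_{i-1})$):

```python
import numpy as np

rng = np.random.default_rng(0)

def f_hat(samples, n_bins):
    """Histogram pdf estimate: p_hat per bin divided by bin width."""
    density, edges = np.histogram(samples, bins=n_bins, density=True)
    return density, edges

# Same (bins, N) combinations as in Fig. 2.16
for n_bins, N in [(50, 100), (10, 100), (50, 10_000)]:
    x = rng.standard_normal(N)
    density, edges = f_hat(x, n_bins)
    # Compare the estimate near x = 0 to the true value 1/sqrt(2*pi) ≈ 0.399
    i = np.searchsorted(edges, 0.0) - 1
    print(f"{n_bins} bins, N = {N}: f_hat(0) ≈ {density[i]:.3f}")
```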
Sometimes it is not necessary to estimate the whole pdf. Often there are practical situations when there are only one or several parameters of interest.

Example 2.16. Suppose we are interested in measuring some quantity A, but what we actually observe is
$$Y(1) = A + X(1),$$
where Y(1) is what we actually observe, A is the quantity of interest, and X(1) is measurement noise;
$$Y(2) = A + X(2) \quad \text{(second measurement, independently obtained)},$$
and so on, up to
$$Y(N) = A + X(N).$$
Suppose it is known that the pdf of X(n) is the same for all n, with $E[X(n)] = 0$, but the specific form of the pdf is unknown. How do we estimate A?

Solution. One possibility is the following estimate:
$$\hat A_N = \frac{1}{N}\sum_{n=1}^{N} Y(n).$$
Then
$$E[\hat A_N] = \frac{1}{N}\sum_{n=1}^{N} E[Y(n)] = \frac{1}{N}\sum_{n=1}^{N} E[A + X(n)] = A \quad \text{(unbiased)},$$
$$\mathrm{Var}(\hat A_N) = \frac{1}{N^2}\cdot N\,\mathrm{Var}(X(n)) = \frac{\mathrm{Var}(X(n))}{N} \to 0 \ \text{ as } N \to \infty \quad \text{(consistent)}.$$
The variance approaches zero as the number of measurements increases to infinity.

Central limit theorem. Suppose
$$Z_N = \frac{1}{\sqrt{N}}\sum_{n=1}^{N} X(n),$$
where the X(n) are iid with mean zero and variance $\sigma^2$. Then, as $N \to \infty$, the cdf of $Z_N$ converges to the Gaussian cdf with mean zero and variance $\sigma^2$. The pdf of a Gaussian (normal) random variable R with mean m and variance $\sigma^2$ is
$$f_R(r) = \frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{(r - m)^2}{2\sigma^2}}.$$

- For large N, the distribution of our estimate $\hat A_N$ is approximately Gaussian.
- As N gets larger, it becomes more and more probable that $\hat A_N$, which we compute from our observations, is near A.

[Figure 2.17: The pdf of the estimate $\hat A_N$ of Example 2.16.]

Example 2.17. Estimate the variance of iid random variables $X(1), \ldots, X(N)$.

Solution. 1. If the mean m is known, try
$$\widehat{\mathrm{Var}}(X)_N = \frac{1}{N}\sum_{n=1}^{N}(X(n) - m)^2.$$
Then
$$E\big[\widehat{\mathrm{Var}}(X)_N\big] = E\big[(X(n) - m)^2\big] = \mathrm{Var}(X(n)) \quad \text{(unbiased)}.$$
2. If the mean m is unknown:
(a) estimate the mean as in Example 2.16,
$$\hat m = \frac{1}{N}\sum_{n=1}^{N} X(n);$$
(b) try
$$\widehat{\mathrm{Var}}(X)_N = \frac{1}{N}\sum_{n=1}^{N}(X(n) - \hat m)^2.$$
Writing $X(n) - \hat m = (X(n) - m) - (\hat m - m)$ and expanding the square, one finds
$$E\Big[\frac{1}{N}\sum_{n=1}^{N}(X(n) - \hat m)^2\Big] = \mathrm{Var}(X(n)) - \mathrm{Var}(\hat m) = \mathrm{Var}(X(n)) - \frac{\mathrm{Var}(X(n))}{N} = \frac{N - 1}{N}\,\mathrm{Var}(X(n)) \quad \text{(biased)}.$$
The corrected estimator
$$\frac{1}{N - 1}\sum_{n=1}^{N}(X(n) - \hat m)^2$$
is unbiased.

2.1.8 Sampling From A Distribution

Suppose you have hypothesized or built a probabilistic model, i.e., you have assumed or estimated a probability distribution of your observation (e.g., using techniques from the previous section). It is often very important to be able to synthesize samples from this probability distribution. E.g., texture synthesis is needed in rendering, printing, and other computer vision and imaging applications. One possible approach to texture synthesis is:
1. Model texture as a 2-D random process.
2. Synthesize an image (or a texture patch) by sampling from the probabilistic model.

(Note: the word "sampling" used here means drawing a random sample from a probability distribution, which is completely different from the meaning it had in the earlier sections, which dealt with converting CT signals to DT, where sampling meant discretization.)

While thorough consideration of the texture synthesis problem is beyond the scope of this course, we now consider the problem of sampling from a distribution.

Example 2.18. Suppose we have a cdf $F_X$. How do we generate samples of a random variable with this cdf?

Solution. Suppose we know how to generate samples of a uniform variable Y whose pdf is shown in Fig. 2.18.

[Figure 2.18: The pdf of the uniform variable Y in Example 2.18.]
[Figure 2.19: An illustration of $F_X^{-1}$.]

Let $W = F_X^{-1}(Y)$. Then the cdf of W is
$$F_W(w) = P(W \le w) = P(F_X^{-1}(Y) \le w) = P(Y \le F_X(w)) = F_X(w),$$
because Y is uniform on [0, 1]. So the cdf of W is $F_X$: we use $F_X^{-1}$ to transform the uniform distribution into $F_X$.
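A minimal sketch of this inverse-cdf method, using an exponential distribution as the target because its cdf is easy to invert (the rate value is an arbitrary choice for the demonstration):

```python
import math
import random

random.seed(2)
lam = 2.0   # exponential rate; F_X(x) = 1 - exp(-lam*x)

def sample_exponential():
    y = random.random()                  # uniform on [0, 1)
    return -math.log(1.0 - y) / lam      # W = F_X^{-1}(Y) has cdf F_X

samples = [sample_exponential() for _ in range(100_000)]
print(sum(samples) / len(samples))       # ≈ 1/lam = 0.5 (exponential mean)
```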
2.1.9 Filtering A Random Process

What happens to a random process when it is put through an LTI system? In general, this is a very difficult question; however, we can say what will happen to its first- and second-order statistics.

Definition 2.11. If X(n), Y(n) are real-valued random sequences, the autocorrelation function of X is
$$r_{XX}(m, n) = E[X(m)X(n)]$$
(how strongly are the points of X at m and n correlated?). The cross-correlation function of X(m) and Y(n) is
$$r_{XY}(m, n) = E[X(m)Y(n)]$$
(how strongly are X and Y correlated?).

A note on complex-valued random variables and processes: in that case the covariance is $\mathrm{Cov}(X, Y) = E[(X - m_X)(Y - m_Y)^*]$ and the correlation is $E[XY^*]$; for real-valued sequences these reduce to the definitions above.

Definition 2.12. A random sequence X(n) is wide-sense stationary (WSS) if:
1. $E[X(n)]$ is constant for all n;
2. $r_{XX}(m, n) = r_{XX}(0, n - m)$ for any n, m; with some abuse of notation, we then write $r_{XX}(n - m)$.

If both X and Y are WSS (jointly), then $c_{XY}(m, n) = c_{XY}(n - m)$.

What will happen to the mean and autocorrelation of a random process when it goes through an LTI system? Consider the system illustrated in Fig. 2.20.

[Figure 2.20: An LTI system with real-valued impulse response h(n), driven by a real-valued WSS input X(n) with mean $m_X$ and autocorrelation $r_{XX}(n)$, producing output Y(n).]

Given $m_X$, $r_{XX}(n)$, and $h(n)$, can we find $m_Y$ and $r_{YY}$?

1. Mean:
$$m_Y(n) = E[Y(n)] = E\Big[\sum_{m=-\infty}^{\infty} h(n - m)X(m)\Big] = \sum_{m=-\infty}^{\infty} h(n - m)\,E[X(m)] = m_X \sum_{k=-\infty}^{\infty} h(k),$$
which is constant.

2. Cross-correlation:
$$c_{XY}(m, n) = E[X(m)Y(n)] = E\Big[X(m)\sum_{k=-\infty}^{\infty} h(n - k)X(k)\Big] = \sum_{k=-\infty}^{\infty} h(n - k)\,E[X(m)X(k)] = \sum_{k=-\infty}^{\infty} h(n - k)\,r_{XX}(k - m).$$
Substituting $l = k - m$,
$$c_{XY}(m, n) = \sum_{l=-\infty}^{\infty} h(n - m - l)\,r_{XX}(l) = (h * r_{XX})(n - m),$$
which only depends on $n - m$:
$$c_{XY}(n) = (h * r_{XX})(n).$$

3. Autocorrelation:
$$r_{YY}(m, n) = E[Y(m)Y(n)] = E\Big[Y(m)\sum_{k=-\infty}^{\infty} h(n - k)X(k)\Big] = \sum_{k=-\infty}^{\infty} h(n - k)\,E[X(k)Y(m)] = \sum_{k=-\infty}^{\infty} h(n - k)\,c_{XY}(m - k).$$
With the change of variables $j = k - m$ and $c_{YX}(n) = c_{XY}(-n)$, this becomes
$$r_{YY}(m, n) = (h * c_{YX})(n - m) = (h * \tilde h * r_{XX})(n - m), \qquad \text{where } \tilde h(n) = h(-n).$$
Again, $r_{YY}(m, n) = r_{YY}(n, m) = r_{YY}(n - m)$. Also, we found that $m_Y(n)$ is constant, so Y is WSS. In summary: if the input to an LTI system is WSS, then the output is also WSS, with
$$c_{XY}(n) = (h * r_{XX})(n), \qquad r_{YY}(n) = (h * c_{YX})(n) = (h * \tilde h * r_{XX})(n).$$

[Figure 2.21: Decomposition of the steps in calculating $r_{YY}(n)$ from $r_{XX}(n)$: convolve $r_{XX}(n)$ with h to get $c_{XY}(n)$; time-reverse to get $c_{YX}(n) = c_{XY}(-n)$; convolve with h again to get $r_{YY}(n)$.]

Example 2.19. Let $Y(n) = X(n) + X(n - 1)$ in the system illustrated in Fig. 2.22, where X(n) is iid Gaussian with zero mean and variance $\sigma^2$. Find $r_{YY}(n)$, i.e., how correlated is the output with itself shifted by some lag value n?

[Figure 2.22: The system in Example 2.19.]

Solution.
$$r_{XX}(m) = E[X(n)X(n + m)] = \begin{cases} \sigma^2, & m = 0, \\ 0, & m \neq 0, \end{cases}$$
since X is independent from sample to sample. A process such as this, for which X(n) is uncorrelated with X(m) unless n = m, is called a white noise process. The cross-correlation is
$$c_{XY}(m) = E[X(n)Y(n + m)] = E\big[X(n)\big(X(n + m) + X(n + m - 1)\big)\big] = \begin{cases} \sigma^2, & m = 0 \text{ or } m = 1, \\ 0, & \text{else,} \end{cases}$$
because of independence. Finally,
$$r_{YY}(n) = (h * c_{YX})(n) = \begin{cases} 2\sigma^2, & n = 0, \\ \sigma^2, & n = \pm 1, \\ 0, & \text{else.} \end{cases}$$

[Figure 2.23: Scatter plots of neighboring samples of X and of Y for Example 2.19.]

Estimating Correlation Functions. For wide-sense stationary processes,
$$r_{XX}(n) = E[X(m)X(m + n)] \quad \text{(autocorrelation)}, \qquad r_{XY}(n) = E[X(m)Y(m + n)] \quad \text{(cross-correlation)}.$$
How to estimate $r_{XX}(n)$? Again, compute sample averages. For example, an estimate of $r_{XX}(2)$ from samples $X(0), \ldots, X(K + 1)$ is
$$\frac{X(0)X(2) + X(1)X(3) + X(2)X(4) + \cdots + X(K - 1)X(K + 1)}{K}.$$
More generally, if N points of X(n) are available, form
$$\hat r_{XX}(m) = \frac{1}{N - |m|}\sum_{n=0}^{N - |m| - 1} X(n)\,X(n + |m|), \qquad -(N - 1) \le m \le N - 1.$$

What is this useful for? Well, there are a lot of different applications, one of which is radar detection. The key to this application is the following property of the autocorrelation function:
$$|r_{XX}(n)| \le r_{XX}(0).$$
And in fact, for many random processes which model natural phenomena, the random variables become less and less correlated as they become more separated in time:
$$\lim_{|n| \to \infty} r_{XX}(n) = m_X^2.$$
So a typical autocorrelation function might look like the one in Fig. 2.24.

[Figure 2.24: A typical autocorrelation function.]

A random variable is very much correlated with itself, it may be somewhat correlated with its neighbors, and it is basically uncorrelated with random variables which are far in the future or in the past.
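The sample-average estimator $\hat r_{YY}(m)$ can be checked against the theoretical values of Example 2.19 ($2\sigma^2$, $\sigma^2$, 0) with a short sketch (seed and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, N = 1.0, 100_000

x = sigma * rng.standard_normal(N + 1)    # white Gaussian noise X(n)
y = x[1:] + x[:-1]                        # Y(n) = X(n) + X(n-1)

def r_hat(z, m):
    """Sample autocorrelation estimate at lag m >= 0."""
    return np.mean(z[: len(z) - m] * z[m:])

for m in range(4):
    print(f"r_YY({m}) ≈ {r_hat(y, m):+.3f}")   # ≈ 2.0, 1.0, 0.0, 0.0
```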
An estimate of the autocorrelation function may look like Fig. 2.25.

[Figure 2.25: An estimate of the autocorrelation function.]

In radar or sonar, you transmit a signal X(n) and then receive its reflection from an object. Here X(n) is the transmitted signal, and Y(n), the received signal, is an attenuated, noisy, delayed version of X. By measuring the delay between X(n) and Y(n), you can estimate the distance to the object. It is usually impossible to tell the delay just by looking at X(n) and Y(n). However, by comparing an estimate of the cross-correlation $\hat c_{XY}$, such as the one shown in Fig. 2.26, to the estimate of $r_{XX}$, it becomes easy to estimate the delay.

[Figure 2.26: An estimate of the cross-correlation function; in this figure, the delay is n = 50.]

2.1.10 Noise Removal and Detection

For the past few sections we have been considering random sequences, that is, sequences of random variables. Exactly predicting the behavior of such a sequence is impossible, but, as you have learned in the past three sections, in some cases it is possible to estimate certain parameters, like the mean or the autocorrelation functions, from a realization of a random sequence. The main idea here was that if you have a sequence of identical random variables (preferably independent), you can estimate certain parameters by computing time averages. In this section we continue the discussion of estimation and consider examples of detecting or estimating a signal corrupted by additive noise.

Problem 1: Binary hypothesis testing in DT additive white Gaussian noise.
- Hypothesis $H_0$: $Y = W$;
- Hypothesis $H_1$: $Y = x + W$;

where W is a vector of uncorrelated, zero-mean Gaussian random variables with variance $\sigma^2$. For example, a situation like Fig. 2.27 can be encountered in a communication system.

[Figure 2.27: Model of a communication system: a transmitted signal $s_m(n)$, with $s_0(n) = 0$ for message m = 0 and $s_1(n) = x(n)$ for m = 1, is corrupted by additive noise W(n), so that $Y = s_m + W$.]

One possible objective: having observed $Y = y$, choose $H_0$ or $H_1$ so as to maximize the likelihood of the observation,
$$p_{Y \mid H_1}(y \mid H_1) \;\underset{H_0}{\overset{H_1}{\gtrless}}\; p_{Y \mid H_0}(y \mid H_0),$$
i.e.,
$$\prod_{n=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{(y(n) - x(n))^2}{2\sigma^2}} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \prod_{n=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{y(n)^2}{2\sigma^2}}.$$
Taking the logarithm of both sides (the log is monotonically increasing, so the decision is unchanged):
$$-\frac{1}{2\sigma^2}\sum_{n=1}^{N}(y(n) - x(n))^2 \;\underset{H_0}{\overset{H_1}{\gtrless}}\; -\frac{1}{2\sigma^2}\sum_{n=1}^{N} y(n)^2.$$
Multiplying by $-2\sigma^2$ (which reverses the direction of the comparison) and expanding,
$$\sum_{n=1}^{N} y(n)^2 - 2\sum_{n=1}^{N} y(n)x(n) + \sum_{n=1}^{N} x(n)^2 \;\underset{H_1}{\overset{H_0}{\gtrless}}\; \sum_{n=1}^{N} y(n)^2,$$
which simplifies to
$$2\langle y, x \rangle \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \|x\|^2, \qquad \text{i.e.,} \qquad \langle y, x \rangle \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \frac{\|x\|^2}{2}.$$
That is:
- calculate the projection $\langle y, x \rangle$ of the data y onto x;
- if the coefficient is greater than $\|x\|^2/2$, decide that x is present; if not, decide that x is not present.

[Figure 2.28: Vector space interpretation of hypothesis testing. If the projection of y onto x is large, it is very likely that this y resulted from x: say $H_1$. If y lies mostly in the subspace orthogonal to x, it is very likely that this y is just due to noise: say $H_0$.]

This is quite intuitive. In order to isolate x, we need to represent our observations Y in a basis where x stands out, namely, a basis where x is one of the basis vectors. If indeed Y was equal to x plus noise, we would expect its projection onto x to be large, whereas otherwise there is no reason to expect this projection to be large.
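Here is a minimal sketch of this maximum-likelihood detector; the particular signal x, the noise level, and the seed are arbitrary choices for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(4)
N, sigma = 64, 1.0
x = np.cos(2 * np.pi * 5 * np.arange(N) / N)   # known signal

def detect(y, x):
    """Return 1 (H1: signal present) or 0 (H0: noise only)."""
    # Decide H1 when <y, x> exceeds ||x||^2 / 2
    return int(np.dot(y, x) > 0.5 * np.dot(x, x))

# Try both hypotheses
y0 = sigma * rng.standard_normal(N)            # H0: Y = W
y1 = x + sigma * rng.standard_normal(N)        # H1: Y = x + W
print(detect(y0, x), detect(y1, x))            # typically: 0 1
```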
Let us now generalize this idea to a slightly more complicated situation. Here we do not know the underlying signal X exactly, but we know, for example, that it is a sinusoidal signal or a sum of sinusoids. If we represent a noisy sinusoid in a Fourier basis by computing its DFT, we would see two large spikes corresponding to the frequency of the sinusoid, and then there will be noise. One simple method to get rid of the noise would then be to:
1. set all the coefficients which are below a certain threshold to zero, so that we are left with just these two spikes;
2. reconstruct our signal from the remaining coefficients.

Now let us analyze this procedure a little more closely.

Problem 2: Noise removal via representations in orthogonal bases. Let
$$Y = X + W,$$
where X is the sum of a few non-random sinusoids of unknown frequencies with sufficiently large amplitudes, and W is the vector of uncorrelated, zero-mean random variables with variance $\sigma^2$:
$$W = \begin{pmatrix} W(0) \\ W(1) \\ \vdots \\ W(N - 1) \end{pmatrix}.$$

1. Take the DFT:
$$\hat Y = AY = AX + AW = \hat X + \hat W,$$
where $\hat Y$ is the DFT of Y and A is the DFT matrix,
$$A = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ 1 & e^{-j\frac{2\pi}{N}} & \cdots & e^{-j\frac{2\pi}{N}(N-1)} \\ \vdots & \vdots & & \vdots \\ 1 & e^{-j\frac{2\pi}{N}(N-1)} & \cdots & e^{-j\frac{2\pi}{N}(N-1)(N-1)} \end{pmatrix}.$$
Suppose $x(n) = \cos\big(\frac{2\pi k_0}{N} n\big)$, $0 \le n \le N - 1$; then
$$\hat X(k) = \begin{cases} \frac{N}{2}, & k = k_0 \text{ or } k = N - k_0, \\ 0, & \text{otherwise.} \end{cases}$$

2. What is $\hat W$, the DFT of the white noise W? Its mean is
$$E[\hat W] = E[AW] = A\,E[W] = 0.$$
The covariance of $\hat W$ is
$$E[\hat W \hat W^H] = A\,E[WW^H]\,A^H = A\,(\sigma^2 I)\,A^H = \sigma^2 A A^H = N\sigma^2 I,$$
since the IDFT matrix is $A^{-1} = \frac{1}{N}A^H$, so that $AA^H = NI$.

So the DFT of white noise is white noise with zero mean and standard deviation $\sqrt{N}\,\sigma$. As long as $\sqrt{N}\,\sigma$ is much smaller than the magnitude of the signal's DFT coefficients (here, N/2), we can remove noise by thresholding the DFT coefficients:
- Calculate the DFT of Y.
- Set all coefficients whose absolute value is less than, say, $3\sqrt{N}\,\sigma$, to zero.
- Reconstruct.

[Figure 2.29: Thresholding noise coefficients: the DFT of X(n) + W(n); the dashed line represents the threshold level.]

[Figure 2.30: Noise removal through thresholding DFT coefficients: (a) noise-free sinusoid; (c) reconstructed sinusoid; (d) DFT of noisy sinusoid.]

[Figure 2.31: Noise removal for a sum of two sinusoids: (a) sum of two sinusoids; (c) reconstruction; (d) DFT of noisy signal.]

Note that the reason why we chose the Fourier basis is that we knew that our signals of interest were sinusoids; in other scenarios, you would choose a different basis. Note also that our analysis would still hold for any orthogonal basis, because in our derivation of the fact that the transform of white noise is still white noise, we never used the fact that our basis vectors were Fourier vectors.
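To make the thresholding procedure concrete, here is a minimal sketch using the $3\sqrt{N}\,\sigma$ threshold suggested above; the signal length, frequency, and noise level are arbitrary choices for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(5)
N, sigma, k0 = 1024, 0.5, 30

n = np.arange(N)
x = np.cos(2 * np.pi * k0 * n / N)           # clean sinusoid
y = x + sigma * rng.standard_normal(N)       # noisy observation

Y = np.fft.fft(y)                            # spikes of height N/2 at k0 and N - k0
Y[np.abs(Y) < 3 * np.sqrt(N) * sigma] = 0    # threshold the DFT coefficients
x_hat = np.real(np.fft.ifft(Y))              # reconstruct

print(f"noisy RMSE:    {np.sqrt(np.mean((y - x) ** 2)):.3f}")
print(f"denoised RMSE: {np.sqrt(np.mean((x_hat - x) ** 2)):.3f}")
```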

