# Class Note for ECE 7342 with Professor Jansen at UH

This 147-page set of class notes was uploaded by an elite notetaker on Friday, February 6, 2015. The notes belong to a course at the University of Houston taught in Fall. Since its upload, it has received 28 views.


Date Created: 02/06/15

**ADVANCED TOPICS IN SIGNAL PROCESSING**
ELEE 7342, section 12138

A graduate course presented by

Ben H. Jansen, Professor
Department of Electrical and Computer Engineering
University of Houston
713-743-4431, bjansen@uh.edu

Fall 2003

## Contents

**Part I: Time-Frequency Analysis of Time-Varying Signals**

1. Problem Specification: 1.1 Objective; 1.2 Stationary signals; 1.3 Nonstationary signals
2. Average Frequency and Bandwidth: 2.1 Definitions; 2.2 Energy density functions; 2.3 Computing average frequency and variance from s(t); 2.4 Bandwidth of complex signal
3. Instantaneous Frequency: 3.1 The concept of instantaneous frequency; 3.2 The analytic signal; 3.3 Obtaining the analytic signal; 3.4 Instantaneous frequency
4. The Uncertainty Principle: 4.1 The time-bandwidth product theorem
5. Short-Time Fourier Transform: 5.1 Definition; 5.2 Filter interpretation; 5.3 Discrete STFT; 5.4 Gabor expansion; 5.5 Time-frequency resolution of STFT
6. Continuous Wavelet Transform: 6.1 Comparing STFT and CWT
7. Discrete Wavelet Transform or Discrete-Time Wavelet Transform
8. Functional Analysis: 8.1 Vector Spaces; 8.2 Function Spaces
9. Introduction to Wavelets: 9.1 Expansion Systems; 9.2 Wavelet Transforms
10. Multiresolution Formulation: 10.1 Scaling Function; 10.2 Multiresolution Analysis; 10.3 Wavelet Functions; 10.4 Discrete Wavelet Transform; 10.5 Parseval's Theorem; 10.6 Haar Wavelet System
11. Filter Banks and the DWT: 11.1 Analysis; 11.2 Deriving Equations 95 and 96; 11.3 Synthesis; 11.4 Trees and Packets
12. Wavelet Construction
13. Biorthogonal Wavelet Systems
14. Applications of the Wavelet Transform
15. Wavelet Selection

**Part II: Linear Systems Analysis and Modeling**

16. Discrete-Time Systems: 16.1 Linear Time-Invariant Systems; 16.2 Difference Equations; 16.3 Pole-Zero Diagrams
17. Stochastic Processes: 17.1 Autocorrelation and Autocovariance; 17.2 Stationarity and the Autocorrelation Function; 17.3 Autocorrelation Matrix of WSS Process
18. Linear Modeling: 18.1 AR and ARMA Models; 18.2 Why modeling?; 18.3 Autoregressive Models; 18.4 Autoregressive Model Coefficient Estimation; 18.5 Stationary AR Models; 18.6 Matrix Forms of Normal Equations; 18.7 Levinson-Durbin Recursion; 18.8 Linear Algebra for Levinson Recursion; 18.9 Stability of AR Models; 18.10 Levinson Recursion and Minimum Phase Property; 18.11 Lattice Filters; 18.12 Partial Correlation and K_j; 18.13 Practical Methods for AR Coefficient Computation; 18.14 Selecting the Model Order
19. Model-Based Spectral Estimation: 19.1 Basic Approach; 19.2 AR Spectra and Resolution; 19.3 Problems with AR-Based Spectral Estimation; 19.4 Comparing AR Processes

**Part III: Nonlinear Systems Analysis and Modeling**

20. Linear Systems
21. Nonlinear Systems
22. Higher Order Spectra: 22.1 Characteristic Functions; 22.2 Moments; 22.3 Cumulants; 22.4 Stationary Processes; 22.5 Properties of Cumulants; 22.6 Polyspectra; 22.7 Linearity and Coherence; 22.8 Linear Systems; 22.9 Generalization with Cumulants; 22.10 Blind Deconvolution; 22.11 Tests Based on Cumulants; 22.12 Computing the Polyspectrum from Data

# Part I: Time-Frequency Analysis of Time-Varying Signals

*Based largely on [2, 5].*

## 1 Problem Specification

### 1.1 Objective

Quantify the signal energy as a function of frequency and time; in other words, obtain a function
$$E(t, \omega) \tag{1}$$

Signal energy can be expressed using density functions: in the time domain (energy per unit time at $t$) using $|s(t)|^2$, and in the frequency domain (energy per unit frequency at $\omega$) using $|S(\omega)|^2$, with total energy

$$E = \int |s(t)|^2\,dt = \int |S(\omega)|^2\,d\omega \tag{2}$$

### 1.2 Stationary signals

The energy distribution in the frequency domain is given by

$$|S(\omega)|^2 = \int R_{ss}(\tau)\,e^{-j\omega\tau}\,d\tau$$

$R_{ss}(\tau)$ will be time invariant; hence

$$E(t,\omega) = E(\omega) = |S(\omega)|^2$$

### 1.3 Nonstationary signals

Suppose $s(t) = s_1(t) + s_2(t)$, so that $S(\omega) = S_1(\omega) + S_2(\omega)$. Observe that

$$|S(\omega)|^2 = S(\omega)S^*(\omega) = |S_1(\omega)|^2 + |S_2(\omega)|^2 + S_1(\omega)S_2^*(\omega) + S_1^*(\omega)S_2(\omega) \ne |S_1(\omega)|^2 + |S_2(\omega)|^2 \text{ in general.}$$

Therefore, if $s_1(t)$ and $s_2(t)$ are both of finite duration and well separated in time and frequency, $|S(\omega)|^2$ will not be a good measure of $E(t,\omega)$.

## 2 Average Frequency and Bandwidth

### 2.1 Definitions

We will define the Fourier transform pair as

$$s(t) = \frac{1}{\sqrt{2\pi}}\int S(\omega)\,e^{j\omega t}\,d\omega \tag{3}$$
$$S(\omega) = \frac{1}{\sqrt{2\pi}}\int s(t)\,e^{-j\omega t}\,dt \tag{4}$$

### 2.2 Energy density functions

Treating the energy density function as a probability density function (among other things, we assume $\int|s(t)|^2\,dt = 1$), we can define:

average time (indicates where the energy is located, on average, in time):
$$\mu_t = \langle t\rangle = \int t\,|s(t)|^2\,dt \tag{5}$$

variance in time (a measure of the duration of the signal; most of the signal will have gone by in $2\sigma_t$ seconds):
$$\sigma_t^2 = \int (t-\mu_t)^2\,|s(t)|^2\,dt = \langle t^2\rangle - \mu_t^2 \tag{6, 7}$$
where
$$\langle t^2\rangle = \int t^2\,|s(t)|^2\,dt \tag{8}$$

average frequency (indicates where the energy is located, on average, in frequency):
$$\mu_\omega = \langle\omega\rangle = \int \omega\,|S(\omega)|^2\,d\omega \tag{9}$$

variance in frequency (a measure of the bandwidth of the signal; most of the signal will be concentrated in $2B = 2\sigma_\omega$ rad/s):
$$\sigma_\omega^2 = \int(\omega-\mu_\omega)^2\,|S(\omega)|^2\,d\omega = \langle\omega^2\rangle - \mu_\omega^2 \tag{10, 11}$$
where
$$\langle\omega^2\rangle = \int\omega^2\,|S(\omega)|^2\,d\omega \tag{12}$$

### 2.3 Computing average frequency and variance from s(t)

Using the Fourier pair (3)-(4) and the differentiation (shifting) property of the Fourier transform, the average frequency can be computed directly from $s(t)$:

$$\langle\omega\rangle = \int s^*(t)\,\frac{1}{j}\frac{d}{dt}\,s(t)\,dt \tag{13}$$

Similarly, one can show that
$$\langle\omega^2\rangle = \int\left|\frac{ds(t)}{dt}\right|^2 dt \tag{14}$$
and, in general,
$$\langle\omega^n\rangle = \int s^*(t)\left(\frac{1}{j}\frac{d}{dt}\right)^n s(t)\,dt \tag{15}$$

Another simplification uses
$$\int s_1^*(t)\,\frac{1}{j}\frac{d}{dt}\,s_2(t)\,dt = \int\left[\frac{1}{j}\frac{d}{dt}\,s_1(t)\right]^* s_2(t)\,dt \tag{16}$$
which can be proven by integration by parts, using $s_1(\pm\infty) = s_2(\pm\infty) = 0$ (i.e., finite-duration signals). This leads to
$$\langle\omega^2\rangle = \int\left|\frac{1}{j}\frac{d}{dt}\,s(t)\right|^2 dt \tag{17}$$

The mean of an arbitrary function $g(\omega)$ with Taylor series expansion $g(\omega) = \sum_n g_n\,\omega^n$ can be obtained from
$$\langle g\rangle = \int g(\omega)\,|S(\omega)|^2\,d\omega = \sum_n g_n\int\omega^n\,|S(\omega)|^2\,d\omega = \sum_n g_n\int s^*(t)\left(\frac{1}{j}\frac{d}{dt}\right)^n s(t)\,dt = \int s^*(t)\,g\!\left(\frac{1}{j}\frac{d}{dt}\right)s(t)\,dt \tag{18}$$

Equation (18) can be used to find
$$\sigma_\omega^2 = \int(\omega - \langle\omega\rangle)^2\,|S(\omega)|^2\,d\omega = \int\left|\left(\frac{1}{j}\frac{d}{dt} - \langle\omega\rangle\right)s(t)\right|^2 dt \tag{19}$$

In case $\langle\omega\rangle = 0$ (this happens when $s(t)$ is real),
$$\sigma_\omega^2 = \langle\omega^2\rangle = \int\left|\frac{ds(t)}{dt}\right|^2 dt \tag{20}$$
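The moment formulas above are easy to check numerically. The sketch below (an illustration added here, not part of the original notes) samples a unit-energy Gaussian, computes $\sigma_t^2$ from Equations (5)-(8) and $\sigma_\omega^2$ from Equation (20) with a finite-difference derivative, and confirms the known result $\sigma_t^2\sigma_\omega^2 = 1/4$ for a Gaussian; the grid spacing and range are arbitrary choices.

```python
import math

# Unit-energy Gaussian s(t) = (pi*sigma^2)^(-1/4) exp(-t^2 / (2 sigma^2))
sigma = 1.0
dt = 0.01
ts = [-8.0 + i * dt for i in range(int(16.0 / dt) + 1)]
s = [(math.pi * sigma**2) ** (-0.25) * math.exp(-t * t / (2 * sigma**2)) for t in ts]

# Total energy, Eq. (2): should be ~1 (Riemann sum)
energy = sum(v * v for v in s) * dt

# Eqs. (5)-(8): mean time and variance in time
mu_t = sum(t * v * v for t, v in zip(ts, s)) * dt
var_t = sum((t - mu_t) ** 2 * v * v for t, v in zip(ts, s)) * dt

# Eq. (20): for a real signal, sigma_w^2 = integral |ds/dt|^2 dt
# (central finite differences approximate the derivative)
ds = [(s[i + 1] - s[i - 1]) / (2 * dt) for i in range(1, len(s) - 1)]
var_w = sum(d * d for d in ds) * dt

product = var_t * var_w  # ~0.25 for a Gaussian
```

For $\sigma = 1$ this yields $\sigma_t^2 \approx \sigma_\omega^2 \approx 0.5$, so the time-bandwidth product sits exactly at its Gaussian value.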
### 2.4 Bandwidth of complex signal

For $s(t) = A(t)\,e^{j\phi(t)}$, the square of the bandwidth (i.e., the frequency variance) can be found using
$$\frac{1}{j}\frac{ds(t)}{dt} = \left(\phi'(t) - j\,\frac{A'(t)}{A(t)}\right)s(t)$$
and Equation (19), repeated here:
$$\sigma_\omega^2 = \int\left|\left(\frac{1}{j}\frac{d}{dt} - \langle\omega\rangle\right)s(t)\right|^2 dt = \int\big(\phi'(t) - \langle\omega\rangle\big)^2 A^2(t)\,dt + \int\left(\frac{dA(t)}{dt}\right)^2 dt \tag{21, 22}$$

Notes:
- Bandwidth depends on the phase variation $\phi'(t)$ and the amplitude variation $A(t)$.
- A constant-amplitude signal of varying frequency could have the same bandwidth as a signal with constant frequency but short duration.

## 3 Instantaneous Frequency

### 3.1 The concept of instantaneous frequency

Suppose the signal $s(t)$ is complex: $s(t) = A(t)\,e^{j\phi(t)}$. The mean frequency can be computed using Equation (13):
$$\langle\omega\rangle = \int s^*(t)\,\frac{1}{j}\frac{ds(t)}{dt}\,dt = \int\left(\phi'(t) - j\,\frac{A'(t)}{A(t)}\right)A^2(t)\,dt \tag{23}$$
The mean frequency will be real; hence $\int A'(t)\,A(t)\,dt = 0$ and
$$\langle\omega\rangle = \int \phi'(t)\,A^2(t)\,dt \tag{24}$$
For $\langle\omega\rangle$ to be interpreted as the average frequency, $\phi'(t)$ must be the instantaneous frequency.

Note: treating the derivative of the phase as the instantaneous frequency can be done only if one can define a complex signal corresponding to the real signal under study.

### 3.2 The analytic signal

Objective: given a real signal $s_r(t)$, find a complex signal
$$z(t) = s_r(t) + j\,s_i(t) = A(t)\,e^{j\phi(t)}$$
where $A(t) = \sqrt{s_r^2(t) + s_i^2(t)}$ and $\phi(t) = \arctan\!\big(s_i(t)/s_r(t)\big)$, and where $\phi'(t)$ should reflect the instantaneous frequency of $s_r(t)$. The $z(t)$ that meets these requirements is the analytic signal.

### 3.3 Obtaining the analytic signal

(Figure: the two-sided spectrum $S_r(\omega)$ of a real signal, and the one-sided spectrum $Z(\omega)$, of width $2B$ around $\omega_0$.)

Observe that, for real $s_r(t)$, $|S_r(\omega)| = |S_r(-\omega)|$; hence $\langle\omega\rangle = 0$ and $\sigma_\omega$ is large. On the other hand, $Z(\omega)$ has $\langle\omega\rangle = \omega_0$ and $\sigma_\omega = B$, which makes more sense. Therefore, select
$$z(t) = \frac{2}{\sqrt{2\pi}}\int_0^\infty S_r(\omega)\,e^{j\omega t}\,d\omega \tag{25}$$
where the factor 2 serves to ensure that $\Re\{z(t)\} = s_r(t)$. The analytic signal corresponding to $s(t)$ is defined as
$$z(t) = \mathcal{A}[s(t)] = s(t) + j\,\mathcal{H}[s(t)] \tag{26}$$
where $\mathcal{H}$ denotes the Hilbert transform, with frequency response
$$H(\omega) = \begin{cases} j, & \omega < 0 \\ -j, & \omega > 0 \end{cases}$$

### 3.4 Instantaneous frequency

The instantaneous frequency $\omega_i(t)$ of the real signal $s(t)$ is defined as the derivative of the phase of the analytic signal $z(t)$ corresponding to $s(t)$:
$$\omega_i(t) = \phi'(t),\qquad z(t) = A(t)\,e^{j\phi(t)} = \frac{2}{\sqrt{2\pi}}\int_0^\infty S(\omega)\,e^{j\omega t}\,d\omega,\qquad S(\omega) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} s(t)\,e^{-j\omega t}\,dt$$

Paradoxes:
1. to calculate $z(t)$, knowledge of $s(t)$ for all time is required, i.e., instantaneous frequency is non-local;
2. if $S(\omega)$ is a line spectrum, $\omega_i(t)$ may be continuous, possibly over an infinite range;
3. if $s(t)$ is bandlimited, $\omega_i(t)$ may be outside the band;
4. $\omega_i(t)$ may not be one of the frequencies present in $S(\omega)$;
5. $\omega_i(t)$ may be negative even though $S(\omega) = 0$ for $\omega < 0$.

The last four paradoxes arise when more than a single frequency component is present at the same time. This argues for describing the time-frequency structure by a surface, rather than a curve, in the time-frequency plane.

## 4 The Uncertainty Principle

### 4.1 The time-bandwidth product theorem

Observations:
- a short-duration (narrow) waveform yields a wide spectrum;
- a narrow spectrum yields a long-duration (wide) waveform;
- a waveform and its spectrum cannot be made arbitrarily small simultaneously.

This leads to the Uncertainty Principle:
$$\sigma_t^2\,\sigma_\omega^2 \ge \text{constant} = \tfrac14,\qquad\text{or}\qquad \sigma_t\,\sigma_\omega \ge \tfrac12 \tag{27}$$

Proof: assume $s(t)$ vanishes faster than $1/\sqrt t$ as $t\to\pm\infty$, and that $\mu_t = 0$ and $\mu_\omega = 0$. Using Equations (8) and (20), we obtain
$$\sigma_t^2\,\sigma_\omega^2 = \int t^2\,|s(t)|^2\,dt\;\int\left|\frac{ds}{dt}\right|^2 dt \;\ge\; \left|\int t\,s^*(t)\,\frac{ds(t)}{dt}\,dt\right|^2$$
where the inequality is due to Schwarz: $\int|x|^2 \int|y|^2 \ge \left|\int x^* y\right|^2$. Observe
$$\Re\left\{t\,s^*(t)\,\frac{ds(t)}{dt}\right\} = \frac{t}{2}\,\frac{d|s(t)|^2}{dt}$$
hence
$$\left|\int t\,s^*(t)\,\frac{ds}{dt}\,dt\right|^2 \ge \left|\int \frac{t}{2}\,\frac{d|s(t)|^2}{dt}\,dt\right|^2$$
and, integrating by parts,
$$\int \frac{t}{2}\,\frac{d|s(t)|^2}{dt}\,dt = \left[\frac{t}{2}\,|s(t)|^2\right]_{-\infty}^{\infty} - \frac12\int|s(t)|^2\,dt = -\frac12$$
because $|s(\pm\infty)| = 0$ and we assume unity signal energy. Therefore
$$\sigma_t^2\,\sigma_\omega^2 \ge \frac14.\qquad\text{QED}$$

The equality will hold for Gaussian signals only:
$$s(t) = c\,e^{-kt^2/2} \tag{28}$$
with $k = 2\sigma_\omega^2$ and $c = (k/\pi)^{1/4}$. This follows from the fact that the Schwarz equality holds when the two functions involved are proportional to each other: $s'(t) = -k\,t\,s(t)$.

## 5 Short-Time Fourier Transform

### 5.1 Definition

The STFT is the Fourier transform of a windowed signal, with the window centered at $t$:
$$X(\omega, t) = \frac{1}{\sqrt{2\pi}}\int x(\tau)\,w(\tau - t)\,e^{-j\omega\tau}\,d\tau \tag{29}$$
where $w(t)$ is a lowpass function and, typically, $\int|w(t)|^2\,dt = 1$. The inverse STFT is defined as
$$x(t) = \frac{1}{\sqrt{2\pi}}\iint X(\omega,\tau)\,\gamma(t-\tau)\,e^{j\omega t}\,d\tau\,d\omega,\qquad\text{provided}\quad \int w(t)\,\gamma(t)\,dt = 1 \tag{30}$$
A natural choice is to make the analysis window $w(t)$ equal to the synthesis window $\gamma(t)$.
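The construction of Equations (25)-(26) can be demonstrated numerically. The sketch below (an added illustration, not part of the notes; a hand-rolled $O(N^2)$ DFT keeps it dependency-free) builds the analytic signal of a sampled cosine by zeroing the negative-frequency half of its spectrum and doubling the positive half; the imaginary part comes out as the Hilbert transform (a sine), and the phase increment recovers the frequency.

```python
import cmath
import math

def dft(x):
    """Plain discrete Fourier transform."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse discrete Fourier transform."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def analytic(x):
    """Discrete analogue of Eq. (25): keep DC and Nyquist bins,
    double the positive-frequency bins, zero the negative-frequency bins."""
    N = len(x)
    X = dft(x)
    Z = [0j] * N
    Z[0] = X[0]
    Z[N // 2] = X[N // 2]
    for k in range(1, N // 2):
        Z[k] = 2 * X[k]
    return idft(Z)

N, f0 = 256, 8
s = [math.cos(2 * math.pi * f0 * n / N) for n in range(N)]
z = analytic(s)  # z(n) ~ exp(j 2 pi f0 n / N): Re z = cos, Im z = sin
```

The phase of successive samples of `z` advances by $2\pi f_0/N$ per sample, i.e., the derivative of the phase is the (single) frequency present, exactly as Section 3.4 predicts for a one-component signal.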
Notes:
1. The STFT is a linear time-frequency representation.
2. The STFT depends on the choice of $w(t)$:
$$\iint|X(\omega,t)|^2\,dt\,d\omega = \int|x(t)|^2\,dt\;\int|w(t)|^2\,dt$$
$$\int|X(\omega,t)|^2\,d\omega = \int|x(\tau)|^2\,|w(\tau-t)|^2\,d\tau \ne |x(t)|^2$$
$$\int|X(\omega,t)|^2\,dt = \int|X(\omega')|^2\,|W(\omega-\omega')|^2\,d\omega' \ne |X(\omega)|^2$$

### 5.2 Filter interpretation

Observe
$$X(\omega, t) = \int \big[x(\tau)\,e^{-j\omega\tau}\big]\,w(\tau - t)\,d\tau \tag{31}$$
hence Equation (29) can be viewed as a filtering operation with a lowpass filter with impulse response $w(-t)$, applied to the demodulated signal $x(t)\,e^{-j\omega t}$ (diagram A). Also,
$$X(\omega, t) = e^{-j\omega t}\int x(\tau)\,\big[w(\tau - t)\,e^{-j\omega(\tau - t)}\big]\,d\tau \tag{32}$$
hence Equation (29) can equally be viewed as a filtering operation with a bandpass filter with impulse response $e^{j\omega t}\,w(-t)$, i.e., a frequency-modulated lowpass filter, followed by demodulation (diagram B).

### 5.3 Discrete STFT

Define a sampled version of the STFT:
$$X_w(nT, kF) = \int x(\tau)\,w(\tau - nT)\,e^{-jk2\pi F\tau}\,d\tau \tag{33}$$
or
$$X_w(n, k) = \sum_m x(m)\,w(m - n)\,e^{-jk2\pi m/L},\qquad k = 0, 1, \ldots, L-1 \tag{34}$$
The inverse discrete STFT is defined as
$$x(t) = \sum_n\sum_k X_w(nT, kF)\,\gamma(t - nT)\,e^{jk2\pi Ft} \tag{35}$$
or
$$x(m) = \sum_n\sum_k X_w(n, k)\,\gamma(m - n)\,e^{jk2\pi m/L},\qquad k = 0, 1, \ldots, L-1 \tag{36}$$
The inverse D-STFT holds if
$$\frac1F\sum_n \gamma(t - nT)\,w(t - nT) = 1\quad\text{for all } t \tag{37}$$

### 5.4 Gabor expansion

Equation (35) can be viewed as a series expansion of a one-dimensional signal in terms of two-dimensional (time-frequency) functions:
$$x(t) = \sum_n\sum_k G_{n,k}\,g_{n,k}(t),\qquad g_{n,k}(t) = g(t - nT)\,e^{jk2\pi Ft} \tag{38}$$
where $g(t)$ is a one-dimensional function. Equation (38) is referred to as the Gabor expansion. If the basis functions $g_{n,k}(t)$ are selected such that they are well localized in time and well concentrated in frequency, then the Gabor coefficients $G_{n,k}$ will reflect the signal's time-frequency content around $(nT, kF)$. Gabor proposed a Gaussian for $g(t)$ (Eq. 39), which meets $\sigma_t^2\sigma_\omega^2 = 1/4$ with equality. (Note: the Fourier transform of a Gaussian is a Gaussian!)

The Gabor expansion is always possible if $TF \le 1$, but the $G_{n,k}$ are not unique; the D-STFT is just one solution.
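The discrete STFT of Equation (34) can be exercised on a two-tone test signal. The sketch below is an added illustration with arbitrary parameter choices (a rectangular window and a hand-rolled windowed DFT); a window placed in each half of the signal localizes the correct tone in time.

```python
import cmath
import math

def dstft(x, n, L):
    """X_w(n, k) = sum_m x(n+m) w(m) e^{-j 2 pi k m / L} with a
    rectangular window w of length L (cf. Eq. 34)."""
    return [sum(x[n + m] * cmath.exp(-2j * math.pi * k * m / L) for m in range(L))
            for k in range(L)]

N, L = 256, 64
# 16 cycles per 256 samples in the first half, 64 cycles per 256 in the second
x = [math.cos(2 * math.pi * 16 * n / N) if n < N // 2
     else math.cos(2 * math.pi * 64 * n / N)
     for n in range(N)]

def peak_bin(X):
    """Index of the strongest positive-frequency bin."""
    mags = [abs(v) for v in X[1:len(X) // 2]]
    return 1 + mags.index(max(mags))

early = peak_bin(dstft(x, 0, L))    # window inside the first segment  -> bin 4
late  = peak_bin(dstft(x, 160, L))  # window inside the second segment -> bin 16
```

Both tones fall exactly on DFT bins of the length-64 window (4 and 16 cycles per window), so the peaks are unambiguous; off-bin frequencies would additionally show the leakage that the choice of $w$ controls (Note 2 above).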
### 5.5 Time-frequency resolution of STFT

Using the filter interpretation of the STFT, we observe:

1. All the bandpass filters have impulse responses of the same duration; hence the time resolution of the STFT is the same at all frequencies. Time resolution is proportional to $2\sigma_t$ (the window length).
2. All the filters have the same bandwidth, regardless of their center frequency; hence the frequency resolution is the same at all times. Frequency resolution is proportional to $2B = 2\sigma_\omega$ and inversely proportional to the window length.

(Figure: tilings of the time-frequency plane for the STFT and for the wavelet transform.)

Although $\sigma_t\,\sigma_\omega \ge$ constant, $\sigma_t$ could be made large for low-frequency components and small for high-frequency components, resulting in small $\sigma_\omega$ for low-frequency components and large $\sigma_\omega$ for high-frequency components. This leads to the Continuous Wavelet Transform.

## 6 Continuous Wavelet Transform

Rather than using constant-bandwidth bandpass filters, as the STFT does, use constant-Q bandpass filters, where
$$Q = \frac{\text{center frequency } \omega_c}{\text{bandwidth } B}$$
In other words, the impulse response will decrease in duration, and the bandwidth increase, as the center frequency increases. Hence, use scaled versions of $w(t)$. To simplify matters, use window functions that are scaled versions of the same prototype $\gamma(t)$:
$$\gamma_f(t) = \sqrt{\left|\frac{f}{f_c}\right|}\;\gamma\!\left(\frac{f}{f_c}\,t\right)$$
where $\gamma(t)$ is the analyzing wavelet, a bandpass function centered around $f_c$. The wavelet transform is defined as
$$\Gamma_x(t, f) = \int x(\tau)\,\gamma_f^*(\tau - t)\,d\tau \tag{41}$$
Using $b = f_c/f$, the wavelet transform can also be defined as a time-scale representation:
$$W_x(t, b) = \frac{1}{\sqrt{|b|}}\int x(\tau)\,\gamma^*\!\left(\frac{\tau - t}{b}\right)d\tau \tag{42}$$
An energy distribution in the time-scale domain is obtained by means of the scalogram:
$$|W_x(t, b)|^2 \tag{43}$$

### 6.1 Comparing STFT and CWT

The STFT measures the similarity (cross-correlation) between $x(t)$ and $g_{\omega,\tau}(t)$:
$$X(\omega, \tau) = \int x(t)\,g^*_{\omega,\tau}(t)\,dt,\qquad g_{\omega,\tau}(t) = w(t - \tau)\,e^{j\omega t}$$
The CWT measures the similarity (cross-correlation) between $x(t)$ and $h_{b,\tau}(t)$:
$$W_x(\tau, b) = \int x(t)\,h^*_{b,\tau}(t)\,dt,\qquad h_{b,\tau}(t) = \frac{1}{\sqrt{|b|}}\,\gamma\!\left(\frac{t - \tau}{b}\right)$$

(Figure: STFT basis functions have the same envelope duration $T_1$ at all frequencies; CWT basis functions at $2f_0$ have half the duration, $0.5\,T_1$, of those at $f_0$.)

## 7 Discrete Wavelet Transform or Discrete-Time Wavelet Transform

The CWT of Equation (42) is a redundant system. One can discretize the $(t, b)$ plane to obtain a non-redundant system.
A process referred to as *subband coding* is used.

(Figure: two-channel analysis bank. $x(n)$ is filtered by $h(n)$ and by $g(n)$; each output is downsampled by 2, giving $y_0(n)$ and $y_1(n)$; a matching synthesis bank reassembles the signal.)

Given $x(n)$, derive a lower-resolution signal by lowpass filtering with impulse response $h(n)$. A larger scale is obtained by downsampling. In the case of ideal half-band filters, downsampling by 2 is possible:
$$y_0(n) = \sum_{k=-\infty}^{\infty} h(k)\,x(2n - k) \tag{48}$$
Also produce
$$y_1(n) = \sum_{k=-\infty}^{\infty} g(k)\,x(2n - k) \tag{49}$$
where $g(n)$ is a highpass filter. Again, downsampling by 2 is possible. Perfect reconstruction is possible if the analysis and synthesis filters are identical, both are ideal half-band, and $h(n)$ and $g(n)$ are orthonormal.

Continue the subband coding scheme on the highpass or lowpass filter outputs. Specifically, to obtain finer frequency resolution at lower frequencies, iterate the scheme on the lower band only.

(Figure: iterated two-channel bank; the $h(n)/g(n)$ pair with downsampling by 2 is applied repeatedly to the lowpass branch.)

Notes:
- the cascade of filters implements a series of bandpass filters;
- downsampling makes it possible to use the same filters at each stage;
- the signal duration is halved at each stage.
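One stage of the scheme of Equations (48)-(49) can be demonstrated with the orthonormal Haar pair $h = (1/\sqrt2, 1/\sqrt2)$, $g = (1/\sqrt2, -1/\sqrt2)$. The sketch below is an added illustration of this special case (the general scheme allows longer filters); it shows perfect reconstruction and energy preservation.

```python
import math

R2 = math.sqrt(2.0)

def analyze(x):
    """One stage of Eqs. (48)-(49) with Haar filters:
    lowpass y0 and highpass y1, both downsampled by 2."""
    y0 = [(x[2 * n] + x[2 * n + 1]) / R2 for n in range(len(x) // 2)]
    y1 = [(x[2 * n] - x[2 * n + 1]) / R2 for n in range(len(x) // 2)]
    return y0, y1

def synthesize(y0, y1):
    """Upsample by 2, filter with the same (time-reversed) Haar pair, and add."""
    x = []
    for a, d in zip(y0, y1):
        x.append((a + d) / R2)
        x.append((a - d) / R2)
    return x

x = [4.0, 2.0, 5.0, 5.0, 1.0, -1.0, 0.0, 2.0]
y0, y1 = analyze(x)
xr = synthesize(y0, y1)  # perfect reconstruction
```

Iterating `analyze` on `y0` alone yields exactly the octave-band cascade sketched above, with the lowpass branch halved in length at every stage.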
## 8 Functional Analysis

### 8.1 Vector Spaces

Definition: a vector space $V$ is a set of vectors with operations of addition and multiplication by a (real) scalar that satisfy:
1. Closure under addition: $\forall \vec x, \vec y \in V$, the unique sum $\vec x + \vec y \in V$.
2. For $\vec x, \vec y, \vec z \in V$:
   (a) $\vec x + (\vec y + \vec z) = (\vec x + \vec y) + \vec z$;
   (b) $\vec x + \vec y = \vec y + \vec x$;
   (c) $\exists\,\vec 0$ such that $\vec x + \vec 0 = \vec x$ for all $\vec x$;
   (d) $\exists\,{-\vec x}$ such that $\vec x + (-\vec x) = \vec 0$ for all $\vec x$.
3. Closure under multiplication by a scalar: $\forall \alpha, \vec x$ with $\alpha$ real and $\vec x \in V$, the unique vector $\alpha\vec x \in V$.
4. For real scalars $\alpha$ and $\beta$ and $\vec x, \vec y \in V$: $\alpha(\vec x + \vec y) = \alpha\vec x + \alpha\vec y$; $(\alpha + \beta)\vec x = \alpha\vec x + \beta\vec x$; $\alpha(\beta\vec x) = (\alpha\beta)\vec x$; $1\cdot\vec x = \vec x$.

**Linear independence.** A set of vectors $\vec x_1, \vec x_2, \ldots, \vec x_n$ is linearly *dependent* if there exists a set of scalars $c_1, c_2, \ldots, c_n$, not all zero, such that
$$\sum_{i=1}^{n} c_i\,\vec x_i = \vec 0 \tag{50}$$

**Orthogonality.** Two vectors are orthogonal if their inner product is zero:
$$\langle\vec x, \vec y\rangle = \sum_{n=-\infty}^{\infty} x(n)\,y^*(n) = 0,\qquad\text{with } \|\vec x\| \ne 0 \text{ and } \|\vec y\| \ne 0 \tag{51}$$

**Span.** A set of vectors $\vec x_1, \vec x_2, \ldots, \vec x_n \in V$ is said to span the vector space if $\forall \vec x \in V$ there exists a set of scalars $c_1, c_2, \ldots, c_n$ such that
$$\vec x = \sum_{i=1}^{n} c_i\,\vec x_i \tag{52}$$
Equation (52) represents an expansion of $\vec x$ in terms of the $\vec x_i$.

**Basis.** A basis of a vector space is a set of linearly independent vectors that span the space.

**Dimension.** The dimension of a vector space is the minimum number of nonzero vectors that span the space.

**Finite dimensional.** A vector space is finite dimensional if it can be spanned by a finite number of vectors.

**Subspace.** If $S \subset V$ (i.e., if $\vec x \in S$, then $\vec x \in V$), then $S$ is a subspace of $V$ if $S$ is a vector space with respect to the same operations as $V$.

**Frame.** A family of vectors $\vec\varphi_j$ in a vector space $V$ forms a frame if there exist $A > 0$ and $B < \infty$ such that
$$A\,\|\vec y\|^2 \le \sum_j \big|\langle\vec\varphi_j, \vec y\rangle\big|^2 \le B\,\|\vec y\|^2\qquad\text{for all } \vec y \in V \tag{53}$$

**Tight frame.** The frame bounds are equal, i.e., $A = B$; hence
$$\sum_j \big|\langle\vec\varphi_j, \vec y\rangle\big|^2 = A\,\|\vec y\|^2 \tag{54}$$
and the vector $\vec y$ can be expanded as
$$\vec y = A^{-1}\sum_j \langle\vec\varphi_j, \vec y\rangle\,\vec\varphi_j \tag{55}$$
Note: the vectors in a tight frame may not be independent, and the expansion is not unique anymore.

### 8.2 Function Spaces

Function spaces are like vector spaces, but deal with continuous functions rather than vectors (which can be viewed as discrete functions).

**Function space.** A vector space where the "vectors" are functions and the scalars are real (sometimes complex).

**Inner product.** The scalar $a$ is obtained from
$$a = \langle f(t), g(t)\rangle = \int f(t)\,g^*(t)\,dt \tag{56}$$

**Norm.** Defined as
$$\|f\| = \sqrt{\langle f, f\rangle} \tag{57}$$

**Orthogonality.** Two functions are orthogonal if $\langle f(t), g(t)\rangle = 0$, with $\|f\| \ne 0$ and $\|g\| \ne 0$. (58)

**Types of signal (function) spaces:**
- $L^1(\mathbb R)$: $f \in L^1 \iff \int|f(t)|\,dt = K < \infty$. Infinite summations and infinite integrations can be interchanged for functions in $L^1(\mathbb R)$. Here $L$ signifies the Lebesgue integral, and $\mathbb R$ indicates that the independent variable $t$ ranges over the whole real line. The value of a Lebesgue integral is not affected by the values of the function over any countable set of values of its argument; for example, if $f(t) = 1$ for $t$ rational and $f(t) = 0$ for $t$ irrational, the Lebesgue integral of $f(t)$ is zero.
- $L^2(\mathbb R)$: $f \in L^2 \iff \int|f(t)|^2\,dt = E < \infty$ (finite-energy signals).
- $L^p(\mathbb R)$: $f \in L^p \iff \int|f(t)|^p\,dt = K < \infty$ (general class).
- Generalized functions (distributions), defined by an inner product with an ordinary function.
For example, $\delta(t)$ is defined by
$$f(T) = \int f(t)\,\delta(t - T)\,dt$$

## 9 Introduction to Wavelets

### 9.1 Expansion Systems

**Wave vs. wavelet.** A wave has infinite time span and infinite energy; a wavelet is a "small wave" with finite duration (finite support) and finite energy.

**Linear decomposition:**
$$f(t) = \sum_k a_k\,\psi_k(t) \tag{59}$$
with expansion coefficients $a_k$ and expansion functions $\psi_k(t)$. If the expansion functions form an orthogonal basis, then
$$a_k = \langle f(t), \psi_k(t)\rangle \tag{60}$$

**Wavelet expansion:**
$$f(t) = \sum_j\sum_k a_{j,k}\,\psi_{j,k}(t) \tag{61}$$
with wavelet expansion functions $\psi_{j,k}(t)$ (usually an orthogonal basis). The $a_{j,k}$ form the Discrete Wavelet Transform (DWT) of $f(t)$, and Equation (61) is referred to as the Inverse DWT.

**Wavelet systems:**
1. are building blocks to represent a signal;
2. provide time-frequency localization of a signal;
3. have expansion coefficients that can be computed efficiently, in $O(N)$ or $O(N\log N)$;
4. are generated by scaling and translation of the mother wavelet:
$$\psi_{j,k}(t) = 2^{j/2}\,\psi(2^j t - k) \tag{62}$$
5. often satisfy multiresolution conditions: if a set of signals can be represented by a weighted sum of $\varphi(t - k)$, then a larger set can be represented by $\varphi(2t - k)$;
6. allow lower-resolution coefficients to be computed from higher-resolution coefficients using a filter bank.

### 9.2 Wavelet Transforms

**Discrete Wavelet Transform** (Inverse DWT):
$$f(t) = \sum_k\sum_j a_{j,k}\,2^{j/2}\,\psi(2^j t - k) \tag{63}$$
with $a_{j,k}$ the DWT of $f(t)$.

**Discrete-Time Wavelet Transform:** for discrete-time signals $f(n)$; can be obtained from a digital filter bank.

**Continuous Wavelet Transform:**
$$F(a, b) = \int f(t)\,\frac{1}{\sqrt a}\,\psi^*\!\left(\frac{t - b}{a}\right)dt \tag{64}$$
with inverse
$$f(t) = \iint F(a, b)\,\frac{1}{\sqrt a}\,\psi\!\left(\frac{t - b}{a}\right)da\,db \tag{65}$$
with $a, b$ real.

## 10 Multiresolution Formulation

Objective: decompose signal events into finer and finer detail. Approach: define a scaling function $\varphi(t)$ from the concept of resolution, and derive wavelet functions from the scaling function.

### 10.1 Scaling Function

Define a set of scaling functions in terms of integer translates of a basic scaling function $\varphi(t)$:
$$\varphi_k(t) = \varphi(t - k),\qquad k \in \mathbb Z,\ \varphi \in L^2 \tag{66}$$
The functions $\varphi_k(t)$ span a subspace $V_0$; hence
$$f(t) = \sum_k a_k\,\varphi_k(t)\qquad \forall f(t) \in V_0 \tag{67}$$
Define a two-dimensional set of
functions by scaling and translation:
$$\varphi_{j,k}(t) = 2^{j/2}\,\varphi(2^j t - k) \tag{68}$$
with span $V_j$; hence
$$f(t) = \sum_k a_k\,\varphi_k(2^j t)\qquad \forall f(t) \in V_j \tag{69}$$
For $j > 0$, $\varphi_{j,k}$ will be short, and fine detail can be represented. For $j < 0$, $\varphi_{j,k}$ has long duration, and coarse information is represented. Also, fewer signals $f(t)$ can be represented for small or negative $j$ than for large $j$.

Question: what happens if $j \to \infty$?

### 10.2 Multiresolution Analysis

Require a nesting of spaces:
$$\cdots \subset V_{-2} \subset V_{-1} \subset V_0 \subset V_1 \subset V_2 \subset \cdots \subset L^2 \tag{70}$$
or
$$V_j \subset V_{j+1}\qquad \forall j \in \mathbb Z \tag{71}$$
with
$$V_{-\infty} = \{0\},\qquad V_\infty = L^2 \tag{72, 73}$$
Note that
$$f(t) \in V_j \iff f(2t) \in V_{j+1} \tag{74}$$
Observe: $\varphi(t) = \varphi_{0,0}(t) \in V_0$, and $V_0 \subset V_1$ implies $\varphi(t) \in V_1$. Hence $\varphi(t)$ is a linear combination of $\varphi_{1,k}(t) = \sqrt2\,\varphi(2t - k)$ and its translates:
$$\varphi(t) = \sum_k h(k)\,\sqrt2\,\varphi(2t - k),\qquad k \in \mathbb Z \tag{75}$$
Equation (75) is referred to as the refinement equation, the dilation equation, or the multiresolution equation.

### 10.3 Wavelet Functions

Rather than representing a signal in terms of a linear combination of the $\varphi_{j,k}(t)$, use a set of functions $\psi_{j,k}(t)$ (wavelets) that describe (i.e., span) the *difference* $W_j$ between the spaces $V_j$ and $V_{j+1}$. Define $W_j$ by
$$V_{j+1} = V_j \oplus W_j \tag{76}$$
Note:
$$L^2 = V_0 \oplus W_0 \oplus W_1 \oplus W_2 \oplus \cdots \tag{77}$$
$$L^2 = V_{j_0} \oplus W_{j_0} \oplus W_{j_0+1} \oplus W_{j_0+2} \oplus \cdots \tag{78}$$
$$L^2 = \cdots \oplus W_{-2} \oplus W_{-1} \oplus W_0 \oplus W_1 \oplus \cdots \tag{79}$$
$$V_0 = \cdots \oplus W_{-2} \oplus W_{-1} \tag{80}$$

(Figure: nested spaces $V_0 \subset V_1 \subset V_2$, with each $W_j$ the ring between $V_j$ and $V_{j+1}$.)

Require that all members of $V_j$ are orthogonal to all members of $W_j$, i.e.,
$$\langle \varphi_{j,k}(t), \psi_{j,l}(t)\rangle = 0\qquad \forall j, k, l \tag{81}$$
which implies that $W_j$ is the orthogonal complement of $V_j$ in $V_{j+1}$. Observe that $\psi(t) \in W_0 \subset V_1$; hence
$$\psi(t) = \sum_k h_1(k)\,\sqrt2\,\varphi(2t - k),\qquad k \in \mathbb Z \tag{82}$$
It can be shown that the coefficients $h_1(n)$ are related to the scaling-function coefficients of Equation (75) by
$$h_1(n) = (-1)^n\,h(1 - n) \tag{83}$$

### 10.4 Discrete Wavelet Transform

Recall that
$$L^2 = V_{j_0} \oplus W_{j_0} \oplus W_{j_0+1} \oplus W_{j_0+2} \oplus \cdots,\qquad\text{with } j_0 \text{ any integer} \tag{84}$$
This implies that any function $f(t) \in L^2(\mathbb R)$ can be written as
$$f(t) = \big(\text{expansion in } V_{j_0}\big) + \big(\text{expansions in } W_j,\ j \ge j_0\big) \tag{85}$$
$$f(t) = \sum_k c_{j_0}(k)\,2^{j_0/2}\,\varphi(2^{j_0}t - k) + \sum_k\sum_{j=j_0}^{\infty} d_j(k)\,2^{j/2}\,\psi(2^j t - k) \tag{86, 87}$$
where $j_0$ can be any integer. Note: the choice of $j_0$ determines the coarsest scale. The coefficients in Equation (87) are referred to as the discrete wavelet transform. If the wavelet system is orthogonal, then
$$c_{j_0}(k) = \langle f(t), \varphi_{j_0,k}(t)\rangle \tag{88}$$
$$d_j(k) = \langle f(t), \psi_{j,k}(t)\rangle \tag{89}$$
It has been shown that the wavelet expansion coefficients drop off rapidly as $j$ and $k$ increase.

### 10.5 Parseval's Theorem

The energy in a signal is related to the wavelet coefficients by
$$\int |f(t)|^2\,dt = \sum_k |c(k)|^2 + \sum_{j=0}^{\infty}\sum_{k=-\infty}^{\infty} |d_j(k)|^2 \tag{90}$$
Daubechies[^1] showed that orthonormal scaling functions and wavelets can have compact support, i.e., are non-zero over a finite interval. This provides the time localization we need.

(Figure 1: Evoked potential and two views of its DWT.)

[^1]: *Communications on Pure and Applied Mathematics*, 41:909-996, 1988.

### 10.6 Haar Wavelet System

Require the scaling function to have finite support over a unit time interval:
$$\varphi(t) = \begin{cases} 1, & 0 \le t < 1 \\ 0, & \text{elsewhere} \end{cases} \tag{91}$$
At the next scale ($j = 1$), two scaling functions can be defined:
$$\varphi(2t) = \begin{cases} 1, & 0 \le t < \tfrac12 \\ 0, & \text{elsewhere} \end{cases}\qquad \varphi(2t - 1) = \begin{cases} 1, & \tfrac12 \le t < 1 \\ 0, & \text{elsewhere} \end{cases} \tag{92, 93}$$

Questions:
1. Are $\varphi_{1,0}(t)$ and $\varphi_{1,1}(t)$ orthogonal?
2. Are $\varphi_{0,0}(t)$ and $\varphi_{1,k}(t)$ orthogonal?
3. What kind of filter do the $\varphi_{j,k}(t)$ represent?

Recall the multiresolution equation, Equation (75):
$$\varphi(t) = \sum_k h(k)\,\sqrt2\,\varphi(2t - k),\qquad k \in \mathbb Z$$
hence $h(0) = 1/\sqrt2$ and $h(1) = 1/\sqrt2$. Observe: as the scale $j \to \infty$, $2^{j}\,\varphi(2^j t) \to \delta(t)$.

The wavelet $\psi(t)$ can be constructed using Equation (82), repeated here:
$$\psi(t) = \sum_k h_1(k)\,\sqrt2\,\varphi(2t - k),\qquad k \in \mathbb Z$$
with Equation (83), repeated here: $h_1(n) = (-1)^n\,h(1 - n)$. Consequently,
$$\psi(t) = \begin{cases} 1, & 0 \le t < \tfrac12 \\ -1, & \tfrac12 \le t < 1 \\ 0, & \text{elsewhere} \end{cases} \tag{94}$$
Note: the requirement that $\langle\varphi(t), \psi(t)\rangle = 0$, plus the fact that $\varphi(t)$ is even around $t = 1/2$, necessitates $\psi(t)$ being odd around $t = 1/2$. This, coupled with Equation (82), leads to the same result as using Equation (83).

Question: what type of filter does $\psi(t)$ represent?

Recall $V_{j+1} = V_j \oplus W_j$; hence $\varphi_{1,k}(t)$ is expandable in terms of $\varphi_{0,0}(t)$ and $\psi_{0,0}(t)$; specifically,
$$\varphi_{1,0}(t) = \frac{1}{\sqrt2}\big[\varphi_{0,0}(t) + \psi_{0,0}(t)\big],\qquad \varphi_{1,1}(t) = \frac{1}{\sqrt2}\big[\varphi_{0,0}(t) - \psi_{0,0}(t)\big]$$
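The questions above can be answered numerically by sampling $\varphi$ and $\psi$ on a fine grid, with the inner product of Equation (56) approximated by a Riemann sum. The sketch below is an added illustration; the grid size is an arbitrary choice.

```python
def phi(t):
    """Haar scaling function, Eq. (91)."""
    return 1.0 if 0.0 <= t < 1.0 else 0.0

def psi(t):
    """Haar wavelet, Eq. (94)."""
    if 0.0 <= t < 0.5:
        return 1.0
    if 0.5 <= t < 1.0:
        return -1.0
    return 0.0

M = 4096                      # grid points on [0, 1)
dt = 1.0 / M
grid = [i * dt for i in range(M)]

def inner(f, g):
    """Riemann-sum approximation of Eq. (56) for real functions."""
    return sum(f(t) * g(t) for t in grid) * dt

r2 = 2 ** 0.5
phi10 = lambda t: r2 * phi(2 * t)        # phi_{1,0}(t)
phi11 = lambda t: r2 * phi(2 * t - 1)    # phi_{1,1}(t)

orth_scale = inner(phi10, phi11)         # Question 1: orthogonal (0)
orth_wavelet = inner(phi, psi)           # <phi, psi> = 0
norm_phi = inner(phi, phi)               # unit norm (1)
# Expansion of phi_{1,0} in terms of phi_{0,0} and psi_{0,0}:
err = max(abs(phi10(t) - (phi(t) + psi(t)) / r2) for t in grid)
```

Because the Haar functions are piecewise constant on dyadic intervals, the Riemann sums here are exact, so the orthogonality and expansion identities come out to machine precision.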
## 11 Filter Banks and the DWT

In practice, one rarely deals directly with the scaling function and wavelets; instead, one uses the coefficients $h(k)$ and $h_1(k)$ of Equations (75) and (82), respectively, and the coefficients $c_{j_0}(k)$ and $d_j(k)$ in the expansion of Equation (87).

### 11.1 Analysis

The expansion coefficients $c_j(k)$ and $d_j(k)$ can be expressed in terms of $c_{j+1}(k)$ using
$$c_j(k) = \sum_m h(m - 2k)\,c_{j+1}(m) \tag{95}$$
$$d_j(k) = \sum_m h_1(m - 2k)\,c_{j+1}(m) \tag{96}$$
The filter $h(n)$ is lowpass and $h_1(n)$ is highpass; each filter output is downsampled by 2. (Figure: $c_{j+1}$ feeding $h$, $\downarrow 2$ to give $c_j$, and $h_1$, $\downarrow 2$ to give $d_j$.)

If the signal is bandlimited, there will be an upper scale $j = J$ above which the $d_j(k)$ are zero, and the signal samples can be used as an estimate of $c_J$. This requires that the scaling function is well behaved and approaches the Dirac pulse at scale $j = J$.

### 11.2 Deriving Equations 95 and 96

Starting from Equation (75), repeated here:
$$\varphi(t) = \sum_n h(n)\,\sqrt2\,\varphi(2t - n)$$
scale and translate the time variable to obtain
$$\varphi(2^j t - k) = \sum_n h(n)\,\sqrt2\,\varphi\big(2(2^j t - k) - n\big) = \sum_n h(n)\,\sqrt2\,\varphi(2^{j+1}t - 2k - n) \tag{97}$$
and substituting $m = 2k + n$ results in
$$\varphi(2^j t - k) = \sum_m h(m - 2k)\,\sqrt2\,\varphi(2^{j+1}t - m) \tag{98}$$
Using Equation (87), a function $f(t) \in V_{j+1}$ can be written as
$$f(t) = \sum_k c_{j+1}(k)\,2^{(j+1)/2}\,\varphi(2^{j+1}t - k) \tag{99}$$
or, at scale $j$, as
$$f(t) = \sum_k c_j(k)\,2^{j/2}\,\varphi(2^j t - k) + \sum_k d_j(k)\,2^{j/2}\,\psi(2^j t - k) \tag{100}$$
If the $\varphi_{j,k}(t)$ and $\psi_{j,k}(t)$ form orthonormal sets, then
$$c_j(k) = \langle f(t), \varphi_{j,k}(t)\rangle = \int f(t)\,2^{j/2}\,\varphi(2^j t - k)\,dt \tag{101}$$
Using Equation (98) and interchanging summation and integration,
$$c_j(k) = \sum_m h(m - 2k)\int f(t)\,2^{(j+1)/2}\,\varphi(2^{j+1}t - m)\,dt = \sum_m h(m - 2k)\,c_{j+1}(m).\qquad\text{QED} \tag{102, 103}$$
Similarly, starting from Equation (82), repeated here:
$$\psi(t) = \sum_n h_1(n)\,\sqrt2\,\varphi(2t - n)$$
scale and translate the time variable to obtain
$$\psi(2^j t - k) = \sum_n h_1(n)\,\sqrt2\,\varphi(2^{j+1}t - 2k - n) \tag{104}$$
and substituting $m = 2k + n$ results in
$$\psi(2^j t - k) = \sum_m h_1(m - 2k)\,\sqrt2\,\varphi(2^{j+1}t - m) \tag{105}$$
If the $\varphi_{j,k}(t)$ and $\psi_{j,k}(t)$ form orthonormal sets, then
$$d_j(k) = \langle f(t), \psi_{j,k}(t)\rangle = \int f(t)\,2^{j/2}\,\psi(2^j t - k)\,dt \tag{106}$$
Using Equation (105) and interchanging summation and integration,
$$d_j(k) = \sum_m h_1(m - 2k)\int f(t)\,2^{(j+1)/2}\,\varphi(2^{j+1}t - m)\,dt = \sum_m h_1(m - 2k)\,c_{j+1}(m).\qquad\text{QED} \tag{107, 108}$$

### 11.3 Synthesis

Using Equation (87), a function $f(t) \in V_{j+1}$ can be expanded using scaling functions only:
$$f(t) = \sum_k c_{j+1}(k)\,2^{(j+1)/2}\,\varphi(2^{j+1}t - k) \tag{109}$$
At the next scale $j$, $f(t)$ can be expanded using wavelets and scaling functions as
$$f(t) = \sum_k c_j(k)\,2^{j/2}\,\varphi(2^j t - k) + \sum_k d_j(k)\,2^{j/2}\,\psi(2^j t - k) \tag{110}$$
which gives, after substituting Equations (75) and (82),
$$f(t) = \sum_k c_j(k)\,2^{j/2}\sum_n h(n)\,\sqrt2\,\varphi(2^{j+1}t - 2k - n) + \sum_k d_j(k)\,2^{j/2}\sum_n h_1(n)\,\sqrt2\,\varphi(2^{j+1}t - 2k - n) \tag{111}$$
Starting from Equation (110),
$$\langle f(t), \varphi_{j+1,m}(t)\rangle = \int f(t)\,2^{(j+1)/2}\,\varphi(2^{j+1}t - m)\,dt \tag{112}$$
and using
orthogonality,
$$\int 2^{j+1}\,\varphi(2^{j+1}t - k)\,\varphi(2^{j+1}t - m)\,dt = \delta(m - k) \tag{113}$$
results in
$$\langle f(t), \varphi_{j+1,m}(t)\rangle = c_{j+1}(m) \tag{114}$$
Similarly, when starting from Equation (111),
$$\langle f(t), \varphi_{j+1,m}(t)\rangle = \sum_k c_j(k)\sum_n h(n)\int 2^{j+1}\,\varphi(2^{j+1}t - 2k - n)\,\varphi(2^{j+1}t - m)\,dt + \sum_k d_j(k)\sum_n h_1(n)\int 2^{j+1}\,\varphi(2^{j+1}t - 2k - n)\,\varphi(2^{j+1}t - m)\,dt \tag{115}$$
and again using orthogonality,
$$\int 2^{j+1}\,\varphi(2^{j+1}t - 2k - n)\,\varphi(2^{j+1}t - m)\,dt = \delta(m - 2k - n) \tag{116}$$
results in
$$\langle f(t), \varphi_{j+1,m}(t)\rangle = \sum_k c_j(k)\,h(m - 2k) + \sum_k d_j(k)\,h_1(m - 2k) \tag{117}$$
hence
$$c_{j+1}(m) = \sum_k c_j(k)\,h(m - 2k) + \sum_k d_j(k)\,h_1(m - 2k) \tag{118}$$
Equation (118) is a convolution operation, but it requires *upsampling* first. The filter $g(n) = h(n)$ is lowpass and $g_1(n) = h_1(n)$ is highpass. (Figure: synthesis bank; $c_j$ and $d_j$ are upsampled by 2, filtered by $g(n)$ and $g_1(n)$, and summed to give $c_{j+1}$.) If the signal is bandlimited, there will be an upper scale $j = J$ above which the $d_j(k)$ are zero, and the signal samples can be used as an estimate of $c_J$.

### 11.4 Trees and Packets

(Figure: three filter-bank trees. Full tree: the STFT. Octave-band tree: the wavelet series. Arbitrary tree: wavelet packets.)

## 12 Wavelet Construction

Wavelets are constructed using digital filter design methods, constrained by scaling-function existence requirements, i.e., there must exist a solution $\varphi(t)$ to the multiresolution equation, Equation (75):
$$\varphi(t) = \sum_k h(k)\,\sqrt2\,\varphi(2t - k)$$

Necessary conditions:
1. If $\varphi(t) \in L^1$ and $\int\varphi(t)\,dt \ne 0$, then
$$\sum_n h(n) = \sqrt2 \tag{119}$$
2. If $\varphi(t)$ is an $L^2 \cap L^1$ solution and $\int\varphi(t)\,\varphi(t - k)\,dt = \delta(k)$, then
$$\sum_n h(n)\,h(n - 2k) = \delta(k) \tag{120}$$
This implies that
$$\sum_n |h(n)|^2 = 1 \tag{121}$$
Also,
$$\sum_n h(2n) = \sum_n h(2n + 1) \tag{122}$$

Note 1: if $h(n)$ is FIR (has finite support $N$), then one linear equation and $N/2$ bilinear (quadratic) equations have to be satisfied, leaving $N/2 - 1$ free variables.
Note 2: Equation (120) implies $N$ even.

**N = 2.** The necessary conditions result in $h(0) + h(1) = \sqrt2$ and $h^2(0) + h^2(1) = 1$, giving $h(0) = h(1) = 1/\sqrt2$ (the Haar filter).

**N = 4.** Condition 1 results in $h(0) + h(1) + h(2) + h(3) = \sqrt2$, and two equations result from Condition 2, i.e., $\sum_n h(n)\,h(n - 2k) = \delta(k)$ for $k = 0, 1$, leaving one free variable $\alpha$. The coefficients become
$$h(0) = \frac{1 - \cos\alpha + \sin\alpha}{2\sqrt2},\quad h(1) = \frac{1 + \cos\alpha + \sin\alpha}{2\sqrt2},\quad h(2) = \frac{1 + \cos\alpha - \sin\alpha}{2\sqrt2},\quad h(3) = \frac{1 - \cos\alpha - \sin\alpha}{2\sqrt2} \tag{123}$$
The length-2 (Haar) equations are obtained for $\alpha = 0$, $\pi/2$, and $3\pi/2$, and a degenerate condition for $\alpha = \pi$. The Daubechies coefficients result for $\alpha = \pi/3$.
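The length-4 parametrization of Equation (123) is easy to check in code. The sketch below is an added illustration; the $1/(2\sqrt2)$ scaling assumes the $\sum_n h(n) = \sqrt2$ normalization of Equation (119).

```python
import math

def h4(alpha):
    """Length-4 scaling coefficients parametrized by alpha, cf. Eq. (123)."""
    c, s = math.cos(alpha), math.sin(alpha)
    d = 2 * math.sqrt(2.0)
    return [(1 - c + s) / d, (1 + c + s) / d, (1 + c - s) / d, (1 - c - s) / d]

def checks(h):
    """Return the three constraint values of Eqs. (119)-(120)."""
    lin = sum(h)                        # should equal sqrt(2), Eq. (119)
    quad0 = sum(v * v for v in h)       # k = 0: should equal 1, Eq. (121)
    quad1 = h[0] * h[2] + h[1] * h[3]   # k = 1: double-shift product, should be 0
    return lin, quad0, quad1

haar = h4(0.0)           # degenerates to [0, 1/sqrt(2), 1/sqrt(2), 0]
db4 = h4(math.pi / 3)    # the length-4 Daubechies coefficients
```

Every value of $\alpha$ satisfies the three constraints; the free variable only moves the zero of $H(z)$ around, which is why an extra smoothness condition is needed to single out the Daubechies choice $\alpha = \pi/3$.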
**N = 6.** Two free variables, $\alpha$ and $\beta$, remain, resulting in
$$h(0) = \frac{(1 + \cos\alpha + \sin\alpha)(1 - \cos\beta - \sin\beta) + 2\sin\beta\cos\alpha}{4\sqrt2}$$
$$h(1) = \frac{(1 - \cos\alpha + \sin\alpha)(1 + \cos\beta - \sin\beta) - 2\sin\beta\cos\alpha}{4\sqrt2}$$
$$h(2) = \frac{1 + \cos(\alpha - \beta) + \sin(\alpha - \beta)}{2\sqrt2},\qquad h(3) = \frac{1 + \cos(\alpha - \beta) - \sin(\alpha - \beta)}{2\sqrt2}$$
$$h(4) = \frac{1}{\sqrt2} - h(0) - h(2),\qquad h(5) = \frac{1}{\sqrt2} - h(1) - h(3) \tag{124}$$
The Haar coefficients are generated for $\alpha = \beta$, and the length-4 coefficients result if $\beta = 0$. The length-4 Daubechies coefficients result if $\alpha = \pi/3$ and $\beta = 0$, and the length-6 coefficients are obtained for $\alpha \approx 1.35980$ and $\beta \approx -0.78211$.

Solutions for the free variables are found by imposing a smoothness (i.e., differentiability) requirement on the wavelet functions, using the loose relationship between smoothness and the wavelet moments
$$m_k = \int t^k\,\psi(t)\,dt \tag{125}$$
being equal to zero.

(Figure: scaling functions $\varphi(t)$ and wavelets $\psi(t)$ for the Haar, Meyer, Daubechies db4, and db8 systems.)

## 13 Biorthogonal Wavelet Systems

(Figure: two-channel analysis bank with filters $\tilde h(n)$ and $\tilde h_1(n)$ and downsampling by 2, producing $c(k)$ and $d(k)$, followed by a synthesis bank with upsampling by 2 and filters $g(n)$ and $g_1(n)$.)

Requiring orthogonality places severe restrictions on the wavelet system, including nonlinear phase! Also, demanding that the analysis and synthesis filters be time reversals of each other may be too restrictive. Instead, require perfect reconstruction, i.e.,
$$\hat c(n) = c(n)\qquad \forall n \in \mathbb Z$$

The perfect reconstruction requirement leads to
$$\sum_k \big[\tilde h(2k - m)\,g(2k - n) + \tilde h_1(2k - m)\,g_1(2k - n)\big] = \delta(m - n) \tag{126}$$
which can be simplified to the following conditions (up to some constant factors):
$$g(n) = (-1)^n\,\tilde h_1(-n),\qquad g_1(n) = (-1)^n\,\tilde h(-n) \tag{127}$$
Substituting Equation (127) into Equation (126) results in
$$\sum_n \tilde h(n)\,h(n - 2k) = \delta(k) \tag{128}$$
In other words, $\tilde h$ is orthogonal to $h$ under even shifts; thus the name *bi*orthogonal.

Note 1: if $\tilde h$ and $h$ are FIR with lengths $\tilde N$ and $N$, respectively, then Equation (128) implies that $\tilde N + N$ is even, i.e., both $\tilde N$ and $N$ are even, or both are odd.
Note 2: Parseval's theorem no longer holds, but can be made to "almost" hold.

## 14 Applications of the Wavelet Transform

**Denoising.** Assume a signal corrupted by additive noise:
$$x(n) = s(n) + e(n)$$
Wavelet-transform-based denoising can be effected as follows:
1. Compute the DWT of $x(n)$: $Y$.
2. Perform thresholding on the wavelet coefficients:
$$\hat Y = \begin{cases} Y, & |Y| \ge \lambda \\ 0, & |Y| < \lambda \end{cases} \tag{129}$$
3. Compute the inverse DWT to obtain $\hat s(n)$.

**Signal and image compression.** DWT the $N$-point (pixel) signal (image) and quantize the coefficients using a uniform scalar quantizer with step size $\Delta$:
$$Q(c_i) = k\Delta\quad\text{if } k\Delta \le c_i < (k + 1)\Delta \tag{130}$$
If $|c_i| < \Delta$, then $Q(c_i) = 0$, i.e., the coefficient is not significant. Create a significance map to indicate whether a coefficient is significant (1) or not (0); this requires $N$ bits. The $M$ significant coefficients are compressed using adaptive entropy coding or another suitable method.

## 15 Wavelet Selection

# Part II: Linear Systems Analysis and Modeling

*Based largely on [8, 10].*

## 16 Discrete-Time Systems

### 16.1 Linear Time-Invariant Systems

**Linearity.** A system is linear if
$$a\,x_1(n) + b\,x_2(n) \;\mapsto\; a\,y_1(n) + b\,y_2(n) \tag{131}$$
when $x_1(n) \mapsto y_1(n)$ and $x_2(n) \mapsto y_2(n)$.

**Time invariance.** A system is time (or shift) invariant if
$$x_D(n) \;\mapsto\; y(n - D) \tag{132}$$
when $x(n) \mapsto y(n)$ and $x_D(n) = x(n - D)$.

**Impulse response.** $h(n)$ is the system's response to
$$\delta(n) = \begin{cases} 1, & n = 0 \\ 0, & \text{elsewhere} \end{cases} \tag{133}$$

**Frequency response** (DTFT of $h(n)$):
$$H(\omega) = \sum_{k=-\infty}^{\infty} h(k)\,e^{-j\omega k} \tag{134}$$

**Transfer function** ($z$-transform of $h(n)$):
$$H(z) = \sum_{k=-\infty}^{\infty} h(k)\,z^{-k},\qquad\text{with } R_- < |z| < R_+ \tag{135}$$

**Convolution sum.** An LTI system's response to an arbitrary input is given by
$$y(n) = \sum_{k=-\infty}^{\infty} x(k)\,h(n - k) = \sum_{k=-\infty}^{\infty} h(k)\,x(n - k) \tag{136, 137}$$

**FIR.** A Finite Impulse Response system has $h(n)$ of finite duration:
$$h(n) \ne 0 \text{ for } -M_2 < n < M_1,\qquad h(n) = 0 \text{ elsewhere.}$$

**IIR.** An Infinite Impulse Response system has $h(n)$ of infinite duration, i.e., $M_1$ and/or $M_2 = \infty$.

**Causality.** A system is causal if its output $y(n)$ depends on past inputs and outputs, and the present input, only. Causality is guaranteed if $h(n) = 0$ for $n < 0$.

**Stability.** A system is stable if a bounded input always results in a bounded output. This leads to
$$\sum_{k=-\infty}^{\infty} |h(k)| < \infty \tag{138}$$
which is also the requirement for the existence (convergence) of the DTFT.
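The convolution sum (136)-(137) and the stability condition (138) can be illustrated directly. The sketch below is an added example using a truncated geometric impulse response as a stand-in for a stable causal IIR system; the filter and input values are arbitrary choices.

```python
def convolve(h, x):
    """y(n) = sum_k h(k) x(n - k) for finite causal sequences, cf. Eq. (137)."""
    y = [0.0] * (len(h) + len(x) - 1)
    for k, hk in enumerate(h):
        for n, xn in enumerate(x):
            y[k + n] += hk * xn
    return y

# FIR example: a two-tap moving sum
y_fir = convolve([1.0, 1.0], [1.0, 2.0, 3.0])  # -> [1, 3, 5, 3]

# Stable "IIR" example: h(n) = a^n u(n) with |a| < 1, truncated for computation
a = 0.5
h = [a ** n for n in range(60)]
abs_sum = sum(abs(v) for v in h)  # ~ 1 / (1 - a) = 2, so Eq. (138) is satisfied

# A bounded input (all ones) gives a bounded output: |y(n)| <= sum_k |h(k)|
y_step = convolve(h, [1.0] * 40)
peak = max(abs(v) for v in y_step)
```

The bound `peak <= abs_sum` is exactly the BIBO argument: the output is a weighted sum of input samples whose weights are the $h(k)$, so a unit-bounded input can never exceed $\sum_k |h(k)|$.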
Taking the z-transform of (139):

Y(z) = -\sum_{k=1}^{N} a_k z^{-k} Y(z) + \sum_{r=0}^{M} b_r z^{-r} X(z),

H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{r=0}^{M} b_r z^{-r}}{1 + \sum_{k=1}^{N} a_k z^{-k}},

h(n) \leftrightarrow H(z) = \sum_n h(n) z^{-n} for R_- < |z| < R_+.   (140)

Zeros: H(z_0) = 0. Poles: H(z_p) = \infty. Number of poles = number of zeros.

Real Systems. If h(n) is real, then the poles and zeros will occur in conjugate pairs:

z_0 = r e^{j\theta} and z_0^* = r e^{-j\theta}.   (141)

ROC (region of convergence):
- If h(n) is FIR, then the ROC is 0 < |z| < \infty, plus perhaps z = 0 or z = \infty.
- If h(n) is IIR and causal, then the ROC is |z| > R_-, i.e., the exterior of a circle.

Stability. The DTFT of h(n) needs to exist, i.e., \sum_{n=-\infty}^{\infty} h(n) e^{-j\omega n} should converge. Observe that

H(\omega) = H(z)|_{z = e^{j\omega}},   (142)

therefore the unit circle must be in the ROC. Hence, if a system is causal and stable, all poles should be inside the unit circle.

Inverse System. H^{-1}(z) is the inverse system of H(z) if H(z) H^{-1}(z) = 1.

Minimum Phase System. H(z) is a minimum phase system if all its poles and zeros are inside the unit circle.

All-Pass System. The transfer function of an all-pass system is given by

H_{ap}(z) = \prod_k \frac{z^{-1} - a_k^*}{1 - a_k z^{-1}}.   (143)

In other words, each pole has a zero that is its conjugate reciprocal: pole z_p = a_k = r_k e^{j\theta_k}, zero z_0 = 1/a_k^* = (1/r_k) e^{j\theta_k}.

Rational H(z). Any rational H(z) can be expressed as

H(z) = H_{min}(z) H_{ap}(z).   (144)

16.3 Pole-Zero Diagrams

The system function can be rewritten (assuming a_0, b_0 \ne 0) as

H(z) = G \frac{\prod_{r=1}^{M} (z^{-1} - c_r)}{\prod_{k=1}^{N} (z^{-1} - d_k)}, with G = b_0 / a_0,   (145)

where the c_r and d_k specify the zeros and poles, respectively. In other words, the zeros and poles plus the ROC define H(z) to within a factor G.

10 \log_{10} |H(\omega)|^2 = 20 \log_{10} |H(z)|_{z = e^{j\omega}}
 = 20 \log_{10} |G| + \sum_{r=1}^{M} 20 \log_{10} |e^{j\omega} - c_r| - \sum_{k=1}^{N} 20 \log_{10} |e^{j\omega} - d_k|.   (146)

Similarly,

Arg H(\omega) = Arg G + \sum_{r=1}^{M} Arg(e^{j\omega} - c_r) - \sum_{k=1}^{N} Arg(e^{j\omega} - d_k).   (147)

Observe that |e^{j\omega} - d_k| represents the length of the line connecting any point on the unit circle to the pole d_k; likewise, Arg(e^{j\omega} - d_k) is the angle of the line connecting d_k to that point on the unit circle. The location of the poles determines the spectral peaks, and the zero locations the valleys.
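Evaluating H(z) on the unit circle, per Equation (142), can be sketched directly from the difference-equation coefficients; a minimal example, with an illustrative two-point averager as the test filter:

```python
import numpy as np

def freq_response(b, a, w):
    """Evaluate H(e^{jw}) = B(e^{jw}) / A(e^{jw}) for coefficient arrays
    b = [b0..bM] and a = [1, a1..aN] (denominator 1 + a1 z^-1 + ...)."""
    n = np.arange(max(len(b), len(a)))
    z = np.exp(-1j * np.outer(w, n))       # rows of e^{-jwk}
    B = z[:, :len(b)] @ np.asarray(b, dtype=float)
    A = z[:, :len(a)] @ np.asarray(a, dtype=float)
    return B / A

# Two-point averager: a zero at z = -1 gives a valley at w = pi
b = np.array([0.5, 0.5])
a = np.array([1.0])
w = np.array([0.0, np.pi])
H = freq_response(b, a, w)
```

At w = 0 the averager passes DC unchanged; at w = pi its zero on the unit circle nulls the response, illustrating the pole/zero geometry of (146).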
[Figure: pole-zero diagrams and the corresponding magnitude responses |H(\omega)|, showing peaks near pole angles and valleys near zero angles.]

17 Stochastic Processes

17.1 Autocorrelation and Autocovariance

A stochastic process x(n) has

mean: m_x(n) = E[x(n)]   (148)
variance: \sigma_x^2(n) = E[|x(n) - m_x(n)|^2]   (149)
autocorrelation: r_{xx}(k, l) = E[x(k) x^*(l)]   (150)
autocovariance: c_{xx}(k, l) = E[(x(k) - m_x(k))(x(l) - m_x(l))^*]   (151)
 = r_{xx}(k, l) - m_x(k) m_x^*(l)   (152)
cross-correlation: r_{xy}(k, l) = E[x(k) y^*(l)]   (153)
cross-covariance: c_{xy}(k, l) = E[(x(k) - m_x(k))(y(l) - m_y(l))^*]   (154)
 = r_{xy}(k, l) - m_x(k) m_y^*(l)   (155)

x(n) and y(n) are uncorrelated if c_{xy}(k, l) = 0 for all k and l; x(n) and y(n) are orthogonal if r_{xy}(k, l) = 0 for all k and l.

17.2 Stationarity and the Autocorrelation Function

x(n) is wide-sense stationary (WSS) if m_x(n) = m_x and r_{xx}(k, l) = r_{xx}(k - l).   (156)

The ACF of a WSS process has the properties:

symmetry: r_{xx}(k) = r_{xx}^*(-k)   (157)
mean-square value: r_{xx}(0) = E[|x(n)|^2] \ge 0   (158)
maximum value: |r_{xx}(k)| \le r_{xx}(0)   (159)

17.3 Autocorrelation Matrix of a WSS Process

The autocorrelation matrix of a WSS process of length N is defined as

R_{xx} = E[\mathbf{x} \mathbf{x}^H], with \mathbf{x} = [x(0), x(1), \ldots, x(N-1)]^T,   (160)

R_{xx} =
| r_{xx}(0)     r_{xx}^*(1)   r_{xx}^*(2)   ...  r_{xx}^*(N-1) |
| r_{xx}(1)     r_{xx}(0)     r_{xx}^*(1)   ...  r_{xx}^*(N-2) |
| r_{xx}(2)     r_{xx}(1)     r_{xx}(0)     ...  r_{xx}^*(N-3) |
| ...                                       ...                |
| r_{xx}(N-1)   r_{xx}(N-2)   r_{xx}(N-3)   ...  r_{xx}(0)     |   (161)

R_{xx} is Hermitian Toeplitz; if x(n) is real, then R_{xx} is symmetric Toeplitz.

Properties:
1. If R_{xx} is an autocorrelation matrix, then it is Hermitian (or symmetric) Toeplitz, but not vice versa.
2. A necessary condition for a sequence of numbers r_{xx}(k), k = 0, 1, \ldots, p, to be an ACF is that R_{xx} is positive semidefinite, i.e., \mathbf{s}^H R_{xx} \mathbf{s} \ge 0 for any \mathbf{s}.

18 Linear Modeling

18.1 AR and ARMA Models

Assume that an observed signal x(n) can be approximated as a linear weighted combination of p previous values of x(n), i.e.,

\hat{x}(n) = -\sum_{k=1}^{p} a_k(n) x(n-k),   (162)

with prediction error

e(n) = x(n) - \hat{x}(n).   (163)

[Figure: block diagram of the linear predictor: x(n) feeds the taps a_k(n), producing \hat{x}(n).]

This leads to

x(n) = -\sum_{k=1}^{p} a_k(n) x(n-k) + e(n),   (164)

which is referred to as the all-pole AutoRegressive (AR) model of order p. Note that Equation (164) is a special case of (139). A more general estimator is obtained by also using the q previous values of e(n), resulting in the AutoRegressive Moving Average (ARMA) model of order (p, q):

x(n) = -\sum_{k=1}^{p} a_k(n) x(n-k) + \sum_{r=0}^{q} b_r(n) e(n-r).   (165)

18.2 Why Modeling?

The coefficients can be used for prediction, classification, spectral estimation, and data compression.
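The structure of Equations (160)-(161) can be sketched numerically: a biased sample ACF indexed as |i - j| yields a symmetric Toeplitz, positive semidefinite matrix. The signal and lags below are illustrative:

```python
import numpy as np

def acf_biased(x, maxlag):
    """Biased sample ACF: r(k) = (1/N) * sum_n x(n) x(n-k)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    return np.array([np.dot(x[k:], x[:N - k]) / N for k in range(maxlag + 1)])

def autocorr_matrix(r):
    """Symmetric Toeplitz matrix R[i, j] = r(|i - j|), per Eq. (161) for real x."""
    N = len(r)
    idx = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
    return r[idx]

rng = np.random.default_rng(0)
x = rng.standard_normal(512)
r = acf_biased(x, 3)
R = autocorr_matrix(r)
```

The biased estimator (division by N rather than N - k) is what guarantees the positive-semidefiniteness required by Property 2.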
18.3 Autoregressive Models

Assuming stationarity and taking the z-transform of Equation (164) results in

E(z) = A(z) X(z)   (166)
X(z) = \frac{1}{A(z)} E(z)   (167)

with

A(z) = 1 + \sum_{k=1}^{p} a_k z^{-k}   (168)
\frac{1}{A(z)} = \frac{1}{1 + \sum_{k=1}^{p} a_k z^{-k}}.   (169)

(168) is the analysis equation and (169) is the synthesis equation.

18.4 Autoregressive Model Coefficient Estimation

At any instance n, select the coefficients a_k(n), k = 1, 2, \ldots, p, such that the mean squared estimation error is minimized:

\xi(n) = E[e^2(n)] = minimum.   (170)

Differentiating Equation (170) with respect to a_i(n) and setting the result to zero:

\frac{\partial \xi(n)}{\partial a_i(n)} = \frac{\partial}{\partial a_i(n)} E[e^2(n)] = 2 E\left[ e(n) \frac{\partial e(n)}{\partial a_i(n)} \right] = 0.   (171)

Using

e(n) = x(n) + \sum_{k=1}^{p} a_k(n) x(n-k),   (172)

we obtain \partial e(n) / \partial a_i(n) = x(n-i), and Equation (171) becomes

2 E[e(n) x(n-i)] = 0, or R_{ex}(n, n-i) = 0 for i = 1, 2, \ldots, p.   (173)

(173) is referred to as the set of orthogonality equations.

Substituting Equation (172) in (171) results in

2 R_{xx}(n, n-i) + 2 \sum_{k=1}^{p} a_k(n) R_{xx}(n-k, n-i) = 0,

R_{xx}(n, n-i) = -\sum_{k=1}^{p} a_k(n) R_{xx}(n-k, n-i) for i = 1, 2, \ldots, p,   (174)

with (174) referred to as the set of normal equations.

18.5 Stationary AR Models

Assume that x(n) is at least wide-sense (weak) stationary, i.e., R_{xx}(n-k, n-i) = R_{xx}(i-k). Equation (174) becomes

R_{xx}(i) = -\sum_{k=1}^{p} a_k(n) R_{xx}(k-i) for i = 1, 2, \ldots, p.   (175)

Evaluating (174) at n = n + m results in

R_{xx}(i) = -\sum_{k=1}^{p} a_k(n+m) R_{xx}(k-i) for i = 1, 2, \ldots, p.   (176)

From (175) and (176) it follows that a_k(n) = a_k(n+m); in other words, the linear estimator is time invariant. Also, \xi(n) will be independent of n, and henceforth we will denote the expected prediction error of a model of order p by \xi_p.

18.6 Matrix Forms of the Normal Equations

In case of stationarity, Equation (175) becomes

R_{xx}(i) = -\sum_{k=1}^{p} a_k R_{xx}(k-i) for i = 1, 2, \ldots, p,   (177)

which can be written in matrix form:

| R_{xx}(0)     R_{xx}(1)     ...  R_{xx}(p-1) |  | a_1 |     | R_{xx}(1) |
| R_{xx}(1)     R_{xx}(0)     ...  R_{xx}(p-2) |  | a_2 |  =  | R_{xx}(2) |
| ...                         ...              |  | ... |    -| ...       |
| R_{xx}(p-1)   R_{xx}(p-2)   ...  R_{xx}(0)   |  | a_p |     | R_{xx}(p) |   (178)

This is sometimes referred to as the Yule-Walker equation.

Observe that

\xi_p = E[e^2(n)] = E[e(n) x(n)] + \sum_{k=1}^{p} a_k E[e(n) x(n-k)] = R_{ex}(0) + \sum_{k=1}^{p} a_k R_{ex}(k),

and because R_{ex}(k) = 0 for k = 1, \ldots, p (see Equation (173)), we obtain

\xi_p = R_{ex}(0).   (179)

Furthermore,

R_{ex}(0) = E[e(n) x(n)] = E[x(n) x(n)] + \sum_{k=1}^{p} a_k E[x(n-k) x(n)], so

\xi_p = R_{xx}(0) + \sum_{k=1}^{p} a_k R_{xx}(k).   (180)

Rewriting Equation (175), for i = 1, 2, \ldots, p:

R_{xx}(i) + \sum_{k=1}^{p} a_k R_{xx}(k-i) = 0.
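The Yule-Walker system (178) can be solved directly; a minimal sketch using an illustrative AR(1)-shaped autocorrelation sequence r(k) = 0.5^|k|:

```python
import numpy as np

def yule_walker(r):
    """Solve the normal equations (178): R a = -[r(1)..r(p)] for the AR
    coefficients a_1..a_p, given r = [r(0), r(1), ..., r(p)]."""
    r = np.asarray(r, dtype=float)
    p = len(r) - 1
    idx = np.abs(np.arange(p)[:, None] - np.arange(p)[None, :])
    R = r[idx]                       # p x p Toeplitz matrix of r(0)..r(p-1)
    a = np.linalg.solve(R, -r[1:])   # a_1 .. a_p
    xi = r[0] + np.dot(a, r[1:])     # prediction error, Eq. (180)
    return a, xi

a, xi = yule_walker([1.0, 0.5, 0.25])
```

For this sequence the order-2 solution collapses to a_1 = -0.5, a_2 = 0: the second tap adds nothing, exactly as an AR(1) structure dictates.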
Combining this with Equation (180) results in the augmented matrix equation

| R_{xx}(0)   R_{xx}(1)   ...  R_{xx}(p)   |  | 1   |     | \xi_p |
| R_{xx}(1)   R_{xx}(0)   ...  R_{xx}(p-1) |  | a_1 |  =  | 0     |
| ...                     ...              |  | ... |     | ...   |
| R_{xx}(p)   R_{xx}(p-1) ...  R_{xx}(0)   |  | a_p |     | 0     |   (181)

18.7 Levinson-Durbin Recursion

An iterative procedure to obtain the AR model coefficients for order p, denoted by a_j^{(p)}, from the coefficients of the (p-1)-th order model.

Start with the first-order model, using Equation (181):

| R_{xx}(0)  R_{xx}(1) |  | 1         |     | \xi_1 |
| R_{xx}(1)  R_{xx}(0) |  | a_1^{(1)} |  =  | 0     |   (182)

which results in

R_{xx}(0) + R_{xx}(1) a_1^{(1)} = \xi_1, and   (183)
R_{xx}(1) + R_{xx}(0) a_1^{(1)} = 0,   (184)

from which follows

a_1^{(1)} = -\frac{R_{xx}(1)}{R_{xx}(0)},   (185)

\xi_1 = R_{xx}(0) \left( 1 - (a_1^{(1)})^2 \right).   (186)(187)

The term a_1^{(1)} is known as the reflection coefficient or the partial correlation coefficient. From (181), for p = 0, it follows that \xi_0 = R_{xx}(0). Generally |R_{xx}(0)| > |R_{xx}(1)|, hence |a_1^{(1)}| < 1 and \xi_1 < R_{xx}(0), or \xi_1 < \xi_0.

Continue with the second-order model:

| R_{xx}(0)  R_{xx}(1)  R_{xx}(2) |  | 1         |     | \xi_2 |
| R_{xx}(1)  R_{xx}(0)  R_{xx}(1) |  | a_1^{(2)} |  =  | 0     |
| R_{xx}(2)  R_{xx}(1)  R_{xx}(0) |  | a_2^{(2)} |     | 0     |   (188)

from which we obtain the second and third rows written out as equations (189)-(190). Substituting (185) in (189) results in

a_1^{(2)} = a_1^{(1)} + a_2^{(2)} a_1^{(1)},   (191)

and substituting (191) into (190) produces

a_2^{(2)} = -\frac{a_1^{(1)} R_{xx}(1) + R_{xx}(2)}{\xi_1}.   (192)(193)

Also (see Equation (180)):

\xi_2 = R_{xx}(0) + a_1^{(2)} R_{xx}(1) + a_2^{(2)} R_{xx}(2).   (194)

Equations (191) and (193) suggest that it might be possible to obtain a^{(p)} recursively using

[a_1^{(p)}, \ldots, a_{p-1}^{(p)}, a_p^{(p)}]^T = [a_1^{(p-1)}, \ldots, a_{p-1}^{(p-1)}, 0]^T + K_p [a_{p-1}^{(p-1)}, \ldots, a_1^{(p-1)}, 1]^T,   (195)

where the factor K_p in (195) has to be determined. For example, for p = 2, Equation (193) suggests that

K_2 = -\frac{a_1^{(1)} R_{xx}(1) + R_{xx}(2)}{\xi_1},   (196)

and from Equation (191) we deduce that a_1^{(2)} = a_1^{(1)} + K_2 a_1^{(1)}.   (197)

In order to generalize these equations, we explore the case p = 3. Writing out the augmented equations for p = 3 and substituting the second-order solution, the same structure emerges: the new coefficient vector is the zero-padded old vector plus K_3 times its reversal, with

K_3 = -\frac{R_{xx}(3) + a_1^{(2)} R_{xx}(2) + a_2^{(2)} R_{xx}(1)}{\xi_2}   (198)-(202)

Generalizing, we obtain the Levinson-Durbin recursion:

\xi_0 = R_{xx}(0),   (203)

and for p = 1, 2, \ldots:

K_p = -\frac{R_{xx}(p) + \sum_{j=1}^{p-1} a_j^{(p-1)} R_{xx}(p-j)}{\xi_{p-1}}   (204)
a_p^{(p)} = K_p   (205)
a_j^{(p)} = a_j^{(p-1)} + K_p a_{p-j}^{(p-1)} for j = 1, \ldots, p-1   (206)
\xi_p = (1 - K_p^2) \xi_{p-1},   (207)

repeated up to the desired order.   (208)
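The recursion (203)-(208) can be sketched in a few lines; the test autocorrelation sequence is illustrative (the AR(1)-shaped r(k) = 0.5^|k| used earlier):

```python
import numpy as np

def levinson_durbin(r):
    """Levinson-Durbin recursion, Eqs. (203)-(208), for r = [R(0), ..., R(p)].
    Returns AR coefficients a_1..a_p, reflection coefficients K, and xi_p."""
    r = np.asarray(r, dtype=float)
    p = len(r) - 1
    a = np.zeros(0)
    xi = r[0]                                    # xi_0 = R(0), Eq. (203)
    K = np.zeros(p)
    for m in range(1, p + 1):
        # r[m-1:0:-1] = [R(m-1), ..., R(1)] pairs with a = [a_1, ..., a_{m-1}]
        K[m - 1] = -(r[m] + np.dot(a, r[m - 1:0:-1])) / xi        # Eq. (204)
        a = np.concatenate([a + K[m - 1] * a[::-1], [K[m - 1]]])  # (205)-(206)
        xi *= 1.0 - K[m - 1] ** 2                                 # Eq. (207)
    return a, K, xi

a, K, xi = levinson_durbin([1.0, 0.5, 0.25])
```

The result matches the direct Yule-Walker solve, but at O(p^2) cost instead of O(p^3), and the reflection coefficients fall out for free.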
18.8 Linear Algebra for the Levinson Recursion

The Levinson iteration is based on two properties of the autocorrelation matrix R:

1. R of a given size contains as subblocks all lower-order autocorrelation matrices.
2. R is reflection invariant, i.e., interchanging columns and then rows results in R again. This is a property of Toeplitz matrices.

This implies that if

| R(0) R(1) R(2) R(3) |  | a_0 |     | b_0 |
| R(1) R(0) R(1) R(2) |  | a_1 |  =  | b_1 |
| R(2) R(1) R(0) R(1) |  | a_2 |     | b_2 |
| R(3) R(2) R(1) R(0) |  | a_3 |     | b_3 |

then also

| R(0) R(1) R(2) R(3) |  | a_3 |     | b_3 |
| R(1) R(0) R(1) R(2) |  | a_2 |  =  | b_2 |
| R(2) R(1) R(0) R(1) |  | a_1 |     | b_1 |
| R(3) R(2) R(1) R(0) |  | a_0 |     | b_0 |

Step 1. Using the results of Step 0 we obtain \xi_1 = \xi_0 + K_1 D_0 and 0 = D_0 + K_1 \xi_0, from which follows K_1 = -D_0 / \xi_0 and \xi_1 = (1 - K_1^2) \xi_0.

In preparation for Step 2, enlarge to the next size by padding with a zero:

| R_{xx}(0)  R_{xx}(1)  R_{xx}(2) |  | 1         |     | \xi_1 |
| R_{xx}(1)  R_{xx}(0)  R_{xx}(1) |  | a_1^{(1)} |  =  | 0     |
| R_{xx}(2)  R_{xx}(1)  R_{xx}(0) |  | 0         |     | D_1   |

and, by reversal invariance,

| R_{xx}(0)  R_{xx}(1)  R_{xx}(2) |  | 0         |     | D_1   |
| R_{xx}(1)  R_{xx}(0)  R_{xx}(1) |  | a_1^{(1)} |  =  | 0     |
| R_{xx}(2)  R_{xx}(1)  R_{xx}(0) |  | 1         |     | \xi_1 |

Step 2. We wish to solve R [1, a_1^{(2)}, a_2^{(2)}]^T = [\xi_2, 0, 0]^T. Try an expression of the form

[1, a_1^{(2)}, a_2^{(2)}]^T = [1, a_1^{(1)}, 0]^T + K_2 [0, a_1^{(1)}, 1]^T.

From Step 1 we find

[\xi_2, 0, 0]^T = [\xi_1, 0, D_1]^T + K_2 [D_1, 0, \xi_1]^T,

from which follows K_2 = -D_1 / \xi_1 and \xi_2 = \xi_1 + K_2 D_1 = (1 - K_2^2) \xi_1.

Again enlarging R by padding a zero and using the reversal invariance property, we obtain the two order-4 systems with right-hand sides [\xi_2, 0, 0, D_2]^T and [D_2, 0, 0, \xi_2]^T.

Step 3. We wish to solve the fourth-order system; try the same form with K_3 and the reversed vector. From Step 2 we find, as before,

\xi_3 = \xi_2 + K_3 D_2 and 0 = D_2 + K_3 \xi_2,

from which follows K_3 = -D_2 / \xi_2 and \xi_3 = (1 - K_3^2) \xi_2, and so on for higher orders.

18.9 Stability of AR Models

An AR model of order p can be represented as

e(n) = \sum_{k=0}^{p} a_k x(n-k),

where a_0 = 1 and the remaining a_k are the predictor coefficients of (172). The optimal AR model, specified by \hat{a} = (\hat{a}_0, \hat{a}_1, \ldots, \hat{a}_p)^T, is the one that minimizes the mean squared prediction error.
That error can be written as

\xi = E[e^2(n)] = E\left[ \sum_{m=0}^{p} a_m x(n-m) \sum_{k=0}^{p} a_k x(n-k) \right] = \sum_{m=0}^{p} \sum_{k=0}^{p} a_m a_k R_{xx}(m-k),   (211)

with sample autocorrelation

\hat{R}_{xx}(l) = \frac{1}{N} \sum_{n=l}^{N-1} x(n) x(n-l) for 0 \le l \le p, and \hat{R}_{xx}(l) = \hat{R}_{xx}(-l) for l < 0.

Let \hat{a} be the optimal set of coefficients and z_i, i = 1, \ldots, p, the corresponding zero locations of

A(z) = \hat{a}_0 + \hat{a}_1 z^{-1} + \hat{a}_2 z^{-2} + \cdots + \hat{a}_p z^{-p}   (212)
 = (1 - z_1 z^{-1})(1 - z_2 z^{-1}) \cdots (1 - z_p z^{-1}).   (213)

Suppose A(z) has only one zero outside the unit circle, at z = (1/|c|) e^{j\varphi_0} with |c| < 1. Then

A(z) = A_1(z) \left( 1 - \frac{1}{c^*} z^{-1} \right), with A_1(z) minimum phase,

and we may write

A(z) = A_1(z) (1 - c z^{-1}) \times \frac{1 - (1/c^*) z^{-1}}{1 - c z^{-1}},

where, on the unit circle, the last factor has constant magnitude (it is an all-pass factor, up to a fixed gain). Therefore, reflecting a zero (or pole) from (1/|c|) e^{j\varphi_0} to |c| e^{j\varphi_0} does not change the shape of the magnitude response or the power spectrum, and consequently the autocorrelation function is not affected either (Wiener-Khinchine).

Suppose one factor of A(z) is changed by reflecting z_i, and the new set of coefficients is \hat{b} = (b_0, b_1, \ldots, b_p)^T, normalized to (1, b_1/b_0, \ldots, b_p/b_0)^T. The least mean squared error for the new, normalized model will be

\xi' = \frac{\xi_{\hat{b}}}{|b_0|^2}.   (214)(215)

The model specified by \hat{a} was optimal, hence \xi' \ge \xi_{\hat{a}}, which forces |b_0| \le 1. For a reflected zero one finds |b_0| = |z_i|, hence |z_i| \le 1; in other words, all zeros of the optimal A(z), and therefore all poles of the optimal AR model 1/A(z), are inside the unit circle.

Recall the analysis and synthesis equations (168) and (169). If 1/A(z) has all its poles inside the unit circle, then A(z) has all its zeros inside the unit circle, i.e., the AR model is minimum phase.

18.10 Levinson Recursion and the Minimum Phase Property

Recall that \xi_j = (1 - K_j^2) \xi_{j-1}. Since \xi_j and \xi_{j-1} are sums of squared errors, both will be positive. Therefore

|K_j| \le 1 for j = 1, 2, \ldots, p.   (216)

We will show: if and only if |K_j| \le 1 for j = 1, 2, \ldots, p, then all the roots of A(z) are inside the unit circle.

Proof by induction. For j = 1: A_1(z) = 1 + K_1 z^{-1}, and there is one root at z = -K_1; stability is guaranteed if |K_1| \le 1. For arbitrary j, proceed as follows.
Assume A_j(z) is minimum phase. Recall the Levinson recursion,

a_i^{(p)} = a_i^{(p-1)} + K_p a_{p-i}^{(p-1)},

or, as a z-transform,

A_p(z) = A_{p-1}(z) + K_p z^{-1} A_{p-1}^R(z),   (217)

where

A_p^R(z) = z^{-p} A_p(z^{-1})   (218)

is the reversed polynomial. Note A_p(z) = 1 + \sum_{k=1}^{p} a_k^{(p)} z^{-k}.

Write

B(z) = \frac{A_{j+1}(z)}{A_j(z)} = 1 + K_{j+1} z^{-1} \frac{A_j^R(z)}{A_j(z)}.

Because of the assumptions, A_j(z) will have j zeros inside the unit circle and j poles at the origin. Assume that A_{j+1}(z) has l zeros outside the unit circle and j + 1 - l zeros inside the unit circle; all its poles will still be at the origin.

Principle of the Argument: for B(z) and C a closed contour in the z-plane, the change in the argument of B(z) will be equal to (N_z - N_p) 2\pi when C is traveled in counter-clockwise fashion, where N_z and N_p are the numbers of zeros and poles, respectively, inside the contour. In case C = e^{j\omega}, i.e., C is the unit circle, the path traced in the B(z)-plane encircles the origin N_z - N_p times. Applied to B(z):

N_z = (zeros of A_{j+1}(z) inside UC) + (poles of A_j(z) inside UC) = (j + 1 - l) + j,
N_p = (poles of A_{j+1}(z) inside UC) + (zeros of A_j(z) inside UC) = (j + 1) + j,

so N_z - N_p = -l. Observe that

|K_{j+1} z^{-1} A_j^R(z) / A_j(z)| = |K_{j+1}| for z = e^{j\omega},

therefore B(z) traces a closed curve inside a circle of radius |K_{j+1}| centered at z = 1; B(z) will not encircle the origin, i.e., l = 0, if |K_{j+1}| \le 1. QED.

18.11 Lattice Filters

Recall the z-transform representation of the Levinson recursion as expressed by Equation (217), repeated here:

A_p(z) = A_{p-1}(z) + K_p z^{-1} A_{p-1}^R(z),

and in the time domain

a_j^{(p)} = a_j^{(p-1)} + K_p a_{p-j}^{(p-1)}.   (219)

Substituting p - j for j results in

a_{p-j}^{(p)} = a_{p-j}^{(p-1)} + K_p a_j^{(p-1)},

and in the z-domain

A_p^R(z) = z^{-1} A_{p-1}^R(z) + K_p A_{p-1}(z).   (220)

Combining Equations (217) and (220) in matrix form results in

| A_p(z)   |     | 1    K_p z^{-1} |  | A_{p-1}(z)   |
| A_p^R(z) |  =  | K_p  z^{-1}     |  | A_{p-1}^R(z) |   (221)

Multiplying Equation (221) left and right by X(z):

| E_p^+(z) |     | 1    K_p z^{-1} |  | E_{p-1}^+(z) |
| E_p^-(z) |  =  | K_p  z^{-1}     |  | E_{p-1}^-(z) |   (222)

where E_p^+(z) and E_p^-(z) denote the z-transforms of the forward and backward prediction errors, respectively, which are defined as

e_p^+(n) = x(n) + \sum_{k=1}^{p} a_k^{(p)} x(n-k),   (223)

e_p^-(n) = x(n-p) + \sum_{k=1}^{p} a_k^{(p)} x(n-p+k).   (224)
Converting Equation (222) to the time domain:

e_p^+(n) = e_{p-1}^+(n) + K_p e_{p-1}^-(n-1),   (225)
e_p^-(n) = K_p e_{p-1}^+(n) + e_{p-1}^-(n-1),   (226)

to be initialized with e_0^+(n) = e_0^-(n) = x(n). This leads to a lattice structure for the analysis equation.

[Figure: analysis lattice. x(n) enters the first stage as e_0^+(n) = e_0^-(n); each stage applies the cross-coupled gains K_p and a unit delay z^{-1} on the backward path, producing e_p^+(n) = e(n). Equivalently, x(n) -> A(z) -> e(n).]

To implement the synthesis equation, rewrite Equations (225) and (226):

e_{p-1}^+(n) = e_p^+(n) - K_p e_{p-1}^-(n-1),   (227)
e_p^-(n) = K_p e_{p-1}^+(n) + e_{p-1}^-(n-1).   (228)

[Figure: synthesis lattice. e_p(n) enters at the highest stage and x(n) = e_0^+(n) emerges. Equivalently, e(n) -> 1/A(z) -> x(n).]

18.12 Partial Correlation and K

Suppose one has p + 1 samples of a process: x(n), x(n-1), \ldots, x(n-p). Generally, x(n-i) will be correlated with x(n-i+1); hence x(n) and x(n-p) will be correlated.

Partial Correlation Coefficient: a measure of the correlation between x(n) and x(n-p), with the effect of the intermediate samples removed:

PARCOR = \frac{E[e^+(n) e^-(n)]}{\sqrt{E[(e^+(n))^2] E[(e^-(n))^2]}},   (229)

with

e^+(n) = x(n) minus the projection of x(n) on the subspace spanned by x(n-1), x(n-2), \ldots, x(n-p+1),
e^-(n) = x(n-p) minus the projection of x(n-p) on the same subspace.

Note that the same samples are used to produce e^+(n) as are needed for e^-(n). We obtain

PARCOR = \frac{E[e_{p-1}^+(n) e_{p-1}^-(n-1)]}{\sqrt{E[(e_{p-1}^+(n))^2] E[(e_{p-1}^-(n-1))^2]}} = -K_p,   (230)

since one can show E[e_{p-1}^+(n) e_{p-1}^-(n-1)] = D_{p-1} and E[(e_{p-1}^+(n))^2] = E[(e_{p-1}^-(n-1))^2] = \xi_{p-1}, while K_p = -D_{p-1}/\xi_{p-1}.

18.13 Practical Methods for AR Coefficient Computation

All methods attempt to minimize the expected prediction error E[(e^+(n))^2] or E[(e^-(n))^2], but expectation is summation over infinity, requiring knowledge of x(n) for all n. In practice, x(n) is only known over a finite interval [0, N-1], and certain assumptions have to be made. Given a block of measured signal values, the AR model coefficients can be computed using (1) the autocorrelation or Yule-Walker method, (2) the covariance method, or (3) Burg's method.

Autocorrelation Method. Assume x(n) is zero outside the interval [0, N-1]. This is equivalent to windowing with

w(n) = 1 for 0 \le n \le N-1, and 0 elsewhere.

This leads to

\xi_p = \sum_{n=-\infty}^{\infty} (e_p^+(n))^2 = \sum_{k=0}^{p} \sum_{l=0}^{p} a_k a_l \sum_{n=-\infty}^{\infty} x(n-k) x(n-l) = \sum_{k=0}^{p} \sum_{l=0}^{p} a_k a_l \hat{R}_{xx}(k-l),

where

\hat{R}_{xx}(k) = \frac{1}{N} \sum_{n=k}^{N-1} x(n-k) x(n) for k = 0, 1, \ldots, N-1.   (231)

Estimating the autocorrelation function using Equation (231) preserves the Toeplitz structure of the matrix; hence the Levinson-Durbin recursion can be used.
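The lattice recursions (225)-(226) of Section 18.11 can be checked numerically against direct convolution with A(z), the two sides of the equivalence shown above; the reflection coefficients and test signal here are illustrative:

```python
import numpy as np

def lattice_analysis(x, K):
    """Analysis lattice, Eqs. (225)-(226): e_p^+ = e_{p-1}^+ + K_p e_{p-1}^-(n-1),
    e_p^- = K_p e_{p-1}^+ + e_{p-1}^-(n-1), starting from e_0^+ = e_0^- = x."""
    ef = np.asarray(x, dtype=float).copy()
    eb = ef.copy()
    for k in K:
        eb_delayed = np.concatenate([[0.0], eb[:-1]])  # e^-(n-1), zero initial state
        ef, eb = ef + k * eb_delayed, k * ef + eb_delayed
    return ef

def reflection_to_a(K):
    """Reflection coefficients -> direct-form A(z) taps via the Levinson update."""
    a = np.array([1.0])
    for k in K:
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]
    return a

rng = np.random.default_rng(1)
x = rng.standard_normal(64)
K = [0.5, -0.3]
e_lattice = lattice_analysis(x, K)
e_direct = np.convolve(x, reflection_to_a(K))[:len(x)]  # e(n) = sum_m a_m x(n-m)
```

Both paths compute the same forward prediction error, which is the point of Equation (222): the lattice is just a factored implementation of A(z).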
Covariance Method. Use an error criterion independent of x(n) outside [0, N-1]:

\xi_p = \sum_{n=p}^{N-1} (e_p^+(n))^2   (232)
 = \sum_{k=0}^{p} \sum_{l=0}^{p} a_k a_l \sum_{n=p}^{N-1} x(n-k) x(n-l)   (233)
 = \sum_{k=0}^{p} \sum_{l=0}^{p} a_k a_l \hat{R}_{xx}(k, l),   (234)

where

\hat{R}_{xx}(k, l) = \frac{1}{N-p} \sum_{n=p}^{N-1} x(n-k) x(n-l) for 0 \le k, l \le p.   (235)

Equation (235) results in a more accurate estimate of R, but the Toeplitz structure is lost:

\hat{R}_{xx}(k+1, l+1) = \frac{1}{N-p} \sum_{n=p}^{N-1} x(n-1-k) x(n-1-l) = \frac{1}{N-p} \sum_{m=p-1}^{N-2} x(m-k) x(m-l)
 = \hat{R}_{xx}(k, l) + \frac{1}{N-p} \left[ x(p-1-k) x(p-1-l) - x(N-1-k) x(N-1-l) \right].

Hence Levinson-Durbin cannot be used anymore, and stability is not guaranteed.

Burg's Method. Minimize the sum of the forward and backward prediction errors, computed using available data only:

\xi_p^B = \sum_{n=p}^{N-1} \left[ (e_p^+(n))^2 + (e_p^-(n))^2 \right].   (236)

To guarantee stability, subject the minimization of \xi_p^B to the Levinson-Durbin recursion, i.e.,

\frac{\partial \xi_p^B}{\partial K_p} = \sum_{n=p}^{N-1} \left[ 2 e_p^+(n) \frac{\partial e_p^+(n)}{\partial K_p} + 2 e_p^-(n) \frac{\partial e_p^-(n)}{\partial K_p} \right] = 0.   (237)

Using (225) and (226) we obtain

\sum_{n=p}^{N-1} \left[ e_p^+(n) e_{p-1}^-(n-1) + e_p^-(n) e_{p-1}^+(n) \right] = 0,

which results in

K_p = -\frac{2 \sum_{n=p}^{N-1} e_{p-1}^+(n) e_{p-1}^-(n-1)}{\sum_{n=p}^{N-1} \left[ (e_{p-1}^+(n))^2 + (e_{p-1}^-(n-1))^2 \right]}.   (238)

Equation (238) is of the form 2ab / (|a|^2 + |b|^2), which is at most one in magnitude; hence Burg's method guarantees stability.

18.14 Selecting the Model Order

1. Select the order \hat{p} which minimizes the least mean square error \xi_{\hat{p}}. This will result in \hat{p} \rightarrow \infty, because \xi_p \le \xi_{p-1}.

2. Minimize a criterion based on \xi_p plus a penalty term for the number of AR coefficients:

(a) Akaike's Information Criterion:

AIC(p) = N \log \xi_p + 2p.   (239)

Experience has shown that it underestimates the order for non-autoregressive signals and overestimates the order for large N.

(b) Minimum Description Length:

MDL(p) = N \log \xi_p + p \log N.   (240)

Shown to be a consistent estimator, i.e., the estimated order approaches the actual order when N \rightarrow \infty.

(c) Final Prediction Error:

FPE(p) = \xi_p \frac{N + p + 1}{N - p - 1}.   (241)

(d) Criterion Autoregressive Transfer function:

CAT(p) = \frac{1}{N} \sum_{j=1}^{p} \frac{1}{\bar{\xi}_j} - \frac{1}{\bar{\xi}_p}, with \bar{\xi}_j = \frac{N}{N-j} \xi_j.   (242)

None of these techniques works well for small N or for noisy signals.
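Burg's method above (Eq. (238) driving the Levinson update) can be sketched as follows; the synthetic AR(1) test signal and its seed are illustrative:

```python
import numpy as np

def burg(x, p):
    """Burg's method: reflection coefficients from Eq. (238), direct-form taps
    via the Levinson update. |K_p| <= 1 by construction, so the model is stable."""
    x = np.asarray(x, dtype=float)
    f = x[1:].copy()    # forward errors e_0^+(n), n = 1..N-1
    b = x[:-1].copy()   # backward errors e_0^-(n-1)
    a = np.array([1.0])
    for _ in range(p):
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))   # Eq. (238)
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]
        f, b = f[1:] + k * b[1:], b[:-1] + k * f[:-1]              # (225)-(226)
    return a

# Synthetic AR(1): x(n) = 0.5 x(n-1) + e(n), so the model taps are [1, -0.5]
rng = np.random.default_rng(7)
e = rng.standard_normal(4000)
x = np.zeros_like(e)
for n in range(1, len(e)):
    x[n] = 0.5 * x[n - 1] + e[n]
a = burg(x, 1)
```

Because the reflection coefficient is a normalized cross-correlation of the two error sequences, the estimate stays inside the unit circle regardless of the data.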
3. Use common-sense reasoning:

upper bound: the statistical literature suggests that the ratio between the population size and the number of parameters to be estimated should be greater than five;
lower bound: one needs at least one pair of poles per resonance and one pole for the DC component, hence p_{lower} \ge twice the number of spectral peaks, plus one.

4. Account for the sampling frequency. If f_s is large, then a good predictor will be \hat{x}(n) = x(n-1), i.e., order p = 1 with a_1 = -1, which is clearly not representative of the frequency content of x(n).

Realize that we are dealing with sampled signals, i.e.,

R_{xx}(k) = R_{xx}^{(a)}(\tau)|_{\tau = kT},   (243)

where T = 1/f_s. Selecting the model order is effectively equivalent to windowing R_{xx}^{(a)}(\tau) with a rectangular window of length pT. Clearly, a more representative section of R_{xx}^{(a)}(\tau) is obtained, and hence a better spectral representation (thanks to Wiener-Khinchine), for the same order filter but with a lower sampling rate.

[Figure: continuous ACF R_{xx}^{(a)}(\tau) and its samples R_{xx}(k), showing the section covered by a window of length pT.]

19 Model-Based Spectral Estimation

19.1 Basic Approach

Assume that a signal x(n) is generated by an AR process excited by zero-mean, bandlimited white noise. According to (166) and (169),

X(z) = \frac{E(z)}{1 + \sum_{k=1}^{p} a_k z^{-k}},

and the power spectrum of x(n) can be found from

S_{xx}(\omega) = \frac{S_{ee}(\omega)}{\left| 1 + \sum_{k=1}^{p} a_k e^{-j\omega k} \right|^2}.

The white noise e(n) is bandlimited to B, hence

R_{ee}(0) = \sigma_e^2 = \frac{1}{2\pi} \int_{-B}^{B} S_{ee}(\omega) d\omega = \frac{2B}{2\pi} S_{ee}(0), so S_{ee}(\omega) = S_{ee}(0) = \frac{\pi \sigma_e^2}{B}.

As a result we obtain

S_{xx}(\omega) = \frac{\pi \sigma_e^2 / B}{\left| 1 + \sum_{k=1}^{p} a_k e^{-j\omega k} \right|^2}.   (244)

Note: S_{xx}(\omega) is continuous in \omega.

19.2 AR Spectra and Resolution

Recall that the FFT-based spectral estimate is obtained from the Discrete-Time Fourier Transform:

X(k) = N c_k, with   (245)
c_k = \frac{1}{N} \sum_{n=0}^{N-1} x(n) e^{-j 2\pi k n / N}, k = 0, 1, \ldots, N-1.   (246)

The DFT assumes that x(n) is of finite duration N, and that

x(n) = \sum_{k=0}^{N-1} c_k e^{j 2\pi k n / N} for n = 0, 1, \ldots, N-1.   (247)

In other words, x(n) is modeled by N harmonically related complex exponentials with fundamental period equal to the interval length, which is generally not a signal characteristic. AR-based spectral
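Equation (244) can be evaluated directly from the model coefficients; a minimal sketch, with the noise level lumped into a single variance parameter and an illustrative single-pole model:

```python
import numpy as np

def ar_spectrum(a, sigma2, w):
    """AR power spectrum per Eq. (244): S(w) = sigma2 / |1 + sum_k a_k e^{-jwk}|^2,
    with a = [a_1, ..., a_p] and the excitation level folded into sigma2."""
    coeffs = np.concatenate([[1.0], np.asarray(a, dtype=float)])
    A = np.exp(-1j * np.outer(w, np.arange(len(coeffs)))) @ coeffs
    return sigma2 / np.abs(A) ** 2

w = np.linspace(0.0, np.pi, 256)
S = ar_spectrum([-0.9], 1.0, w)   # single pole at z = 0.9: lowpass shape
```

The pole near z = 1 produces a sharp spectral peak at DC, illustrating how pole locations shape the AR spectrum without any DFT grid.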
estimates are derived from

X(z) = \frac{E(z)}{1 + \sum_{k=1}^{p} a_k z^{-k}} = \frac{b_0}{\prod_{k=1}^{p} (1 - d_k z^{-1})} E(z) = b_0 \sum_{k=1}^{p} \frac{A_k}{1 - d_k z^{-1}} E(z),   (248)

resulting in

x(n) = b_0 \sum_{k=1}^{p} A_k d_k^n u(n) = b_0 \sum_{k=1}^{p} A_k r_k^n e^{j\omega_k n} u(n).   (249)

Comparing (247) and (249), we note that x(n) is expanded with a tailor-made Fourier series in the case of AR-based spectral estimation. This can result in less leakage and higher-resolution spectra.

19.3 Problems with AR-Based Spectral Estimation

1. Using a model order p smaller than the actual order: overly smooth spectra.

2. Using a model order p much larger than the actual order: fitting the noise; line splitting.

3. Signal contains additive white noise: may give rise to peak displacement; use a higher-order model, or use an ARMA model (Wold decomposition theorem).

Wold decomposition theorem: any general random process can be written as the sum of a predictable and a regular random process, x(n) = x_p(n) + x_r(n), with E[x_p(n) x_r(n)] = 0. A process is predictable iff its spectrum is discrete in \omega:

P_x(\omega) = P_{x_r}(\omega) + \sum_{k=1}^{N} \alpha_k \delta(\omega - \omega_k).

The spectrum P_{x_r}(\omega) is continuous in \omega and can be approximated by a large (infinite) number of spectral lines.

19.4 Comparing AR Processes

Given a reference sequence x_R(n), produced by 1/A_R(z) excited by e_R(n), and a test sequence x_T(n), thought to be produced by 1/A_T(z) excited by e_T(n).

Objective: determine if A_R(z) and A_T(z) are equivalent.

Test Development: feed the test sequence through A_T(z) and A_R(z), and compare the spectra of the outputs, e_T(n) and e'_T(n) respectively, using

d(A_T, A_R) = \log \frac{1}{2\pi} \int_{-\pi}^{\pi} \frac{S_{e'_T e'_T}(\omega)}{S_{e_T e_T}(\omega)} d\omega.   (250)

One can show

S_{e_T e_T}(\omega) = |A_T(\omega)|^2 S_{x_T x_T}(\omega) and S_{e'_T e'_T}(\omega) = |A_R(\omega)|^2 S_{x_T x_T}(\omega),

hence

d(A_T, A_R) = \log \frac{1}{2\pi} \int_{-\pi}^{\pi} \frac{|A_R(\omega)|^2}{|A_T(\omega)|^2} d\omega.   (251)

Observe that e_T(n) is white noise, hence S_{e_T e_T}(\omega) = \sigma_{e_T}^2, and the mean output power is

\frac{1}{2\pi} \int_{-\pi}^{\pi} S_{e'_T e'_T}(\omega) d\omega = \mathbf{a}_R^T \mathbf{R}_T \mathbf{a}_R.

Therefore, using (211), we obtain the Itakura distance

d_I(A_T, A_R) = \log \frac{\mathbf{a}_R^T \mathbf{R}_T \mathbf{a}_R}{\mathbf{a}_T^T \mathbf{R}_T \mathbf{a}_T}.   (252)

In practice, \mathbf{a}_T, \mathbf{a}_R, and \mathbf{R}_T will be estimated from x_T(n) and x_R(n). Note that d(A_T, A_R) \ne d(A_R, A_T).

Part III: Nonlinear Systems Analysis and
Modeling

20 Linear Systems

Linear systems are completely described by their impulse response h(n):

time domain (convolution sum):

y(n) = \sum_{k=-\infty}^{\infty} x(k) h(n-k) = \sum_{k=-\infty}^{\infty} h(k) x(n-k);   (253)(254)

frequency domain (frequency response, transfer function):

H(\omega) = \sum_{u=0}^{\infty} h(u) e^{-j\omega u}.   (255)

Properties:
- sine wave in, sine wave out: x(t) = A_0 e^{j\omega_0 t} \rightarrow y(t) = A_0 e^{j\omega_0 t} H(\omega_0);   (256)
- superposition principle: a x_1(t) + b x_2(t) \rightarrow a y_1(t) + b y_2(t).   (257)

21 Nonlinear Systems

Nonlinear systems are described by an infinite sequence of transfer functions.

Generalized Transfer Function:

H_1(\omega_1) = \sum_{u=0}^{\infty} h(u) e^{-j\omega_1 u},   (258)
H_2(\omega_1, \omega_2) = \sum_{u=0}^{\infty} \sum_{v=0}^{\infty} h(u, v) e^{-j(\omega_1 u + \omega_2 v)}.   (259)

Properties:
- frequency multiplication: a sine wave at \omega_0 in can produce \omega_0, 2\omega_0, 3\omega_0, \ldots out;
- intermodulation distortion: \omega_0 and \omega_1 in can produce \omega_0, \omega_1, \omega_0 \pm \omega_1, \ldots out.

22 Higher-Order Spectra

Definition: the Fourier transform of higher-order statistics (cumulants).

Examples:
- second-order statistics (i.e., the ACF): power spectrum;
- third-order statistics: bispectrum;
- fourth-order statistics: trispectrum.

Motivations:

1. To suppress Gaussian noise of unknown spectral characteristics, because HOS of order 3 and higher are zero for Gaussian signals. Useful for detection, parameter estimation, and signal reconstruction.

2. To reconstruct the phase and magnitude response of signals and systems. The power spectrum is phase blind (except for minimum phase systems), while HOS preserve phase and magnitude.

3. To identify a nonlinear system, or to detect nonlinearity in time series.

22.1 Characteristic Functions

If x is a random variable, then the characteristic function is defined as

\Phi(\omega) = E[e^{j\omega x}],   (260)

i.e., if x is continuous with probability density function p(x),

\Phi(\omega) = \int_{-\infty}^{\infty} p(x) e^{j\omega x} dx,   (261)

i.e., \Phi(\omega) is the inverse Fourier transform of the pdf (using the mathematicians' definition of the Fourier transform), and p(x) is the Fourier transform of \Phi(\omega):

p(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \Phi(\omega) e^{-j\omega x} d\omega.   (262)

Remember: if x and y are independent and z = x + y, then p(z) = p(x) * p(y), because

\Phi_z(\omega) = E[e^{j\omega(x+y)}] = E[e^{j\omega x}] E[e^{j\omega y}] = \Phi_x(\omega) \Phi_y(\omega).

If X is a set of N random variables,
X = \{x_1, x_2, \ldots, x_N\}, then the joint characteristic function \Phi is defined as

\Phi(\omega_1, \omega_2, \ldots, \omega_N) = E\left[ e^{j(\omega_1 x_1 + \omega_2 x_2 + \cdots + \omega_N x_N)} \right].   (263)

22.2 Moments

The moments of a random variable are defined by

m_r = E[x^r] = \int_{-\infty}^{\infty} x^r p(x) dx.   (264)

The first four moments of x are

m_1 = E[x]   (265)
m_2 = E[x^2]   (266)
m_3 = E[x^3]   (267)
m_4 = E[x^4]   (268)

The relationship between moments and characteristic functions is obtained by differentiating (261) and evaluating at \omega = 0:

\frac{d\Phi(\omega)}{d\omega}\Big|_{\omega=0} = \int_{-\infty}^{\infty} jx\, p(x) e^{j\omega x} dx \Big|_{\omega=0} = j \int_{-\infty}^{\infty} x\, p(x) dx = j m_1.   (269)

The r-th moment is obtained by

m_r = (-j)^r \frac{d^r \Phi(\omega)}{d\omega^r}\Big|_{\omega=0}.   (270)

If X is a set of N random variables, X = \{x_1, x_2, \ldots, x_N\}, then the joint moments of X of order r are given by

m_r = E[x_1^{k_1} x_2^{k_2} \cdots x_N^{k_N}], with r = k_1 + k_2 + \cdots + k_N.

Using the definition of the joint characteristic function (263), we obtain

m_r = (-j)^r \frac{\partial^r \Phi(\omega_1, \omega_2, \ldots, \omega_N)}{\partial \omega_1^{k_1} \partial \omega_2^{k_2} \cdots \partial \omega_N^{k_N}}\Big|_{\omega_1 = \cdots = \omega_N = 0}.   (271)

22.3 Cumulants

The cumulants of a random variable x are defined by

c_r = (-j)^r \frac{d^r \ln \Phi(\omega)}{d\omega^r}\Big|_{\omega=0}.   (272)

The first four cumulants of x are

c_1 = m_1 (mean)   (273)
c_2 = m_2 - m_1^2 (variance)   (274)
c_3 = m_3 - 3 m_2 m_1 + 2 m_1^3   (275)
c_4 = m_4 - 4 m_3 m_1 - 3 m_2^2 + 12 m_2 m_1^2 - 6 m_1^4   (276)

Note: knowledge of the first r moments is required to compute c_r.

The joint cumulants of order r of the set X are given by

c_{k_1 k_2 \cdots k_N} = (-j)^r \frac{\partial^r \ln \Phi(\omega_1, \omega_2, \ldots, \omega_N)}{\partial \omega_1^{k_1} \partial \omega_2^{k_2} \cdots \partial \omega_N^{k_N}}\Big|_{\omega_1 = \cdots = \omega_N = 0}.   (277)

22.4 Stationary Processes

If x(k), k = 0, \pm 1, \pm 2, \ldots, is real-valued, strictly stationary, and its first n moments exist, then the joint moment

Mom[x(k), x(k + \tau_1), \ldots, x(k + \tau_{n-1})]

will only depend on the time differences \tau_1, \ldots, \tau_{n-1}:

m_n^x(\tau_1, \ldots, \tau_{n-1}) = E[x(k) x(k + \tau_1) \cdots x(k + \tau_{n-1})].   (278)

The first four joint moments are given by

m_1^x = E[x(t)] (mean)   (279)
m_2^x(\tau_1) = E[x(t) x(t + \tau_1)] (autocorrelation function)   (280)
m_3^x(\tau_1, \tau_2) = E[x(t) x(t + \tau_1) x(t + \tau_2)]   (281)
m_4^x(\tau_1, \tau_2, \tau_3) = E[x(t) x(t + \tau_1) x(t + \tau_2) x(t + \tau_3)]   (282)

The n-th order cumulant function of a non-Gaussian stationary random signal, for n = 2, 3, 4, can be written as

c_n^x(\tau_1, \ldots, \tau_{n-1}) = m_n^x(\tau_1, \ldots, \tau_{n-1}) - m_n^G(\tau_1, \ldots, \tau_{n-1}),   (283)

where m_n^G(\tau_1, \ldots, \tau_{n-1}) is the n-th order moment function of an equivalent Gaussian signal with the same mean value and autocorrelation function as x(k). Hence c_n^x = 0 for n \ge 3 for Gaussian processes, a fact to be used as a test.
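The moment-to-cumulant conversions (273)-(276) can be sketched on sample moments; the four-point data set below is illustrative:

```python
import numpy as np

def cumulants(x):
    """Sample cumulants c1..c4 from sample moments, per Eqs. (273)-(276)."""
    x = np.asarray(x, dtype=float)
    m1, m2, m3, m4 = (np.mean(x ** r) for r in (1, 2, 3, 4))
    c1 = m1
    c2 = m2 - m1 ** 2
    c3 = m3 - 3 * m2 * m1 + 2 * m1 ** 3
    c4 = m4 - 4 * m3 * m1 - 3 * m2 ** 2 + 12 * m2 * m1 ** 2 - 6 * m1 ** 4
    return c1, c2, c3, c4

c1, c2, c3, c4 = cumulants([1.0, 2.0, 3.0, 4.0])
```

For this symmetric data set the third cumulant vanishes and the fourth is negative (flatter than Gaussian), matching the roles of c3 and c4 as skewness- and kurtosis-type quantities.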
The joint cumulant functions can be expressed in terms of the joint moment functions. For a stationary process:

c_1^x = m_1^x   (284)
c_2^x(\tau_1) = m_2^x(\tau_1) - (m_1^x)^2   (285)
c_3^x(\tau_1, \tau_2) = m_3^x(\tau_1, \tau_2) - m_1^x \left[ m_2^x(\tau_1) + m_2^x(\tau_2) + m_2^x(\tau_2 - \tau_1) \right] + 2 (m_1^x)^3   (286)
c_4^x(\tau_1, \tau_2, \tau_3) = m_4^x(\tau_1, \tau_2, \tau_3) - m_2^x(\tau_1) m_2^x(\tau_3 - \tau_2) - m_2^x(\tau_2) m_2^x(\tau_3 - \tau_1) - m_2^x(\tau_3) m_2^x(\tau_2 - \tau_1) (zero-mean case).   (287)

Remarks:

1. If the process is zero mean (m_1^x = 0), then c_2^x = m_2^x and c_3^x = m_3^x; the fourth-order cumulant requires knowledge of both m_4^x and m_2^x.

2. If x(k) is Gaussian, then c_n^x = 0 for n \ge 3; a zero-mean Gaussian process is completely characterized by its ACF.

3. If in addition \tau_1 = \tau_2 = \tau_3 = 0, then

\gamma_2^x = c_2^x(0) = E[x^2(k)] (variance)   (288)
\gamma_3^x = c_3^x(0, 0) = E[x^3(k)] (skewness)   (289)
\gamma_4^x = c_4^x(0, 0, 0) = E[x^4(k)] - 3 (\gamma_2^x)^2 (kurtosis)   (290)

\gamma_r^x is shorthand notation for the r-th order cumulant when the random process x(k) is treated as a random variable.

4. Symmetry properties for c_3^x include

c_3^x(\tau_1, \tau_2) = c_3^x(\tau_2, \tau_1) = c_3^x(-\tau_2, \tau_1 - \tau_2) = c_3^x(\tau_2 - \tau_1, -\tau_1) = \ldots   (291)

Therefore, knowing c_3^x in any one of six sectors (I through VI) of the (\tau_1, \tau_2) plane is sufficient to find the entire cumulant function.

[Figure: the (\tau_1, \tau_2) plane divided into the six symmetry sectors I through VI of c_3^x.]

22.5 Properties of Cumulants

1. If \lambda_i, i = 1, \ldots, n, are constants and the x_i are random variables, then

cum(\lambda_1 x_1, \ldots, \lambda_n x_n) = \left( \prod_{i=1}^{n} \lambda_i \right) cum(x_1, \ldots, x_n).   (292)

2. Cumulants are symmetric in their arguments, i.e.,

cum(x_1, \ldots, x_n) = cum(x_{i_1}, \ldots, x_{i_n}),   (293)

where (i_1, \ldots, i_n) is any permutation of (1, \ldots, n).

3. Cumulants of sums equal sums of cumulants, i.e.,

cum(x_0 + y_0, z_1, \ldots, z_n) = cum(x_0, z_1, \ldots, z_n) + cum(y_0, z_1, \ldots, z_n).   (294)

4. If \alpha is a constant, then

cum(\alpha + z_1, z_2, \ldots, z_n) = cum(z_1, z_2, \ldots, z_n).   (295)

5. If the random variables x_i are independent of the random variables y_i, i = 1, 2, \ldots, n, then

cum(x_1 + y_1, \ldots, x_n + y_n) = cum(x_1, \ldots, x_n) + cum(y_1, \ldots, y_n).   (296)

Suppose y(n) = x(n) + w(n), where x(n) and w(n) are independent; then

c_k^y(\tau_1, \ldots, \tau_{k-1}) = c_k^x(\tau_1, \ldots, \tau_{k-1}) + c_k^w(\tau_1, \ldots, \tau_{k-1}).

If w(n) is Gaussian (colored or white), then c_k^y(\tau_1, \ldots, \tau_{k-1}) = c_k^x(\tau_1, \ldots, \tau_{k-1}) for k = 3, 4, \ldots. This makes the higher-order statistics more robust to additive Gaussian measurement noise than correlation, even if
the noise is colored.

6. If a subset of the k random variables is independent of the rest, then

cum(x_1, \ldots, x_k) = 0.   (297)

Cumulants of an independent, identically distributed (i.i.d.) random sequence are delta functions, i.e., if w(t) is an i.i.d. process, then

c_k^w(\tau_1, \ldots, \tau_{k-1}) = \gamma_k^w \delta(\tau_1) \delta(\tau_2) \cdots \delta(\tau_{k-1}),   (298)

where \gamma_k^w is the k-th order cumulant of the stationary random sequence w(t).

22.6 Polyspectra

If c_n^x(\tau_1, \ldots, \tau_{n-1}) is absolutely summable, i.e.,

\sum_{\tau_1 = -\infty}^{\infty} \cdots \sum_{\tau_{n-1} = -\infty}^{\infty} |c_n^x(\tau_1, \ldots, \tau_{n-1})| < \infty,

then the n-th order cumulant spectrum (polyspectrum) of x(k) exists:

C_n^x(\omega_1, \ldots, \omega_{n-1}) = \sum_{\tau_1 = -\infty}^{\infty} \cdots \sum_{\tau_{n-1} = -\infty}^{\infty} c_n^x(\tau_1, \ldots, \tau_{n-1}) \exp[-j(\omega_1 \tau_1 + \cdots + \omega_{n-1} \tau_{n-1})],   (299)

with |\omega_i| \le \pi for i = 1, 2, \ldots, n-1 and |\omega_1 + \cdots + \omega_{n-1}| \le \pi.

Note: polyspectra are continuous in \omega, periodic with 2\pi, and generally complex for n > 2, and hence preserve phase.

Power Spectrum. For n = 2:

C_2^x(\omega) = \sum_{\tau = -\infty}^{\infty} c_2^x(\tau) e^{-j\omega\tau},   (300)(301)

the well-known Wiener-Khinchine relation.

Bispectrum. For n = 3:

C_3^x(\omega_1, \omega_2) = \sum_{\tau_1 = -\infty}^{\infty} \sum_{\tau_2 = -\infty}^{\infty} c_3^x(\tau_1, \tau_2) e^{-j(\omega_1 \tau_1 + \omega_2 \tau_2)}

for |\omega_1| \le \pi, |\omega_2| \le \pi, |\omega_1 + \omega_2| \le \pi.   (302)

Due to the symmetry properties of the third-order cumulant, we observe

C_3^x(\omega_1, \omega_2) = C_3^x(\omega_2, \omega_1) = C_3^{x*}(-\omega_1, -\omega_2) = C_3^x(-\omega_1 - \omega_2, \omega_2) = C_3^x(\omega_1, -\omega_1 - \omega_2) = \ldots

Trispectrum. For n = 4:

C_4^x(\omega_1, \omega_2, \omega_3) = \sum_{\tau_1 = -\infty}^{\infty} \sum_{\tau_2 = -\infty}^{\infty} \sum_{\tau_3 = -\infty}^{\infty} c_4^x(\tau_1, \tau_2, \tau_3) e^{-j(\omega_1 \tau_1 + \omega_2 \tau_2 + \omega_3 \tau_3)}

for |\omega_1| \le \pi, |\omega_2| \le \pi, |\omega_3| \le \pi, |\omega_1 + \omega_2 + \omega_3| \le \pi.   (303)

Again, many symmetry regions can be derived.

22.7 Linearity and Coherence

The n-th order coherency index is defined as

P_n^x(\omega_1, \ldots, \omega_{n-1}) = \frac{C_n^x(\omega_1, \ldots, \omega_{n-1})}{\sqrt{C_2^x(\omega_1) \cdots C_2^x(\omega_{n-1}) C_2^x(\omega_1 + \cdots + \omega_{n-1})}},   (304)

i.e., the n-th order cumulant spectrum (n > 2) normalized by the power spectrum. A signal is linear of order n if |P_n^x(\omega_1, \ldots, \omega_{n-1})| = constant for all \omega.

22.8 Linear Systems

[Figure 2: linear system H(z) driven by v(n), with output corrupted by additive noise: y(n) = \sum_i h(i) v(n-i) + w(n).]

For the system presented in Figure 2, if v(k) is white Gaussian noise with variance \sigma_v^2, w(k) is white Gaussian noise with variance \sigma_w^2, v(k) and w(k) are independent, and H(z) (the z-transform of the impulse response) is stable, causal, and linear, then

R_{yy}(k) = R_{\hat{y}\hat{y}}(k) + R_{ww}(k) = \sigma_v^2 \sum_i h(i) h(i+k) + \sigma_w^2 \delta(k),   (305)
S_{yy}(\omega) = \sigma_v^2 |H(\omega)|^2 + \sigma_w^2,   (306)(307)
R_{vy}(k) = E[v(n) y(n+k)] = \sigma_v^2 h(k), hence h(k) = R_{vy}(k) / \sigma_v^2.   (308)

Note: Equation (308) can be used
to determine the impulse response if input and output are observable.

22.9 Generalization with Cumulants

If, for the system presented in Figure 2, v(n) is i.i.d. and non-Gaussian, i.e.,

c_k^v(\tau_1, \ldots, \tau_{k-1}) = \gamma_k^v \delta(\tau_1) \cdots \delta(\tau_{k-1}), and 0 elsewhere,

and H(z) is causal and exponentially stable, and w(n) is Gaussian (not necessarily white), then Equation (305) generalizes to

c_k^y(\tau_1, \ldots, \tau_{k-1}) = \gamma_k^v \sum_{n=0}^{\infty} h(n) h(n + \tau_1) \cdots h(n + \tau_{k-1})   (309)

for k > 2, and Equation (306) generalizes to

C_k^y(\omega_1, \ldots, \omega_{k-1}) = \gamma_k^v H(\omega_1) H(\omega_2) \cdots H(\omega_{k-1}) H^*(\omega_1 + \cdots + \omega_{k-1}).   (310)

Observe:

C_3^y(\omega_1, \omega_2) = \gamma_3^v H(\omega_1) H(\omega_2) H^*(\omega_1 + \omega_2),   (311)
C_4^y(\omega_1, \omega_2, \omega_3) = \gamma_4^v H(\omega_1) H(\omega_2) H(\omega_3) H^*(\omega_1 + \omega_2 + \omega_3).   (312)

Therefore a noise-robust estimate of the power spectrum of y(n) can be obtained using

C_3^y(\omega, 0) = \gamma_3^v H(\omega) H(0) H^*(\omega),   (313)

which results in

S_y(\omega) = |H(\omega)|^2 = \frac{C_3^y(\omega, 0)}{\gamma_3^v H(0)},   (314)

or using C_4^y(\omega, 0, 0) = \gamma_4^v H(\omega) H^2(0) H^*(\omega), which results in

S_y(\omega) = |H(\omega)|^2 = \frac{C_4^y(\omega, 0, 0)}{\gamma_4^v H^2(0)}.   (315)

22.10 Blind Deconvolution

Recall that for the system presented in Figure 2, the bispectrum of y(n) is given by

C_3^y(\omega_1, \omega_2) = \gamma_3^v H(\omega_1) H(\omega_2) H^*(\omega_1 + \omega_2),

hence

|C_3^y(\omega_1, \omega_2)| = |\gamma_3^v| |H(\omega_1)| |H(\omega_2)| |H(\omega_1 + \omega_2)|,   (316)
Arg C_3^y(\omega_1, \omega_2) = \varphi(\omega_1) + \varphi(\omega_2) - \varphi(\omega_1 + \omega_2) + Arg(\gamma_3^v),   (317)

where \varphi(\omega) = Arg H(\omega). Note that Arg(\gamma_3^v) = 0 or \pi.

Estimates of |H(\omega)| can be obtained from the power spectral domain, while \varphi(\omega) can be obtained recursively using

\varphi(\omega + \Delta\omega) = \varphi(\omega) + \varphi(\Delta\omega) - Arg C_3^y(\omega, \Delta\omega),   (318)

or, for discrete systems, by averaging this recursion over all decompositions of each frequency bin into pairs of lower bins.   (319)

22.11 Tests Based on Cumulants

1. To determine if a linear process is Gaussian, use the fact that all cumulants of order 3 and larger are zero for a Gaussian process, i.e.,

\gamma_n^x = 0 for all n > 2.

2. To distinguish between a linear non-Gaussian process and a nonlinear process (assuming both processes have nonzero n-th order cumulants), use the fact that the n-th order magnitude coherence function is a constant for a linear process, i.e.,

|P_n^x(\omega_1, \ldots, \omega_{n-1})| = constant for all \omega,

where the n-th order coherency index is defined by (304).

22.12 Computing the Polyspectrum from Data

Two basic approaches: the Direct and the Indirect Method. Both assume zero-mean data, partitioned into non-overlapping segments of M samples.

Direct Method. Assume that x is produced by a linear time-invariant system excited by white non-Gaussian noise. Then one can use

\hat{C}_n^{x,(l)}(\lambda_1, \ldots, \lambda_{n-1}) = \frac{1}{M} X_l(\lambda_1) \cdots X_l(\lambda_{n-1}) X_l^*(\lambda_1 + \cdots + \lambda_{n-1}),

where X_l(k) is the M-point DFT of the l-th data segment.

[Diagram: each segment x((l-1)M+1), \ldots, x(lM) is passed through a DFT, \hat{C}_n^{x,(l)}(k) is computed per segment, and the final estimate \hat{C}_n^x(k) is the ensemble average over the segments.]
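Test 1 of Section 22.11 can be illustrated at zero lag: the third-order cumulant \gamma_3 = E[(x - m)^3] vanishes for Gaussian data but not for a skewed process. A sketch, with illustrative distributions and sample sizes:

```python
import numpy as np

def gamma3(x):
    """Sample third-order cumulant at zero lag: gamma_3 = E[(x - m)^3]."""
    x = np.asarray(x, dtype=float)
    return np.mean((x - np.mean(x)) ** 3)

rng = np.random.default_rng(42)
g = rng.standard_normal(100_000)                 # Gaussian: gamma_3 near 0
e = rng.exponential(scale=1.0, size=100_000)     # skewed: gamma_3 = 2 for Exp(1)
g3_gauss = gamma3(g)
g3_exp = gamma3(e)
```

In practice one would test all lags of c_3^x (or the bispectrum), not just the zero-lag value, but the zero-lag statistic already separates the two cases sharply.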
of M samples non overlapping Direct Method Assume that x is produced by a linear time invariant system excited by White non Gaussian noise Then one can use I C A1k 1 1X A1 39 39 Ak 1 Where X100 is the M point DFT of the l th data segment x1x2xM X1M 1 01k e DFT computeC k xM1 xM2 x2M X2M DFT gt compute 02k ensemble average V xnM1 xnM2 xnM XnM cnk gt DFT gt compute an 22 HIGHER ORDER SPECTRA 140 Indirect Method The rst k moments for each data segment I are computed using min M M Tn1 mg 71 Tn 1 1 xlqxlqq xlq7n 1 q x1 x2 xM gtcompute the first k moments for each data cumulant ck segment and m39k xnM1 xnM2 xnM ensemme gtaverage xM1 xM2 x2M gt compute lt DFT lt window Ck01 03k1 The Window is selected to reduce leakage and may be selected as W7391 Tn 1 d71d72 dTn 1d71 Tn 1 Where 320 sini dm L lt 1 008 MEL 321 0 39739 gtL 7T7quot REFERENCES 1 41 References 1 J S Bendat and AG Piersol Engineering Applications of Correlation and Spectral Analg 12l l3l llOl llll sis John WileyzNevv York 1995 CS Burrus RA Gopinath and H Guo Introduction to Wavelets and Wavelet Trans forms Prentice Hallepper Saddle River NJ 1998 J A Cadzow Blind deconvolution via cumulant extrema IEEE Signal Processing Mag azine 13324 42 1996 L Cohen Time frequency distribution 7 a review Proceedings of the IEEE 777941 981 1989 L Cohen Time Frequency Analysis Englewood Cliffs NJ Prenctice Hall PTR 1995 MH Hayes Statistical Digital Signal Processing and Modeling New York John Wiley amp Sons 1996 F Hlawatsch and GF BoudreauX Bartles Linear and quadratic time frequency repre sentations IEEE Signal Processing Magazine 9221 67 1992 SM Kay and SL Marple Jr Spectrum analysis 7 a modern perspective Proceedings of the IEEE 69111380 1419 1981 J Kovacevic and l Daubechies Eds Special Issue on Wavelets Proceedings of the EEE 844 1996 J Makhoul Linear prediction a tutorial review Proceedings of the IEEE 634561 580 1975 J M Mendel Tutorial on higher order statistics spectra in signal processing and system theory theoretical 
results and some applications Proceedings of the IEEE 793278 305 1991 REFERENCES 1 42 12 CL Nikias and JM Mendel Signal processing with higher order spectra IEEE Signal Processing Magazine 10310 37 1993 13 MB Priestly Non Linear and Non Stationary Time Series Analysis Academic PresszNew York 1988 14 J G Proakis OM Rader F Ling and CL Nikias Advanced Digital Signal Processing Macmillan Publishing New York 1992 15 ED Rao and KS Arun Model based processing of signals a state space approach Proceedings of the IEEE 802283 309 1992 16 O Rioul and M Vetterli Wavelets and signal processing IEEE Signal Processing Mag azine 8414 38 1991 17 M R Schroeder Linear prediction entropy and signal analysis IEEE Signal Processing Magazine 133 11 1984
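As a rough illustration (not part of the original notes), the direct segment-DFT procedure and the per-segment moment computation of the indirect method can be sketched in Python/NumPy for the bispectrum case (k = 3). The function names and the choice of k = 3 are our own; the notes state the method for general k:

```python
import numpy as np

def bispectrum_direct(x, M):
    """Direct method, k = 3: split x into non-overlapping M-sample
    segments, take the M-point DFT of each, form
    X(k1) X(k2) conj(X(k1+k2)) / M per segment, and ensemble-average."""
    n = len(x) // M                        # number of full segments
    C3 = np.zeros((M, M), dtype=complex)
    k = np.arange(M)
    k12 = (k[:, None] + k[None, :]) % M    # k1 + k2, wrapped by DFT periodicity
    for l in range(n):
        X = np.fft.fft(x[l * M:(l + 1) * M])   # M-point DFT of segment l
        C3 += X[:, None] * X[None, :] * np.conj(X[k12]) / M
    return C3 / n                          # ensemble average over segments

def third_moment(seg, tau1, tau2):
    """Indirect-method building block: biased sample third moment
    m3(tau1, tau2) of one segment, summing only over q for which all
    three indices q, q+tau1, q+tau2 stay inside the segment."""
    M = len(seg)
    lo = max(0, -tau1, -tau2)
    hi = min(M, M - tau1, M - tau2)
    s = sum(seg[q] * seg[q + tau1] * seg[q + tau2] for q in range(lo, hi))
    return s / M
```

The modular wrap of k1 + k2 in `bispectrum_direct` reflects the periodicity of the M-point DFT; in the indirect method, the per-segment third moments would next be averaged over segments, windowed with w, and transformed by a 2-D DFT to obtain the bispectrum estimate.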
