CM3106 Chapter 2: DSP, Filters and the Fourier Transform
Prof David Marshall (email@example.com) and Dr Kirill Sidorov (K.Sidorov@cs.cf.ac.uk, www.facebook.com/kirill.sidorov)
School of Computer Science & Informatics, Cardiff University, UK

Digital Signal Processing and Digital Audio: Recap from CM2202
Issues to be recapped:
Basic digital signal processing and digital audio
Waveforms and the Sampling Theorem
Digital audio signal processing
Filters
For full details please refer to last year's CM2202 course material, especially the detailed underpinning maths.

Simple Waveforms
Frequency is the number of cycles per second and is measured in Hertz (Hz).
Wavelength is inversely proportional to frequency, i.e. wavelength varies as 1/frequency.

The Sine Wave and Sound
The general form of the sine wave we shall use (quite a lot) is as follows:
    y = A sin(2π n F_w / F_s)
where:
A is the amplitude of the wave,
F_w is the frequency of the wave,
F_s is the sample frequency,
n is the sample index.
The MATLAB function sin() is used; it works in radians.

Relationship Between Amplitude, Frequency and Phase
Phase of a Sine Wave: sinphasedemo.m

% Simple sine phase demo
samp_freq = 400;
dur = 800;            % 2 seconds at 400 Hz
amp = 1; phase = 0; freq = 1;
s1 = mysin(amp, freq, phase, dur, samp_freq);
axisx = (1:dur)*360/samp_freq;   % x axis in degrees
plot(axisx, s1);
set(gca, 'XTick', [0:90:axisx(end)]);
fprintf('Initial Wave: \t Amplitude = ...\n', amp, freq, phase, ...);
% change phase
phase = input('\nEnter Phase:\n\n');
s2 = mysin(amp, freq, phase, dur, samp_freq);
hold on;
plot(axisx, s2, 'r');
set(gca, 'XTick', [0:90:axisx(end)]);

Phase of a Sine Wave: sinphasedemo output
[Figure: the two sine waves plotted against phase in degrees, 0 to 630, amplitudes between -1 and 1.]

Basic DSP Concepts and Definitions: The Decibel (dB)
When referring to measurements of power or intensity, we express these in decibels (dB):
    X_dB = 10 log10(X / X_0)
where:
X is the actual value of the quantity being measured,
X_0 is a specified or implied reference level,
X_dB is the quantity expressed in units of decibels, relative to X_0.
X and X_0 must have the same dimensions: they must measure the same type of quantity in the same units.
The reference level itself is always at 0 dB, as shown by setting X = X_0 (note: log10(1) = 0).

Why Use Decibel Scales?
When there is a large range in frequency or magnitude, logarithmic units are often used.
If X is greater than X_0 then X_dB is positive (power increase); if X is less than X_0 then X_dB is negative (power decrease).
Power is proportional to magnitude squared, |X(i)|^2, so (with respect to the reference level)
    X_dB = 10 log10(|X(i)|^2) = 20 log10(|X(i)|),
which is an expression for dB we often come across.

Decibels and Chillies!
Decibels are used to express wide dynamic ranges in many applications.

Decibels and acoustics
dB is commonly used to quantify sound levels relative to some 0 dB reference.
The reference level is typically set at the threshold of human perception.
The human ear is capable of detecting a very large range of sound pressures.

Examples of dB Measurement in Sound
Threshold of Pain
The ratio of the sound pressure that causes permanent damage from short exposure to the limit that (undamaged) ears can hear is above a million.
The ratio of the maximum power to the minimum power is above one (short scale) trillion (10^12).
The log of a trillion is 12, so this ratio represents a difference of 120 dB.
120 dB is the quoted Threshold of Pain for humans.

Examples of dB Measurement in Sound (cont.)
Speech Sensitivity
The human ear is not equally sensitive to all the frequencies of sound within the entire spectrum:
Maximum human sensitivity is at noise levels between 2 and 4 kHz (speech).
These frequencies are factored more heavily into sound descriptions using a process called frequency weighting.
Filter (partition) into frequency bands concentrated in this range.
Used for: speech analysis, mathematical modelling of human hearing, audio compression (e.g. MPEG Audio). More on this later.

Examples of dB Measurement in Sound (cont.)
Digital Noise: 6 dB per bit
In digital audio sample representation (linear pulse-code modulation, PCM):
The first bit (least significant bit, or LSB) produces residual quantization noise (bearing little resemblance to the source signal).
Each subsequent bit offered by the system doubles the resolution, corresponding to 6 (= 10 log10(4)) dB.
So a 16-bit (linear) audio format offers 15 bits beyond the first, for a dynamic range (between quantization noise and clipping) of 15 x 6 = 90 dB, meaning that the maximum signal is 90 dB above the theoretical peak(s) of quantisation noise.
8-bit linear PCM similarly gives 7 x 6 = 42 dB.
The 48 dB difference between 8- and 16-bit is 48/6 = 8 factor-of-two steps in noise amplitude.
More on this later.

Signal to Noise
Signal-to-noise ratio (SNR) is a term for the power ratio between a signal (meaningful information) and the background noise:
    SNR = P_signal / P_noise = (A_signal / A_noise)^2
where P is average power and A is RMS amplitude.
Both signal and noise power (or amplitude) must be measured at the same or equivalent points in a system, and within the same system bandwidth.
Because many signals have a very wide dynamic range, SNRs are usually expressed in terms of the logarithmic decibel scale:
    SNR_dB = 10 log10(P_signal / P_noise) = 20 log10(A_signal / A_noise)

System Representation: Algorithms and Signal Flow Graphs
It is common to represent digital system signal processing routines visually, as signal flow graphs. We use a simple difference equation to describe each algorithm.
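The decibel relations above (10 log10 for power ratios, 20 log10 for amplitude ratios) and the 6 dB-per-bit rule of thumb can be checked numerically. A minimal Python sketch (Python is used here purely for illustration; the module's own code is MATLAB, and the helper names below are hypothetical):

```python
import math

def power_db(x, x0):
    """X_dB = 10*log10(X/X0) for power or intensity quantities."""
    return 10 * math.log10(x / x0)

def amplitude_db(a, a0):
    """Since power goes as magnitude squared: 20*log10(|A|/|A0|)."""
    return 20 * math.log10(abs(a) / abs(a0))

def pcm_dynamic_range(bits):
    """Slides' rule of thumb: ~6 dB per bit beyond the first."""
    return (bits - 1) * 6

# The reference level is always 0 dB, since log10(1) = 0.
ref_db = power_db(1.0, 1.0)          # 0.0
snr_db = amplitude_db(100.0, 1.0)    # 40 dB for a 100:1 RMS amplitude ratio
range_16bit = pcm_dynamic_range(16)  # 90 dB
range_8bit = pcm_dynamic_range(8)    # 42 dB
```

Doubling a power gives about +3 dB, doubling an amplitude about +6 dB, consistent with the two formulas differing by a factor of two in the logarithm's coefficient.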
Three Basic Building Blocks
We will need to consider three processes: delay, multiplication and summation.

Signal Flow Graphs: Delay
We represent a delay of one sampling interval by a block with a T label. We describe the algorithm via the equation:
    y(n) = x(n - 1)

Signal Flow Graphs: Delay Example
A Delay of 2 Samples
A delay of the input signal by two sampling intervals is two T blocks in series, described by:
    y(n) = x(n - 2)

Signal Flow Graphs: Multiplication
We represent a multiplication, or weighting, of the input signal by a circle with a x label. We describe the algorithm via the equation:
    y(n) = a.x(n)
e.g. with a = 0.5: y(n) = 0.5 x(n).

Signal Flow Graphs: Addition
We represent an addition of two input signals by a circle with a + label. We describe the algorithm via the equation:
    y(n) = a_1.x_1(n) + a_2.x_2(n)

Signal Flow Graphs: Addition Example
In the example, set a_1 = a_2 = 1:
    y(n) = x_1(n) + x_2(n)

Signal Flow Graphs: Complete Example
All Three Processes Together
We can combine all the above building blocks to build up more complex algorithms, e.g.:
    y(n) = (1/2) x(n) + (1/3) x(n - 1) + (1/4) x(n - 2)
The signal flow graph feeds x(n) through two delays, weights x(n), x(n - 1) and x(n - 2) by 1/2, 1/3 and 1/4 respectively, and sums the results.
Feeding a unit impulse through this graph gives its impulse response.

Filtering
Filtering, in a broad sense, is selecting portion(s) of data for some processing. If we isolate a portion of data (e.g. audio, image, video) we can:
Remove it: e.g. low-pass, high-pass etc. filtering.
Attenuate it: enhance or diminish its presence, e.g. equalisation, audio effects/synthesis.
Process it in other ways: digital audio, e.g. audio effects/synthesis. More later.

Filtering Examples (More Later)
In many multimedia contexts filtering involves the removal of data from a signal; this is essential in almost all aspects of lossy multimedia data representations:
JPEG image compression
MPEG video compression
MPEG audio compression
In digital audio we may wish to determine a range of frequencies to enhance or diminish, to equalise the signal, e.g.:
Tone (treble and bass) controls
Equalisation (EQ)
Synthesis: subtractive synthesis, EQ in others.

How Can We Filter a Digital Signal?
Two ways to filter:
Temporal domain: e.g. sampled (PCM) audio.
Frequency domain: analyse the frequency components in the signal.
We will look at filtering in frequency space very soon, but first we consider filtering in the temporal domain via impulse responses.

Temporal Domain Filters
We will look at:
IIR systems: infinite impulse response systems.
FIR systems: finite impulse response systems.

Infinite Impulse Response (IIR) Systems
Simple Example IIR Filter
The algorithm is represented by the difference equation:
    y(n) = x(n) - a_1 y(n - 1) - a_2 y(n - 2)
In the signal flow graph the following happens:
The output signal y(n) is fed back through a series of delays T.
Each delay is weighted (by -a_1 and -a_2).
Each fed-back weighted delay is summed and passed to the new output.
Such a feedback system is called a recursive system.

A Complete IIR System
Here we extend the input delay line up to N - 1 elements and the output delay line by M elements. We can represent the IIR system algorithm by the difference equation:
    y(n) = x(n) - SUM_{k=1}^{M} a_k y(n - k)

Finite Impulse Response (FIR) Systems
FIR systems are slightly simpler: there is no feedback loop.
Simple Example FIR Filter
A simple FIR system can be described as follows:
    y(n) = b_0 x(n) + b_1 x(n - 1) + b_2 x(n - 2)
The input is fed through delay elements, and a weighted sum of the delays gives y(n).

A Complete FIR System
To develop a more complete FIR system we need to add N - 1 feed-forward delays. We can describe this with the algorithm:
    y(n) = SUM_{k=0}^{N-1} b_k x(n - k)

Filtering with IIR/FIR
We have two filter banks defined by vectors: A = {a_k}, B = {b_k}. These can be applied in a sample-by-sample algorithm.
MATLAB provides a generic filter(B,A,X) function which filters the data in vector X with the filter described by vectors A and B to create the filtered data Y. The filter is of the standard difference equation form:
    a(1)*y(n) = b(1)*x(n) + b(2)*x(n-1) + ... + b(nb+1)*x(n-nb)
                - a(2)*y(n-1) - ... - a(na+1)*y(n-na)
If a(1) is not equal to 1, filter() normalises the filter coefficients by a(1). If a(1) equals 0, filter() returns an error.

Creating Filters
How do I create filter banks A and B?
Filter banks can be created manually (hand crafted): see the next slide and the equalisation example later in the slides.
MATLAB can provide some predefined filters: a few slides on, see lab classes.
Many standard filters are provided by MATLAB. See also help filter, the online MATLAB docs and lab classes.
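The sample-by-sample difference equation that filter(B,A,X) implements can be sketched directly. Below is an illustrative Python transcription (a naive direct-form loop, not MATLAB's Direct Form II Transposed implementation; the helper name filt is hypothetical):

```python
def filt(b, a, x):
    """Naive direct-form evaluation of the standard difference equation
       a[0]*y[n] = sum_k b[k]*x[n-k] - sum_{k>=1} a[k]*y[n-k],
    with coefficients normalised by a[0], as filter() does."""
    if a[0] == 0:
        raise ValueError("a[0] must be nonzero")
    bn = [v / a[0] for v in b]
    an = [v / a[0] for v in a]
    y = []
    for n in range(len(x)):
        acc = sum(bn[k] * x[n - k] for k in range(len(bn)) if n - k >= 0)
        acc -= sum(an[k] * y[n - k] for k in range(1, len(an)) if n - k >= 0)
        y.append(acc)
    return y

# FIR case: B = [1/2, 1/3, 1/4], A = [1] is the earlier flow-graph example;
# its impulse response is just the weights, then zero forever (finite).
fir_h = filt([0.5, 1/3, 0.25], [1.0], [1.0, 0.0, 0.0, 0.0])

# IIR case: feedback makes the impulse response decay but never end.
iir_h = filt([1.0], [1.0, -0.5], [1.0, 0.0, 0.0])
```

The two calls make the FIR/IIR distinction concrete: with A = [1] there is no feedback and the impulse response is finite; a nonzero a(2) recirculates the output and the response (here 1, 0.5, 0.25, ...) continues indefinitely.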
Filtering with IIR/FIR: Simple Example
The MATLAB file IIRdemo.m sets up the filter banks as follows:

fg = 4000;
fa = 48000;
k = tan(pi*fg/fa);
b(1) = 1/(1+sqrt(2)*k+k^2);
b(2) = -2/(1+sqrt(2)*k+k^2);
b(3) = 1/(1+sqrt(2)*k+k^2);
a(1) = 1;
a(2) = 2*(k^2-1)/(1+sqrt(2)*k+k^2);
a(3) = (1-sqrt(2)*k+k^2)/(1+sqrt(2)*k+k^2);

Apply this Filter
How to apply the (previous) difference equation. By hand (IIRdemo.m cont.):

for n=1:N
    y(n) = b(1)*x(n) + b(2)*xh1 + b(3)*xh2 ...
           - a(2)*yh1 - a(3)*yh2;
    xh2 = xh1; xh1 = x(n);
    yh2 = yh1; yh1 = y(n);
end;

Or use the MATLAB filter() function (see the next but one slide). The latter is far more preferable: it is general and works for any length of filter.

Filtering with IIR: Simple Example Output
[Figure: input x(n) and filtered output y(n), each plotted against sample index n = 0..18, amplitudes between -1 and 1.]

MATLAB Filters
The MATLAB filter() function implements an IIR/FIR hybrid filter. Type help filter:

FILTER One-dimensional digital filter.
Y = FILTER(B,A,X) filters the data in vector X with the
filter described by vectors A and B to create the filtered
data Y. The filter is a "Direct Form II Transposed"
implementation of the standard difference equation:

a(1)*y(n) = b(1)*x(n) + b(2)*x(n-1) + ... + b(nb+1)*x(n-nb)
          - a(2)*y(n-1) - ... - a(na+1)*y(n-na)

If a(1) is not equal to 1, FILTER normalizes the filter
coefficients by a(1).

FILTER always operates along the first non-singleton dimension,
namely dimension 1 for column vectors and non-trivial matrices,
and dimension 2 for row vectors.

Using MATLAB to Make Filters for filter() (1)
MATLAB provides a few built-in functions to create ready-made filter parameters A and B.
Some common MATLAB filter bank creation functions: butter, buttord, besself, cheby1, cheby2, ellip.
See help or doc for the appropriate function.
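As a cross-check before moving on, the IIRdemo.m coefficient set-up can be reproduced in a few lines. This is an illustrative Python transliteration of the MATLAB code above (same constant k and normalising denominator), not an independent filter design:

```python
import math

# Coefficients as in IIRdemo.m: fg = 4 kHz, sample rate fa = 48 kHz.
fg, fa = 4000.0, 48000.0
k = math.tan(math.pi * fg / fa)
norm = 1 + math.sqrt(2) * k + k * k
b = [1 / norm, -2 / norm, 1 / norm]
a = [1.0,
     2 * (k * k - 1) / norm,
     (1 - math.sqrt(2) * k + k * k) / norm]

# a(1) is already 1, so filter() would not need to renormalise,
# and the b bank is symmetric (b(1) = b(3)), as in the slides.
```

Note that the b coefficients sum to zero, so this particular filter bank has zero gain at DC: a useful sanity check that the transliteration matches the MATLAB listing.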
Fourier Transform (Recap from CM2202)

The Frequency Domain
The frequency domain is obtained through a transformation, via the Fourier Transform (FT), from the temporal (time) or spatial domain. In the frequency domain we do not think in terms of signal or pixel intensities, but rather in terms of the underlying sinusoidal waveforms of varying frequency, amplitude and phase.

Applications of the Fourier Transform
Numerous applications, including:
An essential tool for engineers, physicists, mathematicians and computer scientists.
A fundamental tool for digital signal processing and image processing.
Many types of frequency analysis: filtering, noise removal, signal/image analysis, simple implementation of convolution, audio and image effects processing, signal/image restoration (e.g. deblurring), signal/image compression (MPEG audio and video, and JPEG, use related techniques), and many more.

Introducing Frequency Space
1D Audio Example
Let's consider a 1D (e.g. audio) example to see what the different domains mean. Consider a complicated sound, such as a chord played on a piano or a guitar. We can describe this sound in two related ways:
Temporal domain: sample the amplitude of the sound many times a second, which gives an approximation to the sound as a function of time.
Frequency domain: analyse the sound in terms of the pitches of the notes, or frequencies, which make the sound up, recording the amplitude of each frequency.
Fundamental frequencies:
Db: 554.40 Hz
F: 698.48 Hz
Ab: 830.64 Hz
C: 1046.56 Hz
plus harmonics/partial frequencies ....

Back to Basics
An 8 Hz Sine Wave
A signal that consists of a sinusoidal wave at 8 Hz: 8 Hz means that the wave completes 8 cycles in 1 second, so the frequency of that wave is 8 Hz. From the frequency domain we can see that the composition of our signal is one peak, occurring at a frequency of 8 Hz (there is only one sine wave here), with a magnitude/fraction of 1.0, i.e. it is the whole signal.

2D Image Example
What do Frequencies in an Image Mean?
Images are no more complex, really:
Brightness along a line can be recorded as a set of values measured at equally spaced distances apart, or equivalently at a set of spatial frequency values.
Each of these frequency values is a frequency component.
An image is a 2D array of pixel measurements, so we form a 2D grid of spatial frequencies. A given frequency component now specifies what contribution is made by data which is changing with specified x and y direction spatial frequencies.

What do Frequencies in an Image Mean? (Cont.)
If there are large values at high frequency components, then the data is changing rapidly on a short distance scale, e.g. a page of text. However, noise also contributes (very) high frequencies.
If there are large low frequency components, then the large-scale features of the picture are more important, e.g. a single fairly simple object which occupies most of the image.

Visualising Frequency Domain Transforms
Sinusoidal Decomposition
Any digital signal (function) can be decomposed into purely sinusoidal components: sine waves of different size/shape, varying in amplitude, frequency and phase. When added back together they reconstitute the original signal. The Fourier transform is the tool that performs such an operation.

Summing Sine Waves. Example: a Square(ish) Wave (e.g. Additive Synthesis)
Digital signals are composite signals made up of many sinusoidal frequencies. A 200 Hz digital signal (square(ish) wave) may be composed of 200, 600, 1000, etc. sinusoidal signals which sum to give the waveform.

Summary So Far
So What Does All This Mean?
Transforming a signal into the frequency domain allows us to see what sine waves make up our underlying signal, e.g. one part sinusoidal wave at 50 Hz and a second part sinusoidal wave at 200 Hz, etc. More complex signals will give more complex decompositions, but the idea is exactly the same.

How is this Useful Then?
Basic Idea of Filtering in Frequency Space
Filtering now involves attenuating or removing certain frequencies, which is easily performed:
Low-pass filter: ignore high frequency noise components (make them zero or a very low value); only store the lower frequency components.
High-pass filter: the opposite of the above.
Band-pass filter: only allow frequencies in a certain range.

Visualising the Frequency Domain
Think Graphic Equaliser
An easy way to visualise what is happening is to think of a graphic equaliser on a stereo system (or in some software audio players, e.g. iTunes).

So Are We Ready for the Fourier Transform?
We have all the tools. This lecture, so far, has (hopefully) set the context for frequency decomposition. Past CM2202 lectures covered:
Odd/even functions: sin(-x) = -sin(x), cos(-x) = cos(x).
Complex numbers: phasor form r e^{i t} = r (cos t + i sin t).
Calculus integration: INT e^{kx} dx = e^{kx} / k.
Digital signal processing: basic waveform theory. Sine wave y = A sin(2π n F_w / F_s), where A = amplitude, F_w = wave frequency, F_s = sample frequency, n is the sample index.
Relationship between amplitude, frequency and phase: cosine is a sine wave 90 degrees out of phase.
Impulse responses.
DSP + image processing: filters and other processing, convolution.

Fourier Theory
Introducing the Fourier Transform
The tool which converts a spatial or temporal (real space) description of audio/image data, for example, into one in terms of its frequency components is called the Fourier transform. The new version is usually referred to as the Fourier space description of the data.
We then essentially process the data; e.g. for filtering this basically means attenuating or setting certain frequencies to zero. We then need to convert the data back (or invert) to real audio/imagery to use in our applications. The corresponding inverse transformation, which turns a Fourier space description back into a real space one, is called the inverse Fourier transform.

1D Fourier Transform
1D Case (e.g. Audio Signal)
Consider a continuous function f(x) of a single variable x representing distance (or time). The Fourier transform of that function is denoted F(u), where u represents spatial (or temporal) frequency, and is defined by:
    F(u) = INT_{-inf}^{inf} f(x) e^{-2πixu} dx.
Note: in general F(u) will be a complex quantity even though the original data is purely real. The meaning of this is that not only is the magnitude of each frequency present important, but so is its phase relationship. Recall phasors from the complex number lectures (CM2202): e^{-2πixu} above is a phasor.

Inverse Fourier Transform
Inverse 1D Fourier Transform
The inverse Fourier transform for regenerating f(x) from F(u) is given by
    f(x) = INT_{-inf}^{inf} F(u) e^{2πixu} du,
which is rather similar to the (forward) Fourier transform, except that the exponential term has the opposite sign: here it is positive, not negative.

Fourier Transform Example
Fourier Transform of a Top Hat Function
Let's see how we compute a Fourier transform. Consider a particular function f(x) defined as
    f(x) = 1 if |x| <= 1, 0 otherwise.

The Sinc Function (1)
We derive the Sinc function. The Fourier transform of the top hat is:
    F(u) = INT_{-inf}^{inf} f(x) e^{-2πixu} dx
         = INT_{-1}^{1} 1 . e^{-2πixu} dx
         = (1 / (2πiu)) (e^{2πiu} - e^{-2πiu}).
Using sin t = (e^{it} - e^{-it}) / (2i), we get:
    F(u) = sin(2πu) / (πu).
In this case F(u) is purely real, which is a consequence of the original data being symmetric in x and -x: f(x) is an even function. This function is often referred to as the Sinc function.

The Sinc Function Graph
[Figure: the Fourier transform of a top hat function, the Sinc function sin(2πu)/(πu), plotted for u from -6 to 6.]

The 2D Fourier Transform
2D Case (e.g. Image Data)
If f(x, y) is a function, for example intensities in an image, its Fourier transform is given by
    F(u, v) = INT INT f(x, y) e^{-2πi(xu+yv)} dx dy,
and the inverse transform, as might be expected, is
    f(x, y) = INT INT F(u, v) e^{2πi(xu+yv)} du dv.

The Discrete Fourier Transform
But all our audio and image data are digitised! Thus we need a discrete formulation of the Fourier transform, which:
assumes regularly spaced data values, and
returns the value of the Fourier transform for a set of values in frequency space which are equally spaced.
This is done quite naturally by replacing the integral by a summation, to give the discrete Fourier transform, or DFT for short.

1D Discrete Fourier Transform
1D Case: in 1D it is convenient to assume that x goes up in steps of 1, and that there are N samples, at values of x from 0 to N - 1. So the DFT takes the form
    F(u) = (1/N) SUM_{x=0}^{N-1} f(x) e^{-2πixu/N},
while the inverse DFT is
    f(x) = SUM_{u=0}^{N-1} F(u) e^{2πixu/N}.
NOTE: minor changes from the continuous case are a factor of 1/N in the exponential terms, and also the factor 1/N in front of the forward transform, which does not appear in the inverse transform.

2D Discrete Fourier Transform
2D Case
The 2D DFT is similar. For an N x M grid in x and y we have
    F(u, v) = (1/(NM)) SUM_{x=0}^{N-1} SUM_{y=0}^{M-1} f(x, y) e^{-2πi(xu/N + yv/M)},
and
    f(x, y) = SUM_{u=0}^{N-1} SUM_{v=0}^{M-1} F(u, v) e^{2πi(xu/N + yv/M)}.

Balancing the 2D DFT
Most Images are Square
Often N = M, and it is then more convenient to redefine F(u, v) by multiplying it by a factor of N, so that the forward and inverse transforms are more symmetric:
    F(u, v) = (1/N) SUM_{x=0}^{N-1} SUM_{y=0}^{N-1} f(x, y) e^{-2πi(xu+yv)/N},
and
    f(x, y) = (1/N) SUM_{u=0}^{N-1} SUM_{v=0}^{N-1} F(u, v) e^{2πi(xu+yv)/N}.

Fourier Transforms in MATLAB: fft() and fft2()
MATLAB provides functions for 1D and 2D Discrete Fourier Transforms (DFT):
fft(X) is the 1D discrete Fourier transform (DFT) of vector X. For matrices, the FFT operation is applied to each column; this is NOT a 2D DFT transform.
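To make the 1D DFT pair above concrete, here is a direct O(N^2) transcription of the two summations as a Python sketch (for illustration only; fft() uses a much faster algorithm, and MATLAB's convention also differs in where the 1/N factor goes):

```python
import cmath

def dft(f):
    """Forward DFT with the slides' 1/N factor:
       F(u) = (1/N) * sum_x f(x) * exp(-2*pi*i*x*u/N)."""
    N = len(f)
    return [sum(f[x] * cmath.exp(-2j * cmath.pi * x * u / N)
                for x in range(N)) / N
            for u in range(N)]

def idft(F):
    """Inverse DFT (no 1/N factor):
       f(x) = sum_u F(u) * exp(+2*pi*i*x*u/N)."""
    N = len(F)
    return [sum(F[u] * cmath.exp(2j * cmath.pi * x * u / N)
                for u in range(N))
            for x in range(N)]

signal = [1.0, 2.0, 3.0, 4.0]
F = dft(signal)       # with this normalisation, F[0] is the signal's mean
roundtrip = idft(F)   # recovers the original samples (up to rounding)
```

The round trip recovering the input is exactly the "minor changes" note above in action: the 1/N placed on the forward transform is what makes the inverse need no factor at all.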
fft2(X) returns the 2D Fourier transform of matrix X. If X is a vector, the result will have the same orientation.
fftn(X) returns the N-D discrete Fourier transform of the N-D array X.
Inverse DFT: ifft(), ifft2() and ifftn() perform the inverse DFT.
See the appropriate MATLAB help/doc pages for full details. Plenty of examples follow.

Visualising the Fourier Transform
Having computed a DFT, it is useful to visualise its result, e.g. for a cosine signal x(n): the magnitude spectrum |X(k)| plotted against sample point k, and the magnitude spectrum |X(f)| plotted against frequency f in Hz. Standard tools; easily plotted in MATLAB.

The Magnitude Spectrum of the Fourier Transform
Recall that the Fourier transform of our real audio/image data is always complex. Phasors: this is how we encode the phase of the underlying signal's Fourier components. How can we visualise a complex data array? Back to complex numbers:
Magnitude spectrum
Compute the absolute value of the complex data:
    |F(k)| = sqrt(F_R(k)^2 + F_I(k)^2)   for k = 0, 1, ..., N - 1,
where F_R(k) is the real part and F_I(k) is the imaginary part of the N-sampled Fourier transform F(k).
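The magnitude computation (and, anticipating the next slide, the phase computation) is a one-liner per component. A small Python sketch on a list of complex DFT values (illustrative helper name, not a MATLAB API; atan2 is used instead of a plain arctan so that all four quadrants are handled):

```python
import math

def spectra(F):
    """Magnitude |F(k)| = sqrt(Re^2 + Im^2) and phase arctan(Im/Re)
    for each complex DFT value F(k)."""
    mags = [math.hypot(z.real, z.imag) for z in F]
    phases = [math.atan2(z.imag, z.real) for z in F]
    return mags, phases

mags, phases = spectra([3 + 4j, 1 + 0j, 0 + 2j])
# mags   -> [5.0, 1.0, 2.0]
# phases -> [atan2(4, 3), 0.0, pi/2]
```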
Recall MATLAB: Sp = abs(fft(X,N))/N; (normalised form).

The Phase Spectrum of the Fourier Transform
The Phase Spectrum
The Fourier transform also represents phase; the phase spectrum is given by:
    phi(k) = arctan(F_I(k) / F_R(k))   for k = 0, 1, ..., N - 1.
Recall MATLAB: phi = angle(fft(X,N)).

Relating a Sample Point to a Frequency Point
When plotting graphs of Fourier spectra and doing other DFT processing, we may wish to plot the x-axis in Hz (frequency) rather than sample point number k = 0, 1, ..., N - 1. There is a simple relation between the two:
The sample points go in steps k = 0, 1, ..., N - 1.
For a given sample point k, the frequency relating to this is given by:
    f_k = k f_s / N,
where f_s is the sampling frequency and N the number of samples.
Thus we have equidistant frequency steps of f_s / N, ranging from 0 Hz to ((N-1)/N) f_s Hz.

Time-Frequency Representation: Spectrogram
It is often useful to look at the frequency distribution over a short time:
Split the signal into N segments.
Do a windowed Fourier transform: the Short-Time Fourier Transform (STFT).
A window is needed to reduce the leakage effect of doing a shorter-sample STFT. Apply a Blackman, Hamming or Hanning window.
A MATLAB function does the job: spectrogram (see help spectrogram). See also MATLAB's specgramdemo.

MATLAB Spectrogram Example
spectrogrameg.m:

load('handel')
[N M] = size(y);
figure(1)
spectrogram(y, 512, 20, 1024, Fs);

(Note: spectrogram takes the time-domain signal y directly; it computes the STFT itself.)
[Figure: the resulting time-frequency plot of the Handel clip.]

Filtering in the Frequency Domain
Low-Pass Filter
Noise: e.g. audio hiss, or 'salt and pepper' noise in images. The idea with noise filtering is to reduce various spurious effects of a local nature in the image, caused perhaps by noise in the acquisition system, or arising as a result of transmission of the data, for example from a space probe utilising a low-power transmitter.
[Figures: an image with noise added, and the low-pass filtered image at a high cut-off frequency.]
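The sample-point-to-frequency relation f_k = k f_s / N is worth having as a helper when labelling spectrum plots. A quick Python sketch (hypothetical helper name):

```python
def bin_frequencies(fs, N):
    """Frequencies f_k = k*fs/N for DFT sample points k = 0..N-1:
    equidistant steps of fs/N, from 0 Hz up to (N-1)/N * fs Hz."""
    return [k * fs / N for k in range(N)]

freqs = bin_frequencies(8000, 16)   # a 16-point DFT at fs = 8 kHz
# steps of 8000/16 = 500 Hz: 0, 500, 1000, ..., 7500
```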
Frequency Space Filtering Methods Low Pass Filtering | Remove Noise Noise = High Frequencies: In audio data many spurious peaks in over a short timescale. In an image means there are many rapid transitions (over a short distance) in intensity from high to low and back again or vice versa, as faulty pixels are encountered. Not all high frequency data noise though! Therefore noise will contribute heavily to the high frequency components of the signal when it is analysed in Fourier space. Thus if we reduce the high frequency components | Low-Pass Filter should (if tuned properly) reduce the amount of noise in the data. (Low-pass) Filtering in the Fourier Space Low Pass Filtering with the Fourier Transform We ▯lter in Fourier space by computing G(u;v) = H(u;v)F(u;v) where: F(u;v) is the Fourier transform of the original image, H(u;v) is a ▯lter function, designed to reduce high frequencies, and G(u;v) is the Fourier transform of the improved image. Inverse Fourier transform G(u;v) to get g(x;y) our improved image Ideal Low-Pass Filter We need to design or compute H(u;v) If we know h(x;y) or have a discrete sample of h(x;y) can compute its Fourier Transform Can simply design simple ▯lters in Frequency Space The simplest sort of ▯lter to use is an ideal low-pass ▯lter, which in one dimension appears as : 2.0 H(u) u0 u Ideal Low-Pass Filter (2) How the Low Pass Filter Works with Frequencies 2.0 H(u) u0 u This is a h(x;y) function which is 1 for u between 0 and u , 0 the cut-o▯ frequency, and zero elsewhere. So all frequency space information above u is 0 discarded, and all information below u is kept. 0 A very simple computational process. Ideal 2D Low-Pass Filter Ideal 2D Low-Pass Filter The two dimensional version of this is the Low-Pass Filter: ▯ p 1 if u + v ▯ w H(u;v) = 0 0 otherwise, where w 0s now the cut-o▯ frequency for both dimensions. Thus, all frequencies inside a radius w 0re kept, and all others discarded. w0 Not So Ideal Low-Pass Filter? 
(1) In practice, the ideal Low-Pass Filter is no so ideal The problem with this ▯lter is that as well as noise there may be useful high frequency content: In audio: plenty of other high frequency content: high pitches, rustles, scrapes, wind, mechanical noises, cymbal crashes etc. In images: edges (places of rapid transition from light to dark) also signi▯cantly contribute to the high frequency components. Choosing the most appropriate cut-o▯ frequency is not so easy Similar problem to choosing a threshold in image thresholding. Not So Ideal Low-Pass Filter? (2) What if you set the wrong value for the cut-o▯ frequency? If you choose the wrong cut-o▯ frequency an ideal low-pass ▯lter will tend to blur the data: High audio frequencies become mu▯ed Edges in images become blurred. The lower the cut-o▯ frequency is made, the more pronounced this e▯ect becomes in useful data content Ideal Low Pass Filter Example 1 (a) Input Image (b) Image Spectra (c) Ideal Low Pass Filter (d) Filtered Image Ideal Low-Pass Filter Example 1 MATLAB Code low pass.m: % Compute Ideal Low Pass Filter u0 = 20; % set cut off frequency % Create a white box on a u=0:(M-1); v=0:(N-1); % black background image M = 256; N = 256; idx=find(u>M/2); image = zeros(M,N) u(idx)=u(idx)-M; box = ones(64,64); idy=find(v>N/2); v(idy)=v(idy)-N; %box at centre image(97:160,97:160) = box; [V,U]=meshgrid(v,u); D=sqrt(U.^2+V.^2); % Show Image H=double(D<=u0); figure(1); % display imshow(image); figure(3); imshow(fftshift(H)); % compute fft and display its spectra % Apply filter and do inverse FFT F=fft2(double(image)); G=H.*F; figure(2); g=real(ifft2(double(G))); imshow(abs(fftshift(F))); % Show Result figure(4); imshow(g); CM3106 Chapter 2 Filtering in the Frequency Domain 78 Ideal Low Pass Filter Example 2 (a) Input Image (b) Image Spectra (c) Ideal Low-Pass Filter (d) Filtered Image Ideal Low-Pass Filter Example 2 MATLAB Code lowpass2.m: % Compute Ideal Low Pass Filter u0 = 50; % set cut off frequency u=0:(M-1); 
% read in MATLAB demo text image v=0:(N-1); idx=find(u>M/2); image = imread(’text.png’); u(idx)=u(idx)-M; [M N] = size(image) idy=find(v>N/2); v(idy)=v(idy)-N; [V,U]=meshgrid(v,u); % Show Image D=sqrt(U.^2+V.^2); H=double(D<=u0); figure(1); imshow(image); % display figure(3); % compute fft and display its spectra imshow(fftshift(H)); F=fft2(double(image)); % Apply filter and do inverse FFT figure(2); G=H.*F; imshow(abs(fftshift(F))/256); g=real(ifft2(double(G))); % Show Result figure(4); imshow(g); CM3106 Chapter 2 Filtering in the Frequency Domain 80 Low-Pass Butterworth Filter (1) We introduced the Butterworth Filter with IIR/FIR Filters (Temporal Domain Filtering). Let’s now study it in more detail. Much easier to visualise in Frequency space 2D Low-Pass Butterworth Filter Another popular (and general) ▯lter is the Butterworth low pass ▯lter. In the 2D case, H(u;v) takes the form 1 H(u;v) = 1 + [(u + v )=w 2]n; 0 where n is called the order of the ▯lter. Low-Pass Butterworth Filter (2) Visualising the 1D Low-Pass Butterworth Filter This keeps some of the high frequency information, as illustrated by the second order one dimensional Butterworth ▯lter: .0 H(u) .0 u0 u Consequently reduces the blurring. Blurring the ▯lter | Butterworth is essentially a smoothed top hat functions | reduces blurring by the ▯lter. Low-Pass Butterworth Filter (3) Visualising the 2D Low-Pass Butterworth Filter The 2D second order Butterworth ▯lter looks like this: w0 Note this is blurred circle | blurring of the ideal 2D Low-Pass Filter. Butterworth Low Pass Filter Example 1 (1) (a) Input Image (b) Image Spectra (c) Butterworth Low-Pass (d) Filtered Image Filter Butterworth Low-Pass Filter Example 1 (2) Comparison of Ideal and Butterworth Low Pass Filter: Ideal Low-Pass Butterworth Low-Pass Butterworth Low-Pass Filter Example 1 (3) butterworth.m: % Load Image and Compute FFT as % in Ideal Low Pass Filter Example 1 ....... 
Butterworth Low-Pass Filter Example 1 (1)

[Figure: (a) Input Image (b) Image Spectra (c) Butterworth Low-Pass Filter (d) Filtered Image]

Butterworth Low-Pass Filter Example 1 (2)

Comparison of ideal and Butterworth low-pass filters:

[Figure: Ideal Low-Pass result vs Butterworth Low-Pass result]

Butterworth Low-Pass Filter Example 1 (3)

butterworth.m:

% Load Image and Compute FFT as
% in Ideal Low Pass Filter Example 1
.......

% Compute Butterworth Low Pass Filter
u0 = 20; % set cut off frequency
u = 0:(M-1);
v = 0:(N-1);
idx = find(u > M/2);
u(idx) = u(idx) - M;
idy = find(v > N/2);
v(idy) = v(idy) - N;
[V,U] = meshgrid(v,u);

for i = 1:M
    for j = 1:N
        % Apply a 2nd order Butterworth
        UVw = double((U(i,j)*U(i,j) + V(i,j)*V(i,j))/(u0*u0));
        H(i,j) = 1/(1 + UVw*UVw);
    end
end

% Display Filter and Filtered Image as before

Low-Pass Butterworth Filter Example 2 (1)

[Figure: (a) Input Image (b) Image Spectra (c) Butterworth Low-Pass Filter (d) Filtered Image]

Low-Pass Butterworth Filter Example 2 (2)

Comparison of ideal and Butterworth low-pass filters:

[Figure: Ideal Low-Pass result vs Butterworth Low-Pass result]

Low-Pass Butterworth Filter Example 2 MATLAB (3)

butterworth2.m:

% Load Image and Compute FFT as in Ideal Low Pass Filter
% Example 2
.......

% Compute Butterworth Low Pass Filter
u0 = 50; % set cut off frequency
u = 0:(M-1);
v = 0:(N-1);
idx = find(u > M/2);
u(idx) = u(idx) - M;
idy = find(v > N/2);
v(idy) = v(idy) - N;
[V,U] = meshgrid(v,u);

for i = 1:M
    for j = 1:N
        % Apply a 2nd order Butterworth
        UVw = double((U(i,j)*U(i,j) + V(i,j)*V(i,j))/(u0*u0));
        H(i,j) = 1/(1 + UVw*UVw);
    end
end

% Display Filter and Filtered Image as before

Low-Pass Filtering Noisy Images

How to create noise, and the results of low-pass filtering: use the MATLAB function imnoise() to add noise to an image (lowpass.m, lowpass2.m):

[Figure: (a) Input Noisy Image (b) Low-Pass Filtered Noisy Image (c) Input Noisy Image (d) Low-Pass Filtered Noisy Image (Higher Cut-Off)]

Other Filters

High-pass filters: the opposite of low-pass; select high frequencies, attenuate those below u0.
Band-pass: allow frequencies in a range u0...u1, attenuate those outside this range.
Band-reject: the opposite of band-pass; attenuate frequencies within u0...u1, select those outside this range.
Notch: attenuate frequencies in a narrow bandwidth around the cut-off frequency u0.
Resonator: amplify frequencies in a narrow bandwidth around the cut-off frequency u0.
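The masks in this list differ only in which radii of the frequency plane they keep. As a sketch (Python/NumPy, illustrative only; the helper names are mine, not course code), a band-pass mask keeps distances u0 <= D <= u1, and band-reject is its complement:

```python
import numpy as np

def radial_grid(M, N):
    # Distance from the zero-frequency term, with wrapped axes
    u = np.fft.fftfreq(M) * M
    v = np.fft.fftfreq(N) * N
    V, U = np.meshgrid(v, u)
    return np.sqrt(U**2 + V**2)

def band_pass(M, N, u0, u1):
    # Keep frequencies with u0 <= D <= u1, attenuate the rest
    D = radial_grid(M, N)
    return ((D >= u0) & (D <= u1)).astype(float)

def band_reject(M, N, u0, u1):
    # Complement of band-pass: 1 - H_bp
    return 1.0 - band_pass(M, N, u0, u1)
```

A notch is simply a band-reject with a very narrow u0...u1 range, and the mask is applied exactly as before: G = H * F, then inverse FFT.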
Other filters exist that are essentially combinations or variations of the above.

High-Pass Filtering

Easy to implement from the above low-pass filter: a high-pass filter is usually defined as 1 - low pass, i.e. 1 - H.

[Figure: (a) Input Image (b) High-Pass Filtered Image (c) Input Noisy Image (d) High-Pass Filtered Noisy Image]

Convolution

Many Useful Applications of Convolution

Several important audio and optical effects can be described in terms of convolutions:

Filtering: in fact the above Fourier filtering is applying convolutions of a low-pass filter, where the equations are Fourier transforms of real-space equivalents.
Deblurring: high-pass filtering.
Reverb: impulse response convolution (more soon).

Note we have already seen a discrete real-domain example of convolution with edge detection.

Formal Definition of 1D Convolution

Let us examine the concepts using 1D continuous functions. The convolution of two functions f(x) and g(x), written f(x) ∗ g(x), is defined by the integral

    f(x) ∗ g(x) = ∫ from −∞ to +∞ of f(τ) g(x − τ) dτ.

∗ is the mathematical notation for convolution. No Fourier transform in sight here, but wait!

1D Convolution Real Domain Example (1)

Convolution of Two Top-Hat Functions

For example, let us take two top-hat functions. Let f(τ) be the top-hat function

    f(τ) = 1 if |τ| ≤ 1, 0 otherwise,

and let g(τ) be as shown on the next slide, defined by

    g(τ) = 1/2 if 0 ≤ τ ≤ 1, 0 otherwise.

1D Convolution Example (2)

Our Two Top-Hat Function Plots

[Figure: plots of f(τ) and g(τ) on the axis −5.0 to 5.0]
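The convolution integral above can be approximated numerically. The following NumPy sketch (illustrative only, not course code) samples the two top hats on a fine grid and uses np.convolve, scaled by the grid spacing, to approximate f ∗ g:

```python
import numpy as np

dx = 0.001                      # grid spacing
x = np.arange(-5.0, 5.0, dx)    # sample axis, as in the plots

f = (np.abs(x) <= 1).astype(float)    # f: height 1 on [-1, 1]
g = ((x >= 0) & (x <= 1)) * 0.5       # g: height 1/2 on [0, 1]

# Discrete approximation of the convolution integral:
# (f * g)(x) ~ sum over k of f(k) g(x - k) dx
conv = np.convolve(f, g, mode="same") * dx
```

Plotting conv against x reproduces the overlap-area argument worked through on the following slides, up to a discretisation error of order dx.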
1D Convolution Example (3)

The Convolution Process: Graphical Interpretation

g(−τ) is the reflection of g(τ) in the vertical y-axis; g(x − τ) is this reflection shifted to the right by a distance x. Thus, for a given value of x, f(τ)g(x − τ) integrated over all τ is the area of overlap of these two top hats, as f(τ) has unit height. An example is shown opposite for x in the range −1 ≤ x ≤ 0.

1D Convolution Example (4)

So the solution is: if we now consider x moving from −∞ to +∞, we can see that

For x ≤ −1 or x ≥ 2, there is no overlap.
As x goes from −1 to 0, the area of overlap steadily increases from 0 to 1/2.
As x increases from 0 to 1, the overlap area remains at 1/2.
Finally, as x increases from 1 to 2, the overlap area steadily decreases again from 1/2 to 0.

Thus the convolution of f(x) and g(x), f(x) ∗ g(x), in this case has the form shown on the next slide.

1D Convolution Example (5)

[Figure: plot of the result f(x) ∗ g(x) on the axis −5.0 to 5.0]

1D Convolution Example (6)

Mathematically, the convolution is expressed by:

    f(x) ∗ g(x) = (x + 1)/2   if −1 ≤ x ≤ 0
                  1/2         if 0 ≤ x ≤ 1
                  1 − x/2     if 1 ≤ x ≤ 2
                  0           otherwise.

Fourier Transforms and Convolution

Convolution Theorem: Convolution in Frequency Space is Easy

One major reason that Fourier transforms are so important in signal/image processing is the convolution theorem, which states that:

If f(x) and g(x) are two functions with Fourier transforms F(u) and G(u), then the Fourier transform of the convolution f(x) ∗ g(x) is simply the product of the Fourier transforms of the two functions, F(u)G(u).

Fourier Transforms and Convolution (Cont.)

Recall our low-pass filter example (MATLAB code):

% Apply filter
G = H.*F;

where F was the Fourier transform of the image, and H the filter.

Computing Convolutions with the Fourier Transform

Example applications:
To apply some reverb to an audio signal.
To compensate for a less than ideal image capture system.
More soon.
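The convolution theorem can be checked numerically. This short NumPy sketch (illustrative only) compares direct convolution with the multiply-in-frequency-space route; note that the discrete Fourier transform gives circular convolution, so both signals are zero-padded to the full output length to make it match the linear result:

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0, 0.0])
g = np.array([0.5, 0.5, 0.0])

# Direct (linear) convolution
direct = np.convolve(f, g)          # length len(f) + len(g) - 1

# Via the convolution theorem: multiply the DFTs.
# Zero-padding both signals to the full output length makes the
# circular convolution computed by the DFT equal the linear one.
L = len(f) + len(g) - 1
F = np.fft.fft(f, n=L)
G = np.fft.fft(g, n=L)
via_fft = np.real(np.fft.ifft(F * G))
```

For long signals this frequency-domain route is much faster than direct summation, which is exactly why reverb and image filtering are implemented this way.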
Example Applications (Cont.)

Deconvolution: Compensating for Undesirable Effects

To do this fast convolution we simply:
Take the Fourier transform of the audio / imperfect image.
Take the Fourier transform of the function describing the effect of the system.
To apply the effect to the data: multiply by the effect's transform.
To remove/compensate for the effect: divide by the effect's transform to obtain the Fourier transform of the ideal image.
Inverse Fourier transform to recover the new, improved audio or image.

This process is sometimes referred to as deconvolution.

Image Deblurring: Deconvolution Example

Inverting our Previous Low-Pass Filter

Recall our low-pass (Butterworth) filter example of a few slides ago (butterworth.m). deconv.m and deconv2.m reuse this code and add a deconvolution stage. Our computed Butterworth low-pass filter H is our blurring function, so to invert it we simply divide the blurred image G by H (as opposed to multiplying), effectively a high-pass filter:

Ghigh = G./H;
ghigh = real(ifft2(double(Ghigh)));
figure(5);
imshow(ghigh);

In this ideal example we clearly get F back, and to recover the image we simply apply the inverse Fourier transform. In the real world we don't really know the exact blurring function H, so things are not so easy.

[Figure: (a) Input Image (b) Blurred Low-Pass Filtered Image (c) Deconvolved Image]

deconv2.m results:

[Figure: (a) Input Image (b) Blurred Low-Pass Filtered Image (c) Deconvolved Image]

Deconvolution is not always that simple!

[Figure: (a) Input Image (b) Deconvolved Image (c) Input Noisy Image (d) Deconvolved Noisy Image]
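The ideal (noise-free) case can be sketched in a few lines of NumPy. This is illustrative only, using a 1D signal as a stand-in for the image; the Butterworth-shaped transfer function is strictly positive, which is what makes the division safe here:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.random(256)            # stand-in for the clean image

# A smooth, strictly positive low-pass "blurring" transfer function
# (second-order Butterworth shape, as in the slides)
freq = np.fft.fftfreq(256) * 256
w0 = 20.0
H = 1.0 / (1.0 + (freq**2 / w0**2) ** 2)

# Blur: multiply in the frequency domain
F = np.fft.fft(signal)
G = H * F
blurred = np.real(np.fft.ifft(G))

# Deconvolve: divide by the same H and invert the transform
recovered = np.real(np.fft.ifft(G / H))
```

If measurement noise is added to the blurred signal before deconvolving, the division hugely amplifies the noise at frequencies where H is small, which is why the noisy deconvolution results in the figures fail so badly.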