Special Topics ECE 8873
These class notes were uploaded by Cassidy Effertz on Monday, November 2, 2015. They belong to ECE 8873 at Georgia Institute of Technology - Main Campus, taught by Staff in Fall. Since upload, they have received 13 views.
ECE 8873 Data Compression and Modeling
Lecture 14: Data Compression in Communication Systems
School of Electrical and Computer Engineering, Georgia Institute of Technology, Spring 2004

Outline
- Video signals and coding issues
- Telephony and networks
- Broadcasting and networks
- New dimensions in data compression: embedded and layered coding

(Slide footer: Spring 2004, ECE-8873, B.-H. Juang, Copyright 2004, Lecture 14, Slide 1.)

NTSC Video Representation
- 525 scan lines per frame, 30 frames per second (actually 29.97 fps; 33.37 ms/frame) on a CRT
- Interlaced: each frame is divided into 2 fields, 262.5 lines/field
- 20 lines are reserved for control information at the beginning of each field, for a maximum visible data of 485 lines; Laserdisc and S-VHS resolve about 420 lines, ordinary TV about 320 lines
- Each line takes 63.5 microseconds to scan; horizontal retrace takes 10 microseconds (with a 5 microsecond horizontal sync pulse embedded), so the active line carrying video is 53.5 microseconds

Video Signal Representation
- CRT (Cathode Ray Tube) for the display of video images
- NTSC: 240 lines

Color Video
- Component video: each primary color (e.g., R, G, B) is sent as a separate video signal. Best color reproduction, but requires more bandwidth and good synchronization of the three components.
- Composite video: color (chrominance) and luminance signals are mixed into a single carrier wave (see next slide).
- S-Video (Separated video): a transformation of RGB into separate luminance and chrominance signals (e.g., YIQ, YUV); a compromise between component analog video and composite video. It uses two separate channels, one for luminance and the other for a composite chrominance signal.
- Digital component video: luminance and color-difference signals, with Y = 0.299 R + 0.587 G + 0.114 B (just as in NTSC), Cr = R - Y, Cb = B - Y; 8 bits/sample.
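A small sketch of the color transforms above: the digital component relations Y = 0.299R + 0.587G + 0.114B with Cr = R - Y, Cb = B - Y, plus the NTSC YIQ matrix that appears on the next slide. The function names are mine, and the values are unscaled floats, not a quantized 8-bit broadcast conversion.

```python
def rgb_to_yiq(r, g, b):
    # NTSC Y/I/Q from gamma-corrected R', G', B' (coefficients from the notes).
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.212 * r - 0.523 * g + 0.311 * b
    return y, i, q

def rgb_to_color_difference(r, g, b):
    # Digital component video: luma Y plus unscaled color differences Cr, Cb.
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, r - y, b - y   # (Y, Cr, Cb)

# A neutral grey (R = G = B) carries no chrominance in either representation.
y, i, q = rgb_to_yiq(0.5, 0.5, 0.5)
print(round(y, 3), round(i, 3), round(q, 3))
```

That a neutral grey maps to (nearly) zero chrominance is a quick sanity check on the coefficients: each chrominance row of the matrix sums to zero.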
Composite Video
Seven elements:
- horizontal line sync pulse
- color reference burst (3.58 MHz, 0-phase sine wave, 8 to 9 cycles) before the picture information on each scan line
- reference black level
- picture luminance information
- color saturation information
- color hue information
- vertical sync pulse
Composite video signals between equipment are connected with a single 75 ohm coax cable, usually with RCA connectors. Composite video signals can also be modulated onto an RF carrier for TV broadcast, or transmitted on coax cable in cable-TV distribution systems.

Luminance & Composite Chrominance
[Figure: NTSC composite video spectrum, roughly 0-4 MHz, showing the Y (luminance) band with the I & Q (color difference) signals interleaved.]
With R', G', B' the gamma-corrected RGB:
  Y = 0.299 R' + 0.587 G' + 0.114 B'
  I = 0.596 R' - 0.274 G' - 0.322 B'
  Q = 0.212 R' - 0.523 G' + 0.311 B'
and, for digital component video, Cr = R - Y, Cb = B - Y.

ATSC HDTV Video Format
The video scanning formats supported by the ATSC Digital Television Standard are shown in the following table:

  Vertical Lines | Horizontal Pixels | Aspect Ratio | Picture Rate
  1080           | 1920              | 16:9         | 60I, 30P, 24P
  720            | 1280              | 16:9         | 60P, 30P, 24P
  480            | 704               | 16:9 & 4:3   | 60I, 60P, 30P, 24P
  480            | 640               | 4:3          | 60I, 60P, 30P, 24P

Chroma Subsampling
Color information for a frame is sampled at a lower spatial resolution than the luma information.
[Figure: sampling grids for 4:4:4, 4:2:2, and 4:2:0 chroma subsampling, shown for the full frame and for the top/bottom fields; the slide associates the variants with SMPTE/DV-PAL and JPEG/MPEG-1/M-JPEG.]

Digital Voice & Telephone Networks
[Diagram: typical applications — analog subscriber loop with A/D at 64 kb/s (mu-law PCM); ISDN BRI/PRI interfaces (64K and 16K channels); the digital network rate hierarchy (T1 = 1.544 Mb/s, DS3/T3 = 45 Mb/s); 32 kb/s ADPCM on intercontinental undersea cables; digital phones; leased lines.]
[Diagram, continued: 9.6 kb/s modem rates, the T1 rate hierarchy, 13 kb/s cellular speech coding. Some early special applications — dial-up A-law PCM modem over the analog subscriber loop (various bit rates in the roughly 300-3200 Hz telephone band), DSL (digital subscriber loop), digital phone modems, the digital network hierarchy, TDM or IP (Internet).]

Digital Sound Distribution or Broadcasting
- Terrestrial broadcasting: In-Band-On-Channel (IBOC) for FM & AM; CD-like quality over an FM channel, FM-like quality over an AM channel
- Satellite broadcasting (S-band): Sirius, XM; subscription-based service; up to 130 national/syndicated program channels; small dish antenna and digital receiver
- Digital music download from the Internet: audio streaming, solid-state audio players

In-Band-On-Channel FM
Challenges: efficient source coding, robust channel coding, efficient modulation, adjacent-channel interference suppression, error-resilient transmission, coverage.
[Figure: RF spectrum with the digital carriers (Code 1, Code 2) placed around the analog FM host signal, after RF amplification.]

In-Band-On-Channel DAB
Migration path: existing analog broadcast -> hybrid digital IBOC -> all-digital IBOC.
[Figure: spectra of the analog FM, hybrid digital, and all-digital IBOC signals.]

IBOC FM with Multi-stream Coding
- Two 64 kb/s streams with unequal error protection (UEP): CRC, rate-1/2 convolutional coding, DQPSK/OFDM
- PAC decoder with multi-stream UEP: source coding diversity, simpler decoder and combiner design
- The digital program can be different from the analog program, offering new datacasting services, e.g., weather, stock, traffic, messaging

ECE 8873 Data Compression and Modeling
Lecture 3: Introduction to Lossless Compression
School of Electrical and Computer Engineering, Georgia Institute of Technology, Spring 2004
Asymptotic Equi-Partition Principle (AEP)
Let S be the signal observation space, or simply the source, in which a random variable X is defined. Let X = X1 X2 ... XN be a random sequence in the Cartesian product space formed by S: S x S x ... x S. If N is sufficiently large, then any outcome x = x1 x2 ... xN almost surely belongs to a subset of S^N having only about 2^(N H(X)) members, all having probability close to 2^(-N H(X)). Equivalently,

  | -(1/N) log P(x) - H(X) | < epsilon.

Global vs Local Models
- Universal codes: a collection of codes; pick the best one
- Adaptive codes: adjust code parameters in real time
- Often by fitting a model to the source and then coding for the model

Coding as a Task
- Representation of an analog signal for digital transmission or storage; often integrated with A/D conversion
- Compression of digital information to reduce the transmission or storage requirement (compression can also be realized in the analog domain); efficiency is defined in terms of the bandwidth or storage required for the delivery of a fixed amount of information, such as a second of speech or a video frame
- The result of coding is a sequence of digital (often binary) symbols; the sequence may or may not have explicit delimiters

Information Rate (Bit Rate)
- Bits per second, bits per sample, bits per pixel, bits per letter
- But the information rate (bit rate) alone is not a sufficiently accurate indicator of the quality of the reconstructed signal

Fixed- vs Variable-Rate/Length Coding
- Depends on the nature of the source and on practical considerations in implementation
- Relevant coding theorems exist in information theory; if it were not for practical implementations, the two would be asymptotically equivalent in terms of the source entropy argument
- Each has advantages and disadvantages
- Alphabet A = {a1, a2, ..., aM}; code base B = {b1, b2, ..., bN}; binary code B = {0, 1}
- Example of mapping variable-length source blocks to variable-length code blocks: A x A x A^3 x A x A^4 -> B x B x B^2 x B^5 x B
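The AEP stated earlier can be checked empirically: for a long i.i.d. sequence, -(1/N) log2 P(x) concentrates near H(X). A minimal sketch; the skewed binary source and the sequence length are illustrative choices, not from the notes.

```python
import math
import random

def entropy(p):
    # H(X) in bits for a discrete distribution p.
    return -sum(q * math.log2(q) for q in p if q > 0)

def empirical_rate(p, n, rng):
    # Draw x1 ... xn i.i.d. from p and return -(1/n) log2 P(x1 ... xn).
    log_p = 0.0
    for _ in range(n):
        x = rng.choices(range(len(p)), weights=p)[0]
        log_p += math.log2(p[x])
    return -log_p / n

p = [0.9, 0.1]                        # an illustrative skewed binary source
h = entropy(p)
rate = empirical_rate(p, 20000, random.Random(0))
print(h, rate)                        # the two values should be close
```

With N = 20000 draws the sample average sits within a few hundredths of a bit of H(X), which is the "almost surely" concentration the AEP asserts.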
Our Options in the Probability Space
- Let A = {a1, a2, ..., aM} be the alphabet. We can define a probability space based on a random experiment with A as the set of all outcomes.
- Since we are most likely to be interested in encoding a sequence of symbols, we can define a probability space based on a composite random experiment, with the sample (observation) space defined as the Cartesian product of A, that is, A x A x A x ... x A.
- Again, we have the flexibility to redefine the sample space and its mapping to the code set, e.g., A x A x A^3 x A x A^4. (The "x" does not imply independence.)

Variable-Rate and Variable-Length Coding
- Rate: the bit count in a unit time interval. Variable rate (e.g., Huffman codes): source symbols may be emitted at regular time instances; the encoded result, i.e., the corresponding bit stream, however, may not be produced at an equal pace.
- Length: the number of source symbols (letters) encoded in one encoding cycle. Variable length (e.g., Tunstall codes): the number of source symbols (letters) encoded in each cycle may not be the same.

Lossless vs Lossy Compression
- Lossless compression focuses on the code structure, according to the probability of the symbols, so as to achieve the minimum average code length.
- Lossy compression involves an additional dimension, namely the amount of distortion incurred if a symbol is replaced by another one as a result of compression: the choice of distortion measure (what matters?), and the simultaneous consideration of the probability of occurrence and the incurred distortion.

Fundamentals
A lossless code consists of:
- An encoder alpha: A -> B = {0, 1}*. If x in A and alpha(x) appear as parsable units in their respective domains, we can speak of its code length l(alpha(x)). We are most interested in the average code length E[l(alpha(X))] (watch out for E over what space?).
- A decoder beta: B -> A such that beta(alpha(x)) = x.
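The encoder/decoder formalism above can be made concrete with a toy prefix code; the codeword table and source distribution here are illustrative choices of mine, not from the notes.

```python
# Hypothetical prefix code for A = {a, b, c}: alpha maps letters to bits.
alpha = {"a": "0", "b": "10", "c": "11"}
beta = {code: letter for letter, code in alpha.items()}

def encode(seq):
    return "".join(alpha[x] for x in seq)

def decode(bits):
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in beta:              # prefix property: first hit is a symbol
            out.append(beta[buf])
            buf = ""
    assert buf == "", "leftover bits: input was not a codeword sequence"
    return "".join(out)

probs = {"a": 0.5, "b": 0.25, "c": 0.25}                 # assumed source
avg_len = sum(probs[x] * len(alpha[x]) for x in alpha)   # E[l(alpha(X))]

assert decode(encode("abcab")) == "abcab"                # beta(alpha(x)) = x
print(avg_len)                                           # 1.5 bits per letter
```

Note that the expectation E[l(alpha(X))] is taken over the source distribution on A, exactly the "E over what space?" caveat above.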
Questions of interest:
1. How small can E[l(alpha(X))] be?
2. How can optimal performance be achieved?
3. How can near-optimal performance be achieved practically?

Lossless Compression
- Assume no obvious redundancy in the original signal representation, e.g., A = {a1, a2, ...} = {0000, 0001, ..., 1111}, C = {0, 1}.
- We then need to exploit the frequencies of occurrence (or probabilities) of the letters/symbols to minimize the average code length.
- In doing so, the length of each codeword (used to represent a letter/symbol) has to bear some relationship to the corresponding probability, resulting in a variable-length code and, assuming that the source puts out letters at regular intervals, variable-rate coding.

Uniquely Decodable Codes (UDC)
Consider a fixed-rate (or fixed-length) code:

  letter  code
  a1      00
  a2      01
  a3      10
  a4      11

Sequences built from this set of codewords are uniquely decodable only if we know where the code blocks start; synchronization may be needed from time to time.

Unique decodability: any sequence of codewords can be decoded in one and only one way.
- 011111111111111111: uniquely decodable if the overall length is known
- 0101010101010101010: not uniquely decodable
- Review Morse code: its use of the space as a delimiter makes it a ternary code; not an efficient use of bit resources, but it eliminates the ambiguity issue.

A sequence of uniquely decodable codewords must also be parsable:
- embedded parsing vs explicit parsing
- instantaneous vs non-instantaneous parsing/decoding

Test for Unique Decodability
Let a = a1 ... ak and b = b1 ... bn, with k < n. If ai = bi for i = 1, ..., k, then a is a prefix of b, and b(k+1) ... bn is the dangling suffix.
Test procedure:
1. Start with the codebook, the list of all codewords.
2. For each codeword, find all codewords that
are its prefix, and append the corresponding dangling suffixes to the list.
3. Repeat step 2 for every entry of the enlarged codebook until either
   - a dangling suffix that is itself a codeword is obtained (the code is not uniquely decodable), or
   - no more unique dangling suffixes appear (the code is uniquely decodable).

Code Structure & Tree Representation
- Prefix-free code: a code in which no codeword is a prefix of another codeword. The prefix property ensures unique decodability, but not vice versa.
- Code tree: internal nodes vs external nodes (leaves); the codewords of a prefix code are associated only with external nodes (e.g., Code B).

  letter  Code A  Code B  Code C
  a1      0       0       0
  a2      1       10      01
  a3      00      110     011
  a4      11      111     0111

  (Code A is not uniquely decodable; Code B is a prefix code; Code C is uniquely decodable but not a prefix code.)

Two Important Results
- The Kraft-McMillan inequality: let C be a code with N codewords of lengths l1, l2, ..., lN, respectively. If C is uniquely decodable, then
    K(C) = sum_{i=1..N} 2^(-li) <= 1.
- Existence of a prefix code: given a set of integers l1, ..., lN that satisfy sum_{i=1..N} 2^(-li) <= 1, we can always find a prefix code with codeword lengths l1, ..., lN.

Consequences
- Any uniquely decodable code has lengths that satisfy the Kraft inequality (a necessary condition).
- For any collection of lengths satisfying the Kraft inequality, one can construct a prefix code with this set of lengths.
- Hence, given ANY uniquely decodable code, one can find a prefix code (via the tree structure) with the same set of code lengths.
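Both results above can be exercised in a few lines: check the Kraft sum for a set of lengths, and, when it is at most 1, build a prefix code with exactly those lengths by the standard cumulative (canonical) construction. The helper names are mine.

```python
from fractions import Fraction

def kraft_sum(lengths):
    # K(C) = sum of 2^(-l_i), computed exactly with rationals.
    return sum(Fraction(1, 2 ** l) for l in lengths)

def prefix_code(lengths):
    # Assign codewords shortest-first: each codeword is the binary expansion
    # of the running Kraft sum, written with l bits.  Because every earlier
    # length is <= l, acc * 2^l is always an integer.
    assert kraft_sum(lengths) <= 1, "Kraft inequality violated"
    codes, acc = [], Fraction(0)
    for l in sorted(lengths):
        codes.append(format(int(acc * 2 ** l), "0{}b".format(l)))
        acc += Fraction(1, 2 ** l)
    return codes

print(prefix_code([1, 2, 3, 3]))   # ['0', '10', '110', '111']
```

Each codeword claims the interval [acc, acc + 2^(-l)) of the unit interval; the intervals are disjoint, which is exactly the prefix property.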
Huffman Coding
- Huffman codes are prefix codes.
- The Huffman code is optimal for a given model, or probability measure, of the source, i.e., subject to the applicability of the measure. (But where does this knowledge come from?)
- Two necessary conditions for an optimal binary prefix code:
  - symbols with higher probability have shorter codewords;
  - the two symbols that occur least frequently have the same codeword length, and their codewords differ only in the last bit.
- Construction: group the two least probable symbols into one, reshuffle (re-sort) the new list, and repeat until only two entries are left.

Minimum-Variance Huffman Codes
[Worked example from the slides, condensed: A = {a1, ..., a5}, with P(a2) = 0.4, P(a1) = P(a3) = 0.2, P(a4) = P(a5) = 0.1. One optimal code has lengths (1, 2, 3, 4, 4) (e.g., a2 = 1, ..., a5 = 0011); placing combined symbols as high as possible in the re-sorted list instead gives the minimum-variance code a1 = 10, a2 = 00, a3 = 11, a4 = 010, a5 = 011 with lengths (2, 2, 2, 3, 3). Both codes have the same average length, 2.2 bits/symbol.]

ECE 8873 Data Compression and Modeling
Lecture 4: Lossless Compression — Arithmetic Code and Dictionary Techniques
School of Electrical and Computer Engineering, Georgia Institute of Technology, Spring 2004

Arithmetic Coding
- A skewed source often has low entropy. A source with a small alphabet also has low entropy in most practical situations.
- Huffman codes are inefficient for skewed sources; the deviation from optimality depends on P_max, the maximum of the symbol probabilities.
- An extended Huffman code may improve efficiency, but its implementation requires a large table. Is it possible to sequentially construct only the part of the table that is needed?

Arithmetic Code: Key Mechanism
- Code k-tuples of input symbols. For each k-tuple input, generate a tag: a real number in [0, 1), according to the probabilities in the alphabet. (The tag can be generated sequentially.)
- Represent each tag in binary, with a length inversely commensurate with the sequence's probability. (The binary code can be truncated without compromising decodability, due to the tag structure.)
- Let A = {a1, a2, ..., am} and X(ai) = i, with P(X = i) = P(ai) and the cumulative distribution F_X(i) = sum_{k=1..i} P(X = k).
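The index mapping and cumulative distribution F_X just defined can be tabulated once and shared by encoder and decoder. A minimal sketch, using the probabilities of the example on the next slide:

```python
def cumulative(probs):
    # Tabulate F_X(0), F_X(1), ..., F_X(m) for symbols indexed 1 .. m.
    f = [0.0]
    for p in probs:
        f.append(f[-1] + p)
    return f

# P(a1), P(a2), P(a3) as in the lecture's example.
F = cumulative([0.7, 0.1, 0.2])
print(F)   # close to [0.0, 0.7, 0.8, 1.0], up to float rounding
```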
Mapping a Symbol Sequence into a Tag
Let A = {a1, a2, a3} with P(a1) = 0.7, P(a2) = 0.1, P(a3) = 0.2. For the sequence a1 a2 a3 a3, the successive intervals are [0, 0.7), [0.49, 0.56), [0.546, 0.56), and [0.5572, 0.56); taking the midpoint of the final interval,
  T(a1 a2 a3 a3) = (0.5572 + 0.56) / 2 = 0.5586.

Generating and Deciphering the Tag
Let l(n) and u(n) be the lower and upper bounds of the tag interval associated with a sequence of n symbols x1 x2 ... xn. These bounds can be computed via the recursion

  l(n) = l(n-1) + (u(n-1) - l(n-1)) * F_X(x_n - 1)
  u(n) = l(n-1) + (u(n-1) - l(n-1)) * F_X(x_n)

with l(0) = 0 and u(0) = 1. The same recursion is used to convert a tag back into the sequence of symbols; the decoder needs to know the cumulative probabilities F_X(x).

The Deciphering Algorithm
1. Initialize l(0) = 0 and u(0) = 1.
2. For each k = 1, 2, ..., n, compute t* = (tag - l(k-1)) / (u(k-1) - l(k-1)).
3. Find the value of x_k for which F_X(x_k - 1) <= t* < F_X(x_k).
4. Update l(k) and u(k).
5. Repeat steps 2-4 until the entire sequence is decoded.

Another Way to Look at the Arithmetic Code
[Figure: the unit interval is recursively partitioned according to the symbol probabilities; two-symbol sequences a_i a_j occupy subintervals of widths P(a_i) P(a_j) (0.49, 0.07, 0.14, 0.01, 0.02, ...), with tags at the interval midpoints (0.245, 0.525, 0.63, 0.87, ...).] Use a binary format for the tag, truncated to ceil(log2(1/P(x))) + 1 bits.

Binary Code Assignment
Each tag in [0, 1) is represented as a binary fraction, truncated to l(x) = ceil(log2(1/P(x))) + 1 bits. The truncated tag always remains inside [F_X(x - 1), F_X(x)), so this binary code is uniquely decodable; it is in fact a prefix code.

  Symbol  F_X    Tag T_X  In binary  ceil(log2 1/P(x)) + 1  Code
  1       .500   .25      .010       2                      01
  2       .750   .625     .101       3                      101
  3       .875   .8125    .1101      4                      1101
  4       1.000  .9375    .1111      4                      1111

Average code length: H(X) <= l_A <= H(X) + 2/m, where m is the block length.
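The tag recursion and the deciphering algorithm above can be sketched end-to-end. Floating point is used for clarity; a practical arithmetic coder would use scaled integer arithmetic to control precision.

```python
def cdf(probs):
    # F_X(0), ..., F_X(m) for symbols indexed 1 .. m.
    f = [0.0]
    for p in probs:
        f.append(f[-1] + p)
    return f

def tag(seq, probs):
    # l(n), u(n) recursion; the tag is the midpoint of the final interval.
    F = cdf(probs)
    lo, hi = 0.0, 1.0
    for x in seq:                    # symbols are 1-based indices
        lo, hi = lo + (hi - lo) * F[x - 1], lo + (hi - lo) * F[x]
    return (lo + hi) / 2

def decipher(t, probs, n):
    # Invert the recursion: rescale the tag, look up F_X, update the bounds.
    F = cdf(probs)
    lo, hi = 0.0, 1.0
    out = []
    for _ in range(n):
        t_star = (t - lo) / (hi - lo)
        x = next(i for i in range(1, len(F)) if F[i - 1] <= t_star < F[i])
        out.append(x)
        lo, hi = lo + (hi - lo) * F[x - 1], lo + (hi - lo) * F[x]
    return out

# The example from the slides: P = (0.7, 0.1, 0.2), sequence a1 a2 a3 a3.
probs = [0.7, 0.1, 0.2]
t = tag([1, 2, 3, 3], probs)
print(round(t, 4))                   # 0.5586
assert decipher(t, probs, 4) == [1, 2, 3, 3]
```

Note that the decoder must be told the sequence length n (or be given an end-of-sequence symbol), since every prefix of the tag interval nests inside the previous one.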