Special Topics EECS 498
This 43-page set of class notes was uploaded by Ophelia Ritchie on Thursday, October 29, 2015. The notes belong to EECS 498 at the University of Michigan, taught by Ian Hiskens in the fall.
Image Compression, or: How to squeeze lots of pictures into an iPod
Jeffrey A. Fessler, EECS Department, The University of Michigan
Engin 110, 2008-10-09

Alternative Energy Follow-Up
Winter 2009 course: EECS 498 Special Topics, "Grid Integration of Alternative Energy Sources," Prof. Ian Hiskens. Topics:
- Photovoltaics (solar cells)
- Power electronic interfaces
- Fuel cells
- Power systems
- Wind generation, including basic electrical machines
- Plug-in hybrid electric vehicles
Prerequisite: electrical circuits (e.g., EECS 215 or EECS 314).
Electrical engineering will be the heart and brain of all alternative energy solutions:
- heart / circulatory system: energy distribution
- brain / central nervous system: information and control
- energy / power / information

Cell Phones Everywhere
Notice anything funny about this picture?

What is inside an iPod?
This talk concerns the invisible algorithms executing inside the chips within iPods, digital cameras, and cell phones.

Digital Camera Basics
Sensor -> A/D digitizer -> binary data (e.g., 10010110 11001001).
EECS is everywhere here: optics, semiconductor devices, analog circuits, digital logic, software, user interface.
Key component: the analog-to-digital converter.

Analog-to-Digital (A/D) Conversion
Light produces an electrical current in the photo sensor, which has a linear operating range and then saturates; the current is then quantized.
Key component: the quantizer.
Binary voltages are either high or low, representing 1 or 0.

Quantization
The quantizer maps the analog signal to a digital signal. LSB = least-significant bit.
Example: the 8-bit binary value 10011001 represents 2^7 + 2^4 + 2^3 + 2^0 = 128 + 16 + 8 + 1 = 153.
Smallest 8-bit digital value: 0. Largest 8-bit digital value: 255.

Digital Gray-Scale (B/W) Image
Each pixel is stored as a binary value (e.g., 00000010, ..., 11001000): the image "as displayed" vs. "as stored."
Raw bits stored = (# of pixels) x (bits per pixel, bpp).

Lots of Bits
Example: a 3000 x 3000 pixel array (9-megapixel camera) with an 8-bit quantizer:
total # of bits = 3000 x 3000 x 8 = 72 x 10^6 bits = 9 MB.
9 MB for a gray-scale image is undesirably large, and for an RGB color image we would need 3x more.

Image Compression (aka Image Coding)
A subfield within the field of data compression (aka source coding). Other data compression problems: audio compression (MP3), text compression (zip).
Goal: reduce the # of bits while trying to preserve image quality.
- Lossless image coding: perfectly reversible.
- Lossy image coding: recover only an approximation.
Why? For storage and transmission.
Concept: represent the image with as few 0s and 1s as possible. The coder and decoder (codec) are designed together.
Examples: MP3, JPEG, MPEG-4, AVI.
Q: How low can we go (how few bits per pixel)?
A: Claude Shannon's information theory.

Basic Image Compression by Rounding Down
Suppose for each pixel we discard the two least-significant bits (LSBs), i.e., we set them to 0 (MSB ... LSB).
Possible gray-scale values are then multiples of 4:
00000000 = 0, 00000100 = 4, 00001000 = 8, 00001100 = 12, 00010000 = 16, ..., 11111100 = 252.
There is no need to store the two least-significant bits, so only 6 bits per pixel are stored instead of 8: 25% data compression.
What happens to image quality?

Quantization Error for Rounding
Setting the two LSBs to 0 reduces memory but induces error:
quantization error = original value - rounded value.
[Figure: staircase plot of rounded value vs. original value, showing the quantization error.]

Average Quantization Error for Rounding Down
For discarding 2 bits: average error = (0 + 1 + 2 + 3) / 4 = 1.5 gray levels.
For discarding d bits, where 0 <= d < 8:
average error = (0 + 1 + 2 + ... + (2^d - 1)) / 2^d = (2^d - 1) / 2 gray levels.
As we discard more bits, the error increases. Shannon called this the rate-distortion tradeoff. Next we see what it looks like.

[Image: original, using all 8 bits per pixel.]
[Image: discarding 1 least-significant bit.]
[Image: discarding 2 least-significant bits.]
[Image: discarding 3 least-significant bits; average error 3.48 gray levels.]
[Image: discarding 4 least-significant bits.]
[Image: discarding 5 least-significant bits; average error 14.61 gray levels.]
[Image: discarding 6 least-significant bits; average error 31.09 gray levels.]
[Image: discarding 7 least-significant bits; average error 42.44 gray levels.]

Rate-Distortion Tradeoff for Rounding Down
[Figure: average error (gray levels) vs. bits per pixel for rounding down.]
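The rounding-down scheme is easy to simulate. The sketch below (not from the slides; it assumes NumPy, and the function names are mine) discards d LSBs with bit shifts and compares the measured error against the (2^d - 1)/2 formula:

```python
import numpy as np

def discard_lsbs(img, d):
    """Quantize 8-bit pixels by zeroing their d least-significant bits."""
    return (img >> d) << d

def predicted_avg_error(d):
    """Mean quantization error: (0 + 1 + ... + (2^d - 1)) / 2^d = (2^d - 1) / 2."""
    return (2**d - 1) / 2

# Uniform random "image": every 8-bit value equally likely, so the
# measured average error should track the formula closely.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)

for d in range(1, 8):
    measured = (img.astype(int) - discard_lsbs(img, d).astype(int)).mean()
    print(f"d={d}: predicted {predicted_avg_error(d):5.1f}, measured {measured:5.2f}")
```

For d = 2 this stores 6 of 8 bits per pixel (the 25% savings above) at a predicted cost of 1.5 gray levels, matching the slide; on a real photograph the measured error differs slightly because pixel values are not uniformly distributed.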
Can we design a better image compression method? What does "better" mean?

Mark Rothko's "White and Black on Wine"

Jeff Fessler's "With Apologies to Rothko"
This image has only four distinct gray levels: 15, 120, 180, 230.
How many bits per pixel are needed to encode it?

Coding an Image with a Few Distinct Gray Levels
A reasonable binary code:

value  code
15     00
120    01
180    10
230    11

With this code, only 2 bits per pixel are needed for this image, plus a few bits of overhead to store the code table for the decoder.
Can we do even better?

Variable-Length Codes
So far we have been using fixed-length codes, where every value is represented by the same number of bits (2 bits per value in the preceding example).
Consider International Morse Code (1840s), which assigns short dot/dash sequences to common letters and longer ones to rare letters. Why only a single dot for E?
The idea of variable-length codes: use fewer bits for values that occur more frequently.

Variable-Length Code Example
An improved, variable-length code:

value  proportion  code
15     9.8%        011
120    47.5%       1
180    30.5%       00
230    12.2%       010

How many bits per pixel, on average, are needed with this code?
0.098 x 3 + 0.475 x 1 + 0.305 x 2 + 0.122 x 3 = 1.745
Less than 2 bits per pixel! How is it stored? This is a Huffman code; see Matlab's huffmandict command.
Can we do even better?

Shannon's Source Coding Theory
To encode numerous signal values that lie in a set with N elements having proportions (probabilities) p_1, p_2, ..., p_N, on average we need at least H bits per value, where H is the entropy:
H = - sum_{n=1}^{N} p_n log2 p_n.
Example: for our image with N = 4 gray levels, the entropy is
H = -(p_1 log2 p_1 + ... + p_4 log2 p_4)
  = -(0.098 log2 0.098 + 0.475 log2 0.475 + 0.305 log2 0.305 + 0.122 log2 0.122)
  = 1.7313 bits per pixel.
Our Huffman code (1.745 bits per pixel) came remarkably close to this lower limit.
This type of theoretical bound is important for practical design (note: analysis vs. design).

More Complicated Images
This image's pixel values also lie in a finite set, {0, 1, ..., 255}, so Shannon's entropy bound applies: for this image we need at least 7.01 bits per pixel.
Can we do better?

Coding Pixel Relationships
So far we have coded the individual pixel values directly. This provides only modest data compression for most images.
For better image compression we must consider the relationships between pixel values. For example, neighboring pixel values tend to be similar.

Histogram of the Difference Image
[Figure: histogram of pixel values in the horizontal difference image.]
The horizontal difference image has pixel values that lie in a finite set, {-255, -254, ..., 0, ..., 255}, so Shannon's entropy bound applies:
H = - sum_i p_i log2 p_i = 5.06 bpp,
i.e., for this image we need at least 5.06 bits per pixel.

Modern Transform-Based Image Coding
- Capture pixel relationships using the discrete cosine transform (JPEG) or the wavelet transform (JPEG 2000).
- Substantial compression comes from discarding small DCT coefficients.
- Lossy vs. lossless compression.
For video coding: use the DCT within each frame, and differences between frames.

JPEG Image Compression and the DCT
The DCT (discrete cosine transform) is:
- a relative of the discrete Fourier transform (DFT),
- practical thanks to the fast Fourier transform (FFT),
- a digital equivalent of an optical prism.

[Image: original, using all 8 bits per pixel.]
[Image: JPEG at 0.685 bits per pixel; average error 0.62 gray levels.]
[Image: JPEG at 0.390 bits per pixel; average error 1.32 gray levels.]
[Image: JPEG at 0.185 bits per pixel; average error 2.85 gray levels.]
[Image: JPEG at 0.071 bits per pixel; average error 5.40 gray levels.]
[Image: JPEG at 0.030 bits per pixel; average error 10.01 gray levels.]
[Image: JPEG at 0.024 bits per pixel; average error 13.76 gray levels.]

Rate-Distortion Tradeoff for JPEG
[Figure: average error (gray levels) vs. bits per pixel for JPEG.]
Recall that for compression by rounding, at 1 bpp the average error was about 40 gray levels.
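To make the transform-coding idea concrete, here is a minimal sketch (assuming NumPy; this is not the actual JPEG algorithm, which adds 8x8 tiling, perceptual quantization tables, and entropy coding): build an orthonormal DCT-II matrix, transform a smooth block, discard all but the largest-magnitude coefficients, and invert.

```python
import numpy as np

def dct_matrix(N=8):
    """Orthonormal DCT-II matrix; its transpose is its inverse."""
    k = np.arange(N)[:, None]          # frequency index (rows)
    n = np.arange(N)[None, :]          # sample index (columns)
    C = np.sqrt(2 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0] /= np.sqrt(2)                 # DC row scaling
    return C

def keep_largest(block, keep):
    """Toy lossy transform coder: 2-D DCT, zero all but (at least) the
    `keep` largest-magnitude coefficients, inverse 2-D DCT."""
    C = dct_matrix(block.shape[0])
    coef = C @ block @ C.T             # forward 2-D DCT
    thresh = np.sort(np.abs(coef), axis=None)[-keep]
    coef[np.abs(coef) < thresh] = 0.0  # discard small coefficients
    return C.T @ coef @ C              # inverse 2-D DCT

# A smooth 8x8 gradient block, typical of natural-image content:
# most of its energy concentrates in a few low-frequency coefficients.
block = 10.0 * np.add.outer(np.arange(8.0), np.arange(8.0))
approx = keep_largest(block, keep=4)
print("mean abs error:", np.abs(block - approx).mean())
```

Keeping only a handful of the 64 coefficients reconstructs the smooth block with small error, which is exactly the "discard small DCT coefficients" mechanism behind the JPEG rate-distortion curves above.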