# TOPICS IN COMPUTER SCIENCE CS 519


This 297-page set of class notes was uploaded by Dorothea McGlynn on Monday, October 19, 2015. The notes belong to CS 519 at Oregon State University, taught by Staff in Fall. Since upload, they have received 50 views. For similar materials see /class/224502/cs-519-oregon-state-university in Computer Science at Oregon State University.


Date Created: 10/19/15

## Mike Bailey, CS 519, Oregon State University: Homogeneous Coordinates

### Adding a 4th Value to an (x, y, z) Triple

We usually think of a 3D point as being represented by a triple (x, y, z). Homogeneous coordinates add a fourth value, w. The graphics pipeline divides x, y, and z by w before it uses them. Thus (1, 2, 3, 1), (2, 4, 6, 2), and (-1, -2, -3, -1) all represent the same 3D point.

### This Seems Awkward. Why Do It?

One reason is that it allows for perspective division within the matrix mechanism. The OpenGL call glFrustum(left, right, bottom, top, near, far) creates this matrix:

    | 2*near/(right-left)   0                     (right+left)/(right-left)    0                       |
    | 0                     2*near/(top-bottom)   (top+bottom)/(top-bottom)    0                       |
    | 0                     0                     -(far+near)/(far-near)       -2*far*near/(far-near)  |
    | 0                     0                     -1                           0                       |

This gives w' = -z, which is the necessary divisor for perspective.

### Another Reason: Representing Points at Infinity

Points at infinity are represented by placing a zero in the w position. The point (1, 2, 3, 1) represents the 3D point (1, 2, 3). The point (1, 2, 3, .5) represents the 3D point (2, 4, 6). The point (1, 2, 3, .01) represents the point (100, 200, 300). So (1, 2, 3, 0) represents a point at infinity, but along the ray from the origin through (1, 2, 3).

### Getting a Vector Between Two Homogeneous Points

To get a vector between two homogeneous points, we subtract them:

    (xb/wb - xa/wa, yb/wb - ya/wa, zb/wb - za/wa)
        = (wa*xb - wb*xa, wa*yb - wb*ya, wa*zb - wb*za) / (wa*wb)

Fortunately, most of the time that we do this we only want a unit vector in that direction, not the full vector. So we can ignore the denominator and just say:

    V = normalize(wa*xb - wb*xa, wa*yb - wb*ya, wa*zb - wb*za)

## CS 519 Signal & Image Processing: Display and Reconstruction

### Properties of Displays

- Size and number of pixels
- Brightness
- Linearity
- Flatness
- Resolution

### Reconstruction: Reverse of Digitization

- Undo sampling, or at least make it seem continuous: Gaussian spots, resampling
- Undo quantization (convert back to analog): interpolation, dithering

### Intensity Discrimination

The human eye can discriminate about 1000 shades of gray; for constant adaptation, about 200 levels. 8 bits gives 2^8 = 256 shades.
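The perspective-divide and point-subtraction rules above can be sketched in plain Python (a minimal illustration of the arithmetic, not of any OpenGL API; the function names are mine):

```python
def project(p):
    """Perspective-divide a homogeneous point (x, y, z, w) down to 3D."""
    x, y, z, w = p
    if w == 0:
        raise ValueError("point at infinity: direction is (x, y, z)")
    return (x / w, y / w, z / w)

def vector_between(a, b):
    """Direction from homogeneous point a to b, dropping the wa*wb denominator
    (fine when the result will be normalized anyway)."""
    xa, ya, za, wa = a
    xb, yb, zb, wb = b
    return (wa * xb - wb * xa, wa * yb - wb * ya, wa * zb - wb * za)
```

As in the slides, (1, 2, 3, 1) and (2, 4, 6, 2) project to the same 3D point.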
### Linearity

Applies to output as well as input: twice the recorded value should be twice as bright. Problem: monitor response is not linear.

### Gamma

Output intensity as a function of input voltage V:

    I = c * V^gamma + b

where b is the offset (bias) and c is the gain (slope).

### Gamma Correction

Pre-correct each graylevel g with g' = g^(1/gamma). After gamma correction,

    I = c * (g^(1/gamma))^gamma + b = c*g + b

Different monitors have different gammas; be careful of different operating systems, drivers, etc. This applies to imaging devices as well (cameras, scanners, etc.).

### Gaussian Spots

Digital display devices generate output via a collection of dots (spots). Each spot has a 2D intensity distribution, modeled as a radially symmetric Gaussian

    p(x, y) = e^(-(x^2 + y^2)/R^2) = e^(-(r/R)^2)

where R is the radius at which intensity drops to 1/2 of maximum: p(r) = e^(-(r/R)^2 ln 2).

### Flat Display

A constant high-intensity image should look flat. Problem: it is hard to make individual spots blend into a constant field. Solutions: use wider spots, or put the spots closer together. (The original slide plots overlapping Gaussian spot profiles; the figure is not recoverable here.)

### Resolution

Wider or closer-together spots mean less resolution: individual spots spread and interact with their neighbors, and rapid changes lose contrast. Modulation: scaled contrast between neighboring high- and low-intensity pixels.

### Contrast vs. Frequency

Display sinusoids of different frequencies and measure contrast as a function of frequency: the transfer function.

## CS 519 Signal & Image Processing: Histograms

### Definition

Histograms count the number of occurrences of each graylevel value: H(D) = count of pixels with graylevel D (e.g., a bright sky produces a large peak at high graylevels).

### Properties

The sum of the histogram elements equals the image size.

Discrete:

    sum_{D=0}^{255} H(D) = # of pixels

Continuous:

    integral_0^inf H(D) dD = area

The sum of the values between a and b equals the size of all objects in that graylevel range.

Discrete:

    sum_{D=a}^{b} H(D) = # of pixels in objects

Continuous:

    integral_a^b H(D) dD = area of objects
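The histogram definition and its cumulative form (used in the normalizing slides below) can be sketched in a few lines of Python; the function names are mine:

```python
def histogram(pixels, levels=256):
    """H[D] = number of pixels with graylevel D."""
    H = [0] * levels
    for p in pixels:
        H[p] += 1
    return H

def cumulative(H):
    """C[a] = number of pixels with graylevel <= a (running sum of H)."""
    C, total = [], 0
    for count in H:
        total += count
        C.append(total)
    return C
```

Note that `sum(histogram(img))` always equals the number of pixels, which is the first histogram property above.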
### Properties: Integrated Optical Density

The integrated optical density (IOD) is the "weight" of the image or objects:

    IOD = integral_a^b D * H(D) dD

The mean graylevel is the average intensity in the image or objects:

    mean = integral D * H(D) dD / integral H(D) dD

### Application: Camera Parameters

- Too bright: lots of pixels at 255 (or max)
- Too dark: lots of pixels at 0
- Gain too low: not enough of the range used

### Application: Segmentation

Histograms can be used to separate bright objects from a dark background, or vice versa.

### Histograms: Normalizing and Cumulative

Probability density function: the histogram normalized by area,

    p(D) = H(D) / A

Cumulative histogram: counts pixels with values up to and including the specified value,

    C(a) = integral_0^a H(D) dD

Cumulative density function: the normalized cumulative histogram,

    P(a) = integral_0^a p(D) dD

## Mike Bailey, CS 519, Oregon State University: The Generic Computer Graphics System (May 31, 2006)

Input devices feed a pipeline through several coordinate systems:

- MC: Model Coordinates
- WC: World Coordinates
- EC: Eye Coordinates
- NDC: Normalized Device Coordinates
- SC: Screen Coordinates
- TC: Texture Coordinates

Display devices may use a Unified Memory Architecture (UMA).

### Transforming a Surface Normal

Before transformation, a tangent T = P1 - P0 is perpendicular to the normal N:

    N . T = 0,  or expressed in matrix notation,  (N)^T (T) = 0

After transformation, T' = P1' - P0' = M*P1 - M*P0 = M*T. If Q is the matrix which needs to transform the normal, then

    (Q N)^T (T') = 0
    (Q N)^T (M T) = 0      substituting for T'
    N^T Q^T M T = 0        distributing the transpose
    N^T (Q^T M) T = 0      associating the middle terms

Remembering that N^T T = 0, we need Q^T M = I, so that Q must equal

    Q = (M^{-1})^T

## CS 519 Signal & Image Processing: Signals and Functions, Image Acquisition

A signal is a function carrying information. Functions have domains and ranges:

| Domain | Range |
|---|---|
| t | sound (air pressure) |
| (x, y) | graylevel (light intensity) |
| (x, y, t) | color (RGB, HSL) |
| (x, y, z) | LANDSAT (7 bands) |
| (x, y, z, t) | ... |

### What Do the Range Values Mean?

The range may be visible light (intensity/graylevel, color RGB), but what does the value 213 mean? The values may be quantities we cannot sense directly.

### Domains & Ranges

- Analog: continuous domain and range
- Digital: discrete domain and range

Converting from continuous to discrete: selection of discrete points in the domain is called sampling; selection of discrete values in the range is called quantization.
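The sampling/quantization split above can be sketched directly (a toy uniform quantizer, not any particular device; the function names are mine):

```python
import math

def sample(f, rate, duration):
    """Domain discretization: sample a continuous signal f(t) at `rate`
    samples per second for `duration` seconds."""
    n = int(rate * duration)
    return [f(i / rate) for i in range(n)]

def quantize(samples, bits):
    """Range discretization: map samples in [-1, 1] onto 2**bits levels."""
    levels = 2 ** bits
    return [round((s + 1) / 2 * (levels - 1)) for s in samples]
```

For example, 8-bit quantization maps the range [-1, 1] onto the integers 0..255.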
Examples of ranges we cannot sense directly: radio waves (e.g., doppler radar), range images, ultrasound, magnetic resonance, X-rays (e.g., CT). Again: domain discretization is sampling; range discretization is quantization.

### Sampling vs. Quantization

Sampling is described using terms such as rate (frames per second, dots per inch), frequency (44.1 KHz audio), spacing, and density. Quantization is referred to as the number of discrete values, or bits per sample/pixel (black-and-white images, 16-bit audio, 24-bit color).

### Resolution

The ability to discern detail, in both domain and range. It is not simply the number of samples/pixels; it is determined by the averaging or spreading of information when the signal is sampled or reconstructed.

### Acquisition Devices

- Aperture: size of the sampling area
- Scanning: ordered sampling of the signal/image
- Sensor: measures the quantity of a sample
- Quantizer: converts continuous to discrete
- Output: storage medium that saves the quantized samples

Compare pinhole and film cameras with CCD cameras.

### Apertures

Point measurements are impossible; we have to make measurements using a weighted average over some aperture (a time window, a spatial area, etc.). Size determines resolution: smaller means better resolution, larger means worse resolution. Lenses allow a physically larger aperture with an effectively smaller one.

### Sensor

Converts a physical property (e.g., light/photons) to a chemical and/or electrical response. Examples: film (silver halide crystals), eyes (photoreceptors: rods and cones), digital cameras (charge-coupled device, CCD).

### Noise

Unavoidable, undesirable fluctuation from the correct value: the nemesis of signal and image processing. Usually random, modeled as a statistical distribution with mean mu at the correct value; a measured sample varies from mu according to the distribution's sigma. The signal-to-noise ratio (SNR) measures how noise-free the acquired signal is.

### Sources of Noise

- Quantum: the discrete nature of light (photons),
Poisson distributed. SNR increases with more light: turn up the light source, use a larger aperture, or collect longer.
- Background/thermal (a.k.a. dark current): builds up over time, and thermal electrons are indistinguishable from photoelectrons. Decreases with a cooler environment or a shorter collection time.

### Sources of Noise (cont.)

- Sensor inhomogeneity: every sensor is unique. Dark current levels vary from element to element; long exposures produce fixed patterns, which can be subtracted out.
- Circuitry (prior to converting to digital): electromagnetic interference from other circuits. Shifting charge from well to well (CCD): some photoelectrons are lost (charge-transfer efficiency < 1), worse as the charge is read out at a faster rate. Dead pixels.

### Noise vs. Resolution

- Smaller aperture: higher resolution, but less area means fewer photons and more noise.
- Larger aperture: lower resolution, but more area means more photons and less noise.
- Lens: larger physical photon-collection area with a smaller effective resolution aperture.

Example 1: camera settings, F-stop (aperture) and shutter speed (collection time). Example 2: film crystals, larger is lower resolution but faster (short exposure); smaller is higher resolution but slower (long exposure).

### Ideal Sensor Response

- Multiplying the input by a constant value multiplies the output by the same constant: a*x yields a*f(x)
- Adding two inputs causes the corresponding outputs to add: x + y yields f(x) + f(y)
- Linearity: a*x + b*y yields a*f(x) + b*f(y)

### Typical Sensor Response: Gain and Offset

Every sensor has an effective dynamic range. Gain: the proportionality of output to input (slope). Offset (bias): a constant addition to the output (y-intercept). Most approximately linear devices are linear over some range:

- Low: "toe" (nonzero response)
- Middle: linear response (the effective dynamic range)
- High: "shoulder" (saturation and blooming)

Example: the H&D curve for film.

### Other Sensor Problems

Blooming: photoelectrons overflow from one sensor well to neighboring, electrically connected wells.
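The superposition test from "Ideal Sensor Response" above can be sketched in Python. A pure-gain sensor passes it; adding a nonzero offset (bias) breaks it, which is why the gain/offset model is only approximately linear. The function names and test points are mine:

```python
def response(x, gain=2.0, offset=0.0):
    """Sensor model from the notes: output = gain * input + offset (bias)."""
    return gain * x + offset

def is_linear(f, x=0.3, y=0.4, a=2.0, b=3.0, tol=1e-9):
    """Check the superposition property f(a*x + b*y) == a*f(x) + b*f(y)."""
    return abs(f(a * x + b * y) - (a * f(x) + b * f(y))) < tol
```

A single spot check like this can't prove linearity in general, but it is enough to expose the offset term.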
### Modulation Transfer Function

A way of measuring resolution. Instead of line pairs, it uses sine waves: measure the contrast of the response as a function of frequency. The MTF plot (contrast vs. frequency) falls off toward zero at the limiting resolution.

## The Vector Space Model

### History of the VS Model

- Introduced by Gerald Salton in the 1960s: the SMART system
- Internet: Wide Area Information System (WAIS) in the 1980s
- The web: web search engines (Excite, Lycos, Altavista)

### Problems with the Boolean Model

- Queries can be complex in order to get exactly what you need; not appropriate for non-expert users
- Binary relevance: too many results or too few

### Queries in the Vector Space Model

A query is issued in natural language: no operators or formal syntax, just a list of keywords.

### Dealing with Non-Binary Relevance

Treat the query as a small document, and find documents that are similar, not just exact matches. How do you define similarity?

### The Vector Space: Indexing Definitions

- K = {k_1, ..., k_N}: the set of all index terms; N = the number of index terms
- w_ij: the weight of term k_i in document d_j; w_ij = 0 if k_i is not in d_j
- Each document is a vector in this N-dimensional space; the value of each dimension is the weight of that term in the document
- The query is also described as a vector; it is treated like a small document

### How Do We Compute the Weights?

The weight of a term in a document is, intuitively, the extent to which that keyword is representative of the document's content. A good amount of research has gone into choosing the right weights. Simplest approach: compute it based on the number of times a keyword appears in a document.

### Example

Doc 1: "A dog is an animal, a big animal." Doc 2: "A cat is an animal." Over the terms (a, is, an, dog, cat, animal, big), raw term counts give (reconstructing the garbled slide):

    d1 = (2, 1, 1, 1, 0, 2, 1)
    d2 = (1, 1, 1, 0, 1, 1, 0)

### Similarity in the Vector Space

How do we define similarity between two document vectors? Salton proposed the cosine of the angle between the two vectors. (The original slide plots two document vectors and a query in a three-term space.)
### Angle Related to Cosine (in 2D)

All weights are positive, so the angle is between 0 and 90 degrees: the greater the angle, the smaller the cosine. (Image credit on the slide: David Jayne.)

### Cosine Formula

    sim(d_j, q) = (d_j . q) / (|d_j| |q|)
                = sum_i(w_ij * w_iq) / (sqrt(sum_i w_ij^2) * sqrt(sum_i w_iq^2))

Cat-and-dog example (as worked on the slide): with |d1| = sqrt(12) and |q| = sqrt(2), a dot product of 4 gives sim = 4 / (sqrt(12) * sqrt(2)) which is approximately 0.81.

The numerator increases when a word occurs more frequently in both vectors, and is unaffected when a word occurs in one and not the other. The denominator is a normalization factor; similarity is 1 if the two vectors are exactly the same.

### Review: The Vector Space Model

Exam on Tuesday (review held Thursday, Feb 2nd). It will cover material from the visiting speakers, everything in class through today, and the assigned reading through 2.5.3. (The review slides repeat the N-dimensional space definitions, the three-term-space picture, the cosine formula, and the cat-and-dog example above.)

### An Improved Approach to Term Weighting

Factors:

- Term frequency: how often the keyword occurs in the document
- Inverse document frequency
- Normalized frequency

### Normalized Frequency

Raw word frequency, normalized by the most frequent keyword appearing in the document:

    f_ij = freq_ij / max_l(freq_lj)
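Salton's cosine measure above can be sketched in a few lines of Python. The d1 vector and the query below follow my reading of the garbled cat-and-dog slide, so treat the specific numbers as an assumption:

```python
import math

def cosine_sim(d, q):
    """Cosine of the angle between two term-weight vectors."""
    dot = sum(wd * wq for wd, wq in zip(d, q))
    norm = math.sqrt(sum(w * w for w in d)) * math.sqrt(sum(w * w for w in q))
    return dot / norm if norm else 0.0

# Terms: (a, is, an, dog, cat, animal, big)
d1 = [2, 1, 1, 1, 0, 2, 1]   # "A dog is an animal, a big animal"
```

A document is always perfectly similar to itself (cosine 1), and orthogonal vectors (no shared terms) score 0.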
### Inverse Document Frequency

    idf_i = log(N / n_i)

where N is the number of documents in the collection and n_i is the number of documents in which keyword k_i appears. The most frequent keywords (those appearing in many documents) are weighted down.

### Weights for the Vector Space Algorithm

Known commonly as TF-IDF, term frequency times inverse document frequency:

    w_ij = f_ij * log(N / n_i)

### Summary: Vector Space Algorithm

Advantages:

- It works: performs more accurately than nearly every other proposed model that uses only terms
- Non-binary relevance
- Can be implemented very efficiently

Disadvantages:

- Assumes all terms are independent

## CS 519 Signal & Image Processing: The Fourier Transform, Examples & Properties

### Magnitude and Phase

Remember, complex numbers can be thought of in two ways: (real, imaginary) or (magnitude, phase).

    |F| = sqrt(Re(F)^2 + Im(F)^2)
    Phase(F) = arctan(Im(F) / Re(F))

Intuition:

- Real part: how much of a cosine of that frequency you need
- Imaginary part: how much of a sine of that frequency you need
- Magnitude: amplitude of the combined cosine and sine
- Phase: relative proportions of sine and cosine

### Odd and Even Functions

- Even: f(t) = f(-t), symmetric, built from cosines; the transform is real (for real-valued signals)
- Odd: f(t) = -f(-t), antisymmetric, built from sines; the transform is imaginary (for real-valued signals)

Any function f(t) can be broken into even and odd parts:

    f_e(t) = 1/2 [f(t) + f(-t)]
    f_o(t) = 1/2 [f(t) - f(-t)]

### Individual Transform Pairs

- Sinusoids: cos(2*pi*w*t) has transform 1/2 [delta(s - w) + delta(s + w)]; sin(2*pi*w*t) has a pair of shifted, imaginary deltas
- Constant functions: f(t) = 1 has transform delta(s)
- Delta (impulse) function: delta(t) has transform 1
- Square pulse: Pi_a(t) has transform a*sinc(a*pi*s) = a*sin(a*pi*s)/(a*pi*s)
- Triangle: Lambda_a(t) has transform a*sinc^2(a*pi*s)
- Comb (shah) function: comb_h(t) = delta(t mod h) has transform delta(s mod 1/h)
- Gaussian: e^(-pi*t^2) has transform e^(-pi*s^2)
- Differentiation: d/dt has transform 2*pi*i*s

### Common Fourier Transform Pairs

| Spatial domain f(t) | Frequency domain F(s) |
|---|---|
| Cosine cos(2*pi*w*t) | Shifted deltas 1/2 [delta(s + w) + delta(s - w)] |
| Sine sin(2*pi*w*t) | Shifted deltas (i/2) [delta(s + w) - delta(s - w)] |
| Unit function 1 | Delta function delta(s) |
| Constant a | Delta function a*delta(s) |
| Delta function delta(t) | Unit function 1 |
| Comb delta(t mod h) | Comb delta(s mod 1/h) |
| Square pulse Pi_a(t) | Sinc function a*sinc(a*pi*s) |
| Triangle Lambda_a(t) | Sinc squared a*sinc^2(a*pi*s) |
| Gaussian e^(-pi*t^2) | Gaussian e^(-pi*s^2) |
| Differentiation d/dt | Ramp 2*pi*i*s |

### Properties: Notation

Let F = F{f} denote the Fourier transform, and f = F^{-1}{F} the inverse Fourier transform.

### Addition Theorem

Adding two functions together adds their Fourier transforms: F{f + g} = F{f} + F{g}. Multiplying a function by a scalar constant multiplies its Fourier transform by the same constant: F{a f} = a F{f}. Consequence: the Fourier transform is a linear transformation.

### Shift Theorem

Translating (shifting) a function leaves the magnitude unchanged and adds a constant to the phase. If f2(t) = f1(t - a), with F1 = F{f1} and F2 = F{f2}, then

    |F2| = |F1|
    Phase(F2) = Phase(F1) - 2*pi*s*a

Intuition: magnitude tells you how much; phase tells you where.

### Similarity Theorem

Scaling a function's abscissa (domain, or horizontal axis) inversely scales both the magnitude and the abscissa of the Fourier transform. If f2(t) = f1(a*t), then

    F2(s) = (1/|a|) F1(s/a)

### Convolution Theorem

Convolution in one domain (either spatial or frequency) is the same as multiplication in the other domain. If F(s) = F{f(t)} and G(s) = F{g(t)}, then

    F{f * g} = F(s) G(s)
    F{f g} = F(s) * G(s)

### Rayleigh's Theorem

Total energy (sum of squares) is the same in either domain:

    integral |f(t)|^2 dt = integral |F(s)|^2 ds

## Mike Bailey, Oregon State University: GLSL Image Processing (mjb, January 29, 2008)

### Image Basics

Treat the image as a texture. To get from the current texel to a neighboring texel, add +/- 1/ResS, +/- 1/ResT to the current (s, t).

### Image Negative

    (1-R, 1-G, 1-B)

### Image Distortion

```glsl
uniform float S, T;
uniform float Power;
uniform sampler2D TexUnit;

void main()
{
    vec2 st = gl_TexCoord[0].st;
    vec2 delta = st - vec2(S, T);
    st = vec2(S, T) + sign(delta) * pow(abs(delta), vec2(Power));
    vec3 rgb = texture2D(TexUnit, st).rgb;
    gl_FragColor = vec4(rgb, 1.);
}
```
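Stepping back to the Fourier-transform properties: the convolution theorem above can be checked numerically with a naive DFT (a pure-Python sketch; the function names are mine, and a real application would use an FFT):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(N^2), fine for a demonstration)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def circular_convolve(x, h):
    """Circular convolution of two equal-length sequences."""
    N = len(x)
    return [sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)]
```

Per the theorem, the DFT of the convolution should equal the bin-by-bin product of the two DFTs.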
### Image Unmasking

Blend what you have with what you don't want to get more of what you do want:

    I_out = (1 - t) * I_dontwant + t * I_in

At t = 0 you get what you don't want; at t = 1, what you started with; at t > 1, more of what you do want.

### Brightness

    I_dontwant = vec3(0., 0., 0.);

### Contrast

    I_dontwant = vec3(0.5, 0.5, 0.5);

### HDTV Luminance Standard

    Luminance = 0.2125*Red + 0.7154*Green + 0.0721*Blue

### Saturation

    I_dontwant = vec3(luminance, luminance, luminance);

### Difference

    I_dontwant = I_before;  I_in = I_after;

### ChromaKey

Replace the fragment if R < T && G < T && B > 1.-T.

### Blur

Blur convolution:

    B = 1/16 * | 1 2 1 |
               | 2 4 2 |
               | 1 2 1 |

### Sharpening

Unmask using the blurred image: I_dontwant = I_blur.

### Embossing

```glsl
vec2 stp0 = vec2(1./ResS, 0.);
vec2 stpp = vec2(1./ResS, 1./ResT);
vec3 c00   = texture2D(ImageUnit, st).rgb;
vec3 cp1p1 = texture2D(ImageUnit, st + stpp).rgb;
vec3 diffs = c00 - cp1p1;
float max = diffs.r;
if (abs(diffs.g) > abs(max)) max = diffs.g;
if (abs(diffs.b) > abs(max)) max = diffs.b;
float gray = clamp(max + .5, 0., 1.);
vec4 grayVersion  = vec4(gray, gray, gray, 1.);
vec4 colorVersion = vec4(gray * c00, 1.);
color = mix(grayVersion, colorVersion, T);
```

### Edge Detection

Horizontal and vertical Sobel convolutions:

    H = | -1 -2 -1 |      V = | -1 0 1 |
        |  0  0  0 |          | -2 0 2 |
        |  1  2  1 |          | -1 0 1 |

    angle = atan2(V, H);  magnitude = sqrt(H^2 + V^2)

The fragment shader computes a luminance value, dot(rgb, vec3(0.2125, 0.7154, 0.0721)), at each of the nine neighboring texels:
```glsl
// Reconstructed from the garbled listing; the neighbor-offset names are my reading of the slide.
vec2 stp0 = vec2(1./ResS, 0.);
vec2 st0p = vec2(0., 1./ResT);
vec2 stpp = vec2(1./ResS, 1./ResT);
vec2 stpm = vec2(1./ResS, -1./ResT);
const vec3 W = vec3(0.2125, 0.7154, 0.0721);
vec3 irgb   = texture2D(ImageUnit, st).rgb;
float i00   = dot(irgb, W);
float im1m1 = dot(texture2D(ImageUnit, st - stpp).rgb, W);
float ip1p1 = dot(texture2D(ImageUnit, st + stpp).rgb, W);
float im1p1 = dot(texture2D(ImageUnit, st - stpm).rgb, W);
float ip1m1 = dot(texture2D(ImageUnit, st + stpm).rgb, W);
float im10  = dot(texture2D(ImageUnit, st - stp0).rgb, W);
float ip10  = dot(texture2D(ImageUnit, st + stp0).rgb, W);
float i0m1  = dot(texture2D(ImageUnit, st - st0p).rgb, W);
float i0p1  = dot(texture2D(ImageUnit, st + st0p).rgb, W);
float h = -1.*im1m1 - 2.*i0m1 - 1.*ip1m1 + 1.*im1p1 + 2.*i0p1 + 1.*ip1p1;
float v = -1.*im1m1 - 2.*im10 - 1.*im1p1 + 1.*ip1m1 + 2.*ip10 + 1.*ip1p1;
float mag = sqrt(h*h + v*v);
vec3 target = vec3(mag, mag, mag);
color = vec4(mix(irgb, target, T), 1.);
```

### Toon Rendering

Outline where the Sobel magnitude is large; quantize the colors elsewhere:

```glsl
float mag = sqrt(h*h + v*v);
if (mag > MagTol)
{
    color = vec4(0., 0., 0., 1.);
}
else
{
    rgb.rgb *= Quantize;
    rgb.rgb += vec3(.5, .5, .5);
    ivec3 irgb = ivec3(rgb.rgb);
    rgb.rgb = vec3(irgb) / Quantize;
    color = vec4(rgb, 1.);
}
```

Original image, then colors quantized, then outlines added. Toon rendering for non-photorealistic effects: use the GPU to enhance scientific, engineering, and architectural illustration.

### Mandelbrot Set

    z_{i+1} = z_i^2 + z_0

How fast does it converge, if ever?

### Julia Set

    z_{i+1} = z_i^2 + C

## Mike Bailey, CS 519, Oregon State University: Dome Projection

The viewing volume runs from -1 to 1. The edge of the circle represents the edge of the dome projection (your left, right, bottom, top as you are sitting in the theater).

### Dome Vertex Shader

Seen from the god's-eye view and from the side, each vertex is bent toward the dome rim by its angle phi off the view axis. A hedged reconstruction of the garbled listing:

```glsl
const float PI = 3.14159265;

void main(void)
{
    vec4 pos = gl_ModelViewMatrix * gl_Vertex;
    float lenxy = length(pos.xy);
    float phi = atan(lenxy, -pos.z);
    pos.xy = normalize(pos.xy) * phi / (PI / 2.);
    gl_Position = gl_ProjectionMatrix * pos;
}
```

### Dome Shader

Works with an interactive graphics application.
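The Mandelbrot/Julia iteration described above, and its "how fast does it converge, if ever?" question, can be sketched in Python (escape radius 2 is the standard divergence test; the function name is mine):

```python
def mandelbrot_iterations(c, max_iter=100):
    """Iterate z_{i+1} = z_i**2 + z_0 starting from z_0 = c; return the number
    of steps until |z| exceeds 2 (divergence), or max_iter if it never does."""
    z = c
    for i in range(max_iter):
        if abs(z) > 2.0:
            return i
        z = z * z + c
    return max_iter
```

Points inside the set (like 0 and -1) never escape; points far outside escape immediately, and the escape count is what fractal renderers map to color.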
Testing it out in the Reuben H. Fleet Science Center.

## A Better View of RenderMan: bvrman

bvrman (pronounced "BeaverMan") is a Windows program to allow a better view into RenderMan and some of its other associated programs. RenderMan, as it comes out of the box, is a command-line driven program. While that works OK on UNIX-based systems, it is horribly awkward on Windows. Thus was born bvrman, a Better View of RenderMan.

bvrman is started with Start > All Programs > Pixar > bvrman. At the same time a console window pops up. There are, at times, messages that are displayed there. Most of the time you can ignore it; iconify it if you want. The resulting user interface window looks like this (screenshot in the original).

### Editing and Rendering RIB Files

RIB stands for RenderMan Interface Bytestream. It is an ASCII-encoded input to the PhotoRealistic RenderMan rendering program. The RIB portion of the bvrman menu lets you Select, Edit, and Render a RIB file, with a "Display When Done" checkbox.

- Select: brings up a dialog box that allows you to select a RIB file. Its name gets displayed underneath the Select button. bvrman quickly parses your RIB file looking for the name of the image file that you will be creating. If it finds it, it places its name in the Display region below, just as if you had selected that image file yourself.
- Edit: brings up a WordPad window so that you can edit your RIB file. bvrman does not block on this, so you can come back to the bvrman interface even with the WordPad window open.
- Display When Done: clicking this checkbox causes bvrman to display the resulting image the moment the render is complete. Otherwise, bvrman just sits there.
- Render: sends your RIB file to the prman program. bvrman blocks until the render is done. When the render is complete, the other buttons become
ungrayed (they gray out to show that it is busy, and a percentage-done counter goes on in the console window), and the resulting image displays if the checkbox is checked. (mjb, April 1, 2006)

### Displaying Images

- Select: brings up a dialog box that allows you to select the image file you want displayed. It might already be selected, since bvrman fills this in when it parses the RIB file.
- Display: displays the Selected image in the standard Windows Image and Fax Viewer. This was chosen because it is quick, reasonably easy to use, lets you zoom, and is already on your Windows system, so you don't need anything else installed.

### Editing and Compiling Shaders

This allows you to edit and compile a RenderMan shader.

- Select: brings up a dialog box that lets you select a RenderMan shader source file, which will typically end in a .sl extension. The name of the selected shader file appears below the Select button.
- Edit: brings up a WordPad window so that you can edit your shader. As with the RIB editor, bvrman does not block on this, so you can come back to the bvrman interface even with the WordPad window open.
- Compile: invokes the RenderMan shader compiler. Error messages appear in the console window. A successful compile creates a compiled shader file, which has the same name as the shader source file but ends in a .slo extension. During compiling, the Select and Compile buttons are grayed; when it is done, they become ungrayed and an information dialog box appears.

### Making Textures

RenderMan requires images to be preprocessed into its own texture format before they can be used; this portion of bvrman performs that conversion.

- Make a Texture from an Image File, Select: brings up a dialog box that allows you to select the image file you want to use as a texture source. The texture preprocessor only accepts TIFF files, but you can specify any image type to bvrman and it will do the conversion for you. The name of the selected image file appears below the Select button.
- Make: invokes the RenderMan texture preprocessor. Error messages will appear in the console window. A successful make will result in the creation of a RenderMan texture file, which has the same name as the
texture source image but ends in a .tex extension. During texture making, the Select button is grayed out. You know the texture maker is finished when the button becomes ungrayed and an information dialog box appears.

### Miscellaneous

- Verbose: normally the messages in the console are things that you really need to know, but if you would like to see more of what is really going on behind the scenes, click this checkbox on. It can be, at times, voluminous. Don't say I didn't warn you.
- Quit: need I say more?

Sample renders, left-to-right, top-to-bottom: Plastic, Screen, Eroded, Plastic/Dented.

Questions? Comments? Direct bvrman questions and comments to Bailey, Oregon State University, 2117 Kelley Engineering Center, Corvallis, OR 97331-5501, 541-737-2542, mjb@cs.oregonstate.edu. (mjb, April 1, 2006)

## CS 519 Signal & Image Processing: Compression

### Typical Compression Rates

| Application | Uncompressed | Compressed |
|---|---|---|
| Voice (8 k samples/sec, 8 bits/sample) | 64 kbps | 2-4 kbps |
| Audio conference (8 k samples/sec, 8 bits/sample) | 64 kbps | 16-64 kbps |
| Digital audio, stereo (44.1 k samples/sec, 16 bits/sample) | 1.5 Mbps | 128 kbps-1.5 Mbps |
| Slow-motion video (10 fps, 176 x 120 frames, 8 bits/pixel) | 5.07 Mbps | 8-16 kbps |
| Video conference (15 fps, 352 x 240 frames, 8 bits/pixel) | 30.41 Mbps | 64-768 kbps |
| Video file transfer (15 fps, 352 x 240 frames, 8 bits/pixel) | 30.41 Mbps | 384 kbps |
| Digital video on CD-ROM (30 fps, 352 x 240 frames, 8 bits/pixel) | 60.83 Mbps | 1.5-4 Mbps |
| Broadcast video (30 fps, 720 x 480 frames, 8 bits/pixel) | 248.83 Mbps | 3-8 Mbps |
| HDTV (59.94 fps, 1280 x 720 frames, 8 bits/pixel) | 1.33 Gbps | 20 Mbps |

### Coding

All information in digital form must be encoded. Examples: binary numbers, ASCII, the IEEE floating-point standard. Coding can be simple or complex, lossless or lossy, efficient or inefficient in terms of memory (# of bits) and/or computation (# of CPU cycles). Compression is efficient coding in terms of memory.

### Information

Information is something unknown. More
probabilistically, it is something unexpected. What's the missing letter? "H E L _ O". Now what letter is missing? "_ A Y". Different letters and/or different contexts convey different amounts of information.

### Information vs. Data

- Data: the actual bits, bytes, letters, numbers, etc.
- Information: the content
- Redundancy: the difference between data and information; redundancy = data - information

Compression: keep the information and remove the redundancy, as much as possible. Data = Information + Redundancy.

### Types of Redundancy

Remember, redundancy = data - information. In general there are three types of redundancy:

- Coding: inefficient allocation of bits for symbols
- Intersample (interpixel): predictability in the data
- Perceptual (visual): more data than we can hear/see

### Types of Compression

Compression algorithms are characterized by information preservation:

- Lossless (information-preserving): no loss of information; text, legal or medical applications
- Lossy: sacrifice some information for better compression; web images
- Near-lossless: no or very little perceptible loss of information; increasingly accepted by legal and medical applications

### Quantifying Compression and Error

Compression is described either using the compression ratio (popular, but less technical) or the rate: bits per symbol (bps) or bits per pixel (bpp). Error is measured by comparing the compressed/decompressed result f^ to the original f:

    error_rms = sqrt( (1/(M*N)) * sum_{y=1}^{N} sum_{x=1}^{M} [f^(x,y) - f(x,y)]^2 )

    SNR_ms = sum_{x,y} f^(x,y)^2 / sum_{x,y} [f^(x,y) - f(x,y)]^2

Most lossy algorithms let you trade off accuracy vs. compression, described by the rate-distortion curve R(D): the fewer bits required, the more distorted the signal is. (The slide's R(D) plot is not recoverable here.)

### Quantifying Information: Entropy

The information content (entropy) of a symbol a with probability of occurrence p(a) is

    info(a) = -log2 p(a)  bits

Examples: with 8 possible symbols of equal probability, each symbol carries info(a) = -log2(1/8) = 3 bits. A symbol with probability 1/8 carries those same 3 bits no matter how many other symbols exist alongside it.
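The per-symbol information measure above, and its weighted average over a whole distribution, can be sketched directly (function names are mine):

```python
import math

def info(p):
    """Information content, in bits, of a symbol with probability p."""
    return -math.log2(p)

def entropy(probs):
    """Average information H = sum(p * info(p)) over a symbol distribution."""
    return sum(p * info(p) for p in probs if p > 0)
```

With 8 equally likely symbols this gives 3 bits per symbol, matching the example above; the two-coin, three-message distribution from the next slide averages 1.5 bits.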
### Entropy for a Language

The average information (entropy) for a language with n symbols a_1, a_2, ..., a_n is

    H = sum_{i=1}^{n} p(a_i) * info(a_i) = -sum_{i=1}^{n} p(a_i) * log2 p(a_i)

where p(a_i) is the probability of symbol a_i occurring.

### Entropy Example

Flip two fair coins and communicate one of three messages: both heads, both tails, one of each.

| Message | Probability | Information |
|---|---|---|
| Both heads | 1/4 | 2 bits |
| Both tails | 1/4 | 2 bits |
| One of each | 1/2 | 1 bit |
| Weighted average | | 1.5 bits |

### Context

Information is based on expectation, and expectation is based on context. Therefore, a full analysis of information content must consider context. Examples of contexts: the last n letters; the last n values of a time-sampled sequence; neighboring pixels in an image.

### Calculating Information in Context

Without considering context, the information content of symbol a_i is

    info(a_i) = -log2 p(a_i)  bits

Considering context, the information content of the same symbol after the sequence a_0 ... a_{i-1} is

    info(a_i) = -log2 p(a_i | a_0 ... a_{i-1})  bits

where p(a_i | a_0 ... a_{i-1}) is the probability of symbol a_i occurring immediately after symbols a_0 ... a_{i-1}.

### Entropy Coding

Entropy coding allocates bits per symbol (or per group of symbols) according to information content (entropy):

- Huffman coding: optimal unique coding on a per-symbol basis
- Arithmetic coding: encodes a sequence of symbols as an infinite-precision real number; about 2:1 better than Huffman
- Vector quantization: encoding large groups of symbols with lossy approximations

The theoretical compression limit of entropy coding is the entropy of the language (set of symbols) itself.

### Huffman Coding

An algorithm for producing variable-length codes based on entropy:

1. Sort the symbols according to probability of occurrence.
2. Repeat until there is just one symbol left: combine the two symbols with the least probability into a single new symbol and add their probabilities, then re-sort the symbols (an insertion sort of the new symbol) according to probability.

The combinations form a binary tree structure that can be encoded using a single bit for each level.
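The merge-the-two-least-probable loop above can be sketched with a heap (a minimal illustration; the function name and tie-breaking counter are mine):

```python
import heapq

def huffman_codes(probs):
    """Build Huffman codes for {symbol: probability}: repeatedly merge the two
    least-probable nodes, prefixing '0'/'1' down the resulting binary tree."""
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(sorted(probs.items()))]
    heapq.heapify(heap)
    counter = len(heap)          # unique tie-breaker so dicts are never compared
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]
```

On the two-coin example, the 1/2-probability message gets a 1-bit code and the two 1/4-probability messages get 2-bit codes, so the average code length equals the 1.5-bit entropy, and the codes are prefix-free.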
Huffman Coding (cont.)
For the two-coin example (both heads: 1/4, 2 bits; both tails: 1/4, 2 bits; one each: 1/2, 1 bit), properties of the resulting code:
- Decoding is just traversing the tree: when you reach a leaf, emit the symbol and start a new one.
- The prefix property, required for variable-length codes: no symbol's code appears as the beginning of a longer one.
- Each symbol is encoded with approximately ceil(-log₂ p(a)) bits.

Arithmetic Coding
Huffman encodes symbols one by one, so each symbol's bit allocation must be rounded up. Arithmetic coding doesn't use a one-to-one mapping between symbols and bit patterns: the entire string is encoded as one long real number by successively narrowing an interval (the slide's worked example narrows through the values 1, 0.2, 0.08, 0.072, 0.0588, 0.06752).

Vector Quantization
- Divide the signal into sets of n values and treat each set as one collective vector value.
- Quantize the vectors to some finite number of them; this causes loss but reduces the space of possible values.
- Use a unique code for each vector and transmit that code.
- Most VQ systems intelligently select the quantization based on the signal content, so you also have to send the codebook.
- Can extend to images using n × m blocks. Example: 4 × 4 blocks, 256 colors per pixel → 256¹⁶ possibilities → 128 bits/block; choose 4096 representative vectors → 12 bits/block.

Interpixel Redundancy
The basis of intersample (interpixel) redundancy is repetition and prediction.

Run-Length Encoding
Encode sequences of identical symbols as (symbol, count) pairs.
- Counts can be fixed-size, or special prefixes can indicate the number of bits for each count. Fixed-size counts can reduce compression if they are either too large or too small; variable-size counts add overhead for the prefixes.
- Can extend to multiple dimensions: encode the difference from the previous line (hopefully long runs of 0s), or encode using lengths/markers relative to the previous line.
- Useful for binary signals and black-and-white images, or for signals that have only a few possible values. 2-D RLE is used in the CCITT fax standard.

Lempel-Ziv-Welch
Basic idea: encode the longest possible previously-seen sequence, so the coded stream is a mixture of symbols and back-pointers. Better yet: keep a codebook of previously-seen sequences and store a codebook index instead of backwards pointers. Used in most common text-compression algorithms (e.g., zip) and the GIF image standard.
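The run-length encoding idea above fits in a few lines; a minimal sketch (my own illustration, using in-memory (symbol, count) pairs rather than a packed bit format):

```python
def rle_encode(seq):
    """Encode a sequence as (symbol, run-length) pairs."""
    runs = []
    for s in seq:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([s, 1])       # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Expand (symbol, run-length) pairs back into the original sequence."""
    return [s for s, n in runs for _ in range(n)]

data = list("aaaabbbcca")
runs = rle_encode(data)
print(runs)                       # [('a', 4), ('b', 3), ('c', 2), ('a', 1)]
assert rle_decode(runs) == data   # lossless round trip
```

Note how well this does on long runs (binary images) and how badly it would do on data with no repetition, which is exactly the fixed-vs-variable count trade-off mentioned above.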
Lempel-Ziv-Welch: Basic Idea

  codebook = all single symbols
  sequence = empty
  while getsymbol(symbol):
      if sequence + symbol is in codebook:
          sequence = sequence + symbol
      else:
          output(code for sequence)
          add sequence + symbol to codebook
          sequence = symbol

(One step is implicit in the slide's pseudocode: when the input ends, the code for whatever sequence remains must still be output.)

Lempel-Ziv Example
"Mary had a little lamb, little lamb, little lamb. Mary had a little lamb. Its fleece was white as snow."

Predictive Coding
Use one set of pixels to predict another. Example predictions:
- the next pixel is like the last one
- the next scan line is like the last one
- the next frame is like the last one
- the next pixel is the average of the already-known neighbors
The error of the prediction (the residual) hopefully has smaller entropy than the original signal. The information used to make the prediction is the context.

Predictive Coding: Key
Sender and receiver use the same predictive model.
- Sender: make the prediction (no peeking), then send the residual (difference).
- Receiver: make the same prediction, then add the residual to get the correct value.
- Lossless: entropy-code the residual. Lossy: quantize the residual.

Delta Modulation
Basic algorithm:
- Prediction: the next signal value is the same as the last.
- The residual is the difference (delta) from the previous value, encoded in a smaller number of bits than the original.
- Often used in audio systems (phones).
Problem: a limited-range or limited-precision delta can cause under/overshoot: granular noise where the signal is flat, and slope overload where the signal changes faster than the delta can track. [worked encoder/decoder table omitted; the numbers are unrecoverable from the original]

Predictive Image Coding
Predict the next pixel based on neighbors that have already been seen. A simple predictor is the average of the four causal neighbors:

  a b c
  d x        x̂ = (a + b + c + d) / 4

Can use a larger context, and can quantize (lossy) or entropy-code (lossless) the residual.
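A 1-bit delta modulator in the spirit of the algorithm above can be sketched as follows (my own illustration, transmitting only the sign of each residual with a fixed step; real systems quantize the delta more finely):

```python
def dm_encode(signal, step):
    """1-bit delta modulation: transmit only the sign of the prediction error."""
    bits, pred = [], 0.0
    for x in signal:
        bit = 1 if x >= pred else 0        # sign of the residual
        bits.append(bit)
        pred += step if bit else -step     # mirror the decoder's predictor
    return bits

def dm_decode(bits, step):
    """Rebuild the signal by stepping the predictor up or down per bit."""
    out, pred = [], 0.0
    for bit in bits:
        pred += step if bit else -step
        out.append(pred)
    return out

# A ramp rising at exactly one step per sample tracks well (error ~ one step);
# a steeper ramp would exhibit slope overload, a flat one granular noise.
signal = [0.1 * t for t in range(20)]
recon = dm_decode(dm_encode(signal, step=0.1), step=0.1)
print(max(abs(a - b) for a, b in zip(signal, recon)))  # small tracking error
```

Varying `step` against the same input shows the trade-off directly: a small step overloads on slopes, a large step rattles (granular noise) on flat regions.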
Predictive Image Coding (cont.)
The histogram of the prediction error (residual) is much more sharply peaked around zero than the histogram of the original gray levels. [figures: gray-level histogram vs. prediction-error histogram]

Residual Coding: Entropy/Quantization
The residual values should be very small, so:
- use fewer bits for smaller values (entropy coding), and/or
- use finer quantization (less loss) for smaller values. [figure: quantizer input/output staircase]

Perceptual Redundancy
The eye is less sensitive to color and to high frequencies. So:
- allocate more bits to intensity than to chromaticity, and
- allocate more bits to low frequencies than to high frequencies.
Similar tricks work with the ear and its varying sensitivity to different frequencies (e.g., the psychoacoustic model plays a key role in MP3).

Block Transform Coding
Use some transform to convert from the spatial domain to another, e.g. a frequency-based one.
- Advantage 1: many transforms pack the information into parts of the domain better than spatial representations do.
- Advantage 2: coefficients can be quantized according to perception, e.g. quantize high frequencies more coarsely than low ones.
- Problem: artifacts caused by imperfect approximation in one place get spread across the entire image.
- Solution: independently transform and quantize blocks of the image → block transform encoding.

Transform Coding: General Structure
- Encoder: input image → construct subimages (blocks) → forward transform → quantizer → symbol encoder → compressed image
- Decoder: compressed image → symbol decoder → inverse transform → merge subimages → decompressed image

Transform Coding
There are many other transforms besides the Fourier transform, all with the same structure.

Forward transform, general form:

  T(u,v) = Σ_{y=0..M-1} Σ_{x=0..N-1} f(x,y) · g(x,y,u,v)

Inverse transform, general form:

  f(x,y) = Σ_{v=0..M-1} Σ_{u=0..N-1} T(u,v) · h(x,y,u,v)

Most of the time g(x,y,u,v) = h(x,y,u,v), possibly normalized or complex-conjugated.

Transform Coding (cont.)
Other basis sets for 4 × 4 blocks include Walsh-Hadamard and cosine bases. [figures: basis images]

Discrete Cosine Transform (DCT)
The basis functions are real-valued cosines:

  g(x,y,u,v) = h(x,y,u,v) = α(u) α(v) cos[ (2x+1)uπ / 2N ] cos[ (2y+1)vπ / 2N ]

where α(u) = sqrt(1/N) if u = 0, and sqrt(2/N) otherwise.
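The energy-compaction claim behind block transform coding is easy to check numerically. A minimal 1-D DCT sketch (my own illustration; JPEG applies this per row and column of an 8 × 8 block):

```python
from math import cos, pi, sqrt

def alpha(u, N):
    """DCT normalization: sqrt(1/N) for u == 0, sqrt(2/N) otherwise."""
    return sqrt(1.0 / N) if u == 0 else sqrt(2.0 / N)

def dct(f):
    """1-D DCT-II: T(u) = alpha(u) * sum_x f(x) cos((2x+1)u*pi / 2N)."""
    N = len(f)
    return [alpha(u, N) * sum(f[x] * cos((2 * x + 1) * u * pi / (2 * N))
                              for x in range(N))
            for u in range(N)]

def idct(F):
    """Inverse (DCT-III); with these alphas the transform is orthonormal."""
    N = len(F)
    return [sum(alpha(u, N) * F[u] * cos((2 * x + 1) * u * pi / (2 * N))
                for u in range(N))
            for x in range(N)]

# Energy compaction: a flat block maps to a single DC coefficient,
# so everything after F[0] quantizes to zero for free.
F = dct([5.0] * 8)
print([round(c, 6) for c in F])   # F[0] carries all the energy; the rest are ~0
assert all(abs(a - b) < 1e-9 for a, b in zip(idct(dct([1.0, 4.0, 2.0, 7.0])),
                                             [1.0, 4.0, 2.0, 7.0]))
```

Feeding in smooth ramps instead of a constant shows the same effect more gradually: the low-u coefficients dominate and the high-u ones are near zero, which is what makes coarse quantization of high frequencies cheap.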
Discrete Cosine Transform (cont.)
How can we get away with cosines and no sines? The DCT assumes the signal is reflected at its boundaries ("alternating" periodicity) rather than simply wrapping around, so the implied extension is even and no sine terms are needed. [figure: simple periodic vs. reflected extension]

JPEG
- Intensity/chromaticity: convert to the YCbCr color model and downsample (allocate fewer bits to) the chromaticity components.
- 8 × 8 block DCT: energy compaction by converting to a frequency representation.
- Predictively encode the DC coefficients (block averages): takes advantage of redundancy between neighboring blocks.
- Quantize the AC coefficients: many high frequencies become zero.
- Zigzag ordering: changes 2-D to 1-D and puts similar frequencies together.
- Run-length encoding: collapses the long runs of zeros.
- Entropy encode: to more efficiently encode what's left of the RLE sequences.

Progressive Methods
Progressive methods allow viewing of the image while it is still downloading.
- Interlaced GIF: send every 8th scanline, then every 4th, then every 2nd, then all; interpolate the intermediate lines until they arrive.
- Progressive JPEG: send the DC coefficients, then all the lowest-frequency AC coefficients, then successively higher AC coefficients.

Other Compression Methods
- Wavelets: similar in principle to block-transform-based methods; greater compression/fidelity by exploiting both spatial and frequency information.
- Fractals: blocks/portions of the image are represented as rotated, translated, and rescaled copies of other blocks. Probably the highest claimed compression rates; useful in some applications, very lossy in others. Research continues into hybrid fractal compression methods.

Evaluating Compression
The compression ratio (or rate) isn't the only thing: you must also consider any distortion, error, or loss introduced (rate-distortion curves), and avoid lossy compression that causes distortions or artifacts if the image is going to be processed further.

CS 519 Signal & Image Processing: The Fourier Transform

Frequency Analysis
To use transfer functions we must first decompose a signal into its component frequencies. The basic idea: any signal can be written as the sum of sines and cosines of different frequencies. The mathematical tool for doing this is the Fourier Transform.
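The "any signal is a sum of sines and cosines" claim can be seen directly by synthesizing a square wave from its odd sine harmonics (a standard illustration, my own sketch, not from the course materials):

```python
from math import sin, pi

def square_wave_partial_sum(t, n_terms):
    """Fourier synthesis of a +/-1 square wave from its odd sine harmonics:
    f(t) = (4/pi) * sum_k sin((2k+1) t) / (2k+1),  k = 0 .. n_terms-1."""
    return (4 / pi) * sum(sin((2 * k + 1) * t) / (2 * k + 1)
                          for k in range(n_terms))

# The ideal square wave equals 1 on (0, pi); the partial sums approach it
# as more harmonics are added.
for n in (1, 10, 1000):
    print(n, round(square_wave_partial_sum(pi / 2, n), 4))
```

Sampling the partial sums near the jump at t = 0 instead of at t = π/2 also shows the persistent overshoot (Gibbs phenomenon) that motivates caring about how signals behave at discontinuities.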
General Idea of Transforms
Given an orthonormal (orthogonal, unit-length) basis set of vectors ē_k:
- Any vector v̄ in the space spanned by this basis set can be represented as a weighted sum of those basis vectors: v̄ = Σ_k a_k ē_k.
- To get a vector's weight relative to a particular basis vector: a_k = v̄ · ē_k.
- Thus the vector can be transformed into the weights a_k; likewise, the transformation can be inverted by turning the weights back into the vector.

Linear Algebra with Functions
The inner (dot) product of two vectors is the sum of the pointwise multiplication of their components:

  f̄ · ḡ = Σ_i f_i g_i

Can't we do the same thing with functions?

  ⟨f, g⟩ = ∫ f(x) g(x) dx

Functions satisfy all of the linear-algebraic requirements of vectors.

Transforms with Functions
Just as we transformed vectors, we can also transform functions:

  Vectors:    a_k = v̄ · ē_k                    v̄ = Σ_k a_k ē_k
  Functions:  a_k = ⟨f, e_k⟩ = ∫ f(t) e_k*(t) dt    f(t) = Σ_k a_k e_k(t)

Basis Set: Generalized Harmonics
The set of generalized harmonics e^{2πist} (one per frequency s) forms an orthonormal basis set for functions. Remember:

  e^{2πist} = cos(2πst) + i sin(2πst)

The real part is a cosine of frequency s; the imaginary part is a sine of frequency s.

The Fourier Series / The Fourier Transform
Most tasks need an infinite number of basis functions (frequencies), each with its own weight:

  Fourier Series (discrete frequencies k):  a_k = ∫ f(t) e^{-2πikt} dt    f(t) = Σ_k a_k e^{2πikt}
  Fourier Transform (continuous s):         F(s) = ∫ f(t) e^{-2πist} dt   f(t) = ∫ F(s) e^{2πist} ds

The Fourier Transform: How to Interpret the Weights
The weights F(s) = ∫ f(t) e^{-2πist} dt are complex numbers:
- Real part: how much of a cosine of frequency s you need.
- Imaginary part: how much of a sine of frequency s you need.
- Magnitude |F(s)|: how much of a sinusoid of frequency s you need.
- Phase φ(s): what phase that sinusoid needs to be.
To convert the weights back into a signal, invert the transform: f(t) = ∫ F(s) e^{2πist} ds.
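The claim that the generalized harmonics form an orthonormal basis can be verified numerically with their discrete counterparts; a small sketch (my own, not course code), using the discrete inner product as the analogue of ∫ f(t) g*(t) dt:

```python
import cmath

def harmonic(k, N):
    """Discrete generalized harmonic e_k[t] = exp(2*pi*i*k*t/N) / sqrt(N)."""
    return [cmath.exp(2j * cmath.pi * k * t / N) / N ** 0.5 for t in range(N)]

def inner(f, g):
    """Discrete analogue of <f, g> = integral of f(t) * conj(g(t)) dt."""
    return sum(a * b.conjugate() for a, b in zip(f, g))

N = 8
for j in range(N):
    for k in range(N):
        ip = inner(harmonic(j, N), harmonic(k, N))
        expected = 1.0 if j == k else 0.0
        assert abs(ip - expected) < 1e-12   # orthonormal: <e_j, e_k> = delta_jk
```

Orthonormality is exactly what lets each weight be computed independently of the others: projecting onto e_k picks out only the e_k component of the signal.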
Fourier Pairs
Use the Fourier transform (denoted F) to get the weights for each harmonic component in a signal:

  F(s) = F{f(t)} = ∫ f(t) e^{-2πist} dt

and use the inverse Fourier transform (denoted F⁻¹) to recombine the weighted harmonics into the original signal:

  f(t) = F⁻¹{F(s)} = ∫ F(s) e^{2πist} ds

We write a signal and its transform as a Fourier transform pair: f(t) ⟷ F(s).

Does the FT Always Exist?
Yes, if the signal has a finite sum (area under the curve):

  ∫ |f(t)| dt ≤ B

for some non-infinite bound B. If f(t) is periodic, we only need to test over one period P:

  ∫_P |f(t)| dt ≤ B

What about a constant function f(t) = k?

Periodic Signals
For a periodic signal with period N, the underlying frequencies must also repeat over the period N, so each component frequency must be a multiple of the frequency of the periodic signal itself: 1/N, 2/N, 3/N, … If the signal is also discrete, the highest representable frequency repeats after a single sample, so there are no more than N components.

Discrete Fourier Transform (DFT)
If we treat a discrete signal with N samples as one period of an infinite periodic signal:

  F[v] = (1/N) Σ_{t=0..N-1} f[t] e^{-2πivt/N}
  f[t] = Σ_{v=0..N-1} F[v] e^{2πivt/N}

Note: for a periodic function, the discrete Fourier transform is the same as the continuous transform. We give up nothing in going from a continuous to a discrete transform, as long as the function is periodic.

Normalizing DFTs
The 1/N normalization can be attached to the forward transform (as above), to the inverse, or split as 1/sqrt(N) on each; conventions differ between texts and libraries.

Questions: What would the code for the discrete Fourier transform look like? What would its computational complexity be?
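The slide's own questions have a direct answer: the naive DFT is a double loop, O(N²). A minimal sketch (my own illustration, using the 1/N-on-the-forward-transform convention from above):

```python
import cmath

def dft(f):
    """Direct O(N^2) discrete Fourier transform:
    F[s] = (1/N) * sum_t f[t] * exp(-2*pi*i*s*t/N)."""
    N = len(f)
    return [sum(f[t] * cmath.exp(-2j * cmath.pi * s * t / N)
                for t in range(N)) / N
            for s in range(N)]

# A pure harmonic lands entirely in a single frequency bin.
N = 8
signal = [cmath.exp(2j * cmath.pi * 3 * t / N) for t in range(N)]
F = dft(signal)
assert abs(F[3] - 1) < 1e-12
assert all(abs(F[s]) < 1e-12 for s in range(N) if s != 3)
```

Both loops run over N values, hence O(N²); the FFT derivation that follows shows how to do better when N is a power of 2.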
Fast Fourier Transform
If we let W_N = e^{-2πi/N}, the discrete Fourier transform can be written

  F[s] = (1/N) Σ_{t=0..N-1} f[t] W_N^{st}

If N is a multiple of 2, N = 2M for some positive integer M; substituting 2M for N gives

  F[s] = (1/2M) Σ_{t=0..2M-1} f[t] W_{2M}^{st}

Separating out the M even and M odd terms:

  F[s] = (1/2) [ (1/M) Σ_{t=0..M-1} f[2t] W_{2M}^{2ts}  +  (1/M) Σ_{t=0..M-1} f[2t+1] W_{2M}^{(2t+1)s} ]

Notice that W_{2M}^{2ts} = e^{-2πi(2ts)/(2M)} = e^{-2πits/M} = W_M^{ts}, and W_{2M}^{(2t+1)s} = W_M^{ts} W_{2M}^{s}. So

  F[s] = (1/2) [ F_even[s] + W_{2M}^{s} F_odd[s] ]

where F_even and F_odd are the M-point DFTs of the even- and odd-indexed samples. We can use this for the first M terms of the Fourier transform of 2M items, and then reuse the same F_even and F_odd values to compute the last M terms (since W_{2M}^{s+M} = -W_{2M}^{s}):

  F[s+M] = (1/2) [ F_even[s] - W_{2M}^{s} F_odd[s] ]

Fast Fourier Transform
If M is itself a multiple of 2, do it again. If N is a power of 2, recursively subdivide until you have one element, which is its own Fourier transform:

  ComplexSignal FFT(ComplexSignal f):
      if length(f) == 1: return f
      M = length(f) / 2
      W = e^{-2πi/(2M)}
      even = FFT(EvenTerms(f))
      odd  = FFT(OddTerms(f))
      for s = 0 to M-1:
          result[s]   = (even[s] + W^s * odd[s]) / 2
          result[s+M] = (even[s] - W^s * odd[s]) / 2
      return result

Computational complexity: the direct discrete Fourier transform is O(N²); the fast Fourier transform is O(N log N). Remember: the FFT is just a faster algorithm for computing the DFT; it does not produce a different result.

CS 519 Signal & Image Processing: Sampling

Sampling
Sampling takes a continuous function f to a discrete one: g[n] = f(nΔt). In the spatial/temporal domain, sampling f at interval Δt is the same as multiplying it by a comb:

  g = f · comb_h,  where h = Δt

[figures: continuous signal, sampling comb, discrete signal]

Sampling: Frequency Domain
Multiplying by comb_h in the spatial/temporal domain is the same as convolving in the frequency domain with the transform of comb_h (a comb of spacing 1/h):

  G = F ⊛ COMB_{1/h}

Convolution of a function with a comb causes a copy of the function to stick to each tooth of the comb, and all of the copies add together. [figures: spectrum, comb's spectrum, spectrum of the discrete signal]

Reconstruction
In theory, we can reconstruct the original continuous function by removing all of the extraneous copies of its spectrum created by the sampling process:
  F(s) = G(s) · H(s)

where H is an ideal low-pass (box) filter. In other words, keep everything in the frequency domain between -1/(2h) ≤ s ≤ 1/(2h) and throw the rest away. [figures: spectrum of discrete signal × rectangular box filter = reconstructed signal spectrum]

The Sampling Theorem
We can only do this reconstruction if the duplicated copies do not overlap. They do not overlap iff:
1. the signal is band-limited, and
2. the highest frequency in the signal is less than 1/(2h).
In other words, the sampling rate 1/h must be twice the highest frequency in the signal. This is called the Nyquist rate.

Aliasing
What if the duplicated copies in the frequency domain do overlap? High-frequency parts of the signal (those higher than 1/(2h)) intrude into the neighboring copies; the higher the frequency, the lower the point at which it lands in the adjacent copy. These high frequencies masquerading as low frequencies are called aliasing; the resulting false low-frequency patterns in images are called Moiré patterns. [figures: sampling above, at, and below the Nyquist rate; Moiré pattern examples]

Preventing Aliasing
You have two choices:
1. increase your sampling rate, or
2. decrease the highest frequency in the signal (low-pass filter) before sampling.

Reconstruction
Reconstruction was F(s) = G(s) · H(s). In the time/spatial domain this multiplication by a box is equivalent to convolution with a sinc function. So convolve your discretely-sampled, non-aliased image with a sinc and you can reconstruct the original continuous image.

Imperfect Reconstruction
Problem: you can't actually do it; the sinc function has infinite extent. The best you can do is come close. By not perfectly clipping in the frequency domain, the leftover pieces of the duplicate copies now look like false high frequencies. "Jaggies" in graphics are false high frequencies caused by poor reconstruction. [figure: spectrum of discrete signal under imperfect reconstruction]

Correcting Imperfect Reconstruction
1. Sample well above the Nyquist rate.
2. Low-pass filter after reconstruction.
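Aliasing can be demonstrated in a few lines: sampled below the Nyquist rate, a 9 Hz sine produces exactly the same samples as a 1 Hz sine (a minimal sketch of my own, not course code):

```python
from math import sin, pi

fs = 8.0          # sampling rate: can only represent frequencies below fs/2 = 4 Hz
f_high = 9.0      # above the Nyquist limit...
f_alias = 1.0     # ...so it masquerades as f_high - fs = 1 Hz

high  = [sin(2 * pi * f_high  * n / fs) for n in range(16)]
alias = [sin(2 * pi * f_alias * n / fs) for n in range(16)]

# The two sample sequences are indistinguishable: that is aliasing.
assert max(abs(a - b) for a, b in zip(high, alias)) < 1e-9
```

Once the samples are taken, no reconstruction filter can tell the two signals apart, which is why the anti-aliasing low-pass filter must come before sampling, not after.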
Typical Processing Pipeline
1. Low-pass filter to reduce aliasing.
2. Sample.
3. Do something with the digitized signal/image.
4. Reconstruct.
5. Low-pass filter to correct for imperfect reconstruction.

The Discrete Frequency Domain
If sampling in the time/spatial domain is multiplication by a comb, so is sampling (discretizing) the frequency domain. Multiplication by a comb in one domain is convolution with a comb of inverse spacing in the other:
- Discrete time/spatial samples → replicated copies of the signal's spectrum appear in the frequency domain.
- Discrete frequencies → replicated copies of the signal appear in the time/spatial domain, i.e. the signal is periodic.

The Discrete Frequency Domain
If a signal is N time samples long and we discretize the frequency domain at 1/N intervals (like the DFT), we reproduce the signal every N samples in the time domain. The discrete Fourier transform of a truncated, finite signal is the same as the continuous Fourier transform of the corresponding infinitely periodic signal.

Frequency Resolution
An N-element signal is accurate in the frequency domain only to 1/N. To be more accurate in the spatial domain, sample more frequently; to be more accurate in the frequency domain, sample longer.

CS 519 Signal & Image Processing: Reconstruction

Image Reconstruction
Reconstruction is the process of attempting to recreate the original signal given a corrupted one. Terms:
- Scene: the real world.
- Image: a (possibly corrupted) picture of a scene.
- Image reconstruction: attempting to recreate the scene from an image.

Required Knowledge
Reconstruction algorithms usually use one or both of:
- knowledge about the process that corrupted the image, and
- knowledge about properties of the original scene.
Examples: deconvolution requires knowledge of the point-spread function; Wiener filtering requires an estimate of the strength of the noise.

Knowledge About the Corruption Process
Knowledge about the corruption process puts limits on reconstruction. It is usually thought of as
fitting the data: the reconstructed image can't vary too much from the original corrupted image. Example: assuming white noise with standard deviation σ, the probability of getting noisy image g from scene f is (up to normalization)

  p(g | f) = Π_i e^{ -( f(i) - g(i) )² / (2σ²) }

Knowledge About Scene Properties
Possible general properties: generally smooth, with a few scattered rapid transitions. Possible specific properties: known scene contents (subject, anatomy, etc.), or other related images/scenes, so that p(f) can be determined for all scenes f.

Knowledge About Scene Properties: Example
Penalize unsmooth images:

  p(f) ∝ Π_i Π_{k ∈ N(i)} e^{ -| f(i) - f(k) | }

where N(i) denotes the neighborhood of i. Notice that one large discontinuity in intensity is more likely than several smaller discontinuities; this results in piecewise-constant images with infrequent but rapid discontinuities.

Statistical Reconstruction
Goal: over all possible reconstructed scenes f, find the one that maximizes p(f | g) for the given image g. Problem: your knowledge of the imaging process tells you p(g | f), but how do you determine p(f | g)? Really big problem: how big is the space of all possible scenes f?

Bayesian Reconstruction

  p(f | g) = p(g | f) · p(f) / p(g)

- p(g | f) is the data term.
- p(f) is the a priori knowledge (the prior).
- p(g) is usually assumed to be uniform.
- p(f | g) is the a posteriori estimate; this is often called maximum a posteriori (MAP) estimation.

Bayesian Reconstruction
If p(g | f) and p(f) are negative exponentials, the process usually boils down to minimizing some function

  data(f, g) + λ · prior(f)

where data(f, g) penalizes reconstructions f that don't agree with the data g, and prior(f) penalizes reconstructions that are a priori unlikely. The weight λ controls the relative importance of the two.

Balancing the Data and Prior Terms
If λ is set too low, the data term dominates and there is little improvement. If λ is set too high, the prior term dominates and the reconstruction may not be true to the original.

Optimization
Since the space of all f to search is far too large, non-exhaustive functional-minimization techniques must be used: gradient descent, simulated annealing, neural networks, graduated non-convexity, etc.
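A toy 1-D version of the data + λ·prior minimization can be run with plain gradient descent (my own sketch, not the course's code; it uses a quadratic smoothness prior rather than the absolute-value prior above, purely to keep the gradient simple):

```python
def map_denoise(g, lam=2.0, step=0.05, iters=2000):
    """Minimize  sum (f-g)^2  +  lam * sum (f[i+1]-f[i])^2  by gradient descent.

    The first term is the data term (stay faithful to the observed g);
    the second is a quadratic smoothness prior weighted by lam.
    """
    f = list(g)
    for _ in range(iters):
        grad = [2 * (f[i] - g[i]) for i in range(len(f))]   # data term gradient
        for i in range(len(f) - 1):
            d = 2 * lam * (f[i + 1] - f[i])                 # prior term gradient
            grad[i] -= d
            grad[i + 1] += d
        f = [fi - step * gi for fi, gi in zip(f, grad)]
    return f

noisy = [0.0, 0.2, -0.1, 0.15, 1.0, 0.9, 1.2, 0.95]   # a step edge plus noise
smooth = map_denoise(noisy)
roughness = lambda f: sum((f[i + 1] - f[i]) ** 2 for i in range(len(f) - 1))
assert roughness(smooth) < roughness(noisy)           # prior term reduced
```

Re-running with a very small or very large `lam` reproduces the balancing problem described above: too small and the output barely changes from the noisy data, too large and the edge itself is smoothed away (the quadratic prior, unlike the absolute-value one, does not favor piecewise-constant results).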
Other Reconstruction Methods
There are many other reconstruction methods, but nearly all of them:
- use knowledge about the process that corrupted the image/signal,
- use knowledge about properties of the original scene/data, and
- attempt to optimize some form of likelihood function.

Mike Bailey, Oregon State University

Note: a shader program can have any combination of vertex, geometry, and fragment shaders in it. All three are not required concurrently.

The Geometry Shader: Where Does it Fit in the Pipeline?
Vertices → vertex shader → transformed vertices → primitive assembly → assembled primitive → geometry shader → rasterizer → interpolated values → fragment shader → pixels. (mjb, May 10, 2009)

Geometry Shader: What Does it Do?
The application generates these: Points, Lines, Line Strip, Line Loop, Lines with Adjacency, Line Strip with Adjacency, Triangles, Triangle Strip, Triangle Fan, Triangles with Adjacency, Triangle Strip with Adjacency. The driver feeds these one at a time (one Point, Line, Line with Adjacency, Triangle, or Triangle with Adjacency) into the Geometry Shader. The Geometry Shader generates (almost) as many of these as it wants: Points, Line Strips, Triangle Strips. There needn't be any correlation between Geometry Shader input type and output type: points can generate triangles, triangles can generate triangle strips, etc.

Additional Arguments are Now Available for glBegin
  GL_LINES_ADJACENCY_EXT
  GL_LINE_STRIP_ADJACENCY_EXT
  GL_TRIANGLES_ADJACENCY_EXT
  GL_TRIANGLE_STRIP_ADJACENCY_EXT

New Adjacency Primitives and What They Do by Default
- Lines with Adjacency: 4N vertices are given, where N is the number of line segments to draw. A line segment is drawn between vertices 1 and 2; vertices 0 and 3 are there to provide adjacency information.
- Line Strip with Adjacency: N+3 vertices are given, where N is the number of line segments to draw. Line segments are drawn between vertices 1 and 2, 2 and 3, …, N and N+1; vertices 0 and N+2 are there to provide adjacency information.

New Adjacency Primitives and what they do by
default Tr39angles w39th Adjacency 6Nverlices ere gi n where N is the number of triengies to drew Points 0 2 end 4 define the triengie Points 1 3 end 5 tell where edjeeent triengies ere angle St p thAdjacency 42N vertices eregiven whereN is the number of triengies to dr Points 024 6810 define the trien ies Points 1 3 7 9 11 te where edjecent triengies ere mibeiway 10 2009 glProgram Parameter Must Be Called Before the Shaders are Linked glProgramParameteriEXT progname GLGEOMETRYVERTCESOUTEX1Z int value Maximum number of vertices this Geometry Shader will be emitting mil 7 May 10 2009 glProgram Parameter Must Be Called Before the Shaders are Linked glProgramParameteriEXT progname GLGEOMETRYNPUTTYPEEX39E int value The primitive type that this Could actually come from GLLINES Geometry Shader Will be GLLINESTRIP or GLLINELO0P receiving Could actually come from L LINES ADJACENCY EXT or GLLINE TRIPADJACENCYEXT GL PO39 NTS GLL NES GLL NESADJACENCYEXT Could actually come from GLTRIANGLES GL TRIANGLE STRIP or y GL TRIANGLES GLTRIANGLEFAN GLTRANG LESADJACENCYEXT Could actually come from GLTRIANGLESADJACENCYEXT or GLTRIANGLESTRIPADJACENCYEXT mm 7 May 10 2009 glProgram Parameter Must Be Called Before the Shaders are Linked glProgramParameteriEXT progname GLGEOMETRY0UTPUTTYPEEX1 int value The primitive type that this Geometry Shader will be sending on to the rest of the pipeline GLPONTS GLLNESTRP GLTRANGLESTRP mil 7 May 10 2009 Warning glProgramParameteriEXT calls can go into a Display List deferring their execution until it is too late Bad idea GLuint dl glGenLists 1 glNewList dl GLCOMPILE ut data uele Morallf ou quot a 39 r 39 inp l 39 a Program Parameters and the Linking of the Program until after the Display List is complete 39 g quot calls to 39 39 quot39 39 If a Vertex Shader Writes Variables gPosition gTexCoord gFrontColor gBackCoIor gPointSize gLayer varying In the Geometry Shader the dimensions indicated are givm by the variable g Verticesn although you will already 
know this by the type of as gt gt gt gt gt gt then the Geometry Shader and will Write Them to the Fragment will Read Them as Shader as gt gPosition gTexCoord gPositionI na gTexCoordl nE gt gFrontColorlnH gt gFrontColor gBackCoIorln gt gBackCoIor gPointSizenl gPointSize gLayerln gt gLayer varying in varying out i GLiF O NTS 2 LiLlNES 4 GLiLlNEsiADJACENCYiBCF 3 GLiTRlA GLES 6 GLiTRlANGLES ADJACENCYiEXT geometry you are inputting mlb 7 May 10 2009 The Geometry Shader Can Assign These Variables gPosition gTexCoord gFrontColor gBackCoIor gPointSize gLayer gPrimitivel D When the Geometry Shader calls EmitVertex this set of variables is copied to a slot in the shader s Primitive Assembly step When the Geometry Shader calls EndPrimitive the vertices that have been saved in the Primitive Assembl ste are then assembled rasterized etc Note there is no BeginPrimitive quot routine It is implied by 1 the start of the Geometry Shader or 2 returning from the EndPrimitive cull Note there is no need to cull EndPrimitive at the end of the Geometry Shader it is implied mlbrlvlay 10 2009 Notes 39 In a VerTex Shader varying variables become 1 inpuTs To The rasTerizer if There is no GeomeTry Shader or 2 inpuTs To The GeomeTry Shader if There is one 39 If There is a GeomeTry Shader varying variables from The VerTex Shader are collecTed b The rimiTive assemb s e and assed To The GeomeTr Shader once enough verTices have been collecTed for The currenT geomeTry inpuT Type 39 If There is a GeomeTry Shader Then There musfalso be a VerTex Shader 39 GeomeTry Shaders can access uniform variables jusT like VerTex and FragmenT shaders can 39 GeomeTry Shaders can access all of The sTandard OpenGLdefined variables such as The TransformaTion maTrices Thus you can Transform The original verTices in The VerTex Shader or Transform Them as They are being emiTTed from The GeomeTry Shader whichever is more convenienT 39 In a GeomeTry Shader The userdefined inpuT varying variables coming from The VerTex 
Shader are declared as varth in The GeomeTry Shader39s oquuT varying variables headed To The rasTerizer are declared as varying aLf Huuii id ii With just a Vertex and Fragment shader varying means two different things varying pervertex one at a time EWi IITWH varying perfragment one at a time mibrlviay 10 2009 With a Vertex Geometry and Fragment shader varying means four different things varying pervertex one at a time varying out pervertex one at a time latched by EmitVertex varying perfragment one at a time mil 7 May101009 Passing information from a Vertex Shader to a Fragment Shader can only happen via a Geometry Shader varying vecA gIP05Itlon J varying vec4 Color Color glColor Already declared for you varying in Iec4 glPosion n339 varying in vec4 Color3 varying out vecll gi30ition varying out vec4 OColor varying vec4 OColor mibilviay 10 2009 Example Expanding 4 Points into a Bezier Curve with a Variable Number of Line Segments bezierurvegib Geometrylnput I l39nesadjacency strip Geometry beziercurvegeom Fragment beziercu e Program BezierCurve Num lt2 4 50gt LineWidth 3 I oo 111 212 3 1o bez iercurveyer f void main0 glPosition glModel ewProjectionMatrix glVertex bez iercurvefrag void main glFragColor vec4 0 1 0 1 mil 7 May 10 2009 Example Expanding 4 Points into a Bezier Curve with a Variable Number of Line Segments beziercurvegeom version 120 extension GLEXTgeometryshader4 enable uniform int Num void main0 float dt 1 lfloatNum 0 lt Num iH float omt 1 t oat omt2 omtquot omt oat omt3 omtquot omt2 oat t2 oat t3 t t2 vec4 xyzw omt3 glPos39t39onln0xyzw glPosition xyzw EmitVertexo t dt mil 7 May 10 009 Example Expanding 4 Points into a Bezier Curve with a Variable Number of Line Segments FpNum 5 FpNum 25 mjb May 10 2009 Note It would have made no Difference if the Matrix Transform had been done in the Geometry Shader Instead bezier cur veverquotr void main glPosition glVertex beziercurvegeom vec4 xyzw omt3 gPositionn0xyzw 3 t omt2 glPositionn1xyzw 3 t2 omt 
glPositionn2xyzw t3 gPositionn3xyzw glPosition glModeMewProjectionMatrix xyzw EmitVertex t dt mjb May 10 2009 Example Shrinking Triangles shrinkgeom sion 120 extensionGL EXT geometry shaderA enable uniform float Shrink varying in vec3 Normal3 varying out float Lightlntensity co st vec3 LightPos vec30100 vec3 via void ProduceVertex int v Lightlntensity dot normallzeLightPos Vv Normalv 39 39 39 nsity E in a i o E z n 039 n E in a i o Lightlntensity 5 9 Position gl ModeMewProjectionMatrix vec4 CG Shrink Vv cs 1 mitVertexO gl Positionln0xyz ositionln1xyz ProduceVertex 2 r A 10 2009 Example Sphere Subdivision IT39S someTimes handy To parame rer ize a Triangle inTo sfr I V2 01 vo gt v1 5 Vs rVOsV1 VOTV2 VO mlbrlvlay 10 2009 Example Sphere Subdivision V0 V1 V0 Level 1 V1 V0 Level 2 V1 ve O numLayers Z Wl 1 numLayers 2 numLayers 4 mlbrlvlay 10 2009 Example Sphere Subdivision spheresubdglib Geometrylnput gltriangles Geometryoutput gltrianglestrip rt Geometry spheresubdgeom Fragment spheresubdfrag Program Spheresubd Level lt0 o1ogt Radius lt515gt Color 1 5 15 Triangles o 1 10 0 0 10 Triangles10 0 0 o1 0 10 Triangles001 10 0 0 10 Triangles10 0 00 1 0 10 Triangles00 1 10 0 0 1o Triangles o 0 00 1 040 Tnangles0 o 1 10 0 040 Triangles10 0 00 1 040 mlbrlvlay 10 2009 Example Sphere Subdivision spheresubdverT void maino glPosition glVertex spheresu bdfrag varying float Lightlntensity uniform vec4 Color v main glFragColor vec4 Lightlntensity Colorrgb 1 mlbrlvlay 10 2009 Example Sphere Subdivision spheresubdgeom version 120 extension GLEXTgeometryshader4 enable uniform int Level uniform float Radius varying float Lightlntensity veca V0 V01 V02 void ProduceVertex float 5 float t const veca lightPos ve03 0 10 0 v 13 v V0 sV01 t V02 v normalizev vec n v veca tnorm normalize gINormalMatrix n II the transformed normal vec4 ECpo 39 39on glModelViewMatrix vec4 Radiusv 1 Lightlntensity dot normalizelightPos ECpositionxyz tnorm Lightlntensity abs Lightlntensity Lightlntensityquot 15 
glPosition glProjectionMatrix quot ECposition EmitVertexo mlbrMameOOQ Spheresubdgeom Example Sphere SudeVISIon void main0 V01 glPos nln1 glPos nln0 xyz V02gl Pos onln2glPos onln0 yz V0 glPositIonln0xyz int numLayers 1 ltlt Level float dt 1 I oat numLayers oat ttop 1 for int it 0 it lt numLayers it mlbrlvlay 10 2009 Spheresubdgeam Example Sphere SubdiVIsion itlt numLayers ir t 1 mm dstop smaxtop l oatnums 1 oatdsbot smaxbokl oatnums forint is o is lt nums is ProduceVertex sbot Um ProduceVertex stop ttop I dsop sboH dsbot Produc eVertex sbot Um EndPn39miu39veo ttop tbot tbot dt mmrMav 1m 2m Example Sphere Subdl on with One triangle Lml o Example Sphere Subd Lml o on with the Whole Sphere 8 triangles Example Explosion Lml 2 V Tm numLayers 4 TlmE Break the triangles into points 2 Treat each point s distance 39om the triangles CG as an initial velocity 3 Follow the laws ofprojectile motion 1 p p0 v0t3m2 mlbrMav m ma Example Explosion silhgeom version 120 extension GLEXTgeometryshader4 enable uniform lm Level uniform float Gravity uniform float Time uniform float VelScale veca V0 V01 V02 veca CG void ProduceVertex float 5 oat t veca v V0 5V01 t V02 veca vel VelScale v CG v CG velTime 05quotve030GravitM0TimeTime glPosItI gIProjectionMatn39x vec4 v 1 EmitVertexo mil 7 May 10 2009 Silhgeom Example Explosion Id mainl 39 nln1 gl Pos39 39onln0 xyz nlnZ gl Positionln0 xyz nlno nlnoxyz gl Positionln1xyz gl Positionln2xyz13 int numLayers 1 ltlt Level oat dt 1 I oatl numLayers oatt for int it 0 it lt numLayers it oat smax 1 t int nums 1 oat ds smaxl oatl nums 1 oat s 0 for int is o is lt nums is ProduceVertexl s t s 1 ds t dt 7 May 10 2009 Example Explosion Example Explosion 18 Example Silhouettes 1 common edge Compute the normals of each of the four triangles 2 If there is a sign difference between the 2 component of the center triangle and the 2 component of an adjacent triangle draw their mil 7 May 10 2009 silhgib Example Silhouettes Obj bunnyobj Geometrylnput 
gltrianglesadjacency GeometryOutput gllinestrip Vertex silhvert Geometry silhgeom Fragment silhfrag Program Silhouette Color 0 1 0 I ObjAdjbunnyobj I mlbrlvlay 10 2009 Example Silhouettes silhver r silhfrag void main glPosition glModelVielelatrix glVertex uniform vec4 Color void main glFragColor vec4 Colorrgb 1 mmer 10 2009 silhgeom Example Silhouettes version 120 extension GLEXTgeometryshader4 enable void main0 veca V0 gl gl 9 9 9 veca V5 gl veca N042 ross V4V0 V2V0 ross V2V0 V1V0 ross V4V2 V3V2 veca N405 crossV0V4 V5V4 1 1 if dot N042 N021 lt 0 N021 vec3000 N021 if dotN042 N243 lt 0 N243 vec3000 N243 if dot N042 N405 lt 0 N405 vec3000 N405 mmer 10 2009 20 silhgeom Example Silhouettes if N042Z N021z lt 0 glPosition glProjectionMatrix vec4 V0 1 EmitVertex 39 0 glPosition glProjectionMatrix vec4 V2 1 EmitVertexo EndPri tiveo if N042Z N243z lt 0 glPosition glProjectionMatrix vec4 V2 1 EmitVerte 39 glPositio EmitVertexo mltiveo EndP if N042Z N405z lt 0 glPosition glProjectionMatrix vec4 V4 1 quotVertex 39 glPosition glProjectionMatrix vec4 V0 1 V rt 39 Emit 2 ex EndPrimitiveo X0 n glProjectionMatrix vec4 V4 1 mlbrMay 10 200 Example Bunny Silhouettes mlbrMay 10 2009 21 Example Hedgehog Plots hedgehoggeom version 120 extensionGL EXT geometry shaderA enable uniform int Detail i g In vec Color varying out vecA FragColor 3 int lLength vecA v0 vo1 v02 void ProduceVerticesUloat 5 float t vecA v v0 s V01 rvoz vec3 n normalize N0 s MM mom for int i 0 ilt Length i gl Position gl ProjectionMalrix v FragColor Color0 EmitVertexO vxyz Step n vy Droop floati i EndPrimitiveO m brMay 102009 22 hedgehoggeom void main v0 gl Positionlnw v01 gl Positionn1 g Positionln0 V02g Positionn2 g Positionln0 Norm0Tnorm 1 Norm1Tnorm1 Norm2 Tnorm2 ifdotNormONorm1ltO orm1 orm1 ifdotNorm0Norm2lt0 Norm2 Norm2 0 normalizeNormO N01 ormalizeNorm1Norm0 N02normalizeNorm2Norm0 int numLayers 1ltlt Detail mjbrMay 10 2009 hedgehoggeom float dt 1 I float numLayers floatt forint it0 itlt numLayers it float smax t 
int numsit1 float ds smaxlfloat nums 1 0 for int is 0 is lt nums is ProduceVertices s t S 939 d5 dt mjbrMay 10 2009 23 Hedgehog Plots Gone Wild A New GLSL Builtin Variable for the Geometry Shaders int IPrimitivelDln Tells the number of primitives processed since the last time ngegin was called Calling a vertex array function counts as an implied ngegin gllgtrirnitivelnln is o rortne rst primitive aitertiie ngegin Geometry shuders can set the builtin variable gLPr39imitiveID to send a primitive number to the fragment shuder mlbrMaV iii ma CS 519 Signal amp Image Processing Complex Numbers Review cont Euler s Formula Remember that under complex multiplication Magnitudes multiply Phases add Under what other quantityoperation does multiplication result in an addition Exponentiation cacb c b for some constant c If we have two numbers of the form me where c is some constant then multiplying we get mca rrcb mncab What constant c can represent complex numbers Euler s Formula Remember that phases add if the magnitude is 1 then a complex number can be completely speci ed by its phase 1 z a bi cos sin i Note that when 0 z l Differentiating 3 sin cos i sin 1 cos i sin i2 cos i cos sin ii 2139 39 z d lnz 139 C by inde nite integration Sincez l when 0thenlnl0iCC0 Finally solving for z we get 2 e i cos sin i 9 Euler s formula Euler s Formula Graphical Interpretation Any complex number can be represented using Euler s formula 2 zei z zcos zsin i a bi Imaginary A i a izicos b izisin a 1 Real Powers of Complex Numbers Suppose that we take a complex number 2 zei W2 and raise it to some power Zn lzlei zn lzln ein 2quot has magnitude zquot and phase n 2 Powers of Complex Numbers Example 7 39 What is iquot for various n Powers of Complex Numbers Example What is i for various n What is em4quot for various n CS 519 Signal amp Image Processing Harmonic Functions Harmonic Functions What does xt em look like 39 xt is a harmonic function a building block for later analysis Imaginary Angular 
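A quick numeric sanity check of the complex-number identities above (Euler's formula, the multiply-magnitudes/add-phases rule, and the period-4 cycle of the powers of i), sketched in Python:

```python
import cmath
import math

# Euler's formula: e^{i*phi} = cos(phi) + i*sin(phi).
def euler(phi):
    return cmath.exp(1j * phi)

# Under complex multiplication, magnitudes multiply and phases add.
def mul_polar(z1, z2):
    return cmath.rect(abs(z1) * abs(z2), cmath.phase(z1) + cmath.phase(z2))

# Powers of i cycle with period 4: i^0, i^1, i^2, i^3 = 1, i, -1, -i.
i_powers = [(1j) ** n for n in range(4)]
```

Multiplying in polar form gives the same result as ordinary complex multiplication, which is exactly the "magnitudes multiply, phases add" claim from the slides.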
frequency 6 at Time Harmonic Functions as Sinusoids Real Part Imaginary Part meiw 3613901 cosal sinal Harmonics and Systems 4 If we present a harmonic input function x10 em to a shiftinvariant linear system it produces the response x10 gt MO yiU KW I x10 KW t 6 where for now we simply define Kat e Harmonics and Systems Shifted Input Suppose we create a harmonic input function by shifting the original input x20 m 73 eW I The response x2t gt y2t to this shifted input is y2ltrgt Km t 73 x20 Km t 7 em Harmonics and Systems Shifted Input 4 However note that x2t em I 6 e m T x1t e m T Thus the response can be written x20 gty1t in y2t y1t 6 KW t x10 6quot KW t x20 Harmonics and Systems Shifted Input Now we have both y2t Kw t xii y2t Kw t T 9620 Thus Ka t T Ka I So K is just a constant function with respect to t KW Harmonics and Systems Thus for any harmonic function xt 6 we have W 9 ya ya mm W Kltwgt em Implication When a system a shiftinvariant linear transformation is applied to a harmonic signal the result is the same harmonic signal multiplied by a constant that depends only on the frequency L Transfer Functions We now have a second way to characterize systems 1 If you know the impulse response at any point you know everything there is to know about the system 2 The function K 0 de nes the degree to which harmonic inputs transfer to the output K 0 is the called the transfer function Transfer Functions 4 Expressing K 0 in polar magnitudephase form Ka Aa ellX Recall that the magnitudes multiply and the phases add eiwt Aw eiat a Aa is called the Modulation Transfer Function MTF Magnitude of the transfer function Indicates how much the system ampli es or attenuates input w is called the Phase Transfer Function PTF Phase of the transfer function Only effect is to shift the time origin of the input function CS 519 Signal amp Image Processing Point Operations Point Operations Simplest kind of image enhancement Also called level operations Process each point independently of 
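The key fact above (a shift-invariant linear system maps the harmonic e^{i*omega*t} to K(omega) * e^{i*omega*t}, with K depending only on frequency) can be checked directly for a discrete FIR system; the 3-tap filter and frequency used in the test are arbitrary illustrative choices:

```python
import cmath

def transfer(h, omega):
    """Transfer function K(omega) = sum_k h[k] * e^{-i*omega*k}
    for an FIR impulse response h."""
    return sum(hk * cmath.exp(-1j * omega * k) for k, hk in enumerate(h))

def respond_to_harmonic(h, omega, n):
    """Output sample y[n] = sum_k h[k] * x[n-k]
    for the harmonic input x[n] = e^{i*omega*n}."""
    return sum(hk * cmath.exp(1j * omega * (n - k)) for k, hk in enumerate(h))
```

For every sample index n the output equals K(omega) times the input sample, so the harmonic comes out as the same harmonic scaled by a frequency-dependent constant.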
the others. It remaps each sample value: g' = f(g), where g is the input value (gray level), g' is the new processed result, and f is a level operation.

Adding a Constant. The simplest level operation: g' = g + b for some constant bias b. b > 0 gives a brighter image; b < 0 gives a darker image.

Amplification (Gain). Another simple level operation is amplification (multiplication): g' = a g for some constant gain (amplification) a. a > 1 amplifies the signal (louder, more contrast); a < 1 diminishes the signal (softer, less contrast).

Linear Level Operators. Linear operators combine gain (multiplication) and offset (addition): g' = a g + b, where a is the gain and b is the bias.

Negative. Computing the negative of the signal/image: g' = -g. Or, to keep the range positive: g' = g_max - g, where g is in [0, g_max]. This is simply a line with slope -1.

Thresholding. Thresholding a signal: f(g) = 0 if g < T, 1 otherwise, for some intensity threshold T.

Quantization. Quantization is choosing different finite values to represent each value of a (possibly analog) input signal. Quantization is usually monotonic: g1 <= g2 implies g1' <= g2'. It can be thought of as multi-level thresholding: f(g) = q1 if g_min <= g < T1; q2 if T1 <= g < T2; q3 if T2 <= g < T3; ...; qm if T_{m-1} <= g < g_max.

Logarithm. Used to consider relative changes (g1/g2) instead of absolute ones (g1 - g2): g' = log(g). Useful when the dynamic range is large. Examples: apparent brightness, the Richter scale, human vision.

Exponential. Can be used to undo logarithmic processing: g' = e^g.

Contrast Enhancement. Any time we use level operations to make one level more distinguishable from another, we call it contrast enhancement. If the number of levels stays fixed, contrast enhancement trades decreased contrast in one part of the signal range for increased contrast in a range we are interested in. If we plot the level operation as a function, all sections where the slope is greater than one increase the contrast in that intensity range, and all sections where the slope is less than one diminish it.

Windowing. Windowing is contrast enhancement of one part of the signal range.
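The level operations above (gain/bias, negative, threshold) and the piecewise-linear windowing map covered next can be sketched as pure point operations. The 500-1200 window bounds follow the lecture's example; everything else is illustrative:

```python
def level_op(g, gain=1.0, bias=0.0):
    """Linear level operation: g' = a*g + b."""
    return gain * g + bias

def negative(g, g_max=255):
    """Negative that keeps the range positive: g' = g_max - g."""
    return g_max - g

def threshold(g, t):
    """g' = 0 if g < t, else 1."""
    return 0 if g < t else 1

def window(g, lo=500, hi=1200, out_max=255):
    """Piecewise-linear windowing: clamp outside [lo, hi], stretch inside."""
    if g < lo:
        return 0
    if g > hi:
        return out_max
    return out_max * (g - lo) // (hi - lo)

def apply_pointwise(f, pixels):
    """A point operation processes each sample independently of the others."""
    return [f(g) for g in pixels]
```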
Example mapping 0 4095 input to 0 255 display The simplest mapping is 256 fg 4096g g64 Windowing c0nt Suppose we re interest mainly in the range 5001200 Better mapping 0 if g lt 500 g 255g 7 5001200 7 500 if 500 s g s 1200 255 if g gt 1200 Windowing is often a continuous piecewiselinear mapping ib Mike Bailey CS 519 Oregon State University MCThe Basic Computer Graphics Coordinate Systems 1 WC EC sc 1 sc lt F rameh uffer NDC Normalized Device Coordinates SC Screen Coordina es Oregnn 5mg Univels ily Cnmpuler Graphics WW 1 m5 The Generic Computer Graphics System Vertex Input Processor Devices Network A Shadereye View of the Grap cs Process h Geometry Shaders Assemmed Wm Hammad Va ues mew State Unwers y Eomvma Erapmcs mbr wm znnv CS 519 Signal amp Image Processing Level Operations using Histograms Histograms Brightness and Contrast Histograms Brightness and Contrast mem quotmy xmmm we Point Operations on Histograms Suppose we have a monotonic level operation such that ll if b 5quot Then the histogram H becomes 139 such that b b jHltggtdg H ltgdg A Point Operations on Histograms cont Let b a A for some very small A then f0 fa f aA aA a f aA Thus JHgdg JH gdg 1 L1 or approximating H aA x H a f aA Md 2 Hf Hf f 60A f a Hf 1g H g f f 1g Histogram Equalization Automatic contrast enhancement Basic Idea allocate the most output levels to the most frequently occurring inputs Look at the histogram of the input signal Ifwe allocate output levels proportional to the frequency of occurrence for our input levels the output histogram should be uniform This process is know as histogram equalization Histogram Equalization We want a at constant output histogram H 1 A H g f g 0 f f 1g gm Thus g g g f g mquot Hg gt fg mquot JHxdx A0 A0 0 where g is the input gray level gm is the maximum input A0 is the image area area of objects with gray level 2 0 g is the output gray level Histogram Equalization However the probability density function is the normalized histogram ie pg Hg A0 g fg gm 
Jpxdx gmmg 0 where p is the probability density function normalized histogram of the input image P is the cumulative probability density function Adaptive Histogram Equalization Remap based on local not global histogram Allows localized contrast enhancement Example 7 x 7 window around each pixel Problem ringing artifacts noise is enhanced Solution partial constrain the remapping quot y Equilized Equilized Original Histogram Matching Histogram equalization produces a uniform output histogram 0 We can instead make it whatever we want 0 Use histogram equalization as an intermediate step First equalize the histogram of the input signal f1 g gm P1 g Then equalize the desired output histogram f2g gm Png Histogram specification matching is g f2391fi g P2391P1g CS 519 Signal amp Image Processing Image Arithmetic Image Arithmetic ImageImage Operations C xy fAxy Bxy Operates on each corresponding point from two or more images Usually requires that both images have the same dimensions both in their domain and range Image Addition Used to create doubleexposures Clx3ylAlx3ylBxy Image Averaging Average multiple images frames of the same scene together Useful for removing noise Image Subtraction 0 Useful for nding changes between two images of basically the same scene Cxy Axy Bxy 0 More useful to use absolute difference CDC Ax9 Bx9 Background Subtraction What s Changed absolute difference 4 V I Motion Use differencing to identify motion in an otherwise unchanging scene Ie object motion not camera motion Digital Subtraction Angiography Medical imaging technique used to see blood vessels Take one Xray Inject a contrast agent 39 Take another Xray and hope the patient hasn t moved or even breathed too much Subtract the rst background from the second background vessels Multiplication Useful for masking and alpha blending C x y A x y X Bx y Alpha Blending Addition of two images each with fractional 01 masking weights Useful for transparency compositing etc Color images are often stored 
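The equalization mapping derived above, g' = g_max * P(g) with P the cumulative probability density (normalized cumulative histogram), can be implemented directly. A sketch for an 8-bit image stored as a flat list of pixel values:

```python
def equalize(image, g_max=255):
    """Histogram equalization: map gray level g to g' = g_max * P(g),
    where P is the cumulative distribution of gray levels in the input."""
    n = len(image)
    hist = [0] * (g_max + 1)
    for g in image:
        hist[g] += 1
    # Cumulative probability P(g) = sum_{x <= g} H(x) / n.
    cdf = []
    total = 0
    for h in hist:
        total += h
        cdf.append(total / n)
    # Lookup table for the level operation g -> g'.
    lut = [round(g_max * p) for p in cdf]
    return [lut[g] for g in image]
```

Because P is nondecreasing, the mapping is monotonic, and frequently occurring input levels get the widest share of the output range, which is the whole point of equalization.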
as RGBα or RGBA.

CS 519 Signal & Image Processing: The Fourier Transform

Linear Systems and Responses. In the time/spatial domain: input f, output g, impulse response h. In the frequency domain: input F, output G, transfer function H. Relationship: g = f * h and G = F H.

The Convolution Theorem. Let F, G, and H denote the Fourier Transforms of the signals f, g, and h respectively. Then F{f * g} = F{f} F{g}, so g = f * h implies G = F H. Inverse: F{f g} = F{f} * F{g}. Convolution in one domain is multiplication in the other, and vice versa.

Example. What is the Fourier Transform of h(t) = cos(t) for |t| <= 10, 0 otherwise?

Example. If the impulse response is a Gaussian with standard deviation σ, what does this do to the frequencies going through the system?

Convolution in the Frequency Domain. One application of the Convolution Theorem is that we can perform time-domain convolution using frequency-domain multiplication: f * g = F^{-1}{ F{f} F{g} }. How does the computational complexity of doing convolution compare to that of the forward and inverse Fourier transforms?

Deconvolution. If G = F H, can't you reverse the process by F = G / H? This is called deconvolution, the undoing of convolution. Problem: most systems have noise, which limits deconvolution.

System Characterization. We can measure the transfer function by comparing the frequencies of the input and output signals: H = G / F. Expressing H(s) in polar (magnitude/phase) form, H(s) = A(s) e^{iΦ(s)}: A(s) is called the Modulation Transfer Function (MTF), and Φ(s) is called the Phase Transfer Function (PTF).

Active vs Passive Systems. Systems can be categorized by whether they diminish or amplify frequency components. Passive systems do not use energy, hence they only diminish signals, not amplify them: |H| <= 1. Active systems use energy and can amplify signals: |H| >= 1 is possible.

Types of Systems. Systems can also be categorized by the shape of their MTF. Examples of low-pass systems: sound through air (your roommate's stereo), and sound through other materials (your roommate's stereo, multiplicative effect). Low-pass: lets low frequencies through better than high ones. High-pass: lets high frequencies
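The convolution theorem stated above has an exact discrete counterpart: circular convolution in one domain equals pointwise multiplication of DFTs in the other. A brute-force check with an O(n^2) DFT on short made-up signals:

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def circular_convolve(f, g):
    """Circular convolution of two equal-length sequences."""
    n = len(f)
    return [sum(f[k] * g[(m - k) % n] for k in range(n)) for m in range(n)]
```

This also hints at the complexity question on the slide: direct convolution is O(n^2), while transforming, multiplying, and inverse-transforming is O(n log n) once a fast Fourier transform is used in place of the naive DFT here.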
through better than low ones Your mommmg stem Bandpass Lets a particular range offrequencies through 39 M quot Pl a quot effe etter than others I Electronic signal through Wires Limits digital transfer rates Examples oingh Pass Systems I HighB oost systems commonly used in electronic communications I Computing the gmdient magnitude of an image I Not very common in physical passive media Examples of Phase Transfer Functions I Time delay I Transmission of electronic signals through Wires net 62 Mike Bailey Oregon State University s Hardcupy and Display Y Resnlulinn 1024 ii Create Haidcnpy File Dispiayiiis Hardtam File AD Preliminary Background the OpenGL Rendering Context The OpenGL Rendering Context contains aii the characteristic information necessary to produce an image from geometry Tiiis inciudes transformations coiors iigiiting textures wiiereto send tiiedispiay etc Some of these characteristics have a default vaiue e g iines are white the display goes to the screen and some have nothing e g no textures exist mlbrdulv 31 2EIEI7 More Background What is an OpenGL Object An OpenGL ObjecT is preTTy much The same as a C C1 or Java objecT iT encapsulaTes a group of daTa iTems and allows you To TreaT Them as a single whole For example a TexTure ObjecT could be creaTed in C by class Textureobject 1m nums nuln39l nuIIIR void image Then you could creaTe any number of TexTure ObjecT insTances each wiTh iTs own characTerisTics encapsulated wiThin iT n you wanT To make ThaT combinaTion currenT you jusT need To bring in quotbindquot ThaT enTire objecT You don39T need To deal wiTh The informaTion one piece of informaTion aT a Time mieruN 31 2mm More Background How do you Create an OpenGL Object In C objecTs are poinTed To by Their address In OpenGL objecTs are poinTed To by an unsigned inTeger handle You c assign a value for This handle yourself noT recommended or have OpenGL generaTe one for you ThaT is guaranTeed To be unique For example GLuint texA glGenTextures 1 stexA 
This doesn39T acTually allocaTe memory for The TexTure objecT yeT iT jusT acquires a unique handle To allocaTe memory you need To bind This handle To The ConTexT mierUl 311nm More Background Binding to the Context The Opzr L term binding refers to dockmg a metaphor whtch 1 ma to be hora vtsuaHy ptzasthg ah Opzn L obtzct to the Context Vou cah th2h asstgh c aractzrtsttcs and they tht ow through thz Cohtzxt thto thz o Jest Yexture amen Memory for the obtzct ts aHocatzd the hrst time the handtz goes through the ma process mieru yZH 2W7 More Background Binding to the Context thh you want to use that szturz Obtzct Just bthd tt agath AH of thz charactzrtsttcs WtH then be acttuz Just as tfyouhad spzctttzd them agath Vadure yln veTexturE an mmeO lelndTextIme 9 am 2 mm mteru VZH 2W7 The Overall RendertoOffscreenFramebuffer Process I he all e min1 mleruly 31 2mm Code for the RendertoOffscreenFramebuffer Process 1 In InitGraphics generate Framebufferand Renderbufferhandles 6mm FrameBuffer GLuml ColorBuffer GLumt DeplhBuffer glGenFramebuffer5 ampFrameBuffer glGenRenderbuffer51ampColorBuffer glGenRenderbuffers1ampDeplhBuffer 2 Setupthe size youwannhe offscreen rendering to be ml SlZeX 2048 ml SlZeY 2048 on will ruflrunec that sizus a fuw this so it is a good Han to us vuriubks for Oh six and not hurdth Hum as rumbas in function calls 3 Bind the offscreen frimebuffer to be the current output display nglndFramebuffe GLiFRAMEBUFFER FrameBuffer mleruN 311nm entttyo gtuLookAuO O S 000 000 gtTranstatett TransXYz jL Transxvzm 1 TransXYzm gtMuttMatnx RotMatrtx gtScatett sca es seate scate gtCotomm 1 1 gtutVWeTeapott 1 mterut 31 2007 Code forthe RendermD 39screen ramebuffer Process p m shack em do 5W1an Wm hemlsum 35mm m magma mum mm mm mm m mm m u smmm eLjAcmueww 1 mamas u slzeX 5w 6 Ma GLUNS GNEbtiE We a u Dmd he Framemmer bjad mew DDmEHu gohackm rendamglo HE msp ay deuce rammm a rHamcupy and Dwsp ay 7 v Resummn mzA g gate Hamcu me Dwsp ay me Hamcupy me D Renderr ar amehuffer Sgrsarfm 
cremmg arhwraryrrexa mmnhmdmpw Remembering the Old Viewpon Senian Vuu can push me me wewpuN semng mm AHNbuve 5mm 5 yuu can 295W recaH n mer vwmshmmx ewmmm u my a m a scveen mam WWW wammmbum egmmzamzw u The Overall RendertoTexture Process mlerlw 31 2mm Code for the RendertoTexture Process 1 ln lniteraphics generate a FrameBuffer handle a RenderBuffer handle and xture handle etmnt Frameamen GLLllnt Deptthen GLull lt Texture glGenFramebuffers 1 ampFrameBuffer glGenRenderBuffers1ampDepthBLlffer glGel lTexture5 2 Setup the size you want the texture rendering to be ll lt SlZeX 2048 lnt SlZeY r 048 3 Bind the offscreen framebuffer to be the currth output display nglndFramebufferEXT GLiFRAMEBUFFER FrameBuffel 4 39 lo the rnntpyt nglndRenderbufferEXT GLiRENDERBUFFERiEXT DepthBuffer glRenderbufferStorageEXT GLiRENDERBUFFERiEXT GLiDEPTl LCOMPONENT SlZeX SlZeY glFramebufferRenderbufferEXT GLiFRAMEBUFFERiEXT GLiDEPTHiATTACHMENTiEXT GLiRENDERBUFFERiEXT DepthBuffer mlerUN 31 2mm Code for the RendertoTexture Process 5 Bind the Texture to the Context gTBdeexturet GLiTEXTUREJD Texture 6 Setup a NULL texture of the size you want to render into and set its properties gTTemeage2Dt GLJEXTUREJD 0 4 STZeX STZeY 0 GLiRGBA GL7UNS GNED7E3YTE NULL gTTexParameteht Gt TEXTURE ZD Gt TEXTURE WRAP 5 Gt CLAMP gTTexParameteht GLJEXTUREJD Gt TEXTUREM N LTER GLiL NEAR gTTethvtt GLJEXTUREiENV GLJEXTUREiENviMODE etjEPtACE 7 Tell DpenGL that you are going to renderinto the color planes ofthe Texture gtFramebufferTeXtureZDEXT GLiFRAMEBUFFER GL7COLOR7ATTACHMENTO GL7TEXTURE72D Texture 0 8 Check to see if DpenGL thinks the frimebuffer is complete enough to use LenUm status gCheckFramebufferstatusEXT GLiFRAMEBUFFERiEXT Ttt status T GLiFRAMEBUFFERicoMPLETEiEXT h f stderr FrameBUfferTs not compete h mteruN 31 2007 Code for the RendertoTexture Process 9 Now ehue 39 39 the color gtC earCo o 0 0 0 2 0 0 1 TCTeart Gt LORiBUF FER iBtT T GLiDEPTHiBUFFERiBTT gTEhabTet Gt EST gTShadeMod e gWTeWport MatrTxMode GLiF 
ROJECT ON thoadeehtty gTuPerspeetwet 90 1 01 1000 t gTMatthodet GUMODELVTEW thoadeehtty gUL 0kAt0 0 3 000 000 gTTrahsTatett Trahsxvzm TranSXYZH gTMuTtMathm RotMatrTx gTSeaTett sea e seaTe seaTe gtCotomm T 1 t TranSXYzm gtutvweTeapott T t 10 Tell DpenGL to go back to rendering to the hardware framebuffer gTBthramebutterEm GLiFRAMEBUFFEm mtpeuuw 31 2007 Code for the RendertoTexture Process 11 optional Have OpenGL create the multiple mipmap layers for you glGenerateMipmapEXT GLTEXTURE2D 12 Now render the rest of the scene as normal using the Texture as you normally would glEnable GLTEXTURE2D ngindTexture GLTEXTURE2D Texture ngegin GLQUADS ngexCoord2f 0 0 glVertex2f 1 1 ngexCoord2f1 0 glVertex2f 1 1 ngexCoord2f 1 1 glVertex2f 1 1 ngexCoord2f 0 1 glVertex2f1 1 glEnclO ngisable GLTEXTURE2D mjb July 31 2007 RendertoTextu re A Rotating 3D Teapot Displayed on a Rotating Plane mjb July 31 2007 Rendering to a Texture and then using that Texture as both a Rendering Texture and as an Input to the Next Iteration The Game of Life Pingpong between two different textures One texture is being read from the previous state and the other is being written into the next state mjb July 31 2007 Using GLSL you Can Bind Multiple Color Framebuffers and Write Separately to Each of Them Usually in a GLSL Fragment Shader you write color information to the vec4 glFragColor gFragColor vec4 r g b 1 full color Instead you can write color information to an array of vec439s called glFragData glFragData0 vec4 r 0 0 1 glFragData l vec4 0 g 0 red component 1 glFragData2 vec4 0 0 b 1 green component blue component Your OpenGL program must then declare what color attachment each glFragData element corresponds to gluint buffers GLCOLORATTACHMENT2EXT GLCOLORATTACHMENT1EXT GLCOLORATTACHMENT3EXT glFrag Data0 glFrag Data l glFrag Data2 ngrawBuf fers 3 buffers This is called Multiple Render Targets MRT I mjb July 31 2007 10 Visualization Using Shaders Mike Bailey Oregon State University Visualization Imaging 
Negatives Orzgun Sum Huh5r ily Campum Gmluliis Visualization Imaging Embossing Changing the emboss angle is interesting Oregon 91312 Universi y summer Graphics Visualization Imaging Sharpening Oregon 51312 Unnereiiy nmvma Grimm2s Visualization Imaging Edge Detection Oregnn Stale Universin Cnmpluei Graphics Use the GPU to enhance scientific and engineering 39 n H ustra o Oregnli Slate university Cnmpuier Graphics Image Computation Example Identify the Deserts Use weighied sums of ihe ie and possibly Ik colors mean Stale Universlly Eumpmrl Graphics miii Mayzizuus Image Manipulation Example Where is it Likely to Snow A 7 L l 39 Visible Infrared Water vapor mitmayziimus Image Overlay Example Temperature Zones i a it 39 f i39 w lt7 wri 5 a y 39 e a Overlaid Vi mikweret 2mg GPUBased quot 39 39 a 39 plus 4 Use the em to work with aerial and satellite images that were originally larger than what will fit in graphics card memory Egypjjzg iw Visualixa n it o Muf mikweret 2mg GPUBased quot 39 39 a 39 plusl 4 Visualin39inn by Dan Muf n Visualization Zooming in Polar Hyperbolic Space Use the GPU to perform nonlinear vertex transformations Dome Projection for Immersive Visualization Use the GPU to perform non vertex iransfor muhons n crewquot 511quotan m Cnlmnscmpmm myhiMayZ ZDDQ Placing the 3D Data into a FloatingPoint Texture YW te ampnum5 4 1 Tp M te ampnumt 4 1 fp M te ampnump 4 1 fp for mtp 0 p lt nump D i fo intt0 tltnumt t for mtg 0 5 ltnum5 s ltlt assign red green blue alpha gtgt ma ampred 4 1 fp 1 0mm m anu ly Cnnunzcmnmus myhiMayZ ZDDQ Point Cloud from a 3D Texture Dataset Low values culled Full data M egan Me w uer mm mlh Mav 212uu9 Where to Place the Geometry I personally like thinking ofthe data as living in a cube that ranges 39om 1 to 1 in all directions It is easy to position geometry in this space and easy to view 39 139 quot quot 39 39 lllal pace not just a point cloud can map itselfto the 3D texture data space 30 if we want the s texture coordinates to go 39om 
0 to 1 then the linear mapping 39om the physical x coordinate to the texture s coordinate 39s39 x1 s Oregon Me Warm Lanime Graphics mmmy2 2uua The Vertex Shader varying vec3 MCposition void main void MCposition gl glPosition glModelViewProjectionMatrix glVertex Oregon 5mm University compmev vaphlcs minimaym ztl a The Fragment Shader varying vec3 MCposition void main void vec3 stp MCposition 1 2 maps 11 to 01 if any lessThan stp vec3000 dis ard if any greaterThan stp vec3111 dis ard oat scalar texture3D TexUnit stp r if scalar lt Min scalar gt Max discard oatt scalar SMIN SMAX SMIN vec3 rgb Rainbowt glFragColor vec4 rgb 1 unr mlbillayzlj ma A Problem with Uniform Pointclouds owofCorn and Moire Patterns Perspective Orfhographic Oregon stale Unuers y murmur Brzmhms Uniform Points vs Jittered Points Jiffercloud Painfcloud Enhanced Point Clouds The shaders can potentially change Color Alpha Pointsize Oregon State university computer Graphics mibrMay 2i zuua Color Cutting Planes Now change the Point Cloud geometry to a quadrilateral geometry If we keep the coordinate range from 1 to 1 then the same shader code will work Note that the plane can be oriented at any angle because the stp data lookup comes from the xyz coordinates of the cutting plane u cSmpuxevGiaphcs mibrMayZi 2mg Contour Cutting Planes Let s say that we want contour lines at each 10 degrees of temperature Then the main change to the shader will be that we need to nd how close eac 39agment s interpolated scalar data value is to an even multiple of 1 0 To do this we add this code to the 39agment shader oat scalarlo oat 1mm scaiano yw lf abs scalarrscalar10gtTol discard Notice that this uses a uniform variable called Tol which is read 39om a slider quot t05Tol39 39 quot cluetuaneven multiple of g 39 accept auu ulu Iluw to be mikmam 2mg Contour Cutting Planes are Also Color Cutting Planes Notice that when Tol5 the Tol ifstatement oatscalarlo oat 10mm scalar5 mu m ab5 scalarr scaiamo gtTol discard always fails 
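A CPU-side sketch of the two mappings the shaders above rely on: the linear map from a physical coordinate in [-1, 1] to a texture coordinate in [0, 1], and a rainbow transfer function taking t in [0, 1] to RGB. The notes' Rainbow() GLSL helper is not shown, so the piecewise-linear blue-cyan-green-yellow-red scheme below is an illustrative stand-in:

```python
def to_tex_coord(x):
    """Map a physical coordinate in [-1, 1] to a texture coordinate in [0, 1]:
    s = (x + 1) / 2."""
    return (x + 1.0) / 2.0

def rainbow(t):
    """Map t in [0, 1] to (r, g, b): blue -> cyan -> green -> yellow -> red."""
    t = min(max(t, 0.0), 1.0)
    r = min(max(4.0 * t - 2.0, 0.0), 1.0)        # ramps up over [0.5, 0.75]
    g = min(max(min(4.0 * t, 4.0 - 4.0 * t), 0.0), 1.0)  # up, hold, down
    b = min(max(2.0 - 4.0 * t, 0.0), 1.0)        # ramps down over [0.25, 0.5]
    return (r, g, b)
```

In the fragment shader the same two steps appear as `stp = (MCposition + 1.) / 2.` and `rgb = Rainbow(t)` with t the scalar normalized by SMIN/SMAX.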
and we end up with the same display as we had with the interpolated colors Thus we wouldn t actually need a separate color cuffi plane shader at all Shaders that can do double duty are always appreciated mikmam 2mg 3D Data Probe Mapping the Data to Arbitrary Geometry w um umquot mlbrMayZl ZEIEIB Lou A Surprising Observation Note that Point Clouds Jitter Clouds Colored Cutting Planes Contour Cutting Planes and 3D Data Probes are really all the same technique They just vary in what type of geometry the data is mapped to They use the same shader code How about something funky like a torus Compuw Graphics Visualization Transfer Function Relating Display Attributes to the Function Values Frequency Histogram ScalarValue osu vxTransfzr Function Sculpting Window mikmam 2mg Visualization Don t Send Colors to the GPU Send the Raw Data UselheGPUlolurnlheduluinlogruphicso m fly Wsualiminn by Chris Janik mkmm 2mg Dragon 313 camm A Visualization Scenario A thermal analysis reveals that a bar has a temperature of 0 at one end and 100 at the other end You want to color it with a rainbow scale as follows You also want to use smooth shading so that you can render the bar as a single quadrilateral Will it matter It so how Oregon sme Universer compmercupmcs mjbimayztj a Should you assign colors first then interpolate or interpolate first then assign colors A Visualization Scenario Assign colors first then interpolate WRONG Interpolate first then assign colors RI HT Conclusion ef fze ras fer39zer I39nferpoafe your femperafures and ef your rs lfragmenf shater assign your coo ZUUB Attributetriggered Pipeline Flushing l glColor3f r g b l McVemces Then9ib 33 w i as mxm something that is already in the pipeline RGEAZ RGEAZ RGEAZ RGEAZ Pixels Pixels Pixels Pixels Oregon sme University coiiipmerenphics mjbimaVZi ZEIEIB Point Clouds Three Ways to Assign the Transfer Function ngegin GLPONTS Vert 50 t0 roigoiboi an gt Assigning colors first QICDIDM mi goi boi an i problems with interpolation and 
QIPDintSizei P0 i roblems with I i elne flushin glVertex3f x0 y0 20 glUseProgram AssignTransferFunction ngegin GLPONTS gl hi tt ibthe values first glVertexAttriM ocati n so problems with pipeline flushing glVertex3f x0 y0 20 Assigning a gl iiill oregaii State University Colnpma39Graphlcs minimaVZi ZUDB Point Clouds A Third Way I really like this one CD umuuu glUseProgram ngegin GLPONTS quotHidingquot The scalar value in The wcomponenT lVenexM x0 0 20 so no problems wiTh pipeline flushing no aTTribLITe has been seT glEnd I Don39T wanT problems wiTh dividing by The wrong w ver rex Shade replace iT before The pipeline uses iT varying float scalar main l scalar glVertexw m o gPosition gModelViewProjectionMatrix vec4 gVer1exxyz 1 regon sme Universiw ConwuteerDhics rmbiMay zw zuua llSll calarquot 060001070 17 00lcalar1 17 0017 agorzcalmg 17 0017 0117 02 black Volume Rendering Compositing Thinking about it backtofront color12 otzcolor2 1 012 black color01 01100101 1 1 000010732 00101 01000101 0 1 000010r01 Gives the fronttoback equation Oregon sme Universiw ComputerGraphi rmbiMay zw zuua float astar 1 vec3 cstar vec3 0 0 0 for inti 0 i lt NumSteps i STP DirSTP if any lessThanSTP 0 continue if any greaterThan STP 1 continue float scalar texture3D TexUnit STP r float alpha Amax if scalar lt Min alpha 0 if scalar gt Max alpha 0 floatt scalar SMIN SMAX SMIN vec3 rgb Rainbowt cstar astar alpha rgb astar 1 alpha break out ifthe rest of the tracing won39t matter if astar 0 break glFragColor vec4 cstar 1 Computer Graphics 39 mjb May 21 2009 Volume Rendering Ray Casting Oregon State University Computer Graphics Volume Filtering Median Filter Visualization by Anki39 Khan oregan State unwersuv compuner Graphlcs Volume Filtering High Pass Filter Followed by Median Filter Visualization by Anki39 Khan Oregon 5ate Ulilversltv compuner empmcs Visualization 2D Line Integral Convolution Use eveetor field equation or At each fragment hide the velocity field n another exlureirnug vx vz Find 
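The front-to-back accumulation in the ray-casting loop above (c* += a* * alpha * color, then a* *= (1 - alpha), with early ray termination once a* hits zero) in a testable CPU form; samples are (color, alpha) pairs ordered front to back:

```python
def composite_front_to_back(samples, alpha_max=1.0):
    """Front-to-back volume compositing, mirroring the fragment-shader loop:
    cstar accumulates premultiplied color, astar is remaining transparency."""
    astar = 1.0
    cstar = [0.0, 0.0, 0.0]
    for color, alpha in samples:
        alpha = min(alpha, alpha_max)   # clamp to the Amax-style limit
        for i in range(3):
            cstar[i] += astar * alpha * color[i]
        astar *= (1.0 - alpha)
        if astar == 0.0:                # early ray termination
            break
    return tuple(cstar), astar
```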
the flow field velocity vector there 2 Follow that vector in both directions Blend in the colors at the other fragments along that vector 6 Circular Flow Field Image Oregon State Unnerslly alumna oranhlee Visualization 3D Line Integral Convolution visualiminns by Vosu lakshmnllan Extruding Shapes Along Flow Lines Parameterize the shape and recast it into TNB coordinates along the owline Pt N Tan ent 9 W T P t To This are known as the three Frenet Equations and are very useful for geometrically Binormal 39 39 Wllal is happening on a curve gtlt PtXP 39 39Tx Nx Bx Xquot Normal X TyNyByYy TzNszZz Nt Bt gtlt Tt squot 0 0 0 11 i ANVX Extruding Shapes Along Flow Lines As long as you are writing a shader anyway Visualization by m Dn39llill 7 1 39 Add moving humps to create a peristaltic effect Oregnn 51315 University C h E nmpulel Gmp i Huu we i My BumpMapping for Terrain Visualization Visualization by Nick Gcbbic Oregon sme University com puler Graphics 3D Object Silhouettes Oregon sme University Computer Graphics mjbrMay 212nm 22 Hedgehog Plots 029m sme Universin Cnmlulver Graphics Hedgehog Plots Gone Wild 23 Performance Evaluation How do you evaluate the rmance ofan information retrieval system Or compare two different systems Cleverdon The Cran eld Expe 1950s 19605 Time Lag The ihtewai betweenthe demand being made and the ahs me nts being given Presentation The physieai farm ufthe nut ut User effort The effurt ihteiieetdai m physicai demanded ufthe user a Precision The abihty ufthe system in Withhdid nunrreievant ducuments 0 Cleverdon says Prec Recall Measure ability to nd the relevant information Ifa system can t identify relevant information what use is it Why not the others According to Cleverdon e Presentatiun sdeeessmi itthe USEY eeh ieed and dhdeisiehdihe hsi diieieiehees ieidihed 7 User effurt can he meesdiedwdh a sheighvdmeid Examinaliun at a smeii numbEf utcases N In Reality eed to consider the user task care JIIy everdon was focusing on batch interfaces 
Interactive browsing interfaces are very significant. Turpin & Hersh: for interactive systems, user effort and presentation are very important.

In Spite of That. Precision and recall: extensively evaluated. Usability: not so much.

Why Not Usability? Usability requires a user study, and every new feature needs a new study. User studies are expensive and have high variance with many confounding factors. Offline analysis of accuracy, by contrast, can be rerun once a dataset is built. And if the system isn't accurate, it isn't going to be usable.

Precision & Recall. For a query, documents are relevant or not: the Relevant Set. A system retrieves a set of documents: the Selected Set (everything else is the Non-selected Set). Relevance is binary: either a document is valuable or it is not, nothing in between.

Precision and Recall. Precision: the percentage of selected documents that are relevant; the probability that a selected document is relevant. How well does the system filter out non-relevant material? Recall: the percentage of all relevant documents that are selected; the probability that a given relevant document will be retrieved. How complete are the results?

How Big is the Selected Set? 1? 5? 10? 100? Effect of the size of the selected set: as we increase the number of documents selected, what happens to precision and recall? They are inversely related: recall increases while precision typically decreases.

Precision vs Recall. Precision and recall depend on the size of the selected set, which in turn depends on the user interface, the user, and the user's task. Solutions: measure precision at several levels of recall, or measure precision at increasing sizes of the selected set.

Comparing Large Numbers of Systems. Raw precision and recall are not easy to use here: which one is more important? It is hard to compare two interacting variables across many systems. Single-number metrics: harmonic means, the E and F measures (see the book).

Interpolation in Precision vs Recall. Precision/Recall Measurement: which one is more important? A precision/recall table or graph?
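The precision and recall definitions above, plus precision measured at increasing sizes of the selected set, as a sketch (the document IDs and relevance judgments are made up):

```python
def precision_recall(selected, relevant):
    """Precision = |selected & relevant| / |selected|;
    Recall    = |selected & relevant| / |relevant|."""
    selected, relevant = set(selected), set(relevant)
    hits = len(selected & relevant)
    precision = hits / len(selected) if selected else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

def precision_at_sizes(ranking, relevant, sizes=(1, 5, 10, 100)):
    """Precision at increasing sizes of the selected set (P@k),
    taking the top k documents of a ranked result list."""
    return {k: precision_recall(ranking[:k], relevant)[0] for k in sizes}
```

Growing the selected set can only raise recall (more relevant documents get in) while precision tends to fall, which is the inverse relationship described above.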
Depends on the task. A generic web search engine cares most about precision. An index of court cases needs all legal precedents, so recall matters most.

Computing Average Precision. Measure precision at fixed levels of recall, averaged across many different queries. Each query has a different number of relevant documents, so for some queries we can't compute precision at a given fixed level of recall. (Imagine a query with three relevant documents: precision can only be measured at recall levels of one third, two thirds, and one.)

Interpolation Solution. If precision is not measured at an exact recall level, use the precision measured at the next highest level of recall attained for that query.

Reference Data Collections.

TREC: much more than a dataset. TREC is the Text Retrieval Conference, started in 1992 by Donna Harman. It is sponsored by the National Institute of Standards and Technology (NIST) and DARPA, and is held every year; only those who perform the TREC retrieval tasks for that year can participate.

A TREC Conference. TREC is split into tracks, and each track evaluates a different user task. 2003 tracks: Cross-language, Filtering, Genome, High accuracy, Interactive, Novelty, Question Answering, Robust Retrieval, Digital Video, Web.

TREC Procedure. The organizers define the task, and participants are provided with the documents and the topics (information requests). Each participant must then convert the information requests into queries, run them, and return ranked results; relevance judgments are used to report the precision and recall. Participants write up their approach and present it at the TREC workshop.

Example Topic.

Documents Used in TREC. Wall Street Journal, Federal Register, Associated Press, Department of Energy abstracts, Financial Times, Congressional Record, Foreign Broadcast Information Service, Los Angeles Times, .GOV web documents, references from MEDLINE.

Obtaining TREC Data. Not free: you must sign a data use agreement.

TREC 2003 Web
Track data: about 1.25 million .GOV documents. Tasks: topic distillation (find relevant home pages given a broad query) and the navigational task, named-page / home-page finding (e.g., find the USDA home page), with queries that request particular home pages.

Metrics: R-Precision, the precision at N where N is the number of relevant documents. Mean reciprocal rank (MRR): the reciprocal of the rank at which the first relevant document is returned, averaged across all queries. If the first relevant document is in the third position, the reciprocal rank is 1/3.

Top web track performers: CSIRO, Hummingbird, Univ. Amsterdam, Copernic, Univ. Sunderland, Univ. Glasgow, Univ. Neuchâtel. (The per-system notes are largely unreadable in this transcript; the recoverable fragments mention that stemming improved precision, that favoring documents whose URLs looked like home pages helped, that separate document representations were built from titles and anchor text, and that one approach used a variation of Kleinberg's link-analysis algorithm.)

What worked: referring anchor text, stemming, and URL information and link structure.

Other data sets. CACM: articles about computer science research and practice; includes metadata (authors, dates, subject categories, bibliographic linkage and co-citation). ISI: documents about information science; metadata includes bibliographic co-citations.

Bibliographic citation information in CACM. Direct references: a references b. Bibliographic coupling: n documents were cited by both a and b, which indicates potential similarity. Co-citations: a and b appear together in a bibliography in n different documents.

Many other datasets of interest: many other datasets are available besides the ones mentioned, but most don't have
relevance data. Others: EachMovie (collaborative filtering); the Machine Learning Repository at UC Irvine.

Query language features: it is important to understand the different query capabilities of the popular search engines.

Query elements: words (index terms), generally automatically extracted. What defines a word? Separated by spaces. But what about other characters: contractions, hyphens?

Other query options: phrases and proximity phrases. Consider indexing a phrase [the slide's example, "on the difference of man and man", is garbled in this transcript]: one could treat each word separately, or treat the phrase as a keyword. Proximity-phrase examples: "Konstan w/2 Herlocker" (CiteSeer); "Destruction <near> Oregon" (Verity, via combinatorial expansion). What about NEAR in "Tsunami's Meeting the United States West Coast 12/27/1939"?

Other query possibilities: [this slide is largely unreadable; it appears to mention prefix/truncation queries].

Some IR challenges. Users often can't describe their need perfectly: the Anomalous State of Knowledge (ASK) problem. Or perhaps they can only describe it in their own terminology: the popular vocabulary vs. the vocabulary of the documents, which might be different. Or perhaps they are lazy: they can't or won't spend the effort to design a verbose query. Examine the documentation to improve your own searches; a comparison table of search-engine features is at http://searchengineshowdown.com/features/.

Query expansion / relevance feedback: can we automatically expand the query from the retrieved results? Goal: improve the effectiveness of search by creating a better-defined query. The user issues a short, simple query; the user indicates relevant documents; the system finds documents similar to the identified relevant documents.

The simple theory behind relevance feedback rests on two key assumptions: the term distributions in relevant documents will be similar to each other, and the term distributions in non-relevant documents will be different from those in relevant documents.

Examples of relevance feedback: [the slides show browser screenshots].

Computing query weights for the vector space algorithm: how to
construct a "refined" query vector, given: the original query, several relevant documents, and several documents that were retrieved but not marked relevant. [The slide sketches the document vectors and an ideal query.]

One update method (the standard Rocchio formulation; the slide's equation is garbled in this transcript, so it is reconstructed here), with weighting factors alpha, beta, gamma:

    q_new = alpha * q_old + (beta / |D_r|) * sum_{d_j in D_r} d_j - (gamma / |D_n|) * sum_{d_j in D_n} d_j

where q_old is the old query, D_r is the set of relevant retrieved documents, and D_n is the set of non-relevant retrieved documents. (The slide also refers to the set of all relevant documents in the whole collection, from which an ideal query could be built.)

Evaluating relevance feedback: you need to do a user experiment, since the relevant documents selected by the user depend on the original query. Precision and recall: suppose you measure P&R of the results from the expanded query. The user provides an initial query; the system provides initial results, say 10; the user identifies the relevant documents, at ranks 8 and 10; the system provides updated results based on the feedback, and the relevant documents that were originally at ranks 8 and 10 are now at ranks 1 and 2. An easy cheat!

Precision and recall with relevance feedback: use residual-collection precision and recall, computed after removing all documents used as feedback. Also consider the user interaction: one could compare relevance feedback to manual query reformulation.

Fully automatic query expansion: relevance feedback requires user input (the user provides examples of relevant and non-relevant documents, and the system extracts words from those documents and adds them to, or removes them from, the query). Can we do without user input? Statistical thesaurus expansion.

Consider: a user searches for "cat" and won't see documents that use the word "feline" instead. Terms are not independent. A potentially better search: "cat OR feline". If you had a thesaurus, you could look up each query term and add all its synonyms to the query.

Constructing a thesaurus: you may not have a useful thesaurus available, and research has shown that generic thesauri don't fare well; they lose too
much precision. You need a thesaurus specific to the topics in your query. Can you construct one?

Use the document collection to construct the thesaurus: analyze the text in the documents themselves. Global: build a thesaurus from all documents, then expand queries using it. Local: first retrieve documents based on the initial query, build a thesaurus from those documents, expand the query using that thesaurus, and rerun the query.

Co-occurrences: if two words frequently co-occur, they might be related. For example, any document that mentions "feline" frequently will tend to mention "cat" frequently.

Association clusters: [the formula is garbled in this transcript; it is built from term co-occurrence frequencies within the retrieved documents].

Tues 2/28/06. Some reading to do before the test: Chapter 5 through 5.2.1 and 5.2.4; Chapter 6 through 6.4.3; Chapter 7 through 7.2.3.

Statistical properties of text: not all terms occur with equal frequency; they follow a Zipf distribution. A few words occur very frequently; lots of words occur very infrequently; in the middle are words with medium frequencies.

Zipf distribution: the i-th most frequent word occurs about 1/(i^k) times as often as the most frequent word, where k depends on the document collection. If k = 1, the 3rd most frequent term occurs 1/3 as many times as the most frequent term.

Zipf and stop words: a few hundred terms take up 50% of the text; removing them shrinks the index and improves precision. The SMART system removed the top-frequency terms as stop words, converted the next-highest band into phrases, and grouped further terms using a thesaurus. The medium-ranked terms are the important ones!

Document structure extraction: identify how terms relate to useful structural elements such as titles, headers, links, etc.; record the relationship between terms and the surrounding structure. Separate non-text data from all other data, such as structure tags (for example, removing HTML tags).

Stemming: collecting exceptionally similar words into a single common term. Without stemming, if you search for "burn" you will miss documents with "burned" or "burning". Stemming also creates more medium-frequency terms (and fewer low-frequency terms).

Text Processing
Pipeline: transform the raw text into new representations. 1. Document structure extraction. 2. Tokenization. 3. Stemming. 4. Phrase creation / thesaurus generation (see these in the Salton paper). 5. Store in an inverted index.

Tokenization: converting the stream of text into tokens representing keywords. Reasonably easy for English; the primary challenge is punctuation. What about capitalization? ("The Music Store" vs. "a music store".) Complicated for Asian languages.

Basis for stemming: morphological analysis. A morpheme is the smallest meaningful unit of language. Inflectional morphology covers variations such as plurals and tense [the slide's examples are garbled in this transcript]. Derivational morphology covers the formation of new words, e.g., enchant (V) -> enchantment (N), reduce (V) -> reduction (N) [other examples are garbled].

Approaches: (1) apply full morphological analysis, which requires significant expert knowledge of grammar and a pre-existing word dictionary; non-standard English could cause problems. (2) Heuristic, e.g., the Porter stemmer: simple, but it works amazingly well.

Porter stemmer: applies simple rules over stems described by the measure form [C](VC)^m[V]; example rule: SSES -> SS.

Porter errors. Errors of commission: organization -> organ; doing -> doe; generalization -> generic. Failures to conflate: analysis / analyzes; cylinder / cylindrical.

CS 519 Signal & Image Processing: Filter Design, Noise & Low-Pass Filters.

What is noise? Anything that is NOT signal. Signal is what carries the information that we are interested in; noise is anything else. Noise may be completely random (both spatially and temporally), structured, or structured randomness.

Statistical review. Mean: the average or expected value, mu = E[x]. Variance: the expected value of the squared error, sigma^2 = E[(x - mu)^2] = E[x^2] - mu^2. Standard deviation: the square root of the variance, sigma = sqrt(sigma^2). Covariance: the expected value of the product of the errors between two elements of the signal, sigma_ij = E[(x_i - mu_i)(x_j - mu_j)]. This measures, statistically, the relationship between the errors of the two elements: sigma_ij = 0, independent (not related); sigma_ij > 0, correlated (related); sigma_ij < 0, inversely
correlated (inversely related).

Ensembles of images: consider the picture f(x) as a random variable from which we sample an ensemble of images from the space of all possibilities. This ensemble (collection) of images has a mean (average) image mu(x). If we sample enough images, the ensemble mean approaches the noise-free original signal. Often not feasible.

Signal-to-noise ratio: if we compare the strength of the signal or image (the mean of the ensemble) to the variation between individual acquired images, we get a signal-to-noise ratio, SNR = mu / sigma. The better (higher) the SNR, the better our ability to discern the signal information. Problem: how do we measure mu in order to compute the SNR?

Covariance matrix: we can build a matrix C that contains the covariances between all samples, C_ij = sigma_ij. The diagonal elements are the individual sample variances. A diagonal matrix indicates that the noise at each sample is independent of the others, i.e., uncorrelated. Non-zero off-diagonal elements of C indicate relationships between the noise at different positions, i.e., correlated noise.

Additive noise: noise is often additive, causing the resulting signal to be sample-by-sample higher or lower than it should be. Such noise can be modeled by f'(x) = f(x) + n(x).

Poisson noise. Poisson distribution: p(x) = e^(-mu) * mu^x / x!. Related to the quantum nature of light and matter. Defined for discrete values; applies only to non-negative values; the variance is equal to the mean; approaches a normal (Gaussian) distribution as the mean gets larger. Because the mean value changes from pixel to pixel, the variance of each pixel is different.

White noise: many noise processes can be modeled by a normal (Gaussian) distribution. Unlike Poisson noise, Gaussian-distributed noise is usually uniform over the image. Noise n(x) that is Gaussian-distributed, zero-mean, uncorrelated, and additive is called white noise.

Noise and the frequency domain. Noisy input: f'(x) = f(x) + n(x). Spectrum of the noisy input: F'(u) = F(u) + N(u). White noise has equally random amounts of all frequencies; colored noise has unequal amounts for different frequencies. Since signals often have more
low frequencies than high, the effect of white noise is usually greatest for the high frequencies.

Noise and systems. Spectrum of the noisy input: F'(u) = F(u) + N(u). Spectrum of the system's output: H(u) F'(u) = H(u) F(u) + H(u) N(u).

Low-pass filter: recall that quick changes in a signal or image require high frequencies, and that high-frequency details are often buried in noise, which also requires high frequencies. One method of reducing noise is pixel averaging: average the same pixel over multiple images of the same scene, or average multiple neighboring pixels in a single image.

Pixel averaging: averaging multiple images is not feasible if objects in the scene are moving or if we are only given a single image. Averaging multiple neighboring pixels in a single image: gain, reduced noise; cost, blurring.

Noise filtering: if an image is mainly low frequencies with some high frequencies, white noise corrupts the high frequencies more than the low ones. So, reduce the high-frequency content of the noisy signal through low-pass filtering.

Low-pass filtering = spatial blurring: they are the same thing. Any convolution kernel with all positive (or all negative) weights does weighted averaging = spatial blurring = low-pass filtering; they are all equivalent.

Filtering and convolution: two ways to think of general filtering. Spatial: convolution by some spatial-domain kernel. Frequency: multiplication by some frequency-domain filter. You can implement or analyze either way.

Low-pass filtering trade-off: it reduces noise but blurs the image, and the worse the noise, the more you need to blur to remove it.

Ideal low-pass filtering, for cutoff frequency u_c: H(u) = 1 if |u| <= u_c, 0 otherwise. What is the corresponding convolution kernel? What problem does this cause? What could you do differently?

Better (smoother) low-pass filtering: gentler ways of cutting off the high frequencies. Hanning: H(u) = 0.5 + 0.5 cos(pi*u / (2 u_c)) if |u| <= 2 u_c, 0 otherwise. Gaussian: H(u) = e^(-u^2 / (2 sigma^2)). Butterworth: H(u) = 1 / (1 + (u / u_c)^(2n)), where n controls the sharpness of the cutoff.

CS 519 Signal & Image Processing: Filter Design, High-Pass & Band-Pass.

Sharpening: blurring is low-pass
filtering, so deblurring is high-pass filtering: explicit high-pass filtering, unsharp masking, deconvolution, edge detection. Trade-off: reduces blur but increases noise.

High-pass filtering. Ideal: H(u) = 0 if |u| <= u_c, 1 otherwise. Flipped Butterworth: H(u) = 1 / (1 + (u_c / u)^(2n)).

Unsharp masking is a technique for high-boost filtering. Procedure: blur the image; subtract from the original; multiply by some weighting factor; add back to the original:

    I' = I + alpha * (I - I*g)

where I is the original image, g is the smoothing (blurring) kernel, * denotes convolution, and I' is the final sharpened image.

Unsharp masking in the frequency domain: blur the image = low-pass filter; subtract from the original: original - low = high pass; multiply by a weighting factor = scale the high-passed frequencies; add back to the original: original + scaled high = high boost. F(I') = F(I) + alpha * (F(I) - F(I) G).

Unsharp masking implementation, with g the 3x3 averaging kernel (the slide's matrices are garbled in this transcript, so the kernel algebra is reconstructed here):

    delta + alpha * (delta - (1/9) J)  =  (1/9) [ -alpha    -alpha     -alpha
                                                  -alpha   9+8*alpha   -alpha
                                                  -alpha    -alpha     -alpha ]

where delta is the 3x3 impulse kernel (1 at the center, 0 elsewhere) and J is the 3x3 all-ones kernel.

Unsharp masking another way: A*I - I*g = (A-1)*I + (I - I*g), giving the kernel (1/9) [ -1 -1 -1; -1 9A-1 -1; -1 -1 -1 ].

Deconvolution: if we want to undo a low-pass filter H(u), apply H_inv(u) = 1/H(u). Problem 1: this assumes you know the point-spread function. Problem 2: H may have had small values at high frequencies, so H_inv has large values (multipliers); small errors (noise, round-off, quantization, etc.) can get magnified greatly, especially at the high frequencies. This is a common problem for all high-pass methods.

Band-pass filtering. Trade-off, blurring vs. noise: low-pass reduces noise but accentuates blurring; high-pass reduces blurring but accentuates noise. A compromise: band-pass filtering boosts certain mid-range frequencies and partially corrects for blurring, but does not boost the very high, most noise-corrupted frequencies. Example: the Sobel filter.

Summary, filtering: consider what you want to do in the frequency domain (the shape of the filter). Frequency domain: convert the signal to/from the frequency domain and multiply in the frequency domain. Spatial/temporal domain: convert the filter to an equivalent convolution kernel and convolve in the spatial domain.
Nonlinear operations: they are often mistakenly called filters, but strictly speaking nonlinear operators are not filters. They can be useful, though. Examples: order statistics (e.g., the median filter), iterative algorithms (e.g., CLEAN), non-uniform convolution-like operations.

Median filtering: instead of a local-neighborhood weighted average, compute the median of the neighborhood. Advantages: removes noise like low-pass filtering does; the output value is an actual image value; removes outliers rather than averaging (blurring) them into the result (despeckling); edge-preserving. Disadvantages: not linear, not shift-invariant, slower to compute.

CS 519 Signal & Image Processing: Human Vision.

Some properties of human vision that affect image perception: linear and nonlinear parts; nonlinear, approximately logarithmic, encoding of input; adaptation; relative-contrast encoding; varying sensitivity to spatial frequencies; brightness and color are generally treated separately.

Discrimination experiments: many vision experiments involve comparisons, such as two-alternative forced choice (2AFC) experiments. Is there a difference (yes/no)? Which is brighter, farther apart, etc. (top/bottom, left/right)? Random guessing without bias gives 50% correct; at 75%, half the time the observers "see it" and half the time they guess. Vary the experimental parameter to determine the threshold tau above which the observer reaches this desired level of confidence. This is called the just-noticeable difference (JND). Sensitivity = 1/tau.

Weber's law: many visual properties obey Weber's law. For intensity discrimination, delta_I / I = c for some constant c. In other words, the JND delta_I for intensity is proportional to the intensity itself. It also applies to distance judgments, spatial-frequency discrimination, and many others.

Weber's law and logarithmic encoding: log(I + delta_I) - log(I) = log(1 + delta_I / I) = log(1 + c) = constant. So equal differences in a logarithmic encoding produce Weber's law. The human intensity-sensitivity function isn't exactly logarithmic, but it's close enough to be a useful model.

Adaptation: our eyes have an incredible ability to adapt to lighting
conditions. The total number of JND steps for the eye is about 1000; the total number of JND steps at a fixed adaptation level is about 200, i.e., about 8 bits (2^8 = 256).

Contrast encoding: the response of the eye to light isn't absolute; it's relative to the surrounding intensities. This causes the Mach effect at strong intensity transitions. Even our color perception seems to be, to a degree, based on relative differences. Intensity: measurable light. Brightness: the perceived illumination.

Spatial frequencies: the stimulus is a sinusoidal grating of some frequency f and amplitude A. Vary A in a 2AFC experiment and find the JND for each frequency f; plotting the sensitivity as a function of f gives the contrast sensitivity function (CSF). Implications: the eye is less sensitive to extremely gradual changes, fairly sensitive to more rapid changes, and decreasingly sensitive to yet higher spatial frequencies. [The slide plots sensitivity against spatial frequency.]

Human visual system: the photoreceptor cells are rods and cones. Rods: brightness perception only; more concentrated on the periphery, hence peripheral vision; good for seeing motion; low-lighting (scotopic) conditions. Cones: color perception; concentrated at the fovea, hence central (foveal) vision; bright-lighting (photopic) conditions.

Color perception: three types of cones. Green: most sensitive. Red: medium sensitivity. Blue: weak. Each type of cone responds to a different range of wavelengths in the spectrum. [The slide plots the fraction of light absorbed by each cone type, the spectral response of the cones, over roughly 400 to 650 nm.] This corresponds to the notion that colors can be described in terms of a weighted sum of red, green, and blue.

Brightness vs. color: the human visual system seems to treat brightness and color separately. There are physically separate pathways in the visual cortex, with some crossover, but it is weak. Perception of shape and form seems to be based on brightness, not color. We are much more sensitive to changes in brightness than to changes in color.

Displays: when building visual displays you have to consider the properties of vision. Exponential
encoding for perceptual linearization. Be careful of Mach effects. Consider adaptation: make it bright. Consider the human CSF. Be careful with color.

CS 519 Signal & Image Processing: Color.

What is color? Color is the spectrum of light as perceived by the human visual system. Visible light is electromagnetic energy in the 400 to 700 nm wavelength range of the spectrum. Why discuss color? There are many ways to talk about color (tint, shade, hue, brightness, luminance, color, chromaticity), and it is useful to understand how the eye perceives it.

Color receptors: cones have three kinds of color-sensitive pigments, each responding to a different range of wavelengths. These are roughly red, green, and blue in their peak response, but each responds to a wide range of wavelengths. The combination of the responses of these different receptors gives us our color perception. This is called the tristimulus model of color vision.

The luminous-efficiency function: because the three receptors have different peak frequencies, the apparent brightness of a color depends not only on the luminance but on the wavelength. Plotting perceived luminance as a function of wavelength gives the luminous-efficiency function of the eye.

The RGB color model: the simplest color model attempts to model the three stimulus values: red, green, blue. 24-bit color: one byte each for red, green, and blue; each is called a channel. Problem: machine word sizes are usually 32 bits, not 24. Solution: use 32 bits per pixel, 24 bits of color (3 channels) plus an 8-bit alpha channel.

Luminance and chromaticity: RGB isn't the most intuitive model of color. Artists usually think of dark/light and color as two different things: luminance + chromaticity.

Luminance: luminance, intensity, brightness, lightness, luma, and value all refer to the light/dark properties of the stimulus. Some models use linear, nonlinear, or perceptually linear encoding.

Chromaticity requires two parameters. Example: hue, the dominant wavelength; and saturation, how
pure that color is. Some models use different chromaticity parameters. Examples: red vs. blue; red vs. pink; dark red vs. bright red.

CIE XYZ. CIE: in French, Commission Internationale de l'Eclairage; in English, the International Commission on Illumination. Not the same as RGB, and not tied to hardware: it describes the entire visible gamut. Two chromaticity (similar to hue or color) values, X and Z, and one luminous-efficiency (similar to intensity) value, Y. It provides a standard for sharing color information between disciplines, e.g., computer graphics and fabric design.

The CIE chromaticity diagram: CIE uses three primaries X, Y, and Z to replace red, green, and blue. The XYZ primaries are computed from RGB (Rec. 709) through a linear transform:

    X = 0.412453 R709 + 0.357580 G709 + 0.180423 B709
    Y = 0.212671 R709 + 0.715160 G709 + 0.072169 B709
    Z = 0.019334 R709 + 0.119193 G709 + 0.950227 B709

Normalized CIE primaries: x = X/(X+Y+Z), y = Y/(X+Y+Z), z = Z/(X+Y+Z).

Consider the plane x + y + z = 1: every point on this plane can be characterized by unique x and y, with z = 1 - x - y. Plotting on this plane the color of each wavelength gives the CIE chromaticity diagram.

Color gamuts: the color space spanned by a set of primary (base) colors is called a color gamut. Example: the space of all colors that can be displayed by a device with three color phosphors is the gamut of that device. No three-primary system with positive weights spans the full space of perceivable colors. The CIE chromaticity diagram spans the gamut of human color vision. (Recall that each cone in the eye integrates over a wide range of wavelengths.)

The RGB model: when light is mixed, wavelengths combine (add). The red-green-blue (RGB) model is used most often for additive models. Adding a color and its complement creates white. Primary/complement pairs: red/cyan, green/magenta, blue/yellow.

RGB space: can't produce all visible colors (it is contained within CIE XYZ); additive, to produce other colors; perfect for imaging, since display hardware uses three color phosphors. [Slide: the displayable portion of the CIE diagram.] RGB color space is a subset of 3D
Cartesian space: a cube with black at the origin, white at the opposite corner, grays along that diagonal, and red, green, blue, cyan, magenta, and yellow at the remaining corners. Additive colors (RGB): the mixture of colored light.

The CMY model: the cyan-magenta-yellow (CMY) model is the most common subtractive model. Paint, ink, or dye subtracts color from white light and reflects the rest; mixing paint pigments subtracts multiple colors. It is the same as RGB except that white is at the origin and black is at the far extent of the diagonal: the paper is already white, so we can't add to it. Very important for hardcopy devices.

    C = 1 - R,    M = 1 - G,    Y = 1 - B

Subtractive color mixing: yellow absorbs blue (white minus blue = yellow); magenta absorbs green (white minus green = magenta); cyan absorbs red (white minus red = cyan).

The CMYK model: black can be created by adding a subtractive color and its complement, or by adding all three subtractive colors. Problem: paints, inks, and dyes are imperfect, so it's hard to make pure black, which is especially a problem in print, where black is used so often. Solution: add black (K) as a fourth primary, giving the CMYK color model.

Tints and shades: a tint adds more white to a color; a shade adds more black. Tints and shades are not inverses.

The NTSC YIQ model: the NTSC (i.e., commercial television) standard uses a color model called YIQ:

    Y = 0.299 R + 0.587 G + 0.114 B
    I = 0.596 R - 0.275 G - 0.321 B
    Q = 0.212 R - 0.532 G + 0.311 B

Y is luminance, or brightness; I and Q are chromaticity. Color TV broadcasts are YIQ; black-and-white TV uses only Y. Why separate luminance (brightness) from color? We perceive brightness ranges better than color, so we can give brightness more bandwidth: Y about 4.5 MHz, I about 1.5 MHz, Q about 0.6 MHz.

The HSV model: another color model is the hue-saturation-value (HSV) model. It is more intuitive to understand and manipulate, requires a nonlinear computation to convert to RGB, and has singularities at pure black and pure white. HSV is much more intuitive for specifying colors. Hue, the color: 0 <= h <= 360. Saturation, the depth of the color: 0 <= s <= 1. Value, the brightness of the color: 0 <= v <= 1. HSV
results from a nonlinear distortion of the RGB cube. [The slide shows the HSV hexcone: value runs from black (v = 0) up the axis to white (v = 1), hue is the angle around the axis, and saturation grows outward from the gray axis.]

Other color models: L*u*v*, a perceptually linear version of CIE XYZ; CIE L*a*b*; HLS (hue-lightness-saturation); HVC (hue-value-chroma); and many others.

Pseudo-color: many color-encoding schemes use some finite number of colors from a larger palette. Example: 8-bit color uses a 256-element lookup table (LUT) to map pixel values (indices) to 24-bit color values. Problem: how to convert a 24-bit image to 8-bit pseudo-color? Answer: color quantization.

Color and visualization: color can make information visualization more interesting, but be careful using color to convey information. Specific color meanings are useful, but keep them limited. Color scales are harder to interpret than intensity scales. (A heat scale is one of the few that partially works.)

Color image processing: pick a color space that makes sense for your application. To manipulate intensity without changing chromaticity, or chromaticity without affecting intensity, use an intensity-chromaticity-based model. For perceptual linearity, interpolation, etc., use CIE L*u*v*, CIE L*a*b*, or a similar color space. For user interaction, try HSV or something similar. Point: don't think that RGB is the only way to manipulate color!

Mike Bailey, CS 519, Oregon State University.

The GLSL shader process: create and compile a vertex shader source and a fragment shader source, attach and link them into a program, and use it.

Initializing the GL Extension Wrangler (GLEW):

    #include "glew.h"

    GLenum err = glewInit( );
    if( err != GLEW_OK )
    {
        fprintf( stderr, "glewInit Error\n" );
        exit( 1 );
    }
    fprintf( stderr, "GLEW initialized OK\n" );
    fprintf( stderr, "Status: Using GLEW %s\n", glewGetString( GLEW_VERSION ) );

Reading a shader source file:

    #include <stdio.h>

    FILE *fp = fopen( filename, "r" );
    fseek( fp, 0, SEEK_END );
    int numBytes = ftell( fp );
    GLchar *str = new GLchar[ numBytes + 1 ];
    rewind( fp );
    fread( str, 1, numBytes, fp );
    fclose( fp );
    str[ numBytes ] = '\0';

Reading a vertex file (fragment files work the same way):

    int status;
    GLint logLength;
    GLuint vertShader = glCreateShader( GL_VERTEX_SHADER );
    GLuint fragShader =
glCreateShader( GL_FRAGMENT_SHADER );

    glShaderSource( vertShader, 1, (const GLchar **)&str, NULL );
    glCompileShader( vertShader );
    CheckGlErrors( "Vertex Shader 1" );
    glGetShaderiv( vertShader, GL_COMPILE_STATUS, &status );
    if( status == GL_FALSE )
    {
        fprintf( stderr, "Vertex Shader compilation failed\n" );
        glGetShaderiv( vertShader, GL_INFO_LOG_LENGTH, &logLength );
        GLchar *log = new GLchar[ logLength ];
        glGetShaderInfoLog( vertShader, logLength, NULL, log );
        fprintf( stderr, "\n%s\n", log );
        delete [ ] log;
        exit( 1 );
    }
    CheckGlErrors( "Vertex Shader 2" );

Creating the program and attaching the shaders to it:

    GLuint program = glCreateProgram( );
    glAttachShader( program, vertShader );
    glAttachShader( program, fragShader );

Linking the program and checking its validity:

    glLinkProgram( program );
    CheckGlErrors( "Shader Program 1" );
    glGetProgramiv( program, GL_LINK_STATUS, &status );
    if( status == GL_FALSE )
    {
        fprintf( stderr, "Link failed\n" );
        glGetProgramiv( program, GL_INFO_LOG_LENGTH, &logLength );
        GLchar *log = new GLchar[ logLength ];
        glGetProgramInfoLog( program, logLength, NULL, log );
        fprintf( stderr, "\n%s\n", log );
        delete [ ] log;
        exit( 1 );
    }
    CheckGlErrors( "Shader Program 2" );

    glValidateProgram( program );
    glGetProgramiv( program, GL_VALIDATE_STATUS, &status );
    fprintf( stderr, "Program is %s\n", status == GL_TRUE ? "valid" : "invalid" );

Making the program active:

    glUseProgram( program );

Making the program inactive (use the fixed-function pipeline):

    glUseProgram( 0 );

Passing in uniform variables (note: the count argument to glUniform3fv is the number of vec3s being passed, so it is 1 here, not 3):

    float lightLoc[3] = { 0., 100., 0. };
    GLint location = glGetUniformLocation( program, "LightLocation" );
    if( location < 0 )
        fprintf( stderr, "Cannot find Uniform variable 'LightLocation'\n" );
    else
        glUniform3fv( location, 1, lightLoc );

Passing in attribute variables:

    GLint location = glGetAttribLocation( program, "abArray" );
    if( location < 0 )
        fprintf( stderr, "Cannot find Attribute variable 'abArray'\n" );
    else
    {
        glBegin( GL_TRIANGLES );
            glVertexAttrib2f( location, a0, b0 );
            glVertex3f( x0, y0, z0 );
            glVertexAttrib2f( location, a1, b1 );
            glVertex3f( x1, y1, z1 );
            glVertexAttrib2f( location, a2, b2 );
            glVertex3f( x2, y2, z2 );
        glEnd( );
    }

Checking for errors:

    void
    CheckGlErrors( const char *caller )
    {
        unsigned int glerr = glGetError( );
        if( glerr == GL_NO_ERROR )
            return;
        fprintf( stderr, "GL
Error discovered from caller '%s':\n", caller );
        switch( glerr )
        {
            case GL_INVALID_ENUM:
                fprintf( stderr, "Invalid enum\n" );
                break;
            case GL_INVALID_VALUE:
                fprintf( stderr, "Invalid value\n" );
                break;
            case GL_INVALID_OPERATION:
                fprintf( stderr, "Invalid operation\n" );
                break;
            case GL_STACK_OVERFLOW:
                fprintf( stderr, "Stack overflow\n" );
                break;
            case GL_STACK_UNDERFLOW:
                fprintf( stderr, "Stack underflow\n" );
                break;
            case GL_OUT_OF_MEMORY:
                fprintf( stderr, "Out of memory\n" );
                break;
            default:
                fprintf( stderr, "Unknown OpenGL error: %d (0x%0x)\n", glerr, glerr );
        }
    }

Deconvolution (this slide and the surrounding filter-design slides repeat material transcribed earlier): to undo a low-pass filter H(u), apply H_inv(u) = 1/H(u). Problem 1: this assumes you know the point-spread function.
Problem 2: H may have had small values at high frequencies, so H_inv has large values; small errors can get magnified greatly, especially at the high frequencies.

Mike Bailey, Oregon State University.

The GLSL shader-creation process: create and compile a vertex shader source, a geometry shader source, and a fragment shader source; attach and link them into a program; use it.

Initializing the GL Extension Wrangler (GLEW):

    #include "glew.h"

    GLenum err = glewInit( );
    if( err != GLEW_OK )
    {
        fprintf( stderr, "glewInit Error\n" );
        exit( 1 );
    }
    fprintf( stderr, "GLEW initialized OK\n" );
    fprintf( stderr, "Status: Using GLEW %s\n", glewGetString( GLEW_VERSION ) );

(http://glew.sourceforge.net)

Reading a vertex, geometry, or fragment shader source file into a character array:

    #include <stdio.h>

    FILE *fp = fopen( filename, "r" );
    if( fp == NULL )
        { /* handle the open failure */ }
    fseek( fp, 0, SEEK_END );
    int numBytes = ftell( fp );             // length of the file
    GLchar *buffer = new GLchar[ numBytes + 1 ];
    rewind( fp );                           // same as fseek( fp, 0, SEEK_SET )
    fread( buffer, 1, numBytes, fp );
    fclose( fp );
    buffer[ numBytes ] = '\0';              // the entire file is now in a byte
Creating and Compiling a Vertex Shader from that Character Buffer
- Geometry and fragment shaders work the same way; only the part of this process specific to the type of shader changes.

```cpp
int status;
int logLength;

GLuint vertShader = glCreateShader( GL_VERTEX_SHADER );
glShaderSource( vertShader, 1, (const GLchar **)&buffer, NULL );
delete [ ] buffer;
glCompileShader( vertShader );
CheckGlErrors( "Vertex Shader 1" );

glGetShaderiv( vertShader, GL_COMPILE_STATUS, &status );
if( status == GL_FALSE )
{
    fprintf( stderr, "Vertex shader compilation failed\n" );
    glGetShaderiv( vertShader, GL_INFO_LOG_LENGTH, &logLength );
    GLchar *log = new GLchar[ logLength ];
    glGetShaderInfoLog( vertShader, logLength, NULL, log );
    fprintf( stderr, "\n%s\n", log );
    delete [ ] log;
    exit( 1 );
}
CheckGlErrors( "Vertex Shader 2" );
```

How Does That "Array of Strings" Thing Work?

```cpp
GLchar *ArrayOfStrings[3];
ArrayOfStrings[0] = "#define SMOOTH_SHADING";           // a #define
ArrayOfStrings[1] = ". . .";    // a commonly-used procedure
ArrayOfStrings[2] = ". . .";    // the real vertex shader code
glShaderSource( vertShader, 3, ArrayOfStrings, NULL );
```

These are two ways to provide a single character buffer:

```cpp
GLchar *buffer[1];
buffer[0] = ". . .";            // the entire shader code
glShaderSource( vertShader, 1, buffer, NULL );

GLchar *buffer = ". . .";       // the entire shader code
glShaderSource( vertShader, 1, (const GLchar **)&buffer, NULL );
```

Why Use an Array of Strings as the Shader Input, Instead of Just a Single String?
- You can use the same shader source and insert the appropriate #defines at the beginning.
- You can insert a common header file (a ".h" file); it can simulate a #include to reuse common pieces of code.
- #ifdef preprocessing happens once at compile time, versus if-tests that execute every time the shader runs:

```cpp
if( Mode == SmoothShading )
{
    . . .
}
else if( Mode == PhongShading )
{
    . . .
}
```

versus:

```cpp
#ifdef SMOOTH_SHADING
    . . .
#endif
#ifdef PHONG_SHADING
    . . .
#endif
```

Creating the Program and Attaching the Shaders to It

```cpp
GLuint program = glCreateProgram();
glAttachShader( program, vertShader );
glAttachShader( program, fragShader );
glAttachShader( program, geomShader );
```

Linking the Program and Checking Its Validity

```cpp
glLinkProgram( program );
CheckGlErrors( "Shader Program 1" );

glGetProgramiv( program, GL_LINK_STATUS, &status );
if( status == GL_FALSE )
{
    fprintf( stderr, "Link failed\n" );
    glGetProgramiv( program, GL_INFO_LOG_LENGTH, &logLength );
    GLchar *log = new GLchar[ logLength ];
    glGetProgramInfoLog( program, logLength, NULL, log );
    fprintf( stderr, "\n%s\n", log );
    delete [ ] log;
    exit( 1 );
}
CheckGlErrors( "Shader Program 2" );

glValidateProgram( program );
glGetProgramiv( program, GL_VALIDATE_STATUS, &status );
fprintf( stderr, "Program is %s.\n", status == GL_TRUE ? "valid" : "invalid" );
```
Making the Program Active

```cpp
glUseProgram( program );
```

This is now an attribute, i.e., this shader combination is in effect until you change it.

Making the Program Inactive (use the fixed-function pipeline instead)

```cpp
glUseProgram( 0 );
```

Passing in Uniform Variables

```cpp
float lightLoc[3] = { 0., 100., 0. };

GLint location = glGetUniformLocation( program, "LightLocation" );
if( location < 0 )
    fprintf( stderr, "Cannot find Uniform variable 'LightLocation'\n" );
else
    glUniform3fv( location, 1, lightLoc );
```

Passing in Attribute Variables

```cpp
GLint location = glGetAttribLocation( program, "abArray" );
if( location < 0 )
    fprintf( stderr, "Cannot find Attribute variable 'abArray'\n" );
else
{
    glBegin( GL_TRIANGLES );
        glVertexAttrib2f( location, a0, b0 );
        glVertex3f( x0, y0, z0 );
        glVertexAttrib2f( location, a1, b1 );
        glVertex3f( x1, y1, z1 );
        glVertexAttrib2f( location, a2, b2 );
        glVertex3f( x2, y2, z2 );
    glEnd();
}
```

Checking for Errors

```cpp
void CheckGlErrors( const char *caller )
{
    unsigned int glerr = glGetError();
    if( glerr == GL_NO_ERROR )
        return;
    fprintf( stderr, "GL Error discovered from caller '%s':\n", caller );
    switch( glerr )
    {
        case GL_INVALID_ENUM:
            fprintf( stderr, "Invalid enum\n" );
            break;
        case GL_INVALID_VALUE:
            fprintf( stderr, "Invalid value\n" );
            break;
        case GL_INVALID_OPERATION:
            fprintf( stderr, "Invalid operation\n" );
            break;
        case GL_STACK_OVERFLOW:
            fprintf( stderr, "Stack overflow\n" );
            break;
        case GL_STACK_UNDERFLOW:
            fprintf( stderr, "Stack underflow\n" );
            break;
        case GL_OUT_OF_MEMORY:
            fprintf( stderr, "Out of memory\n" );
            break;
        default:
            fprintf( stderr, "Unknown OpenGL error: %d (0x%0x)\n", glerr, glerr );
    }
}
```

(It's not a bad idea to do this throughout your OpenGL programs, even without shaders.)

Writing a C++ Class to Handle Everything Is Fairly Straightforward
- The class loads, compiles, and links the shader. It prints error messages and returns NULL if something failed.
- Use the GPU program during display; revert to the fixed-function pipeline during display when it is not needed.

CS 519 Signal & Image Processing: Other Transforms

Other Transforms
- A transform is a change in the numeric representation of a signal that preserves all of the signal's information.
- Transforms can be thought of as a change of coordinates into some coordinate system (basis set): an alternate form of expressing the vector/signal.
- They all have the same basic form:
  1. Choose your basis functions.
  2. Get the weights using the inner product of the signal and the basis functions.
  3. Reconstruct by adding weighted basis functions.

The Fourier Transform
- Basis functions: complex harmonics e^(i 2 pi s t) (or e^(i 2 pi s n / N)).
- Transform (calculating the weights of each basis function):
    F(s) = integral from -inf to +inf of f(t) e^(-i 2 pi s t) dt
- Inverse transform (putting together the weights):
    f(t) = integral from -inf to +inf of F(s) e^(i 2 pi s t) ds

Discrete Cosine Transform (DCT)
- Basis functions: real-valued cosines cos( (2t+1) s pi / 2N ),
  where alpha(0) = sqrt(1/2) and alpha(s) = 1 for 0 < s < N.
- Transform:
    Gc(s) = (2/N) alpha(s) * sum over t = 0..N-1 of g(t) cos( (2t+1) s pi / 2N )
- Inverse transform:
    g(t) = sum over s = 0..N-1 of alpha(s) Gc(s) cos( (2t+1) s pi / 2N )
- Treats the signal as reflected and periodic; a real-valued transform.

Different Transforms

  Transform | Basis functions                               | Good for
  Fourier   | sines and cosines (complex harmonics)         | frequency analysis, convolution
  Cosine    | cosines                                       | frequency analysis, but not convolution
  Haar      | square pulses of different widths and offsets | binary data
  Slant     | ramp signals of different slopes and offsets  | first-order changes
  Wavelets  | various                                       | time-frequency analysis

Hotelling (Karhunen-Loeve) Transform
- Basis functions: eigenvectors of the covariance matrix.
- Idea: measure statistical properties of the relationship between pixels; find the optimal relationships (eigenvectors); use these as basis functions.
- Signal/image-specific.

Wavelets
- Basis functions: scaled (resized) copies of the same function; the functions must have finite extent.
- Stretching gives frequency selectivity; limited extent gives spatial localization.
Cojoint Representations
- Signals are pure time/space domain: no frequency component.
- Sinusoidal (Fourier, DCT) transforms are pure frequency domain: no spatial component.
- Wavelets and other cojoint representations are somewhat localized in space and somewhat localized in frequency.
- Accuracy in the spatial domain is inversely proportional to accuracy in the frequency domain.

Energy Compaction
- All of these transforms produce a more compact representation, in terms of energy, than the original image.
- Energy compaction means a large part of the information content sits in a small part of the representation.

  Representation | Compaction | And/But
  Image          | poor       | easily interpreted
  Fourier        | good       | Convolution Theorem
  Cosine         | better     | fast
  Hotelling      | best       | basis functions are signal-specific
  Wavelets       | good       | some spatial representation as well

A Hardware-Eye View of the Graphics Process
- Input devices and the network feed the program; uniform variables feed the vertex and fragment stages.
- Vertices flow MC -> WC -> EC -> CC -> NDC -> SC, where:
  MC = Model Coordinates, WC = World Coordinates, EC = Eye Coordinates, CC = Clip Coordinates, NDC = Normalized Device Coordinates, SC = Screen Coordinates, TC = Texture Coordinates, RGBAZ = pixels/texels.
- The model transform takes MC to WC; the viewing transform (gluLookAt) takes WC to EC; the (homogeneous) projection takes EC to CC.

A Shader-Eye View of the Graphics Process
- The vertex processor consumes vertices; assembled primitives are rasterized into pixels, which the fragment processor shades.
- With geometry shaders: transformed vertices -> assembled primitive -> geometry shader -> interpolated values -> pixels.

Varying Variables Are Interpolated by the Rasterizer
- [Figure: a triangle whose per-vertex varying values (e.g., Color, LightIntensity) are interpolated across every fragment the rasterizer generates.]
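The MC-to-SC coordinate path above is mostly 4x4 matrix multiplies plus one divide. A C sketch of the two core steps: transforming a homogeneous point into clip coordinates and then doing the homogeneous division to reach NDC (the matrix in the usage is a toy perspective-like example, not a real glFrustum matrix):

```c
/* Transform a homogeneous point by a 4x4 matrix (row-major),
   then divide by w: clip coordinates -> normalized device coordinates */
void pointToNdc( const float m[4][4], const float in[4], float ndc[3] )
{
    float cc[4];
    for( int r = 0; r < 4; r++ )
    {
        cc[r] = 0.f;
        for( int c = 0; c < 4; c++ )
            cc[r] += m[r][c] * in[c];
    }
    ndc[0] = cc[0] / cc[3];   /* the homogeneous (perspective) division */
    ndc[1] = cc[1] / cc[3];
    ndc[2] = cc[2] / cc[3];
}
```

With a bottom row of (0, 0, -1, 0) the divide produces w = -z, which is exactly the perspective division the earlier homogeneous-coordinates notes describe.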
In Hardware, It's Really Parallel-Processed
- [Figure: vertices (MC) fan out across multiple vertex processors, and fragments fan out across multiple fragment processors, each producing RGBAZ pixels.]

CS 519 Signal & Image Processing: Display and Reconstruction

Properties of Displays
- Size and number of pixels; brightness; linearity; flatness; resolution.

Reconstruction
- The reverse of digitization.
- Undo sampling, or at least make it seem continuous: Gaussian spots, resampling.
- Undo quantization (convert back to analog): interpolation, dithering.

Intensity Discrimination
- The human eye can discriminate about 1000 shades of gray; for constant adaptation, about 200 levels.
- 8 bits -> 2^8 = 256 shades.

Linearity
- Applies to output as well as input: twice the recorded value should be twice as bright.
- Problem: monitor response is not linear.

Gamma
- Output intensity:
    I = c V^gamma + b
  where V is the input voltage, b is the offset (bias), and c is the gain (slope).

Gamma Correction
- Drive the display with V = k g^(1/gamma), where g is the graylevel. After gamma correction the output intensity is
    I = c (k g^(1/gamma))^gamma, which is proportional to g.
- Different monitors have different gammas; be careful of different operating systems, drivers, etc.
- Applies to imaging devices as well: cameras, scanners, etc.

Gaussian Spots
- Digital display devices generate output via a collection of dots/spots.
- Each spot has a 2D intensity distribution, modeled as a radially symmetric Gaussian:
    f(x, y) = e^( -(x^2 + y^2) ln 2 / R^2 )
  where R is the radius at which the intensity drops to 1/2 its maximum:
    f(r) = e^( -(r/R)^2 ln 2 ) = 2^( -(r/R)^2 )

Flat Display
- A constant high-intensity image should look flat.
- Problem: it is hard to make individual spots blend into a constant field.
- Solutions: use wider spots, or put spots closer together.

Resolution
- Wider or closer-together spots mean less resolution.
- Individual spots spread and interact with their neighbors; rapid changes lose contrast.
- Modulation: scaled contrast between neighboring high- and low-intensity pixels.
Contrast vs. Frequency
- Display sinusoids of different frequencies and measure contrast as a function of frequency: this gives the display's transfer function.

CS 519 Signal & Image Processing: Histograms

Definition
- Histograms count the number of occurrences of each graylevel value: H(D) = count of pixels at graylevel D.

Properties
- The sum of the histogram elements equals the image size.
    Discrete: sum over D = 0..255 of H(D) = # of pixels
    Continuous: integral of H(D) dD = area
- The sum of the values between a and b equals the size of all objects in that range:
    sum over D = a..b of H(D) = # of pixels in objects (integral form: area of objects)
- Integrated optical density (the "weight" of the image or objects):
    IOD = integral from a to b of D H(D) dD
- Mean graylevel (average intensity in the image or objects):
    MGL = integral of D H(D) dD / integral of H(D) dD

Application: Camera Parameters
- Too bright: lots of pixels at 255 (or the max).
- Too dark: lots of pixels at 0.
- Gain too low: not enough of the range used.

Application: Segmentation
- Can be used to separate bright objects from a dark background, or vice versa.

Histograms: Normalizing and Cumulative
- Probability density function: the histogram normalized by area, p(D) = H(D) / A.
- Cumulative histogram: counts pixels with values up to and including the specified value,
    C(a) = integral from 0 to a of H(D) dD
- Cumulative density function: the normalized cumulative histogram,
    P(a) = C(a) / A = integral from 0 to a of p(D) dD
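The histogram properties above (sum = image size, cumulative counts, mean graylevel) translate directly into code. A C sketch for 8-bit images; the four-pixel image in the usage is made up:

```c
#define GRAY_LEVELS 256

/* H(D): count the pixels at each graylevel */
void histogram( const unsigned char *img, int n, int hist[GRAY_LEVELS] )
{
    for( int d = 0; d < GRAY_LEVELS; d++ )
        hist[d] = 0;
    for( int i = 0; i < n; i++ )
        hist[ img[i] ]++;
}

/* C(a): cumulative histogram, # of pixels with values <= a */
void cumulativeHistogram( const int hist[GRAY_LEVELS], int cum[GRAY_LEVELS] )
{
    cum[0] = hist[0];
    for( int d = 1; d < GRAY_LEVELS; d++ )
        cum[d] = cum[d-1] + hist[d];
}

/* MGL = sum( D * H(D) ) / sum( H(D) ) */
double meanGraylevel( const int hist[GRAY_LEVELS], int n )
{
    double sum = 0.;
    for( int d = 0; d < GRAY_LEVELS; d++ )
        sum += (double)d * hist[d];
    return sum / n;
}
```

Note that the final cumulative entry equals the pixel count, which is the "sum of histogram elements equals the image size" property.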
CS 519 Signal & Image Processing: Reconstruction

Image Reconstruction
- Reconstruction is the process of attempting to recreate the original signal, given a corrupted one.
- Terms: the scene is the real world; an image is a (possibly corrupted) picture of a scene; image reconstruction attempts to recreate the scene from an image.

Required Knowledge
- Reconstruction algorithms usually use one or more of:
  - knowledge about the process that corrupted the image;
  - knowledge about properties of the original scene.
- Examples: deconvolution requires knowledge of the point-spread function; Wiener filtering requires an estimate of the strength of the noise.

Knowledge About the Corruption Process
- Knowledge about the corruption process puts limits on reconstruction.
- Usually thought of as fitting the data: the reconstructed image can't vary too much from the original corrupted image.
- Example: assuming white noise with standard deviation sigma, the probability of getting noisy image g from scene f is
    p(g | f) proportional to the product over pixels i of e^( -(g_i - f_i)^2 / 2 sigma^2 )

Knowledge About Scene Properties
- Possible general properties: generally smooth, with a few scattered rapid transitions.
- Possible specific properties: known scene contents (subject anatomy, etc.), other related (e.g., earlier) scenes.
- p(f) can be determined for all scenes f.

Example: Penalize Unsmooth Images
    p(f) proportional to exp( - sum over i, sum over k in N(i) of rho(f_i - f_k) )
  where N(i) denotes the neighborhood of i.
- Notice that, with a suitable penalty rho, one large discontinuity in intensity is more likely than several smaller discontinuities.
- Results in piecewise-constant images with infrequent but rapid discontinuities.

Statistical Reconstruction
- Goal: over all possible reconstructed scenes f, find the one that maximizes p(f | g) for image g.
- Problem: your knowledge of the imaging process tells you p(g | f), but how do you determine p(f | g)?
- Really big problem: how big is the space of all possible scenes f?

Bayesian Reconstruction
    p(f | g) = p(g | f) p(f) / p(g)
- p(g | f) is the data term; p(f) is the a priori knowledge (the prior); p(g) is usually assumed to be uniform.
- p(f | g) is called the a posteriori estimate; this is often called maximum a posteriori (MAP) estimation.
- If p(g | f) and p(f) are negative exponentials, the process usually boils down to minimizing some function
    E(f) = data(f, g) + lambda * prior(f)
  where data(f, g) penalizes reconstructions f that don't agree with the data g, and prior(f) penalizes reconstructions that are a priori unlikely. The weight lambda controls the relative importance of the two.

Balancing the Data and Prior Terms
- If lambda is set too low, the data term dominates and there is little improvement.
- If lambda is set too high, the prior term dominates and the reconstruction may not be true to the original.

Optimization
- Since the space of all f to search is far too large, non-exhaustive functional minimization techniques must be used: gradient descent, simulated annealing, neural networks, graduated non-convexity, etc.
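The minimization of data + weighted prior can be sketched with plain gradient descent on a 1D signal, using the quadratic smoothness prior sum of (f_i - f_{i+1})^2. Everything numeric here (the signal, the prior weight, the step size, the iteration count) is a made-up illustration, not a recipe from the course:

```c
#define MAP_N 8

/* The prior term: roughness = sum over i of (f_i - f_{i+1})^2 */
double roughness( const double f[MAP_N] )
{
    double r = 0.;
    for( int i = 0; i < MAP_N - 1; i++ )
        r += ( f[i] - f[i+1] ) * ( f[i] - f[i+1] );
    return r;
}

/* Gradient descent on E(f) = sum (f_i - g_i)^2 + lambda * roughness(f) */
void mapReconstruct( const double g[MAP_N], double f[MAP_N],
                     double lambda, double step, int iters )
{
    for( int i = 0; i < MAP_N; i++ )
        f[i] = g[i];                                   /* start at the data */
    for( int it = 0; it < iters; it++ )
    {
        double grad[MAP_N];
        for( int i = 0; i < MAP_N; i++ )
        {
            grad[i] = 2. * ( f[i] - g[i] );            /* data term */
            if( i > 0 )         grad[i] += 2. * lambda * ( f[i] - f[i-1] );
            if( i < MAP_N - 1 ) grad[i] += 2. * lambda * ( f[i] - f[i+1] );
        }
        for( int i = 0; i < MAP_N; i++ )
            f[i] -= step * grad[i];
    }
}
```

Raising lambda makes the result smoother but pulls it further from g, which is exactly the balancing act described above.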
Other Reconstruction Methods
- There are many other reconstruction methods, but nearly all of them:
  - use knowledge about the process that corrupted the image/signal;
  - use knowledge about properties of the original scene/data;
  - attempt to optimize some form of likelihood function.

Mike Bailey, Oregon State University: Noise

Noise
- Can be 1D, 2D, or 3D.
- Is a function of input values.
- Ranges from -1 to 1, or from 0 to 1.
- Is equal to the mid-value at integer input values.
- Looks random, but really isn't: it has continuity and is repeatable.

Positional Noise; Gradient Noise
- [Figures: noise defined by random values at lattice points vs. random gradients at lattice points.]

Quintic Interpolation Creates More Continuity Than Cubic
- Cubic: C1 continuity at the integer values.
- Quintic: C2 continuity at the integer values.

Coefficients for the Cubic and Quintic Forms
- Noise(t) = C_N0 N0 + C_N1 N1 + C_G0 G0 + C_G1 G1 + C_C0 C0 + C_C1 C1
  (N = noise values, G = gradients, C = curvatures)
- Cubic:
    C_N0 = 1 - 3t^2 + 2t^3
    C_N1 = 3t^2 - 2t^3 = 1 - C_N0
    C_G0 = t - 2t^2 + t^3
    C_G1 = -t^2 + t^3
    C_C0 = 0
    C_C1 = 0
- Quintic:
    C_N0 = 1 - 10t^3 + 15t^4 - 6t^5
    C_N1 = 10t^3 - 15t^4 + 6t^5 = 1 - C_N0
    C_G0 = t - 6t^3 + 8t^4 - 3t^5
    C_G1 = -4t^3 + 7t^4 - 3t^5
    C_C0 = (1/2)t^2 - (3/2)t^3 + (3/2)t^4 - (1/2)t^5
    C_C1 = (1/2)t^3 - t^4 + (1/2)t^5

Noise Octaves
- Idea: add multiple noise waves, doubling the frequency and halving the amplitude of each successive octave.

Turbulence
- Idea: take the absolute value of the noise about the mid-value, sharpening the appearance: normal vs. turbulent.

- [Figures: 1-octave vs. 4-octave image representations of 2D noise; 3D surface representation of 2D noise; 3D volume rendering of 3D noise; 3D volume isosurfaces of 3D noise at the mid-value.]

Examples
- Color blending; color blending for fire; deciding when to discard (for erosion).

Have an Equation? Have Actual Coordinates?
- Add noise to the coordinates to produce new coordinates, then use the new coordinates in the equation.

Surface and Displacement Shaders
- Surface only; displacement only; surface + displacement.

What's the Difference Between These Two Images? (Displacement-Mapped vs. Bump-Mapped)
- Displacement-mapped: move the point, then recompute the normal:
    P = P + normalize(N) * disp;  N = calculatenormal(P);
- Bump-mapped: recompute the normal as if the point had moved, but leave the point itself alone:
    N = calculatenormal( P + normalize(N) * disp );
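The cubic and quintic blending coefficients above are easy to verify numerically. A C sketch of the value coefficient C_N1 for each form (with C_N0 = 1 - C_N1), plus a one-segment noise interpolation using values only; the gradient and curvature terms are omitted, and the noise values in the usage are made up:

```c
/* Cubic value coefficient:   C_N1 = 3t^2 - 2t^3   (C1 at t = 0, 1) */
double cubicCN1( double t )
{
    return 3.*t*t - 2.*t*t*t;
}

/* Quintic value coefficient: C_N1 = 10t^3 - 15t^4 + 6t^5   (C2 at t = 0, 1) */
double quinticCN1( double t )
{
    double t3 = t*t*t;
    return 10.*t3 - 15.*t3*t + 6.*t3*t*t;
}

/* One noise segment, value terms only */
double noiseSegment( double n0, double n1, double t )
{
    double c1 = quinticCN1( t );
    return ( 1. - c1 ) * n0 + c1 * n1;     /* C_N0 = 1 - C_N1 */
}
```

Both coefficients run from 0 at t = 0 to 1 at t = 1, so the segment interpolates the lattice noise values exactly; the quintic's extra flatness at the endpoints is what buys the C2 continuity.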
- [Example figures: "When Muddy Dinosaurs Roamed the Earth"; "How Now, Muddy Cow?"]

What's the Difference Between These Two Images?
- One version displaces every point:
    P = P + normalize(N) * disp;  N = calculatenormal(P);
- The other displaces only when the displacement amount passes a test (the slide's condition is garbled; something like if( disp != 0. )), which changes which parts of the surface move.

Mike Bailey, CS 519, Oregon State University, May 31, 2006

Fundamental Differences Between RenderMan Shaders and OpenGL Shaders
- The two systems differ in their goals, in what gets shaded (RenderMan shades surface microfacets; GLSL shades vertices and pixels/fragments), in their shader variables, in noise (built into RenderMan; via a texture in GLSL, for now), and in the shaders themselves.

OpenGL Graphics Pipeline
- Vertices pass through a programmable (vertex) unit, pixel groups/fragments pass through a programmable (fragment) unit, and both can read texture memory. (Courtesy of Randi Rost.)

GLSL Vertex Shader Inputs and Outputs
- Inputs: standard OpenGL attributes and generic attributes (per-vertex); user-defined uniforms (epsilon, myLightPos, surfColor, etc.); built-in uniforms (gl_Fog.color, gl_ModelViewMatrix, etc.); texture maps.
- Outputs: special variables (gl_Position, which you must write, and gl_PointSize); standard varying variables (gl_FrontColor, gl_BackColor, etc.); user-defined varying variables (normal, refraction, etc.). (Courtesy of Randi Rost.)

A GLSL Vertex Shader Replaces These Operations
- Vertex transformations; normal transformations; normal normalization; handling of per-vertex lighting; handling of texture coordinates.

A GLSL Vertex Shader Does Not Replace These Operations
- Frustum clipping; homogeneous division; viewport mapping; backface culling; polygon mode; polygon offset.

Built-in Vertex Shader Variables You Will Use a Lot

```glsl
vec4 gl_Vertex;
vec3 gl_Normal;
vec4 gl_Color;
vec4 gl_MultiTexCoordi;       // i = 0, 1, 2, ...
mat4 gl_ModelViewMatrix;
mat4 gl_ProjectionMatrix;
mat4 gl_ModelViewProjectionMatrix;
mat3 gl_NormalMatrix;
```

GLSL Vertex Shader Internal Names
- gl_Normal, gl_Color, gl_SecondaryColor, gl_FogCoord, gl_MultiTexCoord0 ... gl_MultiTexCoordN: the application sets the current per-vertex attribute values, which arrive in these built-in attribute variables. (Courtesy of Randi Rost.)

GLSL Shaders Are Like C, With Extensions for Graphics
- Types include int, ivec2, ivec3, ivec4.
- Types include float, vec2, vec3, vec4.
- Types include mat2, mat3, mat4.
- Types include bool, bvec2, bvec3, bvec4.
- Types include samplers, to access textures.
- Vector components are accessed with an index, or with .rgba, .xyzw, and .stpq.
- Vector components can be "swizzled": c1.rgba = c2.abgr;
- The discard operator, used in fragment shaders, discards fragments.
- Type qualifiers: const, attribute, uniform, varying.
- Procedure type qualifiers: in, out, inout.

GLSL Shaders Are Missing Some C-isms
- No type casts (use constructors instead); no automatic promotion.
- No switch statement; no pointers; no strings; no bitwise operators; no enums.
- Can only use 1D arrays, with no bounds checking.
- No file-based preprocessor directives.

GLSL Fragment Shader Inputs and Outputs
- Inputs: standard varying variables (gl_Color, gl_SecondaryColor, etc.); special input variables (gl_FragCoord, gl_FrontFacing); user-defined varying variables (normal, refraction, etc.); user-defined uniforms (epsilon, myLightPos, surfColor, etc.); built-in uniforms (gl_Fog.color, gl_ModelViewMatrix, etc.); texture maps.
- Outputs: the special variables gl_FragColor (must write), gl_FragDepth, and gl_FragData. (Courtesy of Randi Rost.)

A GLSL Fragment Shader Replaces These Operations
- Color computation; texturing; color arithmetic; lighting; fog; blending; discarding fragments.

A GLSL Fragment Shader Does Not Replace These Operations
- Stencil test; z-buffer test; stippling.

Built-in Fragment Shader Variables You Will Use a Lot

```glsl
vec4 gl_FragCoord;
float gl_FragDepth;
```

CS 519 Signal & Image Processing: Sampling

Sampling
- Sampling turns a continuous function into a discrete one.

Sampling: Spatial/Temporal Domain
- Sampling a continuous function f at time/space interval delta-t, to produce a discrete function g(n) = f(n delta-t), is the same as multiplying it by a comb:
    g = f . comb_h, where h = delta-t
- [Figure: f(t), continuous; comb_h(t), the sampling comb; g(t), discrete.]

Sampling: Frequency Domain
- Sampling in the spatial/temporal domain by multiplying with a comb (g = f . comb_h) is the same as convolution in the frequency domain with the transform of comb_h:
    G = F * comb_(1/h)
- Convolution of a function and a comb causes a copy of the function to "stick" to each tooth of the comb, and all of the copies add together.
- [Figure: F(s), the signal's spectrum; comb_(1/h)(s), the comb's spectrum; G(s), the spectrum of the discrete signal, with replicated copies of F.]

Reconstruction
- In theory, we can reconstruct the original continuous function by removing all of the extraneous copies of its spectrum created by the sampling process:
    F(s) = G(s) PI_(1/2h)(s)
- In other words, keep everything in the frequency domain between -1/(2h) <= s <= 1/(2h) and throw the rest away.
- [Figure: G(s), the spectrum of the discrete signal; the rectangular box filter PI_(1/2h)(s); F(s), the reconstructed signal's spectrum.]

The Sampling Theorem
- We can only do this reconstruction if the duplicated copies do not overlap. They do not overlap iff:
  1. the signal is band-limited, and
  2. the highest frequency in the signal is less than 1/(2h).
- In other words, the sampling rate 1/h must be twice the frequency of the highest frequency in the image. This is called the Nyquist rate.

Aliasing
- What if the duplicated copies in the frequency domain do overlap?
- High-frequency parts of the signal (those higher than 1/(2h)) intrude into neighboring copies; the higher the frequency, the lower the point of overlap in the adjacent copy.
- These high frequencies masquerading as low frequencies are called aliasing; the resulting false low-frequency patterns are called Moire patterns.
- [Figures: sampling below the Nyquist rate (overlapping spectra); sampling above the Nyquist rate; Moire patterns.]

Preventing Aliasing
- You have two choices:
  1. increase your sampling rate, or
  2. decrease the highest frequency in the signal (low-pass filter) before sampling.
Reconstruction (continued)
- Reconstruction was F(s) = G(s) PI_(1/2h)(s); in the time/spatial domain this is equivalent to convolving the samples with the corresponding sinc function, sin(pi t / h) / (pi t / h).
- So convolve your discretely-sampled, non-aliased image with a sinc function, and you can reconstruct the original continuous image.

Imperfect Reconstruction
- Problem: you can't do it. The sinc function has infinite extent; the best you can do is come close.
- By not perfectly clipping in the frequency domain, the duplicate copies now look like false high frequencies.
- "Jaggies" in graphics are false high frequencies caused by poor reconstruction.
- [Figure: G(s), the spectrum of the discrete signal, with an imperfect reconstruction filter letting pieces of the duplicate copies through.]

Correcting Imperfect Reconstruction
1. Sample well above the Nyquist rate.
2. Low-pass filter after reconstruction.

Typical Processing Pipeline
1. Low-pass filter to reduce aliasing.
2. Sample.
3. Do something with the digitized signal/image.
4. Reconstruct.
5. Low-pass filter to correct for imperfect reconstruction.

The Discrete Frequency Domain
- If sampling in the time/spatial domain is multiplication by a comb, so is sampling (discretizing) the frequency domain.
- Multiplication by a comb in one domain is convolution with a comb of inverse spacing in the other.
- Discrete time/spatial samples -> replicated copies of the signal's spectrum appear in the frequency domain.
- Discrete frequencies -> replicated copies of the signal appear in the time/spatial domain, i.e., the signal is periodic.
- If a signal is N time samples long and we discretize the frequency domain at 1/N intervals (like the DFT), we reproduce the signal every N samples in the time domain.
- The Discrete Fourier Transform of a truncated (finite-domain) signal is the Continuous Fourier Transform of the same signal made periodic.

Frequency Resolution
- An N-element signal is accurate in the frequency domain only to 1/N.
- To be more accurate in the spatial domain, sample more frequently; to be more accurate in the frequency domain, sample longer.

CS 519 Signal & Image Processing: Convolution

Impulse Response
- Remember that we can entirely characterize a system by its impulse response:
    delta(t) -> h(t)
- Problem: given an input signal x(t), how do we determine the output y(t)?

Linearity and Shift Invariance
- Because a system is linear: a delta(t) -> a h(t)
- Because a system is shift-invariant: delta(t - tau) -> h(t - tau)

Response to an Entire Signal
- A signal x(t) can be thought of as the sum of a lot of weighted, shifted impulse functions:
    x(t) = integral from -inf to +inf of x(tau) delta(t - tau) d tau
  where delta(t - tau) is the delta function at tau, and x(tau) is the weight of that delta function.
  (Read the integral simply as a summation.)

Response to an Entire Signal (cont.)
- Because of linearity, each impulse goes through the system separately:
    x(tau) delta(t - tau) -> x(tau) h(t - tau)
- This means:
    integral of x(tau) delta(t - tau) d tau -> integral of x(tau) h(t - tau) d tau
- But the weight x(tau) of the delta function at tau is just the signal's value there, so
    y(t) = integral from -inf to +inf of x(tau) h(t - tau) d tau
- This operation is called the convolution of x and h.

Convolution
- Convolution of an input x(t) with the impulse response h(t) is written as x(t) * h(t):
    x(t) * h(t) = integral from -inf to +inf of x(tau) h(t - tau) d tau
- So the response of a system with impulse response h(t) to input x(t) is simply
    y(t) = x(t) * h(t)

Convolution of Discrete Functions
- For a discrete function x[] and impulse response h[]:
    (x * h)[j] = sum over k of x[k] h[j - k]

One Way to Think of Convolution
- Shift a copy of h to each position t (or discrete position k).
- Multiply it by the value at that position, x(t) (or the discrete sample x[k]).
- Add the shifted, multiplied copies for all t (or all discrete k).

Example: Convolution "One Way"
    x[] = 1 4 3 1 2        h[] = 1 2 3 4 5

    j             0   1   2   3   4   5   6   7   8
    x[0] h[j-0]   1   2   3   4   5   .   .   .   .
    x[1] h[j-1]   .   4   8  12  16  20   .   .   .
    x[2] h[j-2]   .   .   3   6   9  12  15   .   .
    x[3] h[j-3]   .   .   .   1   2   3   4   5   .
    x[4] h[j-4]   .   .   .   .   2   4   6   8  10
    ----------------------------------------------------
    (x*h)[j]      1   6  14  23  34  39  25  13  10
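The tabulated example can be reproduced with a direct implementation of (x*h)[j] = sum over k of x[k] h[j-k]; the output is lenX + lenH - 1 samples long. A minimal C sketch:

```c
/* Direct discrete convolution: y must hold lenX + lenH - 1 entries */
void convolve( const int *x, int lenX, const int *h, int lenH, int *y )
{
    for( int j = 0; j < lenX + lenH - 1; j++ )
    {
        y[j] = 0;
        for( int k = 0; k < lenX; k++ )
        {
            int m = j - k;                 /* index into h */
            if( m >= 0 && m < lenH )
                y[j] += x[k] * h[m];
        }
    }
}
```

Running it on the slide's sequences reproduces the table's bottom row exactly.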
Another Way to Look at Convolution
    (x * h)[j] = sum over k of x[k] h[j - k]
- Think of it this way: flip the function h around zero; shift a copy to output position j; pointwise multiply, for each position k, the value of the function x and the flipped-and-shifted copy of h; add over all k and write that value at position j.

Why Flip the Impulse Function?
- An input at time tau produces a response at time t of h(t - tau).
- Suppose I want to determine the output at time t: what effect does the input at time t - tau have on it? h(tau). So as the input index runs forward, h is traversed backward.

Polynomial Multiplication
- Multiplying polynomials convolves their coefficient sequences:
    p1(x) = 3x^2 + 2x + 4
    p2(x) = 2x^2 + 5x + 1
    p1(x) p2(x) = 6x^4 + 19x^3 + 21x^2 + 22x + 4
  (the product's coefficients are the convolution of [3 2 4] with [2 5 1]).

Convolution in Statistics
- If you add the results from two independent distributions, the distribution of this sum is the convolution of the two independent distributions.
- Central Limit Theorem: as you convolve any distribution with itself n times, in the limit as n approaches infinity the result approaches a normal (Gaussian) distribution.

Convolution in Two Dimensions
- In one dimension:
    x(t) * h(t) = integral of x(tau) h(t - tau) d tau
- In two dimensions:
    f(x, y) * h(x, y) = double integral of f(tau_x, tau_y) h(x - tau_x, y - tau_y) d tau_x d tau_y
- Or, in discrete form:
    (f * h)[x, y] = sum over j, sum over k of f[j, k] h[x - j, y - k]

Example: Two-Dimensional Convolution
- [Figure: a small image whose rows are (1 1 2 2), convolved with the 3x3 kernel
    1 1 1
    1 2 1
    1 1 1 ]

Properties of Convolution
- Commutative: f * g = g * f
- Associative: (f * g) * h = f * (g * h)
- Distributive over addition: f * (g + h) = f * g + f * h
- Derivative: (f * g)' = f' * g = f * g'
- Convolution has the same mathematical properties as multiplication. (This is no coincidence.)
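The polynomial-multiplication slide above is itself just a convolution, so it can be checked with a few lines of C (coefficients listed highest degree first, matching the slide's notation):

```c
/* Multiply two polynomials by convolving their coefficient arrays;
   r must hold np + nq - 1 entries */
void polyMultiply( const int *p, int np, const int *q, int nq, int *r )
{
    for( int j = 0; j < np + nq - 1; j++ )
    {
        r[j] = 0;
        for( int k = 0; k < np; k++ )
            if( j - k >= 0 && j - k < nq )
                r[j] += p[k] * q[j - k];
    }
}
```

Convolving [3 2 4] with [2 5 1] gives [6 19 21 22 4], i.e., 6x^4 + 19x^3 + 21x^2 + 22x + 4, the product shown above.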
Useful Functions
- Square (hat) PI_a(t); triangle LAMBDA_a(t); Gaussian G(t, sigma); step u(t); impulse/delta delta(t); comb (shah) function comb_h(t). Each has a two-dimensional equivalent.

Square
    PI_a(t) = 1 if |t| <= a, 0 otherwise
- What does f(t) * PI_a(t) do to a signal f(t)? What is PI_a * PI_a?

Triangle
    LAMBDA_a(t) = 1 - |t|/a if |t| <= a, 0 otherwise

Gaussian
- Normalized Gaussian (area = 1):
    G(t, sigma) = ( 1 / (sqrt(2 pi) sigma) ) e^( -t^2 / 2 sigma^2 )
- Gaussian with maximum value 1:
    G(t, sigma) = e^( -t^2 / 2 sigma^2 )
- Convolving a Gaussian with another Gaussian:
    G(t, sigma1) * G(t, sigma2) = G(t, sqrt(sigma1^2 + sigma2^2))

Step Function
    u(t) = 1 if t >= 0, 0 otherwise
- What is the derivative of a step function?

Impulse/Delta Function
- We've seen the delta function before:
    delta(t) = infinity if t = 0, 0 otherwise;  integral of delta(t) dt = 1
- Shifted delta function (an impulse at t = k): delta(t - k).
- What is a function f(t) convolved with delta(t)? With delta(t - k)?

Comb (Shah) Function
- A set of equally-spaced impulses, also called an impulse train:
    comb_h(t) = sum over k of delta(t - hk)
  where h is the spacing.
- What is f(t) * comb_h(t)?

Convolution Filtering
- Convolution is useful for modeling the behavior of linear, shift-invariant devices.
- It is also useful to do ourselves, to produce a desired effect. When we do it ourselves, we get to choose the function that the input is convolved with: the convolution kernel.

Convolution Filtering: Averaging
- Can use a square function (box filter) or a Gaussian to locally average the signal/image.
- Square (box) function: uniform averaging. Gaussian: center-weighted averaging.
- Both of these blur the signal or image.

Convolution Filtering: Unsharp Masking
- To sharpen a signal/image, subtract a little bit of the blurred input:
    I'(x, y) = I(x, y) + alpha ( I(x, y) - I(x, y) * G(x, y, sigma) )
- This is called unsharp masking. The larger you make alpha, the more sharpening you get.
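The rule G(t, sigma1) * G(t, sigma2) = G(t, sqrt(sigma1^2 + sigma2^2)) says that variances add under convolution, and that holds exactly even for discrete kernels. A C sketch using the binomial kernel 1 2 1 (variance 1/2) as a stand-in for a discrete Gaussian:

```c
/* Variance of a centered, odd-length kernel about its midpoint */
double kernelVariance( const double *k, int n )
{
    double sum = 0., var = 0.;
    int c = n / 2;
    for( int i = 0; i < n; i++ )
        sum += k[i];
    for( int i = 0; i < n; i++ )
        var += (double)( (i - c) * (i - c) ) * k[i];
    return var / sum;
}

/* Direct convolution of two real kernels; y holds na + nb - 1 entries */
void convolveKernels( const double *a, int na, const double *b, int nb, double *y )
{
    for( int j = 0; j < na + nb - 1; j++ )
    {
        y[j] = 0.;
        for( int k = 0; k < na; k++ )
            if( j - k >= 0 && j - k < nb )
                y[j] += a[k] * b[j - k];
    }
}
```

Convolving 1 2 1 with itself gives 1 4 6 4 1, whose variance is 1/2 + 1/2 = 1; repeated self-convolution is also the Central Limit Theorem at work, heading toward a Gaussian.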
