# Image Processing CS 6640


These 47 pages of class notes were uploaded by Marian Kertzmann DVM on Monday, October 26, 2015. The notes belong to CS 6640 at the University of Utah, taught by Staff in Fall. Since upload they have received 28 views. For similar materials see /class/229999/cs-6640-university-of-utah in Computer Science at the University of Utah.


## Image Segmentation

Ross Whitaker, SCI Institute, School of Computing, University of Utah

### What Is Segmentation?

- Partitioning images or volumes into meaningful pieces, assigning labels ("partitioning problems")
- Isolating a specific region of interest, e.g. "find the star or the bluish thing"; the output is binary ("delineation problems")

### Why?

- Detection and recognition: Where is the vehicle? What type of vehicle is it?
- Quantifying object properties: How big is the tumor? Is it expanding or shrinking? How much radiation do I need to treat this prostate?
- Statistical analyses of sets of biological volumes

### What Is the Best Way to Segment Images?

- It depends on:
  - The kind of data: type of noise, signal, etc.
  - What you are looking for: shape, size, variability
  - Application specifics: how accurate, how many
- State of the art:
  - Specific data and shapes: train a template or model (capturing variability), then deform it to fit the specific data
  - General data and shapes: so many methods, so few good ones in practice (hence hand contouring)

### Hand Contouring

- Quick and easy, general-purpose segmentation tool
- Time consuming: 3D data is done slice by slice, with a cursor defining the boundary
- User variation, especially from slice to slice
- Tools available, e.g. Harvard SPL Slicer

### General-Purpose Segmentation Strategies

- Region-based methods: connected regions are locally homogeneous in some property; regions satisfy some property to within a tolerance (e.g. flood fill)
- Edge-based methods: regions are bounded by features; features represent sharp contrast in some property (locally maximal contrast), e.g. Canny edges

### Pixel Classification

- Simplest: thresholding. Pixels above the threshold are in class A, pixels below in class B
- Run connected components on the class labels
- Extension of thresholding: pattern recognition. Image intensities alone are not enough, so define a set of features at each pixel

### Options for Pixel Features

- Intensity
- Derivatives at different scales; also differential invariants, e.g. gradient magnitude
- Neighborhood statistics: mean, variance, neighborhood histogram
- Texture, e.g. bandpass filters
- Multivariate data (vector-valued range): color, spectral MRI

### Spectral MRI Classification

- Classify the T1, T2, and PD intensities at each pixel in feature space (Tasdizen et al.)
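The thresholding-plus-connected-components pipeline described above can be sketched in a few lines. This is an illustrative example, not code from the notes: the toy image, the threshold value, and the 4-connected neighborhood are all assumptions made for the sketch.

```python
from collections import deque

def threshold(img, t):
    """Classify each pixel: 1 if intensity >= t (class A), else 0 (class B)."""
    return [[1 if v >= t else 0 for v in row] for row in img]

def connected_components(labels):
    """Label 4-connected components of the foreground (value 1) via BFS."""
    h, w = len(labels), len(labels[0])
    comp = [[0] * w for _ in range(h)]  # 0 = background / unvisited
    next_id = 0
    for i in range(h):
        for j in range(w):
            if labels[i][j] == 1 and comp[i][j] == 0:
                next_id += 1
                comp[i][j] = next_id
                queue = deque([(i, j)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and labels[ny][nx] == 1 and comp[ny][nx] == 0):
                            comp[ny][nx] = next_id
                            queue.append((ny, nx))
    return comp, next_id

# Toy 4x5 "image": two bright blobs separated by a dark gap.
img = [
    [9, 9, 0, 7, 7],
    [9, 0, 0, 0, 7],
    [0, 0, 0, 7, 7],
    [0, 0, 0, 0, 0],
]
comp, n = connected_components(threshold(img, 5))
print(n)  # 2: the two bright blobs become two separate regions
```

Note that thresholding alone gives only a two-class label map; it is the connected-components pass that turns the "bright" class into distinct regions.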
### Pattern Recognition

- Relatively old idea (mid 20th century)
- Classify an instance based on multiple measurements (features)
- Statistical decision theory: minimize the probability of error
- For each set of measurements, say which class it belongs to (and maybe with what probability)

### Concept: Feature Vector

- A set of measurements $x \in \mathbb{R}^n$
- A position in feature space: $x = (x_1, \ldots, x_n)$ (the image domain maps into this range)

### Feature-Space Classification

- Typical approach: construct a function that tells you the extent to which $x$ is in class $l$: $f_l : \mathbb{R}^n \to \mathbb{R}$
- Two types of problems:
  - Supervised: classified examples are given
  - Unsupervised: only raw data is given

### What Is the Form of f?

- Could be anything, but for example:
  - Linear: $f(x) = x \cdot w$ (homogeneous coordinates)
  - Difference of Gaussians

### Finding f From Examples

- Prototypes: for each class use a prototype; classify an instance by its nearest prototype
- Neural nets (e.g. the perceptron): learn a set of parameters (e.g. the $w$'s) through many examples
- Statistical: construct probability density functions from examples

### Hierarchical Grouping Methods

- Splitting and merging of regions
- Construct a metric on region configurations from region statistics $M_i$ (average intensity, etc.)
- Are two regions similar enough to be merged? Should a region be split?
- Larger regions carry a higher cost; the smallest regions are single pixels

### Simple Merging Algorithm

1. Each pixel starts as one region
2. For each region, consider a merge with each neighbor
3. Compute the cost $C_{ij}$ of merging regions $i$ and $j$ from the region statistics $M_i$ and $M_j$
4. Sort candidate merges by cost (e.g. with a heap) and merge the minimum-cost pair
5. Stop at a target number of regions, or when no merges remain

### Minimum Cut (Shi and Malik)

- Treat the image as a graph: vertices are pixels, edges connect neighbors
- Must define a neighborhood stencil: the neighbors to which a pixel is connected
- [Figure: neighborhood connections on the image grid; weighted edges form the graph]
- Finding the cut means solving an N×N matrix problem (N = number of pixels); the eigenvector of minimal eigenvalue describes the minimum cut
- Computationally expensive, but the matrix is sparse because of the neighborhood structure (i.e. most connections are zero)
- Run recursively to get more regions
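The five-step merging algorithm above can be sketched with a union-find forest and a heap. The notes' exact cost formula is garbled in the scan, so this sketch assumes the cost of a merge is the difference of region mean intensities; the lazy re-insertion of stale heap entries is also an implementation choice, not something specified in the notes.

```python
import heapq

def merge_regions(img, k):
    """Greedy region merging (steps 1-5): one region per pixel, repeatedly
    merge the neighboring pair with the closest mean intensities, and stop
    when k regions remain."""
    h, w = len(img), len(img[0])
    n = h * w
    parent = list(range(n))                          # union-find forest
    total = [float(v) for row in img for v in row]   # intensity sum per region
    count = [1] * n                                  # pixels per region
    version = [0] * n                                # bumped when stats change

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]            # path halving
            a = parent[a]
        return a

    def cost(a, b):                                  # difference of means
        return abs(total[a] / count[a] - total[b] / count[b])

    heap = []                                        # (cost, i, j, ver_i, ver_j)
    for i in range(h):
        for j in range(w):
            a = i * w + j
            if j + 1 < w:
                heapq.heappush(heap, (cost(a, a + 1), a, a + 1, 0, 0))
            if i + 1 < h:
                heapq.heappush(heap, (cost(a, a + w), a, a + w, 0, 0))

    regions = n
    while regions > k and heap:
        c, a, b, va, vb = heapq.heappop(heap)
        ra, rb = find(a), find(b)
        if ra == rb:
            continue                                 # already merged
        if va != version[ra] or vb != version[rb]:
            # Stale statistics: re-insert with the current cost.
            heapq.heappush(heap, (cost(ra, rb), ra, rb, version[ra], version[rb]))
            continue
        parent[rb] = ra                              # merge rb into ra
        total[ra] += total[rb]
        count[ra] += count[rb]
        version[ra] += 1
        regions -= 1

    return [[find(i * w + j) for j in range(w)] for i in range(h)]

img = [[1, 1, 9, 9],
       [1, 1, 9, 9]]
labels = merge_regions(img, 2)
print(len({l for row in labels for l in row}))  # 2: dark half vs. bright half
```

The version counters implement lazy deletion: an entry whose region statistics have changed since it was pushed is refreshed rather than trusted, which keeps each heap operation cheap.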
### Watershed Segmentation

- Boundary function, e.g. gradient magnitude; catchment basins become the watershed regions
- Generalizes to any dimension and any boundary measure
- Watershed saliency: the height the water reaches before flooding into a neighboring basin; used as the cost of a merge to build a hierarchy

### Watershed Segmentation Properties

- General, but nonlocal: regions can leak
- Boundary based: poor in low-contrast data, sensitive to noise
- Low level (pixel based): lacks a shape model
- Preprocessing, e.g. anisotropic diffusion, is necessary for a reliable boundary measure
- [Figures: raw vs. diffused data; watershed segmentation at several levels; interactive watershed segmentation]

### Active Contours: Snakes (Kass, Witkin, Terzopoulos, 1987)

- Curve $C(s) = (x(s), y(s))$, with tangent vector $\partial C / \partial s$
- Define a "fitting" energy, e.g. $E[C] = \int \alpha \, |C_s|^2 + \beta \, |C_{ss}|^2 \, ds$ plus an image (feature) term
- Membrane energy ($|C_s|^2$) makes the snake shrink; thin-plate energy ($|C_{ss}|^2$) makes it stiff
- Minimize by gradient descent: a deformable contour

### Snakes: Motion

- The first variation of the energy gives the motion, e.g. $\partial C / \partial t = \alpha \, C_{ss} - \beta \, C_{ssss} + (\text{feature force})$
- The snake slides "downhill" on the feature image while trying to be "smooth"
- The model stays smooth and is attracted to features

### Snakes: Computation

- Represent the curve as a polyline $C_i$, where $i = 1, \ldots, N$
- Approximate derivatives by finite differences, e.g. $\partial^2 C / \partial s^2 \approx C_{i+1} - 2C_i + C_{i-1}$
- Update with forward differences in $t$: $C_i^{t+1} = C_i^t + \Delta t \, (\text{force at } C_i^t)$

### Deformable Models

- Spawned many new ideas in segmentation and surface processing
- Extensions include: many different kinds of features; combination with statistical classification; spectral and color data; 3D surfaces (segmentation and processing); changing topology (splitting and merging objects)
- Ties into other PDE-based image processing and other curve/surface representations

### Morphological Image Processing (Serra, 1980s, "Mathematical Morphology")

- Basic mathematical operations developed into theory and algorithms
- Binary images, extended to greyscale
- Used for filtering, segmentation, and feature detection

### Morphology Concepts

- Sets in images: the region of value 1; everything else is 0
- Basic operations: intersection $A \cap B$, union $A \cup B$, set difference $A - B$, complement $A^c$
### Morphology Concepts (continued)

- Translation of a set $A$ to a point $z$: $(A)_z = \{ w : w = a + z,\ a \in A \}$
- Reflection of $B$: $\hat{B} = \{ w : w = -b,\ b \in B \}$
- Dilation of $A$ by $B$: $A \oplus B = \{ z : (\hat{B})_z \cap A \neq \emptyset \}$; makes objects bigger
- Erosion of $A$ by $B$: $A \ominus B = \{ z : (B)_z \subseteq A \}$; makes objects smaller

### Opening and Closing

- $B$ is called the "structuring element"
- Opening (generally, smoothing by removing material): $A \circ B = (A \ominus B) \oplus B$
- Closing (generally, smoothing by adding material): $A \bullet B = (A \oplus B) \ominus B$
- Both operations are idempotent

### General Concepts

- For smoothing, $B$ is normally round
- The discrete implementation is easy
- Dilations of $A$ with $B$ tend to make the result look a little more like $B$ ($B$ convex; normalize for size)
- Openings remove small pieces and connections; closings fill in holes and gaps

### Morphological Filtering

- Example: thresholding for segmentation of the white matter of the brain from MRI

## CS 6964 Notes on Linear Systems

### 1. Linear Systems

Systems of equations that are linear in the unknowns are said to be linear systems. For instance,

$$a x_1 + b x_2 = c, \qquad d x_1 + e x_2 = f$$

gives 2 equations in 2 unknowns. More generally we have

$$a_{11} x_1 + \cdots + a_{1N} x_N = b_1$$
$$a_{21} x_1 + \cdots + a_{2N} x_N = b_2$$
$$\vdots$$
$$a_{M1} x_1 + \cdots + a_{MN} x_N = b_M$$

This can be written in matrix representation as $Ax = b$, where

$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1N} \\ a_{21} & a_{22} & \cdots & a_{2N} \\ \vdots & & & \vdots \\ a_{M1} & a_{M2} & \cdots & a_{MN} \end{pmatrix}
\qquad \text{and} \qquad
b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_M \end{pmatrix}$$

The solution is given by multiplying both sides by $A^{-1}$:

$$A^{-1} A x = I x = x = A^{-1} b$$

This is the same answer you would get if you isolated one variable at a time and then substituted back into the other equations.

We will be concerned with solving linear systems in the unknowns $x$ under the following conditions:

- When $M < N$, or $M = N$ and the equations are degenerate (singular). Degeneracy happens when the equations are not linearly independent. The determinant of a square matrix is zero if and only if it is singular. The number of linearly independent equations is called the rank of $A$; if the rank is less than the size, the matrix is said to be not of full rank. In that case there is a set of nonzero vectors that produce zero output. These vectors can be added to any solution and it is still a solution, so the solution is not unique. The space spanned by the set of vectors that produce zero is called the null space of $A$. We would like to find a single solution and to characterize the null space of $A$.
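The dilation, erosion, opening, and closing definitions above translate directly to code when a binary image is represented as a set of pixel coordinates. This is a sketch under assumptions not in the notes: the coordinate-set representation, and a 3×3 square structuring element (which is symmetric, so its reflection is itself, and which contains the origin).

```python
def dilate(A, B):
    """Dilation A (+) B: for symmetric B this is the set of all sums a + b."""
    return {(ax + bx, ay + by) for (ax, ay) in A for (bx, by) in B}

def erode(A, B):
    """Erosion A (-) B: points z where B translated to z fits inside A.
    Candidates are restricted to A, which is valid because B contains the origin."""
    return {(ax, ay) for (ax, ay) in A
            if all((ax + bx, ay + by) in A for (bx, by) in B)}

def opening(A, B):
    """Opening A o B = (A (-) B) (+) B: smoothing by removing material."""
    return dilate(erode(A, B), B)

def closing(A, B):
    """Closing A . B = (A (+) B) (-) B: smoothing by adding material."""
    return erode(dilate(A, B), B)

# 3x3 square structuring element centered at the origin (symmetric).
B = {(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)}

# A 5x5 solid square plus one isolated "spur" pixel.
A = {(x, y) for x in range(5) for y in range(5)} | {(7, 2)}

opened = opening(A, B)
print((7, 2) in opened)   # False: opening removed the small spur
print((2, 2) in opened)   # True: the solid square survives
```

This matches the "general concepts" slide: the opening strips the small piece while leaving the large convex region intact, and applying the opening a second time changes nothing (idempotence).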
- When $M = N$ and $A$ is of full rank: we would like to find the unique solution $x$ in a way that is efficient and accurate. In some cases the matrix $A$ can be nearly singular; it can then be impossible to compute the solution, even though it exists, because of numerical errors.
- When $M > N$, the system is overdetermined. In this case we would like to find the best compromise solution. Often the best solution is defined in the sense of least squares, i.e. it minimizes

$$\| A x - b \|^2$$

  There are many ways to do this. One way is to convert the equations into another linear $N \times N$ problem that solves this least-squares problem:

$$A^T A \, x = A^T b$$

  These are called the normal equations of the initial problem. The solution is

$$x = (A^T A)^{-1} A^T b$$

  The matrix $(A^T A)^{-1} A^T$ is called the pseudoinverse. Notice that while the normal equations can sometimes be solved with a method for well-posed linear systems, $A^T A$ can be singular or nearly so. Therefore the normal equations for overconstrained systems should be used with caution.
- In some cases we would like to solve for many different $b$'s.

### 2. Mechanisms for Solving Linear Systems

For well-posed systems (i.e. a square matrix of full rank) there are several mechanisms for solving. Note that explicitly computing the inverse $A^{-1}$ is generally not recommended.

- Gaussian elimination: somewhat robust; can detect singularities; not very efficient.
- LU decomposition: somewhat robust; efficient; can solve for many $b$'s; gives the determinant directly.
- Iterative techniques: can be slow, but very accurate; often used to clean up the accumulated round-off error associated with the other methods.

### 3. Singular Value Decomposition

A very powerful tool in numerical linear algebra is the singular value decomposition,

$$A = U W V^T = U \begin{pmatrix} w_1 & & \\ & \ddots & \\ & & w_N \end{pmatrix} V^T$$

where the matrices $U$ and $V$ are orthogonal in their columns ($V$ in its rows as well). The singular values and their associated columns are defined to within a scalar factor, and therefore we usually assume the column vectors of $U$ and $V$ are normalized:
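The normal-equations route to a least-squares solution can be demonstrated on a small overdetermined system. The line-fitting data below is made up for the example, and the comparison against NumPy's `lstsq` (which uses an orthogonal factorization rather than forming $A^T A$) illustrates the "use with caution" remark above.

```python
import numpy as np

# Overdetermined system: fit y = m*x + c to 5 points (M = 5 equations, N = 2 unknowns).
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([1.1, 2.9, 5.2, 7.0, 8.8])   # roughly y = 2x + 1

A = np.column_stack([xs, np.ones_like(xs)])  # each row is (x_i, 1)
b = ys

# Normal equations: A^T A x = A^T b. Caution: A^T A squares the condition
# number of A, so it can be nearly singular even when A is well behaved.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Library least-squares solver for comparison.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(x_normal, x_lstsq))  # True: both minimize ||Ax - b||^2
```

For this small, well-conditioned problem the two answers agree to machine precision; the difference only matters when the columns of $A$ are nearly dependent.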
$$U^T U = V^T V = I$$

Such a decomposition is always possible, regardless of the matrix, and there are stable numerical algorithms for computing it (given in Numerical Recipes). The decomposition is unique up to a consistent reordering of the singular values and their associated columns in all three matrices. Why would we want to do an SVD?

#### 3.1 SVD of a Square Matrix

If $A$ is square, say $N \times N$, then $U$, $V$, and $W$ are all the same size. Because $U$ and $V$ are orthogonal, their inverses are their transposes; because $W$ is diagonal, its inverse is the reciprocal of the diagonal elements. So the inverse of $A$ is

$$A^{-1} = V \, \mathrm{diag}(1/w_j) \, U^T$$

This won't work if one of the $w_j$'s is zero, or even if one is so small that its value is dominated by round-off error. The number of zero $w_j$'s gives the nullity of $A$. SVD thus gives a mechanism for finding the inverse, and some clear indications of what is wrong when it fails.

We can see this as follows. Any $x$ that is composed of columns of $V$ whose corresponding $w_j$'s are zero gives $Ax = 0$. If we multiply $A$ by such a column of $V$, call it $v_i$, we get

$$A v_i = U W V^T v_i = U W (0, \ldots, 1, \ldots, 0)^T = U (0, \ldots, w_i, \ldots, 0)^T = w_i u_i$$

which is zero when $w_i = 0$. Thus the null space is spanned by the columns of $V$ associated with zero $w_i$'s; any point in this space, when multiplied by $A$, returns 0. The space spanned by the columns of $U$ associated with the nonzero elements of $W$ is called the range of $A$, and its dimensionality is the rank. The rank plus the nullity equals the size of $A$, which is $N$; the nullity of $A$ is the dimension of its null space.

#### Solving Singular Problems

If $A$ is singular, one might want to single out a solution that is somehow better than the others. In this case one might want the smallest solution, i.e. the one of smallest length $\|x\|^2$. One way to do that is to use the diagonals as before, but replace $1/w_j$ by zero wherever $w_j$ is zero (i.e. use zero wherever $1/w_j$ blows up). The solution is then

$$x = V \, [\mathrm{diag}(1/w_j)] \, U^T b$$

with the convention that $1/w_j$ is replaced by 0 when $w_j = 0$. This gives the shortest solution to the problem.

Proof: Consider this solution modified by adding some other vector. Could the result be a shorter solution?
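The zeroed-reciprocal recipe above can be checked numerically. The singular matrix below is an assumption made for the example (its third row is the sum of the first two, so the rank is 2 and the nullity is 1), and the result is compared against NumPy's library pseudoinverse, which implements the same idea.

```python
import numpy as np

# Singular 3x3 matrix: row 3 = row 1 + row 2, so rank = 2, nullity = 1.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 1.0]])
b = np.array([1.0, 1.0, 2.0])   # consistent: b_3 = b_1 + b_2

U, w, Vt = np.linalg.svd(A)
print(np.sum(w > 1e-10))        # 2: the rank of A

# Replace 1/w_j by zero wherever w_j is (numerically) zero, then apply
# x = V diag(1/w_j) U^T b: the minimum-norm solution.
w_inv = np.array([1.0 / s if s > 1e-10 else 0.0 for s in w])
x = Vt.T @ (w_inv * (U.T @ b))

print(np.allclose(A @ x, b))                   # True: x solves the system
print(np.allclose(x, np.linalg.pinv(A) @ b))   # True: matches the pseudoinverse
```

Because $b$ is consistent with the singular system, infinitely many solutions exist; the zeroed-reciprocal construction picks the one with no component in the null space, which is exactly what the proof that follows establishes as the shortest.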
What happens when we add a vector $x'$ that is in the null space? Then

$$\| x + x' \| = \| V W^{-1} U^T b + x' \| = \| W^{-1} U^T b + V^T x' \|$$

(multiplication by the orthogonal matrix $V^T$ preserves length; $W^{-1}$ denotes the zeroed reciprocal above). The first term has nonzero elements only in those places where $w_j \neq 0$; the second term, because it is in the null space, has nonzero elements only in those places where $w_j = 0$. Therefore the two terms are orthogonal vectors, and their sum must have greater length than either part.

#### Solving Overconstrained Problems

We can also solve overconstrained problems and get the least-squares solution. The solution strategy is the same,

$$x = V \, [\mathrm{diag}(1/w_j)] \, U^T b$$

but it is guaranteed to minimize the residual

$$E = \| A x - b \|$$

Proof: Consider the solution given above, and modify it by adding some arbitrary vector $x'$. Let $b' = A x'$; clearly $b'$ is in the range of $A$. Then

$$\| A (x + x') - b \| = \| U W V^T \, V W^{-1} U^T b + b' - b \| = \| (U W W^{-1} U^T - I) \, b + b' \|$$
$$= \| U \left[ (W W^{-1} - I) \, U^T b + U^T b' \right] \| = \| (W W^{-1} - I) \, U^T b + U^T b' \|$$

However, $W W^{-1} - I$ is a diagonal matrix with nonzero elements only where $w_j = 0$. Because $b'$ lies in the range of $A$, $U^T b'$ has nonzero elements only where $w_j \neq 0$. Therefore these terms are orthogonal vectors, and the minimum of their sum is obtained when $b' = 0$.

#### 3.2 Solving Homogeneous Problems

Consider a homogeneous problem of the form

$$A x = 0$$

If it is overconstrained, we would like to solve the associated least-squares problem, which minimizes

$$E = \| A x \|$$

The solution of such systems is not unique; it is defined only up to a scalar factor. Therefore we usually look for the solution of the constrained problem $\|x\| = 1$, i.e. the unit-length solution that produces the smallest residual. The solution is given by SVD to be the column vector of $V$ which corresponds to the singular value in $W$ of smallest magnitude. How do we know this?

Proof: Consider as input the vector $x = v_i$, where the smallest singular value sits in position $i$. The output is

$$U W V^T v_i = U W (0, \ldots, 1, \ldots, 0)^T = U (0, \ldots, w_i, \ldots, 0)^T = w_i u_i$$

Notice that $u_i$ is a column of $U$ and has length 1. This is the shortest output possible for a unit-length input, because any other unit-length input would produce a weighted sum of the $u_j$'s with weights that must be larger than $w_i$, because

## CS 6640 Image Processing, Spring 2009: Practice Final

Name: ________  Student ID Number: ________

Rules:

- Closed book
- One page of notes (front and back)
- No calculators

Hints:

- The term "describe" does not mean complete sentences, paragraphs, or essays. If it is easier, you may use simple bullets and meaningful phrases to answer such questions.
- If you split answers across pages or onto the backs of pages, make a clear note on the page where the question is posed to indicate you have done so, and clearly note the question number and part on the separate page.
- There are six questions for a total of 100 points. Point values are roughly correlated with the amount of time you should devote to each question.

1. (15 pts) Suppose that a digital image A is transformed by histogram equalization to produce B. Prove that a second application of histogram equalization produces the same exact image, B.

2. (20 pts) Give the Fourier transform of the following functions; you may derive them any way you wish (show relevant work). Hint: they are quick to derive if you use known identities of the Fourier transform.
   (a) $f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left( -\frac{x^2}{2\sigma^2} \right) \cos(kx)$, where $\sigma$ and $k$ are constants.
   (b) $f(x, y) = 1$ if $|x| \leq 1$ and $|y| \leq 1$, and $0$ otherwise.

3. (15 pts) Consider an even function $f(x)$. Prove that the following statements hold for all integers $n \geq 0$:
   (a) $\dfrac{d^{2n} f}{dx^{2n}}$ is even;
   (b) $\dfrac{d^{2n+1} f}{dx^{2n+1}}$ is odd.

4. (20 pts) The Canny edge detector, as originally proposed, computes a binary map E, which is the zero crossings of second derivatives, as we discussed in class. Rather than a simple threshold, however, it incorporates a special thresholding mechanism called hysteresis thresholding. Hysteresis thresholding of the gradient image entails two thresholds, $T_{lo}$ and $T_{hi}$. A pixel p from E makes it into the final result if one of the following two criteria holds:
   (a) p is marked as a zero crossing in E and has gradient magnitude above $T_{hi}$;
   (b) p is marked as a zero crossing in E, has gradient magnitude above $T_{lo}$, and is connected to a pixel meeting the first criterion along a path of pixels
meeting this criterion (i.e. pixels above the lower threshold and in E). The idea is that a pixel with high gradient is an edge, and it brings with it all those pixels of lower gradient that are connected to it. Write pseudocode (high level, e.g. Matlab-like) to perform hysteresis thresholding. You can assume as input the image of zero crossings of second derivatives E, the gradient magnitude image G, and the thresholds. You may also assume that you can call an appropriate flood-fill function or equivalent low-level operations.

5. (15 pts) Suppose you have an edge detector, such as Canny, that produces binary images indicating thin edges, and suppose you want to save many such images to disk. Give two good methods by which you could take advantage of the special structure of such an image to do lossless image compression.

6. (15 pts) [Question not fully recoverable from the scan. It shows an image, whose intensity profile is given by an equation, together with several sinograms; the image has been convolved with several different functions, and the task is to say which sinogram corresponds to which of the functions, justifying the choices using the Radon transform. The question notes there is some amount of noise, so the matches are not perfect.]
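The hysteresis-thresholding procedure spelled out in question 4 can be sketched directly. This is one possible realization for illustration, not the official solution: the breadth-first search plays the role of the flood fill, and the 8-connected neighborhood and strict comparisons are assumptions.

```python
from collections import deque

def hysteresis(E, G, t_lo, t_hi):
    """Keep zero-crossing pixels (E[i][j] == 1) with gradient above t_hi,
    plus zero-crossing pixels above t_lo that connect to such a seed
    through a path of pixels that are in E and above t_lo (8-connected)."""
    h, w = len(E), len(E[0])
    out = [[0] * w for _ in range(h)]
    # Seeds: criterion (a) pixels.
    queue = deque((i, j) for i in range(h) for j in range(w)
                  if E[i][j] and G[i][j] > t_hi)
    for i, j in queue:
        out[i][j] = 1
    # Flood fill from the seeds: criterion (b) pixels.
    while queue:
        i, j = queue.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (0 <= ni < h and 0 <= nj < w and not out[ni][nj]
                        and E[ni][nj] and G[ni][nj] > t_lo):
                    out[ni][nj] = 1
                    queue.append((ni, nj))
    return out

# One strong pixel (G = 9) pulls in the weak pixels connected to it;
# the weak pixel behind the gap (E = 0) is dropped.
E = [[1, 1, 1, 0, 1]]
G = [[9, 4, 4, 4, 4]]
print(hysteresis(E, G, 3, 8))  # [[1, 1, 1, 0, 0]]
```

The example shows the point of the two thresholds: weak edge pixels survive only by association with a strong one, so isolated weak responses (often noise) are suppressed.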
