# Computer Science 0 (CS T101)

West Virginia University

This 277-page set of class notes was uploaded by Abe Jones on Saturday, September 12, 2015. The notes belong to CS T101 at West Virginia University, taught by Staff in Fall, and have received 23 views since upload. For similar materials see /class/202776/cs-t101-west-virginia-university in Computer Science at West Virginia University.


## Medical Image Analysis — CS 778/578

Computer Science and Electrical Engineering Dept., West Virginia University — March 30, 2009

### Outline

1. Minimizing the energy function
2. Methods based on Taylor series expansion
3. Search methods

### Minimizing the energy function

We wish to find at least a local minimum of some function $f$:

- The function may or may not be differentiable.
- The function may be expensive to evaluate.

There is therefore a need for a variety of minimization techniques:

- techniques which make use of analytically computed derivatives,
- techniques which use approximations to the derivatives,
- techniques which require a small number of function evaluations.

### Gradient descent

The technique is based on a first-order Taylor series expansion, i.e. a local linear approximation to $f$:

$$f(x) \approx f(x_0) + f'(x_0)(x - x_0)$$

We want to decrease the function $f$, so $f(x) - f(x_0) < 0$. This implies that $f'(x_0)(x - x_0) < 0$, or equivalently

$$x = x_0 - \alpha f'(x_0), \qquad \alpha > 0.$$

In higher dimensions the descent direction is parallel to the gradient:

$$x_{t+1} = x_t - \alpha \nabla f(x_t)$$

- Slow convergence.
- Stability depends on $\alpha$.

### Newton's method

The technique is based on a second-order expansion, i.e. a local quadratic approximation to $f$:

$$f(x) \approx f(x_0) + f'(x_0)(x - x_0) + \tfrac{1}{2} f''(x_0)(x - x_0)^2$$

We want to find the minimum of this local quadratic approximation. Differentiating the Taylor series expansion gives

$$f'(x) \approx f'(x_0) + f''(x_0)(x - x_0).$$

Setting $f'(x) = 0$ and solving for $x$ gives

$$x = x_0 - \frac{f'(x_0)}{f''(x_0)}.$$

In higher dimensions the descent direction depends on the gradient and the Hessian $H_f$, the matrix of second partial derivatives:

$$H_f = \begin{pmatrix} \frac{\partial^2 f}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_1 \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_n \partial x_1} & \frac{\partial^2 f}{\partial x_n \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_n^2} \end{pmatrix}$$

In higher dimensions Newton's method becomes

$$x_{t+1} = x_t - H_f(x_t)^{-1} \nabla f(x_t).$$

This converges in one iteration if $f(x)$ is quadratic. Usually we will need to modify the step size to account for $f(x)$ not being quadratic:

$$x_{t+1} = x_t - \alpha\, H_f(x_t)^{-1} \nabla f(x_t),$$

where $\alpha$ is chosen so that $f(x_{t+1}) < f(x_t)$. Note that if $H_f$ is approximated as $I$, Newton's method becomes equivalent to gradient descent.

- Faster convergence than gradient descent.
- Requires computation and inversion of the Hessian matrix. Ideally the gradient and Hessian can be computed analytically by differentiating $f$. They may also be approximated using finite differences, but this can require $O(n^2)$ evaluations of $f$.

### Quasi-Newton methods

Approximate the Hessian using the gradient information from previous iterations. Using

$$\nabla f(x_{t+1}) \approx \nabla f(x_t) + H_f(x_{t+1})(x_{t+1} - x_t),$$

there are a variety of ways of forming the new Hessian approximation. These are often in the form of an additive update

$$H_{t+1} = H_t + C\big(H_t, \nabla f(x_t), x_t\big).$$

Some techniques even directly approximate $H_f(x)^{-1}$. These techniques are used in the Matlab function `fminunc`.

### Levenberg–Marquardt

Levenberg–Marquardt approximates the Hessian as

$$H \approx J^T J + \mu I,$$

where $J$ is the Jacobian matrix (the matrix of first derivatives) and $\mu$ is a damping parameter:

- When $J^T J$ is a good approximation, make $\mu$ small (Newton's method).
- When $J^T J$ is a bad approximation, make $\mu$ large (gradient descent).

### Search methods: general idea

- Very general techniques.
- Simple to implement, few parameters.
- Useful in hybrid approaches, e.g. as initialization of a quasi-Newton scheme.
- Can handle discontinuous functions.

### Direct search methods (Hooke and Jeeves, 1961)

"Direct search: sequential examination of trial solutions involving comparison of each trial solution with the best obtained up to that time, together with a strategy for determining (as a function of earlier results) what the next trial solution will be."
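As a concrete illustration of the two update rules above, the sketch below minimizes a simple two-dimensional quadratic energy with both gradient descent and Newton's method. This is our own toy example (the energy, step size, and iteration counts are illustrative choices, not values from the notes); on a quadratic, Newton's method reaches the minimizer in a single step while gradient descent needs many.

```python
import numpy as np

# Energy f(x) = 0.5 x^T A x - b^T x, with analytic gradient and Hessian.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])

def grad(x):          # grad f(x) = A x - b
    return A @ x - b

def hess(x):          # Hf(x) = A (constant for a quadratic)
    return A

x_star = np.linalg.solve(A, b)   # true minimizer

# Gradient descent: x <- x - alpha * grad f(x); needs many small steps.
x = np.zeros(2)
alpha = 0.2
for _ in range(200):
    x = x - alpha * grad(x)
gd_err = np.linalg.norm(x - x_star)

# Newton's method: x <- x - Hf(x)^{-1} grad f(x); one step on a quadratic.
x = np.zeros(2)
x = x - np.linalg.solve(hess(x), grad(x))
newton_err = np.linalg.norm(x - x_star)
```

The same contrast drives the Levenberg–Marquardt damping parameter: small $\mu$ trusts the quadratic model, large $\mu$ falls back toward the slower but more robust gradient step.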
### Brent's line search

A derivative-free technique for optimizing a 1-dimensional function:

- Evaluate $f(x)$ at 3 points $x_1, x_2, x_3$ and interpolate with a quadratic (a parabola through the three points).
- Find the minimum $x_4$ of the quadratic.
- If $x_4$ is between $x_1, x_2$, the next 3 points are $x_1, x_4, x_2$.
- If $x_4$ is between $x_2, x_3$, the next 3 points are $x_2, x_4, x_3$.

Matlab: see the `fminbnd` documentation.

### n-dimensional optimization

Find a sequence of directions to optimize along in 1-D. One approach: simply use the $n$ coordinate directions.

- Inefficient when valleys of $f(x)$ are not aligned with the coordinate axes.

### Powell's method

Try to find an optimal sequence of directions to optimize along in 1-D:

- $n$ linearly independent search directions, plus the last step direction.
- Perform the quadratic line search in each search direction.
- Update the direction set.
- Repeat until converged.

Matlab: see `fsolve` (medium-scale optimization method).

In outline:

1. Start position $x_0$; directions $u_k = e_k$, $k = 1$ to $N$; $i = 0$.
2. $p_0 = x_i$.
3. For $k = 1$ to $N$: $p_k$ = minimum in direction $u_k$ through $p_{k-1}$.
4. For $j = 1$ to $N - 1$: $u_j = u_{j+1}$.
5. $u_N = p_N - p_0$; $i = i + 1$.
6. $x_i$ = minimum in direction $u_N$ through point $p_0$.
7. If not converged, go to 2.

### Simplex search

A simplex is a set of $n + 1$ points in $\mathbb{R}^n$:

- a 1-D simplex is a line segment,
- a 2-D simplex is a triangle,
- a 3-D simplex is a tetrahedron.

Simplex search methods use the values of the function $f(x)$ at the vertices of a simplex to drive the search. The basic operation: reflect the worst vertex through the center of the opposite face.

If the reflected vertex is still the worst vertex, choose the next-worst vertex.

- If we have found the optimum, the simplex may rotate about the best vertex.
- When this is detected, assume the optimum value is near the best vertex.

Some variants:

- Spendley et al.: simplex reflection and shrink operation.
- Nelder–Mead simplex search: expansion and contraction operations.

Matlab: `fminsearch` uses the Nelder–Mead simplex search.

## Medical Image Analysis — CS 778/578

Computer Science and Electrical Engineering Dept., West Virginia University — April 15, 2009

### Outline

1. Reaction-diffusion
   - Turing model
   - Other models and applications
   - Vector / tensor field visualization
   - Generalized reaction-diffusion

### Turing model: introduction

- Proposed as a model of pattern formation in animals (zebra stripes, leopard spots).
- It models the chemical reaction between two morphogens.
- Simultaneously, those chemicals diffuse within a developing organism.

Turing, A. M. 1952. The chemical basis of morphogenesis. Philosophical Transactions of the Royal Society B 237, 37–72.

### Turing model

The reaction model proposed by Turing:

$$\frac{\partial M}{\partial t} = d_M \nabla^2 M + MV - M - \alpha$$

$$\frac{\partial V}{\partial t} = d_V \nabla^2 V + \beta - MV$$

- $M, V$: concentrations of the 2 morphogens.
- $d_M, d_V$: diffusion rates.
- $\alpha, \beta$: constants controlling the reaction process.

### Generated textures

(Figures: reaction-diffusion textures generated on arbitrary surfaces.)

Turk, G. 1991. Generating textures on arbitrary surfaces using reaction-diffusion. In Proceedings of the 18th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), 289–298.

### Anisotropic reaction-diffusion

$$\frac{\partial M}{\partial t} = \operatorname{div}(D_M \nabla M) + MV - M - \alpha$$

$$\frac{\partial V}{\partial t} = \operatorname{div}(D_V \nabla V) + \beta - MV$$

- $D_M, D_V$ are diffusion tensors.

Witkin, A. and Kass, M. 1991. Reaction-diffusion textures. In Proceedings of the 18th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), 299–308.
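A minimal numerical sketch of the Turing system above (our own illustration, not from the notes): a 1-D grid, explicit Euler time stepping, periodic boundaries. The grid size, time step, diffusion rates, and the constants $\alpha = 12$, $\beta = 16$ are assumed illustrative values, chosen so that $M = V = 4$ is a homogeneous steady state of the reaction terms.

```python
import numpy as np

rng = np.random.default_rng(0)

n, dt, steps = 64, 0.01, 500
d_M, d_V = 0.0625, 0.25       # diffusion rates (illustrative)
alpha, beta = 12.0, 16.0      # reaction constants (illustrative)

# Start near the homogeneous steady state M = V = 4 (where MV = beta and
# MV - M - alpha = 0) with small random perturbations.
M = 4.0 + 0.05 * rng.standard_normal(n)
V = 4.0 + 0.05 * rng.standard_normal(n)

def lap(u):
    # 1-D Laplacian with periodic boundary conditions
    return np.roll(u, 1) - 2.0 * u + np.roll(u, -1)

for _ in range(steps):
    dM = d_M * lap(M) + M * V - M - alpha
    dV = d_V * lap(V) + beta - M * V
    M, V = M + dt * dM, V + dt * dV
```

In the papers cited above the same coupled update is run to convergence on a 2-D grid (or on a surface mesh), and the final concentration of one morphogen is rendered as the texture.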
### Fowler et al.: seashells

Fowler, D. R., Meinhardt, H., and Prusinkiewicz, P. 1992. Modeling seashells. In Computer Graphics (Proceedings of ACM SIGGRAPH '92), ACM Press, New York, NY, USA, 379–387.

### Kondo and Asai: fish

(Figure: simulated reaction-diffusion stripe patterns at successive time steps, compared with the developing angelfish.)

Kondo, S. and Asai, R. 1995. A reaction-diffusion wave on the skin of the marine angelfish Pomacanthus. Nature 376, 765–768.

### Preusser and Rumpf: flow visualization

$$\frac{\partial p}{\partial t} = \operatorname{div}\big(A(V, \nabla p_\epsilon)\, \nabla p\big) + f(p)$$

- Only a single equation, not a coupled system.
- The tensor $A(V, \nabla p_\epsilon)$ is constructed so that diffusion is parallel to the flow direction $V$.
- The reaction function $f(p)$ improves the contrast of the resulting image.

Preusser, T. and Rumpf, M. 1999. Anisotropic nonlinear diffusion in flow visualization. In IEEE Visualization, 325–332.

### Preusser and Rumpf: the reaction function

- Concentrations (image intensity) lie in $[0, 1]$.
- Two unconditionally stable concentrations: 0 and 1.
- One conditionally stable concentration: 0.5.

### Preusser and Rumpf: constructing the tensor A

Construct $A$ from the eigenvector decomposition

$$A = B(V) \begin{pmatrix} \alpha(\lVert V \rVert) & \\ & G(\nabla p_\epsilon)\,\mathrm{Id} \end{pmatrix} B(V)^T,$$

where $\alpha, G$ are scalar functions which determine the eigenvalues of $A$, and $p_\epsilon$ is a smoothed version of $p$. The eigenvectors are given by the columns of $B$:

$$B(V) = \left(\frac{V}{\lVert V \rVert},\, w_1, \ldots, w_{d-1}\right),$$

where $V$ is the flow velocity and the $w_j$ are orthogonal to $V$.

### Results

(Figures: 2-D flow visualization results; 3-D vector field visualization.)

### Tschumperle and Deriche: DT-MRI visualization

Using the fact that

$$\operatorname{div}(D \nabla I) = \operatorname{tr}(D H) + \nabla I \cdot \operatorname{div}(D),$$

where $H$ is the Hessian matrix of $I$ and

$$\operatorname{div}(D) = \begin{pmatrix} \operatorname{div}(D_{11}, D_{12})^T \\ \operatorname{div}(D_{21}, D_{22})^T \end{pmatrix},$$

the reaction-diffusion equation was approximated as

$$\frac{\partial I}{\partial t} \approx \operatorname{tr}(D H).$$

D. Tschumperle and R. Deriche. Tensor field visualization with PDE's and application to DT-MRI fiber visualization. In Proc. of Variational and Level Set Methods, 2003.

### Kindlmann et al.: reaction-diffusion in 3D

Gordon Kindlmann, David Weinstein, David Hart. Strategies for direct volume rendering of diffusion tensor fields. IEEE Transactions on Visualization and Computer Graphics 6(2), 124–138, April–June 2000.

### Generalized diffusion

Up until now, the diffusion coefficient has been a scalar or a matrix.

- A scalar is a rank-0 tensor: you need zero indices to access the elements of a scalar.
- The matrix is a rank-2 tensor.
- Only even-rank tensors characterize the diffusion process.
- Let's simulate diffusion with higher-rank diffusion tensors.

### Non-Gaussian diffusion

Einstein's formulation for the concentration $C$ of particles undergoing Brownian motion in 1-D is

$$C(x, t + \tau) = \int p(\tau, r)\, C(x - r, t)\, dr \qquad (1)$$

Expanding $C(x - r, t)$ in a Taylor series about $C(x, t)$:

$$C(x - r, t) = C(x, t) - r \frac{\partial C}{\partial x} + \frac{r^2}{2!} \frac{\partial^2 C}{\partial x^2} - \frac{r^3}{3!} \frac{\partial^3 C}{\partial x^3} + \cdots \qquad (2)$$

Substituting (2) into (1), we see that

$$C(x, t + \tau) = C(x, t)\left[\int p(\tau, r)\, dr\right] - \frac{\partial C}{\partial x}\left[\int p(\tau, r)\, r\, dr\right] + \frac{1}{2!}\frac{\partial^2 C}{\partial x^2}\left[\int p(\tau, r)\, r^2\, dr\right] - \frac{1}{3!}\frac{\partial^3 C}{\partial x^3}\left[\int p(\tau, r)\, r^3\, dr\right] + \cdots$$

Note that the quantities in square braces are moments of the distribution $p(\tau, r)$.

Since $\int p(\tau, r)\, dr = 1$, and using the fact that

$$\lim_{\tau \to 0} \frac{C(x, t + \tau) - C(x, t)}{\tau} = \frac{\partial C}{\partial t},$$

then for $\tau \ll 1$

$$\frac{\partial C}{\partial t} = -\frac{\langle r \rangle}{\tau} \frac{\partial C}{\partial x} + \frac{\langle r^2 \rangle}{2!\,\tau} \frac{\partial^2 C}{\partial x^2} - \frac{\langle r^3 \rangle}{3!\,\tau} \frac{\partial^3 C}{\partial x^3} + \cdots,$$

where angle brackets $\langle \cdot \rangle$ denote moments of $p(\tau, r)$.

- The first term is drift.
- Truncating this series at the second term yields the familiar isotropic diffusion equation.
- Extending to 3-D yields the Kramers–Moyal expansion of the diffusion equation.

### Kramers–Moyal expansion

In 3-D the series becomes

$$\frac{\partial C}{\partial t} = \sum_{n=1}^{\infty} \nabla^{(n)} \cdot \big(D^{(n)} C\big).$$

The nabla symbol $\nabla^{(n)}$ here is a compact notation denoting a tensor of rank $n$ whose components are partial derivatives of degree $n$. The rank-$n$ tensors $D^{(n)}$ are related to the central moments of the molecular displacement probability:

- The 1st moment (mean) is a rank-1 tensor (vector).
- The 2nd moment (covariance) is a rank-2 tensor.
- The 3rd moment (skewness) is a rank-3 tensor.
- The 4th moment (kurtosis) is a rank-4 tensor.
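To make the moment expansion concrete, this small sketch (our own illustration, with assumed values of $\tau$ and the step variance) simulates 1-D Brownian steps and checks that the first moment of the displacement distribution is the drift (zero here), while the second moment gives the diffusive spread: for Gaussian steps, $\langle r^2 \rangle = 2 D \tau$.

```python
import numpy as np

rng = np.random.default_rng(42)

tau = 0.01          # time step (assumed)
sigma2 = 2.0        # step variance per unit time, so <r^2> = sigma2 * tau
n_particles = 200_000

# One Brownian step r for each particle: zero-mean Gaussian displacement.
r = rng.normal(loc=0.0, scale=np.sqrt(sigma2 * tau), size=n_particles)

# Empirical moments of p(tau, r):
m1 = r.mean()                 # <r>   -> drift term, ~0 here
m2 = (r ** 2).mean()          # <r^2> -> 2 * D * tau, with D = sigma2 / 2
D_est = m2 / (2.0 * tau)      # estimated diffusion coefficient, ~1.0
```

For a Gaussian step distribution the odd moments vanish and the expansion truncates at the second term, recovering isotropic diffusion; non-Gaussian steps leave nonzero higher moments, which is exactly what the higher-rank tensors $D^{(n)}$ capture.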
(Figure: generalized reaction-diffusion textures — (a) isotropic Gaussian, (b) anisotropic Gaussian, (c)–(f) non-Gaussian rank-4 examples.)

## Medical Image Analysis — CS 778/578

Computer Science and Electrical Engineering Dept., West Virginia University — March 4, 2009

### Outline

1. Introduction
2. The min cut problem
3. Segmentation using graph cuts

### Graph cuts

- Reduce the problem of image segmentation to the problem of graph cuts.
- We can then use existing algorithms for graph cuts to perform segmentation.

We will present an interactive technique for segmenting an object from the background (only 2 regions). The user must specify some hard constraints: some pixels which are known to belong to the object, and some pixels which are known to belong to the background.

### Graph

Let $G = (V, E)$ be a graph which consists of nodes $V$ and edges $E$. In general the edges may be

- directed or undirected,
- weighted or unweighted.

The algorithm presented here uses a weighted, undirected graph. Additionally, all weights are assumed to be positive.

### The min cut problem

- Let the set of nodes include 2 special terminal nodes, labeled $S$ and $T$.
- An S-T cut (or just "cut") is a subset $C$ of $E$ such that there is no path between $S$ and $T$ when $C$ is removed from $E$.
- The cost of a cut is the sum of all of the weights of the edges in $C$.
- The min-cut problem is to find the cut with minimum weight.

(Figure: a grid graph with object and background terminals, and a cut separating them.)

### Overview

Convert an input to the segmentation problem into an input to the graph cuts problem:

- Convert the image and initialization into a graph $G$.
- Give high weights to edges between similar nodes.
- Give low weights to edges between dissimilar nodes.

Then find the minimum cost cut $C$ of the graph $G$, and convert the output $C$ into a segmentation.

### Create the graph G

Convert the image into an undirected, weighted graph.

(Figure: (a) 4-neighbor system, (b) 8-neighbor system, (c) 16-neighbor system.)

- Create a node $v \in V$ for each pixel in the image.
- Pick a neighborhood system to use: for example, a 4- or 8-neighbor system in 2-D, or a 6- or 26-neighbor system in 3-D.
- For each pair of neighbors, create an edge ("n-link").
- Set the weight for each n-link to $B(p, q)$, a pixel-similarity term.

$B(p, q)$ is a measure of the similarity of image intensities at pixels $p$ and $q$:

$$B(p, q) = c \exp\left(-\frac{(I_p - I_q)^2}{2\sigma^2}\right) \cdot \frac{1}{\mathrm{dist}(p, q)}$$

### Create the graph G: incorporate the initialization

The initialization is a set of pixels $O$ and $B$ in the object and background.

- Create 2 terminal nodes $S$ and $T$: the $S$ node represents the object, the $T$ node represents the background.
- For each image vertex $p$, create an edge to $S$ (the object terminal): if $p \in O$, then $w_{pS} = K$, a large constant.
- For each image vertex $p$, create an edge to $T$ (the background terminal):
  - if $p \in O$, then $w_{pT} = 0$;
  - if $p \in B$, then $w_{pT} = K$;
  - else $w_{pT} = \lambda\, R_p(\text{"obj"})$.

$R_p(\cdot)$ is a region-based function reflecting how well the intensity at $p$ fits into the object or background initialization.

### Create the graph G: region terms

Use the initialization regions to define histograms for region-based segmentation. Let

- $\Pr(I \mid O)$ be the intensity histogram of the object initialization,
- $\Pr(I \mid B)$ be the intensity histogram of the background initialization.

Use negative log-likelihoods to determine weights:

$$R_p(\text{"obj"}) = -\ln \Pr(I_p \mid O), \qquad R_p(\text{"bkg"}) = -\ln \Pr(I_p \mid B).$$

Similar penalty terms have previously been used in MAP-MRF approaches to segmentation.

### Find min cut

Known algorithms for finding the min cut:

- the Ford–Fulkerson method;
- the Edmonds–Karp algorithm, $O(VE^2)$.
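The n-link weight $B(p, q)$ above can be sketched directly (a toy illustration; the constants `c` and `sigma` are our own assumed values): similar neighbors get heavy edges that the min cut avoids, while pixels across a strong intensity edge get cheap edges that the cut prefers to sever.

```python
import numpy as np

def boundary_weight(ip, iq, dist=1.0, c=1.0, sigma=10.0):
    """n-link weight B(p, q) between neighboring pixels with intensities
    ip, iq: high for similar intensities, low across strong edges."""
    return c * np.exp(-((ip - iq) ** 2) / (2.0 * sigma ** 2)) / dist

# Flat region: nearly maximal weight, so the cut avoids separating them.
w_flat = boundary_weight(100.0, 102.0)

# Object boundary: tiny weight, so cutting here is nearly free.
w_edge = boundary_weight(100.0, 180.0)
```

This is exactly why the minimum-cost cut tends to trace intensity edges: the total cut cost is the sum of severed n-link (and t-link) weights, and severing links across real boundaries is cheapest.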
### Find min cut: max-flow

The max-flow algorithm optimized for graph cuts on grid graphs was presented in:

Yuri Boykov and Vladimir Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy minimization in computer vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, September 2004.

- Outperforms other algorithms.
- Seems to scale linearly in the number of pixels.

### The output

The function $A(p, C)$ returns the region of pixel $p$ given the cut $C$. Recall that the $S$ node represents the object and the $T$ node represents the background:

$$A(p, C) = \begin{cases} \text{"obj"} & \text{if } (p, T) \in C \\ \text{"bkg"} & \text{if } (p, S) \in C \end{cases}$$

### Segmentation results

(Figures: cardiac MR with left-ventricle segmentation; lung CT with lobe segmentation; liver segmentation. Volume sizes: liver 144×170×170, lung 253×165×205. A timing figure follows.)

## Medical Image Analysis — CS 778/578

Computer Science and Electrical Engineering Dept., West Virginia University — February 27, 2009

### Outline

1. Markov random fields for image segmentation
2. Classical MRF model for segmentation
3. Hidden Markov measure field model for image segmentation
4. Review
5. Finding the MAP estimators $\hat p$, $\hat\theta$
6. Application to brain MRI segmentation
7. Results

### Introduction

- Same assumption: the image is locally smooth, with a few discontinuities.
- Image intensities depend on labels (classes) and noise.
- The label field is a lattice of random variables.
- Smoothness of the label field is enforced through conditional probabilities.

Given an observed image $I$, find a label field $f$ consisting of $M$ non-overlapping regions. This label field is the segmentation result. In model-based segmentation, also find a parametric model $\Phi$ for the mapping between labels and image intensities.

- Label field: tissue classes.
- Model: piecewise constant, piecewise smooth.

### Overview

1. Some label fields $f$ are more likely to occur than others. Specifically, smooth label fields have a higher probability $P(f)$ associated with them.
2. The parametric model $\Phi(\theta)$ is deterministic, so the probability of observing $I$ given the label field and model parameters, $P(I \mid f, \theta)$, depends only on the noise.
3. From $P(f)$ (step 1) and $P(I \mid f, \theta)$ (step 2) we can compute $P(f, \theta \mid I)$. The label field $f$ and parameters $\theta$ which maximize this distribution will be the solution to the problem.

### Classical MRF model for segmentation: definitions

- $L$: pixel lattice consisting of $N$ pixels.
- $r$: a single lattice point, $r \in L$.
- $M$: the number of classes.
- $Z_M = \{1, \ldots, M\}$: class labels; $Z_M^N$: the space of all possible label fields.
- $f$: label field, $f(r) \in Z_M$.
- $\theta$: model parameters; $n$: noise.

### Observation model

- $f$: label field, $f \in Z_M^N$.
- $I$: observed image.
- $k$: region label.
- $b_k$: indicator function for region $k$, $b_k(r) = \delta(f(r), k)$.

Piecewise constant model: $\Phi(r, \theta_k) = \alpha_k$. Piecewise linear model: $\Phi(r, \theta_k) = \alpha_k x_r + \beta_k y_r + \gamma_k$.

### The label field

Smoothness over labels is statistical dependence:

- Label $f(r)$ is likely to be the same as one or more neighbors.
- Label $f(r)$ is statistically dependent on $f(s)$ when $s$ is near $r$, independent when $s$ is far from $r$.
- Local smoothness can be modeled using the Markovian property.

### Markov chains

A sequence of random variables with the Markovian property

$$P(x_n \mid x_0, x_1, \ldots, x_{n-1}) = P(x_n \mid x_{n-1}).$$

For time series: given the present, the future is conditionally independent of the past.

### Markov random field

An extension of the Markov chain concept to 2-D and 3-D: an undirected lattice of random variables. Let $N_r$ be the neighbors of $r$:

$$P(x_r \mid x_{L \setminus r}) = P(x_r \mid x_s,\ s \in N_r).$$
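As a small illustration of this locality (our own sketch, using an Ising-style pairwise smoothness with an assumed parameter `beta`), the conditional distribution of one pixel's label given only its 4 neighbors can be computed directly — no other pixel in the lattice matters:

```python
import numpy as np

def label_conditional(neighbor_labels, n_classes, beta=1.0):
    """P(f(r) = k | neighbors): each agreeing neighbor lowers the clique
    energy, so labels shared by many neighbors become more probable."""
    neighbor_labels = np.asarray(neighbor_labels)
    agree = np.array([(neighbor_labels == k).sum() for k in range(n_classes)])
    w = np.exp(beta * agree)
    return w / w.sum()

# A pixel whose 4-neighborhood is [1, 1, 1, 0] is most likely class 1,
# then class 0 (one supporting neighbor), then class 2 (no support).
p = label_conditional([1, 1, 1, 0], n_classes=3)
```

This conditional is exactly what a Gibbs sampler would draw from when sampling the label field one site at a time.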
### Neighborhood systems

The neighborhood of $r \in L$ is the set of nearby lattice points within a given distance of $r$.

(Figure: first- and second-order neighborhood systems.)

### Cliques

A clique $c$ is a subset of $L$ such that

- $|c| = 1$, or
- for all $r, s \in c$: $r \in N_s$.

(Figure: the cliques of the first- and second-order neighborhood systems.)

### Gibbs random fields

Definition: $f$ is a GRF if

$$P(f) = \frac{1}{Z} \exp\left(-\sum_{c \in C} V_c(f)\right),$$

where $C$ is the set of all cliques of $L$ and $V_c$ is the clique potential energy.

Hammersley–Clifford theorem: an MRF with respect to $(L, N)$ is also a GRF with respect to $(L, N)$. The converse is also true. The label field $f$ is an MRF, so we may formulate it as a GRF.

### Generalized Ising model for V_c

A GRF formulated using the generalized Ising model controls the smoothness of the label field. For 2-pixel cliques $c = \{r, s\}$:

$$V_c(f) = \begin{cases} -\beta & \text{if } f(r) = f(s) \\ +\beta & \text{otherwise} \end{cases}$$

Cliques of size $\neq 2$ have zero potential. This potential discourages discontinuities in the label field.

### Likelihood term P(I | f, θ)

Let $\hat I$ be the ideal image (no noise) and $I$ be the observed image:

$$I = \hat I + n,$$

where $n$ is zero-mean, independent noise, $n \sim P_n$. For Gaussian noise $N(0, \sigma^2)$,

$$P_n(a - b) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(a - b)^2}{2\sigma^2}\right);$$

this is the probability that the noise has the value $a - b$. Since the noise is independently distributed,

$$P(I) = \prod_{r \in L} P_n\big(I(r) - \hat I(r)\big).$$

For the parametric model,

$$\hat I(r) = \sum_{k=1}^{M} \Phi(r, \theta_k)\, b_k(r),$$

with $b_k$ the indicator function for region $k$. The parametric model depends on the label field $f$ and the model parameters $\theta$, so the likelihood term is

$$P(I \mid f, \theta) = \prod_{r \in L} P_n\big(I(r) - \Phi(r, \theta_{f(r)})\big).$$

### Solution

We want the posterior $P(f, \theta \mid I)$. We compute this by Bayes' rule:

$$P(f, \theta \mid I) = \frac{P(I \mid f, \theta)\, P(f, \theta)}{P(I)}$$

- $P(I)$ is a normalizing constant.
- $P(f, \theta) = P(f)\, P(\theta)$, assuming the label field and model parameters are independent.
- We can impose prior knowledge on the model parameters as well as the label field.

### MAP estimators

We want the label field $f$ and model parameters $\theta$ which maximize the posterior distribution $P(f, \theta \mid I)$; $\hat f$ and $\hat\theta$ are called the MAP (maximum a posteriori) estimators. Finding the global maximum of $P(f, \theta \mid I)$ is difficult (slow):

- Metropolis–Hastings
- Gibbs sampler
- Simulated annealing

The hidden Markov measure field (HMMF) model allows us to efficiently compute the MAP estimator for a similar problem.

### Hidden Markov measure field model: the probabilistic model

The model chain is: $p$ field → label field $f$ → observed image $I$ (observation model).

- $S_M = \{u \in \mathbb{R}^M : \sum_{k=1}^{M} u_k = 1,\ u_k \geq 0,\ k = 1, \ldots, M\}$.
- $p(r) \in S_M$ is a discrete probability distribution over labels.
- $p_k(r) = P(f(r) = k \mid p, \theta)$.
- $f(r) = \arg\max_k p_k(r)$.
- $P(I(r) \mid f(r), p, \theta) = P(I(r) \mid f(r), \theta)$: $p$ is the hidden field; the observation model depends on $f(r)$ directly.

### Prior distribution

The smoothness constraint on the $p$ field is enforced by treating the discrete pdfs as vectors, with the quadratic potential

$$V_{r,s}\big(p(r), p(s)\big) = \lambda \lVert p(r) - p(s) \rVert^2 = \lambda \sum_{k=1}^{M} \big(p_k(r) - p_k(s)\big)^2,$$

where $\lambda$ is a parameter controlling the smoothness of the field and $r, s$ are in the same clique:

$$P(p) = \frac{1}{Z} \exp\left(-\sum_{c} V_c(p)\right).$$

Note: the $p$ field is Markovian; the label field is not.

### Probability recap

Joint conditional probability: $P(A, B \mid C) = P(A, B, C)\,/\,P(C)$. Multiplicative law:

$$P(A, B, C) = P(B, C)\, P(A \mid B, C) = P(C)\, P(B \mid C)\, P(A \mid B, C).$$

So the joint conditional probability is $P(A, B \mid C) = P(A \mid B, C)\, P(B \mid C)$.

### Posterior distribution

$$P(p, \theta \mid I) = \frac{P(I \mid p, \theta)\, P(p)\, P(\theta)}{P(I)}$$

We need $P(I \mid p, \theta)$. We have

- $P(f(r) = k \mid p, \theta) = p_k(r)$, from the $p$ field;
- $P(I(r) \mid f(r), p, \theta) = P(I(r) \mid f(r), \theta)$, from the observation model.

We can write the joint conditional probability $P(I(r), f(r) \mid p, \theta)$ as

$$P\big(I(r), f(r) \mid p, \theta\big) = P\big(I(r) \mid f(r), p, \theta\big)\, P\big(f(r) \mid p, \theta\big).$$

Next, marginalize over $f$. Recall the marginal probability $P(A = a) = \sum_b P(A = a, B = b)$. We obtain $P(I(r) \mid p, \theta)$ by marginalizing over all possible labels:

$$P(I(r) \mid p, \theta) = \sum_{k=1}^{M} P\big(I(r) \mid f(r) = k, p, \theta\big)\, P\big(f(r) = k \mid p, \theta\big).$$

Since the $p$ field is hidden, $P(I(r) \mid f(r), p, \theta) = P(I(r) \mid f(r), \theta)$; and using the definition of the $p$ field, $P(f(r) = k \mid p, \theta) = p_k(r)$.
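The quadratic potential above can be exercised numerically (our own toy example, with $\lambda = 1$): a $p$ field that changes gradually between neighboring sites accumulates less prior energy than one with an abrupt jump, which is exactly the smoothness the prior rewards.

```python
import numpy as np

def pair_potential(p_r, p_s, lam=1.0):
    """V_{r,s}(p(r), p(s)) = lambda * ||p(r) - p(s)||^2 for two label pdfs."""
    d = np.asarray(p_r) - np.asarray(p_s)
    return lam * float(d @ d)

def prior_energy(p_field, lam=1.0):
    """Sum of pair potentials over neighboring sites of a 1-D p field."""
    return sum(pair_potential(p_field[i], p_field[i + 1], lam)
               for i in range(len(p_field) - 1))

# Two 2-class p fields over 5 sites: a gradual transition vs. a sharp jump.
smooth = [[0.9, 0.1], [0.7, 0.3], [0.5, 0.5], [0.3, 0.7], [0.1, 0.9]]
abrupt = [[0.9, 0.1], [0.9, 0.1], [0.9, 0.1], [0.1, 0.9], [0.1, 0.9]]

e_smooth = prior_energy(smooth)   # 4 steps of 0.08 each = 0.32
e_abrupt = prior_energy(abrupt)   # one step of 1.28
```

Because this potential is a differentiable function of the real-valued $p$ vectors, the whole posterior energy can be descended by gradient methods — the key practical advantage of HMMF over the discrete label field.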
### Posterior distribution (continued)

$$P(I(r) \mid p, \theta) = \sum_{k=1}^{M} v_k(r)\, p_k(r) = v(r) \cdot p(r),$$

where $v_k(r) \propto \exp\big(-\gamma\,(I(r) - \Phi(r, \theta_k))^2\big)$ from the acquisition model for class $k$ with additive Gaussian noise.

### Posterior energy

The posterior distribution can be written as

$$P(p, \theta \mid I) = \frac{P(I \mid p, \theta)\, P(p)\, P(\theta)}{P(I)} = \frac{1}{Z} \exp\big(-U(p, \theta)\big),$$

where $U(p, \theta)$ is the posterior energy

$$U(p, \theta) = -\sum_{r \in L} \log\big(v(r) \cdot p(r)\big) + \sum_{c} V_c(p) - \log P(\theta).$$

The MAP estimators $\hat p$ and $\hat\theta$ for $(p, \theta)$ are those which maximize the posterior distribution, or minimize the posterior energy. Note: for the HMMF model the posterior energy is a differentiable function, which can be minimized using a gradient descent scheme.

### Review: the classical MRF model

Label field $f$ → observed image $I$ (observation model).

- Smoothness constraint on $f$ (Markovian).
- $f$ is formulated as a GRF using the Ising model.
- $\Phi$ is a parametric model for image intensity.
- The likelihood $P(I \mid f, \theta)$ depends on the noise model $P_n$.
- The posterior distribution can be computed using Bayes' rule.
- $\hat f, \hat\theta$ maximize $P(f, \theta \mid I)$; the optimization can be very slow.

### Review: the HMMF model

$p$ field → label field $f$ → observed image $I$ (observation model).

- The $p$ field is a vector field, $p(r) \in S_M$; vectors in the $p$ field have the properties of a discrete pdf.
- Smoothness constraint on $p$ (Markovian); $p$ is formulated as a GRF (quadratic potential).
- Labels are the elements of $p$ with maximum probability: $f(r) = \arg\max_k p_k(r)$.
- $P(I(r) \mid f(r), p, \theta) = P(I(r) \mid f(r), \theta)$.

To compute the likelihood $P(I \mid p, \theta)$: first rewrite the joint conditional probability

$$P\big(I(r), f(r) \mid p, \theta\big) = P\big(I(r) \mid f(r), p, \theta\big)\, P\big(f(r) \mid p, \theta\big),$$

then marginalize over labels:

$$P(I(r) \mid p, \theta) = \sum_{k=1}^{M} P\big(I(r) \mid f(r) = k, \theta\big)\, P\big(f(r) = k \mid p, \theta\big).$$

### HMMF vs. classical MRF likelihood

HMMF likelihood:

$$P(I(r) \mid p, \theta) = \sum_{k=1}^{M} \exp\big(-\gamma\,(I(r) - \Phi(r, \theta_k))^2\big)\, p_k(r) = v(r) \cdot p(r).$$

MRF likelihood:

$$P(I(r) \mid f, \theta) = \sum_{k=1}^{M} \exp\big(-\gamma\,(I(r) - \Phi(r, \theta_k))^2\big)\, b_k(r).$$

### Finding the MAP estimators: overview

- Optimize w.r.t. $\theta$ using a modified gradient descent, $\dot\theta \approx -\nabla_\theta U(p, \theta)$.
- Optimize w.r.t. $p$ using a modified gradient descent, $\dot p \approx -\nabla_p U(p, \theta)$.
- Modify $p$ so that the constraints are satisfied.

### Newtonian descent

Consider the unknowns $x$ to be the position of a particle of unit mass. Let the acceleration of the particle be given by

$$\ddot x = -\nabla U(x) - 2\alpha \dot x.$$

- First term: the particle is being forced downhill (direction of decreasing $U$).
- Second term: restrict the speed of descent (friction or damping).

From kinematics we know that

$$x(t + h) = x(t) + \dot x(t)\, h + \tfrac{1}{2} \ddot x(t)\, h^2,$$

where $x, \dot x, \ddot x$ are the position, velocity, and acceleration of the particle. Substituting the given acceleration, we have

$$x(t + h) = x(t) + \dot x(t)\, h - \alpha \dot x(t)\, h^2 - \tfrac{1}{2} h^2\, \nabla U(x(t)).$$

Grouping together the $\dot x$ terms:

$$x(t + h) = x(t) + (h - \alpha h^2)\, \dot x(t) - \tfrac{h^2}{2}\, \nabla U(x(t)).$$

Substituting the centered difference approximation

$$\dot x(t) = \frac{x(t + h) - x(t - h)}{2h}$$

and multiplying by 2, we have

$$2 x(t + h) - 2 x(t) = (1 - \alpha h)\big(x(t + h) - x(t - h)\big) - h^2\, \nabla U(x(t)).$$

Grouping the $x(t + h)$ terms together, we have

$$(\alpha h + 1)\, x(t + h) = 2 x(t) + (\alpha h - 1)\, x(t - h) - h^2\, \nabla U(x(t)).$$

The final evolution equation is

$$x(t + h) = \frac{2}{\alpha h + 1}\, x(t) + \frac{\alpha h - 1}{\alpha h + 1}\, x(t - h) - \frac{h^2}{\alpha h + 1}\, \nabla U(x(t)).$$

This idea is applied to $\theta$ and $p$ using $\ddot\theta = -\nabla_\theta U - 2\alpha\dot\theta$ and $\ddot p = -\nabla_p U - 2\alpha\dot p$.

### Discretization

$$\theta(t + h) = \frac{2}{\alpha h + 1}\, \theta(t) + \frac{\alpha h - 1}{\alpha h + 1}\, \theta(t - h) - \frac{h^2}{\alpha h + 1}\, \nabla_\theta U\big(p(t), \theta(t)\big)$$

$$p(t + h)(r) = \Pi_{S_M}\!\left[\frac{2}{\alpha h + 1}\, p(t)(r) + \frac{\alpha h - 1}{\alpha h + 1}\, p(t - h)(r) - \frac{h^2}{\alpha h + 1}\, \nabla_p U\big(p(t), \theta(t)\big)\right] \quad \text{for all } r \in L.$$

The operator $\Pi_{S_M}$ finds the closest $x \in S_M$ to the vector $\tilde x \in \mathbb{R}^M$. Recall $S_M = \{u \in \mathbb{R}^M : \sum_{k=1}^{M} u_k = 1,\ u_k \geq 0\}$.

### Application to brain MRI segmentation: intensity model

Problem: due to magnetic field inhomogeneities, image intensities may not be constant within a tissue class. We cannot use a piecewise constant acquisition model.
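The final evolution equation above is easy to exercise on a toy energy (a minimal sketch; the quadratic $U$, the damping $\alpha$, and the step $h$ are our own illustrative choices). Starting from rest, the damped second-order iteration settles at the minimizer of $U$:

```python
import numpy as np

x_min = np.array([1.0, -2.0])

def grad_U(x):
    # Toy energy U(x) = 0.5 * ||x - x_min||^2, so grad U(x) = x - x_min
    return x - x_min

alpha, h = 1.0, 0.5
a = alpha * h + 1.0

# Newtonian (damped second-order) descent:
# x(t+h) = 2/(ah+1) x(t) + (ah-1)/(ah+1) x(t-h) - h^2/(ah+1) grad U(x(t))
x_prev = np.zeros(2)   # x(t - h)
x_curr = np.zeros(2)   # x(t)
for _ in range(200):
    x_next = (2.0 / a) * x_curr + ((alpha * h - 1.0) / a) * x_prev \
             - (h * h / a) * grad_U(x_curr)
    x_prev, x_curr = x_curr, x_next

err = np.linalg.norm(x_curr - x_min)
```

In the HMMF scheme the same two-step recurrence is run on $\theta$ and on $p$, with the extra projection $\Pi_{S_M}$ applied to each $p(r)$ so the label pdfs stay on the simplex.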
The authors use a sum of $J$ basis functions on a subgrid of $L$:

$$\Phi(r, \theta_k) = \sum_{j=1}^{J} \theta_{k,j}\, \nu_j(r),$$

where the basis functions, centered at each subgrid node, are smooth bumps of local support.

### Model parameter prior P(θ)

The rigidity of this model can be controlled with a prior on the weights $\theta$:

$$P(\theta) = \frac{1}{Z_\theta} \exp\left(-\gamma \sum_{k=1}^{M} \sum_{\langle u, v \rangle} \lVert \theta_{k,u} - \theta_{k,v} \rVert^2\right),$$

where $\gamma$ is a constant controlling the smoothness of the weights, and therefore the smoothness of the acquisition model.

### Anatomical constraints P(p)

Other types of constraints, in addition to smoothness, may be imposed on the label field. For example, a prior constructed using a GRF with clique potential given by

$$V_{r,s}\big(p(r), p(s)\big) = \lambda \lVert p(r) - p(s) \rVert^2 + \mu\big(p_1(r)\, p_3(s) + p_3(r)\, p_1(s)\big)$$

will penalize cliques where tissue classes 1 and 3 are adjacent.

### Results: synthetic data

HMMF is less sensitive to initialization and faster than MRF.

(Fig. 4: (a) sample slice of a simulated brain MR volume with 9 percent noise and 40 percent inhomogeneity; (b) anatomical model (ground truth); (c) HMMF segmentation; (d) reconstructed intensity.)

### Performance index

If we have a ground truth segmentation,

$$Q = \frac{1}{M} \sum_{k=1}^{M} \frac{V_{GP}(k)}{V_P(k) + V_G(k) - V_{GP}(k)},$$

- $V_{GP}(k)$: number of voxels correctly assigned to class $k$;
- $V_P(k)$: total number (correct + incorrect) of voxels assigned to class $k$;
- $V_G(k)$: total number of voxels in class $k$ (ground truth).

Note that $Q$ ranges from 0 to 1, with $Q = 1$ corresponding to perfect segmentation.

### Performance

(Figures: $Q$ versus noise level at 0% and 40% inhomogeneity, for HMMF gray/white matter and KVL gray/white matter.) KVL is a comparable model-based MRF method.

### Conclusions

- HMMF can segment images corrupted by field inhomogeneities and additive noise.
- HMMF priors can enforce smoothness and other spatial constraints.
- HMMF is faster and more accurate than other MRF techniques.
- HMMF is relatively insensitive to initialization.

## Medical Image Analysis — CS 778/578

Computer Science and Electrical Engineering Dept., West Virginia University — January 26, 2009

### Outline

1. More implementation details
   - Stability and convergence
   - Boundary conditions
2. The diffusion process
3. Perona–Malik inhomogeneous diffusion
4. Perona–Malik anisotropic diffusion

### Stability and convergence

- Stability: noise (from initial conditions, roundoff error) is not amplified.
- Convergence: the numerical scheme approaches the solution of the PDE as $t \to \infty$.

### Convergence of the explicit 1-D heat equation

The 1-D heat equation $I_t = I_{xx}$ has solution $I(x, t) = e^{-t} \cos(x)$; this corresponds to the problem with initial condition $I(x, 0) = \cos(x)$. Discretize only in time (forward). Observe that

$$I_t(x, t) = -e^{-t} \cos(x) = -I(x, t),$$

so the explicit update is

$$I(t + \delta) = I(t) - \delta\, I(t).$$

### Convergence criterion (ratio test)

The sequence $I(t)$ is convergent if

$$\lim_{t \to \infty} \left| \frac{I(t + \delta)}{I(t)} \right| < 1.$$

The explicit equation we formed earlier, $I(t + \delta) = I(t) - \delta I(t)$, has convergence criterion

$$\left| \frac{I(t + \delta)}{I(t)} \right| = |1 - \delta| < 1.$$

This is satisfied for $0 < \delta < 2$: only conditionally convergent.

### Convergence of the implicit 1-D heat equation

Discretize only in time (backward):

$$I(t + \delta) - I(t) = -\delta\, I(t + \delta) \quad\Longrightarrow\quad I(t + \delta) = \frac{I(t)}{1 + \delta}.$$

The implicit equation has convergence criterion

$$\left| \frac{I(t + \delta)}{I(t)} \right| = \frac{1}{1 + \delta} < 1.$$

This is satisfied for all $\delta > 0$.

### Convergence

In general it can be shown that

- explicit methods are conditionally convergent;
- implicit methods are unconditionally convergent.

### Implementation: constant boundary value

For $x < 0$ or $x > n - 1$: $I(x) = c$ (for example, $c = 0$).

### Implementation: constant boundary slope

Fixing the slope at zero (adiabatic) gives: for $x < 0$, $I(x) = I(0)$; for $x > n - 1$, $I(x) = I(n - 1)$.

### Implementation: periodic boundary conditions

$$I(x) = I(x \bmod n)$$
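The two time discretizations above can be compared directly on $dI/dt = -I$, whose exact solution decays to zero (our own minimal demo). The explicit per-step ratio is $(1 - \delta)$, the implicit ratio is $1/(1 + \delta)$:

```python
def explicit_run(delta, steps, i0=1.0):
    i = i0
    for _ in range(steps):
        i = i - delta * i          # I(t+d) = I(t) - d*I(t)
    return i

def implicit_run(delta, steps, i0=1.0):
    i = i0
    for _ in range(steps):
        i = i / (1.0 + delta)      # I(t+d) = I(t) / (1 + d)
    return i

ok  = abs(explicit_run(0.5, 50))   # 0 < delta < 2: decays as expected
bad = abs(explicit_run(2.5, 50))   # delta > 2: oscillates and blows up
imp = abs(implicit_run(2.5, 50))   # implicit: decays for any delta > 0
```

The same trade-off carries over to the full heat/diffusion equation: the explicit scheme needs a small time step tied to the spatial grid, while the implicit scheme is unconditionally stable at the cost of solving a linear system per step.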
the image wraps around, I(x) = I(x mod n).

Implementation, reflective boundary conditions: the image is mirrored at its borders, I(x) = I(-x) for x < 0 and I(x) = I(2(n-1) - x) for x > n-1, so that for example I(-1) = I(1) and I(n) = I(n-2).

The diffusion process: flux and conservation laws.

Flux (definition): the rate of movement of something per unit area. What is moving? For diffusion, molecules; for heat, energy. For diffusion the units of flux are m^-2 s^-1.

Flux, isotropic diffusion: j = -d * grad(u) (Fick's first law). Molecules diffuse from high concentration to low concentration; d is a scalar diffusivity constant; the flux is parallel to the concentration gradient but in the opposite direction. Heat: q = -k * grad(T); heat flows from high temperature to low temperature.

Flux, anisotropic diffusion: j = -D * grad(u). The concentration gradient causes a flux which is transformed by the matrix D.

Conservation laws, discrete example: let C be the number of molecules, so that Delta C_stored = C_in - C_out (matter/energy is not created or destroyed). Let j be the molecular flux over time Delta t through faces of area A at x and x + Delta x:

    Delta C_stored = ( j(x) - j(x + Delta x) ) * A * Delta t

This example extends to 2D and 3D.

Fick's second law (conservation of mass): du/dt = -div(j). Combined with Fick's first law j = -d * grad(u), this yields the diffusion equation du/dt = div(d * grad(u)).

- Perona-Malik idea: make d inhomogeneous, d(x, y), to slow down or speed up diffusion as needed.
- Weickert idea: make d anisotropic, D(x, y), to direct the flux as needed.
Perona-Malik inhomogeneous diffusion. Topics: weaknesses of the standard scale-space paradigm; adaptive parameter setting; edge enhancement; the maximum principle; implementation.

Scale-space definition: a family of images I(x, y, t), where t is the scale-space parameter, I(x, y, 0) is the original image, and increasing t corresponds to coarser resolutions. I(x, y, t) can be generated by convolving with wider Gaussian kernels as t increases, or equivalently by solving the heat equation.

Earlier scale-space properties:
- Causality: coarse details are caused by fine details; new details should not arise in coarse-scale images.
- Smoothing should be homogeneous and isotropic.
The Perona-Malik paper challenges the last property and proposes a more useful scale-space definition. The new scale-space is shown to obey the causality property.

Lost edge information: edge location is not preserved across the scale space; edge crossings may disappear; region boundaries are blurred. Gaussian blurring is a local averaging operation and does not respect natural boundaries.

Linear scale space: scale spaces generated by a linear filtering operation. Nonlinear filters, such as the median filter, can also be used to generate scale-space images, but many nonlinear filters violate one of the scale-space conditions.

New criteria:
1. Causality.
2. Immediate localization: edge locations remain fixed.
3. Piecewise smoothing: permit discontinuities at boundaries. At all scales the image will consist of smooth regions separated by boundaries (edges).

Diffusion equation:

    dI/dt = div( c(x, y, t) * grad(I) )

The diffusion coefficient c(x, y, t) controls the degree of smoothing at each point in I. The basic idea: setting c(x, y, t) = 0 at region boundaries and c(x, y, t) = 1 in region interiors will encourage intra-region smoothing and discourage inter-region smoothing.

Diffusion equation, by the chain rule:
    dI/dt = div( c(x, y, t) * grad(I) )
          = c * (d^2 I / dx^2) + (dc/dx) * (dI/dx) + c * (d^2 I / dy^2) + (dc/dy) * (dI/dy)
          = c(x, y, t) * laplacian(I) + grad(c) . grad(I)

Notation: the paper uses the symbol Delta for the Laplacian, Delta I = laplacian(I) = div(grad(I)).

Conduction coefficient. What properties would we like c(x, y, t) to have?
- c = 1 at the interior of a region.
- c = 0 at the boundary of a region.
- c should be nonnegative everywhere.
Since c(x, y, t) depends on edge information, we need an edge descriptor E(x, y, t) to compute c. Notation: when written as a function of the edge descriptor, the authors use the symbol g for the conduction coefficient.

The function g. Perona and Malik suggest two possible functions:

    g(||grad I||) = exp( -(||grad I|| / K)^2 )

    g(||grad I||) = 1 / ( 1 + (||grad I|| / K)^(1 + alpha) ),   alpha > 0

[Figures: the effect of varying K (K = 2, 4, 6) and alpha (alpha = 1, 3, 5, 7, 9) on g(||grad I||); diffusion results for K = 3, 5, 10, 100 (as K increases, more edges get smoothed out) and for increasing alpha (as alpha increases, the cutoff gets sharper).]

Adaptive parameter setting: set K at every iteration. Compute a histogram of ||grad I|| and find K such that 90% of the pixels have gradient magnitude less than K: if sum_{i <= b} h_i >= 0.9 * n^2, then bin b corresponds to gradient magnitude K.

Edge enhancement: inhomogeneous diffusion may actually enhance edges for a certain choice of c(x, y, t). 1D
example: let s = I_x and phi(s) = g(s) * s. The 1D inhomogeneous heat equation becomes

    I_t = d/dx ( g(I_x) * I_x ) = phi'(s) * I_xx   (by the chain rule)

Edge enhancement. We are interested in the rate of change of the edge slope with respect to time. If I is smooth, the time and space derivatives commute:

    d/dt (I_x) = d/dx (I_t) = d/dx ( phi'(s) * I_xx ) = phi''(s) * I_xx^2 + phi'(s) * I_xxx

For a step edge with I_x > 0, look at the inflection point p where the slope is maximum. Observe that I_xx(p) = 0 and I_xxx(p) < 0, so

    d/dt (I_x)(p) = phi'(s) * I_xxx(p)

and the sign of this quantity depends only on phi'(s). At the inflection point:
- If phi'(s) > 0, then d/dt I_x(p) < 0: the slope is decreasing (smoothing).
- If phi'(s) < 0, then d/dt I_x(p) > 0: the slope is increasing (edge enhancement).
Since phi(s) = g(s) * s, selecting the function g(s) determines which edges are smoothed and which are sharpened.

The function phi(s): phi(0) = 0; phi'(s) > 0 for s < K; phi'(s) < 0 for s > K; phi'(s) -> 0 as s -> infinity.

Maximum principle:
- The maximum and minimum intensities in the scale-space image I(x, y, t) occur at t = 0, the finest-scale image.
- Since new maxima and minima correspond to new image features, the causality requirement of scale-space can be satisfied if the evolution equation obeys the maximum principle.
- We will make some less rigorous observations concerning causality.

Maximum principle for the 1D heat equation I_t = I_xx: solving the heat equation is equivalent to convolution; convolution is a local averaging operation; and averaging is bounded by the values being averaged.

Maximum principle for the Perona-Malik equation

    dI/dt = c(x, y, t) * laplacian(I) + grad(c) . grad(I)

Note that at local extrema grad(I) = 0 (so the transport term grad(c) . grad(I) vanishes), and we
are evolving by the original heat equation. It can be shown that this general class of PDEs obeys the maximum principle. We will also inspect a maximum principle for the discretized equations.

Implementation, explicit formulation:

    dI/dt = c(x, y, t) * laplacian(I) + grad(c) . grad(I)

Using centered differences for the Laplacian and the gradients:

    I^{t+1}_{x,y} - I^t_{x,y} = c_{x,y} * ( I_{x-1,y} + I_{x+1,y} + I_{x,y-1} + I_{x,y+1} - 4 I_{x,y} )
        + ( (c_{x+1,y} - c_{x-1,y}) / 2 ) * ( (I_{x+1,y} - I_{x-1,y}) / 2 )
        + ( (c_{x,y+1} - c_{x,y-1}) / 2 ) * ( (I_{x,y+1} - I_{x,y-1}) / 2 )

Implementation, implicit formulation:
- Same diagonal structure as the homogeneous heat equation? Yes.
- Symmetric? No.
- Diagonal dominance? Data dependent.

Perona-Malik anisotropic diffusion, explicit formulation. By splitting the Laplacian and averaging the forward and backward differences in the gradient,

    I^{t+1}_{x,y} - I^t_{x,y} = c_{x,y} * [ (I_{x-1,y} - I_{x,y}) + (I_{x+1,y} - I_{x,y}) + (I_{x,y-1} - I_{x,y}) + (I_{x,y+1} - I_{x,y}) ]
        + (dc/dx) * [ (I_{x+1,y} - I_{x,y}) + (I_{x,y} - I_{x-1,y}) ] / 2
        + (dc/dy) * [ (I_{x,y+1} - I_{x,y}) + (I_{x,y} - I_{x,y-1}) ] / 2

Collecting terms, each neighbor difference is weighted by c_{x,y} plus or minus half the conductance derivative:

    I^{t+1}_{x,y} - I^t_{x,y} = ( c_{x,y} - (1/2) dc/dx ) (I_{x-1,y} - I_{x,y}) + ( c_{x,y} + (1/2) dc/dx ) (I_{x+1,y} - I_{x,y})
        + ( c_{x,y} - (1/2) dc/dy ) (I_{x,y-1} - I_{x,y}) + ( c_{x,y} + (1/2) dc/dy ) (I_{x,y+1} - I_{x,y})

These weights are first-order Taylor series approximations to the conductance at the half-grid points:

    c_{x,y} + (1/2) dc/dx  ~  c_{x+1/2,y}  ~  g( (s_{x,y} + s_{x+1,y}) / 2 ),   where s_{x,y} = ||grad I_{x,y}||

so that

    I^{t+1}_{x,y} - I^t_{x,y} = g((s_{x-1,y} + s_{x,y})/2) (I_{x-1,y} - I_{x,y}) + g((s_{x+1,y} + s_{x,y})/2) (I_{x+1,y} - I_{x,y})
        + g((s_{x,y-1} + s_{x,y})/2) (I_{x,y-1} - I_{x,y}) + g((s_{x,y+1} + s_{x,y})/2) (I_{x,y+1} - I_{x,y})

Anisotropic implementation: compute g using the projection of the gradient along one direction. For example, in g((s_{x,y} + s_{x+1,y})/2), let

    s_{x,y} = | dI/dx at (x, y) |,   s_{x+1,y} = | dI/dx at (x+1, y) |

computing s_{x,y} using forward differences and s_{x+1,y} using backward differences:
    s_{x,y} = | I_{x+1,y} - I_{x,y} |,   s_{x+1,y} = | I_{x+1,y} - I_{x,y} |

so that g((s_{x,y} + s_{x+1,y}) / 2) = g(| I_{x+1,y} - I_{x,y} |). The explicit formulation becomes

    I^{t+1}_{x,y} - I^t_{x,y} = g(|I_{x-1,y} - I_{x,y}|) (I_{x-1,y} - I_{x,y}) + g(|I_{x+1,y} - I_{x,y}|) (I_{x+1,y} - I_{x,y})
        + g(|I_{x,y-1} - I_{x,y}|) (I_{x,y-1} - I_{x,y}) + g(|I_{x,y+1} - I_{x,y}|) (I_{x,y+1} - I_{x,y})

Notation: the authors use the symbol nabla to denote finite differences; this is not the gradient operator. With the image neighborhood system (N, S, E, W):

    nabla_N I_{i,j} = I_{i-1,j} - I_{i,j}      c_N = g( |nabla_N I_{i,j}| )
    nabla_S I_{i,j} = I_{i+1,j} - I_{i,j}      c_S = g( |nabla_S I_{i,j}| )
    nabla_E I_{i,j} = I_{i,j+1} - I_{i,j}      c_E = g( |nabla_E I_{i,j}| )
    nabla_W I_{i,j} = I_{i,j-1} - I_{i,j}      c_W = g( |nabla_W I_{i,j}| )

the previous explicit formulation can be rewritten as

    I^{t+1}_{i,j} = I^t_{i,j} + lambda * [ c_N nabla_N I + c_S nabla_S I + c_E nabla_E I + c_W nabla_W I ]_{i,j}

Discrete maximum principle. Let M = max(I_N, I_S, I_E, I_W, I^t_{i,j}), the maximum intensity in the neighborhood of (i, j), and let c in [0, 1] and lambda in [0, 1/4]. Substitute the definitions of the finite differences and rearrange the terms:

    I^{t+1}_{i,j} = I^t_{i,j} * [ 1 - lambda (c_N + c_S + c_E + c_W) ] + lambda * [ c_N I_N + c_S I_S + c_E I_E + c_W I_W ]

Since I^t_{i,j} <= M, each neighbor intensity is at most M, and lambda <= 1/4 keeps the first coefficient nonnegative, the update is a convex combination of values bounded by M:

    I^{t+1}_{i,j} <= M * [ 1 - lambda (c_N + c_S + c_E + c_W) ] + lambda * (c_N + c_S + c_E + c_W) * M = M

Discrete maximum principle, conclusions: this procedure will not lead to the production of new local maxima; similarly, no new local minima will be created; so Perona-Malik can be used to create scale-space image representations.

Medical Image Analysis, CS 778/578. Computer Science and Electrical Engineering Dept., West Virginia University. February 25, 2009.

Outline:
1. Chan-Vese.
2. Results.

Chan-Vese. Our assumption has been that images are characterized by piecewise smooth regions
separated by edges. So far we have concentrated on finding the edges and designing a speed function which emphasizes the edges we are interested in. Now we will concentrate instead on identifying the homogeneous regions:
- It may be easy to identify these regions even when noise is present.
- Our image model may not hold: there may be no discernible edges between regions.

Active contours without edges: objects may not have edges defined by the image gradient. This is a region-based segmentation technique; evolution of the curve is governed by properties of the region of u0(x, y) enclosed by the curve. Chan and Vese present a level set implementation for minimizing the Mumford-Shah functional.

Mumford-Shah functional. Let u0(x, y) be the original image and u(x, y) be some model for the image:

    E_MS(u, C) = mu * Length(C)
               + lambda * Integral_Omega | u0(x,y) - u(x,y) |^2 dx dy
               + Integral_{Omega \ C} || grad u(x,y) ||^2 dx dy

- Term 1: smooth boundary curve.
- Term 2: model fit error.
- Term 3: smooth u(x, y) except on the boundary.

Piecewise constant model fitting term: u(x, y) = c1 inside C and u(x, y) = c2 outside C, so grad u = 0 away from C:

    E_MS(c1, c2, C) = mu * Length(C)
                    + lambda1 * Integral_{inside(C)} | u0(x,y) - c1 |^2 dx dy
                    + lambda2 * Integral_{outside(C)} | u0(x,y) - c2 |^2 dx dy

For a piecewise constant model, min_c sum_i (c - x_i)^2 is attained at the arithmetic mean of the x_i. So c1 is the mean of u0 inside the curve C and c2 is the mean of u0 outside the curve C. With a level set approach it is simple to determine which voxels are inside/outside by checking the sign of phi.

Evolution equation. In the Chan-Vese paper the energy functional is written in level-set form in terms of Heaviside functions H(z) and delta functions delta(z). Let phi(x, y) be the embedding function for the curve, defined so that phi > 0 inside of the curve. Then

    c1 = mean( u0 where phi >= 0 ) = Integral u0(x,y) H(phi(x,y)) dx dy / Integral H(phi(x,y)) dx dy

    c2 = mean( u0 where phi < 0 ) = Integral u0(x,y) (1 - H(phi(x,y))) dx dy / Integral (1 - H(phi(x,y))) dx dy

Chan-Vese functional, length of the boundary:

    Length = Integral_Omega || grad H(phi(x,y)) || dx dy
           = Integral_Omega delta(phi(x,y)) || grad phi(x,y) || dx dy

Chan-Vese functional, model fitting terms:

    Integral_{phi > 0} | u0(x,y) - c1 |^2 dx dy = Integral_Omega | u0 - c1 |^2 H(phi(x,y)) dx dy

    Integral_{phi < 0} | u0(x,y) - c2 |^2 dx dy = Integral_Omega | u0 - c2 |^2 (1 - H(phi(x,y))) dx dy

Chan-Vese functional:

    E(c1, c2, phi) = mu * Integral_Omega delta(phi) || grad phi || dx dy
                   + lambda1 * Integral_Omega (u0 - c1)^2 H(phi) dx dy
                   + lambda2 * Integral_Omega (u0 - c2)^2 (1 - H(phi)) dx dy

This leads to the level set evolution equation

    dphi/dt = delta(phi) * [ mu * div( grad(phi) / ||grad(phi)|| ) - lambda1 (u0 - c1)^2 + lambda2 (u0 - c2)^2 ]

Chan-Vese implementation: use regularized H(z) and delta(z) functions, and the same discretized divergence operator as in the Rudin-Osher-Fatemi TV-norm paper.
1. Initialize phi.
2. Compute c1(phi) and c2(phi).
3. Compute phi^{n+1}:

    phi^{n+1} = phi^n + Delta_t * delta_h(phi^n) * [ mu * div( grad phi / ||grad phi|| ) - lambda1 (u0 - c1)^2 + lambda2 (u0 - c2)^2 ]

4. If the solution is not stationary, go to 2.

Results: [Figure: region-based (top) and edge-based (bottom) segmentation results.]

Piecewise smooth model. Let u0(x, y) be the original image and u(x, y) be some model for the image:

    E_MS(u, C) = mu * Length(C)
               + lambda * Integral_Omega | u0(x,y) - u(x,y) |^2 dx dy
               + Integral_{Omega \ C} || grad u(x,y) ||^2 dx dy

- Term 1: smooth boundary curve C.
- Term 2: model fit error.
- Term 3: smooth u(x, y) except possibly at C.

[Figures: Tsai-Yezzi piecewise smooth results; simultaneous restoration and segmentation.]

Multiphase Chan-Vese: supports more than 2 regions by using multiple embedding functions. "A Multiphase Level Set Framework for Image Segmentation Using the Mumford and Shah Model," Vese, L.A. and Chan, T.F., International Journal of Computer Vision, vol. 50, no. 3, pp. 271-293, 2002.

With two embedding functions phi1, phi2:
- Region 1: phi1 > 0 and phi2 > 0
- Region 2: phi1 > 0 and phi2 < 0
- Region 3: phi1 < 0 and phi2 > 0
- Region 4: phi1 < 0 and phi2 < 0

Multiphase Chan-Vese energy functional:

    E(c, phi1, phi2) = mu * Integral_Omega || grad H(phi1) || dx dy + mu * Integral_Omega || grad H(phi2) || dx dy
        + Integral_Omega (u0 - c11)^2 H(phi1) H(phi2) dx dy
        + Integral_Omega (u0 - c10)^2 H(phi1) (1 - H(phi2)) dx dy
        + Integral_Omega (u0 - c01)^2 (1 - H(phi1)) H(phi2) dx dy
        + Integral_Omega (u0 - c00)^2 (1 - H(phi1)) (1 - H(phi2)) dx dy

Multiphase
Chan-Vese evolution equations:

    dphi1/dt = delta(phi1) * { mu * div( grad phi1 / ||grad phi1|| )
        - [ ((u0 - c11)^2 - (u0 - c01)^2) H(phi2) + ((u0 - c10)^2 - (u0 - c00)^2) (1 - H(phi2)) ] }

    dphi2/dt = delta(phi2) * { mu * div( grad phi2 / ||grad phi2|| )
        - [ ((u0 - c11)^2 - (u0 - c10)^2) H(phi1) + ((u0 - c01)^2 - (u0 - c00)^2) (1 - H(phi1)) ] }

Medical Image Analysis, CS 778/578. Computer Science and Electrical Engineering Dept., West Virginia University. April 19, 2009.

Outline:
1. Tensor field processing.
2. Tensor field visualization.

Tensor field registration. For scalar-valued images, transforming the image involved only changing the coordinate system; image intensities were unaffected. Will this work for vector- and tensor-valued images? Not when there is a connection between the coordinate system of the image and the vector/tensor at each voxel:
- For example, no reorientation is required for color images.
- For diffusion tensor images, reorientation is required for a spatial transformation to be meaningful.

For a rigid rotation R of a vector image, T(v) = R v. For a rigid rotation of a tensor image, T(D) = R D R^T. Consider the eigendecomposition D = X Lambda X^T; the tensor transform is equivalent to rotating the eigenvectors:

    T(D) = (R X) Lambda (R X)^T = R X Lambda X^T R^T = R D R^T

For nonrigid transformations we must factor out the rigid part of the transformation.

Tensor similarity:
- In the problems of segmentation and registration, the notion of similarity is important.
- We compared intensity images by looking at the magnitude of intensity differences. How can we compare tensors?
One possibility: matrix norms, like the Frobenius norm

    || T ||_F = sqrt( sum_{i=1..m} sum_{j=1..m} T_ij^2 )

can be used to define a tensor distance d(T1, T2) = || T1 - T2 ||_F. The Frobenius norm is invariant under rigid transformations, d(T1, T2) = d(R T1 R^T, R T2 R^T), but is not invariant to affine transformations.

Information-theoretic tensor distance. Recall that tensors describe a stochastic diffusion process,

    p(r | t, D) = exp( -r^T D^{-1} r / (4t) ) / sqrt( (4 pi t)^n |D| )

and that KL divergence is an information-theoretic distance, KL(p, q) = Integral p log(p / q), which is not symmetric: KL(p, q) != KL(q, p). A true distance
can be obtained by symmetrizing KL:

    J(p, q) = (1/2) * ( KL(p, q) + KL(q, p) )

This is called J-divergence. The tensor distance

    d(T1, T2) = sqrt( J( p(r | t, T1), p(r | t, T2) ) )

can be expressed in closed form as

    d(T1, T2) = (1/2) * sqrt( tr( T1^{-1} T2 + T2^{-1} T1 ) - 2n )

where n is the dimension of the tensor. This distance is invariant under affine transformation.

Geodesic distance: consider 2x2 symmetric positive definite matrices

    D = [ a  b ;  b  c ],   a > 0,   a c - b^2 > 0

The parameter space of SPD matrices forms a manifold embedded in R^3, specifically a cone. Distances between points on the cone can be interpreted as distances between tensors.

Tensor glyphs, linear transformation (ellipsoid): let D be the diffusion tensor and x points on the unit sphere; the glyph is G = { D x }.

Tensor glyphs, quadratic form (metric glyph): let D be the diffusion tensor and x points on the unit sphere; the glyph is G = { (x^T D x) x }. [Figure: ellipsoids (top) and metric glyphs (bottom).]

Metric glyphs generalize to higher-rank tensors, e.g. for rank 4:

    G = sum_{i=1..3} sum_{j=1..3} sum_{k=1..3} sum_{l=1..3} D_ijkl x_i x_j x_k x_l

[Figures: glyphs for tensors of rank 0, 2, 4, 6; ellipsoids (left) and superquadrics (right).]

Tensor glyphs, superquadrics:

    x(theta, phi) = cos^alpha(theta) * sin^beta(phi)
    y(theta, phi) = sin^alpha(theta) * sin^beta(phi)
    z(theta, phi) = cos^beta(phi)

where x^alpha = sgn(x) * |x|^alpha.

Superquadric glyphs use geometric measures based on the eigenvalues lambda1 >= lambda2 >= lambda3 of D:

    c_l = (lambda1 - lambda2) / (lambda1 + lambda2 + lambda3)       (linear anisotropy)
    c_p = 2 (lambda2 - lambda3) / (lambda1 + lambda2 + lambda3)     (planar anisotropy)
    c_s = 3 lambda3 / (lambda1 + lambda2 + lambda3)                 (spherical isotropy)

These measures are used to determine alpha and beta for the superquadric glyph. If c_l >= c_p, then alpha = (1 - c_p)^gamma and beta = (1 - c_l)^gamma. If c_l < c_p, then alpha = (1 - c_l)^gamma and beta =
(1 - c_p)^gamma, where gamma is a user-controllable sharpness parameter.

[Figures: ellipsoid glyphs from two viewpoints (top, bottom); superquadric glyphs computed from the same tensors and rendered from the same viewpoints; texture-mapped glyphs.]

Volume rendering, direct volume rendering: a ray-casting approach. Use a barycentric coordinate system based on the anisotropy measures (c_l, c_p, c_s); a transfer function over this coordinate system can determine color and opacity.

Volume rendering, hyperstreamlines: trace streamlines in the dominant eigenvector field, then form a streamtube by sweeping an ellipse defined by the two remaining eigenvalues.

Tensor field topology:
- Degenerate points: where 2 or more eigenvalues become equal.
- Separatrices: hyperstreamlines on the tensor field.

Medical Image Analysis, CS 778/578. Computer Science and Electrical Engineering Dept., West Virginia University. January 30, 2009.

Outline:
1. Norms.
2. TV norm.
3. TV norm minimization.
4. Implementation.
5. Fixed point methods.
6. Results.

Vector and function norms:

    || v ||_1 = |x| + |y| + |z|
    || v ||_2 = sqrt( x^2 + y^2 + z^2 )
    || v ||_inf = max_i |x_i|

[Figure: points equidistant from the origin under various norms.]

The generalized vector p-norm is

    || v ||_p = ( sum_i |x_i|^p )^{1/p}

The p-norm of the function f(x) is given by

    || f ||_p = ( Integral_Omega |f(x)|^p dx )^{1/p}
TV norm, definition: the TV norm is the L1 functional norm of the gradient magnitude,

    TV(u) = Integral_Omega || grad u || dx dy

Recall: minimizing the L2 functional norm of the gradient magnitude led to the isotropic heat equation.

Properties, membrane spline vs. total variation. The membrane spline energy functional

    Integral_Omega || grad u ||^2 dx dy

represents the elastic potential energy of a thin sheet of material. The total variation energy functional

    Integral_Omega || grad u || dx dy

represents the oscillation of the function u.

Note that for g(u) = h(u)^{1/p}, if h(u) is positive and g is monotonic, then g(u) and h(u) have extrema at the same values of u. So the membrane spline energy functional has the same minimizer as the L2 functional norm of the gradient magnitude, ( Integral_Omega || grad u ||^2 dx dy )^{1/2}, and the total variation energy functional is itself the L1 functional norm of the gradient magnitude.

Total variation does not penalize discontinuities. Consider a function f with a step of height h, ramping linearly over the interval [a, b]. The membrane energy of the ramp is

    MEM(f) = Integral_a^b ( h / (b - a) )^2 dx = h^2 / (b - a)
am 7 dWVn 0 WM and descent equation and descent equation 8m aivwn 814 tim HVHH 39 Descent Equation V14 8M div7 HWH This is an inhomogeneous diffusion equation with diffusivity gHWH 1 HWH o Diffusion is slowed for large image gradients o Diffusion is speeded up for small image gradients o Divergence is due only to changing gradient directions Ianuany3 022009 17 x32 39 Outline Norms 9 TV norm e TV norm minimization 0 Implementation 0 Upwind nite differences a e regularization 6 Fixed point methods 9 Results 39 Upwind nite differences Since we are allowing steep discontinuities numerical differentiation can be a source of instability 1 5 l l l l l l l l 02 04 us 05 x 7 v t Solution a slope limiting derivative operator I January 39 2009 1232 Denote the forward nite difference of u at i7 j in the X direction by A1140 Him W and the backward difference by Ailti uiJ 714i We will denote the upwind nite difference in the X direction as minlnodA1uijAiuiJ 39 Minmod operation I minmodab mm a M where 71 ifxlt0 sgnx 0 ifx0 1 ifxgt0 39 Minmod operation 19 mimodw mmUaL W o ifa or b are 0 mi11m0doz7 b 0 o if a and b have opposite sign minmodoz7 b 0 0 otherwise pick the one with the smallest magnitude Januany aoa r iregnlarizalion What happens when Vu gt O Singularities may develop in very smooth regions V14 8 u div 7 t HWH e 0 One strategy use xed constant 6 ltlt 1 0 Another strategy relaxation Start with a moderate e and decrease over time See Nonlinear Total Variation Based Noise Removal Algorithms L Rudin S Osher E Fatemi Physica D 1992 I January 3032009 23 W32 39 Outline Norms a TV norm 9 TV norm minimization 9 Implementation 9 Fixed point methods 9 Results 39 Another approach to minimization Rather than solving the descent equation we may try to directly solve the minimization condition V14 0 1M7 7 504 7 14 WW A xedpoint iteration scheme is obtained by assigning iteration numbers to M such as Mk and Mk1 Note that there is no longer any time step parameter The system is iterated 
until convergence is achieved.

Lagged-diffusivity fixed point. One possible method for assigning iteration numbers to the equation is

    0 = lambda (u0 - u^{k+1}) + div( grad u^{k+1} / ||grad u^k|| )

This assignment results in a linear system. Since the diffusivity is computed based on the previous iteration, this is known as the lagged-diffusivity fixed-point iteration. The equation can be discretized and a linear system of the following form obtained:

    B u^{k+1} = C(u0, u^k)

Solving for u^{k+1} represents a single fixed-point iteration. See also:
- "Iterative Methods for Total Variation Denoising," Vogel, C.R. and Oman, M.E., SIAM Journal on Scientific Computing, vol. 17, pp. 227-238, 1996.
- "On the Convergence of the Lagged Diffusivity Fixed Point Method in Total Variation Image Restoration," Chan, T.F. and Mulet, P., SIAM Journal on Numerical Analysis, vol. 36, no. 2, pp. 354-367, 1999.

Results: [Figures: denoising and deconvolution results.] Drawback: image contours may be rounded.

Image contours, another geometric interpretation of TV norm minimization: consider the isocontours of the image, the curves of constant image intensity. Evolution of the image u is also evolution of the isocontours, and TV norm minimization is smoothing of the isocontours of u.

Medical Image Analysis, CS 778/578. Computer Science and Electrical Engineering Dept., West Virginia University. April 8, 2009.

Outline:
1. Topological mesh correction.
2. Cortical surface mesh inflation.
3. Spherical mesh parameterization.

Topological mesh correction. Application: B. Fischl, A. Liu, A. Dale, "Automated manifold surgery: Constructing geometrically accurate and topologically correct models of the human cerebral cortex," IEEE Transactions on Medical Imaging, 2001. The extracted cortical surface
may contain small errors. Fixing these errors is necessary for correct visualization and to simplify texture mapping.

Topology. [Figure: surfaces that differ from each other in a fundamental way.] Topology is the study of these properties:
- Topology is sometimes called "rubber sheet geometry": objects which can be continuously deformed into one another are considered topologically equivalent.
- Continuous deformation: no cutting or gluing.
- Topologically equivalent objects have some common properties.

Genus: shapes may have a different genus, or number of handles. Technically, genus is the maximum number of non-intersecting cuts along simple closed curves which do not disconnect the manifold:
- The curves must be topologically distinct from one another.
- The curves must be essential (cannot be shrunk to a point).

Differential geometry studies the normal vector, tangent vector, and curvature. Topology studies the genus (number of handles), orientability, connected components, and number of boundaries.

Euler number: a mesh is a piecewise-planar approximation to a manifold. For a closed surface mesh, the Euler number chi is

    chi = V - E + F = 2 - 2g

where V is the number of vertices, E the number of edges, F the number of faces, and g the genus. If surfaces f1, f2 have the same genus, then there exists a mapping M : f1 -> f2 that is continuous and invertible with continuous inverse: M is a homeomorphism.

Euler number, examples:
- The cube has V = 8, E = 12, F = 6, so chi = 2 and g = 0.
- The subdivided cube has V = 12, E = 20, F = 10, so chi = 2 and g = 0.
- The square has V = 4, E = 4, F = 1, so chi = 1. We cannot use this version of the Euler characteristic to
compute genus here: this shape is not closed. There is another form of chi which accounts for open boundaries.
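The counts in the examples above can be reproduced mechanically from a face list. A minimal Python sketch (the helper names are mine; the boundary-aware relation chi = 2 - 2g - b used in the last comment is the standard form for a compact orientable surface with b boundary circles, which the notes mention but do not state):

```python
# Euler number chi = V - E + F from a polygonal face list. Edges are
# recovered as unordered pairs of consecutive vertices around each face.
def euler_characteristic(faces):
    verts = {v for face in faces for v in face}
    edges = {frozenset((face[i], face[(i + 1) % len(face)]))
             for face in faces for i in range(len(face))}
    return len(verts) - len(edges) + len(faces)

# Cube as 6 quadrilateral faces: V = 8, E = 12, F = 6, so chi = 2, genus 0.
cube = [(0, 1, 2, 3), (4, 5, 6, 7), (0, 1, 5, 4),
        (1, 2, 6, 5), (2, 3, 7, 6), (3, 0, 4, 7)]

# A single square face: V = 4, E = 4, F = 1, so chi = 1. The closed-surface
# relation chi = 2 - 2g fails, but chi = 2 - 2g - b holds with b = 1 boundary.
square = [(0, 1, 2, 3)]
```

For the closed cube, g = (2 - chi) / 2 = 0 recovers the genus; for the open square, chi = 1 is consistent with g = 0 handles and b = 1 boundary loop.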
