Computer Science 0 (CS T101)
This 272-page set of class notes was uploaded by Abe Jones on Saturday, September 12, 2015. The notes belong to CS T101 at West Virginia University, taught by Staff in Fall. Since its upload, it has received 15 views. For similar materials see /class/202776/cs-t101-west-virginia-university in Computer Science at West Virginia University.
Medical Image Analysis (CS 778/578)
Computer Science and Electrical Engineering Dept., West Virginia University
January 12, 2009

Outline
- Topic Overview
  - Image Enhancement and Denoising
  - Segmentation
  - Registration
  - Visualization
- Computational Overview

Image Enhancement and Denoising
Problem statement: recover an unknown ideal image given an observed image. The observed image may be degraded during acquisition or transmission (thermal noise, quantization).
Challenges: preserve features which would be destroyed by lowpass filtering.

Segmentation
Problem statement:
- Classify or cluster voxels according to tissue type (hard/soft classification).
- Partition the image and find boundaries separating tissue classes.
Motivation: find anatomical structures within images.
Challenges:
- Partial volume effect.
- Incorporating prior knowledge (smoothness, shape representation).

Registration
Problem statement: determine a coordinate transformation which will align two images.
- Compare corresponding voxels from two or more images.
- Atlas construction.
Registration challenges:
- Different patients.
- Different image modalities.
- Non-rigid deformation (pre- and post-surgery).
- Finding an appropriate metric.

Visualization
Problem statement: produce a rendering which portrays meaningful data features.
Motivation: clinicians such as radiologists may visually interpret images to diagnose injury and disease.
Challenges:
- High-dimensional data is difficult to inspect visually.
- Extracting and quantifying the relevant features.
- May require enhancement, segmentation, or registration.

Computational Overview
Outline:
- Ill-Posed Problems
- Energy Minimization Methods
- Numerical Solutions

Energy Functionals and the Image Model
We will consider images I(x) as surfaces, or functions, which evolve according to some partial differential equation, and we will apply partial differential equations from other engineering fields. Example from mechanics: membrane-spline and thin-plate-spline energies model the smooth deformations of materials.

Ill-Posed Problems
Definition: a problem is ill-posed if the solution
- does not exist,
- is not unique, or
- does not depend continuously on the data.
Many inverse
problems, such as many of the problems of computer vision, are ill-posed problems.

Energy Minimization Methods
Regularization: solution of PDEs with noisy initial conditions is generally an ill-posed problem. Ill-posed problems can be made well-posed by the process of regularization. This is accomplished by imposing a penalty term which penalizes lack of smoothness of the solution.

Variational methods: we find the Euler-Lagrange conditions which characterize the minimization of E(u). Variational calculus is a general technique for minimizing functionals with respect to functions. This is analogous to finding the minimum of f(x) using the conditions f'(x) = 0, f''(x) > 0.

Numerical Solutions
Curve/surface evolution: evolve the image in such a way that the steady-state behavior obeys the Euler-Lagrange conditions. For E-L equations of the form E'(I) = 0, we may introduce an artificial time-marching parameter:
    \partial I / \partial t = -E'(I)
so that the steady state satisfies E'(I) = 0.
Numerical methods:
- Linearize: it may be necessary to find a linear approximation to nonlinear PDEs.
- Discretization: we may replace differential operators with finite-difference approximations.
- The choice of numerical method will depend on the problem.

Next class: we will apply this framework to the problem of image smoothing.

Medical Image Analysis (CS 593/791)
Computer Science and Electrical Engineering Dept., West Virginia University
15th February 2006
Implicit Curve Evolution

Outline
- Geometry of Implicit Curves
- Implicit Curve Evolution

Parametric vs. Implicit
- Parametric curve: (x(s), y(s)) = C(s). Evaluating the function gives coordinates of points on the curve.
- Implicit curve: \psi(x, y) = C. The curve is the set of all points which satisfy the equality.

Normal and Curvature
The gradient of the embedding function is perpendicular to the level curve:
    N = \nabla\psi(x, y) / ||\nabla\psi(x, y)||
Recall the directional derivative:
    D_u f(p) = \lim_{h \to 0} [f(p + hu) - f(p)] / h = \nabla f \cdot u = ||\nabla f|| \cos\theta
- D_u f(p) has the least magnitude (0) when u is parallel to the curve.
- The directions of least magnitude and greatest magnitude are perpendicular.

The curvature of the level curve is the rate of change of the normal vector:
    \kappa = \mathrm{div}(\nabla\psi / ||\nabla\psi||)
This can be rewritten as
    \kappa = (\psi_{xx}\psi_y^2 - 2\psi_x\psi_y\psi_{xy} + \psi_{yy}\psi_x^2) / ||\nabla\psi||^3

Example: a circle
Consider the implicit equation for a circle:
    \psi(x, y) = (x - a)^2 + (y - b)^2 - r^2 = 0
The gradient is \nabla\psi = (2(x - a), 2(y - b)), with
    ||\nabla\psi|| = 2\sqrt{(x - a)^2 + (y - b)^2}
so the unit normal is
    \nabla\psi / ||\nabla\psi|| = (x - a, y - b) / \sqrt{(x - a)^2 + (y - b)^2}
Computing the curvature with \psi_{xx} = \psi_{yy} = 2 and \psi_{xy} = 0 gives
    \kappa = (8(y - b)^2 + 8(x - a)^2) / ||\nabla\psi||^3 = 1 / \sqrt{(x - a)^2 + (y - b)^2}
which equals 1/r on the curve.
Signed Distance Function
One possible embedding function for implicit curves:
- |\psi(x, y)| = distance from (x, y) to the curve.
- \psi(x, y) = 0 if (x, y) is on the curve.
- \psi(x, y) < 0 if (x, y) is inside the curve.
- \psi(x, y) > 0 if (x, y) is outside the curve.
- ||\nabla\psi|| = 1 almost everywhere.

Lagrangian Approach
This was the "snake" approach to curve evolution:
- Discretize the curve into individual particles.
- Track these particles as they move through a field.

Eulerian Approach
- Discretize the embedding function \psi(x, y) to represent and evolve the curve.
- Evolve \psi(x, y) by updating at fixed grid locations.
- For simplicity, the function \psi(x, y) can be discretized to have the same resolution as the image we are segmenting.

Curvature-Based Evolution
Suppose we have an evolving curve C(t) = (x(t), y(t)). Let's derive the evolution equation for \psi(x, y, t), which has C(t) as a level set. Let C(t) be the zero level set of \psi, so that \psi(x(t), y(t), t) = 0 for all t. This implies that
    d/dt \, \psi(x(t), y(t), t) = 0.
By the chain rule,
    (\partial\psi/\partial x)(dx/dt) + (\partial\psi/\partial y)(dy/dt) + \partial\psi/\partial t = 0,
i.e.
    \nabla\psi \cdot C'(t) + \psi_t = 0.
Decompose C'(t) into components tangent and normal to C(t):
    C'(t) = V_N N(t) + V_T T(t)
Then \nabla\psi \cdot C'(t) = \nabla\psi \cdot V_N N(t), since \nabla\psi is perpendicular to the tangent to C(t). Substituting the level-set definition for the normal to the embedded curve, N = \nabla\psi/||\nabla\psi||:
    0 = V_N (\nabla\psi \cdot \nabla\psi)/||\nabla\psi|| + \psi_t
    0 = V_N ||\nabla\psi||^2/||\nabla\psi|| + \psi_t
    0 = V_N ||\nabla\psi|| + \psi_t
Letting the normal speed be a function of curvature (with the sign convention absorbed into F), we have the evolution equation
    \psi_t = F(\kappa) ||\nabla\psi||.
The segmentation problem is now reduced to finding an appropriate function F(\kappa). We can factor
    F(\kappa) = k (F_A + F_G)
where:
- F_A, the advection term, is usually a constant.
- F_G depends on the geometry (curvature) of the level set.
- k is a stopping term to slow evolution near boundaries; we can use
      k(x, y) = 1 / (1 + ||\nabla G_\sigma * I(x, y)||).
Entropy-Preserving Solution
The upwind finite-difference scheme we used for TV-norm minimization prevents singularities like the "swallowtail" from developing.

Narrowband Update
Since we are primarily interested in the zero level set of \psi, we may evolve \psi only in a small region surrounding that level set.

Next time: "Shape modeling with front propagation: a level set approach" -- details of the level set implementation.

How to Get Shaders into Your Programs
There are many steps, but the interface is easy to use:
- Shader objects: creating and compiling shaders
- Program objects: linking and using programs
- Sending uniform variables to your program
- Sending vertex attributes to your program

Shader Objects
Create vertex and fragment shader objects, then add source code. Why are shader objects separate from shader programs? To avoid redundancy: you may use the same vertex shader in several programs.

To create a shader:
    GLuint glCreateShader(shaderType);
where GLenum shaderType is GL_VERTEX_SHADER or GL_FRAGMENT_SHADER. Returns a shader ID.

Shader source code is stored in strings and compiled at runtime:
    void glShaderSource(shader, count, strings, lengths);
- GLuint shader: ID of the created shader object.
- GLsizei count: number of strings.
- const GLchar **strings: array of strings holding the source code.
- const GLint *lengths: array of string lengths; can be NULL if the strings are NULL-terminated.

Runtime shader compilation opens some interesting possibilities:
- Edit and recompile shader text files without exiting the program.
- Download shaders from the network during execution.
- Generate shaders on the fly using string processing.

To compile:
    void glCompileShader(GLuint shader);
Status (failure or success) is stored as part of the object state. To check
the shader object state:
    void glGetShaderiv(shader, pname, params);
where pname is GL_COMPILE_STATUS, GL_SHADER_TYPE, or GL_INFO_LOG_LENGTH. The info log holds error messages, warnings, and other messages. A similar function exists for programs: glGetProgramiv.

Shader object error checking:

    void printShaderInfoLog(GLuint obj)
    {
        int infologLength = 0;
        int charsWritten = 0;
        char *infoLog;
        glGetShaderiv(obj, GL_INFO_LOG_LENGTH, &infologLength);
        if (infologLength > 0) {
            infoLog = (char *)malloc(infologLength);
            glGetShaderInfoLog(obj, infologLength, &charsWritten, infoLog);
            printf("%s\n", infoLog);
            free(infoLog);
        }
    }

    int status;
    glCompileShader(VShader);
    glGetShaderiv(VShader, GL_COMPILE_STATUS, &status);
    if (status == GL_FALSE)
        printShaderInfoLog(VShader);

Program Objects
Create a program object:
    GLuint glCreateProgram(void);
Add shader objects:
    void glAttachShader(GLuint program, GLuint shader);
Link the program:
    void glLinkProgram(GLuint program);
Linking assigns locations for uniform variables and vertex attributes, and resolves references between shader objects (varying variables output from the vertex program are input to the fragment program).

Program object error checking:

    void printProgramInfoLog(GLuint obj)
    {
        int infologLength = 0;
        int charsWritten = 0;
        char *infoLog;
        glGetProgramiv(obj, GL_INFO_LOG_LENGTH, &infologLength);
        if (infologLength > 0) {
            infoLog = (char *)malloc(infologLength);
            glGetProgramInfoLog(obj, infologLength, &charsWritten, infoLog);
            printf("%s\n", infoLog);
            free(infoLog);
        }
    }

    int status;
    glLinkProgram(Program);
    glGetProgramiv(Program, GL_LINK_STATUS, &status);
    if (status == GL_FALSE) {
        printf("Shader Link Error\n");
        printProgramInfoLog(Program);
    }

Using a program:
    void glUseProgram(GLuint program);
glUseProgram(0) reverts to the fixed-function pipeline. We can also pass the values of uniform variables to the program currently in use.

Overview of the steps in main():
1. Create shaders
2. Add source code
3. Compile shaders
4. Create program
5. Add shaders
6. Link program
7. Use program
8. Set uniform and attribute variables

Shader example (initialization):

    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    const char *vscode = "void main(void) { gl_Position =
gl_ModelViewProjectionMatrix * gl_Vertex; }";
    glShaderSource(vs, 1, &vscode, NULL);
    glCompileShader(vs);
    int status;
    glGetShaderiv(vs, GL_COMPILE_STATUS, &status);
    if (status == GL_FALSE)
        printShaderInfoLog(vs);

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    const char *fscode = "void main(void) { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }";
    glShaderSource(fs, 1, &fscode, NULL);
    glCompileShader(fs);
    glGetShaderiv(fs, GL_COMPILE_STATUS, &status);
    if (status == GL_FALSE)
        printShaderInfoLog(fs);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);
    glGetProgramiv(prog, GL_LINK_STATUS, &status);
    if (status == GL_FALSE)
        printProgramInfoLog(prog);

Example usage:

    void display()
    {
        /* set up modelview matrix */
        glUseProgram(prog);
        glutSolidTeapot(1.0);   /* or whatever you want to draw */
        glUseProgram(0);        /* return to fixed-function pipeline */
        /* draw other objects */
    }

Passing variables to programs
We can pass the values of uniform variables to the program currently in use:
    GLint glGetUniformLocation(program, name);
- const GLchar *name: variable name in the shader source code.
- Returns -1 if the name is invalid (does not exist).
    void glUniform1f(GLint location, GLfloat v);

Example: time-based animation. If the GLSL vertex or fragment shader declares
    uniform float time;
then:
    static float t = 0.0f;
    t += 0.1f;   /* fake time */
    glUseProgram(prog);
    int timeloc = glGetUniformLocation(prog, "time");
    glUniform1f(timeloc, t);
    /* or, eliminating the temporary variable: */
    glUniform1f(glGetUniformLocation(prog, "time"), t);

glUniform
There are glUniform functions for float and int: scalars, vectors (2, 3, 4 components), and matrices (2x2, 3x3, 4x4). Use int or float for bool (0 = false, otherwise true). Use int for texture sampler types. The glUniform family:
    glUniform1f, glUniform2f, glUniform3f, glUniform4f
    glUniform1i, glUniform2i, glUniform3i, glUniform4i
    glUniform1fv, glUniform2fv, glUniform3fv, glUniform4fv
    glUniform1iv, glUniform2iv, glUniform3iv, glUniform4iv
    glUniformMatrix2fv, glUniformMatrix3fv, glUniformMatrix4fv
The latest GLSL version also allows rectangular matrices. Pass pointers when calling the glUniform*v functions.

Texture Samplers
OpenGL controls which textures the samplers use
with glActiveTexture and glBindTexture. If the GLSL vertex or fragment shader declares
    uniform sampler2D diffuseTex;
then set the texture by:
    glUseProgram(prog);
    glActiveTexture(GL_TEXTURE0);            /* set texture unit */
    glBindTexture(GL_TEXTURE_2D, texID);     /* set texture target */
    glUniform1i(glGetUniformLocation(prog, "diffuseTex"), 0);

Since texture unit names are sequential, you can write:
    glUseProgram(prog);
    glActiveTexture(GL_TEXTURE0 + n);        /* set texture unit */
    glBindTexture(GL_TEXTURE_2D, texID[n]);  /* set texture target */
    glUniform1i(glGetUniformLocation(prog, "diffuseTex"), n);

User-defined vertex attributes
The process is similar to that for uniform variables. If the GLSL vertex shader declares
    attribute vec3 tangent;
    attribute vec3 bitangent;
then:
    glUseProgram(Program);
    GLint tan_loc = glGetAttribLocation(Program, "tangent");
    GLint bitan_loc = glGetAttribLocation(Program, "bitangent");

    glBindBufferARB(GL_ARRAY_BUFFER_ARB, VBO);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnableVertexAttribArray(tan_loc);
    glEnableVertexAttribArray(bitan_loc);

    glVertexPointer(3, GL_FLOAT, 0, BUFFER_OFFSET(vertexOffset));
    glNormalPointer(GL_FLOAT, 0, BUFFER_OFFSET(normalOffset));
    glTexCoordPointer(2, GL_FLOAT, 0, BUFFER_OFFSET(texCoordOffset));
    glVertexAttribPointer(tan_loc, 3, GL_FLOAT, false, 0, BUFFER_OFFSET(tangentOffset));
    glVertexAttribPointer(bitan_loc, 3, GL_FLOAT, false, 0, BUFFER_OFFSET(bitangentOffset));

    glDrawArrays(GL_TRIANGLES, 0, NumVerts);

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableVertexAttribArray(tan_loc);
    glDisableVertexAttribArray(bitan_loc);

Vertex attribute pointer:
    void glVertexAttribPointer(location, size, type, normalized, stride, pointer);
- GLuint location: from glGetAttribLocation.
- GLint size: components per attribute (1, 2, 3, 4).
- GLenum type: data type (GL_FLOAT, etc.).
- GLboolean normalized: whether signed integer attributes are mapped to [-1.0, 1.0] and unsigned integer attributes to [0.0, 1.0].
- GLsizei stride: 0 for tightly packed.
- const GLvoid *pointer: address relative to the start of the array or VBO. Use the BUFFER_OFFSET
macro.

For more details, see "OpenGL Shading Language" (Randi Rost), Chapter 7, "OpenGL Shading Language API"; the OpenGL spec, www.opengl.org/registry/doc/glspec21.20061201.pdf; and the GLSL spec, www.opengl.org/registry/doc/GLSLangSpec.Full.1.20.8.pdf.

Displacement Mapping on the GPU
- Vertex shader: per-vertex displacement mapping.
- Fragment shader: per-pixel displacement mapping; normal mapping; parallax mapping.

Representing surface details
Coarse surface detail is represented geometrically (vertices). Small-scale details are stored in texture maps:
- Normal maps: tangent-space perturbed normal vector.
- Displacement maps: tangent-space offset in the normal direction.

Vertex Shader Displacement Mapping
A highly tessellated macrostructure surface is modified by the vertex shader, with the height map stored as a texture.
- High geometric complexity: need to send all vertices.
- Correct silhouettes.
- Breaks shadow volumes.
- The vertex processor can only fetch from texture on recent GPUs.
- Geometry shaders can generate vertices on the GPU; may be efficient soon.

Normal Mapping
No vertex displacement at all; merely compute lighting using a perturbed normal.
- Low geometric complexity.
- Incorrect silhouette.
- Incorrect self-occlusion.

Raytracing in tangent space
Work in the tangent plane with the tangent-space view vector V_TBN, a height map h(u, v), and a maximum displacement h_max; note that we are displacing the surface in the n direction. For each fragment:
- Find the first intersection of the tangent-space view vector with the height field.
- Color the fragment using the displaced texture coordinates (u_off, v_off).
Problems finding the intersection:
- The intersection may be arbitrarily far away.
- Multiple intersections.
- No intersection.
- Intersection point out of range.

Setting up parallax mapping
Same as tangent-space normal mapping.
- Vertex program: set varying values for tangent, bitangent, normal; compute the tangent-space view vector.
- Fragment program:
      vec2 texcoords = ComputeOffset(gl_TexCoord[0].st, view);
  Use these offset coordinates to access the normal map, diffuse color, etc.

What is a ray?
A half-infinite line (a line with one endpoint at infinity), described in terms of one endpoint p and a direction vector v. Parametric
equation:
    r(t) = p + t v,   t in [0, infinity)

Ray equation
The intersection is along the ray through the point (u_0, v_0, 0) in the view direction:
    (u_off, v_off, h(u_off, v_off)) = (u_0, v_0, 0) + t (V_x, V_y, V_z)

Parallax Mapping
Assumption: h(u, v) is constant, h(u, v) = h(u_0, v_0) = h_0:
    (u_off, v_off, h_0) = (u_0, v_0, 0) + t (V_x, V_y, V_z)
From the last component we have t = h_0 / V_z, so
    u_off = u_0 + h_0 V_x / V_z
    v_off = v_0 + h_0 V_y / V_z

[KANEKO, T., TAKAHEI, T., INAMI, M., KAWAKAMI, N., YANAGIDA, Y., MAEDA, T., TACHI, S. Detailed shape representation with parallax mapping. In Proceedings of ICAT 2001, 2001, pp. 205-208.]

In GLSL:
    vec2 ComputeOffset(vec2 uv, vec3 view)
    {
        float h0 = texture2D(heightmap, uv).r;
        return uv.st + h0 * view.xy / view.z;
    }

Problem with parallax mapping: at shallow angles the offset can be very large, because V_z (the normal component of the tangent-space view vector) becomes very small.

Parallax Mapping with Offset Limiting
In theory, make the maximum offset be equal to h_0. Extreme cases:
- Shallow angles: V_z -> 0, so sqrt(V_x^2 + V_y^2) -> 1, and (u_off, v_off) = (u_0 + h_0 V_x, v_0 + h_0 V_y).
- Steep angles: V_z -> 1, and likewise (u_off, v_off) = (u_0 + h_0 V_x, v_0 + h_0 V_y).
In practice, simply eliminate the factor of 1/V_z.

[WELSH, T. Parallax Mapping with Offset Limiting: A Per-Pixel Approximation of Uneven Surfaces. Technical report, Infiscape Corporation, 2004.]

Steep Parallax Mapping
Assumption: constant slope. Get slope information from the normal map and compute a ray-vs-plane intersection.

[MCGUIRE, M., MCGUIRE, M. Steep parallax mapping. In I3D 2005 Poster, 2005. http://www.cs.brown.edu/research/graphics/games/SteepParallax/index.htm]

Plane equation
A plane can be described in terms of a point p_0 in the plane and the normal vector n perpendicular to the plane. The plane is implicitly defined as the set of points p such that
    n . (p - p_0) = 0
Here p - p_0 is a vector parallel to the plane, and n is by definition perpendicular to all vectors, lines, and curves which lie in the plane.
Steep parallax mapping (continued): the plane normal comes from the normal map, n(u_0, v_0); p_0 is the point (u_0, v_0, h_0); p is the intersection of the ray and the plane. Plug in the equation for the ray and solve for t:
    n . (p - p_0) = 0
    n . ((u_0, v_0, 0) + t V) = n . (u_0, v_0, h_0)
    t (n . V) = n_z h_0
    t = n_z h_0 / (n . V)
Plugging t into the ray equation:
    (u_off, v_off) = (u_0, v_0) + [n_z h_0 / (n . V)] (V_x, V_y)
In practice, offset limiting is imposed by setting the denominator to 1.

Iterative Parallax Mapping
Iterate the steep parallax mapping approach, usually 3-4 iterations.
- Convergence is not guaranteed.
- May overshoot the first intersection.
- Requires multiple texture reads.

[PREMECZ, M. Iterative parallax mapping with slope information. In Central European Seminar on Computer Graphics, 2006. http://www.cescg.org/CESCG-2006/papers/TUBudapest-Premecz-Matyas.pdf]

Binary Search
Maintain points (u_in, v_in) and (u_out, v_out) bracketing the intersection and test the midpoint (u_mid, v_mid):
- If h_mid > h(u_mid, v_mid), the ray is still above the surface, so set (u_in, v_in) = (u_mid, v_mid).
- If h_mid < h(u_mid, v_mid), set (u_out, v_out) = (u_mid, v_mid).

Linear Search
Advance along the ray with a fixed step size in (u, v); stop when an intersection is detected.

Relief Mapping
A 2-phase algorithm combining search techniques: phase 1, linear search; phase 2, binary search.

[OLIVEIRA, M. M., BISHOP, G., MCALLISTER, D. Relief texture mapping. In SIGGRAPH 2000 Proceedings, 2000, pp. 359-368.]
[POLICARPO, F., OLIVEIRA, M. M., COMBA, J. Real-time relief mapping on arbitrary polygonal surfaces. In ACM SIGGRAPH 2005 Symposium on Interactive 3D Graphics and Games, 2005, pp. 155-162.]

Secant Search
Similar to binary search, but rather than simply testing the midpoint, test the intersection point of the ray and the line between (u, v, h)_in and (u, v, h)_out. Assumption: the height map is planar over this interval.

Interval Mapping
A 2-phase algorithm: phase 1, linear search; phase 2, secant search.
[RISSER, E. A., SHAH, M. A., PATTANAIK, S. Interval mapping (poster). In Symposium on Interactive 3D Graphics and Games (I3D), 2006.]

Cone Stepping
Main idea: skip empty space. Precompute the largest empty cone above each texel in the height map; the cone axis is the normal direction. The cone apex is (u, v, h(u, v)); we only need to store the angle (slope) of the cone.

[DUMMER, J. Cone Step Mapping: An Iterative Ray-Heightfield Intersection Algorithm. Tech. rep., 2006. http://www.lonesock.net/files/ConeStepMapping.pdf]

Parallax Occlusion Mapping
Incorporate self-shadowing:
- Phase 1: view vector / heightmap intersection.
- Phase 2: light vector / heightmap intersection.

[TATARCHUK, N. Practical parallax occlusion mapping with approximate soft shadows for detailed surface rendering. In ShaderX5, Engel, W., Ed., Charles River Media, Boston, 2006, pp. 75-105.]

Silhouette-Correct Parallax Mapping
- Compute curvature from the mesh.
- Approximate the surface as locally quadric rather than locally planar (a warped tangent space).
- Ray vs. displaced quadric test.
- Discard fragments with no intersection.

[OLIVEIRA, M. M., POLICARPO, F. An Efficient Representation for Surface Details. Tech. rep. RP-351, UFRGS, 2005.]

Light Sources
- Hemispheric lighting
- Point light, directional light, spotlight
- Projective textures

Hemispheric Lighting
Simulate indirect lighting from a pair of hemispheres: sky and ground. If z is up in your world, blend a sky color and a ground color based on the surface normal; the GLSL function mix(x, y, a) returns the linear blend of x and y.

Sky and ground colors:
- Simple constant colors.
- Diffuse cube maps for the sky/ground colors (render-to-texture).
- A physically based model can be used for the sky color:
  [Perez, Seals, Michalsky. All-weather model for sky luminance distribution. Solar Energy 50(3), 1993, pp. 235-245.]
  [Preetham, Shirley, Smits. A Practical Analytic Model for Daylight. SIGGRAPH 1999, pp. 91-100.]
Light source models
- Point light (omni light): colour L, position p_light. The light vector at a surface point is
      l = (p_light - p_surface) / ||p_light - p_surface||
- Directional light: colour L, constant direction d.
- Spotlight: colour L, position p_light, direction d_spot, cone falloff exponent e_spot, cone cutoff angle theta. With l the normalized direction from the light to the surface point:
      if (d_spot . l) < cos(theta), then I = 0
      else I = L (d_spot . l)^{e_spot}

Other light source models:
[Ronen Barzel (Pixar). Lighting controls for computer cinematography. Journal of Graphics Tools 2(1), 1997, pp. 1-20.]

Aspects of lighting models (selected on a per-object basis):
- Shape: generalize the round spot shape (cone/pyramid, cut-on and cutoff), cookie-cutter / slide-projector image, noise.
- Drop-off: distance beam falloff.
- Direction.
- Other properties: intensity, color.
Also discussed in the OpenGL Shading Language "Orange Book".

Projective Textures
Apply the projection matrix, divide by w, then scale and bias:
- Eye space: camera at origin, X right, Y up, -Z look.
- Clip space: -w to w in each direction.
- Normalized device coordinates (NDC): -1 to 1 in each direction.
- Texture coordinates: 0 to 1 in each direction; TC = (NDC + 1)/2.

Texture projected from the camera:
    vec4 clip = gl_ModelViewProjectionMatrix * gl_Vertex;
    vec3 ndc = clip.xyz / clip.w;
    vec2 texcoord = 0.5 * (ndc.xy + 1.0);

Texture projected from an arbitrary projector
Model the projector with a view matrix and a projection matrix, just as you would a camera. Use gluLookAt + glGet to form the view matrix; use glFrustum or gluPerspective + glGet to form the projection matrix.

    varying vec2 spotCoord;
    uniform mat4 spotView;
    uniform mat4 spotProj;

    vec4 clip = spotProj * spotView * gl_Vertex;
    vec3 ndc = clip.xyz / clip.w;
    spotCoord = 0.5 * (ndc.xy + 1.0);

In the fragment shader:
    vec4 spotlightColor = texture2D(spotMap, spotCoord);
    vec4 textureColor = texture2D(baseMap, texCoord);
    vec4 diffuseColor = textureColor * spotlightColor;

GLSL projective texture access:
    vec4 texture2DProj(sampler2D sampler, vec4 coord, float bias);
The texture coordinates coord.s, coord.t are divided by coord.q before the texture lookup:
    texture2DProj(tex, coord) == texture2D(tex, coord.st / coord.q)

Define the scale and bias matrix
    S = | 1/2  0    0   1/2 |
        |  0   1/2  0   1/2 |
        |  0   0    1/2 1/2 |
        |  0   0    0    1  |
In practice, don't apply these matrices one at a time: send the product of the matrices as a single uniform matrix. Projective texturing in GLSL can then be done by
    vec4 spotCoord = S * spotProj * spotView * gl_Vertex;
    vec4 spotlightColor = texture2DProj(spotMap, spotCoord);
In the fixed-function pipeline, simply set the texture matrix (glMatrixMode(GL_TEXTURE)) and use glTexGen with GL_EYE_LINEAR mode.

There are Proj versions of many texture access functions: texture1DProj, texture2DProj, texture3DProj, and more.

Projector focus
Use the texture LOD bias:
    vec4 spotlightColor = texture2DProj(spotMap, spotCoord, focus);
(e.g., focus = 0.0, 2.0, 3.0, 4.0)

Reverse projection
Projective texture mapping produces a dual projection: one in the projector view direction and one in the opposite direction. To fix this:
    vec4 spotlightColor = step(0.0, spotCoord.q) * texture2DProj(spotMap, spotCoord, focus);

GLSL functions:
- step(vec edge, vec x): if x < edge, return 0.0; else return 1.0.
- sign(vec x): if x > 0.0, return 1.0; if x < 0.0, return -1.0; if x == 0.0, return 0.0.
- smoothstep(vec edge0, vec edge1, vec x): if x < edge0, return 0.0; if x > edge1, return 1.0; smooth Hermite interpolation in between.
- clamp(vec x, float minVal, float maxVal): return min(max(x, minVal), maxVal).

NOTE 04L12: BRIEF INTRODUCTION TO IMAGE PROCESSING

Image Model
An image of a surface is a 2D function of the light intensity falling on the surface. For a given spatial location (x, y) on the surface, the image f(x, y) is formed from a combination of the illumination on the point and the surface reflectance at that point:
    f(x, y) = i(x, y) s(x, y)
where i(x, y) is the illumination component and s(x, y) is a component due to surface reflectance. Typically 0 < i(x, y) < infinity and 0 < s(x, y) < 1, thus 0 < f(x, y) < infinity. Practically, f is bounded within certain limits, called the greyscale, such that
    L_min <= f(x, y) <= L_max
The scale is further shifted such that L_min = 0 and L_max = L, with values of f(x, y) = 0 corresponding to black and values of f(x, y) = L corresponding to white. The image formed in this way is called a monochrome or greyscale image, as it considers only the intensity values.

Sampling and Quantization
To convert to digital form, the image is discretized both in its spatial coordinates and its amplitude values.
Discretization of the spatial coordinates (x, y) is called image sampling. Discretization of the amplitude values f(x, y) is called grey-level quantization. The result of the digitization process is a digital image. We can then represent the image as an N x M array of equally spaced samples:

    f(x, y) = [ f(0,0)     f(0,1)     ...  f(0,M-1)
                f(1,0)     f(1,1)     ...  f(1,M-1)
                ...
                f(N-1,0)   f(N-1,1)   ...  f(N-1,M-1) ]

Each element in the array is called a picture element, or pixel. The number of grey levels is typically a power of two (e.g., 256), and the image size is measured in pixels (N x M).

<< See attached images illustrating the above results. >>

Pixel Neighborhoods
The following types of neighbors are often used in image processing:
- Horizontal neighbors
- Vertical neighbors
- Diagonal neighbors
- 4-neighbors: comprising the vertical and horizontal neighbors
- 8-neighbors: comprising the 4-neighbors and the diagonal neighbors

Pixel Connectivity
For two pixels to be connected, they must be neighbors and they must be similar, based on some criterion on their grey levels: for example, their values need not be equal, but they must not differ by more than a specified threshold E. The types of connectivity used include:
- 4-connectivity
- 8-connectivity
- m-connectivity (mixed connectivity)

Arithmetic and Logic Operations
We can perform the four basic arithmetic operations on pixels:
- Addition: p + q
- Subtraction: p - q (used, for example, to remove a background from an image)
- Multiplication: p x q (used, for example, to correct grey-level shading)
- Division: p / q

Logic operations are primarily used on binary images. These are used in image feature detection, shape analysis, and image masking. The basic logic operations are:
- AND: p1 AND p2
- OR: p1 OR p2
- NOT: NOT p (also called the complement)

<< See attached diagrams of sample binary operations. >>

Colour Processing

Physics of Colour
The colour of an object is determined by three basic factors:
- the surface reflectance function of the object, S;
- the spectral energy distribution function of the ambient illumination (the illuminant), E;
- the spectral responsivity of the colour sensors, R.
illumination (the illuminant), E
o the spectral responsivity of the colour sensors, R

For a given wavelength lambda, the radiance (outgoing light) from a point (x, y) on the surface of an object is given by
    I(x, y, lambda) = E(lambda) S(x, y, lambda)
The colour as seen through the kth sensor of a visual system will be given by
    rho_k = Integral_lambda I(x, y, lambda) R_k(lambda) d lambda
For the usual trichromatic system of the human visual system, k = 1, 2, 3, representing the RGB tristimulus colour values. The RGB are also called the primary colours, and any other colour can be generated by some combination of the RGB values and a variation of the wavelength. Notice that the luminance as represented above does not depend on the object's surroundings. However, perceived luminance, or brightness, depends also on the luminance of the surroundings.

Some terminologies related to colour:
Intensity - the amount of light, without a consideration of the colour attributes. This is usually indicated by the grey-level values, which vary from black to grey to white.
Brightness - this is a subjective attribute and is related to human perception of light.
Hue - this is a colour descriptor that depends on the dominant wavelength in a mixture of light waves. The hue represents the perceived colour by a human observer.
Saturation - indicates the relative purity of the hue. This depends on the amount of white light that is added to a given colour to produce another colour. The more white we add, the less pure the hue, and thus the less the saturation.
Hue and saturation together are called the chromaticity, and thus a given colour can be described in terms of the brightness and chromaticity.

Colour Models: The colour models used in image processing generally represent colour information in a 3D coordinate system, whereby each colour can be represented as a single point in the 3D space. Various models have been proposed and are currently in use; which model to use is often dependent on the application. In image processing, the three models that are often used are the RGB, YIQ, and HSI.

RGB Representation: In the
RGB representation, each image is represented with three independent image planes, one for each of the three primary colours R, G, and B. When the RGB values are normalized to unit values, the RGB model can be represented as a unit cube. The vertices of the cube represent different colours. For instance, the colour at the origin (0,0,0) is taken to be black, while the colour at the furthest point (1,1,1) is taken to be white. The line from (0,0,0) to (1,1,1) thus defines the greyscale representation. Sometimes the normalized RGB values are also used in image processing:
    r = R / (R + G + B),  g = G / (R + G + B),  b = B / (R + G + B)
Obviously r + g + b = 1. The normalized values are also called the trichromatic coefficients. The RGB model has found applications mainly in colour monitors and colour cameras. One advantage of the RGB representation is that most of the other colour models can be obtained by a simple linear transformation of the RGB values. The problem is that the intensity component is not decoupled from the chromatic components.

YIQ Representation: This is used mainly in TV broadcast and in image transmission. The YIQ model decouples the colour components (I, Q) from the achromatic component Y, which represents the monochrome (grey-level) information. It is thus compatible with monochrome TV standards.
    Y = 0.299 R + 0.587 G + 0.114 B
    I = 0.596 R - 0.275 G - 0.321 B
    Q = 0.212 R - 0.523 G + 0.311 B

HSI (Hue, Saturation, and Intensity) Representation: The main motivation for using the HSI representation is that it is more closely related to the way humans perceive colour. H (hue) is the colour attribute indicating the dominant pure colour; S (saturation) indicates how saturated the colour is, i.e. to what extent the colour is diluted by white light; I is the intensity. The HSI components are defined as follows:
    I = (R + G + B) / 3
    S = 1 - [3 / (R + G + B)] min(R, G, B)
    H = cos^-1 { [(R - G) + (R - B)] / (2 [(R - G)^2 + (R - B)(G - B)]^(1/2)) }

Image Processing Operations: Image processing operations can generally be represented as follows: f(x, y) is the input image, T[.] is the image processing operator, and f0(x, y) is the output image. Notice that some operators can act on more than one input image at the same
time.

Two broad types of processing can be defined, depending on the extent of the operation: point processing and neighbourhood-based processing.

Point Processing: In point processing, the result at point (x, y) in the output image depends only on the pixel values at the corresponding (x, y) position in the input images. Effectively, the grey level at a given point is mapped into another grey level using a mapping function s = T(r), where r is the input grey level and s is the output grey level. Examples of point processing are:
o contrast stretching
o image thresholding
o generation of image negatives
o dynamic range compression
Other examples are image differences, f0(x, y) = f1(x, y) - f2(x, y); image addition, f0(x, y) = f1(x, y) + f2(x, y); and the image average for multiple input images, f0(x, y) = (1/n) Sum_{i=1}^{n} fi(x, y).

Neighbourhood-based Processing (spatial filtering): An operation is neighbourhood-based if the result at a given point depends on the pixel values in the neighbouring points. The neighbourhoods are usually defined by use of some mask or window (also called a filter) which is applied on the input image. The process is sometimes called filtering. Using a filtering mask centred at, say, (x0, y0) (see diagrams), the result of the filtering operation will be given by
    g(x0, y0) = Sum_{i=1}^{n} wi f(xi, yi)
where n is the number of neighbours and wi is the weight at the ith position in the mask.

Basic Filter Shapes

Low-Pass Filters: These eliminate the high-frequency components (edges, sharp details) while passing the low-frequency components. Thus the overall effect is a blurring of the input image. From the shape of the filter response we can observe that all coefficients of the mask will have to be positive. Using the above masks, the result at a given point will simply be the average of its neighbours; this operation is therefore also called neighbourhood averaging. As the size of the neighbourhood increases, the averaging effect increases.

Median Filter: Here, rather than using the direct average, the median of the neighbouring pixels is used. The effect is that the sharp edges are
preserved while noise is removed. << See sample images. >>

Sharpening Filters: These emphasize the fine details or sharp edges in the images. Thus the high-frequency components are passed while the low-frequency components are eliminated. From the impulse response of the sharpening filter, we can expect to have positive values for the w's near the mask centre and negative values as we move away from the origin. The high-pass filters form the basis of most edge detection algorithms; the major difference is in the weights assigned to points in the mask. (Examples of masks used in edge detection.) Notice that in all cases the sum of the weights is always equal to 0.

Other issues of interest:
o Image transforms
o Image segmentation
o Description and representation of segmented images
o Motion analysis

MULTIMEDIA DATA COMPRESSION

MM data compression basically aims at reducing the amount of information needed to represent a given MM data item. The basis of compression is the exploitation of the various redundancies in the original MM data. Practically, this means removing the statistical correlation that may exist amongst the various parts of the data. The amount of compression that can be achieved is proportional to the amount of redundancy in the data. The amount of redundancy, in turn, depends on the type of representation used for the data. Given two representations for the same information, let n1 and n2 represent the respective data sizes in the two representations, and assume n1 >= n2.
The compression ratio is given by
    CR = n1 / n2
The relative data redundancy is given by
    RD = 1 - 1/CR

Applications:
o Storage: smaller object sizes, less storage requirements (archives, VoD)
o Transmission: smaller object sizes mean less bandwidth (digital TV, video, tele-medicine)
o Processing: faster processing in the compressed domain; used in image manipulations and video analysis

Sources of Redundancy

Coding Redundancy: This is a form of redundancy that results from the basic representation scheme used to describe
the image primitives, e.g. the usual colour or grey-level representation. Let
    L = number of grey levels,
    rk = the kth grey level, k = 0, 1, 2, ..., L-1,
    l(rk) = number of bits used to represent rk,
    nk = number of pixels with grey value rk, and
    N = image size (total number of pixels).
The probability of having pixels with grey level rk in the image is simply
    P(rk) = nk / N
The average code length for the image is then given by
    Lavg = Sum_{k=0}^{L-1} l(rk) P(rk)
The problem of image compression then is to code the image such that Lavg is minimized. Any coding scheme in which Lavg is not minimized has some coding redundancy, and thus could be a candidate for further compression. We can observe that this is in fact the case with most coding schemes that use constant-length coding, e.g. our usual grey-level or colour representations. When different values of l(rk) are used for different rk, we have variable-length coding. << Example: table from Gonzalez, pp. 311. >>

Spatial Redundancy: Spatial redundancy usually occurs as a result of the natural similarity between neighbouring points on an object surface. This usually manifests in the form of correlation between nearby pixels (order/disorder in the image). This is independent of the coding redundancy. Correlation between pixels implies possible predictability of the pixels using information from some other pixels. Spatial redundancy can be exploited by:
o representing the images in terms of pixel differences rather than explicit pixel values
o using run-length pairs, for instance along a given path in the image

Temporal Redundancy: This is similar to spatial redundancy, but correlation is now with respect to temporal rather than spatial neighbourhoods. Used mainly for video and audio coding.

Psychovisual Redundancy: Perception is often subjective rather than quantitative. Some information items are treated as less important than others by the human visual system (HVS). Psychovisual redundancy exploits this limitation of the HVS by laying more emphasis on those aspects of the data that the human will perceive.
o This is usually
achieved by use of quantization
o Results in lossy compression, since the process is not reversible
o This is important in image and video compression

Psychoacoustic Redundancy: Similar to psychovisual redundancy, only that here the consideration is with respect to the human auditory system. This is more important for audio compression.

Measures of Compression Performance:
o Compression ratio
o Distortion: quantitative measures, e.g. mean square error (MSE), PSNR
o Perceptual quality: subjective rating scales, just-noticeable thresholds, etc.
o Coding complexity
o Decoding complexity

Compression Models: The compression process can be described by use of the encoder-decoder model. The encoder performs the compression, and each encoding stage removes or facilitates the removal of one form of redundancy. The decoder performs the inverse process of decompression.

Two Types of Compression

Lossless compression:
o Reconstructed data is an exact replica of the original
o Provides low compression ratios
o Used in applications such as data processing, law, medicine, etc.

Lossy compression:
o Reconstructed data is only an approximation of the original
o Possible information loss
o Provides an avenue for huge compression ratios
o Applications include TV broadcasting, VoD, data storage

Lossless Compression: Since the primary source of information loss is the quantization stage, for lossless compression there will be no quantization and dequantization stages. The basic issues then are:
o The representational problem: the transformation stage, which exposes the redundancies in the data
o Elimination of the exposed redundancy: the encoding stage

Lossless Compression Schemes

Variable-Length Coding: This reduces the coding redundancy in the data representation by representing the most probable symbols (e.g. grey levels) with the shortest possible codewords. One popular technique for VLC is Huffman coding. The Huffman code is said to be optimal in the sense that it produces the smallest possible number of code symbols per source symbol, if the source symbols are coded one at a time.

Steps in Huffman Coding:
1. Source reduction by ordering and propagation of probabilities of symbol occurrence:
o Order the symbol probabilities
o Combine the lowest-probability symbols into a single symbol
o Replace the combined symbols with one symbol at the next reduction
o Repeat the above until we get a combined symbol with probability 1
2. Code assignment for the reduced source:
o Assign the smallest codes to the smallest (most reduced) source and work backwards to the original source
o For a reduced symbol that is generated by a combination of other symbols, assign the code used for the reduced symbol to BOTH parent symbols, then distinguish the parent symbols by arbitrarily appending unique symbols to them

The code words can be used to form a lookup table, which is then used for encoding and decoding. The Huffman code is thus often said to be an instantaneous and uniquely-decodable block code, since each code word can be decoded without reference to succeeding symbols, any string of symbols can be decoded uniquely, and each source symbol is mapped into a fixed sequence of code symbols. The overall implication is that decoding can be performed by simply scanning the coded string in a left-to-right manner.

Problem with Huffman coding: complexity. For n symbols it requires n - 2 source reductions and n - 2 code assignments, which can be time consuming for large n (e.g. n = 256 for typical grey-level images). Other near-optimal methods, such as arithmetic coding, are sometimes used.

Pattern Substitution: The basic idea here is to replace a data segment with a code word, which results in a form of table lookup. For compression to be achieved, the length of a symbol in the table should be less than that of the corresponding original symbol. This is used mainly in situations where there are many repeating patterns. Examples are:
o Repetition flags, e.g. the run of repeated 9's in the sequence 245999999990 can be replaced by a special repetition symbol
o Dictionary lookup
o Constant-area coding, used especially for binary images: identify large areas with contiguous 1's or 0's,
and code the areas with special codes.

Run-Length Encoding (RLE): This represents the data stream in terms of the number of runs for each symbol. The sequence 245999999990 above could then be coded as the (symbol, run-length) pairs (2,1) (4,1) (5,1) (9,8) (0,1). The results from the RLE could be compressed further by passing them as the input to, say, a variable-length coding scheme.

Predictive Coding: Predictive coding predicts the value at a given data point based on the preceding data points, and codes only the difference between the predicted value and the actual value at the data point. Thus only the prediction error (sometimes called the new information) is coded. This is used to reduce the spatio-temporal redundancies that may exist between nearby data points (for example, pixels in an image). For an mth-order predictor, the predicted value is given by
    fhat_n = Sum_{i=1}^{m} alpha_i f_{n-i}
where the alpha_i are weighting factors. For DPCM, the current data point is assumed to have the same value as the preceding data point, that is, fhat_n = f_{n-1}, and thus the prediction error is
    e_n = f_n - f_{n-1}
Notice that the transformation stage here corresponds to the mapping of the original data into prediction errors. The amount of compression is thus proportional to the entropy reduction produced by the mapping.

Lossy Compression

Basic motivation:
o Huge compression ratios
o For some applications exact reconstruction is not necessary, and thus accuracy can be compromised

Sources of error:
o The quantization process
o Truncation/round-up errors

Basic methodology: simply insert a quantization stage between the transformation and encoding stages. In general, the transformation stage will have exposed some form of redundancy in the original data. The quantization stage then exploits the exposed redundancies by quantizing the transform coefficients or prediction errors, rather than the original data.

Quantization:
o The process of converting a continuous signal into a discrete signal
o A basic procedure in digitization
o A many-to-one mapping, and thus comes with an inherent loss of information

Uniform Quantization: Here the same quantization step size is used for all quantization levels. Let R be the dynamic range of the data (i.e. the total range of the input values) and L the number of quantization levels. Then the quantization step size is just
    Delta = R / L
The quantization is performed by mapping an input value s to s_i if
    (i - 1) Delta <= s < i Delta,  i = 1, 2, ..., L

Non-Uniform Quantization:
o The quantization step size is not the same for all quantization levels
o For an image, some regions will have coarse quantization while others will be fine-grained
o Generally, more fine-grained quantization is used in rapidly changing areas, while slowly varying areas can use coarser quantization

Optimal Quantizers: These aim at deriving the best value of the quantization step size for each quantization step. Usually this is done based on the probability distribution function of the input data.

Transform Coding: Spatial-domain transformations (for example, predictive coding) operate directly on the sample data values (for example, image pixels). In transform coding, the data is first transformed into the frequency domain, and operations are then performed on transform coefficients rather than direct pixel values. The steps in transform coding are as follows:
o Divide the image into suitable subimages
o Apply a linear transform on the subimages to produce a set of transform coefficients
o Quantize the coefficients
o Encode the quantized coefficients
Note that the transformation on its own does not produce compression. Rather, it is expected that the transformation should further expose the redundancies in the data by:
o Decorrelating the pixels in the image
o Packing most of the important information into only a few transform coefficients
o Aiding in the removal of the coefficients with the least information, e.g. by quantization

Given a 1D signal f(x), the forward and inverse transforms are defined as follows.
Forward transform:
    T(u) = Sum_{x=0}^{N-1} f(x) g(x, u),  u = 0, 1, 2, ..., N-1
Inverse transform:
    f(x) = Sum_{u=0}^{N-1} T(u) h(x, u),  x = 0, 1, 2, ..., N-1
g(x, u) is called the forward transformation kernel, while h(x, u) is called the inverse transformation kernel.
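The 1-D kernel formulation above can be illustrated in code. The sketch below (hypothetical helper `dct_kernel`) assumes the orthonormal DCT-II kernel as g, in which case the inverse kernel h equals g; it computes T(u) = Sum_x f(x) g(x, u) and recovers f exactly:

```python
import numpy as np

def dct_kernel(N):
    """Forward DCT-II kernel g(x, u) = alpha(u) cos((2x+1) u pi / (2N))."""
    g = np.zeros((N, N))
    for u in range(N):
        alpha = np.sqrt(1.0 / N) if u == 0 else np.sqrt(2.0 / N)
        for x in range(N):
            g[x, u] = alpha * np.cos((2 * x + 1) * u * np.pi / (2 * N))
    return g

N = 8
f = np.arange(N, dtype=float)   # a sample 1-D signal
g = dct_kernel(N)
T = g.T @ f                     # forward: T(u) = sum_x f(x) g(x, u)
f_rec = g @ T                   # inverse: orthonormal kernel, so h = g
assert np.allclose(f, f_rec)
```

Because this kernel is orthonormal, the inverse transform is just the transpose, which is why the DCT is convenient in transform coding.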
For an N x M 2D array:
Forward transform:
    T(u, v) = Sum_{x=0}^{N-1} Sum_{y=0}^{M-1} f(x, y) g(x, y, u, v)
Inverse transform:
    f(x, y) = Sum_{u=0}^{N-1} Sum_{v=0}^{M-1} T(u, v) h(x, y, u, v)
The transformation kernel is independent of the input data; it depends only on the indices (x, y, u, v). The kernels are sometimes called the basis functions for the transform. Clearly, the transformation kernel determines the nature of the transformation. Therefore, for most transforms used in transform coding, the major difference is in the definition of the forward and inverse kernels. For example:
Fourier transform:
    g(x, y, u, v) = (1/MN) exp[ -j 2 pi (ux/M + vy/N) ]
Discrete cosine transform:
    g(x, y, u, v) = alpha(u) alpha(v) cos[(2x+1) u pi / 2N] cos[(2y+1) v pi / 2N]
where alpha(0) = sqrt(1/N) and alpha(u) = sqrt(2/N) for u = 1, 2, ..., N-1.

Choice of Transform:
o Decorrelation ability
o Energy packing ability
o Computational complexity
o Allowable error
o Application dependent
Generally, the DCT provides a better information packing ability than the DFT. The KLT is the optimal transform in terms of decorrelation and energy packing ability; the KLT, however, requires more computation. Thus the DCT is used in most transform image coding applications.

Effect of Subimage Sizes:
o Generally, error decreases with increasing subimage size
o Also, achievable compression increases with increasing subimage size
o However, computational complexity also increases with subimage size
<< See attached figure. >>
For the DCT, at subimage sizes greater than 16x16 the differences in error and compression tend to become insignificant. The 8x8 subimage size has been used as the standard in most existing systems.

Quantization and Coefficient Selection: The quantization values can be chosen to effectively ignore certain coefficients or to reduce their importance. The choice of coefficients is done using two basic methods:
o Zonal method: select coefficients with overall maximum variance across the image
o Threshold method: select coefficients with maximum magnitudes; those with values below a given threshold are ignored
Quantization is performed by using a quantization table Q, based on the simple formula
    That(u, v) = round[ T(u, v) / Q(u, v) ]
Thus That(u, v) = 0 whenever Q(u, v) > 2 T(u, v). Inverse quantization (dequantization):
    T'(u, v) = That(u, v) Q(u, v)
Clearly, if That(u, v) = 0, the original value of T(u, v) cannot be recovered. Hence, by increasing the values in Q we can obtain more compression, but at the expense of more error. Some compression schemes also use a hybrid approach, combining say DCT and predictive coding.

Encoding Stage: After quantization we may have several runs of zeros or repetitions, which can be coded using any of the methods for lossless encoding, such as RLE or Huffman codes.

Multimedia Data Compression (contd.)

Example using a hypothetical image block: For transforms such as the DCT, we can precompute the transformation matrix and then use simple matrix manipulation on the image data to produce the required transform coefficients. For instance, for an N x N image block, the DCT matrix can be defined as
    B(i, j) = sqrt(1/N),                              i = 0
    B(i, j) = sqrt(2/N) cos[(2j+1) i pi / 2N],        i > 0
Given an input image matrix A, the DCT transform will then be
    C = B A B^T
and the inverse
    A = B^T C B
<< See attached page for the example. >>
We see that only 128 rather than 512 bits will be required to code the data. Of course, this is just an example. Most practical coders use different values in the quantization table to force more values to zero and therefore produce higher compression. Also, more compression will be produced when we apply RLE to the coded bit streams.

The JPEG Compression Standard: JPEG stands for the Joint Picture Expert Group, a CCITT and ISO standards body for image compression.
Goals:
o Quality: the reconstructed image should rate from "very good" to "excellent" in human perceptual judgement
o Generality: the standard should be general enough for almost any image (grey scale, colour, different image sizes)
o Simplicity: the ability to implement in software on general computing platforms, or with simple/affordable hardware

Three Coding Systems used in JPEG:
Baseline coding system:
o lossy, based on the DCT
o adequate for most applications
o moderate compression
o also called baseline JPEG
Extended coding system:
O O O lossy higher compression better precision progressive coding Independent coding system O O O O loslesss completely reversible uses simple predictive coding does not involve DCT or quantization JPEG Coding Procedure 9999 9 Divide the image into 8X8 blocks Process blocks from lefttoright toptobottom Level shift the image pixel values f5 x y fx y TH where 2391 is the number ofgrey levels Compute the 2D DCT of each block Quantize the coefficients using the quantization table Different quantization tables are used for the colour and intensity components Form lD sequence of the coefficients using the zigzag representation Encode the resulting lD sequence uses Huffman coding Different Huffman tables are also used for the luminance and chrominance components The DC coefficients are coded relative to previous DC s using differential coding That is for the DC39s only the prediction error between the current DC and the DC value from the preceding block is coded Thus the coding tables are based on DC difference category pairs where categories indicate the range or size ofthe DC difference For AC coefficients a variable length coding based on tables of stipulated output symbols for given pairs of coefficient value and number of preceding zeros sometimes called runlength category pairs Progressive Encoding In JPEG M O Browsing search and retrieval O Network support for prioritization ie lower resolution information are given higher priority and thus sent out first to enhance speedy browsing O Lower resolution can typically be coded with less precision without degrading the visual quality Hence they will require less bandwi th M eth ods used or Qrogressive en coding Spectral selection This is based on the zigzag ordering At the ith scan only information from the ith diagonal are sent DC AC1ACZ AC63 Usually the DC and the first two AC coefficients are sent at the same time This is conceptually simple The distortion introduced depends on the cutoff ie the number of diagonals 
scaned Successive Approximation Send information from all frequencies at the same time starting from the MSB to the LSB Usually the first 4 MSBs are sent at the first scan Then the remaining bits are sent one at a time This is generally slower than spectral selection but produces a relatively constant distortion across all the spatial frequencies Some techniques try to combine spectral selection with successive approximation Hierarchical Encoding in JPEG Generally used where the source image is at a higher resolution than the display device Hierarchical encoding basically uses a pyramidal image representation to encode images at multiple resolution The resolutions differ by a factor of 2 vertically and horizontally Lower resolution images can be accessed without decompressing the entire image Each resolution is coded as an ordinary image Video Compression Video is often viewed as a sequence of images thus the approaches used in image compression can also be applied to directly video For example the JPEG compression scheme can be applied to each video frame independently This is what happens in the socalled MotionJPEG However while this may be a simple extension of JPEG it does not take the special characteristics of video into consideration Video Scene Characteristics O Scene breaks O Motion object background and camera O Activitycomplexity O Large variations in scene characteristics even with the same sequence O Temporal redundancy General Methods used in Video Compression O Temporallyoriented DPCM 3D Transform Coding O Sequential 2D transform coding with motion compensation O Sequential 2D transform coding without motion compensation e g MotionJ PEG Motion Compensation Motion compensation involves the use of a given motion model in temporal prediction This helps to reduce the amount of spatial information that may need to be coded as prediction error Motion compensation thus requires some form of motion estimation Motion estimation is simply a type of image 
matching. For a given block of pixels, motion estimation determines a motion vector which describes the movement of the block. Motion estimation thus involves searching an image for a block that is a match (or most similar) to the reference block. The search could be either on a previous image (forward prediction) or on a future image (backward prediction). Like any search problem, the issues then are how to constrain the search space and how to define a matching or similarity criterion. Simple matching criteria that have been used include:
o minimum absolute distance (MAD), and
o minimum Euclidean distance (MED)
The motion vector is the pair of horizontal and vertical offsets to the matching block. The prediction error is the set of pixel differences for corresponding pixel positions in the two blocks. Motion estimation is very time consuming, but provides an avenue for huge compression in the image sequence.

Problems with motion compensation:
o Not universal
o Cannot handle scene changes, which are common in video
o Depends critically on the motion model and the search criteria; e.g. translation models cannot handle zooming or rotational motion
o Quite time consuming, depending on the search area

MPEG Compression Standard

Target Applications:
o Data storage: CD-ROM, DVD (support for random access)
o Broadcast video
o Networked MM

MPEG Parts:
o System: multiplexing and synchronization of the various streams
o Video
o Audio
o Conformance: tests to estimate conformance with the design goals

MPEG Video, basic procedure:
o Video representation
o Temporal prediction: exploit temporal redundancy
o Decomposition by DCT
o Quantization: selective reduction in precision; HVS consideration, perceptual quality
o Variable-length coding of the quantized coefficients

Syntactical Layering in MPEG: The sequence layer carries parameters such as rate and access context; the first frame in a GOP must be an I-frame, and the GOP header includes the SMPTE time code.

Frame-based Representation in MPEG: Colour space YCrCb: Y is the luminance or intensity component, and Cr and Cb are the colour components. So the RGB values have to be transformed into corresponding values in the YCrCb colour space. For each macroblock, the colour components are further downsampled by 2 in the horizontal and vertical directions. This is due to the fact that the human visual system is not very sensitive to colour degradations as compared to changes in intensity. A macroblock will thus contain a 16x16 block (four 8x8 blocks) for Y, one 8x8 block for Cr, and one 8x8 block for Cb, for a total of six 8x8 blocks.

Picture Types:
o I-pictures (intra pictures): macroblocks are non-predicted
o P-pictures (predicted pictures): non-predicted or forward predicted
o B-pictures (bidirectionally predicted pictures): macroblocks could be non-predicted, forward predicted, backward predicted, or bidirectionally predicted
o DC-pictures (D-pictures): a sequence of the DC (average) images; used primarily for fast forward
I- and P-pictures are called anchor pictures, since they can be used for further prediction.

Bidirectional prediction. Pros:
o Allows possible prediction of occluded areas in the current frame
o Reduces noise, due to the averaging effect
o Results are comparable to fractional-pixel accuracy in motion estimation
o An avenue for high compression
Cons:
o Increase in the complexity of prediction
B-pictures are not used in further prediction, so error in the B-pictures cannot propagate to other pictures; we only need to worry about perceptual quality when considering B-pictures. Thus B-pictures are usually given a low bit budget in the final encoding.

Macroblock Decomposition by DCT: The DCT is applied to:
o The original I-frames, which are stored without motion compensation
o The residual (prediction) error from the B- and P-frames, after motion compensation
Remember: each macroblock contains four 8x8 blocks for the intensity component and two 8x8 blocks for the colour components. Each of the six 8x8 blocks is transformed using the DCT, just like in JPEG.

Quantization:
o Different quantization tables from JPEG
o Two default quantization matrices: one for I-frames, and the other for P- and B-frames

Coding: Uses a
zigzag scan to convert the 2D coefficients into 1D. Uses VLC with modified Huffman codes. Generally, the reordering makes the coefficients that are usually 0 adjacent to one another, so that RLE can exploit the runs of zeros.

Run-length/Amplitude Coding: In JPEG we had run-length/category pairs. In MPEG the amplitudes are used directly, rather than forming categories. The result is run-length/amplitude pairs.

MPEG Constrained Parameters: These are restrictions, or defined compliance points. For example, in MPEG-1 a set of such restrictions is used.

Recent Developments

Medical Image Analysis CS 593/791
Computer Science and Electrical Engineering Dept., West Virginia University
3rd March 2006

Outline:
o Review
o Finding the MAP estimators
o Application to brain MRI segmentation
o Results

The classical MRF model: A label field f, observed images I, and an observation model.
o Smoothness constraint on f (Markovian): f is formulated as a GRF using the Ising model
o Phi is a parametric model for image intensity
o The likelihood P(I|f, theta) depends on the noise model
o The posterior distribution can be computed using Bayes' rule
o f and theta maximize P(f, theta|I); the optimization can be very slow

The HMMF model: A label field f, a measure field p, the observed images I, and an observation model.
o The p field is a vector field, p_r in S^M
o Vectors in the p field have the properties of a discrete pdf
o Smoothness constraint on p (Markovian): p is formulated as a GRF with a quadratic potential
o Labels are the elements of p with maximum probability: f_r = argmax_k p_k(r)

To compute the likelihood P(I|p, theta), first rewrite the joint conditional probability
    P(I_r, f_r | p, theta) = P(I_r | f_r, p, theta) P(f_r | p, theta)
then marginalize over the labels:
    P(I_r | p, theta) = Sum_{k=1}^{M} P(I_r | f_r = k, p, theta) P(f_r = k | p, theta)

HMMF vs. classical MRF likelihood:
MRF likelihood:
    P(I_r | f_r, theta) proportional to exp( -|I_r - Phi(r, theta_{f_r})|^2 )
HMMF likelihood:
    P(I_r | p, theta) = Sum_{k=1}^{M} exp( -|I_r - Phi(r, theta_k)|^2 ) p_k(r)

The posterior distribution is
    P(p, theta | I) = P(I | p, theta) P(p) P(theta) / P(I) = (1/Z) exp( -U(p, theta) )
where U(p, theta) is the posterior energy,
    U(p, theta) = -Sum_{r in L} log v_r(p_r) + Sum_C V_C(p) - log P(theta)
The posterior energy is a differentiable function.
o Optimize w.r.t. theta using a modified gradient descent: thetadot = -grad_theta U(p, theta)
o Optimize w.r.t. p using a modified gradient descent: pdot = -grad_p U(p, theta), then modify p so that the constraints are satisfied

Newtonian Descent: Consider the unknowns x to be the position of a particle of unit mass. Let the acceleration of the particle be given by
    xddot = -grad U(x) - 2 alpha xdot
o First term: the particle is forced downhill (the direction of decreasing U)
o Second term: restricts the speed of descent (friction, or damping)
From kinematics we know that
    x(t + h) = x(t) + xdot(t) h + (1/2) xddot(t) h^2
where x, xdot, xddot are the position, velocity, and acceleration of the particle. Substituting the given acceleration, we have
    x(t + h) = x(t) + xdot(t) h - [grad U(x(t)) + 2 alpha xdot(t)] h^2 / 2
Grouping together the xdot terms:
    x(t + h) = x(t) + (1 - alpha h) xdot(t) h - grad U(x(t)) h^2 / 2
Substituting the centered-difference approximation xdot(t) = [x(t + h) - x(t - h)] / 2h, and multiplying by 2, we have
    2 x(t + h) - 2 x(t) = (1 - alpha h)[x(t + h) - x(t - h)] - h^2 grad U(x(t))
Grouping the x(t + h) terms together, the final evolution equation is
    x(t + h) = [2/(alpha h + 1)] x(t) + [(alpha h - 1)/(alpha h + 1)] x(t - h) - [h^2/(alpha h + 1)] grad U(x(t))
This idea is applied to theta and p, using thetadot = -grad_theta U(p, theta) and pdot = -grad_p U(p, theta).

Discretization:
    theta(t + h) = [2/(alpha h + 1)] theta(t) + [(alpha h - 1)/(alpha h + 1)] theta(t - h) - [h^2/(alpha h + 1)] grad_theta U(p, theta)
    p(t + h) = [2/(alpha h + 1)] p(t) + [(alpha h - 1)/(alpha h + 1)] p(t - h) - [h^2/(alpha h + 1)] grad_p U(p, theta)
followed by the projection p_r <- Pi_S(p_r) for all r in L. The operator Pi_S finds the closest x in S^M to the vector p_r in R^M. Recall
    S^M = { u in R^M : Sum_{k=1}^{M} u_k = 1, u_k >= 0, k = 1, ..., M }

Full Intensity Model: Problem: due to magnetic field inhomogeneities, image intensities may not be constant within a tissue class, so we cannot use a piecewise-constant acquisition model. The authors use a sum of J basis
functions on a subgrid of the lattice L:
\phi(r, \theta_k) = \sum_{j=1}^{J} \theta_{k,j} \, \psi_j(r),
where the basis functions \psi_j, centered at each subgrid node, look like a smooth bump. [figure: one basis function]

Model Parameter Prior P(\theta)
The rigidity of this model can be controlled with a prior on the weights \theta:
P(\theta) \propto \exp\left(-\gamma \sum_{k=1}^{M} \sum_{\langle u, v \rangle} (\theta_{k,u} - \theta_{k,v})^2\right),
where \gamma is a constant controlling the smoothness of the weights, and therefore the smoothness of the acquisition model.

Anatomical constraints P(p)
Other types of constraints, in addition to smoothness, may be imposed on the label field. For example, a prior constructed using a GRF with clique potential
V_{r,s}(p_r, p_s) = \lambda \|p_r - p_s\|^2 + \mu \left( p_1(r) p_3(s) + p_3(r) p_1(s) \right)
will penalize cliques where tissue classes 1 and 3 are adjacent.

Results
[table: error rate and average run time] HMMF is less sensitive to initialization and faster than MRF.
Fig. 4: (a) sample slice of a simulated brain MR volume with 9 percent noise and 40 percent inhomogeneity; (b) anatomical model (ground truth); (c) HMMF segmentation; (d) reconstructed intensity.

Performance index
If we have a ground-truth segmentation:
\delta_k = \frac{2 V_{GP}(k)}{V_P(k) + V_G(k)}
- V_{GP}(k): number of voxels correctly assigned to class k
- V_P(k): total number (correct + incorrect) of voxels assigned to class k
- V_G(k): total number of voxels in class k in the ground truth
Note that \delta_k ranges from 0 to 1, with \delta_k = 1 corresponding to a perfect segmentation (this is the Dice overlap coefficient).

Performance
[plots: performance index \delta_k vs. noise level at 0 and 40 percent inhomogeneity, for gray and white matter, comparing HMMF to KVL]
KVL is a comparable model-based MRF method.

Conclusions
- HMMF can segment images corrupted by field inhomogeneities and additive noise
- HMMF priors can enforce smoothness and other spatial constraints
- HMMF is faster and more accurate than other MRF techniques
- HMMF is relatively insensitive to initialization

Monday: Image registration

Medical Image Analysis
CS 593/791
Computer Science and Electrical Engineering Dept.
West Virginia University
1st February 2006

Outline
1. Total Variation (Definition, Properties)
2. Implementation Details
3. Examples

Definition
The TV norm is the L1 functional norm of the gradient magnitude:
TV(u) = \int_\Omega \|\nabla u\| \, dx \, dy.
Minimizing the L2 functional norm of the gradient magnitude led to the isotropic heat equation.

Energy Minimization
Membrane spline energy:
E_{MEM}(u) = \int_\Omega \|\nabla u\|^2 \, dx \, dy = \int_\Omega (u_x^2 + u_y^2) \, dx \, dy.
Its Euler-Lagrange condition for minimization is
\mathrm{div}(\nabla u) = 0,
and the descent equation is
u_t = \mathrm{div}(\nabla u).

Total Variation
TV(u) = \int_\Omega \|\nabla u\| \, dx \, dy = \int_\Omega \sqrt{u_x^2 + u_y^2} \, dx \, dy.
Its Euler-Lagrange condition for minimization is
\mathrm{div}\left(\frac{\nabla u}{\|\nabla u\|}\right) = 0,
and the descent equation is
u_t = \mathrm{div}\left(\frac{\nabla u}{\|\nabla u\|}\right).

Membrane Spline vs. Total Variation
- The membrane spline energy functional \int_\Omega \|\nabla u\|^2 dx dy represents the elastic potential energy of a thin sheet of material.
- The total variation energy functional \int_\Omega \|\nabla u\| dx dy represents the oscillation of the function u.

A 1-D example: for a monotone ramp f rising from a to b over a transition of width h,
E_{MEM}(f) = \int f_x^2 \, dx = \left(\frac{b-a}{h}\right)^2 h = \frac{(b-a)^2}{h},
TV(f) = \int |f_x| \, dx = \frac{b-a}{h} \, h = b - a.
The membrane energy grows without bound as the ramp steepens (h \to 0), while the TV norm depends only on the total rise b - a.

Total Variation Does Not Penalize Discontinuities
[figure: four monotone profiles f_1, \ldots, f_4 rising by the same amount |h|, from a gentle ramp to a step]
TV(f_1) = TV(f_2) = TV(f_3) = TV(f_4) = |h|, while the membrane energy grows as the transition steepens.

Total Variation Does Penalize Oscillation
[figure: an oscillating profile f_5]
TV(f_5) > TV(f_1) = TV(f_2) = TV(f_3) = TV(f_4).

Descent Equation
u_t = \mathrm{div}\left(\frac{\nabla u}{\|\nabla u\|}\right).
This is an inhomogeneous diffusion equation with diffusivity
g(\|\nabla u\|) = \frac{1}{\|\nabla u\|}.
- Diffusion is slow for large image gradients
- Diffusion is fast for small image gradients
- Divergence is due only to changing gradient directions

A Drawback
Image contours may be rounded.
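The 1-D ramp comparison between the membrane energy and the TV norm is easy to check numerically. A minimal sketch, assuming a uniform grid with spacing dx; the function names are illustrative, not from the lecture:

```python
import numpy as np

def membrane_energy(u, dx):
    """Discrete membrane (L2) energy: sum of squared slopes times dx,
    i.e. sum((du/dx)^2) * dx = sum(diff(u)^2) / dx."""
    return float(np.sum(np.diff(u) ** 2) / dx)

def total_variation(u):
    """Discrete TV (L1) norm: sum of absolute jumps; for a monotone
    profile this telescopes to the total rise, independent of slope."""
    return float(np.sum(np.abs(np.diff(u))))

def ramp(n_flat, n_ramp):
    """A profile rising monotonically from 0 to 1 over n_ramp samples,
    padded with flat regions on both sides."""
    return np.concatenate([np.zeros(n_flat),
                           np.linspace(0.0, 1.0, n_ramp),
                           np.ones(n_flat)])
```

Both a gentle ramp and a one-sample step have TV exactly 1 (the total rise b - a), while the membrane energy grows as the transition narrows; this is why L2 smoothing blurs edges and TV minimization can preserve them.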
Image Contours
Another geometric interpretation of TV-norm minimization: consider the isocontours of the image, the curves of constant image intensity.
[figure: an image isocontour c] The gradient of the image is perpendicular to the curve; n is a unit normal to the isocontour c.

Curvature
The rate of change of the normal tells us the curvature \kappa of the curve:
\kappa = \mathrm{div}\left(\frac{\nabla u}{\|\nabla u\|}\right).
- \kappa = 0: c is a line (u is a plane)
- \kappa = \text{const} \neq 0: c is a circle
- \kappa can be negative or positive

Curve Evolution
Evolving a curve with speed proportional to curvature results in curve smoothing: small negative motion where the curve bends gently, large positive motion where it bends sharply. Eventually the curve will become a circle.

With a data constraint,
u_t = \mathrm{div}\left(\frac{\nabla u}{\|\nabla u\|}\right) - \lambda (u - u_0),
the isocontours will still be slightly smoothed.

Outline
1. Total Variation
2. Implementation Details (upwind finite differences, \epsilon-regularization)
3. Examples

Upwind Finite Differences
Since we are allowing steep discontinuities, numerical differentiation can be a source of instability. [figure: a 1-D profile with a steep jump]

Denote the forward finite difference of u at (i, j) in the x direction by
\Delta_x^+ u_{ij} = u_{i+1,j} - u_{ij},
and the backward difference by
\Delta_x^- u_{ij} = u_{ij} - u_{i-1,j}.
We will denote the upwind finite difference in the x direction as
\mathrm{minmod}(\Delta_x^+ u_{ij}, \Delta_x^- u_{ij}).

Minmod Operation
\mathrm{minmod}(a, b) = \frac{\mathrm{sgn}(a) + \mathrm{sgn}(b)}{2} \min(|a|, |b|),
where
\mathrm{sgn}(x) = -1 \text{ if } x < 0; \quad 0 \text{ if } x = 0; \quad 1 \text{ if } x > 0.
Properties:
- If a or b is 0, \mathrm{minmod}(a, b) = 0
- If a and b
have opposite signs, \mathrm{minmod}(a, b) = 0
- Otherwise, pick the one with the smallest magnitude

\epsilon-Regularization
What happens when \nabla u = 0? Singularities may develop in very smooth regions.
u_t = \mathrm{div}\left(\frac{\nabla u}{\|\nabla u\| + \epsilon}\right), \quad \epsilon > 0.
- One strategy: use a fixed constant \epsilon \ll 1
- Another strategy: relaxation. Start with a moderate \epsilon and decrease it over time.

Examples
[figures: TV denoising and inpainting results]

Friday: Introduction to image segmentation

Medical Image Analysis
CS 593/791
Computer Science and Electrical Engineering Dept.
West Virginia University
3rd February 2006

Outline
1. Review of Image Denoising and Restoration (Forms of the Heat Equation; Perona-Malik Implementation; Tensor Anisotropic Diffusion; TV Norm Minimization; Energy Minimization)
2. Introduction to Segmentation

Forms of the Heat Equation
Variants: linear / nonlinear; homogeneous / inhomogeneous; isotropic / anisotropic.
Concepts: gradient, flux, divergence; properties of restored images; conservation laws; variational principles.

Numerical Methods
- First-order forward and backward difference approximations to the first derivative
- Second-order centered difference approximation to the first derivative
- Second-order centered difference approximation to the second derivative
- Write in matrix form: forward (explicit) discretization, backward (implicit) discretization, mixed discretization, with appropriate boundary conditions
- General stability

Forms of the Heat Equation: Isotropic
Homogeneous Heat Equation
Equivalent to Gaussian convolution.

Inhomogeneous Heat Equation
u_t = \mathrm{div}(c(x, y, t) \nabla u),
with behavior governed by the properties of the conductivity c(x, y, t), e.g. c = g(\|\nabla u\|).

Perona-Malik Implementation
Formulation:
- New scale-space criteria
- Causality and the maximum principle
- When are edges enhanced?
- Rational speed function g(s)

Tensor Anisotropic Diffusion
Edge-enhancing diffusion: construct the tensor from an eigenvalue decomposition D = X \Lambda X^T with
- v_1 \parallel \nabla u_\sigma
- v_2 \perp v_1
- \|v_1\| = \|v_2\| = 1
- X = [v_1 \mid v_2]
- \lambda_1 = g(|\nabla u_\sigma|^2)
- \lambda_2 = 1

TV Norm Minimization
- Properties of the TV norm
- Compute the TV norm of a function
- Euler-Lagrange conditions and evolution equation

Energy Minimization: Variational Calculus
We wish to find u such that E(u) is minimized, where E(u) is of the form
E(u) = \int_\Omega f(x, y, u, u_x, u_y) \, dx.
The Euler-Lagrange condition is
f_u - \frac{d}{dx} f_{u_x} - \frac{d}{dy} f_{u_y} = 0.

Introduction to Segmentation
Outline: The Problem; The Goals; Complications; Solutions.

The Problem
Segmentation: to partition an image into regions which are homogeneous according to some characteristic.
Approaches:
1. Find region boundaries (surface extraction)
2. Cluster voxels into regions; label each region

Taxonomy
- Automatic vs. semi-automatic (user input)
- 2D vs. 3D
- Number of tissue classes
- Soft vs. hard classification

The Goals
- Visualization
- Quantification: volume, shape, thickness, motion, relative location of image features

The Goals
Left ventricular (LV) wall motion can be
used to diagnose heart dysfunction. [figure: LV wall motion]
Hippocampus shape has a correlation with schizophrenia. [figure: hippocampus shapes and eigenmodes]

Complications
- Image noise
- Partial volume effect: multiple tissue classes in a single voxel
- Shape representation: simplicity vs. flexibility

Solutions
- Thresholding
- K-means
- Region growing
- Active contours
- Markov random fields
- Level sets
- Graph cuts

Solutions
Information we may use in the segmentation model:
- Image intensity
- Image gradients (edges)
- Region statistics: mean, variance, entropy
Prior information:
- Geometric: smoothness, shape, atlas
- Topological: connectedness
- Statistical: Markovian property

Solutions
Techniques we will investigate:
- Snakes: a 2D parametric curve evolving under forces generated by image intensity and gradient
- Level set methods: a 3D implicit surface model evolving under the influence of image gradients and region characteristics
- Markov random field models: given a model of how image intensity depends on labels, and assuming the label field is Markovian, compute the most probable label field
All of these will involve some form of energy minimization.

Snakes Demo [figures]
Level Sets [figures]

Monday: Geometry of curves in the plane; "Snakes: Active Contour Models" (Kass, Witkin, Terzopoulos)

Medical Image Analysis
CS 593/791
Computer Science and Electrical Engineering Dept.
West Virginia University
29th March 2006

Outline
1. Topological mesh correction
2. Cortical surface mesh inflation
3. Spherical mesh parameterization

Application
B. Fischl, A. Liu, A. Dale, "Automated manifold surgery: Constructing geometrically accurate and topologically correct models of the human cerebral cortex," IEEE Transactions on Medical Imaging, 2001.
The extracted cortical surface may contain small errors. Fixing these errors is necessary for correct visualization and to simplify texture mapping.

Topology
These surfaces are different from each other in a fundamental way. [figure: surfaces of different genus]

Euler Number
A mesh is a piecewise planar approximation to a manifold. For a closed surface mesh, the Euler number is
\chi = V - E + F = 2 - 2g,
- V: number of vertices
- E: number of edges
- F: number of faces
- g: genus (number of "handles")
If f_1 and f_2 have the same genus, then there exists a mapping M: f_1 \to f_2 that is continuous and invertible with continuous inverse M^{-1}: a homeomorphism.

Euler Number
We know the cortical surface has genus 0 (topologically equivalent to a sphere). The Euler number \chi tells us how many topological errors there are in our mesh. The Euler number does not tell us where those errors are or how to fix them.

The approach of Fischl et al.:
1. Find a mapping M from the original surface to a sphere
2. Remove all regions where the inverse mapping is multivalued
3. Re-triangulate the holes

Spherical Inflation
Inflate the mesh into a sphere by updating vertex positions x using
x_k^{t+1} = x_k^t + F_S^t + \lambda_t F_R^t,
- F_S: smoothing force
- F_R: radial force; moves vertices toward the surface of the enclosing sphere

Smoothing Term
The smoothing force F_S has two parts:
- First term: move each vertex toward the centroid of its neighbors
- Second term: project outward along the surface normal to correct for shrinkage

Radial Force
F_R = R_k - x_k,
where R_k is the radial projection of x_k onto the sphere of radius R. After convergence, every vertex is projected onto the sphere.

Overlaps
The topological errors will appear as overlapping triangles on the sphere. These are regions where the mapping is
non-invertible. Discard all vertices belonging to overlapping triangles.

Types of Surface Defects
Handles and holes.

Surface Re-triangulation
1. Form a sorted list of the removed edges
2. Traverse the list, adding edges back to the mesh if they don't result in overlap
3. Triangulate the remaining small holes

Surface Re-triangulation
Observe: when smoothing the mesh, the edges belonging to the topological error stretch. Re-triangulate using an edge list sorted on this edge length.

[figure: small handle successfully removed]

B. Fischl, M. Sereno, A. Dale, "Cortical surface-based analysis," NeuroImage, 1999.

Visualization
Goal: accurately portray distances.

Distortion Minimization
Distortion energy:
E_d = \frac{1}{V} \sum_{i=1}^{V} \sum_{n \in N_i} (d_{in}^t - d_{in}^0)^2, \quad \text{where } d_{in} = \|x_i - x_n\|.
This represents the change in local shape between time 0 and time t.

Smoothness Constraint
Spring energy between neighboring vertices:
E_s = \frac{1}{2V} \sum_{i=1}^{V} \sum_{n \in N_i} \|x_i - x_n\|^2.
- Vertices in concave regions move outward
- Vertices in convex regions move inward

Surface Inflation
\min \; E_s + \lambda_d E_d
- \lambda_d controls the trade-off between smoothness and distortion
- Where curvature is high, smooth the mesh
- Where curvature is low, minimize distortion
This gives us the smoothed shape, but not the means to texture map the surface.

Mesh Parameterization
Texture mapping is a mapping from a mesh into the domain of some image, T: S \to R^2. A triangular region of the image can be mapped to each triangular face of the mesh.

Inflating the brain surface allows the areas within the deep folds to be seen. This allows the activation map to be more easily visualized.

Texture Mapping
We cannot map our closed genus-0 surface to the
plane without introducing cuts, so we map the surface to a sphere and compose this mapping with a known spherical texture mapping scheme.

Distortion Minimization
Previously we saw how to map a surface to a sphere while minimizing the squared change in edge length. This does not prevent stretching (area distortion) or angular distortion. We may prefer to minimize a combination of angular and stretch distortion. Minimizing only angular distortion can lead to stretch. [figures]

Flattening
We must cut the surface (green lines) into a disk, then map the disk to the image plane. The problem: the cut path will influence the distortion of the mapping.

Friday: Diffusion tensor imaging and visualization

Medical Image Analysis
CS 593/791
Computer Science and Electrical Engineering Dept.
West Virginia University
1st March 2006

Outline
1. Markov Random Fields for Image Segmentation
2. Hidden Markov Measure Field Model for Image Segmentation

The Probabilistic Model
- Label field f, measure field p, observed images I, observation model
- S^M = \{ u \in R^M : \sum_{k=1}^{M} u_k = 1, \; u_k \geq 0 \}
- p_r \in S^M is a discrete probability distribution over labels
- p_k(r) = P(f_r = k | p, \theta)
- f_r = \arg\max_k p_k(r)
- P(I_r | f_r, p, \theta) = P(I_r | f_r, \theta)

Prior Distribution
The smoothness constraint on the p field is enforced, treating the discrete pdfs as vectors, with the quadratic potential
V_{r,s}(p_r, p_s) = \lambda \|p_r - p_s\|^2 = \lambda \sum_{k=1}^{M} (p_k(r) - p_k(s))^2,
where \lambda is a parameter controlling the smoothness of the field and r, s are in the same clique.
Now the p field is Markovian; the label field is not.

Conditional Probability
P(A, B, C) = P(C) \, P(B | C) \, P(A | B, C),
so the joint conditional probability is
P(A, B | C) = P(A | B, C) \, P(B | C).

Posterior Distribution
P(p, \theta | I) = \frac{P(I | p, \theta) \, P(p) \, P(\theta)}{P(I)}.
We need P(I | p, \theta).
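The factorize-then-marginalize step used to obtain P(I | p, θ) can be sanity-checked on a toy voxel. A sketch assuming the piecewise-constant acquisition model with additive Gaussian noise used in this lecture; the class means and function names below are made up for illustration:

```python
import numpy as np

def v(I_r, means, sigma):
    """Observation factors v_k(r) = exp(-(I_r - phi_k)^2 / (2 sigma^2)),
    one per tissue class k, for a constant intensity model phi_k."""
    means = np.asarray(means, dtype=float)
    return np.exp(-(I_r - means) ** 2 / (2.0 * sigma ** 2))

def marginal_likelihood(I_r, p_r, means, sigma):
    """P(I_r | p, theta) = sum_k P(I_r | f_r = k, theta) P(f_r = k | p, theta)
    = v(r) . p_r: the hidden label f_r is summed out against p_r."""
    return float(np.dot(v(I_r, means, sigma), np.asarray(p_r, dtype=float)))
```

With two well-separated classes (means 0 and 10, sigma 1), a voxel of intensity 0 and p_r = (1, 0) has likelihood close to 1, and the dot product agrees with the explicit sum over labels.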
We have:
(a) P(f_r = k | p, \theta) = p_k(r), from the p field
(b) P(I_r | f_r, p, \theta) = P(I_r | f_r, \theta), from the observation model
We can write the joint conditional probability P(I_r, f_r | p, \theta) as
P(I_r, f_r | p, \theta) = P(I_r | f_r, p, \theta) \, P(f_r | p, \theta).
Next, marginalize over f.

Posterior Distribution
Recall marginal probability. We obtain P(I | p, \theta) by marginalizing over all possible labels:
P(I_r | p, \theta) = \sum_{k=1}^{M} P(I_r | f_r = k, p, \theta) \, P(f_r = k | p, \theta).

Posterior Distribution
P(I_r | p, \theta) = \sum_{k=1}^{M} v_k(r) \, p_k(r) = v(r) \cdot p(r),
where v_k(r) = \exp(-(I_r - \phi(r, \theta_k))^2 / (2\sigma^2)) comes from the acquisition model for class k with additive Gaussian noise.

Posterior Energy
The posterior distribution can be written as
P(p, \theta | I) = \frac{1}{Z} \exp(-U(p, \theta)),
where U(p, \theta) is the posterior energy
U(p, \theta) = -\sum_{r \in L} \log(v(r) \cdot p_r) + \sum_C V_C(p) - \log P(\theta).
The MAP estimators \hat{p} and \hat{\theta} are those which maximize the posterior distribution, or minimize the posterior energy.
Note: for the HMMF model the posterior energy is a differentiable function, which can be minimized using a gradient descent scheme.

Next: the HMMF numerical implementation and results.

Lighting Models: Beyond Phong
Lighting models: Torrance-Sparrow, Cook-Torrance, Blinn, Oren-Nayar, Schlick, Minnaert, Ward anisotropic.

Do any of these look Lambertian? What about the moon?

Physically Based Lighting Models
- The surface is assumed to be a collection of planar "microfacets": small-scale faces. At the scale of a photon, even a smooth surface appears rough.
- Each microfacet is a perfect specular reflector.
The appearance of a lit surface depends on:
- The distribution of microfacet orientations
- The Fresnel effect: the surface is more reflective at grazing angles
- Geometrical attenuation: some microfacets occlude other microfacets

Distribution of Microfacets
In reality, not necessarily uniform.

Geometrical Attenuation
- No attenuation
- Masking: reflected light is intercepted
- Shadowing: incoming light is intercepted

Fresnel Effect
Imperfect reflection: the degree of absorption depends on the angle of incidence (less absorption at grazing incidence, more near normal incidence). Has a major impact on water rendering. The degree of refraction also depends on the angle of incidence.

Lambertian diffuse
model
The Lambertian assumption in the diffuse term of Phong lighting can be seen as a special case of this model:
- Distribution: uniform over the hemisphere
- Fresnel effect: neglected
- Geometrical attenuation: neglected

BRDF
The bidirectional reflectance distribution function is a surface property:
BRDF = \rho(\lambda, v_i, v_o).
Given incoming light direction v_i and outgoing light direction v_o, what fraction of incoming light is reflected? It can depend on wavelength \lambda.

BRDF
New illumination equation for n point light sources:
I(\lambda, v) = \sum_{i=1}^{n} \frac{L_i}{d_i^2} \rho(\lambda, l_i, v) (n \cdot l_i),
- L_i: intensity of light source i
- l_i: unit vector pointing toward light source i
- d_i: distance from surface to light source i
- n: surface normal
- v: view direction

BRDF
We will eliminate the wavelength dependence and handle diffuse and specular terms separately:
I(v) = k_a L_a + \sum_{i=1}^{n} L_i (n \cdot l_i) \left( k_d \rho_d(l_i, v) + k_s \rho_s(l_i, v) \right),
- k_d, k_s: diffuse and specular material color
- \rho_d, \rho_s: diffuse and specular BRDF

Analytic Models for BRDFs
Phong BRDF, Cook-Torrance BRDF, He-Torrance BRDF, Oren-Nayar BRDF.

Measuring BRDFs
Gonioreflectometer: measure reflected light for various light and reflection directions over the hemisphere.

Measured BRDFs
[figure: latex house paint measurements for various wavelengths]

Torrance-Sparrow (1967)
A physics paper, not a computer graphics paper; many CG researchers have based their lighting models on this research.
- Gaussian distribution of microfacets: D \propto \exp(-(c\alpha)^2)
- Analysis of the geometrical attenuation in symmetrical V-shaped grooves: masking, shadowing
- Wavelength-dependent Fresnel effect

Cook-Torrance (1982)
A model of specular reflection:
\rho_s = \frac{F D G}{\pi (n \cdot l)(n \cdot v)}.
- F: Fresnel term
- D: roughness term (distribution of slopes; Beckmann distribution function)
- G: geometric term. Assumption: microfacets form symmetrical V-shaped grooves
- n: normal, l: light, v: view (all unit vectors)

Cook-Torrance (1982): Fresnel term
F = \frac{1}{2} \frac{(g - c)^2}{(g + c)^2} \left( 1 + \frac{(c(g + c) - 1)^2}{(c(g - c) + 1)^2} \right),
where c = v \cdot h (h is the unit half vector, or reflection vector), g^2 = \eta^2 + c^2 - 1, and \eta is the index of refraction. Free parameter: pick the index of refraction.

Cook-Torrance (1982): roughness term
D is a distribution of microfacet slopes, given by
the Beckmann distribution function:
D = \frac{1}{4 m^2 \cos^4\alpha} \exp\left( -\left(\frac{\tan\alpha}{m}\right)^2 \right),
where \alpha = \arccos(n \cdot h) and m is the average slope of the microfacets. Using
\tan^2\alpha = \frac{\sin^2\alpha}{\cos^2\alpha} = \frac{1 - \cos^2\alpha}{\cos^2\alpha} = \frac{1 - (n \cdot h)^2}{(n \cdot h)^2}.
Free parameter: pick m.

Variations
- Remove the 4 from the denominator of D for a more metallic look
- Use a mixture of Beckmann distributions
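The two free-parameter terms above can be written down directly from the slide formulas. A sketch using the unpolarized Fresnel term with c = v·h and g² = η² + c² − 1, and the Beckmann D with the 4 in the denominator as given here; the function names are illustrative:

```python
import numpy as np

def fresnel(c, eta):
    """Cook-Torrance Fresnel term; c = v . h, eta = index of refraction.
    At normal incidence (c = 1) this reduces to ((eta-1)/(eta+1))^2."""
    g = np.sqrt(eta * eta + c * c - 1.0)
    return 0.5 * (g - c) ** 2 / (g + c) ** 2 * \
        (1.0 + (c * (g + c) - 1.0) ** 2 / (c * (g - c) + 1.0) ** 2)

def beckmann(cos_alpha, m):
    """Beckmann roughness term; cos_alpha = n . h, m = average slope.
    tan^2(alpha) is computed as (1 - cos^2) / cos^2 per the slide."""
    tan2 = (1.0 - cos_alpha ** 2) / cos_alpha ** 2
    return np.exp(-tan2 / (m * m)) / (4.0 * m * m * cos_alpha ** 4)
```

fresnel(1.0, 1.5) gives the familiar 4 percent reflectance of glass at normal incidence and grows toward 1 as c approaches 0 (grazing incidence), while beckmann peaks at alpha = 0 and falls off faster for smoother surfaces (smaller m).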