Computer Graphics

by: Jacey Olson
About this Document

These class notes (112 pages) were uploaded by Jacey Olson on Thursday, October 22, 2015. They belong to CSE 167 at the University of California - San Diego, taught by Staff in Fall, and have received 16 views since upload. For similar materials see /class/226795/cse-167-university-of-california-san-diego in Computer Science and Engineering at the University of California - San Diego.

Global Illumination
CSE 167: Computer Graphics
Instructor: Steve Rotenberg
UCSD, Fall 2005

Classic Ray Tracing

The classic ray tracing algorithm shoots one primary ray per pixel. If the ray hits a colored surface, then a shadow ray is shot towards each light source to test for shadows and determine whether the light can contribute to the illumination of the surface. If the ray hits a shiny, reflective surface, a secondary ray is spawned in the reflection direction and recursively traced through the scene. If a ray hits a transparent surface, then both a reflection and a transmission (refraction) ray are spawned and recursively traced through the scene. To prevent infinite loops, the recursion depth is usually capped to some reasonable number of bounces (less than 10 usually works). In this way we may end up with an average of fewer than 20 or so rays per pixel in scenes with only a few lights and a few reflective or refractive surfaces. Scenes with many lights and many interreflecting surfaces will require more rays.

Images rendered with the classic ray tracing algorithm can contain shadows, exact interreflections and refractions, and multiple lights, but they tend to have a rather sharp appearance due to the limitation to perfectly polished surfaces and point light sources.
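For reference, the recursion just described can be sketched as follows. This is a minimal, illustrative sketch only: the Scene queries (Intersect, Occluded, DirectLighting, ReflectRay, RefractRay) and the simple structs are hypothetical placeholders, not code from the course, and are declared here only so the logic reads clearly.

    // Sketch of classic recursive ray tracing (hypothetical types and interfaces).
    #include <vector>

    struct Color  { float r = 0, g = 0, b = 0; };
    struct Ray    { float origin[3], dir[3]; };
    struct Light  { float position[3]; Color color; };
    struct Hit    { bool valid = false; bool reflective = false, transparent = false;
                    float reflectivity = 0, transparency = 0; };

    Color operator+(Color a, Color b) { return {a.r + b.r, a.g + b.g, a.b + b.b}; }
    Color operator*(float s, Color c) { return {s * c.r, s * c.g, s * c.b}; }

    // Provided by a real renderer; declared here only as placeholders.
    Hit   Intersect(const Ray& ray);
    bool  Occluded(const Hit& hit, const Light& light);      // shadow ray test
    Color DirectLighting(const Hit& hit, const Light& light);
    Ray   ReflectRay(const Ray& ray, const Hit& hit);
    Ray   RefractRay(const Ray& ray, const Hit& hit);
    extern std::vector<Light> gLights;

    const int kMaxDepth = 10;   // cap the recursion depth to prevent infinite loops

    Color Trace(const Ray& ray, int depth) {
        if (depth > kMaxDepth) return Color{};
        Hit hit = Intersect(ray);
        if (!hit.valid) return Color{};                       // missed everything
        Color c;
        for (const Light& light : gLights)                    // one shadow ray per light
            if (!Occluded(hit, light))
                c = c + DirectLighting(hit, light);
        if (hit.reflective)                                   // secondary reflection ray
            c = c + hit.reflectivity * Trace(ReflectRay(ray, hit), depth + 1);
        if (hit.transparent)                                  // refraction ray
            c = c + hit.transparency * Trace(RefractRay(ray, hit), depth + 1);
        return c;
    }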
Distribution Ray Tracing

Distribution ray tracing extends the classic ray tracing algorithm by shooting several rays in situations where the classic algorithm shoots only one or two. For example, if we shoot several primary rays for a single pixel, we can achieve image antialiasing. We can model area light sources and achieve soft-edged shadows by shooting several shadow rays distributed across the light surface. We can model blurry reflections and refractions by spawning several rays distributed around the reflection/refraction direction. We can also model camera focus blur by distributing our rays across a virtual camera aperture and, as if that weren't enough, we can render motion blur by distributing our primary rays in time.

Distribution ray tracing is a powerful extension to classic ray tracing that clearly showed that the central concept of ray tracing is a useful paradigm for high quality rendering. However, it is of course much more expensive, as the average number of rays per pixel can jump to hundreds or even thousands.

Limitations of Ray Tracing

The classic and distribution ray tracing algorithms are clearly important steps in the direction of photoreal rendering. However, they are not truly physically correct, as they still leave out some components of the illumination. In particular, they don't fully sample the hemisphere of possible directions for incoming light reflected off of other surfaces. This leaves out important lighting features such as color bleeding, also known as diffuse interreflection: for example, if we have a white light source and a diffuse green wall next to a diffuse white wall, the white wall will appear greenish near the green wall due to green light diffusely reflected off of the green wall. It also leaves out complex specular effects like focused beams of light, known as caustics (like the wavy lines of light seen at the bottom of a swimming pool).

Hemispherical Sampling

We can modify the distribution ray tracing algorithm to shoot a bunch of rays scattered about the hemisphere to capture this additional incoming light. With some careful tuning, we can make this operate in a physically plausible way. However, we would need to shoot a lot of rays to adequately sample the entire hemisphere, and each of those rays would have to spawn lots of other rays when they hit surfaces. Ten rays is definitely not enough to sample a hemisphere, but let's assume for now that we will use 10 samples for each hemisphere. If we have 2 lights and we supersample the pixel with 16 samples and allow 5 bounces, where each bounce shoots 10 rays, we end up with potentially 16 · (2+1) · 10^5 = 4,800,000 rays traced to color a single pixel. This makes the approach pretty impractical. The good news is that there are better options.

Path Tracing

In 1985 James Kajiya proposed the Monte Carlo path tracing algorithm, also known as MCPT or simply path tracing. The path tracing algorithm fixes many of the exponential ray problems we get with distribution ray tracing. It assumes that, as long as we are taking enough samples of the pixel in total, we shouldn't have to spawn many rays at each bounce. Instead, we can even get away with spawning a single ray for each bounce, where the ray is randomly scattered somewhere across the hemisphere. For example, to render a single pixel we may start by shooting 16 primary rays to achieve our pixel antialiasing. For each of those samples we might only spawn off, say, 10 new rays scattered in random directions. From then on, any additional bounce spawns off only 1 new ray, thus creating a path. In this example we would be tracing a total of 16 · 10 = 160 paths per pixel. We will still end up shooting more than 160 rays, however, as each path may have several bounces and will also spawn off shadow rays at each bounce. Therefore, if we allow 5 bounces and 2 lights as in the last example, we end up with roughly (2+1) · (5+1) = 18 rays per path (one path ray plus two shadow rays at each hit point), for a total of a few thousand rays per pixel, which is a lot but far more reasonable than the previous example.

BRDFs

In a previous lecture we briefly introduced the concept of a BRDF, or bidirectional reflectance distribution function. The BRDF is a function that describes how light is scattered (reflected) off of a surface. The BRDF can model the macroscopic behavior of microscopic surface features such as roughness, different pigments, fine scale structure, and more. The BRDF can provide everything necessary to determine how much light from an incident beam coming from any direction will scatter off in any other direction. Different BRDFs have been designed to model the complex light scattering patterns of a wide range of materials, including brushed metals, human skin, car paint, glass, CDs, and more. BRDFs can also be measured from real world materials using specialized equipment.

BRDF Formulation

The wavelength dependent BRDF at a point is a 5D function:

BRDF = f(\theta_i, \phi_i, \theta_r, \phi_r, \lambda)

Often, instead of thinking of it as a 5D scalar function of \lambda, we can think of it as a 4D function that returns a color:

BRDF = f(\theta_i, \phi_i, \theta_r, \phi_r)

Another option is to express it in more of a vector notation:

BRDF = f(\omega_i, \omega_r)

Sometimes it is also expressed as a function of position:

BRDF = f(x, \omega_i, \omega_r)

Physically Plausible BRDFs

For a BRDF to be physically plausible, it must not violate two key laws of physics.

Helmholtz reciprocity:

f_r(\omega_i \rightarrow \omega_r) = f_r(\omega_r \rightarrow \omega_i)

Helmholtz reciprocity refers to the reversibility of light paths: we should be able to reverse the incident and reflected ray directions and get the same result. It is this important property of light that makes algorithms like ray tracing possible, as they rely on tracing light paths backwards.

Conservation of energy:

\int_{\Omega} f_r(\omega_i, \omega_r) \, (\omega_r \cdot n) \, d\omega_r \le 1 \quad \text{for all } \omega_i

For a BRDF to conserve energy, it must not reflect more light than it receives. A single beam of incident light may be scattered across the entire hemisphere above the surface; the total amount of this reflected light is the (double) integral of the BRDF over the hemisphere of possible reflection directions.
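To make the energy conservation condition concrete, the short program below numerically integrates a Lambertian BRDF, f_r = albedo / pi, over the hemisphere using uniform hemisphere sampling; the estimate comes out to roughly the albedo, which must be at most 1 for a plausible material. This is illustrative code added here, not part of the original notes; the albedo value and sample count are arbitrary choices.

    #include <cmath>
    #include <cstdio>
    #include <random>

    // Monte Carlo check of energy conservation for a Lambertian BRDF f_r = albedo / pi:
    //   integral over the hemisphere of f_r * (w . n) dw  =  albedo  <=  1
    // Uniform hemisphere sampling about n = +z has pdf = 1 / (2*pi), and cos(theta) = z
    // is uniformly distributed in [0, 1].
    int main() {
        const double PI = 3.14159265358979;
        const double albedo = 0.95;              // reflectivity of a "white" material
        const double fr = albedo / PI;           // constant Lambertian BRDF value
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> u01(0.0, 1.0);

        const int N = 1000000;
        double sum = 0.0;
        for (int i = 0; i < N; ++i) {
            double cosTheta = u01(rng);                       // uniform hemisphere sample
            sum += fr * cosTheta / (1.0 / (2.0 * PI));        // f_r * (w.n) / pdf
        }
        std::printf("estimated hemisphere integral = %f\n", sum / N);   // ~0.95
        return 0;
    }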
BRDF Evaluation

The differential outgoing radiance along a vector \omega_r due to incoming radiance (irradiance) from direction \omega_i is:

dL_r(x, \omega_r) = f_r(x, \omega_i, \omega_r) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i

To compute the total outgoing radiance along vector \omega_r, we must integrate over the hemisphere of incoming radiance:

L_r(x, \omega_r) = \int_{\Omega} f_r(x, \omega_i, \omega_r) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i

The Rendering Equation

L_r(x, \omega_r) = \int_{\Omega} f_r(x, \omega_i, \omega_r) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i

This equation is known as the rendering equation, and it is the key mathematical equation behind modern photoreal rendering. It describes the light L_r reflected from some location x in some direction \omega_r. For example, if our primary ray hits some surface, we want to know the light reflected off of that point back in the direction towards the camera. The reflected light is described as an integral over a hemispherical domain \Omega, which is really just shorthand for writing it as a double integral over two angular variables. We integrate over the hemisphere of possible incident light directions \omega_i. Given a particular incident light direction \omega_i and our desired reflection direction \omega_r, we evaluate the BRDF f_r at location x. The BRDF tells us how much the light coming from direction \omega_i will be scaled, but we still need to know how much light is actually coming from that direction. Unfortunately, this involves computing L_i, which involves solving an integral equation exactly like the one we're already trying to solve. The rendering equation is therefore an infinitely recursive integral equation, which makes it rather difficult to compute.

Monte Carlo Sampling

Path tracing is based on the mathematical concept of Monte Carlo sampling. Monte Carlo sampling refers to algorithms that make use of randomness to compute a mathematical result (the name comes from Monte Carlo, famous for its casinos). Technically, we use Monte Carlo sampling to approximate a complex integral that we can't solve analytically. For example, consider computing the area of a circle. We have a simple analytical formula for that, but we can apply Monte Carlo sampling to it anyway. We consider a square area around our circle and choose a bunch of random points distributed in the square. If we count the number of points that end up inside the circle, we can approximate the area of the circle as:

area of circle ≈ (area of square) × (number of points in circle) / (total number of points)

Monte Carlo sampling is a brute force computation method for approximating complex integrals that can't be solved in any other reasonable way. It is often considered a last resort for solving complex problems, as it can at least try to approximate any integral equation, but it may require lots of samples.
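Here is a small, self-contained version of the circle example from the notes (illustrative code, not part of the original): random points are thrown into a 2 x 2 square and the hit ratio scales the square's area into an estimate of pi.

    #include <cstdio>
    #include <random>

    // Monte Carlo estimate of the area of a unit circle inscribed in a 2x2 square:
    //   area(circle) ~= area(square) * (points inside circle) / (total points)
    int main() {
        std::mt19937 rng(1234);
        std::uniform_real_distribution<double> coord(-1.0, 1.0);

        const int total = 1000000;
        int inside = 0;
        for (int i = 0; i < total; ++i) {
            double x = coord(rng), y = coord(rng);
            if (x * x + y * y <= 1.0)            // point falls inside the circle
                ++inside;
        }
        double area = 4.0 * inside / total;      // square area is 4; exact answer is pi
        std::printf("estimated circle area = %f\n", area);
        return 0;
    }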
Limitations of Path Tracing

Path tracing can be used to render any lighting effect, but it might require many paths to adequately resolve complex situations. Some situations are simply too complex for the algorithm and would require too many paths to make it practical; it is particularly bad at capturing specularly bounced light and focused beams of light. Path tracing will converge on the correct solution given enough paths, so it can be used to generate reference images of how a scene should look. This can be useful for evaluating and comparing other techniques.

Bidirectional Path Tracing

Bidirectional path tracing (BPT) is an extension to the basic path tracing algorithm that attempts to handle indirect light paths better. For each pixel, several bidirectional paths are examined. We start by tracing a ray path from the eye, as in path tracing. We then shoot a photon path out from one of the light sources. Let's say each path has 5 rays in it and 5 intersection points. We then connect each of the intersection points in the eye path with each of the intersection points in the light path with a new ray. If the ray is unblocked, we add the contribution of the new path that connects from the light source to the eye. The BPT algorithm improves on path tracing's ability to handle indirect light, such as a room lit by light fixtures that shine on the ceiling.

Metropolis Sampling

Metropolis sampling is another variation of a path tracing type algorithm. For some pixel, we start by tracing a path from the eye to some light source. We then make a series of random modifications to the path and test the amount of light carried by each modified path. Based on a statistical algorithm that uses randomness, a decision is made whether to keep the new path or discard it. Whichever way is chosen, the resulting path is then modified again and the algorithm is repeated. Metropolis sampling is quite a bizarre algorithm and makes use of some complex properties of statistics. It tends to be good at rendering highly bounced light paths, such as a room lit by skylight coming through a window, or the caustic light patterns one sees at the bottom of swimming pools. The Metropolis algorithm is difficult to implement and requires some very heuristic components. It has demonstrated an ability to render features that very few other techniques can handle, although it has not gained wide acceptance and tends to be limited to academic research.

Photon Mapping

In 1995 Henrik Jensen proposed the photon mapping algorithm. Photon mapping starts by shooting many millions of photons from the light sources in the scene, scattered in random directions. Each photon may bounce off of several surfaces, in a direction that is random but biased by the BRDF of the surface. For each hit, a statistical random decision is made whether the photon will bounce or stick in the surface, eventually leading to all photons sticking somewhere. The photons are collected into a 3D data structure (like a KD-tree) and stored as a bunch of 3D points with some additional information (color, direction). Next, the scene is rendered with a ray/path tracing type approach. Rays are shot from the camera and may spawn new rays off of sharp reflecting or refracting surfaces. The more diffuse components of the lighting can come from analyzing the photon map. To compute the local lighting due to photons, we collect all of the photons within some radius of our sample point, and the photons we collect are used to contribute to the lighting of that point.

There are many variations on the photon mapping algorithm, and there are different ways to use the photon map. The technique can also be combined with other approaches like path tracing. Photon mapping tends to be particularly good at rendering caustics, which had previously been very difficult. As the photon mapping algorithm stores the photons as simple points in space, the photon map itself is independent of the scene geometry, making the algorithm very flexible and able to work with any type of geometry. Some of the early pictures of photon mapping showed focused caustics through a glass of sherry shining onto a surface of procedurally generated sand, a task that would have been impossible with any previous technique.
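As an illustration of the gather step just described, the sketch below estimates the diffuse lighting at a point from the photons found within a search radius. The Photon struct, the FindPhotonsInRadius query, and the simple density estimate (dividing the collected power by the disc area pi * r^2) are assumptions made for this example; they follow the common photon-map radiance estimate but are not code from the course.

    #include <vector>

    struct Vec3  { float x, y, z; };
    struct Color { float r, g, b; };

    // A stored photon: position, incoming direction, and carried power (flux).
    struct Photon { Vec3 position; Vec3 incomingDir; Color power; };

    // Placeholder for the kd-tree range query described in the notes.
    std::vector<Photon> FindPhotonsInRadius(const Vec3& point, float radius);

    // Rough radiance estimate at a point on a diffuse surface with reflectivity
    // 'diffuse': sum the power of nearby photons and divide by the gather disc area.
    Color EstimateDiffuseLighting(const Vec3& point, const Color& diffuse, float radius) {
        std::vector<Photon> nearby = FindPhotonsInRadius(point, radius);
        Color sum{0, 0, 0};
        for (const Photon& p : nearby) {
            sum.r += p.power.r;
            sum.g += p.power.g;
            sum.b += p.power.b;
        }
        const float area = 3.14159265f * radius * radius;   // area covered by the gather
        return Color{diffuse.r * sum.r / area,
                     diffuse.g * sum.g / area,
                     diffuse.b * sum.b / area};
    }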
Volumetric Photon Mapping

A later extension to the core photon mapping algorithm allowed interactions between photons and volumetric objects like smoke. A photon traveling through a participating medium like smoke will travel in a straight line until it happens to hit a smoke particle, and then it will scatter in some random direction based on the scattering function of the medium. This can be implemented by randomly determining the point at which the photon hits a particle, or determining that it doesn't hit anything and passes through. As with surfaces, photons can also stick in the volume. Volumetric photon mapping allows accurate rendering of beams of light, even beams of reflected or focused light, scattering through smoke and fog, with the scattered light properly illuminating other objects nearby.

Translucency

The photon mapping algorithm (and other ray based rendering algorithms) has also been adapted to trace the paths of photons scattered through translucent surfaces.

Antialiasing
CSE 167: Computer Graphics
Instructor: Steve Rotenberg
UCSD, Fall 2005

Texture Minification

Consider a texture mapped triangle, and assume that we point sample our texture, so that we use the texel nearest to the center of the pixel to get our color. If we are far enough away from the triangle that individual texels end up being smaller than a single pixel in the framebuffer, we run into a potential problem: if the object or camera moves a tiny amount, we may see drastic changes in the pixel color as different texels rapidly pass in front of the pixel center. This causes a flickering problem known as shimmering or buzzing. Texture buzzing is an example of aliasing.

Small Triangles

A similar problem happens with very small triangles. Scan conversion is usually designed to point sample triangles, coloring a pixel according to the triangle that covers the center of the pixel. This has the potential to miss small triangles entirely. If we have small moving triangles, they may cause pixels to flicker on and off as they cross the pixel centers. A related problem can be seen when very thin triangles cause pixel gaps. These are more examples of aliasing problems.

Stairstepping

What about the jagged right-angle patterns we see at the edges of triangles? This is known as the stairstepping problem, also affectionately known as "the jaggies". These can be visually distracting, especially for high contrast edges that are nearly horizontal or vertical. Stairstepping is another form of aliasing.

Moiré Patterns

When we try to render high-detail patterns with a lot of regularity, like a grid, we occasionally see strange concentric curve patterns forming. These are known as Moiré patterns and are another form of aliasing. You can actually see these in real life if you hold two window screens in front of each other.

The Propeller Problem

Consider an animation of a spinning propeller that is rendered at 30 frames per second. If the propeller is spinning at 1 rotation per second, then each image shows the propeller rotated an additional 12 degrees, resulting in the appearance of correct motion. If the propeller is now spinning at 30 rotations per second, each image shows the propeller rotated an additional 360 degrees from the previous image, resulting in the appearance of the propeller sitting still. If it is spinning at 29 rotations per second, it will actually look like it is slowly turning backwards. These are known as strobing problems and are another form of aliasing.
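The propeller numbers can be checked directly: the apparent per-frame rotation is the true per-frame rotation wrapped into the range (-180, 180] degrees. A small illustrative calculation (added here, not from the original notes):

    #include <cmath>
    #include <cstdio>

    // Apparent per-frame rotation of a propeller sampled at a fixed frame rate.
    // The viewer effectively sees the true per-frame rotation wrapped into (-180, 180].
    double ApparentStepDegrees(double rotationsPerSecond, double framesPerSecond) {
        double step = rotationsPerSecond / framesPerSecond * 360.0;  // true step per frame
        double wrapped = std::fmod(step, 360.0);
        if (wrapped > 180.0) wrapped -= 360.0;                       // nearest alias
        return wrapped;
    }

    int main() {
        std::printf("%6.1f deg/frame\n", ApparentStepDegrees(1.0, 30.0));   //  12.0: correct motion
        std::printf("%6.1f deg/frame\n", ApparentStepDegrees(30.0, 30.0));  //   0.0: appears frozen
        std::printf("%6.1f deg/frame\n", ApparentStepDegrees(29.0, 30.0));  // -12.0: appears to turn backwards
        return 0;
    }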
Aliasing

These examples cover a wide range of problems, but they all result from essentially the same thing. In each situation, we start with a continuous signal. We then sample the signal at discrete points. Those samples are then used to reconstruct a new signal that is intended to represent the original signal. However, the reconstructed signals are a false representation of the originals. In the English language, when a person uses a false name, that is known as an alias, and so the term was adopted in signal analysis to apply to falsely represented signals. Aliasing in computer graphics usually results in visually distracting artifacts, and a lot of effort goes into trying to stop it. This is known as antialiasing.

Signals

The term "signal" is pretty abstract and has been borrowed from the science of signal analysis. Signal analysis is very important to several areas of engineering, especially electrical, audio, and communications. It includes a variety of mathematical methods for examining signals, such as Fourier analysis, filters, sampling theory, digital signal processing (DSP), and more. In electronics, a one dimensional signal can refer to a voltage changing over time; in audio, it can refer to the sound pressure changing over time. In computer graphics, a one dimensional signal could refer to a horizontal or vertical line in our image. Notice that in this case the signal doesn't have to change over time; instead, it varies over space (the x or y coordinate). Often signals are treated as functions of one variable and examples are given in the 1D case, but the concepts of signal analysis extend to multidimensional signals as well, so we can think of our entire 2D image as a signal.

Sampling

If we think of our image as a bunch of perfect triangles in continuous floating point device space, then we are thinking of our image as a continuous signal. This continuous signal can have essentially infinite resolution if necessary, as the edges of triangles are perfect straight lines. To render this image onto a regular grid of pixels, we must employ some sort of discrete sampling technique. In essence, we take our original continuous image and sample it onto a finite resolution grid of pixels. If our signal represents the red intensity of our virtual scene along some horizontal line, then the sampled version consists of a row of discrete 8 bit red values. This is similar to what happens when a continuous analog sound signal is digitally sampled onto a CD.

Reconstruction

Once we have our sampled signal, we then reconstruct it. In the case of computer graphics, this reconstruction takes place as a bunch of colored pixels on a monitor. In the case of CD audio, the reconstruction happens in a DAC (digital to analog converter) and then finally in the physical movements of the speaker itself.

Reconstruction Filters

Normally there is some sort of additional filtration that happens at the reconstruction phase. In other words, the actual pixels on the monitor are not perfect squares of uniform color; instead, they have some sort of color distribution. Additional filtration happens in the human eye, so that the grid of pixels appears to be a continuous image. In audio, the perfect digital signal is filtered first by the analog electronic circuitry and then by the physical limitations of the speaker movement.

(Figure slides: "Low Frequency Signals" shows an original signal point sampled at a relatively high frequency and reconstructed faithfully; "High Frequency Signals" shows an original signal point sampled at a relatively low frequency, giving a poor reconstruction; "Regular Signals" shows a regular repeating signal point sampled at a low frequency, reconstructed as a repeating signal at an incorrect frequency.)

Nyquist Frequency

Theoretically, in order to adequately reconstruct a signal of frequency x, the original signal must be sampled with a frequency greater than 2x. This is known as the Nyquist frequency or Nyquist limit. However, this assumes a somewhat idealized sampling and reconstruction; in practice, it's probably a better idea to sample signals at a minimum of 4x.
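A small, illustrative experiment (added here, not from the original notes): point sample a sine wave below the Nyquist rate and the samples are indistinguishable from those of a much lower-frequency sine, which is exactly the false reconstruction described above. The specific frequencies chosen are arbitrary.

    #include <cmath>
    #include <cstdio>

    // Sample a 9 Hz sine at 10 samples per second (below the Nyquist rate of 18 Hz).
    // The samples are identical to those of a (9 - 10) = -1 Hz sine: an alias.
    int main() {
        const double PI = 3.14159265358979;
        const double signalHz = 9.0, sampleHz = 10.0;
        const double aliasHz = signalHz - sampleHz;               // -1 Hz
        for (int i = 0; i < 10; ++i) {
            double t = i / sampleHz;
            double original = std::sin(2.0 * PI * signalHz * t);
            double alias    = std::sin(2.0 * PI * aliasHz * t);   // low-frequency alias
            std::printf("t=%.1f   9 Hz sample=%7.3f   alias sample=%7.3f\n",
                        t, original, alias);
        }
        return 0;
    }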
Aliasing Problems

- Shimmering / buzzing: rapid pixel color changes (flickering) caused by high detail textures or high detail geometry; ultimately due to point sampling of high frequency color changes at low frequency pixel intervals.
- Stairstepping / jaggies: noticeable stairstep edges on high contrast edges that are nearly horizontal or vertical; due to point sampling of effectively infinite frequency color changes (the step gradient at the edge of a triangle).
- Moiré patterns: strange concentric curve features that show up on regular patterns; due to sampling of regular patterns on a regular pixel grid.
- Strobing: incorrect or discontinuous motion in fast moving animated objects; due to low frequency sampling of regular motion at regular time intervals.

Spatial & Temporal Aliasing

Aliasing shows up in a variety of forms, but those forms can usually be separated into either spatial or temporal aliasing. Spatial aliasing refers to aliasing problems based on regular sampling in space; this usually implies device space, but we see other forms of spatial aliasing as well. Temporal aliasing refers to aliasing problems based on regular sampling in time. The antialiasing techniques used to fix these two things tend to be very different, although they are based on the same fundamental principles.

Point Sampling

The aliasing problems we've seen are due to low frequency point sampling of high frequency information. With point sampling, we sample the original signal at precise points (pixel centers, etc.). Is there a better way to sample continuous signals?

Box Sampling

We could instead do a hypothetical box sampling (or box filter) of our image. In this method, each triangle contributes to the pixel color based on the area of the triangle within the pixel, and that area is weighted equally across the pixel.

Pyramid Sampling

Alternately, we could use a weighted sampling filter such as a pyramid filter. The pyramid filter considers the area of triangles in the pixel, but weights them according to how close they are to the center of the pixel.

Sampling Filters

We can choose any one of several different sampling filters. Common options include the point, box, pyramid, cone, and Gaussian filters. Different filters perform differently in different situations, but the best all-around sampling filters tend to be Gaussian in shape. The filters aren't necessarily limited to covering only the pixel: it is possible, and not uncommon, to use filters that extend slightly outside of the pixel, thus overlapping with the neighboring pixels. Filters that cover less than the square pixel, however, tend to suffer from problems similar to point sampling.
Edge Antialiasing

There have been several edge antialiasing algorithms proposed that attempt to base the final pixel color on the exact area that a particular triangle covers. However, without storing a lot of additional information per pixel, this is very difficult, if not impossible, to do correctly in cases where several triangle edges cross the pixel. Making a coverage based scheme that is compatible with z-buffering and can handle triangles drawn in any order has proven to be a pretty impractical approach. Some schemes work reasonably well if triangles are sorted from back to front (distant to near), but even these methods are not 100% reliable.

Supersampling

A more popular method, although less elegant, is supersampling. With supersampling, we point sample the pixel at several locations and combine the results into the final pixel color. For high quality rendering, it is not uncommon to use 16 or more samples per pixel, thus requiring the framebuffer and z-buffer to store 16 times as much data and requiring potentially 16 times the work to generate the final image. This is definitely a brute force approach, but it is straightforward to implement and very powerful.

Uniform Sampling

With uniform sampling, the pixel is divided into a uniform grid of subpixels. Uniform supersampling should certainly generate better quality images than single point sampling. It will filter out some high frequency information, but it may still suffer from Moiré problems with highly repetitive signals.

Random Sampling

With random sampling, the pixel is supersampled at several randomly located points. Random sampling has the advantage of breaking up repeating signals and so can completely eliminate Moiré patterns. It does, however, trade the regular patterns for random noise in the image, which tends to be less annoying to the viewer. It also suffers from potential clustering and gaps among the samples.

Jittered Sampling

With jittered (or stratified) sampling, the pixel is divided into a grid of subpixels, but each subpixel is sampled at a random location within that subpixel. This combines the advantages of both uniform and random sampling.

Weighted Sampling

If we average all of the samples equally to get the final pixel color, we are essentially performing a box filter on the samples. We can also perform a weighted average of the samples to achieve other shaped filters: for example, we can weight the samples according to a box, cone, pyramid, or Gaussian shape if desired. We can apply weighting to uniform, random, or jittered supersamples with little additional work.

Weighted Distribution

By combining supersampling, jittering, and Gaussian weighting, we make good progress against aliasing problems. However, if we look at the 16 samples in the previous image, we see that some are much more important than others, yet they all have the same computational cost. In other words, the 4 samples in the center of the grid might have more total weight than the other 12 samples around the perimeter. By adjusting our distribution so there are more samples in the higher valued areas, we can achieve the benefits of jittered and weighted sampling while maintaining efficiency by treating all samples equally.
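A minimal sketch of generating jittered sample positions and Gaussian weights for one pixel, as described above. The 4x4 grid size and the Gaussian width are illustrative choices, not values from the notes, and a real renderer would shade the scene at each sample and accumulate the weighted colors.

    #include <cmath>
    #include <cstdio>
    #include <random>

    // Jittered (stratified) supersampling with Gaussian weights for a single pixel.
    // Each of the 4x4 subpixels is sampled at a random position inside the subpixel,
    // and each sample is weighted by its distance from the pixel center.
    int main() {
        const int grid = 4;                       // 4x4 = 16 samples per pixel
        const double sigma = 0.5;                 // Gaussian width in pixel units (illustrative)
        std::mt19937 rng(7);
        std::uniform_real_distribution<double> u01(0.0, 1.0);

        double weightSum = 0.0;
        for (int j = 0; j < grid; ++j) {
            for (int i = 0; i < grid; ++i) {
                // Jittered position within the (i, j) subpixel, in [0,1) pixel coordinates.
                double x = (i + u01(rng)) / grid;
                double y = (j + u01(rng)) / grid;
                // Gaussian weight centered on the pixel center (0.5, 0.5).
                double dx = x - 0.5, dy = y - 0.5;
                double w = std::exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma));
                weightSum += w;
                std::printf("sample (%.3f, %.3f)  weight %.3f\n", x, y, w);
                // A renderer would evaluate the scene at (x, y) and accumulate color * w.
            }
        }
        std::printf("sum of weights (used for normalization): %.3f\n", weightSum);
        return 0;
    }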
Adaptive Sampling

An even more sophisticated option is to perform adaptive sampling. With this scheme, we start with a small number of samples and analyze their statistical variation. If the colors are all similar, we accept that we have an accurate sampling. If we find that the colors have a large variation, we continue to take further samples until we have reduced the statistical error to an acceptable tolerance.

Semi-Jittered Sampling

We can apply a unique jittering pattern to each pixel (fully jittered), or we can reuse the same pattern for all of the pixels (semi-jittered). Both are used in practice. Semi-jittering has potential performance advantages, and it has the additional advantage that straight edges look cleaner. Semi-jittering, however, can allow subtle Moiré patterns due to the semi-regularity of the sampling grid.

Spatial Aliasing

Many of the spatial aliasing problems we've seen so far happen because of the regular grid of pixels. Using antialiasing techniques such as Gaussian weighted, jittered supersampling can significantly reduce these problems. However, they can add a large cost, depending on the exact implementation.

Mipmapping & Pixel Antialiasing

We saw how mipmapping and other texture filtering techniques, such as elliptical weighted averaging, can reduce texture aliasing problems. These can be combined with pixel supersampling to achieve further improvements: for example, each supersample can be mipmapped, and the results blended into the final pixel value, giving better edge-on behavior than mipmapping alone. Another hybrid approach is to compute only a single shading sample per pixel, but still supersample the scan conversion and z-buffering. This combines the edge antialiasing properties of supersampling with the texture filtering of mipmapping, without the excessive cost of full pixel supersampling. Modern graphics hardware often uses this approach, resulting in a large increase in framebuffer/z-buffer memory but only a small increase in cost.

Temporal Aliasing

Properly tuned supersampling techniques address the spatial aliasing problems pretty well, but we may still run into temporal aliasing (strobing) problems when we are generating animations. Just as the spatial antialiasing techniques apply a certain blurring at the pixel level, temporal antialiasing techniques apply blurring at the frame level. In other words, the approach to temporal antialiasing is to add motion blur to the image.

Motion Blur

Motion blur can be a tricky subject, and several different approaches exist to address the issue. The simplest brute force approach is supersampling in time. Just as pixel antialiasing involves increasing the spatial resolution and blurring the results, motion blur involves increasing the temporal resolution and blurring the results. In other words, if we want to apply motion blur over a 1/30th second interval, we render several images spaced in time and combine them into a final image. If an object moves 16 pixels in one frame, then 16 supersample images (or even fewer) should be adequate to blur it effectively.

Combining Antialiasing Techniques

We can even combine the two techniques without exponentially increasing the work, i.e., without requiring 16 x 16 times the work. This can be done by rendering 16 supersamples total, each one spread in time and jittered at the pixel level. This overall approach offers a powerful foundation for other blurry effects such as soft shadows (penumbrae), lens focus (depth of field), color separation (dispersion), glossy reflections, diffuse interreflections, etc.
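A minimal sketch of the combined approach just described, assuming a hypothetical RenderSample(x, y, t) shading callback (not part of the original notes): each of the 16 samples gets both a jittered pixel position and its own point in time within the frame interval, and the results are averaged.

    #include <random>

    struct Color { float r = 0, g = 0, b = 0; };

    // Hypothetical renderer entry point: shade the scene at pixel-local position (x, y)
    // in [0,1) coordinates, at time t within the frame interval.
    Color RenderSample(double x, double y, double t);

    // Combined spatial + temporal supersampling: 16 samples total, each jittered in the
    // pixel and spread across the frame's time interval, then averaged.
    Color RenderPixelMotionBlurred(double frameStart, double frameDuration) {
        const int n = 16;
        std::mt19937 rng(99);
        std::uniform_real_distribution<double> u01(0.0, 1.0);
        Color sum;
        for (int i = 0; i < n; ++i) {
            double x = u01(rng);                                          // jittered pixel position
            double y = u01(rng);
            double t = frameStart + (i + u01(rng)) / n * frameDuration;   // stratified in time
            Color c = RenderSample(x, y, t);
            sum.r += c.r; sum.g += c.g; sum.b += c.b;
        }
        sum.r /= n; sum.g /= n; sum.b /= n;
        return sum;
    }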
Midterm Review: The Traditional Graphics Pipeline

- Transformation
- Lighting
- Clipping / Culling
- Scan Conversion
- Pixel Rendering

As we have seen several times, these stages are not always completely distinct and may overlap somewhat. They will also vary from implementation to implementation. We will consider a pretty classic implementation of this approach, which is very similar to how things are done in GL.

Triangle Rendering

Each triangle goes through the following steps:

- Transform vertex positions and normals from object space to camera space.
- Apply backface culling in camera space to determine if the triangle is visible.
- Compute lit vertex colors based on the unlit color, the vertex position and normal, and the light positions/directions (lighting is computed in camera space).
- Compute dynamic texture coordinates if desired, such as environment mapping.
- Clip and cull the triangle to the view frustum. This may involve creating new temporary verts and splitting the triangle into two or more.
- Transform clipped verts into device coordinates by applying the perspective transformation, perspective division, and viewport mapping.
- Scan convert the triangle into a series of pixels. For each pixel, compute interpolated values for the color (r, g, b, a), depth (z), and texture coordinates (tx, ty).
- For each pixel generated in the scan conversion process, test the z value against the z-buffer value. If the pixel is visible, compute the final color by looking up the texture color, combining that with the interpolated lit color, and applying alpha blending if necessary.

1. Transform to Camera Space

The vertex positions and normals that define the triangle get transformed from their defining object space into world space and then into camera space. These can actually be combined into a single transformation by precomputing M = C^-1 · W. If our matrix is non-rigid (i.e., if it contains shears and/or scales), then the normals must be transformed by (M^-1)^T, which should be precomputed, and then renormalized to unit length:

M = C^-1 W
v' = M v,  v = [vx, vy, vz, 1]
n' = (M^-1)^T n,  n = [nx, ny, nz, 0],  n'' = n' / |n'|

2. Backface Cull in Camera Space

We want to perform backface culling as early as possible, because we expect that it will quickly eliminate up to 50% of our triangles. We choose to do the test in camera space for simplicity, but it could also be done in object space with a little extra setup. Note that in camera space the eye position e is located at the origin:

n = (p1 - p0) × (p2 - p0)
if (e - p0) · n ≤ 0, the triangle is invisible

3. Compute Vertex Lighting

Next we can compute the lighting at each vertex using a lighting model of our choice; we will use a variation of the Blinn model. We perform lighting in camera space, so the camera space light positions should be precomputed. We loop over all of the lights in the scene and compute the incident color c_lgt for each light and the unit length direction l to the light. These are computed differently for directional lights, point lights, and other light types. Once we have the incident light information from a particular light, we can use it to compute the diffuse and specular components of the reflection based on the material properties (m_dif: diffuse color, m_spec: specular color, s: shininess). Each light adds its contribution to the total, and we also add a term that approximates the ambient light:

c = m_amb · c_amb + Σ_i [ m_dif · c_lgt_i · (n · l_i) + m_spec · c_lgt_i · (n · h_i)^s ]

4. Compute Texture Coordinates

Very often, texture coordinates are just specified for each vertex and don't require any special computation. Sometimes, however, one might want to use dynamic texture coordinates for some effect, such as environment mapping or projected lights.

5. Clipping & Culling

The triangle is then clipped to the viewing frustum. We test the triangle against each plane individually and compute the signed distance d of each of the 3 triangle vertices to the plane:

d = (v - p_clip) · n_clip

where p_clip is a point on the clipping plane and n_clip is its normal. By examining these 3 signed distances, we can determine if the triangle is totally visible, totally invisible, or partially visible (thus requiring clipping). If an edge of the triangle intersects the clipping plane, then we find the point x where the edge intersects the plane.
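A small sketch of the plane test in step 5, with an assumed Vec3 type and helpers (illustrative, not the course's code). It computes the three signed distances and classifies the triangle; the convention that the positive side of the plane is the visible side is an assumption for the example.

    struct Vec3 { double x, y, z; };
    double Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    Vec3   Sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

    // Signed distance from vertex v to the plane through pClip with normal nClip.
    double SignedDistance(const Vec3& v, const Vec3& pClip, const Vec3& nClip) {
        return Dot(Sub(v, pClip), nClip);
    }

    // Classify a triangle against one clipping plane:
    //   1 = fully visible, -1 = fully invisible, 0 = straddles the plane and must be clipped.
    int ClassifyTriangle(const Vec3 tri[3], const Vec3& pClip, const Vec3& nClip) {
        int positive = 0, negative = 0;
        for (int i = 0; i < 3; ++i) {
            if (SignedDistance(tri[i], pClip, nClip) >= 0.0) ++positive;
            else ++negative;
        }
        if (negative == 0) return 1;
        if (positive == 0) return -1;
        return 0;   // partially visible: clip the edges that cross the plane
    }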
6. Projection & Device Mapping

After we are done with all of the camera space computations (lighting, texture coordinates, clipping, culling), we can finish transforming the vertices into device space. We do this by applying a 4x4 perspective matrix P, which gives us a 4D vector in unnormalized view space. We then divide out the w coordinate to end up with our 2.5D image space vector. This is then further mapped into actual device space pixels with a simple scale/translation matrix D. (A small code sketch of this step follows step 7 below.)

v' = P v   (4D unnormalized view space)
v'' = [ v'x / v'w,  v'y / v'w,  v'z / v'w ]
v''' = D v''

7. Scan Conversion

At this point we have a triangle that has been properly clipped to the viewing area, so we are guaranteed that the entire triangle will be visible. It can then be scan converted into actual pixels. The scan conversion process is usually done by sequentially filling in the pixels from top to bottom and left to right. Several slopes are precomputed so that the actual cost per pixel is very low. While scan converting, we interpolate various properties across the triangle, such as the (r, g, b) color resulting from lighting, the (tx, ty) texture coordinates, and the z depth. Interpolating the texture coordinates requires a perspective correction, which costs an additional division per pixel.
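Here is a minimal sketch of step 6, assuming simple matrix and vector types (illustrative, not the course's code). It applies the perspective matrix, divides by w, and then maps the result to pixel coordinates; the final [-1, 1] to pixel mapping is one common convention and stands in for the scale/translation matrix D.

    struct Vec4 { double x, y, z, w; };
    struct Vec3 { double x, y, z; };

    // 4x4 matrix times 4D vector (row-major m[row][col]).
    Vec4 Mul(const double m[4][4], const Vec4& v) {
        return { m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w,
                 m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w,
                 m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w,
                 m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w };
    }

    // Project a camera-space vertex into pixel coordinates: apply the perspective
    // matrix P, divide by w, then apply the viewport (device) mapping.
    Vec3 ProjectToDevice(const double P[4][4], const Vec4& cameraSpaceVert,
                         double viewportWidth, double viewportHeight) {
        Vec4 clip = Mul(P, cameraSpaceVert);                               // v' = P v
        Vec3 ndc = { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w };  // divide by w
        return { (ndc.x * 0.5 + 0.5) * viewportWidth,                     // scale/translate
                 (ndc.y * 0.5 + 0.5) * viewportHeight,
                 ndc.z };                                                  // z kept for the z-buffer
    }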
Lighting
CSE 167: Computer Graphics
Instructor: Steve Rotenberg
UCSD, Fall 2005

Triangle Rendering

The main stages in the traditional graphics pipeline are:

- Transform
- Lighting
- Clipping / Culling
- Scan Conversion
- Pixel Rendering

Transform, Clip, Scan Convert

The transformation, clipping/culling, and scan conversion processes give us a way to take a 3D object defined in its object space and generate a 2D image of pixels. If each vertex has a color, that color gets smoothly interpolated across the triangle, giving us a way to generate full color images. Lighting is essentially the process of automatically assigning colors to the vertices based on their position and orientation in the world relative to lights and other objects.

Lighting

Today we will mainly focus on vertex lighting. Each vertex goes through a lighting process that determines its final color, and this color value is then interpolated across the triangle in the scan conversion process. Usually each vertex has some sort of initial color assigned to it, which defines what color it would be if well lit by a uniform white light. This initial color is then modified based on the position and normal of the vertex in relation to lights placed in the scene; in other words, a grey vertex dimly lit by red lights will appear dark red. In GL, you pass in the unlit color through glColor3f(); GL then computes the lit color, which gets interpolated in the scan conversion process.

Normals

The concept of normals is essential to lighting. Intuitively, we might think of a flat triangle as having a constant normal across the front face. However, in computer graphics it is most common to specify normals and perform lighting at the vertices. This gives us a method of modeling smooth surfaces as a mesh of triangles with shared normals at the vertices. It is often very convenient if normals are unit length (normalized normals).

Models

We will extend our concept of a Model to include normals. We can do this by simply extending our vertex class:

    class Vertex {
        Vector3 Position;
        Vector3 Color;
        Vector3 Normal;
    public:
        void Draw() {
            glColor3f(Color.x, Color.y, Color.z);
            glNormal3f(Normal.x, Normal.y, Normal.z);
            glVertex3f(Position.x, Position.y, Position.z);   // this has to be last
        }
    };

Single Indexed Model

    class Vertex {
        Vector3 Position;
        Vector3 Color;
        Vector3 Normal;
    };

    class Triangle {
        Vertex *Vert[3];        // note: the triangle stores pointers to its verts
    };

    class Model {
        int NumVerts, NumTris;
        Vertex *Vert;
        Triangle *Tri;
    };

Normal Transformations

Lighting requires accurate measurement of distances and angles, so we want to compute lighting in a regular 3D space (i.e., not 4D unnormalized view space or 2.5D device space). This leaves object space, world space, or camera space as our most natural options. To light in object space, we would have to transform the lights from world space into each object's space; and if we are applying shears or non-uniform scales to our object, this will distort the object, which means that object space isn't actually a legitimate place to do lighting. Lighting in world space would be fine, but it would require transforming the object into world space, which is a step that we usually avoid doing explicitly. Therefore, it makes good sense to light in camera space, as we will probably want to perform clipping and some culling in this space as well. GL does its lighting in camera space, which means that we must transform normals into camera space in addition to the vertex positions.

Remember that when we transform a normal, we don't want to apply the translation portion of the matrix (the right hand column). A normal transforms as a direction, not a position, and so we expand it into its 4D format with w = 0. Writing the matrix with column vectors a, b, c, d (where d is the translation), the translation column drops out:

[x', y', z', 0]^T = M [x, y, z, 0]^T = x·a + y·b + z·c

It's actually worse than that. Let's say we take the 3 vertices of a triangle and compute its normal, then we transform the 3 vertices and the normal. If the transformation contains any shear or non-uniform scaling, then it is possible that the transformed normal will no longer be perpendicular to the transformed triangle. To fix this, we should actually transform the normal by the inverse transpose of the matrix, (M^-1)^T. The transformed normals will also no longer be unit length, so they must be renormalized before lighting.

In other words, if we have non-rigid transformations, we need to compute a matrix inverse once and then renormalize every normal to properly transform the normals. This is expensive, so it should only be done when necessary. The good news is that most of the time we tend to use rigid transformations in computer graphics (transformations built up purely from rotations and translations), and for a rigid matrix, (M^-1)^T = M. Another good piece of news is that we only need to transform the normals into world or camera space, and don't need to project them or compute a perspective division.

If we want to compute lighting in camera space, we need to first transform the vertices and normals into camera space:

M = C^-1 W
v' = M v,  v = [vx, vy, vz, 1]
n' = (M^-1)^T n,  n = [nx, ny, nz, 0],  n'' = n' / |n'|
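The normal transformation described above can be sketched as follows, using a simple assumed 3x3 matrix type (only the rotation/scale/shear block of M matters for directions, since the translation column is dropped). This is illustrative code added here, not from the course.

    #include <cmath>

    struct Vec3 { double x, y, z; };
    struct Mat3 { double m[3][3]; };   // row-major 3x3: the rotation/scale/shear block of M

    // Inverse via the adjugate divided by the determinant (assumes det != 0).
    Mat3 Inverse(const Mat3& a) {
        const double (*m)[3] = a.m;
        double det = m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
                   - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
                   + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]);
        double s = 1.0 / det;
        Mat3 r;
        r.m[0][0] =  (m[1][1]*m[2][2] - m[1][2]*m[2][1]) * s;
        r.m[0][1] = -(m[0][1]*m[2][2] - m[0][2]*m[2][1]) * s;
        r.m[0][2] =  (m[0][1]*m[1][2] - m[0][2]*m[1][1]) * s;
        r.m[1][0] = -(m[1][0]*m[2][2] - m[1][2]*m[2][0]) * s;
        r.m[1][1] =  (m[0][0]*m[2][2] - m[0][2]*m[2][0]) * s;
        r.m[1][2] = -(m[0][0]*m[1][2] - m[0][2]*m[1][0]) * s;
        r.m[2][0] =  (m[1][0]*m[2][1] - m[1][1]*m[2][0]) * s;
        r.m[2][1] = -(m[0][0]*m[2][1] - m[0][1]*m[2][0]) * s;
        r.m[2][2] =  (m[0][0]*m[1][1] - m[0][1]*m[1][0]) * s;
        return r;
    }

    // Transform a normal by the inverse transpose of M and renormalize it.
    Vec3 TransformNormal(const Mat3& M, const Vec3& n) {
        Mat3 inv = Inverse(M);
        // Multiply n by inv^T, i.e. use the columns of inv as rows.
        Vec3 r = { inv.m[0][0]*n.x + inv.m[1][0]*n.y + inv.m[2][0]*n.z,
                   inv.m[0][1]*n.x + inv.m[1][1]*n.y + inv.m[2][1]*n.z,
                   inv.m[0][2]*n.x + inv.m[1][2]*n.y + inv.m[2][2]*n.z };
        double len = std::sqrt(r.x*r.x + r.y*r.y + r.z*r.z);
        return { r.x/len, r.y/len, r.z/len };
    }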
Lighting

Whether we are computing lighting per vertex or per pixel, the basic process is the same: in either case, we are computing the lighting at some position v with some normal n.

Material Colors

Objects have an inherent material color, which is the color that the object reflects. The material gets its color because its reflectivity varies with the wavelength of light. In computer graphics, we usually don't represent color as a continuous spectrum; instead, we represent it as a combination of red, green, and blue. Obviously, an object can't reflect more light than it receives, so at best it could reflect 100% of the light at all wavelengths, thus appearing bright white. A more realistic white object might reflect perhaps 95% of the light, so we would say that its actual material color is (0.95, 0.95, 0.95). We will assume that material colors are limited to the range 0.0 to 1.0 in red, green, and blue (or, more realistically, from 0.0 to 0.98 or so).

Light Color

However, if we are looking at a white piece of paper under uniform light, we can always turn more lights on and get more light to reflect off of the paper: there is no upper limit to the intensity of light. If we want to represent a light intensity (light color), we can store it as red, green, and blue values ranging from 0.0 to an arbitrarily high value. In other words, a bright white light bulb might have a color of (10, 10, 10).

Color & Intensity

We need to make a distinction between material color and light color. Material colors represent the proportion of light reflected; light colors represent the actual intensity of a beam of light. We never actually perceive the inherent material color; all we see is the light reflected off of a material. If we shine a red light on a grey surface, the object appears dark red, because it is reflecting beams of dark red light. We will use m to represent a material color and c to represent an actual light color.

Exposure

The monitor has an upper limit to the brightness it can display. If light intensity has no upper limit, then how do we determine what the value of white is? This relates to the issues of white balance and exposure control. The human eye and digital cameras adjust their internal exposure settings to normalize the appearance of white. In other words, if we are in a moderately lit room, the light color (0.5, 0.5, 0.5) might appear as white, but when we go outside, our eye adjusts its exposure so that (1.0, 1.0, 1.0) looks white. Ideally, we would have some sort of virtual exposure control, and there are various advanced rendering techniques that handle the issues of color and exposure in fairly complex ways. For today, we will just assume that a light intensity of (1, 1, 1) is white, and any light intensity values above 1.0 will simply get clamped to 1.0 before storing the color in the actual pixel.

Reflectivity

A white sheet of paper might reflect 95% of the light that shines on it. An average mirror might also reflect 95% of the light that shines on it. Yet these two things look completely different, because they reflect light in different directions. We say that the paper is a diffuse reflector, whereas the mirror is a specular reflector.

Diffuse Reflection

An ideal diffuse reflector will receive light from some direction and bounce it uniformly in all directions. Diffuse materials have a dull or matte appearance. In the real world, materials will not match this behavior perfectly, but they might come reasonably close.
Specular Reflection

An ideal specular reflector (a mirror) will bounce an incoming light ray in a single direction, where the angle of incidence equals the angle of reflection.

Specular (Glossy) Reflection

Sometimes a material behaves in a specular way, but not quite perfectly like a mirror, such as an unpolished metal surface. In computer graphics this is sometimes referred to as glossy reflection. Glossy materials look shiny and will show specular highlights.

Diffuse / Specular Reflection

Many materials have a mixture of diffuse and specular behavior. Plastics are a common example of this, as they tend to have an overall diffuse behavior but still catch highlights.

Real Reflectors

Materials in the real world might have fairly complex reflection distributions that vary based on the angle of the incoming light. Modeling these properties correctly is a very important part of photoreal rendering, and we will talk more about this in later lectures. For today, we will allow materials to have a mixture of ideal diffuse and glossy properties.

Diffuse Reflection

At first we will consider a purely diffuse surface that reflects light equally in all directions. The light reflected is proportional to the incident light, and the material color determines the proportion. Let's assume we have a beam of parallel light rays shining on the surface. The area of the surface covered by the beam will vary based on the angle between the incident beam and the surface normal: the larger this area, the less incident light per unit area. In other words, the object appears darker as the normal turns away from the light.

We see that the incident light, and thus the reflected light, is proportional to the cosine of the angle between the normal and the light rays. This is known as Lambert's cosine law, and ideal diffuse reflectors are sometimes called Lambertian reflectors.

We will use the vector l to represent the unit length vector that points towards the light source. The diffusely reflected light is then:

c = m_dif · c_lgt · (n · l)

Directional Light

When light is coming from a distant source like the sun, the light rays are parallel and assumed to be of uniform intensity distributed over a large area. The light can therefore be described by a simple unit length direction vector d and a color c_dir. To get the unit length vector to the light, we simply negate the light's direction, l = -d, and the color shining on the surface is c_lgt = c_dir.

Point Lights

For closer light sources such as light bulbs, we can't simply use a single direction. A simple way to model a local light source is as a point light that radiates light equally in all directions. In the real world, the intensity from a point light source drops off proportionally to the inverse square of the distance from the light. For a point light at position p with color c, lighting a surface point v:

l = (p - v) / |p - v|
c_lgt = c / |p - v|^2

Attenuation

Sometimes it is desirable to modify the inverse square falloff behavior of point lights. A common (although not physically accurate) model for the distance attenuation is:

c_lgt = c / (k_c + k_l d + k_q d^2),  where d = |p - v|

Multiple Lights

Light behaves in a purely additive way, so as we add more lights to the scene, we can simply add up the total light contribution:

c = Σ_i m_dif · c_lgt_i · (n · l_i)

(OK, well, actually in some very specific cases light can interfere with other light and effectively behave in a subtractive way as well, but this is limited to very special cases like the coloration of soap bubbles.)
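A minimal sketch of diffuse lighting from a set of point lights with the attenuation model above, using simple assumed vector helpers (illustrative only). The clamp of n · l to zero for lights behind the surface is the usual practical addition and is left implicit in the slides.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Vec3 { double x, y, z; };
    Vec3   operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    double Dot(Vec3 a, Vec3 b)       { return a.x * b.x + a.y * b.y + a.z * b.z; }
    double Length(Vec3 a)            { return std::sqrt(Dot(a, a)); }
    Vec3   Scale(Vec3 a, double s)   { return {a.x * s, a.y * s, a.z * s}; }

    struct PointLight { Vec3 position; Vec3 color; double kc, kl, kq; };  // attenuation constants

    // Diffuse (Lambertian) lighting at position v with unit normal n and diffuse color mdif:
    //   c = sum over lights of  mdif * clgt * max(n . l, 0)
    // where clgt = color / (kc + kl*d + kq*d^2) and l points from v towards the light.
    Vec3 DiffuseLighting(Vec3 v, Vec3 n, Vec3 mdif, const std::vector<PointLight>& lights) {
        Vec3 c{0, 0, 0};
        for (const PointLight& light : lights) {
            Vec3 toLight = light.position - v;
            double d = Length(toLight);
            Vec3 l = Scale(toLight, 1.0 / d);                    // unit vector to the light
            double atten = 1.0 / (light.kc + light.kl * d + light.kq * d * d);
            double ndotl = std::max(Dot(n, l), 0.0);             // clamp: light behind surface
            c.x += mdif.x * light.color.x * atten * ndotl;
            c.y += mdif.y * light.color.y * atten * ndotl;
            c.z += mdif.z * light.color.z * atten * ndotl;
        }
        return c;
    }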
Ambient Light

In the real world, light gets bounced all around the environment and may shine on a surface from every direction. Modeling accurate light bouncing is the goal of photoreal rendering, but this is very complex and expensive to compute. A much simpler way to model all of the background reflected light is to assume that it is just some constant color shining from every direction equally. This is referred to as ambient light, and it can be added as a simple extra term in our lighting equation:

c = m_amb · c_amb + Σ_i m_dif · c_lgt_i · (n · l_i)

Usually m_amb is set equal to m_dif.

Specular Highlights

There are a variety of ways to achieve specular highlights on surfaces. For now, we will look at a relatively simple method known as the Blinn lighting model. We assume that the basic material is specularly reflective, like a metal, but with a rough surface that causes the actual normals to vary at a small scale. We will say that the surface at a microscopic scale is actually composed of many tiny microfacets, which are arranged in a more or less random fashion.

The surface roughness will vary from material to material. With smooth surfaces, the microfacet normals are very closely aligned to the average surface normal. With rougher surfaces, the microfacet normals are spread around more, but we would still expect to find more facets close to the average normal than far from it. Smooth surfaces have sharp highlights, while rougher surfaces have larger, more blurry highlights. (Figure: highlight shapes for polished, smooth, rough, and very rough surfaces.)

To compute the highlight intensity, we start by finding the unit length halfway vector h, which is halfway between the vector l pointing to the light and the vector e pointing to the eye (camera):

h = (l + e) / |l + e|

The halfway vector h represents the direction that a mirror-like microfacet would have to be aligned with in order to cause the maximum highlight intensity. The microfacet normals will point more or less in the same direction as the average surface normal, so the further h is from n, the less likely we would expect the microfacets to align. In other words, we want some sort of rule that causes highlights to increase in brightness as h gets closer to n. The Blinn lighting model uses the following value for the highlight intensity:

(h · n)^s

where s is the shininess, or specular exponent. (h · n) will be 1 when h and n line up exactly, and will drop off to 0 as they approach 90 degrees apart. Raising this value to an exponent retains the behavior at 0 and 90 degrees, but the dropoff increases faster as s gets higher, thus causing the highlight to get narrower.

To account for highlights, we simply add an additional contribution to our total lighting equation. Each light can potentially contribute highlights, so it is included in our loop over the lights:

c = m_amb · c_amb + Σ_i [ m_dif · c_lgt_i · (n · l_i) + m_spec · c_lgt_i · (n · h_i)^s ]

This is essentially the Blinn lighting model. It appears in a few slightly different forms and in a wide variety of notations.

Lighting Models

There are many lighting models out there. Some of the classic ones are:

- Blinn
- Phong
- Lambert
- Cook-Torrance

There are many more advanced models used in modern photoreal rendering; we will take a brief look at these later.
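A compact sketch of the full Blinn equation above for a list of lights, reusing simple assumed vector helpers (illustrative, not the course's implementation). The n · l and n · h terms are clamped to zero, which the slides leave implicit.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Vec3 { double x, y, z; };
    Vec3   operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    Vec3   operator*(Vec3 a, Vec3 b) { return {a.x * b.x, a.y * b.y, a.z * b.z}; } // per channel
    Vec3   Scale(Vec3 a, double s)   { return {a.x * s, a.y * s, a.z * s}; }
    double Dot(Vec3 a, Vec3 b)       { return a.x * b.x + a.y * b.y + a.z * b.z; }
    Vec3   Normalize(Vec3 a)         { return Scale(a, 1.0 / std::sqrt(Dot(a, a))); }

    struct Light { Vec3 l; Vec3 clgt; };   // unit vector to the light and its incident color

    // Blinn lighting:
    //   c = mamb*camb + sum_i [ mdif*clgt_i*(n.l_i) + mspec*clgt_i*(n.h_i)^s ]
    // n = unit surface normal, e = unit vector from the surface point towards the eye.
    Vec3 BlinnLighting(Vec3 n, Vec3 e, Vec3 mamb, Vec3 camb, Vec3 mdif, Vec3 mspec,
                       double s, const std::vector<Light>& lights) {
        Vec3 c = mamb * camb;                                   // ambient term
        for (const Light& light : lights) {
            double ndotl = std::max(Dot(n, light.l), 0.0);      // Lambert diffuse term
            Vec3 h = Normalize(light.l + e);                    // halfway vector
            double ndoth = std::max(Dot(n, h), 0.0);
            Vec3 diffuse  = Scale(mdif  * light.clgt, ndotl);
            Vec3 specular = Scale(mspec * light.clgt, std::pow(ndoth, s));
            c = c + diffuse + specular;
        }
        return c;
    }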
Gouraud Shading

Back in the old days, triangles were lit as flat surfaces with a single normal. In 1971, Henri Gouraud suggested that computing the lighting at the verts and then interpolating the color across the triangle could simulate the appearance of smooth surfaces. This technique is called Gouraud shading, and it is the default behavior for most hardware renderers.

Phong Shading

Computing lighting at the vertices is fast but has several limitations. For high quality rendering, it is much more common to compute lighting per pixel. In order to render triangles as smooth surfaces, the most common technique is to interpolate the normals across the triangle and then use the interpolated normal and position to compute the per-pixel lighting. This is known as Phong shading, or Phong interpolation (not to be confused with the Phong lighting model). Modern graphics hardware can perform Phong shading through the use of pixel shaders.

Flat vs. Curved Triangles

We see that a triangle can represent a flat surface or approximate a small curved surface. Even if we want a triangle to be flat, like the face of a cube, we should still compute the lighting at each vertex. The reason is that the resulting colors might be different due to inverse square attenuation, specular lighting, or other reasons. In other words, we don't really need to make a distinction between flat and curved triangles, as the lighting is computed the same way for each; only the normals vary.

