Computer Graphics (CS 307)

by: Jordon Hermiston



This 70-page set of Class Notes was uploaded by Jordon Hermiston on Thursday, October 29, 2015. The Class Notes belong to CS 307 at Wellesley College, taught by Staff in Fall. Since its upload, it has received 12 views. For similar materials, see /class/230940/cs-307-wellesley-college in Computer Science at Wellesley College.

Or, using homogeneous coordinates,

    p' = (x, y, z, -y/y_l)

We can accomplish that mapping with the following projection matrix:

    [ x_p ]   [ 1    0      0   0 ] [ x ]
    [ y_p ] = [ 0    1      0   0 ] [ y ]
    [ z_p ]   [ 0    0      1   0 ] [ z ]
    [ w_p ]   [ 0  -1/y_l   0   0 ] [ 1 ]

So, in summary, our steps are:

  * move the origin to where we want the COP to be, namely the location of the light source
  * project
  * move the origin back
  * draw the object

We move the origin back so that the object's shadow is drawn on the plane y = 0 and not on the plane y = -y_l.

Demo: the code is in ~cs307/pub/demos/16-shadowTeapot.cc. Note:

  * the sequence of steps in the shadow rendering
  * how the projection matrix is computed
  * that we turn off lighting when we draw the shadow

The fact that we translate, multiply by a projection matrix, and then untranslate may seem odd. In what way are we "projecting"? We're not projecting at the moment we multiply by the matrix, of course; we are merely modifying the CTM. Thus, each vertex of the teapot is rendered as

    v_rendered = C . T(L) . M . T(-L) . v

where C is the camera matrix, T(L) translates by the light position L, M is the projection matrix above, and T(-L) is the inverse translation. When the vertices go through the projection matrix, their y coordinate gets squished down to -y_l, and the translation afterwards makes the y coordinate equal to zero. This really should feel like the Road Runner painting a hole on the wall. We're drawing the teapot in a 50 percent gray color and getting its shadow. Very cool.

1.3 More Generality

While this is cool, it certainly seems very restrictive. However, it's not too bad:

  * we could use blending and generalize it to more light sources
  * we could use some transformations to move the shadow plane to another height, though we'd need to be careful about this

2 Fog

Based on the explanation in the OpenGL Programming Guide, first edition.

Fog is a blending process where the amount of blending depends on the distance of the fragment from the eye location. It can be used for haze, smoke, fog, and so forth. This is often useful in outdoor scenes, where distant objects can just fade into the fog instead of being rendered in sharp detail. The first important step is to define your fog color.
Ordinary fog might be gray. A hazy day might have a light blue tinge to it. Smoke might be black. The fog color is defined using RGBA, as follows:

    GLfloat fogColor[4] = {0.5, 0.5, 0.5, 1.0};
    glFogfv(GL_FOG_COLOR, fogColor);

You also have to enable the fog calculations, as follows:

    glEnable(GL_FOG);

The fog process calculates a fraction f that depends on the distance of the fragment. The fraction is used to blend the incoming fragment color C_i with the fog color C_f, thus:

    C = f * C_i + (1 - f) * C_f

So the f value needs to be high for things that show clearly through the fog and low for things that are hidden in the fog. This means that the fog function needs to start out near 1 and decrease to zero as the distance from the eye increases. The f value is clamped by OpenGL to the range [0, 1].

2.1 Fog Functions

There are many kinds of functions that monotonically decrease (actually, non-increase) as distance z increases. The OpenGL people chose three.

2.1.1 Linear

The linear function is mathematically defined as

    f = (end - z) / (end - start)

where start and end are the z distances where the fog starts and ends. You can plot examples of this function using gnuplot on the Unix machines. This is specified in OpenGL as follows:

    glFogi(GL_FOG_MODE, GL_LINEAR);
    glFogf(GL_FOG_START, 1.5);
    glFogf(GL_FOG_END, 5.0);

This function is nice because you can delay the fog's setting in for a while, so that foreground stuff is entirely clear. You can also easily know when the fog will entirely obliterate objects.

2.2 Exponential

The exponential decay function is mathematically defined as

    f = e^(-c*z)

where c is a density constant. Again, you can use gnuplot to plot this function for various densities. This function is nice because it trails off at a decreasing rate, never really reaching zero, so it has a nice realism in that way. However, unless you use a pretty small density ...
Lecture on Light, Material, and the Phong Model

Reading: Most of what I know about light, material, and the Phong model, I learned from Angel, chapter 6, and the Red Book, chapter 5 (in the 3rd edition; chapter 6 in the 1st edition). Some of the figures in this reading are drawn from Angel.

1 Lighting Models

You'll notice that when we color objects directly using RGB, there is no shading or other realistic effects. They're just cartoon objects. In fact, since there is no shading, it's impossible to see where two faces meet unless they are different colors.

Lighting models are a replacement for direct color, where we directly specify what color something is using RGB. Instead, the actual RGB values are computed based on properties of the object, the lights in the scene, and so forth.

There are several kinds of lighting models used in computer graphics, and within those kinds there are many algorithms. Let's first lay out the landscape and then explore what's available in OpenGL. The two primary categories of lighting are:

  * Global: take into account properties of the whole scene
  * Local: take into account only material, surface geometry, and lights (location, kind, and color)

Global lighting models take into account interactions of light with objects in the room. For example:

  * a light will bounce off one object and onto another, lighting it
  * objects may block light from a source
  * shadows may be cast
  * reflections may be cast
  * diffraction may occur

Global lighting algorithms fall into two basic categories.

Raytracing: conceptually, the algorithm traces a ray from the light source onto an object in the scene, where it bounces onto something else, and then onto something else, until it finally hits the eye. Often the ray of light will split, particularly at clear surfaces such as glass or water, so you have to trace two light rays from then on. Most rays of light won't intersect the eye, so for efficiency, algorithms may trace the rays backwards, from the eye into the scene, back towards light sources (either lights or lit objects). Figure 1 illustrates this.

Radiosity: any surface that
is not completely black is treated as a light source, as if it glows. Of course, the color that it emits depends on the color of light that falls on it. The light falling on a surface is determined by direct lighting from the light sources in the scene and also by indirect lighting from the other objects in the scene. Thus, every object's color is determined by every other object's color. You can see the dilemma: how can you determine what an object's color is if it depends on another object whose color is determined by the first object's color? How to escape? Radiosity algorithms typically work by iterative improvement (successive approximation): first handling direct lighting, then primary effects (other objects' direct-lighting color), then secondary effects (other objects' indirect-lighting color), and so on, until there is no more change.

Figure 1: This figure illustrates how ray tracing works, tracing light rays back into the scene.

Global lighting models are very expensive to compute. According to Tony DeRose, rendering a single frame of the Pixar movie Finding Nemo takes four hours. For The Incredibles, the new Pixar movie, rendering each frame takes ten hours, which means that the algorithms have gotten more expensive even though the hardware is speeding up.

Local lighting models are perfect for a pipeline architecture like OpenGL's, because very little information is taken into account in choosing the RGB. This enhances speed at the price of quality. To determine the color of a polygon, we need the following information:

  * material: what kind of stuff is the object made out of? Blue silk is different from blue jeans. Blue jeans are different from black jeans.
  * surface geometry: is the surface curved? How is it oriented? What direction is it facing? How would we even define the direction that a curved surface is facing?
  * lights: what lights are in the scene? Are they colored? How bright are they? What directions does the light go?

The rest of this document describes local lighting models, as in OpenGL.
The mathematical lighting model that is used by OpenGL, and which we'll be developing, is called the Phong model. We're going to proceed in a bottom-up fashion, first explaining the conceptual building blocks (section 2) before we see how they all fit together (section 5).

2 Local Lighting

To see a demo of what we'll be able to accomplish with material and lighting, run the lit teddy bear demo:

    ~cs307/public_html/demos/material-and-lighting/TeddyBearLit

Once the program is running, right-click to get a menu, and then choose "enable lighting." You can also choose "show lights." By turning the lighting on and off, you can see the effects of light and material versus direct color.

2.1 Material Types

Because local lighting is focussed on speed, a great many simplifications are made. Many of these may seem simplistic or even bizarre. The first thing is to say that there are only three ways that light can reflect off a surface. Figure 2 illustrates this.

Figure 2: Ways that a light ray can interact with a surface (a, b, c).

Diffuse: these are rough surfaces, where an incoming ray of light scatters in all directions. The result is that the direction from which the material is viewed doesn't matter much in determining its color and intensity. Examples: carpet, cloth, dirt, rock, dry grass. Look at the lit bear from different angles and you'll see.

Specular: these are smooth, shiny surfaces, where an incoming ray of light might bounce, mirror-like, and proceed on. The result is that if the camera is lined up with the reflected rays, we'll see a bright spot caused by that reflection. This is called a specular highlight. Examples: plastic, metal, polished leather. Look at the lit bear's eyes and you'll see a specular highlight.

Translucent: these are surfaces that transmit as well as reflect light. These can really only be handled properly using ray tracing. Local lighting can do transparency, after a fashion; however, we will not be talking more about transparency in this course. Examples: water, glass.

So we really only need to understand specular and diffuse surfaces.

3 Kinds of
Light

In talking about kinds of material, we divided them into diffuse, specular, and translucent (though we're not going to talk about that last one). Of course, most materials have some of each: you get color from the diffuse properties of, say, leather, but a shine of specular highlight at the right angle. A major part of the Phong light model, then, is light interacting with these two properties of material. The model therefore divides light into different kinds, so that the diffuse light interacts with the diffuse material property and the specular light interacts with the specular material property. The three kinds of light are:

  * ambient
  * diffuse
  * specular

As we just said, the diffuse and specular light components interact with the corresponding material properties. What is ambient light? As you might guess from the name, it's the light all around us. In most real-world scenes, there is lots of light around, bouncing off objects and so forth. We can call this ambient light: light that comes from nowhere in particular. Thus, ambient light is indirect and non-directional. It's the local-lighting equivalent of radiosity.

Even though in local lighting we don't trace ambient light rays back to a specific light source, there is still a connection. This is because, in the real world, when you turn on a light in a room, the whole room becomes a bit brighter. Thus, each OpenGL light source can add a bit of ambient light to the scene, brightening every object. That ambient light interacts with the ambient property of a material. Because of the way it's used, a material's ambient property is often exactly the same color as the diffuse property, but they need not be.

Thus, each material also has the three properties: ambient, diffuse, and specular. We'll get into the exact mathematics later, but for now you can think of these properties as colors. For example, the ambient property of brown leather is, well, brown, so that when white ambient light falls on it, the leather looks brown. Similarly, the diffuse property is brown. The specular
property of the leather is probably gray (colorless), because when white specular light reflects off shiny leather, the reflected light is colorless, not brown.

4 Light Sources

In OpenGL, we can have global ambient light plus up to 8 particular light sources, each of which can be one of three different types. These light sources are not accurate models of the real world, but they are reasonable approximations.

4.1 Global Ambient Light

As we said above, ambient light is generalized, non-directional light that illuminates all objects equally, regardless of their physical or geometrical relationship to any light source. In OpenGL, you can specify a global ambient value. The default is 0.2; that's as if there were a uniform gray light of (0.2, 0.2, 0.2) falling on every object. The following code sets that value explicitly:

    GLfloat global_ambient[] = {0.2, 0.2, 0.2, 1.0};
    glLightModelfv(GL_LIGHT_MODEL_AMBIENT, global_ambient);

Note that the function glLightModelfv ends in a "v"; this means that the argument is a vector, that is, an array. In this case, it is a four-place vector of RGBA values. We'll talk about RGBA below, in section 6.

In addition, each of the eight light sources (see the following sections) can have its own ambient value. When the light source is enabled, that ambient light is added to the global ambient light, brightening the scene.

4.2 Point Sources

The most common source of light in OpenGL is a point source. You can think of it as a small light bulb, radiating light in all directions around it. This is somewhat unrealistic, but it's easy to compute with. Note that GL_LIGHT0 is one of the eight OpenGL light sources; they are named GL_LIGHT0 through GL_LIGHT7. If we wanted to make GL_LIGHT0 a dim red light, suitable for the red-light district in Amsterdam, we would do the following:

    GLfloat dimRed[] = {0.5, 0.0, 0.0, 1.0};
    glLightfv(GL_LIGHT0, GL_AMBIENT, dimRed);
    glLightfv(GL_LIGHT0, GL_DIFFUSE, dimRed);
    glLightfv(GL_LIGHT0, GL_SPECULAR, dimRed);

Our light needs a location, too, which is a point; that's the definition of being a point source. To make GL_LIGHT0 be at location
(10, 20, 30), we would do the following:

    GLfloat light0place[] = {10.0, 20.0, 30.0, 1.0};
    glLightfv(GL_LIGHT0, GL_POSITION, light0place);

The 1 is the "w" component of the homogeneous coordinates for the light's position.

The intensity of light attenuates (falls off) with the square of the distance. This is because the area of the surface of a sphere is proportional to the square of the radius, so the photons are spread out over a wider area. For example, if the Earth were twice as far from the sun as Venus, it would get 1/4 as much solar energy as Venus gets. (Actually, the Earth is about 1.4 times as far from the sun, so it gets about half the solar energy: 1/1.4^2 is about 0.51.) This is just physics. Thus, that equation is incorporated into OpenGL:

    I(p) = I_l / d^2

The intensity at a point p from the light source l is the intensity of the light source divided by the square of the distance d between p and l. However, the inverse-square law can be harsh, with the light falling off too fast. Therefore, it's often softened, like so:

    I(p) = I_l / (a + b*d + c*d^2)

where a, b, and c are parameters that the OpenGL programmer can control:

    glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION, a);
    glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION, b);
    glLightf(GL_LIGHT0, GL_QUADRATIC_ATTENUATION, c);

If you're physics-minded, you'll object that this isn't realistic, and you'd be right, but it's one of those approximations that are made in local lighting. If we wanted GL_LIGHT0 to have no attenuation (which is the default), we would do:

    glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION, 1);
    glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION, 0);
    glLightf(GL_LIGHT0, GL_QUADRATIC_ATTENUATION, 0);

4.3 Spotlights

A spotlight is just a point source limited by a cone of angle theta. Think of those Luxo lights made famous by Pixar. Intensity is greatest in the direction that the spotlight is pointing, trailing off towards the sides, and dropping to zero when you reach the cutoff angle theta. One way to implement that: let alpha be the angle between the spotlight direction and the direction that we're interested in. When alpha = 0, the intensity is at its maximum. The
cosine function has that property, so we use it as a building block. To give us additional control over the speed at which the intensity drops off (and therefore how concentrated the spotlight is), we allow the user to choose an exponent e for the cosine function. The resulting function is:

    I = I_max * cos^e(alpha)   if alpha < theta
    I = 0                      otherwise

To do this in OpenGL, we use the following functions:

    glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, vector);
    glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, angle);
    glLightf(GL_LIGHT0, GL_SPOT_EXPONENT, exp);

To have GL_LIGHT0 pointing downwards and to the left, in a cone of angle 90 degrees (like a Luxo lamp), with an exponent of 2, we would do the following, in addition to all the other information:

    GLfloat spotDir[] = {-1.0, -1.0, 0.0};
    glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, spotDir);
    glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 45);
    glLightf(GL_LIGHT0, GL_SPOT_EXPONENT, 2);

(The cutoff is measured from the center of the cone, so a 90-degree cone has a 45-degree cutoff.) Remember that some of these parameters need arrays and therefore use glLightfv, while others need scalars and therefore use glLightf.

Figure 3: Spotlight intensity curves (plots of cos^e(alpha) for several exponents e).

4.4 Distant Lights

If a light source is nearby, the vector from the surface point to the light source changes from place to place. Since that vector is important in the lighting computation (the Phong model), having a nearby light means that the calculation must be redone for each point on the surface. However, if the light is distant, the vector doesn't change, and so the computation can be done only once. Therefore, the main idea is to speed up the computation by avoiding having to recompute the angle with the light for each point on the object. Light comes from infinity, from a particular direction. For example, any outdoor scene with sunlight (or even moonlight) would benefit from this.

In OpenGL, this is done by giving the position of the light in homogeneous coordinates: distant lights are vectors (last component is zero) and near lights are points (last component is unity).
4.5 Color Sources

So far, we've been talking only about the intensity of light, as a function of light distance, angle, and so forth. Intensity is a scalar (one-dimensional) quantity ranging from zero to one, so at this point we're really talking about black-and-white images. To handle color, we will treat each of the three primary colors the same way, so we will just have scalar equations for now. The actual color of a light (or an object) is just the intensity of each of the three primary colors: I = (I_r, I_g, I_b).

5 Phong Model

Phong's model combines all of the above into one big, hairy equation. It's a reasonable approximation of the real physics that can be computed in a reasonable amount of time. In this section, we'll work top-down, attacking it one piece at a time. First, we need some notational building blocks:

  * n: the normal vector for the surface. In this context, "normal" means perpendicular (you'll also sometimes hear the term "orthogonal"), so the normal vector is a vector that is perpendicular to the surface. The normal vector is how we define the orientation of a surface: the direction it's facing. The normal vector is the same over a whole plane, but may change over each point on a curved surface.
  * l: the vector towards the light source; that is, a vector from the point on the surface to the light source. Not used for ambient light. Doesn't change from point to point for distant lights.
  * v: the vector towards the viewer; that is, the Center of Projection (COP). If v says that the surface faces away from the viewer, the surface is invisible, and OpenGL can skip the calculation.
  * r: the reflection direction of the light. If the surface at that point were a shiny plane, like a mirror, r is the direction that l would bounce to.

Figure 4: The vectors of the Phong model.

5.1 Cosine

In the handout on geometry, we learned how simple it can be to find the cosine of the angle between two vectors, especially if they are normalized. Assume these are normalized, so that we can compute cosines by a simple dot product.
That's three multiplies and two adds.

5.2 Kinds of Light

Now we do something weird: we break light into

  * three colors of light: red, green, and blue (RGB), and
  * three kinds of light: ambient, diffuse, and specular (ADS).

Thus, we have 3 x 3 = 9 different light intensities to worry about. We can throw them into a matrix:

        [ L_ra  L_ga  L_ba ]
    L = [ L_rd  L_gd  L_bd ]
        [ L_rs  L_gs  L_bs ]

But we're not actually going to do any matrix equations, so that's not the issue. We can treat each color of light the same way, so let's just drop that subscript. So the L values here are the intensity of the light: turn 'em up and the light gets brighter.

We also have to worry about how much of the incoming light gets reflected. Let this be a number called R. This number is a fraction, so if R = 0.8, that means that 80 percent of the incoming light is reflected. We actually have 9 such numbers, such as the reflection fraction for specular red, ambient green, and so on, for all 9 combinations. As we discussed earlier, in general R can depend on:

  * material properties (cotton is different from leather)
  * orientation of the surface
  * direction of the light source
  * distance of the light source

Of course, R is really a function of those four parameters. The light that gets reflected is the product of the incoming light intensity L and the fraction R:

    I = L * R

That is, the intensity of light that is reflected (and ends up on the image plane and in the framebuffer) is the incoming light intensity multiplied by the reflection number. Note that the previous equation is just shorthand for:

    I = L_ra*R_ra + L_ga*R_ga + L_ba*R_ba
      + L_rd*R_rd + L_gd*R_gd + L_bd*R_bd
      + L_rs*R_rs + L_gs*R_gs + L_bs*R_bs

And that's just for one light. If we have multiple lights, we have (returning to our shorthand):

    I = sum over i of L_i * R_i

This leads to the problem of overdriving the lighting, where every material turns white because there's so much light falling on it. This happens sometimes in practice: you have a decently lit scene, you add another light, and then you have to turn down your original lights and your ambient to get the balance right.

Why does R depend on i? That is, why does the reflection fraction depend on which light we're talking about? Because the direction and distance change. But since all the light sources work the same way, we're not going to worry about i, and we'll just have:

    I = L_a*R_a + L_d*R_d + L_s*R_s    (1)

That is, the intensity of the light coming from an object is:

  * the ambient light falling on it, multiplied by the reflection amount for ambient light, plus
  * the diffuse light falling on it, multiplied by the reflection amount for diffuse light, plus
  * the specular light falling on it, multiplied by the reflection amount for specular light.

Equation 1 is our abstract Phong model. Now let's see how to compute the three R values.

5.3 Ambient

Reflection of ambient light obviously doesn't depend on direction, distance, or orientation, so it's solely based on the material property: is the material dark or light? Note that it can be dark for blue and light for red and green. (If white light falls on such a material, what does it look like?) So R_a is a simple constant, which we will call k_a, just to remind ourselves that it's a constant:

    R_a = k_a    (2)

Note that 0 <= k_a <= 1. (Why?) This k_a constant is chosen by the OpenGL programmer as part of the material properties for an object, in the same way that you choose color. There are actually three such values, one each for red, green, and blue.

5.4 Diffuse

Diffuse reflection is also called Lambertian, after Johann Heinrich Lambert. For diffuse (matte) surfaces, we assume that light scatters in all directions, so the viewing direction doesn't matter. However, the angle of the light does matter, because the energy (photons) is spread over a larger area: a beam of light of width d that strikes the surface at an angle covers an area of d/cos(theta), where theta is the angle between the light direction and the surface normal. Consequently, we have:

    R_d = k_d (l . n)    (3)

That is, the amount of reflection from a diffuse surface is a constant, chosen by the OpenGL programmer, multiplied by the cosine of the angle between l and n. As before, there are actually 3 such constants, one each for red, green, and blue.

5.5 Specular

Specular surfaces are somewhat mirror-like.
Imagine the ball on the left in the picture below is a ping-pong ball or a billiard ball: shiny and smooth. The light rays bounce off, and some bounce right to our eyes.

Doing specularity right is hard. Again, Phong's model is a compromise. We assume that the material is smooth in the vicinity of the point, and that a bunch of light is bouncing in direction r. If the direction of our view, v, is near r, we should get a bunch of that reflected light. Hence:

    R_s = k_s (r . v)^e    (4)

As usual, 0 <= k_s <= 1. The dot product is large when the two vectors are lined up. The exponent e is a number that gives the shininess: the higher the shininess, the smaller the bright spot. OpenGL allows 0 <= e <= 128. In addition to e, the OpenGL programmer gets to choose k_s for each of red, green, and blue.

5.6 The Phong Model

All together now: add up equations 2, 3, and 4 to get

    I = k_a*L_a + k_d*L_d (l . n) + k_s*L_s (r . v)^e    (5)

Compare that to equation 1. To account for distance, which applies to diffuse, specular, and ambient for any particular light source (but not to the global ambient), we invent three more constants and use:

    I = (k_d*L_d (l . n) + k_s*L_s (r . v)^e + k_a*L_a) / (a + b*d + c*d^2)    (6)

Note that the "d" in the subscripts of the numerator is for "diffuse," while the "d" in the denominator is for "distance."

6 Transparency

In addition to RGB (red, green, and blue), OpenGL allows a computation on a fourth component, called alpha, but usually notated A. Thus, a light or a color is an RGBA value. An alpha value of 1 is opaque. An alpha value of 0 is completely transparent, and so the color doesn't matter. Intermediate values mix some color in but allow others to show through. We'll talk more about this in a later class devoted to transparency; for now, we'll keep the alpha value at 1. You saw this, for example, in the code for setting the global ambient in section 4.1.

7 Light and Material in OpenGL

Have you been keeping count of how many parameters there are to control? Let's summarize.

For the scene as a whole:

  * 3 global ambient primaries
  * 3 attenuation constants (a, b, c)

For each material:

  * 3 ambient constants
  * 3 diffuse constants
  * 3 specular constants
  * 1 shininess constant
For each surface:

  * position in 3D
  * orientation (surface normal) in 3D

For each light:

  * 4 position coordinates (homogeneous coordinates)
  * 3 ambient constants
  * 3 diffuse constants
  * 3 specular constants
  * optionally, a direction vector, cutoff angle, and concentration exponent

That's a huge number of parameters to play with. It's actually even worse, because OpenGL allows you to play with the transparency of a color, so color is actually specified in a four-dimensional space, RGBA. The A dimension, sometimes called alpha, is the opacity of the color, where 1 means completely opaque and 0 means perfectly transparent. We will talk about transparency in a few weeks. For now, I suggest you make your life a little easier and ignore it. However, you will see it in one of the tutors, so I wanted to warn you.

Now let's look at how this actually works in OpenGL. First, please look at two tutors for light and material:

  * ~cs307/pub/Tutors/lightmaterial
  * ~cs307/pub/demos/materialTutor

The first tutor uses plain OpenGL functions. I suggest you cd to the directory and run it there, because there are data files that it wants to access. This tutor lets you select a variety of figures (right-click in the upper-left subwindow), but they are all made of just one material. For that material, you can control 17 parameters:

  * 4 ambient constants. These are the constants multiplied by the RGBA intensity of incoming light. They are also multiplied by the global ambient.
  * 4 diffuse constants. These are analogous to the ambient, except there's no global diffuse.
  * 4 specular constants. Similar to ambient.
  * 4 emission constants. I haven't talked about these; you can think of them as constants that make the object glow. The default emission is zero, so you can safely ignore these, which I recommend.
  * 1 shininess parameter, which is the exponent in the Phong equation.

You can also control 16 parameters for the one light source:

  * 4 for position. Recall that we use homogeneous coordinates, so you can make a distant light by setting the w coordinate to zero.
  * 4 for the ambient light. Each light
produces ambient light: its contribution to the total ambient light. Strange but true. Even weirder, the light color is specified as RGBA, so the light can be "transparent," whatever the heck that means.

  * 4 for diffuse light. These get multiplied by the diffuse reflection constants.
  * 4 for specular light. Same comments as for diffuse.

Wow, that's a lot. Try some of the following:

  * try moving the light around. Try putting it in the center of the dolphins. Can you see the effects, particularly on the specular highlight?
  * try making a distant light and flipping it up and down
  * try playing with the different properties (ambient, diffuse, and specular) and the colors
  * try different shininesses

The TW tutor is a bit different, and hopefully a bit simpler. Instead of 17 parameters for a single material, TW's API allows only 5, for the following reasons:

  * the ambient triple and the diffuse triple are often the same, based on the intrinsic color of the material. In fact, there's an OpenGL call that allows you to specify them both at once, using a single quadruple of values.
  * the specular term is usually gray (all three RGB components have the same value), since the material is acting as a mirror at that point.

This reduces the 17 parameters to 5. TW allows you to specify just those five, without having to build as many matrices and send them to OpenGL:

    twColor(triple, specular, shininess);

where the triple is the ambient-and-diffuse color, and specular and shininess are scalars. Furthermore, the global ambient is often gray light, so TW allows you to set that using a single function call:

    twAmbient(value);

This API is what the materialTutor lets you experiment with. It doesn't offer a choice of objects, but it does let you save a particular set of parameters, so you can do a side-by-side comparison of a particular change.

8 Polygonal Shading

Polygons are easy to shade, compared to curved surfaces, because they're flat over a region (hopefully several pixels). We'll look at three ways to shade a polygon; OpenGL allows two of them. For curved surfaces, we think in terms of representing them as a mesh of polygons,
thereby reducing the problem of shading them to that of shading polygons. The following picture is of a polygon mesh.

For additional demos, you can look at the following:

  * ~cs307/pub/demos/08-TeddyBearLit
  * ~cs307/pub/demos/09-JewelAndBall

The first is one we've looked at before, but as we know, the spheres of the teddy bear are implemented with polyhedral approximations, depending on slices and stacks. You can try modifying those to see the effects on shading. The second demo contrasts a faceted jewel with a smooth ball; the remarkable thing is that the objects are identical except for the shade model. OpenGL allows two kinds of shading:

  * Flat shading, exemplified by the jewel
  * Smooth shading, exemplified by the ball

8.1 Flat Shading

To use flat shading in OpenGL, you use the following call. This sets a policy that is applied to all subsequent polygons:

    glShadeModel(GL_FLAT);

The shading assumptions with flat shading are the following:

  * n is constant; that is, the polygonal patch is flat, so the surface normal is the same over the entire patch
  * v is constant, if the viewer is far (we can set the "near viewer" flag to false; see below)
  * l is constant, if the light is far compared to the size of the polygon, or the light is directional
  * r is constant if l is

Thus, all pixels are the same shade, since all the relevant vectors are constant over the polygon. This is flat (or constant) shading. Adjoining polygons will have different shades if they have different values for any of the important vectors. Fortunately (or unfortunately), the human eye is remarkably sensitive to differences from one patch to the next: lateral inhibition causes Mach bands. Lateral inhibition has to do with how the neurons in the eye work, and the effect is that at boundaries where shade changes, the difference (or contrast) is enhanced. Lateral inhibition produces a kind of image enhancement. (Figure: perceived intensity versus actual intensity at a shading boundary.)

The Mach Band effect is why it's so obvious when we use flat shading with the teddy bear. Thus, flat shading is appropriate when the actual object is
faceted, like a jewel; but if the polyhedron is just an approximation to a real object that is smooth, like a ball, it's better to use smooth shading.

8.2 Gouraud Shading

Smooth shading is sometimes called Gouraud shading, in honor of one of the pioneers. To get smooth shading in OpenGL, use the following call:

    glShadeModel(GL_SMOOTH);

Smooth shading assumes that the surface normal at each vertex is different, and so the shade of each vertex is computed separately (the l and v vectors might also be different). Then the shade of all the interior pixels is computed by interpolation.

8.3 Phong Shading

A third form of shading, which isn't available in OpenGL, is Phong shading. It's like smooth shading, but instead of interpolating the shade, the vectors are interpolated, particularly the surface normal. Then the shade of each pixel is computed. The result is smoother and nicer than Gouraud, but is more computation-intensive, so it's done offline.

8.4 Normals

Since all of these computations depend on normals, how do we determine the normal vector at a vertex in OpenGL? The programmer defines the normal at a vertex using one of the following two calls:

    glNormal3f(x, y, z);
    glNormal3fv(v);   // v is an array of three floats

As with colors, you define the normal before you send the vertex down the pipeline, and that normal applies to all subsequent vertices. For example, here's a triangle in the z = 0 plane (the particular vertex coordinates are chosen for illustration):

    glBegin(GL_TRIANGLES);
    glNormal3f(0.0, 0.0, 1.0);
    glVertex3f(0.0, 0.0, 0.0);
    glVertex3f(1.0, 0.0, 0.0);
    glVertex3f(0.0, 1.0, 0.0);
    glEnd();

Note that the current transformation matrix (CTM) is applied to normals as well as vertices as they go down the pipeline, so you can transform the triangle above to anywhere in your scene, at any angle, and the normal goes along for the ride. This is really useful.

If you have a more difficult situation, you can always do the following:

• Choose three points on your surface; three points define a plane.
• Compute two vectors by subtracting those points (hint: use twVector).
• Compute the cross product of those two vectors (hint: use twCrossProduct).
• Normalize the cross product (hint: use twVectorNormalize).

That normalized cross product
is the surface normal of the plane, so you can give it to OpenGL using glNormal3fv.

The default normal is the vector (0, 0, 1), which might be completely wrong. If so, your diffuse and specular terms will be nearly zero, and the usual result is that the surface is black.

You can define a different normal each time you send a vertex down the pipeline. Specifically, if you send vertex V down the pipeline twice, it can have a different normal each time. Why might you do that? Figure 5 shows some of the vertices in the jewel and ball and their normals. The glut objects, such as glutSolidSphere, define the normals for you; yet another reason they're nice.

Note that OpenGL will usually assume that your normals are normalized. If they're not, they'll usually be longer than unit length, resulting in light values that are too high, and your surfaces become white. There are two solutions:

Figure 5: The normals on the jewel change depending on which facet is being sent down the pipeline, but the normals for the ball don't.

• Normalize your vectors yourself, as needed.
• Tell OpenGL to normalize all vectors automatically, using glEnable(GL_NORMALIZE).

The former is more efficient, but less convenient. TW enables GL_NORMALIZE, so you don't have to worry about normalizing vectors when using TW.

9 Lighting and Light Sources

First, you must remember to enable lighting. There's so much else to do, it's easy to forget:

    glEnable(GL_LIGHTING);

You can turn lighting off and on as desired, but typically we set it up once and leave it on. OpenGL allows all four types of light (ambient, point, spot, and directional) and at least eight sources. We have to specify a ton of information, but it's organized by the Phong model. Notice that you have to enable each light as well as specifying a bunch of information about it.

9.1 Global Ambient

    GLfloat global_ambient[] = { 0.2, 0.2, 0.2, 1.0 };
    glLightModelfv(GL_LIGHT_MODEL_AMBIENT, global_ambient);

9.2 Point Lights

All the following are necessary for each point light. You can have up to at least eight lights, GL_LIGHT0 through GL_LIGHT7.
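Before wading into the per-light calls, it may help to see how a light's parameters get combined with the material's under the Phong model described earlier. Here is a plain-C++ sketch of one color channel for a single light; the struct and function names are mine for illustration (they are not OpenGL or TW API), and the spotlight and emission terms are omitted:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Minimal vector type for the sketch.
struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) {
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// One channel (say, red) of the Phong sum for one enabled light.
// La/Ld/Ls: light ambient/diffuse/specular; Ma/Md/Ms: material.
// n: surface normal, l: direction to the light, r: reflection direction,
// v: direction to the viewer (all assumed unit length).
// atten: the distance attenuation factor, 1/(a + b*dist + c*dist*dist).
double phongChannel(double La, double Ld, double Ls,
                    double Ma, double Md, double Ms,
                    double shininess,
                    Vec3 n, Vec3 l, Vec3 r, Vec3 v,
                    double atten) {
    double diffuse  = std::max(dot(n, l), 0.0);                  // n.l clamped at 0
    double specular = std::pow(std::max(dot(r, v), 0.0), shininess);
    return atten * (La*Ma + Ld*Md*diffuse + Ls*Ms*specular);
}
```

With the light shining straight down the normal and the viewer sitting on the reflection direction, the channel reduces to La·Ma + Ld·Md + Ls·Ms, which is a handy sanity check when a surface renders black or white.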
Some OpenGL implementations allow more; consult GL_MAX_LIGHTS. (Our implementation allows 8; you can see this with the OOLimits demo.)

    glEnable(GL_LIGHT0);
    GLfloat light0_place[]    = { 1.0, 2.0, 3.0, 1.0 };  // x, y, z, w values
    GLfloat light0_ambient[]  = { 1.0, 0.0, 0.0, 1.0 };  // r, g, b, a values
    GLfloat light0_diffuse[]  = { 1.0, 0.0, 0.0, 1.0 };  // r, g, b, a values
    GLfloat light0_specular[] = { 1.0, 1.0, 1.0, 1.0 };  // r, g, b, a values
    glLightfv(GL_LIGHT0, GL_POSITION, light0_place);
    glLightfv(GL_LIGHT0, GL_AMBIENT,  light0_ambient);
    glLightfv(GL_LIGHT0, GL_DIFFUSE,  light0_diffuse);
    glLightfv(GL_LIGHT0, GL_SPECULAR, light0_specular);

9.3 Attenuation

We can add distance effects, specifying the a, b, and c with one call each:

    glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION, a);
    glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION, b);
    glLightf(GL_LIGHT0, GL_QUADRATIC_ATTENUATION, c);

9.4 Spotlights

In addition to all the information for the point lights, you can do the following to set up a spotlight:

    glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, vector);
    glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, angle);
    glLightf(GL_LIGHT0, GL_SPOT_EXPONENT, exp);

OpenGL limits the spotlight exponent to the range [0, 128] and the angle to between 0 and 90, with a special angle of 180. The cutoff is the half-angle: the angle at the top of the light. That is, it's a limit on the angle between the vector to the vertex and the spot direction.

9.5 TW Lights

OpenGL allows you to control all 9 parameters of a light, but in practice most lights are gray, so

    twLight(lightid, position, value);

sets the position and value: all 9 parameters are set to value for a given light. This may be too restrictive; I'm open to suggestions.

9.6 Distant Viewer

Distant is the OpenGL default, since it's faster. If we want a near viewer:

    glLightModeli(GL_LIGHT_MODEL_LOCAL_VIEWER, GL_TRUE);

10 Materials

Specifying the material of a vertex is a bit like specifying a light, at least in the way the function calls work. As with glLightf, there are basically two functions:

    glMaterialfv(face, parameter, values_vector);
    glMaterialf(face, parameter, value);

The face is one of the following OpenGL constants: GL_FRONT, GL_BACK, GL_FRONT_AND_BACK. If one side of your polygon is a different color or material than the other, you can
specify which side you are giving the color of. How does OpenGL define which side of a polygon is the front? When viewed from the front, the vertices of a polygon are counterclockwise. You can control this with glFrontFace; see the man page of that function for more information.

The second argument, called parameter, is one of the following OpenGL constants: GL_AMBIENT, GL_DIFFUSE, GL_SPECULAR, GL_AMBIENT_AND_DIFFUSE, GL_SHININESS, GL_EMISSION. You choose one of those based on the property you'd like to set. The value you set the property to is the last argument. Some values need to be vectors; in fact, all of them except for shininess are RGBA arrays.

11 Shading Large Areas

We'll play with the 09-Spotlight demo. This illustrates the important point that OpenGL only knows about vertices, so the play of light over a big polygon will be disappointing. Suppose you have a spotlight falling entirely in the interior of a rectangle, away from its corners. All four vertices will be unlit, so any kind of interpolation will result in a rectangle that is completely black, despite the light falling on it. To solve this problem, we break the polygon up into lots of little polygons in a mesh, and handle it as we have curved surfaces: the slices-and-stacks idea. A convenience for this is twDrawUnitSquare, which draws a unit square broken up into small rectangles; you choose how many in each direction.

11.1 Two-Sided Planes

By default, the color of the back side of polygons is not computed, so they'll be black. If you want that calculation to be made, use the following OpenGL call:

    glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);

Lecture on Plane Geometry

As we've seen, CG usually breaks down a model into a large number of planar regions: quads and triangles. Even curved surfaces are ultimately rendered as a large number of planar facets. Very often the CG system then needs to do some additional geometry with the planes, such as determining whether a ray of light (say, from a spotlight) intersects a planar region. To do that, we need a bit of geometry. To motivate this, we will look at the following
demo: ~cs307/public_html/demos/animation/Laser.cc

This program has an animation of a UFO flying over a field with a barn on it. It has a photon torpedo that it fires in random downward directions. We can animate the moving photon torpedo, but we need to determine when and where the torpedo intersects the field or the barn, so that we can draw the explosion.

1 Implicit Equation of a Plane

First, let's see how to define the implicit equation of a plane. Let P0 be a specific point on the plane: any point, but one where we know the coordinates. Let n be the normal vector for the plane. Here is an example:

    P0 = (5, 4, 7)
    n = (1, 2, 2)

Now note that any vector lying in the plane must be perpendicular to the normal vector. If we let P be a variable standing for every point in the plane, P = (x, y, z), then we know that

    0 = n·(P − P0)

With a little abuse of notation, we can derive

    0 = n·P − n·P0

Using our example:

    0 = n·(P − P0)
      = (1, 2, 2)·(x − 5, y − 4, z − 7)
      = (x − 5) + 2(y − 4) + 2(z − 7)
      = x + 2y + 2z − (5 + 8 + 14)
      = x + 2y + 2z − 27

Note that this is an implicit equation of a plane: it can tell us whether a point is on the plane or not, but it doesn't easily generate points on the plane. Also, the implicit equation doesn't generalize to higher dimensions, but that's a problem for mathematicians, not us. In general, the equation of a plane in 3D is

    ax + by + cz + d = 0

(Footnote 1: Some of this development is helped and improved by Dan Sunday's work at http://geometryalgorithms.com/Archive/algorithm_0104/algorithm_0104.htm)

2 Parametric Equation of a Plane

Another way to define a plane is by a point on the plane and two vectors that lie in the plane. (This works in higher-dimensional spaces, not just 3D.) Let the two vectors be v and w. We can then define the equation

    P(s, t) = P0 + sv + tw

A very common situation for this is when we have a triangle of three points, P0, P1, P2, and we develop it like this:

    v = P1 − P0
    w = P2 − P0
    P(s, t) = P0 + sv + tw = (1 − s − t)P0 + sP1 + tP2

This is a nice equation, using only the three points we started with, plus two parameters. Notice that the point P(s, t) is inside the triangle when

    0 ≤ s,   0 ≤ t,   s + t ≤ 1

The point is on the
perimeter if s = 0, t = 0, or s + t = 1; each condition corresponds to one edge. For a parallelogram, we have 0 ≤ s, t ≤ 1. Of course, if we want the implicit equation, we can take the cross product of v and w to find n, and proceed from there.

3 Fun Facts

3.1 Any Point

Notice that the argument for the implicit equation works for any point on the plane, so that if we used P1 instead of P0, we should get the same equation:

    0 = n·P − n·P0
    0 = n·P − n·P1
    n·P0 = n·P1

This means that the constant term d in the equation is the same for any point on the plane: the normal vector dotted with any point on the plane yields this same value.

3.2 Finding the Normal Vector

Notice that you can just read off a normal vector to a plane from its implicit equation: n = (a, b, c). Very convenient.

3.3 Normalized Normal Vector

In the development, you'll notice that we just chose an arbitrary normal vector. What happens if we choose a different one, say a scalar multiple kn?

    0 = kn·(P − P0)

Figure 1: Distance from a Point to a Plane. Q is the point not on the plane, and P0 is some point on the plane; θ is the angle between the normal vector and the vector from P0 to Q.

Of course, we can just multiply both sides of this equation by 1/k and get the same result, so it doesn't matter. It's common to use a normalized normal vector, so that |n| = 1. Using our example, we have n = (1, 2, 2), so |n| = 3, and

    0 = x + 2y + 2z − 27   becomes   0 = (1/3)x + (2/3)y + (2/3)z − 9

3.4 Dividing Space

If we define

    f(x, y, z) = ax + by + cz + d

this function will, of course, be zero when the point lies on the plane. More interestingly, this function divides space into two halves: the points in one half yield a positive result, and the points in the other half a negative result. Thus, you can use the function to tell which side of a plane a point is on.

3.5 Distance from a Point to a Plane

If we use a normalized normal in our implicit equation, we have an interesting property, namely that the function gives the signed distance of the point to the plane. Why? Let θ be the angle between the normal vector and the vector u from a point on the plane to the point off the plane (see figure 1). The
perpendicular distance is then

    dist = |u| cos θ = (n·u) / |n| = n·(Q − P0) / |n|

When |n| = 1, the denominator goes away. Also, since P0 is a point on the plane, −n·P0 is just d, the constant in the implicit equation. Thus we have

    dist = n·Q − n·P0 = n·Q + d = aQx + bQy + cQz + d

This is just our implicit equation. So the key thing is that if the normal vector is unit length, the implicit equation gives the distance from the point to the plane.

4 Intersecting a Ray with a Plane

Suppose we have a point Q not on the plane (we'll reserve P for points on the plane) and a vector r indicating a ray starting at Q. Does the ray intersect the plane? If so, where? How far? We could solve this many ways. We will do it by creating a parametric equation of the ray and intersecting that with the implicit equation of the plane. That is, we combine these two equations:

    Q(t) = Q + t·r
    0 = ax + by + cz + d

And we get

    0 = a(Qx + t·rx) + b(Qy + t·ry) + c(Qz + t·rz) + d

This is an equation just in t, so we solve for t and we're almost home. Let's do an example with Q = (11, 15, 8) and r = (−1, 3, 2), using our plane from before:

    0 = (11 − t) + 2(15 + 3t) + 2(8 + 2t) − 27
      = (11 − t) + (30 + 6t) + (16 + 4t) − 27
      = 9t + 30
    t = −10/3

The fact that t is negative tells us that the ray does not intersect the plane: the photon torpedo is moving away from it. If we reversed r, we would get a positive t, and the photon torpedo would intersect the plane. If we compute this intersection parameter with several planes, we know that the torpedo will hit them in the order of the parameter values; thus it will blow up the one with the smallest parameter.

5 Intersecting a Ray with a Plane, Again

The math in the last section is fine, but here's a better way, again thanks to geometryalgorithms.com. Given a line defined by Q and R, and a plane defined by P and N (all knowns; I've dropped the subscripts for convenience), we can substitute the parametric representation of a line into one of our representations of a plane:

    (Q + tR)·N = P·N

From there, we can use some reasonable algebra to solve for t.

Figure 2: Two projections of one vector onto another.

    (Q + tR)·N = P·N
    Q·N + tR·N = P·N
    tR·N = P·N − Q·N
    tR·N = (P − Q)·N
    t = (P − Q)·N / (R·N)

Not exactly intuitive, but simple to
compute. We should check for special cases:

• A ray parallel to the plane, which means there's no intersection. If that's the case, the ray will be perpendicular to the surface normal, so R·N = 0. So we just check that the denominator is not zero.

• A ray lying in the plane. If that happens, not only will the ray be parallel to the plane, but the point P will lie on the plane. That means P − Q is a vector lying in the plane, and therefore its dot product with the normal vector is zero. Thus, in this case, the numerator is zero.

6 Intersecting a Ray with a Triangle

But we don't care about whether the ray intersects the plane; we care whether it intersects a triangle or a quad. We can compute the point of intersection from the parameter and our parametric equation of the ray, and then try to compute the s and t values, so that we can use the constraints in section 2.

6.1 Dot Products as Projections

Before we develop the code for finding the intersection point within a triangle, it's helpful to have, as a building block, a more complete understanding of the usefulness of the dot product. We know that the dot product gives us something like the cosine of the angle between two vectors. In fact, for unit vectors, it gives us exactly the cosine of the angle between them.

Let's start with unit vectors. Suppose we have unit vectors v and w. If we compute the following:

    v′ = (v·w) w
    w′ = (v·w) v

Figure 3: Projecting a non-unit vector.

the v′ and w′ vectors are scalings of the original vectors, where they are scaled by the dot product. Geometrically, this is equivalent to the projection of the other vector onto this one; see figure 2.

But what about non-unit vectors? Consider the projection in figure 3, and assume that v is a unit vector but w is not. To project w onto v, we want to resize v so that it has a length equal to the length of w multiplied by the cosine of the angle between v and w; call that angle θ. The length of v′ should be |w| cos θ. Since

    v·w = |v||w| cos θ

and |v| = 1, we can drop that factor, and we find that |w| cos θ = v·w. The dot product
gives the projection of any vector onto a unit vector.

6.2 Finding the Intersection Point Parameters

If I is the intersection point of the line and the plane, we have

    I = P0 + s(P1 − P0) + t(P2 − P0) = P0 + su + tv

We have to solve this for s and t, the parameters of the intersection point. Since we have three dimensions (three equations) in two unknowns, we can certainly solve this. Indeed, we can solve it three different ways, depending on which equation we decide to leave out. I consulted geometryalgorithms.com and decided, for better or worse, to solve it using their math. Here's my explanation of their math; if you'd like to see their way, consult http://geometryalgorithms.com/Archive/algorithm_0105/algorithm_0105.htm

Let w be the vector from P0 to the intersection point: w = I − P0. We want to solve the following equation for s and t:

    w = su + tv

Note that this equation just says that w is a linear combination of the vectors u and v. They solve this in a very clever way. To solve for t, they construct a vector that is orthogonal (perpendicular) to u but that also lies in the plane; call it u⊥ (pronounced "u perp"). The dot product of u⊥ with u is, of course, zero, so taking the dot product of the right side of this equation with u⊥ nullifies the su term, leaving an equation with only the t parameter to solve for. Of course, there are infinitely many vectors perpendicular to u; it's important that they choose one that lies in the plane, since that gives us the situation shown in figure 4. In that figure, a and b are the projections of w and v onto u⊥. The scalar multiples are

    a = w·u⊥
    b = v·u⊥

Figure 4: Taking the dot product of w with u⊥. We start with known vectors u, v, and w, where w is some linear combination of u and v: w = su + tv. We first find u⊥, the perpendicular to u lying in the plane. Taking the dot product finds tv, the amount of vector v that is in the linear combination of u and v that makes up w.

By similar triangles, the vector tv is to v as a is to b. Therefore, we can find t = a/b. Here it is again, purely algebraically:
    w = su + tv
    w·u⊥ = (su + tv)·u⊥
    w·u⊥ = s(u·u⊥) + t(v·u⊥)
    w·u⊥ = t(v·u⊥)
    t = (w·u⊥) / (v·u⊥)

Note that because the numerator and denominator both have u⊥ in them, it doesn't matter whether u⊥ is a unit vector: the scale factor to normalize it would appear in both the numerator and the denominator, and therefore would cancel. Similarly, we can solve for s by finding a v⊥ that is perpendicular to v and lies in the plane:

    s = (w·v⊥) / (u·v⊥)

How can we find these perpendicular vectors? Since they lie in the plane, they must be perpendicular to the plane normal N, and since the cross product finds a vector perpendicular to two others, we have

    u⊥ = N × u
    v⊥ = N × v

Next, they introduce a computational shortcut. It turns out that there is an identity for cross products, namely

    (a × b) × c = (a·c)b − (b·c)a

We're not even going to think about proving that; we're just going to use it. Consequently, with n = u × v:

    u⊥ = n × u = (u × v) × u = (u·u)v − (u·v)u
    v⊥ = n × v = (u × v) × v = (u·v)v − (v·v)u

And now we can compute s and t using only dot products:

    s = [ (u·v)(w·v) − (v·v)(w·u) ] / [ (u·v)² − (u·u)(v·v) ]
    t = [ (u·v)(w·u) − (u·u)(w·v) ] / [ (u·v)² − (u·u)(v·v) ]

Notice the similarity between the two calculations. The complete calculation only requires five distinct dot products.

We need to check for special cases where the triangle is degenerate. One way this can happen is if two of the points defining the triangle are the same; if so, either u or v will be the zero vector, and u·u = 0 or v·v = 0. Another way is if the three points are colinear, in which case u is a scalar multiple of v. One way to test for this is to test whether the cosine of the angle between u and v is ±1, which happens when

    |u·v| = |u||v|
    (u·v)² = (u·u)(v·v)
    (u·v)² − (u·u)(v·v) = 0

Since the quantity on the left is the denominator of our fractions for computing s and t, all we need to do is check for zero before dividing.

7 Photon Torpedoes

Let's look at the photon torpedo (Laser) demo.

• Notice how the animation of the motion of the UFO is done: we give it an initial location and direction, controlled by global variables. The display function uses the location variable, and the idle callback updates it.

• Notice
how the random direction of the laser is generated. Feel free to use ideas like that if you want random numbers in your own animation.

• Notice how the laser is drawn: its direction vector is normalized and then scaled, to ensure that the barrel is always 20 units long.

• Look at the implementation of blast. First, we find the nearest fragment, using twNearestFragment. We'll look at the code for that function, which is in ~cs307/public_html/tw/tw-geometry.cc:
  - twNearestFragment iterates over the fragments, determines the parameter values, and finds the smallest.
  - twLineTriangleIntersection computes three parameters: the one on the line (ray) and the two in the plane, for the intersection point.
  - twLinePlaneIntersection computes the parameter on the line, using the math we developed in section 5.
  - twPointInTriangle computes the parameters in the triangle, using the math we did in section 6.

• See how the parameter of the torpedo depends on the frame number; this is how it advances in each frame. When the parameter exceeds the parameter of the intersection point, the collision with the surface has occurred, so we start drawing the explosion.

Lecture on the Accumulation Buffer: Motion Blur, Anti-Aliasing, and Depth of Field

1 The Accumulation Buffer

There are a number of effects that can be achieved if you can draw a scene more than once. You can do this by using the accumulation buffer. You can request an accumulation buffer by including GLUT_ACCUM among the flags to glutInitDisplayMode.

Conceptually, we're doing the following: we initialize the accumulation buffer to zero, then add in each frame with an associated constant, and then copy the result to the frame buffer. This is accomplished by:

• Clear the accumulation buffer with

    glClearAccum(r, g, b, a);
    glClear(GL_ACCUM_BUFFER_BIT);

This is just like clearing the color buffer or the depth buffer. The accumulation buffer is like a running sum, though, so you will usually initialize it to zero.

• Add a frame into the accumulation buffer with

    glAccum(GL_ACCUM, f);

This function allows other values (see the man page), but we'll use this today.
• Copy the accumulation buffer to the frame buffer with

    glAccum(GL_RETURN, 1.0);

The second argument is a multiplicative factor for the whole buffer; we'll always use 1.0.

1.1 Advice

• Get your scene working first, before adding the accumulation buffer stuff.
• Make sure you clear the accumulation buffer to zero.
• Make sure your factors add to 1.

2 Motion Blur

By drawing a trail of fading images, you can simulate the blur that occurs with moving objects. You can fade the images by using smaller coefficients on those images.

Demo: ~cs307/public_html/demos/accumulation/FallingTeapot.cc

• Notice the three images of the teapot. Even without the accumulation buffer, just by drawing it three times, we can get an interesting effect.
• Type "s" and notice how, if we draw them with a small motion, the teapot just looks bigger or distorted. Type "s" to return to the large motion.
• Type "m" to switch to using the accumulation buffer. Notice how the frames can be made to vary in strength; in fact, the first frame has more-or-less disappeared.
• Type "s" to use small motion and see the result. The teapot looks blurred by motion, which is roughly the effect we want. (Though if you think the effect without the accumulation buffer is nicer, I wouldn't blame you.)
• Type "+" or "-" to increase or decrease the number of frames.
• Looking at the code, notice how the accumulation buffer is initialized, drawn into, and read out of.
• Notice how the motion is interpolated, based on the frame number. This is based on our parametric equations.
• Notice how the fractions for each frame are computed. What is the fraction for the frame that disappeared?

The problem with using the accumulation buffer for large-motion motion blur is that we really want to turn all of the coefficients up, so that the first image doesn't fade away and the last isn't too pale; yet that's mathematically nonsense, and it also doesn't work (the table turns blue!). Nevertheless, for small-motion blur, it works pretty well.

3 Anti-Aliasing

An important use of the accumulation buffer is in anti-aliasing. Aliasing is the
technical term for "jaggies." It comes about because we represent a continuous real world with a discrete grid of pixels. The idea of anti-aliasing using the accumulation buffer is:

• The scene gets drawn multiple times with slight perturbations (jittering), so that
• Each pixel is a local average of the images that intersect it.

Generally speaking, you have to jitter by less than one pixel.

Demos: ~cs307/public_html/demos/accumulation/Aliasing.cc

• Without anti-aliasing turned on, look at the four figures (callbacks 1-4) and notice the jaggies at the edges.
• Turn on anti-aliasing (callback a) and notice the difference.
• Notice also that it doesn't quite work for the wire cube. Why not? Because the jitter amount is too small at the back and too big at the front.

4 Better Anti-Aliasing

A better technique than jittering the objects is to jitter the camera, or, more precisely, to modify the frustum just a little, so that the pixels that images fall on are just slightly different. Again, by less than one pixel. Here's a figure that may help. The red and blue cameras differ only by an adjustment to the location of the frustum. The center of projection (the big black dot) hasn't changed, so all the rays still project to that point. The projection rays intersect the two frustums at different pixel values, though, so by averaging these images we can anti-alias these projections.

How much should the two frustums differ, though? By less than one pixel. How can we move them by that amount? We only have control over left, right, top, and bottom, and these are measured in world coordinates, not pixels. We need a conversion factor. We can find a conversion factor in a simple way: the width of the frustum in pixels is just the width of the window (more precisely, the viewport), while the width of the frustum in world coordinates is just right − left. Therefore, the adjustment is

    Δx_world = Δx_pixels · (right − left) / window_width

Here's the code, adapted from the OpenGL Programming Guide:

    void accCamera(GLfloat pixdx, GLfloat pixdy) {
        GLfloat dx, dy;
        GLint viewport[4];
        glGetIntegerv(GL_VIEWPORT, viewport);
        GLfloat
windowWidth = viewport[2];
        GLfloat windowHeight = viewport[3];
        GLfloat frustumWidth = right - left;
        GLfloat frustumHeight = top - bottom;
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        dx = pixdx*frustumWidth/windowWidth;
        dy = pixdy*frustumHeight/windowHeight;
        printf("world delta %f %f\n", dx, dy);
        glFrustum(left+dx, right+dx, bottom+dy, top+dy, near, far);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
    }

• The pixdx and pixdy values are the jitter amounts: distances in pixels (sub-pixels, actually).
• The frustum is altered in world coordinates. Therefore, dx and dy are computed in world coordinates, corresponding to the desired distance in pixels.

    void smoothDisplay() {
        int jitter;
        int numJitters = 8;
        glClear(GL_ACCUM_BUFFER_BIT);
        for (jitter = 0; jitter < numJitters; jitter++) {
            accCamera(jitterTable[jitter][0], jitterTable[jitter][1]);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            drawObject();
            glAccum(GL_ACCUM, 1.0/numJitters);
        }
        glAccum(GL_RETURN, 1.0);
        glFlush();
        glutSwapBuffers();
    }

This does just what you think:

• We draw the image 8 times, each time adjusting the pixel jitter.
• Each drawing has an equal weight of 1/8.

There are lots of ways to imagine how the pixel jitter distances are computed. The domino pattern is a good idea for 8. However, a paper on the subject argues for the following, which I just used without further investigation. (One idea I have is that we want to avoid regular patterns.) From the OpenGL Programming Guide, first edition, table 10-5:

    float jitterTable[][2] = {
        {0.5625, 0.4375}, {0.0625, 0.9375}, {0.3125, 0.6875}, {0.6875, 0.8125},
        {0.8125, 0.1875}, {0.9375, 0.5625}, {0.4375, 0.0625}, {0.1875, 0.3125}
    };

Demos: ~cs307/public_html/demos/accumulation/AntiAliasing.cc

Notice the difference in quality between the two images. We'll look at how the code is written, but it follows what we've shown above. This better approach to anti-aliasing works regardless of how far the object is from the center of projection, unlike the object-jitter we did before. Furthermore, we have a well-founded procedure for choosing the jitter amount, not just trial and error.

5 Depth of Field

Another thing we can do with the accumulation buffer is to blur things that a camera would blur. We'll
investigate:

• why cameras blur things
• what is meant by focal depth
• what is meant by depth of field
• how to accomplish this by using the accumulation buffer

Here's a very good web site, in terms of the figures and the pictures of the rulers. (I've also added this link to our course home page.)

    http://www.cs.mtu.edu/~shene/DigiCam/User-Guide/950/depth-of-field.html

Ignore what it says about the circles of confusion being caused by the amount of light going through the lens. The circles of confusion are caused by the aperture size, which also controls the amount of light, but the causality goes the other way: a smaller aperture means a greater depth of field, and less light.

Of course, in CG we don't have apertures. Instead, we have to fake the blurriness. But how do we blur everything except the things at the focal distance?

First, let's remind ourselves of the basic camera terminology, since our figures are about to get very complicated. The big blue dot is the eye location; everything about the camera is measured from that point. In particular, left, right, top, and bottom are measured with respect to that. The frustum is the blue trapezoid. Make sure you understand what left, right, top, and bottom are. The dashed line is where some vertices project; in this case, they happen to project to the center of the frustum, but that's just to simplify some of the geometry we'll do later.

Next, notice that if we move the camera, points project onto different pixels. In the following figure, the difference is that we've moved the eye to the left (the red location); the camera moves as a result of moving the eye. The dashed lines show where the three vertices project to. In the blue setup, they all project to the center of the frustum; in the red setup, they all project to different locations. That doesn't accomplish our depth-of-field goal, of course, because all the figures (the pentagons) are blurred. Suppose we want to make the middle one non-blurred. What we do is adjust the frustum's location, as measured from the new eye location, to cancel out
the effect of the eye movement, so that that one ray still projects to the center of the frustum. The geometry is just similar triangles. If F is the focus distance, measured from the eye to the object (the height of the larger triangle), and Δxe is the amount that we move the eye (the width of the larger triangle), we can see that

    Δxe / Δxf = (F − near) / F

So we adjust the frustum by Δxf. Note that, to accomplish the desired cancellation, the two deltas have to have opposite signs: if we move the eye to the left (negative), we have to move the frustum to the right (positive). We find

    Δxf = −Δxe · F / (F − near)

The code for setting up our camera frustum, then, is:

    // Set up camera. Note that near, far, and focus remain fixed; all that
    // is modified with the accumulation buffer is the eye location.
    void accCamera(GLfloat eyedx, GLfloat eyedy, GLfloat focus) {
        GLfloat dx, dy;
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        dx = eyedx*focus/(focus-near);
        dy = eyedy*focus/(focus-near);
        printf("dx %5f dy %5f\n", dx, dy);
        glFrustum(left+dx, right+dx, bottom+dy, top+dy, near, far);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        // Translating the coordinate system by (eyedx, eyedy) is the same
        // as moving the eye by (-eyedx, -eyedy).
        // Nobody better re-load the identity matrix!
        glTranslatef(eyedx, eyedy, 0.0);
    }

The next question is how far to move the eye. The farther the eye moves, the narrower the depth of field. First, look at this figure. The blurriness of the other two figures depends on the difference between where they project to in the two camera setups. One pair gives us the following:

    ε = E (F − near) / F

The other pair gives us:

    ε + c = E (F + d − near) / (F + d)

What are these values?

• near: the distance from the eye to the frustum
• F: the distance where objects are in focus
• E: the maximum distance the eye moves
• d: the desired depth of field
• c: the blurriness distance, about 1 pixel

The unknown is E; we'd like to compute E as a function of d. By eliminating ε between the two equations, we can do this. Skipping some horrendous algebra (check me on this, please), we get

    E = c F (F + d) / (d · near)

Note that all these values are in world
units, except for c, which is about 1 pixel. So we need to convert c from pixels to world units before we do this computation.

Demos: ~cs307/public_html/demos/accumulation/DepthOfField.cc

Lecture on Texture Mapping

Texture mapping is one of the major innovations in CG in the 1990s. It allows us to add a lot of surface detail without adding a lot of geometric primitives (lines, vertices, faces). Think of how interesting Caroline's loadedDemo is, with all the texture-mapping; figure 1 contrasts the two.

1 Reading

As with everything in computer graphics, most of what I know about texture mapping I learned from Angel's book, so check that chapter first. Unfortunately, it's one of his weakest chapters, because it doesn't do a very good job of connecting the theory with the OpenGL code. A more practical presentation is a chapter of the Red Book (the red OpenGL Programming Guide). You're encouraged to look at both.

2 Conceptual View

Texture mapping paints a picture onto a polygon. Although the name is texture-mapping, the general approach simply takes an array of pixels and paints them onto the surface. An array of pixels is just a picture: it might be something your program computes and uses; more likely, it will be something that you load from a file.

Demos: These all live in the ~cs307/public_html/demos/texture-mapping directory.

• USFlag: This actually shows two flags, a checkered flag and a US flag. You can switch between the two flags using the "u" key. This is about the simplest texture-mapping code. Please look at it.

• LitUSFlag: The US flag, texture-mapped, with lighting. You have to turn off the wireframe mode (w) and then turn on texture-mapping (t). You can also switch between having the texture interact with lighting and having a decal. This code is fairly ambitious, but it shows what we can do with texture-mapping. Don't look at it until you're ready.

Figure 1:
relatively simple and is worth reading. This demo uses a PPM file of the US flag texture as the default, but you can specify your own on the command line. There's a directory of image files in ~cs307/public_html/textures.

Teapot: The GLUT teapot defines texture coordinates for each Bezier patch, so we can texture-map stuff onto it easily. This demo is a variation on QuadPPM, but mapping onto the teapot instead of a unit square. Its code is even simpler than QuadPPM.

Conceptually, to use textures, you must do the following:

  * define a texture: a rectangular array of pixels ("texels")
  * specify a pair of texture coordinates (s,t) for each vertex of your polygon

The graphics system then paints the texture onto the polygon.

3 How It Works

Texture mapping is a raster operation, unlike any of the other things we've looked at. Nevertheless, we apply textures to 2D surfaces in our 3D model, and the graphics system has to figure out how to modify the pixels during rasterizing (AKA scan conversion). Since texture-mapping happens as part of the rasterizing process, let's start there.

3.1 Rasterizing

When the graphics card renders a polygon, it conceptually:

  * determines the pixel coordinates of each corner
  * determines the edge pixels of the polygon, using a line-drawing program (an important one is Bresenham's algorithm, which we won't have time to study)
  * determines the color of the edge pixels on a single row by linear interpolation from the vertex colors
  * walks down the row, coloring each pixel by linear interpolation from the two edge pixels

Note: standard terminology is that the polygon is called a "fragment," since it might be a fragment of a Bezier surface or some such. Thus, the graphics card applies a texture to a fragment. This all happens either in the framebuffer or in an array just like it.

3.2 Texture Mapping

To do texture mapping, the graphics card must:

  * compute a texture coordinate for each pixel during the rasterizing process, using bilinear interpolation
  * look up the texture coordinates in the array of texels, either using the nearest
or a linear interpolation of the four nearest
  * either use the color of the texture as the color of the pixel, or combine the color of the texture with the color of the pixel

(Figure 2, showing texture coordinates attached to the vertices of a quad via paired glTexCoord2f/glVertex3f calls, did not survive scanning.)

As we go down the first column of the array, until we get to element [ColLength][0], we get to texture coordinates (0,1). Again, this may seem odd, but it's true. Unsurprisingly, the last element of the texel array is the corner opposite the first element, so array element [ColLength][RowLength] corresponds to texture coordinates (1,1).

Conventionally, the texture coordinates are called (s,t), just as spatial coordinates are called (x,y). Thus, we can say that s goes along the rows of the texture (along the fly of the flag). The t coordinate goes along the columns of the texture (along the hoist of the flag).

Although you will often use the entire texture, so that all your texture coordinates are 0 or 1, that is not necessary. In fact, because the dimensions of texture arrays are required to be powers of two, the actual image that you want is often only a portion of the whole array. The computed US flag array has that property: the array is 256 pixels wide by 128 pixels high, but the flag itself is 198 pixels wide by 104 pixels high. Thus, the maximum texture coordinates are:

    fly:   198/256 = 0.7734
    hoist: 104/128 = 0.8125

Of course, we also need to ensure that the rectangle we are putting the flag on has the same aspect ratio as the US flag, namely 1.9. See http://cs.wellesley.edu/~cs307/flagspec.htm

3.5 Basic Demos

Please look at the code for the following demos. All of them are in ~cs307/public_html/demos/texture-mapping.

SimplestTextures.cc: This is a simple
example using very small 4 x 4 textures. There are actually two textures; use "u" to switch.

3.6 Texture Mapping in OpenGL

Conceptually, to actually do texture mapping in OpenGL, you have to do all the following steps:

1. Create or load a 1D or 2D array of texels. All dimensions must be a power of two. Different kinds of data are possible: RGB values, RGBA values, luminance (grayscale). Also, the data in the array can be in different formats (unsigned bytes, shorts, floats, etc.). You must tell OpenGL what it is.

2. Set various modes. These have default values (see the man pages), so they can be skipped in some cases, but I tend to set them all: I copy/paste the code from some working example of texture-mapping, then change the modes as necessary.

3. Send the texel data to the graphics card.

4. Enable texture-mapping.

5. Specify a texture coordinate for each vertex. Sometimes this is done automatically (as for the teapot) or is calculated (as for Bezier surfaces). We'll get into Bezier stuff later.

For coding, that means the following steps. We'll go through these functions in detail.

    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, 3, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, texelArray);
    glEnable(GL_TEXTURE_2D);
    glBegin(...);
      glTexCoord2f(0, 1);
      glVertex3f(0, 0, 0);
      ...

The glTexEnvf mode has several settings. The settings can mean different things depending on the format of the texture (such as luminance versus RGB) and the color model in OpenGL. Ignoring those nuances, here's a basic summary:

GL_DECAL and GL_REPLACE: These do the same thing (except when there's transparency): the intrinsic color of the fragment (the polygon in the model) is ignored, and the color of the texel is used instead.

GL_MODULATE: The color of the pixel is the product (plain old multiplication) of the color of the fragment and the color of the texel. The texel is
often a luminance value, and you use the texture to darken or lighten the color of the fragment.

GL_BLEND: The color of the pixel is a mixture (weighted average) of the color of the fragment and the texture environment color, with the texel determining the weighting.

The texture can either replace the scene colors, like a decal, or it can blend with the scene colors, as with wood-grain finishes, or even adding surface smudges and dirt to make things look more realistic. The parameter settings in glTexEnvf help to set this interaction between the color of the fragment (whether direct RGB color, or material and lighting) and the texture. We'll look at the TextureParameters.cc demo and code; you can see a screenshot in figure 3. We'll also look at ~cs307/pub/Tutors/texture. We'll talk about the glTexParameteri function calls later.

The glTexImage2D function has a lot of parameters, though most are fixed:

1. target: GL_TEXTURE_2D (or GL_TEXTURE_1D). Those are the values we'll use.

2. level: It's possible to give OpenGL several images at different resolutions (called mipmaps), but it seems to be flakey; the OpenGL examples I've downloaded don't work. So always use the base level, which is 0.

3. internal format: specifies the number of color components in the texture. Typically this is 3, meaning RGB data.

4. width of the texture: must be a power of 2, or one more than a power of 2 if you have a one-pixel border.

5. height: same as width.

6. border: zero or one.

7. format: the kind of pixel data. Typically GL_RGB or GL_LUMINANCE.

8. type: the datatype of the array. Typically GL_UNSIGNED_BYTE.

9. pixels: a pointer to the image data in memory.

We'll look at the USflag.cc file for an example of this.

4 Issues

Here are some issues to face and choices to make.

Aspect Ratio: Your texture is always a rectangle. Even if your polygon is one too, you'll have to deal with matching aspect ratios if you want the image to be undistorted. With a plain texture, such as grass or wood, this may not matter, but for pictures it may.

Wrapping: What happens when your texture parameters
fall outside the [0,1] range? We'll try this with the tutor. You can wrap around, essentially removing the integer part and using only the fractional part; this repeats the texture, which is often what you want for real textures. Or you can clamp the value at the edge pixel; if your texture has a border of some sort, that can work out well.

Filter: What to do when the pixel doesn't exactly match a texel? You get to specify this for both magnification (pixel smaller than texel) and minification (pixel larger than texel), but in practice I think they are usually set to the same value. You can use the nearest (Manhattan distance) texel to the center of the pixel, or a weighted average of the four texels nearest the center of the pixel. We'll look at the LinearNearest demo to understand this. Note: the functions to set the filters appear not to have adequate defaults; if you don't set them, you won't get a texture.

Density of the texture (repetition): Too little and it looks badly stretched; too much can squeeze the texture too much. Look at Grass.cc. Try the three different textures. Use the "r" callback to reveal the vertices that are created. Look at the texture from above by using the "Y" callback.

4.1 More Demos

Please look at the code for the following demos. All of them are in ~cs307/public_html/demos/texture-mapping.

Rainbow.cc: This is a lovely example of a 1D texture. Use the "R" keyboard callback to turn the rainbow on/off. The illusion is much better if you switch to immerse mode. Note that another version of this demo that doesn't use TW (Rainbowaeet) looks better, because the illusion is much better if you're inside the scene. Original code from Michael Sweet.

TextureParameters.cc: This demonstrates the use of texture parameters, such as decal vs. blending. It generates figure 3.

LinearNearest.cc: This demonstrates the difference between LINEAR and NEAREST for magnification/minification. You can see screen shots in figure 4.

LitUSFlag.cc: This demonstrates how to combine Bezier surfaces and texture mapping. Lighting, too.

~cs307/pub/Tutors/texture: An OpenGL
tutor I got online. Pretty slick, but I don't understand everything, so experiment with it.

Figure 4: Both figures are checkerboard textures stretched over a large number of pixels. Consequently, the texture coordinate values for many pixels fall between texel values. In the picture on the left, we use a linear interpolation between the texel values. In the picture on the right, we use the nearest texel value.

5 Images and File Formats

Images come in dozens of formats, with different kinds of compression techniques and so forth. We will look at the following kinds:

BMP: MS-Windows Bitmap format. This is an uncompressed Windows format. It has a simple format, but the files are very large. Note Flick's campaign to stamp them out: http://www.wellesley.edu/Chemistry/Flick/hsobu.html

TIFF: Tag Image File Format, an industry-standard pixmap file format. File sizes are large, but the file format is fairly simple. Some digital cameras produce this.

GIF: Graphics Interchange Format, a compressed format, limited to 256 colors and encumbered with a patent. Allows index transparency; viewable by all web browsers.

JPG: Joint Photographic Experts Group, a compressed, lossy format for RGB color (millions of different colors in an image). Because of the fancy compression algorithm, the file format is complex. No transparency; viewable by all web browsers.

PNG: Portable Network Graphics, an open-source compressed lossless format that removes some restrictions of GIF. Supported by all modern browsers. The file format also stores original vector information: Fireworks uses this format as its native format. Other formats can be imported or exported.

PPM: Portable Pixmap, an open-source uncompressed format. The format is:

  * "P6": two ASCII characters identifying the file type
  * the width and height, in ASCII, with whitespace after them
  * 255, the largest possible value of a color component
  * a carriage-return character (ASCII 13)
  * data: width x height x 3 bytes giving the R, G, B values for each pixel. It's standard to store the image in top-to-bottom, left-to-right order.

I got this info from http://astronomy.swin.edu.au/~pbourke/dataformats/ppm. Search the web for more info. Here
's a promising page I found: http://www.dcs.ed.ac.uk/home/mxr/gfx/2d-hi.html

For TW, we will always use PPM format. You can convert images to/from PPM format using Windows, Mac, or Linux graphics programs, or various Linux commands such as ppmtogif, ppmtojpeg, bmptoppm, and the *topnm and pnmto* families. (PNM is a "portable anymap" file; the programs seem to be able to guess whether it's black and white (PBM), grayscale (PGM), or color (PPM).)

5.1 Demo

  * Start Fireworks
  * Draw something
  * Save (the default is PNG, so that's fine)
  * FTP it to Puma and convert to PPM:

        display foo.png
        pngtopnm -verbose foo.png > foo.ppm
        display foo.ppm

  * Run TexImageCube on foo.ppm

5.2 Loading Images

We'll explore the code of TexImageCube. You can read in an image from a file and use it as a texture. You should read the file in just once, so don't call glTexImage2D from your display function.

Note that most glut objects don't have predefined texture coordinates; only the teapot does. You can generate them for the others using a fairly incomprehensible interface. We'll try to learn more about this as the semester goes on.

5.3 Binding Textures

For additional speed, you can load several textures, associating them to integer identifiers (just like display lists), and then refer to them later.

Setup steps: Ask for a bunch of identifier numbers:

    glGenTextures(numwanted, resultarray);

Then, for each texture you want, get one of the numbers out of the array and:

    glBindTexture(GL_TEXTURE_2D, textureNumber);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, something);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, something);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, something);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, something);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, something);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexImage2D(...);

Reference step: When you want a texture, just:

    glBindTexture(GL_TEXTURE_2D, textureNumber);

As a convenience, you can replace each of the texture steps with twLoadTexture(textureIDs[n], filename). However, this function uses GL_MODULATE, GL_REPEAT and GL_LINEAR, which may not be what you want.

Demos:
TextureBinding.cc and USflag-binding.cc. Try spinning either of these; notice how relatively quick they are. This is because the texture is already loaded into memory on the graphics card, so almost nothing needs to be sent down the pipeline to draw the next frame of animation.

5.4 Saving Images

You can also save the contents of the framebuffer as a PPM file: just hit the "S" key. This is accomplished thanks to an interesting function:

    void glReadPixels( GLint x,        /* raster location of first pixel */
                       GLint y,
                       GLsizei width,  /* dimensions of pixel rectangle */
                       GLsizei height,
                       GLenum format,  /* GL_RGB */
                       GLenum type,    /* GL_UNSIGNED_BYTE */
                       GLvoid *pixels );

    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);

The file is saved as saved-image01.ppm in the current directory. If you hit "S" again, you get saved-image02.ppm, and so forth. In honor of Family and Friends weekend, convert these to PNG and put them on your web page. Or email them.

Note that PPM files are big. In many of our examples, the framebuffer is 500 by 500. The file size is therefore 500 x 500 x 3 bytes of data plus the 14-byte header ("P6", dimensions, 255), or 750,014 bytes:

    % ppmtojpeg -v saved_frame01.ppm > saved_frame01.jpg
    ppmtojpeg: Input file has format P6.
    It has 500 rows of 500 columns of pixels with max sample value of 255.
    ppmtojpeg: No scan script is being used
    % ls -l saved_frame01*
    -rw-rw-r--  1 cs307 cs307  25290 Nov  7 00:06 saved_frame01.jpg
    -rw-r--r--  1 cs307 cs307 750014 Oct 31 14:33 saved_frame01.ppm

The JPG is a bit smaller. YMMV. Since you have a finite filespace quota, manage your space carefully.

6 Texture Mapping using Modulate

When you texture-map using GL_MODULATE, you have to think about the color of the underlying surface. In particular, if you're using material and lighting, you have to use material and textures together. Caroline's texture tutor can help: ~cs307/public_html/demos/textureTutor

7 Texture Mapping Onto Odd Shapes

7.1 Triangles

There are actually two choices here. If you want a triangular region of your texture, there's no problem: just use the texture coordinates as usual. If you want to squeeze one edge of the texture
down to a point, it would seem that all you have to do is use the same texture coordinates for both vertices, but that yields odd results. Instead, you can use linear Bezier surfaces to make a triangular region. Demo: TexturemapTriangles.cc

7.2 Cylinders

If mapping onto a curved surface, we usually represent the surface with parametric equations and map texture parameters to curve parameters. For example, a cylinder:

    x = r cos(2 pi u)
    y = r sin(2 pi u)
    z = v h

with the easy mapping (s,t) = (u,v).

Demo: CylinderFlag.cc. This shows how to put a 2D texture onto a non-planar figure. It uses the US flag, since it's easy to see the orientation of the texture. Essentially, we have to build the figure ourselves out of vertices, so that we can define texture coordinates for each vertex. There are two ways to put a flag onto a cylinder: with the stripes going around the cylinder, or along its length. This demo does either; the "l" keyboard callback switches the orientation. Understanding this code is not easy, but the texture coordinates are relatively straightforward.

7.3 Bezier Surfaces

We've already seen this, and we got another dose of it when we looked at mapping onto triangles, but let's look at it again. To map onto a surface with material and lighting, consider Demo: LitUSFlag.cc

7.4 Globes

In general, mapping a flat surface onto a globe (sphere) is bound to produce odd distortions. It's essentially a 3D version of the problem of mapping a rectangle onto a circle. The reverse mapping is interesting to contemplate, namely a flat rectangle that shows the surface of the globe. This is a problem that cartographers have wrestled with for years. Indeed, both of the examples I gave above, for circles and squares, have equivalents in cartography. The distortion problem presents several tradeoffs, the most important of which is shape distortion versus area distortion.

area: To preserve the equal-area property, you have to compress the lines of latitude, particularly those far from the equator. A famous current example is the
Peters projection.

shape: To preserve shape, you end up expanding the lines of latitude, particularly those far from the equator. One important side effect of preserving shape is that a line of constant compass bearing (a rhumb line) is straight on the map, which makes these maps better for navigation. A famous current example is the Mercator projection.

Let's spend a few minutes discussing the pros and cons of these. There are some good web pages linked from the course home page.

To texture-map a globe, I created a globe by hand, iterating from the north pole (pi/2) to the south pole (-pi/2), and from 0 longitude around to 2 pi longitude. I converted each longitude/latitude pair into (x,y,z) values, but also made an (s,t) texture-map pair. This works pretty well, except possibly at the poles:

    x = cos(latitude) cos(longitude)
    y = sin(latitude)
    z = cos(latitude) sin(longitude)
    s = longitude / (2 pi)
    t = (latitude + pi/2) / pi

Demo: GlobeTexture.cc

Readings on Bezier Curves and Surfaces

1 Additional Reading

Most of what I know about curves and surfaces I learned from Angel's book, so check that chapter first. It's pretty mathematical in places, though. There's also a chapter of the Red Book (the red OpenGL Programming Guide).

2 Organization

This reading is organized as follows. First we look at why we try to represent curves and surfaces in graphics models, but I think most of us are already pretty motivated by that. Then we look at major classes of mathematical functions, discussing the pros and cons, and finally choosing cubic parametric equations. Next we describe different ways to specify a cubic equation, and we ultimately settle on Bezier curves. Finally, we look at how the mathematical tools that we've discussed are reflected in OpenGL code.

The preceding develops curves, that is, 1D objects (wiggly lines). In graphics, we're mostly interested in surfaces, that is, 2D objects (wiggly planes). The last sections define surfaces as a generalization of what we've already done with curves.

3 Introduction

Bezier curves were invented by a mathematician working at Renault, the French
car company. He wanted a way to make formal and explicit the kinds of curves that had previously been designed by taking flexible strips of wood (called splines, which is why these mathematical curves are often called splines) and bending them around pegs in a pegboard. To give credit where it's due, another mathematician, named de Casteljau, independently invented the same family of curves, although the mathematical formalization is a little different.

4 Representing Curves

Most of us know about different kinds of curves, such as parabolas, circles, the square root function, cosines, and so forth. Most of those curves can be represented mathematically in several ways:

  * explicit equations
  * implicit equations
  * parametric equations

4.1 Explicit Equations

The explicit equations are the ones we're most familiar with. For example, consider the following functions:

    y = mx + b
    y = ax^2 + bx + c
    y = sqrt(r^2 - x^2)

An explicit equation has one variable that is dependent on the others; here it is always y that is the dependent variable, the one that is calculated as a function of the others. An advantage of the explicit form is that it's pretty easy to compute a bunch of values on the curve: just iterate x from some minimum to some maximum. One trouble with the explicit form is that there are often special cases (for example, vertical lines). Another is that the limits on x will change from function to function: the domain is infinite for the first two examples, but limited to |x| <= r for the third. The deadly blow is that it's hard to handle non-functions, such as a complete circle, or a parabola of the form x = y^2 + by + c. You could certainly get a computer program to handle this form, but you'd need to encode lots of extra stuff, like which variable is the dependent one, and so forth. Bezier curves can be completely specified by just an array of coefficients.

4.2 Implicit Equations

Another class of representations is implicit equations. These equations always put everything on one side of the equation, so no variable is distinguished as the dependent one. For
example:

    ax + by + cz + d = 0        (plane)
    x^2 + y^2 + z^2 - r^2 = 0   (sphere)

These equations have a nice advantage: given a point, it's easy to tell whether it's on the curve or not; just evaluate the function and see if the result is zero. Moreover, each of these functions divides space in two: the points where the function is negative, and the points where it's positive. Interestingly, the surfaces do as well, so the sign of the function value tells you which side of the surface you're on. It can even tell you how close you are.

The fact that no variable is distinguished helps to handle special cases. In fact, it would be pretty easy to define a large general polynomial in x and y as our representation. The deadly blow for this representation, though, is that it's hard to generate points on the surface. Imagine that I give you values for a, b, c and d, and you have to find values for x, y and z that work, for the two examples above. Not easy in general. Also, it's hard to do curves (wiggly lines) in general: those are the intersection of two surfaces.

4.3 Parametric Equations

Finally, we turn to the parametric equations. We've seen these before, of course, in defining lines, which are just straight curves. With parametric equations, we invent some new variables, the parameters, typically s and t. These variables are then used to define a function for each coordinate:

    x(s,t)   y(s,t)   z(s,t)

Parametric functions have the advantage that they're easy to generalize to 3D, as we already saw with lines. The parameters tell us where we are on the surface or curve, rather than where we are in space. Therefore, we have a conventional domain, namely the unit interval. That means that, like our line segments, our curves will all go from t = 0 to t = 1. (They don't have to, but they almost always do.) Similarly, surfaces are all points where 0 <= s,t <= 1. Thus, another advantage of parametric equations is that it's easy to define finite segments and sheets by limiting the domains of the parameters.

The problem that remains is what family of functions we will use for the
parametric functions. One standard approach is to use polynomials, thereby avoiding trigonometric and exponential functions, which are expensive to compute. In fact, we usually choose a cubic:

    p(t) = c0 + c1 t + c2 t^2 + c3 t^3
         = sum from i=0 to 3 of ci t^i                    (1)

Another problem comes with finding these coefficients. We'll develop that in later sections, but the solution is essentially to appeal to some nice techniques from linear algebra that let us solve for the desired coefficients, given some desired constraints on the curve, such as where it starts and where it stops.

Figure 1: Using cubic curves: (a) shows four points specifying a curve; (b) and (c) show curves being joined up, at a join point, for more complex curves.

5 Why We Want Low Degree

Why do we typically use a cubic? Why not something of higher degree, which would let us have more wiggles in our curves and surfaces? This is a reasonable question. In general, we want a low degree: quadratic, cubic, or something in that neighborhood. There are several reasons:

  * The resulting curve is smooth and predictable over long spans. In other words, because it wiggles less, we can control it more easily. Consider trying to make a nice smooth curve with a piece of cardboard or thin wood (a literal spline) versus with a piece of string.

  * It takes less information to specify the curve. Since there are four unknown coefficients, we need four points (or similar constraints) to solve for the coefficients. If we were using a quartic, we'd need 5 points, and so forth.

  * If we want more wiggles, we can join up several splines. Because of the low degree, we have good control of the derivative at the end points, so we can make sure that the curve is smooth through the joint.

  * Finally, it's just less computation, and therefore easier for the graphics card to render.

OpenGL will permit you to use higher and lower degree functions, but for this presentation we'll stick to cubics. If you'd like to play with higher degree functions, I'm happy to help you with that. Figure 1 shows points defining a curve, and how curves might be put together to be smooth at the joints.

6 Ways of Specifying a Curve

Once we've settled on a family of functions, such as cubics, what remains is determining the values of the coefficients that give us a particular curve. If I want, for example, a curve that looks like a certain letter, what do I have to do? It turns out that there are three major ways of doing that. (It's strange how everything seems to break down into threes in this subject.)

Interpolation: You specify 4 points on the curve, and the curve goes through (interpolates) the points. This is pretty intuitive, and a lot of drawing programs allow you to do this, but it's not often used in CG. This is primarily because such curves have an annoying way of suddenly lurching as they struggle to get through the next specified point. One exception is that this technique is often used when you have a lot of data, as with a digitally scanned face or figure. Then you have thousands of points, and the curve pretty much has no choice but to be what you want, although the graphics artist may still want to do some smoothing (say, for measurement error or something).

Hermite: In the Hermite case, the four pieces of information you specify are 2 points and 2 vectors: the points are where the curve starts and ends, and the vectors indicate the direction of the curve at those points. They are, in fact, derivatives. If you've done single-dimensional calculus, you know that the derivative gives the slope at any point, and the slope is just the direction of the line; the same idea holds in more dimensions. This is a very important technique, because we often have this information. For example, if I want a nice rounded corner on a square box, I know the slope at the beginning (vertical, say) and at the end (horizontal).

Bezier: With a Bezier curve, we specify 4 points as follows: the curve starts at the first, heading for the second, and ends at the fourth, coming from the third. See the picture in figure 2. This is a very important technique, because you can
Three ways of specifying a curve a interpolation b Hermite and c Bezier easily specify a point using a GUT while a vector is a little harder It turns out there are other reasons that Bezier is preferred and in practice the rst two techniques are implemented by nding the Bezier points that draw the desired curve Figure 2 compares these three approaches An Xll drawing program that will let you experiment with the examples drawn in the gure is x g X g uses quadratic Bezier curves Try it For a demo of drawing Bezier curves in 2D using OpenGL try N053 O7 publichtml demoscurves and surfacesCurveDraw cc OpenGL curves are drawn by calculating points on the line and drawing straight line segments between those points The more segments the smoother the resulting curve looks The CurveDraw cc demo requires you to specify on the command line the number of segments in the drawing of the curve Try different numbers of segments until the curves look pretty smooth 61 Solving for the Coef cients We ll now discuss how we can solve for the coef cients given the control information the four points or the two points and two vectors Essentially we re solving four simultaneous equations We won t do all the gory details but we ll appeal to some results from linear algebra Let s look at how we solve for the coef cients in the case of the interpolation curves the others work similarly Note that in every case the parameter is t and it goes from 0 to 1 For the interpolation curve the interior points are at t 13 and t 2 3 Let s focus just on the function for The other dimensions work the same way If we substitute t 0 g 1 into the cubic equations 1 we get the following Pg 350 Co p13C1012C130 17x 7 U 3 3 2 3 3 p mg0430 320 330 27m 7 U 3 3 2 3 3 P3x1CgCIC2Cs What does this mean It means that the x coordinate of the rst point Pg is MD This makes sense since the function x starts at Pg it should evaluate to Pg at t 0 This is also exactly what happens with a parametric equation for a straight line at 
t = 0, the function evaluates to the first point. Similarly, the x coordinate of the second point, p1, is x(1/3), and that evaluates to the expression that you see there. Most of those coefficients are still unknown, but we'll get to how to find them soon enough. Putting these four equations into matrix notation, we get the following:

    P = [ p0 ]     A = [ 1   0    0    0   ]     C = [ c0 ]
        [ p1 ]         [ 1  1/3  1/9  1/27 ]         [ c1 ]
        [ p2 ]         [ 1  2/3  4/9  8/27 ]         [ c2 ]
        [ p3 ]         [ 1   1    1    1   ]         [ c3 ]

The P matrix is a matrix of points. It could be just the x coordinates of our points, or, more generally, it could be a matrix each of whose four entries is an (x,y,z) point. We'll view it as a matrix of points. The matrix C is a matrix of coefficients, where each element (each coefficient) is a triple (cx, cy, cz), meaning the coefficients of the x(t) function, the y(t) function, and the z(t) function. If we let A stand for the array of numbers, we get the following deceptively simple equation:

    P = A C

Figure 3: The blending functions for interpolating curves.

By inverting the matrix A, we can solve for the coefficients. (The inverse of a matrix is the analog of a reciprocal; it's also equivalent to solving the simultaneous equations in a very general way.) The inverse of A is called the interpolating geometry matrix, and is denoted M_I:

    M_I = A^(-1)

Notice that the matrix M_I does not depend on the particular points we choose, so that matrix can simply be stored in the graphics card. When we send an array of control points P down the pipeline, the graphics card can easily compute the coefficients it needs for calculating points on the curve:

    C = M_I P

The same approach works with Hermite and Bezier curves, yielding the Hermite geometry matrix and the Bezier geometry matrix. In the Hermite case, we take the derivative of the cubic, evaluate it at the endpoints, and set it equal to our desired vectors, instead of points p1 and p2. In the Bezier case, we use points to define the vectors and reduce it to the previously solved Hermite case.

6.2 Blending Functions

Sometimes looking at the function in a different way can give us additional insight. Instead of looking at the
functions in terms of control points and geometry matrices, let's look at them in terms of how the control points influence the curve points. The influences of the control points are blended to yield the final curve point, and these functions are called blending functions. (Blending functions are also important for understanding NURBS, which we may cover later in the course.) Equivalently, the curve points are a weighted average of the control points, where the blending functions give the weights. (We looked at weighted averages in our discussion of bilinear interpolation.) That is, if you evaluate the four blending functions at a particular parameter value t, you get four numbers, and those numbers are weights in a weighted sum of the four control points. The following function gives the curve P(t) as a weighted sum of the four control points, where the four blending functions evaluate to the appropriate weight as a function of t:

    P(t) = B0(t) P0 + B1(t) P1 + B2(t) P2 + B3(t) P3

The following are the blending functions for interpolating curves:

    B0(t) = -(9/2)  (t - 1/3)(t - 2/3)(t - 1)
    B1(t) =  (27/2) t (t - 2/3)(t - 1)
    B2(t) = -(27/2) t (t - 1/3)(t - 1)
    B3(t) =  (9/2)  t (t - 1/3)(t - 2/3)

Figure 4: The blending functions for Hermite curves.

These functions are plotted in figure 3. First, notice that the curves always sum to 1. When a curve is near zero, the control point has little influence; when a curve is high, the weight is high, and the associated control point has a lot of influence, a lot of "pull". In fact, when a control point has a lot of pull, the curve passes near it. When a weight is 1, the curve goes through the control point. Notice that the blending functions are negative sometimes, which means that this isn't a normal weighted average: a normal weighted average has all non-negative weights. What would a negative weight mean? If a positive weight means that a control point has a certain pull, a negative value gives it push: the curve
is repelled from that control point. This repulsion effect is part of the reason that interpolation curves are hard to control.

7 Hermite Representation

The coefficients for the Hermite curves, and therefore the blending functions, can be computed from the control points in a similar way; we have to deal with derivatives, but that's not the point right now. Note that the derivative (tangent) of a curve in 3D is a 3D vector indicating the direction of the curve at that moment. This follows directly from the fact that if you subtract two points, you get the vector between them. The derivative of a curve is simply the limit as the points you're subtracting become infinitely close to each other. The Hermite blending functions are the following:

    h0(t) =  2t^3 - 3t^2 + 1
    h1(t) = -2t^3 + 3t^2
    h2(t) =   t^3 - 2t^2 + t
    h3(t) =   t^3 - t^2

The Hermite blending functions are plotted in figure 4. The Hermite curves have several advantages:

Figure 5: The Bezier control points determine the vectors used in the Hermite method.

Figure 6: Bezier blending functions.

- Smoothness: easier to join up at endpoints. Because we control the derivative (direction) of the curve at the endpoints, we can ensure that when we join up two Hermite curves, the curve is smooth through the joint: just ensure that the second curve starts with the same derivative that the first one ends with.
- The blending functions don't have zeros in the interval, so the influence of a control point never switches from positive to negative. For example, the influence of the first control point peaks at the beginning and steadily (monotonically) drops to zero over the interval.
- The Hermite can be easier to control. As we mentioned earlier, there's no need to know interior points, and we often only know how we want a curve to start and end.

8 Bezier Curves

The Bezier curve is based on the Hermite, but instead of using vectors, we use two control points. Those control points are not interpolated, though; they exist only in order to define the vectors for the Hermite, as follows:

    p'(0) = 3(p1 - p0)
    p'(1) = 3(p3 - p2)
That is, the derivative vector at the beginning is just three times the vector from the first control point to the second, and similarly for the other vector. Figure 5 shows the derivative vectors and the Bezier control points.

Like the Hermite, Bezier curves are easily joined up. We can easily get continuity through a joint by making sure that the last two control points of the first curve line up with the first two control points of the next. Even better, the interior control points should be equally distant from the joint. This ensures that the derivatives are equal, and not just proportional.

The blending functions are especially nice, as seen in equation 3. In that equation, we are using u as the parameter instead of t:

    b0(u) = (1 - u)^3
    b1(u) = 3u(1 - u)^2          (3)
    b2(u) = 3u^2(1 - u)
    b3(u) = u^3

The functions in eq. 3, which are plotted in figure 6, are from the Bernstein polynomials:

    b_kd(u) = C(d, k) u^k (1 - u)^(d - k)

where C(d, k) is the binomial coefficient. These can be shown to:

- Have all roots at 0 and 1
- Be non-negative in the interval [0, 1]
- Be bounded by 1
- Sum to 1

Perfect for mixing. Thus our Bezier curve is a weighted sum, so geometrically all points must lie within the convex hull of the control points, as shown in the following figure.

To be concrete, let's take an example of the Bezier blending functions. For example, what is the midpoint of a Bezier curve? The midpoint is at a parameter value of u = 0.5. Evaluating the four functions in equation 3, we get:

    b0(1/2) = 1/8,  b1(1/2) = 3/8,  b2(1/2) = 3/8,  b3(1/2) = 1/8          (4)

Thus, to find the coordinates of the midpoint of a Bezier curve, we only need to compute this weighted combination of the control points. Essentially:

    P(0.5) = (1/8)P0 + (3/8)P1 + (3/8)P2 + (1/8)P3 = (P0 + 3P1 + 3P2 + P3)/8

It turns out that there's a very nice recursive algorithm for computing this.

9 Bezier Curves in OpenGL

To draw a curve in OpenGL, the main thing we have to do is to specify the control points. However, there are other things we might want OpenGL to calculate for us. Here are some:

- vertices (points on the curve or surface)
- normals
- colors
- texture coordinates

They all work the same way. For example, if we specify four control points,
we can have OpenGL compute any point on the curve, and if we specify four colors (one for each of those points), we can have OpenGL compute the color of the associated point using the same blending functions. Each of these is called an evaluator. You can have multiple evaluators active at once, say for vertices and colors. The basic OpenGL functions for curves are:

    glMap1f(target, umin, umax, stride, order, pointarray);
    glEnable(target);
    glEvalCoord1f(u);

The first two are setup functions. The first, glMap1f, is how we specify all the control information: points, colors, or whatever. The target is the kind of control information you are specifying and the kind of information you want generated, such as:

    GL_MAP1_VERTEX_3          a vertex in 3D
    GL_MAP1_VERTEX_4          a vertex in 4D (homogeneous coordinates)
    GL_MAP1_COLOR_4           an RGBA color
    GL_MAP1_NORMAL            a normal vector
    GL_MAP1_TEXTURE_COORD_1   a texture coordinate (we'll talk about textures in a few weeks)

The umin and umax arguments are just the min and max parameter, so they are typically 0 and 1. The stride is a complicated thing that we'll talk about below. The order is one more than the degree of the polynomial (4 for a cubic) and is therefore equal to the number of control points we are supplying. Finally, pointarray is an array of the control information: vertices, RGBA values, or whatever. They should be of the same type as the target.

The second function, glEnable, simply enables the evaluator; you can enable and disable them like lights. Finally, the last function, glEvalCoord1f, replaces functions like glVertex3f, glColor3f, and glNormal3f, depending on the target. In other words, that's the one that actually calculates a point on the curve (or its color, or whatever) and sends it down the pipeline. Here's an example of how you might do 100 evenly spaced steps on a curve:

    glBegin(GL_LINE_STRIP);
    for (i = 0; i <= 100; i++)
        glEvalCoord1f(i / 100.0);   /* instead of glVertex3f */
    glEnd();

If you know that you just want evenly spaced steps, which is often the case, you can use the following two functions instead of calls to glEvalCoord1f:

    glMapGrid1f(steps, umin, umax);
    glEvalMesh1(GL_LINE, start, stop);

The first is a grid of steps (say, 100) from the minimum u to the maximum u (typically 0.0 to 1.0). The second actually evaluates the mesh from start (typically 0) to stop (typically the same as steps).

The demo ~cs307/public_html/demos/curves-and-surfaces/FunkyCurve.cc shows a particular curve that starts and ends along the edges of a box. The code is relatively terse and is well worth looking at. See figure 7.

    // This program displays a funky curve through the unit cube.
    // This program aims to be as simple as possible.
    // Scott D. Anderson, Fall 2003

    #include <GL/glut.h>
    #include <tw.h>

    void draw_bezier_curve(GLfloat cp[], int steps = 16) {
        glMap1f(GL_MAP1_VERTEX_3, 0, 1, 3, 4, cp);
        glEnable(GL_MAP1_VERTEX_3);
        glMapGrid1f(steps, 0.0, 1.0);
        glEvalMesh1(GL_LINE, 0, steps);
    }

    void draw_funky_curve() {
        GLfloat curveCP[] = { -1, -1, -1,
                               0, -1,  1,
                               1,  0, -1,
                               1,  1,  1 };
        const int stride = 3;
        glPushAttrib(GL_ALL_ATTRIB_BITS);
        glPointSize(5);
        twColorName(TW_CYAN);
        glBegin(GL_POINTS);
        for (int i = 0; i < 4*stride; i += stride)
            glVertex3f(curveCP[i], curveCP[i+1], curveCP[i+2]);
        glEnd();
        glLineWidth(3);
        twColorName(TW_YELLOW);
        draw_bezier_curve(curveCP);
        glPopAttrib();
    }

    void display(void) {
        twDisplayInit();
        twCamera();
        draw_funky_curve();
        glFlush();
        glutSwapBuffers();
    }

    int main(int argc, char* argv[]) {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
        glutInitWindowSize(500, 500);
        glutCreateWindow(argv[0]);
        glutDisplayFunc(display);
        twBoundingBox(-1, 1, -1, 1, -1, 1);
        twMainInit();
        glutMainLoop();
    }

Figure 7: Draws a curved line in the unit cube.

9.1 Strides

What the heck is a stride? Since the control point data is given to OpenGL in one flat array, the stride is the number of elements to skip to get from one row/column to the next row/column. For curves, the stride is almost always 3, because the elements are 3-place coordinates, consecutive in memory. If you're specifying colors, the stride will be 4, because the elements are 4-place RGBA values, consecutive in memory. The stride becomes more complicated when we deal with 2D surfaces instead of 1D curves.

10 Representing Surface Patches

Using a parametric representation, each coordinate becomes a function of
two new parameters:

    x(s,t) = c00       + c01 t       + c02 t^2       + c03 t^3
           + c10 s     + c11 s t     + c12 s t^2     + c13 s t^3
           + c20 s^2   + c21 s^2 t   + c22 s^2 t^2   + c23 s^2 t^3
           + c30 s^3   + c31 s^3 t   + c32 s^3 t^2   + c33 s^3 t^3

or, more compactly,

    x(s,t) = sum over i = 0..3 and j = 0..3 of cij s^i t^j

Since there are 16 coefficients, we need 16 points on the patch. The four interior points are hard to interpret. Concentrating on one corner: if it lies in the plane determined by the three boundary points, the patch is locally flat; otherwise, it tends to twist. This is hard to picture. It is, of course, related to the mixed partial derivative of the patch, d^2 p / (ds dt).

11 Bezier Surfaces in OpenGL

To handle surfaces, we just convert the OpenGL functions from section 9 above to 2D. The basic functions are:

    glMap2f(type, umin, umax, ustride, uorder,
            vmin, vmax, vstride, vorder, pointarray);
    glEnable(type);
    glEvalCoord2f(u, v);

Here, it's even more common to let OpenGL do the work of generating all the points:

    glMapGrid2f(usteps, umin, umax, vsteps, vmin, vmax);
    glEvalMesh2(GL_FILL, ustart, ustop, vstart, vstop);

Figure 8: Normals for Bezier surfaces are computed as u x v.

12 Normal Vectors

If you want to use lighting on a surface, you have to generate normals as well. You can do it yourself, using:

    glMap2f(GL_MAP2_VERTEX_3, umin, umax, ustride, uorder,
            vmin, vmax, vstride, vorder, pointarray);
    glEnable(GL_MAP2_VERTEX_3);
    glMap2f(GL_MAP2_NORMAL, umin, umax, ustride, uorder,
            vmin, vmax, vstride, vorder, normalarray);
    glEnable(GL_MAP2_NORMAL);

However, you can make OpenGL compute the normals for you:

    glMap2f(GL_MAP2_VERTEX_3, umin, umax, ustride, uorder,
            vmin, vmax, vstride, vorder, pointarray);
    glEnable(GL_MAP2_VERTEX_3);
    glEnable(GL_AUTO_NORMAL);

TW enables GL_AUTO_NORMAL for you. However, when OpenGL generates a normal vector for you, how does it do it? The answer is surprisingly simple, and yet the implications aren't obvious. To be concrete, suppose we want to have a quadratic Bezier surface. This means there are only four control points, the four corners, since the interior is all just bilinear interpolation. See figure 8. The normal for the surface in that figure will face towards
us, because it is computed as u (the red arrow) cross v (the green arrow). The u and v vectors are determined by the direction of the two parameters in the description of the Bezier surface. For example, the control points for the quadratic Bezier surface in figure 8 would be defined as follows:

    GLfloat cp[] = { Ax, Ay, Az,
                     Bx, By, Bz,
                     Dx, Dy, Dz,
                     Cx, Cy, Cz };
    glMap2f(GL_MAP2_VERTEX_3, 0, 1, 3, 2, 0, 1, 6, 2, cp);

This works because the u dimension is what we would call "left to right," and so we give the points as A then B, and D then C, with a ustride of 3. The v dimension is what we would call "bottom to top," and so we give the points as A then D, and B then C, with a vstride of 6. The u dimension nests inside the v dimension, and so the control points are given in the order ABDC. The right-hand rule tells us that left-to-right crossed with bottom-to-top yields a vector that points towards us.

Of course, in that example we decided to give the lower left corner, vertex A, first. If we decided to have the upper left corner first (vertex D), and we still wanted to have the surface normal face towards us, we would give the control points in the order DACB, because the u dimension is determined by DA and CB, which is top to bottom, and the v dimension is left to right, so AB and DC as before. You can have any control point first and still determine the surface normal. An example of this coding can be seen in ~cs307/public_html/demos/curves-and-surfaces/Normals.cc.

13 Demos

We'll look at several demos of how to do Bezier curves. All of the following are in public_html/demos/curves-and-surfaces:

FunkyCurve.cc: We put the code for this above. Note how the curve control points are defined and drawn. The curve isn't very smooth; how can we make it smoother? Notice the stride.

CokeSilhouette.cc: This draws two three-part curves, reflected across the y axis. See how we ensure that the curves line up properly at the joints. Note the reflection code, and the alternative using glScalef. We could still use affine transformations to place the silhouette anywhere we want. Look at how the
arrays are defined.

BezierTutor.cc: This program can help you define curves.

Flag.cc: This is a simple demonstration of a 2D Bezier surface. There's no lighting in this example.

Dome.cc: This program shows a nicely symmetrical dome, with an interface that lets you modify one of the points. The code needs to be updated to use TW, but it might still be interesting, particularly the way we take one curve and turn it into a symmetrical dome.

CokeBottle.cc: The famous Coke bottle. Before we look at this, we'll discuss circular arcs, described in section 14 below. Look at how the arrays are defined and used, and contrast this with CokeSilhouette.cc. Look again at stride, now that we're in 2D. Look at how lighting and normals are done.

14 Circular Arcs with Bezier

It's a useful exercise to consider how to do a circle using Bezier curves, as opposed to computing lots of sines and cosines. Note that it is mathematically impossible to do this perfectly, since Bezier curves are polynomials and a circle is not any polynomial, but the approximation might be good enough.

First, note that we can't do a full circle with one curve, because the first and last control points would be the same, and the bounding region would have no area. So let's try a half-circle. Observe that the requirement that the curve is tangent to the line between the first two control points and between the last two control points, together with symmetry, gives us:

    P0 = (-1, 0),  P1 = (-1, a),  P2 = (1, a),  P3 = (1, 0)

Using the formula for the midpoint of the curve in equation 4, we get:

    P(0.5) = (P0 + 3P1 + 3P2 + P3)/8 = (0, 6a/8)

Setting this equal to the top of the circle, (0, 1), gives a = 4/3. Trying this value, however, yields an approximation that isn't really good enough. Let's try a quarter-circle. We get:

    P0 = (0, 1),  P1 = (a, 1),  P2 = (1, a),  P3 = (1, 0)

Using the formula for the midpoint of the curve again, we get:

    P(0.5) = ((4 + 3a)/8, (4 + 3a)/8)

Requiring this point to lie on the unit circle means (4 + 3a)/8 = sqrt(2)/2, so a = (4/3)(sqrt(2) - 1), or approximately 0.55. Trying this value yields a pretty good approximation. We can then get the rest of the circle with rotations.

Lecture on Alpha Transparency

1 Alpha

The color of an object or material can have a fourth component, called alpha. This is notated as the RGBA system, or
occasionally RGBa. The alpha component has no fixed meaning, but we will see today what meaning it typically has, namely the opacity of the material: a = 1 is perfectly opaque, a = 0 is perfectly transparent. To use RGBA, you have to initialize your OpenGL program as follows:

    glutInitDisplayMode(GLUT_RGBA | ...);

But since GLUT_RGB is an alias for GLUT_RGBA, we've been using it all along.

2 Blending

Given the pipeline model, we understand that at some moment during the rendering process, some of our objects have been drawn and exist only in the frame buffer, and some of our objects have not yet been drawn. So there is a time when the rendering of the next object is being combined with the rendering of some previous object. In the usual case, the new object's pixels overwrite the old pixels. In general, though, OpenGL allows you to blend the two sets of pixels in the following way. The pixels already in the frame buffer are known as the destination pixels, and a particular pixel is colored (Rd, Gd, Bd, Ad). The new pixels are called the source pixels, and a particular one is colored (Rs, Gs, Bs, As). You can choose the blending factors s and d so that the combined color is computed as:

    (R, G, B, A) = (Rs s + Rd d,  Gs s + Gd d,  Bs s + Bd d,  As s + Ad d)

The result components are clamped to the range [0, 1]. The s and d factors are given to OpenGL using a constant from the following list, most of which we will ignore:

    GL_ZERO, GL_ONE,
    GL_SRC_COLOR, GL_ONE_MINUS_SRC_COLOR,
    GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
    GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA,
    GL_DST_COLOR, GL_ONE_MINUS_DST_COLOR,
    GL_SRC_ALPHA_SATURATE,
    GL_CONSTANT_COLOR, GL_ONE_MINUS_CONSTANT_COLOR,
    GL_CONSTANT_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA

Note that any of these that need the destination ALPHA will require an ALPHA buffer. You also need to use:

    glEnable(GL_BLEND);
    glBlendFunc(sourcefactor, destinationfactor);

The default blend function is glBlendFunc(GL_ONE, GL_ZERO), which just replaces (overwrites) the destination with the source. See the man page for glBlendFunc for more information. However, we will quote one important sentence from that man page: "Transparency is best
implemented using blend function (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) with primitives sorted from farthest to nearest." Let's try to understand the first half of that:

- We don't care what the alpha of the framebuffer (the destination) is. We only care about the opacity of the new object.
- We want to use a convex sum, so that things have the right relative weight (and don't get progressively brighter or darker).

So we use the alpha of the source object for the source factor, and the complement for the destination. In the next section, we'll try to understand the second half of that sentence.

First, let's look at ~cs307/public_html/demos/tutor.cc. This lets you adjust the alpha values for three quads, drawn either furthest to nearest (in keeping with the advice from the man page) or in the reverse order.

Another demo is ~cs307/public_html/demos/TransparentQuads.cc:

- The red surface is opaque. The green surface is slightly transparent. The blue surface is mostly transparent.
- You can toggle the background between black and white with the "b" key.
- You can modify the transparency of the green and blue quads from 0.7/0.3 to 0.5/0.5 with a keypress. Note that it is not important that they add to 100 percent; I just wanted to start out with one that was mostly opaque and the other mostly transparent.
- Notice the use of glClearColor and glClear. The first is like glColor, except it sets the color used for clearing the framebuffer. The TW default is to use 70 percent gray. The second clears the frame buffer and associated buffers. (We'll see about the depth buffer soon.)
- Notice how the blending is defined.
- Notice how the colors are defined, including the alpha value.
- You can toggle the background color using "b". Notice the effect it has: the green and blue quads are mixed with the background color when they are drawn. Also, the red quad may look different: surrounding colors can affect our psychological perception of color. We can see some of the effect of surrounding color on our perceptions of color by switching to immerse mode and
comparing.

3 Hidden Surface Elimination

Suppose we render a scene with surfaces that overlap or even interpenetrate. For example:

- a blue teapot sitting on a brown table: some pixels in the framebuffer could be either blue or brown
- a teddy bear: its ears and nose are spheres that penetrate its head
- a teddy bear with a knife in its guts

How does a graphics system determine which color to use for any pixel? There are two major algorithms: depth sort, which is object-based, and depth buffer, which is pixel-based.

3.1 Depth Sort

Determine which object is farthest from the camera; draw that first, then the next, and so forth. Since the nearer stuff always overwrites the farther stuff, this works well. But:

- We would have to draw things in different orders depending on the position of the camera.
- What about objects that interpenetrate? How do you handle that?

Sometimes we break up objects into smaller pieces just so that we can sort them by distance. If we take that to its logical extreme and reorganize our thinking, we come to the next algorithm.

3.2 Depth Buffer

For each pixel, keep track of the depth of that pixel. This buffer needs to be initialized to some maximum at the beginning of rendering. Whenever we consider drawing a pixel, first compute the new depth and compare it to the old depth, looking it up in the depth buffer. If the new depth is less, update the color buffer and the depth buffer. Computing the depth is easy, because we have the original (x, y, z) coordinates of the object at the beginning of the transformation process, and we maintain all of them to the end.

3.3 Depth in OpenGL

OpenGL uses the depth buffer algorithm, AKA the Z-buffer algorithm. To use it, you have to:

    glutInitDisplayMode(GLUT_DEPTH | ...);
    glEnable(GL_DEPTH_TEST);

TW does the latter for you; in plain OpenGL, you have to remember to enable the depth test. Then, in your display function, you have to:

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

This is what initializes the buffer to the maximum value, 1.0. Finally, the depth buffer is only updated if the DepthMask is
true. This is the default value, but you can temporarily turn it off with glDepthMask(GL_FALSE).

A simple and useful demo is demos/transparency/SameDepth, which draws two quads that occupy the same space. By "occupy the same space," I don't mean just that their 2D projections overlap; I mean that in the 3D world, their volumes overlap. Because OpenGL retains depth information through projection (that's the z coordinate), if projections overlap, it can still tell which one is in front. However, if the volumes coincide, it can't tell which one is in front, because in fact neither one is. Therefore, if the depth test is enabled, OpenGL will make the decision based on the depth buffer, where tiny round-off errors may differ from pixel to pixel, so that sometimes it decides that the red one is in front and sometimes the green one. Thus we get a speckling effect. If you turn off the depth test, the second quad (the one drawn later) always wins.

4 Depth and Transparency

The depth buffer algorithm has real trouble with transparency. Why? If you update the depth buffer when you draw a transparent object, then an opaque object that is drawn later but is farther won't be drawn. Let's pause to make sure we understand that. Let's also return to the demo.

Let's go back to the TransparentQuads demo and look at the effect of the depth mask and the depth test.

If the depth test is OFF:

- Notice what happens when the green quad overlaps the red one. Where the green and blue quads pass behind the red one, and should be invisible, they are still visible. This is the effect of the lack of the depth test. We would want to use the depth test for drawing opaque objects, so that only the nearest is visible.
- Notice that the area where the green and blue quads overlap is a blend of both the green and blue, and the two halves (green behind blue on the right, and blue behind green on the left) should look identical. This is because, if we ignore depth, we're just mixing green and blue, and it doesn't matter which is SRC and which is DEST.
- Notice that if the
depth test is off, the depth mask makes no difference. This makes sense, since the depth buffer is ignored, so it doesn't matter whether you update it.

So it seems that the depth test should be on. If the depth test is ON:

- If the depth mask is TRUE, the fact that the blue quad passes behind the green one is noted: we only see the right half of the overlap area, because in the left half the blue quad is behind the green one, and so we don't see it at all. This is the effect of the Z-buffer algorithm. So even though the green quad is partly transparent, we can't see any blue through it. That's bad. Try finding a view where you can compare red-through-green with red-through-blue-through-green. They look the same. Should they?
- If the depth mask is FALSE, the blue and green colors mix where the projections overlap. That's what we want. However, since the depth buffer isn't being updated, OpenGL can't properly interleave this surface with others.

Solution: don't update the depth buffer. You can turn off updating temporarily with glDepthMask(GL_FALSE). Draw all the opaque objects first, then turn off updating the depth buffer and draw all the transparent ones from furthest to nearest, switching to the other hidden-surface algorithm. The coding scheme is shown in figure 1.

    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);   /* this is the default */
    glDisable(GL_BLEND);
    /* draw all opaque objects */
    ...
    /* done */
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_BLEND);
    glDepthMask(GL_FALSE);
    /* draw all transparent objects */
    ...
    /* done */
    glutSwapBuffers();

Figure 1: Code that draws the opaque objects first.

Compare these. In particular, when opaque objects are drawn last, try it with and without the depth test. It just doesn't work.

Demos: ~cs307/public_html/demos/transparency/tutor

- The squares are all at the same distance.
- They are drawn in order, from lower left first to upper right last.
- Try the following: bottom (0.5, 0.5, 0, 1); middle (0, 1, 0, 0.5); top (1, 0, 0, 0.5). Compare. Does this make sense?
- Change the middle to (0, 1, 0, 1). Compare. Does this make sense?

You can use this tutor to experiment with different
combinations of RGBA values.

5 Depth Resolution

There are a limited number of bits in the depth buffer; the actual number depends on the graphics card. Quoting from the OpenGL Reference Manual page for gluPerspective:

    Depth buffer precision is affected by the values specified by zNear and
    zFar. The greater the ratio of zFar to zNear is, the less effective the
    depth buffer will be at distinguishing between surfaces that are near
    each other. If r = zFar / zNear, roughly log2(r) bits of depth buffer
    precision are lost. Because r approaches infinity as zNear approaches 0,
    zNear must never be set to 0.

So even though it seems realistic to set near to zero and far to infinity, the practical result is that the depth buffer algorithm won't easily be able to tell which of two surfaces is closer if they are similar in distance. Some of you have discovered that if they are exactly the same distance, the color can be almost random, based on small round-off errors in the calculation.

