19 Rendering

T. Raghuveera

epgp books

 

 

Objectives:

  • To understand how to apply shading models, textures, shadows for visual realism
  • To understand the ‘Ray Tracing method’ of rendering

 

Discussion:

 

The study of 3D Graphics primarily focuses on imitating the real world as closely as possible. This goal is referred to as ‘Visual Realism’. Modeling 3D objects that look exactly like, or closely similar to, real-world objects is a daunting task and involves a series of steps, with each step producing an improved version of the previous one. The stages of Visual Realism are summarized here.

 

Wire frame → Shading → Lighting → Textures → Shadows → Materials → Environment Maps

 

The wire frame model of an object is the first level of visual realism, where only the object’s skeleton is modeled using one of the popular techniques available. The next stage is to apply shading techniques that describe the interaction of light with the surfaces of objects. Lights and their properties are then modeled to give a sense of objects bathed in light. Since there are lights, the objects cast shadows, and so shadows are modeled. Textures and materials give the object the appearance of the material it is made of. The final step is to invoke the rendering algorithms to complete the goal of visual realism. Rendering algorithms compute the pixel intensity values for the given viewing dimensions and direction. The figure below shows the stages; as we move towards the right, the objects attain their true visual realism.

 

Shading Model:

 

A shading model captures the interactions of light with the surfaces of objects. These interactions include absorption, reflection and refraction. The model assumes that there are two types of lights that illuminate objects in a scene: point light and ambient light. A shading model computes the amount of colored light that gets reflected from a surface towards a given viewing position. The aim of this step is to compute the amount of reflected light that reaches the eye, along with its color and intensity. The interaction of light with objects is quite a complex phenomenon. The reflected light intensity depends on the properties of the light, the surface properties of the objects, the contributions of neighbouring objects and their properties, and the viewing direction.

 

 

Let us understand how light interacts with the surface of an object. Most objects reflect some part of the incident light and absorb some part of it. If all of the light is absorbed, the object is a black body.

 

 

Diffuse Scattering: Occurs when some of the incident light penetrates the surface slightly and is re-radiated uniformly in all directions. Its color is usually affected by the nature of the material out of which the surface is made. It is shown in the figure above. The effect of this interaction is that the object looks dull and seems to be lit equally on all sides, if we assume ambient light. An example of a ball modeled with diffuse properties is shown in the figure below.

 

Specular Reflections: Incident light does not penetrate the object; instead, it is reflected directly from its outer surface, as with plastic, metal, etc. It is shown in the figure above. The effect of this interaction is that the object will have a bright spot on its surface called the specular highlight.

The object appears to have been made of reflective materials. An example of a ball modeled with specular properties is shown in the figure below.

 

The truth is that most surfaces produce a combination of the two types of reflections, depending on the characteristics of the surfaces such as roughness and the type of the material with which the surface is made.

 

The computation involves many factors, listed below; a small shading sketch follows the list.

  • Amount of light incident
  • The color of incident light
  • The relation between viewing direction and incident direction
  • The surface properties of objects
  • The types of light sources in the scene
  • The reflective properties of neighboring objects
  • Ambient light and its color
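Putting these factors together, the sketch below evaluates a classic ambient + diffuse + specular (Phong-style) reflection at a single surface point. It is a minimal illustration: the coefficient names (ka, kd, ks, shininess) and all numeric values are assumed examples, not taken from the text.

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def shade(point, normal, eye, light_pos, light_color,
          ka=0.1, kd=0.7, ks=0.5, shininess=32):
    N = normalize(normal)
    L = normalize(light_pos - point)            # direction to the light
    V = normalize(eye - point)                  # direction to the viewer
    lambert = max(np.dot(N, L), 0.0)            # Lambert's cosine term (diffuse)
    R = normalize(2.0 * np.dot(N, L) * N - L)   # mirror reflection of L about N
    # Specular highlight only when the surface actually faces the light.
    specular = ks * max(np.dot(R, V), 0.0) ** shininess if lambert > 0 else 0.0
    return (ka + kd * lambert + specular) * light_color

# Example: a point on a floor lit from above, viewed from an angle.
color = shade(point=np.array([0.0, 0.0, 0.0]),
              normal=np.array([0.0, 1.0, 0.0]),
              eye=np.array([0.0, 2.0, 5.0]),
              light_pos=np.array([0.0, 4.0, 0.0]),
              light_color=np.array([1.0, 1.0, 1.0]))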

 

Flat Shading:

 

This is the next level of visual realism after the wire frame level. Its effects are:

  • Color is computed at one point on the face and the same color is applied throughout the face.
  • Specular highlights are rendered poorly
  • Edges between faces are identifiable due to a perceptual phenomenon called lateral inhibition, which leads to the Mach band effect, as shown in the figure below.

 

Smooth Shading:

 

Gouraud Shading: This shading model computes a different color value for each pixel. It uses the method of linear interpolation. As can be seen in the first figure below, the color for a polygonal face can be computed by interpolating the known colors given at the vertices.
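As an illustration, the sketch below interpolates the vertex colors of a triangle using barycentric weights to obtain a Gouraud-style color at an interior point. The triangle coordinates and vertex colors are assumed example values.

import numpy as np

def barycentric(p, a, b, c):
    # Solve p = u*a + v*b + w*c with u + v + w = 1 (2D screen coordinates).
    m = np.array([[a[0], b[0], c[0]],
                  [a[1], b[1], c[1]],
                  [1.0,  1.0,  1.0]])
    return np.linalg.solve(m, np.array([p[0], p[1], 1.0]))

def gouraud_color(p, verts, vert_colors):
    u, v, w = barycentric(p, *verts)
    return u * vert_colors[0] + v * vert_colors[1] + w * vert_colors[2]

verts = [np.array([0.0, 0.0]), np.array([10.0, 0.0]), np.array([0.0, 10.0])]
vert_colors = [np.array([1.0, 0.0, 0.0]),   # red at vertex 0
               np.array([0.0, 1.0, 0.0]),   # green at vertex 1
               np.array([0.0, 0.0, 1.0])]   # blue at vertex 2
print(gouraud_color(np.array([3.0, 3.0]), verts, vert_colors))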

 

 

Phong Shading: This shading model computes the normal vector at each point on the face, and finds a suitable color there. The normal vector at each pixel is computed by interpolating the normal vectors at the vertices of the polygon. The concept is explained in the second figure above.

 

Since normal vectors identify the orientation of surfaces in the 3D world, interpolating the normal vectors gives the most accurate direction, and hence intensity, for each pixel, given a viewing position and direction.
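A minimal sketch of this per-pixel normal interpolation is given below. The barycentric weights and vertex normals are assumed example values; the resulting normal would then feed into a per-pixel lighting computation such as the one sketched earlier.

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong_normal(u, v, w, n0, n1, n2):
    # Interpolated normals shrink below unit length, so renormalize.
    return normalize(u * n0 + v * n1 + w * n2)

n0 = normalize(np.array([0.0, 1.0, 0.2]))
n1 = normalize(np.array([0.2, 1.0, 0.0]))
n2 = normalize(np.array([-0.1, 1.0, 0.1]))
pixel_normal = phong_normal(0.2, 0.3, 0.5, n0, n1, n2)
# pixel_normal is now used to evaluate the lighting model at this pixel.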

 

The common attributes of a light source are listed below (a small sketch of such a structure follows the list):

  • Position
  • Type of light (default is point light)
  • Color of light
  • Attenuation
  • Default diffuse, ambient, specular values
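For illustration, these attributes could be grouped into a single structure as sketched below; the field names and default values are assumptions, not any particular library’s API.

from dataclasses import dataclass

@dataclass
class PointLight:
    position: tuple = (0.0, 10.0, 0.0)
    color:    tuple = (1.0, 1.0, 1.0)
    # Attenuation factors for 1 / (kc + kl*d + kq*d*d), d = distance to the point.
    constant_attenuation:  float = 1.0
    linear_attenuation:    float = 0.0
    quadratic_attenuation: float = 0.0
    # Default ambient / diffuse / specular contributions.
    ambient:  tuple = (0.1, 0.1, 0.1)
    diffuse:  tuple = (0.8, 0.8, 0.8)
    specular: tuple = (1.0, 1.0, 1.0)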

 

Adding Textures to faces:

 

Textures give objects bathed in light their look and feel. It is cumbersome and inefficient to compute color and intensity at each position on the surface of a 3D object. Instead, textures are modeled as wrapper sheets made up of 2D images. You can imagine this to be similar to making a globe, where a printed plastic sheet is wrapped around a spherical object.

There are fundamentally two types of textures: bitmap textures and procedural textures. Bitmap textures are captured 2D images, while procedural textures are artificial images generated by an algorithm / procedure.

Before we paste textures onto the surfaces of 3D objects, we need to represent the 3D surfaces in their parametric form. The procedure takes the following order.

 

Image space → Texture space → Object space → Viewing space → Screen space

 

Except for the object space, which is in 3D, the rest of the coordinate spaces are in 2D. It is interesting to note that textures are in 2D space, while the surfaces of objects onto which a texture is pasted are in 3D space. The aim is to find a mapping between the object space in 3D and the texture space in 2D. For each position on the 3D surface a corresponding texture value is identified and applied. The texture space is a normalized version of the 2D image space. For example, if the image has dimensions 640 × 480, in the texture space it is represented as a normalized square over the interval [0, 1]; i.e., the pixel position (320, 240) in the image space will have its equivalent position in the texture space at (0.5, 0.5).

 

Since the pixel values from an image are being used as texture values, the name texels is used in place of pixels. For example, if we consider a cylindrical surface onto which a texture is to be pasted, and a bitmap image to be used as the texture, we perform a reverse mapping from 3D to 2D: for every position on the cylindrical surface, we find the corresponding texel in the texture space, and from the texture space find the actual pixel intensity value in the texture image in image space.
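A minimal sketch of this reverse mapping for a cylinder is given below. The cylinder (radius r, height h, axis along y) and the 640 × 480 texture size are assumed example values.

import math

def cylinder_to_texture(x, y, z, h):
    # Map a surface point to normalized texture coordinates (s, t) in [0, 1].
    s = (math.atan2(z, x) + math.pi) / (2.0 * math.pi)  # angle around the axis
    t = y / h                                           # height along the axis
    return s, t

def texture_to_pixel(s, t, width, height):
    # Clamp to the last row/column so s = 1 or t = 1 stays inside the image.
    col = min(int(s * width),  width  - 1)
    row = min(int(t * height), height - 1)
    return col, row

s, t = cylinder_to_texture(x=1.0, y=2.0, z=0.0, h=4.0)   # point on the surface
print(texture_to_pixel(s, t, width=640, height=480))     # texel to sample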

 

The texture value can be used in a variety of ways (a small sketch follows this list):

  • It can be used as the color of the face itself.
  • It can be used as the ambient, diffuse, or specular reflection coefficients to modulate the amount of light reflected from the face.
  • It can be used to perturb the normal vector to the surface to give the object a bumpy appearance as shown in the figure below.
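As a small illustration of the first two uses, the sketch below treats the fetched texel as the per-point diffuse coefficient that modulates the reflected light; the function name and values are assumptions.

def textured_diffuse(texel_rgb, light_rgb, n_dot_l, kd=1.0):
    # The texel acts as the per-point diffuse coefficient for each channel.
    return tuple(kd * t * l * max(n_dot_l, 0.0)
                 for t, l in zip(texel_rgb, light_rgb))

print(textured_diffuse((0.9, 0.6, 0.3), (1.0, 1.0, 1.0), n_dot_l=0.7))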

 

 

 

Reflection Mapping:

 

In the real world, object surfaces that are shiny and reflective show images of the surrounding environment or objects. The same effect is imitated in reflection mapping. The two main types are:

 

Chrome mapping: A rough and usually blurry image suggesting the surrounding environment is reflected.

Environment mapping: A recognizable image of the environment is reflected.
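A minimal sketch of an environment-map lookup is given below: the viewing direction is mirrored about the surface normal, and the reflected direction indexes a latitude/longitude environment image. The map layout, image size and vectors are illustrative assumptions.

import math
import numpy as np

def reflect(incident, normal):
    # r = i - 2 (i . n) n, assuming a unit-length normal.
    return incident - 2.0 * np.dot(incident, normal) * normal

def env_map_lookup(direction, width, height):
    d = direction / np.linalg.norm(direction)
    u = (math.atan2(d[2], d[0]) + math.pi) / (2.0 * math.pi)  # longitude
    v = math.acos(max(-1.0, min(1.0, d[1]))) / math.pi        # latitude
    return min(int(u * width), width - 1), min(int(v * height), height - 1)

view = np.array([0.0, -1.0, -1.0])      # ray from the eye towards the surface
normal = np.array([0.0, 1.0, 0.0])      # surface normal at the hit point
print(env_map_lookup(reflect(view, normal), 1024, 512))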

 

 

Shadows:

 

Adding the shadows of objects to a 3D scene enhances visual realism. All objects under light cast shadows. The shape of a shadow depends on the orientation of the object with respect to a fixed light source. Here we assume that the scene is lit by a point light source that emits light equally in all directions.

 

 

This technique displays shadows that are cast onto a flat surface by a point light source. The shape of the shadow is determined by projecting each of the faces of the object onto the plane of the floor, using the light source as the center of projection. For a cube, the shadow is the union of the projections of its six faces. After drawing the floor plane using the ambient, diffuse and specular light contributions, redraw the six projections of the cube’s faces using only ambient light. This draws the shadow in the right shape and color, as shown in the figure below.
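A minimal sketch of this projection, for a floor plane y = 0, is given below; the light position and the cube vertex are assumed example values. Projecting every vertex of a face this way gives the corresponding shadow polygon.

import numpy as np

def project_to_floor(vertex, light):
    # Along the ray light + t * (vertex - light), find t where y becomes 0.
    t = light[1] / (light[1] - vertex[1])
    return light + t * (vertex - light)

light = np.array([2.0, 8.0, 1.0])           # point light source
vertex = np.array([0.5, 1.0, 0.5])          # one corner of the cube
print(project_to_floor(vertex, light))      # its shadow point on the floor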

 

Ray Tracing:

 

Ray Tracing is a global-illumination-based rendering method for generating realistic images on the computer. Its primary use is creating and adding realism to images. This technique takes visual realism to its ultimate level, encompassing:

  • Enhance visual Realism
  • Lighting simulation
  • Hidden surface removal
  • Shadow calculation
  • Texturing
  • Reflections and Transparency

In ray tracing, a ray of light is traced in the backward direction, i.e., from the eye to the source of light, as shown in the figure below. We start from the eye or camera and trace the ray through a pixel in the image plane / view plane into the scene, and determine what it intersects and where. At each intersection, depending on the surface properties at the hit point, the light intensity is divided among reflected and refracted rays. These reflected and refracted rays are tracked further until they hit another object or surface, where the subdivision is performed again. This continues until the rays are tracked back to the light source. We take into account the contributions of those rays that finally make it to the source, and add all these smaller contributions to compute the fraction of light intensity that finally reaches a pixel on the grid.

In this technique, the ray is traced in reverse order because, although all light rays emerge from the light source, only a fraction of them ever pass through the grid of pixels / view plane. So we need to consider only those light rays that, after undergoing multiple reflections and refractions, reach the grid. The pixel is then set to the color value returned by the ray. If the ray misses all objects, that pixel is shaded with the background color.
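The sketch below illustrates this backward tracing for a toy scene with a single sphere and a single point light. The scene, camera set-up, recursion depth and the weighting between local and reflected contributions are illustrative assumptions, not a full ray tracer.

import numpy as np

SPHERE_CENTER = np.array([0.0, 0.0, -3.0])
SPHERE_RADIUS = 1.0
LIGHT_POS     = np.array([5.0, 5.0, 0.0])
BACKGROUND    = np.array([0.1, 0.1, 0.2])

def normalize(v):
    return v / np.linalg.norm(v)

def hit_sphere(origin, direction):
    # Solve |origin + t*direction - center|^2 = r^2 for the nearest t > 0.
    oc = origin - SPHERE_CENTER
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - SPHERE_RADIUS ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def trace(origin, direction, depth=0):
    t = hit_sphere(origin, direction)
    if t is None:
        return BACKGROUND                      # ray misses every object
    point  = origin + t * direction
    normal = normalize(point - SPHERE_CENTER)
    to_light = normalize(LIGHT_POS - point)
    local = np.array([1.0, 0.3, 0.3]) * max(np.dot(normal, to_light), 0.0)
    if depth >= 2:                             # stop the recursion
        return local
    reflected = direction - 2.0 * np.dot(direction, normal) * normal
    # Split the intensity between the local term and the reflected ray.
    return 0.8 * local + 0.2 * trace(point, normalize(reflected), depth + 1)

width, height, eye = 64, 64, np.array([0.0, 0.0, 0.0])
image = np.zeros((height, width, 3))
for row in range(height):
    for col in range(width):
        # Map the pixel to a point on a view plane at z = -1.
        x = (col + 0.5) / width * 2.0 - 1.0
        y = 1.0 - (row + 0.5) / height * 2.0
        ray_dir = normalize(np.array([x, y, -1.0]))
        image[row, col] = np.clip(trace(eye, ray_dir), 0.0, 1.0)
# 'image' now holds the rendered colors for the pixel grid.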

The figures above show how the ray tracing technique renders 3D scenes with remarkable clarity and realism.

 

Summary:

  • Learnt how shading models, lights, textures and shadows are applied on 3D objects for visual realism
  • Learnt the concepts of Ray Tracing technique