3D Renderer

In early fall of 2016, I began exploring the process of rendering polygonal objects from a different perspective.  In the past, I had used existing renderers such as Maya's Mental Ray or the one built into Blender.  This time around, however, I wanted to look closer and understand how renderers take a collection of points in 3-dimensional space and translate it into an image on the screen.  The best solution: write one myself and tackle the challenges as they arose.

Initial Rectangle Output

From Humble Beginnings

As with most things, it is often best to start off with what you know.  For this project, I began with my professor's C++ API, adding support for rendering rectangles (with the Painter's Algorithm) before building out from there.
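
Conceptually, the Painter's Algorithm just draws primitives from back to front, so nearer shapes overwrite the pixels of farther ones.  A minimal sketch of the idea (the `Rect` type and framebuffer layout here are illustrative stand-ins for the course API, not its actual interface):

```cpp
#include <algorithm>
#include <vector>

// Illustrative stand-in; the real primitive types came from the course API.
struct Rect { int x, y, w, h; float z; unsigned color; };  // z = depth from camera

// Painter's Algorithm: sort far-to-near, then draw in that order so
// nearer rectangles overwrite the pixels of farther ones.
void renderRects(std::vector<Rect> rects,
                 std::vector<unsigned>& frame, int width, int height) {
    std::sort(rects.begin(), rects.end(),
              [](const Rect& a, const Rect& b) { return a.z > b.z; });
    for (const Rect& r : rects)
        for (int y = std::max(0, r.y); y < std::min(height, r.y + r.h); ++y)
            for (int x = std::max(0, r.x); x < std::min(width, r.x + r.w); ++x)
                frame[y * width + x] = r.color;
}
```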


Screen-Space Render

Trouble With Triangles

Moving ahead, I continued my efforts by attempting to render a series of triangles (in screen space).  At this phase, I could have used either a scan-line or Linear Expression Evaluation (LEE) approach; for my implementation, I chose to sort the vertices of each face and then rasterize with LEE.  Even so, I ran into issues where the triangles would render out of order (sometimes more distant triangles would cover closer ones), and so I also decided to implement Z-buffering.
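
The LEE approach evaluates a linear expression for each pixel against all three edges of the triangle; a pixel is inside when all three tests agree, and the same edge values double as barycentric weights for interpolating depth.  A rough sketch of how that combines with a Z-buffer test (a naive full-screen loop with no bounding-box optimization; the types and counter-clockwise winding convention are assumptions):

```cpp
#include <array>
#include <vector>

struct Vertex { float x, y, z; };

// Signed-area edge test: positive if (px, py) is to the left of edge a->b.
static float edge(const Vertex& a, const Vertex& b, float px, float py) {
    return (b.x - a.x) * (py - a.y) - (b.y - a.y) * (px - a.x);
}

// LEE rasterization with a z-buffer.  Assumes vertices are already in
// screen space and consistently wound (the sorting step described above),
// and that zbuf was initialized to a large depth value.
void rasterize(const std::array<Vertex, 3>& t, int width, int height,
               std::vector<float>& zbuf, std::vector<unsigned>& frame,
               unsigned color) {
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float w0 = edge(t[1], t[2], x + 0.5f, y + 0.5f);
            float w1 = edge(t[2], t[0], x + 0.5f, y + 0.5f);
            float w2 = edge(t[0], t[1], x + 0.5f, y + 0.5f);
            if (w0 < 0 || w1 < 0 || w2 < 0) continue;   // outside the triangle
            float area = w0 + w1 + w2;
            if (area <= 0) continue;
            // Barycentric interpolation of depth across the face.
            float z = (w0 * t[0].z + w1 * t[1].z + w2 * t[2].z) / area;
            int i = y * width + x;
            if (z < zbuf[i]) {          // closer than what is there already
                zbuf[i] = z;
                frame[i] = color;
            }
        }
    }
}
```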


Transformed Utah Teapot

The Leap to 3D

I soon realized that, while functional, the renderer would need to be expanded if it were to do more than produce images from points given in screen space.  Since the vertices of most 3D objects are defined in the object's own model space, I needed some way of converting from one coordinate system to another.  To accomplish this, I created a series of matrix transformations (model>world, world>image, image>perspective, perspective>screen) and placed them on a stack.  From there, I combined the matrices into a single model>screen matrix and applied this transformation to each of the vertices within the model.  The resulting image may not have appeared much different, but at least I was able to move the camera around and render at different resolutions.
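
The mechanics of collapsing the stack are just repeated matrix multiplication.  A sketch of the idea, using illustrative names for the matrices rather than the ones from the actual code:

```cpp
#include <array>

using Mat4 = std::array<std::array<float, 4>, 4>;
struct Vec4 { float x, y, z, w; };

// Standard 4x4 matrix product: out = a * b.
Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 out{};
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            for (int k = 0; k < 4; ++k)
                out[r][c] += a[r][k] * b[k][c];
    return out;
}

Vec4 transform(const Mat4& m, const Vec4& v) {
    return { m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w,
             m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w,
             m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w,
             m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w };
}

// Collapse the stack into one model>screen matrix, then run each vertex
// through it once, finishing with the perspective divide.
Vec4 toScreen(const Mat4& screen, const Mat4& persp,
              const Mat4& image, const Mat4& world, Vec4 v) {
    Mat4 m = mul(screen, mul(persp, mul(image, world)));
    Vec4 p = transform(m, v);
    p.x /= p.w;  p.y /= p.w;  p.z /= p.w;  p.w = 1.0f;
    return p;
}
```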

Shaders Compared - Phong, Gouraud

Shady Business

Up until now, I had been relying on a simplified form of flat shading to represent object depth and shape in 2D.  However, this is only one solution.  I also wanted to convey light direction relative to both the camera and the object.

To accomplish this, I ended up storing the vertex normals, transforming them into screen space (along with the vertices), and then using the results to calculate color with a simplified form of the shading equation.

Simplified Shading Equation
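
The original equation graphic is not reproduced here, but a common simplified form (ambient, diffuse, and specular terms, with the latter two summed over the lights; the exact form I used may have differed) is:

$$C = K_s \sum_{\ell} I_\ell \,(\mathbf{R} \cdot \mathbf{E})^{s} \;+\; K_d \sum_{\ell} I_\ell \,(\mathbf{N} \cdot \mathbf{L}_\ell) \;+\; K_a I_a$$

where $K_s$, $K_d$, and $K_a$ are the specular, diffuse, and ambient coefficients, $I_\ell$ is the intensity of light $\ell$, $\mathbf{N}$ is the surface normal, $\mathbf{L}_\ell$ is the direction to the light, $\mathbf{R}$ is the reflected light direction, $\mathbf{E}$ is the direction to the eye, and $s$ is the specular exponent.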

Once I could calculate color, I proceeded to create three shaders: flat (filling each face with the color calculated at one of its vertices, now accounting for lights and specular), Gouraud (color from the vertex normals, interpolated across the face), and Phong (color calculated from normals which have been interpolated across the face).
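
The difference between the latter two comes down to what gets interpolated.  A sketch of that distinction, using a stand-in Lambertian `shade` function in place of the full shading equation (the barycentric weights w0..w2 come from rasterization):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 lerp3(const Vec3& a, const Vec3& b, const Vec3& c,
           float w0, float w1, float w2) {
    return { w0*a.x + w1*b.x + w2*c.x,
             w0*a.y + w1*b.y + w2*c.y,
             w0*a.z + w1*b.z + w2*c.z };
}

Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/len, v.y/len, v.z/len };
}

// Stand-in for the full shading equation: plain Lambertian diffuse
// against a single hard-coded light, just to show the data flow.
Vec3 shade(const Vec3& n) {
    const Vec3 light = normalize({1, 1, 1});
    float d = std::max(0.0f, n.x*light.x + n.y*light.y + n.z*light.z);
    return { d, d, d };
}

// Gouraud: shade once per vertex, then interpolate the resulting colors.
Vec3 gouraudPixel(const Vec3 n[3], float w0, float w1, float w2) {
    return lerp3(shade(n[0]), shade(n[1]), shade(n[2]), w0, w1, w2);
}

// Phong: interpolate the normals, re-normalize, then shade per pixel.
Vec3 phongPixel(const Vec3 n[3], float w0, float w1, float w2) {
    return shade(normalize(lerp3(n[0], n[1], n[2], w0, w1, w2)));
}
```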

Textured Utah Teapot

Textures

Even with the improved shading, I wanted to add more detail to the surfaces of objects.  One clear way to do this is by applying textures.  However, I had to transform the U and V coordinates for each face before I could go about sampling a texture and passing its color data in as the diffuse component of the shading equation.
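
For reference, bilinear sampling blends the four texels surrounding the sample point.  A sketch (this assumes u and v have already been mapped back into [0, 1] for the pixel; the result feeds in as the diffuse term):

```cpp
#include <algorithm>
#include <vector>

struct Color { float r, g, b; };

// Bilinear sample of a texture at normalized (u, v).
Color sampleBilinear(const std::vector<Color>& tex, int texW, int texH,
                     float u, float v) {
    float fx = u * (texW - 1), fy = v * (texH - 1);
    int x0 = (int)fx, y0 = (int)fy;
    int x1 = std::min(x0 + 1, texW - 1), y1 = std::min(y0 + 1, texH - 1);
    float tx = fx - x0, ty = fy - y0;

    auto at  = [&](int x, int y) { return tex[y * texW + x]; };
    auto mix = [](Color a, Color b, float t) {
        return Color{ a.r + (b.r - a.r) * t,
                      a.g + (b.g - a.g) * t,
                      a.b + (b.b - a.b) * t };
    };
    // Blend horizontally along the top and bottom texel pairs,
    // then blend those two results vertically.
    return mix(mix(at(x0, y0), at(x1, y0), tx),
               mix(at(x0, y1), at(x1, y1), tx), ty);
}
```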

Procedurally-Textured Utah Teapot

Unfortunately, the bilinear texture sampling produced some unwanted blurring.  In an effort to overcome this, I also experimented with applying procedural textures.  In this example, I used a Julia Set with colors mapped to ranges on the normalized spectrum.
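
A procedural texture like this needs no image at all: the value is computed from the UV coordinates directly, at whatever resolution the renderer asks for.  A sketch of a Julia Set sampler (the constant c and iteration cap are illustrative, not necessarily the values I used):

```cpp
#include <complex>

// Procedural texture: iterate z -> z^2 + c over the complex plane and
// map the escape count to a normalized value, which can then index into
// a color ramp across the spectrum.
float juliaSample(float u, float v) {
    std::complex<float> z(u * 4.0f - 2.0f, v * 4.0f - 2.0f); // map [0,1] -> [-2,2]
    const std::complex<float> c(-0.8f, 0.156f);
    const int maxIter = 64;
    int i = 0;
    while (i < maxIter && std::norm(z) < 4.0f) {  // |z|^2 < 4, i.e. not yet escaped
        z = z * z + c;
        ++i;
    }
    return (float)i / maxIter;  // 0..1, mapped onto the color spectrum
}
```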

Anti-Aliasing in Action

Anti-Aliasing

Eventually, becoming annoyed by the jagged lines and stair-stepping in my images, I started working on a form of anti-aliasing.  My final implementation uses a jittered sampling approach: several slightly-offset images are rendered and then blended to produce the final result.
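
A sketch of that blending step (random jitter is shown here, though a fixed table of offsets and weights works the same way; `renderPass` stands in for a full render of the scene with every vertex nudged by the given sub-pixel offset):

```cpp
#include <cstdlib>
#include <functional>
#include <vector>

struct Color { float r, g, b; };

// Jittered anti-aliasing: render several passes at small sub-pixel
// offsets and average them, softening jagged edges.
std::vector<Color> renderAA(
    int width, int height, int passes,
    const std::function<void(float dx, float dy, std::vector<Color>&)>& renderPass) {
    std::vector<Color> acc(width * height, {0, 0, 0});
    std::vector<Color> pass(width * height);
    for (int p = 0; p < passes; ++p) {
        // Offsets in [-0.5, 0.5] of a pixel.
        float dx = std::rand() / (float)RAND_MAX - 0.5f;
        float dy = std::rand() / (float)RAND_MAX - 0.5f;
        renderPass(dx, dy, pass);
        for (size_t i = 0; i < acc.size(); ++i) {
            acc[i].r += pass[i].r / passes;
            acc[i].g += pass[i].g / passes;
            acc[i].b += pass[i].b / passes;
        }
    }
    return acc;
}
```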


Basketball Base Mesh

Adding Extra Detail

Lastly, I wanted to try adding some additional surface detail to the objects in the renderer.  Collaborating with three of my classmates (Aman Vora, Aaron Nojima, & Hang Guo), we set out to add support for normal mapping, custom objects, and standard texture file types, along with a smoother, more polished interface.

High-Poly Basketball Sculpt

For my task, I started by modeling the base mesh of a basketball in Maya.  From there, I made a stencil (to recreate the bumps on the ball) and sculpted a much denser version of the mesh in Mudbox.  Over 57 million triangles later, we had a mesh with the level of detail we were looking for.

Final Basketball with Normal Map


After this, I also created an .obj parser for the renderer (previously, it only read .asc files) and applied the textures I generated from the high-density sculpt to the base mesh.  This, combined with my teammates' contributions, saw the implementation of normal maps; multi-thread support; .jpg, .png, and .bmp images; dragging, scaling (with pinch), and rotating within the application; and the option to let the model auto-rotate.
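
A minimal version of such a parser only needs to handle `v` and `f` records.  A sketch along those lines (triangulated faces assumed, and normal/UV indices in the "1/2/3" face tokens are ignored; the real parser handled more of the format):

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };
struct Face { int v[3]; };   // indices into the vertex list (triangles only)

// Minimal .obj reader: collects `v` (vertex) and `f` (face) records and
// skips everything else (comments, normals, UVs, groups, materials).
bool loadObj(const std::string& path,
             std::vector<Vec3>& verts, std::vector<Face>& faces) {
    std::ifstream in(path);
    if (!in) return false;
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ss(line);
        std::string tag;
        ss >> tag;
        if (tag == "v") {
            Vec3 v;
            ss >> v.x >> v.y >> v.z;
            verts.push_back(v);
        } else if (tag == "f") {
            Face f;
            for (int i = 0; i < 3; ++i) {
                std::string tok;
                ss >> tok;                    // e.g. "12/5/12" or "12"
                f.v[i] = std::stoi(tok) - 1;  // .obj indices are 1-based
            }
            faces.push_back(f);
        }
    }
    return true;
}
```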