Why, oh why traceth we the rays? Raytracing produces images ranging from
snazzy to outright pixelrific, incorporating soft shadowing, indirect
lighting, caustics, lens effects, etc., in a manner that is impossible to
achieve in OpenGL.
Why not raytrace?
So why don't we raytrace everything? Raytracing is too slow to be
performed in real time on standard hardware, and the images it produces are
fundamentally non-interactive. That is, one can't navigate through or
manipulate a raytraced scene once it has been rendered.
Our project explores one compromise that allows the user to experience
some interaction with a raytraced scene. We modified LRT (a
raytracing package) to export geometry information with each pixel it
generates. That is, in addition to outputting a .jpeg file, LRT now exports
a file containing - for each i,j position in the final image - the (x,y,z)
coordinates of the point in the scene that corresponds to that pixel.
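We won't show the LRT modification itself, but the export side can be
pictured with a minimal sketch like this (the struct, file layout, and
function names are our assumptions, not LRT's):

    #include <cstdio>

    struct Point3 { float x, y, z; };

    // 'hits' is assumed to hold, for each pixel (i, j), the world-space
    // intersection point of the camera ray traced through that pixel.
    void ExportGeometry(const char *path, const Point3 *hits,
                        int width, int height)
    {
        std::FILE *f = std::fopen(path, "wb");
        // row-major: all pixels of row 0, then row 1, and so on
        std::fwrite(hits, sizeof(Point3), width * height, f);
        std::fclose(f);
    }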
We use this geometry information to render the raytraced image in OpenGL
in a manner that allows us to move OpenGL objects around in the scene, and
to interact haptically with the objects in the scene.
Finally, a screenshot...
This screenshot shows the "Cornell box", a standard scene used for
developing raytracing and radiosity algorithms. Note the soft shadows and
indirect lighting... in other words, this was really raytraced, using a
raytracing package (LRT) that we didn't write.
What you can't tell yet is that this is not simply being rendered as a
texture-mapped quad in OpenGL. It's actually a point cloud, with each pixel
in the image being rendered as one GL_POINT. Because we're rendering with an
orthographic camera, all the points line up (no perspective is applied). But
if you could swing the camera out to one side of the box, you would see a
cloud of points, spread out in the z-dimension.
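As a sketch of how such a cloud can be drawn with immediate-mode OpenGL
(the names are ours, and we assume the exported geometry and the rendered
image have already been loaded into memory):

    #include <GL/gl.h>

    extern int            g_width, g_height; // image dimensions
    extern float         *g_xyz;             // 3 floats per pixel (geometry file)
    extern unsigned char *g_rgb;             // 3 bytes per pixel (the image)

    void DrawPointCloud()
    {
        glPointSize(1.0f);
        glBegin(GL_POINTS);
        for (int k = 0; k < g_width * g_height; ++k) {
            glColor3ubv(&g_rgb[3 * k]);  // pixel color from the rendering
            glVertex3fv(&g_xyz[3 * k]);  // position from the geometry file
        }
        glEnd();
    }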
Prepare to be immersed
Now we add an OpenGL object to the scene. The image below is linked to a
movie in which the ball - a standard OpenGL
sphere - is happily moving around the scene. Things to note (a sketch of
the render loop follows the list):
The ball is occluded when it moves behind objects in the scene (you can
see this in the static image also).
The perspective distortion applied to the ball is consistent with the
perspective distortion visible in the raytraced scene. Note in particular
that when the ball moves backwards along the side wall (which is
perpendicular to the camera plane), its x-position follows the wall
perfectly, which tells us that the perspective is lined up correctly.
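Putting those observations together, the per-frame composition can be
pictured with a sketch like this (all names are ours; DrawPointCloud is the
sketch above, and SetupRaytracerCamera is sketched in the next section):

    #include <GL/glut.h>

    extern void  DrawPointCloud();        // sketched earlier
    extern void  SetupRaytracerCamera();  // sketched in the next section
    extern float g_ballPos[3];            // current position of the ball

    void Display()
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glEnable(GL_DEPTH_TEST);          // occlusion falls out of the z-test

        // 1. The raytraced scene as a point cloud. How its depth values are
        //    made comparable to the ball's is the subject of the next section.
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        DrawPointCloud();

        // 2. The ball, drawn with the ray tracer's own camera so its
        //    perspective matches the image; the depth test hides whatever
        //    part of it sits behind a scene point.
        SetupRaytracerCamera();
        glPushMatrix();
        glTranslatef(g_ballPos[0], g_ballPos[1], g_ballPos[2]);
        glutSolidSphere(0.1, 32, 32);     // radius chosen arbitrarily here
        glPopMatrix();

        glutSwapBuffers();
    }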
Learning to love our z-buffer
Here are the key things we had to do to make the GL and raytraced images
line up right (code sketches follow the list):
The trickiest problem was making the depth coordinates match up, so the
ball would be occluded at the correct positions. The OpenGL depth buffer
does not store "z coordinates" or "distance from the camera"; it typically
stores a non-linear function of an object's position between the near and far
clip planes. But this function depends on the projection matrix being used,
so using a standard orthographic projection would not map our raytraced
pixels to the same depth range as the one generated by the perspective
transformation we're applying to the ball. So we rendered the point cloud
with a hybrid projection matrix, effectively applying a standard orthographic
transform to x and y, but a perspective transform to z.
The camera position and field-of-view angle are read from the scene file,
so our perspective transformation (the one applied to the ball, or any other
GL object) is the same one the ray tracer applied to the environment.
The light source position is also read from the scene file, so the lighting
on the ball appears to come from the same light that illuminated the
raytraced scene.
Feel the box...
To enhance the interactivity of the scene, and to demonstrate that our
OpenGL coordinate system is properly aligned with the raytraced coordinate
system, we parsed the original scene file to find the world-space coordinates
of the polygons used by the raytracer, and used a SensAble Phantom to
haptically render the scene, which allows a user to feel its surfaces.
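We won't reproduce the Phantom-side code here, but the core idea can be
illustrated with the textbook penalty-force scheme against a single surface
plane (a hedged sketch; the names and the stiffness model are ours):

    struct Vec3 { float x, y, z; };

    // Spring force pushing the device tip back out of a plane, defined by
    // a point on the plane and its unit-length, outward-facing normal.
    Vec3 PenaltyForce(const Vec3 &tip, const Vec3 &onPlane,
                      const Vec3 &n, float stiffness)
    {
        // signed distance of the tip from the plane (negative = penetrated)
        float d = (tip.x - onPlane.x) * n.x
                + (tip.y - onPlane.y) * n.y
                + (tip.z - onPlane.z) * n.z;
        Vec3 f = { 0.0f, 0.0f, 0.0f };
        if (d < 0.0f) {
            f.x = -stiffness * d * n.x;  // proportional to penetration depth
            f.y = -stiffness * d * n.y;
            f.z = -stiffness * d * n.z;
        }
        return f;
    }

In each servo tick, a force like this for the nearest parsed polygon would
be sent to the device; if the OpenGL and raytraced coordinate systems
disagreed, the felt wall and the seen wall would visibly diverge.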
If the colocation was poor, it would really have shown up once we added
haptics to the project... I can't show you the haptics over the Internet, of
course, but the following celebrity testimonials* will hopefully convince
you that the haptics felt "correct":
"I loved the haptics. They were much better than Cats." -Matthew
"A haptic box? What's the deal with a haptic box? Someone tell me,
because I'd like to know." -Jerry Seinfeld, actor
"I felt like I was living inside the Cornell box." -Shaquille O'Neal, professional basketball player
"These haptics made me a better-looking man. After I experienced the
Cornell box, I had a date every night for a year!" -Steven Adler, the
original drummer for Guns N' Roses