I had a few more topics left over from GDC, but I left my notes at work. I’ll also be on lead engineer duty this week and a bit of next week, so I’m not optimistic I’ll have time to write those up.
So instead I’ll write a bit about some lighting research I’ve been doing. Lately I’ve been interested in improving lightmap generation, both in efficiency (speed and memory) and in the quality of the lighting solution.
Dealing with the memory is mostly a matter of removing as much redundant data as possible without hurting the speed too much, so I’m not all that worried about it. Speed is another issue. I still have to do some performance analysis, but I believe most of the time is spent casting rays and detecting shadows, and as I’ll discuss below, improving the lighting solution may mean adding more raycasting, not less. So speeding up the ray cast really needs to be a priority. I believe the main problem is that the current code uses a voxel-based system for culling intersection tests, and the data is clustered enough that this isn’t doing a good job. Digging around, I found this dissertation and this flipcode article, which suggest that a kd-tree might be the solution I’m looking for. So I’ll be investigating that avenue; hopefully it won’t be too much trouble to swap out the raycasting code. I’m also a little worried about numerical precision issues, but again, I’ll see how it goes.
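To make that concrete, here’s a minimal sketch of the standard recursive kd-tree ray traversal those references describe; this is not anyone’s actual code, the `Ray`, `Hit`, and `intersectTriangles()` names are hypothetical placeholders, and a real version would also need the tree build step with a decent splitting heuristic:

```cpp
#include <vector>

struct Ray { float origin[3]; float dir[3]; };
struct Hit { float t; int triangle; };

// hypothetical helper: test the ray against a leaf's triangle list,
// reporting the closest hit (if any) with t restricted to [tmin, tmax]
bool intersectTriangles(const std::vector<int>& tris, const Ray& ray,
                        float tmin, float tmax, Hit& hit);

struct KdNode
{
    int axis;                // 0/1/2 = split axis, -1 = leaf
    float split;             // split plane position along 'axis'
    KdNode* child[2];        // [0] = below the split, [1] = above
    std::vector<int> tris;   // triangle indices (leaves only)
};

// Recursive traversal over the ray interval [tmin, tmax]: visit the child
// containing the ray origin first, and only descend into the far child if
// the interval actually crosses the splitting plane.
bool traverse(const KdNode* node, const Ray& ray,
              float tmin, float tmax, Hit& hit)
{
    if (node->axis < 0)  // leaf
        return intersectTriangles(node->tris, ray, tmin, tmax, hit);

    float origin = ray.origin[node->axis];
    float dir    = ray.dir[node->axis];

    // which child contains the ray origin (ties broken by direction)
    bool belowFirst = (origin < node->split) ||
                      (origin == node->split && dir <= 0.0f);
    const KdNode* nearChild = node->child[belowFirst ? 0 : 1];
    const KdNode* farChild  = node->child[belowFirst ? 1 : 0];

    if (dir == 0.0f)  // parallel to the plane: never leaves the near side
        return traverse(nearChild, ray, tmin, tmax, hit);

    float tPlane = (node->split - origin) / dir;

    if (tPlane > tmax || tPlane <= 0.0f)   // crossing outside the interval
        return traverse(nearChild, ray, tmin, tmax, hit);
    if (tPlane < tmin)                     // crossed before the interval began
        return traverse(farChild, ray, tmin, tmax, hit);

    // interval straddles the plane: check the near side first
    if (traverse(nearChild, ray, tmin, tPlane, hit))
        return true;
    return traverse(farChild, ray, tPlane, tmax, hit);
}
```

Because the near interval always precedes the far one along the ray, the early return on a near-side hit still yields the closest intersection, which is the property that makes the traversal fast for shadow rays.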
As far as improving the quality, I’m mainly interested in making the direct lighting a little more physically based, and in adding some sort of indirect lighting solution beyond just an ambient term. I’ve already done some work on the latter, playing around with photon mapping. However, the results were a bit blotchy for the number of photons I cast, so I’m not sure that will be practical.
Digging around some more, I found a couple of other possible approaches. One observation is that most of what you want out of indirect lighting is the shadowing, which is represented by ‘obscurances’ or ‘ambient occlusion fields.’ To compute these, you basically cast a hemisphere of rays at each point of interest and compute an occlusion value based on the number of rays that ‘get through.’ The same calculation can also be used to bias the normal used to access a cubic environment map. So for example, if you have a cliff with an overhang, the indirect light will come mostly from below, even if a local vertex or face normal points out parallel with the ground, or even slightly upward. For moving objects, this is an alternative to the Precomputed Radiance Transfer technique that is all the rage these days. The most recent article I’ve found on this is Ambient Occlusion Fields, which will be presented next week at the ACM SIGGRAPH 2005 Symposium on Interactive 3D Graphics and Games.
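A minimal sketch of that occlusion pass, assuming a hypothetical `rayOccluded()` scene query: cast uniformly distributed rays over the hemisphere, count how many escape, and average the open directions into a ‘bent’ normal for the environment map lookup.

```cpp
#include <cmath>
#include <cstdlib>

struct Vec3 { float x, y, z; };

inline Vec3  operator+(Vec3 a, Vec3 b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
inline Vec3  operator*(Vec3 a, float s){ return {a.x*s, a.y*s, a.z*s}; }
inline float dot(Vec3 a, Vec3 b)       { return a.x*b.x + a.y*b.y + a.z*b.z; }
inline Vec3  normalize(Vec3 a)         { return a * (1.0f/std::sqrt(dot(a,a))); }
inline float frand()                   { return rand() / float(RAND_MAX); }

// hypothetical scene query: true if a ray from p along unit direction d
// hits anything within maxDist
bool rayOccluded(const Vec3& p, const Vec3& d, float maxDist);

// uniform direction on the hemisphere around n, by rejection sampling
Vec3 sampleHemisphere(const Vec3& n)
{
    Vec3 d;
    float len2;
    do {
        d = { 2*frand()-1, 2*frand()-1, 2*frand()-1 };
        len2 = dot(d, d);
    } while (len2 > 1.0f || len2 < 1e-4f);
    d = normalize(d);
    return (dot(d, n) < 0.0f) ? d * -1.0f : d;
}

// Cast numRays rays over the hemisphere at surface point p with normal n.
// occlusion ends up in [0,1] (0 = fully open, 1 = fully blocked), and
// bentNormal is the average unoccluded direction -- use it in place of n
// when looking up the environment cube map.
void computeOcclusion(const Vec3& p, const Vec3& n, int numRays,
                      float maxDist, float& occlusion, Vec3& bentNormal)
{
    const float kOffset = 1e-3f;  // nudge off the surface to avoid self-hits
    Vec3 start = p + n * kOffset;
    int open = 0;
    Vec3 sum = {0, 0, 0};
    for (int i = 0; i < numRays; ++i)
    {
        Vec3 d = sampleHemisphere(n);
        if (!rayOccluded(start, d, maxDist))
        {
            ++open;
            sum = sum + d;
        }
    }
    occlusion  = 1.0f - float(open) / float(numRays);
    bentNormal = (open > 0) ? normalize(sum) : n;
}
```

In the cliff-overhang case, the rays heading up into the rock never escape, so the bent normal tips down and out, and the environment map lookup correctly picks up light from below.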
Of course, if you’re computing the occlusion information, you might as well take advantage of the fact that you can pull data from the object you hit with your raycast. Following another chain of references, I found a SIGGRAPH 2004 paper by the team that worked on Shrek 2. They noted that more than one indirect light bounce doesn’t really add much to the visual quality, at least from the perspective of the average viewer, so they use a single-bounce indirect lighting solution. First they calculate a direct lighting solution and store it in a set of lightmaps (or an irradiance cache). Then, to calculate indirect lighting for a surface point, they cast multiple rays in a hemisphere and collect irradiance data off of the other surfaces hit. The result is basically an occlusion map with some color bleed from the single bounce. That’s the approach I’m looking into now, hence the need for fast raycasts.
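A rough sketch of that gather step, reusing the `Vec3` helpers and `sampleHemisphere()` from the previous snippet, and again with hypothetical `traceRay()` and `lookupDirectLighting()` placeholders standing in for the scene cast and the stored direct solution:

```cpp
struct GatherHit { Vec3 position, normal; /* plus lightmap UVs, etc. */ };

// hypothetical helpers: a full ray cast returning the nearest surface,
// and a lookup of the already-computed direct lighting stored there
bool traceRay(const Vec3& p, const Vec3& d, GatherHit& hit);
Vec3 lookupDirectLighting(const GatherHit& hit);

// One-bounce final gather at surface point p with normal n. Uses the
// uniform hemisphere sampling above (pdf = 1/2pi), so each sample is
// weighted by cos(theta)/pdf. The result estimates the indirect
// irradiance; scale by albedo/pi for the diffuse reflected radiance.
Vec3 gatherIndirect(const Vec3& p, const Vec3& n, int numRays)
{
    const float kPi = 3.14159265f;
    const float kOffset = 1e-3f;
    Vec3 start = p + n * kOffset;
    Vec3 sum = {0, 0, 0};
    for (int i = 0; i < numRays; ++i)
    {
        Vec3 d = sampleHemisphere(n);
        GatherHit hit;
        if (traceRay(start, d, hit))
            sum = sum + lookupDirectLighting(hit) * dot(d, n);
    }
    // Monte Carlo estimate of the integral of L*cos over the hemisphere;
    // rays that escape contribute nothing, which is the occlusion part
    return sum * (2.0f * kPi / float(numRays));
}
```

Note that the darkening and the color bleed fall out of the same loop: misses darken the result just like an occlusion map, while hits tint it with whatever direct lighting the neighboring surfaces received.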
The final approach is one recently published in Game Developer (I don’t know the issue, sorry) that uses rendering hardware to speed up the process. I’m not sure whether this will work for what I’m trying to do, since I’m dealing with some pretty big spaces. But again, it’s something else to look into if the raycast approach turns out to be too slow.
I’ve summarized pretty egregiously here, so I’d recommend reading the articles themselves to get a full understanding. There is also an article from GPU Gems 2 that I believe describes how to compute occlusion maps for dynamic objects in hardware, though I haven’t had a chance to read it in much detail.
One last thing: I would appreciate knowing whether anyone is reading this blog, or whether it’s just acting as a Pensieve (for you Harry Potter fans out there). The latter is fine; it’s good to have a central place to store my thoughts and notes, which may or may not be useful to others. But it’s also good to know if there’s an audience. So let me know, either through the feedback form on the main page or through comments here.
Hi there, interesting post. I have some experience with lightmap renderers for games; I wrote the ones in RalliSport Challenge 1 & 2 (Xbox) and the basics for the ones in Battlefield 1 & 2. Those lightmappers were hardware-based, generated good quality soft shadows for arbitrary (incl. transparent) geometry, and were fast.
The basic idea was to render the whole scene with a perspective view oriented to the light source for each pixel in the lightmaps. That sounds quite expensive, and on PC it was (because of the high overhead of small draw calls/commands), but it is a very simple and easily parallelizable general solution for direct illumination that can do soft shadows by simply varying the field-of-view of the viewport. Not very physically correct though 🙂
Indirect illumination from the sky or from bouncing light can also be added fairly easily as an additional hemispherical/hemicubical rendering pass for each pixel that accumulates the direct illumination and weights it. We never implemented that though, and instead chose to simulate indirect illumination from the sky with multiple directional lights, and in some cases just a single directional light pointing straight up with a large fov.
If you’re interested in more details, just send me a mail.
Comment by repi — 7/30/2005 @ 4:36 pm
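An aside on the comment above: the weighting repi mentions is typically done with the classic hemicube delta form factors from the radiosity literature (Cohen and Greenberg, SIGGRAPH ’85). This sketch shows just the per-texel weights; each one folds in both the solid angle the texel subtends and the receiver’s cosine term:

```cpp
// Hemicube delta form factors (Cohen & Greenberg, SIGGRAPH '85). The
// hemicube sits on the receiving point with its top face on the z = 1
// plane; dA is the area of one texel in face coordinates. Summed over
// the whole hemicube the weights total 1, so weighting the rendered
// direct illumination this way gives a cosine-weighted average of
// everything the receiving point "sees".

// texel at (x, y) on the top face, x and y in [-1, 1]
float topFaceWeight(float x, float y, float dA)
{
    float d = x*x + y*y + 1.0f;
    return dA / (3.14159265f * d * d);
}

// texel at (y, z) on a side face (here the x = 1 face), z in [0, 1]
float sideFaceWeight(float y, float z, float dA)
{
    float d = y*y + z*z + 1.0f;
    return z * dA / (3.14159265f * d * d);
}
```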
Hi, yeah, I’m reading your blog. I’m working on a terrain rendering system.
I only seem to get to work on rendering about once every year or two.
I also do a lot of work with animation systems and was interested in your curve reparametrization blah blah blah.
Thanks for the info.
Comment by dg — 9/27/2005 @ 4:10 am
[…] Final gathering does increase the number of raycasts dramatically, so to speed that up I went with the suggestions of the two papers mentioned here, and changed the original voxel-based system to a kd-tree. That provided about a 10-fold speed-up, and helped the overall lighting calculations tremendously. I also found that a good deal of sloth was being created by virtual memory thrashing, so I reduced the memory used by the data so that the application could fit in physical memory. This required compressing and uncompressing some data which would normally take more time, but because page swaps were minimized it actually ran significantly faster. […]
Pingback by Essential Math Weblog » General and Lighting Updates — 11/16/2005 @ 9:12 pm