It occurs to me that while I’ve been plugging this on Twitter, I haven’t mentioned it here: Essential Mathematics for Games and Interactive Applications has just come out with a third edition, kindly published by our friends at AK Peters and CRC Press. The book has been brought up to date, and certain chapters have been revised to flow better. The most significant change is probably the lighting chapter, which still uses a very simple lighting model but now builds it up from a physically based lighting approach. There are also updates in the discussions of floating-point formats, shaders, color formats and random number generators, plus much more! Please take a look — I think you’ll find it worthwhile.
Another piece of big news is that the code that accompanies the book has been updated as well. The previous edition used OpenGL 2.0 and D3D 9 — now it uses OpenGL 3.2 Core Profile and D3D 11. Anything that depended on the old fixed-function pipeline (such as the old OpenGL varyings and uniforms) has been updated. And the book no longer comes with a CD. All the code is freely (both in terms of beer and of speech) available on GitHub. This should be good news for those of you who purchased the electronic edition; other than the platform updates, the code should be compatible with the second edition.
Finally, I’d like to include a few notes on the demo code. Some of the code was written after the text for the book was largely complete, so some of the minor challenges and decisions in writing a cross-platform renderer aren’t really covered, or aren’t covered in much detail.
The first challenge I’d like to talk about, which I allude to in the book, is handling uniforms in D3D11. OpenGL has built-in shader reflection, so you can extract handles to shader uniforms and set them via simple interfaces. The main D3D11 interface uses constant buffers instead, which have to be updated explicitly. Uniform/constant buffers can be more efficient, but as this is a math book rather than a rendering book, I wanted to keep the interfaces as simple as possible. So I wrote IvConstantTableD3D11, which uses the sidecar D3D11 shader reflection library. For the most part this is straightforward — it extracts the names and offsets of the variables found in the $Globals constant buffer and stores them in a table. It also allocates a CPU-side buffer of the same size as the $Globals constant buffer. When the client requests a uniform “handle” by name, the ConstantTable looks it up and passes back a descriptor. When an IvUniformD3D11 is changed, it uses the descriptor to get the offset into the ConstantTable’s allocated buffer, and copies its data there. The local CPU buffer is copied to the GPU buffer only when that shader is in use and the constant data has changed.
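To make the flow concrete, here is a minimal CPU-side sketch of the constant-table idea. The class and member names here are illustrative, not the book’s actual interface, and the reflection step is stood in for by hand-registered name/offset pairs; the upload is simulated by copying into a second buffer rather than calling Map/Unmap on a real D3D11 constant buffer.

```cpp
#include <cstddef>
#include <cstring>
#include <map>
#include <string>
#include <vector>

// Sketch only: a name -> (offset, size) table over a CPU mirror of
// $Globals. Real code would fill the table via ID3D11ShaderReflection.
struct ConstantDesc { size_t offset; size_t size; };

class ConstantTable {
public:
    explicit ConstantTable(size_t bufferSize) : mBuffer(bufferSize), mDirty(false) {}

    // In the real system these entries come from shader reflection.
    void AddConstant(const std::string& name, size_t offset, size_t size) {
        mDescs[name] = ConstantDesc{offset, size};
    }

    // The "handle" handed back to the client: just a descriptor lookup.
    const ConstantDesc* GetConstantDesc(const std::string& name) const {
        auto it = mDescs.find(name);
        return (it != mDescs.end()) ? &it->second : nullptr;
    }

    // Called when a uniform changes: copy into the CPU mirror, mark dirty.
    void SetValue(const ConstantDesc& desc, const void* data) {
        std::memcpy(mBuffer.data() + desc.offset, data, desc.size);
        mDirty = true;
    }

    // Stand-in for the GPU upload: only happens if something changed.
    bool FlushIfDirty(std::vector<unsigned char>& gpuBuffer) {
        if (!mDirty) return false;
        gpuBuffer = mBuffer;
        mDirty = false;
        return true;
    }

private:
    std::map<std::string, ConstantDesc> mDescs;
    std::vector<unsigned char> mBuffer;  // CPU mirror of $Globals
    bool mDirty;
};
```

The dirty flag is the important part: many uniforms can change between draws, but the buffer crosses to the GPU at most once per draw with that shader.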
Another interesting challenge related to uniforms was that D3D11 separates texture resources from texture samplers (the sampler being the texture state: wrap/clamp modes, filtering, etc.), whereas core OpenGL combines them into one object (that said, there is an extension, ARB_sampler_objects, which separates them). The separate approach is convenient, as it allows you to use a single sampler instance with multiple texture resources. However, it breaks the paradigm of the book, which assumed that texture uniforms represent one object. The solution I came up with was to add an implicit header to the HLSL shaders, which defines two macros:
#define SAMPLER_2D(x) Texture2D x; SamplerState x##Sampler
#define TEXTURE(x, uv) x.Sample( x##Sampler, uv )
The first is used to define a combined sampler and texture resource, and the second to do a texture lookup, a la:
SAMPLER_2D(defaultTexture);
float4 ps_main( VS_OUTPUT input ) : SV_Target
{
    return input.color * TEXTURE( defaultTexture, input.uv );
}
With these macros, I can use the D3D shader reflection library to search the shaders for both the texture and the matching sampler (since they have similar names), and combine them into a single uniform. To be honest this solution bothers me a little, but I felt constrained by the interface we had created. Had I caught this sooner, I might have used separate objects, and with OpenGL use the ARB_sampler_objects extension.
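The pairing step itself is simple string matching. Here is a hypothetical sketch of it: given the texture and sampler names reported by reflection, match each Texture2D “foo” with the SamplerState “fooSampler” that the SAMPLER_2D macro generated. The function name and signature are my own for illustration, not the demo code’s.

```cpp
#include <set>
#include <string>
#include <utility>
#include <vector>

// Sketch: pair each reflected texture with its macro-generated sampler,
// relying on the "<name>Sampler" convention from SAMPLER_2D. Each pair
// can then be exposed to the client as a single texture uniform.
std::vector<std::pair<std::string, std::string>>
PairTexturesWithSamplers(const std::vector<std::string>& textures,
                         const std::set<std::string>& samplers) {
    std::vector<std::pair<std::string, std::string>> pairs;
    for (const std::string& tex : textures) {
        std::string samplerName = tex + "Sampler";  // SAMPLER_2D naming rule
        if (samplers.count(samplerName)) {
            pairs.emplace_back(tex, samplerName);
        }
    }
    return pairs;
}
```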
Another challenge was that D3D11 no longer supports variable point sizes, which OpenGL does. Since some of the curve demos use rendered points to represent control points, this needed to be handled. The standard solution is to use instanced geometry, which uses two vertex buffers. The first holds the data that is constant across all instances — in this case just a camera-facing quad, scaled based on the point size. The second holds the data that changes per instance — in this case the position of the point in space. When setting up your vertex attributes you indicate whether they change per-vertex (e.g., the quad corners) or per-instance (e.g., the position). Finally, you render with a special DrawInstanced() call. A decent tutorial on setting up instanced rendering in D3D can be found here.
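To show what the two streams combine into, here is a CPU-side sketch of what the instanced vertex shader effectively computes: each instance’s position offsets a shared camera-facing quad. On the GPU this expansion never materializes — DrawInstanced() walks both streams for you — and the types and names here are illustrative rather than taken from the demo code.

```cpp
#include <vector>

struct Float3 { float x, y, z; };

// Sketch: expand each point into a camera-facing quad of the given size.
// positions plays the role of the per-instance buffer; the quad corners
// play the role of the per-vertex buffer shared by every instance.
std::vector<Float3> ExpandPointQuads(const std::vector<Float3>& positions,
                                     float pointSize) {
    const float h = 0.5f * pointSize;
    // Quad corners in camera space, identical for all instances.
    const Float3 quad[4] = { {-h, -h, 0}, {h, -h, 0}, {-h, h, 0}, {h, h, 0} };

    std::vector<Float3> verts;
    verts.reserve(positions.size() * 4);
    for (const Float3& p : positions) {       // per-instance stream
        for (const Float3& corner : quad) {   // per-vertex stream
            verts.push_back({p.x + corner.x, p.y + corner.y, p.z + corner.z});
        }
    }
    return verts;
}
```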
I considered writing a general instanced renderer, but that would have involved considerable refactoring and again wouldn’t match the interface presented in the book. So I added a special point renderer just for D3D, which you can find in IvPointRendererD3D11.cpp. This does unfortunately add a conditional to the main ::Draw() function in the renderer. As this is demo code, that didn’t seem too bad a trade-off, but if you were writing your own renderer with an eye toward high performance, I would recommend another approach (e.g., have the client call an explicit point renderer, rather than triggering off of the primitive type).
So those are a few wrinkles I ran across when updating the code. Comments are welcome — if you think there’s a better way of handling these, let me know. Part of the reason the code is on GitHub is that it’s much easier to update, so I’ll be making improvements and fixes as time goes on.
Hi,
I believe I’ve found an error in the derivation of the slerp formula. I have the second edition, and the error is on page 467. A bit of sleuthery with Google Books leads me to think the error is still present in the third edition on page 208. The issue is with the intermediary a(t) and b(t) functions, which have a denominator of (1 + cos^2(θ)), when it should be (1 – cos^2(θ)).
Comment by Chris — 12/14/2016 @ 7:38 pm
Good catch! I’ve added it to the errata. And thanks!
Comment by Jim — 12/18/2016 @ 1:18 pm
In 4.2.3 (page 119) of 3rd:
What is a_{0,0}w_0 in the matrix? Is it a vector?
Comment by ClintGu — 6/25/2018 @ 2:21 am
Yes, it’s a vector — part of the linear combination T(v_0) = a_{0,0}w_0 + a_{1,0}w_1 +… + a_{m-1,0}w_{m-1}. What I’m trying to represent here is expanding out each T(v_j) into the columns of the matrix but admittedly it’s not very clear. In any case, in games we almost always use the standard Euclidean basis, so those w_i terms just drop out.
Comment by Jim — 6/25/2018 @ 7:35 pm