#include <omp.h>

...

int main(int argc, char **argv)
{
    omp_set_num_threads(8);
    int y;
    #pragma omp parallel for private(y) schedule(dynamic)
    for (y = 0; y < camera.film.pixels.y; y++)
    {
        ...
    }
}

vec3 result = vec3(0.0);
result += coefficients[0] * 0.282095;
result -= coefficients[1] * 0.488603 * n.y;
result += coefficients[2] * 0.488603 * n.z;
result -= coefficients[3] * 0.488603 * n.x;
result += coefficients[4] * 1.092548 * n.x * n.y;
result -= coefficients[5] * 1.092548 * n.y * n.z;
result += coefficients[6] * 0.315392 * (3.0f * n.z * n.z - 1.0f);
result -= coefficients[7] * 1.092548 * n.x * n.z;
result += coefficients[8] * 0.546274 * (n.x * n.x - n.y * n.y);

As you may have noticed, the coefficients can even be premultiplied by those constants. Looking at the first few lines, it seems obvious that the first coefficient is basically an ambient lighting term and that the following three coefficients add and subtract color along each axis based on the three components of the normal vector. But what is the impact of the last five (the third-order) coefficients on the result? The literature usually contains images showing the lobes of the positive and negative spaces for each coefficient. I was still slightly confused, though, so I quickly built an application that lets me interactively fiddle with those coefficients, which was very eye-opening. The code is available on my GitHub page and should be easy to build on all the primary platforms: spherical_harmonics_playground.
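The premultiplication mentioned above can be done once at load time, after which evaluation collapses to a plain sum of products. A minimal C sketch of this idea (the `vec3` struct and function names here are illustrative, not from the original code):

```c
#include <assert.h>

typedef struct { float x, y, z; } vec3;

/* Fold the SH basis constants (and their signs) into the nine RGB
   coefficients once, at load time. */
static void sh_premultiply(vec3 c[9])
{
    const float k[9] = {
        0.282095f, -0.488603f, 0.488603f, -0.488603f,
        1.092548f, -1.092548f, 0.315392f, -1.092548f, 0.546274f
    };
    for (int i = 0; i < 9; i++) {
        c[i].x *= k[i]; c[i].y *= k[i]; c[i].z *= k[i];
    }
}

/* Evaluate the premultiplied coefficients for a unit normal n.
   The basis terms match the original evaluation line by line. */
static vec3 sh_eval(const vec3 c[9], vec3 n)
{
    float b[9] = {
        1.0f, n.y, n.z, n.x,
        n.x * n.y, n.y * n.z, 3.0f * n.z * n.z - 1.0f,
        n.x * n.z, n.x * n.x - n.y * n.y
    };
    vec3 r = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < 9; i++) {
        r.x += c[i].x * b[i]; r.y += c[i].y * b[i]; r.z += c[i].z * b[i];
    }
    return r;
}
```

Only the first, ambient coefficient contributes independently of the normal, which is what makes the effect of the higher bands harder to see without interactive fiddling.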

After playing around a bit with the coefficients, I wanted to precompute some real lighting data. I read about several typical ways to precompute the coefficients. They are all based on applying a computation similar to the one above for each incoming light direction to accumulate the coefficients:

vec3 coefficients[9] = zeroes;
for (each incident light ray with direction n and intensity or color c)
{
    coefficients[0] += c * 0.282095;
    coefficients[1] -= c * 0.488603 * n.y;
    coefficients[2] += c * 0.488603 * n.z;
    coefficients[3] -= c * 0.488603 * n.x;
    coefficients[4] += c * 1.092548 * n.x * n.y;
    coefficients[5] -= c * 1.092548 * n.y * n.z;
    coefficients[6] += c * 0.315392 * (3.0f * n.z * n.z - 1.0f);
    coefficients[7] -= c * 1.092548 * n.x * n.z;
    coefficients[8] += c * 0.546274 * (n.x * n.x - n.y * n.y);
}

This is especially trivial for directional light sources. Light rays can also be generated with a typical ray-tracing approach using Monte Carlo integration, but I went with converting environment maps to SH light probes. In my application I load a cube map, generate light rays for its pixels and weight them based on their distance from the unit sphere and their approximate size when projected onto the sphere.

I was actually really surprised that this was all I had to do. It was so easy that I was able to read up on, understand and implement everything in a single evening :). I want to thank all the authors for that!

This is what some of my results look like:

The algorithm is pretty fast and can now be executed each time I load a mesh for lightmapping. But when I used it for this purpose, there were often visible seams between adjacent triangles. Triangles that are neighbours in the mesh end up in completely different places in the map and thus are not interpolated correctly across their shared edge. Even slight differences in their edge intensities were very noticeable. Another disadvantage is that all mesh vertices have to be unique because of their unique texture coordinates, so none of the other vertex attributes can be reused easily.

I published the library as a simple-to-use single-header library on GitHub in case someone has a better use case for it:

"trianglepacker.h is a C/C++ single-file library that packs triangles from a 3D mesh into a 2D map with some specified map size, border and spacing between triangles. It uses a fast greedy algorithm. The output is not optimal!"

I prototyped a new lighting system that is suitable for Nyamyam's game engine, which is specialized in pop-up book worlds. I extended the original forward renderer, which basically only used static lightmaps, to a physically based lighting model that is capable of handling static, stationary and dynamic light sources with several different effects.

An ambient occlusion mask and contributions from two stationary lights are baked into the color channels of one lightmap texture, while static direct and indirect lighting are baked into a second set of lightmaps. I wrote about the lightmapping library that I created for this purpose in my previous post.

A physically based paper BRDF is used consistently throughout the lightmap precomputation processes and the real-time lighting computations. Paper is not just a diffuse Lambertian reflector: it shows a specular highlight and has a wider diffuse lobe due to subsurface scattering events, as described by Papas et al. I implemented the BRDF part of their paper and applied several optimizations and approximations to achieve 60 Hz with about four light calculations per pixel on my current-gen smartphone (a OnePlus One). Since about 40% of mobile devices used for gaming still only support OpenGL ES 2.0, I was very limited in the available feature set. Implementing good-looking dynamic shadow maps in particular was very difficult and is still fairly expensive on many ES 2.0 devices.

The final lighting result of physically based lighting calculations can easily exceed the displayable range. I used a simple Reinhard tone mapper to solve this. It only costs a few extra instructions on top of the gamma-correction step at the end.

The first image shows a scene without diffuse texturing to highlight the different effects of the new lighting system. Along with the prototype, I also created a tool to author the lighting conditions for each scene; it can be seen in the second image.

It is a public domain single-header library that you can use with your existing OpenGL renderer. While the simple example application demonstrates the baking of ambient occlusion only, the library can also be used to precompute global illumination for static scene geometry by doing multiple bounces. Any light shapes and emissive surfaces are supported as long as you draw them with their corresponding emissive light colors.
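The multiple-bounce idea can be illustrated with a tiny radiosity-style toy (a hypothetical two-patch scene; this is not the library's actual API): each pass bakes the light that arrived during the previous pass, and the accumulated result converges geometrically because each bounce loses energy.

```c
/* Two facing patches. Patch 0 is emissive; each pass, every patch
   re-emits `reflectance` times the `form_factor` fraction of the light
   the other patch radiated last pass. out[] accumulates the total
   light seen at each patch ("the lightmap"). */
#define BOUNCES 8

static void bake_bounces(float emissive, float reflectance,
                         float form_factor, float out[2])
{
    float radiated[2] = { emissive, 0.0f };  /* light leaving each patch */
    out[0] = emissive;
    out[1] = 0.0f;
    for (int b = 0; b < BOUNCES; b++) {
        float next0 = reflectance * form_factor * radiated[1];
        float next1 = reflectance * form_factor * radiated[0];
        radiated[0] = next0;
        radiated[1] = next1;
        out[0] += next0;   /* accumulate the bounce into the result */
        out[1] += next1;
    }
}
```

With emissive surfaces drawn in their emissive colors, as the text says, each baking pass plays the role of one such bounce.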

Well, at least I did it NOW!

Since I can't remember when I wrote the older stuff on this website, I've just pasted it below. Hopefully I'll get myself into the mood of opening this page more often to press some keys instead of only pressing keys while editing random source code files... :)

Implementation of a Lighting System for Real-Time rendered Paper Worlds on Mobile Devices.

@cdaylward: "a shelf"

@worrydream: "the whole damn shelf"

@rygorous: "left shelf", "right shelf", "spillover shelf"

Below is my own little collection of cool books. I deliberately excluded my C++ books ;)

Tale of Light is a puzzle-platform game about light and shadow.

Taking a trip to exciting places throughout a magic world, the player must solve enlightening puzzles to proceed.

Link to GitHub repository

(AA/Mignon battery, Nexus 7 display, L293D motor controller chips, parts from a CLUB-MATE lid, Stellaris Launchpad)

Graphics: amethyst, ands | Code: ands | Audio: ands

Link to GitHub repository

Link to GitHub repository

Link to GitHub repository

It is based on the results of my bachelor's thesis.

Download Page

ABSTRACT

While multiple methods to extend the expressiveness of tangible interaction have been proposed, e.g., self-motion, stacking and transparency, providing haptic feedback to the tangible prop itself has rarely been considered. In this poster we present a semi-actuated, nano-powered, tangible prop, which is able to provide programmable friction for interaction with a tabletop setup. We conducted a preliminary user study evaluating the users' acceptance of the device and their ability to detect changes in the programmed level of friction, and received some promising results.

One aim of the project was to target as many available input devices as possible. We ended up using multitouch tables/walls, Kinect sensors, Wii controllers, mobile devices as remote controls and regular mouse and keyboard input. While getting the various gesture systems working was an interesting problem, my main jobs were to implement a scripting system, path finding and the character AIs for our game. It turned out that the very different input methods made it quite hard to balance the AIs and the gameplay, but we managed to present a mostly fair and fun game in the end.

Some of these go way back to my early teens :D

Disclaimer: Most of the 3D models aren't mine (except for the procedurally generated trees above the tetris screenshots).