Blog

11.10.2016

Tracing My First Handcrafted Rays

A few days ago some popular graphics programmers on twitter stated that one can only become a real graphics programmer after having written at least one ray tracer ;). Since I recently read the second edition of "Physically Based Rendering - From Theory to Implementation" and already did some physically based rendering for my master's thesis, I started a quick toy project for some simple physically based ray tracing. My program can only render spheres, but it does so with a GGX based microfacet model for direct lighting, soft shadows, specular reflections and a depth of field effect. Tone mapping and other shapes may be added soon :).
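For reference, the normal distribution term at the core of such a GGX based microfacet model boils down to just a few lines. This is only a minimal sketch (the full specular BRDF also needs a Fresnel and a geometry term, and alpha is typically the squared roughness):
// GGX / Trowbridge-Reitz normal distribution function
// n_dot_h: cosine between the surface normal and the half vector, alpha: roughness^2
float ggx_distribution(float n_dot_h, float alpha)
{
	float a2 = alpha * alpha;
	float d = n_dot_h * n_dot_h * (a2 - 1.0f) + 1.0f;
	return a2 / (3.14159265f * d * d);
}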

Since most compilers support OpenMP today, I just used that to get a speed boost. It only requires three lines of additional code and the OpenMP compiler switch (e.g. -fopenmp for GCC and Clang or /openmp for MSVC). The parallelization results in a >6x speedup on my Haswell i7 quad core with hyperthreading!
#include <omp.h>
...
int main(int argc, char **argv)
{
	omp_set_num_threads(8); // one thread per logical core (4 cores + hyperthreading)

	int y;
	// distribute the image rows across the threads; dynamic scheduling balances rows of varying cost
	#pragma omp parallel for private(y) schedule(dynamic)
	for (y = 0; y < camera.film.pixels.y; y++)
	{
		...
	}
}

28.09.2016

Understanding Spherical Harmonics

I wanted to learn more about spherical harmonics and their use for diffuse light probes in real-time rendering applications. Even though I had read about them in the third edition of "Real-Time Rendering" at some point, I had to refresh my knowledge, and I also wanted to implement something to get some practice. I looked up some additional resources and tried to find out how the data is usually precomputed. Spherical Harmonics for Beginners lists many good resources that helped me a lot. It seems that third order spherical harmonics are usually precise enough to model lambertian diffuse lighting: nine coefficients are enough to approximate the diffuse lighting from all directions. They are passed to a shader, where they are applied based on the surface normal. The third order spherical harmonics coefficients consist of one coefficient for the first order, three for the second order and five for the third order functions. Each coefficient can be an LDR or HDR color value. This is all that needs to be done to apply an SH probe in a shader:
// weight each SH coefficient by its basis function evaluated in the direction of the surface normal n
vec3 result = vec3(0.0);
// first order (constant term)
result += coefficients[0] * 0.282095;
// second order (linear in n)
result -= coefficients[1] * 0.488603 * n.y;
result += coefficients[2] * 0.488603 * n.z;
result -= coefficients[3] * 0.488603 * n.x;
// third order (quadratic in n)
result += coefficients[4] * 1.092548 * n.x * n.y;
result -= coefficients[5] * 1.092548 * n.y * n.z;
result += coefficients[6] * 0.315392 * (3.0 * n.z * n.z - 1.0);
result -= coefficients[7] * 1.092548 * n.x * n.z;
result += coefficients[8] * 0.546274 * (n.x * n.x - n.y * n.y);
As you may have noticed, the coefficients can even be premultiplied by those constants. Looking at the first few lines, it may seem obvious that the first coefficient is basically an ambient lighting term and that the following three coefficients add and subtract color along each axis based on the three components of the normal vector. But what is the impact of the last five (the third order) coefficients on the result? The literature usually contains images that show the lobes of the positive and negative spaces for each coefficient, but I was still slightly confused, so I quickly built an application that lets me interactively fiddle with those coefficients, which was very eye-opening. The code is available on my GitHub page and should be easy to build on all the primary platforms: spherical_harmonics_playground.

After playing around a bit with the coefficients, I wanted to precompute some real lighting data. I read about several typical ways to precompute the coefficients. They all apply a computation similar to the one above for each incoming light direction to accumulate the coefficients:
vec3 coefficients[9] = zeroes;
for (each incident light ray with direction n and intensity or color c)
{
	// for area or environment light sources, c also has to carry the sample's
	// solid angle weight (see the cube map weighting described below);
	// a directional light simply contributes its color directly
	coefficients[0] += c * 0.282095;
	coefficients[1] -= c * 0.488603 * n.y;
	coefficients[2] += c * 0.488603 * n.z;
	coefficients[3] -= c * 0.488603 * n.x;
	coefficients[4] += c * 1.092548 * n.x * n.y;
	coefficients[5] -= c * 1.092548 * n.y * n.z;
	coefficients[6] += c * 0.315392 * (3.0 * n.z * n.z - 1.0);
	coefficients[7] -= c * 1.092548 * n.x * n.z;
	coefficients[8] += c * 0.546274 * (n.x * n.x - n.y * n.y);
}
This is especially trivial for directional light sources. Light rays can also be generated with a typical ray tracing approach using Monte Carlo integration, but I went with converting environment maps to SH light probes: in my application I load a cube map, generate a light ray for each of its pixels and weight these rays based on their distance from the unit sphere and their approximate size when projected onto the sphere.
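This is roughly what the per-texel weight can look like (just a sketch with made-up names; it uses the common differential solid angle approximation for a texel of an N x N cube map face):
#include <math.h>
...
// approximate solid angle covered by texel (x, y) of an N x N cube map face:
// place the texel center on the face plane at distance 1 and project its area onto the unit sphere
float texel_solid_angle(int x, int y, int face_size)
{
	float u = 2.0f * (x + 0.5f) / face_size - 1.0f;
	float v = 2.0f * (y + 0.5f) / face_size - 1.0f;
	float texel_area = (2.0f / face_size) * (2.0f / face_size);
	return texel_area / powf(u * u + v * v + 1.0f, 1.5f);
}
The texel's direction on the unit sphere then becomes n in the accumulation loop above and this weight scales its color c.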

I was actually really surprised that this was all I had to do. It was so easy that I was able to read up on, understand and implement everything in a single evening :). I want to thank all the authors for that!
This is what some of my results look like:

27.09.2016

Packing Triangles into a Texture Atlas

When writing the lightmapping library, I had to compute lightmap texture coordinates for several test meshes. While thekla_atlas and uvatlas produce very good results, they can take quite some time to process more complex meshes. There were also some problematic meshes that those libraries could not deal with. So I wanted to quickly compute texture atlases with different resolutions for any of my triangulated test meshes. I thought about simpler texture atlas generation methods and also wanted to know how bad their issues would actually be. I found an article by Thomas Diewald in which he documented how he implemented a Triangle Texture Atlas generator, and I wanted to test his algorithm. So I went and rebuilt it. There was a problem with my initial implementation though: I used a mesh that had two huge triangles and many small ones (see first image), which resulted in a lot of wasted space in the first row of triangles. I was able to improve the algorithm substantially by doing several different passes over the image. The first pass just executes the algorithm as described by Thomas, but does not pack all the triangles. The second pass looks at the right hand side of the filled rows again and tries to fit in the now smaller triangles that are left over (since they are sorted by size). In a third pass, leftover triangles are similarly packed into the left hand side of the rows. The occupied space is tracked in two arrays which store the horizontal positions of the leftmost and rightmost pixels occupied by triangles. This is done simply by rasterizing the triangle edges and updating the x coordinates in those arrays (a rough sketch of this edge rasterization is shown below).
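The row tracking could look roughly like this (only a sketch, not the actual library code; row_min_x/row_max_x are assumed to hold INT_MAX/INT_MIN for untouched rows):
#include <stdlib.h>
...
// walk along a triangle edge from (x0, y0) to (x1, y1) and widen the occupied x range of every row it touches
void mark_edge(int x0, int y0, int x1, int y1, int *row_min_x, int *row_max_x)
{
	int dx = abs(x1 - x0), dy = abs(y1 - y0);
	int steps = dx > dy ? dx : dy;
	for (int i = 0; i <= steps; i++)
	{
		float t = steps > 0 ? (float)i / steps : 0.0f;
		int x = (int)(x0 + t * (x1 - x0) + 0.5f);
		int y = (int)(y0 + t * (y1 - y0) + 0.5f);
		if (x < row_min_x[y]) row_min_x[y] = x;
		if (x > row_max_x[y]) row_max_x[y] = x;
	}
}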

The algorithm is pretty fast and can now be executed every time I load a mesh for lightmapping. But when I used it for this purpose, there were often visible seams between adjacent triangles: triangles that are neighbours in the mesh end up in completely different places in the map and are therefore not interpolated correctly across their shared edges. Even slight differences in their edge intensities were very noticeable. Another disadvantage is that all mesh vertices have to be unique due to their unique texture coordinates, so the other vertex attributes cannot be reused very easily.

I published it as a simple-to-use single-header library on GitHub in case someone has a better use case for it:

"trianglepacker.h is a C/C++ single-file library that packs triangles from a 3D mesh into a 2D map with some specified map size, border and spacing between triangles. It uses a fast greedy algorithm. The output is not optimal!"

19.09.2016

Rendering Physically Based Paper Worlds on Mobile Devices

After a long pause I'm back with some results from my master's thesis. The title of my thesis is "Implementation of a Lighting System for Real-Time rendered Paper Worlds on Mobile Devices".

I prototyped a new lighting system that is suitable for Nyamyam's game engine, which specializes in pop-up book worlds. I extended the original forward renderer, which basically only used static lightmaps, to a physically based lighting model that is capable of handling static, stationary and dynamic light sources with several different effects.

An ambient occlusion mask and the contributions of two stationary lights are baked into the color channels of one lightmap texture, while static direct and indirect lighting are baked into a second set of lightmaps. I wrote about the lightmapping library that I created for this purpose in my previous post.

A physically based paper BRDF is used consistently throughout the lightmap precomputation process and the real-time lighting computations. Paper is not just a diffuse lambertian reflector: it shows a specular highlight and has a wider diffuse lobe due to subsurface scattering events, as described by Papas et al. I implemented the BRDF part of their paper and applied several optimizations and approximations to achieve 60 Hz with about four light calculations per pixel on my current-gen smartphone (a OnePlus One). Since about 40% of mobile devices used for gaming still only support OpenGL ES 2.0, I was very limited in the available feature set. Implementing good looking dynamic shadow maps in particular was very difficult and is still fairly expensive on many ES 2.0 devices.

The result of the physically based lighting calculations can easily exceed the displayable range. I used a simple Reinhard tone mapper to solve this. It only costs a few instructions in addition to the gamma correction step at the end.
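Per color channel (assuming a gamma of 2.2 and leaving out any exposure control), the whole thing boils down to something like this sketch:
#include <math.h>
...
// simple Reinhard tone mapping followed by gamma correction for one color channel
float tonemap_channel(float c)
{
	c = c / (1.0f + c);           // compresses [0, inf) into [0, 1)
	return powf(c, 1.0f / 2.2f);  // gamma correction
}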

The first image shows a scene without diffuse texturing to highlight the different effects of the new lighting system. Along with the prototype, I also created a tool to author the lighting conditions for each scene; it can be seen in the second image.


14.05.2016

New Lightmapping Library

I finally cleaned up and released my lightmapping library just now: lightmapper.h
It is a public domain single-header library that you can use with your own OpenGL renderer. While the simple example application only demonstrates the baking of ambient occlusion, the library can also be used to precompute global illumination for static scene geometry by doing multiple bounce passes. Arbitrary light shapes and emissive surfaces are supported as long as you draw them with their corresponding emissive light colors.

13.04.2016

I'm starting to Blog! Or... at least I'll try to! :)

You may have noticed that I didn't update my site THAT often *coughs*.
Well, at least I did it NOW!
Since I can't remember when I wrote the older stuff on this website, I've just pasted it below. Hopefully I'll get myself into the mood of opening this page more often to press some keys instead of only pressing keys while editing random source code files... :)

But now ... back to work on my thesis topic:
Implementation of a Lighting System for Real-Time rendered Paper Worlds on Mobile Devices.

Older Entries Without a Date

#ShowMeYourCoolTechBooks

I asked for and collected interesting tech books on twitter. Here are some pictures of people's bookshelves:

@cdaylward: "a shelf"
@worrydream: "the whole damn shelf"
@rygorous: "left shelf", "right shelf", "spillover shelf"

Below is my own little collection of cool books. I deliberately excluded my C++ books ;)

Tale of Light

Amethyst and I started developing a puzzle game in mid-2015. We decided to call it "Tale of Light", since the primary mechanics will be all about light :). We also made a website for it that we'll update with pretty pictures. Unfortunately we have to interrupt the development of this project due to my master's thesis. But we really like this project and will get back to it as soon as possible!

TALE-OF-LIGHT.COM

Tale of Light is a puzzle-platform game about light and shadow.
Taking a trip to exciting places throughout a magic world,
the player must solve enlightening puzzles to proceed.

Here is a video showing some current 2D lighting engine progress:

Oculus Meets Augmented Reality

During winter 2014/2015 a few fellow students and I built a hardware setup and a software library for low latency augmented reality applications. The setup consisted of an Oculus Rift DK2, two Logitech C310 webcams with fisheye lenses and a motion capture tracking system for additional markers. Test subjects were surprised by our latency reduction and prediction techniques. These, together with the fact that the subjects could participate in real world tasks, significantly reduced the risk of motion sickness.
Link to GitHub repository


Fruitinvaders (Demoscene PC 4k intro)

Amethyst and I have visited the Evoke demoparty in Cologne several times over the last few years. This year we went ahead and submitted our very first 4k intro! :)

Link to pouet.net page
Graphics: amethyst, ands | Code: ands | Audio: ands

libr3d (C Library)

I've made a small real-time software rasterization library with basic shader and texture mapping support. It was originally written to display some fun real-time 3D graphics on the STM32F429-Discovery evaluation board, but it also works with SDL now.
Link to GitHub repository

WebGL Blocky Tunnel (js/WebGL)

For a CG class at my university a fellow student and I made a small game called MINEFLY. The player must navigate a helicopter through a procedurally generated Minecraft-like tunnel with different sections without touching any of the blocks. I liked the procedurally generated tunnel and wanted to try some WebGL coding at that time, so I ported the tunnel generator to WebGL and let a camera fly through it: Click here to start the fly-through

Pony Behaviour Trees (C# library for Legends of Equestria)

For about four years I worked on the community-made MMORPG "Legends of Equestria". During most of that time, Justin Bruening and I were the main programmers on the project. In those four years I programmed and learned a lot about MMO network code, physics, AI and tools (mostly on the server side). One of the systems I made was an AI scripting system called Pony Behaviour Trees. PBT is a set of libraries to create, execute and inspect behaviour trees, which are often used to implement artificial intelligence for games. The ability to edit PBTs while the application (in this case the MMO server) is running, C# scripting inside nodes and runtime inspection made this a flexible and powerful tool. At some point I decoupled the code from the server and turned it into its own open source library.
Link to GitHub repository
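As a generic illustration of the underlying concept (this is not PBT's actual API, just a sketch in C): a composite node like a sequence simply ticks its children in order and stops as soon as one of them fails or is still running:
typedef enum { BT_SUCCESS, BT_FAILURE, BT_RUNNING } bt_status;

typedef struct bt_node
{
	bt_status (*tick)(struct bt_node *self);
	struct bt_node **children;
	int child_count;
} bt_node;

// sequence composite: succeeds only if all children succeed, in order
bt_status bt_sequence_tick(bt_node *self)
{
	for (int i = 0; i < self->child_count; i++)
	{
		bt_status s = self->children[i]->tick(self->children[i]);
		if (s != BT_SUCCESS)
			return s; // propagate FAILURE or RUNNING
	}
	return BT_SUCCESS;
}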

GLGUI (C# library)

GLGUI is a fast WinForms-like object-oriented GUI library based on an improved OpenTK version that allows custom cursors (https://github.com/ands/opentk) and on some QuickFont font generator code. Both GameWindows and GLControls, which use different input EventArgs, are supported. It uses scissor rects with clear commands to draw all colored rectangles.
Link to GitHub repository
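The rectangle drawing trick mentioned above looks roughly like this when expressed with the plain OpenGL C API (GLGUI itself issues the equivalent calls through OpenTK; this is only a sketch):
// draw a filled, colored rectangle by clearing a scissored region
void draw_rect(int x, int y, int width, int height, float r, float g, float b, float a)
{
	glEnable(GL_SCISSOR_TEST);
	glScissor(x, y, width, height);
	glClearColor(r, g, b, a);
	glClear(GL_COLOR_BUFFER_BIT);
	glDisable(GL_SCISSOR_TEST);
}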

Haptic Props: Semi-actuated tangible props for haptic interaction on the surface

The first publication I'm involved with!
It is based on the results of my bachelor's thesis.

Download Page

ABSTRACT
While multiple methods to extend the expressiveness of tangible interaction have been proposed, e. g., self-motion, stacking and transparency, providing haptic feedback to the tangible prop itself has rarely been considered. In this poster we present a semi-actuated, nano-powered, tangible prop, which is able to provide programmable friction for interaction with a tabletop setup. We have conducted a preliminary user study evaluating the users' acceptance for the device and their ability to detect changes in the programmed level of friction and received some promising results.

Crystal Wars

During winter 2011/2012 fifteen fellow students and I planned and developed a game called Crystal Wars. The game is a combination of a first person role-playing game and a strategy game. Characters on the ground, controlled either by AIs or by human players, have to fight the enemy team and capture the crystal from the enemy side, while a master/god player (also AI or human) with a top-down view on each side helps his or her team with special abilities.
One aim of the project was to support as many available input devices as possible. We ended up using multitouch tables/walls, Kinect sensors, Wii controllers, mobile devices as remote controls and regular mouse and keyboard input. While getting the various gesture systems working was an interesting problem, my main jobs were to implement a scripting system, path finding and the character AIs for our game. It turned out that the very different input methods made it quite hard to balance the AIs and the gameplay, but we managed to present a mostly fair and fun game in the end.

Screenshots for some of the stuff I worked on

Below is a list of screenshots from game engines, renderers, effects and game projects I made or contributed to.
Some of these go way back to my early teens :D
Disclaimer: Most of the 3D models aren't mine (except for the procedurally generated trees above the tetris screenshots).