Monday, January 19, 2015

Modelling with distance functions - SHAPE

For a couple of years I've been fascinated with the rendering techniques used by the guys in the demoscene. I've dabbled a bit with sphere tracing before, but never had enough patience or skill to actually finish a demo.

Since I started making graphics apps for Android, I wanted to try out the idea of generating animated images with distance functions on a mobile device - but in 2D, instead of 3D (of course mobiles are too slow for full sphere tracing just yet).

Here is the final app (working as a live wallpaper) - check out the screenshots, and get the free version if you happen to possess an Android device:

For anyone who has done any sphere tracing, the basic idea should be obvious - a scene is defined by a distance function, and since it's in 2D, no tracing is necessary - just check the value at the pixel, and if it's negative, you're inside.
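To make the "negative means inside" idea concrete, here's a minimal CPU-side sketch in Python (the app itself does this per-pixel in a shader; the disc function and names below are just for illustration):

```python
import math

def sd_disc(x, y, cx, cy, r):
    """Signed distance to a disc centered at (cx, cy) with radius r:
    negative inside, zero on the edge, positive outside."""
    return math.hypot(x - cx, y - cy) - r

# The per-pixel test: the pixel belongs to the shape when the distance is negative.
print(sd_disc(0.5, 0.0, 0.0, 0.0, 1.0) < 0)  # point inside the unit disc
print(sd_disc(2.0, 0.0, 0.0, 0.0, 1.0) < 0)  # point outside
```

The sign also tells you how far you are from the edge, which is what makes the antialiasing and coloring tricks later in this post possible.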

Check out this ShaderToy entry - a single shader doing everything I'm going to describe below:
The way I do it on mobile is somewhat different - not in a single fullscreen shader - and I'll explain it a bit later.

The first step in generating a shape is a domain (coordinate space) transformation: first a rotation and then one of various kinds of bends and stretches. The shape will be generated in this deformed domain.
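A domain transformation just means warping the coordinates before they are fed to the distance function. Here's a hedged sketch with a rotation followed by a sinusoidal bend (the bend formula is my own invented example, not necessarily one of the deformations the app uses):

```python
import math

def rotate(x, y, angle):
    """Rotate the sample point around the origin."""
    c, s = math.cos(angle), math.sin(angle)
    return c * x - s * y, s * x + c * y

def bend(x, y, amount):
    """Hypothetical bend: shift y by a wave that depends on x."""
    return x, y + amount * math.sin(x)

def transform_domain(x, y, angle, bend_amount):
    """Evaluate the shape at these warped coordinates instead of (x, y)."""
    x, y = rotate(x, y, angle)
    return bend(x, y, bend_amount)
```

The shape itself never changes - only the coordinates it is evaluated in, which is what makes the deformations cheap.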

A starting shape is generated by a CSG operation between two of the basic shapes (disc, square, line etc.). The shapes and the CSG op are chosen randomly.
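CSG on distance functions reduces to min/max, which is the standard trick - a sketch, with a disc and a square as the two basic shapes (the shape list and function names here are illustrative):

```python
import math

def sd_disc(x, y, r):
    return math.hypot(x, y) - r

def sd_square(x, y, s):
    return max(abs(x), abs(y)) - s

# The classic CSG operations on distance values:
def csg_union(d1, d2):        return min(d1, d2)
def csg_intersection(d1, d2): return max(d1, d2)
def csg_subtract(d1, d2):     return max(d1, -d2)

def scene(x, y):
    """A disc with a square bite subtracted from its center."""
    return csg_subtract(sd_disc(x, y, 1.0), sd_square(x, y, 0.5))
```

Picking two random shapes and one random operation, as the post describes, gives a surprisingly varied family of starting shapes.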

At all times two such 'compound shapes' are active and temporally transformed (blended) one to another. After the transformation to the second shape is finished, another (third) compound shape is randomized, and the process continues, blending second to third, and so on.
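The temporal blend can be as simple as linearly interpolating the two distance values - a minimal sketch, assuming a plain lerp driven by a 0-to-1 parameter (the app may well use a fancier easing):

```python
def blend(d_from, d_to, t):
    """Blend two distance values; t goes from 0 to 1 over the transition.
    At t=0 you see the first compound shape, at t=1 the second."""
    return (1.0 - t) * d_from + t * d_to

# Once t reaches 1, the second shape becomes the new "from",
# a fresh compound shape is randomized as the new "to", and t resets to 0.
```

Note that the blend of two signed distance functions is generally no longer a true distance function - which is exactly the loss of the distance property mentioned below.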

This way, at some point in time, I may obtain something like this. Through all these transformations I probably lose the distance property of the function, but it's still good enough to procedurally antialias the edges, which makes for a much smoother look.

Another step is to replicate the domain near the origin:

You can notice that every second shape is mirrored left to right or top to bottom (in a chessboard pattern). This makes for a kaleidoscope-like animation and ensures image continuity whenever the shape transformations cause it to cross the edges of the small domain area that is being replicated (it's really obvious when you see it, but it's hard to explain in words).
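The replication-with-mirroring step can be sketched like this (a Python version of what would be a few lines of shader math; `cell` is the size of the replicated domain area):

```python
import math

def replicate(x, y, cell):
    """Tile the plane with cell-sized copies of the domain near the origin,
    mirroring every other tile in a chessboard pattern so the pattern
    stays continuous across tile edges."""
    ix, iy = math.floor(x / cell), math.floor(y / cell)
    lx, ly = x - ix * cell, y - iy * cell   # local coords in [0, cell)
    if ix % 2:                               # mirror alternating columns
        lx = cell - lx
    if iy % 2:                               # mirror alternating rows
        ly = cell - ly
    return lx, ly
```

Without the mirroring, a shape sliding out of one tile edge would pop in discontinuously at the opposite edge of the neighboring tile; with it, the two copies meet seamlessly.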

A final step in pattern generation is applying some cool-looking 2D coordinate system (actually, in terms of the code, it is the first step, but conceptually it's easier to think of it as the last one). My favourite one is a log-polar coordinate system that makes for a tunnel-like look.
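The log-polar mapping itself is tiny - radius becomes its logarithm, so concentric rings turn into evenly spaced stripes, which reads as a tunnel. A sketch (undefined at the exact origin, where r = 0):

```python
import math

def log_polar(x, y):
    """Map screen coordinates to log-polar space: (log of radius, angle).
    Evaluating the tiled shape pattern in this space gives the tunnel look."""
    r = math.hypot(x, y)
    return math.log(r), math.atan2(y, x)
```

Scaling the input coordinates before this mapping just shifts the log-radius axis, which is why zooming into the tunnel looks like an endless translation.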


One problem is that the shape is minified in some areas and magnified in others. This causes poor antialiasing and too much blurring, respectively. I don't know how to fix this easily - this "procedural antialiasing" is done by making a smoothstep transition between a shape color and a background color across a fixed distance range, e.g. (-0.1, 0.1). I could probably make this interval dependent on a coordinate differential, but it wouldn't always work, because 1) the function loses its distance property often anyway, and 2) the implementation is different on mobile.
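For reference, the fixed-range smoothstep antialiasing looks roughly like this (a Python sketch of the standard GLSL smoothstep; the 0.1 band width matches the example range above):

```python
def smoothstep(edge0, edge1, x):
    """Hermite interpolation, clamped to [0, 1] - same as GLSL's smoothstep."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def shade(d, shape_color, bg_color, aa=0.1):
    """Blend shape color into background across the fixed band (-aa, aa)
    around the zero level of the distance function."""
    t = smoothstep(-aa, aa, d)   # 0 deep inside the shape, 1 far outside
    return tuple((1 - t) * s + t * b for s, b in zip(shape_color, bg_color))
```

Because the band is fixed in distance-function units, a magnified region stretches it into a visible blur, and a minified region shrinks it below a pixel - the exact problem described above.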

The implicit function also allows for some procedural fun with colors - I have as many as four colors assigned to different ranges of the implicit function values:
- shape interior color
- second shape interior color, across some range deeper inside the shape
- shape border color - across a very narrow range around zero - between the first interior color and a glow color
- glow color, which is strongest near the border and fades to the background color away from the border
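The four-band coloring boils down to bucketing the implicit value - a simplified sketch with hypothetical thresholds (the real ranges are tuned by hand, and the actual transitions are smooth blends rather than the hard cutoffs used here for clarity):

```python
def pick_color(d):
    """Map the implicit-function value to one of the four color bands.
    Thresholds here are made up for illustration."""
    if d < -0.5:
        return "second interior"   # deeper inside the shape
    if d < -0.05:
        return "interior"
    if d < 0.05:
        return "border"            # narrow band around zero
    if d < 0.5:
        return "glow"              # strongest near the border
    return "background"            # glow fully faded out
```

In the shader the same idea is done with a chain of smoothstep blends over these ranges, so neighboring bands fade into each other instead of banding.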

The glow color is very cool and can be used to give the impression of either a light glow or a shadow.

Combining all this, you get images like this one (this is in a regular polar coordinate system):

Initially I also had transitions between various coordinate systems, which looked pretty cool, but I dumped them for two reasons: 1) a performance issue on mobile, and 2) transitions between some pairs of systems made the image discontinuous while they ran, which looked really bad. You can still see some transitions (performed in a different way) in the ShaderToy prototype.

Now, I said it's done a bit differently on mobile devices, but this post is getting long, so I guess the explanation will have to wait for the next post.