Miss Cache
Rendering, animation, games
Thursday, November 17, 2016
Normal vectors? Normal covectors? Normal pseudovectors?
The normal to a parametrized surface, if defined as dp/du x dp/dv (cross product), is a pseudovector. However, this is not how we usually treat normal vectors: if a surface is mirrored (across itself, for example), we usually want the normal vector to be reflected as well, and this doesn't happen if we stick to the definition precisely.
Even so, the normal "vector" obviously does not transform like a regular vector. The Wikipedia page for a pseudovector (https://en.wikipedia.org/wiki/Pseudovector) says that if a regular vector transforms as v' = Rv, then a pseudovector transforms as v' = det(R)*(Rv). But this covers only rotations, proper and improper. The reflection mentioned above is an improper rotation, and it is exactly the situation where we don't want to think of a normal as a pseudovector; I suppose that any improper rotation in CG represents a similar situation (under proper rotations, vectors and pseudovectors behave the same way). So the above formula for transforming normals (the way pseudovectors transform) seems to be useless in CG.
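This failure of the cross product to follow a reflection can be checked directly in a few lines (a sketch in pure Python, using a reflection across the xy-plane):

```python
def cross(a, b):
    # Standard 3D cross product.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def apply(m, v):
    # Multiply a 3x3 matrix (list of rows) by a 3-vector.
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

# Reflection across the xy-plane: an improper rotation, det(R) = -1.
R = [[1, 0, 0], [0, 1, 0], [0, 0, -1]]

a, b = (1, 0, 0), (0, 1, 0)
n = cross(a, b)                         # (0, 0, 1)

print(cross(apply(R, a), apply(R, b)))  # (0, 0, 1): cross of the reflected vectors
print(apply(R, n))                      # (0, 0, -1): the reflected normal
# The two disagree by det(R) = -1 - exactly the pseudovector sign flip.
```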
So what isn't useless for transforming normals? The formula that is well known in graphics: the inverse transpose of the regular transformation. A reflection doesn't change under inverse transpose because it is orthogonal (recall the useful CG mnemonic: if a transformation contains no nonuniform scaling, it equals its inverse transpose). For every other kind of transformation, the inverse transpose takes care of preserving the correct normal, and this is precisely how covectors transform.
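The inverse-transpose rule can also be verified numerically. In this 2D sketch, a tangent transformed as a vector and a normal transformed as a covector stay perpendicular under a nonuniform scale, while the naively transformed normal does not:

```python
def inv_transpose_2x2(m):
    # m = [[a, b], [c, d]]; returns (m^-1)^T.
    a, b, c, d = m[0][0], m[0][1], m[1][0], m[1][1]
    det = a * d - b * c
    # inverse = (1/det) * [[d, -b], [-c, a]]; transposing swaps the off-diagonals.
    return [[d / det, -c / det], [-b / det, a / det]]

def apply(m, v):
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# Nonuniform scale: squash y by half.
M = [[1.0, 0.0], [0.0, 0.5]]
tangent = (1.0, -1.0)    # direction along the line x + y = const
normal  = (1.0,  1.0)    # perpendicular to it

t2 = apply(M, tangent)                    # transforms as a regular vector
n2 = apply(inv_transpose_2x2(M), normal)  # transforms as a covector

print(dot(t2, n2))                 # 0.0: still perpendicular
print(dot(t2, apply(M, normal)))   # 0.75: the naive transform fails
```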
So, what is a normal (the mathematical one, defined through the cross product)? It has to be a pseudovector. But it also seems to be a covector, since the covector transformation rule is what turns out to be correct for transforming normals in CG. Apparently, the distinction between vectors and pseudovectors is orthogonal to the distinction between covariant and contravariant vectors. The terminology is a little confusing, but the actual possible combinations are: contravariant vector, contravariant pseudovector, covariant vector, covariant pseudovector. The normal to a surface seems to be of the last kind.
What is most confusing to me is that whenever I read a more advanced and supposedly mathematically correct description of transforming normals in CG, it mentions that normals are pseudovectors (which is mildly useful, because as described above it boils down to flipping or not flipping the sign when the transformation changes handedness), but it almost never mentions that they are covectors as well, which is where the inverse-transpose formula comes from (I actually only found out about this when learning basic tensor analysis and noticing that the transformation rule for covectors has this specific form).
Monday, January 19, 2015
Modelling with distance functions - SHAPE
For a couple of years I've been fascinated with the rendering techniques used by the guys on the demoscene. I've dabbled a bit with sphere tracing before, but never had enough patience or skill to actually finish a demo.
Since I started making graphics apps for Android, I've wanted to try out the idea of generating animated images with distance functions on a mobile device - but in 2D instead of 3D (of course, mobiles are too slow for full sphere tracing just yet).
Here is the final app (working as a live wallpaper) - check out the screenshots, and get the free version if you happen to possess an Android device:
https://play.google.com/store/apps/details?id=pl.madscientist.shape.free
For anyone who has done any sphere tracing, the basic idea should be obvious - a scene is defined by a distance function, and since it's in 2D, no tracing is necessary - just check the value at the pixel, and if it's negative, you're inside.
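As a minimal sketch of that sign test, a disc's signed distance function in Python:

```python
import math

def disc_sdf(x, y, r=1.0):
    # Signed distance to a disc of radius r centered at the origin:
    # negative inside, zero on the boundary, positive outside.
    return math.hypot(x, y) - r

print(disc_sdf(0.0, 0.0))   # -1.0: inside
print(disc_sdf(2.0, 0.0))   #  1.0: outside
```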
Check out this ShaderToy entry - a single shader doing all I'm going to describe below:
https://www.shadertoy.com/view/Mtl3WH
The way I do it on mobile is somewhat different - not in a single fullscreen shader - and I'll explain it a bit later.
The first step in generating a shape is a domain (coordinate space) transformation: first a rotation, and then one of various kinds of bends and stretches. The shape will be generated in this deformed domain.
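A sketch of such a domain transformation - the pixel coordinate is deformed before the distance function is evaluated. The specific bend below is my assumption; the post doesn't list the exact deformations used:

```python
import math

def rotate(x, y, angle):
    # Rotate the coordinate around the origin.
    c, s = math.cos(angle), math.sin(angle)
    return c * x - s * y, s * x + c * y

def bend(x, y, k=0.5):
    # One possible bend: shear x by a sine of y (an assumption - the
    # post doesn't specify the deformation family it draws from).
    return x + k * math.sin(y), y

def deform(x, y, angle=0.7, k=0.5):
    # Rotation first, then the bend; the shape's distance function is
    # then evaluated at the deformed coordinate.
    return bend(*rotate(x, y, angle), k)
```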
A starting shape is generated by a CSG operation between two of the basic shapes (disc, square, line etc.). The shapes and the CSG op are chosen randomly.
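The standard distance-field CSG operators are simple min/max combinations; the basic shape functions below are illustrative sketches:

```python
import math

def disc(x, y, r):
    return math.hypot(x, y) - r

def square(x, y, s):
    # Chebyshev distance gives an axis-aligned square of half-size s.
    return max(abs(x), abs(y)) - s

# The usual distance-field CSG operators (the post doesn't list which
# ops the app picks from; these are the standard ones):
def union(d1, d2):     return min(d1, d2)
def intersect(d1, d2): return max(d1, d2)
def subtract(d1, d2):  return max(d1, -d2)

# A square with a disc bitten out of its center:
d = subtract(square(0.0, 0.0, 1.0), disc(0.0, 0.0, 0.5))
print(d > 0)   # True: the center point is now outside the compound shape
```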
At all times, two such 'compound shapes' are active, with one being blended into the other over time. After the transformation to the second shape finishes, another (third) compound shape is randomized, and the process continues - blending the second into the third, and so on.
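A sketch of the blending step, assuming a plain linear blend (the post doesn't say which blend the app actually uses):

```python
def blend(d1, d2, t):
    # Morph between two distance-field values as t goes from 0 to 1.
    # A linear blend is an assumption; note the result is generally no
    # longer an exact distance, only an implicit function with the
    # correct sign.
    return (1.0 - t) * d1 + t * d2

# Halfway between a point inside shape A (-0.4) and outside shape B (0.2):
print(blend(-0.4, 0.2, 0.5))   # negative: the point is still inside
```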
This way, at some point in time, I may obtain something like this. Through all these transformations I probably lose the distance property of the function, but it's still good enough to procedurally antialias the edges, which makes for a much smoother look.
Another step is to replicate the domain near the origin:
You can notice that every second shape is mirrored left to right or top to bottom (in a chessboard pattern). This makes for a kaleidoscope-like animation and ensures image continuity whenever the shape transformations cause the shape to cross the edges of the small domain area being replicated (it's really obvious when you see it, but hard to explain in words).
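A sketch of the replication with the chessboard mirroring; the cell size is an arbitrary assumption. Mirroring every second tile is what keeps the field continuous across tile edges:

```python
import math

def replicate_mirrored(x, y, cell=2.0):
    # Tile the plane with copies of a cell around the origin, mirroring
    # every second tile (chessboard pattern) so the field stays
    # continuous across tile edges. The cell size is an assumption.
    ix = math.floor(x / cell + 0.5)
    iy = math.floor(y / cell + 0.5)
    lx = x - ix * cell
    ly = y - iy * cell
    if ix % 2: lx = -lx
    if iy % 2: ly = -ly
    return lx, ly

print(replicate_mirrored(2.5, 0.0))  # (-0.5, 0.0): lands mirrored in a neighbor tile
```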
The final step in pattern generation is the use of some cool-looking 2D coordinate system (actually, in terms of the code, it is the first step, but conceptually it's easier to think of it as the last one). My favourite is a log-polar coordinate system, which makes for a tunnel-like look.
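A log-polar mapping can be sketched like this; the tunnel look comes from the fact that radial scaling turns into a plain translation along the log-r axis:

```python
import math

def log_polar(x, y):
    # Map (x, y) to log-polar coordinates (log r, theta). Feeding these
    # into the distance function wraps the pattern into a self-similar
    # tunnel.
    r = math.hypot(x, y)
    return math.log(r), math.atan2(y, x)

# Doubling the distance from the origin shifts log r by log 2:
u1, _ = log_polar(1.0, 0.0)
u2, _ = log_polar(2.0, 0.0)
print(u2 - u1)   # log(2), about 0.693
```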
One problem is that the shape is minified in some areas and magnified in others, which causes poor antialiasing and too much blurring, respectively. I don't know how to fix this easily - the "procedural antialiasing" is done by making a smoothstep transition between a shape color and a background color across a fixed distance range, e.g. (-0.1, 0.1). I could probably make this interval dependent on a coordinate differential, but it wouldn't always work, because 1) the function often loses its distance property anyway, and 2) the implementation is different on mobile.
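The procedural antialiasing described above amounts to a smoothstep over the fixed band around the zero level set:

```python
def smoothstep(e0, e1, x):
    # Standard GLSL-style smoothstep: clamped cubic Hermite interpolation.
    t = max(0.0, min(1.0, (x - e0) / (e1 - e0)))
    return t * t * (3.0 - 2.0 * t)

def coverage(d, halfwidth=0.1):
    # Background-color coverage across the fixed band around the edge,
    # matching the (-0.1, 0.1) example from the post: 0 deep inside the
    # shape, 1 well outside, smooth in between.
    return smoothstep(-halfwidth, halfwidth, d)

print(coverage(-1.0))   # 0.0: solid shape color
print(coverage(0.0))    # 0.5: exactly on the edge
print(coverage(1.0))    # 1.0: solid background
```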
The implicit function also allows for procedural fun with colors - I have as many as four colors assigned to different ranges of the implicit function's values:
- shape interior color
- second shape interior color, across some range deeper inside the shape
- shape border color - across a very narrow range around zero - between the first interior color and a glow color
- glow color, which is strongest near the border and fades to the background color away from the border
The glow color is very cool and can be used to create the impression of either a light glow or a shadow.
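A sketch of the four-band color assignment; the thresholds are my assumptions, since the post only names the ranges:

```python
def pick_band(d):
    # Assign one of the post's four color roles by the implicit value d.
    # The threshold values are assumptions; a real implementation would
    # also smoothstep between adjacent bands instead of hard-switching.
    if d < -0.5:
        return "second interior"   # deeper inside the shape
    if d < -0.05:
        return "interior"
    if d < 0.05:
        return "border"            # narrow range around zero
    if d < 0.5:
        return "glow"              # fades toward the background
    return "background"

print(pick_band(-1.0))  # second interior
print(pick_band(0.0))   # border
print(pick_band(0.2))   # glow
```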
Combining all this, you get images like this one (this is in a regular polar coordinate system):
Initially I also had transitions between various coordinate systems, which looked pretty cool, but I dropped them because 1) of a performance issue on mobile, and 2) transitions between some pairs of systems made the image discontinuous, which looked really bad. You can still see some transitions (performed in a different way) in the ShaderToy prototype.
Now, I said it's done a bit differently on mobile devices, but this post is getting long, so the explanation will have to wait for the next post.