
isolated-pixel-pushing

This is a copy of what I posted to my own humble blog as it might be of interest here.

After finally deciding to look around for some projects on GitHub, I found a number of very interesting ones in a matter of minutes.

I found Fragmentarium first. This program is like something I tried for years and years to write but never got around to putting into any real finished form. It can act as a simple testbench for GLSL fragment shaders, which I'd already realized could do exactly what I was doing more slowly in Processing, much more slowly in Python (stuff like this, if we want to dig up things from 6 years ago), much more clunkily in C and openFrameworks, and so on. It took me probably about 30 minutes to put together the code to generate the usual gaudy test pattern I try when bootstrapping in a new environment:

(Yeah, it’s gaudy. But when you see it animated, it’s amazingly trippy and mesmerizing.)

The use I’m talking about (and that I’ve reimplemented a dozen times) was just writing functions that map the 2D plane to some colorspace, often with some spatial continuity. Typically I’ll have some other parameters in there that I’ll bind to a time variable or some user control to animate things. So far I don’t know of any particular term that encompasses functions like this, but I know people have used them in different forms for a long while. This is the basis of procedural texturing (as pioneered in “An image synthesizer” by Ken Perlin), as implemented in countless different forms: Nvidia Cg, GLSL, probably the RenderMan Shading Language, RTSL, POV-Ray’s extensive texturing, and Blender’s node texturing system (which I’m sure took after a dozen other similar systems). Adobe Pixel Bender, which the Fragmentarium page introduced me to for the first time, does something pretty similar but to different ends. Some systems such as vvvv and Quartz Composer probably permit similar operations; I don’t know for sure.
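To make that a little more concrete, here is a rough sketch of the kind of fragment shader I mean. This is not the actual code behind the image above, and the "resolution" and "time" uniforms are just stand-ins for whatever the host environment provides, but the shape is the same: each pixel's color is a pure function of its position (and, here, of time).

    // Plain GLSL fragment shader: color as a function of pixel position.
    // The uniforms are assumed to be fed in by the host program.
    uniform vec2  resolution;  // viewport size in pixels
    uniform float time;        // seconds since start, for animation

    void main() {
        // Map the pixel to a centered, aspect-corrected point on the plane.
        vec2 p = (gl_FragCoord.xy - 0.5 * resolution) / resolution.y;

        // Any function of p (and time) can go here; this one just layers
        // a few sines to get a smooth, vaguely plasma-like pattern.
        float a = sin(10.0 * p.x + time) + sin(10.0 * p.y + 1.3 * time);
        float b = sin(8.0 * length(p) - 2.0 * time);

        // Turn the scalar fields into a color. Each pixel is computed
        // in complete isolation from its neighbors.
        vec3 col = 0.5 + 0.5 * vec3(sin(a + b), sin(a - b + 2.0), sin(b + 4.0));
        gl_FragColor = vec4(col, 1.0);
    }

Bind "time" to a clock (or the magic constants to sliders) and it animates.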

The benefits of representing a texture (or whatever image) as an algorithm rather than a raster image are pretty well-known: it’s a much smaller representation; it scales pretty well to 3 or more dimensions (particularly with noise functions like Perlin noise or simplex noise); it can have a near-unlimited level of detail; it makes things like seams and antialiasing much less of an issue; and it is almost the ideal case for parallel computation, which modern graphics hardware supports directly (e.g. GLSL, Cg, to some extent OpenCL). The drawback is that you usually have to find some way to represent the image as a function in which each pixel or texel (or voxel?) is computed in isolation from all the others. That might be clumsy, it might be horrendously slow, or the image might not have any good representation in this form at all.
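The "3 or more dimensions" point is what makes solid texturing work: the texture is just a function of a 3D point, so any surface (or slice, or volume) can sample it with no UV mapping and no seams. A toy example of my own, not taken from any of the systems above:

    // A solid texture: color as a pure function of a point in space.
    // The same function can shade a 2D slice, a surface in a 3D scene,
    // or a volume, at any scale; but every sample stands alone.
    vec3 woodish(vec3 p) {
        // Concentric rings around the z axis, wobbled a little.
        // A Perlin or simplex noise call here would make it far less regular.
        float rings = sin(20.0 * length(p.xy) + 2.0 * sin(4.0 * p.z));
        return mix(vec3(0.45, 0.28, 0.12),   // dark wood
                   vec3(0.75, 0.55, 0.30),   // light wood
                   0.5 + 0.5 * rings);
    }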

Also, once it’s an algorithm, you can parametrize it. If you can make it render in near realtime, then animation and realtime user control follow almost for free; even without that, you still have a lot of flexibility when you can change parameters.

The only thing I’m doing differently (and debatably so) is trying to make compositions out of the functions themselves rather than using them as a means to a different end, like video processing effects or texturing in a 3D scene. It also fascinated me to see these same functions animated in realtime.

However, the author of Fragmentarium (Mikael Hvidtfeldt Christensen) is doing much more interesting things with the program (i.e. rendering 3D fractals with distance estimation) than I would ever have considered doing. It makes sense why – his work emerged more from the context of fractals and ray tracers on the GPU, like Amazing Boxplorer, and fractals tend to make for very interesting results.

His Syntopia Blog has some fascinating material and beautiful renders on it. His posts on Distance Estimated 3D Fractals were particularly interesting to me – in part because this was the first time I had encountered the technique of distance estimation for rendering a scene. He gave a good introduction with lots of other material to refer to.

Distance estimation blows my mind a little when I try to understand it. I have a decent high-level understanding of ray tracing, but this is not ray tracing; it’s ray marching. It lets complexity be emergent rather than requiring an explicit representation the way a scanline renderer or ray tracer might (while ray tracers will gladly take a functional representation of many geometric primitives, I have encountered very few cases where something like a complex fractal or an isosurface could be rendered without first approximating it as a mesh or some other shape, sometimes at great cost). Part 1 of Mikael’s series on Distance Estimated 3D Fractals links to these slides, which show a 4K demo built piece by piece using distance estimation to render a pretty complex scene.
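As far as I understand it so far, the core loop is surprisingly small. The distance estimator gives, for any point in space, a lower bound on the distance to the nearest surface, so the ray can safely step that far at once and repeat until it is close enough to call it a hit. A sketch with a trivial estimator (a sphere) standing in for a fractal:

    // Sphere tracing / ray marching against a distance estimator.
    // DE(p) returns a lower bound on the distance from p to the
    // nearest surface; swap in a Mandelbulb or Mandelbox estimator
    // and the exact same loop renders a fractal.
    float DE(vec3 p) {
        return length(p) - 1.0;            // unit sphere at the origin
    }

    // March from 'from' along the unit direction 'dir'. Returns a
    // crude shade based on how many steps were needed (few steps in
    // open space, many steps when grazing the surface).
    float march(vec3 from, vec3 dir) {
        const int   MAX_STEPS = 100;
        const float EPSILON   = 0.001;     // close enough to count as a hit
        float totalDist = 0.0;
        for (int i = 0; i < MAX_STEPS; i++) {
            vec3  p = from + totalDist * dir;
            float d = DE(p);
            if (d < EPSILON)
                return 1.0 - float(i) / float(MAX_STEPS);
            totalDist += d;
            if (totalDist > 100.0) break;  // the ray escaped the scene
        }
        return 0.0;                        // nothing hit: background
    }

That a loop this small is enough to render something with the detail of a 3D fractal is exactly the emergent-complexity part that gets me.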

He has another, rather different program called Structure Synth, which he made following the same “design grammar” approach as Context Free. I haven’t used Structure Synth yet, because Context Free was also new to me and I was first spending some time learning to use that. I’ll cover this in another post.
