Context Free

August 29th, 2011

This is another post copied over from my own blog, a continuation of the last one… Also, fans of the laser cutter, note that Context Free generates SVG files just as happily as it does raster images.

My last post mentioned a program called Context Free, which I came across via the Syntopia blog; its author modeled his program Structure Synth after it. (Also, Make Magazine mentioned it in Issue 17.)

I’d heard of context-free grammars before, but my understanding of them was pretty vague. This program is built around them, and the documentation explains their limitations; what I grasped is that no shape can have any “awareness” of the context in which it’s drawn, i.e. of any other part of the scene or even of where in the scene it sits. A perusal of the site’s gallery shows how little those limitations really matter.

I downloaded the program, started it, and their welcome image (with its relatively short source code right beside it) greeted me, rendered on the spot:

The program was very easy to work with. Their quick reference card is terse, but a handful of examples and a few pages of documentation filled in the gaps. After about 15 minutes, I’d put together this:

Sure, it’s mathematical and simple, but I think being able to put it together in 15 minutes, in a general-purpose program (i.e. not a silly ad-hoc one) I didn’t yet know how to use, shows its potential pretty well. The source is this:

startshape MAIN
background { b -1 }  // black background
rule MAIN {
   TRAIL { }
}
// 20 copies, each rotated 11 degrees, faded, and scaled down 20%
rule TRAIL {
   20 * { r 11 a -0.6 s 0.8 } COLORED { }
}
rule COLORED {
   BASE { b 0.75 sat 0.1 }
}
// four arms at 90-degree intervals
rule BASE {
   SQUARE1 { }
   SQUARE1 { r 90 }
   SQUARE1 { r 180 }
   SQUARE1 { r 270 }
}
// draw a square, then recurse: shifted, rotated 10 degrees, shrunk, hue-shifted
rule SQUARE1 {
   SQUARE { }
   SQUARE1 { h 2 sat 0.3 x 0.93 y 0.93 r 10 s 0.93 }
}

I worked with it some more the next day and came up with things like this:

I’m not sure what it is. It looks sort of like a tree made of lightning. Some Hive13 people said it looks like a lockpick from hell. The source is some variant of this:

startshape MAIN
background { b -1 }  // black background
rule MAIN {
    BRANCH { r 180 }
}
// three equally weighted variants: continue straight, fork right, or fork left
rule BRANCH 0.25 {
    box { }
    BRANCH { y -1 s 0.9 }
}
rule BRANCH 0.25 {
    box { }
    BRANCH { y -1 s 0.3 }
    BRANCH { y -1 s 0.7 r 52 }
}
rule BRANCH 0.25 {
    box { }
    BRANCH { y -1 s 0.3 }
    BRANCH { y -1 s 0.7 r -55 }
}
// one line segment with rounded ends
path box {
    LINEREL { x 0 y -1 }
    STROKE { p roundcap b 1 }
}

The program is very elegant in its simplicity, and at the same time it’s really powerful. Translating something written in Context Free into another programming language would in most cases not be difficult at all: you need just a handful of 2D drawing primitives, a couple of basic operations on color space and geometry, and the ability to recurse (and to stop recursing when it’s pointless). But that translation, though it might be capable of a lot of things Context Free can’t do on its own, would probably be a lot clumsier.
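
To make that concrete, here’s a rough sketch of what the SQUARE1 rule from the first example might look like translated by hand into Python. All the names are mine, it ignores color entirely, and it just collects geometry instead of drawing it; the point is how little machinery the translation needs:

import math

def square1(shapes, x, y, size, angle):
    """Hand translation of rule SQUARE1: record a square, then recurse with
    the child transform (translate, rotate 10 degrees, scale 0.93) composed
    into the parent's frame."""
    if size < 0.01:  # "stop recursing when it's pointless"
        return
    shapes.append((x, y, size, angle))
    rad = math.radians(angle)
    dx = dy = size * 0.93  # offset of (0.93, 0.93) in the parent's rotated frame
    square1(shapes,
            x + dx * math.cos(rad) - dy * math.sin(rad),
            y + dx * math.sin(rad) + dy * math.cos(rad),
            size * 0.93, angle + 10.0)

# rule BASE: the same spiral arm repeated at four 90-degree rotations
shapes = []
for quarter in range(4):
    square1(shapes, 0.0, 0.0, 1.0, 90.0 * quarter)
print(len(shapes), "squares, ready to hand to any 2D drawing API")

Even this toy version shows the trade: the grammar needs four short rules, and the hand translation is already fussier about transforms than Context Free ever asks you to be.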

This is basically what some of my OpenFrameworks sketches were doing in a much less disciplined way (although with the benefit of animation and GPU-accelerated primitives), but I didn’t realize that what I was doing could be expressed so easily and so compactly as a context-free grammar.

It’s appealing, though, in the same way as the functions discussed in the last post (i.e. those for procedural texturing). It’s a similarly compact representation of an image, this time a vector image rather than a spatially continuous one, which has some benefits of its own. And it’s an algorithm, so it can be parametrized. (Want to see one reason why parametrized vector things are awesome? Look at Magic Box. Or, for another well-known one, especially around Hive13, the venerable OpenSCAD.) And once it’s parametrized, animation and realtime user control are not far away, provided you can render quickly enough.

(And as @codersandy observed after reading this, POV-Ray is in much the same category too. I’m not sure if he meant it the same way I do, but POV-Ray’s scene description language is fully Turing-complete and lets you generate your whole scene procedurally if you wish, which is great; Context Free is far simpler than that, besides being 2D-only. It will be interesting to see how Structure Synth compares, given that it generates 3D scenes and has a built-in raytracer.)

My next step is probably to play around with Structure Synth (like Fragmentarium, it’s built with Qt, a library I’m actually familiar with). I might also try to create a JavaScript implementation of Context Free and conquer my total ignorance of all things JavaScript. Perhaps a realtime OpenFrameworks version is in the works too, considering this is a wheel I already tried once (and badly) to reinvent there.

Also in the queue to look at:

  • NodeBox, “a Mac OS X application that lets you create 2D visuals (static, animated or interactive) using Python programming code…”
  • jsFiddle, a sort of JavaScript/HTML/CSS sandbox for testing. (anarkavre showed me a neat sketch he put together there)
  • Paper.js, “an open source vector graphics scripting framework that runs on top of the HTML5 Canvas.”
  • Reading Generative Art by Matt Pearson, which I just picked up on a whim.


isolated-pixel-pushing

August 29th, 2011

This is a copy of what I posted to my own humble blog, as it might be of interest here.

After finally deciding to look around for some projects on GitHub, I found a number of very interesting ones in a matter of minutes.

I found Fragmentarium first. This program is like something I tried for years and years to write but never got around to putting into any real finished form. It can act as a simple testbench for GLSL fragment shaders, which I’d already realized could do exactly what I had been doing more slowly in Processing, much more slowly in Python (stuff like this, if we want to dig up things from 6 years ago), much more clunkily in C and OpenFrameworks, and so on. It took me about 30 minutes to put together the code for the usual gaudy test algorithm I try when bootstrapping in a new environment:

(Yeah, it’s gaudy. But when you see it animated, it’s amazingly trippy and mesmerizing.)

The use I’m talking about (and have reimplemented a dozen times) is just writing functions that map the 2D plane to some colorspace, often with some spatial continuity. Typically I’ll have some other parameters in there that I bind to a time variable or some user control to animate things. I don’t know of any particular term that encompasses functions like this, but I know people have used them in different forms for a long while. They’re the basis of procedural texturing (as pioneered in Ken Perlin’s “An Image Synthesizer”) as implemented in countless different forms: Nvidia Cg, GLSL, probably the RenderMan Shading Language, RTSL, POV-Ray’s extensive texturing, and Blender’s node texturing system (which I’m sure took after a dozen similar systems). Adobe Pixel Bender, which the Fragmentarium page introduced me to, does something pretty similar to different ends. Some systems such as Vvvv and Quartz Composer probably permit similar operations; I don’t know for sure.
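
As a concrete illustration (my own toy example, not anything from Fragmentarium), such a function computes each pixel purely from its own coordinates plus whatever parameters you feed it, which is exactly the shape a GLSL fragment shader wants:

import math

def pixel_color(x, y, t):
    """Map a point on the 2D plane (plus a time parameter t) to RGB in [0, 1].
    No pixel depends on any other pixel: each is computed in isolation."""
    r = 0.5 + 0.5 * math.sin(10.0 * math.hypot(x, y) - t)  # rings from the origin
    g = 0.5 + 0.5 * math.sin(8.0 * x + t)                  # vertical bands
    b = 0.5 + 0.5 * math.cos(8.0 * y - t)                  # horizontal bands
    return (r, g, b)

# The slow CPU version of what the GPU does in parallel for every fragment:
width, height = 64, 64
frame = [[pixel_color(2.0 * px / width - 1.0, 2.0 * py / height - 1.0, t=0.0)
          for px in range(width)] for py in range(height)]

Bind t to a clock or a slider and the animation mentioned below follows almost for free.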

The benefits of representing a texture (or any image) as an algorithm rather than a raster image are pretty well-known: it’s a much smaller representation; it scales well to 3 or more dimensions (particularly with noise functions like Perlin noise or simplex noise); it can have a near-unlimited level of detail; it makes things like seams and antialiasing much less of an issue; and it’s almost the ideal case for parallel computation, with built-in support in modern graphics hardware (e.g. GLSL, Cg, to some extent OpenCL). The drawback is that you usually have to find some way to express it as a function in which each pixel or texel (or voxel?) is computed in isolation from all the others. This might be clumsy, it might be horrendously slow, or it might not have any good representation in this form at all.

Also, once it’s an algorithm, you can parametrize it. If you can make it render near realtime, then animation and realtime user control follow almost for free from this, but even without that, you still have a lot of flexibility when you can change parameters.

The only thing I’m doing differently (and debatably so) is trying to make compositions out of the functions themselves, rather than using them as a means to a different end like video processing effects or texturing in a 3D scene. It also fascinated me to see these same functions animated in realtime.

However, the author of Fragmentarium (Mikael Hvidtfeldt Christensen) is doing much more interesting things with the program (i.e. rendering 3D fractals with distance estimation) than I would ever have considered. It makes sense why: his work emerged more from the context of fractals and GPU ray tracers like Amazing Boxplorer, and fractals tend to make for very interesting results.

His Syntopia Blog has some fascinating material and beautiful renders on it. His series on Distance Estimated 3D Fractals stood out to me in particular, partly because it was the first time I had encountered the technique of distance estimation for rendering a scene. He gives a good introduction with lots of other material to refer to.

Distance estimation blows my mind a little when I try to understand it. I have a decent high-level understanding of ray tracing, but this isn’t ray tracing, it’s ray marching: a distance estimator gives a lower bound on the distance from any point to the nearest surface, so each ray can safely march forward by that amount, over and over, until it either hits something or escapes. It lets complexity be emergent rather than requiring the explicit representation that a scanline renderer or ray tracer might need (while ray tracers will gladly take a functional representation of many geometric primitives, I have encountered very few cases where something like a complex fractal or an isosurface could be rendered without first approximating it as a mesh or some other shape, sometimes at great cost). Part 1 of Mikael’s series on Distance Estimated 3D Fractals links to slides showing a 4K demo built piece by piece, using distance estimation to render a pretty complex scene.
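
As I understand it, the core loop is something like this minimal sketch (mine, not Fragmentarium’s); the distance estimator here is just a unit sphere, standing in for where a fractal’s estimator would go:

import math

def estimate_distance(x, y, z):
    """Lower bound on the distance from (x, y, z) to the scene.
    Here the whole scene is a unit sphere at the origin; swapping in a
    fractal's estimator is what makes the technique interesting."""
    return math.sqrt(x * x + y * y + z * z) - 1.0

def march(ox, oy, oz, dx, dy, dz, max_steps=100, eps=1e-4, max_dist=100.0):
    """March a ray from origin (ox, oy, oz) along the unit direction
    (dx, dy, dz); return the hit distance, or None if the ray escapes."""
    t = 0.0
    for _ in range(max_steps):
        d = estimate_distance(ox + t * dx, oy + t * dy, oz + t * dz)
        if d < eps:   # close enough to the surface to call it a hit
            return t
        t += d        # the estimator guarantees this step is safe
        if t > max_dist:
            break
    return None

print(march(0.0, 0.0, -3.0, 0.0, 0.0, 1.0))  # straight at the sphere: ~2.0

The renderer never needs the geometry itself, just that one function, which is why shapes with no practical mesh representation suddenly become renderable.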

He has another, rather different program called Structure Synth, which he made following the same “design grammar” approach as Context Free. I haven’t used Structure Synth yet: Context Free was also new to me, and I wanted to spend some time learning it first. I’ll cover Structure Synth in another post.
