
Project Spotlight: Electric Motorcycle

May 23rd, 2012 7 comments


Last week, Hive13 member Rick showed off a rather impressive project: an electric motorcycle he’s building. He started it near the end of 2011, inspired by his father, who began building an electric car years back but could not complete it due to funding issues. A lot of online resources helped greatly by providing information on what people had tried, what worked, what did not, what parts they were using, and so on. (Did I really need to mention that part? This is a blog for a hackerspace.)

Whatever its stage of completion, he says he has 100+ miles on it so far and it can do 54 mph. (Update: This doesn’t mean the bike has a range of 100+ miles, but that he has ridden 100+ miles on it so far. Actual range is more like 20 miles. Sorry, Hackaday.)


The frame of the bike is a 1989 Honda VTR 250 he had around for spare parts for other bikes. To replace the Honda’s 24 HP gas motor, he used a common golf cart permanent magnet DC motor, 24 V – 72 V, driving it with a fairly standard 48 V electric controller. Four Optima Deep Cycle Yellow Top 12V batteries power it (two of them are visible in the top photo), and he added a 48 V charger to manage them. (Update: Batteries are AGM [Absorbed Glass Mat], not lead acid, for the record.)
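
A quick back-of-the-envelope check shows how a pack like this lands near the ~20-mile range mentioned above. Every figure here is my own guess for illustration, not a number from Rick:

```python
# All numbers below are assumptions - battery capacity and the bike's
# actual consumption per mile were not measured.
pack_voltage = 4 * 12          # four 12 V batteries in series: 48 V nominal
capacity_ah = 55               # assumed amp-hour rating per battery
usable_fraction = 0.5          # AGM packs shouldn't be run flat every ride
wh_per_mile = 60               # rough consumption for a light motorcycle

pack_wh = pack_voltage * capacity_ah                  # 2640 Wh on paper
range_miles = pack_wh * usable_fraction / wh_per_mile # ~22 miles
```

Which is right in the neighborhood of the stated range.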

Headlights, taillights, turn signals, and other accessories run from 12 V that a DC-DC converter provides. (I think this also included the ridiculously loud horn, strictly for safety reasons because the bike makes practically no noise otherwise. He says he’s only had to use it twice so far…)


Rick estimated the total cost at about $2800, and that was using mostly new parts with warranties rather than used ones from eBay. In addition to this, he purposely built it with a removable battery rack if he wishes to swap in a different battery type later on – he mentions LiFeMnPO4 batteries and a new controller & charger that would have added $1500 to the price, but would have increased the bike’s range and reduced its weight. (He put its current weight at about 400 lb total.)

Actual problems seemed pretty minimal. He made a mistake in the math when choosing the motor’s drive sprocket; a recent change in this brought the top speed from 35 up to 54 mph. The website from which he bought the battery charger advertised it as weatherproof, and he discovered it was not. He has some concerns about the motor getting too hot on hills or longer runs, and intends to add a temperature gauge and a fan.
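
The sprocket math is simple enough to sanity-check. Here's a sketch with made-up tooth counts, motor speed, and wheel size (I don't know Rick's actual numbers), showing how a drive-sprocket change alone can move the top speed from the mid-30s to the mid-50s:

```python
import math

def top_speed_mph(motor_rpm, drive_teeth, wheel_teeth, wheel_dia_in):
    # One chain reduction: the wheel turns drive_teeth/wheel_teeth
    # times for every motor revolution.
    wheel_rpm = motor_rpm * drive_teeth / wheel_teeth
    wheel_circ_ft = math.pi * wheel_dia_in / 12
    return wheel_rpm * wheel_circ_ft * 60 / 5280   # ft/min -> mph

# hypothetical: 3400 RPM motor, 24" wheel, 70T rear sprocket
before = top_speed_mph(3400, 10, 70, 24)   # ~35 mph with a 10T drive sprocket
after = top_speed_mph(3400, 16, 70, 24)    # ~55 mph after going to 16T
```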


I also felt I had to ask a token stupid question, “What is it like to ride a motorcycle where you can’t rev the engine?” but received a fairly serious answer: “It’s weird not having a gas motor. The rev thing isn’t as weird as not having any kind of engine brake when you’re going down hills.”

The Flickr album of pictures is here (first 5 are courtesy of Dave Myers; remainder are my own until the last 5, which are Rick’s). Rick also provided some photos and videos of the initial tear-down, assembly, and first rides:


(Disclaimer: Neither Hive13 nor Rick advocates riding without protective gear. The only reason this is absent in the pictures is that these were very short test runs.)


Glass Block LED Matrix, controlled outdoors via tablet

October 13th, 2011 8 comments


Short version: Using an Android tablet, you can draw things on our Glass Block LED Matrix from the street, and it’s pretty awesome. Video here, photos here.

Long version:

Things have progressed recently on the Glass Block LED Matrix which Chris Davis and Paul Vincent started. For a couple weeks, the code was already in place to let Processing talk to it via simple serial commands to the Arduino & ShiftBrite shield. We wanted to use the tools from Project Blinkenlights to control things over the network; while this didn’t entirely work as planned, the project offered a lot of ideas and inspiration.

The most recent addition I made was the inclusion of oscP5 to the Processing sketch to let it listen for OSC (Open Sound Control) messages. As it happens, a brilliant piece of free software already exists (Control from Charlie Roberts) which turns Android/iOS devices into control surfaces that send out OSC messages. On top of this, Control comes with a handful of example UIs, one of them being “Multibutton Demo” which provides a UI with an 8×8 button grid, sort of like a monome. (The tablet in all of the photos is running Control with that Multibutton Demo UI.)

As our LED matrix is 7×8, this UI was a good initial match. I set Control’s destination URL/port to the backend machine that was running Processing, set the sketch to parse the pretty simple OSC messages Control would send out at every button toggle, and then I was able to control what was on the LED matrix by drawing on that 8×8 grid on my tablet.
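
For anyone curious what the parsing side involves, here's a rough Python sketch of the idea. The real sketch is in Processing via oscP5, and the address format below is a guess, not necessarily what Control actually sends:

```python
ROWS, COLS = 7, 8                      # our matrix; Control's grid is 8x8

def handle_osc(address, value, matrix):
    # Map a button-toggle message like ('/multiButton/12', 1.0) onto
    # the LED matrix. The address format here is hypothetical - adjust
    # it to whatever Control really emits.
    index = int(address.rsplit('/', 1)[-1])
    row, col = divmod(index, 8)        # 8 buttons per tablet row
    if row < ROWS:                     # the grid's 8th row has no LEDs
        matrix[row][col] = int(value)

matrix = [[0] * COLS for _ in range(ROWS)]
handle_osc('/multiButton/12', 1.0, matrix)   # toggles row 1, column 4 on
handle_osc('/multiButton/60', 1.0, matrix)   # falls off the 7x8 matrix: ignored
```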

I finally got to show it off outside on Tuesday evening when it was dark, and it’s working pretty well, as the video shows.

Next steps:

  • Making a Control UI that allows for color control. These are RGB LEDs, after all – we can control intensity and color, not just whether they’re on or off.
  • Making this web-enabled. I think Control allows this?
  • Fixing the glitchiness that I didn’t show in the video; something cryptic is going on on the Arduino side.

Check out the github project here and the project wiki page here.


Weekly meetup for “Intro to AI”

October 5th, 2011 No comments

The free online version of the “Introduction to Artificial Intelligence” course (based on Stanford CS221) starts next week. As several members signed up for this course, we are going to be meeting up at the Hive on a weekly basis to watch the lectures and work through some of the homework. We’ll meet first on Wednesday, October 12th, at 6:30 PM, and the intention is to keep meeting each Wednesday at the same time for the 10 weeks that the course lasts (course schedule is here – it runs through the week of December 12th).

Lectures are supposed to appear online each week, and I’m told they’ll also be downloadable. I will try to download the videos ahead of time and have copies of them locally. Other than that, we’re just going to play things by ear – I’ve never done a course like this before, perhaps because no course I’m aware of has been done like this before. Something you can program on (i.e. a computer) might be helpful to have. Quoth their website, “Programming is not required, however we believe it will be very helpful for some of the homework assignments. You may write code in any language you would like to (we recommend Python if you are new to programming) and your code will not be graded.” A textbook is not required but is supposedly helpful, and I think some people said they had older editions they could bring in.

If you’re taking the course, or you want to take it, please join us. You don’t need to be a Hive13 member. You don’t even need to be enrolled in the online course, for that matter – but it’s free and enrollment is open until October 9th, so you may as well.


Galileo update & DIY solder mask

September 27th, 2011 9 comments

The Galileo project is progressing, and the next step is a board to drive its many many LEDs.


Center column of Galileo display

Dave B. put together a schematic and board in EAGLE for this purpose, based around the TI TLC5940, a 16-channel LED driver, and had the boards made at DorkbotPDX. Schematic is here, board layout here.


Unsoldered PCB

(Sorry for the light fogging. I really should just get a real macro lens instead of putting an old Series E 50mm on extension tubes.)

Paul and Jim then used the laser cutter to make a solder mask out of acetate film (i.e. garden-variety transparencies). The cream layer in EAGLE provided the mask to cut, and here’s the stencil created from that:

Here are two of the attempts to laser-cut this stencil from acetate sheets. The imperfection on the top one (see the right of the two holes on the bottom left) came from etching the edges of the hole rather than rastering it, supposedly; the bottom one came out a bit better.


Less-successful stencil


More successful cut

Here’s one finished board, having had solder paste applied through the stencil and reflowed on a cheap electric skillet. It looks good aside from a solder bridge at the chip’s 2nd- and 3rd-to-last pins:


Soldered PCB

Keep following the Wiki page - the project progresses pretty regularly and Jim updates the page.


Long overdue photos from Mu, LitterBoxFriends, & Thriftsore Boratorium

September 14th, 2011 No comments



I took forever to post this item, but here it finally is. The Hive had an experimental music show (the post about it is here) months back, and Douglas Lucas kindly shared his photos from the event.

Here is LitterBoxFriends:







Here is Mu (and the aforementioned Douglas Lucas):




And here is Thriftsore Boratorium:




Context Free

August 29th, 2011 No comments

This is another copy of another post from my own blog, a continuation of the last… Also, fans of the laser cutter, note that Context Free happily generates SVG files just as easily as it does raster images.

My last post mentioned a program called Context Free that I came across via the Syntopia blog as his program Structure Synth was modeled after it. (Also, Make Magazine mentioned it in Issue 17.)

I’ve heard of context-free grammars before but my understanding of them is pretty vague. This program is based around them and the documentation expresses their limitations; what I grasped from this is that no entity can have any “awareness” of the context in which it’s drawn, i.e. any part of the rest of the scene or even where in the scene it is. A perusal of the site’s gallery shows how much those limitations don’t really matter.

I downloaded the program, started it, and their welcome image (with the relatively short source code right beside it) greeted me, rendered on-the-spot:

The program was very easy to work with. Their quick reference card was terse but only needed a handful of examples and a few pages of documentation to fill in the gaps. After about 15 minutes, I’d put together this:

Sure, it’s mathematical and simple, but I think being able to put it together in 15 minutes in a general program (i.e. not a silly ad-hoc program) that I didn’t know how to use shows its potential pretty well. The source is this:

startshape MAIN
background { b -1 }
rule MAIN {
   TRAIL { }
}
rule TRAIL {
   20 * { r 11 a -0.6 s 0.8 } COLORED { }
}
rule COLORED {
   BASE { b 0.75 sat 0.1 }
}
rule BASE {
   SQUARE1 { }
   SQUARE1 { r 90 }
   SQUARE1 { r 180 }
   SQUARE1 { r 270 }
}
rule SQUARE1 {
   SQUARE { }
   SQUARE1 { h 2 sat 0.3 x 0.93 y 0.93 r 10 s 0.93 }
}

I worked with it some more the next day and had some things like this:

I’m not sure what it is. It looks sort of like a tree made of lightning. Some Hive13 people said it looks like a lockpick from hell. The source is some variant of this:

startshape MAIN
background { b -1 }
rule MAIN {
    BRANCH { r 180 }
}
rule BRANCH 0.25 {
    box { }
    BRANCH { y -1 s 0.9 }
}
rule BRANCH 0.25 {
    box { }
    BRANCH { y -1 s 0.3 }
    BRANCH { y -1 s 0.7 r 52 }
}
rule BRANCH 0.25 {
    box { }
    BRANCH { y -1 s 0.3 }
    BRANCH { y -1 s 0.7 r -55 }
}
path box {
    LINEREL { x 0 y -1 }
    STROKE { p roundcap b 1 }
}

The program is very elegant in its simplicity. At the same time, it’s a really powerful program. Translating something written in Context Free into another programming language would in most cases not be difficult at all – you need just a handful of 2D drawing primitives, a couple basic operations for color space and geometry, the ability to recurse (and to stop recursing when it’s pointless). But that representation, though it might be capable of a lot of things that Context Free can’t do on its own, probably would be a lot clumsier.
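
As a concrete illustration of how little is needed, here's a rough Python translation of the SQUARE1 spiral rule from the earlier Context Free listing. No drawing, just the recursion and the accumulated transform; the geometry is deliberately simplified compared to Context Free's real transform composition:

```python
import math

def square1(shapes, x=0.0, y=0.0, angle=0.0, scale=1.0):
    # Emit one square, then recurse with the SQUARE1 transform
    # { x 0.93 y 0.93 r 10 s 0.93 }, stopping once shapes get
    # too small to see - just as Context Free does.
    if scale < 0.01:
        return
    shapes.append((x, y, angle, scale))
    rad = math.radians(angle)
    dx = dy = 0.93 * scale             # offsets live in the current frame
    x += dx * math.cos(rad) - dy * math.sin(rad)
    y += dx * math.sin(rad) + dy * math.cos(rad)
    square1(shapes, x, y, angle + 10, scale * 0.93)

shapes = []
square1(shapes)                        # a few dozen shrinking, rotating squares
```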

This is basically what some of my OpenFrameworks sketches were doing in a much less disciplined way (although with the benefit of animation and GPU-accelerated primitives) but I didn’t realize that what I was doing could be expressed so easily and so compactly in a context-free grammar.

It’s appealing, though, in the same way as the functions discussed in the last post (i.e. those for procedural texturing). It’s a similarly compact representation of an image – this time, a vector image rather than a spatially continuous image, which has some benefits of its own. It’s an algorithm – so now it can be parametrized. (Want to see one reason why parametrized vector things are awesome? Look at Magic Box. [Or, for another well-known one, especially in Hive13, the venerable OpenSCAD.]) And once it’s parametrized, animation and realtime user control are not far away, provided you can render quickly enough.

(And as @codersandy observed after reading this, POV-Ray is in much the same category too. I’m not sure if he meant it in the same way I do, but POV-Ray is a fully Turing-complete language and it permits you to generate your whole scene procedurally if you wish, which is great – but Context Free is indeed far simpler than this, besides only being 2D. It will be interesting to see how Structure Synth compares, given that it generates 3D scenes and has a built-in raytracer.)

My next step is probably to play around with Structure Synth (and like Fragmentarium it’s built with Qt, a library I actually am familiar with). I also might try to create a JavaScript implementation of Context Free and conquer my total ignorance of all things JavaScript. Perhaps a realtime OpenFrameworks version is in the works too, considering this is a wheel I already tried to reinvent once (and badly) in OpenFrameworks.

Also in the queue to look at:

  • NodeBox, “a Mac OS X application that lets you create 2D visuals (static, animated or interactive) using Python programming code…”
  • jsfiddle, a sort of JavaScript/HTML/CSS sandbox for testing. (anarkavre showed me a neat sketch he put together here)
  • Paper.js, “an open source vector graphics scripting framework that runs on top of the HTML5 Canvas.”
  • Reading Generative Art by Matt Pearson, which I just picked up on a whim.



August 29th, 2011 No comments

This is a copy of what I posted to my own humble blog as it might be of interest here.

After finally deciding to look around for some projects on github, I found a number of very interesting ones in a matter of minutes.

I found Fragmentarium first. This program is like something I tried for years and years to write, but just never got around to putting in any real finished form. It can act as a simple testbench for GLSL fragment shaders, which I’d already realized could be used to do exactly what I was doing more slowly in Processing, much more slowly in Python (stuff like this if we want to dig up things from 6 years ago), much more clunkily in C and OpenFrameworks, and so on. It took me probably about 30 minutes to put together the code to generate the usual gaudy test algorithm I try when bootstrapping from a new environment:

(Yeah, it’s gaudy. But when you see it animated, it’s amazingly trippy and mesmerizing.)

The use I’m talking about (and that I’ve reimplemented a dozen times) was just writing functions that map the 2D plane to some colorspace, often with some spatial continuity. Typically I’ll have some other parameters in there that I’ll bind to a time variable or some user control to animate things. So far I don’t know any particular term that encompasses functions like this, but I know people have used it in different forms for a long while. It’s the basis of procedural texturing (as pioneered in An image synthesizer by Ken Perlin) as implemented in countless different forms like Nvidia Cg, GLSL, probably Renderman Shading Language, RTSL, POV-Ray’s extensive texturing, and Blender’s node texturing system (which I’m sure took after a dozen other similar systems). Adobe Pixel Bender, which the Fragmentarium page introduced to me for the first time, does something pretty similar but to different ends. Some systems such as Vvvv and Quartz Composer probably permit some similar operations; I don’t know for sure.
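
A minimal example of the kind of function I mean, with every name my own invention and the formula deliberately trivial:

```python
import math

def pixel(x, y, t=0.0):
    # Map a point on the plane (plus a time parameter for animation)
    # to an RGB triple in [0, 1] - the same shape of computation a
    # fragment shader performs per-pixel, only much slower here.
    r = 0.5 + 0.5 * math.sin(10 * x + t)
    g = 0.5 + 0.5 * math.sin(10 * y + 2 * t)
    b = 0.5 + 0.5 * math.sin(10 * (x + y) - t)
    return (r, g, b)

# Each pixel depends only on its own coordinates, so a whole frame is
# trivially parallel - exactly why GPUs eat this up.
frame = [[pixel(x / 32, y / 32) for x in range(32)] for y in range(32)]
```

Bind `t` to a clock or a slider and the animation and user control mentioned below come along for free.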

The benefits of representing a texture (or whatever image) as an algorithm rather than a raster image are pretty well-known: It’s a much smaller representation, it scales pretty well to 3 or more dimensions (particularly with noise functions like Perlin Noise or Simplex Noise), it can have a near-unlimited level of detail, it makes things like seams and antialiasing much less of an issue, and it is almost the ideal case for parallel computation, which modern graphics hardware supports directly (e.g. GLSL, Cg, to some extent OpenCL). The drawback is that you usually have to find some way to represent this as a function in which each pixel or texel (or voxel?) is computed in isolation from all the others. This might be clumsy, it might be horrendously slow, or it might not have any good representation in this form.

Also, once it’s an algorithm, you can parametrize it. If you can make it render near realtime, then animation and realtime user control follow almost for free from this, but even without that, you still have a lot of flexibility when you can change parameters.

The only thing different (and debatably so) that I’m doing is trying to make compositions with just the functions themselves rather than using them as means to a different end, like video processing effects or texturing in a 3D scene. It also fascinated me to see these same functions animated in realtime.

However, the author of Fragmentarium (Mikael Hvidtfeldt Christensen) is doing much more interesting things with the program (i.e. rendering 3D fractals with distance estimation) than I would ever have considered doing. It makes sense why – his program emerged more from the context of fractals and ray tracers on the GPU, like Amazing Boxplorer, and fractals tend to make for very interesting results.

His Syntopia Blog has some fascinating material and beautiful renders on it. His posts on Distance Estimated 3D Fractals were particularly fascinating to me – in part because this was the first time I had encountered the technique of distance estimation for rendering a scene. He gave a good introduction with lots of other material to refer to.

Distance Estimation blows my mind a little when I try to understand it. I have a decent high-level understanding of ray tracing, but this is not ray tracing, it’s ray marching. It lets complexity be emergent rather than needing an explicit representation as a scanline renderer or ray tracer might require (while ray tracers will gladly take a functional representation of many geometric primitives, I have encountered very few cases where something like a complex fractal or an isosurface could be rendered without first approximating it as a mesh or some other shape, sometimes at great cost). Part 1 of Mikael’s series on Distance Estimated 3D Fractals links to these slides which show a 4K demo built piece-by-piece using distance estimation to render a pretty complex scene.
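
The core loop is surprisingly small. Here's a toy sphere-tracer in Python for a single sphere of my own devising; a distance-estimated fractal just swaps in a cleverer `de` function:

```python
import math

def sphere_de(p, center=(0.0, 0.0, 5.0), radius=1.0):
    # Distance estimator: a lower bound on the distance from p to the
    # surface, i.e. how far a ray can safely step without overshooting.
    return math.dist(p, center) - radius

def march(origin, direction, de, max_steps=100, eps=1e-4):
    # Ray marching by sphere tracing: repeatedly step forward by the
    # estimated distance until we're within eps of the surface.
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        dist = de(p)
        if dist < eps:
            return t               # hit: distance along the ray
        t += dist
    return None                    # ray escaped without hitting anything

hit = march((0, 0, 0), (0, 0, 1), sphere_de)    # head-on: hits at t = 4
miss = march((0, 0, 0), (0, 1, 0), sphere_de)   # sideways: never hits
```

Complexity is emergent because the renderer only ever asks the scene one question: "how far away are you from this point?"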

He has another rather different program called Structure Synth which he made following the same “design grammar” approach of Context Free. I haven’t used Structure Synth yet, because Context Free was also new to me and I was first spending some time learning to use that. I’ll cover this in another post.


I don’t always eat cereal, but when I do, I make business cards from the box.

July 24th, 2011 3 comments

This idea was shamelessly lifted from another space after Craig showed us a business card from them, cut/etched from cereal-box cardboard. I think the space in question is in Hawaii. (Update: That space is Maui Makers and Jerry Isdale in particular. Look to the first comment – he posted a link to the card he made that was the inspiration for this.)


Anyhow, no one had yet engraved a bitmap successfully with the laser cutter, so I set out trying to do this (more or less coincidentally – I couldn’t figure out how to export a filled shape from Inkscape in a vector format LaserCut would grok, so I rasterized it).

I tried first on the corrugated cardboard that we have a near-infinite supply of. However, this didn’t engrave well for me – its top layer is too thin and once you’ve burned parts of it off you have only the sparse ridges holding the other parts of it on. Maybe someone else will have better luck with less power. (This is not the first corrugated cardboard issue we’ve had…)

Cereal cardboard, interestingly, both cuts and engraves really well (though in the following photo, I set the power far too high and it visibly burned through). I would have preferred to etch from a vector logo, but it seems easier to get different shades if you start from a raster image and Floyd-Steinberg dither it to a halftone monochrome image, as LaserCut requires monochrome.
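
For the curious, Floyd–Steinberg itself is only a few lines. This is my own generic sketch of the algorithm, not what LaserCut or my actual toolchain does internally:

```python
def floyd_steinberg(gray, width, height):
    # Dither a grayscale image (floats 0..1, row-major lists) down to
    # 1-bit, pushing each pixel's quantization error onto its unvisited
    # neighbours in the classic 7/16, 3/16, 5/16, 1/16 pattern.
    img = [row[:] for row in gray]
    out = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            new = 1 if img[y][x] >= 0.5 else 0
            err = img[y][x] - new
            out[y][x] = new
            if x + 1 < width:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < height:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < width:
                    img[y + 1][x + 1] += err * 1 / 16
    return out

# a flat 25% gray should come out roughly one-quarter white pixels
halftone = floyd_steinberg([[0.25] * 8 for _ in range(8)], 8, 8)
```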


The LaserCut file is here. If anyone wants to make a better-designed variant, please do – I consider this to be just a draft. A faster version might also be good. This one is around 7 – 8 minutes per card, but the speed probably could be cranked up a bit.

P.S. I suffered a cereal-induced sugar headache in the process of making these business cards. You all better be nice to me.


Custom bokeh with laser cutter

July 12th, 2011 4 comments

This is an idea cjdavis had mentioned some weeks back: Using the laser cutter to cut out custom filters that mount to a camera’s lens and create custom bokeh shapes. I finally tried it yesterday.

We had on hand a large pile of little card-stock rectangles salvaged from the garbage; we thought they were blanks for playing cards, and we had no use for them. However, I discovered that our laser cutter can cut them very quickly, and they are large enough that a 52mm circle fits inside (which matters because 52mm is the filter size of all my lenses).

My first one looked something like this:

…and it fit perfectly inside my 18-55mm lens (perhaps a little too perfectly, because it was sort of a pain to remove…). Here are a couple test images I shot:
I could have centered it better, and I still should calibrate the size a bit, but I’m impressed with how it turned out for something that took all of 5 seconds to cut.
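
On calibrating the size: the cutout only reshapes the bokeh if it's smaller than the lens's own entrance pupil, which is roughly the focal length divided by the f-number. This is a standard optics rule of thumb, not something measured on these particular lenses:

```python
def entrance_pupil_mm(focal_length_mm, f_number):
    # The aperture the cutout competes with; a shape larger than this
    # just vignettes the frame instead of reshaping the highlights.
    return focal_length_mm / f_number

prime = entrance_pupil_mm(35, 1.8)    # ~19 mm on a 35mm f/1.8
kit = entrance_pupil_mm(55, 5.6)      # ~10 mm on an 18-55 kit lens, long end
```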

Here’s one with another pattern (this time on my 35mm f/1.8):

Full Flickr album is here. I can have SVGs or DXFs up if anyone asks, but really, these patterns are dead-simple to put together by hand in Inkscape or something similar.
