Archive

Archive for the ‘Meta’ Category

HIVE13 has procured a brand-new MIG welder

July 10th, 2013 4 comments

It’s like Christmas in July!  The HIVE’s new MIG welder has arrived.  DaveB trucked it over and set it up on the cart just today.

HIVE13 is the proud owner of  a brand-new Millermatic 211 Auto-Set w/MVP (link) with the M-100 Gun (link).

This is serious equipment.  Electric shock can kill.  Hot parts can burn.  Fumes and gases can be hazardous.  Arc rays can burn eyes and skin.  Welding can cause fire or explosion.  Flying metal or dirt can injure eyes.  Build-up of gas can injure or kill.  Electric and magnetic fields (EMF) can affect implanted medical devices.  Noise can damage hearing.  Cylinders can explode if damaged, etc.   Only qualified persons should install, operate, maintain, and repair this unit.

Does this new tool sound like as much fun to you as it does to us?

Stay tuned and drop by to watch progress as the experienced HIVE welders and prudently cautious implementers prepare to let eager newbies learn to weld safely, with appropriate precautions.

HIVE13 is the place to join to learn new skills and use new equipment to make things.


Categories: Meta, Press Release, Project, Tools, Uncategorized Tags:

Cincinnati Mini Maker Faire – Saturday, October 19, Noon-10pm

May 29th, 2013 No comments

It’s official.  Here’s your chance to get your INNER GEEK ON and ROCK the ‘NATI.  Hive13 is collaborating with organizer Jason Langdon (associate creative director at Possible) and the West Side art collective Broadhope to host the first annual Cincinnati Mini Maker Faire on Saturday, October 19, noon to 10pm.

Get involved by becoming a volunteer (you may get a cool event STAFF T-shirt for your efforts) and/or sign up to have a MAKER display to show off your creations.

First event publicity on SoapBox (link)

Call for Makers (link) Hurry, the deadline is July 1.

Cincinnati’s  Washington Park venue (link)

Cincinnati Mini Maker Faire Social Media sites:

Facebook (link) This will be maintained as the most active and up-to-date info page.

Google Plus (link)  Key milestones and big announcements are published here.

Twitter (link)  Key milestones and big announcements are posted here as well.

Stay tuned for further details and announcements.


Categories: Events, Meta, Press Release, Uncategorized Tags:

LED table placecards

October 15th, 2011 2 comments

Inspired by this Instructables post, a friend of mine decided to create something similar for his wedding.

The bases were a bit more advanced than the suggested design, and all were created by a friend of the groom. Each consists of a block of wood, a battery holder, and 3 red LEDs.

[Photos: base top, base bottom, and base lit]

The plates were etched and cut on our laser over a period of about 15 hours – roughly one hour per plate, with 12 plates used at the wedding and 3 more spent perfecting the process. This video shows a time lapse of the process.  We had some trouble initially with clouding on the plates, especially around the letter “o”.  We fixed this by adjusting the power and speed of the laser and refining our post-etching cleaning process.

After the plates were etched and cut, they were soaked in water and Simple Green for about 30 seconds, then wiped off with a microfiber cloth.

The cards were then set up at the reception hall prior to the wedding, and remained lit throughout the evening.

[Photo: the finished cards set up on a table]

The wedding party:
[Photo: the wedding party’s cards]
We didn’t have the names of all of the dates guests were bringing, so some people got their very own +1.

[Photo: a “+1” place card]

 

And of course we had to create a bonus plate for the Hive:
[Photo: the Hive13 card]

The bride and groom were pleased with the results, as were we.  While it was a lot of work, the project resulted in a unique keepsake for each wedding guest and the wedding party.

 

 


Categories: Meta, Project Tags:

Context Free

August 29th, 2011 No comments

This is another post copied over from my own blog, a continuation of the last one… Also, fans of the laser cutter, note that Context Free happily generates SVG files just as easily as it does raster images.

My last post mentioned a program called Context Free, which I came across via the Syntopia blog because his program Structure Synth was modeled after it. (Also, Make Magazine mentioned it in Issue 17.)

I’d heard of context-free grammars before, but my understanding of them is pretty vague. This program is based around them, and the documentation explains their limitations; what I grasped from this is that no entity can have any “awareness” of the context in which it’s drawn, i.e. any part of the rest of the scene or even where in the scene it is. A perusal of the site’s gallery shows how little those limitations really matter.

I downloaded the program, started it, and their welcome image (with the relatively short source code right beside it) greeted me, rendered on-the-spot:

The program was very easy to work with. Their quick reference card was terse, but a handful of examples and a few pages of documentation filled in the gaps. After about 15 minutes, I’d put together this:

Sure, it’s mathematical and simple, but I think that being able to put it together in 15 minutes, in a general program (i.e. not a silly ad-hoc program) that I didn’t know how to use, shows its potential pretty well. The source is this:

startshape MAIN
background { b -1 }
rule MAIN {
   TRAIL { }
}
rule TRAIL {
   20 * { r 11 a -0.6 s 0.8 } COLORED { }
}
rule COLORED {
   BASE { b 0.75 sat 0.1 }
}
rule BASE {
   SQUARE1 { }
   SQUARE1 { r 90 }
   SQUARE1 { r 180 }
   SQUARE1 { r 270 }
}
rule SQUARE1 {
   SQUARE { }
   SQUARE1 { h 2 sat 0.3 x 0.93 y 0.93 r 10 s 0.93 }
}

I worked with it some more the next day and had some things like this:

I’m not sure what it is. It looks sort of like a tree made of lightning. Some Hive13 people said it looks like a lockpick from hell. The source is some variant of this:

startshape MAIN
background { b -1 }
rule MAIN {
    BRANCH { r 180 }
}
rule BRANCH 0.25 {
    box { }
    BRANCH { y -1 s 0.9 }
}
rule BRANCH 0.25 {
    box { }
    BRANCH { y -1 s 0.3 }
    BRANCH { y -1 s 0.7  r 52 }
}
rule BRANCH 0.25 {
    box { }
    BRANCH { y -1 s 0.3 }
    BRANCH { y -1 s 0.7  r -55 }
}
path box {
    LINEREL{x 0 y -1}
    STROKE{p roundcap b 1 }
}

The program is very elegant in its simplicity. At the same time, it’s a really powerful program. Translating something written in Context Free into another programming language would in most cases not be difficult at all – you need just a handful of 2D drawing primitives, a couple of basic operations for color space and geometry, and the ability to recurse (and to stop recursing when it’s pointless). But that representation, though it might be capable of a lot of things that Context Free can’t do on its own, would probably be a lot clumsier.
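As a rough sketch of what I mean (the helper names and the SVG output here are arbitrary choices of mine, not anything the program itself dictates), the BRANCH grammar above might translate into plain Python something like this:

import math
import random

lines = []  # collected (x1, y1, x2, y2) segments

def branch(x, y, angle, scale, depth=0):
    # Analogue of the BRANCH rule: draw one segment, then pick one of the
    # three continuations; the weights are all 0.25, so each body is
    # equally likely once they're normalized.
    if scale < 0.01 or depth > 400:   # stop recursing when it's pointless
        return
    # the "box" path: a line of the current length in the current direction
    x2 = x + scale * math.sin(math.radians(angle))
    y2 = y + scale * math.cos(math.radians(angle))
    lines.append((x, y, x2, y2))
    choice = random.random()
    if choice < 1.0 / 3:
        branch(x2, y2, angle, scale * 0.9, depth + 1)
    elif choice < 2.0 / 3:
        branch(x2, y2, angle, scale * 0.3, depth + 1)
        branch(x2, y2, angle + 52, scale * 0.7, depth + 1)
    else:
        branch(x2, y2, angle, scale * 0.3, depth + 1)
        branch(x2, y2, angle - 55, scale * 0.7, depth + 1)

branch(0, 0, 0, 1.0)
with open("branch.svg", "w") as f:
    f.write('<svg xmlns="http://www.w3.org/2000/svg" viewBox="-10 -2 20 14">\n')
    for x1, y1, x2, y2 in lines:
        f.write('<line x1="%f" y1="%f" x2="%f" y2="%f" '
                'stroke="black" stroke-width="0.02" stroke-linecap="round"/>\n'
                % (x1, y1, x2, y2))
    f.write('</svg>\n')

Even that is noticeably wordier than the Context Free version above, which is really the point.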

This is basically what some of my OpenFrameworks sketches were doing in a much less disciplined way (although with the benefit of animation and GPU-accelerated primitives) but I didn’t realize that what I was doing could be expressed so easily and so compactly in a context-free grammar.

It’s appealing, though, in the same way as the functions discussed in the last post (i.e. those for procedural texturing). It’s a similarly compact representation of an image – this time, a vector image rather than a spatially continuous image, which has some benefits of its own. It’s an algorithm – so now it can be parametrized. (Want to see one reason why parametrized vector things are awesome? Look at Magic Box. [Or, for another well-known one, especially in Hive13, the venerable OpenSCAD.]) And once it’s parametrized, animation and realtime user control are not far away, provided you can render quickly enough.

(And as @codersandy observed after reading this, POV-Ray is in much the same category too. I’m not sure if he meant it in the same way I do, but POV-Ray is a fully Turing-complete language and it permits you to generate your whole scene procedurally if you wish, which is great – but Context Free is indeed far simpler than this, besides only being 2D. It will be interesting to see how Structure Synth compares, given that it generates 3D scenes and has a built-in raytracer.)

My next step is probably to play around with Structure Synth (and like Fragmentarium it’s built with Qt, a library I actually am familiar with). I also might try to create a JavaScript implementation of Context Free and conquer my total ignorance of all things JavaScript. Perhaps a realtime OpenFrameworks version is in the works too, considering this is a wheel I already tried to reinvent once (and badly) in OpenFrameworks.

Also in the queue to look at:

  • NodeBox, “a Mac OS X application that lets you create 2D visuals (static, animated or interactive) using Python programming code…”
  • jsfiddle, a sort of JavaScript/HTML/CSS sandbox for testing. (anarkavre showed me a neat sketch he put together here)
  • Paper.js, “an open source vector graphics scripting framework that runs on top of the HTML5 Canvas.”
  • Reading Generative Art by Matt Pearson, which I just picked up on a whim.


Categories: Meta Tags:

isolated-pixel-pushing

August 29th, 2011 No comments

This is a copy of what I posted to my own humble blog as it might be of interest here.

After finally deciding to look around for some projects on github, I found a number of very interesting ones in a matter of minutes.

I found Fragmentarium first. This program is like something I tried for years and years to write, but just never got around to putting into any real finished form. It can act as a simple testbench for GLSL fragment shaders, which I’d already realized could be used to do exactly what I was doing more slowly in Processing, much more slowly in Python (stuff like this, if we want to dig up things from 6 years ago), much more clunkily in C and OpenFrameworks, and so on. It took me probably about 30 minutes to put together the code to generate the usual gaudy test algorithm I try when bootstrapping in a new environment:

(Yeah, it’s gaudy. But when you see it animated, it’s amazingly trippy and mesmerizing.)

The use I’m talking about (and that I’ve reimplemented a dozen times) was just writing functions that map the 2D plane to some colorspace, often with some spatial continuity. Typically I’ll have some other parameters in there that I’ll bind to a time variable or some user control to animate things. So far I don’t know any particular term that encompasses functions like this, but I know people have used it in different forms for a long while. It’s the basis of procedural texturing (as pioneered in An image synthesizer by Ken Perlin) as implemented in countless different forms like Nvidia Cg, GLSL, probably Renderman Shading Language, RTSL, POV-Ray’s extensive texturing, and Blender’s node texturing system (which I’m sure took after a dozen other similar systems). Adobe Pixel Bender, which the Fragmentarium page introduced to me for the first time, does something pretty similar but to different ends. Some systems such as Vvvv and Quartz Composer probably permit some similar operations; I don’t know for sure.
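For a toy example of what I mean, one of those functions might look something like this in plain Python (one of the slow routes I mentioned) – every pixel evaluated in isolation, then dumped to a PPM file; the particular formula is just something made up:

import math

WIDTH, HEIGHT = 320, 240

def color_at(x, y, t):
    # Map a point in [-1, 1] x [-1, 1] plus a time parameter to an RGB triple.
    r = 0.5 + 0.5 * math.sin(10.0 * x + t)
    g = 0.5 + 0.5 * math.sin(10.0 * y + 2.0 * t)
    b = 0.5 + 0.5 * math.sin(10.0 * math.hypot(x, y) - t)
    return (int(255 * r), int(255 * g), int(255 * b))

def render(t, filename):
    # Evaluate color_at() for every pixel and write a plain-text PPM image.
    with open(filename, "w") as f:
        f.write("P3\n%d %d\n255\n" % (WIDTH, HEIGHT))
        for j in range(HEIGHT):
            for i in range(WIDTH):
                x = 2.0 * i / WIDTH - 1.0    # map pixel coords to [-1, 1]
                y = 2.0 * j / HEIGHT - 1.0
                f.write("%d %d %d\n" % color_at(x, y, t))

render(0.0, "frame0.ppm")   # vary t from frame to frame to animate

Vary t over successive frames and you get animation; bind it to user input and you get realtime control – provided you can evaluate it fast enough, which is exactly where fragment shaders come in.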

The benefits of representing a texture (or whatever image) as an algorithm rather than a raster image are pretty well-known: it’s a much smaller representation; it scales pretty well to 3 or more dimensions (particularly with noise functions like Perlin Noise or Simplex Noise); it can have a near-unlimited level of detail; it makes things like seams and antialiasing much less of an issue; and it is almost the ideal case for parallel computation, for which modern graphics hardware has built-in support (e.g. GLSL, Cg, to some extent OpenCL). The drawback is that you usually have to find some way to represent this as a function in which each pixel or texel (or voxel?) is computed in isolation from all the others. This might be clumsy, it might be horrendously slow, or it might not have any good representation in this form.

Also, once it’s an algorithm, you can parametrize it. If you can make it render near realtime, then animation and realtime user control follow almost for free from this, but even without that, you still have a lot of flexibility when you can change parameters.

The only thing different (and debatably so) about what I’m doing is trying to make compositions out of just the functions themselves, rather than using them as a means to a different end, like video processing effects or texturing in a 3D scene. It also fascinated me to see these same functions animated in realtime.

However, the author of Fragmentarium (Mikael Hvidtfeldt Christensen) is doing much more interesting things with the program (i.e. rendering 3D fractals with distance estimation) than I would ever have considered doing. It makes sense why – his work emerged more from the context of fractals and ray tracers on the GPU, like Amazing Boxplorer, and fractals tend to make for very interesting results.

His Syntopia Blog has some fascinating material and beautiful renders on it. His posts on Distance Estimated 3D Fractals were particularly fascinating to me – in part because this was the first time I had encountered the technique of distance estimation for rendering a scene. He gave a good introduction with lots of other material to refer to.

Distance Estimation blows my mind a little when I try to understand it. I have a decent high-level understanding of ray tracing, but this is not ray tracing, it’s ray marching. It lets complexity be emergent rather than needing an explicit representation as a scanline renderer or ray tracer might require (while ray tracers will gladly take a functional representation of many geometric primitives, I have encountered very few cases where something like a complex fractal or an isosurface could be rendered without first approximating it as a mesh or some other shape, sometimes at great cost). Part 1 of Mikael’s series on Distance Estimated 3D Fractals links to these slides which show a 4K demo built piece-by-piece using distance estimation to render a pretty complex scene.
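Sketching the core loop for myself in Python (with a sphere, whose distance estimate is exact, standing in for a fractal), ray marching comes out to roughly this:

import math

def de_sphere(px, py, pz, radius=1.0):
    # Distance estimate: how far the point is from the sphere's surface.
    return math.sqrt(px*px + py*py + pz*pz) - radius

def march(ox, oy, oz, dx, dy, dz, max_steps=100, eps=1e-4):
    # March a ray from origin (ox, oy, oz) along unit direction (dx, dy, dz).
    t = 0.0
    for _ in range(max_steps):
        d = de_sphere(ox + t*dx, oy + t*dy, oz + t*dz)
        if d < eps:
            return t        # hit: distance traveled along the ray
        t += d              # safe to step forward by the estimate
    return None             # missed the scene

print(march(0.0, 0.0, -3.0, 0.0, 0.0, 1.0))   # ray toward the sphere: ~2.0

The whole trick is that the estimate tells you how far you can safely step without passing through any surface; rendering a fractal just means plugging a different (and only approximate) distance estimator into the same loop.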

He has another rather different program called Structure Synth which he made following the same “design grammar” approach of Context Free. I haven’t used Structure Synth yet, because Context Free was also new to me and I was first spending some time learning to use that. I’ll cover this in another post.


Categories: Meta Tags:

Hive13 on TV

May 24th, 2010 1 comment

Last week, our local ABC affiliate WCPO Channel 9 stopped by Hive13 to learn what Hackerspaces were all about. If you found us due to the story, welcome! If you haven’t seen it yet…

One thing we want to clear up – despite the lead-in, we’re not trying to be a business incubator, and we’re certainly not trying to become the next Google (though wouldn’t that be nice?).  Hive13 promotes science and technology through education.  We’re a community-focused collective workspace for tinkerers, hackers, makers, and artists. If you’re with an organization and would like to partner with Hive13, shoot me an email!

If you’ve ever thought up a project you’d like to tackle but didn’t have the space or knowledge to follow through, we’re your group. If you’ve got a garage or basement full of tools, a head full of ideas, but can’t find the time or motivation to get things moving, we want to inspire you. If you’ve made a cool gadget or fixed your own stuff and just want someone else to appreciate how awesome that is, we’re on the same page.

You don’t have to be a computer whiz or an engineer. You just need an open, curious mind and a desire to do things. Our meetings on Tuesday nights are open to the public, so please stop by!


Categories: Meta Tags:

Hark! The webcam returneth!

September 23rd, 2009 2 comments

Hodapp repositioned the signal generator box to the bar area and then, with some help from ChrisA and me, connected a network cable there.  Then, through various hackish methods, we used a crossover cable and a spare network card to hook the webcam directly to the computer.  In any case, our webcam has returned from the dead.

Direct link to the webcam image here.


Categories: Meta Tags:

We’re live!

August 9th, 2009 No comments

Huzzah!


Categories: Meta Tags:

First post!

August 8th, 2009 No comments

This is the opening post for the Hive13 Webpage! Yay!


Categories: Meta Tags: