With a graphics tablet, you can draw!

Continuing a retrospective theme (definitely not stalling while I work out what fresh material I’ll put here), I thought I’d stick up a few of the things I’ve made with the graphics tablet Santa brought me last year – plus one I’d borrowed in 2007. ‘Tis very much the season once again and, conveniently, a few of these were Christmas cards I’d scrawled for family members a few years back. Topicality, hot damn! High-five, Santa Christ.

Firstly, the cards. Not much to say; they’re not good, but that’s all part of starting out, right?

This year, my sister got married, and had asked me to assemble a design for the invitations. The process was a little strained thanks to misaligned visions and limited creative freedom, but we settled on something eventually:

For the birthday of chum Andy, I made a t-shirt design based on this Korean sign (which may or may not be authentic accidental Engrish), and on his love of swine-based consumables. I then had it printed by these chaps.

More recently, it was Kris’ birthday, and so I drew a t-shirt for him too. The inspiration for this came from the evening he and I were playing at an open mic night. Bless his timid soul, he was nervous about singing, and so we conjured an alter ego for him to assume while on stage. Thus, JERKASAURUS REX was born! Rawr.

The Kinetype Family

While this endeavour is still in its infancy, I thought it’d make sense to itemise the facets of my substantial (for which, read: insubstantial) web presence, and the plans I have for each.

  • Firstly, there is this thing you are reading: the blog; blog-dot-kinetype-dot-com; http://blog.kinetype.com/. Here, I’ll try to keep things mostly (if not strictly) business. You, gentle reader, probably don’t need to know of the peculiarities of my digestive tract, nor read my ideas for Harry Potter / Twilight crossover fiction. I have a Twitter account for that. Posts will mostly be split across Flash projects, recorded music, photography, graphic design and maybe some writing – the things I like to do in spite of all the evidence and death threats.
  • Next, there’s the WWW subdomain, http://www.kinetype.com/. This is where I stuck the prickly fruits of my undergrad dissertation, submitted earlier this year. It’s for autonomously animating English text, à la human-authored kinetic typography, built in Flash, Python and Flex. It’s somewhat unpolished, but half of the planned improvements are coded up and ready to go, so it’ll be shiny in no time. Yeah.
  • Lastly, I’ve re-uploaded an old site of mine, Hope Park Square, to the HPS subdomain, http://hps.kinetype.com/. Back only for posterity, this train wreck lasted just six months during 2005, before the card I paid the hosting fees with expired. While I’d put up with them passively, I couldn’t bring myself to register my new details and voluntarily give skanky hosts Angelfire more money. Plus, I had a girlfriend by that point; my need to charm internet babes had been quelled.

And those are the things of mine on THE INTERNET.

Webcam experiment: dynamic pseudo-3D modelling

This is an extension of a little app I made around a year ago, which coloured and filled boxes with hex values based on the sampled colours of an image. Like this!:

That, however, was inspired by… based on… blatantly copied from some images by the hugely talented Luigi De Aloisio.

In the name of originality, and boredom, I more recently hooked the same function up to the Camera class, and updated the grid every frame. Useful if you need to know precisely what web colours your face is made up of, but the whole challenge with the original app was finding a good source image that would remain discernible once grid-ified, so most of the time you’re stuck with meaningless coloured blocks. (As a lifelong fan of Sesame Street, that was enough for me, but I’m trying to make a career out of this stuff.)
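The sampling itself is simple: pick one pixel per grid cell and format it as a hex web colour. A minimal Python sketch of that idea (the original was ActionScript 3 reading pixels from the Camera feed; the function name and dummy frame here are my own):

```python
# Hypothetical Python stand-in for the AS3 grid-sampling function.

def sample_grid(frame, cols, rows):
    """Sample one pixel per cell of a cols x rows grid and return
    each as a '#RRGGBB' web-colour string.

    `frame` is a 2D list of (r, g, b) tuples, indexed frame[y][x].
    """
    height = len(frame)
    width = len(frame[0])
    cell_w = width // cols
    cell_h = height // rows
    grid = []
    for row in range(rows):
        line = []
        for col in range(cols):
            # Take the pixel at the centre of each cell -- no averaging,
            # since averaging over every cell's area per frame costs too much.
            x = col * cell_w + cell_w // 2
            y = row * cell_h + cell_h // 2
            r, g, b = frame[y][x]
            line.append('#{:02X}{:02X}{:02X}'.format(r, g, b))
        grid.append(line)
    return grid

# A 4x4 dummy "frame" of solid mid-grey, standing in for one camera frame.
frame = [[(128, 128, 128)] * 4 for _ in range(4)]
print(sample_grid(frame, 2, 2))  # -> [['#808080', '#808080'], ['#808080', '#808080']]
```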

Removing the text and squarin’ up the rectangles, you effectively get an over-pixelated form of the video feed – just like having a webcam from the 90s! I thought it might be interesting to dynamically update the z-axis position of each cell, and overall brightness seemed to be the most sensible property to sample. It looks like this:

So, to recap what’s happening: per frame, the feed from the camera is sampled at regular intervals, across a grid of specified resolution, to capture the pixel colour at each point (I couldn’t afford the overhead of averaging across each block’s area). On the first pass, for each cell, a Shape object is created, filled with a square, and added to the grid array. On subsequent passes, each Shape object’s colorTransform property is set to the sampled pixel colour (computationally preferable to clearing and redrawing the square, but not hugely). The R, G and B values composing the cell colour are then averaged and normalised, before being used to move the cell backwards along the z-axis by an amount proportional to its darkness. (Because darker things are further away, sometimes, maybe…? It more or less works for the shadowy areas in faces anyway.)
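The brightness-to-depth step boils down to one small function. A Python sketch of it (the original set DisplayObject.z in AS3; the `MAX_DEPTH` constant is an assumed value, not one from the actual code):

```python
# Hypothetical sketch of the brightness-to-depth mapping described above.

MAX_DEPTH = 500  # assumed: z offset for a fully black cell

def cell_z(r, g, b, max_depth=MAX_DEPTH):
    """Average and normalise the three channels, then push the cell
    back along the z-axis in proportion to its darkness."""
    brightness = (r + g + b) / 3 / 255  # 0.0 (black) .. 1.0 (white)
    return (1 - brightness) * max_depth

print(cell_z(255, 255, 255))  # white cell stays on the near plane -> 0.0
print(cell_z(0, 0, 0))        # black cell is pushed furthest back -> 500.0
```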

The squares look OK, but with the grid floating in 3D space, I thought it might look cooler to use solid blocks instead. Unfortunately, your CPU will not find it cooler. Quite the opposite; performance isn’t great with this one:

The code differences here involved replacing each Shape object with a Sprite (in order to contain the multiple cube sides), and the four extra panels were drawn and rotated into position. Side colours were varied to make the blocks appear shaded, and on updating the colorTransform property, the three channel multipliers were each set to 0.6, in order to avoid washing the shading out altogether.
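The shading trick can be sketched numerically: each face keeps its own fixed base shade, and the sampled cell colour is applied with all three channel multipliers at 0.6 so the shading survives. A Python stand-in for the AS3 ColorTransform update (the per-face shade values here are my own assumptions, not the originals):

```python
# Hypothetical sketch of the cube-shading described above.

# Assumed base shades per visible face, brightest on the front.
FACE_SHADES = {'front': 1.0, 'top': 0.85, 'left': 0.7, 'right': 0.7, 'bottom': 0.55}

CHANNEL_MULTIPLIER = 0.6  # from the post: keeps the shading from washing out

def shaded_faces(r, g, b):
    """Return the per-face RGB after applying the sampled cell colour
    through the 0.6 channel multipliers and each face's base shade."""
    out = {}
    for face, shade in FACE_SHADES.items():
        out[face] = tuple(round(c * CHANNEL_MULTIPLIER * shade) for c in (r, g, b))
    return out

print(shaded_faces(200, 100, 50)['front'])  # -> (120, 60, 30)
```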

Next steps? As I’m yet to identify a practical application for all this, there’s little need to have it running in a browser or on Flash at all, so I may take a crack at porting it to Processing or C++ with OpenGL, for the performance gain. It might be nice to see it draw one box per pixel, or to use the brightness to set the Z co-ordinate of vertices rather than faces, and then stretch the video over a surface. A plain ol’ wireframe mesh might look nice too.

Any feedback on performance would be great.

Edit: Source code uploaded to my GitHub repository here.


I'm Adam Vernon: front-end developer, free-time photographer, small-hours musician and general-purpose humanoid.