
A poor man’s particle system

For a recent Flash job, I was required to add a particle system on top of an application already very CPU heavy. The project involved a large stage size, AR marker detection from a webcam feed, and a 3D-orientated plane displaying a running video, attached to the marker. This video plane then had to spew out various flavours of 3D particle at appropriate moments in the FLV.

The idea of plugging in a sexy particle engine like Flint, when the SWF was struggling even to maintain its target framerate of 25fps, made me uncomfortable. Bespoke seemed the only way to go, and hey! it worked out. Here’s what one of the particle types ended up resembling:

(It’s worth mentioning that the project itself needed fewer particles than the demo above can produce. Maxing out the version here pushes my CPU usage to around 40%, which wouldn’t have left enough headroom for the marker tracking and FLV playback.)

To briefly cover the tricks used here: I faked 3D z-positioning by scaling clips as a function of their bespoke ‘zPosition’ value. The formula focalLength / (focalLength + zPosition) gives the scale factor used to set the scaleX and scaleY properties of each MovieClip. The scaleX property was also adjusted to spoof motion blur: by overriding the x position property, each particle’s position could be compared with its value at the last update. The greater the change, the larger the scaleX multiplier, and the more stretched/blurred the particle appeared.
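
In code, the combination boils down to something like this minimal sketch (the focalLength value, the stored lastX property and the blur strength are illustrative, not lifted from the project source):

    var focalLength:Number = 300; // illustrative focal length

    function updateParticle(p:MovieClip):void {
        // Perspective: the deeper a particle sits, the smaller it scales.
        var scale:Number = focalLength / (focalLength + p.zPosition);
        p.scaleX = scale;
        p.scaleY = scale;

        // Spoofed motion blur: stretch horizontally in proportion to the
        // distance moved since the last update.
        var dx:Number = Math.abs(p.x - p.lastX);
        p.scaleX *= 1 + dx * 0.05; // 0.05 = arbitrary blur strength
        p.lastX = p.x;
    }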

Rotation of the whole particle field was done by avoiding setting the x or z properties directly, depending instead on ‘radius’ and ‘rotational offset’ values. All the particles reside inside an imagined cylinder (the particle field), with its central spindle aligned with the y-axis. As each particle moves in its orbit, its x and z locations are calculated from this rotation and its distance from the central axis. Channelling Mr McCallum, my secondary school maths teacher, the formulae for this kind of positioning are: for the x axis, cos(angle) * radius; and for the z axis, sin(angle) * radius. (Applied to the x and y properties instead, these show rotation around a flat circle as opposed to the cylinder.)
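
As a sketch, assuming each particle stores its own radius and rotational offset (the property and function names here are illustrative):

    function positionParticle(p:MovieClip, fieldRotation:Number):void {
        // Angle around the cylinder's central spindle (the y-axis).
        var angle:Number = fieldRotation + p.rotationalOffset;
        p.x = Math.cos(angle) * p.radius;
        p.zPosition = Math.sin(angle) * p.radius; // feeds the scale formula above
        // p.y is untouched here; the 'wander' behaviour below drives it.
    }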

In addition to rotary motion, the particles were required to ‘wander’ of their own accord. To achieve this, a large bitmap of Perlin noise is generated at the beginning of the simulation, with a reference to it passed to each particle instance. Going from pixel to pixel each frame, the RGB value is sampled and used to increment the y-velocity and radius-velocity of each particle. Over time, the Perlin noise is traversed completely, resulting in a wide range of even motion. Each particle begins sampling from a randomised row/column too, so that the movement is staggered.

Thanks Ken Perlin!
Setting the Perlin noise generation to tile allows the bitmap to be traversed from side to side and top to bottom without any sudden changes in value.
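
A rough sketch of the noise setup and per-frame sampling described above (the bitmap size, nudge scaling and property names are illustrative, as is mapping the red channel to y-velocity and the green to radius-velocity):

    import flash.display.BitmapData;
    import flash.display.BitmapDataChannel;

    var noise:BitmapData = new BitmapData(512, 512, false);
    // stitch = true tiles the pattern, so sampling can wrap around the
    // bitmap's edges without any sudden changes in value.
    noise.perlinNoise(256, 256, 4, Math.random() * 1000, true, true,
                      BitmapDataChannel.RED | BitmapDataChannel.GREEN);

    function wander(p:MovieClip):void {
        // p.noiseX / p.noiseY start at a random pixel, so movement is staggered.
        var rgb:uint = noise.getPixel(p.noiseX, p.noiseY);

        // Map each channel from 0..255 to a small signed nudge.
        p.velocityY      += (((rgb >> 16) & 0xFF) - 128) * 0.001;
        p.velocityRadius += (((rgb >>  8) & 0xFF) - 128) * 0.001;

        // Step to the next pixel, wrapping row and column thanks to the tiling.
        p.noiseX = (p.noiseX + 1) % noise.width;
        if (p.noiseX == 0) p.noiseY = (p.noiseY + 1) % noise.height;
    }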

When all’s said and done, there may indeed have been a package out there that was equally lightweight and flexible enough for the nuances of the project. But, hard deadlines being what they are, it sometimes pays just to build from the ground up, omitting any chaff on the way – rather than hunting for elusive wheat through sprawling, multifaceted libraries.

If the source code happens to be of use to anyone I’d be very happy to clean it up and post it here. Until then… indolence wins out. Sorry!

Webcam experiment: dynamic pseudo-3D modelling

This is an extension of a little app I made around a year ago, which coloured and filled boxes with hex values based on the sampled colours of an image. Like this!:

That, however, was inspired by… based on… blatantly copied from some images by the hugely talented Luigi De Aloisio.

In the name of originality, and boredom, I more recently hooked the same function up to the Camera class and updated the grid every frame. Useful if you need to know precisely which web colours your face is made up of; but the whole challenge with the original app was finding a good source image that would remain discernible once grid-ified, so most of the time you’re stuck with meaningless coloured blocks. (As a lifelong fan of Sesame Street, that was enough for me, but I’m trying to make a career out of this stuff.)

Removing the text and squarin’ up the rectangles, you effectively get an over-pixelated form of the video feed – just like having a webcam from the 90s! I thought it might be interesting to dynamically update the z-axis position of each cell, and overall brightness seemed to be the most sensible property to sample. It looks like this:

So, to recap what’s happening: each frame, the feed from the camera is sampled at regular intervals, across a grid of specified resolution, to capture the pixel colour at each point (I couldn’t afford the overhead of averaging across each block’s area). On the first pass, a Shape object is created for each cell, filled with a square, and added to the grid array. On subsequent passes, each Shape object’s colorTransform property is set to the sampled pixel colour (computationally preferable to clearing and redrawing the square, but not hugely). The R, G and B values composing the cell colour are then averaged and normalised, before being used to move the cell backwards along the z-axis by an amount proportional to its darkness. (Because darker things are further away, sometimes, maybe…? It more or less works for the shadowy areas in faces anyway.)
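
Condensed into code, the per-frame pass looks something like this (a sketch only: the depth range is illustrative, and the video, grid, rows, cols and cellSize variables are assumed to be set up elsewhere):

    import flash.display.BitmapData;
    import flash.display.Shape;
    import flash.geom.ColorTransform;

    var snapshot:BitmapData = new BitmapData(video.width, video.height, false);

    function sampleFrame():void {
        snapshot.draw(video); // grab the current camera frame
        for (var row:int = 0; row < rows; row++) {
            for (var col:int = 0; col < cols; col++) {
                var rgb:uint = snapshot.getPixel(col * cellSize, row * cellSize);
                var cell:Shape = grid[row * cols + col];

                // Tint the pre-drawn white square rather than redrawing it.
                cell.transform.colorTransform = new ColorTransform(
                    0, 0, 0, 1,
                    (rgb >> 16) & 0xFF, (rgb >> 8) & 0xFF, rgb & 0xFF);

                // Average and normalise the channels, then push darker cells
                // further back along the z-axis.
                var brightness:Number =
                    (((rgb >> 16) & 0xFF) + ((rgb >> 8) & 0xFF) + (rgb & 0xFF)) / 765;
                cell.z = (1 - brightness) * 200; // 200 = arbitrary depth range
            }
        }
    }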

The squares look ok, but with the grid floating in 3D space, I thought it might look cooler to use solid blocks instead. Unfortunately your CPU will not find it cooler. Quite the opposite; performance isn’t great with this one:

The code differences here involved replacing each Shape object with a Sprite (to contain the multiple cube sides) and drawing four extra panels, rotated into position. Side colours were varied to make the blocks appear shaded, and when updating the colorTransform property, the three channel multipliers were each set to 0.6, to avoid washing the shading out altogether.
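
The tint change itself is tiny: something like the following, swapping the zeroed multipliers of the flat version for 0.6s (a sketch; the cube variable and the offset handling are illustrative):

    // Multipliers of 0.6 let the pre-shaded face colours show through,
    // rather than being replaced outright by the sampled pixel colour.
    cube.transform.colorTransform = new ColorTransform(
        0.6, 0.6, 0.6, 1,
        (rgb >> 16) & 0xFF, (rgb >> 8) & 0xFF, rgb & 0xFF);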

Next steps? As I’m yet to identify a practical application for all this, there’s little need to have it running in a browser or on Flash at all, so I may take a crack at porting it to Processing or C++ with OpenGL, for the performance gain. It might be nice to see it draw one box per pixel, or to use the brightness to set the Z co-ordinate of vertices rather than faces, and then stretch the video over a surface. A plain ol’ wireframe mesh might look nice too.

Any feedback on performance would be great.


Edit: Source code uploaded to my github repository here.
