
A poor man’s particle system

For a recent Flash job, I was required to add a particle system on top of an application already very CPU heavy. The project involved a large stage size, AR marker detection from a webcam feed, and a 3D-orientated plane displaying a running video, attached to the marker. This video plane then had to spew out various flavours of 3D particle at appropriate moments in the FLV.

The idea of plugging in a sexy particle engine like Flint, when the SWF was struggling even to maintain its target framerate of 25fps, made me uncomfortable. Bespoke seemed the only way to go, and hey! it worked out. Here’s what one of the particle types ended up resembling:

(It’s worth mentioning that fewer particles than can be made in the above demo were needed for the project itself. Maxing out the version here gets my CPU usage up to around 40%, which would not have left enough room for the marker tracking and FLV playback.)

To briefly cover the tricks used here: I faked 3D z-positioning by scaling clips as a function of their bespoke ‘zPosition’ value. The formula focalLength / (focalLength + zPosition) gives the scale factor used to set the scaleX and scaleY properties of each MovieClip. The scaleX property was also adjusted to spoof motion blur, by overriding the x position property and comparing each new value against the one from the previous update. The greater the change, the larger the scaleX multiplier, and the more stretched/blurred the particle appears.
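Stripped right down, the per-particle update amounts to something like the sketch below. (Variable names are placeholders rather than the project’s own, and the blur constant is arbitrary; the real code hooked into an overridden x setter rather than stashing the previous position on the clip.)

    import flash.display.MovieClip;

    // Minimal sketch: perspective scaling plus a cheap motion-blur stretch.
    // MovieClip is dynamic, so the previous x value is stashed straight onto the clip.
    function updateParticle(clip:MovieClip, zPosition:Number, focalLength:Number):void {
        // Perspective: the larger zPosition (further away), the smaller the clip.
        var scale:Number = focalLength / (focalLength + zPosition);
        clip.scaleX = clip.scaleY = scale;

        // Fake motion blur: stretch along x in proportion to the distance moved
        // since the last update. The 0.05 factor is an arbitrary blur strength.
        var dx:Number = Math.abs(clip.x - (isNaN(clip.lastX) ? clip.x : clip.lastX));
        clip.scaleX *= 1 + dx * 0.05;
        clip.lastX = clip.x;
    }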

Rotation of the whole particle field was done without setting the x or z properties directly, relying instead on ‘radius’ and ‘rotational offset’ values. All the particles reside inside an imagined cylinder (the particle field), with its central spindle aligned with the y-axis. Each particle has its x and z location calculated from this rotation and its distance from the central axis as it moves in its orbit. Channelling Mr McCallum, my secondary school maths teacher, the formulae for this kind of positioning are, for the x axis: cos(angle) * radius; and for the z axis: sin(angle) * radius. (These can be applied to the x and y properties instead to show rotation around a flat circle as opposed to the cylinder.)
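In code, that positioning boils down to a couple of lines per particle. (Again a rough sketch with made-up names: the angle would be advanced each frame by the field’s rotation speed plus the particle’s own rotational offset, and the returned zPosition feeds the perspective-scaling trick above.)

    import flash.display.MovieClip;

    // Place a particle on the imagined cylinder from its angle and radius.
    // The depth value is returned rather than set, since z here is faked by scaling.
    function positionOnCylinder(clip:MovieClip, angle:Number, radius:Number, centreX:Number):Number {
        clip.x = centreX + Math.cos(angle) * radius; // offset from the central spindle
        var zPosition:Number = Math.sin(angle) * radius;
        // (Assign the sine term to clip.y instead to orbit a flat circle.)
        return zPosition;
    }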

In addition to rotary motion, the particles were required to ‘wander’ of their own accord. To achieve this, a large bitmap of Perlin noise is generated at the beginning of the simulation, with a reference to it passed to each particle instance. Going from pixel to pixel each frame, the RGB value is sampled and used to increment the y-velocity and radius-velocity of each particle. Over time, the Perlin noise is traversed completely, resulting in a wide range of even motion. Each particle begins sampling from a randomised row/column too, so that the movement is staggered.

Thanks Ken Perlin!
Setting the Perlin noise generation to tile allows the bitmap to be traversed from side to side and top to bottom without any sudden changes in value.
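The gist, assuming one shared BitmapData of stitched noise and per-particle sample coordinates (the channel choices, increments and scaling factors below are illustrative, not the project’s actual values):

    import flash.display.BitmapData;

    // One shared, tiling Perlin noise bitmap, generated up front.
    var noise:BitmapData = new BitmapData(512, 512, false);
    noise.perlinNoise(128, 128, 4, int(Math.random() * 1000), true, true); // stitch + fractal

    // Per particle, per frame: nudge the velocities from the pixel under that
    // particle's own roving sample point (sampleX/sampleY start out randomised).
    function wander(particle:Object):void {
        var pixel:uint = noise.getPixel(particle.sampleX, particle.sampleY);
        var red:uint = (pixel >> 16) & 0xFF;
        var green:uint = (pixel >> 8) & 0xFF;

        particle.velocityY += (red - 128) * 0.001;        // drift up/down the cylinder
        particle.velocityRadius += (green - 128) * 0.001; // drift in/out from the spindle

        // Walk across the (tiling) noise so the motion evolves smoothly over time.
        particle.sampleX = (particle.sampleX + 1) % noise.width;
        if (particle.sampleX == 0) particle.sampleY = (particle.sampleY + 1) % noise.height;
    }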

When all’s said and done, there may well have been a package out there that was equally lightweight and still flexible enough for the nuances of the project. But, hard deadlines being what they are, it sometimes pays just to build from the ground up, omitting any chaff on the way – rather than hunt for elusive wheat through sprawling, multifaceted libraries.

If the source code happens to be of use to anyone I’d be very happy to clean it up and post it here. Until then… indolence wins out. Sorry!

Dinoglyphs

Been feeling at all out of touch lately? Wondered what the next big thing’s going to be? You came to the right blog post, friend, because I’ll tell you what it is for free:

ANAGLYPHS

And what would be the perfect companion to this timely tech? Extinct animals. Namely:

DINOSAURS

Over the past few weeks at work, I got the chance to develop an educational Papervision3D/FLARToolKit application for the BBC’s Learning Development site. I wrote up some nerdy/whiny information on the process here, and the page itself can be found at the picture-link below.

SpinARsaurus Challenge

Moving images containing terrible lizards are all very well; Steven Spielberg gave us those in 1993 – more than fifteen years ago! But what that documentary promised would be the simple matter of drilling into amber and injecting mosquito blood into a frog still hasn’t yielded any dino fun parks. What gives?

In order to fill this void (over the next couple of months, before one of the parks is complete and we’re busy petting T-rexes), I’ve made a dinosaur that looks so real you could touch it.

Dinoglyph
It will take a minute or two to load up, so please be patient! (You can’t rush virtual reality.) Moving the mouse around alters the spin speed and direction, and zooms in and out.

For the demo, I loaded in a second skeleton model, iterated over the texture files and ColorTransform-ed the red / green+blue out of them, respectively, then set their BlendModes to DIFFERENCE. After that, I moved the red copy a little left, the blue copy a little right, and applied all other movement relative to those locations.
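Papervision specifics aside, the colour-splitting recipe boils down to something like the sketch below, shown here on two generic display objects. (The exact multiplier values, the function name and the five-pixel separation are illustrative, not lifted from the project.)

    import flash.display.BlendMode;
    import flash.display.DisplayObject;
    import flash.geom.ColorTransform;

    // Rough anaglyph split: keep only red in one copy, only green+blue in the other,
    // blend with DIFFERENCE, and offset the two copies horizontally for depth.
    function splitForAnaglyph(redCopy:DisplayObject, cyanCopy:DisplayObject):void {
        redCopy.transform.colorTransform = new ColorTransform(1, 0, 0);  // red only
        cyanCopy.transform.colorTransform = new ColorTransform(0, 1, 1); // green + blue only

        redCopy.blendMode = cyanCopy.blendMode = BlendMode.DIFFERENCE;

        redCopy.x -= 5;  // nudge the red copy left...
        cyanCopy.x += 5; // ...and the blue copy right
    }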

The eventual colours turn out to be nicely mutually invisible through the corresponding panes of the paper glasses that were bundled with the copy of Ben 10 Magazine I bought in shame. (FYI, decent journalism, but the word searches were MUCH too difficult for the recommended age bracket.) More sophisticated glasses will probably result in uneven colouring.

FUTURE
And you won’t look as cool.

MIDI-driven Flash: Synaesthesia for Everyone!

Typical music visualisers, as found in desktop media players, have limited means to analyse the music played through them. Audio data from a mixed-down track is a blend of waveforms, normally from many sources. Spectral analysis allows amplitude variation at disparate frequencies to be detected and graphed discretely, which can assist in beat detection and tempo analysis. But due to the noisiness (…) of this data, huge variation in compositional arrangements and the need for realtime processing, the majority of visualisers have limited reactivity and focus on producing partially-randomised, generative animation that would be pretty even if running on white noise.

No.

But that’s all for raw audio. Samplers and virtual instruments (rendering MIDI data) have ever-increasing prominence in music creation and live performance – and greatly outclass the standard hardware MIDI synthesizers to which the typical notion of dorky, low-quality MIDI playback is inextricably tied. Compared to an audio stream, ‘message’-based music protocols describe events – normally pitch and velocity/volume information about notes triggering on and off – and nothing about the timbre of the sound, allowing sonic quality (for instance, the performing instrument) to be altered at any point.

Visualising musical data in this form removes all uncertainty around the timing and nature of note events (along with the overhead of detecting such details), accommodating far tighter correspondence between the audio and visual elements than is possible with conventional visualisers.

The Future

And none of this is new – VJs have been leveraging MIDI precision for years, and the field is still growing – but routing note data into Flash is new to me, and something I’d wanted to do for years, provoked by rhythmic music that carried a strong sense of motion, or was otherwise very visual. (Lookin’ at you guys, MSTRKRFT and Susumu Yokota <3 – among many others.)

There are several possible starting points for running MIDI into Flash, but none especially mature (at least on Windows; different options are available for OS X). The primary issue is that the Flash runtime has never interfaced with environment-level MIDI APIs, which is pretty much in keeping with conventional use of the platform. Various workarounds are possible, and having read around a lot I settled on the following:

  • Translate the MIDI data into OSC (a newer protocol and likely eventual successor to MIDI, designed to run through the network transport layer), with the VST plugin, OSCGlue, created by Sebastian Oschatz.
  • Receive the OSC packets with the Java flosc server, created by Benjamin Chun, and retransmit them over TCP (OSC data is normally sent over UDP, which Flash does not support).
  • Receive the shiny new TCP-delivered OSC packets with the OSCConnection classes, an XMLConnection extension written by Adam Robertson and revised by Ignacio Delgado. (A stripped-back sketch of this last step follows the list.)
    (Worth mentioning: the updated flosc server (v2.0.3) given at the Google Code link above didn’t work for me, but I had no problems with the original on Ben Chun’s site (v0.3.1).)
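For anyone wiring this up themselves, the Flash end is ultimately just a TCP socket receiving XML-wrapped OSC from flosc. Here’s a bare-bones sketch using XMLSocket directly rather than the OSCConnection classes – the host, port and XML attribute names are assumptions based on flosc’s usual output, so treat the parsing as illustrative:

    import flash.events.DataEvent;
    import flash.net.XMLSocket;

    // Bare-bones flosc client: connect to the local flosc server and pull the
    // address and arguments out of each XML-wrapped OSC message it forwards.
    var socket:XMLSocket = new XMLSocket();
    socket.addEventListener(DataEvent.DATA, onOscData);
    socket.connect("127.0.0.1", 3000); // assumed host/port for a local flosc instance

    function onOscData(event:DataEvent):void {
        // flosc wraps messages along these lines (schema assumed):
        // <OSCPACKET><MESSAGE NAME="/note"><ARGUMENT TYPE="i" VALUE="60"/></MESSAGE></OSCPACKET>
        var packet:XML = new XML(event.data);
        for each (var message:XML in packet.MESSAGE) {
            var address:String = String(message.@NAME);
            var args:Array = [];
            for each (var argument:XML in message.ARGUMENT) {
                args.push(String(argument.@VALUE));
            }
            trace(address, args); // hand off to the visualiser from here
        }
    }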

Here is a demo of it running, routing percussive hits from Ableton Live:

The visualisation method (best referred to, generously, as a ‘sketch’ for now) comprises an array of ‘particles’, distributed in 3D space, accelerating and rotating in time with note-on messages. On snare hits, the background gradient is also switched out for another. The particles themselves bundle a few other graphical experiments: all symbols are a single colour, but appear different due to variations in their layer BlendMode; I also created a sliding shutter-mask for the otherwise circular symbols in the Flash Pro IDE before embedding them; and I attempted a depth-of-field blur effect by increasing filter strength with distance from the camera – but it’s all a little abstract for that to be very applicable.
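That depth-of-field attempt, for what it’s worth, is little more than scaling a blur by each particle’s distance from a nominal focal plane – roughly as below (constants and names are mine):

    import flash.display.DisplayObject;
    import flash.filters.BlurFilter;

    // Rough depth-of-field: blur grows with distance from a chosen focal plane.
    // zPosition is whatever depth value the particle already carries.
    function applyDepthBlur(particle:DisplayObject, zPosition:Number, focalZ:Number):void {
        var distance:Number = Math.abs(zPosition - focalZ);
        var strength:Number = distance * 0.01; // arbitrary scaling of blur radius
        particle.filters = [new BlurFilter(strength, strength, 2)];
    }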

You can play with the visualiser ‘offline’, using keypresses: arrow keys (or WASD for the homicidally inclined) to move the particles; spacebar to swap the background. (Note: you’ll probably have to click into the player first.)

Some disclaimers: most of the work outside of Flash was that of the developers cited above; I don’t mean to take credit. This write-up is mostly to aid visitors with the same intentions, riding in on the Google express, since no other documented method I came across suited my needs and/or worked.
And, secondly, the visualiser itself is intended primarily as a tech demo and precursor to a more complete, aesthetic (…synaesthetic?) piece. You’ll see – it’ll be just like having a real mental disorder.

Webcam experiment: dynamic pseudo-3D modelling

This is an extension of a little app I made around a year ago, which coloured and filled boxes with hex values based on the sampled colours of an image. Like this!:

That, however, was inspired by… based on… blatantly copied from some images by the hugely talented Luigi De Aloisio.

In the name of originality, and boredom, I more recently hooked the same function up to the Camera class, and updated the grid every frame. Useful if you need to know precisely what web colours your face is made up of, but the whole challenge with the original app was finding a good source image that would remain discernible once grid-ified, so most of the time you’re stuck with meaningless coloured blocks. (As a lifelong fan of Sesame Street, that was enough for me, but I’m trying to make a career out of this stuff.)

Removing the text and squarin’ up the rectangles, you effectively get an over-pixelated form of the video feed – just like having a webcam from the 90s! I thought it might be interesting to dynamically update the z-axis position of each cell, and overall brightness seemed to be the most sensible property to sample. It looks like this:

So, to recap what’s happening: per frame, the feed from the camera is sampled at regular intervals, across a grid of specified resolution, to capture the pixel colour at each point (I couldn’t afford the overhead of averaging across each block’s area). On the first pass, for each cell, a Shape object is created, filled with a square, and added to the grid array. On subsequent passes, each Shape object’s colorTransform property is set to the sampled pixel colour (computationally preferable to clearing and redrawing the square, but not hugely). The R, G and B values composing the cell colour are then averaged and normalised, before being used to move the cell backwards along the z-axis by an amount proportional to its darkness. (Because darker things are further away, sometimes, maybe…? It more or less works for the shadowy areas in faces anyway.)
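Condensed to its essentials, the setup and per-frame loop are along these lines. (Cell size, grid dimensions and the depth scale are placeholder values, and error handling – no camera present, and so on – is skipped.)

    import flash.display.BitmapData;
    import flash.display.Shape;
    import flash.display.Sprite;
    import flash.events.Event;
    import flash.geom.ColorTransform;
    import flash.media.Camera;
    import flash.media.Video;

    var cellSize:int = 16;
    var cols:int = 20, rows:int = 15; // a 320 x 240 feed at this cell size

    var camera:Camera = Camera.getCamera(); // assumes a camera is available
    var video:Video = new Video(cols * cellSize, rows * cellSize);
    video.attachCamera(camera);

    var snapshot:BitmapData = new BitmapData(video.width, video.height);
    var grid:Vector.<Shape> = new Vector.<Shape>();
    var holder:Sprite = new Sprite();
    addChild(holder);

    // First pass: one square Shape per cell; its colour is overridden each frame
    // by the colorTransform offsets, so the fill colour itself doesn't matter.
    for (var row:int = 0; row < rows; row++) {
        for (var col:int = 0; col < cols; col++) {
            var cell:Shape = new Shape();
            cell.graphics.beginFill(0xFFFFFF);
            cell.graphics.drawRect(0, 0, cellSize, cellSize);
            cell.graphics.endFill();
            cell.x = col * cellSize;
            cell.y = row * cellSize;
            holder.addChild(cell);
            grid.push(cell);
        }
    }

    addEventListener(Event.ENTER_FRAME, update);

    function update(event:Event):void {
        snapshot.draw(video); // grab the current camera frame
        for (var row:int = 0; row < rows; row++) {
            for (var col:int = 0; col < cols; col++) {
                var pixel:uint = snapshot.getPixel(col * cellSize, row * cellSize);
                var r:uint = (pixel >> 16) & 0xFF;
                var g:uint = (pixel >> 8) & 0xFF;
                var b:uint = pixel & 0xFF;

                var cell:Shape = grid[row * cols + col];
                // Tint via colorTransform rather than clearing and redrawing the square.
                cell.transform.colorTransform = new ColorTransform(0, 0, 0, 1, r, g, b);

                // Darker cells sit further back along the z-axis.
                var brightness:Number = (r + g + b) / (3 * 255);
                cell.z = (1 - brightness) * 200; // 200 = arbitrary depth range
            }
        }
    }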

The squares look ok, but with the grid floating in 3D space, I thought it might look cooler to use solid blocks instead. Unfortunately your CPU will not find it cooler. Quite the opposite; performance isn’t great with this one:

The code differences here involved replacing each Shape object with a Sprite (in order to contain the multiple cube sides) and drawing and rotating the four extra panels into position. Side colours were varied to make the blocks appear shaded, and on updating the colorTransform property, the three channel multipliers were each set to 0.6, to avoid washing the shading out altogether.
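In other words, the tint step leaves some headroom so each face’s pre-drawn shade still shows through – something like the fragment below, where the 0.6 comes from above but the weighting of the offsets is a guess:

    import flash.display.Sprite;
    import flash.geom.ColorTransform;

    // Tint a whole cube Sprite while letting each face's baked-in shading survive.
    function tintCube(cube:Sprite, r:uint, g:uint, b:uint):void {
        cube.transform.colorTransform =
            new ColorTransform(0.6, 0.6, 0.6, 1, r * 0.4, g * 0.4, b * 0.4);
    }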

Next steps? As I’m yet to identify a practical application for all this, there’s little need to have it running in a browser or on Flash at all, so I may take a crack at porting it to Processing or C++ with OpenGL, for the performance gain. It might be nice to see it draw one box per pixel, or to use the brightness to set the z co-ordinate of vertices rather than faces, and then stretch the video over a surface. A plain ol’ wireframe mesh might look nice too.

Any feedback on performance would be great.


Edit: Source code uploaded to my github repository here.

Hello

I'm Adam Vernon: front-end developer, free-time photographer, small-hours musician and general-purpose humanoid.