
A poor man’s particle system

For a recent Flash job, I was required to add a particle system on top of an application that was already very CPU-heavy. The project involved a large stage size, AR marker detection from a webcam feed, and a 3D-orientated plane, attached to the marker, displaying a running video. This video plane then had to spew out various flavours of 3D particle at appropriate moments in the FLV.

The idea of plugging in a sexy particle engine like Flint, when the SWF was struggling even to maintain its target framerate of 25fps, made me uncomfortable. Bespoke seemed the only way to go, and hey! it worked out. Here’s what one of the particle types ended up resembling:

(It’s worth mentioning that fewer particles than can be made in the above demo were needed for the project itself. Maxing out the version here gets my CPU usage up to around 40%, which would not have left enough room for the marker tracking and FLV playback.)

To briefly cover the tricks used here: I faked 3D z-positioning by scaling clips as a function of their bespoke ‘zPosition’ value. The formula focalLength / (focalLength + zPosition) gives the scale factor used to set the scaleX and scaleY properties of each MovieClip. The scaleX value was also adjusted to spoof motion blur: by overriding the x position property, each particle could compare its new x against its value at the last update. The greater the change, the larger the scaleX multiplier, and the more stretched/blurred the particle would appear.
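
Roughly, and with focalLength and the blur factor as placeholder values rather than the ones used in the project, the idea looks like this inside a particle class:

    package
    {
        import flash.display.MovieClip;

        // Minimal sketch of the perspective-scale / blur-stretch idea.
        // focalLength and blurFactor are illustrative, not the project's values.
        public class Particle extends MovieClip
        {
            public var zPosition:Number = 0;
            private var focalLength:Number = 500;
            private var blurFactor:Number = 0.02;
            private var lastX:Number = 0;

            override public function set x(value:Number):void
            {
                var delta:Number = value - lastX;   // distance moved since last update
                lastX = value;
                super.x = value;

                // Perspective: clips shrink as zPosition pushes them 'into' the screen.
                var scale:Number = focalLength / (focalLength + zPosition);
                scaleY = scale;

                // Fake motion blur: stretch horizontally in proportion to the move.
                scaleX = scale * (1 + Math.abs(delta) * blurFactor);
            }
        }
    }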

Rotation of the whole particle field was done by never setting the x or z properties directly, relying instead on ‘radius’ and ‘rotational offset’ values. All the particles reside inside an imagined cylinder (the particle field), with its central spindle aligned with the y-axis. As each particle moves in its orbit, its x and z locations are calculated from that rotational offset and its distance from the central axis. Channelling Mr McCallum, my secondary school maths teacher: the formula for the x axis is cos(angle) * radius, and for the z axis, sin(angle) * radius. (Used with the x and y properties instead, the same pair gives rotation around a flat circle rather than a cylinder.)
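
In code, each update amounts to something like the following (fieldRotation, rotationalOffset, radius and centreX are my stand-in names for the field spin, the per-particle values and the axis position):

    // Place a particle on the imagined cylinder for the current frame.
    var angle:Number = fieldRotation + particle.rotationalOffset;
    particle.x = centreX + Math.cos(angle) * particle.radius;   // centreX: x of the cylinder's axis
    particle.zPosition = Math.sin(angle) * particle.radius;
    // Swapping zPosition for y here would give orbits around a flat circle instead.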

In addition to rotary motion, the particles were required to ‘wander’ of their own accord. To achieve this, a large bitmap of Perlin noise is generated at the beginning of the simulation, and a reference to it is passed to each particle instance. Moving from pixel to pixel each frame, a particle samples the RGB value and uses it to increment its y-velocity and radius-velocity. Over time, the Perlin noise is traversed completely, giving a wide range of smooth, even motion. Each particle also begins sampling from a randomised row/column, so that the movement is staggered.

Thanks Ken Perlin!
Setting the Perlin noise generation to tile allows the bitmap to be traversed from side to side and top to bottom without any sudden changes in value.
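
A stripped-down version of that sampling step might look like the following; the channel split (red for y, blue for radius) and the scaling constants are guesses on my part, but the BitmapData calls are the relevant ones:

    import flash.display.BitmapData;

    // One tiling noise bitmap, generated once at start-up and shared by all particles.
    var noise:BitmapData = new BitmapData(512, 512, false, 0);
    noise.perlinNoise(128, 128, 4, Math.floor(Math.random() * 0xFFFF),
                      true,    // stitch, so the edges tile seamlessly
                      true);   // fractal noise

    // Per-particle state; each starts at a random pixel so the motion is staggered.
    var sampleX:int = Math.floor(Math.random() * noise.width);
    var sampleY:int = Math.floor(Math.random() * noise.height);
    var yVelocity:Number = 0;
    var radiusVelocity:Number = 0;

    // Called every frame for each particle.
    function wander():void
    {
        var pixel:uint = noise.getPixel(sampleX, sampleY);
        yVelocity      += (((pixel >> 16) & 0xFF) - 128) * 0.01;   // red channel, centred on zero
        radiusVelocity += ((pixel & 0xFF) - 128) * 0.01;           // blue channel

        // Step to the next pixel, wrapping around the (tiling) bitmap.
        sampleX = (sampleX + 1) % noise.width;
        if (sampleX == 0) sampleY = (sampleY + 1) % noise.height;
    }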

All said and done, there may well have been a package out there that was equally lightweight and flexible enough for the nuances of the project. But hard deadlines being what they are, it sometimes pays just to go from the ground up, omitting any chaff along the way, rather than hunt for elusive wheat through sprawling, multifaceted libraries.

If the source code happens to be of use to anyone I’d be very happy to clean it up and post it here. Until then… indolence wins out. Sorry!

Dinoglyphs

Been feeling at all out of touch lately? Wondered what the next big thing’s going to be? You came to the right blog post, friend, because I’ll tell you what it is for free:

ANAGLYPHS

And what would be the perfect companion to this timely tech? Extinct animals. Namely:

DINOSAURS

Over the past few weeks at work, I got the chance to develop an educational Papervision3D/FLARToolKit application for the BBC’s Learning Development site. I wrote up some nerdy/whiny information on the process here, and the page itself can be found at the picture-link below.

SpinARsaurus Challenge

Moving images containing terrible lizards are all very well; Steven Spielberg gave us those in 1993 – more than fifteen years ago! But what that documentary promised would be the simple matter of drilling into amber and injecting mosquito blood into a frog still hasn’t yielded any dino fun parks. What gives?

In order to fill this void (over the next couple of months, before one of the parks is complete and we’re busy petting T-rexes), I’ve made a dinosaur that looks so real you could touch it.

Dinoglyph
It will take a minute or two to load up, so please be patient! (You can’t rush virtual reality.) Moving the mouse around alters the spin speed and direction, and zooms in and out.

For the demo, I loaded in a second skeleton model, iterated over the texture files and ColorTransform-ed the red out of one set and the green+blue out of the other, then set their BlendModes to DIFFERENCE. After that, I moved the red copy a little left, the blue copy a little right, and applied all other movement relative to those locations.
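
As a rough sketch of that channel-stripping step (applied here to whole display objects; in the project it was done per texture, and the pixel offsets are placeholders):

    import flash.display.BlendMode;
    import flash.display.DisplayObject;
    import flash.geom.ColorTransform;

    // One copy keeps only red, the other only green+blue; with the channel sets
    // disjoint, DIFFERENCE over a dark background recombines them additively.
    function makeAnaglyphPair(redCopy:DisplayObject, cyanCopy:DisplayObject):void
    {
        redCopy.transform.colorTransform  = new ColorTransform(1, 0, 0);   // strip green+blue
        cyanCopy.transform.colorTransform = new ColorTransform(0, 1, 1);   // strip red

        redCopy.blendMode  = BlendMode.DIFFERENCE;
        cyanCopy.blendMode = BlendMode.DIFFERENCE;

        // A small horizontal offset creates the stereo separation.
        redCopy.x  -= 10;
        cyanCopy.x += 10;
    }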

The eventual colours work out to be nicely mutually invisible through the corresponding lenses of the paper glasses that were bundled with the copy of Ben 10 Magazine I bought in shame. (FYI: decent journalism, but the word searches were MUCH too difficult for the recommended age bracket.) More sophisticated glasses will probably result in uneven colouring.

FUTURE
And you won’t look as cool.

MIDI-driven Flash: Synaesthesia for Everyone!

Typical music visualisers, as found in desktop media players, have limited means to analyse the music played through them. Audio data from a mixed-down track is a blend of waveforms, normally from many sources. Spectral analysis allows amplitude variation at disparate frequencies to be detected and graphed discretely, which can assist in beat detection and tempo analysis. But due to the noisiness (…) of this data, huge variation in compositional arrangements and the need for realtime processing, the majority of visualisers have limited reactivity and focus on producing partially-randomised, generative animation that would be pretty even if running on white noise.

No.

But that’s all for raw audio. Samplers and virtual instruments (rendering MIDI data) have ever-increasing prominence in music creation and live performance – and greatly outclass the standard hardware MIDI synthesizers to which the typical notion of dorky, low-quality MIDI playback is inextricably tied. Compared to an audio stream, ‘message’-based music protocols describe events – normally pitch and velocity/volume information about notes triggering on and off – and nothing about the timbre of the sound, allowing sonic quality (for instance, the performing instrument) to be altered at any point.

Visualising musical data in this form removes all uncertainty around the timing and nature of note events (along with the overhead of detecting such details), accommodating far tighter correspondence between the audio and visual elements than is possible with conventional visualisers.

The Future

And none of this is new – VJs have been leveraging MIDI precision for years, and the field is still growing – but routing note data into Flash is new to me, and something I’d wanted to do for years, provoked by rhythmic music that carried a strong sense of motion, or was otherwise very visual. (Lookin’ at you guys, MSTRKRFT and Susumu Yokota <3 – among many others.)

There are several possible starting points for running MIDI into Flash, but none are especially mature (at least on Windows; different options are available for OS X). The primary issue is that the Flash runtime has never interfaced with environment-level MIDI APIs, which is pretty much in keeping with conventional use of the platform. Various workarounds are possible, and having read around a lot I settled on the following:

  • Translate the MIDI data into OSC (a newer protocol and likely eventual successor to MIDI, designed to run through the network transport layer), with the VST plugin, OSCGlue, created by Sebastian Oschatz.
  • Receive the OSC packets with the Java flosc server, created by Benjamin Chun, and retransmit them over TCP (OSC data is normally sent over UDP, which Flash does not support).
  • Receive the shiny new TCP-delivered OSC packets with the OSCConnection classes, an XMLConnection extension written by Adam Robertson and revised by Ignacio Delgado. (A rough sketch of this receiving end follows after the list.)
    (Worth mentioning: the updated flosc server (v2.0.3) given at the Google Code link above didn’t work for me, but I had no problems with the original on Ben Chun’s site (v0.3.1).)
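
For anyone wiring this up themselves: the last hop ultimately boils down to an XMLSocket connection into the flosc server, with each OSC message arriving as a small XML packet. A minimal sketch, assuming flosc is relaying locally on port 3000 and going from memory on its XML format (both assumptions worth checking against the flosc docs):

    import flash.events.DataEvent;
    import flash.net.XMLSocket;

    // Connect to the (assumed) local flosc relay.
    var socket:XMLSocket = new XMLSocket();
    socket.addEventListener(DataEvent.DATA, onOscData);
    socket.connect("127.0.0.1", 3000);

    function onOscData(event:DataEvent):void
    {
        // flosc wraps OSC in XML, roughly:
        // <OSCPACKET><MESSAGE NAME="/note"><ARGUMENT TYPE="i" VALUE="60"/></MESSAGE></OSCPACKET>
        var packet:XML = new XML(event.data);
        for each (var message:XML in packet.MESSAGE)
        {
            trace("OSC address:", message.@NAME);
            for each (var arg:XML in message.ARGUMENT)
            {
                trace("  arg:", arg.@TYPE, arg.@VALUE);
            }
        }
    }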

Here is a demo of it running, routing percussive hits from Ableton Live:

The visualisation method (best referred to, generously, as a ‘sketch’ for now) comprises an array of ‘particles’, distributed in 3D space, accelerating and rotating in time with note-on messages. On snare hits, the background gradient is also switched out for another. The particles themselves roll in a few other graphical experiments: all symbols are a single colour, but appear different due to variations in their layer BlendMode; I also created a sliding shutter-mask for the otherwise circular symbols in the Flash Pro IDE before embedding them; and I attempted a depth-of-field blur effect by increasing filter strength with distance from the camera – but it’s all a little abstract for that to be very applicable.
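
Boiled right down, the note handling is little more than this (the snare pitch, the mapping constants and the helpers are placeholders for whatever the real routing uses):

    import flash.display.MovieClip;

    var particles:Array = [];     // populated elsewhere with particle clips
    const SNARE_PITCH:int = 38;   // placeholder MIDI note number

    // Nudge every particle on a note-on; swap the background gradient on snares.
    // pitch and velocity are standard MIDI values (0–127).
    function onNoteOn(pitch:int, velocity:int):void
    {
        var strength:Number = velocity / 127;   // 0..1
        for each (var p:MovieClip in particles)
        {
            p.speed += strength * 4;            // accelerate...
            p.spin  += strength * 0.2;          // ...and rotate a little faster
        }
        if (pitch == SNARE_PITCH)
            swapBackgroundGradient();
    }

    function swapBackgroundGradient():void
    {
        // placeholder: cycle to the next background gradient
    }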

You can play with the visualiser ‘offline’, using keypresses: arrow keys (or WASD for the homicidally inclined) to move the particles; spacebar to swap the background. (Note: you’ll probably have to click into the player first.)

Some disclaimers: most of the work outside of Flash was that of the developers cited above; I don’t mean to take credit. This write-up is mostly to aid visitors with the same intentions, riding in on the Google express, since no other documented method I came across suited my needs and/or worked.
And, secondly, the visualiser itself is intended primarily as a tech demo and precursor to a more complete, aesthetic (…synaesthetic?) piece. You’ll see – it’ll be just like having a real mental disorder.

Rain simulation

As a generative animation exercise (and also just because, um, rain is nice-?), I decided to code me up some good ol’ precipitation. Considerations included: degree of realism, adaptability and performance. Specifically, animating a single layer of tear-shaped drops falling uniformly in perfectly straight lines is relatively easy to do, and certainly has its place within the right art style. On the other hand, ultra-realism would entail thousands of varied and varying particles, deeply distributed in 3D space – something that would better suit pre-rendering than realtime animation, or at the very least, something other than Flash. But that would be no fun at all.

I opted for a low-ish number of convincing enough, but inaccurately shaped, particles (apparently raindrops are actually round, mushroom-top blobs – who knew?!), with fairly realistic motion and three distinct layers for the parallactic goodness. I tried to keep everything as configurable as possible, so fall speed, wind strength, blustery-ness, particle graphic, drop size variation and viewport size can be adjusted easily. Reuse or lose! – as they say. Well, they ought to start saying that.
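
For the curious, the configuration boils down to a handful of numbers along these lines (names and defaults here are stand-ins rather than the actual values):

    // Illustrative settings object; the real names and values differ.
    var rainConfig:Object = {
        dropsPerLayer:   150,
        layerCount:      3,      // front, middle and back, for the parallax
        fallSpeed:       12,     // base pixels per frame, scaled per layer
        windStrength:    2,      // constant horizontal drift
        blusteriness:    0.5,    // how much the wind wanders over time
        dropScaleRange:  0.3,    // +/- variation in drop size
        viewportWidth:   550,
        viewportHeight:  400
    };
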
On with the rain!

You may notice that in the front and backmost layers, the drops are a little blurred. I wanted to give a narrow depth of field effect, and had initially applied a BlurFilter to their containing sprites, but that hit performance hard – so I drew a separate drop symbol for their layers, softening the perimeter with a radial gradient down to transparent, which is just as cheap performance-wise as a solid drop.
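
If you’d rather not author the soft symbol by hand in the IDE, roughly the same look can be had from the drawing API; a quick sketch (colour and size arbitrary):

    import flash.display.GradientType;
    import flash.display.Shape;
    import flash.geom.Matrix;

    // A drop that fades from solid at the centre to transparent at the edge,
    // standing in for a BlurFilter at a fraction of the cost.
    var drop:Shape = new Shape();
    var box:Matrix = new Matrix();
    box.createGradientBox(8, 8, 0, -4, -4);   // 8px drop centred on (0,0)
    drop.graphics.beginGradientFill(GradientType.RADIAL,
                                    [0xCCDDFF, 0xCCDDFF],   // one colour throughout
                                    [1, 0],                 // alpha: solid to clear
                                    [0, 255],               // across the full radius
                                    box);
    drop.graphics.drawCircle(0, 0, 4);
    drop.graphics.endFill();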

Thinking a little more about the worth of such an effect, I realised that I would likely want the rain to interact with whatever other objects might be in the scene. And so I added an umbrella:

It didn’t make much sense for the umbrella to stop drops across all layers, but the effect wasn’t very noticeable with so many other drops falling past it, so I removed them for this demo. I also added some physics-y hinged swinging based on the umbrella’s sideways movement. A splash animation plays at the exact intersecting point of drop and canopy – but I only later realised that, since around 75 drops make contact every second, I could easily have randomised the locations of the splashes and simplified the collision detection. Still! – honesty is the best policy, and when it’s only drizzling, accuracy of the contact point is more relevant.
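
In rough pseudo-AS3, the per-drop check was along these lines (the circular-arc canopy, the names and the helpers are simplified guesses at what the real file does):

    // Treat the canopy as the top half of a circle centred on the umbrella clip.
    const CANOPY_RADIUS:Number = 80;   // placeholder

    function checkDrop(drop:MovieClip):void
    {
        var dx:Number = drop.x - umbrella.x;
        if (Math.abs(dx) > CANOPY_RADIUS) return;   // outside the canopy's span

        // y of the canopy surface at the drop's x (y increases downwards in Flash)
        var surfaceY:Number = umbrella.y - Math.sqrt(CANOPY_RADIUS * CANOPY_RADIUS - dx * dx);
        if (drop.y >= surfaceY)
        {
            playSplashAt(drop.x, surfaceY);   // assumed helper: splash at the contact point
            recycleDrop(drop);                // assumed helper: send the drop back to the top
        }
    }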

So now I have almost everything I need for my ‘KEEP THE LOST KITTIES DRY’ minigame. No need to thank me just yet, world.

I drew the visual assets in the Flash CS3 IDE, compiled my classes in the wonderfully lightweight FlashDevelop, and animated the drops using Tweener. That’s all!

Hello

I'm Adam Vernon: front-end developer, free-time photographer, small-hours musician and general-purpose humanoid.