Firefly Makani audio visualiser

The danger of being in an electronic combo that cannot reproduce all of its audio tracks live is becoming uninteresting to watch: people standing behind their computer screens. For the Firefly Makani side project, a system was therefore developed that allows the protagonists both to manipulate audio live and to translate those manipulations visually through a live-generative visualiser.

The inner workings of the system

For our live set we control audio processing software with two MIDI controllers. The patches (developed by my collaborator Fedde ten Berge and running inside the audio sequencer on one laptop) not only process the running audio stream, but also transmit the captured MIDI messages over UDP to the second laptop. In addition to the controller activity, a periodic update of the current tempo is broadcast so the pulse of the images can be synchronised to the beat.
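
To give an idea of what travels over the wire, here is a minimal C sketch of the two kinds of datagrams described above: a captured controller movement and a periodic tempo update. In the actual setup the Max/MSP patch does the sending; the address, port and message layout shown here are assumptions for illustration only.

    /* Illustrative sender: one controller-change datagram and one tempo datagram. */
    #include <arpa/inet.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define VISUALISER_IP   "192.168.0.2"  /* second laptop (assumed address) */
    #define VISUALISER_PORT 7400           /* assumed port */

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in dest = { 0 };
        dest.sin_family = AF_INET;
        dest.sin_port   = htons(VISUALISER_PORT);
        inet_pton(AF_INET, VISUALISER_IP, &dest.sin_addr);

        /* A captured controller movement: status byte, controller number, value. */
        unsigned char cc[3] = { 0xB0, 21, 96 };
        sendto(sock, cc, sizeof cc, 0, (struct sockaddr *)&dest, sizeof dest);

        /* A periodic tempo update, here simply sent as text ("tempo <bpm>"). */
        const char *tempo = "tempo 126";
        sendto(sock, tempo, strlen(tempo), 0, (struct sockaddr *)&dest, sizeof dest);

        close(sock);
        return 0;
    }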

The second laptop runs a separate application to render the visualisations, which are generated from the MIDI properties, manipulated by glorious math. It also runs a native-layer service that acts as a mediator between the incoming MIDI messages and the visualiser, with the single task of capturing the UDP data and filtering the messages. This frees up considerable resources for the visualisation application, as the parsing of relevant data happens beforehand and in a separate thread.
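
The sketch below outlines what such a mediator's receive-and-filter loop could look like in C, assuming the same datagram layout as in the previous sketch. The filtering rule and the forward_to_visualiser call are hypothetical placeholders standing in for the hand-off to the Adobe AIR application.

    /* Illustrative mediator: a listener thread that keeps only relevant messages. */
    #include <arpa/inet.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define LISTEN_PORT 7400  /* assumed port */

    /* Placeholder: hand a filtered controller value to the visualiser. */
    static void forward_to_visualiser(unsigned char controller, unsigned char value)
    {
        printf("cc %u -> %u\n", controller, value);
    }

    static void *listen_thread(void *arg)
    {
        (void)arg;
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr = { 0 };
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(LISTEN_PORT);
        bind(sock, (struct sockaddr *)&addr, sizeof addr);

        unsigned char buf[64];
        for (;;) {
            ssize_t n = recvfrom(sock, buf, sizeof buf - 1, 0, NULL, NULL);
            /* Only controller-change messages (status 0xB0-0xBF) are relevant
               to the visuals; everything else is dropped here. */
            if (n == 3 && (buf[0] & 0xF0) == 0xB0) {
                forward_to_visualiser(buf[1], buf[2]);
            } else if (n > 6 && memcmp(buf, "tempo ", 6) == 0) {
                buf[n] = '\0';
                printf("tempo %d bpm\n", atoi((char *)buf + 6));
            }
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, listen_thread, NULL);
        pthread_join(t, NULL);
        return 0;
    }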

The visual result

When the system is idle - i.e. no MIDI activity is being transmitted - the installation renders a visual loop that randomly chooses from a set of algorithms with different behaviours, synced to the last known tempo. When data is received, this pattern is altered (albeit within set constraints), allowing the visuals to respond to the audio manipulations.

e.g. a wild screeching sound can be visualised as a violent explosion of particles or by speeding up and/or reversing a video feed.
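
A small C-style sketch of this constrained response: an incoming controller value nudges a visual parameter, the result is clamped to preset bounds, and the idle pulse is derived from the last known tempo. The parameter names and ranges are illustrative; the real visualiser runs inside Adobe AIR.

    /* Illustrative mapping: MIDI value -> visual parameter, within constraints. */
    #include <stdio.h>

    static double clamp(double v, double lo, double hi)
    {
        return v < lo ? lo : (v > hi ? hi : v);
    }

    int main(void)
    {
        double last_bpm       = 126.0;               /* last tempo received over UDP */
        double beat_ms        = 60000.0 / last_bpm;  /* pulse of the idle loop */
        double particle_burst = 0.2;                 /* idle baseline, 0..1 */

        /* A wild controller sweep arrives (MIDI value 0-127): scale it, add it
           to the baseline, but keep the explosion within the set constraints. */
        unsigned char midi_value = 119;
        particle_burst = clamp(particle_burst + midi_value / 127.0, 0.0, 1.0);

        printf("beat every %.1f ms, burst intensity %.2f\n", beat_ms, particle_burst);
        return 0;
    }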

Because the latency (the delay between sending, processing and rendering) is so small, all interactions are perceived as "real-time" responses.

Languages and technologies used

Adobe AIR, C, Max/MSP (Max scripting by Fedde ten Berge).