Firefly Makani audio visualiser

The danger of performing in an electronic combo that cannot reproduce all of its audio tracks live is becoming uninteresting to watch: people standing behind a computer screen. For the Firefly Makani project, a system was therefore developed that allows the performers both to manipulate audio live and to translate those manipulations into a live, generative visualiser.

The inner workings of the system:

For our live set we control audio processing software with two MIDI controllers attached to a dedicated "audio laptop". The patches (developed by my collaborator Fedde ten Berge and running inside the audio sequencer on that laptop) not only manipulate the audio stream, but also transmit the captured MIDI messages via UDP to a dedicated "video laptop". In addition to the controller activity, a periodic update of the current tempo is broadcast to synchronise the pulse of the visual content to the beat.
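The actual wire format used by the patches is not documented here, but the idea of the transport can be sketched in C with a hypothetical layout: a one-byte tag followed by either a raw 3-byte MIDI message or a tempo value. The tag values and BPM encoding below are assumptions for illustration only.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical wire format: one tag byte, then the payload.
   The real patches use their own layout. */
enum { TAG_MIDI = 0x01, TAG_TEMPO = 0x02 };

/* Pack a raw 3-byte MIDI message (status, data1, data2) into buf;
   returns the number of bytes written to the datagram payload. */
size_t pack_midi(uint8_t *buf, uint8_t status, uint8_t data1, uint8_t data2)
{
    buf[0] = TAG_MIDI;
    buf[1] = status;
    buf[2] = data1;
    buf[3] = data2;
    return 4;
}

/* Pack a tempo update; BPM sent as a 16-bit big-endian integer. */
size_t pack_tempo(uint8_t *buf, uint16_t bpm)
{
    buf[0] = TAG_TEMPO;
    buf[1] = (uint8_t)(bpm >> 8);
    buf[2] = (uint8_t)(bpm & 0xFF);
    return 3;
}
```

Each packed buffer would then be handed to a UDP send call addressed to the video laptop.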

The second laptop runs a custom-written application that renders the visualisations, which are generated from the MIDI properties and manipulated by glorious math. This laptop also runs a native-layer service that acts as mediator between the incoming MIDI messages and the visualiser application; its single task is to capture the UDP data and filter the messages, relieving the visualiser application of that work.
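The filtering step of such a service can be sketched in C. The exact set of messages the visualiser reacts to is an assumption here; the sketch keeps note-on/off and control-change messages and drops system-realtime traffic before anything is forwarded.

```c
#include <stdint.h>

/* Sketch of the service's filter (the kept message set is an assumption):
   pass only the channel messages the visualiser reacts to. */
int is_relevant(uint8_t status)
{
    uint8_t type = status & 0xF0;   /* upper nibble = message type */
    if (status >= 0xF8)             /* system realtime: clock, active sensing */
        return 0;
    return type == 0x80             /* note off        */
        || type == 0x90             /* note on         */
        || type == 0xB0;            /* control change  */
}
```

Only datagrams passing this predicate would be forwarded to the visualiser application, so the renderer never parses raw UDP traffic itself.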

The visual result

When the system is idle - i.e. no MIDI activity is being received - the installation renders a visual loop that randomly chooses from a set of algorithms generating different behaviours, synced to the last broadcast tempo. When new data arrives, this pattern is altered (within set constraints), allowing the visuals to respond to the audio manipulations.
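The two mechanisms above - deriving a pulse from the broadcast tempo, and altering a pattern within constraints - can be sketched in C. The function and parameter names are assumptions; the actual algorithms live inside the visualiser application.

```c
#include <stdint.h>

/* One beat, in milliseconds, derived from the last broadcast tempo;
   this period drives the idle animation's pulse. */
double beat_period_ms(double bpm)
{
    return 60000.0 / bpm;
}

/* Sketch of a constrained alteration: an incoming controller value
   nudges a visual parameter toward a target, but the result is always
   clamped to the limits of the active algorithm. */
double apply_midi(double current, uint8_t cc_value, double min, double max)
{
    /* map the 0-127 controller range onto the allowed span */
    double target = min + (max - min) * (cc_value / 127.0);
    /* move only part of the way, so the idle pattern is altered,
       not replaced outright */
    double next = current + 0.5 * (target - current);
    if (next < min) next = min;
    if (next > max) next = max;
    return next;
}
```

Because the nudge is clamped, even an extreme burst of MIDI activity keeps the visuals inside the chosen algorithm's behaviour.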

For example, a wild screeching sound can be visualised as a violent explosion of particles, or by speeding up and/or reversing a video feed.
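One such mapping could look like the following C sketch; the threshold and particle counts are invented for illustration, not taken from the actual system.

```c
#include <stdint.h>

/* Hypothetical mapping: a high controller value triggers a particle
   burst whose size scales with the value; quieter input does nothing. */
int burst_size(uint8_t value)
{
    if (value < 96)
        return 0;                  /* below threshold: no burst */
    return (value - 95) * 8;       /* 8..256 particles */
}
```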

Because the latency between submitting, processing and rendering is so small, all interactions are perceived as "real-time" responses.

Languages and technologies used

C, ActionScript 3/AIR, Max/MSP (Max scripting by Fedde ten Berge).