Software


People have often asked us what software we use, and if it's available commercially.

Animusic uses a production pipeline based on proprietary software we call ANIMUSIC|studio. It is a MIDI sequencer and animation system based on a visual programming language (it looks like boxes connected with wires). At its core is our motion generation software library (now in its 5th generation), which we have come to call MIDImotion.

None of our software is currently available commercially (some of the reasoning is discussed in this newsletter), although it is possible that we will release a software product at some point in the future (many people have encouraged us to do so).

Over the years, we've used a number of different production pipelines, but they all have certain things in common. We use commercial software for modeling, shading, and rendering, while the instrument animation is always calculated procedurally using custom-created software.

Pipeline History

Here's a quick summary of how our pipelines have evolved:

The earliest pipeline (which Wayne used for "More Bells and Whistles" in 1990) hinged on Wavefront. All the instrument motion was created by piping together filters written in C (reading MIDI files exported from an external sequencer called Texture 3.0). All synthesizers were real physical keyboards with knobs and wires, and mixing was done on a console with faders and knobs… this would eventually change.

When Animusic formed as a company in 1995, Dave joined up and we did our first job for a client (Beyond the Walls, now bonus material on Animusic 1). Dave modeled with Alias Sketch and Form-Z. All animation was created with VPLA (Visual Programming Language for Animation, developed by Wayne at the Cornell Theory Center and licensed for use as a corporate partner), and rendered with RenderMan. Wayne was sequencing in MOTU Performer. Files were shuttled around using SneakerNet and TireNet.

For Animusic's first DVD, we bought 3ds Max 1.0 (right when it was released) and used it (and subsequent upgrades) for almost everything other than the instrument motion, which was done with MIDImotion (wrapped as a Max plug-in). All the rendered frames were sweetened in Adobe After Effects (single layer, nothing fancy).

The pipeline for the Animusic 2 DVD again used 3ds Max for modeling, materials, lights, and rendering. But at the core was an initial version of ANIMUSIC|studio, which embodied MIDImotion. Instrument motion and camera moves were then exported to 3ds Max, where models, materials, and lights were created. MIDI sequencing was still done externally using Cubase SX, Nuendo, and FruityLoops, until the last couple of animations, when Wayne wrote his own sequencer right into ANIMUSIC|studio. Now things were getting interesting.

Our current pipeline hinges on a total rewrite of ANIMUSIC|studio from the ground up (more about that in this newsletter). It's based on scene graph technology, has a new sequencer, and even MIDImotion was rewritten to run much closer to real time. And as much as we liked so many things about 3ds Max, it was time to make all things new. So we've moved to SoftImage XSI (and maybe a little Z-Brush) for modeling, and back to RenderMan for rendering. ANIMUSIC|studio does all the sequencing internally, so no more exporting and importing MIDI files. Instead, MIDI is sent over Gigabit Ethernet to a second workstation dedicated to hosting VST software synthesizers.

More about MIDImotion

Without MIDImotion, animating instruments using traditional "keyframing" techniques would be prohibitively time-consuming and inaccurate. Instead, MIDImotion combines motion generated by approximately 12 algorithms (each with 10 to 50 parameters) to produce the instrument animation automatically, with sub-frame accuracy. If the music is changed, the animation is regenerated effortlessly.
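
To make that concrete, here is a minimal sketch of one such parameter-driven algorithm. This is hypothetical code, not MIDImotion itself; the names, parameters, and motion curve are invented for illustration. The point is that the note list can be evaluated at any instant in time (not just at frame boundaries), which is where the sub-frame accuracy comes from, and that re-running it over an edited note list regenerates the motion for free.

    // Hypothetical sketch: one parameter-driven "strike" algorithm.
    // In practice roughly a dozen algorithms like this are blended per instrument.
    #include <algorithm>
    #include <vector>

    struct NoteEvent {
        double time;      // seconds, taken from the MIDI sequence
        int    pitch;     // MIDI note number
        double velocity;  // 0..1, how hard the note was played
    };

    struct StrikeParams {        // a few of an algorithm's many parameters
        double windup   = 0.20;  // seconds the mallet takes to descend to the hit
        double height   = 0.30;  // raised rest height of the mallet tip (meters)
        double recovery = 0.15;  // seconds to rebound back up after the hit
    };

    // Height of a mallet tip at an arbitrary time t, driven only by the notes.
    // Evaluating at exact note times (not rounded to frames) keeps hits accurate.
    double malletHeight(const std::vector<NoteEvent>& notes,
                        const StrikeParams& p, double t)
    {
        double h = p.height;                          // raised rest position
        for (const NoteEvent& n : notes) {
            double dt = t - n.time;
            if (dt >= -p.windup && dt < 0.0)          // descending toward the hit
                h = std::min(h, p.height * (-dt / p.windup));
            else if (dt >= 0.0 && dt < p.recovery)    // rebounding after the hit
                h = std::min(h, p.height * (dt / p.recovery));
        }
        return h;
    }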

Our technique differs significantly from reactive sound visualization technology, as made popular by music player plug-ins. Rather than reacting to sound with undulating shapes, our animation is correlated to the music at a note-for-note granularity, based on a non-real-time analysis pre-process. Animusic instruments generally appear to generate the music heard, rather than respond to it.

At any given instant, not only do we take into account the notes currently being played, but also notes recently played and those coming up soon. These factors are combined to derive "intelligent," natural-moving, self-playing instruments. And although the original instruments created for our DVDs are often somewhat reminiscent of real instruments, the motion algorithms can be applied to arbitrary graphics models, including non-instrumental objects and abstract shapes.
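
As a rough illustration of that look-ahead (again hypothetical, not our production code), the idea amounts to scanning the note list around the current time and easing a striking arm from the bar it just played toward the bar it is about to play, starting early enough that the motion looks deliberate rather than reactive:

    // Hypothetical look-ahead sketch: aim a striking arm along the instrument.
    // Here the pitch of the target note simply stands in for a physical
    // position along a xylophone-like instrument.
    #include <vector>

    struct NoteEvent { double time; int pitch; double velocity; };  // as above

    double aimTarget(const std::vector<NoteEvent>& notes,  // sorted by time
                     double t, double lookAhead)           // seconds of anticipation
    {
        const NoteEvent* prev = nullptr;  // most recent note at or before t
        const NoteEvent* next = nullptr;  // first note after t
        for (const NoteEvent& n : notes) {
            if (n.time <= t) prev = &n;
            else { next = &n; break; }
        }
        if (!prev) return next ? next->pitch : 0.0;      // before the first note
        if (!next) return prev->pitch;                   // after the last note

        double untilNext = next->time - t;
        if (untilNext >= lookAhead) return prev->pitch;  // too early to move yet

        double blend = 1.0 - untilNext / lookAhead;      // 0 -> 1 as the hit nears
        return prev->pitch + blend * (next->pitch - prev->pitch);
    }

In a sketch like this, the lookAhead value is just one more per-instrument parameter: it controls how far in advance the motion commits to the next note, and nothing about it depends on the target being a realistic instrument rather than an abstract shape.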