This is the second part of a four-part set of posts about re.flow. If you missed part 1, go here.
Hear and see the end result here: http://dolby.flow.gl.
This post is about the music and surround sound production for the project.
Musical Goals and Creation
An interesting aspect of this project was how different it is from a typical music sequencer. Sequencers usually provide per-note access to the music, which is great for musicians and hobbyists, but not so good for wider audiences. Instead, I wanted users to feel like they were inside the music but couldn’t go wrong, like a ‘super DJ.’ So this is a music sequencer with no loops: each track plays through until it ends. That gives a basis for maintaining musical form and structure that wouldn’t be possible if I gave the user complete, note-by-note or bar-by-bar control.
Before starting this project, I had created a song just for fun, after getting inspired by my teenage son and his appreciation for deadmau5’s “Strobe.” I called it A House Built on Sand.
I had found that these tracks were fairly diatonic and could be rearranged many ways, and that with some tweaking I could get most of them to sound good together in almost any configuration. But it was a bit too sterile, and although I had a very abstract project in mind, I wanted the textures and feel of the work to have a natural, organic quality. So I hired a wonderful vocalist, Desiree Goyette Bogas, and her amazing vocalist daughter, Lily, to humanize the piece.
I’m musically inspired by Imogen Heap, who often uses her voice as a wordless instrument. I wrote out a few parts, but in the end used almost entirely the improvised wordless melodies they created on the spot. For example, the resulting “vocal melody” clip is Lily improvising a line, and then Desiree improvising a harmony part. I added a third part where Desiree sang a vocal line I wanted in tight unison with a synth line I wrote, imitating the synth’s phrasing and scoops. To make it even tighter, I used Ableton Live’s ability to convert her melody to MIDI, so in the end I had Desiree imitating a synth that was itself imitating Desiree.
I also wanted a variety of whispers and mouth rhythms, and they brilliantly improvised these as they listened to rough tracks. These are highly spatializable, and I loved the idea of a surprising whisper in the user’s ear.
By the time we were done, I had dozens of instrumental parts and maybe 50 vocal parts. I whittled them down to clips in Ableton and experimented until I had the few that felt best together.
I premixed these vocal bits into four sequences: Vocal Melody, Vocal DooDops, Whispers, and Mouth Rhythms. Check them out one at a time at dolby.flow.gl.
I ended up with 16 clips mixed out of Ableton in stereo.
Getting to 5.1 Surround
When we get to the Web AudioContext in Part 3, I’ll describe in detail the several ways to do spatial audio. For this project, the audio motion could be hard-wired into the sound files themselves, so each file could be created as a 5.1 mix. At this stage of using Dolby tech, this is as far as the browser can go; to be clear, it is not doing real-time positioning of each sound. This is somewhat limiting, but if your project fits within this limitation, Dolby’s processing gives truly amazing spatialization.
I hadn’t done any 5.1 surround production before, so I thought I needed to upgrade some studio hardware. Let me be clear: this project was not about lots of hardware or an amazing studio. With the exception of the vocal tracks, I did the project on about $1200 of hardware in a home studio little bigger than my typical desk space. (The software used was more expensive, unfortunately!)
In fact, if you trust the visual mixing interface in Adobe Audition, you don’t really need a 5.1 studio. I felt that to do a good job, I should hear the mix as I worked, but that depends on your budget and quality expectations.
I needed to upgrade my computer audio interface to get 6 simultaneous outs. The MOTU UltraLite-mk3 Hybrid fit the bill, for about $500. And I needed matching speakers. I bought 5 JBL LSR305 5″ Powered Studio Monitors for about $135 each. And then I needed a subwoofer, so I just reused the one from my home surround A/V system. Plus wires, that is literally it, about $1200 in new outlay.
With the audio hardware in place and a bunch of stereo audio files to bring into Adobe Audition, I was ready to play. Note: Adobe Audition CC 2015 is, so far, the only version with the ability to mix to Dolby Digital Plus (ec-3).
Audition has this cool panning interface, within which you can drag sounds around the 5 speakers. I tried that, but decided to go with the key-framing interface so that I could have better editing capability after the fact.
I put multiple clips in a single session for convenience.
I exported them one by one by right-clicking a clip -> Export Session to New File -> Selected Clips. This gave me a 5.1 file.
Then I could save it out as an ec3 file.
Wrapping the ec3 files into mp4 files
The browser cannot play this ec3 file directly; it needs to be wrapped inside an mp4 file. Encoding.com can do this, but I had the Dolby team help me. Contact the developer team at firstname.lastname@example.org for advice on file processing; soon there will be some good options for this. I’ll update this article as more information becomes available.
[update: Dolby Developer is now providing free encoding for registered Dolby Developer members in a lightweight, easy-to-use tool that’s entirely cloud based. http://developer.dolby.com/News/Dolby_Audio_File_Encoding__Now_Free!.aspx]
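If you prefer to do the wrapping yourself, one option worth exploring is a plain container remux. This is a hedged sketch, not the workflow described above: it assumes an ffmpeg build with E-AC-3 support, and the filenames are placeholders.

```shell
# Wrap a raw E-AC-3 (.ec3) bitstream into an MP4 container without
# re-encoding: -f eac3 tells ffmpeg the input is a raw E-AC-3 stream,
# and -c:a copy keeps the Dolby Digital Plus audio intact.
ffmpeg -f eac3 -i clip.ec3 -c:a copy clip.mp4
```

Because the stream is copied rather than re-encoded, the remux is fast and lossless; whether the result plays depends on the browser’s E-AC-3 support, as discussed below.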
Once you’ve got the mp4 files, check them out in the standard media player application on Windows. Plug your computer’s HDMI into your home A/V system, where you’ll be able to hear the surround sound (you may need to select the Dolby Surround setting on the A/V system). Or just use headphones, which work great too.
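Before relying on in-browser playback, it can help to ask the browser whether it can decode the format at all. Here is a minimal sketch (the function name is my own; the codec string "ec-3" is the standard identifier for Dolby Digital Plus in an MP4 container):

```javascript
// Feature-detect Dolby Digital Plus (E-AC-3) playback support.
// Accepts any object with an HTMLMediaElement-style canPlayType method,
// e.g. document.createElement('audio') in a real browser.
function supportsDolbyDigitalPlus(audioEl) {
  // canPlayType returns '', 'maybe', or 'probably'.
  const answer = audioEl.canPlayType('audio/mp4; codecs="ec-3"');
  return answer === 'probably' || answer === 'maybe';
}

// In a browser, you would call it like:
// if (supportsDolbyDigitalPlus(document.createElement('audio'))) {
//   // load the ec-3 mp4 files
// } else {
//   // fall back to stereo AAC or similar
// }
```

A check like this lets you fall back to plain stereo files on browsers without Dolby support rather than failing silently.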
Now we need to get it to play in the browser, so moving on to Part 3: Web AudioContext and Sequencer.