In this article, I'll guide you through our audio workflow: from syncing and switching through your multitrack/dual-system audio recordings to removing unwanted noise to sweetening, cleaning up, and even disaster recovery. And although I'll repeat this sentiment in the clean-up section, I can't stress the following enough: Even the sweetest audio-sweetening skill set will take you only so far. No matter how much time you spend processing and cleaning up an audio track, nothing beats getting it right in the first place.
The Tools for the Job
At Ever After we use the latest Adobe Production Premium CS5.5 suite, which has some new, exciting, and (more importantly) timesaving audio features. In fact, the primary reason we upgraded from CS5 is CS5.5's new audio capabilities. That said, although CS5.5 will speed things up considerably, there's no process or idea mentioned here that you couldn't achieve in other NLE systems.
With a background in sound design, I've always felt that using the right tool for the job is half the battle won. In our case, this translates as follows: Your NLE is great at editing video (or at least I hope so), but nothing beats working with audio in a proper audio editor. No, I'm not suggesting you go out and buy yourself a shiny new Avid ProTools system; Adobe Audition (which ships with Production Premium), or even Audacity (free and multiplatform), will do this job just fine.
At a high level, our workflow goes like this: We sync our audio files in Premiere Pro and/or PluralEyes and open them as a multitrack project in Audition (in case you're not familiar with this application, it looks like any NLE screen you've seen minus the video tracks). Then we switch between the tracks and clean up and sweeten where needed.
That Syncing Feeling
If, like many event filmmakers, you've used 2-3 cameras and 2-3 separate audio devices, you'll find yourself working with at least 4 audio tracks in post. Although you could start by cleaning up all of these tracks, it's far more time-efficient to sync the lot of them, switch between your tracks, and only clean up the parts of a track you'll actually use. Syncing your tracks in the audio editor by visually inspecting your waveforms, then fine-tuning by ear, is certainly a possibility. In fact, that's exactly what I did until a few weeks ago. It takes some practice, but after a while you'll have it down to an art; syncing 5 (more or less) continuous sound sources takes me about 2 minutes. Of course, we have all heard of the magical application PluralEyes. Although I'd love to boast that my DIY approach works faster (unless you have a gazillion clips to sync), PluralEyes does free up my time. Who doesn't appreciate an extra coffee break?
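Under the hood, syncing by waveform comes down to finding the time offset that best lines up two recordings of the same event, which tools such as PluralEyes automate with cross-correlation. Here's a toy sketch of that idea in plain Python (the function name and sample data are mine for illustration; this is not PluralEyes' actual algorithm):

```python
def best_offset(reference, other, max_lag):
    """Return the lag (in samples) of `other` relative to `reference`
    that maximizes their cross-correlation."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = 0.0
        for i, r in enumerate(reference):
            j = i + lag
            if 0 <= j < len(other):
                score += r * other[j]
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Toy example: the camera track is the recorder track delayed by 3 samples.
recorder = [0.0, 0.0, 1.0, -0.5, 0.25, 0.0, 0.0, 0.0]
camera   = [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, -0.5, 0.25]
print(best_offset(recorder, camera, max_lag=5))  # → 3
```

Shift the camera track back by that many samples and the two waveforms line up, which is exactly what you'd otherwise do by eye and ear.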
Until the release of CS5.5, I didn't use PluralEyes, as it could not deliver the synced tracks to my multitrack audio editor (it supports only NLEs). Adobe changed all this in CS5.5 by allowing you to send your audio tracks into Audition as a multitrack project; thanks to Adobe's Dynamic Link "roundtripping" utility, you can make changes at a later time and move them between Audition and Premiere Pro without the need to create new mixdowns.
Creating the Mix
If all goes well, you should be looking at a multitrack project with all your tracks synced. So, where do you start the sweetening process? The whole point of using an audio editor is the ease of switching between your tracks and making the most of them.
Multitrack view in Adobe Audition
In most instances, you want to hear just one audio track at a time, as this ensures that your listener is fed only the sound that fits the image best. There is no point in having a nice lapel mic on your groom if you end up mixing it in post with the mic on your camera that also picked up the crying baby in the back of the church. By using keyframes, you can bring the needed audio channel in and take it out again. Sometimes you might have a few audio sources that sound equally good, but most of the time there will be a clear winner.
Here's a rule of thumb: Typically, the mic closest to the sound source should give you your best audio. This means that the sound of the vows will usually come from the lapel mic on the groom. However, if he fiddled with your mic, you might have to use your onboard shotgun mic instead. A second rule of thumb: Any live music should not come from a recording that used auto-gain, as this will flatten the music completely.
Sweeten your Recording
Before you subject a sound to any type of processing, I can't stress enough the importance of a decent pair of speakers! Any computer speaker set, whether a simple stereo pair or the latest 7.1 surround system, is typically aimed at gamers. It's designed to create an "immersive" experience, a concept that translates as follows: We'll process the sound in the speakers as well, to make those explosions even louder.
Monitor speakers (such as Edirol's MA Series, Genelec, Tascam, and the like) are aimed at people who want a true representation of their sound. They try very hard to make sure the output is not biased toward any type of sound. This is important, as you want your films to sound as good as possible on just about any system.
A good multiband compressor has the ability to enhance your sound. If overused, it can make your audio sound like a drowning AM radio; as with all things in our line of work, moderation is key. Just as your speakers can be frequency-biased and influence the sound, your microphone and, to an extent, even your audio recorder are not so innocent either.
The Multiband Compressor in Adobe Audition
With your audio recorder, the key is to record uncompressed WAV files rather than MP3s, as this avoids a lot of issues. Unfortunately, it doesn't fix your microphone's response. Every microphone has a different pickup pattern and certain frequencies to which it responds better or worse. Matching the right microphone with the right voice/sound is an art, and certain vocalists demand the use of specific mics when they perform.
In wedding filmmaking, this type of attention to detail is neither financially sensible, nor feasible during a shoot. I use a multiband compressor to minimize these issues and make the most of my live audio recordings. Let me start by saying there is no such thing as a magical recipe; the settings depend on the mic, the voice, the acoustics, and so forth. Your ears are (or should be) the judge of whether you're enhancing, needlessly changing, or even ruining your recording. The key words: subtleness and consistency.
Let's take a typical voice of the wedding day: a friend or relative giving a toast. Typically, this is untrained "talent" who drifts away from our mic and starts out loud, only to get quieter throughout his or her "moment." Most multiband compressors have an "enhance vocal" (or similar) preset, which is a great starting point. This process will (at least) try to limit the differences between the loudest and quietest parts of the recording, as well as amplify the sound frequencies of the human voice. In most cases, it will lower the overall sound level, so a slight adjustment of the master output gain is usually required. Although the fundamental frequencies of the human voice vary roughly between 80Hz-200Hz for a male voice and 150Hz-260Hz for a female voice, the harmonics above those fundamentals (200Hz-4,000Hz) are actually more important for our purposes, as they contain the characteristics of the voice (extremely simplified).
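The "limit the differences" part of that preset is downward compression: anything above a threshold has its excess reduced by a ratio, after which makeup gain lifts the whole signal back up. Here's a minimal single-band sketch in plain Python (illustrative only; the threshold, ratio, and makeup values are mine, and a real compressor adds attack/release smoothing and per-band processing):

```python
def compress(samples, threshold=0.5, ratio=4.0, makeup=1.2):
    """Naive per-sample downward compressor.
    Levels above `threshold` are reduced by `ratio`, then
    `makeup` gain raises the whole signal back up."""
    out = []
    for s in samples:
        level = abs(s)
        if level > threshold:
            level = threshold + (level - threshold) / ratio
        out.append(makeup * level * (1 if s >= 0 else -1))
    return out

loud_and_quiet = [0.9, -0.8, 0.1, -0.05]
print([round(s, 3) for s in compress(loud_and_quiet)])
# → [0.72, -0.69, 0.12, -0.06]
```

Note how the loud peaks are pulled down while the quiet samples are lifted, shrinking the gap between the toast's loudest and quietest moments.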
If you need to isolate a voice a bit more from background noise, this is the frequency range you'll amplify (hoping your background noise has a different pattern). I would typically split this area in two (approx. 200Hz-1,000Hz and 1,000Hz-4,000Hz) and add between 2dB and 4dB of gain to the voice. The lower-frequency band mainly carries your vowels, with the consonants sitting higher up in the spectrum.
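The band-splitting idea can be sketched with a crude crossover: a one-pole low-pass separates the signal into a low and a high band, the band you care about gets a few dB of gain, and the bands are summed again. This is a plain-Python illustration under my own assumptions (function names, crossover frequency, and gain are illustrative; a real multiband compressor uses far steeper filters):

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100):
    """First-order low-pass filter, used here as a crude crossover."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    y, out = 0.0, []
    for s in samples:
        y = (1 - a) * s + a * y
        out.append(y)
    return out

def boost_highs(samples, cutoff_hz, gain_db):
    """Split at `cutoff_hz`, apply `gain_db` to the high band, recombine."""
    gain = 10 ** (gain_db / 20)          # dB to linear amplitude
    low = one_pole_lowpass(samples, cutoff_hz)
    return [l + gain * (s - l) for s, l in zip(samples, low)]

# +3dB above ~1kHz: roughly the consonant region discussed above
processed = boost_highs([0.2, 0.5, -0.3, 0.1], cutoff_hz=1000, gain_db=3.0)
```

With the gain set to 0dB the signal passes through untouched, which is a handy sanity check that the split-and-recombine step itself is transparent.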
Another typical example is the recording of bands or string quartets. I have a great AKG condenser mic that I use for recording choirs and string quartets, but it is a bit bass-biased, so I would typically reduce the frequencies up to 150-200Hz by a few dB to make sure the cello doesn't drown out the violin.
Playing around with these settings is vital to get to know their impact as well as the frequency biases of your equipment. With regards to a string quartet, the impact of the changes can make a great quartet sound like amateurs as often as it may mask the lack of balance of actual amateurs, so be careful with your choices.
Whatever your choices, consistency is key. If the circumstances (your talent, your audio recorder, the mics you use) don't change, then neither should your settings. You don't want a best man who starts out with a Barry White-like bass to reappear later sounding like he's ended up on the other side of puberty. There's a reason a multiband compressor lets you save a preset!
Although the power of compression can hardly be overstated, in a real emergency (say, if all the mics in close proximity to your sound source failed) even this might not be enough. I'm sure most of us will have at least heard of the magical filter called noise reduction, which is found in virtually every NLE and audio editor. Basically, you feed it a sound that you don't want and then tell it to remove this sound from a recording. Most online examples demonstrate it on an alarm going off or something similar; the same approach works nicely on photographers' shutter clicks.
The Noise Reduction filter in Adobe Audition
However, I've also used this filter a few times where there was a major issue with my recording. Imagine you're shooting in a room with 2 podiums, and you're told the left one is the only one that will be used and that there is no venue sound system. I would mic my podium, have cameras in position (in the UK, during a ceremony, you're not allowed to move a camera even an inch), and then see my "talent" walk to the other podium. Imagewise? I can just repoint my camera. Soundwise? Welcome to my worst nightmare! The audio recorder did pick up a faint signal of my talent, but when amplified (using a compressor to amplify mainly the voice) there was still an awful lot of background noise. No hiss or band-pass filter seemed to help much, but using this noise reduction filter, I did manage to achieve a "usable" sound.
I'll be the first to say it was far from great, but you could understand clearly what was said and recognize the voice, and the background noise was low enough to go unnoticed to the untrained ear. When using such a filter for this purpose, the key is to select your noise sample carefully, making sure a noncontinuous noise (such as a cough) is not part of that sample. Then it all comes down to the typical balancing act: removing this noise versus making your talent sound like a robot.
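In spirit, these noise-reduction filters take a spectral "fingerprint" of your noise sample and subtract that magnitude profile from the recording's spectrum. Here's a heavily simplified single-frame sketch in plain Python with a naive DFT (my own illustration, assuming equal-length frames; real tools use overlapping windows, smoothing, and far more sophistication):

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def spectral_subtract(signal, noise_sample):
    """Subtract the noise magnitude profile from the signal's spectrum,
    keeping the signal's phase. Magnitudes are floored at zero.
    Assumes `signal` and `noise_sample` are frames of equal length."""
    S, N = dft(signal), dft(noise_sample)
    cleaned = []
    for s, n in zip(S, N):
        mag = max(abs(s) - abs(n), 0.0)
        cleaned.append(cmath.rect(mag, cmath.phase(s)))
    return idft(cleaned)
```

The "robot" artifact the balancing act guards against is what you get when the subtraction is too aggressive: isolated spectral peaks survive the flooring and ring audibly, so a clean, cough-free noise sample really does matter.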
Back to the NLE
However much or little you do to all your sound sources, make sure that the level of each section remains about the same. There are few things more annoying to a viewer than watching a film with her finger on the volume buttons!
Once you've created your perfect mix, you mix it down and send the result back to the NLE. If you just switched between tracks and haven't cut parts out, you can simply line it up with the start of your first audio track and all will be in sync. Mute all the other audio tracks, possibly link (or merge) this sound file with your key camera angle, and start your video editing. Reading this workflow back, I realize it can sound like a lot of work for a (possibly) rather subtle return. And I'm sure that when you try this the first few times, you will curse my name (if you can pronounce it).
The mixed-down WAVs back in the NLE (in this case, Premiere Pro CS5.5)
Once it's part of your routine, though, you'll fly through it. For a typical UK wedding with a 45-minute church ceremony and 45 minutes of speeches, I typically get this done in less than 1 hour. It's a time investment that can really push your natural sound to the next level.
In the End...
In the end, you'll be completing the final soundtrack of your wedding film. You'll want to make sure the audio levels have stayed consistent throughout your production, but you should also keep in mind that consistency isn't everything: You can ruin all this work by mastering your sound at too high a level. Your typical clients have consumer TVs and sound systems that they barely understand. Just about all of these systems have "virtual surround," "dynamic boosters," or internal systems in place to "enhance the sound."
Whatever the system, it will always try to boost certain frequencies for whatever reason, so it is good practice to leave a little bit of headroom when it comes to setting the final level value. Mastering your audio at 0dB is likely to cause a complaint at some point in the future if it hasn't done so already.
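Leaving headroom simply means scaling the final mix so its peak sits a few dB below full scale. A quick plain-Python sketch of peak normalization to a -3dBFS target (the function name and target value are mine; pick the headroom that suits your delivery format):

```python
def normalize_peak(samples, target_dbfs=-3.0):
    """Scale so the loudest sample hits `target_dbfs` (0dBFS = 1.0)."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)
    target = 10 ** (target_dbfs / 20)    # -3dBFS ≈ 0.708 linear
    return [s * target / peak for s in samples]

mix = [0.2, -0.95, 0.5]
safe = normalize_peak(mix)               # peak is now ≈ 0.708, not 0.95
```

Those few dB are the buffer that keeps a TV's "dynamic booster" from pushing your soundtrack into clipping.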
Niels Puttemans (niels at everafter videos.co.uk) runs Ever After Video Productions of Sheffield, U.K., with his wife, Sylvia Broeckx. 2009 EventDV 25 Finalists and winners of IOV Ltd. (Institute of Videography) and WEVA CEA awards for their wedding-day films, Niels and Sylvia were presenters at WEVA Expo 2010.