These new demands for live video delivery through webcasting (and, to a lesser extent, IMAG) have dramatically changed how an event is captured, directed, mixed, and delivered. The timeline has moved from a delayed turnaround to something as close to real time as physics allows: as with the speeds of sound and light, it still takes a few milliseconds for audio and video signals to be acquired, converted, transmitted, and received.
In the previous three articles in this four-part series on Producing Conference Video, I discussed the importance of production planning and setup, lighting and audio for video, and shooting presentations and directing live-switched events. Now, in this fourth and final installment, I'm going to discuss both traditional postproduction workflows, where the event is edited after filming, and live production workflows, specifically as they pertain to live streaming or webcasting.
The New Speed of Video
Postproduction workflows have come a long way since I first started digital video editing with a nonlinear editing computer program in 2001. Analog video tape was replaced by digital video tape, which is, in turn, being replaced by hard drive- and flash memory-based media. Despite these changes, all editors need to ingest their footage before they can start editing, and this used to be the first major bottleneck in the production workflow. As we'll explore, this is no longer the case. Capturing videotape is a linear process, so it takes one hour to capture an hour of footage. Tapeless workflows break away from linear capture times as digital footage can be transferred as fast as the media allows, usually in a fraction of the linear time.
Apples-to-apples comparisons get complicated when comparing new HD video formats to older standards, for the simple reason that most new codecs have a variety of data rate options. Older standards such as DV, DVCAM, DVCPro, and HDV are all 25Mbps, which is only slightly higher than the 24Mbps of the highest-quality prosumer AVCHD format. But as you move up the codec food chain, codecs come in 35, 40, 50, and even 100Mbps variants. Simple math will tell you that it would take about three times longer to transfer a minute of 100Mbps DVCPro HD footage than it does to transfer a minute of XF Codec footage from Canon's new XF300/305 video cameras when set to 35Mbps, but this ignores the speed at which the card itself can transfer footage, if the footage even needs to be transferred at all.
On the high end of speed is the new Sony SxS-1 memory card, which promises transfer speeds of 1.2Gbps. Be careful not to confuse gigabits per second (Gbps) with gigabytes per second (GB/sec), as 1.2Gbps translates into 0.15GB/sec (or 150MB/sec). These transfer rates don't really start to mean much until you translate them into a larger time unit: 1.2Gbps also equals about 9GB per minute. A bit more math tells me that it takes only 7.1 minutes to transfer a full 64GB card (the largest available capacity), which holds a full two hours of footage using the highest-quality 50Mbps codec in Sony's XDCAM lineup, or just over 4.7 hours of footage in the SD DV 25Mbps codec. Incredibly, this translates into an unprecedented 1.5 minutes to capture an entire hour of the same DV footage that used to take an entire hour to capture (and when you factor in tape rewind time, the SxS-1 footage would likely be ready to edit before the tape's footage was even ready to begin capturing).
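If you'd like to check this math for your own cards and codecs, here's a quick back-of-the-envelope sketch in Python. It assumes the card actually sustains its rated throughput and uses the decimal units that card makers advertise (1GB = 8Gb); real-world offload speeds will vary with the reader and computer.

```python
# Sanity-check the transfer-time math for a fast memory card.
# Assumption: the card sustains its rated 1.2 Gbps throughput,
# and 1 GB = 8 Gb (decimal units, as manufacturers advertise).

CARD_SPEED_GBPS = 1.2  # Sony SxS-1 rated transfer speed

card_speed_mb_per_sec = CARD_SPEED_GBPS * 1000 / 8  # megabytes per second
card_speed_gb_per_min = CARD_SPEED_GBPS * 60 / 8    # gigabytes per minute

def minutes_to_transfer(card_capacity_gb):
    """Minutes to offload a full card at the rated speed."""
    return card_capacity_gb / card_speed_gb_per_min

print(card_speed_mb_per_sec)               # 150.0 MB/sec
print(round(minutes_to_transfer(64), 1))   # 7.1 minutes for a 64GB card
```

Running it confirms the figures quoted above: 150MB/sec, and roughly seven minutes to empty a full 64GB card.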
So what's better than fast transfer times? No transfer times at all. Most of the flash memory formats can be edited straight from the card, without even being transferred to the computer's hard drive. Whether this is practical depends on the speed of the card and its corresponding ability to serve up the video files fast enough for smooth preview and editing.
Hungry for Editing Speed
Once the footage is ready for editing, the next factor in editing speed is the computer itself. The combination of CPU speed, RAM, and hard drive speed used to be the most important set of components but, as we'll soon find out, a fourth component is emerging as equally, if not more, important for some NLEs and software video encoders.
First, let's have a look at the individual components to get a better understanding of how they work together. The Central Processing Unit (CPU) is to the computer what the brain is to the human. Just as our brains have many parts that work in harmony, a CPU has different elements that work together to process data. In a modern Intel CPU, these elements are the number of cores, the clock speed, the front-side bus, the L2 cache, Hyper-Threading, and Turbo Boost. To help illustrate their roles, let's think of our computer as your local grocery store.
Now think of the CPU as the store's checkouts. The number of cores is akin to the number of checkouts that are open. The higher the number, the faster customers can check out. So a dual-core computer can handle twice as many customers as a single-core, and a quad-core four times as many.
CPUs are rated by their clock speed, so think of the clock speed as the speed at which each cashier can process items. A slower clock speed is like a cashier who has to manually punch in each item; a faster clock speed is like a cashier at a store with a scanner, where every item has a bar code.
The next element is the front-side bus (FSB). Think of it as the conveyor belt at the grocery store, running at a constant speed. A narrower FSB is like a narrow belt, so it takes longer for the load of groceries to arrive, while a wider FSB is like a wider belt that can deliver more groceries to the cashier at once. The final benefit of a wider conveyor belt is that the groceries can be arranged and scanned more efficiently.
The L2 cache is like the float (change in the till) with which the cashier makes change. Now imagine that every customer paid in cash with bills only. The larger float allows the cashier to give the most efficient change (a quarter instead of 25 pennies), and means that the cashier doesn't have to break as often to replenish the float.
Hyper-Threading is what allows the computer to act as if it has twice as many cores as it physically has. In our analogous grocery store, this looks like a cashier who now has two cash registers, two lines, and a trainee to operate the second register. So a modern quad-core computer with Hyper-Threading will display 8 cores in the task manager.
The final element in new CPUs is Turbo Boost technology, which is the equivalent of giving the cashiers cans of Red Bull so that they can process customers faster in short bursts when things get busy. All of these elements contribute to CPU speed but, without the next two elements in sufficient quantity, a computer would be operating under capacity.
The next element in our computer is RAM, or Random Access Memory. In our grocery store our inventory manager has a boardroom table that represents the RAM in our computer. His department managers gather around this table to discuss inventory-level reports and to place orders to replenish stock. A larger store requires a larger boardroom table with additional managers to keep up with the groceries that the cashiers are checking out. Modern computers paired with 64-bit operating systems can address more RAM than ever before, and having 12-24GB of RAM on an editing system is very common.
Finally, the size of the grocery store is akin to the size of the hard drive and the hard drive speed is how fast the stock boys can replace goods on the shelf at the same time as shoppers are loading their carts. If they run out of food, the whole process slows down. To speed up the process our grocery store features wider aisles, which is akin to joining hard drives in a RAID array.
What's important in our grocery store is that all these components work together to get the groceries from the supplier and into our customers' hands as quickly and efficiently as possible. If one of the systems has a lower capacity than another, it becomes a bottleneck in the process. So the fastest CPU in the world won't run at capacity if it doesn't have enough RAM, just as our grocery store can't check out customers if the shelves are bare.
From CPU to GPU
Up until recently these three components were the most important considerations for video editing speed. This all changed about a year ago with developments in both NVIDIA CUDA Graphics Processing Units (GPUs) and the ability of Adobe's Premiere Pro CS5 to utilize this capacity. Premiere Pro was the first NLE to offload much of the video editing work from the CPU to the GPU, and this difference accounts for a dramatic speed increase over the previous CS4 release.
GPUs are found on video cards and offer parallel processing that is much more efficient for video editing than that of a CPU. The speed improvements are nothing short of dramatic, especially in multi-layer compositions and those with effects. And even if you aren't editing on Adobe Premiere Pro CS5, Sorenson Media is allowing Avid and Final Cut Pro users to take advantage of GPU processing improvements with their Squeeze 7 encoder, which works as a stand-alone or direct plugin on those editing systems.
These speed improvements have helped reduce the traditional postproduction bottlenecks of tape capture and computer processing time, and video production turnaround time is now more a function of the editing speed of the operator than of the speed of the computer. The result is the growth of the same-day-edit market, where the video producer edits one or even several videos on the day they were filmed and plays them back to the attendees. The SDE is very common in wedding videographer circles, but as far back as four years ago, EventDV writers Chris and Laura Randall of Edit1Media recounted to me their SDE experiences with corporate clients. Two years ago, I filmed a conference in Walt Disney World where the Disney Institute led my client's attendees through a team-building activity that involved creating mock commercials during the day, and their editors quickly turned the videos around for playback in the evening. So popular were the videos that the attendees asked that they be replayed again that same evening. But what if you're asked to perform your video live, in real time, as the event is happening?
Live Streaming Components
In my Autumn 2010 installment of this series, I discussed live video switching and, prior to that in my Summer 2010 Edirol review, I discussed my experiences using the Edirol line of video mixers. So rather than repeat myself I'm going to continue where those two articles left off when it comes to delivering live video edits.
In the past I delivered most conference videos on DVD (or even VHS in the early days) but now I'm delivering more and more of my work online. Putting video online is nothing new for most video producers and thanks to sites like YouTube, Vimeo, and Facebook, sharing video online is free (or inexpensive with pro accounts and OVPs) and relatively easy to do.
Now the big challenge and area of business growth for conference video producers is live streaming or webcasting their clients' videos over the internet. Live video streaming with multiple camera angles and audio sources requires both a video mixer and a soundboard to mix down the audio and switch between camera angles. To further enhance the live video, many video mixers now feature downstream keyers that allow you to insert computer-generated graphics, like logos or titles, or to replace a greenscreen background with a virtual set, by using the luma or chroma key capabilities of the mixer. Generating the graphics can be as simple as connecting a laptop to a video mixer that has VGA inputs, and some video mixers even have memory card slots where graphics can be preloaded, freeing up room on the tech table and eliminating the need for a laptop.
Panasonic's AV-HS450 switcher features a downstream keyer for inserting CG content in live-switched feeds.
The next step is to take the live-edited analog feeds, video from the video mixer and audio from the soundboard, and convert them to a digital signal that your computer can accept and stream. A single-camera feed can sometimes be fed directly into a computer with a long enough USB or FireWire cable, but in multicamera shoots, additional measures need to be taken.
Video and audio signals can be converted to digital signals using a variety of methods. There are inexpensive RCA and S-Video-to-USB devices (generally marketed as VHS to DVD converters by companies such as Pinnacle). Some video cameras and most recording decks have built-in analog-to-digital converters that process audio and video cable signals to FireWire, and on the high-end, companies like AJA, Blackmagic Design, Grass Valley, ViewCast, and Digital Rapids make a variety of PCI express video cards that convert analog audio and video signals to digital signals that can instantly be webcast.
The AJA Kona LHi
For the uninitiated, all these devices and connections can be rather complicated to set up and get to play nicely with each other, so I'm going to present two all-in-one options for getting audio and video signals into your computer with the fewest components possible.
Roland recently launched the VR-5 A/V mixer, and I have to admit that I am very seriously considering buying one for myself, even though I already have all the individual components that I require to stream live video. The reason is that this one device combines several large pieces of equipment into a single unit and is much easier to transport, set up, operate, and store. My only hesitation is that it lacks a tally light connection; I own a Datavideo ITC intercom system with integrated tally light controls which I regularly use, so I'm hesitant to take a step backwards in functionality or buy a second unit that duplicates much of what I currently offer.
Feature-wise, this one A/V mixer combines a 5-channel video switcher (3 video sources, a scan converter/PC input, and audio/video/photo playback from SD) with an audio mixer (2 mono and 5 stereo mixable audio channels) and a pair of built-in LCD monitors (with touch control for easy video source selection), and it allows the video switcher operator to manipulate three video layers using the downstream keyer (chroma- and luma-key). In addition to HDMI, S-Video, and BNC outputs, it also records the output internally as MPEG-4 video with MP3 audio on SD/SDHC cards and, if that weren't enough, the VR-5 is an industry-first USB video/audio-class device for web streaming. What this means is that you can connect the Roland VR-5 A/V mixer to your computer with a standard USB 2.0 cable. I still might purchase one, despite the lack of a tally light connection, as one of my clients wants me to produce a semiannual series of six webcasts in six small northern BC fishing communities, and not having to pack a separate audio mixer, record deck, scan converter, several monitors, and an analog-to-digital audio and video conversion solution means that I just might be able to bring all my equipment on the plane within normal baggage allowances.
Taking forward and backward integration one step further is NewTek with its TriCaster series. These small-form-factor computer boxes combine multiple audio and video inputs, pre-recorded playback, keying, recording, and ultimately video streaming direct to the streaming server. Rather than rattle off the specs from their wide lineup, let me tell you a quick story about the last time I used a TriCaster. Not too long ago I was asked to film and produce a webcast for an executive at a large computer company (one large enough that you would likely know its founder's name). On the way to this out-of-town webcast I received a phone call from my client in a panic, explaining that all of the equipment being couriered would not arrive on time, and asking if I could figure out how to replace everything within a few hours, in such a manner that their client couldn't tell we were working on a Plan B.
The NewTek TriCaster TCX-D850
Now this wasn't just replacing a single item like a video camera, burnt-out light bulb, or forgotten audio cable. I was an hour out, the IT contractor hired to take care of the video streaming had just arrived on location, and between us we had only a single laptop, along with access to hotel-provided Ethernet cable and extension cords. My first thought was to find a NewTek TriCaster, because it would replace several specialized components and we simply did not have the time to purchase or rent from several providers (not that there were many within a reasonable distance anyway), let alone get them all to play nicely with each other.
That turned out to be the right call as only five minutes from the webcast hotel lived a local webcast producer. She had a basement studio and recently had taken out an ad on Craigslist for a part-time webcast operator for her Tricaster system. We raced over to her basement studio and quickly negotiated the rental of her Tricaster, video camera, tripod, wireless lavaliere, and a couple of video cables. The setup and testing were a breeze compared to the usual multi-component setup and cabling that I had become accustomed to, even without consulting the manual, and before the presenter even arrived, we had completed our tests and were ready to stream live video. This simply would not have been possible without an all-in-one system like the Newtek TriCaster.
Serving up Streaming Video
The final step in getting your video streaming live online is to send the signal to a streaming server using an internet connection. One of the keys is to ensure you have sufficient upload capability in your internet connection. Most ISPs spend their marketing efforts praising their services' download speed, as downloading is what most users spend the majority of their time online doing, but a webcaster has to pay attention to the upload speed. My current package is only 1Mbps up compared to 15Mbps down, and on a connection like this I would only stream at 512Kbps in order to account for normal fluctuations and to minimize the occurrence of dropouts or stalling.
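As a rough rule of thumb, you can budget your stream at about half of your measured upload speed, which is roughly what my 512Kbps-on-a-1Mbps-uplink figure works out to. Here's a minimal sketch in Python; the 0.5 headroom factor is my own assumption, not an industry standard, and you should adjust it for how stable your connection actually is.

```python
# Rough rule of thumb for picking a live stream bitrate:
# leave headroom below your measured upload speed so normal
# fluctuations don't cause dropped frames or stalling.
# The default 0.5 headroom factor is an assumption, not a standard.

def safe_stream_kbps(upload_mbps, headroom=0.5):
    """Suggested stream bitrate in Kbps, leaving headroom for jitter."""
    return int(upload_mbps * 1000 * headroom)

print(safe_stream_kbps(1.0))   # 500 Kbps on a 1Mbps uplink
```

For example, a venue with a 5Mbps uplink could comfortably support a 2,500Kbps stream under the same rule, enough for a decent HD picture.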
Now, while it is possible to purchase, license, and operate your own streaming server, new or smaller webcasters will probably want to subscribe to a monthly package or pay-per-use model from a live streaming platform like Livestream, Justin.TV, or Ustream. Regardless, most services will use one of the following streaming servers: Adobe Flash Media Streaming Server, Apple QuickTime Streaming Server, RealNetworks Helix Streaming Server, Wowza Streaming Server, or Microsoft Windows Media Streaming Server. These services take your video feed and distribute it to the viewers who are watching your video live over the internet. Most webcasters will embed the live video stream in a custom webpage, but streaming platforms automatically provide a branded, YouTube-like webpage from which the live stream can be viewed.
One final note on live streaming. Just be aware that there is a slight delay in live streaming, so if you plan on connecting two or more sites simultaneously so that they can interact, be aware that they may not be able to have a completely fluid back-and-forth conversation.
One live streaming platform option: Ustream
Well, this concludes my four-part series on Producing Conference Video. I hope you have enjoyed reading the articles as much as enjoyed writing them for you. I'll leave you with one last thought: Take this info and run with it, even if you are going to apply it to a different niche market. What's important is that you use it to help you grow your current business in the markets that you service and expand into additional niche markets that have strong growth potential and are a great fit for you and your company. Happy streaming!