If you're a working event videographer making the jump to HDV, chances are you won't have to change editing systems when you buy that new HDV camera. This is good news, since most of us ply our trade in our NLEs and would be taking several steps backward in our work if we had to switch to something else. Today, nearly all popular NLEs support HDV, and most systems that can handle heavy-duty DV work can also manage HDV. This is a huge benefit for the budget-conscious event shooter, since it means you can upgrade to HDV without having to make significant up-front investments on the postproduction end—which is not the case with other HD formats.
But just because your system has the horsepower to edit HDV, and your NLE can capture and manipulate HDV source files, doesn't mean it will all work smoothly right out of the gate. One of the first decisions you'll have to make when working with HDV footage is what editing format you'll use: specifically, whether you'll edit with an intermediate codec, or work with the footage in the same format in which you acquired it. The latter can be tricky, for reasons we'll explore in this article.
Why is an intermediate codec needed, or at least often advisable, for HDV editing? Basically, because HDV was designed as an acquisition format rather than an editing format, and decoding it demands intensive data manipulation. So let's look quickly at what HDV is and how this MPEG-2-based format works.
HDV comes in two flavors, HDV1 and HDV2. HDV1 is a 720p, 16:9 format: 720 horizontal lines of progressive video in a true widescreen aspect ratio. HDV2 is specified as a 1080i, 16:9 format: 1080 horizontal lines of interlaced, widescreen video. Both use 4:2:0 chroma sampling (note that NTSC DV actually uses 4:1:1; it's PAL DV that shares HDV's 4:2:0 sampling). A 1080p progressive format is emerging as of this writing, too, in the latest cameras announced by Sony. HDV was developed to offer an affordable alternative to full HD acquisition for prosumers and smaller studios who were previously limited to SD (standard definition) video, which is 720x480 with a 4:3 aspect ratio, based on traditional broadcast TV standards.
In addition to affordability, the other hallmark of HDV is its compatibility with the ubiquitous standard MiniDV tape. HDV cameras are in fact two cameras in one, offering both 16:9 SD DV and 16:9 HDV recording. Several tapeless options are also emerging: direct-to-disk recorders (DDRs) that connect to cameras and computers alike as FireWire hard drives, providing either faster-than-real-time ingest to a PC or Mac, or letting users edit directly off the drive in their NLE with no ingest step at all. Tapeless acquisition was available with SD, but never caught on widely; the recording devices were third-party add-ons that many wedding and event videographers didn't trust. Now we are seeing tapeless recording built into the cameras themselves.
HDV uses MPEG-2 compression to bring the bit rate for HD video down to 25Mbps (the same as DV) for 1080i HDV2, and 19Mbps (more than 20% less than DV) for 720p HDV1. All that extra compression, and the way the data is compressed, should be of great concern to the producer/editor who is considering editing in native HDV format, as we shall see. The MPEG-2 scheme packs far more picture information into a data stream that devices can store and move at DV-like speeds. For our purposes, I'll offer this very basic explanation. The MPEG-2 HDV structure does not store all the data for every frame of footage; that would be a lot of data to transport from device to device and to encode and decode on the fly. To conserve bits, and to acquire the HD image at bit rates less than half those of other HD formats, HDV uses three types of frames, called I, P, and B-Frames, and combines them to rebuild the full image of every frame of footage quickly. An I-Frame encodes all the data for that frame. P-Frames are encoded by predicting data from previous frames. B-Frames encode the image by making predictions from both previous and subsequent frames. Thus, I-Frames are the only frames that can stand independent of any other frames in the footage.
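To put those bit rates in storage terms, here's a quick back-of-envelope sketch. The 100Mbps DVCPRO HD figure is included only as an example of the higher-rate HD formats mentioned above:

```python
# Back-of-envelope storage math for the bit rates quoted above.
# Uses decimal units: 1 Mbps = 1,000,000 bits/sec, 1 GB = 10**9 bytes.

def gb_per_hour(mbps):
    """Gigabytes consumed by one hour of video at a given megabit rate."""
    return mbps * 1_000_000 * 3600 / 8 / 1_000_000_000

print(f"1080i HDV2 (25Mbps):  {gb_per_hour(25):.2f} GB/hour")
print(f"720p HDV1 (19Mbps):   {gb_per_hour(19):.2f} GB/hour")
print(f"DVCPRO HD (100Mbps):  {gb_per_hour(100):.2f} GB/hour")
```

One hour of 1080i HDV lands around 11GB, roughly what an hour of DV takes, while a 100Mbps format consumes about four times that.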
These frames are organized in a tight mathematical structure called a Group of Pictures, or GOP. HDV uses an MPEG-2 LongGOP format. (There are ShortGOP formats, but we will only discuss the standard HDV implementation here.) The GOP is the group of I, P, and B-Frames, arranged in a specific order, that combine to create the full footage of your HDV video. The figure at left shows an example of how this GOP structure may look in a section of HDV video's encoder input. The I-Frame has all the data. The B-Frames look at data from frames before and after them. The P-Frames look only for data in the frames preceding them. Does the video format do this on its own? No, it needs a CPU (in a computer, a camera, or a tape deck) to process all that data. That's what's somewhat deceptive about HDV's bit rate being comparable to (or less than) MiniDV's: even though it's the same number of bits and the same number of frames (comparing 60i DV to 60i HDV, or 24p DV to 24p HDV), the HDV video stream requires the CPU to do much more processing. Unlike the data that constitutes an image in a DV stream, the data in an HDV frame (except for the I-Frame) isn't all "there." To construct a complete image from a given frame that you can preview, edit, or apply effects to, the processor has to gather data from numerous neighboring frames. This is of real concern to editors, as we will see.
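The borrowing rules above can be sketched in a few lines of Python. This is a toy, closed-GOP model of one common 15-frame NTSC pattern, not a real decoder; actual encoder output varies:

```python
# Toy model of a 15-frame MPEG-2 LongGOP in display order.
# "Anchors" are the I- or P-Frames a given frame borrows data from.
gop = list("IBBPBBPBBPBBPBB")  # one common NTSC pattern (assumed here)

def anchors_for(i):
    """Return the indices of the frames that frame i depends on."""
    if gop[i] == "I":
        return []                      # I-Frame: fully self-contained
    prev = max(j for j in range(i) if gop[j] in "IP")  # nearest earlier anchor
    if gop[i] == "P":
        return [prev]                  # P-Frame: predicts from the previous anchor
    nxt = next((j for j in range(i + 1, len(gop)) if gop[j] in "IP"), None)
    return [prev] + ([nxt] if nxt is not None else [])  # B-Frame: both directions

for i in range(5):
    print(i, gop[i], anchors_for(i))  # e.g. frame 1 (a B-Frame) depends on 0 and 3
```

Running this shows that only frame 0, the I-Frame, stands alone; every other frame has to reach into its neighbors before it can be displayed.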
So how do B- and P-Frames get built from data that doesn't exist at that moment in time? And why should an editor care? The B- and P-Frames look at other frames, see what's changed, and record only the changed data; anything that hasn't changed is borrowed from other frames. For example, imagine shooting an orange streamer at a county fair against a blue sky with some clouds. The B- and P-Frames will record only the changes created by the movement of the orange streamer as it waves in the wind. All the blue of the sky and the details of the clouds would be borrowed from other frames, not stored in the B- and P-Frames. Upon decoding, that borrowed data fills in what the B- and P-Frames are missing. What this means to the editor is that much more time and power are needed for the computer to process and match up all this data. Does that editor want to wait on all that processing?
Think about that third B-Frame in our example above. It has to wait until at least six more frames (that it borrows data from) are processed before it can be fully rendered and viewable as a frame of data. This concerns editors because your editing computer has to do all that work while still giving you real-time playback. This is no knock on the video quality of HDV, mind you, just an acknowledgement of the challenges of using a format that was ingeniously designed for video acquisition in a postproduction environment where its compression method presents a whole new set of challenges. The strain on your CPU to handle all that processing, and the demand on your NLE to use CPU resources efficiently, should be getting clearer now. The good news is, it's not as much of a problem as it once was.
The LongGOP Editing Dilemma
The dilemma inherent in editing this LongGOP MPEG-2 format is that an NLE editing HDV natively can only make edits on I-Frames. Some NLEs have evolved so that they can restructure the GOP on the fly, allowing edits on any frame. Doing this quickly eats up a significant number of CPU cycles, so some NLEs require top-end hardware to edit HDV natively. Many older computers are just not up to snuff.
Some NLEs do native HDV editing on the fly on older machines, but you have to wait for them to "conform" the GOP, a process similar to rendering. It may only take a few seconds at a time, but when you edit a two-hour feature with dozens of those "conforming" periods, it adds up to real extra time in the overall production process.
Keep in mind that editing HDV means processing 1080 lines of video, as opposed to NTSC-DV's 480 lines. That's a lot more information to manage, theoretically in the same amount of time.
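The arithmetic is sobering. An HDV2 frame is stored as a 1440x1080 raster (stretched to full 1920x1080 width on display), against NTSC DV's 720x480:

```python
# Per-frame pixel counts: 1080i HDV (1440x1080 stored raster) vs. NTSC DV.
hdv_pixels = 1440 * 1080   # 1,555,200 pixels per frame
dv_pixels = 720 * 480      #   345,600 pixels per frame
print(f"An HDV frame carries {hdv_pixels / dv_pixels:.1f}x the pixels of a DV frame")  # 4.5x
```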
A very smart solution to this problem is the digital intermediate codec (IC). These codecs were developed some time ago, when Hollywood began editing feature films on computerized NLE systems. In this process, the original film is scanned, frame by frame, by very expensive machinery and translated into a digital file that can be used by an NLE for faster, easier editing. From there, an edit decision list (EDL) is generated and handed to the negative cutter, who follows those instructions to cut the physical film negative, splice it, and send it off to be developed as a finished film product.
A similar solution has been used from the outset with HDV (before Pinnacle Systems introduced native LongGOP editing with Liquid Edition 6.0, it was the only option for working with HDV footage in mainstream NLEs). Using HDV ICs like CineForm's Aspect HD (left), an NLE can arguably edit much faster than with native HDV. Some editors prefer to stay in the native HDV format due to image-quality concerns associated with the additional encoding generation required to convert ingested footage to an IC, yet most ICs reveal very little image-quality degradation, if any. The other issue is storage space, since the IC adds frame information and thus creates larger files (more detail on this below). Whether to edit native or use an intermediate codec is a choice each editor and producer must make for themselves.
Here's how the intermediate codec works. It takes HDV's LongGOP format and transcodes the footage so that all the frames are I-Frames. If every frame contains all the data of that image, just like regular DV does, then you can cut on any frame without any conforming issues to bog down your CPU. Once done, the footage can be output back to native HDV LongGOP format by transcoding all the I-Frames back to I, B, and P-Frames. (Naturally, you can also export to DVD or any other output format your NLE supports.)
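In GOP terms, the transcode looks like this. A toy sketch of the idea, not any particular codec's implementation:

```python
# Toy illustration of the intermediate-codec idea: once every frame is an
# I-Frame, any frame is a legal cut point with no GOP restructuring needed.
def cut_points(frames):
    """Indices where an editor can cut without conforming the GOP."""
    return [i for i, f in enumerate(frames) if f == "I"]

native_gop = list("IBBPBBPBBPBBPBB")    # native HDV LongGOP
intermediate = ["I"] * len(native_gop)  # after transcoding to an IC

print(cut_points(native_gop))    # only the leading I-Frame: [0]
print(cut_points(intermediate))  # every frame: [0, 1, 2, ..., 14]
```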
There are three drawbacks to the use of intermediate codecs. First is the time the NLE takes to transcode before and after the editing is done. Depending on the system and the size of the project, this can be a very long time or a very short one.
The second drawback is that the video files created by the IC are quite a bit larger than those of the native HDV files. This is because the HDV native file is only storing part of the data for most of its frames. The IC is storing all the data for all of its frames, thus requiring more hard drive storage space.
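How much more space? A rough sketch; the 2.5x multiplier below is an assumed illustration, since actual IC data rates vary by codec and quality setting:

```python
# Rough project-storage comparison. Native HDV2 runs at 25Mbps; the
# intermediate codec's data rate is ASSUMED at 2.5x native for illustration.
def project_gb(hours, mbps):
    """Decimal gigabytes for a given number of hours at a given bit rate."""
    return hours * mbps * 1_000_000 * 3600 / 8 / 1_000_000_000

hours = 6  # e.g., raw footage from a three-camera wedding shoot
print(f"native HDV:          {project_gb(hours, 25):.0f} GB")
print(f"intermediate (2.5x): {project_gb(hours, 25 * 2.5):.0f} GB")
```

Even at a generous multiplier, the totals stay well within the capacity of an inexpensive modern drive, which is why this drawback matters less every year.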
Finally, there is the issue of image quality. Simply because the data is digital does not mean there is no generational loss; the encoding and decoding process introduces it. Encoding takes a lot of data and compresses it, often by stripping away nonessential data, to make the media easier to edit and distribute. Every time a file is re-encoded, something is lost. The impact here is no different from rendering a DV project as a DV-AVI file to edit in another program, thus forcing an additional render later on: even though the data isn't being compressed to a lower-quality format, there's still some inevitable generational loss. The debate over how much image quality is lost using ICs for HDV rages on.
Since the technology of the HDV IC has come so far in a very short time, these three issues may not concern many professional production houses much at all. Hard drives are very affordable, and the transcoding time in most top-end NLEs is relatively short. Most NLEs have refined their HDV ICs so that image-quality loss is minimal, and in some cases imperceptible. Again, editing one way or the other is a decision the producer and editor must weigh for themselves; there are no hard and fast rules here.
It's also worth noting that most of today's high-end computers and NLEs are now able to edit native HDV with very little, if any, wait time for conforming and rendering. Still, the intermediate codecs for HDV editing are valuable to those working on systems that aren't quite powerful enough to support a real-time editing experience. Working with ICs as opposed to native HDV is also worth considering if you have a system that doesn't have enough horsepower to mix video from other formats with the native HDV video files.
In the wedding and event video field, producers tend to work on a tighter budget than, say, MGM Pictures does. Thus, buying a new computer, or set of computers, just to get this functionality may not be reasonable. If your existing system can't handle native HDV, but will perform smoothly if you work with IC-converted video instead, it's probably not worth buying a new system just to work in the native format. On the other hand, if you're on a newer computer with the latest version of Apple Final Cut Pro, Adobe Premiere Pro, Grass Valley EDIUS, Sony Vegas, or Avid Liquid or Xpress Pro, it may not be an issue, as you can edit quickly and easily in the native HDV format.
Editing HDV alongside footage from other video formats has become increasingly popular in recent months. Since HDV is still not as widespread in the wedding and event field as it is in film and broadcast, those using HDV often need to mix other formats into the same timeline. A videographer going out on a three-camera shoot with two trusty 3CCD DV cameras and one newly added HDV model, then needing to cut footage from all three together in post, is a common scenario. Again, most popular NLEs allow for mixed-format editing.
In these cases, an HDV IC may help; it depends on your NLE and the hardware you're running it on. I've edited DVCPRO-HD and regular NTSC-DV on the same timeline in Final Cut Pro 5.0 on a dual 2GHz G5 Mac (left). There was some real-time playback, but if I stacked up enough text clips and filters, I had to render. I finished the same project on a Quad Core G5 2.5GHz Mac with 4.5GB RAM, and had full real-time playback.
Each NLE has its own minimum requirements, too numerous to cover in every detail here. The important thing to remember is that if you're going to edit a mixed-format project, preplanning will save you a lot of headaches. Here's a list of questions you should address:
1. How does your NLE handle each format?
2. What CPU speed do you have in your machine, and how many processors do you have?
3. How much RAM will you need?
4. Do you have enough hard drive space? Are those hard drives fast enough, and of the proper drive type to feed your NLE fast enough?
5. How long is your project?
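Item 4 above, drive throughput, is easy to sanity-check with arithmetic. The stream count below is illustrative:

```python
# Sustained read throughput needed to feed N simultaneous video streams.
def mb_per_sec(mbps):
    """Convert a megabit-per-second video rate to megabytes per second."""
    return mbps * 1_000_000 / 8 / 1_000_000

streams = 3  # e.g., a three-camera multicam edit
need = streams * mb_per_sec(25)  # three native HDV streams at 25Mbps
print(f"{streams} native HDV streams need ~{need:.1f} MB/s sustained")
```

Three native HDV streams need under 10MB/s sustained, comfortable for a single 7200RPM drive; an I-frame-only IC at a higher data rate multiplies that requirement accordingly.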
There are numerous variables that you'll need to consider. You'll have to do some homework to get the answers you need.
I would like to point out that the present state of video editing technology is changing quite rapidly. An NLE might be on the cutting edge today, but be left in the dust by its competitors tomorrow. It's a leapfrog game that all the vendors play. The goal is to find a system that works in a manner that the editor is happiest with, and that the studio believes it can use for the long run, in order to produce a decent ROI.
If you have a newer system (computer, NLE, and necessary peripherals) then editing HDV natively should not be any problem. But if you're on an older system and can't afford a massive upgrade at the moment, simply upgrade your NLE (if necessary) to give you the HDV intermediate codec you need.
Grass Valley EDIUS Pro, Adobe Premiere Pro, Sony Vegas, Avid Xpress Pro, Avid Liquid, and Apple Final Cut Pro all edit HDV natively. Yet a little reading between the lines of the sales material is in order: how effectively each of these NLEs handles native LongGOP HDV without an intermediate codec depends on the hardware it's running on. For example, here are Final Cut Pro's and Premiere Pro's minimum requirements for native HDV editing:
- Final Cut Pro uses the built-in Apple Intermediate Codec (AIC), or will edit natively in the HDV format. As with other NLEs, you configure your capture settings for your HDV camcorder to the AIC codec in FCP and capture; all transcoding is done on the fly. There is also the Lumiere codec for FCP, but since Apple now includes its own AIC, Lumiere is relevant only to older versions of Final Cut Pro. According to Apple, to edit native HDV you need a minimum of a 1GHz single or dual processor, 1GB RAM (2GB recommended), and one of Apple's recommended graphics cards.
- Adobe Premiere Pro can also edit in the native HDV format. It requires a minimum of 2GB RAM, a 3.4GHz Pentium 4 processor, and a dedicated 7200RPM hard drive for native HDV editing. If you have a slower processor, you may want to look into CineForm's Aspect HD, a very good third-party HDV IC.
Sad to say, that old 500MHz computer that still edits SD video without a hitch is about to falter once you load it up with a bunch of native HDV footage. That's where the beauty and magic of the HDV intermediate codec comes in handy.
It all boils down to configuring a system that works for your company's needs. Native HDV editing is becoming more and more popular, and is being implemented as a viable editing platform in most popular NLEs. The HDV ICs are beginning to be used less and less as computer hardware and NLE software power increases.
My best advice is to go see professional editors who are doing what you need to do, and look at the systems they're using. Look at several different systems. Actually sit with them during an edit if you can.
A lot goes into building a good native HDV and mixed-format editing system—more than we can cover in this limited space. My hope is that this article helps you understand the terminology and technology well enough to evaluate your options intelligently. At the end of the day, for the professional editor, it all comes down to how it affects the bottom line.