Broadly speaking, there are two types of video editing. One involves editing directly from one tape to another and is known as linear editing. The other requires that the sequences to be edited be transferred to hard disk, edited there, and then transferred back to tape. This method is referred to as non-linear editing (NLE), and it has become the standard approach in video production.
When video first made its mark on broadcast and home entertainment, the most convenient way to edit footage was to copy clips in sequence from one tape to another. In this linear editing process the PC was used simply for controlling the source and record VCRs or camcorders. In broadcast editing suites, elaborate hardware systems were soon devised which allowed video and audio to be edited independently and visual effects to be added during the process. The hardware was prohibitively expensive and the process of linear editing gave little latitude for error: if mistakes were made at the beginning of a finished programme, the whole thing would have to be reassembled.
For non-linear editing the video capture card transfers digitised video to the PC’s hard disk, and the editing is then performed entirely within the PC, in much the same way as text is assembled in a word processor. Media can be duplicated and reused as necessary, and scenes can be rearranged, restructured, added or removed at any time during the editing process. All the effects and audio manipulation that required expensive add-on boxes in the linear environment are handled by the editing software itself. NLE requires only one video deck to act as player and recorder and, in general, this can be the camcorder itself.
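The word-processor analogy can be made concrete with a small sketch. The following Python fragment (the file names and field names are purely illustrative, not those of any particular editing package) models a timeline as an ordered list of references into captured media – which is essentially what an NLE’s edit decision list is. Rearranging, trimming or deleting scenes changes only the list; the source footage on disk is never touched.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    """A reference into source media: nothing is copied, only pointed at."""
    source_file: str   # captured file on the hard disk (name is illustrative)
    in_point: float    # start of the used range, in seconds
    out_point: float   # end of the used range, in seconds

# A timeline is just an ordered list of clip references (a simple edit list).
timeline = [
    Clip("capture_001.avi", in_point=12.0, out_point=18.5),
    Clip("capture_002.avi", in_point=3.0, out_point=9.0),
    Clip("capture_001.avi", in_point=40.0, out_point=44.0),  # source reused
]

# Rearranging scenes is a list operation; the media on disk is untouched.
timeline[0], timeline[1] = timeline[1], timeline[0]

# Removing or trimming a scene is equally non-destructive.
del timeline[2]
timeline[0].out_point = 8.0

total = sum(c.out_point - c.in_point for c in timeline)
print(f"Programme duration: {total:.1f} s")
```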
The trend towards NLE began in the early 1990s – encouraged by ever bigger, faster and cheaper hard disks and ever more sophisticated video editing software – and was given a massive boost in 1995 with the emergence of the DV format, developed by a consortium of manufacturers that included Sony and Panasonic.
While MPEG-2 video has already found wide use in distribution, problems arise in production, especially when the video needs to be edited. If it becomes necessary to cut into a data stream, B and P frames are separated from the reference frames on which they depend and can no longer be decoded. As a result, MPEG-2 video (from, say, a newsfeed) is decompressed before processing. Even when producing an MPEG-2 stream at a different data rate – going from production to distribution, for example – the material needs to be fully decompressed. Here again concatenation losses, the quality penalty incurred by repeated decompression and recompression, rear their ugly head, so most broadcasters and DVD producers leave encoding to the last possible moment.
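A toy model makes the cutting problem visible. The sketch below is a deliberate simplification – real MPEG-2 streams store frames in coded order rather than display order, and open GOPs complicate matters further – but it shows why a frame-accurate cut is only ‘clean’ at an I frame: cut anywhere else and the following P and B frames have lost the reference frames they need.

```python
# Illustrative model only: a display-order GOP pattern such as IBBPBBPBBPBB.
# P frames depend on the preceding I/P frame and B frames on neighbours in
# both directions, so removing the material before a cut orphans them.

frames = list("IBBPBBPBBPBB" * 2)  # two 12-frame GOPs

def safe_cut_points(frame_types):
    """Cut points that leave the downstream material decodable: I frames."""
    return [i for i, f in enumerate(frame_types) if f == "I"]

def orphaned_after_cut(frame_types, cut):
    """Frames after the cut that can no longer find their references."""
    orphans = []
    for i in range(cut, len(frame_types)):
        if frame_types[i] == "I":
            break                            # the next I frame restarts prediction
        orphans.append((i, frame_types[i]))  # B/P frames with lost references
    return orphans

print(safe_cut_points(frames))        # [0, 12] - only GOP boundaries are safe
print(orphaned_after_cut(frames, 5))  # frames 5..11 become undecodable
```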
Several manufacturers have developed workarounds to deliver editable MPEG-2 systems. Sony, for instance, has introduced a format for professional digital camcorders and VCRs called SX, which uses very short GOPs (four or fewer frames) consisting only of I and P frames. It runs at 18 Mbit/s, equivalent to 10:1 compression, but with an image quality comparable to M-JPEG at 5:1. More recently, Pinnacle has enabled the editing of short-GOP, I/P-frame MPEG-2 within Adobe Premiere in conjunction with its DC1000 MPEG-2 video capture board. Pinnacle claims its card needs half the bandwidth of equivalent M-JPEG video, allowing two video streams to be played simultaneously on a low-cost platform with less storage.
Faced with the problem of editing MPEG-2, many of the broadcast manufacturers sitting on the ProMPEG committee agreed on a professional version that could be more easily handled, known as MPEG-2 4:2:2 Profile@Main Level. It’s I frame only and allows data rates of up to 50 Mbit/s, which have been endorsed by the European Broadcasting Union and its US counterpart, the Society of Motion Picture and Television Engineers (SMPTE), for a broad range of production applications. Although there’s no bandwidth advantage over M-JPEG, and conversion to and from other MPEG-2 streams requires recompression, this I frame-only version of MPEG-2 is an agreed standard, allowing material to be shared between systems. By contrast, NLE systems that use M-JPEG tend to use slightly different file formats, making their data incompatible.
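Some back-of-the-envelope arithmetic puts the data rates quoted in the last two paragraphs into perspective, with DV’s nominal 25 Mbit/s included purely for comparison; the figures are video-only and ignore audio and overheads.

```python
# Back-of-the-envelope storage arithmetic for the data rates quoted above.
# Rates are nominal video-only figures; real recordings also carry audio
# and housekeeping data.

rates_mbit_s = {
    "Sony SX (short GOP, I/P only)": 18,
    "DV (intraframe, for comparison)": 25,
    "MPEG-2 422P@ML (I frame only)": 50,
}

SECONDS_PER_HOUR = 3600

for name, mbit_s in rates_mbit_s.items():
    gb_per_hour = mbit_s * SECONDS_PER_HOUR / 8 / 1000  # decimal gigabytes
    print(f"{name:32} {mbit_s:3d} Mbit/s  ~ {gb_per_hour:5.1f} GB per hour")
```

On this arithmetic a 50 Mbit/s I frame-only stream consumes roughly 22.5 GB per hour of footage, against about 8 GB for SX – a reminder of why halving the bandwidth mattered on the hard disks of the day.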
In the mid-1990s the DV format was initially pitched at the consumer marketplace. However, the small size of DV-based camcorders coupled with their high-quality performance soon led to the format being adopted by enthusiasts and professionals alike. The result was that by the early 2000s – when even entry-level PCs were more than capable of handling DV editing – the target market for NLE hardware and software was a diverse one, encompassing broadcasters, freelance professionals, marketers and home enthusiasts.
Despite all their advantages, DV files are still fairly large, and therefore need a fast interface to facilitate their transfer from the video camera to a PC. Fortunately, the answer to this problem has existed for a number of years. The FireWire interface technology was originally developed by Apple Computer, but has since been ratified as international standard IEEE 1394. Since FireWire remains an Apple trademark, most other companies use the IEEE 1394 label on their products; Sony refer to it as i.LINK. When it was first developed, digital video was in its infancy and there simply wasn’t any need for such a fast interface technology, so for several years it was a solution to a problem which didn’t exist. Originally representing the high end of the digital video market, IEEE 1394 editing systems have gradually followed digital camcorders into the consumer arena. Since DV is carried by FireWire in its compressed digital state, copies made in this manner ought, in theory, to be exact clones of the original. In most cases this is true. However, whilst the copying process has effective error masking, it doesn’t employ any error correction, so it’s not unusual for video and audio dropouts to appear after half a dozen or so generations. It is therefore preferred practice to avoid making copies from copies wherever possible.
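To put rough numbers on ‘fairly large’ and ‘fast interface’: a complete DV stream runs at approximately 28.8 Mbit/s – 25 Mbit/s of video plus audio and subcode data – which works out at about 3.6 MB/s, or some 13 GB per hour of footage. The sketch below compares that figure with the 12 Mbit/s of USB 1.1 and the original FireWire speed grades; the numbers are approximate and the USB comparison is included purely for scale.

```python
# Rough bandwidth check: why DV transfer called for FireWire rather than the
# general-purpose PC interfaces of the day. The 28.8 Mbit/s figure for a
# complete DV stream (video plus audio and subcode) is approximate.

DV_STREAM_MBIT_S = 28.8

interfaces_mbit_s = {
    "USB 1.1 (full speed)": 12,
    "IEEE 1394 / FireWire S100": 100,
    "IEEE 1394 / FireWire S400": 400,
}

mb_per_second = DV_STREAM_MBIT_S / 8
gb_per_hour = DV_STREAM_MBIT_S * 3600 / 8 / 1000
print(f"DV stream: ~{mb_per_second:.1f} MB/s sustained, "
      f"~{gb_per_hour:.0f} GB per hour of footage")

for name, capacity in interfaces_mbit_s.items():
    verdict = "sufficient" if capacity > DV_STREAM_MBIT_S else "too slow"
    print(f"{name:28} {capacity:3d} Mbit/s -> {verdict}")
```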
By the end of 1998, IEEE 1394-based editing systems remained expensive and were aimed more at the professional end of the market. However, with the increasing emphasis on handling audio, video and general data types, the PC industry worked closely with consumer electronics giants such as Sony to incorporate IEEE 1394 into PC systems, bringing the communication, control and interchange of digital audio and video data into the mainstream. Whilst not yet ubiquitous, the interface had become far more common by the early 2000s, not least through the efforts of audio specialist Creative, whose Audigy range of sound cards, introduced in late 2001, effectively provided a free FireWire adapter.