A technique in which audio/video information is played back from two videotape machines rolled sequentially, often for the purpose of dubbing the sequential information onto a third tape, usually a composite master.
A/B Editing - A/B Editing mode is intended primarily for editors who wish to work by dragging clips from the Project window to the Timeline window. This mode resembles a conventional editing method called A/B roll editing, which uses two video tapes or rolls (A and B) and an effects switcher to provide transitions.
Any discrete cosine transform coefficient for which the frequency in one or both dimensions is non-zero.
Active video lines
All video lines not occurring in the horizontal and vertical blanking intervals.
Undesirable visual effects (sometimes called artifacts) in computer-generated images, caused by inadequate sampling techniques. The most common effect is jagged edges along diagonal or curved object boundaries.
The representation of numerical values by physical variables such as voltage, current, etc.; continuously variable quantities whose values correspond to the quantitative magnitude of the variables.
An electronic device that converts analog signals to digital form.
A video signal represented by an infinite number of smooth gradations between given video levels. By contrast, a digital video signal assigns a finite set of levels. See also digital video.
The process of displaying a sequential series of still images to achieve a motion effect.
Software adjustment to make diagonal or curved lines appear smooth and continuous in computer-generated images. See also aliasing.
The measurement of a film or television viewing area in terms of relative width and height. The aspect ratio of most modern motion pictures varies from 5:3 to as large as 7:3, which creates a problem when a wide-format motion picture is transferred to the more square-shaped television screen, with its aspect ratio of 4:3.
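As an illustration, the letterbox arithmetic implied by this mismatch can be sketched in a few lines (the 640x480 display size is purely an illustrative assumption):

```python
def letterbox(screen_w, screen_h, film_aspect):
    """Fit a wide film frame onto a narrower screen by scaling it to the
    full screen width and padding the remaining lines with black bars."""
    pic_h = round(screen_w / film_aspect)   # picture height after scaling
    bar = (screen_h - pic_h) // 2           # black bar above and below
    return pic_h, bar

# A 5:3 film on a 640x480 (4:3) display: the picture occupies 384 lines,
# leaving 48-line black bars at the top and bottom.
print(letterbox(640, 480, 5 / 3))
```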
AV - Abbreviation for audiovisual
.AVI - Abbreviation for Audio-Video Interleaved; the algorithm created by Microsoft for synchronizing and compressing analogue audio and video signals.
Avisynth - scripting language and a collection of filters for simple non-linear editing tasks. Avisynth is unusual in that it does not generate output files. Instead, Avisynth scripts, which have the extension .AVS, can be opened directly in applications which read AVI files. When an AVS file is opened, Avisynth runs in the background, generating video and audio data according to the script and feeding it to the application as needed.
The ability of a new coding standard to be handled by existing decoders.
Backward motion vector
A motion vector used for motion compensation from a reference picture that occurs later in display order.
The range of signal frequencies that a piece of audio or video equipment can encode or decode; the difference between the limiting frequencies of a continuous frequency band. Video uses higher frequencies than audio and thus requires a wider bandwidth.
A half-inch video recording format developed by Sony that offers near one-inch tape quality on a portable system.
The number of bits used to describe the color of each pixel on a computer display. For example, a bit depth of one means that the monitor can display only black and white pixels; a bit depth of four means the monitor can display 16 different colors; a bit depth of eight allows for 256 colors; and so on.
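The progression described above is simply a power of two, as a minimal sketch shows:

```python
def displayable_colors(bit_depth):
    # Each additional bit doubles the number of distinct pixel values.
    return 2 ** bit_depth

print(displayable_colors(1))    # 2 -- black and white
print(displayable_colors(4))    # 16
print(displayable_colors(8))    # 256
print(displayable_colors(24))   # 16777216 -- "true color"
```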
Blank or blanking interval
A period in which no video signal is received by a monitor, while the videodisc or digital video player searches for the next video segment or frame to display.
B-Picture (Bidirectionally predictive-coded picture)
A picture that is coded using motion compensated prediction from past and/or future reference pictures. See also motion compensation.
In the US, a standard of 525 lines of video picture information at a rate of 60 Hz. See NTSC format.
The rate at which a storage medium delivers a compressed bitstream to a decoder's input.
An 8-row by 8-column matrix of pels, or 64 discrete cosine transform coefficients (source, quantized or dequantized).
One of two fields that comprise a frame of interlaced video. The lines of the top and bottom fields alternate on a screen, so that each line of a bottom field is located immediately below the corresponding line of the top field.
A bit in a coded bitstream that is located a multiple of 8 bits from the first bit in the stream.
Comité Consultatif International Télégraphique et Téléphonique. This committee of the International Telecommunication Union makes technical recommendations about telephone and data communication systems. Plenary sessions are held every four years to adopt new standards.
CD (Compact Disc or compact audio disc)
A 4.75-inch (12cm) optical disc that contains information (usually musical) encoded digitally in the constant linear velocity
(CLV) format. This popular format for high-fidelity music offers 90 dB signal/noise ratio, 74 minutes of digital sound, and no degradation of quality from playback. The standards for this format (developed by NV Philips and Sony Corporation) are known as the Red Book. The official (and rarely used) designation for the audio-only format is CD-DA (compact disc-digital audio). The simple audio format is also known as CD-A (compact disc-audio). A smaller (3") version of the CD is known as CD-3.
CD+G (Compact Disc-Graphics)
A CD format that includes extended graphics capabilities as written into the original CD-ROM specifications. Includes limited video graphics encoded into the CD subcode area. Developed and marketed by Warner New Media.
CD-i (Compact Disc-Interactive)
A compact disk format released in October 1991 that provides audio, digital data, still graphics, and motion video. The standards for this format (developed by NV Philips and Sony Corporation) are known as the Green Book.
CD+MIDI (Compact Disc-Musical Instrument Digital Interface)
A CD format that adds to the CD+G format digital audio, graphics information, and musical instrument digital interface (MIDI) specifications and capabilities. Developed and marketed by Warner New Media.
CD-ROM (Compact Disc-Read Only Memory)
A 4.75-inch laser-encoded optical memory storage medium with the same constant linear velocity (CLV) spiral format as audio CDs and some videodiscs. CD-ROMs can hold about 550 megabytes of data and require more error-correction information than the standard prerecorded compact audio disc.
The standards for this format (developed by NV Philips and Sony Corporation) are known as the Yellow Book. See also CD-ROM XA.
CD-ROM drive or CD-ROM player
A device that retrieves data from a CD-ROM disc; differs from a standard audio CD player by the incorporation of additional error-correction circuitry. CD-ROM drives usually can also play music from audio CDs.
CD-ROM XA (Compact Disc-Read-Only Memory Extended Architecture)
An extension of the CD-ROM standard, billed as a hybrid of CD-ROM and CD-i and promoted by Sony and Microsoft. The extension adds ADPCM audio from CD-i and permits the interleaving of sound and video data for animation and sound synchronization.
A CD format introduced in 1987 that combined 20 minutes of digital audio and six minutes of analog video on a standard 4.75-inch CD. Upon introduction, many firms renamed 8-inch and 12-inch videodiscs as CDV in an attempt to capitalize on the
consumer popularity of the audio CD. The term fell out of use in 1990 and was replaced in some part by laserdisc.
CD-WO (Compact Disc-Write Once)
A variant on CD-ROM that can be written to once and read many times; developed by NV Philips and Sony Corporation. Also known as CD-WORM (CD-write once/read many). Standards for this format are known as the Orange Book.
Defines the number of chrominance blocks in a macroblock.
Portion of a video signal that carries color information (hue and saturation, but not brightness). A matrix, block, or single pel represents one of the two color-difference signals related to the primary colors as defined in the bitstream. The symbols used for the color difference signals are Cr and Cb. See also YCbCr.
CIF (Common Image Format)
The standard sample structure that represents the picture information of a single frame in digital HDTV, independent of frame rate and sync/blank structure. The uncompressed bit rate for transmitting CIF at 29.97 frames/sec is 36.45 Mbps.
The order in which pictures are stored and decoded. This order is not necessarily the same as the display order.
The set of user-definable parameters that characterize a coded video bitstream. While bitstreams are characterized by coding parameters, decoders are characterized by the bitstreams they are capable of decoding.
A matrix, block, or single pel from one of the three matrices (one for luminance and two for chrominance) that make up a picture.
The separation of chrominance (color) and luminance parts of the video signal. In component video, these two signals are recorded separately, which helps maintain better picture quality over more generations.
The complete visual wave form of the color video signal composed of chrominance and luminance picture information; blanking pedestal; field, line, and color sync pulses; and field equalizing pulses. See also RGB.
The size of the original image divided by the size of the compressed image, measuring the degree to which a compression routine can reduce the size of a file.
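The definition above translates directly into a one-line calculation:

```python
def compression_ratio(original_size, compressed_size):
    # Degree to which the compressor shrank the data; >1 means smaller.
    return original_size / compressed_size

# A 900,000-byte image stored in 45,000 bytes is compressed 20:1:
print(compression_ratio(900_000, 45_000))   # 20.0
```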
Computer-based training (CBT)
The use of a computer to deliver instruction or training; also known as Computer-Aided (or assisted) Instruction (CAI),
Computer-Aided Learning (CAL), Computer-Based Instruction (CBI), and Computer- Based Learning (CBL).
Operation in which the bitrate is constant from start to finish of a compressed bitstream.
Constant bitrate coded video
A compressed video bitstream with a constant average bitrate.
CRC (cyclic redundancy code)
A code used for error detection and correction.
Instructional software, including all discs, tapes, books, charts, and computer programs necessary to deliver a complete instructional module or course.
D1 and D2
Digital tape component format (D1) and digital tape composite format (D2) used for professional video recording. Both can go through multiple generations of dubbing without visible loss of picture quality.
D/A (digital to analog)
The conversion of digital signals to analog form.
D/A converter (DAC)
Device that converts digital signals to analog form.
A method for dividing a bitstream into two bitstreams for error resilience. The two bitstreams must be recombined before decoding.
The DCT coefficient for which the frequency is zero in both dimensions.
DCT (Discrete Cosine Transform)
A compression technique in which data is digitized, then put through a process of intraframe coding and interframe coding, enabling the system to transmit the first image and thereafter only the differences from one frame to the next.
The amplitude of a specific cosine basis function. See DCT.
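For reference, a naive (unoptimized) 8x8 forward DCT of the kind described above can be sketched directly from the definition; real codecs use fast factorizations, but the coefficients are the same:

```python
import math

def dct_8x8(block):
    """Forward 2-D DCT of an 8x8 block. Coefficient [0][0] is the DC
    coefficient; all others are AC coefficients."""
    n = 8
    c = lambda k: math.sqrt(0.5) if k == 0 else 1.0
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):          # horizontal frequency
        for v in range(n):      # vertical frequency
            s = sum(block[y][x]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for y in range(n) for x in range(n))
            out[u][v] = 0.25 * c(u) * c(v) * s
    return out

# A flat block has all its energy in the DC coefficient:
flat = [[100] * 8 for _ in range(8)]
coeffs = dct_8x8(flat)   # coeffs[0][0] == 800; every AC coefficient ~0
```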
A logarithmic measure of the ratio between two powers, voltages, currents, sound intensities, etc. Signal-to-noise ratios are expressed in decibels.
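The logarithmic ratio can be computed directly; note the conventional factor of 10 for power ratios and 20 for voltage or current (amplitude) ratios, since power goes as the square of amplitude:

```python
import math

def db_power(p1, p2):
    # Power ratio in decibels.
    return 10 * math.log10(p1 / p2)

def db_amplitude(v1, v2):
    # Voltage/current/amplitude ratio in decibels.
    return 20 * math.log10(v1 / v2)

# A signal with 1000x the noise power has a 30 dB signal-to-noise ratio:
print(db_power(1000, 1))      # 30.0
print(db_amplitude(10, 1))    # 20.0
```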
Decoder input buffer
The first-in-first-out (FIFO) buffer specified in a video buffering verifier.
Decoder input rate
The data rate specified in a video buffering verifier and encoded in the coded video bitstream.
A process that converts an input coded bitstream into pictures or audio samples.
The process of rescaling quantized DCT coefficients after their representation in the bitstream has been decoded and before they are presented to the inverse DCT.
A video represented by computer-readable binary numbers that describe a finite set of colors and luminance levels. See analog video.
Flat, circular, rotating medium that can store various types of information, both analog and digital. "Disc" is often used in reference to optical storage media, while "disk" refers to magnetic storage media. Disc is often used as a short form for
videodisc or compact audio disc (CD).
Alternative spelling for "disc" that generally refers to magnetic storage medium on which information can be accessed at random. Floppy disks and hard disks are examples.
The order in which decoded pictures are displayed. Normally this is the same order in which the pictures entered the encoder.
A process that improves the perceived quality of a screen graphic when the color palette is reduced. For example, when converting from 24-bit color to 8-bit color (an 8-bit palette has only 256 colors compared to the 24-bit palette's millions),
dithering adds pixels of different colors to simulate the original color. Dithering is also known as "error diffusion."
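The error-diffusion idea can be sketched in one dimension (real dithering algorithms such as Floyd-Steinberg diffuse error in two dimensions, so this is a deliberate simplification):

```python
def error_diffuse(row, levels=(0, 255)):
    """1-D error diffusion: quantize each sample to the nearest available
    level, then carry the quantization error into the next sample so the
    local average stays close to the original."""
    out, err = [], 0.0
    for v in row:
        target = v + err
        nearest = min(levels, key=lambda level: abs(level - target))
        out.append(nearest)
        err = target - nearest
    return out

# A flat mid-gray run dithers into an alternating pattern whose average
# (127.5) approximates the original value:
print(error_diffuse([128, 128, 128, 128]))   # [255, 0, 255, 0]
```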
DIVX - A commercial and non-commercial video codec that enables high-quality video at high compression rates.
DivX ;-) - A hacked version of Microsoft's MPEG4 codec.
Digital Linear Tape
The ability to reproduce two audio channels, playing them either simultaneously or independently; a characteristic of all optical videodisc systems.
DYUV or delta-YUV
An efficient color-coding scheme for natural pictures used in CD-i. The human eye is less sensitive to color variations than to intensity variations, so DYUV encodes luminance (Y) information at full bandwidth and chrominance (UV) information at half bandwidth or less, storing only the differences (deltas) between each value and the one following it.
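The delta principle underlying DYUV can be illustrated with a simple round trip (this sketch shows only the difference coding, not CD-i's actual bitstream layout):

```python
def delta_encode(samples):
    # Keep the first value; store each later value as a difference.
    return samples[:1] + [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(deltas):
    out = deltas[:1]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

luma = [100, 102, 101, 105]
encoded = delta_encode(luma)          # [100, 2, -1, 4] -- small deltas
print(delta_decode(encoded) == luma)  # lossless round trip
```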
European Association of Consumer Electronics Manufacturers.
A procedure for encoding data that makes it difficult to decode the data without proprietary software or hardware. This procedure protects data or software from unauthorized access or use.
The average amount of information represented by a symbol in a message. Entropy is a function of the model used to produce the message and can be reduced by increasing the complexity of the model to better reflect the distribution of source symbols in
the original message. Because entropy is a measure of the information contained in a message, it represents the lower bound for compression.
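Shannon entropy under the simplest (zero-order) model, where each symbol's probability is its relative frequency in the message, can be computed as:

```python
import math
from collections import Counter

def entropy_bits_per_symbol(message):
    """Shannon entropy in bits per symbol under a zero-order model."""
    n = len(message)
    return sum(-(c / n) * math.log2(c / n)
               for c in Counter(message).values())

# Two equally likely symbols carry 1 bit each; repetition carries nothing:
print(entropy_bits_per_symbol("abababab"))   # 1.0
print(entropy_bits_per_symbol("aaaaaaaa"))   # 0.0 -- maximally compressible
```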
The set of alternating lines in an interlaced video frame. An interlaced frame consists of two fields -- a top field and a bottom field. A field is one-half of a complete television scanning cycle (1/60 of a second in NTSC; 1/50 of a second in PAL/
SECAM). When interlaced, two fields combine to make one video frame.
The elimination of video and film frame ambiguity by the use of the full-frame identification process during film-to-tape transfer.
The rate at which a complete field is scanned or displayed, normally 59.94 times per second in NTSC.
The reciprocal of twice the frame rate.
A term used to encompass the total grouping of equipment used to transfer slide or movie film picture frames to electronic picture frames; usually consists of film and slide projectors, a multiplexer and a television camera. Also known as telecine.
A variable that can take one of only two values. See also parameter.
Video effect (usually unwanted) on a still or frozen frame caused when two fields that combine to make the frame are not identically matched, thus creating two different pictures alternating every 1/60 of a second. Interfield flicker can occur when
field dominance is incorrectly specified, or if field dominance changes at one or more points on the master tape as a result of editing on equipment that is incapable of frame-accurate editing. Also known as jitter or judder. See also interfield frames.
When used in the clauses defining a coded bitstream, indicates that the value must never be used. This restriction is usually applied to avoid emulation of start codes.
The process by which macroblocks are occasionally intra coded to ensure that mismatch errors between the inverse DCT processes in encoders and decoders cannot build up excessively.
The ability of a coding standard that works with existing decoders to work with new decoders.
Forward motion vector
A motion vector used for motion compensation from a reference picture that comes at an earlier time in display order.
A single, complete picture in a video or film recording. A video frame consists of two interlaced fields of either 525 lines (NTSC) or 625 lines (PAL/SECAM), running at 30 frames per second (NTSC) or 25 fps (PAL/SECAM). Film runs at 24 fps.
1. A device capable of storing all 525 lines of a television frame and functioning as a time-base corrector.
2. A memory device that stores, pixel by pixel, the contents of an image. Frame buffers are used to refresh a raster image.
Sometimes they incorporate local processing ability. The "depth" of the frame buffer is the number of bits per pixel, which determines the number of colors or intensities that can be displayed.
The reciprocal of the frame rate.
The speed at which video frames are scanned or displayed -- 30 frames a second for NTSC, 25 frames per second for PAL/SECAM.
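The frame period and field period defined above are simple reciprocals:

```python
def frame_period(fps):
    # Seconds per frame: the reciprocal of the frame rate.
    return 1 / fps

def field_period(fps):
    # Two interlaced fields per frame, so half the frame period.
    return 1 / (2 * fps)

# PAL/SECAM at 25 fps: 40 ms frames built from two 20 ms fields.
print(frame_period(25), field_period(25))   # 0.04 0.02
```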
A single frame from a segment of video or film footage held motionless on the screen. Unlike a still frame, a freeze-frame is not a picture intended to appear motionless, but is one frame taken from a longer motion sequence.
Full-frame time code
A standardized SMPTE method of address-coding a videotape that retains all frame numbers in chronological order, resulting in a slight deviation from clock time.
A video sequence displayed at full television-standard resolution and frame rate. In the US, this equates to NTSC video at 30 frames per second.
Future reference picture
A reference picture that occurs at a later time than the current picture in display order.
A display characteristic of CRTs defined by: Light = Volts ^ gamma where gamma is 2.35 plus or minus 0.1. CRTs usually have values between 2.25 and 2.45, and 2.35 is a common value. No direct-view CRTs have values lower than 2.1. CRT projectors
exhibit different values; green tubes are typically at 2.2, while red is usually around 2.1, and blue can be as low as 1.7.
Pictures destined for display on CRTs are gamma-corrected, meaning that a transfer characteristic has been applied to correct for the CRT gamma.
Users of TV cameras have to accept the gamma characteristic supplied by the manufacturer, except for broadcasters who are able to adjust the curves that profile gamma correction. In this case video engineers adjust the gamma correction until they like the look of the picture on the studio monitor. Even so, no TV camera uses a true gamma-correction curve; cameras all use flattened curves with a maximum slope near black of between 3 and 5. The higher the slope, the better the colorimetry but the
worse the noise performance.
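The Light = Volts ^ gamma relationship and its correction can be sketched as a round trip (the 2.35 default follows the common value cited above; real camera curves are flattened near black, as noted):

```python
def crt_light(signal, gamma=2.35):
    # Displayed light is the applied signal raised to the CRT's gamma.
    return signal ** gamma

def gamma_correct(linear, gamma=2.35):
    # Pre-distort the signal so the display's power law cancels out.
    return linear ** (1 / gamma)

# Correcting then displaying recovers the original linear value:
corrected = gamma_correct(0.5)    # ~0.745 -- mid-grays are boosted
recovered = crt_light(corrected)  # ~0.5
```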
The process of aligning the data rate of a video image with that of a digital device to digitize the image and enter it into computer memory. The machine that performs this function is known as a genlock.
Audio data transmitted after a silence detector indicates that no audio data is present. Hangover ensures that the ends of words, important for comprehension, are transmitted even though they are often of low energy.
A block of data in a coded bitstream containing information about the data that follows.
The standard unit of frequency. One Hz equals one cycle (or vibration) per second. One kilohertz (kHz) equals 1,000 cycles per second, and one megahertz (MHz) equals 1,000,000 cycles per second. This standard unit is named after the German physicist Heinrich Hertz (1857-1894).
The high-quality extension of the Video 8 (or 8mm) format, which features higher luminance resolution.
High-definition television (HDTV)
Any one of a variety of video formats offering greater visual accuracy (or resolution) than current NTSC, PAL, or SECAM broadcast standards. Current formats generally range in resolution from 655 to 2,125 scanning lines, having an aspect ratio of
5:3 (or 1.67:1), and a video bandwidth of 30 to 50 MHz (5+ times greater than NTSC standard). Digital HDTV has a bandwidth of 300 MHz. HDTV is subjectively comparable to 35 mm film.
High Sierra format
A standard format for placing files and directories on CD-ROMs proposed by an ad hoc committee of computer vendors, software developers, and CD-ROM system integrators. (Work on the format proposal began at the High Sierra Hotel at Lake Tahoe, Nevada.)
A revised version of the format was adopted by the International Standards Organization as ISO 9660.
The basis for international standards for videotelephony and the prototype compression technique from which MPEG was designed.
In the archetypal hybrid coder, an estimate of the next frame to be processed is formed from the current frame; the difference is then encoded by some purely intraframe mechanism. In recent years, the most attention has been paid to coders that include motion compensation, in which the estimate is formed by a two-dimensional warp of the previous frame, and the difference is encoded using a block transform (the Discrete Cosine Transform).
The key feature of the hybrid coder is the presence of a complete decoder within it. The decoder processes the difference between the current frame as represented at the receiver and the incoming frame. The receiver must therefore track the transmitter precisely, meaning that the decoders at the receiver and transmitter must match. The system is sensitive to channel errors and does not permit random access. However, it is on the order of three to four times as efficient as coders that use no prediction.
In practice, this coder is modified to suit specific applications. The standard telephony model uses a forced update of the decoded frame so that channel errors do not propagate. When a participant enters the conversation late or alternates between
image sources, for example, residual errors die out and a clear image is obtained after a few frames. Similar techniques are used in versions of this coder developed for direct satellite television broadcasting.
Hybrid scalability is the combination of two or more types of scalability.
Creative content that can be protected by either copyright or patent law. With the proliferation of unmonitored digital transmission of content, intellectual property rights, protection, and compensation have become hotly debated topics.
A product of the 3:2 pull-down, film-to-tape transfer process, in which the video frame is composed of two fields, each of a different film frame. These mixed fields do not interfere with normal viewing, but on a videodisc -- where a viewer can freeze
on any single frame -- an interfield frame might produce unwanted flicker.
In video signal transmission, a way to compress the video signal that concentrates on coding high-detail areas of a picture at the expense of the less detailed areas.
The pattern described by two separate field scans when they join to form a complete video frame. As the video picture is transmitted, the first field picks up the even-numbered scan lines and the second the odd-numbered ones. The two interleave to
form a single, complete frame.
Coding of a macroblock or picture that uses information only from that macroblock or picture.
A way to compress a video signal for transmission in which half the picture information is eliminated by discarding every other frame as it comes from the camera. During playback, each frame remains on the screen twice the normal duration to simulate the standard 30-frames-per- second video rate.
I Picture (Intra-coded picture)
A picture coded using information only from the picture.
JPEG (Joint Photographic Experts Group)
The international consortium of hardware, software, and publishing interests that, under the auspices of the International Standards Organization, has defined a universal standard for digital compression and decompression of still images for use in computer systems (commonly called the "JPEG standard"). JPEG compresses at about a 20:1 ratio before visible image degradation occurs.
JBOD (Just a Bunch Of Disks)
Used as a building block for RAID systems; without the control head, a rack of drives is known as a JBOD.
Signal processing device that cuts a hole in the background video and fills in the hole from a different video source, e.g., computer-generated text and graphics keyed over NTSC video.
A technique for displaying movies on video in the original aspect ratio of the theater, resulting in the apparent cropping of the top and bottom of the screen. The technique accommodates program material that has a wide picture aspect ratio.
A defined set of constraints on the values that may be taken by some parameters within a profile. A profile can contain one or more levels.
A compression technique that preserves all the original information in an image or other data structures.
A compression technique that achieves optimal data reduction by discarding redundant and unnecessary information in an image.
Large Scale Integration; generally more than 1,000 and less than 10,000 components on a computer chip.
One of the coefficients in composite video. Video originates with linear-light (tristimulus) RGB primary components, conventionally contained in the range 0 (black) to +1 (white). From the RGB triple, three gamma-corrected primary signals are computed; each is essentially the 0.45-power of the corresponding tristimulus value, similar to a square-root function. Although television primaries have changed over the years since the adoption of the NTSC standard in 1953, the coefficients of
the luma equation for 525- and 625-line video have remained unchanged. For HDTV, the primaries are different, and the luma coefficients have been standardized with somewhat different values.
The four 8 x 8 blocks of luminance data and the corresponding 8 x 8 blocks of chrominance data coming from a 16 x 16 section of a picture's luminance component. The number of chrominance blocks is two for 4:2:0 chroma format, four for 4:2:2 chroma format, or eight for 4:4:4 chroma format. The term "macroblock" is sometimes used to refer to pel data and sometimes to the coded representation of the pel values and other data elements defined in the macroblock header. The usage should be clear from the context.
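The block counts stated above can be tabulated directly:

```python
def blocks_per_macroblock(chroma_format):
    """Total 8x8 blocks in a 16x16 macroblock: always four luminance
    blocks, plus a chroma-format-dependent number of chrominance blocks."""
    chroma_blocks = {"4:2:0": 2, "4:2:2": 4, "4:4:4": 8}[chroma_format]
    return 4 + chroma_blocks

print(blocks_per_macroblock("4:2:0"))   # 6
print(blocks_per_macroblock("4:4:4"))   # 12
```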
An original audio tape, videotape or film; used for broadcast or to make copies.
The use of motion vectors to improve the efficiency of predicting pel values. The motion vectors provide offsets into past and /or future reference pictures containing previously decoded pel values that are used to form the prediction error signal.
The process of estimating motion vectors during the encoding process.
A two-dimensional vector used for motion compensation that provides an offset from the coordinate position in the current picture to the coordinates in a reference picture.
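Motion estimation is typically performed by block matching: trying candidate offsets in the reference picture and keeping the one with the lowest matching cost. A toy sketch, with 2x2 blocks, a sum-of-absolute-differences cost, and an exhaustive one-pel search range (all simplifying assumptions; real encoders use 16x16 macroblocks and much larger search windows):

```python
def sad(a, b):
    # Sum of absolute differences: a common block-matching cost.
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def best_motion_vector(ref, block, cx, cy, search=1):
    """Exhaustively search offsets around (cx, cy) in the reference
    picture for the position that best matches `block`."""
    n = len(block)
    best_cost, best_mv = None, None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = cx + dx, cy + dy
            if 0 <= y <= len(ref) - n and 0 <= x <= len(ref[0]) - n:
                candidate = [row[x:x + n] for row in ref[y:y + n]]
                cost = sad(candidate, block)
                if best_cost is None or cost < best_cost:
                    best_cost, best_mv = cost, (dx, dy)
    return best_mv

# The object in the reference picture sits one pel to the right of the
# current block's position (0, 1), so the motion vector is (1, 0):
ref = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
print(best_motion_vector(ref, [[9, 9], [9, 9]], 0, 1))   # (1, 0)
```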
A trademarked abbreviation for Multimedia Personal Computer. The original MPC specification was developed by Tandy Corporation and Microsoft as the minimum platform capable of running multimedia software. In the Summer of 1995, the MPC Marketing Council introduced an upgraded MPC 3 standard.
The MPC 1 Specification defines the following minimum standard requirements: a 386SX or 486 CPU; 2 MB RAM; 30 MB hard disk; VGA video display; 8-bit digital audio subsystem; CD-ROM drive; and systems software compatible with the applications programming interfaces (APIs) of Microsoft Windows Version 3.1 or higher.
The MPC 2 Specification defines the following minimum standard requirements: 25 MHz 486SX with 4 MB RAM; 160 MB hard disk; 16-bit sound card; 65,536 color video display; double-speed CD-ROM drive; and systems software compatible with the applications programming interfaces (APIs) of Microsoft Windows Version 3.1 or higher.
The MPC 3 Specification defines the following minimum standard requirements: 75 MHz Pentium with 8 MB RAM; 540 MB hard disk;
16-bit sound card; 65,536-color video display; quad-speed CD-ROM drive; OM-1 compliant MPEG-1 video; and systems software compatible with the applications programming interfaces (APIs) of Microsoft Windows Version 3.1 and DOS 6.0 or higher.
MPEG (Motion Picture Experts Group)
A working committee which, under the auspices of the International Standards Organization, has defined standards for the digital compression and decompression of motion video/audio for use in computer systems. These standards consist of MPEG-1 and MPEG-2.
The MPEG-1 standard delivers data at 1.2 to 1.5 Mbps, allowing CD players to play full-motion colour movies at 30 frames per second. MPEG-1 compresses at about a 50:1 ratio before image degradation occurs, but compression ratios as high as 200:1 are attainable. Building on the MPEG-1 standard is MPEG-2, which extends to the higher data rates (2-15 Mbps) needed for signals delivered from remote sources (such as broadcast, cable, or satellite). MPEG-2 is designed to support a range of picture aspect ratios, including 4:3 and 16:9.
National Association of Broadcasters.
Random electrical energy or interference. In video, noise can produce a random salt-and-pepper pattern over the picture. Heavy video noise is called snow.
Nondrop-frame time code
See full-frame time code.
Coding of a macroblock (or picture) that uses information from both that macroblock and from macroblocks occurring at other times.
National Television Systems Committee of the Electronic Industries Association (EIA), which prepared the NTSC format specifications approved by the Federal Communications Commission in December 1953 for US commercial color broadcasting.
A color television format having 525 scan lines, a field frequency of 60 Hz, a broadcast bandwidth of 4 MHz, line frequency of 15.75 KHz, frame frequency of 1/30 of a second, and a color subcarrier frequency of 3.58 MHz. NTSC uses YIQ. See also PAL, SECAM.
OM-1 (Open MPEG Consortium)
Industrial organization formed to promote the use of MPEG in the consumer marketplace by specifying a common MPEG Application Programming Interface (API).
A technique used in consumer display products that extends the deflection of a CRT's electron beam beyond the physical boundaries of the screen to ensure that images will always fill the display area. See also underscanning.
Packet File (.pkt)
NETmcs MPEG system file wrapper. Used to collate many audio / video / data channels into one compact file – thus allowing ease of sharing, archiving, replay and synchronisation.
Phase Alternation Line; the European standard color television system, except for France. PAL's image format is 4:3, 625 lines, 50 Hz and 4-MHz video bandwidth with a total 8 MHz of video channel width. PAL uses YUV. See also NTSC, SECAM.
A variable that may take one of many values. See also flag.
Past reference picture
A reference picture that occurs at an earlier time than the current picture in display order.
Pel aspect ratio
The ratio of a pel's nominal vertical height to its nominal horizontal width.
PhotoYCC color space
The color space used by Kodak's PhotoCD; similar to YCbCr.
Source, coded, or reconstructed image data. A source or reconstructed picture consists of three rectangular matrices of 8-bit numbers representing luminance and two chrominance signals. For progressive video, a picture is identical to a frame, while for
interlaced video, a picture can refer to a frame, a top field, or a bottom field depending on the context.
An abbreviation of picture element; the minimum raster display element, represented as a point with a specified colour or intensity level. One way to measure picture resolution is by the number of pixels used to create images.
The stage in the preparation of a film or video program after the original footage has been shot. Can include editing, encoding, computer program authoring, etc.
P-Picture (Predictive-coded picture)
A picture that is coded using motion compensated prediction from past reference pictures. See also motion compensation.
The process of estimating the pel value or data element currently being decoded using a predictor.
The difference between the actual value of a pel or data element and its predictor.
A linear combination of previously decoded pel values or data elements.
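The three entries above (prediction, prediction error, predictor) can be sketched with the simplest linear predictor, the previously decoded value. This is an illustrative DPCM-style scheme, not the actual MPEG predictor; the function names are hypothetical:

```python
# Hypothetical DPCM-style sketch: the predictor is the previously decoded
# pel value, and the prediction error is actual minus predictor.
def encode_dpcm(samples):
    """Return prediction errors: each value minus its predictor."""
    errors = []
    predictor = 0  # initial predictor before any pel has been decoded
    for s in samples:
        errors.append(s - predictor)
        predictor = s  # the next pel is predicted from this decoded value
    return errors

def decode_dpcm(errors):
    """Reconstruct pel values by adding each error to its predictor."""
    samples = []
    predictor = 0
    for e in errors:
        s = predictor + e
        samples.append(s)
        predictor = s
    return samples

print(encode_dpcm([100, 102, 101]))  # -> [100, 2, -1]
```

Because neighbouring pels are usually similar, the errors are small numbers that compress better than the raw values.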
All design tasks (flow-charting, story-boarding, script-writing, software design, etc.) that lead up to the actual shooting of material on video or film, or up to the authoring of multimedia software.
In video terms, the period when video or film footage is actually shot. See also pre-production, post-production.
A defined subset of a specification's syntax.
A set of 64 8-bit values used by a dequantizer.
A step in the process of converting an analog signal into a digital signal. Quantization measures a sample to determine a representative numerical value that is then encoded. The three steps in analog-to-digital conversion are sampling, quantizing, and encoding.
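The quantizing step can be sketched as rounding each measured sample to the nearest of a finite set of levels; the step size and sample values below are arbitrary assumptions:

```python
# Illustrative uniform quantizer: a continuous amplitude is mapped to the
# nearest of a finite set of levels spaced `step` apart.
def quantize(sample, step=0.25):
    """Round a sampled amplitude to the nearest multiple of `step`."""
    return round(sample / step) * step

print(quantize(0.61))  # -> 0.5 (nearest representable level)
```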
Quantized DCT coefficients
DCT coefficients before dequantization. A variable-length-coded representation of quantized DCT coefficients is stored as part of a compressed video bitstream.
A scale factor coded in a bitstream and used by the decoding process to scale the dequantization. See scaling.
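A much-simplified sketch of how the quantizer scale enters dequantization; real MPEG reconstruction includes additional rounding terms and saturation, and the names here are illustrative:

```python
# Simplified dequantization: the decoded coefficient is scaled by the
# corresponding quantization-matrix entry and by the quantizer scale.
# (Actual MPEG inverse quantization adds rounding terms and clamping.)
def dequantize(quantized_coeff, matrix_entry, quantizer_scale):
    return (quantized_coeff * matrix_entry * quantizer_scale) // 8

print(dequantize(3, 16, 4))  # -> 24
```

Because the scale factor multiplies every matrix entry, the encoder can coarsen or refine quantization for a whole region by coding a single value.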
QCIF (Quarter Common source Intermediate Format)
1/4 CIF, i.e., luminance information is coded at 144 lines and 176 pixels per line. The uncompressed bit rate for transmitting QCIF at 29.97 frames/sec is 9.115 Mbps.
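The 9.115 Mbps figure can be checked by arithmetic, assuming 8-bit samples and 4:2:0 chroma subsampling (each chroma plane is a quarter the size of the luminance plane):

```python
# Deriving the uncompressed QCIF bit rate quoted above.
luma_bits = 176 * 144 * 8            # one 8-bit luminance plane per frame
chroma_bits = 2 * (88 * 72) * 8      # two chroma planes, each 88x72 (4:2:0)
bits_per_frame = luma_bits + chroma_bits
bitrate = bits_per_frame * 29.97     # bits per second at the NTSC frame rate
print(round(bitrate / 1e6, 3))       # -> 9.115
```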
The process of beginning to read and decode a coded bitstream at an arbitrary point.
The actual time in which a program or event takes place. In computing, real time refers to an operating mode under which data is received and processed and the results returned so quickly that the process appears instantaneous to the user. The term is
also used to describe the process of simultaneous digitization and compression of audio and video information.
The nearest adjacent I picture or P picture to the current picture in display order.
Transparent devices used to interconnect segments of an extended network with identical protocols and speeds at the physical layer (OSI layer 1).
Number of pixels per unit of area. A display with a finer grid contains more pixels and thus has a higher resolution, capable of reproducing more detail in an image.
A type of computer color display output signal consisting of separately controllable red, green, and blue signals; as opposed to composite video, in which signals are combined prior to output. RGB monitors typically offer higher resolution than composite monitors.
The area in the center of a video frame that is sure to be displayed on all types of receivers and monitors. Televisions and other monitors made at different times and by different companies are slightly different in size and shape, and the outer edge
of the video frame (about 10 percent of the total picture) is not reproduced in the same way on all sets.
The process of taking measurable slices of an analog signal at periodic intervals. Sampling can also refer to the process of obtaining values of a usually analog function by making automatic measurements at periodic intervals. Sampling is a step in
analog-to-digital conversion, which comprises sampling, quantizing, and encoding.
The rate at which slices are taken from analog signals in analog-to- digital conversion. The sampling rate also determines the frequency at which points are recorded in digitizing an image. Sampling errors can cause aliasing effects.
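The aliasing effect mentioned above can be demonstrated numerically: a signal sampled below twice its frequency produces exactly the same samples as a lower-frequency alias, so the two become indistinguishable. The frequencies below are arbitrary illustrative choices:

```python
import math

# A 7 Hz sine sampled at only 10 samples/sec (below the 14 samples/sec
# needed) yields the same sample values as a 3 Hz sine: an alias.
fs = 10                                   # sampling rate, samples per second
t = [n / fs for n in range(10)]           # ten sampling instants
high = [math.sin(2 * math.pi * 7 * x) for x in t]
alias = [math.sin(-2 * math.pi * 3 * x) for x in t]   # 7 - 10 = -3 Hz alias
print(all(abs(a - b) < 1e-9 for a, b in zip(high, alias)))  # -> True
```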
Strong, bright colours (particularly reds and oranges) that do not reproduce well on video, but tend to saturate the screen with colour or bleed around the edges, producing a garish, unclear image.
The ability of a decoder to decode an ordered set of bitstreams to produce a reconstructed sequence.
The parallel lines across a video screen, along which the scanning spot travels in painting the video information that makes up a monitor picture. NTSC systems use 525 scan lines to a screen; PAL systems use 625.
Sequential Couleur A Memoire (sequential colour with memory), the French colour TV system also adopted in Russia. The basis of operation is the sequential transmission of the colour information on alternate lines, recombined in the receiver using a one-line memory (delay line). The image format is 4:3, 625 lines, 50 Hz and 6-MHz video bandwidth with a total 8 MHz of video channel width. See also NTSC, PAL.
Information in the bitstream necessary for controlling the decoder.
SIF (Standard Interchange format)
Format for exchanging video images of 240 lines with 352 pixels each for NTSC, and 288 lines by 352 pixels for PAL and SECAM. At the nominal field rates of 60 and 50 fields/sec, the two formats have the same data rate.
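The claim that the two SIF variants carry the same data rate checks out: lines per frame times frames per second is identical, and both use 352 pixels per line:

```python
# Verifying that the NTSC-derived and PAL/SECAM-derived SIF formats
# produce the same number of pixels per second.
ntsc_rate = 240 * 352 * 30   # pixels/sec at 30 frames/sec (60 fields/sec)
pal_rate = 288 * 352 * 25    # pixels/sec at 25 frames/sec (50 fields/sec)
print(ntsc_rate == pal_rate)  # -> True
```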
Signal-to-noise (S/N) ratio
The strength of a video and/or audio signal in relation to interference (noise). The higher the S/N ratio, the better the quality of the signal.
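The S/N ratio is conventionally expressed in decibels; assuming signal and noise are given as amplitudes (voltage-like quantities), the factor is 20:

```python
import math

# S/N ratio in decibels for amplitude (voltage-like) measurements.
def snr_db(signal, noise):
    return 20 * math.log10(signal / noise)

print(round(snr_db(100, 1)))  # a 100:1 amplitude ratio -> 40 dB
```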
A series of macroblocks.
SMPTE time code
An 80-bit standardized edit time code adopted by SMPTE, the Society of Motion Picture and Television Engineers. See time code.
A type of scalability in which an enhancement layer also uses predictions from pel data derived from a lower layer without using motion vectors. The layers can have different frame sizes, frame rates, or chroma formats.
Unique 32-bit codes embedded in a coded bitstream. They are used for several purposes, including identifying some of the structures in the coding syntax.
Stuffing (bits); stuffing (bytes)
Code words that can be inserted into a compressed bitstream and that are discarded in the decoding process. Their purpose is to increase the bitrate of the stream.
S-VHS or Super VHS
A higher-quality extension of the VHS home videotape format, featuring higher luminance and the ability to produce better copies.
Type of video signal used in the Hi8 and S-VHS videotape formats. S-video transmits luminance and color portions separately, using multiple wires, thus avoiding the NTSC encoding process and its inevitable loss of picture quality. Also known as Y/C video.
The precise coincidence of two pulses, or signals, such as when the sync pulses of a videotape recorder lock in with the sync pulses of a camera.
Jagged raster representation of diagonals or curves; corrected by anti-aliasing.
A type of scalability in which an enhancement layer uses predictions from a pel derived from a lower layer using motion vectors.
A method for overcoming the incompatibility of film and video frame rates when converting or transferring film (shot at 24 frames per second) to video (shot at 30 frames per second).
The first film frame is actually exposed on three video fields, and the next film frame is exposed on two fields, the next film frame on three fields, and so on. Thus, two of every five video frames will consist of fields that contain information from two
different film frames. The resulting effect is noticed as flicker during freeze frame use.
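The cadence described above can be sketched as follows: alternate film frames are held for three and two video fields, so four film frames become ten fields (five video frames), converting 24 fps to 30 fps:

```python
# Sketch of the 3:2 pulldown cadence: film frames alternate between
# occupying 3 video fields and 2 video fields.
def pulldown_fields(film_frames):
    fields = []
    for i, frame in enumerate(film_frames):
        hold = 3 if i % 2 == 0 else 2   # 3 fields, then 2, then 3, ...
        fields.extend([frame] * hold)
    return fields

print(pulldown_fields(['A', 'B', 'C', 'D']))
# -> ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']
```

The second and fourth video frames in that sequence mix fields from two different film frames, which is the source of the freeze-frame flicker noted above.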
A frame-by-frame address code time reference recorded on the spare track of a videotape or inserted in the vertical blanking interval. The time code is an eight-digit number encoding time in hours, minutes, seconds, and video frames.
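Converting between a running frame count and the HH:MM:SS:FF address is simple arithmetic; the sketch below assumes non-drop-frame 30 fps (drop-frame NTSC time code is more involved):

```python
# Frame count -> HH:MM:SS:FF time code, assuming non-drop-frame 30 fps.
FPS = 30

def frames_to_timecode(total_frames):
    hh, rem = divmod(total_frames, 3600 * FPS)
    mm, rem = divmod(rem, 60 * FPS)
    ss, ff = divmod(rem, FPS)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

print(frames_to_timecode(1845))  # -> 00:01:01:15
```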
Time code generator
A signal generator designed to generate and transmit SMPTE time code.
One of two fields that comprise a frame of interlaced video. Each line of a top field on a screen is located immediately above the corresponding line of the bottom field, so that the lines of the two fields interlace.
A technique generally used in professional TV and video systems as a way of ensuring that the complete image is always visible within a display area; the opposite of overscanning.
Undefined bits within the 80-bit SMPTE time code word that are available for uses other than time coding.
Operation in which the bitrate varies with time during the decoding of a compressed bitstream. Although a variable bit rate is acceptable for plain linear playback, one reason not to use a variable bit rate is that reasonably quick random access becomes
nearly impossible. Additionally, because MPEG has no table of contents or index, the only tool the play-back system has for approximating the correct byte position is the requested play-back time stamp and the bit rate of the MPEG stream.
Approximating this time stamp is nontrivial because MPEG streams do not encode their play-back time.
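The byte-position approximation described above amounts to multiplying the requested timestamp by an assumed average bitrate; a minimal sketch (the bitrate value is an arbitrary example):

```python
# With no index or table of contents, a player can only estimate where
# in the file a given play-back time falls.
def approximate_byte_position(timestamp_sec, avg_bitrate_bps):
    return int(timestamp_sec * avg_bitrate_bps / 8)  # bits -> bytes

# Seeking to 60 seconds in a nominally 1.15 Mbps stream:
print(approximate_byte_position(60, 1_150_000))  # -> 8625000
```

For a truly variable bit rate this estimate can be far off, which is why quick random access into such streams is difficult.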
VLC (Variable Length Coding)
A reversible procedure for coding that assigns shorter code words to frequent events and longer code words to less frequent events.
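A toy example of the idea: frequent symbols receive short code words, and because the code is prefix-free the bitstream can be decoded unambiguously. The table below is illustrative, not an actual MPEG VLC table:

```python
# Toy prefix-free variable-length code: 'a' is assumed most frequent,
# so it gets the shortest code word.
CODE = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
DECODE = {v: k for k, v in CODE.items()}

def vlc_encode(symbols):
    return ''.join(CODE[s] for s in symbols)

def vlc_decode(bits):
    out, word = [], ''
    for bit in bits:
        word += bit
        if word in DECODE:          # prefix-free: a match is always a symbol
            out.append(DECODE[word])
            word = ''
    return out

bits = vlc_encode('aabac')
print(bits)                          # -> '00100110'
print(vlc_decode(bits))              # -> ['a', 'a', 'b', 'a', 'c']
```

Five symbols cost eight bits here, versus ten bits with a fixed two-bit code, which is the payoff when the frequency assumption holds.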
Vertical blanking interval (VBI)
Lines 1-21 of the video top field and lines 263-284 of the bottom field, in which frame numbers, picture stops, chapter stops, white flags, closed captions, etc. may be encoded. These lines do not appear on the display screen, but maintain image
stability and enhance image access.
Vertical interval time code (VITC)
SMPTE time code inserted in the vertical blanking interval between the two fields of a tape frame. This method eliminates errors that occur from tape stretch when using longitudinal time code.
VHS (Video Home System)
Popular consumer videotape format developed by Matsushita and JVC.
A system of recording and transmitting primarily visual information by translating moving or still images into electrical signals. The term video properly refers only to the picture -- but as a generic term, video usually embraces audio and other
signals that are part of a complete program. Video now includes not only broadcast television, but many non-broadcast applications, such as corporate communications, marketing, home entertainment, games, teletext, security, and even the visual display units of computer-based technology.
The absence of pictures and sound during video playback, usually at the beginning and ends of a program, and between segments; "dead" video.
Video-on-CD or Video CD
A full-motion digital video format using MPEG video compression and incorporating a variety of VCR-like control capabilities.
See also White Book.
Video 8 or 8mm Video
Video format based on the 8 mm videotapes popularized by camcorders.
A series of one or more pictures.
Videotape formats are generally classified by the width of the magnetic tape used:
1" -- Used for professional or "broadcast quality" video recording and editing; comes in large, open reels.
3/4" -- U-matic (Sony). Most industrial video uses this format, stored in inch-thick cassettes.
1/2" -- Cassette-based, primarily consumer format. VHS -- the most popular home videotape format -- is 1/2", as is Sony's Beta format. Their higher-quality counterparts (Super-VHS and Super Beta, respectively) are also in the 1/2" format.
8mm -- New consumer format that provides high-quality recording in tiny tape format; popularly used in hand-held camera-recorders (camcorders). See Video 8.
White Book
A standard specification developed by Philips and JVC in 1993 for storing MPEG standard video on CDs. An extension of the Red Book standard for digital audio, the Yellow Book standard for CD-ROM, the Green Book standard for CD-i, and the Orange Book standard for recordable CD.
WORM (write-once/read-many) memory
A type of permanent optical storage that allows users to record original information on a blank disc, but does not allow erasure or change of that information once it is recorded.
The three components of component video -- with Y for luma and Cb and Cr for different chroma components. International standard CCIR-601-1 specifies 8-bit digital coding for component video with black at luma code 16 and white at luma code 235,
along with chroma in 8-bit two's complement form (centered on 128 with a peak at code 224). This coding has a slightly smaller excursion for luma than for chroma; luma has 219 risers compared to 224 for Cb and Cr. The notation CbCr distinguishes this set from PbPr, where the luma and chroma excursions are identical. YCbCr coding is employed by D-1 component digital video equipment.
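The CCIR-601 coding described above can be sketched as a conversion from 8-bit R'G'B' (0-255) into studio-range YCbCr; the weights below are the standard BT.601 constants, and the function name is illustrative:

```python
# BT.601 (CCIR-601) conversion from 8-bit R'G'B' to studio-range YCbCr:
# luma spans 16-235, chroma is centred on 128.
def rgb_to_ycbcr(r, g, b):
    y  = 16  + ( 65.481 * r + 128.553 * g +  24.966 * b) / 255
    cb = 128 + (-37.797 * r -  74.203 * g + 112.000 * b) / 255
    cr = 128 + (112.000 * r -  93.786 * g -  18.214 * b) / 255
    return round(y), round(cb), round(cr)

print(rgb_to_ycbcr(255, 255, 255))  # white -> (235, 128, 128)
print(rgb_to_ycbcr(0, 0, 0))        # black -> (16, 128, 128)
```

Note how white reaches luma code 235 and black sits at 16, matching the excursion described in the definition.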
The color difference components used when three video components are to be conveyed in three separate channels with identical unity excursions. YPbPr is employed by analog component video equipment such as M-II and BetaCam. Pb and Pr bandwidth is half that of luma.
A color-encoding system similar to YUV. The U and V signals in YUV must be carried with equal bandwidth, albeit less than that of luma. However, the human visual system has less spatial acuity for magenta-green transitions than for red-cyan. Thus, if signals I and Q are formed from a 123 degree rotation of U and V respectively, the Q signal can be more severely filtered than I (to about 600 KHz, compared to about 1.3 MHz) without being perceptible to a viewer at typical TV viewing distance. YIQ is equivalent to YUV with a 33 degree rotation and an axis flip in the UV plane.
Because an analog NTSC decoder has no way of knowing whether an encoder was encoding YUV or YIQ, the decoder cannot detect whether the encoder was running at 0 degree or 33 degree phase. Thus, in analog usage the terms YUV and YIQ are often used somewhat interchangeably. YIQ was important in the early days of NTSC, but most broadcasting equipment now encodes equiband U and V.
YUV color system
A color-encoding scheme for natural pictures in which the luminance and chrominance are separate. The human eye is less sensitive to color variations than to intensity variations, so YUV allows the encoding of luminance (Y) information at full
bandwidth and chrominance (UV) information at half bandwidth.
Zig-zag scanning order
A specific sequential ordering of DCT coefficients from (approximately) the lowest spatial frequency to the highest.
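The ordering can be generated by walking the anti-diagonals of the 8x8 block, alternating direction, so that low-frequency coefficients come first; a sketch:

```python
# Generate the zig-zag scan order for an n x n block of DCT coefficients.
def zigzag_order(n=8):
    order = []
    for d in range(2 * n - 1):            # each anti-diagonal: row + col == d
        coords = [(d - c, c) for c in range(n) if 0 <= d - c < n]
        if d % 2 == 1:
            coords.reverse()              # odd diagonals run top-right to bottom-left
        order.extend(coords)
    return order

scan = zigzag_order()
print(scan[:6])   # -> [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
```

Scanning in this order groups the (typically zero) high-frequency coefficients at the end of the sequence, which makes run-length coding effective.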