This article contains text copied from Wikipedia under the terms of the GFDL. It needs to be edited to have a Computer vision focus.

Video compression deals with reducing the amount of data needed to represent digital video. It is necessary for efficient coding of video in video file formats and streaming video formats. Strictly speaking, what is commonly called video compression is more accurately video data rate reduction: the methods used usually discard data (lossy compression) rather than merely shrinking the representation while preserving all information (lossless compression). While lossless video compression is possible, in practice it is virtually never used, and all standard video data rate reduction involves discarding data.

Video is basically a three-dimensional array of color pixels. Two dimensions serve as spatial (horizontal and vertical) directions of the moving pictures, and one dimension represents the time domain.
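This three-dimensional structure (plus a color-channel axis) can be sketched directly as an array. The following is an illustrative example using NumPy with synthetic pixel values; the array shape and names are assumptions for illustration, not part of any video standard.

```python
import numpy as np

# A short clip as an array: time x height x width x color channels.
# Values here are synthetic; a real decoder would fill this from a file.
frames, height, width = 30, 4, 6
video = np.zeros((frames, height, width, 3), dtype=np.uint8)

# Indexing along the time axis recovers one still frame (a 2-D color image)...
frame0 = video[0]                      # shape (4, 6, 3)

# ...while fixing the spatial coordinates follows one pixel through time.
pixel_over_time = video[:, 2, 3, 0]    # shape (30,)
```

The two spatial axes and the time axis are what the compression techniques below exploit separately.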

A frame is the set of all pixels that (approximately) correspond to a single point in time; essentially, a frame is a still picture. In interlaced video, however, the even-numbered horizontal lines and the odd-numbered lines are grouped into separate fields. The term "picture" can refer to either a frame or a field.
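The split of a frame into two fields is just a row-interleaving of the scan lines. A minimal sketch, using a small synthetic grayscale frame (the sizes are arbitrary assumptions):

```python
import numpy as np

# A frame as rows of scan lines; here 6 lines of 4 grayscale samples.
frame = np.arange(6 * 4).reshape(6, 4)

# In interlaced video each frame is carried as two fields:
# one with the even-numbered lines, one with the odd-numbered lines.
top_field = frame[0::2]      # lines 0, 2, 4
bottom_field = frame[1::2]   # lines 1, 3, 5

# Re-interleaving the two fields reconstructs the full frame.
rebuilt = np.empty_like(frame)
rebuilt[0::2] = top_field
rebuilt[1::2] = bottom_field
```

Each field thus has half the vertical resolution of the frame, and the two fields together carry every line exactly once.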

Raw video data contains a great deal of spatial redundancy (neighboring pixels within a frame tend to have similar values) and temporal redundancy (successive frames tend to be very similar).
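Both kinds of redundancy can be made concrete by looking at differences instead of raw values. A sketch with a synthetic smoothly-varying frame and a one-pixel "camera pan" (all values and shapes are assumptions for illustration):

```python
import numpy as np

# A smooth synthetic frame: neighboring pixels have similar values
# (spatial redundancy); the next frame is the same scene shifted by
# one pixel, as under a slow camera pan (temporal redundancy).
x = np.linspace(0.0, 1.0, 64)
frame_a = np.outer(x, x)
frame_b = np.roll(frame_a, 1, axis=1)

# Spatial redundancy: differences between horizontal neighbors are
# tiny compared with the raw pixel values themselves.
spatial_residual = np.abs(np.diff(frame_a, axis=1)).mean()

# Temporal redundancy: the frame-to-frame difference is also tiny.
temporal_residual = np.abs(frame_b - frame_a).mean()
```

Because the residuals are far smaller than the raw values, they can be encoded with far fewer bits, which is exactly what the techniques below exploit.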

Video compression typically reduces this redundancy using lossy compression. Usually this is achieved by applying image compression techniques to individual frames to reduce spatial redundancy (known as intraframe compression), and by motion compensation and other techniques to reduce temporal redundancy (known as interframe compression). Formats such as DV avoid interframe compression to allow easier non-linear editing.
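The core of motion compensation is block matching: for each block of the current frame, the encoder searches the previous frame for the best-matching block and transmits only the motion vector plus a (hopefully small) residual. A minimal exhaustive-search sketch, with synthetic frames and arbitrary block/search sizes chosen for illustration (real codecs use far more elaborate searches):

```python
import numpy as np

rng = np.random.default_rng(42)

# Previous frame: random texture. Current frame: the same texture
# translated by (dy, dx) = (1, 2), as if the camera panned.
prev_frame = rng.integers(0, 256, size=(16, 16)).astype(np.int64)
curr_frame = np.roll(prev_frame, shift=(1, 2), axis=(0, 1))

def best_motion_vector(block, reference, y, x, search=3):
    """Exhaustive block matching: find the offset into `reference`,
    relative to position (y, x), whose block minimizes the sum of
    absolute differences (SAD) against `block`."""
    best_sad, best_mv = None, (0, 0)
    h, w = block.shape
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + h > reference.shape[0] or xx + w > reference.shape[1]:
                continue  # candidate block falls outside the frame
            sad = np.abs(block - reference[yy:yy + h, xx:xx + w]).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

# Encode the 4x4 block of the current frame at (8, 8) by searching the
# previous frame; the recovered motion vector undoes the pan, so the
# residual (SAD) is zero and only the vector needs to be transmitted.
block = curr_frame[8:12, 8:12]
mv, sad = best_motion_vector(block, prev_frame, 8, 8)
```

In an intraframe-only format such as DV, no such search is performed; each frame is coded independently, which costs bits but lets an editor cut at any frame.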

In broadcast engineering, digital television (DVB, ATSC and ISDB) is made practical by video compression. TV stations can broadcast not only HDTV, but multiple virtual channels on the same physical channel as well. Compression also conserves scarce bandwidth in the radio spectrum.

See also