This article looks at some common file formats for images, video and audio.
A camera raw image file contains minimally processed data from the image sensor. Raw files are named so because they have not yet been processed and are therefore not ready to be printed or edited. There are dozens, if not hundreds, of raw formats in use across different models of digital equipment. The raw format came about from the need for a digital equivalent of the negative in traditional film photography. Because of this, raw files contain the full bit-depth (typically 12- or 14-bit) data as read out from each of the camera’s image sensor pixels. To obtain an image from a raw file, the data must be converted into standard RGB form, a step often referred to as raw development.
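As a rough illustration of what raw development involves, the sketch below collapses each 2×2 cell of an RGGB Bayer mosaic into one RGB pixel and rescales the 12-bit sensor values to 8-bit range. This is a deliberately minimal toy (similar in spirit to a "half-size" development); real converters also apply white balance, a colour matrix, noise reduction and gamma correction, and the function name here is made up for the example.

```python
import numpy as np

def develop_half_size(bayer, bit_depth=12):
    """Toy 'raw development': collapse each 2x2 RGGB cell of a Bayer
    mosaic into one RGB pixel, then scale the sensor values from the
    camera's bit depth down to 8-bit range. Real raw converters do far
    more than this (white balance, colour matrix, noise reduction,
    gamma correction, proper demosaicing)."""
    r = bayer[0::2, 0::2].astype(np.float32)            # red photosites
    g = (bayer[0::2, 1::2].astype(np.float32) +
         bayer[1::2, 0::2].astype(np.float32)) / 2.0    # average the two greens
    b = bayer[1::2, 1::2].astype(np.float32)            # blue photosites
    rgb = np.stack([r, g, b], axis=-1)
    max_val = (1 << bit_depth) - 1                      # 4095 for 12-bit data
    return (rgb / max_val * 255.0).round().astype(np.uint8)

# A 4x4 mosaic of maximum 12-bit values develops to a 2x2 pure-white image.
mosaic = np.full((4, 4), 4095, dtype=np.uint16)
print(develop_half_size(mosaic).shape)  # (2, 2, 3)
```

The output image has half the linear resolution of the sensor because each 2×2 mosaic cell becomes a single pixel; proper demosaicing interpolates the missing colour samples instead.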
Cameras that support raw files typically come with proprietary software for conversion of their raw image data into standard RGB images. Software such as Adobe Lightroom can also process a wide range of raw files.
Shooting in raw has several advantages. As mentioned above, a raw file records all the data from the sensor, so you work with the highest-quality files and decide for yourself how to process the image, which gives you more creative freedom. Coupled with this is the greater dynamic range captured in the shot, allowing for a much greater level of contrast to work with. Where normal camera operation can leave an image over- or under-exposed, shooting in raw gives you easily adjustable options for correcting exposure, as well as for adjusting settings such as white balance, all without losing the detail of the shot.
There are only really a small number of downsides to the format. Firstly, the image needs to be processed: unlike a ready-made JPEG, an image shot in raw must go through some level of processing before it can be used. Secondly, because of the amount of data involved, storage can become an issue: a raw file can easily be around 30MB where the same image saved as a JPEG would be around 8MB. Finally, raw can have a negative impact on the speed of the camera, depending on the size and speed of the buffer that holds each image while it is written to permanent storage.
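The storage cost is easy to estimate from the sensor itself. The sketch below computes the approximate size of unpacked raw data for a hypothetical 24-megapixel sensor; the exact figure on a real camera depends on packing and any compression the maker applies, so treat the numbers as illustrative.

```python
def raw_size_mb(width, height, bits_per_pixel=14):
    """Approximate size of unpacked raw sensor data in megabytes:
    one sample per photosite (the Bayer mosaic), before any packing
    or compression the camera maker may apply."""
    bytes_total = width * height * bits_per_pixel / 8
    return bytes_total / (1024 * 1024)

# A hypothetical 24-megapixel sensor (6000 x 4000 photosites) of 14-bit data:
print(round(raw_size_mb(6000, 4000), 1))  # 40.1
```

A JPEG of the same scene is far smaller because it stores interpolated 8-bit RGB that has then been lossily compressed, which is exactly the detail raw shooting avoids throwing away.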
Because of how this format handles data, it is classed as a lossless format.
Advanced Audio Coding (AAC) is a coding standard for digital audio compression. It was designed as a replacement for the MP3 format, with the overall aim of better sound quality than MP3 at the same bit rate. AAC is the default or standard audio format on many modern platforms, such as YouTube, Apple's iPhone/iPod/iPad via the iTunes Store, and PlayStation, to name a few.
Like the MP3 format it was designed to replace, AAC is a lossy file format. It was not designed to be lossless; instead it focuses on better compression, delivering superior audio quality for the amount of compression used when compared to MP3.
AAC is a wideband audio coding algorithm with two key elements that reduce the amount of data needed to represent high-quality digital audio: signal components that are perceptually irrelevant are discarded, and redundancies in the coded signal are eliminated. AAC takes a modular approach to encoding. Depending on the complexity of the signal to be encoded, the desired performance and the acceptable output, implementers may create profiles that define which of a specific set of tools to use for an application.
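The first of those two ideas, discarding components deemed irrelevant, can be illustrated with a deliberately crude sketch: transform a signal to the frequency domain, keep only the strongest coefficients, and reconstruct. AAC's real psychoacoustic model is far more sophisticated than a plain magnitude threshold, so this is only a toy analogy.

```python
import numpy as np

def lossy_compress(signal, keep_fraction=0.1):
    """Toy illustration of perceptual-style lossy coding: keep only
    the largest-magnitude frequency coefficients and discard the rest.
    This is NOT how AAC actually works, just the general idea of
    throwing away components that contribute least."""
    coeffs = np.fft.rfft(signal)
    n_keep = max(1, int(len(coeffs) * keep_fraction))
    # Zero out all but the n_keep largest-magnitude coefficients.
    idx = np.argsort(np.abs(coeffs))[:-n_keep]
    coeffs[idx] = 0
    return np.fft.irfft(coeffs, n=len(signal))

# A pure 440 Hz tone survives aggressive pruning essentially intact,
# because almost all of its energy sits in a single frequency bin.
t = np.linspace(0, 1, 8000, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)
recon = lossy_compress(tone, keep_fraction=0.01)
print(round(float(np.max(np.abs(tone - recon))), 3))  # 0.0
```

Complex real-world audio spreads energy over many bins, which is why real codecs use a psychoacoustic model to decide which components a listener genuinely cannot hear, rather than a simple energy cut-off.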
AAC audio data was first packaged in a file as part of the MPEG-2 standard; today it is more commonly carried in containers such as MP4 and 3GP.
H.264 is a block-oriented motion-compensation-based video compression standard. As of 2014 it is one of the most commonly used formats for the recording, compression, and distribution of video content. It supports resolutions up to 4096×2304, including 4K.
[Figure: block diagram of H.264]
The aim of the H.264/AVC project was to create a standard capable of providing good video quality at substantially lower bit rates than previous standards without increasing the complexity of its design.
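The "block-oriented, motion-compensation-based" approach means the encoder describes most blocks of a frame as "this block from the previous frame, shifted by some offset", storing only the offset and a small residual. The minimal sketch below finds such an offset by exhaustive search; real H.264 encoders add sub-pixel motion, multiple block sizes and rate-distortion optimisation, and the function name here is invented for the example.

```python
import numpy as np

def best_motion_vector(ref, cur_block, top, left, search=4):
    """Minimal block-matching sketch: find the offset (dy, dx) in the
    reference frame whose block best matches the current block, using
    the sum of absolute differences (SAD) as the match cost."""
    bh, bw = cur_block.shape
    best, best_sad = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bh > ref.shape[0] or x + bw > ref.shape[1]:
                continue  # candidate block falls outside the frame
            sad = np.abs(ref[y:y+bh, x:x+bw].astype(int)
                         - cur_block.astype(int)).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad

# A 4x4 block that moved down 2 and right 3 between frames is found exactly.
ref = np.arange(64, dtype=np.uint8).reshape(8, 8)
cur_block = ref[4:8, 4:8]
mv, sad = best_motion_vector(ref, cur_block, top=2, left=1)
print(mv, sad)  # (2, 3) 0
```

A SAD of zero means the block is a perfect copy, so the encoder only needs to transmit the motion vector; non-zero residuals are transformed and quantised, which is where the heavy arithmetic mentioned below comes from.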
Because H.264 encoding and decoding requires significant computing power in specific types of arithmetic operations, software implementations that run on general-purpose CPUs are typically less power efficient. However, the latest quad-core general-purpose x86 CPUs have sufficient computation power to perform real-time SD and HD encoding. Compression efficiency depends on video algorithmic implementations, not on whether hardware or software implementation is used.
Like many other codecs, the overall aim of each new generation is to provide superior compression relative to the amount of quality lost, compared with the generation before. H.264 can reduce the size of a video file by about 50% over the previous MPEG-2 standard. This is hugely important for the bandwidth needed to send video over a network, and also for how much space files take up on drives.
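A quick back-of-the-envelope calculation shows what that 50% saving means in practice. The bitrates below are hypothetical round numbers chosen purely for illustration, not figures from the standard.

```python
def streaming_gb_per_hour(bitrate_mbps):
    """Data transferred for one hour of video at a given bitrate."""
    return bitrate_mbps / 8 * 3600 / 1024  # Mb/s -> MB/s -> MB/hour -> GB/hour

# Hypothetical example: if MPEG-2 needs ~8 Mbps for a given quality,
# a ~50% saving puts H.264 around 4 Mbps for comparable quality.
print(round(streaming_gb_per_hour(8.0), 2))  # 3.52  (GB/hour, MPEG-2)
print(round(streaming_gb_per_hour(4.0), 2))  # 1.76  (GB/hour, H.264)
```

Halving the bitrate halves both the network traffic and the disk space for a recording of the same length, which is why broadcasters and streaming services moved to H.264 so quickly.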
There are, however, some limitations to this standard. Firstly, any compression/decompression technology introduces latency, and in the case of H.264 latency is relatively high. Because of what is involved in encoding (see the image above for an overview), H.264 takes more processing power than earlier codecs. Some devices, such as video cameras, may not have a high-powered processor, so only limited compression/decompression levels can be used on them.
H.265 / HEVC
High Efficiency Video Coding (HEVC) or H.265 is a video compression standard, one of several potential successors to the widely used H.264 (AVC). In comparison to AVC, HEVC offers about double the data compression ratio at the same level of video quality, or substantially improved video quality at the same bit rate. It supports resolutions up to 8192×4320, including 8K.
HEVC was designed to substantially improve coding efficiency compared with H.264/MPEG-4 AVC HP, i.e. to reduce bitrate requirements by half with comparable image quality, at the expense of increased computational complexity. HEVC was designed with the goal of allowing video content to have a data compression ratio of up to 1000:1. Depending on the application requirements, HEVC encoders can trade off computational complexity, compression rate, robustness to errors, and encoding delay time.
Two key areas where HEVC improved on H.264/MPEG-4 AVC are support for higher-resolution video and improved parallel processing methods. H.265 has better compression performance and lower bandwidth utilisation. The ultimate goal of compressing video is to reduce the size of the file, both to take up less storage space and to cut the network bandwidth consumed in transmission. Compared with H.264, the greatest strength of H.265 is its higher compression ratio.
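To put the "up to 1000:1" figure mentioned above into perspective, the sketch below computes the raw (uncompressed) bitrate of a 4K stream and what a 1000:1 ratio would leave. The 12 bits per pixel assumed here corresponds to common 8-bit 4:2:0 chroma subsampling; the numbers are illustrative, not taken from the standard.

```python
def uncompressed_mbps(width, height, fps, bits_per_pixel=12):
    """Raw (uncompressed) video bitrate in megabits per second.
    12 bits/pixel assumes 8-bit samples with 4:2:0 chroma subsampling."""
    return width * height * fps * bits_per_pixel / 1e6

# 4K (3840x2160) at 60 fps: raw bitrate, then a hypothetical 1000:1 ratio.
raw = uncompressed_mbps(3840, 2160, 60)
print(round(raw))            # 5972  (Mbps uncompressed)
print(round(raw / 1000, 2))  # 5.97  (Mbps at a 1000:1 compression ratio)
```

Even a few megabits per second for 4K video shows why high compression ratios matter, and also why encoders need so much computation to reach them.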
MP4 is a digital multimedia container format most commonly used to store video and audio, but it can also be used to store other data such as subtitles and still images. MP4 files can contain metadata as defined by the format standard, and in addition, can contain Extensible Metadata Platform (XMP) metadata.
MP4 is a container rather than a compression format in its own right: the video inside an MP4 file is typically compressed with H.264 or MPEG-4, while the audio is compressed with AAC, the same codec used for files with the .AAC extension. Both of these are lossy codecs, so while the MP4 container itself adds no further quality loss, the audio and video it carries are not lossless.
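The container itself is a simple sequence of "boxes" (also called atoms), each starting with a 4-byte big-endian size and a 4-byte type. The sketch below walks the top-level boxes of a byte stream and, for brevity, ignores the extended 64-bit size form that real MP4 files can use for very large boxes.

```python
import struct

def parse_boxes(data):
    """Walk the top-level boxes ('atoms') of an MP4 byte stream.
    Each box begins with a 4-byte big-endian size (including the
    header) and a 4-byte ASCII type. This sketch skips the
    extended-size (size == 1) form used for boxes over 4 GB."""
    boxes, pos = [], 0
    while pos + 8 <= len(data):
        size, box_type = struct.unpack(">I4s", data[pos:pos+8])
        if size < 8:
            break  # malformed or extended-size box; stop for this sketch
        boxes.append((box_type.decode("ascii"), size))
        pos += size
    return boxes

# A hand-built 'ftyp' box (brand isom, one compatible brand) plus an
# empty 'free' padding box, as they would appear at the start of a file.
ftyp = struct.pack(">I4s4sI4s", 20, b"ftyp", b"isom", 512, b"mp41")
free = struct.pack(">I4s", 8, b"free")
print(parse_boxes(ftyp + free))  # [('ftyp', 20), ('free', 8)]
```

In a real file the interesting boxes are `moov` (the metadata index, including any XMP) and `mdat` (the actual H.264/AAC payloads), which is how one container can carry video, audio, subtitles and still images side by side.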
The main advantage of MP4 is that it is well suited to video streaming over the internet, thanks to its strong compression while maintaining the quality of the content. This has made the format popular for video on mobile devices, where storage is at a premium. The main disadvantage concerns editing: once the content is encoded it is hard to edit or update the file, and it is generally faster to re-export the video instead.