Analogue and Digital Representation and Conversion


As technology has evolved, so have the techniques for converting data from analogue to digital.

Traditional measuring techniques were all analogue; timekeeping is the classic example. The more elaborate the mechanical construction of a timepiece, the more accurate it could be. For precise measurement of time, however, the modern method is a digital system.

This is equally true of camera technology. The traditional analogue method of taking a picture involves exposing film to light, triggering a chemical reaction whose strength depends on the intensity of the light; the film is then chemically processed (developed) to create a representation of what was photographed. The digital equivalent has the light strike an optical sensor, which interprets the light values (see the section below about optical sensors), and it is this digital data that is used to display the photographed image. With the introduction of the computer, there has been a shift towards creating digital representations of analogue data.

 

Analogue vs. Digital Signals

Although both are used to transmit information, each has clear strengths and weaknesses. At a simple level, an analogue signal is a continuous signal, often depicted as a sine wave, that directly represents a physical measurement, whereas a digital signal is a discrete-time signal, typically depicted as a square wave generated by digital modulation. In real-world terms, the human voice travelling through air is an analogue signal, while its representation inside an electronic device is a digital signal.

This is why analogue signals are commonly described as having an infinite number of possible values. Take a natural light source such as the sun: although our eyes perceive only a specific frequency range, the visible light spectrum, the subtle frequency variations within that range can produce a near-infinite number of colours.

Because of these fundamental differences, an analogue signal deteriorates as noise accumulates, reducing its accuracy, whereas a digital signal is far more resistant to noise during transmission, since its discrete levels can be regenerated cleanly at each stage. However, the accuracy of a digital signal can still suffer, depending on the sample rate, when compared with the original analogue signal.
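To make that noise behaviour concrete, here is a minimal Python sketch (the signal values, noise strength, and decision threshold are illustrative assumptions, not taken from any specific system): a noisy analogue value keeps its error, while binary levels can be regenerated by thresholding as long as the noise stays below the decision margin.

import random

random.seed(1)

def add_noise(value, strength=0.2):
    """Model transmission noise as a small random offset."""
    return value + random.uniform(-strength, strength)

# Analogue: the noisy value *is* the signal, so the error persists.
analogue_in = 0.63
analogue_out = add_noise(analogue_in)

# Digital: bits are sent as two widely spaced levels (0.0 and 1.0),
# so a receiver can regenerate them by thresholding at 0.5.
bits_in = [1, 0, 1, 1, 0]
levels_out = [add_noise(float(b)) for b in bits_in]
bits_out = [1 if level > 0.5 else 0 for level in levels_out]

print(f"analogue: sent {analogue_in}, received {analogue_out:.3f}")  # error remains
print(f"digital:  sent {bits_in}, recovered {bits_out}")             # bits intact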

Conversion (Analogue-to-Digital Conversion – ADC / Sampling)

The general aim of sampling is to measure an analogue signal at regular intervals. Each measurement is saved digitally, and together these measurements form a digital representation of the analogue signal, as the simple sketch below illustrates.
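As an illustration, this Python sketch samples a 5 Hz sine wave standing in for an arbitrary analogue source (the frequency, sample rate, and duration are all arbitrary choices for the demo) at regular intervals and stores the measurements digitally.

import math

signal_freq = 5.0      # Hz, the analogue signal being measured
sample_rate = 100.0    # Hz, how often we measure it
duration = 0.1         # seconds of signal to capture

def analogue_signal(t):
    """A continuous analogue source: a 5 Hz sine wave."""
    return math.sin(2 * math.pi * signal_freq * t)

# Measure the signal at regular intervals and save the results digitally.
num_samples = int(duration * sample_rate)
samples = [analogue_signal(n / sample_rate) for n in range(num_samples)]

print(samples[:5])  # the first few stored measurements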

The accuracy of this digital conversion of an analogue signal depends on a few concepts. The first is the number of states available. The simplest in computing is binary (2 states); more generally, the number of states is a power of 2 (2, 4, 8, 16 and so on), so that the signal remains compatible with binary once it is on a computing device. This can change for certain technologies, however; for example, the ternary numeral system (also called base-3) is most commonly seen in CMOS circuits, which is of interest when looking at optical sensor technology.

The next concept is the sample rate: the number of times the analogue signal is measured to obtain the value of the wave at that moment. The more often that value is taken and converted into a digital value, the more accurate the digital representation will be.

The final part of this conversion is the bit depth, often loosely called the bit rate: the amount of information captured each time the signal is sampled. A higher bit depth means more information is captured per sample, so the analogue signal is converted into a digital signal more accurately, as the quantisation sketch below shows.
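Continuing the sampling sketch above, quantisation maps each measurement onto one of 2^n discrete states; with more bits per sample, the stored value sits closer to the original. The sample value and bit depths below are arbitrary choices for illustration.

def quantise(value, bits):
    """Map a value in [-1.0, 1.0] onto one of 2**bits discrete levels."""
    levels = 2 ** bits
    step = 2.0 / (levels - 1)              # spacing between adjacent levels
    return round((value + 1.0) / step) * step - 1.0

original = 0.637
for bits in (2, 4, 8):
    q = quantise(original, bits)
    print(f"{bits}-bit ({2**bits:3d} states): {q:+.4f}  error {abs(q - original):.4f}")

Running this shows the quantisation error shrinking from roughly 0.30 at 2 bits to about 0.002 at 8 bits: the same measurement, stored with increasing fidelity.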

Optical Sensors

The uses of optical sensors range from computers and copy machines to motion detectors and light fixtures. For an optical sensor to work effectively, it must be the correct type for the application, so that it maintains its sensitivity to the quantity it measures.

For camera use, there are a few critical roles for sensors. Firstly, cameras need to measure the light in the scene being shot (commonly when the camera is set to an automatic mode), so a light sensor is used purely for this metering. Some cameras do this with an ambient light sensor, though the more common form is a reflected light sensor, measuring the light reflected back to the camera by the scene.

For taking the shot, the light is focused by the lens onto a sensor that converts it into a digital format. The most common types of these sensors are CCD (charge-coupled device) and CMOS (complementary metal–oxide–semiconductor).

CCD is one of the oldest image sensor types and offers superior image quality compared with CMOS sensors, with better dynamic range and noise control. Although CCD is still prevalent in budget compact models, its basic construction and greater power consumption have for the most part prompted camera manufacturers to replace it with CMOS sensors. In a CCD sensor, much as with traditional film, there is a photoactive region where the light from the lens lands. There, an electric charge proportional to the light intensity at each location is generated in a capacitor array. The array then dumps its charge into a charge amplifier, which converts the charge into a voltage. This process is repeated, and the resulting voltages are sampled and digitised to create the digital image, which is then processed and written to the storage media.
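That readout chain can be sketched as a toy model in Python (every constant here — charge per unit of light, amplifier gain, ADC range — is invented purely for illustration): each photosite accumulates charge in proportion to the incident light, the charge is converted to a voltage by the amplifier, and the voltage is digitised into a pixel value.

# A toy model of the CCD readout chain (illustrative constants throughout).
light = [0.1, 0.4, 0.9, 0.6]   # relative light intensity at each photosite

CHARGE_PER_LIGHT = 1000.0      # electrons generated per unit of light (made up)
AMP_GAIN = 0.002               # volts per electron at the charge amplifier (made up)
ADC_BITS = 8                   # resolution of the digitiser
V_MAX = 2.0                    # full-scale voltage of the ADC

# 1. Exposure: each capacitor accumulates charge proportional to the light.
charges = [l * CHARGE_PER_LIGHT for l in light]

# 2. Readout: charge is dumped into the amplifier and becomes a voltage.
voltages = [q * AMP_GAIN for q in charges]

# 3. Digitisation: each voltage is sampled into one of 2**ADC_BITS values.
pixel_values = [round(v / V_MAX * (2 ** ADC_BITS - 1)) for v in voltages]

print(pixel_values)  # the digital pixel values written to storage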

CMOS was long considered an inferior competitor to CCD, but today's CMOS sensors have been upgraded to match and even surpass the CCD standard. With more built-in functionality than CCDs, CMOS sensors work more efficiently, require less power, and perform better for high-speed burst modes. CMOS first became popular for mass production because it uses the same materials found in microprocessors and static RAM chips, rather than the specialised silicon and additional fabrication techniques used in CCDs. A CMOS sensor consists of an integrated circuit containing an array of pixel sensors, each pixel containing a photodetector and an active amplifier. Since a CMOS sensor typically captures the array one row at a time, over roughly 1/60th or 1/50th of a second, it can produce a “rolling shutter” effect, where the image appears tilted to the left or right depending on the direction of camera or subject movement. For example, when tracking a car moving at high speed, the car itself will not be distorted but the background will appear tilted. A frame-transfer CCD sensor or a “global shutter” CMOS sensor does not have this problem; it captures the entire image at once into a frame store.
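The rolling-shutter skew is easy to reproduce with a toy simulation (the grid size, bar width, and per-row speed below are invented for the demo): because each row is captured slightly later than the one above it, a vertical bar moving horizontally comes out tilted in the captured frame.

WIDTH, HEIGHT = 12, 6
SPEED = 1.0          # columns the bar moves per row-readout interval (made up)

def scene_at(t):
    """The real scene at time t: a 2-column vertical bar starting at column t*SPEED."""
    bar_left = int(t * SPEED)
    return [["#" if bar_left <= x < bar_left + 2 else "." for x in range(WIDTH)]
            for _ in range(HEIGHT)]

# Rolling shutter: row y is captured at time t = y, after the bar has moved on.
frame = [scene_at(y)[y] for y in range(HEIGHT)]

for row in frame:
    print("".join(row))   # the straight bar comes out as a diagonal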

Sensor Size

[Image: comparison of common sensor sizes, including Full Frame, APS-H and APS-C (Nikon, Sony)]

Image: Crisp (2013)

There is a great range of sensor sizes currently available on the market; the image above shows a simple comparison of the more popular sizes in current use. Fundamentally, the size of the sensor determines how much light it gathers when creating an image. The more information a larger sensor has available when converting the light into a digital format, the more accurately it can reproduce the image coming through the lens.

This increase in size allows the sensor to be significantly more receptive to light across a much larger range of levels, known as dynamic range. It also allows manufacturers to increase the resolution of their cameras without sacrificing other image attributes. For example, a Full Frame camera with 36 megapixels would have pixels of very similar size to an APS-C camera with 16 megapixels, as the calculation below shows. Bigger sensors are also better for isolating a subject in focus while leaving the rest of the image blurred; cameras with smaller sensors struggle to do this because they must be moved further from the subject, or use a wider-angle (and much faster) lens, to take the same photo.
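That pixel-size claim can be checked with a quick calculation, assuming nominal sensor dimensions of 36 × 24 mm for Full Frame and 23.6 × 15.6 mm for APS-C (exact dimensions vary by manufacturer) and square pixels filling the sensor area.

import math

def pixel_pitch_um(width_mm, height_mm, megapixels):
    """Approximate pixel pitch, assuming square pixels filling the sensor."""
    area_mm2 = width_mm * height_mm
    pitch_mm = math.sqrt(area_mm2 / (megapixels * 1e6))
    return pitch_mm * 1000.0  # convert mm to micrometres

print(f"Full Frame, 36 MP: {pixel_pitch_um(36.0, 24.0, 36):.2f} um/pixel")
print(f"APS-C,      16 MP: {pixel_pitch_um(23.6, 15.6, 16):.2f} um/pixel")

This gives roughly 4.9 µm pixels for the 36 MP Full Frame sensor and roughly 4.8 µm for the 16 MP APS-C sensor: nearly identical, despite the very different resolutions.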

Although a larger sensor brings these benefits, additional considerations come into play. Firstly, the lens also needs to be larger to ensure that the sensor can be fully exposed. That increase in size in turn means the chassis of the camera must be larger to house the sensor, and additional power is needed. All of this adds cost to the device.

 

General References

Schonbein, W. (2014). Varieties of Analog and Digital Representation. Minds and Machines, 24(4), pp. 415–438.

 

Anderson, J. (2006). Digital Transmission Engineering (2nd ed., IEEE Series on Digital & Mobile Communication). Hoboken: Wiley.

 

Davies, H. (1996). A history of sampling. Organised Sound, 1(1), pp. 3–11.

 

Frieder, G. and Luk, C. (1975). Algorithms for Binary Coded Balanced and Ordinary Ternary Operations. IEEE Transactions on Computers, C-24(2), pp. 212–215.

 

University of St Andrews (2003). Analog to Digital Conversion [online].

Available at: https://www.st-andrews.ac.uk/~www_pa/Scots_Guide/info/signals/digital/ad_da/ad_da.htm

[Accessed 25 October 2017].

 

Sparkfun (2014). Digital Signals [online].

Available at: https://learn.sparkfun.com/tutorials/analog-vs-digital/digital-signals

[Accessed 25 October 2017].

 

Crisp, S. (2013). Camera Sensor Size [online image].

Available at: https://newatlas.com/camera-sensor-size-guide/26684/

[Accessed 20 November 2017].
