It is fascinating to see how cameras have changed over time. Photography has come a long way, from the odd-looking tripod-mounted cameras of the Charlie Chaplin era, through Walkman-sized roll-film cameras, to the digital cameras of today. Now we can take a picture with just the push of a button.
That single button press, however, triggers a series of operations behind the scenes, and understanding them is key to getting the most out of the features a modern camera offers.
The 3A functions are an integral part of the image signal processor (ISP) pipeline. This article explains how the camera’s three 3A features work: Auto Focus, Auto Exposure, and Auto White Balance.
The autofocus (AF) feature allows the camera to focus on a subject automatically. Almost all digital cameras have it, although the available AF methods vary slightly from model to model. By analyzing the incoming image, the ISP determines the correct focus.
Autofocus systems fall into two broad types, “passive” and “active.” Depending on the price point, some models combine both.
A classic active system is Polaroid’s: the camera emitted a high-frequency sound pulse, measured how long the ultrasonic wave took to echo back from the subject, and adjusted the lens position accordingly.
Today’s cameras use infrared signals rather than sound waves for active autofocus, which works well for subjects up to about 20 feet (6 meters) from the camera. The drawback of active AF is that it only works reliably for nearby, stationary subjects.
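The time-of-flight principle behind active AF reduces to a one-line calculation: the pulse travels to the subject and back, so the distance is half the round trip. A minimal Python sketch (the constant and function name are illustrative, not from any camera API):

```python
SPEED_OF_SOUND_M_PER_S = 343.0  # in air at roughly 20 °C

def echo_distance_m(round_trip_s: float) -> float:
    """Distance to the subject from an ultrasonic echo's round-trip time."""
    # The pulse covers the camera-subject distance twice, hence the halving.
    return SPEED_OF_SOUND_M_PER_S * round_trip_s / 2.0
```

An infrared system works the same way in principle, just with a much faster signal and correspondingly finer timing.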
The “passive AF” system, by contrast, operates quite differently. Passive autofocus, common on autofocus single-lens reflex (SLR) cameras, estimates focus through computer analysis of the image itself: the camera examines the scene while moving the lens back and forth until it achieves the best focus. Rather than relying on an emitted beam to measure the distance to the subject, it uses “phase detection,” “contrast detection,” or a mix of both.
Phase Detection vs Contrast Detection Methods of AF
The phase detection method works by splitting the incoming light into pairs of images and comparing them. CMOS sensors employ microlenses for this purpose: light entering through opposite sides of the lens is captured by two microlenses and directed onto separate photosites.
To determine whether the subject is front-focused or back-focused, the two images are compared for similar light-intensity patterns and the separation error between them is calculated. This technique reveals both the direction and the approximate amount of lens movement needed to achieve focus.
Contrast-detection autofocus measures the contrast on the sensor itself: the difference in intensity between adjacent pixels. A perfectly focused image has the greatest intensity differences, so the optical system is adjusted until maximum contrast is achieved.
The method does not provide a measurement of distance or direction. As a result, it struggles with moving subjects: the camera cannot tell whether a subject is moving toward or away from it, and therefore which way to drive the lens.
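Because contrast detection gives no direction, a simple implementation just evaluates candidate lens positions and keeps the sharpest one. A toy Python sketch (the adjacent-pixel contrast metric and the lens model are illustrative assumptions, not any camera's firmware):

```python
def contrast(pixels):
    """Sum of absolute differences between adjacent pixels."""
    return sum(abs(a - b) for a, b in zip(pixels, pixels[1:]))

def sharpest_position(capture, positions):
    """Step the (simulated) lens through positions; keep the sharpest."""
    return max(positions, key=lambda p: contrast(capture(p)))

# Toy lens model: the captured line of pixels is sharpest at position 3,
# and blurs (lower peak intensity) the further the lens moves from it.
capture = lambda p: [0, max(0, 10 - 2 * abs(p - 3)), 0]
```

Real implementations use hill-climbing rather than an exhaustive sweep, which is exactly why they hunt back and forth around the peak.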
Auto-exposure allows the camera to calculate the proper exposure for recording photographs. The ISP works with the sensor to measure the current luminance level, then controls the gain and exposure duration to maintain a constant brightness.
In automatic exposure mode, the camera controls the three factors of aperture, ISO, and shutter speed and determines the ideal combination for your image. Automatic mode leaves you creative control over composition and framing, but not over exposure: if a scene is too bright or too dark, there is little you can do other than move or wait for the lighting to change.
This is helpful when you don’t have time to dial in manual settings. You can also combine manual settings with auto exposure to achieve a desired effect.
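The aperture/ISO/shutter trade-off is commonly expressed as an exposure value (EV): settings with the same EV record the same image brightness. A small Python sketch of the standard relation EV = log2(N²/t), with an ISO adjustment term so that equal return values mean equal brightness (the function name is my own):

```python
from math import log2

def exposure_value(f_number: float, shutter_s: float, iso: float = 100.0) -> float:
    """ISO-100-equivalent exposure value.

    Brightness on the sensor scales with ISO * t / N^2, so two settings
    record equally bright images exactly when this value is equal.
    """
    return log2(f_number ** 2 / shutter_s) - log2(iso / 100.0)
```

For example, halving the shutter time while doubling the ISO leaves the value, and hence the recorded brightness, unchanged.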
Guided by contextual or perceptual factors, auto exposure algorithms adjust the captured image so as to reproduce its most significant portions at an average brightness level, roughly in the middle of the range.
Auto exposure algorithms involve three processes:
- Light metering: This is typically done by either an external exposure detector or the camera sensor itself.
- Scene analysis: Using imaging metrics, brightness-metering techniques estimate the illumination of the scene. The best exposure can then be estimated from the overall lighting level.
- Image brightness correction: By adjusting the gain and shutter-time settings, this step ensures that the appropriate amount of light reaches the image sensor. The relevant sensor parameter is the exposure time: the period during which the sensor integrates light, or in other words, how long the sensor’s photodiode array is exposed to it.
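The three steps above form a feedback loop: meter the mean brightness, then scale the exposure time toward a mid-range target. A toy Python sketch (the target value, the `meter` callback, and the linear light model are illustrative assumptions, not a real ISP's control law):

```python
TARGET = 128.0  # aim for mid-range mean brightness on an 8-bit scale

def auto_expose(meter, exposure_s, iterations=10):
    """Iteratively scale exposure time until metered brightness hits TARGET.

    `meter(exposure_s)` is a hypothetical callback returning the mean
    brightness (0-255) the sensor would record at that exposure.
    """
    for _ in range(iterations):
        mean = meter(exposure_s)
        if mean <= 0:
            break  # fully dark frame: no ratio to correct by
        exposure_s *= TARGET / mean
    return exposure_s

# Toy meter: brightness grows linearly with exposure, clipped at 255.
meter = lambda e: min(255.0, 1000.0 * e)
```

Real AE loops additionally juggle analog gain and must damp changes across frames to avoid visible flicker, but the convergence idea is the same.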
Auto White Balance
When a digital camera captures a scene, the sensor response at each pixel depends on the scene illumination. The sensor, ISP, and other camera components work together to capture the image and display it to the user, but depending on the lighting, the captured scene can carry a distinct color cast.
The human eye, by contrast, has exceptional color adaptation: we perceive objects’ original colors even when the light source illuminating them changes. To produce natural-looking photographs, the camera must likewise adjust colors based on the source of illumination. This feature is known as white balance, and the ISP function that handles it without user input is called auto white balance.
Many auto white balance algorithms follow a two-stage process:
- Illumination estimation: This can be done explicitly, by selecting from a predetermined list of possible illuminants, or implicitly, by making assumptions about how an illuminant would affect the scene. Many white balance algorithms use implicit estimation.
- Image color correction: This is accomplished by independently adjusting the gains of the three color channels. Typically, the green gain is fixed and only the red and blue gains are changed.
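The two-stage process above can be illustrated with the classic gray-world algorithm, which implicitly assumes the average scene color is neutral gray. A minimal Python sketch (function names are my own; real ISPs apply the per-channel gains in hardware):

```python
def gray_world_gains(r_mean, g_mean, b_mean):
    """Stage 1+2 combined: estimate the cast from channel means, then
    fix green at 1.0 and scale red and blue so every channel's mean
    matches the green mean (the gray-world assumption)."""
    return (g_mean / r_mean, 1.0, g_mean / b_mean)

def apply_gains(pixel, gains):
    """Apply per-channel gains to one (R, G, B) pixel, clipped to 8 bits."""
    return tuple(min(255.0, c * g) for c, g in zip(pixel, gains))
```

Under a reddish cast the red mean is inflated, so its gain comes out below the blue gain, pulling the image back toward neutral.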
The quality of the 3A functions depends on the effectiveness of the ISP, the sensor’s compatibility with the ISP, the lens selection, and other factors. We must therefore first thoroughly understand the camera performance requirements of a given use case before selecting a camera configuration, and then test the sensor-ISP-lens combination to verify that it meets those requirements.
Vadzo offers camera solutions for a wide range of markets, including industrial, medical, retail, fleet management, asset management, smart homes, kiosks, and electrical applications.
We’ll recommend the best camera for your needs and assist you at every step of your design, from lens assembly and general design standards to budgets and timelines. Our offerings span scanners, CCTV, CCD/CMOS, medical imaging, surveillance systems, machine vision, and night-vision equipment.
Get specific dimensions and performance information for your project by getting in touch with the application specialists at Vadzo.