Pixel Binning: What Is It And Why The iPhone 14 Needs It

According to the latest available information, high-resolution pixel-binning camera sensors will soon be making their way into the iPhone 14. But what is pixel binning, and how does it improve the quality of your pictures?


What is Pixel Binning?

Pixel binning is an image sensor technology where clusters of adjacent pixels are grouped together, or ‘binned’, to form larger ‘superpixels’. This binning occurs before the output of the superpixel is converted to digital information and can provide better low-light performance at the expense of some image resolution. 
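If you picture the sensor as a grid of brightness values, a few lines of Python make the idea concrete. This is only a rough sketch: a real sensor combines charge in the analogue domain, before the analogue-to-digital converter, whereas this toy version averages numbers that have already been digitized.

```python
import numpy as np

def bin_2x2(raw):
    """Average each 2x2 block of sensor values into one 'superpixel'.

    Real sensors bin charge before digitization; this digital
    version only approximates the effect.
    """
    h, w = raw.shape
    blocks = raw[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

# A noisy 8x8 'sensor' becomes a cleaner 4x4 image: averaging four
# readings cuts random noise roughly in half (a factor of sqrt(4)).
rng = np.random.default_rng(0)
raw = 100 + rng.normal(0, 10, size=(8, 8))
print(raw.std(), bin_2x2(raw).std())  # binned noise is noticeably lower
```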

Ultimately, it’s all about using the available megapixels in the smartest way possible. You probably don’t need ultra-high resolution images very often, but what if your camera could see better in the dark, cope better in bright light and produce clearer video? That’s where pixel binning can help.

What is Quad Bayer coding?

Lots of camera reviews mention ‘Quad Bayer’ sensors, but what exactly does that mean?


Quad Bayer coding is a particular type of pixel binning that uses four-pixel superpixels combined with the commonly used Bayer filter array (see ‘What is a Bayer Filter?’ below). The key point is that the Bayer filter is sized to match the large superpixels rather than the smaller pixels from which they are formed. Each individual color filter in the Quad Bayer array therefore covers a group of four pixels rather than just one.
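To make the layout concrete, here’s a small Python sketch, using placeholder ‘R’, ‘G’ and ‘B’ labels, that builds both patterns side by side: a standard Bayer array, where the 2x2 color tile repeats at single-pixel pitch, and a Quad Bayer array, where each filter is enlarged to cover a 2x2 pixel group.

```python
import numpy as np

tile = np.array([['R', 'G'],       # the classic 2x2 Bayer tile
                 ['G', 'B']])

# Standard Bayer: the tile repeats at single-pixel pitch.
bayer = np.tile(tile, (2, 2))

# Quad Bayer: each filter is enlarged to cover a 2x2 pixel group.
quad_bayer = np.repeat(np.repeat(tile, 2, axis=0), 2, axis=1)

print(bayer)       # rows: R G R G / G B G B / R G R G / G B G B
print(quad_bayer)  # rows: R R G G / R R G G / G G B B / G G B B
```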

For example, a 48-megapixel Quad Bayer sensor functions as a 12-megapixel camera when pixel binning is active. Some ultra-high-resolution examples, such as the 108-megapixel main camera in Samsung’s Galaxy S21 Ultra and Galaxy S22 Ultra, use ‘Nonacells’ composed of nine individual pixels, again outputting 12-megapixel JPEG files by default.
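The arithmetic is simple: divide the sensor’s full resolution by the number of pixels in each superpixel.

```python
print(48 / 4)    # Quad Bayer, 2x2 superpixels: 12.0 megapixels
print(108 / 9)   # Nonacell,  3x3 superpixels: 12.0 megapixels
```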

Most pixel-binned sensors also offer the option of reverting to a full-resolution mode, providing the flexibility to choose between improved clarity in low light and improved resolution in brighter conditions.

In some imaging sensors, pixel binning is also used to improve dynamic range, allowing HDR capture to be achieved with a single shot rather than taking multiple shots and merging them later. This becomes possible because each of a superpixel’s individual sub-pixels can be independently tuned to better capture highlights or shadows before their outputs are combined. The result is a superpixel that’s able to capture dark shadows and bright highlights simultaneously.
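As a simplified illustration, the toy Python model below assumes a 2x2 superpixel in which two sub-pixels take a short exposure and two take a long one; the exposure ratio, noise level and blending rule are all illustrative assumptions, not any manufacturer’s actual tuning.

```python
import numpy as np

RATIO = 8.0         # assumed long/short exposure ratio
FULL_WELL = 1023.0  # assumed clipping level (10-bit sensor)

def hdr_superpixel(radiance, rng):
    """Combine two short-exposure and two long-exposure sub-pixels."""
    short = np.clip(radiance + rng.normal(0, 2, 2), 0, FULL_WELL)
    long_ = np.clip(radiance * RATIO + rng.normal(0, 2, 2), 0, FULL_WELL)
    # Prefer the long exposure (less noisy) unless it has clipped.
    if long_.mean() < 0.95 * FULL_WELL:
        return long_.mean() / RATIO
    return short.mean()

rng = np.random.default_rng(1)
print(hdr_superpixel(20.0, rng))   # shadow: long exposure lifts it cleanly
print(hdr_superpixel(900.0, rng))  # highlight: long clips, short takes over
```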

This ‘real-time HDR function’ makes it easier to capture HDR images of moving objects while also enabling HDR video capture.

Are There Disadvantages Of Pixel Binning?

Pixel binning requires more complex image-processing algorithms (see ‘Why is a color filter needed?’ below), which can introduce visual side effects, or artifacts, that are hard to remove. It also produces images at a much lower resolution than the sensor would otherwise allow.

In well-lit scenes, where binning isn’t needed, the sensor can instead be switched back to its full-resolution mode.

However, when in this full-resolution mode, the Quad Bayer filter is less than ideal as it doesn’t match the underlying pixels one-for-one in the way that a standard Bayer filter would. 

Cameras can get around this by means of array conversion or ‘remosaicing’, which uses image processing to approximate the use of a standard Bayer filter. Some sensors come with custom circuitry to accomplish this task but, even then, there will be some reduction in image quality as a result.
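A heavily simplified Python sketch of the idea appears below: wherever the Quad Bayer color at a pixel disagrees with the target Bayer color, it borrows nearby samples of the right color. Production remosaicing algorithms are far more sophisticated, so treat this purely as an illustration of why the conversion is approximate.

```python
import numpy as np

def cfa_pattern(tile, h, w, group=1):
    """Build an h-by-w color-filter map from a 2x2 tile; group=2 scales
    each filter to cover a 2x2 pixel block (i.e. Quad Bayer)."""
    big = np.repeat(np.repeat(tile, group, axis=0), group, axis=1)
    reps = (h // big.shape[0] + 1, w // big.shape[1] + 1)
    return np.tile(big, reps)[:h, :w]

def remosaic_nearest(raw, src_cfa, dst_cfa):
    """Toy remosaic: wherever the source color disagrees with the target
    Bayer color, average nearby samples of the right color. Real
    pipelines use far more sophisticated interpolation than this."""
    h, w = raw.shape
    out = raw.copy()
    for y, x in zip(*np.where(src_cfa != dst_cfa)):
        y0, y1 = max(0, y - 2), min(h, y + 3)
        x0, x1 = max(0, x - 2), min(w, x + 3)
        nearby = src_cfa[y0:y1, x0:x1] == dst_cfa[y, x]
        out[y, x] = raw[y0:y1, x0:x1][nearby].mean()
    return out

tile = np.array([['R', 'G'], ['G', 'B']])
src = cfa_pattern(tile, 8, 8, group=2)  # Quad Bayer layout
dst = cfa_pattern(tile, 8, 8, group=1)  # standard Bayer layout
rng = np.random.default_rng(2)
raw = rng.uniform(0, 255, (8, 8))
bayer_like = remosaic_nearest(raw, src, dst)  # approximates a Bayer capture
```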

A 48-megapixel Quad Bayer sensor will therefore produce lower-quality full-resolution images than the same sensor would if equipped with a standard Bayer filter. You will often do better just sticking with the 12-megapixel mode.

What is a Bayer Filter?

The Bayer filter is an array of colored filters that sits on top of an image sensor’s pixels and enables the camera to create color images. These colored filters are arranged in a specific ‘Bayer’ pattern that mimics the sensitivity of the human eye by being most sensitive to the color green.

Why is a color filter needed?

Although camera sensors are most often used to create full-color images, a sensor’s individual pixels are only able to detect levels of brightness; they can’t see color at all. To enable color reproduction, each pixel is positioned behind a red, green or blue colored filter (think tinted glass). This way each pixel is exposed to light of just one primary color.

In the Bayer filter, half of the color filters are green, while red and blue filters each account for one quarter of the total. The filters are laid out in a repeating two-by-two tile containing one red, one blue and two diagonally placed green filters.

In a full-color image file, each pixel is composed of levels of red, green and blue.  However, as each sensor pixel contains brightness information for just one color, the remaining two primary colors need to be filled in. This is achieved through a process known as ‘demosaicing’ which estimates the missing colors through image processing algorithms that take in color information from the surrounding pixels. 
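The simplest practical version is bilinear demosaicing, where each missing color value is interpolated from the nearest samples of that color. The Python sketch below, assuming an RGGB Bayer layout, shows the idea; real camera pipelines use much more advanced, edge-aware algorithms.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw, cfa):
    """Estimate full RGB by averaging the nearest samples of each color."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5 ],
                       [0.25, 0.5, 0.25]])
    for ch, color in enumerate('RGB'):
        known = cfa == color
        samples = raw * known
        # Normalized convolution: interpolate from neighboring samples.
        rgb[..., ch] = (convolve(samples, kernel, mode='mirror') /
                        convolve(known.astype(float), kernel, mode='mirror'))
        rgb[known, ch] = raw[known]  # keep the measured values exactly
    return rgb

cfa = np.tile(np.array([['R', 'G'], ['G', 'B']]), (4, 4))  # 8x8 RGGB mosaic
rng = np.random.default_rng(3)
raw = rng.uniform(0, 1, (8, 8))
rgb = demosaic_bilinear(raw, cfa)  # (8, 8, 3) full-color estimate
```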

After demosaicing, the images can be saved, typically as JPEG or PNG files. But there is another option.

What are raw files?

In some cases, the information can be saved straight from the sensor, bypassing the demosaicing process. This results in a raw image file, so called because it contains all the raw, unprocessed sensor information retaining the Bayer pattern exactly as it was originally captured. These files must be processed with external software before they can be viewed, but typically contain the most detail available from the camera.
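For example, the third-party rawpy library (Python bindings for LibRaw) lets you look at the un-demosaiced data directly; ‘photo.dng’ below is just a placeholder for any raw file from your camera.

```python
import rawpy

with rawpy.imread('photo.dng') as raw:  # placeholder file name
    mosaic = raw.raw_image              # un-demosaiced Bayer data
    print(mosaic.shape)                 # one brightness value per pixel
    rgb = raw.postprocess()             # demosaic, white balance, etc.
    print(rgb.shape)                    # (height, width, 3) full color
```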

