
HDR – image processing in high bit depth

HDR – high dynamic range – is a buzzword these days. But what exactly is it, and where might we need it in real life? And if we do need HDR, how can we work with it more easily and apply it in production? In this article, we explain that HDR simply means having a relatively high number of color combinations. Although it is a simple idea, it is essential for even the slightest post-processing of our image information.

 

What is HDR?

Common digital imagery (like JPEG) nowadays uses a color space with a depth limited to 8 bits per component in every pixel. That means an RGB still-picture format with red, green and blue components is able to store 256 grades of each of these three components. HDR simply means the limit is higher than 8 bits per component. Today's industry-standard HDR is considered to be 12 bits per component. Occasionally we even meet 16-bit HDR image data, which can be considered extremely high quality.

Let us imagine the standard range – one pixel with 8-bit color depth. If we take the red component, say, we can consider a value of 255 to be the strongest possible red, while a value of 0 would mean no red at all. This way, any pixel in an image may be one of 16,777,216 combinations of red, green and blue (see Fig. 1). That seems like a lot. So why would anyone need more?

Fig. 1 – red, green, blue color combinations

 

Why do we need HDR?

Let us consider a scene with high differences in lighting, such as a street scene with some parts in sunlight and others in shade. The whole picture, viewed as a whole, may look fine (see Fig. 2).

Fig. 2 – a street with areas in bright sunlight and deep shade (ASC/DCI Standard Evaluation Material)

But even if a camera chip is able to capture all the details, different parts of the picture, viewed separately, would look either too bright or too dark. To deal with that, we usually simply stretch the contrast to get a greater tonal distance between the brightest and the darkest tone in the selected part (see Fig. 3, 4). Unfortunately, the affected parts now contain far fewer distinct pixel values than the original 256 per component (see histograms – Fig. 5, 6). These values, once stretched, cannot produce enough shades of color to look realistic.

Fig. 3 – contrast adjustment for a dark part of an image

Fig. 4 – resulting histogram of the contrast adjustment for a dark part of an image

 

Fig. 5 – contrast adjustment for a light part of an image

Fig. 6 – resulting histogram of the contrast adjustment for a light part of an image

Obviously, the more processing the picture needs, the less color depth will remain. The resulting images can easily be quite unsatisfactory for the human eye.

This is where HDR comes into play. Let us say we have a source picture with 12-bit color depth. Now, color manipulation can cost us up to 4 bits while still preserving the final 8-bit output quality presented to the viewer in all 16,777,216 color combinations. Compare the histograms of the same input in both standard 8-bit and HDR color depth:

Fig. 7 – histogram of a dark picture after color manipulation (left: 8-bit source, right: 16-bit source)

Fig. 8 – histogram of a light picture after color manipulation (left: 8-bit source, right: 16-bit source)

As the histograms show, manipulating a standard 8-bit source destroys a lot of color information and thus degrades the color rendering. Using HDR preserves much more information, so the final color rendering retains the highest possible quality.

 

HDR in JPEG2000

The JPEG2000 standard is designed to be very future-proof. Today's HDR standard is considered to be 12-bit depth, but JPEG2000 codecs usually allow you to store up to 30 bits per component. While allowing such depth, JPEG2000 remains very efficient. The algorithms used (DWT and EBCOT) do not add any extra data when a high bit depth is set but not used (as in the dark parts of an image, where only dark colors occur). For example, a totally black picture will have the same file size whether it is 8, 12 or 16 bits deep.

If we take the real image from Fig. 3, we can compare common 8-bit and HDR 12-bit encoding in terms of quality and file size. The table below shows the differences between the original, JPEG2000 and JPEG encoding (HDR is not available for JPEG).

Profile / Quality (PSNR, dB) | File size (MB) | Storage savings vs. original
Original 12-bit raw HDR image | 35.5 | 0%
8-bit JPEG encoding / 51.5 | 7.3 | 79%
8-bit JPEG2000 encoding / 51.5 | 5.4 | 85%
12-bit JPEG2000 encoding / 63.0 | 10.7 | 70%
12-bit JPEG2000 extra-quality encoding / 73.4 | 15.6 | 56%

HDR with Comprimato SDK

Our unique API keeps even HDR very simple and allows users to apply the same settings pattern across similar use cases. In the example below, I show how to use fast, GPU-accelerated image or video encoding to produce an HDR J2K picture. Switching from 8-bit to HDR processing is just a change in the number of stored bits (the bit depth). See the code samples:

8-bit processing code sample

struct cmpto_j2k_enc_comp_format colorComponent;
colorComponent.bit_depth = 8;           /* 8 bits per component */
colorComponent.bit_shift = 0;
colorComponent.data_type = CMPTO_INT8;  /* 8-bit storage type */
colorComponent.is_signed = 0;
colorComponent.sampling_factor_x = 1;
colorComponent.sampling_factor_y = 1;

HDR (12-bit) processing code sample

struct cmpto_j2k_enc_comp_format colorComponent;
colorComponent.bit_depth = 12;           /* 12 bits per component */
colorComponent.bit_shift = 0;
colorComponent.data_type = CMPTO_INT16;  /* 12-bit samples need 16-bit storage */
colorComponent.is_signed = 0;
colorComponent.sampling_factor_x = 1;
colorComponent.sampling_factor_y = 1;

To get a better idea of how fast conversion with the Comprimato SDK is, see our online calculator. HDR is part of future video formats, and Comprimato helps facilitate painless HDR production while also eliminating one of the drawbacks of HDR: big file sizes.

 

by Jan Brothánek
June 9th, 2017