The new era of large image processing

We all know about the explosion in the size of images we need to deal with, both in the expanding resolution of individual frames and in the overwhelming total volume of data. The amount of detail and information we can now capture has tremendous benefits, from immersive entertainment to medical diagnosis to breakthrough remote sensing. But it also poses a huge challenge to existing IT infrastructures and large image processing. Most IT practitioners understand that there are two ways to solve this: either massively scale up their storage and networking capability, or find some way to use image compression to make the data more manageable. But compression has been a dirty word, with connotations of data loss, high compute overhead, and a variety of other costs. We call these concerns “compression anxiety”, and we want to address them here.

The storage capacity, IO performance and network transmission requirements are immense. While disk and solid-state storage have become denser and more cost effective, they are still a very large (and ever growing) expenditure. And even the fastest networking can’t keep up with the demands of users who need to access all these images. So invariably, the discussion turns to a smarter way to handle the data, and that means using some sort of compression. An immediate reaction is often “No way!” – compression would compromise the integrity of the original data, either degrading the creative work the filmmaker strove to capture or hiding the minute details that are the key indicators for the radiologist or intelligence analyst. Furthermore, wouldn’t it be complicated to add compression to the workflow, creating disruption and cost?
However, these apprehensions are misguided: there are now solutions that provide easy-to-integrate compression with no loss of data at all. The key is using JPEG2000 as the compression engine, an international image standard that has been established for a long time. Comprimato leverages the benefits of JPEG2000 and has built a product for accelerated lossless image compression. Here are the key points of how Comprimato uses JPEG2000 for large image processing:

  1. Comprimato significantly reduces image size: the encoded image is typically 40%–70% smaller than the input.
  2. Comprimato also guarantees NO loss of image data, thanks to mathematically lossless compression.
  3. Comprimato has a unique and very effective optimization for GPUs and CPUs that accelerates image encoding and decoding. Images can be processed in real time even at very large resolutions such as 10K x 10K pixels.
  4. It’s cost efficient. Comprimato’s product can run on affordable nVIDIA or AMD GPU cards, which can simply be added to existing servers for a fraction of the price of entirely new hardware. The GPUs can easily be upgraded in the future to scale up image processing as resolutions keep growing.
  5. It’s not complicated to add Comprimato image processing to existing infrastructure. Thanks to a Software Development Kit (SDK) and open APIs, it’s easy to plug into any image-processing server or client software (see the sketch after this list for the general idea).
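To make points 1 and 2 concrete, here is a minimal sketch of lossless JPEG2000 encoding using the open-source Pillow bindings to OpenJPEG rather than the Comprimato SDK (whose API is not shown in this post). The file names are placeholders; irreversible=False selects the reversible 5/3 wavelet, i.e. mathematically lossless coding.

    import os
    import time
    from PIL import Image

    SRC = "scan_10k.tif"   # hypothetical large source image
    DST = "scan_10k.jp2"

    img = Image.open(SRC)
    raw_bytes = len(img.tobytes())          # size of the uncompressed pixel buffer

    t0 = time.perf_counter()
    img.save(DST, irreversible=False)       # reversible wavelet -> lossless JPEG2000
    elapsed = time.perf_counter() - t0

    jp2_bytes = os.path.getsize(DST)
    print(f"encoded in {elapsed:.2f}s, "
          f"size reduced by {100 * (1 - jp2_bytes / raw_bytes):.1f}%")

The exact reduction depends on the content, but typical photographic, medical or geospatial images land in the 40%–70% range quoted above.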

The JPEG2000 format used by Comprimato is well known, a defined standard in the medical and geospatial imaging industries, and not bound by any third-party commercial license constraints. Below is a comparison of image size before and after encoding, as well as the encoding/decoding speed on processors and graphics cards.

JPEG 2000 compression and image size

JPEG 2000 encoding and decoding on nVIDIA GPU and Intel CPU


To illustrate, here is what happens during RAW image processing when Comprimato technology is in place. The picture below shows lossless compression of data coming from the source camera (or a medical device), and the bottom part shows what happens on the user side when the image needs to be displayed.

JPEG 2000 encoding and decoding using Comprimato codec.
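As a rough sketch of that round trip (again using open-source tooling and placeholder file names rather than the Comprimato codec, and assuming an 8-bit RGB source), the consumer side simply decodes the JPEG2000 file and gets the original pixels back exactly:

    from PIL import Image, ImageChops

    # Producer side: encode the captured frame losslessly
    src = Image.open("frame_0001.tif")             # hypothetical camera/scanner output
    src.save("frame_0001.jp2", irreversible=False)

    # Consumer side: decode when the image needs to be displayed
    restored = Image.open("frame_0001.jp2")
    restored.load()

    # Lossless means the difference image is empty, i.e. bit-exact reconstruction
    assert ImageChops.difference(src, restored).getbbox() is None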


Comprimato works very closely with Intel, AMD and nVIDIA and gets the most performance from their latest CPU and GPU hardware with each new release.
The Comprimato SDK works on Linux, Windows and macOS. A new version will be released soon, bringing advanced features such as multiple quality layers in the encoder and support for images even larger than 100K x 100K pixels.
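As a sketch of how such oversized images are usually kept manageable (the tile and resolution settings below are illustrative assumptions, not Comprimato defaults), JPEG2000 lets the encoder split the picture into independent tiles and build a multi-resolution pyramid, so a viewer can later decode only the region and zoom level it needs:

    from PIL import Image

    Image.MAX_IMAGE_PIXELS = None              # allow very large inputs (trusted data only)
    huge = Image.open("orthophoto_huge.tif")   # hypothetical very large image

    huge.save(
        "orthophoto_huge.jp2",
        irreversible=False,       # keep the compression mathematically lossless
        tile_size=(1024, 1024),   # independent 1024x1024 tiles
        num_resolutions=8,        # resolution pyramid for fast zoomed-out previews
    )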

Comprimato large image processing brings incredible speed and saves storage space and network bandwidth, all without any degradation of the original image quality. It’s a win-win answer to the compress-or-not-compress question and a new way to look at large image processing.
