
New support for interlaced content allowing high-quality video processing

Interlaced video scanning is a commonly used term in the broadcasting industry and a legacy technology still widely used in both contribution and distribution video production. Nowadays, with an increasing need for video processing where a progressive signal is required, conversion of interlaced video is an unavoidable necessity. As our Comprimato Live transcoder now supports interlacing, this article takes a closer look at its technical meaning and the benefits of our implementation.

What is interlacing

In the video industry, interlacing is a technique that doubles the rate of image acquisition and display while keeping the bandwidth (data rate) at the level of standard progressive video. The trick is to transmit so-called fields, each consisting of half the lines of a complete frame, alternating between the even and the odd lines with every field.

So an interlaced HD (1080i) video at 60 fields per second means a display device receives 60 fields of 540 rows each per second, 30 of them carrying the even rows and 30 the odd rows. This is effectively the same amount of data as a progressive HD (1080p) stream at 30 fps, where we receive 30 full frames of 1080 rows per second.
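The data-rate equivalence above can be checked with a short sketch (assuming uncompressed 8-bit 4:2:2 sampling, i.e. 2 bytes per pixel, purely for illustration):

```python
# Data-rate comparison: 1080i at 60 fields/s vs 1080p at 30 frames/s.
# Assumes uncompressed 8-bit 4:2:2 sampling (2 bytes/pixel), illustrative only.

BYTES_PER_PIXEL = 2  # 8-bit 4:2:2

def rate_bytes_per_sec(width, rows, pictures_per_sec):
    """Raw data rate of a stream of `pictures_per_sec` pictures per second."""
    return width * rows * BYTES_PER_PIXEL * pictures_per_sec

interlaced = rate_bytes_per_sec(1920, 540, 60)    # 60 fields of 540 rows
progressive = rate_bytes_per_sec(1920, 1080, 30)  # 30 frames of 1080 rows

print(interlaced == progressive)  # True: the same uncompressed bandwidth
```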

An example of an interlaced picture with even lines acquired at a different time than odd lines (left). Compare that with the original progressive image (right).

Interlacing was originally used to improve the smoothness of motion in television broadcasts in the days of raster-scanning (CRT) displays. The scanning beam of those displays alternated between the even and the odd lines. As the picture on such televisions wasn't sharp even when driven progressively, interlacing added smoothness with no visible drawbacks.

In the modern video industry, where display devices no longer use scanning beams, interlacing survives for historical rather than technical reasons. Displays always need some form of deinterlacing because, by design, they only work with progressive signals. As a result, interlacing complicates any kind of video processing while adding little or no benefit. Nevertheless, it remains a standard commonly used in video contribution and in satellite, cable and terrestrial broadcasting with ATSC or DVB.


The need for deinterlacing

Deinterlacing is the process of retrieving a progressive video signal from an interlaced input. Regardless of the algorithm used, the process can never perfectly reconstruct the original, because some information is always missing.

Every interlaced video signal needs to be converted to a progressive signal before any picture manipulation. Even operations as simple as scaling, or subsampling from 4:2:2 to 4:2:0, aren't possible on interlaced fields because the even and odd lines don't belong together. The following example shows the erroneous output we get if we attempt to scale the fields independently:

Left: original moving object (progressive acquisition); Right: interlaced acquisition – even lines (blue) acquired first, odd lines (red) then later when the object moved.

Fields viewed separately

Fields scaled separately


The resulting frame consisting of joined scaled fields

It is clear that the image needs to be converted to progressive because processing requires the signal to be spatially coherent.
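The problem can be reproduced with a minimal sketch: weaving two fields captured at different times yields a spatially incoherent ("combed") frame whenever an object moves between the field captures.

```python
# Minimal sketch: even lines captured at time t0, odd lines at t1, then woven
# together. A moving object ends up at different columns on adjacent lines.

WIDTH, HEIGHT = 8, 6

def frame_with_bar(col):
    """Progressive frame containing a vertical bar (value 1) at column `col`."""
    return [[1 if x == col else 0 for x in range(WIDTH)] for _ in range(HEIGHT)]

t0 = frame_with_bar(2)  # object at column 2 when the even lines are captured
t1 = frame_with_bar(5)  # object at column 5 when the odd lines are captured

# Interlaced acquisition: even lines come from t0, odd lines from t1.
woven = [t0[y] if y % 2 == 0 else t1[y] for y in range(HEIGHT)]

# Adjacent lines now disagree on where the object is, so operations like
# scaling cannot treat the frame (or the fields) as one coherent picture.
print(woven[0].index(1), woven[1].index(1))  # 2 5
```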


Deinterlacing algorithms

Several deinterlacing techniques and algorithms may be used, ranging from filters as simple as doubling the lines of each field to fill in the missing ones, to massively computationally intensive approaches such as motion compensation with neural networks.
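As a minimal illustration of the simplest approach mentioned, line doubling (often called "bob") fills every missing line by repeating the line that is present; the function below is an illustrative sketch, not any particular library's implementation:

```python
# Simplest possible deinterlacer: line doubling ("bob"). Each line of the
# field is emitted twice, so the duplicate stands in for the missing line.

def bob_deinterlace(field):
    """Expand a field (half-height picture) to full height by doubling lines."""
    frame = []
    for line in field:
        frame.append(line[:])  # the acquired line
        frame.append(line[:])  # its duplicate fills the missing line
    return frame

field = [[1, 2], [3, 4]]       # a 2-line field
print(bob_deinterlace(field))  # [[1, 2], [1, 2], [3, 4], [3, 4]]
```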

Because we wanted Comprimato Live transcoder to fully support interlaced content, we explored the capabilities of existing deinterlacing implementations. Although some of them can produce quite good quality, unfortunately, none could run as fast as our transcoder needs. We therefore developed our own algorithm and a fast GPU implementation of it, described in the next section.

In the table below, we compare the speed and quality of several deinterlacing filters, some commonly used, and our own filter (part of Comprimato Live transcoder). All were measured with the PSNR and VMAF (from Netflix R&D) metrics on a 10-minute 1080i video containing several representative scenes. The quality is compared against the 1080p original.


| Filter | VMAF score | PSNR score | Speed | Algorithm | Implemented in |
| --- | --- | --- | --- | --- | --- |
| Edge-aware Weston GPU | 97.92 | 41.71 | ultra fast (~12048 fps using Nvidia P6000) | Cubic interpolation, edge-aware for static parts | Comprimato Live transcoder |
| QTGMC (very slow lossless preset) | 99.02 | 41.08 | slow (~2 fps using 2 CPU threads) | Motion compensation | Avisynth |
| QTGMC (very slow preset) | 98.63 | 38.82 | slow (~2 fps using 2 CPU threads) | Motion compensation | Avisynth |
| QTGMC (ultra fast preset) | 97.59 | 38.24 | medium fast (~20 fps using 2 CPU threads) | Motion compensation | Avisynth |
| Mcdeint (extra_slow preset) | 97.99 | 41.46 | extra slow (~0.2 fps using 1 CPU thread) | Motion compensation | FFmpeg |
| Mcdeint (fast preset) | 98.11 | 41.26 | medium (~9 fps using 1 CPU thread) | Motion compensation | FFmpeg |
| Nnedi3 | 95.91 | 37.87 | medium slow (~5 fps using 1 CPU thread) | Neural network | FFmpeg |
| Yadif | 95.58 | 39.42 | fast (~66 fps using 1 CPU thread) | Edge-directed interpolation | FFmpeg |

Edge-aware Weston deinterlacing filter on GPU

To satisfy the need for high-speed video processing in Comprimato Live transcoder, which relies heavily on GPUs, we aimed for a CUDA implementation. We searched for an algorithm that can run in parallel (one that is not inherently serial, or whose serial dependencies can be broken) while being simple enough to be very fast for multiple real-time video streams, yet still producing high-quality output.

One of the most promising candidates we found is a filter invented by Martin Weston at BBC R&D in 1987, which was subsequently patented (the patent expired more than 10 years ago).

This algorithm is easily parallelized, simple enough to be fast, and behaves quite well in terms of quality (PSNR score 38.51, VMAF score 95.30 – compare with the table above).

The algorithm, as described in the patent, always interpolates from three fields: the previous, the current and the next. For every missing pixel, it considers the same position in the previous and next fields (specifically the value at that position and two of its vertical neighbors in each direction) together with two of its vertical neighbors in each direction in the current field.
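The three-field scheme can be sketched roughly as follows. This is a hedged illustration only: the 50/50 tap weights are placeholders we chose for clarity, not the coefficients from the patent or from our implementation, and each field is stored in full-frame coordinates with only its own lines filled in.

```python
# Hedged sketch of a three-field vertical-temporal interpolation. The weights
# below are illustrative placeholders, not the patented Weston coefficients.

def interpolate_missing_pixel(prev_f, cur_f, next_f, y, x):
    """Estimate the pixel at (y, x), a line missing from the current field."""
    # Vertical neighbours present in the current field (rows y-1 and y+1).
    spatial = 0.5 * (cur_f[y - 1][x] + cur_f[y + 1][x])
    # Co-sited samples in the previous and next fields (row y exists there).
    temporal = 0.5 * (prev_f[y][x] + next_f[y][x])
    # Blend the spatial and temporal estimates (illustrative 50/50 blend).
    return 0.5 * spatial + 0.5 * temporal

# On static content every estimate agrees with the true value:
flat = [[10] * 3 for _ in range(3)]
print(interpolate_missing_pixel(flat, flat, flat, 1, 1))  # 10.0
```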

Our implementation of the algorithm turned out to be very well suited to GPU processing. It achieved very good speed even before we applied a few advanced bandwidth-optimization techniques. Yet it was clear there was still room for quality improvement, because Weston's filter always behaves the same regardless of the data it consumes. That means, for example, that a slight loss of sharpness is visible.

In particular, when there are static or near-static parts in the video, it is better to interpolate only the parts that are likely to belong together, in other words, to use object detection and interpolate only parts of similar color value within the detected objects.

For this purpose, we implemented edge detection and modified the interpolation to process only the areas between the detected edges. The edge detector takes five fields into account: the two previous, the current, and the two next.
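The idea of gating the interpolation can be shown with a much-simplified sketch; the per-pixel change measure and the threshold below are assumptions standing in for the real five-field edge detector, not Comprimato's actual logic.

```python
# Illustrative gating sketch (NOT the production detector): static pixels are
# averaged temporally, while changing pixels fall back to spatial
# interpolation from the current field. THRESHOLD is an assumed placeholder.

THRESHOLD = 8

def gated_pixel(prev_v, next_v, above_v, below_v):
    """Pick a temporal or spatial estimate based on a simple change measure."""
    if abs(prev_v - next_v) < THRESHOLD:   # static area: temporal average
        return 0.5 * (prev_v + next_v)
    return 0.5 * (above_v + below_v)       # moving area: spatial interpolation

print(gated_pixel(10, 10, 0, 20))   # 10.0 (static: temporal estimate wins)
print(gated_pixel(0, 100, 40, 60))  # 50.0 (moving: spatial estimate wins)
```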

With this implementation we significantly improved the deinterlacing quality, achieving a PSNR score of 41.71 and a VMAF score of 97.92, which we consider very good and competitive with the best-known, highly computationally demanding algorithms, while still allowing many real-time streams. In particular, the implementation is capable of outputting more than 12,000 full 1080p frames per second on an Nvidia P6000.


Interlaced video support in Comprimato Live transcoder

Because users need flexibility in working with different video formats, we designed the new features of Comprimato Live transcoder to give as much freedom as possible in using both progressive and interlaced content on both the input and the output ends.

The newly introduced support for interlaced scanning makes it possible to freely choose between progressive and interlaced input and output in any combination (see the screenshot from the Comprimato Live transcoder UI).

As discussed above, supporting interlaced input always requires a deinterlacing filter. Comprimato Live transcoder now features our edge-aware Weston deinterlacer, implemented in CUDA to run on GPUs. The transcoder launches the deinterlacing filter whenever there is interlaced video in the input stream, so the video is always converted to progressive internally.

Once it has progressive content, the Live transcoder can run any further processing and directly encode the content as progressive video for output. Alternatively, it can separate every other line to create interlaced output as well, if required.
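Producing the interlaced output can be sketched as follows (a simplified illustration, not the transcoder's actual code): each output field keeps every other line of a progressive frame, with the field parity alternating from frame to frame.

```python
# Sketch of progressive-to-interlaced conversion: each field keeps every
# other line of its source frame, alternating parity between frames.

def progressive_to_fields(frames):
    """Yield (parity, field) pairs: parity 0 keeps even lines, 1 keeps odd."""
    for i, frame in enumerate(frames):
        parity = i % 2
        yield parity, frame[parity::2]  # every other line, starting at parity

frames = [[[0], [1], [2], [3]], [[4], [5], [6], [7]]]
print(list(progressive_to_fields(frames)))
# [(0, [[0], [2]]), (1, [[5], [7]])]
```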



by Jan Brothánek
March 2nd, 2018