
In the past few years, we have witnessed rapid evolution in the video industry. The rise of streaming services and new technologies has changed consumer behaviour. Today, multiple camera angles and remote post-production increase the amount of high-quality video data that needs to be transmitted, affecting bandwidth requirements more than ever before.

Traditionally, video data is sent via dedicated networks or satellite uplinks for post-production and further distribution. The problem is that these networks are expensive, have to be booked in advance, and offer limited bandwidth. Even though the industry has invested heavily in codec development over the past years, the cost of delivery via reserved links keeps increasing.

However, since the internet has global coverage, broadcasters are starting to consider utilizing public networks for live video streaming.

Utilization of the Public Internet

When it comes to real-time video transmission over the public internet, quality and security are the crucial aspects. Large packet delay variation (jitter) has to be handled, and packets lost during transmission (packet loss) have to be recovered to deliver broadcast video quality, all while the video data stays protected.
This can be solved by proprietary solutions, which are relatively expensive and lock workflows to a single vendor, or by the open-source SRT protocol, which is now integrated into Comprimato Live Transcoder 1.3.

Video streaming with SRT

SRT (Secure Reliable Transport) is an open-source protocol for low-latency video transport over the public internet. The protocol accounts for packet loss, jitter, and fluctuating bandwidth, maintaining the integrity and quality of the video. It uses an ARQ (Automatic Repeat reQuest) mechanism for error recovery to prevent quality degradation, applies encryption that keeps all video data secure from eavesdroppers, and easily traverses firewalls.
Furthermore, SRT detects the real-time network performance between the encode/transcode endpoints and dynamically adjusts to it for optimal stream performance and quality. This makes it possible to stream video from any venue despite an unreliable internet connection, and to monitor remote facility feeds without dedicated networks. The solution is therefore perfectly suited for live sports action or music events.
Last but not least, SRT is codec agnostic. It supports any video or audio format, such as JPEG2000, H.265, or H.264, and can carry any UDP video data.

Examples of implementations

Live Transcoder with integrated SRT has a wide range of applications on both the transport and distribution sides of video workflows. Below you can see examples of recent applications.

Live video encoding and transport

Very often, live video feeds are contributed over dedicated satellite links. Because satellite uplinks are expensive, it is becoming popular to replace them with more cost-effective public internet connections or mobile 4G and 5G networks.

High-quality remote video connection

Conclusion

SRT support for Live Transcoder allows transporting the best quality live video over even the most unpredictable networks. It is applicable to video transport and distribution endpoints as part of your video stream workflows. The whole solution lowers delivery costs. It is software-based and hardware agnostic.
Contact us for more information about SRT integration or Try Live Transcoder Demo.

October 2018 – It has been a year since Comprimato officially launched Live Transcoder, a high-performance encoding and transcoding software solution for broadcasters and telco operators. Today, we are pleased to announce the release of Live Transcoder 1.2.
This major release adds functionality that allows direct distribution from contribution circuits. It improves overall performance, reliability, and video quality. The versatile approach of Live Transcoder 1.2 gives video operators more flexibility, better live video processing features, and built-in synchronization – all easily manageable from an intuitive user interface.
See all the new features below.

New support for contribution video inputs enables direct ingestion

The enhanced direct contribution ingestion feature makes it possible to merge the functionality of several hardware devices, simplifying the overall workflow and stream management.

Ad Markers conversion and Metadata synchronization

Advanced synchronization allows for exceptional QoE and new revenue streams.

Advanced Built-in Video, Audio and Metadata Processing

Real-time video processing for standardized distribution output. Managed from one place.

Support for external monitoring tools

Live Transcoder 1.2 allows you to monitor and control hundreds of streams via third-party monitoring tools, thanks to its REST API and SNMP support. Version 1.2 newly supports Net Insight’s Nimbra Vision.

The Comprimato Live Transcoder 1.2 software has been tested and approved in real-time production and is now available for 30-day free trials.

Contact us for more information about new features or Free-trial options.

At Comprimato, we have always believed in a software approach. We have never wanted to lock ourselves or our customers into single-purpose appliances, so we have developed Live Transcoder and our other products as software-only solutions running on COTS (commercial off-the-shelf) hardware. This approach gives us the flexibility and scalability to add new functionality in response to ever-changing technologies – codecs, resolutions, and quality requirements.

Beyond flexibility, a software approach brings new possibilities and benefits to current video processing workflows and simplifies the everyday routines broadcasters face. To clarify these benefits, we have compiled them in the infographic below.

We are always open to discussion, so do not hesitate to contact us.

All-in-one encoding and transcoding software

Come see Comprimato and its live video encoding solutions at VidTrans 2018, an annual conference hosted by the Video Services Forum in Los Angeles, CA, focused on innovative networking and video technologies and their application to video transport.
We will be presenting Live Transcoder, a software solution for live video encoding that increases agility and flexibility for delivering new media services while leveraging existing hardware. An interactive demonstration showcasing our user interface will be at booth 40.
Moreover, Comprimato joins forces with its event partner, NetInsight, to demonstrate the latest media transport solutions built on NetInsight’s Nimbra platform. The joint demo will be presented at booth 23.
Last but not least, Comprimato CEO Jiri Matela will be speaking at the conference about the Development of High Throughput JPEG2000 (HT-J2K). See the Full Event Program.
__________________________________________________
Wednesday, Feb 28 @8:30 AM
Development of High Throughput JPEG2000 (HT-J2K)
Presented by Jiri Matela
VIDTRANS 2018
February 27 – March 1
Marina del Ray Marriott
Los Angeles, CA
Booth 40 & Booth 23 (Joint demo with NetInsight)
Schedule your meeting and see us at VidTrans 2018

Many providers of high-quality digital content struggle with network traffic control. Sending high-resolution video or high-quality still images is a demanding task in which the network connection can easily become the bottleneck of your solution. In this post, we look at ways to alleviate network traffic using unique JPEG2000 properties and encoding/decoding mechanisms. These mechanisms can be used to achieve a constant video bitrate (and thus fulfill various QoS goals), to get the most out of connections with very limited bandwidth (e.g. drones), or simply to limit the data transmitted while viewing still images. We will pay particular attention to so-called quality layers, which allow progressive video/image quality improvement, and let you assess their usability in your own project.

Bitrate control with JPEG2000

JPEG2000 is based on the discrete wavelet transform (DWT), which creates so-called “resolutions”; the simplest possibility for bitrate control is therefore discarding resolutions, so that only several of the smallest DWT resolution levels get transmitted to the client. This option provides a fast way of progressive quality improvement with no special demands or settings during image encoding, but it may be a poor choice if you need a constant video bitrate, because the size of individual resolutions in a JPEG2000 file varies significantly with image content. For example, a black image would compress far more effectively than an image with lots of visual detail (see our example below for reference).
Users of JPEG2000 may be familiar with so-called rate control, which allows you to specify a limit on image size during encoding. Moreover, only the data that contributes the most to visual quality is included in such a rate-limited image. Applying a rate-control limit to each frame of a video gives you an upper bound on the maximal video bitrate. For example, the DCI video standard (p. 33) specifies that the size of each frame can’t exceed 1,302,083 bytes at 24 FPS, meaning that the video bitrate won’t exceed 250 Mb/sec. However, rate control can’t be used for lossless storage of data.
Quality layers in JPEG2000 allow you to specify multiple image size limits and thus provide the possibility of progressive image/video quality improvement especially useful in a networked environment. The user can specify several quality layers and their sizes while encoding the JPEG2000 file. When sent over the network each received quality layer contributes to the quality of the image/video that can be displayed on the client side.
Moreover, quality layers can be used for lossless JPEG2000 images where the very last quality layer contains all the image data. In such a scenario, all the original data is preserved (on the server side) and you’re still able to save your network bandwidth by sending only several small quality layers to the client. The killer feature here is again that only the data contributing the most to visual quality (for the given bitrate limit) is transmitted.

How does it work?

Let’s look at a real-life example.
We encoded two sample images with 3 DWT resolutions and specified 2 quality layers (0.5 MB for fast previews and 15 MB). The following graphic depicts how data distribution between resolutions depends on image content and how the data is selected from quality layers.

Only the data that contributes the most to the quality of the image is included in quality layer 0 for image A. Almost all the data in quality layer 0 comes from the very first resolution of the image.

Image B is clearly “simpler”. The large uniform background results in less data in the compressed image and a smaller first DWT resolution, which in turn drives more data from the second resolution into quality layer 0 than in the case of image A.
In the context of resolution discarding we see that discarding two resolutions would leave us with 1.04 MB in image A, whereas the size of image B would be 0.34 MB – that’s three times the difference!
Images were taken from the ASC/DCI Standard Evaluation Material (StEM) Mini-Movie. Note that bit depth was reduced to 8 bits to reduce image size for illustrative purposes.

How can I create quality layers?

Quality layers are created during image encoding, so you need a JPEG2000 encoder.
Comprimato UltraJ2K allows you to create a JPEG2000 encoder on CPU, CUDA, or AMD graphics with OpenCL using just a few simple, unified API calls.
Specifying 2 quality layers in your C/C++ program may be as simple as including the following two UltraJ2K API calls in your encoder configuration:

cmpto_j2k_enc_cfg_set_layer_rate_limit(
      enc_cfg,
      0,                            // quality layer index
      CMPTO_J2K_ENC_COMP_MASK_ALL,  // apply to all components
      CMPTO_J2K_ENC_RES_MASK_ALL,   // apply to all resolutions
      524288);                      // 0.5 MB - quality layer 0
cmpto_j2k_enc_cfg_set_layer_rate_limit(
      enc_cfg,
      1,                            // quality layer index
      CMPTO_J2K_ENC_COMP_MASK_ALL,
      CMPTO_J2K_ENC_RES_MASK_ALL,
      15728640);                    // full quality (max. 15 MB) - quality layer 1

With this configuration in place, the encoder is fed raw image data and produces a JPEG2000 codestream with the quality layers.

Example application – progressive quality improvement

A common pattern on well-known websites is to display a low-quality image first and progressively improve it by sending several smaller versions of the image followed by the original file. With a single JPEG2000 file encoded with quality layers, by contrast, the client application can progressively improve the image quality without any redundant data being transmitted.
With JPEG2000, there is no need to generate multiple versions (sizes) of the image/video. You simply create one master file (a JPEG2000 with the quality layers) and get bitrate control out of the box!
Even better – you can create this master file with the Comprimato UltraJ2K SDK and leverage the power of your GPU to achieve state-of-the-art performance – yes, it’s possible to do this in real time, even for 4K video.
Let’s do some simple maths. If your server (camera, drone, etc.) creates or hosts JPEG2000 video whose quality layer sizes are 0.5 MB, 1 MB, and 2 MB, and your video runs at 30 FPS, you can switch instantly between roughly 123 Mb/sec, 245 Mb/sec, and 490 Mb/sec constant bitrates. The choice can be driven, for example, by the momentary state of your server-client connection or by the client’s ability to process received data.

Example application – digital image storage

Another good application for quality layers is in the domain of large images, where smaller, lower-quality previews are often needed. The quality-layer mechanism allows the client to display an image without transmitting all of the data, thus decreasing response latency and saving the connection’s bandwidth. Successive quality layers can be sent later on, progressively increasing image quality.
Images can even be stored in a mathematically lossless mode, which preserves all of your valuable data while still saving storage space. This can be highly desirable in the medical field, or even for photos from high-quality digital cameras.

Quality layer encoding with Comprimato UltraJ2K in real-time and even faster

If you’re willing to use GPU platforms and your image resolution is up to 4K, you are good to go with our online speed calculator (for lossy encoding). Interested in 8K UHD or lossless encoding? Contact our support for performance details.
And of course, Comprimato UltraJ2K allows you to decode JPEG2000 with the quality layers at the client side with either GPU or CPU.
Moreover, Comprimato UltraJ2K is a software solution that keeps getting faster with each new generation of graphics cards and processors!
If you think that quality layers and the world’s fastest GPU-accelerated JPEG2000 SDK, Comprimato UltraJ2K, may be the right fit for your project, or if you have specific questions regarding UltraJ2K integration, please contact our support. We will be happy to help you!