In the realm of video streaming, vendors frequently promise "ultra-low," "extreme-low," or even "zero" latency. While some of these technologies come close to those goals, many users still notice a delay between taking an action and seeing the streaming service react. It is important to note that the performance of a streaming solution depends heavily on the content and delivery model, as well as other factors. So, in this article, we'll explain in detail what video latency is, what causes it, when it matters, and what can be done to reduce it.
What is Video Latency?
End-to-end latency, commonly referred to as "glass-to-glass" latency, is the time it takes for a single frame of video to travel from the camera to the display, and it is the type of latency most often discussed. This type of latency is important for applications such as video conferencing, live streaming, and virtual reality. The delay between capture and display can range greatly, from several minutes down to a matter of milliseconds. Latency under 1 second is generally described as low, while latency under 300 milliseconds is referred to as ultra-low. Note that glass-to-glass latency is a one-way measurement, from capture to display, rather than a network round-trip time. For interactive applications, even small additional delays can significantly degrade the user experience, which is why low latency is essential to a satisfactory result.
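The thresholds above can be captured in a small helper. This is only an illustrative sketch; the bucket names and cut-offs follow the figures quoted in this article, not any formal standard.

```python
def classify_latency(latency_ms: float) -> str:
    """Bucket a glass-to-glass latency figure using the thresholds
    described above (illustrative, not a formal standard)."""
    if latency_ms < 300:
        return "ultra-low latency"   # interactive, two-way use cases
    if latency_ms < 1000:
        return "low latency"         # responsive live streaming
    return "standard latency"        # broadcast-style delay is acceptable
```

For example, `classify_latency(250)` falls in the ultra-low bucket, while a typical 10-second broadcast delay is classified as standard latency.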
Why Is It Important?
When it comes to video latency, the requirements depend almost entirely on your application. Higher latency is completely acceptable for some use cases, such as streaming previously recorded events, especially if it results in greater picture quality through robust packet loss prevention. However, for more dynamic applications, such as two-way video communication or online gaming, low latency is essential to an immersive experience. In linear broadcast operations, there is frequently a delay of around 10 seconds between the live event and the delivered feed. This can be an issue for customers who expect a live experience, which is why broadcasters take great care to reduce latency as much as possible. To do so, they use various techniques, such as optimizing their network topology, using high-performance hardware and software tools, employing specialized encoding and decoding algorithms, and implementing edge caching. We will explore these in the following sections.
Video Latency: How Does It Occur?
Depending on your supply chain and the number of steps involved in video processing, a variety of factors can cause latency. Although each of these delays may seem insignificant on its own, together they can really add up. Among the major causes of video delay are:
Type of network and speed
The network you select to transmit your video, whether satellite, the open internet, or an MPLS network, affects both latency and quality. The two main factors that determine speed are the network's throughput (how many megabits or gigabits it can carry per second) and the distance the signal travels.
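The two contributions named above can be estimated with basic physics: propagation delay grows with distance (light in optical fiber travels at roughly two-thirds of its vacuum speed), while serialization delay shrinks as throughput rises. The sketch below uses these rules of thumb; the velocity factor and example distances are assumptions, not measurements.

```python
SPEED_OF_LIGHT_KM_S = 300_000   # vacuum, km/s
FIBER_VELOCITY_FACTOR = 0.67    # light in fiber travels at roughly 2/3 of c

def propagation_delay_ms(distance_km: float) -> float:
    """One-way delay from distance alone, assuming a fiber path."""
    return distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_VELOCITY_FACTOR) * 1000

def serialization_delay_ms(packet_bits: int, throughput_bps: float) -> float:
    """Time to clock one packet onto the wire at a given throughput."""
    return packet_bits / throughput_bps * 1000
```

As a worked example, an assumed 6,000 km transatlantic fiber path contributes roughly 30 ms one way before any processing happens, while a 1,500-byte packet on a 100 Mbps link adds only about 0.12 ms of serialization delay. This is why distance, not raw bandwidth, often dominates network latency.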
Various aspects of the streaming workflow
Each of the individual elements in a streaming workflow, from the camera to video encoders, video decoders, and the final display, creates processing lags that, to varying degrees, add to latency. OTT latency, for instance, is typically much higher than that of digital TV because the video must pass through additional steps before it reaches the viewer's device.
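Because end-to-end latency is simply the sum of these per-stage delays, it helps to write out a latency budget. The per-stage numbers below are hypothetical placeholders for an OTT-style workflow; real figures vary widely with hardware, protocols, and player configuration.

```python
# Hypothetical per-stage delays (ms) for an OTT-style workflow.
# The exact figures are illustrative assumptions, not measurements.
pipeline_ms = {
    "camera capture":        17,    # roughly one frame at 60 fps
    "encode":                50,
    "packaging/segmenting":  2000,
    "CDN delivery":          500,
    "player buffer":         3000,  # often the single largest contributor
    "decode + display":      50,
}

total_ms = sum(pipeline_ms.values())  # end-to-end budget in milliseconds
```

Laying the budget out this way makes the point in the text concrete: no single stage looks dramatic, yet in this hypothetical example the player buffer alone dwarfs the capture, encode, and decode stages combined.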
Protocols for streaming and output formats
Video latency is significantly impacted by the choice of video protocol used for contribution and by the distribution formats used for viewing on a device. Additionally, the kind of error correction the chosen protocol employs to combat packet loss and jitter can increase latency and hinder firewall traversal.
How to Measure Latency?
It quickly became clear how critical it was to have a tool that could measure glass-to-glass latency accurately and consistently. Such a tool made it practical to evaluate and compare different camera types and hardware compute platforms, and its development was of paramount importance to the project. With the tool in place, we were able to track each stage of the video processing pipeline, from camera input to final output. We hope you will find it helpful for understanding the performance of your entire system and for comparing it against more expensive alternatives.
The tool's primary design principle is simple: emit a light pulse in front of the camera lens and detect when that pulse appears on the computer screen. Because a single clock timestamps both the emission and the detection of the pulse, this centralized approach does away with the requirement for time synchronization between devices, which improves the method's precision. Once the pulse is detected, we can calculate the time that has elapsed between the light source's activation and its detection. Another advantage of this method is that repeating the measurement yields a latency distribution over time rather than a single data point.
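The measurement loop described above can be sketched as follows. The two callbacks, `set_light` and `frame_is_bright`, are hypothetical stand-ins for the hardware-specific parts (driving the LED or screen patch, and sampling the latest decoded frame for the pulse); only the timing logic is shown.

```python
import time

def measure_glass_to_glass(set_light, frame_is_bright,
                           poll_interval_s=0.001, timeout_s=5.0):
    """Measure glass-to-glass latency with a light pulse.

    set_light(on: bool)       -- hypothetical callback driving the light source
    frame_is_bright() -> bool -- hypothetical callback that samples the latest
                                 decoded frame and reports whether the pulse
                                 is visible

    Returns latency in milliseconds, or None on timeout.
    A single clock (perf_counter) timestamps both ends, so no cross-device
    time synchronization is needed.
    """
    set_light(False)
    time.sleep(0.1)                     # let the pipeline settle on a dark frame
    start = time.perf_counter()
    set_light(True)                     # emit the pulse
    deadline = start + timeout_s
    while time.perf_counter() < deadline:
        if frame_is_bright():           # pulse detected at the display end
            return (time.perf_counter() - start) * 1000
        time.sleep(poll_interval_s)
    return None                         # pulse never seen within the timeout
```

Calling this in a loop and collecting the returned values gives the latency distribution mentioned above.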
What are some ways to reduce video latency?
There are numerous approaches to reducing video delay without sacrificing image quality. First, select a hardware encoder and decoder pair that is designed to maintain the lowest latency feasible, even over a regular internet connection. The newest generation of video encoders and decoders can employ HEVC to compress video to incredibly low bitrates, below 3 Mbps, while maintaining high picture quality. In certain circumstances, they can do this with latency as low as 50 ms.
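To put that 3 Mbps figure in perspective, compare it with the raw bitrate of the uncompressed video. The sketch below assumes 8-bit 4:2:0 chroma subsampling (12 bits per pixel), a common format for HEVC delivery; actual ratios depend on the encoder and content.

```python
def raw_bitrate_bps(width: int, height: int, fps: float,
                    bits_per_pixel: int = 12) -> int:
    """Uncompressed bitrate; 12 bits/pixel assumes 8-bit 4:2:0 video."""
    return width * height * fps * bits_per_pixel

raw = raw_bitrate_bps(1920, 1080, 30)   # 1080p30 source
ratio = raw / 3_000_000                 # vs. a 3 Mbps HEVC stream
```

For 1080p30 this works out to roughly 746 Mbps uncompressed, so a 3 Mbps HEVC stream represents a compression ratio on the order of 250:1, which is why encoder quality matters so much at these bitrates.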
Another essential element in reducing delay is choosing a video transport protocol that can deliver high-quality video at low latency over busy, public networks like the internet. To transmit video over the internet without sacrificing visual quality, a streaming protocol must include some form of error correction to mitigate packet loss. All forms of error correction introduce latency, but some more than others. In contrast to FEC-based approaches and RTMP, the open-source Secure Reliable Transport (SRT) protocol introduces less latency while utilizing ARQ error correction to help recover from packet loss.
The ideal compromise between bitrate, picture quality, and latency will ultimately depend on the specific use case. Each of these factors affects the user experience as a whole and needs to be properly tuned to achieve a satisfying result. In situations where low latency is critical, such as video surveillance and ISR, picture quality can frequently be sacrificed in its favor. On the other hand, applications that prioritize a high-quality image, such as streaming video or video conferencing, will tend to focus more on bitrate and picture quality. Overall, selecting the best video compression format requires a careful evaluation of the end user's needs and preferences. It is also important to note that the type of video content being delivered will significantly influence the ideal choice of compression format.
Vadzo Imaging's camera experts can help you choose the right camera solution, one that balances video quality, compression factor, and latency against the needs of your use case. In some cases, the team has implemented customizations to ensure that specific requirements are met.
Need more details?
Feel free to Contact Us