TECHNICAL
Assuring video quality across mobile networks is complicated, with a myriad of dynamic challenges made even more difficult by the advent of 4K high-resolution devices. These challenges can be overcome through comprehensive testing, but there are several different ways to measure video performance. Rather than analysing packets or frames for diagnostic testing, many performance evaluation methods use pixel comparisons of the source versus the delivered video to determine overall quality. This is a common standardised method, but unfortunately it is not applicable to most over-the-top (OTT) streaming applications. Now there’s something new.

Why Does it Matter?

Having a machine-driven, objective way to evaluate video quality helps those building content services and video devices, as well as the network operators who are delivering video over mobile and broadband networks. It provides a video quality measure that is far less expensive, far more repeatable and much faster to implement than human-based evaluation. Automated video quality analysis (VQA) is not in itself new. But being able to perform objective VQA without needing a reference video for comparison is a breakthrough capability. Designers and operators can now accomplish VQA in scenarios where full-reference techniques are not practical or feasible. This opens the door to video performance benchmarking in mobile networks in the field (how well does my network deliver HBO to mobile handsets, compared to yours?). It allows smart TV and other device vendors to easily regression-test new software releases in the lab. And it enables video service and video gaming providers to understand how their applications consume network bandwidth and how well they fare when the network is less than ideal.

What Are We Scoring?

What are the artifacts of interest? Artifacts show up in all sorts of ways and can be introduced at many points along the video distribution and reproduction chain: at encoding, across the network, and at decoding. Common types of artifacts include blurriness or fuzziness, blockiness or pixelation, halos, shearing or tearing, and colour issues.

Take streaming video, for example. On-demand and OTT video services such as HBO MAX, Netflix and iQiyi use an adaptive bitrate (ABR) system. With ABR, the client application (on a smartphone, in a home video gateway or set-top box, built into a smart TV, or in an HDMI “stick” adapter) receives chunks of video from the server. As that is happening, it measures the throughput it is getting from the network and continues requesting chunks at the current bit rate for as long as the network sustains it. If it isn’t receiving enough throughput, the client asks for the video to be sent at a lower bit rate. With this type of adaptive system, available network bandwidth has a direct effect on the quality of the video that is sent. Videos compressed to lower resolutions become fuzzy or blurry when scaled up to the target device. Videos sent at the same resolution but encoded at a lower bit rate (and hence with less detail) show blurriness or blockiness at the extremes. If the available bandwidth falls too far, the client runs out of video to play, resulting in frozen video or the dreaded “spinning wheel” indicator.

View Like a Person Does (What is Non-Reference?)

There are several established ways by which machines evaluate video quality. Generally speaking, they fall into four categories:

1. Packet-based: bitrate, delay, lost packets and other parameters are monitored and used to infer video defects such as stalls, missing frames and reduced resolution
2. Frame measurement: video frame sequencing is analysed to detect freezing, lip-sync issues and image breakup
3. Pixel comparison: the source video (the reference) is compared to the received video, pixel by pixel, to determine differences
4. Non-reference: the received video alone is analysed by sophisticated visual algorithms

As humans, we don’t need an A-B comparison to determine how good something looks. We recognise
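The throughput-driven rate adaptation described above can be sketched in a few lines. This is a minimal illustration, not any particular player's algorithm; the bitrate ladder and the safety margin are assumed values chosen for the example.

```python
# Toy ABR rate-selection step: pick the highest encoding-ladder rung the
# measured network throughput can sustain, with a safety margin so the
# playback buffer does not drain. Ladder values are hypothetical.

LADDER_KBPS = [235, 750, 1750, 3000, 5800]  # assumed encoding ladder

def pick_bitrate(measured_throughput_kbps: float, margin: float = 0.8) -> int:
    """Return the highest ladder bit rate not exceeding margin * throughput.
    Falls back to the lowest rung when even that exceeds the budget."""
    budget = measured_throughput_kbps * margin
    viable = [rate for rate in LADDER_KBPS if rate <= budget]
    return viable[-1] if viable else LADDER_KBPS[0]
```

A real client re-runs a step like this for every chunk it requests, which is why a drop in available bandwidth shows up directly as a drop in delivered video quality.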
Artifacts can be introduced at many points along the video distribution and reproduction chain: at capture, during encoding, while traversing the network, and during decode and display.

Volume 47 No.4 DECEMBER 2025 93
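The non-reference category above works by analysing only the received frames. As a toy illustration of the idea (not any production algorithm), the following sketch scores 8x8 coding-block edges by comparing luma jumps at block boundaries against jumps elsewhere; the block size and the use of raw luma lists are assumptions made for the example.

```python
# Crude non-reference blockiness score: ratio of the mean absolute luma
# difference across assumed 8-pixel block boundaries to the mean
# difference at all other horizontal positions. Scores well above 1.0
# suggest visible coding-block edges; a smooth image scores near 1.0.

def blockiness(frame, block=8):
    """frame: 2-D list of luma values (rows of equal length)."""
    boundary, interior = [], []
    for row in frame:
        for x in range(1, len(row)):
            diff = abs(row[x] - row[x - 1])
            (boundary if x % block == 0 else interior).append(diff)
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(boundary) / max(mean(interior), 1e-6)
```

Real non-reference algorithms combine many such cues (blur, blockiness, freezing, colour shifts) and weight them to match human opinion scores, but the principle is the same: judge the picture you received, with nothing to compare it to.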