SCTE Presents | Testing and Measurement | Diamond Sponsor Presentation

How objective measurement ended the IPTV “blame game”

Bridge Technologies founder and CEO Simen Frostad set out how early Scandinavian broadcast contribution and IPTV deployments exposed a persistent problem: broadcast and telecoms teams were trying to run the same services, but lacked a shared way to diagnose faults on IP networks.

Frostad took us back to the turn of the century, when he worked on a Scandinavian sports contribution network using IP/MPLS across a mix of radio towers and fibre. The shift, he said, was not driven primarily by cheaper transport, but by the difficulty – and cost – of satellite-based logistics across long distances. “Networks are very expensive too. So it wasn’t for the cost savings, it was for the logistics savings,” he said. “Being able to replace one person or two persons or sometimes even three persons travelling with all of this hardware somewhere was actually the game changer.”

At the time, however, the broadcast equipment ecosystem was not quite ready for IP contribution at scale. Frostad described struggling to find reliable devices to convert SDI video into IP streams, and he recalled scepticism from vendors who viewed IP as risky compared with established approaches. A turning point came through a back-channel collaboration with a Tandberg Television engineer, who produced a prototype transport stream encapsulator capable of pushing high-quality MPEG-2 4:2:2 video over a 100Mbps interface. But the project then hit an unexpected networking hurdle: continuous, one-way high-bitrate video didn’t behave like normal IT traffic in the eyes of some switching equipment.

Frostad pointed to a particular issue on Cisco 3550 switching, where the receiving switch would periodically disrupt the network when faced with the steady, unidirectional media flow. “Every two minutes… [it] flooded, of course, the network with all of these traffic,” he said, arguing the switch expected the two-way signalling typical of “computer chatter”, not a “one-way street”. The workaround was straightforward: add periodic keepalive signalling to reassure the network. “Every 20 seconds, [it] sends out a small return message that basically says ‘Elvis lives’,” Frostad said, crediting the engineer’s fix for making the stream stable long term. “After that, everything worked as planned… every single day since that time.”

From those early deployments, Frostad said Bridge Technologies was founded in 2004 to “bridge” the broadcast and telecoms worlds with tools that generate shared, objective evidence. He argued that network engineers often underestimated how small impairments could become visible in video services. “Losing a packet, what the hell? Does that really matter that much?… Losing one or two, who cares?” he said, describing the mindset gap when broadcast-grade media first moved onto IP.

Bridge’s early proving ground came in IPTV, at a time when global deployments were still largely pilots and vendor stacks were immature. Frostad cited a Norwegian fibre-to-the-home trial involving a traditional headend supplier, a major networking vendor and an early IP set-top box platform – and a familiar outcome when things went wrong: everybody blamed everyone else. His answer was a compact probe device, the VB10, that could be inserted at multiple points through the delivery chain to isolate where impairments were introduced. The accompanying diagnostic concept – the Media Window – combined timing behaviour (packet inter-arrival variation/jitter) with transport stream continuity errors to distinguish network faults from headend/encoding faults that can look identical on screen.

“The cool thing with this graph was that it was able to do one significant thing,” he said. “If you look at the TV screen and it’s… blocking or stuttering… that can have multiple causes… something wrong with your TV encoder back in the headend… Or you can lose some packets in the network. And it will also look exactly the same way.” In this case, the evidence pointed squarely at the network. “Can you guys guess what the problem was?… Well, it was of course a network problem,” Frostad said, adding that hard data helps end the “blame game” because it “cannot be discussed away”.

Frostad said video was the service that forces networks to improve because consumer tolerance for visible errors is low. “The killer service is video. Because video is unforgiving,” he said, pointing to premium viewing scenarios where stutter or blocking becomes unacceptable. He also noted that modern high-quality HDR streams can sit in the region of 17–25Mbps, meaning capacity alone is not the issue; stability and packet integrity are.

Bridge has since expanded beyond IPTV monitoring into high-bandwidth production and contribution environments. Frostad said the company’s high-end platforms can support dual 200Gbps interfaces and very large media flows – up to 86Gbps per flow – underscoring how quickly professional media networking requirements are scaling. He praised the progress across the sector compared with earlier “cowboy cable days”, while maintaining that the same core challenge remains: giving both network and broadcast teams the same operational view of what is happening, so faults can be found and fixed faster.
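To make the Media Window idea concrete, here is a minimal, hypothetical sketch (not Bridge Technologies’ implementation) of the two measurements it correlates: inter-arrival timing variation and MPEG-TS continuity-counter gaps. It assumes standard 188-byte transport stream packets; the helper names and the simulated feed are illustrative only. The diagnostic logic is that continuity gaps indicate packets actually lost in the network, while on-screen artefacts with clean counters and stable timing point back at the encoder/headend.

```python
# Hypothetical sketch of the "Media Window" principle: pair packet timing
# with MPEG-TS continuity counters. Illustrative only, not Bridge's code.
from statistics import pstdev

TS_PACKET = 188  # bytes in one MPEG transport stream packet

def continuity_errors(ts_packets):
    """Count continuity-counter gaps per PID across 188-byte TS packets."""
    last_cc = {}
    errors = 0
    for pkt in ts_packets:
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]   # 13-bit packet identifier
        cc = pkt[3] & 0x0F                      # 4-bit continuity counter
        if pid in last_cc and cc != (last_cc[pid] + 1) % 16:
            errors += 1
        last_cc[pid] = cc
    return errors

def interarrival_jitter_ms(arrival_times):
    """Spread of inter-arrival times (ms) as a simple jitter measure."""
    deltas = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    return pstdev(deltas) * 1000 if len(deltas) > 1 else 0.0

def make_ts_packet(pid, cc):
    """Build a minimal valid TS packet header for the simulation."""
    pkt = bytearray(TS_PACKET)
    pkt[0] = 0x47                 # sync byte
    pkt[1] = (pid >> 8) & 0x1F
    pkt[2] = pid & 0xFF
    pkt[3] = 0x10 | (cc & 0x0F)   # payload present + continuity counter
    return bytes(pkt)

# Simulated feed on PID 256 with one packet dropped (counter jumps 4 -> 6),
# arriving with perfectly steady 1.5 ms spacing.
ccs = [0, 1, 2, 3, 4, 6, 7, 8]
stream = [make_ts_packet(256, cc) for cc in ccs]
arrivals = [i * 0.0015 for i in range(len(stream))]

print("continuity errors:", continuity_errors(stream))        # -> 1 (network loss)
print("jitter (ms): %.3f" % interarrival_jitter_ms(arrivals)) # -> 0.000 (timing fine)
```

In this simulated case the counters show a gap while timing is stable, so the evidence points at packet loss in the network rather than the encoder – the same “cannot be discussed away” separation Frostad describes.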