FROM THE INDUSTRY
was released. With agile software development, the testing phase is continuous. Where it was once common for QA to run a thousand tests on a hundred different devices under the V-model, agile teams today might run a hundred tests on thirty devices. However, QA is still performed on a release even after it has shipped to customers. This ongoing testing is crucial in the video industry, where the landscape is constantly evolving.

Outside Factors Lead to More Post-Launch Testing

While agile development has changed the way video apps are developed and launched, the environment those apps operate in has become more complex as well. Even if a video app has been thoroughly tested to work perfectly, after launch it has to run on a device whose OS can be updated at any time without warning. Platforms such as Android and Apple are continuously updated, impacting the environments on which service providers rely. These platforms also index all third-party content through their search engine and voice control functions, which can impact content discoverability if not working properly. As agile development moves towards more post-release QA testing, teams should be testing how their app continues to run in the real environments it exists in, not just as an isolated piece of software.

Additionally, services delivered by third parties and commercial Content Delivery Networks (CDNs) are also frequently updated. All the apps installed on a device have to work together in order to perform well, so testing on the actual device has become essential. For example, many teams work with the Google TV platform, which also hosts frequently updated third-party apps such as Netflix and Prime Video. While it may not be the responsibility of a QA team to test third-party apps alongside their own, we have observed instances on real devices where a Netflix update negatively impacted another app's live service performance. Teams have to deliver a fully working product, which means testing their own apps alongside third-party ones, both before and after production. This necessitates a continuous and adaptive testing approach to ensure software remains compatible and continues to perform optimally.

AI Advancements in QA

Managing numerous software versions can be challenging, as versions change frequently and sometimes drastically. They can even revert to previous versions, removing features instead of adding them. Traditional automation methods that rely on screen displays can be inefficient and time-consuming when dealing with these frequent changes, often resulting in more time and resources being spent on automating tests than would be spent running them manually.

Through recent innovations integrating AI into testing robots, service providers can adapt to unknown environments and varying content. Traditional approaches depend on consistent content streams to compare quality, but in real-world scenarios the content being streamed constantly changes: viewers may be watching the Olympics one day and the Tour de France the next. AI-driven robots can adapt to this dynamic content, accurately assessing and comparing quality despite the variability. This adaptability ensures more efficient and valuable automation in the testing process. Through this AI integration, video service providers can also test any end-user's behaviour on any device. AI-driven technology is capable of automatically testing channel lineups, performing audio and video quality analysis across the entire stream duration, and alerting service providers in real time when it detects that something has gone wrong with the video service. All of the data
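As a rough illustration of the real-time alerting pattern described in this article, the following is a minimal sketch in Python. It is not any vendor's actual implementation: the frame scoring function, the threshold, and the rolling-window size are all hypothetical stand-ins for whatever quality model (AI-based or otherwise) a monitoring system would plug in.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass
class Alert:
    """A quality alert raised at a given point in the stream (hypothetical structure)."""
    frame_index: int
    score: float


def monitor_stream(
    frames: Iterable[object],
    score_fn: Callable[[object], float],
    threshold: float = 0.5,
    window: int = 5,
) -> List[Alert]:
    """Score each frame and alert when the rolling average quality drops below threshold.

    `score_fn` stands in for a real quality model; here it just maps a frame
    to a score in [0, 1], where higher means better quality.
    """
    alerts: List[Alert] = []
    recent: List[float] = []
    for i, frame in enumerate(frames):
        recent.append(score_fn(frame))
        if len(recent) > window:
            recent.pop(0)  # keep only the last `window` scores
        avg = sum(recent) / len(recent)
        # Only alert once the window is full, to avoid noise at stream start.
        if len(recent) == window and avg < threshold:
            alerts.append(Alert(frame_index=i, score=avg))
    return alerts


# Simulated stream: good quality, a dip (e.g. a frozen or corrupted segment), recovery.
scores = [0.9] * 10 + [0.2] * 5 + [0.9] * 10
alerts = monitor_stream(scores, score_fn=lambda s: s)
print(f"first alert at frame {alerts[0].frame_index}, {len(alerts)} alerts total")
```

The rolling window is a deliberate design choice: a single bad frame should not page an operator, but a sustained drop across several frames should.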
Volume 46 No.3 September 2024