CRN June 2023, Issue 1420

COVER STORY

“This stuff can become easy enough and magical enough that you’re unlocking a very different behavior from customers where they’re doing it because it’s awesome, not because they have to or they think it’s the most efficient thing to do,” he said.

It’s emblematic of the large opportunity Ward sees in generative AI: allowing companies to unlock productivity and new experiences. But he also sees a challenge: Innovation is happening so fast in the generative AI space that it may take some time for customers to settle on a solution.

“I think it’s difficult for customers to say, ‘Yeah, totally. I definitely want to pay for you to have a team full of people doing exactly this one thing, which I can write the [statement of work contract] for now and commit to the outcomes for now,’ when the whole tool platform is in upheaval, and there may very likely be a more efficient approach available in the next weeks,” he said.

Building On The Shoulders Of Cloud Giants

For Hasan, what helped boost Quantiphi’s business in the mid-2010s after its slow start were two things that have benefited the broader AI market. Around the time the TensorFlow and PyTorch open-source frameworks were released to make it easier for developers to build machine learning models, cloud service providers such as AWS, Google Cloud and Microsoft Azure made big expansions with compute instances powered by graphics processing units (GPUs) that were fine-tuned to train models—a key aspect of developing AI applications—much faster than central processing units (CPUs).

Over time, these cloud service providers have added a variety of offerings that aid with the development and management of AI applications, such as AWS’ SageMaker Studio integrated development environment and Google Cloud’s Vertex AI machine learning platform, which Hasan said serve as crucial building blocks for Quantiphi’s proprietary solutions.
“What we’ve done is on top of some of the cloud platform solutions that exist, we have built our own layer of IP that enables customers to seamlessly on-board to a cloud technology,” he said.

Quantiphi offers these solutions under the banner of “platform-enabled technology services,” with revenue typically split between application development and the integration of the underlying infrastructure, including cloud instances, data lakes and a machine learning operations platform. But before any development begins, Quantiphi starts by helping customers understand how AI can help them solve problems and what resources are needed.

“What we’re able to do is we’re able to go into organizations, help them envision what their value chain can look like if they look at it with an AI-first lens, and from there we can help them understand what are the interesting use cases,” Hasan said.

With one customer, a large health-care organization, Quantiphi got started by developing a proof of concept for an AI-assisted radiology application that detects a rare lung disease. After impressing the customer with the pilot’s results, the relationship evolved into Quantiphi developing what Hasan called a “head-to-toe AI-assisted radiology platform.” This platform allowed the organization to introduce a new digital diagnostics platform. In turn, Quantiphi is now making somewhere in the range of $10 million annually from the customer.

“The pattern that we’ve seen is if you’re helping organizations grow their business and add new lines to their revenue, this is scaled well or there’s a meaningful reduction in costs,” Hasan said.

‘We All Revolve Around Nvidia’

For solution providers excelling in the AI space, there’s one vendor that is often at the center of the infrastructure and services that make applications possible: Nvidia.

“Whatever Nvidia wants to do is essentially going to be the rules, no matter who you are in the ecosystem: OEMs, networking partners, storage partners, MLOps software partners,” said Andy Lin, CTO at Houston-based Mark III Systems. “We all revolve around Nvidia, and I think if you get that and you figure out where you fit, you can do very well.”

For years, Nvidia was mainly known for designing GPUs used to accelerate graphics in computer games and 3-D applications.
But in the late 2000s, the Santa Clara, Calif.-based company began to develop GPUs with multiple processing cores and introduced the CUDA parallel programming platform, which allowed those chips to run high-performance computing (HPC) workloads faster than CPUs by breaking them down into smaller tasks and processing those tasks simultaneously. In the 16 years since its launch, CUDA has dominated the landscape of software that benefits from accelerated computing, which has made Nvidia GPUs the top choice for such workloads.

Over the past several years, Nvidia has used that foundation to evolve from a component vendor to a “full-stack computing company” that provides the critical hardware and software components for accelerated workloads like AI. This new framing is best represented by Nvidia’s DGX platform, which consists of servers, workstations and, starting this

‘Whatever Nvidia wants to do is essentially going to be the rules, no matter who you are in the ecosystem: OEMs, networking partners, storage partners, MLOps software partners.’ — Andy Lin, CTO, Mark III Systems

