GENAI: INNOVATION AND DISRUPTION
‘You Have To Fight AI With AI’
The buzz around generative AI is getting louder as solution providers address the risks and advantages it poses to their customers and vendors step up by adding GenAI-powered capabilities to their cyber defense tools.
By Kyle Alspach
While a wide array of cybersecurity challenges and opportunities have presented themselves around generative AI over the course of less than a year since the public release of ChatGPT, there's much more to come, security experts and executives told CRN.

For solution providers, GenAI brings massive implications. Helping customers to safely use apps such as OpenAI's ChatGPT has stood out as one of the most immediate issues, but the use of these technologies by hackers also poses a heightened threat that solution providers should be getting proactive about, experts said. Meanwhile, GenAI is being widely leveraged by industry vendors to enhance the way security teams utilize cyber defense tools, with the goal of improving productivity and enabling faster responses to threats.

Ultimately, cybersecurity is "a game of speed where you want to get people to make a decision faster," said Ian McShane, vice president of strategy at Eden Prairie, Minn.-based cybersecurity vendor Arctic Wolf. And GenAI can accelerate decision-making by providing greater context around a potential threat, giving security teams the confidence to decide whether it can be ignored or deserves further investigation, he said. With the help of GenAI, "getting to that decision point with the right context is what's going to make a difference," McShane said.

But all of this is just the prelude: Solution providers can expect GenAI technologies to be a source of innovation and disruption for years to come.

"There's a lot of buzz, and the buzz is growing. But I also think that things are moving forward," said Mike Heller, senior director for managed security services at Phoenix-based solution provider Kudelski Security. "I think the fact that there will be an impact from [generative] AI to our market is clear."

The exact nature of that impact is still taking shape, however.
When it comes to enabling secure usage of ChatGPT and other GenAI applications, many solution providers are already working to advise customers—even as numerous security vendors release tools aiming to assist with protecting sensitive data amid the rise of the technology.
Without a doubt, GenAI "can be a security risk if it's not managed properly," said Atul Bhagat, president and CEO of BASE Solutions, a Vienna, Va.-based MSP.

"I think one of our responsibilities as MSPs is we have to get ahead of it and have those conversations early on with our clients about how to use AI safely and correctly. We've heard about mistakes and horror stories," Bhagat said. "But overall, I think generative AI is the future, and we're seeing in a lot of organizations they're trying to find ways to use it to their advantage."

Deploying data security technologies to help prevent the disclosure of intellectual property or sensitive data into GenAI apps is one potential approach. In recent months, a number of vendors have released tools to help enable safe usage of GenAI platforms such as ChatGPT.

For instance, Zscaler has updated its data loss prevention (DLP) product to thwart potential leakage of data into GenAI apps. That has included implementing new data loss policies and security filtering policies, said Deepen Desai, global CISO and head of security research at the San Jose, Calif.-based cybersecurity vendor. Meanwhile, Zscaler said that its recently introduced Multi-Modal DLP capability can prevent data leakage across not only text and images, but also audio and video.

Overall, the goal is to "allow our customers to securely embrace generative AI without leaking data, without hitting malicious versions of the chatbots," Desai said. "The risk definitely exists with this technology if it's not being embraced in the right way."

Another well-known security risk posed by GenAI technology is the boost it can give to malicious actors, such as hackers using ChatGPT to craft more convincing phishing emails.
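To make the DLP idea concrete: at its simplest, this kind of policy scans an outbound prompt for sensitive patterns before it ever reaches a chatbot, and blocks the request on a match. The sketch below is illustrative only; the pattern names and functions are hypothetical and do not reflect Zscaler's (or any vendor's) actual implementation, which uses far richer detectors.

```python
import re

# Illustrative detectors only -- commercial DLP products ship hundreds of
# tuned patterns plus classifiers for images, audio and video.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # U.S. Social Security number
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),  # token-like strings
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # 13-16 digit card numbers
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def allow_prompt(prompt: str) -> bool:
    """Policy decision: block the outbound request if any pattern matches."""
    return not scan_prompt(prompt)
```

In practice a gateway sitting between users and the GenAI app would call something like `allow_prompt` on every request, logging or redacting matches rather than silently dropping them.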
This summer, security researchers also identified GenAI-powered chatbots that are specifically intended for use by hackers—including WormGPT, which was disclosed by researchers at SlashNext, and FraudGPT, which was uncovered by Netenrich researchers.

But even ChatGPT itself can provide a significant aid to malicious actors, such as by improving grammar for non-native English speakers, researchers have noted.
OCTOBER