The team is leveraging cutting-edge AI techniques, including transformers and foundation models, to create more advanced malware detection systems. These AI models move beyond surface-level indicators and instead understand the semantics of binary code, allowing them to detect even previously unseen malware strains—so-called zero-day attacks—that traditional systems often miss. “Think of it as a ChatGPT for code,” Kruegel says. “We're training a foundation model on binary code, allowing it to capture the essence of how malware operates. This approach lets us fine-tune the model for different cybersecurity tasks like detection, classification, and threat analysis.” The results are promising. The team’s AI-based approach has already uncovered dozens of serious security vulnerabilities in open-source libraries, weaknesses that could have allowed attackers to infiltrate systems worldwide.
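To make the idea concrete, here is a minimal sketch of the fine-tuning step described above: a pretrained code encoder adapted to classify programs as benign or malicious. The checkpoint name (microsoft/codebert-base), the use of disassembled instructions as input text, and the toy labeled examples are illustrative assumptions for this sketch, not details of the team's actual model.

    # Minimal sketch (not the team's pipeline): fine-tune a pretrained code
    # encoder for malware detection. Inputs are disassembled instruction
    # sequences rendered as text; labels are 0 = benign, 1 = malicious.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
    model = AutoModelForSequenceClassification.from_pretrained(
        "microsoft/codebert-base", num_labels=2
    )

    # Hypothetical training examples; a real dataset would hold many thousands.
    samples = [
        ("push ebp ; mov ebp , esp ; call CreateRemoteThread", 1),
        ("push ebp ; mov ebp , esp ; call printf ; ret", 0),
    ]

    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
    model.train()
    for text, label in samples:
        inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
        loss = model(**inputs, labels=torch.tensor([label])).loss
        loss.backward()          # adapt the pretrained encoder to the detection task
        optimizer.step()
        optimizer.zero_grad()

The same fine-tuned encoder could, in principle, be reused for the related tasks the article mentions, such as malware-family classification, simply by changing the label set.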
From Code Security to Network Defense

Beyond malware detection, the project is also exploring AI-powered code analysis, which could help software developers catch security vulnerabilities before they release their products. “A lot of security vulnerabilities happen because of human error,” Wagner explains. “The goal is to have AI-powered tools that assist developers in identifying issues before software is shipped, reducing the chances of future exploits.”
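As a rough illustration of what such a developer-facing tool might look like (not the project's actual system), the sketch below hides the model behind a hypothetical query_llm placeholder and asks it to flag vulnerabilities in a snippet before release. The prompt wording and the SQL-injection example are assumptions made for the sketch.

    # Hedged sketch of AI-assisted pre-release code review; query_llm is a
    # stand-in for whatever code-trained model a real tool would call.
    def query_llm(prompt: str) -> str:
        """Placeholder for a call to a code-analysis model.
        Returns a canned answer so the sketch runs end to end."""
        return ("Possible SQL injection: user input is concatenated into the "
                "query; use a parameterized query instead.")

    def review_before_release(source: str, filename: str) -> str:
        # Ask the model to list likely vulnerabilities with locations and fixes.
        prompt = (
            f"Review the following code from {filename} for security vulnerabilities. "
            f"List each issue with the line it occurs on and a suggested fix.\n\n{source}"
        )
        return query_llm(prompt)

    # A snippet with an obvious injection risk that such a tool should flag.
    snippet = "cursor.execute(\"SELECT * FROM users WHERE name = '\" + user_input + \"'\")"
    print(review_before_release(snippet, "users.py"))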
Another crucial application of their research is network threat detection—the ability to monitor network traffic and detect cyber intrusions as they happen. AI can analyze vast amounts of data in real time, spotting unusual activity that could indicate a cyberattack. “If a system gets hacked, the faster you detect it, the faster you can fix it before it causes more damage,” Wagner said. “It allows us to automate and improve this process, making it accessible even to smaller organizations that lack dedicated security teams.”

Empowering Security Analysts

Cybersecurity isn’t just about technology—it’s also about the people on the frontlines: security analysts who sift through massive amounts of data to detect and respond to cyber threats. But these teams are often overwhelmed with information. The UCNI researchers are developing AI-powered tools to assist security analysts by filtering and prioritizing threats, allowing them to focus on the most pressing dangers.

“The funding from the UCNI allowed us to bring together experts from multiple universities and provide hands-on opportunities for students to work on cutting-edge AI research,” Chen said. “It has also given us the chance to present our findings at major international cybersecurity conferences.”

Some of the team's recent contributions include:

• UniTSyn, a large-scale dataset designed to improve AI’s ability to detect security vulnerabilities, presented at the International Symposium on Software Testing and Analysis (ISSTA) in 2024.

• Prompt Fuzzing for Fuzz Driver Generation, research on using AI for software security, presented at the ACM Conference on Computer and Communications Security (CCS) in 2024.

AI and the Future of Cybersecurity

While early experiments with generative AI for code analysis were not always encouraging, the researchers are learning from these setbacks and refining their approaches. Despite initial challenges, the team remains optimistic about AI’s role in fortifying digital defenses. “It’s an exciting time to be working in cybersecurity and AI,” Wagner said. “The threats are constantly evolving, but so are the tools at our disposal. By staying ahead of the curve, we are working to anticipate and mitigate future cyber risks.” ◆