URI – Discovery 2026 – Research Abstracts

PROMPT INJECTION DEFENSES Jay Nergard , Software Engineering Brudy Gundert , Cybersecurity Angel Madrigal , Software Engineering

MENTOR Sameer Abufardeh , Electrical, Computer and Software Engineering

Large Language Models (LLMs) are increasingly embedded in everyday software systems such as customer service chatbots, educational tools, and productivity applications. These systems rely on built-in constraints to ensure safe and reliable outputs, yet such guardrails remain vulnerable to prompt-injection attacks—inputs crafted to override a model’s intended instructions. Prompt injection is among the most prevalent and dangerous attack vectors, enabling malicious users to bypass safeguards and elicit unintended or sensitive information. As LLM-based systems become more widespread, addressing these vulnerabilities is increasingly critical. This project evaluates the susceptibility of LLM-based systems to prompt-injection attacks and develops effective, intuitive countermeasures to reduce their impact. We analyze how different attack categories, including code injection, role- playing, and indirect injection, affect model behavior and measure their success rates. In response, we propose a defense mechanism that is both practical and easily integrated into existing systems. Addressing the gap between academic research and real-world deployment, this project emphasizes simplicity and accessibility over complex or resource-intensive defenses. The outcomes include a categorized dataset of prompt-injection attacks (safe for research use), quantitative evaluations of attack success rates, and an assessment of the proposed defense strategy.

IGNITE AWARD

UNDERGRADUATE RESEARCH INSTITUTE | 67

Made with FlippingBook - professional solution for displaying marketing and sales documents online