80% accuracy in 4 out of 5 trials.

Objectives:

1. Julieta will evaluate AI-generated sentences for factual accuracy and relevance, correctly identifying and revising at least two inaccuracies per writing sample, in 4 out of 5 opportunities.
2. Julieta will edit AI-generated sentences by modifying grammar, punctuation, and word choice, with 80% accuracy as measured by teacher observation and writing samples.
3. Julieta will use an AI spelling and grammar correction tool to review and finalize a paragraph, making at least three corrections per writing sample, in 4 out of 5 opportunities.

ISSUES AND CONSIDERATIONS WHEN USING AI IN AAC FOR WRITING

We believe there is a strong case for using AI-enhanced AAC to support students who use AAC. These tools have the potential to remove long-standing barriers to transcription and the writing process, barriers that, left unaddressed, often limit authentic authorship and access to the general education curriculum. At the same time, we recognize that AI should not be adopted without caution. Educators, families, and AAC specialists are right to raise questions about bias, authorship, accuracy, and the potential for ableism in how the technology is applied. The goal is not to ignore these concerns, but to address them head-on while ensuring that students are not left behind as their peers move forward with new tools.

PRESERVING AUTHORSHIP AND AGENCY

Early adoption must be anchored in the principle that AI is a scaffold, not a replacement for student-generated ideas. Students should remain the decision-makers: selecting, editing, and approving AI suggestions so that their voice and intent remain intact. This approach mirrors the way nondisabled peers use tools like Grammarly or Google Docs to refine their work without surrendering ownership.

PROACTIVE EQUITY

Waiting to adopt AI features until every concern is resolved risks widening an existing equity gap. Peers are already using AI for idea generation, organization, and revision. Denying these tools to AAC users in the name of "caution" inadvertently reinforces the ableist assumption that their work must be completed without the same supports others take for granted.

ETHICAL TRANSPARENCY

Introducing AI with AAC should come with clear communication to staff, peers, and families about what the technology does, and does not, do. This helps prevent misconceptions that AI is "doing the work" for the student and reframes AI as a legitimate accessibility tool under IDEA.

ACCURACY AND CRITICAL REVIEW

AI-generated content should keep the human element central: the student, teachers, and therapists must review, fact-check, and refine AI output. This not only protects the integrity of student work but also preserves agency and authorship.

INTENTIONAL TRAINING AND SUPPORT

AI use paired with intentional training for students, educators, paraeducators, and families can lead to success. Consistent modeling and guided practice ensure that AI features are used effectively and embedded in the student's daily communication and writing routines.

GUARDRAILS FOR PRIVACY AND BIAS

Schools must choose AI tools that protect students' privacy and must ensure that AI-generated content is vetted for potentially ableist language. Recognizing that bias, especially ableist assumptions, can be embedded in technology is a critical part of the ongoing review process.

AAC USERS' PERSPECTIVES

Any discussion of AI in AAC must include the perspectives of AAC users themselves. Too often, technology decisions are made about AAC users rather than with them. Research led by AAC users has highlighted both enthusiasm for AI's potential and concerns about authorship, accuracy, and the risk of ableist assumptions being built into the tools. AAC users' perspectives remind us that AI should not replace or override the intent of the communicator. Instead, it should be developed and implemented in ways that preserve agency, honor lived experience, and respond directly to the priorities identified by people who rely on AAC every day.

Moving the conversation beyond AI's plausibility into its practical use does not mean ignoring valid concerns. By acknowledging and addressing these issues, we can implement AI-enhanced AAC in ways that promote equity, resist ableism, and prioritize student agency. The choice is not between "full speed ahead" and "never," but between cautious, informed integration now and the risk of leaving AAC users further behind.

CALL TO ACTION

Therefore, we call upon practitioners, administrators, and policymakers to take three concrete actions:

1. From Hesitation to Action

It is understandable to have questions about AI in AAC, but delaying its use when it can remove barriers means missed opportunities for students. General education peers are already benefiting from AI tools. If AAC users wait for every concern to be addressed first, the gap in
October / November, 2025 | www.closingthegap.com/membership Closing The Gap © 2025 Closing The Gap, Inc. All rights reserved.