leadership, policy & implementation

What AI Invisibilizes: Critical Perspectives on AI Literacy and Emergent AT

By Leila Denna, MS, CCC-SLP
AI is already impacting education as schools, particularly secondary and postsecondary educators, grapple with free artificial intelligence (AI) tools that make cheating easy and plagiarism more difficult to detect. Like many industries, the field of special education has seen an influx of AI-based technologies over the past two years as the global craze for generative AI continues. There is evidence to support AI's potential to positively affect student learning through intelligent tutoring systems (Ma, Adesope, Nesbit, & Liu, 2014), grammar checks, and smart composition writing tools. However, relatively little information is available about how AI tools can empower education professionals to be more innovative in meeting their students' needs (Office of Educational Technology, 2023, p. 59).

In the last year, we have seen a number of new technologies marketed to overworked professionals with offers to offload parts of their jobs. This includes the advent of education-specific AI tools to support the rise in communication demands with colleagues, students, and families. The assistive technology (AT) and augmentative and alternative communication (AAC) fields must now also creatively address how AI-powered systems can positively impact the needs of their users. With each technology solution marketed to increase the educator's quality of instruction or the student's quality of learning, it is essential to ask: at what cost? The consequences of ignoring what AI invisibilizes, or obscures, may amplify accessibility barriers within technology systems.

After more than two years of formal and informal investigation into AI applications in education and AT, there continues to be a need for rigorous AI training for special educators and related service providers. It is time we focus on demystifying this seemingly novel technology. Without relevant background knowledge and dialogue about critical aspects of AI that are invisibilized by the veneer of "magic" in AI ed-tech tools, we risk excluding educators, service providers, and school leaders from their rightful seat at the table. As professionals in pedagogy, it is our responsibility to become active participants in shaping the role of AI tools and systems in education. This article presents four ways to demystify AI in education and AT: differentiating traditional and generative AI, AI literacy and the role of the educator, hallucinations and bias, and transparency.

TRADITIONAL VS GENERATIVE AI

Traditional AI is designed to complete a specific set of narrow tasks. This kind of computing uses predefined algorithms and rules to determine what it should do next. Some examples of traditional AI familiar to the public are computer chess, Google's smart compose and translation features, and custom "you might like" recommendations on Spotify or Netflix.

Generative AI (Gen AI) has long been the subject of research, but beginning with the release of ChatGPT in November 2022, the general public gained the ability to generate original content in seconds. Gen AI pulls from extremely large datasets and, through statistical learning models and pattern recognition, is able to create something new. The speed and accuracy of these results may seem almost magical, hence the "magic" invoked in the names of many new educational technology companies and features. But this magical framing obscures the statistical underpinnings of what is, at its core, "autocomplete on steroids." Moreover, these AI chatbots are built to be human-like, and we are anthropomorphizing them (e.g., using gendered pronouns to refer to technology) before we address their limitations.
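To make the distinction concrete, here is a minimal Python sketch, illustrative only: the rule table, corpus, and function names are invented, and real Gen AI systems learn from billions of examples rather than one sentence. It contrasts a rule-based "traditional AI" responder with a toy generative model that predicts the next word from learned patterns.

```python
import random
from collections import defaultdict

# Traditional AI: a fixed, human-authored rule table. Behavior is
# fully predictable; the system can only do what its rules allow.
RULES = {
    "hello": "Hi there! How can I help?",
    "hours": "We are open 9am to 5pm, Monday through Friday.",
}

def rule_based_reply(message: str) -> str:
    for keyword, reply in RULES.items():
        if keyword in message.lower():
            return reply
    return "Sorry, I don't understand."

# Generative AI (toy version): learn word-to-word patterns from
# training text, then sample new sequences. Real LLMs do this at a
# vastly larger scale, but the core idea is statistical prediction.
def train_bigrams(text: str) -> dict:
    model = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model: dict, start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        if word not in model:
            break
        word = random.choice(model[word])  # a statistically likely next word
        output.append(word)
    return " ".join(output)

corpus = "the student uses the device and the student smiles"
model = train_bigrams(corpus)
print(rule_based_reply("What are your hours?"))  # always the same answer
print(generate(model, "the"))  # varies run to run; sampled, not retrieved
```

The rule-based reply never changes, while the generated sentence differs across runs because it is sampled from learned patterns rather than retrieved from a rule. That sampling is why Gen AI can produce fluent text it was never explicitly given, for better and for worse.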
LEILA DENNA, MS, CCC-SLP is a speech-language pathologist and assistive technology (AT) specialist. As a senior consultant at Cotting Consulting, she supports students using assistive technology in the school setting. Her prior research experience at MGH and MIT focused on language and cognition in autism and dyslexia. Leila is dedicated to identifying ways to leverage technology and AI to improve clinical outcomes in real-world settings. In 2025, she started SLPs Talk Tech, a professional network where invited guests share AI-powered innovations and research across domains of practice in communication sciences and disorders (CSD).
AI LITERACY AND THE ROLE OF THE EDUCATOR

Whether you are a student or a teacher, AI literacy skills are increasingly recognized as valued competencies alongside media literacy, digital citizenship, and data literacy. Digital Promise defines AI literacy as the "knowledge and skills that enable humans to critically understand, use, and evaluate AI systems and tools to safely and ethically participate in an increasingly digital world" (Lee et al., 2024). For AT professionals in education, our AI literacy must be rooted in understanding AI's design, training data, ownership, and customizable accessibility features that may enhance educational quality.

Over the last year, we have seen a growing number of AI literacy resources across the internet, from non-profit leaders like Digital Promise to state-specific guidance such as the California Department of Education's AI guidance (September 2023). While these resources circulate, new discussions are emerging that suggest AI will directly impact the roles and responsibilities of the educator. First, there may be opportunities to enhance teaching practices with tools that support educator productivity. Second, some are beginning to suggest that the role of the teacher could shift to be more like a "caregiver" because of "intelligent" AI tools. At Unbound Academy, a virtual charter school in Arizona opening in September 2025, teachers ("guides" rather than content experts) "will monitor the students' progress. Mostly, the guides will serve as motivators and emotional support" (Schultz, 2025). The latter is concerning because it ignores the credible, thoughtful, and pedagogical expertise we have cultivated throughout our careers.

Here are some guiding questions to discuss AI literacy and the role of the educator in your spheres:

• What are some AI literacy resources you have come across that have been impactful?
• What has your experience been with the shifting narrative around AI in education (concerning teachers as motivators/caregivers)?
• What (including and beyond AI) would make you a more impactful AT professional?

UNPACKING HALLUCINATIONS AND BIAS

Another key consideration in understanding AI systems is unpacking the training data, as well as the bias and accuracy of AI output. Gen AI tools are trained on copious amounts of information, mostly from the internet. At present, whether publicly available internet materials constitute fair use for AI training data is being debated (e.g., the New York Times suing OpenAI and Microsoft). Additionally, users must now "opt out" of their information being used as training data for the next best AI model. While we await a clear judicial ruling on fair use, the problems of hallucinations and bias within AI remain.

A hallucination is a computer science term for errors or misleading results generated by AI models. Remember how AI is trained on materials created by humans? Despite our best intentions, we all carry implicit biases that can permeate what we produce. Whether these biases are perpetuated intentionally or unintentionally, their impact is visible in AI. The Diet and Digest Model (Denna and Burruss, 2024), depicted in Figure 1, was developed to visually represent how flawed training data can yield unreliable AI output. The internet is filled with inaccessible code and with false and biased information from sources such as forums; Gen AI derives patterns from this data, which can result in hallucinations. A team of medical researchers and AI specialists at NYU Langone Health found that 0.001% misinformation in a medical training data set led to 7% incorrect answers, demonstrating that it takes only a few articles of false information to skew large language model (LLM) results (Alber et al., 2025; a toy illustration of this effect follows Figure 1). This is not to say that all tools powered by LLMs and trained on internet data are bad or should never be used. Rather, the point is to illustrate how demystifying the accuracy of AI output can aid AT professionals in evaluating the benefit of these tools for various professional purposes.

Here are some guiding questions to discuss hallucinations and bias within AI tools:

• How have you, in your teaching and professional practice, implemented strategies to combat AI-generated misinformation?
• Have you encountered bias in your AI use in your practice? How do you approach it?
• Do you have a classroom or practice policy that relates to AI hallucinations and/or bias?
Figure 1. Diet and Digest Model by Denna and Burruss (2024)
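The mechanism behind the NYU Langone finding can be sketched in miniature. The Python toy below is hypothetical and is not the methodology of Alber et al. (2025); the topic names and counts are invented. It shows one way a vanishingly small share of planted misinformation can dominate answers: false content concentrates on topics the training data covers only sparsely.

```python
from collections import Counter

# Toy illustration (hypothetical data): a tiny absolute amount of
# misinformation can dominate answers about a sparsely covered topic,
# even when it is a negligible fraction of the overall training set.
corpus = (
    ["aspirin: take with food"] * 5000             # well-covered topic
    + ["rare-drug-x: safe with alcohol"] * 3       # planted misinformation
    + ["rare-drug-x: never mix with alcohol"] * 2  # sparse true coverage
)

poison_share = 3 / len(corpus)
print(f"Poisoned share of corpus: {poison_share:.4%}")  # about 0.06%

# "Training": tally the completions seen for a given topic prompt.
def completions_for(topic: str) -> Counter:
    return Counter(s.split(": ")[1] for s in corpus if s.startswith(topic))

counts = completions_for("rare-drug-x")
answer, freq = counts.most_common(1)[0]
share = freq / sum(counts.values())
print(f"Most likely answer for rare-drug-x: '{answer}' ({share:.0%})")
# -> 'safe with alcohol' wins with 60% of this topic's training signal,
#    even though the poison is only ~0.06% of the whole corpus.
```

Real LLM training is far more complex, but the intuition carries over: when true coverage of a topic is thin, a handful of false documents can outweigh it, which is how a fraction-of-a-percent poison rate can translate into a much larger error rate on affected questions.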
TRANSPARENCY

Educators continue to be encouraged to embrace AI technologies in their workflows, but it is imperative that we be diligent in analyzing how these technologies function. AI models operate on statistical calculations that are often hidden from the user. These "black box" AI tools make it difficult or impossible to discern the boundaries of AI's decisions (see Figure 2). This limits our opportunity to create and modify curriculum thoughtfully, design for unique user needs, and provide child-specific scaffolding.

The coming years will undoubtedly bring increased adoption and exploration of technology tools that provide new opportunities to enhance our lives, practice, and learning. We are all excited by the opportunity to optimize our workflows and increase access to robust curricula for students with disabilities, but first we must be able to critically evaluate "AI-powered" tools from both a technical and a pedagogical perspective. This includes the energy costs associated with AI tools, the protection of vulnerable users, and the amplification of societal biases in outputs. Empowered by an understanding of how AI makes decisions, we can confidently advocate for responsible and ethical AI implementation.

Here are some guiding questions to discuss transparency in AI tools (a small illustrative sketch follows the list):

• What areas of AI, as it relates to AT, do you think need more transparency?
• Is AI making decisions in your settings (e.g., grading work, analyzing data, AI detectors)? Has that yielded positive or negative outcomes for students or staff?
• What level of transparency would increase or decrease your use of AT that is built with AI?
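As a thought experiment on that second question, the sketch below contrasts an inspectable scoring rule with a stand-in for a black-box grading service. Everything here is hypothetical: the feature names, the weights, and the fixed score the "black box" returns. The point is what transparency buys an educator: with the first model, every contribution to a grade can be itemized, questioned, and corrected; with the second, only a number comes back.

```python
# Hypothetical example: a transparent grader vs. a black-box grader.

# Transparent: an inspectable linear scoring rule. An educator can see
# that "spelling_errors" penalizes students (including AAC users who
# spell phonetically) and can argue for, or simply change, that weight.
WEIGHTS = {
    "on_topic": 3.0,
    "spelling_errors": -0.5,
    "sentence_count": 1.0,
}

def transparent_grade(features: dict) -> float:
    # Every contribution to the score can be itemized and questioned.
    for name, value in features.items():
        contribution = WEIGHTS[name] * value
        print(f"  {name}: {WEIGHTS[name]} x {value} = {contribution:+.1f}")
    return sum(WEIGHTS[n] * v for n, v in features.items())

def black_box_grade(features: dict) -> float:
    # Stand-in for a proprietary scoring API: a score comes back,
    # the rationale does not.
    return 4.2

sample = {"on_topic": 1, "spelling_errors": 4, "sentence_count": 2}
print("Transparent model:", transparent_grade(sample))
print("Black box:", black_box_grade(sample))
```

Neither toy is a real product, but the contrast frames the advocacy question: before adopting an AI tool that makes decisions about students, can anyone on the team itemize why it decided what it did?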
REFERENCES:

Alber, D. A., Yang, Z., Alyakin, A., et al. (2025). Medical large language models are vulnerable to data-poisoning attacks. Nature Medicine, 31(4), 618–626. https://doi.org/10.1038/s41591-024-03445-1

Denna, L., & Burruss, M. (2024). Chat-GPT did NOT write this presentation: AI tools to empower your workflow. Presentation at the Assistive Technology Industry Association (ATIA) Conference, Orlando, FL.

Lee, K., Mills, K., Ruiz, P., Coenraad, M., Fusco, J., Roschelle, J., & Weisgrau, J. (2024, June 18). AI literacy: A framework to understand, evaluate, and use emerging technology. Digital Promise. https://digitalpromise.org/2024/06/18/ai-literacy-a-framework-to-understand-evaluate-and-use-emerging-technology/

Ma, W., Adesope, O. O., Nesbit, J. C., & Liu, Q. (2014). Intelligent tutoring systems and learning outcomes: A meta-analysis. Journal of Educational Psychology, 106(4), 901–918. https://doi.org/10.1037/a0037123

Schultz, B. (2025, January 6). This school will have artificial intelligence teach kids (with some human help). Education Week. https://www.edweek.org/technology/this-school-will-have-artificial-intelligence-teach-kids-with-some-human-help/2025/01

U.S. Department of Education, Office of Educational Technology. (2023). Artificial intelligence and the future of teaching and learning: Insights and recommendations. https://www.ed.gov/sites/default/files/ai-report.pdf
Thank you to Meriwether Burruss, M. Ed., for her input and collaboration investigating AI in AT.
Figure 2. Black Box AI
A CALL TO ACTION

As a community of practitioners in AT, we need to develop a deeper understanding of AI, analyze its design and technical functioning, and critically evaluate AI tools and systems. AI literacy compels us to engage in AI-focused professional networks that foster critical thinking and collaborative learning. Together, we can improve our collective understanding of AI in accessibility and assistive technology, and shape its benefits for our current students, future users, and the technology innovators who are building the tools of the future.