What AI Invisibilizes: Critical Perspectives on AI Literacy…

TRANSPARENCY

Educators continue to be encouraged to embrace AI technologies in their workflows, but it is imperative that we remain diligent in analyzing how these tools technically function. AI models make decisions based on statistical calculations that are often hidden from the user. These black box AI tools make it impossible to discern the boundaries of AI's decisions (see Figure 2). This limits our opportunity to create and modify curriculum thoughtfully, design for unique user needs, and provide child-specific scaffolding.

The coming years will undoubtedly bring increased adoption and exploration of technology tools that provide new opportunities to enhance our lives, practice, and learning. We are all excited for the opportunity to optimize our workflows and increase access to robust curricula for students with disabilities, but first we must be able to critically evaluate "AI-powered" tools from both a technical and a pedagogical perspective. This includes weighing the energy costs associated with AI tools, protecting vulnerable users, and recognizing the amplification of societal biases in outputs. Empowered by an understanding of how AI makes decisions, we can confidently advocate for responsible and ethical AI implementation.

Here are some guiding questions for discussing transparency in AI tools:

• What areas of AI, as it relates to AT, do you think need more transparency?
• Is AI making decisions in your settings (e.g., grading work, analyzing data, AI detectors)? Has that yielded positive or negative outcomes for students or staff?
• What level of transparency would increase or decrease your use of AT that is built with AI?


Thank you to Meriwether Burruss, M. Ed., for her input and collaboration investigating AI in AT.

Figure 2. Black Box AI

A CALL TO ACTION

As a community of practitioners in AT, we need to develop a deeper understanding of AI, analyze its design and technical functioning, and critically evaluate AI tools and systems. AI literacy compels us to engage in AI-focused professional networks that foster critical thinking and collaborative learning. Together, we can improve our collective understanding of AI in accessibility and assistive technology, and shape its benefits for our current students, future users, and the technology innovators who are building the tools of the future.
