What AI Invisibilizes: Critical Perspectives on AI Literacy…

AI LITERACY AND THE ROLE OF THE EDUCATOR

Whether you are a student or a teacher, AI literacy skills are increasingly recognized as valued competencies alongside media literacy, digital citizenship, and data literacy. Digital Promise defines AI literacy as the "knowledge and skills that enable humans to critically understand, use, and evaluate AI systems and tools to safely and ethically participate in an increasingly digital world" (Lee et al., 2024). For AT professionals in education, our AI literacy must be rooted in understanding AI's design, training data, ownership, and customizable accessibility features that may enhance educational quality. Over the last year, we have seen a growing number of AI literacy resources across the internet, from nonprofit leaders like Digital Promise to state-specific guidance such as the California Department of Education's AI guidance (September 2023).

While these resources circulate, new discussions are emerging that suggest AI will directly impact the roles and responsibilities of the educator. First, there may be opportunities to enhance teaching practices with tools that support educator productivity. Second, some are beginning to suggest that the role of the teacher could shift to be more like a "caregiver" because of "intelligent" AI tools. At Unbound Academy, a virtual charter school in Arizona opening in September 2025, "teachers—known as 'guides' rather than content experts—will monitor the students' progress. Mostly, the guides will serve as motivators and emotional support" (Schultz, 2025). The latter is concerning because it ignores the credible, thoughtful pedagogical expertise we have cultivated throughout our careers.

Here are some guiding questions to discuss AI literacy and the role of the educator in your spheres:

• What are some AI literacy resources you have come across that have been impactful?
• What has your experience been with the shifting narrative around AI in education (concerning teachers as motivators/caregivers)?
• What (including and beyond AI) would make you a more impactful AT professional?

UNPACKING HALLUCINATIONS AND BIAS

Another key step in understanding AI systems is unpacking their training data, along with the bias and accuracy of their output. Gen AI tools are trained on copious amounts of information, mostly from the internet. Whether publicly available internet materials constitute fair use as AI training data is currently being litigated (e.g., the New York Times suing OpenAI and Microsoft). Additionally, users must now opt out of having their information used as training data for the next AI model. While we await a clear judicial ruling on fair use, the problems of hallucinations and bias within AI remain.

A hallucination is a computer science term for erroneous or misleading results generated by AI models. Remember how AI is trained on materials created by humans? Despite our best intentions, we all carry implicit biases that can permeate what we produce. Whether these biases are introduced deliberately or unintentionally, their impact is visible in AI output. The Diet and Digest Model (Denna and Burrus, 2024), depicted in Figure 1, was developed to visually represent how flawed training data can yield unreliable AI output. The internet is filled with inaccessible code and with false and biased information, such as that found in forums. Gen AI derives patterns from this data, which can result in hallucinations; the toy sketch after Figure 1 illustrates this pattern-matching on a small scale. A team of medical researchers and AI specialists at NYU Langone Health found that misinformation making up just 0.001% of a medical training data set led to 7% incorrect answers, demonstrating that it takes only a few articles of false information to skew large language model (LLM) results. This is not to say that tools powered by LLMs and trained on internet data are bad or should never be used. Rather, the point is to illustrate how demystifying the accuracy of AI output can help AT professionals evaluate the benefit of these tools for various professional purposes.

Here are some guiding questions to discuss hallucinations and bias within AI tools:

• How have you, in your teaching and professional practice, implemented strategies to combat AI-generated misinformation?
• Have you encountered bias when using AI in your practice? How did you approach it?
• Do you have a classroom or practice policy that addresses AI hallucinations and/or bias?

Figure 1. Diet and Digest Model by Denna and Burrus (2024)
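To make the "diet and digest" idea concrete, here is a minimal toy sketch in Python. This is our own illustration, not part of Denna and Burrus's model or any real LLM, and the sentences in it are hypothetical. The "model" simply counts which word most often follows a prompt in its training text, so skewed training data produces skewed answers:

```python
# A toy illustration (not a real LLM): a next-word "model" that just
# counts which word follows a prompt in its training text. If the
# training text is skewed, the model's "answer" is skewed too.

from collections import Counter

# Hypothetical training data: two of three sentences follow one pattern.
training_sentences = [
    "the nurse said she is ready",
    "the nurse said she is here",
    "the nurse said he is ready",   # minority pattern
]

def next_word(prompt_words, sentences):
    """Return the word that most frequently follows the prompt in the data."""
    counts = Counter()
    n = len(prompt_words)
    for s in sentences:
        words = s.split()
        for i in range(len(words) - n):
            if words[i:i + n] == prompt_words:
                counts[words[i + n]] += 1
    return counts.most_common(1)[0][0] if counts else None

# The model reproduces the majority pattern in its "diet" and prints "she"
print(next_word(["nurse", "said"], training_sentences))
```

The point is not the code itself but the mechanism: a model can only digest what is in its diet, so false or biased patterns in the training data resurface in its output, on a vastly larger scale, in real Gen AI tools.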
