This free and interactive resource is designed to inform and empower K-12 education as it moves into the next generation of learning through artificial intelligence (AI).
The AI Toolkit
For K-12 Education
A free resource from:
The Missouri School Boards’ Association and its Center for Education Safety
This publication is provided free of charge for informational purposes. Reproduction of its contents, in whole or in part, is not permitted without proper attribution to MSBA. Please credit MSBA when referencing or sharing any material from this publication.
INTRODUCTION
On behalf of the Missouri School Boards' Association, I am thrilled to welcome you to our AI Toolkit for K-12 Education! This comprehensive resource is designed to empower school boards, administrators, educators, and parents to explore the exciting possibilities and potential challenges of Artificial Intelligence (AI) in schools.

As the educational landscape continues to evolve, it's crucial that we equip our students with the skills and knowledge they need to thrive in an increasingly technology-driven world. AI presents a unique opportunity to personalize learning experiences, enhance critical thinking skills, and prepare students for the careers of tomorrow. At the same time, we must ensure policies and procedures are in place regarding AI usage, and that the safety of our schools and students remains a top priority.

MSBA recognizes the potential challenges that come with integrating new technologies. This toolkit is here to empower you to make informed decisions about incorporating AI into your schools and is designed to be ever-evolving as AI continues to expand. We encourage you to explore the resources, share your experiences with colleagues, and join us in shaping the future of education in Missouri and beyond.
Sincerely,
Melissa Randol, Executive Director Missouri School Boards' Association
Click to jump to a section.
TABLE OF CONTENTS
Understanding AI: Introduction, misconceptions, & definitions
AI Meets K-12: Ways for stakeholders to use, addressing plagiarism
AI Law & Policy: Laws impacted, policy samples, & considerations
AI Safety & Security: Data privacy, deepfakes, & other concerns
AI Ethics: Fairness, transparency, & accountability
AI Resource Library: Collection of quality tools & information
Glossary
What’s New?
Feedback
UNDERSTANDING AI
What is AI, anyway?
In simple terms, artificial intelligence (AI) means making machines smart enough to do tasks that once only people could do well. This includes things like solving problems, making decisions, and learning from experience.
While there have been major advances in recent years, AI is not a new concept. In fact, the first AI chatbot (named ELIZA) made its debut in 1966. It was designed to be a virtual psychotherapist.
Is this science fiction come to life?
Not quite. While AI is the term our society has chosen to describe this technology, it is not the “robots taking over the world” that authors warned us of. No machine exists today that can make decisions and create things without some type of human involvement. These machines must be trained by humans with knowledge, skills, and judgment. Therefore, the term “artificial intelligence” is not entirely accurate; a more accurate term is “machine intelligence.”
How is AI used in everyday life?
DID YOU KNOW? 99% of Americans use AI every week, but 65% don’t realize it. Source: Gallup, January 2025
Here are some common ways we use AI to make our lives easier in the 21st century:
Social media platforms use AI programs to determine what content to show users based on their scrolling history and interactions.
Virtual assistants like Amazon’s Alexa and Apple’s Siri use AI to interact with humans and complete their requests.
GPS programs use AI to determine the best directions to provide drivers based on conditions like traffic, construction, and weather.
Chatbots on websites use AI to answer user questions and provide relevant information.
Predictive text programs use AI to guess what a person will type next based on their previous writing.
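That last example, predictive text, can be illustrated with a tiny sketch. The Python below builds a toy word-frequency model that guesses the next word from what was typed before. This is a simplified illustration only; real predictive keyboards use far larger statistical and neural models.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each word follows another in the sample text."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1
    return following

def predict_next(model, word):
    """Return the word most often seen after `word`, or None if unseen."""
    candidates = model.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# A toy "typing history" for the model to learn from (made up for this example).
history = "I am going to school today and I am going to study"
model = train_bigrams(history)
print(predict_next(model, "going"))  # prints "to"
```

The same idea, scaled up to billions of examples and many words of context, is what powers the suggestions above your phone's keyboard.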
Is AI a good or bad thing?
AI is neither good nor bad: it’s morally neutral. AI is simply a tool that can be used in a variety of ways. How tools are used is determined by the humans who operate them, and AI is no different. Take a hammer, for example. It can be used by humans to do good things, like build birdhouses. But it can also be used by humans to do bad things, like break into vehicles. AI is the same way: humans determine whether it’s used for good or bad.
What is Generative AI?
While AI has been around for a long time, generative AI (or GenAI) is a fairly new innovation. GenAI tools are capable of creating original text, images, video, audio, and more. Humans simply tell the program what they want it to create, and in seconds the tool creates it. You’ve likely heard of or used some of the popular GenAI programs. Most of them are free or low-cost and simple for the average person to operate. No advanced technical skills are needed.
Popular GenAI Chatbots
Tool | URL | Launched | Owner
Apple Intelligence | n/a (only on select Apple devices) | October 2024 | Apple Inc.
ChatGPT | chatgpt.com | November 2022 | OpenAI (Sam Altman)
Copilot | copilot.microsoft.com | February 2023 | Microsoft
Claude | claude.ai | March 2023 | Anthropic
Grok | grok.com | November 2023 | xAI (Elon Musk)
Gemini | gemini.google.com | March 2023 | Google
MetaAI | ai.meta.com | April 2025 | Meta (Mark Zuckerberg)
Perplexity | perplexity.ai | December 2022 | Perplexity AI, Inc.
Which chatbot should you use?
The choice of a GenAI tool depends on several factors. Different tools excel at different tasks, so the "best" choice often comes down to matching the tool's strengths with your specific needs. Additionally, people tend to form opinions on which tools they prefer based on their experiences and workflows. It's best to try multiple tools and determine which one best meets your needs.

TRY THIS: Enter the exact same prompt into multiple GenAI tools and compare the results. This will help you determine which one(s) best meet your needs.
What’s so great about generative AI?
Generative AI tools come with many benefits. They include:
Speed: GenAI tools can complete tasks in seconds that would take humans much longer to do, which can save a lot of time.
Affordability: Many GenAI tools offer free versions, while others charge minimal subscription fees.
Availability: To access GenAI tools, all someone needs is an internet-connected device. No special software or downloads are required.
Easy to Use: Humans interact with GenAI tools with simple commands, usually in a “chat” style. No fancy coding or programming language is needed.
How do GenAI & search engines differ?
A search engine (like Google or Bing) finds information that already exists on the web. When you ask it something, it gives you a list of websites where you might find the answer. Generative AI (like ChatGPT) creates new text, images, or other content based on what it has learned from lots of data. It doesn’t just find answers, it actually generates them. So, while both give you information, a search engine points you to sources, and generative AI creates answers for you.
Is AI always correct?
No. AI programs are only as good as the information they are provided by humans. AI programs may provide users with incorrect information, so it’s always good to double-check what they tell you. Some AI programs like ChatGPT are known to “hallucinate,” meaning they make up information that is simply not true. For example, you could ask one to identify the typical color of grass, and it might respond that grass is purple.
Is AI biased?
Yes. Despite the best efforts to keep most AI tools unbiased, it is virtually impossible to prevent bias. This is because the responses provided by tools like ChatGPT are based on the information they were trained on and the feedback they receive to shape their understanding, both of which come from humans. When using AI programs, always watch for biases in the responses.
“We assume [AI] is neutral. We assume that it’s not prone to [human] biases, but AI tools are as biased as we are because they have been trained on us. They are a black mirror to us.”
- Punya Mishra, Arizona State University
Open vs. Closed AI – What’s the difference?
Open AI tools (like ChatGPT) can be used by anyone, and the information they provide is available to the public. Open AI tools tend to have larger capabilities and more knowledge to draw on. However, you must be careful what information you put into them (see below). Closed AI tools are private. Only members of a certain group (like a business, school district, or government entity) can access and use them. The functions of closed AI tools tend to be more specialized to meet the needs of their audience. Sharing data in a closed AI tool is much more secure, though information leaks are still possible through data compromises like malware and hacking.
Like squeezing toothpaste out of a tube, you cannot get data back once you input it into an open AI tool. If you’re not sure whether something is safe to share with open AI, it’s best to err on the side of caution.
When in doubt…leave it out!
What can’t GenAI do well?
The phrase “AI is like a brain without a mind” captures how AI can resemble human thinking while remaining fundamentally different. AI tools can perform many tasks like humans, but they lack true understanding or consciousness.
AI can process data, recognize patterns, and simulate decision-making processes...
…but it does not possess self-awareness or genuine comprehension.
AI can help create art or writing…
…but it cannot achieve the depth and authenticity that come from human experience and intuition.
AI can translate languages with impressive accuracy…
…but it often misses cultural nuances and expressions that humans understand instinctively.
AI can diagnose diseases from medical images faster than a human doctor…
…but it cannot provide the empathetic care and personalized treatment plans that a human doctor offers.
AI can assist in legal research by quickly sifting through vast amounts of data…
…but it lacks the ability to argue a case with the persuasion and emotional intelligence of a skilled lawyer.
How do you talk to AI tools?
To get the best results from a generative AI tool, you need to give it clear instructions. This is called prompting. A prompt is what you type or say to the AI. The better your prompt, the better the answer you’ll get.
Here are a few tips:
Be Specific: Instead of saying “What should I do this weekend?” try “Give me three practical ideas of what I can do this weekend.”
Give Examples: If you want the AI to write something a certain way, give an example. Like, “Write a polite email like this: ‘Dear Principal Smith, I’m writing to follow up on our meeting from last week…’ I need to ask for an extension on a deadline.”
Ask For What You Want: You can ask the AI to explain, write, help, fix, or even pretend. Like, “Pretend you’re a history teacher. Explain World War II in simple terms.”
Use Follow-up Questions: If you don’t like the first answer, ask again or say what you want differently. Like, “Can you make it shorter?” or “Use simpler words.”

Just remember: the AI doesn’t read your mind; it follows your words. The clearer you are, the better the help you’ll get.
Weak Prompts vs. Stronger Prompts

Weak: “Write a story.” → Stronger: “Write a funny story about a talking dog who wants to become a chef.”
Weak: “Help me with math.” → Stronger: “Explain how to solve a system of equations with substitution, step by step.”
Weak: “Make this sound better.” → Stronger: “Rewrite this sentence to sound more formal and professional: ‘Hey, I need that paper ASAP.’”
Weak: “Tell me about history.” → Stronger: “Give me 3 important facts about the Civil Rights Movement, written at a middle school level.”
Weak: “Give me ideas.” → Stronger: “Give me 5 creative science fair ideas that use recycled materials.”
Weak: “Make a picture.” → Stronger: “Create a cartoon-style image of a cat flying a spaceship through a rainbow galaxy.”
Weak: “Help me study.” → Stronger: “Make a 5-question quiz about the water cycle for a 7th-grade science student.”
Can AI tools handle complex tasks?
Yes. However, asking the tool to do something big all at once can lead to confusing or messy results. A better way is to break it into smaller steps.
Let’s say you’re helping plan a school talent show. Telling the tool to “Plan a school event.” is too broad. It doesn’t know what kind of event, who it’s for, or what you need help with. Instead, break it into steps, like this:
1. Choose the type of event: “Give me 5 fun school event ideas that involve student performances.”
2. Pick a theme and name: “Come up with 5 creative themes and names for a talent show.”
3. Create a plan: “Make a simple timeline of what needs to be done 4 weeks before the event.”
4. Develop marketing: “Write an announcement to read over the school intercom to get people to sign up.”
5. Organize the show: “Help me make a schedule for the performances with 10 acts, 5 minutes each.”
6. Create a checklist: “Design a checklist for the day of the talent show to keep everything running smoothly.”
Why give feedback to AI tools?
When you use an AI tool, it might not always provide the perfect answer right away. You can give feedback to help the AI do better. Here’s how to give good feedback:

Be Clear: Say what you want to change. Like, “Make it shorter,” or “Use simpler words.”
Be Specific: Instead of “This is wrong,” try “This fact is not correct. Can you check it?”
Ask For Fixes: You can say things like “Try again,” or “Can you give more details?”
Say What You Liked: If something worked well, you could say, “That was a good explanation. Can you add more like that?”
Use The Thumbs Up Or Down Buttons: If you liked the answer, click thumbs up. If it wasn’t helpful, click thumbs down. This helps improve the tool and trains it on your expectations.

Remember, AI tools learn from what you tell them during the conversation. You don’t need to be perfect; just speak up when something isn’t quite right, and ask for what you need.
HERE’S A TIP! While many GenAI tools do not require users to sign up or log in, it’s best to do so because the tool will remember your previous conversations and craft responses based on them, providing you with better results.
AI MEETS K-12
AI is shaking up just about every area of modern life, and the education world is not immune. The use of AI tools in K-12 schools provides numerous opportunities and challenges for every stakeholder group. In this section, we’ll consider the impacts on each.
Stakeholder groups: Schools (as a whole), Students, Teachers, the School Board, Administrators & Staff, and Parents.
AI & Schools (as a whole)
OPPORTUNITIES
Maintenance and Operations: AI can predict maintenance needs for school facilities and equipment, which can reduce downtime and ensure a safe environment.
Resource Allocation: AI can help in efficiently allocating resources like classroom space, lab equipment, and library materials based on usage patterns and demand forecasts.
Energy Management: AI can optimize energy usage within school buildings by analyzing patterns and adjusting heating, cooling, and lighting systems accordingly.
Nutrition: AI can determine menus and food supply orders based on student body, nutrition, budget, dietary restrictions, and more.
Transportation: AI can optimize bus routes to reduce travel time, save fuel, and avoid traffic.
CHALLENGES
Dependence on Data Quality: The effectiveness of AI tools depends on the quality and accuracy of the data they’re given. Poor data quality can lead to poor outcomes.
Algorithmic Bias: AI algorithms can become biased from the data they are trained on, which may lead to unfair or discriminatory outcomes.
Technical Failures: AI tools can experience technical glitches or failures, which could disrupt services and cause delays or safety issues.
Cost and Resources: Implementing AI tools may require big investments in technology, training, and maintenance. Schools with limited budgets might struggle to afford these resources.
AI and Students
OPPORTUNITIES
Personalized Learning: AI can adapt educational content for individual learning styles, paces, and levels that address each student's strengths and weaknesses.
Special Education: AI can offer customized support for students with special needs - like speech recognition, predictive text, or voice-to-text tools.
English Language Learners: AI-powered translation and language learning tools can help students who are non-native speakers better understand the curriculum and improve language skills.
24/7 Tutoring: AI can provide around-the-clock academic support, answering questions, explaining concepts, and offering additional practice materials.
Mental Health & Wellbeing: AI can help identify early signs of stress, anxiety, and depression by analyzing patterns in student behavior and engagement.
CHALLENGES
Over-Reliance on Technology: Some students may become overly dependent on AI tools for learning and problem-solving, which could reduce their ability to think critically and independently.
Privacy and Data Security Concerns: AI systems often require access to personal data, such as student performance metrics and behavioral patterns, raising concerns about data privacy and security.
Dehumanization of Education: While AI tutors can be helpful, they can't replace the value of human interaction in the classroom. Social development and communication skills are important too, and AI can't provide those.
Misuse for Cheating: AI tools capable of generating text or answering questions could be misused by students to cheat on assignments or tests. Students must be taught how and when to use AI appropriately.
AI and Teachers
OPPORTUNITIES
Automated Tasks: AI can automate tasks like grading, attendance tracking, and writing communication to parents - freeing up time for more important tasks.
Lesson Planning: AI can help teachers generate lessons, activities, and assignments based on specific learning objectives and student needs.
Data-Driven Instruction: AI can analyze student performance data to identify areas where the class is struggling or excelling. Teachers can adjust their approach and focus on the students who need extra help.
Classroom Management: AI-powered classroom management tools can help teachers track student engagement, identify disruptive behavior patterns, and provide real-time feedback on classroom dynamics.
Professional Development (PD): AI can personalize PD opportunities for teachers by recommending resources based on their individual needs and interests.
CHALLENGES
Technical Barriers: Integrating AI into the classroom requires training for teachers, which can take time, money, and resources.
Security Concerns: A lack of understanding about data security can make teachers vulnerable to data breaches or misuse of student data collected by AI tools.
"Black Box" Problem: It can be difficult to understand how AI systems create the information they provide. This could make it challenging for teachers to evaluate the validity of AI suggestions.
Resistance to Change: Educators who are accustomed to traditional teaching methods may be reluctant to adopt new approaches.
AI and the School Board
OPPORTUNITIES
Strategic Planning: AI can analyze historical data on student performance, enrollment trends, teacher staffing needs, and more. This allows boards to make data-driven decisions.
Budget Management: AI can analyze spending patterns and predict future budget needs. This allows boards to make informed financial decisions.
Stakeholder Surveys: AI can streamline the survey process by generating questions, writing stakeholder communications, and analyzing results to help boards gain maximum insights from surveys.
Multilingual Communication: AI translation tools can help boards communicate effectively with those in their communities from diverse backgrounds.
Superintendent Evaluation: AI can help set clear, measurable goals for the superintendent, then track progress toward those goals throughout the year.
CHALLENGES
Data Privacy and Security: Boards need to have strong data security measures in place to protect student privacy and comply with data privacy regulations.
Bias and Fairness: Boards need to be aware of potential AI algorithmic bias and take steps to reduce it, like using diverse datasets and carefully evaluating AI recommendations.
Transparency: Boards should disclose to stakeholders when AI is used for decision making and provide justification for its usage. Ultimately, stakeholders need to know that humans made the final decision.
Equal Access: Not all districts have equal access to modern and high-quality technology. Boards must consider how the use of AI might make existing educational disparities worse.
AI and Administrators & Staff
OPPORTUNITIES
Enhanced Communication: AI chatbots can handle common inquiries from parents, students, and staff, and provide immediate responses.
Scheduling: AI can help with schedules for students, teachers, lunch shifts, facilities, and more, a major time saver for administrators.
Recruitment and Hiring: AI can streamline the hiring process by screening resumes, assessing candidate qualifications, and even conducting initial interviews through chatbots.
Financial Management: AI can assist in financial planning by analyzing past expense patterns and predicting future financial needs, helping administrators make informed budgeting decisions.
Security and Safety: AI tools can be integrated into emergency operation plans (like the K-12 Safety app) to ensure thorough and accurate considerations are made for people and locations in the event of an emergency.
CHALLENGES
Infrastructure Requirements: Implementing AI solutions may require upgrades to existing IT infrastructure, including hardware, software, and network capabilities.
Change Management: Introducing AI may require big changes in workflows and processes, which can be met with resistance from staff who are used to more traditional methods.
Data Quality and Integration: Ensuring data is accurate, complete, and integrated from various sources can be difficult. Poor data can lead to unreliable AI insights.
Transparency and Accountability: Understanding how AI systems make decisions can be challenging. This can raise concerns about transparency and accountability in educational decision-making.
AI and Parents
OPPORTUNITIES
Homework Assistance: AI tools can help parents understand educational content so they can better aid their children in homework, projects, test prep, and more.
Special Needs Support: AI can offer personalized resources, tools, and strategies to help parents better support their child’s unique learning requirements.
Behavioral Insights: AI can analyze data on a child's behavior and social interactions to provide parents with insights and recommendations to support their child’s social and emotional development.
Course and Extracurricular Recommendations: AI can help parents and students choose appropriate courses and extracurricular activities based on the student’s strengths, interests, and future goals.
College and Career Guidance: AI tools can provide personalized guidance on college and career planning, helping parents and students navigate their futures.
CHALLENGES
Access to Technology: Not all parents have access to the necessary technology or internet connectivity, which can limit their ability to use AI tools effectively.
Technical Literacy: Some parents may lack the technical skills required to navigate and utilize AI applications, leading to frustration and underutilization of available resources.
Consent and Control: Parents may be wary of how their child's data is used and whether they have sufficient control over, and understanding of, the consent given to AI tools.
Over-reliance on Technology: There's a risk of parents becoming overly reliant on AI tools, which may lead them to neglect the importance of personal engagement and judgment in their child’s education.
Watch: AI in Their Words
Hear why AI matters in schools from a variety of stakeholders: a school board member, an administrator, a teacher, and two students.
Watch three education professionals discuss how they use AI tools to enhance their work.
Does Using GenAI = Cheating?
The biggest debate in the education world when it comes to GenAI revolves around academic honesty. Is it considered “cheating” if students use AI tools on their schoolwork? The answer is complicated. Let’s look at another technology that shook up education: the calculator. In the 1980s, calculators started to become cheap and readily available. This led to a heated debate over whether students should use them on math assignments. Was using a calculator cheating, or just being resourceful? The answer turned out to be both. There are times when it’s appropriate for a student to use a calculator and times when it’s inappropriate.
The Daily Item, April 5, 1986
The same can be said about using AI tools. When a student is tasked with creating an original piece of writing or solving a mathematical problem, AI tools can basically do the work for them. This is a form of academic dishonesty.
However, there are many instances where students can benefit from using AI tools to enhance their learning. They include:
Research Assistance: AI can help students find relevant articles, books, and resources quickly by summarizing information or suggesting related topics, making research more efficient.
Writing Assistance: AI tools like grammar and style checkers (e.g., Grammarly) can help students improve their writing by suggesting corrections and enhancing clarity, coherence, and tone.
Study Aids: AI can create personalized quizzes, flashcards, and practice tests based on the material a student needs to study, making revision more targeted and effective.
Plagiarism Detection: AI can help students ensure the originality of their work by checking for unintentional plagiarism and suggesting citations where needed.
The Stoplight Model
It’s important for teachers to educate students on when it’s okay and not okay to use AI. One popular method used in schools today is the “stoplight model,” which clearly identifies when students may use AI on assignments and to what level.
RED LIGHT: AI usage is NOT permitted in this activity.
YELLOW LIGHT: Permission from the teacher is required before using AI.
GREEN LIGHT: Students are encouraged to use AI tools.
To learn more about this concept, read A Stoplight Model for Guiding Student AI Usage from Edutopia.
Detecting AI Usage in Writing
AI writing detectors work by analyzing text for patterns, structures, and stylistic elements commonly associated with AI-generated content. They compare the text against training data from AI models and assess factors like repetitiveness, probability of word choices, and sentence predictability. Some detectors use algorithms designed to identify the linguistic "fingerprint" of AI outputs. However, these detectors are not always accurate. They may incorrectly flag well-written human text as AI-generated, especially if it follows clear, formal, or predictable patterns often taught in academic writing. Conversely, they may fail to detect AI-generated text that mimics human idiosyncrasies or includes intentional errors. Their accuracy depends heavily on the quality of the training data and the complexity of the writing being evaluated, making them imperfect tools for assessing authorship.
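As a rough illustration of the kind of surface signal these detectors measure, the sketch below computes one simple statistic: the share of distinct words in a passage, where highly repetitive text scores lower. This is a hypothetical toy metric for teaching purposes, not an actual detector; real detectors combine many such signals with trained models.

```python
def distinct_word_ratio(text):
    """Share of distinct words in a text; lower values mean more repetitive wording."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    return len(set(words)) / len(words) if words else 0.0

# Two made-up sample passages for comparison.
varied = "The storm bent the old oak while gulls wheeled above the churning harbor"
repetitive = "The dog ran and the dog ran and the dog ran and the dog ran"

print(distinct_word_ratio(varied) > distinct_word_ratio(repetitive))  # prints True
```

Even this toy shows why such signals misfire: a careful human writer following formal, predictable academic conventions can score "machine-like," which is exactly the false-positive problem described above.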
A Robot Wrote the Declaration of Independence?
What happened when a data scientist ran the preamble to the U.S. Declaration of Independence through an AI-detection system? The system claimed that 97.75% of the preamble was AI-generated. This is just one example of inaccurate detection of AI writing. Schools can’t rely on these programs to spot plagiarism: teachers must remain engaged in understanding and identifying their students’ work.
Learn More About This Story
Detecting AI, Detecting Bias
According to a report by Common Sense Media, the percentage of students who use generative AI programs to write their assignments is nearly equal among White, Latino, and Black students. However, when plagiarism detectors incorrectly label a student’s work as AI-generated, there is a disparity among races.
Teens whose teacher(s) flagged their schoolwork as being created by generative AI when it was not, by race/ethnicity:
White Students: 7%
Latino Students: 10%
Black Students: 20%
Are All Students Using AI to Cheat?
No, according to a report by the Stanford Graduate School of Education. While the percentage of students who admit to cheating on schoolwork remains high, it has NOT increased since the launch of ChatGPT and other GenAI tools in recent years. While some students are certainly using AI to cheat, they appear to be the same ones who were using other methods to cheat prior to the launch of these tools.
Ways to Detect AI Writing
Compare it to writing the student did in class. Does the writing style, language, and formatting appear to be similar to the original version?
Quiz the student on the content of their work. Can they answer questions about the content of the writing? If there are big words used, can they explain their meaning?
Use Google Docs’ Version History or MS Word’s Track Changes. Did the document go from blank to full of text nearly instantly? This likely indicates they copied and pasted the writing from an AI source (or any other resource).
Look for a lack of depth or analysis. Does the writing generally address the topic, but fail to go into detail or explain ideas on a deeper level?
Check any sources cited. Do the sources actually exist? Do they support the information referenced in the writing?
Know your students. Does this writing sound like them? Does it seem beyond their academic abilities?
Rethinking Assignments in a GenAI World
While there is no way to fully prevent students from using GenAI tools on schoolwork when it is forbidden, there are ways to make it much more difficult for them to do so (or at least get away with it).
Make the assignment personal. Connect to students' own experiences and observations.
Require process documentation. Have students submit notes, photos, interviews, drafts, etc. as proof that they actually completed the assignment as intended.
Add human interaction. Use elements that require interaction with others – such as interviews, teaching others, giving a presentation, etc.
Involve current and/or local data. Have students create, find, or analyze school-specific information, recent events, regional statistics, etc.
Include creative/original elements. In place of a traditional paper, instruct students to design, create, or problem-solve.
Put AI on your side. Upload your assignments to a GenAI chatbot and ask it to suggest ways to make them AI-proof.
Common Student Uses of AI
According to a 2024 report by Impact Research, these are the most common ways that K-12 students are using AI chatbots:
Help writing essays and other writing assignments: 56% of students
Studying for tests and quizzes: 52% of students
Completing other types of schoolwork: 45% of students
Deepening subject knowledge: 41% of students
Creating presentations: 38% of students
To learn more, read Summer preparedness: AI for the 2024-25 school year by Flint K12.
Where Meaningful Action Lies

It can be easy to get overwhelmed when thinking about all the ways that AI is impacting schools, both positive and negative. One way to stay grounded is to focus on the things that we can control and the things that matter.

We can’t predict every impact AI will have on education, but we can prepare students to thrive in an ever-changing world.
We can’t teach every possible new technology, but we can teach critical thinking and adaptability.
We can’t prevent all misuse of AI, but we can set clear expectations and model responsible use.
We can’t eliminate all risks of AI, but we can build responsible guardrails.
Watch: AI in Education – What School Leaders Need to Know
In January 2025, MSBA’s AI team presented this 90-minute webinar to help school leaders understand AI and the opportunities & challenges it poses for schools.
MSBA Interview with Ballotpedia
In August 2024, an interview with MSBA’s Mark Henderson was featured in Ballotpedia’s Hallpass about AI’s role in education. Read the interview here.
AI LAW AND POLICY
No matter a district’s stance on AI, it’s essential for it to have policies and procedures to ensure that tools are being handled properly. In this section, you’ll learn important factors to consider when drafting & adopting AI policies and procedures.
AI’s Impact on Federal Laws
Family Educational Rights and Privacy Act (FERPA)
Requirement: Parents (and students over 18) have control over educational records.
Possible outcome: Student data can’t be fed into any AI tool without consent.

Children's Online Privacy Protection Act (COPPA)
Requirement: Websites must get parental consent before collecting personal data of users under 13.
Possible outcome: Student data entered into AI may lead to fines, lawsuits, loss of trust, and student safety risks.

Children’s Internet Protection Act (CIPA)
Requirement: Schools must use technology to protect students online, including filters and monitoring.
Possible outcome: Failure to comply could result in loss of federal funding and exposure to liability.
Human Accountability
When it comes to using AI in schools, humans must always be “in the loop.” While AI tools may be incredibly helpful for making decisions and solving problems, they cannot have the final say. Humans must use the information provided by the AI tools to make decisions. If there are negative consequences to decisions aided by AI, a human will be held responsible. Saying, “Don’t blame me, the robot did it!” won’t hold up in court.
This slide from a 1979 IBM presentation is more true today than ever.
Watch: AI Affects Everything
You may not realize how many areas of school operations are touched by AI. In this video, attorney Gretchen Shipley of F3 Law explains AI’s connections to other areas of operations.
Policy Considerations
School districts have a golden opportunity to lead the way in responsible GenAI use, via policy. But AI is constantly changing. We don’t know what new forms it may take in the future. The “Missouri Model” was developed by the Missouri School Boards’ Association for district implementation. While made for Missouri, its universal principles make it applicable to all regions.
Structure of the Missouri Model
The board decides its district AI philosophy and strategy.
The board delegates an AI Coordinator to carry out strategy.
The AI Coordinator enacts a district AI Use Plan.
The AI Coordinator expands the AI Use Plan in consultation with school community.
The board, AI Coordinator, and other involved parties periodically review and adjust the plan.
Missouri Model Principles
Safety First: AI tools must have strong privacy controls and safety guardrails. There always needs to be a human “in the loop” to ensure protocols are being followed.
Local and Flexible: Since no two districts are exactly alike, the model is adaptable to the strategy each district chooses. This allows for quick administrative response to emerging issues.
Room to Grow and Explore: With a foundation of safety in place, the district can branch out to the potential enhancements offered by AI.
Accountability: Districts can provide concrete reasons for their choices. The district AI Coordinator’s decisions will be guided by the board’s philosophy and strategy, so they can always be traced back to policy.
THE BOTTOM LINE: Students and staff are responsible for the negative effects of their AI use.
MSBA Samples
Policy EHBD: ARTIFICIAL INTELLIGENCE USE
Sample District AI Use Plan
Documents updated on July 1, 2025, replacing previous versions.
Watch: The Washington Story
The Washington, MO School District has adopted MSBA’s AI policy. In this video, Superintendent Dr. Jennifer Kephart and Director of Technology Casey Fisher discuss their district’s journey to adopting the policy.
The Importance of Documentation
Every school needs to address AI use in its policies and procedures in a way that all stakeholders can understand. Here's a real example of what can happen when that is not done.

In October 2024, a Massachusetts high school student used AI to complete an Advanced Placement (AP) U.S. History assignment. The AI-generated work included fake book citations that didn't actually exist. According to a federal judge, the student copied large sections of text directly from the AI tool without making changes.

The school's response was serious: the student received a zero on the project, was temporarily blocked from joining the National Honor Society, and had to serve detention.

The student's parents sued the school. They argued that the school's handbook didn't have clear rules about AI use when their son completed the assignment. In their view, punishing him for breaking a rule that didn't officially exist wasn't fair. The parents also worried that the incident hurt their son's chances of getting into college, especially for early admission programs.

The bottom line: This case shows why schools must create clear, written AI policies and share them with everyone before enforcing consequences.
See the original story about this case from the Associated Press.
Policy & Procedure Tools
Legal Considerations for AI in Education Tip Sheet
AI Sample Policy
Sample Insert into Employee Accounts Receivable or Acceptable Use of Technology Policy
Sample Parent/Guardian Consent for Open AI Tool
Sample AI Language to Insert in Student Acceptable Use of Technology Policy
AI Contract Review Flowchart for Student Data Resharing and Parental Consent
Sample Checklist for Vetting Software Applications
A big thanks to Gretchen Shipley of F3 Law for generously sharing these resources!
AI SAFETY AND SECURITY
While generative AI provides many positive opportunities for schools, it is irresponsible to ignore the safety and security concerns. This section focuses on some of the major issues that could impact schools – including deepfakes, data exploitation, and tool misuse.
The Dirt on Deepfakes
A deepfake is an image, video, or audio clip that has been altered and manipulated by an AI tool. Deepfakes often show a person doing or saying something that they did not actually say or do in real life. Recent advances in AI technology have made this process easy and readily available to everyone, offering scammers and political activists powerful tools to exploit unsuspecting individuals or to manipulate and damage the public reputation of a person or group.

Why do they work? Deepfakes are created to “stoke” emotions and encourage people to share. One share on social media turns into several shares from recipients, then hundreds, then thousands, and on and on and on…
56% of Americans who encounter a deepfake on social media don’t realize that what they're looking at isn’t real. Source: Utah Valley University, 2024
The Evolution to Deepfakes
As AI technology has rapidly improved, it has become harder to detect deepfakes. There used to be telltale signs that something was a deepfake, such as a mismatched face or the wrong number of fingers on a hand:
[Image: side-by-side “REAL” and “FAKE” photos, with six numbered callouts marking telltale flaws.]
It is no longer easy to detect when something is a deepfake. Sophisticated AI tools can create convincing deepfakes using just one photo of a person.
Test Yourself
Take a close look at these four images of a young woman. Which one do you think was created by AI? The answer can be found on the next page.
[Four images of the woman, labeled A through D.]
Test Yourself - Solution
All four of these images were created by ChatGPT. The woman depicted does not exist in real life. Thanks to Reddit user AK611750 for creating and sharing these images.
Fake Voice, Real Damage
In January 2024, an audio recording surfaced of a Maryland high school principal making crude, inflammatory remarks about his own community. Within hours, the audio had spread across social media platforms, igniting outrage among parents, students, and residents. But the voice wasn't real. Police traced the sophisticated deepfake to a disgruntled school employee who had weaponized AI against his boss. The perpetrator was fired, charges were filed against him, and authorities definitively cleared the principal of any wrongdoing. Despite official statements from both the school district and police, the fabricated audio had already poisoned public perception. Threats poured in through social media. The principal's family required security protection at their home as hostility escalated throughout the community.
In the end, facts weren't enough to repair a reputation shattered by artificial deception. The principal quit his job and relocated his family.
Learn more: The racist AI deepfake that fooled and divided a community. BBC, Oct. 4, 2024.
Deepfakes: Think It Through
When it comes to protecting yourself from misinformation, the key is don’t believe everything you see, read, or hear.
Verify Sources: Check the credibility of the source. Look for news from other sources that are known to be accurate and responsible. Verify the information across multiple trusted sources before believing it and, especially, before sharing it.

Check the Author: Research the author of the content. Ensure they are credible, have a reputable background, and are not simply sharing something unverified.

Use Fact-Checking Tools: Use trustworthy fact-checking websites and tools, and use more than one. Even some “reputable” fact-checking tools are known to share only the “portions” of the facts that align with specific agendas, so always cross-check.

Be Skeptical of Social Media: Understand that social media platforms are “breeding grounds” for misinformation. Posts are frequently shared without trusted verification and, often, with emotion. Be cautious of “viral” content and consider the potential biases of those sharing the information.
THE BOTTOM LINE: Think before you share and verify with care!
Reverse Image Search
Before clicking “share” on any image or video, try a reverse image search. Open a search engine (Google, Bing, etc.), click Images at the top, and search for the image. If you can’t find it anywhere else, it may be fake, as most authentic images are readily available online.
Watch: Five Safety Tips
In this video, attorney Gretchen Shipley of F3 Law provides five safety tips for organizations when it comes to AI usage.
Using AI for Child Sexual Abuse Material
Child Sexual Abuse Material (CSAM) is the modern term for child pornography. Unfortunately, AI tools are being used to create CSAM. As AI technology improves and becomes readily available, the number of cases of AI-generated CSAM is rising quickly.
Reports of AI-generated CSAM (January – June)
2024: 6,835
2025: 440,419
Source
AI Deepfakes and Sextortion
Sextortion is a serious crime that occurs when someone threatens to distribute private and sensitive material unless they are provided with money or other goods/services. Young people are often targeted in these scams because they can be easily manipulated. AI can produce fabricated images and videos of minors unclothed and participating in sexual activities. This can inflict serious psychological damage on young victims, leading to bullying, anxiety, and self-harm. The trauma experienced by children and their families remains genuine and severe, regardless of whether the exploitative content was artificially generated rather than photographed. Learn more by reading The Growing Concern of Generative AI and Child Exploitation from The National Center for Missing & Exploited Children.
What Can Schools Do?
While there is no way to stop bad actors from creating deepfakes, there are some steps schools can take to protect their students and staff from the effects of deepfakes.

Educate: The adults and children involved in your schools need to know what deepfakes are and that the material they see online may not be what it seems.

Encourage: “Think before you share or act” is an important message urging people to stop and consider whether the material they’re seeing is true before they share it with others or react inappropriately.

Develop: Schools need policies and procedures that address the creation and use of deepfakes to slander or spread misinformation.
Feel free to share or print this poster to encourage others to think before they share.
Protecting Student Data
For legal and safety reasons, we must ensure that AI tools do not collect or share personally identifiable information (PII).
PII includes: biometrics, health information, passwords, location, birthdate, phone number, and account numbers.
Here are some ways to ensure PII is protected:
Educate Adults and Students: Everyone needs to understand PII and why it’s important not to expose it to AI tools. We teach children not to share their personal information with strangers on the internet, but they may think it’s safe to share it with AI chatbots since chatbots are not human. That is not the case.
Set Usage Guidelines: Ensure that the only data collected by tools is what’s absolutely necessary for their intended educational purpose. Outline how AI tools can use student data and who can access it. Identify consequences for district staff who fail to follow these guidelines and unnecessarily expose student PII.

Vet All AI Tools Before Use: Tools must be examined to ensure they protect student data BEFORE anyone is given access/permission to use them. It can feel overwhelming to keep up with all the available AI tools. It’s okay to go slow and ensure you fully vet them – don’t rush! Here are two resources that can help:

Common Sense Media publishes AI reviews that act as “nutrition labels” for AI tools. They describe a product's opportunities, considerations, and limitations in a clear and consistent way, putting the information you need at your fingertips.

The CoSN K-12 Community Vendor Assessment Tool is a free questionnaire specifically designed for schools to measure vendor risk. Before you purchase a third-party solution, ask the solution provider to complete a K-12CVAT tool to confirm that information, data, and cybersecurity policies are in place to protect your sensitive school system information and constituents’ personally identifiable information (PII).
Review Existing Vendor Contracts: Many companies are implementing AI tools into their existing products and services. They may not always notify users that this has occurred. That’s why it’s important to ensure any existing agreements with vendors address student PII usage. Don’t let tools fall through the cracks!
What’s The Worst That Could Happen?
There is no such thing as “safe data.” While some AI tools do their best to safeguard data, there is no way to guarantee it won’t be breached in some way. In the world of AI, data is the new gold. Here are some possible consequences of entering data into an AI tool:

Your data may be stored, analyzed, and shared without your knowledge or control.

Your data may be sold to third parties, used for targeted ads, or incorporated into system training, leading to further exposure.

If your data is breached through hacking or another cybercrime, it may lead to impersonation, unauthorized account access, or scams.
24% of teenagers admit to sharing personal information with an AI chatbot. Source: Common Sense Media