- IEPs are legally binding documents that outline a student’s unique needs, goals, and accommodations.
- AI technology has made it possible for some IEPs to be written by machines, raising concerns about the accuracy and appropriateness of these plans.
- Parents can look for indicators of AI authorship, such as excessively technical language or identical plans, to determine if their child’s IEP was written by a machine.
If you don’t live or work in the online space like I do, you may be unaware of all the buzz about AI lately. Proponents of AI say it will be able to do everything from curing cancer to solving climate change.
Others worry about the loss of human jobs and other implications. As businesses around the globe begin to adopt AI, it won’t be long before schools are trying to use it to write IEPs.
After all, schools are overworked and under-resourced. And writing an IEP is one of the most cumbersome tasks that teachers take on. It makes sense that they would view AI as a savior of sorts, a way to reclaim their time.
Parents like us rely on Individualized Education Programs (IEPs) to ensure that our children receive the support they need to succeed in school. IEPs are legally binding documents that outline a student’s unique needs, goals, and accommodations.
Traditionally, IEPs are written by a team of educators and specialists who work together to create a plan that meets the student’s needs.
However, with the advent of artificial intelligence (AI) technology, getting AI to write IEPs is getting some buzz online.
Does it matter? Should we care? Well, at the risk of being the wet blanket, I’ll stick my neck out and say that we are not there. Yet.
Don’t get me wrong–I am optimistic about many aspects of AI. For one, I hope that it can better analyze seizure and epilepsy data, and come up with new solutions.
With this article, I hope to inform you about AI and the implications of having it write IEPs, how to tell if your IEP was AI-written, and what to do if you think it was.
I also invite you to read all the articles in this series.
AI and IEPs Series
- 10 Reasons You Should Not Use AI to Write an IEP: A Friendly Reminder
- AI-Written IEPs: How to Tell if Your Child’s Was Generated by a Computer
- Using AI for IEP Writing: Understand the Legal Implications
There is some overlap in the articles, but I felt they each deserved their own space.
AI and OpenAI
AI stands for Artificial Intelligence. It refers to the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions), and self-correction.
AI is utilized in various applications such as speech recognition, natural language processing, problem-solving, and decision-making.
OpenAI is an artificial intelligence research laboratory consisting of both for-profit and non-profit entities. It was founded in December 2015 by Elon Musk, Sam Altman, Greg Brockman, Ilya Sutskever, Wojciech Zaremba, and John Schulman. They claim that the organization’s mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. It’s worth noting that several of them are billionaires, so I’m not convinced that they are doing this for “the greater good.”
But, there’s that word: OPEN. It’s open. That means that any information a human puts into AI to generate an answer can be added to the database. By design, these folks have said that they want it to be open and accessible to all.
Over the past several years, various big online entities have been building this database by scraping every bit of information on the entire internet. Anything you clicked, anything you read, said, commented on, purchased... it was all added to this giant database.
“Hey! I didn’t give permission for that!” Actually, you did. We all did. It’s been in everyone’s terms and conditions forever; it’s just that no one ever reads the terms and conditions on websites.
Every time you downloaded an app and tapped “I agree,” you gave permission. Every time you used a website, you probably agreed. (As an aside: not my online chat forums; I have not left those open. Or my online training or any of my sites. They are private and not being scraped.)
My point here is this: Unless a school is using a closed AI platform, if they use it to create your child’s IEP, then they are adding YOUR child’s data to the database.
Even if it’s closed, and only used by that particular district, the information is being added to a district-wide database. So, still not entirely private.
Let’s be honest. Most of us have seen an IEP with the incorrect name on it, because portions were copied and pasted. Do we want this happening on a grand scale?
How AI Works
Here’s a very brief summary of how AI works. It needs two main things to work–information and algorithms.
- First, the AI platform has to collect data. It has to get its information from somewhere. This has happened as a result of various online entities basically scraping the entire internet. Any public site you used—yes, they gathered that information.
- Then AI takes that information, organizes it, and puts it through its algorithms. Essentially, the algorithms are what make the AI work. So when the human asks AI to do something, it takes existing information and puts it through the algorithms to give the human an answer.
- This all goes through repetition, evaluation, and correction, like... forever. The AI platforms repeat steps 1 and 2 over and over, and based on how the humans interact with AI, it gets refined. (The toy sketch after this list shows the loop in miniature.)
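If you like to see ideas in code, here is a toy sketch of that loop in Python. To be clear, this is a hypothetical, drastically simplified example (a word-frequency “predict the next word” counter); it is not how ChatGPT or any real AI platform actually works. It only shows the collect-data, run-the-algorithm, refine idea in miniature.

```python
# A toy, heavily simplified illustration of the "collect data, run it through
# an algorithm, repeat and refine" loop. This is NOT how real AI systems work;
# it is a hypothetical sketch for readers who want to see the idea in miniature.
from collections import Counter, defaultdict

# Step 1: "Collect data." Here the data is just a few sentences; real systems
# scrape billions of pages.
training_text = (
    "the student needs support the student needs accommodations "
    "the teacher writes goals the teacher writes objectives"
)

# Step 2: "Run it through an algorithm." This one just counts which word tends
# to follow which, then predicts the most common follower.
follower_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follower_counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word seen after `word` in the training data."""
    if word not in follower_counts:
        return "<unknown>"
    return follower_counts[word].most_common(1)[0][0]

# Step 3: "Repetition, evaluation, and correction." Real systems adjust
# themselves based on feedback; here we simply show two predictions.
print(predict_next("student"))   # -> "needs"
print(predict_next("teacher"))   # -> "writes"
```

The point of the sketch: the output is only ever a remix of whatever went in. Feed it biased or generic data, and biased or generic answers come out.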
By the way, if you work with/for an AI entity, I know that I am oversimplifying this. You don’t have to write me and tell me. I am working under the assumption that many IEP parents don’t really know what AI is.
It is important to note that AI is not infallible and can make mistakes.
It is also important to note that not all IEPs are developed with the assistance of AI. While some educators have begun using AI tools to help with IEP development, this is not yet a widespread practice.
Risks of AI
Using AI raises several ethical concerns. These are particularly noteworthy:
- Bias and Fairness: AI systems can inherit biases present in the data they are trained on, leading to discriminatory outcomes. This can perpetuate existing societal biases related to race, gender, ethnicity, or other protected characteristics. Ensuring fairness in AI algorithms and mitigating biases is crucial to prevent unjust outcomes. How does society feel about our kids? I think we know! And all that bias and ableism was scraped up by AI bots to generate the answers.
- Privacy: AI systems often require access to large amounts of data, raising concerns about privacy infringement. Personal data collected for AI training or usage purposes can be misused or compromised, leading to violations of individuals’ privacy rights. Proper data anonymization and consent mechanisms are essential to safeguard privacy in AI applications.
- Transparency and Accountability: Many AI algorithms operate as black boxes, making it difficult to understand their decision-making processes. Lack of transparency can undermine trust and accountability, especially in high-stakes applications like healthcare or criminal justice. Ensuring transparency in AI systems, such as providing explanations for decisions and allowing for auditing, is essential for accountability. Ask anyone about the best teacher they ever had–and it’s all about connection. The human element. AI has none of this, and this is something that our kids desperately need, as they are often ostracized by society.
- Autonomy and Control: As AI systems become more autonomous and capable of making decisions without human intervention, questions arise regarding who bears responsibility for their actions. Issues of liability and control over AI systems raise concerns about their ethical use, especially in critical domains like autonomous vehicles or healthcare. Establishing clear guidelines for the autonomy and control of AI systems is essential to prevent unintended consequences and ensure accountability. The case law for AI and IEPs just isn’t there yet, but it will be interesting to see how it plays out as it pertains to IEPs.
- Security Risks and Malicious Use: AI technologies can be exploited for malicious purposes, including cyberattacks, misinformation campaigns, and surveillance. Adversarial attacks can manipulate AI systems to produce incorrect outputs or undermine their functionality. Addressing security risks associated with AI requires robust cybersecurity measures, ethical guidelines for AI development and deployment, and international cooperation to mitigate potential threats.
The Role of AI in IEP Development
IEPs are a beast to write. Even with the best and most efficient software, they are extremely time-consuming.
It’s no wonder that an extreme shortcut like AI would be attractive.
AI can support the IEP development process by automating certain tasks, such as data entry and analysis, and providing insights into student performance. For example, AI can help identify patterns in student behavior and performance, which can inform the development of appropriate goals and services in the IEP.
However, while AI can support the IEP development process, it is important to note that AI is not a substitute for the vital role of teachers and their professional judgment.
AI should be viewed as a tool to support teachers in their work, rather than a replacement for human decision-making.
I have spoken with many parents who feel that using an IEP Goal Bank is unethical! Can you imagine if those same parents find out their IEP is written by AI?! For the record, I don’t think using an IEP goal bank is unethical. When used properly it’s a huge time saver, which is why I keep one on this site.
Indicators of AI Authorship
Parents may wonder if their child’s Individualized Education Program (IEP) was written by artificial intelligence (AI) instead of a human.
While AI can be a useful tool for generating IEPs, it is important for parents to know if their child’s IEP was written by AI so they can be aware of any potential limitations or errors.
Here are some indicators that may suggest an IEP was written by AI:
Language Patterns and Consistency: One indicator of AI authorship is the language patterns and consistency of the IEP. AI-generated IEPs may use similar language patterns and phrasing throughout the document, which can make the content seem repetitive or formulaic. In contrast, human-written IEPs may be more varied in their language use and tone. Parents can look for patterns in the language and phrasing used in their child’s IEP to determine if it was generated by AI.
Side note: For sh!ts and giggles, I used AI to help write this article. Can you tell which paragraphs are mine and which aren’t?
Complexity of Content: Another indicator of AI authorship is the complexity of the content in the IEP. AI-generated IEPs may be more simplistic in their language use and content, while human-written IEPs may be more nuanced and detailed. Parents can look for the level of detail and complexity in their child’s IEP to determine if it was generated by AI.
Sniff Test: Is your IEP individualized? Does it feel vastly different from previous IEPs? Certainly different teachers have different writing styles that can create a different feel. This is completely unscientific, but everything that I’ve gotten out of AI has a weird “feel” to it. It doesn’t feel or sound like something a human would say. It says things that humans just don’t say. It repeats itself, often. And every closing paragraph seems to start with “in conclusion...”
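For readers who like to tinker, below is a hypothetical little sketch of that “it repeats itself” check. The repeated_phrases helper is something I made up for illustration, not a real detection tool: you would paste a chunk of the IEP’s text into it, and it lists three-word phrases that keep recurring. Treat it as a curiosity, not a verdict on whether the document was AI-written.

```python
# A hypothetical sketch of the "repeats itself, often" sniff test. It counts
# how often each three-word phrase appears in a block of text. This is an
# unscientific heuristic, not proof that a document was AI-written.
import re
from collections import Counter

def repeated_phrases(text: str, phrase_length: int = 3, min_count: int = 3):
    """Return phrases of `phrase_length` words that appear at least `min_count` times."""
    words = re.findall(r"[a-z']+", text.lower())
    phrases = [" ".join(words[i:i + phrase_length])
               for i in range(len(words) - phrase_length + 1)]
    counts = Counter(phrases)
    return [(phrase, count) for phrase, count in counts.most_common()
            if count >= min_count]

# Example usage with placeholder text; you would paste the IEP's text instead.
sample = ("In conclusion, the student will meet the goal. "
          "In conclusion, the student will improve. "
          "In conclusion, the student will make progress.")
for phrase, count in repeated_phrases(sample, min_count=3):
    print(f"{phrase!r} appears {count} times")
```

A human-written IEP will repeat some phrases too (goal language is formulaic by nature), so a high count only tells you to read more closely, not that a machine wrote it.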
So now what?
I see there being two main issues here: the fidelity of the IEP, and your child’s privacy.
These should be addressed differently.
Parents who suspect that their child’s IEP was written by AI can take steps to verify the source of the document.
Two effective methods for confirming the origin of an IEP include inquiring with school officials and checking documentation.
One way for parents to determine if their child’s IEP was written by AI is to inquire with school officials. Parents can ask school administrators, special education teachers, or other relevant staff members if AI was used to create their child’s IEP. I don’t know, given what I’ve seen over the past 15 years, that I’d expect an honest answer.
I know that sounds jaded, but I’ve been lied to too many times by schools. I don’t expect a client to receive an email with, “Oh yes! We did use AI to write the IEP. Like it?”
If school officials confirm that AI was used to create the IEP, parents can ask for more information about the specific AI technology used and how it was integrated into the IEP development process.
For the fidelity issue, stay the course with the IEP process. Regardless of the language in the IEP or how it was written, the IEP process remains the same. Use the tools you have available to you–your parent concerns letter, procedural safeguards and so on.
But, let’s address the privacy thing. You suspect AI was used to write your child’s IEP, and therefore their specific (and protected!) information may have been exposed.
I wish I had better answers, but all I have is this. Read up on FERPA. HIPAA does NOT apply here. Consider talking to an IEP attorney. The case law and statutes just have not caught up to this issue yet. It’s too new.
Checking Documentation
Another way for parents to verify the source of their child’s IEP is to check the documentation. Parents can review the IEP and look for any indications that AI was used in its creation. For example, if the IEP was generated using an AI-powered IEP writing tool, there may be a disclaimer or other statement indicating this on the document.
In addition, parents can ask for copies of any documents related to the IEP development process, such as meeting minutes or notes from special education teachers. These documents may provide additional information about the use of AI in creating the IEP. In some cases, this may require a FERPA request, so proceed with caution.
By verifying the source of their child’s IEP, parents can gain a better understanding of how the document was created and ensure that it accurately reflects their child’s needs and abilities.
Implications of AI-Written IEPs
While AI can offer assistance throughout the IEP writing process, it raises some legal and ethical considerations.
Legal and Ethical Considerations
According to the Individuals with Disabilities Education Act (IDEA), IEPs must be developed by a team of individuals that includes parents, teachers, and other professionals.
The use of AI in the IEP writing process raises questions about whether the team is truly involved in the development of the IEP. Furthermore, AI-written IEPs may not meet the legal requirements of IDEA, which requires that IEPs are “individually designed to meet the unique needs of a child with a disability.”
Additionally, AI-written IEPs may raise ethical considerations. For example, if an AI algorithm is used to generate goals and objectives, it may not take into account the student’s individual strengths, weaknesses, and interests. This could result in cookie-cutter IEPs that do not meet the unique needs of each student.
Impact on Personalization
One of the key features of an IEP is that it is designed to meet the unique needs of each student. However, the use of AI in the IEP writing process may impact the level of personalization in the IEP. For example, an AI algorithm may not be able to take into account the student’s social and emotional needs, which are critical to their success in the classroom.
AI-written IEPs may not be as flexible as those written by a human. If a student’s needs change throughout the year, an AI algorithm may not be able to adjust the IEP accordingly. This could result in a student not receiving the support they need to succeed in the classroom.
While AI has the potential to streamline the IEP writing process, it also raises legal and ethical considerations and may impact the level of personalization in the IEP.
Human vs. AI IEP Writing
When it comes to writing Individualized Education Programs (IEPs), there are two main methods: human writing and AI writing. Both methods have their own advantages and disadvantages, and it is important for parents to understand the differences between them to determine if their child’s IEP was written by AI.
Benefits of Human Touch
Human writing has the advantage of a personal touch. A human writer can take into account the unique needs and strengths of each individual student, and can tailor the IEP to fit their specific situation.
Human writers can also use their experience and judgment to make decisions that may not be easily quantifiable, such as the appropriate level of support for a student.
Parental Involvement and Advocacy
Parents play a crucial role in the development of their child’s Individualized Education Program (IEP). It is important for parents to be involved in the process to ensure that the IEP accurately addresses their child’s needs. This involvement can also help ensure compliance with regulations and laws related to special education.
One way for parents to ensure that their child’s IEP was not written by AI is to review the document for accuracy and compliance. Parents should review the IEP to ensure that it accurately reflects their child’s needs, strengths, and weaknesses. They should also ensure that the IEP includes all necessary components, such as goals, objectives, and accommodations.
Parents should also be aware of the laws and regulations related to special education. This knowledge can help them ensure that the IEP is in compliance with these laws and regulations. For example, parents should be aware of their child’s right to a free and appropriate public education (FAPE) and their right to participate in the development of the IEP.
If a school didn’t tell you that they used AI, is that really meaningful parent participation?
If you want more information or online training, I have the links below.
Did you read a sentence or paragraph in this essay and think, “That sounded weird?” Yes! I used AI to assist me. So, if it read or sounded awkward to you, imagine how a parent is going to feel if it’s their IEP.
IEP Tools for You
- IEP Toolkit for Parents
- IEP Toolkit for Teachers
- Online Training for Parents
- Online Training for Professional Advocates
Email us if you have any questions about these products.
Frequently Asked Questions
What are the signs that an IEP might have been generated by AI?
There are several signs that an IEP might have been generated by AI. For example, if the language used in the IEP is too technical or formal, it might be a sign that AI was involved in the writing process. Additionally, if the IEP is missing specific details about the student’s strengths and weaknesses, or if it includes generic goals and objectives, it could be an indication that AI was used to generate the content.
Can you spot the use of AI in writing an IEP with online tools?
It can be difficult to spot the use of AI in writing an IEP with online tools. However, some online tools use natural language processing (NLP) algorithms to generate content that sounds like it was written by a human. If the language used in the IEP seems too polished or if it lacks emotion, it might be a sign that an online tool was used to generate the content.
Are there specific indicators that an AI was used for writing educational plans?
The indicators are the same as the signs described above: overly technical or formal language, missing details about the student’s strengths and weaknesses, and generic, cookie-cutter goals and objectives that are not individualized to the student.
What methods can detect AI-generated content in school documents?
There are a few methods that may help flag AI-generated content in school documents, though none are reliable. Some plagiarism-detection services now include AI-writing detection features, but these produce both false positives and false negatives. Another method is to compare the content of the document to other documents known to have been generated by AI, looking for similarities in language and structure.
Is there a way to confirm the authenticity of an IEP against AI authorship?
There is no foolproof way to confirm the authenticity of an IEP against AI authorship. Educators and parents can compare the language used in the IEP to other documents known to have been generated by AI, looking for similarities in language and structure, or run the text through an AI-writing detector, but those tools can be wrong in both directions.