Understanding Educational Technology #4: AI in Education
- A.J. Ridenour

- Jun 15
- 4 min read
This is my final required blog post, though definitely not my last. I want to talk about how my understanding of AI in education evolved substantially over the five weeks of the educational technology class, which ends the day of this post.
Digital Citizenship
To understand my stance on AI in education, you need to have a grasp on digital citizenship. Being a digital citizen is as complex as being a citizen in the real world. There are as many unspoken, obscure, generally overlooked, or just entirely ignored rules as in the world outside of screens. The ISTE Standards, a framework for education widely adopted across the US, speak directly to educators, calling them to be citizens in the digital world. What this means is addressed in ISTE Standards for Educators 2.3, Citizen. Within are four sub-standards:
Create Positive Experiences
Evaluate Resources for Credibility
Model Safe, Legal, Ethical Practices
Manage, Protect Data
These standards seem easy to understand, except for #3. What are safe, legal, and ethical practices online? Safety is straightforward: keep others safe through examples they can follow. Legal seems easy too. Read the law and don't break it, right? Ethical practices are trickier. AI is what muddies the water for both legal and ethical practice.
AI: Legal & Ethical Practices
This is one of the things that stunted my initial interest in AI. I didn't know what was legal about using it: there were all of these cases where AI systems allegedly broke laws by using others' works, and I didn't want to stumble into that whole mess. Then there is the matter of using AI to create art, both written and visual. Is it art? I am not here to answer that question, but I will give my perspective on AI use in the classroom, and I may give my perspective on AI and art in a future blog post about my creative works as an aspiring author.
AI is a reality of the world that is not going to go away, which means it needs to be addressed in the classroom. For assignments that are made digitally, or that have any aspect done on a device with internet access, how, if, when, and why students can or cannot use AI needs to be spelled out. There should also be steps to keep students accountable. AI is a powerful computation, synthesis, and even research tool that is becoming more and more reliable.

Early in my educational technology course, an assignment explicitly called for the use of Anthropic's AI, Claude. Before this I had experimented with AIs like ChatGPT and MagicSchoolAI. Those always seemed lacking in one way or another, with accuracy and tool limitations being the most glaring deficiencies. Claude was different. It is designed to research, helping ensure accuracy, and it answers largely by generating code, meaning it can produce more than a rigid set of products. I became hooked. I needed to set a limit for myself, beyond the ones my instructors set, to use it only to help refine ideas, lest I slide down a slippery slope of reliance. That reliance would shrink my knowledge base, because I wouldn't actually have to know, retain, or learn what I was trying to produce a product about.
I had done light reading about AI before, but it was not nearly as comprehensive as the readings that came later in this class, which left me in a dilemma. So I came up with a simple rule for myself:
AI is a tool for idea organization, outlining, and brainstorming, not a tool to expand my ideas.
This worked for me because AI could easily expand my ideas beyond the scope of my understanding, even without my directly prompting it to. The rule still allowed me to take a lot of the extraneous mental load off of myself, which is exactly the essential use of machines and technology that Niederhauser described, as I discussed in my first post in this series. I wasn't sacrificing my ability to learn, comprehend, and apply information, and I wasn't sacrificing my voice and ownership of a final product. Is it really my work if I just plugged a prompt into a machine? I will definitely dive deeper into that in a post about creative works.
All of this thinking lined up well with the readings we were eventually presented. An accessible version of these readings is "The Ethical Framework for AI in Education," which thoroughly outlines the use of AI in every aspect of education. The two major points I made above line up well with this framework.
Conclusion
AI must be a part of the classroom because it is a part of society now. To ensure ethical use of the tool, how AI can be used in digital assignments must be explicitly taught and outlined for students. AI is a great tool for offloading extra mental load, but not for replacing actual thinking and synthesis of information (unless that is the point of an assignment). Care should be taken to check the accuracy of information given by AI, and the use of AI in final products should be disclosed.
Thank you for reading.


