Rethinking AI in STEM: Reflections on the Ethics of Emerging Technologies
- Lisa Knight
- Aug 1
- 3 min read
Recently, I had the opportunity to attend a workshop at Michigan State University titled AI Ethics in STEM—and it was truly transformative. As someone who works at the intersection of teaching, learning, and technology, I’ve been exploring how artificial intelligence can enhance student engagement and deepen understanding. But this workshop reminded me that before we design with AI, we must first learn to think critically about it.
Understanding the Ethical Landscape
Our conversations began with a fundamental question: What does it mean to use AI ethically in education and research?
We examined how large language models (LLMs) like ChatGPT are trained: on vast datasets that include publicly available information and, sometimes, content created without explicit consent. This raised important issues of data privacy, intellectual property, and bias.
One takeaway that resonated with me was this:
“AI systems reflect the data they are trained on—so if that data carries bias, the model will too.”
For STEM educators, this isn’t just a theoretical concern. It impacts how algorithms predict disease outcomes, how autonomous systems are designed, and how research data is analyzed. The ethical use of AI requires awareness, transparency, and a commitment to equity.
Teaching STEM Students to Think Ethically About AI
One of the most compelling discussions centered on how we can prepare students, not just to use AI, but to question it.
In STEM disciplines, students are often trained to seek “right answers.” But AI challenges that mindset. Models can produce convincing but incorrect information, or reflect societal biases that distort truth. Our role as educators is to create spaces where students can explore questions like:
How do I verify the accuracy of AI-generated content?
What are the risks of relying on predictive algorithms in healthcare or engineering?
How do ethics and empathy shape technological innovation?
By framing ethics as an integral part of scientific reasoning, we help students become both competent and conscientious scientists, engineers, and healthcare professionals.
Designing Ethically Responsible Learning Experiences
The workshop also inspired me to rethink how I design AI-integrated learning activities. Instead of treating AI as a shortcut, I began to see it as a conversation partner, one that students must engage with thoughtfully and responsibly.
Some emerging ideas include:
Asking students to compare AI-generated explanations with verified sources.
Including AI reflection statements where learners describe how they used AI tools and evaluate their reliability.
Developing projects that ask students to explore bias in datasets or algorithms relevant to their discipline (a rough sketch of such an activity appears below).
These activities move students beyond passive use and into ethical inquiry, where they learn that technology is never neutral—it reflects human choices and values.
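To make the last idea a bit more concrete, here is a minimal sketch of what a first bias-exploration exercise could look like, written in plain Python with no external libraries. The toy records, the group labels, and the "disparate impact" rule of thumb are all assumptions chosen for illustration; the point is simply to give students something small and inspectable to question, not a definitive fairness audit.

```python
# A minimal classroom sketch: compute per-group approval rates from a tiny
# synthetic dataset and a simple "disparate impact" ratio as a first-pass
# fairness check. The data and groups below are invented for illustration.
from collections import defaultdict

# Hypothetical records: (group, model_decision), where 1 = approved, 0 = denied.
records = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0), ("B", 1),
]

totals = defaultdict(int)
approved = defaultdict(int)
for group, decision in records:
    totals[group] += 1
    approved[group] += decision

# Approval rate for each group.
rates = {g: approved[g] / totals[g] for g in totals}
for g, rate in sorted(rates.items()):
    print(f"Group {g}: approval rate = {rate:.2f}")

# Disparate impact ratio: lowest group rate divided by highest group rate.
# A common (and debatable) rule of thumb flags ratios below 0.8 for discussion.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate impact ratio (lowest/highest) = {ratio:.2f}")
```

An exercise like this deliberately keeps the mathematics trivial so the conversation can focus on the harder questions: Where did these decisions come from? Who chose the groups? What would it mean, and what would it cost, to "fix" the disparity?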
A Call for Ethical Leadership in STEM Education
The future of STEM will be shaped not only by how we innovate but by why we innovate. As educators, we have the responsibility to ensure that the next generation of scientists, engineers, and technologists understands both the power and the pitfalls of AI.
Ethical literacy must become as essential as data literacy. When we teach students to think critically about the technologies they use, we equip them to lead responsibly in a world increasingly shaped by algorithms.
Attending the AI Ethics in STEM workshop at Michigan State University reinforced something I’ve always believed: education should not only prepare students for the workforce; it should also prepare them to make wise, compassionate choices in a complex, interconnected world.