As instructional staff, many of us are exploring ways to integrate AI tools, such as custom chatbots and generative AI frameworks, into our courses. These tools can enhance efficiency, creativity, and engagement, but they also carry risks. Recent research from Microsoft and Carnegie Mellon highlights a troubling trend: over-reliance on AI can erode critical thinking skills, especially when students accept outputs without scrutiny.
So, how do we strike the right balance? Here are some strategies we’ve been discussing:
- AI as a Catalyst for Inquiry: Design assignments where students must critique or improve AI-generated outputs.
- Transparency in Use: Require students to disclose how they’ve used AI and evaluate its limitations.
- Socratic Prompts: Use AI to simulate debates or flawed arguments that challenge students to think critically.
At our college, we’re piloting frameworks that combine these approaches while developing policies to guide responsible AI use. But this is just the beginning, and we need insights from across the community!
💡 How are you using AI in your teaching? What strategies are you employing to ensure it enhances, rather than replaces, critical thinking?
Let’s collaborate and share ideas below! 👇 #AIinHigherEd #CriticalThinking