When the first big AI tools became public, one of the first things I did was join a handful of Facebook groups about AI in education. I wanted a quick, noisy sense of what people actually do and what they fear. It felt like walking into a crowded hallway between conference sessions, with excitement, outrage, resignation, and some careful thinking mixed together. In that mix I began to notice one pattern of pushback that worried me more than any privacy or cheating concern.
The pattern sounded like this: "I tried AI in my class; it did not work; therefore it cannot work." Sometimes it was softer, something like "If I cannot figure out how to use this in a responsible way, then there is no responsible way." This is a classic fallacy, the argument from personal incredulity. In plain language, it is the belief that if a solution is not obvious to me, then no solution exists. Many academics would tear this argument apart in a student paper, yet some repeat it when the topic is AI in teaching.
In higher education this fallacy feeds on a deeper habit. Most faculty members think of themselves as experts in teaching. We earned doctorates, we lectured for years, we survived student evaluations. It feels natural to believe we figured teaching out along the way. Yet teaching is a complex problem, shaped by cognitive science, sociology, technology, and institutional constraints. Being good in a discipline does not automatically make one an expert in this kind of complexity. Classroom experience is valuable, but it is not a substitute for engagement with the accumulated knowledge about how people learn.
AI exposes that gap very quickly. The first time someone asks a chatbot to write an essay and the result looks like a B minus paper, the temptation is to generalize. "Well, that kills writing assignments." The first time a student cheats with AI, the next step appears just as obvious. "Well, that kills academic honesty." After two or three such impressions it is easy to feel that one has seen enough. In reality, one has seen a tiny, biased sample at the worst possible moment, when one knows the least.
By now there is a growing body of research, along with practical accounts, of successful AI integration in teaching and learning. Colleagues document AI-supported feedback cycles that help students revise more often. Others describe using AI to model thinking aloud or to simulate peer critique. On social media, teachers share specific prompts, assignment designs, and policies that reduce cheating and increase authentic work. This is no longer a complete mystery. There are patterns, lessons, and tested strategies. We just do not see them if we stare only at our own classrooms.
The big-picture work starts one level higher than tools and tricks. It starts with deconstructing a course all the way down to its learning outcomes. What exactly are students supposed to know and be able to do in a world where AI is a normal part of knowledge work? Rather than trying to defend old outcomes against AI, we can revise those outcomes to include AI-related competencies, such as prompt design, critical evaluation of AI output, and collaboration with AI on discipline-specific tasks. Once the outcomes fit the new world, we can reconstruct the course upward, aligning readings, activities, assignments, and assessments with those updated aims. At that point AI is no longer an intruder. It becomes part of what students are explicitly learning to handle.
This is what I mean by a scholarly stance toward teaching. We already know how to do this in our research lives. We begin with a question, look for existing literature, notice methods and results, then run small experiments of our own and compare our findings to what others report. AI in education can be approached in the same way. Before banning or fully embracing anything, we can read a few recent studies, scan what thoughtful practitioners report, try a limited pilot in one course, and gather data that goes beyond a couple of loud student comments.
Some colleagues tell me they do not have time for this. I believe them. The workload in higher education is often absurd. Yet we would never accept "I do not have time" as a reason to ignore scholarship in our own disciplines. No historian would proudly say, "I just ignore recent work and go with my gut." No chemist would say, "I saw one failed experiment with a new method, so the method is impossible." When we treat teaching as exempt from scholarly habits, we send a message that the learning of our students is somehow less real than our research.
What worries me most is not moral panic about AI or even poorly designed bans. It is the quiet decision by thoughtful people to stop at "If I cannot see the solution, it does not exist." In research we teach students to distrust that move. We tell them to assume that someone, somewhere, has thought hard about the same problem. We tell them to read, to test, to revise. When it comes to AI and teaching, we owe our students the same discipline. The fact that I cannot yet see how to integrate AI well is not proof that nobody can. It is only proof that I am not done learning.

