
Wednesday, January 28, 2026

If You Cannot Solve AI in Your Classroom, That Is Not Proof It Cannot Be Solved

When the first big AI tools became public, I immediately joined a handful of Facebook groups about AI in education. I wanted a quick, noisy sense of what people actually do and what they fear. It felt like walking into a crowded hallway between conference sessions, with excitement, outrage, resignation, and some careful thinking mixed together. In that mix I began to notice one pattern of pushback that worried me more than any privacy or cheating concern.

The pattern sounded like this: "I tried AI in my class, it did not work, therefore it cannot work." Sometimes it was softer, something like "If I cannot figure out how to use this in a responsible way, then there is no responsible way." This is a classic fallacy, the argument from personal incredulity. In plain language it is the belief that if a solution is not obvious to me, then no solution exists. Many academics would tear apart this argument in a student paper, yet some repeat it when the topic is AI in teaching.

In higher education this fallacy feeds on a deeper habit. Most faculty members think of themselves as experts in teaching. We earned doctorates, we lectured for years, we survived student evaluations. It feels natural to assume that we figured out teaching along the way. Yet teaching is a complex problem, shaped by cognitive science, sociology, technology, and institutional constraints. Being good in a discipline does not automatically make one an expert in this kind of complexity. Classroom experience is valuable, but it is not a substitute for engagement with the accumulated knowledge of the field.

AI exposes that gap very quickly. The first time someone asks a chatbot to write an essay and the result looks like a B minus paper, the temptation is to generalize. "Well, that kills writing assignments." The first time a student cheats with AI, the next step appears just as obvious. "Well, that kills academic honesty." After two or three such impressions it is easy to feel that one has seen enough. In reality, one has seen a tiny, biased sample at the worst possible moment, when one knows the least.

By now there is a growing body of research and practical accounts of successful AI integration into teaching and learning. Colleagues document AI-supported feedback cycles that help students revise more often. Others describe using AI to model thinking aloud or to simulate peer critique. On social media, teachers share specific prompts, assignment designs, and policies that reduce cheating and increase authentic work. This is no longer a complete mystery. There are patterns, lessons, and tested strategies. We just do not see them if we stare only at our own classroom.

The big-picture work starts one level higher than tools and tricks. It starts with deconstructing a course all the way down to its learning outcomes. What exactly are students supposed to know and be able to do, in a world where AI is a normal part of knowledge work? Rather than trying to defend old outcomes against AI, we can revise those outcomes to include AI-related competencies, such as prompt design, critical evaluation of AI output, and collaboration with AI in discipline-specific tasks. Once the outcomes fit the new world, we can reconstruct the course upward, aligning readings, activities, assignments, and assessments with those updated aims. At that point AI is no longer an intruder. It becomes part of what students are explicitly learning to handle.

This is what I mean by a scholarly stance toward teaching. We already know how to do this in our research lives. We begin with a question, look for existing literature, notice methods and results, then run small experiments of our own and compare our findings to what others report. AI in education can be approached in the same way. Before banning or fully embracing anything, we can read a few recent studies, scan what thoughtful practitioners report, try a limited pilot in one course, and gather data that goes beyond a couple of loud student comments.

Some colleagues tell me they do not have time for this. I believe them. The workload in higher education is often absurd. Yet we would never accept "I do not have time" as a reason to ignore scholarship in our own disciplines. No historian would proudly say, "I just ignore recent work and go with my gut." No chemist would say, "I saw one failed experiment with a new method, so the method is impossible." When we treat teaching as exempt from scholarly habits, we send a message that the learning of our students is somehow less real than our research.

What worries me most is not moral panic about AI or even poorly designed bans. It is the quiet decision by thoughtful people to stop at "If I cannot see the solution, it does not exist." In research we teach students to distrust that move. We tell them to assume that someone, somewhere, has thought hard about the same problem. We tell them to read, to test, to revise. When it comes to AI and teaching, we owe our students the same discipline. The fact that I cannot yet see how to integrate AI well is not proof that nobody can. It is only proof that I am not done learning.



Saturday, January 24, 2026

Why Does AI Feel Like Freedom To Me, Not A Threat To Learning?

I realized early that I am one of those people who love working with AI. The reason is simple: I have always hated routine. For years in higher education administration I tried to automate any small piece of work that I could. I was the person who wrote macros and ran Mail Merge in Word when most colleagues did not know those features existed (many still do not). I liked thinking about problems and working with people. I did not like documenting those problems afterward.

Administrative work in universities is full of documents that must look serious and official. Strategic plans, accreditation reports, assessment summaries, memos, program reviews. These texts are often written in a careful, dry tone that tries to sound objective and important. For me those documents felt like a toll road that I had to pay to get to the interesting parts of the job. I wanted to talk with faculty, students, and community partners about real issues. I wanted to design new programs and rethink old ones. Instead I spent hours polishing language that almost nobody planned to read.

When large language models became usable, something in me clicked. Within days I was intellectually convinced that this technology would matter for education. But I also had a powerful emotional reaction. This felt like a long-delayed form of liberation. A big piece of my working life that had always felt wasted could now be reclaimed for thinking and for human interaction.

I do not experience AI as a threat to meaning or to craftsmanship. I experience it as an assistant that removes the crust from the work. I still need to decide what the document should say, who it is for, what matters and what does not. But I do not need to fight with first drafts, with standard phrases, or with the sheer volume of institutional writing. AI gives me more time for the activities I actually value: creative thinking, brainstorming, problem solving, building theory, and talking with real people.

This personal relief also made me more cautious about judging other people's reactions. Our field often talks about AI as if there is a single rational stance to take. In reality, psychotypes matter. Some people are wired to enjoy the very things that I dislike. They take pleasure in the slow craft of sentence building. They value the aesthetics of a beautiful paragraph, the elegance of a careful transition, the feeling of a page that has no awkward phrase anywhere. The process itself is rewarding for them, not just the outcome. These are not shallow preferences. They reflect different theories of what work should feel like and what makes it meaningful.

There are also people for whom accuracy and authority are core values. They want information to be correct, checked, and stable. They trust texts that feel final. For them, any minor error, even a typo, can cast doubt on the whole product. When they look at AI, they see a tool that produces fluent but sometimes wrong text, and that feels deeply unsafe. The idea that something could sound confident and still be mistaken violates their sense of how knowledge should be handled. Their resistance grows from a coherent set of commitments about what scholarship requires.

My preferences run in a different direction. I care more about speed, access, and the flow of ideas than about perfect reliability at the sentence level. I was an early fan of Wikipedia for exactly this reason. I liked the fact that I could reach a reasonable overview of almost any topic in seconds, even if I knew that some entries had gaps or errors. Many colleagues still treat Wikipedia as second-rate. For me, being mostly correct is good enough, as long as I can see interesting ideas and follow references further if needed. This is a different epistemology, not a careless one.

AI feels like an extension of that tradeoff. It gives me fast, flexible text that I can shape, question, and rebuild. I do not expect it to be right in every detail. I expect it to help me think faster and wider. What I cannot easily get without AI is a steady partner that never gets tired of drafting, revising, or trying a different structure for the tenth time. The machine does not care how many times I change my mind. That patience has real value.

In one of my earlier reflections I argued that doing a task very well does not prove that the task itself is worthwhile. AI has pushed that point closer to home. Many academic and administrative texts are produced with great skill, but the value of that effort is not always clear. If a machine can now produce a comparable draft in seconds, it becomes easier to ask what exactly we are adding with our human labor, and whether we want to spend our limited time there. This is an uncomfortable question for professions that have built identity around textual competence.

The issue goes beyond individual preference. Different psychotypes produce different institutional cultures. Organizations dominated by people who value routine and formal documentation will resist AI more strongly than organizations where improvisation and speed are prized. These cultural differences shape what counts as quality, what gets rewarded, and who advances. When AI enters the picture, it does not just change tools. It shifts the balance of power between competing visions of professional life.

When we think about AI in education, we should factor in these differences in temperament. Policy debates tend to focus on abstract risks and benefits. Underneath those arguments lie different ways of relating to work, to text, and to uncertainty. People who experience AI as freedom will advocate for rapid adoption and experimentation. People who experience it as erosion of craft or trust will ask for limits and safeguards. Both groups have a piece of the truth. The challenge is to build systems flexible enough to accommodate both without forcing everyone into the same mold.

Any serious conversation about AI in education has to make space for multiple stories and for the many shades in between. What we cannot afford is to pretend that these differences are merely technical, or that one good argument will settle the matter for everyone. The stakes are partly emotional, partly philosophical, and deeply tied to how we understand the purpose of our work.




Wednesday, January 21, 2026

How Much Human Input Is Enough? The Irreplaceable Criterion

I cannot tell you exactly how much of yourself you need to put into AI-assisted writing. I know this measure exists, but I cannot formalize it. I know it exists because I feel its absence. When I provide too little input, something tightens in my chest. A small discomfort. Not quite guilt, not quite fraud, but a sense that I have crossed a line I cannot name.

This happens with different intensity across different tasks. When I merge existing documents into a report, barely any hesitation. When I draft a recommendation letter from a student's CV and my brief notes, a slight unease. When I consider having AI expand a scholarly outline without my detailed argument, real resistance. That produces drafts I never release. The feeling scales with something, but what?

We all have these intuitions, these moments of knowing we have not done enough. But we cannot teach intuitions. We cannot build professional standards on personal discomfort. Students ask how much they should write before turning to AI. Colleagues wonder whether their process is ethical. We respond with vague guidance about "meaningful engagement" and "substantial contribution." These phrases point at something real but fail to grasp it.

The very existence of our hesitation suggests a threshold. We worry because some minimum actually matters. If AI could handle everything, or if everything required full human composition, we would have no decisions to make. The anxiety comes from occupying the middle ground, where we must judge how much is enough. But enough for what?

Perhaps this: whatever cannot be recovered from existing sources must come from you. Call it the irreplaceable input. It is the information, judgment, or observation that exists nowhere except in your direct knowledge or thinking.

A recommendation letter makes this concrete. The student's resume lists accomplishments. Their statement describes goals. Their transcript shows grades. All of this sits in documents anyone could read. Your irreplaceable input is what you observed directly. How they engaged in discussion. Their growth across a semester. The specific moment they demonstrated insight or character. Provide these observations and AI can shape them into proper letter format. Skip them and AI generates hollow praise that could describe anyone. The discomfort you feel comes from knowing the difference.

Data reports work differently. Three documents contain survey results, budget numbers, timeline details. You need them merged into one report. The facts already exist in writing. Your irreplaceable input is minimal: the purpose of combining them, perhaps, or the audience who needs the result. The rest is organizational labor. Your conscience stays quiet because the substance was already captured. You are not replacing your knowledge with AI's generation. You are using AI to restructure what already exists.

Scholarly writing demands far more. Yes, you can point AI toward existing literature. But the conceptual architecture must come from you. Why these sources matter together. What tension their combination reveals. Which question they help answer. AI can summarize sources. It cannot know which summary serves your argument because it does not have your argument. Your irreplaceable input is the entire intellectual structure: the problem you saw, the gap you identified, the synthesis you propose. Without this, AI produces competent prose organized around nothing in particular.

Even routine emails carry irreplaceable elements. The basic facts seem obvious enough. You need to reschedule a meeting. You want to decline an invitation. But relationship context belongs only to you. Whether this is the third reschedule. Whether you are writing to your supervisor or your student. What tone maintains trust given your history with this person. AI works from patterns observed across millions of messages. You work from direct knowledge of this particular person in this particular situation.

The criterion is not about effort or time spent. Merging three documents might consume two hours of tedious work but require minimal thought. Articulating your core scholarly insight might take ten minutes but represent six months of reading and thinking. The measure is what could be reconstructed without you. If another person could assemble the same material from available sources, you have not yet contributed what only you can contribute. If your specific knowledge, observation, or judgment is required, you have met the threshold.

This does not solve every question. How detailed must a scholarly outline be? How many observations make a recommendation sufficient? But it provides a starting point: What am I adding that exists nowhere else? What would be lost if I were removed from this process?

We recognize the threshold by its violation. The letter that sounds generic. The article that demonstrates competence but lacks insight. The message that gets the facts right but the tone wrong. Something missing even when format is correct. That absence marks where irreplaceable input should have been.

The irreplaceable portion need not be large. Three sentences about a student might become one paragraph in a two-page letter. A conceptual framework might occupy two pages in a twenty-page article. But these elements carry the weight that makes the rest meaningful. Remove them and the structure becomes simulation. Keep them and AI serves as a genuine assistant.

This is what we need to identify: the core only we can provide. Not the largest part, not necessarily the hardest part, but the part that requires us to have been there, to know something, to have thought something through. Everything else is real work, but it is work that can be done by pattern. The irreplaceable input requires presence, knowledge, judgment. It requires us.


Friday, January 9, 2026

My Class Companion in Action: What Learning Looks Like Now

In preparation for the Spring semester, I updated my class companion bot and tested it. The bot is better than last semester's version because I have embedded a better knowledge base into it and written better behavior instructions. It outperforms the common practice of assigning students a chapter to read, perhaps discussing it in class, and then administering a final exam. By no means can it replace classroom instruction. In the transcript below, I play the role of a student. My messages appear as "You said."

Thursday, January 8, 2026

AI Can Boost Creativity

An original idea is not the same thing as a communicable original idea. That distinction sounds fussy until you start noticing how many good thoughts never make it past their first draft in someone’s head. Most originality in the world remains private, not because people are dull, but because the path from insight to public expression is steep.

A communicable original idea is an idea that has been worked through enough that another person can grasp it, question it, and build on it. It has examples. It has boundaries. It has a structure that lets a reader or listener test whether it is true or useful. That structure is not cosmetic. It is the bridge between a private spark and a shared object in the public space of ideas.

Two limits keep that bridge rare. The first is time. Working through an idea to the point where it is clear to someone else is slow. Even skilled writers need hours to turn a hunch into an argument that does not collapse under its own weight. The second limit is skill. Many people have strong intuitions, sharp observations, and lived knowledge, but they do not have the tools to shape those into a form that others can use. They might be brilliant in conversation and helpless on a blank page. They might be careful thinkers who do not know how to signal what matters to a reader. Their ideas are original, yet they remain trapped.

AI changes this equation in a plain way. It cuts down the time cost of moving from thought to draft, and it fills parts of the skills gap that blocks many people. That is the core claim, and it is boring in the best sense of the word. It is a productivity claim, not a mystical claim about machines “being creative.” The creative act stays human. The communicative labor can be shared.

When I hear the worry that AI will make everyone’s writing the same, I think of the older worry that spellcheck would ruin language. It did not. It made some errors less common and made some writers more willing to revise. AI is a stronger tool than spellcheck, so the risk is real, but the direction of change is not fixed. The result depends on what we ask it to do. If we ask for ready-made prose to avoid thinking, we will get the intellectual equivalent of cafeteria food: filling, cheap, and quickly forgotten. If we ask for help in shaping our thinking, we will get something closer to an assistant editor who never sleeps and never rolls their eyes.

The simplest example is idea triage. Most of us have more ideas than we can develop. Some are promising, some are noise, and many are promising but vague. AI can help sort them. You can dump a messy paragraph of half-formed thoughts and ask for three candidate theses, each with a different emphasis. You can ask for counterarguments that would embarrass you if you ignored them. You can ask for the hidden assumptions in your claim. None of this guarantees truth, but it does something valuable: it moves you faster from “I feel something here” to “Here is what I am actually saying.”

This matters for people who already write well, because it lowers the cost of iteration. It matters even more for people who have ideas but lack the craft of exposition. We often romanticize that craft, as if difficulty is proof of virtue. In practice, the craft functions like a gate. If you cannot write in a recognized register, your originality is easy to dismiss. If you do not know how to structure an argument, you might never offer it. AI can act as a translator across registers. It can turn a spoken explanation into a readable paragraph. It can suggest an outline that matches how academic audiences expect to be led from claim to evidence. It can help a bilingual thinker avoid being punished for an accent on the page.

The anxieties around AI often confuse originality with output. People see more text and assume less thought. That can be true, but it is not necessary. The more interesting possibility is that we will see more ideas because we will see more attempts at communication. Most attempts will be mediocre. That is not a crisis. The public space of ideas has always been full of mediocre attempts. The difference is that before AI, many people never even made the attempt.

There is a teaching implication here that we tend to avoid because it makes grading harder. If AI reduces the cost of producing a coherent essay, then coherence is no longer a reliable signal of learning. That does not mean students should be banned from AI. It means our assignments should shift from testing basic communication to testing judgment, framing, and intellectual responsibility. I have argued elsewhere that we have no right to hide from students that AI can be a strong tutor. The same logic applies to writing support. If the tool exists, students will use it, and some will use it well. Our job is to make “using it well” visible and assessable.

One way is to grade the thinking trail, not only the final product. Ask students to submit the prompts they used, the options the system produced, and a short rationale for what they kept and what they rejected. This turns AI from a shortcut into a mirror. Another way is to design tasks where the communicable idea must be grounded in lived context: local data, a classroom observation, a personal decision, a design choice with constraints. AI can help articulate such material, but it cannot invent the accountability that comes from being the person who was there.

There is also a relational dimension that matters to me as an educator. Communication is not only transmission. It is an invitation. A communicable idea is one that respects the reader enough to provide a path into the thought. AI can help with that respect by making revision less punishing. Many people stop revising because revision feels like failure. AI reframes revision as a normal dialogue. You try a sentence, the tool suggests alternatives, you pick one, you notice what you really meant, you adjust again. That is not cheating. That is apprenticeship, with a strange new partner.

Of course, AI can also flood the world with plausible nonsense. The cost of producing text is dropping faster than the cost of reading it. That creates a new bottleneck: attention and trust. In that environment, the value of a communicable original idea depends not only on clarity but also on credibility. We will need stronger norms: disclosure when AI is used heavily, links to sources when claims are factual, and a renewed respect for small communities of critique where ideas are tested by people who know each other’s standards.

If we take that seriously, AI does not have to be the enemy of creativity. It can be the enemy of silence. The great loss in intellectual life is not bad writing. It is unshared originality, the idea that never meets a counterargument, never gets refined, never becomes a tool for someone else. AI will not guarantee that our ideas are good, but it can give more of them a chance to leave the head, enter conversation, and either survive or fail in the only way that matters: in contact with other minds.


