Friday, March 20, 2026

Spitting Into the Wind

I watch my colleagues fight, and I understand the impulse. The desire to preserve what we built over decades of careful work comes from genuine care for learning. But much of what I see on campuses right now amounts to spitting into the wind. The effort lands back on the person making it, and the wind does not notice.

Consider the inventory. Turnitin added AI detection, and departments adopted it as a digital checkpoint. The detection is unreliable, generating false positives that punish honest students and false negatives that miss sophisticated use. Faculty end up in adversarial arguments about whether a 23% AI probability score constitutes an honor code violation. Browser-locking software was built for a world where the threat was a second tab. That world is gone. AI assistants are now embedded in operating systems, available through voice, woven into browser extensions. Locking down a browser is like reinforcing the front door while the back wall of the house is missing. 

Some faculty have turned to handwritten assignments, stripping away every advantage of digital composition in order to verify authorship. Others schedule 20- or 30-minute oral defenses for each student, a practice that collapses under its own arithmetic in a class of 35. Still others declare their classroom an AI-free zone, a principled stand that increasingly resembles teaching navigation while pretending GPS does not exist.

And then, in February, Einstein arrived: an agentic AI tool that promised to log into Canvas and complete entire courses on a student's behalf, from watching lectures to submitting assignments. The reaction was predictable. Social media erupted, faculty declared it the death of education, and within 48 hours the product was taken down after a trademark dispute over the Einstein name. Faculty breathed a sigh of relief. But the relief is misplaced. As one observer noted, the line between a flash in the pan and a harbinger of things to come is very thin. The underlying technology is open-source, improving rapidly, and replicable by anyone with modest coding skills. Einstein was a crude prototype. Its successors will not announce themselves with a viral marketing campaign.

All of these measures share a common feature. They are perimeter defenses. They try to keep AI out of an existing structure rather than asking whether the structure still makes sense.

Here is what we are avoiding. The entire curriculum, in every discipline, needs re-examination from the foundations. That means returning to learning outcomes, asking which ones still hold, which have been made trivial by AI, and which new ones have become essential. It means rebuilding assessments from those revised outcomes upward. It means redesigning courses so the process of learning, not the product, carries the educational weight.

This is enormous work, and it cannot happen in one summer workshop. It requires sustained time, structured collaboration, and genuine institutional investment. Course releases for faculty redesigning their programs. Instructional design teams embedded in departments, not available by appointment three weeks out. A clear signal from leadership that this work matters as much as research productivity or enrollment targets.

That signal has not come. University leaders have mostly treated AI as a policy question rather than a curricular one. Faculty professional associations could be leading discipline-specific conversations about learning outcomes in a post-AI landscape. Some have begun. Most have not. A conference panel on "AI and Teaching" is not a plan.

Every semester that passes with the old curriculum intact is a semester of lost opportunity. Faculty exhausting themselves with detection and enforcement could be doing the creative, difficult, rewarding work of rethinking what their courses are for. They are dedicated teachers. They simply do what people do when the ground shifts and no one offers direction. They reinforce what they know. They defend what they built.

But the wind does not care. And the longer we spend spitting into it, the less time we have to turn around and walk somewhere that leads to solid ground. 


Thursday, February 26, 2026

Why Hobbled AI Tutors Do Not Prepare Students for Real Learning with AI

I spent a good amount of time building what I thought was an ideal AI tutor for my courses. I made it carefully Socratic. I asked it to avoid direct answers, to respond with questions, to nudge students toward their own reasoning. Technically, it functioned just as I designed it. But when I used it like a student who is tired and under time pressure, the charm faded quickly. I wanted a clear explanation, and it kept giving me more questions. After another round of tuning, I tried to make it friendlier and more supportive. Students then told me that the tutor felt noisy and overwhelming. At that point I understood that I was training them to handle my special tutor, not the kind of AI they actually meet outside the class. That is a strange educational goal.

Hobbled AI tutors feel safe and ethical, but they miseducate. They train students to work with artificial constraints that disappear the moment they open a normal AI system in a browser. We act as if a restricted tool is a good stepping stone toward a more powerful one. In practice, students build habits that do not transfer. They learn that AI always refuses direct answers, always behaves in a certain tone, always follows classroom rules. Then they encounter a general model that does none of those things, and much of their practice becomes irrelevant.

This is not a new pattern. Education has long relied on simplified versions of reality. We create word problems that clean up numbers, experiments that always work if you follow the manual, and case studies that fit on two pages. Those devices lower risk and cognitive load. They provide a controlled environment where mistakes are safe and visible. The logic is understandable. A student who is still learning should not make an error that costs a patient, a client, or a company. For that reason, some version of a sandbox is necessary.

The trouble appears when we forget that the sandbox is not the field of practice itself. The rules inside a controlled environment do not match the rules in workplaces, in graduate study, or in everyday online life. In many disciplines, educators now try to bring more authentic tasks into courses, so that students face messy data, conflicting evidence, and imperfect instructions. With AI, that tension between safety and authenticity becomes sharper, because the distance between the restricted version and the real tool is very large.

Once we start to protect students by modifying AI itself, we create a peculiar hybrid. We instruct the model never to give full solutions, or to delay any concrete suggestion until several rounds of questions. We narrow its sources and formats. We insist on a very specific teacherly tone that no professional tool will ever reproduce. The model still has the power to generate complex text, but it is prevented from using that power in the ways that matter most outside class. Students feel both burdened and confused. The system is strong enough to dominate the interaction, yet weak enough to be unhelpful when they need efficiency.

I do not think the answer is to abandon scaffolding. It is to move scaffolding out of the model and into our teaching. Instead of hard technical restraints, we can offer social and cognitive guidance. We can talk about appropriate and inappropriate uses of AI for a given assignment. We can model how to break a task into steps and how to design prompts for each step. We can teach students to read AI output with the same suspicion they bring to an unfamiliar website, to check claims against other sources, and to notice when the model clearly fabricates information.

In my own courses, this leads to a split strategy. For routine classroom work, a custom AI assistant still makes sense. It can generate weekly reading lists aligned with the syllabus, create small formative quizzes, and support simple administrative tasks. Those are narrow functions where tight constraints are actually helpful, because students do not need to reuse those bots later in life. They will not need my quiz generator at work.

For substantial projects, I will now invite students to use the same broad AI tools that everyone else uses. I want them to confront vague or partial answers, and to learn how to ask for clarification. I want them to see different versions of an argument and to practice choosing which one is worth pursuing. That means teaching very specific skills. For example, how to ask the model to reveal its uncertainty, how to request alternative lines of reasoning, how to move from a generic first draft to a more precise second version, and how to document their own use of AI in an honest way.

This approach accepts that mistakes will happen. Some students will trust the model too much. Some will misread an answer. But those risks already exist when they use AI on their own phones and laptops, far away from the course platform. In that context, a hobbled classroom tutor does not protect them. It leaves them underprepared. They know how to navigate a special kind of AI that appears only inside one course, and they lack practice with the systems they actually depend on.

An AI tutor that is permanently handicapped may look safer to us as instructors, but it does not prepare students for real learning with AI. It produces clever conversations inside a narrow frame and trains habits that fail outside that frame. I would rather expose students to the real tools and walk with them through the confusion, than give them a polished imitation that vanishes as soon as the course ends.




Friday, February 20, 2026

Learning With a Machine in the Room: What Students Said After a Semester of AI-Integrated Teaching

Last semester I ran an experiment across three courses that I will call Course A, Course B, and Course C. Each course used an AI Class Companion as a constant presence rather than an occasional tool. Students interacted with it for planning, drafting, testing knowledge, and reflecting on their progress. The exit survey gives an initial picture of how students perceived that experience.

Seventy-seven students completed the survey. The headline number is straightforward. Fifty-nine students reported that they learned more than they would have in a typical class without AI support. That equals 76.6 percent of respondents. Thirty-nine selected “Somewhat Agree,” twenty selected “Fully Agree,” fifteen selected “Somewhat Disagree,” and three selected “Disagree.” These numbers suggest a strong perceived learning gain, but not unanimity.

Another important question asked whether students would take another course using an AI Class Companion. Sixty-three students agreed or fully agreed. Thirty-two chose “Fully Agree,” thirty-one chose “Somewhat Agree,” eleven chose “Somewhat Disagree,” and three chose “Disagree.” This pattern matters because willingness to repeat an experience is often a better indicator of acceptance than enthusiasm in the moment.

The strongest agreement appeared in the skills question. Seventy-two students said their AI skills increased significantly. Fifty-six selected “Fully Agree,” sixteen selected “Somewhat Agree,” three selected “Somewhat Disagree,” and two selected “Disagree.” Even students who were skeptical about learning outcomes often acknowledged growth in technical fluency.

Below is a simple summary table of the core survey items.

Survey Snapshot (N = 77)

Statement                               | Fully Agree | Somewhat Agree | Somewhat Disagree | Disagree | Agree Total
Learned more than typical course        | 20          | 39             | 15                | 3        | 59 (76.6%)
Would take another AI-supported course  | 32          | 31             | 11                | 3        | 63 (81.8%)
AI skills increased significantly       | 56          | 16             | 3                 | 2        | 72 (93.5%)
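For readers who want to check the arithmetic, here is a minimal sketch that recomputes the agree totals and percentages from the raw response counts reported above (the row labels and counts are taken directly from the survey snapshot; nothing else is assumed):

```python
# Recompute agree totals and percentages from the survey snapshot (N = 77).
N = 77
rows = {
    "Learned more than typical course": (20, 39, 15, 3),
    "Would take another AI-supported course": (32, 31, 11, 3),
    "AI skills increased significantly": (56, 16, 3, 2),
}
for statement, (fully, somewhat, somewhat_dis, disagree) in rows.items():
    # Each row of counts should account for all 77 respondents.
    assert fully + somewhat + somewhat_dis + disagree == N
    agree = fully + somewhat
    print(f"{statement}: {agree} ({agree / N * 100:.1f}%)")
```

Running it reproduces the Agree Total column: 59 (76.6%), 63 (81.8%), and 72 (93.5%).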

The numbers alone do not tell the full story. Students did not describe AI as flawless or magical. Several comments mentioned frustration when the system misunderstood context or produced shallow responses. That tension is important. The Companion was designed to provoke critique rather than passive acceptance. Many students reported that their stance toward AI changed during the semester. Early interactions focused on efficiency. Later reflections described more careful questioning and revision.

It is also important to note that the survey captures only perception. There is rich data beyond these numbers. Students generated extensive interaction logs with the Class Companion across the semester. Those logs include prompts, revisions, and moments where students corrected or challenged the system. In addition, each course produced substantial final artifacts such as research manuscripts, professional portfolios, and organizational proposals. Together, these materials provide a detailed empirical record of how learning unfolded in practice. I plan to analyze those interactions and final products separately.

One pattern that emerges from the survey is continuity. Students interacted with the Companion repeatedly rather than only at moments of difficulty. Many described returning to earlier conversations to revise ideas or test their understanding again. That continuity appears to have shaped perception of learning. Students often framed the Companion as a thinking partner that extended learning time beyond formal meetings.

At the same time, variation across responses should not be ignored. About one quarter of respondents did not agree that they learned more than in a typical course. Some learners may prefer clearer structure or less autonomy. Others may find constant interaction with AI cognitively demanding. These courses asked students to assume a high level of responsibility for their own learning process. For some students that autonomy felt empowering. For others it introduced uncertainty.

There is also a methodological concern that must be acknowledged openly. The survey results may be influenced by social desirability bias. Students may feel pressure to respond positively when a course emphasizes innovation or when AI is framed as central to the learning experience. Even though participation was voluntary and responses were anonymized after grading, the possibility of bias remains. For that reason, I treat these numbers as provisional indicators rather than definitive proof of impact.

Another interesting finding involves how students described their relationship with AI. Many said that the Companion felt supportive but non-judgmental. That framing may matter more than technical capability. When AI becomes part of the learning environment rather than an external evaluator, students appear more willing to experiment, make mistakes, and revise their thinking.

What do these numbers suggest overall? First, most students perceived increased learning and strong skill growth. Second, willingness to repeat the experience was even higher than reported learning gains. Third, skepticism and frustration remained present, which may be a healthy sign that students were not treating AI as an authority.

The experiment raises a larger question about pedagogy. AI does not automatically improve education. What matters is how courses are structured around it. When AI becomes a continuous cognitive environment, students begin to externalize drafts earlier, test ideas more frequently, and engage in iterative reflection. The exit survey captures that transition from novelty toward routine practice.

Still, I consider the main point established: the use of AI does not prevent learning.



Wednesday, January 28, 2026

If You Cannot Solve AI in Your Classroom, That Is Not Proof It Cannot Be Solved

When the first big AI tools became public, the first thing I did was join a handful of Facebook groups about AI in education. I wanted a quick, noisy sense of what people actually do and what they fear. It felt like walking into a crowded hallway between conference sessions, with excitement, outrage, resignation, and some careful thinking mixed together. In that mix I began to notice one pattern of pushback that worried me more than any privacy or cheating concern.

The pattern sounded like this: "I tried AI in my class, it did not work, therefore it cannot work." Sometimes it was softer, something like "If I cannot figure out how to use this in a responsible way, then there is no responsible way." This is a classic fallacy, the argument from personal incredulity. In plain language it is the belief that if a solution is not obvious to me, then no solution exists. Many academics would tear apart this argument in a student paper, yet some repeat it when the topic is AI in teaching.

In higher education this fallacy feeds on a deeper habit. Most faculty members think of themselves as experts in teaching. We earned doctorates, we lectured for years, we survived student evaluations. It feels natural to think that we have figured out teaching as we went along. Yet teaching is a complex problem, shaped by cognitive science, sociology, technology, and institutional constraints. Being good in a discipline does not automatically make one an expert in this kind of complexity. Classroom experience is valuable, but it is not a substitute for engagement with a knowledge field.

AI exposes that gap very quickly. The first time someone asks a chatbot to write an essay and the result looks like a B minus paper, the temptation is to generalize. "Well, that kills writing assignments." The first time a student cheats with AI, the next step appears just as obvious. "Well, that kills academic honesty." After two or three such impressions it is easy to feel that one has seen enough. In reality, one has seen a tiny, biased sample at the worst possible moment, when one knows the least.

By now there is a growing body of research and practical accounts of successful AI integration into teaching and learning. Colleagues document AI supported feedback cycles that help students revise more often. Others describe using AI to model thinking aloud or to simulate peer critique. On social media, teachers share specific prompts, assignment designs, and policies that reduce cheating and increase authentic work. This is no longer a complete mystery. There are patterns, lessons, and tested strategies. We just do not see them if we stare only at our own classroom.

The big-picture work starts one level higher than tools and tricks. It starts with deconstructing a course all the way down to its learning outcomes. What exactly are students supposed to know and be able to do, in a world where AI is a normal part of knowledge work? Rather than trying to defend old outcomes against AI, we can revise those outcomes to include AI-related competencies, such as prompt design, critical evaluation of AI output, and collaboration with AI in discipline-specific tasks. Once the outcomes fit the new world, we can reconstruct the course upward, aligning readings, activities, assignments, and assessments with those updated aims. At that point AI is no longer an intruder. It becomes part of what students are explicitly learning to handle.

This is what I mean by a scholarly stance toward teaching. We already know how to do this in our research lives. We begin with a question, look for existing literature, notice methods and results, then run small experiments of our own and compare our findings to what others report. AI in education can be approached in the same way. Before banning or fully embracing anything, we can read a few recent studies, scan what thoughtful practitioners report, try a limited pilot in one course, and gather data that goes beyond a couple of loud student comments.

Some colleagues tell me they do not have time for this. I believe them. The workload in higher education is often absurd. Yet we would never accept "I do not have time" as a reason to ignore scholarship in our own disciplines. No historian would proudly say, "I just ignore recent work and go with my gut." No chemist would say, "I saw one failed experiment with a new method, so the method is impossible." When we treat teaching as exempt from scholarly habits, we send the message that the learning of our students is somehow less real than our research.

What worries me most is not moral panic about AI or even poorly designed bans. It is the quiet decision by thoughtful people to stop at "If I cannot see the solution, it does not exist." In research we teach students to distrust that move. We tell them to assume that someone, somewhere, has thought hard about the same problem. We tell them to read, to test, to revise. When it comes to AI and teaching, we owe our students the same discipline. The fact that I cannot yet see how to integrate AI well is not proof that nobody can. It is only proof that I am not done learning.



Saturday, January 24, 2026

Why Does AI Feel Like Freedom To Me, Not A Threat To Learning?

I realized early that I am one of those people who love working with AI. The reason is simple: I have always hated routine. For years in higher education administration I tried to automate any small piece of work that I could. I was the person who wrote macros and ran Mail Merge in Word when most colleagues did not know the feature existed (many still don't). I liked thinking about problems and working with people. I did not like documenting those problems afterward.

Administrative work in universities is full of documents that must look serious and official. Strategic plans, accreditation reports, assessment summaries, memos, program reviews. These texts are often written in a careful, dry tone that tries to sound objective and important. For me those documents felt like a toll road that I had to pay to get to the interesting parts of the job. I wanted to talk with faculty, students, and community partners about real issues. I wanted to design new programs and rethink old ones. Instead I spent hours polishing language that almost nobody planned to read.

When large language models became usable, something in me clicked. Within days I was intellectually convinced that this technology would matter for education. But I also had a powerful emotional reaction. This felt like a long delayed form of liberation. A big piece of my working life that had always felt wasted could now be reclaimed for thinking and for human interaction.

I do not experience AI as a threat to meaning or to craftsmanship. I experience it as an assistant that removes the crust from the work. I still need to decide what the document should say, who it is for, what matters and what does not. But I do not need to fight with first drafts, with standard phrases, or with the sheer volume of institutional writing. AI gives me more time for the activities I actually value: creative thinking, brainstorming, problem solving, building theory, and talking with real people.

This personal relief also made me more cautious about judging other people's reactions. Our field often talks about AI as if there is a single rational stance to take. In reality, psychotypes matter. Some people are wired to enjoy the very things that I dislike. They take pleasure in the slow craft of sentence building. They value the aesthetics of a beautiful paragraph, the elegance of a careful transition, the feeling of a page that has no awkward phrase anywhere. The process itself is rewarding for them, not just the outcome. These are not shallow preferences. They reflect different theories of what work should feel like and what makes it meaningful.

There are also people for whom accuracy and authority are core values. They want information to be correct, checked, and stable. They trust texts that feel final. For them, any minor error, even a typo, can cast doubt on the whole product. When they look at AI, they see a tool that produces fluent but sometimes wrong text, and that feels deeply unsafe. The idea that something could sound confident and still be mistaken violates their sense of how knowledge should be handled. Their resistance grows from a coherent set of commitments about what scholarship requires.

My preferences run in a different direction. I care more about speed, access, and the flow of ideas than about perfect reliability at the sentence level. I was an early fan of Wikipedia for exactly this reason. I liked the fact that I could reach a reasonable overview of almost any topic in seconds, even if I knew that some entries had gaps or errors. Many colleagues still treat Wikipedia as second rate. For me, being mostly correct is good enough, as long as I can see interesting ideas and follow references further if needed. This is a different epistemology, not a careless one.

AI feels like an extension of that tradeoff. It gives me fast, flexible text that I can shape, question, and rebuild. I do not expect it to be right in every detail. I expect it to help me think faster and wider. What I cannot easily get without AI is a steady partner that never gets tired of drafting, revising, or trying a different structure for the tenth time. The machine does not care how many times I change my mind. That patience has real value.

In one of my earlier reflections I argued that doing a task very well does not prove that the task itself is worthwhile. AI has pushed that point closer to home. Many academic and administrative texts are produced with great skill, but the value of that effort is not always clear. If a machine can now produce a comparable draft in seconds, it becomes easier to ask what exactly we are adding with our human labor, and whether we want to spend our limited time there. This is an uncomfortable question for professions that have built identity around textual competence.

The issue goes beyond individual preference. Different psychotypes produce different institutional cultures. Organizations dominated by people who value routine and formal documentation will resist AI more strongly than organizations where improvisation and speed are prized. These cultural differences shape what counts as quality, what gets rewarded, and who advances. When AI enters the picture, it does not just change tools. It shifts the balance of power between competing visions of professional life.

When we think about AI in education, we should factor in these differences in temperament. Policy debates tend to focus on abstract risks and benefits. Underneath those arguments lie different ways of relating to work, to text, and to uncertainty. People who experience AI as freedom will advocate for rapid adoption and experimentation. People who experience it as erosion of craft or trust will ask for limits and safeguards. Both groups have a piece of the truth. The challenge is to build systems flexible enough to accommodate both without forcing everyone into the same mold.

Any serious conversation about AI in education has to make space for multiple stories and for the many shades in between. What we cannot afford is to pretend that these differences are merely technical, or that one good argument will settle the matter for everyone. The stakes are partly emotional, partly philosophical, and deeply tied to how we understand the purpose of our work.




Wednesday, January 21, 2026

How Much Human Input Is Enough? The Irreplaceable Criterion

I cannot tell you exactly how much of yourself you need to put into AI-assisted writing. I know this measure exists, but I cannot formalize it. I know it exists because I feel its absence. When I provide too little input, something tightens in my chest. A small discomfort. Not quite guilt, not quite fraud, but a sense that I have crossed a line I cannot name.

This happens with different intensity across different tasks. When I merge existing documents into a report, barely any hesitation. When I draft a recommendation letter from a student's CV and my brief notes, a slight unease. When I consider having AI expand a scholarly outline without my detailed argument, real resistance. That produces drafts I never release. The feeling scales with something, but what?

We all have these intuitions, these moments of knowing we have not done enough. But we cannot teach intuitions. We cannot build professional standards on personal discomfort. Students ask how much they should write before turning to AI. Colleagues wonder whether their process is ethical. We respond with vague guidance about "meaningful engagement" and "substantial contribution." These phrases point at something real but fail to grasp it.

The very existence of our hesitation suggests a threshold. We worry because some minimum actually matters. If AI could handle everything, or if everything required full human composition, we would have no decisions to make. The anxiety comes from occupying the middle ground, where we must judge how much is enough. But enough for what?

Perhaps this: whatever cannot be recovered from existing sources must come from you. Call it the irreplaceable input. It is the information, judgment, or observation that exists nowhere except in your direct knowledge or thinking.

A recommendation letter makes this concrete. The student's resume lists accomplishments. Their statement describes goals. Their transcript shows grades. All of this sits in documents anyone could read. Your irreplaceable input is what you observed directly. How they engaged in discussion. Their growth across a semester. The specific moment they demonstrated insight or character. Provide these observations and AI can shape them into proper letter format. Skip them and AI generates hollow praise that could describe anyone. The discomfort you feel comes from knowing the difference.

Data reports work differently. Three documents contain survey results, budget numbers, timeline details. You need them merged into one report. The facts already exist in writing. Your irreplaceable input is minimal: the purpose of combining them, perhaps, or the audience who needs the result. The rest is organizational labor. Your conscience stays quiet because the substance was already captured. You are not replacing your knowledge with AI's generation. You are using AI to restructure what already exists.

Scholarly writing demands far more. Yes, you can point AI toward existing literature. But the conceptual architecture must come from you. Why these sources matter together. What tension their combination reveals. Which question they help answer. AI can summarize sources. It cannot know which summary serves your argument because it does not have your argument. Your irreplaceable input is the entire intellectual structure: the problem you saw, the gap you identified, the synthesis you propose. Without this, AI produces competent prose organized around nothing in particular.

Even routine emails carry irreplaceable elements. The basic facts seem obvious enough. You need to reschedule a meeting. You want to decline an invitation. But relationship context belongs only to you. Whether this is the third reschedule. Whether you are writing to your supervisor or your student. What tone maintains trust given your history with this person. AI works from patterns observed across millions of messages. You work from direct knowledge of this particular person in this particular situation.

The criterion is not about effort or time spent. Merging three documents might consume two hours of tedious work but require minimal thought. Articulating your core scholarly insight might take ten minutes but represent six months of reading and thinking. The measure is what could be reconstructed without you. If another person could assemble the same material from available sources, you have not yet contributed what only you can contribute. If your specific knowledge, observation, or judgment is required, you have met the threshold.

This does not solve every question. How detailed must a scholarly outline be? How many observations make a recommendation sufficient? But it provides a starting point: What am I adding that exists nowhere else? What would be lost if I were removed from this process?

We recognize the threshold by its violation. The letter that sounds generic. The article that demonstrates competence but lacks insight. The message that gets the facts right but the tone wrong. Something missing even when format is correct. That absence marks where irreplaceable input should have been.

The irreplaceable portion need not be large. Three sentences about a student might become one paragraph in a two-page letter. A conceptual framework might occupy two pages in a twenty-page article. But these elements carry the weight that makes the rest meaningful. Remove them and the structure becomes simulation. Keep them and AI serves as a genuine assistant.

This is what we need to identify: the core only we can provide. Not the largest part, not necessarily the hardest part, but the part that requires us to have been there, to know something, to have thought something through. Everything else is real work, but it is work that can be done by pattern. The irreplaceable input requires presence, knowledge, judgment. It requires us.


Friday, January 9, 2026

My Class Companion in Action: What Learning Looks Like Now

In preparation for the Spring semester, I updated my class companion bot and tested it. The bot is better than last semester's version because it has a richer knowledge base embedded in it and sharper behavior instructions. It outperforms the common practice of assigning students a chapter to read, perhaps discussing it in class, and then administering a final exam. By no means can it replace classroom instruction. In the transcript below, I play the role of a student; my messages appear as "You said."

Thursday, January 8, 2026

AI Can Boost Creativity

An original idea is not the same thing as a communicable original idea. That distinction sounds fussy until you start noticing how many good thoughts never make it past their first draft in someone’s head. Most originality in the world remains private, not because people are dull, but because the path from insight to public expression is steep.

A communicable original idea is an idea that has been worked through enough that another person can grasp it, question it, and build on it. It has examples. It has boundaries. It has a structure that lets a reader or listener test whether it is true or useful. That structure is not cosmetic. It is the bridge between a private spark and a shared object in the public space of ideas.

Two limits keep that bridge rare. The first is time. Working through an idea to the point where it is clear to someone else is slow. Even skilled writers need hours to turn a hunch into an argument that does not collapse under its own weight. The second limit is skill. Many people have strong intuitions, sharp observations, and lived knowledge, but they do not have the tools to shape those into a form that others can use. They might be brilliant in conversation and helpless on a blank page. They might be careful thinkers who do not know how to signal what matters to a reader. Their ideas are original, yet they remain trapped.

AI changes this equation in a plain way. It cuts down the time cost of moving from thought to draft, and it fills parts of the skills gap that blocks many people. That is the core claim, and it is boring in the best sense of the word. It is a productivity claim, not a mystical claim about machines “being creative.” The creative act stays human. The communicative labor can be shared.

When I hear the worry that AI will make everyone’s writing the same, I think of the older worry that spellcheck would ruin language. It did not. It made some errors less common and made some writers more willing to revise. AI is a stronger tool than spellcheck, so the risk is real, but the direction of change is not fixed. The result depends on what we ask it to do. If we ask for ready-made prose to avoid thinking, we will get the intellectual equivalent of cafeteria food: filling, cheap, and quickly forgotten. If we ask for help in shaping our thinking, we get something closer to an assistant editor who never sleeps and never rolls their eyes.

The simplest example is idea triage. Most of us have more ideas than we can develop. Some are promising, some are noise, and many are promising but vague. AI can help sort them. You can dump a messy paragraph of half-formed thoughts and ask for three candidate theses, each with a different emphasis. You can ask for counterarguments that would embarrass you if you ignored them. You can ask for the hidden assumptions in your claim. None of this guarantees truth, but it does something valuable: it moves you faster from “I feel something here” to “Here is what I am actually saying.”

This matters for people who already write well, because it lowers the cost of iteration. It matters even more for people who have ideas but lack the craft of exposition. We often romanticize that craft, as if difficulty were proof of virtue. In practice, the craft functions like a gate. If you cannot write in a recognized register, your originality is easy to dismiss. If you do not know how to structure an argument, you might never offer it. AI can act as a translator across registers. It can turn a spoken explanation into a readable paragraph. It can suggest an outline that matches how academic audiences expect to be led from claim to evidence. It can help a bilingual thinker avoid being punished for an accent on the page.

The anxieties around AI often confuse originality with output. People see more text and assume less thought. That can be true, but it is not necessary. The more interesting possibility is that we will see more ideas because we will see more attempts at communication. Most attempts will be mediocre. That is not a crisis. The public space of ideas has always been full of mediocre attempts. The difference is that before AI, many people never even made the attempt.

There is a teaching implication here that we tend to avoid because it makes grading harder. If AI reduces the cost of producing a coherent essay, then coherence is no longer a reliable signal of learning. That does not mean students should be banned from AI. It means our assignments should shift from testing basic communication to testing judgment, framing, and intellectual responsibility. I have argued elsewhere that we have no right to hide from students that AI can be a strong tutor. The same logic applies to writing support. If the tool exists, students will use it, and some will use it well. Our job is to make “using it well” visible and assessable.

One way is to grade the thinking trail, not only the final product. Ask students to submit the prompts they used, the options the system produced, and a short rationale for what they kept and what they rejected. This turns AI from a shortcut into a mirror. Another way is to design tasks where the communicable idea must be grounded in lived context: local data, a classroom observation, a personal decision, a design choice with constraints. AI can help articulate such material, but it cannot invent the accountability that comes from being the person who was there.

There is also a relational dimension that matters to me as an educator. Communication is not only transmission. It is an invitation. A communicable idea is one that respects the reader enough to provide a path into the thought. AI can help with that respect by making revision less punishing. Many people stop revising because revision feels like failure. AI reframes revision as a normal dialogue. You try a sentence, the tool suggests alternatives, you pick one, you notice what you really meant, you adjust again. That is not cheating. That is apprenticeship, with a strange new partner.

Of course, AI can also flood the world with plausible nonsense. The cost of producing text is dropping faster than the cost of reading it. That creates a new bottleneck: attention and trust. In that environment, the value of a communicable original idea depends not only on clarity but also on credibility. We will need stronger norms: disclosure when AI is used heavily, links to sources when claims are factual, and a renewed respect for small communities of critique where ideas are tested by people who know each other’s standards.

If we take that seriously, AI does not have to be the enemy of creativity. It can be the enemy of silence. The great loss in intellectual life is not bad writing. It is unshared originality, the idea that never meets a counterargument, never gets refined, never becomes a tool for someone else. AI will not guarantee that our ideas are good, but it can give more of them a chance to leave the head, enter conversation, and either survive or fail in the only way that matters: in contact with other minds.


