Thursday, January 8, 2026

AI can boost creativity

An original idea is not the same thing as a communicable original idea. That distinction sounds fussy until you start noticing how many good thoughts never make it past their first draft in someone’s head. Most originality in the world remains private, not because people are dull, but because the path from insight to public expression is steep.

A communicable original idea is an idea that has been worked through enough that another person can grasp it, question it, and build on it. It has examples. It has boundaries. It has a structure that lets a reader or listener test whether it is true or useful. That structure is not cosmetic. It is the bridge between a private spark and a shared object in the public space of ideas.

Two limits keep that bridge rare. The first is time. Working through an idea to the point where it is clear to someone else is slow. Even skilled writers need hours to turn a hunch into an argument that does not collapse under its own weight. The second limit is skill. Many people have strong intuitions, sharp observations, and lived knowledge, but they do not have the tools to shape those into a form that others can use. They might be brilliant in conversation and helpless on a blank page. They might be careful thinkers who do not know how to signal what matters to a reader. Their ideas are original, yet they remain trapped.

AI changes this equation in a plain way. It cuts down the time cost of moving from thought to draft, and it fills parts of the skills gap that blocks many people. That is the core claim, and it is boring in the best sense of the word. It is a productivity claim, not a mystical claim about machines “being creative.” The creative act stays human. The communicative labor can be shared.

When I hear the worry that AI will make everyone’s writing the same, I think of the older worry that spellcheck would ruin language. It did not. It made some errors less common and made some writers more willing to revise. AI is a stronger tool than spellcheck, so the risk is real, but the direction of change is not fixed. The result depends on what we ask it to do. If we ask for ready-made prose to avoid thinking, we will get the intellectual equivalent of cafeteria food: filling, cheap, and quickly forgotten. If we ask for help in shaping our thinking, we will get something closer to an assistant editor who never sleeps and never rolls their eyes.

The simplest example is idea triage. Most of us have more ideas than we can develop. Some are promising, some are noise, and many are promising but vague. AI can help sort them. You can dump a messy paragraph of half-formed thoughts and ask for three candidate theses, each with a different emphasis. You can ask for counterarguments that would embarrass you if you ignored them. You can ask for the hidden assumptions in your claim. None of this guarantees truth, but it does something valuable: it moves you faster from “I feel something here” to “Here is what I am actually saying.”

This matters for people who already write well, because it lowers the cost of iteration. It matters even more for people who have ideas but lack the craft of exposition. We often romanticize that craft, as if difficulty were proof of virtue. In practice, the craft functions like a gate. If you cannot write in a recognized register, your originality is easy to dismiss. If you do not know how to structure an argument, you might never offer it. AI can act as a translator across registers. It can turn a spoken explanation into a readable paragraph. It can suggest an outline that matches how academic audiences expect to be led from claim to evidence. It can help a bilingual thinker avoid being punished for an accent on the page.

The anxieties around AI often confuse originality with output. People see more text and assume less thought. That can be true, but it is not necessary. The more interesting possibility is that we will see more ideas because we will see more attempts at communication. Most attempts will be mediocre. That is not a crisis. The public space of ideas has always been full of mediocre attempts. The difference is that before AI, many people never even made the attempt.

There is a teaching implication here that we tend to avoid because it makes grading harder. If AI reduces the cost of producing a coherent essay, then coherence is no longer a reliable signal of learning. That does not mean students should be banned from AI. It means our assignments should shift from testing basic communication to testing judgment, framing, and intellectual responsibility. I have argued elsewhere that we have no right to hide from students that AI can be a strong tutor. The same logic applies to writing support. If the tool exists, students will use it, and some will use it well. Our job is to make “using it well” visible and assessable.

One way is to grade the thinking trail, not only the final product. Ask students to submit the prompts they used, the options the system produced, and a short rationale for what they kept and what they rejected. This turns AI from a shortcut into a mirror. Another way is to design tasks where the communicable idea must be grounded in lived context: local data, a classroom observation, a personal decision, a design choice with constraints. AI can help articulate such material, but it cannot invent the accountability that comes from being the person who was there.

There is also a relational dimension that matters to me as an educator. Communication is not only transmission. It is an invitation. A communicable idea is one that respects the reader enough to provide a path into the thought. AI can help with that respect by making revision less punishing. Many people stop revising because revision feels like failure. AI reframes revision as a normal dialogue. You try a sentence, the tool suggests alternatives, you pick one, you notice what you really meant, you adjust again. That is not cheating. That is apprenticeship, with a strange new partner.

Of course, AI can also flood the world with plausible nonsense. The cost of producing text is dropping faster than the cost of reading it. That creates a new bottleneck: attention and trust. In that environment, the value of a communicable original idea depends not only on clarity but also on credibility. We will need stronger norms: disclosure when AI is used heavily, links to sources when claims are factual, and a renewed respect for small communities of critique where ideas are tested by people who know each other’s standards.

If we take that seriously, AI does not have to be the enemy of creativity. It can be the enemy of silence. The great loss in intellectual life is not bad writing. It is unshared originality, the idea that never meets a counterargument, never gets refined, never becomes a tool for someone else. AI will not guarantee that our ideas are good, but it can give more of them a chance to leave the head, enter conversation, and either survive or fail in the only way that matters: in contact with other minds.


