I cannot tell you exactly how much of yourself you need to put into AI-assisted writing. I know this measure exists, but I cannot formalize it. I know it exists because I feel its absence. When I provide too little input, something tightens in my chest. A small discomfort. Not quite guilt, not quite fraud, but a sense that I have crossed a line I cannot name.
This happens with different intensity across different tasks. When I merge existing documents into a report, barely any hesitation. When I draft a recommendation letter from a student's CV and my brief notes, a slight unease. When I consider having AI expand a scholarly outline without my detailed argument, real resistance; the drafts those attempts produce, I never release. The feeling scales with something, but what?
We all have these intuitions, these moments of knowing we have not done enough. But we cannot teach intuitions. We cannot build professional standards on personal discomfort. Students ask how much they should write before turning to AI. Colleagues wonder whether their process is ethical. We respond with vague guidance about "meaningful engagement" and "substantial contribution." These phrases point at something real but fail to grasp it.
The very existence of our hesitation suggests a threshold. We worry because some minimum actually matters. If AI could handle everything, or if everything required full human composition, we would have no decisions to make. The anxiety comes from occupying the middle ground, where we must judge how much is enough. But enough for what?
Perhaps this: whatever cannot be recovered from existing sources must come from you. Call it the irreplaceable input. It is the information, judgment, or observation that exists nowhere except in your direct knowledge or thinking.
A recommendation letter makes this concrete. The student's resume lists accomplishments. Their statement describes goals. Their transcript shows grades. All of this sits in documents anyone could read. Your irreplaceable input is what you observed directly. How they engaged in discussion. Their growth across a semester. The specific moment they demonstrated insight or character. Provide these observations and AI can shape them into proper letter format. Skip them and AI generates hollow praise that could describe anyone. The discomfort you feel comes from knowing the difference.
Data reports work differently. Three documents contain survey results, budget numbers, timeline details. You need them merged into one report. The facts already exist in writing. Your irreplaceable input is minimal: the purpose of combining them, perhaps, or the audience who needs the result. The rest is organizational labor. Your conscience stays quiet because the substance was already captured. You are not replacing your knowledge with AI's generation. You are using AI to restructure what already exists.
Scholarly writing demands far more. Yes, you can point AI toward existing literature. But the conceptual architecture must come from you. Why these sources matter together. What tension their combination reveals. Which question they help answer. AI can summarize sources. It cannot know which summary serves your argument because it does not have your argument. Your irreplaceable input is the entire intellectual structure: the problem you saw, the gap you identified, the synthesis you propose. Without this, AI produces competent prose organized around nothing in particular.
Even routine emails carry irreplaceable elements. The basic facts seem obvious enough. You need to reschedule a meeting. You want to decline an invitation. But relationship context belongs only to you. Whether this is the third reschedule. Whether you are writing to your supervisor or your student. What tone maintains trust given your history with this person. AI works from patterns observed across millions of messages. You work from direct knowledge of this particular person in this particular situation.
The criterion is not about effort or time spent. Merging three documents might consume two hours of tedious work but require minimal thought. Articulating your core scholarly insight might take ten minutes but represent six months of reading and thinking. The measure is what could be reconstructed without you. If another person could assemble the same material from available sources, you have not yet contributed what only you can contribute. If your specific knowledge, observation, or judgment is required, you have met the threshold.
This does not solve every question. How detailed must a scholarly outline be? How many observations make a recommendation letter sufficient? But it provides a starting point: What am I adding that exists nowhere else? What would be lost if I were removed from this process?
We recognize the threshold by its violation. The letter that sounds generic. The article that demonstrates competence but lacks insight. The message that gets the facts right but the tone wrong. Something is missing even when the format is correct. That absence marks where irreplaceable input should have been.
The irreplaceable portion need not be large. Three sentences about a student might become one paragraph in a two-page letter. A conceptual framework might occupy two pages in a twenty-page article. But these elements carry the weight that makes the rest meaningful. Remove them and the structure becomes simulation. Keep them and AI serves as a genuine assistant.
This is what we need to identify: the core only we can provide. Not the largest part, not necessarily the hardest part, but the part that requires us to have been there, to know something, to have thought something through. Everything else is real work, but it is work that can be done by pattern. The irreplaceable input requires presence, knowledge, judgment. It requires us.
