Saturday, April 4, 2026

AI Agents Miss the Point of My Work. Human-AI Synergy Is the Work

Software engineers got very excited about AI agents. I did not. Not because the technology is weak. Because the kind of work I do is fundamentally different from what agents are designed to handle.

An agent is a system that pursues a goal with limited human involvement. You assign a task, it plans, executes, checks, and revises. The appeal is obvious: less oversight, less friction, less human labor. That model fits work where the goal is clear, the steps are predictable, and success can be tested against explicit criteria. Software often looks like that.

My work does not. I write, research, and teach. In all three, value is not produced by handing off a task and waiting for a result. It is created in the interaction itself. A small amount of well-placed human input at each stage improves outcomes far beyond what AI can do alone, and far beyond what I could do without it. That is the real advantage. Not autonomy. Synergy.

This is the point the agent hype consistently misses. Human input is treated as a cost to minimize. In my work, it is the source of value. The crucial moves are often small and nearly impossible to formalize: changing the framing, shifting emphasis, spotting the hidden problem, recognizing that this case is unlike the last one. Those are not interruptions to the workflow. They are the workflow.

I see this plainly in a grading assistant I built. Grading appears to be a strong candidate for automation: prompts, rubrics, student papers, structured process. Yet the tool still requires adjustment for every assignment. A classroom discussion from the previous day changes what I want to reward. A pattern of shared misunderstanding changes what feedback is most useful. A stronger cohort shifts the tone and standard I want applied. The assistant is useful not because it runs independently, but because it stays responsive to the classroom. That responsiveness comes from me. Remove the human layer, and the output becomes flatter. Efficient, perhaps, but no longer particularly educational.

The same holds for writing and research. Important changes happen in the middle. A sentence opens a better line of thought. An objection reframes the whole argument. A vague idea sharpens through exchange. I am not executing a plan. I am discovering the plan as I go.

There is a deeper assumption worth naming. Most enthusiasm around agents rests on the premise that human involvement is a bottleneck, and that the ideal system needs us as little as possible. But for much intellectual work, the human is not the bottleneck. The human is the variable that determines whether the output is any good. We bring unspoken assumptions, changes of mind, local knowledge, and felt judgment. We redirect the process without always being able to articulate how. That is not a limitation of human work. That is what makes the work worth doing.

Agents will prove useful for processes that are complex but still predictable. That category is narrower than most people assume. In teaching, writing, and research, the task is rarely to execute a pre-set sequence well. It is to notice when the sequence itself needs to change.

For my kind of work, human-AI synergy is not a temporary compromise while agents mature. It is the model. 
