Thursday, April 30, 2026

Slop In, Slop Out


The common way of talking about AI-generated text begins with a category mistake. People want to know what percentage of a piece was written by AI, as though authorship were a divisible substance. Even the more sophisticated framing, that good content is a blend of human and machine work, retains the assumption that the two contributions can be measured against each other on the same scale. They cannot. A prompt of six unusual words can shape an output more decisively than a thousand words of generic instruction. Size is not the right unit, and there is no right unit, because the contributions are not comparable in kind.

The economist David Ricardo described, two centuries ago, a structure that fits this situation better than any ratio. Two parties with different productivity profiles still gain from specialization, even when one is absolutely more capable at every task. The relevant variable is not absolute capability but opportunity cost. England and Portugal both produced wine and cloth in his example, and Portugal was better at both. They still gained by specializing, because the opportunity cost of each good differed between the two countries. Translated into our problem: it does not matter whether AI can technically generate a thesis statement or a first draft. What matters is which side has the lower opportunity cost for which contribution.
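Ricardo's arithmetic is worth seeing concretely. The sketch below uses the labor-cost numbers from his 1817 example (labor-years to produce one unit of each good); it is an illustration of the opportunity-cost logic, not anything computed in this post. Portugal needs fewer workers for both goods, yet each country still has the lower opportunity cost for one of them.

```python
# Ricardo's 1817 numbers: labor-years needed to produce one unit.
# Portugal is absolutely more productive at BOTH goods.
labor = {
    "Portugal": {"wine": 80, "cloth": 90},
    "England":  {"wine": 120, "cloth": 100},
}

def opportunity_cost(country: str, good: str) -> float:
    """Units of the other good forgone to make one unit of `good`."""
    other = "cloth" if good == "wine" else "wine"
    return labor[country][good] / labor[country][other]

for good in ("wine", "cloth"):
    cheapest = min(labor, key=lambda c: opportunity_cost(c, good))
    print(f"{good}: lowest opportunity cost in {cheapest}")
# wine:  Portugal (80/90 ≈ 0.89 cloth forgone vs England's 120/100 = 1.20)
# cloth: England (100/120 ≈ 0.83 wine forgone vs Portugal's 90/80 ≈ 1.13)
```

Despite Portugal's absolute advantage in both goods, it gives up less cloth per unit of wine than England does, and England gives up less wine per unit of cloth. That asymmetry, not head-to-head capability, is what the human-AI division of labor turns on.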

The taxonomy is becoming legible. AI systems offer their distinctive value in coverage and convention. They have read more than any individual human can read. They know what a competent paragraph in a given genre looks like, what citations a given claim usually carries, what the statistical regularities of expert prose tend to be. Humans offer their distinctive value elsewhere. The novel connection between two rarely connected fields. The judgment that a particular argument matters and another does not. The intent that determines what is worth writing in the first place. The taste that recognizes when a sentence lands and when it slides past. These are not contributions a model derives from corpus statistics, because corpus statistics describe what has been written, not what should be.

Readers who follow these debates know the term "AI slop," used to describe the bland, formulaic, superficially competent output that floods comment sections and content farms. The term names a result. It does not name the process that produces it. The process deserves its own name. I will call it slop in. The phrase borrows from the old computing maxim that garbage in produces garbage out, and applies it specifically to the moment when a human, sitting in front of a model capable of doing a great deal, supplies an input that wastes the capacity. Slop in is what produces slop out.

The obvious form of slop in is the underspecified request. "Write me an essay about leadership." The human has supplied no intent, no angle, no judgment, and the model fills the vacuum with the average of everything it has seen on leadership, which is the definition of unreadable. But there is a less obvious form, and it is the one I find more interesting. It is the redundant prompt, where the human repeats back to the model what the model would have produced anyway. The user who asks for a five-paragraph essay with a clear thesis, three supporting points, and a conclusion has not contributed what only a human can contribute. They have only specified what the model already defaults to. The output will be worse than if they had said something distinctive, however brief. Effort is not the issue. A long, careful, redundant prompt is still slop in.

J. C. R. Licklider sketched the division of labor we are still working out, in a 1960 essay called "Man-Computer Symbiosis." He drew up two columns. The machine would handle rapid retrieval, pattern matching across volume, and calculation. The human would formulate the questions, choose the criteria of evaluation, and recognize when something was significant. The specifics need updating after sixty-six years, but the structure is exactly right. Licklider understood that the partnership only worked if each side did what the other could not. He did not call it comparative advantage, but the logic is the same.

The pedagogical implication follows. The skill we now need to teach is not how to operate AI tools. The interfaces are not difficult, and the difficult ones will become easy soon enough. The skill is the generation of human input that enhances rather than degrades the joint output. This means learning to recognize what the model will produce by default, so that the contribution can fall elsewhere. It means cultivating the judgment of what is worth writing about, the taste that distinguishes a serviceable sentence from a memorable one, the willingness to assert an interpretation the model would not reach on its own. These have always been the harder parts of writing instruction. They are now also the more economically valuable ones, because they are precisely the inputs the machine cannot supply for itself.

There is a deeper reason this matters, and it has to do with what we are now able to see about ourselves. Every culture defines the human against something it is not. For most of history those somethings were animals and gods, and the contrast did most of the conceptual work. Aristotle's rational animal needed beasts to be irrational. The imago dei needed a creator on the other side of the comparison. The mirrors we used were dim, and the reflection was generous. AI is a new mirror, and a sharper one. It produces fluent argument, competent prose, plausible analysis, and it does so without intent or taste or any judgment of what is worth saying. What remains uniquely human, when the mirror returns this image, is narrower and harder to name than before. Teaching people to contribute to a joint output is therefore not only an economic skill. It is the practical form of the older question about what we are, asked under conditions that finally make the question answerable in detail.



