Readers who follow these debates know the term "AI slop," used to describe the bland, formulaic, superficially competent output that floods comment sections and content farms. The term names a result. It does not name the process that produces it. The process deserves its own name. I will call it slop in. The phrase borrows from the old computing maxim that garbage in produces garbage out and applies it specifically to the moment when a human, sitting in front of a model capable of doing a great deal, supplies an input that wastes that capacity. Slop in is what produces slop out.
The obvious form of slop in is the underspecified request. "Write me an essay about leadership." The human has supplied no intent, no angle, no judgment, and the model fills the vacuum with the average of everything it has seen on leadership, which is the definition of unreadable prose. But there is a less obvious form, and it is the one I find more interesting: the redundant prompt, in which the human repeats back to the model what the model would have produced anyway. The user who asks for a five-paragraph essay with a clear thesis, three supporting points, and a conclusion has not contributed what only a human can contribute. They have specified only what the model already defaults to. The output will be worse than if they had said something distinctive, however brief. Effort is not the issue. A long, careful, redundant prompt is still slop in.
In a 1960 essay called "Man-Computer Symbiosis," J. C. R. Licklider sketched the division of labor we are still working out. He drew up two columns. The machine would handle rapid retrieval, pattern matching across volume, and calculation. The human would formulate the questions, choose the criteria of evaluation, and recognize when something was significant. The specifics need updating after more than six decades, but the structure is exactly right. Licklider understood that the partnership only worked if each side did what the other could not. He did not call it comparative advantage, but the logic is the same.
The pedagogical implication follows. The skill we now need to teach is not how to operate AI tools. The interfaces are not difficult, and the difficult ones will become easy soon enough. The skill is the generation of human input that enhances rather than degrades the joint output. This means learning to recognize what the model will produce by default, so that the contribution can fall elsewhere. It means cultivating the judgment of what is worth writing about, the taste that distinguishes a serviceable sentence from a memorable one, the willingness to assert an interpretation the model would not reach on its own. These have always been the harder parts of writing instruction. They are now also the more economically valuable ones, because they are precisely the inputs the machine cannot supply for itself.
There is a deeper reason this matters, and it has to do with what we are now able to see about ourselves. Every culture defines the human against something it is not. For most of history, those somethings were animals and gods, and the contrast did most of the conceptual work. Aristotle's rational animal needed beasts to be irrational. The imago Dei needed a creator on the other side of the comparison. The mirrors we used were dim, and the reflection was generous. AI is a new mirror, and a sharper one. It produces fluent argument, competent prose, plausible analysis, and it does so without intent or taste or any judgment of what is worth saying. What remains uniquely human, when the mirror returns this image, is narrower and harder to name than before. Teaching people to contribute to a joint output is therefore not only an economic skill. It is the practical form of the older question about what we are, asked under conditions that finally make the question answerable in detail.