Friday, March 27, 2026

The Relational University: A Vision for AI in Higher Education

Universities have a set of deep, structural problems that long predate artificial intelligence. Student engagement is thin. Bureaucratic barriers are thick. Advising is reactive and understaffed. Instruction often runs on low-engagement lecture models. Faculty who entered the profession to work with ideas and people spend most of their time on mechanical tasks. Students navigate systems designed for institutional convenience, not for learning. These are not new complaints. They are defining failures that decades of reform have not resolved.

AI potentially offers a way to resolve them without reducing the workforce. The vision is direct: universities become more relational, not more automated. Faculty gain time for personal contact with students because machines absorb the mechanical parts of teaching. Instruction improves because AI makes genuine formative assessment practically possible for the first time. Using the results of previous assignments to design better subsequent ones is something the theory always called for but practice never had the capacity to deliver. Advising shifts from reactive to proactive. Administrative barriers diminish for routine matters, freeing human staff to handle complex ones. The university does not shrink. It refocuses.

Advising is limited by labor. An advisor with a caseload of 800 students cannot know any of them. The result is a pull model: students must identify their own problems and seek help. Those who most need support are least equipped to ask for it. AI can change this by detecting risk patterns across enrollment, academic, and financial data, flagging students who are drifting before they reach crisis. But the flag is not the intervention. The intervention is a human advisor who calls, meets, and listens. If we build this well, AI handles the surveillance and triage and the bulk of simple questions; advisors handle the relationship and complex questions. The same constraint limits course scheduling. We know that certain combinations of courses and workloads predict failure. We know that work schedules, commute patterns, and academic preparation all interact with course selection. But no human advisor can model these interactions across hundreds of students each semester. AI could make risk visible at the point of decision, so that advisors and students choose schedules with open eyes. The underlying problem is the same: knowledge exists that could prevent failure, but the labor required to apply it at scale has always exceeded institutional capacity.
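To make the distinction between the flag and the intervention concrete, here is a minimal sketch of what such a triage layer might look like. Everything in it is hypothetical: the signal names, the weights, and the threshold are illustrative stand-ins, not a description of any real early-alert product. The shape is what matters: the model scores and routes, and a human advisor owns every intervention.

```python
from dataclasses import dataclass

# Hypothetical risk signals; a real system would derive these from
# enrollment, academic, and financial data (all names are illustrative).
@dataclass
class StudentSignals:
    student_id: str
    missed_assignments: int   # count over the past three weeks
    gpa_trend: float          # change in term GPA vs. the prior term
    credit_overload: bool     # enrolled above the recommended load
    financial_hold: bool

def risk_score(s: StudentSignals) -> float:
    """Combine signals into a rough 0-1 score (weights are guesses)."""
    score = min(s.missed_assignments, 5) * 0.10
    if s.gpa_trend < -0.5:
        score += 0.30
    if s.credit_overload:
        score += 0.15
    if s.financial_hold:
        score += 0.15
    return min(score, 1.0)

def triage(students: list[StudentSignals], threshold: float = 0.5) -> list[str]:
    """Return the IDs routed to a human advisor's queue.

    The system only surfaces names. The outreach itself, the call and
    the meeting and the listening, stays with the advisor.
    """
    return [s.student_id for s in students if risk_score(s) >= threshold]
```

The design choice worth noticing is that triage() produces a queue, not an action: nothing in this pipeline ever contacts a student.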

Instruction faces the same bottleneck. The simplistic lecture-and-exam teaching mode persists because faculty lack time for anything better. Substantive weekly feedback on student work is physically impossible at current teaching loads. So we compress assessment into a few high-stakes moments and call it evaluation. AI could absorb the mechanical first pass of formative feedback, freeing faculty to review, personalize, and respond to individual learning trajectories. More than that: formative assessment theory has always held that the results of one assignment should inform the design of the next. No instructor teaching over a hundred students has time to analyze patterns across a set of papers and redesign the subsequent task accordingly. AI could close that loop, identifying collective strengths and weaknesses and drafting assessments calibrated to what students actually need next. The instructor reviews, adjusts, and teaches. If we get this right, the feedback cycle that formative assessment always promised becomes operational for the first time.
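As a rough illustration of that closed loop, here is a minimal sketch. The feedback logic is a placeholder standing in for whatever model an institution actually uses, and every function name is invented for this example; the point is the division of labor, where the machine drafts and aggregates while the instructor reviews and decides.

```python
from collections import Counter

def draft_feedback(submission: str) -> dict:
    """First-pass feedback with tagged weaknesses.

    Placeholder logic: a real deployment would call a language model
    here. The crude checks below just keep the sketch runnable.
    """
    issues = []
    if "because" not in submission.lower():
        issues.append("unsupported claims")
    if len(submission.split()) < 200:
        issues.append("underdeveloped analysis")
    return {"comments": f"Draft feedback: {len(issues)} issue(s) flagged.",
            "issues": issues}

def class_pattern(submissions: list[str]) -> Counter:
    """Aggregate weaknesses across the whole class so the next
    assignment can target them. The instructor reviews this summary,
    and the drafted comments, before anything reaches a student."""
    tally = Counter()
    for text in submissions:
        tally.update(draft_feedback(text)["issues"])
    return tally

# A tally like Counter({'unsupported claims': 19, 'underdeveloped
# analysis': 7}) becomes the brief from which the instructor designs
# the following task.
```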

The downstream effect on faculty time could be substantial. Time recovered from mechanical grading and routine administration could go toward direct contact with students: structured individual meetings, small-group conversations about ideas rather than grades. The labor has always been there. It has simply been allocated to the wrong tasks. Similarly, administrative staff currently spend large portions of their time answering routine questions that have definitive answers. AI can handle those instantly, not to eliminate positions, but to redirect human attention toward complex cases that require judgment.

None of this is guaranteed. The technology exists, but institutional habits are strong. Universities could just as easily use AI to cut costs, reduce staff, and further depersonalize the student experience. The reinvented university, more relational, more responsive, more human, will only emerge if we deliberately choose to augment human labor rather than replace it. The defining question is not what AI can do. It is what we decide to do with the capacity it creates. If we play our cards right, universities could become what they have always claimed to be: places where education happens between people, supported by systems that finally serve that purpose rather than obstructing it.


Thursday, March 26, 2026

The Trouble with Refusal Rights: On the CCCC Resolution to Refuse Generative AI

The Conference on College Composition and Communication recently passed a resolution affirming "the rights of students and teachers to refuse to sign up for, prompt, or otherwise use generative AI in the writing classroom." The resolution draws on a companion document, "Refusing GenAI in Writing Studies: A Quickstart Guide" by Jennifer Sano-Franchini, Megan McIntyre, and Maggie Fernandes. I find much of what they say about the political economy of Big Tech persuasive. Yet the central move of casting the argument in the language of rights is a strategic and conceptual misstep.

When we talk about rights, we usually mean one of two things: legal rights, which are enforceable protections backed by law or contract, or moral rights, which are claims about what people are owed regardless of what the law says. The resolution tries to invoke both registers and achieves neither.

The resolution leans on the AAUP's 1940 Statement on Academic Freedom. That document does establish a real professional norm: faculty can select course materials, determine approaches, and assess student work without administrative veto. But notice what this actually protects. It protects faculty autonomy in making curricular decisions. It does not create a specific right to refuse any particular technology. When an instructor decides that a given tool does not serve her course, she exercises normal pedagogical judgment. No one calls that a right. It is simply teaching.

Academic freedom is symmetric. It protects choices, not specific outcomes. A resolution affirming one particular choice as a right tilts the field. It implies that using AI requires justification while refusing it does not.

There is a deeper problem with invoking the AAUP framework here. The AAUP has been explicit that academic freedom in teaching operates on two levels, and the collective level takes precedence. As the AAUP's own FAQ states, the shared academic freedom of a faculty to determine courses and materials "supersedes the freedom of an individual faculty member to choose a textbook that he or she alone prefers." The AAUP's statement on Freedom and Responsibility reinforces this: it is improper for an instructor to fail to present subject matter "as approved by the faculty in their collective responsibility for the curriculum." This is why we have curriculum committees and course approval processes. Individual instructors teach within a collectively sanctioned framework.

The CCCC resolution quietly reverses this logic. It asserts an individual right to refuse a specific technology, independent of any collective deliberation about what a writing curriculum should contain. If we establish the precedent that an individual instructor has a right to refuse AI on principled grounds, what else can an instructor refuse? The learning management system, on grounds that it embodies corporate surveillance? Plagiarism detection software, which the Quickstart Guide itself criticizes? Peer review platforms? Email? Each of these technologies carries ideological baggage. The line between principled refusal and personal preference becomes impossible to draw once you frame the question as one of individual rights rather than collective curricular judgment.

I have also struggled with a simple question while reading the resolution: What specific violation of rights does it aim to prevent? Who is forcing writing teachers to use ChatGPT?

The AAUP's 2025 survey found that 15 percent of faculty said their institution mandates AI use. But the same survey found that 81 percent must use learning management systems with embedded AI features. The "mandate" is mostly that Canvas or Google Workspace now has AI baked in. That is not the same as being told you must assign AI-assisted essays. I cannot find evidence of any American college or university requiring writing faculty to incorporate generative AI into their pedagogy. To the extent that real pressure exists, it takes the form of institutional nudging, not directives that could be resisted by invoking a right.

Rights are most powerful when they address a concrete threat. The right to free speech protects against government censorship. The right to due process protects against arbitrary punishment. What does the right to refuse generative AI protect against? Against being encouraged to try something? Against the zeitgeist? Rights are a heavy instrument. They should be reserved for heavy problems.

Buried inside the resolution is a pedagogical claim: that writing instruction develops human thought and expression, and that outsourcing parts of the writing process to a language model may undermine that development. The claim is debatable but plausible. However, it does not require the language of rights. It requires the language of curriculum and evidence. If generative AI undermines learning outcomes in first-year composition, instructors should not use it. Not because they have a right to refuse, but because it does not serve their students. By framing refusal as a right rather than a pedagogical judgment, the resolution removes the question from the domain where it should be argued and places it in the domain where it can only be asserted. You do not argue against a right. You respect it or you violate it. This forecloses the very inquiry the authors claim to value.

The resolution also affirms a student right to refuse AI. This is even more peculiar. Students are routinely required to use technologies they did not choose: a particular LMS, plagiarism detection services, specific software. No one frames these requirements as rights violations. A course has requirements. An instructor sets them. If the instructor has determined that AI engagement is central to the course, allowing opt-outs means running two parallel courses. If AI is not central, the point is moot.

The real concerns here deserve a better framework. Shared governance matters: if institutions sign contracts with AI companies without consulting faculty, that is a governance problem already addressed by existing norms. Pedagogical autonomy already exists: faculty can and do decide what technologies to use. The evidence question is primary and researchable. And the labor and environmental concerns, while legitimate, are matters of institutional procurement and social policy, not classroom pedagogy. An instructor who refuses ChatGPT on environmental grounds should, by the same logic, refuse Zoom, Canvas, and university email, all of which depend on data centers with significant environmental footprints.

There is a final paradox. The language of rights is supposed to project strength, but here it signals the opposite. When a profession asks for special protections against a new technology, it announces that it cannot figure out, through its own expertise and collective deliberation, how to respond to a changed environment. Math departments did not need a right to refuse calculators. They debated, experimented, and made curricular decisions. Some banned calculators from exams, some required them, and the question was always pedagogical: Does this tool help students learn, or does it let them bypass the thinking? Writing studies has every intellectual resource it needs to conduct the same kind of inquiry. Framing the matter as a right suggests otherwise. It suggests a profession that feels so besieged it must retreat behind quasi-legal barricades rather than do what professions do: deliberate, adapt, and lead.




Friday, March 20, 2026

Spitting Into the Wind

I watch my colleagues fight, and I understand the impulse. The desire to preserve what we built over decades of careful work comes from genuine care for learning. But much of what I see on campuses right now amounts to spitting into the wind. The effort lands back on the person making it, and the wind does not notice.

Consider the inventory. Turnitin added AI detection, and departments adopted it as a digital checkpoint. The detection is unreliable, generating false positives that punish honest students and false negatives that miss sophisticated use. Faculty end up in adversarial arguments about whether a 23% AI probability score constitutes an honor code violation. Browser-locking software was built for a world where the threat was a second tab. That world is gone. AI assistants are now embedded in operating systems, available through voice, woven into browser extensions. Locking down a browser is like reinforcing the front door while the back wall of the house is missing. 

Some faculty have turned to handwritten assignments, stripping away every advantage of digital composition in order to verify authorship. Others schedule 20- or 30-minute oral defenses for each student, a practice that collapses under its own arithmetic in a class of 35. Still others declare their classroom an AI-free zone, a principled stand that increasingly resembles teaching navigation while pretending GPS does not exist.

And then, in February, Einstein arrived: an agentic AI tool that promised to log into Canvas and complete entire courses on a student's behalf, from watching lectures to submitting assignments. The reaction was predictable. Social media erupted, faculty declared it the death of education, and within 48 hours the product was taken down after a trademark dispute over the Einstein name. Faculty breathed a sigh of relief. But the relief is misplaced. As one observer noted, the line between a flash in the pan and a harbinger of things to come is very thin. The underlying technology is open-source, improving rapidly, and replicable by anyone with modest coding skills. Einstein was a crude prototype. Its successors will not announce themselves with a viral marketing campaign.

All of these measures share a common feature. They are perimeter defenses. They try to keep AI out of an existing structure rather than asking whether the structure still makes sense.

Here is what we are avoiding. The entire curriculum, in every discipline, needs re-examination from the foundations. That means returning to learning outcomes, asking which ones still hold, which have been made trivial by AI, and which new ones have become essential. It means rebuilding assessments from those revised outcomes upward. It means redesigning courses so the process of learning, not the product, carries the educational weight.

This is enormous work, and it cannot happen in one summer workshop. It requires sustained time, structured collaboration, and genuine institutional investment. Course releases for faculty redesigning their programs. Instructional design teams embedded in departments, not available by appointment three weeks out. A clear signal from leadership that this work matters as much as research productivity or enrollment targets.

That signal has not come. University leaders have mostly treated AI as a policy question rather than a curricular one. Faculty professional associations could be leading discipline-specific conversations about learning outcomes in a post-AI landscape. Some have begun. Most have not. A conference panel on "AI and Teaching" is not a plan.

Every semester that passes with the old curriculum intact is a semester of lost opportunity. Faculty exhausting themselves with detection and enforcement could be doing the creative, difficult, rewarding work of rethinking what their courses are for. They are dedicated teachers. They simply do what people do when the ground shifts and no one offers direction. They reinforce what they know. They defend what they built.

But the wind does not care. And the longer we spend spitting into it, the less time we have to turn around and walk somewhere that leads to solid ground. 


Slop In, Slop Out

The common way of talking about AI-generated text begins with a category mistake. People want to know what percentage of a piece was written...