Thursday, February 26, 2026

Why Hobbled AI Tutors Do Not Prepare Students for Real Learning with AI

I spent a good amount of time building what I thought was an ideal AI tutor for my courses. I made it carefully Socratic: I asked it to avoid direct answers, to respond with questions, to nudge students toward their own reasoning. Technically, it functioned just as I designed it. But when I used it the way a tired student under time pressure would, the charm faded quickly. I wanted a clear explanation, and it kept giving me more questions. After another round of tuning, I tried to make it friendlier and more supportive. Students then told me that the tutor felt noisy and overwhelming. At that point I understood that I was training them to handle my special tutor, not the kind of AI they actually meet outside class. That is a strange educational goal.

Hobbled AI tutors feel safe and ethical, but they miseducate. They train students to work with artificial constraints that disappear the moment they open a normal AI system in a browser. We act as if a restricted tool is a good stepping stone toward a more powerful one. In practice, students build habits that do not transfer. They learn that AI always refuses direct answers, always behaves in a certain tone, always follows classroom rules. Then they encounter a general model that does none of those things, and much of their practice becomes irrelevant.

This is not a new pattern. Education has long relied on simplified versions of reality. We create word problems that clean up numbers, experiments that always work if you follow the manual, and case studies that fit on two pages. Those devices lower risk and cognitive load. They provide a controlled environment where mistakes are safe and visible. The logic is understandable. A student who is still learning should not make an error that costs a patient, a client, or a company. For that reason, some version of a sandbox is necessary.

The trouble appears when we forget that the sandbox is not the field of practice itself. The rules inside a controlled environment do not match the rules in workplaces, in graduate study, or in everyday online life. In many disciplines, educators now try to bring more authentic tasks into courses, so that students face messy data, conflicting evidence, and imperfect instructions. With AI, that tension between safety and authenticity becomes sharper, because the distance between the restricted version and the real tool is very large.

Once we start to protect students by modifying AI itself, we create a peculiar hybrid. We instruct the model never to give full solutions, or to withhold any concrete suggestion until several rounds of questions have passed. We narrow its sources and formats. We insist on a very specific teacherly tone that no professional tool will ever reproduce. The model still has the power to generate complex text, but it is prevented from using that power in the ways that matter most outside class. Students feel both burdened and confused. The system is strong enough to dominate the interaction, yet weak enough to be unhelpful when they need efficiency.
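To make the mechanics concrete, here is a minimal sketch of how such a hobbled tutor is typically wired up. It is not my actual configuration; it assumes the OpenAI Python SDK, and the model name and rule text are illustrative. The point is that all the constraints live in a system prompt layered on top of an unrestricted model.

```python
# Minimal sketch of a "hobbled" tutor: the constraints live in the
# system prompt, not in the model itself. Prompt text is illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SOCRATIC_RULES = (
    "You are a course tutor. Never give a full solution. "
    "Respond to every question with a guiding question of your own. "
    "Offer a concrete hint only after the student has made "
    "at least two attempts in the conversation."
)

def tutor_reply(history: list[dict]) -> str:
    """Return the tutor's next turn, given the conversation so far."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any chat model slots in here
        messages=[{"role": "system", "content": SOCRATIC_RULES}, *history],
    )
    return response.choices[0].message.content

# A tired student asking for a direct answer still gets a question back:
print(tutor_reply([{"role": "user", "content": "Just tell me the answer."}]))
```

A general model in a browser runs with none of these rules, which is exactly the gap students fall into.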

I do not think the answer is to abandon scaffolding. It is to move scaffolding out of the model and into our teaching. Instead of hard technical constraints, we can offer social and cognitive guidance. We can talk about appropriate and inappropriate uses of AI for a given assignment. We can model how to break a task into steps and how to design prompts for each step. We can teach students to read AI output with the same suspicion they bring to an unfamiliar website, to check claims against other sources, and to notice when the model clearly fabricates information.

In my own courses, this leads to a split strategy. For routine classroom work, a custom AI assistant still makes sense. It can generate weekly reading lists aligned with the syllabus, create small formative quizzes, and support simple administrative tasks. Those are narrow functions where tight constraints are actually helpful, because students do not need to reuse those bots later in life. They will not need my quiz generator at work.

For substantial projects, I will now invite students to use the same broad AI tools that everyone else uses. I want them to confront vague or partial answers and to learn how to ask for clarification. I want them to see different versions of an argument and to practice choosing which one is worth pursuing. That means teaching very specific skills: how to ask the model to reveal its uncertainty, how to request alternative lines of reasoning, how to move from a generic first draft to a more precise second version, and how to document their own use of AI in an honest way.
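To make that concrete, here is a sketch of the kind of prompt templates I have in mind for those four skills. The wording is illustrative rather than a fixed recipe; students rewrite the templates to fit their own projects.

```python
# Illustrative prompt templates for the four skills above.
# The wording is mine and deliberately generic; students adapt it.
PROMPTS = {
    "reveal_uncertainty": (
        "List the claims in your last answer you are least confident "
        "about, and say what evidence would change your mind on each."
    ),
    "alternative_reasoning": (
        "Give two different lines of reasoning that lead to a different "
        "conclusion, and name the strongest objection to each."
    ),
    "precise_revision": (
        "Here is my generic first draft: {draft}\n"
        "Rewrite it to be more precise: replace vague terms with "
        "specifics and mark any sentence where you had to guess."
    ),
    "honest_disclosure": (
        "Summarize how AI was used in this work: which prompts I ran, "
        "which outputs I kept, and what I changed by hand."
    ),
}

# Example: fill the revision template with a student's draft sentence.
print(PROMPTS["precise_revision"].format(draft="Social media affects teens."))
```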

This approach accepts that mistakes will happen. Some students will trust the model too much. Some will misread an answer. But those risks already exist when they use AI on their own phones and laptops, far away from the course platform. In that context, a hobbled classroom tutor does not protect them. It leaves them underprepared. They know how to navigate a special kind of AI that appears only inside one course, and they lack practice with the systems they actually depend on.

An AI tutor that is permanently handicapped may look safer to us as instructors, but it does not prepare students for real learning with AI. It produces clever conversations inside a narrow frame and trains habits that fail outside that frame. I would rather expose students to the real tools and walk with them through the confusion than give them a polished imitation that vanishes as soon as the course ends.




Friday, February 20, 2026

Learning With a Machine in the Room: What Students Said After a Semester of AI-Integrated Teaching

Last semester I ran an experiment across three courses that I will call Course A, Course B, and Course C. Each course used an AI Class Companion as a constant presence rather than an occasional tool. Students interacted with it for planning, drafting, testing knowledge, and reflecting on their progress. The exit survey gives an initial picture of how students perceived that experience.

Seventy-seven students completed the survey. The headline number is straightforward. Fifty-nine students reported that they learned more than they would have in a typical class without AI support. That equals 76.6 percent of respondents. Thirty-nine selected “Somewhat Agree,” twenty selected “Fully Agree,” fifteen selected “Somewhat Disagree,” and three selected “Disagree.” These numbers suggest a strong perceived learning gain, but not unanimity.

Another important question asked whether students would take another course using an AI Class Companion. Sixty-three students agreed or fully agreed. Thirty-two chose “Fully Agree,” thirty-one chose “Somewhat Agree,” eleven chose “Somewhat Disagree,” and three chose “Disagree.” This pattern matters because willingness to repeat an experience is often a better indicator of acceptance than enthusiasm in the moment.

The strongest agreement appeared in the skills question. Seventy-two students said their AI skills increased significantly. Fifty-six selected “Fully Agree,” sixteen selected “Somewhat Agree,” three selected “Somewhat Disagree,” and two selected “Disagree.” Even students who were skeptical about learning outcomes often acknowledged growth in technical fluency.

Below is a simple summary table of the core survey items.

Survey Snapshot (N = 77)

Statement                                   Fully Agree   Somewhat Agree   Somewhat Disagree   Disagree   Agree Total
Learned more than typical course                     20               39                  15          3    59 (76.6%)
Would take another AI-supported course               32               31                  11          3    63 (81.8%)
AI skills increased significantly                    56               16                   3          2    72 (93.5%)
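For readers who want to check the arithmetic, the agree totals and percentages follow directly from the raw counts:

```python
# Recompute the "Agree Total" column from the raw counts (N = 77).
N = 77
items = {
    "Learned more than typical course": (20, 39),
    "Would take another AI-supported course": (32, 31),
    "AI skills increased significantly": (56, 16),
}
for statement, (fully, somewhat) in items.items():
    agree = fully + somewhat
    print(f"{statement}: {agree}/{N} = {agree / N:.1%}")
# Prints 59/77 = 76.6%, 63/77 = 81.8%, 72/77 = 93.5%, matching the table.
```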

The numbers alone do not tell the full story. Students did not describe AI as flawless or magical. Several comments mentioned frustration when the system misunderstood context or produced shallow responses. That tension is important. The Companion was designed to provoke critique rather than passive acceptance. Many students reported that their stance toward AI changed during the semester. Early interactions focused on efficiency. Later reflections described more careful questioning and revision.

It is also important to note that the survey captures only perception. There is rich data beyond these numbers. Students generated extensive interaction logs with the Class Companion across the semester. Those logs include prompts, revisions, and moments where students corrected or challenged the system. In addition, each course produced substantial final artifacts such as research manuscripts, professional portfolios, and organizational proposals. Together, these materials provide a detailed empirical record of how learning unfolded in practice. I plan to analyze those interactions and final products separately.

One pattern that emerges from the survey is continuity. Students interacted with the Companion repeatedly rather than only at moments of difficulty. Many described returning to earlier conversations to revise ideas or test their understanding again. That continuity appears to have shaped perception of learning. Students often framed the Companion as a thinking partner that extended learning time beyond formal meetings.

At the same time, variation across responses should not be ignored. About one quarter of respondents did not agree that they learned more than in a typical course. Some learners may prefer clearer structure or less autonomy. Others may find constant interaction with AI cognitively demanding. These courses asked students to assume a high level of responsibility for their own learning process. For some students that autonomy felt empowering. For others it introduced uncertainty.

There is also a methodological concern that must be acknowledged openly. The survey results may be influenced by social desirability bias. Students may feel pressure to respond positively when a course emphasizes innovation or when AI is framed as central to the learning experience. Even though participation was voluntary and responses were anonymized after grading, the possibility of bias remains. For that reason, I treat these numbers as provisional indicators rather than definitive proof of impact.

Another interesting finding involves how students described their relationship with AI. Many said that the Companion felt supportive but non-judgmental. That framing may matter more than technical capability. When AI becomes part of the learning environment rather than an external evaluator, students appear more willing to experiment, make mistakes, and revise their thinking.

What do these numbers suggest overall? First, most students perceived increased learning and strong skill growth. Second, willingness to repeat the experience was even higher than reported learning gains. Third, skepticism and frustration remained present, which may be a healthy sign that students were not treating AI as an authority.

The experiment raises a larger question about pedagogy. AI does not automatically improve education. What matters is how courses are structured around it. When AI becomes a continuous cognitive environment, students begin to externalize drafts earlier, test ideas more frequently, and engage in iterative reflection. The exit survey captures that transition from novelty toward routine practice.

However, the main point stands: the use of AI does not prevent learning.


