Last semester I ran an experiment across three courses that I will call Course A, Course B, and Course C. Each course used an AI Class Companion as a constant presence rather than an occasional tool. Students interacted with it for planning, drafting, testing knowledge, and reflecting on their progress. The exit survey gives an initial picture of how students perceived that experience.
Seventy-seven students completed the survey. The headline number is straightforward. Fifty-nine students reported that they learned more than they would have in a typical class without AI support, which equals 76.6 percent of respondents. Thirty-nine selected “Somewhat Agree,” twenty selected “Fully Agree,” fifteen selected “Somewhat Disagree,” and three selected “Disagree.” These numbers suggest a strong perceived learning gain, but not unanimity.
Another important question asked whether students would take another course using an AI Class Companion. Sixty-three students agreed or fully agreed. Thirty-two chose “Fully Agree,” thirty-one chose “Somewhat Agree,” eleven chose “Somewhat Disagree,” and three chose “Disagree.” This pattern matters because willingness to repeat an experience is often a better indicator of acceptance than enthusiasm in the moment.
The strongest agreement appeared in the skills question. Seventy-two students said their AI skills increased significantly. Fifty-six selected “Fully Agree,” sixteen selected “Somewhat Agree,” three selected “Somewhat Disagree,” and two selected “Disagree.” Even students who were skeptical about learning outcomes often acknowledged growth in technical fluency.
Below is a simple summary table of the core survey items.
**Survey Snapshot (N = 77)**
| Statement | Fully Agree | Somewhat Agree | Somewhat Disagree | Disagree | Agree Total (n, %) |
|---|---|---|---|---|---|
| Learned more than typical course | 20 | 39 | 15 | 3 | 59 (76.6%) |
| Would take another AI-supported course | 32 | 31 | 11 | 3 | 63 (81.8%) |
| AI skills increased significantly | 56 | 16 | 3 | 2 | 72 (93.5%) |
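To make the percentages easy to verify, here is a minimal Python sketch that recomputes the agreement totals from the raw response counts reported above. The counts are copied from the table; the statement labels are shorthand rather than the exact survey wording.

```python
# Recompute agreement totals and percentages from the exit survey counts (N = 77).
# Counts are taken directly from the summary table above.
N = 77

items = {
    "Learned more than typical course": {
        "Fully Agree": 20, "Somewhat Agree": 39, "Somewhat Disagree": 15, "Disagree": 3},
    "Would take another AI-supported course": {
        "Fully Agree": 32, "Somewhat Agree": 31, "Somewhat Disagree": 11, "Disagree": 3},
    "AI skills increased significantly": {
        "Fully Agree": 56, "Somewhat Agree": 16, "Somewhat Disagree": 3, "Disagree": 2},
}

for statement, counts in items.items():
    # Sanity check: each item should account for all 77 respondents.
    assert sum(counts.values()) == N, f"Counts for '{statement}' do not sum to {N}"
    agree = counts["Fully Agree"] + counts["Somewhat Agree"]
    print(f"{statement}: {agree}/{N} agree ({agree / N:.1%})")
```

Running this prints 59/77 (76.6%), 63/77 (81.8%), and 72/77 (93.5%), matching the figures in the table.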
The numbers alone do not tell the full story. Students did not describe AI as flawless or magical. Several comments mentioned frustration when the system misunderstood context or produced shallow responses. That tension is important. The Companion was designed to provoke critique rather than passive acceptance. Many students reported that their stance toward AI changed during the semester. Early interactions focused on efficiency. Later reflections described more careful questioning and revision.
It is also important to note that the survey captures only perception. There is rich data beyond these numbers. Students generated extensive interaction logs with the Class Companion across the semester. Those logs include prompts, revisions, and moments where students corrected or challenged the system. In addition, each course produced substantial final artifacts such as research manuscripts, professional portfolios, and organizational proposals. Together, these materials provide a detailed empirical record of how learning unfolded in practice. I plan to analyze those interactions and final products separately.
One pattern that emerges from the survey is continuity. Students interacted with the Companion repeatedly rather than only at moments of difficulty. Many described returning to earlier conversations to revise ideas or test their understanding again. That continuity appears to have shaped perception of learning. Students often framed the Companion as a thinking partner that extended learning time beyond formal meetings.
At the same time, variation across responses should not be ignored. About one quarter of respondents did not agree that they learned more than in a typical course. Some learners may prefer clearer structure or less autonomy. Others may find constant interaction with AI cognitively demanding. These courses asked students to assume a high level of responsibility for their own learning process. For some students that autonomy felt empowering. For others it introduced uncertainty.
There is also a methodological concern that must be acknowledged openly. The survey results may be influenced by social desirability bias. Students may feel pressure to respond positively when a course emphasizes innovation or when AI is framed as central to the learning experience. Even though participation was voluntary and responses were anonymized after grading, the possibility of bias remains. For that reason, I treat these numbers as provisional indicators rather than definitive proof of impact.
Another interesting finding involves how students described their relationship with AI. Many said that the Companion felt supportive but non-judgmental. That framing may matter more than technical capability. When AI becomes part of the learning environment rather than an external evaluator, students appear more willing to experiment, make mistakes, and revise their thinking.
What do these numbers suggest overall? First, most students perceived increased learning and strong skill growth. Second, willingness to repeat the experience was even higher than reported learning gains. Third, skepticism and frustration remained present, which may be a healthy sign that students were not treating AI as an authority.
The experiment raises a larger question about pedagogy. AI does not automatically improve education. What matters is how courses are structured around it. When AI becomes a continuous cognitive environment, students begin to externalize drafts earlier, test ideas more frequently, and engage in iterative reflection. The exit survey captures that transition from novelty toward routine practice.
Even with these caveats, I consider the main point demonstrated: the use of AI does not prevent learning.

