The degree has always been a signal. AI is changing how we verify it.

A university degree has always been, at its core, a signal.
Not a guarantee of competence—everyone understands that. Not a promise that the graduate has retained everything they were taught. A signal. A credible, externally verified claim that this person spent three or four years being genuinely tested on their ability to think, reason, and apply knowledge under conditions they couldn’t easily fake.
Employers know this. They’ve always known it. The degree isn’t the product. The signal is.
That signal is now broken. Not weakened, not complicated, but broken. And the way most universities are responding to that break is making it worse.
What “broken” actually means
When I talk to institutions, I hear a version of this regularly: AI is making it easier for students to produce work that meets the standard without building the capability underneath it.
That observation is correct. But I don’t think most institutions have followed it to where it leads.
If students can produce work that meets the expected standard without genuine understanding, then the assessment output is no longer a reliable proxy for the learning it’s supposed to represent. The signal and the thing it’s supposed to signal have been decoupled.
In market terms: the product still looks the same. The underlying value has changed. You now have a credential that says one thing and may mean another—and neither the institution, nor the employer, nor in many cases the student, can tell which they’re dealing with.
That is not a compliance problem. It is a product integrity problem. And it is one of the most significant reputational risks sitting on university balance sheets right now, largely unpriced.
Why detection is the wrong instrument
Detection tools are built to answer the wrong question.
The question they answer is: how was this work produced? The question that actually matters is: does this student understand it?
Those are not the same question. They never were. But in the pre-AI world, the correlation between them was high enough that the gap didn’t matter much. A student who wrote the essay probably understood it, at least partially. Now that correlation has collapsed. High-quality output and genuine understanding have been decoupled, and a tool that inspects the output cannot tell you anything about the understanding underneath it.
This failure mode has a pattern across industries. Every sector eventually reaches the point where the audit mechanism was designed for a world that no longer exists. Financial services had it with credit ratings—a measurement system that produced numbers with confidence long after the underlying signal had degraded. Healthcare has versions of it in clinical coding. The response that looked like due diligence was actually compounding the problem by creating false confidence. The detection tool tells you something happened. It cannot tell you what it means.
And here is the structural problem that makes this unfixable at the detection layer: the models being detected are improving faster than the detection layer itself. This is not a temporary gap that will close with the next product update. It is a permanent asymmetry. The adversary has the faster technology, the lower cost base, and the stronger incentive. You are not going to outrun that.
You’re not fixing the signal. You’re auditing the noise.
What actually restores the signal
The institutions making real progress on this aren’t asking “how do we catch more students?” They’re asking a different question: how do we design assessment that requires genuine understanding to complete?
It’s a deceptively simple reframe. The implications are significant.
When a student knows they will be asked to explain their work—to talk through their reasoning, defend their decisions, respond to a question they haven’t seen—the nature of the task changes entirely. AI can still help with structure, with drafting, with early thinking. It cannot stand in for the student at the moment of live demonstration. That moment, by design, can’t be outsourced.
This isn’t just anecdotal. Research from Wonkhe points to the same dynamic. In one study, two students used AI on similar assignments, with one key difference: one expected to be tested on their understanding later, the other didn’t. That single structural feature changed how they used AI entirely—not whether they used it, but whether they actually engaged with the material.
One teaching team I spoke with recently added a ten-minute oral follow-up to their written submissions. Nothing elaborate—just “walk me through your argument and the decisions behind it.” What they found wasn’t that misconduct disappeared. It was that behaviour shifted upstream. Students engaged differently with the material from the start, because they knew the submission wasn’t the end of the process.
That’s the mechanism. Not surveillance. Not detection. Assessment design that makes genuine learning the path of least resistance, rather than an optional extra.
The commercial logic is straightforward. Detection-first integrity requires permanent, escalating investment in tools that are depreciating in effectiveness. Every detection advance is met with a circumvention advance. The cost compounds, the confidence erodes, and the underlying signal problem remains unsolved.
Design-led integrity requires upfront investment in curriculum and capability—harder to procure, slower to implement, impossible to show in an annual report as a line item. But the return is structural: an assessment system that produces a signal you can actually stand behind. One that tells employers, accreditors, and students themselves something true about what the degree represents.
One model pays indefinitely to chase a problem it cannot solve. The other builds something that makes the problem progressively harder to create in the first place.
The window
There is a version of this transition that happens deliberately, with institutional leadership and clear direction. And there is a version that happens reactively—after the story breaks, after the accreditor asks, after the employer stops taking the degree at face value.
Both versions happen. The deliberate one is significantly less expensive.
The institutions making progress on this are not doing it because they’ve solved the resourcing problem or found a way to make curriculum change frictionless. They’re doing it because they’ve understood what’s actually at stake: not just an integrity compliance issue, but the foundational claim their institution makes about what its graduates can do.
That claim is what the degree has always been. AI has forced the question of whether institutions can still make it honestly.
The answer is yes. But it requires building assessment systems that produce a real signal—not tools that inspect a broken one.
-
Herk Kailis is Founder and Co-CEO of Cadmus, an assessment platform built for higher education. He writes about the commercial logic of learning design, institutional strategy, and how universities can build systems that work better than the problems they're trying to solve.