Your guide to designing assessments in the age of gen AI

When generative AI entered higher education at scale, it felt like a rupture. Essays could be written in seconds. Answers could be generated without struggle. Familiar assessment formats suddenly looked fragile.
But this wasn’t a new problem. It was a revealing one.
Long before AI, universities were already grappling with contract cheating, ghost-writing, and outsourced assessment labour. The typical response was largely forensic: detect, investigate, penalise. Cadmus started there too.
We were originally built as a detection tool, designed to surface contract cheating. But it quickly became clear that detection alone doesn’t fix the problem. What we were seeing was disengagement, uncertainty, and students who didn’t understand what “good” looked like until it was too late, when the learning had already failed.
From catching misconduct to preventing it
Instead of asking “How do we prove this student cheated?” we asked, “How do we design assessments so cheating isn’t necessary?”
This shift led us to fold the final submission back into the learning process itself. Cadmus evolved into a platform built around process over product: the principle at the heart of effective assessment in an AI-rich world. Drafting, feedback, revision, decision-making, and reflection became visible parts of the assessment journey.
Generative AI didn’t create the risk; it amplified a truth we already knew: when assessment values only the final artefact, students will find the fastest way to produce it, whether that’s a contract cheating service in 2019 or a language model in 2026.
AI raises the bar for what good assessment design looks like
Across the sector, including at the recent T&L Forum hosted by Cadmus and the University of Melbourne, a consistent theme emerged:
The question is no longer whether students will use AI, but how it can be integrated in a way that still drives learning.
This is where many assessment strategies still fall short.
Banning AI outright ignores its prevalence in professional life (and is very difficult to enforce outside an invigilated, in-person assessment). Allowing it without structure collapses the line between assistance and substitution. And detection-only approaches create adversarial relationships that scale poorly and teach students very little.
What works instead is process-driven assessment, intentionally designed to support learning, judgement, and ethical decision-making.
Building AI literacy through assessment
AI literacy isn’t a standalone skill. It develops through practice, guidance, and feedback: exactly what good assessment already provides.
Students need to learn:
- When a tool is helpful
- How to use it responsibly
- When it undermines learning
- How to critically evaluate output
This kind of judgement doesn’t emerge spontaneously; it should be taught, and assessment is the ideal place to do it.
Process-driven assessment makes AI use part of the learning journey, not just something to police after submission. In practice, this means:
- making expectations about AI use explicit at each stage
- providing guidance on how to use AI appropriately
- asking students to reflect on how tools were used
- structuring tasks so AI supports thinking, without replacing it
When students articulate their process, secrecy disappears. Instead of hiding tool use, they learn to discuss it critically - exactly the behaviour universities want graduates to carry into AI-rich professions.
How Cadmus supports process-driven assessment
Designing for process only works if that process is visible, usable, and actionable. Cadmus achieves this through scaffolded templates and learning analytics, designed to work together.
Templates provide structure, guiding students through planning, research, drafting, feedback, and revision. They make expectations explicit, reduce last-minute panic, and clarify what quality looks like before students submit.
Analytics capture engagement patterns, drafting behaviour, and responses to feedback. Educators see the learning journey, not just a final submission. Early intervention becomes possible, and institutions gain insight into how assessments are functioning in practice.
Together, this approach creates a learning-first environment, not a policing-first one.
Educators can identify where students are struggling early, intervene with targeted support, and adjust teaching based on evidence rather than assumption. Institutions gain clearer insight into how assessment design is functioning - not just whether students passed, but how learning unfolded.
In an AI-rich context, this matters deeply. It allows universities to move beyond surface-level judgements about tool use and instead focus on what actually signals learning: engagement, iteration, reflection, and growth.
This is what makes process-driven assessment scalable. Not surveillance. Not detection. But design plus insight, working together before problems arise.
Why prevention scales when detection doesn’t
One of the clearest lessons emerging across universities is this: investigation doesn’t scale, but design does.
Detection tools add workload after submission. Investigations consume academic time. Appeals erode trust. None of this improves learning.
Preventative, learning-centred assessment does the opposite. It:
- reduces the incentive to outsource thinking
- supports students earlier in the task
- provides multiple feedback and course-correction opportunities
- generates stronger evidence of learning at meaningful points
Institutions implementing scaffolded, process-driven assessment are seeing fewer integrity issues not because students are being watched more closely, but because they are being supported more effectively.
Assessment works best when students understand that it exists for their learning, not against them. AI makes this clarity essential. Without it, students default to optimisation: fastest path, lowest risk, minimal effort. With it, assessment becomes a site for building judgement: when AI helps, when it harms, and when human expertise matters most.
Assessment as education, not enforcement
Cadmus’ evolution reflects this belief. Moving beyond detection didn’t abandon integrity—it strengthened it. Assessments now teach students:
- How to learn
- How to write
- How to revise
- How to work responsibly with AI
In 2026, that isn’t a defensive posture. It’s a pedagogical one.
It’s increasingly clear that the future of assessment won’t be secured by better policing of submissions, but by better design of learning itself.
Example & Resources
Want to see process-driven assessment in action? Check out our Literature Review Template, designed around the learning process, not just the final submission. It scaffolds planning, research, drafting, feedback, and reflection, with AI use made explicit and reflective.
For a deeper dive into the principles of academic integrity in an AI world, read our blog: Academic Integrity in 2026: Beyond Detection Tools.