Detection to Design: Why secure assessment now demands oral assurance

Written assessment has always been the backbone of higher education. Long-form writing develops research literacy, synthesis of evidence, critical thinking, and disciplinary argumentation (Joughin, 2018; Bearman et al., 2022). It remains one of the most powerful ways universities assess higher-order learning across disciplines. But in an AI-enabled world, a quiet shift is underway. Writing still matters. Exams still matter. But written and exam submissions alone can no longer carry the full burden of assurance.
Generative AI now makes it possible to produce coherent, discipline-appropriate work with minimal student mastery—often in ways that are indistinguishable from genuine learning. The pedagogical value of writing has not diminished. What has changed is the confidence institutions can place in it as evidence.
As we explored recently in our discussion on moving beyond proctoring, the future of academic integrity will not be built on monitoring alone. This article picks up from there.
If written assessment remains foundational for learning—but no longer sufficient as proof—what additional layers of assurance are now required?
The integrity problem has moved upstream
Generative AI can produce coherent, discipline‑appropriate written responses with minimal student mastery, weakening confidence in writing as evidence of understanding (Cotton et al., 2023; Perkins et al., 2024). In unsupervised or low‑control contexts, AI‑generated work may be indistinguishable from genuine student writing.
As TEQSA and other regulators have emphasised, assessment redesign—not simply detection or surveillance—is now central to maintaining validity and defensibility.
The result is what could now be described as an assessment “identity crisis”: longstanding assumptions about summative tasks, originality, and the demonstrability of learning outcomes are being upended as institutions re-articulate what assessment is truly for (TEQSA, Gen-AI academic integrity and assessment reform resources, December 2025).
- Written assessment remains pedagogically essential
- Detection tools are increasingly fragile and contested
- Surveillance raises equity, privacy, and student trust concerns
- And assurance of learning is harder to defend than ever
Integrity conversations are therefore shifting away from “how do we catch misconduct?” toward a more strategic question: “How do we design assessment so that learning becomes visible?”
What secure assessment really needs—and what it doesn’t
In the rush to “secure” assessment, many institutions are still tempted to reach first for surveillance and detection.
But secure assessment requires more than proctoring. It requires a different design model altogether.
Universities do not need:
- Controls that protect conditions but not learning
- Probabilistic signals that are difficult to defend in appeals
- Blanket approaches that increase workload, anxiety, and institutional risk
What universities do need is a different model of secure assessment, one grounded in learning evidence, not enforcement.
Three principles are emerging from both research and practice.
1. Evidence over surveillance
The strongest integrity signal is not monitoring. It is explanation.
Oral assessment has long been recognised as one of the most authentic ways to capture reasoning, reflection, and applied judgement (Joughin, 2018; Sotiriadou et al., 2020). When students explain their thinking, defend their choices, and articulate understanding, authenticity becomes visible.
This is precisely why vivas and oral examinations have traditionally provided the highest confidence in student learning.
Yet despite their pedagogical strength, vivas have largely been abandoned at scale due to workload, scheduling complexity, and moderation challenges—especially in large cohorts.
The problem with vivas has never been pedagogy. It has been delivery.
2. Precision, not blanket controls
Not every task requires the same level of assurance.
Research increasingly supports layered models of assessment—often described as “Swiss cheese” approaches—that combine multiple methods to strengthen validity rather than relying on a single task type (Joughin, 2018).
In practice, this means:
- Applying stronger assurance only where explanation and judgement matter most
- Combining written work with targeted oral checks
- Avoiding unnecessary surveillance in low‑risk contexts
Evidence also shows that oral assessment is most effective when applied selectively and aligned carefully to learning outcomes (Sotiriadou et al., 2020; Hussain et al., 2024).
Precision is what makes integrity scalable.
3. Defensibility by design
In the AI era, the greatest institutional risk is not misconduct. It is being unable to defend assessment decisions.
Institutions now require assessment designs that:
- Demonstrate validity and authenticity
- Provide auditable evidence of learning
- Support moderation and appeals
- Align with emerging regulatory expectations
This marks a fundamental shift. Secure assessment is no longer primarily about controlling behaviour at submission time. It is about designing assessment journeys that generate credible evidence of learning across multiple touchpoints.
The return of oral assurance—redesigned for the AI era
This is where vivas are re‑entering the conversation. Not as traditional, high‑burden oral exams, but as structured, targeted oral assurance moments embedded within existing learning journeys.
Research shows that contextualising oral prompts using a student’s own written submission improves focus and efficiency while preserving pedagogical value (Samadi & Chee, 2024). Probing reasoning directly strengthens validity by confirming both authorship and conceptual understanding.
Crucially, modern delivery models now remove the historical barriers that once limited scale:
- Asynchronous video responses
- Rubric‑linked marking and moderation
- Recorded evidence for audit and review
- AI‑assisted prompting to personalise questioning while preserving educator control
This resolves a long‑standing trade‑off.
For years, universities have been forced to choose between:
- Trusting written work
- Or imposing surveillance‑heavy controls
Precision oral assurance offers a third path. It strengthens integrity by evidencing learning directly.
From detection to layered learning assurance
Leading institutions are now moving toward layered assurance models that combine:
- Written assessment for higher‑order learning
- Secure environments where appropriate
- Proof‑of‑process artefacts
- Targeted oral explanation moments
- Transparent marking and moderation
In this model, no single tool carries the burden of integrity. Instead, confidence is built progressively across the assessment journey, and oral assurance becomes a strategic layer, not an emergency fallback.
Why this moment matters
Three forces are converging. First, AI has permanently weakened writing as a sole integrity signal. No combination of detection or monitoring can fully restore confidence in authorship without additional evidence of understanding. Second, regulators are calling for redesign, not just surveillance. Validity and defensibility are now central quality risks. Third, technology finally allows oral assurance to scale. What was once operationally impossible is now practical across large cohorts. Together, they create a rare opportunity. Not to abandon written assessment. But to strengthen it.
A new foundation for secure assessment
Written assessment will remain foundational for learning. But in a GenAI era, it can no longer stand alone. Institutions that succeed in the next decade will be those that move beyond surveillance and redesign assessment around layered, learning‑centred assurance. The question is no longer whether vivas belong in modern assessment. It is whether universities are ready to design secure assessment around the one signal AI still struggles to fake: a student who can explain what they know.
At Cadmus, we’re partnering with universities to explore how targeted viva moments can strengthen learning assurance at scale—without surveillance or unsustainable workload. More on this work soon.