Process-driven assessment isn’t more work—it’s better design

Over the past few articles, we’ve been exploring how assessment needs to evolve in an AI-rich higher education landscape.

In our first post, Designing Assessments in the Age of Gen AI, we argued that the integrity challenge created by generative AI isn’t primarily a detection problem. It’s a design problem. When assessment focuses only on the final artefact, students are incentivised to optimise for speed and polish rather than learning. Process-driven assessment—where drafting, feedback, revision, and reflection are part of the task—offers a more durable response.

In our second post, Learning-Centred Rubrics in the Age of AI, we zoomed in on rubrics. We explored how learning-centred, process-aware rubrics can assess not just what students submit, but how they engage with learning: their use of feedback, development of ideas over time, and transparent, appropriate use of AI.

Together, those pieces set out the why of process-driven assessment. This post focuses on the how—specifically, how to embed formative feedback into process-driven assessment without increasing workload. This is where process-driven assessment moves from a design principle to something educators can actually run, semester after semester.

If process-driven assessment and learning-centred rubrics are the goal, the real question many educators ask next is practical:

How do we do this without dramatically increasing workload?

What follows looks at how formative feedback—across drafts, peers, groups, and self-evaluation—can be built into process-driven assessment by design, rather than added on as extra work. And crucially, how the right structures and tools make this not only possible, but sustainable at scale.

Embedding formative feedback loops into process-driven assessment

For most educators, the idea of process-driven assessment is appealing in principle and intimidating in practice.

We know students learn more when they draft, receive feedback, revise, and reflect. We know formative feedback supports deeper understanding, stronger writing, and better academic judgement. And in an age of generative AI, we know that looking only at final submissions no longer tells us much about learning at all.

What often stops people isn’t disagreement. It’s workload.

The good news is this: process-driven assessment doesn’t work by adding more work. When designed well, it redistributes effort, shifts feedback earlier (where it’s cheaper and more effective), and uses structure so educators aren’t carrying the whole load alone.

The myth: process-driven assessment means more marking

It’s easy to assume that making learning visible means reading multiple drafts for every student and leaving long comments each time. That would be unsustainable.

But that’s not how effective process-driven assessment has to work.

Instead of asking educators to do more, it:

  • reduces student guesswork early in the task,
  • concentrates feedback at high-leverage moments,
  • diversifies who and what counts as feedback,
  • uses visibility to target support before problems escalate,
  • and ultimately produces more learning and better-quality submissions (which have the added bonus of being easier to mark).

Here’s how to do it:

1. Start by building the process into the task

The fastest way to create workload is to release an assessment that students don’t know how to approach. Confusion doesn’t stay contained; it becomes emails, extensions, late-stage panic, and post-grade disputes.

Process-driven assessment starts earlier, by making the learning journey explicit.

That means designing tasks that:

  • explain why the task matters,
  • break complex skills into manageable stages,
  • clarify what students should be doing at each point,
  • and surface opportunities for feedback before the final submission.

When the process is built into the task itself—through clear stages, prompts, and checklists—uncertainty drops dramatically. In Cadmus, pre-built templates and checklists do a lot of heavy lifting here. They don’t tell students exactly what to write—but they give them a structure for how to proceed. That structure reduces uncertainty, encourages earlier engagement, and makes support needs more predictable.

Designing the process once saves time every time the task runs.

2. Use a process-aware rubric to define quality across the journey

Once the process is clear, the next step is to make quality explicit—not just in the final submission, but across the learning journey itself.

This is where process-aware, learning-centred rubrics matter.

Traditional rubrics focus almost entirely on the final product. They describe what a polished submission looks like, but say very little about how students should get there. In an AI-rich context, that gap becomes a real problem.

A process-aware rubric does something different. It articulates expectations around learning behaviours that matter, such as:

  • sustained engagement across drafts,
  • effective use of feedback,
  • development of ideas over time,
  • reflective decision-making (including appropriate use of AI),
  • and discipline-specific ways of thinking.

This doesn’t lower standards or replace academic judgement. It clarifies where judgement is applied. In practice, the rubric becomes the shared language that coordinates self-assessment, peer feedback, tutorials, and final marking.

Most importantly, a process-aware rubric becomes the anchor for all feedback:

  • Students can self-assess meaningfully instead of guessing.
  • Peer feedback becomes more focused and useful.
  • Tutorials and workshops stay aligned to shared standards.
  • Thematic feedback can be framed against common criteria.
  • Final marking is faster and more defensible because the process is already visible.

When the rubric defines quality across the whole task, not just the endpoint, feedback becomes cumulative rather than fragmented.

3. Add one high-leverage checkpoint (not endless drafts)

Formative feedback only changes learning if students receive it while they still have time to act.

That doesn’t mean commenting on full drafts for everyone. It means choosing one (or maybe two) strategic checkpoints that unlock quality in the final submission.

Depending on what you care about most, this might be:

  • If you care about thinking: a plan or thesis statement + the key resources that will support the arguments
  • If you care about writing: an introduction + one body paragraph
  • If you care about evidence: a paragraph showing claim–evidence–reasoning with citations
  • If you care about the quality of resources: a short annotated bibliography
  • If you care about synthesis: a comparison table + a short synthesis paragraph
  • If you care about students engaging with core theory: a summary and/or critique of the theory that students need to apply in later stages of the assessment
  • If you care about appropriate use of AI: a short proposal of how students will or won’t use it 

These checkpoints focus feedback where it matters most, and make it much harder for AI-generated work to stand in for learning, because students’ thinking is already visible. They also dramatically reduce the amount of corrective work needed later (read: the quality of final submissions will be better and easier to mark).

Feedback is most expensive when it arrives too late. Early, targeted feedback is cheaper and far more effective at supporting the learning process. 

4. Diversify feedback so teachers aren’t the only source

One of the biggest workload traps in assessment is the assumption that all meaningful feedback must come from the teacher.

In practice, the most sustainable process-driven assessments treat feedback as an ecosystem, with different channels doing different jobs.

This can include:

  • Self-assessment, where students check progress against the rubric and identify next steps.
  • Peer feedback, structured around specific criteria and stages of the task.
  • Tutor-facilitated feedback, where common issues are addressed in class through discussion or skill-building activities.
  • Thematic or cohort-level feedback, where patterns are addressed once instead of repeated across individual scripts.
  • Targeted teacher feedback, reserved for complex conceptual issues or students who need extra support.

When these channels are coordinated by a shared rubric and clear process, students receive more feedback overall, while teacher workload decreases—a rare win-win in assessment design.

5. Use visibility to intervene early (instead of cleaning up late)

Process-driven assessment only works if educators can see what’s happening while the task is still running.

When you can see:

  • who has started,
  • who is stuck,
  • who is engaging late,
  • which resources are (or aren’t) being used,

you can take small, low-cost actions that prevent bigger problems later. A short reminder, a milestone reset, a clarifying example, or a brief in-class activity can redirect learning before it derails.

This kind of early intervention saves time on extensions, appeals, and post-assessment support (and, of course, marking).

This visibility turns feedback from reaction into guidance, and supports students to engage earlier and more consistently with the assessment.

Where Cadmus fits

Most educators already believe in these principles. The challenge is sustaining them across large cohorts, busy semesters, and multiple tutors.

Cadmus supports process-driven assessment by bringing process-aware rubrics, drafting, feedback, reflection, and learning insights into one connected workflow, from task design through to marking and learning assurance.

Rubrics guide students as they work, not just when grades are released. Drafts create real feedback moments without version chaos. Insights make engagement patterns visible early, so support can be targeted where it matters most.

In an AI-rich context, this matters deeply. When students are supported to engage with the process—to plan, draft, revise, reflect, and explain how tools were used—the incentive to shortcut learning drops away. Integrity isn’t enforced after submission; it’s designed into the assessment itself.

Designing for sustainability, not heroics

Process-driven assessment isn’t about asking educators to do more. It’s about designing assessment so learning happens earlier, feedback lands where it has impact, and effort is spent where it counts.

When assessment is built around process, supported by process-aware rubrics and a diverse feedback ecosystem, formative feedback stops being a burden and starts being a shared responsibility.

Design partnerships

For many institutions, the challenge isn’t understanding what process-driven assessment looks like in theory. It’s translating those principles into practice, across different disciplines, class sizes, and teaching contexts.

This is where the Cadmus Academic Team can support you. The Academic Team’s role is not to redesign assessments for educators, but to build shared capability so process-driven assessment becomes normal practice rather than an exception.

The Academic Team works alongside universities, faculties, and individual course teams to design assessments that are genuinely process-driven, learning-centred, and sustainable. Rather than offering generic training, the focus is on co-design: helping educators rethink existing assessments, rubrics, and feedback structures in ways that align with their disciplinary goals and institutional priorities.

In practice, this support can include:

  • partnering with faculties to redesign assessment frameworks at a program or course level,
  • helping disciplines articulate what “process” and “quality” look like in their context,
  • developing process-aware rubrics that align learning outcomes, feedback, and marking,
  • identifying high-leverage checkpoints that improve learning without increasing workload,
  • and supporting teaching teams to embed drafting, reflection, and appropriate AI use into assessment design.

This work is especially valuable at scale. When multiple courses or programs are working toward consistent assessment principles, such as scaffolded process, transparent expectations, and formative feedback, the Academic Team helps ensure that those principles are implemented coherently, not left to individual interpretation.

The result is assessment design that feels intentional rather than improvised: clearer for students, more defensible for staff, and far easier to sustain over time.

Case study: Redesigning assessment without redesigning everything


A first-year unit coordinator came to the Cadmus Academic Team with a familiar concern.

The subject had a large cohort, a traditional written assessment, and growing anxiety about AI use. Students were submitting fluent work, but markers were struggling to tell who actually understood the material. Draft feedback felt impossible at scale, and integrity conversations were becoming increasingly adversarial.

The coordinator didn’t want a wholesale redesign. They wanted to keep the task. They just wanted it to work better.

1. Make the process visible

Rather than replacing the assessment, the Academic Team worked with the coordinator to map the existing learning process that was already implied in the task—reading, synthesising sources, drafting an argument, refining ideas—but never made explicit.

Together, they introduced a Cadmus template and two low-cost checkpoints: a short synthesis table of proposed sources + a brief explanation of why they were relevant, and a paragraph draft focused on demonstrating argument and use of evidence.

Neither checkpoint was heavily weighted, but both would quickly highlight common misuses of AI: misaligned references, misunderstood sources, and weakly supported arguments.

2. Shift the rubric from product to process

Next, the rubric was revised. The criteria didn’t just describe the final essay. They articulated what quality looked like as students worked: engagement with sources, development of an argument across drafts, response to feedback, and clarity about how tools (including AI) were used.

Students saw the rubric from day one. Tutors referenced it during tutorials. Feedback was framed using the same language throughout the task. Suddenly, expectations stopped feeling mysterious.

3. Diversify feedback without increasing workload

Instead of adding more individual comments, feedback was redistributed:

  • Tutors used draft submissions to identify common issues and addressed them in class.
  • Students completed a brief self-review against the rubric at two points during the assessment.
  • Peers discussed exemplar processes and responses, using rubric criteria as a shared reference point.

The coordinator noticed something unexpected: fewer clarification emails, fewer extensions, and fewer integrity concerns—not because students were being monitored more closely, but because they understood what was expected and how to get there.

What changed—and what didn’t

The assessment weighting, learning outcomes, and task didn’t change. What changed was the design.

By making the learning process visible, aligning the rubric to that process, and embedding feedback at natural points along the way, the assessment became easier to teach, easier to mark, and harder to shortcut.

For students, the message was clear: how you work matters. For staff, the assessment provided richer insights into student engagement, better-quality final submissions, and greater confidence in student learning.

Why this matters in an AI-rich context: When fluent text can be produced quickly, the quality of learning is no longer reliably visible in the final artefact alone. Process-driven assessment—supported by learning-centred, process-aware rubrics—shifts the focus back to what matters: thinking, judgement, engagement, and growth over time.

The task didn’t change, and the workload didn’t increase. But student learning and teacher confidence did.

This post is part of an ongoing series exploring how assessment design is evolving in an AI-rich higher education landscape.

Across the series, we examine:

  • why integrity challenges are often design challenges,
  • how rubrics can be used to assess learning processes, not just final products,
  • and how formative feedback can be embedded into assessment without increasing workload.

Together, these pieces argue for a shift toward process-driven, learning-centred assessment—supported by clear design, thoughtful use of AI, and tools that make learning visible at scale.

If you’re exploring how to redesign assessment in response to AI—without increasing workload—our teaching guides, case studies, and resources show how these principles are being implemented across disciplines and institutions.
