Learning-centred rubrics in the age of AI

Why good assessment design matters more than ever

When generative AI entered higher education at scale, it triggered a familiar response: concern about misconduct, calls for better detection, and questions about how much automation is too much.

But for educators designing assessments day to day, a more practical question quickly followed:

If AI is now part of the assessment ecosystem, what foundations need to be strong first?

One answer consistently surfaces in both research and practice: marking rubrics. 

Rubrics sit at the intersection of learning, assessment, and judgement. When they’re well designed, they support clarity, fairness, and learning. They also provide an opportunity to encourage rigorous academic writing and assessment processes. 

Why rubrics still matter (and perhaps more now than ever)

Rubrics have long played an important role in higher education. They make expectations explicit, support consistency in marking, and help students understand what quality looks like. Well-designed rubrics improve transparency and reliability, particularly in large cohorts or when multiple markers are involved.

For students, rubrics also reduce uncertainty. When criteria and standards are clear, students are better able to plan their work, monitor their progress, and reflect on feedback. In this sense, rubrics are not just grading tools—they are learning tools.

The problem is that not all rubrics are designed this way.

Vague criteria, generic descriptors, or rubrics that aren’t clearly aligned to the task often leave students guessing and staff frustrated, potentially undermining learning. 

Once AI enters the picture, these weaknesses are magnified. If students don’t understand what quality looks like, or why it matters, they may default to the fastest path to a polished submission.

What high-quality rubrics have in common

Across the literature, effective rubrics tend to share several characteristics. At their core, they make judgement visible and standards legible to students.

They are:

  • clearly aligned with learning outcomes and disciplinary standards
  • built around specific, observable criteria rather than abstract traits
  • structured with meaningful performance levels that distinguish quality clearly
  • analytic in nature, supporting consistency and reliability in marking
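
To make the "analytic" point concrete, here is a minimal sketch in Python of how such a rubric might be represented as data: each criterion is tied to a learning outcome and described across explicit performance levels, and each criterion is judged separately rather than rolled into one holistic grade. The class and field names are illustrative assumptions, not a prescribed or Cadmus-specific format.

```python
from dataclasses import dataclass, field

# Performance levels shared by every criterion; an analytic rubric
# judges each criterion against this scale separately.
LEVELS = ["High Distinction", "Distinction", "Credit", "Pass", "Fail"]

@dataclass
class Criterion:
    name: str                    # a specific, observable criterion
    learning_outcome: str        # the outcome it is aligned with
    descriptors: dict[str, str]  # one descriptor per performance level

@dataclass
class Rubric:
    task: str
    criteria: list[Criterion] = field(default_factory=list)

    def is_complete(self) -> bool:
        """Check that every criterion describes every performance level."""
        return all(set(c.descriptors) == set(LEVELS) for c in self.criteria)

# One example criterion built around observable behaviour, not abstract traits.
synthesis = Criterion(
    name="Synthesis of sources",
    learning_outcome="Construct an evidence-based argument from the literature",
    descriptors={
        "High Distinction": "Integrates sources into a coherent, evidence-based argument",
        "Distinction": "Synthesises across sources with minor integration issues",
        "Credit": "Some synthesis, but sections rely on source-by-source summary",
        "Pass": "Little synthesis; sources discussed largely in isolation",
        "Fail": "No meaningful synthesis",
    },
)

rubric = Rubric(task="Literature review", criteria=[synthesis])
print(rubric.is_complete())  # True: all five levels are described
```

Representing criteria and levels explicitly like this is simply one way of making the structure visible; the judgement expressed in the descriptors remains the academic's.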

However, students often see rubrics primarily as grading instruments rather than as guides for learning, particularly when descriptors are ambiguous or overly general.

This highlights an important limitation: even good rubrics have limited impact if students only encounter them at the end of an assessment.

The value of rubrics increases significantly when criteria specific to the learning journey are integrated—encouraging drafting, feedback, revision, and self-evaluation along the way.

Using rubrics to assess writing processes and engagement

For much of higher education’s history, rubrics have been used primarily to evaluate final submissions. That still matters. But in an AI-enabled assessment landscape, the final product alone is no longer a reliable proxy for learning.

This is where many assessment designs are now evolving. 

Enter process-driven assessment.

When we talk about process-driven assessment, we’re talking about assessments that are designed to make learning visible as it happens, not just at the point of final submission.

In practice, this means structuring tasks so students engage in meaningful stages: planning, researching, drafting, receiving feedback, revising, and reflecting. Each stage contributes to learning rather than acting as a hoop to jump through. The focus shifts from evaluating a single artefact to understanding how students develop ideas, make decisions, and respond to feedback over time.

This approach isn’t new. It draws on well-established research in writing pedagogy and formative assessment, which shows that learning deepens when students are supported through cycles of feedback and revision. What is new is the urgency. In an AI-rich context, where fluent text can be produced quickly, process-driven assessment helps ensure that assessment still captures thinking, judgement, and engagement.

Learning-centred rubrics play a critical role in this design. When rubrics are aligned to learning outcomes and embedded throughout the assessment process, they guide students as they work, not just explain a grade at the end. In this way, rubrics become tools for learning, feedback, and reflection on process. 

Process-aware rubrics make expectations explicit around learning behaviours that experienced academics already value, including:

  • sustained engagement across drafts and revisions
  • meaningful use of feedback
  • development of argument or voice over time
  • reflective decision-making and justification
  • appropriate and transparent use of generative AI

This does not mean lowering standards or replacing academic judgement with compliance metrics. It means articulating quality in relation to learning behaviours that actually matter. This is particularly important in disciplines where writing is a way of thinking, not just a way of reporting knowledge.

In an AI-rich context, this shift is critical. When fluent text can be generated quickly, assessment that focuses only on surface features risks rewarding efficiency over understanding. Rubrics that attend to process and engagement help re-centre assessment on the act of learning and make visible the intellectual work that AI cannot meaningfully perform on a student’s behalf.

When paired with assessment designs that surface drafts, checkpoints, feedback responses, and reflection, these rubrics support more defensible judgements about learning and send a clear message to students: how you work matters, not just what you submit.

Making process-driven assessment practical

AI hasn’t changed what good assessment looks like, but it has raised the stakes.

Clear learning outcomes, learning-centred rubrics, and thoughtful academic judgement still matter. What’s changed is the pressure to deliver these consistently, across large cohorts, with students using AI as part of their everyday study practices.

Most educators already know what good assessment looks like. The hard part is finding the time and the systems to do it well, consistently, and at scale.

Designing process-driven assessments takes care. Writing learning-centred rubrics takes intention. Supporting drafting, feedback, reflection, and appropriate AI use across a cohort is difficult to sustain without the right structures in place.

This is where Cadmus helps.

Cadmus brings assessment design, rubrics, drafting, feedback, reflection, and learning insights into one connected environment. Rubrics don’t sit at the end of a task; they guide students as they work. Students see criteria while drafting, use them during feedback and revision, and return to them when reflecting on their learning.

Drafts, checkpoints, and feedback are built into the assessment flow, making learning visible well before final submission. That visibility matters in an AI-rich context. When students are supported to engage with the process, the incentive to shortcut learning drops away.

Integrity is no longer something enforced after submission. It’s designed into the assessment itself through a proactive, preventative and educative approach.

For institutions, this approach scales. Instead of relying on individual academics to redesign assessment each semester, Cadmus supports consistent, learning-centred practice across courses and programs, while still preserving disciplinary nuance and academic judgement.

In the age of AI, the goal shouldn’t be tighter control over student work. It should still be to ensure learning.

Below is an example of a process-aware, learning-centred rubric. 

Process-aware, learning-centred rubric

1. Engagement with the Research Process (Planning, reading, development over time)

  • High Distinction: Sustained, purposeful engagement across the task. Clear evidence of early planning, iterative reading, drafting, and refinement of focus. Process shows developing biological understanding over time.
  • Distinction: Consistent engagement with clear progression. Evidence of planning and revision is present, with minor gaps in depth or timing.
  • Credit: Engagement evident but uneven. Some planning or revision shown, though development is limited or concentrated late in the task.
  • Pass: Limited evidence of process engagement. Work appears largely linear, with minimal planning or revision.
  • Fail: Little or no evidence of engagement with the research process. Work appears produced in a single stage.

2. Use and Evaluation of Scientific Sources (Reading and interpretation)

  • High Distinction: Selects high-quality, relevant peer-reviewed sources. Demonstrates strong understanding of biological methods, findings, and limitations. Critically evaluates relevance and quality.
  • Distinction: Appropriate peer-reviewed sources used. Accurate reporting of findings with some evaluation of methods or limitations.
  • Credit: Relevant sources used, but discussion is mostly descriptive. Partial understanding of methods or findings.
  • Pass: Limited relevance or understanding of sources. Heavy reliance on summary with minimal interpretation.
  • Fail: Sources are inappropriate, insufficient, or misunderstood. Major inaccuracies present.

3. Synthesis and Development of Scientific Argument (Thinking across sources)

  • High Distinction: Integrates sources to construct a coherent, evidence-based argument. Clearly synthesises patterns, debates, or gaps in the literature.
  • Distinction: Clear central focus with synthesis across sources. Argument is mostly coherent, with minor integration issues.
  • Credit: Some synthesis evident, but sections rely on source-by-source summary.
  • Pass: Little synthesis. Sources discussed largely in isolation.
  • Fail: No meaningful synthesis. Work consists of disconnected summaries or copied material.

4. Writing Development and Revision (Drafting, feedback, improvement)

  • High Distinction: Multiple drafts show clear improvement in structure, clarity, and scientific reasoning. Feedback is actively incorporated and reflected upon.
  • Distinction: Evidence of revision and improvement across drafts. Feedback addressed but not always fully integrated.
  • Credit: Some revision evident, but changes are mainly surface-level (e.g. wording).
  • Pass: Minimal revision. Limited or unclear response to feedback.
  • Fail: No meaningful revision. Draft and final submission are substantially the same.

5. Use of Generative AI and Academic Integrity (Judgement, transparency, reflection)

  • High Distinction: AI use (if any) is clearly declared and thoughtfully justified. Tools support planning, understanding, or refinement without replacing critical reading or judgement. Reflection shows strong AI literacy.
  • Distinction: AI use is declared and generally appropriate. Reflection demonstrates awareness of strengths and limitations of AI tools.
  • Credit: AI use is declared, but reflection is limited or descriptive. Some reliance on AI output without clear justification.
  • Pass: AI use is unclear or poorly reflected upon. Limited evidence of judgement in tool use.
  • Fail: AI use is not declared, misrepresented, or substitutes for engagement with the literature.

6. Scientific Communication and Referencing (Final submission quality)

  • High Distinction: Writing is clear, precise, and appropriate for a scientific audience. Terminology is accurate. Referencing is correct and consistent.
  • Distinction: Writing is clear with minor errors. Referencing is mostly correct.
  • Credit: Meaning generally clear, but expression or structure is inconsistent. Some referencing errors.
  • Pass: Writing is understandable but lacks clarity or precision. Referencing inconsistent.
  • Fail: Writing is unclear, inaccurate, or poorly structured. Referencing missing or incorrect.
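
As a rough, hypothetical illustration of why this rubric depends on more than the final artefact, the sketch below (in Python, with made-up groupings) tags each criterion with the kind of evidence a marker would draw on. Three of the six criteria cannot be judged from the final submission alone, which is why drafts, feedback responses, and reflection need to be surfaced during the task.

```python
# Hypothetical mapping from the example rubric's criteria to the evidence a
# marker would draw on. The groupings are illustrative assumptions only.
EVIDENCE_BY_CRITERION = {
    "Engagement with the research process": ["plan", "drafts", "checkpoints"],
    "Use and evaluation of scientific sources": ["drafts", "final submission"],
    "Synthesis and development of scientific argument": ["drafts", "final submission"],
    "Writing development and revision": ["drafts", "feedback responses"],
    "Use of generative AI and academic integrity": ["AI declaration", "reflection"],
    "Scientific communication and referencing": ["final submission"],
}

def process_only_criteria() -> list[str]:
    """Criteria that cannot be judged from the final submission alone."""
    return [
        criterion
        for criterion, evidence in EVIDENCE_BY_CRITERION.items()
        if "final submission" not in evidence
    ]

if __name__ == "__main__":
    # Prints the three criteria that rely entirely on process evidence.
    for criterion in process_only_criteria():
        print(criterion)
```

In practice, which evidence counts towards which criterion is an academic judgement made in the context of the task; the point of the sketch is simply that a meaningful share of the rubric has no basis at all without visible process.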
