
Privacy-First Grading: Why It Matters More Than You Think

AEMS Team · 4 min read

privacy · gdpr · higher-education · security

When universities evaluate AI marking tools, the conversation usually starts with accuracy: how often does the AI get the mark right? That is the wrong first question. The right first question is: where does the exam data go, and who can see it?

Student exam submissions are among the most legally protected data that a university handles. They contain academic performance records, and in many jurisdictions fall under the same protections as health records. Getting this wrong is not a configuration problem — it is a regulatory one.

The three deployment models and what they mean for privacy

There is no universal answer to “is AI marking private?” It depends entirely on the deployment model.

Local processing means the exam PDF never leaves the examiner’s computer. The AI model runs locally (via tools like Ollama), or the examiner uses a cloud API key that routes their own data directly to a provider under their own terms of service. The university’s IT department is not involved, and AEMS’s servers see nothing. This is the Personal plan model.
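To make the "never leaves the examiner's computer" claim concrete, here is a minimal sketch of what a local marking call might look like against Ollama's HTTP API, which listens on localhost by default. The model name, prompt format, and helper function are illustrative, not AEMS's actual implementation:

```python
from urllib.parse import urlparse

# Ollama serves its generate API on the loopback interface by default,
# so the exam text in the request body never traverses the network.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_marking_request(exam_text: str, rubric: str) -> dict:
    """Assemble the request body sent to the locally hosted model.
    (Hypothetical prompt format for illustration only.)"""
    return {
        "model": "llama3",  # any model already pulled onto this machine
        "prompt": f"Rubric:\n{rubric}\n\nAnswer:\n{exam_text}\n\nMark out of 10:",
        "stream": False,
    }

request_body = build_marking_request(
    "Photosynthesis converts light energy into chemical energy...",
    "Award marks for naming inputs, outputs, and the role of chlorophyll.",
)
host = urlparse(OLLAMA_URL).hostname
print(host)  # 'localhost' -- inference stays on this machine
```

The point of the sketch is the hostname: when inference is bound to localhost, there is no third party in the data path to audit.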

EU-hosted processing means exams are uploaded to a hosted service operating in EU data centres. The service processes submissions using a configurable AI provider, stores results under a defined retention schedule, and purges data automatically. No exam content is used to train AI models. This is the Department plan model, and it is subject to GDPR data processing agreements.
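A "defined retention schedule" with automatic purging reduces, in code, to a cutoff comparison. The sketch below assumes a configurable retention window and an `uploaded_at` timestamp on each stored result; all field names and the 90-day figure are illustrative:

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90  # assumed configurable per the DPA

def purge_expired(records: list[dict], now: datetime) -> list[dict]:
    """Return only the records still inside the retention window;
    everything older than the cutoff is dropped."""
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [r for r in records if r["uploaded_at"] >= cutoff]

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
records = [
    {"id": "exam-1", "uploaded_at": now - timedelta(days=10)},   # kept
    {"id": "exam-2", "uploaded_at": now - timedelta(days=200)},  # purged
]
kept = purge_expired(records, now)
print([r["id"] for r in kept])  # ['exam-1']
```

The contractual question for a DPO is not whether such logic exists but whether the retention period and the purge guarantee are written into the DPA.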

On-premises deployment means AEMS runs entirely inside the university’s own infrastructure. The university controls all data residency, can use its own AI servers (including air-gapped models), and AEMS’s external servers are never contacted for anything except license validation. This is the Institutional plan model.

The question is not which model is “most private” in the abstract — it is which model matches your institution’s data governance requirements.

What to ask a vendor

Before adopting any AI marking tool, ask:

  1. Where is inference performed? The AI model processing the exam — is it inside your infrastructure, inside the EU, or in a data centre of unknown jurisdiction?
  2. Is exam content stored? If so, for how long, and what is the deletion process?
  3. Who has access? Can the vendor’s engineers access individual student submissions for debugging?
  4. Is content used for training? This should be a contractual guarantee in the DPA, not just a policy claim.
  5. What happens on a breach? What is the notification timeline, and who are the sub-processors?

These are questions that procurement teams, data protection officers, and IT security departments should ask before any pilot begins — not after.

The regulatory context

In the EU, processing student exam data under a hosted model requires a signed Data Processing Agreement (DPA) between the university (data controller) and the vendor (data processor). The DPA must specify processing purposes, sub-processors, data residency, retention periods, and breach notification procedures.

For institutions operating under sector-specific frameworks — UK GDPR, FERPA in the US, or localised variants — the requirements vary, but the core principle is consistent: you need to know where the data goes and have contractual guarantees about how it is handled.

AEMS provides a DPA template as part of the enterprise onboarding process. If you are evaluating AEMS for a departmental or institutional deployment, requesting the DPA is the right place to start.

Privacy as a design principle, not a checkbox

The most important thing about privacy-first design is that it is architectural, not cosmetic. You cannot add privacy to a system that was not designed for it. A tool that was built to aggregate data centrally cannot credibly offer the same guarantees as one that was designed from day one to keep data local.

AEMS was built for universities that need to satisfy their data protection officers, not just their examiners. The architecture — local-first for individuals, EU-hosted for departments, on-premises for institutions — reflects that priority.

The accuracy question matters too. But if the tool cannot pass your DPO’s review, the accuracy question is moot.