AEMS

Documentation

Everything you need to get started with AEMS, from your first exam to publishing grades.

Personal Quickstart

The Personal plan runs entirely on your own computer. No account needed, no data sent to external servers (beyond whichever AI provider you choose).

1. Download and install

Download the AEMS desktop app for your operating system from the releases page.

  • Windows — Run aems-agent-setup.exe. AEMS Agent will appear in your Start menu and system tray.
  • macOS — Open AEMS-Agent.dmg, drag AEMS Agent to Applications.
  • Linux — Extract aems-agent-linux.tar.gz and run the binary.
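
For the Linux step, a minimal sketch (the archive name matches the release naming above; the binary name inside it is an assumption, so check the extracted contents):

```shell
# Extract and launch the Linux build of AEMS Agent.
# The binary name inside the archive is an assumption; list the
# extracted files if it differs.
if [ -f aems-agent-linux.tar.gz ]; then
  tar -xzf aems-agent-linux.tar.gz
  ./aems-agent
else
  echo "aems-agent-linux.tar.gz not found; download it from the releases page first"
fi
```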

2. Connect an AI provider

AEMS needs access to a vision-capable AI model to read and assess exam submissions. You have two options:

  • Local model (Ollama) — Install Ollama, pull a vision model (e.g., llava), and point AEMS to http://localhost:11434. All inference stays on your machine.
  • Cloud API key — Enter an API key from Anthropic (Claude), OpenAI (GPT-4o), or Google (Gemini). Submissions are sent to the provider for inference, subject to their data policies.
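
For the Ollama route, the commands below sketch the setup; /api/tags is Ollama's model-listing endpoint, and llava is one of several vision models it can serve:

```shell
# Install Ollama from ollama.com, then pull a vision-capable model:
#   ollama pull llava
# Check that the local server AEMS will talk to is reachable:
if curl -fsS http://localhost:11434/api/tags >/dev/null 2>&1; then
  echo "ollama: reachable at http://localhost:11434"
else
  echo "ollama: not running; install it and run 'ollama serve' first"
fi
```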

3. Prepare your exam materials

  • Exam PDFs — One PDF per student, or a single multi-student PDF that AEMS will split automatically.
  • Marking scheme — A YAML file defining your rubric checks, or a rubric built with the built-in marking scheme builder. Each check describes what to look for and how many marks it is worth.
  • Reference solution (optional) — A PDF of the model answer. This helps the AI understand what a correct response looks like.
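
A marking scheme for one question might look like the following. The field names here are illustrative assumptions, not AEMS's actual schema; use the built-in builder to see the real format:

```yaml
# Hypothetical marking-scheme sketch; field names are assumptions,
# not AEMS's documented schema.
exam: Mechanics of Materials - Final
checks:
  - id: q1-newton
    description: Correct application of Newton's second law (F = ma)
    marks: 3
    common_errors:
      - Sign error in the free-body diagram
  - id: q1-units
    description: Final answer stated with correct SI units
    marks: 1
```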

4. Mark your first exam

  1. Open AEMS and click New Exam.
  2. Upload your exam PDFs and marking scheme.
  3. Click Start Marking. AEMS will process each submission against your rubric.
  4. Review the results: each PDF is annotated with colour-coded marks and feedback (green for correct, red for errors, amber for partial credit).
  5. Adjust any grades you disagree with. Click Approve when satisfied.
  6. Export the results as a CSV, or simply keep the annotated PDFs.
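
The CSV export can go straight into a spreadsheet or be processed from the command line. A sketch, assuming a simple per-student layout (the column names are illustrative, not AEMS's documented export format):

```shell
# Write a sample results file in an assumed AEMS-like CSV layout,
# then compute the class average with awk.
cat > results.csv <<'EOF'
student_id,total_marks,max_marks
s001,18,25
s002,22,25
EOF
awk -F, 'NR > 1 { sum += $2; n++; max = $3 } END { printf "average: %.1f/%s\n", sum/n, max }' results.csv
```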

5. Activate your license

AEMS runs in evaluation mode with full functionality. When you are ready, purchase a license from the Plans page. Enter the license key in Settings → License. The key is validated once online, then works offline for up to 30 days between check-ins.

Department Quickstart

The Department plan is a hosted service — no installation required. Your team signs in through a browser and works in a shared workspace.

1. Create your workspace

  1. Go to app.aems.app/signup and create an account with your university email.
  2. Verify your email address.
  3. Name your workspace (e.g., “Mechanics of Materials — Spring 2026”).

2. Invite your teaching team

Go to Settings → Team and invite colleagues by email. Each member can be assigned a role: Examiner (full marking access), Marker (can mark but not publish), or Viewer (read-only access to results).

3. Connect Canvas LMS

  1. Go to Settings → Canvas LMS.
  2. Enter your Canvas instance URL (e.g., https://canvas.university.se).
  3. Generate an API token in Canvas (Account → Settings → New Access Token) and paste it into AEMS.
  4. Select the course and assignment. AEMS will fetch the student submission list.
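
Before pasting the token into AEMS, you can sanity-check it against the Canvas REST API. A sketch using the example instance URL from step 2 (/api/v1/users/self is a standard Canvas endpoint that returns the token's owner):

```shell
# Verify a Canvas API token before entering it into AEMS.
# CANVAS_URL is the example instance from step 2; export CANVAS_TOKEN yourself.
CANVAS_URL="https://canvas.university.se"
if [ -z "${CANVAS_TOKEN:-}" ]; then
  echo "canvas: set CANVAS_TOKEN first"
elif curl -fsS -H "Authorization: Bearer $CANVAS_TOKEN" \
     "$CANVAS_URL/api/v1/users/self" >/dev/null 2>&1; then
  echo "canvas: token accepted"
else
  echo "canvas: token rejected or instance unreachable"
fi
```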

4. Mark and calibrate

  1. Upload your marking scheme or build one in the rubric editor.
  2. Run AI-assisted marking on all submissions.
  3. Use the Calibration view to compare how different markers (human or AI) scored the same papers. Resolve discrepancies before publishing.

5. Publish to Canvas

When the team is satisfied with the results, click Publish to Canvas. Grades and annotated PDFs are pushed directly to the Canvas gradebook. Students see their feedback in the assignment view.

Use Training Mode (Settings → Canvas LMS) to simulate publishing without actually posting grades — useful for validating your workflow before the first real exam.

Institutional Deployment

The Institutional plan deploys AEMS within your university’s network. Your IT team manages the infrastructure; we provide the software, documentation, and support.

Prerequisites

  • Docker and Docker Compose (or a Kubernetes cluster)
  • PostgreSQL 14+ database
  • Redis 7+ instance
  • An AI provider endpoint accessible from your network (cloud API or self-hosted model server)
  • A reverse proxy (Nginx recommended) for TLS termination

1. Deploy with Docker Compose

git clone https://github.com/your-org/aems-deploy.git
cd aems-deploy
cp .env.example .env
# Edit .env with your database credentials, AI provider keys, and secret key
docker compose up -d
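
The variable names below sketch what .env typically needs; they are assumptions, and .env.example in the repository is authoritative:

```ini
# Hypothetical .env sketch; see .env.example for the actual variable names.
DATABASE_URL=postgresql://aems:change-me@db:5432/aems
REDIS_URL=redis://redis:6379/0
SECRET_KEY=generate-a-long-random-string
AI_PROVIDER_API_KEY=your-provider-key
```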

AEMS will be available at the host and port you configured. Place it behind your reverse proxy with a valid TLS certificate.
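
A minimal Nginx server block for TLS termination might look like this; the hostname, certificate paths, and upstream port are assumptions for your environment:

```nginx
server {
    listen 443 ssl;
    server_name aems.university.example;

    ssl_certificate     /etc/ssl/certs/aems.crt;
    ssl_certificate_key /etc/ssl/private/aems.key;

    location / {
        proxy_pass http://127.0.0.1:8080;  # host:port configured in .env
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```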

2. Configure SSO

AEMS supports SAML 2.0 for single sign-on. Configure your identity provider (e.g., Azure AD, Shibboleth) to trust AEMS as a service provider. See the SSO configuration guide for metadata exchange details.

3. Set data retention policies

In the admin panel, configure how long exam PDFs and grading results are retained. Automated purge jobs run on your schedule. All retention settings are audit-logged.

4. Ongoing operations

  • Updates — Pull the latest Docker image and restart. Release notes are published on our changelog.
  • Backups — Follow your standard database backup procedures. AEMS stores all state in PostgreSQL and Redis.
  • Monitoring — AEMS exposes a /health endpoint for uptime checks and a /metrics endpoint for Prometheus-compatible monitoring.
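
For Prometheus, a scrape job against the /metrics endpoint might look like this (the job name and target are assumptions for your deployment):

```yaml
# Hypothetical Prometheus scrape config for the AEMS /metrics endpoint.
scrape_configs:
  - job_name: aems
    metrics_path: /metrics
    static_configs:
      - targets: ["aems.internal:8080"]
```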

Key concepts

Marking schemes and rubrics

A marking scheme defines the criteria against which each submission is assessed. Each check in the rubric describes what to look for (e.g., “correct use of Newton’s second law”), how many marks it is worth, and optionally what common errors to watch for. AEMS applies each check independently to every submission.

Annotations

AEMS writes feedback directly onto the exam PDF as colour-coded annotations. Green indicates correct work, red marks errors, and amber shows partial credit. Each annotation includes a short explanation referencing the relevant rubric check.

AI models and providers

AEMS uses vision-capable AI models that can read images of handwritten, typed, and printed text. Supported providers include Anthropic (Claude 3.5+), OpenAI (GPT-4o+), Google (Gemini 1.5/2.0+), and local models via Ollama. The model reads each submission page as an image and extracts the student’s work before applying the rubric.

Human-in-the-loop review

Every AI-proposed grade is a suggestion. The examiner reviews the annotated PDF, adjusts individual marks where the AI got it wrong, and approves the final grade. AEMS maintains a full audit trail of the original AI proposal, any human adjustments, and the final approved result.

Memory and calibration

AEMS learns from your corrections. When you adjust a grade, that correction informs future marking on similar questions. This multi-tier memory system operates at the course, exam, question, and user level — building consistency over time without overfitting to individual cases.

Need help?

If you run into issues or have questions not covered here, reach out and we will help you get set up.

contact@aems.app