How CodeSpell works

This page describes the end-to-end workflow, from content creation to student evaluation.

1) Professors create content

Professors organize practical work inside courses.

  • Create a course.
  • Add assignments (deadline-based) and/or challenges (no deadline; can include a 3D game).
  • Provide documentation (Markdown), goals, and evaluation rules.
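The structure above can be sketched as plain data. This is an illustrative model only, not CodeSpell's actual schema; every class and field name here is an assumption:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Assignment:
    title: str
    deadline: datetime          # assignments are deadline-based
    documentation_md: str = ""  # Markdown shown in the coding environment
    goals: list = field(default_factory=list)

@dataclass
class Challenge:
    title: str                  # challenges have no deadline
    has_3d_game: bool = False   # and can include a 3D game
    documentation_md: str = ""
    goals: list = field(default_factory=list)

@dataclass
class Course:
    name: str
    assignments: list = field(default_factory=list)
    challenges: list = field(default_factory=list)

# A professor assembles a course from assignments and challenges.
course = Course(name="Intro to Python")
course.assignments.append(Assignment(
    title="Lists and loops",
    deadline=datetime(2025, 3, 1),
    documentation_md="# Lists and loops\nIterate over a list of numbers...",
    goals=["Print every element", "Sum the elements"],
))
course.challenges.append(Challenge(
    title="Maze runner",
    has_3d_game=True,
    goals=["Reach the exit tile"],
))
```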

2) Students work in the coding environment

Students open an assignment or challenge in the CodeSpell coding environment, which typically includes:

  • A code editor
  • The practical work documentation
  • A list of goals and progress indicators

3) Students create submissions

When a student submits code, CodeSpell creates a submission and processes it.

Execution

Submissions are executed in isolated environments. CodeSpell stores:

  • Output and errors
  • Performance metrics (execution time, CPU/memory usage, I/O reads/writes)

For challenges, CodeSpell can also run a game renderer so the code can interact with the game at runtime.

Analysis

Regardless of whether execution succeeds or fails, CodeSpell analyzes the code and produces metrics in three groups:

  • Complexity (e.g., lines of code, cyclomatic complexity)
  • Quality (e.g., typos, code smells, bad practices)
  • Security (potential vulnerabilities)
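To make the first group concrete, here is a rough sketch of two complexity metrics for Python submissions; how CodeSpell actually computes them is not documented here, and the `analyze` helper and its simple branch-counting heuristic are assumptions:

```python
import ast

def analyze(source: str) -> dict:
    """Compute two illustrative complexity metrics for a submission."""
    tree = ast.parse(source)
    # Lines of code: non-blank source lines.
    loc = sum(1 for line in source.splitlines() if line.strip())
    # Cyclomatic complexity ~ 1 + number of branching constructs.
    branches = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)
    complexity = 1 + sum(isinstance(node, branches) for node in ast.walk(tree))
    return {"lines_of_code": loc, "cyclomatic_complexity": complexity}

metrics = analyze(
    "def f(x):\n"
    "    if x > 0:\n"
    "        return x\n"
    "    return -x\n"
)
```

Quality and security metrics typically come from dedicated analyzers (linters, smell detectors, vulnerability scanners) rather than a single traversal like this.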

Professors can also configure unit tests that run when a submission is created.
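In spirit, such tests load the student's code and run checks against it. The sketch below is an assumption about the mechanism, not CodeSpell's API, and `run_unit_tests` is a hypothetical helper (note that `exec` here stands in for the isolated execution described above):

```python
def run_unit_tests(student_code: str, tests: dict) -> dict:
    """Execute student code, then run professor-defined checks against it."""
    namespace = {}
    exec(student_code, namespace)   # illustrative only; real graders isolate this
    results = {}
    for name, check in tests.items():
        try:
            check(namespace)
            results[name] = "pass"
        except AssertionError:
            results[name] = "fail"
        except Exception:
            results[name] = "error"  # e.g., the required function is missing
    return results

# A professor-defined check: does add(2, 3) return 5?
def check_add(ns):
    assert ns["add"](2, 3) == 5

results = run_unit_tests(
    "def add(a, b):\n    return a + b\n",
    {"add_returns_sum": check_add},
)
```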

Evaluation

Evaluation uses data from execution, such as:

  • Output checks: match the program output against expectations.
  • Game position checks (challenges only): match character position to professor-defined targets.
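Both check types reduce to simple comparisons. As a sketch (the function names, normalization rule, and tolerance are assumptions; CodeSpell's actual matching rules may differ):

```python
def check_output(actual: str, expected: str) -> bool:
    """Output check: compare normalized program output with the expectation."""
    return actual.strip().splitlines() == expected.strip().splitlines()

def check_position(actual: tuple, target: tuple, tolerance: float = 0.5) -> bool:
    """Game position check (challenges only): is the character near the
    professor-defined target, within a per-axis tolerance?"""
    return all(abs(a - t) <= tolerance for a, t in zip(actual, target))

output_ok = check_output("hello\n", "hello")
position_ok = check_position((3.1, 0.0, 4.9), target=(3.0, 0.0, 5.0))
```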

4) Dashboards and feedback

CodeSpell exposes dashboards for both students and professors, and can optionally generate AI feedback (e.g., code review explanations and error explanations) based on metrics and test results.