Key features
This page describes how CodeSpell handles the coding experience, from writing code to execution, analysis, and feedback.
CodeSpell is designed to give students a simple environment to learn programming while providing powerful automation and insights for professors.
Coding Environment
CodeSpell provides a built-in coding environment where students can:
- Read the assignment or challenge documentation
- View the goals they need to achieve
- Write and edit their code
- Execute their solution quickly
- Submit new attempts
The environment is intentionally minimal and focused. It avoids advanced automation so students can truly learn the syntax and fundamentals of the programming language they are studying.
Code Execution
When a student executes or submits code, it runs inside isolated environments managed by CodeSpell.
This ensures:
- Safe execution of untrusted code
- Consistent and reproducible results
- Fair performance measurements
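As a rough illustration of the isolation idea (not CodeSpell's actual sandbox, which is not documented here), a submission can be run in a separate process with a hard timeout:

```python
import subprocess

def run_submission(path: str, timeout_s: int = 5) -> subprocess.CompletedProcess:
    """Run an untrusted script in a separate process with a hard timeout.
    Real sandboxes add filesystem and network isolation on top of this,
    e.g. via containers."""
    return subprocess.run(
        ["python3", path],
        capture_output=True,  # collect stdout/stderr for later reporting
        text=True,
        timeout=timeout_s,    # terminate runaway submissions
    )
```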
During execution, CodeSpell collects:
- Program output and runtime errors
- Execution time
- CPU usage
- Memory usage
- I/O reads and writes
These metrics are stored and used later for evaluation and analytics.
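A minimal sketch of how such metrics can be gathered on a Unix host with Python's standard library follows; the field names are illustrative, not CodeSpell's storage schema:

```python
import resource
import subprocess
import time

def run_and_measure(path: str, timeout_s: int = 5) -> dict:
    """Run a script and collect rough execution metrics (Unix only).
    Field names are illustrative, not CodeSpell's actual schema."""
    start = time.perf_counter()
    proc = subprocess.run(
        ["python3", path], capture_output=True, text=True, timeout=timeout_s
    )
    wall_time = time.perf_counter() - start
    usage = resource.getrusage(resource.RUSAGE_CHILDREN)  # finished child stats
    return {
        "stdout": proc.stdout,
        "stderr": proc.stderr,           # runtime errors appear here
        "exit_code": proc.returncode,
        "wall_time_s": wall_time,
        "cpu_time_s": usage.ru_utime + usage.ru_stime,
        "max_rss_kib": usage.ru_maxrss,  # peak memory (KiB on Linux)
        "io_reads": usage.ru_inblock,    # block input operations
        "io_writes": usage.ru_oublock,   # block output operations
    }
```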
Interactive Challenges (3D Game Runtime)
For interactive challenges, execution also drives a real-time 3D game renderer.
This allows students to:
- Control a game character using their code
- Interact with a virtual environment
- See the results of their logic immediately
This feature transforms practice into an interactive learning experience and encourages experimentation.
This feature is currently under development and will be available soon.
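As an illustration of the concept, a student solution might steer the character through a small movement API; the names below are hypothetical, not CodeSpell's actual interface:

```python
def solve(character, world):
    # Hypothetical API: neither `character` nor `world` reflects
    # CodeSpell's documented interface.
    while world.is_clear_ahead(character):
        character.move_forward()  # advance one step per call
    character.interact()          # act on whatever blocked the path
```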
Assessment Phase
The assessment phase runs in parallel with code execution and focuses on checks that do not depend on runtime results.
This phase uses static analysis and automated testing to evaluate the code structure and quality.
Static analysis metrics
CodeSpell analyzes the code in three main categories:
Complexity
Measures the structural complexity of the code, including:
- Lines of code
- Average lines per function
- Cyclomatic complexity
- Function size and structure
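As a sketch of how these measurements can be taken on Python source with the standard ast module (a simplified approximation, not CodeSpell's actual analyzer):

```python
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)

def complexity_summary(source: str) -> dict:
    """Approximate the metrics listed above for Python source.
    A simplified sketch, not CodeSpell's actual analyzer."""
    tree = ast.parse(source)
    funcs = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
    code_lines = [line for line in source.splitlines() if line.strip()]
    return {
        "lines_of_code": len(code_lines),
        "function_count": len(funcs),
        "avg_lines_per_function": (
            sum(f.end_lineno - f.lineno + 1 for f in funcs) / len(funcs)
            if funcs else 0.0
        ),
        # Simplified cyclomatic complexity: 1 + number of branch points.
        "cyclomatic_complexity": 1 + sum(
            isinstance(n, BRANCH_NODES) for n in ast.walk(tree)
        ),
    }
```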
Quality
Detects issues that impact maintainability:
- Code smells
- Bad practices
- Typos and style issues
When a problem is found, a detailed entry is created with:
- Description of the issue
- Location in the code
- Category and severity
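Conceptually, each entry can be modeled like the record below; the field names are assumptions for illustration, not CodeSpell's actual schema:

```python
from dataclasses import dataclass

@dataclass
class QualityIssue:
    """Illustrative shape of a reported issue; field names are
    assumptions, not CodeSpell's actual schema."""
    description: str  # human-readable explanation of the problem
    file: str         # location in the code
    line: int
    category: str     # e.g. "code smell", "style"
    severity: str     # e.g. "info", "warning", "error"
```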
Security
Identifies potential vulnerabilities and risky patterns in the student's code.
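A toy sketch of what pattern-based detection can look like, flagging calls that are commonly treated as risky (not CodeSpell's actual scanner):

```python
import ast

RISKY_CALLS = {"eval", "exec"}  # illustrative examples of risky builtins

def find_risky_calls(source: str) -> list[int]:
    """Return line numbers of calls to known-risky builtins.
    A toy sketch of pattern-based detection, not CodeSpell's scanner."""
    return [
        node.lineno
        for node in ast.walk(ast.parse(source))
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id in RISKY_CALLS
    ]
```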
Unit tests
Professors can configure unit tests that run automatically on every submission. These tests verify the correctness of the student’s solution and are later used in grading.
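For example, a professor-defined test might look like the pytest-style sketch below; the module name `solution` and the function `fibonacci` are hypothetical:

```python
# A pytest-style sketch of a professor-defined test. The module name
# `solution` and the function `fibonacci` are hypothetical.
from solution import fibonacci

def test_base_cases():
    assert fibonacci(0) == 0
    assert fibonacci(1) == 1

def test_small_values():
    assert fibonacci(10) == 55
```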
Evaluation Phase
The evaluation phase starts after execution and assessment are complete.
This phase combines all collected data and runs the tests defined by the professor, grading the submission automatically against the criteria the professor has set.
Evaluation can include:
- Output checks
- Unit tests
- Game position checks (for challenges)
- Complexity thresholds
- Quality thresholds
- Security thresholds
Each test produces a score that contributes to the final grade.
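Conceptually, a weighted aggregation like the sketch below can turn per-test scores into a grade; the weights and names are illustrative, not CodeSpell's grading formula:

```python
def final_grade(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-test scores in [0, 1] into a weighted grade out of 100.
    Illustrative only; professors define the real criteria and weights."""
    total_weight = sum(weights.values())
    return 100 * sum(scores[name] * weights[name] for name in weights) / total_weight

# Example: unit tests dominate, output checks and thresholds refine.
grade = final_grade(
    scores={"output": 1.0, "unit_tests": 0.8, "complexity": 1.0},
    weights={"output": 2.0, "unit_tests": 5.0, "complexity": 1.0},
)  # -> 87.5
```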
From Execution to Feedback
After evaluation finishes, CodeSpell combines:
- Execution metrics
- Assessment results
- Evaluation scores
These results power:
- Dashboards and analytics
- Automated grading
- AI-generated feedback
Students receive clear, actionable information to improve their code, while professors gain scalable and consistent evaluation tools.
AI feedback
AI can turn metrics, tests, and errors into student-friendly explanations and actionable suggestions.
This helps students who are starting out and may not understand technical feedback, while still providing value to more advanced students.
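As a sketch of the idea (not CodeSpell's actual prompts or model), a raw error can be wrapped into a beginner-oriented request for a language model:

```python
def feedback_prompt(error: str, code_excerpt: str) -> str:
    """Wrap a raw error into a beginner-oriented request for an LLM.
    The phrasing is a sketch; CodeSpell's prompts and model are not
    documented here."""
    return (
        "Explain the following Python error to a beginner in plain "
        "language, then suggest one concrete fix.\n\n"
        f"Error:\n{error}\n\n"
        f"Code:\n{code_excerpt}"
    )
```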
This feature is currently under heavy development. AI-generated error explanations are already available; more detailed AI-generated feedback and suggestions will be available soon.