IITJ-CLARITY-Lab/CLARITY-Assignment-Checker
AI-Powered Assignment Checker

This is a command-line tool that uses a local Ollama language model to automatically grade programming assignments. It compares student submissions against a sample solution, generates marks in JSON format, and presents the results in a clean, editable HTML dashboard.

Features

  • Automated Grading: Leverages a local LLM (via Ollama) to evaluate code submissions.
  • Structured Output: Generates detailed JSON files for each submission, including marks, confidence scores, and reasoning.
  • Interactive Dashboard: Produces a single HTML file that provides a comprehensive overview of all grades, allowing for manual edits and recalculations.
  • CLI Interface: Easy to use from the terminal with simple arguments.
  • Extensible: Built with a clear structure and can be adapted for different types of assignments.

Prerequisites

  • Python 3.6+
  • An active Ollama instance running on http://localhost:11434. You can download Ollama from https://ollama.com/.
  • A model downloaded for Ollama (e.g., ollama pull llama2).
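Before grading, it can be useful to confirm that the Ollama server is actually reachable. A minimal sketch using only the Python standard library (this helper is illustrative, not part of the tool itself):

```python
import urllib.request
import urllib.error


def ollama_running(base_url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if a server responds at base_url, False otherwise."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        # Connection refused, timeout, or DNS failure: server not reachable.
        return False


if __name__ == "__main__":
    print("Ollama reachable:", ollama_running())
```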

Installation

  1. Clone the repository or download the files.

  2. Install the required Python packages:

    pip install -r requirements.txt

Usage

This tool runs in an interactive mode with an autodetection feature to speed up the setup process.

Quick Start with Autodetection

For the fastest experience, structure your project as follows:

  • Question File: Place your assignment description in question.md or question.txt in the project's root folder.
  • Solution File: Place your single sample solution file inside the solutions/ directory.
  • Submissions: Place all student submission files inside the submissions/ directory.
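The layout above can be scaffolded in a few lines; the directory and file names match the autodetection conventions described here, while the helper itself is just a convenience sketch:

```python
from pathlib import Path


def scaffold(root: str = ".") -> None:
    """Create the layout the autodetection feature looks for:
    question.md, solutions/, and submissions/ under root."""
    base = Path(root)
    (base / "solutions").mkdir(parents=True, exist_ok=True)
    (base / "submissions").mkdir(parents=True, exist_ok=True)
    (base / "question.md").touch(exist_ok=True)


# Example: prepare an empty assignment workspace.
scaffold("demo_assignment")
```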

Interactive Workflow

  1. Start the tool:

    python3 checker.py
  2. Autodetection & Confirmation: The tool will first look for the files and directories mentioned above. If it finds them, it will ask you to confirm their use with a (Y/n) prompt.

  3. Manual Input (Fallback): If any path is not detected or you decline the suggestion, the tool will prompt you to enter the information manually.

  4. Select a Model: Next, the tool will show you a list of your available local Ollama models. Enter the number corresponding to the model you wish to use.

  5. Grading Process: The checker will begin the grading process, showing progress as it evaluates each submission.
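Under the hood, grading with a local model generally amounts to one request per submission to Ollama's /api/generate endpoint. The sketch below only assembles such a request body; the prompt wording and structure are illustrative assumptions, not the tool's actual internals:

```python
import json


def build_payload(model: str, question: str, solution: str, submission: str) -> dict:
    """Assemble a non-streaming Ollama /api/generate request body."""
    prompt = (
        "You are grading a programming assignment.\n"
        f"Question:\n{question}\n\n"
        f"Sample solution:\n{solution}\n\n"
        f"Student submission:\n{submission}\n\n"
        "Reply with JSON containing marks, a confidence score, and reasoning."
    )
    return {"model": model, "prompt": prompt, "stream": False}


payload = build_payload("llama2", "Write FizzBuzz.", "<solution code>", "<student code>")
body = json.dumps(payload)  # POST this to http://localhost:11434/api/generate
```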

Output

  • JSON Files: For each submission, a corresponding .json file will be created in the results/ directory.
  • HTML Dashboard: A single dashboard.html file will be generated in the results/ directory. You can open this file in any web browser to view, review, and edit the grades.
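If you want to post-process results outside the dashboard, the per-submission JSON files are easy to aggregate. In this sketch the marks field name is an assumption based on the description above; check an actual output file for the real schema:

```python
import json
from pathlib import Path


def collect_marks(results_dir: str = "results") -> dict:
    """Map each submission name to its 'marks' value (None if the
    field is absent). Assumes one JSON file per submission."""
    marks = {}
    for path in Path(results_dir).glob("*.json"):
        data = json.loads(path.read_text())
        marks[path.stem] = data.get("marks")  # field name assumed
    return marks
```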
