A comprehensive, modular, and production-ready AI-powered exam proctoring system with advanced behavioral pattern detection. Built with a decoupled architecture for maximum flexibility and extensibility.
- ✅ Face Detection - Detect no face, single face, or multiple faces
- ✅ Gaze Tracking - Iris-based eye tracking to detect looking away
- ✅ Head Pose Estimation - Track head rotation (yaw, pitch, roll)
- ✅ Mouth Movement Detection - Detect talking and whispering
- ✅ Mouth Covering Detection - Identify attempts to hide the mouth
- ✅ Suspicious Object Detection - Detect phones, books, laptops, tablets
- ✅ Talking Detection - Identify normal speech above a configurable threshold
- ✅ Whispering Detection - Detect subtle audio below the talking threshold
- ✅ Audio Level Tracking - Continuous audio level monitoring
- ✅ Silence Detection - Track periods of silence
- ✅ Duration Tracking - Measure total talking/whispering time
- ✅ Multi-Modal Analysis - Correlate visual and audio events
- ✅ Time-Windowed Patterns - Detect patterns within configurable windows
- ✅ Suspicious Correlations - Identify cheating behaviors:
  - Looking away + talking + mouth moving
  - Looking left/right + whispering
  - Mouth covered + audio detected
  - Suspicious object + looking away
  - Multiple faces + audio detected
  - Rapid eye movements (reading notes)
  - Repeated tab switches
- ✅ Tab Switch Detection - Monitor when the student leaves the exam tab
- ✅ Focus Tracking - Detect window focus loss/gain
- ✅ Clipboard Monitoring - Track copy/paste/cut attempts
- ✅ Key Press Detection - Identify suspicious keyboard shortcuts
- ✅ Fullscreen Monitoring - Detect fullscreen exit
- ✅ Mouse Tracking - Monitor the mouse leaving the window
- ✅ Modular Design - Independent, swappable modules
- ✅ Event-Driven - Reactive architecture with an event system
- ✅ State Management - Centralized state with a subscription system
- ✅ Configurable - Extensive configuration options
- ✅ Extensible - Easy to add custom patterns and modules
- ✅ Model Cache - Models are cached for faster loading
## Installation
```bash
npm install @timadey/proctor
```
Or with yarn:
```bash
yarn add @timadey/proctor
```
## Quick Start
```javascript
import { ProctoringEngine } from '@timadey/proctor';
// 1. Initialize the engine
const engine = ProctoringEngine.getInstance({
// Enable/disable modules
enableVisualDetection: true,
enableAudioMonitoring: true,
enablePatternDetection: true,
enableBrowserTelemetry: true,
// Callbacks
onEvent: (event) => {
console.log('Proctoring event:', event);
// Send to your backend
},
onBehavioralPattern: (pattern) => {
console.warn('Suspicious pattern detected:', pattern);
// Alert supervisor
}
});
// 2. Initialize modules
await engine.initialize();
// 3. Start proctoring
const videoElement = document.getElementById('webcam');
engine.start(videoElement);
// 4. Get session summary anytime
const summary = engine.getSessionSummary();
console.log('Suspicious score:', summary.suspiciousScore);
// 5. Stop when exam ends
engine.stop();
```
- Installation
- Quick Start
- Architecture
- Modules
- Configuration
- Events
- Patterns
- API Reference
- Examples
- Browser Support
- Performance
- Security
- Contributing
- License
## Architecture
The system uses a decoupled, modular architecture where each component operates independently:
```
┌─────────────────────────────────────┐
│   ProctoringEngine (Orchestrator)   │
│  ┌──────────────┐ ┌──────────────┐  │
│  │ EventManager │ │ StateManager │  │
│  └──────────────┘ └──────────────┘  │
└────────────┬────────────────────────┘
             │
    ┌────────┼────────┬────────┐
    ▼        ▼        ▼        ▼
┌────────┐ ┌─────┐ ┌─────┐ ┌─────────┐
│Visual  │ │Audio│ │Pat  │ │Browser  │
│Module  │ │Mod  │ │Mod  │ │Telemetry│
└────────┘ └─────┘ └─────┘ └─────────┘
```
- ProctoringEngine - Main orchestrator coordinating all modules
- VisualDetectionModule - Computer vision and face tracking
- AudioMonitoringModule - Audio analysis and detection
- PatternDetectionModule - Behavioral pattern recognition
- BrowserTelemetryModule - Browser interaction monitoring
- EventManager - Centralized event handling and logging
- StateManager - Application state management
## Modules
### VisualDetectionModule
Handles all computer vision tasks using MediaPipe.
```javascript
// Access directly if needed
const visualState = engine.visualModule.getState();
console.log('Current gaze:', visualState.currentGazeDirection);
console.log('Number of faces:', visualState.numFaces);
console.log('Mouth moving:', visualState.isMouthMoving);
console.log('Object detected:', visualState.suspiciousObjectDetected);
```
Features:
- Face landmarking with 468 facial points
- Real-time gaze direction (left, right, up, down, center)
- Head pose angles (yaw, pitch, roll)
- Mouth aspect ratio for speech detection
- Object detection for unauthorized materials
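For intuition, the mouth aspect ratio compares the vertical lip opening to the mouth width. A rough sketch of the idea (illustrative only; indices 13/14 for the inner lips and 61/291 for the corners are common MediaPipe Face Mesh choices and may not match the module's internals):
```javascript
// Illustrative sketch of a mouth aspect ratio (MAR) computed from
// MediaPipe face landmarks; not necessarily the module's exact logic.
function mouthAspectRatio(landmarks) {
  const dist = (a, b) => Math.hypot(a.x - b.x, a.y - b.y);
  const vertical = dist(landmarks[13], landmarks[14]);    // inner upper/lower lip
  const horizontal = dist(landmarks[61], landmarks[291]); // mouth corners
  return vertical / horizontal;
}
// A ratio above mouthOpenRatioThreshold (default 0.15) would be
// treated as mouth movement.
```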
### AudioMonitoringModule
Monitors audio using the Web Audio API.
```javascript
// Access audio state
const audioState = engine.audioModule.getState();
console.log('Is talking:', audioState.isTalking);
console.log('Is whispering:', audioState.isWhispering);
console.log('Audio level:', audioState.currentAudioLevel, 'dB');
console.log('Total talking time:', audioState.totalTalkingDuration, 'ms');
```
Features:
- RMS (Root Mean Square) audio level calculation
- Talking detection (configurable threshold)
- Whispering detection (lower threshold)
- Audio level history tracking
- Silence duration tracking
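The dB values used throughout the configuration refer to this RMS-style measurement. As a rough illustration of the approach (a sketch, not the module's actual code):
```javascript
// Sketch: RMS level metering with a Web Audio AnalyserNode.
async function createLevelMeter() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const audioCtx = new AudioContext();
  const analyser = audioCtx.createAnalyser();
  audioCtx.createMediaStreamSource(stream).connect(analyser);
  const samples = new Float32Array(analyser.fftSize);

  return function levelInDb() {
    analyser.getFloatTimeDomainData(samples);
    let sumSquares = 0;
    for (const s of samples) sumSquares += s * s;       // mean of squares...
    const rms = Math.sqrt(sumSquares / samples.length); // ...then square root
    return 20 * Math.log10(Math.max(rms, 1e-8));        // to decibels (floored)
  };
}
// Readings above talkingThreshold (-45 dB by default) would count as talking;
// readings between whisperThreshold (-55 dB) and the talking threshold as whispering.
```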
### PatternDetectionModule
Detects suspicious behavioral patterns through correlation.
```javascript
// Get pattern summary
const patterns = engine.patternModule.getPatternSummary();
console.log('Suspicious patterns detected:', patterns);
```
Detected Patterns:
- `suspiciousTriplePattern` - Looking away + talking + mouth moving
- `lookingLeftWhispering` - Looking left while whispering
- `lookingRightWhispering` - Looking right while whispering
- `mouthCoveredWithAudio` - Mouth covered while audio detected
- `lookingAwayAndTalking` - Looking away while talking
- `objectAndLookingAway` - Suspicious object + looking away
- `multipleFacesWithAudio` - Multiple people + audio
- `headTurnedTalking` - Head turned + talking
### BrowserTelemetryModule
Monitors all browser interactions.
```javascript
// Get telemetry summary
const telemetry = engine.telemetryModule.getSummary();
console.log('Tab switches:', telemetry.tabSwitches);
console.log('Copy attempts:', telemetry.copyAttempts);
console.log('Paste attempts:', telemetry.pasteAttempts);
```
Monitored Actions:
- Tab visibility changes
- Window focus/blur events
- Copy/paste/cut operations
- Suspicious keyboard shortcuts (F12, Ctrl+C, etc.)
- Fullscreen changes
- Right-click attempts
- Mouse leaving window
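These hooks correspond to standard DOM events. A condensed sketch of the underlying idea (not the module's actual internals):
```javascript
// Sketch: the standard DOM events this kind of telemetry builds on.
document.addEventListener('visibilitychange', () => {
  if (document.hidden) console.log('tab hidden'); // ~ TAB_SWITCHED
  else console.log('tab visible');                // ~ TAB_RETURNED
});
window.addEventListener('blur', () => console.log('focus lost'));  // ~ WINDOW_FOCUS_LOST
document.addEventListener('paste', () => console.log('paste'));    // ~ PASTE_ATTEMPT
document.addEventListener('fullscreenchange', () => {
  if (!document.fullscreenElement) console.log('left fullscreen'); // ~ EXITED_FULLSCREEN
});
```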
## Configuration
```javascript
const engine = ProctoringEngine.getInstance({
// ===== Module Toggles =====
enableVisualDetection: true,
enableAudioMonitoring: true,
enablePatternDetection: true,
enableBrowserTelemetry: true,
// ===== Visual Detection Options =====
detectionFPS: 10, // Frame processing rate (5-30)
stabilityFrames: 15, // Frames before event triggers
gazeThreshold: 20, // Degrees for gaze deviation
yawThreshold: 25, // Degrees for head rotation
pitchThreshold: 20, // Degrees for head tilt
prolongedGazeAwayDuration: 5000, // ms for prolonged gaze
mouthOpenRatioThreshold: 0.15, // Mouth aspect ratio threshold
// ===== Audio Monitoring Options =====
talkingThreshold: -45, // dB for talking detection
whisperThreshold: -55, // dB for whisper detection
audioSampleInterval: 100, // Audio check interval (ms)
prolongedTalkingDuration: 3000, // ms for prolonged talking
// ===== Pattern Detection Options =====
suspiciousPatternThreshold: 3, // Events to trigger pattern
patternDetectionWindow: 10000, // Time window (ms)
// ===== Callbacks =====
onEvent: (event) => {
// Handle individual events
console.log('Event:', event);
},
onBehavioralPattern: (pattern) => {
// Handle detected patterns (critical)
console.warn('Pattern:', pattern);
},
onStatusChange: (status) => {
// Engine status: 'initializing', 'loading-models', 'ready', 'error'
console.log('Status:', status);
},
onError: (error) => {
// Handle errors
console.error('Error:', error);
}
});
```

```javascript
// Lightweight mode - only browser telemetry
const lightEngine = ProctoringEngine.getInstance({
enableVisualDetection: false,
enableAudioMonitoring: false,
enablePatternDetection: false,
enableBrowserTelemetry: true
});
// Heavy mode - full monitoring
const heavyEngine = ProctoringEngine.getInstance({
enableVisualDetection: true,
enableAudioMonitoring: true,
enablePatternDetection: true,
enableBrowserTelemetry: true,
detectionFPS: 15 // Higher FPS for more accuracy
});
// Custom mode - visual + patterns only
const customEngine = ProctoringEngine.getInstance({
enableVisualDetection: true,
enableAudioMonitoring: false,
enablePatternDetection: true,
enableBrowserTelemetry: true
});
```

```javascript
// Update configuration during session
engine.updateOptions({
gazeThreshold: 30, // More lenient
detectionFPS: 5 // Reduce CPU usage
});
// Update individual modules
engine.visualModule.updateOptions({
detectionFPS: 8
});
engine.audioModule.updateOptions({
talkingThreshold: -40
});
```
## Events
Every event follows a common structure:
```javascript
{
event: 'TALKING_DETECTED', // Event type
lv: 8, // Severity level (1-10)
ts: 1703098765432, // Timestamp (Unix ms)
source: 'audio', // Module source
sessionDuration: 123456, // Time since session start (ms)
// Event-specific metadata
duration: 5000, // Duration of behavior (ms)
level: -40, // Audio level (dB)
direction: 'left', // Direction (for gaze/head)
severity: 'high', // Human-readable severity
extractedFeatures: {} // The face and hand features extracted from the frame
}
```
Critical events:
- `NO_FACE` - No face detected
- `MULTIPLE_FACES` - Multiple people in frame
- `PERSON_LEFT` - Student left for extended period
- `SUSPICIOUS_OBJECT` - Unauthorized object detected
- `TAB_SWITCHED` - Tab switch detected
- `PASTE_ATTEMPT` - Paste operation
- `PATTERN_*` - Behavioral pattern detected

High-severity events:
- `GAZE_AWAY` - Looking away from screen
- `PROLONGED_GAZE_AWAY` - Extended gaze away
- `HEAD_TURNED` - Head significantly rotated
- `PROLONGED_MOUTH_MOVEMENT` - Extended mouth movement
- `TALKING_DETECTED` - Speech detected
- `WHISPERING_DETECTED` - Whispering detected
- `WINDOW_FOCUS_LOST` - Window lost focus
- `EXITED_FULLSCREEN` - Fullscreen exited
- `COPY_ATTEMPT` - Copy operation

Medium-severity events:
- `MOUTH_MOVING` - Mouth movement detected
- `MOUTH_COVERED` - Mouth appears covered
- `EYES_OFF_SCREEN` - Eyes looking off-screen
- `RIGHT_CLICK` - Right-click attempt
- `SUSPICIOUS_KEY_PRESS` - Suspicious keyboard shortcut

Lower-severity events:
- `MOUSE_LEFT_WINDOW` - Mouse left window
- `WINDOW_FOCUS_RESTORED` - Focus restored
- `TAB_RETURNED` - Returned to tab
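A typical `onEvent` handler routes on the `event` and `lv` fields. A small sketch (the helper functions are hypothetical, and the `lv >= 8` cutoff mirrors the advanced example later in this README):
```javascript
onEvent: (event) => {
  // notifySupervisor and logToBackend are hypothetical app-side helpers
  if (event.lv >= 8) {
    notifySupervisor(event); // critical/high: escalate immediately
  }
  logToBackend(event);       // everything else: record for review
}
```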
## Patterns
Patterns are critical alerts indicating a high probability of cheating.
### suspiciousTriplePattern
- When: Student is looking away + talking + mouth moving simultaneously
- Severity: 10 (Critical)
- Interpretation: Likely communicating with someone off-screen
```javascript
onBehavioralPattern: (pattern) => {
if (pattern.pattern === 'suspiciousTriplePattern') {
// This is extremely suspicious
alertSupervisor('Student likely cheating');
flagExamForReview();
}
}
```
### lookingLeftWhispering / lookingRightWhispering
- When: Student looking to the side while whispering
- Severity: 8 (High)
- Interpretation: Possibly communicating with a nearby person

### mouthCoveredWithAudio
- When: Mouth covered but audio detected
- Severity: 9 (High)
- Interpretation: Attempting to hide speaking

### objectAndLookingAway
- When: Suspicious object detected + looking away from screen
- Severity: 9 (High)
- Interpretation: Using unauthorized materials

### multipleFacesWithAudio
- When: Multiple people + audio detected
- Severity: 10 (Critical)
- Interpretation: Multiple people taking the exam together
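Because every pattern carries a numeric severity, a handler can also branch on severity rather than on individual names. A sketch (assuming the callback's pattern object exposes the same `severity` field used in the custom-pattern definition below):
```javascript
onBehavioralPattern: (pattern) => {
  // alertSupervisor and flagExamForReview are hypothetical app-side helpers
  if (pattern.severity >= 10) {
    alertSupervisor(`Critical: ${pattern.pattern}`);
    flagExamForReview();
  } else if (pattern.severity >= 8) {
    alertSupervisor(`High severity: ${pattern.pattern}`);
  }
}
```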
Add your own patterns:
```javascript
// Add custom pattern
engine.patternModule.patterns.myCustomPattern = {
name: 'myCustomPattern',
severity: 8,
events: [],
count: 0,
lastTriggered: 0,
check: (visualState, audioState) => {
// Your custom logic
return visualState.numFaces === 0 &&
audioState.isTalking;
}
};
```
## API Reference
Get singleton instance.
```javascript
const engine = ProctoringEngine.getInstance(options);
```
Initialize all enabled modules.
```javascript
await engine.initialize();
```
Start proctoring with a video element.
```javascript
const video = document.getElementById('webcam');
engine.start(video);
```
Stop proctoring.
```javascript
engine.stop();
```
Update configuration at runtime.
```javascript
engine.updateOptions({ detectionFPS: 5 });
```
Get comprehensive session summary.
```javascript
const summary = engine.getSessionSummary();
/*
{
sessionDuration: 1800000,
sessionStartTime: 1703098765432,
sessionEndTime: 1703100565432,
totalEvents: 45,
eventCounts: {...},
eventsBySeverity: {...},
patterns: {...},
visualState: {...},
audioState: {...},
suspiciousScore: 127
}
*/
```
Get all event logs.
```javascript
const logs = engine.getLogs();
```
Clear all logs and patterns.
```javascript
engine.clearLogs();
```
Calculate overall suspicious score (0-1000).
```javascript
const score = engine.calculateSuspiciousScore();
// 0-50: Normal
// 51-100: Some suspicious activity
// 101-200: Concerning behavior
// 201+: High probability of cheating
```
Cleanup and destroy the engine.
```javascript
engine.destroy();
```
Subscribe to state changes.
```javascript
const unsubscribe = engine.stateManager.subscribe((state) => {
console.log('State updated:', state);
});
// Unsubscribe later
unsubscribe();
```
Get complete current state.
```javascript
const state = engine.stateManager.getCompleteState();
```
Get visual detection state.
```javascript
const visual = engine.stateManager.getVisualState();
```
Get audio monitoring state.
```javascript
const audio = engine.stateManager.getAudioState();
```
Get all recorded events.
```javascript
const events = engine.eventManager.getAllEvents();
```
Get events of a specific type.
```javascript
const tabSwitches = engine.eventManager.getEventsByType('TAB_SWITCHED');
```
Get events above a severity threshold.
```javascript
const critical = engine.eventManager.getEventsBySeverity(9);
```
Get event summary statistics.
```javascript
const summary = engine.eventManager.getSummary();
```
## Examples
```javascript
import { ProctoringEngine } from '@timadey/proctor';
class SimpleProctor {
constructor() {
this.engine = ProctoringEngine.getInstance({
onEvent: (e) => console.log('Event:', e.event),
onBehavioralPattern: (p) => alert(`Warning: ${p.pattern}`)
});
}
async start() {
// Get camera
const stream = await navigator.mediaDevices.getUserMedia({
video: true
});
const video = document.getElementById('video');
video.srcObject = stream;
await video.play();
// Start proctoring
await this.engine.initialize();
this.engine.start(video);
}
stop() {
const summary = this.engine.getSessionSummary();
console.log('Final score:', summary.suspiciousScore);
this.engine.stop();
}
}
const proctor = new SimpleProctor();
await proctor.start();
```

```javascript
class AdvancedProctor {
constructor(examId, studentId) {
this.examId = examId;
this.studentId = studentId;
this.ws = null; // WebSocket connection
this.engine = ProctoringEngine.getInstance({
onEvent: (event) => this.handleEvent(event),
onBehavioralPattern: (pattern) => this.handlePattern(pattern)
});
}
async start() {
// Connect to backend via WebSocket
this.ws = new WebSocket('wss://api.example.com/proctoring');
// Setup camera
const stream = await navigator.mediaDevices.getUserMedia({
video: { width: 1280, height: 720 }
});
const video = document.getElementById('video');
video.srcObject = stream;
await video.play();
// Initialize and start
await this.engine.initialize();
this.engine.start(video);
// Subscribe to state for real-time updates
this.engine.stateManager.subscribe((state) => {
this.sendStateUpdate(state);
});
// Periodic summaries
this.summaryInterval = setInterval(() => {
this.sendSummary();
}, 30000); // Every 30 seconds
}
handleEvent(event) {
// Send event to backend via REST
fetch('/api/proctoring/event', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
examId: this.examId,
studentId: this.studentId,
event: event
})
});
// Send via WebSocket for real-time monitoring
if (this.ws && this.ws.readyState === WebSocket.OPEN) {
this.ws.send(JSON.stringify({
type: 'EVENT',
data: event
}));
}
// Show to student if critical
if (event.lv >= 8) {
this.showWarning(event.event);
}
}
handlePattern(pattern) {
// Critical alert - send immediately
fetch('/api/proctoring/critical', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
examId: this.examId,
studentId: this.studentId,
pattern: pattern,
timestamp: Date.now()
}),
keepalive: true // Ensure delivery
});
// Alert supervisor via WebSocket
if (this.ws && this.ws.readyState === WebSocket.OPEN) {
this.ws.send(JSON.stringify({
type: 'CRITICAL_PATTERN',
data: pattern
}));
}
// Show strong warning to student
this.showCriticalWarning(pattern.pattern);
}
sendStateUpdate(state) {
if (this.ws && this.ws.readyState === WebSocket.OPEN) {
this.ws.send(JSON.stringify({
type: 'STATE_UPDATE',
data: {
examId: this.examId,
studentId: this.studentId,
state: state
}
}));
}
}
sendSummary() {
const summary = this.engine.getSessionSummary();
fetch('/api/proctoring/summary', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
examId: this.examId,
studentId: this.studentId,
summary: summary
})
});
}
showWarning(eventType) {
const warnings = {
'TALKING_DETECTED': 'Please remain quiet during the exam.',
'TAB_SWITCHED': 'Do not switch tabs during the exam.',
'MULTIPLE_FACES': 'Multiple people detected. Only you should be visible.',
'PERSON_LEFT': 'You have left the exam area.',
};
const message = warnings[eventType] || 'Suspicious activity detected.';
const warning = document.getElementById('warning');
warning.textContent = message;
warning.classList.add('show');
setTimeout(() => {
warning.classList.remove('show');
}, 5000);
}
showCriticalWarning(patternName) {
const modal = document.getElementById('critical-modal');
modal.querySelector('.message').textContent =
`Critical violation detected: ${patternName}. This exam may be flagged for review.`;
modal.classList.add('show');
setTimeout(() => {
modal.classList.remove('show');
}, 10000);
}
async stop() {
// Clear interval
if (this.summaryInterval) {
clearInterval(this.summaryInterval);
}
// Get final data
const summary = this.engine.getSessionSummary();
const logs = this.engine.getLogs();
// Send final report
const finalData = {
examId: this.examId,
studentId: this.studentId,
summary: summary,
logs: logs,
endTime: Date.now()
};
// Use sendBeacon for reliability during unload
const blob = new Blob([JSON.stringify(finalData)], {
type: 'application/json'
});
navigator.sendBeacon('/api/proctoring/finalize', blob);
// Close WebSocket
if (this.ws) {
this.ws.close();
}
// Destroy engine
this.engine.destroy();
}
}
// Usage
const proctor = new AdvancedProctor('exam-123', 'student-456');
await proctor.start();
// On exam submit
document.getElementById('submit-btn').addEventListener('click', async () => {
await proctor.stop();
// Submit exam answers...
});
// On page unload
window.addEventListener('beforeunload', () => {
proctor.stop();
});
```

```jsx
import React, { useEffect, useRef, useState } from 'react';
import { ProctoringEngine } from '@timadey/proctor';
function ExamProctoring({ examId, studentId }) {
const videoRef = useRef(null);
const engineRef = useRef(null);
const [status, setStatus] = useState('initializing');
const [events, setEvents] = useState([]);
const [score, setScore] = useState(0);
const [warning, setWarning] = useState('');
useEffect(() => {
let mounted = true;
const initialize = async () => {
try {
// Setup camera
const stream = await navigator.mediaDevices.getUserMedia({
video: { width: 1280, height: 720 }
});
if (videoRef.current) {
videoRef.current.srcObject = stream;
await videoRef.current.play();
}
// Initialize engine
const engine = ProctoringEngine.getInstance({
onEvent: (event) => {
if (mounted) {
setEvents(prev => [event, ...prev].slice(0, 20));
}
},
onBehavioralPattern: (pattern) => {
if (mounted) {
setWarning(`⚠️ ${pattern.pattern} detected`);
setTimeout(() => setWarning(''), 5000);
}
},
onStatusChange: (newStatus) => {
if (mounted) setStatus(newStatus);
}
});
engineRef.current = engine;
await engine.initialize();
if (videoRef.current) {
engine.start(videoRef.current);
}
// Update score periodically
const interval = setInterval(() => {
if (engineRef.current && mounted) {
const summary = engineRef.current.getSessionSummary();
setScore(summary.suspiciousScore);
}
}, 5000);
return () => {
clearInterval(interval);
};
} catch (error) {
console.error('Initialization failed:', error);
if (mounted) setStatus('error');
}
};
initialize();
return () => {
mounted = false;
if (engineRef.current) {
engineRef.current.stop();
engineRef.current.destroy();
}
};
}, [examId, studentId]);
return (
<div className="proctoring-container">
<video
ref={videoRef}
autoPlay
playsInline
muted
className="proctoring-video"
/>
<div className="proctoring-info">
<div className="status">
Status: <span className={status}>{status}</span>
</div>
<div className="score">
Suspicious Score: <span>{score}</span>
</div>
</div>
{warning && (
<div className="warning-banner">
{warning}
</div>
)}
<div className="event-log">
<h3>Recent Events</h3>
{events.map((event, i) => (
<div key={i} className={`event severity-${event.lv}`}>
{new Date(event.ts).toLocaleTimeString()} - {event.event}
</div>
))}
</div>
</div>
);
}
export default ExamProctoring;
```
## Browser Support
| Browser | Version | Support |
|---|---|---|
| Chrome | 90+ | ✅ Full |
| Firefox | 88+ | ✅ Full |
| Safari | 14+ | ✅ Full |
| Edge | 90+ | ✅ Full |
| Opera | 76+ | ✅ Full |
- WebRTC - For camera access
- Web Audio API - For microphone access
- WebGL - For MediaPipe GPU acceleration
- ES6+ - Modern JavaScript features
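It can be worth probing for these capabilities before starting a session. A minimal pre-flight sketch:
```javascript
// Sketch: pre-flight capability check before starting proctoring.
function missingRequirements() {
  const missing = [];
  if (!navigator.mediaDevices?.getUserMedia) missing.push('WebRTC');
  if (!(window.AudioContext || window.webkitAudioContext)) missing.push('Web Audio API');
  const canvas = document.createElement('canvas');
  if (!canvas.getContext('webgl2') && !canvas.getContext('webgl')) missing.push('WebGL');
  return missing; // an empty array means all requirements are met
}
```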
```javascript
// Camera permission
await navigator.mediaDevices.getUserMedia({ video: true });
// Microphone permission (handled internally)
// Requested automatically by AudioMonitoringModule
```
## Performance
- Adjust Detection FPS
```javascript
// Lower FPS for better performance
engine.updateOptions({ detectionFPS: 5 });
```
- Increase Stability Frames
```javascript
// Fewer false positives, better performance
engine.updateOptions({ stabilityFrames: 20 });
```
- Selective Modules
```javascript
// Only enable what you need
ProctoringEngine.getInstance({
enableVisualDetection: true,
enableAudioMonitoring: false, // Disable if not needed
enablePatternDetection: true,
enableBrowserTelemetry: true
});
```
- GPU Acceleration - Ensure WebGL is enabled for MediaPipe GPU acceleration.
| Configuration | CPU Usage | Memory | Accuracy |
|---|---|---|---|
| Low (5 FPS) | ~10% | ~150MB | Good |
| Medium (10 FPS) | ~20% | ~200MB | Better |
| High (15 FPS) | ~30% | ~250MB | Best |
Tested on Intel i5, 8GB RAM, Chrome 120
## Security
- ✅ Client-Side Processing - All detection runs in the browser
- ✅ No Cloud Dependencies - MediaPipe models loaded from CDN
- ✅ Secure Transmission - Use HTTPS for backend communication
- ✅ No Recording - Video/audio analyzed in real time, not stored
- ✅ Configurable - Choose which data to send to the backend (see the sketch below)
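For example, since the event payload's `extractedFeatures` field carries the raw face and hand features, one option is to strip it before anything leaves the browser. A sketch (the endpoint URL is illustrative):
```javascript
// Sketch: data minimization before transmission; the URL is illustrative.
onEvent: (event) => {
  const { extractedFeatures, ...minimal } = event; // drop raw features
  fetch('https://api.example.com/proctoring/event', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(minimal)
  });
}
```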
Best practices:
- Obtain Explicit Consent
```javascript
// Show consent dialog before starting
const consent = await showConsentDialog();
if (consent) {
await proctor.start();
}
```
- Use HTTPS
```javascript
// Always use secure connections
fetch('https://api.example.com/proctoring/event', {
method: 'POST',
// ...
});
```
- Implement Data Retention Policies
```javascript
// Clear logs after exam
window.addEventListener('beforeunload', () => {
engine.clearLogs();
});
```
- Provide Accommodations
```javascript
// Adjust for students with disabilities
engine.updateOptions({
gazeThreshold: 35, // More lenient
prolongedGazeAwayDuration: 10000
});
```
Troubleshooting:
```javascript
// Check permission
navigator.permissions.query({ name: 'camera' })
.then(result => {
console.log('Camera permission:', result.state);
if (result.state === 'denied') {
alert('Please allow camera access');
}
});
```

```javascript
// Check microphone permission
navigator.permissions.query({ name: 'microphone' })
.then(result => {
console.log('Microphone permission:', result.state);
});
// Check if audio module initialized
if (!engine.audioModule.isSetup) {
console.warn('Audio monitoring not available');
}
```

```javascript
// Reduce detection FPS
engine.updateOptions({ detectionFPS: 5 });
// Disable unnecessary modules
engine.updateOptions({
enableAudioMonitoring: false
});
```

```javascript
// Check network connection
// MediaPipe models loaded from CDN
// Ensure CDN is accessible: https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@latest/
```
## Contributing
Contributions are welcome! Please follow these guidelines:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
```bash
# Clone repository
git clone https://github.com/yourusername/ai-proctoring-engine.git
# Install dependencies
cd ai-proctoring-engine
npm install
# Run tests
npm test
# Build
npm run build
```
## License
This project is licensed under the MIT License - see the LICENSE file for details.
- MediaPipe - Computer vision framework
- Web Audio API - Audio processing
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: support@example.com
- TypeScript support
- Hand tracking for gesture detection
- Mobile support
- Offline mode with model caching
- Dashboard for supervisors
- API for third-party integrations
- Automated report generation
- Multi-language support
Made with ❤️ for fair and secure online examinations