- MediaPipe FaceMesh integration for real-time eye tracking
- Eye Aspect Ratio (EAR) calculation to detect eye closure
- Presence detection to identify when participants have left their camera
- Visual status indicators: 👁 Active, 😴 Slept, 👻 Gone, ⏳ Checking...
- Smart dwell time logic to prevent false positives
- Works with network adaptation - continues during audio-only mode for local user
- Data channel transmission - activity status sent even in audio-only mode
- Audio mode indicators: Shows "(Audio)" when status is received via data channel transmission

# WebRTC Video Communication with Adaptive Network Optimization

A modern, Google Meet-style WebRTC video conferencing application with advanced network adaptation, active speaker detection, real-time activity monitoring (awake/asleep detection), and intelligent quality management with data channel status transmission.
- Google Meet-inspired interface with Inter font and dark theme (#0f1113)
- Responsive CSS Grid layout that adapts to 1-9 participants
- Hover effects and smooth animations with backdrop filters
- Professional participant tiles with rounded corners and shadows
- Real-time network monitoring every 3 seconds
- Intelligent quality adaptation based on bandwidth, RTT, and packet loss
- Audio-only fallback for severe network conditions
- Smart recovery logic that prevents quality flapping
- Web Audio API integration for real-time audio level monitoring
- Visual highlighting of active speakers with blue borders
- Bandwidth optimization prioritizing active speakers
- 3-person layout optimization with main speaker positioning
- MediaPipe FaceMesh integration for real-time eye tracking
- Eye Aspect Ratio (EAR) calculation to detect eye closure
- Visual status indicators: 👁 Active, 😴 Slept, ⏳ Checking...
- Smart dwell time logic to prevent false positives
- Works with network adaptation - continues during audio-only mode for local user
- Real-time stats panel in top-right corner
- Comprehensive metrics: Participant count, bandwidth, packet loss, RTT
- Quality indicators with color-coded status
- Adaptive mode status showing ON/OFF state
```javascript
const BANDWIDTH_THRESHOLDS = {
  LOW: 150000,     // 150 kbps - minimum for low quality video
  MEDIUM: 500000,  // 500 kbps - medium quality video threshold
  HIGH: 1000000    // 1 Mbps - high quality video threshold
};
```

- RTT > 3000ms OR Packet Loss > 15%
- Video transmission disabled, audio prioritized
- Local video preview maintained for user positioning
- RTT: 400-500ms OR Packet Loss: 8-15% OR Bitrate: 60-150 kbps
- Max bitrate: 200 kbps, priority: low
- RTT: 200-400ms OR Packet Loss: 3-8% OR Bitrate: 150-500 kbps
- Max bitrate: 600 kbps, priority: medium
- RTT < 150ms AND Packet Loss < 2% AND Good bitrate
- Max bitrate: 1.2 Mbps, priority: high
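The tier rules above can be collapsed into a single selection function. A minimal sketch, assuming the function name and the exact boundary handling (the listed bands overlap slightly, so these cutoffs are a simplification):

```javascript
// Hypothetical quality selector derived from the RTT/packet-loss tiers above.
// Worst condition wins: check audio-only first, then walk down the ladder.
function pickQuality(rttMs, lossPct) {
  if (rttMs > 3000 || lossPct > 15) return 'AUDIO_ONLY';
  if (rttMs > 400 || lossPct > 8) return 'LOW';
  if (rttMs > 150 || lossPct > 2) return 'MEDIUM';
  return 'HIGH';
}
```

A 100ms RTT with 1% loss selects HIGH, while 450ms RTT drops to LOW even with modest loss.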
```javascript
const VIDEO_CONSTRAINTS = {
  LOW: { width: 160, height: 120, frameRate: 15 },
  MEDIUM: { width: 320, height: 240, frameRate: 24 },
  HIGH: { width: 640, height: 480, frameRate: 30 }
};
```

- Local camera preview: Always visible when camera is on
- Visual indicator: Orange "📵 Not Transmitting" badge
- No bandwidth usage: Local preview doesn't consume network resources
- Camera controls: Remain fully functional
- Video placeholder: 🎵 "Audio Only" symbol with text
- Audio maintained: Full audio communication continues
- Bandwidth savings: ~90% reduction in data usage
- Visual feedback: Orange styling indicates audio-only state
- Outbound RTP stats: Packet sending rates, bytes transmitted
- Remote inbound stats: Packet loss from receiver perspective
- Candidate pair stats: Round-trip time measurements
- Audio level monitoring: Active speaker detection
- Remote-inbound-rtp reports (most accurate)
- Inbound-rtp reports (fallback)
- Outbound-rtp reports (last resort)
- Initial connection protection: No adaptation for first 10 seconds
- Minimum sample size: Requires >100 packets for loss calculation
- Value capping: Packet loss limited to 0-50% range
- Negative delta protection: Prevents counter reset issues
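The four safeguards above can be combined into one loss-percentage helper. A sketch; the function name and the `{ packetsLost, packetsSent }` report shape are assumptions:

```javascript
// Hypothetical safeguarded packet-loss calculation between two stat samples.
// Returns null when there is not enough signal to make a decision.
function packetLossPct(prev, curr) {
  const dLost = curr.packetsLost - prev.packetsLost;
  const dSent = curr.packetsSent - prev.packetsSent;
  // Minimum sample size: require > 100 packets in the window
  // Negative delta protection: counters reset on ICE restart
  if (dSent <= 100 || dLost < 0) return null;
  // Value capping: clamp to the 0-50% range
  return Math.min(50, Math.max(0, (dLost / dSent) * 100));
}
```

Returning `null` (rather than 0) lets the caller skip an adaptation cycle instead of acting on bad data.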
HIGH → MEDIUM → LOW → AUDIO_ONLY
- Connection quality focus: Uses RTT and packet loss for recovery decisions
- Bitrate filtering: Ignores low bitrate when in audio-only mode (prevents trap)
- Gradual upgrades: AUDIO_ONLY → LOW → MEDIUM → HIGH
- Stability requirements: Sustained good conditions needed for upgrades
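Under these rules an upgrade moves one tier at a time and only when conditions are clearly good. A hypothetical sketch (the ladder array and function name are not from the source):

```javascript
// Hypothetical one-step upgrade ladder for gradual recovery.
const LADDER = ['AUDIO_ONLY', 'LOW', 'MEDIUM', 'HIGH'];

function upgradeOneStep(current, rttMs, lossPct) {
  const i = LADDER.indexOf(current);
  if (i < 0 || i === LADDER.length - 1) return current; // unknown or already at top
  // Require clearly good conditions (same bar as the AUDIO_ONLY -> LOW rule)
  if (lossPct < 2 && rttMs < 150) return LADDER[i + 1];
  return current;
}
```

Calling this once per monitoring cycle (every 3s) means a full AUDIO_ONLY → HIGH recovery takes at least three sustained-good cycles, which is what prevents quality flapping.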
```javascript
// From AUDIO_ONLY to LOW
if (currentVideoQuality === 'AUDIO_ONLY' &&
    maxPacketLoss < 2 && maxRtt < 150) {
  targetQuality = 'LOW';
}
```

- Sample rate: Updated every 200ms
- Threshold: -50 dB for voice activity detection
- Smoothing: Exponential moving average to prevent flicker
- Silence detection: Automatic fallback when no one speaks
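The smoothing described above can be sketched as a small closure over an exponential moving average; the factory name and the alpha value are assumptions:

```javascript
// Hypothetical voice-activity smoother: EMA over dB levels with a -50 dB gate.
const VOICE_ACTIVITY_THRESHOLD_DB = -50;

function createLevelSmoother(alpha = 0.5) {
  let ema = -Infinity; // no samples yet
  return function update(levelDb) {
    // Seed with the first sample, then blend new readings in
    ema = ema === -Infinity ? levelDb : alpha * levelDb + (1 - alpha) * ema;
    return { level: ema, speaking: ema > VOICE_ACTIVITY_THRESHOLD_DB };
  };
}
```

Because the EMA lags raw readings, a single loud click does not immediately flip `speaking` to true, which is the anti-flicker behavior the bullet list describes.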
```javascript
// Active speaker gets high quality
params.encodings[0].maxBitrate = 1200000; // 1.2 Mbps
params.encodings[0].priority = 'high';

// Background participants get reduced quality
params.encodings[0].maxBitrate = 600000; // 600 kbps
params.encodings[0].priority = 'medium';
```

```javascript
// Detection thresholds
const EYE_EAR_THRESHOLD = 0.30; // Normalized threshold (distance-independent)
const SLEEP_MS = 800;  // Time before marking as 'slept'
const WAKE_MS = 250;   // Time before marking as 'active'
const GONE_MS = 3000;  // Time with no face before marking as 'gone'

// Distance-normalized EAR calculation
function computeNormalizedEAR(landmarks, leftEye, rightEye) {
  const rawEAR = (computeEAR(landmarks, leftEye) + computeEAR(landmarks, rightEye)) / 2;
  // Calculate face size using nose tip, chin, and face boundaries
  const faceSize = calculateFaceSize(landmarks);
  const normalizationFactor = Math.max(0.5, Math.min(2.0, faceSize / 0.15));
  return rawEAR / normalizationFactor; // Distance-independent EAR
}
```

Problem Solved: Traditional EAR fails when users move closer to or farther from the camera:
- Close to camera: Face landmarks spread out → higher EAR values
- Far from camera: Face landmarks compressed → lower EAR values
- Fixed threshold: Causes false positives/negatives with distance changes
Solution: Face-size normalized EAR calculation
- Face size estimation: Uses nose tip, chin, and face boundary landmarks
- Dynamic normalization: Adjusts EAR based on detected face size
- Distance independence: Same threshold (0.30) works at any distance
- Robust tracking: Lower detection thresholds (0.4/0.3) for distant faces
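The per-eye `computeEAR()` referenced above is the standard six-landmark eye-aspect ratio. A minimal sketch, assuming landmarks are `{x, y}` points and the index order `[outer, top1, top2, inner, bottom2, bottom1]` used by `LEFT_EYE`/`RIGHT_EYE`:

```javascript
// Euclidean distance between two landmark points
function dist(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// EAR = (|top1-bottom1| + |top2-bottom2|) / (2 * |outer-inner|)
// High when the eye is open (tall relative to wide), near zero when closed.
function computeEAR(landmarks, idx) {
  const [p1, p2, p3, p4, p5, p6] = idx.map(i => landmarks[i]);
  return (dist(p2, p6) + dist(p3, p5)) / (2 * dist(p1, p4));
}
```

With a 2-unit-wide, 0.6-unit-tall synthetic eye this yields exactly 0.30, i.e. right at the threshold above.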
```javascript
// Enhanced MediaPipe configuration
fm.setOptions({
  maxNumFaces: 1,
  refineLandmarks: true,
  minDetectionConfidence: 0.4, // Better distant face detection
  minTrackingConfidence: 0.3,  // Smoother tracking continuity
  staticImageMode: false       // Optimized for video streams
});

// Eye landmarks (MediaPipe FaceMesh indices)
const LEFT_EYE = [33, 160, 158, 133, 153, 144];   // 6 key points around left eye
const RIGHT_EYE = [362, 385, 387, 263, 373, 380]; // 6 key points around right eye
```

- 👁 Active: Face detected with eyes open for > 250ms
- 😴 Slept: Face detected with eyes closed for > 800ms
- 👻 Gone: No face detected for > 3000ms (participant left camera)
- ⏳ Checking: Initial state or transitioning between states
```javascript
// Three detection states
// 'face-open'   -> Active (eyes open, person present)
// 'face-closed' -> Slept  (eyes closed, person present)
// 'no-face'     -> Gone   (no person detected)

// State transitions with dwell time prevent false positives
if (detectionState === 'face-open' && timeSinceChange > WAKE_MS) {
  status = 'active';
} else if (detectionState === 'face-closed' && timeSinceChange > SLEEP_MS) {
  status = 'slept';
} else if (detectionState === 'no-face' && timeSinceChange > GONE_MS) {
  status = 'gone';
}

// FaceMesh configuration
fm.setOptions({
  maxNumFaces: 1,
  refineLandmarks: true,
  minDetectionConfidence: 0.5,
  minTrackingConfidence: 0.5
});
```

- Normal mode: Activity detection for all participants with video
- Audio-only mode: Local user activity detection continues, status transmitted via data channels
- Remote participants in audio-only: Receive activity status via WebRTC data channels
- Data channel transmission: Minimal bandwidth usage (~50 bytes per status update)
- Visual indicators: Shows "(Audio)" suffix when status is transmitted rather than locally detected
- Camera off: Activity detection automatically disabled
- Reconnection: Activity detection restarts when video streams resume
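The source shows the broadcast side of the `activity` data channel but not the receive side, so here is a sketch of a handler for incoming messages. The handler name, the `updateBadge` callback, and the 10-second staleness cutoff are assumptions:

```javascript
// Hypothetical receive-side handler for the 'activity' data channel.
// Parses the JSON payload, drops malformed or stale messages, and
// forwards the status to an assumed UI callback.
function handleActivityMessage(event, updateBadge) {
  let msg;
  try {
    msg = JSON.parse(event.data);
  } catch {
    return null; // not JSON
  }
  if (msg.type !== 'activity-status') return null;
  // Ignore updates older than 10 s (cutoff is an assumption)
  if (Date.now() - msg.timestamp > 10000) return null;
  updateBadge(msg.status);
  return msg.status;
}
```

Wired up in the browser it would look like `pc.activityChannel.onmessage = e => handleActivityMessage(e, status => showBadge(peerId, status));` where `showBadge` is a hypothetical UI function.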
```javascript
// Activity status transmission
const activityChannel = pc.createDataChannel('activity', { ordered: true });

// Broadcast local activity status to all peers
function broadcastActivityStatus(status) {
  peerConnections.forEach((pc, peerId) => {
    if (pc.activityChannel?.readyState === 'open') {
      pc.activityChannel.send(JSON.stringify({
        type: 'activity-status',
        status: status,
        timestamp: Date.now()
      }));
    }
  });
}
```

- Single (1): Centered video, max 900px width, 16:9 aspect ratio
- Double (2): Side-by-side layout, equal columns
- Triple (3): 2 top + 1 bottom spanning, active speaker prominence
- Quad (4): 2x2 grid layout
- 5-6: 3-column layouts with strategic spanning
- 7-8: 4x2 grid for optimal space usage
- 9+: Auto-fit grid with 240px minimum tile size
```css
@media (max-width: 768px) {
  .video-grid {
    grid-template-columns: repeat(auto-fit, minmax(180px, 1fr));
    grid-auto-rows: minmax(100px, auto);
  }
}
```

```javascript
iceServers: [
  // STUN servers for NAT discovery
  { urls: 'stun:stun.l.google.com:19302' },
  { urls: 'stun:stun1.l.google.com:19302' },
  { urls: 'stun:stun2.l.google.com:19302' },
  { urls: 'stun:stun.services.mozilla.com' },
  { urls: 'stun:stun.stunprotocol.org:3478' },
  // TURN servers for long-distance/complex NAT scenarios
  {
    urls: 'turn:openrelay.metered.ca:80',
    username: 'openrelayproject',
    credential: 'openrelayproject'
  },
  {
    urls: 'turn:openrelay.metered.ca:443',
    username: 'openrelayproject',
    credential: 'openrelayproject'
  },
  {
    urls: 'turn:relay1.expressturn.com:3478',
    username: 'efSLANXAY9TzMa3crbhd',
    credential: 'StkKGS6j18fnddAdH7W7'
  }
]
```

Updated for 400km+ connections between different cities/ISPs:
- STUN Servers: Discover public IP addresses and NAT types
- TURN Servers: Relay traffic when direct P2P fails
- Multiple Protocols: TCP and UDP support for firewall traversal
- Free TURN Services: Using OpenRelay and ExpressTURN public servers
- ✅ Same Network (0-1km): Direct P2P via STUN
- ✅ Different Networks, Same ISP (1-50km): STUN + basic NAT traversal
- ✅ Different Cities/ISPs (50km+): TURN relay when P2P fails
- ✅ Corporate/Mobile Networks: TURN over TCP/443 for firewall bypass
- ✅ Symmetric NAT: TURN relay handles complex NAT scenarios
- Connection timeout: 15-second limit with automatic retry
- ICE failure recovery: Automatic ICE restart on connection failure
- Peer reconnection: 3-second delay before reconnection attempt
- Data channel heartbeat: 30-second ping/pong for connection monitoring
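The 30-second data-channel heartbeat can be sketched as follows; `startHeartbeat`, `isConnectionStale`, and the two-interval staleness rule are assumptions, not names from script.js:

```javascript
// Hypothetical ping/pong heartbeat over a data channel.
const HEARTBEAT_INTERVAL_MS = 30000;

function isConnectionStale(lastPongTs, nowTs, timeoutMs = 2 * HEARTBEAT_INTERVAL_MS) {
  // Consider the peer dead if two heartbeat periods pass without a pong
  return nowTs - lastPongTs > timeoutMs;
}

function startHeartbeat(channel, getNow = Date.now) {
  let lastPong = getNow();
  channel.onmessage = e => {
    if (e.data === 'pong') lastPong = getNow();
  };
  const timer = setInterval(() => {
    if (channel.readyState === 'open') channel.send('ping');
  }, HEARTBEAT_INTERVAL_MS);
  return {
    stop: () => clearInterval(timer),
    stale: () => isConnectionStale(lastPong, getNow())
  };
}
```

The receiving peer simply answers every `'ping'` with `'pong'`; a `stale()` result of true would feed the reconnection logic above.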
- Connection status colors: Green (connected), Yellow (connecting), Red (failed)
- Quality indicators: Color-coded network status with emoji icons
- Participant info: Hover effects and connection state display
- Mute indicators: Visual feedback for audio/video states
- Join/Leave buttons: One-click meeting access
- Audio/Video toggles: Instant mute/unmute functionality
- Adaptive mode control: Manual override for network adaptation
- Stats panel: Real-time network information display
- Quality monitoring: Every 3000ms (3 seconds)
- Active speaker detection: Every 200ms
- Stats panel update: Every 2000ms (2 seconds)
- Connection heartbeat: Every 30000ms (30 seconds)
All thresholds can be adjusted by modifying the constants at the top of script.js:
```javascript
// Bandwidth thresholds for quality decisions
const BANDWIDTH_THRESHOLDS = { ... }

// Video quality constraints
const VIDEO_CONSTRAINTS = { ... }

// Active speaker detection sensitivity
const VOICE_ACTIVITY_THRESHOLD = -50; // dB

// Update intervals
const SPEAKER_UPDATE_INTERVAL = 200; // ms
```

```
webrtc-demo/
├── 📄 README.md                   # This comprehensive documentation
├── 📄 package.json                # 🔴 REQUIRED: Node.js dependencies (Express, WebSocket)
├── 📄 package-lock.json           # 🔴 REQUIRED: Dependency version lock
├── 📄 server.js                   # 🔴 REQUIRED: Main WebSocket server (port 3000)
├── 📄 .gitignore                  # 🟡 OPTIONAL: Git ignore patterns
├── 📄 tunnel-alternative.js       # 🟢 UNUSED: Alternative tunneling setup (can delete)
├── 📁 node_modules/               # 🔴 REQUIRED: NPM dependencies (auto-generated)
│
└── 📁 public/                     # Client-side files (served by Express)
    ├── 📄 index.html                # 🔴 REQUIRED: Main application UI (Google Meet style)
    ├── ⚡ script.js                 # 🔴 REQUIRED: Enhanced WebRTC with activity detection
    │
    ├── Legacy files (can be safely deleted):
    ├── ⚡ script_adaptive.js        # 🟢 BACKUP: Network adaptation version
    ├── ⚡ script_enhanced.js        # 🟢 BACKUP: Enhanced version backup
    ├── ⚡ script_original_backup.js # 🟢 BACKUP: Original implementation
    │
    ├── Integrated source files (can be safely deleted):
    ├── 📄 sentimental-index.html    # 🟢 SOURCE: Original activity detection UI
    └── ⚡ sentimental-script.js     # 🟢 SOURCE: Original activity detection logic
```
| Status | Files | Purpose | Action |
|---|---|---|---|
| 🔴 CRITICAL | server.js, package.json, package-lock.json, public/index.html, public/script.js, node_modules/ | Core application functionality | MUST KEEP |
| 🟡 HELPFUL | README.md, .gitignore | Documentation and Git management | RECOMMENDED |
| 🟢 OPTIONAL | script_*.js, sentimental-*, tunnel-alternative.js | Development history and unused code | CAN DELETE |
If you want to clean up the project, run these commands to remove optional files:

```powershell
# Remove backup script versions (features already integrated into script.js)
Remove-Item public\script_adaptive.js
Remove-Item public\script_enhanced.js
Remove-Item public\script_original_backup.js

# Remove original source files (features already integrated)
Remove-Item public\sentimental-index.html
Remove-Item public\sentimental-script.js

# Remove unused alternative setup
Remove-Item tunnel-alternative.js
```

| File | Purpose | Key Features |
|---|---|---|
| server.js | WebSocket server | Real-time communication, room management |
| public/index.html | Main UI | Google Meet styling, responsive grid layout |
| public/script.js | Main logic | WebRTC + network adaptation + activity detection |
- Activity Detection: Integrated from sentimental-* files into the main app
- Network Adaptation: Advanced bandwidth management and quality adjustment
- Audio-Only Transmission: Continues activity detection via data channels
- Multi-State Detection: Active (👁), Slept (😴), Gone (👻), Checking (⏳)
- package.json: Dependencies (Express 5.1.0, WebSocket 8.18.3)
- server.js: WebSocket server on port 3000 with room management
- .gitignore: Excludes node_modules and environment files
```mermaid
graph TB
    subgraph "Client A Browser"
        A1[index.html] --> A2[script.js]
        A2 --> A3[MediaPipe FaceMesh]
        A2 --> A4[WebRTC PeerConnection]
        A3 --> A5[Activity Status: Active/Slept/Gone]
    end

    subgraph "Server (port 3000)"
        S1[server.js] --> S2[WebSocket Handler]
        S2 --> S3[Room Management]
    end

    subgraph "Client B Browser"
        B1[index.html] --> B2[script.js]
        B2 --> B3[MediaPipe FaceMesh]
        B2 --> B4[WebRTC PeerConnection]
        B3 --> B5[Activity Status: Active/Slept/Gone]
    end

    A2 -.->|WebSocket Signaling| S2
    B2 -.->|WebSocket Signaling| S2
    A4 <-->|Direct P2P WebRTC| B4
    A4 -->|Data Channel| B4
    A5 -.->|Activity Status| B2
    B5 -.->|Activity Status| A2

    style A3 fill:#e1f5fe
    style B3 fill:#e1f5fe
    style S1 fill:#f3e5f5
    style A4 fill:#e8f5e8
    style B4 fill:#e8f5e8
```
- Node.js (v14 or higher) - Download here
- Modern web browser with WebRTC support (Chrome, Firefox, Safari, Edge)
- Camera and microphone access permissions
- HTTPS/Local server required for WebRTC security
- Internet connection for MediaPipe FaceMesh CDN access
- Multiple devices recommended for testing full WebRTC functionality
- Install dependencies (already done if you see a node_modules folder):

  ```shell
  npm install
  ```

- Start the WebSocket server:

  ```shell
  node server.js
  ```

- Open the application:
  - Go to http://localhost:3000
  - Open multiple browser tabs to test locally
  - Allow camera/microphone permissions when prompted
After running `node server.js`, you should see:

```
Server running on port 3000
WebSocket server ready
Express server serving static files from public/
```

Test the application:

- Open http://localhost:3000 in two browser tabs
- Click "Join Room" in both tabs
- You should see both video feeds and activity detection working
Server won't start?

```shell
# Check if port 3000 is in use
netstat -an | findstr :3000

# Try a different port
set PORT=3001 && node server.js
```

Dependencies missing?

```shell
# Reinstall dependencies
rm -rf node_modules package-lock.json
npm install
```

For basic functionality, you only need:
```
webrtc-demo/
├── 📄 package.json   # Dependencies
├── 📄 server.js      # WebSocket server
└── 📁 public/
    ├── index.html    # UI
    └── script.js     # WebRTC logic
```
Includes documentation, backups, and source files:
```
webrtc-demo/
├── All core files above
├── 📄 README.md          # This documentation
├── 📄 .gitignore         # Git configuration
├── Multiple script_*.js  # Development versions
└── sentimental-*         # Original source files
```
Problem: Connections fail between different cities/ISPs due to complex NAT/firewall scenarios.
Solution: Updated ICE configuration now includes TURN servers for relay connections.
- Start server with port forwarding (ngrok/VS Code)
- Share the public URL with someone 400km+ away
- Both join the meeting from your respective locations
- Check the browser console for the connection type:
  - ⚡ "Connected via direct P2P" (ideal)
  - 🌐 "Connected via TURN relay" (works for long-distance)
- ✅ Added TURN servers: Free relay servers for complex NAT scenarios
- ✅ Multiple protocols: TCP/UDP on ports 80/443 for firewall bypass
- ✅ Enhanced debugging: Console shows P2P vs TURN relay usage
- ✅ Automatic fallback: Tries P2P first, TURN if needed
- Local/nearby: Direct P2P connection (faster)
- Long-distance: May use TURN relay (still works, slightly higher latency)
- Corporate networks: TURN over port 443 bypasses firewalls
Recommendation: Keep all files for now, delete optional ones later if needed.
- Start the server: Run `npm start` in your VS Code terminal
- Open the Ports tab: In VS Code, go to the "Ports" tab (next to Terminal)
- Forward port 3000: Click "Add Port" → enter `3000` → set visibility to "Public"
- Copy the forwarded URL: VS Code provides a public URL (e.g., `https://xxx-3000.app.github.dev`)
- Access from other devices: Open the public URL on phones, tablets, other computers
```shell
# Install ngrok (one-time setup)
npm install -g ngrok

# Start your server
npm start

# In another terminal, expose port 3000
ngrok http 3000

# Use the https:// URL provided by ngrok on other devices
```

```shell
# Start server with host binding
node server.js --host 0.0.0.0

# Find your local IP address:
# Windows: ipconfig
# Mac/Linux: ifconfig or ip addr

# Access from other devices on the same network:
# http://YOUR_LOCAL_IP:3000 (e.g., http://192.168.1.100:3000)
```

For Development/Testing:
- VS Code Port Forwarding: Safest option, automatically handles HTTPS
- ngrok: Good for external testing, provides HTTPS by default
- Local network: Only use on trusted networks (home/office WiFi)
Important Security Notes:
- Never expose production: These methods are for development only
- Temporary access: Stop port forwarding when done testing
- HTTPS required: WebRTC requires secure context for camera/microphone
- Firewall awareness: Port forwarding may bypass some security measures
Recommended Testing Flow:
- Start locally: Test basic functionality with browser tabs
- VS Code forwarding: Test with phone/tablet on same network
- External devices: Use ngrok for testing from different networks
- Production deployment: Use proper hosting with SSL certificates
```shell
# Install dependencies
npm install

# Start the WebSocket server
npm start

# ✅ Server should start on http://localhost:3000
```

- Open two browser tabs to http://localhost:3000
- Grant permissions for camera/microphone in both tabs
- Click "Join Meeting" in both tabs
- Verify: You should see yourself and the other tab's participant
- Test features: Check activity detection, network stats, audio-only mode
- Setup port forwarding using one of the methods above
- Primary device: Open the local URL (http://localhost:3000)
- Secondary devices: Open the forwarded/public URL
- Join meeting from all devices
- Test real scenarios:
- Poor network conditions
- Different device types (phone, tablet, laptop)
- Activity detection across devices
- Audio-only mode when network is poor
- Check port: Ensure port 3000 is not in use
- Install dependencies: Run `npm install`
- Check Node version: Requires Node.js 14+
- HTTPS required: WebRTC needs secure context (HTTPS or localhost)
- Grant permissions: Check browser permission settings
- Test hardware: Verify camera/mic work in other apps
- Firewall: Ensure port 3000 is not blocked
- Network: Devices must reach the server
- HTTPS for remote: Use ngrok or VS Code forwarding for external access
- TURN servers enabled: Latest version includes free TURN servers
- Corporate networks: TURN over port 443 bypasses most firewalls
- Symmetric NAT: TURN servers handle complex NAT scenarios automatically
- Connection timeout: Allow up to 30 seconds for TURN relay establishment
- Test with browser console: Check for ICE connection state logs
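One way to verify P2P vs TURN relay usage is to classify the selected candidate pair from `pc.getStats()`. A sketch operating on a flattened stats array; `connectionKind` is an assumed helper name, not from the source:

```javascript
// Hypothetical classifier: was the call carried directly or via a TURN relay?
// Expects an array of WebRTC stats objects (e.g. [...statsReport.values()]).
function connectionKind(statsReports) {
  // The nominated, succeeded candidate-pair is the one actually carrying media
  const pair = statsReports.find(
    r => r.type === 'candidate-pair' && r.nominated && r.state === 'succeeded'
  );
  if (!pair) return 'unknown';
  const local = statsReports.find(r => r.id === pair.localCandidateId);
  // candidateType 'relay' means the local side sends through a TURN server
  return local && local.candidateType === 'relay' ? 'turn-relay' : 'direct-p2p';
}
```

In the browser: `connectionKind([...(await pc.getStats()).values()])`.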
Open the browser console (F12) and look for:

```javascript
// Good signs:
"ICE connection state: connected"
"Using TURN relay candidate"
"Peer connection established via relay"

// Problem indicators:
"ICE connection state: failed"
"All candidates failed"
"TURN server authentication failed"
```

- MediaPipe loading: Check browser console for CDN errors
- Face visibility: Ensure face is well-lit and visible
- Camera permissions: Activity detection requires video access
- Enhanced audio debugging: Press `Ctrl+Alt+D` or run `diagnoseAudioIssues()` in the console
- Check microphone permissions: The browser may have different permissions for each site
- Audio track states: Console shows detailed audio track information for both directions
- TURN relay audio: Audio may work differently over TURN vs P2P connections
- Browser-specific issues: Try different browsers (Chrome/Firefox/Safari) to isolate
- Network firewall: Some corporate firewalls block specific RTP ports for audio vs video
Console commands for audio debugging:

```javascript
// Run comprehensive audio diagnosis
diagnoseAudioIssues()

// Check if the local microphone is working
navigator.mediaDevices.getUserMedia({ audio: true }).then(s => console.log('Mic works', s))

// Check audio constraints
localStream?.getAudioTracks().forEach(t => console.log(t.getSettings()))
```

The browser console will show:
- Microphone permissions state
- Local audio track status (enabled/muted/readyState)
- Audio senders/receivers for each peer connection
- Audio element states for all participants
- Inbound/outbound audio statistics
- Setup two devices using port forwarding methods above
- Join meeting from both devices
- Test activity states on Device A while watching Device B:
- 👁 Active: Look directly at camera with eyes open
- 😴 Slept: Close eyes for 1+ seconds
- 👻 Gone: Move completely out of camera view for 3+ seconds
- ⏳ Checking: Should appear briefly during transitions
- Simulate poor network: Use browser dev tools to throttle network
- Trigger audio-only mode: Network should automatically adapt
- Verify activity transmission: Activity status should still update with "(Audio)" suffix
- Local video preservation: Your own video should remain visible even in audio-only
- Monitor stats panel: Watch real-time network metrics in top-right
- Test quality changes: Network should adapt between HIGH/MEDIUM/LOW/AUDIO_ONLY
- Verify packet loss calculation: Should show realistic values (not 100%)
- Test recovery: Network should improve quality when conditions get better
```shell
# Using Node.js http-server
npm install -g http-server
http-server -p 3000

# Using Python
python -m http.server 3000

# Using Node.js with WebSocket support
node server.js
```

- Dynamic bitrate adjustment: Based on network conditions
- Audio prioritization: Maintains call quality in poor conditions
- Background participant optimization: Reduces quality for non-speakers
- Adaptive frame rates: Adjusts based on network capacity
- Stream cleanup: Proper disposal of media tracks
- Event listener management: Prevents memory leaks
- Participant object lifecycle: Clean creation and removal
- Efficient audio analysis: Optimized FFT processing
- Throttled updates: Prevents excessive re-renders
- Smart grid recalculation: Only when participant count changes
- Check camera permissions in browser
- Verify HTTPS/localhost requirement for WebRTC
- Toggle camera button to refresh stream
- Fixed in latest version with smart recovery logic
- Monitor console for quality assessment logs
- Check RTT and packet loss values in stats panel
- Verify STUN server accessibility
- Check firewall settings for WebRTC traffic
- Review browser console for detailed error messages
Enable debug logging by opening browser console (F12) to see:
- Network quality assessments
- Connection state changes
- Audio level measurements
- Quality adaptation decisions
- Peer-to-peer connections with STUN discovery and TURN relay fallback
- Unified Plan SDP for modern browser compatibility
- Bundle policy: Maximized for connection efficiency
- RTCP mux policy: Required for optimal performance
- Web Audio API: Real-time audio level analysis
- Echo cancellation: Enabled by default
- Noise suppression: Hardware-accelerated when available
- Auto gain control: Maintains consistent audio levels
- Hardware acceleration: Utilizes GPU when available
- Constraint-based adaptation: Dynamic resolution/framerate
- Encoding parameter control: Bitrate and priority management
- Track management: Enable/disable without stream recreation
- Video bitrate: Bytes per second transmitted/received
- Audio bitrate: Audio data transmission rates
- Packet loss: Percentage of lost packets
- Round-trip time: Network latency measurements
- Jitter: Variation in packet arrival times
- Connection states: Detailed peer connection status
- Frame rate tracking: Actual vs target frame rates
- Resolution tracking: Current video dimensions
- Audio levels: Real-time voice activity detection
- Network adaptation events: Quality change logging
- Peer-to-peer: No server-side media processing
- Local preview: Camera feed never leaves device unnecessarily
- Permission-based: Explicit user consent for media access
- Relay as last resort: TURN servers are used only when direct P2P fails, and relayed media remains encrypted
- Encrypted connections: All WebRTC traffic is encrypted
- Origin restrictions: Same-origin policy enforcement
```shell
# Basic setup
npm install
npm start

# VS Code users (easiest multi-device)
# 1. Run: npm start
# 2. Ports tab -> Add Port -> 3000 -> Public
# 3. Use the provided public URL on other devices

# ngrok setup
npm install -g ngrok
npm start
ngrok http 3000  # In a separate terminal

# Local network (same WiFi only)
# Windows: ipconfig | findstr IPv4
# Mac/Linux: ifconfig | grep inet
# Then use: http://YOUR_IP:3000
```

- Single device (browser tabs)
- Multi-device via port forwarding
- Camera/microphone permissions granted
- Activity detection working (👁😴👻)
- Network adaptation (check stats panel)
- Audio-only mode transmission
- Data channel communication
MIT License - Feel free to use, modify, and distribute.
Contributions welcome! Please read the code structure and follow the established patterns for network adaptation and UI management.
Built with ❤️ using modern WebRTC APIs, advanced network optimization, and responsive design principles.