This project implements a VTube Studio plugin that drives Live2D avatars from webcam data: face tracking runs through MediaPipe, and the resulting parameters are sent to VTube Studio over its WebSocket API.
Note
This is my first project built with vibe coding.
- FacePositionX
- FacePositionY
- FacePositionZ
- FaceAngleX
- FaceAngleY
- FaceAngleZ
- MouthSmile
- MouthOpen
- Brows
- MousePositionX
- MousePositionY
- TongueOut
- EyeOpenLeft
- EyeOpenRight
- EyeLeftX
- EyeLeftY
- EyeRightX
- EyeRightY
- CheekPuff
- BrowLeftY
- BrowRightY
- MouthX
- FaceAngry
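The parameters above are custom tracking parameters that the plugin feeds into VTube Studio each frame. In the public VTube Studio API this is done with an `InjectParameterDataRequest` message; a minimal sketch of building one (the helper name `build_inject_request` and the sample values are illustrative, not taken from this repository):

```python
import json

def build_inject_request(values: dict[str, float], request_id: str = "inject-1") -> str:
    """Serialize tracked parameter values into a VTube Studio inject request.

    The envelope fields follow the public VTube Studio API; the actual
    plugin may batch values or set additional fields differently.
    """
    payload = {
        "apiName": "VTubeStudioPublicAPI",
        "apiVersion": "1.0",
        "requestID": request_id,
        "messageType": "InjectParameterDataRequest",
        "data": {
            # One {id, value} entry per tracked parameter for this frame.
            "parameterValues": [
                {"id": name, "value": value} for name, value in values.items()
            ],
        },
    }
    return json.dumps(payload)

# Example: push a few of the tracked parameters in one frame.
message = build_inject_request({"FaceAngleX": 4.2, "MouthOpen": 0.8, "EyeOpenLeft": 1.0})
```

In practice the serialized message is sent over the plugin's open WebSocket connection once per camera frame.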
- Python 3.14+
- VTube Studio installed and running
- Webcam
- Clone the repository:

  ```shell
  git clone https://github.com/pgalonza/Pipe2VTube
  ```

- Download the FaceLandmarker model:

  ```shell
  curl -O https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/latest/face_landmarker.task
  ```

- Install dependencies:

  ```shell
  uv sync
  ```

- Run the script:

  ```shell
  uv run python -m src.main
  ```

- --host VTube Studio host (default: localhost)
- --port WebSocket port (default: 8001)
- --camera Camera device ID (default: 0)
- --fps Camera frames per second (default: 30)
- --debug Enable debug mode with face landmarks visualization
- --no-vtube Run in standalone debug mode without VTube Studio
- --calibrate Force calibration on startup
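The flags above can be sketched as an `argparse` parser; defaults mirror this list, though the actual parser in `src/main.py` may be organized differently:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Illustrative CLI matching the options listed in the README."""
    parser = argparse.ArgumentParser(description="Pipe2VTube: webcam to VTube Studio")
    parser.add_argument("--host", default="localhost", help="VTube Studio host")
    parser.add_argument("--port", type=int, default=8001, help="WebSocket port")
    parser.add_argument("--camera", type=int, default=0, help="Camera device ID")
    parser.add_argument("--fps", type=int, default=30, help="Camera frames per second")
    parser.add_argument("--debug", action="store_true",
                        help="Enable debug mode with face landmarks visualization")
    parser.add_argument("--no-vtube", action="store_true",
                        help="Run in standalone debug mode without VTube Studio")
    parser.add_argument("--calibrate", action="store_true",
                        help="Force calibration on startup")
    return parser

# Example: parse a debug run against a non-default port.
args = build_parser().parse_args(["--port", "8002", "--debug"])
```

Note that `--no-vtube` becomes `args.no_vtube` after parsing, per argparse's dash-to-underscore convention.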
This project was developed using:
- Gigacode (logic)
- Source Craft Assistant (optimization and refactoring)
This project is licensed under the MIT License - see the LICENSE file for details.