EVOlution of TeleWhisper
Open-sourced due to a lack of sufficient remuneration.
For local testing, just install Python and the requirements with:

```shell
pip install -r requirements.txt
```

If you want to run it with Docker, make sure you have it installed as well.
The setup variables are configured via an `env.ini` file in the project root directory. Its contents must be as follows:

```ini
[Telegram]
# Obtained from the Telegram Developer console
api_id=<API_ID>
api_hash=<API_HASH>
# Obtained from the @BotFather Telegram bot
bot_token=<BOT_TOKEN>

[Database]
host=<DB_HOST>
database=<DB_NAME>
port=<DB_PORT>
username=<DB_USER>
password=<DB_USER_PASSWORD>

# At the moment, MANDATORY
[OpenAI]
api_key=<OPENAI_API_KEY>

# OPTIONAL. You can remove this 'Local' section if you don't want to install and use a local Whisper model
[Local]
use_local_whisper=True
# Select a model size from the official openai/whisper GitHub repo
model_size=small

# MANDATORY
[DeepL]
api_key=<DEEPL_API_KEY>

# MANDATORY
[FireworksAI]
api_key=<FIREWORKSAI_API_KEY>
url=<SERVICE_URL>

# MANDATORY
[RunPod]
api_key=<RUNPOD_API_KEY>
# The RunPod URL must be a FasterWhisper serverless instance or pod
url=<SERVICE_URL>

# MANDATORY
[Downloads]
# The VPS host and port must be accessible from the outside (make sure you have a firewall and a reverse proxy properly configured)
host=<VPS_HOST>
port=<VPS_PORT>
```

Run the bot with:

```shell
python main.py
```

or install uvicorn and run it with:

```shell
python -m uvicorn main:app
```

To run with Docker instead:

```shell
docker build -t <image_name> .
docker run -d -p <host>:<download_port> <image_name>
```

Here's more or less the structure:
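As a rough sketch of how the bot might load this configuration with Python's standard `configparser` (the section and key names match the `env.ini` above; the `load_settings` helper and its fallback logic are illustrative, not the project's actual code):

```python
import configparser

def load_settings(path="env.ini"):
    """Parse env.ini and pick a transcription backend.
    Illustrative helper; the real project may structure this differently."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    settings = {
        "api_id": cfg["Telegram"]["api_id"],
        "bot_token": cfg["Telegram"]["bot_token"],
        "deepl_key": cfg["DeepL"]["api_key"],
    }
    # The [Local] section is optional: fall back to the RunPod
    # FasterWhisper endpoint when it is absent or disabled.
    if cfg.has_section("Local") and cfg["Local"].getboolean("use_local_whisper", fallback=False):
        settings["backend"] = ("local", cfg["Local"].get("model_size", "small"))
    else:
        settings["backend"] = ("runpod", cfg["RunPod"]["url"])
    return settings
```

Because the `[Local]` section is the only optional one, removing it simply routes transcription to the remote backend.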

This happens when the user sends a file:
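That flow could be sketched roughly as the pipeline below — download the media, transcribe it with the configured Whisper backend, translate the transcript via DeepL. The function name and the three callables are hypothetical stand-ins, not the project's real handler:

```python
def handle_incoming_file(download, transcribe, translate, target_lang="EN"):
    """Illustrative pipeline for an incoming voice/audio file.
    `download`, `transcribe`, and `translate` are hypothetical
    stand-ins for the Telegram, Whisper, and DeepL steps."""
    audio_path = download()                    # fetch the media from Telegram
    transcript = transcribe(audio_path)        # local Whisper / RunPod / OpenAI
    return translate(transcript, target_lang)  # DeepL translation
```

Keeping the three steps as injected callables makes it easy to swap the local model for a remote endpoint without touching the handler.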

All the code here is released under the CC BY-NC license. Any use of this code must be non-commercial unless you have explicit permission.
