This is the main repo for the modeling tool, including the ML kernel, the frontend, and rygg.
Make sure that you set up your editor of choice with the following instructions: https://black.readthedocs.io/en/stable/integrations/editors.html
For when you want to replicate what the user sees:

```shell
cd scripts
# create and activate a python virtual environment. For example:
python -m venv .venv
. .venv/bin/activate
# build the wheel
python build.py wheel
# run it
perceptilabs
```

To build and run the services in docker:

```shell
pushd scripts
# create and activate a python virtual environment. For example:
python -m venv .venv
. .venv/bin/activate
# build the images
python build.py docker all
popd
# run it
cd build/docker/compose
../../../scripts/dev_install
# point your browser at http://localhost
```

This will run all of the services in development mode with code reloading.
You will need:

- docker from docker.com
- make (included with OSX & Linux; on Windows get it from chocolatey à la https://stackoverflow.com/a/57042516)
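A quick way to verify the prerequisites are on your PATH before starting (a sketch; `check_tool` is a hypothetical helper, not part of the repo):

```shell
# hypothetical helper: report whether a tool is installed and on PATH
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: ok"
  else
    echo "$1: MISSING"
  fi
}

check_tool docker
check_tool make
```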
Just run it and watch the logs roll by:

```shell
cd docker/dev
make dev_all
docker-compose up
# ... do stuff ...
# <ctrl-c> or close the terminal
```

Alternatively, use the up and down subcommands of docker-compose:

```shell
docker-compose up -d
# ... do stuff ...
docker-compose down
```

To follow the logs of one service, e.g. the rendering kernel:

```shell
docker-compose logs -f render
```

To run integration tests against the docker-based services:
- rygg:

  ```shell
  python -m pytest -rfe --capture=tee-sys --host=localhost --port=80 --path="/rygg" --vol_map="${HOME}/Downloads/Perceptilabs_dev:/perceptilabs/Documents/Perceptilabs"
  ```

- kernel: TBD
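The rygg invocation above is long, and only the host, port, and volume map usually change. A small wrapper that assembles the command line can help (a sketch; `rygg_test_cmd` is a hypothetical helper, and the defaults are copied from the command above):

```shell
# hypothetical helper: assemble the rygg integration-test command line
rygg_test_cmd() {
  host="${1:-localhost}"
  port="${2:-80}"
  vol_map="${HOME}/Downloads/Perceptilabs_dev:/perceptilabs/Documents/Perceptilabs"
  echo python -m pytest -rfe --capture=tee-sys \
    "--host=$host" "--port=$port" --path=/rygg "--vol_map=$vol_map"
}

# print the command for a non-default port; run it with eval once it looks right
rygg_test_cmd localhost 8080
```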
Notes:

- The docker files for development mode mount your source directory into a volume, so normal code changes will be picked up immediately.
- If you change your requirements.txt file, or package.json in frontend, then you need to rebuild with `make`.
- Run `make clean` to tear everything down.
To just run everything (with pyenv, venv, and pip on OSX or Linux):

```shell
cd dev-env
./setup
honcho start -f Procfile -e .env
```

... or run the following steps:

- Redis server

  ```shell
  docker run -it -p 6379:6379 redis
  ```

- Rendering kernel

  ```shell
  cd backend
  PL_KERNEL_CELERY="1" PL_REDIS_URL="redis://localhost" AUTH_ENV=dev python main.py
  ```

- Training worker

  ```shell
  cd backend
  PL_REDIS_URL="redis://localhost" AUTH_ENV=dev celery -A perceptilabs.tasks.celery_executor worker --loglevel=debug --queues=training --pool=threads
  ```

- Flower (optional)

  ```shell
  cd backend
  PL_REDIS_URL="redis://localhost" celery -A perceptilabs.tasks.celery_executor flower --loglevel=debug
  ```

- Rygg server

  ```shell
  cd rygg
  PL_FILE_SERVING_TOKEN=12312 PL_FILE_UPLOAD_DIR=$(pwd) PERCEPTILABS_DB=./db.sqlite3 container=xyz AUTH_ENV=dev python -m django migrate --settings rygg.settings
  PL_FILE_SERVING_TOKEN=12312 PL_FILE_UPLOAD_DIR=$(pwd) PERCEPTILABS_DB=./db.sqlite3 container=xyz AUTH_ENV=dev python -m django runserver 0.0.0.0:8000 --settings rygg.settings
  ```

- Rygg worker

  ```shell
  cd rygg
  PL_FILE_SERVING_TOKEN=12312 PL_FILE_UPLOAD_DIR=$(pwd) PERCEPTILABS_DB=./db.sqlite3 container=a celery -A rygg worker -l INFO --queues=rygg
  ```

- Frontend

  ```shell
  cd frontend
  npm install
  npm run-script start:web
  ```
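The manual steps above repeat the same environment variables across services; exporting them once per terminal keeps them consistent (a sketch; the values are the development defaults used in the commands above, and `PL_FILE_SERVING_TOKEN` is an arbitrary dev token, not a secret):

```shell
# development defaults shared by the kernel, workers, and rygg commands above
# (the per-service `container=` values differ, so they are not exported here)
export PL_REDIS_URL="redis://localhost"
export AUTH_ENV=dev
export PL_FILE_SERVING_TOKEN=12312
export PL_FILE_UPLOAD_DIR="$(pwd)"
export PERCEPTILABS_DB=./db.sqlite3

# with these exported, e.g. the training worker shortens to:
# cd backend && celery -A perceptilabs.tasks.celery_executor worker --loglevel=debug --queues=training --pool=threads
echo "redis: $PL_REDIS_URL auth: $AUTH_ENV"
```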
You won't get auto-reload, but you'll get better handling of the fileserver token:

```shell
cd frontend
# Build the frontend static page
npm install
npm run build-render
rm -rf static_file_server/static_file_server/dist
mv src/dist static_file_server/static_file_server/
cd static_file_server
# create and activate a python virtual environment. For example:
python -m venv .venv
source .venv/bin/activate
# Set up and run the static_file_server
pip install --upgrade pip setuptools
pip install -r requirements.txt
python manage.py runserver
```

Use [Anaconda](https://www.anaconda.com/) to manage virtual environments.
Do the following steps inside backend and rygg to set up an environment for each service:

- Install Anaconda from here: https://docs.anaconda.com/anaconda/install/windows/
- Open "Anaconda Prompt"
- Create a new environment:

  ```shell
  conda create -n pl_rygg python=3.7
  ```

- Create 2 more environments the same way, including `pl_backend`
- cd to the rygg and backend folders and do the following:
  - Activate the proper environment: `conda activate pl_rygg` (`pl_rygg` for rygg, `pl_backend` for backend)
  - Update setuptools: `pip install --upgrade pip setuptools`
  - Install all dependencies: `pip install -r requirements.txt`

Now that you have set up the environments, run the following to start the services:
```shell
PL_ROOT=$(git rev-parse --show-toplevel)

# run the rendering kernel
cd "$PL_ROOT/backend"
AUTH_ENV=dev python main.py --mode=rendering --debug

# run rygg
cd "$PL_ROOT/rygg"
PL_FILE_SERVING_TOKEN=12312 PL_FILE_UPLOAD_DIR=$(pwd) AUTH_ENV=dev python manage.py runserver 0.0.0.0:8000

# Set up and run the static_file_server
cd "$PL_ROOT/frontend"
npm install
npm run build-render
rm -rf static_file_server/static_file_server/dist
mv src/dist static_file_server/static_file_server/
cd static_file_server
pip install --upgrade pip setuptools
pip install -r requirements.txt
PL_FILE_SERVING_TOKEN=12312 PL_KERNEL_URL=/kernel/ PL_RYGG_URL=/rygg/ PL_KEYCLOAK_URL=/auth/ python manage.py runserver 8080
```

If you run into issues calling rygg, run `python manage.py migrate` inside rygg and then run `python manage.py runserver` again.
- Install and test pl-nightly to make sure it's shippable
- Do the same tests against cd.perceptilabshosting.com
- In the perceptilabs repo, update the VERSION file and tag the branch:
  ```shell
  # Stash your stuff
  CUR_BRANCH=$(git branch --show-current)
  export STASH=1; git stash | grep -i saved || { export STASH=0; }

  # switch to master
  git checkout master
  git pull -r

  # Review the log to pick the git commit to ship. Usually it'll be the one that
  # was built for pl-nightly or for cd, which will be tagged as "docker_xxxx"
  git log
  export COMMIT_TO_SHIP=<the commit from the last step>
  git checkout $COMMIT_TO_SHIP

  # make the release tags
  git checkout -b tmp
  export PL_VERSION=<new version>
  scripts/make_release tmp $PL_VERSION origin

  # merge the tags into master
  git checkout master
  git merge tmp
  git push origin HEAD
  git branch -D tmp

  # switch back to your work
  git checkout $CUR_BRANCH
  [ $STASH -ne 1 ] || git stash pop
  ```
- In pipelines:
  - Start "PerceptiLabs Docker" for the $PL_VERSION tag. Note the build number from the URL of the build (a four-digit number currently starting with 7).
  - Start "PerceptiLabs Pip" for the $PL_VERSION tag:
    - Go to a previous release build
    - Click "Run New"
    - Select $PL_VERSION as the tag to build
- When PerceptiLabs Pip finishes, install the new perceptilabs from PyPI and do some sanity checks on it
- When PerceptiLabs Docker finishes, run the "Docker Release" pipeline:
  - Use tag $PL_VERSION
  - Build ID to deploy: the build number from above
- When Docker Release finishes, run "Release Airgapped":
  - Branch: master
  - Requested Version:
- When Docker Release finishes, run "Docker CD":
  - Branch: master
  - Release Channel: prod
  - Requested version: $PL_VERSION
  - GPUs: 1
  - CPUs: 4
  - Release environment: releasetest
  - DNS Subdomain: releasetest
  - Minutes to stay running: 120
  - SSH Key Name:
- When Docker Release finishes, go to releasetest.perceptilabshosting.com and do some sanity checks
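Since $PL_VERSION is typed by hand and feeds every pipeline above, a small sanity check before tagging can save a release cycle (a sketch; `valid_version` is a hypothetical helper, and the `X.Y.Z` format is an assumption about the VERSION file, not something the repo confirms):

```shell
# hypothetical guard: refuse an obviously malformed version string before tagging.
# Assumes a semver-ish X.Y.Z scheme; adjust if the VERSION file uses another format.
valid_version() {
  case "$1" in
    [0-9]*.[0-9]*.[0-9]*) return 0 ;;
    *) return 1 ;;
  esac
}

if valid_version "1.2.3"; then echo "ok"; fi
```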