This repository contains code for training and exporting vision transformer models designed to run on myrtle.ai's vision acceleration solutions for FPGA.
The myrtle_vision library contains code common to different vision tasks such
as the core vision transformer model architecture. Before moving to one of the
subdirectories (e.g. classification, segmentation) to train a model for a
specific computer vision task, follow these instructions to install the
myrtle_vision library.
The myrtle_vision library requires Python >= 3.7. We recommend using a
virtual environment manager such as venv or conda to install the Python
dependencies.
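For example, a virtual environment can be created with Python's standard venv module (the environment name `.venv` below is just an illustrative choice):

```shell
# Create a virtual environment in the .venv directory (name is illustrative)
python3 -m venv .venv

# Activate it for the current shell session
source .venv/bin/activate

# Upgrade pip inside the environment before installing dependencies
pip install --upgrade pip
```

With the environment active, subsequent `pip install` commands affect only this environment rather than the system Python.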
- Install the non-Python dependencies, CUDA and Ninja, which are required by the QPyTorch library.
- Install the `myrtle_vision` library (including its Python dependencies):

  ```
  $ pwd
  <...>/myrtle-vision
  $ pip install -e .
  ```
We suggest installing myrtle_vision in editable mode (using pip's `-e` flag)
so that local changes to the library take effect without reinstalling.