diff --git a/.gitignore b/.gitignore
index 5cf9451bf..207f1974f 100644
--- a/.gitignore
+++ b/.gitignore
@@ -191,3 +191,6 @@ dev/
 
 # requirements backups files
 requirements.*.backup
+
+# Local run files
+local-run.yaml
\ No newline at end of file
diff --git a/README.md b/README.md
index 8006af769..448f5d9ad 100644
--- a/README.md
+++ b/README.md
@@ -152,13 +152,32 @@ Installation steps depends on operation system. Please look at instructions for
 
 # Run LCS locally
 
 To quickly get hands on LCS, we can run it using the default configurations provided in this repository:
-0. install dependencies using [uv](https://docs.astral.sh/uv/getting-started/installation/) `uv sync --group dev --group llslibdev`
-1. check Llama stack settings in [run.yaml](run.yaml), make sure we can access the provider and the model, the server shoud listen to port 8321.
-2. export the LLM token env var that Llama stack requires. for OpenAI, we set the env var by `export OPENAI_API_KEY=sk-xxxxx`
-3. start Llama stack server `uv run llama stack run run.yaml`
-4. [Optional] If you're new to Llama stack, run through a quick tutorial to learn the basics of what the server is used for, by running the interactive tutorial script `./scripts/llama_stack_tutorial.sh`
+
+0. install dependencies using [uv](https://docs.astral.sh/uv/getting-started/installation/)
+   ```bash
+   uv sync --group dev --group llslibdev
+   ```
+1. generate the Llama stack config `local-run.yaml` from [run.yaml](run.yaml) by running the local run generation script
+   ```bash
+   ./scripts/generate_local_run.sh
+   ```
+2. export the LLM token environment variable that Llama stack requires. For OpenAI, we set the env var by
+   ```bash
+   export OPENAI_API_KEY=sk-xxxxx
+   ```
+3. start the Llama stack server
+   ```bash
+   uv run llama stack run local-run.yaml
+   ```
+4. [Optional] If you're new to Llama stack, run through a quick tutorial to learn the basics of what the server is used for by running the interactive tutorial script
+   ```bash
+   ./scripts/llama_stack_tutorial.sh
+   ```
 5. check the LCS settings in [lightspeed-stack.yaml](lightspeed-stack.yaml). `llama_stack.url` should be `url: http://localhost:8321`
-6. start LCS server `make run`
+6. start the LCS server
+   ```bash
+   make run
+   ```
 7. access LCS web UI at [http://localhost:8080/](http://localhost:8080/)
 
diff --git a/scripts/generate_local_run.sh b/scripts/generate_local_run.sh
new file mode 100755
index 000000000..e198e1da9
--- /dev/null
+++ b/scripts/generate_local_run.sh
@@ -0,0 +1,24 @@
+#!/bin/bash
+
+# Script to generate local-run.yaml from run.yaml
+# Replaces ~/ with the user's home directory
+
+# Get the directory where the script is located
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+# Get the parent directory (project root where run.yaml is located)
+PROJECT_ROOT="$(cd "${SCRIPT_DIR}/.." && pwd)"
+
+# Input and output files
+INPUT_FILE="${PROJECT_ROOT}/run.yaml"
+OUTPUT_FILE="${PROJECT_ROOT}/local-run.yaml"
+
+# Check if run.yaml exists
+if [ ! -f "$INPUT_FILE" ]; then
+    echo "Error: run.yaml not found at $INPUT_FILE" >&2
+    exit 1
+fi
+
+# Replace ~/ with $HOME/ and write to local-run.yaml
+sed "s|~/|$HOME/|g" "$INPUT_FILE" > "$OUTPUT_FILE"
+
+echo "Successfully generated $OUTPUT_FILE"
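A note on what the new `scripts/generate_local_run.sh` does: it rewrites every `~/` in `run.yaml` to an absolute `$HOME/` path, presumably because the consumer of the YAML does not perform tilde expansion (that motivation is inferred from the script's comments, not stated in the patch). A minimal sketch of the substitution on a made-up config line — the `db_path` key and path are hypothetical, not taken from the real `run.yaml`:

```bash
# Feed one hypothetical run.yaml line through the same sed expression the script uses.
# The | delimiter avoids having to escape the / characters inside paths.
echo "db_path: ~/.llama/storage/kv.db" | sed "s|~/|$HOME/|g"
# For a user whose home directory is /home/alice, this prints:
#   db_path: /home/alice/.llama/storage/kv.db
```

One caveat worth noting: the substitution is purely textual, so any `~/` occurrence anywhere in the file (including inside comments or string values) gets rewritten, which is acceptable for a local convenience script.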
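For completeness, a typical invocation from the repository root (the checkout path `/home/alice/lightspeed-stack` below is hypothetical); the success message comes from the script's final `echo`:

```bash
./scripts/generate_local_run.sh
# Successfully generated /home/alice/lightspeed-stack/local-run.yaml
```

Because the generated `local-run.yaml` embeds a machine-specific home directory, the accompanying `.gitignore` entry keeps it out of version control.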