3 changes: 3 additions & 0 deletions .gitignore
@@ -191,3 +191,6 @@ dev/

# requirements backups files
requirements.*.backup

# Local run files
local-run.yaml
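
With this entry in place, the generated file stays out of version control. A quick way to confirm the pattern takes effect (a minimal sketch, assuming `local-run.yaml` has already been generated):

```bash
# Should print the .gitignore rule that matches the generated file
git check-ignore -v local-run.yaml
```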
31 changes: 25 additions & 6 deletions README.md
@@ -152,13 +152,32 @@ Installation steps depend on the operating system. Please look at the instructions for
# Run LCS locally

To quickly get hands-on with LCS, run it using the default configuration provided in this repository:
0. install dependencies using [uv](https://docs.astral.sh/uv/getting-started/installation/) `uv sync --group dev --group llslibdev`
1. check Llama stack settings in [run.yaml](run.yaml), make sure we can access the provider and the model, the server should listen on port 8321.
2. export the LLM token env var that Llama stack requires. for OpenAI, we set the env var by `export OPENAI_API_KEY=sk-xxxxx`
3. start Llama stack server `uv run llama stack run run.yaml`
4. [Optional] If you're new to Llama stack, run through a quick tutorial to learn the basics of what the server is used for, by running the interactive tutorial script `./scripts/llama_stack_tutorial.sh`

0. Install dependencies using [uv](https://docs.astral.sh/uv/getting-started/installation/):
```bash
uv sync --group dev --group llslibdev
```
1. Generate a local Llama Stack config, `local-run.yaml`, from [run.yaml](run.yaml) by running the local run generation script:
```bash
./scripts/generate_local_run.sh
```
2. Export the LLM token environment variable that Llama Stack requires. For OpenAI, set it with:
```bash
export OPENAI_API_KEY=sk-xxxxx
```
3. Start the Llama Stack server:
```bash
uv run llama stack run local-run.yaml
```
4. [Optional] If you're new to Llama Stack, learn the basics of what the server is used for by running the interactive tutorial script:
```bash
./scripts/llama_stack_tutorial.sh
```
5. Check the LCS settings in [lightspeed-stack.yaml](lightspeed-stack.yaml): `llama_stack.url` should be `url: http://localhost:8321` (see the quick check sketched after this list).
6. start LCS server `make run`
6. Start the LCS server:
```bash
make run
```
7. Access the LCS web UI at [http://localhost:8080/](http://localhost:8080/).
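
For the step 5 check, a minimal sketch (it only assumes the URL literal appears verbatim in the config file):

```bash
# Confirm LCS points at the local Llama Stack server started in step 3
grep -n "http://localhost:8321" lightspeed-stack.yaml
```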


24 changes: 24 additions & 0 deletions scripts/generate_local_run.sh
@@ -0,0 +1,24 @@
#!/bin/bash

# Script to generate local-run.yaml from run.yaml
# Replaces ~/ with the user's home directory

# Get the directory where the script is located
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Get the parent directory (project root where run.yaml is located)
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/.." && pwd)"

# Input and output files
INPUT_FILE="${PROJECT_ROOT}/run.yaml"
OUTPUT_FILE="${PROJECT_ROOT}/local-run.yaml"

# Check if run.yaml exists
if [ ! -f "$INPUT_FILE" ]; then
    echo "Error: run.yaml not found at $INPUT_FILE" >&2
    exit 1
fi

# Replace ~/ with $HOME/ and write to local-run.yaml
sed "s|~/|$HOME/|g" "$INPUT_FILE" > "$OUTPUT_FILE"

echo "Successfully generated $OUTPUT_FILE"