diff --git a/.claude/kernelci-setup-context.md b/.claude/kernelci-setup-context.md new file mode 100644 index 000000000..d133a40fc --- /dev/null +++ b/.claude/kernelci-setup-context.md @@ -0,0 +1,158 @@ +# KernelCI Labgrid Adapter Setup Context + +## Last Updated: 2026-02-04 + +## Infrastructure Overview + +### Servers +1. **openwrt-kci.aparcar.org** (KernelCI API server) + - SSH: `ssh root@openwrt-kci.aparcar.org` + - Runs Docker containers for KernelCI API, MongoDB, Redis, MinIO, etc. + - Main compose at `/opt/openwrt-pipeline/docker-compose.yml` + +2. **labgrid-aparcar** (Labgrid coordinator + adapter) + - SSH: `ssh labgrid-aparcar` (logs in as labgrid-dev user) + - Runs labgrid-coordinator and labgrid-exporter as systemd services + - Adapter deployed at `~/labgrid-adapter` + +### Running Services + +**On openwrt-kci.aparcar.org:** +``` +openwrt-kci-api - KernelCI API (FastAPI) +openwrt-kci-mongodb - MongoDB database +openwrt-kci-redis - Redis +openwrt-kci-minio - S3-compatible storage +openwrt-kci-scheduler - Test scheduler +openwrt-kci-trigger - Firmware trigger +openwrt-kci-bridge - KCIDB bridge +``` + +**On labgrid-aparcar:** +``` +labgrid-coordinator.service - Labgrid coordinator (port 20408) +labgrid-exporter.service - Labgrid exporter +labgrid-adapter - KernelCI adapter (running as background process) +``` + +## Labgrid Adapter Status + +### Current State: RUNNING +- Process: `python -c 'from labgrid_kci_adapter.service import main; ...'` +- Log file: `~/adapter.log` +- Working directory: `~/labgrid-adapter` + +### Configuration (`~/labgrid-adapter/.env`) +``` +LAB_NAME=labgrid-aparcar +KCI_API_URL=https://api.openwrt-kci.aparcar.org +KCI_API_TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiI2OTgwYmQ4Mzg2MDcyNTY3OGE4Y2U1YjAiLCJhdWQiOlsiZmFzdGFwaS11c2VyczphdXRoIl0sImV4cCI6MjA4NTQwNDgxMH0.S5hDyVz0E2SIELCZTi8n4CwTpRr_8Sqjn85FmlVHSeQ +LG_COORDINATOR=localhost:20408 +POLL_INTERVAL=30 +MAX_CONCURRENT_JOBS=3 +HEALTH_CHECK_ENABLED=true +HEALTH_CHECK_INTERVAL=86400 +SUPPORTED_TEST_TYPES=firmware +TARGETS_DIR=/home/labgrid-dev/labgrid-adapter/targets +TESTS_DIR=/home/labgrid-dev/labgrid-adapter/tests-openwrt +FIRMWARE_CACHE=/home/labgrid-dev/labgrid-adapter/cache +``` + +### Discovered Devices (6 total) +1. bananapi_bpi-r4 +2. bananapi_bpi-r4-lite +3. genexis_pulse-ex400 +4. openwrt_one +5. rpi-4 +6. tplink_tl-wdr3600-v1 + +### Place Naming Convention +- Places are named: `labgrid-aparcar-{device_type}` +- LAB_NAME must be `labgrid-aparcar` (full prefix, not just `aparcar`) + +## Code Changes Made + +### Fixed place name construction +Files modified (need to be committed): +- `kernelci/labgrid-adapter/labgrid_kci_adapter/health_check.py` +- `kernelci/labgrid-adapter/labgrid_kci_adapter/service.py` + +Change: Removed redundant `labgrid-` prefix since LAB_NAME already includes it. +```python +# Before: place_name = f"labgrid-{settings.lab_name}-{device_name}" +# After: place_name = f"{settings.lab_name}-{device_name}" +``` + +## Known Issues + +### 1. Admin User Auth Problem (DEFERRED) +- Created admin user in MongoDB (ID: `69828a0526ff6640ab0f248b`) +- Token verification works in Python but fails via HTTP API +- **Workaround:** Using pipeline token (user ID: `6980bd83860725678a8ce5b0` - doesn't exist in DB but token works) +- Root cause unclear - possibly related to fastapi-users + fastapi-versioning interaction + +### 2. YAML Template Warnings +Target YAML files use labgrid's `!template` tag which requires special YAML loader. 
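+
+If these files ever need to be parsed outside labgrid, a minimal PyYAML shim can register the tag first (sketch only; the target path is hypothetical):
+
+```python
+import yaml
+
+# Treat labgrid's `!template` scalars as plain strings so the file parses
+# without labgrid's loader; template placeholders stay unresolved.
+yaml.SafeLoader.add_constructor(
+    "!template", lambda loader, node: loader.construct_scalar(node)
+)
+
+with open("targets/openwrt_one.yaml") as f:  # hypothetical target file
+    config = yaml.safe_load(f)
+```
+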
+These are warnings only, not blocking - devices still discovered from coordinator. + +### 3. Pytest Exit Code 3 +Tests complete with exit code 3 (no tests collected). The test execution flow works but actual tests may need configuration. + +## Useful Commands + +### Check adapter status +```bash +ssh labgrid-aparcar "pgrep -fa labgrid_kci_adapter" +ssh labgrid-aparcar "tail -50 ~/adapter.log" +``` + +### Restart adapter +```bash +ssh labgrid-aparcar "pkill -f labgrid_kci_adapter; cd ~/labgrid-adapter && source .venv/bin/activate && export \$(grep -v '^#' .env | xargs) && nohup python -c ' +import asyncio +import logging +import sys +logging.basicConfig(level=logging.INFO, stream=sys.stdout) +from labgrid_kci_adapter.service import main +asyncio.run(main()) +' >> ~/adapter.log 2>&1 &" +``` + +### Check labgrid places +```bash +ssh labgrid-aparcar "labgrid-client places" +``` + +### Check API status +```bash +curl -s https://api.openwrt-kci.aparcar.org/latest/ +ssh root@openwrt-kci.aparcar.org "docker logs openwrt-kci-api 2>&1 | tail -30" +``` + +### Test API with token +```bash +TOKEN="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiI2OTgwYmQ4Mzg2MDcyNTY3OGE4Y2U1YjAiLCJhdWQiOlsiZmFzdGFwaS11c2VyczphdXRoIl0sImV4cCI6MjA4NTQwNDgxMH0.S5hDyVz0E2SIELCZTi8n4CwTpRr_8Sqjn85FmlVHSeQ" +curl -s -H "Authorization: Bearer $TOKEN" https://api.openwrt-kci.aparcar.org/latest/nodes?limit=5 +``` + +### Sync adapter code changes +```bash +rsync -av --exclude='.venv' --exclude='__pycache__' --exclude='.pytest_cache' --exclude='*.egg-info' kernelci/labgrid-adapter/ labgrid-aparcar:~/labgrid-adapter/ +``` + +## API Configuration + +### Secret Keys (IMPORTANT - they differ!) +- **openwrt-pipeline/.env:** `KCI_SECRET_KEY=ae914b257bee501de4af4e6c7c8a76bd4a99c7d9ecf2aed0f43f3f8c4f37041d` +- **kernelci/.env:** `KCI_SECRET_KEY=59f2184b5a24d282856eb5accd15278f02f0fe2d11b66a98357d0177f83ba59e` (NOT USED) +- The API container uses the openwrt-pipeline secret (compose runs from /opt/openwrt-pipeline) + +### MongoDB +- Connection: `mongodb://admin:openwrt-mongo-32a6c8216d106e2c@mongodb:27017` +- Database: `openwrt_kernelci` + +## Next Steps (TODO) +1. Set up systemd service for adapter (needs sudo access on labgrid-aparcar) +2. Debug admin user authentication issue +3. Configure actual test execution (fix pytest exit code 3) +4. Commit code changes to repository diff --git a/.claude/settings.local.json b/.claude/settings.local.json new file mode 100644 index 000000000..1515acb08 --- /dev/null +++ b/.claude/settings.local.json @@ -0,0 +1,15 @@ +{ + "permissions": { + "allow": [ + "WebFetch(domain:github.com)", + "WebFetch(domain:docs.kernelci.org)", + "WebFetch(domain:raw.githubusercontent.com)", + "WebFetch(domain:www.collabora.com)", + "mcp__acp__Bash", + "mcp__acp__Edit", + "mcp__acp__Write", + "WebFetch(domain:openwrt-kci.aparcar.org)", + "WebFetch(domain:api.openwrt-kci.aparcar.org)" + ] + } +} diff --git a/.github/ISSUE_TEMPLATE/kernel-selftests.md b/.github/ISSUE_TEMPLATE/kernel-selftests.md new file mode 100644 index 000000000..73790b4fe --- /dev/null +++ b/.github/ISSUE_TEMPLATE/kernel-selftests.md @@ -0,0 +1,61 @@ +--- +name: Kernel Selftests Request +about: Request kernel selftests to be run on a specific device with custom firmware +title: "Kernel Selftests: [Brief description of your test]" +labels: ["kernel-selftests", "testing"] +assignees: "" +--- + +## Kernel Selftests Request + +Use this template to request kernel selftests on real OpenWrt hardware. The tests run using pytest and labgrid infrastructure. 
+ +Please use the following format to request kernel selftests: + +``` +/test-kernel-selftests +device: bananapi_bpi-r64-kernel +command: make -C net run_tests +firmware: https://example.com/path/to/your-openwrt-image.bin +``` + +### Parameters + +- **device**: Target device to run tests on + - Currently supported: `bananapi_bpi-r64-kernel` + +- **command**: Shell command to execute in the `/root/selftests/` directory + - The device will download selftests from the internet automatically + - Examples: + - `make -C net run_tests` - Run networking tests + - `make -C bpf run_tests` - Run BPF tests + - `make -C mm run_tests` - Run memory management tests + - `./run_kselftest.sh` - Run all available tests + +- **firmware**: Direct URL to your OpenWrt firmware image + - Must be a direct download link (e.g., GitHub releases, file hosting service) + - Device must have internet connectivity for downloading selftests + - Supported formats: `.bin`, `.img`, `.gz` files + +### Test Description + +Please describe: + +- What you're testing +- Expected behavior +- Any specific configuration in your firmware + +### How It Works + +1. Your custom firmware is flashed to real hardware +2. Device boots and connects to the internet +3. Kernel selftests are downloaded directly on the device +4. Your specified command runs via pytest and labgrid +5. Results are automatically posted as a comment + +### Additional Notes + +- Tests run using the existing pytest/labgrid infrastructure +- Device needs internet connectivity to download selftests +- Results include pytest output and detailed workflow logs +- Device is automatically powered off and released after testing diff --git a/.github/workflows/daily copy.yml b/.github/workflows/daily copy.yml new file mode 100644 index 000000000..93490b592 --- /dev/null +++ b/.github/workflows/daily copy.yml @@ -0,0 +1,538 @@ +name: Daily test for snapshots and stable releases +on: + push: + branches: + - main + workflow_dispatch: + schedule: + - cron: "0 0 * * *" + +env: + PYTHONUNBUFFERED: "1" + PYTEST_ADDOPTS: "--color=yes" + LG_CONSOLE: "internal" + LG_FEATURE_APK: "true" + +concurrency: + group: ${{ github.workflow }}-${{ github.ref }} + cancel-in-progress: false + +jobs: + generate-matrix: + name: Generate labnet.yaml matrix + runs-on: ubuntu-latest + outputs: + matrix: ${{ steps.set-matrix.outputs.matrix }} + qemu-matrix: ${{ steps.set-matrix.outputs.qemu-matrix }} + steps: + - name: Check out repository code + uses: actions/checkout@v4 + + - name: Generate matrix + run: | + sudo wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -O /usr/local/bin/yq &&\ + sudo chmod +x /usr/local/bin/yq + + # Get versions (reuse from previous step to avoid repeated API calls) + versions_json=$(curl -s https://downloads.openwrt.org/.versions.json) + stable_version=$(echo "$versions_json" | jq -r '.stable_version') + oldstable_version=$(echo "$versions_json" | jq -r '.oldstable_version') + stable_branch=$(echo "$stable_version" | cut -d. -f1,2) + oldstable_branch=$(echo "$oldstable_version" | cut -d. 
-f1,2)
+
+          versions="[
+            {\"type\": \"snapshot\", \"name\": \"snapshot\", \"url_base\": \"https://mirror-03.infra.openwrt.org/snapshots/targets\"},
+            {\"type\": \"stable\", \"name\": \"$stable_branch\", \"version\": \"$stable_version\", \"url_base\": \"https://mirror-03.infra.openwrt.org/releases/$stable_branch-SNAPSHOT/targets\"},
+            {\"type\": \"stable\", \"name\": \"$oldstable_branch\", \"version\": \"$oldstable_version\", \"url_base\": \"https://mirror-03.infra.openwrt.org/releases/$oldstable_branch-SNAPSHOT/targets\"}
+          ]"
+
+          device_matrix=$(yq -o=json '
+            . as $root |
+            $root.labs as $labs |
+            $root.devices as $devices |
+            $labs
+            | to_entries
+            | map(
+                .key as $lab |
+                .value.devices
+                | map(
+                    select($devices[.] != null) |
+                    {
+                      "device": .,
+                      "name": $devices[.].name,
+                      "proxy": $labs[$lab].proxy,
+                      "target": $devices[.].target,
+                      "firmware": $devices[.].firmware
+                    }
+                  )
+              )
+            | flatten
+          ' labnet.yaml)
+
+          # Combine devices with versions to create full matrix
+          matrix=$(echo "$device_matrix" | jq --argjson versions "$versions" '
+            [.[] as $device | $versions[] as $version | $device + {"version": $version, "version_name": $version.name}]
+          ')
+          echo "matrix=$(echo "$matrix" | jq -c '.')" >> $GITHUB_ENV
+
+          # Create QEMU matrix (use an underscore in the env var name so the
+          # next step can expand it as $qemu_matrix; hyphens are not valid in
+          # shell variable names)
+          qemu_base='[
+            {"target": "malta-be", "firmware": "vmlinux-initramfs.elf", "dependency": "qemu-system-mips"},
+            {"target": "x86-64", "firmware": "generic-squashfs-combined.img.gz", "dependency": "qemu-system-x86"},
+            {"target": "armsr-armv8", "firmware": "generic-initramfs-kernel.bin", "dependency": "qemu-system-aarch64"}
+          ]'
+          qemu_matrix=$(echo "$qemu_base" | jq --argjson versions "$versions" '
+            [.[] as $qemu | $versions[] as $version | $qemu + {"version": $version, "version_name": $version.name}]
+          ')
+          echo "qemu_matrix=$(echo "$qemu_matrix" | jq -c '.')" >> $GITHUB_ENV
+
+      - name: Filter out devices with open healthcheck issues
+        id: set-matrix
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+        run: |
+          # Get open issues with healthcheck label
+          issues=$(gh issue list --label "healthcheck" --state open --json title,labels --jq '.[] | .title')
+
+          # Filter out matrix entries where proxy/device combination has an open healthcheck issue
+          filtered_matrix=$(echo "$matrix" | jq --argjson issues "$(echo "$issues" | jq -R -s 'split("\n") | map(select(length > 0))')" '
+            map(select(
+              . 
as $entry | + ($issues | map(test("\($entry.proxy)/\($entry.device)")) | any) | not + )) + ') + + deduplicated_matrix=$(echo "$filtered_matrix" | jq ' + group_by(.device) | map(.[0]) + ') + + echo "matrix=$(echo "$deduplicated_matrix" | jq -c '.')" >> $GITHUB_OUTPUT + echo "qemu-matrix=$(echo "$qemu_matrix" | jq -c '.')" >> $GITHUB_OUTPUT + + test-real: + name: ${{ matrix.name }} (${{ matrix.version_name }}) + needs: generate-matrix + runs-on: global-coordinator + strategy: + fail-fast: false + matrix: + include: ${{ fromJson(needs.generate-matrix.outputs.matrix) }} + + steps: + - name: Check out repository code + uses: actions/checkout@v4 + + - name: Install uv + uses: astral-sh/setup-uv@v6 + + - name: Set environment + env: + target: ${{ matrix.target }} + version_type: ${{ matrix.version.type }} + version_name: ${{ matrix.version.name }} + url_base: ${{ matrix.version.url_base }} + run: | + echo "VERSION_NAME=$version_name" >> $GITHUB_ENV + + if [ "$version_type" = "snapshot" ]; then + # Snapshot logic (existing) + export firmware=openwrt-${{ matrix.target }}-${{ matrix.device }}-${{ matrix.firmware }} + UPSTREAM_URL="$url_base" + + mkdir -p /tmp/tftp/${{ matrix.device }} + wget $UPSTREAM_URL/${target/-/\/}/$firmware \ + --output-document /tmp/tftp/${{ matrix.device }}/$firmware + (cd /tmp/tftp/ && gzip -df ${{ matrix.device }}/$firmware) || true + + FIRMWARE_VERSION=$(curl $UPSTREAM_URL/${target/-/\/}/version.buildinfo) + echo "FIRMWARE_VERSION=$FIRMWARE_VERSION" >> $GITHUB_ENV + echo "LG_IMAGE=/tmp/tftp/${{ matrix.device }}/${firmware/.gz/}" >> $GITHUB_ENV + else + # Stable release logic + UPSTREAM_URL="$url_base" + + # Fetch profiles.json to get real firmware filename + profiles_url="$UPSTREAM_URL/${target/-/\/}/profiles.json" + profiles_json=$(curl -s "$profiles_url") + + # Find the kernel image for the device + if [ "${{ matrix.device }}" = "generic" ]; then + profile_key="generic" + else + # For non-generic devices, we might need to search through profiles + profile_key="${{ matrix.device }}" + fi + + # Extract the kernel/combined image filename based on firmware type + case "${{ matrix.firmware }}" in + *squashfs-combined*) + image_type="combined" + filesystem="squashfs" + ;; + *ext4-combined*) + image_type="combined" + filesystem="ext4" + ;; + *initramfs*) + image_type="kernel" + filesystem="" + ;; + *vmlinux*) + image_type="kernel" + filesystem="" + ;; + *) + image_type="combined" + filesystem="squashfs" + ;; + esac + + if [ -n "$filesystem" ]; then + firmware_filename=$(echo "$profiles_json" | jq -r --arg profile "$profile_key" --arg type "$image_type" --arg fs "$filesystem" ' + .profiles[$profile].images[] | select(.type == $type and .filesystem == $fs) | .name + ') + else + firmware_filename=$(echo "$profiles_json" | jq -r --arg profile "$profile_key" --arg type "$image_type" ' + .profiles[$profile].images[] | select(.type == $type) | .name + ') + fi + + if [ -z "$firmware_filename" ] || [ "$firmware_filename" = "null" ]; then + echo "Could not find firmware filename for device ${{ matrix.device }}, falling back to constructed name" + # Fallback to constructed filename + image_prefix=$(echo "$profiles_json" | jq -r --arg profile "$profile_key" '.profiles[$profile].image_prefix') + firmware_filename="$image_prefix-${{ matrix.firmware }}" + fi + + echo "Using firmware: $firmware_filename" + + mkdir -p /tmp/tftp/${{ matrix.device }} + wget "$UPSTREAM_URL/${target/-/\/}/$firmware_filename" \ + --output-document /tmp/tftp/${{ matrix.device }}/$firmware_filename + (cd /tmp/tftp/ 
&& gzip -df ${{ matrix.device }}/$firmware_filename) || true + + FIRMWARE_VERSION=$(echo "$profiles_json" | jq -r '.version_number') + echo "FIRMWARE_VERSION=$FIRMWARE_VERSION" >> $GITHUB_ENV + echo "LG_IMAGE=/tmp/tftp/${{ matrix.device }}/${firmware_filename/.gz/}" >> $GITHUB_ENV + fi + + echo "LG_PROXY=${{ matrix.proxy }}" >> $GITHUB_ENV + + - name: Wait for free device + run: | + eval $(uv run labgrid-client reserve --wait --shell device=${{ matrix.device }}) + echo "LG_TOKEN=$LG_TOKEN" >> $GITHUB_ENV + echo "LG_PLACE=+" >> $GITHUB_ENV + uv run labgrid-client -p +$LG_TOKEN lock + echo "LG_ENV=targets/${{ matrix.device }}.yaml" >> $GITHUB_ENV + + - name: Run test + run: | + mkdir -p ${{ matrix.device }}-${{ matrix.version.name }}/ + uv run pytest tests/ \ + --lg-log ${{ matrix.device }}-${{ matrix.version.name }}/ \ + --junitxml=${{ matrix.device }}-${{ matrix.version.name }}/report.xml \ + --lg-colored-steps \ + --log-cli-level=CONSOLE + + - name: Poweroff and unlock device + if: always() + run: | + uv run labgrid-client power off || true + uv run labgrid-client unlock + + - name: Upload results + uses: actions/upload-artifact@v4 + if: always() + with: + name: results-${{ matrix.device }}-${{ matrix.version.name }} + path: ${{ matrix.device }}-${{ matrix.version.name }}/* + + test-qemu: + name: ${{ matrix.target }} (${{ matrix.version_name }}) + needs: generate-matrix + runs-on: ubuntu-latest + env: + LG_FEATURE_ONLINE: "true" + strategy: + fail-fast: false + matrix: + include: ${{ fromJson(needs.generate-matrix.outputs.qemu-matrix) }} + + steps: + - name: Check out repository code + uses: actions/checkout@v4 + + - name: Setup environment + run: | + sudo apt-get update + sudo apt-get -y install \ + nmap \ + ${{ matrix.dependency }} + + echo "LG_ENV=targets/qemu-${{ matrix.target }}.yaml" >> $GITHUB_ENV + echo "VERSION_NAME=${{ matrix.version.name }}" >> $GITHUB_ENV + + - name: Install uv + uses: astral-sh/setup-uv@v6 + + - name: Download test firmware + env: + target: ${{ matrix.target }} + version_type: ${{ matrix.version.type }} + version_name: ${{ matrix.version.name }} + url_base: ${{ matrix.version.url_base }} + run: | + if [ "$version_type" = "snapshot" ]; then + # Snapshot logic + firmware_name="openwrt-$target-${{ matrix.firmware }}" + wget "$url_base/${target/-/\/}/$firmware_name" \ + --output-document "$firmware_name" + FIRMWARE_VERSION=$(curl "$url_base/${target/-/\/}/version.buildinfo") + else + # Stable release logic + profiles_url="$url_base/${target/-/\/}/profiles.json" + profiles_json=$(curl -s "$profiles_url") + + # Find the appropriate image for QEMU + case "${{ matrix.firmware }}" in + *squashfs-combined*) + image_type="combined" + filesystem="squashfs" + ;; + *ext4-combined*) + image_type="combined" + filesystem="ext4" + ;; + *initramfs*) + image_type="kernel" + filesystem="" + ;; + *vmlinux*) + image_type="kernel" + filesystem="" + ;; + *) + image_type="combined" + filesystem="squashfs" + ;; + esac + + if [ -n "$filesystem" ]; then + firmware_name=$(echo "$profiles_json" | jq -r --arg type "$image_type" --arg fs "$filesystem" ' + .profiles.generic.images[] | select(.type == $type and .filesystem == $fs) | .name + ') + else + firmware_name=$(echo "$profiles_json" | jq -r --arg type "$image_type" ' + .profiles.generic.images[] | select(.type == $type) | .name + ') + fi + + if [ -z "$firmware_name" ] || [ "$firmware_name" = "null" ]; then + echo "Could not find firmware, falling back to constructed name" + image_prefix=$(echo "$profiles_json" | jq -r 
'.profiles.generic.image_prefix')
+              firmware_name="$image_prefix-${{ matrix.firmware }}"
+            fi
+
+            echo "Using firmware: $firmware_name"
+            wget "$url_base/${target/-/\/}/$firmware_name" \
+              --output-document "$firmware_name"
+            FIRMWARE_VERSION=$(echo "$profiles_json" | jq -r '.version_number')
+          fi
+
+          echo "FIRMWARE_VERSION=$FIRMWARE_VERSION" >> $GITHUB_ENV
+          echo "FIRMWARE_FILE=$firmware_name" >> $GITHUB_ENV
+
+      - name: Run test
+        run: |
+          gunzip $FIRMWARE_FILE || true
+          firmware_file=${FIRMWARE_FILE/.gz/}
+
+          mkdir -p ${{ matrix.target }}-${{ matrix.version.name }}
+
+          uv run pytest tests/ \
+            --lg-log ${{ matrix.target }}-${{ matrix.version.name }}/ \
+            --junitxml=${{ matrix.target }}-${{ matrix.version.name }}/report.xml \
+            --lg-colored-steps \
+            --log-cli-level=CONSOLE \
+            --firmware $GITHUB_WORKSPACE/$firmware_file
+
+      - name: Upload results
+        uses: actions/upload-artifact@v4
+        if: always()
+        with:
+          name: results-qemu_${{ matrix.target }}-${{ matrix.version.name }}
+          path: ${{ matrix.target }}-${{ matrix.version.name }}/*
+
+  results:
+    name: Deploy results page
+    runs-on: ubuntu-latest
+    if: always()
+    needs:
+      - generate-matrix
+      - test-real
+      - test-qemu
+    steps:
+      - name: Download all results
+        uses: actions/download-artifact@v4
+        with:
+          path: page/data
+
+      - name: Check out repository code
+        uses: actions/checkout@v4
+        with:
+          fetch-depth: 1
+          path: openwrt-tests/
+
+      - name: Setup dashboard
+        run: |
+          cp openwrt-tests/contrib/index.html page/index.html
+
+          # Create version-specific directories and organize artifacts
+          mkdir -p page/snapshot page/stable page/oldstable
+
+          # Get current version info for better classification
+          versions_json=$(curl -s https://downloads.openwrt.org/.versions.json)
+          stable_branch=$(echo "$versions_json" | jq -r '.stable_version' | cut -d. -f1,2)
+          oldstable_branch=$(echo "$versions_json" | jq -r '.oldstable_version' | cut -d. 
-f1,2) + + # Move artifacts to appropriate version folders + for artifact_dir in page/data/*/; do + artifact_name=$(basename "$artifact_dir") + + if [[ "$artifact_name" == *"-snapshot" ]]; then + mv "$artifact_dir" page/snapshot/ + elif [[ "$artifact_name" == *"-$stable_branch" ]]; then + mv "$artifact_dir" page/stable/ + elif [[ "$artifact_name" == *"-$oldstable_branch" ]]; then + mv "$artifact_dir" page/oldstable/ + fi + done + + # Create devices.json with version-aware structure + device_matrix='${{ needs.generate-matrix.outputs.matrix }}' + qemu_matrix='${{ needs.generate-matrix.outputs.qemu-matrix }}' + + # Group devices by version using dynamic branch detection + snapshot_devices=$(echo "$device_matrix" | jq '[.[] | select(.version_name == "snapshot")]') + stable_devices=$(echo "$device_matrix" | jq --arg branch "$stable_branch" '[.[] | select(.version_name == $branch)]') + oldstable_devices=$(echo "$device_matrix" | jq --arg branch "$oldstable_branch" '[.[] | select(.version_name == $branch)]') + + # Group QEMU targets by version + snapshot_qemu=$(echo "$qemu_matrix" | jq '[.[] | select(.version_name == "snapshot")] | map({ + "device": ("qemu_" + .target), + "name": ("QEMU " + .target), + "target": .target, + "firmware": .firmware, + "version_name": .version_name + })') + + stable_qemu=$(echo "$qemu_matrix" | jq --arg branch "$stable_branch" '[.[] | select(.version_name == $branch)] | map({ + "device": ("qemu_" + .target), + "name": ("QEMU " + .target), + "target": .target, + "firmware": .firmware, + "version_name": .version_name + })') + + oldstable_qemu=$(echo "$qemu_matrix" | jq --arg branch "$oldstable_branch" '[.[] | select(.version_name == $branch)] | map({ + "device": ("qemu_" + .target), + "name": ("QEMU " + .target), + "target": .target, + "firmware": .firmware, + "version_name": .version_name + })') + + # Create version-specific device files + echo "$(echo "$snapshot_devices $snapshot_qemu" | jq -s 'add')" > page/snapshot/devices.json + echo "$(echo "$stable_devices $stable_qemu" | jq -s 'add')" > page/stable/devices.json + echo "$(echo "$oldstable_devices $oldstable_qemu" | jq -s 'add')" > page/oldstable/devices.json + + # Create version metadata files + cat > page/snapshot/version.json << EOF + { + "version_type": "snapshot", + "version_name": "snapshot", + "version_title": "Development Snapshots", + "description": "Latest development builds", + "url_base": "https://mirror-03.infra.openwrt.org/snapshots/targets", + "generated_at": "$(date -u +"%Y-%m-%dT%H:%M:%S.%3NZ")" + } + EOF + + cat > page/stable/version.json << EOF + { + "version_type": "stable", + "version_name": "$stable_branch", + "version_title": "Current Stable Release", + "description": "Daily rebuilds of current stable release ($stable_branch)", + "url_base": "https://mirror-03.infra.openwrt.org/releases/$stable_branch-SNAPSHOT/targets", + "generated_at": "$(date -u +"%Y-%m-%dT%H:%M:%S.%3NZ")" + } + EOF + + cat > page/oldstable/version.json << EOF + { + "version_type": "stable", + "version_name": "$oldstable_branch", + "version_title": "Previous Stable Release", + "description": "Daily rebuilds of previous stable release ($oldstable_branch)", + "url_base": "https://mirror-03.infra.openwrt.org/releases/$oldstable_branch-SNAPSHOT/targets", + "generated_at": "$(date -u +"%Y-%m-%dT%H:%M:%S.%3NZ")" + } + EOF + + # Create main index with version overview + cat > page/versions.json << EOF + { + "versions": [ + { + "name": "snapshot", + "title": "Development Snapshots", + "description": "Latest development builds", + 
"path": "snapshot/" + }, + { + "name": "stable", + "title": "Current Stable", + "description": "Daily rebuilds of current stable release", + "path": "stable/" + }, + { + "name": "oldstable", + "title": "Previous Stable", + "description": "Daily rebuilds of previous stable release", + "path": "oldstable/" + } + ] + } + EOF + + # Create version-specific index.html files + for version_dir in snapshot stable oldstable; do + case $version_dir in + snapshot) + version_title="Development Snapshots" + ;; + stable) + version_title="Current Stable Release" + ;; + oldstable) + version_title="Previous Stable Release" + ;; + esac + + # Copy template and replace placeholders + sed "s/{{VERSION_TITLE}}/$version_title/g" openwrt-tests/contrib/version-template.html > page/$version_dir/index.html + done + + # Copy 404 page + cp openwrt-tests/contrib/404.html page/404.html + + - name: Publish test report + uses: peaceiris/actions-gh-pages@v4 + if: always() + with: + github_token: ${{ secrets.GITHUB_TOKEN }} + publish_branch: gh-pages + publish_dir: page/ diff --git a/.github/workflows/kernel-selftests.yml b/.github/workflows/kernel-selftests.yml new file mode 100644 index 000000000..72751239d --- /dev/null +++ b/.github/workflows/kernel-selftests.yml @@ -0,0 +1,232 @@ +name: Kernel Selftests + +on: + issue_comment: + types: [created] + +env: + PYTHONUNBUFFERED: 1 + PYTEST_ADDOPTS: "--tb=short" + LG_CONSOLE: 1 + +concurrency: + group: ${{ github.workflow }}-${{ github.event.issue.number }} + cancel-in-progress: true + +jobs: + parse-comment: + name: Parse test request + runs-on: ubuntu-latest + if: github.event.issue.pull_request == null && contains(github.event.comment.body, '/test-kernel-selftests') + outputs: + device: ${{ steps.parse.outputs.device }} + command: ${{ steps.parse.outputs.command }} + firmware-url: ${{ steps.parse.outputs.firmware-url }} + valid: ${{ steps.parse.outputs.valid }} + steps: + - name: Parse comment + id: parse + env: + COMMENT_BODY: ${{ github.event.comment.body }} + run: | + # Extract parameters from comment + device=$(echo "$COMMENT_BODY" | grep -oP 'device:\s*\K\S+' || echo "") + command=$(echo "$COMMENT_BODY" | grep -oP 'command:\s*\K.*' || echo "") + firmware_url=$(echo "$COMMENT_BODY" | grep -oP 'firmware:\s*\K\S+' || echo "") + + echo "Parsed parameters:" + echo "Device: $device" + echo "Command: $command" + echo "Firmware URL: $firmware_url" + + # Validate parameters + valid="true" + if [[ "$device" != "bananapi_bpi-r64-kernel" ]]; then + echo "Error: Only bananapi_bpi-r64-kernel is supported currently" + valid="false" + fi + + if [[ -z "$command" ]]; then + echo "Error: Command is required" + valid="false" + fi + + if [[ -z "$firmware_url" ]]; then + echo "Error: Firmware URL is required" + valid="false" + fi + + echo "device=$device" >> $GITHUB_OUTPUT + echo "command=$command" >> $GITHUB_OUTPUT + echo "firmware-url=$firmware_url" >> $GITHUB_OUTPUT + echo "valid=$valid" >> $GITHUB_OUTPUT + + - name: Add reaction to comment + uses: peter-evans/create-or-update-comment@v4 + with: + token: ${{ secrets.GITHUB_TOKEN }} + comment-id: ${{ github.event.comment.id }} + reactions: eyes + + run-kernel-selftests: + name: Run kernel selftests on ${{ needs.parse-comment.outputs.device }} + needs: parse-comment + runs-on: global-coordinator + if: needs.parse-comment.outputs.valid == 'true' + steps: + - name: Check out repository code + uses: actions/checkout@v5 + + - name: Install uv + uses: astral-sh/setup-uv@v6 + + - name: Set environment + env: + target: ${{ 
needs.parse-comment.outputs.device }} + FIRMWARE_URL: ${{ needs.parse-comment.outputs.firmware-url }} + TEST_COMMAND: ${{ needs.parse-comment.outputs.command }} + run: | + echo "TARGET=$target" >> $GITHUB_ENV + echo "LG_PLACE=$target" >> $GITHUB_ENV + echo "LG_ENV=targets/$target.yaml" >> $GITHUB_ENV + echo "SELFTESTS_COMMAND=$TEST_COMMAND" >> $GITHUB_ENV + + # Download firmware + mkdir -p $GITHUB_WORKSPACE/tftp/$target + curl -L -o "$GITHUB_WORKSPACE/tftp/$target/firmware.bin" "$FIRMWARE_URL" + + # Set image path for labgrid + echo "LG_IMAGE=$GITHUB_WORKSPACE/tftp/$target/firmware.bin" >> $GITHUB_ENV + + - name: Wait for free device + run: | + source test-env.sh + echo "Waiting for device: $TARGET" + labgrid-client -c labnet.sh reserve name=$TARGET + echo "DEVICE_RESERVED=true" >> $GITHUB_ENV + + - name: Run kernel selftests + run: | + source test-env.sh + mkdir -p results/ + uv run pytest tests/test_kernel_selftests.py \ + --lg-log results/ \ + --junitxml=results/report.xml \ + --lg-colored-steps \ + --log-cli-level=INFO + + - name: Poweroff and unlock device + if: always() && env.DEVICE_RESERVED == 'true' + run: | + source test-env.sh + labgrid-client -c labnet.sh off $TARGET || true + labgrid-client -c labnet.sh release $TARGET || true + + - name: Parse results and create comment + if: always() + run: | + # Extract key information from pytest results + if [[ -f results/report.xml ]]; then + # Parse basic test results from JUnit XML + total=$(grep -o 'tests="[0-9]*"' results/report.xml | grep -o '[0-9]*' || echo "0") + failures=$(grep -o 'failures="[0-9]*"' results/report.xml | grep -o '[0-9]*' || echo "0") + errors=$(grep -o 'errors="[0-9]*"' results/report.xml | grep -o '[0-9]*' || echo "0") + + # Calculate passed tests + passed=$((total - failures - errors)) + + # Determine status + if [[ $failures -eq 0 && $errors -eq 0 && $total -gt 0 ]]; then + status="✅ COMPLETED" + elif [[ $total -gt 0 ]]; then + status="⚠️ COMPLETED WITH ISSUES" + else + status="❌ FAILED" + fi + else + status="❌ FAILED" + total=0 + passed=0 + failures=1 + errors=0 + fi + + # Create results comment + cat > comment.md << EOF + # 🧪 Kernel Selftests Results + + **Status:** $status + **Device:** ${{ needs.parse-comment.outputs.device }} + **Command:** \`${{ needs.parse-comment.outputs.command }}\` + **Firmware:** ${{ needs.parse-comment.outputs.firmware-url }} + **Workflow Run:** [${{ github.run_number }}](${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}) + + ## Summary + - **Total Tests:** $total + - **Passed:** $passed ✅ + - **Failed:** $failures ❌ + - **Errors:** $errors 💥 + + ## Test Details + + The kernel selftests were executed using pytest with labgrid. Check the [workflow logs](${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}) for detailed output including the complete selftest results. 
+ + ### Next Steps + - Review the workflow logs for detailed test output + - Check individual test failures in the pytest results + - Verify your firmware includes necessary kernel configs for the tests + + --- + *Results generated automatically by kernel-selftests workflow* + EOF + + - name: Upload results + if: always() + uses: actions/upload-artifact@v4 + with: + name: kernel-selftests-${{ needs.parse-comment.outputs.device }} + path: results/ + retention-days: 30 + + - name: Comment results on issue + if: always() + uses: peter-evans/create-or-update-comment@v4 + with: + token: ${{ secrets.GITHUB_TOKEN }} + issue-number: ${{ github.event.issue.number }} + body-path: comment.md + + validation-error: + name: Report validation error + needs: parse-comment + runs-on: ubuntu-latest + if: needs.parse-comment.outputs.valid == 'false' + steps: + - name: Comment validation error + uses: peter-evans/create-or-update-comment@v4 + with: + token: ${{ secrets.GITHUB_TOKEN }} + issue-number: ${{ github.event.issue.number }} + body: | + # ❌ Invalid Kernel Selftests Request + + Your kernel selftests request could not be processed due to validation errors. + + ## Required Format + + Please format your comment like this: + + ``` + /test-kernel-selftests + device: bananapi_bpi-r64-kernel + command: make -C net run_tests + firmware: https://example.com/path/to/openwrt-image.bin + ``` + + ## Requirements + + - **device**: Currently only `bananapi_bpi-r64-kernel` is supported + - **command**: Any valid shell command to run in the selftests directory + - **firmware**: Direct URL to an OpenWrt firmware image file + + Please update your comment with the correct format and try again. diff --git a/.github/workflows/matrix.json b/.github/workflows/matrix.json new file mode 100644 index 000000000..f61247ddc --- /dev/null +++ b/.github/workflows/matrix.json @@ -0,0 +1,576 @@ +[ + { + "device": "openwrt_one", + "name": "OpenWrt One", + "proxy": "labgrid-aparcar", + "target": "mediatek-filogic", + "firmware": "initramfs.itb", + "version": { + "type": "snapshot", + "name": "snapshot", + "url_base": "https://mirror-03.infra.openwrt.org/snapshots/targets" + }, + "version_name": "snapshot" + }, + { + "device": "openwrt_one", + "name": "OpenWrt One", + "proxy": "labgrid-aparcar", + "target": "mediatek-filogic", + "firmware": "initramfs.itb", + "version": { + "type": "stable", + "name": "24.10", + "version": "24.10.2", + "url_base": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets" + }, + "version_name": "24.10" + }, + { + "device": "openwrt_one", + "name": "OpenWrt One", + "proxy": "labgrid-aparcar", + "target": "mediatek-filogic", + "firmware": "initramfs.itb", + "version": { + "type": "stable", + "name": "23.05", + "version": "23.05.5", + "url_base": "https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets" + }, + "version_name": "23.05" + }, + { + "device": "tplink_tl-wdr3600-v1", + "name": "TP-Link TL-WDR3600 v1", + "proxy": "labgrid-aparcar", + "target": "ath79-generic", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "snapshot", + "name": "snapshot", + "url_base": "https://mirror-03.infra.openwrt.org/snapshots/targets" + }, + "version_name": "snapshot" + }, + { + "device": "tplink_tl-wdr3600-v1", + "name": "TP-Link TL-WDR3600 v1", + "proxy": "labgrid-aparcar", + "target": "ath79-generic", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "stable", + "name": "24.10", + "version": "24.10.2", + "url_base": 
"https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets" + }, + "version_name": "24.10" + }, + { + "device": "tplink_tl-wdr3600-v1", + "name": "TP-Link TL-WDR3600 v1", + "proxy": "labgrid-aparcar", + "target": "ath79-generic", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "stable", + "name": "23.05", + "version": "23.05.5", + "url_base": "https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets" + }, + "version_name": "23.05" + }, + { + "device": "genexis_pulse-ex400", + "name": "Genexis Pulse EX400 / Inteno Pulse EX400", + "proxy": "labgrid-aparcar", + "target": "ramips-mt7621", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "snapshot", + "name": "snapshot", + "url_base": "https://mirror-03.infra.openwrt.org/snapshots/targets" + }, + "version_name": "snapshot" + }, + { + "device": "genexis_pulse-ex400", + "name": "Genexis Pulse EX400 / Inteno Pulse EX400", + "proxy": "labgrid-aparcar", + "target": "ramips-mt7621", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "stable", + "name": "24.10", + "version": "24.10.2", + "url_base": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets" + }, + "version_name": "24.10" + }, + { + "device": "genexis_pulse-ex400", + "name": "Genexis Pulse EX400 / Inteno Pulse EX400", + "proxy": "labgrid-aparcar", + "target": "ramips-mt7621", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "stable", + "name": "23.05", + "version": "23.05.5", + "url_base": "https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets" + }, + "version_name": "23.05" + }, + { + "device": "bananapi_bpi-r4", + "name": "Bananapi BPi-R4", + "proxy": "labgrid-aparcar", + "target": "mediatek-filogic", + "firmware": "initramfs-recovery.itb", + "version": { + "type": "snapshot", + "name": "snapshot", + "url_base": "https://mirror-03.infra.openwrt.org/snapshots/targets" + }, + "version_name": "snapshot" + }, + { + "device": "bananapi_bpi-r4", + "name": "Bananapi BPi-R4", + "proxy": "labgrid-aparcar", + "target": "mediatek-filogic", + "firmware": "initramfs-recovery.itb", + "version": { + "type": "stable", + "name": "24.10", + "version": "24.10.2", + "url_base": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets" + }, + "version_name": "24.10" + }, + { + "device": "bananapi_bpi-r4", + "name": "Bananapi BPi-R4", + "proxy": "labgrid-aparcar", + "target": "mediatek-filogic", + "firmware": "initramfs-recovery.itb", + "version": { + "type": "stable", + "name": "23.05", + "version": "23.05.5", + "url_base": "https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets" + }, + "version_name": "23.05" + }, + { + "device": "glinet_gl-mt6000", + "name": "GL.iNet GL-MT6000", + "proxy": "labgrid-aparcar", + "target": "mediatek-filogic", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "snapshot", + "name": "snapshot", + "url_base": "https://mirror-03.infra.openwrt.org/snapshots/targets" + }, + "version_name": "snapshot" + }, + { + "device": "glinet_gl-mt6000", + "name": "GL.iNet GL-MT6000", + "proxy": "labgrid-aparcar", + "target": "mediatek-filogic", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "stable", + "name": "24.10", + "version": "24.10.2", + "url_base": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets" + }, + "version_name": "24.10" + }, + { + "device": "glinet_gl-mt6000", + "name": "GL.iNet GL-MT6000", + "proxy": "labgrid-aparcar", + "target": "mediatek-filogic", + "firmware": "initramfs-kernel.bin", + "version": { 
+ "type": "stable", + "name": "23.05", + "version": "23.05.5", + "url_base": "https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets" + }, + "version_name": "23.05" + }, + { + "device": "rpi-4", + "name": "Raspberry Pi 4B/400/CM4 (64bit)", + "proxy": "labgrid-aparcar", + "target": "bcm27xx-bcm2711", + "firmware": "squashfs-factory.img.gz", + "version": { + "type": "snapshot", + "name": "snapshot", + "url_base": "https://mirror-03.infra.openwrt.org/snapshots/targets" + }, + "version_name": "snapshot" + }, + { + "device": "rpi-4", + "name": "Raspberry Pi 4B/400/CM4 (64bit)", + "proxy": "labgrid-aparcar", + "target": "bcm27xx-bcm2711", + "firmware": "squashfs-factory.img.gz", + "version": { + "type": "stable", + "name": "24.10", + "version": "24.10.2", + "url_base": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets" + }, + "version_name": "24.10" + }, + { + "device": "rpi-4", + "name": "Raspberry Pi 4B/400/CM4 (64bit)", + "proxy": "labgrid-aparcar", + "target": "bcm27xx-bcm2711", + "firmware": "squashfs-factory.img.gz", + "version": { + "type": "stable", + "name": "23.05", + "version": "23.05.5", + "url_base": "https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets" + }, + "version_name": "23.05" + }, + { + "device": "enterasys_ws-ap3710i", + "name": "Enterasys WS-AP3710i", + "proxy": "labgrid-aparcar", + "target": "mpc85xx-p1020", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "snapshot", + "name": "snapshot", + "url_base": "https://mirror-03.infra.openwrt.org/snapshots/targets" + }, + "version_name": "snapshot" + }, + { + "device": "enterasys_ws-ap3710i", + "name": "Enterasys WS-AP3710i", + "proxy": "labgrid-aparcar", + "target": "mpc85xx-p1020", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "stable", + "name": "24.10", + "version": "24.10.2", + "url_base": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets" + }, + "version_name": "24.10" + }, + { + "device": "enterasys_ws-ap3710i", + "name": "Enterasys WS-AP3710i", + "proxy": "labgrid-aparcar", + "target": "mpc85xx-p1020", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "stable", + "name": "23.05", + "version": "23.05.5", + "url_base": "https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets" + }, + "version_name": "23.05" + }, + { + "device": "cznic_turris-omnia", + "name": "CZ.NIC Turris Omnia", + "proxy": "labgrid-bastian", + "target": "mvebu-cortexa9", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "snapshot", + "name": "snapshot", + "url_base": "https://mirror-03.infra.openwrt.org/snapshots/targets" + }, + "version_name": "snapshot" + }, + { + "device": "cznic_turris-omnia", + "name": "CZ.NIC Turris Omnia", + "proxy": "labgrid-bastian", + "target": "mvebu-cortexa9", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "stable", + "name": "24.10", + "version": "24.10.2", + "url_base": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets" + }, + "version_name": "24.10" + }, + { + "device": "cznic_turris-omnia", + "name": "CZ.NIC Turris Omnia", + "proxy": "labgrid-bastian", + "target": "mvebu-cortexa9", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "stable", + "name": "23.05", + "version": "23.05.5", + "url_base": "https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets" + }, + "version_name": "23.05" + }, + { + "device": "tplink_tl-wdr4300-v1", + "name": "TP-Link TL-WDR4300 v1", + "proxy": "labgrid-bastian", + "target": "ath79-generic", + 
"firmware": "initramfs-kernel.bin", + "version": { + "type": "snapshot", + "name": "snapshot", + "url_base": "https://mirror-03.infra.openwrt.org/snapshots/targets" + }, + "version_name": "snapshot" + }, + { + "device": "tplink_tl-wdr4300-v1", + "name": "TP-Link TL-WDR4300 v1", + "proxy": "labgrid-bastian", + "target": "ath79-generic", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "stable", + "name": "24.10", + "version": "24.10.2", + "url_base": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets" + }, + "version_name": "24.10" + }, + { + "device": "tplink_tl-wdr4300-v1", + "name": "TP-Link TL-WDR4300 v1", + "proxy": "labgrid-bastian", + "target": "ath79-generic", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "stable", + "name": "23.05", + "version": "23.05.5", + "url_base": "https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets" + }, + "version_name": "23.05" + }, + { + "device": "hpe_msm460", + "name": "Hewlett-Packard MSM460", + "proxy": "labgrid-blocktrron", + "target": "mpc85xx-p1020", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "snapshot", + "name": "snapshot", + "url_base": "https://mirror-03.infra.openwrt.org/snapshots/targets" + }, + "version_name": "snapshot" + }, + { + "device": "hpe_msm460", + "name": "Hewlett-Packard MSM460", + "proxy": "labgrid-blocktrron", + "target": "mpc85xx-p1020", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "stable", + "name": "24.10", + "version": "24.10.2", + "url_base": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets" + }, + "version_name": "24.10" + }, + { + "device": "hpe_msm460", + "name": "Hewlett-Packard MSM460", + "proxy": "labgrid-blocktrron", + "target": "mpc85xx-p1020", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "stable", + "name": "23.05", + "version": "23.05.5", + "url_base": "https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets" + }, + "version_name": "23.05" + }, + { + "device": "tplink_tl-wr842n-v3", + "name": "TP-Link TL-WR842N v3", + "proxy": "labgrid-blocktrron", + "target": "ath79-generic", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "snapshot", + "name": "snapshot", + "url_base": "https://mirror-03.infra.openwrt.org/snapshots/targets" + }, + "version_name": "snapshot" + }, + { + "device": "tplink_tl-wr842n-v3", + "name": "TP-Link TL-WR842N v3", + "proxy": "labgrid-blocktrron", + "target": "ath79-generic", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "stable", + "name": "24.10", + "version": "24.10.2", + "url_base": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets" + }, + "version_name": "24.10" + }, + { + "device": "tplink_tl-wr842n-v3", + "name": "TP-Link TL-WR842N v3", + "proxy": "labgrid-blocktrron", + "target": "ath79-generic", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "stable", + "name": "23.05", + "version": "23.05.5", + "url_base": "https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets" + }, + "version_name": "23.05" + }, + { + "device": "tplink_tl-wr1043nd-v3", + "name": "TP-Link TL-WR1043N/ND v3", + "proxy": "labgrid-leinelab", + "target": "ath79-generic", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "snapshot", + "name": "snapshot", + "url_base": "https://mirror-03.infra.openwrt.org/snapshots/targets" + }, + "version_name": "snapshot" + }, + { + "device": "tplink_tl-wr1043nd-v3", + "name": "TP-Link TL-WR1043N/ND v3", + "proxy": "labgrid-leinelab", + 
"target": "ath79-generic", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "stable", + "name": "24.10", + "version": "24.10.2", + "url_base": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets" + }, + "version_name": "24.10" + }, + { + "device": "tplink_tl-wr1043nd-v3", + "name": "TP-Link TL-WR1043N/ND v3", + "proxy": "labgrid-leinelab", + "target": "ath79-generic", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "stable", + "name": "23.05", + "version": "23.05.5", + "url_base": "https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets" + }, + "version_name": "23.05" + }, + { + "device": "genexis_pulse-ex400", + "name": "Genexis Pulse EX400 / Inteno Pulse EX400", + "proxy": "labgrid-hsn", + "target": "ramips-mt7621", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "snapshot", + "name": "snapshot", + "url_base": "https://mirror-03.infra.openwrt.org/snapshots/targets" + }, + "version_name": "snapshot" + }, + { + "device": "genexis_pulse-ex400", + "name": "Genexis Pulse EX400 / Inteno Pulse EX400", + "proxy": "labgrid-hsn", + "target": "ramips-mt7621", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "stable", + "name": "24.10", + "version": "24.10.2", + "url_base": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets" + }, + "version_name": "24.10" + }, + { + "device": "genexis_pulse-ex400", + "name": "Genexis Pulse EX400 / Inteno Pulse EX400", + "proxy": "labgrid-hsn", + "target": "ramips-mt7621", + "firmware": "initramfs-kernel.bin", + "version": { + "type": "stable", + "name": "23.05", + "version": "23.05.5", + "url_base": "https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets" + }, + "version_name": "23.05" + }, + { + "device": "bananapi_bpi-r64", + "name": "Bananapi BPi-R64", + "proxy": "labgrid-hsn", + "target": "mediatek-mt7622", + "firmware": "initramfs-recovery.itb", + "version": { + "type": "snapshot", + "name": "snapshot", + "url_base": "https://mirror-03.infra.openwrt.org/snapshots/targets" + }, + "version_name": "snapshot" + }, + { + "device": "bananapi_bpi-r64", + "name": "Bananapi BPi-R64", + "proxy": "labgrid-hsn", + "target": "mediatek-mt7622", + "firmware": "initramfs-recovery.itb", + "version": { + "type": "stable", + "name": "24.10", + "version": "24.10.2", + "url_base": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets" + }, + "version_name": "24.10" + }, + { + "device": "bananapi_bpi-r64", + "name": "Bananapi BPi-R64", + "proxy": "labgrid-hsn", + "target": "mediatek-mt7622", + "firmware": "initramfs-recovery.itb", + "version": { + "type": "stable", + "name": "23.05", + "version": "23.05.5", + "url_base": "https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets" + }, + "version_name": "23.05" + } +] diff --git a/.github/workflows/matrix2.json b/.github/workflows/matrix2.json new file mode 100644 index 000000000..e591e0e4f --- /dev/null +++ b/.github/workflows/matrix2.json @@ -0,0 +1,380 @@ +[ + { + "device": "openwrt_one", + "name": "OpenWrt One", + "proxy": "labgrid-aparcar", + "target": "mediatek-filogic", + "firmware": "initramfs.itb", + "version_url": "https://mirror-03.infra.openwrt.org/snapshots/targets", + "version_name": "snapshot" + }, + { + "device": "openwrt_one", + "name": "OpenWrt One", + "proxy": "labgrid-aparcar", + "target": "mediatek-filogic", + "firmware": "initramfs.itb", + "version_url": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets", + "version_name": "24.10" + }, + { + "device": 
"openwrt_one", + "name": "OpenWrt One", + "proxy": "labgrid-aparcar", + "target": "mediatek-filogic", + "firmware": "initramfs.itb", + "version_url": "https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets", + "version_name": "23.05" + }, + { + "device": "tplink_tl-wdr3600-v1", + "name": "TP-Link TL-WDR3600 v1", + "proxy": "labgrid-aparcar", + "target": "ath79-generic", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/snapshots/targets", + "version_name": "snapshot" + }, + { + "device": "tplink_tl-wdr3600-v1", + "name": "TP-Link TL-WDR3600 v1", + "proxy": "labgrid-aparcar", + "target": "ath79-generic", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets", + "version_name": "24.10" + }, + { + "device": "tplink_tl-wdr3600-v1", + "name": "TP-Link TL-WDR3600 v1", + "proxy": "labgrid-aparcar", + "target": "ath79-generic", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets", + "version_name": "23.05" + }, + { + "device": "genexis_pulse-ex400", + "name": "Genexis Pulse EX400 / Inteno Pulse EX400", + "proxy": "labgrid-aparcar", + "target": "ramips-mt7621", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/snapshots/targets", + "version_name": "snapshot" + }, + { + "device": "genexis_pulse-ex400", + "name": "Genexis Pulse EX400 / Inteno Pulse EX400", + "proxy": "labgrid-aparcar", + "target": "ramips-mt7621", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets", + "version_name": "24.10" + }, + { + "device": "genexis_pulse-ex400", + "name": "Genexis Pulse EX400 / Inteno Pulse EX400", + "proxy": "labgrid-aparcar", + "target": "ramips-mt7621", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets", + "version_name": "23.05" + }, + { + "device": "bananapi_bpi-r4", + "name": "Bananapi BPi-R4", + "proxy": "labgrid-aparcar", + "target": "mediatek-filogic", + "firmware": "initramfs-recovery.itb", + "version_url": "https://mirror-03.infra.openwrt.org/snapshots/targets", + "version_name": "snapshot" + }, + { + "device": "bananapi_bpi-r4", + "name": "Bananapi BPi-R4", + "proxy": "labgrid-aparcar", + "target": "mediatek-filogic", + "firmware": "initramfs-recovery.itb", + "version_url": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets", + "version_name": "24.10" + }, + { + "device": "bananapi_bpi-r4", + "name": "Bananapi BPi-R4", + "proxy": "labgrid-aparcar", + "target": "mediatek-filogic", + "firmware": "initramfs-recovery.itb", + "version_url": "https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets", + "version_name": "23.05" + }, + { + "device": "glinet_gl-mt6000", + "name": "GL.iNet GL-MT6000", + "proxy": "labgrid-aparcar", + "target": "mediatek-filogic", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/snapshots/targets", + "version_name": "snapshot" + }, + { + "device": "glinet_gl-mt6000", + "name": "GL.iNet GL-MT6000", + "proxy": "labgrid-aparcar", + "target": "mediatek-filogic", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets", + "version_name": "24.10" + }, + { + "device": "glinet_gl-mt6000", + "name": "GL.iNet GL-MT6000", + "proxy": "labgrid-aparcar", + "target": 
"mediatek-filogic", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets", + "version_name": "23.05" + }, + { + "device": "rpi-4", + "name": "Raspberry Pi 4B/400/CM4 (64bit)", + "proxy": "labgrid-aparcar", + "target": "bcm27xx-bcm2711", + "firmware": "squashfs-factory.img.gz", + "version_url": "https://mirror-03.infra.openwrt.org/snapshots/targets", + "version_name": "snapshot" + }, + { + "device": "rpi-4", + "name": "Raspberry Pi 4B/400/CM4 (64bit)", + "proxy": "labgrid-aparcar", + "target": "bcm27xx-bcm2711", + "firmware": "squashfs-factory.img.gz", + "version_url": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets", + "version_name": "24.10" + }, + { + "device": "rpi-4", + "name": "Raspberry Pi 4B/400/CM4 (64bit)", + "proxy": "labgrid-aparcar", + "target": "bcm27xx-bcm2711", + "firmware": "squashfs-factory.img.gz", + "version_url": "https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets", + "version_name": "23.05" + }, + { + "device": "enterasys_ws-ap3710i", + "name": "Enterasys WS-AP3710i", + "proxy": "labgrid-aparcar", + "target": "mpc85xx-p1020", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/snapshots/targets", + "version_name": "snapshot" + }, + { + "device": "enterasys_ws-ap3710i", + "name": "Enterasys WS-AP3710i", + "proxy": "labgrid-aparcar", + "target": "mpc85xx-p1020", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets", + "version_name": "24.10" + }, + { + "device": "enterasys_ws-ap3710i", + "name": "Enterasys WS-AP3710i", + "proxy": "labgrid-aparcar", + "target": "mpc85xx-p1020", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets", + "version_name": "23.05" + }, + { + "device": "cznic_turris-omnia", + "name": "CZ.NIC Turris Omnia", + "proxy": "labgrid-bastian", + "target": "mvebu-cortexa9", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/snapshots/targets", + "version_name": "snapshot" + }, + { + "device": "cznic_turris-omnia", + "name": "CZ.NIC Turris Omnia", + "proxy": "labgrid-bastian", + "target": "mvebu-cortexa9", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets", + "version_name": "24.10" + }, + { + "device": "cznic_turris-omnia", + "name": "CZ.NIC Turris Omnia", + "proxy": "labgrid-bastian", + "target": "mvebu-cortexa9", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets", + "version_name": "23.05" + }, + { + "device": "tplink_tl-wdr4300-v1", + "name": "TP-Link TL-WDR4300 v1", + "proxy": "labgrid-bastian", + "target": "ath79-generic", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/snapshots/targets", + "version_name": "snapshot" + }, + { + "device": "tplink_tl-wdr4300-v1", + "name": "TP-Link TL-WDR4300 v1", + "proxy": "labgrid-bastian", + "target": "ath79-generic", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets", + "version_name": "24.10" + }, + { + "device": "tplink_tl-wdr4300-v1", + "name": "TP-Link TL-WDR4300 v1", + "proxy": "labgrid-bastian", + "target": "ath79-generic", + "firmware": "initramfs-kernel.bin", + "version_url": 
"https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets", + "version_name": "23.05" + }, + { + "device": "hpe_msm460", + "name": "Hewlett-Packard MSM460", + "proxy": "labgrid-blocktrron", + "target": "mpc85xx-p1020", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/snapshots/targets", + "version_name": "snapshot" + }, + { + "device": "hpe_msm460", + "name": "Hewlett-Packard MSM460", + "proxy": "labgrid-blocktrron", + "target": "mpc85xx-p1020", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets", + "version_name": "24.10" + }, + { + "device": "hpe_msm460", + "name": "Hewlett-Packard MSM460", + "proxy": "labgrid-blocktrron", + "target": "mpc85xx-p1020", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets", + "version_name": "23.05" + }, + { + "device": "tplink_tl-wr842n-v3", + "name": "TP-Link TL-WR842N v3", + "proxy": "labgrid-blocktrron", + "target": "ath79-generic", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/snapshots/targets", + "version_name": "snapshot" + }, + { + "device": "tplink_tl-wr842n-v3", + "name": "TP-Link TL-WR842N v3", + "proxy": "labgrid-blocktrron", + "target": "ath79-generic", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets", + "version_name": "24.10" + }, + { + "device": "tplink_tl-wr842n-v3", + "name": "TP-Link TL-WR842N v3", + "proxy": "labgrid-blocktrron", + "target": "ath79-generic", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets", + "version_name": "23.05" + }, + { + "device": "tplink_tl-wr1043nd-v3", + "name": "TP-Link TL-WR1043N/ND v3", + "proxy": "labgrid-leinelab", + "target": "ath79-generic", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/snapshots/targets", + "version_name": "snapshot" + }, + { + "device": "tplink_tl-wr1043nd-v3", + "name": "TP-Link TL-WR1043N/ND v3", + "proxy": "labgrid-leinelab", + "target": "ath79-generic", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets", + "version_name": "24.10" + }, + { + "device": "tplink_tl-wr1043nd-v3", + "name": "TP-Link TL-WR1043N/ND v3", + "proxy": "labgrid-leinelab", + "target": "ath79-generic", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets", + "version_name": "23.05" + }, + { + "device": "genexis_pulse-ex400", + "name": "Genexis Pulse EX400 / Inteno Pulse EX400", + "proxy": "labgrid-hsn", + "target": "ramips-mt7621", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/snapshots/targets", + "version_name": "snapshot" + }, + { + "device": "genexis_pulse-ex400", + "name": "Genexis Pulse EX400 / Inteno Pulse EX400", + "proxy": "labgrid-hsn", + "target": "ramips-mt7621", + "firmware": "initramfs-kernel.bin", + "version_url": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets", + "version_name": "24.10" + }, + { + "device": "genexis_pulse-ex400", + "name": "Genexis Pulse EX400 / Inteno Pulse EX400", + "proxy": "labgrid-hsn", + "target": "ramips-mt7621", + "firmware": "initramfs-kernel.bin", + "version_url": 
"https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets", + "version_name": "23.05" + }, + { + "device": "bananapi_bpi-r64", + "name": "Bananapi BPi-R64", + "proxy": "labgrid-hsn", + "target": "mediatek-mt7622", + "firmware": "initramfs-recovery.itb", + "version_url": "https://mirror-03.infra.openwrt.org/snapshots/targets", + "version_name": "snapshot" + }, + { + "device": "bananapi_bpi-r64", + "name": "Bananapi BPi-R64", + "proxy": "labgrid-hsn", + "target": "mediatek-mt7622", + "firmware": "initramfs-recovery.itb", + "version_url": "https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets", + "version_name": "24.10" + }, + { + "device": "bananapi_bpi-r64", + "name": "Bananapi BPi-R64", + "proxy": "labgrid-hsn", + "target": "mediatek-mt7622", + "firmware": "initramfs-recovery.itb", + "version_url": "https://mirror-03.infra.openwrt.org/releases/23.05-SNAPSHOT/targets", + "version_name": "23.05" + } +] diff --git a/.github/workflows/qemu.sh b/.github/workflows/qemu.sh new file mode 100644 index 000000000..d328ce427 --- /dev/null +++ b/.github/workflows/qemu.sh @@ -0,0 +1,64 @@ +VERSION_NAME="24.10" +target=x86-64 +UPSTREAM_URL="https://mirror-03.infra.openwrt.org/releases/24.10-SNAPSHOT/targets" +firmware=generic-squashfs-combined.img.gz + +if [ "$VERSION_NAME" = "snapshot" ]; then +# Snapshot logic +firmware_name="openwrt-$target-$firmware" +wget "$UPSTREAM_URL/${target/-/\/}/$firmware_name" \ + --output-document "$firmware_name" +FIRMWARE_VERSION=$(curl "$UPSTREAM_URL/${target/-/\/}/version.buildinfo") +else +# Stable release logic +profiles_url="$UPSTREAM_URL/${target/-/\/}/profiles.json" +profiles_json=$(curl -s "$profiles_url") + +# Find the appropriate image for QEMU +case "$firmware" in + *squashfs-combined*) + image_type="combined" + filesystem="squashfs" + ;; + *ext4-combined*) + image_type="combined" + filesystem="ext4" + ;; + *initramfs*) + image_type="kernel" + filesystem="" + ;; + *vmlinux*) + image_type="kernel" + filesystem="" + ;; + *) + image_type="combined" + filesystem="squashfs" + ;; +esac + +if [ -n "$filesystem" ]; then + firmware_name=$(echo "$profiles_json" | jq -r --arg type "$image_type" --arg fs "$filesystem" ' + .profiles.generic.images[] | select(.type == $type and .filesystem == $fs) | .name + ') +else + firmware_name=$(echo "$profiles_json" | jq -r --arg type "$image_type" ' + .profiles.generic.images[] | select(.type == $type) | .name + ') +fi + +if [ -z "$firmware_name" ] || [ "$firmware_name" = "null" ]; then + echo "Could not find firmware, falling back to constructed name" + image_prefix=$(echo "$profiles_json" | jq -r '.profiles.generic.image_prefix') + firmware_name="$image_prefix-$firmware" +fi + +echo "Using firmware: $firmware_name" +wget "$url_base/${target/-/\/}/$firmware_name" \ + --output-document "$firmware_name" +FIRMWARE_VERSION=$(echo "$profiles_json" | jq -r '.version_code') +fi + +echo "FIRMWARE_VERSION=$FIRMWARE_VERSION" +echo "FIRMWARE_FILE=$firmware_name" diff --git a/.python-version b/.python-version new file mode 100644 index 000000000..24ee5b1be --- /dev/null +++ b/.python-version @@ -0,0 +1 @@ +3.13 diff --git a/CLAUDE.md b/CLAUDE.md new file mode 100644 index 000000000..e6fcfc114 --- /dev/null +++ b/CLAUDE.md @@ -0,0 +1,136 @@ +# CLAUDE.md + +This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository. + +## Project Overview + +OpenWrt Testing is a pytest-based framework for automated testing of OpenWrt firmware on real hardware devices and QEMU emulators. 
+It uses labgrid for device control and supports 30+ physical devices across distributed labs.
+
+## Key Commands
+
+### Setup
+```bash
+uv sync # Install dependencies
+make tests/setup V=s # Verify installation (checks uv, qemu)
+```
+
+### Running Tests
+```bash
+# Via Makefile (from openwrt.git parent directory)
+make tests/x86-64 V=s # x86-64 QEMU
+make tests/malta-be V=s # MIPS big-endian QEMU
+make tests/armsr-armv8 V=s # ARM64 QEMU
+make tests/x86-64 K="test_shell" # Filter by test name
+
+# Direct pytest
+pytest tests/ --lg-env targets/qemu_x86-64.yaml --lg-log --log-cli-level=CONSOLE --lg-colored-steps --firmware /path/to/firmware.bin
+pytest tests/ -k "test_shell or test_ssh" # Filter tests
+
+# Remote device testing (requires lab access)
+LG_PLACE=<lab>-<device> LG_PROXY=<proxy> LG_IMAGE=<firmware> \
+  pytest tests/ --lg-env targets/<device>.yaml
+```
+
+### Linting
+```bash
+uv run ruff check . # Lint
+uv run ruff format . # Format
+uv run isort . # Sort imports
+```
+
+## Architecture
+
+### Core Components
+
+- **tests/** - pytest test suites using labgrid fixtures
+- **targets/** - Device configuration YAML files (30+ devices: QEMU, routers, SBCs)
+- **strategies/** - Custom labgrid strategies for device boot/provisioning
+- **kernelci/** - Self-hosted KernelCI infrastructure (see below)
+- **ansible/** - Deployment playbooks for lab infrastructure
+
+### Labgrid Strategies
+
+Three custom strategies handle different device boot flows:
+
+| Strategy | File | Use Case |
+|----------|------|----------|
+| `QEMUNetworkStrategy` | `strategies/qemunetworkstrategy.py` | QEMU VMs with SSH port forwarding |
+| `UBootTFTPStrategy` | `strategies/tftpstrategy.py` | Physical devices via U-Boot + TFTP |
+| `SDMuxStrategy` | `strategies/sdmuxstrategy.py` | SD card mux-based provisioning |
+
+Strategies implement `transition(state)` to move devices through states: `off` → `uboot` → `shell`.
+
+### Test Fixtures (tests/conftest.py)
+
+Tests use two primary fixtures:
+- `shell_command` - Serial console access via labgrid strategy
+- `ssh_command` - SSH access to device
+
+Both provide `run(cmd)` returning (stdout, stderr, exit_code) and `run_check(cmd)` that raises on non-zero exit. Example:
+```python
+def test_uname(shell_command):
+    assert "GNU/Linux" in shell_command.run("uname -a")[0][0]
+
+def test_echo(ssh_command):
+    [output] = ssh_command.run_check("echo 'hello'")
+    assert output == "hello"
+```
+
+Helper function `ubus_call(command, namespace, method, params)` wraps OpenWrt's ubus JSON-RPC.
+
+### KernelCI Infrastructure
+
+Two subprojects in `kernelci/` connect this test framework to KernelCI:
+
+**labgrid-adapter/** - Generic adapter connecting any labgrid lab to KernelCI:
+- Polls KernelCI API for test jobs (pull-mode, no inbound connections)
+- Executes tests via pytest with labgrid plugin
+- Parses KTAP output for kernel selftest subtest results
+- Submits results back to KernelCI
+- Key modules: `poller.py` (job polling), `executor.py` (test execution), `ktap_parser.py` (KTAP parsing)
+
+**openwrt-pipeline/** - OpenWrt-specific firmware pipeline:
+- Watches firmware sources (official releases, GitHub PRs, custom uploads)
+- Creates firmware entries in KernelCI API
+- Schedules tests based on device/firmware compatibility
+- Key modules: `firmware_trigger.py` (FastAPI service), `firmware_sources/` (source plugins), `test_scheduler.py`
+
+### Device Control Flow
+
+1. GitHub Actions or local Makefile triggers test
+2. Firmware downloaded/specified via `--firmware` option or `LG_IMAGE` env var
+3. labgrid target config loaded from `targets/*.yaml`
+4. Strategy transitions device: power on → boot → shell ready
+5. pytest executes tests with `shell_command`/`ssh_command` fixtures
+6. Results collected via `results_bag` fixture or uploaded to KernelCI
+
+### Environment Variables
+
+| Variable | Description |
+|----------|-------------|
+| `LG_ENV` | Path to target YAML config |
+| `LG_QEMU_BIN` | QEMU binary path |
+| `LG_IMAGE` | Firmware path/URL for remote device testing |
+| `LG_PLACE` | Remote device identifier (format: `<lab>-<device>`) |
+| `LG_PROXY` | Lab proxy host for remote access |
+| `LG_COORDINATOR` | Labgrid coordinator address (for remote labs) |
+
+## Dependencies
+
+- Python 3.13+
+- labgrid from custom fork: `github.com/aparcar/labgrid.git` (branch: aparcar/staging)
+- QEMU packages: qemu-system-mips, qemu-system-x86, qemu-system-aarch64
+
+## Code Style
+
+- Line length: 88 (black-compatible)
+- Linter: ruff (E, F, I, W rules)
+- Import sorting: isort with black profile
+- First-party modules: `openwrt_pipeline`, `labgrid_kci_adapter`
+
+## CI/CD Workflows (.github/workflows/)
+
+- **daily.yml** - Daily matrix testing across snapshot + stable releases on all devices
+- **pull_requests.yml** - PR validation on QEMU targets only
+- **healthcheck.yml** - Device health monitoring (24h checks)
+- **kernel-selftests.yml** - On-demand kernel selftest execution via issue comments
diff --git a/README-kernel-selftests.md b/README-kernel-selftests.md
new file mode 100644
index 000000000..b198e9fdc
--- /dev/null
+++ b/README-kernel-selftests.md
@@ -0,0 +1,327 @@
+# Kernel Selftests Workflow
+
+This repository includes a GitHub Actions workflow for running Linux kernel selftests on real OpenWrt hardware devices with custom firmware images. The workflow leverages pytest and labgrid for robust device testing.
+
+## 🚀 Quick Start
+
+1. **Create an Issue**: Open a new issue or use an existing one
+2. **Add a Comment** with your test request in this format:
+
+```
+/test-kernel-selftests
+device: bananapi_bpi-r64-kernel
+command: make -C net run_tests
+firmware: https://example.com/path/to/your-openwrt-firmware.bin
+```
+
+3. **Wait for Results**: The bot will automatically run your tests and post results as a comment
+
+## 📋 Parameters
+
+### Required Parameters
+
+| Parameter | Description | Example |
+| ---------- | ------------------------------------ | ------------------------------------------------------------------ |
+| `device` | Target hardware device | `bananapi_bpi-r64-kernel` |
+| `command` | Command to run in `/root/selftests/` | `make -C net run_tests` |
+| `firmware` | Direct URL to OpenWrt firmware | `https://github.com/user/repo/releases/download/v1.0/firmware.bin` |
+
+### Supported Devices
+
+Currently supported devices:
+
+- `bananapi_bpi-r64-kernel` - Banana Pi BPI-R64 with kernel selftest support
+
+## 🧪 Common Test Commands
+
+### Network Tests
+
+```bash
+# All network tests
+make -C net run_tests
+
+# Specific network subsystem
+make -C net/forwarding run_tests
+make -C net/mptcp run_tests
+```
+
+### BPF Tests
+
+```bash
+# All BPF tests (requires BPF support in kernel)
+make -C bpf run_tests
+
+# Specific BPF categories
+make -C bpf/prog_tests run_tests
+make -C bpf/verifier run_tests
+```
+
+### Memory Management
+
+```bash
+# Memory management tests
+make -C mm run_tests
+make -C vm run_tests
+```
+
+### Filesystems
+
+```bash
+# All filesystem tests
+make -C filesystems run_tests
+
+# Specific filesystem
+make -C filesystems/overlayfs run_tests
+make -C filesystems/binderfs run_tests
+```
+
+### CPU and Power Management
+
+```bash
+# CPU hotplug tests
+make -C cpu-hotplug run_tests
+
+# CPU frequency tests
+make -C cpufreq run_tests
+
+# Power management
+make -C powercap run_tests
+```
+
+### Security
+
+```bash
+# Security tests
+make -C seccomp run_tests
+make -C capabilities run_tests
+```
+
+### Run Everything
+
+```bash
+# Run all available tests (may take hours!)
+./run_kselftest.sh
+
+# Run with an overridden timeout per test
+./run_kselftest.sh -o 300
+```
+
+## 📦 Firmware Requirements
+
+Your OpenWrt firmware image should have:
+
+### Essential Requirements
+
+- **Kernel selftests support**: Enable `CONFIG_SAMPLES` and relevant test configs
+- **Root filesystem**: Writable root filesystem with sufficient space (~200MB)
+- **Shell access**: Working shell environment (ash/bash)
+- **Basic utilities**: tar, gzip, make, gcc (for tests that need compilation)
+
+### Recommended Kernel Configs
+
+```
+CONFIG_SAMPLES=y
+CONFIG_SAMPLE_SECCOMP=y
+CONFIG_NET_SCH_NETEM=y
+CONFIG_TUN=y
+CONFIG_NAMESPACES=y
+CONFIG_USER_NS=y
+CONFIG_NET_NS=y
+CONFIG_PID_NS=y
+```
+
+### File Format Support
+
+- `.bin` - Raw binary images
+- `.img` - Disk image files
+- `.gz` - Gzipped firmware images
+- `.xz` - XZ compressed images
+
+## 🔧 Validation Tools
+
+### Validate Before Testing
+
+Use our validation script to check your firmware before submitting:
+
+```bash
+# Quick validation (no download)
+python3 scripts/validate_firmware.py --quick \
+    https://example.com/firmware.bin \
+    bananapi_bpi-r64-kernel
+
+# Full validation with download and analysis
+python3 scripts/validate_firmware.py \
+    https://example.com/firmware.bin \
+    bananapi_bpi-r64-kernel \
+    --report validation-report.md
+```
+
+## 📊 Understanding Results
+
+### Result Format
+
+The bot posts structured results including:
+
+```markdown
+# 🧪 Kernel Selftests Results
+
+**Status:** ✅ COMPLETED
+**Device:** bananapi_bpi-r64-kernel
+**Command:** `make -C net run_tests`
+
+## Summary
+
+- **Total Tests:** 3
+- **Passed:** 3 ✅
+- **Failed:** 0 ❌
+- **Errors:** 0 💥
+
+## Test Details
+
+The kernel selftests were executed using pytest with labgrid.
Check the workflow logs for detailed output including the complete selftest results. +``` + +### Status Icons + +- ✅ **Passed** - Test completed successfully +- ❌ **Failed** - Test failed or had errors +- ⏭️ **Skipped** - Test was skipped (missing dependencies) +- ⏰ **Timeout** - Test exceeded time limit +- 💥 **Error** - Fatal error during test execution + +## 🏗️ Workflow Architecture + +### Process Flow + +1. **Comment Parsing**: Extract device, command, and firmware URL +2. **Validation**: Check parameters and device availability +3. **Device Reservation**: Lock hardware device exclusively +4. **Firmware Download**: Download and verify firmware image +5. **Pytest Execution**: Run `test_kernel_selftests.py` using labgrid +6. **Device Setup**: Boot device and download selftests via internet +7. **Test Execution**: Run specified command on device +8. **Result Collection**: Pytest captures results and generates reports +9. **Cleanup**: Power down device and release lock + +### Key Components + +- **GitHub Actions Workflow**: `.github/workflows/kernel-selftests.yml` +- **Pytest Test**: `tests/test_kernel_selftests.py` +- **Issue Template**: `.github/ISSUE_TEMPLATE/kernel-selftests.md` +- **Firmware Validator**: `scripts/validate_firmware.py` +- **Device Config**: `targets/bananapi_bpi-r64-kernel.yaml` + +## 🚨 Troubleshooting + +### Common Issues + +| Issue | Cause | Solution | +| ------------------------ | --------------------------- | -------------------------------------- | +| Invalid device | Unsupported device name | Use `bananapi_bpi-r64-kernel` | +| Firmware download failed | URL inaccessible or invalid | Validate URL with validation script | +| Boot timeout | Firmware incompatible | Check kernel configs and architecture | +| Test command failed | Missing dependencies | Ensure required kernel modules/configs | +| Device busy | Another test in progress | Wait for completion or check queue | + +### Getting Help + +1. **Check Workflow Logs**: Click the workflow run link in results +2. **Validate Firmware**: Use `validate_firmware.py` script +3. **Review Requirements**: Ensure kernel configs are correct +4. 
**Test Locally**: Try your test command on a local OpenWrt setup + +### Debug Tips + +```bash +# Test basic device connectivity first +command: echo "Device is working" + +# Check available test suites +command: find /root/selftests -name Makefile | head -10 + +# Run a simple test first +command: make -C filesystems/binderfs run_tests +``` + +## 🔒 Security & Limitations + +### Security Considerations + +- Firmware downloaded from user-provided URLs +- Tests run with root privileges on isolated hardware +- No persistent data stored after tests +- Device fully reset between test runs + +### Current Limitations + +- Single device type supported +- Maximum test runtime: 30 minutes per pytest test +- Results processed through pytest framework +- Device must have internet connectivity for downloading selftests + +## 🎯 Examples + +### Basic Network Testing + +``` +/test-kernel-selftests +device: bananapi_bpi-r64-kernel +command: make -C net run_tests +firmware: https://github.com/myuser/openwrt-builds/releases/download/v1.0/openwrt-r64-sysupgrade.bin +``` + +### BPF Development Testing + +``` +/test-kernel-selftests +device: bananapi_bpi-r64-kernel +command: make -C bpf/prog_tests run_tests +firmware: https://downloads.example.com/openwrt-r64-bpf-enabled.img.gz +``` + +### Custom Test Subset + +``` +/test-kernel-selftests +device: bananapi_bpi-r64-kernel +command: cd net && ./run_tests.sh -t bridge,vlan,tunnel +firmware: https://my-cdn.example.com/firmware/openwrt-r64-kernel-selftests.bin +``` + +## 🛠️ Development + +### Adding New Devices + +1. Create device configuration in `targets/new-device.yaml` +2. Update workflow validation in `kernel-selftests.yml` +3. Add device to validation scripts +4. Test with sample firmware +5. Update documentation + +### Extending Tests + +The pytest test in `tests/test_kernel_selftests.py` can be extended to: + +- Add pre-test validation steps +- Support different test frameworks +- Add post-test analysis +- Integrate with existing pytest fixtures + +### Contributing + +1. Fork the repository +2. Create a feature branch +3. Test your changes thoroughly +4. Submit a pull request with clear description +5. Update documentation as needed + +## 📚 Additional Resources + +- [Linux Kernel Selftests Documentation](https://www.kernel.org/doc/html/latest/dev-tools/kselftest.html) +- [OpenWrt Build System](https://openwrt.org/docs/guide-developer/toolchain/use-buildsystem) +- [Labgrid Documentation](https://labgrid.readthedocs.io/) +- [GitHub Actions Documentation](https://docs.github.com/en/actions) + +## 📄 License + +This workflow is part of the OpenWrt testing infrastructure and follows the same licensing terms as the main project. 
diff --git a/ansible/files/adapter/kci-adapter.env.example b/ansible/files/adapter/kci-adapter.env.example new file mode 100644 index 000000000..eca982267 --- /dev/null +++ b/ansible/files/adapter/kci-adapter.env.example @@ -0,0 +1,31 @@ +# Labgrid KCI Adapter Configuration + +# Lab name (used for job identification) +LAB_NAME=labgrid-aparcar + +# KernelCI API +KCI_API_URL=https://api.openwrt-kci.aparcar.org +KCI_API_TOKEN= + +# Labgrid coordinator (local) +LG_COORDINATOR=localhost:20408 + +# Paths +TARGETS_DIR=/etc/labgrid/targets +TESTS_DIR=/etc/labgrid/tests +FIRMWARE_CACHE=/var/cache/labgrid/firmware + +# Discovery settings +DEVICE_DISCOVERY_INTERVAL=300 +REQUIRE_TARGET_FILES=true + +# Health checks +HEALTH_CHECK_ENABLED=true +HEALTH_CHECK_INTERVAL=86400 + +# Polling +POLL_INTERVAL=30 +MAX_CONCURRENT_JOBS=3 + +# Logging +LOG_LEVEL=INFO diff --git a/ansible/files/adapter/labgrid-kci-adapter.service b/ansible/files/adapter/labgrid-kci-adapter.service new file mode 100644 index 000000000..051609a28 --- /dev/null +++ b/ansible/files/adapter/labgrid-kci-adapter.service @@ -0,0 +1,16 @@ +[Unit] +Description=Labgrid KernelCI Adapter +After=network.target labgrid-coordinator.service + +[Service] +User=labgrid-dev +Group=labgrid-dev +Environment="PYTHONUNBUFFERED=1" +EnvironmentFile=/etc/labgrid/kci-adapter.env +ExecStart=/usr/local/bin/labgrid-kci-adapter +Restart=on-failure +RestartSec=10 +WorkingDirectory=/etc/labgrid/ + +[Install] +WantedBy=multi-user.target diff --git a/ansible/files/exporter/labgrid-aparcar/exporter-old.yaml b/ansible/files/exporter/labgrid-aparcar/exporter-old.yaml new file mode 100644 index 000000000..0f77ca896 --- /dev/null +++ b/ansible/files/exporter/labgrid-aparcar/exporter-old.yaml @@ -0,0 +1,100 @@ + + + +labgrid-aparcar-bananapi_bpi-r4-lite: + NetworkService: + address: 192.168.1.1%vlan107 + username: root + PDUDaemonPort: + host: localhost:16421 + index: 7 + pdu: 192.168.128.2 + TFTPProvider: + external: bananapi_bpi-r4-lite/ + internal: /srv/tftp/bananapi_bpi-r4-lite/ + USBSerialPort: + match: + ID_PATH: platform-xhci-hcd.0-usb-0:1.3:1.0 + +labgrid-aparcar-enterasys_ws-ap3710i: + NetworkService: + address: 192.168.1.1%vlan112 + username: root + PDUDaemonPort: + host: localhost:16421 + index: 12 + pdu: 192.168.128.2 + TFTPProvider: + external: enterasys_ws-ap3710i/ + external_ip: 192.168.112.1 + internal: /srv/tftp/enterasys_ws-ap3710i/ + USBSerialPort: + match: + ID_PATH: platform-xhci-hcd.1-usb-0:2.1.2:1.0 + +labgrid-aparcar-genexis_pulse-ex400: + NetworkService: + address: 192.168.1.1%vlan101 + username: root + PDUDaemonPort: + host: localhost:16421 + index: 1 + pdu: 192.168.128.2 + TFTPProvider: + external: genexis_pulse-ex400/ + internal: /srv/tftp/genexis_pulse-ex400/ + USBSerialPort: + match: + ID_PATH: platform-xhci-hcd.1-usb-0:1.1.3:1.0 + +labgrid-aparcar-glinet_gl-mt6000: + NetworkService: + address: 192.168.1.1%vlan110 + username: root + PDUDaemonPort: + host: localhost:16421 + index: 10 + pdu: 192.168.128.2 + TFTPProvider: + external: glinet_gl-mt6000/ + external_ip: 192.168.110.1 + internal: /srv/tftp/glinet_gl-mt6000/ + USBSerialPort: + match: + ID_PATH: platform-xhci-hcd.1-usb-0:2.1.3:1.0 + + + + + +labgrid-aparcar-tplink_tl-wdr3600-v1: + NetworkService: + address: 192.168.1.1%vlan102 + username: root + PDUDaemonPort: + host: localhost:16421 + index: 2 + pdu: 192.168.128.2 + TFTPProvider: + external: tplink_tl-wdr3600-v1/ + external_ip: 192.168.102.1 + internal: /srv/tftp/tplink_tl-wdr3600-v1/ + USBSerialPort: + match: + ID_PATH: 
platform-xhci-hcd.1-usb-0:1.1.1:1.0 + +labgrid-aparcar-ignitenet_ss-w2-ac2600: + NetworkService: + address: 192.168.1.1%vlan111 + username: root + PDUDaemonPort: + host: localhost:16421 + index: 11 + pdu: 192.168.128.2 + TFTPProvider: + external: ignitenet_ss-w2-ac2600/ + external_ip: 192.168.111.1 + internal: /srv/tftp/ignitenet_ss-w2-ac2600/ + USBSerialPort: + match: + ID_PATH: platform-xhci-hcd.1-usb-0:2.1.4:1.0 diff --git a/ansible/generate_env_files.yml b/ansible/generate_env_files.yml new file mode 100644 index 000000000..96f24d321 --- /dev/null +++ b/ansible/generate_env_files.yml @@ -0,0 +1,27 @@ +--- +- name: Generate labgrid environment files from labnet.yaml + hosts: localhost + gather_facts: no + vars_files: + - ../labnet.yaml + + tasks: + - name: Ensure env directory exists + file: + path: ../env + state: directory + mode: '0755' + + - name: Generate env files for each lab-device combination + copy: + dest: "../env/{{ item.0.key }}-{{ item.1 }}.env" + mode: '0644' + content: | + export LG_IMAGE=/Users/user/src/openwrt/tests/firmware/openwrt-{{ devices[item.1].target }}-{{ item.1 }}-{{ devices[item.1].firmware }} + export LG_PLACE={{ item.0.key }}-{{ item.1 }} + export LG_PROXY={{ item.0.value.proxy }} + export LG_ENV=targets/{{ item.1 }}.yaml + with_subelements: + - "{{ labs | dict2items }}" + - value.devices + - skip_missing: yes diff --git a/ansible/playbook_coordinator.yml b/ansible/playbook_coordinator.yml new file mode 100644 index 000000000..54926a064 --- /dev/null +++ b/ansible/playbook_coordinator.yml @@ -0,0 +1,71 @@ +- hosts: coordinator + become: true + tasks: + - name: Create a user account called labgrid-dev + user: + name: labgrid-dev + state: present + + - name: Add SSH keys to labgrid-dev based on labnet.yaml + authorized_key: + user: labgrid-dev + state: present + key: "{{ item.value.sshkey }}" + comment: "{{ item.key }}" + loop: "{{ hostvars[inventory_hostname]['developers'] | dict2items | default([]) }}" + + # - name: Add labs to /etc/hosts + # lineinfile: + # path: /etc/hosts + # line: "{{ hostvars[item].wireguard_ip }} {{ item }}" + # state: present + # loop: "{{ groups['labs'] }}" + + - name: Create .ssh directory + file: + path: /home/labgrid-dev/.ssh + state: directory + mode: "0700" + + # - name: Add .ssh/config + # blockinfile: + # path: /home/labgrid-dev/.ssh/config + # block: | + # Host {{ item }} + # HostName {{ hostvars[item].wireguard_ip }} + # User labgrid-dev + # state: present + # marker: "# {mark} ANSIBLE MANAGED BLOCK {{ item }}" + # create: true + # loop: "{{ groups['labs'] }}" + + # - name: Add .ssh/config + # blockinfile: + # path: /home/labgrid-dev/.ssh/config + # block: | + # Host {{ item }} + # HostName {{ hostvars[item].wireguard_ip }} + # User labgrid-dev + # state: present + # marker: "# {mark} ANSIBLE MANAGED BLOCK {{ item }}" + # create: true + # loop: "{{ groups['labs'] }}" + + # - name: Add .ssh/config + # blockinfile: + # path: /home/{{ ansible_ssh_user }}/.ssh/config + # block: | + # Host {{ item }} + # HostName {{ hostvars[item].wireguard_ip }} + # User labgrid-dev + # state: present + # marker: "# {mark} ANSIBLE MANAGED BLOCK {{ item }}" + # create: true + # loop: "{{ groups['labs'] }}" + + - name: Install required packages + apt: + name: + - nmap + state: present + update_cache: yes diff --git a/backup.tar.gz b/backup.tar.gz new file mode 100644 index 000000000..a2aaa1bd7 Binary files /dev/null and b/backup.tar.gz differ diff --git a/bar.sh b/bar.sh new file mode 100644 index 000000000..af95c6cc2 --- /dev/null +++ b/bar.sh 
@@ -0,0 +1,59 @@ + # Get versions (reuse from previous step to avoid repeated API calls) +versions_json=$(curl -s https://downloads.openwrt.org/.versions.json) +stable_version=$(echo "$versions_json" | jq -r '.stable_version') +oldstable_version=$(echo "$versions_json" | jq -r '.oldstable_version') +stable_branch=$(echo "$stable_version" | cut -d. -f1,2) +oldstable_branch=$(echo "$oldstable_version" | cut -d. -f1,2) + +versions="[ +{\"type\": \"snapshot\", \"name\": \"snapshot\", \"version_url\": \"https://mirror-03.infra.openwrt.org/snapshots/targets\"}, +{\"type\": \"stable\", \"name\": \"$stable_branch\", \"version\": \"$stable_version\", \"version_url\": \"https://mirror-03.infra.openwrt.org/releases/$stable_branch-SNAPSHOT/targets\"}, +{\"type\": \"stable\", \"name\": \"$oldstable_branch\", \"version\": \"$oldstable_version\", \"version_url\": \"https://mirror-03.infra.openwrt.org/releases/$oldstable_branch-SNAPSHOT/targets\"} +]" + +device_matrix=$(yq -o=json ' +. as $root | +$root.labs as $labs | +$root.devices as $devices | +$labs +| to_entries +| map( + .key as $lab | + .value.devices + | map( + select($devices[.] != null) | + { + "device": ., + "name": $devices[.].name, + "proxy": $labs[$lab].proxy, + "target": $devices[.].target, + "firmware": $devices[.].firmware, + "maintainers": $labs[$lab].maintainers, + "snapshots_only": ($devices[.].snapshots_only // false) + } + ) + ) +| flatten +' labnet.yaml) + + +echo $device_matrix + +exit 0 + +# Combine devices with versions to create full matrix +matrix=$(echo "$device_matrix" | jq --argjson versions "$versions" ' +[.[] as $device | $versions[] as $version | $device + {"version_url": $version.version_url, "version_name": $version.name}] +') +echo "matrix=$(echo "$matrix" | jq -c '.')" >> $GITHUB_ENV + +# Create QEMU matrix +qemu_base='[ +{"target": "malta-be", "firmware": "vmlinux-initramfs.elf", "dependency": "qemu-system-mips"}, +{"target": "x86-64", "firmware": "generic-squashfs-combined.img.gz", "dependency": "qemu-system-x86"}, +{"target": "armsr-armv8", "firmware": "generic-initramfs-kernel.bin", "dependency": "qemu-system-aarch64"} +]' +qemu_matrix=$(echo "$qemu_base" | jq --argjson versions "$versions" ' +[.[] as $qemu | $versions[] as $version | $qemu + {"version_url": $version.version_url, "version_name": $version.name}] +') +echo "qemu_matrix=$(echo "$qemu_matrix" | jq -c '.')" >> $GITHUB_ENV diff --git a/configs/labs/aparcar/.env b/configs/labs/aparcar/.env new file mode 100644 index 000000000..b570b411e --- /dev/null +++ b/configs/labs/aparcar/.env @@ -0,0 +1,37 @@ +# ============================================================================= +# Aparcar Lab - Labgrid KCI Adapter Configuration +# ============================================================================= + +# Lab identification +LAB_NAME=aparcar + +# KernelCI API connection +KCI_API_URL=https://api.openwrt-kci.aparcar.org +KCI_API_TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJhcGFyY2FyIiwidHlwZSI6ImxhYiIsImlhdCI6MTc3MDE1ODE0NywiZXhwIjoxODAxNjk0MTQ3fQ.pBNLuPSrmYpmbpjAIId1fwQQQT3hhutIiIgZQx9UUhk + +# Labgrid coordinator (local to this host) +LG_COORDINATOR=localhost:20408 + +# Test repository (cloned before each job) +TESTS_REPO_URL=https://github.com/aparcar/openwrt-tests +TESTS_REPO_BRANCH=main + +# Paths (relative to cloned repo or absolute) +TARGETS_DIR=./targets +TESTS_DIR=./tests +FIRMWARE_CACHE=/home/labgrid-dev/labgrid-kci-adapter/cache + +# Device discovery +REQUIRE_TARGET_FILES=false +DEVICE_DISCOVERY_INTERVAL=300 + +# Job execution 
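+# MAX_CONCURRENT_JOBS caps how many jobs (and labgrid places) the adapter
+# runs in parallel; POLL_INTERVAL is the pause in seconds between pull-mode
+# polls of the KernelCI API for new work.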
+MAX_CONCURRENT_JOBS=3 +POLL_INTERVAL=30 + +# Health checks +HEALTH_CHECK_ENABLED=true +HEALTH_CHECK_INTERVAL=86400 + +# Logging +LOG_LEVEL=INFO diff --git a/configs/labs/aparcar/README.md b/configs/labs/aparcar/README.md new file mode 100644 index 000000000..99d200314 --- /dev/null +++ b/configs/labs/aparcar/README.md @@ -0,0 +1,118 @@ +# Aparcar Lab Deployment + +Configuration files for deploying the labgrid-kci-adapter on the aparcar lab host. + +## Prerequisites + +- labgrid-coordinator running on localhost:20408 +- Python 3.11+ with pipx +- Network access to api.openwrt-kci.aparcar.org + +## Generate API Token + +Generate a JWT token for this lab using the KCI_SECRET_KEY from the API server: + +```bash +# On the machine running the KernelCI API, get the secret key +# Then generate a token: +cd /path/to/openwrt-tests +python scripts/generate-lab-token.py aparcar "YOUR_KCI_SECRET_KEY" --expires-days 365 +``` + +Copy the generated token to the `.env` file. + +## Installation + +1. **Copy files to lab host:** + +```bash +scp .env labgrid-aparcar:/home/labgrid-dev/labgrid-kci-adapter/ +scp labgrid-kci-adapter.service labgrid-aparcar:/tmp/ +``` + +2. **Install the adapter:** + +```bash +ssh labgrid-aparcar + +# Install via pipx (as labgrid-dev user) +pipx install git+https://github.com/aparcar/openwrt-tests.git#subdirectory=kernelci/labgrid-adapter + +# Or install from local checkout +cd ~/labgrid-kci-adapter +pipx install . +``` + +3. **Install systemd service:** + +```bash +sudo cp /tmp/labgrid-kci-adapter.service /etc/systemd/system/ +sudo systemctl daemon-reload +sudo systemctl enable labgrid-kci-adapter +sudo systemctl start labgrid-kci-adapter +``` + +4. **Verify:** + +```bash +sudo systemctl status labgrid-kci-adapter +sudo journalctl -u labgrid-kci-adapter -f +``` + +## Configuration + +Edit `/home/labgrid-dev/labgrid-kci-adapter/.env` to configure: + +| Variable | Description | Default | +|----------|-------------|---------| +| `LAB_NAME` | Lab identifier | `aparcar` | +| `KCI_API_URL` | KernelCI API URL | `https://api.openwrt-kci.aparcar.org` | +| `KCI_API_TOKEN` | JWT auth token | (required) | +| `LG_COORDINATOR` | Labgrid coordinator | `localhost:20408` | +| `TESTS_REPO_URL` | Git repo with tests | `https://github.com/aparcar/openwrt-tests` | +| `REQUIRE_TARGET_FILES` | Validate targets | `false` | +| `MAX_CONCURRENT_JOBS` | Parallel jobs | `3` | + +## Device Discovery + +The adapter automatically discovers devices from the labgrid coordinator. 
Devices need the `device` tag set:
+
+```bash
+labgrid-client set-tags labgrid-aparcar-openwrt_one device=openwrt_one
+```
+
+Check discovered devices:
+
+```bash
+labgrid-client -v places
+```
+
+## Troubleshooting
+
+**Check logs:**
+```bash
+sudo journalctl -u labgrid-kci-adapter -f --no-pager
+```
+
+**Test discovery manually:**
+```bash
+cd ~/labgrid-kci-adapter
+./venv/bin/python -c "
+import asyncio
+from labgrid_kci_adapter.labgrid_client import LabgridClient
+
+async def test():
+    client = LabgridClient('localhost:20408')
+    places = await client.get_places()
+    for name, place in places.items():
+        print(f'{name}: device_type={place.device_type}')
+
+asyncio.run(test())
+"
+```
+
+**Test API connection:**
+```bash
+curl -H "Authorization: Bearer $KCI_API_TOKEN" \
+    https://api.openwrt-kci.aparcar.org/latest/nodes?limit=1
+```
diff --git a/configs/labs/aparcar/devices.yaml b/configs/labs/aparcar/devices.yaml
new file mode 100644
index 000000000..ef421a758
--- /dev/null
+++ b/configs/labs/aparcar/devices.yaml
@@ -0,0 +1,66 @@
+# Device inventory for aparcar lab
+# Lists all devices available in this lab with their capabilities
+#
+# This file is used by the KernelCI scheduler to determine which
+# devices can run which tests.
+
+lab_name: labgrid-aparcar
+
+devices:
+  # OpenWrt One - MediaTek MT7981B SoC
+  - name: labgrid-aparcar-openwrt_one
+    device_type: openwrt_one
+    target_file: openwrt_one.yaml
+    capabilities:
+      - wan_port
+      - wifi
+    test_types:
+      - firmware
+
+  # Banana Pi BPI-R4 Lite - MediaTek MT7988A SoC
+  - name: labgrid-aparcar-bananapi_bpi-r4-lite
+    device_type: bananapi_bpi-r4-lite
+    target_file: bananapi_bpi-r4-lite.yaml
+    capabilities:
+      - wan_port
+    test_types:
+      - firmware
+
+  # Banana Pi BPI-R4 - MediaTek MT7988A SoC
+  - name: labgrid-aparcar-bananapi_bpi-r4
+    device_type: bananapi_bpi-r4
+    target_file: bananapi_bpi-r4.yaml
+    capabilities:
+      - wan_port
+    test_types:
+      - firmware
+      - kselftest
+
+  # Raspberry Pi 4 - Broadcom BCM2711 SoC
+  - name: labgrid-aparcar-rpi-4
+    device_type: rpi-4
+    target_file: rpi-4.yaml
+    capabilities:
+      - wan_port
+    provisioning: sdmux
+    test_types:
+      - firmware
+
+  # Genexis Pulse EX400 - MediaTek MT7621 SoC
+  - name: labgrid-aparcar-genexis_pulse-ex400
+    device_type: genexis_pulse-ex400
+    target_file: genexis_pulse-ex400.yaml
+    capabilities:
+      - wan_port
+    test_types:
+      - firmware
+
+  # TP-Link TL-WDR3600 v1 - Qualcomm Atheros AR9344 SoC
+  - name: labgrid-aparcar-tplink_tl-wdr3600-v1
+    device_type: tplink_tl-wdr3600-v1
+    target_file: tplink_tl-wdr3600-v1.yaml
+    capabilities:
+      - wan_port
+      - wifi
+    test_types:
+      - firmware
diff --git a/configs/labs/aparcar/docker-compose.yaml b/configs/labs/aparcar/docker-compose.yaml
new file mode 100644
index 000000000..1e811e38f
--- /dev/null
+++ b/configs/labs/aparcar/docker-compose.yaml
@@ -0,0 +1,44 @@
+version: "3.8"
+
+services:
+  labgrid-kci-adapter:
+    image: ghcr.io/openwrt/labgrid-kci-adapter:latest
+    build:
+      context: ../../../kernelci/labgrid-adapter
+      dockerfile: Dockerfile
+    container_name: labgrid-kci-adapter-aparcar
+    restart: unless-stopped
+    env_file:
+      - .env
+    environment:
+      # Override paths for container
+      TARGETS_DIR: /app/targets
+      TESTS_DIR: /app/tests
+      FIRMWARE_CACHE: /app/cache
+    volumes:
+      # Mount labgrid target definitions (read-only)
+      - ../../../targets:/app/targets:ro
+      # Mount test suite (read-only)
+      - ../../../tests:/app/tests:ro
+      # Mount strategies for custom labgrid drivers (read-only)
+      - ../../../strategies:/app/strategies:ro
+      # Firmware cache (persistent volume)
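+      # Backed by the named volume declared at the bottom of this file, so
+      # downloaded images survive container restarts and rebuilds.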
+      - firmware-cache:/app/cache
+    # Host network required for:
+    # - Labgrid coordinator gRPC connection
+    # - Serial port access to devices
+    # - Network access to device VLANs
+    network_mode: host
+    # Privileged mode for USB serial port access
+    privileged: true
+    devices:
+      - /dev:/dev
+    logging:
+      driver: json-file
+      options:
+        max-size: "10m"
+        max-file: "3"
+
+volumes:
+  firmware-cache:
+    name: labgrid-aparcar-firmware-cache
diff --git a/configs/labs/aparcar/labgrid-kci-adapter.service b/configs/labs/aparcar/labgrid-kci-adapter.service
new file mode 100644
index 000000000..abb8f6d49
--- /dev/null
+++ b/configs/labs/aparcar/labgrid-kci-adapter.service
@@ -0,0 +1,28 @@
+[Unit]
+Description=Labgrid KernelCI Adapter
+Documentation=https://github.com/aparcar/openwrt-tests
+After=network.target labgrid-coordinator.service
+
+[Service]
+Type=simple
+User=labgrid-dev
+Group=labgrid-dev
+WorkingDirectory=/home/labgrid-dev/labgrid-kci-adapter
+
+# Load environment from .env file
+EnvironmentFile=/home/labgrid-dev/labgrid-kci-adapter/.env
+
+# Use pipx-installed adapter
+ExecStart=/opt/pipx/venvs/labgrid-kci-adapter/bin/labgrid-kci-adapter
+
+# Restart on failure
+Restart=on-failure
+RestartSec=10
+
+# Logging
+StandardOutput=journal
+StandardError=journal
+SyslogIdentifier=labgrid-kci-adapter
+
+[Install]
+WantedBy=multi-user.target
diff --git a/contrib/404.html b/contrib/404.html
new file mode 100644
index 000000000..f40914cac
--- /dev/null
+++ b/contrib/404.html
@@ -0,0 +1,124 @@
+[HTML page: "Page Not Found - OpenWrt Tests" error page with a "404" heading, the message "The page you're looking for doesn't exist or may have been moved. Try navigating to one of the available test result sections below.", and navigation links to the test result sections]
diff --git a/contrib/version-template.html b/contrib/version-template.html
new file mode 100644
index 000000000..f2736354b
--- /dev/null
+++ b/contrib/version-template.html
@@ -0,0 +1,284 @@
+[HTML page: "OpenWrt Test Results - {{VERSION_TITLE}}" template with an "OpenWrt Test Results" header, a "{{VERSION_TITLE}}" heading, and a "Loading test results..." placeholder for the per-version results]
+ + + + + + + + diff --git a/docs/daily-workflow.md b/docs/daily-workflow.md new file mode 100644 index 000000000..3ba150def --- /dev/null +++ b/docs/daily-workflow.md @@ -0,0 +1,149 @@ +# Daily Workflow for OpenWrt Testing + +## Overview + +The daily workflow (`.github/workflows/daily.yml`) runs comprehensive tests on both snapshot builds and stable release daily rebuilds. This workflow was evolved from the previous `snapshots.yml` to provide broader coverage across different OpenWrt versions. + +## Workflow Triggers + +The workflow runs: +- Daily at midnight UTC (`0 0 * * *`) +- On pushes to the `main` branch +- Manually via `workflow_dispatch` + +## Version Support + +The workflow automatically fetches and tests three types of OpenWrt builds: + +### 1. Snapshot Builds +- **URL**: `https://mirror-03.infra.openwrt.org/snapshots/targets` +- **Description**: Latest development builds +- **Version**: `SNAPSHOT` + +### 2. Stable Release Daily Rebuilds +- **URL**: `https://mirror-03.infra.openwrt.org/releases/{branch}-SNAPSHOT/targets` +- **Description**: Daily rebuilds of stable branches +- **Branches**: Automatically detected from `.versions.json`: + - Current stable (e.g., `24.10-SNAPSHOT`) + - Previous stable (e.g., `23.05-SNAPSHOT`) + +## Version Detection + +The workflow fetches version information from `https://downloads.openwrt.org/.versions.json`: + +```json +{ + "stable_version": "24.10.2", + "oldstable_version": "23.05.5" +} +``` + +Branch names are derived by removing the patch version: +- `24.10.2` → `24.10-SNAPSHOT` +- `23.05.5` → `23.05-SNAPSHOT` + +## Firmware Resolution + +### Snapshots +Uses the traditional approach with predictable filenames: +``` +openwrt-{target}-{device}-{firmware} +``` + +### Stable Releases +Uses dynamic firmware resolution: + +1. **Fetch profiles.json**: Downloads `targets/{target}/profiles.json` +2. **Extract real filename**: Searches the `images` array for the correct image type +3. **Fallback**: Uses constructed filename if dynamic resolution fails + +Example profiles.json structure: +```json +{ + "profiles": { + "generic": { + "image_prefix": "openwrt-24.10-snapshot-r28784-155eea44e7-x86-64-generic", + "images": [ + { + "name": "openwrt-24.10-snapshot-r28784-155eea44e7-x86-64-generic-squashfs-combined.img.gz", + "type": "combined", + "filesystem": "squashfs" + } + ] + } + } +} +``` + +## Test Matrix + +The workflow creates a comprehensive test matrix by combining: + +### Real Hardware Tests +- All devices from `labnet.yaml` +- Filtered to exclude devices with open healthcheck issues +- Cross-multiplied with all supported versions +- Results in format: `Device {device} ({version})` + +### QEMU Tests +- Three target architectures: + - `malta-be` (MIPS big-endian) + - `x86-64` (x86 64-bit) + - `armsr-armv8` (ARM 64-bit) +- Cross-multiplied with all supported versions +- Results in format: `QEMU {target} ({version})` + +## Artifact Organization + +Test results are organized by device and version: +- Real hardware: `results-{device}-{version}` +- QEMU: `results-qemu_{target}-{version}` + +## Dashboard Integration + +The results page combines: +- Device matrix from `labnet.yaml` +- QEMU target definitions +- Version information for each test run +- Links to detailed test results + +## Error Handling + +The workflow includes several error handling mechanisms: + +1. **URL Validation**: Tests accessibility of profiles.json before proceeding +2. **Firmware Fallback**: Falls back to constructed filenames if dynamic resolution fails +3. 
**Device Filtering**: Excludes devices with known health issues +4. **Graceful Degradation**: Continues testing other combinations if one fails + +## Example Test Run + +For a device `tplink_archer-c7-v2` with target `ath79-generic`, the workflow will: + +1. **Snapshot**: Test with latest development build +2. **Stable (24.10)**: Test with 24.10-SNAPSHOT daily rebuild +3. **Old Stable (23.05)**: Test with 23.05-SNAPSHOT daily rebuild + +Each test produces separate results and artifacts, allowing comparison across versions. + +## Migration Notes + +This workflow replaces the previous `snapshots.yml` workflow. Key changes: + +- **Filename**: `snapshots.yml` → `daily.yml` +- **Scope**: Snapshots only → Snapshots + stable releases +- **Matrix**: Single version → Multi-version testing +- **Artifacts**: Per-device → Per-device-per-version +- **Firmware Resolution**: Static → Dynamic for stable releases + +## Configuration + +The workflow uses these key environment variables: + +- `PYTHONUNBUFFERED="1"`: Ensure real-time log output +- `PYTEST_ADDOPTS="--color=yes"`: Colorized test output +- `LG_CONSOLE="internal"`: Use internal console handling +- `LG_FEATURE_APK="true"`: Enable APK package manager features +- `LG_FEATURE_ONLINE="true"`: Enable online features for QEMU tests + +No manual configuration is required - all versions and URLs are auto-detected. diff --git a/docs/kernel-selftests.md b/docs/kernel-selftests.md new file mode 100644 index 000000000..e3e4869c0 --- /dev/null +++ b/docs/kernel-selftests.md @@ -0,0 +1,220 @@ +# Kernel Selftests Workflow Documentation + +This document describes how to use the kernel selftests workflow to run Linux kernel selftests on OpenWrt devices using pytest and labgrid. + +## Overview + +The kernel selftests workflow allows you to: + +- Run Linux kernel selftests on real OpenWrt hardware using pytest +- Test custom OpenWrt firmware images +- Get automated results posted as GitHub issue comments +- Leverage the existing labgrid testing infrastructure + +## Supported Devices + +Currently supported devices: + +- `bananapi_bpi-r64-kernel` - Banana Pi BPI-R64 with kernel selftest support + +## How to Use + +### 1. Create an Issue + +Create a new issue using the "Kernel Selftests Request" template, or create a regular issue and add a comment with the test request. + +### 2. 
Format Your Test Request
+
+In an issue comment, use this exact format:
+
+```
+/test-kernel-selftests
+device: bananapi_bpi-r64-kernel
+command: make -C net run_tests
+firmware: https://example.com/path/to/your-firmware.bin
+```
+
+### Parameters
+
+#### device
+
+- **Required**: Target device identifier
+- **Supported values**: `bananapi_bpi-r64-kernel`
+- **Example**: `device: bananapi_bpi-r64-kernel`
+
+#### command
+
+- **Required**: Shell command to run in the `/root/selftests/` directory
+- **Examples**:
+  - `make -C net run_tests` - Run network tests
+  - `make -C bpf run_tests` - Run BPF tests
+  - `make -C mm run_tests` - Run memory management tests
+  - `make -C filesystems run_tests` - Run filesystem tests
+  - `make -C cpu-hotplug run_tests` - Run CPU hotplug tests
+  - `./run_kselftest.sh` - Run all available tests
+  - `./run_kselftest.sh -t net:ping` - Run specific test
+
+#### firmware
+
+- **Required**: Direct URL to your OpenWrt firmware image
+- **Format**: Must be a direct download link
+- **Supported extensions**: `.bin`, `.img`, `.gz`
+- **Examples**:
+  - GitHub releases: `https://github.com/user/repo/releases/download/v1.0/openwrt-image.bin`
+  - File hosting: `https://example.com/firmware/custom-build.img`
+
+## Common Test Commands
+
+### Network Tests
+
+```bash
+# All network tests
+make -C net run_tests
+
+# Specific network test
+make -C net/forwarding run_tests
+```
+
+### BPF Tests
+
+```bash
+# All BPF tests (requires BPF support in kernel)
+make -C bpf run_tests
+
+# Specific BPF test category
+make -C bpf/prog_tests run_tests
+```
+
+### Memory Management Tests
+
+```bash
+# Memory management tests
+make -C mm run_tests
+
+# Specific memory test
+make -C vm run_tests
+```
+
+### Filesystem Tests
+
+```bash
+# Filesystem tests
+make -C filesystems run_tests
+
+# Specific filesystem
+make -C filesystems/overlayfs run_tests
+```
+
+### CPU Tests
+
+```bash
+# CPU hotplug tests
+make -C cpu-hotplug run_tests
+
+# CPU frequency tests
+make -C cpufreq run_tests
+```
+
+### Run All Tests
+
+```bash
+# Run everything (may take a very long time!)
+./run_kselftest.sh
+
+# Run with specific timeout
+./run_kselftest.sh -o 300 # 5 minute timeout per test
+```
+
+## Workflow Process
+
+1. **Comment Parsing**: The workflow parses your comment and validates parameters
+2. **Device Reservation**: Reserves the specified hardware device
+3. **Firmware Download**: Downloads your custom firmware image
+4. **Device Boot**: Boots the device with your firmware
+5. **Selftests Transfer**: Downloads and transfers kernel selftests to the device
+6. **Test Execution**: Runs your specified command
+7. **Results Collection**: Collects and formats test output
+8. **Comment Results**: Posts formatted results as a comment
+9. **Cleanup**: Powers off device and releases reservation
+
+## Results Format
+
+The bot will comment with:
+
+- Test summary (total/passed/failed counts)
+- Detailed test output in a collapsible section
+- Link to workflow logs for debugging
+- Firmware and command information
+
+Example result:
+
+```markdown
+# 🧪 Kernel Selftests Results
+
+**Device:** bananapi_bpi-r64-kernel
+**Command:** `make -C net run_tests`
+**Firmware:** https://example.com/firmware.bin
+
+## Summary
+
+- **Total Tests:** 45
+- **Passed:** 43 ✅
+- **Failed:** 2 ❌
+
+## Detailed Output
+
+<details>
+<summary>Click to expand full test output</summary>
+
+[Test output here...]
+
+</details>
+``` + +## Troubleshooting + +### Common Issues + +1. **Invalid device**: Only `bananapi_bpi-r64-kernel` is currently supported +2. **Invalid firmware URL**: Must be a direct download link +3. **Device busy**: Device may be reserved by another test +4. **Test timeout**: Long-running tests may timeout (default: 30 minutes) +5. **Firmware boot failure**: Custom firmware may not boot properly + +### Getting Help + +- Check workflow logs for detailed error information +- Ensure your firmware is compatible with the target device +- Verify your firmware URL is accessible and downloads correctly +- Consider breaking large test suites into smaller commands + +## Firmware Requirements + +Your OpenWrt firmware should have: + +- Kernel selftests support enabled +- Sufficient storage space for selftests (~100MB) +- Network connectivity (if tests require it) +- Required kernel modules for your tests + +## Security Notes + +- Firmware images are downloaded from user-provided URLs +- Tests run in an isolated hardware environment +- No persistent data is stored after tests complete +- Device is fully reset between test runs + +## Contributing + +To add support for additional devices: + +1. Add device configuration to `targets/` directory +2. Update workflow device validation +3. Test with the new device configuration +4. Update this documentation + +## Limitations + +- Single device type currently supported +- Tests run with root privileges only +- No custom environment variables supported +- Maximum test runtime: 30 minutes +- Results limited to stdout/stderr capture diff --git a/docs/kernelci-labgrid-integration.md b/docs/kernelci-labgrid-integration.md new file mode 100644 index 000000000..ac20f081e --- /dev/null +++ b/docs/kernelci-labgrid-integration.md @@ -0,0 +1,1418 @@ +# Self-Hosted KernelCI for OpenWrt Testing + +## Executive Summary + +This document outlines how to deploy a **self-hosted KernelCI instance** for OpenWrt firmware testing. The system will provide: + +- **Test result visualization** via the KernelCI dashboard +- **Job scheduling** with labgrid runtime support +- **Multi-source firmware management** (official releases, PRs, custom builds) +- **Device health checks** with automated monitoring +- **Decentralized lab federation** using pull-mode architecture + +## Table of Contents + +1. [Architecture Overview](#architecture-overview) +2. [Self-Hosted KernelCI Components](#self-hosted-kernelci-components) +3. [OpenWrt-Specific Adaptations](#openwrt-specific-adaptations) +4. [Firmware Source Management](#firmware-source-management) +5. [Health Check System](#health-check-system) +6. [Deployment Guide](#deployment-guide) +7. 
[Implementation Plan](#implementation-plan) + +--- + +## Architecture Overview + +### High-Level System Design + +``` +┌─────────────────────────────────────────────────────────────────────────────────────┐ +│ Self-Hosted OpenWrt KernelCI Instance │ +├─────────────────────────────────────────────────────────────────────────────────────┤ +│ │ +│ ┌─────────────────────────────────────────────────────────────────────────────┐ │ +│ │ Firmware Sources │ │ +│ │ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │ │ +│ │ │ Official │ │ GitHub │ │ Custom │ │ Buildbot │ │ │ +│ │ │ Releases │ │ PR Builds │ │ Builds │ │ Integration │ │ │ +│ │ │ (snapshot, │ │ (CI arti- │ │ (developer │ │ (upstream │ │ │ +│ │ │ stable) │ │ facts) │ │ uploads) │ │ builds) │ │ │ +│ │ └──────┬───────┘ └──────┬───────┘ └──────┬───────┘ └──────┬───────┘ │ │ +│ └─────────┼─────────────────┼─────────────────┼─────────────────┼────────────┘ │ +│ └─────────────────┴────────┬────────┴─────────────────┘ │ +│ ▼ │ +│ ┌─────────────────────────────────────────────────────────────────────────────┐ │ +│ │ KernelCI API (Maestro) - Self-Hosted │ │ +│ │ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────────┐ │ │ +│ │ │ FastAPI │ │ MongoDB │ │ Redis │ │ Artifact │ │ │ +│ │ │ REST API │ │ Database │ │ Pub/Sub │ │ Storage (S3) │ │ │ +│ │ └─────────────┘ └─────────────┘ └─────────────┘ └─────────────────┘ │ │ +│ └──────────────────────────────────┬──────────────────────────────────────────┘ │ +│ │ │ +│ Events (new firmware, test triggers) │ +│ │ │ +│ ┌──────────────────────────────────┼──────────────────────────────────────────┐ │ +│ │ KernelCI Pipeline - OpenWrt Scheduler │ │ +│ │ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────────┐ │ │ +│ │ │ Firmware │ │ Test │ │ Health │ │ Results │ │ │ +│ │ │ Trigger │ │ Scheduler │ │ Check │ │ Collector │ │ │ +│ │ │ (watches │ │ (assigns │ │ Scheduler │ │ (aggregates) │ │ │ +│ │ │ sources) │ │ to labs) │ │ (periodic) │ │ │ │ │ +│ │ └─────────────┘ └─────────────┘ └─────────────┘ └─────────────────┘ │ │ +│ └──────────────────────────────────┬──────────────────────────────────────────┘ │ +│ │ │ +│ Pull-mode job distribution │ +│ │ │ +│ ┌──────────────────────────────────┼──────────────────────────────────────────┐ │ +│ │ Labgrid Adapter (per lab) │ │ +│ │ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────────┐ │ │ +│ │ │ Job Poller │ │ pytest │ │ Firmware │ │ Health Check │ │ │ +│ │ │ (pull-mode) │ │ Executor │ │ Flasher │ │ Runner │ │ │ +│ │ └─────────────┘ └─────────────┘ └─────────────┘ └─────────────────┘ │ │ +│ └──────────────────────────────────┬──────────────────────────────────────────┘ │ +│ │ │ +│ ┌────────────────────────┼────────────────────────┐ │ +│ ▼ ▼ ▼ │ +│ ┌──────────────────┐ ┌──────────────────┐ ┌──────────────────┐ │ +│ │ Lab: aparcar │ │ Lab: leinelab │ │ Lab: community │ │ +│ │ (coordinator) │ │ (exporter) │ │ (pull-mode) │ │ +│ │ ┌────────────┐ │ │ ┌────────────┐ │ │ ┌────────────┐ │ │ +│ │ │ OpenWrt One│ │ │ │ BPi R4 │ │ │ │ Custom HW │ │ │ +│ │ │ Linksys │ │ │ │ RPi 4 │ │ │ │ QEMU │ │ │ +│ │ │ TP-Link │ │ │ │ GL.iNet │ │ │ │ │ │ │ +│ │ └────────────┘ │ │ └────────────┘ │ │ └────────────┘ │ │ +│ └──────────────────┘ └──────────────────┘ └──────────────────┘ │ +│ │ +│ ┌─────────────────────────────────────────────────────────────────────────────┐ │ +│ │ Dashboard (Web UI) │ │ +│ │ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────────────────┐ │ │ +│ │ │ Device Fleet │ │ Test Results │ │ Firmware Comparison │ │ │ +│ │ 
│ Status │ │ Matrix │ │ (version × device × test) │ │ │ +│ │ └─────────────────┘ └─────────────────┘ └─────────────────────────────┘ │ │ +│ │ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────────────────┐ │ │ +│ │ │ Health Check │ │ Regression │ │ PR Status │ │ │ +│ │ │ Dashboard │ │ Tracking │ │ (test before merge) │ │ │ +│ │ └─────────────────┘ └─────────────────┘ └─────────────────────────────┘ │ │ +│ └─────────────────────────────────────────────────────────────────────────────┘ │ +│ │ +└─────────────────────────────────────────────────────────────────────────────────────┘ +``` + +### Key Design Principles + +1. **Self-Hosted**: Complete control over infrastructure, no external dependencies +2. **Firmware-Centric**: Designed around OpenWrt images, not kernel builds +3. **Multi-Source**: Accept firmware from official, PR, custom, and buildbot sources +4. **Health-First**: Device health monitoring as a core feature +5. **Decentralized**: Labs operate independently, pull jobs when ready + +--- + +## Self-Hosted KernelCI Components + +### Component Stack + +| Component | Technology | Purpose | Port | +|-----------|------------|---------|------| +| **API Server** | kernelci-api (FastAPI) | REST API, job management | 8001 | +| **Database** | MongoDB | Store nodes, jobs, results | 27017 | +| **Message Queue** | Redis | Pub/Sub for events | 6379 | +| **Pipeline** | kernelci-pipeline | Job scheduling, triggers | - | +| **Dashboard** | kernelci-dashboard (React) | Web visualization | 3000 | +| **Storage** | MinIO (S3-compatible) | Artifacts, firmware, logs | 9000 | +| **Reverse Proxy** | Traefik/nginx | TLS termination, routing | 443 | + +### Modified Components for OpenWrt + +KernelCI is designed for Linux kernel testing. For OpenWrt, we need these adaptations: + +| KernelCI Concept | OpenWrt Adaptation | +|------------------|-------------------| +| Kernel build | Firmware image (sysupgrade, factory, initramfs) | +| Kernel tree | OpenWrt repository + target/subtarget | +| Defconfig | Device profile | +| Boot test | Flash + boot + shell access | +| kunit/kselftest | pytest test suite | + +--- + +## OpenWrt-Specific Adaptations + +### Node Schema for OpenWrt Firmware + +```python +# openwrt_schema.py +from pydantic import BaseModel +from typing import Optional, List +from datetime import datetime + +class OpenWrtFirmware(BaseModel): + """Schema for OpenWrt firmware in KernelCI.""" + + # Core identification + id: str # "openwrt:snapshot:ath79-generic:tplink_archer-c7-v2:20250124" + origin: str = "openwrt" + + # Firmware source + source: str # "official", "pr", "custom", "buildbot" + source_url: Optional[str] # Download URL or PR reference + + # OpenWrt-specific fields (replaces kernel fields) + version: str # "SNAPSHOT", "24.10.0", "23.05.5" + target: str # "ath79" + subtarget: str # "generic" + profile: str # "tplink_archer-c7-v2" + + # Git information + git_repository_url: str # "https://github.com/openwrt/openwrt" + git_commit_hash: str # commit SHA + git_branch: Optional[str] # "main", "openwrt-24.10" + + # Build artifacts + artifacts: dict # URLs to firmware files + # Example: + # { + # "sysupgrade": "https://.../openwrt-ath79-generic-tplink_archer-c7-v2-squashfs-sysupgrade.bin", + # "factory": "https://.../openwrt-ath79-generic-tplink_archer-c7-v2-squashfs-factory.bin", + # "initramfs": "https://.../openwrt-ath79-generic-tplink_archer-c7-v2-initramfs-kernel.bin", + # "manifest": "https://.../openwrt-ath79-generic-tplink_archer-c7-v2.manifest" + # } + + # Metadata + build_time: 
datetime + file_size: int + sha256: str + + # Features (from profiles.json) + features: List[str] # ["wifi", "usb", "poe"] + packages: List[str] # installed packages + + +class OpenWrtTestJob(BaseModel): + """Schema for OpenWrt test jobs.""" + + id: str + firmware_id: str # Reference to OpenWrtFirmware + device_type: str # Labgrid target name + test_plan: List[str] # ["test_shell", "test_ssh", "test_wifi_wpa3"] + required_features: List[str] # Device must have these features + priority: int = 5 # 1-10, higher = more urgent + timeout: int = 1800 # seconds + + # Job state + status: str # "pending", "running", "complete", "failed" + assigned_lab: Optional[str] + started_at: Optional[datetime] + completed_at: Optional[datetime] + + +class OpenWrtTestResult(BaseModel): + """Schema for OpenWrt test results.""" + + id: str + job_id: str + firmware_id: str + device_type: str + lab_name: str + + # Test execution + test_name: str + status: str # "pass", "fail", "skip", "error" + duration: float # seconds + start_time: datetime + + # Output + log_url: Optional[str] + console_log_url: Optional[str] + error_message: Optional[str] + + # Environment + environment: dict + # { + # "device_serial": "...", + # "firmware_version": "...", + # "kernel_version": "...", + # "lab_name": "...", + # } +``` + +### Pipeline Configuration for OpenWrt + +```yaml +# config/pipeline-openwrt.yaml +api: + openwrt-local: + url: http://localhost:8001 + token: ${OPENWRT_KCI_TOKEN} + +storage: + openwrt-minio: + storage_type: s3 + base_url: http://minio:9000 + bucket: openwrt-artifacts + access_key: ${MINIO_ACCESS_KEY} + secret_key: ${MINIO_SECRET_KEY} + +# Firmware source watchers +triggers: + # Watch official OpenWrt releases + openwrt-official: + type: firmware_watcher + sources: + - name: snapshot + url: https://downloads.openwrt.org/snapshots/targets/ + interval: 3600 # Check hourly + pattern: "*/*/profiles.json" + + - name: stable + url: https://downloads.openwrt.org/releases/24.10.0/targets/ + interval: 86400 # Check daily + + - name: oldstable + url: https://downloads.openwrt.org/releases/23.05.5/targets/ + interval: 86400 + + # Watch GitHub PR artifacts + openwrt-github-pr: + type: github_artifacts + repository: openwrt/openwrt + workflow: "build.yml" + artifact_pattern: "openwrt-*" + on_labels: ["ci-test-requested"] + + # Accept custom uploads via API + openwrt-custom: + type: api_upload + endpoint: /api/v1/firmware/upload + validation: + required_fields: ["target", "subtarget", "profile"] + max_size: 100MB + +# Runtime definitions +runtimes: + labgrid: + type: labgrid + adapter: labgrid-kci-adapter + # Labs pull jobs, we don't push + +# Test plan definitions +test_plans: + openwrt-base: + description: "Basic boot and connectivity tests" + tests: + - test_shell + - test_ssh + - test_firmware_version + - test_ubus_system_board + - test_free_memory + - test_kernel_errors + timeout: 600 + required_features: [] + + openwrt-system: + description: "System health validation" + tests: + - test_memory_usage + - test_filesystem_usage + - test_system_uptime + - test_process_count + - test_entropy_available + timeout: 300 + required_features: [] + + openwrt-network: + description: "Network functionality tests" + tests: + - test_lan_interface_address + - test_wan_wait_for_network + - test_https_download + timeout: 600 + required_features: ["wan_port", "online"] + + openwrt-wifi: + description: "WiFi functionality tests" + tests: + - test_wifi_scan + - test_wifi_wpa2 + - test_wifi_wpa3 + timeout: 900 + required_features: ["wifi"] + 
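+  # Plans with required_features only run on devices that advertise all of
+  # the listed features; the scheduler's device_test_mapping below wires
+  # features to plans.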
+
+  openwrt-package:
+    description: "Package manager tests"
+    tests:
+      - test_opkg_procd_installed
+      - test_opkg_install_ucert
+    timeout: 300
+    required_features: ["opkg", "online"]
+
+# Scheduler configuration
+scheduler:
+  # Map device types to test plans
+  device_test_mapping:
+    # All devices get base tests
+    default:
+      - openwrt-base
+      - openwrt-system
+
+    # Devices with specific features get additional tests
+    feature_wifi:
+      - openwrt-wifi
+
+    feature_wan_port:
+      - openwrt-network
+
+    feature_opkg:
+      - openwrt-package
+
+  # Priority rules
+  priorities:
+    pr_builds: 10    # Highest - developers waiting
+    snapshot: 5      # Medium - daily testing
+    stable: 3        # Lower - release validation
+    custom: 7        # Developer uploads
+
+  # Health check scheduling
+  health_checks:
+    enabled: true
+    interval: 86400  # Daily
+    tests: ["test_shell", "test_ssh"]
+    on_failure:
+      notify: ["email", "github_issue"]
+      disable_device: true
+```
+
+---
+
+## Firmware Source Management
+
+### Source Types
+
+#### 1. Official OpenWrt Releases
+
+```python
+# firmware_sources/official.py
+import logging
+
+import httpx
+
+logger = logging.getLogger(__name__)
+
+
+class OfficialFirmwareSource:
+    """Watch official OpenWrt download server for new firmware."""
+
+    BASE_URLS = {
+        "snapshot": "https://downloads.openwrt.org/snapshots/targets",
+        "stable": "https://downloads.openwrt.org/releases/24.10.0/targets",
+        "oldstable": "https://downloads.openwrt.org/releases/23.05.5/targets",
+    }
+
+    async def scan_for_firmware(self, version: str) -> list[dict]:
+        """Scan for available firmware images."""
+        base_url = self.BASE_URLS[version]
+        firmware_list = []
+
+        # Get list of targets
+        async with httpx.AsyncClient() as client:
+            # Parse target directories (helper omitted here: it walks the
+            # HTML directory index and yields (target, subtarget) pairs)
+            targets = await self._list_targets(client, base_url)
+
+            for target, subtarget in targets:
+                profiles_url = f"{base_url}/{target}/{subtarget}/profiles.json"
+                try:
+                    resp = await client.get(profiles_url)
+                    resp.raise_for_status()
+                    profiles = resp.json()
+
+                    for profile_name, profile_data in profiles["profiles"].items():
+                        firmware_list.append({
+                            "source": "official",
+                            "version": version,
+                            "target": target,
+                            "subtarget": subtarget,
+                            "profile": profile_name,
+                            "artifacts": self._build_artifact_urls(
+                                base_url, target, subtarget, profile_data
+                            ),
+                            "features": profile_data.get("device_packages", []),
+                        })
+                except Exception as e:
+                    logger.warning(f"Failed to fetch profiles for {target}/{subtarget}: {e}")
+
+        return firmware_list
+
+    def _build_artifact_urls(self, base_url, target, subtarget, profile) -> dict:
+        """Build URLs for firmware artifacts."""
+        images = profile.get("images", [])
+        artifacts = {}
+
+        for image in images:
+            image_type = image.get("type", "unknown")
+            filename = image.get("name")
+            if filename:
+                artifacts[image_type] = f"{base_url}/{target}/{subtarget}/{filename}"
+                artifacts[f"{image_type}_sha256"] = image.get("sha256")
+
+        return artifacts
+```
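+
+Wiring this watcher into the trigger service can be a small polling loop.
+A minimal sketch, assuming an `api` client with the `submit_node()` helper
+used elsewhere in this document (deduplication of already-seen images is
+left to the API side):
+
+```python
+# firmware_sources/run_official.py -- illustrative polling loop only
+import asyncio
+
+from firmware_sources.official import OfficialFirmwareSource
+
+
+async def watch_official(api, version: str = "snapshot", interval: int = 3600):
+    """Scan one release channel periodically and submit firmware nodes."""
+    source = OfficialFirmwareSource()
+    while True:
+        for firmware in await source.scan_for_firmware(version):
+            # Every discovered image becomes a firmware node in the API.
+            await api.submit_node(firmware)
+        await asyncio.sleep(interval)
+```
+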
#### 2. GitHub PR Builds
+
+```python
+# firmware_sources/github_pr.py
+from github import Github
+
+
+class GitHubPRFirmwareSource:
+    """Fetch firmware from GitHub Actions artifacts for PRs."""
+
+    def __init__(self, token: str, repo: str = "openwrt/openwrt"):
+        self.gh = Github(token)
+        self.repo = self.gh.get_repo(repo)
+
+    async def get_pr_firmware(self, pr_number: int) -> list[dict]:
+        """Get firmware artifacts from a PR's CI run."""
+        pr = self.repo.get_pull(pr_number)
+
+        # Find the latest successful workflow run
+        runs = self.repo.get_workflow_runs(
+            branch=pr.head.ref,
+            status="success"
+        )
+
+        if runs.totalCount == 0:
+            return []
+
+        latest_run = runs[0]
+        artifacts = latest_run.get_artifacts()
+
+        firmware_list = []
+        for artifact in artifacts:
+            if artifact.name.startswith("openwrt-"):
+                # Parse target info from artifact name
+                # e.g., "openwrt-ath79-generic"
+                parts = artifact.name.split("-")
+                if len(parts) >= 3:
+                    firmware_list.append({
+                        "source": "pr",
+                        "source_ref": f"PR #{pr_number}",
+                        "version": f"pr-{pr_number}",
+                        "target": parts[1],
+                        "subtarget": parts[2],
+                        "git_commit_hash": pr.head.sha,
+                        "git_branch": pr.head.ref,
+                        "artifact_id": artifact.id,
+                        "artifact_url": artifact.archive_download_url,
+                    })
+
+        return firmware_list
+
+    async def download_and_extract(self, artifact_id: int, dest_path: str):
+        """Download and extract PR artifact to storage."""
+        artifact = self.repo.get_artifact(artifact_id)
+        # Download zip and extract firmware files
+        # Upload to MinIO storage
+        pass
+```
+
+#### 3. Custom/Developer Uploads
+
+```python
+# firmware_sources/custom_upload.py
+import hashlib
+from typing import Optional
+
+from fastapi import APIRouter, UploadFile, File, Form
+from pydantic import BaseModel
+
+router = APIRouter()
+
+
+class FirmwareUploadResponse(BaseModel):
+    firmware_id: str
+    status: str
+    message: str
+
+
+@router.post("/api/v1/firmware/upload")
+async def upload_custom_firmware(
+    firmware_file: UploadFile = File(...),
+    target: str = Form(...),
+    subtarget: str = Form(...),
+    profile: str = Form(...),
+    version: str = Form(default="custom"),
+    git_commit: Optional[str] = Form(default=None),
+    description: Optional[str] = Form(default=None),
+) -> FirmwareUploadResponse:
+    """
+    Upload custom firmware for testing.
+
+    Allows developers to upload their own builds for testing
+    on the shared infrastructure.
+    """
+    # Validate file
+    content = await firmware_file.read()
+    sha256 = hashlib.sha256(content).hexdigest()
+
+    # `storage`, `api` and `scheduler` are assumed to be module-level
+    # clients initialized at service startup.
+    # Store in MinIO
+    storage_path = f"custom/{target}/{subtarget}/{profile}/{sha256[:8]}/{firmware_file.filename}"
+    await storage.upload(storage_path, content)
+
+    # Create firmware node
+    firmware_id = f"openwrt:custom:{target}-{subtarget}:{profile}:{sha256[:12]}"
+
+    firmware_node = {
+        "id": firmware_id,
+        "source": "custom",
+        "version": version,
+        "target": target,
+        "subtarget": subtarget,
+        "profile": profile,
+        "artifacts": {
+            "sysupgrade": storage.get_url(storage_path),
+        },
+        "sha256": sha256,
+        "description": description,
+        "git_commit_hash": git_commit,
+    }
+
+    # Submit to API
+    await api.submit_node(firmware_node)
+
+    # Trigger test jobs
+    await scheduler.create_jobs_for_firmware(firmware_id)
+
+    return FirmwareUploadResponse(
+        firmware_id=firmware_id,
+        status="accepted",
+        message="Firmware uploaded. Test jobs queued.",
+    )
+```
+
+#### 4. Buildbot Integration
+
+```python
+# firmware_sources/buildbot.py
+class BuildbotFirmwareSource:
+    """
+    Integration with OpenWrt Buildbot.
+ + Listens for build completion webhooks from Buildbot + and imports firmware for testing. + """ + + def __init__(self, buildbot_url: str, api_key: str): + self.buildbot_url = buildbot_url + self.api_key = api_key + + async def handle_build_complete(self, build_data: dict): + """Handle Buildbot build completion webhook.""" + if build_data["results"] != "success": + return + + builder = build_data["builder"] + # Parse target from builder name, e.g., "target/ath79/generic" + target, subtarget = self._parse_builder_name(builder) + + # Get artifact URLs from Buildbot + artifacts = await self._fetch_build_artifacts(build_data["buildid"]) + + # Create firmware nodes for each profile + for profile, artifact_url in artifacts.items(): + firmware_node = { + "source": "buildbot", + "version": "buildbot", + "target": target, + "subtarget": subtarget, + "profile": profile, + "artifacts": {"sysupgrade": artifact_url}, + "git_commit_hash": build_data.get("revision"), + } + await api.submit_node(firmware_node) +``` + +--- + +## Health Check System + +### Health Check Architecture + +``` +┌─────────────────────────────────────────────────────────────────────────────┐ +│ Health Check System │ +├─────────────────────────────────────────────────────────────────────────────┤ +│ │ +│ ┌─────────────────────────────────────────────────────────────────────┐ │ +│ │ Health Check Scheduler │ │ +│ │ │ │ +│ │ ┌─────────────┐ ┌─────────────┐ ┌─────────────────────────┐ │ │ +│ │ │ Periodic │ │ On-Demand │ │ Post-Failure │ │ │ +│ │ │ (daily) │ │ (API call) │ │ (verify fix) │ │ │ +│ │ └──────┬──────┘ └──────┬──────┘ └───────────┬─────────────┘ │ │ +│ │ └─────────────────┴──────────────────────┘ │ │ +│ │ │ │ │ +│ │ ▼ │ │ +│ │ ┌─────────────────────────────────────────────────────────────┐ │ │ +│ │ │ Health Check Job Generator │ │ │ +│ │ │ - Creates minimal test jobs (test_shell, test_ssh) │ │ │ +│ │ │ - High priority for quick execution │ │ │ +│ │ │ - No firmware flash (use existing) │ │ │ +│ │ └─────────────────────────────────────────────────────────────┘ │ │ +│ └─────────────────────────────────────────────────────────────────────┘ │ +│ │ │ +│ ▼ │ +│ ┌─────────────────────────────────────────────────────────────────────┐ │ +│ │ Health Check Executor │ │ +│ │ │ │ +│ │ For each device: │ │ +│ │ 1. Power on device (PDUDaemon) │ │ +│ │ 2. Wait for boot (serial console) │ │ +│ │ 3. Verify shell access │ │ +│ │ 4. Verify SSH access │ │ +│ │ 5. Run basic diagnostics │ │ +│ │ 6. Record results │ │ +│ │ 7. 
Power off device │ │ +│ └──────────────────────────────────┬──────────────────────────────────┘ │ +│ │ │ +│ ▼ │ +│ ┌─────────────────────────────────────────────────────────────────────┐ │ +│ │ Health Status Manager │ │ +│ │ │ │ +│ │ ┌─────────────────────────────────────────────────────────────┐ │ │ +│ │ │ Device Registry │ │ │ +│ │ │ device_id │ status │ last_check │ last_pass │ failures │ │ │ +│ │ │ ─────────────────────────────────────────────────────────── │ │ │ +│ │ │ openwrt-one│ healthy │ 2025-01-24 │ 2025-01-24│ 0 │ │ │ +│ │ │ bpi-r4 │ healthy │ 2025-01-24 │ 2025-01-24│ 0 │ │ │ +│ │ │ linksys │ failing │ 2025-01-24 │ 2025-01-20│ 4 │ │ │ +│ │ │ tplink-wr │ disabled│ 2025-01-15 │ 2025-01-10│ 5 (max) │ │ │ +│ │ └─────────────────────────────────────────────────────────────┘ │ │ +│ │ │ │ +│ │ Actions on failure: │ │ +│ │ - consecutive_failures >= 3: Mark device as "failing" │ │ +│ │ - consecutive_failures >= 5: Disable device, open issue │ │ +│ │ - After manual fix: Verify with on-demand health check │ │ +│ └─────────────────────────────────────────────────────────────────────┘ │ +│ │ +│ ┌─────────────────────────────────────────────────────────────────────┐ │ +│ │ Notification System │ │ +│ │ │ │ +│ │ ┌──────────────┐ ┌──────────────┐ ┌────────────────────────┐ │ │ +│ │ │ GitHub Issue │ │ Email/Slack │ │ Dashboard Alert │ │ │ +│ │ │ (auto-create │ │ Notification │ │ (visual indicator) │ │ │ +│ │ │ & close) │ │ │ │ │ │ │ +│ │ └──────────────┘ └──────────────┘ └────────────────────────┘ │ │ +│ └─────────────────────────────────────────────────────────────────────┘ │ +│ │ +└─────────────────────────────────────────────────────────────────────────────┘ +``` + +### Health Check Implementation + +```python +# health/scheduler.py +from datetime import datetime, timedelta +from typing import Optional +import asyncio + +class HealthCheckScheduler: + """Schedule and manage device health checks.""" + + def __init__(self, api_client, device_registry): + self.api = api_client + self.devices = device_registry + self.check_interval = timedelta(hours=24) + self.failure_threshold = 5 + + async def run_periodic_checks(self): + """Main loop for periodic health checks.""" + while True: + devices_to_check = self.devices.get_devices_needing_check( + interval=self.check_interval + ) + + for device in devices_to_check: + await self.schedule_health_check(device) + + await asyncio.sleep(3600) # Check hourly for devices due + + async def schedule_health_check(self, device: dict): + """Create health check job for a device.""" + job = { + "type": "health_check", + "device_type": device["name"], + "test_plan": ["test_shell", "test_ssh"], + "priority": 10, # Highest priority + "timeout": 300, # 5 minutes max + "skip_firmware_flash": True, # Use existing firmware + } + + await self.api.create_job(job) + + async def process_health_result(self, result: dict): + """Process health check result and update device status.""" + device_id = result["device_type"] + passed = result["status"] == "pass" + + if passed: + await self.devices.mark_healthy(device_id) + await self._close_failure_issue(device_id) + else: + failures = await self.devices.increment_failures(device_id) + + if failures >= self.failure_threshold: + await self.devices.disable_device(device_id) + await self._create_failure_issue(device_id, result) + elif failures >= 3: + await self.devices.mark_failing(device_id) + await self._notify_failure(device_id, result) + + async def _create_failure_issue(self, device_id: str, result: dict): + """Create GitHub issue 
for persistent device failure.""" + issue_body = f""" +## Device Health Check Failure + +**Device:** {device_id} +**Last Check:** {datetime.now().isoformat()} +**Consecutive Failures:** {self.failure_threshold} + +### Error Details +``` +{result.get('error_message', 'No error message')} +``` + +### Console Log +{result.get('console_log_url', 'No console log available')} + +### Actions Taken +- Device has been **disabled** from the test pool +- No new test jobs will be scheduled for this device + +### Resolution +1. Investigate the device manually +2. Fix any hardware/network issues +3. Run manual health check: `POST /api/v1/health-check/{device_id}` +4. Device will be re-enabled after successful health check + +/label ~"device-failure" ~"health-check" +""" + await self.github.create_issue( + title=f"[Health Check] {device_id} failing - disabled", + body=issue_body, + labels=["device-failure", "health-check"] + ) + + +# health/executor.py +class HealthCheckExecutor: + """Execute health checks on labgrid devices.""" + + async def run_health_check(self, device_name: str) -> dict: + """Run health check on a specific device.""" + start_time = datetime.now() + results = { + "device_type": device_name, + "start_time": start_time, + "checks": [], + } + + try: + # Acquire device + target = await self.labgrid.acquire_target(device_name) + + # Power cycle if supported + if hasattr(target, 'power'): + await target.power.cycle() + await asyncio.sleep(5) + + # Check 1: Serial/Shell access + shell_result = await self._check_shell(target) + results["checks"].append(shell_result) + + if not shell_result["passed"]: + results["status"] = "fail" + results["error_message"] = "Shell access failed" + return results + + # Check 2: SSH access + ssh_result = await self._check_ssh(target) + results["checks"].append(ssh_result) + + if not ssh_result["passed"]: + results["status"] = "fail" + results["error_message"] = "SSH access failed" + return results + + # Check 3: Basic system health + system_result = await self._check_system(target) + results["checks"].append(system_result) + + # All checks passed + results["status"] = "pass" + results["duration"] = (datetime.now() - start_time).total_seconds() + + except Exception as e: + results["status"] = "error" + results["error_message"] = str(e) + + finally: + await self.labgrid.release_target(device_name) + + return results + + async def _check_shell(self, target) -> dict: + """Verify shell access works.""" + try: + shell = target.get_driver("ShellDriver") + output = shell.run_check("echo health_check_ok") + return { + "name": "shell_access", + "passed": "health_check_ok" in output, + "output": output, + } + except Exception as e: + return { + "name": "shell_access", + "passed": False, + "error": str(e), + } + + async def _check_ssh(self, target) -> dict: + """Verify SSH access works.""" + try: + ssh = target.get_driver("SSHDriver") + output = ssh.run_check("uname -a") + return { + "name": "ssh_access", + "passed": "Linux" in output, + "output": output, + } + except Exception as e: + return { + "name": "ssh_access", + "passed": False, + "error": str(e), + } + + async def _check_system(self, target) -> dict: + """Run basic system health checks.""" + ssh = target.get_driver("SSHDriver") + checks = [] + + # Memory check + mem_output = ssh.run_check("free -m | grep Mem") + mem_available = int(mem_output.split()[6]) + checks.append({ + "name": "memory", + "passed": mem_available > 10, + "value": mem_available, + "unit": "MB", + }) + + # Disk check + disk_output = 
ssh.run_check("df / | tail -1") + disk_usage = int(disk_output.split()[4].rstrip('%')) + checks.append({ + "name": "disk", + "passed": disk_usage < 95, + "value": disk_usage, + "unit": "%", + }) + + return { + "name": "system_health", + "passed": all(c["passed"] for c in checks), + "checks": checks, + } +``` + +### Health Check Dashboard View + +```typescript +// dashboard/components/HealthCheckView.tsx +interface DeviceHealth { + device_id: string; + status: 'healthy' | 'failing' | 'disabled' | 'unknown'; + last_check: string; + last_pass: string; + consecutive_failures: number; + lab_name: string; + features: string[]; +} + +const HealthCheckDashboard: React.FC = () => { + const [devices, setDevices] = useState([]); + + return ( +
+

Device Fleet Health Status

+ +
+ d.status === 'healthy').length} + color="green" + /> + d.status === 'failing').length} + color="yellow" + /> + d.status === 'disabled').length} + color="red" + /> +
+ + + + + + + + + + + + + + + {devices.map(device => ( + + + + + + + + + + ))} + +
DeviceLabStatusLast CheckLast PassFailuresActions
{device.device_id}{device.lab_name}{formatDate(device.last_check)}{formatDate(device.last_pass)}{device.consecutive_failures} + + {device.status === 'disabled' && ( + + )} +
+
+ ); +}; +``` + +--- + +## Deployment Guide + +### Docker Compose Stack + +```yaml +# docker-compose.yml +version: '3.8' + +services: + # ============================================ + # Core Infrastructure + # ============================================ + + mongodb: + image: mongo:7.0 + container_name: openwrt-kci-mongodb + volumes: + - mongodb_data:/data/db + environment: + MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER:-admin} + MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD} + networks: + - kci-network + restart: unless-stopped + + redis: + image: redis:7-alpine + container_name: openwrt-kci-redis + volumes: + - redis_data:/data + networks: + - kci-network + restart: unless-stopped + + minio: + image: minio/minio:latest + container_name: openwrt-kci-minio + command: server /data --console-address ":9001" + volumes: + - minio_data:/data + environment: + MINIO_ROOT_USER: ${MINIO_ACCESS_KEY} + MINIO_ROOT_PASSWORD: ${MINIO_SECRET_KEY} + ports: + - "9000:9000" + - "9001:9001" + networks: + - kci-network + restart: unless-stopped + + # ============================================ + # KernelCI API (Maestro) + # ============================================ + + kernelci-api: + image: ghcr.io/kernelci/kernelci-api:latest + container_name: openwrt-kci-api + depends_on: + - mongodb + - redis + environment: + SECRET_KEY: ${KCI_SECRET_KEY} + MONGO_SERVICE: mongodb://mongodb:27017 + REDIS_HOST: redis + ALGORITHM: HS256 + ACCESS_TOKEN_EXPIRE_MINUTES: 480 + volumes: + - ./config/api-config.toml:/home/kernelci/config/kernelci.toml:ro + ports: + - "8001:8001" + networks: + - kci-network + restart: unless-stopped + + # ============================================ + # KernelCI Pipeline Services + # ============================================ + + pipeline-trigger: + image: ghcr.io/kernelci/kernelci-pipeline:latest + container_name: openwrt-kci-trigger + depends_on: + - kernelci-api + environment: + KCI_API_URL: http://kernelci-api:8001 + KCI_API_TOKEN: ${KCI_API_TOKEN} + volumes: + - ./config/pipeline-openwrt.yaml:/home/kernelci/config/pipeline.yaml:ro + - ./openwrt-pipeline:/home/kernelci/openwrt-pipeline:ro + command: ["python", "-m", "openwrt_pipeline.firmware_trigger"] + networks: + - kci-network + restart: unless-stopped + + pipeline-scheduler: + image: ghcr.io/kernelci/kernelci-pipeline:latest + container_name: openwrt-kci-scheduler + depends_on: + - kernelci-api + environment: + KCI_API_URL: http://kernelci-api:8001 + KCI_API_TOKEN: ${KCI_API_TOKEN} + volumes: + - ./config/pipeline-openwrt.yaml:/home/kernelci/config/pipeline.yaml:ro + - ./openwrt-pipeline:/home/kernelci/openwrt-pipeline:ro + command: ["python", "-m", "openwrt_pipeline.test_scheduler"] + networks: + - kci-network + restart: unless-stopped + + pipeline-health: + image: ghcr.io/kernelci/kernelci-pipeline:latest + container_name: openwrt-kci-health + depends_on: + - kernelci-api + environment: + KCI_API_URL: http://kernelci-api:8001 + KCI_API_TOKEN: ${KCI_API_TOKEN} + HEALTH_CHECK_INTERVAL: 86400 + volumes: + - ./config/pipeline-openwrt.yaml:/home/kernelci/config/pipeline.yaml:ro + - ./openwrt-pipeline:/home/kernelci/openwrt-pipeline:ro + command: ["python", "-m", "openwrt_pipeline.health_scheduler"] + networks: + - kci-network + restart: unless-stopped + + # ============================================ + # Dashboard + # ============================================ + + dashboard: + image: ghcr.io/kernelci/dashboard:latest + container_name: openwrt-kci-dashboard + depends_on: + - kernelci-api + environment: + NEXT_PUBLIC_API_URL: 
http://kernelci-api:8001 + # Enable OpenWrt-specific views + NEXT_PUBLIC_PROJECT: openwrt + ports: + - "3000:3000" + networks: + - kci-network + restart: unless-stopped + + # ============================================ + # Reverse Proxy + # ============================================ + + traefik: + image: traefik:v3.0 + container_name: openwrt-kci-proxy + command: + - "--api.dashboard=true" + - "--providers.docker=true" + - "--entrypoints.web.address=:80" + - "--entrypoints.websecure.address=:443" + - "--certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web" + - "--certificatesresolvers.letsencrypt.acme.email=${ACME_EMAIL}" + - "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json" + ports: + - "80:80" + - "443:443" + volumes: + - /var/run/docker.sock:/var/run/docker.sock:ro + - letsencrypt_data:/letsencrypt + networks: + - kci-network + restart: unless-stopped + +volumes: + mongodb_data: + redis_data: + minio_data: + letsencrypt_data: + +networks: + kci-network: + driver: bridge +``` + +### API Configuration + +```toml +# config/api-config.toml +[server] +host = "0.0.0.0" +port = 8001 + +[database] +service = "mongodb://mongodb:27017" +name = "openwrt_kernelci" + +[redis] +host = "redis" +port = 6379 + +[storage] +type = "s3" +endpoint = "http://minio:9000" +bucket = "openwrt-artifacts" +access_key_env = "MINIO_ACCESS_KEY" +secret_key_env = "MINIO_SECRET_KEY" + +[jwt] +secret_key_env = "KCI_SECRET_KEY" +algorithm = "HS256" +access_token_expire_minutes = 480 + +# OpenWrt-specific settings +[openwrt] +project_name = "OpenWrt" +firmware_sources = ["official", "pr", "custom", "buildbot"] +default_test_timeout = 1800 +health_check_interval = 86400 + +[openwrt.official_sources] +snapshot = "https://downloads.openwrt.org/snapshots/targets" +stable = "https://downloads.openwrt.org/releases/24.10.0/targets" +oldstable = "https://downloads.openwrt.org/releases/23.05.5/targets" +``` + +### Labgrid Adapter Deployment (Per Lab) + +```yaml +# labgrid-adapter/docker-compose.yml +version: '3.8' + +services: + labgrid-adapter: + build: + context: . + dockerfile: Dockerfile + container_name: labgrid-kci-adapter + environment: + # KernelCI API connection + KCI_API_URL: ${KCI_API_URL} # https://openwrt-kci.example.org/api + KCI_API_TOKEN: ${KCI_API_TOKEN} + LAB_NAME: ${LAB_NAME} + + # Labgrid coordinator + LG_CROSSBAR: ${LG_CROSSBAR:-ws://localhost:20408/ws} + + # Local storage for firmware caching + FIRMWARE_CACHE: /cache + + volumes: + - ./config:/app/config:ro + - ./targets:/app/targets:ro + - firmware_cache:/cache + - /var/run/docker.sock:/var/run/docker.sock # For QEMU targets + + # Host network for labgrid coordinator access + network_mode: host + + restart: unless-stopped + +volumes: + firmware_cache: +``` + +```dockerfile +# labgrid-adapter/Dockerfile +FROM python:3.13-slim + +WORKDIR /app + +# Install system dependencies +RUN apt-get update && apt-get install -y \ + git \ + openssh-client \ + qemu-system-arm \ + qemu-system-mips \ + qemu-system-x86 \ + && rm -rf /var/lib/apt/lists/* + +# Install Python dependencies +COPY requirements.txt . +RUN pip install --no-cache-dir -r requirements.txt + +# Copy adapter code +COPY labgrid_kci_adapter/ ./labgrid_kci_adapter/ +COPY tests/ ./tests/ +COPY conftest.py . 
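+
+# tests/ and conftest.py provide a baked-in copy of the pytest suite; jobs
+# can also pull a fresh tests checkout from git (see TESTS_REPO_URL in the
+# README).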
+ +# Entry point +CMD ["python", "-m", "labgrid_kci_adapter.service"] +``` + +### Environment File + +```bash +# .env +# MongoDB +MONGO_PASSWORD=change_me_secure_password + +# MinIO (S3-compatible storage) +MINIO_ACCESS_KEY=openwrt-kci +MINIO_SECRET_KEY=change_me_secure_password + +# KernelCI API +KCI_SECRET_KEY=change_me_32_char_minimum_secret_key +KCI_API_TOKEN=admin_token_change_me + +# TLS certificates +ACME_EMAIL=admin@example.org + +# Lab configuration (for adapter) +LAB_NAME=openwrt-lab-1 +KCI_API_URL=https://openwrt-kci.example.org +``` + +--- + +## Implementation Plan + +### Phase 1: Core Infrastructure (Weeks 1-3) + +**Goal:** Deploy self-hosted KernelCI with basic OpenWrt support. + +| Task | Description | Deliverable | +|------|-------------|-------------| +| 1.1 | Set up Docker Compose stack | Running MongoDB, Redis, MinIO | +| 1.2 | Deploy KernelCI API | API accessible at /api | +| 1.3 | Configure authentication | JWT tokens, user management | +| 1.4 | Deploy dashboard | Basic web UI running | +| 1.5 | Implement OpenWrt firmware schema | Custom node types in API | + +### Phase 2: Firmware Sources (Weeks 4-6) + +**Goal:** Enable firmware ingestion from multiple sources. + +| Task | Description | Deliverable | +|------|-------------|-------------| +| 2.1 | Official release watcher | Auto-import snapshots/releases | +| 2.2 | GitHub PR integration | Import artifacts from PRs | +| 2.3 | Custom upload API | `/api/v1/firmware/upload` endpoint | +| 2.4 | Firmware storage in MinIO | Organized artifact storage | +| 2.5 | profiles.json parser | Extract device features | + +### Phase 3: Labgrid Integration (Weeks 7-10) + +**Goal:** Connect labgrid infrastructure to KernelCI. + +| Task | Description | Deliverable | +|------|-------------|-------------| +| 3.1 | Labgrid adapter service | Pull-mode job execution | +| 3.2 | Device capability mapping | Match jobs to compatible devices | +| 3.3 | Test execution bridge | pytest → KernelCI results | +| 3.4 | Console log upload | Logs in MinIO, linked in results | +| 3.5 | Multi-lab support | Federated lab registration | + +### Phase 4: Health Checks (Weeks 11-13) + +**Goal:** Implement comprehensive device health monitoring. + +| Task | Description | Deliverable | +|------|-------------|-------------| +| 4.1 | Health check scheduler | Periodic checks (daily) | +| 4.2 | Health check executor | Shell/SSH verification | +| 4.3 | Device status tracking | Healthy/failing/disabled states | +| 4.4 | Automated issue creation | GitHub issues for failures | +| 4.5 | Health dashboard view | Visual fleet status | + +### Phase 5: Dashboard Customization (Weeks 14-16) + +**Goal:** OpenWrt-specific visualization and developer experience. 
+
+| Task | Description | Deliverable |
+|------|-------------|-------------|
+| 5.1 | Firmware comparison view | Version × device × test matrix |
+| 5.2 | PR status integration | Test results on PRs |
+| 5.3 | Device fleet overview | Map of labs and devices |
+| 5.4 | Regression detection | Highlight new failures |
+| 5.5 | Developer notifications | Email/Slack on regressions |
+
+---
+
+## Current Infrastructure Preservation
+
+### What Stays the Same
+
+| Component | Current | Proposed |
+|-----------|---------|----------|
+| Device targets | 38+ YAML files in `/targets/` | Same files, used by adapter |
+| Lab network | 7 distributed labs | Same labs, pull-mode adapter |
+| Test suite | pytest tests in `/tests/` | Same tests, invoked by adapter |
+| labgrid | Custom fork | Continue using |
+| Device features | `@pytest.mark.lg_feature` | Mapped to job requirements |
+
+### What Changes
+
+| Component | Current | Proposed |
+|-----------|---------|----------|
+| Orchestration | GitHub Actions | KernelCI Pipeline |
+| Results UI | GitHub Pages | KernelCI Dashboard |
+| Job scheduling | GHA matrix | KernelCI Scheduler |
+| Health checks | healthcheck.yml | Health Check service |
+| Result storage | GHA artifacts | MinIO + MongoDB |
+
+---
+
+## References
+
+- [KernelCI Self-Hosted Documentation](https://docs.kernelci.org/components/devops/)
+- [KernelCI Local Instance Setup](https://docs.kernelci.org/maestro/api/local-instance/)
+- [KernelCI Pipeline Configuration](https://github.com/kernelci/kernelci-pipeline/blob/main/config/pipeline.yaml)
+- [KernelCI Docker Containers](https://github.com/kernelci/kernelci-docker)
+- [KernelCI Dashboard](https://github.com/kernelci/dashboard)
+- [Simple KernelCI Labs with Labgrid (LPC 2022)](https://lpc.events/event/16/contributions/1313/)
+
+---
+
+*Document updated: January 2025*
+*Focus: Self-hosted KernelCI for OpenWrt firmware testing*
diff --git a/foo copy.sh b/foo copy.sh
new file mode 100644
index 000000000..95e7d37de
--- /dev/null
+++ b/foo copy.sh
@@ -0,0 +1,23 @@
+yq -o=json '
+  . as $root |
+  $root.labs as $labs |
+  $root.devices as $devices |
+  $labs
+  | to_entries
+  | map(
+      .key as $lab |
+      .value.devices
+      | map(
+          select($devices[.] 
!= null and $devices[.].snapshots_only != true) | + { + "device": ., + "proxy": $labs[$lab].proxy, + "target": $devices[.].target, + "firmware": $devices[.].firmware, + "maintainers": $labs[$lab].maintainers, + "snapshots_only": ($devices[.].snapshots_only // false) + } + ) + ) + | flatten +' labnet.yaml diff --git a/kernelci/.env.example b/kernelci/.env.example new file mode 100644 index 000000000..2b5297447 --- /dev/null +++ b/kernelci/.env.example @@ -0,0 +1,99 @@ +# ============================================================================= +# OpenWrt KernelCI Environment Configuration +# ============================================================================= +# +# Copy this file to .env and fill in the values. +# NEVER commit the .env file to version control! +# +# Usage: +# cp .env.example .env +# # Edit .env with your values +# docker compose up -d +# +# ============================================================================= + +# ============================================================================= +# Domain Configuration +# ============================================================================= + +# Your domain name (used for TLS certificates and routing) +DOMAIN=openwrt-kci.example.org + +# Email for Let's Encrypt certificate notifications +ACME_EMAIL=admin@example.org + +# ============================================================================= +# MongoDB Configuration +# ============================================================================= + +# MongoDB root user +MONGO_USER=admin + +# MongoDB root password (generate a strong password!) +# Example: openssl rand -base64 32 +MONGO_PASSWORD=CHANGE_ME_GENERATE_SECURE_PASSWORD + +# ============================================================================= +# MinIO (S3-Compatible Storage) Configuration +# ============================================================================= + +# MinIO access credentials +MINIO_ACCESS_KEY=openwrt-kci-admin + +# MinIO secret key (generate a strong password!) +# Example: openssl rand -base64 32 +MINIO_SECRET_KEY=CHANGE_ME_GENERATE_SECURE_PASSWORD + +# ============================================================================= +# KernelCI API Configuration +# ============================================================================= + +# JWT secret key for API authentication (minimum 32 characters!) 
+# Example: openssl rand -base64 48 +KCI_SECRET_KEY=CHANGE_ME_GENERATE_AT_LEAST_32_CHAR_SECRET + +# API token for pipeline services +# Generate after first startup via API or use admin token +KCI_API_TOKEN=CHANGE_ME_GENERATE_API_TOKEN + +# ============================================================================= +# GitHub Integration (Optional but Recommended) +# ============================================================================= + +# GitHub personal access token for PR integration +# Needs: repo (read), actions (read artifacts) +GITHUB_TOKEN=ghp_CHANGE_ME + +# GitHub repository for issue creation +GITHUB_REPO=openwrt/openwrt + +# ============================================================================= +# Health Check Configuration +# ============================================================================= + +# Health check interval in seconds (default: 24 hours) +HEALTH_CHECK_INTERVAL=86400 + +# ============================================================================= +# Buildbot Integration (Optional) +# ============================================================================= + +# Webhook secret for buildbot integration +# BUILDBOT_WEBHOOK_SECRET=CHANGE_ME + +# ============================================================================= +# Notification Configuration (Optional) +# ============================================================================= + +# Slack webhook URL for notifications +# SLACK_WEBHOOK_URL=https://hooks.slack.com/services/... + +# ============================================================================= +# Development/Debug Options +# ============================================================================= + +# Set to true to skip TLS (for local development only!) +# SKIP_TLS=false + +# Log level (DEBUG, INFO, WARNING, ERROR) +# LOG_LEVEL=INFO diff --git a/kernelci/README.md b/kernelci/README.md new file mode 100644 index 000000000..4a0b82819 --- /dev/null +++ b/kernelci/README.md @@ -0,0 +1,412 @@ +# OpenWrt KernelCI - Self-Hosted Testing Infrastructure + +This directory contains the Docker Compose stack for running a self-hosted +KernelCI instance tailored for OpenWrt firmware testing. + +## Modular Architecture + +The system is split into two components: + +| Component | Description | Reusable? | +|-----------|-------------|-----------| +| `labgrid-adapter/` | Generic adapter connecting labgrid to KernelCI | **Yes** - usable by any project | +| `openwrt-pipeline/` | OpenWrt-specific firmware triggers and scheduling | No - OpenWrt specific | + +The **labgrid-kci-adapter** is designed to be project-agnostic and can be used +by other projects that want to connect labgrid-managed devices to KernelCI. +See `labgrid-adapter/README.md` for details. + +## Overview + +The stack provides: + +- **KernelCI API (Maestro)** - Job management and REST API +- **Dashboard** - Web-based result visualization +- **Pipeline Services** - Firmware triggers, scheduling, health checks +- **Storage** - MinIO for artifacts, MongoDB for data, Redis for events +- **Reverse Proxy** - Traefik with automatic TLS certificates + +## Quick Start + +### Prerequisites + +- Docker Engine 24.0+ +- Docker Compose v2.20+ +- A domain name pointing to your server (for TLS) +- At least 4GB RAM, 20GB disk space + +### Installation + +1. **Clone and configure:** + + ```bash + cd kernelci + cp .env.example .env + ``` + +2. 
**Edit `.env` with your settings:** + + ```bash + # Generate secure passwords + openssl rand -base64 32 # For MONGO_PASSWORD + openssl rand -base64 32 # For MINIO_SECRET_KEY + openssl rand -base64 48 # For KCI_SECRET_KEY + ``` + +3. **Start the stack:** + + ```bash + docker compose up -d + ``` + +4. **Check logs:** + + ```bash + docker compose logs -f + ``` + +5. **Access the services:** + + - Dashboard: `https://your-domain.org` + - API: `https://api.your-domain.org` + - Storage Console: `https://storage.your-domain.org` + - Traefik Dashboard: `http://your-server:8080` + +## Architecture + +``` +┌─────────────────────────────────────────────────────────────────────┐ +│ Traefik (Reverse Proxy) │ +│ :80 (redirect) → :443 (TLS) │ +└─────────────────────────────────────────────────────────────────────┘ + │ │ │ + ▼ ▼ ▼ +┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ +│ Dashboard │ │ KernelCI API │ │ MinIO Console │ +│ (KernelCI) │ │ (Maestro) │ │ (S3 Storage) │ +│ :3000 │ │ :8001 │ │ :9001 │ +└─────────────────┘ └─────────────────┘ └─────────────────┘ + │ + ┌────────────────────┼────────────────────┐ + │ │ │ + ▼ ▼ ▼ +┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ +│ MongoDB │ │ Redis │ │ MinIO │ +│ (Data) │ │ (Pub/Sub) │ │ (Artifacts) │ +└─────────────────┘ └─────────────────┘ └─────────────────┘ +``` + +## Pipeline Services + +### Firmware Trigger (`pipeline-trigger`) + +Watches for new firmware from configured sources: + +- **Official releases** - downloads.openwrt.org +- **GitHub PRs** - Artifacts from PR CI runs +- **Custom uploads** - Via API endpoint + +### Test Scheduler (`pipeline-scheduler`) + +Creates test job nodes for available firmware based on: + +- Device compatibility (target/subtarget/profile) +- Device features (wifi, wan_port, etc.) +- Test plan requirements + +## Lab Integration + +Labs connect using the **pull-mode** architecture: + +1. Lab runs the `labgrid-adapter` service +2. Adapter polls API for pending jobs (`kind=job`, `state=available`) +3. Jobs are claimed by setting `state=running` +4. Tests run via pytest with labgrid plugin +5. Results submitted as test nodes under job +6. Health checks run automatically every 24 hours + +See `labgrid-adapter/` for the lab-side component. + +### Test Execution + +Tests are executed using pytest's programmatic API with the labgrid plugin. +Following the [LAVA pattern](https://docs.lavasoftware.org/lava/writing-tests.html), +tests are pulled from git before each job execution. + +**Configuration:** + +```bash +# Configure the tests repository (pulled before each job) +TESTS_REPO_URL=https://github.com/openwrt/openwrt-tests.git +TESTS_REPO_BRANCH=main +``` + +**Per-job override:** + +Jobs can specify a different tests repository: + +```yaml +data: + tests_repo: "https://github.com/custom/tests.git" + tests_branch: "feature-branch" + tests: ["test_boot", "test_wifi"] +``` + +The executor: +1. Pulls tests from git (clones if not exists, updates if exists) +2. Runs pytest with labgrid plugin +3. Collects results via `ResultCollectorPlugin` +4. Submits results as KernelCI test nodes + +Labgrid handles firmware flashing via its pytest fixtures. 
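+
+Condensed into code, the flow looks roughly like this (a sketch: `run_job`
+and its arguments are illustrative, while `ResultCollectorPlugin`, the
+labgrid `--lg-env` option, and the git steps mirror the description above):
+
+```python
+import subprocess
+
+import pytest
+
+
+class ResultCollectorPlugin:
+    """Collect per-test outcomes via pytest's report hook."""
+
+    def __init__(self):
+        self.results = []
+
+    def pytest_runtest_logreport(self, report):
+        if report.when == "call":
+            self.results.append(
+                {"name": report.nodeid,
+                 "result": "pass" if report.passed else "fail"}
+            )
+
+
+def run_job(job: dict, tests_dir: str, lg_env: str) -> list[dict]:
+    # 1. Update the tests checkout (the initial clone is omitted here)
+    branch = job["data"].get("tests_branch", "main")
+    subprocess.run(["git", "-C", tests_dir, "fetch", "origin", branch], check=True)
+    subprocess.run(["git", "-C", tests_dir, "checkout", f"origin/{branch}"], check=True)
+
+    # 2+3. Run pytest with the labgrid plugin, collecting results in-process
+    collector = ResultCollectorPlugin()
+    pytest.main(
+        [tests_dir, "--lg-env", lg_env,
+         "-k", " or ".join(job["data"]["tests"])],
+        plugins=[collector],
+    )
+
+    # 4. The caller submits these as KernelCI test nodes under the job
+    return collector.results
+```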
+ +### Lab Configuration + +```bash +# Required environment variables +LAB_NAME=my-lab +KCI_API_URL=https://api.kernelci.example.com +KCI_API_TOKEN= +LG_COORDINATOR=labgrid-coordinator:20408 + +# Optional - polling and concurrency +POLL_INTERVAL=30 +MAX_CONCURRENT_JOBS=3 + +# Optional - health checks +HEALTH_CHECK_INTERVAL=86400 # 24 hours +HEALTH_CHECK_ENABLED=true + +# Optional - tests repository (pulled before each job) +TESTS_REPO_URL=https://github.com/openwrt/openwrt-tests.git +TESTS_REPO_BRANCH=main +``` + +### Health Checks + +The adapter runs automatic health checks: + +- Every 24 hours (configurable via `HEALTH_CHECK_INTERVAL`) +- Failing devices removed from job pool +- Recovered devices automatically re-added +- Results logged for lab maintainers + +Manual check: +```bash +python -m labgrid_kci_adapter.health_check --all +``` + +## Configuration + +### `config/pipeline.yaml` + +Main pipeline configuration including: + +- Firmware sources +- Test plans +- Scheduler settings +- Device type mappings +- Health check settings + +### `config/api-config.toml` + +KernelCI API configuration: + +- Server settings +- Database connection +- JWT authentication +- OpenWrt-specific settings + +### `config/mongo-init.js` + +MongoDB initialization: + +- Creates collections +- Sets up indexes +- Optimizes queries + +## API Reference + +The KernelCI API uses a **Node-based data model** where all entities +(firmware builds, jobs, tests) are nodes with different `kind` values. + +### Query Nodes + +```bash +# Get all available jobs for a device type +GET /latest/nodes?kind=job&state=available&data.device_type=ath79-tplink-archer-c7-v2 + +# Get firmware nodes +GET /latest/nodes?kind=kbuild&data.target=ath79 + +# Get test results for a job +GET /latest/nodes?kind=test&parent={job_id} +``` + +### Create Nodes + +```bash +# Create firmware node +POST /latest/nodes +{ + "kind": "kbuild", + "name": "openwrt-ath79-generic-tplink_archer-c7-v2", + "state": "available", + "data": { + "target": "ath79", + "subtarget": "generic", + "profile": "tplink_archer-c7-v2", + "version": "24.10.0" + } +} + +# Create test result +POST /latest/nodes +{ + "kind": "test", + "name": "test_firmware_version", + "parent": "{job_id}", + "state": "done", + "result": "pass" +} +``` + +### Update Nodes + +```bash +# Claim a job +PUT /latest/nodes/{job_id} +{ + "state": "running", + "data": { + "lab_name": "my-lab", + "device_id": "device-01" + } +} + +# Complete a job +PUT /latest/nodes/{job_id} +{ + "state": "done", + "result": "pass" +} +``` + +### Node States + +| State | Description | +|-------|-------------| +| `available` | Ready to be processed (job ready for lab) | +| `running` | Currently being processed | +| `done` | Processing complete | + +### Node Kinds + +| Kind | Description | +|------|-------------| +| `kbuild` | Firmware build (OpenWrt image) | +| `job` | Test job container | +| `test` | Individual test result | + +## Maintenance + +### Backup + +```bash +# Backup MongoDB +docker exec openwrt-kci-mongodb mongodump --out /backup +docker cp openwrt-kci-mongodb:/backup ./backup-$(date +%Y%m%d) + +# Backup MinIO +docker run --rm -v openwrt-kci-minio:/data -v $(pwd):/backup \ + alpine tar czf /backup/minio-$(date +%Y%m%d).tar.gz /data +``` + +### Logs + +```bash +# View all logs +docker compose logs -f + +# View specific service +docker compose logs -f pipeline-health + +# View last 100 lines +docker compose logs --tail=100 kernelci-api +``` + +### Updates + +```bash +# Pull latest images +docker compose pull + +# 
Restart with new images +docker compose up -d + +# Rebuild pipeline services +docker compose build --no-cache +docker compose up -d +``` + +## Troubleshooting + +### API not starting + +Check MongoDB connection: + +```bash +docker compose logs mongodb +docker exec -it openwrt-kci-mongodb mongosh --eval "db.adminCommand('ping')" +``` + +### Jobs not being scheduled + +Check scheduler logs: + +```bash +docker compose logs -f pipeline-scheduler +``` + +### Device health checks failing + +Health checks run on the lab-side adapter, not centrally. +Check the adapter logs on your lab server: + +```bash +docker logs labgrid-adapter +``` + +### TLS certificate issues + +Check Traefik logs: + +```bash +docker compose logs -f traefik +``` + +Ensure your domain DNS is correctly configured. + +## Development + +### Local development without TLS + +Add to `.env`: + +``` +SKIP_TLS=true +``` + +Access via `http://localhost:3000` (dashboard) and `http://localhost:8001` (API). + +### Running tests + +```bash +# Build and run tests +docker compose -f docker-compose.yml -f docker-compose.test.yml up --build +``` + +## License + +This project is part of the OpenWrt testing infrastructure. +See the main repository for license information. diff --git a/kernelci/config/api-config.toml b/kernelci/config/api-config.toml new file mode 100644 index 000000000..c3af7d222 --- /dev/null +++ b/kernelci/config/api-config.toml @@ -0,0 +1,187 @@ +# ============================================================================= +# KernelCI API Configuration for OpenWrt Testing +# ============================================================================= +# +# This configuration file is mounted into the kernelci-api container. +# Environment variables are used for secrets (defined in .env file). 
+# +# ============================================================================= + +[server] +host = "0.0.0.0" +port = 8001 + +[database] +# MongoDB connection string (uses environment variable) +service_env = "MONGO_SERVICE" +name = "openwrt_kernelci" + +[redis] +host_env = "REDIS_HOST" +port = 6379 + +[jwt] +# JWT secret key for authentication (uses environment variable) +secret_key_env = "SECRET_KEY" +algorithm = "HS256" +access_token_expire_minutes = 480 + +[cors] +# Allow cross-origin requests from dashboard +allow_origins = ["*"] +allow_methods = ["*"] +allow_headers = ["*"] + +# ============================================================================= +# OpenWrt-Specific Configuration +# ============================================================================= + +[openwrt] +# Project identification +project_name = "OpenWrt" +project_url = "https://openwrt.org" + +# Supported firmware sources +firmware_sources = ["official", "pr", "custom", "buildbot"] + +# Default timeouts (seconds) +default_test_timeout = 1800 +default_boot_timeout = 300 +health_check_timeout = 300 + +# Health check configuration +health_check_interval = 86400 # 24 hours +health_check_failure_threshold = 5 + +# ============================================================================= +# Official Firmware Sources +# ============================================================================= + +[openwrt.sources.snapshot] +name = "Snapshot" +description = "Latest development builds" +url = "https://downloads.openwrt.org/snapshots/targets" +check_interval = 3600 # 1 hour +enabled = true + +[openwrt.sources.stable] +name = "Stable (24.10)" +description = "Current stable release" +url = "https://downloads.openwrt.org/releases/24.10.0/targets" +version = "24.10.0" +check_interval = 86400 # 24 hours +enabled = true + +[openwrt.sources.oldstable] +name = "Old Stable (23.05)" +description = "Previous stable release" +url = "https://downloads.openwrt.org/releases/23.05.5/targets" +version = "23.05.5" +check_interval = 86400 # 24 hours +enabled = true + +# ============================================================================= +# Test Plans +# ============================================================================= + +[openwrt.test_plans.base] +name = "Base Tests" +description = "Basic boot and connectivity tests" +tests = [ + "test_shell", + "test_ssh", + "test_firmware_version", + "test_ubus_system_board", + "test_free_memory", + "test_kernel_errors", +] +timeout = 600 +required_features = [] +priority = 5 + +[openwrt.test_plans.system] +name = "System Health" +description = "System resource and health validation" +tests = [ + "test_memory_usage", + "test_filesystem_usage", + "test_system_uptime", + "test_process_count", + "test_entropy_available", +] +timeout = 300 +required_features = [] +priority = 4 + +[openwrt.test_plans.network] +name = "Network Tests" +description = "Network functionality validation" +tests = [ + "test_lan_interface_address", + "test_wan_wait_for_network", + "test_https_download", +] +timeout = 600 +required_features = ["wan_port", "online"] +priority = 3 + +[openwrt.test_plans.wifi] +name = "WiFi Tests" +description = "Wireless functionality validation" +tests = [ + "test_wifi_scan", + "test_wifi_wpa2", + "test_wifi_wpa3", +] +timeout = 900 +required_features = ["wifi"] +priority = 3 + +[openwrt.test_plans.package] +name = "Package Manager Tests" +description = "Package installation and management" +tests = [ + "test_opkg_procd_installed", + "test_opkg_install_ucert", +] 
+timeout = 300 +required_features = ["opkg", "online"] +priority = 2 + +[openwrt.test_plans.health_check] +name = "Health Check" +description = "Quick device health verification" +tests = [ + "test_shell", + "test_ssh", +] +timeout = 300 +required_features = [] +priority = 10 # Highest priority +skip_firmware_flash = true + +# ============================================================================= +# Scheduler Configuration +# ============================================================================= + +[scheduler] +# Maximum concurrent jobs per lab +max_concurrent_jobs = 5 + +# Job priority weights +[scheduler.priorities] +pr_builds = 10 # Highest - developers waiting +custom = 8 # Developer uploads +health_check = 10 # Device health verification +snapshot = 5 # Daily development testing +stable = 3 # Release validation +oldstable = 2 # Legacy support + +# Feature-based test assignment +[scheduler.feature_mapping] +# Devices with wifi feature get wifi tests +wifi = ["wifi"] +# Devices with wan_port and online get network tests +network = ["wan_port", "online"] +# Devices with opkg and online get package tests +package = ["opkg", "online"] diff --git a/kernelci/config/mongo-init.js b/kernelci/config/mongo-init.js new file mode 100644 index 000000000..7a01e78f8 --- /dev/null +++ b/kernelci/config/mongo-init.js @@ -0,0 +1,123 @@ +// ============================================================================= +// MongoDB Initialization Script for OpenWrt KernelCI +// ============================================================================= +// +// This script runs once when the MongoDB container is first initialized. +// It creates the database, collections, and indexes needed for KernelCI. +// +// ============================================================================= + +// Switch to the openwrt_kernelci database +db = db.getSiblingDB('openwrt_kernelci'); + +// ============================================================================= +// Collections for KernelCI Core +// ============================================================================= + +// Users collection (managed by kernelci-api) +db.createCollection('users'); +db.users.createIndex({ "email": 1 }, { unique: true }); +db.users.createIndex({ "username": 1 }, { unique: true }); + +// Nodes collection (firmware, jobs, tests) +db.createCollection('nodes'); +db.nodes.createIndex({ "id": 1 }, { unique: true }); +db.nodes.createIndex({ "kind": 1 }); +db.nodes.createIndex({ "state": 1 }); +db.nodes.createIndex({ "created": -1 }); +db.nodes.createIndex({ "owner": 1 }); +db.nodes.createIndex({ "parent": 1 }); + +// ============================================================================= +// OpenWrt-Specific Collections +// ============================================================================= + +// Firmware collection - stores firmware metadata +db.createCollection('firmware'); +db.firmware.createIndex({ "id": 1 }, { unique: true }); +db.firmware.createIndex({ "source": 1 }); +db.firmware.createIndex({ "version": 1 }); +db.firmware.createIndex({ "target": 1, "subtarget": 1 }); +db.firmware.createIndex({ "profile": 1 }); +db.firmware.createIndex({ "created_at": -1 }); +db.firmware.createIndex({ "git_commit_hash": 1 }); + +// Jobs collection - test job queue +db.createCollection('jobs'); +db.jobs.createIndex({ "id": 1 }, { unique: true }); +db.jobs.createIndex({ "status": 1 }); +db.jobs.createIndex({ "priority": -1 }); +db.jobs.createIndex({ "device_type": 1 }); +db.jobs.createIndex({ "firmware_id": 1 
}); +db.jobs.createIndex({ "assigned_lab": 1 }); +db.jobs.createIndex({ "created_at": -1 }); +db.jobs.createIndex({ "status": 1, "priority": -1 }); // Compound for job polling + +// Results collection - test results +db.createCollection('results'); +db.results.createIndex({ "id": 1 }, { unique: true }); +db.results.createIndex({ "job_id": 1 }); +db.results.createIndex({ "firmware_id": 1 }); +db.results.createIndex({ "device_type": 1 }); +db.results.createIndex({ "test_name": 1 }); +db.results.createIndex({ "status": 1 }); +db.results.createIndex({ "lab_name": 1 }); +db.results.createIndex({ "start_time": -1 }); +db.results.createIndex({ "firmware_id": 1, "device_type": 1, "test_name": 1 }); + +// ============================================================================= +// Device and Lab Management +// ============================================================================= + +// Devices collection - device registry +db.createCollection('devices'); +db.devices.createIndex({ "id": 1 }, { unique: true }); +db.devices.createIndex({ "lab_name": 1 }); +db.devices.createIndex({ "status": 1 }); +db.devices.createIndex({ "features": 1 }); +db.devices.createIndex({ "target": 1, "subtarget": 1 }); + +// Labs collection - lab registry +db.createCollection('labs'); +db.labs.createIndex({ "id": 1 }, { unique: true }); +db.labs.createIndex({ "status": 1 }); +db.labs.createIndex({ "last_seen": -1 }); + +// Health checks collection - device health history +db.createCollection('health_checks'); +db.health_checks.createIndex({ "device_id": 1 }); +db.health_checks.createIndex({ "timestamp": -1 }); +db.health_checks.createIndex({ "status": 1 }); +db.health_checks.createIndex({ "device_id": 1, "timestamp": -1 }); + +// ============================================================================= +// Events and Notifications +// ============================================================================= + +// Events collection - pub/sub events (TTL: 7 days) +db.createCollection('events'); +db.events.createIndex({ "type": 1 }); +db.events.createIndex({ "timestamp": -1 }); +db.events.createIndex({ "timestamp": 1 }, { expireAfterSeconds: 604800 }); + +// Notifications collection - pending notifications +db.createCollection('notifications'); +db.notifications.createIndex({ "type": 1 }); +db.notifications.createIndex({ "status": 1 }); +db.notifications.createIndex({ "created_at": -1 }); + +// ============================================================================= +// Statistics and Aggregations +// ============================================================================= + +// Daily statistics (for dashboard) +db.createCollection('daily_stats'); +db.daily_stats.createIndex({ "date": 1 }, { unique: true }); +db.daily_stats.createIndex({ "date": -1 }); + +// Device statistics +db.createCollection('device_stats'); +db.device_stats.createIndex({ "device_id": 1, "date": 1 }, { unique: true }); +db.device_stats.createIndex({ "date": -1 }); + +print('MongoDB initialization complete for OpenWrt KernelCI'); diff --git a/kernelci/config/pipeline.yaml b/kernelci/config/pipeline.yaml new file mode 100644 index 000000000..1684c58a4 --- /dev/null +++ b/kernelci/config/pipeline.yaml @@ -0,0 +1,574 @@ +# ============================================================================= +# OpenWrt KernelCI Pipeline Configuration +# ============================================================================= +# +# This file configures the pipeline services for OpenWrt firmware testing: +# - Firmware source watchers 
+# - Test scheduling +# - Health check automation +# - Result collection +# +# ============================================================================= + +# API connection settings +api: + local: + url: ${KCI_API_URL:-http://kernelci-api:8001} + token: ${KCI_API_TOKEN} + +# Storage configuration +storage: + minio: + type: s3 + endpoint: ${MINIO_ENDPOINT:-minio:9000} + access_key: ${MINIO_ACCESS_KEY} + secret_key: ${MINIO_SECRET_KEY} + secure: false + buckets: + firmware: openwrt-firmware + logs: openwrt-logs + artifacts: openwrt-artifacts + +# ============================================================================= +# Tree and Branch Configuration +# ============================================================================= +# +# Tree: openwrt +# Branches are discovered dynamically from: +# https://downloads.openwrt.org/.versions.json +# +# This automatically picks up: +# - main -> SNAPSHOT builds +# - stable -> Current release (e.g., 24.10.0) +# - oldstable -> Previous release series (e.g., 23.05.5) +# +tree: + name: openwrt + url: https://git.openwrt.org/openwrt/openwrt.git + +# ============================================================================= +# Firmware Source Watchers +# ============================================================================= + +firmware_sources: + # Official OpenWrt releases (versions fetched from .versions.json) + official: + enabled: true + type: openwrt_releases + + # Include snapshot (main branch) builds + include_snapshot: true + + # Include oldstable (previous release series) + include_oldstable: true + + # Check interval in seconds + check_interval: 3600 # 1 hour for snapshots, 24h for releases + + # Targets to scan for firmware + # These are checked for all active branches + targets: + - ath79/generic + - ramips/mt7621 + - mediatek/filogic + - x86/64 + - armsr/armv8 + - malta/be + + # GitHub Pull Request builds + github_pr: + enabled: true + type: github_artifacts + repository: openwrt/openwrt + token: ${GITHUB_TOKEN} + # Only process PRs with specific labels + trigger_labels: + - ci-test-requested + - needs-testing + # Workflow that produces firmware artifacts + workflow_name: Build + artifact_pattern: "openwrt-*" + check_interval: 300 # 5 minutes + + # Custom firmware uploads via API + custom: + enabled: true + type: api_upload + max_file_size: 104857600 # 100MB + allowed_extensions: + - .bin + - .img + - .itb + - .gz + validation: + require_target: true + require_profile: true + + # Buildbot integration (webhook-based) + buildbot: + enabled: false # Enable when buildbot is configured + type: webhook + endpoint: /api/v1/webhook/buildbot + secret: ${BUILDBOT_WEBHOOK_SECRET} + +# ============================================================================= +# Test Plans +# ============================================================================= + +test_plans: + base: + name: Base Tests + description: Basic boot and connectivity tests + tests: + - test_shell + - test_ssh + - test_firmware_version + - test_ubus_system_board + - test_free_memory + - test_kernel_errors + timeout: 600 + required_features: [] + + system: + name: System Health + description: System resource validation + tests: + - test_memory_usage + - test_filesystem_usage + - test_system_uptime + - test_process_count + - test_entropy_available + timeout: 300 + required_features: [] + + network: + name: Network Tests + description: Network functionality tests + tests: + - test_lan_interface_address + - test_wan_wait_for_network + - test_https_download + timeout: 
600 + required_features: + - wan_port + - online + + wifi: + name: WiFi Tests + description: Wireless functionality tests + tests: + - test_wifi_scan + - test_wifi_wpa2 + - test_wifi_wpa3 + timeout: 900 + required_features: + - wifi + + wifi_hwsim: + name: WiFi HWSIM Tests + description: WiFi tests using mac80211_hwsim + tests: + - test_wifi_hwsim_sae_mixed + timeout: 600 + required_features: + - hwsim + + package_opkg: + name: OPKG Tests + description: OPKG package manager tests + tests: + - test_opkg_procd_installed + - test_opkg_install_ucert + timeout: 300 + required_features: + - opkg + - online + + package_apk: + name: APK Tests + description: APK package manager tests + tests: + - test_apk_procd_installed + - test_apk_add_ucert + - test_apk_audit + timeout: 300 + required_features: + - apk + - online + + health_check: + name: Health Check + description: Quick device health verification + tests: + - test_shell + - test_ssh + timeout: 300 + required_features: [] + skip_firmware_flash: true + + # ========================================================================== + # Kernel Selftest Plans + # ========================================================================== + # These require custom images with kselftest packages installed + # Package names: kselftests-{subsystem} + + # Networking + kselftest_net: + name: Kernel Network Tests + description: Linux kernel networking stack tests + tests: + - kselftest_net + timeout: 1800 + required_features: [] + test_type: kselftest + + # Timers and RTC + kselftest_timers: + name: Kernel Timer Tests + description: Linux kernel timer subsystem tests + tests: + - kselftest_timers + timeout: 600 + required_features: [] + test_type: kselftest + + kselftest_rtc: + name: Kernel RTC Tests + description: Real-time clock tests + tests: + - kselftest_rtc + timeout: 300 + required_features: [] + test_type: kselftest + + # Syscalls + kselftest_clone3: + name: Kernel clone3 Tests + description: clone3 syscall tests + tests: + - kselftest_clone3 + timeout: 300 + required_features: [] + test_type: kselftest + + kselftest_openat2: + name: Kernel openat2 Tests + description: openat2 syscall tests + tests: + - kselftest_openat2 + timeout: 300 + required_features: [] + test_type: kselftest + + kselftest_exec: + name: Kernel exec Tests + description: Program execution tests + tests: + - kselftest_exec + timeout: 300 + required_features: [] + test_type: kselftest + + kselftest_mincore: + name: Kernel mincore Tests + description: mincore syscall tests + tests: + - kselftest_mincore + timeout: 300 + required_features: [] + test_type: kselftest + + kselftest_splice: + name: Kernel splice Tests + description: splice syscall tests + tests: + - kselftest_splice + timeout: 300 + required_features: [] + test_type: kselftest + + kselftest_sync: + name: Kernel sync Tests + description: sync_file_range tests + tests: + - kselftest_sync + timeout: 300 + required_features: [] + test_type: kselftest + + # IPC + kselftest_futex: + name: Kernel futex Tests + description: Futex tests + tests: + - kselftest_futex + timeout: 600 + required_features: [] + test_type: kselftest + + kselftest_mqueue: + name: Kernel mqueue Tests + description: POSIX message queue tests + tests: + - kselftest_mqueue + timeout: 300 + required_features: [] + test_type: kselftest + + kselftest_sigaltstack: + name: Kernel sigaltstack Tests + description: Signal alternate stack tests + tests: + - kselftest_sigaltstack + timeout: 300 + required_features: [] + test_type: kselftest + + # Process + 
kselftest_kcmp: + name: Kernel kcmp Tests + description: Process comparison tests + tests: + - kselftest_kcmp + timeout: 300 + required_features: [] + test_type: kselftest + + # Size + kselftest_size: + name: Kernel Size Tests + description: Binary size test + tests: + - kselftest_size + timeout: 300 + required_features: [] + test_type: kselftest + +# ============================================================================= +# Scheduler Configuration +# ============================================================================= + +scheduler: + # Maximum jobs to queue at once + max_queue_size: 100 + + # Maximum concurrent jobs per lab + max_concurrent_per_lab: 3 + + # Job timeout (seconds) - jobs exceeding this are marked failed + job_timeout: 3600 + + # Enabled test types + # - firmware: Standard OpenWrt functionality tests + # - kselftest: Linux kernel validation tests (requires custom images) + enabled_test_types: + - firmware + # - kselftest # Enable when kselftest packages are available + + # Priority values (higher = more urgent) + priorities: + health_check: 10 + pr_builds: 9 + custom: 8 + snapshot: 5 + stable: 3 + oldstable: 2 + + # Default test plans by firmware source + default_test_plans: + snapshot: + - base + - system + stable: + - base + - system + - network + oldstable: + - base + pr_builds: + - base + - system + custom: + - base + + # Additional test plans based on device features + feature_test_plans: + wifi: + - wifi + hwsim: + - wifi_hwsim + opkg: + - package_opkg + apk: + - package_apk + wan_port: + - network + +# ============================================================================= +# Health Check Configuration +# ============================================================================= + +health_check: + enabled: true + + # Check interval (seconds) + interval: 86400 # 24 hours + + # Failure thresholds + warning_threshold: 3 + disable_threshold: 5 + + # Tests to run for health check + test_plan: health_check + + # Notification settings + notifications: + github_issues: + enabled: true + repository: ${GITHUB_REPO:-openwrt/openwrt} + labels: + - device-failure + - health-check + auto_close: true + + email: + enabled: false + recipients: [] + + slack: + enabled: false + webhook_url: ${SLACK_WEBHOOK_URL} + +# ============================================================================= +# Device Type Mapping +# ============================================================================= +# +# Maps labgrid target names to OpenWrt target/subtarget/profile +# This allows the scheduler to match firmware to compatible devices +# +# Each device has: +# - target/subtarget/profile: OpenWrt build target +# - features: Used for firmware test plan selection +# - capabilities: Used for test type filtering (firmware vs kselftest) +# +# Capabilities for test types: +# - serial_console: Required for all tests +# - isolated_network: Required for kselftest_net (no external network) +# - loopback_ethernet: Optional for some network tests +# - high_memory: Device has >= 512MB RAM +# +device_types: + # QEMU targets - good for kselftest (isolated, configurable resources) + qemu_x86-64: + target: x86 + subtarget: "64" + profile: generic + features: + - wan_port + - rootfs + capabilities: + - serial_console + - isolated_network + - high_memory + + qemu_malta-be: + target: malta + subtarget: be + profile: generic + features: + - wan_port + capabilities: + - serial_console + - isolated_network + + qemu_armsr-armv8: + target: armsr + subtarget: armv8 + profile: generic + features: 
+ - wan_port + capabilities: + - serial_console + - isolated_network + - high_memory + + # Real hardware - primarily for firmware testing + # Add isolated_network capability if device is on dedicated test network + openwrt_one: + target: mediatek + subtarget: filogic + profile: openwrt_one + features: + - wifi + - wan_port + capabilities: + - serial_console + + bananapi_bpi-r4: + target: mediatek + subtarget: filogic + profile: bananapi_bpi-r4 + features: + - wifi + - wan_port + capabilities: + - serial_console + - high_memory + + linksys_e8450: + target: mediatek + subtarget: mt7622 + profile: linksys_e8450-ubi + features: + - wifi + - wan_port + capabilities: + - serial_console + + glinet_gl-mt6000: + target: mediatek + subtarget: filogic + profile: glinet_gl-mt6000 + features: + - wifi + - wan_port + capabilities: + - serial_console + - high_memory + + tplink_archer-c7-v2: + target: ath79 + subtarget: generic + profile: tplink_archer-c7-v2 + features: + - wifi + - wan_port + capabilities: + - serial_console + + raspberry_pi_4: + target: bcm27xx + subtarget: bcm2711 + profile: rpi-4 + features: + - wan_port + capabilities: + - serial_console + - high_memory + +# ============================================================================= +# Results Collection +# ============================================================================= + +results: + # Store console logs + store_console_logs: true + + # Store test artifacts + store_artifacts: true + + # Log retention (days) + log_retention_days: 90 + + # Compress logs older than (days) + compress_after_days: 7 diff --git a/kernelci/docker-compose.yml b/kernelci/docker-compose.yml new file mode 100644 index 000000000..08dec927c --- /dev/null +++ b/kernelci/docker-compose.yml @@ -0,0 +1,266 @@ +version: '3.8' + +# ============================================================================= +# OpenWrt KernelCI - Self-Hosted Testing Infrastructure +# ============================================================================= +# +# This Docker Compose stack deploys a complete KernelCI instance for OpenWrt +# firmware testing with: +# - KernelCI API (Maestro) for job management +# - KernelCI Dashboard for result visualization +# - Pipeline services for firmware triggers and scheduling +# +# Usage: +# docker compose up -d +# docker compose logs -f +# +# ============================================================================= + +services: + # =========================================================================== + # Core Infrastructure + # =========================================================================== + + mongodb: + image: mongo:7.0 + container_name: openwrt-kci-mongodb + volumes: + - mongodb_data:/data/db + - ./config/mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro + environment: + MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER:-admin} + MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD:?MONGO_PASSWORD is required} + MONGO_INITDB_DATABASE: openwrt_kernelci + networks: + - kci-internal + restart: unless-stopped + healthcheck: + test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"] + interval: 10s + timeout: 5s + retries: 5 + + redis: + image: redis:7-alpine + container_name: openwrt-kci-redis + command: redis-server --appendonly yes + volumes: + - redis_data:/data + networks: + - kci-internal + restart: unless-stopped + healthcheck: + test: ["CMD", "redis-cli", "ping"] + interval: 10s + timeout: 5s + retries: 5 + + minio: + image: minio/minio:latest + container_name: openwrt-kci-minio + command: server /data 
--console-address ":9001" + volumes: + - minio_data:/data + environment: + MINIO_ROOT_USER: ${MINIO_ACCESS_KEY:?MINIO_ACCESS_KEY is required} + MINIO_ROOT_PASSWORD: ${MINIO_SECRET_KEY:?MINIO_SECRET_KEY is required} + networks: + - kci-internal + restart: unless-stopped + healthcheck: + test: ["CMD", "mc", "ready", "local"] + interval: 10s + timeout: 5s + retries: 5 + labels: + - "traefik.enable=true" + - "traefik.http.routers.minio-console.rule=Host(`storage.${DOMAIN:-localhost}`)" + - "traefik.http.routers.minio-console.entrypoints=websecure" + - "traefik.http.routers.minio-console.tls.certresolver=letsencrypt" + - "traefik.http.services.minio-console.loadbalancer.server.port=9001" + + # MinIO bucket initialization + minio-init: + image: minio/mc:latest + container_name: openwrt-kci-minio-init + depends_on: + minio: + condition: service_healthy + environment: + MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY} + MINIO_SECRET_KEY: ${MINIO_SECRET_KEY} + entrypoint: > + /bin/sh -c " + mc alias set minio http://minio:9000 $${MINIO_ACCESS_KEY} $${MINIO_SECRET_KEY}; + mc mb --ignore-existing minio/openwrt-firmware; + mc mb --ignore-existing minio/openwrt-logs; + mc mb --ignore-existing minio/openwrt-artifacts; + mc anonymous set download minio/openwrt-firmware; + mc anonymous set download minio/openwrt-logs; + echo 'MinIO buckets initialized'; + " + networks: + - kci-internal + + # =========================================================================== + # KernelCI API (Maestro) + # =========================================================================== + + kernelci-api: + image: ghcr.io/kernelci/kernelci-api:latest + container_name: openwrt-kci-api + depends_on: + mongodb: + condition: service_healthy + redis: + condition: service_healthy + environment: + SECRET_KEY: ${KCI_SECRET_KEY:?KCI_SECRET_KEY is required} + MONGO_SERVICE: mongodb://${MONGO_USER:-admin}:${MONGO_PASSWORD}@mongodb:27017 + REDIS_HOST: redis + REDIS_PORT: 6379 + ALGORITHM: HS256 + ACCESS_TOKEN_EXPIRE_MINUTES: 480 + volumes: + - ./config/api-config.toml:/home/kernelci/config/kernelci.toml:ro + networks: + - kci-internal + restart: unless-stopped + healthcheck: + test: ["CMD", "curl", "-f", "http://localhost:8001/"] + interval: 10s + timeout: 5s + retries: 5 + labels: + - "traefik.enable=true" + - "traefik.http.routers.api.rule=Host(`api.${DOMAIN:-localhost}`)" + - "traefik.http.routers.api.entrypoints=websecure" + - "traefik.http.routers.api.tls.certresolver=letsencrypt" + - "traefik.http.services.api.loadbalancer.server.port=8001" + + # =========================================================================== + # KernelCI Pipeline Services + # =========================================================================== + + pipeline-trigger: + build: + context: ./openwrt-pipeline + dockerfile: Dockerfile + container_name: openwrt-kci-trigger + depends_on: + kernelci-api: + condition: service_healthy + environment: + KCI_API_URL: http://kernelci-api:8001 + KCI_API_TOKEN: ${KCI_API_TOKEN:?KCI_API_TOKEN is required} + MINIO_ENDPOINT: minio:9000 + MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY} + MINIO_SECRET_KEY: ${MINIO_SECRET_KEY} + PYTHONUNBUFFERED: "1" + volumes: + - ./config/pipeline.yaml:/app/config/pipeline.yaml:ro + - firmware_cache:/app/cache + command: ["python", "-m", "openwrt_pipeline.firmware_trigger"] + networks: + - kci-internal + restart: unless-stopped + + pipeline-scheduler: + build: + context: ./openwrt-pipeline + dockerfile: Dockerfile + container_name: openwrt-kci-scheduler + depends_on: + kernelci-api: + condition: 
service_healthy + environment: + KCI_API_URL: http://kernelci-api:8001 + KCI_API_TOKEN: ${KCI_API_TOKEN} + PYTHONUNBUFFERED: "1" + volumes: + - ./config/pipeline.yaml:/app/config/pipeline.yaml:ro + command: ["python", "-m", "openwrt_pipeline.test_scheduler"] + networks: + - kci-internal + restart: unless-stopped + + # =========================================================================== + # Dashboard + # =========================================================================== + + dashboard: + image: ghcr.io/kernelci/dashboard:latest + container_name: openwrt-kci-dashboard + depends_on: + kernelci-api: + condition: service_healthy + environment: + NEXT_PUBLIC_API_ORIGIN: https://api.${DOMAIN:-localhost} + NEXT_PUBLIC_PROJECT_NAME: OpenWrt + networks: + - kci-internal + restart: unless-stopped + labels: + - "traefik.enable=true" + - "traefik.http.routers.dashboard.rule=Host(`${DOMAIN:-localhost}`)" + - "traefik.http.routers.dashboard.entrypoints=websecure" + - "traefik.http.routers.dashboard.tls.certresolver=letsencrypt" + - "traefik.http.services.dashboard.loadbalancer.server.port=3000" + + # =========================================================================== + # Reverse Proxy + # =========================================================================== + + traefik: + image: traefik:v3.0 + container_name: openwrt-kci-proxy + command: + - "--api.dashboard=true" + - "--api.insecure=true" + - "--providers.docker=true" + - "--providers.docker.exposedbydefault=false" + - "--providers.docker.network=kci-internal" + - "--entrypoints.web.address=:80" + - "--entrypoints.web.http.redirections.entrypoint.to=websecure" + - "--entrypoints.web.http.redirections.entrypoint.scheme=https" + - "--entrypoints.websecure.address=:443" + - "--certificatesresolvers.letsencrypt.acme.httpchallenge=true" + - "--certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web" + - "--certificatesresolvers.letsencrypt.acme.email=${ACME_EMAIL:-admin@example.org}" + - "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json" + - "--log.level=INFO" + ports: + - "80:80" + - "443:443" + - "8080:8080" # Traefik dashboard + volumes: + - /var/run/docker.sock:/var/run/docker.sock:ro + - letsencrypt_data:/letsencrypt + networks: + - kci-internal + restart: unless-stopped + +# ============================================================================= +# Volumes +# ============================================================================= + +volumes: + mongodb_data: + name: openwrt-kci-mongodb + redis_data: + name: openwrt-kci-redis + minio_data: + name: openwrt-kci-minio + letsencrypt_data: + name: openwrt-kci-letsencrypt + firmware_cache: + name: openwrt-kci-firmware-cache + +# ============================================================================= +# Networks +# ============================================================================= + +networks: + kci-internal: + name: openwrt-kci-network + driver: bridge diff --git a/kernelci/labgrid-adapter/.env.example b/kernelci/labgrid-adapter/.env.example new file mode 100644 index 000000000..a7d7c878c --- /dev/null +++ b/kernelci/labgrid-adapter/.env.example @@ -0,0 +1,30 @@ +# KernelCI Labgrid Adapter Configuration +# Copy to .env and fill in your values + +# Lab identification (must match labnet.yaml) +LAB_NAME=labgrid-aparcar + +# KernelCI API connection +KCI_API_URL=https://api.openwrt-kci.aparcar.org +KCI_API_TOKEN= + +# Labgrid coordinator address (gRPC) +LG_COORDINATOR=localhost:20408 + +# Polling configuration 
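+# POLL_INTERVAL: seconds between polls of the KernelCI API for new jobs
+# MAX_CONCURRENT_JOBS: upper bound on test jobs run in parallel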
+POLL_INTERVAL=30
+MAX_CONCURRENT_JOBS=3
+
+# Health checks (seconds between checks, 86400 = 24h)
+HEALTH_CHECK_ENABLED=true
+HEALTH_CHECK_INTERVAL=86400
+
+# Test types this lab supports (comma-separated)
+# Options: firmware, kselftest
+SUPPORTED_TEST_TYPES=firmware
+
+# Optional: MinIO for log storage
+# MINIO_ENDPOINT=minio.example.com:9000
+# MINIO_ACCESS_KEY=
+# MINIO_SECRET_KEY=
+# MINIO_LOGS_BUCKET=test-logs
diff --git a/kernelci/labgrid-adapter/Dockerfile b/kernelci/labgrid-adapter/Dockerfile
new file mode 100644
index 000000000..6d9bdd5aa
--- /dev/null
+++ b/kernelci/labgrid-adapter/Dockerfile
@@ -0,0 +1,73 @@
+# =============================================================================
+# Labgrid KernelCI Adapter
+# =============================================================================
+#
+# This container runs in each lab and provides the bridge between
+# the KernelCI API and the local labgrid infrastructure.
+#
+# Features:
+# - Pull-mode job polling (no inbound connections required)
+# - pytest test execution
+# - Firmware flashing via labgrid
+# - Result submission to KernelCI
+#
+# =============================================================================
+
+FROM python:3.13-slim
+
+# Labels
+LABEL org.opencontainers.image.title="Labgrid KernelCI Adapter"
+LABEL org.opencontainers.image.description="Bridge between KernelCI and labgrid test labs"
+
+# Set working directory
+WORKDIR /app
+
+# Install system dependencies
+RUN apt-get update && apt-get install -y --no-install-recommends \
+    curl \
+    git \
+    openssh-client \
+    # QEMU for emulated targets
+    qemu-system-arm \
+    qemu-system-mips \
+    qemu-system-x86 \
+    # Serial port tools
+    picocom \
+    # Network tools
+    iputils-ping \
+    netcat-openbsd \
+    # Build tools (for some labgrid drivers)
+    build-essential \
+    && rm -rf /var/lib/apt/lists/*
+
+# Copy requirements first (for layer caching)
+COPY requirements.txt .
+
+# Install Python dependencies
+RUN pip install --no-cache-dir -r requirements.txt
+
+# Copy application code
+COPY labgrid_kci_adapter/ ./labgrid_kci_adapter/
+
+# Test suite, conftest, and target configs are bind-mounted from the host
+# at runtime (see docker-compose.yaml). Dockerfile COPY is not a shell
+# command, so conditional forms like `COPY tests/ . 2>/dev/null || true`
+# are invalid, and COPY fails the build when a source is missing -
+# these optional files are therefore not baked into the image.
+
+# Create non-root user
+RUN useradd -m -s /bin/bash adapter
+RUN chown -R adapter:adapter /app
+
+# Create directories for caching and logs
+RUN mkdir -p /app/cache /app/logs && chown adapter:adapter /app/cache /app/logs
+
+# Switch to non-root user
+USER adapter
+
+# Environment
+ENV PYTHONUNBUFFERED=1
+ENV PYTHONDONTWRITEBYTECODE=1
+
+# Default command
+CMD ["python", "-m", "labgrid_kci_adapter.service"]
diff --git a/kernelci/labgrid-adapter/README.md b/kernelci/labgrid-adapter/README.md
new file mode 100644
index 000000000..cf4b46865
--- /dev/null
+++ b/kernelci/labgrid-adapter/README.md
@@ -0,0 +1,198 @@
+# Labgrid KernelCI Adapter
+
+A generic adapter connecting [labgrid](https://labgrid.readthedocs.io/) test
+infrastructure to [KernelCI](https://kernelci.org/) for automated hardware testing.
+
+**This is a reusable component** - while developed for OpenWrt testing, it can be
+used by any project that wants to connect labgrid-managed devices to KernelCI.
+ +## Features + +- **Pull-mode architecture**: Labs poll KernelCI API for jobs (no inbound connections) +- **Pytest integration**: Executes tests via pytest with labgrid plugin +- **Automatic test sync**: Pulls tests from git before each job +- **Health checks**: Automatic device health monitoring +- **Result collection**: Submits results as KernelCI test nodes + +## Architecture + +``` +┌─────────────────────────────────────────────────────────────┐ +│ KernelCI API (Central) │ +│ - Job queue (nodes) │ +│ - Result storage │ +└─────────────────────────────────────────────────────────────┘ + ▲ + │ HTTPS (poll) + ▼ +┌─────────────────────────────────────────────────────────────┐ +│ Labgrid KCI Adapter (Lab) │ +│ ┌─────────┐ ┌──────────┐ ┌────────────┐ ┌───────────┐ │ +│ │ Poller │→ │ Executor │→ │ Test Sync │→ │ Labgrid │ │ +│ └─────────┘ └──────────┘ └────────────┘ └───────────┘ │ +└─────────────────────────────────────────────────────────────┘ + │ + │ gRPC + ▼ +┌─────────────────────────────────────────────────────────────┐ +│ Labgrid Coordinator + Devices │ +│ (Router, SBC, QEMU, etc.) │ +└─────────────────────────────────────────────────────────────┘ +``` + +## Installation + +```bash +pip install labgrid-kci-adapter +# or +docker pull ghcr.io/openwrt/labgrid-kci-adapter +``` + +## Configuration + +All configuration via environment variables: + +```bash +# Required +LAB_NAME=my-lab # Unique lab identifier +KCI_API_URL=https://api.kci.org # KernelCI API endpoint +KCI_API_TOKEN=your-token # API authentication token +LG_COORDINATOR=localhost:20408 # Labgrid coordinator address + +# Tests (pulled before each job) +TESTS_REPO_URL=https://github.com/your/tests.git +TESTS_REPO_BRANCH=main + +# Optional +POLL_INTERVAL=30 # Seconds between job polls +MAX_CONCURRENT_JOBS=3 # Parallel job limit +HEALTH_CHECK_INTERVAL=86400 # Health check interval (24h) +HEALTH_CHECK_ENABLED=true # Enable automatic health checks + +# Storage (optional) +MINIO_ENDPOINT=storage.example.com:9000 +MINIO_ACCESS_KEY=... +MINIO_SECRET_KEY=... 
+MINIO_LOGS_BUCKET=test-logs # Bucket for test logs +``` + +## Usage + +### Docker Compose + +```yaml +services: + labgrid-adapter: + image: ghcr.io/openwrt/labgrid-kci-adapter + environment: + LAB_NAME: my-lab + KCI_API_URL: https://api.kernelci.org + KCI_API_TOKEN: ${KCI_API_TOKEN} + LG_COORDINATOR: labgrid-coordinator:20408 + TESTS_REPO_URL: https://github.com/your/tests.git + volumes: + - ./targets:/app/targets:ro # Labgrid target configs + depends_on: + - labgrid-coordinator +``` + +### Standalone + +```bash +export LAB_NAME=my-lab +export KCI_API_URL=https://api.kernelci.org +export KCI_API_TOKEN=your-token +export LG_COORDINATOR=localhost:20408 +export TESTS_REPO_URL=https://github.com/your/tests.git + +python -m labgrid_kci_adapter.service +``` + +## Target Configuration + +Place labgrid target YAML files in the `targets/` directory: + +```yaml +# targets/my-device.yaml +targets: + main: + resources: + RemotePlace: + name: my-device + drivers: + ShellDriver: + prompt: 'root@.*:' + login_prompt: 'login:' + username: root +``` + +## Job Format + +The adapter expects KernelCI job nodes with: + +```json +{ + "kind": "job", + "state": "available", + "data": { + "device_type": "my-device", + "test_plan": "base", + "tests": ["test_boot", "test_network"], + "timeout": 1800, + "firmware_url": "https://...", + "tests_repo": "https://github.com/...", + "tests_branch": "main" + } +} +``` + +## Test Structure + +Tests are standard pytest files using the labgrid plugin: + +```python +import pytest + +def test_device_boots(target): + """Test that device boots successfully.""" + shell = target.get_driver("ShellDriver") + shell.run_check("uname -a") + +def test_network(target): + """Test network connectivity.""" + shell = target.get_driver("ShellDriver") + shell.run_check("ping -c 3 8.8.8.8") +``` + +## Using with Other Projects + +This adapter is project-agnostic. To use with your project: + +1. Set up a KernelCI instance (or use the public one) +2. Create test jobs with your device types +3. Configure the adapter with your tests repository +4. Create labgrid target files for your devices + +The adapter will: +- Poll for jobs matching your device types +- Clone/update your tests repository +- Execute tests via pytest + labgrid +- Submit results to KernelCI + +## Development + +```bash +# Install dependencies +pip install -e ".[dev]" + +# Run tests +pytest + +# Lint +ruff check . +ruff format . +``` + +## License + +See repository root for license information. diff --git a/kernelci/labgrid-adapter/docker-compose.yaml b/kernelci/labgrid-adapter/docker-compose.yaml new file mode 100644 index 000000000..a0621026d --- /dev/null +++ b/kernelci/labgrid-adapter/docker-compose.yaml @@ -0,0 +1,55 @@ +version: "3.8" + +services: + adapter: + build: . 
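+    # Build from the local Dockerfile; the prebuilt image
+    # ghcr.io/openwrt/labgrid-kci-adapter (see the README) can be used
+    # instead by swapping `build: .` for an `image:` entry.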
+ container_name: kernelci-adapter + restart: unless-stopped + environment: + # Lab identification + LAB_NAME: "${LAB_NAME:-labgrid-aparcar}" + + # KernelCI API connection + KCI_API_URL: "${KCI_API_URL:-https://api.openwrt-kci.aparcar.org}" + KCI_API_TOKEN: "${KCI_API_TOKEN}" + + # Labgrid coordinator + LG_COORDINATOR: "${LG_COORDINATOR:-localhost:20408}" + + # Polling and execution + POLL_INTERVAL: "${POLL_INTERVAL:-30}" + MAX_CONCURRENT_JOBS: "${MAX_CONCURRENT_JOBS:-3}" + + # Health checks + HEALTH_CHECK_ENABLED: "${HEALTH_CHECK_ENABLED:-true}" + HEALTH_CHECK_INTERVAL: "${HEALTH_CHECK_INTERVAL:-86400}" + + # Test types this lab supports + SUPPORTED_TEST_TYPES: "${SUPPORTED_TEST_TYPES:-firmware}" + + # Optional: MinIO for log storage + MINIO_ENDPOINT: "${MINIO_ENDPOINT:-}" + MINIO_ACCESS_KEY: "${MINIO_ACCESS_KEY:-}" + MINIO_SECRET_KEY: "${MINIO_SECRET_KEY:-}" + MINIO_LOGS_BUCKET: "${MINIO_LOGS_BUCKET:-test-logs}" + + volumes: + # Mount labgrid target definitions + - ../../targets:/app/targets:ro + # Mount test suite + - ../../tests:/app/tests:ro + # Mount conftest and strategies + - ../../strategies:/app/strategies:ro + # Firmware cache (persistent) + - firmware-cache:/app/cache + + # Host network for labgrid serial/network access + network_mode: host + + # For USB serial port access + privileged: true + devices: + - /dev:/dev + +volumes: + firmware-cache: diff --git a/kernelci/labgrid-adapter/labgrid_kci_adapter.egg-info/PKG-INFO b/kernelci/labgrid-adapter/labgrid_kci_adapter.egg-info/PKG-INFO new file mode 100644 index 000000000..1c4024220 --- /dev/null +++ b/kernelci/labgrid-adapter/labgrid_kci_adapter.egg-info/PKG-INFO @@ -0,0 +1,216 @@ +Metadata-Version: 2.4 +Name: labgrid-kci-adapter +Version: 0.1.0 +Summary: Adapter connecting labgrid test infrastructure to KernelCI +Requires-Python: >=3.11 +Description-Content-Type: text/markdown +Requires-Dist: labgrid @ git+https://github.com/aparcar/labgrid.git@aparcar/staging +Requires-Dist: httpx>=0.27.0 +Requires-Dist: pydantic>=2.5.0 +Requires-Dist: pydantic-settings>=2.1.0 +Requires-Dist: pyyaml>=6.0.1 +Requires-Dist: minio>=7.2.0 +Requires-Dist: structlog>=24.1.0 +Provides-Extra: dev +Requires-Dist: pytest>=8.0.0; extra == "dev" +Requires-Dist: pytest-asyncio>=0.23.0; extra == "dev" +Requires-Dist: ruff>=0.1.0; extra == "dev" + +# Labgrid KernelCI Adapter + +A generic adapter connecting [labgrid](https://labgrid.readthedocs.io/) test +infrastructure to [KernelCI](https://kernelci.org/) for automated hardware testing. + +**This is a reusable component** - while developed for OpenWrt testing, it can be +used by any project that wants to connect labgrid-managed devices to KernelCI. 
diff --git a/kernelci/labgrid-adapter/labgrid_kci_adapter.egg-info/SOURCES.txt b/kernelci/labgrid-adapter/labgrid_kci_adapter.egg-info/SOURCES.txt new file mode 100644 index 000000000..0be0878ac --- /dev/null +++ b/kernelci/labgrid-adapter/labgrid_kci_adapter.egg-info/SOURCES.txt @@ -0,0 +1,25 @@ +README.md +pyproject.toml +labgrid_kci_adapter/__init__.py +labgrid_kci_adapter/config.py +labgrid_kci_adapter/device_discovery.py +labgrid_kci_adapter/executor.py +labgrid_kci_adapter/health_check.py +labgrid_kci_adapter/ktap_parser.py +labgrid_kci_adapter/labgrid_client.py +labgrid_kci_adapter/models.py +labgrid_kci_adapter/poller.py +labgrid_kci_adapter/service.py +labgrid_kci_adapter/test_sync.py +labgrid_kci_adapter.egg-info/PKG-INFO +labgrid_kci_adapter.egg-info/SOURCES.txt +labgrid_kci_adapter.egg-info/dependency_links.txt +labgrid_kci_adapter.egg-info/entry_points.txt +labgrid_kci_adapter.egg-info/requires.txt +labgrid_kci_adapter.egg-info/top_level.txt +tests/test_device_discovery.py +tests/test_executor.py +tests/test_ktap_parser.py +tests/test_labgrid_client.py +tests/test_poller.py +tests/test_test_sync.py \ No newline at end of file diff --git a/kernelci/labgrid-adapter/labgrid_kci_adapter.egg-info/dependency_links.txt b/kernelci/labgrid-adapter/labgrid_kci_adapter.egg-info/dependency_links.txt new file mode 100644 index 000000000..8b1378917 --- /dev/null +++ b/kernelci/labgrid-adapter/labgrid_kci_adapter.egg-info/dependency_links.txt @@ -0,0 +1 @@ + diff --git a/kernelci/labgrid-adapter/labgrid_kci_adapter.egg-info/entry_points.txt b/kernelci/labgrid-adapter/labgrid_kci_adapter.egg-info/entry_points.txt new file mode 100644 index 000000000..e0d896752 --- /dev/null +++ b/kernelci/labgrid-adapter/labgrid_kci_adapter.egg-info/entry_points.txt @@ -0,0 +1,2 @@ +[console_scripts] +labgrid-kci-adapter = labgrid_kci_adapter.service:run diff --git a/kernelci/labgrid-adapter/labgrid_kci_adapter.egg-info/requires.txt b/kernelci/labgrid-adapter/labgrid_kci_adapter.egg-info/requires.txt new file mode 100644 index 000000000..912a36493 --- /dev/null +++ b/kernelci/labgrid-adapter/labgrid_kci_adapter.egg-info/requires.txt @@ -0,0 +1,12 @@ +labgrid @ git+https://github.com/aparcar/labgrid.git@aparcar/staging +httpx>=0.27.0 +pydantic>=2.5.0 +pydantic-settings>=2.1.0 +pyyaml>=6.0.1 +minio>=7.2.0 +structlog>=24.1.0 + +[dev] +pytest>=8.0.0 +pytest-asyncio>=0.23.0 +ruff>=0.1.0 diff --git a/kernelci/labgrid-adapter/labgrid_kci_adapter.egg-info/top_level.txt b/kernelci/labgrid-adapter/labgrid_kci_adapter.egg-info/top_level.txt new file mode 100644 index 000000000..dabf89206 --- /dev/null +++ b/kernelci/labgrid-adapter/labgrid_kci_adapter.egg-info/top_level.txt @@ -0,0 +1 @@ +labgrid_kci_adapter diff --git a/kernelci/labgrid-adapter/labgrid_kci_adapter/__init__.py b/kernelci/labgrid-adapter/labgrid_kci_adapter/__init__.py new file mode 100644 index 000000000..fe84ec94a --- /dev/null +++ b/kernelci/labgrid-adapter/labgrid_kci_adapter/__init__.py @@ -0,0 +1,13 @@ +""" +Labgrid KernelCI Adapter + +This package provides the bridge between KernelCI and labgrid test labs. 
+ +Components: +- service: Main adapter service +- poller: Job polling from KernelCI API +- executor: Test execution using labgrid and pytest +- results: Result collection and submission +""" + +__version__ = "0.1.0" diff --git a/kernelci/labgrid-adapter/labgrid_kci_adapter/__main__.py b/kernelci/labgrid-adapter/labgrid_kci_adapter/__main__.py new file mode 100644 index 000000000..6fcdd779c --- /dev/null +++ b/kernelci/labgrid-adapter/labgrid_kci_adapter/__main__.py @@ -0,0 +1,17 @@ +"""Entry point for running labgrid_kci_adapter as a module.""" +import asyncio +import logging +import os + +# Configure standard logging before importing other modules +log_level = os.environ.get("LOG_LEVEL", "INFO").upper() +logging.basicConfig( + level=getattr(logging, log_level, logging.INFO), + format="%(asctime)s [%(levelname)s] %(name)s: %(message)s", + datefmt="%Y-%m-%dT%H:%M:%S", +) + +from .service import main + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/kernelci/labgrid-adapter/labgrid_kci_adapter/config.py b/kernelci/labgrid-adapter/labgrid_kci_adapter/config.py new file mode 100644 index 000000000..03fea8463 --- /dev/null +++ b/kernelci/labgrid-adapter/labgrid_kci_adapter/config.py @@ -0,0 +1,124 @@ +""" +Configuration for Labgrid KernelCI Adapter. +""" + +from pathlib import Path + +from pydantic import Field +from pydantic_settings import BaseSettings + + +class Settings(BaseSettings): + """Adapter settings from environment variables.""" + + # Lab identification + lab_name: str = Field( + default="openwrt-lab", + description="Unique name for this lab", + ) + + # KernelCI API connection + kci_api_url: str = Field( + default="http://localhost:8001", + description="KernelCI API URL", + ) + kci_api_token: str = Field( + default="", + description="KernelCI API authentication token", + ) + + # Labgrid coordinator (gRPC) + lg_coordinator: str = Field( + default="localhost:20408", + description="Labgrid coordinator gRPC address (host:port)", + ) + + # MinIO storage (optional) + minio_endpoint: str = Field(default="") + minio_access_key: str = Field(default="") + minio_secret_key: str = Field(default="") + minio_secure: bool = Field(default=False) + minio_logs_bucket: str = Field( + default="test-logs", + description="MinIO bucket name for test logs", + ) + + # Polling configuration + poll_interval: int = Field( + default=30, + description="Seconds between job polls", + ) + max_concurrent_jobs: int = Field( + default=3, + description="Maximum concurrent jobs", + ) + + # Health check configuration + health_check_interval: int = Field( + default=86400, + description="Seconds between health checks (default: 24h)", + ) + health_check_enabled: bool = Field( + default=True, + description="Enable automatic health checks", + ) + + # Device discovery configuration + device_discovery_interval: int = Field( + default=300, + description="Seconds between device discovery refreshes (default: 5min)", + ) + require_target_files: bool = Field( + default=True, + description="Only accept jobs for devices with target YAML files", + ) + + # Paths + targets_dir: Path = Field( + default=Path("/app/targets"), + description="Directory containing labgrid target YAML files", + ) + tests_dir: Path = Field( + default=Path("/app/tests"), + description="Directory containing pytest test files", + ) + firmware_cache: Path = Field( + default=Path("/app/cache"), + description="Directory for caching firmware files", + ) + + # Test repository (pulled before each job execution) + tests_repo_url: str = Field( + default="", 
+ description="Git URL for tests repository (if empty, uses local tests_dir)", + ) + tests_repo_branch: str = Field( + default="main", + description="Branch to use for tests repository", + ) + tests_repo_subdir: str = Field( + default="", + description="Subdirectory within tests repository containing tests", + ) + + # Test type configuration + supported_test_types: str = Field( + default="firmware", + description="Comma-separated list of test types this lab supports", + ) + + # Logging + log_level: str = Field(default="INFO") + + def get_supported_test_types(self) -> list[str]: + """Get list of supported test types.""" + return [t.strip() for t in self.supported_test_types.split(",") if t.strip()] + + class Config: + env_file = ".env" + env_prefix = "" + case_sensitive = False + + +# Global settings instance +settings = Settings() diff --git a/kernelci/labgrid-adapter/labgrid_kci_adapter/device_discovery.py b/kernelci/labgrid-adapter/labgrid_kci_adapter/device_discovery.py new file mode 100644 index 000000000..85bdd23fd --- /dev/null +++ b/kernelci/labgrid-adapter/labgrid_kci_adapter/device_discovery.py @@ -0,0 +1,263 @@ +""" +Device Discovery Manager + +Discovers devices from the labgrid coordinator and provides +a unified interface for device/feature enumeration. + +Replaces static target file scanning with dynamic coordinator-based +discovery filtered by lab name. +""" + +import asyncio +import logging +from dataclasses import dataclass, field +from pathlib import Path +from time import monotonic + +import yaml + +from .labgrid_client import LabgridClient, Place + +logger = logging.getLogger(__name__) + + +@dataclass +class DiscoveredDevice: + """Represents a discovered device type with its metadata.""" + + device_type: str + places: list[Place] = field(default_factory=list) + features: set[str] = field(default_factory=set) + has_target_file: bool = False + + @property + def instance_count(self) -> int: + """Number of physical instances of this device type.""" + return len(self.places) + + +class DeviceDiscoveryManager: + """ + Manages dynamic device discovery from labgrid coordinator. + + Replaces static target file scanning with coordinator-based + discovery filtered by lab name. + + Place naming convention: + {lab_name}-{device_type}[-{instance}] + + Examples: + - aparcar-openwrt_one + - aparcar-openwrt_one-2 + - hsn-bananapi_bpi-r4 + + Alternative: Use explicit tags on places: + labgrid-client set-tags aparcar-openwrt_one \\ + device_type=openwrt_one lab=aparcar features=wifi,wan_port + """ + + def __init__( + self, + labgrid_client: LabgridClient, + targets_dir: Path | None = None, + refresh_interval: float = 300.0, + require_target_files: bool = False, + ): + """ + Initialize the discovery manager. + + Args: + labgrid_client: Client for coordinator queries + targets_dir: Directory containing target YAML files (optional) + refresh_interval: Seconds between automatic cache refreshes + require_target_files: If True, filter out devices without target files + """ + self._client = labgrid_client + self.targets_dir = targets_dir + self.refresh_interval = refresh_interval + self.require_target_files = require_target_files + + # Discovered state + self._devices: dict[str, DiscoveredDevice] = {} + self._last_refresh: float = 0 + self._lock = asyncio.Lock() + + async def discover( + self, force_refresh: bool = False + ) -> dict[str, DiscoveredDevice]: + """ + Discover devices from coordinator. 
+ + Args: + force_refresh: Force refresh even if cache is valid + + Returns: + Dict mapping device_type to DiscoveredDevice + """ + async with self._lock: + now = monotonic() + + # Use cached if still valid + if ( + not force_refresh + and self._devices + and (now - self._last_refresh) < self.refresh_interval + ): + return self._devices + + logger.info("Discovering devices from coordinator") + + # Get all places from coordinator (each lab has its own coordinator) + all_places = await self._client.get_places(refresh=True) + places = list(all_places.values()) + + logger.info(f"Found {len(places)} places on coordinator") + + # Group by device type + devices: dict[str, DiscoveredDevice] = {} + + for place in places: + device_type = place.device_type + if not device_type: + logger.warning( + f"Place {place.name} has no device_type, skipping" + ) + continue + + if device_type not in devices: + devices[device_type] = DiscoveredDevice(device_type=device_type) + + devices[device_type].places.append(place) + + # Extract features from place tags + if place.tags and "features" in place.tags: + features = place.tags["features"].split(",") + devices[device_type].features.update( + f.strip() for f in features if f.strip() + ) + + # Validate target files and extract additional features + if self.targets_dir: + self._validate_and_enrich(devices) + + # Filter out devices without target files if required + if self.require_target_files: + devices = { + dt: dev + for dt, dev in devices.items() + if dev.has_target_file + } + + self._devices = devices + self._last_refresh = now + + logger.info( + f"Discovered {len(devices)} device types: {list(devices.keys())}" + ) + + return devices + + def _validate_and_enrich( + self, devices: dict[str, DiscoveredDevice] + ) -> None: + """ + Validate target files exist and extract features. 
+ + Args: + devices: Dict of discovered devices to validate/enrich + """ + for device_type, device in devices.items(): + target_file = self.targets_dir / f"{device_type}.yaml" + + if target_file.exists(): + device.has_target_file = True + + # Extract features from target file + try: + with open(target_file) as f: + config = yaml.safe_load(f) + + file_features = self._extract_features_from_config(config) + device.features.update(file_features) + + except Exception as e: + logger.warning(f"Error reading target file {target_file}: {e}") + else: + logger.warning( + f"No target file for device type {device_type}: {target_file}" + ) + + def _extract_features_from_config(self, config: dict) -> set[str]: + """Extract features from labgrid target configuration.""" + features = set() + + if not config: + return features + + # Check for explicit features at top level + if "features" in config: + features.update(config["features"]) + return features + + # Check targets section + targets = config.get("targets", {}) + for target_name, target_config in targets.items(): + if not isinstance(target_config, dict): + continue + + # Check for features in target + if "features" in target_config: + features.update(target_config["features"]) + continue + + # Infer from resources/drivers + resources = target_config.get("resources", []) + drivers = target_config.get("drivers", []) + + # Handle resources as list of dicts + resource_names = set() + if isinstance(resources, list): + for r in resources: + if isinstance(r, dict): + resource_names.update(r.keys()) + elif isinstance(resources, dict): + resource_names.update(resources.keys()) + + # Handle drivers as list of dicts + driver_names = set() + if isinstance(drivers, list): + for d in drivers: + if isinstance(d, dict): + driver_names.update(d.keys()) + elif isinstance(drivers, dict): + driver_names.update(drivers.keys()) + + if "NetworkService" in resource_names or "WifiAP" in resource_names: + features.add("wifi") + if "EthernetInterface" in resource_names: + features.add("wan_port") + if any("USB" in r for r in resource_names): + features.add("usb") + if "QEMUDriver" in driver_names: + features.add("hwsim") + + return features + + def get_device_types(self) -> list[str]: + """Get list of discovered device types.""" + return list(self._devices.keys()) + + def get_all_features(self) -> list[str]: + """Get aggregated list of all features across devices.""" + all_features = set() + for device in self._devices.values(): + all_features.update(device.features) + return list(all_features) + + def get_device(self, device_type: str) -> DiscoveredDevice | None: + """Get discovered device by type.""" + return self._devices.get(device_type) + + def has_device(self, device_type: str) -> bool: + """Check if device type is discovered.""" + return device_type in self._devices diff --git a/kernelci/labgrid-adapter/labgrid_kci_adapter/executor.py b/kernelci/labgrid-adapter/labgrid_kci_adapter/executor.py new file mode 100644 index 000000000..0173b02cf --- /dev/null +++ b/kernelci/labgrid-adapter/labgrid_kci_adapter/executor.py @@ -0,0 +1,677 @@ +""" +Test Executor for Labgrid + +Executes test jobs using labgrid for device control and pytest +for test execution. Uses pytest's programmatic API for execution +and result collection. + +For kselftest jobs, the executor parses KTAP output from test stdout +to extract individual subtest results. 
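+
+An illustrative KTAP fragment (format per the kernel's KTAP spec) that
+would be expanded into two subtest results:
+
+    KTAP version 1
+    1..2
+    ok 1 test_first
+    not ok 2 test_second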
+""" + +import asyncio +import logging +import os +import tempfile +from datetime import datetime +from pathlib import Path + +import httpx +from minio import Minio + +from .config import settings +from .ktap_parser import parse_ktap, ktap_results_to_dict +from .labgrid_client import LabgridClient +from .models import JobResult, TestResult, TestStatus +from .test_sync import ensure_tests + +logger = logging.getLogger(__name__) + + +class ResultCollectorPlugin: + """ + Pytest plugin to collect test results programmatically. + + Captures test outcomes, durations, and error messages without + requiring external JSON report files. + + For kselftest tests, also captures stdout which may contain KTAP + output for subtest parsing. + """ + + def __init__(self): + self.results: list[dict] = [] + self.start_time: datetime | None = None + self.end_time: datetime | None = None + + def pytest_sessionstart(self, session): + self.start_time = datetime.utcnow() + + def pytest_sessionfinish(self, session, exitstatus): + self.end_time = datetime.utcnow() + + def pytest_runtest_logreport(self, report): + """Collect test results from each test phase.""" + # Only capture the 'call' phase (actual test execution) + # Skip 'setup' and 'teardown' phases + if report.when != "call": + return + + result = { + "nodeid": report.nodeid, + "outcome": report.outcome, + "duration": report.duration, + "error_message": None, + "stdout": None, + "stderr": None, + } + + if report.failed: + if hasattr(report, "longreprtext"): + result["error_message"] = report.longreprtext + elif hasattr(report.longrepr, "reprcrash"): + result["error_message"] = str(report.longrepr.reprcrash) + + # Capture stdout/stderr for KTAP parsing + if hasattr(report, "capstdout") and report.capstdout: + result["stdout"] = report.capstdout + if hasattr(report, "capstderr") and report.capstderr: + result["stderr"] = report.capstderr + + # Also check sections for captured output + for section_name, content in report.sections: + if "stdout" in section_name.lower() and content: + result["stdout"] = content + elif "stderr" in section_name.lower() and content: + result["stderr"] = content + + self.results.append(result) + + def pytest_collection_modifyitems(self, items): + """Log collected test items.""" + logger.info(f"Collected {len(items)} tests") + + +class TestExecutor: + """ + Executes test jobs using labgrid and pytest. + + The executor: + 1. Downloads firmware artifacts + 2. Runs pytest with labgrid plugin for device control + 3. Collects results via custom plugin + 4. 
Uploads logs to storage + """ + + def __init__(self, lab_name: str, targets_dir: Path, tests_dir: Path): + self.lab_name = lab_name + self.targets_dir = targets_dir + self.tests_dir = tests_dir + self.cache_dir = Path(settings.firmware_cache) + self.cache_dir.mkdir(parents=True, exist_ok=True) + + self._http_client: httpx.AsyncClient | None = None + self._minio: Minio | None = None + self._labgrid_client = LabgridClient() + + async def initialize(self) -> None: + """Initialize HTTP client and storage client.""" + self._http_client = httpx.AsyncClient( + timeout=httpx.Timeout(300.0), + follow_redirects=True, + ) + + if settings.minio_endpoint: + self._minio = Minio( + settings.minio_endpoint, + access_key=settings.minio_access_key, + secret_key=settings.minio_secret_key, + secure=settings.minio_secure, + ) + + async def cleanup(self) -> None: + """Cleanup resources.""" + if self._http_client: + await self._http_client.aclose() + + @property + def http_client(self) -> httpx.AsyncClient: + if self._http_client is None: + raise RuntimeError("Executor not initialized") + return self._http_client + + async def execute_job(self, job: dict) -> JobResult: + """ + Execute a test job. + + Args: + job: Job definition from KernelCI API + + Returns: + JobResult with test results + """ + job_id = job.get("id") or job.get("_id") + job_data = job.get("data", {}) + device_type = job_data.get("device_type") + tests = job_data.get("tests", []) + timeout = job_data.get("timeout", 1800) + + # Get firmware info from parent node if available + firmware_id = job.get("parent") # None if no parent + firmware_url = job_data.get("firmware_url") + + # Test type for logging/debugging + test_type = job_data.get("test_type", "firmware") + + # Tests can be fetched per-job (LAVA pattern) or use static tests_dir + tests_repo_url = job_data.get("tests_repo") + tests_repo_branch = job_data.get("tests_branch", "main") + tests_subdir = job_data.get("tests_subdir") # Override for kselftest, etc. 
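+        # Illustrative job_data shape (cf. the adapter README "Job Format"):
+        #   {"device_type": "openwrt_one", "test_plan": "base",
+        #    "tests": ["test_shell"], "timeout": 1800,
+        #    "firmware_url": "https://...", "tests_repo": "https://...",
+        #    "tests_branch": "main"}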
+ + logger.info( + f"Executing job {job_id} on device {device_type} " + f"(test_type={test_type})" + ) + + start_time = datetime.utcnow() + test_results: list[TestResult] = [] + console_log_url = None + + # Construct place name + place_name = f"{self.lab_name}-{device_type}" + + try: + # Acquire the labgrid place before running tests + logger.info(f"Acquiring place: {place_name}") + if not await self._labgrid_client.acquire_place(place_name): + raise RuntimeError(f"Failed to acquire place: {place_name}") + + with tempfile.TemporaryDirectory(prefix=f"job-{job_id}-") as tmpdir: + tmpdir_path = Path(tmpdir) + console_log_path = tmpdir_path / "console.log" + lg_log_dir = tmpdir_path / "labgrid-logs" + lg_log_dir.mkdir(exist_ok=True) + + # Ensure tests are up-to-date before execution + # Uses per-job repo if specified, otherwise uses configured repo + # tests_subdir can be overridden per-job (e.g., for kselftest) + tests_dir = await ensure_tests( + repo_url=tests_repo_url, + branch=tests_repo_branch, + subdir=tests_subdir, + ) + + # Download firmware if URL provided + firmware_path = None + if firmware_url: + firmware_path = await self._download_firmware( + url=firmware_url, + dest_dir=tmpdir_path, + ) + + # Run pytest and collect results + collector, output = await self._run_pytest( + device_type=device_type, + tests=tests, + tests_dir=tests_dir, + firmware_path=firmware_path, + timeout=timeout, + log_dir=lg_log_dir, + ) + + # Save console output + console_log_path.write_text(output) + + # Convert collected results + test_results = self._convert_results( + collector=collector, + job_id=job_id, + firmware_id=firmware_id, + device_type=device_type, + ) + + # Find boot log (serial console output from labgrid) + boot_log_path = self._find_boot_log(lg_log_dir) + + # Combine boot log + pytest output into single log file + combined_log_path = tmpdir_path / "combined.log" + await self._combine_logs( + boot_log_path=boot_log_path, + console_log_path=console_log_path, + output_path=combined_log_path, + ) + + # Upload combined log + if combined_log_path.exists(): + console_log_url = await self._upload_log( + log_path=combined_log_path, + job_id=job_id, + log_name="console.log", + ) + + except Exception as e: + logger.exception(f"Job {job_id} failed: {e}") + test_results = [ + TestResult( + id=f"{job_id}:error", + job_id=job_id, + firmware_id=firmware_id, + device_type=device_type, + lab_name=self.lab_name, + test_name="job_execution", + status=TestStatus.ERROR, + duration=0, + start_time=start_time, + error_message=str(e), + ) + ] + + finally: + # Always release the place after test execution + logger.info(f"Releasing place: {place_name}") + await self._labgrid_client.release_place(place_name) + + end_time = datetime.utcnow() + duration = (end_time - start_time).total_seconds() + + passed = sum(1 for r in test_results if r.status == TestStatus.PASS) + failed = sum(1 for r in test_results if r.status == TestStatus.FAIL) + skipped = sum(1 for r in test_results if r.status == TestStatus.SKIP) + errors = sum(1 for r in test_results if r.status == TestStatus.ERROR) + + return JobResult( + job_id=job_id, + firmware_id=firmware_id, + device_type=device_type, + lab_name=self.lab_name, + status="pass" if (errors == 0 and failed == 0) else "fail", + total_tests=len(test_results), + passed_tests=passed, + failed_tests=failed, + skipped_tests=skipped, + error_tests=errors, + started_at=start_time, + completed_at=end_time, + duration=duration, + test_results=test_results, + console_log_url=console_log_url, + ) + + 
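+    # Note: the firmware cache below is keyed by filename alone, so two
+    # builds published under the same filename would collide; fine for
+    # versioned release URLs, worth watching for snapshots that reuse names.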
async def _download_firmware(self, url: str, dest_dir: Path) -> Path | None: + """Download firmware from URL to cache directory.""" + filename = url.split("/")[-1] + cache_path = self.cache_dir / filename + if cache_path.exists(): + logger.info(f"Using cached firmware: {cache_path}") + return cache_path + + logger.info(f"Downloading firmware: {url}") + try: + response = await self.http_client.get(url) + response.raise_for_status() + cache_path.write_bytes(response.content) + return cache_path + except Exception as e: + logger.warning(f"Failed to download firmware: {e}") + return None + + async def _run_pytest( + self, + device_type: str, + tests: list[str], + tests_dir: Path, + firmware_path: Path | None, + timeout: int, + log_dir: Path | None = None, + ) -> tuple[ResultCollectorPlugin, str]: + """ + Run pytest as a subprocess and collect results. + + Uses subprocess to avoid event loop conflicts with labgrid's + async coordinator session. + + Args: + device_type: Device type for labgrid target selection + tests: List of test name patterns to run + tests_dir: Directory containing pytest test files + firmware_path: Path to firmware file (optional) + timeout: Test timeout in seconds + log_dir: Directory to store labgrid serial logs (boot log) + + Returns: + Tuple of (result collector plugin, console output) + """ + target_file = self.targets_dir / f"{device_type}.yaml" + + # Build pytest arguments + # Match Makefile approach: --lg-log --log-cli-level=CONSOLE --lg-colored-steps + # This streams all labgrid console output (boot log) directly to pytest output + args = [ + "pytest", + str(tests_dir), + "-v", + "--tb=short", + f"--lg-env={target_file}", + # Stream all logging (including labgrid serial console) to output + "--log-cli-level=CONSOLE", + # Show labgrid step markers in output + "--lg-colored-steps", + ] + + # Add labgrid logging to capture serial console (boot log) + if log_dir: + args.append(f"--lg-log={log_dir}") + + # Filter specific tests if provided + if tests: + args.extend(["-k", " or ".join(tests)]) + + # Set labgrid environment variables + env = os.environ.copy() + env["LG_COORDINATOR"] = settings.lg_coordinator + # LG_PLACE is the labgrid place name for remote device access + env["LG_PLACE"] = f"{settings.lab_name}-{device_type}" + if firmware_path: + # LG_IMAGE is used by target YAML templates for firmware path + env["LG_IMAGE"] = str(firmware_path) + # Also set LG_FIRMWARE for backwards compatibility + env["LG_FIRMWARE"] = str(firmware_path) + + # Run pytest as subprocess + logger.info(f"Running pytest: {' '.join(args)}") + proc = await asyncio.create_subprocess_exec( + *args, + env=env, + stdout=asyncio.subprocess.PIPE, + stderr=asyncio.subprocess.STDOUT, + cwd=str(self.targets_dir.parent), # Run from labgrid-adapter dir + ) + + try: + stdout, _ = await asyncio.wait_for(proc.communicate(), timeout=timeout) + except asyncio.TimeoutError: + proc.kill() + await proc.wait() + logger.error(f"pytest timed out after {timeout}s") + stdout = b"pytest timed out" + + output = stdout.decode("utf-8", errors="replace") + exit_code = proc.returncode + + logger.info(f"pytest completed with exit code: {exit_code}") + + # Log output if pytest failed or had issues + if exit_code != 0: + logger.warning(f"pytest output:\n{output[-2000:]}") # Last 2000 chars + + # Parse results from pytest output + # Create a collector to hold results (parsed from output) + collector = ResultCollectorPlugin() + collector.start_time = datetime.utcnow() + collector.end_time = datetime.utcnow() + + # Try to parse 
pytest output for test results + collector.results = self._parse_pytest_output(output) + + return collector, output + + def _parse_pytest_output(self, output: str) -> list[dict]: + """ + Parse pytest verbose output to extract test results. + + Looks for lines like: + tests/test_base.py::test_shell PASSED + tests/test_base.py::test_uname FAILED + + Args: + output: Raw pytest output + + Returns: + List of result dicts with nodeid, outcome, duration + """ + import re + results = [] + + # Match pytest verbose output: nodeid STATUS [duration] + # Examples: + # test_base.py::test_shell PASSED [ 50%] + # test_base.py::test_uname FAILED [100%] + pattern = r'^([\w/\-_\.]+::\w+)\s+(PASSED|FAILED|SKIPPED|ERROR)' + + for line in output.split('\n'): + match = re.search(pattern, line) + if match: + nodeid = match.group(1) + status = match.group(2).lower() + # Map to pytest internal format + outcome_map = { + 'passed': 'passed', + 'failed': 'failed', + 'skipped': 'skipped', + 'error': 'failed', + } + results.append({ + 'nodeid': nodeid, + 'outcome': outcome_map.get(status, 'failed'), + 'duration': 0, # Not available from verbose output + 'error_message': None, + 'stdout': None, + 'stderr': None, + }) + + logger.info(f"Parsed {len(results)} test results from output") + return results + + def _convert_results( + self, + collector: ResultCollectorPlugin, + job_id: str, + firmware_id: str, + device_type: str, + ) -> list[TestResult]: + """ + Convert collected pytest results to TestResult objects. + + For tests that contain KTAP output in their stdout, parse the + KTAP to extract individual subtest results. This is used for + kselftest tests that run multiple subtests and report via KTAP. + """ + test_results = [] + + # Pytest uses past tense: "passed", "failed", "skipped" + pytest_status_map = { + "passed": TestStatus.PASS, + "failed": TestStatus.FAIL, + "skipped": TestStatus.SKIP, + } + + # KTAP uses present tense: "pass", "fail", "skip", "error" + ktap_status_map = { + "pass": TestStatus.PASS, + "fail": TestStatus.FAIL, + "skip": TestStatus.SKIP, + "error": TestStatus.ERROR, + } + + for result in collector.results: + nodeid = result["nodeid"] + test_name = nodeid.split("::")[-1] if "::" in nodeid else nodeid + stdout = result.get("stdout", "") + + # Check if stdout contains KTAP output + ktap_results = self._try_parse_ktap(stdout, test_name) + + if ktap_results: + # Expand KTAP subtests into individual TestResult objects + for ktap in ktap_results: + ktap_status = ktap_status_map.get( + ktap["status"], TestStatus.ERROR + ) + test_results.append( + TestResult( + id=f"{job_id}:{ktap['name']}", + job_id=job_id, + firmware_id=firmware_id, + device_type=device_type, + lab_name=self.lab_name, + test_name=ktap["name"], + test_path=f"{nodeid}::{ktap['name']}", + status=ktap_status, + duration=ktap.get("duration", 0), + start_time=collector.start_time or datetime.utcnow(), + error_message=ktap.get("error_message"), + ) + ) + else: + # Standard pytest result (no KTAP) + status = pytest_status_map.get(result["outcome"], TestStatus.ERROR) + test_results.append( + TestResult( + id=f"{job_id}:{test_name}", + job_id=job_id, + firmware_id=firmware_id, + device_type=device_type, + lab_name=self.lab_name, + test_name=test_name, + test_path=nodeid, + status=status, + duration=result["duration"], + start_time=collector.start_time or datetime.utcnow(), + error_message=result.get("error_message"), + ) + ) + + return test_results + + def _try_parse_ktap( + self, output: str, prefix: str = "" + ) -> list[dict] | None: + """ + Try 
to parse KTAP output from test stdout. + + Returns parsed results if KTAP is detected, None otherwise. + + Args: + output: Test stdout that may contain KTAP + prefix: Prefix for test names (usually the parent test name) + + Returns: + List of dicts with 'name', 'status', 'duration', 'error_message' + or None if no KTAP detected + """ + if not output: + return None + + # Check for KTAP/TAP markers + if not any( + marker in output + for marker in ["KTAP version", "TAP version", "1.."] + ): + return None + + try: + ktap_results = parse_ktap(output, prefix=prefix) + if ktap_results: + logger.info( + f"Parsed {len(ktap_results)} subtests from KTAP output" + ) + return ktap_results_to_dict(ktap_results) + except Exception as e: + logger.warning(f"Failed to parse KTAP output: {e}") + + return None + + async def _upload_log( + self, log_path: Path, job_id: str, log_name: str = "console.log" + ) -> str | None: + """Upload a log file to storage.""" + if not self._minio: + return None + + try: + bucket = settings.minio_logs_bucket + object_name = f"logs/{job_id}/{log_name}" + self._minio.fput_object( + bucket_name=bucket, + object_name=object_name, + file_path=str(log_path), + content_type="text/plain", + ) + # Use https if minio_secure is enabled + scheme = "https" if settings.minio_secure else "http" + return f"{scheme}://{settings.minio_endpoint}/{bucket}/{object_name}" + except Exception as e: + logger.warning(f"Failed to upload log {log_name}: {e}") + return None + + def _find_boot_log(self, log_dir: Path) -> Path | None: + """ + Find boot log (serial console output) from labgrid log directory. + + Labgrid's --lg-log option creates files like: + - console_main (no .log extension) + + Returns the path to the boot log file, or None if not found. + """ + if not log_dir.exists(): + return None + + try: + # Find console log files created by labgrid + # Labgrid creates files like "console_main" (no .log extension) + console_logs = list(log_dir.glob("console_*")) + if not console_logs: + # Try alternative patterns + console_logs = list(log_dir.glob("*serial*")) + if not console_logs: + # Fallback: any file in the directory + console_logs = [f for f in log_dir.iterdir() if f.is_file()] + + if not console_logs: + logger.debug(f"No boot log found in {log_dir}") + return None + + # Use the largest/most recent log file + boot_log = max(console_logs, key=lambda p: p.stat().st_size) + logger.info(f"Found boot log: {boot_log.name} ({boot_log.stat().st_size} bytes)") + return boot_log + except Exception as e: + logger.warning(f"Failed to find boot log: {e}") + return None + + async def _combine_logs( + self, + boot_log_path: Path | None, + console_log_path: Path, + output_path: Path, + ) -> None: + """ + Combine boot log and pytest console output into a single file. + + The combined log shows: + 1. Boot log (serial console during device boot) + 2. 
Pytest output (test execution results) + """ + try: + with open(output_path, "w") as outfile: + # Write boot log first (if available) + if boot_log_path and boot_log_path.exists(): + outfile.write("=" * 80 + "\n") + outfile.write("BOOT LOG (Serial Console)\n") + outfile.write("=" * 80 + "\n\n") + outfile.write(boot_log_path.read_text(errors="replace")) + outfile.write("\n\n") + + # Write pytest output + if console_log_path.exists(): + outfile.write("=" * 80 + "\n") + outfile.write("TEST OUTPUT (pytest)\n") + outfile.write("=" * 80 + "\n\n") + outfile.write(console_log_path.read_text(errors="replace")) + + logger.info(f"Combined logs written to {output_path}") + except Exception as e: + logger.warning(f"Failed to combine logs: {e}") + # Fall back to just copying console log + if console_log_path.exists(): + output_path.write_text(console_log_path.read_text(errors="replace")) diff --git a/kernelci/labgrid-adapter/labgrid_kci_adapter/health_check.py b/kernelci/labgrid-adapter/labgrid_kci_adapter/health_check.py new file mode 100644 index 000000000..fedfa05ae --- /dev/null +++ b/kernelci/labgrid-adapter/labgrid_kci_adapter/health_check.py @@ -0,0 +1,168 @@ +""" +Device Health Check Tool for Lab Maintainers + +Standalone tool to verify devices are accessible and functioning. +Not part of KernelCI - this is for lab maintenance only. + +Usage: + python -m labgrid_kci_adapter.health_check [device_name] + python -m labgrid_kci_adapter.health_check --all +""" + +import argparse +import subprocess +import sys +from pathlib import Path + +from .config import settings + + +def check_device(device_name: str) -> tuple[str, bool, str]: + """ + Run basic health check on a device using place-based acquisition. + + Uses labgrid-client -p to check if the device is accessible + via the coordinator, without needing to parse target config files. 
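+    A successful acquire/release round-trip confirms the place exists and
+    is not held by another user; it does not power on the board or touch
+    the serial console.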
+ + Returns: + Tuple of (device_name, passed, message) + """ + import os + + try: + env = os.environ.copy() + env["LG_COORDINATOR"] = settings.lg_coordinator + + # Construct place name from lab name and device + # Lab name already includes the full prefix (e.g., "labgrid-aparcar") + place_name = f"{settings.lab_name}-{device_name}" + + # Try to acquire the place + result = subprocess.run( + [ + "labgrid-client", + "-p", + place_name, + "acquire", + ], + capture_output=True, + text=True, + timeout=30, + env=env, + ) + + if result.returncode != 0: + return (device_name, False, f"Acquire failed: {result.stderr.strip()}") + + # Release immediately + subprocess.run( + [ + "labgrid-client", + "-p", + place_name, + "release", + ], + capture_output=True, + timeout=10, + env=env, + ) + + return (device_name, True, "OK") + + except subprocess.TimeoutExpired: + return (device_name, False, "Timeout waiting for device") + except Exception as e: + return (device_name, False, str(e)) + + +def list_devices_from_targets(targets_dir: Path) -> list[str]: + """List device names from target YAML files.""" + return [f.stem for f in targets_dir.glob("*.yaml")] + + +def list_devices_from_coordinator() -> list[str]: + """List device names from labgrid coordinator places.""" + import os + + env = os.environ.copy() + env["LG_COORDINATOR"] = settings.lg_coordinator + + result = subprocess.run( + ["labgrid-client", "places"], + capture_output=True, + text=True, + timeout=10, + env=env, + ) + + if result.returncode != 0: + return [] + + # Parse place names and extract device names for this lab + # Place format: {lab_name}-{device_name} (lab_name includes full prefix) + prefix = f"{settings.lab_name}-" + devices = [] + for line in result.stdout.strip().split("\n"): + place = line.strip() + if place.startswith(prefix): + device = place[len(prefix) :] + devices.append(device) + return devices + + +def main(): + parser = argparse.ArgumentParser( + description="Device health check for lab maintainers" + ) + parser.add_argument("device", nargs="?", help="Device name to check") + parser.add_argument("--all", action="store_true", help="Check all devices") + parser.add_argument( + "--targets-dir", + type=Path, + default=settings.targets_dir, + help="Directory containing target YAML files (optional)", + ) + args = parser.parse_args() + + if not args.device and not args.all: + parser.print_help() + sys.exit(1) + + # Get devices to check + if args.all: + # First try to get devices from coordinator + devices = list_devices_from_coordinator() + if not devices: + # Fall back to target files + targets_dir = args.targets_dir + if targets_dir.exists(): + devices = list_devices_from_targets(targets_dir) + else: + print("Error: No devices found from coordinator or targets directory") + sys.exit(1) + else: + devices = [args.device] + + # Run checks + print(f"Checking {len(devices)} device(s)...\n") + + passed = 0 + failed = 0 + + for device in devices: + name, ok, message = check_device(device) + status = "✓" if ok else "✗" + print(f" {status} {name}: {message}") + + if ok: + passed += 1 + else: + failed += 1 + + # Summary + print(f"\nResults: {passed} passed, {failed} failed") + sys.exit(0 if failed == 0 else 1) + + +if __name__ == "__main__": + main() diff --git a/kernelci/labgrid-adapter/labgrid_kci_adapter/ktap_parser.py b/kernelci/labgrid-adapter/labgrid_kci_adapter/ktap_parser.py new file mode 100644 index 000000000..6c62dabdf --- /dev/null +++ b/kernelci/labgrid-adapter/labgrid_kci_adapter/ktap_parser.py @@ -0,0 +1,376 @@ +""" +KTAP 
(Kernel Test Anything Protocol) parser. + +Parses nested KTAP output from kselftests into flat test results +suitable for KernelCI node submission. + +KTAP is an extension of TAP (Test Anything Protocol) used by the Linux +kernel for reporting test results from kselftests and KUnit tests. + +Key features: +- Nested subtests via 2-space indentation +- Directives: SKIP, TODO, XFAIL, TIMEOUT, ERROR +- Diagnostic lines prefixed with # + +Reference: https://docs.kernel.org/dev-tools/ktap.html + +Example KTAP output: + KTAP version 1 + 1..2 + KTAP version 1 + 1..3 + ok 1 - subtest_a + not ok 2 - subtest_b # SKIP not supported + ok 3 - subtest_c + ok 1 - test_group + not ok 2 - test_single # FAIL assertion failed + +Note: In KTAP, subtests appear BEFORE the parent result line. +""" + +from __future__ import annotations + +import re +from dataclasses import dataclass +from enum import Enum + + +class TestStatus(str, Enum): + """ + Test result status. + + Note: This enum mirrors models.TestStatus. They are kept separate to + avoid a pydantic dependency in the KTAP parser module. The string + values ("pass", "fail", "skip", "error") must stay in sync. + """ + + PASS = "pass" + FAIL = "fail" + SKIP = "skip" + ERROR = "error" + + +@dataclass +class KtapTestResult: + """ + Individual test result from KTAP output. + + Attributes: + name: Hierarchical test name (e.g., "net.socket.af_inet") + status: Test result status + directive: Optional directive (SKIP, TODO, XFAIL, etc.) + directive_reason: Reason provided with directive + diagnostic: Diagnostic/error message from # lines + number: Test number from KTAP output + raw_name: Original test name before hierarchical prefixing + """ + + name: str + status: TestStatus + directive: str | None = None + directive_reason: str | None = None + diagnostic: str | None = None + number: int = 0 + raw_name: str = "" + + +@dataclass +class _ParsedLine: + """Internal representation of a parsed KTAP line.""" + + line_type: str # "version", "plan", "result", "diagnostic", "unknown" + indent_level: int = 0 + raw_line: str = "" + # For result lines + is_ok: bool = False + test_number: int = 0 + test_name: str = "" + directive: str | None = None + directive_reason: str | None = None + # For plan lines + plan_count: int = 0 + # For diagnostic lines + diagnostic_text: str = "" + + +# Regex patterns for KTAP parsing +_VERSION_PATTERN = re.compile(r"^(KTAP version|TAP version)\s+(\d+)") +_PLAN_PATTERN = re.compile(r"^1\.\.(\d+)") +_RESULT_PATTERN = re.compile( + r"^(ok|not ok)\s+(\d+)\s*(?:-\s*)?([^#]*?)(?:\s*#\s*(.*))?$" +) +_DIRECTIVE_PATTERN = re.compile( + r"^(SKIP|TODO|XFAIL|TIMEOUT|ERROR)(?:\s+(.*))?$", re.IGNORECASE +) + + +def parse_ktap(output: str, prefix: str = "") -> list[KtapTestResult]: + """ + Parse KTAP output into flat list of test results. + + Handles nested KTAP output by flattening the hierarchy into + dot-separated test names suitable for KernelCI submission. + + Args: + output: Raw KTAP output string + prefix: Optional prefix for all test names (e.g., "kselftest.net") + + Returns: + List of KtapTestResult objects, one per test/subtest + + Example: + >>> output = ''' + ... KTAP version 1 + ... 1..2 + ... ok 1 - test_a + ... not ok 2 - test_b # SKIP no support + ... 
'''
+        >>> results = parse_ktap(output, prefix="kselftest.net")
+        >>> results[0].name
+        'kselftest.net.test_a'
+        >>> results[1].status
+        <TestStatus.SKIP: 'skip'>
+    """
+    lines = output.splitlines()
+    parsed_lines = [_parse_line(line) for line in lines]
+    return _process_parsed_lines(parsed_lines, prefix)
+
+
+def _parse_line(line: str) -> _ParsedLine:
+    """Parse a single line into a structured representation."""
+    # Calculate indentation (2 spaces = 1 level)
+    stripped = line.lstrip()
+    indent_spaces = len(line) - len(stripped)
+    indent_level = indent_spaces // 2
+
+    result = _ParsedLine(
+        line_type="unknown",
+        indent_level=indent_level,
+        raw_line=line,
+    )
+
+    if not stripped:
+        return result
+
+    # Check for version line
+    if _VERSION_PATTERN.match(stripped):
+        result.line_type = "version"
+        return result
+
+    # Check for plan line
+    plan_match = _PLAN_PATTERN.match(stripped)
+    if plan_match:
+        result.line_type = "plan"
+        result.plan_count = int(plan_match.group(1))
+        return result
+
+    # Check for diagnostic line
+    if stripped.startswith("#"):
+        result.line_type = "diagnostic"
+        result.diagnostic_text = stripped[1:].strip()
+        return result
+
+    # Check for result line
+    result_match = _RESULT_PATTERN.match(stripped)
+    if result_match:
+        ok_str, num_str, name, directive_str = result_match.groups()
+        result.line_type = "result"
+        result.is_ok = ok_str == "ok"
+        result.test_number = int(num_str)
+        result.test_name = name.strip() if name else f"test_{num_str}"
+
+        # Parse directive if present
+        if directive_str:
+            directive_str = directive_str.strip()
+            dir_match = _DIRECTIVE_PATTERN.match(directive_str)
+            if dir_match:
+                result.directive = dir_match.group(1).upper()
+                result.directive_reason = dir_match.group(2)
+            else:
+                # Treat as reason/message if not a known directive
+                result.directive_reason = directive_str
+
+        return result
+
+    return result
+
+
+def _process_parsed_lines(
+    lines: list[_ParsedLine],
+    prefix: str,
+) -> list[KtapTestResult]:
+    """
+    Process parsed lines into test results, handling nesting.
+
+    In KTAP, subtests appear BEFORE the parent result line:
+        KTAP version 1
+        1..1
+          KTAP version 1      <- subtest block starts
+          1..2
+          ok 1 - child_a
+          ok 2 - child_b
+        ok 1 - parent         <- parent result comes after
+
+    This function collects subtest blocks and associates them with
+    the next result line at the parent's indentation level.
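+
+    Diagnostics (# lines) preceding a leaf result are attached to that
+    result; diagnostics still buffered when a parent line flushes its
+    subtests are discarded.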
+ """ + results: list[KtapTestResult] = [] + idx = 0 + n = len(lines) + + def process_at_level(level: int, name_prefix: str) -> list[KtapTestResult]: + """Process lines at a specific indentation level.""" + nonlocal idx + level_results: list[KtapTestResult] = [] + pending_diagnostics: list[str] = [] + pending_subtests: list[KtapTestResult] = [] + + while idx < n: + line = lines[idx] + + # If we hit a line at a lower indent level, we're done with this level + if line.line_type != "unknown" and line.indent_level < level: + break + + # If we hit a higher indent level, it's a subtest block + if line.indent_level > level: + # Process the subtest block + # We don't know the parent name yet, will be filled in when we + # see the parent result line + subtests = process_at_level(line.indent_level, "") + pending_subtests.extend(subtests) + continue + + # Process lines at our level + idx += 1 + + if line.line_type == "version": + continue + + if line.line_type == "plan": + continue + + if line.line_type == "diagnostic": + pending_diagnostics.append(line.diagnostic_text) + continue + + if line.line_type == "result": + test_name = line.test_name + + # If we have pending subtests, they belong to this parent + if pending_subtests: + # Update subtest names with parent prefix + parent_prefix = ( + f"{name_prefix}.{test_name}" if name_prefix else test_name + ) + for subtest in pending_subtests: + subtest.name = f"{parent_prefix}.{subtest.name}" + level_results.extend(pending_subtests) + pending_subtests = [] + pending_diagnostics = [] + else: + # This is a leaf test (no subtests) + full_name = ( + f"{name_prefix}.{test_name}" if name_prefix else test_name + ) + status = _determine_status(line.is_ok, line.directive) + + result = KtapTestResult( + name=full_name, + status=status, + directive=line.directive, + directive_reason=line.directive_reason, + diagnostic=( + "\n".join(pending_diagnostics) + if pending_diagnostics + else None + ), + number=line.test_number, + raw_name=test_name, + ) + level_results.append(result) + pending_diagnostics = [] + + return level_results + + results = process_at_level(0, prefix) + return results + + +def _determine_status(is_ok: bool, directive: str | None) -> TestStatus: + """ + Determine test status from ok/not ok and directive. + + Args: + is_ok: True if line started with "ok", False for "not ok" + directive: Optional directive (SKIP, TODO, XFAIL, etc.) + + Returns: + Appropriate TestStatus value + """ + if directive: + directive_upper = directive.upper() + if directive_upper == "SKIP": + return TestStatus.SKIP + elif directive_upper == "TODO": + # TODO tests are expected to fail, treat as skip + return TestStatus.SKIP + elif directive_upper == "XFAIL": + # Expected failure - if it failed as expected, it's a pass + return TestStatus.PASS if not is_ok else TestStatus.FAIL + elif directive_upper in ("TIMEOUT", "ERROR"): + return TestStatus.ERROR + + return TestStatus.PASS if is_ok else TestStatus.FAIL + + +def ktap_results_to_dict(results: list[KtapTestResult]) -> list[dict]: + """ + Convert KTAP results to dictionary format for KernelCI API. 
+ + Args: + results: List of KtapTestResult objects + + Returns: + List of dicts with 'name', 'status', 'duration', 'error_message' keys + """ + return [ + { + "name": r.name, + "status": r.status.value, + "duration": 0, # KTAP doesn't include timing info + "error_message": r.directive_reason or r.diagnostic, + } + for r in results + ] + + +def summarize_results(results: list[KtapTestResult]) -> dict: + """ + Generate summary statistics for KTAP results. + + Args: + results: List of KtapTestResult objects + + Returns: + Dict with total, passed, failed, skipped, error counts + """ + summary = { + "total": len(results), + "passed": 0, + "failed": 0, + "skipped": 0, + "errors": 0, + } + + for r in results: + if r.status == TestStatus.PASS: + summary["passed"] += 1 + elif r.status == TestStatus.FAIL: + summary["failed"] += 1 + elif r.status == TestStatus.SKIP: + summary["skipped"] += 1 + elif r.status == TestStatus.ERROR: + summary["errors"] += 1 + + return summary diff --git a/kernelci/labgrid-adapter/labgrid_kci_adapter/labgrid_client.py b/kernelci/labgrid-adapter/labgrid_kci_adapter/labgrid_client.py new file mode 100644 index 000000000..887a9c656 --- /dev/null +++ b/kernelci/labgrid-adapter/labgrid_kci_adapter/labgrid_client.py @@ -0,0 +1,304 @@ +""" +Labgrid Coordinator Client + +Provides async interface to query the labgrid coordinator for +available places and their status. +""" + +import asyncio +import logging +import re +from dataclasses import dataclass + +from .config import settings + +logger = logging.getLogger(__name__) + + +@dataclass +class Place: + """Represents a labgrid place (physical device).""" + + name: str + acquired: bool + acquired_by: str | None = None + tags: dict[str, str] | None = None + + @property + def device_type(self) -> str | None: + """Extract device type from place tags or name.""" + if self.tags: + # Check common tag names for device type + if "device_type" in self.tags: + return self.tags["device_type"] + if "device" in self.tags: + return self.tags["device"] + # Fallback: try to extract from name (unreliable, prefer tags) + parts = self.name.split("-", 1) + if len(parts) > 1: + device_part = parts[1] + match = re.match(r"(.+?)(?:-\d+)?$", device_part) + if match: + return match.group(1) + return None + + +class LabgridClient: + """ + Client for interacting with the labgrid coordinator. + + Uses labgrid-client CLI commands to query place status. + """ + + def __init__(self, coordinator_url: str | None = None): + self.coordinator_url = coordinator_url or settings.lg_coordinator + self._places_cache: dict[str, Place] | None = None + self._cache_time: float = 0 + self._cache_ttl: float = 5.0 # Cache places for 5 seconds + + async def _run_labgrid_client(self, *args: str) -> tuple[int, str, str]: + """Run labgrid-client command.""" + import os + + env = os.environ.copy() + env["LG_COORDINATOR"] = self.coordinator_url + + proc = await asyncio.create_subprocess_exec( + "labgrid-client", + *args, + env=env, + stdout=asyncio.subprocess.PIPE, + stderr=asyncio.subprocess.PIPE, + ) + + stdout, stderr = await proc.communicate() + return proc.returncode, stdout.decode(), stderr.decode() + + async def get_places(self, refresh: bool = False) -> dict[str, Place]: + """ + Get all places from the coordinator. 
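+        Results are cached for _cache_ttl seconds (5 by default) so one
+        polling cycle that checks several device types reuses a single
+        labgrid-client invocation.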
+ + Args: + refresh: Force refresh of cached data + + Returns: + Dict mapping place name to Place object + """ + now = asyncio.get_event_loop().time() + + if ( + not refresh + and self._places_cache is not None + and (now - self._cache_time) < self._cache_ttl + ): + return self._places_cache + + # Use -v for verbose output with all place details + returncode, stdout, stderr = await self._run_labgrid_client("-v", "places") + + if returncode != 0: + logger.warning(f"Failed to list places: {stderr}") + return self._places_cache or {} + + places = self._parse_verbose_places_output(stdout) + self._places_cache = places + self._cache_time = now + + return places + + def _parse_verbose_places_output(self, output: str) -> dict[str, Place]: + """ + Parse output of 'labgrid-client -v places' command. + + Example output: + Place 'labgrid-aparcar-openwrt_one': + tags: device=openwrt_one + matches: + */labgrid-aparcar-openwrt_one/* + acquired: None + acquired resources: + created: 2025-12-17 23:56:47 + changed: 2026-02-03 01:48:12.311304 + Place 'labgrid-aparcar-rpi-4': + tags: device=rpi-4 + ... + """ + places = {} + current_name = None + current_tags = {} + current_acquired = False + current_acquired_by = None + + for line in output.split("\n"): + # New place starts with "Place '" + if line.startswith("Place '"): + # Save previous place + if current_name: + places[current_name] = Place( + name=current_name, + acquired=current_acquired, + acquired_by=current_acquired_by, + tags=current_tags if current_tags else None, + ) + + # Parse new place name + match = re.match(r"Place '([^']+)':", line) + if match: + current_name = match.group(1) + current_tags = {} + current_acquired = False + current_acquired_by = None + + elif current_name: + line = line.strip() + + # Parse tags line: "tags: device=openwrt_one key2=value2" + if line.startswith("tags:"): + tag_str = line.split(":", 1)[1].strip() + if tag_str: + for tag in tag_str.split(): + if "=" in tag: + k, v = tag.split("=", 1) + current_tags[k] = v + + # Parse acquired line + elif line.startswith("acquired:"): + value = line.split(":", 1)[1].strip() + if value and value != "None": + current_acquired = True + current_acquired_by = value + + # Don't forget the last place + if current_name: + places[current_name] = Place( + name=current_name, + acquired=current_acquired, + acquired_by=current_acquired_by, + tags=current_tags if current_tags else None, + ) + + return places + + async def get_places_by_device_type( + self, device_type: str, refresh: bool = False + ) -> list[Place]: + """ + Get all places for a specific device type. + + Args: + device_type: The device type to filter by + refresh: Force refresh of cached data + + Returns: + List of Place objects matching the device type + """ + places = await self.get_places(refresh=refresh) + return [p for p in places.values() if p.device_type == device_type] + + async def get_available_places( + self, device_type: str, refresh: bool = False + ) -> list[Place]: + """ + Get available (not acquired) places for a device type. + + Args: + device_type: The device type to filter by + refresh: Force refresh of cached data + + Returns: + List of available Place objects + """ + places = await self.get_places_by_device_type(device_type, refresh=refresh) + return [p for p in places if not p.acquired] + + async def count_available(self, device_type: str, refresh: bool = False) -> int: + """ + Count available places for a device type. 
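+        The poller uses this to size its per-device-type job request on
+        each polling cycle.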
+ + Args: + device_type: The device type to count + refresh: Force refresh of cached data + + Returns: + Number of available places + """ + available = await self.get_available_places(device_type, refresh=refresh) + return len(available) + + async def acquire_place(self, place_name: str) -> bool: + """ + Acquire a specific place. + + Args: + place_name: Name of the place to acquire + + Returns: + True if successfully acquired + """ + returncode, _, stderr = await self._run_labgrid_client( + "-p", place_name, "acquire" + ) + if returncode != 0: + logger.warning(f"Failed to acquire {place_name}: {stderr}") + return False + return True + + async def release_place(self, place_name: str) -> bool: + """ + Release a specific place. + + Args: + place_name: Name of the place to release + + Returns: + True if successfully released + """ + returncode, _, stderr = await self._run_labgrid_client( + "-p", place_name, "release" + ) + if returncode != 0: + logger.warning(f"Failed to release {place_name}: {stderr}") + return False + return True + + async def get_places_for_lab( + self, lab_name: str, refresh: bool = False + ) -> list[Place]: + """ + Get all places belonging to a specific lab. + + Filters places by: + 1. tags.lab == lab_name (explicit tag) + 2. place.name starts with "{lab_name}-" (naming convention) + + Args: + lab_name: The lab name to filter by + refresh: Force refresh of cached data + + Returns: + List of Place objects belonging to this lab + """ + places = await self.get_places(refresh=refresh) + lab_places = [] + + for place in places.values(): + # Check explicit lab tag first + if place.tags and place.tags.get("lab") == lab_name: + lab_places.append(place) + # Fall back to name prefix matching + elif place.name.startswith(f"{lab_name}-"): + lab_places.append(place) + + return lab_places + + def get_unique_device_types(self, places: list[Place]) -> set[str]: + """ + Extract unique device types from a list of places. + + Args: + places: List of Place objects + + Returns: + Set of unique device type strings + """ + return {p.device_type for p in places if p.device_type} diff --git a/kernelci/labgrid-adapter/labgrid_kci_adapter/models.py b/kernelci/labgrid-adapter/labgrid_kci_adapter/models.py new file mode 100644 index 000000000..225cc4cff --- /dev/null +++ b/kernelci/labgrid-adapter/labgrid_kci_adapter/models.py @@ -0,0 +1,65 @@ +""" +Data models for Labgrid KernelCI Adapter. 
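+
+TestStatus mirrors ktap_parser.TestStatus; the string values of the two
+enums must stay in sync.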
+""" + +from datetime import datetime +from enum import Enum +from typing import Any + +from pydantic import BaseModel, Field + + +class TestStatus(str, Enum): + """Test result status.""" + + PASS = "pass" + FAIL = "fail" + SKIP = "skip" + ERROR = "error" + + +class TestResult(BaseModel): + """Individual test result.""" + + id: str + job_id: str + firmware_id: str | None = None + device_type: str + lab_name: str + + test_name: str + test_path: str | None = None + status: TestStatus + duration: float + start_time: datetime + end_time: datetime | None = None + + error_message: str | None = None + log_url: str | None = None + stdout: str | None = None + stderr: str | None = None + + +class JobResult(BaseModel): + """Complete job result.""" + + job_id: str + firmware_id: str | None = None + device_type: str + lab_name: str + + status: str + total_tests: int + passed_tests: int + failed_tests: int + skipped_tests: int + error_tests: int + + started_at: datetime + completed_at: datetime + duration: float + + test_results: list[TestResult] = Field(default_factory=list) + console_log_url: str | None = None + boot_log_url: str | None = None + environment: dict[str, Any] = Field(default_factory=dict) diff --git a/kernelci/labgrid-adapter/labgrid_kci_adapter/poller.py b/kernelci/labgrid-adapter/labgrid_kci_adapter/poller.py new file mode 100644 index 000000000..75728701c --- /dev/null +++ b/kernelci/labgrid-adapter/labgrid_kci_adapter/poller.py @@ -0,0 +1,292 @@ +""" +Job Poller for KernelCI + +Polls the KernelCI API for pending test jobs that match +this lab's capabilities (devices and features). + +Uses the KernelCI Node-based API where jobs are nodes with kind=job +and state=available. + +Supports parallel execution: if a lab has multiple physical devices +of the same type, it can run multiple jobs for different firmware +versions in parallel. +""" + +import asyncio +import logging +from collections import defaultdict +from datetime import datetime + +import httpx +from tenacity import retry, stop_after_attempt, wait_exponential + +from .config import settings +from .labgrid_client import LabgridClient + +logger = logging.getLogger(__name__) + + +class JobPoller: + """ + Polls KernelCI API for pending jobs. + + The poller: + 1. Polls for pending jobs (nodes with kind=job, state=available) + 2. Claims jobs by updating state to 'running' + 3. Dispatches jobs to the executor + 4. Handles concurrent job limits + 5. 
Supports parallel execution across multiple devices of same type + """ + + def __init__( + self, + lab_name: str, + devices: list[str], + features: list[str], + on_job, + labgrid_client: LabgridClient | None = None, + ): + self.lab_name = lab_name + self.devices = devices + self.features = features + self.on_job = on_job + + self.api_url = settings.kci_api_url.rstrip("/") + self.api_token = settings.kci_api_token + + self._client: httpx.AsyncClient | None = None + self._running = False + # Track running jobs: job_id -> device_type + self._current_jobs: dict[str, str] = {} + # Track jobs per device type for parallel execution + self._jobs_per_device: dict[str, set[str]] = defaultdict(set) + # Labgrid client to query available places + self._labgrid = labgrid_client or LabgridClient() + + async def connect(self) -> None: + """Create HTTP client.""" + self._client = httpx.AsyncClient( + base_url=self.api_url, + headers={ + "Authorization": f"Bearer {self.api_token}", + "Content-Type": "application/json", + }, + timeout=httpx.Timeout(30.0), + ) + logger.info(f"Poller connected to {self.api_url}") + + async def close(self) -> None: + """Close HTTP client.""" + if self._client: + await self._client.aclose() + self._client = None + + @property + def client(self) -> httpx.AsyncClient: + if self._client is None: + raise RuntimeError("Poller not connected") + return self._client + + @retry( + stop=stop_after_attempt(3), + wait=wait_exponential(multiplier=1, min=1, max=10), + ) + async def _api_request( + self, + method: str, + path: str, + **kwargs, + ) -> dict: + """Make an API request with retry logic.""" + response = await self.client.request(method, path, **kwargs) + + if response.status_code >= 400: + logger.error(f"API error: {response.status_code} - {response.text}") + response.raise_for_status() + + if response.status_code == 204: + return {} + + return response.json() + + async def _poll_jobs(self) -> list[tuple[dict, str]]: + """ + Poll for pending jobs matching our devices. + + Queries nodes with: + - kind=job + - state=available + - data.runtime=labgrid + - data.device_type in our devices + + Supports parallel execution: if multiple physical devices of the + same type are available, fetches multiple jobs for that type. 
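+        The request size per device type is the number of free labgrid
+        places minus the jobs this poller already has running on that type.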
+ + Returns: + List of (job, device_type) tuples + """ + jobs = [] + + for device_type in self.devices: + try: + # Check how many places are available for this device type + available_places = await self._labgrid.count_available(device_type) + running_jobs = len(self._jobs_per_device.get(device_type, set())) + free_slots = available_places - running_jobs + + if free_slots <= 0: + logger.debug( + f"No free slots for {device_type}: " + f"{available_places} places, {running_jobs} jobs running" + ) + continue + + # Query for available jobs to claim (up to number of free slots) + # KernelCI state machine: available -> closing -> done + # We use 'closing' state to indicate job is being executed + params = { + "kind": "job", + "state": "available", + "data.device_type": device_type, + "data.runtime": "labgrid", + "limit": free_slots, + } + data = await self._api_request("GET", "/latest/nodes", params=params) + + # Handle both list and dict responses + items = data if isinstance(data, list) else data.get("items", []) + + for job in items: + jobs.append((job, device_type)) + + if items: + logger.debug( + f"Found {len(items)} jobs for {device_type} " + f"({free_slots} free slots)" + ) + + except Exception as e: + logger.warning(f"Failed to poll jobs for {device_type}: {e}") + + return jobs + + async def _claim_job(self, job_id: str, device: str) -> bool: + """ + Claim a job by marking it with our lab info and changing state to closing. + + KernelCI state machine allows: available -> closing -> done + We use 'closing' state to indicate a job is being executed. + + Returns True if successfully claimed, False if already taken. + """ + try: + # First GET the current node + node = await self._api_request("GET", f"/latest/node/{job_id}") + + # Check if already claimed by another lab + node_data = node.get("data", {}) + if node_data.get("lab_name") and node_data.get("lab_name") != self.lab_name: + logger.debug(f"Job {job_id} already claimed by {node_data.get('lab_name')}") + return False + + # Check if available to claim + if node.get("state") != "available": + logger.debug(f"Job {job_id} not claimable: {node.get('state')}") + return False + + # Claim the job: set state to 'closing' (execution in progress) + # and mark with our lab info + node["state"] = "closing" + node_data["lab_name"] = self.lab_name + node_data["device_id"] = device + node_data["started_at"] = datetime.utcnow().isoformat() + node["data"] = node_data + + await self._api_request( + "PUT", + f"/latest/node/{job_id}", + json=node, + ) + logger.info(f"Claimed job {job_id} for device {device}") + return True + except httpx.HTTPStatusError as e: + if e.response.status_code in (409, 400): + # Job already claimed or invalid state transition + logger.debug(f"Job {job_id} already claimed") + else: + logger.warning(f"Failed to claim job {job_id}: {e}") + return False + except Exception as e: + logger.warning(f"Failed to claim job {job_id}: {e}") + return False + + async def run(self) -> None: + """Main polling loop.""" + self._running = True + logger.info(f"Starting job poller for lab: {self.lab_name}") + logger.info(f"Devices: {self.devices}") + logger.info(f"Features: {self.features}") + + poll_interval = settings.poll_interval + + while self._running: + try: + # Check global capacity + if len(self._current_jobs) >= settings.max_concurrent_jobs: + logger.debug("At max concurrent jobs, waiting...") + await asyncio.sleep(poll_interval) + continue + + # Poll for jobs (returns list of (job, device_type) tuples) + job_tuples = await self._poll_jobs() 
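+                # Each tuple pairs a claimable job node with the device
+                # type it matched; _claim_job below may still lose the
+                # race to another lab, in which case the job is skipped.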
+ + for job, device_type in job_tuples: + # Check global capacity again (might have filled up) + if len(self._current_jobs) >= settings.max_concurrent_jobs: + break + + job_id = job.get("id") or job.get("_id") + + if not job_id: + continue + + if job_id in self._current_jobs: + continue + + # Try to claim + if await self._claim_job(job_id, device_type): + self._current_jobs[job_id] = device_type + self._jobs_per_device[device_type].add(job_id) + asyncio.create_task( + self._execute_job(job, device_type), + name=f"job-{job_id}", + ) + + await asyncio.sleep(poll_interval) + + except asyncio.CancelledError: + logger.info("Poller cancelled") + break + except Exception as e: + logger.exception(f"Error in polling loop: {e}") + await asyncio.sleep(poll_interval) + + logger.info("Job poller stopped") + + async def _execute_job(self, job: dict, device_type: str) -> None: + """Execute a job and handle completion.""" + job_id = job.get("id") or job.get("_id") + + try: + await self.on_job(job) + except Exception as e: + logger.exception(f"Job {job_id} failed: {e}") + finally: + # Clean up job tracking + self._current_jobs.pop(job_id, None) + if device_type in self._jobs_per_device: + self._jobs_per_device[device_type].discard(job_id) + + def stop(self) -> None: + """Stop the polling loop.""" + self._running = False diff --git a/kernelci/labgrid-adapter/labgrid_kci_adapter/service.py b/kernelci/labgrid-adapter/labgrid_kci_adapter/service.py new file mode 100644 index 000000000..dc14c03f6 --- /dev/null +++ b/kernelci/labgrid-adapter/labgrid_kci_adapter/service.py @@ -0,0 +1,457 @@ +""" +Labgrid KernelCI Adapter Service + +Main service that: +1. Discovers available devices from labgrid +2. Registers with KernelCI API +3. Polls for and executes test jobs +4. Submits results back to KernelCI +5. Runs periodic health checks on devices (every 24h by default) +""" + +import asyncio +import os +import signal + +import httpx +import structlog + +from .config import settings +from .device_discovery import DeviceDiscoveryManager +from .executor import TestExecutor +from .labgrid_client import LabgridClient +from .models import JobResult +from .poller import JobPoller + +# Configure logging +structlog.configure( + processors=[ + structlog.stdlib.filter_by_level, + structlog.stdlib.add_logger_name, + structlog.stdlib.add_log_level, + structlog.stdlib.PositionalArgumentsFormatter(), + structlog.processors.TimeStamper(fmt="iso"), + structlog.processors.StackInfoRenderer(), + structlog.processors.format_exc_info, + structlog.processors.UnicodeDecoder(), + structlog.dev.ConsoleRenderer(), + ], + wrapper_class=structlog.stdlib.BoundLogger, + context_class=dict, + logger_factory=structlog.stdlib.LoggerFactory(), + cache_logger_on_first_use=True, +) + +logger = structlog.get_logger(__name__) + + +class LabgridKCIAdapter: + """ + Main adapter service connecting labgrid to KernelCI. 
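+
+    Lifecycle: initialize() discovers devices and connects the poller,
+    run() starts the background discovery and health-check loops and then
+    blocks in the poller, and shutdown() tears everything down again.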
+ """ + + def __init__(self): + self.lab_name = settings.lab_name + self.devices: list[str] = [] + self.healthy_devices: set[str] = set() # Only healthy devices get jobs + self.features: list[str] = [] + + self.poller: JobPoller | None = None + self.executor: TestExecutor | None = None + self._api_client: httpx.AsyncClient | None = None + self._labgrid_client: LabgridClient | None = None + self._discovery_manager: DeviceDiscoveryManager | None = None + self._running = False + self._health_check_task: asyncio.Task | None = None + self._discovery_task: asyncio.Task | None = None + + async def initialize(self) -> None: + """Initialize the adapter.""" + logger.info(f"Initializing Labgrid KCI Adapter for lab: {self.lab_name}") + + # Initialize labgrid client first (needed for discovery) + self._labgrid_client = LabgridClient() + + # Discover devices from coordinator + self._discovery_manager = DeviceDiscoveryManager( + labgrid_client=self._labgrid_client, + targets_dir=settings.targets_dir, + refresh_interval=settings.device_discovery_interval, + require_target_files=settings.require_target_files, + ) + devices = await self._discovery_manager.discover() + self.devices = list(devices.keys()) + self.features = self._discovery_manager.get_all_features() + logger.info(f"Discovered {len(self.devices)} device types from coordinator") + + # Initialize API client + self._api_client = httpx.AsyncClient( + base_url=settings.kci_api_url, + headers={ + "Authorization": f"Bearer {settings.kci_api_token}", + "Content-Type": "application/json", + }, + timeout=httpx.Timeout(30.0), + ) + + # Initialize executor + self.executor = TestExecutor( + lab_name=self.lab_name, + targets_dir=settings.targets_dir, + tests_dir=settings.tests_dir, + ) + await self.executor.initialize() + + # Initially assume all devices are healthy (health check will verify) + self.healthy_devices = set(self.devices) + + # Initialize poller with healthy devices only + # Poller uses labgrid client to support parallel execution + # when multiple physical devices of same type are available + self.poller = JobPoller( + lab_name=self.lab_name, + devices=list(self.healthy_devices), + features=self.features, + on_job=self._handle_job, + labgrid_client=self._labgrid_client, + ) + await self.poller.connect() + + async def shutdown(self) -> None: + """Shutdown the adapter.""" + logger.info("Shutting down Labgrid KCI Adapter") + self._running = False + + if self.poller: + self.poller.stop() + + if self.executor: + await self.executor.cleanup() + + if self._api_client: + await self._api_client.aclose() + + async def _handle_job(self, job: dict) -> None: + """ + Handle a job received from the poller. 
+ + Args: + job: Job definition from KernelCI API + """ + job_id = job.get("id") + logger.info(f"Handling job: {job_id}") + + try: + # Execute the job + result = await self.executor.execute_job(job) + + # Submit results + await self._submit_results(result) + + logger.info( + "Job completed", + job_id=job_id, + status=result.status, + passed=result.passed_tests, + failed=result.failed_tests, + ) + + except Exception as e: + logger.exception(f"Error handling job {job_id}: {e}") + + # Try to mark job as failed + try: + await self._mark_job_failed(job_id, str(e)) + except Exception: + logger.exception(f"Failed to mark job {job_id} as failed") + + async def _submit_results(self, result: JobResult) -> None: + """Submit job results to KernelCI API.""" + logger.info(f"Submitting results for job: {result.job_id}") + + try: + # Get the current node + response = await self._api_client.get(f"/latest/node/{result.job_id}") + response.raise_for_status() + node = response.json() + + # Update node with results + node["state"] = "done" + node["result"] = result.status.value if hasattr(result.status, 'value') else result.status + node_data = node.get("data", {}) + node_data["completed_at"] = result.completed_at.isoformat() if result.completed_at else None + node_data["duration"] = result.duration + node_data["test_results"] = [t.model_dump(mode="json") for t in result.test_results] + if result.console_log_url: + node_data["log_url"] = result.console_log_url + if result.boot_log_url: + node_data["boot_log_url"] = result.boot_log_url + node["data"] = node_data + + # PUT the updated node + response = await self._api_client.put( + f"/latest/node/{result.job_id}", + json=node, + ) + response.raise_for_status() + logger.info(f"Results submitted for job: {result.job_id}") + + except Exception as e: + logger.error(f"Failed to submit results: {e}") + raise + + async def _mark_job_failed(self, job_id: str, error: str) -> None: + """Mark a job as failed.""" + try: + # Get the current node + response = await self._api_client.get(f"/latest/node/{job_id}") + response.raise_for_status() + node = response.json() + + # Update node with failure + node["state"] = "done" + node["result"] = "fail" + node_data = node.get("data", {}) + node_data["error_message"] = error + node["data"] = node_data + + # PUT the updated node + response = await self._api_client.put( + f"/latest/node/{job_id}", + json=node, + ) + response.raise_for_status() + except Exception as e: + logger.error(f"Failed to mark job as failed: {e}") + + # ========================================================================= + # Health Check + # ========================================================================= + + async def _run_health_checks(self) -> None: + """Run health checks on all devices and update healthy_devices set.""" + logger.info("Starting health check for all devices") + + for device in self.devices: + target_file = settings.targets_dir / f"{device}.yaml" + ok, message = await self._check_device_health(device, target_file) + + if ok: + if device not in self.healthy_devices: + logger.info(f"Device {device} is now healthy") + self.healthy_devices.add(device) + # Update poller with new device list + if self.poller: + self.poller.devices = list(self.healthy_devices) + else: + if device in self.healthy_devices: + logger.warning(f"Device {device} failed health check: {message}") + self.healthy_devices.discard(device) + # Update poller to stop accepting jobs for this device + if self.poller: + self.poller.devices = list(self.healthy_devices) + 
else: + logger.warning(f"Device {device} still unhealthy: {message}") + + logger.info( + f"Health check complete: {len(self.healthy_devices)}/{len(self.devices)} " + "devices healthy" + ) + + async def _check_device_health( + self, device: str, target_file: os.PathLike + ) -> tuple[bool, str]: + """ + Check if a device is accessible via labgrid coordinator. + + Uses place-based acquisition (labgrid-client -p ) instead of + config file-based, since target files require template variables + (LG_IMAGE, etc.) that are only available at job execution time. + + Returns: + Tuple of (is_healthy, message) + """ + try: + env = os.environ.copy() + env["LG_COORDINATOR"] = settings.lg_coordinator + + # Construct place name from lab name and device + # Place naming convention: {lab_name}-{device_type} + # Lab name already includes full prefix (e.g., "labgrid-aparcar") + place_name = f"{self.lab_name}-{device}" + + # Try to acquire the place + proc = await asyncio.create_subprocess_exec( + "labgrid-client", + "-p", + place_name, + "acquire", + env=env, + stdout=asyncio.subprocess.PIPE, + stderr=asyncio.subprocess.PIPE, + ) + + try: + _, stderr = await asyncio.wait_for(proc.communicate(), timeout=30) + except asyncio.TimeoutError: + proc.kill() + await proc.wait() + return (False, "Timeout acquiring device") + + if proc.returncode != 0: + return (False, f"Acquire failed: {stderr.decode().strip()}") + + # Release immediately + release_proc = await asyncio.create_subprocess_exec( + "labgrid-client", + "-p", + place_name, + "release", + env=env, + stdout=asyncio.subprocess.PIPE, + stderr=asyncio.subprocess.PIPE, + ) + await asyncio.wait_for(release_proc.communicate(), timeout=10) + + return (True, "OK") + + except Exception as e: + return (False, str(e)) + + async def _health_check_loop(self) -> None: + """Background task that runs health checks periodically.""" + # Run initial health check + await self._run_health_checks() + + while self._running: + try: + await asyncio.sleep(settings.health_check_interval) + if self._running: + await self._run_health_checks() + except asyncio.CancelledError: + break + except Exception as e: + logger.exception(f"Error in health check loop: {e}") + # Continue running despite errors + await asyncio.sleep(60) + + async def _discovery_refresh_loop(self) -> None: + """Background task that refreshes device discovery periodically.""" + while self._running: + try: + await asyncio.sleep(settings.device_discovery_interval) + if self._running and self._discovery_manager: + old_devices = set(self.devices) + devices = await self._discovery_manager.discover(force_refresh=True) + new_devices = set(devices.keys()) + + # Log changes + added = new_devices - old_devices + removed = old_devices - new_devices + + if added: + logger.info(f"New devices discovered: {added}") + if removed: + logger.info(f"Devices removed: {removed}") + + # Update device list + self.devices = list(new_devices) + self.features = self._discovery_manager.get_all_features() + + # Update healthy devices (remove any that no longer exist) + self.healthy_devices &= new_devices + + # Update poller device list + if self.poller: + self.poller.devices = [ + d for d in self.devices if d in self.healthy_devices + ] + + except asyncio.CancelledError: + break + except Exception as e: + logger.exception(f"Error in discovery refresh loop: {e}") + await asyncio.sleep(60) + + async def run(self) -> None: + """Main service loop.""" + self._running = True + logger.info("Starting Labgrid KCI Adapter") + + try: + # Start discovery refresh 
loop in background + if self._discovery_manager: + self._discovery_task = asyncio.create_task( + self._discovery_refresh_loop() + ) + logger.info( + f"Device discovery refresh interval: " + f"{settings.device_discovery_interval}s" + ) + + # Start health check loop in background + if settings.health_check_enabled: + self._health_check_task = asyncio.create_task(self._health_check_loop()) + logger.info( + f"Health checks enabled, interval: " + f"{settings.health_check_interval}s" + ) + + # Start the poller (uses healthy_devices) + # Note: Tests are pulled before each job execution, not in background + await self.poller.run() + except asyncio.CancelledError: + logger.info("Adapter cancelled") + except Exception as e: + logger.exception(f"Adapter error: {e}") + finally: + if self._discovery_task: + self._discovery_task.cancel() + try: + await self._discovery_task + except asyncio.CancelledError: + pass + if self._health_check_task: + self._health_check_task.cancel() + try: + await self._health_check_task + except asyncio.CancelledError: + pass + await self.shutdown() + + +# ============================================================================= +# Main Entry Point +# ============================================================================= + + +async def main(): + """Main entry point.""" + adapter = LabgridKCIAdapter() + + # Handle signals + loop = asyncio.get_event_loop() + + def signal_handler(): + logger.info("Received shutdown signal") + loop.create_task(adapter.shutdown()) + + for sig in (signal.SIGTERM, signal.SIGINT): + loop.add_signal_handler(sig, signal_handler) + + try: + await adapter.initialize() + await adapter.run() + except KeyboardInterrupt: + logger.info("Keyboard interrupt") + finally: + await adapter.shutdown() + + +def run(): + """Synchronous entry point.""" + asyncio.run(main()) + + +if __name__ == "__main__": + run() diff --git a/kernelci/labgrid-adapter/labgrid_kci_adapter/test_sync.py b/kernelci/labgrid-adapter/labgrid_kci_adapter/test_sync.py new file mode 100644 index 000000000..47c40de4b --- /dev/null +++ b/kernelci/labgrid-adapter/labgrid_kci_adapter/test_sync.py @@ -0,0 +1,119 @@ +""" +Test Repository Sync + +Fetches tests from a git repository before job execution. +If the repository already exists locally, updates it with git pull. + +Similar to LAVA pattern where tests are fetched at job execution time. +See: https://docs.lavasoftware.org/lava/writing-tests.html +""" + +import asyncio +import logging +import shutil +from pathlib import Path + +from .config import settings + +logger = logging.getLogger(__name__) + + +async def _run_git(*args: str, cwd: Path | None = None) -> tuple[int, str]: + """Run a git command asynchronously.""" + proc = await asyncio.create_subprocess_exec( + "git", + *args, + cwd=cwd, + stdout=asyncio.subprocess.PIPE, + stderr=asyncio.subprocess.STDOUT, + ) + stdout, _ = await proc.communicate() + return proc.returncode, stdout.decode().strip() + + +async def ensure_tests( + repo_url: str | None = None, + branch: str | None = None, + dest_dir: Path | None = None, + subdir: str | None = None, +) -> Path: + """ + Ensure tests are available and up-to-date before job execution. + + If repo_url is provided, clones or updates from that repository. + If repo already exists locally, pulls latest changes. + If no repo_url and no local repo, uses dest_dir as-is (local tests). 
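+
+    The subdir argument supports a monorepo layout where the pytest files
+    live in a subdirectory of the cloned repository. A hypothetical call:
+
+        tests_dir = await ensure_tests(
+            repo_url="https://github.com/example/openwrt-tests.git",
+            branch="main",
+            subdir="tests-openwrt",
+        )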
+ + Args: + repo_url: Git repository URL (optional, uses settings if not provided) + branch: Branch to checkout (optional, uses settings if not provided) + dest_dir: Destination directory (optional, uses settings if not provided) + subdir: Subdirectory within repo containing tests (optional) + + Returns: + Path to the tests directory (including subdir if specified) + """ + repo_url = repo_url or settings.tests_repo_url + branch = branch or settings.tests_repo_branch + dest_dir = dest_dir or settings.tests_dir + subdir = subdir if subdir is not None else settings.tests_repo_subdir + + dest_dir.parent.mkdir(parents=True, exist_ok=True) + + # Helper to get final tests path including subdir + def _tests_path() -> Path: + if subdir: + return dest_dir / subdir + return dest_dir + + # If no repo URL configured, just use local directory + if not repo_url: + tests_path = _tests_path() + if not tests_path.exists(): + raise RuntimeError( + f"Tests directory {tests_path} does not exist " + "and no TESTS_REPO_URL configured" + ) + logger.debug(f"Using local tests at {tests_path}") + return tests_path + + # Check if already cloned + if (dest_dir / ".git").exists(): + # Update existing repository + logger.debug(f"Updating tests in {dest_dir}") + returncode, output = await _run_git("fetch", "origin", branch, cwd=dest_dir) + if returncode != 0: + logger.warning(f"Git fetch failed: {output}") + # Continue with existing checkout + return _tests_path() + + # Check if there are updates + _, local_rev = await _run_git("rev-parse", "HEAD", cwd=dest_dir) + _, remote_rev = await _run_git("rev-parse", f"origin/{branch}", cwd=dest_dir) + + if local_rev != remote_rev: + await _run_git("reset", "--hard", f"origin/{branch}", cwd=dest_dir) + logger.info(f"Tests updated: {local_rev[:8]} -> {remote_rev[:8]}") + else: + logger.debug("Tests already up-to-date") + else: + # Clone fresh + logger.info(f"Cloning tests from {repo_url}") + if dest_dir.is_symlink(): + dest_dir.unlink() + elif dest_dir.exists(): + shutil.rmtree(dest_dir) + + returncode, output = await _run_git( + "clone", "--branch", branch, "--depth", "1", repo_url, str(dest_dir) + ) + if returncode != 0: + raise RuntimeError(f"Failed to clone tests: {output}") + logger.info(f"Tests cloned to {dest_dir}") + + tests_path = _tests_path() + if subdir and not tests_path.exists(): + raise RuntimeError( + f"Subdirectory '{subdir}' does not exist in repository" + ) + return tests_path diff --git a/kernelci/labgrid-adapter/pyproject.toml b/kernelci/labgrid-adapter/pyproject.toml new file mode 100644 index 000000000..d0d6fe131 --- /dev/null +++ b/kernelci/labgrid-adapter/pyproject.toml @@ -0,0 +1,43 @@ +[project] +name = "labgrid-kci-adapter" +version = "0.1.0" +description = "Adapter connecting labgrid test infrastructure to KernelCI" +readme = "README.md" +requires-python = ">=3.11" +dependencies = [ + "labgrid @ git+https://github.com/aparcar/labgrid.git@aparcar/staging", + "httpx>=0.27.0", + "pydantic>=2.5.0", + "pydantic-settings>=2.1.0", + "pyyaml>=6.0.1", + "minio>=7.2.0", + "structlog>=24.1.0", +] + +[project.optional-dependencies] +dev = [ + "pytest>=8.0.0", + "pytest-asyncio>=0.23.0", + "ruff>=0.1.0", +] + +[project.scripts] +labgrid-kci-adapter = "labgrid_kci_adapter.service:run" + +[build-system] +requires = ["setuptools>=61.0"] +build-backend = "setuptools.build_meta" + +[tool.setuptools.packages.find] +include = ["labgrid_kci_adapter*"] + +[tool.ruff] +line-length = 88 +target-version = "py311" + +[tool.ruff.lint] +select = ["E", "F", "I", "W"] + 
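+# pytest-asyncio: auto mode collects async tests without an explicit marker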
+[tool.pytest.ini_options] +asyncio_mode = "auto" +asyncio_default_fixture_loop_scope = "function" diff --git a/kernelci/labgrid-adapter/requirements.txt b/kernelci/labgrid-adapter/requirements.txt new file mode 100644 index 000000000..154fb1149 --- /dev/null +++ b/kernelci/labgrid-adapter/requirements.txt @@ -0,0 +1,42 @@ +# ============================================================================= +# Labgrid KernelCI Adapter Dependencies +# ============================================================================= + +# Labgrid (from custom fork for OpenWrt-specific features) +# Uses gRPC coordinator (not the legacy Crossbar/WAMP) +labgrid @ git+https://github.com/aparcar/labgrid.git@aparcar/staging + +# gRPC for labgrid coordinator communication +grpcio>=1.60.0 +grpcio-tools>=1.60.0 + +# HTTP client for KernelCI API +httpx>=0.27.0 +aiohttp>=3.9.0 + +# Testing framework +pytest>=8.4.0 +pytest-check>=2.3.0 +pytest-harvest>=1.10.0 + +# Data validation +pydantic>=2.5.0 +pydantic-settings>=2.1.0 + +# YAML configuration +pyyaml>=6.0.1 + +# S3/MinIO client (for artifact download) +minio>=7.2.0 + +# Logging +structlog>=24.1.0 + +# Retry logic +tenacity>=8.2.0 + +# Process management +psutil>=5.9.0 + +# Async utilities +anyio>=4.2.0 diff --git a/kernelci/labgrid-adapter/tests/__init__.py b/kernelci/labgrid-adapter/tests/__init__.py new file mode 100644 index 000000000..52c435ed0 --- /dev/null +++ b/kernelci/labgrid-adapter/tests/__init__.py @@ -0,0 +1 @@ +"""Tests for labgrid-kci-adapter.""" diff --git a/kernelci/labgrid-adapter/tests/conftest.py b/kernelci/labgrid-adapter/tests/conftest.py new file mode 100644 index 000000000..4a2f8dd63 --- /dev/null +++ b/kernelci/labgrid-adapter/tests/conftest.py @@ -0,0 +1,18 @@ +"""Pytest configuration for labgrid-adapter tests.""" + +import sys +from pathlib import Path + +import pytest + +# Add the package to sys.path for imports +package_dir = Path(__file__).parent.parent +if str(package_dir) not in sys.path: + sys.path.insert(0, str(package_dir)) + + +@pytest.fixture(autouse=True) +def reset_settings(): + """Reset settings between tests.""" + # This ensures each test starts with fresh settings + yield diff --git a/kernelci/labgrid-adapter/tests/test_device_discovery.py b/kernelci/labgrid-adapter/tests/test_device_discovery.py new file mode 100644 index 000000000..d208c9ad9 --- /dev/null +++ b/kernelci/labgrid-adapter/tests/test_device_discovery.py @@ -0,0 +1,358 @@ +"""Tests for device discovery manager.""" + +from unittest.mock import AsyncMock, MagicMock + +import pytest + +from labgrid_kci_adapter.device_discovery import ( + DeviceDiscoveryManager, + DiscoveredDevice, +) +from labgrid_kci_adapter.labgrid_client import Place + + +@pytest.fixture +def sample_places(): + """Sample places for testing.""" + return { + "testlab-openwrt_one": Place( + name="testlab-openwrt_one", + acquired=False, + tags={"device_type": "openwrt_one", "features": "wifi,wan_port"}, + ), + "testlab-openwrt_one-2": Place( + name="testlab-openwrt_one-2", + acquired=False, + tags={"device_type": "openwrt_one", "features": "wifi"}, + ), + "testlab-archer-c7": Place( + name="testlab-archer-c7", + acquired=False, + tags={"device_type": "tplink_archer-c7-v2"}, + ), + "testlab-bananapi_bpi-r4": Place( + name="testlab-bananapi_bpi-r4", + acquired=True, + acquired_by="user@host", + tags={"device_type": "bananapi_bpi-r4", "features": "wan_port"}, + ), + } + + +@pytest.fixture +def mock_labgrid_client(sample_places): + """Create mock labgrid client with sample places.""" + client = 
MagicMock() + client.get_places = AsyncMock(return_value=sample_places) + return client + + +class TestDiscoveredDevice: + """Tests for DiscoveredDevice dataclass.""" + + def test_instance_count(self): + """Test instance count property.""" + device = DiscoveredDevice( + device_type="openwrt_one", + places=[ + Place(name="lab-openwrt_one", acquired=False), + Place(name="lab-openwrt_one-2", acquired=False), + ], + ) + assert device.instance_count == 2 + + def test_empty_places(self): + """Test device with no places.""" + device = DiscoveredDevice(device_type="test") + assert device.instance_count == 0 + assert device.features == set() + assert device.has_target_file is False + + +class TestDeviceDiscoveryManager: + """Tests for DeviceDiscoveryManager.""" + + @pytest.mark.asyncio + async def test_discover_devices(self, mock_labgrid_client): + """Test basic device discovery.""" + manager = DeviceDiscoveryManager( + labgrid_client=mock_labgrid_client, + ) + + devices = await manager.discover() + + assert len(devices) == 3 + assert "openwrt_one" in devices + assert "tplink_archer-c7-v2" in devices + assert "bananapi_bpi-r4" in devices + + @pytest.mark.asyncio + async def test_discover_counts_instances(self, mock_labgrid_client): + """Test that instance count is correct.""" + manager = DeviceDiscoveryManager( + labgrid_client=mock_labgrid_client, + ) + + devices = await manager.discover() + + # openwrt_one has 2 instances + assert devices["openwrt_one"].instance_count == 2 + # archer-c7 has 1 instance + assert devices["tplink_archer-c7-v2"].instance_count == 1 + # bananapi has 1 instance + assert devices["bananapi_bpi-r4"].instance_count == 1 + + @pytest.mark.asyncio + async def test_discover_extracts_features(self, mock_labgrid_client): + """Test feature extraction from place tags.""" + manager = DeviceDiscoveryManager( + labgrid_client=mock_labgrid_client, + ) + + devices = await manager.discover() + + # openwrt_one has wifi and wan_port from tags (merged from both instances) + assert "wifi" in devices["openwrt_one"].features + assert "wan_port" in devices["openwrt_one"].features + + # bananapi has wan_port + assert "wan_port" in devices["bananapi_bpi-r4"].features + + # archer-c7 has no features in tags + assert len(devices["tplink_archer-c7-v2"].features) == 0 + + @pytest.mark.asyncio + async def test_discover_validates_target_files(self, mock_labgrid_client, tmp_path): + """Test target file validation.""" + # Create a target file for one device + targets_dir = tmp_path / "targets" + targets_dir.mkdir() + (targets_dir / "openwrt_one.yaml").write_text( + "targets:\n main:\n features:\n - rootfs\n" + ) + + manager = DeviceDiscoveryManager( + labgrid_client=mock_labgrid_client, + targets_dir=targets_dir, + ) + + devices = await manager.discover() + + assert devices["openwrt_one"].has_target_file is True + assert devices["tplink_archer-c7-v2"].has_target_file is False + assert devices["bananapi_bpi-r4"].has_target_file is False + + @pytest.mark.asyncio + async def test_discover_merges_features_from_file( + self, mock_labgrid_client, tmp_path + ): + """Test that features from target file are merged with tag features.""" + targets_dir = tmp_path / "targets" + targets_dir.mkdir() + (targets_dir / "openwrt_one.yaml").write_text( + "targets:\n main:\n features:\n - rootfs\n - hwsim\n" + ) + + manager = DeviceDiscoveryManager( + labgrid_client=mock_labgrid_client, + targets_dir=targets_dir, + ) + + devices = await manager.discover() + + # Should have features from both tags and file + assert "wifi" in 
devices["openwrt_one"].features # from tags + assert "wan_port" in devices["openwrt_one"].features # from tags + assert "rootfs" in devices["openwrt_one"].features # from file + assert "hwsim" in devices["openwrt_one"].features # from file + + @pytest.mark.asyncio + async def test_require_target_files_filters(self, mock_labgrid_client, tmp_path): + """Test that require_target_files filters devices.""" + targets_dir = tmp_path / "targets" + targets_dir.mkdir() + (targets_dir / "openwrt_one.yaml").write_text("targets: {}") + + manager = DeviceDiscoveryManager( + labgrid_client=mock_labgrid_client, + targets_dir=targets_dir, + require_target_files=True, + ) + + devices = await manager.discover() + + # Only openwrt_one has a target file + assert "openwrt_one" in devices + assert "tplink_archer-c7-v2" not in devices + assert "bananapi_bpi-r4" not in devices + + @pytest.mark.asyncio + async def test_caching(self, mock_labgrid_client): + """Test that discovery results are cached.""" + manager = DeviceDiscoveryManager( + labgrid_client=mock_labgrid_client, + refresh_interval=300, + ) + + # First call + await manager.discover() + assert mock_labgrid_client.get_places.call_count == 1 + + # Second call (should use cache) + await manager.discover() + assert mock_labgrid_client.get_places.call_count == 1 + + # Force refresh + await manager.discover(force_refresh=True) + assert mock_labgrid_client.get_places.call_count == 2 + + @pytest.mark.asyncio + async def test_get_device_types(self, mock_labgrid_client): + """Test get_device_types method.""" + manager = DeviceDiscoveryManager( + labgrid_client=mock_labgrid_client, + ) + + await manager.discover() + device_types = manager.get_device_types() + + assert set(device_types) == { + "openwrt_one", + "tplink_archer-c7-v2", + "bananapi_bpi-r4", + } + + @pytest.mark.asyncio + async def test_get_all_features(self, mock_labgrid_client): + """Test get_all_features method.""" + manager = DeviceDiscoveryManager( + labgrid_client=mock_labgrid_client, + ) + + await manager.discover() + features = manager.get_all_features() + + assert "wifi" in features + assert "wan_port" in features + + @pytest.mark.asyncio + async def test_get_device(self, mock_labgrid_client): + """Test get_device method.""" + manager = DeviceDiscoveryManager( + labgrid_client=mock_labgrid_client, + ) + + await manager.discover() + + device = manager.get_device("openwrt_one") + assert device is not None + assert device.device_type == "openwrt_one" + + # Non-existent device + assert manager.get_device("nonexistent") is None + + @pytest.mark.asyncio + async def test_has_device(self, mock_labgrid_client): + """Test has_device method.""" + manager = DeviceDiscoveryManager( + labgrid_client=mock_labgrid_client, + ) + + await manager.discover() + + assert manager.has_device("openwrt_one") is True + assert manager.has_device("nonexistent") is False + + @pytest.mark.asyncio + async def test_skips_places_without_device_type(self): + """Test that places without device_type are skipped.""" + client = MagicMock() + # Use a name that doesn't follow the lab-device naming convention + # so it won't have a device_type from either tags or name parsing + client.get_places = AsyncMock( + return_value={ + "nodevicetype": Place(name="nodevicetype", acquired=False, tags={}), + "testlab-openwrt_one": Place( + name="testlab-openwrt_one", + acquired=False, + tags={"device_type": "openwrt_one"}, + ), + } + ) + + manager = DeviceDiscoveryManager( + labgrid_client=client, + ) + + devices = await manager.discover() + + # 
Should only have the device with device_type (from tags) + # The "nodevicetype" place has no tags and name doesn't match pattern + assert len(devices) == 1 + assert "openwrt_one" in devices + + +class TestFeatureExtraction: + """Tests for feature extraction from target config.""" + + def test_extract_features_from_explicit_list(self): + """Test extracting features from explicit features list.""" + manager = DeviceDiscoveryManager( + labgrid_client=MagicMock(), + ) + + config = {"features": ["wifi", "wan_port", "usb"]} + features = manager._extract_features_from_config(config) + + assert features == {"wifi", "wan_port", "usb"} + + def test_extract_features_from_target_section(self): + """Test extracting features from targets section.""" + manager = DeviceDiscoveryManager( + labgrid_client=MagicMock(), + ) + + config = { + "targets": { + "main": { + "features": ["rootfs", "hwsim"], + } + } + } + features = manager._extract_features_from_config(config) + + assert features == {"rootfs", "hwsim"} + + def test_extract_features_inferred_from_resources(self): + """Test inferring features from resources.""" + manager = DeviceDiscoveryManager( + labgrid_client=MagicMock(), + ) + + config = { + "targets": { + "main": { + "resources": [ + {"NetworkService": {"address": "192.168.1.1"}}, + {"USBSerialPort": {"match": {}}}, + ], + "drivers": [ + {"QEMUDriver": {"memory": "256M"}}, + ], + } + } + } + features = manager._extract_features_from_config(config) + + assert "wifi" in features # From NetworkService + assert "usb" in features # From USBSerialPort + assert "hwsim" in features # From QEMUDriver + + def test_extract_features_empty_config(self): + """Test extracting features from empty config.""" + manager = DeviceDiscoveryManager( + labgrid_client=MagicMock(), + ) + + assert manager._extract_features_from_config({}) == set() + assert manager._extract_features_from_config(None) == set() diff --git a/kernelci/labgrid-adapter/tests/test_executor.py b/kernelci/labgrid-adapter/tests/test_executor.py new file mode 100644 index 000000000..40d62ab09 --- /dev/null +++ b/kernelci/labgrid-adapter/tests/test_executor.py @@ -0,0 +1,434 @@ +"""Tests for test executor.""" + +import tempfile +from datetime import datetime +from pathlib import Path +from unittest.mock import AsyncMock, MagicMock, patch + +import pytest + +from labgrid_kci_adapter.executor import ResultCollectorPlugin, TestExecutor +from labgrid_kci_adapter.models import TestStatus + + +class TestResultCollectorPlugin: + """Tests for ResultCollectorPlugin.""" + + def test_init(self): + """Test plugin initialization.""" + plugin = ResultCollectorPlugin() + assert plugin.results == [] + assert plugin.start_time is None + assert plugin.end_time is None + + def test_pytest_sessionstart(self): + """Test session start hook.""" + plugin = ResultCollectorPlugin() + plugin.pytest_sessionstart(MagicMock()) + assert plugin.start_time is not None + assert isinstance(plugin.start_time, datetime) + + def test_pytest_sessionfinish(self): + """Test session finish hook.""" + plugin = ResultCollectorPlugin() + plugin.pytest_sessionfinish(MagicMock(), 0) + assert plugin.end_time is not None + + def test_pytest_runtest_logreport_call_phase(self): + """Test collecting results from call phase.""" + plugin = ResultCollectorPlugin() + + # Create mock report for 'call' phase + report = MagicMock() + report.when = "call" + report.nodeid = "test_example.py::test_pass" + report.outcome = "passed" + report.duration = 1.5 + report.failed = False + + 
plugin.pytest_runtest_logreport(report) + + assert len(plugin.results) == 1 + assert plugin.results[0]["nodeid"] == "test_example.py::test_pass" + assert plugin.results[0]["outcome"] == "passed" + assert plugin.results[0]["duration"] == 1.5 + assert plugin.results[0]["error_message"] is None + + def test_pytest_runtest_logreport_setup_phase_ignored(self): + """Test that setup phase is ignored.""" + plugin = ResultCollectorPlugin() + + report = MagicMock() + report.when = "setup" + + plugin.pytest_runtest_logreport(report) + + assert len(plugin.results) == 0 + + def test_pytest_runtest_logreport_teardown_phase_ignored(self): + """Test that teardown phase is ignored.""" + plugin = ResultCollectorPlugin() + + report = MagicMock() + report.when = "teardown" + + plugin.pytest_runtest_logreport(report) + + assert len(plugin.results) == 0 + + def test_pytest_runtest_logreport_failed_with_error(self): + """Test collecting failed test with error message.""" + plugin = ResultCollectorPlugin() + + report = MagicMock() + report.when = "call" + report.nodeid = "test_example.py::test_fail" + report.outcome = "failed" + report.duration = 0.5 + report.failed = True + report.longreprtext = "AssertionError: expected True" + + plugin.pytest_runtest_logreport(report) + + assert len(plugin.results) == 1 + assert plugin.results[0]["outcome"] == "failed" + assert plugin.results[0]["error_message"] == "AssertionError: expected True" + + def test_pytest_collection_modifyitems(self): + """Test collection hook logs items.""" + plugin = ResultCollectorPlugin() + items = [MagicMock(), MagicMock(), MagicMock()] + + # Should not raise + plugin.pytest_collection_modifyitems(items) + + +class TestTestExecutor: + """Tests for TestExecutor.""" + + @pytest.fixture + def temp_dirs(self): + """Create temporary directories for testing.""" + with tempfile.TemporaryDirectory() as tmpdir: + tmpdir_path = Path(tmpdir) + targets_dir = tmpdir_path / "targets" + tests_dir = tmpdir_path / "tests" + targets_dir.mkdir() + tests_dir.mkdir() + yield targets_dir, tests_dir + + @pytest.fixture + def executor(self, temp_dirs): + """Create a TestExecutor instance.""" + targets_dir, tests_dir = temp_dirs + return TestExecutor( + lab_name="test-lab", + targets_dir=targets_dir, + tests_dir=tests_dir, + ) + + def test_init(self, executor, temp_dirs): + """Test executor initialization.""" + targets_dir, tests_dir = temp_dirs + assert executor.lab_name == "test-lab" + assert executor.targets_dir == targets_dir + assert executor.tests_dir == tests_dir + + @pytest.mark.asyncio + async def test_initialize(self, executor): + """Test executor initialization creates HTTP client.""" + with patch("labgrid_kci_adapter.executor.settings") as mock_settings: + mock_settings.minio_endpoint = "" + mock_settings.firmware_cache = Path("/tmp/cache") + + await executor.initialize() + + assert executor._http_client is not None + await executor.cleanup() + + @pytest.mark.asyncio + async def test_cleanup(self, executor): + """Test executor cleanup closes HTTP client.""" + with patch("labgrid_kci_adapter.executor.settings") as mock_settings: + mock_settings.minio_endpoint = "" + mock_settings.firmware_cache = Path("/tmp/cache") + + await executor.initialize() + await executor.cleanup() + + assert executor._http_client is None + + def test_convert_results(self, executor): + """Test converting pytest results to TestResult objects.""" + collector = ResultCollectorPlugin() + collector.start_time = datetime(2024, 1, 1, 12, 0, 0) + collector.results = [ + { + "nodeid": 
"test_example.py::test_pass", + "outcome": "passed", + "duration": 1.0, + "error_message": None, + }, + { + "nodeid": "test_example.py::test_fail", + "outcome": "failed", + "duration": 0.5, + "error_message": "AssertionError", + }, + { + "nodeid": "test_example.py::test_skip", + "outcome": "skipped", + "duration": 0.0, + "error_message": None, + }, + ] + + results = executor._convert_results( + collector=collector, + job_id="job-123", + firmware_id="fw-456", + device_type="test-device", + ) + + assert len(results) == 3 + + # Check passed test + assert results[0].test_name == "test_pass" + assert results[0].status == TestStatus.PASS + assert results[0].duration == 1.0 + + # Check failed test + assert results[1].test_name == "test_fail" + assert results[1].status == TestStatus.FAIL + assert results[1].error_message == "AssertionError" + + # Check skipped test + assert results[2].test_name == "test_skip" + assert results[2].status == TestStatus.SKIP + + @pytest.mark.asyncio + async def test_download_firmware_cached(self, executor, temp_dirs): + """Test firmware download uses cache.""" + with patch("labgrid_kci_adapter.executor.settings") as mock_settings: + mock_settings.firmware_cache = temp_dirs[0] + mock_settings.minio_endpoint = "" + + # Create cached file + cache_file = temp_dirs[0] / "firmware.bin" + cache_file.write_bytes(b"cached firmware") + + await executor.initialize() + + result = await executor._download_firmware( + url="http://example.com/firmware.bin", + dest_dir=temp_dirs[1], + ) + + assert result == cache_file + await executor.cleanup() + + @pytest.mark.asyncio + async def test_execute_job_success(self, executor, temp_dirs): + """Test successful job execution.""" + targets_dir, tests_dir = temp_dirs + + # Create target file + (targets_dir / "test-device.yaml").write_text("targets: {}") + + # Create test file + (tests_dir / "test_example.py").write_text( + "def test_pass(): pass\ndef test_another(): pass\n" + ) + + job = { + "id": "job-123", + "parent": "fw-456", + "data": { + "device_type": "test-device", + "tests": [], + "timeout": 60, + }, + } + + with patch("labgrid_kci_adapter.executor.settings") as mock_settings: + mock_settings.firmware_cache = temp_dirs[0] / "cache" + mock_settings.minio_endpoint = "" + mock_settings.lg_coordinator = "localhost:20408" + mock_settings.tests_repo_url = "" + mock_settings.tests_repo_branch = "main" + mock_settings.tests_dir = tests_dir + + with patch( + "labgrid_kci_adapter.executor.ensure_tests", + new_callable=AsyncMock, + return_value=tests_dir, + ): + with patch.object( + executor, + "_run_pytest", + return_value=(ResultCollectorPlugin(), "output"), + ): + await executor.initialize() + result = await executor.execute_job(job) + await executor.cleanup() + + assert result.job_id == "job-123" + assert result.lab_name == "test-lab" + assert result.device_type == "test-device" + + def test_try_parse_ktap_with_valid_ktap(self, executor): + """Test _try_parse_ktap with valid KTAP output.""" + ktap_output = """KTAP version 1 +1..3 +ok 1 - test_pass +not ok 2 - test_fail +ok 3 - test_skip # SKIP not supported +""" + results = executor._try_parse_ktap(ktap_output, prefix="kselftest") + + assert results is not None + assert len(results) == 3 + assert results[0]["name"] == "kselftest.test_pass" + assert results[0]["status"] == "pass" + assert results[1]["name"] == "kselftest.test_fail" + assert results[1]["status"] == "fail" + assert results[2]["name"] == "kselftest.test_skip" + assert results[2]["status"] == "skip" + + def 
test_try_parse_ktap_with_nested_subtests(self, executor): + """Test _try_parse_ktap with nested KTAP subtests.""" + ktap_output = """KTAP version 1 +1..1 + KTAP version 1 + 1..2 + ok 1 - child_a + not ok 2 - child_b +ok 1 - parent_test +""" + results = executor._try_parse_ktap(ktap_output, prefix="net") + + assert results is not None + assert len(results) == 2 + assert results[0]["name"] == "net.parent_test.child_a" + assert results[0]["status"] == "pass" + assert results[1]["name"] == "net.parent_test.child_b" + assert results[1]["status"] == "fail" + + def test_try_parse_ktap_with_no_ktap(self, executor): + """Test _try_parse_ktap returns None for non-KTAP output.""" + regular_output = "Running tests...\nAll tests passed!\n" + results = executor._try_parse_ktap(regular_output, prefix="test") + + assert results is None + + def test_try_parse_ktap_with_empty_output(self, executor): + """Test _try_parse_ktap returns None for empty output.""" + assert executor._try_parse_ktap("", prefix="test") is None + assert executor._try_parse_ktap(None, prefix="test") is None + + def test_convert_results_with_ktap_output(self, executor): + """Test _convert_results expands KTAP results into multiple TestResults.""" + collector = ResultCollectorPlugin() + collector.start_time = datetime(2024, 1, 1, 12, 0, 0) + collector.results = [ + { + "nodeid": "test_kselftest.py::test_kselftest_net", + "outcome": "passed", + "duration": 10.0, + "error_message": None, + "stdout": """KTAP version 1 +1..3 +ok 1 - socket_test +not ok 2 - bind_test # FAIL address in use +ok 3 - listen_test # SKIP requires root +""", + "stderr": None, + }, + ] + + results = executor._convert_results( + collector=collector, + job_id="job-123", + firmware_id="fw-456", + device_type="test-device", + ) + + # Should expand to 3 KTAP subtests + assert len(results) == 3 + + # Check each subtest result + assert results[0].test_name == "test_kselftest_net.socket_test" + assert results[0].status == TestStatus.PASS + + assert results[1].test_name == "test_kselftest_net.bind_test" + assert results[1].status == TestStatus.FAIL + assert results[1].error_message == "FAIL address in use" + + assert results[2].test_name == "test_kselftest_net.listen_test" + assert results[2].status == TestStatus.SKIP + + def test_convert_results_mixed_ktap_and_regular(self, executor): + """Test _convert_results handles mix of KTAP and regular tests.""" + collector = ResultCollectorPlugin() + collector.start_time = datetime(2024, 1, 1, 12, 0, 0) + collector.results = [ + # Regular pytest test (no KTAP) + { + "nodeid": "test_boot.py::test_boot_success", + "outcome": "passed", + "duration": 5.0, + "error_message": None, + "stdout": "Device booted successfully\n", + "stderr": None, + }, + # KTAP kselftest + { + "nodeid": "test_kselftest.py::test_kselftest_timers", + "outcome": "passed", + "duration": 10.0, + "error_message": None, + "stdout": """TAP version 13 +1..2 +ok 1 - timer_create +ok 2 - timer_delete +""", + "stderr": None, + }, + ] + + results = executor._convert_results( + collector=collector, + job_id="job-123", + firmware_id="fw-456", + device_type="test-device", + ) + + # Should have 1 regular + 2 KTAP = 3 results + assert len(results) == 3 + + # First is regular pytest result + assert results[0].test_name == "test_boot_success" + assert results[0].status == TestStatus.PASS + + # Next two are KTAP subtests + assert results[1].test_name == "test_kselftest_timers.timer_create" + assert results[2].test_name == "test_kselftest_timers.timer_delete" + + def 
test_result_collector_captures_stdout(self): + """Test ResultCollectorPlugin captures stdout from report sections.""" + plugin = ResultCollectorPlugin() + + report = MagicMock() + report.when = "call" + report.nodeid = "test_example.py::test_ktap" + report.outcome = "passed" + report.duration = 1.0 + report.failed = False + report.sections = [ + ("Captured stdout call", "KTAP version 1\n1..1\nok 1 - test\n"), + ] + + plugin.pytest_runtest_logreport(report) + + assert len(plugin.results) == 1 + assert plugin.results[0]["stdout"] == "KTAP version 1\n1..1\nok 1 - test\n" diff --git a/kernelci/labgrid-adapter/tests/test_ktap_parser.py b/kernelci/labgrid-adapter/tests/test_ktap_parser.py new file mode 100644 index 000000000..5682fe77e --- /dev/null +++ b/kernelci/labgrid-adapter/tests/test_ktap_parser.py @@ -0,0 +1,470 @@ +""" +Tests for KTAP parser module. +""" + +import pytest + +from labgrid_kci_adapter.ktap_parser import ( + KtapTestResult, + TestStatus, + ktap_results_to_dict, + parse_ktap, + summarize_results, +) + + +class TestParseKtapBasic: + """Basic KTAP parsing tests.""" + + def test_simple_pass(self): + """Test parsing a simple passing test.""" + output = """ +KTAP version 1 +1..1 +ok 1 - test_simple +""" + results = parse_ktap(output) + assert len(results) == 1 + assert results[0].name == "test_simple" + assert results[0].status == TestStatus.PASS + assert results[0].number == 1 + + def test_simple_fail(self): + """Test parsing a simple failing test.""" + output = """ +KTAP version 1 +1..1 +not ok 1 - test_fail +""" + results = parse_ktap(output) + assert len(results) == 1 + assert results[0].name == "test_fail" + assert results[0].status == TestStatus.FAIL + + def test_multiple_tests(self): + """Test parsing multiple tests.""" + output = """ +KTAP version 1 +1..3 +ok 1 - test_a +not ok 2 - test_b +ok 3 - test_c +""" + results = parse_ktap(output) + assert len(results) == 3 + assert results[0].status == TestStatus.PASS + assert results[1].status == TestStatus.FAIL + assert results[2].status == TestStatus.PASS + + def test_tap_version_13(self): + """Test parsing TAP version 13 output.""" + output = """ +TAP version 13 +1..2 +ok 1 - test_one +ok 2 - test_two +""" + results = parse_ktap(output) + assert len(results) == 2 + + def test_with_prefix(self): + """Test parsing with a name prefix.""" + output = """ +KTAP version 1 +1..1 +ok 1 - test_socket +""" + results = parse_ktap(output, prefix="kselftest.net") + assert len(results) == 1 + assert results[0].name == "kselftest.net.test_socket" + assert results[0].raw_name == "test_socket" + + +class TestParseKtapDirectives: + """Tests for KTAP directive handling.""" + + def test_skip_directive(self): + """Test SKIP directive parsing.""" + output = """ +KTAP version 1 +1..1 +ok 1 - test_feature # SKIP not supported on this platform +""" + results = parse_ktap(output) + assert len(results) == 1 + assert results[0].status == TestStatus.SKIP + assert results[0].directive == "SKIP" + assert results[0].directive_reason == "not supported on this platform" + + def test_skip_not_ok(self): + """Test SKIP with not ok (should still be skip).""" + output = """ +KTAP version 1 +1..1 +not ok 1 - test_feature # SKIP missing dependency +""" + results = parse_ktap(output) + assert results[0].status == TestStatus.SKIP + + def test_todo_directive(self): + """Test TODO directive parsing.""" + output = """ +KTAP version 1 +1..1 +not ok 1 - test_wip # TODO work in progress +""" + results = parse_ktap(output) + assert results[0].status == TestStatus.SKIP + 
assert results[0].directive == "TODO" + + def test_xfail_directive_failed(self): + """Test XFAIL when test fails (expected, so pass).""" + output = """ +KTAP version 1 +1..1 +not ok 1 - test_known_bug # XFAIL known issue #123 +""" + results = parse_ktap(output) + assert results[0].status == TestStatus.PASS + assert results[0].directive == "XFAIL" + + def test_xfail_directive_passed(self): + """Test XFAIL when test passes unexpectedly (fail).""" + output = """ +KTAP version 1 +1..1 +ok 1 - test_known_bug # XFAIL expected to fail +""" + results = parse_ktap(output) + assert results[0].status == TestStatus.FAIL + + def test_timeout_directive(self): + """Test TIMEOUT directive parsing.""" + output = """ +KTAP version 1 +1..1 +not ok 1 - test_slow # TIMEOUT exceeded 30s limit +""" + results = parse_ktap(output) + assert results[0].status == TestStatus.ERROR + assert results[0].directive == "TIMEOUT" + + def test_error_directive(self): + """Test ERROR directive parsing.""" + output = """ +KTAP version 1 +1..1 +not ok 1 - test_crash # ERROR segmentation fault +""" + results = parse_ktap(output) + assert results[0].status == TestStatus.ERROR + assert results[0].directive == "ERROR" + + +class TestParseKtapNested: + """Tests for nested KTAP subtest parsing.""" + + def test_simple_nested(self): + """Test simple nested subtests.""" + output = """ +KTAP version 1 +1..1 + KTAP version 1 + 1..2 + ok 1 - subtest_a + ok 2 - subtest_b +ok 1 - parent_test +""" + results = parse_ktap(output) + assert len(results) == 2 + assert results[0].name == "parent_test.subtest_a" + assert results[1].name == "parent_test.subtest_b" + + def test_nested_with_prefix(self): + """Test nested subtests with a prefix.""" + output = """ +KTAP version 1 +1..1 + KTAP version 1 + 1..1 + ok 1 - child +ok 1 - parent +""" + results = parse_ktap(output, prefix="kselftest.net") + assert len(results) == 1 + assert results[0].name == "kselftest.net.parent.child" + + def test_mixed_nested_and_flat(self): + """Test mix of nested and non-nested tests.""" + output = """ +KTAP version 1 +1..3 +ok 1 - simple_test + KTAP version 1 + 1..2 + ok 1 - nested_a + not ok 2 - nested_b +not ok 2 - parent_with_subtests +ok 3 - another_simple +""" + results = parse_ktap(output) + assert len(results) == 4 + assert results[0].name == "simple_test" + assert results[0].status == TestStatus.PASS + assert results[1].name == "parent_with_subtests.nested_a" + assert results[2].name == "parent_with_subtests.nested_b" + assert results[2].status == TestStatus.FAIL + assert results[3].name == "another_simple" + + def test_deeply_nested(self): + """Test multiple levels of nesting.""" + output = """ +KTAP version 1 +1..1 + KTAP version 1 + 1..1 + KTAP version 1 + 1..2 + ok 1 - leaf_a + ok 2 - leaf_b + ok 1 - middle +ok 1 - top +""" + results = parse_ktap(output) + assert len(results) == 2 + assert results[0].name == "top.middle.leaf_a" + assert results[1].name == "top.middle.leaf_b" + + +class TestParseKtapDiagnostic: + """Tests for diagnostic line handling.""" + + def test_diagnostic_before_result(self): + """Test diagnostic lines captured before result.""" + output = """ +KTAP version 1 +1..1 +# Running test for socket operations +# Testing IPv4 +not ok 1 - test_socket +""" + results = parse_ktap(output) + assert len(results) == 1 + assert results[0].diagnostic is not None + assert "Running test" in results[0].diagnostic + assert "IPv4" in results[0].diagnostic + + def test_diagnostic_not_mixed_between_tests(self): + """Test diagnostics are associated with correct 
test.""" + output = """ +KTAP version 1 +1..2 +# Info for test 1 +ok 1 - test_one +# Info for test 2 +not ok 2 - test_two +""" + results = parse_ktap(output) + assert len(results) == 2 + assert results[0].diagnostic == "Info for test 1" + assert results[1].diagnostic == "Info for test 2" + + +class TestParseKtapEdgeCases: + """Edge case tests for KTAP parser.""" + + def test_empty_output(self): + """Test parsing empty output.""" + results = parse_ktap("") + assert results == [] + + def test_only_version_and_plan(self): + """Test output with only version and plan, no tests.""" + output = """ +KTAP version 1 +1..0 +""" + results = parse_ktap(output) + assert results == [] + + def test_no_test_name(self): + """Test result line without test name.""" + output = """ +KTAP version 1 +1..1 +ok 1 +""" + results = parse_ktap(output) + assert len(results) == 1 + assert results[0].name == "test_1" + + def test_test_name_with_spaces(self): + """Test result with spaces in test name.""" + output = """ +KTAP version 1 +1..1 +ok 1 - test with spaces in name +""" + results = parse_ktap(output) + assert results[0].name == "test with spaces in name" + + def test_case_insensitive_directives(self): + """Test that directives are case-insensitive.""" + output = """ +KTAP version 1 +1..3 +ok 1 - test_a # skip reason +ok 2 - test_b # Skip reason +ok 3 - test_c # SKIP reason +""" + results = parse_ktap(output) + assert all(r.status == TestStatus.SKIP for r in results) + assert all(r.directive == "SKIP" for r in results) + + def test_unknown_directive_as_message(self): + """Test unknown directive text becomes message.""" + output = """ +KTAP version 1 +1..1 +not ok 1 - test_fail # assertion failed at line 42 +""" + results = parse_ktap(output) + assert results[0].status == TestStatus.FAIL + assert results[0].directive is None + assert results[0].directive_reason == "assertion failed at line 42" + + +class TestParseKtapRealWorld: + """Tests with real-world-like KTAP output.""" + + def test_kselftest_net_sample(self): + """Test parsing sample kselftest net output.""" + output = """ +TAP version 13 +1..4 +# selftests: net: reuseport_bpf +ok 1 selftests: net: reuseport_bpf +# selftests: net: reuseport_dualstack +not ok 2 selftests: net: reuseport_dualstack # SKIP ipv6 disabled + TAP version 13 + 1..3 + ok 1 socket_af_inet + ok 2 socket_af_inet6 + not ok 3 socket_af_packet # SKIP requires CAP_NET_RAW +ok 3 selftests: net: socket +# selftests: net: rtnetlink +not ok 4 selftests: net: rtnetlink # exit=1 +""" + results = parse_ktap(output, prefix="kselftest") + + # Should have 6 results: + # - reuseport_bpf (leaf) + # - reuseport_dualstack (leaf, skip) + # - socket subtests: socket_af_inet, socket_af_inet6, socket_af_packet + # - rtnetlink (leaf) + assert len(results) == 6 + + # Check first test + assert results[0].name == "kselftest.selftests: net: reuseport_bpf" + assert results[0].status == TestStatus.PASS + + # Check skipped test + assert results[1].name == "kselftest.selftests: net: reuseport_dualstack" + assert results[1].status == TestStatus.SKIP + + # Check nested tests (socket subtests) + assert "socket_af_inet" in results[2].name + assert results[2].status == TestStatus.PASS + + assert "socket_af_inet6" in results[3].name + assert results[3].status == TestStatus.PASS + + assert "socket_af_packet" in results[4].name + assert results[4].status == TestStatus.SKIP + + # Check final test + assert results[5].name == "kselftest.selftests: net: rtnetlink" + assert results[5].status == TestStatus.FAIL + + def 
test_kunit_sample(self): + """Test parsing sample KUnit output.""" + output = """ +KTAP version 1 +1..1 + KTAP version 1 + # Subtest: example_test_suite + 1..3 + ok 1 - example_simple_test + ok 2 - example_skip_test # SKIP skip reason + not ok 3 - example_fail_test + # example_fail_test: EXPECTATION FAILED at lib/test.c:42 + # Expected 1 == 2 +ok 1 - example_test_suite +""" + results = parse_ktap(output) + + assert len(results) == 3 + assert results[0].name == "example_test_suite.example_simple_test" + assert results[0].status == TestStatus.PASS + + assert results[1].name == "example_test_suite.example_skip_test" + assert results[1].status == TestStatus.SKIP + + assert results[2].name == "example_test_suite.example_fail_test" + assert results[2].status == TestStatus.FAIL + + +class TestKtapResultsToDictAndSummarize: + """Tests for helper functions.""" + + def test_ktap_results_to_dict(self): + """Test converting results to dict format.""" + results = [ + KtapTestResult( + name="test_a", + status=TestStatus.PASS, + number=1, + ), + KtapTestResult( + name="test_b", + status=TestStatus.FAIL, + directive_reason="assertion failed", + number=2, + ), + ] + + dicts = ktap_results_to_dict(results) + + assert len(dicts) == 2 + assert dicts[0] == { + "name": "test_a", + "status": "pass", + "duration": 0, + "error_message": None, + } + assert dicts[1] == { + "name": "test_b", + "status": "fail", + "duration": 0, + "error_message": "assertion failed", + } + + def test_summarize_results(self): + """Test result summarization.""" + results = [ + KtapTestResult(name="t1", status=TestStatus.PASS, number=1), + KtapTestResult(name="t2", status=TestStatus.PASS, number=2), + KtapTestResult(name="t3", status=TestStatus.FAIL, number=3), + KtapTestResult(name="t4", status=TestStatus.SKIP, number=4), + KtapTestResult(name="t5", status=TestStatus.ERROR, number=5), + ] + + summary = summarize_results(results) + + assert summary == { + "total": 5, + "passed": 2, + "failed": 1, + "skipped": 1, + "errors": 1, + } diff --git a/kernelci/labgrid-adapter/tests/test_labgrid_client.py b/kernelci/labgrid-adapter/tests/test_labgrid_client.py new file mode 100644 index 000000000..bfa2db702 --- /dev/null +++ b/kernelci/labgrid-adapter/tests/test_labgrid_client.py @@ -0,0 +1,190 @@ +"""Tests for labgrid coordinator client.""" + +import pytest + +from labgrid_kci_adapter.labgrid_client import LabgridClient, Place + + +class TestPlace: + """Tests for Place dataclass.""" + + def test_device_type_from_tags(self): + """Test extracting device type from tags.""" + place = Place( + name="aparcar-openwrt_one", + acquired=False, + tags={"device_type": "openwrt_one"}, + ) + assert place.device_type == "openwrt_one" + + def test_device_type_from_name_simple(self): + """Test extracting device type from simple name.""" + place = Place(name="aparcar-openwrt_one", acquired=False) + assert place.device_type == "openwrt_one" + + def test_device_type_from_name_with_instance(self): + """Test extracting device type from name with instance number.""" + place = Place(name="aparcar-openwrt_one-2", acquired=False) + assert place.device_type == "openwrt_one" + + def test_device_type_from_name_complex(self): + """Test extracting device type from complex name.""" + place = Place(name="lab-tplink_archer-c7-v2-1", acquired=False) + assert place.device_type == "tplink_archer-c7-v2" + + def test_device_type_tags_override_name(self): + """Test that tags override name-based extraction.""" + place = Place( + name="aparcar-wrong_name", + acquired=False, + 
tags={"device_type": "correct_type"}, + ) + assert place.device_type == "correct_type" + + +class TestLabgridClientParsing: + """Tests for LabgridClient output parsing.""" + + def test_parse_places_output_simple(self): + """Test parsing simple places output.""" + output = """Place 'aparcar-openwrt_one': + acquired: + tags: + device_type: openwrt_one +""" + client = LabgridClient() + places = client._parse_places_output(output) + + assert len(places) == 1 + assert "aparcar-openwrt_one" in places + place = places["aparcar-openwrt_one"] + assert place.name == "aparcar-openwrt_one" + assert place.acquired is False + assert place.tags == {"device_type": "openwrt_one"} + + def test_parse_places_output_acquired(self): + """Test parsing places output with acquired place.""" + output = """Place 'aparcar-openwrt_one': + acquired: user/hostname + tags: + device_type: openwrt_one +""" + client = LabgridClient() + places = client._parse_places_output(output) + + place = places["aparcar-openwrt_one"] + assert place.acquired is True + assert place.acquired_by == "user/hostname" + + def test_parse_places_output_multiple(self): + """Test parsing multiple places.""" + output = """Place 'aparcar-openwrt_one': + acquired: + tags: + device_type: openwrt_one +Place 'aparcar-openwrt_one-2': + acquired: user/host + tags: + device_type: openwrt_one +Place 'leinelab-archer-c7': + acquired: + tags: + device_type: tplink_archer-c7-v2 +""" + client = LabgridClient() + places = client._parse_places_output(output) + + assert len(places) == 3 + assert places["aparcar-openwrt_one"].acquired is False + assert places["aparcar-openwrt_one-2"].acquired is True + assert places["leinelab-archer-c7"].device_type == "tplink_archer-c7-v2" + + def test_parse_places_output_multiple_tags(self): + """Test parsing places with multiple tags.""" + output = """Place 'aparcar-openwrt_one': + acquired: + tags: + device_type: openwrt_one + lab: aparcar + features: wifi,ethernet +""" + client = LabgridClient() + places = client._parse_places_output(output) + + place = places["aparcar-openwrt_one"] + assert place.tags["device_type"] == "openwrt_one" + assert place.tags["lab"] == "aparcar" + assert place.tags["features"] == "wifi,ethernet" + + def test_parse_places_output_empty(self): + """Test parsing empty output.""" + client = LabgridClient() + places = client._parse_places_output("") + assert len(places) == 0 + + def test_parse_places_output_no_tags(self): + """Test parsing place without tags section.""" + output = """Place 'aparcar-openwrt_one': + acquired: +""" + client = LabgridClient() + places = client._parse_places_output(output) + + place = places["aparcar-openwrt_one"] + assert place.tags is None or place.tags == {} + + +class TestLabgridClientFiltering: + """Tests for place filtering methods.""" + + @pytest.fixture + def client_with_places(self): + """Create client with cached places.""" + client = LabgridClient() + client._places_cache = { + "aparcar-openwrt_one": Place( + name="aparcar-openwrt_one", + acquired=False, + tags={"device_type": "openwrt_one"}, + ), + "aparcar-openwrt_one-2": Place( + name="aparcar-openwrt_one-2", + acquired=True, + acquired_by="user/host", + tags={"device_type": "openwrt_one"}, + ), + "aparcar-archer-c7": Place( + name="aparcar-archer-c7", + acquired=False, + tags={"device_type": "tplink_archer-c7-v2"}, + ), + } + client._cache_time = float("inf") # Never expire + return client + + @pytest.mark.asyncio + async def test_get_places_by_device_type(self, client_with_places): + """Test filtering places by 
device type."""
+        places = await client_with_places.get_places_by_device_type("openwrt_one")
+        assert len(places) == 2
+        assert all(p.device_type == "openwrt_one" for p in places)
+
+    @pytest.mark.asyncio
+    async def test_get_available_places(self, client_with_places):
+        """Test getting only available (not acquired) places."""
+        places = await client_with_places.get_available_places("openwrt_one")
+        assert len(places) == 1
+        assert places[0].name == "aparcar-openwrt_one"
+        assert places[0].acquired is False
+
+    @pytest.mark.asyncio
+    async def test_count_available(self, client_with_places):
+        """Test counting available places."""
+        count = await client_with_places.count_available("openwrt_one")
+        assert count == 1
+
+        count = await client_with_places.count_available("tplink_archer-c7-v2")
+        assert count == 1
+
+        count = await client_with_places.count_available("nonexistent")
+        assert count == 0
diff --git a/kernelci/labgrid-adapter/tests/test_poller.py b/kernelci/labgrid-adapter/tests/test_poller.py
new file mode 100644
index 000000000..7de30fd2c
--- /dev/null
+++ b/kernelci/labgrid-adapter/tests/test_poller.py
@@ -0,0 +1,176 @@
+"""Tests for job poller with parallel execution support."""
+
+from unittest.mock import AsyncMock, MagicMock, patch
+
+import pytest
+
+from labgrid_kci_adapter.labgrid_client import LabgridClient, Place
+from labgrid_kci_adapter.poller import JobPoller
+
+
+@pytest.fixture
+def mock_labgrid_client():
+    """Create a mock labgrid client."""
+    client = MagicMock(spec=LabgridClient)
+    client.count_available = AsyncMock(return_value=1)
+    return client
+
+
+@pytest.fixture
+def poller(mock_labgrid_client):
+    """Create a poller with mocked dependencies."""
+    with patch("labgrid_kci_adapter.poller.settings") as mock_settings:
+        mock_settings.kci_api_url = "http://api.example.com"
+        mock_settings.kci_api_token = "test-token"
+        mock_settings.poll_interval = 1
+        mock_settings.max_concurrent_jobs = 10
+
+        return JobPoller(
+            lab_name="test-lab",
+            devices=["openwrt_one", "archer-c7"],
+            features=["wifi"],
+            on_job=AsyncMock(),
+            labgrid_client=mock_labgrid_client,
+        )
+
+
+class TestJobPollerInit:
+    """Tests for JobPoller initialization."""
+
+    def test_init_tracking_structures(self, poller):
+        """Test that tracking structures are initialized correctly."""
+        assert poller._current_jobs == {}
+        assert poller._jobs_per_device == {}
+
+    def test_init_with_labgrid_client(self, poller, mock_labgrid_client):
+        """Test that labgrid client is set."""
+        assert poller._labgrid == mock_labgrid_client
+
+
+class TestJobPollerTracking:
+    """Tests for job tracking with parallel execution."""
+
+    def test_track_job_per_device_type(self, poller):
+        """Test tracking jobs per device type."""
+        # Simulate claiming a job
+        poller._current_jobs["job-1"] = "openwrt_one"
+        poller._jobs_per_device["openwrt_one"].add("job-1")
+
+        assert "job-1" in poller._current_jobs
+        assert "job-1" in poller._jobs_per_device["openwrt_one"]
+
+    def test_track_multiple_jobs_same_device_type(self, poller):
+        """Test tracking multiple jobs for same device type."""
+        # Simulate claiming multiple jobs for same device type
+        poller._current_jobs["job-1"] = "openwrt_one"
+        poller._current_jobs["job-2"] = "openwrt_one"
+        poller._jobs_per_device["openwrt_one"].add("job-1")
+        poller._jobs_per_device["openwrt_one"].add("job-2")
+
+        assert len(poller._jobs_per_device["openwrt_one"]) == 2
+
+    def test_cleanup_job_tracking(self, poller):
+        """Test cleanup of job tracking."""
+        # Setup
+        poller._current_jobs["job-1"] = "openwrt_one"
+        poller._jobs_per_device["openwrt_one"].add("job-1")
+
+        # Cleanup (simulating _execute_job finally block)
+        poller._current_jobs.pop("job-1", None)
+        poller._jobs_per_device["openwrt_one"].discard("job-1")
+
+        assert "job-1" not in poller._current_jobs
+        assert "job-1" not in poller._jobs_per_device["openwrt_one"]
+
+
+class TestJobPollerParallelExecution:
+    """Tests for parallel execution support."""
+
+    @pytest.mark.asyncio
+    async def test_poll_respects_available_places(self, poller, mock_labgrid_client):
+        """Test that polling respects available places."""
+        # 2 places available for openwrt_one
+        mock_labgrid_client.count_available = AsyncMock(
+            side_effect=lambda dt: 2 if dt == "openwrt_one" else 1
+        )
+
+        # Mock API response
+        poller._api_request = AsyncMock(
+            return_value=[
+                {"id": "job-1", "data": {"device_type": "openwrt_one"}},
+                {"id": "job-2", "data": {"device_type": "openwrt_one"}},
+            ]
+        )
+
+        jobs = await poller._poll_jobs()
+
+        # Mocked API returns 2 jobs per device-type query: 2 + 2 = 4 total
+        assert len(jobs) == 4
+
+    @pytest.mark.asyncio
+    async def test_poll_skips_busy_device_type(self, poller, mock_labgrid_client):
+        """Test that polling skips device types with no free slots."""
+        # 1 place available, 1 job running
+        mock_labgrid_client.count_available = AsyncMock(return_value=1)
+        poller._jobs_per_device["openwrt_one"].add("existing-job")
+
+        poller._api_request = AsyncMock(return_value=[])
+
+        await poller._poll_jobs()
+
+        # API should not be called for openwrt_one (no free slots)
+        # Check that we didn't query for openwrt_one
+        calls = poller._api_request.call_args_list
+        for call in calls:
+            params = call.kwargs.get("params", {})
+            if params.get("data.device_type") == "openwrt_one":
+                pytest.fail("Should not poll for openwrt_one when no free slots")
+
+    @pytest.mark.asyncio
+    async def test_poll_limits_by_free_slots(self, poller, mock_labgrid_client):
+        """Test that polling limits jobs by free slots."""
+        # 3 places, 1 job running = 2 free slots
+        mock_labgrid_client.count_available = AsyncMock(return_value=3)
+        poller._jobs_per_device["openwrt_one"].add("existing-job")
+
+        poller._api_request = AsyncMock(return_value=[])
+
+        await poller._poll_jobs()
+
+        # Check limit parameter
+        calls = poller._api_request.call_args_list
+        for call in calls:
+            params = call.kwargs.get("params", {})
+            if params.get("data.device_type") == "openwrt_one":
+                assert params.get("limit") == 2  # 3 places - 1 running = 2
+
+
+class TestJobPollerClaiming:
+    """Tests for job claiming."""
+
+    @pytest.mark.asyncio
+    async def test_claim_job_success(self, poller):
+        """Test successful job claiming."""
+        poller._api_request = AsyncMock(return_value={})
+
+        result = await poller._claim_job("job-1", "openwrt_one")
+
+        assert result is True
+        poller._api_request.assert_called_once()
+
+    @pytest.mark.asyncio
+    async def test_claim_job_conflict(self, poller):
+        """Test job claiming with conflict (already claimed)."""
+        import httpx
+
+        error_response = MagicMock()
+        error_response.status_code = 409
+        poller._api_request = AsyncMock(
+            side_effect=httpx.HTTPStatusError(
+                "Conflict", request=MagicMock(), response=error_response
+            )
+        )
+
+        result = await poller._claim_job("job-1", "openwrt_one")
+
+        assert result is False
diff --git a/kernelci/labgrid-adapter/tests/test_test_sync.py b/kernelci/labgrid-adapter/tests/test_test_sync.py
new file mode 100644
index 000000000..564601ed5
--- /dev/null
+++ b/kernelci/labgrid-adapter/tests/test_test_sync.py
@@ -0,0 +1,248 @@ +"""Tests for test repository synchronization.""" + +import tempfile +from pathlib import Path +from unittest.mock import patch + +import pytest + +from labgrid_kci_adapter.test_sync import _run_git, ensure_tests + + +@pytest.fixture +def temp_dir(): + """Create a temporary directory for tests.""" + with tempfile.TemporaryDirectory() as tmpdir: + yield Path(tmpdir) + + +class TestRunGit: + """Tests for _run_git helper.""" + + @pytest.mark.asyncio + async def test_run_git_success(self, temp_dir): + """Test successful git command.""" + # Initialize a git repo + returncode, output = await _run_git("init", cwd=temp_dir) + assert returncode == 0 + assert (temp_dir / ".git").exists() + + @pytest.mark.asyncio + async def test_run_git_failure(self, temp_dir): + """Test failed git command.""" + returncode, output = await _run_git("clone", "nonexistent-repo", cwd=temp_dir) + assert returncode != 0 + + +class TestEnsureTests: + """Tests for ensure_tests function.""" + + @pytest.mark.asyncio + async def test_ensure_tests_local_dir_exists(self, temp_dir): + """Test using existing local directory when no repo URL.""" + # Create a local tests directory + tests_dir = temp_dir / "tests" + tests_dir.mkdir() + (tests_dir / "test_example.py").write_text("def test_pass(): pass") + + with patch("labgrid_kci_adapter.test_sync.settings") as mock_settings: + mock_settings.tests_repo_url = "" + mock_settings.tests_repo_branch = "main" + mock_settings.tests_repo_subdir = "" + mock_settings.tests_dir = tests_dir + + result = await ensure_tests() + assert result == tests_dir + + @pytest.mark.asyncio + async def test_ensure_tests_local_dir_not_exists(self, temp_dir): + """Test error when local directory doesn't exist and no repo URL.""" + tests_dir = temp_dir / "nonexistent" + + with patch("labgrid_kci_adapter.test_sync.settings") as mock_settings: + mock_settings.tests_repo_url = "" + mock_settings.tests_repo_branch = "main" + mock_settings.tests_repo_subdir = "" + mock_settings.tests_dir = tests_dir + + with pytest.raises(RuntimeError, match="does not exist"): + await ensure_tests() + + @pytest.mark.asyncio + async def test_ensure_tests_clone_repo(self, temp_dir): + """Test cloning a new repository.""" + tests_dir = temp_dir / "tests" + + # Create a "remote" repo to clone from + remote_dir = temp_dir / "remote" + remote_dir.mkdir() + await _run_git("init", "--bare", cwd=remote_dir) + + # Create a source repo with content + source_dir = temp_dir / "source" + source_dir.mkdir() + await _run_git("init", cwd=source_dir) + await _run_git("config", "user.email", "test@test.com", cwd=source_dir) + await _run_git("config", "user.name", "Test", cwd=source_dir) + (source_dir / "test.py").write_text("# test") + await _run_git("add", ".", cwd=source_dir) + await _run_git("commit", "-m", "initial", cwd=source_dir) + await _run_git("remote", "add", "origin", str(remote_dir), cwd=source_dir) + await _run_git("push", "-u", "origin", "master", cwd=source_dir) + + with patch("labgrid_kci_adapter.test_sync.settings") as mock_settings: + mock_settings.tests_repo_url = str(remote_dir) + mock_settings.tests_repo_branch = "master" + mock_settings.tests_repo_subdir = "" + mock_settings.tests_dir = tests_dir + + result = await ensure_tests() + assert result == tests_dir + assert (tests_dir / ".git").exists() + assert (tests_dir / "test.py").exists() + + @pytest.mark.asyncio + async def test_ensure_tests_update_existing(self, temp_dir): + """Test updating an existing cloned repository.""" + tests_dir = temp_dir / "tests" + + 
# Create a "remote" repo + remote_dir = temp_dir / "remote" + remote_dir.mkdir() + await _run_git("init", "--bare", cwd=remote_dir) + + # Create source and push + source_dir = temp_dir / "source" + source_dir.mkdir() + await _run_git("init", cwd=source_dir) + await _run_git("config", "user.email", "test@test.com", cwd=source_dir) + await _run_git("config", "user.name", "Test", cwd=source_dir) + (source_dir / "test.py").write_text("# v1") + await _run_git("add", ".", cwd=source_dir) + await _run_git("commit", "-m", "v1", cwd=source_dir) + await _run_git("remote", "add", "origin", str(remote_dir), cwd=source_dir) + await _run_git("push", "-u", "origin", "master", cwd=source_dir) + + # Clone to tests_dir + await _run_git("clone", str(remote_dir), str(tests_dir)) + + # Update source and push + (source_dir / "test.py").write_text("# v2") + await _run_git("add", ".", cwd=source_dir) + await _run_git("commit", "-m", "v2", cwd=source_dir) + await _run_git("push", cwd=source_dir) + + # Now ensure_tests should update + with patch("labgrid_kci_adapter.test_sync.settings") as mock_settings: + mock_settings.tests_repo_url = str(remote_dir) + mock_settings.tests_repo_branch = "master" + mock_settings.tests_repo_subdir = "" + mock_settings.tests_dir = tests_dir + + result = await ensure_tests() + assert result == tests_dir + assert (tests_dir / "test.py").read_text() == "# v2" + + @pytest.mark.asyncio + async def test_ensure_tests_override_params(self, temp_dir): + """Test that parameters override settings.""" + tests_dir = temp_dir / "custom" + tests_dir.mkdir() + (tests_dir / "test.py").write_text("# test") + + with patch("labgrid_kci_adapter.test_sync.settings") as mock_settings: + mock_settings.tests_repo_url = "" + mock_settings.tests_repo_branch = "main" + mock_settings.tests_repo_subdir = "" + mock_settings.tests_dir = temp_dir / "default" + + # Override with custom dest_dir + result = await ensure_tests(dest_dir=tests_dir) + assert result == tests_dir + + @pytest.mark.asyncio + async def test_ensure_tests_with_subdir(self, temp_dir): + """Test using subdirectory within repository.""" + tests_dir = temp_dir / "repo" + tests_dir.mkdir() + subdir = tests_dir / "tests" / "integration" + subdir.mkdir(parents=True) + (subdir / "test_example.py").write_text("def test_pass(): pass") + + with patch("labgrid_kci_adapter.test_sync.settings") as mock_settings: + mock_settings.tests_repo_url = "" + mock_settings.tests_repo_branch = "main" + mock_settings.tests_repo_subdir = "tests/integration" + mock_settings.tests_dir = tests_dir + + result = await ensure_tests() + assert result == subdir + assert (result / "test_example.py").exists() + + @pytest.mark.asyncio + async def test_ensure_tests_subdir_not_exists(self, temp_dir): + """Test error when subdirectory doesn't exist.""" + tests_dir = temp_dir / "repo" + tests_dir.mkdir() + (tests_dir / "test.py").write_text("# test") + + with patch("labgrid_kci_adapter.test_sync.settings") as mock_settings: + mock_settings.tests_repo_url = "" + mock_settings.tests_repo_branch = "main" + mock_settings.tests_repo_subdir = "nonexistent/subdir" + mock_settings.tests_dir = tests_dir + + with pytest.raises(RuntimeError, match="does not exist"): + await ensure_tests() + + @pytest.mark.asyncio + async def test_ensure_tests_subdir_override(self, temp_dir): + """Test that subdir parameter overrides settings.""" + tests_dir = temp_dir / "repo" + tests_dir.mkdir() + subdir = tests_dir / "custom" + subdir.mkdir() + (subdir / "test.py").write_text("# test") + + with 
patch("labgrid_kci_adapter.test_sync.settings") as mock_settings: + mock_settings.tests_repo_url = "" + mock_settings.tests_repo_branch = "main" + mock_settings.tests_repo_subdir = "default" + mock_settings.tests_dir = tests_dir + + # Override with custom subdir + result = await ensure_tests(subdir="custom") + assert result == subdir + + @pytest.mark.asyncio + async def test_ensure_tests_clone_with_subdir(self, temp_dir): + """Test cloning repository with subdirectory.""" + tests_dir = temp_dir / "tests" + + # Create a "remote" repo with subdirectory structure + remote_dir = temp_dir / "remote" + remote_dir.mkdir() + await _run_git("init", "--bare", cwd=remote_dir) + + # Create source with tests in subdirectory + source_dir = temp_dir / "source" + source_dir.mkdir() + await _run_git("init", cwd=source_dir) + await _run_git("config", "user.email", "test@test.com", cwd=source_dir) + await _run_git("config", "user.name", "Test", cwd=source_dir) + (source_dir / "tests").mkdir() + (source_dir / "tests" / "test.py").write_text("# test") + await _run_git("add", ".", cwd=source_dir) + await _run_git("commit", "-m", "initial", cwd=source_dir) + await _run_git("remote", "add", "origin", str(remote_dir), cwd=source_dir) + await _run_git("push", "-u", "origin", "master", cwd=source_dir) + + with patch("labgrid_kci_adapter.test_sync.settings") as mock_settings: + mock_settings.tests_repo_url = str(remote_dir) + mock_settings.tests_repo_branch = "master" + mock_settings.tests_repo_subdir = "tests" + mock_settings.tests_dir = tests_dir + + result = await ensure_tests() + assert result == tests_dir / "tests" + assert (result / "test.py").exists() diff --git a/kernelci/labgrid-adapter/uv.lock b/kernelci/labgrid-adapter/uv.lock new file mode 100644 index 000000000..03cd5a0d1 --- /dev/null +++ b/kernelci/labgrid-adapter/uv.lock @@ -0,0 +1,956 @@ +version = 1 +revision = 3 +requires-python = ">=3.11" +resolution-markers = [ + "python_full_version >= '3.14'", + "python_full_version < '3.14'", +] + +[[package]] +name = "annotated-types" +version = "0.7.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/ee/67/531ea369ba64dcff5ec9c3402f9f51bf748cec26dde048a2f973a4eea7f5/annotated_types-0.7.0.tar.gz", hash = "sha256:aff07c09a53a08bc8cfccb9c85b05f1aa9a2a6f23728d790723543408344ce89", size = 16081, upload-time = "2024-05-20T21:33:25.928Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/78/b6/6307fbef88d9b5ee7421e68d78a9f162e0da4900bc5f5793f6d3d0e34fb8/annotated_types-0.7.0-py3-none-any.whl", hash = "sha256:1f02e8b43a8fbbc3f3e0d4f0f4bfc8131bcb4eebe8849b8e5c773f3a1c582a53", size = 13643, upload-time = "2024-05-20T21:33:24.1Z" }, +] + +[[package]] +name = "anyio" +version = "4.12.1" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "idna" }, + { name = "typing-extensions", marker = "python_full_version < '3.13'" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/96/f0/5eb65b2bb0d09ac6776f2eb54adee6abe8228ea05b20a5ad0e4945de8aac/anyio-4.12.1.tar.gz", hash = "sha256:41cfcc3a4c85d3f05c932da7c26d0201ac36f72abd4435ba90d0464a3ffed703", size = 228685, upload-time = "2026-01-06T11:45:21.246Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/38/0e/27be9fdef66e72d64c0cdc3cc2823101b80585f8119b5c112c2e8f5f7dab/anyio-4.12.1-py3-none-any.whl", hash = "sha256:d405828884fc140aa80a3c667b8beed277f1dfedec42ba031bd6ac3db606ab6c", size = 113592, upload-time = "2026-01-06T11:45:19.497Z" }, +] + +[[package]] 
+name = "argon2-cffi" +version = "25.1.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "argon2-cffi-bindings" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/0e/89/ce5af8a7d472a67cc819d5d998aa8c82c5d860608c4db9f46f1162d7dab9/argon2_cffi-25.1.0.tar.gz", hash = "sha256:694ae5cc8a42f4c4e2bf2ca0e64e51e23a040c6a517a85074683d3959e1346c1", size = 45706, upload-time = "2025-06-03T06:55:32.073Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/4f/d3/a8b22fa575b297cd6e3e3b0155c7e25db170edf1c74783d6a31a2490b8d9/argon2_cffi-25.1.0-py3-none-any.whl", hash = "sha256:fdc8b074db390fccb6eb4a3604ae7231f219aa669a2652e0f20e16ba513d5741", size = 14657, upload-time = "2025-06-03T06:55:30.804Z" }, +] + +[[package]] +name = "argon2-cffi-bindings" +version = "25.1.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "cffi" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/5c/2d/db8af0df73c1cf454f71b2bbe5e356b8c1f8041c979f505b3d3186e520a9/argon2_cffi_bindings-25.1.0.tar.gz", hash = "sha256:b957f3e6ea4d55d820e40ff76f450952807013d361a65d7f28acc0acbf29229d", size = 1783441, upload-time = "2025-07-30T10:02:05.147Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/60/97/3c0a35f46e52108d4707c44b95cfe2afcafc50800b5450c197454569b776/argon2_cffi_bindings-25.1.0-cp314-cp314t-macosx_10_13_universal2.whl", hash = "sha256:3d3f05610594151994ca9ccb3c771115bdb4daef161976a266f0dd8aa9996b8f", size = 54393, upload-time = "2025-07-30T10:01:40.97Z" }, + { url = "https://files.pythonhosted.org/packages/9d/f4/98bbd6ee89febd4f212696f13c03ca302b8552e7dbf9c8efa11ea4a388c3/argon2_cffi_bindings-25.1.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:8b8efee945193e667a396cbc7b4fb7d357297d6234d30a489905d96caabde56b", size = 29328, upload-time = "2025-07-30T10:01:41.916Z" }, + { url = "https://files.pythonhosted.org/packages/43/24/90a01c0ef12ac91a6be05969f29944643bc1e5e461155ae6559befa8f00b/argon2_cffi_bindings-25.1.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:3c6702abc36bf3ccba3f802b799505def420a1b7039862014a65db3205967f5a", size = 31269, upload-time = "2025-07-30T10:01:42.716Z" }, + { url = "https://files.pythonhosted.org/packages/d4/d3/942aa10782b2697eee7af5e12eeff5ebb325ccfb86dd8abda54174e377e4/argon2_cffi_bindings-25.1.0-cp314-cp314t-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a1c70058c6ab1e352304ac7e3b52554daadacd8d453c1752e547c76e9c99ac44", size = 86558, upload-time = "2025-07-30T10:01:43.943Z" }, + { url = "https://files.pythonhosted.org/packages/0d/82/b484f702fec5536e71836fc2dbc8c5267b3f6e78d2d539b4eaa6f0db8bf8/argon2_cffi_bindings-25.1.0-cp314-cp314t-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:e2fd3bfbff3c5d74fef31a722f729bf93500910db650c925c2d6ef879a7e51cb", size = 92364, upload-time = "2025-07-30T10:01:44.887Z" }, + { url = "https://files.pythonhosted.org/packages/c9/c1/a606ff83b3f1735f3759ad0f2cd9e038a0ad11a3de3b6c673aa41c24bb7b/argon2_cffi_bindings-25.1.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:c4f9665de60b1b0e99bcd6be4f17d90339698ce954cfd8d9cf4f91c995165a92", size = 85637, upload-time = "2025-07-30T10:01:46.225Z" }, + { url = "https://files.pythonhosted.org/packages/44/b4/678503f12aceb0262f84fa201f6027ed77d71c5019ae03b399b97caa2f19/argon2_cffi_bindings-25.1.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:ba92837e4a9aa6a508c8d2d7883ed5a8f6c308c89a4790e1e447a220deb79a85", size = 91934, upload-time = "2025-07-30T10:01:47.203Z" 
}, + { url = "https://files.pythonhosted.org/packages/f0/c7/f36bd08ef9bd9f0a9cff9428406651f5937ce27b6c5b07b92d41f91ae541/argon2_cffi_bindings-25.1.0-cp314-cp314t-win32.whl", hash = "sha256:84a461d4d84ae1295871329b346a97f68eade8c53b6ed9a7ca2d7467f3c8ff6f", size = 28158, upload-time = "2025-07-30T10:01:48.341Z" }, + { url = "https://files.pythonhosted.org/packages/b3/80/0106a7448abb24a2c467bf7d527fe5413b7fdfa4ad6d6a96a43a62ef3988/argon2_cffi_bindings-25.1.0-cp314-cp314t-win_amd64.whl", hash = "sha256:b55aec3565b65f56455eebc9b9f34130440404f27fe21c3b375bf1ea4d8fbae6", size = 32597, upload-time = "2025-07-30T10:01:49.112Z" }, + { url = "https://files.pythonhosted.org/packages/05/b8/d663c9caea07e9180b2cb662772865230715cbd573ba3b5e81793d580316/argon2_cffi_bindings-25.1.0-cp314-cp314t-win_arm64.whl", hash = "sha256:87c33a52407e4c41f3b70a9c2d3f6056d88b10dad7695be708c5021673f55623", size = 28231, upload-time = "2025-07-30T10:01:49.92Z" }, + { url = "https://files.pythonhosted.org/packages/1d/57/96b8b9f93166147826da5f90376e784a10582dd39a393c99bb62cfcf52f0/argon2_cffi_bindings-25.1.0-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:aecba1723ae35330a008418a91ea6cfcedf6d31e5fbaa056a166462ff066d500", size = 54121, upload-time = "2025-07-30T10:01:50.815Z" }, + { url = "https://files.pythonhosted.org/packages/0a/08/a9bebdb2e0e602dde230bdde8021b29f71f7841bd54801bcfd514acb5dcf/argon2_cffi_bindings-25.1.0-cp39-abi3-macosx_10_9_x86_64.whl", hash = "sha256:2630b6240b495dfab90aebe159ff784d08ea999aa4b0d17efa734055a07d2f44", size = 29177, upload-time = "2025-07-30T10:01:51.681Z" }, + { url = "https://files.pythonhosted.org/packages/b6/02/d297943bcacf05e4f2a94ab6f462831dc20158614e5d067c35d4e63b9acb/argon2_cffi_bindings-25.1.0-cp39-abi3-macosx_11_0_arm64.whl", hash = "sha256:7aef0c91e2c0fbca6fc68e7555aa60ef7008a739cbe045541e438373bc54d2b0", size = 31090, upload-time = "2025-07-30T10:01:53.184Z" }, + { url = "https://files.pythonhosted.org/packages/c1/93/44365f3d75053e53893ec6d733e4a5e3147502663554b4d864587c7828a7/argon2_cffi_bindings-25.1.0-cp39-abi3-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1e021e87faa76ae0d413b619fe2b65ab9a037f24c60a1e6cc43457ae20de6dc6", size = 81246, upload-time = "2025-07-30T10:01:54.145Z" }, + { url = "https://files.pythonhosted.org/packages/09/52/94108adfdd6e2ddf58be64f959a0b9c7d4ef2fa71086c38356d22dc501ea/argon2_cffi_bindings-25.1.0-cp39-abi3-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:d3e924cfc503018a714f94a49a149fdc0b644eaead5d1f089330399134fa028a", size = 87126, upload-time = "2025-07-30T10:01:55.074Z" }, + { url = "https://files.pythonhosted.org/packages/72/70/7a2993a12b0ffa2a9271259b79cc616e2389ed1a4d93842fac5a1f923ffd/argon2_cffi_bindings-25.1.0-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:c87b72589133f0346a1cb8d5ecca4b933e3c9b64656c9d175270a000e73b288d", size = 80343, upload-time = "2025-07-30T10:01:56.007Z" }, + { url = "https://files.pythonhosted.org/packages/78/9a/4e5157d893ffc712b74dbd868c7f62365618266982b64accab26bab01edc/argon2_cffi_bindings-25.1.0-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:1db89609c06afa1a214a69a462ea741cf735b29a57530478c06eb81dd403de99", size = 86777, upload-time = "2025-07-30T10:01:56.943Z" }, + { url = "https://files.pythonhosted.org/packages/74/cd/15777dfde1c29d96de7f18edf4cc94c385646852e7c7b0320aa91ccca583/argon2_cffi_bindings-25.1.0-cp39-abi3-win32.whl", hash = "sha256:473bcb5f82924b1becbb637b63303ec8d10e84c8d241119419897a26116515d2", size = 27180, upload-time = "2025-07-30T10:01:57.759Z" 
}, + { url = "https://files.pythonhosted.org/packages/e2/c6/a759ece8f1829d1f162261226fbfd2c6832b3ff7657384045286d2afa384/argon2_cffi_bindings-25.1.0-cp39-abi3-win_amd64.whl", hash = "sha256:a98cd7d17e9f7ce244c0803cad3c23a7d379c301ba618a5fa76a67d116618b98", size = 31715, upload-time = "2025-07-30T10:01:58.56Z" }, + { url = "https://files.pythonhosted.org/packages/42/b9/f8d6fa329ab25128b7e98fd83a3cb34d9db5b059a9847eddb840a0af45dd/argon2_cffi_bindings-25.1.0-cp39-abi3-win_arm64.whl", hash = "sha256:b0fdbcf513833809c882823f98dc2f931cf659d9a1429616ac3adebb49f5db94", size = 27149, upload-time = "2025-07-30T10:01:59.329Z" }, +] + +[[package]] +name = "attrs" +version = "25.4.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/6b/5c/685e6633917e101e5dcb62b9dd76946cbb57c26e133bae9e0cd36033c0a9/attrs-25.4.0.tar.gz", hash = "sha256:16d5969b87f0859ef33a48b35d55ac1be6e42ae49d5e853b597db70c35c57e11", size = 934251, upload-time = "2025-10-06T13:54:44.725Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/3a/2a/7cc015f5b9f5db42b7d48157e23356022889fc354a2813c15934b7cb5c0e/attrs-25.4.0-py3-none-any.whl", hash = "sha256:adcf7e2a1fb3b36ac48d97835bb6d8ade15b8dcce26aba8bf1d14847b57a3373", size = 67615, upload-time = "2025-10-06T13:54:43.17Z" }, +] + +[[package]] +name = "certifi" +version = "2026.1.4" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/e0/2d/a891ca51311197f6ad14a7ef42e2399f36cf2f9bd44752b3dc4eab60fdc5/certifi-2026.1.4.tar.gz", hash = "sha256:ac726dd470482006e014ad384921ed6438c457018f4b3d204aea4281258b2120", size = 154268, upload-time = "2026-01-04T02:42:41.825Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/e6/ad/3cc14f097111b4de0040c83a525973216457bbeeb63739ef1ed275c1c021/certifi-2026.1.4-py3-none-any.whl", hash = "sha256:9943707519e4add1115f44c2bc244f782c0249876bf51b6599fee1ffbedd685c", size = 152900, upload-time = "2026-01-04T02:42:40.15Z" }, +] + +[[package]] +name = "cffi" +version = "2.0.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "pycparser", marker = "implementation_name != 'PyPy'" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/eb/56/b1ba7935a17738ae8453301356628e8147c79dbb825bcbc73dc7401f9846/cffi-2.0.0.tar.gz", hash = "sha256:44d1b5909021139fe36001ae048dbdde8214afa20200eda0f64c068cac5d5529", size = 523588, upload-time = "2025-09-08T23:24:04.541Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/12/4a/3dfd5f7850cbf0d06dc84ba9aa00db766b52ca38d8b86e3a38314d52498c/cffi-2.0.0-cp311-cp311-macosx_10_13_x86_64.whl", hash = "sha256:b4c854ef3adc177950a8dfc81a86f5115d2abd545751a304c5bcf2c2c7283cfe", size = 184344, upload-time = "2025-09-08T23:22:26.456Z" }, + { url = "https://files.pythonhosted.org/packages/4f/8b/f0e4c441227ba756aafbe78f117485b25bb26b1c059d01f137fa6d14896b/cffi-2.0.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:2de9a304e27f7596cd03d16f1b7c72219bd944e99cc52b84d0145aefb07cbd3c", size = 180560, upload-time = "2025-09-08T23:22:28.197Z" }, + { url = "https://files.pythonhosted.org/packages/b1/b7/1200d354378ef52ec227395d95c2576330fd22a869f7a70e88e1447eb234/cffi-2.0.0-cp311-cp311-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:baf5215e0ab74c16e2dd324e8ec067ef59e41125d3eade2b863d294fd5035c92", size = 209613, upload-time = "2025-09-08T23:22:29.475Z" }, + { url = 
"https://files.pythonhosted.org/packages/b8/56/6033f5e86e8cc9bb629f0077ba71679508bdf54a9a5e112a3c0b91870332/cffi-2.0.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:730cacb21e1bdff3ce90babf007d0a0917cc3e6492f336c2f0134101e0944f93", size = 216476, upload-time = "2025-09-08T23:22:31.063Z" }, + { url = "https://files.pythonhosted.org/packages/dc/7f/55fecd70f7ece178db2f26128ec41430d8720f2d12ca97bf8f0a628207d5/cffi-2.0.0-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:6824f87845e3396029f3820c206e459ccc91760e8fa24422f8b0c3d1731cbec5", size = 203374, upload-time = "2025-09-08T23:22:32.507Z" }, + { url = "https://files.pythonhosted.org/packages/84/ef/a7b77c8bdc0f77adc3b46888f1ad54be8f3b7821697a7b89126e829e676a/cffi-2.0.0-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:9de40a7b0323d889cf8d23d1ef214f565ab154443c42737dfe52ff82cf857664", size = 202597, upload-time = "2025-09-08T23:22:34.132Z" }, + { url = "https://files.pythonhosted.org/packages/d7/91/500d892b2bf36529a75b77958edfcd5ad8e2ce4064ce2ecfeab2125d72d1/cffi-2.0.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:8941aaadaf67246224cee8c3803777eed332a19d909b47e29c9842ef1e79ac26", size = 215574, upload-time = "2025-09-08T23:22:35.443Z" }, + { url = "https://files.pythonhosted.org/packages/44/64/58f6255b62b101093d5df22dcb752596066c7e89dd725e0afaed242a61be/cffi-2.0.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:a05d0c237b3349096d3981b727493e22147f934b20f6f125a3eba8f994bec4a9", size = 218971, upload-time = "2025-09-08T23:22:36.805Z" }, + { url = "https://files.pythonhosted.org/packages/ab/49/fa72cebe2fd8a55fbe14956f9970fe8eb1ac59e5df042f603ef7c8ba0adc/cffi-2.0.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:94698a9c5f91f9d138526b48fe26a199609544591f859c870d477351dc7b2414", size = 211972, upload-time = "2025-09-08T23:22:38.436Z" }, + { url = "https://files.pythonhosted.org/packages/0b/28/dd0967a76aab36731b6ebfe64dec4e981aff7e0608f60c2d46b46982607d/cffi-2.0.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:5fed36fccc0612a53f1d4d9a816b50a36702c28a2aa880cb8a122b3466638743", size = 217078, upload-time = "2025-09-08T23:22:39.776Z" }, + { url = "https://files.pythonhosted.org/packages/2b/c0/015b25184413d7ab0a410775fdb4a50fca20f5589b5dab1dbbfa3baad8ce/cffi-2.0.0-cp311-cp311-win32.whl", hash = "sha256:c649e3a33450ec82378822b3dad03cc228b8f5963c0c12fc3b1e0ab940f768a5", size = 172076, upload-time = "2025-09-08T23:22:40.95Z" }, + { url = "https://files.pythonhosted.org/packages/ae/8f/dc5531155e7070361eb1b7e4c1a9d896d0cb21c49f807a6c03fd63fc877e/cffi-2.0.0-cp311-cp311-win_amd64.whl", hash = "sha256:66f011380d0e49ed280c789fbd08ff0d40968ee7b665575489afa95c98196ab5", size = 182820, upload-time = "2025-09-08T23:22:42.463Z" }, + { url = "https://files.pythonhosted.org/packages/95/5c/1b493356429f9aecfd56bc171285a4c4ac8697f76e9bbbbb105e537853a1/cffi-2.0.0-cp311-cp311-win_arm64.whl", hash = "sha256:c6638687455baf640e37344fe26d37c404db8b80d037c3d29f58fe8d1c3b194d", size = 177635, upload-time = "2025-09-08T23:22:43.623Z" }, + { url = "https://files.pythonhosted.org/packages/ea/47/4f61023ea636104d4f16ab488e268b93008c3d0bb76893b1b31db1f96802/cffi-2.0.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:6d02d6655b0e54f54c4ef0b94eb6be0607b70853c45ce98bd278dc7de718be5d", size = 185271, upload-time = "2025-09-08T23:22:44.795Z" }, + { url = 
"https://files.pythonhosted.org/packages/df/a2/781b623f57358e360d62cdd7a8c681f074a71d445418a776eef0aadb4ab4/cffi-2.0.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8eca2a813c1cb7ad4fb74d368c2ffbbb4789d377ee5bb8df98373c2cc0dee76c", size = 181048, upload-time = "2025-09-08T23:22:45.938Z" }, + { url = "https://files.pythonhosted.org/packages/ff/df/a4f0fbd47331ceeba3d37c2e51e9dfc9722498becbeec2bd8bc856c9538a/cffi-2.0.0-cp312-cp312-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:21d1152871b019407d8ac3985f6775c079416c282e431a4da6afe7aefd2bccbe", size = 212529, upload-time = "2025-09-08T23:22:47.349Z" }, + { url = "https://files.pythonhosted.org/packages/d5/72/12b5f8d3865bf0f87cf1404d8c374e7487dcf097a1c91c436e72e6badd83/cffi-2.0.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:b21e08af67b8a103c71a250401c78d5e0893beff75e28c53c98f4de42f774062", size = 220097, upload-time = "2025-09-08T23:22:48.677Z" }, + { url = "https://files.pythonhosted.org/packages/c2/95/7a135d52a50dfa7c882ab0ac17e8dc11cec9d55d2c18dda414c051c5e69e/cffi-2.0.0-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:1e3a615586f05fc4065a8b22b8152f0c1b00cdbc60596d187c2a74f9e3036e4e", size = 207983, upload-time = "2025-09-08T23:22:50.06Z" }, + { url = "https://files.pythonhosted.org/packages/3a/c8/15cb9ada8895957ea171c62dc78ff3e99159ee7adb13c0123c001a2546c1/cffi-2.0.0-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:81afed14892743bbe14dacb9e36d9e0e504cd204e0b165062c488942b9718037", size = 206519, upload-time = "2025-09-08T23:22:51.364Z" }, + { url = "https://files.pythonhosted.org/packages/78/2d/7fa73dfa841b5ac06c7b8855cfc18622132e365f5b81d02230333ff26e9e/cffi-2.0.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:3e17ed538242334bf70832644a32a7aae3d83b57567f9fd60a26257e992b79ba", size = 219572, upload-time = "2025-09-08T23:22:52.902Z" }, + { url = "https://files.pythonhosted.org/packages/07/e0/267e57e387b4ca276b90f0434ff88b2c2241ad72b16d31836adddfd6031b/cffi-2.0.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:3925dd22fa2b7699ed2617149842d2e6adde22b262fcbfada50e3d195e4b3a94", size = 222963, upload-time = "2025-09-08T23:22:54.518Z" }, + { url = "https://files.pythonhosted.org/packages/b6/75/1f2747525e06f53efbd878f4d03bac5b859cbc11c633d0fb81432d98a795/cffi-2.0.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:2c8f814d84194c9ea681642fd164267891702542f028a15fc97d4674b6206187", size = 221361, upload-time = "2025-09-08T23:22:55.867Z" }, + { url = "https://files.pythonhosted.org/packages/7b/2b/2b6435f76bfeb6bbf055596976da087377ede68df465419d192acf00c437/cffi-2.0.0-cp312-cp312-win32.whl", hash = "sha256:da902562c3e9c550df360bfa53c035b2f241fed6d9aef119048073680ace4a18", size = 172932, upload-time = "2025-09-08T23:22:57.188Z" }, + { url = "https://files.pythonhosted.org/packages/f8/ed/13bd4418627013bec4ed6e54283b1959cf6db888048c7cf4b4c3b5b36002/cffi-2.0.0-cp312-cp312-win_amd64.whl", hash = "sha256:da68248800ad6320861f129cd9c1bf96ca849a2771a59e0344e88681905916f5", size = 183557, upload-time = "2025-09-08T23:22:58.351Z" }, + { url = "https://files.pythonhosted.org/packages/95/31/9f7f93ad2f8eff1dbc1c3656d7ca5bfd8fb52c9d786b4dcf19b2d02217fa/cffi-2.0.0-cp312-cp312-win_arm64.whl", hash = "sha256:4671d9dd5ec934cb9a73e7ee9676f9362aba54f7f34910956b84d727b0d73fb6", size = 177762, upload-time = "2025-09-08T23:22:59.668Z" }, + { url = 
"https://files.pythonhosted.org/packages/4b/8d/a0a47a0c9e413a658623d014e91e74a50cdd2c423f7ccfd44086ef767f90/cffi-2.0.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:00bdf7acc5f795150faa6957054fbbca2439db2f775ce831222b66f192f03beb", size = 185230, upload-time = "2025-09-08T23:23:00.879Z" }, + { url = "https://files.pythonhosted.org/packages/4a/d2/a6c0296814556c68ee32009d9c2ad4f85f2707cdecfd7727951ec228005d/cffi-2.0.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:45d5e886156860dc35862657e1494b9bae8dfa63bf56796f2fb56e1679fc0bca", size = 181043, upload-time = "2025-09-08T23:23:02.231Z" }, + { url = "https://files.pythonhosted.org/packages/b0/1e/d22cc63332bd59b06481ceaac49d6c507598642e2230f201649058a7e704/cffi-2.0.0-cp313-cp313-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:07b271772c100085dd28b74fa0cd81c8fb1a3ba18b21e03d7c27f3436a10606b", size = 212446, upload-time = "2025-09-08T23:23:03.472Z" }, + { url = "https://files.pythonhosted.org/packages/a9/f5/a2c23eb03b61a0b8747f211eb716446c826ad66818ddc7810cc2cc19b3f2/cffi-2.0.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:d48a880098c96020b02d5a1f7d9251308510ce8858940e6fa99ece33f610838b", size = 220101, upload-time = "2025-09-08T23:23:04.792Z" }, + { url = "https://files.pythonhosted.org/packages/f2/7f/e6647792fc5850d634695bc0e6ab4111ae88e89981d35ac269956605feba/cffi-2.0.0-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:f93fd8e5c8c0a4aa1f424d6173f14a892044054871c771f8566e4008eaa359d2", size = 207948, upload-time = "2025-09-08T23:23:06.127Z" }, + { url = "https://files.pythonhosted.org/packages/cb/1e/a5a1bd6f1fb30f22573f76533de12a00bf274abcdc55c8edab639078abb6/cffi-2.0.0-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:dd4f05f54a52fb558f1ba9f528228066954fee3ebe629fc1660d874d040ae5a3", size = 206422, upload-time = "2025-09-08T23:23:07.753Z" }, + { url = "https://files.pythonhosted.org/packages/98/df/0a1755e750013a2081e863e7cd37e0cdd02664372c754e5560099eb7aa44/cffi-2.0.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:c8d3b5532fc71b7a77c09192b4a5a200ea992702734a2e9279a37f2478236f26", size = 219499, upload-time = "2025-09-08T23:23:09.648Z" }, + { url = "https://files.pythonhosted.org/packages/50/e1/a969e687fcf9ea58e6e2a928ad5e2dd88cc12f6f0ab477e9971f2309b57c/cffi-2.0.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:d9b29c1f0ae438d5ee9acb31cadee00a58c46cc9c0b2f9038c6b0b3470877a8c", size = 222928, upload-time = "2025-09-08T23:23:10.928Z" }, + { url = "https://files.pythonhosted.org/packages/36/54/0362578dd2c9e557a28ac77698ed67323ed5b9775ca9d3fe73fe191bb5d8/cffi-2.0.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:6d50360be4546678fc1b79ffe7a66265e28667840010348dd69a314145807a1b", size = 221302, upload-time = "2025-09-08T23:23:12.42Z" }, + { url = "https://files.pythonhosted.org/packages/eb/6d/bf9bda840d5f1dfdbf0feca87fbdb64a918a69bca42cfa0ba7b137c48cb8/cffi-2.0.0-cp313-cp313-win32.whl", hash = "sha256:74a03b9698e198d47562765773b4a8309919089150a0bb17d829ad7b44b60d27", size = 172909, upload-time = "2025-09-08T23:23:14.32Z" }, + { url = "https://files.pythonhosted.org/packages/37/18/6519e1ee6f5a1e579e04b9ddb6f1676c17368a7aba48299c3759bbc3c8b3/cffi-2.0.0-cp313-cp313-win_amd64.whl", hash = "sha256:19f705ada2530c1167abacb171925dd886168931e0a7b78f5bffcae5c6b5be75", size = 183402, upload-time = "2025-09-08T23:23:15.535Z" }, + { url = 
"https://files.pythonhosted.org/packages/cb/0e/02ceeec9a7d6ee63bb596121c2c8e9b3a9e150936f4fbef6ca1943e6137c/cffi-2.0.0-cp313-cp313-win_arm64.whl", hash = "sha256:256f80b80ca3853f90c21b23ee78cd008713787b1b1e93eae9f3d6a7134abd91", size = 177780, upload-time = "2025-09-08T23:23:16.761Z" }, + { url = "https://files.pythonhosted.org/packages/92/c4/3ce07396253a83250ee98564f8d7e9789fab8e58858f35d07a9a2c78de9f/cffi-2.0.0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:fc33c5141b55ed366cfaad382df24fe7dcbc686de5be719b207bb248e3053dc5", size = 185320, upload-time = "2025-09-08T23:23:18.087Z" }, + { url = "https://files.pythonhosted.org/packages/59/dd/27e9fa567a23931c838c6b02d0764611c62290062a6d4e8ff7863daf9730/cffi-2.0.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:c654de545946e0db659b3400168c9ad31b5d29593291482c43e3564effbcee13", size = 181487, upload-time = "2025-09-08T23:23:19.622Z" }, + { url = "https://files.pythonhosted.org/packages/d6/43/0e822876f87ea8a4ef95442c3d766a06a51fc5298823f884ef87aaad168c/cffi-2.0.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:24b6f81f1983e6df8db3adc38562c83f7d4a0c36162885ec7f7b77c7dcbec97b", size = 220049, upload-time = "2025-09-08T23:23:20.853Z" }, + { url = "https://files.pythonhosted.org/packages/b4/89/76799151d9c2d2d1ead63c2429da9ea9d7aac304603de0c6e8764e6e8e70/cffi-2.0.0-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:12873ca6cb9b0f0d3a0da705d6086fe911591737a59f28b7936bdfed27c0d47c", size = 207793, upload-time = "2025-09-08T23:23:22.08Z" }, + { url = "https://files.pythonhosted.org/packages/bb/dd/3465b14bb9e24ee24cb88c9e3730f6de63111fffe513492bf8c808a3547e/cffi-2.0.0-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:d9b97165e8aed9272a6bb17c01e3cc5871a594a446ebedc996e2397a1c1ea8ef", size = 206300, upload-time = "2025-09-08T23:23:23.314Z" }, + { url = "https://files.pythonhosted.org/packages/47/d9/d83e293854571c877a92da46fdec39158f8d7e68da75bf73581225d28e90/cffi-2.0.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:afb8db5439b81cf9c9d0c80404b60c3cc9c3add93e114dcae767f1477cb53775", size = 219244, upload-time = "2025-09-08T23:23:24.541Z" }, + { url = "https://files.pythonhosted.org/packages/2b/0f/1f177e3683aead2bb00f7679a16451d302c436b5cbf2505f0ea8146ef59e/cffi-2.0.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:737fe7d37e1a1bffe70bd5754ea763a62a066dc5913ca57e957824b72a85e205", size = 222828, upload-time = "2025-09-08T23:23:26.143Z" }, + { url = "https://files.pythonhosted.org/packages/c6/0f/cafacebd4b040e3119dcb32fed8bdef8dfe94da653155f9d0b9dc660166e/cffi-2.0.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:38100abb9d1b1435bc4cc340bb4489635dc2f0da7456590877030c9b3d40b0c1", size = 220926, upload-time = "2025-09-08T23:23:27.873Z" }, + { url = "https://files.pythonhosted.org/packages/3e/aa/df335faa45b395396fcbc03de2dfcab242cd61a9900e914fe682a59170b1/cffi-2.0.0-cp314-cp314-win32.whl", hash = "sha256:087067fa8953339c723661eda6b54bc98c5625757ea62e95eb4898ad5e776e9f", size = 175328, upload-time = "2025-09-08T23:23:44.61Z" }, + { url = "https://files.pythonhosted.org/packages/bb/92/882c2d30831744296ce713f0feb4c1cd30f346ef747b530b5318715cc367/cffi-2.0.0-cp314-cp314-win_amd64.whl", hash = "sha256:203a48d1fb583fc7d78a4c6655692963b860a417c0528492a6bc21f1aaefab25", size = 185650, upload-time = "2025-09-08T23:23:45.848Z" }, + { url = 
"https://files.pythonhosted.org/packages/9f/2c/98ece204b9d35a7366b5b2c6539c350313ca13932143e79dc133ba757104/cffi-2.0.0-cp314-cp314-win_arm64.whl", hash = "sha256:dbd5c7a25a7cb98f5ca55d258b103a2054f859a46ae11aaf23134f9cc0d356ad", size = 180687, upload-time = "2025-09-08T23:23:47.105Z" }, + { url = "https://files.pythonhosted.org/packages/3e/61/c768e4d548bfa607abcda77423448df8c471f25dbe64fb2ef6d555eae006/cffi-2.0.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:9a67fc9e8eb39039280526379fb3a70023d77caec1852002b4da7e8b270c4dd9", size = 188773, upload-time = "2025-09-08T23:23:29.347Z" }, + { url = "https://files.pythonhosted.org/packages/2c/ea/5f76bce7cf6fcd0ab1a1058b5af899bfbef198bea4d5686da88471ea0336/cffi-2.0.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:7a66c7204d8869299919db4d5069a82f1561581af12b11b3c9f48c584eb8743d", size = 185013, upload-time = "2025-09-08T23:23:30.63Z" }, + { url = "https://files.pythonhosted.org/packages/be/b4/c56878d0d1755cf9caa54ba71e5d049479c52f9e4afc230f06822162ab2f/cffi-2.0.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7cc09976e8b56f8cebd752f7113ad07752461f48a58cbba644139015ac24954c", size = 221593, upload-time = "2025-09-08T23:23:31.91Z" }, + { url = "https://files.pythonhosted.org/packages/e0/0d/eb704606dfe8033e7128df5e90fee946bbcb64a04fcdaa97321309004000/cffi-2.0.0-cp314-cp314t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:92b68146a71df78564e4ef48af17551a5ddd142e5190cdf2c5624d0c3ff5b2e8", size = 209354, upload-time = "2025-09-08T23:23:33.214Z" }, + { url = "https://files.pythonhosted.org/packages/d8/19/3c435d727b368ca475fb8742ab97c9cb13a0de600ce86f62eab7fa3eea60/cffi-2.0.0-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:b1e74d11748e7e98e2f426ab176d4ed720a64412b6a15054378afdb71e0f37dc", size = 208480, upload-time = "2025-09-08T23:23:34.495Z" }, + { url = "https://files.pythonhosted.org/packages/d0/44/681604464ed9541673e486521497406fadcc15b5217c3e326b061696899a/cffi-2.0.0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:28a3a209b96630bca57cce802da70c266eb08c6e97e5afd61a75611ee6c64592", size = 221584, upload-time = "2025-09-08T23:23:36.096Z" }, + { url = "https://files.pythonhosted.org/packages/25/8e/342a504ff018a2825d395d44d63a767dd8ebc927ebda557fecdaca3ac33a/cffi-2.0.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:7553fb2090d71822f02c629afe6042c299edf91ba1bf94951165613553984512", size = 224443, upload-time = "2025-09-08T23:23:37.328Z" }, + { url = "https://files.pythonhosted.org/packages/e1/5e/b666bacbbc60fbf415ba9988324a132c9a7a0448a9a8f125074671c0f2c3/cffi-2.0.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:6c6c373cfc5c83a975506110d17457138c8c63016b563cc9ed6e056a82f13ce4", size = 223437, upload-time = "2025-09-08T23:23:38.945Z" }, + { url = "https://files.pythonhosted.org/packages/a0/1d/ec1a60bd1a10daa292d3cd6bb0b359a81607154fb8165f3ec95fe003b85c/cffi-2.0.0-cp314-cp314t-win32.whl", hash = "sha256:1fc9ea04857caf665289b7a75923f2c6ed559b8298a1b8c49e59f7dd95c8481e", size = 180487, upload-time = "2025-09-08T23:23:40.423Z" }, + { url = "https://files.pythonhosted.org/packages/bf/41/4c1168c74fac325c0c8156f04b6749c8b6a8f405bbf91413ba088359f60d/cffi-2.0.0-cp314-cp314t-win_amd64.whl", hash = "sha256:d68b6cef7827e8641e8ef16f4494edda8b36104d79773a334beaa1e3521430f6", size = 191726, upload-time = "2025-09-08T23:23:41.742Z" }, + { url = 
"https://files.pythonhosted.org/packages/ae/3a/dbeec9d1ee0844c679f6bb5d6ad4e9f198b1224f4e7a32825f47f6192b0c/cffi-2.0.0-cp314-cp314t-win_arm64.whl", hash = "sha256:0a1527a803f0a659de1af2e1fd700213caba79377e27e4693648c2923da066f9", size = 184195, upload-time = "2025-09-08T23:23:43.004Z" }, +] + +[[package]] +name = "charset-normalizer" +version = "3.4.4" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/13/69/33ddede1939fdd074bce5434295f38fae7136463422fe4fd3e0e89b98062/charset_normalizer-3.4.4.tar.gz", hash = "sha256:94537985111c35f28720e43603b8e7b43a6ecfb2ce1d3058bbe955b73404e21a", size = 129418, upload-time = "2025-10-14T04:42:32.879Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/ed/27/c6491ff4954e58a10f69ad90aca8a1b6fe9c5d3c6f380907af3c37435b59/charset_normalizer-3.4.4-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:6e1fcf0720908f200cd21aa4e6750a48ff6ce4afe7ff5a79a90d5ed8a08296f8", size = 206988, upload-time = "2025-10-14T04:40:33.79Z" }, + { url = "https://files.pythonhosted.org/packages/94/59/2e87300fe67ab820b5428580a53cad894272dbb97f38a7a814a2a1ac1011/charset_normalizer-3.4.4-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5f819d5fe9234f9f82d75bdfa9aef3a3d72c4d24a6e57aeaebba32a704553aa0", size = 147324, upload-time = "2025-10-14T04:40:34.961Z" }, + { url = "https://files.pythonhosted.org/packages/07/fb/0cf61dc84b2b088391830f6274cb57c82e4da8bbc2efeac8c025edb88772/charset_normalizer-3.4.4-cp311-cp311-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:a59cb51917aa591b1c4e6a43c132f0cdc3c76dbad6155df4e28ee626cc77a0a3", size = 142742, upload-time = "2025-10-14T04:40:36.105Z" }, + { url = "https://files.pythonhosted.org/packages/62/8b/171935adf2312cd745d290ed93cf16cf0dfe320863ab7cbeeae1dcd6535f/charset_normalizer-3.4.4-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:8ef3c867360f88ac904fd3f5e1f902f13307af9052646963ee08ff4f131adafc", size = 160863, upload-time = "2025-10-14T04:40:37.188Z" }, + { url = "https://files.pythonhosted.org/packages/09/73/ad875b192bda14f2173bfc1bc9a55e009808484a4b256748d931b6948442/charset_normalizer-3.4.4-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:d9e45d7faa48ee908174d8fe84854479ef838fc6a705c9315372eacbc2f02897", size = 157837, upload-time = "2025-10-14T04:40:38.435Z" }, + { url = "https://files.pythonhosted.org/packages/6d/fc/de9cce525b2c5b94b47c70a4b4fb19f871b24995c728e957ee68ab1671ea/charset_normalizer-3.4.4-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:840c25fb618a231545cbab0564a799f101b63b9901f2569faecd6b222ac72381", size = 151550, upload-time = "2025-10-14T04:40:40.053Z" }, + { url = "https://files.pythonhosted.org/packages/55/c2/43edd615fdfba8c6f2dfbd459b25a6b3b551f24ea21981e23fb768503ce1/charset_normalizer-3.4.4-cp311-cp311-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:ca5862d5b3928c4940729dacc329aa9102900382fea192fc5e52eb69d6093815", size = 149162, upload-time = "2025-10-14T04:40:41.163Z" }, + { url = "https://files.pythonhosted.org/packages/03/86/bde4ad8b4d0e9429a4e82c1e8f5c659993a9a863ad62c7df05cf7b678d75/charset_normalizer-3.4.4-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:d9c7f57c3d666a53421049053eaacdd14bbd0a528e2186fcb2e672effd053bb0", size = 150019, upload-time = "2025-10-14T04:40:42.276Z" }, + { 
url = "https://files.pythonhosted.org/packages/1f/86/a151eb2af293a7e7bac3a739b81072585ce36ccfb4493039f49f1d3cae8c/charset_normalizer-3.4.4-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:277e970e750505ed74c832b4bf75dac7476262ee2a013f5574dd49075879e161", size = 143310, upload-time = "2025-10-14T04:40:43.439Z" }, + { url = "https://files.pythonhosted.org/packages/b5/fe/43dae6144a7e07b87478fdfc4dbe9efd5defb0e7ec29f5f58a55aeef7bf7/charset_normalizer-3.4.4-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:31fd66405eaf47bb62e8cd575dc621c56c668f27d46a61d975a249930dd5e2a4", size = 162022, upload-time = "2025-10-14T04:40:44.547Z" }, + { url = "https://files.pythonhosted.org/packages/80/e6/7aab83774f5d2bca81f42ac58d04caf44f0cc2b65fc6db2b3b2e8a05f3b3/charset_normalizer-3.4.4-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:0d3d8f15c07f86e9ff82319b3d9ef6f4bf907608f53fe9d92b28ea9ae3d1fd89", size = 149383, upload-time = "2025-10-14T04:40:46.018Z" }, + { url = "https://files.pythonhosted.org/packages/4f/e8/b289173b4edae05c0dde07f69f8db476a0b511eac556dfe0d6bda3c43384/charset_normalizer-3.4.4-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:9f7fcd74d410a36883701fafa2482a6af2ff5ba96b9a620e9e0721e28ead5569", size = 159098, upload-time = "2025-10-14T04:40:47.081Z" }, + { url = "https://files.pythonhosted.org/packages/d8/df/fe699727754cae3f8478493c7f45f777b17c3ef0600e28abfec8619eb49c/charset_normalizer-3.4.4-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:ebf3e58c7ec8a8bed6d66a75d7fb37b55e5015b03ceae72a8e7c74495551e224", size = 152991, upload-time = "2025-10-14T04:40:48.246Z" }, + { url = "https://files.pythonhosted.org/packages/1a/86/584869fe4ddb6ffa3bd9f491b87a01568797fb9bd8933f557dba9771beaf/charset_normalizer-3.4.4-cp311-cp311-win32.whl", hash = "sha256:eecbc200c7fd5ddb9a7f16c7decb07b566c29fa2161a16cf67b8d068bd21690a", size = 99456, upload-time = "2025-10-14T04:40:49.376Z" }, + { url = "https://files.pythonhosted.org/packages/65/f6/62fdd5feb60530f50f7e38b4f6a1d5203f4d16ff4f9f0952962c044e919a/charset_normalizer-3.4.4-cp311-cp311-win_amd64.whl", hash = "sha256:5ae497466c7901d54b639cf42d5b8c1b6a4fead55215500d2f486d34db48d016", size = 106978, upload-time = "2025-10-14T04:40:50.844Z" }, + { url = "https://files.pythonhosted.org/packages/7a/9d/0710916e6c82948b3be62d9d398cb4fcf4e97b56d6a6aeccd66c4b2f2bd5/charset_normalizer-3.4.4-cp311-cp311-win_arm64.whl", hash = "sha256:65e2befcd84bc6f37095f5961e68a6f077bf44946771354a28ad434c2cce0ae1", size = 99969, upload-time = "2025-10-14T04:40:52.272Z" }, + { url = "https://files.pythonhosted.org/packages/f3/85/1637cd4af66fa687396e757dec650f28025f2a2f5a5531a3208dc0ec43f2/charset_normalizer-3.4.4-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:0a98e6759f854bd25a58a73fa88833fba3b7c491169f86ce1180c948ab3fd394", size = 208425, upload-time = "2025-10-14T04:40:53.353Z" }, + { url = "https://files.pythonhosted.org/packages/9d/6a/04130023fef2a0d9c62d0bae2649b69f7b7d8d24ea5536feef50551029df/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b5b290ccc2a263e8d185130284f8501e3e36c5e02750fc6b6bdeb2e9e96f1e25", size = 148162, upload-time = "2025-10-14T04:40:54.558Z" }, + { url = "https://files.pythonhosted.org/packages/78/29/62328d79aa60da22c9e0b9a66539feae06ca0f5a4171ac4f7dc285b83688/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:74bb723680f9f7a6234dcf67aea57e708ec1fbdf5699fb91dfd6f511b0a320ef", size = 
144558, upload-time = "2025-10-14T04:40:55.677Z" }, + { url = "https://files.pythonhosted.org/packages/86/bb/b32194a4bf15b88403537c2e120b817c61cd4ecffa9b6876e941c3ee38fe/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:f1e34719c6ed0b92f418c7c780480b26b5d9c50349e9a9af7d76bf757530350d", size = 161497, upload-time = "2025-10-14T04:40:57.217Z" }, + { url = "https://files.pythonhosted.org/packages/19/89/a54c82b253d5b9b111dc74aca196ba5ccfcca8242d0fb64146d4d3183ff1/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:2437418e20515acec67d86e12bf70056a33abdacb5cb1655042f6538d6b085a8", size = 159240, upload-time = "2025-10-14T04:40:58.358Z" }, + { url = "https://files.pythonhosted.org/packages/c0/10/d20b513afe03acc89ec33948320a5544d31f21b05368436d580dec4e234d/charset_normalizer-3.4.4-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:11d694519d7f29d6cd09f6ac70028dba10f92f6cdd059096db198c283794ac86", size = 153471, upload-time = "2025-10-14T04:40:59.468Z" }, + { url = "https://files.pythonhosted.org/packages/61/fa/fbf177b55bdd727010f9c0a3c49eefa1d10f960e5f09d1d887bf93c2e698/charset_normalizer-3.4.4-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:ac1c4a689edcc530fc9d9aa11f5774b9e2f33f9a0c6a57864e90908f5208d30a", size = 150864, upload-time = "2025-10-14T04:41:00.623Z" }, + { url = "https://files.pythonhosted.org/packages/05/12/9fbc6a4d39c0198adeebbde20b619790e9236557ca59fc40e0e3cebe6f40/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:21d142cc6c0ec30d2efee5068ca36c128a30b0f2c53c1c07bd78cb6bc1d3be5f", size = 150647, upload-time = "2025-10-14T04:41:01.754Z" }, + { url = "https://files.pythonhosted.org/packages/ad/1f/6a9a593d52e3e8c5d2b167daf8c6b968808efb57ef4c210acb907c365bc4/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:5dbe56a36425d26d6cfb40ce79c314a2e4dd6211d51d6d2191c00bed34f354cc", size = 145110, upload-time = "2025-10-14T04:41:03.231Z" }, + { url = "https://files.pythonhosted.org/packages/30/42/9a52c609e72471b0fc54386dc63c3781a387bb4fe61c20231a4ebcd58bdd/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:5bfbb1b9acf3334612667b61bd3002196fe2a1eb4dd74d247e0f2a4d50ec9bbf", size = 162839, upload-time = "2025-10-14T04:41:04.715Z" }, + { url = "https://files.pythonhosted.org/packages/c4/5b/c0682bbf9f11597073052628ddd38344a3d673fda35a36773f7d19344b23/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:d055ec1e26e441f6187acf818b73564e6e6282709e9bcb5b63f5b23068356a15", size = 150667, upload-time = "2025-10-14T04:41:05.827Z" }, + { url = "https://files.pythonhosted.org/packages/e4/24/a41afeab6f990cf2daf6cb8c67419b63b48cf518e4f56022230840c9bfb2/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:af2d8c67d8e573d6de5bc30cdb27e9b95e49115cd9baad5ddbd1a6207aaa82a9", size = 160535, upload-time = "2025-10-14T04:41:06.938Z" }, + { url = "https://files.pythonhosted.org/packages/2a/e5/6a4ce77ed243c4a50a1fecca6aaaab419628c818a49434be428fe24c9957/charset_normalizer-3.4.4-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:780236ac706e66881f3b7f2f32dfe90507a09e67d1d454c762cf642e6e1586e0", size = 154816, upload-time = "2025-10-14T04:41:08.101Z" }, + { url = 
"https://files.pythonhosted.org/packages/a8/ef/89297262b8092b312d29cdb2517cb1237e51db8ecef2e9af5edbe7b683b1/charset_normalizer-3.4.4-cp312-cp312-win32.whl", hash = "sha256:5833d2c39d8896e4e19b689ffc198f08ea58116bee26dea51e362ecc7cd3ed26", size = 99694, upload-time = "2025-10-14T04:41:09.23Z" }, + { url = "https://files.pythonhosted.org/packages/3d/2d/1e5ed9dd3b3803994c155cd9aacb60c82c331bad84daf75bcb9c91b3295e/charset_normalizer-3.4.4-cp312-cp312-win_amd64.whl", hash = "sha256:a79cfe37875f822425b89a82333404539ae63dbdddf97f84dcbc3d339aae9525", size = 107131, upload-time = "2025-10-14T04:41:10.467Z" }, + { url = "https://files.pythonhosted.org/packages/d0/d9/0ed4c7098a861482a7b6a95603edce4c0d9db2311af23da1fb2b75ec26fc/charset_normalizer-3.4.4-cp312-cp312-win_arm64.whl", hash = "sha256:376bec83a63b8021bb5c8ea75e21c4ccb86e7e45ca4eb81146091b56599b80c3", size = 100390, upload-time = "2025-10-14T04:41:11.915Z" }, + { url = "https://files.pythonhosted.org/packages/97/45/4b3a1239bbacd321068ea6e7ac28875b03ab8bc0aa0966452db17cd36714/charset_normalizer-3.4.4-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:e1f185f86a6f3403aa2420e815904c67b2f9ebc443f045edd0de921108345794", size = 208091, upload-time = "2025-10-14T04:41:13.346Z" }, + { url = "https://files.pythonhosted.org/packages/7d/62/73a6d7450829655a35bb88a88fca7d736f9882a27eacdca2c6d505b57e2e/charset_normalizer-3.4.4-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6b39f987ae8ccdf0d2642338faf2abb1862340facc796048b604ef14919e55ed", size = 147936, upload-time = "2025-10-14T04:41:14.461Z" }, + { url = "https://files.pythonhosted.org/packages/89/c5/adb8c8b3d6625bef6d88b251bbb0d95f8205831b987631ab0c8bb5d937c2/charset_normalizer-3.4.4-cp313-cp313-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:3162d5d8ce1bb98dd51af660f2121c55d0fa541b46dff7bb9b9f86ea1d87de72", size = 144180, upload-time = "2025-10-14T04:41:15.588Z" }, + { url = "https://files.pythonhosted.org/packages/91/ed/9706e4070682d1cc219050b6048bfd293ccf67b3d4f5a4f39207453d4b99/charset_normalizer-3.4.4-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:81d5eb2a312700f4ecaa977a8235b634ce853200e828fbadf3a9c50bab278328", size = 161346, upload-time = "2025-10-14T04:41:16.738Z" }, + { url = "https://files.pythonhosted.org/packages/d5/0d/031f0d95e4972901a2f6f09ef055751805ff541511dc1252ba3ca1f80cf5/charset_normalizer-3.4.4-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:5bd2293095d766545ec1a8f612559f6b40abc0eb18bb2f5d1171872d34036ede", size = 158874, upload-time = "2025-10-14T04:41:17.923Z" }, + { url = "https://files.pythonhosted.org/packages/f5/83/6ab5883f57c9c801ce5e5677242328aa45592be8a00644310a008d04f922/charset_normalizer-3.4.4-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a8a8b89589086a25749f471e6a900d3f662d1d3b6e2e59dcecf787b1cc3a1894", size = 153076, upload-time = "2025-10-14T04:41:19.106Z" }, + { url = "https://files.pythonhosted.org/packages/75/1e/5ff781ddf5260e387d6419959ee89ef13878229732732ee73cdae01800f2/charset_normalizer-3.4.4-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:bc7637e2f80d8530ee4a78e878bce464f70087ce73cf7c1caf142416923b98f1", size = 150601, upload-time = "2025-10-14T04:41:20.245Z" }, + { url = 
"https://files.pythonhosted.org/packages/d7/57/71be810965493d3510a6ca79b90c19e48696fb1ff964da319334b12677f0/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f8bf04158c6b607d747e93949aa60618b61312fe647a6369f88ce2ff16043490", size = 150376, upload-time = "2025-10-14T04:41:21.398Z" }, + { url = "https://files.pythonhosted.org/packages/e5/d5/c3d057a78c181d007014feb7e9f2e65905a6c4ef182c0ddf0de2924edd65/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:554af85e960429cf30784dd47447d5125aaa3b99a6f0683589dbd27e2f45da44", size = 144825, upload-time = "2025-10-14T04:41:22.583Z" }, + { url = "https://files.pythonhosted.org/packages/e6/8c/d0406294828d4976f275ffbe66f00266c4b3136b7506941d87c00cab5272/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:74018750915ee7ad843a774364e13a3db91682f26142baddf775342c3f5b1133", size = 162583, upload-time = "2025-10-14T04:41:23.754Z" }, + { url = "https://files.pythonhosted.org/packages/d7/24/e2aa1f18c8f15c4c0e932d9287b8609dd30ad56dbe41d926bd846e22fb8d/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:c0463276121fdee9c49b98908b3a89c39be45d86d1dbaa22957e38f6321d4ce3", size = 150366, upload-time = "2025-10-14T04:41:25.27Z" }, + { url = "https://files.pythonhosted.org/packages/e4/5b/1e6160c7739aad1e2df054300cc618b06bf784a7a164b0f238360721ab86/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:362d61fd13843997c1c446760ef36f240cf81d3ebf74ac62652aebaf7838561e", size = 160300, upload-time = "2025-10-14T04:41:26.725Z" }, + { url = "https://files.pythonhosted.org/packages/7a/10/f882167cd207fbdd743e55534d5d9620e095089d176d55cb22d5322f2afd/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:9a26f18905b8dd5d685d6d07b0cdf98a79f3c7a918906af7cc143ea2e164c8bc", size = 154465, upload-time = "2025-10-14T04:41:28.322Z" }, + { url = "https://files.pythonhosted.org/packages/89/66/c7a9e1b7429be72123441bfdbaf2bc13faab3f90b933f664db506dea5915/charset_normalizer-3.4.4-cp313-cp313-win32.whl", hash = "sha256:9b35f4c90079ff2e2edc5b26c0c77925e5d2d255c42c74fdb70fb49b172726ac", size = 99404, upload-time = "2025-10-14T04:41:29.95Z" }, + { url = "https://files.pythonhosted.org/packages/c4/26/b9924fa27db384bdcd97ab83b4f0a8058d96ad9626ead570674d5e737d90/charset_normalizer-3.4.4-cp313-cp313-win_amd64.whl", hash = "sha256:b435cba5f4f750aa6c0a0d92c541fb79f69a387c91e61f1795227e4ed9cece14", size = 107092, upload-time = "2025-10-14T04:41:31.188Z" }, + { url = "https://files.pythonhosted.org/packages/af/8f/3ed4bfa0c0c72a7ca17f0380cd9e4dd842b09f664e780c13cff1dcf2ef1b/charset_normalizer-3.4.4-cp313-cp313-win_arm64.whl", hash = "sha256:542d2cee80be6f80247095cc36c418f7bddd14f4a6de45af91dfad36d817bba2", size = 100408, upload-time = "2025-10-14T04:41:32.624Z" }, + { url = "https://files.pythonhosted.org/packages/2a/35/7051599bd493e62411d6ede36fd5af83a38f37c4767b92884df7301db25d/charset_normalizer-3.4.4-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:da3326d9e65ef63a817ecbcc0df6e94463713b754fe293eaa03da99befb9a5bd", size = 207746, upload-time = "2025-10-14T04:41:33.773Z" }, + { url = "https://files.pythonhosted.org/packages/10/9a/97c8d48ef10d6cd4fcead2415523221624bf58bcf68a802721a6bc807c8f/charset_normalizer-3.4.4-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8af65f14dc14a79b924524b1e7fffe304517b2bff5a58bf64f30b98bbc5079eb", size = 147889, upload-time = "2025-10-14T04:41:34.897Z" }, 
+ { url = "https://files.pythonhosted.org/packages/10/bf/979224a919a1b606c82bd2c5fa49b5c6d5727aa47b4312bb27b1734f53cd/charset_normalizer-3.4.4-cp314-cp314-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:74664978bb272435107de04e36db5a9735e78232b85b77d45cfb38f758efd33e", size = 143641, upload-time = "2025-10-14T04:41:36.116Z" }, + { url = "https://files.pythonhosted.org/packages/ba/33/0ad65587441fc730dc7bd90e9716b30b4702dc7b617e6ba4997dc8651495/charset_normalizer-3.4.4-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:752944c7ffbfdd10c074dc58ec2d5a8a4cd9493b314d367c14d24c17684ddd14", size = 160779, upload-time = "2025-10-14T04:41:37.229Z" }, + { url = "https://files.pythonhosted.org/packages/67/ed/331d6b249259ee71ddea93f6f2f0a56cfebd46938bde6fcc6f7b9a3d0e09/charset_normalizer-3.4.4-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:d1f13550535ad8cff21b8d757a3257963e951d96e20ec82ab44bc64aeb62a191", size = 159035, upload-time = "2025-10-14T04:41:38.368Z" }, + { url = "https://files.pythonhosted.org/packages/67/ff/f6b948ca32e4f2a4576aa129d8bed61f2e0543bf9f5f2b7fc3758ed005c9/charset_normalizer-3.4.4-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ecaae4149d99b1c9e7b88bb03e3221956f68fd6d50be2ef061b2381b61d20838", size = 152542, upload-time = "2025-10-14T04:41:39.862Z" }, + { url = "https://files.pythonhosted.org/packages/16/85/276033dcbcc369eb176594de22728541a925b2632f9716428c851b149e83/charset_normalizer-3.4.4-cp314-cp314-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:cb6254dc36b47a990e59e1068afacdcd02958bdcce30bb50cc1700a8b9d624a6", size = 149524, upload-time = "2025-10-14T04:41:41.319Z" }, + { url = "https://files.pythonhosted.org/packages/9e/f2/6a2a1f722b6aba37050e626530a46a68f74e63683947a8acff92569f979a/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:c8ae8a0f02f57a6e61203a31428fa1d677cbe50c93622b4149d5c0f319c1d19e", size = 150395, upload-time = "2025-10-14T04:41:42.539Z" }, + { url = "https://files.pythonhosted.org/packages/60/bb/2186cb2f2bbaea6338cad15ce23a67f9b0672929744381e28b0592676824/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_armv7l.whl", hash = "sha256:47cc91b2f4dd2833fddaedd2893006b0106129d4b94fdb6af1f4ce5a9965577c", size = 143680, upload-time = "2025-10-14T04:41:43.661Z" }, + { url = "https://files.pythonhosted.org/packages/7d/a5/bf6f13b772fbb2a90360eb620d52ed8f796f3c5caee8398c3b2eb7b1c60d/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_ppc64le.whl", hash = "sha256:82004af6c302b5d3ab2cfc4cc5f29db16123b1a8417f2e25f9066f91d4411090", size = 162045, upload-time = "2025-10-14T04:41:44.821Z" }, + { url = "https://files.pythonhosted.org/packages/df/c5/d1be898bf0dc3ef9030c3825e5d3b83f2c528d207d246cbabe245966808d/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:2b7d8f6c26245217bd2ad053761201e9f9680f8ce52f0fcd8d0755aeae5b2152", size = 149687, upload-time = "2025-10-14T04:41:46.442Z" }, + { url = "https://files.pythonhosted.org/packages/a5/42/90c1f7b9341eef50c8a1cb3f098ac43b0508413f33affd762855f67a410e/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_s390x.whl", hash = "sha256:799a7a5e4fb2d5898c60b640fd4981d6a25f1c11790935a44ce38c54e985f828", size = 160014, upload-time = "2025-10-14T04:41:47.631Z" }, + { url = 
"https://files.pythonhosted.org/packages/76/be/4d3ee471e8145d12795ab655ece37baed0929462a86e72372fd25859047c/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:99ae2cffebb06e6c22bdc25801d7b30f503cc87dbd283479e7b606f70aff57ec", size = 154044, upload-time = "2025-10-14T04:41:48.81Z" }, + { url = "https://files.pythonhosted.org/packages/b0/6f/8f7af07237c34a1defe7defc565a9bc1807762f672c0fde711a4b22bf9c0/charset_normalizer-3.4.4-cp314-cp314-win32.whl", hash = "sha256:f9d332f8c2a2fcbffe1378594431458ddbef721c1769d78e2cbc06280d8155f9", size = 99940, upload-time = "2025-10-14T04:41:49.946Z" }, + { url = "https://files.pythonhosted.org/packages/4b/51/8ade005e5ca5b0d80fb4aff72a3775b325bdc3d27408c8113811a7cbe640/charset_normalizer-3.4.4-cp314-cp314-win_amd64.whl", hash = "sha256:8a6562c3700cce886c5be75ade4a5db4214fda19fede41d9792d100288d8f94c", size = 107104, upload-time = "2025-10-14T04:41:51.051Z" }, + { url = "https://files.pythonhosted.org/packages/da/5f/6b8f83a55bb8278772c5ae54a577f3099025f9ade59d0136ac24a0df4bde/charset_normalizer-3.4.4-cp314-cp314-win_arm64.whl", hash = "sha256:de00632ca48df9daf77a2c65a484531649261ec9f25489917f09e455cb09ddb2", size = 100743, upload-time = "2025-10-14T04:41:52.122Z" }, + { url = "https://files.pythonhosted.org/packages/0a/4c/925909008ed5a988ccbb72dcc897407e5d6d3bd72410d69e051fc0c14647/charset_normalizer-3.4.4-py3-none-any.whl", hash = "sha256:7a32c560861a02ff789ad905a2fe94e3f840803362c84fecf1851cb4cf3dc37f", size = 53402, upload-time = "2025-10-14T04:42:31.76Z" }, +] + +[[package]] +name = "colorama" +version = "0.4.6" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/d8/53/6f443c9a4a8358a93a6792e2acffb9d9d5cb0a5cfd8802644b7b1c9a02e4/colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44", size = 27697, upload-time = "2022-10-25T02:36:22.414Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335, upload-time = "2022-10-25T02:36:20.889Z" }, +] + +[[package]] +name = "grpcio" +version = "1.76.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "typing-extensions" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/b6/e0/318c1ce3ae5a17894d5791e87aea147587c9e702f24122cc7a5c8bbaeeb1/grpcio-1.76.0.tar.gz", hash = "sha256:7be78388d6da1a25c0d5ec506523db58b18be22d9c37d8d3a32c08be4987bd73", size = 12785182, upload-time = "2025-10-21T16:23:12.106Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/a0/00/8163a1beeb6971f66b4bbe6ac9457b97948beba8dd2fc8e1281dce7f79ec/grpcio-1.76.0-cp311-cp311-linux_armv7l.whl", hash = "sha256:2e1743fbd7f5fa713a1b0a8ac8ebabf0ec980b5d8809ec358d488e273b9cf02a", size = 5843567, upload-time = "2025-10-21T16:20:52.829Z" }, + { url = "https://files.pythonhosted.org/packages/10/c1/934202f5cf335e6d852530ce14ddb0fef21be612ba9ecbbcbd4d748ca32d/grpcio-1.76.0-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:a8c2cf1209497cf659a667d7dea88985e834c24b7c3b605e6254cbb5076d985c", size = 11848017, upload-time = "2025-10-21T16:20:56.705Z" }, + { url = "https://files.pythonhosted.org/packages/11/0b/8dec16b1863d74af6eb3543928600ec2195af49ca58b16334972f6775663/grpcio-1.76.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = 
"sha256:08caea849a9d3c71a542827d6df9d5a69067b0a1efbea8a855633ff5d9571465", size = 6412027, upload-time = "2025-10-21T16:20:59.3Z" }, + { url = "https://files.pythonhosted.org/packages/d7/64/7b9e6e7ab910bea9d46f2c090380bab274a0b91fb0a2fe9b0cd399fffa12/grpcio-1.76.0-cp311-cp311-manylinux2014_i686.manylinux_2_17_i686.whl", hash = "sha256:f0e34c2079d47ae9f6188211db9e777c619a21d4faba6977774e8fa43b085e48", size = 7075913, upload-time = "2025-10-21T16:21:01.645Z" }, + { url = "https://files.pythonhosted.org/packages/68/86/093c46e9546073cefa789bd76d44c5cb2abc824ca62af0c18be590ff13ba/grpcio-1.76.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:8843114c0cfce61b40ad48df65abcfc00d4dba82eae8718fab5352390848c5da", size = 6615417, upload-time = "2025-10-21T16:21:03.844Z" }, + { url = "https://files.pythonhosted.org/packages/f7/b6/5709a3a68500a9c03da6fb71740dcdd5ef245e39266461a03f31a57036d8/grpcio-1.76.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:8eddfb4d203a237da6f3cc8a540dad0517d274b5a1e9e636fd8d2c79b5c1d397", size = 7199683, upload-time = "2025-10-21T16:21:06.195Z" }, + { url = "https://files.pythonhosted.org/packages/91/d3/4b1f2bf16ed52ce0b508161df3a2d186e4935379a159a834cb4a7d687429/grpcio-1.76.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:32483fe2aab2c3794101c2a159070584e5db11d0aa091b2c0ea9c4fc43d0d749", size = 8163109, upload-time = "2025-10-21T16:21:08.498Z" }, + { url = "https://files.pythonhosted.org/packages/5c/61/d9043f95f5f4cf085ac5dd6137b469d41befb04bd80280952ffa2a4c3f12/grpcio-1.76.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:dcfe41187da8992c5f40aa8c5ec086fa3672834d2be57a32384c08d5a05b4c00", size = 7626676, upload-time = "2025-10-21T16:21:10.693Z" }, + { url = "https://files.pythonhosted.org/packages/36/95/fd9a5152ca02d8881e4dd419cdd790e11805979f499a2e5b96488b85cf27/grpcio-1.76.0-cp311-cp311-win32.whl", hash = "sha256:2107b0c024d1b35f4083f11245c0e23846ae64d02f40b2b226684840260ed054", size = 3997688, upload-time = "2025-10-21T16:21:12.746Z" }, + { url = "https://files.pythonhosted.org/packages/60/9c/5c359c8d4c9176cfa3c61ecd4efe5affe1f38d9bae81e81ac7186b4c9cc8/grpcio-1.76.0-cp311-cp311-win_amd64.whl", hash = "sha256:522175aba7af9113c48ec10cc471b9b9bd4f6ceb36aeb4544a8e2c80ed9d252d", size = 4709315, upload-time = "2025-10-21T16:21:15.26Z" }, + { url = "https://files.pythonhosted.org/packages/bf/05/8e29121994b8d959ffa0afd28996d452f291b48cfc0875619de0bde2c50c/grpcio-1.76.0-cp312-cp312-linux_armv7l.whl", hash = "sha256:81fd9652b37b36f16138611c7e884eb82e0cec137c40d3ef7c3f9b3ed00f6ed8", size = 5799718, upload-time = "2025-10-21T16:21:17.939Z" }, + { url = "https://files.pythonhosted.org/packages/d9/75/11d0e66b3cdf998c996489581bdad8900db79ebd83513e45c19548f1cba4/grpcio-1.76.0-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:04bbe1bfe3a68bbfd4e52402ab7d4eb59d72d02647ae2042204326cf4bbad280", size = 11825627, upload-time = "2025-10-21T16:21:20.466Z" }, + { url = "https://files.pythonhosted.org/packages/28/50/2f0aa0498bc188048f5d9504dcc5c2c24f2eb1a9337cd0fa09a61a2e75f0/grpcio-1.76.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:d388087771c837cdb6515539f43b9d4bf0b0f23593a24054ac16f7a960be16f4", size = 6359167, upload-time = "2025-10-21T16:21:23.122Z" }, + { url = "https://files.pythonhosted.org/packages/66/e5/bbf0bb97d29ede1d59d6588af40018cfc345b17ce979b7b45424628dc8bb/grpcio-1.76.0-cp312-cp312-manylinux2014_i686.manylinux_2_17_i686.whl", hash = 
"sha256:9f8f757bebaaea112c00dba718fc0d3260052ce714e25804a03f93f5d1c6cc11", size = 7044267, upload-time = "2025-10-21T16:21:25.995Z" }, + { url = "https://files.pythonhosted.org/packages/f5/86/f6ec2164f743d9609691115ae8ece098c76b894ebe4f7c94a655c6b03e98/grpcio-1.76.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:980a846182ce88c4f2f7e2c22c56aefd515daeb36149d1c897f83cf57999e0b6", size = 6573963, upload-time = "2025-10-21T16:21:28.631Z" }, + { url = "https://files.pythonhosted.org/packages/60/bc/8d9d0d8505feccfdf38a766d262c71e73639c165b311c9457208b56d92ae/grpcio-1.76.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:f92f88e6c033db65a5ae3d97905c8fea9c725b63e28d5a75cb73b49bda5024d8", size = 7164484, upload-time = "2025-10-21T16:21:30.837Z" }, + { url = "https://files.pythonhosted.org/packages/67/e6/5d6c2fc10b95edf6df9b8f19cf10a34263b7fd48493936fffd5085521292/grpcio-1.76.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:4baf3cbe2f0be3289eb68ac8ae771156971848bb8aaff60bad42005539431980", size = 8127777, upload-time = "2025-10-21T16:21:33.577Z" }, + { url = "https://files.pythonhosted.org/packages/3f/c8/dce8ff21c86abe025efe304d9e31fdb0deaaa3b502b6a78141080f206da0/grpcio-1.76.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:615ba64c208aaceb5ec83bfdce7728b80bfeb8be97562944836a7a0a9647d882", size = 7594014, upload-time = "2025-10-21T16:21:41.882Z" }, + { url = "https://files.pythonhosted.org/packages/e0/42/ad28191ebf983a5d0ecef90bab66baa5a6b18f2bfdef9d0a63b1973d9f75/grpcio-1.76.0-cp312-cp312-win32.whl", hash = "sha256:45d59a649a82df5718fd9527ce775fd66d1af35e6d31abdcdc906a49c6822958", size = 3984750, upload-time = "2025-10-21T16:21:44.006Z" }, + { url = "https://files.pythonhosted.org/packages/9e/00/7bd478cbb851c04a48baccaa49b75abaa8e4122f7d86da797500cccdd771/grpcio-1.76.0-cp312-cp312-win_amd64.whl", hash = "sha256:c088e7a90b6017307f423efbb9d1ba97a22aa2170876223f9709e9d1de0b5347", size = 4704003, upload-time = "2025-10-21T16:21:46.244Z" }, + { url = "https://files.pythonhosted.org/packages/fc/ed/71467ab770effc9e8cef5f2e7388beb2be26ed642d567697bb103a790c72/grpcio-1.76.0-cp313-cp313-linux_armv7l.whl", hash = "sha256:26ef06c73eb53267c2b319f43e6634c7556ea37672029241a056629af27c10e2", size = 5807716, upload-time = "2025-10-21T16:21:48.475Z" }, + { url = "https://files.pythonhosted.org/packages/2c/85/c6ed56f9817fab03fa8a111ca91469941fb514e3e3ce6d793cb8f1e1347b/grpcio-1.76.0-cp313-cp313-macosx_11_0_universal2.whl", hash = "sha256:45e0111e73f43f735d70786557dc38141185072d7ff8dc1829d6a77ac1471468", size = 11821522, upload-time = "2025-10-21T16:21:51.142Z" }, + { url = "https://files.pythonhosted.org/packages/ac/31/2b8a235ab40c39cbc141ef647f8a6eb7b0028f023015a4842933bc0d6831/grpcio-1.76.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:83d57312a58dcfe2a3a0f9d1389b299438909a02db60e2f2ea2ae2d8034909d3", size = 6362558, upload-time = "2025-10-21T16:21:54.213Z" }, + { url = "https://files.pythonhosted.org/packages/bd/64/9784eab483358e08847498ee56faf8ff6ea8e0a4592568d9f68edc97e9e9/grpcio-1.76.0-cp313-cp313-manylinux2014_i686.manylinux_2_17_i686.whl", hash = "sha256:3e2a27c89eb9ac3d81ec8835e12414d73536c6e620355d65102503064a4ed6eb", size = 7049990, upload-time = "2025-10-21T16:21:56.476Z" }, + { url = "https://files.pythonhosted.org/packages/2b/94/8c12319a6369434e7a184b987e8e9f3b49a114c489b8315f029e24de4837/grpcio-1.76.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = 
"sha256:61f69297cba3950a524f61c7c8ee12e55c486cb5f7db47ff9dcee33da6f0d3ae", size = 6575387, upload-time = "2025-10-21T16:21:59.051Z" }, + { url = "https://files.pythonhosted.org/packages/15/0f/f12c32b03f731f4a6242f771f63039df182c8b8e2cf8075b245b409259d4/grpcio-1.76.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:6a15c17af8839b6801d554263c546c69c4d7718ad4321e3166175b37eaacca77", size = 7166668, upload-time = "2025-10-21T16:22:02.049Z" }, + { url = "https://files.pythonhosted.org/packages/ff/2d/3ec9ce0c2b1d92dd59d1c3264aaec9f0f7c817d6e8ac683b97198a36ed5a/grpcio-1.76.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:25a18e9810fbc7e7f03ec2516addc116a957f8cbb8cbc95ccc80faa072743d03", size = 8124928, upload-time = "2025-10-21T16:22:04.984Z" }, + { url = "https://files.pythonhosted.org/packages/1a/74/fd3317be5672f4856bcdd1a9e7b5e17554692d3db9a3b273879dc02d657d/grpcio-1.76.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:931091142fd8cc14edccc0845a79248bc155425eee9a98b2db2ea4f00a235a42", size = 7589983, upload-time = "2025-10-21T16:22:07.881Z" }, + { url = "https://files.pythonhosted.org/packages/45/bb/ca038cf420f405971f19821c8c15bcbc875505f6ffadafe9ffd77871dc4c/grpcio-1.76.0-cp313-cp313-win32.whl", hash = "sha256:5e8571632780e08526f118f74170ad8d50fb0a48c23a746bef2a6ebade3abd6f", size = 3984727, upload-time = "2025-10-21T16:22:10.032Z" }, + { url = "https://files.pythonhosted.org/packages/41/80/84087dc56437ced7cdd4b13d7875e7439a52a261e3ab4e06488ba6173b0a/grpcio-1.76.0-cp313-cp313-win_amd64.whl", hash = "sha256:f9f7bd5faab55f47231ad8dba7787866b69f5e93bc306e3915606779bbfb4ba8", size = 4702799, upload-time = "2025-10-21T16:22:12.709Z" }, + { url = "https://files.pythonhosted.org/packages/b4/46/39adac80de49d678e6e073b70204091e76631e03e94928b9ea4ecf0f6e0e/grpcio-1.76.0-cp314-cp314-linux_armv7l.whl", hash = "sha256:ff8a59ea85a1f2191a0ffcc61298c571bc566332f82e5f5be1b83c9d8e668a62", size = 5808417, upload-time = "2025-10-21T16:22:15.02Z" }, + { url = "https://files.pythonhosted.org/packages/9c/f5/a4531f7fb8b4e2a60b94e39d5d924469b7a6988176b3422487be61fe2998/grpcio-1.76.0-cp314-cp314-macosx_11_0_universal2.whl", hash = "sha256:06c3d6b076e7b593905d04fdba6a0525711b3466f43b3400266f04ff735de0cd", size = 11828219, upload-time = "2025-10-21T16:22:17.954Z" }, + { url = "https://files.pythonhosted.org/packages/4b/1c/de55d868ed7a8bd6acc6b1d6ddc4aa36d07a9f31d33c912c804adb1b971b/grpcio-1.76.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:fd5ef5932f6475c436c4a55e4336ebbe47bd3272be04964a03d316bbf4afbcbc", size = 6367826, upload-time = "2025-10-21T16:22:20.721Z" }, + { url = "https://files.pythonhosted.org/packages/59/64/99e44c02b5adb0ad13ab3adc89cb33cb54bfa90c74770f2607eea629b86f/grpcio-1.76.0-cp314-cp314-manylinux2014_i686.manylinux_2_17_i686.whl", hash = "sha256:b331680e46239e090f5b3cead313cc772f6caa7d0fc8de349337563125361a4a", size = 7049550, upload-time = "2025-10-21T16:22:23.637Z" }, + { url = "https://files.pythonhosted.org/packages/43/28/40a5be3f9a86949b83e7d6a2ad6011d993cbe9b6bd27bea881f61c7788b6/grpcio-1.76.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:2229ae655ec4e8999599469559e97630185fdd53ae1e8997d147b7c9b2b72cba", size = 6575564, upload-time = "2025-10-21T16:22:26.016Z" }, + { url = "https://files.pythonhosted.org/packages/4b/a9/1be18e6055b64467440208a8559afac243c66a8b904213af6f392dc2212f/grpcio-1.76.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:490fa6d203992c47c7b9e4a9d39003a0c2bcc1c9aa3c058730884bbbb0ee9f09", size 
= 7176236, upload-time = "2025-10-21T16:22:28.362Z" }, + { url = "https://files.pythonhosted.org/packages/0f/55/dba05d3fcc151ce6e81327541d2cc8394f442f6b350fead67401661bf041/grpcio-1.76.0-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:479496325ce554792dba6548fae3df31a72cef7bad71ca2e12b0e58f9b336bfc", size = 8125795, upload-time = "2025-10-21T16:22:31.075Z" }, + { url = "https://files.pythonhosted.org/packages/4a/45/122df922d05655f63930cf42c9e3f72ba20aadb26c100ee105cad4ce4257/grpcio-1.76.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:1c9b93f79f48b03ada57ea24725d83a30284a012ec27eab2cf7e50a550cbbbcc", size = 7592214, upload-time = "2025-10-21T16:22:33.831Z" }, + { url = "https://files.pythonhosted.org/packages/4a/6e/0b899b7f6b66e5af39e377055fb4a6675c9ee28431df5708139df2e93233/grpcio-1.76.0-cp314-cp314-win32.whl", hash = "sha256:747fa73efa9b8b1488a95d0ba1039c8e2dca0f741612d80415b1e1c560febf4e", size = 4062961, upload-time = "2025-10-21T16:22:36.468Z" }, + { url = "https://files.pythonhosted.org/packages/19/41/0b430b01a2eb38ee887f88c1f07644a1df8e289353b78e82b37ef988fb64/grpcio-1.76.0-cp314-cp314-win_amd64.whl", hash = "sha256:922fa70ba549fce362d2e2871ab542082d66e2aaf0c19480ea453905b01f384e", size = 4834462, upload-time = "2025-10-21T16:22:39.772Z" }, +] + +[[package]] +name = "grpcio-reflection" +version = "1.76.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "grpcio" }, + { name = "protobuf" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/bd/10/767f9c2719c435616141efb3371f6e158f95cdde36a34876ae1d08ba7440/grpcio_reflection-1.76.0.tar.gz", hash = "sha256:e0e7e49921c2ee951e5ddff0bdbacbd1ac1a70888beb61d567f3d01b799decb1", size = 18845, upload-time = "2025-10-21T16:28:57.776Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/32/af/6168cf4ff389deed1388b1196281c67cb36dbbf44aaee40e2bfb72ac0202/grpcio_reflection-1.76.0-py3-none-any.whl", hash = "sha256:d7c43f2047a2a9c9320a5905aa7133c677977436b5f63e6a868e507864a11c73", size = 22702, upload-time = "2025-10-21T16:27:40.846Z" }, +] + +[[package]] +name = "h11" +version = "0.16.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/01/ee/02a2c011bdab74c6fb3c75474d40b3052059d95df7e73351460c8588d963/h11-0.16.0.tar.gz", hash = "sha256:4e35b956cf45792e4caa5885e69fba00bdbc6ffafbfa020300e549b208ee5ff1", size = 101250, upload-time = "2025-04-24T03:35:25.427Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/04/4b/29cac41a4d98d144bf5f6d33995617b185d14b22401f75ca86f384e87ff1/h11-0.16.0-py3-none-any.whl", hash = "sha256:63cf8bbe7522de3bf65932fda1d9c2772064ffb3dae62d55932da54b31cb6c86", size = 37515, upload-time = "2025-04-24T03:35:24.344Z" }, +] + +[[package]] +name = "httpcore" +version = "1.0.9" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "certifi" }, + { name = "h11" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/06/94/82699a10bca87a5556c9c59b5963f2d039dbd239f25bc2a63907a05a14cb/httpcore-1.0.9.tar.gz", hash = "sha256:6e34463af53fd2ab5d807f399a9b45ea31c3dfa2276f15a2c3f00afff6e176e8", size = 85484, upload-time = "2025-04-24T22:06:22.219Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/7e/f5/f66802a942d491edb555dd61e3a9961140fd64c90bce1eafd741609d334d/httpcore-1.0.9-py3-none-any.whl", hash = "sha256:2d400746a40668fc9dec9810239072b40b4484b640a8c38fd654a024c7a1bf55", size = 78784, upload-time = "2025-04-24T22:06:20.566Z" }, +] + +[[package]] +name = 
"httpx" +version = "0.28.1" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "anyio" }, + { name = "certifi" }, + { name = "httpcore" }, + { name = "idna" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/b1/df/48c586a5fe32a0f01324ee087459e112ebb7224f646c0b5023f5e79e9956/httpx-0.28.1.tar.gz", hash = "sha256:75e98c5f16b0f35b567856f597f06ff2270a374470a5c2392242528e3e3e42fc", size = 141406, upload-time = "2024-12-06T15:37:23.222Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/2a/39/e50c7c3a983047577ee07d2a9e53faf5a69493943ec3f6a384bdc792deb2/httpx-0.28.1-py3-none-any.whl", hash = "sha256:d909fcccc110f8c7faf814ca82a9a4d816bc5a6dbfea25d6591d6985b8ba59ad", size = 73517, upload-time = "2024-12-06T15:37:21.509Z" }, +] + +[[package]] +name = "idna" +version = "3.11" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/6f/6d/0703ccc57f3a7233505399edb88de3cbd678da106337b9fcde432b65ed60/idna-3.11.tar.gz", hash = "sha256:795dafcc9c04ed0c1fb032c2aa73654d8e8c5023a7df64a53f39190ada629902", size = 194582, upload-time = "2025-10-12T14:55:20.501Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/0e/61/66938bbb5fc52dbdf84594873d5b51fb1f7c7794e9c0f5bd885f30bc507b/idna-3.11-py3-none-any.whl", hash = "sha256:771a87f49d9defaf64091e6e6fe9c18d4833f140bd19464795bc32d966ca37ea", size = 71008, upload-time = "2025-10-12T14:55:18.883Z" }, +] + +[[package]] +name = "iniconfig" +version = "2.3.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/72/34/14ca021ce8e5dfedc35312d08ba8bf51fdd999c576889fc2c24cb97f4f10/iniconfig-2.3.0.tar.gz", hash = "sha256:c76315c77db068650d49c5b56314774a7804df16fee4402c1f19d6d15d8c4730", size = 20503, upload-time = "2025-10-18T21:55:43.219Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/cb/b1/3846dd7f199d53cb17f49cba7e651e9ce294d8497c8c150530ed11865bb8/iniconfig-2.3.0-py3-none-any.whl", hash = "sha256:f631c04d2c48c52b84d0d0549c99ff3859c98df65b3101406327ecc7d53fbf12", size = 7484, upload-time = "2025-10-18T21:55:41.639Z" }, +] + +[[package]] +name = "jinja2" +version = "3.1.6" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "markupsafe" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/df/bf/f7da0350254c0ed7c72f3e33cef02e048281fec7ecec5f032d4aac52226b/jinja2-3.1.6.tar.gz", hash = "sha256:0137fb05990d35f1275a587e9aee6d56da821fc83491a0fb838183be43f66d6d", size = 245115, upload-time = "2025-03-05T20:05:02.478Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/62/a1/3d680cbfd5f4b8f15abc1d571870c5fc3e594bb582bc3b64ea099db13e56/jinja2-3.1.6-py3-none-any.whl", hash = "sha256:85ece4451f492d0c13c5dd7c13a64681a86afae63a5f347908daf103ce6d2f67", size = 134899, upload-time = "2025-03-05T20:05:00.369Z" }, +] + +[[package]] +name = "labgrid" +version = "25.1.dev101" +source = { git = "https://github.com/aparcar/labgrid.git?rev=aparcar%2Fstaging#e866266b2d955893bc718dd1a81d474eb88adfa5" } +dependencies = [ + { name = "attrs" }, + { name = "grpcio" }, + { name = "grpcio-reflection" }, + { name = "jinja2" }, + { name = "pexpect" }, + { name = "protobuf" }, + { name = "pyserial-labgrid" }, + { name = "pytest" }, + { name = "pyudev" }, + { name = "pyusb" }, + { name = "pyyaml" }, + { name = "requests" }, + { name = "xmodem" }, +] + +[[package]] +name = "labgrid-kci-adapter" +version = "0.1.0" +source = { editable = "." 
} +dependencies = [ + { name = "httpx" }, + { name = "labgrid" }, + { name = "minio" }, + { name = "pydantic" }, + { name = "pydantic-settings" }, + { name = "pyyaml" }, + { name = "structlog" }, +] + +[package.optional-dependencies] +dev = [ + { name = "pytest" }, + { name = "pytest-asyncio" }, + { name = "ruff" }, +] + +[package.metadata] +requires-dist = [ + { name = "httpx", specifier = ">=0.27.0" }, + { name = "labgrid", git = "https://github.com/aparcar/labgrid.git?rev=aparcar%2Fstaging" }, + { name = "minio", specifier = ">=7.2.0" }, + { name = "pydantic", specifier = ">=2.5.0" }, + { name = "pydantic-settings", specifier = ">=2.1.0" }, + { name = "pytest", marker = "extra == 'dev'", specifier = ">=8.0.0" }, + { name = "pytest-asyncio", marker = "extra == 'dev'", specifier = ">=0.23.0" }, + { name = "pyyaml", specifier = ">=6.0.1" }, + { name = "ruff", marker = "extra == 'dev'", specifier = ">=0.1.0" }, + { name = "structlog", specifier = ">=24.1.0" }, +] +provides-extras = ["dev"] + +[[package]] +name = "markupsafe" +version = "3.0.3" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/7e/99/7690b6d4034fffd95959cbe0c02de8deb3098cc577c67bb6a24fe5d7caa7/markupsafe-3.0.3.tar.gz", hash = "sha256:722695808f4b6457b320fdc131280796bdceb04ab50fe1795cd540799ebe1698", size = 80313, upload-time = "2025-09-27T18:37:40.426Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/08/db/fefacb2136439fc8dd20e797950e749aa1f4997ed584c62cfb8ef7c2be0e/markupsafe-3.0.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:1cc7ea17a6824959616c525620e387f6dd30fec8cb44f649e31712db02123dad", size = 11631, upload-time = "2025-09-27T18:36:18.185Z" }, + { url = "https://files.pythonhosted.org/packages/e1/2e/5898933336b61975ce9dc04decbc0a7f2fee78c30353c5efba7f2d6ff27a/markupsafe-3.0.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:4bd4cd07944443f5a265608cc6aab442e4f74dff8088b0dfc8238647b8f6ae9a", size = 12058, upload-time = "2025-09-27T18:36:19.444Z" }, + { url = "https://files.pythonhosted.org/packages/1d/09/adf2df3699d87d1d8184038df46a9c80d78c0148492323f4693df54e17bb/markupsafe-3.0.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6b5420a1d9450023228968e7e6a9ce57f65d148ab56d2313fcd589eee96a7a50", size = 24287, upload-time = "2025-09-27T18:36:20.768Z" }, + { url = "https://files.pythonhosted.org/packages/30/ac/0273f6fcb5f42e314c6d8cd99effae6a5354604d461b8d392b5ec9530a54/markupsafe-3.0.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0bf2a864d67e76e5c9a34dc26ec616a66b9888e25e7b9460e1c76d3293bd9dbf", size = 22940, upload-time = "2025-09-27T18:36:22.249Z" }, + { url = "https://files.pythonhosted.org/packages/19/ae/31c1be199ef767124c042c6c3e904da327a2f7f0cd63a0337e1eca2967a8/markupsafe-3.0.3-cp311-cp311-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:bc51efed119bc9cfdf792cdeaa4d67e8f6fcccab66ed4bfdd6bde3e59bfcbb2f", size = 21887, upload-time = "2025-09-27T18:36:23.535Z" }, + { url = "https://files.pythonhosted.org/packages/b2/76/7edcab99d5349a4532a459e1fe64f0b0467a3365056ae550d3bcf3f79e1e/markupsafe-3.0.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:068f375c472b3e7acbe2d5318dea141359e6900156b5b2ba06a30b169086b91a", size = 23692, upload-time = "2025-09-27T18:36:24.823Z" }, + { url = 
"https://files.pythonhosted.org/packages/a4/28/6e74cdd26d7514849143d69f0bf2399f929c37dc2b31e6829fd2045b2765/markupsafe-3.0.3-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:7be7b61bb172e1ed687f1754f8e7484f1c8019780f6f6b0786e76bb01c2ae115", size = 21471, upload-time = "2025-09-27T18:36:25.95Z" }, + { url = "https://files.pythonhosted.org/packages/62/7e/a145f36a5c2945673e590850a6f8014318d5577ed7e5920a4b3448e0865d/markupsafe-3.0.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:f9e130248f4462aaa8e2552d547f36ddadbeaa573879158d721bbd33dfe4743a", size = 22923, upload-time = "2025-09-27T18:36:27.109Z" }, + { url = "https://files.pythonhosted.org/packages/0f/62/d9c46a7f5c9adbeeeda52f5b8d802e1094e9717705a645efc71b0913a0a8/markupsafe-3.0.3-cp311-cp311-win32.whl", hash = "sha256:0db14f5dafddbb6d9208827849fad01f1a2609380add406671a26386cdf15a19", size = 14572, upload-time = "2025-09-27T18:36:28.045Z" }, + { url = "https://files.pythonhosted.org/packages/83/8a/4414c03d3f891739326e1783338e48fb49781cc915b2e0ee052aa490d586/markupsafe-3.0.3-cp311-cp311-win_amd64.whl", hash = "sha256:de8a88e63464af587c950061a5e6a67d3632e36df62b986892331d4620a35c01", size = 15077, upload-time = "2025-09-27T18:36:29.025Z" }, + { url = "https://files.pythonhosted.org/packages/35/73/893072b42e6862f319b5207adc9ae06070f095b358655f077f69a35601f0/markupsafe-3.0.3-cp311-cp311-win_arm64.whl", hash = "sha256:3b562dd9e9ea93f13d53989d23a7e775fdfd1066c33494ff43f5418bc8c58a5c", size = 13876, upload-time = "2025-09-27T18:36:29.954Z" }, + { url = "https://files.pythonhosted.org/packages/5a/72/147da192e38635ada20e0a2e1a51cf8823d2119ce8883f7053879c2199b5/markupsafe-3.0.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:d53197da72cc091b024dd97249dfc7794d6a56530370992a5e1a08983ad9230e", size = 11615, upload-time = "2025-09-27T18:36:30.854Z" }, + { url = "https://files.pythonhosted.org/packages/9a/81/7e4e08678a1f98521201c3079f77db69fb552acd56067661f8c2f534a718/markupsafe-3.0.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:1872df69a4de6aead3491198eaf13810b565bdbeec3ae2dc8780f14458ec73ce", size = 12020, upload-time = "2025-09-27T18:36:31.971Z" }, + { url = "https://files.pythonhosted.org/packages/1e/2c/799f4742efc39633a1b54a92eec4082e4f815314869865d876824c257c1e/markupsafe-3.0.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:3a7e8ae81ae39e62a41ec302f972ba6ae23a5c5396c8e60113e9066ef893da0d", size = 24332, upload-time = "2025-09-27T18:36:32.813Z" }, + { url = "https://files.pythonhosted.org/packages/3c/2e/8d0c2ab90a8c1d9a24f0399058ab8519a3279d1bd4289511d74e909f060e/markupsafe-3.0.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:d6dd0be5b5b189d31db7cda48b91d7e0a9795f31430b7f271219ab30f1d3ac9d", size = 22947, upload-time = "2025-09-27T18:36:33.86Z" }, + { url = "https://files.pythonhosted.org/packages/2c/54/887f3092a85238093a0b2154bd629c89444f395618842e8b0c41783898ea/markupsafe-3.0.3-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:94c6f0bb423f739146aec64595853541634bde58b2135f27f61c1ffd1cd4d16a", size = 21962, upload-time = "2025-09-27T18:36:35.099Z" }, + { url = "https://files.pythonhosted.org/packages/c9/2f/336b8c7b6f4a4d95e91119dc8521402461b74a485558d8f238a68312f11c/markupsafe-3.0.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:be8813b57049a7dc738189df53d69395eba14fb99345e0a5994914a3864c8a4b", size = 23760, upload-time = "2025-09-27T18:36:36.001Z" }, + { url = 
"https://files.pythonhosted.org/packages/32/43/67935f2b7e4982ffb50a4d169b724d74b62a3964bc1a9a527f5ac4f1ee2b/markupsafe-3.0.3-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:83891d0e9fb81a825d9a6d61e3f07550ca70a076484292a70fde82c4b807286f", size = 21529, upload-time = "2025-09-27T18:36:36.906Z" }, + { url = "https://files.pythonhosted.org/packages/89/e0/4486f11e51bbba8b0c041098859e869e304d1c261e59244baa3d295d47b7/markupsafe-3.0.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:77f0643abe7495da77fb436f50f8dab76dbc6e5fd25d39589a0f1fe6548bfa2b", size = 23015, upload-time = "2025-09-27T18:36:37.868Z" }, + { url = "https://files.pythonhosted.org/packages/2f/e1/78ee7a023dac597a5825441ebd17170785a9dab23de95d2c7508ade94e0e/markupsafe-3.0.3-cp312-cp312-win32.whl", hash = "sha256:d88b440e37a16e651bda4c7c2b930eb586fd15ca7406cb39e211fcff3bf3017d", size = 14540, upload-time = "2025-09-27T18:36:38.761Z" }, + { url = "https://files.pythonhosted.org/packages/aa/5b/bec5aa9bbbb2c946ca2733ef9c4ca91c91b6a24580193e891b5f7dbe8e1e/markupsafe-3.0.3-cp312-cp312-win_amd64.whl", hash = "sha256:26a5784ded40c9e318cfc2bdb30fe164bdb8665ded9cd64d500a34fb42067b1c", size = 15105, upload-time = "2025-09-27T18:36:39.701Z" }, + { url = "https://files.pythonhosted.org/packages/e5/f1/216fc1bbfd74011693a4fd837e7026152e89c4bcf3e77b6692fba9923123/markupsafe-3.0.3-cp312-cp312-win_arm64.whl", hash = "sha256:35add3b638a5d900e807944a078b51922212fb3dedb01633a8defc4b01a3c85f", size = 13906, upload-time = "2025-09-27T18:36:40.689Z" }, + { url = "https://files.pythonhosted.org/packages/38/2f/907b9c7bbba283e68f20259574b13d005c121a0fa4c175f9bed27c4597ff/markupsafe-3.0.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:e1cf1972137e83c5d4c136c43ced9ac51d0e124706ee1c8aa8532c1287fa8795", size = 11622, upload-time = "2025-09-27T18:36:41.777Z" }, + { url = "https://files.pythonhosted.org/packages/9c/d9/5f7756922cdd676869eca1c4e3c0cd0df60ed30199ffd775e319089cb3ed/markupsafe-3.0.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:116bb52f642a37c115f517494ea5feb03889e04df47eeff5b130b1808ce7c219", size = 12029, upload-time = "2025-09-27T18:36:43.257Z" }, + { url = "https://files.pythonhosted.org/packages/00/07/575a68c754943058c78f30db02ee03a64b3c638586fba6a6dd56830b30a3/markupsafe-3.0.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:133a43e73a802c5562be9bbcd03d090aa5a1fe899db609c29e8c8d815c5f6de6", size = 24374, upload-time = "2025-09-27T18:36:44.508Z" }, + { url = "https://files.pythonhosted.org/packages/a9/21/9b05698b46f218fc0e118e1f8168395c65c8a2c750ae2bab54fc4bd4e0e8/markupsafe-3.0.3-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ccfcd093f13f0f0b7fdd0f198b90053bf7b2f02a3927a30e63f3ccc9df56b676", size = 22980, upload-time = "2025-09-27T18:36:45.385Z" }, + { url = "https://files.pythonhosted.org/packages/7f/71/544260864f893f18b6827315b988c146b559391e6e7e8f7252839b1b846a/markupsafe-3.0.3-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:509fa21c6deb7a7a273d629cf5ec029bc209d1a51178615ddf718f5918992ab9", size = 21990, upload-time = "2025-09-27T18:36:46.916Z" }, + { url = "https://files.pythonhosted.org/packages/c2/28/b50fc2f74d1ad761af2f5dcce7492648b983d00a65b8c0e0cb457c82ebbe/markupsafe-3.0.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:a4afe79fb3de0b7097d81da19090f4df4f8d3a2b3adaa8764138aac2e44f3af1", size = 23784, upload-time = "2025-09-27T18:36:47.884Z" }, + { url = 
"https://files.pythonhosted.org/packages/ed/76/104b2aa106a208da8b17a2fb72e033a5a9d7073c68f7e508b94916ed47a9/markupsafe-3.0.3-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:795e7751525cae078558e679d646ae45574b47ed6e7771863fcc079a6171a0fc", size = 21588, upload-time = "2025-09-27T18:36:48.82Z" }, + { url = "https://files.pythonhosted.org/packages/b5/99/16a5eb2d140087ebd97180d95249b00a03aa87e29cc224056274f2e45fd6/markupsafe-3.0.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:8485f406a96febb5140bfeca44a73e3ce5116b2501ac54fe953e488fb1d03b12", size = 23041, upload-time = "2025-09-27T18:36:49.797Z" }, + { url = "https://files.pythonhosted.org/packages/19/bc/e7140ed90c5d61d77cea142eed9f9c303f4c4806f60a1044c13e3f1471d0/markupsafe-3.0.3-cp313-cp313-win32.whl", hash = "sha256:bdd37121970bfd8be76c5fb069c7751683bdf373db1ed6c010162b2a130248ed", size = 14543, upload-time = "2025-09-27T18:36:51.584Z" }, + { url = "https://files.pythonhosted.org/packages/05/73/c4abe620b841b6b791f2edc248f556900667a5a1cf023a6646967ae98335/markupsafe-3.0.3-cp313-cp313-win_amd64.whl", hash = "sha256:9a1abfdc021a164803f4d485104931fb8f8c1efd55bc6b748d2f5774e78b62c5", size = 15113, upload-time = "2025-09-27T18:36:52.537Z" }, + { url = "https://files.pythonhosted.org/packages/f0/3a/fa34a0f7cfef23cf9500d68cb7c32dd64ffd58a12b09225fb03dd37d5b80/markupsafe-3.0.3-cp313-cp313-win_arm64.whl", hash = "sha256:7e68f88e5b8799aa49c85cd116c932a1ac15caaa3f5db09087854d218359e485", size = 13911, upload-time = "2025-09-27T18:36:53.513Z" }, + { url = "https://files.pythonhosted.org/packages/e4/d7/e05cd7efe43a88a17a37b3ae96e79a19e846f3f456fe79c57ca61356ef01/markupsafe-3.0.3-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:218551f6df4868a8d527e3062d0fb968682fe92054e89978594c28e642c43a73", size = 11658, upload-time = "2025-09-27T18:36:54.819Z" }, + { url = "https://files.pythonhosted.org/packages/99/9e/e412117548182ce2148bdeacdda3bb494260c0b0184360fe0d56389b523b/markupsafe-3.0.3-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:3524b778fe5cfb3452a09d31e7b5adefeea8c5be1d43c4f810ba09f2ceb29d37", size = 12066, upload-time = "2025-09-27T18:36:55.714Z" }, + { url = "https://files.pythonhosted.org/packages/bc/e6/fa0ffcda717ef64a5108eaa7b4f5ed28d56122c9a6d70ab8b72f9f715c80/markupsafe-3.0.3-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4e885a3d1efa2eadc93c894a21770e4bc67899e3543680313b09f139e149ab19", size = 25639, upload-time = "2025-09-27T18:36:56.908Z" }, + { url = "https://files.pythonhosted.org/packages/96/ec/2102e881fe9d25fc16cb4b25d5f5cde50970967ffa5dddafdb771237062d/markupsafe-3.0.3-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:8709b08f4a89aa7586de0aadc8da56180242ee0ada3999749b183aa23df95025", size = 23569, upload-time = "2025-09-27T18:36:57.913Z" }, + { url = "https://files.pythonhosted.org/packages/4b/30/6f2fce1f1f205fc9323255b216ca8a235b15860c34b6798f810f05828e32/markupsafe-3.0.3-cp313-cp313t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:b8512a91625c9b3da6f127803b166b629725e68af71f8184ae7e7d54686a56d6", size = 23284, upload-time = "2025-09-27T18:36:58.833Z" }, + { url = "https://files.pythonhosted.org/packages/58/47/4a0ccea4ab9f5dcb6f79c0236d954acb382202721e704223a8aafa38b5c8/markupsafe-3.0.3-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:9b79b7a16f7fedff2495d684f2b59b0457c3b493778c9eed31111be64d58279f", size = 24801, upload-time = "2025-09-27T18:36:59.739Z" }, + { url = 
"https://files.pythonhosted.org/packages/6a/70/3780e9b72180b6fecb83a4814d84c3bf4b4ae4bf0b19c27196104149734c/markupsafe-3.0.3-cp313-cp313t-musllinux_1_2_riscv64.whl", hash = "sha256:12c63dfb4a98206f045aa9563db46507995f7ef6d83b2f68eda65c307c6829eb", size = 22769, upload-time = "2025-09-27T18:37:00.719Z" }, + { url = "https://files.pythonhosted.org/packages/98/c5/c03c7f4125180fc215220c035beac6b9cb684bc7a067c84fc69414d315f5/markupsafe-3.0.3-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:8f71bc33915be5186016f675cd83a1e08523649b0e33efdb898db577ef5bb009", size = 23642, upload-time = "2025-09-27T18:37:01.673Z" }, + { url = "https://files.pythonhosted.org/packages/80/d6/2d1b89f6ca4bff1036499b1e29a1d02d282259f3681540e16563f27ebc23/markupsafe-3.0.3-cp313-cp313t-win32.whl", hash = "sha256:69c0b73548bc525c8cb9a251cddf1931d1db4d2258e9599c28c07ef3580ef354", size = 14612, upload-time = "2025-09-27T18:37:02.639Z" }, + { url = "https://files.pythonhosted.org/packages/2b/98/e48a4bfba0a0ffcf9925fe2d69240bfaa19c6f7507b8cd09c70684a53c1e/markupsafe-3.0.3-cp313-cp313t-win_amd64.whl", hash = "sha256:1b4b79e8ebf6b55351f0d91fe80f893b4743f104bff22e90697db1590e47a218", size = 15200, upload-time = "2025-09-27T18:37:03.582Z" }, + { url = "https://files.pythonhosted.org/packages/0e/72/e3cc540f351f316e9ed0f092757459afbc595824ca724cbc5a5d4263713f/markupsafe-3.0.3-cp313-cp313t-win_arm64.whl", hash = "sha256:ad2cf8aa28b8c020ab2fc8287b0f823d0a7d8630784c31e9ee5edea20f406287", size = 13973, upload-time = "2025-09-27T18:37:04.929Z" }, + { url = "https://files.pythonhosted.org/packages/33/8a/8e42d4838cd89b7dde187011e97fe6c3af66d8c044997d2183fbd6d31352/markupsafe-3.0.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:eaa9599de571d72e2daf60164784109f19978b327a3910d3e9de8c97b5b70cfe", size = 11619, upload-time = "2025-09-27T18:37:06.342Z" }, + { url = "https://files.pythonhosted.org/packages/b5/64/7660f8a4a8e53c924d0fa05dc3a55c9cee10bbd82b11c5afb27d44b096ce/markupsafe-3.0.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:c47a551199eb8eb2121d4f0f15ae0f923d31350ab9280078d1e5f12b249e0026", size = 12029, upload-time = "2025-09-27T18:37:07.213Z" }, + { url = "https://files.pythonhosted.org/packages/da/ef/e648bfd021127bef5fa12e1720ffed0c6cbb8310c8d9bea7266337ff06de/markupsafe-3.0.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f34c41761022dd093b4b6896d4810782ffbabe30f2d443ff5f083e0cbbb8c737", size = 24408, upload-time = "2025-09-27T18:37:09.572Z" }, + { url = "https://files.pythonhosted.org/packages/41/3c/a36c2450754618e62008bf7435ccb0f88053e07592e6028a34776213d877/markupsafe-3.0.3-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:457a69a9577064c05a97c41f4e65148652db078a3a509039e64d3467b9e7ef97", size = 23005, upload-time = "2025-09-27T18:37:10.58Z" }, + { url = "https://files.pythonhosted.org/packages/bc/20/b7fdf89a8456b099837cd1dc21974632a02a999ec9bf7ca3e490aacd98e7/markupsafe-3.0.3-cp314-cp314-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:e8afc3f2ccfa24215f8cb28dcf43f0113ac3c37c2f0f0806d8c70e4228c5cf4d", size = 22048, upload-time = "2025-09-27T18:37:11.547Z" }, + { url = "https://files.pythonhosted.org/packages/9a/a7/591f592afdc734f47db08a75793a55d7fbcc6902a723ae4cfbab61010cc5/markupsafe-3.0.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:ec15a59cf5af7be74194f7ab02d0f59a62bdcf1a537677ce67a2537c9b87fcda", size = 23821, upload-time = "2025-09-27T18:37:12.48Z" }, + { url = 
"https://files.pythonhosted.org/packages/7d/33/45b24e4f44195b26521bc6f1a82197118f74df348556594bd2262bda1038/markupsafe-3.0.3-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:0eb9ff8191e8498cca014656ae6b8d61f39da5f95b488805da4bb029cccbfbaf", size = 21606, upload-time = "2025-09-27T18:37:13.485Z" }, + { url = "https://files.pythonhosted.org/packages/ff/0e/53dfaca23a69fbfbbf17a4b64072090e70717344c52eaaaa9c5ddff1e5f0/markupsafe-3.0.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:2713baf880df847f2bece4230d4d094280f4e67b1e813eec43b4c0e144a34ffe", size = 23043, upload-time = "2025-09-27T18:37:14.408Z" }, + { url = "https://files.pythonhosted.org/packages/46/11/f333a06fc16236d5238bfe74daccbca41459dcd8d1fa952e8fbd5dccfb70/markupsafe-3.0.3-cp314-cp314-win32.whl", hash = "sha256:729586769a26dbceff69f7a7dbbf59ab6572b99d94576a5592625d5b411576b9", size = 14747, upload-time = "2025-09-27T18:37:15.36Z" }, + { url = "https://files.pythonhosted.org/packages/28/52/182836104b33b444e400b14f797212f720cbc9ed6ba34c800639d154e821/markupsafe-3.0.3-cp314-cp314-win_amd64.whl", hash = "sha256:bdc919ead48f234740ad807933cdf545180bfbe9342c2bb451556db2ed958581", size = 15341, upload-time = "2025-09-27T18:37:16.496Z" }, + { url = "https://files.pythonhosted.org/packages/6f/18/acf23e91bd94fd7b3031558b1f013adfa21a8e407a3fdb32745538730382/markupsafe-3.0.3-cp314-cp314-win_arm64.whl", hash = "sha256:5a7d5dc5140555cf21a6fefbdbf8723f06fcd2f63ef108f2854de715e4422cb4", size = 14073, upload-time = "2025-09-27T18:37:17.476Z" }, + { url = "https://files.pythonhosted.org/packages/3c/f0/57689aa4076e1b43b15fdfa646b04653969d50cf30c32a102762be2485da/markupsafe-3.0.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:1353ef0c1b138e1907ae78e2f6c63ff67501122006b0f9abad68fda5f4ffc6ab", size = 11661, upload-time = "2025-09-27T18:37:18.453Z" }, + { url = "https://files.pythonhosted.org/packages/89/c3/2e67a7ca217c6912985ec766c6393b636fb0c2344443ff9d91404dc4c79f/markupsafe-3.0.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:1085e7fbddd3be5f89cc898938f42c0b3c711fdcb37d75221de2666af647c175", size = 12069, upload-time = "2025-09-27T18:37:19.332Z" }, + { url = "https://files.pythonhosted.org/packages/f0/00/be561dce4e6ca66b15276e184ce4b8aec61fe83662cce2f7d72bd3249d28/markupsafe-3.0.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1b52b4fb9df4eb9ae465f8d0c228a00624de2334f216f178a995ccdcf82c4634", size = 25670, upload-time = "2025-09-27T18:37:20.245Z" }, + { url = "https://files.pythonhosted.org/packages/50/09/c419f6f5a92e5fadde27efd190eca90f05e1261b10dbd8cbcb39cd8ea1dc/markupsafe-3.0.3-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:fed51ac40f757d41b7c48425901843666a6677e3e8eb0abcff09e4ba6e664f50", size = 23598, upload-time = "2025-09-27T18:37:21.177Z" }, + { url = "https://files.pythonhosted.org/packages/22/44/a0681611106e0b2921b3033fc19bc53323e0b50bc70cffdd19f7d679bb66/markupsafe-3.0.3-cp314-cp314t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:f190daf01f13c72eac4efd5c430a8de82489d9cff23c364c3ea822545032993e", size = 23261, upload-time = "2025-09-27T18:37:22.167Z" }, + { url = "https://files.pythonhosted.org/packages/5f/57/1b0b3f100259dc9fffe780cfb60d4be71375510e435efec3d116b6436d43/markupsafe-3.0.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:e56b7d45a839a697b5eb268c82a71bd8c7f6c94d6fd50c3d577fa39a9f1409f5", size = 24835, upload-time = "2025-09-27T18:37:23.296Z" }, + { url = 
"https://files.pythonhosted.org/packages/26/6a/4bf6d0c97c4920f1597cc14dd720705eca0bf7c787aebc6bb4d1bead5388/markupsafe-3.0.3-cp314-cp314t-musllinux_1_2_riscv64.whl", hash = "sha256:f3e98bb3798ead92273dc0e5fd0f31ade220f59a266ffd8a4f6065e0a3ce0523", size = 22733, upload-time = "2025-09-27T18:37:24.237Z" }, + { url = "https://files.pythonhosted.org/packages/14/c7/ca723101509b518797fedc2fdf79ba57f886b4aca8a7d31857ba3ee8281f/markupsafe-3.0.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:5678211cb9333a6468fb8d8be0305520aa073f50d17f089b5b4b477ea6e67fdc", size = 23672, upload-time = "2025-09-27T18:37:25.271Z" }, + { url = "https://files.pythonhosted.org/packages/fb/df/5bd7a48c256faecd1d36edc13133e51397e41b73bb77e1a69deab746ebac/markupsafe-3.0.3-cp314-cp314t-win32.whl", hash = "sha256:915c04ba3851909ce68ccc2b8e2cd691618c4dc4c4232fb7982bca3f41fd8c3d", size = 14819, upload-time = "2025-09-27T18:37:26.285Z" }, + { url = "https://files.pythonhosted.org/packages/1a/8a/0402ba61a2f16038b48b39bccca271134be00c5c9f0f623208399333c448/markupsafe-3.0.3-cp314-cp314t-win_amd64.whl", hash = "sha256:4faffd047e07c38848ce017e8725090413cd80cbc23d86e55c587bf979e579c9", size = 15426, upload-time = "2025-09-27T18:37:27.316Z" }, + { url = "https://files.pythonhosted.org/packages/70/bc/6f1c2f612465f5fa89b95bead1f44dcb607670fd42891d8fdcd5d039f4f4/markupsafe-3.0.3-cp314-cp314t-win_arm64.whl", hash = "sha256:32001d6a8fc98c8cb5c947787c5d08b0a50663d139f1305bac5885d98d9b40fa", size = 14146, upload-time = "2025-09-27T18:37:28.327Z" }, +] + +[[package]] +name = "minio" +version = "7.2.20" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "argon2-cffi" }, + { name = "certifi" }, + { name = "pycryptodome" }, + { name = "typing-extensions" }, + { name = "urllib3" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/40/df/6dfc6540f96a74125a11653cce717603fd5b7d0001a8e847b3e54e72d238/minio-7.2.20.tar.gz", hash = "sha256:95898b7a023fbbfde375985aa77e2cd6a0762268db79cf886f002a9ea8e68598", size = 136113, upload-time = "2025-11-27T00:37:15.569Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/3e/9a/b697530a882588a84db616580f2ba5d1d515c815e11c30d219145afeec87/minio-7.2.20-py3-none-any.whl", hash = "sha256:eb33dd2fb80e04c3726a76b13241c6be3c4c46f8d81e1d58e757786f6501897e", size = 93751, upload-time = "2025-11-27T00:37:13.993Z" }, +] + +[[package]] +name = "packaging" +version = "26.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/65/ee/299d360cdc32edc7d2cf530f3accf79c4fca01e96ffc950d8a52213bd8e4/packaging-26.0.tar.gz", hash = "sha256:00243ae351a257117b6a241061796684b084ed1c516a08c48a3f7e147a9d80b4", size = 143416, upload-time = "2026-01-21T20:50:39.064Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/b7/b9/c538f279a4e237a006a2c98387d081e9eb060d203d8ed34467cc0f0b9b53/packaging-26.0-py3-none-any.whl", hash = "sha256:b36f1fef9334a5588b4166f8bcd26a14e521f2b55e6b9de3aaa80d3ff7a37529", size = 74366, upload-time = "2026-01-21T20:50:37.788Z" }, +] + +[[package]] +name = "pexpect" +version = "4.9.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "ptyprocess" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/42/92/cc564bf6381ff43ce1f4d06852fc19a2f11d180f23dc32d9588bee2f149d/pexpect-4.9.0.tar.gz", hash = "sha256:ee7d41123f3c9911050ea2c2dac107568dc43b2d3b0c7557a33212c398ead30f", size = 166450, upload-time = "2023-11-25T09:07:26.339Z" } +wheels = [ + { url = 
"https://files.pythonhosted.org/packages/9e/c3/059298687310d527a58bb01f3b1965787ee3b40dce76752eda8b44e9a2c5/pexpect-4.9.0-py2.py3-none-any.whl", hash = "sha256:7236d1e080e4936be2dc3e326cec0af72acf9212a7e1d060210e70a47e253523", size = 63772, upload-time = "2023-11-25T06:56:14.81Z" }, +] + +[[package]] +name = "pluggy" +version = "1.6.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/f9/e2/3e91f31a7d2b083fe6ef3fa267035b518369d9511ffab804f839851d2779/pluggy-1.6.0.tar.gz", hash = "sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3", size = 69412, upload-time = "2025-05-15T12:30:07.975Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/54/20/4d324d65cc6d9205fabedc306948156824eb9f0ee1633355a8f7ec5c66bf/pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746", size = 20538, upload-time = "2025-05-15T12:30:06.134Z" }, +] + +[[package]] +name = "protobuf" +version = "6.33.5" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/ba/25/7c72c307aafc96fa87062aa6291d9f7c94836e43214d43722e86037aac02/protobuf-6.33.5.tar.gz", hash = "sha256:6ddcac2a081f8b7b9642c09406bc6a4290128fce5f471cddd165960bb9119e5c", size = 444465, upload-time = "2026-01-29T21:51:33.494Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/b1/79/af92d0a8369732b027e6d6084251dd8e782c685c72da161bd4a2e00fbabb/protobuf-6.33.5-cp310-abi3-win32.whl", hash = "sha256:d71b040839446bac0f4d162e758bea99c8251161dae9d0983a3b88dee345153b", size = 425769, upload-time = "2026-01-29T21:51:21.751Z" }, + { url = "https://files.pythonhosted.org/packages/55/75/bb9bc917d10e9ee13dee8607eb9ab963b7cf8be607c46e7862c748aa2af7/protobuf-6.33.5-cp310-abi3-win_amd64.whl", hash = "sha256:3093804752167bcab3998bec9f1048baae6e29505adaf1afd14a37bddede533c", size = 437118, upload-time = "2026-01-29T21:51:24.022Z" }, + { url = "https://files.pythonhosted.org/packages/a2/6b/e48dfc1191bc5b52950246275bf4089773e91cb5ba3592621723cdddca62/protobuf-6.33.5-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:a5cb85982d95d906df1e2210e58f8e4f1e3cdc088e52c921a041f9c9a0386de5", size = 427766, upload-time = "2026-01-29T21:51:25.413Z" }, + { url = "https://files.pythonhosted.org/packages/4e/b1/c79468184310de09d75095ed1314b839eb2f72df71097db9d1404a1b2717/protobuf-6.33.5-cp39-abi3-manylinux2014_aarch64.whl", hash = "sha256:9b71e0281f36f179d00cbcb119cb19dec4d14a81393e5ea220f64b286173e190", size = 324638, upload-time = "2026-01-29T21:51:26.423Z" }, + { url = "https://files.pythonhosted.org/packages/c5/f5/65d838092fd01c44d16037953fd4c2cc851e783de9b8f02b27ec4ffd906f/protobuf-6.33.5-cp39-abi3-manylinux2014_s390x.whl", hash = "sha256:8afa18e1d6d20af15b417e728e9f60f3aa108ee76f23c3b2c07a2c3b546d3afd", size = 339411, upload-time = "2026-01-29T21:51:27.446Z" }, + { url = "https://files.pythonhosted.org/packages/9b/53/a9443aa3ca9ba8724fdfa02dd1887c1bcd8e89556b715cfbacca6b63dbec/protobuf-6.33.5-cp39-abi3-manylinux2014_x86_64.whl", hash = "sha256:cbf16ba3350fb7b889fca858fb215967792dc125b35c7976ca4818bee3521cf0", size = 323465, upload-time = "2026-01-29T21:51:28.925Z" }, + { url = "https://files.pythonhosted.org/packages/57/bf/2086963c69bdac3d7cff1cc7ff79b8ce5ea0bec6797a017e1be338a46248/protobuf-6.33.5-py3-none-any.whl", hash = "sha256:69915a973dd0f60f31a08b8318b73eab2bd6a392c79184b3612226b0a3f8ec02", size = 170687, upload-time = "2026-01-29T21:51:32.557Z" }, +] + +[[package]] +name 
= "ptyprocess" +version = "0.7.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/20/e5/16ff212c1e452235a90aeb09066144d0c5a6a8c0834397e03f5224495c4e/ptyprocess-0.7.0.tar.gz", hash = "sha256:5c5d0a3b48ceee0b48485e0c26037c0acd7d29765ca3fbb5cb3831d347423220", size = 70762, upload-time = "2020-12-28T15:15:30.155Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/22/a6/858897256d0deac81a172289110f31629fc4cee19b6f01283303e18c8db3/ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35", size = 13993, upload-time = "2020-12-28T15:15:28.35Z" }, +] + +[[package]] +name = "pycparser" +version = "3.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/1b/7d/92392ff7815c21062bea51aa7b87d45576f649f16458d78b7cf94b9ab2e6/pycparser-3.0.tar.gz", hash = "sha256:600f49d217304a5902ac3c37e1281c9fe94e4d0489de643a9504c5cdfdfc6b29", size = 103492, upload-time = "2026-01-21T14:26:51.89Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/0c/c3/44f3fbbfa403ea2a7c779186dc20772604442dde72947e7d01069cbe98e3/pycparser-3.0-py3-none-any.whl", hash = "sha256:b727414169a36b7d524c1c3e31839a521725078d7b2ff038656844266160a992", size = 48172, upload-time = "2026-01-21T14:26:50.693Z" }, +] + +[[package]] +name = "pycryptodome" +version = "3.23.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/8e/a6/8452177684d5e906854776276ddd34eca30d1b1e15aa1ee9cefc289a33f5/pycryptodome-3.23.0.tar.gz", hash = "sha256:447700a657182d60338bab09fdb27518f8856aecd80ae4c6bdddb67ff5da44ef", size = 4921276, upload-time = "2025-05-17T17:21:45.242Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/04/5d/bdb09489b63cd34a976cc9e2a8d938114f7a53a74d3dd4f125ffa49dce82/pycryptodome-3.23.0-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:0011f7f00cdb74879142011f95133274741778abba114ceca229adbf8e62c3e4", size = 2495152, upload-time = "2025-05-17T17:20:20.833Z" }, + { url = "https://files.pythonhosted.org/packages/a7/ce/7840250ed4cc0039c433cd41715536f926d6e86ce84e904068eb3244b6a6/pycryptodome-3.23.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:90460fc9e088ce095f9ee8356722d4f10f86e5be06e2354230a9880b9c549aae", size = 1639348, upload-time = "2025-05-17T17:20:23.171Z" }, + { url = "https://files.pythonhosted.org/packages/ee/f0/991da24c55c1f688d6a3b5a11940567353f74590734ee4a64294834ae472/pycryptodome-3.23.0-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4764e64b269fc83b00f682c47443c2e6e85b18273712b98aa43bcb77f8570477", size = 2184033, upload-time = "2025-05-17T17:20:25.424Z" }, + { url = "https://files.pythonhosted.org/packages/54/16/0e11882deddf00f68b68dd4e8e442ddc30641f31afeb2bc25588124ac8de/pycryptodome-3.23.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:eb8f24adb74984aa0e5d07a2368ad95276cf38051fe2dc6605cbcf482e04f2a7", size = 2270142, upload-time = "2025-05-17T17:20:27.808Z" }, + { url = "https://files.pythonhosted.org/packages/d5/fc/4347fea23a3f95ffb931f383ff28b3f7b1fe868739182cb76718c0da86a1/pycryptodome-3.23.0-cp313-cp313t-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d97618c9c6684a97ef7637ba43bdf6663a2e2e77efe0f863cce97a76af396446", size = 2309384, upload-time = "2025-05-17T17:20:30.765Z" }, + { url = 
"https://files.pythonhosted.org/packages/6e/d9/c5261780b69ce66d8cfab25d2797bd6e82ba0241804694cd48be41add5eb/pycryptodome-3.23.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:9a53a4fe5cb075075d515797d6ce2f56772ea7e6a1e5e4b96cf78a14bac3d265", size = 2183237, upload-time = "2025-05-17T17:20:33.736Z" }, + { url = "https://files.pythonhosted.org/packages/5a/6f/3af2ffedd5cfa08c631f89452c6648c4d779e7772dfc388c77c920ca6bbf/pycryptodome-3.23.0-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:763d1d74f56f031788e5d307029caef067febf890cd1f8bf61183ae142f1a77b", size = 2343898, upload-time = "2025-05-17T17:20:36.086Z" }, + { url = "https://files.pythonhosted.org/packages/9a/dc/9060d807039ee5de6e2f260f72f3d70ac213993a804f5e67e0a73a56dd2f/pycryptodome-3.23.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:954af0e2bd7cea83ce72243b14e4fb518b18f0c1649b576d114973e2073b273d", size = 2269197, upload-time = "2025-05-17T17:20:38.414Z" }, + { url = "https://files.pythonhosted.org/packages/f9/34/e6c8ca177cb29dcc4967fef73f5de445912f93bd0343c9c33c8e5bf8cde8/pycryptodome-3.23.0-cp313-cp313t-win32.whl", hash = "sha256:257bb3572c63ad8ba40b89f6fc9d63a2a628e9f9708d31ee26560925ebe0210a", size = 1768600, upload-time = "2025-05-17T17:20:40.688Z" }, + { url = "https://files.pythonhosted.org/packages/e4/1d/89756b8d7ff623ad0160f4539da571d1f594d21ee6d68be130a6eccb39a4/pycryptodome-3.23.0-cp313-cp313t-win_amd64.whl", hash = "sha256:6501790c5b62a29fcb227bd6b62012181d886a767ce9ed03b303d1f22eb5c625", size = 1799740, upload-time = "2025-05-17T17:20:42.413Z" }, + { url = "https://files.pythonhosted.org/packages/5d/61/35a64f0feaea9fd07f0d91209e7be91726eb48c0f1bfc6720647194071e4/pycryptodome-3.23.0-cp313-cp313t-win_arm64.whl", hash = "sha256:9a77627a330ab23ca43b48b130e202582e91cc69619947840ea4d2d1be21eb39", size = 1703685, upload-time = "2025-05-17T17:20:44.388Z" }, + { url = "https://files.pythonhosted.org/packages/db/6c/a1f71542c969912bb0e106f64f60a56cc1f0fabecf9396f45accbe63fa68/pycryptodome-3.23.0-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:187058ab80b3281b1de11c2e6842a357a1f71b42cb1e15bce373f3d238135c27", size = 2495627, upload-time = "2025-05-17T17:20:47.139Z" }, + { url = "https://files.pythonhosted.org/packages/6e/4e/a066527e079fc5002390c8acdd3aca431e6ea0a50ffd7201551175b47323/pycryptodome-3.23.0-cp37-abi3-macosx_10_9_x86_64.whl", hash = "sha256:cfb5cd445280c5b0a4e6187a7ce8de5a07b5f3f897f235caa11f1f435f182843", size = 1640362, upload-time = "2025-05-17T17:20:50.392Z" }, + { url = "https://files.pythonhosted.org/packages/50/52/adaf4c8c100a8c49d2bd058e5b551f73dfd8cb89eb4911e25a0c469b6b4e/pycryptodome-3.23.0-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:67bd81fcbe34f43ad9422ee8fd4843c8e7198dd88dd3d40e6de42ee65fbe1490", size = 2182625, upload-time = "2025-05-17T17:20:52.866Z" }, + { url = "https://files.pythonhosted.org/packages/5f/e9/a09476d436d0ff1402ac3867d933c61805ec2326c6ea557aeeac3825604e/pycryptodome-3.23.0-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c8987bd3307a39bc03df5c8e0e3d8be0c4c3518b7f044b0f4c15d1aa78f52575", size = 2268954, upload-time = "2025-05-17T17:20:55.027Z" }, + { url = "https://files.pythonhosted.org/packages/f9/c5/ffe6474e0c551d54cab931918127c46d70cab8f114e0c2b5a3c071c2f484/pycryptodome-3.23.0-cp37-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:aa0698f65e5b570426fc31b8162ed4603b0c2841cbb9088e2b01641e3065915b", size = 2308534, upload-time = "2025-05-17T17:20:57.279Z" 
}, + { url = "https://files.pythonhosted.org/packages/18/28/e199677fc15ecf43010f2463fde4c1a53015d1fe95fb03bca2890836603a/pycryptodome-3.23.0-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:53ecbafc2b55353edcebd64bf5da94a2a2cdf5090a6915bcca6eca6cc452585a", size = 2181853, upload-time = "2025-05-17T17:20:59.322Z" }, + { url = "https://files.pythonhosted.org/packages/ce/ea/4fdb09f2165ce1365c9eaefef36625583371ee514db58dc9b65d3a255c4c/pycryptodome-3.23.0-cp37-abi3-musllinux_1_2_i686.whl", hash = "sha256:156df9667ad9f2ad26255926524e1c136d6664b741547deb0a86a9acf5ea631f", size = 2342465, upload-time = "2025-05-17T17:21:03.83Z" }, + { url = "https://files.pythonhosted.org/packages/22/82/6edc3fc42fe9284aead511394bac167693fb2b0e0395b28b8bedaa07ef04/pycryptodome-3.23.0-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:dea827b4d55ee390dc89b2afe5927d4308a8b538ae91d9c6f7a5090f397af1aa", size = 2267414, upload-time = "2025-05-17T17:21:06.72Z" }, + { url = "https://files.pythonhosted.org/packages/59/fe/aae679b64363eb78326c7fdc9d06ec3de18bac68be4b612fc1fe8902693c/pycryptodome-3.23.0-cp37-abi3-win32.whl", hash = "sha256:507dbead45474b62b2bbe318eb1c4c8ee641077532067fec9c1aa82c31f84886", size = 1768484, upload-time = "2025-05-17T17:21:08.535Z" }, + { url = "https://files.pythonhosted.org/packages/54/2f/e97a1b8294db0daaa87012c24a7bb714147c7ade7656973fd6c736b484ff/pycryptodome-3.23.0-cp37-abi3-win_amd64.whl", hash = "sha256:c75b52aacc6c0c260f204cbdd834f76edc9fb0d8e0da9fbf8352ef58202564e2", size = 1799636, upload-time = "2025-05-17T17:21:10.393Z" }, + { url = "https://files.pythonhosted.org/packages/18/3d/f9441a0d798bf2b1e645adc3265e55706aead1255ccdad3856dbdcffec14/pycryptodome-3.23.0-cp37-abi3-win_arm64.whl", hash = "sha256:11eeeb6917903876f134b56ba11abe95c0b0fd5e3330def218083c7d98bbcb3c", size = 1703675, upload-time = "2025-05-17T17:21:13.146Z" }, +] + +[[package]] +name = "pydantic" +version = "2.12.5" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "annotated-types" }, + { name = "pydantic-core" }, + { name = "typing-extensions" }, + { name = "typing-inspection" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/69/44/36f1a6e523abc58ae5f928898e4aca2e0ea509b5aa6f6f392a5d882be928/pydantic-2.12.5.tar.gz", hash = "sha256:4d351024c75c0f085a9febbb665ce8c0c6ec5d30e903bdb6394b7ede26aebb49", size = 821591, upload-time = "2025-11-26T15:11:46.471Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/5a/87/b70ad306ebb6f9b585f114d0ac2137d792b48be34d732d60e597c2f8465a/pydantic-2.12.5-py3-none-any.whl", hash = "sha256:e561593fccf61e8a20fc46dfc2dfe075b8be7d0188df33f221ad1f0139180f9d", size = 463580, upload-time = "2025-11-26T15:11:44.605Z" }, +] + +[[package]] +name = "pydantic-core" +version = "2.41.5" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "typing-extensions" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/71/70/23b021c950c2addd24ec408e9ab05d59b035b39d97cdc1130e1bce647bb6/pydantic_core-2.41.5.tar.gz", hash = "sha256:08daa51ea16ad373ffd5e7606252cc32f07bc72b28284b6bc9c6df804816476e", size = 460952, upload-time = "2025-11-04T13:43:49.098Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/e8/72/74a989dd9f2084b3d9530b0915fdda64ac48831c30dbf7c72a41a5232db8/pydantic_core-2.41.5-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:a3a52f6156e73e7ccb0f8cced536adccb7042be67cb45f9562e12b319c119da6", size = 2105873, upload-time = "2025-11-04T13:39:31.373Z" }, + { url = 
"https://files.pythonhosted.org/packages/12/44/37e403fd9455708b3b942949e1d7febc02167662bf1a7da5b78ee1ea2842/pydantic_core-2.41.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:7f3bf998340c6d4b0c9a2f02d6a400e51f123b59565d74dc60d252ce888c260b", size = 1899826, upload-time = "2025-11-04T13:39:32.897Z" }, + { url = "https://files.pythonhosted.org/packages/33/7f/1d5cab3ccf44c1935a359d51a8a2a9e1a654b744b5e7f80d41b88d501eec/pydantic_core-2.41.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:378bec5c66998815d224c9ca994f1e14c0c21cb95d2f52b6021cc0b2a58f2a5a", size = 1917869, upload-time = "2025-11-04T13:39:34.469Z" }, + { url = "https://files.pythonhosted.org/packages/6e/6a/30d94a9674a7fe4f4744052ed6c5e083424510be1e93da5bc47569d11810/pydantic_core-2.41.5-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:e7b576130c69225432866fe2f4a469a85a54ade141d96fd396dffcf607b558f8", size = 2063890, upload-time = "2025-11-04T13:39:36.053Z" }, + { url = "https://files.pythonhosted.org/packages/50/be/76e5d46203fcb2750e542f32e6c371ffa9b8ad17364cf94bb0818dbfb50c/pydantic_core-2.41.5-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:6cb58b9c66f7e4179a2d5e0f849c48eff5c1fca560994d6eb6543abf955a149e", size = 2229740, upload-time = "2025-11-04T13:39:37.753Z" }, + { url = "https://files.pythonhosted.org/packages/d3/ee/fed784df0144793489f87db310a6bbf8118d7b630ed07aa180d6067e653a/pydantic_core-2.41.5-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:88942d3a3dff3afc8288c21e565e476fc278902ae4d6d134f1eeda118cc830b1", size = 2350021, upload-time = "2025-11-04T13:39:40.94Z" }, + { url = "https://files.pythonhosted.org/packages/c8/be/8fed28dd0a180dca19e72c233cbf58efa36df055e5b9d90d64fd1740b828/pydantic_core-2.41.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f31d95a179f8d64d90f6831d71fa93290893a33148d890ba15de25642c5d075b", size = 2066378, upload-time = "2025-11-04T13:39:42.523Z" }, + { url = "https://files.pythonhosted.org/packages/b0/3b/698cf8ae1d536a010e05121b4958b1257f0b5522085e335360e53a6b1c8b/pydantic_core-2.41.5-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:c1df3d34aced70add6f867a8cf413e299177e0c22660cc767218373d0779487b", size = 2175761, upload-time = "2025-11-04T13:39:44.553Z" }, + { url = "https://files.pythonhosted.org/packages/b8/ba/15d537423939553116dea94ce02f9c31be0fa9d0b806d427e0308ec17145/pydantic_core-2.41.5-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:4009935984bd36bd2c774e13f9a09563ce8de4abaa7226f5108262fa3e637284", size = 2146303, upload-time = "2025-11-04T13:39:46.238Z" }, + { url = "https://files.pythonhosted.org/packages/58/7f/0de669bf37d206723795f9c90c82966726a2ab06c336deba4735b55af431/pydantic_core-2.41.5-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:34a64bc3441dc1213096a20fe27e8e128bd3ff89921706e83c0b1ac971276594", size = 2340355, upload-time = "2025-11-04T13:39:48.002Z" }, + { url = "https://files.pythonhosted.org/packages/e5/de/e7482c435b83d7e3c3ee5ee4451f6e8973cff0eb6007d2872ce6383f6398/pydantic_core-2.41.5-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:c9e19dd6e28fdcaa5a1de679aec4141f691023916427ef9bae8584f9c2fb3b0e", size = 2319875, upload-time = "2025-11-04T13:39:49.705Z" }, + { url = "https://files.pythonhosted.org/packages/fe/e6/8c9e81bb6dd7560e33b9053351c29f30c8194b72f2d6932888581f503482/pydantic_core-2.41.5-cp311-cp311-win32.whl", hash = 
"sha256:2c010c6ded393148374c0f6f0bf89d206bf3217f201faa0635dcd56bd1520f6b", size = 1987549, upload-time = "2025-11-04T13:39:51.842Z" }, + { url = "https://files.pythonhosted.org/packages/11/66/f14d1d978ea94d1bc21fc98fcf570f9542fe55bfcc40269d4e1a21c19bf7/pydantic_core-2.41.5-cp311-cp311-win_amd64.whl", hash = "sha256:76ee27c6e9c7f16f47db7a94157112a2f3a00e958bc626e2f4ee8bec5c328fbe", size = 2011305, upload-time = "2025-11-04T13:39:53.485Z" }, + { url = "https://files.pythonhosted.org/packages/56/d8/0e271434e8efd03186c5386671328154ee349ff0354d83c74f5caaf096ed/pydantic_core-2.41.5-cp311-cp311-win_arm64.whl", hash = "sha256:4bc36bbc0b7584de96561184ad7f012478987882ebf9f9c389b23f432ea3d90f", size = 1972902, upload-time = "2025-11-04T13:39:56.488Z" }, + { url = "https://files.pythonhosted.org/packages/5f/5d/5f6c63eebb5afee93bcaae4ce9a898f3373ca23df3ccaef086d0233a35a7/pydantic_core-2.41.5-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:f41a7489d32336dbf2199c8c0a215390a751c5b014c2c1c5366e817202e9cdf7", size = 2110990, upload-time = "2025-11-04T13:39:58.079Z" }, + { url = "https://files.pythonhosted.org/packages/aa/32/9c2e8ccb57c01111e0fd091f236c7b371c1bccea0fa85247ac55b1e2b6b6/pydantic_core-2.41.5-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:070259a8818988b9a84a449a2a7337c7f430a22acc0859c6b110aa7212a6d9c0", size = 1896003, upload-time = "2025-11-04T13:39:59.956Z" }, + { url = "https://files.pythonhosted.org/packages/68/b8/a01b53cb0e59139fbc9e4fda3e9724ede8de279097179be4ff31f1abb65a/pydantic_core-2.41.5-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e96cea19e34778f8d59fe40775a7a574d95816eb150850a85a7a4c8f4b94ac69", size = 1919200, upload-time = "2025-11-04T13:40:02.241Z" }, + { url = "https://files.pythonhosted.org/packages/38/de/8c36b5198a29bdaade07b5985e80a233a5ac27137846f3bc2d3b40a47360/pydantic_core-2.41.5-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ed2e99c456e3fadd05c991f8f437ef902e00eedf34320ba2b0842bd1c3ca3a75", size = 2052578, upload-time = "2025-11-04T13:40:04.401Z" }, + { url = "https://files.pythonhosted.org/packages/00/b5/0e8e4b5b081eac6cb3dbb7e60a65907549a1ce035a724368c330112adfdd/pydantic_core-2.41.5-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:65840751b72fbfd82c3c640cff9284545342a4f1eb1586ad0636955b261b0b05", size = 2208504, upload-time = "2025-11-04T13:40:06.072Z" }, + { url = "https://files.pythonhosted.org/packages/77/56/87a61aad59c7c5b9dc8caad5a41a5545cba3810c3e828708b3d7404f6cef/pydantic_core-2.41.5-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e536c98a7626a98feb2d3eaf75944ef6f3dbee447e1f841eae16f2f0a72d8ddc", size = 2335816, upload-time = "2025-11-04T13:40:07.835Z" }, + { url = "https://files.pythonhosted.org/packages/0d/76/941cc9f73529988688a665a5c0ecff1112b3d95ab48f81db5f7606f522d3/pydantic_core-2.41.5-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:eceb81a8d74f9267ef4081e246ffd6d129da5d87e37a77c9bde550cb04870c1c", size = 2075366, upload-time = "2025-11-04T13:40:09.804Z" }, + { url = "https://files.pythonhosted.org/packages/d3/43/ebef01f69baa07a482844faaa0a591bad1ef129253ffd0cdaa9d8a7f72d3/pydantic_core-2.41.5-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d38548150c39b74aeeb0ce8ee1d8e82696f4a4e16ddc6de7b1d8823f7de4b9b5", size = 2171698, upload-time = "2025-11-04T13:40:12.004Z" }, + { url = 
"https://files.pythonhosted.org/packages/b1/87/41f3202e4193e3bacfc2c065fab7706ebe81af46a83d3e27605029c1f5a6/pydantic_core-2.41.5-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:c23e27686783f60290e36827f9c626e63154b82b116d7fe9adba1fda36da706c", size = 2132603, upload-time = "2025-11-04T13:40:13.868Z" }, + { url = "https://files.pythonhosted.org/packages/49/7d/4c00df99cb12070b6bccdef4a195255e6020a550d572768d92cc54dba91a/pydantic_core-2.41.5-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:482c982f814460eabe1d3bb0adfdc583387bd4691ef00b90575ca0d2b6fe2294", size = 2329591, upload-time = "2025-11-04T13:40:15.672Z" }, + { url = "https://files.pythonhosted.org/packages/cc/6a/ebf4b1d65d458f3cda6a7335d141305dfa19bdc61140a884d165a8a1bbc7/pydantic_core-2.41.5-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:bfea2a5f0b4d8d43adf9d7b8bf019fb46fdd10a2e5cde477fbcb9d1fa08c68e1", size = 2319068, upload-time = "2025-11-04T13:40:17.532Z" }, + { url = "https://files.pythonhosted.org/packages/49/3b/774f2b5cd4192d5ab75870ce4381fd89cf218af999515baf07e7206753f0/pydantic_core-2.41.5-cp312-cp312-win32.whl", hash = "sha256:b74557b16e390ec12dca509bce9264c3bbd128f8a2c376eaa68003d7f327276d", size = 1985908, upload-time = "2025-11-04T13:40:19.309Z" }, + { url = "https://files.pythonhosted.org/packages/86/45/00173a033c801cacf67c190fef088789394feaf88a98a7035b0e40d53dc9/pydantic_core-2.41.5-cp312-cp312-win_amd64.whl", hash = "sha256:1962293292865bca8e54702b08a4f26da73adc83dd1fcf26fbc875b35d81c815", size = 2020145, upload-time = "2025-11-04T13:40:21.548Z" }, + { url = "https://files.pythonhosted.org/packages/f9/22/91fbc821fa6d261b376a3f73809f907cec5ca6025642c463d3488aad22fb/pydantic_core-2.41.5-cp312-cp312-win_arm64.whl", hash = "sha256:1746d4a3d9a794cacae06a5eaaccb4b8643a131d45fbc9af23e353dc0a5ba5c3", size = 1976179, upload-time = "2025-11-04T13:40:23.393Z" }, + { url = "https://files.pythonhosted.org/packages/87/06/8806241ff1f70d9939f9af039c6c35f2360cf16e93c2ca76f184e76b1564/pydantic_core-2.41.5-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:941103c9be18ac8daf7b7adca8228f8ed6bb7a1849020f643b3a14d15b1924d9", size = 2120403, upload-time = "2025-11-04T13:40:25.248Z" }, + { url = "https://files.pythonhosted.org/packages/94/02/abfa0e0bda67faa65fef1c84971c7e45928e108fe24333c81f3bfe35d5f5/pydantic_core-2.41.5-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:112e305c3314f40c93998e567879e887a3160bb8689ef3d2c04b6cc62c33ac34", size = 1896206, upload-time = "2025-11-04T13:40:27.099Z" }, + { url = "https://files.pythonhosted.org/packages/15/df/a4c740c0943e93e6500f9eb23f4ca7ec9bf71b19e608ae5b579678c8d02f/pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0cbaad15cb0c90aa221d43c00e77bb33c93e8d36e0bf74760cd00e732d10a6a0", size = 1919307, upload-time = "2025-11-04T13:40:29.806Z" }, + { url = "https://files.pythonhosted.org/packages/9a/e3/6324802931ae1d123528988e0e86587c2072ac2e5394b4bc2bc34b61ff6e/pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:03ca43e12fab6023fc79d28ca6b39b05f794ad08ec2feccc59a339b02f2b3d33", size = 2063258, upload-time = "2025-11-04T13:40:33.544Z" }, + { url = "https://files.pythonhosted.org/packages/c9/d4/2230d7151d4957dd79c3044ea26346c148c98fbf0ee6ebd41056f2d62ab5/pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:dc799088c08fa04e43144b164feb0c13f9a0bc40503f8df3e9fde58a3c0c101e", size = 2214917, upload-time = "2025-11-04T13:40:35.479Z" }, + { url = 
"https://files.pythonhosted.org/packages/e6/9f/eaac5df17a3672fef0081b6c1bb0b82b33ee89aa5cec0d7b05f52fd4a1fa/pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:97aeba56665b4c3235a0e52b2c2f5ae9cd071b8a8310ad27bddb3f7fb30e9aa2", size = 2332186, upload-time = "2025-11-04T13:40:37.436Z" }, + { url = "https://files.pythonhosted.org/packages/cf/4e/35a80cae583a37cf15604b44240e45c05e04e86f9cfd766623149297e971/pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:406bf18d345822d6c21366031003612b9c77b3e29ffdb0f612367352aab7d586", size = 2073164, upload-time = "2025-11-04T13:40:40.289Z" }, + { url = "https://files.pythonhosted.org/packages/bf/e3/f6e262673c6140dd3305d144d032f7bd5f7497d3871c1428521f19f9efa2/pydantic_core-2.41.5-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b93590ae81f7010dbe380cdeab6f515902ebcbefe0b9327cc4804d74e93ae69d", size = 2179146, upload-time = "2025-11-04T13:40:42.809Z" }, + { url = "https://files.pythonhosted.org/packages/75/c7/20bd7fc05f0c6ea2056a4565c6f36f8968c0924f19b7d97bbfea55780e73/pydantic_core-2.41.5-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:01a3d0ab748ee531f4ea6c3e48ad9dac84ddba4b0d82291f87248f2f9de8d740", size = 2137788, upload-time = "2025-11-04T13:40:44.752Z" }, + { url = "https://files.pythonhosted.org/packages/3a/8d/34318ef985c45196e004bc46c6eab2eda437e744c124ef0dbe1ff2c9d06b/pydantic_core-2.41.5-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:6561e94ba9dacc9c61bce40e2d6bdc3bfaa0259d3ff36ace3b1e6901936d2e3e", size = 2340133, upload-time = "2025-11-04T13:40:46.66Z" }, + { url = "https://files.pythonhosted.org/packages/9c/59/013626bf8c78a5a5d9350d12e7697d3d4de951a75565496abd40ccd46bee/pydantic_core-2.41.5-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:915c3d10f81bec3a74fbd4faebe8391013ba61e5a1a8d48c4455b923bdda7858", size = 2324852, upload-time = "2025-11-04T13:40:48.575Z" }, + { url = "https://files.pythonhosted.org/packages/1a/d9/c248c103856f807ef70c18a4f986693a46a8ffe1602e5d361485da502d20/pydantic_core-2.41.5-cp313-cp313-win32.whl", hash = "sha256:650ae77860b45cfa6e2cdafc42618ceafab3a2d9a3811fcfbd3bbf8ac3c40d36", size = 1994679, upload-time = "2025-11-04T13:40:50.619Z" }, + { url = "https://files.pythonhosted.org/packages/9e/8b/341991b158ddab181cff136acd2552c9f35bd30380422a639c0671e99a91/pydantic_core-2.41.5-cp313-cp313-win_amd64.whl", hash = "sha256:79ec52ec461e99e13791ec6508c722742ad745571f234ea6255bed38c6480f11", size = 2019766, upload-time = "2025-11-04T13:40:52.631Z" }, + { url = "https://files.pythonhosted.org/packages/73/7d/f2f9db34af103bea3e09735bb40b021788a5e834c81eedb541991badf8f5/pydantic_core-2.41.5-cp313-cp313-win_arm64.whl", hash = "sha256:3f84d5c1b4ab906093bdc1ff10484838aca54ef08de4afa9de0f5f14d69639cd", size = 1981005, upload-time = "2025-11-04T13:40:54.734Z" }, + { url = "https://files.pythonhosted.org/packages/ea/28/46b7c5c9635ae96ea0fbb779e271a38129df2550f763937659ee6c5dbc65/pydantic_core-2.41.5-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:3f37a19d7ebcdd20b96485056ba9e8b304e27d9904d233d7b1015db320e51f0a", size = 2119622, upload-time = "2025-11-04T13:40:56.68Z" }, + { url = "https://files.pythonhosted.org/packages/74/1a/145646e5687e8d9a1e8d09acb278c8535ebe9e972e1f162ed338a622f193/pydantic_core-2.41.5-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:1d1d9764366c73f996edd17abb6d9d7649a7eb690006ab6adbda117717099b14", size = 1891725, upload-time = "2025-11-04T13:40:58.807Z" }, + { url = 
"https://files.pythonhosted.org/packages/23/04/e89c29e267b8060b40dca97bfc64a19b2a3cf99018167ea1677d96368273/pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:25e1c2af0fce638d5f1988b686f3b3ea8cd7de5f244ca147c777769e798a9cd1", size = 1915040, upload-time = "2025-11-04T13:41:00.853Z" }, + { url = "https://files.pythonhosted.org/packages/84/a3/15a82ac7bd97992a82257f777b3583d3e84bdb06ba6858f745daa2ec8a85/pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:506d766a8727beef16b7adaeb8ee6217c64fc813646b424d0804d67c16eddb66", size = 2063691, upload-time = "2025-11-04T13:41:03.504Z" }, + { url = "https://files.pythonhosted.org/packages/74/9b/0046701313c6ef08c0c1cf0e028c67c770a4e1275ca73131563c5f2a310a/pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4819fa52133c9aa3c387b3328f25c1facc356491e6135b459f1de698ff64d869", size = 2213897, upload-time = "2025-11-04T13:41:05.804Z" }, + { url = "https://files.pythonhosted.org/packages/8a/cd/6bac76ecd1b27e75a95ca3a9a559c643b3afcd2dd62086d4b7a32a18b169/pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2b761d210c9ea91feda40d25b4efe82a1707da2ef62901466a42492c028553a2", size = 2333302, upload-time = "2025-11-04T13:41:07.809Z" }, + { url = "https://files.pythonhosted.org/packages/4c/d2/ef2074dc020dd6e109611a8be4449b98cd25e1b9b8a303c2f0fca2f2bcf7/pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:22f0fb8c1c583a3b6f24df2470833b40207e907b90c928cc8d3594b76f874375", size = 2064877, upload-time = "2025-11-04T13:41:09.827Z" }, + { url = "https://files.pythonhosted.org/packages/18/66/e9db17a9a763d72f03de903883c057b2592c09509ccfe468187f2a2eef29/pydantic_core-2.41.5-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2782c870e99878c634505236d81e5443092fba820f0373997ff75f90f68cd553", size = 2180680, upload-time = "2025-11-04T13:41:12.379Z" }, + { url = "https://files.pythonhosted.org/packages/d3/9e/3ce66cebb929f3ced22be85d4c2399b8e85b622db77dad36b73c5387f8f8/pydantic_core-2.41.5-cp314-cp314-musllinux_1_1_aarch64.whl", hash = "sha256:0177272f88ab8312479336e1d777f6b124537d47f2123f89cb37e0accea97f90", size = 2138960, upload-time = "2025-11-04T13:41:14.627Z" }, + { url = "https://files.pythonhosted.org/packages/a6/62/205a998f4327d2079326b01abee48e502ea739d174f0a89295c481a2272e/pydantic_core-2.41.5-cp314-cp314-musllinux_1_1_armv7l.whl", hash = "sha256:63510af5e38f8955b8ee5687740d6ebf7c2a0886d15a6d65c32814613681bc07", size = 2339102, upload-time = "2025-11-04T13:41:16.868Z" }, + { url = "https://files.pythonhosted.org/packages/3c/0d/f05e79471e889d74d3d88f5bd20d0ed189ad94c2423d81ff8d0000aab4ff/pydantic_core-2.41.5-cp314-cp314-musllinux_1_1_x86_64.whl", hash = "sha256:e56ba91f47764cc14f1daacd723e3e82d1a89d783f0f5afe9c364b8bb491ccdb", size = 2326039, upload-time = "2025-11-04T13:41:18.934Z" }, + { url = "https://files.pythonhosted.org/packages/ec/e1/e08a6208bb100da7e0c4b288eed624a703f4d129bde2da475721a80cab32/pydantic_core-2.41.5-cp314-cp314-win32.whl", hash = "sha256:aec5cf2fd867b4ff45b9959f8b20ea3993fc93e63c7363fe6851424c8a7e7c23", size = 1995126, upload-time = "2025-11-04T13:41:21.418Z" }, + { url = "https://files.pythonhosted.org/packages/48/5d/56ba7b24e9557f99c9237e29f5c09913c81eeb2f3217e40e922353668092/pydantic_core-2.41.5-cp314-cp314-win_amd64.whl", hash = "sha256:8e7c86f27c585ef37c35e56a96363ab8de4e549a95512445b85c96d3e2f7c1bf", 
size = 2015489, upload-time = "2025-11-04T13:41:24.076Z" }, + { url = "https://files.pythonhosted.org/packages/4e/bb/f7a190991ec9e3e0ba22e4993d8755bbc4a32925c0b5b42775c03e8148f9/pydantic_core-2.41.5-cp314-cp314-win_arm64.whl", hash = "sha256:e672ba74fbc2dc8eea59fb6d4aed6845e6905fc2a8afe93175d94a83ba2a01a0", size = 1977288, upload-time = "2025-11-04T13:41:26.33Z" }, + { url = "https://files.pythonhosted.org/packages/92/ed/77542d0c51538e32e15afe7899d79efce4b81eee631d99850edc2f5e9349/pydantic_core-2.41.5-cp314-cp314t-macosx_10_12_x86_64.whl", hash = "sha256:8566def80554c3faa0e65ac30ab0932b9e3a5cd7f8323764303d468e5c37595a", size = 2120255, upload-time = "2025-11-04T13:41:28.569Z" }, + { url = "https://files.pythonhosted.org/packages/bb/3d/6913dde84d5be21e284439676168b28d8bbba5600d838b9dca99de0fad71/pydantic_core-2.41.5-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:b80aa5095cd3109962a298ce14110ae16b8c1aece8b72f9dafe81cf597ad80b3", size = 1863760, upload-time = "2025-11-04T13:41:31.055Z" }, + { url = "https://files.pythonhosted.org/packages/5a/f0/e5e6b99d4191da102f2b0eb9687aaa7f5bea5d9964071a84effc3e40f997/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3006c3dd9ba34b0c094c544c6006cc79e87d8612999f1a5d43b769b89181f23c", size = 1878092, upload-time = "2025-11-04T13:41:33.21Z" }, + { url = "https://files.pythonhosted.org/packages/71/48/36fb760642d568925953bcc8116455513d6e34c4beaa37544118c36aba6d/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:72f6c8b11857a856bcfa48c86f5368439f74453563f951e473514579d44aa612", size = 2053385, upload-time = "2025-11-04T13:41:35.508Z" }, + { url = "https://files.pythonhosted.org/packages/20/25/92dc684dd8eb75a234bc1c764b4210cf2646479d54b47bf46061657292a8/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5cb1b2f9742240e4bb26b652a5aeb840aa4b417c7748b6f8387927bc6e45e40d", size = 2218832, upload-time = "2025-11-04T13:41:37.732Z" }, + { url = "https://files.pythonhosted.org/packages/e2/09/f53e0b05023d3e30357d82eb35835d0f6340ca344720a4599cd663dca599/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:bd3d54f38609ff308209bd43acea66061494157703364ae40c951f83ba99a1a9", size = 2327585, upload-time = "2025-11-04T13:41:40Z" }, + { url = "https://files.pythonhosted.org/packages/aa/4e/2ae1aa85d6af35a39b236b1b1641de73f5a6ac4d5a7509f77b814885760c/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2ff4321e56e879ee8d2a879501c8e469414d948f4aba74a2d4593184eb326660", size = 2041078, upload-time = "2025-11-04T13:41:42.323Z" }, + { url = "https://files.pythonhosted.org/packages/cd/13/2e215f17f0ef326fc72afe94776edb77525142c693767fc347ed6288728d/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d0d2568a8c11bf8225044aa94409e21da0cb09dcdafe9ecd10250b2baad531a9", size = 2173914, upload-time = "2025-11-04T13:41:45.221Z" }, + { url = "https://files.pythonhosted.org/packages/02/7a/f999a6dcbcd0e5660bc348a3991c8915ce6599f4f2c6ac22f01d7a10816c/pydantic_core-2.41.5-cp314-cp314t-musllinux_1_1_aarch64.whl", hash = "sha256:a39455728aabd58ceabb03c90e12f71fd30fa69615760a075b9fec596456ccc3", size = 2129560, upload-time = "2025-11-04T13:41:47.474Z" }, + { url = "https://files.pythonhosted.org/packages/3a/b1/6c990ac65e3b4c079a4fb9f5b05f5b013afa0f4ed6780a3dd236d2cbdc64/pydantic_core-2.41.5-cp314-cp314t-musllinux_1_1_armv7l.whl", 
hash = "sha256:239edca560d05757817c13dc17c50766136d21f7cd0fac50295499ae24f90fdf", size = 2329244, upload-time = "2025-11-04T13:41:49.992Z" }, + { url = "https://files.pythonhosted.org/packages/d9/02/3c562f3a51afd4d88fff8dffb1771b30cfdfd79befd9883ee094f5b6c0d8/pydantic_core-2.41.5-cp314-cp314t-musllinux_1_1_x86_64.whl", hash = "sha256:2a5e06546e19f24c6a96a129142a75cee553cc018ffee48a460059b1185f4470", size = 2331955, upload-time = "2025-11-04T13:41:54.079Z" }, + { url = "https://files.pythonhosted.org/packages/5c/96/5fb7d8c3c17bc8c62fdb031c47d77a1af698f1d7a406b0f79aaa1338f9ad/pydantic_core-2.41.5-cp314-cp314t-win32.whl", hash = "sha256:b4ececa40ac28afa90871c2cc2b9ffd2ff0bf749380fbdf57d165fd23da353aa", size = 1988906, upload-time = "2025-11-04T13:41:56.606Z" }, + { url = "https://files.pythonhosted.org/packages/22/ed/182129d83032702912c2e2d8bbe33c036f342cc735737064668585dac28f/pydantic_core-2.41.5-cp314-cp314t-win_amd64.whl", hash = "sha256:80aa89cad80b32a912a65332f64a4450ed00966111b6615ca6816153d3585a8c", size = 1981607, upload-time = "2025-11-04T13:41:58.889Z" }, + { url = "https://files.pythonhosted.org/packages/9f/ed/068e41660b832bb0b1aa5b58011dea2a3fe0ba7861ff38c4d4904c1c1a99/pydantic_core-2.41.5-cp314-cp314t-win_arm64.whl", hash = "sha256:35b44f37a3199f771c3eaa53051bc8a70cd7b54f333531c59e29fd4db5d15008", size = 1974769, upload-time = "2025-11-04T13:42:01.186Z" }, + { url = "https://files.pythonhosted.org/packages/11/72/90fda5ee3b97e51c494938a4a44c3a35a9c96c19bba12372fb9c634d6f57/pydantic_core-2.41.5-graalpy311-graalpy242_311_native-macosx_10_12_x86_64.whl", hash = "sha256:b96d5f26b05d03cc60f11a7761a5ded1741da411e7fe0909e27a5e6a0cb7b034", size = 2115441, upload-time = "2025-11-04T13:42:39.557Z" }, + { url = "https://files.pythonhosted.org/packages/1f/53/8942f884fa33f50794f119012dc6a1a02ac43a56407adaac20463df8e98f/pydantic_core-2.41.5-graalpy311-graalpy242_311_native-macosx_11_0_arm64.whl", hash = "sha256:634e8609e89ceecea15e2d61bc9ac3718caaaa71963717bf3c8f38bfde64242c", size = 1930291, upload-time = "2025-11-04T13:42:42.169Z" }, + { url = "https://files.pythonhosted.org/packages/79/c8/ecb9ed9cd942bce09fc888ee960b52654fbdbede4ba6c2d6e0d3b1d8b49c/pydantic_core-2.41.5-graalpy311-graalpy242_311_native-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:93e8740d7503eb008aa2df04d3b9735f845d43ae845e6dcd2be0b55a2da43cd2", size = 1948632, upload-time = "2025-11-04T13:42:44.564Z" }, + { url = "https://files.pythonhosted.org/packages/2e/1b/687711069de7efa6af934e74f601e2a4307365e8fdc404703afc453eab26/pydantic_core-2.41.5-graalpy311-graalpy242_311_native-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f15489ba13d61f670dcc96772e733aad1a6f9c429cc27574c6cdaed82d0146ad", size = 2138905, upload-time = "2025-11-04T13:42:47.156Z" }, + { url = "https://files.pythonhosted.org/packages/09/32/59b0c7e63e277fa7911c2fc70ccfb45ce4b98991e7ef37110663437005af/pydantic_core-2.41.5-graalpy312-graalpy250_312_native-macosx_10_12_x86_64.whl", hash = "sha256:7da7087d756b19037bc2c06edc6c170eeef3c3bafcb8f532ff17d64dc427adfd", size = 2110495, upload-time = "2025-11-04T13:42:49.689Z" }, + { url = "https://files.pythonhosted.org/packages/aa/81/05e400037eaf55ad400bcd318c05bb345b57e708887f07ddb2d20e3f0e98/pydantic_core-2.41.5-graalpy312-graalpy250_312_native-macosx_11_0_arm64.whl", hash = "sha256:aabf5777b5c8ca26f7824cb4a120a740c9588ed58df9b2d196ce92fba42ff8dc", size = 1915388, upload-time = "2025-11-04T13:42:52.215Z" }, + { url = 
"https://files.pythonhosted.org/packages/6e/0d/e3549b2399f71d56476b77dbf3cf8937cec5cd70536bdc0e374a421d0599/pydantic_core-2.41.5-graalpy312-graalpy250_312_native-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c007fe8a43d43b3969e8469004e9845944f1a80e6acd47c150856bb87f230c56", size = 1942879, upload-time = "2025-11-04T13:42:56.483Z" }, + { url = "https://files.pythonhosted.org/packages/f7/07/34573da085946b6a313d7c42f82f16e8920bfd730665de2d11c0c37a74b5/pydantic_core-2.41.5-graalpy312-graalpy250_312_native-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:76d0819de158cd855d1cbb8fcafdf6f5cf1eb8e470abe056d5d161106e38062b", size = 2139017, upload-time = "2025-11-04T13:42:59.471Z" }, + { url = "https://files.pythonhosted.org/packages/5f/9b/1b3f0e9f9305839d7e84912f9e8bfbd191ed1b1ef48083609f0dabde978c/pydantic_core-2.41.5-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:b2379fa7ed44ddecb5bfe4e48577d752db9fc10be00a6b7446e9663ba143de26", size = 2101980, upload-time = "2025-11-04T13:43:25.97Z" }, + { url = "https://files.pythonhosted.org/packages/a4/ed/d71fefcb4263df0da6a85b5d8a7508360f2f2e9b3bf5814be9c8bccdccc1/pydantic_core-2.41.5-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:266fb4cbf5e3cbd0b53669a6d1b039c45e3ce651fd5442eff4d07c2cc8d66808", size = 1923865, upload-time = "2025-11-04T13:43:28.763Z" }, + { url = "https://files.pythonhosted.org/packages/ce/3a/626b38db460d675f873e4444b4bb030453bbe7b4ba55df821d026a0493c4/pydantic_core-2.41.5-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:58133647260ea01e4d0500089a8c4f07bd7aa6ce109682b1426394988d8aaacc", size = 2134256, upload-time = "2025-11-04T13:43:31.71Z" }, + { url = "https://files.pythonhosted.org/packages/83/d9/8412d7f06f616bbc053d30cb4e5f76786af3221462ad5eee1f202021eb4e/pydantic_core-2.41.5-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:287dad91cfb551c363dc62899a80e9e14da1f0e2b6ebde82c806612ca2a13ef1", size = 2174762, upload-time = "2025-11-04T13:43:34.744Z" }, + { url = "https://files.pythonhosted.org/packages/55/4c/162d906b8e3ba3a99354e20faa1b49a85206c47de97a639510a0e673f5da/pydantic_core-2.41.5-pp311-pypy311_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:03b77d184b9eb40240ae9fd676ca364ce1085f203e1b1256f8ab9984dca80a84", size = 2143141, upload-time = "2025-11-04T13:43:37.701Z" }, + { url = "https://files.pythonhosted.org/packages/1f/f2/f11dd73284122713f5f89fc940f370d035fa8e1e078d446b3313955157fe/pydantic_core-2.41.5-pp311-pypy311_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:a668ce24de96165bb239160b3d854943128f4334822900534f2fe947930e5770", size = 2330317, upload-time = "2025-11-04T13:43:40.406Z" }, + { url = "https://files.pythonhosted.org/packages/88/9d/b06ca6acfe4abb296110fb1273a4d848a0bfb2ff65f3ee92127b3244e16b/pydantic_core-2.41.5-pp311-pypy311_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:f14f8f046c14563f8eb3f45f499cc658ab8d10072961e07225e507adb700e93f", size = 2316992, upload-time = "2025-11-04T13:43:43.602Z" }, + { url = "https://files.pythonhosted.org/packages/36/c7/cfc8e811f061c841d7990b0201912c3556bfeb99cdcb7ed24adc8d6f8704/pydantic_core-2.41.5-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:56121965f7a4dc965bff783d70b907ddf3d57f6eba29b6d2e5dabfaf07799c51", size = 2145302, upload-time = "2025-11-04T13:43:46.64Z" }, +] + +[[package]] +name = "pydantic-settings" +version = "2.12.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "pydantic" }, + { name = "python-dotenv" }, + { name = 
"typing-inspection" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/43/4b/ac7e0aae12027748076d72a8764ff1c9d82ca75a7a52622e67ed3f765c54/pydantic_settings-2.12.0.tar.gz", hash = "sha256:005538ef951e3c2a68e1c08b292b5f2e71490def8589d4221b95dab00dafcfd0", size = 194184, upload-time = "2025-11-10T14:25:47.013Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/c1/60/5d4751ba3f4a40a6891f24eec885f51afd78d208498268c734e256fb13c4/pydantic_settings-2.12.0-py3-none-any.whl", hash = "sha256:fddb9fd99a5b18da837b29710391e945b1e30c135477f484084ee513adb93809", size = 51880, upload-time = "2025-11-10T14:25:45.546Z" }, +] + +[[package]] +name = "pygments" +version = "2.19.2" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/b0/77/a5b8c569bf593b0140bde72ea885a803b82086995367bf2037de0159d924/pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887", size = 4968631, upload-time = "2025-06-21T13:39:12.283Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/c7/21/705964c7812476f378728bdf590ca4b771ec72385c533964653c68e86bdc/pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b", size = 1225217, upload-time = "2025-06-21T13:39:07.939Z" }, +] + +[[package]] +name = "pyserial-labgrid" +version = "3.5.0.2" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/dd/29/7b475dc1d39fde54a99ce58a73b98ea7ef77863a4bf974d2850fbc456d9e/pyserial-labgrid-3.5.0.2.tar.gz", hash = "sha256:98f826609795e92157ba2c2e9d8e98c6cec2f399de353f76dd0436536f5c52ab", size = 161398, upload-time = "2023-04-26T09:57:04.041Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/42/4e/bd1f71b7889cc1d34aeca3b162d5b5dd7eccd47631b5a59449288a5e3bb0/pyserial_labgrid-3.5.0.2-py2.py3-none-any.whl", hash = "sha256:99525df10c9d6b7739a1d1c44c63ac64a54237fb4c3d643d74a7aad8b96817fd", size = 91562, upload-time = "2023-04-26T09:57:02.357Z" }, +] + +[[package]] +name = "pytest" +version = "9.0.2" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "colorama", marker = "sys_platform == 'win32'" }, + { name = "iniconfig" }, + { name = "packaging" }, + { name = "pluggy" }, + { name = "pygments" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/d1/db/7ef3487e0fb0049ddb5ce41d3a49c235bf9ad299b6a25d5780a89f19230f/pytest-9.0.2.tar.gz", hash = "sha256:75186651a92bd89611d1d9fc20f0b4345fd827c41ccd5c299a868a05d70edf11", size = 1568901, upload-time = "2025-12-06T21:30:51.014Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/3b/ab/b3226f0bd7cdcf710fbede2b3548584366da3b19b5021e74f5bde2a8fa3f/pytest-9.0.2-py3-none-any.whl", hash = "sha256:711ffd45bf766d5264d487b917733b453d917afd2b0ad65223959f59089f875b", size = 374801, upload-time = "2025-12-06T21:30:49.154Z" }, +] + +[[package]] +name = "pytest-asyncio" +version = "1.3.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "pytest" }, + { name = "typing-extensions", marker = "python_full_version < '3.13'" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/90/2c/8af215c0f776415f3590cac4f9086ccefd6fd463befeae41cd4d3f193e5a/pytest_asyncio-1.3.0.tar.gz", hash = "sha256:d7f52f36d231b80ee124cd216ffb19369aa168fc10095013c6b014a34d3ee9e5", size = 50087, upload-time = "2025-11-10T16:07:47.256Z" } +wheels = [ + { url = 
"https://files.pythonhosted.org/packages/e5/35/f8b19922b6a25bc0880171a2f1a003eaeb93657475193ab516fd87cac9da/pytest_asyncio-1.3.0-py3-none-any.whl", hash = "sha256:611e26147c7f77640e6d0a92a38ed17c3e9848063698d5c93d5aa7aa11cebff5", size = 15075, upload-time = "2025-11-10T16:07:45.537Z" }, +] + +[[package]] +name = "python-dotenv" +version = "1.2.1" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/f0/26/19cadc79a718c5edbec86fd4919a6b6d3f681039a2f6d66d14be94e75fb9/python_dotenv-1.2.1.tar.gz", hash = "sha256:42667e897e16ab0d66954af0e60a9caa94f0fd4ecf3aaf6d2d260eec1aa36ad6", size = 44221, upload-time = "2025-10-26T15:12:10.434Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/14/1b/a298b06749107c305e1fe0f814c6c74aea7b2f1e10989cb30f544a1b3253/python_dotenv-1.2.1-py3-none-any.whl", hash = "sha256:b81ee9561e9ca4004139c6cbba3a238c32b03e4894671e181b671e8cb8425d61", size = 21230, upload-time = "2025-10-26T15:12:09.109Z" }, +] + +[[package]] +name = "pyudev" +version = "0.24.4" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/5e/1d/8bdbf651de1002e8b58fbe817bee22b1e8bfcdd24341d42c3238ce9a75f4/pyudev-0.24.4.tar.gz", hash = "sha256:e788bb983700b1a84efc2e88862b0a51af2a995d5b86bc9997546505cf7b36bc", size = 56135, upload-time = "2025-10-08T17:26:58.661Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/2a/51/3dc0cd6498b24dea3cdeaed648568e3ca7454d41334d840b114156d7479f/pyudev-0.24.4-py3-none-any.whl", hash = "sha256:b3b6b01c68e6fc628428cc45ff3fe6c277afbb5d96507f14473ddb4a6b959e00", size = 62784, upload-time = "2025-10-08T17:26:57.664Z" }, +] + +[[package]] +name = "pyusb" +version = "1.3.1" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/00/6b/ce3727395e52b7b76dfcf0c665e37d223b680b9becc60710d4bc08b7b7cb/pyusb-1.3.1.tar.gz", hash = "sha256:3af070b607467c1c164f49d5b0caabe8ac78dbed9298d703a8dbf9df4052d17e", size = 77281, upload-time = "2025-01-08T23:45:01.866Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/28/b8/27e6312e86408a44fe16bd28ee12dd98608b39f7e7e57884a24e8f29b573/pyusb-1.3.1-py3-none-any.whl", hash = "sha256:bf9b754557af4717fe80c2b07cc2b923a9151f5c08d17bdb5345dac09d6a0430", size = 58465, upload-time = "2025-01-08T23:45:00.029Z" }, +] + +[[package]] +name = "pyyaml" +version = "6.0.3" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/05/8e/961c0007c59b8dd7729d542c61a4d537767a59645b82a0b521206e1e25c2/pyyaml-6.0.3.tar.gz", hash = "sha256:d76623373421df22fb4cf8817020cbb7ef15c725b9d5e45f17e189bfc384190f", size = 130960, upload-time = "2025-09-25T21:33:16.546Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/6d/16/a95b6757765b7b031c9374925bb718d55e0a9ba8a1b6a12d25962ea44347/pyyaml-6.0.3-cp311-cp311-macosx_10_13_x86_64.whl", hash = "sha256:44edc647873928551a01e7a563d7452ccdebee747728c1080d881d68af7b997e", size = 185826, upload-time = "2025-09-25T21:31:58.655Z" }, + { url = "https://files.pythonhosted.org/packages/16/19/13de8e4377ed53079ee996e1ab0a9c33ec2faf808a4647b7b4c0d46dd239/pyyaml-6.0.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:652cb6edd41e718550aad172851962662ff2681490a8a711af6a4d288dd96824", size = 175577, upload-time = "2025-09-25T21:32:00.088Z" }, + { url = 
"https://files.pythonhosted.org/packages/0c/62/d2eb46264d4b157dae1275b573017abec435397aa59cbcdab6fc978a8af4/pyyaml-6.0.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:10892704fc220243f5305762e276552a0395f7beb4dbf9b14ec8fd43b57f126c", size = 775556, upload-time = "2025-09-25T21:32:01.31Z" }, + { url = "https://files.pythonhosted.org/packages/10/cb/16c3f2cf3266edd25aaa00d6c4350381c8b012ed6f5276675b9eba8d9ff4/pyyaml-6.0.3-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:850774a7879607d3a6f50d36d04f00ee69e7fc816450e5f7e58d7f17f1ae5c00", size = 882114, upload-time = "2025-09-25T21:32:03.376Z" }, + { url = "https://files.pythonhosted.org/packages/71/60/917329f640924b18ff085ab889a11c763e0b573da888e8404ff486657602/pyyaml-6.0.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:b8bb0864c5a28024fac8a632c443c87c5aa6f215c0b126c449ae1a150412f31d", size = 806638, upload-time = "2025-09-25T21:32:04.553Z" }, + { url = "https://files.pythonhosted.org/packages/dd/6f/529b0f316a9fd167281a6c3826b5583e6192dba792dd55e3203d3f8e655a/pyyaml-6.0.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:1d37d57ad971609cf3c53ba6a7e365e40660e3be0e5175fa9f2365a379d6095a", size = 767463, upload-time = "2025-09-25T21:32:06.152Z" }, + { url = "https://files.pythonhosted.org/packages/f2/6a/b627b4e0c1dd03718543519ffb2f1deea4a1e6d42fbab8021936a4d22589/pyyaml-6.0.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:37503bfbfc9d2c40b344d06b2199cf0e96e97957ab1c1b546fd4f87e53e5d3e4", size = 794986, upload-time = "2025-09-25T21:32:07.367Z" }, + { url = "https://files.pythonhosted.org/packages/45/91/47a6e1c42d9ee337c4839208f30d9f09caa9f720ec7582917b264defc875/pyyaml-6.0.3-cp311-cp311-win32.whl", hash = "sha256:8098f252adfa6c80ab48096053f512f2321f0b998f98150cea9bd23d83e1467b", size = 142543, upload-time = "2025-09-25T21:32:08.95Z" }, + { url = "https://files.pythonhosted.org/packages/da/e3/ea007450a105ae919a72393cb06f122f288ef60bba2dc64b26e2646fa315/pyyaml-6.0.3-cp311-cp311-win_amd64.whl", hash = "sha256:9f3bfb4965eb874431221a3ff3fdcddc7e74e3b07799e0e84ca4a0f867d449bf", size = 158763, upload-time = "2025-09-25T21:32:09.96Z" }, + { url = "https://files.pythonhosted.org/packages/d1/33/422b98d2195232ca1826284a76852ad5a86fe23e31b009c9886b2d0fb8b2/pyyaml-6.0.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:7f047e29dcae44602496db43be01ad42fc6f1cc0d8cd6c83d342306c32270196", size = 182063, upload-time = "2025-09-25T21:32:11.445Z" }, + { url = "https://files.pythonhosted.org/packages/89/a0/6cf41a19a1f2f3feab0e9c0b74134aa2ce6849093d5517a0c550fe37a648/pyyaml-6.0.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:fc09d0aa354569bc501d4e787133afc08552722d3ab34836a80547331bb5d4a0", size = 173973, upload-time = "2025-09-25T21:32:12.492Z" }, + { url = "https://files.pythonhosted.org/packages/ed/23/7a778b6bd0b9a8039df8b1b1d80e2e2ad78aa04171592c8a5c43a56a6af4/pyyaml-6.0.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:9149cad251584d5fb4981be1ecde53a1ca46c891a79788c0df828d2f166bda28", size = 775116, upload-time = "2025-09-25T21:32:13.652Z" }, + { url = "https://files.pythonhosted.org/packages/65/30/d7353c338e12baef4ecc1b09e877c1970bd3382789c159b4f89d6a70dc09/pyyaml-6.0.3-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:5fdec68f91a0c6739b380c83b951e2c72ac0197ace422360e6d5a959d8d97b2c", size = 844011, 
upload-time = "2025-09-25T21:32:15.21Z" }, + { url = "https://files.pythonhosted.org/packages/8b/9d/b3589d3877982d4f2329302ef98a8026e7f4443c765c46cfecc8858c6b4b/pyyaml-6.0.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ba1cc08a7ccde2d2ec775841541641e4548226580ab850948cbfda66a1befcdc", size = 807870, upload-time = "2025-09-25T21:32:16.431Z" }, + { url = "https://files.pythonhosted.org/packages/05/c0/b3be26a015601b822b97d9149ff8cb5ead58c66f981e04fedf4e762f4bd4/pyyaml-6.0.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:8dc52c23056b9ddd46818a57b78404882310fb473d63f17b07d5c40421e47f8e", size = 761089, upload-time = "2025-09-25T21:32:17.56Z" }, + { url = "https://files.pythonhosted.org/packages/be/8e/98435a21d1d4b46590d5459a22d88128103f8da4c2d4cb8f14f2a96504e1/pyyaml-6.0.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:41715c910c881bc081f1e8872880d3c650acf13dfa8214bad49ed4cede7c34ea", size = 790181, upload-time = "2025-09-25T21:32:18.834Z" }, + { url = "https://files.pythonhosted.org/packages/74/93/7baea19427dcfbe1e5a372d81473250b379f04b1bd3c4c5ff825e2327202/pyyaml-6.0.3-cp312-cp312-win32.whl", hash = "sha256:96b533f0e99f6579b3d4d4995707cf36df9100d67e0c8303a0c55b27b5f99bc5", size = 137658, upload-time = "2025-09-25T21:32:20.209Z" }, + { url = "https://files.pythonhosted.org/packages/86/bf/899e81e4cce32febab4fb42bb97dcdf66bc135272882d1987881a4b519e9/pyyaml-6.0.3-cp312-cp312-win_amd64.whl", hash = "sha256:5fcd34e47f6e0b794d17de1b4ff496c00986e1c83f7ab2fb8fcfe9616ff7477b", size = 154003, upload-time = "2025-09-25T21:32:21.167Z" }, + { url = "https://files.pythonhosted.org/packages/1a/08/67bd04656199bbb51dbed1439b7f27601dfb576fb864099c7ef0c3e55531/pyyaml-6.0.3-cp312-cp312-win_arm64.whl", hash = "sha256:64386e5e707d03a7e172c0701abfb7e10f0fb753ee1d773128192742712a98fd", size = 140344, upload-time = "2025-09-25T21:32:22.617Z" }, + { url = "https://files.pythonhosted.org/packages/d1/11/0fd08f8192109f7169db964b5707a2f1e8b745d4e239b784a5a1dd80d1db/pyyaml-6.0.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:8da9669d359f02c0b91ccc01cac4a67f16afec0dac22c2ad09f46bee0697eba8", size = 181669, upload-time = "2025-09-25T21:32:23.673Z" }, + { url = "https://files.pythonhosted.org/packages/b1/16/95309993f1d3748cd644e02e38b75d50cbc0d9561d21f390a76242ce073f/pyyaml-6.0.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:2283a07e2c21a2aa78d9c4442724ec1eb15f5e42a723b99cb3d822d48f5f7ad1", size = 173252, upload-time = "2025-09-25T21:32:25.149Z" }, + { url = "https://files.pythonhosted.org/packages/50/31/b20f376d3f810b9b2371e72ef5adb33879b25edb7a6d072cb7ca0c486398/pyyaml-6.0.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ee2922902c45ae8ccada2c5b501ab86c36525b883eff4255313a253a3160861c", size = 767081, upload-time = "2025-09-25T21:32:26.575Z" }, + { url = "https://files.pythonhosted.org/packages/49/1e/a55ca81e949270d5d4432fbbd19dfea5321eda7c41a849d443dc92fd1ff7/pyyaml-6.0.3-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a33284e20b78bd4a18c8c2282d549d10bc8408a2a7ff57653c0cf0b9be0afce5", size = 841159, upload-time = "2025-09-25T21:32:27.727Z" }, + { url = "https://files.pythonhosted.org/packages/74/27/e5b8f34d02d9995b80abcef563ea1f8b56d20134d8f4e5e81733b1feceb2/pyyaml-6.0.3-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0f29edc409a6392443abf94b9cf89ce99889a1dd5376d94316ae5145dfedd5d6", size = 801626, 
upload-time = "2025-09-25T21:32:28.878Z" }, + { url = "https://files.pythonhosted.org/packages/f9/11/ba845c23988798f40e52ba45f34849aa8a1f2d4af4b798588010792ebad6/pyyaml-6.0.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f7057c9a337546edc7973c0d3ba84ddcdf0daa14533c2065749c9075001090e6", size = 753613, upload-time = "2025-09-25T21:32:30.178Z" }, + { url = "https://files.pythonhosted.org/packages/3d/e0/7966e1a7bfc0a45bf0a7fb6b98ea03fc9b8d84fa7f2229e9659680b69ee3/pyyaml-6.0.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:eda16858a3cab07b80edaf74336ece1f986ba330fdb8ee0d6c0d68fe82bc96be", size = 794115, upload-time = "2025-09-25T21:32:31.353Z" }, + { url = "https://files.pythonhosted.org/packages/de/94/980b50a6531b3019e45ddeada0626d45fa85cbe22300844a7983285bed3b/pyyaml-6.0.3-cp313-cp313-win32.whl", hash = "sha256:d0eae10f8159e8fdad514efdc92d74fd8d682c933a6dd088030f3834bc8e6b26", size = 137427, upload-time = "2025-09-25T21:32:32.58Z" }, + { url = "https://files.pythonhosted.org/packages/97/c9/39d5b874e8b28845e4ec2202b5da735d0199dbe5b8fb85f91398814a9a46/pyyaml-6.0.3-cp313-cp313-win_amd64.whl", hash = "sha256:79005a0d97d5ddabfeeea4cf676af11e647e41d81c9a7722a193022accdb6b7c", size = 154090, upload-time = "2025-09-25T21:32:33.659Z" }, + { url = "https://files.pythonhosted.org/packages/73/e8/2bdf3ca2090f68bb3d75b44da7bbc71843b19c9f2b9cb9b0f4ab7a5a4329/pyyaml-6.0.3-cp313-cp313-win_arm64.whl", hash = "sha256:5498cd1645aa724a7c71c8f378eb29ebe23da2fc0d7a08071d89469bf1d2defb", size = 140246, upload-time = "2025-09-25T21:32:34.663Z" }, + { url = "https://files.pythonhosted.org/packages/9d/8c/f4bd7f6465179953d3ac9bc44ac1a8a3e6122cf8ada906b4f96c60172d43/pyyaml-6.0.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:8d1fab6bb153a416f9aeb4b8763bc0f22a5586065f86f7664fc23339fc1c1fac", size = 181814, upload-time = "2025-09-25T21:32:35.712Z" }, + { url = "https://files.pythonhosted.org/packages/bd/9c/4d95bb87eb2063d20db7b60faa3840c1b18025517ae857371c4dd55a6b3a/pyyaml-6.0.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:34d5fcd24b8445fadc33f9cf348c1047101756fd760b4dacb5c3e99755703310", size = 173809, upload-time = "2025-09-25T21:32:36.789Z" }, + { url = "https://files.pythonhosted.org/packages/92/b5/47e807c2623074914e29dabd16cbbdd4bf5e9b2db9f8090fa64411fc5382/pyyaml-6.0.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:501a031947e3a9025ed4405a168e6ef5ae3126c59f90ce0cd6f2bfc477be31b7", size = 766454, upload-time = "2025-09-25T21:32:37.966Z" }, + { url = "https://files.pythonhosted.org/packages/02/9e/e5e9b168be58564121efb3de6859c452fccde0ab093d8438905899a3a483/pyyaml-6.0.3-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:b3bc83488de33889877a0f2543ade9f70c67d66d9ebb4ac959502e12de895788", size = 836355, upload-time = "2025-09-25T21:32:39.178Z" }, + { url = "https://files.pythonhosted.org/packages/88/f9/16491d7ed2a919954993e48aa941b200f38040928474c9e85ea9e64222c3/pyyaml-6.0.3-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c458b6d084f9b935061bc36216e8a69a7e293a2f1e68bf956dcd9e6cbcd143f5", size = 794175, upload-time = "2025-09-25T21:32:40.865Z" }, + { url = "https://files.pythonhosted.org/packages/dd/3f/5989debef34dc6397317802b527dbbafb2b4760878a53d4166579111411e/pyyaml-6.0.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:7c6610def4f163542a622a73fb39f534f8c101d690126992300bf3207eab9764", size = 755228, upload-time = "2025-09-25T21:32:42.084Z" 
}, + { url = "https://files.pythonhosted.org/packages/d7/ce/af88a49043cd2e265be63d083fc75b27b6ed062f5f9fd6cdc223ad62f03e/pyyaml-6.0.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:5190d403f121660ce8d1d2c1bb2ef1bd05b5f68533fc5c2ea899bd15f4399b35", size = 789194, upload-time = "2025-09-25T21:32:43.362Z" }, + { url = "https://files.pythonhosted.org/packages/23/20/bb6982b26a40bb43951265ba29d4c246ef0ff59c9fdcdf0ed04e0687de4d/pyyaml-6.0.3-cp314-cp314-win_amd64.whl", hash = "sha256:4a2e8cebe2ff6ab7d1050ecd59c25d4c8bd7e6f400f5f82b96557ac0abafd0ac", size = 156429, upload-time = "2025-09-25T21:32:57.844Z" }, + { url = "https://files.pythonhosted.org/packages/f4/f4/a4541072bb9422c8a883ab55255f918fa378ecf083f5b85e87fc2b4eda1b/pyyaml-6.0.3-cp314-cp314-win_arm64.whl", hash = "sha256:93dda82c9c22deb0a405ea4dc5f2d0cda384168e466364dec6255b293923b2f3", size = 143912, upload-time = "2025-09-25T21:32:59.247Z" }, + { url = "https://files.pythonhosted.org/packages/7c/f9/07dd09ae774e4616edf6cda684ee78f97777bdd15847253637a6f052a62f/pyyaml-6.0.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:02893d100e99e03eda1c8fd5c441d8c60103fd175728e23e431db1b589cf5ab3", size = 189108, upload-time = "2025-09-25T21:32:44.377Z" }, + { url = "https://files.pythonhosted.org/packages/4e/78/8d08c9fb7ce09ad8c38ad533c1191cf27f7ae1effe5bb9400a46d9437fcf/pyyaml-6.0.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:c1ff362665ae507275af2853520967820d9124984e0f7466736aea23d8611fba", size = 183641, upload-time = "2025-09-25T21:32:45.407Z" }, + { url = "https://files.pythonhosted.org/packages/7b/5b/3babb19104a46945cf816d047db2788bcaf8c94527a805610b0289a01c6b/pyyaml-6.0.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6adc77889b628398debc7b65c073bcb99c4a0237b248cacaf3fe8a557563ef6c", size = 831901, upload-time = "2025-09-25T21:32:48.83Z" }, + { url = "https://files.pythonhosted.org/packages/8b/cc/dff0684d8dc44da4d22a13f35f073d558c268780ce3c6ba1b87055bb0b87/pyyaml-6.0.3-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a80cb027f6b349846a3bf6d73b5e95e782175e52f22108cfa17876aaeff93702", size = 861132, upload-time = "2025-09-25T21:32:50.149Z" }, + { url = "https://files.pythonhosted.org/packages/b1/5e/f77dc6b9036943e285ba76b49e118d9ea929885becb0a29ba8a7c75e29fe/pyyaml-6.0.3-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:00c4bdeba853cc34e7dd471f16b4114f4162dc03e6b7afcc2128711f0eca823c", size = 839261, upload-time = "2025-09-25T21:32:51.808Z" }, + { url = "https://files.pythonhosted.org/packages/ce/88/a9db1376aa2a228197c58b37302f284b5617f56a5d959fd1763fb1675ce6/pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:66e1674c3ef6f541c35191caae2d429b967b99e02040f5ba928632d9a7f0f065", size = 805272, upload-time = "2025-09-25T21:32:52.941Z" }, + { url = "https://files.pythonhosted.org/packages/da/92/1446574745d74df0c92e6aa4a7b0b3130706a4142b2d1a5869f2eaa423c6/pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:16249ee61e95f858e83976573de0f5b2893b3677ba71c9dd36b9cf8be9ac6d65", size = 829923, upload-time = "2025-09-25T21:32:54.537Z" }, + { url = "https://files.pythonhosted.org/packages/f0/7a/1c7270340330e575b92f397352af856a8c06f230aa3e76f86b39d01b416a/pyyaml-6.0.3-cp314-cp314t-win_amd64.whl", hash = "sha256:4ad1906908f2f5ae4e5a8ddfce73c320c2a1429ec52eafd27138b7f1cbe341c9", size = 174062, upload-time = "2025-09-25T21:32:55.767Z" }, + { url = 
"https://files.pythonhosted.org/packages/f1/12/de94a39c2ef588c7e6455cfbe7343d3b2dc9d6b6b2f40c4c6565744c873d/pyyaml-6.0.3-cp314-cp314t-win_arm64.whl", hash = "sha256:ebc55a14a21cb14062aa4162f906cd962b28e2e9ea38f9b4391244cd8de4ae0b", size = 149341, upload-time = "2025-09-25T21:32:56.828Z" }, +] + +[[package]] +name = "requests" +version = "2.32.5" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "certifi" }, + { name = "charset-normalizer" }, + { name = "idna" }, + { name = "urllib3" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/c9/74/b3ff8e6c8446842c3f5c837e9c3dfcfe2018ea6ecef224c710c85ef728f4/requests-2.32.5.tar.gz", hash = "sha256:dbba0bac56e100853db0ea71b82b4dfd5fe2bf6d3754a8893c3af500cec7d7cf", size = 134517, upload-time = "2025-08-18T20:46:02.573Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/1e/db/4254e3eabe8020b458f1a747140d32277ec7a271daf1d235b70dc0b4e6e3/requests-2.32.5-py3-none-any.whl", hash = "sha256:2462f94637a34fd532264295e186976db0f5d453d1cdd31473c85a6a161affb6", size = 64738, upload-time = "2025-08-18T20:46:00.542Z" }, +] + +[[package]] +name = "ruff" +version = "0.15.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/c8/39/5cee96809fbca590abea6b46c6d1c586b49663d1d2830a751cc8fc42c666/ruff-0.15.0.tar.gz", hash = "sha256:6bdea47cdbea30d40f8f8d7d69c0854ba7c15420ec75a26f463290949d7f7e9a", size = 4524893, upload-time = "2026-02-03T17:53:35.357Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/bc/88/3fd1b0aa4b6330d6aaa63a285bc96c9f71970351579152d231ed90914586/ruff-0.15.0-py3-none-linux_armv6l.whl", hash = "sha256:aac4ebaa612a82b23d45964586f24ae9bc23ca101919f5590bdb368d74ad5455", size = 10354332, upload-time = "2026-02-03T17:52:54.892Z" }, + { url = "https://files.pythonhosted.org/packages/72/f6/62e173fbb7eb75cc29fe2576a1e20f0a46f671a2587b5f604bfb0eaf5f6f/ruff-0.15.0-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:dcd4be7cc75cfbbca24a98d04d0b9b36a270d0833241f776b788d59f4142b14d", size = 10767189, upload-time = "2026-02-03T17:53:19.778Z" }, + { url = "https://files.pythonhosted.org/packages/99/e4/968ae17b676d1d2ff101d56dc69cf333e3a4c985e1ec23803df84fc7bf9e/ruff-0.15.0-py3-none-macosx_11_0_arm64.whl", hash = "sha256:d747e3319b2bce179c7c1eaad3d884dc0a199b5f4d5187620530adf9105268ce", size = 10075384, upload-time = "2026-02-03T17:53:29.241Z" }, + { url = "https://files.pythonhosted.org/packages/a2/bf/9843c6044ab9e20af879c751487e61333ca79a2c8c3058b15722386b8cae/ruff-0.15.0-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:650bd9c56ae03102c51a5e4b554d74d825ff3abe4db22b90fd32d816c2e90621", size = 10481363, upload-time = "2026-02-03T17:52:43.332Z" }, + { url = "https://files.pythonhosted.org/packages/55/d9/4ada5ccf4cd1f532db1c8d44b6f664f2208d3d93acbeec18f82315e15193/ruff-0.15.0-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:a6664b7eac559e3048223a2da77769c2f92b43a6dfd4720cef42654299a599c9", size = 10187736, upload-time = "2026-02-03T17:53:00.522Z" }, + { url = "https://files.pythonhosted.org/packages/86/e2/f25eaecd446af7bb132af0a1d5b135a62971a41f5366ff41d06d25e77a91/ruff-0.15.0-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6f811f97b0f092b35320d1556f3353bf238763420ade5d9e62ebd2b73f2ff179", size = 10968415, upload-time = "2026-02-03T17:53:15.705Z" }, + { url = 
"https://files.pythonhosted.org/packages/e7/dc/f06a8558d06333bf79b497d29a50c3a673d9251214e0d7ec78f90b30aa79/ruff-0.15.0-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:761ec0a66680fab6454236635a39abaf14198818c8cdf691e036f4bc0f406b2d", size = 11809643, upload-time = "2026-02-03T17:53:23.031Z" }, + { url = "https://files.pythonhosted.org/packages/dd/45/0ece8db2c474ad7df13af3a6d50f76e22a09d078af63078f005057ca59eb/ruff-0.15.0-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:940f11c2604d317e797b289f4f9f3fa5555ffe4fb574b55ed006c3d9b6f0eb78", size = 11234787, upload-time = "2026-02-03T17:52:46.432Z" }, + { url = "https://files.pythonhosted.org/packages/8a/d9/0e3a81467a120fd265658d127db648e4d3acfe3e4f6f5d4ea79fac47e587/ruff-0.15.0-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bcbca3d40558789126da91d7ef9a7c87772ee107033db7191edefa34e2c7f1b4", size = 11112797, upload-time = "2026-02-03T17:52:49.274Z" }, + { url = "https://files.pythonhosted.org/packages/b2/cb/8c0b3b0c692683f8ff31351dfb6241047fa873a4481a76df4335a8bff716/ruff-0.15.0-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:9a121a96db1d75fa3eb39c4539e607f628920dd72ff1f7c5ee4f1b768ac62d6e", size = 11033133, upload-time = "2026-02-03T17:53:33.105Z" }, + { url = "https://files.pythonhosted.org/packages/f8/5e/23b87370cf0f9081a8c89a753e69a4e8778805b8802ccfe175cc410e50b9/ruff-0.15.0-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:5298d518e493061f2eabd4abd067c7e4fb89e2f63291c94332e35631c07c3662", size = 10442646, upload-time = "2026-02-03T17:53:06.278Z" }, + { url = "https://files.pythonhosted.org/packages/e1/9a/3c94de5ce642830167e6d00b5c75aacd73e6347b4c7fc6828699b150a5ee/ruff-0.15.0-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:afb6e603d6375ff0d6b0cee563fa21ab570fd15e65c852cb24922cef25050cf1", size = 10195750, upload-time = "2026-02-03T17:53:26.084Z" }, + { url = "https://files.pythonhosted.org/packages/30/15/e396325080d600b436acc970848d69df9c13977942fb62bb8722d729bee8/ruff-0.15.0-py3-none-musllinux_1_2_i686.whl", hash = "sha256:77e515f6b15f828b94dc17d2b4ace334c9ddb7d9468c54b2f9ed2b9c1593ef16", size = 10676120, upload-time = "2026-02-03T17:53:09.363Z" }, + { url = "https://files.pythonhosted.org/packages/8d/c9/229a23d52a2983de1ad0fb0ee37d36e0257e6f28bfd6b498ee2c76361874/ruff-0.15.0-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:6f6e80850a01eb13b3e42ee0ebdf6e4497151b48c35051aab51c101266d187a3", size = 11201636, upload-time = "2026-02-03T17:52:57.281Z" }, + { url = "https://files.pythonhosted.org/packages/6f/b0/69adf22f4e24f3677208adb715c578266842e6e6a3cc77483f48dd999ede/ruff-0.15.0-py3-none-win32.whl", hash = "sha256:238a717ef803e501b6d51e0bdd0d2c6e8513fe9eec14002445134d3907cd46c3", size = 10465945, upload-time = "2026-02-03T17:53:12.591Z" }, + { url = "https://files.pythonhosted.org/packages/51/ad/f813b6e2c97e9b4598be25e94a9147b9af7e60523b0cb5d94d307c15229d/ruff-0.15.0-py3-none-win_amd64.whl", hash = "sha256:dd5e4d3301dc01de614da3cdffc33d4b1b96fb89e45721f1598e5532ccf78b18", size = 11564657, upload-time = "2026-02-03T17:52:51.893Z" }, + { url = "https://files.pythonhosted.org/packages/f6/b0/2d823f6e77ebe560f4e397d078487e8d52c1516b331e3521bc75db4272ca/ruff-0.15.0-py3-none-win_arm64.whl", hash = "sha256:c480d632cc0ca3f0727acac8b7d053542d9e114a462a145d0b00e7cd658c515a", size = 10865753, upload-time = "2026-02-03T17:53:03.014Z" }, +] + +[[package]] +name = "structlog" +version = "25.5.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = 
"https://files.pythonhosted.org/packages/ef/52/9ba0f43b686e7f3ddfeaa78ac3af750292662284b3661e91ad5494f21dbc/structlog-25.5.0.tar.gz", hash = "sha256:098522a3bebed9153d4570c6d0288abf80a031dfdb2048d59a49e9dc2190fc98", size = 1460830, upload-time = "2025-10-27T08:28:23.028Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/a8/45/a132b9074aa18e799b891b91ad72133c98d8042c70f6240e4c5f9dabee2f/structlog-25.5.0-py3-none-any.whl", hash = "sha256:a8453e9b9e636ec59bd9e79bbd4a72f025981b3ba0f5837aebf48f02f37a7f9f", size = 72510, upload-time = "2025-10-27T08:28:21.535Z" }, +] + +[[package]] +name = "typing-extensions" +version = "4.15.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/72/94/1a15dd82efb362ac84269196e94cf00f187f7ed21c242792a923cdb1c61f/typing_extensions-4.15.0.tar.gz", hash = "sha256:0cea48d173cc12fa28ecabc3b837ea3cf6f38c6d1136f85cbaaf598984861466", size = 109391, upload-time = "2025-08-25T13:49:26.313Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/18/67/36e9267722cc04a6b9f15c7f3441c2363321a3ea07da7ae0c0707beb2a9c/typing_extensions-4.15.0-py3-none-any.whl", hash = "sha256:f0fa19c6845758ab08074a0cfa8b7aecb71c999ca73d62883bc25cc018c4e548", size = 44614, upload-time = "2025-08-25T13:49:24.86Z" }, +] + +[[package]] +name = "typing-inspection" +version = "0.4.2" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "typing-extensions" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/55/e3/70399cb7dd41c10ac53367ae42139cf4b1ca5f36bb3dc6c9d33acdb43655/typing_inspection-0.4.2.tar.gz", hash = "sha256:ba561c48a67c5958007083d386c3295464928b01faa735ab8547c5692e87f464", size = 75949, upload-time = "2025-10-01T02:14:41.687Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/dc/9b/47798a6c91d8bdb567fe2698fe81e0c6b7cb7ef4d13da4114b41d239f65d/typing_inspection-0.4.2-py3-none-any.whl", hash = "sha256:4ed1cacbdc298c220f1bd249ed5287caa16f34d44ef4e9c3d0cbad5b521545e7", size = 14611, upload-time = "2025-10-01T02:14:40.154Z" }, +] + +[[package]] +name = "urllib3" +version = "2.6.3" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/c7/24/5f1b3bdffd70275f6661c76461e25f024d5a38a46f04aaca912426a2b1d3/urllib3-2.6.3.tar.gz", hash = "sha256:1b62b6884944a57dbe321509ab94fd4d3b307075e0c2eae991ac71ee15ad38ed", size = 435556, upload-time = "2026-01-07T16:24:43.925Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/39/08/aaaad47bc4e9dc8c725e68f9d04865dbcb2052843ff09c97b08904852d84/urllib3-2.6.3-py3-none-any.whl", hash = "sha256:bf272323e553dfb2e87d9bfd225ca7b0f467b919d7bbd355436d3fd37cb0acd4", size = 131584, upload-time = "2026-01-07T16:24:42.685Z" }, +] + +[[package]] +name = "xmodem" +version = "0.4.7" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/3d/17/fd6668a09afdc46c22990172b6f65e07dfb5bcf38960d063a7a887ca926d/xmodem-0.4.7.tar.gz", hash = "sha256:2f1068aa8676f0d1d112498b5786c4f8ea4f89d8f25d07d3a0f293cd21db1c35", size = 34052, upload-time = "2023-06-11T18:02:42.897Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/67/6b/371c541d67e926d7e87e47aaa7117f8d30ac13dd16e64f9ac2ffdcdead44/xmodem-0.4.7-py2.py3-none-any.whl", hash = "sha256:0842d2266175f01225053db721ea952b3f4b239cb3ace83c32b1daf90aa413af", size = 35626, upload-time = "2023-06-11T18:02:38.583Z" }, + { url = 
"https://files.pythonhosted.org/packages/8c/1a/e383b538450963953d4e652d2bce0a4cfbcf15ecc8c34ab998bd1329c95d/xmodem-0.4.7-py3-none-any.whl", hash = "sha256:e6a2c7608f7b187da786c47780f8407dbc4ac2d3dfeb34fe683cc19778f01360", size = 35689, upload-time = "2023-06-11T18:02:40.845Z" }, +] diff --git a/kernelci/openwrt-pipeline/Dockerfile b/kernelci/openwrt-pipeline/Dockerfile new file mode 100644 index 000000000..d7ea2f509 --- /dev/null +++ b/kernelci/openwrt-pipeline/Dockerfile @@ -0,0 +1,50 @@ +# ============================================================================= +# OpenWrt KernelCI Pipeline Services +# ============================================================================= +# +# This Dockerfile builds the container image for all pipeline services: +# - firmware_trigger: Watches for new firmware +# - test_scheduler: Assigns jobs to labs +# - health_scheduler: Monitors device health +# - results_collector: Aggregates test results +# +# ============================================================================= + +FROM python:3.13-slim + +# Labels +LABEL org.opencontainers.image.title="OpenWrt KernelCI Pipeline" +LABEL org.opencontainers.image.description="Pipeline services for OpenWrt firmware testing" +LABEL org.opencontainers.image.source="https://github.com/openwrt/openwrt-tests" + +# Set working directory +WORKDIR /app + +# Install system dependencies +RUN apt-get update && apt-get install -y --no-install-recommends \ + curl \ + git \ + && rm -rf /var/lib/apt/lists/* + +# Copy requirements first (for layer caching) +COPY requirements.txt . + +# Install Python dependencies +RUN pip install --no-cache-dir -r requirements.txt + +# Copy application code +COPY openwrt_pipeline/ ./openwrt_pipeline/ + +# Create non-root user +RUN useradd -m -s /bin/bash pipeline +RUN chown -R pipeline:pipeline /app + +# Switch to non-root user +USER pipeline + +# Environment +ENV PYTHONUNBUFFERED=1 +ENV PYTHONDONTWRITEBYTECODE=1 + +# Default command (overridden by docker-compose) +CMD ["python", "-m", "openwrt_pipeline"] diff --git a/kernelci/openwrt-pipeline/openwrt_pipeline/__init__.py b/kernelci/openwrt-pipeline/openwrt_pipeline/__init__.py new file mode 100644 index 000000000..bea1dfdff --- /dev/null +++ b/kernelci/openwrt-pipeline/openwrt_pipeline/__init__.py @@ -0,0 +1,12 @@ +""" +OpenWrt KernelCI Pipeline Services + +This package provides the pipeline services for OpenWrt firmware testing: + +- firmware_trigger: Watches for new firmware from various sources +- test_scheduler: Assigns test jobs to available labs +- health_scheduler: Monitors device health +- results_collector: Aggregates and stores test results +""" + +__version__ = "0.1.0" diff --git a/kernelci/openwrt-pipeline/openwrt_pipeline/api_client.py b/kernelci/openwrt-pipeline/openwrt_pipeline/api_client.py new file mode 100644 index 000000000..4ef9d5c8a --- /dev/null +++ b/kernelci/openwrt-pipeline/openwrt_pipeline/api_client.py @@ -0,0 +1,445 @@ +""" +KernelCI API Client for OpenWrt Pipeline. + +Provides async methods aligned with the KernelCI Maestro API: +- Node-based data model (jobs, tests are nodes with different 'kind') +- Pub/Sub event subscription +- Authentication via Bearer token + +API Reference: https://docs.kernelci.org/maestro/ + +Tree/Branch Structure: +- Tree: openwrt +- Branches: main (SNAPSHOT), openwrt-24.10, openwrt-23.05, etc. 
diff --git a/kernelci/openwrt-pipeline/openwrt_pipeline/__init__.py b/kernelci/openwrt-pipeline/openwrt_pipeline/__init__.py
new file mode 100644
index 000000000..bea1dfdff
--- /dev/null
+++ b/kernelci/openwrt-pipeline/openwrt_pipeline/__init__.py
@@ -0,0 +1,12 @@
+"""
+OpenWrt KernelCI Pipeline Services
+
+This package provides the pipeline services for OpenWrt firmware testing:
+
+- firmware_trigger: Watches for new firmware from various sources
+- test_scheduler: Assigns test jobs to available labs
+- health_scheduler: Monitors device health
+- results_collector: Aggregates and stores test results
+"""
+
+__version__ = "0.1.0"
diff --git a/kernelci/openwrt-pipeline/openwrt_pipeline/api_client.py b/kernelci/openwrt-pipeline/openwrt_pipeline/api_client.py
new file mode 100644
index 000000000..4ef9d5c8a
--- /dev/null
+++ b/kernelci/openwrt-pipeline/openwrt_pipeline/api_client.py
@@ -0,0 +1,445 @@
+"""
+KernelCI API Client for OpenWrt Pipeline.
+
+Provides async methods aligned with the KernelCI Maestro API:
+- Node-based data model (jobs, tests are nodes with different 'kind')
+- Pub/Sub event subscription (not implemented here; polling is used instead,
+  see the note at the end of this module)
+- Authentication via Bearer token
+
+API Reference: https://docs.kernelci.org/maestro/
+
+Tree/Branch Structure:
+- Tree: openwrt
+- Branches: main (SNAPSHOT), openwrt-24.10, openwrt-23.05, etc.
+- Versions fetched dynamically from downloads.openwrt.org/.versions.json
+"""
+
+import logging
+from datetime import datetime, timezone
+from typing import Any
+
+import httpx
+from tenacity import retry, stop_after_attempt, wait_exponential
+
+from .config import settings
+from .versions import OPENWRT_REPO, OPENWRT_TREE, version_to_branch
+
+logger = logging.getLogger(__name__)
+
+
+class APIError(Exception):
+    """API request error."""
+
+    def __init__(self, status_code: int, message: str, details: Any = None):
+        self.status_code = status_code
+        self.message = message
+        self.details = details
+        super().__init__(f"API Error {status_code}: {message}")
+
+
+class KernelCIClient:
+    """
+    Async client for the KernelCI Maestro API.
+
+    Uses the Node-based data model where all entities (checkouts, builds,
+    jobs, tests) are nodes with different 'kind' values forming a tree.
+    """
+
+    def __init__(
+        self,
+        base_url: str | None = None,
+        token: str | None = None,
+        timeout: float = 30.0,
+    ):
+        self.base_url = (base_url or settings.kci_api_url).rstrip("/")
+        self.token = token or settings.kci_api_token
+        self.timeout = timeout
+        self._client: httpx.AsyncClient | None = None
+
+    async def __aenter__(self):
+        await self.connect()
+        return self
+
+    async def __aexit__(self, exc_type, exc_val, exc_tb):
+        await self.close()
+
+    async def connect(self):
+        """Create HTTP client connection."""
+        if self._client is None:
+            self._client = httpx.AsyncClient(
+                base_url=self.base_url,
+                headers={
+                    "Authorization": f"Bearer {self.token}",
+                    "Content-Type": "application/json",
+                },
+                timeout=httpx.Timeout(self.timeout),
+            )
+
+    async def close(self):
+        """Close HTTP client connection."""
+        if self._client:
+            await self._client.aclose()
+            self._client = None
+
+    @property
+    def client(self) -> httpx.AsyncClient:
+        if self._client is None:
+            raise RuntimeError("Client not connected")
+        return self._client
+
+    @retry(
+        stop=stop_after_attempt(3),
+        wait=wait_exponential(multiplier=1, min=1, max=10),
+    )
+    async def _request(
+        self,
+        method: str,
+        path: str,
+        **kwargs,
+    ) -> dict[str, Any]:
+        """Make an API request with retry logic."""
+        response = await self.client.request(method, path, **kwargs)
+
+        if response.status_code >= 400:
+            try:
+                error_data = response.json()
+            except Exception:
+                error_data = {"detail": response.text}
+
+            raise APIError(
+                status_code=response.status_code,
+                message=error_data.get("detail", "Unknown error"),
+                details=error_data,
+            )
+
+        if response.status_code == 204:
+            return {}
+
+        return response.json()
+
+    # =========================================================================
+    # Node Operations (Core KernelCI API)
+    # =========================================================================
+
+    async def create_node(self, node: dict[str, Any]) -> dict[str, Any]:
+        """
+        Create a new node.
+
+        Nodes are the core data model in KernelCI. Kinds include:
+        - checkout: Source code checkout
+        - kbuild: Kernel/firmware build
+        - job: Test job container
+        - test: Individual test result
+
+        Args:
+            node: Node data including 'kind', 'name', 'path', etc.
+
+        Returns:
+            Created node with generated 'id'
+        """
+        # KernelCI API uses /node (singular) for creating nodes
+        return await self._request("POST", "/latest/node", json=node)
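+
+    # A minimal usage sketch (the field values below are illustrative, not
+    # taken from a live deployment):
+    #
+    #     async with KernelCIClient() as client:
+    #         checkout = await client.create_node({
+    #             "kind": "checkout",
+    #             "name": "openwrt",
+    #             "path": ["openwrt", "main"],
+    #             "state": "running",
+    #         })
+    #         node_id = checkout["id"]  # server-assigned ID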
+
+    async def get_node(self, node_id: str) -> dict[str, Any] | None:
+        """Get node by ID."""
+        try:
+            # KernelCI API uses /node/{id} (singular) for single node operations
+            return await self._request("GET", f"/latest/node/{node_id}")
+        except APIError as e:
+            if e.status_code == 404:
+                return None
+            raise
+
+    async def update_node(
+        self, node_id: str, updates: dict[str, Any]
+    ) -> dict[str, Any]:
+        """Update an existing node."""
+        # KernelCI API uses /node/{id} (singular) for single node operations
+        return await self._request("PUT", f"/latest/node/{node_id}", json=updates)
+
+    async def query_nodes(
+        self,
+        kind: str | None = None,
+        state: str | None = None,
+        parent: str | None = None,
+        name: str | None = None,
+        limit: int = 100,
+        offset: int = 0,
+        **filters,
+    ) -> list[dict[str, Any]]:
+        """
+        Query nodes with filters.
+
+        Args:
+            kind: Node kind (checkout, kbuild, job, test)
+            state: Node state (running, done, available)
+            parent: Parent node ID
+            name: Node name pattern
+            limit: Max results
+            offset: Pagination offset
+            **filters: Additional query filters
+
+        Returns:
+            List of matching nodes
+        """
+        params: dict[str, Any] = {"limit": limit, "offset": offset}
+        if kind:
+            params["kind"] = kind
+        if state:
+            params["state"] = state
+        if parent:
+            params["parent"] = parent
+        if name:
+            params["name"] = name
+        params.update(filters)
+
+        data = await self._request("GET", "/latest/nodes", params=params)
+        # API returns list directly or {"items": [...]}
+        if isinstance(data, list):
+            return data
+        return data.get("items", data.get("nodes", []))
+
+    # =========================================================================
+    # OpenWrt-Specific Operations (Built on Nodes)
+    # =========================================================================
+
+    async def create_firmware_node(
+        self,
+        name: str,
+        version: str,
+        target: str,
+        subtarget: str,
+        profile: str,
+        source: str,
+        artifacts: dict[str, str],
+        git_commit: str | None = None,
+        pr_number: int | None = None,
+    ) -> dict[str, Any]:
+        """
+        Create a firmware node (kind=kbuild for OpenWrt).
+
+        This represents a built firmware image available for testing.
+        Links to the openwrt tree with proper branch mapping.
+        """
+        branch = version_to_branch(version)
+
+        node = {
+            "kind": "kbuild",
+            "name": f"openwrt-{target}-{subtarget}-{profile}",
+            "path": [OPENWRT_TREE, branch, target, subtarget, profile],
+            "group": OPENWRT_TREE,  # Tree identifier for dashboard
+            "state": "available",
+            "result": "pass",
+            "data": {
+                "kernel_revision": {
+                    "tree": OPENWRT_TREE,
+                    "branch": branch,
+                    "commit": git_commit or "",
+                    "url": OPENWRT_REPO,
+                },
+                "openwrt_version": version,
+                "target": target,
+                "subtarget": subtarget,
+                "profile": profile,
+                "source": source,
+                "artifacts": artifacts,
+            },
+        }
+        if pr_number:
+            node["data"]["pr_number"] = pr_number
+
+        return await self.create_node(node)
+
+    async def create_test_job(
+        self,
+        firmware_node_id: str,
+        device_type: str,
+        test_plan: str,
+        tests: list[str] | None = None,
+        timeout: int = 1800,
+        test_type: str = "firmware",
+        firmware_url: str | None = None,
+        tests_subdir: str | None = None,
+    ) -> dict[str, Any]:
+        """
+        Create a test job node (kind=job).
+
+        Jobs are containers for test runs on a specific device.
+        Inherits tree/branch from parent firmware node.
+
+        Args:
+            firmware_node_id: Parent firmware node ID
+            device_type: Target device type
+            test_plan: Name of the test plan
+            tests: Specific test names to run (optional)
+            timeout: Job timeout in seconds
+            test_type: Type of tests (firmware, kselftest)
+            firmware_url: Custom firmware URL (for kselftest images)
+            tests_subdir: Subdirectory containing tests
+        """
+        # Get parent node to inherit tree/branch info
+        parent = await self.get_node(firmware_node_id)
+        parent_data = parent.get("data", {}) if parent else {}
+        kernel_rev = parent_data.get("kernel_revision", {})
+
+        # Use custom firmware URL or get from parent artifacts
+        if not firmware_url:
+            artifacts = parent_data.get("artifacts", {})
+            firmware_url = artifacts.get("sysupgrade") or artifacts.get("factory")
+
+        # Build path from parent path or construct from kernel_revision
+        parent_path = parent.get("path", []) if parent else []
+        if not parent_path:
+            # Fallback: construct from kernel_revision
+            branch = kernel_rev.get("branch", "main")
+            parent_path = [OPENWRT_TREE, branch]
+
+        # Extend path with job-specific info
+        job_path = parent_path + [test_type, device_type, test_plan]
+
+        node = {
+            "kind": "job",
+            "name": f"openwrt-{test_type}-{device_type}-{test_plan}",
+            "path": job_path,
+            "parent": firmware_node_id,
+            "group": OPENWRT_TREE,
+            "state": "available",  # Lab adapter will claim and transition to closing->done
+            "data": {
+                "kernel_revision": kernel_rev,
+                "device_type": device_type,
+                "test_plan": test_plan,
+                "test_type": test_type,
+                "tests": tests or [],
+                "timeout": timeout,
+                "runtime": "labgrid",  # Indicates labgrid runtime
+            },
+        }
+
+        # Add firmware URL if available
+        if firmware_url:
+            node["data"]["firmware_url"] = firmware_url
+
+        # Add tests subdirectory if specified
+        if tests_subdir:
+            node["data"]["tests_subdir"] = tests_subdir
+
+        return await self.create_node(node)
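+
+    # Sketch of the firmware -> job flow wired above (all values are
+    # hypothetical placeholders):
+    #
+    #     fw = await client.create_firmware_node(
+    #         name="openwrt-mediatek-filogic-openwrt_one",
+    #         version="SNAPSHOT",
+    #         target="mediatek",
+    #         subtarget="filogic",
+    #         profile="openwrt_one",
+    #         source="snapshot",
+    #         artifacts={"sysupgrade": "https://example.org/fw.bin"},
+    #     )
+    #     job = await client.create_test_job(
+    #         firmware_node_id=fw["id"],
+    #         device_type="openwrt_one",
+    #         test_plan="smoke",
+    #     )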
+ + Args: + firmware_node_id: Parent firmware node ID + device_type: Target device type + test_plan: Name of the test plan + tests: Specific test names to run (optional) + timeout: Job timeout in seconds + test_type: Type of tests (firmware, kselftest) + firmware_url: Custom firmware URL (for kselftest images) + tests_subdir: Subdirectory containing tests + """ + # Get parent node to inherit tree/branch info + parent = await self.get_node(firmware_node_id) + parent_data = parent.get("data", {}) if parent else {} + kernel_rev = parent_data.get("kernel_revision", {}) + + # Use custom firmware URL or get from parent artifacts + if not firmware_url: + artifacts = parent_data.get("artifacts", {}) + firmware_url = artifacts.get("sysupgrade") or artifacts.get("factory") + + # Build path from parent path or construct from kernel_revision + parent_path = parent.get("path", []) if parent else [] + if not parent_path: + # Fallback: construct from kernel_revision + branch = kernel_rev.get("branch", "main") + parent_path = [OPENWRT_TREE, branch] + + # Extend path with job-specific info + job_path = parent_path + [test_type, device_type, test_plan] + + node = { + "kind": "job", + "name": f"openwrt-{test_type}-{device_type}-{test_plan}", + "path": job_path, + "parent": firmware_node_id, + "group": OPENWRT_TREE, + "state": "available", # Lab adapter will claim and transition to closing->done + "data": { + "kernel_revision": kernel_rev, + "device_type": device_type, + "test_plan": test_plan, + "test_type": test_type, + "tests": tests or [], + "timeout": timeout, + "runtime": "labgrid", # Indicates labgrid runtime + }, + } + + # Add firmware URL if available + if firmware_url: + node["data"]["firmware_url"] = firmware_url + + # Add tests subdirectory if specified + if tests_subdir: + node["data"]["tests_subdir"] = tests_subdir + + return await self.create_node(node) + + async def claim_job( + self, + job_id: str, + lab_name: str, + device_id: str, + ) -> dict[str, Any]: + """ + Claim a job for execution (set state to 'running'). + + Returns updated node or raises APIError if already claimed. + """ + return await self.update_node( + job_id, + { + "state": "running", + "data": { + "lab_name": lab_name, + "device_id": device_id, + "started_at": datetime.utcnow().isoformat(), + }, + }, + ) + + async def complete_job( + self, + job_id: str, + result: str, + test_results: list[dict[str, Any]], + log_url: str | None = None, + ) -> dict[str, Any]: + """ + Mark a job as complete and submit test results. + + Args: + job_id: Job node ID + result: Overall result (pass, fail, incomplete) + test_results: List of individual test results + log_url: URL to console/test logs + """ + # Update job node + job_update = { + "state": "done", + "result": result, + "data": { + "completed_at": datetime.utcnow().isoformat(), + }, + } + if log_url: + job_update["data"]["log_url"] = log_url + + await self.update_node(job_id, job_update) + + # Create test nodes for each result + for test in test_results: + test_node = { + "kind": "test", + "name": test.get("name", "unknown"), + "parent": job_id, + "group": OPENWRT_TREE, + "state": "done", + "result": test.get("status", "fail"), + "data": { + "duration": test.get("duration", 0), + "error_message": test.get("error_message"), + }, + } + await self.create_node(test_node) + + return await self.get_node(job_id) + + async def get_pending_jobs( + self, + device_type: str | None = None, + limit: int = 10, + ) -> list[dict[str, Any]]: + """ + Get jobs available for execution. 
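+
+        A lab adapter would typically poll this in a loop, e.g.
+        (the device type is a placeholder):
+
+            jobs = await client.get_pending_jobs(device_type="openwrt_one")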
+ + Filters for jobs with: + - kind=job + - state=available + - runtime=labgrid + """ + filters = { + "kind": "job", + "state": "available", + "limit": limit, + } + if device_type: + filters["data.device_type"] = device_type + + return await self.query_nodes(**filters) + + # ========================================================================= + # GitHub Integration + # ========================================================================= + + async def post_github_status( + self, + repo: str, + commit_sha: str, + state: str, + context: str, + description: str, + target_url: str | None = None, + ) -> None: + """ + Post commit status to GitHub (if configured). + + This is typically handled by a separate GitHub integration service, + but we provide the method for completeness. + """ + # This would call GitHub API directly or through a webhook + logger.info(f"GitHub status: {repo}@{commit_sha[:7]} {state} - {description}") + + # Note: Pub/Sub events would be implemented via WebSocket or SSE + # connection to the KernelCI event endpoint. For OpenWrt testing, + # the polling approach in test_scheduler.py is sufficient. diff --git a/kernelci/openwrt-pipeline/openwrt_pipeline/asu_client.py b/kernelci/openwrt-pipeline/openwrt_pipeline/asu_client.py new file mode 100644 index 000000000..839ed8ed4 --- /dev/null +++ b/kernelci/openwrt-pipeline/openwrt_pipeline/asu_client.py @@ -0,0 +1,279 @@ +""" +ASU (Attended Sysupgrade) Client + +Client for requesting custom OpenWrt firmware images from sysupgrade.openwrt.org. +Allows building images with additional packages for specific test types +(e.g., kselftest images with bash, python3, etc.). + +API Documentation: https://sysupgrade.openwrt.org/docs +""" + +import asyncio +import logging +from dataclasses import dataclass, field + +import httpx +from tenacity import retry, stop_after_attempt, wait_exponential + +logger = logging.getLogger(__name__) + +# Default ASU server +ASU_API_URL = "https://sysupgrade.openwrt.org/api/v1" + +# Timeout for build requests (can take a while) +BUILD_TIMEOUT = 600 # 10 minutes +POLL_INTERVAL = 10 # seconds + + +@dataclass +class ImageBuildRequest: + """Request for a custom OpenWrt image build.""" + + target: str + subtarget: str + profile: str + version: str + packages: list[str] = field(default_factory=list) + filesystem: str | None = None + diff_packages: bool = False + + def to_dict(self) -> dict: + """Convert to API request format.""" + data = { + "target": self.target, + "subtarget": self.subtarget, + "profile": self.profile, + "version": self.version, + "packages": self.packages, + "diff_packages": self.diff_packages, + } + if self.filesystem: + data["filesystem"] = self.filesystem + return data + + +@dataclass +class ImageBuildResult: + """Result of a custom image build.""" + + request_hash: str + status: str # "queued", "building", "completed", "failed" + version: str + target: str + profile: str + + # Available when completed + sysupgrade_url: str | None = None + factory_url: str | None = None + manifest_url: str | None = None + sha256_sysupgrade: str | None = None + sha256_factory: str | None = None + + # Error info if failed + error: str | None = None + + @classmethod + def from_response(cls, data: dict) -> "ImageBuildResult": + """Create from API response.""" + # Extract image URLs from response + images = data.get("images", []) + sysupgrade_url = None + factory_url = None + sha256_sysupgrade = None + sha256_factory = None + + for img in images: + img_type = img.get("type", "") + if "sysupgrade" in 
img_type: + sysupgrade_url = img.get("url") + sha256_sysupgrade = img.get("sha256") + elif "factory" in img_type: + factory_url = img.get("url") + sha256_factory = img.get("sha256") + + return cls( + request_hash=data.get("request_hash", ""), + status=data.get("status", "unknown"), + version=data.get("version", ""), + target=data.get("target", ""), + profile=data.get("profile", ""), + sysupgrade_url=sysupgrade_url, + factory_url=factory_url, + manifest_url=data.get("manifest_url"), + sha256_sysupgrade=sha256_sysupgrade, + sha256_factory=sha256_factory, + error=data.get("error"), + ) + + +class ASUClient: + """ + Client for OpenWrt Attended Sysupgrade API. + + Requests custom firmware builds with specified packages. + Handles build queuing and polling for completion. + """ + + def __init__(self, api_url: str = ASU_API_URL): + self.api_url = api_url.rstrip("/") + self._client: httpx.AsyncClient | None = None + + async def __aenter__(self) -> "ASUClient": + await self.connect() + return self + + async def __aexit__(self, *args) -> None: + await self.close() + + async def connect(self) -> None: + """Create HTTP client.""" + self._client = httpx.AsyncClient( + timeout=httpx.Timeout(60.0), + follow_redirects=True, + ) + + async def close(self) -> None: + """Close HTTP client.""" + if self._client: + await self._client.aclose() + self._client = None + + @property + def client(self) -> httpx.AsyncClient: + if self._client is None: + raise RuntimeError("ASUClient not connected") + return self._client + + @retry( + stop=stop_after_attempt(3), + wait=wait_exponential(multiplier=1, min=2, max=10), + ) + async def _request( + self, method: str, path: str, **kwargs + ) -> dict: + """Make API request with retry.""" + url = f"{self.api_url}{path}" + response = await self.client.request(method, url, **kwargs) + + if response.status_code >= 400: + logger.error(f"ASU API error: {response.status_code} - {response.text}") + response.raise_for_status() + + return response.json() + + async def get_overview(self) -> dict: + """Get available versions, targets, and profiles.""" + return await self._request("GET", "/overview") + + async def request_build( + self, request: ImageBuildRequest + ) -> ImageBuildResult: + """ + Request a custom image build. + + Returns immediately with build status. If the image is cached, + returns completed status with URLs. Otherwise returns queued status. + """ + logger.info( + f"Requesting build: {request.target}/{request.subtarget}/{request.profile} " + f"v{request.version} with packages: {request.packages}" + ) + + data = await self._request("POST", "/build", json=request.to_dict()) + result = ImageBuildResult.from_response(data) + + logger.info(f"Build request {result.request_hash}: {result.status}") + return result + + async def get_build_status(self, request_hash: str) -> ImageBuildResult: + """Check status of a build request.""" + data = await self._request("GET", f"/build/{request_hash}") + return ImageBuildResult.from_response(data) + + async def build_and_wait( + self, + request: ImageBuildRequest, + timeout: float = BUILD_TIMEOUT, + poll_interval: float = POLL_INTERVAL, + ) -> ImageBuildResult: + """ + Request a build and wait for completion. 
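+
+        Example (a sketch; target/profile/version values are illustrative):
+
+            result = await client.build_and_wait(
+                ImageBuildRequest(
+                    target="ath79",
+                    subtarget="generic",
+                    profile="tplink_archer-c7-v2",
+                    version="23.05.3",
+                    packages=["bash", "python3"],
+                )
+            )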
+ + Args: + request: Build request specification + timeout: Maximum time to wait (seconds) + poll_interval: Time between status checks (seconds) + + Returns: + Completed build result with image URLs + + Raises: + TimeoutError: If build doesn't complete in time + RuntimeError: If build fails + """ + result = await self.request_build(request) + + if result.status == "completed": + return result + + if result.status == "failed": + raise RuntimeError(f"Build failed: {result.error}") + + # Poll for completion + elapsed = 0.0 + while elapsed < timeout: + await asyncio.sleep(poll_interval) + elapsed += poll_interval + + result = await self.get_build_status(result.request_hash) + logger.debug(f"Build {result.request_hash}: {result.status}") + + if result.status == "completed": + logger.info( + f"Build completed: {result.sysupgrade_url or result.factory_url}" + ) + return result + + if result.status == "failed": + raise RuntimeError(f"Build failed: {result.error}") + + raise TimeoutError( + f"Build {result.request_hash} did not complete within {timeout}s" + ) + + +# Convenience function for one-off builds +async def build_custom_image( + target: str, + subtarget: str, + profile: str, + version: str, + packages: list[str], + api_url: str = ASU_API_URL, + timeout: float = BUILD_TIMEOUT, +) -> ImageBuildResult: + """ + Build a custom OpenWrt image with specified packages. + + Args: + target: Hardware target (e.g., "ath79") + subtarget: Subtarget (e.g., "generic") + profile: Device profile (e.g., "tplink_archer-c7-v2") + version: OpenWrt version (e.g., "SNAPSHOT", "23.05.3") + packages: List of packages to include + api_url: ASU API URL + timeout: Build timeout in seconds + + Returns: + Build result with image URLs + """ + request = ImageBuildRequest( + target=target, + subtarget=subtarget, + profile=profile, + version=version, + packages=packages, + ) + + async with ASUClient(api_url) as client: + return await client.build_and_wait(request, timeout=timeout) diff --git a/kernelci/openwrt-pipeline/openwrt_pipeline/config.py b/kernelci/openwrt-pipeline/openwrt_pipeline/config.py new file mode 100644 index 000000000..d8a60e30b --- /dev/null +++ b/kernelci/openwrt-pipeline/openwrt_pipeline/config.py @@ -0,0 +1,120 @@ +""" +Configuration management for OpenWrt KernelCI Pipeline. + +Loads configuration from: +1. Environment variables +2. 
YAML configuration file (config/pipeline.yaml) +""" + +import os +from pathlib import Path +from typing import Any + +import yaml +from pydantic import Field +from pydantic_settings import BaseSettings + + +class Settings(BaseSettings): + """Pipeline service settings from environment variables.""" + + # API Configuration + kci_api_url: str = Field( + default="http://kernelci-api:8001", + description="KernelCI API URL", + ) + kci_api_token: str = Field( + default="", + description="KernelCI API authentication token", + ) + + # MinIO Configuration + minio_endpoint: str = Field(default="minio:9000") + minio_access_key: str = Field(default="") + minio_secret_key: str = Field(default="") + minio_secure: bool = Field(default=False) + + # GitHub Configuration + github_token: str | None = Field(default=None) + github_repo: str = Field(default="openwrt/openwrt") + + # Health Check Configuration + health_check_interval: int = Field( + default=86400, + description="Health check interval in seconds", + ) + + # Logging + log_level: str = Field(default="INFO") + + # Config file path + config_file: Path = Field(default=Path("/app/config/pipeline.yaml")) + + class Config: + env_prefix = "" + case_sensitive = False + + +# Global settings instance +settings = Settings() + + +def load_pipeline_config() -> dict[str, Any]: + """Load pipeline configuration from YAML file.""" + config_path = settings.config_file + + if not config_path.exists(): + # Try alternative paths + alt_paths = [ + Path("config/pipeline.yaml"), + Path("../config/pipeline.yaml"), + Path("/app/config/pipeline.yaml"), + ] + for alt in alt_paths: + if alt.exists(): + config_path = alt + break + + if not config_path.exists(): + raise FileNotFoundError(f"Pipeline config not found: {config_path}") + + with open(config_path) as f: + config = yaml.safe_load(f) + + # Substitute environment variables in config + config = _substitute_env_vars(config) + + return config + + +def _substitute_env_vars(obj: Any) -> Any: + """Recursively substitute ${VAR} patterns with environment variables.""" + if isinstance(obj, str): + if obj.startswith("${") and obj.endswith("}"): + var_name = obj[2:-1] + # Handle default values: ${VAR:-default} + if ":-" in var_name: + var_name, default = var_name.split(":-", 1) + return os.environ.get(var_name, default) + return os.environ.get(var_name, obj) + return obj + elif isinstance(obj, dict): + return {k: _substitute_env_vars(v) for k, v in obj.items()} + elif isinstance(obj, list): + return [_substitute_env_vars(item) for item in obj] + return obj + + +def get_test_plan(config: dict, plan_name: str) -> dict | None: + """Get a test plan configuration by name.""" + return config.get("test_plans", {}).get(plan_name) + + +def get_device_type(config: dict, device_name: str) -> dict | None: + """Get device type configuration by name.""" + return config.get("device_types", {}).get(device_name) + + +def get_firmware_source(config: dict, source_name: str) -> dict | None: + """Get firmware source configuration by name.""" + return config.get("firmware_sources", {}).get(source_name) diff --git a/kernelci/openwrt-pipeline/openwrt_pipeline/firmware_sources/__init__.py b/kernelci/openwrt-pipeline/openwrt_pipeline/firmware_sources/__init__.py new file mode 100644 index 000000000..033d5b182 --- /dev/null +++ b/kernelci/openwrt-pipeline/openwrt_pipeline/firmware_sources/__init__.py @@ -0,0 +1,21 @@ +""" +Firmware source watchers for OpenWrt KernelCI. 
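+
+All sources implement the ``FirmwareSource`` interface defined in base.py,
+so a consumer can drive any of them the same way, e.g. (a sketch):
+
+    async for fw in source.scan():
+        await source.download_artifact(fw, "sysupgrade", "/tmp/firmware.bin")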
+ +This module provides watchers for different firmware sources: +- official: Official OpenWrt releases from downloads.openwrt.org +- github_pr: Firmware artifacts from GitHub Pull Requests +- custom: Custom firmware uploads via API +- buildbot: Integration with OpenWrt Buildbot +""" + +from .base import FirmwareSource +from .custom import CustomFirmwareUploader +from .github_pr import GitHubPRSource +from .official import OfficialReleaseSource + +__all__ = [ + "FirmwareSource", + "OfficialReleaseSource", + "GitHubPRSource", + "CustomFirmwareUploader", +] diff --git a/kernelci/openwrt-pipeline/openwrt_pipeline/firmware_sources/base.py b/kernelci/openwrt-pipeline/openwrt_pipeline/firmware_sources/base.py new file mode 100644 index 000000000..43540a273 --- /dev/null +++ b/kernelci/openwrt-pipeline/openwrt_pipeline/firmware_sources/base.py @@ -0,0 +1,119 @@ +""" +Base class for firmware sources. +""" + +import hashlib +import logging +from abc import ABC, abstractmethod +from typing import AsyncIterator + +from ..models import Firmware + +logger = logging.getLogger(__name__) + + +def generate_firmware_id( + *parts: str, + hash_input: str | None = None, +) -> str: + """ + Generate a unique firmware ID from parts. + + Args: + *parts: ID components (e.g., "openwrt", version, target) + hash_input: Optional string to hash for uniqueness suffix + + Returns: + Colon-separated ID with optional hash suffix + """ + base_id = ":".join(p for p in parts if p) + if hash_input: + short_hash = hashlib.sha256(hash_input.encode()).hexdigest()[:8] + return f"{base_id}:{short_hash}" + return base_id + + +def detect_firmware_type(filename: str) -> str | None: + """ + Detect firmware type from filename. + + Returns: + Firmware type (sysupgrade, factory, initramfs) or None + """ + filename_lower = filename.lower() + + if "sysupgrade" in filename_lower: + return "sysupgrade" + elif "factory" in filename_lower: + return "factory" + elif "initramfs" in filename_lower or "kernel" in filename_lower: + return "initramfs" + elif filename_lower.endswith((".bin", ".img", ".itb")): + return "unknown" + return None + + +class FirmwareSource(ABC): + """ + Abstract base class for firmware sources. + + Subclasses implement specific source types (official, PR, custom, etc.) + """ + + def __init__(self, name: str, config: dict): + """ + Initialize firmware source. + + Args: + name: Source identifier + config: Source configuration from pipeline.yaml + """ + self.name = name + self.config = config + self.enabled = config.get("enabled", True) + + @abstractmethod + async def scan(self) -> AsyncIterator[Firmware]: + """ + Scan for new firmware. + + Yields: + Firmware objects for each discovered firmware image + """ + pass + + @abstractmethod + async def download_artifact( + self, + firmware: Firmware, + artifact_type: str, + destination: str, + ) -> str: + """ + Download a firmware artifact. + + Args: + firmware: Firmware metadata + artifact_type: Type of artifact (sysupgrade, factory, etc.) 
+ destination: Local path to save the artifact + + Returns: + Path to the downloaded file + """ + pass + + def is_enabled(self) -> bool: + """Check if this source is enabled.""" + return self.enabled + + def get_check_interval(self) -> int: + """Get the interval between scans in seconds.""" + return self.config.get("check_interval", 3600) + + async def initialize(self) -> None: + """Initialize the source (called once at startup).""" + logger.info(f"Initializing firmware source: {self.name}") + + async def cleanup(self) -> None: + """Cleanup resources (called at shutdown).""" + logger.info(f"Cleaning up firmware source: {self.name}") diff --git a/kernelci/openwrt-pipeline/openwrt_pipeline/firmware_sources/custom.py b/kernelci/openwrt-pipeline/openwrt_pipeline/firmware_sources/custom.py new file mode 100644 index 000000000..2f075526d --- /dev/null +++ b/kernelci/openwrt-pipeline/openwrt_pipeline/firmware_sources/custom.py @@ -0,0 +1,274 @@ +""" +Custom firmware upload handler. + +Provides a FastAPI router for uploading custom firmware builds +for testing on the OpenWrt KernelCI infrastructure. +""" + +import hashlib +import io +import logging +from datetime import datetime +from pathlib import Path +from typing import Annotated + +from fastapi import APIRouter, File, Form, HTTPException, UploadFile +from minio import Minio +from pydantic import BaseModel + +from ..config import settings +from ..models import Firmware, FirmwareArtifacts +from ..models import FirmwareSource as FirmwareSourceEnum +from .base import detect_firmware_type as _detect_firmware_type + +logger = logging.getLogger(__name__) + +# Create router for custom firmware uploads +router = APIRouter(prefix="/api/v1/firmware", tags=["firmware"]) + + +class FirmwareUploadResponse(BaseModel): + """Response model for firmware upload.""" + + firmware_id: str + status: str + message: str + artifacts: dict[str, str] + + +class CustomFirmwareUploader: + """ + Handler for custom firmware uploads. + + Stores firmware in MinIO and creates firmware entries in the API. 
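+
+    Example (a sketch; the size limit is illustrative):
+
+        uploader = CustomFirmwareUploader({"max_file_size": 50 * 1024 * 1024})
+        uploader.initialize()  # requires MinIO settings to be configured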
+ """ + + def __init__(self, config: dict): + self.config = config + self.max_file_size = config.get("max_file_size", 100 * 1024 * 1024) # 100MB + self.allowed_extensions = config.get( + "allowed_extensions", [".bin", ".img", ".itb", ".gz"] + ) + self._minio: Minio | None = None + + def initialize(self) -> None: + """Initialize MinIO client.""" + self._minio = Minio( + settings.minio_endpoint, + access_key=settings.minio_access_key, + secret_key=settings.minio_secret_key, + secure=settings.minio_secure, + ) + + # Ensure bucket exists + bucket = "openwrt-firmware" + if not self._minio.bucket_exists(bucket): + self._minio.make_bucket(bucket) + logger.info(f"Created MinIO bucket: {bucket}") + + @property + def minio(self) -> Minio: + """Get MinIO client.""" + if self._minio is None: + raise RuntimeError("Uploader not initialized") + return self._minio + + def validate_file(self, filename: str, content: bytes) -> None: + """Validate uploaded file.""" + # Check file size + if len(content) > self.max_file_size: + raise ValueError( + f"File too large: {len(content)} bytes (max: {self.max_file_size})" + ) + + # Check extension + ext = Path(filename).suffix.lower() + if ext not in self.allowed_extensions: + raise ValueError( + f"Invalid file extension: {ext} (allowed: {self.allowed_extensions})" + ) + + def detect_firmware_type(self, filename: str) -> str: + """Detect firmware type from filename.""" + detected = _detect_firmware_type(filename) + return detected if detected and detected != "unknown" else "sysupgrade" + + async def upload_firmware( + self, + file: UploadFile, + target: str, + subtarget: str, + profile: str, + version: str = "custom", + git_commit: str | None = None, + description: str | None = None, + ) -> tuple[Firmware, dict[str, str]]: + """ + Upload custom firmware to storage. 
+ + Args: + file: Uploaded file + target: OpenWrt target (e.g., ath79) + subtarget: OpenWrt subtarget (e.g., generic) + profile: Device profile name + version: Version string (default: "custom") + git_commit: Git commit hash + description: Optional description + + Returns: + Tuple of (Firmware object, artifact URLs) + """ + # Read file content + content = await file.read() + filename = file.filename or "firmware.bin" + + # Validate + self.validate_file(filename, content) + + # Calculate checksum + sha256 = hashlib.sha256(content).hexdigest() + + # Determine storage path + firmware_type = self.detect_firmware_type(filename) + storage_path = f"custom/{target}/{subtarget}/{profile}/{sha256[:8]}/{filename}" + + # Upload to MinIO + logger.info(f"Uploading {filename} to MinIO: {storage_path}") + + self.minio.put_object( + bucket_name="openwrt-firmware", + object_name=storage_path, + data=io.BytesIO(content), + length=len(content), + content_type="application/octet-stream", + ) + + # Generate public URL + # Note: In production, this would use a proper URL or presigned URL + artifact_url = ( + f"http://{settings.minio_endpoint}/openwrt-firmware/{storage_path}" + ) + + # Create artifact mapping + artifacts = FirmwareArtifacts() + setattr(artifacts, firmware_type, artifact_url) + setattr(artifacts, f"{firmware_type}_sha256", sha256) + + # Generate firmware ID + firmware_id = self._generate_firmware_id( + target=target, + subtarget=subtarget, + profile=profile, + sha256=sha256, + ) + + # Create firmware object + firmware = Firmware( + id=firmware_id, + source=FirmwareSourceEnum.CUSTOM, + version=version, + target=target, + subtarget=subtarget, + profile=profile, + git_commit_hash=git_commit, + artifacts=artifacts, + sha256=sha256, + file_size=len(content), + description=description, + build_time=datetime.utcnow(), + ) + + return firmware, {firmware_type: artifact_url} + + def _generate_firmware_id( + self, + target: str, + subtarget: str, + profile: str, + sha256: str, + ) -> str: + """Generate unique firmware ID.""" + return f"openwrt:custom:{target}:{subtarget}:{profile}:{sha256[:12]}" + + +# Global uploader instance (initialized by firmware_trigger service) +_uploader: CustomFirmwareUploader | None = None + + +def get_uploader() -> CustomFirmwareUploader: + """Get the custom uploader instance.""" + if _uploader is None: + raise HTTPException(status_code=503, detail="Upload service not initialized") + return _uploader + + +def init_uploader(config: dict) -> CustomFirmwareUploader: + """Initialize the custom uploader.""" + global _uploader + _uploader = CustomFirmwareUploader(config) + _uploader.initialize() + return _uploader + + +# ============================================================================= +# FastAPI Routes +# ============================================================================= + + +@router.post("/upload", response_model=FirmwareUploadResponse) +async def upload_firmware( + firmware_file: Annotated[UploadFile, File(description="Firmware file to upload")], + target: Annotated[str, Form(description="OpenWrt target (e.g., ath79)")], + subtarget: Annotated[str, Form(description="OpenWrt subtarget (e.g., generic)")], + profile: Annotated[str, Form(description="Device profile name")], + version: Annotated[str, Form(description="Version string")] = "custom", + git_commit: Annotated[str | None, Form(description="Git commit hash")] = None, + description: Annotated[str | None, Form(description="Description")] = None, +) -> FirmwareUploadResponse: + """ + Upload custom firmware 
for testing. + + This endpoint allows developers to upload their own firmware builds + for testing on the OpenWrt KernelCI infrastructure. + + The firmware will be stored and test jobs will be automatically + scheduled for compatible devices. + """ + uploader = get_uploader() + + try: + firmware, artifacts = await uploader.upload_firmware( + file=firmware_file, + target=target, + subtarget=subtarget, + profile=profile, + version=version, + git_commit=git_commit, + description=description, + ) + + # TODO: Submit firmware to API and trigger job scheduling + + return FirmwareUploadResponse( + firmware_id=firmware.id, + status="accepted", + message="Firmware uploaded successfully. Test jobs will be scheduled.", + artifacts=artifacts, + ) + + except ValueError as e: + raise HTTPException(status_code=400, detail=str(e)) + except Exception as e: + logger.exception(f"Error uploading firmware: {e}") + raise HTTPException(status_code=500, detail="Internal server error") + + +@router.get("/upload/status/{firmware_id}") +async def get_upload_status(firmware_id: str) -> dict: + """Get the status of an uploaded firmware and its test jobs.""" + # TODO: Query API for firmware status and jobs + return { + "firmware_id": firmware_id, + "status": "pending", + "jobs": [], + } diff --git a/kernelci/openwrt-pipeline/openwrt_pipeline/firmware_sources/github_pr.py b/kernelci/openwrt-pipeline/openwrt_pipeline/firmware_sources/github_pr.py new file mode 100644 index 000000000..b9f1c745e --- /dev/null +++ b/kernelci/openwrt-pipeline/openwrt_pipeline/firmware_sources/github_pr.py @@ -0,0 +1,330 @@ +""" +GitHub Pull Request firmware source. + +Watches GitHub PRs for firmware artifacts from CI builds. +Supports triggering tests on PRs with specific labels. +""" + +import io +import logging +import zipfile +from pathlib import Path +from typing import AsyncIterator + +import httpx +from github import Auth, Github +from github.PullRequest import PullRequest +from github.WorkflowRun import WorkflowRun + +from ..config import settings +from ..models import Firmware, FirmwareArtifacts +from ..models import FirmwareSource as FirmwareSourceEnum +from .base import FirmwareSource, detect_firmware_type, generate_firmware_id + +logger = logging.getLogger(__name__) + + +class GitHubPRSource(FirmwareSource): + """ + Firmware source for GitHub Pull Request CI artifacts. + + Monitors PRs with specific labels and extracts firmware + artifacts from successful workflow runs. 
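+
+    Example config (keys mirror the __init__ defaults below):
+
+        {
+            "repository": "openwrt/openwrt",
+            "trigger_labels": ["ci-test-requested"],
+            "workflow_name": "Build",
+            "artifact_pattern": "openwrt-*",
+        }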
+ """ + + def __init__(self, name: str, config: dict): + super().__init__(name, config) + self.repository = config.get("repository", "openwrt/openwrt") + self.trigger_labels = config.get("trigger_labels", ["ci-test-requested"]) + self.workflow_name = config.get("workflow_name", "Build") + self.artifact_pattern = config.get("artifact_pattern", "openwrt-*") + self._github: Github | None = None + self._http_client: httpx.AsyncClient | None = None + + async def initialize(self) -> None: + """Initialize GitHub client.""" + await super().initialize() + + token = self.config.get("token") or settings.github_token + if not token: + logger.warning("No GitHub token configured, PR source will be limited") + self.enabled = False + return + + auth = Auth.Token(token) + self._github = Github(auth=auth) + self._http_client = httpx.AsyncClient( + headers={"Authorization": f"token {token}"}, + timeout=httpx.Timeout(60.0), + follow_redirects=True, + ) + + async def cleanup(self) -> None: + """Close clients.""" + if self._github: + self._github.close() + if self._http_client: + await self._http_client.aclose() + await super().cleanup() + + @property + def github(self) -> Github: + """Get GitHub client.""" + if self._github is None: + raise RuntimeError("Source not initialized or no token configured") + return self._github + + @property + def client(self) -> httpx.AsyncClient: + """Get HTTP client.""" + if self._http_client is None: + raise RuntimeError("Source not initialized") + return self._http_client + + async def scan(self) -> AsyncIterator[Firmware]: + """ + Scan PRs with trigger labels for firmware artifacts. + + Yields firmware for each PR that: + 1. Has a trigger label + 2. Has a successful workflow run + 3. Has firmware artifacts + """ + if not self.enabled: + return + + logger.info(f"Scanning GitHub PRs in {self.repository}") + + repo = self.github.get_repo(self.repository) + + # Get open PRs with trigger labels + for label in self.trigger_labels: + try: + pulls = repo.get_pulls(state="open") + for pr in pulls: + pr_labels = [lbl.name for lbl in pr.labels] + if label in pr_labels: + async for firmware in self._process_pr(pr): + yield firmware + except Exception as e: + logger.error(f"Error scanning PRs with label '{label}': {e}") + + async def _process_pr(self, pr: PullRequest) -> AsyncIterator[Firmware]: + """Process a single PR for firmware artifacts.""" + logger.info(f"Processing PR #{pr.number}: {pr.title}") + + # Find successful workflow runs for this PR + try: + runs = pr.head.repo.get_workflow_runs( + branch=pr.head.ref, + status="success", + ) + except Exception as e: + logger.error(f"Error getting workflow runs for PR #{pr.number}: {e}") + return + + # Get the most recent successful run + latest_run: WorkflowRun | None = None + for run in runs: + if run.name == self.workflow_name or self.workflow_name in (run.name or ""): + latest_run = run + break + + if not latest_run: + logger.debug(f"No successful workflow runs found for PR #{pr.number}") + return + + # Get artifacts from the run + try: + artifacts = latest_run.get_artifacts() + except Exception as e: + logger.error(f"Error getting artifacts for run {latest_run.id}: {e}") + return + + for artifact in artifacts: + # Check if artifact matches our pattern + if not self._matches_pattern(artifact.name, self.artifact_pattern): + continue + + # Parse target info from artifact name + target_info = self._parse_artifact_name(artifact.name) + if not target_info: + continue + + firmware = self._create_firmware( + pr=pr, + run=latest_run, + 
artifact=artifact, + target_info=target_info, + ) + if firmware: + yield firmware + + def _matches_pattern(self, name: str, pattern: str) -> bool: + """Check if artifact name matches pattern (simple glob).""" + if pattern.endswith("*"): + return name.startswith(pattern[:-1]) + return name == pattern + + def _parse_artifact_name(self, name: str) -> dict | None: + """ + Parse target/subtarget from artifact name. + + Expected format: openwrt-{target}-{subtarget}[-optional] + Example: openwrt-ath79-generic, openwrt-mediatek-filogic + """ + parts = name.split("-") + if len(parts) < 3 or parts[0] != "openwrt": + return None + + return { + "target": parts[1], + "subtarget": parts[2] if len(parts) > 2 else "generic", + } + + def _create_firmware( + self, + pr: PullRequest, + run: WorkflowRun, + artifact, + target_info: dict, + ) -> Firmware | None: + """Create a Firmware object for a PR artifact.""" + firmware_id = self._generate_firmware_id( + pr_number=pr.number, + target=target_info["target"], + subtarget=target_info["subtarget"], + commit=pr.head.sha, + ) + + return Firmware( + id=firmware_id, + source=FirmwareSourceEnum.PR, + source_url=pr.html_url, + source_ref=f"PR #{pr.number}", + version=f"pr-{pr.number}", + target=target_info["target"], + subtarget=target_info["subtarget"], + profile="*", # PR builds may contain multiple profiles + git_repository_url=pr.base.repo.clone_url, + git_commit_hash=pr.head.sha, + git_branch=pr.head.ref, + artifacts=FirmwareArtifacts(), # Will be populated on download + description=f"PR #{pr.number}: {pr.title}", + build_time=run.created_at, + ) + + def _generate_firmware_id( + self, + pr_number: int, + target: str, + subtarget: str, + commit: str, + ) -> str: + """Generate a unique firmware ID for a PR.""" + hash_input = f"pr:{pr_number}:{target}:{subtarget}:{commit}" + return generate_firmware_id( + "openwrt", f"pr-{pr_number}", target, subtarget, hash_input=hash_input + ) + + async def download_artifact( + self, + firmware: Firmware, + artifact_type: str, + destination: str, + ) -> str: + """ + Download PR artifact and extract firmware files. + + GitHub artifacts are ZIP files containing the actual firmware. + """ + # For PR artifacts, we need to download the whole artifact ZIP + # and extract the relevant firmware file + raise NotImplementedError( + "PR artifact download requires artifact_id. " + "Use download_pr_artifact() directly." + ) + + async def download_pr_artifact( + self, + pr_number: int, + artifact_id: int, + destination: str, + ) -> dict[str, str]: + """ + Download and extract a PR artifact ZIP. 
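+
+        Example (a sketch; the IDs are placeholders):
+
+            files = await source.download_pr_artifact(
+                pr_number=1234,
+                artifact_id=987654321,
+                destination="/tmp/pr-1234",
+            )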
+ + Args: + pr_number: Pull request number + artifact_id: GitHub artifact ID + destination: Directory to extract files to + + Returns: + Dict mapping firmware types to file paths + """ + repo = self.github.get_repo(self.repository) + + # Get artifact download URL + artifact = repo.get_artifact(artifact_id) + download_url = artifact.archive_download_url + + logger.info(f"Downloading artifact {artifact.name} from PR #{pr_number}") + + # Download the ZIP file + response = await self.client.get(download_url) + response.raise_for_status() + + # Extract to destination + dest_path = Path(destination) + dest_path.mkdir(parents=True, exist_ok=True) + + extracted_files = {} + with zipfile.ZipFile(io.BytesIO(response.content)) as zf: + for file_info in zf.filelist: + if file_info.is_dir(): + continue + + filename = Path(file_info.filename).name + file_type = self._detect_firmware_type(filename) + + if file_type: + extract_path = dest_path / filename + with open(extract_path, "wb") as f: + f.write(zf.read(file_info.filename)) + extracted_files[file_type] = str(extract_path) + logger.info(f"Extracted {file_type}: {extract_path}") + + return extracted_files + + def _detect_firmware_type(self, filename: str) -> str | None: + """Detect firmware type from filename.""" + return detect_firmware_type(filename) + + async def add_pr_comment( + self, + pr_number: int, + comment: str, + ) -> None: + """Add a comment to a PR with test results.""" + repo = self.github.get_repo(self.repository) + pr = repo.get_pull(pr_number) + pr.create_issue_comment(comment) + + async def update_pr_status( + self, + pr_number: int, + sha: str, + state: str, + context: str, + description: str, + target_url: str | None = None, + ) -> None: + """Update PR commit status.""" + repo = self.github.get_repo(self.repository) + commit = repo.get_commit(sha) + commit.create_status( + state=state, + context=context, + description=description, + target_url=target_url, + ) diff --git a/kernelci/openwrt-pipeline/openwrt_pipeline/firmware_sources/official.py b/kernelci/openwrt-pipeline/openwrt_pipeline/firmware_sources/official.py new file mode 100644 index 000000000..0a346fa11 --- /dev/null +++ b/kernelci/openwrt-pipeline/openwrt_pipeline/firmware_sources/official.py @@ -0,0 +1,300 @@ +""" +Official OpenWrt release firmware source. + +Watches downloads.openwrt.org for new firmware images across: +- Snapshots (development builds) +- Stable releases +- Old stable releases +""" + +import hashlib +import logging +from pathlib import Path +from typing import AsyncIterator + +import httpx + +from ..models import Firmware, FirmwareArtifacts +from ..models import FirmwareSource as FirmwareSourceEnum +from .base import FirmwareSource, generate_firmware_id + +logger = logging.getLogger(__name__) + + +class OfficialReleaseSource(FirmwareSource): + """ + Firmware source for official OpenWrt releases. + + Scans downloads.openwrt.org for profiles.json files and extracts + firmware metadata for each supported device profile. 
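+
+    A profiles.json URL follows this layout (version and target here
+    are examples):
+
+        https://downloads.openwrt.org/releases/23.05.3/targets/ath79/generic/profiles.json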
+ """ + + def __init__(self, name: str, config: dict): + super().__init__(name, config) + self.sources = config.get("sources", {}) + self._http_client: httpx.AsyncClient | None = None + + async def initialize(self) -> None: + """Initialize HTTP client.""" + await super().initialize() + self._http_client = httpx.AsyncClient( + timeout=httpx.Timeout(30.0), + follow_redirects=True, + ) + + async def cleanup(self) -> None: + """Close HTTP client.""" + if self._http_client: + await self._http_client.aclose() + await super().cleanup() + + @property + def client(self) -> httpx.AsyncClient: + """Get HTTP client.""" + if self._http_client is None: + raise RuntimeError("Source not initialized") + return self._http_client + + async def scan(self) -> AsyncIterator[Firmware]: + """ + Scan all configured release sources for firmware. + + Yields firmware objects for each profile found in profiles.json. + """ + for source_name, source_config in self.sources.items(): + if not source_config.get("enabled", True): + logger.debug(f"Skipping disabled source: {source_name}") + continue + + logger.info(f"Scanning official source: {source_name}") + + base_url = source_config["url"] + version = source_config["version"] + targets = source_config.get("targets", []) + + if targets: + # Scan specific targets + for target_path in targets: + target, subtarget = target_path.split("/") + async for firmware in self._scan_target( + base_url, version, target, subtarget, source_name + ): + yield firmware + else: + # Scan all targets (slower) + async for firmware in self._scan_all_targets( + base_url, version, source_name + ): + yield firmware + + async def _scan_target( + self, + base_url: str, + version: str, + target: str, + subtarget: str, + source_name: str, + ) -> AsyncIterator[Firmware]: + """Scan a specific target/subtarget for firmware.""" + profiles_url = f"{base_url}/{target}/{subtarget}/profiles.json" + + try: + response = await self.client.get(profiles_url) + response.raise_for_status() + profiles_data = response.json() + except httpx.HTTPError as e: + logger.warning(f"Failed to fetch profiles from {profiles_url}: {e}") + return + except Exception as e: + logger.error(f"Error parsing profiles from {profiles_url}: {e}") + return + + # Extract git commit from version_code if available + version_code = profiles_data.get("version_code", "") + git_commit = version_code.split("-")[-1] if "-" in version_code else None + + profiles = profiles_data.get("profiles", {}) + logger.info(f"Found {len(profiles)} profiles for {target}/{subtarget}") + + for profile_name, profile_data in profiles.items(): + firmware = self._create_firmware( + base_url=base_url, + version=version, + target=target, + subtarget=subtarget, + profile_name=profile_name, + profile_data=profile_data, + source_name=source_name, + git_commit=git_commit, + ) + if firmware: + yield firmware + + async def _scan_all_targets( + self, + base_url: str, + version: str, + source_name: str, + ) -> AsyncIterator[Firmware]: + """Scan all available targets (by listing directory).""" + # Full target scan not implemented - would require directory listing + # which downloads.openwrt.org doesn't support well + logger.warning( + f"Full target scan not implemented for {source_name}. 
" + "Please configure specific targets in pipeline.yaml" + ) + # Empty async generator + if False: + yield # type: ignore + + def _create_firmware( + self, + base_url: str, + version: str, + target: str, + subtarget: str, + profile_name: str, + profile_data: dict, + source_name: str, + git_commit: str | None = None, + ) -> Firmware | None: + """Create a Firmware object from profile data.""" + images = profile_data.get("images", []) + if not images: + return None + + # Build artifact URLs + artifacts = FirmwareArtifacts() + for image in images: + image_type = image.get("type", "").lower() + filename = image.get("name") + sha256 = image.get("sha256") + + if not filename: + continue + + url = f"{base_url}/{target}/{subtarget}/{filename}" + + # Map image types to artifact fields + if "sysupgrade" in image_type: + artifacts.sysupgrade = url + artifacts.sysupgrade_sha256 = sha256 + elif "factory" in image_type: + artifacts.factory = url + artifacts.factory_sha256 = sha256 + elif "initramfs" in image_type or "kernel" in image_type: + artifacts.initramfs = url + artifacts.initramfs_sha256 = sha256 + + # Need at least one usable image + if not (artifacts.sysupgrade or artifacts.factory or artifacts.initramfs): + return None + + # Generate firmware ID + firmware_id = self._generate_firmware_id( + version, target, subtarget, profile_name, git_commit + ) + + # Extract features from device packages + features = self._extract_features(profile_data.get("device_packages", [])) + + return Firmware( + id=firmware_id, + source=FirmwareSourceEnum.OFFICIAL, + source_url=f"{base_url}/{target}/{subtarget}/", + source_ref=source_name, + version=version, + target=target, + subtarget=subtarget, + profile=profile_name, + git_commit_hash=git_commit, + artifacts=artifacts, + features=features, + packages=profile_data.get("device_packages", []), + ) + + def _generate_firmware_id( + self, + version: str, + target: str, + subtarget: str, + profile: str, + git_commit: str | None = None, + ) -> str: + """Generate a unique firmware ID.""" + parts = [ + "openwrt", + version.lower().replace(".", "-"), + target, + subtarget, + profile, + ] + if git_commit: + parts.append(git_commit[:8]) + + hash_input = f"{version}:{target}:{subtarget}:{profile}:{git_commit or ''}" + return generate_firmware_id(*parts, hash_input=hash_input) + + def _extract_features(self, packages: list[str]) -> list[str]: + """Extract device features from package list.""" + features = [] + + # WiFi detection + wifi_packages = ["hostapd", "wpad", "iw", "iwinfo"] + if any( + pkg in packages or any(pkg in p for p in packages) for pkg in wifi_packages + ): + features.append("wifi") + + # USB detection + usb_packages = ["kmod-usb", "usbutils"] + if any(any(pkg in p for p in packages) for pkg in usb_packages): + features.append("usb") + + return features + + async def download_artifact( + self, + firmware: Firmware, + artifact_type: str, + destination: str, + ) -> str: + """Download a firmware artifact to local storage.""" + # Get artifact URL + url = getattr(firmware.artifacts, artifact_type, None) + if not url: + raise ValueError(f"Artifact type '{artifact_type}' not found for firmware") + + dest_path = Path(destination) + dest_path.parent.mkdir(parents=True, exist_ok=True) + + logger.info(f"Downloading {artifact_type} from {url}") + + async with self.client.stream("GET", url) as response: + response.raise_for_status() + + with open(dest_path, "wb") as f: + async for chunk in response.aiter_bytes(chunk_size=8192): + f.write(chunk) + + # Verify checksum if 
available
+        expected_sha256 = getattr(firmware.artifacts, f"{artifact_type}_sha256", None)
+        if expected_sha256:
+            actual_sha256 = self._calculate_sha256(dest_path)
+            if actual_sha256 != expected_sha256:
+                dest_path.unlink()
+                raise ValueError(
+                    f"Checksum mismatch: expected {expected_sha256}, "
+                    f"got {actual_sha256}"
+                )
+
+        logger.info(f"Downloaded {artifact_type} to {dest_path}")
+        return str(dest_path)
+
+    def _calculate_sha256(self, file_path: Path) -> str:
+        """Calculate SHA256 checksum of a file."""
+        sha256 = hashlib.sha256()
+        with open(file_path, "rb") as f:
+            for chunk in iter(lambda: f.read(8192), b""):
+                sha256.update(chunk)
+        return sha256.hexdigest()
diff --git a/kernelci/openwrt-pipeline/openwrt_pipeline/firmware_trigger.py b/kernelci/openwrt-pipeline/openwrt_pipeline/firmware_trigger.py
new file mode 100644
index 000000000..6b9171fc8
--- /dev/null
+++ b/kernelci/openwrt-pipeline/openwrt_pipeline/firmware_trigger.py
@@ -0,0 +1,370 @@
+"""
+Firmware Trigger Service
+
+Main service that watches all configured firmware sources and
+creates firmware entries in the KernelCI API when new images
+are detected.
+
+This service runs continuously and:
+1. Periodically scans configured sources for new firmware
+2. Creates firmware (kbuild) nodes in the API
+3. Lets the scheduler pick up the new nodes by polling and create test jobs
+"""
+
+import asyncio
+from datetime import datetime
+
+import structlog
+import uvicorn
+from fastapi import FastAPI
+
+from .api_client import APIError, KernelCIClient
+from .config import load_pipeline_config, settings
+from .firmware_sources import GitHubPRSource, OfficialReleaseSource
+from .firmware_sources.custom import init_uploader
+from .firmware_sources.custom import router as upload_router
+from .versions import get_active_branches
+
+# Configure logging
+structlog.configure(
+    processors=[
+        structlog.stdlib.filter_by_level,
+        structlog.stdlib.add_logger_name,
+        structlog.stdlib.add_log_level,
+        structlog.stdlib.PositionalArgumentsFormatter(),
+        structlog.processors.TimeStamper(fmt="iso"),
+        structlog.processors.StackInfoRenderer(),
+        structlog.processors.format_exc_info,
+        structlog.processors.UnicodeDecoder(),
+        structlog.dev.ConsoleRenderer(),
+    ],
+    wrapper_class=structlog.stdlib.BoundLogger,
+    context_class=dict,
+    logger_factory=structlog.stdlib.LoggerFactory(),
+    cache_logger_on_first_use=True,
+)
+
+logger = structlog.get_logger(__name__)
+
+
+class FirmwareTriggerService:
+    """
+    Service that monitors firmware sources and triggers test jobs.
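+
+    Typical lifecycle (a sketch):
+
+        service = FirmwareTriggerService()
+        await service.initialize()
+        await service.run()  # blocks until shutdown() cancels the scan tasks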
+ """ + + def __init__(self): + self.config = load_pipeline_config() + self.sources = [] + self.api_client: KernelCIClient | None = None + self.running = False + self._tasks: list[asyncio.Task] = [] + + async def initialize(self) -> None: + """Initialize all firmware sources and API client.""" + logger.info("Initializing Firmware Trigger Service") + + # Initialize API client + self.api_client = KernelCIClient() + await self.api_client.connect() + + # Initialize firmware sources + sources_config = self.config.get("firmware_sources", {}) + + # Dynamically fetch active branches from .versions.json + official_config = sources_config.get("official", {}) + if official_config.get("enabled", True): + await self._init_official_sources(official_config) + + # GitHub PR source + if "github_pr" in sources_config: + source = GitHubPRSource("github_pr", sources_config["github_pr"]) + await source.initialize() + if source.is_enabled(): + self.sources.append(source) + logger.info("Initialized GitHub PR source") + else: + logger.warning("GitHub PR source disabled (no token)") + + # Custom upload source (initialized separately for FastAPI) + if "custom" in sources_config: + init_uploader(sources_config["custom"]) + logger.info("Initialized custom upload handler") + + logger.info(f"Initialized {len(self.sources)} firmware sources") + + async def _init_official_sources(self, config: dict) -> None: + """ + Initialize official release sources dynamically. + + Fetches active branches from .versions.json and creates + a source for each (main/SNAPSHOT, stable, oldstable). + """ + # Get targets to scan from config + default_targets = config.get("targets", []) + check_interval = config.get("check_interval", 3600) + + # Fetch active branches dynamically + try: + branches = await get_active_branches( + include_snapshot=config.get("include_snapshot", True), + include_oldstable=config.get("include_oldstable", True), + ) + logger.info(f"Discovered {len(branches)} active branches") + except Exception as e: + logger.error(f"Failed to fetch branches: {e}") + return + + # Create a source for each branch + for branch in branches: + source_config = { + "enabled": True, + "type": "openwrt_releases", + "sources": { + branch.name: { + "url": branch.url, + "version": branch.version, + "branch": branch.name, + "check_interval": check_interval, + "targets": default_targets, + } + }, + } + + source = OfficialReleaseSource(f"official-{branch.name}", source_config) + await source.initialize() + self.sources.append(source) + logger.info( + f"Initialized source for {branch.name} " + f"(version={branch.version}, url={branch.url})" + ) + + async def shutdown(self) -> None: + """Cleanup resources.""" + logger.info("Shutting down Firmware Trigger Service") + + self.running = False + + # Cancel running tasks + for task in self._tasks: + task.cancel() + try: + await task + except asyncio.CancelledError: + pass + + # Cleanup sources + for source in self.sources: + await source.cleanup() + + # Close API client + if self.api_client: + await self.api_client.close() + + async def run(self) -> None: + """Main service loop.""" + self.running = True + + # Start a scan task for each source + for source in self.sources: + if source.is_enabled(): + task = asyncio.create_task( + self._source_scan_loop(source), + name=f"scan-{source.name}", + ) + self._tasks.append(task) + + # Wait for all tasks + if self._tasks: + await asyncio.gather(*self._tasks, return_exceptions=True) + + async def _source_scan_loop(self, source) -> None: + """Continuously scan a 
firmware source."""
+        logger.info(f"Starting scan loop for source: {source.name}")
+
+        while self.running:
+            try:
+                await self._scan_source(source)
+            except Exception as e:
+                logger.exception(f"Error scanning source {source.name}", error=str(e))
+
+            # Wait for next scan
+            interval = source.get_check_interval()
+            logger.debug(f"Next scan for {source.name} in {interval} seconds")
+            await asyncio.sleep(interval)
+
+    async def _scan_source(self, source) -> None:
+        """Scan a source and create firmware (kbuild) nodes."""
+        logger.info(f"Scanning firmware source: {source.name}")
+        scan_start = datetime.utcnow()
+        new_count = 0
+        existing_count = 0
+
+        async for firmware in source.scan():
+            try:
+                # Check if a kbuild node for this image already exists;
+                # the name scheme matches create_firmware_node()
+                node_name = (
+                    f"openwrt-{firmware.target}-{firmware.subtarget}"
+                    f"-{firmware.profile}"
+                )
+                existing = await self.api_client.query_nodes(
+                    kind="kbuild",
+                    name=node_name,
+                    limit=1,
+                    **{"data.openwrt_version": firmware.version},
+                )
+
+                if existing:
+                    existing_count += 1
+                    continue
+
+                # Create firmware node
+                logger.info(
+                    "New firmware found",
+                    firmware_id=firmware.id,
+                    version=firmware.version,
+                    target=firmware.target,
+                    profile=firmware.profile,
+                )
+
+                # Collect the available artifact URLs into the plain
+                # mapping expected by create_firmware_node()
+                artifact_urls = {
+                    key: url
+                    for key in ("sysupgrade", "factory", "initramfs")
+                    if (url := getattr(firmware.artifacts, key, None))
+                }
+
+                created = await self.api_client.create_firmware_node(
+                    name=node_name,
+                    version=firmware.version,
+                    target=firmware.target,
+                    subtarget=firmware.subtarget,
+                    profile=firmware.profile,
+                    source=firmware.source.value,
+                    artifacts=artifact_urls,
+                    git_commit=firmware.git_commit_hash,
+                )
+                new_count += 1
+
+                # The scheduler discovers new kbuild nodes by polling,
+                # so no explicit event needs to be published here
+                logger.debug("Created firmware node", node_id=created.get("id"))
+
+            except APIError as e:
+                if e.status_code == 409:  # Conflict - already exists
+                    existing_count += 1
+                else:
+                    logger.error(
+                        "API error creating firmware",
+                        firmware_id=firmware.id,
+                        error=str(e),
+                    )
+            except Exception as e:
+                logger.exception(
+                    "Error processing firmware",
+                    firmware_id=firmware.id,
+                    error=str(e),
+                )
+
+        scan_duration = (datetime.utcnow() - scan_start).total_seconds()
+        logger.info(
+            "Scan complete",
+            source=source.name,
+            new_firmware=new_count,
+            existing_firmware=existing_count,
+            duration_seconds=scan_duration,
+        )
+
+
+# =============================================================================
+# FastAPI Application
+# =============================================================================
+
+app = FastAPI(
+    title="OpenWrt Firmware Trigger",
+    description="Firmware source watcher and upload handler for OpenWrt KernelCI",
+    version="0.1.0",
+)
+
+# Include upload router
+app.include_router(upload_router)
+
+# Service instance
+_service: FirmwareTriggerService | None = None
+
+
+@app.on_event("startup")
+async def startup():
+    """Initialize service on startup."""
+    global _service
+    _service = FirmwareTriggerService()
+    await _service.initialize()
+
+    # Start the scan loop in background
+    asyncio.create_task(_service.run())
+
+
+@app.on_event("shutdown")
+async def shutdown():
+    """Cleanup on shutdown."""
+    if _service:
+        await _service.shutdown()
+
+
+@app.get("/health")
+async def health():
+    """Health check endpoint."""
+    return {"status": "healthy", "sources": len(_service.sources) if _service else 0}
+
+
+@app.get("/sources")
+async def list_sources():
"""List configured firmware sources.""" + if not _service: + return {"sources": []} + + return { + "sources": [ + { + "name": s.name, + "enabled": s.is_enabled(), + "check_interval": s.get_check_interval(), + } + for s in _service.sources + ] + } + + +# ============================================================================= +# Main Entry Point +# ============================================================================= + + +def main(): + """Main entry point for the firmware trigger service.""" + # Handle signals + loop = asyncio.new_event_loop() + asyncio.set_event_loop(loop) + + # Run with uvicorn for API endpoints + config = uvicorn.Config( + app=app, + host="0.0.0.0", + port=8080, + log_level=settings.log_level.lower(), + ) + server = uvicorn.Server(config) + + try: + loop.run_until_complete(server.serve()) + except KeyboardInterrupt: + logger.info("Received shutdown signal") + finally: + loop.close() + + +if __name__ == "__main__": + main() diff --git a/kernelci/openwrt-pipeline/openwrt_pipeline/github_status.py b/kernelci/openwrt-pipeline/openwrt_pipeline/github_status.py new file mode 100644 index 000000000..6c6d400d6 --- /dev/null +++ b/kernelci/openwrt-pipeline/openwrt_pipeline/github_status.py @@ -0,0 +1,251 @@ +""" +GitHub Commit Status Integration + +Posts commit statuses and PR comments for test results. +Provides feedback to developers when tests pass/fail. +""" + +import logging +from typing import Literal + +from github import Auth, Github +from github.GithubException import GithubException + +from .config import settings + +logger = logging.getLogger(__name__) + +StatusState = Literal["pending", "success", "failure", "error"] + + +class GitHubStatusPoster: + """ + Posts commit statuses and PR comments to GitHub. + + Provides test feedback to developers: + - Commit status (pending/success/failure/error) + - PR comments with detailed results + """ + + def __init__( + self, + repository: str | None = None, + token: str | None = None, + ): + self.repository = repository or settings.github_repo + self.token = token or settings.github_token + self._github: Github | None = None + + def connect(self) -> None: + """Initialize GitHub client.""" + if not self.token: + logger.warning("No GitHub token configured, status posting disabled") + return + + auth = Auth.Token(self.token) + self._github = Github(auth=auth) + + def close(self) -> None: + """Close GitHub client.""" + if self._github: + self._github.close() + self._github = None + + @property + def github(self) -> Github | None: + return self._github + + def post_status( + self, + commit_sha: str, + state: StatusState, + context: str = "OpenWrt Tests", + description: str = "", + target_url: str | None = None, + ) -> bool: + """ + Post a commit status to GitHub. + + Args: + commit_sha: Full SHA of the commit + state: Status state (pending, success, failure, error) + context: Status context (appears as the check name) + description: Short description (max 140 chars) + target_url: URL to link to for details + + Returns: + True if successful, False otherwise + """ + if not self._github: + logger.debug("GitHub not connected, skipping status post") + return False + + try: + repo = self._github.get_repo(self.repository) + commit = repo.get_commit(commit_sha) + + # Truncate description to GitHub's limit + if len(description) > 140: + description = description[:137] + "..." 
+ + commit.create_status( + state=state, + target_url=target_url or "", + description=description, + context=context, + ) + + logger.info( + f"Posted status to {self.repository}@{commit_sha[:7]}: " + f"{state} - {description}" + ) + return True + + except GithubException as e: + logger.error(f"Failed to post GitHub status: {e}") + return False + + def post_pr_comment( + self, + pr_number: int, + body: str, + ) -> bool: + """ + Post a comment on a pull request. + + Args: + pr_number: PR number + body: Comment body (markdown supported) + + Returns: + True if successful, False otherwise + """ + if not self._github: + logger.debug("GitHub not connected, skipping PR comment") + return False + + try: + repo = self._github.get_repo(self.repository) + pr = repo.get_pull(pr_number) + pr.create_issue_comment(body) + + logger.info(f"Posted comment on PR #{pr_number}") + return True + + except GithubException as e: + logger.error(f"Failed to post PR comment: {e}") + return False + + def post_test_results( + self, + commit_sha: str, + passed: int, + failed: int, + skipped: int, + target_url: str | None = None, + pr_number: int | None = None, + device: str | None = None, + details: list[dict] | None = None, + ) -> bool: + """ + Post test results as commit status and optionally PR comment. + + Args: + commit_sha: Commit SHA to post status on + passed: Number of passed tests + failed: Number of failed tests + skipped: Number of skipped tests + target_url: URL to full test results + pr_number: PR number for detailed comment (optional) + device: Device name tested on + details: List of test details for PR comment + + Returns: + True if successful + """ + total = passed + failed + skipped + state: StatusState = "success" if failed == 0 else "failure" + + # Build description + device_str = f" on {device}" if device else "" + description = f"{passed}/{total} tests passed{device_str}" + + # Build context with device name for multi-device testing + context = "OpenWrt Tests" + if device: + context = f"OpenWrt Tests ({device})" + + # Post commit status + success = self.post_status( + commit_sha=commit_sha, + state=state, + context=context, + description=description, + target_url=target_url, + ) + + # Post detailed PR comment if requested and there are failures + if pr_number and failed > 0 and details: + comment = self._format_results_comment( + passed=passed, + failed=failed, + skipped=skipped, + device=device, + target_url=target_url, + details=details, + ) + self.post_pr_comment(pr_number, comment) + + return success + + def _format_results_comment( + self, + passed: int, + failed: int, + skipped: int, + device: str | None, + target_url: str | None, + details: list[dict], + ) -> str: + """Format a PR comment with test results.""" + status_emoji = "✅" if failed == 0 else "❌" + + lines = [ + f"## {status_emoji} Test Results", + "", + f"**Device:** {device or 'Unknown'}", + f"**Results:** {passed} passed, {failed} failed, {skipped} skipped", + "", + ] + + if failed > 0: + lines.append("### Failed Tests") + lines.append("") + for test in details: + if test.get("status") in ("fail", "error"): + name = test.get("name", "unknown") + error = test.get("error_message", "No error message") + lines.append(f"- **{name}**") + if error: + lines.append(" ```") + lines.append(f" {error[:500]}") + lines.append(" ```") + lines.append("") + + if target_url: + lines.append(f"[View full results]({target_url})") + + return "\n".join(lines) + + +# Global instance for convenience +_poster: GitHubStatusPoster | None = None + + +def 
get_github_poster() -> GitHubStatusPoster: + """Get or create the global GitHub status poster.""" + global _poster + if _poster is None: + _poster = GitHubStatusPoster() + _poster.connect() + return _poster diff --git a/kernelci/openwrt-pipeline/openwrt_pipeline/kcidb_bridge.py b/kernelci/openwrt-pipeline/openwrt_pipeline/kcidb_bridge.py new file mode 100644 index 000000000..645357259 --- /dev/null +++ b/kernelci/openwrt-pipeline/openwrt_pipeline/kcidb_bridge.py @@ -0,0 +1,443 @@ +""" +KCIDB Bridge Service - Resolves full commit hashes from GitHub +""" + +import asyncio +import logging +import os +from datetime import datetime, timezone +import httpx +import jwt + +logging.basicConfig(level=logging.INFO, format='%(asctime)s %(levelname)s %(message)s') +logger = logging.getLogger(__name__) + +KCI_API_URL = os.environ.get("KCI_API_URL", "http://kernelci-api:8000") +KCI_API_TOKEN = os.environ.get("KCI_API_TOKEN", "") +KCIDB_URL = os.environ.get("KCIDB_URL", "http://host.docker.internal:8080") +KCIDB_ORIGIN = os.environ.get("KCIDB_ORIGIN", "openwrt") +KCIDB_SECRET = os.environ.get("KCIDB_SECRET", "") +POLL_INTERVAL = int(os.environ.get("POLL_INTERVAL", "60")) + +# Maximum size for log_excerpt (KCIDB limit is 16384) +MAX_LOG_EXCERPT_SIZE = 16000 + +# Cache for resolved commit hashes +_commit_cache: dict[str, str] = {} + +# Cache for fetched log excerpts +_log_cache: dict[str, str] = {} + + +async def fetch_log_excerpt(log_url: str) -> str | None: + """ + Fetch log content from URL and extract a relevant excerpt. + + Returns up to MAX_LOG_EXCERPT_SIZE characters, prioritizing: + 1. Test results section (PASSED/FAILED summary) + 2. Error messages and failures + 3. Last portion of the log if nothing specific found + """ + if not log_url: + return None + + if log_url in _log_cache: + return _log_cache[log_url] + + try: + async with httpx.AsyncClient(timeout=30.0) as client: + resp = await client.get(log_url) + if resp.status_code != 200: + logger.warning(f"Failed to fetch log: {resp.status_code}") + return None + + content = resp.text + excerpt = extract_log_excerpt(content) + _log_cache[log_url] = excerpt + return excerpt + except Exception as e: + logger.warning(f"Failed to fetch log from {log_url}: {e}") + return None + + +def extract_log_excerpt(content: str) -> str: + """ + Extract the most relevant portion of a log file. + + Prioritizes pytest test results and error messages. 
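+
+    Illustrative example: for a log ending in
+    "==== 2 failed, 10 passed in 31.2s ====", the excerpt starts up to
+    50 lines before that summary marker and runs to the end of the log.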
+ """ + if not content: + return "" + + lines = content.split('\n') + + # Look for pytest summary section (most relevant for test results) + summary_start = -1 + for i, line in enumerate(lines): + # pytest summary markers + if '====' in line and ('passed' in line.lower() or 'failed' in line.lower() or 'error' in line.lower()): + summary_start = max(0, i - 50) # Include 50 lines before summary + break + if line.startswith('FAILED ') or line.startswith('ERROR '): + summary_start = max(0, i - 20) + break + + if summary_start >= 0: + # Get from summary to end + excerpt_lines = lines[summary_start:] + excerpt = '\n'.join(excerpt_lines) + else: + # No summary found - take the last portion of the log + excerpt = content + + # Truncate to max size + if len(excerpt) > MAX_LOG_EXCERPT_SIZE: + excerpt = excerpt[-MAX_LOG_EXCERPT_SIZE:] + # Find first newline to avoid cutting mid-line + first_newline = excerpt.find('\n') + if first_newline > 0: + excerpt = excerpt[first_newline + 1:] + + return excerpt + + +def generate_kcidb_token(): + now = datetime.now(timezone.utc) + payload = { + "origin": KCIDB_ORIGIN, + "gendate": now.isoformat(), + "exp": int(now.timestamp()) + 3600, + } + return jwt.encode(payload, KCIDB_SECRET, algorithm="HS256") + + +async def resolve_full_commit(short_hash: str) -> str: + """Resolve short commit hash to full 40-char hash via GitHub API.""" + if not short_hash: + return "0" * 40 + + short_hash = ''.join(c for c in short_hash.lower() if c in '0123456789abcdef') + + if len(short_hash) >= 40: + return short_hash[:40] + + if short_hash in _commit_cache: + return _commit_cache[short_hash] + + try: + async with httpx.AsyncClient(timeout=10.0) as client: + resp = await client.get( + f"https://api.github.com/repos/openwrt/openwrt/commits/{short_hash}", + headers={"Accept": "application/vnd.github.v3+json"}, + ) + if resp.status_code == 200: + data = resp.json() + full_hash = data.get("sha", "") + if full_hash and len(full_hash) == 40: + _commit_cache[short_hash] = full_hash + logger.info(f"Resolved commit {short_hash} -> {full_hash}") + return full_hash + else: + logger.warning(f"GitHub API returned {resp.status_code} for {short_hash}") + except Exception as e: + logger.warning(f"Failed to resolve commit {short_hash}: {e}") + + padded = short_hash + "0" * (40 - len(short_hash)) + _commit_cache[short_hash] = padded + return padded + + +def fix_timestamp(ts: str) -> str: + if not ts: + return datetime.now(timezone.utc).isoformat() + if "+" not in ts and "Z" not in ts: + return ts + "+00:00" + return ts + + +def node_to_kcidb_checkout(node: dict, full_commit: str) -> dict: + data = node.get("data", {}) + kernel_rev = data.get("kernel_revision", {}) + + return { + "id": f"{KCIDB_ORIGIN}:{node['id']}", + "origin": KCIDB_ORIGIN, + "tree_name": kernel_rev.get("tree", "openwrt"), + "git_repository_url": "https://github.com/openwrt/openwrt.git", + "git_repository_branch": kernel_rev.get("branch", "main"), + "git_commit_hash": full_commit, + "patchset_hash": "", + "start_time": fix_timestamp(node.get("created", "")), + "valid": True, + } + + +def node_to_kcidb_build(node: dict) -> dict: + data = node.get("data", {}) + target = data.get("target", "") + subtarget = data.get("subtarget", "") + profile = data.get("profile", "") + + # architecture = target/subtarget (e.g., ath79/generic) + # config_name = profile only (e.g., tplink_tl-wdr3600-v1) + architecture = f"{target}_{subtarget}" if target and subtarget else target + config_name = profile + + # Builds use 'valid' field (boolean), not 
'status' + # A build is valid if the firmware was successfully created + result = node.get("result") + is_valid = result == "pass" + + return { + "id": f"{KCIDB_ORIGIN}:{node['id']}", + "origin": KCIDB_ORIGIN, + "checkout_id": f"{KCIDB_ORIGIN}:{node['id']}", + "comment": f"OpenWrt {data.get('openwrt_version', '')} - {target}/{subtarget}/{profile}", + "start_time": fix_timestamp(node.get("created", "")), + "valid": is_valid, + "architecture": architecture, + "config_name": config_name, + "input_files": [], # Required by KCIDB schema + } + + +class KCIDBBridge: + def __init__(self): + self.kci_client = None + self.kcidb_client = None + self.processed_ids = set() + + async def start(self): + self.kci_client = httpx.AsyncClient( + base_url=KCI_API_URL, + headers={"Authorization": f"Bearer {KCI_API_TOKEN}"}, + timeout=30.0, + ) + self.kcidb_client = httpx.AsyncClient( + base_url=KCIDB_URL, + timeout=30.0, + ) + logger.info(f"KCIDB Bridge started - KCI: {KCI_API_URL}, KCIDB: {KCIDB_URL}") + + async def stop(self): + if self.kci_client: + await self.kci_client.aclose() + if self.kcidb_client: + await self.kcidb_client.aclose() + + async def get_unprocessed_nodes(self, kind: str, limit: int = 100, state: str = None) -> list: + try: + params = {"kind": kind, "limit": limit} + if state: + params["state"] = state + resp = await self.kci_client.get( + "/latest/nodes", + params=params, + ) + resp.raise_for_status() + data = resp.json() + items = data.get("items", []) + return [n for n in items if n.get("id") not in self.processed_ids] + except Exception as e: + logger.error(f"Error fetching nodes: {e}") + return [] + + async def submit_to_kcidb(self, data: dict) -> bool: + try: + token = generate_kcidb_token() + resp = await self.kcidb_client.post( + "/submit", + json=data, + headers={"Authorization": f"Bearer {token}"}, + ) + if resp.status_code == 200: + return True + else: + logger.warning(f"KCIDB submission failed: {resp.status_code} {resp.text}") + return False + except Exception as e: + logger.error(f"Error submitting to KCIDB: {e}") + return False + + async def process_kbuilds(self): + nodes = await self.get_unprocessed_nodes("kbuild", limit=50) + if not nodes: + return 0 + + checkouts = [] + builds = [] + + # Group by commit hash to minimize GitHub API calls + commit_map = {} + for node in nodes: + data = node.get("data", {}) + kernel_rev = data.get("kernel_revision", {}) + short_commit = kernel_rev.get("commit", "") + if short_commit not in commit_map: + commit_map[short_commit] = [] + commit_map[short_commit].append(node) + + # Resolve commits + for short_commit, commit_nodes in commit_map.items(): + full_commit = await resolve_full_commit(short_commit) + for node in commit_nodes: + checkouts.append(node_to_kcidb_checkout(node, full_commit)) + builds.append(node_to_kcidb_build(node)) + + if checkouts: + submission = { + "version": {"major": 4, "minor": 3}, + "checkouts": checkouts, + "builds": builds, + } + + if await self.submit_to_kcidb(submission): + logger.info(f"Submitted {len(builds)} builds to KCIDB") + for node in nodes: + self.processed_ids.add(node["id"]) + return len(builds) + + return 0 + + async def process_tests(self): + # Process both "test" and "job" kinds - jobs contain test results + nodes = [] + for kind in ["test", "job"]: + # For jobs, filter by state=done to get completed jobs + # Use larger limit to catch all recent jobs + if kind == "job": + kind_nodes = await self.get_unprocessed_nodes(kind, limit=500, state="done") + # Only process jobs that have results + kind_nodes 
= [n for n in kind_nodes if n.get("result")] + else: + kind_nodes = await self.get_unprocessed_nodes(kind, limit=50) + nodes.extend(kind_nodes) + + if not nodes: + return 0 + + tests = [] + result_map = {"pass": "PASS", "fail": "FAIL", "skip": "SKIP", "error": "ERROR", "incomplete": "MISS"} + + for node in nodes: + data = node.get("data", {}) + result = node.get("result", "") + status = result_map.get(result, "MISS") + + # For jobs, use parent as build_id (firmware node) + parent_id = node.get("parent", "") + + # Build test path from job data (KCIDB uses dot-separated paths) + device_type = data.get("device_type", "unknown") + test_plan = data.get("test_plan", "unknown") + lab_name = data.get("lab_name", "unknown") + log_url = data.get("log_url") + + # Fetch log excerpt if log_url is available + log_excerpt = None + if log_url: + log_excerpt = await fetch_log_excerpt(log_url) + + # Check if job has individual test results + test_results = data.get("test_results", []) + + if test_results: + # Submit individual test results for detailed view + for test_result in test_results: + test_name = test_result.get("test_name", "unknown") + test_status = result_map.get(test_result.get("status", ""), "MISS") + # Path format: device.plan.test_name (e.g., bananapi_bpi-r4.boot.test_shell) + test_path = f"{device_type}.{test_plan}.{test_name}" + + test_entry = { + "id": f"{KCIDB_ORIGIN}:{node['id']}:{test_name}", + "origin": KCIDB_ORIGIN, + "build_id": f"{KCIDB_ORIGIN}:{parent_id}" if parent_id else None, + "path": test_path, + "start_time": fix_timestamp(test_result.get("start_time") or data.get("started_at") or node.get("created", "")), + "status": test_status, + "waived": False, + "environment": { + "comment": f"Device: {device_type}, Lab: {lab_name}", + "misc": { + "platform": device_type, + }, + }, + } + + # Add log URL and excerpt if available + if log_url: + test_entry["log_url"] = log_url + if log_excerpt: + test_entry["log_excerpt"] = log_excerpt + + # Add error message if test failed + if test_result.get("error_message"): + test_entry["comment"] = test_result["error_message"][:500] + + tests.append(test_entry) + else: + # No individual results - submit job-level test entry + test_path = f"{device_type}.{test_plan}" + + test_entry = { + "id": f"{KCIDB_ORIGIN}:{node['id']}", + "origin": KCIDB_ORIGIN, + "build_id": f"{KCIDB_ORIGIN}:{parent_id}" if parent_id else None, + "path": test_path, + "start_time": fix_timestamp(data.get("started_at") or node.get("created", "")), + "status": status, + "waived": False, + "environment": { + "comment": f"Device: {device_type}, Lab: {lab_name}", + "misc": { + "platform": device_type, + }, + }, + } + + if log_url: + test_entry["log_url"] = log_url + if log_excerpt: + test_entry["log_excerpt"] = log_excerpt + + tests.append(test_entry) + + if tests: + submission = { + "version": {"major": 4, "minor": 3}, + "tests": tests, + } + + if await self.submit_to_kcidb(submission): + logger.info(f"Submitted {len(tests)} tests to KCIDB") + for node in nodes: + self.processed_ids.add(node["id"]) + return len(tests) + + return 0 + + async def run(self): + await self.start() + + try: + while True: + try: + builds = await self.process_kbuilds() + tests = await self.process_tests() + + if builds > 0 or tests > 0: + logger.info(f"Processed {builds} builds, {tests} tests") + + except Exception as e: + logger.error(f"Error in processing loop: {e}") + + await asyncio.sleep(POLL_INTERVAL) + + finally: + await self.stop() + + +if __name__ == "__main__": + bridge = KCIDBBridge() + 
asyncio.run(bridge.run()) diff --git a/kernelci/openwrt-pipeline/openwrt_pipeline/models.py b/kernelci/openwrt-pipeline/openwrt_pipeline/models.py new file mode 100644 index 000000000..fbb00e27e --- /dev/null +++ b/kernelci/openwrt-pipeline/openwrt_pipeline/models.py @@ -0,0 +1,380 @@ +""" +Data models for OpenWrt KernelCI Pipeline. + +These Pydantic models define the structure of: +- Firmware metadata +- Test jobs +- Test results +- Device status +- Lab registration +""" + +from datetime import datetime +from enum import Enum +from typing import Any + +from pydantic import BaseModel, Field + +# ============================================================================= +# Enums +# ============================================================================= + + +class FirmwareSource(str, Enum): + """Firmware source types.""" + + OFFICIAL = "official" + PR = "pr" + CUSTOM = "custom" + BUILDBOT = "buildbot" + + +class JobStatus(str, Enum): + """Job status values.""" + + PENDING = "pending" + ASSIGNED = "assigned" + RUNNING = "running" + COMPLETE = "complete" + FAILED = "failed" + TIMEOUT = "timeout" + CANCELLED = "cancelled" + + +class TestStatus(str, Enum): + """Test result status values.""" + + PASS = "pass" + FAIL = "fail" + SKIP = "skip" + ERROR = "error" + + +class DeviceStatus(str, Enum): + """Device health status values.""" + + HEALTHY = "healthy" + FAILING = "failing" + DISABLED = "disabled" + UNKNOWN = "unknown" + + +class LabStatus(str, Enum): + """Lab status values.""" + + ONLINE = "online" + OFFLINE = "offline" + MAINTENANCE = "maintenance" + + +# ============================================================================= +# Firmware Models +# ============================================================================= + + +class FirmwareArtifacts(BaseModel): + """Firmware artifact URLs.""" + + sysupgrade: str | None = None + factory: str | None = None + initramfs: str | None = None + kernel: str | None = None + rootfs: str | None = None + manifest: str | None = None + + # SHA256 checksums + sysupgrade_sha256: str | None = None + factory_sha256: str | None = None + initramfs_sha256: str | None = None + + +class Firmware(BaseModel): + """OpenWrt firmware metadata.""" + + id: str = Field(..., description="Unique firmware identifier") + origin: str = Field(default="openwrt") + + # Source information + source: FirmwareSource + source_url: str | None = None + source_ref: str | None = None # PR number, buildbot ID, etc. 
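+    # Illustrative values: a PR build might carry source_ref="1234" with
+    # source_url pointing at the pull request; buildbot builds carry the
+    # buildbot job identifier instead.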
+ + # OpenWrt identification + version: str = Field(..., description="Version string (SNAPSHOT, 24.10.0, etc.)") + target: str = Field(..., description="Target platform (ath79, mediatek, etc.)") + subtarget: str = Field(..., description="Subtarget (generic, filogic, etc.)") + profile: str = Field(..., description="Device profile name") + + # Git information + git_repository_url: str = Field(default="https://github.com/openwrt/openwrt") + git_commit_hash: str | None = None + git_branch: str | None = None + + # Artifacts + artifacts: FirmwareArtifacts = Field(default_factory=FirmwareArtifacts) + + # Metadata + build_time: datetime | None = None + file_size: int | None = None + sha256: str | None = None + features: list[str] = Field(default_factory=list) + packages: list[str] = Field(default_factory=list) + description: str | None = None + + # Timestamps + created_at: datetime = Field(default_factory=datetime.utcnow) + updated_at: datetime = Field(default_factory=datetime.utcnow) + + +class FirmwareCreate(BaseModel): + """Request model for creating firmware entries.""" + + source: FirmwareSource + version: str + target: str + subtarget: str + profile: str + artifacts: FirmwareArtifacts | None = None + git_commit_hash: str | None = None + git_branch: str | None = None + source_url: str | None = None + source_ref: str | None = None + description: str | None = None + + +# ============================================================================= +# Job Models +# ============================================================================= + + +class TestJob(BaseModel): + """Test job definition.""" + + id: str = Field(..., description="Unique job identifier") + firmware_id: str = Field(..., description="Reference to firmware") + + # Target device + device_type: str = Field(..., description="Labgrid target name") + + # Test configuration + test_plan: str = Field(..., description="Test plan name") + tests: list[str] = Field(default_factory=list, description="Specific tests to run") + required_features: list[str] = Field(default_factory=list) + timeout: int = Field(default=1800, description="Job timeout in seconds") + + # Priority and scheduling + priority: int = Field( + default=5, description="Job priority (1-10, higher=more urgent)" + ) + status: JobStatus = Field(default=JobStatus.PENDING) + + # Assignment + assigned_lab: str | None = None + assigned_device: str | None = None + + # Timing + created_at: datetime = Field(default_factory=datetime.utcnow) + started_at: datetime | None = None + completed_at: datetime | None = None + + # Options + skip_firmware_flash: bool = Field(default=False) + retry_count: int = Field(default=0) + max_retries: int = Field(default=2) + + +class JobCreate(BaseModel): + """Request model for creating test jobs.""" + + firmware_id: str + device_type: str + test_plan: str + tests: list[str] | None = None + priority: int = 5 + timeout: int = 1800 + skip_firmware_flash: bool = False + + +class JobUpdate(BaseModel): + """Request model for updating job status.""" + + status: JobStatus + assigned_lab: str | None = None + assigned_device: str | None = None + started_at: datetime | None = None + completed_at: datetime | None = None + error_message: str | None = None + + +# ============================================================================= +# Result Models +# ============================================================================= + + +class TestResult(BaseModel): + """Individual test result.""" + + id: str = Field(..., description="Unique result identifier") 
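+    # Back-references to the job, firmware, device, and lab that produced it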
+ job_id: str + firmware_id: str + device_type: str + lab_name: str + + # Test identification + test_name: str + test_path: str | None = None # Full pytest path + + # Result + status: TestStatus + duration: float = Field(..., description="Duration in seconds") + start_time: datetime + end_time: datetime | None = None + + # Output + log_url: str | None = None + console_log_url: str | None = None + error_message: str | None = None + stdout: str | None = None + stderr: str | None = None + + # Environment + environment: dict[str, Any] = Field(default_factory=dict) + + # Timestamps + created_at: datetime = Field(default_factory=datetime.utcnow) + + +class JobResult(BaseModel): + """Complete job result with all test results.""" + + job_id: str + firmware_id: str + device_type: str + lab_name: str + + # Overall status + status: JobStatus + total_tests: int = 0 + passed_tests: int = 0 + failed_tests: int = 0 + skipped_tests: int = 0 + error_tests: int = 0 + + # Timing + started_at: datetime + completed_at: datetime + duration: float + + # Individual results + test_results: list[TestResult] = Field(default_factory=list) + + # Logs + console_log_url: str | None = None + + # Environment + environment: dict[str, Any] = Field(default_factory=dict) + + +# ============================================================================= +# Device Models +# ============================================================================= + + +class Device(BaseModel): + """Device registration and status.""" + + id: str = Field(..., description="Unique device identifier (labgrid target name)") + lab_name: str + + # Device type mapping + target: str + subtarget: str + profile: str | None = None + + # Features + features: list[str] = Field(default_factory=list) + + # Status + status: DeviceStatus = Field(default=DeviceStatus.UNKNOWN) + last_check: datetime | None = None + last_pass: datetime | None = None + consecutive_failures: int = Field(default=0) + + # Current job + current_job_id: str | None = None + + # Metadata + description: str | None = None + location: str | None = None + + # Timestamps + registered_at: datetime = Field(default_factory=datetime.utcnow) + updated_at: datetime = Field(default_factory=datetime.utcnow) + + +class DeviceHealthCheck(BaseModel): + """Device health check result.""" + + device_id: str + lab_name: str + timestamp: datetime = Field(default_factory=datetime.utcnow) + + # Result + status: TestStatus + checks: list[dict[str, Any]] = Field(default_factory=list) + duration: float + + # Diagnostics + error_message: str | None = None + console_log_url: str | None = None + + +# ============================================================================= +# Lab Models +# ============================================================================= + + +class Lab(BaseModel): + """Lab registration and status.""" + + id: str = Field(..., description="Unique lab identifier") + name: str + + # Status + status: LabStatus = Field(default=LabStatus.OFFLINE) + last_seen: datetime | None = None + + # Capabilities + devices: list[str] = Field(default_factory=list) + features: list[str] = Field(default_factory=list) + + # Configuration + max_concurrent_jobs: int = Field(default=3) + coordinator_url: str | None = None + + # Statistics + jobs_completed: int = Field(default=0) + jobs_failed: int = Field(default=0) + + # Timestamps + registered_at: datetime = Field(default_factory=datetime.utcnow) + updated_at: datetime = Field(default_factory=datetime.utcnow) + + +class LabRegister(BaseModel): + """Request 
model for lab registration.""" + + name: str + devices: list[str] + features: list[str] = Field(default_factory=list) + max_concurrent_jobs: int = 3 + coordinator_url: str | None = None + + +class LabHeartbeat(BaseModel): + """Lab heartbeat with status update.""" + + lab_id: str + status: LabStatus + available_devices: list[str] = Field(default_factory=list) + running_jobs: list[str] = Field(default_factory=list) diff --git a/kernelci/openwrt-pipeline/openwrt_pipeline/test_scheduler.py b/kernelci/openwrt-pipeline/openwrt_pipeline/test_scheduler.py new file mode 100644 index 000000000..532f9d95e --- /dev/null +++ b/kernelci/openwrt-pipeline/openwrt_pipeline/test_scheduler.py @@ -0,0 +1,497 @@ +""" +Test Scheduler Service + +Schedules test jobs for firmware based on: +- Device compatibility (target/subtarget/profile) +- Device features (wifi, wan_port, etc.) +- Test type (firmware tests vs kernel selftests) +- Job priority +- Lab availability + +This service runs continuously and: +1. Listens for new firmware events +2. Creates test jobs for compatible devices +3. Builds custom images when needed (via ASU) +4. Monitors job progress and handles failures +""" + +import asyncio +import logging +import sys + +import structlog + +from .api_client import APIError, KernelCIClient +from .asu_client import ASUClient, ImageBuildRequest +from .config import load_pipeline_config +from .test_types import ( + TEST_TYPE_CONFIGS, + TestType, + device_supports_test_type, + get_image_profile, + get_test_type_config, + needs_custom_image, +) + +# Configure stdlib logging first (required for structlog.stdlib) +logging.basicConfig( + format="%(message)s", + stream=sys.stdout, + level=logging.INFO, +) + +# Configure structlog +structlog.configure( + processors=[ + structlog.stdlib.filter_by_level, + structlog.stdlib.add_logger_name, + structlog.stdlib.add_log_level, + structlog.stdlib.PositionalArgumentsFormatter(), + structlog.processors.TimeStamper(fmt="iso"), + structlog.processors.StackInfoRenderer(), + structlog.processors.format_exc_info, + structlog.processors.UnicodeDecoder(), + structlog.dev.ConsoleRenderer(), + ], + wrapper_class=structlog.stdlib.BoundLogger, + context_class=dict, + logger_factory=structlog.stdlib.LoggerFactory(), + cache_logger_on_first_use=True, +) + +logger = structlog.get_logger(__name__) + + +class TestScheduler: + """ + Schedules test jobs based on firmware and device compatibility. 
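+
+    Discovery is poll-based: every 30 seconds the event listener queries
+    the KernelCI Node API for kbuild nodes that do not yet have child job
+    nodes and schedules tests for them.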
+ + Supports multiple test types: + - firmware: Standard OpenWrt functionality tests + - kselftest: Linux kernel validation tests (requires custom images) + """ + + def __init__(self): + self.config = load_pipeline_config() + self.api_client: KernelCIClient | None = None + self.asu_client: ASUClient | None = None + self._running = False + # Cache for custom image URLs: (target, subtarget, profile, version, test_type) -> url + self._image_cache: dict[tuple, str] = {} + + async def initialize(self) -> None: + """Initialize the scheduler.""" + logger.info("Initializing Test Scheduler") + + self.api_client = KernelCIClient() + await self.api_client.connect() + + self.asu_client = ASUClient() + await self.asu_client.connect() + + async def shutdown(self) -> None: + """Cleanup resources.""" + logger.info("Shutting down Test Scheduler") + self._running = False + + if self.api_client: + await self.api_client.close() + + if self.asu_client: + await self.asu_client.close() + + async def run(self) -> None: + """Main scheduler loop.""" + self._running = True + logger.info("Starting Test Scheduler") + + # Run concurrent tasks + await asyncio.gather( + self._event_listener(), + self._job_monitor(), + return_exceptions=True, + ) + + async def _event_listener(self) -> None: + """ + Listen for firmware events and create jobs. + + Uses the Node-based API to query for firmware (kbuild nodes) + that don't yet have test jobs scheduled. + """ + logger.info("Starting event listener") + + while self._running: + try: + # Get recent firmware nodes (kind=kbuild, state=available) + firmware_nodes = await self.api_client.query_nodes( + kind="kbuild", + state="available", + limit=50, + ) + + for firmware_node in firmware_nodes: + firmware_id = firmware_node.get("id") or firmware_node.get("_id") + + # Check if jobs already exist for this firmware + existing_jobs = await self.api_client.query_nodes( + kind="job", + parent=firmware_id, + limit=1, + ) + + if not existing_jobs: + # Create jobs for this firmware + await self._create_jobs_for_firmware(firmware_node) + + except Exception as e: + logger.exception(f"Error in event listener: {e}") + + await asyncio.sleep(30) + + async def _job_monitor(self) -> None: + """ + Monitor running jobs and handle timeouts. + """ + logger.info("Starting job monitor") + + while self._running: + try: + # This would monitor for stuck/timed out jobs + # and retry or mark as failed + pass + + except Exception as e: + logger.exception(f"Error in job monitor: {e}") + + await asyncio.sleep(60) + + async def _create_jobs_for_firmware(self, firmware_node: dict) -> None: + """ + Create test jobs for a firmware image. + + Finds compatible devices and creates jobs with appropriate + test plans based on device features and test types. 
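+        Compatibility is matched on target/subtarget, with a firmware
+        profile of "*" acting as a wildcard (see _find_compatible_devices).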
+ + Creates jobs for multiple test types: + - firmware: Standard tests using official image + - kselftest: Kernel tests using custom image with test packages + + Args: + firmware_node: Firmware node dict from KernelCI API + """ + firmware_id = firmware_node.get("id") or firmware_node.get("_id") + firmware_data = firmware_node.get("data", {}) + target = firmware_data.get("target", "") + subtarget = firmware_data.get("subtarget", "") + profile = firmware_data.get("profile", "*") + version = firmware_data.get("openwrt_version", "SNAPSHOT") + source = firmware_data.get("source", "official") + artifacts = firmware_data.get("artifacts", {}) + + logger.info( + "Creating jobs for firmware", + firmware_id=firmware_id, + target=target, + profile=profile, + version=version, + ) + + device_types = self.config.get("device_types", {}) + scheduler_config = self.config.get("scheduler", {}) + + # Find compatible devices + compatible_devices = self._find_compatible_devices( + target, + subtarget, + profile, + device_types, + ) + + if not compatible_devices: + logger.warning( + "No compatible devices for firmware", + firmware_id=firmware_id, + target=target, + ) + return + + # Get enabled test types from config + enabled_test_types = scheduler_config.get( + "enabled_test_types", ["firmware"] + ) + + # Create jobs for each test type + for test_type_str in enabled_test_types: + try: + test_type = TestType(test_type_str) + except ValueError: + logger.warning(f"Unknown test type: {test_type_str}") + continue + + await self._create_jobs_for_test_type( + firmware_id=firmware_id, + firmware_data=firmware_data, + test_type=test_type, + compatible_devices=compatible_devices, + artifacts=artifacts, + ) + + async def _create_jobs_for_test_type( + self, + firmware_id: str, + firmware_data: dict, + test_type: TestType, + compatible_devices: dict[str, dict], + artifacts: dict[str, str], + ) -> None: + """ + Create jobs for a specific test type. 
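+
+        Test types that need a custom image (e.g. kselftest) resolve their
+        firmware URL via _get_custom_image_url() first; if that build fails,
+        the test type is skipped for this firmware.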
+ + Args: + firmware_id: Parent firmware node ID + firmware_data: Firmware metadata + test_type: Type of tests to create jobs for + compatible_devices: Devices that can run the firmware + artifacts: Firmware artifact URLs + """ + test_type_config = get_test_type_config(test_type) + if not test_type_config: + logger.warning(f"No config for test type: {test_type}") + return + + target = firmware_data.get("target", "") + subtarget = firmware_data.get("subtarget", "") + profile = firmware_data.get("profile", "") + version = firmware_data.get("openwrt_version", "SNAPSHOT") + + # Determine firmware URL for this test type + if needs_custom_image(test_type): + # Build custom image with required packages + firmware_url = await self._get_custom_image_url( + target=target, + subtarget=subtarget, + profile=profile, + version=version, + test_type=test_type, + ) + if not firmware_url: + logger.warning( + f"Failed to get custom image for {test_type}, skipping" + ) + return + else: + # Use standard firmware image + firmware_url = artifacts.get("sysupgrade") or artifacts.get("factory") + + # Create jobs for compatible devices that support this test type + for device_name, device_config in compatible_devices.items(): + device_capabilities = device_config.get("capabilities", []) + + # Check if device supports this test type + if not device_supports_test_type(device_capabilities, test_type): + logger.debug( + f"Device {device_name} doesn't support {test_type.value}", + capabilities=device_capabilities, + required=test_type_config.required_capabilities, + ) + continue + + # Create jobs for each test plan in this test type + for plan_name in test_type_config.test_plans: + try: + created = await self.api_client.create_test_job( + firmware_node_id=firmware_id, + device_type=device_name, + test_plan=plan_name, + test_type=test_type.value, + firmware_url=firmware_url, + tests_subdir=test_type_config.tests_subdir, + timeout=1800, + ) + job_id = created.get("id") or created.get("_id") + logger.info( + "Created job", + job_id=job_id, + device=device_name, + test_type=test_type.value, + test_plan=plan_name, + ) + + except APIError as e: + if e.status_code == 409: + logger.debug( + f"Job already exists: {device_name}/{test_type.value}/{plan_name}" + ) + else: + logger.error(f"Failed to create job: {e}") + except Exception as e: + logger.exception(f"Error creating job: {e}") + + async def _get_custom_image_url( + self, + target: str, + subtarget: str, + profile: str, + version: str, + test_type: TestType, + ) -> str | None: + """ + Get URL for a custom image with packages for a test type. + + Uses ASU to build the image if not cached. 
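+        Build results are memoized per (target, subtarget, profile, version,
+        test_type) tuple, so a single ASU build is reused across scheduling
+        passes for all devices of the same type.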
+ + Args: + target: Hardware target + subtarget: Subtarget + profile: Device profile + version: OpenWrt version + test_type: Test type requiring custom packages + + Returns: + URL to the custom sysupgrade image, or None on failure + """ + cache_key = (target, subtarget, profile, version, test_type.value) + + # Check cache first + if cache_key in self._image_cache: + logger.debug(f"Using cached custom image for {cache_key}") + return self._image_cache[cache_key] + + # Get packages for this test type + test_type_config = get_test_type_config(test_type) + if not test_type_config: + return None + + image_profile = get_image_profile(test_type_config.image_profile) + if not image_profile: + return None + + packages = image_profile.packages + + logger.info( + f"Building custom image for {test_type.value}", + target=target, + profile=profile, + packages=packages, + ) + + try: + request = ImageBuildRequest( + target=target, + subtarget=subtarget, + profile=profile, + version=version, + packages=packages, + ) + + result = await self.asu_client.build_and_wait(request) + + if result.sysupgrade_url: + self._image_cache[cache_key] = result.sysupgrade_url + return result.sysupgrade_url + elif result.factory_url: + self._image_cache[cache_key] = result.factory_url + return result.factory_url + else: + logger.error(f"No image URL in build result: {result}") + return None + + except Exception as e: + logger.exception(f"Failed to build custom image: {e}") + return None + + def _find_compatible_devices( + self, + target: str, + subtarget: str, + profile: str, + device_types: dict, + ) -> dict[str, dict]: + """ + Find devices compatible with a firmware. + + Matches based on target/subtarget, and optionally profile. + """ + compatible = {} + + for device_name, device_config in device_types.items(): + device_target = device_config.get("target") + device_subtarget = device_config.get("subtarget") + device_profile = device_config.get("profile") + + # Must match target and subtarget + if device_target != target or device_subtarget != subtarget: + continue + + # Profile matching (wildcard * matches any) + if profile != "*" and device_profile and device_profile != profile: + continue + + compatible[device_name] = device_config + + return compatible + + def _get_test_plans_for_device( + self, + device_config: dict, + firmware_source: str, + scheduler_config: dict, + ) -> list[str]: + """ + Determine which test plans to run for a device. 
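+
+        Example (hypothetical config): default plans ["base"] plus a "wifi"
+        device feature mapped to ["wifi"] yields ["base", "wifi"],
+        deduplicated with order preserved.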
+ + Based on: + - Default plans for the firmware source + - Additional plans based on device features + """ + plans = [] + + # Default plans for firmware source + default_plans = scheduler_config.get("default_test_plans", {}) + plans.extend(default_plans.get(firmware_source, ["base"])) + + # Feature-based plans + feature_plans = scheduler_config.get("feature_test_plans", {}) + device_features = device_config.get("features", []) + + for feature in device_features: + if feature in feature_plans: + plans.extend(feature_plans[feature]) + + # Remove duplicates while preserving order + seen = set() + unique_plans = [] + for plan in plans: + if plan not in seen: + seen.add(plan) + unique_plans.append(plan) + + return unique_plans + + +# ============================================================================= +# Main Entry Point +# ============================================================================= + + +async def main(): + """Main entry point.""" + scheduler = TestScheduler() + + try: + await scheduler.initialize() + await scheduler.run() + except KeyboardInterrupt: + logger.info("Keyboard interrupt") + finally: + await scheduler.shutdown() + + +def run(): + """Synchronous entry point.""" + asyncio.run(main()) + + +if __name__ == "__main__": + run() diff --git a/kernelci/openwrt-pipeline/openwrt_pipeline/test_types.py b/kernelci/openwrt-pipeline/openwrt_pipeline/test_types.py new file mode 100644 index 000000000..da45a3726 --- /dev/null +++ b/kernelci/openwrt-pipeline/openwrt_pipeline/test_types.py @@ -0,0 +1,255 @@ +""" +Test Types and Image Profiles + +Defines different test types (firmware tests vs kernel selftests) and +the image profiles needed for each. This allows the scheduler to create +appropriate jobs with the right firmware images. + +Test Types: +- firmware: Standard OpenWrt functionality tests (boot, network, wifi, packages) +- kselftest: Linux kernel validation tests (net, timers, seccomp, bpf) + +Each test type may require: +- Different packages in the firmware image +- Different device capabilities (cabling, isolation) +- Different test repositories +""" + +from dataclasses import dataclass, field +from enum import Enum + + +class TestType(str, Enum): + """Types of tests that can be run on devices.""" + + FIRMWARE = "firmware" # OpenWrt functionality tests + KSELFTEST = "kselftest" # Linux kernel selftests + + +@dataclass +class ImageProfile: + """ + Defines packages and configuration for a firmware image. + + Used to request custom builds from ASU when additional + packages are needed for specific test types. + """ + + name: str + description: str + packages: list[str] = field(default_factory=list) + # If True, use standard image without custom build + use_standard_image: bool = False + + +@dataclass +class TestTypeConfig: + """ + Configuration for a test type. + + Defines what image profile to use, which test plans are available, + and what device capabilities are required. 
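+
+    A minimal illustrative instance (mirroring the kselftest entry in
+    TEST_TYPE_CONFIGS below):
+
+        TestTypeConfig(
+            test_type=TestType.KSELFTEST,
+            description="Linux kernel selftests",
+            image_profile="kselftest",
+            test_plans=["kselftest_net", "kselftest_timers"],
+            required_capabilities=["serial_console", "isolated_network"],
+        )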
+ """ + + test_type: TestType + description: str + image_profile: str + test_plans: list[str] + # Device must have ALL of these capabilities + required_capabilities: list[str] = field(default_factory=list) + # Test repository configuration (can override defaults) + tests_repo: str | None = None + tests_branch: str | None = None + tests_subdir: str | None = None + + +# ============================================================================= +# Image Profiles +# ============================================================================= + +IMAGE_PROFILES: dict[str, ImageProfile] = { + "standard": ImageProfile( + name="standard", + description="Default OpenWrt image without additional packages", + packages=[], + use_standard_image=True, + ), + "kselftest": ImageProfile( + name="kselftest", + description="Image with kselftest dependencies", + packages=[ + # Shell and scripting + "bash", + "python3", + "python3-base", + # Networking tools for net tests + "iproute2-full", + "ethtool", + "iperf3", + "iputils-ping", + # Process utilities + "procps-ng", + "coreutils", + # Kselftest packages + "kselftests-net", + "kselftests-timers", + "kselftests-size", + "kselftests-rtc", + "kselftests-futex", + "kselftests-exec", + "kselftests-clone3", + "kselftests-openat2", + "kselftests-mincore", + "kselftests-mqueue", + "kselftests-kcmp", + "kselftests-sigaltstack", + "kselftests-splice", + "kselftests-sync", + ], + use_standard_image=False, + ), + "kselftest-minimal": ImageProfile( + name="kselftest-minimal", + description="Minimal kselftest image for constrained devices", + packages=[ + "bash", + "iproute2-full", + "kselftests-net", + "kselftests-timers", + "kselftests-size", + ], + use_standard_image=False, + ), +} + + +# ============================================================================= +# Test Type Configurations +# ============================================================================= + +TEST_TYPE_CONFIGS: dict[TestType, TestTypeConfig] = { + TestType.FIRMWARE: TestTypeConfig( + test_type=TestType.FIRMWARE, + description="OpenWrt functionality tests", + image_profile="standard", + test_plans=[ + "boot", + "network", + "wifi", + "packages", + "system", + ], + # All labgrid devices have serial console, no need to require explicitly + required_capabilities=[], + ), + TestType.KSELFTEST: TestTypeConfig( + test_type=TestType.KSELFTEST, + description="Linux kernel selftests", + image_profile="kselftest", + test_plans=[ + # Networking + "kselftest_net", + # Timers and RTC + "kselftest_timers", + "kselftest_rtc", + # Syscalls + "kselftest_clone3", + "kselftest_openat2", + "kselftest_exec", + "kselftest_mincore", + "kselftest_splice", + "kselftest_sync", + # IPC + "kselftest_futex", + "kselftest_mqueue", + "kselftest_sigaltstack", + # Process + "kselftest_kcmp", + # Size + "kselftest_size", + ], + required_capabilities=[ + "serial_console", + "isolated_network", # Net tests need isolated network + ], + tests_subdir="kselftest", # Tests in kselftest/ subdirectory + ), +} + + +# ============================================================================= +# Device Capabilities +# ============================================================================= + +# Known device capabilities that can be declared by labs +DEVICE_CAPABILITIES = { + # Basic connectivity + "serial_console": "Device has serial console access", + "wan_port": "Device has WAN port connected to internet", + "lan_ports": "Device has LAN ports for local testing", + "wifi": "Device has WiFi capability", + # Testing 
infrastructure + "isolated_network": "Device is on isolated network for kernel tests", + "loopback_ethernet": "Device has Ethernet loopback for net tests", + "power_control": "Device power can be controlled (PDU)", + # Performance + "high_memory": "Device has >= 512MB RAM", + "external_storage": "Device has external storage (USB/SD)", +} + + +# ============================================================================= +# Helper Functions +# ============================================================================= + + +def get_image_profile(profile_name: str) -> ImageProfile | None: + """Get an image profile by name.""" + return IMAGE_PROFILES.get(profile_name) + + +def get_test_type_config(test_type: TestType) -> TestTypeConfig | None: + """Get configuration for a test type.""" + return TEST_TYPE_CONFIGS.get(test_type) + + +def get_packages_for_test_type(test_type: TestType) -> list[str]: + """Get the packages needed for a test type.""" + config = get_test_type_config(test_type) + if not config: + return [] + + profile = get_image_profile(config.image_profile) + if not profile: + return [] + + return profile.packages + + +def device_supports_test_type( + device_capabilities: list[str], test_type: TestType +) -> bool: + """Check if a device with given capabilities can run a test type.""" + config = get_test_type_config(test_type) + if not config: + return False + + # Device must have all required capabilities + for required in config.required_capabilities: + if required not in device_capabilities: + return False + + return True + + +def needs_custom_image(test_type: TestType) -> bool: + """Check if a test type needs a custom-built image.""" + config = get_test_type_config(test_type) + if not config: + return False + + profile = get_image_profile(config.image_profile) + if not profile: + return False + + return not profile.use_standard_image diff --git a/kernelci/openwrt-pipeline/openwrt_pipeline/versions.py b/kernelci/openwrt-pipeline/openwrt_pipeline/versions.py new file mode 100644 index 000000000..cb47ec3d4 --- /dev/null +++ b/kernelci/openwrt-pipeline/openwrt_pipeline/versions.py @@ -0,0 +1,184 @@ +""" +OpenWrt Version Discovery + +Fetches version information from downloads.openwrt.org/.versions.json +to automatically determine which branches to test: +- main (SNAPSHOT) +- stable (current release) +- oldstable (previous release series) + +Reference: https://github.com/openwrt/firmware-selector-openwrt-org +""" + +import logging +import re +from dataclasses import dataclass + +import httpx + +logger = logging.getLogger(__name__) + +VERSIONS_URL = "https://downloads.openwrt.org/.versions.json" +DOWNLOADS_BASE = "https://downloads.openwrt.org" + +# Tree configuration +OPENWRT_TREE = "openwrt" +OPENWRT_REPO = "https://git.openwrt.org/openwrt/openwrt.git" + + +@dataclass +class BranchInfo: + """Information about an OpenWrt branch.""" + + name: str # Branch name (e.g., "main", "openwrt-24.10") + version: str # Version string (e.g., "SNAPSHOT", "24.10.0") + url: str # Base URL for firmware downloads + is_snapshot: bool = False + + +def version_to_branch(version: str) -> str: + """Convert version string to git branch name.""" + if version.upper() == "SNAPSHOT": + return "main" + # Extract major.minor (e.g., "24.10.0" -> "openwrt-24.10") + match = re.match(r"(\d+)\.(\d+)", version) + if match: + return f"openwrt-{match.group(1)}.{match.group(2)}" + return "main" + + +def extract_major_minor(version: str) -> tuple[int, int] | None: + """Extract major.minor as tuple for sorting.""" + match 
= re.match(r"(\d+)\.(\d+)", version) + if match: + return (int(match.group(1)), int(match.group(2))) + return None + + +async def fetch_versions( + timeout: float = 30.0, +) -> dict: + """ + Fetch version info from OpenWrt downloads server. + + Returns: + Dict with 'stable_version' and 'versions_list' + """ + async with httpx.AsyncClient(timeout=timeout) as client: + response = await client.get(VERSIONS_URL) + response.raise_for_status() + return response.json() + + +async def get_active_branches( + include_snapshot: bool = True, + include_oldstable: bool = True, + include_upcoming: bool = True, +) -> list[BranchInfo]: + """ + Get list of active branches to test. + + Fetches .versions.json and returns: + - main (SNAPSHOT) - always latest development + - upcoming - release candidate (e.g., 25.12.0-rc4) + - stable - current release (e.g., 24.10.0) + - oldstable - previous release series (e.g., 23.05.5) + + Returns: + List of BranchInfo objects + """ + branches = [] + + # Always include snapshot/main + if include_snapshot: + branches.append( + BranchInfo( + name="main", + version="SNAPSHOT", + url=f"{DOWNLOADS_BASE}/snapshots/targets", + is_snapshot=True, + ) + ) + + try: + data = await fetch_versions() + stable_version = data.get("stable_version", "") + upcoming_version = data.get("upcoming_version", "") + versions_list = data.get("versions_list", []) + + logger.info(f"Fetched versions: stable={stable_version}, upcoming={upcoming_version}, all={versions_list}") + + # Add upcoming version (release candidate) if available + if include_upcoming and upcoming_version: + branches.append( + BranchInfo( + name=version_to_branch(upcoming_version), + version=upcoming_version, + url=f"{DOWNLOADS_BASE}/releases/{upcoming_version}/targets", + ) + ) + + # Add stable version + if stable_version: + branches.append( + BranchInfo( + name=version_to_branch(stable_version), + version=stable_version, + url=f"{DOWNLOADS_BASE}/releases/{stable_version}/targets", + ) + ) + + # Find oldstable (previous major.minor series) + if include_oldstable and versions_list: + stable_mm = extract_major_minor(stable_version) + + # Sort versions and find previous series + versioned = [] + for v in versions_list: + mm = extract_major_minor(v) + if mm: + versioned.append((mm, v)) + + # Sort by major.minor descending + versioned.sort(key=lambda x: x[0], reverse=True) + + # Find first version from a different series than stable + for mm, version in versioned: + if stable_mm and mm[0:2] != stable_mm[0:2]: + # Different major.minor series = oldstable + branches.append( + BranchInfo( + name=version_to_branch(version), + version=version, + url=f"{DOWNLOADS_BASE}/releases/{version}/targets", + ) + ) + break + + except Exception as e: + logger.warning(f"Failed to fetch versions, using defaults: {e}") + # Fallback to known versions + branches.extend( + [ + BranchInfo( + name="openwrt-24.10", + version="24.10.0", + url=f"{DOWNLOADS_BASE}/releases/24.10.0/targets", + ), + BranchInfo( + name="openwrt-23.05", + version="23.05.5", + url=f"{DOWNLOADS_BASE}/releases/23.05.5/targets", + ), + ] + ) + + return branches + + +def get_tree_info() -> dict: + """Get tree information for KernelCI nodes.""" + return { + "tree": OPENWRT_TREE, + "url": OPENWRT_REPO, + } diff --git a/kernelci/openwrt-pipeline/requirements.txt b/kernelci/openwrt-pipeline/requirements.txt new file mode 100644 index 000000000..e7f0a56d6 --- /dev/null +++ b/kernelci/openwrt-pipeline/requirements.txt @@ -0,0 +1,44 @@ +# 
============================================================================= +# OpenWrt KernelCI Pipeline Dependencies +# ============================================================================= + +# Async HTTP client +httpx>=0.27.0 +aiohttp>=3.9.0 + +# API framework (for webhook endpoints) +fastapi>=0.109.0 +uvicorn>=0.27.0 + +# Data validation +pydantic>=2.5.0 +pydantic-settings>=2.1.0 + +# YAML configuration +pyyaml>=6.0.1 +python-dotenv>=1.0.0 + +# S3/MinIO client +minio>=7.2.0 +boto3>=1.34.0 + +# GitHub integration +PyGithub>=2.1.0 + +# Database clients +motor>=3.3.0 # Async MongoDB +redis>=5.0.0 + +# Date/time handling +python-dateutil>=2.8.0 + +# Logging +structlog>=24.1.0 + +# Retry logic +tenacity>=8.2.0 + +# Testing +pytest>=8.0.0 +pytest-asyncio>=0.23.0 +pytest-httpx>=0.28.0 diff --git a/kernelci/openwrt-pipeline/tests/__init__.py b/kernelci/openwrt-pipeline/tests/__init__.py new file mode 100644 index 000000000..7367a709d --- /dev/null +++ b/kernelci/openwrt-pipeline/tests/__init__.py @@ -0,0 +1 @@ +"""Tests for openwrt-pipeline.""" diff --git a/kernelci/openwrt-pipeline/tests/conftest.py b/kernelci/openwrt-pipeline/tests/conftest.py new file mode 100644 index 000000000..54450b7cf --- /dev/null +++ b/kernelci/openwrt-pipeline/tests/conftest.py @@ -0,0 +1,17 @@ +"""Pytest configuration for openwrt-pipeline tests.""" + +import sys +from pathlib import Path + +import pytest + +# Add the package to sys.path for imports +package_dir = Path(__file__).parent.parent +if str(package_dir) not in sys.path: + sys.path.insert(0, str(package_dir)) + + +@pytest.fixture(autouse=True) +def reset_settings(): + """Reset settings between tests.""" + yield diff --git a/kernelci/openwrt-pipeline/tests/test_api_client.py b/kernelci/openwrt-pipeline/tests/test_api_client.py new file mode 100644 index 000000000..082c537ee --- /dev/null +++ b/kernelci/openwrt-pipeline/tests/test_api_client.py @@ -0,0 +1,320 @@ +"""Tests for KernelCI API client.""" + +from unittest.mock import AsyncMock, patch + +import pytest + +from openwrt_pipeline.api_client import APIError, KernelCIClient + + +class TestAPIError: + """Tests for APIError exception.""" + + def test_create(self): + """Test creating APIError.""" + error = APIError(404, "Not found", {"detail": "Resource not found"}) + assert error.status_code == 404 + assert error.message == "Not found" + assert error.details == {"detail": "Resource not found"} + + def test_str(self): + """Test APIError string representation.""" + error = APIError(500, "Server error") + assert "500" in str(error) + assert "Server error" in str(error) + + +class TestKernelCIClient: + """Tests for KernelCIClient.""" + + @pytest.fixture + def client(self): + """Create a KernelCIClient instance.""" + return KernelCIClient( + base_url="http://api.example.com", + token="test-token", + ) + + def test_init(self, client): + """Test client initialization.""" + assert client.base_url == "http://api.example.com" + assert client.token == "test-token" + assert client._client is None + + def test_init_strips_trailing_slash(self): + """Test base_url trailing slash is stripped.""" + client = KernelCIClient(base_url="http://api.example.com/") + assert client.base_url == "http://api.example.com" + + @pytest.mark.asyncio + async def test_connect(self, client): + """Test client connect creates HTTP client.""" + await client.connect() + assert client._client is not None + await client.close() + + @pytest.mark.asyncio + async def test_close(self, client): + """Test client close.""" + await 
client.connect() + await client.close() + assert client._client is None + + @pytest.mark.asyncio + async def test_context_manager(self, client): + """Test client as async context manager.""" + async with client: + assert client._client is not None + assert client._client is None + + @pytest.mark.asyncio + async def test_client_property_raises_when_not_connected(self, client): + """Test client property raises when not connected.""" + with pytest.raises(RuntimeError, match="not connected"): + _ = client.client + + @pytest.mark.asyncio + async def test_create_node(self, client): + """Test create_node calls correct endpoint.""" + mock_response = {"id": "node-123", "kind": "kbuild"} + + with patch.object( + client, "_request", new_callable=AsyncMock, return_value=mock_response + ): + result = await client.create_node({"kind": "kbuild", "name": "test"}) + + assert result["id"] == "node-123" + client._request.assert_called_once_with( + "POST", "/latest/nodes", json={"kind": "kbuild", "name": "test"} + ) + + @pytest.mark.asyncio + async def test_get_node(self, client): + """Test get_node calls correct endpoint.""" + mock_response = {"id": "node-123", "kind": "kbuild"} + + with patch.object( + client, "_request", new_callable=AsyncMock, return_value=mock_response + ): + result = await client.get_node("node-123") + + assert result["id"] == "node-123" + client._request.assert_called_once_with("GET", "/latest/nodes/node-123") + + @pytest.mark.asyncio + async def test_get_node_not_found(self, client): + """Test get_node returns None for 404.""" + with patch.object( + client, + "_request", + new_callable=AsyncMock, + side_effect=APIError(404, "Not found"), + ): + result = await client.get_node("nonexistent") + + assert result is None + + @pytest.mark.asyncio + async def test_update_node(self, client): + """Test update_node calls correct endpoint.""" + mock_response = {"id": "node-123", "state": "done"} + + with patch.object( + client, "_request", new_callable=AsyncMock, return_value=mock_response + ): + result = await client.update_node("node-123", {"state": "done"}) + + assert result["state"] == "done" + client._request.assert_called_once_with( + "PUT", "/latest/nodes/node-123", json={"state": "done"} + ) + + @pytest.mark.asyncio + async def test_query_nodes(self, client): + """Test query_nodes with filters.""" + mock_response = [{"id": "node-1"}, {"id": "node-2"}] + + with patch.object( + client, "_request", new_callable=AsyncMock, return_value=mock_response + ): + result = await client.query_nodes(kind="job", state="available", limit=10) + + assert len(result) == 2 + client._request.assert_called_once() + call_args = client._request.call_args + assert call_args[1]["params"]["kind"] == "job" + assert call_args[1]["params"]["state"] == "available" + assert call_args[1]["params"]["limit"] == 10 + + @pytest.mark.asyncio + async def test_query_nodes_dict_response(self, client): + """Test query_nodes handles dict response with items.""" + mock_response = {"items": [{"id": "node-1"}], "total": 1} + + with patch.object( + client, "_request", new_callable=AsyncMock, return_value=mock_response + ): + result = await client.query_nodes(kind="job") + + assert len(result) == 1 + assert result[0]["id"] == "node-1" + + @pytest.mark.asyncio + async def test_create_firmware_node(self, client): + """Test create_firmware_node creates correct structure.""" + mock_response = {"id": "fw-123"} + + with patch.object( + client, "create_node", new_callable=AsyncMock, return_value=mock_response + ): + result = await 
client.create_firmware_node( + name="openwrt-test", + version="24.10.0", + target="ath79", + subtarget="generic", + profile="tplink_archer-c7-v2", + source="official", + artifacts={"sysupgrade": "http://example.com/fw.bin"}, + git_commit="abc123", + ) + + assert result["id"] == "fw-123" + + # Verify node structure + call_args = client.create_node.call_args[0][0] + assert call_args["kind"] == "kbuild" + assert call_args["group"] == "openwrt" + assert call_args["state"] == "available" + assert call_args["data"]["kernel_revision"]["branch"] == "openwrt-24.10" + assert call_args["data"]["target"] == "ath79" + + @pytest.mark.asyncio + async def test_create_firmware_node_snapshot(self, client): + """Test create_firmware_node for SNAPSHOT version.""" + mock_response = {"id": "fw-123"} + + with patch.object( + client, "create_node", new_callable=AsyncMock, return_value=mock_response + ): + await client.create_firmware_node( + name="openwrt-test", + version="SNAPSHOT", + target="x86", + subtarget="64", + profile="generic", + source="official", + artifacts={}, + ) + + call_args = client.create_node.call_args[0][0] + assert call_args["data"]["kernel_revision"]["branch"] == "main" + + @pytest.mark.asyncio + async def test_create_test_job(self, client): + """Test create_test_job creates correct structure.""" + mock_parent = { + "id": "fw-123", + "data": { + "kernel_revision": { + "tree": "openwrt", + "branch": "main", + "commit": "abc123", + } + }, + } + mock_response = {"id": "job-456"} + + with patch.object( + client, "get_node", new_callable=AsyncMock, return_value=mock_parent + ): + with patch.object( + client, + "create_node", + new_callable=AsyncMock, + return_value=mock_response, + ): + result = await client.create_test_job( + firmware_node_id="fw-123", + device_type="test-device", + test_plan="base", + tests=["test_boot", "test_network"], + timeout=1800, + ) + + assert result["id"] == "job-456" + + # Verify node structure + call_args = client.create_node.call_args[0][0] + assert call_args["kind"] == "job" + assert call_args["parent"] == "fw-123" + assert call_args["state"] == "available" + assert call_args["data"]["device_type"] == "test-device" + assert call_args["data"]["runtime"] == "labgrid" + assert call_args["data"]["kernel_revision"]["branch"] == "main" + + @pytest.mark.asyncio + async def test_claim_job(self, client): + """Test claim_job updates job state.""" + mock_response = {"id": "job-123", "state": "running"} + + with patch.object( + client, "update_node", new_callable=AsyncMock, return_value=mock_response + ): + result = await client.claim_job( + job_id="job-123", + lab_name="test-lab", + device_id="device-01", + ) + + assert result["state"] == "running" + call_args = client.update_node.call_args + assert call_args[0][1]["state"] == "running" + assert call_args[0][1]["data"]["lab_name"] == "test-lab" + + @pytest.mark.asyncio + async def test_complete_job(self, client): + """Test complete_job updates job and creates test nodes.""" + mock_job = {"id": "job-123", "state": "done"} + + with patch.object( + client, "update_node", new_callable=AsyncMock, return_value=mock_job + ): + with patch.object( + client, "create_node", new_callable=AsyncMock, return_value={} + ): + with patch.object( + client, "get_node", new_callable=AsyncMock, return_value=mock_job + ): + await client.complete_job( + job_id="job-123", + result="pass", + test_results=[ + {"name": "test_boot", "status": "pass", "duration": 1.0}, + {"name": "test_fail", "status": "fail", "duration": 0.5}, + ], + 
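+                        # Each test_results entry should become its own child
+                        # test node (create_node.call_count is asserted below).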
log_url="http://example.com/logs", + ) + + # Job should be updated + update_call = client.update_node.call_args + assert update_call[0][1]["state"] == "done" + assert update_call[0][1]["result"] == "pass" + + # Test nodes should be created + assert client.create_node.call_count == 2 + + @pytest.mark.asyncio + async def test_get_pending_jobs(self, client): + """Test get_pending_jobs queries for available jobs.""" + mock_response = [{"id": "job-1"}, {"id": "job-2"}] + + with patch.object( + client, "query_nodes", new_callable=AsyncMock, return_value=mock_response + ): + result = await client.get_pending_jobs(device_type="test-device", limit=5) + + assert len(result) == 2 + client.query_nodes.assert_called_once() + call_kwargs = client.query_nodes.call_args[1] + assert call_kwargs["kind"] == "job" + assert call_kwargs["state"] == "available" diff --git a/kernelci/openwrt-pipeline/tests/test_asu_client.py b/kernelci/openwrt-pipeline/tests/test_asu_client.py new file mode 100644 index 000000000..42344e827 --- /dev/null +++ b/kernelci/openwrt-pipeline/tests/test_asu_client.py @@ -0,0 +1,151 @@ +"""Tests for ASU (Attended Sysupgrade) client.""" + +import pytest + +from openwrt_pipeline.asu_client import ( + ASU_API_URL, + ImageBuildRequest, + ImageBuildResult, +) + + +class TestImageBuildRequest: + """Tests for ImageBuildRequest dataclass.""" + + def test_basic_request(self): + """Test creating a basic build request.""" + request = ImageBuildRequest( + target="ath79", + subtarget="generic", + profile="tplink_archer-c7-v2", + version="SNAPSHOT", + packages=["bash", "python3"], + ) + + assert request.target == "ath79" + assert request.subtarget == "generic" + assert request.profile == "tplink_archer-c7-v2" + assert request.version == "SNAPSHOT" + assert request.packages == ["bash", "python3"] + + def test_to_dict(self): + """Test converting request to API dict format.""" + request = ImageBuildRequest( + target="x86", + subtarget="64", + profile="generic", + version="23.05.3", + packages=["bash"], + ) + + data = request.to_dict() + + assert data["target"] == "x86" + assert data["subtarget"] == "64" + assert data["profile"] == "generic" + assert data["version"] == "23.05.3" + assert data["packages"] == ["bash"] + assert data["diff_packages"] is False + + def test_to_dict_with_filesystem(self): + """Test converting request with filesystem option.""" + request = ImageBuildRequest( + target="x86", + subtarget="64", + profile="generic", + version="SNAPSHOT", + packages=[], + filesystem="ext4", + ) + + data = request.to_dict() + assert data["filesystem"] == "ext4" + + def test_default_packages_empty(self): + """Test that packages defaults to empty list.""" + request = ImageBuildRequest( + target="ath79", + subtarget="generic", + profile="tplink_archer-c7-v2", + version="SNAPSHOT", + ) + + assert request.packages == [] + + +class TestImageBuildResult: + """Tests for ImageBuildResult dataclass.""" + + def test_from_response_completed(self): + """Test parsing completed build response.""" + response = { + "request_hash": "abc123", + "status": "completed", + "version": "23.05.3", + "target": "ath79/generic", + "profile": "tplink_archer-c7-v2", + "images": [ + { + "type": "sysupgrade", + "url": "https://example.com/sysupgrade.bin", + "sha256": "abc123", + }, + { + "type": "factory", + "url": "https://example.com/factory.bin", + "sha256": "def456", + }, + ], + "manifest_url": "https://example.com/manifest.txt", + } + + result = ImageBuildResult.from_response(response) + + assert result.request_hash == "abc123" + 
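+        # from_response is expected to flatten the "images" list into
+        # per-type fields; the sysupgrade/factory URL and sha256
+        # assertions below rely on that behavior.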
assert result.status == "completed" + assert result.version == "23.05.3" + assert result.sysupgrade_url == "https://example.com/sysupgrade.bin" + assert result.factory_url == "https://example.com/factory.bin" + assert result.sha256_sysupgrade == "abc123" + assert result.sha256_factory == "def456" + + def test_from_response_queued(self): + """Test parsing queued build response.""" + response = { + "request_hash": "xyz789", + "status": "queued", + "version": "SNAPSHOT", + "target": "x86/64", + "profile": "generic", + } + + result = ImageBuildResult.from_response(response) + + assert result.request_hash == "xyz789" + assert result.status == "queued" + assert result.sysupgrade_url is None + assert result.factory_url is None + + def test_from_response_failed(self): + """Test parsing failed build response.""" + response = { + "request_hash": "failed123", + "status": "failed", + "version": "SNAPSHOT", + "target": "ath79/generic", + "profile": "nonexistent", + "error": "Profile not found", + } + + result = ImageBuildResult.from_response(response) + + assert result.status == "failed" + assert result.error == "Profile not found" + + +class TestASUClientConstants: + """Tests for ASU client constants.""" + + def test_default_api_url(self): + """Test default ASU API URL.""" + assert ASU_API_URL == "https://sysupgrade.openwrt.org/api/v1" diff --git a/kernelci/openwrt-pipeline/tests/test_test_types.py b/kernelci/openwrt-pipeline/tests/test_test_types.py new file mode 100644 index 000000000..7860a2e94 --- /dev/null +++ b/kernelci/openwrt-pipeline/tests/test_test_types.py @@ -0,0 +1,154 @@ +"""Tests for test types and image profiles.""" + +import pytest + +from openwrt_pipeline.test_types import ( + IMAGE_PROFILES, + TEST_TYPE_CONFIGS, + TestType, + device_supports_test_type, + get_image_profile, + get_packages_for_test_type, + get_test_type_config, + needs_custom_image, +) + + +class TestTestType: + """Tests for TestType enum.""" + + def test_firmware_type(self): + """Test firmware test type.""" + assert TestType.FIRMWARE.value == "firmware" + + def test_kselftest_type(self): + """Test kselftest type.""" + assert TestType.KSELFTEST.value == "kselftest" + + def test_from_string(self): + """Test creating from string value.""" + assert TestType("firmware") == TestType.FIRMWARE + assert TestType("kselftest") == TestType.KSELFTEST + + def test_invalid_type_raises(self): + """Test that invalid type raises ValueError.""" + with pytest.raises(ValueError): + TestType("invalid") + + +class TestImageProfiles: + """Tests for image profiles.""" + + def test_standard_profile_exists(self): + """Test that standard profile exists.""" + assert "standard" in IMAGE_PROFILES + + def test_kselftest_profile_exists(self): + """Test that kselftest profile exists.""" + assert "kselftest" in IMAGE_PROFILES + + def test_standard_uses_standard_image(self): + """Test that standard profile uses standard image.""" + profile = IMAGE_PROFILES["standard"] + assert profile.use_standard_image is True + assert profile.packages == [] + + def test_kselftest_has_packages(self): + """Test that kselftest profile has required packages.""" + profile = IMAGE_PROFILES["kselftest"] + assert profile.use_standard_image is False + assert "bash" in profile.packages + assert "python3" in profile.packages + + def test_get_image_profile(self): + """Test getting image profile by name.""" + profile = get_image_profile("standard") + assert profile is not None + assert profile.name == "standard" + + profile = get_image_profile("nonexistent") + assert profile is 
None + + +class TestTestTypeConfigs: + """Tests for test type configurations.""" + + def test_firmware_config_exists(self): + """Test that firmware config exists.""" + assert TestType.FIRMWARE in TEST_TYPE_CONFIGS + + def test_kselftest_config_exists(self): + """Test that kselftest config exists.""" + assert TestType.KSELFTEST in TEST_TYPE_CONFIGS + + def test_firmware_config(self): + """Test firmware configuration.""" + config = TEST_TYPE_CONFIGS[TestType.FIRMWARE] + assert config.image_profile == "standard" + assert "boot" in config.test_plans + assert "serial_console" in config.required_capabilities + + def test_kselftest_config(self): + """Test kselftest configuration.""" + config = TEST_TYPE_CONFIGS[TestType.KSELFTEST] + assert config.image_profile == "kselftest" + assert "kselftest_net" in config.test_plans + assert "isolated_network" in config.required_capabilities + assert config.tests_subdir == "kselftest" + + def test_get_test_type_config(self): + """Test getting config by test type.""" + config = get_test_type_config(TestType.FIRMWARE) + assert config is not None + assert config.test_type == TestType.FIRMWARE + + +class TestDeviceSupportsTestType: + """Tests for device capability checking.""" + + def test_firmware_with_serial_console(self): + """Test that device with serial_console supports firmware tests.""" + capabilities = ["serial_console", "wan_port"] + assert device_supports_test_type(capabilities, TestType.FIRMWARE) is True + + def test_firmware_without_serial_console(self): + """Test that device without serial_console doesn't support firmware.""" + capabilities = ["wan_port"] + assert device_supports_test_type(capabilities, TestType.FIRMWARE) is False + + def test_kselftest_with_required_capabilities(self): + """Test kselftest with all required capabilities.""" + capabilities = ["serial_console", "isolated_network", "high_memory"] + assert device_supports_test_type(capabilities, TestType.KSELFTEST) is True + + def test_kselftest_missing_isolated_network(self): + """Test kselftest without isolated_network.""" + capabilities = ["serial_console", "wan_port"] + assert device_supports_test_type(capabilities, TestType.KSELFTEST) is False + + +class TestNeedsCustomImage: + """Tests for custom image detection.""" + + def test_firmware_no_custom_image(self): + """Test that firmware tests don't need custom image.""" + assert needs_custom_image(TestType.FIRMWARE) is False + + def test_kselftest_needs_custom_image(self): + """Test that kselftest needs custom image.""" + assert needs_custom_image(TestType.KSELFTEST) is True + + +class TestGetPackagesForTestType: + """Tests for getting packages for test type.""" + + def test_firmware_no_packages(self): + """Test that firmware tests need no extra packages.""" + packages = get_packages_for_test_type(TestType.FIRMWARE) + assert packages == [] + + def test_kselftest_has_packages(self): + """Test that kselftest has required packages.""" + packages = get_packages_for_test_type(TestType.KSELFTEST) + assert "bash" in packages + assert "python3" in packages diff --git a/kernelci/openwrt-pipeline/tests/test_versions.py b/kernelci/openwrt-pipeline/tests/test_versions.py new file mode 100644 index 000000000..61766bd51 --- /dev/null +++ b/kernelci/openwrt-pipeline/tests/test_versions.py @@ -0,0 +1,261 @@ +"""Tests for OpenWrt version discovery.""" + +from unittest.mock import AsyncMock, patch + +import httpx +import pytest + +from openwrt_pipeline.versions import ( + OPENWRT_REPO, + OPENWRT_TREE, + BranchInfo, + extract_major_minor, + 
fetch_versions, + get_active_branches, + get_tree_info, + version_to_branch, +) + + +class TestVersionToBranch: + """Tests for version_to_branch function.""" + + def test_snapshot(self): + """Test SNAPSHOT maps to main.""" + assert version_to_branch("SNAPSHOT") == "main" + + def test_snapshot_lowercase(self): + """Test snapshot (lowercase) maps to main.""" + assert version_to_branch("snapshot") == "main" + + def test_release_24_10_0(self): + """Test 24.10.0 maps to openwrt-24.10.""" + assert version_to_branch("24.10.0") == "openwrt-24.10" + + def test_release_24_10_1(self): + """Test 24.10.1 maps to openwrt-24.10.""" + assert version_to_branch("24.10.1") == "openwrt-24.10" + + def test_release_23_05_5(self): + """Test 23.05.5 maps to openwrt-23.05.""" + assert version_to_branch("23.05.5") == "openwrt-23.05" + + def test_release_25_12_0(self): + """Test 25.12.0 maps to openwrt-25.12.""" + assert version_to_branch("25.12.0") == "openwrt-25.12" + + def test_invalid_version(self): + """Test invalid version defaults to main.""" + assert version_to_branch("invalid") == "main" + + def test_empty_string(self): + """Test empty string defaults to main.""" + assert version_to_branch("") == "main" + + +class TestExtractMajorMinor: + """Tests for extract_major_minor function.""" + + def test_full_version(self): + """Test extracting from full version.""" + assert extract_major_minor("24.10.0") == (24, 10) + + def test_major_minor_only(self): + """Test extracting from major.minor only.""" + assert extract_major_minor("24.10") == (24, 10) + + def test_with_suffix(self): + """Test extracting from version with suffix.""" + assert extract_major_minor("24.10.0-rc1") == (24, 10) + + def test_invalid(self): + """Test invalid version returns None.""" + assert extract_major_minor("SNAPSHOT") is None + assert extract_major_minor("invalid") is None + + +class TestBranchInfo: + """Tests for BranchInfo dataclass.""" + + def test_create(self): + """Test creating BranchInfo.""" + branch = BranchInfo( + name="main", + version="SNAPSHOT", + url="https://downloads.openwrt.org/snapshots/targets", + is_snapshot=True, + ) + assert branch.name == "main" + assert branch.version == "SNAPSHOT" + assert branch.is_snapshot is True + + def test_defaults(self): + """Test BranchInfo defaults.""" + branch = BranchInfo( + name="openwrt-24.10", + version="24.10.0", + url="https://example.com", + ) + assert branch.is_snapshot is False + + +class TestFetchVersions: + """Tests for fetch_versions function.""" + + @pytest.mark.asyncio + async def test_fetch_success(self): + """Test successful version fetch.""" + mock_response = { + "stable_version": "24.10.0", + "versions_list": ["24.10.0", "23.05.5", "23.05.4"], + } + + with patch("httpx.AsyncClient") as mock_client_class: + mock_client = AsyncMock() + mock_client.__aenter__.return_value = mock_client + mock_client.__aexit__.return_value = None + mock_response_obj = AsyncMock() + mock_response_obj.json.return_value = mock_response + mock_response_obj.raise_for_status = lambda: None + mock_client.get.return_value = mock_response_obj + mock_client_class.return_value = mock_client + + result = await fetch_versions() + + assert result["stable_version"] == "24.10.0" + assert "24.10.0" in result["versions_list"] + + @pytest.mark.asyncio + async def test_fetch_network_error(self): + """Test fetch handles network error.""" + with patch("httpx.AsyncClient") as mock_client_class: + mock_client = AsyncMock() + mock_client.__aenter__.return_value = mock_client + mock_client.__aexit__.return_value = 
None + mock_client.get.side_effect = httpx.RequestError("Network error") + mock_client_class.return_value = mock_client + + with pytest.raises(httpx.RequestError): + await fetch_versions() + + +class TestGetActiveBranches: + """Tests for get_active_branches function.""" + + @pytest.mark.asyncio + async def test_all_branches(self): + """Test getting all branches.""" + mock_data = { + "stable_version": "24.10.0", + "versions_list": ["24.10.0", "23.05.5"], + } + + with patch( + "openwrt_pipeline.versions.fetch_versions", + new_callable=AsyncMock, + return_value=mock_data, + ): + branches = await get_active_branches() + + # Should have main, stable, and oldstable + assert len(branches) == 3 + + names = [b.name for b in branches] + assert "main" in names + assert "openwrt-24.10" in names + assert "openwrt-23.05" in names + + @pytest.mark.asyncio + async def test_without_snapshot(self): + """Test getting branches without snapshot.""" + mock_data = { + "stable_version": "24.10.0", + "versions_list": ["24.10.0", "23.05.5"], + } + + with patch( + "openwrt_pipeline.versions.fetch_versions", + new_callable=AsyncMock, + return_value=mock_data, + ): + branches = await get_active_branches(include_snapshot=False) + + names = [b.name for b in branches] + assert "main" not in names + assert "openwrt-24.10" in names + + @pytest.mark.asyncio + async def test_without_oldstable(self): + """Test getting branches without oldstable.""" + mock_data = { + "stable_version": "24.10.0", + "versions_list": ["24.10.0", "23.05.5"], + } + + with patch( + "openwrt_pipeline.versions.fetch_versions", + new_callable=AsyncMock, + return_value=mock_data, + ): + branches = await get_active_branches(include_oldstable=False) + + names = [b.name for b in branches] + assert "main" in names + assert "openwrt-24.10" in names + assert "openwrt-23.05" not in names + + @pytest.mark.asyncio + async def test_fallback_on_error(self): + """Test fallback when fetch fails.""" + with patch( + "openwrt_pipeline.versions.fetch_versions", + new_callable=AsyncMock, + side_effect=Exception("Network error"), + ): + branches = await get_active_branches() + + # Should have fallback versions + assert len(branches) >= 2 + + names = [b.name for b in branches] + assert "main" in names + + @pytest.mark.asyncio + async def test_branch_urls(self): + """Test branch URLs are correct.""" + mock_data = { + "stable_version": "24.10.0", + "versions_list": ["24.10.0"], + } + + with patch( + "openwrt_pipeline.versions.fetch_versions", + new_callable=AsyncMock, + return_value=mock_data, + ): + branches = await get_active_branches(include_oldstable=False) + + main_branch = next(b for b in branches if b.name == "main") + stable_branch = next(b for b in branches if b.name == "openwrt-24.10") + + assert "snapshots" in main_branch.url + assert "releases/24.10.0" in stable_branch.url + + +class TestGetTreeInfo: + """Tests for get_tree_info function.""" + + def test_returns_tree_info(self): + """Test get_tree_info returns correct info.""" + info = get_tree_info() + + assert info["tree"] == OPENWRT_TREE + assert info["url"] == OPENWRT_REPO + + def test_tree_value(self): + """Test tree name is openwrt.""" + assert OPENWRT_TREE == "openwrt" + + def test_repo_url(self): + """Test repo URL is correct.""" + assert "git.openwrt.org" in OPENWRT_REPO diff --git a/labnet.sh b/labnet.sh new file mode 100644 index 000000000..fc6f31128 --- /dev/null +++ b/labnet.sh @@ -0,0 +1,23 @@ +yq -o=json ' + . 
as $root |
+  $root.labs as $labs |
+  $root.devices as $devices |
+  $labs
+  | to_entries
+  | map(
+      .key as $lab |
+      .value.devices
+      | map(
+          select($devices[.] != null) |
+          {
+            "device": .,
+            "proxy": $labs[$lab].proxy,
+            "target": $devices[.].target,
+            "firmware": $devices[.].firmware,
+            "maintainers": $labs[$lab].maintainers
+
+          }
+      )
+  )
+| flatten
+' labnet.yaml
diff --git a/labtest.yaml b/labtest.yaml
new file mode 100644
index 000000000..215d668d5
--- /dev/null
+++ b/labtest.yaml
@@ -0,0 +1,153 @@
+devices:
+  genexis_pulse-ex400:
+    name: Genexis Pulse EX400 / Inteno Pulse EX400
+    target: ramips-mt7621
+    firmware: initramfs-kernel.bin
+
+  hpe_msm460:
+    name: Hewlett-Packard MSM460
+    target: mpc85xx-p1020
+    firmware: initramfs-kernel.bin
+
+  openwrt_one:
+    name: OpenWrt One
+    target: mediatek-filogic
+    firmware: initramfs.itb
+
+  tplink_tl-wdr3600-v1:
+    name: TP-Link TL-WDR3600 v1
+    target: ath79-generic
+    firmware: initramfs-kernel.bin
+
+  tplink_tl-wdr4300-v1:
+    name: TP-Link TL-WDR4300 v1
+    target: ath79-generic
+    firmware: initramfs-kernel.bin
+
+  tplink_tl-wr842n-v3:
+    name: TP-Link TL-WR842N v3
+    target: ath79-generic
+    firmware: initramfs-kernel.bin
+
+  tplink_tl-wr1043nd-v3:
+    name: TP-Link TL-WR1043N/ND v3
+    target: ath79-generic
+    firmware: initramfs-kernel.bin
+
+  cznic_turris-omnia:
+    name: CZ.NIC Turris Omnia
+    target: mvebu-cortexa9
+    firmware: initramfs-kernel.bin
+
+  xunlong_orangepi-zero2:
+    name: Orange Pi Zero2
+    target: sunxi-cortexa53
+    firmware: initramfs-kernel.bin
+    snapshots_only: true
+
+  bananapi_bpi-r4:
+    name: Bananapi BPi-R4
+    target: mediatek-filogic
+    firmware: initramfs-recovery.itb
+
+  bananapi_bpi-r64:
+    name: Bananapi BPi-R64
+    target: mediatek-mt7622
+    firmware: initramfs-recovery.itb
+
+  glinet_gl-mt1300:
+    name: GL.iNet GL-MT1300
+    target: ramips-mt7621
+    firmware: initramfs-kernel.bin
+
+  glinet_gl-mt6000:
+    name: GL.iNet GL-MT6000
+    target: mediatek-filogic
+    firmware: initramfs-kernel.bin
+
+  rpi-4:
+    name: Raspberry Pi 4B/400/CM4 (64bit)
+    target: bcm27xx-bcm2711
+    firmware: squashfs-factory.img.gz
+
+  enterasys_ws-ap3710i:
+    name: Enterasys WS-AP3710i
+    target: mpc85xx-p1020
+    firmware: initramfs-kernel.bin
+
+  iei_puzzle-m902:
+    name: iEi Puzzle-M902
+    target: mvebu-cortexa72
+    firmware: initramfs-kernel.bin
+
+labs:
+  labgrid-aparcar:
+    proxy: labgrid-aparcar
+    maintainers: "@aparcar"
+    devices:
+      - openwrt_one
+      - tplink_tl-wdr3600-v1
+      - genexis_pulse-ex400
+      - bananapi_bpi-r4
+      - glinet_gl-mt6000
+      - enterasys_ws-ap3710i
+    developers:
+      - aparcar
+      - dangowrt
+      - jonasjelonek
+
+  labgrid-bastian:
+    proxy: labgrid-bastian
+    maintainers: "@bittorf"
+    devices:
+      - cznic_turris-omnia
+      - tplink_tl-wdr4300-v1
+    developers:
+      - aparcar
+
+  labgrid-blocktrron:
+    proxy: labgrid-blocktrron
+    maintainers: "@blocktrron"
+    devices:
+      - hpe_msm460
+      - tplink_tl-wr842n-v3
+    developers:
+      - aparcar
+
+  labgrid-leinelab:
+    proxy: labgrid-leinelab
+    maintainers: "@lemoer"
+    devices:
+      - tplink_tl-wr1043nd-v3
+    developers:
+      - aparcar
+
+  labgrid-hsn:
+    proxy: labgrid-hsn
+    maintainers: "@jonasjelonek"
+    devices:
+      - genexis_pulse-ex400
+      - bananapi_bpi-r64
+      - bananapi_bpi-r4
+      - iei_puzzle-m902
+    developers:
+      - aparcar
+      - jonasjelonek
+
+  labgrid-wigyori:
+    proxy: labgrid-wigyori
+    maintainers: "@wigyori"
+    devices:
+      - xunlong_orangepi-zero2
+    developers:
+      - wigyori
+
+developers:
+  aparcar:
+    sshkey: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDZTxjQ3/KTefKvsPlVBWz+ITD7dGWcOt8/C55ekd2VE
+  dangowrt:
+    sshkey: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDe6tUbVJW3bYUVCYyQQFpNrJ1wP5kzvEkCruSjyA6TM
+  jonasjelonek:
+    sshkey: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDy4T0osxR8yS4RjJcsMn3rSacKvNzu+ZRNfe2lXTSHS
+  wigyori:
+    sshkey: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG8vcHMKcl6W6OuzSdjOhQ0ZehtH+McjLB7WzgP9P4Ls
diff --git a/pyproject.toml b/pyproject.toml
index 50f808283..fcb00dc95 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -17,3 +17,18 @@ labgrid = { git = "https://github.com/aparcar/labgrid.git", branch = "aparcar/st
 
 [dependency-groups]
 dev = ["isort>=6.0.1", "ruff>=0.12.0"]
+
+[tool.ruff]
+line-length = 88
+target-version = "py313"
+
+[tool.ruff.lint]
+select = ["E", "F", "I", "W"]
+
+[tool.ruff.lint.isort]
+known-first-party = ["openwrt_pipeline", "labgrid_kci_adapter"]
+
+[tool.isort]
+profile = "black"
+line_length = 88
+known_first_party = ["openwrt_pipeline", "labgrid_kci_adapter"]
diff --git a/pytest.ini_ b/pytest.ini_
new file mode 100644
index 000000000..d9f094fbd
--- /dev/null
+++ b/pytest.ini_
@@ -0,0 +1,16 @@
+## pytest
+[pytest]
+rp_endpoint = https://tests.aparcar.org
+rp_api_key = pytest_j3F0obMPQgWfr8w5xRsAAwrsnPH9CBJcsr_9-VTla7ftNOHW6OOer0tTsF721vCP
+rp_project = openwrt
+
+rp_launch = Pytest Launch
+rp_launch_description = "OpenWrt One Snapshot"
+rp_launch_attributes = 'device:openwrt_one' 'revision:r30292-77b9393d2f' 'version:SNAPSHOT'
+rp_mode = DEFAULT
+; rp_verify_ssl = True
+; rp_log_batch_size = 20
+; rp_log_batch_payload_size = 64000000
+; rp_log_level = INFO
+; rp_ignore_attributes = 'xfail' 'usefixture'
+; rp_is_skipped_an_issue = True
diff --git a/qemu.sh b/qemu.sh
new file mode 100644
index 000000000..d54d7b60c
--- /dev/null
+++ b/qemu.sh
@@ -0,0 +1,6 @@
+ /opt/homebrew/bin/qemu-system-x86_64 \
+   ../openwrt/bin/targets/x86/64/openwrt-x86-64-generic-squashfs-combined.img \
+   -netdev user,id=n1,ipv6=off -device virtio-net-pci,netdev=n1 \
+   -nic user,model=virtio-net-pci,net=192.168.1.0/24,id=lan \
+   -device virtio-rng-pci \
+   -nographic
diff --git a/r4-lite.xml b/r4-lite.xml
new file mode 100644
index 000000000..a686e02b0
--- /dev/null
+++ b/r4-lite.xml
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/report.xml b/report.xml
new file mode 100644
index 000000000..b27c7ec56
--- /dev/null
+++ b/report.xml
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/run.py b/run.py
new file mode 100644
index 000000000..6ead3dcde
--- /dev/null
+++ b/run.py
@@ -0,0 +1,4 @@
+import pytest
+
+if __name__ == "__main__":
+    pytest.main(["tests/test_base.py", "-s", "-k", "test_shell"])
diff --git a/run.sh b/run.sh
new file mode 100644
index 000000000..02d875758
--- /dev/null
+++ b/run.sh
@@ -0,0 +1,5 @@
+LG_QEMU_BIN=qemu-system-x86_64 \
+   poetry -C ~/src/openwrt/tests/ run pytest tests/ \
+   --lg-env tests/qemu.yaml --lg-log -vv --lg-colored-steps \
+   --target x86-64 \
+   --firmware ../../openwrt/bin/targets/x86/64/openwrt-x86-64-generic-squashfs-combined.img
diff --git a/scripts/generate-lab-token.py b/scripts/generate-lab-token.py
new file mode 100755
index 000000000..6c3083f2a
--- /dev/null
+++ b/scripts/generate-lab-token.py
@@ -0,0 +1,84 @@
+#!/usr/bin/env python3
+"""
+Generate a JWT token for a KernelCI lab.
+
+Usage:
+    python generate-lab-token.py <lab_name> <secret_key> [--expires-days N]
+
+Example:
+    python generate-lab-token.py aparcar "your-secret-key" --expires-days 365
+
+The secret key must match the KCI_SECRET_KEY used by the KernelCI API.
+"""
+
+import argparse
+import sys
+from datetime import datetime, timedelta, timezone
+
+try:
+    import jwt
+except ImportError:
+    print("Error: PyJWT not installed. Run: pip install pyjwt", file=sys.stderr)
+    sys.exit(1)
+
+
+def generate_token(
+    lab_name: str,
+    secret_key: str,
+    expires_days: int = 365,
+    algorithm: str = "HS256",
+) -> str:
+    """Generate a JWT token for a lab."""
+    now = datetime.now(timezone.utc)
+    expires = now + timedelta(days=expires_days)
+
+    payload = {
+        "sub": lab_name,
+        "type": "lab",
+        "iat": int(now.timestamp()),
+        "exp": int(expires.timestamp()),
+    }
+
+    return jwt.encode(payload, secret_key, algorithm=algorithm)
+
+
+def main():
+    parser = argparse.ArgumentParser(
+        description="Generate a JWT token for a KernelCI lab"
+    )
+    parser.add_argument("lab_name", help="Lab name (e.g., aparcar)")
+    parser.add_argument("secret_key", help="KCI_SECRET_KEY from the API server")
+    parser.add_argument(
+        "--expires-days",
+        type=int,
+        default=365,
+        help="Token expiration in days (default: 365)",
+    )
+    parser.add_argument(
+        "--algorithm",
+        default="HS256",
+        help="JWT algorithm (default: HS256)",
+    )
+
+    args = parser.parse_args()
+
+    if len(args.secret_key) < 32:
+        print(
+            "Warning: Secret key should be at least 32 characters",
+            file=sys.stderr,
+        )
+
+    token = generate_token(
+        lab_name=args.lab_name,
+        secret_key=args.secret_key,
+        expires_days=args.expires_days,
+        algorithm=args.algorithm,
+    )
+
+    print(f"# Token for lab: {args.lab_name}")
+    print(f"# Expires: {datetime.now(timezone.utc) + timedelta(days=args.expires_days)}")
+    print(f"KCI_API_TOKEN={token}")
+
+
+if __name__ == "__main__":
+    main()
diff --git a/scripts/kernel_selftest2junit.py b/scripts/kernel_selftest2junit.py
new file mode 100644
index 000000000..93738a2ce
--- /dev/null
+++ b/scripts/kernel_selftest2junit.py
@@ -0,0 +1,145 @@
+#!/usr/bin/env python3
+import argparse
+import sys
+import re
+import xml.etree.ElementTree as ET
+from xml.dom import minidom
+from collections import defaultdict
+
+TEST_RE = re.compile(r"^TEST:\s*(.+?)\s+\[\s*([A-Z]+)\s*\]\s*$")
+INFO_RE = re.compile(r"^INFO:\s*(?:#\s*)?(.*\S)\s*$")
+INDENTED_RE = re.compile(r"^\s+(.*\S.*)$")
+
+STATUS_MAP = {
+    "OK": "pass",
+    "FAIL": "fail",
+    "XFAIL": "xfail",
+    "SKIP": "skip",
+}
+
+
+def parse_selftest(lines):
+    """
+    Parse kernel selftest textual output into a structure:
+        { suite_name: [ {name, status, details} ] }
+    """
+    suites = defaultdict(list)
+    current_suite = "default"
+    current_test = None
+
+    for raw in lines:
+        line = raw.rstrip("\n")
+
+        # Section / suite header
+        m = INFO_RE.match(line)
+        if m:
+            header = m.group(1).strip()
+            if header:
+                current_suite = header
+                current_test = None
+            continue
+
+        # Test line
+        m = TEST_RE.match(line)
+        if m:
+            name, status = m.group(1).strip(), m.group(2).strip().upper()
+            status = STATUS_MAP.get(status, "pass")  # default to pass if unknown
+            current_test = {"name": name, "status": status, "details": []}
+            suites[current_suite].append(current_test)
+            continue
+
+        # Indented continuation: treat as details for the last test
+        m = INDENTED_RE.match(line)
+        if m and current_test is not None:
+            current_test["details"].append(m.group(1))
+            continue
+
+        # Anything else is ignored for now
+
+    # Join details into a single string
+    for tests in suites.values():
+        for t in tests:
+            t["details"] = "\n".join(t["details"]).strip()
+
+    return suites
+
+
+def suites_to_junit_xml(suites, top_name="kernel-selftests"):
+    """
+    Convert parsed suites/tests to JUnit XML (<testsuites> root).
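+
+    Status mapping: "fail" produces a <failure> child element, "skip" and
+    "xfail" produce a <skipped> child, and passing tests get no child.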
+ """ + root = ET.Element("testsuites") + root.set("name", top_name) + + for suite_name, tests in suites.items(): + ts = ET.SubElement(root, "testsuite", name=suite_name) + # Collect counts + total = len(tests) + failures = sum(1 for t in tests if t["status"] == "fail") + skipped = sum(1 for t in tests if t["status"] in ("skip", "xfail")) + errors = 0 + + ts.set("tests", str(total)) + ts.set("failures", str(failures)) + ts.set("errors", str(errors)) + ts.set("skipped", str(skipped)) + ts.set("time", "0") + + for idx, t in enumerate(tests, 1): + case = ET.SubElement( + ts, "testcase", classname=suite_name, name=t["name"], time="0" + ) + if t["status"] == "fail": + # Put a short message + details body + msg = t["details"].splitlines()[0] if t["details"] else "failed" + fail = ET.SubElement(case, "failure", message=msg) + if t["details"]: + fail.text = t["details"] + elif t["status"] in ("skip", "xfail"): + reason = "xfail" if t["status"] == "xfail" else "skipped" + ET.SubElement(case, "skipped", message=reason) + # For passes, no child elements + + return root + + +def pretty_print_xml(elem): + rough = ET.tostring(elem, encoding="utf-8") + reparsed = minidom.parseString(rough) + return reparsed.toprettyxml(indent=" ", encoding="utf-8") + + +def main(): + ap = argparse.ArgumentParser( + description="Convert Linux kernel selftest text output to JUnit XML." + ) + ap.add_argument( + "input", nargs="?", default="-", help="Input file (default: stdin)." + ) + ap.add_argument( + "-o", "--output", default="-", help="Output file (default: stdout)." + ) + ap.add_argument( + "--suite-name", default="kernel-selftests", help="Top-level testsuites name." + ) + args = ap.parse_args() + + if args.input == "-" or args.input == "": + data = sys.stdin.read() + else: + with open(args.input, "r", encoding="utf-8") as f: + data = f.read() + + suites = parse_selftest(data.splitlines()) + xml_root = suites_to_junit_xml(suites, top_name=args.suite_name) + xml_bytes = pretty_print_xml(xml_root) + + if args.output == "-" or args.output == "": + sys.stdout.buffer.write(xml_bytes) + else: + with open(args.output, "wb") as f: + f.write(xml_bytes) + + +if __name__ == "__main__": + main() diff --git a/scripts/selftest.log b/scripts/selftest.log new file mode 100644 index 000000000..3f8669627 --- /dev/null +++ b/scripts/selftest.log @@ -0,0 +1,772 @@ +TEST: Locked port ipv4 [FAIL] + Ping did not work after unlocking port and removing FDB entry. 
+TEST: Locked port ipv6 [FAIL] + Ping6 did not work after unlocking port and removing FDB entry +TEST: Locked port vlan [FAIL] + Ping through vlan did not work after unlocking port and removing FDB entry +TEST: Locked port MAB [FAIL] + Ping did not work after replacing FDB entry +TEST: Locked port MAB roam [FAIL] + No locked entry on first injection +TEST: Locked port MAB configuration [FAIL] + Failed to enable MAB when port is locked and has learning enabled +TEST: Locked port MAB FDB flush [FAIL] + Failed to create locked FDB entry on second port +TEST: Locked port MAB redirect [FAIL] + Locked entry not created after deleting filter + +INFO: # Host entries configuration tests +TEST: Common host entries configuration tests (IPv4) [ OK ] +TEST: Common host entries configuration tests (IPv6) [ OK ] +TEST: Common host entries configuration tests (L2) [ OK ] + +INFO: # Port group entries configuration tests - (*, G) +TEST: Common port group entries configuration tests (IPv4 (*, G)) [ OK ] +TEST: Common port group entries configuration tests (IPv6 (*, G)) [ OK ] +TEST: IPv4 (*, G) port group entries configuration tests [ OK ] +TEST: IPv6 (*, G) port group entries configuration tests [ OK ] + +INFO: # Port group entries configuration tests - (S, G) +TEST: Common port group entries configuration tests (IPv4 (S, G)) [ OK ] +TEST: Common port group entries configuration tests (IPv6 (S, G)) [ OK ] +TEST: IPv4 (S, G) port group entries configuration tests [ OK ] +TEST: IPv6 (S, G) port group entries configuration tests [ OK ] + +INFO: # Port group entries configuration tests - L2 +TEST: Common port group entries configuration tests (L2 (*, G)) [ OK ] +TEST: L2 (*, G) port group entries configuration tests [ OK ] + +INFO: # Large scale dump tests +TEST: IPv4 large scale dump tests [FAIL] + Wrong number of entries in br-test2 +TEST: IPv6 large scale dump tests [FAIL] + Wrong number of entries in br-test2 +TEST: L2 large scale dump tests [FAIL] + Wrong number of entries in br-test2 +TEST: Flush tests [FAIL] + Wrong number of entries in br-test2 + +INFO: # Forwarding tests +TEST: IPv4 host entries forwarding tests [ OK ] +TEST: IPv6 host entries forwarding tests [ OK ] +TEST: L2 host entries forwarding tests [ OK ] +TEST: IPv4 port group "exclude" entries forwarding tests [FAIL] + Packet from invalid source received on H2 after deleting entry +TEST: IPv6 port group "exclude" entries forwarding tests [FAIL] + Packet from invalid source received on H2 after deleting entry +TEST: IPv4 port group "include" entries forwarding tests [FAIL] + Packet from invalid source received on H2 after deleting entry +TEST: IPv6 port group "include" entries forwarding tests [FAIL] + Packet from invalid source received on H2 after deleting entry +TEST: L2 port entries forwarding tests [ OK ] + +INFO: # Control packets tests +TEST: IGMPv3 MODE_IS_INCLUDE tests [ OK ] +TEST: MLDv2 MODE_IS_INCLUDE tests [ OK ] +TEST: MLDv2 report ff02::cc is_include [FAIL] + Didn't forward traffic from S,G (2001:db8:1::3, ff02::cc) +TEST: MLDv2 report ff02::cc include -> allow [FAIL] + Didn't forward traffic from S,G (2001:db8:1::12, ff02::cc) +TEST: MLDv2 report ff02::cc include -> is_include [FAIL] + Didn't forward traffic from S,G (2001:db8:1::12, ff02::cc) +TEST: MLDv2 report ff02::cc include -> is_exclude [FAIL] + Didn't forward traffic from S,G (2001:db8:1::100, ff02::cc) +TEST: MLDv2 report ff02::cc include -> to_exclude [FAIL] + Didn't forward traffic from S,G (2001:db8:1::100, ff02::cc) +TEST: MLDv2 report ff02::cc exclude -> allow 
[FAIL] + Didn't forward traffic from S,G (2001:db8:1::100, ff02::cc) +TEST: MLDv2 report ff02::cc exclude -> is_include [FAIL] + Didn't forward traffic from S,G (2001:db8:1::100, ff02::cc) +TEST: MLDv2 report ff02::cc exclude -> is_exclude [FAIL] + Didn't forward traffic from S,G (2001:db8:1::100, ff02::cc) +TEST: MLDv2 report ff02::cc exclude -> to_exclude [FAIL] + Didn't forward traffic from S,G (2001:db8:1::100, ff02::cc) +TEST: MLDv2 report ff02::cc include -> block [FAIL] + Didn't forward traffic from S,G (2001:db8:1::3, ff02::cc) +TEST: MLDv2 report ff02::cc exclude -> block [FAIL] + Didn't forward traffic from S,G (2001:db8:1::100, ff02::cc) +TEST: MLDv2 group ff02::cc exclude timeout [FAIL] + Didn't forward traffic from S,G (2001:db8:1::30, ff02::cc) +TEST: MLDv2 S,G port entry automatic add to a *,G port [FAIL] + Didn't forward traffic from S,G (2001:db8:1::3, ff02::cc) +TEST: per vlan snooping with port stp state change [ OK ] +TEST: per vlan snooping with vlan stp state change [ OK ] +TEST: ping [ OK ] +TEST: ping6 [ OK ] +TEST: FDB learning [FAIL] + Found FDB record when should not +TEST: Unknown unicast flood [FAIL] + Packet was not flooded when should +TEST: Unregistered multicast flood [ OK ] +INFO: Add and delete a VLAN on bridge port lan2 +TEST: ping [ OK ] +TEST: ping6 [ OK ] +TEST: Externally learned FDB entry - ageing & roaming [ OK ] +TEST: FDB entry in PVID for VLAN-tagged with other TPID [ OK ] +TEST: Reception of VLAN with other TPID as untagged [FAIL] + Packet was not forwarded when it should +TEST: Reception of VLAN with other TPID as untagged (no PVID) [FAIL] + Packet was forwarded when should not +TEST: Reception of 802.1p-tagged traffic [ OK ] +TEST: Dropping of untagged and 802.1p-tagged traffic with no PVID [ OK ] +TEST: Vlan multicast snooping enable [ OK ] +TEST: Vlan global options existence [ OK ] +TEST: Vlan mcast_snooping global option default value [ OK ] +TEST: Vlan 10 multicast snooping control [ OK ] +TEST: Vlan mcast_querier global option default value [ OK ] +TEST: Vlan 10 multicast querier enable [ OK ] +TEST: Vlan 10 tagged IGMPv2 general query sent [ OK ] +TEST: Vlan 10 tagged MLD general query sent [ OK ] +TEST: Vlan mcast_igmp_version global option default value [ OK ] +TEST: Vlan mcast_mld_version global option default value [ OK ] +TEST: Vlan 10 mcast_igmp_version option changed to 3 [ OK ] +TEST: Vlan 10 tagged IGMPv3 general query sent [ OK ] +TEST: Vlan 10 mcast_mld_version option changed to 2 [ OK ] +TEST: Vlan 10 tagged MLDv2 general query sent [ OK ] +TEST: Vlan mcast_last_member_count global option default value [ OK ] +TEST: Vlan mcast_last_member_interval global option default value [ OK ] +TEST: Vlan 10 mcast_last_member_count option changed to 3 [ OK ] +TEST: Vlan 10 mcast_last_member_interval option changed to 200 [ OK ] +TEST: Vlan mcast_startup_query_interval global option default value [ OK ] +TEST: Vlan mcast_startup_query_count global option default value [ OK ] +TEST: Vlan 10 mcast_startup_query_interval option changed to 100 [ OK ] +TEST: Vlan 10 mcast_startup_query_count option changed to 3 [ OK ] +TEST: Vlan mcast_membership_interval global option default value [ OK ] +TEST: Vlan 10 mcast_membership_interval option changed to 200 [ OK ] +TEST: Vlan 10 mcast_membership_interval mdb entry expire [ OK ] +TEST: Vlan mcast_querier_interval global option default value [ OK ] +TEST: Vlan 10 mcast_querier_interval option changed to 100 [ OK ] +TEST: Vlan 10 mcast_querier_interval expire after outside query [ OK ] +TEST: Vlan 
mcast_query_interval global option default value [ OK ] +TEST: Vlan 10 mcast_query_interval option changed to 200 [ OK ] +TEST: Vlan mcast_query_response_interval global option default value [ OK ] +TEST: Vlan 10 mcast_query_response_interval option changed to 200 [ OK ] +TEST: Port vlan 10 option mcast_router default value [ OK ] +TEST: Port vlan 10 mcast_router option changed to 2 [ OK ] +TEST: Flood unknown vlan multicast packets to router port only [ OK ] +TEST: Disable multicast vlan snooping when vlan filtering is disabled [ OK ] +TEST: ping [FAIL] +TEST: ping6 [FAIL] +TEST: FDB learning [FAIL] + Found FDB record when should not +TEST: Unknown unicast flood [FAIL] + Packet was not flooded when should +TEST: Unregistered multicast flood [FAIL] + Packet was not flooded when should +TEST: ping with bridge port lan2 PVID changed [FAIL] +TEST: ping6 with bridge port lan2 PVID changed [FAIL] +TEST: ping with bridge port lan2 PVID deleted [FAIL] +TEST: ping6 with bridge port lan2 PVID deleted [FAIL] +TEST: lan2: Unicast IPv4 to primary MAC address [FAIL] + reception failed +TEST: lan2: Unicast IPv4 to macvlan MAC address [FAIL] + reception failed +TEST: lan2: Unicast IPv4 to unknown MAC address [ OK ] +TEST: lan2: Unicast IPv4 to unknown MAC address, promisc [ OK ] +TEST: lan2: Unicast IPv4 to unknown MAC address, allmulti [ OK ] +TEST: lan2: Multicast IPv4 to joined group [ OK ] +TEST: lan2: Multicast IPv4 to unknown group [XFAIL] + reception succeeded, but should have failed +TEST: lan2: Multicast IPv4 to unknown group, promisc [ OK ] +TEST: lan2: Multicast IPv4 to unknown group, allmulti [ OK ] +TEST: lan2: Multicast IPv6 to joined group [ OK ] +TEST: lan2: Multicast IPv6 to unknown group [XFAIL] + reception succeeded, but should have failed +TEST: lan2: Multicast IPv6 to unknown group, promisc [ OK ] +TEST: lan2: Multicast IPv6 to unknown group, allmulti [ OK ] +TEST: lan2: 1588v2 over L2 transport, Sync [ OK ] +TEST: lan2: 1588v2 over L2 transport, Follow-Up [ OK ] +TEST: lan2: 1588v2 over L2 transport, Peer Delay Request [ OK ] +TEST: lan2: 1588v2 over IPv4, Sync [FAIL] + reception failed +TEST: lan2: 1588v2 over IPv4, Follow-Up [FAIL] + reception failed +TEST: lan2: 1588v2 over IPv4, Peer Delay Request [FAIL] + reception failed +TEST: lan2: 1588v2 over IPv6, Sync [FAIL] + reception failed +TEST: lan2: 1588v2 over IPv6, Follow-Up [FAIL] + reception failed +TEST: lan2: 1588v2 over IPv6, Peer Delay Request [FAIL] + reception failed +TEST: vlan_filtering=0 bridge: Unicast IPv4 to primary MAC address [FAIL] + reception failed +TEST: vlan_filtering=0 bridge: Unicast IPv4 to macvlan MAC address [FAIL] + reception failed +TEST: vlan_filtering=0 bridge: Unicast IPv4 to unknown MAC address [FAIL] + reception failed +TEST: vlan_filtering=0 bridge: Unicast IPv4 to unknown MAC address, promisc [FAIL] + reception failed +TEST: vlan_filtering=0 bridge: Unicast IPv4 to unknown MAC address, allmulti [FAIL] + reception failed +TEST: vlan_filtering=0 bridge: Multicast IPv4 to joined group [FAIL] + reception failed +TEST: vlan_filtering=0 bridge: Multicast IPv4 to unknown group [ OK ] +TEST: vlan_filtering=0 bridge: Multicast IPv4 to unknown group, promisc [FAIL] + reception failed +TEST: vlan_filtering=0 bridge: Multicast IPv4 to unknown group, allmulti [FAIL] + reception failed +TEST: vlan_filtering=0 bridge: Multicast IPv6 to joined group [FAIL] + reception failed +TEST: vlan_filtering=0 bridge: Multicast IPv6 to unknown group [ OK ] +TEST: vlan_filtering=0 bridge: Multicast IPv6 to unknown group, 
promisc [FAIL] + reception failed +TEST: vlan_filtering=0 bridge: Multicast IPv6 to unknown group, allmulti [FAIL] + reception failed +TEST: vlan_filtering=1 bridge: Unicast IPv4 to primary MAC address [ OK ] +TEST: vlan_filtering=1 bridge: Unicast IPv4 to macvlan MAC address [ OK ] +TEST: vlan_filtering=1 bridge: Unicast IPv4 to unknown MAC address [ OK ] +TEST: vlan_filtering=1 bridge: Unicast IPv4 to unknown MAC address, promisc [ OK ] +TEST: vlan_filtering=1 bridge: Unicast IPv4 to unknown MAC address, allmulti [ OK ] +TEST: vlan_filtering=1 bridge: Multicast IPv4 to joined group [ OK ] +TEST: vlan_filtering=1 bridge: Multicast IPv4 to unknown group [XFAIL] + reception succeeded, but should have failed +TEST: vlan_filtering=1 bridge: Multicast IPv4 to unknown group, promisc [ OK ] +TEST: vlan_filtering=1 bridge: Multicast IPv4 to unknown group, allmulti [ OK ] +TEST: vlan_filtering=1 bridge: Multicast IPv6 to joined group [ OK ] +TEST: vlan_filtering=1 bridge: Multicast IPv6 to unknown group [XFAIL] + reception succeeded, but should have failed +TEST: vlan_filtering=1 bridge: Multicast IPv6 to unknown group, promisc [ OK ] +TEST: vlan_filtering=1 bridge: Multicast IPv6 to unknown group, allmulti [ OK ] +TEST: VLAN upper: Unicast IPv4 to primary MAC address [ OK ] +TEST: VLAN upper: Unicast IPv4 to macvlan MAC address [ OK ] +TEST: VLAN upper: Unicast IPv4 to unknown MAC address [ OK ] +TEST: VLAN upper: Unicast IPv4 to unknown MAC address, promisc [ OK ] +TEST: VLAN upper: Unicast IPv4 to unknown MAC address, allmulti [ OK ] +TEST: VLAN upper: Multicast IPv4 to joined group [ OK ] +TEST: VLAN upper: Multicast IPv4 to unknown group [XFAIL] + reception succeeded, but should have failed +TEST: VLAN upper: Multicast IPv4 to unknown group, promisc [ OK ] +TEST: VLAN upper: Multicast IPv4 to unknown group, allmulti [ OK ] +TEST: VLAN upper: Multicast IPv6 to joined group [ OK ] +TEST: VLAN upper: Multicast IPv6 to unknown group [XFAIL] + reception succeeded, but should have failed +TEST: VLAN upper: Multicast IPv6 to unknown group, promisc [ OK ] +TEST: VLAN upper: Multicast IPv6 to unknown group, allmulti [ OK ] +TEST: VLAN upper: 1588v2 over L2 transport, Sync [ OK ] +TEST: VLAN upper: 1588v2 over L2 transport, Follow-Up [ OK ] +TEST: VLAN upper: 1588v2 over L2 transport, Peer Delay Request [FAIL] + reception failed +TEST: VLAN upper: 1588v2 over IPv4, Sync [FAIL] + reception failed +TEST: VLAN upper: 1588v2 over IPv4, Follow-Up [FAIL] + reception failed +TEST: VLAN upper: 1588v2 over IPv4, Peer Delay Request [FAIL] + reception failed +TEST: VLAN upper: 1588v2 over IPv6, Sync [FAIL] + reception failed +TEST: VLAN upper: 1588v2 over IPv6, Follow-Up [FAIL] + reception failed +TEST: VLAN upper: 1588v2 over IPv6, Peer Delay Request [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridged port: Unicast IPv4 to primary MAC address [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridged port: Unicast IPv4 to macvlan MAC address [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridged port: Unicast IPv4 to unknown MAC address [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridged port: Unicast IPv4 to unknown MAC address, promisc [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridged port: Unicast IPv4 to unknown MAC address, allmulti [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridged port: Multicast IPv4 to joined group [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridged port: Multicast IPv4 to unknown 
group [ OK ] +TEST: VLAN over vlan_filtering=0 bridged port: Multicast IPv4 to unknown group, promisc [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridged port: Multicast IPv4 to unknown group, allmulti [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridged port: Multicast IPv6 to joined group [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridged port: Multicast IPv6 to unknown group [ OK ] +TEST: VLAN over vlan_filtering=0 bridged port: Multicast IPv6 to unknown group, promisc [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridged port: Multicast IPv6 to unknown group, allmulti [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridged port: 1588v2 over L2 transport, Sync [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridged port: 1588v2 over L2 transport, Follow-Up [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridged port: 1588v2 over L2 transport, Peer Delay Request [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridged port: 1588v2 over IPv4, Sync [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridged port: 1588v2 over IPv4, Follow-Up [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridged port: 1588v2 over IPv4, Peer Delay Request [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridged port: 1588v2 over IPv6, Sync [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridged port: 1588v2 over IPv6, Follow-Up [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridged port: 1588v2 over IPv6, Peer Delay Request [FAIL] + reception failed +TEST: VLAN over vlan_filtering=1 bridged port: Unicast IPv4 to primary MAC address [ OK ] +TEST: VLAN over vlan_filtering=1 bridged port: Unicast IPv4 to macvlan MAC address [ OK ] +TEST: VLAN over vlan_filtering=1 bridged port: Unicast IPv4 to unknown MAC address [FAIL] + reception succeeded, but should have failed +TEST: VLAN over vlan_filtering=1 bridged port: Unicast IPv4 to unknown MAC address, promisc [ OK ] +TEST: VLAN over vlan_filtering=1 bridged port: Unicast IPv4 to unknown MAC address, allmulti [FAIL] + reception succeeded, but should have failed +TEST: VLAN over vlan_filtering=1 bridged port: Multicast IPv4 to joined group [ OK ] +TEST: VLAN over vlan_filtering=1 bridged port: Multicast IPv4 to unknown group [XFAIL] + reception succeeded, but should have failed +TEST: VLAN over vlan_filtering=1 bridged port: Multicast IPv4 to unknown group, promisc [ OK ] +TEST: VLAN over vlan_filtering=1 bridged port: Multicast IPv4 to unknown group, allmulti [ OK ] +TEST: VLAN over vlan_filtering=1 bridged port: Multicast IPv6 to joined group [ OK ] +TEST: VLAN over vlan_filtering=1 bridged port: Multicast IPv6 to unknown group [XFAIL] + reception succeeded, but should have failed +TEST: VLAN over vlan_filtering=1 bridged port: Multicast IPv6 to unknown group, promisc [ OK ] +TEST: VLAN over vlan_filtering=1 bridged port: Multicast IPv6 to unknown group, allmulti [ OK ] +TEST: VLAN over vlan_filtering=1 bridged port: 1588v2 over L2 transport, Sync [ OK ] +TEST: VLAN over vlan_filtering=1 bridged port: 1588v2 over L2 transport, Follow-Up [ OK ] +TEST: VLAN over vlan_filtering=1 bridged port: 1588v2 over L2 transport, Peer Delay Request [FAIL] + reception failed +TEST: VLAN over vlan_filtering=1 bridged port: 1588v2 over IPv4, Sync [FAIL] + reception failed +TEST: VLAN over vlan_filtering=1 bridged port: 1588v2 over IPv4, Follow-Up [FAIL] + reception failed +TEST: VLAN over vlan_filtering=1 bridged 
port: 1588v2 over IPv4, Peer Delay Request [FAIL] + reception failed +TEST: VLAN over vlan_filtering=1 bridged port: 1588v2 over IPv6, Sync [FAIL] + reception failed +TEST: VLAN over vlan_filtering=1 bridged port: 1588v2 over IPv6, Follow-Up [FAIL] + reception failed +TEST: VLAN over vlan_filtering=1 bridged port: 1588v2 over IPv6, Peer Delay Request [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridge: Unicast IPv4 to primary MAC address [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridge: Unicast IPv4 to macvlan MAC address [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridge: Unicast IPv4 to unknown MAC address [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridge: Unicast IPv4 to unknown MAC address, promisc [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridge: Unicast IPv4 to unknown MAC address, allmulti [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridge: Multicast IPv4 to joined group [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridge: Multicast IPv4 to unknown group [ OK ] +TEST: VLAN over vlan_filtering=0 bridge: Multicast IPv4 to unknown group, promisc [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridge: Multicast IPv4 to unknown group, allmulti [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridge: Multicast IPv6 to joined group [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridge: Multicast IPv6 to unknown group [ OK ] +TEST: VLAN over vlan_filtering=0 bridge: Multicast IPv6 to unknown group, promisc [FAIL] + reception failed +TEST: VLAN over vlan_filtering=0 bridge: Multicast IPv6 to unknown group, allmulti [FAIL] + reception failed +TEST: VLAN over vlan_filtering=1 bridge: Unicast IPv4 to primary MAC address [ OK ] +TEST: VLAN over vlan_filtering=1 bridge: Unicast IPv4 to macvlan MAC address [ OK ] +TEST: VLAN over vlan_filtering=1 bridge: Unicast IPv4 to unknown MAC address [ OK ] +TEST: VLAN over vlan_filtering=1 bridge: Unicast IPv4 to unknown MAC address, promisc [ OK ] +TEST: VLAN over vlan_filtering=1 bridge: Unicast IPv4 to unknown MAC address, allmulti [ OK ] +TEST: VLAN over vlan_filtering=1 bridge: Multicast IPv4 to joined group [ OK ] +TEST: VLAN over vlan_filtering=1 bridge: Multicast IPv4 to unknown group [XFAIL] + reception succeeded, but should have failed +TEST: VLAN over vlan_filtering=1 bridge: Multicast IPv4 to unknown group, promisc [ OK ] +TEST: VLAN over vlan_filtering=1 bridge: Multicast IPv4 to unknown group, allmulti [ OK ] +TEST: VLAN over vlan_filtering=1 bridge: Multicast IPv6 to joined group [ OK ] +TEST: VLAN over vlan_filtering=1 bridge: Multicast IPv6 to unknown group [XFAIL] + reception succeeded, but should have failed +TEST: VLAN over vlan_filtering=1 bridge: Multicast IPv6 to unknown group, promisc [ OK ] +TEST: VLAN over vlan_filtering=1 bridge: Multicast IPv6 to unknown group, allmulti [ OK ] +Standalone switch ports: Sending packets +Standalone switch ports: Checking which packets were received +TEST: Standalone switch ports: Unicast non-IP untagged [ OK ] +TEST: Standalone switch ports: Multicast non-IP untagged [ OK ] +TEST: Standalone switch ports: Broadcast non-IP untagged [ OK ] +TEST: Standalone switch ports: Unicast IPv4 untagged [ OK ] +TEST: Standalone switch ports: Multicast IPv4 untagged [ OK ] +TEST: Standalone switch ports: Unicast IPv6 untagged [ OK ] +TEST: Standalone switch ports: Multicast IPv6 untagged [ OK ] +TEST: Standalone switch ports: Unicast non-IP VID 0 [ OK ] +TEST: 
Standalone switch ports: Multicast non-IP VID 0 [ OK ] +TEST: Standalone switch ports: Broadcast non-IP VID 0 [ OK ] +TEST: Standalone switch ports: Unicast IPv4 VID 0 [ OK ] +TEST: Standalone switch ports: Multicast IPv4 VID 0 [ OK ] +TEST: Standalone switch ports: Unicast IPv6 VID 0 [ OK ] +TEST: Standalone switch ports: Multicast IPv6 VID 0 [ OK ] +TEST: Standalone switch ports: Unicast non-IP VID 1 [ OK ] +TEST: Standalone switch ports: Multicast non-IP VID 1 [ OK ] +TEST: Standalone switch ports: Broadcast non-IP VID 1 [ OK ] +TEST: Standalone switch ports: Unicast IPv4 VID 1 [ OK ] +TEST: Standalone switch ports: Multicast IPv4 VID 1 [ OK ] +TEST: Standalone switch ports: Unicast IPv6 VID 1 [ OK ] +TEST: Standalone switch ports: Multicast IPv6 VID 1 [ OK ] +TEST: Standalone switch ports: Unicast non-IP VID 2 [ OK ] +TEST: Standalone switch ports: Multicast non-IP VID 2 [ OK ] +TEST: Standalone switch ports: Broadcast non-IP VID 2 [ OK ] +TEST: Standalone switch ports: Unicast IPv4 VID 2 [ OK ] +TEST: Standalone switch ports: Multicast IPv4 VID 2 [ OK ] +TEST: Standalone switch ports: Unicast IPv6 VID 2 [ OK ] +TEST: Standalone switch ports: Multicast IPv6 VID 2 [ OK ] +TEST: Standalone switch ports: Unicast non-IP VID 5 [ OK ] +TEST: Standalone switch ports: Multicast non-IP VID 5 [ OK ] +TEST: Standalone switch ports: Broadcast non-IP VID 5 [ OK ] +TEST: Standalone switch ports: Unicast IPv4 VID 5 [ OK ] +TEST: Standalone switch ports: Multicast IPv4 VID 5 [ OK ] +TEST: Standalone switch ports: Unicast IPv6 VID 5 [ OK ] +TEST: Standalone switch ports: Multicast IPv6 VID 5 [ OK ] +TEST: Standalone switch ports: Unicast non-IP VID 10 [ OK ] +TEST: Standalone switch ports: Multicast non-IP VID 10 [ OK ] +TEST: Standalone switch ports: Broadcast non-IP VID 10 [ OK ] +TEST: Standalone switch ports: Unicast IPv4 VID 10 [ OK ] +TEST: Standalone switch ports: Multicast IPv4 VID 10 [ OK ] +TEST: Standalone switch ports: Unicast IPv6 VID 10 [ OK ] +TEST: Standalone switch ports: Multicast IPv6 VID 10 [ OK ] +TEST: Standalone switch ports: Unicast non-IP VID 20 [ OK ] +TEST: Standalone switch ports: Multicast non-IP VID 20 [ OK ] +TEST: Standalone switch ports: Broadcast non-IP VID 20 [ OK ] +TEST: Standalone switch ports: Unicast IPv4 VID 20 [ OK ] +TEST: Standalone switch ports: Multicast IPv4 VID 20 [ OK ] +TEST: Standalone switch ports: Unicast IPv6 VID 20 [ OK ] +TEST: Standalone switch ports: Multicast IPv6 VID 20 [ OK ] +TEST: Standalone switch ports: Unicast non-IP VID 50 [ OK ] +TEST: Standalone switch ports: Multicast non-IP VID 50 [ OK ] +TEST: Standalone switch ports: Broadcast non-IP VID 50 [ OK ] +TEST: Standalone switch ports: Unicast IPv4 VID 50 [ OK ] +TEST: Standalone switch ports: Multicast IPv4 VID 50 [ OK ] +TEST: Standalone switch ports: Unicast IPv6 VID 50 [ OK ] +TEST: Standalone switch ports: Multicast IPv6 VID 50 [ OK ] +TEST: Standalone switch ports: Unicast non-IP VID 100 [ OK ] +TEST: Standalone switch ports: Multicast non-IP VID 100 [ OK ] +TEST: Standalone switch ports: Broadcast non-IP VID 100 [ OK ] +TEST: Standalone switch ports: Unicast IPv4 VID 100 [ OK ] +TEST: Standalone switch ports: Multicast IPv4 VID 100 [ OK ] +TEST: Standalone switch ports: Unicast IPv6 VID 100 [ OK ] +TEST: Standalone switch ports: Multicast IPv6 VID 100 [ OK ] +TEST: Standalone switch ports: Unicast non-IP VID 200 [ OK ] +TEST: Standalone switch ports: Multicast non-IP VID 200 [ OK ] +TEST: Standalone switch ports: Broadcast non-IP VID 200 [ OK ] +TEST: Standalone switch ports: 
Unicast IPv4 VID 200 [ OK ] +TEST: Standalone switch ports: Multicast IPv4 VID 200 [ OK ] +TEST: Standalone switch ports: Unicast IPv6 VID 200 [ OK ] +TEST: Standalone switch ports: Multicast IPv6 VID 200 [ OK ] +TEST: Standalone switch ports: Unicast non-IP VID 500 [ OK ] +TEST: Standalone switch ports: Multicast non-IP VID 500 [ OK ] +TEST: Standalone switch ports: Broadcast non-IP VID 500 [ OK ] +TEST: Standalone switch ports: Unicast IPv4 VID 500 [ OK ] +TEST: Standalone switch ports: Multicast IPv4 VID 500 [ OK ] +TEST: Standalone switch ports: Unicast IPv6 VID 500 [ OK ] +TEST: Standalone switch ports: Multicast IPv6 VID 500 [ OK ] +TEST: Standalone switch ports: Unicast non-IP VID 1000 [ OK ] +TEST: Standalone switch ports: Multicast non-IP VID 1000 [ OK ] +TEST: Standalone switch ports: Broadcast non-IP VID 1000 [ OK ] +TEST: Standalone switch ports: Unicast IPv4 VID 1000 [ OK ] +TEST: Standalone switch ports: Multicast IPv4 VID 1000 [ OK ] +TEST: Standalone switch ports: Unicast IPv6 VID 1000 [ OK ] +TEST: Standalone switch ports: Multicast IPv6 VID 1000 [ OK ] +TEST: Standalone switch ports: Unicast non-IP VID 1000 [ OK ] +TEST: Standalone switch ports: Multicast non-IP VID 1000 [ OK ] +TEST: Standalone switch ports: Broadcast non-IP VID 1000 [ OK ] +TEST: Standalone switch ports: Unicast IPv4 VID 1000 [ OK ] +TEST: Standalone switch ports: Multicast IPv4 VID 1000 [ OK ] +TEST: Standalone switch ports: Unicast IPv6 VID 1000 [ OK ] +TEST: Standalone switch ports: Multicast IPv6 VID 1000 [ OK ] +TEST: Standalone switch ports: Unicast non-IP VID 2000 [ OK ] +TEST: Standalone switch ports: Multicast non-IP VID 2000 [ OK ] +TEST: Standalone switch ports: Broadcast non-IP VID 2000 [ OK ] +TEST: Standalone switch ports: Unicast IPv4 VID 2000 [ OK ] +TEST: Standalone switch ports: Multicast IPv4 VID 2000 [ OK ] +TEST: Standalone switch ports: Unicast IPv6 VID 2000 [ OK ] +TEST: Standalone switch ports: Multicast IPv6 VID 2000 [ OK ] +TEST: Standalone switch ports: Unicast non-IP VID 4000 [ OK ] +TEST: Standalone switch ports: Multicast non-IP VID 4000 [ OK ] +TEST: Standalone switch ports: Broadcast non-IP VID 4000 [ OK ] +TEST: Standalone switch ports: Unicast IPv4 VID 4000 [ OK ] +TEST: Standalone switch ports: Multicast IPv4 VID 4000 [ OK ] +TEST: Standalone switch ports: Unicast IPv6 VID 4000 [ OK ] +TEST: Standalone switch ports: Multicast IPv6 VID 4000 [ OK ] +TEST: Standalone switch ports: Unicast non-IP VID 4094 [ OK ] +TEST: Standalone switch ports: Multicast non-IP VID 4094 [ OK ] +TEST: Standalone switch ports: Broadcast non-IP VID 4094 [ OK ] +TEST: Standalone switch ports: Unicast IPv4 VID 4094 [ OK ] +TEST: Standalone switch ports: Multicast IPv4 VID 4094 [ OK ] +TEST: Standalone switch ports: Unicast IPv6 VID 4094 [ OK ] +TEST: Standalone switch ports: Multicast IPv6 VID 4094 [ OK ] +Switch ports in different bridges: Sending packets +Switch ports in different bridges: Checking which packets were received +TEST: Switch ports in different bridges: Unicast non-IP untagged [ OK ] +TEST: Switch ports in different bridges: Multicast non-IP untagged [ OK ] +TEST: Switch ports in different bridges: Broadcast non-IP untagged [ OK ] +TEST: Switch ports in different bridges: Unicast IPv4 untagged [ OK ] +TEST: Switch ports in different bridges: Multicast IPv4 untagged [ OK ] +TEST: Switch ports in different bridges: Unicast IPv6 untagged [ OK ] +TEST: Switch ports in different bridges: Multicast IPv6 untagged [ OK ] +TEST: Switch ports in different bridges: Unicast non-IP VID 0 [ OK 
] +TEST: Switch ports in different bridges: Multicast non-IP VID 0 [ OK ] +TEST: Switch ports in different bridges: Broadcast non-IP VID 0 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv4 VID 0 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv4 VID 0 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv6 VID 0 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv6 VID 0 [ OK ] +TEST: Switch ports in different bridges: Unicast non-IP VID 1 [ OK ] +TEST: Switch ports in different bridges: Multicast non-IP VID 1 [ OK ] +TEST: Switch ports in different bridges: Broadcast non-IP VID 1 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv4 VID 1 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv4 VID 1 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv6 VID 1 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv6 VID 1 [ OK ] +TEST: Switch ports in different bridges: Unicast non-IP VID 2 [ OK ] +TEST: Switch ports in different bridges: Multicast non-IP VID 2 [ OK ] +TEST: Switch ports in different bridges: Broadcast non-IP VID 2 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv4 VID 2 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv4 VID 2 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv6 VID 2 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv6 VID 2 [ OK ] +TEST: Switch ports in different bridges: Unicast non-IP VID 5 [ OK ] +TEST: Switch ports in different bridges: Multicast non-IP VID 5 [ OK ] +TEST: Switch ports in different bridges: Broadcast non-IP VID 5 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv4 VID 5 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv4 VID 5 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv6 VID 5 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv6 VID 5 [ OK ] +TEST: Switch ports in different bridges: Unicast non-IP VID 10 [ OK ] +TEST: Switch ports in different bridges: Multicast non-IP VID 10 [ OK ] +TEST: Switch ports in different bridges: Broadcast non-IP VID 10 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv4 VID 10 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv4 VID 10 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv6 VID 10 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv6 VID 10 [ OK ] +TEST: Switch ports in different bridges: Unicast non-IP VID 20 [ OK ] +TEST: Switch ports in different bridges: Multicast non-IP VID 20 [ OK ] +TEST: Switch ports in different bridges: Broadcast non-IP VID 20 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv4 VID 20 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv4 VID 20 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv6 VID 20 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv6 VID 20 [ OK ] +TEST: Switch ports in different bridges: Unicast non-IP VID 50 [ OK ] +TEST: Switch ports in different bridges: Multicast non-IP VID 50 [ OK ] +TEST: Switch ports in different bridges: Broadcast non-IP VID 50 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv4 VID 50 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv4 VID 50 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv6 VID 50 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv6 VID 50 [ OK ] +TEST: Switch ports in different bridges: Unicast non-IP VID 100 [ OK ] +TEST: Switch ports in different bridges: Multicast non-IP VID 100 [ OK ] +TEST: Switch ports in 
different bridges: Broadcast non-IP VID 100 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv4 VID 100 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv4 VID 100 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv6 VID 100 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv6 VID 100 [ OK ] +TEST: Switch ports in different bridges: Unicast non-IP VID 200 [ OK ] +TEST: Switch ports in different bridges: Multicast non-IP VID 200 [ OK ] +TEST: Switch ports in different bridges: Broadcast non-IP VID 200 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv4 VID 200 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv4 VID 200 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv6 VID 200 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv6 VID 200 [ OK ] +TEST: Switch ports in different bridges: Unicast non-IP VID 500 [ OK ] +TEST: Switch ports in different bridges: Multicast non-IP VID 500 [ OK ] +TEST: Switch ports in different bridges: Broadcast non-IP VID 500 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv4 VID 500 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv4 VID 500 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv6 VID 500 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv6 VID 500 [ OK ] +TEST: Switch ports in different bridges: Unicast non-IP VID 1000 [ OK ] +TEST: Switch ports in different bridges: Multicast non-IP VID 1000 [ OK ] +TEST: Switch ports in different bridges: Broadcast non-IP VID 1000 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv4 VID 1000 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv4 VID 1000 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv6 VID 1000 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv6 VID 1000 [ OK ] +TEST: Switch ports in different bridges: Unicast non-IP VID 1000 [ OK ] +TEST: Switch ports in different bridges: Multicast non-IP VID 1000 [ OK ] +TEST: Switch ports in different bridges: Broadcast non-IP VID 1000 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv4 VID 1000 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv4 VID 1000 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv6 VID 1000 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv6 VID 1000 [ OK ] +TEST: Switch ports in different bridges: Unicast non-IP VID 2000 [ OK ] +TEST: Switch ports in different bridges: Multicast non-IP VID 2000 [ OK ] +TEST: Switch ports in different bridges: Broadcast non-IP VID 2000 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv4 VID 2000 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv4 VID 2000 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv6 VID 2000 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv6 VID 2000 [ OK ] +TEST: Switch ports in different bridges: Unicast non-IP VID 4000 [ OK ] +TEST: Switch ports in different bridges: Multicast non-IP VID 4000 [ OK ] +TEST: Switch ports in different bridges: Broadcast non-IP VID 4000 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv4 VID 4000 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv4 VID 4000 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv6 VID 4000 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv6 VID 4000 [ OK ] +TEST: Switch ports in different bridges: Unicast non-IP VID 4094 [ OK ] +TEST: Switch ports in different bridges: Multicast non-IP VID 4094 [ OK ] +TEST: Switch ports 
in different bridges: Broadcast non-IP VID 4094 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv4 VID 4094 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv4 VID 4094 [ OK ] +TEST: Switch ports in different bridges: Unicast IPv6 VID 4094 [ OK ] +TEST: Switch ports in different bridges: Multicast IPv6 VID 4094 [ OK ] +Switch ports in VLAN-aware bridge with different PVIDs: Sending packets +Switch ports in VLAN-aware bridge with different PVIDs: Checking which packets were received +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast non-IP untagged [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast non-IP untagged [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Broadcast non-IP untagged [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv4 untagged [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv4 untagged [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv6 untagged [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv6 untagged [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast non-IP VID 0 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast non-IP VID 0 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Broadcast non-IP VID 0 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv4 VID 0 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv4 VID 0 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv6 VID 0 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv6 VID 0 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast non-IP VID 1 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast non-IP VID 1 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Broadcast non-IP VID 1 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv4 VID 1 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv4 VID 1 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv6 VID 1 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv6 VID 1 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast non-IP VID 2 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast non-IP VID 2 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Broadcast non-IP VID 2 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv4 VID 2 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv4 VID 2 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv6 VID 2 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv6 VID 2 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast non-IP VID 5 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast non-IP VID 5 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Broadcast non-IP VID 5 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv4 VID 5 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv4 VID 5 [ OK ] +TEST: Switch 
ports in VLAN-aware bridge with different PVIDs: Unicast IPv6 VID 5 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv6 VID 5 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast non-IP VID 10 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast non-IP VID 10 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Broadcast non-IP VID 10 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv4 VID 10 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv4 VID 10 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv6 VID 10 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv6 VID 10 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast non-IP VID 20 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast non-IP VID 20 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Broadcast non-IP VID 20 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv4 VID 20 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv4 VID 20 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv6 VID 20 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv6 VID 20 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast non-IP VID 50 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast non-IP VID 50 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Broadcast non-IP VID 50 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv4 VID 50 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv4 VID 50 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv6 VID 50 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv6 VID 50 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast non-IP VID 100 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast non-IP VID 100 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Broadcast non-IP VID 100 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv4 VID 100 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv4 VID 100 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv6 VID 100 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv6 VID 100 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast non-IP VID 200 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast non-IP VID 200 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Broadcast non-IP VID 200 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv4 VID 200 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv4 VID 200 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv6 VID 200 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv6 VID 200 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast non-IP VID 500 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different 
PVIDs: Multicast non-IP VID 500 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Broadcast non-IP VID 500 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv4 VID 500 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv4 VID 500 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv6 VID 500 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv6 VID 500 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast non-IP VID 1000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast non-IP VID 1000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Broadcast non-IP VID 1000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv4 VID 1000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv4 VID 1000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv6 VID 1000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv6 VID 1000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast non-IP VID 1000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast non-IP VID 1000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Broadcast non-IP VID 1000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv4 VID 1000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv4 VID 1000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv6 VID 1000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv6 VID 1000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast non-IP VID 2000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast non-IP VID 2000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Broadcast non-IP VID 2000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv4 VID 2000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv4 VID 2000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv6 VID 2000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv6 VID 2000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast non-IP VID 4000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast non-IP VID 4000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Broadcast non-IP VID 4000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv4 VID 4000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv4 VID 4000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv6 VID 4000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv6 VID 4000 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast non-IP VID 4094 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast non-IP VID 4094 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Broadcast non-IP VID 4094 [ OK ] +TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv4 VID 4094 [ OK ] +TEST: Switch ports in VLAN-aware bridge 
with different PVIDs: Multicast IPv4 VID 4094 [ OK ]
+TEST: Switch ports in VLAN-aware bridge with different PVIDs: Unicast IPv6 VID 4094 [ OK ]
+TEST: Switch ports in VLAN-aware bridge with different PVIDs: Multicast IPv6 VID 4094 [ OK ]
+TEST: ncat not installed [SKIP]
+TEST: isochron not installed [SKIP]
+Cannot find device "br0"
+2% complete...
+4% complete...
+6% complete...
+8% complete...
+10% complete...
+12% complete...
+14% complete...
+16% complete...
+18% complete...
+20% complete...
+22% complete...
+24% complete...
+26% complete...
+28% complete...
+30% complete...
+32% complete...
+34% complete...
+36% complete...
+38% complete...
+40% complete...
+42% complete...
+44% complete...
+46% complete...
+48% complete...
+50% complete...
+52% complete...
+54% complete...
+56% complete...
+58% complete...
+60% complete...
+62% complete...
+64% complete...
+66% complete...
+68% complete...
+70% complete...
+72% complete...
+74% complete...
+76% complete...
+78% complete...
+80% complete...
+82% complete...
+84% complete...
+86% complete...
+88% complete...
+90% complete...
+92% complete...
+94% complete...
+96% complete...
+98% complete...
+100% complete...
+Cleaning up
+Please check kernel log for errors
diff --git a/scripts/selftest.xml b/scripts/selftest.xml
new file mode 100644
index 000000000..afaa546ba
--- /dev/null
+++ b/scripts/selftest.xml
@@ -0,0 +1,847 @@
+<!-- The XML markup of this 847-line JUnit-style results file was lost when
+     the document was extracted; only the failure/skip message strings
+     survive. The distinct messages, in order of first appearance:
+     "Ping did not work after unlocking port and removing FDB entry",
+     "Ping6 did not work after unlocking port and removing FDB entry",
+     "Ping through vlan did not work after unlocking port and removing FDB entry",
+     "Ping did not work after replacing FDB entry",
+     "No locked entry on first injection",
+     "Failed to enable MAB when port is locked and has learning enabled",
+     "Failed to create locked FDB entry on second port",
+     "Locked entry not created after deleting filter",
+     "Wrong number of entries in br-test2" (x4),
+     "Packet from invalid source received on H2 after deleting entry" (x4),
+     "Didn't forward traffic from S,G (2001:db8:1::3 / ::12 / ::30 / ::100, ff02::cc)" (x14),
+     "Found FDB record when should not" (x2),
+     "Packet was not flooded when should" (x3),
+     "Packet was not forwarded when it should",
+     "Packet was forwarded when should not",
+     many repeated "reception failed" entries, and two
+     "reception succeeded, but should have failed" entries. -->
diff --git a/scripts/validate_firmware.py b/scripts/validate_firmware.py
new file mode 100644
index 000000000..65a771597
--- /dev/null
+++ b/scripts/validate_firmware.py
@@ -0,0 +1,123 @@
+#!/usr/bin/env python3
+"""
+Simple Firmware Validation Helper Script
+
+This script helps validate OpenWrt firmware URLs before submitting them
+to the kernel selftests workflow. It checks URL accessibility and basic
+file properties.
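+
+Usage (mirrors the argparse definition below):
+    python3 validate_firmware.py <firmware-url> <device-name>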
+""" + +import sys +import requests +import argparse +from pathlib import Path + + +class SimpleFirmwareValidator: + """Simple validator for OpenWrt firmware URLs""" + + SUPPORTED_EXTENSIONS = [".bin", ".img", ".gz", ".xz"] + SUPPORTED_DEVICES = ["bananapi_bpi-r64-kernel"] + MIN_SIZE = 1024 * 1024 # 1MB minimum + MAX_SIZE = 512 * 1024 * 1024 # 512MB maximum + + def __init__(self): + self.session = requests.Session() + self.session.headers.update( + {"User-Agent": "OpenWrt-Tests-Firmware-Validator/1.0"} + ) + + def validate_url(self, url): + """Validate firmware URL accessibility and basic properties""" + try: + print(f"Checking URL: {url}") + response = self.session.head(url, timeout=30, allow_redirects=True) + + if response.status_code != 200: + return False, f"URL returned status {response.status_code}" + + # Check content length if available + content_length = response.headers.get("content-length") + if content_length: + size = int(content_length) + if size < self.MIN_SIZE: + return False, f"File too small: {size} bytes (min: {self.MIN_SIZE})" + if size > self.MAX_SIZE: + return False, f"File too large: {size} bytes (max: {self.MAX_SIZE})" + print(f"File size: {size:,} bytes ({size / (1024 * 1024):.1f} MB)") + + # Check file extension + filename = Path(response.url).name.lower() + if not any(filename.endswith(ext) for ext in self.SUPPORTED_EXTENSIONS): + return ( + False, + f"Unsupported file extension. Supported: {', '.join(self.SUPPORTED_EXTENSIONS)}", + ) + + print(f"Filename: {filename}") + print(f"Content-Type: {response.headers.get('content-type', 'Unknown')}") + + return True, "URL validation passed" + + except requests.RequestException as e: + return False, f"Failed to access URL: {str(e)}" + + def validate_device(self, device): + """Validate device is supported""" + if device not in self.SUPPORTED_DEVICES: + return ( + False, + f"Device '{device}' not supported. 
Supported: {', '.join(self.SUPPORTED_DEVICES)}", + ) + return True, f"Device '{device}' is supported" + + +def main(): + parser = argparse.ArgumentParser( + description="Validate OpenWrt firmware URL for kernel selftests", + epilog=""" +Example: + python3 validate_firmware.py https://example.com/firmware.bin bananapi_bpi-r64-kernel + """, + ) + + parser.add_argument("url", help="Firmware download URL") + parser.add_argument("device", help="Target device name") + + args = parser.parse_args() + + validator = SimpleFirmwareValidator() + + print("=" * 60) + print("FIRMWARE VALIDATION") + print("=" * 60) + + # Validate device + device_valid, device_message = validator.validate_device(args.device) + print(f"Device: {device_message}") + if not device_valid: + print("❌ Device validation failed") + sys.exit(1) + + # Validate URL + url_valid, url_message = validator.validate_url(args.url) + print(f"URL: {url_message}") + if not url_valid: + print("❌ URL validation failed") + sys.exit(1) + + print("\n" + "=" * 60) + print("✅ VALIDATION PASSED") + print("=" * 60) + print("Your firmware URL is ready for kernel selftests!") + print("\nExample usage:") + print("```") + print("/test-kernel-selftests") + print(f"device: {args.device}") + print("command: make -C net run_tests") + print(f"firmware: {args.url}") + print("```") + + +if __name__ == "__main__": + main() diff --git a/targets/arcadyan_vgv7510kw22.yaml b/targets/arcadyan_vgv7510kw22.yaml new file mode 100644 index 000000000..50834b6f3 --- /dev/null +++ b/targets/arcadyan_vgv7510kw22.yaml @@ -0,0 +1,34 @@ +targets: + main: + features: + - wifi + resources: + RemotePlace: + name: !template "$LG_PLACE" + drivers: + NetworkPowerDriver: {} + SerialDriver: + txdelay: 0.01 + ShellDriver: + prompt: 'root@[\w()]+:[^ ]+ ' + login_prompt: Please press Enter to activate this console. + await_login_timeout: 15 + login_timeout: 120 + post_login_settle_time: 5 + username: root + SmallUBootDriver: + boot_expression: "Autobooting in 1 seconds" + prompt: "ap143-2.0>" + boot_secret: "tpl" + init_commands: + - tftp 0x82000000 + UBootStrategy: {} + SSHDriver: + connection_timeout: 120.0 + explicit_scp_mode: True + +images: + root: !template $LG_IMAGE + +imports: + - ../strategies/tftpstrategy.py diff --git a/targets/bananapi_bpi-r64-kernel.yaml b/targets/bananapi_bpi-r64-kernel.yaml new file mode 100644 index 000000000..955ac4a2e --- /dev/null +++ b/targets/bananapi_bpi-r64-kernel.yaml @@ -0,0 +1,27 @@ +targets: + main: + features: + - kernel-selftests + resources: + RemotePlace: + name: !template "$LG_PLACE" + drivers: + PDUDaemonDriver: {} + SerialDriver: + txdelay: 0.01 + ShellDriver: + prompt: 'root@[\w()]+:[^ ]+ ' + login_prompt: Please press Enter to activate this console. 
+        await_login_timeout: 15
+        login_timeout: 120
+        post_login_settle_time: 5
+        username: root
+      USBSDMuxDriver: {}
+      USBStorageDriver: {}
+      SDMuxStrategy: {}
+
+images:
+  root: !template $LG_IMAGE
+
+imports:
+  - ../strategies/sdmuxstrategy.py
diff --git a/test-env.sh b/test-env.sh
new file mode 100755
index 000000000..b0b296ffc
--- /dev/null
+++ b/test-env.sh
@@ -0,0 +1,15 @@
+#!/bin/bash
+
+
+# source env/aparcar.env
+
+uv run labgrid-client lock
+
+uv run pytest tests/ \
+    --lg-log \
+    --log-cli-level=CONSOLE \
+    --lg-colored-steps \
+    --reportportal \
+    -s
+
+uv run labgrid-client unlock
diff --git a/test.log b/test.log
new file mode 100644
index 000000000..b9ae2009a
Binary files /dev/null and b/test.log differ
diff --git a/tests/kselftest/README.md b/tests/kselftest/README.md
new file mode 100644
index 000000000..f8c542f1f
--- /dev/null
+++ b/tests/kselftest/README.md
@@ -0,0 +1,160 @@
+# Kselftest Integration
+
+This directory contains pytest wrappers for running Linux kernel selftests (kselftests) on OpenWrt devices.
+
+## Overview
+
+Kselftests are the Linux kernel's built-in test suite. On OpenWrt, they are packaged as `kselftests-*` packages and installed to `/usr/libexec/kselftest/`.
+
+The pytest wrappers in this directory:
+
+1. Run kselftest binaries on the target device via labgrid
+2. Capture KTAP output from stdout
+3. Let the executor parse the KTAP output and report individual subtest results to KernelCI
+
+## KTAP Format
+
+Kselftests output results in [KTAP (Kernel Test Anything Protocol)](https://docs.kernel.org/dev-tools/ktap.html) format:
+
+```
+KTAP version 1
+1..3
+ok 1 - test_socket_create
+not ok 2 - test_socket_bind # SKIP requires CAP_NET_RAW
+ok 3 - test_socket_listen
+```
+
+Key elements:
+
+- `ok N` = test passed
+- `not ok N` = test failed
+- `# SKIP reason` = test was skipped
+- Nested subtests use 2-space indentation
+
+## Files
+
+- `conftest.py` - Pytest fixtures for running kselftests
+- `test_kselftest.py` - Test functions for each kselftest subsystem
+
+## Fixtures
+
+### `kselftest_runner`
+
+Runs an entire kselftest subsystem:
+
+```python
+def test_kselftest_net(kselftest_runner):
+    output = kselftest_runner("net", timeout=1800)
+    # KTAP output is printed to stdout and captured by executor
+```
+
+### `kselftest_single`
+
+Runs a single kselftest binary:
+
+```python
+def test_specific(kselftest_single):
+    output = kselftest_single("net", "reuseport_bpf", timeout=300)
+```
+
+## Test Plan Mapping
+
+Each kselftest subsystem has a corresponding test plan in `pipeline.yaml`:
+
+| Test Plan | Pytest Test | Kselftest Subsystem |
+|-----------|-------------|---------------------|
+| `kselftest_net` | `test_kselftest_net` | `/usr/libexec/kselftest/net/` |
+| `kselftest_timers` | `test_kselftest_timers` | `/usr/libexec/kselftest/timers/` |
+| `kselftest_rtc` | `test_kselftest_rtc` | `/usr/libexec/kselftest/rtc/` |
+| ... | ... | ... |
+
+## How Results Flow
+
+```
+Target Device          Host                       KernelCI
+┌─────────────────┐    ┌──────────────────────┐    ┌─────────────────┐
+│ /usr/libexec/   │    │ pytest + labgrid     │    │ API             │
+│ kselftest/net/  │───>│ captures stdout      │───>│ receives test   │
+│ (KTAP output)   │    │ (KTAP in stdout)     │    │ nodes           │
+└─────────────────┘    └──────────────────────┘    └─────────────────┘
+                                 │
+                                 ▼
+                       ┌──────────────────────┐
+                       │ executor.py          │
+                       │ _try_parse_ktap()    │
+                       │ - detects KTAP       │
+                       │ - parses subtests    │
+                       │ - creates TestResult │
+                       │   per subtest        │
+                       └──────────────────────┘
+```
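+
+For reference, the parsing step can be sketched in a few lines. This is a
+simplified stand-in for `_try_parse_ktap()` in `executor.py` (the real parser
+also handles nested subtests and other directives), shown here only to
+illustrate what the executor extracts from the captured stdout:
+
+```python
+import re
+
+RESULT_LINE = re.compile(r"^(ok|not ok) (\d+)(?: - (.*))?$")
+
+
+def parse_ktap(output):
+    """Yield (subtest_name, status) pairs from flat KTAP output."""
+    for line in output.splitlines():
+        match = RESULT_LINE.match(line.strip())
+        if not match:
+            continue  # version lines, plans ("1..N"), and comments are skipped
+        ok, number, name = match.groups()
+        name = name or f"test_{number}"
+        if "# SKIP" in name:
+            yield name.split("# SKIP")[0].strip(), "skip"
+        else:
+            yield name, "pass" if ok == "ok" else "fail"
+```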
+## Adding a New Kselftest Subsystem
+
+1. Add the kselftest package to `test_types.py` IMAGE_PROFILES
+2. Add a test plan to `pipeline.yaml`
+3. Add a test function to `test_kselftest.py`:
+
+```python
+class TestKselftestNewSubsystem:
+    def test_kselftest_newsubsystem(self, kselftest_runner):
+        output = kselftest_runner("newsubsystem", timeout=300)
+```
+
+## Troubleshooting
+
+### No KTAP output parsed
+
+Check the console log for warnings like:
+
+```
+WARNING: Kselftest 'net' output doesn't look like KTAP format
+```
+
+This means the kselftest ran but didn't produce parseable output. Possible causes:
+
+- Kselftest package not installed correctly
+- Test crashed before producing output
+- Test uses non-standard output format
+
+### Test times out
+
+Increase the timeout in the test function:
+
+```python
+output = kselftest_runner("net", timeout=3600)  # 1 hour
+```
+
+Also update `pipeline.yaml` timeout for the test plan.
+
+### Subsystem not found
+
+The test will be skipped with:
+
+```
+SKIPPED: Kselftest subsystem 'net' not installed
+```
+
+Ensure the `kselftests-net` package is included in the firmware image profile.
+
+## Required Packages
+
+The `kselftest` image profile in `test_types.py` includes:
+
+- `kselftests-net`
+- `kselftests-timers`
+- `kselftests-rtc`
+- `kselftests-clone3`
+- `kselftests-openat2`
+- `kselftests-exec`
+- `kselftests-mincore`
+- `kselftests-splice`
+- `kselftests-sync`
+- `kselftests-futex`
+- `kselftests-mqueue`
+- `kselftests-sigaltstack`
+- `kselftests-kcmp`
+- `kselftests-size`
+
+Plus dependencies like `bash`, `iproute2-full`, etc.
+
+## Device Capabilities
+
+Kselftest jobs require:
+
+- `serial_console` - Device must have serial console access
+- `isolated_network` - Required for network tests (prevents interference)
+
+Configure these in your lab's device definitions.
diff --git a/tests/kselftest/__init__.py b/tests/kselftest/__init__.py
new file mode 100644
index 000000000..af5a01e11
--- /dev/null
+++ b/tests/kselftest/__init__.py
@@ -0,0 +1 @@
+# Kselftest pytest wrappers
diff --git a/tests/kselftest/conftest.py b/tests/kselftest/conftest.py
new file mode 100644
index 000000000..2cc78e530
--- /dev/null
+++ b/tests/kselftest/conftest.py
@@ -0,0 +1,237 @@
+"""
+Pytest fixtures for running Linux kernel selftests (kselftests).
+
+These fixtures provide the ability to run kselftest binaries on
+the target device and capture their KTAP output for parsing.
+
+The KTAP output is captured via pytest's stdout capture mechanism,
+allowing the executor to parse individual subtest results.
+
+KTAP Output Format:
+    Kselftests output results in KTAP (Kernel Test Anything Protocol)
+    format, which looks like:
+
+        KTAP version 1
+        1..3
+        ok 1 - test_name_a
+        not ok 2 - test_name_b # SKIP reason
+        ok 3 - test_name_c
+
+    The executor parses this output to report individual subtest
+    results to KernelCI.
+
+See: https://docs.kernel.org/dev-tools/ktap.html
+"""
+
+import logging
+
+import pytest
+
+logger = logging.getLogger(__name__)
+
+# Path where kselftests are installed on OpenWrt
+KSELFTEST_PATH = "/usr/libexec/kselftest"
+
+
+class KselftestError(Exception):
+    """Error running kselftest."""
+
+    pass
+
+
+class KselftestTimeout(KselftestError):
+    """Kselftest execution timed out."""
+
+    pass
+
+
+def _validate_ktap_output(output: str, subsystem: str) -> None:
+    """
+    Validate that output looks like KTAP format.
+
+    Logs a warning if output doesn't contain expected KTAP markers.
+    This helps diagnose issues where tests run but don't produce
+    parseable output.
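+
+    For reference, output that passes this check begins like
+    (subtest names are illustrative):
+
+        KTAP version 1
+        1..2
+        ok 1 - subtest_a
+        not ok 2 - subtest_b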
+ """ + if not output or not output.strip(): + logger.warning( + f"Kselftest '{subsystem}' produced no output. " + "The test may have crashed or not be installed correctly." + ) + return + + ktap_markers = ["KTAP version", "TAP version", "1.."] + if not any(marker in output for marker in ktap_markers): + logger.warning( + f"Kselftest '{subsystem}' output doesn't look like KTAP format. " + "Subtest results may not be parsed correctly. " + f"Output starts with: {output[:100]!r}" + ) + + +@pytest.fixture +def kselftest_runner(shell_command): + """ + Fixture to run a kselftest subsystem and return raw output. + + The raw KTAP output is printed to stdout so it can be captured + by the test executor for parsing into individual subtest results. + + Usage: + def test_kselftest_net(kselftest_runner): + output = kselftest_runner("net") + # Output is also printed to stdout for KTAP parsing + + Raises: + pytest.skip: If the subsystem is not installed + KselftestTimeout: If execution times out + KselftestError: If execution fails unexpectedly + """ + + def _run(subsystem: str, timeout: int = 300) -> str: + """ + Run a kselftest subsystem. + + Args: + subsystem: The kselftest subsystem to run (e.g., "net", "timers") + timeout: Timeout in seconds for test execution + + Returns: + Raw output from the kselftest run (KTAP format) + """ + test_path = f"{KSELFTEST_PATH}/{subsystem}" + + # Check if the subsystem exists + try: + result = shell_command.run(f"test -d {test_path}") + if result[2] != 0: + pytest.skip(f"Kselftest subsystem '{subsystem}' not installed") + except Exception as e: + logger.error(f"Failed to check if '{subsystem}' exists: {e}") + pytest.skip(f"Cannot access kselftest path: {e}") + + # Determine how to run the tests + run_script = f"{test_path}/run_kselftest.sh" + try: + result = shell_command.run(f"test -f {run_script}") + has_run_script = result[2] == 0 + except Exception: + has_run_script = False + + if has_run_script: + cmd = f"cd {test_path} && ./run_kselftest.sh" + else: + # Run all executables in the directory + cmd = f"cd {test_path} && for t in *; do [ -x \"$t\" ] && ./$t; done" + + logger.info(f"Running kselftest: {cmd}") + + # Execute the tests + try: + output_lines, stderr_lines, exit_code = shell_command.run( + cmd, timeout=timeout + ) + output = "\n".join(output_lines) + except TimeoutError as e: + logger.error(f"Kselftest '{subsystem}' timed out after {timeout}s") + raise KselftestTimeout( + f"Kselftest '{subsystem}' timed out after {timeout}s" + ) from e + except Exception as e: + logger.error(f"Kselftest '{subsystem}' failed: {e}") + raise KselftestError(f"Kselftest '{subsystem}' failed: {e}") from e + + # Log exit code for debugging (kselftests may return non-zero for failures) + if exit_code != 0: + logger.info( + f"Kselftest '{subsystem}' exited with code {exit_code} " + "(non-zero is normal if some subtests failed)" + ) + + # Validate output format + _validate_ktap_output(output, subsystem) + + # Print output to stdout for KTAP capture by executor + print(output) + + return output + + return _run + + +@pytest.fixture +def kselftest_single(shell_command): + """ + Fixture to run a single kselftest binary. + + Useful for running individual tests within a subsystem. 
+ + Usage: + def test_specific_test(kselftest_single): + output = kselftest_single("net", "reuseport_bpf") + + Raises: + pytest.skip: If the test binary is not found + KselftestTimeout: If execution times out + KselftestError: If execution fails unexpectedly + """ + + def _run(subsystem: str, test_name: str, timeout: int = 300) -> str: + """ + Run a single kselftest binary. + + Args: + subsystem: The kselftest subsystem (e.g., "net") + test_name: Name of the test binary to run + timeout: Timeout in seconds + + Returns: + Raw output from the test (KTAP format) + """ + test_path = f"{KSELFTEST_PATH}/{subsystem}/{test_name}" + + # Check if test exists and is executable + try: + result = shell_command.run(f"test -x {test_path}") + if result[2] != 0: + pytest.skip(f"Kselftest '{subsystem}/{test_name}' not found") + except Exception as e: + logger.error(f"Failed to check if '{test_name}' exists: {e}") + pytest.skip(f"Cannot access kselftest: {e}") + + logger.info(f"Running kselftest: {test_path}") + + # Execute the test + try: + output_lines, stderr_lines, exit_code = shell_command.run( + test_path, timeout=timeout + ) + output = "\n".join(output_lines) + except TimeoutError as e: + logger.error( + f"Kselftest '{subsystem}/{test_name}' timed out after {timeout}s" + ) + raise KselftestTimeout( + f"Kselftest '{subsystem}/{test_name}' timed out" + ) from e + except Exception as e: + logger.error(f"Kselftest '{subsystem}/{test_name}' failed: {e}") + raise KselftestError( + f"Kselftest '{subsystem}/{test_name}' failed: {e}" + ) from e + + # Log exit code + if exit_code != 0: + logger.info( + f"Kselftest '{test_name}' exited with code {exit_code}" + ) + + # Validate output + _validate_ktap_output(output, f"{subsystem}/{test_name}") + + # Print output to stdout for KTAP capture + print(output) + + return output + + return _run diff --git a/tests/kselftest/test_kselftest.py b/tests/kselftest/test_kselftest.py new file mode 100644 index 000000000..012615c94 --- /dev/null +++ b/tests/kselftest/test_kselftest.py @@ -0,0 +1,203 @@ +""" +Linux kernel selftest (kselftest) test wrappers. + +These tests run kselftest subsystems on the target device and output +their results in KTAP format. The test executor parses the KTAP output +to extract individual subtest results for reporting to KernelCI. + +Each test function corresponds to a kselftest subsystem/category. +The raw KTAP output is captured via stdout and parsed by the executor. + +Test Plan Mapping: + kselftest_net -> test_kselftest_net + kselftest_timers -> test_kselftest_timers + kselftest_rtc -> test_kselftest_rtc + etc. + +Note: These tests always "pass" at the pytest level - the actual +pass/fail status is determined by parsing the KTAP output. This is +because a single kselftest run may have dozens of subtests, some +passing and some failing. +""" + +import pytest + + +class TestKselftestNet: + """Network subsystem kselftests.""" + + def test_kselftest_net(self, kselftest_runner): + """ + Run network kselftests. + + Includes tests for: + - Socket operations + - Network namespaces + - TCP/UDP functionality + - BPF networking + - etc. + + Requires: isolated_network capability + """ + output = kselftest_runner("net", timeout=1800) + # Output is captured for KTAP parsing + # We don't assert here - subtests are parsed from KTAP + + +class TestKselftestTimers: + """Timer subsystem kselftests.""" + + def test_kselftest_timers(self, kselftest_runner): + """ + Run timer kselftests. 
+ + Includes tests for: + - POSIX timers + - Clock operations + - Timer precision + """ + output = kselftest_runner("timers", timeout=600) + + +class TestKselftestRtc: + """RTC (Real-Time Clock) kselftests.""" + + def test_kselftest_rtc(self, kselftest_runner): + """ + Run RTC kselftests. + + Tests real-time clock functionality. + """ + output = kselftest_runner("rtc", timeout=300) + + +class TestKselftestClone3: + """clone3() syscall kselftests.""" + + def test_kselftest_clone3(self, kselftest_runner): + """ + Run clone3 kselftests. + + Tests the clone3() system call functionality. + """ + output = kselftest_runner("clone3", timeout=300) + + +class TestKselftestOpenat2: + """openat2() syscall kselftests.""" + + def test_kselftest_openat2(self, kselftest_runner): + """ + Run openat2 kselftests. + + Tests the openat2() system call functionality. + """ + output = kselftest_runner("openat2", timeout=300) + + +class TestKselftestExec: + """Exec subsystem kselftests.""" + + def test_kselftest_exec(self, kselftest_runner): + """ + Run exec kselftests. + + Tests execve() and related functionality. + """ + output = kselftest_runner("exec", timeout=300) + + +class TestKselftestMincore: + """mincore() syscall kselftests.""" + + def test_kselftest_mincore(self, kselftest_runner): + """ + Run mincore kselftests. + + Tests the mincore() system call. + """ + output = kselftest_runner("mincore", timeout=300) + + +class TestKselftestSplice: + """splice() syscall kselftests.""" + + def test_kselftest_splice(self, kselftest_runner): + """ + Run splice kselftests. + + Tests splice(), tee(), and vmsplice() system calls. + """ + output = kselftest_runner("splice", timeout=300) + + +class TestKselftestSync: + """Sync kselftests.""" + + def test_kselftest_sync(self, kselftest_runner): + """ + Run sync kselftests. + + Tests sync(), fsync(), and related functionality. + """ + output = kselftest_runner("sync", timeout=300) + + +class TestKselftestFutex: + """Futex kselftests.""" + + def test_kselftest_futex(self, kselftest_runner): + """ + Run futex kselftests. + + Tests futex operations for thread synchronization. + """ + output = kselftest_runner("futex", timeout=600) + + +class TestKselftestMqueue: + """POSIX message queue kselftests.""" + + def test_kselftest_mqueue(self, kselftest_runner): + """ + Run mqueue kselftests. + + Tests POSIX message queue functionality. + """ + output = kselftest_runner("mqueue", timeout=300) + + +class TestKselftestSigaltstack: + """sigaltstack() kselftests.""" + + def test_kselftest_sigaltstack(self, kselftest_runner): + """ + Run sigaltstack kselftests. + + Tests alternate signal stack functionality. + """ + output = kselftest_runner("sigaltstack", timeout=300) + + +class TestKselftestKcmp: + """kcmp() syscall kselftests.""" + + def test_kselftest_kcmp(self, kselftest_runner): + """ + Run kcmp kselftests. + + Tests the kcmp() system call for comparing processes. + """ + output = kselftest_runner("kcmp", timeout=300) + + +class TestKselftestSize: + """Size/memory kselftests.""" + + def test_kselftest_size(self, kselftest_runner): + """ + Run size kselftests. + + Tests related to memory sizes and limits. 
+ """ + output = kselftest_runner("size", timeout=300) diff --git a/tests/test_base.py b/tests/test_base.py index 1b6cf4787..3f3a8e8e9 100644 --- a/tests/test_base.py +++ b/tests/test_base.py @@ -19,7 +19,8 @@ def test_firmware_version(shell_command, record_property): expected_version = os.environ["FIRMWARE_VERSION"] record_property("expected_firmware_version", expected_version) assert actual_version == expected_version, ( - f"Firmware version mismatch: expected {expected_version}, got {actual_version}" + f"Firmware version mismatch: expected {expected_version}, " + f"got {actual_version}" ) diff --git a/tests/test_kernel_selftests.py b/tests/test_kernel_selftests.py new file mode 100644 index 000000000..2cb3d3fb9 --- /dev/null +++ b/tests/test_kernel_selftests.py @@ -0,0 +1,71 @@ +import os +import pytest + + +# @pytest.mark.lg_feature("kernel-selftests") +# def test_kernel_selftests_setup(shell_command): +# """Setup kernel selftests on the device""" +# # Check if device has internet connectivity +# shell_command.run_check("ping -c 1 8.8.8.8", timeout=30) + +# # Create selftests directory +# shell_command.run_check("mkdir -p /root/selftests") + +# # Download kernel selftests if not already present +# result = shell_command.run("ls /root/selftests/Makefile") +# if result[2] != 0: # File doesn't exist +# print("Downloading kernel selftests...") +# shell_command.run_check("cd /root && wget -q https://github.com/torvalds/linux/archive/refs/heads/master.tar.gz", timeout=300) +# shell_command.run_check("cd /root && tar -xzf master.tar.gz", timeout=120) +# shell_command.run_check("cd /root && cp -r linux-master/tools/testing/selftests/* selftests/", timeout=60) +# shell_command.run_check("cd /root && rm -rf master.tar.gz linux-master") + + +@pytest.mark.lg_feature("kernel-selftests") +def test_kernel_selftests_run(shell_command, record_property): + """Run kernel selftests with command from environment""" + + # Get the test command from environment variable + test_command = os.environ.get( + "SELFTESTS_COMMAND", 'echo "No test command specified"' + ) + record_property("selftests_command", test_command) + + print(f"Running kernel selftests command: {test_command}") + + # Change to selftests directory and run the command + full_command = f"cd /root/ && {test_command}" + + # Run with generous timeout (30 minutes) + result = shell_command.run(full_command, timeout=3600) + + # Record the output + output = "\n".join(result[1]) if isinstance(result[1], list) else str(result[1]) + record_property("selftests_output", output) + record_property("selftests_exit_code", result[2]) + + # Print output for workflow logs + print("=== KERNEL SELFTESTS OUTPUT ===") + print(output) + print("=== END OUTPUT ===") + + # The test passes if the command ran (exit code recorded for analysis) + # We don't fail the pytest test based on selftest results + print(f"Kernel selftests completed with exit code: {result[2]}") + + +@pytest.mark.lg_feature("kernel-selftests") +def test_kernel_selftests_cleanup(shell_command): + """Optional cleanup after kernel selftests""" + + # Check disk space after tests + result = shell_command.run("df -h /root") + if result[2] == 0: + print("Disk space after tests:") + print("\n".join(result[1]) if isinstance(result[1], list) else str(result[1])) + + # Optionally clean up large files to free space + cleanup = os.environ.get("SELFTESTS_CLEANUP", "false").lower() + if cleanup == "true": + print("Cleaning up selftests directory...") + shell_command.run("rm -rf /root/selftests") diff --git 
a/tests/test_lan.py b/tests/test_lan.py index caf6e2c2a..f6a7bd7f4 100644 --- a/tests/test_lan.py +++ b/tests/test_lan.py @@ -27,7 +27,3 @@ def test_lan_interface_address(shell_command): assert shell_command.get_ip_addresses("br-lan")[0] == IPv4Interface( "192.168.1.1/24" ) - - -def test_lan_interface_has_neighbor(shell_command): - assert "DUP!" in "\n".join(shell_command.run("ping -c 3 ff02::1%br-lan")[0]) diff --git a/tests/test_wan.py b/tests/test_wan.py index 87357693c..409c3529f 100644 --- a/tests/test_wan.py +++ b/tests/test_wan.py @@ -63,8 +63,9 @@ def test_https_download(ssh_command): remove=False, ) + expected_hash = "26b85383a138594b1197e581bd13c6825c0b6b5f23829870a6dbc5d37ccf6cd8" assert ( - "26b85383a138594b1197e581bd13c6825c0b6b5f23829870a6dbc5d37ccf6cd8 config.buildinfo" + f"{expected_hash} config.buildinfo" in ssh_command.run("sha256sum config.buildinfo")[0] ) ssh_command.run("rm config.buildinfo") @@ -103,7 +104,8 @@ def test_https_download(ssh_command): # check_download( # ssh_command, # "https://wrong.host.badssl.com/", -# expect_stderr="Connection error: Server hostname does not match SSL certificate", +# expect_stderr="Connection error: Server hostname does not match " +# "SSL certificate", # expect_exitcode=5, # ) diff --git a/tests/test_wifi.py b/tests/test_wifi.py index e1e6c7136..aa55660bb 100644 --- a/tests/test_wifi.py +++ b/tests/test_wifi.py @@ -99,9 +99,10 @@ def test_wifi_hwsim_sae_mixed(ssh_command): """ Test wifi configuration. - This test creates one AP and one station and checks if they can connect to each other. - It sets up the wireless configuration using the `ssh_command` fixture and relies on the - "hwsim" driver to create the virtual radios. + This test creates one AP and one station and checks if they can + connect to each other. It sets up the wireless configuration using + the `ssh_command` fixture and relies on the "hwsim" driver to + create the virtual radios. 
""" ssh_command.run("uci set wireless.radio0.channel=11") ssh_command.run("uci set wireless.radio0.band=2g") @@ -134,11 +135,11 @@ def test_wifi_hwsim_sae_mixed(ssh_command): ) # Wait till the client associated - assert "auth" in "\n".join( - ssh_command.run( - "ubus -t 20 subscribe hostapd.phy0-ap0 | grep '\"auth\":' | while read line; do echo auth && killall ubus; done" - )[0] + ubus_auth_cmd = ( + "ubus -t 20 subscribe hostapd.phy0-ap0 | grep '\"auth\":' | " + "while read line; do echo auth && killall ubus; done" ) + assert "auth" in "\n".join(ssh_command.run(ubus_auth_cmd)[0]) assert "Mode: Client Channel: 11 (2.462 GHz)" in "\n".join( ssh_command.run("iwinfo")[0] @@ -159,11 +160,11 @@ def test_wifi_hwsim_sae_mixed(ssh_command): ssh_command.run("service network reload") # Wait till the wifi client is removed - assert "disassoc" in "\n".join( - ssh_command.run( - "ubus -t 20 subscribe hostapd.phy0-ap0 | grep '\"disassoc\":' | while read line; do echo disassoc && killall ubus; done" - )[0] + ubus_disassoc_cmd = ( + "ubus -t 20 subscribe hostapd.phy0-ap0 | grep '\"disassoc\":' | " + "while read line; do echo disassoc && killall ubus; done" ) + assert "disassoc" in "\n".join(ssh_command.run(ubus_disassoc_cmd)[0]) # wait till network reload finished assert "timed out" not in "\n".join( @@ -175,11 +176,7 @@ def test_wifi_hwsim_sae_mixed(ssh_command): ) # Wait till the client associated - assert "auth" in "\n".join( - ssh_command.run( - "ubus -t 20 subscribe hostapd.phy0-ap0 | grep '\"auth\":' | while read line; do echo auth && killall ubus; done" - )[0] - ) + assert "auth" in "\n".join(ssh_command.run(ubus_auth_cmd)[0]) assert "expected throughput" in "\n".join( ssh_command.run("iwinfo phy0-ap0 assoclist")[0] diff --git a/tests_deas/README_NEW_TESTS.md b/tests_deas/README_NEW_TESTS.md new file mode 100644 index 000000000..3a1dfedb7 --- /dev/null +++ b/tests_deas/README_NEW_TESTS.md @@ -0,0 +1,226 @@ +# OpenWRT Comprehensive Test Suite + +This directory contains a comprehensive test suite for OpenWRT systems. These tests validate system health, performance, security, and functionality. + +## Test Categories + +### 1. System Health Tests (`test_system_health.py`) + +Tests for monitoring overall system health and resource usage. + +- **CPU Load Testing**: Validates CPU load is within acceptable limits +- **Memory Usage**: Checks for memory leaks and excessive usage +- **Filesystem Usage**: Monitors disk space on critical mount points +- **System Uptime**: Records and validates system uptime +- **Temperature Monitoring**: Checks thermal sensors if available +- **Kernel Error Detection**: Scans kernel logs for critical errors +- **Process Management**: Validates process counts and checks for zombies +- **Swap Usage**: Monitors swap usage if configured +- **Entropy Availability**: Ensures sufficient entropy for crypto operations +- **Time Synchronization**: Validates system time and NTP configuration +- **File Descriptors**: Monitors system-wide file descriptor usage +- **Memory Pressure Testing**: Tests system behavior under memory stress + +### 2. UCI Configuration Tests (`test_uci.py`) + +Tests for OpenWRT's Unified Configuration Interface. 
+ +- **Basic UCI Operations**: Tests get, set, delete commands +- **List Operations**: Validates UCI list handling +- **Revert Functionality**: Tests configuration rollback +- **Export/Import**: Validates configuration backup/restore +- **Batch Mode**: Tests batch configuration changes +- **Configuration Validation**: Validates system configurations +- **Change Tracking**: Tests UCI change detection +- **System Configuration**: Validates hostname, timezone settings +- **Permission Checks**: Verifies configuration file permissions +- **Stress Testing**: Tests UCI under heavy load + +### 3. Firewall and Security Tests (`test_firewall.py`) + +Comprehensive firewall and security validation. + +- **Service Status**: Validates firewall daemon is running +- **Zone Configuration**: Tests firewall zones (LAN/WAN) +- **Rule Validation**: Checks iptables/nftables rules +- **Port Forwarding**: Tests redirect rules +- **Custom Rules**: Validates custom firewall scripts +- **Logging Configuration**: Tests firewall logging +- **Default Policies**: Validates secure default policies +- **DDoS Protection**: Tests SYN flood protection +- **Packet Filtering**: Tests invalid packet handling +- **Zone Forwarding**: Validates inter-zone forwarding +- **Connection Tracking**: Tests conntrack functionality +- **MAC Filtering**: Tests MAC address filtering +- **IPv6 Firewall**: Validates IPv6 firewall rules + +### 4. System Services Tests (`test_services.py`) + +Tests for system daemons and services. + +- **Init System**: Validates procd functionality +- **Essential Services**: Tests core services (SSH, DNS, DHCP) +- **Service Scripts**: Validates init.d scripts +- **Boot Services**: Tests enabled services +- **Service Management**: Tests start/stop/restart +- **Cron Daemon**: Validates scheduled task execution +- **System Logging**: Tests logd functionality +- **DNS/DHCP Server**: Tests dnsmasq service +- **Web Server**: Validates uhttpd operation +- **NTP Service**: Tests time synchronization +- **Service Dependencies**: Validates boot order +- **Watchdog**: Tests hardware watchdog if available +- **Message Bus**: Tests ubus functionality + +### 5. Performance Tests (`test_performance.py`) + +System performance benchmarks and stress tests. 
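+
+For example, a hedged sketch of the measure-and-record pattern used throughout
+this file (`results_bag` as in the real tests; the 2-second bound is
+illustrative and should be tuned per target hardware):
+
+```python
+import statistics
+import time
+
+
+def test_command_latency_sketch(ssh_command, results_bag):
+    """Time a trivial command a few times and record the mean latency."""
+    samples = []
+    for _ in range(5):
+        start = time.time()
+        ssh_command.run_check("true")  # full round-trip through the SSH driver
+        samples.append(time.time() - start)
+
+    results_bag["command_latency_mean_s"] = statistics.mean(samples)
+    assert statistics.mean(samples) < 2.0, "device unusually slow to respond"
+```
+
+Covered areas include: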
+ +- **Boot Time**: Measures system boot performance +- **Memory Bandwidth**: Tests RAM read/write speeds +- **CPU Performance**: Basic CPU benchmarks +- **Network Throughput**: Loopback throughput testing +- **Filesystem Performance**: Tests I/O on different filesystems +- **Process Creation**: Measures fork/exec performance +- **Interrupt Handling**: Validates interrupt processing +- **Sustained Load**: Tests stability under load +- **Network Latency**: Measures latency and jitter +- **Concurrent Connections**: Tests connection handling +- **Memory Fragmentation**: Tests memory allocation +- **Cache Performance**: Validates filesystem caching + +## Running the Tests + +### Prerequisites + +- Python 3.8+ +- pytest +- SSH access to OpenWRT device +- Required Python packages (see pyproject.toml) + +### Basic Usage + +Run all tests: + +```bash +pytest tests/ +``` + +Run specific test category: + +```bash +pytest tests/test_system_health.py +``` + +Run with specific markers: + +```bash +# Run only slow tests +pytest -m slow + +# Skip slow tests +pytest -m "not slow" +``` + +### Environment Variables + +- `LG_ENV`: Device environment configuration +- `FIRMWARE_VERSION`: Expected firmware version for validation + +### Test Results + +Results are saved to: + +- `results.json`: Machine-readable test results + +## Test Markers + +- `@pytest.mark.slow`: Long-running tests (>30 seconds) +- `@pytest.mark.lg_feature`: Feature-specific tests + +## Adding New Tests + +1. Create test functions starting with `test_` +2. Use appropriate fixtures: + - `ssh_command`: For SSH command execution + - `shell_command`: For shell access + - `results_bag`: To store test results +3. Add assertions to validate expected behavior +4. Use meaningful test names and docstrings + +## Best Practices + +1. **Cleanup**: Always clean up test artifacts +2. **Idempotency**: Tests should be runnable multiple times +3. **Independence**: Tests should not depend on each other +4. **Safety**: Avoid operations that could break the system +5. **Documentation**: Add clear docstrings to all tests + +## Common Test Patterns + +### Running Commands + +```python +def test_example(ssh_command): + # Simple command + output = ssh_command.run_check("uname -a") + + # Command with error handling + result = ssh_command.run("some_command") + if result[2] == 0: # Check exit code + process_output(result[0]) +``` + +### UCI Operations + +```python +def test_uci_example(ssh_command): + # Get UCI value + value = ssh_command.run("uci get system.@system[0].hostname")[0].strip() + + # Set UCI value + ssh_command.run_check("uci set test.option='value'") + ssh_command.run_check("uci commit") +``` + +### Performance Measurements + +```python +def test_performance_example(ssh_command, results_bag): + start_time = time.time() + ssh_command.run_check("some_operation") + duration = time.time() - start_time + + results_bag["operation_time"] = duration + assert duration < 5.0, f"Operation too slow: {duration}s" +``` + +## Troubleshooting + +### SSH Connection Issues + +- Verify SSH keys are configured +- Check device IP and port +- Ensure dropbear is running on device + +### Test Failures + +- Check device logs: `logread | tail -50` +- Verify device configuration +- Check available resources (memory, disk) + +### Performance Issues + +- Reduce concurrent test execution +- Skip slow tests for quick validation +- Check device load during tests + +## Contributing + +When adding new tests: + +1. Follow existing patterns and conventions +2. 
Add appropriate documentation +3. Test on multiple OpenWRT versions/devices +4. Consider resource constraints of embedded devices +5. Add cleanup code for all test artifacts diff --git a/tests_deas/_test_cram.py b/tests_deas/_test_cram.py new file mode 100644 index 000000000..30c612182 --- /dev/null +++ b/tests_deas/_test_cram.py @@ -0,0 +1,23 @@ +from pathlib import Path + +import prysk.test +import pytest + + +@pytest.fixture +def prysk_wrapper(shell_command): + def _prysk_wrapper(command): + data, _, returncode = shell_command.run((b"".join(command)).decode("utf-8")) + return ("\n".join(data) + "x").encode(), returncode + + return _prysk_wrapper + + +@pytest.mark.parametrize( + "cram_file", + [Path("./tests/cram/base.t"), Path("./tests/cram/opkg.t")], +) +def test_cram(prysk_wrapper, cram_file): + refout, postout, diff = prysk.test.testfile(cram_file, execute_func=prysk_wrapper) + + assert diff == [] diff --git a/tests_deas/cram/base.t b/tests_deas/cram/base.t new file mode 100644 index 000000000..a2194131e --- /dev/null +++ b/tests_deas/cram/base.t @@ -0,0 +1,11 @@ +Test echo + + $ echo "ok" + ok + $ echo "Hello OpenWrt" + Hello OpenWrt + +Test uname + + $ uname -a + Linux OpenWrt 4.14.180 #0 SMP Thu May 14 15:08:00 2020 mips GNU/Linux diff --git a/tests_deas/cram/opkg.t b/tests_deas/cram/opkg.t new file mode 100644 index 000000000..0fb2d29a7 --- /dev/null +++ b/tests_deas/cram/opkg.t @@ -0,0 +1,4 @@ +Test opkg list + + $ opkg list | grep busybox + busybox diff --git a/tests_deas/lib_/functions.sh.bats b/tests_deas/lib_/functions.sh.bats new file mode 100644 index 000000000..05a298248 --- /dev/null +++ b/tests_deas/lib_/functions.sh.bats @@ -0,0 +1,80 @@ +#!/usr/bin/env bats + +bats_require_minimum_version 1.5.0 + +setup() { + export IPKG_INSTROOT="$BATS_FILE_TMPDIR/rootfs" + mkdir -p "$IPKG_INSTROOT/etc/" + echo "wheel:*:0:root" >"$BATS_FILE_TMPDIR/rootfs/etc/group" + echo "root:*:0:0:root:/var/root:/bin/sh" >"$BATS_FILE_TMPDIR/rootfs/etc/passwd" + + mkdir -p "$IPKG_INSTROOT/lib/apk/packages/" + echo "lldp=121:lldp=129" > "$IPKG_INSTROOT/lib/apk/packages/lldpd.rusers" + + . 
$(pwd)/package/base-files/files/lib/functions.sh +} + +# teardown() { +# # rm -rf "$BATS_FILE_TMPDIR" +# } + +@test "test append" { + VAR="a b" + append VAR "c" + [ "$VAR" = "a b c" ] + append VAR "d" "-" + [ "$VAR" = "a b c-d" ] +} + +@test "test prepend" { + VAR="a b" + prepend VAR "c" + [ "$VAR" = "c a b" ] + + prepend VAR "d" "-" + [ "$VAR" = "d-c a b" ] +} + +@test "list contains" { + VAR="a b c" + run -0 list_contains VAR "a" + run -1 list_contains VAR "d" + +} + +@test "get_bool" { + [ "$(get_bool 0)" = 0 ] + [ "$(get_bool off)" = 0 ] + [ "$(get_bool false)" = 0 ] + [ "$(get_bool no)" = 0 ] + [ "$(get_bool disabled)" = 0 ] + + [ "$(get_bool 1)" = 1 ] + [ "$(get_bool on)" = 1 ] + [ "$(get_bool true)" = 1 ] + [ "$(get_bool yes)" = 1 ] + [ "$(get_bool enabled)" = 1 ] +} + +@test "group_exists" { + run -0 group_exists wheel + run -1 group_exists not_existing +} + +@test "user_exists" { + run -0 user_exists root + run -1 user_exists not_existing +} + +@test "add_group_and_user" { + export root="$IPKG_INSTROOT" + run -0 add_group_and_user lldpd + run -0 user_exists lldp + run -0 group_exists lldp + unset root +} + +@test "user_add" { + user_add test_user 123 123 description /var/lib/test /bin/fish + run -0 user_exists test_user +} diff --git a/tests_deas/test_firewall.py b/tests_deas/test_firewall.py new file mode 100644 index 000000000..01a1352c9 --- /dev/null +++ b/tests_deas/test_firewall.py @@ -0,0 +1,380 @@ +"""Firewall and security configuration tests for OpenWRT.""" + +import re +import time + +import pytest + + +class TestFirewall: + """Tests for firewall functionality and security configurations.""" + + def test_firewall_service_status(self, ssh_command): + """Test that firewall service is running.""" + # Check if firewall is enabled + enabled = ssh_command.run("uci get firewall.@defaults[0].disable 2>/dev/null") + if enabled[2] == 0 and enabled[0][0].strip() == "1": + pytest.skip("Firewall is disabled in configuration") + + # Check firewall service status + status = ssh_command.run("/etc/init.d/firewall status") + assert status[2] == 0, "Firewall service is not running" + + def test_firewall_zones(self, ssh_command, results_bag): + """Test firewall zones configuration.""" + # Get all zones + zones_output = ssh_command.run_check( + "uci show firewall | grep 'firewall.@zone'" + ) + + zones = {} + current_zone = None + + for line in zones_output: + if "=zone" in line: + # Extract zone index + match = re.search(r"firewall\.@zone\[(\d+)\]", line) + if match: + current_zone = f"zone_{match.group(1)}" + zones[current_zone] = {} + elif current_zone and "=" in line: + # Parse zone properties + key_match = re.search(r"firewall\.@zone\[\d+\]\.(\w+)=(.+)", line) + if key_match: + key = key_match.group(1) + value = key_match.group(2).strip("'\"") + zones[current_zone][key] = value + + results_bag["firewall_zones"] = zones + + # Verify at least one zone exists + assert len(zones) > 0, "No firewall zones configured" + + # Check for common zones + zone_names = [z.get("name", "") for z in zones.values()] + + # Usually should have at least lan and wan zones + if "lan" in zone_names: + lan_zone = next(z for z in zones.values() if z.get("name") == "lan") + assert lan_zone.get("input", "").upper() == "ACCEPT", ( + "LAN zone should accept input" + ) + assert lan_zone.get("forward", "").upper() == "ACCEPT", ( + "LAN zone should accept forward" + ) + + if "wan" in zone_names: + wan_zone = next(z for z in zones.values() if z.get("name") == "wan") + assert wan_zone.get("input", "").upper() in ["REJECT", 
"DROP"], ( + "WAN zone should reject/drop input" + ) + assert wan_zone.get("forward", "").upper() in ["REJECT", "DROP"], ( + "WAN zone should reject/drop forward" + ) + + def test_firewall_rules(self, ssh_command): + """Test firewall rules are properly loaded.""" + # Check iptables/nftables rules + # Try nftables first (newer OpenWRT) + nft_check = ssh_command.run("nft list ruleset 2>/dev/null") + + if nft_check[2] == 0 and nft_check[0]: + # Using nftables + rules = nft_check[0] + + # Check for essential chains + assert "input" in rules.lower(), "No input chain found in nftables" + assert "forward" in rules.lower(), "No forward chain found in nftables" + assert "output" in rules.lower(), "No output chain found in nftables" + + # Check for zone rules + assert ( + "zone" in rules.lower() + or "lan" in rules.lower() + or "wan" in rules.lower() + ), "No zone rules found" + else: + # Try iptables + iptables_check = ssh_command.run("iptables -L -n 2>/dev/null") + + if iptables_check[2] == 0: + rules = "\n".join(iptables_check) + + # Check for essential chains + assert "Chain INPUT" in rules, "No INPUT chain found" + assert "Chain FORWARD" in rules, "No FORWARD chain found" + assert "Chain OUTPUT" in rules, "No OUTPUT chain found" + + # Check for zone chains + assert "zone_" in rules.lower() or "reject" in rules.lower(), ( + "No zone chains found" + ) + else: + pytest.skip("Neither nftables nor iptables available") + + def test_firewall_port_forwards(self, ssh_command): + """Test port forwarding rules.""" + # Get all redirects (port forwards) + redirects = ssh_command.run("uci show firewall | grep 'firewall.@redirect'") + + if redirects[0]: + redirect_count = len( + set(re.findall(r"firewall\.@redirect\[(\d+)\]", "\n".join(redirects))) + ) + + for i in range(redirect_count): + # Check each redirect has required fields + name = ssh_command.run( + f"uci get firewall.@redirect[{i}].name 2>/dev/null" + ) + proto = ssh_command.run( + f"uci get firewall.@redirect[{i}].proto 2>/dev/null" + ) + + if name[2] == 0: + # Redirect exists, verify it has protocol + assert proto[2] == 0, f"Redirect {i} missing protocol" + + def test_firewall_custom_rules(self, ssh_command): + """Test custom firewall rules if configured.""" + # Check for custom rules file + custom_rules_exist = ssh_command.run("test -f /etc/firewall.user")[2] == 0 + + if custom_rules_exist: + # Check if file is executable + is_executable = ssh_command.run("test -x /etc/firewall.user")[2] == 0 + assert is_executable, "/etc/firewall.user exists but is not executable" + + # Check syntax (basic) + syntax_check = ssh_command.run("sh -n /etc/firewall.user") + assert syntax_check[2] == 0, "Syntax error in /etc/firewall.user" + + def test_firewall_logging(self, ssh_command): + """Test firewall logging configuration.""" + # Check if logging is enabled + log_level = ssh_command.run( + "uci get firewall.@defaults[0].log_level 2>/dev/null" + ) + + if log_level[2] == 0 and log_level[0].strip() != "off": + # Logging is enabled, check if it's working + # Look for firewall messages in system log + ssh_command.run("logread | grep -i firewall | tail -5") + + # We should see some firewall-related messages if logging is active + # Not asserting as there might legitimately be no recent firewall events + + def test_firewall_defaults(self, ssh_command, results_bag): + """Test firewall default policies.""" + defaults = {} + + # Get default policies + for policy in ["input", "output", "forward"]: + value = ssh_command.run( + f"uci get firewall.@defaults[0].{policy} 
2>/dev/null" + ) + if value[2] == 0: + defaults[policy] = value[0].strip().upper() + + results_bag["firewall_defaults"] = defaults + + # Verify secure defaults + assert defaults.get("input", "ACCEPT") != "ACCEPT", ( + "Default input policy should not be ACCEPT" + ) + assert defaults.get("forward", "ACCEPT") != "ACCEPT", ( + "Default forward policy should not be ACCEPT" + ) + # Output can be ACCEPT + + def test_syn_flood_protection(self, ssh_command): + """Test SYN flood protection settings.""" + syn_flood = ssh_command.run( + "uci get firewall.@defaults[0].syn_flood 2>/dev/null" + ) + + if syn_flood[2] == 0: + assert syn_flood[0].strip() == "1", "SYN flood protection should be enabled" + + # Check if synflood_protect is set + syn_protect = ssh_command.run( + "uci get firewall.@defaults[0].synflood_protect 2>/dev/null" + ) + if syn_protect[2] == 0: + assert syn_protect[0].strip() == "1", ( + "SYN flood protection should be enabled" + ) + + def test_invalid_packets_handling(self, ssh_command): + """Test handling of invalid packets.""" + # Check if invalid packets are dropped + drop_invalid = ssh_command.run( + "uci get firewall.@defaults[0].drop_invalid 2>/dev/null" + ) + + if drop_invalid[2] == 0: + assert drop_invalid[0].strip() == "1", "Invalid packets should be dropped" + + def test_firewall_zone_forwarding(self, ssh_command): + """Test zone forwarding rules.""" + # Get all forwarding rules + forwards = ssh_command.run_check( + "uci show firewall | grep 'firewall.@forwarding'" + ) + + if forwards[0]: + # Parse forwarding rules + forward_rules = [] + + forward_count = len( + set(re.findall(r"firewall\.@forwarding\[(\d+)\]", "\n".join(forwards))) + ) + + for i in range(forward_count): + src = ssh_command.run( + f"uci get firewall.@forwarding[{i}].src 2>/dev/null" + ) + dest = ssh_command.run( + f"uci get firewall.@forwarding[{i}].dest 2>/dev/null" + ) + + if src[2] == 0 and dest[2] == 0: + forward_rules.append( + {"src": src[0].strip(), "dest": dest[0].strip()} + ) + + # Common check: LAN should be able to forward to WAN + lan_to_wan = any( + r["src"] == "lan" and r["dest"] == "wan" for r in forward_rules + ) + assert lan_to_wan, "LAN to WAN forwarding should be allowed" + + def test_connection_tracking(self, ssh_command): + """Test connection tracking settings.""" + # Check conntrack modules + conntrack_modules = ssh_command.run("lsmod | grep conntrack") + assert conntrack_modules[0], "No conntrack modules loaded" + + # Check conntrack table size + conntrack_max = ssh_command.run_check("sysctl net.netfilter.nf_conntrack_max")[ + 0 + ] + max_value = int(conntrack_max.split("=")[1].strip()) + + # Should be reasonable for the system + assert max_value >= 4096, f"Conntrack table too small: {max_value}" + + # Check current connections + conntrack_count = ssh_command.run( + "conntrack -C 2>/dev/null || cat /proc/sys/net/netfilter/nf_conntrack_count" + ) + if conntrack_count[2] == 0 and conntrack_count[0].strip().isdigit(): + current_conns = int(conntrack_count[0].strip()) + assert current_conns < max_value * 0.8, "Conntrack table nearly full" + + def test_firewall_include_files(self, ssh_command): + """Test firewall include files.""" + # Check for includes + includes = ssh_command.run("uci show firewall | grep '\\.path='") + + if includes[0]: + # Verify included files exist and are valid + for line in includes: + if ".path=" in line: + path_match = re.search(r"\.path='([^']+)'", line) + if path_match: + include_path = path_match.group(1) + exists = ssh_command.run(f"test -f {include_path}")[2] 
== 0 + assert exists, ( + f"Included firewall file {include_path} does not exist" + ) + + def test_mac_address_filtering(self, ssh_command): + """Test MAC address filtering if configured.""" + # Check for MAC-based rules + mac_rules = ssh_command.run( + "uci show firewall | grep -i 'mac' | grep -v 'macsec'" + ) + + if mac_rules[0]: + # Verify MAC addresses are in correct format + mac_pattern = re.compile(r"([0-9A-Fa-f]{2}[:-]){5}([0-9A-Fa-f]{2})") + + for line in mac_rules: + if "=" in line and "mac" in line.lower(): + value = line.split("=", 1)[1].strip("'\"") + if ":" in value or "-" in value: + assert mac_pattern.match(value), ( + f"Invalid MAC address format: {value}" + ) + + @pytest.mark.slow + def test_firewall_stress(self, ssh_command): + """Test firewall under stress conditions.""" + # Create multiple temporary rules + test_rules = [] + base_port = 50000 + + try: + # Add several test rules + for i in range(10): + port = base_port + i + rule_name = f"test_rule_{i}" + + ssh_command.run_check("uci add firewall rule") + ssh_command.run_check(f"uci set firewall.@rule[-1].name='{rule_name}'") + ssh_command.run_check("uci set firewall.@rule[-1].src='wan'") + ssh_command.run_check(f"uci set firewall.@rule[-1].dest_port='{port}'") + ssh_command.run_check("uci set firewall.@rule[-1].target='DROP'") + ssh_command.run_check("uci set firewall.@rule[-1].proto='tcp'") + + test_rules.append(rule_name) + + # Commit and reload + ssh_command.run_check("uci commit firewall") + ssh_command.run_check("/etc/init.d/firewall reload") + + # Give firewall time to reload + time.sleep(2) + + # Verify firewall is still running + status = ssh_command.run("/etc/init.d/firewall status") + assert status[2] == 0, "Firewall crashed during stress test" + + finally: + # Cleanup test rules + for rule_name in test_rules: + # Find and delete the rule + rule_idx = ssh_command.run( + f"uci show firewall | grep \"name='{rule_name}'\" | " + f"sed -n 's/firewall.@rule\[\([0-9]*\)\].*/\\1/p'" + ) + if rule_idx[2] == 0 and rule_idx[0].strip(): + ssh_command.run(f"uci delete firewall.@rule[{rule_idx[0].strip()}]") + + ssh_command.run("uci commit firewall") + ssh_command.run("/etc/init.d/firewall reload") + + def test_firewall_ipv6(self, ssh_command): + """Test IPv6 firewall configuration if enabled.""" + # Check if IPv6 is enabled + ipv6_disable = ssh_command.run( + "uci get firewall.@defaults[0].disable_ipv6 2>/dev/null" + ) + + if ipv6_disable[2] != 0 or ipv6_disable[0].strip() != "1": + # IPv6 firewall should be active + # Check for ip6tables or nft inet tables + ip6_check = ssh_command.run("ip6tables -L -n 2>/dev/null | head -20") + nft6_check = ssh_command.run("nft list ruleset 2>/dev/null | grep -i inet") + + if ip6_check[2] == 0 or (nft6_check[2] == 0 and nft6_check[0]): + # IPv6 firewall is available + # Check for ICMPv6 rules (essential for IPv6) + icmpv6_check = ssh_command.run( + "ip6tables -L -n 2>/dev/null | grep -i icmpv6 || " + "nft list ruleset 2>/dev/null | grep -i icmpv6" + ) + assert icmpv6_check[0], ( + "No ICMPv6 rules found - IPv6 may not work correctly" + ) diff --git a/tests_deas/test_performance.py b/tests_deas/test_performance.py new file mode 100644 index 000000000..cc55decf1 --- /dev/null +++ b/tests_deas/test_performance.py @@ -0,0 +1,373 @@ +"""Performance and stress tests for OpenWRT.""" + +import re +import statistics +import time + +import pytest + + +class TestPerformance: + """Tests for system performance and stress testing.""" + + def test_boot_time(self, ssh_command, results_bag): + """Measure 
and validate boot time.""" + # Get boot time from kernel + uptime_info = ssh_command.run_check("cat /proc/uptime") + uptime_seconds = float(uptime_info[0].split()[0]) + + # Get time when init started + dmesg_output = ssh_command.run_check( + "dmesg | grep -E 'Freeing (unused|init)' | head -1" + ) + + if dmesg_output[0]: + # Extract timestamp from dmesg + match = re.search(r"\[\s*(\d+\.\d+)\]", dmesg_output[0]) + if match: + kernel_to_init = float(match.group(1)) + results_bag["boot_time"] = { + "kernel_to_init_seconds": kernel_to_init, + "total_uptime_seconds": uptime_seconds, + "boot_phase": "complete", + } + + # Boot should complete reasonably fast + assert kernel_to_init < 60, ( + f"Kernel to init took too long: {kernel_to_init}s" + ) + + def test_memory_bandwidth(self, ssh_command): + """Test memory bandwidth using dd.""" + # Create test in memory (tmpfs) + iterations = 3 + + read_speeds = [] + write_speeds = [] + + for i in range(iterations): + # Write test + time.time() + write_result = ssh_command.run_check( + "dd if=/dev/zero of=/tmp/perftest bs=1M count=10 2>&1" + ) + time.time() + + # Parse write speed + for line in write_result: + if "MB/s" in line or "MiB/s" in line: + speed_match = re.search(r"(\d+\.?\d*)\s*M[Bi]/s", line) + if speed_match: + write_speeds.append(float(speed_match.group(1))) + + # Read test + time.time() + read_result = ssh_command.run_check( + "dd if=/tmp/perftest of=/dev/null bs=1M 2>&1" + ) + time.time() + + # Parse read speed + for line in read_result: + if "MB/s" in line or "MiB/s" in line: + speed_match = re.search(r"(\d+\.?\d*)\s*M[Bi]/s", line) + if speed_match: + read_speeds.append(float(speed_match.group(1))) + + # Cleanup + ssh_command.run("rm -f /tmp/perftest") + + # Calculate averages + if write_speeds: + avg_write = statistics.mean(write_speeds) + assert avg_write > 10, f"Memory write speed too slow: {avg_write:.1f} MB/s" + + if read_speeds: + avg_read = statistics.mean(read_speeds) + assert avg_read > 10, f"Memory read speed too slow: {avg_read:.1f} MB/s" + + def test_cpu_performance(self, ssh_command, results_bag): + """Test CPU performance with basic benchmarks.""" + # Simple CPU benchmark using bc + + # Pi calculation benchmark + start_time = time.time() + ssh_command.run_check("echo 'scale=100; 4*a(1)' | bc -l > /dev/null") + calc_time = time.time() - start_time + + results_bag["cpu_benchmark"] = { + "pi_calculation_seconds": calc_time, + "test_type": "bc_pi_100_digits", + } + + # Should complete in reasonable time (adjust based on target hardware) + assert calc_time < 10, f"CPU calculation took too long: {calc_time:.2f}s" + + # Integer operations benchmark + start_time = time.time() + ssh_command.run_check( + "awk 'BEGIN {for(i=0;i<100000;i++) j=i*i; print j}' > /dev/null" + ) + int_time = time.time() - start_time + + assert int_time < 5, f"Integer operations took too long: {int_time:.2f}s" + + def test_network_throughput_loopback(self, ssh_command): + """Test network throughput on loopback interface.""" + # Check if nc (netcat) is available + nc_check = ssh_command.run("which nc") + if nc_check[2] != 0: + pytest.skip("netcat not available for network testing") + + # Use dd and nc for basic throughput test + test_size = 10 * 1024 * 1024 # 10MB + port = 12345 + + # Start receiver in background + ssh_command.run(f"nc -l -p {port} > /dev/null &") + time.sleep(1) + + # Send data + start_time = time.time() + ssh_command.run_check( + f"dd if=/dev/zero bs=1024 count=10240 2>/dev/null | nc localhost {port}" + ) + transfer_time = time.time() - 
start_time + + # Calculate throughput + throughput_mbps = (test_size * 8) / (transfer_time * 1000000) + + # Loopback should be fast + assert throughput_mbps > 100, ( + f"Loopback throughput too low: {throughput_mbps:.1f} Mbps" + ) + + # Cleanup + ssh_command.run("pkill -f 'nc -l'") + + def test_filesystem_performance(self, ssh_command, results_bag): + """Test filesystem read/write performance.""" + # Test different filesystems if available + test_paths = {"/tmp": "tmpfs", "/overlay": "overlay", "/": "root"} + + results = {} + + for path, fs_type in test_paths.items(): + # Check if path exists and is writable + if ssh_command.run(f"test -w {path}")[2] != 0: + continue + + test_file = f"{path}/perftest.dat" + + # Small file test (1MB) + write_result = ssh_command.run( + f"dd if=/dev/zero of={test_file} bs=1024 count=1024 conv=fsync 2>&1" + ) + + if write_result[2] == 0: + # Parse results + for line in write_result: + if "MB/s" in line or "MiB/s" in line: + speed_match = re.search(r"(\d+\.?\d*)\s*M[Bi]/s", line) + if speed_match: + results[fs_type] = { + "write_speed_mbs": float(speed_match.group(1)), + "path": path, + } + + # Cleanup + ssh_command.run(f"rm -f {test_file}") + + results_bag["filesystem_performance"] = results + + # At least tmpfs should be fast + if "tmpfs" in results: + assert results["tmpfs"]["write_speed_mbs"] > 5, ( + f"tmpfs write speed too slow: {results['tmpfs']['write_speed_mbs']:.1f} MB/s" + ) + + def test_process_creation_performance(self, ssh_command): + """Test process creation and context switching performance.""" + # Time how long it takes to create many processes + process_count = 100 + + start_time = time.time() + ssh_command.run_check( + f"for i in $(seq 1 {process_count}); do true & done; wait" + ) + creation_time = time.time() - start_time + + # Calculate rate + processes_per_second = process_count / creation_time + + # Should be able to create processes reasonably fast + assert processes_per_second > 50, ( + f"Process creation too slow: {processes_per_second:.1f} processes/second" + ) + + def test_interrupt_handling(self, ssh_command): + """Test interrupt handling performance.""" + # Get initial interrupt counts + initial_interrupts = ssh_command.run_check("cat /proc/interrupts") + + # Generate some system activity + ssh_command.run_check("dd if=/dev/zero of=/dev/null bs=1M count=10") + time.sleep(1) + + # Get final interrupt counts + final_interrupts = ssh_command.run_check("cat /proc/interrupts") + + # Basic check that interrupts are being handled + assert initial_interrupts != final_interrupts, "No interrupt activity detected" + + @pytest.mark.slow + def test_sustained_load(self, ssh_command): + """Test system stability under sustained load.""" + # Run a sustained workload + duration = 30 # seconds + + # Start CPU load + ssh_command.run( + f"timeout {duration} sh -c 'while true; do echo scale=100; 4*a(1) | bc -l > /dev/null; done' &" + ) + + # Start memory load + ssh_command.run( + f"timeout {duration} sh -c 'while true; do dd if=/dev/zero of=/tmp/load bs=1M count=5 2>/dev/null; done' &" + ) + + # Monitor system during load + check_interval = 5 + checks = duration // check_interval + + for i in range(checks): + time.sleep(check_interval) + + # Check system responsiveness + start = time.time() + ssh_command.run_check("echo responsive") + response_time = time.time() - start + + assert response_time < 2, ( + f"System unresponsive under load: {response_time:.1f}s" + ) + + # Check memory isn't exhausted + free_mem = int(ssh_command.run_check("free -m | grep 
Mem:")[0].split()[3]) + assert free_mem > 5, f"System running out of memory: {free_mem}MB free" + + # Wait for load to finish + time.sleep(2) + + # Cleanup + ssh_command.run("rm -f /tmp/load") + + def test_network_latency(self, ssh_command, results_bag): + """Test network latency and jitter.""" + # Ping loopback + ping_result = ssh_command.run_check("ping -c 10 -i 0.2 127.0.0.1") + + # Parse ping statistics + latencies = [] + for line in ping_result: + if "time=" in line: + match = re.search(r"time=(\d+\.?\d*)", line) + if match: + latencies.append(float(match.group(1))) + elif "min/avg/max" in line: + match = re.search(r"(\d+\.?\d*)/(\d+\.?\d*)/(\d+\.?\d*)", line) + if match: + results_bag["loopback_latency"] = { + "min_ms": float(match.group(1)), + "avg_ms": float(match.group(2)), + "max_ms": float(match.group(3)), + "jitter_ms": float(match.group(3)) - float(match.group(1)), + } + + if latencies: + avg_latency = statistics.mean(latencies) + assert avg_latency < 1.0, f"Loopback latency too high: {avg_latency:.2f}ms" + + def test_concurrent_connections(self, ssh_command): + """Test handling of concurrent network connections.""" + # Check current connection limits + max_conn = ssh_command.run_check("sysctl net.netfilter.nf_conntrack_max")[0] + max_value = int(max_conn.split("=")[1].strip()) + + # Test creating multiple connections + port_base = 20000 + num_connections = min(50, max_value // 10) # Don't overwhelm the system + + # Start listeners + for i in range(num_connections): + ssh_command.run(f"nc -l -p {port_base + i} > /dev/null 2>&1 &") + + time.sleep(2) + + # Count established connections + nc_count = ssh_command.run("pgrep -c 'nc -l'")[0].strip() + + # Most should have started successfully + assert int(nc_count) > num_connections * 0.8, ( + f"Failed to create concurrent connections: {nc_count}/{num_connections}" + ) + + # Cleanup + ssh_command.run("pkill -f 'nc -l'") + + def test_memory_fragmentation(self, ssh_command): + """Test memory fragmentation handling.""" + # Get initial memory state + int(ssh_command.run_check("free -m | grep Mem:")[0].split()[3]) + + # Allocate and free memory multiple times + iterations = 10 + for i in range(iterations): + # Allocate + ssh_command.run( + f"dd if=/dev/zero of=/tmp/frag{i} bs=1M count=2 2>/dev/null" + ) + + # Free half + for i in range(0, iterations, 2): + ssh_command.run(f"rm -f /tmp/frag{i}") + + # Allocate again + ssh_command.run("dd if=/dev/zero of=/tmp/frag_large bs=1M count=3 2>/dev/null") + + # Check if allocation succeeded + large_exists = ssh_command.run("test -f /tmp/frag_large")[2] == 0 + assert large_exists, "Memory too fragmented to allocate continuous block" + + # Cleanup + ssh_command.run("rm -f /tmp/frag*") + + def test_cache_performance(self, ssh_command): + """Test filesystem cache performance.""" + test_file = "/tmp/cache_test" + size_mb = 5 + + # Create test file + ssh_command.run_check( + f"dd if=/dev/urandom of={test_file} bs=1M count={size_mb} 2>/dev/null" + ) + + # First read (cold cache) + ssh_command.run("sync && echo 3 > /proc/sys/vm/drop_caches 2>/dev/null") + + cold_start = time.time() + ssh_command.run_check(f"dd if={test_file} of=/dev/null bs=1M 2>/dev/null") + cold_time = time.time() - cold_start + + # Second read (warm cache) + warm_start = time.time() + ssh_command.run_check(f"dd if={test_file} of=/dev/null bs=1M 2>/dev/null") + warm_time = time.time() - warm_start + + # Warm cache should be significantly faster + assert warm_time < cold_time * 0.5, ( + f"Cache not effective: cold={cold_time:.2f}s, 
warm={warm_time:.2f}s" + ) + + # Cleanup + ssh_command.run(f"rm -f {test_file}") diff --git a/tests_deas/test_services.py b/tests_deas/test_services.py new file mode 100644 index 000000000..83b58ebec --- /dev/null +++ b/tests_deas/test_services.py @@ -0,0 +1,371 @@ +"""System services and daemons tests for OpenWRT.""" + +import re +import time + +import pytest +from conftest import ubus_call + + +class TestServices: + """Tests for system services and daemons.""" + + def test_init_system(self, ssh_command): + """Test init system is functioning properly.""" + # Check if procd is running (OpenWRT's init system) + procd_check = ssh_command.run("pgrep -x procd") + assert procd_check[2] == 0, "procd (init system) is not running" + + # Check procd is PID 1 + procd_pid = ssh_command.run_check("pgrep -x procd")[0].strip() + assert procd_pid == "1", f"procd should be PID 1, but is PID {procd_pid}" + + def test_essential_services(self, ssh_command, results_bag): + """Test essential system services are running.""" + essential_services = { + "dropbear": "SSH daemon", + "uhttpd": "Web server", + "dnsmasq": "DNS/DHCP server", + "netifd": "Network interface daemon", + "logd": "Logging daemon", + } + + running_services = {} + missing_services = [] + + for service, description in essential_services.items(): + # Check if service is running + pid_check = ssh_command.run(f"pgrep -x {service}") + + if pid_check[2] == 0: + running_services[service] = { + "status": "running", + "pid": pid_check[0].strip().split("\n")[0], + "description": description, + } + else: + # Check if service is installed but not running + init_script = ssh_command.run(f"test -x /etc/init.d/{service}") + if init_script[2] == 0: + running_services[service] = { + "status": "stopped", + "description": description, + } + missing_services.append(service) + else: + running_services[service] = { + "status": "not_installed", + "description": description, + } + + results_bag["running_services"] = running_services + + # Only fail for critical services that should always run + critical = ["procd", "netifd"] + for service in critical: + if service in missing_services: + assert False, f"Critical service {service} is not running" + + def test_service_init_scripts(self, ssh_command): + """Test init scripts are properly configured.""" + # List all init scripts + init_scripts = ssh_command.run_check("ls /etc/init.d/") + + for script in init_scripts[0].split(): + if script in [".", "..", "README"]: + continue + + # Check if script is executable + is_executable = ssh_command.run(f"test -x /etc/init.d/{script}")[2] == 0 + assert is_executable, f"Init script /etc/init.d/{script} is not executable" + + # Check basic script structure + has_start = ( + ssh_command.run(f"grep -q 'start()' /etc/init.d/{script}")[2] == 0 + ) + has_boot = ssh_command.run(f"grep -q 'START=' /etc/init.d/{script}")[2] == 0 + + # Should have at least one of these + assert has_start or has_boot, ( + f"Init script {script} missing start function or START priority" + ) + + def test_enabled_services(self, ssh_command, results_bag): + """Test which services are enabled at boot.""" + # Get all enabled services + enabled_services = {} + + rc_scripts = ssh_command.run("ls /etc/rc.d/S* 2>/dev/null") + if rc_scripts[2] == 0 and rc_scripts[0]: + for link in rc_scripts[0].split(): + # Extract service name and priority + match = re.match(r"/etc/rc.d/S(\d+)(.+)", link) + if match: + priority = match.group(1) + service = match.group(2) + enabled_services[service] = int(priority) + + 
results_bag["enabled_services"] = enabled_services + + # Check that critical services are enabled + assert "network" in enabled_services, "Network service is not enabled at boot" + assert "boot" in enabled_services, "Boot service is not enabled" + + def test_service_management(self, ssh_command): + """Test service start/stop/restart functionality.""" + # Use log service as it's safe to restart + test_service = "log" + + # Check if service exists + if ssh_command.run(f"test -x /etc/init.d/{test_service}")[2] != 0: + pytest.skip(f"Service {test_service} not available for testing") + + # Get initial status + ssh_command.run(f"/etc/init.d/{test_service} status")[2] + + # Test restart + restart_result = ssh_command.run(f"/etc/init.d/{test_service} restart") + assert restart_result[2] == 0, f"Failed to restart {test_service} service" + + # Give service time to restart + time.sleep(1) + + # Verify service is running after restart + final_status = ssh_command.run(f"/etc/init.d/{test_service} status")[2] + assert final_status == 0, f"Service {test_service} not running after restart" + + def test_cron_service(self, ssh_command): + """Test cron daemon functionality.""" + # Check if crond is running + cron_pid = ssh_command.run("pgrep -x crond") + + if cron_pid[2] != 0: + # Try busybox crond + cron_pid = ssh_command.run("pgrep cron") + if cron_pid[2] != 0: + pytest.skip("Cron service not running") + + # Check crontab directory exists + crontab_dir = ssh_command.run("test -d /etc/crontabs")[2] == 0 + assert crontab_dir, "/etc/crontabs directory missing" + + # Check if root crontab exists + ssh_command.run("test -f /etc/crontabs/root")[2] == 0 + + # Create a test cron job + test_file = "/tmp/cron_test_marker" + try: + # Remove any existing test file + ssh_command.run(f"rm -f {test_file}") + + # Add test cron job + ssh_command.run_check( + f"echo '* * * * * touch {test_file}' >> /etc/crontabs/root" + ) + + # Restart cron to pick up changes + ssh_command.run("/etc/init.d/cron restart") + + # Wait for cron to run (up to 65 seconds) + time.sleep(65) + + # Check if test file was created + file_exists = ssh_command.run(f"test -f {test_file}")[2] == 0 + assert file_exists, "Cron job did not execute" + + finally: + # Cleanup + ssh_command.run(f"rm -f {test_file}") + ssh_command.run("sed -i '/cron_test_marker/d' /etc/crontabs/root") + ssh_command.run("/etc/init.d/cron restart") + + def test_syslog_service(self, ssh_command): + """Test system logging service.""" + # Check if logd is running + logd_pid = ssh_command.run("pgrep -x logd") + assert logd_pid[2] == 0, "System logging daemon (logd) not running" + + # Test logging functionality + test_message = f"OpenWRT test message {int(time.time())}" + + # Send test message + ssh_command.run_check(f"logger -t test '{test_message}'") + + # Give log system time to process + time.sleep(1) + + # Check if message appears in log + log_check = ssh_command.run(f"logread | grep '{test_message}'") + assert log_check[2] == 0, "Test message not found in system log" + + # Check log size limits + log_size = ssh_command.run("uci get system.@system[0].log_size 2>/dev/null") + if log_size[2] == 0: + size_kb = int(log_size[0].strip()) + assert size_kb >= 64, f"Log size too small: {size_kb}KB" + + def test_dnsmasq_service(self, ssh_command): + """Test DNS/DHCP service.""" + # Check if dnsmasq is running + dnsmasq_pid = ssh_command.run("pgrep -x dnsmasq") + + if dnsmasq_pid[2] != 0: + pytest.skip("dnsmasq not running") + + # Check if listening on DNS port + dns_listen = 
ssh_command.run("netstat -tlunp | grep ':53'") + assert dns_listen[2] == 0, "dnsmasq not listening on DNS port 53" + + # Test DNS resolution + dns_test = ssh_command.run("nslookup localhost 127.0.0.1") + assert dns_test[2] == 0, "Local DNS resolution failed" + + # Check DHCP configuration + dhcp_config = ssh_command.run("uci show dhcp.lan 2>/dev/null") + if dhcp_config[2] == 0: + # Verify DHCP range is configured + dhcp_start = ssh_command.run("uci get dhcp.lan.start 2>/dev/null") + dhcp_limit = ssh_command.run("uci get dhcp.lan.limit 2>/dev/null") + + if dhcp_start[2] == 0 and dhcp_limit[2] == 0: + start = int(dhcp_start[0].strip()) + limit = int(dhcp_limit[0].strip()) + assert start > 0, "Invalid DHCP start address" + assert limit > 0, "Invalid DHCP limit" + + def test_uhttpd_service(self, ssh_command): + """Test web server service.""" + # Check if uhttpd is running + uhttpd_pid = ssh_command.run("pgrep -x uhttpd") + + if uhttpd_pid[2] != 0: + pytest.skip("uhttpd web server not running") + + # Check listening ports + http_ports = ssh_command.run("netstat -tlnp | grep uhttpd") + assert http_ports[2] == 0, "uhttpd not listening on any ports" + + # Check for standard HTTP/HTTPS ports + ports_output = http_ports[0] + has_http = ":80" in ports_output or ":8080" in ports_output + has_https = ":443" in ports_output or ":8443" in ports_output + + assert has_http or has_https, ( + "uhttpd not listening on standard HTTP/HTTPS ports" + ) + + # Test basic HTTP request + if has_http: + port = "80" if ":80" in ports_output else "8080" + http_test = ssh_command.run( + f"wget -q -O - http://localhost:{port}/ | head -20" + ) + # Just check it returns something, don't verify content + assert http_test[0], "Web server returned empty response" + + def test_ntpd_service(self, ssh_command): + """Test NTP time synchronization service.""" + # Check if NTP is enabled + ntp_enabled = ssh_command.run("uci get system.ntp.enabled 2>/dev/null") + + if ntp_enabled[2] != 0 or ntp_enabled[0].strip() != "1": + pytest.skip("NTP not enabled") + + # Check if ntpd is running + ntpd_pid = ssh_command.run("pgrep ntpd") + + if ntpd_pid[2] == 0: + # Full ntpd is running + # Check NTP peers + ntp_peers = ssh_command.run("ntpd -p 2>/dev/null") + if ntp_peers[2] == 0: + assert "stratum" in ntp_peers[0].lower(), "No NTP peers found" + else: + # Check for busybox ntpd + busybox_ntpd = ssh_command.run("ps | grep -v grep | grep ntpd") + assert busybox_ntpd[2] == 0, "No NTP daemon running" + + def test_service_dependencies(self, ssh_command): + """Test service dependency ordering.""" + # Get boot sequence + boot_sequence = {} + + rc_scripts = ssh_command.run("ls -la /etc/rc.d/S* 2>/dev/null") + if rc_scripts[2] == 0: + for line in rc_scripts[0].split("\n"): + match = re.search(r"S(\d+)(\S+)\s+->\s+\.\./init\.d/(\S+)", line) + if match: + priority = int(match.group(1)) + service = match.group(3) + boot_sequence[service] = priority + + # Verify critical service ordering + if "boot" in boot_sequence and "network" in boot_sequence: + assert boot_sequence["boot"] < boot_sequence["network"], ( + "Boot should start before network" + ) + + if "network" in boot_sequence and "firewall" in boot_sequence: + assert boot_sequence["network"] < boot_sequence["firewall"], ( + "Network should start before firewall" + ) + + def test_watchdog_service(self, ssh_command): + """Test hardware watchdog if available.""" + # Check if watchdog device exists + watchdog_dev = ssh_command.run("test -c /dev/watchdog")[2] == 0 + + if not watchdog_dev: + pytest.skip("No 
hardware watchdog available") + + # Check if watchdog is being serviced + ssh_command.run("pgrep -f watchdog") + + # Check kernel watchdog parameters + watchdog_timeout = ssh_command.run( + "cat /sys/class/watchdog/watchdog0/timeout 2>/dev/null" + ) + if watchdog_timeout[2] == 0: + timeout = int(watchdog_timeout[0].strip()) + assert timeout > 0, "Watchdog timeout not set" + assert timeout <= 300, f"Watchdog timeout too high: {timeout}s" + + @pytest.mark.slow + def test_service_restart_all(self, ssh_command): + """Test restarting all services doesn't break the system.""" + # Get list of safe services to restart + safe_services = ["log", "cron", "uhttpd", "dnsmasq"] + + failed_services = [] + + for service in safe_services: + if ssh_command.run(f"test -x /etc/init.d/{service}")[2] == 0: + # Restart service + result = ssh_command.run(f"/etc/init.d/{service} restart") + if result[2] != 0: + failed_services.append(service) + + # Give service time to start + time.sleep(1) + + # Verify service is running + status = ssh_command.run(f"/etc/init.d/{service} status") + if status[2] != 0: + failed_services.append(f"{service}_status") + + assert not failed_services, f"Failed to restart services: {failed_services}" + + def test_ubus_service(self, ssh_command): + """Test ubus system message bus.""" + # Check if ubusd is running + ubusd_pid = ssh_command.run("pgrep -x ubusd") + assert ubusd_pid[2] == 0, "ubusd not running" + + # Test ubus functionality + ubus_list = ssh_command.run_check("ubus list") + assert len(ubus_list) > 5, "Too few ubus objects registered" + + # Test calling a simple ubus method + system_info = ubus_call(ssh_command, "system", "info", {}) + assert "uptime" in system_info, "ubus system info call failed" + assert "memory" in system_info, "ubus system info missing memory data" diff --git a/tests_deas/test_uci.py b/tests_deas/test_uci.py new file mode 100644 index 000000000..e07705f29 --- /dev/null +++ b/tests_deas/test_uci.py @@ -0,0 +1,361 @@ +"""UCI (Unified Configuration Interface) tests for OpenWRT.""" + +import pytest + + +class TestUCI: + """Tests for UCI configuration system.""" + + def test_uci_basics(self, ssh_command): + """Test basic UCI commands work correctly.""" + # Test uci show + output = ssh_command.run_check("uci show system") + assert output, "UCI show returned no output" + assert "system." 
in output[0], "UCI show system failed" + + def test_uci_get_set_delete(self, ssh_command): + """Test UCI get, set, and delete operations.""" + test_section = "test_uci_temp" + test_option = "test_option" + test_value = "test_value_12345" + + try: + # Clean up any existing test section + ssh_command.run(f"uci delete {test_section} 2>/dev/null") + ssh_command.run("uci commit") + + # Create a new section + ssh_command.run_check(f"uci set {test_section}=config") + ssh_command.run_check( + f"uci set {test_section}.{test_option}='{test_value}'" + ) + ssh_command.run_check("uci commit") + + # Test get + result = ssh_command.run_check(f"uci get {test_section}.{test_option}") + assert result[0].strip() == test_value, ( + f"UCI get failed: expected {test_value}, got {result[0]}" + ) + + # Test show + show_result = ssh_command.run_check(f"uci show {test_section}") + assert test_value in show_result[0], "UCI show doesn't contain set value" + + # Test delete option + ssh_command.run_check(f"uci delete {test_section}.{test_option}") + ssh_command.run_check("uci commit") + + # Verify deletion + get_result = ssh_command.run(f"uci get {test_section}.{test_option} 2>&1") + assert get_result[2] != 0, "UCI delete option failed" + + finally: + # Cleanup + ssh_command.run(f"uci delete {test_section} 2>/dev/null") + ssh_command.run("uci commit") + + def test_uci_list_operations(self, ssh_command): + """Test UCI list operations.""" + test_section = "test_uci_list" + test_list = "test_list" + test_values = ["value1", "value2", "value3"] + + try: + # Clean up + ssh_command.run(f"uci delete {test_section} 2>/dev/null") + ssh_command.run("uci commit") + + # Create section with list + ssh_command.run_check(f"uci set {test_section}=config") + + # Add list values + for value in test_values: + ssh_command.run_check( + f"uci add_list {test_section}.{test_list}='{value}'" + ) + + ssh_command.run_check("uci commit") + + # Verify list + result = ssh_command.run_check(f"uci get {test_section}.{test_list}") + for value in test_values: + assert value in result[0], f"List missing value: {value}" + + # Test delete from list + ssh_command.run_check( + f"uci del_list {test_section}.{test_list}='{test_values[1]}'" + ) + ssh_command.run_check("uci commit") + + # Verify deletion + result = ssh_command.run_check(f"uci get {test_section}.{test_list}") + assert test_values[1] not in result[0], ( + f"Failed to delete {test_values[1]} from list" + ) + assert test_values[0] in result[0], f"Incorrectly deleted {test_values[0]}" + assert test_values[2] in result[0], f"Incorrectly deleted {test_values[2]}" + + finally: + # Cleanup + ssh_command.run(f"uci delete {test_section} 2>/dev/null") + ssh_command.run("uci commit") + + def test_uci_revert(self, ssh_command): + """Test UCI revert functionality.""" + test_section = "test_uci_revert" + test_option = "test_option" + original_value = "original" + new_value = "modified" + + try: + # Setup + ssh_command.run(f"uci delete {test_section} 2>/dev/null") + ssh_command.run("uci commit") + + # Create initial config + ssh_command.run_check(f"uci set {test_section}=config") + ssh_command.run_check( + f"uci set {test_section}.{test_option}='{original_value}'" + ) + ssh_command.run_check("uci commit") + + # Modify without commit + ssh_command.run_check(f"uci set {test_section}.{test_option}='{new_value}'") + + # Verify change is pending + result = ssh_command.run_check(f"uci get {test_section}.{test_option}") + assert result[0].strip() == new_value, "UCI set didn't update value" + + # Revert changes + 
ssh_command.run_check(f"uci revert {test_section}") + + # Verify revert + result = ssh_command.run_check(f"uci get {test_section}.{test_option}") + assert result[0].strip() == original_value, ( + f"UCI revert failed: got {result[0]}, expected {original_value}" + ) + + finally: + # Cleanup + ssh_command.run(f"uci delete {test_section} 2>/dev/null") + ssh_command.run("uci commit") + + def test_uci_export_import(self, ssh_command): + """Test UCI export and import functionality.""" + test_package = "test_export" + + try: + # Clean up + ssh_command.run(f"uci delete {test_package} 2>/dev/null") + ssh_command.run("uci commit") + + # Create test configuration + ssh_command.run_check(f"uci set {test_package}=config") + ssh_command.run_check(f"uci set {test_package}.section1=type1") + ssh_command.run_check(f"uci set {test_package}.section1.option1='value1'") + ssh_command.run_check(f"uci set {test_package}.section2=type2") + ssh_command.run_check(f"uci set {test_package}.section2.option2='value2'") + ssh_command.run_check("uci commit") + + # Export configuration + export_data = ssh_command.run_check(f"uci export {test_package}") + + # Verify export contains our data + assert "config type1 'section1'" in "\n".join(export_data) + assert "option option1 'value1'" in "\n".join(export_data) + assert "config type2 'section2'" in "\n".join(export_data) + assert "option option2 'value2'" in "\n".join(export_data) + + # Delete and reimport + ssh_command.run_check(f"uci delete {test_package}") + ssh_command.run_check("uci commit") + + # Save export to file and import + ssh_command.run_check( + f"echo '{chr(10).join(export_data)}' > /tmp/uci_export_test" + ) + ssh_command.run_check("uci import < /tmp/uci_export_test") + ssh_command.run_check("uci commit") + + # Verify import + result1 = ssh_command.run_check(f"uci get {test_package}.section1.option1") + assert result1[0].strip() == "value1", "Import failed for section1" + + result2 = ssh_command.run_check(f"uci get {test_package}.section2.option2") + assert result2[0].strip() == "value2", "Import failed for section2" + + finally: + # Cleanup + ssh_command.run(f"uci delete {test_package} 2>/dev/null") + ssh_command.run("uci commit") + ssh_command.run("rm -f /tmp/uci_export_test") + + def test_uci_batch_mode(self, ssh_command): + """Test UCI batch mode operations.""" + test_section = "test_batch" + + try: + # Create batch commands + batch_commands = [ + f"set {test_section}=config", + f"set {test_section}.option1='batch_value1'", + f"set {test_section}.option2='batch_value2'", + f"add_list {test_section}.list1='item1'", + f"add_list {test_section}.list1='item2'", + "commit", + ] + + batch_file = "/tmp/uci_batch_test" + ssh_command.run_check( + f"echo '{chr(10).join(batch_commands)}' > {batch_file}" + ) + + # Execute batch + ssh_command.run_check(f"uci batch < {batch_file}") + + # Verify results + result1 = ssh_command.run_check(f"uci get {test_section}.option1") + assert result1[0].strip() == "batch_value1", "Batch mode failed for option1" + + result2 = ssh_command.run_check(f"uci get {test_section}.option2") + assert result2[0].strip() == "batch_value2", "Batch mode failed for option2" + + list_result = ssh_command.run_check(f"uci get {test_section}.list1") + assert "item1" in list_result[0], "Batch mode failed for list item1" + assert "item2" in list_result[0], "Batch mode failed for list item2" + + finally: + # Cleanup + ssh_command.run(f"uci delete {test_section} 2>/dev/null") + ssh_command.run("uci commit") + ssh_command.run(f"rm -f {batch_file}") + + def 
test_uci_validation(self, ssh_command): + """Test UCI validation for common configurations.""" + # Test network configuration validation + network_config = ssh_command.run_check("uci show network") + + # Should have at least loopback interface + assert "network.loopback" in "\n".join(network_config), ( + "Missing loopback interface" + ) + + # Check if lan interface exists and has valid protocol + lan_proto = ssh_command.run("uci get network.lan.proto 2>/dev/null") + if lan_proto[2] == 0: + valid_protos = ["static", "dhcp", "none"] + assert lan_proto[0].strip() in valid_protos, ( + f"Invalid LAN protocol: {lan_proto[0]}" + ) + + def test_uci_changes_tracking(self, ssh_command): + """Test UCI changes tracking functionality.""" + test_section = "test_changes" + + try: + # Ensure clean state + ssh_command.run(f"uci delete {test_section} 2>/dev/null") + ssh_command.run("uci commit") + + # Make some changes without committing + ssh_command.run_check(f"uci set {test_section}=config") + ssh_command.run_check(f"uci set {test_section}.option1='value1'") + ssh_command.run_check(f"uci set {test_section}.option2='value2'") + + # Check changes + changes = ssh_command.run_check("uci changes") + changes_text = "\n".join(changes) + + assert f"{test_section}" in changes_text, "Changes not tracked" + assert "option1" in changes_text, "Option1 change not tracked" + assert "option2" in changes_text, "Option2 change not tracked" + + # Commit and verify no pending changes + ssh_command.run_check("uci commit") + changes_after = ssh_command.run_check("uci changes") + assert not changes_after[0].strip(), "Changes remain after commit" + + finally: + # Cleanup + ssh_command.run(f"uci delete {test_section} 2>/dev/null") + ssh_command.run("uci commit") + + def test_uci_system_config(self, ssh_command, results_bag): + """Test and validate system UCI configuration.""" + # Get hostname + hostname = ssh_command.run_check("uci get system.@system[0].hostname")[ + 0 + ].strip() + results_bag["hostname"] = hostname + + # Get timezone + timezone = ssh_command.run("uci get system.@system[0].timezone")[0].strip() + if timezone: + results_bag["timezone"] = timezone + + # Verify system has at least basic configuration + assert hostname, "No hostname configured" + assert hostname != "OpenWrt", "Using default hostname" + + def test_uci_config_permissions(self, ssh_command): + """Test UCI configuration file permissions.""" + # Check /etc/config directory permissions + config_dir_perms = ssh_command.run_check("stat -c '%a' /etc/config")[0].strip() + assert config_dir_perms == "755", ( + f"Incorrect /etc/config permissions: {config_dir_perms}" + ) + + # Check permissions of key config files + important_configs = ["system", "network", "wireless", "firewall", "dhcp"] + + for config in important_configs: + if ssh_command.run(f"test -f /etc/config/{config}")[2] == 0: + perms = ssh_command.run_check(f"stat -c '%a' /etc/config/{config}")[ + 0 + ].strip() + assert perms in ["644", "600"], ( + f"Incorrect permissions for /etc/config/{config}: {perms}" + ) + + @pytest.mark.slow + def test_uci_stress(self, ssh_command): + """Stress test UCI with multiple operations.""" + test_section = "test_stress" + num_options = 50 + + try: + # Clean state + ssh_command.run(f"uci delete {test_section} 2>/dev/null") + ssh_command.run("uci commit") + + # Create many options + ssh_command.run_check(f"uci set {test_section}=config") + + for i in range(num_options): + ssh_command.run_check(f"uci set {test_section}.option{i}='value{i}'") + + # Commit all at once + 
ssh_command.run_check("uci commit") + + # Verify random samples + import random + + for _ in range(10): + i = random.randint(0, num_options - 1) + result = ssh_command.run_check(f"uci get {test_section}.option{i}") + assert result[0].strip() == f"value{i}", ( + f"Stress test failed for option{i}" + ) + + # Test deletion of all + ssh_command.run_check(f"uci delete {test_section}") + ssh_command.run_check("uci commit") + + # Verify deletion + result = ssh_command.run(f"uci show {test_section} 2>&1") + assert result[2] != 0, "Failed to delete stress test section" + + finally: + # Ensure cleanup + ssh_command.run(f"uci delete {test_section} 2>/dev/null") + ssh_command.run("uci commit") diff --git a/tftp.sh b/tftp.sh new file mode 100755 index 000000000..055902ab7 --- /dev/null +++ b/tftp.sh @@ -0,0 +1,14 @@ +CIP="192.168.1.3,192.168.1.100" +IFACE=en18 +IMAGE=miwifi_r4a_firmware_72d65_2.28.62.bin +IMAGE=openwrt-ipq40xx-mikrotik-mikrotik_lhgg-60ad-initramfs-kernel.bin +IMAGE=openwrt-ipq40xx-mikrotik-mikrotik_lhgg-60ad-squashfs-sysupgrade.bin +IMAGE=openwrt-mediatek-mt7622-linksys_e8450-ubi-squashfs-sysupgrade.itb +IMAGE=openwrt-mvebu-cortexa9-cznic_turris-omnia-initramfs-kernel.bin + +echo "interface=${IFACE} + domain=unbrick.local + dhcp-range=${CIP},2m + dhcp-boot=${IMAGE} + enable-tftp + tftp-root=${PWD}" | tee /dev/stderr | dnsmasq -d -C - diff --git a/tmp/lg-ssh-2q5xhed9/proxy-stderr b/tmp/lg-ssh-2q5xhed9/proxy-stderr new file mode 100644 index 000000000..59ee5fd3b --- /dev/null +++ b/tmp/lg-ssh-2q5xhed9/proxy-stderr @@ -0,0 +1 @@ +2025/06/21 03:03:29 socat[3166] E connect(5, AF=2 192.168.1.1:22, 16): No route to host diff --git a/tmp/lg-ssh-7854s1ue/proxy-stderr b/tmp/lg-ssh-7854s1ue/proxy-stderr new file mode 100644 index 000000000..e69de29bb diff --git a/tmp/lg-ssh-b5pdwk_a/proxy-stderr b/tmp/lg-ssh-b5pdwk_a/proxy-stderr new file mode 100644 index 000000000..1c40b82b6 --- /dev/null +++ b/tmp/lg-ssh-b5pdwk_a/proxy-stderr @@ -0,0 +1 @@ +2025/06/21 03:03:36 socat[3172] E connect(5, AF=2 192.168.1.1:22, 16): No route to host diff --git a/tmp/lg-ssh-df3o6aol/proxy-stderr b/tmp/lg-ssh-df3o6aol/proxy-stderr new file mode 100644 index 000000000..957e5097a --- /dev/null +++ b/tmp/lg-ssh-df3o6aol/proxy-stderr @@ -0,0 +1 @@ +2025/06/21 03:03:44 socat[3178] E connect(5, AF=2 192.168.1.1:22, 16): No route to host diff --git a/tmp/lg-ssh-i0df1a9o/proxy-stderr b/tmp/lg-ssh-i0df1a9o/proxy-stderr new file mode 100644 index 000000000..17067e8e1 --- /dev/null +++ b/tmp/lg-ssh-i0df1a9o/proxy-stderr @@ -0,0 +1 @@ +2025/06/21 03:03:21 socat[3160] E connect(5, AF=2 192.168.1.1:22, 16): No route to host diff --git a/tmp/lg-ssh-lmnw_865/proxy-stderr b/tmp/lg-ssh-lmnw_865/proxy-stderr new file mode 100644 index 000000000..c248baffb --- /dev/null +++ b/tmp/lg-ssh-lmnw_865/proxy-stderr @@ -0,0 +1 @@ +2025/06/21 03:03:40 socat[3175] E connect(5, AF=2 192.168.1.1:22, 16): No route to host diff --git a/tmp/lg-ssh-moloje4t/proxy-stderr b/tmp/lg-ssh-moloje4t/proxy-stderr new file mode 100644 index 000000000..e69de29bb diff --git a/tmp/lg-ssh-qfkvb8ry/proxy-stderr b/tmp/lg-ssh-qfkvb8ry/proxy-stderr new file mode 100644 index 000000000..f2f3286a2 --- /dev/null +++ b/tmp/lg-ssh-qfkvb8ry/proxy-stderr @@ -0,0 +1 @@ +2025/06/21 03:03:25 socat[3163] E connect(5, AF=2 192.168.1.1:22, 16): No route to host diff --git a/tmp/lg-ssh-ukjxbljy/proxy-stderr b/tmp/lg-ssh-ukjxbljy/proxy-stderr new file mode 100644 index 000000000..007a7ced4 --- /dev/null +++ b/tmp/lg-ssh-ukjxbljy/proxy-stderr @@ -0,0 +1 @@ +2025/06/21 03:03:33 
diff --git a/util.py b/util.py
new file mode 100644
index 000000000..b9a9dcaf1
--- /dev/null
+++ b/util.py
@@ -0,0 +1,57 @@
+import time
+from pathlib import Path
+
+
+def parse_cram_test(
+    lines,
+    shell="/bin/sh",
+    indent=2,
+    testname=None,
+    env=None,
+    cleanenv=True,
+    debug=False,
+    dos2unix=False,
+    escape7bit=False,
+):
+    """Split a cram-style .t file into reference output and shell stdin."""
+    indent = b" " * indent
+    cmdline = indent + b"$ "
+    conline = indent + b"> "
+    # Unique marker echoed after each command so exit codes can later be
+    # matched back to their source line.
+    salt = b"PRYSK%.5f" % time.time()
+
+    lines = lines.splitlines(True) if isinstance(lines, bytes) else lines
+
+    after = {}
+    refout, postout = [], []
+    i = pos = prepos = -1
+    stdin = []
+    for i, line in enumerate(lines):
+        # Convert Windows style line endings to UNIX
+        if dos2unix and line.endswith(b"\r\n"):
+            line = line[:-2] + b"\n"
+        elif not line.endswith(b"\n"):
+            line += b"\n"
+        refout.append(line)
+        if line.startswith(cmdline):
+            after.setdefault(pos, []).append(line)
+            prepos = pos
+            pos = i
+            stdin.append(b"echo %s %d $?\n" % (salt, i))
+            stdin.append(line[len(cmdline) :])
+        elif line.startswith(conline):
+            after.setdefault(prepos, []).append(line)
+            stdin.append(line[len(conline) :])
+        elif not line.startswith(indent):
+            after.setdefault(pos, []).append(line)
+            stdin.append(b"echo %s %d $?\n" % (salt, i + 1))
+
+    if debug:
+        print(stdin)
+    return after, refout, postout, stdin
+
+
+base_test = Path("tests/cram/base.t").read_bytes()
+
+print(parse_cram_test(base_test))
diff --git a/uv.lock b/uv.lock
new file mode 100644
index 000000000..dae355267
--- /dev/null
+++ b/uv.lock
@@ -0,0 +1,588 @@
+version = 1
+revision = 3
+requires-python = ">=3.13"
+
+[[package]]
+name = "ansible"
+version = "11.10.0"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "ansible-core" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/a4/89/7792e7006a2695925df8c3c8e679075a9574d95e723bd8fdf0f211fb4055/ansible-11.10.0.tar.gz", hash = "sha256:f394e091c5dfd7e66c2f1e1d4459c44285004ba95925989b2d04c3a850ffcb96", size = 44842153, upload-time = "2025-09-09T18:40:16.283Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/ab/8a/5f602d5121e4b86dbe33f3b1a0024911fae16df3fb26449cd3c2d434d17a/ansible-11.10.0-py3-none-any.whl", hash = "sha256:e31d5fc5748353d3efc1d64348c3c1d969032486a4533b2253478442cb4ab024", size = 56649317, upload-time = "2025-09-09T18:40:11.061Z" },
+]
+
+[[package]]
+name = "ansible-core"
+version = "2.18.9"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "cryptography" },
+    { name = "jinja2" },
+    { name = "packaging" },
+    { name = "pyyaml" },
+    { name = "resolvelib" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/e7/6e/f9b8ec26b8862b0a09db2a637893d8ebdebf912823c6e83fdf0a79b494bf/ansible_core-2.18.9.tar.gz", hash = "sha256:a5f4a02aad5843e990ff7be1b92dd658a8b230de713ea643920e683ebf980da1", size = 3092054, upload-time = "2025-09-08T18:23:32.019Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/46/b3/ffd08ee6e0f5016d19cb16e4bb41cda94f1b8f91427e7eb6e5e742c87f5e/ansible_core-2.18.9-py3-none-any.whl", hash = "sha256:25206e1aac3bd30d95649a5ccf0d3646461d02b4dc265b5959e33b7ccd6f23f8", size = 2210581, upload-time = "2025-09-08T18:23:29.874Z" },
+]
+
+[[package]]
+name = "attrs"
+version = "25.3.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/5a/b0/1367933a8532ee6ff8d63537de4f1177af4bff9f3e829baf7331f595bb24/attrs-25.3.0.tar.gz", hash =
"sha256:75d7cefc7fb576747b2c81b4442d4d4a1ce0900973527c011d1030fd3bf4af1b", size = 812032, upload-time = "2025-03-13T11:10:22.779Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/77/06/bb80f5f86020c4551da315d78b3ab75e8228f89f0162f2c3a819e407941a/attrs-25.3.0-py3-none-any.whl", hash = "sha256:427318ce031701fea540783410126f03899a97ffc6f61596ad581ac2e40e3bc3", size = 63815, upload-time = "2025-03-13T11:10:21.14Z" }, +] + +[[package]] +name = "certifi" +version = "2025.6.15" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/73/f7/f14b46d4bcd21092d7d3ccef689615220d8a08fb25e564b65d20738e672e/certifi-2025.6.15.tar.gz", hash = "sha256:d747aa5a8b9bbbb1bb8c22bb13e22bd1f18e9796defa16bab421f7f7a317323b", size = 158753, upload-time = "2025-06-15T02:45:51.329Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/84/ae/320161bd181fc06471eed047ecce67b693fd7515b16d495d8932db763426/certifi-2025.6.15-py3-none-any.whl", hash = "sha256:2e0c7ce7cb5d8f8634ca55d2ba7e6ec2689a2fd6537d8dec1296a477a4910057", size = 157650, upload-time = "2025-06-15T02:45:49.977Z" }, +] + +[[package]] +name = "cffi" +version = "2.0.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "pycparser", marker = "implementation_name != 'PyPy'" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/eb/56/b1ba7935a17738ae8453301356628e8147c79dbb825bcbc73dc7401f9846/cffi-2.0.0.tar.gz", hash = "sha256:44d1b5909021139fe36001ae048dbdde8214afa20200eda0f64c068cac5d5529", size = 523588, upload-time = "2025-09-08T23:24:04.541Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/4b/8d/a0a47a0c9e413a658623d014e91e74a50cdd2c423f7ccfd44086ef767f90/cffi-2.0.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:00bdf7acc5f795150faa6957054fbbca2439db2f775ce831222b66f192f03beb", size = 185230, upload-time = "2025-09-08T23:23:00.879Z" }, + { url = "https://files.pythonhosted.org/packages/4a/d2/a6c0296814556c68ee32009d9c2ad4f85f2707cdecfd7727951ec228005d/cffi-2.0.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:45d5e886156860dc35862657e1494b9bae8dfa63bf56796f2fb56e1679fc0bca", size = 181043, upload-time = "2025-09-08T23:23:02.231Z" }, + { url = "https://files.pythonhosted.org/packages/b0/1e/d22cc63332bd59b06481ceaac49d6c507598642e2230f201649058a7e704/cffi-2.0.0-cp313-cp313-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:07b271772c100085dd28b74fa0cd81c8fb1a3ba18b21e03d7c27f3436a10606b", size = 212446, upload-time = "2025-09-08T23:23:03.472Z" }, + { url = "https://files.pythonhosted.org/packages/a9/f5/a2c23eb03b61a0b8747f211eb716446c826ad66818ddc7810cc2cc19b3f2/cffi-2.0.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:d48a880098c96020b02d5a1f7d9251308510ce8858940e6fa99ece33f610838b", size = 220101, upload-time = "2025-09-08T23:23:04.792Z" }, + { url = "https://files.pythonhosted.org/packages/f2/7f/e6647792fc5850d634695bc0e6ab4111ae88e89981d35ac269956605feba/cffi-2.0.0-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:f93fd8e5c8c0a4aa1f424d6173f14a892044054871c771f8566e4008eaa359d2", size = 207948, upload-time = "2025-09-08T23:23:06.127Z" }, + { url = "https://files.pythonhosted.org/packages/cb/1e/a5a1bd6f1fb30f22573f76533de12a00bf274abcdc55c8edab639078abb6/cffi-2.0.0-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:dd4f05f54a52fb558f1ba9f528228066954fee3ebe629fc1660d874d040ae5a3", size = 
206422, upload-time = "2025-09-08T23:23:07.753Z" }, + { url = "https://files.pythonhosted.org/packages/98/df/0a1755e750013a2081e863e7cd37e0cdd02664372c754e5560099eb7aa44/cffi-2.0.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:c8d3b5532fc71b7a77c09192b4a5a200ea992702734a2e9279a37f2478236f26", size = 219499, upload-time = "2025-09-08T23:23:09.648Z" }, + { url = "https://files.pythonhosted.org/packages/50/e1/a969e687fcf9ea58e6e2a928ad5e2dd88cc12f6f0ab477e9971f2309b57c/cffi-2.0.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:d9b29c1f0ae438d5ee9acb31cadee00a58c46cc9c0b2f9038c6b0b3470877a8c", size = 222928, upload-time = "2025-09-08T23:23:10.928Z" }, + { url = "https://files.pythonhosted.org/packages/36/54/0362578dd2c9e557a28ac77698ed67323ed5b9775ca9d3fe73fe191bb5d8/cffi-2.0.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:6d50360be4546678fc1b79ffe7a66265e28667840010348dd69a314145807a1b", size = 221302, upload-time = "2025-09-08T23:23:12.42Z" }, + { url = "https://files.pythonhosted.org/packages/eb/6d/bf9bda840d5f1dfdbf0feca87fbdb64a918a69bca42cfa0ba7b137c48cb8/cffi-2.0.0-cp313-cp313-win32.whl", hash = "sha256:74a03b9698e198d47562765773b4a8309919089150a0bb17d829ad7b44b60d27", size = 172909, upload-time = "2025-09-08T23:23:14.32Z" }, + { url = "https://files.pythonhosted.org/packages/37/18/6519e1ee6f5a1e579e04b9ddb6f1676c17368a7aba48299c3759bbc3c8b3/cffi-2.0.0-cp313-cp313-win_amd64.whl", hash = "sha256:19f705ada2530c1167abacb171925dd886168931e0a7b78f5bffcae5c6b5be75", size = 183402, upload-time = "2025-09-08T23:23:15.535Z" }, + { url = "https://files.pythonhosted.org/packages/cb/0e/02ceeec9a7d6ee63bb596121c2c8e9b3a9e150936f4fbef6ca1943e6137c/cffi-2.0.0-cp313-cp313-win_arm64.whl", hash = "sha256:256f80b80ca3853f90c21b23ee78cd008713787b1b1e93eae9f3d6a7134abd91", size = 177780, upload-time = "2025-09-08T23:23:16.761Z" }, + { url = "https://files.pythonhosted.org/packages/92/c4/3ce07396253a83250ee98564f8d7e9789fab8e58858f35d07a9a2c78de9f/cffi-2.0.0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:fc33c5141b55ed366cfaad382df24fe7dcbc686de5be719b207bb248e3053dc5", size = 185320, upload-time = "2025-09-08T23:23:18.087Z" }, + { url = "https://files.pythonhosted.org/packages/59/dd/27e9fa567a23931c838c6b02d0764611c62290062a6d4e8ff7863daf9730/cffi-2.0.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:c654de545946e0db659b3400168c9ad31b5d29593291482c43e3564effbcee13", size = 181487, upload-time = "2025-09-08T23:23:19.622Z" }, + { url = "https://files.pythonhosted.org/packages/d6/43/0e822876f87ea8a4ef95442c3d766a06a51fc5298823f884ef87aaad168c/cffi-2.0.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:24b6f81f1983e6df8db3adc38562c83f7d4a0c36162885ec7f7b77c7dcbec97b", size = 220049, upload-time = "2025-09-08T23:23:20.853Z" }, + { url = "https://files.pythonhosted.org/packages/b4/89/76799151d9c2d2d1ead63c2429da9ea9d7aac304603de0c6e8764e6e8e70/cffi-2.0.0-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:12873ca6cb9b0f0d3a0da705d6086fe911591737a59f28b7936bdfed27c0d47c", size = 207793, upload-time = "2025-09-08T23:23:22.08Z" }, + { url = "https://files.pythonhosted.org/packages/bb/dd/3465b14bb9e24ee24cb88c9e3730f6de63111fffe513492bf8c808a3547e/cffi-2.0.0-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:d9b97165e8aed9272a6bb17c01e3cc5871a594a446ebedc996e2397a1c1ea8ef", size = 206300, upload-time = "2025-09-08T23:23:23.314Z" }, + { url = 
"https://files.pythonhosted.org/packages/47/d9/d83e293854571c877a92da46fdec39158f8d7e68da75bf73581225d28e90/cffi-2.0.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:afb8db5439b81cf9c9d0c80404b60c3cc9c3add93e114dcae767f1477cb53775", size = 219244, upload-time = "2025-09-08T23:23:24.541Z" }, + { url = "https://files.pythonhosted.org/packages/2b/0f/1f177e3683aead2bb00f7679a16451d302c436b5cbf2505f0ea8146ef59e/cffi-2.0.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:737fe7d37e1a1bffe70bd5754ea763a62a066dc5913ca57e957824b72a85e205", size = 222828, upload-time = "2025-09-08T23:23:26.143Z" }, + { url = "https://files.pythonhosted.org/packages/c6/0f/cafacebd4b040e3119dcb32fed8bdef8dfe94da653155f9d0b9dc660166e/cffi-2.0.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:38100abb9d1b1435bc4cc340bb4489635dc2f0da7456590877030c9b3d40b0c1", size = 220926, upload-time = "2025-09-08T23:23:27.873Z" }, + { url = "https://files.pythonhosted.org/packages/3e/aa/df335faa45b395396fcbc03de2dfcab242cd61a9900e914fe682a59170b1/cffi-2.0.0-cp314-cp314-win32.whl", hash = "sha256:087067fa8953339c723661eda6b54bc98c5625757ea62e95eb4898ad5e776e9f", size = 175328, upload-time = "2025-09-08T23:23:44.61Z" }, + { url = "https://files.pythonhosted.org/packages/bb/92/882c2d30831744296ce713f0feb4c1cd30f346ef747b530b5318715cc367/cffi-2.0.0-cp314-cp314-win_amd64.whl", hash = "sha256:203a48d1fb583fc7d78a4c6655692963b860a417c0528492a6bc21f1aaefab25", size = 185650, upload-time = "2025-09-08T23:23:45.848Z" }, + { url = "https://files.pythonhosted.org/packages/9f/2c/98ece204b9d35a7366b5b2c6539c350313ca13932143e79dc133ba757104/cffi-2.0.0-cp314-cp314-win_arm64.whl", hash = "sha256:dbd5c7a25a7cb98f5ca55d258b103a2054f859a46ae11aaf23134f9cc0d356ad", size = 180687, upload-time = "2025-09-08T23:23:47.105Z" }, + { url = "https://files.pythonhosted.org/packages/3e/61/c768e4d548bfa607abcda77423448df8c471f25dbe64fb2ef6d555eae006/cffi-2.0.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:9a67fc9e8eb39039280526379fb3a70023d77caec1852002b4da7e8b270c4dd9", size = 188773, upload-time = "2025-09-08T23:23:29.347Z" }, + { url = "https://files.pythonhosted.org/packages/2c/ea/5f76bce7cf6fcd0ab1a1058b5af899bfbef198bea4d5686da88471ea0336/cffi-2.0.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:7a66c7204d8869299919db4d5069a82f1561581af12b11b3c9f48c584eb8743d", size = 185013, upload-time = "2025-09-08T23:23:30.63Z" }, + { url = "https://files.pythonhosted.org/packages/be/b4/c56878d0d1755cf9caa54ba71e5d049479c52f9e4afc230f06822162ab2f/cffi-2.0.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7cc09976e8b56f8cebd752f7113ad07752461f48a58cbba644139015ac24954c", size = 221593, upload-time = "2025-09-08T23:23:31.91Z" }, + { url = "https://files.pythonhosted.org/packages/e0/0d/eb704606dfe8033e7128df5e90fee946bbcb64a04fcdaa97321309004000/cffi-2.0.0-cp314-cp314t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:92b68146a71df78564e4ef48af17551a5ddd142e5190cdf2c5624d0c3ff5b2e8", size = 209354, upload-time = "2025-09-08T23:23:33.214Z" }, + { url = "https://files.pythonhosted.org/packages/d8/19/3c435d727b368ca475fb8742ab97c9cb13a0de600ce86f62eab7fa3eea60/cffi-2.0.0-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:b1e74d11748e7e98e2f426ab176d4ed720a64412b6a15054378afdb71e0f37dc", size = 208480, upload-time = "2025-09-08T23:23:34.495Z" }, + { url = 
"https://files.pythonhosted.org/packages/d0/44/681604464ed9541673e486521497406fadcc15b5217c3e326b061696899a/cffi-2.0.0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:28a3a209b96630bca57cce802da70c266eb08c6e97e5afd61a75611ee6c64592", size = 221584, upload-time = "2025-09-08T23:23:36.096Z" }, + { url = "https://files.pythonhosted.org/packages/25/8e/342a504ff018a2825d395d44d63a767dd8ebc927ebda557fecdaca3ac33a/cffi-2.0.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:7553fb2090d71822f02c629afe6042c299edf91ba1bf94951165613553984512", size = 224443, upload-time = "2025-09-08T23:23:37.328Z" }, + { url = "https://files.pythonhosted.org/packages/e1/5e/b666bacbbc60fbf415ba9988324a132c9a7a0448a9a8f125074671c0f2c3/cffi-2.0.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:6c6c373cfc5c83a975506110d17457138c8c63016b563cc9ed6e056a82f13ce4", size = 223437, upload-time = "2025-09-08T23:23:38.945Z" }, + { url = "https://files.pythonhosted.org/packages/a0/1d/ec1a60bd1a10daa292d3cd6bb0b359a81607154fb8165f3ec95fe003b85c/cffi-2.0.0-cp314-cp314t-win32.whl", hash = "sha256:1fc9ea04857caf665289b7a75923f2c6ed559b8298a1b8c49e59f7dd95c8481e", size = 180487, upload-time = "2025-09-08T23:23:40.423Z" }, + { url = "https://files.pythonhosted.org/packages/bf/41/4c1168c74fac325c0c8156f04b6749c8b6a8f405bbf91413ba088359f60d/cffi-2.0.0-cp314-cp314t-win_amd64.whl", hash = "sha256:d68b6cef7827e8641e8ef16f4494edda8b36104d79773a334beaa1e3521430f6", size = 191726, upload-time = "2025-09-08T23:23:41.742Z" }, + { url = "https://files.pythonhosted.org/packages/ae/3a/dbeec9d1ee0844c679f6bb5d6ad4e9f198b1224f4e7a32825f47f6192b0c/cffi-2.0.0-cp314-cp314t-win_arm64.whl", hash = "sha256:0a1527a803f0a659de1af2e1fd700213caba79377e27e4693648c2923da066f9", size = 184195, upload-time = "2025-09-08T23:23:43.004Z" }, +] + +[[package]] +name = "charset-normalizer" +version = "3.4.2" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/e4/33/89c2ced2b67d1c2a61c19c6751aa8902d46ce3dacb23600a283619f5a12d/charset_normalizer-3.4.2.tar.gz", hash = "sha256:5baececa9ecba31eff645232d59845c07aa030f0c81ee70184a90d35099a0e63", size = 126367, upload-time = "2025-05-02T08:34:42.01Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/ea/12/a93df3366ed32db1d907d7593a94f1fe6293903e3e92967bebd6950ed12c/charset_normalizer-3.4.2-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:926ca93accd5d36ccdabd803392ddc3e03e6d4cd1cf17deff3b989ab8e9dbcf0", size = 199622, upload-time = "2025-05-02T08:32:56.363Z" }, + { url = "https://files.pythonhosted.org/packages/04/93/bf204e6f344c39d9937d3c13c8cd5bbfc266472e51fc8c07cb7f64fcd2de/charset_normalizer-3.4.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eba9904b0f38a143592d9fc0e19e2df0fa2e41c3c3745554761c5f6447eedabf", size = 143435, upload-time = "2025-05-02T08:32:58.551Z" }, + { url = "https://files.pythonhosted.org/packages/22/2a/ea8a2095b0bafa6c5b5a55ffdc2f924455233ee7b91c69b7edfcc9e02284/charset_normalizer-3.4.2-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3fddb7e2c84ac87ac3a947cb4e66d143ca5863ef48e4a5ecb83bd48619e4634e", size = 153653, upload-time = "2025-05-02T08:33:00.342Z" }, + { url = "https://files.pythonhosted.org/packages/b6/57/1b090ff183d13cef485dfbe272e2fe57622a76694061353c59da52c9a659/charset_normalizer-3.4.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = 
"sha256:98f862da73774290f251b9df8d11161b6cf25b599a66baf087c1ffe340e9bfd1", size = 146231, upload-time = "2025-05-02T08:33:02.081Z" }, + { url = "https://files.pythonhosted.org/packages/e2/28/ffc026b26f441fc67bd21ab7f03b313ab3fe46714a14b516f931abe1a2d8/charset_normalizer-3.4.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6c9379d65defcab82d07b2a9dfbfc2e95bc8fe0ebb1b176a3190230a3ef0e07c", size = 148243, upload-time = "2025-05-02T08:33:04.063Z" }, + { url = "https://files.pythonhosted.org/packages/c0/0f/9abe9bd191629c33e69e47c6ef45ef99773320e9ad8e9cb08b8ab4a8d4cb/charset_normalizer-3.4.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e635b87f01ebc977342e2697d05b56632f5f879a4f15955dfe8cef2448b51691", size = 150442, upload-time = "2025-05-02T08:33:06.418Z" }, + { url = "https://files.pythonhosted.org/packages/67/7c/a123bbcedca91d5916c056407f89a7f5e8fdfce12ba825d7d6b9954a1a3c/charset_normalizer-3.4.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:1c95a1e2902a8b722868587c0e1184ad5c55631de5afc0eb96bc4b0d738092c0", size = 145147, upload-time = "2025-05-02T08:33:08.183Z" }, + { url = "https://files.pythonhosted.org/packages/ec/fe/1ac556fa4899d967b83e9893788e86b6af4d83e4726511eaaad035e36595/charset_normalizer-3.4.2-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:ef8de666d6179b009dce7bcb2ad4c4a779f113f12caf8dc77f0162c29d20490b", size = 153057, upload-time = "2025-05-02T08:33:09.986Z" }, + { url = "https://files.pythonhosted.org/packages/2b/ff/acfc0b0a70b19e3e54febdd5301a98b72fa07635e56f24f60502e954c461/charset_normalizer-3.4.2-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:32fc0341d72e0f73f80acb0a2c94216bd704f4f0bce10aedea38f30502b271ff", size = 156454, upload-time = "2025-05-02T08:33:11.814Z" }, + { url = "https://files.pythonhosted.org/packages/92/08/95b458ce9c740d0645feb0e96cea1f5ec946ea9c580a94adfe0b617f3573/charset_normalizer-3.4.2-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:289200a18fa698949d2b39c671c2cc7a24d44096784e76614899a7ccf2574b7b", size = 154174, upload-time = "2025-05-02T08:33:13.707Z" }, + { url = "https://files.pythonhosted.org/packages/78/be/8392efc43487ac051eee6c36d5fbd63032d78f7728cb37aebcc98191f1ff/charset_normalizer-3.4.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:4a476b06fbcf359ad25d34a057b7219281286ae2477cc5ff5e3f70a246971148", size = 149166, upload-time = "2025-05-02T08:33:15.458Z" }, + { url = "https://files.pythonhosted.org/packages/44/96/392abd49b094d30b91d9fbda6a69519e95802250b777841cf3bda8fe136c/charset_normalizer-3.4.2-cp313-cp313-win32.whl", hash = "sha256:aaeeb6a479c7667fbe1099af9617c83aaca22182d6cf8c53966491a0f1b7ffb7", size = 98064, upload-time = "2025-05-02T08:33:17.06Z" }, + { url = "https://files.pythonhosted.org/packages/e9/b0/0200da600134e001d91851ddc797809e2fe0ea72de90e09bec5a2fbdaccb/charset_normalizer-3.4.2-cp313-cp313-win_amd64.whl", hash = "sha256:aa6af9e7d59f9c12b33ae4e9450619cf2488e2bbe9b44030905877f0b2324980", size = 105641, upload-time = "2025-05-02T08:33:18.753Z" }, + { url = "https://files.pythonhosted.org/packages/20/94/c5790835a017658cbfabd07f3bfb549140c3ac458cfc196323996b10095a/charset_normalizer-3.4.2-py3-none-any.whl", hash = "sha256:7f56930ab0abd1c45cd15be65cc741c28b1c9a34876ce8c17a2fa107810c0af0", size = 52626, upload-time = "2025-05-02T08:34:40.053Z" }, +] + +[[package]] +name = "colorama" +version = "0.4.6" +source = { registry = "https://pypi.org/simple" } +sdist = { url = 
"https://files.pythonhosted.org/packages/d8/53/6f443c9a4a8358a93a6792e2acffb9d9d5cb0a5cfd8802644b7b1c9a02e4/colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44", size = 27697, upload-time = "2022-10-25T02:36:22.414Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335, upload-time = "2022-10-25T02:36:20.889Z" }, +] + +[[package]] +name = "cryptography" +version = "46.0.2" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "cffi", marker = "platform_python_implementation != 'PyPy'" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/4a/9b/e301418629f7bfdf72db9e80ad6ed9d1b83c487c471803eaa6464c511a01/cryptography-46.0.2.tar.gz", hash = "sha256:21b6fc8c71a3f9a604f028a329e5560009cc4a3a828bfea5fcba8eb7647d88fe", size = 749293, upload-time = "2025-10-01T00:29:11.856Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/e0/98/7a8df8c19a335c8028414738490fc3955c0cecbfdd37fcc1b9c3d04bd561/cryptography-46.0.2-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:f3e32ab7dd1b1ef67b9232c4cf5e2ee4cd517d4316ea910acaaa9c5712a1c663", size = 7261255, upload-time = "2025-10-01T00:27:22.947Z" }, + { url = "https://files.pythonhosted.org/packages/c6/38/b2adb2aa1baa6706adc3eb746691edd6f90a656a9a65c3509e274d15a2b8/cryptography-46.0.2-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:1fd1a69086926b623ef8126b4c33d5399ce9e2f3fac07c9c734c2a4ec38b6d02", size = 4297596, upload-time = "2025-10-01T00:27:25.258Z" }, + { url = "https://files.pythonhosted.org/packages/e4/27/0f190ada240003119488ae66c897b5e97149292988f556aef4a6a2a57595/cryptography-46.0.2-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:bb7fb9cd44c2582aa5990cf61a4183e6f54eea3172e54963787ba47287edd135", size = 4450899, upload-time = "2025-10-01T00:27:27.458Z" }, + { url = "https://files.pythonhosted.org/packages/85/d5/e4744105ab02fdf6bb58ba9a816e23b7a633255987310b4187d6745533db/cryptography-46.0.2-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:9066cfd7f146f291869a9898b01df1c9b0e314bfa182cef432043f13fc462c92", size = 4300382, upload-time = "2025-10-01T00:27:29.091Z" }, + { url = "https://files.pythonhosted.org/packages/33/fb/bf9571065c18c04818cb07de90c43fc042c7977c68e5de6876049559c72f/cryptography-46.0.2-cp311-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:97e83bf4f2f2c084d8dd792d13841d0a9b241643151686010866bbd076b19659", size = 4017347, upload-time = "2025-10-01T00:27:30.767Z" }, + { url = "https://files.pythonhosted.org/packages/35/72/fc51856b9b16155ca071080e1a3ad0c3a8e86616daf7eb018d9565b99baa/cryptography-46.0.2-cp311-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:4a766d2a5d8127364fd936572c6e6757682fc5dfcbdba1632d4554943199f2fa", size = 4983500, upload-time = "2025-10-01T00:27:32.741Z" }, + { url = "https://files.pythonhosted.org/packages/c1/53/0f51e926799025e31746d454ab2e36f8c3f0d41592bc65cb9840368d3275/cryptography-46.0.2-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:fab8f805e9675e61ed8538f192aad70500fa6afb33a8803932999b1049363a08", size = 4482591, upload-time = "2025-10-01T00:27:34.869Z" }, + { url = 
"https://files.pythonhosted.org/packages/86/96/4302af40b23ab8aa360862251fb8fc450b2a06ff24bc5e261c2007f27014/cryptography-46.0.2-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:1e3b6428a3d56043bff0bb85b41c535734204e599c1c0977e1d0f261b02f3ad5", size = 4300019, upload-time = "2025-10-01T00:27:37.029Z" }, + { url = "https://files.pythonhosted.org/packages/9b/59/0be12c7fcc4c5e34fe2b665a75bc20958473047a30d095a7657c218fa9e8/cryptography-46.0.2-cp311-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:1a88634851d9b8de8bb53726f4300ab191d3b2f42595e2581a54b26aba71b7cc", size = 4950006, upload-time = "2025-10-01T00:27:40.272Z" }, + { url = "https://files.pythonhosted.org/packages/55/1d/42fda47b0111834b49e31590ae14fd020594d5e4dadd639bce89ad790fba/cryptography-46.0.2-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:be939b99d4e091eec9a2bcf41aaf8f351f312cd19ff74b5c83480f08a8a43e0b", size = 4482088, upload-time = "2025-10-01T00:27:42.668Z" }, + { url = "https://files.pythonhosted.org/packages/17/50/60f583f69aa1602c2bdc7022dae86a0d2b837276182f8c1ec825feb9b874/cryptography-46.0.2-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:9f13b040649bc18e7eb37936009b24fd31ca095a5c647be8bb6aaf1761142bd1", size = 4425599, upload-time = "2025-10-01T00:27:44.616Z" }, + { url = "https://files.pythonhosted.org/packages/d1/57/d8d4134cd27e6e94cf44adb3f3489f935bde85f3a5508e1b5b43095b917d/cryptography-46.0.2-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:9bdc25e4e01b261a8fda4e98618f1c9515febcecebc9566ddf4a70c63967043b", size = 4697458, upload-time = "2025-10-01T00:27:46.209Z" }, + { url = "https://files.pythonhosted.org/packages/d1/2b/531e37408573e1da33adfb4c58875013ee8ac7d548d1548967d94a0ae5c4/cryptography-46.0.2-cp311-abi3-win32.whl", hash = "sha256:8b9bf67b11ef9e28f4d78ff88b04ed0929fcd0e4f70bb0f704cfc32a5c6311ee", size = 3056077, upload-time = "2025-10-01T00:27:48.424Z" }, + { url = "https://files.pythonhosted.org/packages/a8/cd/2f83cafd47ed2dc5a3a9c783ff5d764e9e70d3a160e0df9a9dcd639414ce/cryptography-46.0.2-cp311-abi3-win_amd64.whl", hash = "sha256:758cfc7f4c38c5c5274b55a57ef1910107436f4ae842478c4989abbd24bd5acb", size = 3512585, upload-time = "2025-10-01T00:27:50.521Z" }, + { url = "https://files.pythonhosted.org/packages/00/36/676f94e10bfaa5c5b86c469ff46d3e0663c5dc89542f7afbadac241a3ee4/cryptography-46.0.2-cp311-abi3-win_arm64.whl", hash = "sha256:218abd64a2e72f8472c2102febb596793347a3e65fafbb4ad50519969da44470", size = 2927474, upload-time = "2025-10-01T00:27:52.91Z" }, + { url = "https://files.pythonhosted.org/packages/6f/cc/47fc6223a341f26d103cb6da2216805e08a37d3b52bee7f3b2aee8066f95/cryptography-46.0.2-cp314-cp314t-macosx_10_9_universal2.whl", hash = "sha256:bda55e8dbe8533937956c996beaa20266a8eca3570402e52ae52ed60de1faca8", size = 7198626, upload-time = "2025-10-01T00:27:54.8Z" }, + { url = "https://files.pythonhosted.org/packages/93/22/d66a8591207c28bbe4ac7afa25c4656dc19dc0db29a219f9809205639ede/cryptography-46.0.2-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:e7155c0b004e936d381b15425273aee1cebc94f879c0ce82b0d7fecbf755d53a", size = 4287584, upload-time = "2025-10-01T00:27:57.018Z" }, + { url = "https://files.pythonhosted.org/packages/8c/3e/fac3ab6302b928e0398c269eddab5978e6c1c50b2b77bb5365ffa8633b37/cryptography-46.0.2-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:a61c154cc5488272a6c4b86e8d5beff4639cdb173d75325ce464d723cda0052b", size = 4433796, upload-time = "2025-10-01T00:27:58.631Z" }, + { url = 
"https://files.pythonhosted.org/packages/7d/d8/24392e5d3c58e2d83f98fe5a2322ae343360ec5b5b93fe18bc52e47298f5/cryptography-46.0.2-cp314-cp314t-manylinux_2_28_aarch64.whl", hash = "sha256:9ec3f2e2173f36a9679d3b06d3d01121ab9b57c979de1e6a244b98d51fea1b20", size = 4292126, upload-time = "2025-10-01T00:28:00.643Z" }, + { url = "https://files.pythonhosted.org/packages/ed/38/3d9f9359b84c16c49a5a336ee8be8d322072a09fac17e737f3bb11f1ce64/cryptography-46.0.2-cp314-cp314t-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:2fafb6aa24e702bbf74de4cb23bfa2c3beb7ab7683a299062b69724c92e0fa73", size = 3993056, upload-time = "2025-10-01T00:28:02.8Z" }, + { url = "https://files.pythonhosted.org/packages/d6/a3/4c44fce0d49a4703cc94bfbe705adebf7ab36efe978053742957bc7ec324/cryptography-46.0.2-cp314-cp314t-manylinux_2_28_ppc64le.whl", hash = "sha256:0c7ffe8c9b1fcbb07a26d7c9fa5e857c2fe80d72d7b9e0353dcf1d2180ae60ee", size = 4967604, upload-time = "2025-10-01T00:28:04.783Z" }, + { url = "https://files.pythonhosted.org/packages/eb/c2/49d73218747c8cac16bb8318a5513fde3129e06a018af3bc4dc722aa4a98/cryptography-46.0.2-cp314-cp314t-manylinux_2_28_x86_64.whl", hash = "sha256:5840f05518caa86b09d23f8b9405a7b6d5400085aa14a72a98fdf5cf1568c0d2", size = 4465367, upload-time = "2025-10-01T00:28:06.864Z" }, + { url = "https://files.pythonhosted.org/packages/1b/64/9afa7d2ee742f55ca6285a54386ed2778556a4ed8871571cb1c1bfd8db9e/cryptography-46.0.2-cp314-cp314t-manylinux_2_34_aarch64.whl", hash = "sha256:27c53b4f6a682a1b645fbf1cd5058c72cf2f5aeba7d74314c36838c7cbc06e0f", size = 4291678, upload-time = "2025-10-01T00:28:08.982Z" }, + { url = "https://files.pythonhosted.org/packages/50/48/1696d5ea9623a7b72ace87608f6899ca3c331709ac7ebf80740abb8ac673/cryptography-46.0.2-cp314-cp314t-manylinux_2_34_ppc64le.whl", hash = "sha256:512c0250065e0a6b286b2db4bbcc2e67d810acd53eb81733e71314340366279e", size = 4931366, upload-time = "2025-10-01T00:28:10.74Z" }, + { url = "https://files.pythonhosted.org/packages/eb/3c/9dfc778401a334db3b24435ee0733dd005aefb74afe036e2d154547cb917/cryptography-46.0.2-cp314-cp314t-manylinux_2_34_x86_64.whl", hash = "sha256:07c0eb6657c0e9cca5891f4e35081dbf985c8131825e21d99b4f440a8f496f36", size = 4464738, upload-time = "2025-10-01T00:28:12.491Z" }, + { url = "https://files.pythonhosted.org/packages/dc/b1/abcde62072b8f3fd414e191a6238ce55a0050e9738090dc6cded24c12036/cryptography-46.0.2-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:48b983089378f50cba258f7f7aa28198c3f6e13e607eaf10472c26320332ca9a", size = 4419305, upload-time = "2025-10-01T00:28:14.145Z" }, + { url = "https://files.pythonhosted.org/packages/c7/1f/3d2228492f9391395ca34c677e8f2571fb5370fe13dc48c1014f8c509864/cryptography-46.0.2-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:e6f6775eaaa08c0eec73e301f7592f4367ccde5e4e4df8e58320f2ebf161ea2c", size = 4681201, upload-time = "2025-10-01T00:28:15.951Z" }, + { url = "https://files.pythonhosted.org/packages/de/77/b687745804a93a55054f391528fcfc76c3d6bfd082ce9fb62c12f0d29fc1/cryptography-46.0.2-cp314-cp314t-win32.whl", hash = "sha256:e8633996579961f9b5a3008683344c2558d38420029d3c0bc7ff77c17949a4e1", size = 3022492, upload-time = "2025-10-01T00:28:17.643Z" }, + { url = "https://files.pythonhosted.org/packages/60/a5/8d498ef2996e583de0bef1dcc5e70186376f00883ae27bf2133f490adf21/cryptography-46.0.2-cp314-cp314t-win_amd64.whl", hash = "sha256:48c01988ecbb32979bb98731f5c2b2f79042a6c58cc9a319c8c2f9987c7f68f9", size = 3496215, upload-time = "2025-10-01T00:28:19.272Z" }, + { url = 
"https://files.pythonhosted.org/packages/56/db/ee67aaef459a2706bc302b15889a1a8126ebe66877bab1487ae6ad00f33d/cryptography-46.0.2-cp314-cp314t-win_arm64.whl", hash = "sha256:8e2ad4d1a5899b7caa3a450e33ee2734be7cc0689010964703a7c4bcc8dd4fd0", size = 2919255, upload-time = "2025-10-01T00:28:21.115Z" }, + { url = "https://files.pythonhosted.org/packages/d5/bb/fa95abcf147a1b0bb94d95f53fbb09da77b24c776c5d87d36f3d94521d2c/cryptography-46.0.2-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:a08e7401a94c002e79dc3bc5231b6558cd4b2280ee525c4673f650a37e2c7685", size = 7248090, upload-time = "2025-10-01T00:28:22.846Z" }, + { url = "https://files.pythonhosted.org/packages/b7/66/f42071ce0e3ffbfa80a88feadb209c779fda92a23fbc1e14f74ebf72ef6b/cryptography-46.0.2-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:d30bc11d35743bf4ddf76674a0a369ec8a21f87aaa09b0661b04c5f6c46e8d7b", size = 4293123, upload-time = "2025-10-01T00:28:25.072Z" }, + { url = "https://files.pythonhosted.org/packages/a8/5d/1fdbd2e5c1ba822828d250e5a966622ef00185e476d1cd2726b6dd135e53/cryptography-46.0.2-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:bca3f0ce67e5a2a2cf524e86f44697c4323a86e0fd7ba857de1c30d52c11ede1", size = 4439524, upload-time = "2025-10-01T00:28:26.808Z" }, + { url = "https://files.pythonhosted.org/packages/c8/c1/5e4989a7d102d4306053770d60f978c7b6b1ea2ff8c06e0265e305b23516/cryptography-46.0.2-cp38-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:ff798ad7a957a5021dcbab78dfff681f0cf15744d0e6af62bd6746984d9c9e9c", size = 4297264, upload-time = "2025-10-01T00:28:29.327Z" }, + { url = "https://files.pythonhosted.org/packages/28/78/b56f847d220cb1d6d6aef5a390e116ad603ce13a0945a3386a33abc80385/cryptography-46.0.2-cp38-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:cb5e8daac840e8879407acbe689a174f5ebaf344a062f8918e526824eb5d97af", size = 4011872, upload-time = "2025-10-01T00:28:31.479Z" }, + { url = "https://files.pythonhosted.org/packages/e1/80/2971f214b066b888944f7b57761bf709ee3f2cf805619a18b18cab9b263c/cryptography-46.0.2-cp38-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:3f37aa12b2d91e157827d90ce78f6180f0c02319468a0aea86ab5a9566da644b", size = 4978458, upload-time = "2025-10-01T00:28:33.267Z" }, + { url = "https://files.pythonhosted.org/packages/a5/84/0cb0a2beaa4f1cbe63ebec4e97cd7e0e9f835d0ba5ee143ed2523a1e0016/cryptography-46.0.2-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:5e38f203160a48b93010b07493c15f2babb4e0f2319bbd001885adb3f3696d21", size = 4472195, upload-time = "2025-10-01T00:28:36.039Z" }, + { url = "https://files.pythonhosted.org/packages/30/8b/2b542ddbf78835c7cd67b6fa79e95560023481213a060b92352a61a10efe/cryptography-46.0.2-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:d19f5f48883752b5ab34cff9e2f7e4a7f216296f33714e77d1beb03d108632b6", size = 4296791, upload-time = "2025-10-01T00:28:37.732Z" }, + { url = "https://files.pythonhosted.org/packages/78/12/9065b40201b4f4876e93b9b94d91feb18de9150d60bd842a16a21565007f/cryptography-46.0.2-cp38-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:04911b149eae142ccd8c9a68892a70c21613864afb47aba92d8c7ed9cc001023", size = 4939629, upload-time = "2025-10-01T00:28:39.654Z" }, + { url = "https://files.pythonhosted.org/packages/f6/9e/6507dc048c1b1530d372c483dfd34e7709fc542765015425f0442b08547f/cryptography-46.0.2-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:8b16c1ede6a937c291d41176934268e4ccac2c6521c69d3f5961c5a1e11e039e", size = 4471988, upload-time = "2025-10-01T00:28:41.822Z" }, + { 
url = "https://files.pythonhosted.org/packages/b1/86/d025584a5f7d5c5ec8d3633dbcdce83a0cd579f1141ceada7817a4c26934/cryptography-46.0.2-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:747b6f4a4a23d5a215aadd1d0b12233b4119c4313df83ab4137631d43672cc90", size = 4422989, upload-time = "2025-10-01T00:28:43.608Z" }, + { url = "https://files.pythonhosted.org/packages/4b/39/536370418b38a15a61bbe413006b79dfc3d2b4b0eafceb5581983f973c15/cryptography-46.0.2-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:6b275e398ab3a7905e168c036aad54b5969d63d3d9099a0a66cc147a3cc983be", size = 4685578, upload-time = "2025-10-01T00:28:45.361Z" }, + { url = "https://files.pythonhosted.org/packages/15/52/ea7e2b1910f547baed566c866fbb86de2402e501a89ecb4871ea7f169a81/cryptography-46.0.2-cp38-abi3-win32.whl", hash = "sha256:0b507c8e033307e37af61cb9f7159b416173bdf5b41d11c4df2e499a1d8e007c", size = 3036711, upload-time = "2025-10-01T00:28:47.096Z" }, + { url = "https://files.pythonhosted.org/packages/71/9e/171f40f9c70a873e73c2efcdbe91e1d4b1777a03398fa1c4af3c56a2477a/cryptography-46.0.2-cp38-abi3-win_amd64.whl", hash = "sha256:f9b2dc7668418fb6f221e4bf701f716e05e8eadb4f1988a2487b11aedf8abe62", size = 3500007, upload-time = "2025-10-01T00:28:48.967Z" }, + { url = "https://files.pythonhosted.org/packages/3e/7c/15ad426257615f9be8caf7f97990cf3dcbb5b8dd7ed7e0db581a1c4759dd/cryptography-46.0.2-cp38-abi3-win_arm64.whl", hash = "sha256:91447f2b17e83c9e0c89f133119d83f94ce6e0fb55dd47da0a959316e6e9cfa1", size = 2918153, upload-time = "2025-10-01T00:28:51.003Z" }, +] + +[[package]] +name = "decopatch" +version = "1.4.10" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "makefun" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/90/4c/8ca1f193428cbc4d63d0f07db9b8bd96be2db8ee5deefa93e7e8a28f2812/decopatch-1.4.10.tar.gz", hash = "sha256:957f49c93f4150182c23f8fb51d13bb3213e0f17a79e09c8cca7057598b55720", size = 69538, upload-time = "2022-03-01T08:57:21.79Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/7b/fa/8e4a51e1afda8d4bd73d784bfe4a60cfdeeced9bea419eff5c271180377e/decopatch-1.4.10-py2.py3-none-any.whl", hash = "sha256:e151f7f93de2b1b3fd3f3272dcc7cefd1a69f68ec1c2d8e288ecd9deb36dc5f7", size = 18015, upload-time = "2022-03-01T08:57:20.676Z" }, +] + +[[package]] +name = "grpcio" +version = "1.73.1" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/79/e8/b43b851537da2e2f03fa8be1aef207e5cbfb1a2e014fbb6b40d24c177cd3/grpcio-1.73.1.tar.gz", hash = "sha256:7fce2cd1c0c1116cf3850564ebfc3264fba75d3c74a7414373f1238ea365ef87", size = 12730355, upload-time = "2025-06-26T01:53:24.622Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/37/bf/4ca20d1acbefabcaba633ab17f4244cbbe8eca877df01517207bd6655914/grpcio-1.73.1-cp313-cp313-linux_armv7l.whl", hash = "sha256:b310824ab5092cf74750ebd8a8a8981c1810cb2b363210e70d06ef37ad80d4f9", size = 5335615, upload-time = "2025-06-26T01:52:42.896Z" }, + { url = "https://files.pythonhosted.org/packages/75/ed/45c345f284abec5d4f6d77cbca9c52c39b554397eb7de7d2fcf440bcd049/grpcio-1.73.1-cp313-cp313-macosx_11_0_universal2.whl", hash = "sha256:8f5a6df3fba31a3485096ac85b2e34b9666ffb0590df0cd044f58694e6a1f6b5", size = 10595497, upload-time = "2025-06-26T01:52:44.695Z" }, + { url = "https://files.pythonhosted.org/packages/a4/75/bff2c2728018f546d812b755455014bc718f8cdcbf5c84f1f6e5494443a8/grpcio-1.73.1-cp313-cp313-manylinux_2_17_aarch64.whl", hash = 
"sha256:052e28fe9c41357da42250a91926a3e2f74c046575c070b69659467ca5aa976b", size = 5765321, upload-time = "2025-06-26T01:52:46.871Z" }, + { url = "https://files.pythonhosted.org/packages/70/3b/14e43158d3b81a38251b1d231dfb45a9b492d872102a919fbf7ba4ac20cd/grpcio-1.73.1-cp313-cp313-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1c0bf15f629b1497436596b1cbddddfa3234273490229ca29561209778ebe182", size = 6415436, upload-time = "2025-06-26T01:52:49.134Z" }, + { url = "https://files.pythonhosted.org/packages/e5/3f/81d9650ca40b54338336fd360f36773be8cb6c07c036e751d8996eb96598/grpcio-1.73.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0ab860d5bfa788c5a021fba264802e2593688cd965d1374d31d2b1a34cacd854", size = 6007012, upload-time = "2025-06-26T01:52:51.076Z" }, + { url = "https://files.pythonhosted.org/packages/55/f4/59edf5af68d684d0f4f7ad9462a418ac517201c238551529098c9aa28cb0/grpcio-1.73.1-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:ad1d958c31cc91ab050bd8a91355480b8e0683e21176522bacea225ce51163f2", size = 6105209, upload-time = "2025-06-26T01:52:52.773Z" }, + { url = "https://files.pythonhosted.org/packages/e4/a8/700d034d5d0786a5ba14bfa9ce974ed4c976936c2748c2bd87aa50f69b36/grpcio-1.73.1-cp313-cp313-musllinux_1_1_i686.whl", hash = "sha256:f43ffb3bd415c57224c7427bfb9e6c46a0b6e998754bfa0d00f408e1873dcbb5", size = 6753655, upload-time = "2025-06-26T01:52:55.064Z" }, + { url = "https://files.pythonhosted.org/packages/1f/29/efbd4ac837c23bc48e34bbaf32bd429f0dc9ad7f80721cdb4622144c118c/grpcio-1.73.1-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:686231cdd03a8a8055f798b2b54b19428cdf18fa1549bee92249b43607c42668", size = 6287288, upload-time = "2025-06-26T01:52:57.33Z" }, + { url = "https://files.pythonhosted.org/packages/d8/61/c6045d2ce16624bbe18b5d169c1a5ce4d6c3a47bc9d0e5c4fa6a50ed1239/grpcio-1.73.1-cp313-cp313-win32.whl", hash = "sha256:89018866a096e2ce21e05eabed1567479713ebe57b1db7cbb0f1e3b896793ba4", size = 3668151, upload-time = "2025-06-26T01:52:59.405Z" }, + { url = "https://files.pythonhosted.org/packages/c2/d7/77ac689216daee10de318db5aa1b88d159432dc76a130948a56b3aa671a2/grpcio-1.73.1-cp313-cp313-win_amd64.whl", hash = "sha256:4a68f8c9966b94dff693670a5cf2b54888a48a5011c5d9ce2295a1a1465ee84f", size = 4335747, upload-time = "2025-06-26T01:53:01.233Z" }, +] + +[[package]] +name = "grpcio-reflection" +version = "1.73.1" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "grpcio" }, + { name = "protobuf" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/ad/64/4b90012768c15ceb0dd95e8e1768d858ca7f71f7c7e335fa14b7ffb575fe/grpcio_reflection-1.73.1.tar.gz", hash = "sha256:2d6a420264e31e83e81114dd2586b5cd05a6c689b01dd5e2121d91f2b4e167f2", size = 18821, upload-time = "2025-06-26T02:02:50.684Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/b5/c5/1803a34e8734b55ff4d13e76b14c0c78ae6e5e1c4362f351030639356cd2/grpcio_reflection-1.73.1-py3-none-any.whl", hash = "sha256:a6b7680d4e3f0b9969183fd4af7611fd57c8cdc333037064d1a7f2d4ecbb3cc8", size = 22692, upload-time = "2025-06-26T02:02:43.158Z" }, +] + +[[package]] +name = "idna" +version = "3.10" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/f1/70/7703c29685631f5a7590aa73f1f1d3fa9a380e654b86af429e0934a32f7d/idna-3.10.tar.gz", hash = "sha256:12f65c9b470abda6dc35cf8e63cc574b1c52b11df2c86030af0ac09b01b13ea9", size = 190490, upload-time = "2024-09-15T18:07:39.745Z" } +wheels = [ + { url = 
"https://files.pythonhosted.org/packages/76/c6/c88e154df9c4e1a2a66ccf0005a88dfb2650c1dffb6f5ce603dfbd452ce3/idna-3.10-py3-none-any.whl", hash = "sha256:946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3", size = 70442, upload-time = "2024-09-15T18:07:37.964Z" }, +] + +[[package]] +name = "iniconfig" +version = "2.1.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/f2/97/ebf4da567aa6827c909642694d71c9fcf53e5b504f2d96afea02718862f3/iniconfig-2.1.0.tar.gz", hash = "sha256:3abbd2e30b36733fee78f9c7f7308f2d0050e88f0087fd25c2645f63c773e1c7", size = 4793, upload-time = "2025-03-19T20:09:59.721Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/2c/e1/e6716421ea10d38022b952c159d5161ca1193197fb744506875fbb87ea7b/iniconfig-2.1.0-py3-none-any.whl", hash = "sha256:9deba5723312380e77435581c6bf4935c94cbfab9b1ed33ef8d238ea168eb760", size = 6050, upload-time = "2025-03-19T20:10:01.071Z" }, +] + +[[package]] +name = "isort" +version = "6.0.1" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/b8/21/1e2a441f74a653a144224d7d21afe8f4169e6c7c20bb13aec3a2dc3815e0/isort-6.0.1.tar.gz", hash = "sha256:1cb5df28dfbc742e490c5e41bad6da41b805b0a8be7bc93cd0fb2a8a890ac450", size = 821955, upload-time = "2025-02-26T21:13:16.955Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/c1/11/114d0a5f4dabbdcedc1125dee0888514c3c3b16d3e9facad87ed96fad97c/isort-6.0.1-py3-none-any.whl", hash = "sha256:2dc5d7f65c9678d94c88dfc29161a320eec67328bc97aad576874cb4be1e9615", size = 94186, upload-time = "2025-02-26T21:13:14.911Z" }, +] + +[[package]] +name = "jinja2" +version = "3.1.6" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "markupsafe" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/df/bf/f7da0350254c0ed7c72f3e33cef02e048281fec7ecec5f032d4aac52226b/jinja2-3.1.6.tar.gz", hash = "sha256:0137fb05990d35f1275a587e9aee6d56da821fc83491a0fb838183be43f66d6d", size = 245115, upload-time = "2025-03-05T20:05:02.478Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/62/a1/3d680cbfd5f4b8f15abc1d571870c5fc3e594bb582bc3b64ea099db13e56/jinja2-3.1.6-py3-none-any.whl", hash = "sha256:85ece4451f492d0c13c5dd7c13a64681a86afae63a5f347908daf103ce6d2f67", size = 134899, upload-time = "2025-03-05T20:05:00.369Z" }, +] + +[[package]] +name = "labgrid" +version = "25.1.dev101" +source = { git = "https://github.com/aparcar/labgrid.git?branch=aparcar%2Fstaging#e866266b2d955893bc718dd1a81d474eb88adfa5" } +dependencies = [ + { name = "attrs" }, + { name = "grpcio" }, + { name = "grpcio-reflection" }, + { name = "jinja2" }, + { name = "pexpect" }, + { name = "protobuf" }, + { name = "pyserial-labgrid" }, + { name = "pytest" }, + { name = "pyudev" }, + { name = "pyusb" }, + { name = "pyyaml" }, + { name = "requests" }, + { name = "xmodem" }, +] + +[[package]] +name = "makefun" +version = "1.16.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/7b/cf/6780ab8bc3b84a1cce3e4400aed3d64b6db7d5e227a2f75b6ded5674701a/makefun-1.16.0.tar.gz", hash = "sha256:e14601831570bff1f6d7e68828bcd30d2f5856f24bad5de0ccb22921ceebc947", size = 73565, upload-time = "2025-05-09T15:00:42.313Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/b7/c0/4bc973defd1270b89ccaae04cef0d5fa3ea85b59b108ad2c08aeea9afb76/makefun-1.16.0-py2.py3-none-any.whl", hash = 
"sha256:43baa4c3e7ae2b17de9ceac20b669e9a67ceeadff31581007cca20a07bbe42c4", size = 22923, upload-time = "2025-05-09T15:00:41.042Z" }, +] + +[[package]] +name = "markupsafe" +version = "3.0.2" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/b2/97/5d42485e71dfc078108a86d6de8fa46db44a1a9295e89c5d6d4a06e23a62/markupsafe-3.0.2.tar.gz", hash = "sha256:ee55d3edf80167e48ea11a923c7386f4669df67d7994554387f84e7d8b0a2bf0", size = 20537, upload-time = "2024-10-18T15:21:54.129Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/83/0e/67eb10a7ecc77a0c2bbe2b0235765b98d164d81600746914bebada795e97/MarkupSafe-3.0.2-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:ba9527cdd4c926ed0760bc301f6728ef34d841f405abf9d4f959c478421e4efd", size = 14274, upload-time = "2024-10-18T15:21:24.577Z" }, + { url = "https://files.pythonhosted.org/packages/2b/6d/9409f3684d3335375d04e5f05744dfe7e9f120062c9857df4ab490a1031a/MarkupSafe-3.0.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:f8b3d067f2e40fe93e1ccdd6b2e1d16c43140e76f02fb1319a05cf2b79d99430", size = 12352, upload-time = "2024-10-18T15:21:25.382Z" }, + { url = "https://files.pythonhosted.org/packages/d2/f5/6eadfcd3885ea85fe2a7c128315cc1bb7241e1987443d78c8fe712d03091/MarkupSafe-3.0.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:569511d3b58c8791ab4c2e1285575265991e6d8f8700c7be0e88f86cb0672094", size = 24122, upload-time = "2024-10-18T15:21:26.199Z" }, + { url = "https://files.pythonhosted.org/packages/0c/91/96cf928db8236f1bfab6ce15ad070dfdd02ed88261c2afafd4b43575e9e9/MarkupSafe-3.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:15ab75ef81add55874e7ab7055e9c397312385bd9ced94920f2802310c930396", size = 23085, upload-time = "2024-10-18T15:21:27.029Z" }, + { url = "https://files.pythonhosted.org/packages/c2/cf/c9d56af24d56ea04daae7ac0940232d31d5a8354f2b457c6d856b2057d69/MarkupSafe-3.0.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f3818cb119498c0678015754eba762e0d61e5b52d34c8b13d770f0719f7b1d79", size = 22978, upload-time = "2024-10-18T15:21:27.846Z" }, + { url = "https://files.pythonhosted.org/packages/2a/9f/8619835cd6a711d6272d62abb78c033bda638fdc54c4e7f4272cf1c0962b/MarkupSafe-3.0.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:cdb82a876c47801bb54a690c5ae105a46b392ac6099881cdfb9f6e95e4014c6a", size = 24208, upload-time = "2024-10-18T15:21:28.744Z" }, + { url = "https://files.pythonhosted.org/packages/f9/bf/176950a1792b2cd2102b8ffeb5133e1ed984547b75db47c25a67d3359f77/MarkupSafe-3.0.2-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:cabc348d87e913db6ab4aa100f01b08f481097838bdddf7c7a84b7575b7309ca", size = 23357, upload-time = "2024-10-18T15:21:29.545Z" }, + { url = "https://files.pythonhosted.org/packages/ce/4f/9a02c1d335caabe5c4efb90e1b6e8ee944aa245c1aaaab8e8a618987d816/MarkupSafe-3.0.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:444dcda765c8a838eaae23112db52f1efaf750daddb2d9ca300bcae1039adc5c", size = 23344, upload-time = "2024-10-18T15:21:30.366Z" }, + { url = "https://files.pythonhosted.org/packages/ee/55/c271b57db36f748f0e04a759ace9f8f759ccf22b4960c270c78a394f58be/MarkupSafe-3.0.2-cp313-cp313-win32.whl", hash = "sha256:bcf3e58998965654fdaff38e58584d8937aa3096ab5354d493c77d1fdd66d7a1", size = 15101, upload-time = "2024-10-18T15:21:31.207Z" }, + { url = 
"https://files.pythonhosted.org/packages/29/88/07df22d2dd4df40aba9f3e402e6dc1b8ee86297dddbad4872bd5e7b0094f/MarkupSafe-3.0.2-cp313-cp313-win_amd64.whl", hash = "sha256:e6a2a455bd412959b57a172ce6328d2dd1f01cb2135efda2e4576e8a23fa3b0f", size = 15603, upload-time = "2024-10-18T15:21:32.032Z" }, + { url = "https://files.pythonhosted.org/packages/62/6a/8b89d24db2d32d433dffcd6a8779159da109842434f1dd2f6e71f32f738c/MarkupSafe-3.0.2-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:b5a6b3ada725cea8a5e634536b1b01c30bcdcd7f9c6fff4151548d5bf6b3a36c", size = 14510, upload-time = "2024-10-18T15:21:33.625Z" }, + { url = "https://files.pythonhosted.org/packages/7a/06/a10f955f70a2e5a9bf78d11a161029d278eeacbd35ef806c3fd17b13060d/MarkupSafe-3.0.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:a904af0a6162c73e3edcb969eeeb53a63ceeb5d8cf642fade7d39e7963a22ddb", size = 12486, upload-time = "2024-10-18T15:21:34.611Z" }, + { url = "https://files.pythonhosted.org/packages/34/cf/65d4a571869a1a9078198ca28f39fba5fbb910f952f9dbc5220afff9f5e6/MarkupSafe-3.0.2-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4aa4e5faecf353ed117801a068ebab7b7e09ffb6e1d5e412dc852e0da018126c", size = 25480, upload-time = "2024-10-18T15:21:35.398Z" }, + { url = "https://files.pythonhosted.org/packages/0c/e3/90e9651924c430b885468b56b3d597cabf6d72be4b24a0acd1fa0e12af67/MarkupSafe-3.0.2-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c0ef13eaeee5b615fb07c9a7dadb38eac06a0608b41570d8ade51c56539e509d", size = 23914, upload-time = "2024-10-18T15:21:36.231Z" }, + { url = "https://files.pythonhosted.org/packages/66/8c/6c7cf61f95d63bb866db39085150df1f2a5bd3335298f14a66b48e92659c/MarkupSafe-3.0.2-cp313-cp313t-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d16a81a06776313e817c951135cf7340a3e91e8c1ff2fac444cfd75fffa04afe", size = 23796, upload-time = "2024-10-18T15:21:37.073Z" }, + { url = "https://files.pythonhosted.org/packages/bb/35/cbe9238ec3f47ac9a7c8b3df7a808e7cb50fe149dc7039f5f454b3fba218/MarkupSafe-3.0.2-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:6381026f158fdb7c72a168278597a5e3a5222e83ea18f543112b2662a9b699c5", size = 25473, upload-time = "2024-10-18T15:21:37.932Z" }, + { url = "https://files.pythonhosted.org/packages/e6/32/7621a4382488aa283cc05e8984a9c219abad3bca087be9ec77e89939ded9/MarkupSafe-3.0.2-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:3d79d162e7be8f996986c064d1c7c817f6df3a77fe3d6859f6f9e7be4b8c213a", size = 24114, upload-time = "2024-10-18T15:21:39.799Z" }, + { url = "https://files.pythonhosted.org/packages/0d/80/0985960e4b89922cb5a0bac0ed39c5b96cbc1a536a99f30e8c220a996ed9/MarkupSafe-3.0.2-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:131a3c7689c85f5ad20f9f6fb1b866f402c445b220c19fe4308c0b147ccd2ad9", size = 24098, upload-time = "2024-10-18T15:21:40.813Z" }, + { url = "https://files.pythonhosted.org/packages/82/78/fedb03c7d5380df2427038ec8d973587e90561b2d90cd472ce9254cf348b/MarkupSafe-3.0.2-cp313-cp313t-win32.whl", hash = "sha256:ba8062ed2cf21c07a9e295d5b8a2a5ce678b913b45fdf68c32d95d6c1291e0b6", size = 15208, upload-time = "2024-10-18T15:21:41.814Z" }, + { url = "https://files.pythonhosted.org/packages/4f/65/6079a46068dfceaeabb5dcad6d674f5f5c61a6fa5673746f42a9f4c233b3/MarkupSafe-3.0.2-cp313-cp313t-win_amd64.whl", hash = "sha256:e444a31f8db13eb18ada366ab3cf45fd4b31e4db1236a4448f68778c1d1a5a2f", size = 15739, upload-time = "2024-10-18T15:21:42.784Z" }, +] + +[[package]] +name = "packaging" 
+version = "25.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/a1/d4/1fc4078c65507b51b96ca8f8c3ba19e6a61c8253c72794544580a7b6c24d/packaging-25.0.tar.gz", hash = "sha256:d443872c98d677bf60f6a1f2f8c1cb748e8fe762d2bf9d3148b5599295b0fc4f", size = 165727, upload-time = "2025-04-19T11:48:59.673Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/20/12/38679034af332785aac8774540895e234f4d07f7545804097de4b666afd8/packaging-25.0-py3-none-any.whl", hash = "sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484", size = 66469, upload-time = "2025-04-19T11:48:57.875Z" }, +] + +[[package]] +name = "pexpect" +version = "4.9.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "ptyprocess" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/42/92/cc564bf6381ff43ce1f4d06852fc19a2f11d180f23dc32d9588bee2f149d/pexpect-4.9.0.tar.gz", hash = "sha256:ee7d41123f3c9911050ea2c2dac107568dc43b2d3b0c7557a33212c398ead30f", size = 166450, upload-time = "2023-11-25T09:07:26.339Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/9e/c3/059298687310d527a58bb01f3b1965787ee3b40dce76752eda8b44e9a2c5/pexpect-4.9.0-py2.py3-none-any.whl", hash = "sha256:7236d1e080e4936be2dc3e326cec0af72acf9212a7e1d060210e70a47e253523", size = 63772, upload-time = "2023-11-25T06:56:14.81Z" }, +] + +[[package]] +name = "pluggy" +version = "1.6.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/f9/e2/3e91f31a7d2b083fe6ef3fa267035b518369d9511ffab804f839851d2779/pluggy-1.6.0.tar.gz", hash = "sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3", size = 69412, upload-time = "2025-05-15T12:30:07.975Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/54/20/4d324d65cc6d9205fabedc306948156824eb9f0ee1633355a8f7ec5c66bf/pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746", size = 20538, upload-time = "2025-05-15T12:30:06.134Z" }, +] + +[[package]] +name = "protobuf" +version = "6.31.1" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/52/f3/b9655a711b32c19720253f6f06326faf90580834e2e83f840472d752bc8b/protobuf-6.31.1.tar.gz", hash = "sha256:d8cac4c982f0b957a4dc73a80e2ea24fab08e679c0de9deb835f4a12d69aca9a", size = 441797, upload-time = "2025-05-28T19:25:54.947Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/f3/6f/6ab8e4bf962fd5570d3deaa2d5c38f0a363f57b4501047b5ebeb83ab1125/protobuf-6.31.1-cp310-abi3-win32.whl", hash = "sha256:7fa17d5a29c2e04b7d90e5e32388b8bfd0e7107cd8e616feef7ed3fa6bdab5c9", size = 423603, upload-time = "2025-05-28T19:25:41.198Z" }, + { url = "https://files.pythonhosted.org/packages/44/3a/b15c4347dd4bf3a1b0ee882f384623e2063bb5cf9fa9d57990a4f7df2fb6/protobuf-6.31.1-cp310-abi3-win_amd64.whl", hash = "sha256:426f59d2964864a1a366254fa703b8632dcec0790d8862d30034d8245e1cd447", size = 435283, upload-time = "2025-05-28T19:25:44.275Z" }, + { url = "https://files.pythonhosted.org/packages/6a/c9/b9689a2a250264a84e66c46d8862ba788ee7a641cdca39bccf64f59284b7/protobuf-6.31.1-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:6f1227473dc43d44ed644425268eb7c2e488ae245d51c6866d19fe158e207402", size = 425604, upload-time = "2025-05-28T19:25:45.702Z" }, + { url = 
"https://files.pythonhosted.org/packages/76/a1/7a5a94032c83375e4fe7e7f56e3976ea6ac90c5e85fac8576409e25c39c3/protobuf-6.31.1-cp39-abi3-manylinux2014_aarch64.whl", hash = "sha256:a40fc12b84c154884d7d4c4ebd675d5b3b5283e155f324049ae396b95ddebc39", size = 322115, upload-time = "2025-05-28T19:25:47.128Z" }, + { url = "https://files.pythonhosted.org/packages/fa/b1/b59d405d64d31999244643d88c45c8241c58f17cc887e73bcb90602327f8/protobuf-6.31.1-cp39-abi3-manylinux2014_x86_64.whl", hash = "sha256:4ee898bf66f7a8b0bd21bce523814e6fbd8c6add948045ce958b73af7e8878c6", size = 321070, upload-time = "2025-05-28T19:25:50.036Z" }, + { url = "https://files.pythonhosted.org/packages/f7/af/ab3c51ab7507a7325e98ffe691d9495ee3d3aa5f589afad65ec920d39821/protobuf-6.31.1-py3-none-any.whl", hash = "sha256:720a6c7e6b77288b85063569baae8536671b39f15cc22037ec7045658d80489e", size = 168724, upload-time = "2025-05-28T19:25:53.926Z" }, +] + +[[package]] +name = "ptyprocess" +version = "0.7.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/20/e5/16ff212c1e452235a90aeb09066144d0c5a6a8c0834397e03f5224495c4e/ptyprocess-0.7.0.tar.gz", hash = "sha256:5c5d0a3b48ceee0b48485e0c26037c0acd7d29765ca3fbb5cb3831d347423220", size = 70762, upload-time = "2020-12-28T15:15:30.155Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/22/a6/858897256d0deac81a172289110f31629fc4cee19b6f01283303e18c8db3/ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35", size = 13993, upload-time = "2020-12-28T15:15:28.35Z" }, +] + +[[package]] +name = "pycparser" +version = "2.23" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/fe/cf/d2d3b9f5699fb1e4615c8e32ff220203e43b248e1dfcc6736ad9057731ca/pycparser-2.23.tar.gz", hash = "sha256:78816d4f24add8f10a06d6f05b4d424ad9e96cfebf68a4ddc99c65c0720d00c2", size = 173734, upload-time = "2025-09-09T13:23:47.91Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/a0/e3/59cd50310fc9b59512193629e1984c1f95e5c8ae6e5d8c69532ccc65a7fe/pycparser-2.23-py3-none-any.whl", hash = "sha256:e5c6e8d3fbad53479cab09ac03729e0a9faf2bee3db8208a550daf5af81a5934", size = 118140, upload-time = "2025-09-09T13:23:46.651Z" }, +] + +[[package]] +name = "pygments" +version = "2.19.2" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/b0/77/a5b8c569bf593b0140bde72ea885a803b82086995367bf2037de0159d924/pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887", size = 4968631, upload-time = "2025-06-21T13:39:12.283Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/c7/21/705964c7812476f378728bdf590ca4b771ec72385c533964653c68e86bdc/pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b", size = 1225217, upload-time = "2025-06-21T13:39:07.939Z" }, +] + +[[package]] +name = "pyserial-labgrid" +version = "3.5.0.2" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/dd/29/7b475dc1d39fde54a99ce58a73b98ea7ef77863a4bf974d2850fbc456d9e/pyserial-labgrid-3.5.0.2.tar.gz", hash = "sha256:98f826609795e92157ba2c2e9d8e98c6cec2f399de353f76dd0436536f5c52ab", size = 161398, upload-time = "2023-04-26T09:57:04.041Z" } +wheels = [ + { url = 
"https://files.pythonhosted.org/packages/42/4e/bd1f71b7889cc1d34aeca3b162d5b5dd7eccd47631b5a59449288a5e3bb0/pyserial_labgrid-3.5.0.2-py2.py3-none-any.whl", hash = "sha256:99525df10c9d6b7739a1d1c44c63ac64a54237fb4c3d643d74a7aad8b96817fd", size = 91562, upload-time = "2023-04-26T09:57:02.357Z" }, +] + +[[package]] +name = "pytest" +version = "8.4.1" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "colorama", marker = "sys_platform == 'win32'" }, + { name = "iniconfig" }, + { name = "packaging" }, + { name = "pluggy" }, + { name = "pygments" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/08/ba/45911d754e8eba3d5a841a5ce61a65a685ff1798421ac054f85aa8747dfb/pytest-8.4.1.tar.gz", hash = "sha256:7c67fd69174877359ed9371ec3af8a3d2b04741818c51e5e99cc1742251fa93c", size = 1517714, upload-time = "2025-06-18T05:48:06.109Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/29/16/c8a903f4c4dffe7a12843191437d7cd8e32751d5de349d45d3fe69544e87/pytest-8.4.1-py3-none-any.whl", hash = "sha256:539c70ba6fcead8e78eebbf1115e8b589e7565830d7d006a8723f19ac8a0afb7", size = 365474, upload-time = "2025-06-18T05:48:03.955Z" }, +] + +[[package]] +name = "pytest-check" +version = "2.5.3" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "pytest" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/7b/f8/58aa6cf0af7bc261c4606c28582258e76cec398a7d8bdc2e82de51039f84/pytest_check-2.5.3.tar.gz", hash = "sha256:2357d7df77c395d30c0c4957724fdfce1a75ea8bc9eb2308c0ffe56f62ac70ca", size = 27422, upload-time = "2025-04-04T01:44:43.188Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/7a/29/d3fbd683c50aeb93d79377c904179c82f01f9f656299deb89328ab85b0a2/pytest_check-2.5.3-py3-none-any.whl", hash = "sha256:354685dfa63e714a20aa62b61c4d30d5acfa30891319df9046f59af653c3b9ba", size = 15998, upload-time = "2025-04-04T01:44:41.951Z" }, +] + +[[package]] +name = "pytest-harvest" +version = "1.10.5" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "decopatch" }, + { name = "makefun" }, + { name = "packaging" }, + { name = "six" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/a5/cc/5750e19285f81070adbc513b18f1f31a9202aa944cf8b7f8ce27082095d4/pytest-harvest-1.10.5.tar.gz", hash = "sha256:27cfa4cf1f2e43d1e7749c282e0fe72d30e90e89bede6b30a59b626848c7d418", size = 81064, upload-time = "2024-03-16T00:57:35.88Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/1f/a3/d6c4da04ce6b63ad8b09f956a4dc3741a00fcb74c4a9079a5c014fcb5fb2/pytest_harvest-1.10.5-py2.py3-none-any.whl", hash = "sha256:44a0c53dd33449ec7c0473ff2edb4887d4f105bb2d6faeb0f3784e2a15cae55c", size = 23651, upload-time = "2024-03-16T00:57:33.59Z" }, +] + +[[package]] +name = "pyudev" +version = "0.24.3" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/c4/5c/6cc034da13830e3da123ccf9a30910bc868fa16670362f004e4b788d0df1/pyudev-0.24.3.tar.gz", hash = "sha256:2e945427a21674893bb97632401db62139d91cea1ee96137cc7b07ad22198fc7", size = 55970, upload-time = "2024-05-10T18:24:04.599Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/9d/3b/c37870f68ceb067707ca7b04db364a1478fcd40c6194007fb6e492ff9a92/pyudev-0.24.3-py3-none-any.whl", hash = "sha256:e8246f0a014fe370119ba2bc781bfbe62c0298d0d6b39c94e83102a8a3f56960", size = 62677, upload-time = "2024-05-10T18:24:02.743Z" }, +] + +[[package]] +name = "pyusb" +version = "1.3.1" +source = { registry = 
"https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/00/6b/ce3727395e52b7b76dfcf0c665e37d223b680b9becc60710d4bc08b7b7cb/pyusb-1.3.1.tar.gz", hash = "sha256:3af070b607467c1c164f49d5b0caabe8ac78dbed9298d703a8dbf9df4052d17e", size = 77281, upload-time = "2025-01-08T23:45:01.866Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/28/b8/27e6312e86408a44fe16bd28ee12dd98608b39f7e7e57884a24e8f29b573/pyusb-1.3.1-py3-none-any.whl", hash = "sha256:bf9b754557af4717fe80c2b07cc2b923a9151f5c08d17bdb5345dac09d6a0430", size = 58465, upload-time = "2025-01-08T23:45:00.029Z" }, +] + +[[package]] +name = "pyyaml" +version = "6.0.2" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/54/ed/79a089b6be93607fa5cdaedf301d7dfb23af5f25c398d5ead2525b063e17/pyyaml-6.0.2.tar.gz", hash = "sha256:d584d9ec91ad65861cc08d42e834324ef890a082e591037abe114850ff7bbc3e", size = 130631, upload-time = "2024-08-06T20:33:50.674Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/ef/e3/3af305b830494fa85d95f6d95ef7fa73f2ee1cc8ef5b495c7c3269fb835f/PyYAML-6.0.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:efdca5630322a10774e8e98e1af481aad470dd62c3170801852d752aa7a783ba", size = 181309, upload-time = "2024-08-06T20:32:43.4Z" }, + { url = "https://files.pythonhosted.org/packages/45/9f/3b1c20a0b7a3200524eb0076cc027a970d320bd3a6592873c85c92a08731/PyYAML-6.0.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:50187695423ffe49e2deacb8cd10510bc361faac997de9efef88badc3bb9e2d1", size = 171679, upload-time = "2024-08-06T20:32:44.801Z" }, + { url = "https://files.pythonhosted.org/packages/7c/9a/337322f27005c33bcb656c655fa78325b730324c78620e8328ae28b64d0c/PyYAML-6.0.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0ffe8360bab4910ef1b9e87fb812d8bc0a308b0d0eef8c8f44e0254ab3b07133", size = 733428, upload-time = "2024-08-06T20:32:46.432Z" }, + { url = "https://files.pythonhosted.org/packages/a3/69/864fbe19e6c18ea3cc196cbe5d392175b4cf3d5d0ac1403ec3f2d237ebb5/PyYAML-6.0.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:17e311b6c678207928d649faa7cb0d7b4c26a0ba73d41e99c4fff6b6c3276484", size = 763361, upload-time = "2024-08-06T20:32:51.188Z" }, + { url = "https://files.pythonhosted.org/packages/04/24/b7721e4845c2f162d26f50521b825fb061bc0a5afcf9a386840f23ea19fa/PyYAML-6.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:70b189594dbe54f75ab3a1acec5f1e3faa7e8cf2f1e08d9b561cb41b845f69d5", size = 759523, upload-time = "2024-08-06T20:32:53.019Z" }, + { url = "https://files.pythonhosted.org/packages/2b/b2/e3234f59ba06559c6ff63c4e10baea10e5e7df868092bf9ab40e5b9c56b6/PyYAML-6.0.2-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:41e4e3953a79407c794916fa277a82531dd93aad34e29c2a514c2c0c5fe971cc", size = 726660, upload-time = "2024-08-06T20:32:54.708Z" }, + { url = "https://files.pythonhosted.org/packages/fe/0f/25911a9f080464c59fab9027482f822b86bf0608957a5fcc6eaac85aa515/PyYAML-6.0.2-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:68ccc6023a3400877818152ad9a1033e3db8625d899c72eacb5a668902e4d652", size = 751597, upload-time = "2024-08-06T20:32:56.985Z" }, + { url = "https://files.pythonhosted.org/packages/14/0d/e2c3b43bbce3cf6bd97c840b46088a3031085179e596d4929729d8d68270/PyYAML-6.0.2-cp313-cp313-win32.whl", hash = "sha256:bc2fa7c6b47d6bc618dd7fb02ef6fdedb1090ec036abab80d4681424b84c1183", size = 140527, upload-time = "2024-08-06T20:33:03.001Z" }, + 
{ url = "https://files.pythonhosted.org/packages/fa/de/02b54f42487e3d3c6efb3f89428677074ca7bf43aae402517bc7cca949f3/PyYAML-6.0.2-cp313-cp313-win_amd64.whl", hash = "sha256:8388ee1976c416731879ac16da0aff3f63b286ffdd57cdeb95f3f2e085687563", size = 156446, upload-time = "2024-08-06T20:33:04.33Z" }, +] + +[[package]] +name = "requests" +version = "2.32.4" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "certifi" }, + { name = "charset-normalizer" }, + { name = "idna" }, + { name = "urllib3" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/e1/0a/929373653770d8a0d7ea76c37de6e41f11eb07559b103b1c02cafb3f7cf8/requests-2.32.4.tar.gz", hash = "sha256:27d0316682c8a29834d3264820024b62a36942083d52caf2f14c0591336d3422", size = 135258, upload-time = "2025-06-09T16:43:07.34Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/7c/e4/56027c4a6b4ae70ca9de302488c5ca95ad4a39e190093d6c1a8ace08341b/requests-2.32.4-py3-none-any.whl", hash = "sha256:27babd3cda2a6d50b30443204ee89830707d396671944c998b5975b031ac2b2c", size = 64847, upload-time = "2025-06-09T16:43:05.728Z" }, +] + +[[package]] +name = "resolvelib" +version = "1.0.1" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/ce/10/f699366ce577423cbc3df3280063099054c23df70856465080798c6ebad6/resolvelib-1.0.1.tar.gz", hash = "sha256:04ce76cbd63fded2078ce224785da6ecd42b9564b1390793f64ddecbe997b309", size = 21065, upload-time = "2023-03-09T05:10:38.292Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/d2/fc/e9ccf0521607bcd244aa0b3fbd574f71b65e9ce6a112c83af988bbbe2e23/resolvelib-1.0.1-py2.py3-none-any.whl", hash = "sha256:d2da45d1a8dfee81bdd591647783e340ef3bcb104b54c383f70d422ef5cc7dbf", size = 17194, upload-time = "2023-03-09T05:10:36.214Z" }, +] + +[[package]] +name = "ruff" +version = "0.12.1" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/97/38/796a101608a90494440856ccfb52b1edae90de0b817e76bfade66b12d320/ruff-0.12.1.tar.gz", hash = "sha256:806bbc17f1104fd57451a98a58df35388ee3ab422e029e8f5cf30aa4af2c138c", size = 4413426, upload-time = "2025-06-26T20:34:14.784Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/06/bf/3dba52c1d12ab5e78d75bd78ad52fb85a6a1f29cc447c2423037b82bed0d/ruff-0.12.1-py3-none-linux_armv6l.whl", hash = "sha256:6013a46d865111e2edb71ad692fbb8262e6c172587a57c0669332a449384a36b", size = 10305649, upload-time = "2025-06-26T20:33:39.242Z" }, + { url = "https://files.pythonhosted.org/packages/8c/65/dab1ba90269bc8c81ce1d499a6517e28fe6f87b2119ec449257d0983cceb/ruff-0.12.1-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:b3f75a19e03a4b0757d1412edb7f27cffb0c700365e9d6b60bc1b68d35bc89e0", size = 11120201, upload-time = "2025-06-26T20:33:42.207Z" }, + { url = "https://files.pythonhosted.org/packages/3f/3e/2d819ffda01defe857fa2dd4cba4d19109713df4034cc36f06bbf582d62a/ruff-0.12.1-py3-none-macosx_11_0_arm64.whl", hash = "sha256:9a256522893cb7e92bb1e1153283927f842dea2e48619c803243dccc8437b8be", size = 10466769, upload-time = "2025-06-26T20:33:44.102Z" }, + { url = "https://files.pythonhosted.org/packages/63/37/bde4cf84dbd7821c8de56ec4ccc2816bce8125684f7b9e22fe4ad92364de/ruff-0.12.1-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:069052605fe74c765a5b4272eb89880e0ff7a31e6c0dbf8767203c1fbd31c7ff", size = 10660902, upload-time = "2025-06-26T20:33:45.98Z" }, + { url = 
"https://files.pythonhosted.org/packages/0e/3a/390782a9ed1358c95e78ccc745eed1a9d657a537e5c4c4812fce06c8d1a0/ruff-0.12.1-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:a684f125a4fec2d5a6501a466be3841113ba6847827be4573fddf8308b83477d", size = 10167002, upload-time = "2025-06-26T20:33:47.81Z" }, + { url = "https://files.pythonhosted.org/packages/6d/05/f2d4c965009634830e97ffe733201ec59e4addc5b1c0efa035645baa9e5f/ruff-0.12.1-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bdecdef753bf1e95797593007569d8e1697a54fca843d78f6862f7dc279e23bd", size = 11751522, upload-time = "2025-06-26T20:33:49.857Z" }, + { url = "https://files.pythonhosted.org/packages/35/4e/4bfc519b5fcd462233f82fc20ef8b1e5ecce476c283b355af92c0935d5d9/ruff-0.12.1-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:70d52a058c0e7b88b602f575d23596e89bd7d8196437a4148381a3f73fcd5010", size = 12520264, upload-time = "2025-06-26T20:33:52.199Z" }, + { url = "https://files.pythonhosted.org/packages/85/b2/7756a6925da236b3a31f234b4167397c3e5f91edb861028a631546bad719/ruff-0.12.1-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:84d0a69d1e8d716dfeab22d8d5e7c786b73f2106429a933cee51d7b09f861d4e", size = 12133882, upload-time = "2025-06-26T20:33:54.231Z" }, + { url = "https://files.pythonhosted.org/packages/dd/00/40da9c66d4a4d51291e619be6757fa65c91b92456ff4f01101593f3a1170/ruff-0.12.1-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6cc32e863adcf9e71690248607ccdf25252eeeab5193768e6873b901fd441fed", size = 11608941, upload-time = "2025-06-26T20:33:56.202Z" }, + { url = "https://files.pythonhosted.org/packages/91/e7/f898391cc026a77fbe68dfea5940f8213622474cb848eb30215538a2dadf/ruff-0.12.1-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7fd49a4619f90d5afc65cf42e07b6ae98bb454fd5029d03b306bd9e2273d44cc", size = 11602887, upload-time = "2025-06-26T20:33:58.47Z" }, + { url = "https://files.pythonhosted.org/packages/f6/02/0891872fc6aab8678084f4cf8826f85c5d2d24aa9114092139a38123f94b/ruff-0.12.1-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:ed5af6aaaea20710e77698e2055b9ff9b3494891e1b24d26c07055459bb717e9", size = 10521742, upload-time = "2025-06-26T20:34:00.465Z" }, + { url = "https://files.pythonhosted.org/packages/2a/98/d6534322c74a7d47b0f33b036b2498ccac99d8d8c40edadb552c038cecf1/ruff-0.12.1-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:801d626de15e6bf988fbe7ce59b303a914ff9c616d5866f8c79eb5012720ae13", size = 10149909, upload-time = "2025-06-26T20:34:02.603Z" }, + { url = "https://files.pythonhosted.org/packages/34/5c/9b7ba8c19a31e2b6bd5e31aa1e65b533208a30512f118805371dbbbdf6a9/ruff-0.12.1-py3-none-musllinux_1_2_i686.whl", hash = "sha256:2be9d32a147f98a1972c1e4df9a6956d612ca5f5578536814372113d09a27a6c", size = 11136005, upload-time = "2025-06-26T20:34:04.723Z" }, + { url = "https://files.pythonhosted.org/packages/dc/34/9bbefa4d0ff2c000e4e533f591499f6b834346025e11da97f4ded21cb23e/ruff-0.12.1-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:49b7ce354eed2a322fbaea80168c902de9504e6e174fd501e9447cad0232f9e6", size = 11648579, upload-time = "2025-06-26T20:34:06.766Z" }, + { url = "https://files.pythonhosted.org/packages/6f/1c/20cdb593783f8f411839ce749ec9ae9e4298c2b2079b40295c3e6e2089e1/ruff-0.12.1-py3-none-win32.whl", hash = "sha256:d973fa626d4c8267848755bd0414211a456e99e125dcab147f24daa9e991a245", size = 10519495, upload-time = "2025-06-26T20:34:08.718Z" }, + { url = 
"https://files.pythonhosted.org/packages/cf/56/7158bd8d3cf16394928f47c637d39a7d532268cd45220bdb6cd622985760/ruff-0.12.1-py3-none-win_amd64.whl", hash = "sha256:9e1123b1c033f77bd2590e4c1fe7e8ea72ef990a85d2484351d408224d603013", size = 11547485, upload-time = "2025-06-26T20:34:11.008Z" }, + { url = "https://files.pythonhosted.org/packages/91/d0/6902c0d017259439d6fd2fd9393cea1cfe30169940118b007d5e0ea7e954/ruff-0.12.1-py3-none-win_arm64.whl", hash = "sha256:78ad09a022c64c13cc6077707f036bab0fac8cd7088772dcd1e5be21c5002efc", size = 10691209, upload-time = "2025-06-26T20:34:12.928Z" }, +] + +[[package]] +name = "six" +version = "1.17.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/94/e7/b2c673351809dca68a0e064b6af791aa332cf192da575fd474ed7d6f16a2/six-1.17.0.tar.gz", hash = "sha256:ff70335d468e7eb6ec65b95b99d3a2836546063f63acc5171de367e834932a81", size = 34031, upload-time = "2024-12-04T17:35:28.174Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/b7/ce/149a00dd41f10bc29e5921b496af8b574d8413afcd5e30dfa0ed46c2cc5e/six-1.17.0-py2.py3-none-any.whl", hash = "sha256:4721f391ed90541fddacab5acf947aa0d3dc7d27b2e1e8eda2be8970586c3274", size = 11050, upload-time = "2024-12-04T17:35:26.475Z" }, +] + +[[package]] +name = "tests" +version = "0.1.0" +source = { virtual = "." } +dependencies = [ + { name = "ansible" }, + { name = "labgrid" }, + { name = "pytest" }, + { name = "pytest-check" }, + { name = "pytest-harvest" }, +] + +[package.dev-dependencies] +dev = [ + { name = "isort" }, + { name = "ruff" }, +] + +[package.metadata] +requires-dist = [ + { name = "ansible", specifier = "<12.0.0" }, + { name = "labgrid", git = "https://github.com/aparcar/labgrid.git?branch=aparcar%2Fstaging" }, + { name = "pytest", specifier = ">=8.4.1" }, + { name = "pytest-check", specifier = ">=2.5.3" }, + { name = "pytest-harvest", specifier = ">=1.10.5" }, +] + +[package.metadata.requires-dev] +dev = [ + { name = "isort", specifier = ">=6.0.1" }, + { name = "ruff", specifier = ">=0.12.0" }, +] + +[[package]] +name = "urllib3" +version = "2.5.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/15/22/9ee70a2574a4f4599c47dd506532914ce044817c7752a79b6a51286319bc/urllib3-2.5.0.tar.gz", hash = "sha256:3fc47733c7e419d4bc3f6b3dc2b4f890bb743906a30d56ba4a5bfa4bbff92760", size = 393185, upload-time = "2025-06-18T14:07:41.644Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/a7/c2/fe1e52489ae3122415c51f387e221dd0773709bad6c6cdaa599e8a2c5185/urllib3-2.5.0-py3-none-any.whl", hash = "sha256:e6b01673c0fa6a13e374b50871808eb3bf7046c4b125b216f6bf1cc604cff0dc", size = 129795, upload-time = "2025-06-18T14:07:40.39Z" }, +] + +[[package]] +name = "xmodem" +version = "0.4.7" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/3d/17/fd6668a09afdc46c22990172b6f65e07dfb5bcf38960d063a7a887ca926d/xmodem-0.4.7.tar.gz", hash = "sha256:2f1068aa8676f0d1d112498b5786c4f8ea4f89d8f25d07d3a0f293cd21db1c35", size = 34052, upload-time = "2023-06-11T18:02:42.897Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/67/6b/371c541d67e926d7e87e47aaa7117f8d30ac13dd16e64f9ac2ffdcdead44/xmodem-0.4.7-py2.py3-none-any.whl", hash = "sha256:0842d2266175f01225053db721ea952b3f4b239cb3ace83c32b1daf90aa413af", size = 35626, upload-time = "2023-06-11T18:02:38.583Z" }, + { url = 
"https://files.pythonhosted.org/packages/8c/1a/e383b538450963953d4e652d2bce0a4cfbcf15ecc8c34ab998bd1329c95d/xmodem-0.4.7-py3-none-any.whl", hash = "sha256:e6a2c7608f7b187da786c47780f8407dbc4ac2d3dfeb34fe683cc19778f01360", size = 35689, upload-time = "2023-06-11T18:02:40.845Z" }, +]