[SVLS 8070] Switch dd-octo permissions to running a workflow for serverless-init ghcr #997
Changes from all commits: 9b9bc4b, 9cbbe26, 4692869, 215edba, 63001b8
```diff
@@ -0,0 +1,2 @@
+.github/chainguard/serverless-init-ci-publish.sts.yaml @DataDog/serverless
+.github/publish-serverless-init-to-ghcr.yaml @DataDog/serverless
```
```diff
@@ -1,24 +1,25 @@
 # DD Octo STS Trust Policy for serverless-init-ci GitLab pipeline
 #
-# This policy allows the serverless-init-ci GitLab pipeline to publish
-# serverless-init images to GitHub Container Registry (GHCR).
+# This policy allows the serverless-init-ci GitLab pipeline to trigger
+# GitHub Actions workflows that publish serverless-init images to GHCR.
 #
 # Reference: https://datadoghq.atlassian.net/wiki/spaces/SECENG/pages/5138645099
 # Pipeline: https://gitlab.ddbuild.io/DataDog/serverless-init-ci
 
 issuer: https://gitlab.ddbuild.io
 
-# Subject pattern matches the serverless-init-ci repo on any branch or tag
+# Subject pattern matches the serverless-init-ci repo on any protected branch or tag
 subject_pattern: "project_path:DataDog/serverless-init-ci:ref_type:(branch|tag):ref:.*"
 
-# Allow all branches and tags for building RC and prod images
+# Only allow protected branches and tags (security control)
 claim_pattern:
   project_path: "DataDog/serverless-init-ci"
   ref_type: "^(branch|tag)$"
-  pipeline_source: "^(web|pipeline|push)$"
+  ref_protected: "true"
+  pipeline_source: "^(web|pipeline)$"
+  ci_config_ref_uri: "^gitlab\\.ddbuild\\.io/DataDog/serverless-init-ci//\\.gitlab-ci\\.yml@refs/(heads|tags)/.*$"
 
-# Minimal permissions: only write packages to GHCR
+# Minimal permissions: only trigger GitHub Actions workflows
+# The workflow itself uses GITHUB_TOKEN for GHCR access
 permissions:
-  packages: write
+  metadata: read
   actions: write
```
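With `actions: write`, the GitLab pipeline's role is reduced to dispatching the GitHub Actions workflow. A minimal sketch of what that dispatch could look like, assuming the pipeline has already exchanged its OIDC token with dd-octo-sts for a GitHub token (the `GITHUB_DISPATCH_TOKEN` variable, the `main` ref, and all input values below are hypothetical, not taken from this PR):

```shell
#!/bin/sh
# Hypothetical sketch of a GitLab job dispatching the publish workflow.
# Assumes dd-octo-sts has already minted a short-lived GitHub token.
TOKEN="${GITHUB_DISPATCH_TOKEN:-dummy-token}"

# Build the workflow_dispatch payload; inputs mirror the workflow file.
PAYLOAD=$(cat <<EOF
{
  "ref": "main",
  "inputs": {
    "source_image": "registry.datadoghq.com/serverless-init:1.7.8",
    "version": "1.7.8",
    "image_suffix": "",
    "pipeline_id": "123456",
    "is_latest": "false"
  }
}
EOF
)

echo "$PAYLOAD"

# The actual dispatch would POST to GitHub's REST API, e.g.:
#   curl -sf -X POST \
#     -H "Authorization: Bearer ${TOKEN}" \
#     -H "Accept: application/vnd.github+json" \
#     https://api.github.com/repos/DataDog/datadog-lambda-extension/actions/workflows/publish-serverless-init-to-ghcr.yaml/dispatches \
#     -d "$PAYLOAD"
```

Note that `workflow_dispatch` inputs are passed as strings in the API payload, which is why `is_latest` is quoted here even though the workflow declares it as a boolean.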
|
Contributor: nit: do we want the scripts to be inline or do we want them in a separate file?

Contributor (Author): I have absolutely no opinion on this! Happy to move them wherever y'all prefer, though if you want serverless-init in as little of the repo as possible, then inline makes sense.
New file (@@ -0,0 +1,129 @@):

```yaml
name: Publish serverless-init to GHCR

on:
  workflow_dispatch:
    inputs:
      source_image:
        description: 'Source image from registry.datadoghq.com (e.g., registry.datadoghq.com/serverless-init:1.7.8)'
        required: true
        type: string
      version:
        description: 'Version tag (e.g., 1.7.8 or 1.7.8-rc1)'
        required: true
        type: string
      image_suffix:
        description: 'Image suffix (empty for standard, -alpine for alpine)'
        required: false
        type: string
        default: ''
      pipeline_id:
        description: 'GitLab pipeline ID'
        required: true
        type: string
      is_latest:
        description: 'Tag as latest (true for prod releases, false for RCs)'
        required: true
        type: boolean
        default: false

permissions:
  packages: write
  contents: read

jobs:
  publish-to-ghcr:
    runs-on: ubuntu-latest
    steps:
      - name: Install crane
        run: |
          cd /tmp
          wget https://github.com/google/go-containerregistry/releases/download/v0.20.2/go-containerregistry_Linux_x86_64.tar.gz
          tar -xzf go-containerregistry_Linux_x86_64.tar.gz
          sudo mv crane /usr/local/bin/
          crane version

      - name: Login to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Wait for image availability
        run: |
          SOURCE_IMAGE="${{ inputs.source_image }}"
          MAX_ATTEMPTS=20
          RETRY_DELAY=30
          # Maximum wait time: 20 attempts × 30s = 600s (10 minutes)

          echo "⏳ Waiting for image to be available: ${SOURCE_IMAGE}"
          echo "Will check every ${RETRY_DELAY}s for up to $((MAX_ATTEMPTS * RETRY_DELAY))s"

          for i in $(seq 1 $MAX_ATTEMPTS); do
            echo "Attempt $i/$MAX_ATTEMPTS: Checking if image exists..."

            if crane manifest ${SOURCE_IMAGE} >/dev/null 2>&1; then
              echo "✅ Image is available!"
              exit 0
            fi

            if [ $i -lt $MAX_ATTEMPTS ]; then
              echo "⏳ Image not yet available, waiting ${RETRY_DELAY}s..."
              sleep $RETRY_DELAY
            fi
          done

          echo "❌ Image did not become available after $((MAX_ATTEMPTS * RETRY_DELAY))s"
          exit 1

      - name: Copy image to GHCR
        run: |
          SOURCE_IMAGE="${{ inputs.source_image }}"
          VERSION="${{ inputs.version }}"
          IMAGE_SUFFIX="${{ inputs.image_suffix }}"
          PIPELINE_ID="${{ inputs.pipeline_id }}"
          IS_LATEST="${{ inputs.is_latest }}"

          DEST_BASE="ghcr.io/datadog/datadog-lambda-extension/serverless-init"

          echo "📦 Publishing serverless-init image to GHCR"
          echo " Source: ${SOURCE_IMAGE}"
          echo " Destinations:"
          echo " - ${DEST_BASE}:${VERSION}${IMAGE_SUFFIX}"
          echo " - ${DEST_BASE}:v${PIPELINE_ID}${IMAGE_SUFFIX}"

          # Copy with version tag (with retry logic)
          # Maximum retry duration: 3 attempts with 10s delays between retries
          # This workflow is triggered in parallel with the publish attempt to registry.datadoghq.com
          # registry.datadoghq.com normally needs ~30 seconds to receive the new image
          MAX_COPY_ATTEMPTS=3
          COPY_RETRY_DELAY=10

          for i in $(seq 1 $MAX_COPY_ATTEMPTS); do
            echo "Copying image (attempt $i/$MAX_COPY_ATTEMPTS)..."

            if crane copy ${SOURCE_IMAGE} ${DEST_BASE}:${VERSION}${IMAGE_SUFFIX}; then
              echo "✅ Image copied successfully!"
              break
            fi

            if [ $i -lt $MAX_COPY_ATTEMPTS ]; then
              echo "⚠️ Copy failed, retrying in ${COPY_RETRY_DELAY}s..."
              sleep $COPY_RETRY_DELAY
            else
              echo "❌ Failed to copy image after $MAX_COPY_ATTEMPTS attempts"
              exit 1
            fi
          done

          # Tag for pipeline ID
          crane tag ${DEST_BASE}:${VERSION}${IMAGE_SUFFIX} v${PIPELINE_ID}${IMAGE_SUFFIX}

          # Tag as latest if this is a production release
          if [ "$IS_LATEST" = "true" ]; then
            echo " - ${DEST_BASE}:latest${IMAGE_SUFFIX}"
            crane tag ${DEST_BASE}:${VERSION}${IMAGE_SUFFIX} latest${IMAGE_SUFFIX}
          fi

          echo "✅ Successfully published image to GHCR!"
          echo "📍 View at: https://github.com/DataDog/datadog-lambda-extension/pkgs/container/datadog-lambda-extension%2Fserverless-init"
```
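Both run steps in the workflow above share the same poll-and-retry shape: try up to N times, sleep between attempts, fail after the last one. As a generic sketch (a hypothetical refactor, not part of the PR), the pattern can be factored into a small POSIX-shell helper:

```shell
#!/bin/sh
# retry MAX_ATTEMPTS DELAY CMD [ARGS...]
# Runs CMD up to MAX_ATTEMPTS times, sleeping DELAY seconds between
# failed attempts; returns 0 on the first success, 1 if all attempts fail.
retry() {
  max_attempts="$1"; delay="$2"; shift 2
  i=1
  while [ "$i" -le "$max_attempts" ]; do
    if "$@"; then
      return 0
    fi
    if [ "$i" -lt "$max_attempts" ]; then
      sleep "$delay"
    fi
    i=$((i + 1))
  done
  return 1
}

# Demo: a command that only succeeds on its third invocation,
# tracked via a counter file.
COUNTER_FILE=$(mktemp)
echo 0 > "$COUNTER_FILE"
succeed_on_third() {
  n=$(cat "$COUNTER_FILE")
  n=$((n + 1))
  echo "$n" > "$COUNTER_FILE"
  [ "$n" -ge 3 ]
}

retry 5 0 succeed_on_third && echo "succeeded after $(cat "$COUNTER_FILE") attempts"
# → succeeded after 3 attempts
```

In the workflow this would replace both loops, e.g. `retry 20 30 crane manifest "$SOURCE_IMAGE"` for the availability check and `retry 3 10 crane copy "$SOURCE_IMAGE" "$DEST"` for the copy, which also answers the inline-vs-separate-file question: a shared helper only pays off if the scripts move out of the workflow YAML.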
Comment: Is this truly needed?

Reply: The dd-octo folks claim to do a periodic review of these: https://datadoghq.atlassian.net/wiki/spaces/SECENG/pages/5138645099/User+guide+dd-octo-sts#%3Abest%3A-Best-practices