
(2025.12 and 2025.12.01) Fail to enable QUIC support when no custom tags provided #144

@Murielxun


Bug description

Enabling QUIC support fails and deletes the <environment-name>-vdc-external-nlb Load Balancer listener when CloudFormation tags are not provided during deployment.

Affected versions

2025.12 and 2025.12.01

Mitigation

  1. (Optional) Recreate the listener of the <environment-name>-vdc-external-nlb Load Balancer if it has already been deleted.

     1. Get the Target Group ARN. Replace <target-group-name> with the name of the <environment-name>-gateway-TN-<suffix> Target Group shown in the EC2 Target Groups console, and <environment-name> with the name of your RES environment.

        ```
        ENVIRONMENT_NAME=<environment-name>
        TARGET_GROUP_NAME=<target-group-name>
        TARGET_GROUP_ARN=$(aws elbv2 describe-target-groups --names ${TARGET_GROUP_NAME} --query "TargetGroups[0].TargetGroupArn" --output text)
        ```

     2. Get the <environment-name>-vdc-external-nlb Load Balancer ARN.

        ```
        LOAD_BALANCER_NAME=${ENVIRONMENT_NAME}-vdc-external-nlb
        LOAD_BALANCER_ARN=$(aws elbv2 describe-load-balancers --names ${LOAD_BALANCER_NAME} --query "LoadBalancers[0].LoadBalancerArn" --output text)
        ```

     3. Recreate the listener.

        ```
        aws elbv2 create-listener --load-balancer-arn ${LOAD_BALANCER_ARN} --protocol TCP --port 443 --default-actions "Type=forward,TargetGroupArn=${TARGET_GROUP_ARN}"
        ```
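     As a quick sanity check (a sketch, assuming LOAD_BALANCER_ARN is still set from the step above), you can confirm the listener is back on port 443:

     ```shell
     # JMESPath filter for the TCP:443 listener (443 is a number literal, hence the backticks)
     LISTENER_QUERY='Listeners[?Port==`443`].[ListenerArn,Protocol,Port]'
     aws elbv2 describe-listeners \
         --load-balancer-arn ${LOAD_BALANCER_ARN} \
         --query "${LISTENER_QUERY}" \
         --output table
     ```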
    
  2. Create an S3 bucket with ACLs disabled.
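     One way to do this from the CLI (a sketch; the bucket name below is a hypothetical placeholder and must be globally unique, and buckets outside us-east-1 also need --create-bucket-configuration LocationConstraint=<region>):

     ```shell
     # Hypothetical bucket name; replace with your own globally unique name.
     BUCKET_NAME=res-quic-patch-example
     # ObjectOwnership=BucketOwnerEnforced creates the bucket with ACLs disabled
     # (this is also the default for newly created buckets; shown explicitly here).
     aws s3api create-bucket \
         --bucket ${BUCKET_NAME} \
         --object-ownership BucketOwnerEnforced
     ```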

  3. Download patch_host.py and idea-cluster-manager-2025.12.01-679477ec.tar.gz. In the commands below, replace <output-directory> with the directory to download the patch script into, <environment-name> with the name of your RES environment, <bucket-name> with the name of an ACLs-disabled S3 bucket in the account/region where RES is deployed, and <RES_VERSION> with 2025.12.01.

     1. The patch applies to 2025.12.01.
     2. The patch script requires AWS CLI v2, Python 3.10 or above, and Boto3.
     3. Configure the AWS CLI for the account/region where RES is deployed, and make sure you have S3 permissions to write to the bucket provided through <bucket-name>.

     ```
     OUTPUT_DIRECTORY=<output-directory>
     ENVIRONMENT_NAME=<environment-name>
     RES_VERSION=<RES_VERSION>
     BUCKET_NAME=<bucket-name>

     mkdir -p ${OUTPUT_DIRECTORY}
     curl https://research-engineering-studio-us-east-1.s3.us-east-1.amazonaws.com/releases/${RES_VERSION}/patch_scripts/patch_host.py --output ${OUTPUT_DIRECTORY}/patch_host.py
     curl https://research-engineering-studio-us-east-1.s3.us-east-1.amazonaws.com/releases/${RES_VERSION}/patch_scripts/patches/idea-cluster-manager-2025.12.01-679477ec.tar.gz --output ${OUTPUT_DIRECTORY}/idea-cluster-manager-2025.12.01-679477ec.tar.gz
     ```
    
  4. Run the patch command:

     ```
     python3 ${OUTPUT_DIRECTORY}/patch_host.py --environment-name ${ENVIRONMENT_NAME} --module cluster-manager --zip-file ${OUTPUT_DIRECTORY}/idea-cluster-manager-2025.12.01-679477ec.tar.gz --s3-bucket ${BUCKET_NAME}
     ```
  5. Cycle the Cluster Manager instance for your environment. You may also terminate the instance from the Amazon EC2 Management Console.

     ```
     INSTANCE_ID=$(aws ec2 describe-instances \
         --filters \
         Name=tag:Name,Values=${ENVIRONMENT_NAME}-cluster-manager \
         Name=tag:res:EnvironmentName,Values=${ENVIRONMENT_NAME} \
         --query "Reservations[0].Instances[0].InstanceId" \
         --output text)

     aws ec2 terminate-instances --instance-ids ${INSTANCE_ID}
     ```

  6. Verify the Cluster Manager instance status by checking the activity of the Auto Scaling group whose name starts with <environment-name>-cluster-manager-asg. Wait until the new instance has launched successfully.
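     Assuming the AWS CLI is configured for the account/region where RES is deployed, the most recent scaling activity can also be checked from the command line (ASG_PREFIX below mirrors the naming convention above):

     ```shell
     # The Auto Scaling group name starts with <environment-name>-cluster-manager-asg
     ASG_PREFIX=${ENVIRONMENT_NAME}-cluster-manager-asg
     ASG_NAME=$(aws autoscaling describe-auto-scaling-groups \
         --query "AutoScalingGroups[?starts_with(AutoScalingGroupName, '${ASG_PREFIX}')].AutoScalingGroupName" \
         --output text)
     # Show the most recent scaling activity; wait for the new launch to report Successful
     aws autoscaling describe-scaling-activities \
         --auto-scaling-group-name ${ASG_NAME} \
         --max-items 1
     ```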
