Description
When installing a fresh minio-operator Helm chart with the following values:
console:
  enabled: false
operator:
  image:
    repository: quay.io/minio/operator
    tag: v6.0.4
  replicaCount: 2
  resources:
    limits:
      cpu: 100m
      memory: 128Mi
    requests:
      cpu: 0
      memory: 0
  tolerations:
    - effect: NoSchedule
      key: node.kubernetes.io/node
      operator: Equal
      value: cloudops
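For reference, the operator was installed roughly along these lines; the repo alias, release name, and namespace below are only a sketch of the upstream chart install with the values above and may differ from the actual setup:

```shell
# Rough sketch of the install, using the upstream chart with the values above.
# Repo alias, release name, and namespace are illustrative, not exact.
helm repo add minio-operator https://operator.min.io
helm upgrade --install operator minio-operator/operator \
  --namespace minio-operator \
  --create-namespace \
  --values values.yaml
```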
I ran into an issue where both minio-operator replicas report the following panic:
panic: tls: private key does not match public key

goroutine 168 [running]:
github.com/minio/operator/pkg/controller.(*Controller).waitForCertSecretReady(0xc000a242c0, {0x210e1cc, 0x3}, {0x21104a3, 0x7})
    github.com/minio/operator/pkg/controller/tls.go:67 +0x2ec
github.com/minio/operator/pkg/controller.(*Controller).waitSTSTLSCert(...)
    github.com/minio/operator/pkg/controller/sts.go:421
github.com/minio/operator/pkg/controller.(*Controller).startSTSAPIServer(0xc000a242c0, {0x244fc30, 0xc000a7a050}, 0xc0001329c0)
    github.com/minio/operator/pkg/controller/main-controller.go:420 +0xab
created by github.com/minio/operator/pkg/controller.(*Controller).Start in goroutine 1
    github.com/minio/operator/pkg/controller/main-controller.go:563 +0x4db
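This error string is what Go's TLS key-pair validation returns when a certificate and private key belong to different keys. The mismatch can be double-checked straight from the secret with something like the following (a sketch; it assumes openssl and bash process substitution are available):

```shell
# Extract the certificate and key from the sts-tls secret (note the escaped dots in the key names).
kubectl get secret -n minio-operator sts-tls -o jsonpath='{.data.public\.crt}'  | base64 -d > public.crt
kubectl get secret -n minio-operator sts-tls -o jsonpath='{.data.private\.key}' | base64 -d > private.key

# Compare the public key embedded in the certificate with the one derived from the private key.
# Any diff output means the pair is mismatched, which is what the panic reports.
diff <(openssl x509 -in public.crt -noout -pubkey) <(openssl pkey -in private.key -pubout)
```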
The sts-tls secret appears to contain a private key that does not match its certificate:
onprem_shell@onprem-node23:~$ kubectl get secret -n minio-operator sts-tls -o yaml
apiVersion: v1
data:
  private.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JR0hBZ0VBTUJNR0J5cUdTTTQ5QWdFR0NDcUdTTTQ5QXdFSEJHMHdhd0lCQVFRZzRweHlDbS94WmJmRENPR1AKMVR1N2dTb21CcFBxbHNNVHUwN2JrSGIzOFJHaFJBTkNBQVRpKzBSZ3FTV3ZQVkJTZ0FRZDZINjYxZUQrMExZcgpEMHIyWHdsSU9HdnFLclc0bVkydWRncjJ3RVh4M1dNOTRySlQxVit4TWVHOWhxSFhKVjVJdDJqcAotLS0tLUVORCBQUklWQVRFIEtFWS0tLS0tCg==
  public.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN4VENDQWEyZ0F3SUJBZ0lRY0NpOFV1aHBrSDhKdFgrb3JSOGhxekFOQmdrcWhraUc5dzBCQVFzRkFEQWMKTVJvd0dBWURWUVFEREJGVGFXeDJaWEpqY21WbGEwMXJPSE5EUVRBZUZ3MHlOVEEwTWpneE1URXdNRFJhRncweQpOakEwTWpNd09EQXdNamRhTUVReEZUQVRCZ05WQkFvVERITjVjM1JsYlRwdWIyUmxjekVyTUNrR0ExVUVBeE1pCmMzbHpkR1Z0T201dlpHVTZjM1J6TG0xcGJtbHZMVzl3WlhKaGRHOXlMbk4yWXpCWk1CTUdCeXFHU000OUFnRUcKQ0NxR1NNNDlBd0VIQTBJQUJHR3Z4ZW1iZ2hUN2k1K3QyWjVKcVpCVmcwRWZoaGdka0xseng1bTI2R2JnTmlWTwpyR29JemJEVHo2N2tBSU9Ua2hGOHpCS2RvNUV4WStaNEFOcGNLWStqZ2FVd2dhSXdEZ1lEVlIwUEFRSC9CQVFECkFnV2dNQk1HQTFVZEpRUU1NQW9HQ0NzR0FRVUZCd01CTUF3R0ExVWRFd0VCL3dRQ01BQXdId1lEVlIwakJCZ3cKRm9BVVhSNHUvU1B6cS9VS3RVeWNUb0RBVXU4aGhQWXdUQVlEVlIwUkJFVXdRNElEYzNSemdoWnpkSE11YldsdQphVzh0YjNCbGNtRjBiM0l1YzNaamdpUnpkSE11YldsdWFXOHRiM0JsY21GMGIzSXVjM1pqTG1Oc2RYTjBaWEl1CmJHOWpZV3d3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUZnOUZ1ZVBKaGUzcVNTL2tjRUxDM1lDNWNXVHF3TXUKc3N0YlZnb0ZzYWw0c0lmVW5wZlo2SmZCcEtEdkNNZFdpczd0eU1YVlp6QUpuTEd3NkNzYWZSUEx2UW4ybjY2bwp5VnVhc2JEeTAzK2dzQW54Y2dONWdsbFErMEhlUW85Vkh4TjZnOVN3bUY4cHIvRVJRcEhNN1E5U3B0bmZJUmdSCnRBcC9lOHB5cGFBRFVZMkRTOHcxcmRNcTMwOWcySXFWYmltLzRnSEZONWRwWkM1d0ZJclhrMHh3bXBoMWJoaEcKNi9pT1NvRTBRaVFjR0prZmxSaFNzSGZ4bHZ1S1ZNWE5NLzBuVlJyeVRhNjRKV0lBTExVKzVQZEEwOTNwL0dnbgpHaUx1Z3NNM1hueFJUbDJTVm4zUjRUNGoySWRlQnlha3RlMy9JTkI3djVMbXN3UElFelFlRzZrPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
kind: Secret
metadata:
  creationTimestamp: "2025-04-28T11:15:08Z"
  name: sts-tls
  namespace: minio-operator
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: Deployment
    name: minio-operator
    uid: a696df15-f7ac-4539-a749-78056d0ba64d
  resourceVersion: "40373"
  uid: 6ae744cf-f472-4d7a-97f7-624608da9123
type: Opaque
The issue did not occur right away. We deploy the MinIO Operator and a tenant immediately afterwards; the tenant was deployed, but midway through the tenant deployment the minio-operator pods started reporting this panic. The bug also does not manifest itself consistently.
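A possible stopgap (untested; it assumes the operator recreates the sts-tls secret when it is missing, which I have not confirmed for v6.0.4) would be to force the key pair to be regenerated:

```shell
# Untested workaround sketch: drop the mismatched key pair and let the operator
# regenerate it. Assumes the operator recreates the sts-tls secret when absent.
kubectl delete secret -n minio-operator sts-tls
kubectl rollout restart deployment -n minio-operator minio-operator
```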
Your Environment
- Version used (minio-operator): 6.0.4
- Environment name and version (e.g. kubernetes v1.17.2): k8s 1.31
- Server type and version: VMware
- Operating System and version (uname -a): Ubuntu