CNTRLPLANE-1551: feat(nodepool): add HAProxy image override via annotation #7187

# Customize Worker Node HAProxy Image

Member
If a HostedCluster has multiple NodePools with different `hypershift.openshift.io/haproxy-image` annotations, how will the HAProxy image be determined?

Contributor (Author)
This is for the HAProxy deployed on worker nodes, so each NodePool can deploy a different image based on its own annotation.

This guide explains how to customize the HAProxy image used for the worker node API server proxy on a per-NodePool basis.

## Overview

Worker nodes in HyperShift use HAProxy to proxy connections to the hosted control plane API server. By default, the HAProxy image comes from the OpenShift release payload. However, you can override this image using either:

1. **NodePool annotation** (recommended for per-NodePool customization)
2. **Environment variable** (for a global override when shared ingress is enabled)

## Image Resolution Priority

The HAProxy image is resolved in the following priority order (highest to lowest); a quick way to tell which source is in effect follows the list:

1. **NodePool annotation** `hypershift.openshift.io/haproxy-image` (highest priority)
2. **Environment variable** `IMAGE_SHARED_INGRESS_HAPROXY` (when shared ingress is enabled)
3. **Hardcoded default** (when shared ingress is enabled)
4. **Release payload** (default when shared ingress is disabled)
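
The sketch below checks the annotation first and, if it is absent, prints the operator-level override. It is a minimal example and makes assumptions not stated above: a NodePool named `my-nodepool` in the `clusters` namespace, and a HyperShift operator deployment named `operator` in the `hypershift` namespace; adjust the names for your installation.

```bash
#!/usr/bin/env bash
# Highest priority: the per-NodePool annotation override.
IMG=$(kubectl get nodepool my-nodepool -n clusters \
  -o jsonpath='{.metadata.annotations.hypershift\.openshift\.io/haproxy-image}')

if [ -n "$IMG" ]; then
  echo "NodePool annotation override: $IMG"
else
  # Next priority: the operator-level override (only relevant with shared ingress).
  kubectl get deployment operator -n hypershift \
    -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="IMAGE_SHARED_INGRESS_HAPROXY")].value}'
fi
```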

## Per-NodePool Customization

### Use Case

Use the NodePool annotation when you want to:
- Test a newer HAProxy version on a specific NodePool
- Use different HAProxy images for different workload types
- Gradually roll out HAProxy updates across NodePools
- Use a custom HAProxy image with specific patches or configurations

### Configuration

To override the HAProxy image for a specific NodePool, add the `hypershift.openshift.io/haproxy-image` annotation:

```yaml
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: my-nodepool
  namespace: clusters
  annotations:
    hypershift.openshift.io/haproxy-image: "quay.io/my-org/haproxy:custom-v2.9.1"
spec:
  clusterName: my-cluster
  replicas: 3
  # ... rest of spec
```

### Applying the Annotation

You can add the annotation to an existing NodePool using `kubectl annotate`:

```bash
kubectl annotate nodepool my-nodepool \
  -n clusters \
  hypershift.openshift.io/haproxy-image="quay.io/my-org/haproxy:custom-v2.9.1"
```

### Removing the Override

To remove the override and revert to the default behavior:

```bash
kubectl annotate nodepool my-nodepool \
  -n clusters \
  hypershift.openshift.io/haproxy-image-
```

## Global Override (Shared Ingress Only)

When shared ingress is enabled, you can set a global HAProxy image override using the `IMAGE_SHARED_INGRESS_HAPROXY` environment variable on the HyperShift operator. This affects all NodePools that don't have the annotation set.

**Note**: The NodePool annotation always takes precedence over the environment variable.
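
As a rough sketch of what setting this looks like (assuming, as above, that the HyperShift operator runs as the `operator` deployment in the `hypershift` namespace):

```bash
# Set (or update) the global HAProxy override on the HyperShift operator.
kubectl set env deployment/operator -n hypershift \
  IMAGE_SHARED_INGRESS_HAPROXY=quay.io/my-org/haproxy:custom-v2.9.1

# Remove the override again (the trailing "-" deletes the variable).
kubectl set env deployment/operator -n hypershift \
  IMAGE_SHARED_INGRESS_HAPROXY-
```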

## Verification

After applying the annotation, new worker nodes will use the specified HAProxy image. To verify:

1. Check the NodePool's token secret generation to ensure the new configuration is picked up
2. Verify the ignition configuration contains the correct image
3. On a worker node, check the static pod manifest:

```bash
# On a worker node
cat /etc/kubernetes/manifests/kube-apiserver-proxy.yaml | grep image:
```
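
Alternatively, if you have access to the hosted cluster's kubeconfig, you can list the proxy static pods by their `k8s-app=kube-apiserver-proxy` label (the label used in the static pod manifest) and print the image each one is running; treat this as a sketch rather than the canonical check:

```bash
# From the hosted cluster: show each kube-apiserver-proxy pod and its image.
kubectl get pods -n kube-system -l k8s-app=kube-apiserver-proxy \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
```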

## Rollout Behavior

The HAProxy image change triggers a NodePool rollout (the commands after this list show one way to watch it):
- New ignition configuration is generated with the updated image
- Worker nodes are replaced according to the NodePool's upgrade strategy
- The rollout respects `maxUnavailable` settings
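
A simple way to follow the rollout, assuming the NodePool name and namespace from the earlier examples:

```bash
# Watch replica counts change as nodes are replaced.
kubectl get nodepool my-nodepool -n clusters -w

# Inspect the NodePool status, including conditions, once the rollout settles.
kubectl get nodepool my-nodepool -n clusters -o yaml
```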

## Important Notes

1. **Image Availability**: Ensure the custom HAProxy image is accessible from worker nodes
2. **Pull Secrets**: The worker nodes must have credentials to pull the custom image
3. **Compatibility**: The custom HAProxy image must be compatible with HyperShift's configuration expectations
4. **Shared Ingress**: When shared ingress is enabled, ensure the custom image supports proxy protocol v2 with TLV (requires HAProxy v2.9+)
5. **Multiple NodePools**: Each NodePool can have a different HAProxy image override

## Troubleshooting

### Image Pull Errors

If worker nodes fail to pull the custom image:
1. Verify the image exists and is accessible
2. Check that the global pull secret includes credentials for the image registry (see the example below)
3. Verify network connectivity from worker nodes to the image registry
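
One quick check for step 2 is to list which registries the pull secret actually contains credentials for. This is a sketch that assumes a HostedCluster named `my-cluster` in the `clusters` namespace, a standard `dockerconfigjson` pull secret, and `jq` installed for readability:

```bash
# Find the pull secret referenced by the HostedCluster, then list the registries it covers.
PULL_SECRET=$(kubectl get hostedcluster my-cluster -n clusters -o jsonpath='{.spec.pullSecret.name}')
kubectl get secret "$PULL_SECRET" -n clusters \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d | jq -r '.auths | keys[]'
```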

### Wrong Image in Use

If the expected image is not being used:
1. Check the annotation is correctly set on the NodePool (see the example below)
2. Verify the NodePool has reconciled (check status conditions)
3. Inspect the ignition configuration in the NodePool's token secret
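
A minimal check for steps 1 and 2, assuming the same NodePool name and namespace as the earlier examples:

```bash
# Print the annotation value (empty output means no override is set).
kubectl get nodepool my-nodepool -n clusters \
  -o jsonpath='{.metadata.annotations.hypershift\.openshift\.io/haproxy-image}{"\n"}'

# Review the status conditions for reconciliation errors.
kubectl get nodepool my-nodepool -n clusters -o yaml | grep -A 40 'conditions:'
```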

### Rollout Issues

If the rollout doesn't complete:
1. Check NodePool conditions for errors
2. Verify the custom image is compatible
3. Check worker node logs for HAProxy startup failures (an example follows the list)
4. Ensure the image supports the required features (e.g., proxy protocol for shared ingress)
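
For step 3, the proxy runs as a static pod in the hosted cluster's `kube-system` namespace, so its logs can usually be fetched with the hosted cluster's kubeconfig. A hedged example, reusing the `k8s-app=kube-apiserver-proxy` label from the static pod manifest:

```bash
# Tail recent HAProxy logs from the kube-apiserver-proxy static pods.
kubectl logs -n kube-system -l k8s-app=kube-apiserver-proxy --tail=50 --prefix
```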

In the second changed file, the base64-encoded kube-apiserver-proxy static pod manifest changes its `image` from the shared-ingress image digest to `ha-proxy-image:latest`:

@@ -18,7 +18,7 @@ storage:
     overwrite: true
     path: /etc/kubernetes/apiserver-proxy-config/haproxy.cfg
   - contents:
-      source: data:text/plain;charset=utf-8;base64,YXBpVmVyc2lvbjogdjEKa2luZDogUG9kCm1ldGFkYXRhOgogIGNyZWF0aW9uVGltZXN0YW1wOiBudWxsCiAgbGFiZWxzOgogICAgazhzLWFwcDoga3ViZS1hcGlzZXJ2ZXItcHJveHkKICBuYW1lOiBrdWJlLWFwaXNlcnZlci1wcm94eQogIG5hbWVzcGFjZToga3ViZS1zeXN0ZW0Kc3BlYzoKICBjb250YWluZXJzOgogIC0gY29tbWFuZDoKICAgIC0gaGFwcm94eQogICAgLSAtZgogICAgLSAvdXNyL2xvY2FsL2V0Yy9oYXByb3h5CiAgICBpbWFnZTogcXVheS5pby9yZWRoYXQtdXNlci13b3JrbG9hZHMvY3J0LXJlZGhhdC1hY20tdGVuYW50L2h5cGVyc2hpZnQtc2hhcmVkLWluZ3Jlc3MtbWFpbkBzaGEyNTY6MWFmNTliN2EyOTQzMjMxNGJkZTU0ZTg5NzdmYTQ1ZmE5MmRjNDg4ODVlZmJmMGRmNjAxNDE4ZWMwOTEyZjQ3MgogICAgbGl2ZW5lc3NQcm9iZToKICAgICAgZmFpbHVyZVRocmVzaG9sZDogMwogICAgICBodHRwR2V0OgogICAgICAgIGhvc3Q6IGNsdXN0ZXIuaW50ZXJuYWwuZXhhbXBsZS5jb20KICAgICAgICBwYXRoOiAvdmVyc2lvbgogICAgICAgIHBvcnQ6IDg0NDMKICAgICAgICBzY2hlbWU6IEhUVFBTCiAgICAgIGluaXRpYWxEZWxheVNlY29uZHM6IDEyMAogICAgICBwZXJpb2RTZWNvbmRzOiAxMjAKICAgICAgc3VjY2Vzc1RocmVzaG9sZDogMQogICAgbmFtZTogaGFwcm94eQogICAgcG9ydHM6CiAgICAtIGNvbnRhaW5lclBvcnQ6IDg0NDMKICAgICAgaG9zdFBvcnQ6IDg0NDMKICAgICAgbmFtZTogYXBpc2VydmVyCiAgICAgIHByb3RvY29sOiBUQ1AKICAgIHJlc291cmNlczoKICAgICAgcmVxdWVzdHM6CiAgICAgICAgY3B1OiAxM20KICAgICAgICBtZW1vcnk6IDE2TWkKICAgIHNlY3VyaXR5Q29udGV4dDoKICAgICAgcnVuQXNVc2VyOiAxMDAxCiAgICB2b2x1bWVNb3VudHM6CiAgICAtIG1vdW50UGF0aDogL3Vzci9sb2NhbC9ldGMvaGFwcm94eQogICAgICBuYW1lOiBjb25maWcKICBob3N0TmV0d29yazogdHJ1ZQogIHByaW9yaXR5Q2xhc3NOYW1lOiBzeXN0ZW0tbm9kZS1jcml0aWNhbAogIHZvbHVtZXM6CiAgLSBob3N0UGF0aDoKICAgICAgcGF0aDogL2V0Yy9rdWJlcm5ldGVzL2FwaXNlcnZlci1wcm94eS1jb25maWcKICAgIG5hbWU6IGNvbmZpZwpzdGF0dXM6IHt9Cg==
+      source: data:text/plain;charset=utf-8;base64,YXBpVmVyc2lvbjogdjEKa2luZDogUG9kCm1ldGFkYXRhOgogIGNyZWF0aW9uVGltZXN0YW1wOiBudWxsCiAgbGFiZWxzOgogICAgazhzLWFwcDoga3ViZS1hcGlzZXJ2ZXItcHJveHkKICBuYW1lOiBrdWJlLWFwaXNlcnZlci1wcm94eQogIG5hbWVzcGFjZToga3ViZS1zeXN0ZW0Kc3BlYzoKICBjb250YWluZXJzOgogIC0gY29tbWFuZDoKICAgIC0gaGFwcm94eQogICAgLSAtZgogICAgLSAvdXNyL2xvY2FsL2V0Yy9oYXByb3h5CiAgICBpbWFnZTogaGEtcHJveHktaW1hZ2U6bGF0ZXN0CiAgICBsaXZlbmVzc1Byb2JlOgogICAgICBmYWlsdXJlVGhyZXNob2xkOiAzCiAgICAgIGh0dHBHZXQ6CiAgICAgICAgaG9zdDogY2x1c3Rlci5pbnRlcm5hbC5leGFtcGxlLmNvbQogICAgICAgIHBhdGg6IC92ZXJzaW9uCiAgICAgICAgcG9ydDogODQ0MwogICAgICAgIHNjaGVtZTogSFRUUFMKICAgICAgaW5pdGlhbERlbGF5U2Vjb25kczogMTIwCiAgICAgIHBlcmlvZFNlY29uZHM6IDEyMAogICAgICBzdWNjZXNzVGhyZXNob2xkOiAxCiAgICBuYW1lOiBoYXByb3h5CiAgICBwb3J0czoKICAgIC0gY29udGFpbmVyUG9ydDogODQ0MwogICAgICBob3N0UG9ydDogODQ0MwogICAgICBuYW1lOiBhcGlzZXJ2ZXIKICAgICAgcHJvdG9jb2w6IFRDUAogICAgcmVzb3VyY2VzOgogICAgICByZXF1ZXN0czoKICAgICAgICBjcHU6IDEzbQogICAgICAgIG1lbW9yeTogMTZNaQogICAgc2VjdXJpdHlDb250ZXh0OgogICAgICBydW5Bc1VzZXI6IDEwMDEKICAgIHZvbHVtZU1vdW50czoKICAgIC0gbW91bnRQYXRoOiAvdXNyL2xvY2FsL2V0Yy9oYXByb3h5CiAgICAgIG5hbWU6IGNvbmZpZwogIGhvc3ROZXR3b3JrOiB0cnVlCiAgcHJpb3JpdHlDbGFzc05hbWU6IHN5c3RlbS1ub2RlLWNyaXRpY2FsCiAgdm9sdW1lczoKICAtIGhvc3RQYXRoOgogICAgICBwYXRoOiAvZXRjL2t1YmVybmV0ZXMvYXBpc2VydmVyLXByb3h5LWNvbmZpZwogICAgbmFtZTogY29uZmlnCnN0YXR1czoge30K
     mode: 420
     overwrite: true
     path: /etc/kubernetes/manifests/kube-apiserver-proxy.yaml

Member
IIUC, will this cause the NodePool to restart?

This is for NodePool; IMHO, it should be placed in api/hypershift/v1beta1/nodepool_types.go.