
Commit 96f2de9

feat(nodepool): add HAProxy image override via annotation
This change enables per-NodePool customization of the HAProxy image used for worker node API server proxy through the annotation hypershift.openshift.io/haproxy-image. The implementation includes:

- New annotation constant HAProxyImageAnnotation in API types
- Image resolution function with priority order:
  1. NodePool annotation (highest priority)
  2. Environment variable when shared ingress enabled
  3. Hardcoded default when shared ingress enabled
  4. Release payload (default)
- Updated all call sites to use the new resolution logic
- Comprehensive unit tests validating all priority scenarios
- User documentation with examples and troubleshooting

The refactored resolveHAProxyImage() function is independently testable, avoiding complex HAProxy config generation setup.

Signed-off-by: Mulham Raee <[email protected]>
Commit-Message-Assisted-by: Claude (via Claude Code)
1 parent 207f8a2 commit 96f2de9

8 files changed: +294 −11 lines changed

api/hypershift/v1beta1/hostedcluster_types.go
Lines changed: 4 additions & 0 deletions

@@ -50,6 +50,10 @@ const (
 	// KonnectivityAgentImageAnnotation is a temporary annotation that allows the specification of the konnectivity agent image.
 	// This will be removed when Konnectivity is added to the Openshift release payload
 	KonnectivityAgentImageAnnotation = "hypershift.openshift.io/konnectivity-agent-image"
+	// HAProxyImageAnnotation can be set on a NodePool to override the HAProxy image
+	// used for worker node API server proxy. This takes precedence over the environment
+	// variable IMAGE_SHARED_INGRESS_HAPROXY and the default shared ingress image.
+	HAProxyImageAnnotation = "hypershift.openshift.io/haproxy-image"
 	// ControlPlaneOperatorImageAnnotation is an annotation that allows the specification of the control plane operator image.
 	// This is used for development and e2e workflows
 	ControlPlaneOperatorImageAnnotation = "hypershift.openshift.io/control-plane-operator-image"
Lines changed: 126 additions & 0 deletions

@@ -0,0 +1,126 @@

# Customize Worker Node HAProxy Image

This guide explains how to customize the HAProxy image used for worker node API server proxy on a per-NodePool basis.

## Overview

Worker nodes in HyperShift use HAProxy to proxy connections to the hosted control plane API server. By default, the HAProxy image comes from the OpenShift release payload. However, you can override this image using either:

1. **NodePool annotation** (recommended for per-NodePool customization)
2. **Environment variable** (for global override when shared ingress is enabled)

## Image Resolution Priority

The HAProxy image is resolved in the following priority order (highest to lowest):

1. **NodePool annotation** `hypershift.openshift.io/haproxy-image` (highest priority)
2. **Environment variable** `IMAGE_SHARED_INGRESS_HAPROXY` (when shared ingress is enabled)
3. **Hardcoded default** (when shared ingress is enabled)
4. **Release payload** (default when shared ingress is disabled)
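To check which override sources are currently set, you can inspect the NodePool annotation and the operator's environment. The commands below are a sketch only: the NodePool name `my-nodepool`, the namespace `clusters`, and the HyperShift operator deployment `operator` in the `hypershift` namespace are assumptions to adapt to your environment.

```bash
# Per-NodePool override: list the NodePool's annotations and look for
# hypershift.openshift.io/haproxy-image (NodePool name/namespace assumed).
kubectl get nodepool my-nodepool -n clusters -o jsonpath='{.metadata.annotations}'

# Global shared-ingress override: print the env var on the HyperShift operator
# (deployment name, namespace, and single-container layout are assumptions).
kubectl get deployment operator -n hypershift \
  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="IMAGE_SHARED_INGRESS_HAPROXY")].value}'
```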
## Per-NodePool Customization

### Use Case

Use the NodePool annotation when you want to:

- Test a newer HAProxy version on a specific NodePool
- Use different HAProxy images for different workload types
- Gradually roll out HAProxy updates across NodePools
- Use a custom HAProxy image with specific patches or configurations

### Configuration

To override the HAProxy image for a specific NodePool, add the `hypershift.openshift.io/haproxy-image` annotation:

```yaml
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: my-nodepool
  namespace: clusters
  annotations:
    hypershift.openshift.io/haproxy-image: "quay.io/my-org/haproxy:custom-v2.9.1"
spec:
  clusterName: my-cluster
  replicas: 3
  # ... rest of spec
```
### Applying the Annotation

You can add the annotation to an existing NodePool using `kubectl annotate`:

```bash
kubectl annotate nodepool my-nodepool \
  -n clusters \
  hypershift.openshift.io/haproxy-image="quay.io/my-org/haproxy:custom-v2.9.1"
```

### Removing the Override

To remove the override and revert to the default behavior:

```bash
kubectl annotate nodepool my-nodepool \
  -n clusters \
  hypershift.openshift.io/haproxy-image-
```
## Global Override (Shared Ingress Only)

When shared ingress is enabled, you can set a global HAProxy image override using the `IMAGE_SHARED_INGRESS_HAPROXY` environment variable on the HyperShift operator. This affects all NodePools that don't have the annotation set.

**Note**: The NodePool annotation always takes precedence over the environment variable.
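As a sketch of what setting the global override could look like, assuming the HyperShift operator runs as a deployment named `operator` in the `hypershift` namespace (adjust both names for your installation):

```bash
# Set or update the global override on the HyperShift operator deployment (names assumed).
kubectl set env deployment/operator -n hypershift \
  IMAGE_SHARED_INGRESS_HAPROXY=quay.io/my-org/haproxy:custom-v2.9.1

# Remove the global override again.
kubectl set env deployment/operator -n hypershift IMAGE_SHARED_INGRESS_HAPROXY-
```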
## Verification

After applying the annotation, new worker nodes will use the specified HAProxy image. To verify:

1. Check the NodePool's token secret generation to ensure the new configuration is picked up
2. Verify the ignition configuration contains the correct image
3. On a worker node, check the static pod manifest:

```bash
# On a worker node
grep 'image:' /etc/kubernetes/manifests/kube-apiserver-proxy.yaml
```
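If you prefer not to open a shell on the node, a similar check can be run through a debug pod; this is a sketch that assumes a kubeconfig for the hosted cluster and a placeholder node name:

```bash
# Run the same check via a debug pod on the worker node (hosted-cluster kubeconfig assumed).
oc debug node/<worker-node-name> -- chroot /host \
  grep 'image:' /etc/kubernetes/manifests/kube-apiserver-proxy.yaml
```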
## Rollout Behavior

The HAProxy image change triggers a NodePool rollout:

- New ignition configuration is generated with the updated image
- Worker nodes are replaced according to the NodePool's upgrade strategy
- The rollout respects `maxUnavailable` settings
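To follow the rollout, you can watch the NodePool and its status conditions. A minimal sketch, reusing the example NodePool name and namespace from above; exact condition names can vary by HyperShift version:

```bash
# Watch replica counts while nodes are replaced.
kubectl get nodepool my-nodepool -n clusters -w

# Inspect status conditions (for example UpdatingConfig or Ready) for progress or errors.
kubectl get nodepool my-nodepool -n clusters -o jsonpath='{.status.conditions}'
```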
## Important Notes

1. **Image Availability**: Ensure the custom HAProxy image is accessible from worker nodes
2. **Pull Secrets**: The worker nodes must have credentials to pull the custom image
3. **Compatibility**: The custom HAProxy image must be compatible with HyperShift's configuration expectations
4. **Shared Ingress**: When shared ingress is enabled, ensure the custom image supports proxy protocol v2 with TLV (requires HAProxy v2.9+); see the version check below
5. **Multiple NodePools**: Each NodePool can have a different HAProxy image override
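One quick way to confirm the HAProxy version shipped in a custom image (note 4) is to run the image locally and print the version. This is a sketch that assumes `podman` is available and that the image contains an `haproxy` binary on its PATH:

```bash
# Print the HAProxy version baked into the custom image (image reference reused from the examples above).
podman run --rm --entrypoint haproxy quay.io/my-org/haproxy:custom-v2.9.1 -v
```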
## Troubleshooting

### Image Pull Errors

If worker nodes fail to pull the custom image:

1. Verify the image exists and is accessible
2. Check that the global pull secret includes credentials for the image registry (see the example below)
3. Verify network connectivity from worker nodes to the image registry
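To see which registries the pull secret covers, you can decode it and list its `auths` entries. A sketch only: the secret name `my-cluster-pull-secret` and namespace `clusters` are assumptions; use the secret referenced by your HostedCluster's `spec.pullSecret`:

```bash
# List the registries the pull secret has credentials for (secret name/namespace are assumptions).
kubectl get secret my-cluster-pull-secret -n clusters \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d | jq -r '.auths | keys[]'
```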
### Wrong Image in Use

If the expected image is not being used:

1. Check the annotation is correctly set on the NodePool
2. Verify the NodePool has reconciled (check status conditions)
3. Inspect the ignition configuration in the NodePool's token secret

### Rollout Issues

If the rollout doesn't complete:

1. Check NodePool conditions for errors
2. Verify the custom image is compatible
3. Check worker node logs for HAProxy startup failures
4. Ensure the image supports the required features (e.g., proxy protocol for shared ingress)

hypershift-operator/controllers/nodepool/apiserver-haproxy/haproxy.go
Lines changed: 0 additions & 6 deletions

@@ -18,7 +18,6 @@ import (
 	sharedingress "github.com/openshift/hypershift/hypershift-operator/controllers/sharedingress"
 	api "github.com/openshift/hypershift/support/api"
 	"github.com/openshift/hypershift/support/config"
-	"github.com/openshift/hypershift/support/images"
 	"github.com/openshift/hypershift/support/releaseinfo"
 	"github.com/openshift/hypershift/support/util"

@@ -255,11 +254,6 @@ func apiServerProxyConfig(haProxyImage, cpoImage, clusterID,
 		livenessProbeEndpoint = "/livez?exclude=etcd&exclude=log"
 	}

-	if sharedingress.UseSharedIngress() {
-		// proxy protocol v2 with TLV support (custom proxy protocol header) requires haproxy v2.9+, see: https://www.haproxy.com/blog/announcing-haproxy-2-9#proxy-protocol-tlv-fields
-		haProxyImage = images.GetSharedIngressHAProxyImage()
-	}
-
 	filesToAdd := []fileToAdd{
 		{
 			template: setupAPIServerIPScriptTemplate,

hypershift-operator/controllers/nodepool/conditions.go
Lines changed: 1 addition & 1 deletion

@@ -325,7 +325,7 @@ func (r *NodePoolReconciler) validMachineConfigCondition(ctx context.Context, no
 		return &ctrl.Result{}, nil
 	}

-	haproxyRawConfig, err := r.generateHAProxyRawConfig(ctx, hcluster, releaseImage)
+	haproxyRawConfig, err := r.generateHAProxyRawConfig(ctx, nodePool, hcluster, releaseImage)
 	if err != nil {
 		return &ctrl.Result{}, fmt.Errorf("failed to generate HAProxy raw config: %w", err)
 	}

hypershift-operator/controllers/nodepool/nodepool_controller.go
Lines changed: 29 additions & 3 deletions

@@ -13,8 +13,10 @@ import (
 	"github.com/openshift/hypershift/hypershift-operator/controllers/manifests"
 	haproxy "github.com/openshift/hypershift/hypershift-operator/controllers/nodepool/apiserver-haproxy"
 	"github.com/openshift/hypershift/hypershift-operator/controllers/nodepool/kubevirt"
+	"github.com/openshift/hypershift/hypershift-operator/controllers/sharedingress"
 	kvinfra "github.com/openshift/hypershift/kubevirtexternalinfra"
 	"github.com/openshift/hypershift/support/capabilities"
+	"github.com/openshift/hypershift/support/images"
 	"github.com/openshift/hypershift/support/releaseinfo"
 	"github.com/openshift/hypershift/support/supportedversion"
 	"github.com/openshift/hypershift/support/upsert"

@@ -336,7 +338,7 @@ func (r *NodePoolReconciler) reconcile(ctx context.Context, hcluster *hyperv1.Ho
 		return ctrl.Result{}, nil
 	}

-	haproxyRawConfig, err := r.generateHAProxyRawConfig(ctx, hcluster, releaseImage)
+	haproxyRawConfig, err := r.generateHAProxyRawConfig(ctx, nodePool, hcluster, releaseImage)
 	if err != nil {
 		return ctrl.Result{}, fmt.Errorf("failed to generate HAProxy raw config: %w", err)
 	}

@@ -413,7 +415,7 @@ func (r *NodePoolReconciler) token(ctx context.Context, hcluster *hyperv1.Hosted
 		return nil, fmt.Errorf("failed to look up release image metadata: %w", err)
 	}

-	haproxyRawConfig, err := r.generateHAProxyRawConfig(ctx, hcluster, releaseImage)
+	haproxyRawConfig, err := r.generateHAProxyRawConfig(ctx, nodePool, hcluster, releaseImage)
 	if err != nil {
 		return nil, fmt.Errorf("failed to generate HAProxy raw config: %w", err)
 	}

@@ -974,11 +976,35 @@ func (r *NodePoolReconciler) getAdditionalTrustBundle(ctx context.Context, hoste
 	return additionalTrustBundle, nil
 }

-func (r *NodePoolReconciler) generateHAProxyRawConfig(ctx context.Context, hcluster *hyperv1.HostedCluster, releaseImage *releaseinfo.ReleaseImage) (string, error) {
+// resolveHAProxyImage determines which HAProxy image to use based on priority:
+// 1. NodePool annotation (highest priority)
+// 2. Environment variable override (when shared ingress enabled)
+// 3. Hardcoded default (when shared ingress enabled)
+// 4. Release payload (default)
+func resolveHAProxyImage(nodePool *hyperv1.NodePool, releaseImage *releaseinfo.ReleaseImage) (string, error) {
+	// Check NodePool annotation first (highest priority)
+	if annotationImage, ok := nodePool.Annotations[hyperv1.HAProxyImageAnnotation]; ok && annotationImage != "" {
+		return annotationImage, nil
+	}
+
+	// Check if shared ingress is enabled
+	if sharedingress.UseSharedIngress() {
+		return images.GetSharedIngressHAProxyImage(), nil
+	}
+
+	// Fall back to release payload image
 	haProxyImage, ok := releaseImage.ComponentImages()[haproxy.HAProxyRouterImageName]
 	if !ok {
 		return "", fmt.Errorf("release image doesn't have a %s image", haproxy.HAProxyRouterImageName)
 	}
+	return haProxyImage, nil
+}
+
+func (r *NodePoolReconciler) generateHAProxyRawConfig(ctx context.Context, nodePool *hyperv1.NodePool, hcluster *hyperv1.HostedCluster, releaseImage *releaseinfo.ReleaseImage) (string, error) {
+	haProxyImage, err := resolveHAProxyImage(nodePool, releaseImage)
+	if err != nil {
+		return "", err
+	}

 	haProxy := haproxy.HAProxy{
 		Client: r.Client,

hypershift-operator/controllers/nodepool/nodepool_controller_test.go
Lines changed: 129 additions & 0 deletions

@@ -10,6 +10,7 @@ import (

 	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
 	"github.com/openshift/hypershift/api/util/ipnet"
+	haproxy "github.com/openshift/hypershift/hypershift-operator/controllers/nodepool/apiserver-haproxy"
 	ignserver "github.com/openshift/hypershift/ignition-server/controllers"
 	kvinfra "github.com/openshift/hypershift/kubevirtexternalinfra"
 	"github.com/openshift/hypershift/support/api"

@@ -1475,3 +1476,131 @@ func Test_validateHCPayloadSupportsNodePoolCPUArch(t *testing.T) {
 		})
 	}
 }
+func TestResolveHAProxyImage(t *testing.T) {
+	const (
+		testAnnotationImage    = "quay.io/test/haproxy:custom"
+		testSharedIngressImage = "quay.io/test/haproxy:shared-ingress"
+		testReleaseImage       = "registry.test.io/openshift/haproxy-router:v4.16"
+	)
+
+	testCases := []struct {
+		name                string
+		nodePoolAnnotations map[string]string
+		useSharedIngress    bool
+		envVarImage         string
+		expectedImage       string
+	}{
+		{
+			name: "When NodePool annotation is set it should use annotation image",
+			nodePoolAnnotations: map[string]string{
+				hyperv1.HAProxyImageAnnotation: testAnnotationImage,
+			},
+			useSharedIngress: false,
+			expectedImage:    testAnnotationImage,
+		},
+		{
+			name: "When NodePool annotation is set it should override shared ingress image",
+			nodePoolAnnotations: map[string]string{
+				hyperv1.HAProxyImageAnnotation: testAnnotationImage,
+			},
+			useSharedIngress: true,
+			envVarImage:      testSharedIngressImage,
+			expectedImage:    testAnnotationImage,
+		},
+		{
+			name: "When NodePool annotation is empty it should use shared ingress image",
+			nodePoolAnnotations: map[string]string{
+				hyperv1.HAProxyImageAnnotation: "",
+			},
+			useSharedIngress: true,
+			envVarImage:      testSharedIngressImage,
+			expectedImage:    testSharedIngressImage,
+		},
+		{
+			name:             "When no annotation and shared ingress enabled it should use shared ingress image",
+			useSharedIngress: true,
+			envVarImage:      testSharedIngressImage,
+			expectedImage:    testSharedIngressImage,
+		},
+		{
+			name:             "When no annotation and shared ingress disabled it should use release payload image",
+			useSharedIngress: false,
+			expectedImage:    testReleaseImage,
+		},
+		{
+			name: "When annotation is empty and shared ingress disabled it should use release payload image",
+			nodePoolAnnotations: map[string]string{
+				hyperv1.HAProxyImageAnnotation: "",
+			},
+			useSharedIngress: false,
+			expectedImage:    testReleaseImage,
+		},
+	}
+
+	for _, tc := range testCases {
+		t.Run(tc.name, func(t *testing.T) {
+			g := NewWithT(t)
+
+			// Set up environment variables for shared ingress
+			if tc.useSharedIngress {
+				t.Setenv("MANAGED_SERVICE", hyperv1.AroHCP)
+				if tc.envVarImage != "" {
+					t.Setenv("IMAGE_SHARED_INGRESS_HAPROXY", tc.envVarImage)
+				}
+			}
+
+			// Create test NodePool
+			nodePool := &hyperv1.NodePool{
+				ObjectMeta: metav1.ObjectMeta{
+					Name:        "test-nodepool",
+					Namespace:   "clusters",
+					Annotations: tc.nodePoolAnnotations,
+				},
+			}
+
+			// Create fake pull secret
+			pullSecret := &corev1.Secret{
+				ObjectMeta: metav1.ObjectMeta{
+					Name: "pull-secret",
+				},
+				Data: map[string][]byte{
+					corev1.DockerConfigJsonKey: []byte(`{"auths":{"test":{"auth":"dGVzdDp0ZXN0"}}}`),
+				},
+			}
+
+			// Create fake client
+			c := fake.NewClientBuilder().WithObjects(pullSecret).Build()
+
+			// Create fake release provider with component images
+			releaseProvider := &fakereleaseprovider.FakeReleaseProvider{
+				Components: map[string]string{
+					haproxy.HAProxyRouterImageName: testReleaseImage,
+				},
+			}
+
+			// Create test HostedCluster
+			hc := &hyperv1.HostedCluster{
+				Spec: hyperv1.HostedClusterSpec{
+					PullSecret: corev1.LocalObjectReference{
+						Name: "pull-secret",
+					},
+					Release: hyperv1.Release{
+						Image: "test-release:latest",
+					},
+				},
+			}
+
+			// Get release image using the fake provider
+			ctx := t.Context()
+			releaseImage := fakereleaseprovider.GetReleaseImage(ctx, hc, c, releaseProvider)
+			g.Expect(releaseImage).ToNot(BeNil())
+
+			// Call resolveHAProxyImage
+			image, err := resolveHAProxyImage(nodePool, releaseImage)
+
+			// Verify no error and correct image
+			g.Expect(err).ToNot(HaveOccurred())
+			g.Expect(image).To(Equal(tc.expectedImage))
+		})
+	}
+}

hypershift-operator/controllers/nodepool/secret_janitor.go
Lines changed: 1 addition & 1 deletion

@@ -93,7 +93,7 @@ func (r *secretJanitor) Reconcile(ctx context.Context, req reconcile.Request) (r
 	if err != nil {
 		return ctrl.Result{}, err
 	}
-	haproxyRawConfig, err := r.generateHAProxyRawConfig(ctx, hcluster, releaseImage)
+	haproxyRawConfig, err := r.generateHAProxyRawConfig(ctx, nodePool, hcluster, releaseImage)
 	if err != nil {
 		return ctrl.Result{}, fmt.Errorf("failed to generate HAProxy raw config: %w", err)
 	}

vendor/github.com/openshift/hypershift/api/hypershift/v1beta1/hostedcluster_types.go

Lines changed: 4 additions & 0 deletions
Some generated files are not rendered by default.
