SSL passthrough requires enabling PROXY protocol to get client IPs, but breaks HTTP requests #14030

@erhhung

Description

What happened:

I enabled SSL passthrough in the ingress controller (--enable-ssl-passthrough) because some services need to examine client certs directly (a similar scenario to #10706) or present their own certs (e.g. the vCluster control plane).

With SSL passthrough enabled, ingress-nginx creates a listener on 443 and uses the PROXY protocol to proxy to HTTPS backends via port 442. To receive real client IPs, I've set hostNetwork: true, enable-real-ip: true, use-forwarded-headers: true, compute-full-forwarded-for: true, and proxy-real-ip-cidr: <lan-cidr>, but $remote_addr is always 127.0.0.1 because of the local 443->442 hop.

Only when use-proxy-protocol: true is set and proxy-real-ip-cidr includes 127.0.0.1/32 do real client IPs appear (with $server_port reported as 442).
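
For clarity, the combination that finally produced real client IPs is roughly the following (a minimal sketch of the relevant controller ConfigMap keys; the full Helm values are further below):

use-proxy-protocol: "true"          # nginx listeners now expect PROXY protocol headers
enable-real-ip: "true"
use-forwarded-headers: "true"
compute-full-forwarded-for: "true"
proxy-real-ip-cidr: "127.0.0.1/32,<lan-cidr>"   # 127.0.0.1/32 covers the internal 443->442 hop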

This is all fine, although not well documented. However, by enabling the PROXY protocol, the nginx processes (on ports 80 and 442, as shown in the listening ports below) now require that both HTTP and HTTPS requests arrive with PROXY protocol headers, which isn't the case for plain HTTP traffic in a local environment without a cloud load balancer in front (I'm running a local RKE2 cluster).

80     nginx
442    nginx
443    nginx-ingress-controller
8181   nginx
8443   nginx-ingress-controller
10245  nginx-ingress-controller
10246  nginx
10247  nginx
10254  nginx-ingress-controller
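
A listing like the one above can be gathered directly on the node (the controller runs with hostNetwork: true), e.g. assuming ss is available:

# show TCP listeners and the owning processes on the host
sudo ss -tlnp | grep -E 'nginx|ingress'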

This asymmetry causes plain HTTP requests to fail with errors like:

[error] 31#31: *3516 broken header: "GET /hello/world HTTP/1.1" while reading PROXY protocol, client: 192.168.0.32, server: 0.0.0.0:80

The client gets an empty response because the request never even makes it to the upstream service.
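
This is trivially reproducible with a plain HTTP request (the hostname below is illustrative):

# no PROXY protocol header is sent, so nginx aborts while "reading PROXY protocol"
curl -v http://hello.example.lan/hello/world
# curl: (52) Empty reply from server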

Even when the primary purpose of handling HTTP requests is simply to return 308 redirects (via the nginx.ingress.kubernetes.io/force-ssl-redirect: true annotation), that is no longer possible.
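
For reference, a minimal sketch of such an Ingress (name, host, and backend are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
  annotations:
    # with PROXY protocol enforced on port 80, this 308 redirect is never served
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: hello.example.lan
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-world
                port:
                  number: 80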

What you expected to happen:

I understand that, with SSL passthrough enabled, the X-Forwarded-For header cannot be added because the TLS traffic is never decrypted, so the PROXY protocol is the only way to pass along client IPs. However, I expected plain HTTP traffic to keep working, even if only to serve 308 redirects. The controller should accept both PROXY-wrapped HTTP requests (e.g. when coming from a cloud load balancer) and plain, non-PROXY requests (local LAN traffic).
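
As far as I can tell, the root cause is that use-proxy-protocol adds the proxy_protocol flag to the generated listen directives, and an nginx listener with that flag cannot also accept plain requests. A simplified sketch (not the literal generated nginx.conf):

server {
    # with use-proxy-protocol: "true", the generated listeners look roughly like this;
    # a listener marked proxy_protocol rejects connections without a PROXY header
    listen 80 proxy_protocol;
    listen 442 proxy_protocol ssl;   # internal target of the SSL-passthrough 443 listener
    ...
}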

NGINX Ingress controller version (exec into the pod and run /nginx-ingress-controller --version):
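
For example:

kubectl -n kube-system exec rke2-ingress-nginx-controller-ptxp2 -- /nginx-ingress-controller --version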

NGINX Ingress controller
  Release:       v1.12.4-hardened2
  Build:         git-d367141db
  Repository:    https://github.com/rancher/ingress-nginx
  nginx version: nginx/1.25.5

Kubernetes version (use kubectl version):

Environment:

  • Cloud provider or hardware configuration: RKE2 v1.32.7+rke2r1
  • OS (e.g. from /etc/os-release): Ubuntu 24.04.3 LTS
  • Kernel (e.g. uname -a): 6.8.0-85-generic
  • Install tools: official RKE2 installation
  • Basic cluster related info:
    • kubectl version: both client+server v1.32.7+rke2r1
kubectl get nodes -o wide
NAME   STATUS   ROLES                              AGE   VERSION          INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s1   Ready    control-plane,etcd,master,worker   8d    v1.32.7+rke2r1   192.168.0.171           Ubuntu 24.04.3 LTS   6.8.0-85-generic   containerd://2.0.5-k3s2
k8s2   Ready    control-plane,etcd,master,worker   8d    v1.32.7+rke2r1   192.168.0.172           Ubuntu 24.04.3 LTS   6.8.0-85-generic   containerd://2.0.5-k3s2
k8s3   Ready    control-plane,etcd,master,worker   8d    v1.32.7+rke2r1   192.168.0.173           Ubuntu 24.04.3 LTS   6.8.0-85-generic   containerd://2.0.5-k3s2
k8s4   Ready    worker                             8d    v1.32.7+rke2r1   192.168.0.174           Ubuntu 24.04.3 LTS   6.8.0-85-generic   containerd://2.0.5-k3s2
k8s5   Ready    worker                             8d    v1.32.7+rke2r1   192.168.0.175           Ubuntu 24.04.3 LTS   6.8.0-85-generic   containerd://2.0.5-k3s2
k8s6   Ready    runner,worker                      8d    v1.32.7+rke2r1   192.168.0.176           Ubuntu 24.04.3 LTS   6.8.0-85-generic   containerd://2.0.5-k3s2
  • How was the ingress-nginx-controller installed:
helm ls -A | grep -i ingress
rke2-ingress-nginx    kube-system    2    2025-10-10 02:40:37.698951469 +0000 UTC deployed    rke2-ingress-nginx-4.12.401
helm -n kube-system get values rke2-ingress-nginx
controller:
  allowSnippetAnnotations: true
  config:
    annotations-risk-level: Critical
    client-body-buffer-size: 256k
    compute-full-forwarded-for: "true"
    enable-brotli: "true"
    enable-real-ip: "true"
    http-snippet: |
      map $http_origin $request_origin {
        default "$scheme://$host";
        "~*" $http_origin;
      }
    keep-alive: "125"
    keep-alive-requests: "50"
    log-format-escape-json: "true"
    log-format-upstream: '{"time_ms": $msec, "remote_addr": "$remote_addr", "user_agent":
      "$http_user_agent", "host": "$http_host", "port": "$server_port", "method":
      "$request_method", "uri": "$request_uri", "protocol": "$server_protocol", "tls":
      "$ssl_protocol", "status": $status, "request_time": $request_time, "request_length":
      $request_length, "body_bytes_sent": $body_bytes_sent, "bytes_sent": $bytes_sent,
      "upstream_addr": "$upstream_addr", "upstream_connect_time": $upstream_connect_time,
      "upstream_response_time": $upstream_response_time}'
    proxy-body-size: 5m
    proxy-buffering: "off"
    proxy-next-upstream: "off"
    proxy-next-upstream-tries: "1"
    proxy-read-timeout: "600"
    proxy-real-ip-cidr: 127.0.0.1/32,192.168.0.0/16,100.64.0.0/10
    proxy-request-buffering: "off"
    proxy-send-timeout: "600"
    use-forwarded-headers: "true"
    use-proxy-protocol: "true"
  dnsPolicy: ClusterFirstWithHostNet
  extraArgs:
    default-ssl-certificate: kube-system/rke2-ingress-nginx-default-tls
  hostNetwork: true
  hostPort:
    enabled: false
  ingressClass: nginx
  ingressClassByName: true
  ingressClassResource:
    controllerValue: k8s.io/ingress-nginx
    default: true
    enabled: true
    name: nginx
  priorityClassName: system-cluster-critical
  resources:
    requests:
      cpu: 25m
      memory: 256Mi
  service:
    annotations:
      metallb.universe.tf/loadBalancerIPs: 192.168.4.222
    enableHttp: true
    enableHttps: true
    enabled: true
    externalTrafficPolicy: Local
  watchIngressWithoutClass: true
global:
  clusterCIDR: 10.42.0.0/16
  clusterCIDRv4: 10.42.0.0/16
  clusterDNS: 10.43.0.10
  clusterDomain: cluster.local
  rke2DataDir: /var/lib/rancher/rke2
  serviceCIDR: 10.43.0.0/16
  systemDefaultIngressClass: ingress-nginx
tcp:
  "2222": gitlab/gitlab-gitlab-shell:2222
udp: {}
  • Current State of the controller:
kubectl describe ingressclasses
Name:         nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=rke2-ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=rke2-ingress-nginx
              app.kubernetes.io/part-of=rke2-ingress-nginx
              app.kubernetes.io/version=1.12.4
              helm.sh/chart=rke2-ingress-nginx-4.12.401
Annotations:  ingressclass.kubernetes.io/is-default-class: true
              meta.helm.sh/release-name: rke2-ingress-nginx
              meta.helm.sh/release-namespace: kube-system
Controller:   k8s.io/ingress-nginx
Events:       
kubectl -n kube-system describe po rke2-ingress-nginx-controller-ptxp2
Name:                 rke2-ingress-nginx-controller-ptxp2
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      rke2-ingress-nginx
Node:                 k8s1/192.168.0.171
Start Time:           Thu, 06 Oct 2025 19:42:00 -0700
Labels:               app.kubernetes.io/component=controller
                      app.kubernetes.io/instance=rke2-ingress-nginx
                      app.kubernetes.io/managed-by=Helm
                      app.kubernetes.io/name=rke2-ingress-nginx
                      app.kubernetes.io/part-of=rke2-ingress-nginx
                      app.kubernetes.io/version=1.12.4
                      controller-revision-hash=8647fb6979
                      helm.sh/chart=rke2-ingress-nginx-4.12.401
                      pod-template-generation=21
Annotations:          cni.projectcalico.org/containerID: 311f36c6b2eb5f3f575a7e990947d51af2c4aac9bb66e4c3254da03dd0b047b4
                      cni.projectcalico.org/podIP: 10.42.0.174/32
                      cni.projectcalico.org/podIPs: 10.42.0.174/32
                      kubectl.kubernetes.io/restartedAt: 2025-10-08T18:00:00-07:00
Status:               Running
IP:                   10.42.0.174
IPs:
  IP:           10.42.0.174
Controlled By:  DaemonSet/rke2-ingress-nginx-controller
Containers:
  rke2-ingress-nginx-controller:
    Container ID:    containerd://76a1e5d07cc6e1e548bd90dba5db0168f2cf2ca565909ab26232753123345936
    Image:           rancher/nginx-ingress-controller:v1.12.4-hardened2
    Image ID:        docker.io/rancher/nginx-ingress-controller@sha256:9c9f9167b373e3cc6a7794f19842185018a1c0fb6007fb1e26aff83ecc8b5a87
    Ports:           80/TCP (http), 443/TCP (https), 8443/TCP (webhook), 2222/TCP (2222-tcp)
    Host Ports:      0/TCP (http), 0/TCP (https), 0/TCP (webhook), 0/TCP (2222-tcp)
    SeccompProfile:  RuntimeDefault
    Args:
      /nginx-ingress-controller
      --election-id=rke2-ingress-nginx-leader
      --controller-class=k8s.io/ingress-nginx
      --ingress-class=nginx
      --configmap=$(POD_NAMESPACE)/rke2-ingress-nginx-controller
      --tcp-services-configmap=$(POD_NAMESPACE)/rke2-ingress-nginx-tcp
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
      --ingress-class-by-name=true
      --watch-ingress-without-class=true
      --default-ssl-certificate=kube-system/rke2-ingress-nginx-default-tls
      --enable-ssl-passthrough
    State:          Running
      Started:      Thu, 06 Oct 2025 19:42:00 -0700
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      25m
      memory:   256Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       rke2-ingress-nginx-controller-ptxp2 (v1:metadata.name)
      POD_NAMESPACE:  kube-system (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mvfhc (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  rke2-ingress-nginx-admission
    Optional:    false
  kube-api-access-mvfhc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                            node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                            node.kubernetes.io/not-ready:NoExecute op=Exists
                            node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                            node.kubernetes.io/unreachable:NoExecute op=Exists
                            node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:                      
kubectl -n kube-system describe svc rke2-ingress-nginx-controller
Name:                     rke2-ingress-nginx-controller
Namespace:                kube-system
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=rke2-ingress-nginx
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=rke2-ingress-nginx
                          app.kubernetes.io/part-of=rke2-ingress-nginx
                          app.kubernetes.io/version=1.12.4
                          helm.sh/chart=rke2-ingress-nginx-4.12.401
Annotations:              field.cattle.io/publicEndpoints:
                            [{"addresses":["192.168.4.222"],"port":80,"protocol":"TCP","serviceName":"kube-system:rke2-ingress-nginx-controller","allNodes":false},{"a...
                          meta.helm.sh/release-name: rke2-ingress-nginx
                          meta.helm.sh/release-namespace: kube-system
                          metallb.io/ip-allocated-from-pool: cluster-pool
                          metallb.universe.tf/loadBalancerIPs: 192.168.4.222
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=rke2-ingress-nginx,app.kubernetes.io/name=rke2-ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.43.126.204
IPs:                      10.43.126.204
LoadBalancer Ingress:     192.168.4.222 (VIP)
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  32453/TCP
Endpoints:                10.42.0.174:80,10.42.3.36:80,10.42.2.86:80 + 3 more...
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  32648/TCP
Endpoints:                10.42.0.174:443,10.42.3.36:443,10.42.2.86:443 + 3 more...
Port:                     2222-tcp  2222/TCP
TargetPort:               2222-tcp/TCP
NodePort:                 2222-tcp  30293/TCP
Endpoints:                10.42.0.174:2222,10.42.3.36:2222,10.42.2.86:2222 + 3 more...
Session Affinity:         None
External Traffic Policy:  Local
Internal Traffic Policy:  Cluster
HealthCheck NodePort:     32166
Events:                   

Labels: kind/bug, needs-priority, needs-triage