
OCPBUGS-75200: Configure kubelet GracefulNodeShutdown#5708

Open
saschagrunert wants to merge 1 commit into openshift:main from saschagrunert:kubelet-shutdown-grace-period

Conversation

@saschagrunert
Member

@saschagrunert saschagrunert commented Feb 26, 2026

- What I did

OpenShift never configures kubelet's shutdownGracePeriod in KubeletConfiguration, making GracefulNodeShutdown a no-op. During MCO-triggered reboots:

  1. MCO drains regular pods (static pods like kube-apiserver are skipped)
  2. MCO calls systemctl reboot via logind
  3. Kubelet exits immediately on SIGTERM without terminating pods
  4. kube-apiserver needs up to 194s for graceful shutdown (platform-dependent) but systemd's DefaultTimeoutStopSec is 90s
  5. kube-apiserver gets SIGKILLed, watch-termination detects non-graceful termination

This is a latent bug exposed by new MCO changes that introduce additional master reboots.

This PR configures shutdownGracePeriod and shutdownGracePeriodCriticalPods in the kubelet templates:

  • Master/arbiter: 270s total, 240s for critical pods (covers the worst-case kube-apiserver terminationGracePeriodSeconds of 194s on AWS with headroom)
  • Worker: 90s total, 60s for critical pods
  • SNO: disabled

Kubelet automatically overrides logind's InhibitDelayMaxSec and acquires a delay inhibitor lock so that systemctl reboot waits for graceful pod termination.
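A sketch of how the values above could land in the kubelet.yaml templates; the template condition and variable name are assumptions inferred from the PR summary ("conditionally applied when control plane topology is not SingleReplica"), not copied from the actual diff:

```yaml
# Hypothetical KubeletConfiguration fragment in the master/arbiter template;
# the SingleReplica guard mirrors the "SNO: disabled" bullet above.
{{ if ne .ControlPlaneTopology "SingleReplica" -}}
shutdownGracePeriod: 270s
shutdownGracePeriodCriticalPods: 240s
{{- end }}
```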

- How to verify it

  • go test ./pkg/controller/kubelet-config/... -run TestShutdownGracePeriod -v
  • go test ./pkg/controller/template/... -v
  • Deploy to a cluster and verify kubelet config on nodes contains shutdownGracePeriod
  • Run [sig-api-machinery][Feature:APIServer][Late] kubelet terminates kube-apiserver gracefully extended test
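The node-level check from the list above can be sketched locally; the config path /etc/kubernetes/kubelet.conf (normally inspected via `oc debug node/<node>`) is an assumption about a typical MCO-managed node, so this uses a stand-in file:

```shell
# Simulated check of a rendered kubelet config for the new fields.
# On a real node the file would be /etc/kubernetes/kubelet.conf (assumed path).
cat > /tmp/kubelet.conf <<'EOF'
kind: KubeletConfiguration
shutdownGracePeriod: 270s
shutdownGracePeriodCriticalPods: 240s
EOF
grep -c 'shutdownGracePeriod' /tmp/kubelet.conf   # both fields match -> 2
```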

- Payload test results

| Test | Result | Details |
|---|---|---|
| AWS e2e (e2e-aws-ovn) | PASS | kubelet terminates kube-apiserver gracefully + extended both passed |
| GCP e2e (e2e-gcp-ovn) | PASS | kubelet terminates kube-apiserver gracefully + extended both passed |
| OCL (e2e-aws-ovn-ocl) | INFRA FAILURE | Unrelated test harness bug: oc wait --for=create not supported |

- Description for the changelog

Configure kubelet GracefulNodeShutdown with generous grace periods to prevent kube-apiserver from being SIGKILLed during node reboots.

Summary by CodeRabbit

  • Tests

    • Added comprehensive test coverage validating shutdown grace period configuration for kubelet across multiple platform scenarios and node roles.
  • New Features

    • Kubelet graceful shutdown settings now configured for multi-replica control plane topologies on master, arbiter, and worker nodes, with configuration excluded for single-replica setups.

@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Feb 26, 2026
@openshift-ci-robot openshift-ci-robot added jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. labels Feb 26, 2026
@openshift-ci-robot
Contributor

@saschagrunert: This pull request references Jira Issue OCPBUGS-75200, which is invalid:

  • expected the bug to target the "4.22.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

Details

In response to this:

- What I did

Configured kubelet's GracefulNodeShutdown by setting shutdownGracePeriod and shutdownGracePeriodCriticalPods in the kubelet configuration templates for master, worker, and arbiter roles.

Without these settings, kubelet exits immediately on SIGTERM during MCO-triggered node reboots without terminating pods. This causes kube-apiserver to be SIGKILLed when its graceful shutdown period exceeds systemd's default 90s DefaultTimeoutStopSec.

The values are platform-aware to match kube-apiserver's terminationGracePeriodSeconds:

  • Master/arbiter: AWS 235s/205s, GCP 200s/170s, default 175s/145s
  • Worker: 90s/60s
  • SNO (SingleReplica): disabled

- How to verify it

  • go test ./pkg/controller/kubelet-config/... -run TestShutdownGracePeriod -v
  • go test ./pkg/controller/template/... -v
  • Deploy to a cluster and verify kubelet config on nodes contains shutdownGracePeriod
  • Run [sig-api-machinery][Feature:APIServer][Late] kubelet terminates kube-apiserver gracefully extended test

- Description for the changelog

Configure kubelet GracefulNodeShutdown with platform-aware grace periods to prevent kube-apiserver from being SIGKILLed during node reboots.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot openshift-ci-robot added jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. and removed jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. labels Feb 26, 2026
@openshift-ci-robot
Contributor

@saschagrunert: This pull request references Jira Issue OCPBUGS-75200, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.22.0) matches configured target version for branch (4.22.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, POST)
Details

In response to this:

/jira refresh


@openshift-ci-robot
Contributor

@saschagrunert: This pull request references Jira Issue OCPBUGS-75200, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.22.0) matches configured target version for branch (4.22.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

@saschagrunert saschagrunert changed the title WIP: OCPBUGS-75200: Configure kubelet GracefulNodeShutdown OCPBUGS-75200: Configure kubelet GracefulNodeShutdown Feb 26, 2026
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Feb 26, 2026
@openshift-ci-robot
Contributor

@saschagrunert: This pull request references Jira Issue OCPBUGS-75200, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.22.0) matches configured target version for branch (4.22.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

The bug has been updated to refer to the pull request using the external bug tracker.


@saschagrunert saschagrunert force-pushed the kubelet-shutdown-grace-period branch from ddaebae to 3e7e032 Compare February 26, 2026 10:35
@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Feb 26, 2026
@openshift-ci
Contributor

openshift-ci bot commented Feb 26, 2026

@saschagrunert: trigger 2 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn
  • periodic-ci-openshift-release-main-ci-4.22-e2e-gcp-ovn

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/cfe1d6f0-1303-11f1-83e3-11d85e3b80d8-0

@openshift-ci
Contributor

openshift-ci bot commented Feb 26, 2026

@saschagrunert: trigger 2 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn
  • periodic-ci-openshift-release-main-ci-4.22-e2e-gcp-ovn

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/1c16a8c0-1304-11f1-9ce3-b8000e827635-0

@openshift-ci
Contributor

openshift-ci bot commented Feb 26, 2026

@saschagrunert: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-machine-config-operator-release-4.22-periodics-e2e-aws-ovn-ocl

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/759a6550-1320-11f1-828e-df656025dabc-0

@openshift-ci
Contributor

openshift-ci bot commented Feb 26, 2026

@saschagrunert: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-machine-config-operator-release-4.22-periodics-e2e-aws-ovn-ocl

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/5044fab0-132d-11f1-8357-d7060a5fcd59-0

@openshift-ci openshift-ci bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Mar 2, 2026
@openshift-ci-robot
Contributor

@saschagrunert: This pull request references Jira Issue OCPBUGS-75200, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.22.0) matches configured target version for branch (4.22.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, POST)

@openshift-ci
Contributor

openshift-ci bot commented Mar 2, 2026

@saschagrunert: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-machine-config-operator-release-4.22-periodics-e2e-aws-ovn-ocl

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/4d6b6c48-1611-11f1-906d-d05b4787918c-0

@openshift-ci
Contributor

openshift-ci bot commented Mar 2, 2026

@saschagrunert: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-machine-config-operator-release-4.22-periodics-e2e-aws-ovn-ocl

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/4eef332e-1611-11f1-904a-755af07f4b25-0

@openshift-ci
Contributor

openshift-ci bot commented Mar 2, 2026

@saschagrunert: trigger 2 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn
  • periodic-ci-openshift-release-main-ci-4.22-e2e-gcp-ovn

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/55c864cc-1611-11f1-88c4-0219bf41a9c9-0

@saschagrunert
Member Author

@cheesesashimi @dkhater-redhat PTAL for approval

@saschagrunert
Member Author

@ngopalak-redhat @eggfoobar PTAL for review

@ngopalak-redhat
Contributor

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Mar 2, 2026
@saschagrunert
Member Author

@haircommander @rphillips PTAL

@haircommander
Member

/hold

We hit issues with this in the past, so we may want to make sure those are resolved. I fear they're fundamental to the current state of GNS, and we may need to wait for eviction requests to move forward first.

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Mar 5, 2026
@saschagrunert
Member Author

saschagrunert commented Mar 6, 2026

I looked into alternatives to GNS for solving this. Looks like there is not much of an alternative.

Without shutdownGracePeriod configured, kubelet's shutdown manager is a no-op. It exits immediately on SIGTERM without terminating pods, regardless of signal source (systemctl reboot, systemctl stop kubelet, etc.). Every alternative runs into this:

| Alternative | Problem |
|---|---|
| Increase kubelet TimeoutStopSec | Kubelet still exits immediately on SIGTERM. More time doesn't help. |
| Increase logind InhibitDelayMaxSec | No effect without GNS holding an inhibitor lock. |
| Re-add systemctl stop kubelet before reboot | Same issue. Without shutdownGracePeriod, kubelet doesn't terminate pods on stop either. |
| MCO pre-reboot static pod handling | Would fight with cluster-kube-apiserver-operator. Very invasive. |
| Wait for KEP-4563 (Evacuation API) | Implementation PR went stale. Not viable for 4.22. |

GNS was previously turned off due to test failures around networking DaemonSet/static pods. The key question is whether those issues still exist in current kubelet/CRI-O versions. A few things that limit the blast radius of this PR:

  • MCO already drains regular pods (including DS pods) before reboot on multi-node clusters. GNS only acts as a post-drain safety net for static pods.
  • SNO is explicitly excluded (SingleReplica disables GNS).
  • Payload tests passed on both AWS and GCP e2e, including the kubelet terminates kube-apiserver gracefully test variants.
  • The PR uses the simple two-bucket model (shutdownGracePeriod / shutdownGracePeriodCriticalPods), avoiding the priority threshold ordering bugs (k/k#113940).

It would help to understand which specific networking DS/static pod failures were seen before, so we can confirm they're resolved. If those are reproducible with this PR, we'd need to look into them.

As a possible follow-up we could also add TimeoutStopSec=300s to kubelet.service as defense-in-depth. If GNS fails to acquire the inhibitor lock, systemd still SIGKILLs kubelet at the default 90s.
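That defense-in-depth follow-up could take the form of a standard systemd drop-in; this is a hypothetical sketch following systemd conventions, not part of this PR:

```ini
# /etc/systemd/system/kubelet.service.d/10-stop-timeout.conf (hypothetical)
[Service]
# Allow more than DefaultTimeoutStopSec (90s) before systemd escalates
# from SIGTERM to SIGKILL, covering the 270s graceful shutdown budget.
TimeoutStopSec=300s
```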

@kannon92
Contributor

kannon92 commented Mar 6, 2026

Have you checked why these two jobs are failing on GCP?

Enable kubelet's GracefulNodeShutdown by setting shutdownGracePeriod and
shutdownGracePeriodCriticalPods in the kubelet configuration templates.
Without these settings, kubelet exits immediately on SIGTERM during node
reboots without terminating pods, causing kube-apiserver to be SIGKILLed
when its graceful shutdown exceeds systemd's 90s timeout.

Values:
- Master/arbiter: 270s total, 240s for critical pods
- Worker: 90s total, 60s for critical pods
- SNO: disabled (MCO skips drain and uses short grace periods)

The 240s critical pod budget provides sufficient headroom above the
longest kube-apiserver terminationGracePeriodSeconds (194s on AWS)
without requiring platform-specific logic.

Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
@saschagrunert saschagrunert force-pushed the kubelet-shutdown-grace-period branch from 3e7e032 to 21c5da8 Compare March 9, 2026 11:31
@openshift-ci openshift-ci bot removed the lgtm Indicates that a PR is ready to be merged. label Mar 9, 2026
@coderabbitai

coderabbitai bot commented Mar 9, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 4ab5abfa-e9d3-47cb-98d4-3d1fa96a64c0

📥 Commits

Reviewing files that changed from the base of the PR and between d5dfdcf and 21c5da8.

📒 Files selected for processing (4)
  • pkg/controller/kubelet-config/kubelet_config_shutdown_test.go
  • templates/arbiter/01-arbiter-kubelet/_base/files/kubelet.yaml
  • templates/master/01-master-kubelet/_base/files/kubelet.yaml
  • templates/worker/01-worker-kubelet/_base/files/kubelet.yaml

Walkthrough

This PR adds shutdown grace period configuration to kubelet across master, worker, and arbiter node roles. The settings (shutdownGracePeriod: 270s and shutdownGracePeriodCriticalPods: 240s) are conditionally applied when control plane topology is not SingleReplica. A new test validates these configurations across multiple platform and node role scenarios.

Changes

Cohort / File(s) Summary
Kubelet Template Configuration
templates/master/01-master-kubelet/_base/files/kubelet.yaml, templates/worker/01-worker-kubelet/_base/files/kubelet.yaml, templates/arbiter/01-arbiter-kubelet/_base/files/kubelet.yaml
Added conditional template blocks to inject shutdownGracePeriod (270s) and shutdownGracePeriodCriticalPods (240s) when ControlPlaneTopology is not SingleReplica.
Shutdown Grace Period Test
pkg/controller/kubelet-config/kubelet_config_shutdown_test.go
New table-driven test validating shutdownGracePeriod and shutdownGracePeriodCriticalPods configuration across AWS, GCP, and None platforms for master, arbiter, and worker roles, including SNO scenarios.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~15 minutes

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title accurately describes the main change: configuring kubelet GracefulNodeShutdown by setting shutdownGracePeriod and shutdownGracePeriodCriticalPods across kubelet templates. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |
| Stable And Deterministic Test Names | ✅ Passed | All test case names are static and deterministic with no dynamic values, properly descriptive, and follow best practices by separating configuration details from titles. |
| Test Structure And Quality | ✅ Passed | Test adheres to quality principles: single responsibility per case via t.Run subtests, proper setup with fixtures, meaningful assertion messages with context, follows codebase conventions, and appropriate error handling. |


Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.5.0)

Error: can't load config: unsupported version of the configuration: "" (see https://golangci-lint.run/docs/product/migration-guide for migration instructions). The command was terminated due to this error.


@openshift-ci-robot
Contributor

@saschagrunert: This pull request references Jira Issue OCPBUGS-75200, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.22.0) matches configured target version for branch (4.22.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

@ngopalak-redhat
Contributor

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Mar 9, 2026
@openshift-ci
Contributor

openshift-ci bot commented Mar 9, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: ngopalak-redhat, saschagrunert
Once this PR has been reviewed and has the lgtm label, please assign dkhater-redhat for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci
Contributor

openshift-ci bot commented Mar 9, 2026

@damdo: This PR was included in a payload test run from openshift/cluster-machine-approver#295
trigger 0 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

@saschagrunert
Member Author

/retest

@openshift-ci
Contributor

openshift-ci bot commented Mar 11, 2026

@saschagrunert: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| ci/prow/e2e-hypershift | 21c5da8 | link | true | /test e2e-hypershift |
| ci/prow/e2e-gcp-op-part1 | 21c5da8 | link | true | /test e2e-gcp-op-part1 |
| ci/prow/e2e-gcp-op-part2 | 21c5da8 | link | true | /test e2e-gcp-op-part2 |

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.


Labels

do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. lgtm Indicates that a PR is ready to be merged.
