OCPBUGS-75869: kubelet: Less aggressive low memory reservation #5716

Merged
sdodson merged 1 commit into openshift:main from sdodson:OCPBUGS-75869
Mar 10, 2026

Conversation

@sdodson
Member

@sdodson sdodson commented Feb 27, 2026

Out of the box, a standard OpenShift worker has about 3000 MiB of unevictable workload. Thus, when we reserve 2GiB on an 8GiB instance, that node will not autoscale down because it never drops below the 50% usage threshold.
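
To make the threshold arithmetic concrete, here is a minimal sketch, assuming the cluster autoscaler's default scale-down utilization threshold of 0.5 (requests divided by allocatable):

```python
# Rough utilization math for an 8GiB worker carrying ~3000 MiB of
# unevictable workload (figures taken from this PR description).
UNEVICTABLE_MIB = 3000

for reserved_gib in (2, 1):  # old vs. new system reservation
    allocatable_mib = (8 - reserved_gib) * 1024
    utilization = UNEVICTABLE_MIB / allocatable_mib
    print(f"reserved={reserved_gib}GiB -> utilization {utilization:.0%}")

# reserved=2GiB -> utilization 49%  (hovering at the 50% scale-down threshold)
# reserved=1GiB -> utilization 42%  (comfortably below it)
```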

Therefore, let's reduce the system-reserved memory at the lowest end. The assumption here is that nodes this small are less likely to run the full 250 pods and actually consume the full set of resources. We should make sure that this aligns with our understanding of the problem we're trying to solve by enabling dynamic resource reservation in the first place, which I believe is that massive nodes were getting only 1GiB of reserved memory despite running hundreds of pods.

Here's the difference in memory reservation at common sizes:

| Total (GiB) | Old Reserved (GiB) | New Reserved (GiB) |
| ----------- | ------------------ | ------------------ |
| 8           | 2                  | 1                  |
| 16          | 3                  | 1.48               |
| 32          | 4                  | 2.44               |
| 64          | 5                  | 4.36               |
| 128         | 9                  | 8.2                |
| 256         | 12                 | 10.44              |
| 512         | 17                 | 15.56              |
| 1024        | 27                 | 25.8               |
| 2048        | 48                 | 46.28              |
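
For reference, here is a minimal sketch of the tiered math behind these numbers. The tier boundaries are assumptions inferred from the table, not lifted from the MCO script, and the 128 row hints that the real boundaries differ slightly:

```python
# Sketch of GKE-style tiered memory reservation, inferred from the table
# above. It reproduces every "New Reserved" row except 128 GiB (it yields
# 7.88 vs. the table's 8.2), so treat the boundaries as approximations.

def reserved_gib(total_gib, tiers):
    """Apply each tier's fraction to the slice of memory inside that tier."""
    reserved, remaining = 0.0, total_gib
    for size, fraction in tiers:
        chunk = min(remaining, size)
        reserved += chunk * fraction
        remaining -= chunk
        if remaining <= 0:
            break
    return reserved

# Old behavior ("Old Reserved" is this, rounded to whole GiB).
OLD = [(4, 0.25), (4, 0.20), (8, 0.10), (112, 0.06), (float("inf"), 0.02)]
# New behavior: a flat 1GiB for the first 8GiB (8 * 0.125 = 1), per this PR.
NEW = [(8, 0.125), (112, 0.06), (float("inf"), 0.02)]

for total in (8, 16, 32, 64, 128, 256, 512, 1024, 2048):
    print(total, round(reserved_gib(total, OLD)), reserved_gib(total, NEW))
```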

Fixes OCPBUGS-75869

Please provide the following information:

- What I did
Amended the dynamic system reservation scripts to reserve only 1GiB of the first 8GiB of memory. All other memory reservation logic is left in place; see the table above.

- How to verify it
Launch a cluster with an 8GiB node and review its allocatable memory; it should be 7GiB rather than 6GiB.
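As a quick check, a standard query such as `oc get node <node-name> -o jsonpath='{.status.allocatable.memory}'` (plain oc/kubectl, nothing added by this PR) should report allocatable memory of roughly 7GiB, typically printed in Ki.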

- Description for the changelog
Reduced the dynamic memory reservation (enabled by default for workers in clusters installed on 4.21 or newer) for the first 8GiB of memory to a static 1GiB, which mirrors the old non-dynamic reservation. This slightly reduces every reservation, in each case by less than 2GiB.

@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Feb 27, 2026
@openshift-ci
Contributor

openshift-ci bot commented Feb 27, 2026

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@sdodson sdodson changed the title kubelet: Less aggressive low memory reservation OCPBUGS-75869: kubelet: Less aggressive low memory reservation Feb 27, 2026
@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Feb 27, 2026
@openshift-ci-robot
Contributor

@sdodson: This pull request references Jira Issue OCPBUGS-75869, which is invalid:

  • expected the bug to target the "4.22.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot openshift-ci-robot added the jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. label Feb 27, 2026
@sdodson
Member Author

sdodson commented Feb 27, 2026

/jira refresh

@openshift-ci-robot
Contributor

@sdodson: This pull request references Jira Issue OCPBUGS-75869, which is invalid:

  • expected the bug to target the "4.22.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.


In response to this:

/jira refresh


@sdodson
Member Author

sdodson commented Feb 27, 2026

/jira refresh

@openshift-ci-robot openshift-ci-robot added jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. and removed jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. labels Feb 27, 2026
@openshift-ci-robot
Contributor

@sdodson: This pull request references Jira Issue OCPBUGS-75869, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.22.0) matches configured target version for branch (4.22.0)
  • bug is in the state New, which is one of the valid states (NEW, ASSIGNED, POST)

In response to this:

/jira refresh


@sdodson sdodson marked this pull request as ready for review February 27, 2026 19:50
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Feb 27, 2026
@2uasimojo
Member

/retest-required

@ngopalak-redhat
Contributor

ngopalak-redhat commented Mar 3, 2026

The configuration of the auto-node sizing will be covered as part of a long-running test.

These are additional tasks that can be taken up after this merge:

  • Make this script unit testable
  • Document this so customers know how the allocation happens

@ngopalak-redhat
Contributor

/payload-job periodic-ci-openshift-release-master-ci-4.22-e2e-aws-ovn-techpreview-serial-2of3 periodic-ci-openshift-release-master-ci-4.22-e2e-aws-ovn-techpreview-serial-3of3

Running additional tests that were attempted during the auto-node sizing work

@openshift-ci
Contributor

openshift-ci bot commented Mar 3, 2026

@ngopalak-redhat: trigger 0 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

@ngopalak-redhat
Contributor

/payload-job periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview-serial-2of3 periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview-serial-3of3

@openshift-ci
Contributor

openshift-ci bot commented Mar 3, 2026

@ngopalak-redhat: trigger 2 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview-serial-2of3
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview-serial-3of3

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/a74baa70-1698-11f1-8d41-df7d51334589-0

@ngopalak-redhat
Contributor

/payload-aggregate periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips 10

@openshift-ci
Contributor

openshift-ci bot commented Mar 3, 2026

@ngopalak-redhat: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/1614df30-1699-11f1-9639-0fa330f5b6dd-0

@ngopalak-redhat
Contributor

/test e2e-aws-mco-disruptive

@ngopalak-redhat
Contributor

/payload-job periodic-ci-openshift-machine-config-operator-release-4.22-periodics-e2e-aws-mco-disruptive-techpreview-1of2

@openshift-ci
Contributor

openshift-ci bot commented Mar 3, 2026

@ngopalak-redhat: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-machine-config-operator-release-4.22-periodics-e2e-aws-mco-disruptive-techpreview-1of2

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/d3bf6d80-16ac-11f1-86d3-185bd84fe8af-0

@ngopalak-redhat
Contributor

/payload-aggregate periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips 1

@openshift-ci
Contributor

openshift-ci bot commented Mar 3, 2026

@ngopalak-redhat: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/4b9bfa70-16ae-11f1-9ae2-ecfe0ac2f325-0

@haircommander
Member

/retest
/lgtm
/approve

A follow-up: we should document the history of this, and in the future investigate giving the memory reservation a relationship to max pods (or some other variable).
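
Purely to illustrate that follow-up idea, a sketch in which the reservation scales with maxPods; the function and constants are hypothetical, not something this PR or the MCO implements:

```python
# Hypothetical: tie memory reservation to the node's maxPods setting instead
# of (or in addition to) total memory. Constants invented for illustration.
def reserved_gib(max_pods: int, base_gib: float = 1.0,
                 per_pod_mib: float = 10.0) -> float:
    return base_gib + (max_pods * per_pod_mib) / 1024

print(reserved_gib(250))  # ~3.44 GiB at OpenShift's default of 250 pods
print(reserved_gib(50))   # ~1.49 GiB on a small node running few pods
```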

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Mar 10, 2026
@eggfoobar
Contributor

/payload-job periodic-ci-openshift-release-main-ci-4.22-e2e-aws-upgrade-ovn-single-node periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-arbiter-upgrade

@openshift-ci
Contributor

openshift-ci bot commented Mar 10, 2026

@eggfoobar: trigger 2 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-upgrade-ovn-single-node
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-arbiter-upgrade

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/13b709a0-1c9a-11f1-80c1-e025692eaaad-0

@eggfoobar
Contributor

/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node-workers

@openshift-ci
Contributor

openshift-ci bot commented Mar 10, 2026

@eggfoobar: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node-workers

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/3a7a2220-1c9a-11f1-96e0-925662212127-0

@sdodson sdodson added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 10, 2026
@openshift-ci
Contributor

openshift-ci bot commented Mar 10, 2026

[APPROVALNOTIFIER] This PR is APPROVED

Approval requirements bypassed by manually added approval.

This pull-request has been approved by: haircommander, sdodson

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@sdodson
Member Author

sdodson commented Mar 10, 2026

/verified by CI
camgi indicates the slight increase in allocatable memory, as expected

@openshift-ci-robot openshift-ci-robot added the verified Signifies that the PR passed pre-merge verification criteria label Mar 10, 2026
@openshift-ci-robot
Contributor

@sdodson: This PR has been marked as verified by CI.


In response to this:

/verified by CI
camgi indicates the slight increase in allocatable memory as expected


@sdodson
Member Author

sdodson commented Mar 10, 2026

/cherry-pick release-4.21

@openshift-cherrypick-robot

@sdodson: once the present PR merges, I will cherry-pick it on top of release-4.21 in a new PR and assign it to you.


In response to this:

/cherry-pick release-4.21


@sdodson sdodson merged commit 3c7a1fc into openshift:main Mar 10, 2026
10 of 19 checks passed
@openshift-ci-robot
Contributor

@sdodson: Jira Issue Verification Checks: Jira Issue OCPBUGS-75869
✔️ This pull request was pre-merge verified.
✔️ All associated pull requests have merged.
✔️ All associated, merged pull requests were pre-merge verified.

Jira Issue OCPBUGS-75869 has been moved to the MODIFIED state and will move to the VERIFIED state when the change is available in an accepted nightly payload. 🕓


@openshift-ci
Contributor

openshift-ci bot commented Mar 10, 2026

@sdodson: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --------- | ------ | ------- | -------- | ------------- |
| ci/prow/e2e-aws-mco-disruptive | 3a1921c | link | false | /test e2e-aws-mco-disruptive |
| ci/prow/e2e-gcp-op-ocl | 3a1921c | link | false | /test e2e-gcp-op-ocl |

Full PR test history. Your PR dashboard.


@openshift-cherrypick-robot

@sdodson: new pull request created: #5756


In response to this:

/cherry-pick release-4.21


@openshift-merge-robot
Contributor

Fix included in accepted release 4.22.0-0.nightly-2026-03-13-065313
