
Conversation

@alanconway
Contributor

Description

"This guide explains how to handle scenarios where high-volume logging can cause log loss in
OpenShift clusters, and how to configure your cluster to minimize this risk."

/cc @xperimental
/cc @cahartma
/assign @jcantrill

Links

doc: Article on high volume log loss.

"This guide explains how to handle scenarios where high-volume logging can cause log loss in
OpenShift clusters, and how to configure your cluster to minimize this risk.""

@openshift-ci
Contributor

openshift-ci bot commented Dec 4, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: alanconway

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

The openshift-ci bot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on Dec 4, 2025.
Contributor

@jcantrill left a comment


Overall, I believe this needs changes: the focus should be on tuning container log settings, not on expanding the capacity of /var/log.

@jcantrill
Contributor

/hold

The openshift-ci bot added the do-not-merge/hold label (indicates that a PR should not merge because someone has issued a /hold command) on Dec 11, 2025.
@alanconway
Contributor Author

@jcantrill good feedback. I'm doing a rewrite to clarify the focus on rotation parameters rather than /var/log size as you suggested. Will have new version shortly.
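For readers following the thread: on OpenShift, container log rotation is tuned through a KubeletConfig resource rather than by resizing /var/log. A minimal sketch of the kind of change under discussion (the name, pool selector, and values here are illustrative, not taken from this PR or the article):

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: worker-log-rotation          # illustrative name, not from the PR
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  kubeletConfig:
    containerLogMaxSize: 50Mi        # max size of a container log file before rotation
    containerLogMaxFiles: 5          # rotated files kept per container
```

Raising containerLogMaxSize gives a slow collector more runway before old log files are rotated away, which is the usual lever for high-volume log loss.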

@alanconway force-pushed the high-volume-article branch 4 times, most recently from f4fc421 to 124cb75, on December 19, 2025 16:19.
@alanconway
Contributor Author

/lgtm
/unhold

@openshift-ci
Contributor

openshift-ci bot commented Dec 19, 2025

@alanconway: you cannot LGTM your own PR.


In response to this:

/lgtm
/unhold

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

The openshift-ci bot removed the do-not-merge/hold label (indicates that a PR should not merge because someone has issued a /hold command) on Dec 19, 2025.
@alanconway
Contributor Author

@jcantrill please re-review, I think I've addressed your comments.

@alanconway force-pushed the high-volume-article branch from 124cb75 to 9e5e077 on January 7, 2026 14:56.
@alanconway
Contributor Author

@jcantrill made changes to "Recommendations" to clarify your points - definite improvement.
Cosmetic changes to other sections.
Let me know how it reads to you

@openshift-ci
Contributor

openshift-ci bot commented Jan 7, 2026

@alanconway: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: ci/prow/images
Commit: 9e5e077
Required: true
Rerun command: /test images

Full PR test history. Your PR dashboard.




Labels

approved (indicates a PR has been approved by an approver from all required OWNERS files)
release/6.4
