tide: fallback to individual PR merge in batch failures #538
base: main
```diff
@@ -1959,6 +1959,17 @@ func testTakeAction(clients localgit.Clients, t *testing.T) {
 			action: Trigger,
 			enableScheduling: true,
 		},
+		{
+			name: "batch merge fails, falls back to individual PR merge (issue #474)",
```
Contributor:

Isn't this what the "batch merge errors but continues if a PR is unmergeable" test on L1899 is supposed to test?

Member (Author):

That is a situation where something was merged (merged: 2), but the bug I am trying to solve is that nothing got merged.
```diff
+			batchMerges: []int{0, 1},
+			mergeErrs: map[int]error{
+				0: github.UnmergablePRError("merge conflict"),
+				1: github.UnmergablePRError("merge conflict"),
+			},
+			merged: 0,
+			triggered: 0,
+			action: Merge,
+		},
 	}

 	for _, tc := range testcases {
```
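
The new test case encodes the scenario from issue #474: every PR in the batch fails to merge with an `UnmergablePRError`, nothing gets merged, and the expected action is an individual `Merge` rather than staying on the batch. The following is only a minimal, self-contained sketch of that fallback idea; the function and type names are illustrative stand-ins, not Tide's actual API.

```go
// Illustrative sketch only: not Tide's real merge code.
package main

import (
	"errors"
	"fmt"
)

type PullRequest struct{ Number int }

// tryMerge mimics a single GitHub merge call; mergeErrs plays the role of the
// test's mergeErrs map (a nil entry means the merge succeeds).
func tryMerge(pr PullRequest, mergeErrs map[int]error) error {
	return mergeErrs[pr.Number]
}

// mergeWithFallback attempts the batch first; if nothing merged, it retries
// the pool PRs one by one instead of staying stuck on the same batch job.
func mergeWithFallback(batch, pool []PullRequest, mergeErrs map[int]error) int {
	merged := 0
	for _, pr := range batch {
		if tryMerge(pr, mergeErrs) == nil {
			merged++
		}
	}
	if merged == 0 {
		for _, pr := range pool {
			if tryMerge(pr, mergeErrs) == nil {
				merged++
			}
		}
	}
	return merged
}

func main() {
	batch := []PullRequest{{Number: 0}, {Number: 1}}
	mergeErrs := map[int]error{
		0: errors.New("merge conflict"),
		1: errors.New("merge conflict"),
	}
	// Both batch PRs fail to merge, so the fallback runs; here it also fails,
	// matching the test expectation of merged: 0.
	fmt.Println("merged:", mergeWithFallback(batch, batch, mergeErrs))
}
```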

#474 is really weird to me, and I'm really not sure the change proposed in this PR actually helps with anything.

Have you managed to reproduce, or at least really understand, what was actually happening in #474? Tide logs would really help with that; it's a pity we haven't captured them while the problem was happening.

My thinking is: if we are here, then we have at least one successful batch job of a presubmit that is required for merge. That implies that all pull requests involved must be git-mergeable without conflicts (though not necessarily GH-mergeable); otherwise the batch job would fail right at the beginning, when clonerefs merges all involved pulls into the base ref. If that happened, we would not have a successful batch job and we would not get here to try to merge PRs based on that fact.

Additionally, if at least one PR from the batch is GH-mergeable (and therefore, if the fallback single merge on L1448 has any chance of succeeding), it would already have merged. The PRs are merged sequentially, and the code tracks `keepTrying` for merge failures that should only affect that single PR. It only returns "stop trying" on "we do not have permissions to merge" errors and "merge commits are forbidden" errors. Otherwise it continues and tries to merge the other PRs. If none of the PRs merged, it means that either all of them failed to merge individually (so a fallback individual merge has no chance of merging either) or Tide hit one of the hard errors (and again the fallback merge has no chance of merging).

This is all really strange to me, because I remember that we actually saw #474 while it was happening. I just do not understand what the actual failure case is from which the user can recover through individual merges.

I think the idea is sound that, once we have a passing batch job that fails to merge all its PRs, Tide gets stuck on it and does not let any other PR merge (because it will always prefer to act on the passing batch job it sees); I just do not understand the conditions that lead to that.
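
For reference, the sequential merge behavior described above can be sketched roughly like this. The names (`mergePR`, `isHardError`, the error values) are placeholders, not Tide's actual functions or error types around its `keepTrying` handling.

```go
// Rough, self-contained sketch of the loop described in the comment above:
// PRs are merged one by one, soft failures only skip that PR, and only hard
// errors (no merge permission, merge commits forbidden) stop the whole loop.
package main

import (
	"errors"
	"fmt"
)

type PullRequest struct{ Number int }

var (
	errConflict     = errors.New("pull request is not mergeable")
	errNoPermission = errors.New("we do not have permission to merge")
)

// mergePR stands in for the GitHub merge call; here every PR hits a conflict.
func mergePR(pr PullRequest) error { return errConflict }

// isHardError reports whether retrying the remaining PRs is pointless.
func isHardError(err error) bool { return errors.Is(err, errNoPermission) }

func mergePRs(prs []PullRequest) (merged int, err error) {
	for _, pr := range prs {
		mergeErr := mergePR(pr)
		if mergeErr == nil {
			merged++
			continue
		}
		if isHardError(mergeErr) {
			// "Stop trying": e.g. missing permissions or a forbidden merge
			// method; the remaining PRs cannot merge either.
			return merged, mergeErr
		}
		// Soft failure (merge conflict, ...): keep trying the other PRs.
	}
	return merged, nil
}

func main() {
	merged, err := mergePRs([]PullRequest{{0}, {1}})
	fmt.Println(merged, err) // 0 <nil>: every PR failed softly, nothing merged
}
```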

I remember looking at the logs, but I didn't find anything interesting, since it is thousands of lines, mostly Tide queries. I am not even sure any log would show up, as Tide thinks everything is OK.

Here is at least the pool history: https://prow.ci.openshift.org/tide-history?repo=openshift%2Fdpu-operator&branch=main&pull=405. It kept choosing the trigger_batch action? Or was it still the same batch job?

I think it is always choosing the TRIGGER_BATCH action. That would mean that `accumulateBatch()` returns neither a pending nor a successful batch (https://github.com/kubernetes-sigs/prow/blob/main/pkg/tide/tide.go#L944-L946); otherwise `takeAction()` would choose a different action, right?
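
For reference, the priority being discussed could be sketched as below. The string action names and the helper are placeholders, not the actual `tide` package types; the real decision logic lives in `accumulateBatch()`/`takeAction()` in pkg/tide/tide.go.

```go
// Placeholder sketch of the decision order discussed above: a successful batch
// is merged, a pending batch means wait, and only when accumulateBatch yields
// neither does Tide fall through to triggering a new batch.
package main

import "fmt"

type PullRequest struct{ Number int }

func chooseAction(successfulBatch, pendingBatch []PullRequest) string {
	switch {
	case len(successfulBatch) > 0:
		return "MERGE_BATCH" // act on the PRs validated by the passing batch job
	case len(pendingBatch) > 0:
		return "WAIT" // a batch job is still running; do not start another
	default:
		return "TRIGGER_BATCH" // no usable batch, so trigger a new one
	}
}

func main() {
	// If accumulateBatch keeps returning neither a successful nor a pending
	// batch, this always falls through to TRIGGER_BATCH, matching the pool
	// history linked above.
	fmt.Println(chooseAction(nil, nil))
}
```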