
Conversation

@ggwpez (Member) commented Nov 14, 2025

Reported here https://hackmd.io/@JjziWrpMQ2OeBtz99n7JXg/ByqF7Av0ge. Change:

  • Clean up old LastHrmpMqcHeads entries when the corresponding channel was removed from relay-chain (RC) state (see the sketch below)
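
For context, a minimal sketch of what such a cleanup can look like. The type names are hypothetical stand-ins for the real `cumulus_primitives_core` types, and the `retain`-based shape is an illustration of the idea, not necessarily the exact code in this PR:

```rust
use std::collections::BTreeMap;

// Hypothetical stand-ins for the real `cumulus_primitives_core` types,
// just to keep the sketch self-contained.
type ParaId = u32;
struct MessageQueueChain;
struct AbridgedHrmpChannel;

/// Drop stored MQC heads whose sender no longer has an ingress channel
/// in the relay-chain state. `ingress_channels` is sorted by `ParaId`.
fn prune_stale_mqc_heads(
    ingress_channels: &[(ParaId, AbridgedHrmpChannel)],
    mqc_heads: &mut BTreeMap<ParaId, MessageQueueChain>,
) {
    // O(N * lg N): one binary search over the sorted channel list
    // per stored head.
    mqc_heads.retain(|para_id, _| {
        ingress_channels
            .binary_search_by_key(para_id, |(id, _)| *id)
            .is_ok()
    });
}
```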

@ggwpez added the T9-cumulus ("This PR/Issue is related to cumulus.") label Nov 14, 2025
Signed-off-by: Oliver Tale-Yazdi <[email protected]>
```rust
    ingress_channels: &[(ParaId, cumulus_primitives_core::AbridgedHrmpChannel)],
    mqc_heads: &mut BTreeMap<ParaId, MessageQueueChain>,
) {
    // Complexity is O(N * lg N) but could be optimized for O(N)
```

Contributor:
Could you expand on how this could be done?

@ggwpez (Member, Author):

The MQC heads and the ingress channels are both sorted. So instead of calling binary search for every entry, you could keep a single index into the channel list and increment it as you walk the heads. Although, just writing this out, the increment would possibly need to happen multiple times per entry (a loop).
Not sure if it's worth it in practice, hence I used the trivial approach here.
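
For reference, a minimal sketch of that O(N) variant, using the same hypothetical placeholder types as the sketch above and relying on the fact that `BTreeMap::retain` visits keys in ascending order:

```rust
use std::collections::BTreeMap;

// Same hypothetical placeholder types as in the sketch above.
type ParaId = u32;
struct MessageQueueChain;
struct AbridgedHrmpChannel;

/// O(N) variant: `BTreeMap::retain` visits the heads in ascending key
/// order and `ingress_channels` is sorted by `ParaId`, so one shared
/// cursor over the channel list replaces the per-entry binary search.
fn prune_stale_mqc_heads_linear(
    ingress_channels: &[(ParaId, AbridgedHrmpChannel)],
    mqc_heads: &mut BTreeMap<ParaId, MessageQueueChain>,
) {
    let mut i = 0;
    mqc_heads.retain(|para_id, _| {
        // Advance the cursor past channels with smaller ids; this is the
        // "increment would need to happen multiple times" loop mentioned
        // in the thread.
        while i < ingress_channels.len() && ingress_channels[i].0 < *para_id {
            i += 1;
        }
        // Keep the head only if a channel with exactly this id remains.
        i < ingress_channels.len() && ingress_channels[i].0 == *para_id
    });
}
```

Each `retain` step either advances the cursor or moves on to the next head, so the whole pass is linear in the combined size of the two collections.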

Contributor:

I meant: can you write this down in the comment? :D

It sounds easy to implement. But anyway, yes, it probably won't bring a big performance benefit, so we can also leave it as it is for the moment.

@paritytech-workflow-stopper:

All GitHub workflows were cancelled due to the failure of one of the required jobs.
Failed workflow url: https://github.com/paritytech/polkadot-sdk/actions/runs/19428915920
Failed job name: cargo-clippy
