
Conversation

@gainsley

Hi, I made and tested these changes to update the Kamaji control plane provider to use the Cluster API v1beta2 APIs. Please see https://cluster-api.sigs.k8s.io/developer/providers/migrations/v1.10-to-v1.11#how-to-implement-the-new-v1beta2-contract.

I thought you might be interested in having them. They are pretty straightforward: functionally nothing has changed, it's just that things have moved around or are accomplished slightly differently.

I have tested this with clusterapi v1.11.3 and metal3 v1.11.0. The changes to RBAC are required for resolving object references (see the link above).
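For context, the RBAC change boils down to kubebuilder markers along these lines, giving the controller read access to the core Cluster API objects it now resolves references to. This is illustrative, not a verbatim copy of the diff:

// Illustrative only: the actual markers in the diff may differ slightly.
// +kubebuilder:rbac:groups=cluster.x-k8s.io,resources=clusters,verbs=get;list;watch
// +kubebuilder:rbac:groups=cluster.x-k8s.io,resources=clusters/status,verbs=get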

These are obviously not backwards compatible for users of clusterapi v1.10 (v1beta1). If you take them, I would suggest keeping separate release branches for clusterapi v1beta1 vs v1beta2, much like clusterapi does itself.

@gainsley
Author

Sorry, I unintentionally added those extra changes to go.mod. I have removed them.

@prometherion
Member

prometherion commented Nov 21, 2025

I haven't yet thanked you for the effort put into addressing v1beta2, @gainsley: it's something we started thinking about with #247, originally raised in #kamaji on the Kubernetes Slack workspace. v1beta2 is definitely something we're aiming for.

These are obviously not backwards compatible for users of clusterapi v1.10 (v1beta1)

I guess you're talking about the retrieval of Cluster objects using the v1beta2 signature: it's something we need to discuss, as we're worried that introducing such a change could decrease Kamaji's adoption by forcing people to upgrade.

@prometherion
Member

Besides the code base import, we also need to implement the required Control Plane contracts:

https://cluster-api.sigs.k8s.io/developer/providers/contracts/control-plane#rules-contract-version-v1beta2
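My rough reading of what that means for the KamajiControlPlane status is sketched below; the Go field, type, and package names here are my assumptions, and the contract page above is authoritative:

// Sketch only: my reading of the v1beta2 control plane contract rules.
// Field, type, and package names are assumptions; the contract page is authoritative.
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// KamajiControlPlaneStatus shows the kind of status fields the v1beta2
// contract expects a control plane provider to report.
type KamajiControlPlaneStatus struct {
	// Initialization replaces the old top-level initialized/ready booleans.
	Initialization KamajiControlPlaneInitialization `json:"initialization,omitempty"`

	// Replica counters reported back to core Cluster API.
	Replicas          *int32 `json:"replicas,omitempty"`
	ReadyReplicas     *int32 `json:"readyReplicas,omitempty"`
	AvailableReplicas *int32 `json:"availableReplicas,omitempty"`
	UpToDateReplicas  *int32 `json:"upToDateReplicas,omitempty"`

	// Version is the lowest Kubernetes version among control plane instances.
	Version *string `json:"version,omitempty"`

	// Conditions use the standard metav1.Condition type under v1beta2.
	Conditions []metav1.Condition `json:"conditions,omitempty"`
}

// KamajiControlPlaneInitialization mirrors the contract's initialization block.
type KamajiControlPlaneInitialization struct {
	// ControlPlaneInitialized signals the control plane API server is reachable.
	ControlPlaneInitialized *bool `json:"controlPlaneInitialized,omitempty"`
}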

@gainsley
Author

I haven't yet thanked you for the effort put into addressing v1beta2, @gainsley: it's something we started thinking about with #247, originally raised in #kamaji on the Kubernetes Slack workspace. v1beta2 is definitely something we're aiming for.

No problem, Kamaji is a great project, glad I can contribute.

These are obviously not backwards compatible for users of clusterapi v1.10 (v1beta1)

I guess you're talking about the retrieval of Cluster objects using the v1beta2 signature: it's something we need to discuss, as we're worried that introducing such a change could decrease Kamaji's adoption by forcing people to upgrade.

What happens now is that if you start a new clusterapi project (like I did), by default you will get clusterapi v1.11.x, which can't use the current Kamaji control plane provider because the provider tries to read a v1beta1.Cluster object that isn't present. That hurts adoption for new users, but your point is valid for existing v1.10.x users who may try to upgrade the Kamaji provider.

This is an interesting problem. I looked at the other control plane providers, and none of them have moved to v1beta2 yet. I also looked at the infra providers AWS, OpenStack, and metal3, which have moved to v1beta2. They apparently don't need to read clusterapi objects (unlike the control plane providers, which need to get information from clusterapi.Cluster), so they don't have as much of a dependency on the clusterapi objects. So there's no precedent yet for how a control plane provider should implement backwards compatibility.

Btw, thanks for pointing out the control plane contracts; I missed that. I went through them, and I'm not exactly sure what the right path forward is. It sounds like clusterapi v1.11 can still support the old v1beta1 contract, but I don't know if it's possible to maintain both contracts in Kamaji at the same time, allowing a single Kamaji provider version to support both clusterapi versions. Otherwise I'd guess you'd need to maintain two branches. You might want to maintain two branches anyway, as metal3 does, even if it's just for stability purposes.

If I have some time I will see if I can implement the v1beta2 contracts alongside the v1beta1 requirements. This would also require being able to support both v1beta1.Cluster and v1beta2.Cluster lookups. If it's possible, that would be great; otherwise I think you'd need to keep separate branches. Users on clusterapi v1.10.x would then need to specify the branch tag when installing the Kamaji control plane provider to get the compatible branch release.
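To make that concrete, here is a minimal sketch of the kind of dual lookup I have in mind. The helper name is hypothetical, it assumes both API versions are registered in the client scheme, and the import paths follow my reading of the v1.11 package layout; double-check them against the migration guide and your go.mod:

// Sketch only: a hypothetical helper, not code from this PR.
package controllers

import (
	"context"

	"k8s.io/apimachinery/pkg/api/meta"
	clusterv1beta1 "sigs.k8s.io/cluster-api/api/core/v1beta1" // deprecated types, served by CAPI v1.10 clusters
	clusterv1 "sigs.k8s.io/cluster-api/api/core/v1beta2"      // served by CAPI v1.11+
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// getCAPICluster fetches the owning Cluster via the v1beta2 API and falls back
// to v1beta1 when the management cluster does not serve the newer version,
// converting only the fields this provider actually reads.
func getCAPICluster(ctx context.Context, c client.Client, key client.ObjectKey) (*clusterv1.Cluster, error) {
	cluster := &clusterv1.Cluster{}
	err := c.Get(ctx, key, cluster)
	if err == nil {
		return cluster, nil
	}
	// Only fall back when the v1beta2 version of cluster.x-k8s.io isn't served.
	if !meta.IsNoMatchError(err) {
		return nil, err
	}

	legacy := &clusterv1beta1.Cluster{}
	if err := c.Get(ctx, key, legacy); err != nil {
		return nil, err
	}

	converted := &clusterv1.Cluster{ObjectMeta: legacy.ObjectMeta}
	converted.Spec.ControlPlaneEndpoint = clusterv1.APIEndpoint{
		Host: legacy.Spec.ControlPlaneEndpoint.Host,
		Port: legacy.Spec.ControlPlaneEndpoint.Port,
	}
	return converted, nil
}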

@prometherion
Member

First, thanks a lot for delving into the discussion.

If I have some time I will see if I can implement the v1beta2 contracts alongside the v1beta1 requirements. This would also require being able to support both v1beta1.Cluster and v1beta2.Cluster lookups

The main issue I see there is that we're going to query the API Server for CAPI v1beta2 resources: these types are available only if you upgrade Cluster API to v1.11 or newer.

Kubernetes is a fast-paced world, something that doesn't fit the enterprise world we're addressing as a company (CLASTIX), despite embracing the Open Source manifesto. tl;dr: we can't slow down development for the whole adopter base; if some adopters can't upgrade to a higher CAPI version, they can engage with us in maintaining an updated fork with the CAPI v1beta1 contract. I see this as a legitimate move: maintaining backward compatibility would require a lot of engineering hours, and I think it's a fair trade-off.

Unless @bsctl has a different opinion, I would go all-in with the v1beta2 switch starting from the next Kubernetes release: CLASTIX customers not yet ready to upgrade will be provided with an LTS version to avoid forcing them to upgrade to a newer CAPI.

This means we're ready to get this merged: there are still some missing contracts, but it's definitely a good starting point.

@hrak

hrak commented Dec 9, 2025

First, thanks a lot for delving into the discussion.

If I have some time I will see if I can implement the v1beta2 contracts alongside the v1beta1 requirements. This would also require being able to support both v1beta1.Cluster and v1beta2.Cluster lookups

The main issue I see there is that we're going to query the API Server for CAPI v1beta2 resources: these types are available only if you upgrade Cluster API to v1.11 or newer.

Cluster API v1.11 contains a CRD migrator that will automatically convert any existing v1beta1 CAPI objects to v1beta2, so there's no need to worry about querying only for v1beta2 Cluster objects. We had the same worries and just tested this; here's a small log excerpt of Machine objects being converted:

I1209 09:01:28.929289       1 crd_migrator.go:371] "Running storage version migration to apiVersion v1beta2 (for 40 objects)" controller="crdmigrator" controllerGroup="apiextensions.k8s.io" controllerKind="CustomResourceDefinition" CustomResourceDefinition="machines.cluster.x-k8s.io" namespace="" name="machines.cluster.x-k8s.io" reconcileID="4f1c0a7a-dac5-4331-bb9c-6e6d5043b2ec"
I1209 09:01:36.329423       1 crd_migrator.go:440] "Running managedField cleanup (for 40 objects)" controller="crdmigrator" controllerGroup="apiextensions.k8s.io" controllerKind="CustomResourceDefinition" CustomResourceDefinition="machines.cluster.x-k8s.io" namespace="" name="machines.cluster.x-k8s.io" reconcileID="4f1c0a7a-dac5-4331-bb9c-6e6d5043b2ec"

@prometherion
Member

@gainsley thanks for the further confirmation, even though I'm more worried about the conditions for non-core CAPI objects.

prometherion changed the title from "clusterapi v1beta2 api changes" to "feat: first capi v1beta2 api support" on Dec 9, 2025