2 changes: 1 addition & 1 deletion clusters/about/mce_networking.adoc
@@ -57,5 +57,5 @@ For the {mce-short} cluster networking requirements, see the following table:

|===

*Note:* If the klusterlet agent on the managed cluster requires proxy settings to access the `apiserver` on the hub cluster instead of connecting directly, see xref:../cluster_lifecycle/adv_config_cluster.adoc#config-proxy-hub-cluster[Configuring the proxy between hub cluster and managed cluster].
*Note:* If the klusterlet agent on the managed cluster requires proxy settings to access the `apiserver` on the hub cluster instead of connecting directly, see xref:../cluster_lifecycle/config_proxy_hub_managed.adoc#config-proxy-hub-cluster[Configuring the proxy between hub cluster and managed cluster].

473 changes: 0 additions & 473 deletions clusters/cluster_lifecycle/adv_config_cluster.adoc


113 changes: 113 additions & 0 deletions clusters/cluster_lifecycle/config_proxy_hub_managed.adoc
@@ -0,0 +1,113 @@
[#config-proxy-hub-cluster]
= Configuring the proxy between hub cluster and managed cluster

To register a managed cluster to your {mce} hub cluster, traffic from the managed cluster must be able to reach the {mce-short} hub cluster. Sometimes your managed cluster cannot reach the hub cluster directly. In this case, configure proxy settings so that communication from the managed cluster reaches the {mce-short} hub cluster through an HTTP or HTTPS proxy server.

For example, the {mce-short} hub cluster might be in a public cloud while the managed cluster is in a private cloud environment behind firewalls, where outbound communication can only go through an HTTP or HTTPS proxy server.

.Prerequisites

- You have an HTTP or HTTPS proxy server running that supports HTTP tunnels, such as the HTTP CONNECT method.
- You have a managed cluster that can reach the HTTP or HTTPS proxy server, and the proxy server can access the {mce-short} hub cluster.

Complete the following steps to configure the proxy settings between the hub cluster and the managed cluster:

. Create a `KlusterletConfig` resource with proxy settings.
.. See the following configuration with HTTP proxy:
+
[source,yaml]
----
apiVersion: config.open-cluster-management.io/v1alpha1
kind: KlusterletConfig
metadata:
  name: http-proxy
spec:
  hubKubeAPIServerConfig:
    proxyURL: "http://<username>:<password>@<ip>:<port>"
----
+
.. See the following configuration with HTTPS proxy:
+
[source,yaml]
----
apiVersion: config.open-cluster-management.io/v1alpha1
kind: KlusterletConfig
metadata:
  name: https-proxy
spec:
  hubKubeAPIServerConfig:
    proxyURL: "https://<username>:<password>@<ip>:<port>"
    trustedCABundles:
    - name: "proxy-ca-bundle"
      caBundle:
        name: <configmap-name>
        namespace: <configmap-namespace>
----
+
*Note:* A CA bundle is required for an HTTPS proxy. It references a `ConfigMap` that contains one or more CA certificates. You can create the `ConfigMap` by running the following command:
+
[source,bash]
----
oc create -n <configmap-namespace> configmap <configmap-name> --from-file=ca.crt=/path/to/ca/file
----
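For reference, the command produces a `ConfigMap` that resembles the following sketch. The `ca.crt` data key comes from the `--from-file=ca.crt=` argument, and the placeholder names match the command; the PEM content shown is illustrative only:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: <configmap-name>
  namespace: <configmap-namespace>
data:
  ca.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
```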

. When creating a managed cluster, choose the `KlusterletConfig` resource by adding an annotation that refers to the `KlusterletConfig` resource. See the following example:
+
[source,yaml]
----
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  annotations:
    agent.open-cluster-management.io/klusterlet-config: <klusterlet-config-name>
  name: <managed-cluster-name>
spec:
  hubAcceptsClient: true
  leaseDurationSeconds: 60
----
+
*Notes:*
+
* You might need to toggle the YAML view to add the annotation to the `ManagedCluster` resource when you use the {mce-short} console.
* You can use a global `KlusterletConfig` to enable the configuration on every managed cluster without using an annotation for binding.
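The global option mentioned in the previous note can be sketched as follows. This is a hedged example: it assumes that a `KlusterletConfig` named `global` applies to every managed cluster that does not bind its own `KlusterletConfig` through the annotation; verify the reserved name and precedence behavior against your {mce-short} version before relying on it:

```yaml
apiVersion: config.open-cluster-management.io/v1alpha1
kind: KlusterletConfig
metadata:
  name: global  # assumed reserved name that applies to all managed clusters
spec:
  hubKubeAPIServerConfig:
    proxyURL: "http://<username>:<password>@<ip>:<port>"
```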

[#disable-proxy-hub-managed]
== Disabling the proxy between hub cluster and managed cluster

If your requirements change, you might need to disable the HTTP or HTTPS proxy. Complete the following steps:

. Go to the `ManagedCluster` resource.
. Remove the `agent.open-cluster-management.io/klusterlet-config` annotation.
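If you prefer the command line, both steps collapse into one command. The trailing hyphen after the annotation key is the standard `oc annotate` syntax for deleting an annotation; the cluster name is a placeholder:

```shell
# Remove the klusterlet-config annotation from the managed cluster;
# the trailing "-" tells oc annotate to delete the key.
oc annotate managedcluster <managed-cluster-name> \
  agent.open-cluster-management.io/klusterlet-config-
```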

[#config-klusterlet-nodes]
== Optional: Configuring the klusterlet to run on specific nodes

When you create a cluster by using {acm}, you can specify the nodes on which you want the managed cluster klusterlet to run by configuring the `nodeSelector` and `tolerations` annotations for the managed cluster. Complete the following steps to configure these settings:

. Select the managed cluster that you want to update from the clusters page in the console.

. Set the YAML switch to `On` to view the YAML content.
+
*Note:* The YAML editor is only available when importing or creating a cluster. To edit the managed cluster YAML definition after importing or creating, you must use the {ocp-short} command-line interface or the {acm-short} search feature.

. Add the `nodeSelector` annotation to the managed cluster YAML definition. The key for this annotation is: `open-cluster-management/nodeSelector`. The value of this annotation is a string map with JSON formatting.

. Add the `tolerations` entry to the managed cluster YAML definition. The key of this annotation is: `open-cluster-management/tolerations`. The value of this annotation represents a link:https://github.com/kubernetes/api/blob/release-1.24/core/v1/types.go#L3007[toleration] list with JSON formatting.
The resulting YAML might resemble the following example:
+
[source,yaml]
----
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  annotations:
    open-cluster-management/nodeSelector: '{"dedicated":"acm"}'
    open-cluster-management/tolerations: '[{"key":"dedicated","operator":"Equal","value":"acm","effect":"NoSchedule"}]'
----
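Because the annotation values are JSON strings embedded in YAML, a malformed value is easy to miss until apply time. As a quick local sanity check (a hypothetical helper step, not part of the documented procedure), you can confirm the values parse as JSON before adding them:

```shell
# Hypothetical local check: confirm the annotation values are valid JSON
# before adding them to the ManagedCluster definition.
NODE_SELECTOR='{"dedicated":"acm"}'
TOLERATIONS='[{"key":"dedicated","operator":"Equal","value":"acm","effect":"NoSchedule"}]'

# json.load exits nonzero on invalid JSON, so the OK line only prints on success.
echo "$NODE_SELECTOR" | python3 -c 'import json,sys; json.load(sys.stdin)' && echo "nodeSelector OK"
echo "$TOLERATIONS" | python3 -c 'import json,sys; json.load(sys.stdin)' && echo "tolerations OK"
```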

. To make sure that you can deploy your content to the correct nodes, see link:../../add-ons/configure_klusterlet_addons.adoc#configure-klusterlet-addons[Configuring klusterlet add-ons].

[#add-resources-config-proxy-hub]
== Additional resources

* xref:../cluster_lifecycle/cluster_proxy_addon_config.adoc#cluster-proxy-addon-settings[Enabling proxy settings for cluster proxy add-ons]
2 changes: 1 addition & 1 deletion clusters/cluster_lifecycle/create_cluster_cli.adoc
@@ -42,7 +42,7 @@ spec:
releaseImage: quay.io/openshift-release-dev/ocp-release:4.x.47-x86_64
----

- Review the hub cluster `KubeAPIServer` certificate verification strategy and update the strategy if needed. To learn what strategy to use for your setup, see xref:../cluster_lifecycle/adv_config_cluster.adoc#config-hub-kube-api-server[Configuring the hub cluster `KubeAPIServer` verification strategy].
- Review the hub cluster `KubeAPIServer` certificate verification strategy and update the strategy if needed. To learn what strategy to use for your setup, see xref:../cluster_lifecycle/set_verification_strat.adoc#set-hub-kube-api-server[Configuring the hub cluster `KubeAPIServer` verification strategy].

[#create-a-cluster-with-clusterdeployment]
== Create a cluster with ClusterDeployment
2 changes: 1 addition & 1 deletion clusters/cluster_lifecycle/create_cluster_on_prem.adoc
@@ -28,7 +28,7 @@ See the following prerequisites before creating a cluster in an on-premises envi
- The following application base domain must point to the static IP address for Ingress VIP:
+
`*.apps.<cluster_name>.<base_domain>`
* Review the hub cluster `KubeAPIServer` certificate verification strategy and update the strategy if needed. To learn what strategy to use for your setup, see xref:../cluster_lifecycle/adv_config_cluster.adoc#config-hub-kube-api-server[Configuring the hub cluster `KubeAPIServer` verification strategy].
* Review the hub cluster `KubeAPIServer` certificate verification strategy and update the strategy if needed. To learn what strategy to use for your setup, see xref:../cluster_lifecycle/set_verification_strat.adoc#set-hub-kube-api-server[Configuring the hub cluster `KubeAPIServer` verification strategy].


[#on-prem-creating-your-cluster-with-the-console]
107 changes: 107 additions & 0 deletions clusters/cluster_lifecycle/custom_hub_cert.adoc
@@ -0,0 +1,107 @@
[#custom-hub-api-certificates]
= Customizing the hub cluster `KubeAPIServer` certificates

The managed clusters communicate with the hub cluster through a mutual connection with the OpenShift `KubeAPIServer` external load balancer. The default OpenShift `KubeAPIServer` certificate is issued by an internal {ocp} cluster certificate authority (CA) when {ocp-short} is installed. If necessary, you can add or change certificates.

Changing the API server certificate might impact the communication between the managed cluster and the hub cluster. When you add the named certificate before installing the product, you can avoid an issue that might leave your managed clusters in an offline state.

The following list contains some examples of when you might need to update your certificates:

* You want to replace the default API server certificate for the external load balancer with your own certificate. By following the guidance in link:https://docs.redhat.com/documentation/en-us/openshift_container_platform/4.17/html/security_and_compliance/configuring-certificates#api-server-certificates[Adding API server certificates] in the {ocp-short} documentation, you can add a named certificate with host name `api.<cluster_name>.<base_domain>` to replace the default API server certificate for the external load balancer. Replacing the certificate might cause some of your managed clusters to move to an offline state. If your clusters are in an offline state after upgrading the certificates, follow the troubleshooting instructions for link:../support_troubleshooting/trouble_cluster_offline_cert_mce.adoc#troubleshooting-imported-clusters-offline-after-certificate-change-mce[Troubleshooting imported clusters offline after certificate change] to resolve it.
+
*Note:* Adding the named certificate before installing the product helps to avoid your clusters moving to an offline state.

* The named certificate for the external load balancer is expiring and you need to replace it. If both the old and the new certificate share the same root CA certificate, regardless of the number of intermediate certificates, you can follow the guidance in link:https://docs.redhat.com/documentation/en-us/openshift_container_platform/4.17/html/security_and_compliance/configuring-certificates#api-server-certificates[Adding API server certificates] in the {ocp-short} documentation to create a new secret for the new certificate. Then update the serving certificate reference for host name `api.<cluster_name>.<base_domain>` to the new secret in the `APIServer` custom resource. Otherwise, when the old and new certificates have different root CA certificates, complete the following steps to replace the certificate:
+
. Locate your `APIServer` custom resource, which resembles the following example:
+
[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  audit:
    profile: Default
  servingCerts:
    namedCertificates:
    - names:
      - api.mycluster.example.com
      servingCertificate:
        name: old-cert-secret
----

. Create a new secret in the `openshift-config` namespace that contains the content of the existing and new certificates by running the following commands:
+
.. Copy the old certificate into a new certificate:
+
[source,bash]
----
cp old.crt combined.crt
----

.. Add the contents of the new certificate to the copy of the old certificate:
+
[source,bash]
----
cat new.crt >> combined.crt
----

.. Apply the combined certificates to create a secret:
+
[source,bash]
----
oc create secret tls combined-certs-secret --cert=combined.crt --key=old.key -n openshift-config
----
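Before creating the secret, you can confirm that the combined file really contains both certificates by counting its PEM blocks. The following sketch generates two throwaway self-signed certificates as stand-ins for `old.crt` and `new.crt` so it is reproducible; with your real files, only the last three commands apply:

```shell
# Stand-ins for the real old.crt/new.crt so this sketch is self-contained:
openssl req -x509 -newkey rsa:2048 -nodes -keyout old.key -out old.crt \
  -subj "/CN=old" -days 1
openssl req -x509 -newkey rsa:2048 -nodes -keyout new.key -out new.crt \
  -subj "/CN=new" -days 1

# The same combination steps as in the procedure:
cp old.crt combined.crt
cat new.crt >> combined.crt

# The combined bundle must contain exactly two certificates:
grep -c "BEGIN CERTIFICATE" combined.crt   # prints 2
```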

. Update your `APIServer` resource to reference the combined certificate as the `servingCertificate`.
+
[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  audit:
    profile: Default
  servingCerts:
    namedCertificates:
    - names:
      - api.mycluster.example.com
      servingCertificate:
        name: combined-certs-secret
----

. Wait about 15 minutes for the CA bundle that contains both the new and old certificates to propagate to the managed clusters.

. Create another secret named `new-cert-secret` in the `openshift-config` namespace that contains only the new certificate information by entering the following command:
+
[source,bash]
----
oc create secret tls new-cert-secret --cert=new.crt --key=new.key -n openshift-config
----

. Update the `APIServer` resource by changing the name of `servingCertificate` to reference the `new-cert-secret`. Your resource might resemble the following example:
+
[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  audit:
    profile: Default
  servingCerts:
    namedCertificates:
    - names:
      - api.mycluster.example.com
      servingCertificate:
        name: new-cert-secret
----
+
After about 15 minutes, the old certificate is removed from the CA bundle, and the change is automatically propagated to the managed clusters.

*Note:* Managed clusters must use the host name `api.<cluster_name>.<base_domain>` to access the hub cluster. You cannot use named certificates that are configured with other host names.
2 changes: 1 addition & 1 deletion clusters/cluster_lifecycle/import_cli.adoc
@@ -20,7 +20,7 @@ After you install {mce}, you are ready to import a cluster and manage it by usin
* A separate cluster you want to manage.
* The {ocp-short} CLI. See link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/cli_tools/openshift-cli-oc#cli-getting-started[Getting started with the OpenShift CLI] for information about installing and configuring the {ocp-short} CLI.
* A defined `multiclusterhub.spec.imagePullSecret` if you are importing a cluster that was not created by {ocp-short}. This secret might have been created when {mce} was installed. See xref:../install_upgrade/adv_config_install.adoc#custom-image-pull-secret[Custom image pull secret] for more information about how to define this secret.
* Review the hub cluster `KubeAPIServer` certificate verification strategy and update the strategy if needed. To learn what strategy to use for your setup, see xref:../cluster_lifecycle/adv_config_cluster.adoc#config-hub-kube-api-server[Configuring the hub cluster `KubeAPIServer` verification strategy].
* Review the hub cluster `KubeAPIServer` certificate verification strategy and update the strategy if needed. To learn what strategy to use for your setup, see xref:../cluster_lifecycle/set_verification_strat.adoc#set-hub-kube-api-server[Configuring the hub cluster `KubeAPIServer` verification strategy].

[#supported-architectures]
== Supported architectures