Commit 6e9ecd2

HCP Consul Dedicated restore migration steps (#1298)

### Description

This PR re-adds the steps to migrate from HCP Consul Dedicated to a self-managed cluster. HashiCorp retains snapshots for 30 days, and the support team expects to continue assisting customers through the migration process. This page was created at `/consul/docs/hcp` instead of in the `hcp-docs` directory because the migration process onboards to Consul Enterprise. The PR also updates the redirect from HCP Consul content, and includes a redirect for the versioned Consul docs.

### Preview links

[Preview](http://unified-docs-frontend-preview-j5ns6lwqh-hashicorp.vercel.app/consul/docs/hcp)

2 parents 0d5ac38 + a3cb8a8 commit 6e9ecd2

3 files changed: +344 -1 lines changed

Lines changed: 334 additions & 0 deletions

@@ -0,0 +1,334 @@
---
page_title: HCP Consul Dedicated
description: |-
  This topic provides an overview of HCP Consul Dedicated clusters and the process to migrate to self-managed Consul clusters.
---

# HCP Consul Dedicated

This topic describes HCP Consul Dedicated, the networking software as a service (SaaS) product that was previously available through the HashiCorp Cloud Platform (HCP).

HCP Consul Dedicated reached end-of-life on November 12, 2025.

## Introduction

HCP Consul Dedicated was a service that provided simplified workflows for common Consul tasks and the option to have HashiCorp set up and manage your Consul servers for you.

On November 12, 2025, HashiCorp ended operations and support for HCP Consul Dedicated clusters. As of this date, you are no longer able to deploy, access, update, or manage Dedicated clusters.

We recommend migrating HCP Consul Dedicated deployments to self-managed server clusters running Consul Enterprise. On virtual machines, this migration requires some downtime for the server cluster but enables continuity between existing configurations and operations. Downtime is not required on Kubernetes, although we suggest scheduling downtime to ensure the migration is successful.

## Migration workflows

The process to migrate a Dedicated cluster to a self-managed environment consists of steps that depend on whether your cluster runs on virtual machines (VMs) or Kubernetes.

### VMs

To migrate on VMs, complete the following steps:

1. [Retrieve a snapshot of your cluster](#retrieve-a-snapshot-of-your-cluster).
1. [Transfer the snapshot to a self-managed cluster](#transfer-the-snapshot-to-a-self-managed-cluster).
1. [Use the snapshot to restore the cluster in your self-managed environment](#use-the-snapshot-to-restore-the-cluster-in-your-self-managed-environment).
1. [Update the client configuration file to point to the new server](#update-the-client-configuration-file-to-point-to-the-new-server).
1. [Restart the client agent and verify that the migration was successful](#restart-the-client-agent-and-verify-that-the-migration-was-successful).
1. [Disconnect and decommission the HCP Consul Dedicated cluster and its supporting resources](#disconnect-supporting-resources-and-decommission-the-hcp-consul-dedicated-cluster).

### Kubernetes

To migrate on Kubernetes, complete the following steps:

1. [Retrieve a snapshot of the HCP Consul Dedicated cluster](#retrieve-a-snapshot-of-the-hcp-consul-dedicated-cluster-1).
1. [Transfer the snapshot to a self-managed cluster](#transfer-the-snapshot-to-a-self-managed-cluster-1).
1. [Use the snapshot to restore the cluster in your self-managed environment](#use-the-snapshot-to-restore-the-cluster-in-your-self-managed-environment).
1. [Update the CoreDNS configuration](#update-the-coredns-configuration).
1. [Update the `values.yaml` file](#update-the-values-yaml-file).
1. [Upgrade the cluster](#upgrade-the-cluster).
1. [Redeploy workload applications](#redeploy-workload-applications).
1. [Switch the CoreDNS entry](#switch-the-coredns-entry).
1. [Verify that the migration was successful](#verify-that-the-migration-was-successful).
1. [Disconnect and decommission the HCP Consul Dedicated cluster and its supporting resources](#disconnect-and-decommission-the-hcp-consul-dedicated-cluster-and-its-supporting-resources).

## Migration recommendations and best practices

On VMs, the migration process requires a temporary outage that lasts from the time when you restore the snapshot on the self-managed cluster until the time when you restart client agents after updating their configuration. Downtime is not required on Kubernetes, although we suggest scheduling downtime to ensure the migration is successful.

In addition, data written to the Dedicated server after the snapshot is created cannot be restored.

To limit the duration of outages, we recommend using a dev environment to test the migration before fully migrating production workloads. The length of the outage depends on the number of clients, the self-managed environment, and the automated processes involved.

Regardless of whether you use VMs or Kubernetes, we also recommend using [Consul maintenance mode](/consul/commands/maint) to schedule a period of inactivity to address unforeseen data loss or data sync issues that result from the migration.
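
For example, you can place a client node in maintenance mode before the cutover and remove it from maintenance afterward. This is a minimal sketch; the reason string is illustrative:

```shell-session
$ consul maint -enable -reason "Migration from HCP Consul Dedicated"
```

Remove the node from maintenance mode after you confirm the migration:

```shell-session
$ consul maint -disable
```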

## Migration prerequisites

The migration instructions on this page make the following assumptions about your existing infrastructure:

- Your previous HCP Consul Dedicated server cluster and current self-managed server cluster have matching configurations. These configurations should include the following settings:
  - Both clusters have 3 nodes.
  - ACLs, TLS, and gossip encryption are enabled.
- You have command line access to your self-managed cluster.
- You already identified the client nodes affected by the migration.

If you are migrating clusters on Kubernetes, refer to the [version compatibility matrix](/consul/docs/k8s/compatibility#compatibility-matrix) to ensure that you are using compatible versions of `consul` and `consul-k8s`.

In addition, you must migrate to an Enterprise cluster, which requires an Enterprise license. Migrating to Community edition clusters is not possible. If you do not have access to a Consul Enterprise license, [file a support request to let us know](https://support.hashicorp.com/hc/en-us/requests/new). A member of the account team will reach out to assist you.

## Migrate to self-managed on VMs

Complete the following steps to migrate to a self-managed Consul Enterprise cluster on VMs.

### Retrieve a snapshot of your cluster

A snapshot is a backup of your HCP Consul cluster’s state. Consul uses this snapshot to restore its previous state in the new self-managed environment.

As of November 12, 2025, you cannot take a snapshot of an HCP Dedicated cluster. We will retain cluster snapshots for 30 days. [Contact HCP support](https://support.hashicorp.com/hc/en-us/requests/new) if you need help accessing your most recent snapshot.

### Transfer the snapshot to a self-managed cluster

Use a secure copy (SCP) command to move the snapshot file to the self-managed Consul cluster.

```shell-session
$ scp /home/backup/hcp-cluster.snapshot <user>@<self-managed-node>:/home/backup
```

### Use the snapshot to restore the cluster in your self-managed environment

After you transfer the snapshot file to the self-managed node, you can restore the cluster’s state from the snapshot in your self-managed environment.

Make sure the `CONSUL_HTTP_TOKEN` environment variable is set to the value of an ACL token in your self-managed environment, as shown in the example below, and then run the restore command that follows.
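
A minimal sketch of exporting the token for the current shell session; the token value is a placeholder:

```shell-session
$ export CONSUL_HTTP_TOKEN="<token-value>"
```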

```shell-session
$ consul snapshot restore /home/backup/hcp-cluster.snapshot
Restored snapshot
```

If you cannot use environment variables, add the `-token=` flag to the command:

```shell-session
$ consul snapshot restore /home/backup/hcp-cluster.snapshot -token="<token-value>"
Restored snapshot
```

For more information on this command, refer to the [Consul CLI documentation](/consul/commands/snapshot/restore).

### Update the client configuration file to point to the new server

Modify the agent configuration on your Consul clients. You must update the following configuration values:

- `retry_join` IP address
- TLS encryption
- ACL token

You can use an existing certificate authority or create a new one in your self-managed cluster. For more information, refer to [Service mesh certificate authority overview in the Consul documentation](/consul/docs/connect/ca).

The following example demonstrates a modified client configuration.

```hcl
retry_join = ["<new.server.IP.address>"]

tls {
  defaults {
    auto_encrypt {
      allow_tls = true
      tls = true
    }
    verify_incoming = true
    verify_outgoing = true
  }
}

acl {
  enabled = true
  default_policy = "deny"
  enable_token_persistence = true
  tokens {
    agent = "<Token-Value>"
  }
}
```

For more information about configuring these fields, refer to the [agent configuration reference in the Consul documentation](/consul/docs/agent/config/config-files).

### Restart the client agent and verify that the migration was successful

Restart the client to apply the updated configuration and reconnect it to the new cluster.

```shell-session
$ sudo systemctl restart consul
```

After you update and restart all of the client agents, check the catalog to ensure that clients migrated successfully. You can check the Consul UI or run the following CLI command.

```shell-session
$ consul members
```

### Disconnect supporting resources and decommission the HCP Consul Dedicated cluster

After you confirm that your client agents successfully connected to the self-managed cluster, delete VPC peering connections and any other unused resources. If you use other HCP services, ensure that these resources are not currently in use. After you delete a peering connection or an HVN, it cannot be used by any HCP product.

## Migrate to self-managed on Kubernetes

Complete the following steps to migrate to a self-managed Consul Enterprise cluster on Kubernetes.

### Retrieve a snapshot of the HCP Consul Dedicated cluster

A snapshot is a backup of your HCP Consul cluster’s state. Consul uses this snapshot to restore its previous state in the new self-managed environment.

As of November 12, 2025, you cannot take a snapshot of an HCP Dedicated cluster. We will retain cluster snapshots for 30 days. [Contact HCP support](https://support.hashicorp.com/hc/en-us/requests/new) if you need help accessing your most recent snapshot.

### Transfer the snapshot to a self-managed cluster

Use a secure copy (SCP) command to move the snapshot file to the self-managed Consul cluster.

```shell-session
$ scp /home/backup/hcp-cluster.snapshot <user>@<self-managed-node>:/home/backup
```
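
If you plan to run the restore from inside a Consul server pod with `kubectl exec`, you may also need to copy the snapshot into the pod first. This sketch assumes the `consul` namespace, the `consul-server-0` pod, and that the target directory exists in the container:

```shell-session
$ kubectl cp /home/backup/hcp-cluster.snapshot consul/consul-server-0:/home/backup/hcp-cluster.snapshot
```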

### Use the snapshot to restore the cluster in your self-managed environment

After you transfer the snapshot file to the self-managed node, use the `kubectl exec` command to restore the cluster’s state in your self-managed Kubernetes environment.

```shell-session
$ kubectl exec consul-server-0 -- consul snapshot restore /home/backup/hcp-cluster.snapshot
Restored snapshot
```

For more information on the snapshot command, refer to the [Consul CLI documentation](/consul/commands/snapshot/restore).

### Update the CoreDNS configuration

Update the CoreDNS configuration on your Kubernetes cluster to point to the Dedicated cluster's IP address. Make sure the configured hostname resolves correctly to the cluster’s IP address from inside a deployed pod.

<CodeBlockConfig highlight="13" hideClipboard>

```yaml
Corefile: |-
  .:53 {
      errors
      health {
         lameduck 5s }
      ready
      kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
        ttl 30
      }
      hosts {
        35.91.49.134 server.hcp-managed.consul
        fallthrough
      }
      prometheus 0.0.0.0:9153
      forward . 8.8.8.8 8.8.4.4 /etc/resolv.conf
      cache 30
      loop
      reload
      loadbalance
  }
```

</CodeBlockConfig>
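
To confirm that the hostname resolves from inside a deployed pod, you can run a lookup from any workload pod. This sketch assumes the pod image ships `nslookup`; the pod name is a placeholder:

```shell-session
$ kubectl exec -it <workload-pod> -- nslookup server.hcp-managed.consul
```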

If there are issues when you attempt to resolve the hostname, check that the nameserver inside the pod points to the `CLUSTER-IP`. Run the following command to return the `CLUSTER-IP`.

```shell-session
$ kubectl -n kube-system get svc
NAME      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
coredns   ClusterIP   10.100.224.88   <none>        53/UDP,53/TCP   4h24m
```

### Update the `values.yaml` file

Update the Helm configuration or `values.yaml` file for your self-managed cluster. You should perform the following actions (see the examples after this list):

- Update the `externalServers.host` value. Use the host name you added when you updated the CoreDNS configuration.
- Create a Kubernetes secret in the `consul` namespace with a new CA file created by concatenating the contents of all of the following CA files. Add the CA file contents of the new self-managed server at the end.
  - [https://letsencrypt.org/certs/isrg-root-x1-cross-signed.pem](https://letsencrypt.org/certs/isrg-root-x1-cross-signed.pem)
  - [https://letsencrypt.org/certs/isrg-root-x2-cross-signed.pem](https://letsencrypt.org/certs/isrg-root-x2-cross-signed.pem)
  - [https://letsencrypt.org/certs/2024/e5-cross.pem](https://letsencrypt.org/certs/2024/e5-cross.pem)
  - [https://letsencrypt.org/certs/2024/e6-cross.pem](https://letsencrypt.org/certs/2024/e6-cross.pem)
  - [https://letsencrypt.org/certs/2024/r10.pem](https://letsencrypt.org/certs/2024/r10.pem)
  - [https://letsencrypt.org/certs/2024/r11.pem](https://letsencrypt.org/certs/2024/r11.pem)
- Update the `externalServers.tlsServerName` field to the appropriate value. It is usually the hostname of the managed cluster. If the value is not known, TLS verification fails when you apply this configuration and the error log lists possible values.
- Set `externalServers.useSystemRoots` to `false` to use the new CA certs.
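
The following sketches illustrate these actions. The file names, the `consul-ca-cert` secret name, and the `tls.crt` key are assumptions rather than required values, and the commands assume you already downloaded the CA files listed above and saved your self-managed server's CA as `self-managed-ca.pem`:

```shell-session
$ cat isrg-root-x1-cross-signed.pem isrg-root-x2-cross-signed.pem \
    e5-cross.pem e6-cross.pem r10.pem r11.pem \
    self-managed-ca.pem > ca-bundle.pem
$ kubectl create secret generic consul-ca-cert --namespace consul --from-file=tls.crt=ca-bundle.pem
```

The corresponding `values.yaml` entries might resemble the following sketch. Field names follow the consul-k8s Helm chart, which expresses the server host as the `hosts` list; the hostname reuses the CoreDNS entry from earlier in this guide, and the `tlsServerName` value is a placeholder:

```yaml
externalServers:
  enabled: true
  # Hostname added to CoreDNS in the previous step
  hosts: ["server.hcp-managed.consul"]
  # Usually the hostname of the managed cluster; a TLS verification error lists valid values
  tlsServerName: "<managed-cluster-hostname>"
  # Rely on the CA bundle you created instead of the system roots
  useSystemRoots: false
```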

For more information about configuring these fields, refer to the [Consul on Kubernetes Helm chart reference](/consul/docs/k8s/helm).

### Upgrade the cluster

After you update the `values.yaml` file, run the `consul-k8s upgrade` command to update the self-managed Kubernetes cluster.

```shell-session
$ consul-k8s upgrade -config-file=values.yaml
```

This command redeploys the Consul pods with the updated configurations. Although the CoreDNS installation still points to the Dedicated cluster, the pods have access to the new CA file.

### Redeploy workload applications

Redeploy all the workload applications so that the `init` containers run again and fetch the new CA file, for example by restarting their rollouts as shown below. After you redeploy the applications, run a `kubectl describe pod` command on any workload pod and verify that the output resembles the example that follows.
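
A minimal sketch of restarting every Deployment in a namespace, assuming the workloads run as Deployments in the `default` namespace:

```shell-session
$ kubectl rollout restart deployment --namespace default
```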

<CodeBlockConfig hideClipboard>

```shell-session
$ kubectl describe pod -l name="product-api-8cf8c8ccc-kvkk8"
Environment:
POD_NAME: product-api-8cf8c8ccc-kvkk8 (v1:metadata.name)
POD_NAMESPACE: default (v1:metadata.namespace)
NODE_NAME: (v1:spec.nodeName)
CONSUL_ADDRESSES: server.consul.one
CONSUL_GRPC_PORT: 8502
CONSUL_HTTP_PORT: 443
CONSUL_API_TIMEOUT: 5m0s
CONSUL_NODE_NAME: $(NODE_NAME)-virtual
CONSUL_USE_TLS: true
CONSUL_CACERT_PEM: -----BEGIN CERTIFICATE-----\r
MIIFYDCCBEigAwIBAgIQQAF3ITfU6UK47naqPGQKtzANBgkqhkiG9w0BAQsFADA/\r
MSQwIgYDVQQKExtEaWdpdGFsIFNpZ25hdHVyZSBUcnVzdCBDby4xFzAVBgNVBAMT\r
DkRTVCBSb290IENBIFgzMB4XDTIxMDEyMDE5MTQwM1oXDTI0MDkzMDE4MTQwM1ow\r
TzELMAkGA1UEBhMCVVMxKTAnBgNVBAoTIEludGVybmV0IFNlY3VyaXR5IFJlc2Vh\r
cmNoIEdyb3VwMRUwEwYDVQQDEwxJU1JHIFJvb3QgWDEwggIiMA0GCSqGSIb3DQEB\r
AQUAA4ICDwAwggIKAoICAQCt6CRz9BQ385ueK1coHIe+3LffOJCMbjzmV6B493XC
```

</CodeBlockConfig>

### Switch the CoreDNS entry

Update the CoreDNS configuration to use the self-managed server's IP address.
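
For example, you can edit the CoreDNS ConfigMap in place and change the IP address in the `hosts` block shown earlier:

```shell-session
$ kubectl -n kube-system edit configmap coredns
```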

If the `tlsServerName` of the self-managed cluster is different from the `tlsServerName` on the Dedicated cluster, you must update the field and re-run the `consul-k8s upgrade` command. For self-managed clusters, the `tlsServerName` usually takes the form `server.<datacenter-name>.consul`.

### Verify that the migration was successful

After you update the CoreDNS entry, check the Consul catalog to ensure that the migration was successful. You can check the Consul UI or run the `kubectl exec` command.

```shell-session
$ kubectl exec consul-server-0 -- consul members
```

### Disconnect and decommission the HCP Consul Dedicated cluster and its supporting resources

After you confirm that your services successfully connected to the self-managed cluster, delete VPC peering connections and any other unused resources. If you use other HCP services, ensure that these resources are not currently in use. After you delete a peering connection or an HVN, it cannot be used by any HCP product.

## Troubleshooting

You might encounter errors when migrating from an HCP Consul Dedicated cluster to a self-managed Consul Enterprise cluster.

### Troubleshoot on VMs

If you encounter a `403 Permission Denied` error when you attempt to generate a new ACL bootstrap token, or if you misplace the bootstrap token, you can update the Raft index to reset the ACL system. Use the Raft index number included in the error output to write the reset index into the bootstrap reset file. You must run this command on the leader node.

The following example uses `13` as its Raft index:

```shell-session
$ echo 13 >> consul.d/acl-bootstrap-reset
```
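
After you write the reset index, you can run the bootstrap command again to generate a new bootstrap token. This is a sketch of the standard reset flow:

```shell-session
$ consul acl bootstrap
```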

### Troubleshoot on Kubernetes

If you encounter issues resolving the hostname, check whether the nameserver matches the `CLUSTER-IP`. One possible issue is that the `ClusterDNS` field in the kubelet configuration points to an IP address that differs from the CoreDNS `CLUSTER-IP` on the Kubernetes worker nodes. In that case, change the kubelet configuration to use the `CLUSTER-IP` and then restart the kubelet process on all nodes.
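
To check which nameserver a pod actually uses, you can inspect its `resolv.conf`; the pod name is a placeholder:

```shell-session
$ kubectl exec -it <workload-pod> -- cat /etc/resolv.conf
```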

## Support

If you have questions or need additional help when migrating to a self-managed Consul Enterprise cluster, [submit a request to our support team](https://support.hashicorp.com/hc/en-us/requests/new).

content/consul/v1.22.x/data/docs-nav-data.json

Lines changed: 4 additions & 0 deletions

@@ -2552,6 +2552,10 @@
          "path": "openshift"
        }
      ]
+    },
+    {
+      "title": "HCP Consul Dedicated",
+      "path": "hcp"
    },
    {
      "divider": true

content/hcp-docs/redirects.jsonc

Lines changed: 6 additions & 1 deletion

@@ -624,7 +624,12 @@
  },
  {
    "source": "/hcp/docs/consul/:slug*",
-    "destination": "/hcp/docs/changelog#2025-11-12",
+    "destination": "/consul/docs/hcp",
    "permanent": true,
+  },
+  {
+    "source": "/consul/docs/:version(v1\\.(?:18|19|20)\\.x)/hcp",
+    "destination": "/consul/docs/hcp",
+    "permanent": true
  }
]
