Migrating PAM data to a new system

By using a snapshot, you can migrate your data to a new Puppet Application Manager (PAM) instance.

Data migration prerequisites

In order to perform a data migration, your system must be configured as follows:
  • On the original system, Puppet Application Manager (PAM) must be configured to support Full Snapshots (Instance). For instructions on configuring snapshots, see Backing up PAM using snapshots.
  • Velero must be configured to use an external snapshot destination accessible to both the old and new clusters, such as S3 or NFS.
  • Both the old and new clusters must have the same connection status (online or offline). Migrating from offline to online clusters or vice versa is not supported.
  • For offline installs, both the old and new clusters must use the same version of PAM.
  • Upgrade to the latest version of PAM on both the old and new clusters before you begin.
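Before starting either migration path, it can help to confirm the Velero prerequisite on the original cluster. The following is a minimal sketch and assumes Velero is running in its default velero namespace; adjust the namespace if your installation differs.
  # The backup storage location points at the shared snapshot destination (S3, NFS, or host path).
  # On recent Velero releases, the PHASE column reports Available when the destination is reachable.
  kubectl get backupstoragelocation -n velero
  # List the Full Snapshots (Instance) taken so far; these are what the new cluster restores from.
  kubectl get backups -n velero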

Migrating data between two systems with the same architecture

To perform data migration between two systems using the same architecture (from standalone to standalone, or from HA to HA), you must create a new cluster to migrate to, then follow the process outlined below to recover your instance from a snapshot.

Before you begin

Review the requirements in Data migration prerequisites.

Important: If you are migrating from a legacy architecture, see our Support Knowledge Base instructions for migrating to a supported architecture for your Puppet application.
  1. On the original system, find the version of kURL your deployment is using by running the following command. Save the version for use in step 3.
    kubectl get configmap -n kurl kurl-current-config -o jsonpath="{.data.kurl-version}" && echo
  2. Get the installer spec section by running the command appropriate to your PAM installation type:
    Tip: See How to determine your version of Puppet Application Manager if you're not sure which installation type you're running.
    • HA installation: kubectl get installers puppet-application-manager -o yaml
    • Standalone installation: kubectl get installers puppet-application-manager-standalone -o yaml
    • Legacy installation: kubectl get installers puppet-application-manager-legacy -o yaml
    The command's output looks similar to the following example. The spec section begins at the spec: key near the end of the output. Save your spec section for use in step 3.
    # kubectl get installers puppet-application-manager-standalone -o yaml
    apiVersion: cluster.kurl.sh/v1beta1
    kind: Installer
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"cluster.kurl.sh/v1beta1","kind":"Installer","metadata":{"annotations":{},"creationTimestamp":null,"name":"puppet-application-manager-standalone","namespace":"default"},"spec":{"containerd":{"version":"1.4.12"},"contour":{"version":"1.18.0"},"ekco":{"version":"0.16.0"},"kotsadm":{"applicationSlug":"puppet-application-manager","version":"1.64.0"},"kubernetes":{"version":"1.21.8"},"metricsServer":{"version":"0.4.1"},"minio":{"version":"2020-01-25T02-50-51Z"},"openebs":{"isLocalPVEnabled":true,"localPVStorageClassName":"default","version":"2.6.0"},"prometheus":{"version":"0.49.0-17.1.1"},"registry":{"version":"2.7.1"},"velero":{"version":"1.6.2"},"weave":{"podCidrRange":"/22","version":"2.8.1"}},"status":{}}
      creationTimestamp: "2021-06-04T00:05:08Z"
      generation: 4
      labels:
        velero.io/exclude-from-backup: "true"
      name: puppet-application-manager-standalone
      namespace: default
      resourceVersion: "102061068"
      uid: 4e7f1196-5fab-4072-9399-15d18dcc5137
    spec:
      containerd:
        version: 1.4.12
      contour:
        version: 1.18.0
      ekco:
        version: 0.16.0
      kotsadm:
        applicationSlug: puppet-application-manager
        version: 1.64.0
      kubernetes:
        version: 1.21.8
      metricsServer:
        version: 0.4.1
      minio:
        version: 2020-01-25T02-50-51Z
      openebs:
        isLocalPVEnabled: true
        localPVStorageClassName: default
        version: 2.6.0
      prometheus:
        version: 0.49.0-17.1.1
      registry:
        version: 2.7.1
      velero:
        version: 1.6.2
      weave:
        podCidrRange: /22
        version: 2.8.1
    status: {}
    Note: If the command returns Error from server (NotFound), check that you used the correct command for your architecture. You can view all installers by running kubectl get installers; target the most recent installer in the list.
  3. On a new machine, create a file named installer.yaml with the following contents, replacing <SPEC> and <KURL VERSION> with the information you gathered in the previous steps.
    apiVersion: cluster.kurl.sh/v1beta1
    kind: Installer
    metadata:
    <SPEC>
      kurl:
        installerVersion: "<KURL VERSION>"
    Important: If you are running PAM version 1.68.0 or newer, the kURL installer version might be included in the spec section. If this is the case, omit the kurl: section from the bottom of the installer.yaml file. There must be only one kurl: entry in the file.
    Tip: Spacing is critical in YAML files. Use a YAML file linter to confirm that the format of your file is correct.
    Here is an example of the contents of the installer.yaml file:
    apiVersion: cluster.kurl.sh/v1beta1
    kind: Installer
    metadata:
    spec:
      containerd:
        version: 1.4.12
      contour:
        version: 1.18.0
      ekco:
        version: 0.16.0
      kotsadm:
        applicationSlug: puppet-application-manager
        version: 1.64.0
      kubernetes:
        version: 1.21.8
      metricsServer:
        version: 0.4.1
      minio:
        version: 2020-01-25T02-50-51Z
      openebs:
        isLocalPVEnabled: true
        localPVStorageClassName: default
        version: 2.6.0
      prometheus:
        version: 0.49.0-17.1.1
      registry:
        version: 2.7.1
      velero:
        version: 1.6.2
      weave:
        podCidrRange: /22
        version: 2.8.1
      kurl:
        installerVersion: "v2022.03.11-0"
  4. Build an installer using the installer.yaml file. Run the following command:
    curl -s -X POST -H "Content-Type: text/yaml" --data-binary "@installer.yaml" https://kurl.sh/installer | grep -o "[^/]*$"
    The output is a hash. Carefully save the hash for use in step 5.
  5. Install a new cluster. To do so, you can either:
    1. Point your browser to https://kurl.sh/<HASH> (replacing <HASH> with the hash you generated in step 4) to see customized installation scripts and information.
    2. Follow the appropriate PAM documentation.
      • For online installations: Follow the steps in PAM HA online installation or PAM standalone online installation, replacing the installation script with the following:
        curl https://kurl.sh/<HASH> | sudo bash
      • For offline installations: Follow the steps in PAM HA offline installation or PAM standalone offline installation, replacing the installation script with the following:
        curl -LO https://k8s.kurl.sh/bundle/<HASH>.tar.gz
        When setting up a new offline cluster as part of disaster recovery, add kurl-registry-ip=<IP> to the install options, replacing <IP> with the value you recorded when setting up snapshots.
        Note: If you do not include the kurl-registry-ip=<IP> flag, the registry service will be assigned a new IP address that does not match the IP on the machine where the snapshot was created. You must align the registry service IP address on the new offline cluster to ensure that the restored configuration can pull images from the correct location.
    Important: Do not install any Puppet applications after the PAM installation is complete. You'll recover your Puppet applications later in the process.
  6. To recover using a snapshot saved to a host path, ensure user/group 1001 has full access on all nodes by running:
    chown -R 1001:1001 /<PATH/TO/HOSTPATH>
  7. Configure the new cluster to connect to your snapshot storage location. Run the following to see the arguments needed to complete this task (an illustrative example appears after this procedure):
    kubectl kots -n default velero configure-{hostpath,nfs,aws-s3,other-s3,gcp} --help
  8. Run kubectl kots get backups and wait for the list of snapshots to become available. This might take several minutes.
  9. Start the restoration process by running kubectl kots restore --from-backup <BACKUP NAME>.
    The restoration process takes several minutes to complete. When the PAM UI is available, use it to monitor the status of the application.
    Note: When restoring, wait for all restores to complete before making any changes. The following command waits for pods to finish restoring data from backup. Other pods may not be ready until updated configuration is deployed in the next step:
    kubectl get pod -o json | jq -r '.items[] | select(.metadata.annotations."backup.velero.io/backup-volumes") | .metadata.name' | xargs kubectl wait --for=condition=Ready pod --timeout=20m
    This command requires the jq CLI tool to be installed. It is available in most Linux OS repositories.

  10. After the restoration process completes, save your config and deploy:
    1. From the PAM UI, click Config.
    2. (Optional) If the new cluster's hostname is different from the old one, update the Hostname.
    3. Click Save Config.
    4. Deploy the application. You must save your config and deploy even if you haven't made any changes.
      Note: If you have installed Continuous Delivery for PE and changed the hostname, you need to update the webhooks that connect Continuous Delivery for PE with your source control provider. For information on how to do this, see Update webhooks.
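The exact arguments for step 7 depend on where your snapshot storage lives. The commands below are a minimal sketch for a host path or NFS destination; the flag names and example values are assumptions for illustration, so confirm them against the --help output before running anything.
  # Hypothetical host path destination. Verify flags with:
  #   kubectl kots -n default velero configure-hostpath --help
  kubectl kots -n default velero configure-hostpath --hostpath /mnt/pam-snapshots
  # Hypothetical NFS destination. Verify flags with:
  #   kubectl kots -n default velero configure-nfs --help
  kubectl kots -n default velero configure-nfs --nfs-server nfs.example.com --nfs-path /exports/pam-snapshots
Once the storage location is configured, the backups taken on the original cluster should appear in the output of step 8 after a few minutes.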

Migrating data between two systems with different architectures

To perform data migration between two systems using different PAM architectures (from standalone to HA, or from HA to standalone), you must create a new cluster to recover to, then follow the process outlined below to recover your instance from a snapshot.

Before you begin

Review the requirements in Data migration prerequisites.

Important: If you are migrating from a legacy architecture, see our Support Knowledge Base instructions for migrating to a supported architecture for your Puppet application.

  1. On the original system, find the version of kURL your deployment is using by running the following command. Save the version for use in step 2.
    kubectl get configmap -n kurl kurl-current-config -o jsonpath="{.data.kurl-version}" && echo
  2. Set up a new cluster to house the recovered instance, following the system requirements for your applications.
    Important: Do not install any Puppet applications after the PAM installation is complete. You'll recover your Puppet applications later in the process.
    • Install PAM using the version of kURL you retrieved earlier:
      • For online installs:
        curl -sSL https://k8s.kurl.sh/version/<VERSION STRING>/puppet-application-manager | sudo bash <-s options>
      • For offline installs:
        curl -O https://k8s.kurl.sh/bundle/version/<VERSION STRING>/puppet-application-manager.tar.gz
    • When setting up a new offline cluster as part of disaster recovery, add kurl-registry-ip=<IP> to the install options, replacing <IP> with the value you recorded when setting up snapshots.
      Note: If you do not include the kurl-registry-ip=<IP> flag, the registry service will be assigned a new IP address that does not match the IP on the machine where the snapshot was created. You must align the registry service IP address on the new offline cluster to ensure that the restored configuration can pull images from the correct location.
  3. To recover using a snapshot saved to a host path, ensure user/group 1001 has full access on all nodes by running:
    chown -R 1001:1001 /<PATH/TO/HOSTPATH>
  4. Configure the new cluster to connect to your snapshot storage location. Run the following to see the arguments needed to complete this task (see the illustrative example after the previous procedure):
    kubectl kots -n default velero configure-{hostpath,nfs,aws-s3,other-s3,gcp} --help
  5. Run kubectl kots get backups and wait for the list of snapshots to become available. This might take several minutes.
  6. Start the restoration process by running kubectl kots restore --from-backup <BACKUP NAME>.
    The restoration process takes several minutes to complete. When the PAM UI is available, use it to monitor the status of the application.
    Note: When restoring, wait for all restores to complete before making any changes. The following command waits for pods to finish restoring data from backup. Other pods may not be ready until updated configuration is deployed in the next step:
    kubectl get pod -o json | jq -r '.items[] | select(.metadata.annotations."backup.velero.io/backup-volumes") | .metadata.name' | xargs kubectl wait --for=condition=Ready pod --timeout=20m
    This command requires the jq CLI tool to be installed. It is available in most Linux OS repositories.

  7. After the restoration process completes, save your config and deploy:
    1. From the PAM UI, click Config.
    2. (Optional) If the new cluster's hostname is different from the old one, update the Hostname.
    3. Click Save Config.
    4. Deploy the application. You must save your config and deploy even if you haven't made any changes.
      Note: If you have installed Continuous Delivery for PE and changed the hostname, you need to update the webhooks that connect Continuous Delivery for PE with your source control provider. For information on how to do this, see Update webhooks.
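When the deploy from the final step finishes, you can run a quick sanity check on the new cluster to confirm that the restore completed cleanly. This is a minimal sketch and assumes the application is deployed to the default namespace:
  # Velero records each restore as a namespaced resource; every restore listed
  # here should eventually report a Completed status.
  kubectl get restores -n velero
  # Application pods should settle into a Running/Ready state after the deploy.
  kubectl get pods -n default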