Maintenance and tuning

Follow these guidelines when you're tuning or performing maintenance on a node running Puppet Application Manager (PAM).

How to look up your Puppet Application Manager architecture

If you're running PAM on a Puppet-supported cluster, you can use the following command to determine your PAM architecture version:
kubectl get installer --sort-by=.metadata.creationTimestamp -o jsonpath='{.items[-1:].metadata.name}' ; echo
Depending on which architecture you used when installing, the command returns one of these values:
  • HA architecture: puppet-application-manager
  • Standalone architecture: puppet-application-manager-standalone
  • Legacy architecture: Any other value, for example, puppet-application-manager-legacy, cd4pe, or comply
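The mapping above can be sketched as a small helper. This is a hypothetical convenience function, not part of PAM; it simply classifies the installer name that the kubectl command above returns:

```shell
# Hypothetical helper: classify the PAM architecture from the installer name.
pam_architecture() {
  case "$1" in
    puppet-application-manager)            echo "HA architecture" ;;
    puppet-application-manager-standalone) echo "Standalone architecture" ;;
    *)                                     echo "Legacy architecture" ;;
  esac
}

# Example usage (assumes kubectl is configured for the cluster):
# name=$(kubectl get installer --sort-by=.metadata.creationTimestamp \
#   -o jsonpath='{.items[-1:].metadata.name}')
# pam_architecture "$name"
```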

Rebooting PAM nodes

Where possible, avoid rebooting or shutting down a PAM node. Shutting down an HA PAM node incorrectly could result in storage volume corruption and the loss of data.

For tasks such as package updates or security patches that require a reboot or shutdown, follow the procedure below to gracefully shut down the node and ensure that it is drained correctly.

To reboot a node:

  1. Shut down services using Ceph-backed storage:
    /opt/ekco/shutdown.sh
  2. If you're using a high availability (HA) cluster, drain the node:
    kubectl drain <NODE NAME> --ignore-daemonsets --delete-local-data
  3. Reboot the node.
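The steps above can be combined into a short script. This is a dry-run sketch that only prints the commands it would run (remove the leading `echo` calls to execute them); it assumes an HA cluster where draining is required:

```shell
# Dry-run sketch of the graceful reboot procedure. Prints each command
# instead of running it; drop the "echo" prefixes to execute for real.
reboot_pam_node() {
  node="$1"
  # 1. Shut down services using Ceph-backed storage.
  echo /opt/ekco/shutdown.sh
  # 2. On HA clusters, drain the node so workloads reschedule elsewhere.
  echo kubectl drain "$node" --ignore-daemonsets --delete-local-data
  # 3. Reboot the node.
  echo reboot
}

# Example: reboot_pam_node <NODE NAME>
```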

Load balancer health checks

To set up health checks for the load balancer that your Puppet Application Manager (PAM) applications are running behind, set up rules for these applications and services.

  • Puppet application (for example, Continuous Delivery for Puppet Enterprise or Puppet Comply)
    URL/port: https://<CDPE HOSTNAME>:443/status
    Notes: Although Puppet applications might expose other ports (Continuous Delivery for PE exposes ports 443, 80, and 8000), 443 is the HTTPS endpoint and is the best port to use for health checks.
  • Puppet Application Manager (PAM)
    URL/port: https://<KUBERNETES PRIMARY IP>:8800/healthz
  • External load balancer endpoint
    URL/port: Port 6443 or https://<KUBERNETES PRIMARY IP>:6443/livez
    Notes: For information on setting up a TCP probe on an external load balancer endpoint, consult the kURL load balancer documentation.
  • Local container registry (for offline installations)
    URL/port: https://<KUBERNETES PRIMARY IP>:9001
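As a quick manual check of the HTTPS endpoints above, you can probe them with curl. The `health_url` helper below is a hypothetical convenience for assembling the probe URLs; the hostnames and IPs are placeholders, and `-k` skips certificate verification for self-signed certificates:

```shell
# Hypothetical helper: assemble a health-check URL from host, port, and path.
health_url() {
  echo "https://$1:$2$3"
}

# Example probes (placeholders; print the HTTP status code for each endpoint):
# curl -ks -o /dev/null -w '%{http_code}\n' "$(health_url '<CDPE HOSTNAME>' 443 /status)"
# curl -ks -o /dev/null -w '%{http_code}\n' "$(health_url '<KUBERNETES PRIMARY IP>' 8800 /healthz)"
# curl -ks -o /dev/null -w '%{http_code}\n' "$(health_url '<KUBERNETES PRIMARY IP>' 6443 /livez)"
```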

Load balancing

An HA installation requires the following load balancers:

  • A network (L4, TCP) load balancer for port 6443 across primary nodes. This is required for Kubernetes components to continue operating in the event that a node fails. The port is only accessed by the Kubernetes nodes and any admins using kubectl.

  • A network (L4, TCP) or application (L7, HTTP/S) load balancer for ports 80 and 443 across all primaries and secondaries. This maintains access to applications in the event of a node failure. Include 8800 if you want external access to the Puppet Application Manager UI.

    Note: Include port 8000 for webhook callbacks if you are installing Continuous Delivery for PE.
Important: If you are using application load balancing, be aware that Ingress items use Server Name Indication (SNI) to route requests, which may require additional configuration with your load balancer. If your load balancer does not support SNI for health checks, enable the Enable load balancer HTTP health check setting on the Puppet Application Manager UI's Config page.
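One way to satisfy the L4 requirements above is an HAProxy TCP pass-through configuration. This is an illustrative sketch only, not a supported PAM configuration; the node IPs and names are placeholders, and ports 80, 8800, and 8000 would be added as further frontend/backend pairs following the same pattern:

```
# Illustrative HAProxy sketch (placeholder IPs). L4 (TCP) pass-through so
# TLS terminates on the cluster nodes, preserving SNI-based Ingress routing.
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

# Kubernetes API across primary nodes (port 6443)
frontend kubernetes_api
    bind *:6443
    default_backend k8s_primaries

backend k8s_primaries
    balance roundrobin
    server primary1 10.0.0.11:6443 check
    server primary2 10.0.0.12:6443 check

# Application HTTPS across all primaries and secondaries (port 443)
frontend app_https
    bind *:443
    default_backend app_nodes_443

backend app_nodes_443
    balance roundrobin
    server node1 10.0.0.11:443 check
    server node2 10.0.0.12:443 check
    server node3 10.0.0.21:443 check
```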