Continuous Delivery for PE known issues
These are the known issues for the Continuous Delivery for Puppet Enterprise (PE) 4.x release series.
Users trying to log in using SAML SSO could see a 405 Method Not Allowed
The SAML redirect URL changed in version 4.22.0 to <YOUR CD4PE WEB UI ENDPOINT>/cd4pe/saml-auth. Versions of Continuous Delivery for PE prior to 4.22.0 should continue to use <YOUR CD4PE WEB UI ENDPOINT>/saml-auth.
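For example, assuming a placeholder web UI endpoint of https://cd4pe.example.com, the redirect URL to register with your identity provider would look like this:

  # 4.22.0 and later:
  https://cd4pe.example.com/cd4pe/saml-auth
  # Prior to 4.22.0:
  https://cd4pe.example.com/saml-auth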
When upgrading Continuous Delivery for PE and Puppet Application Manager, you lose connectivity to your cluster
The issue occurs either during or after an upgrade of Continuous Delivery for PE and Puppet Application Manager. In either case, you eventually lose connectivity to your clusters in Puppet Application Manager and Kubernetes, and you get errors that images are missing even though containerd and kubelet are running.
To address this issue, follow the instructions in this knowledge base article.
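As a quick check before following the article, the commands below (standard systemd and kubectl invocations, not specific to this product) confirm the symptom that the node services are healthy while the cluster still reports missing images:

  # Confirm the container runtime and kubelet are running:
  systemctl status containerd kubelet
  # Look for image pull errors on pods across all namespaces:
  kubectl get nodes
  kubectl get pods -A | grep -Ei 'err|imagepull'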
Installing on 4 CPUs can hang with the new cd4pe pod stuck in a "Pending" status
Standalone installations on systems with 4 CPUs have an issue where upgrades and configuration changes can hang with the new cd4pe pod stuck in a "Pending" status.
To work around the issue, temporarily delete a non-critical pod. For example, to delete the Prometheus pod, use kubectl delete pod -n monitoring -l app.kubernetes.io/instance=k8s.
Once 4.22.0 is deployed, the best long-term solution is to deselect Enable Vault in the Puppet Application Manager (PAM) UI under Advanced configuration and tuning. Vault is not used in Continuous Delivery for PE 4.22.0, and freeing the resources it consumes allows 4-CPU installations to upgrade without this problem.
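To confirm the stuck state before deleting anything, a generic kubectl query lists pending pods; the pod and namespace names below are placeholders to fill in from its output:

  # List all pods stuck in the Pending phase:
  kubectl get pods -A --field-selector=status.phase=Pending
  # Inspect scheduler events (look for insufficient CPU) on the stuck pod:
  kubectl describe pod <cd4pe-pod-name> -n <namespace>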
Upgrading to 4.12.0 can delete the cd4pe Ingress
In busier Kubernetes clusters, upgrading to Continuous Delivery for PE version 4.12.0 can delete the cd4pe Ingress. If the application is unavailable after upgrading and kubectl get ingress cd4pe returns an empty list, redeploy the currently deployed version to recreate the Ingress.
Impact analysis can require additional disk space
Impact analysis branches are not always pruned after impact analysis is complete. This creates orphaned environments that fill up the /etc/puppetlabs/code/environments directory on the PE primary server. If you are experiencing storage capacity issues with impact analysis, increase the storage capacity on the /etc/puppetlabs/code/environments mount.
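To gauge whether orphaned environments are the cause, standard disk utilities on the PE primary server are sufficient; the path below is the default PE code directory:

  # Check free space on the mount holding the code directory:
  df -h /etc/puppetlabs/code/environments
  # Find large or stale environment checkouts (orphaned impact analysis
  # branches typically appear as extra environment directories):
  du -sh /etc/puppetlabs/code/environments/* | sort -rh | head -20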
Impact analysis tasks fail when using the satellite_pe_tools module
When using the satellite_pe_tools module with Continuous Delivery for PE, impact analysis tasks fail with an "Internal Server Error: org.jruby.exceptions.RuntimeError: (Error) PuppetDB not configured, please provide facts with your catalog request" error. This issue occurs because the API endpoint used to collect facts during impact analysis tasks errors if the fact_terminus parameter is set to satellite or to any value other than puppetdb. This issue is resolved in the versions of Puppet Server included in PE versions 2021.4 and 2019.8.9.
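If you are unsure which terminus your primary server uses, you can inspect the Puppet Server routes file; the path below is the PE default, and depending on how the Satellite integration was configured the setting may live elsewhere:

  # On the PE primary server, check whether the facts terminus is set
  # to something other than puppetdb:
  cat /etc/puppetlabs/puppet/routes.yaml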
Impact analysis tasks fail when using Puppet Enterprise versions 2021.2 or 2019.8.7
Impact analysis fails with an R10K::Module::Forge cannot handle option 'default_branch_override' error. If you're using PE version 2021.2 or 2019.8.7, you must update the pe-r10k package by following the instructions in this Puppet Support article to continue using impact analysis. After you update the package, you can update to future versions of PE using the installer as normal.
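To confirm which r10k ships with your installation before and after applying the fix, the PE-bundled binary reports its own version; the path below is the PE default:

  # Print the r10k version bundled with PE:
  /opt/puppetlabs/puppet/bin/r10k version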
Preflight check failure when using Puppet Application Manager versions 1.19 or 1.20
Puppet Application Manager versions 1.19 and 1.20 display an Analyzer Failed: invalid analyzer error during the preflight checks when deploying a new version of Continuous Delivery for PE. This error relates to analyzers supported by Puppet Application Manager version 1.24 and newer, and it can be safely ignored. To resolve the failure and take advantage of the new preflight checks, upgrade Puppet Application Manager to the latest version.
Deployments might time out on node groups with complex rules
When using a built-in deployment policy other than the eventual consistency policy to deploy changes to a node group with highly complex rules, the deployment times out in some cases.
A PE instance cannot be integrated if dns_alt_names is not set on the master certificate
If the Puppet master certificate for your PE instance does not have dns_alt_names configured, attempting to integrate the instance with Continuous Delivery for PE fails with a We could not successfully validate the provided credentials against the Code Manager Service error. The master certificate must be regenerated before PE is integrated with Continuous Delivery for PE. For instructions, see Regenerate master certificates in the PE documentation.
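To check whether the master certificate already carries alternative names, a standard openssl query against the PE default ssldir works; the puppet binary path shown is the PE default:

  # On the primary server, print the Subject Alternative Name section, if present:
  openssl x509 -noout -text \
    -in /etc/puppetlabs/puppet/ssl/certs/$(/opt/puppetlabs/bin/puppet config print certname).pem \
    | grep -A1 'Subject Alternative Name'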
Jobs fail when using chained SSL certificates on Windows
If you are using Continuous Delivery for PE with SSL configured to use chained certificates, attempts to run jobs on Windows job hardware fail.
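To verify whether your endpoint is serving a chained certificate, a standard openssl query works from any machine; cd4pe.example.com is a placeholder for your web UI endpoint:

  # Show the full certificate chain presented by the server:
  openssl s_client -connect cd4pe.example.com:443 -showcerts </dev/null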
Custom deployment policies aren't initially shown for new control repos
When your first action in a newly created control repo is to add a deployment to a pipeline, any custom deployment policies stored in the control repo aren't shown as deployment policy options. To work around this issue, click Built-in deployment policies and then click Custom deployment policies to refresh the list of available policies.