PE release notes

These are the new features, enhancements, resolved issues, and deprecations in this version of PE.

For security and vulnerability announcements, see Security: Puppet's Vulnerability Submission Process.

PE 2023.0

Released January 2023

Important: PE 2023 is our new leading-edge PE release stream (also referred to as STS). For important information about upgrading to 2023, see Upgrading Puppet Enterprise.

If you're on the LTS stream (2021.7), you'll find release notes and other information for that series in the 2021.7 documentation.

Customers on 2019.8.z, which is EOL, are encouraged to upgrade to either 2021.7 or 2023.

New features

Authenticate users in multiple LDAP domains
You can now connect multiple Lightweight Directory Access Protocol (LDAP) domains to PE. This new feature brings many changes to the role-based access control (RBAC) API and LDAP-related pages in the PE console.
In the PE console, view and manage all of your LDAP external directory service connections on the LDAP tab of the Access control page.
The Test connection button is removed. When you Connect to external directory services, the Connect button now automatically tests the connection before saving the configuration.
Use the Certificate chain field (or cert_chain API key) to define unique certificate chains across servers.
The following new endpoints replace deprecated or removed endpoints. For a list of deprecated and removed endpoints, refer to the Deprecations and removals section of these release notes.
Responses from these endpoints now include the identity_provider_id:
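To illustrate the new LDAP endpoints, here is a minimal sketch of a request body for POST /command/ldap/create. Only the cert_chain key is named in these release notes; every other field name (display_name, hostname, port) is an illustrative assumption, not the documented schema.

```python
import json

# Sketch: assemble a request body for the new POST /command/ldap/create endpoint.
# Field names other than "cert_chain" are assumptions for illustration only.
def ldap_create_payload(display_name, hostname, port, cert_chain):
    """Build a connection payload; cert_chain holds the server-specific chain (PEM)."""
    return {
        "display_name": display_name,
        "hostname": hostname,
        "port": port,
        "cert_chain": cert_chain,  # unique certificate chain per directory server
    }

payload = ldap_create_payload("Corp AD", "ldap.example.com", 636,
                              "-----BEGIN CERTIFICATE-----...")
body = json.dumps(payload)
print(body)
```

When PE saves a connection created this way, responses from the new endpoints identify it by its identity_provider_id, as noted above.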
Default timeout limits for tasks and plans
Timeout limits forcibly stop tasks and plans that run too long. This feature is useful for stopping tasks and plans that are stuck without requiring you to manually monitor task or plan progress.
CAUTION: The feature for forcibly stopping tasks and plans can result in incomplete Puppet runs, partial configuration changes, and other issues. When setting timeout limits, consider the task or plan scope, typical runtime, and your infrastructure's capacity (such as concurrency limits).
The default timeout limits are 40 minutes for tasks (per node) and 60 minutes for plans (for the entire plan run). You can change the global default limits by modifying the default_task_node_timeout and default_plan_timeout settings in your Orchestrator and pe-orchestration-services parameters.
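As a hedged sketch, the global defaults might be raised in Hiera like this. The setting names come from these release notes, but the key prefix and the unit (seconds is assumed here) are illustrative assumptions; confirm both against the Orchestrator and pe-orchestration-services parameter documentation.

```yaml
# Hypothetical Hiera sketch: raise the global timeout defaults.
# Key prefix and seconds-based values are assumptions.
puppet_enterprise::profile::orchestrator::default_task_node_timeout: 3600
puppet_enterprise::profile::orchestrator::default_plan_timeout: 7200
```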
Alternatively, you can set timeout limits for an individual task or plan when Running tasks from the console, Running plans from the console, or running tasks and plans with the Orchestrator API.
You can use the timeout option with the following Orchestrator API endpoints:
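For example, a per-run timeout can be included in the request body sent to the task-running endpoint (assumed here to be POST /orchestrator/v1/command/task; a unit of seconds is also an assumption):

```python
import json

# Sketch: a task-run request body that includes the new per-run "timeout" option.
# The endpoint path and the seconds-based unit are assumptions for illustration.
def task_request(task, nodes, params, timeout):
    return {
        "task": task,
        "params": params,
        "scope": {"nodes": nodes},
        "timeout": timeout,  # forcibly stop the job if it runs longer than this
    }

req = task_request("package", ["node1.example.com"],
                   {"action": "status", "name": "openssl"}, 1800)
print(json.dumps(req))
```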
Unique status for queued jobs
To better differentiate queued-but-unstarted jobs from jobs that are running, a new pending state was introduced for queued jobs.
The pending state is visible in the console and in responses from GET /plan_jobs and GET /plan_jobs/<job-id>.
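A client consuming GET /plan_jobs can now separate queued runs from running ones by filtering on the new state. The response shape below is a simplified assumption for illustration:

```python
# Sketch: pick out queued-but-unstarted plan runs from a GET /plan_jobs response.
# The sample response shape is a simplified, illustrative assumption.
sample = {
    "items": [
        {"name": "1234", "state": "pending"},
        {"name": "1235", "state": "running"},
        {"name": "1236", "state": "success"},
    ]
}

def pending_jobs(response):
    """Return the names of jobs that are queued but have not started."""
    return [job["name"] for job in response["items"] if job["state"] == "pending"]

print(pending_jobs(sample))
```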
View and edit scheduled tasks in the console
You can now view and edit scheduled task details in the console.


Java 17 upgrade
This version upgrades Java from version 11 to 17 and changes the default garbage collector from Parallel to G1.
If you customized PE Java services or use plug-ins that include Java code, thoroughly test PE 2023.0 in a non-production environment before upgrading.
Stop in-progress plans in the console
When Running plans in PE, you can click Stop plan on the plan's run details page to stop the plan. Stopping a plan prevents new tasks from starting while allowing in-progress tasks to finish. To forcibly stop in-progress tasks from a stopped plan, follow the instructions in Stop a task in progress.
Forcibly stop in-progress tasks in the console
When you Stop a task in progress, you can now both stop and forcibly stop in-progress tasks from the console. Previously, you had to use the Orchestrator API to forcibly stop tasks.
CAUTION: A forcible stop is the last resort when a task is stuck. This type of stop can result in incomplete Puppet runs, partial configuration changes, and other issues.
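The same stop-versus-force distinction applies when using the Orchestrator API directly. As a hedged sketch, a stop request body might look like the following; the "force" field is an assumption modeled on the console behavior described above, not a documented schema:

```python
import json

# Sketch: body for a stop request (assumed POST /orchestrator/v1/command/stop).
# The "force" field is an assumption mirroring the console's forcible stop.
def stop_request(job_id, force=False):
    body = {"job": job_id}
    if force:
        body["force"] = True  # last resort: may leave partial configuration changes
    return body

print(json.dumps(stop_request("8976", force=True)))
```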
Provisioning replicas requires matching agent versions
When provisioning a replica, the target node's agent version must match the primary server's agent version. If the versions don't match, the puppet infra provision replica command fails before initializing the provisioning process. Previously, the agent version wasn't checked, and mismatched agent versions caused provisioning to fail partway through.
Increased task_concurrency limit
The default value of the task_concurrency orchestrator parameter was increased from 250 to 1000.
recover_configuration command recreates nodes files
Previously, the puppet infrastructure recover_configuration command merged new values into the nodes files (at /etc/puppetlabs/enterprise/conf.d/nodes) instead of overwriting the files. This process caused problems if you deleted a value relevant to one or more nodes, because the deleted value would remain in these files and continue to be applied.
Now, the recover_configuration command fully rewrites the nodes files on each invocation. This process matches how the command handles changes to the user_data.conf file.
Notification when session expires due to inactivity
PE redirects users to the login page when a session expires due to inactivity. When this happens, the login page now includes a message that indicates why the user was logged out.
Improved performance when regenerating agent certificates for multiple agents
The puppet infrastructure run regenerate_agent_certificate action is now faster when you Regenerate agent certificates for multiple agents. You can also now supply the agent_pdb_query parameter with a PuppetDB query that generates the list of agents whose certificates you want to regenerate.
This action now uses the Puppet Server CA API endpoints directly, rather than relying on the puppetserver ca CLI, as it did previously. This process is faster, but, if you encounter problems, you can revert to the previous behavior by including use_puppetserver_cli=true in the command.
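The invocation described above can be assembled as follows. The PQL query here is purely illustrative; the agent_pdb_query and use_puppetserver_cli parameters come from these release notes.

```python
import shlex

# Sketch: build the CLI invocation described above. The PQL query is illustrative.
def regen_command(pdb_query, use_puppetserver_cli=False):
    parts = [
        "puppet", "infrastructure", "run", "regenerate_agent_certificate",
        f"agent_pdb_query={shlex.quote(pdb_query)}",
    ]
    if use_puppetserver_cli:
        parts.append("use_puppetserver_cli=true")  # revert to the pre-2023.0 code path
    return " ".join(parts)

print(regen_command('inventory[certname] { facts.os.name = "Ubuntu" }'))
```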
Specify Code Manager worker cache cleanup interval
The new deploy_pool_cleanup_interval parameter specifies how often Code Manager workers pause to clean their on-disk caches. Learn more about this setting in Code Manager parameters.
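A hedged Hiera sketch of this setting follows. The key prefix and the value format (seconds assumed) are assumptions; only the parameter name comes from these release notes, so confirm both in Code Manager parameters.

```yaml
# Hypothetical Hiera sketch; key prefix and seconds-based value are assumptions.
puppet_enterprise::master::code_manager::deploy_pool_cleanup_interval: 3600
```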
Expanded cipher compatibility
This release includes enhancements to cipher compatibility. For a complete list, go to Compatible ciphers.
CHACHA20 ciphers, compatible with non-FIPS PE installs
TLS_CHACHA20_POLY1305_SHA256 (TLSv1.3)
AES versions of two GCM ciphers, compatible with FIPS and non-FIPS installs
Removed restrictions
TLS_CHACHA20_POLY1305_SHA256 is no longer limited to Bolt server, ACE server, and NGINX.
ECDHE-ECDSA-CHACHA20-POLY1305 is no longer limited to NGINX.
ECDHE-RSA-CHACHA20-POLY1305 is no longer limited to NGINX.

Platform support

With this release, several previously deprecated platforms were removed. Before upgrading, review the important information provided in Platforms removed in 2023.0.
Removed primary server platforms
CentOS 8
Removed agent platforms
CentOS 8
Debian 9
Fedora 32
Fedora 34
Ubuntu 16.04
Removed patch management platforms
Debian 9
Fedora 34

Deprecations and removals

Deprecated RBAC API endpoints
POST /v1/groups and POST /v2/groups are replaced by POST /command/groups/create.
PUT /v1/ds is replaced by POST /command/ldap/create, POST /command/ldap/update, and POST /command/ldap/delete.
GET /v2/ds is replaced by GET /ldap.
GET /ds/test and PUT /ds/test are replaced by POST /command/ldap/test.
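For API clients that need updating, the deprecated-to-replacement mapping listed above can be expressed as a lookup table:

```python
# The deprecated-to-replacement mapping listed above, as a lookup table
# for updating RBAC API clients. Values are (method, path) pairs.
REPLACEMENTS = {
    ("POST", "/v1/groups"): [("POST", "/command/groups/create")],
    ("POST", "/v2/groups"): [("POST", "/command/groups/create")],
    ("PUT", "/v1/ds"): [("POST", "/command/ldap/create"),
                        ("POST", "/command/ldap/update"),
                        ("POST", "/command/ldap/delete")],
    ("GET", "/v2/ds"): [("GET", "/ldap")],
    ("GET", "/ds/test"): [("POST", "/command/ldap/test")],
    ("PUT", "/ds/test"): [("POST", "/command/ldap/test")],
}

print(REPLACEMENTS[("GET", "/v2/ds")])
```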
Removed RBAC API endpoints
Removed the previously deprecated GET /v1/ds/, which is replaced by GET /ldap.
Removed platforms
For information about platforms removed in this release, see the Platform Support section.

Resolved issues

Code Manager respects full_deploy setting in Hiera
The full_deploy parameter is now correctly applied when you Customize Code Manager configuration in Hiera.
Previously, full_deploy was disregarded when included in your Code Manager configuration in Hiera. As a workaround, you could create a separate .conf file to manage this parameter manually.
Important: If you created a .conf file for the full_deploy parameter, you must remove this file and reconfigure the parameter in Hiera (as described in Configuring module deployment scope).
Certain plans correctly restore puppet service to pre-plan state
Due to a bug introduced in PE 2021.6, some plans that stop the puppet service while they run were not restoring the service to its pre-plan state after the plan finished.
The four affected plans, and their associated puppet infra commands, are as follows:
  • The secondary_cert_regen plan, which is triggered by puppet infra run regenerate_compiler_certificate and puppet infra run regenerate_replica_certificate
  • The convert_legacy_compiler plan, which is triggered by puppet infra run convert_legacy_compiler
  • The reprovision_replica plan, which is triggered specifically by puppet infra upgrade replica --only-recreate-databases
  • The enable_ha_failover plan, which is triggered by puppet infra run enable_ha_failover
Important: If you were running PE 2021.6, 2021.7.0, or 2021.7.1 before upgrading to 2023.0, and you ran any of these four plans while running 2021.6, 2021.7.0, or 2021.7.1, check the state of the puppet service on your infrastructure nodes.
PuppetDB database user can purge reports
Fixed an issue that prevented the PuppetDB database user from purging reports.
Corrected fact list handling in some PE console UI components
Some UI components in the PE console use fact lists. A recent change caused these components to load the entire list of fact names, which degraded performance in environments with many facts. Fact list handling was corrected to resolve this issue and improve performance.
Orchestrator code directories excluded from puppet-backup create --scope=config
When Customizing backup and restore scope, the orchestrator code directories (specifically /opt/puppetlabs/server/data/orchestration-services/data-dir and /opt/puppetlabs/server/data/orchestration-services/code) are now excluded when you specify the config scope.
These directories are included in the code scope.
Plan action jobs have user data
Previously, jobs started by plan action functions didn't have an associated user stored in the database, which caused problems with some orchestrator commands. Now, user data is stored for these jobs.
Garbage collection log fixes
The introduction of Java 11 resulted in two issues relating to garbage collection logs. The issues are now fixed:
Dates and times are now included in garbage collection logs.
The maximum volume of retained garbage collection logs is 256 MB.
Security fixes
Addressed CVE-2022-41946 and CVE-2022-41404.