PE release notes

These are the new features, enhancements, resolved issues, and deprecations in this version of PE.

Security and vulnerability announcements are posted at

PE 2021.4

Released November 2021


TLS v1.3 is enabled by default

PE is now compatible with TLSv1.2 and TLSv1.3 by default for both FIPS and non-FIPS installations. To update your protocol or ciphers, review the Configuring security settings docs. For a list of compatible ciphers, see the Ciphers reference.

New RBAC /command endpoints

Several new /command endpoints in the RBAC API v1 allow you to use the API to make small changes to existing data. These endpoints have been added:
Endpoint                                   Usage
POST /command/roles/add-users              Add users to a role.
POST /command/roles/remove-users           Remove users from a role.
POST /command/roles/add-user-groups        Add user groups to a role.
POST /command/roles/remove-user-groups     Remove user groups from a role.
POST /command/roles/add-permissions        Add permissions to a role.
POST /command/roles/remove-permissions     Remove permissions from a role.
POST /command/users/revoke                 Revoke users.
POST /command/users/reinstate              Reinstate users.
POST /command/users/add-roles              Add roles to a user.
POST /command/users/remove-roles           Remove roles from a user.
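As a hedged sketch of how one of these endpoints might be called, the snippet below builds (but does not send) an add-users request. The host, port (4433), X-Authentication token header, and payload key names (role_id, user_ids) are assumptions based on common RBAC v1 API conventions, not details confirmed by these notes.

```python
import json
import urllib.request

# Hedged sketch: build a request that adds users to a role via the new
# RBAC v1 command endpoint. Host, port, token header, and payload keys
# are assumptions, not taken from these release notes.
def add_users_request(host, token, role_id, user_ids):
    body = json.dumps({"role_id": role_id, "user_ids": user_ids}).encode()
    return urllib.request.Request(
        f"https://{host}:4433/rbac-api/v1/command/roles/add-users",
        data=body,
        headers={"X-Authentication": token,
                 "Content-Type": "application/json"},
        method="POST",
    )

req = add_users_request("pe.example.com", "<RBAC token>", 5, ["some-user-uuid"])
print(req.get_method(), req.full_url)
```

Sending the request with urllib.request.urlopen additionally requires an ssl.SSLContext that trusts the PE CA certificate.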

Run patches sequentially in the group_patching plan

The pe_patch::group_patching plan now has a parameter called sequential_patching, which defaults to false (disabled). When set to true, nodes in the specified patch group are patched, rebooted (if needed), and the post-reboot script run (if specified) one at a time, rather than all at once.

Run the puppet infra run command with WinRM

The command puppet infra run now supports a --use-winrm flag, which forces the run command to connect to nodes via WinRM and use Bolt instead of the orchestrator.

LDAP lookup password not preserved on reload

For improved security, the lookup password is no longer preserved when the LDAP configuration page is reloaded or revisited in the console. You must enter the lookup password every time you change the LDAP configuration, and the password is required whenever a lookup user is specified.

Metrics collector and database modules bundled with Puppet Enterprise

In the PE Databases module, database maintenance (puppet_enterprise::enable_database_maintenance) is now enabled by default.

Profiling metrics for versioned deploys

Profiling metrics are now reported for versioned deploys in the file sync client's debug status output. Previously, metrics were collected only for basic deploys.

Delete spec directories from Code Manager deployments

Previously, Code Manager deployed whole modules to disk, often including the spec directory. The spec directory is used only for testing and serves no purpose in a production environment, so Code Manager now deletes spec directories from deployments to reduce their size on disk. You can disable this behavior per module by setting exclude_spec: false on module declarations in your Puppetfile.
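For example, a Puppetfile entry that opts one module back into deploying its spec directory might look like the following fragment. The module name and the exact option placement are assumptions; only the exclude_spec: false setting comes from the note above.

```ruby
# Puppetfile (fragment): keep the spec directory for this module only.
mod 'puppetlabs-stdlib', :latest,
  exclude_spec: false
```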

Prune CA CRL

The puppetserver ca prune action now runs during upgrade. On upgrade, the CA CRL is purged of duplicate entries, potentially making it a much smaller file. The Puppet CA also no longer adds duplicate entries to the CRL in the first place.

:full-deploy setting in Code Manager

Code Manager now uses r10k's new --incremental deploy feature for improved performance. Incremental deploys sync only those modules whose definitions allow their version to "float" (such as Git branches) or whose definitions have changed since the environment's last deployment. SVN modules are not supported.

More options when running the support script

This version of PE includes version 3 of the PE support script, which offers more options for modifying the support script's behavior to meet your needs.

Platform support

This version adds support for these platforms.

Primary server
  • AlmaLinux x86_64 for Enterprise Linux 8
  • Rocky Linux x86_64 for Enterprise Linux 8
  • Ubuntu 18.04 aarch64
  • Debian 11 (Bullseye) amd64
  • Red Hat Enterprise Linux 8 FIPS x86_64

Deprecations and removals

The LDAP endpoint GET /v1/ds is deprecated

The GET /v1/ds endpoint has been deprecated in favor of a more secure GET /v2/ds endpoint. To use the new version of the endpoint, see GET /v2/ds.

Resolved issues

Client-side lockfiles were not deleted on startup

Server-side lockfiles were cleaned up on startup by Puppet Server, but client-side lockfiles were not. Now both client- and server-side lockfiles are deleted during the Puppet Server startup process.

r10k deleted files in environments pointed to by symlinks

In control repositories that contain symlinks, r10k incorrectly interpreted the files in symlinked locations as duplicates and deleted the files.

Failed or in-progress reboots incorrectly reported that they finished rebooting successfully

When rebooting a node using the pe_patch::group_patching plan, the check to detect if a node rebooted always detected that it finished rebooting successfully, even if the reboot failed or was still in progress, due to a parsing error in the output. This behavior was observed and tested on RHEL-based platform versions 6 and 7, and SLES version 12, but might have existed on other platforms as well.

Windows agent installation failed if user name contained a space

The Windows agent install script failed if executed with a user name that included a space, like Max Spacey. You received the error Something went wrong with the installation along with exit code 1639. You can now use spaces in usernames without causing a failure.

Configuring environmentdir to be a relative path caused deploy failures

When deploying modules from a Puppetfile using r10k or Code Manager, the deploy failed if your environmentdir was configured as a relative path instead of an absolute path (the default).

The puppet code tool output informational data to stderr

A regression in the puppet code tool caused Code Manager to output information to stderr, whether it was successful or not. This was inconvenient if deploys were done through pipelines that were configured to register failures based on stderr output, because the behavior of puppet code always led to a failure notification. Now, the notification is printed to stdout instead of stderr.

SSO login failed if no email address was specified

When logging into the console via SSO, if no email address was specified in the IdP, the login failed. Users can now log in via SSO without specifying an email address.

The puppet plan subcommand segfaulted

When run without arguments, the puppet plan subcommand segfaulted. A check was added to ensure the command has arguments set when called.

PE 2021.3

Released September 2021


Code Manager support for Forge authentication

Code Manager now supports authentication to custom servers. You can configure this authentication via hieradata by setting authorization_token in the forge_settings parameter:

    baseurl: "https://private-forge.mysite"
    authorization_token: "Bearer mysupersecretauthtoken"

You must prepend the token with 'Bearer', particularly if you use Artifactory as your Forge server.

Puppet metrics collector module included in PE installation

The Puppet metrics collector module is now included in PE installations and upgrades. The module collects Puppet metrics by default, but system metrics collection is disabled. To enable the module to collect system metrics, change this parameter to true:
puppet_enterprise::enable_system_metrics_collection: true
If you have already downloaded the module from the Forge, you must either uninstall your copy of the module or upgrade it to the version installed with PE.

PE databases module included in PE installation

The pe_databases module is now included in PE installations and upgrades. The module is disabled by default, but you can enable it by setting this parameter to true:
puppet_enterprise::enable_database_maintenance: true

If you have already downloaded the module from the Forge, we recommend you upgrade to the version installed with PE.

Prevent replay attacks in SAML

SAML can now handle replay attacks by storing message IDs with their timestamps and rejecting message IDs that have been recently used, which prevents a bad actor from replaying a previously valid message to gain access. Stored message IDs are purged every 30 minutes.

Query by order and view timestamps in GET /plan_jobs endpoint

The GET /plan_jobs endpoint response now includes a timestamp field, and you can include the sorting parameters order and order_by in your request.
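As a hedged sketch, the request URL for a sorted query might be constructed as follows. The host, port (8143), and the order_by value ("timestamp") are assumptions; only the order and order_by parameter names come from the note above.

```python
import urllib.parse

# Hedged sketch: construct a GET /plan_jobs URL that sorts results
# oldest-first. Host, port, and the order_by value are assumptions.
def plan_jobs_url(host, order="asc", order_by="timestamp"):
    query = urllib.parse.urlencode({"order": order, "order_by": order_by})
    return f"https://{host}:8143/orchestrator/v1/plan_jobs?{query}"

print(plan_jobs_url("pe.example.com"))
```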

Faster Code Manager deploys

Code Manager deploys are now faster because unmanaged resources are more efficiently purged.

Platform support

This version adds support for these platforms.

Client tools
  • macOS 11

Deprecations and removals

Platforms deprecated

Support for these agent platforms is deprecated in this release.
  • Fedora 30, 31
  • macOS 10.14

Resolved issues

r10k refactors erroneously passed flag into modules and broke impact analysis

Recent r10k refactors broke Continuous Delivery for PE's impact analysis in the 2019.8.7 and 2021.2.0 releases. These refactors passed a default_branch_override flag to r10k via Code Manager's API, and r10k erroneously passed the flag to all modules created when the Puppetfile was parsed. The flag is not supported for Forge modules, which caused the following error:

ERROR -> Failed to evaluate /etc/puppetlabs/code-staging/environments/production_cdpe_ia_1624622874129/Puppetfile
Original exception:
R10K::Module::Forge cannot handle option 'default_branch_override'

This bug is now fixed.

Replica promotion could fail in air-gapped installations

If your primary server included AIX or Solaris pe_repo classes, replica promotion failed in air-gapped environments because the staged AIX and Solaris tarballs weren't copied to the replica.

r10k deployment purge level was unsafe when run with parallel deploys

Previously, Code Manager occasionally failed and returned an HTTP 500 error during environment deployments. This error occurred because of how Code Manager handled a race condition when using pools of r10k caches. The bug also affected Continuous Delivery for PE users. Now, Continuous Delivery for PE users no longer encounter issues related to this race condition, and Code Manager's parallel deploys no longer conflict with each other.

PE 2021.2

Released June 2021


Update CRLs

You can now update your CRLs using the new API endpoint: certificate_revocation_list. This new endpoint accepts a list of CRL PEMs as a body, inserting updated copies of the applicable CRLs into the trust chain. The CA updates the matching CRLs saved on disk if the submitted ones have a higher CRL number than their counterparts. You can use this endpoint if your CRLs require frequent updates. Do not use the endpoint to update the CRL associated with the Puppet CA signing certificate (only earlier ones in the certificate chain).
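As a hedged sketch, a client submitting updated CRLs might build a request like the one below. The HTTP method (PUT), port (8140), path prefix, and text/plain content type are assumptions; the endpoint name and the list-of-CRL-PEMs body come from the note above.

```python
import urllib.request

# Hedged sketch: submit updated intermediate CRL PEMs to the CA.
# Method, port, path prefix, and content type are assumptions.
def crl_update_request(host, crl_pems):
    body = "\n".join(crl_pems).encode()
    return urllib.request.Request(
        f"https://{host}:8140/puppet-ca/v1/certificate_revocation_list",
        data=body,
        headers={"Content-Type": "text/plain"},
        method="PUT",
    )

example_crl = "-----BEGIN X509 CRL-----\n...\n-----END X509 CRL-----"
req = crl_update_request("pe.example.com", [example_crl])
print(req.get_method(), req.full_url)
```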

Enable the argon2id algorithm for new password storage

You can switch the algorithm PE uses to store passwords from the default SHA-256 to argon2id by configuring new password algorithm parameters. To configure the algorithm, see Configure the password algorithm.
Note: Argon2id is not compatible with FIPS-enabled PE installations.

Generate cryptographic tokens for password resets

RBAC now only generates and accepts cryptographic tokens instead of JSON web tokens (jwt), which were lengthy and directly tied to certificates used by the RBAC instance for validation.

Filter by node state in jobs endpoint

You can filter the nodes by their current state in the /jobs/:job-id/nodes endpoint when retrieving a list of nodes associated with a given job. The following node states are available to query:

  • new
  • ready
  • running
  • stopping
  • stopped
  • finished
  • failed

Export node data from task runs to CSV

In the console, on the Task details page, you can now export the node data results from task runs to a CSV file by clicking Export data.

Sort activities by oldest to newest in events endpoint

In the activity service API, the /v1/events and /v2/events endpoints now allow you to sort activity from either oldest to newest (asc) or newest to oldest (desc).

Disable force-sync mode

File sync now always overrides the contents of the live directory when syncing. This default override corrects any local changes made in the live directory outside of Code Manager's workflow. You can no longer disable file sync's force-sync mode to implement this enhancement.

Regenerate primary server certificates with updated command

As part of the ongoing effort to remove harmful terminology, the command to regenerate primary server certificates has been renamed puppet infrastructure run regenerate_primary_certificate.

Differentiate backup and restore logs

Backup and restore log files are now appended with timestamps and aren't overwritten with each backup or restore action. Previously, backup and restore logs were created as singular, statically named files, backup.log and restore.log, which were overwritten on each execution of the scripts.

Encrypt backups

You can now encrypt backups created with the puppet-backup create command by specifying an optional --gpgkey.

Clean up old PE versions with smarter defaults

When cleaning up old PE versions with puppet infrastructure run remove_old_pe_packages, you no longer need to specify pe_version=current to clean up versions prior to the current one. current is now the default.

Platform support

This version adds support for these platforms.

  • macOS 11
  • Red Hat Enterprise Linux 8 ppc64le
  • Ubuntu 20.04 aarch64
  • Fedora 34

Deprecations and removals

purge-whitelist replaced with purge-allowlist

For Code Manager and file sync, the term purge-whitelist is deprecated and replaced with the new setting name purge-allowlist. The functionality and purpose of both setting names are identical.

pe_java_ks module removed

The pe_java_ks module has been removed from PE packages. If you have any references to the packaged module in your code base, you must remove them to avoid errors in catalog runs.

Resolved issues

Windows agent installation failed with a manually transferred certificate

Performing a secure installation on Windows nodes by manually transferring the primary server CA certificate failed with the connection error: Could not establish trust relationship for the SSL/TLS secure channel.

Upgrading a replica failed after regenerating the master certificate

If you previously regenerated the certificate for your master, upgrading a replica from 2019.6 or earlier could fail due to permission issues with backed up directories.

The apply shim in pxp-agent didn't pick up changes

When upgrading agents, the ruby_apply_shim didn't update properly, which caused plans containing apply or apply_prep actions to fail when run through the orchestrator, and resulted in this error message:
Exited 1:\n/opt/puppetlabs/pxp-agent/tasks-cache/apply_ruby_shim/apply_ruby_shim.rb:39:in `<main>': undefined method `map' for nil:NilClass (NoMethodError)\n

Running client tool commands against a replica could produce errors

Running puppet-code, puppet-access, or puppet query against a replica produced an error if the replica certificate used the legacy common name field instead of the subject alt name. The error has been downgraded to a warning, which you can bypass with some minimal security risk using the flag --use-cn-verification or -k, for example puppet-access login -k. To permanently fix the issue, you must regenerate the replica certificate: puppet infrastructure run regenerate_replica_certificate target=<REPLICA_HOSTNAME>.

Generating a token using puppet-access on Windows resulted in zero-byte token file error

Running puppet-access login to generate a token on Windows resulted in a zero-byte token file error. This is now fixed: the method used to set permissions on the token file was changed from os.chmod to file.chmod.

Invoking puppet-access when it wasn't configured resulted in unhelpful error

If you invoked puppet-access while it was missing a configuration file, it failed and returned unhelpful errors. Now, a useful message displays when puppet-access needs to be configured or if there is an unexpected answer from the server.

Enabling manage_delta_rpm caused agent run failures on CentOS and RHEL 8

Enabling the manage_delta_rpm parameter in the pe_patch class caused agent run failures on CentOS and RHEL 8 due to a package name change. The manage_delta_rpm parameter now appropriately installs the drpm package, resolving the agent run issue.

Editing a hash in configuration data caused parts of the hash to disappear

If you edited configuration data with hash values in the console, the parts of the hash that were not edited disappeared after committing changes, and then reappeared when the hash was edited again.

Null characters in task output caused errors

Tasks that print null bytes caused an orchestrator database error that prevented the result from being stored. This issue occurred most frequently for tasks on Windows that print output in UTF-16 rather than UTF-8.

Plans still ran after failure

When pe-orchestration-services exited unexpectedly, plan jobs sometimes continued running even though they failed. Now, jobs are correctly transitioned to failed status when pe-orchestration-services starts up again.

SAML rejected entity-id URIs

SAML only accepted URLs for the entity-id and would fail if a valid URI was specified. SAML now accepts both URLs and URIs for the entity-id.

Login length requirements applied to existing remote users

The login length requirement prevented reinstating existing remote users when they were revoked, resulting in a permissions error in the console. The requirement now applies to local users only.

Plan apply activity logging contained malformed descriptions

In activity entries for plan apply actions, the description was incorrectly prepended with desc.

Errors when enabling and disabling versioned deploys

Previously, if you switched back and forth from enabling and disabling versioned deploys mode, file sync failed to correctly manage deleted control repository branches. This bug is now fixed.

Lockless code deployment led to failed removal of old code directories

Previously, turning on lockless code deployment led to full disk utilization because old code directories were not removed. To work around this issue, you had to delete the existing old directories manually. Going forward, the removal is automatic.

PE 2021.1

Released May 2021


Customize value report estimates

You can now customize the low, med, and high time-freed estimates provided by the PE value report by specifying any of the value_report_* parameters in the PE Console node group in the puppet_enterprise::profile::console class.

Add a custom disclaimer banner to the console

You can optionally add a custom disclaimer banner to the console login page. To add a banner, see Create a custom login disclaimer.

Configure and view password complexity requirements in the console

There are configurable password complexity requirements that local users see when creating a new password. For example, "Usernames must be at least {0} characters long." To configure the password complexity options, see Password complexity parameters.

Re-download CRL on a regular interval

You can now configure the new parameter crl_refresh_interval to enable puppet agent to re-download its CRL on a regular interval. Use the console to configure the interval in the PE Agent group, in the puppet_enterprise::profile::agent class, and enter a duration (e.g. 60m) for Value.

Remove staging directory status for memory, disk usage, and timeout error improvements

The status output of the file sync storage service (specifically at the debug level) no longer reports the staging directory's status. Removing this staging information reduces timeout errors in the logs, removes the heavy disk usage the endpoint created, and preserves memory when many long-running status checks run in Puppet Server.

Exclude events from usage endpoint response

In the /usage endpoint, the new events parameter allows you to specify whether to include or exclude event activity information from the response. If set to exclude, the endpoint only returns information about node counts.

Avoid spam during patching

The patching task and plan now log fact generation rather than echoing Uploading facts. This change reduces log spam from servers with a large number of facts.

Return sensitive data from tasks

You can now return sensitive data from tasks by using the _sensitive key in the output. The orchestrator then redacts the key value so that it isn't printed to the console or stored in the database, and plans must include unwrap() to get the value. This feature is not supported when using the PCP transport in Bolt.
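A minimal task sketch illustrating the shape of such output follows; everything except the _sensitive key itself is hypothetical task logic.

```python
#!/usr/bin/env python3
# Hedged sketch of a task that returns a secret under the _sensitive
# key so the orchestrator redacts it from console output and storage.
import json

def run():
    new_password = "s3cr3t"  # pretend the task rotated a credential
    return {
        "status": "rotated",
        "_sensitive": {"password": new_password},
    }

print(json.dumps(run()))
```

A plan that calls this task receives the value wrapped as Sensitive and must call unwrap() to read it, per the note above.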

Parameter name updates

As part of the ongoing effort to remove harmful terminology, the parameter master_uris was renamed primary_uris.

Changes to defaults

  • The environment timeout settings introduced in 2019.8.3 have been updated to simplify defaults. When you enable Code Manager, environment_timeout is now set to 5m, clearing short-lived environments 5 minutes from when they were last used. The environment_timeout_mode parameter has been removed, and the timeout countdown on environments now always begins from their last use.

Platform support

This version adds support for these platforms.

  • Fedora 32

Deprecations and removals

Configuration settings deprecated

The following configuration setting names are deprecated in favor of new terminology:

Previous setting name    New setting name
master-conf-dir          server-conf-dir
master-code-dir          server-code-dir
master-var-dir           server-var-dir
master-log-dir           server-log-dir
master-run-dir           server-run-dir

The previous setting names remain available for backward compatibility, but you should upgrade to the new setting names at your earliest convenience.

Resolved issues

Upgrade failed with cryptic errors if agent_version was configured for your infrastructure pe_repo class

If you configured the agent_version parameter for the pe_repo class that matches your infrastructure nodes, upgrade could fail with a timeout error when the installer attempted to download a non-default agent version. The installer now warns you to remove the agent_version parameter if applicable.

Upgrade with versioned deploys caused Puppet Server crash

If versioned_deploys was enabled when upgrading to version 2019.8.6 or 2021.1, then the Puppet Server crashed.

Compiler upgrade failed with client certnames defined

Existing settings for client certnames could cause upgrade to fail on compilers, typically with the error Value does not match schema: {:client-certnames disallowed-key}.

Compiler upgrade failed with no-op configured

Upgrade failed on compilers running in no-op mode. Upgrade now proceeds on infrastructure nodes regardless of their no-op configuration.

Installing Windows agents with the .msi package failed with a non-default INSTALLDIR

When installing Windows agents with the .msi package, if you specified a non-default installation directory, agent files were nonetheless installed at the default location, and the installation command failed when attempting to locate files in the specified INSTALLDIR.

Patching failed on Windows nodes with non-default agent location

On Windows nodes, if the Puppet agent was installed to a location other than the default C: drive, the patching task or plan failed with the error No such file or directory.

Patching failed on Windows nodes when run during a fact generation

The patching task and plan failed on Windows nodes if run during fact generation. Patching and fact generation processes, which share a lock file, now wait for each other to finish before proceeding.

File sync failed to copy symlinks if versioned deploys was enabled

If you enabled versioned deploys, then the file sync client failed to copy symlinks and incorrectly copied the symlinks' targets instead. This copy failure crashed the Puppet Server.

Backup failed with an error about the stockpile directory

The puppet-backup create command failed under certain conditions with an error that the /opt/puppetlabs/server/data/puppetdb/stockpile directory was inaccessible. That directory is now excluded from backup.

Console reboot task failed

Rebooting a node using the reboot task in the console failed due to the removal of win32 gems in Puppet 7. The reboot module packaged with PE has been updated to version 4.0.2, which resolves this issue.

Removed Pantomime dependency in the orchestrator

The version of pantomime in the orchestrator had a third party vulnerability (tika-core). Because of the vulnerability, pantomime usage was removed from the orchestrator, but pantomime still existed in the orchestration-services build. The dependency has now been completely removed.

Injection attack vulnerability in csv exports

There was a vulnerability in the console where .csv files could contain malicious user input when exported. The values =, +, -, and @ are now prohibited at the beginning of cells to prevent an injection attack.

License page in the console timed out

Some large queries run by the License page caused the page to load slowly or time out.

PE 2021.0

Released February 2021

New features

SAML support

SAML 2.0 support allows you to securely authenticate users with single sign-on (SSO) and/or multi-factor authentication (MFA) through your SAML identity provider. To configure SAML in the console, see Connect a SAML identity provider to PE.


Generate, view, and revoke tokens in the console

In the console, on the My account page, in the Tokens tab, you can create and revoke tokens, or view a list of your currently active tokens. Administrators can view and revoke another user's tokens on the User details page.

Migrate CA files to the new default directory

The default CA directory has moved to a new location at /etc/puppetlabs/puppetserver/ca from its previous location at /etc/puppetlabs/puppet/ssl/ca. This change helps prevent unintentionally deleting your CA files in the process of regenerating certificates. If applicable, you're prompted with CLI instructions for migrating your CA directory after upgrade:
/opt/puppetlabs/bin/puppet resource service pe-puppetserver ensure=stopped 
/opt/puppetlabs/bin/puppetserver ca migrate 
/opt/puppetlabs/bin/puppet resource service pe-puppetserver ensure=running
/opt/puppetlabs/bin/puppet agent -t

Install the Puppet agent despite issues in other yum repositories

When installing the Puppet agent on a node, the installer's yum operations are now limited to the PE repository, allowing the agent to be installed successfully even if other yum repositories have issues.

Get better insight into replica sync status after upgrade

Improved error handling for replica upgrades now results in a warning instead of an error if re-syncing PuppetDB between the primary and replica nodes takes longer than 15 minutes.

Fix replica enablement issues

When provisioning and enabling a replica (puppet infra provision replica --enable), the command now times out if there are issues syncing PuppetDB, and provides instructions for fixing any issues and separately provisioning the replica.

Patch nodes with built-in health checks

The new group_patching plan patches nodes with pre- and post-patching health checks. The plan verifies that Puppet is configured and running correctly on target nodes, patches the nodes, waits for any reboots, and then runs Puppet on the nodes to verify that they're still operational.

Run a command after patching nodes

A new parameter in the pe_patch class, post_patching_scriptpath, enables you to run an executable script or binary on a target node after patching is complete. Additionally, the pre_patching_command parameter has been renamed pre_patching_scriptpath to more clearly indicate that you must provide the file path to a script, rather than an actual command.

Patch nodes despite certain read-only directory permissions

Patching files have moved to more established directories that are less likely to be read-only: /opt/puppetlabs/pe_patch for *nix, and C:\ProgramData\PuppetLabs\pe_patch for Windows. Previously, patching files were located at /var/cache/pe_patch and /usr/local/bin for *nix and C:\ProgramData\pe_patch for Windows.

If you use patch-management, keep these implications in mind as you upgrade to this version:
  • Before upgrading, optionally back up existing patching log files, located on patch-managed nodes at /var/cache/pe_patch/run_history or C:\ProgramData\pe_patch. Existing log files are deleted when the patching directory is moved.
  • After upgrading, you must run Puppet on patch-managed nodes before running the patching task again, or the task fails.

Use Hiera lookups outside of apply blocks in plans

You can look up static Hiera data in plans outside of apply blocks by adding the plan_hierarchy key to your Hiera configuration.
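As a sketch, a hiera.yaml carrying this key might look like the following fragment. The hierarchy names and file paths are placeholders; the point is that plan_hierarchy sits at the top level alongside the regular hierarchy key.

```yaml
# hiera.yaml (fragment): plan_hierarchy is consulted by lookups made
# in plan code outside apply blocks. Names and paths are placeholders.
version: 5
hierarchy:
  - name: "Per-node data"
    path: "nodes/%{trusted.certname}.yaml"
plan_hierarchy:
  - name: "Static plan data"
    path: "static.yaml"
```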

See the duration of Puppet and plan runs

New duration, created_timestamp, and finished_timestamp keys allow you to see the duration of Puppet and plan runs in the GET /jobs and GET /plan_jobs endpoints.

View the error location in plan error details

The puppet plan functions provide the file and line number where the error occurred in the details key of the error response.

Run plans on PuppetDB queries and node classifier group targets

The params key in the POST /command/environment_plan_run endpoint allows you to specify PuppetDB queries and node groups as targets during a plan run.

Use masked inputs for sensitive parameters

The console now uses password inputs for sensitive parameters in tasks and plans to mitigate a potential "over the shoulder" attack vector.

Configure how many times the orchestrator allows status request timeouts

Configure the new allowed_pcp_status_requests parameter to define how many times an orchestrator job allows status requests to time out before the job fails. The parameter defaults to 35 timeouts. You can use the console to configure it in the PE Orchestrator group, in the puppet_enterprise::profile::orchestrator class.

Accept and store arbitrary data related to a job

An optional userdata key allows you to supply arbitrary key-value data to a task, plan, or Puppet run. The key was added to the following endpoints:
  • POST /command/deploy
  • POST /command/task
  • POST /command/plan_run
  • POST /command/environment_plan_run
The key is returned in the following endpoints:
  • GET /jobs
  • GET /jobs/:job-id
  • GET /plan_jobs
  • GET /plan_jobs/:job-id
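As a hedged sketch, a task request body carrying userdata might look like the following. The payload shape around the userdata key is an assumption based on typical orchestrator task requests; only the userdata key itself comes from the note above.

```python
import json

# Hedged sketch: a POST /command/task request body carrying arbitrary
# userdata (a ticket reference here). The surrounding payload shape is
# an assumption; only the userdata key comes from the release note.
payload = {
    "environment": "production",
    "task": "package",
    "params": {"action": "status", "name": "openssl"},
    "scope": {"nodes": ["node1.example.com"]},
    "userdata": {"ticket": "INC-12345", "initiated_by": "pipeline"},
}
print(json.dumps(payload["userdata"]))
```

The same userdata map is returned unchanged in the GET endpoints listed above.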

Sort and reorder nodes in node lists

New optional parameters are available in the GET /jobs/:job-id/nodes endpoint that allow you to sort and reorder node names in the node list from a job.

Add custom parameters when installing agents in the console

In the console, on the Install agent on nodes page, you can click Advanced install and add custom parameters to the pe_bootstrap task during installation.

Update facts cache terminus to use JSON or YAML

The facts cache terminus is now JSON by default. You can configure the facts_cache_terminus parameter to switch from JSON to YAML. Use the console to configure the parameter in the PE Master group, in the puppet_enterprise::profile::master::puppetdb class, and enter yaml for Value.

Configure failed deployments to display r10k stacktrace in error output

Configure the new r10k_trace parameter to include the r10k stack trace in the error output of failed deployments. The parameter defaults to false. Use the console to configure the parameter in the PE Master group, in the puppet_enterprise::master::code_manager class, and enter true for Value.

Reduce query time when querying nodes with a fact filter

When the console queries PuppetDB to populate information on the Status page, the query uses the optimize_drop_unused_joins feature in PuppetDB to improve performance when filtering on facts. You can disable drop-joins by setting the environment variable PE_CONSOLE_DISABLE_DROP_JOINS=yes in /etc/sysconfig/pe-console-services and restarting the console service.
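Concretely, opting out looks like the following config fragment (the file path and variable name come from the text above; the restart step is required for the change to take effect):

```
# /etc/sysconfig/pe-console-services — add this line, then restart the
# pe-console-services service:
PE_CONSOLE_DISABLE_DROP_JOINS=yes
```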

Deprecations and removals

Platforms deprecated

Support for these agent platforms is deprecated in this release.

  • Enterprise Linux 5
  • Enterprise Linux 7 ppc64le
  • SUSE Linux Enterprise Server 11
  • SUSE Linux Enterprise Server 12 ppc64le
  • Ubuntu 16.04 ppc64le
  • Debian 8
  • Solaris 10
  • Microsoft Windows 7, 8.1
  • Microsoft Windows Server 2008, 2008 R2

Platforms removed

Support for these agent platforms is removed in this release. Before upgrading to this version, remove the pe_repo::platform class for these operating systems from the PE Master node group in the console, and from your code and Hiera.

  • AIX 6.1
  • Enterprise Linux 4
  • Enterprise Linux 6, 7 s390x
  • Fedora 26, 27, 28, 29
  • Mac OS X 10.9, 10.12, 10.13
  • SUSE Linux Enterprise Server 11, 12 s390x

Resolved issues

PuppetDB restarted continually after upgrade with deprecated parameters

After upgrade, if the deprecated parameters facts_blacklist or cert_whitelist_path remained, PuppetDB restarted after each Puppet run.

Tasks failed when specifying both as the input method

In task metadata, using both for the input method caused the task run to fail.

Patch task misreported success when it timed out on Windows nodes

If the pe_patch::patch_server task took longer than the timeout setting to apply patches on a Windows node, the debug output noted the timeout, but the task erroneously reported that it completed successfully. Now, the task fails with an error noting that the task timed out. Any updates in progress continue until they finish, but remaining patches aren't installed.

Orchestrator created an extra JRuby pool

During startup, the orchestrator created two JRuby pools: one for scheduled jobs and one for everything else. This happened because the JRuby pool was not yet available in the configuration passed to the post-migration-fa function, which created its own JRuby pool in response. These JRuby pools accumulated over time because the stop function didn't know about them.

Console install script installed non-FIPS agents on FIPS Windows nodes

The command provided in the console to install Windows nodes installed a non-FIPS agent regardless of the node's FIPS status.

Unfinished sync reported as finished when clients shared the same identifier

Because the orchestrator and puppetserver file sync clients shared the same identifier, Code Manager reported an unfinished sync as "all-synced": true. Whichever client finished polling first notified the storage service that the sync was complete, regardless of the other client's sync status. This premature report might have caused attempts to access tasks and plans before the newly deployed code was available.

Refused connection in orchestrator startup caused PuppetDB migration failure

A condition on startup failed to delete stale scheduled jobs and prevented the orchestrator service from starting.

Upgrade failed with Hiera data based on certificate extensions

If your Hiera hierarchy contained levels based off certificate extensions, like trusted.extensions.pp_role, upgrade could fail if that Hiera entry was vital to running services, such as java_args. The failure was due to the puppet infrastructure recover_configuration command, which runs during upgrade, failing to recognize the hierarchy level.

File sync issued an alert when a repository had no commits

When a repository had no commits, the file sync status recognized this repository’s state as invalid and issued an alert. A repository without any commits is still a valid state, and the service is fully functional even when there are no commits.

Upgrade failed with infrastructure nodes classified based on trusted facts

If your infrastructure nodes were classified into an environment based on a trusted fact, the recover configuration command used during upgrade could choose an incorrect environment when gathering data about infrastructure nodes, causing upgrade to fail.

Patch task failed on Windows nodes with old logs

When patching Windows nodes, if an existing patching log file was 30 or more days old, the task failed trying to both write to and clean up the log file.

Backups failed if a Puppet run was in progress

The puppet-backup command failed if a Puppet run was in progress.

Default branch override did not deploy from the module's default branch

A default branch override did not deploy from the module’s default branch if the branch override specified by Impact Analysis did not exist.

Module-only environment updates did not deploy in Versioned Deploys

Module-only environment updates did not deploy if you tracked a module's branch and redeployed the same control repository SHA, which pulled in new versions of the modules.