Configuring orchestration

After installing PE, you can change some default settings to further configure the orchestrator and pe-orchestration-services.

Configure the orchestrator and pe-orchestration-services

These are some optional parameters you can use to configure the behavior of the orchestrator and the pe-orchestration-services service.

You can modify these profile class parameters in the Puppet Enterprise (PE) console on the Classes tab for the PE Orchestrator infrastructure node group.

puppet_enterprise::profile::orchestrator::task_concurrency
Integer representing the number of task or plan actions that can run concurrently in the orchestrator. All task and plan actions are limited by this concurrency limit regardless of transport type (WinRM, SSH, PCP).
If a task or plan action runs on multiple nodes, each node consumes one action. For example, if a task needs to run on 300 nodes and task_concurrency is set to 200, the task can run on 200 nodes while the remaining 100 nodes wait in the queue.
Default: 250 (actions)
puppet_enterprise::profile::bolt_server::concurrency
An integer that determines the maximum number of simultaneous task or plan requests the orchestrator can make to bolt-server. Only task or plan executions on nodes using the SSH or WinRM transport methods are limited by this setting, because only those transports require requests to bolt-server.
Default: 100 (requests)
CAUTION: Do not set the orchestrator's task_concurrency higher than the bolt-server concurrency limit. Doing so can cause timeouts that lead to failed task runs.
puppet_enterprise::profile::agent::pxp_enabled
Disable or enable the PXP service by setting this parameter to true or false. If you disable it, you can't use the orchestrator or the Run Puppet button in the console.
Default: true
puppet_enterprise::profile::orchestrator::global_concurrent_compiles
An integer that determines how many concurrent compile requests can be outstanding to the primary server, across all orchestrator jobs.
Default: 8 (requests)
puppet_enterprise::profile::orchestrator::job_prune_threshold
An integer of 2 or greater, which specifies the number of days to retain job reports.
This parameter sets the corresponding parameter job-prune-days-threshold.
While job_prune_threshold itself has no default value, job-prune-days-threshold has a default of 30 (30 days).
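For example, to retain job reports for 45 days, you could set the parameter in Hiera. This is a sketch: it assumes Hiera-style automatic class parameter lookup, and the value shown is illustrative, not a recommendation.

```yaml
# Hiera sketch (example value): retain orchestrator job reports for 45 days.
# This sets the corresponding job-prune-days-threshold to 45.
puppet_enterprise::profile::orchestrator::job_prune_threshold: 45
```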
puppet_enterprise::profile::orchestrator::pcp_timeout
An integer that represents the number of seconds an agent can spend attempting to connect to a PCP broker. If the agent can't connect to the broker within that time frame, the run times out.
Default: 30 (seconds)
puppet_enterprise::profile::orchestrator::run_service
Disable or enable orchestration services. Set to true or false.
Default: true
puppet_enterprise::profile::orchestrator::allowed_pcp_status_requests
An integer that defines how many times an orchestrator job allows status requests to time out before a job is considered failed. Status requests wait 12 seconds between timeouts, so multiply the value of the allowed_pcp_status_requests by 12 to determine how many seconds the orchestrator waits on targets that aren’t responding to status requests.
Default: 35 (timeouts)
puppet_enterprise::profile::orchestrator::java_args
Specifies the heap size, that is, the amount of memory that each Java process is allowed to request from the operating system for the orchestrator to use.
Default: 704 MB
puppet_enterprise::profile::orchestrator::jruby_max_active_instances
An integer that determines the maximum number of JRuby instances that the orchestrator creates to execute plans. Because each plan run consumes one JRuby instance, this value is effectively the maximum number of concurrent plans. Setting the orchestrator heap size (java_args) automatically sets jruby_max_active_instances using the formula java_args / 1024. If the computed value is less than one, the orchestrator defaults to one JRuby instance.
Default: 1 (instance)
Note: The jruby_max_active_instances pool for the orchestrator is separate from the Puppet Server pool. See the JRuby max active instances tuning guide for more information.
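To illustrate the formula, here is a hedged Hiera sketch. The hash form of java_args shown below is an assumption about how the heap settings are expressed, and the values are examples only:

```yaml
# Hiera sketch (example values): set a 4096 MB orchestrator heap.
puppet_enterprise::profile::orchestrator::java_args:
  Xmx: '4096m'
  Xms: '4096m'
# With java_args / 1024 = 4096 / 1024 = 4, the orchestrator creates
# up to 4 JRuby instances, so up to 4 plans can run concurrently.
```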
puppet_enterprise::profile::plan_executor::versioned_deploys
Set to true to enable versioned deployments of environment code. Use this setting to run plans alongside code deployments.
Default: false
Important: Setting this to true disables the file sync client's locking mechanism that usually enforces a consistent environment state for your plans. This can cause Puppet functions and plans that call other plans to behave unexpectedly if a code deployment occurs while a plan is running.

For information about how the orchestrator works, what you can do with it, and additional parameters and configuration options, refer to Orchestrating Puppet runs, tasks, and plans.

Configure the PXP agent

Puppet Execution Protocol (PXP) is a messaging system used to request tasks and communicate task statuses. The PXP agent runs the PXP service and you can configure it using Hiera or the console.

puppet_enterprise::pxp_agent::ping_interval
Controls how frequently (in seconds) PXP agents ping PCP brokers. If a broker doesn't respond, the agent attempts to reconnect.
Default: 120 (seconds)
puppet_enterprise::pxp_agent::pxp_logfile
A string that represents the path to the PXP agent log file, which you can use to debug issues with the orchestrator.
Default:
  • *nix: /var/log/puppetlabs/pxp-agent/pxp-agent.log

  • Windows: C:\ProgramData\PuppetLabs\pxp-agent\var\log\pxp-agent.log

puppet_enterprise::pxp_agent::spool_dir_purge_ttl
The amount of time to keep records of old Puppet or task runs on agents. You can declare time in minutes (30m), hours (2h), or days (14d).
Default: 14d
puppet_enterprise::pxp_agent::task_cache_dir_purge_ttl
Controls how long tasks are cached after use. You can declare time in minutes (30m), hours (2h), or days (14d).
Default: 14d
puppet_enterprise::pxp_agent::broker_proxy
Sets a proxy URI used to connect to the pcp-broker to listen for task and Puppet runs.
puppet_enterprise::pxp_agent::master_proxy
Sets a proxy URI used to connect to the primary server to download task implementations.
puppet_enterprise::pcp_max_message_size_mb
Sets the message size, in MB, for pcp_broker, pxp_agent, and the orchestrator. The maximum message size cannot be higher than the default of 64 MB, so you can only reduce it.
Default: 64 (MB)
Note: We do not recommend reducing the pcp_max_message_size_mb parameter if you send or receive large payloads, because a smaller maximum might cause errors for large task and plan run parameters and output.
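Because PXP agent settings can be managed through Hiera, a configuration sketch might look like the following. All values are examples, and the proxy URI is hypothetical:

```yaml
# Hiera sketch (example values) for PXP agent tuning.
puppet_enterprise::pxp_agent::ping_interval: 300         # ping brokers every 5 minutes
puppet_enterprise::pxp_agent::spool_dir_purge_ttl: '7d'  # keep old run records for 7 days
puppet_enterprise::pxp_agent::broker_proxy: 'http://proxy.example.com:3128'  # hypothetical proxy
```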

Correct ARP table overflow

In larger deployments that use the PCP broker, you might encounter ARP table overflows and need to adjust some system settings.

Overflows occur when the ARP table—a local cache of IP address to MAC address resolutions—fills and starts evicting old entries. When frequently used entries are evicted, network traffic rises as they are restored, increasing network latency and CPU load on the broker.

A typical log message looks like:

[root@s1 peadmin]# tail -f /var/log/messages
Aug 10 22:42:36 s1 kernel: Neighbour table overflow.
Aug 10 22:42:36 s1 kernel: Neighbour table overflow.
Aug 10 22:42:36 s1 kernel: Neighbour table overflow.

To work around this issue:

Increase sysctl settings related to ARP tables.
For example, the following settings are appropriate for networks hosting up to 2000 agents:
# Set max table size
net.ipv6.neigh.default.gc_thresh3=4096
net.ipv4.neigh.default.gc_thresh3=4096
# Start aggressively clearing the table at this threshold
net.ipv6.neigh.default.gc_thresh2=2048
net.ipv4.neigh.default.gc_thresh2=2048
# Don't clear any entries until this threshold
net.ipv6.neigh.default.gc_thresh1=1024
net.ipv4.neigh.default.gc_thresh1=1024
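Settings applied with sysctl -w are lost on reboot. To persist them, you can place them in a drop-in file under /etc/sysctl.d; the filename below is an example:

```
# /etc/sysctl.d/99-arp-tables.conf (example filename)
net.ipv6.neigh.default.gc_thresh3=4096
net.ipv4.neigh.default.gc_thresh3=4096
net.ipv6.neigh.default.gc_thresh2=2048
net.ipv4.neigh.default.gc_thresh2=2048
net.ipv6.neigh.default.gc_thresh1=1024
net.ipv4.neigh.default.gc_thresh1=1024
```

After creating the file, run sysctl --system (or reboot) to load it.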