PE known issues
These are the known issues in PE 2023.5.
Installation and upgrade known issues
These are the known issues for installation and upgrade in this release.
Converting legacy compilers fails with an external certificate authority
When your installation uses an external certificate authority, the puppet infrastructure run convert_legacy_compiler command fails with an error during the certificate-signing step.
Agent_cert_regen: ERROR: Failed to regenerate agent certificate on node <compiler-node.domain.com>
Agent_cert_regen: bolt/run-failure:Plan aborted: run_task 'enterprise_tasks::sign' failed on 1 target
Agent_cert_regen: puppetlabs.sign/sign-cert-failed Could not sign request for host with certname <compiler-node.domain.com> using caserver <master-host.domain.com>
To work around this issue:
- Log on to the CA server and manually sign certificates for the compiler. (A signing example follows these steps.)
- On the compiler, run Puppet:
puppet agent -t
- Unpin the compiler from the PE Master group, either from the console or from the CLI using the command:
/opt/puppetlabs/bin/puppet resource pe_node_group "PE Master" unpinned="<COMPILER_FQDN>"
- On your primary server, in the pe.conf file, remove the entry puppet_enterprise::profile::database::private_temp_puppetdb_host
- If you have an external PE-PostgreSQL node, run Puppet on that node:
puppet agent -t
- Run Puppet on your primary server:
puppet agent -t
- Run Puppet on all compilers:
puppet agent -t
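As an example of the manual signing step, if the CA server in your setup is the host running the Puppet CA service, listing and signing the pending request might look like this (the certname placeholder comes from the error message above; with a fully external CA, use that CA's own signing tooling instead):
puppetserver ca list
puppetserver ca sign --certname <compiler-node.domain.com>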
Converted compilers can slow PuppetDB in multi-region installations
In configurations that rely on high-latency connections between your primary servers and compilers – for example, in multi-region installations – converted compilers running the PuppetDB service might experience significant slowdowns. If your primary server and compilers are distributed among multiple data centers connected by high-latency links or congested network segments, reach out to Support for guidance before converting legacy compilers.
Disaster recovery known issues
These are the known issues for disaster recovery in this release.
There are no known issues related to disaster recovery at this time.
FIPS known issues
These are the known issues with FIPS-enabled PE in this release.
Puppet agent fails to start on FIPS-compliant RHEL 7 and 8
After you install or upgrade the Puppet agent on FIPS-compliant Red Hat Enterprise Linux (RHEL) 7 or 8, you might find that the puppet service stops. Running journalctl --unit puppet might return an error message similar to the following:
Sep 22 17:53:49 <hostname> puppet[8982]: Error: Could not run: SSL_CTX_new: library has no ciphers
Until the issue is fixed, you can manually start the puppet service by running systemctl start puppet on the agent node.
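For example, on an affected agent node you might confirm the error and then restart the service like this (the final status check is simply a verification step, not part of the documented workaround):
journalctl --unit puppet
systemctl start puppet
systemctl status puppet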
FIPS-enabled PE 2023.0 and later can't use the default system cert store
FIPS-compliant builds running PE 2023.0 and later can't use the default system cert store, which is used automatically with some reporting services. This setting is configured by the report_include_system_store Puppet parameter that ships with PE.
Removing the puppet-cacerts file (located at /opt/puppetlabs/puppet/ssl/puppet-cacerts) can allow a report processor that eagerly loads the system store to continue with a warning that the file is missing.
If HTTP clients require external certs, we recommend using a custom cert store containing only the necessary certs. You can create this cert store by concatenating existing pem files and configuring the ssl_trust_store Puppet parameter to point to the new cert store.
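As a rough sketch (the pem file names and the destination path are placeholders, not shipped defaults), you could concatenate the required certs and point Puppet at the resulting store like this:
cat /etc/pki/tls/certs/internal-ca.pem /etc/pki/tls/certs/vendor-ca.pem > /etc/puppetlabs/puppet/custom-trust-store.pem
puppet config set ssl_trust_store /etc/puppetlabs/puppet/custom-trust-store.pem --section main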
Puppet Server FIPS installations don’t support Ruby’s OpenSSL module
FIPS-enabled PE installations don't support extensions or modules that use the standard Ruby OpenSSL library, such as hiera-eyaml. As a workaround, you can use a non-FIPS-enabled primary server with FIPS-enabled agents, which limits the issue to situations where only the primary uses the Ruby library. This limitation does not apply to versions 1.1.0 and later of the splunk_hec module, which supports FIPS-enabled servers. The FIPS Mode section of the module's Forge page explains the limitations of running this module in a FIPS environment.
Configuration and maintenance known issues
These are the known issues for configuration and maintenance in this release.
Puppet Server memory leak due to issue in concurrent-ruby
Puppet Server can leak memory due to a known issue in the concurrent-ruby version packaged with Puppet Server in PE 2023.4 and later. This issue results in gradual degradation of Puppet Server performance until the service crashes or is restarted.
Until a fix is available, you can work around the issue by installing version 1.2.2 of the concurrent-ruby gem on your Puppet Server nodes:
puppetserver gem install --no-document -v 1.2.2 concurrent-ruby
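After installing the gem, you will most likely need to restart the Puppet Server service so the new gem version is picked up (this assumes the standard PE service name, pe-puppetserver):
systemctl restart pe-puppetserver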
Restoring PE from a backup might fail when puppet agent is running
When you run puppet-backup restore and a Puppet run is either already in progress or starts during the restore process, the restore operation might fail with an error. Until this issue is fixed, you can use the following workaround to avoid the error:
- Before initiating the restore operation, run the following command to prevent Puppet runs:
puppet agent --disable
- When the restore operation is complete, run the following command to allow Puppet runs again:
puppet agent --enable
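For example, the full sequence might look like the following (the disable message and the backup file name are placeholders; puppet agent --disable accepts an optional reason string):
puppet agent --disable "PE restore in progress"
puppet-backup restore <backup_file>
puppet agent --enable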
Puppet autorequire functionality fails when an exec resource command is deferred
Puppet enables resources to automatically require other resources, so you do not have to explicitly state an order of dependencies. For example, if you're managing an exec resource that runs a file, Puppet automatically ensures that the file is created or managed before executing the exec.
However, when an exec resource's command is deferred, the exec resource fails with an error similar to:
Error: Failed to apply catalog: undefined method `scan' for #<Puppet::Pops::Evaluator::DeferredValue
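A minimal illustration of the failing pattern, assuming a manifest that defers the command value with the built-in sprintf function (the resource title and command are hypothetical):
exec { 'deferred_command_example':
  command => Deferred('sprintf', ['/bin/true']),
}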
Until the issue is resolved, you can set the preprocess_deferred parameter to true, either in the main section of the agent's puppet.conf file or in the puppet_enterprise::profile::agent class in the console. This setting forces the agent to evaluate all deferred parameters immediately when the catalog is applied, rather than lazily as each resource is evaluated.
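For instance, a sketch of the puppet.conf approach on an agent node (you can achieve the same result through the puppet_enterprise::profile::agent class in the console):
[main]
preprocess_deferred = true
Alternatively, set it from the command line on the agent:
puppet config set preprocess_deferred true --section main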
puppet infrastructure tune fails with multi-environment environmentpath
The puppet infrastructure tune command fails if environmentpath (in your puppet.conf file) is set to multiple environments. To avoid the failure, comment out this setting before running this command. For details about the environmentpath setting, refer to environmentpath in the open source Puppet documentation.
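To illustrate, a multi-environment environmentpath is a list of directories separated by the system path separator; commenting it out in puppet.conf before running the tune command might look like this (the second directory is a hypothetical example):
[main]
# environmentpath = /etc/puppetlabs/code/environments:/opt/custom/environments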
Restarting or running Puppet on infrastructure nodes can trigger an illegal reflective access operation warning
When restarting PE services or performing agent runs on infrastructure nodes, you might see this warning in the command-line output or logs:
Illegal reflective access operation ... All illegal access operations will be denied in a future release
These warnings are internal to PE service components and have no impact on their functionality. You can safely disregard them.
Orchestration services known issues
These are the known issues for the orchestration services in this release.
There are no known issues related to Orchestration services at this time.
Console and console services known issues
These are the known issues for the console and console services in this release.
There are no known issues related to the console and console services at this time.
Patching known issues
These are the known issues for patching in this release.
Patching fails with excluded YUM packages
In the patching task or plan, using yum_params to pass the --exclude flag in order to exclude certain packages can result in task or plan failure if the only packages requiring updates are excluded. As a workaround, use the versionlock command (which requires installing the yum-plugin-versionlock package) to lock the packages you want to exclude at their current version. Alternatively, you can fix a package at a particular version by specifying the version with a package resource in a manifest that applies to the nodes to be patched.
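For example (the package names and version shown here are placeholders, not recommendations), you could lock a package with versionlock:
yum versionlock add kernel
Or pin it with a package resource in a manifest applied to the nodes being patched:
package { 'httpd':
  ensure => '2.4.6-99.el7',
}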
Code management known issues
These are the known issues for Code Manager, r10k, and file sync in this release.
Changing a file type in a control repo produces a checkout conflict error
Changing a file type in a control repository – for example, deleting a file and replacing it with a directory of the same name – generates the error JGitInternalException: Checkout conflict with files accompanied by a stack trace in the Puppet Server log. As a workaround, deploy the control repo with the original file deleted, and then deploy again with the replacement file or directory.
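As a sketch of that workaround, assuming Code Manager is enabled and the affected environment is named production, you would run the same deploy command after each of the two commits (one that removes the file, then one that adds its replacement):
puppet-code deploy production --wait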