Since Systemd took over as the main init system in Red Hat Enterprise Linux and derivatives like CentOS, it has become more difficult to get a core dump out of a daemon application. The traditional approach of running ulimit -c unlimited before executing the binary works when running the application from the command line, but does nothing for a daemon managed by Systemd's unit files.
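For completeness, the command-line approach looks like this (mydaemon is a placeholder, not a real binary):

```shell
# Raise the core file size limit for this shell and its children.
ulimit -c unlimited
ulimit -c                  # prints the new limit: unlimited
# ./mydaemon               # hypothetical binary; it inherits the limit
```

The limit is per-process and inherited at fork/exec, which is exactly why setting it in an interactive shell has no effect on a daemon that Systemd started for you.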
There is a lot of misleading information online about how to solve this so I thought I’d add a correct solution to the mix in the hope that it’s helpful.
The suggestions I found online include editing /etc/security/limits.conf, adding LimitCORE=infinity to the unit file, and messing around with /etc/systemd/coredump.conf. None of these methods work without customising the kernel configuration first.
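For reference, the unit-file suggestion looks like this (a sketch; the service name and paths are invented). It raises the service's RLIMIT_CORE, which is necessary but, as described below, not sufficient on its own:

```ini
# /etc/systemd/system/mydaemon.service (hypothetical)
[Unit]
Description=Example daemon

[Service]
ExecStart=/usr/local/bin/mydaemon
# Per-service equivalent of "ulimit -c unlimited"
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
```

After editing a unit file, run systemctl daemon-reload and restart the service for the change to take effect.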
Systemd is not configured to handle core dumps by default on CentOS (and by extension RHEL) distributions. The default behaviour is to write a file named core in the process's working directory, which for daemons is often the root directory.
The obvious problem here is that the daemon probably doesn't have write access to the root directory (if running as a non-root user). It is possible to change the working directory with the Systemd unit directive
WorkingDirectory=/var/run/XXX. This is typically used with
RuntimeDirectory=XXX, which creates and manages the lifecycle of
/run/XXX (/var/run is a symlink to
/run). Unfortunately, we can't write the core file to a RuntimeDirectory because it gets deleted when the application terminates.
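To make those directives concrete, here is a sketch of the relevant [Service] section (service and directory names are hypothetical):

```ini
[Service]
ExecStart=/usr/local/bin/mydaemon
User=mydaemon
# Systemd creates /run/mydaemon before starting the service
# and deletes it when the service stops.
RuntimeDirectory=mydaemon
WorkingDirectory=/var/run/mydaemon
```

Note that RuntimeDirectory= takes a name relative to /run, while WorkingDirectory= takes an absolute path; the deletion-on-stop behaviour is precisely what makes this unsuitable for core files.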
The simplest solution I found is to overwrite the kernel core_pattern setting. This can be edited at runtime by echoing a new value into /proc/sys/kernel/core_pattern:
echo /tmp/core-%e-sig%s-user%u-group%g-pid%p-time%t > /proc/sys/kernel/core_pattern
This will force the kernel to write all core files during the current OS uptime to
/tmp with the filename pattern specified. The core(5) manpage has more information on the syntax.
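You can read the setting back at any time to confirm what the kernel will actually do (no root needed for reading):

```shell
# The pattern currently in effect; the kernel expands %e, %p, etc.
# at dump time (see core(5)).
cat /proc/sys/kernel/core_pattern
```

Writing the file does require root, and be aware that sudo echo ... > /proc/sys/kernel/core_pattern won't work because the redirection happens in your unprivileged shell; echo ... | sudo tee /proc/sys/kernel/core_pattern does.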
This change will be lost when the machine reboots. To apply the change at boot, you need to edit
/etc/sysctl.conf or add a file in
/etc/sysctl.d/. Our solution at work was to write a script that creates a file in
/etc/sysctl.d/ at machine image creation time, so that the config is always there when we roll out to different environments (int, test, live, etc.)
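The persistent configuration amounts to a one-line sysctl fragment (the file name below is arbitrary; only the .conf suffix matters):

```ini
# /etc/sysctl.d/90-core-pattern.conf (hypothetical name)
kernel.core_pattern = /tmp/core-%e-sig%s-user%u-group%g-pid%p-time%t
```

Running sysctl --system as root reloads all such fragments without a reboot, which is handy for verifying the file before baking it into an image.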
It should go without saying that there is no particular reason to use
/tmp. The output can be redirected to any location the process has permission to write to. A network share may be more appropriate in some cases.
There may be another solution using systemd-coredump, but it is not part of this release of CentOS (7.2) and not in the yum repository at this time.