
code aws aws-sns

AWS Simple Notification Service is a great tool for broadcasting messages to a topic. Subscribers receive all messages by default, or can use a filter on the message’s attributes to receive only a subset of the messages sent to the topic.

Message attributes have a few possible data types. Unfortunately, the documentation for the JavaScript SDK is pretty poor at the time of writing. It’s fairly obvious how to set an attribute of type String, but it says nothing about how to set one of type String.Array. Fortunately, I guessed correctly when I gave it a try.

const AWS = require('aws-sdk')
const config = require('./config')

AWS.config.region = 'eu-west-1' // or your region

const sns = new AWS.SNS()

const notificationGroups = ['group-a', 'group-b'] // example group names

async function sendMessage(message, errorCode) {
  const params = {
    Message: message,
    Subject: 'Something happened',
    TopicArn: config.sns.arn,
    MessageAttributes: {
      errorCode: {
        DataType: 'String',
        StringValue: `${errorCode}`
      },
      group: {
        DataType: 'String.Array',
        StringValue: JSON.stringify(notificationGroups)
      }
    }
  }

  await sns.publish(params).promise()
}

The trick is to call JSON.stringify(someArray) and stuff the result into the StringValue key of the MessageAttributes entry.
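The same payload can be assembled outside the SDK too. As a sketch using jq (the group names are hypothetical examples), this builds the MessageAttributes JSON the SDK sends, with the JSON-encoded array held in StringValue as a plain string:

```shell
# Hypothetical group names; the array is JSON-encoded into a plain string,
# exactly what JSON.stringify(notificationGroups) produces in the SDK code.
groups='["ops","dev"]'

# Build the MessageAttributes payload with the String.Array attribute.
attrs=$(jq -n --arg v "$groups" \
  '{group: {DataType: "String.Array", StringValue: $v}}')
echo "$attrs"

# The same payload could then be passed to the AWS CLI, e.g.:
#   aws sns publish --topic-arn "$TOPIC_ARN" --message "..." \
#     --message-attributes "$attrs"
```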


code bash

It’s funny how easy it is to overlook an obvious solution to a trivial problem and keep doing things the slow way for years at a time.

While writing a short bash script today, it occurred to me that I’ve been handling requirements poorly for years. I’ve previously used long chains of if ! which COMMAND &>/dev/null to declare and enforce requirements, but it never occurred to me to wrap it in a simple function. This is what popped out of my head today:

# Declare requirements in bash scripts

set -e

function requires() {
    if ! command -v "$1" &>/dev/null; then
        echo "Requires $1"
        exit 1
    fi
}

requires "jq"
requires "curl"
# etc.

# ... rest of script

This makes it easy to declare simple command requirements without repeating the basic if-not-found-then-fail logic over and over. In hindsight it should have been obvious to wrap this stuff in a function years ago, but at least I’ve caught up now.

The requires function can of course be placed in a common file for inclusion into multiple scripts. I’ve shown it inline for simplicity.
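As a sketch of that shared-file approach (the path /tmp/common.sh is a stand-in for wherever you keep shared helpers):

```shell
# Write the shared helper to a common file (path is hypothetical).
cat > /tmp/common.sh <<'EOF'
function requires() {
    if ! command -v "$1" &>/dev/null; then
        echo "Requires $1"
        exit 1
    fi
}
EOF

# Any script can then source it and declare its requirements.
source /tmp/common.sh
requires "ls"   # ls exists everywhere, so this passes silently
echo "all requirements met"
```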

This snippet is also available as a Gist.


code aws aws-lambda aws-s3

As the web continues to evolve, the selection of HTTP response headers a site is expected to support grows. Some of the security-related headers are now considered mandatory by various online publishers, including Google and Mozilla. If, like me, you run your site from an S3 bucket of static files, it’s not easy to add the recommended headers and improve your site’s security scores in tools like Mozilla’s Observatory.

CloudFront and Lambda@Edge can fix that. Others have detailed the process more fully than I can but beware that some features may have changed since the older posts were written. I suggest following the article linked previously if you want to implement this for your site. I’ve listed some of the gotchas that slowed me down below.

Adding the right permissions to the Lambda IAM role

While creating the Lambda you will be asked to assign it a role. Make sure you add the Basic Edge Lambda template to the role you create to allow the function trigger to be created. I missed this step the first time and it took me a few tries to figure it out.

Beware of replication

When you deploy the Lambda and attach the trigger, CloudFront will create replicas of the Lambda in various regions. If you then update your function, publish a new version and redeploy, it will create more replicas. The replicas of older versions are not automatically deleted and cannot be deleted by the user at this time so they will pollute your account with potentially large numbers of unused Lambda functions. Hopefully Amazon will fix this issue at some point.

Choose your trigger wisely

CloudFront supports four triggers: origin-request, origin-response, viewer-request and viewer-response. The restrictions on runtime and memory size differ between triggers, so pay attention to the delays your functions introduce to the traffic flow.

The viewer triggers are run outside of CloudFront’s caching layer so viewer-request is triggered before the request hits the cache and viewer-response is triggered after the response has exited the cache.

origin-response is triggered after CloudFront has found the resource but before it writes the response to its own cache. That means you can add headers and cache the result, reducing Lambda invocations and delays, and keeping costs down.

Headers and their values

I’ve configured CloudFront to redirect HTTP requests for my site to HTTPS, but browsers like a header to make that even clearer. HSTS (the Strict-Transport-Security header) does this job. Other than that, there’s a range of headers that help the browser mitigate the risks of XSS (cross-site scripting) vulnerabilities. These are well documented by Mozilla so I won’t rehash them here.

The interesting one is the CSP (Content-Security-Policy) header. There are a few versions of the syntax supported by different browsers and getting it right is a little tricky. The excellent CSP Evaluator by Google is very helpful for testing wordings of the CSP. Tuning the policy to work properly with Google Analytics and allow inline stylesheets while disabling most other avenues of attack took a few attempts, but I’m happy with what I ended up with.

I disabled everything by default with default-src 'none' then added the permissions I needed for my site. I use a very small inline CSS stylesheet, so I needed to enable that with style-src 'self' 'unsafe-inline'. I don’t make much use of images at present, but if I do I’ll access them over HTTPS, so I enabled that with img-src 'self' https:.

Opening up just the needed permissions for scripts was a bit more difficult, but the CSP Evaluator helped a great deal. It recommends a strict-dynamic policy for browsers that support it. I only use one script on my site, for Google Analytics, so I had to extract the contents (including all whitespace) from the <script> tag, hash it with SHA256, then encode the hash with Base64 and add the result directly to the CSP policy. CSP Evaluator also recommends some fall-back options for browsers that do not yet support strict-dynamic, so I end up with script-src 'strict-dynamic' 'sha256-my_script_hash' 'unsafe-inline' https:, where my_script_hash is the Base64-encoded SHA256 hash of the contents of my script. The complete example is in the code below.
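The hashing step can be scripted. A minimal sketch (the inline script body here is a stand-in for your real <script> contents):

```shell
# Stand-in for the exact contents of the inline <script> tag,
# including all whitespace.
printf '%s' "console.log('hello');" > /tmp/inline-script.js

# SHA256 the bytes, then Base64-encode the raw digest.
hash=$(openssl dgst -sha256 -binary /tmp/inline-script.js | openssl base64)

# The value drops straight into the CSP script-src directive.
echo "script-src 'strict-dynamic' 'sha256-$hash' 'unsafe-inline' https:"
```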

Lambda code template

My basic Lambda template for adding custom headers to all HTTP responses on my site is shared below.

'use strict';

exports.handler = (event, context, callback) => {
    function add(h, k, v) {
        h[k.toLowerCase()] = [{
            key: k,
            value: v
        }];
    }

    const response = event.Records[0].cf.response;
    const headers = response.headers;

    // hsts header at 2 years
    // Strict-Transport-Security: max-age=63072000; includeSubDomains;
    add(headers, "Strict-Transport-Security", "max-age=63072000; includeSubDomains; preload");

    // Reduce XSS risks
    add(headers, "X-Content-Type-Options", "nosniff");
    add(headers, "X-XSS-Protection", "1; mode=block");
    add(headers, "X-Frame-Options", "DENY");
    // TODO: fill in value of the sha256 hash
    const csp = "default-src 'none'" +
        "; frame-ancestors 'none'" +
        "; base-uri 'none'" +
        "; style-src 'self' 'unsafe-inline'" +
        "; img-src 'self' https:" +
        "; script-src 'strict-dynamic' 'sha256-my_script_hash' 'unsafe-inline' https:";
    add(headers, "Content-Security-Policy", csp);

    console.log('Response headers added');

    callback(null, response);
};


cmake cpack rpmbuild

I’ve been trying to analyse a core dump generated by a C++ application when it seg-faulted. I use CMake 3 to build it and create an RPM with CPack. This application is currently built in debug mode using -DCMAKE_BUILD_TYPE=Debug on the command line that invokes CMake.

The generated binaries have all their debug symbols as expected but the binaries in the RPM package do not. After some searching, I learned that rpmbuild strips binaries by default on some distributions. This makes analysing a core dump way harder than it needs to be so I found a way to turn this feature off using CPack. The trick is to set a variable in the CMakeLists.txt file:

# prevent rpmbuild from stripping binaries when in debug mode
set(CPACK_RPM_SPEC_INSTALL_POST "/bin/true")

Now my debug packages retain their debug info after installation so it’s possible to get a lot more information out of gdb when looking at a core dump.

This is documented in a roundabout way online, but it took me a while to figure it out so I thought I’d write it up.


core-dump linux

Since Systemd took over as the main init system in Red Hat Linux and derivatives like CentOS, it has become more difficult to get a core dump out of a daemon application. The traditional approach of running ulimit -c unlimited before executing the binary works when running the application from the command line but does nothing for a daemon managed by Systemd’s unit files.

There is a lot of misleading information online about how to solve this so I thought I’d add a correct solution to the mix in the hope that it’s helpful.

The suggestions I found online include editing /etc/security/limits.conf, adding LimitCORE=infinity to the unit file, and messing around with /etc/systemd/coredump.conf. None of these methods work without customising the kernel configuration first.

Systemd is not configured to handle core dumps by default on CentOS (and by extension RHEL) distributions. The default behaviour is to write to the file core in the process’s working directory, which for daemons is often the root directory.

The obvious problem here is that the daemon probably doesn’t have write access to the root directory (if running as a non-root user). It is possible to change the working directory with the Systemd unit directive WorkingDirectory=/var/run/XXX. This is typically used with RuntimeDirectory=XXX, which creates and manages the lifecycle of /run/XXX (/var/run is a symlink to /run). Unfortunately, we can’t write the core file to a RuntimeDirectory because it gets deleted when the application terminates.
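For reference, those directives look like this in a unit drop-in. The service name mydaemon is hypothetical, and the sketch writes under /tmp to stay side-effect free; a real deployment would write to /etc/systemd/system/mydaemon.service.d/ and run systemctl daemon-reload:

```shell
# Sketch of a drop-in fragment pairing RuntimeDirectory with WorkingDirectory.
# Real location: /etc/systemd/system/mydaemon.service.d/workdir.conf
mkdir -p /tmp/mydaemon.service.d
cat > /tmp/mydaemon.service.d/workdir.conf <<'EOF'
[Service]
RuntimeDirectory=mydaemon
WorkingDirectory=/run/mydaemon
EOF
cat /tmp/mydaemon.service.d/workdir.conf
```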

The simplest solution I found is to override the kernel’s core_pattern setting. This can be edited at runtime by echoing a new value into /proc/sys/kernel/core_pattern:

echo /tmp/core-%e-sig%s-user%u-group%g-pid%p-time%t > /proc/sys/kernel/core_pattern

This will force the kernel to write all core files during the current OS uptime to /tmp with the filename pattern specified. The core manpage has more information on the syntax.

This change will be lost when the machine reboots. To effect the change at kernel startup, you need to edit /etc/sysctl.conf or a file in /etc/sysctl.d/.
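For example, a drop-in like the following makes the pattern persistent. It’s written to /tmp here to avoid needing root; the real file belongs under /etc/sysctl.d/:

```shell
# Persist the core_pattern across reboots;
# install to /etc/sysctl.d/50-coredump.conf for real use.
cat > /tmp/50-coredump.conf <<'EOF'
kernel.core_pattern=/tmp/core-%e-sig%s-user%u-group%g-pid%p-time%t
EOF
cat /tmp/50-coredump.conf

# Apply immediately without a reboot (needs root against the real file):
#   sysctl -p /etc/sysctl.d/50-coredump.conf
```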


Our solution at work was to write a script to create a file in /etc/sysctl.d/ at machine image creation time, so that the config is always there when we roll out to different environments (int, test, live etc.)

It should go without saying that there is no particular reason to use /tmp. The output can be redirected to any location the process has permission to write to. A network share may be more appropriate in some cases.

There may be another solution using systemd-coredump, but it is not part of this release of CentOS (7.2) and not in the yum repository at this time.