August 21, 2020

How to Create a Serverless Workflow For Puppet With Bolt


Want to get started with serverless workflows? Find out how Bolt can help in this blog!


What Is a Serverless Workflow?

A serverless workflow lets you run automation tasks against your infrastructure without adding another server for you to manage.

 

Why Use Bolt to Create a Puppet Serverless Workflow?

You should use Bolt to create a Puppet serverless workflow because it gives you Puppet's automation without a central server to maintain: Bolt is agentless and runs directly from your workstation.

Bolt was first introduced as a simple, agentless automation tool for running tasks on smaller infrastructures made up of a wide variety of remote hosts. Users told us they wanted one language to handle both one-off tasks and model-based automation. The idea was to allow you to run common commands or bring your own existing scripts to manage routine automation.

It quickly became apparent that Bolt could do much more — including handling mature Puppet code — while remaining easy to use. The latest release allows users to leverage existing content from the Puppet Forge from the comfort of their own workstations, introducing the ability to apply classes from modules and take advantage of built-in types such as file, service and package. These capabilities make Bolt a great way to start automating.

Related resource >> Bolt: Uniting Models and Tasks

 

How to Create a Serverless Workflow With Bolt

Can I Use Bolt to Get Started With a Serverless Workflow From My Workstation?

A lot of users want or need to get started from their workstation and don't want another server to manage. This mode of using Puppet has always existed, but using it on more than a single node leaves several problems for you to solve yourself: you need to sync all the modules to each node, copy any other files over, and run 'puppet apply'. It also fundamentally changes the security model: where are secrets stored and accessed, and how do you restrict data to only what each node needs? The advantages of this approach are that it removes the need for a central server and is conceptually simpler to set up.

Bolt handles all that, making it easy to get started provisioning and managing a small set of nodes from your workstation, or to manage only certain aspects of your systems. It compiles catalogs on your workstation that contain only the input needed for each node, can pull in secrets as needed, and copies module plugin code to nodes when applying the catalog.
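
For instance, here is a minimal sketch of the "pull in secrets as needed" point before we get to the main example. The plan name, prompt text, and file path are made up for illustration; the pattern is simply that Bolt's prompt function collects a value when the plan runs and the apply block carries it into the compiled catalog, so the secret never has to be staged on the node ahead of time:

plan profiles::deploy_token(
  TargetSpec $nodes,
) {
  # Ask for the secret at run time instead of storing it on the targets
  $token = prompt('Enter the API token')

  # Install puppet on the targets and gather facts
  $nodes.apply_prep

  apply($nodes) {
    # Hypothetical path, shown only to illustrate threading a secret into a catalog
    file { '/etc/api_token':
      ensure  => file,
      mode    => '0600',
      content => $token,
    }
  }
}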

Starting with a short example, let's walk through setting up an nginx server (on Debian or Ubuntu, to keep it simple for the moment). This is a Bolt plan that ensures the 'nginx' package is installed, creates a file that serves our site content, and starts the 'nginx' web server. It uses the apply_prep function to install packages needed by apply on remote nodes.

plan profiles::nginx_install(
  TargetSpec $nodes,
  String $site_content = 'hello!',
) {
  # Install puppet on the target and gather facts
  $nodes.apply_prep

  # Compile the manifest block into a catalog
  apply($nodes) {
    package { 'nginx':
      ensure => present,
    }

    file { '/var/www/html/index.html':
      ensure  => file,
      content => $site_content,
    }

    service { 'nginx':
      ensure  => 'running',
      enable  => true,
      require => Package['nginx'],
    }
  }
}

How Can I Set Up an nginx Server With Bolt?

  1. Install Bolt
  2. Go to ~/.puppetlabs/bolt/modules (create the directories if necessary)
  3. Create a new module using PDK with pdk new module profiles and add a plans directory (or create ~/.puppetlabs/bolt/modules/profiles/plans)
  4. Add the code above to the manifest profiles/plans/nginx_install.pp
  5. Set up an Ubuntu node to work with Docker (lab 2 of our Task Hands-on-lab walks through getting Ubuntu running with Docker or Vagrant)
  6. Run bolt plan run profiles::nginx_install --nodes <NODE NAME>
  7. From a web browser, navigate to <NODE NAME> and you should see a page saying 'hello!'

For a general intro to Bolt, our Tasks Hands-on-lab walks through learning many of its features step by step.

For more on what's happening behind the scenes and how Bolt handles more complex manifest code, see Bolt's docs on applying manifest blocks.
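
One detail worth knowing before moving on: apply returns a result set, so your plan can inspect what happened on each node. As a rough sketch (not part of the walkthrough above), a fragment like this could sit inside a plan body:

  # Capture the results of an apply block so the plan can report on them
  $results = apply($nodes) {
    package { 'nginx':
      ensure => present,
    }
  }

  # Each entry is the apply result for one target
  $results.each |$result| {
    notice("Applied catalog to ${result.target.name}")
  }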

How Can I Do Orchestration With Bolt?

The additional power that Bolt brings is in its ability to tie together and thread data through multiple instances of "puppet apply".

We can extend the original example of setting up several nginx servers to include configuring them behind a load balancer.

First, let's abstract the nginx setup by pulling it into a class. Note that we've also generalized it to work on RedHat systems.

  1. In the profiles module, run the command pdk new class profiles::server (or create ~/.puppetlabs/bolt/modules/profiles/manifests), and place the following in profiles/manifests/server.pp
class profiles::server(String $site_content) {
  if($facts['os']['family'] == 'redhat') {
    package { 'epel-release':
      ensure => present,
      before => Package['nginx'],
    }
    $html_dir = '/usr/share/nginx/html'
  } else {
    $html_dir = '/var/www/html'
  }

  package { 'nginx':
    ensure => present,
  }

  file { "${html_dir}/index.html":
    content => $site_content,
    ensure  => file,
  }

  service { 'nginx':
    ensure  => 'running',
    enable  => 'true',
    require => Package['nginx'],
  }
}
  2. Update our plan to use the class and set up an HAProxy load balancer.
plan profiles::nginx_install(
  TargetSpec $servers,
  TargetSpec $lb,
  String $site_content = 'hello!',
) {
  if get_targets($lb).size != 1 {
    fail("Must specify a single load balancer, not ${lb}")
  }
  # Ensure puppet tools are installed and gather facts for the apply
  apply_prep([$servers, $lb])

  apply($servers) {
    class { 'profiles::server':
      site_content => "${site_content} from ${trusted['certname']}",
    }
  }

  apply($lb) {
    include haproxy
    haproxy::listen { 'nginx':
      collect_exported => false,
      ipaddress        => $facts['ipaddress'],
      ports            => '80',
    }

    $targets = get_targets($servers)
    $targets.each |$target| {
      haproxy::balancermember { $target.name:
        listening_service => 'nginx',
        server_names      => $target.host,
        ipaddresses       => $target.facts['ipaddress'],
        ports             => '80',
        options           => 'check',
      }
    }
  }
}
  3. Install module dependencies.
  • Create ~/.puppetlabs/bolt/Puppetfile with the modules you want:
forge 'https://forge.puppet.com'
mod 'puppetlabs-stdlib', '4.25.1'
mod 'puppetlabs-concat', '4.2.1'
mod 'puppetlabs-haproxy', '2.1.0'
mod 'profiles', local: true
  • Install the modules by running bolt puppetfile install
  4. Run bolt plan run profiles::nginx_install servers=<nodea>,<nodeb> lb=<nodec>

You should now be able to go to your load balancer and have it respond with "hello! from NODE NAME", with NODE NAME corresponding to whichever nginx server handled your request.
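
If you would rather have the plan confirm this itself, one option (a sketch, not part of the original walkthrough) is to finish the plan with Bolt's run_command function and return whatever the load balancer serves. This assumes curl is installed on that node:

  # Hypothetical final step for the plan above: fetch the page through the
  # load balancer and hand its body back as the plan result
  $check = run_command('curl -s http://localhost', $lb)
  return $check.first.value['stdout']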

Note that Vox Pupuli maintains an nginx module that you could swap in for our simple server class to manage more complex nginx configuration.
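
If you went that route, the server half of the plan might look roughly like the sketch below instead of declaring profiles::server. The class and defined type come from the Vox Pupuli module, but parameter names vary between module versions, so treat this as an approximation rather than copy-paste code:

  apply($servers) {
    # Let the module manage the nginx package and service with its defaults
    include nginx

    # Serve a simple site from the default document root
    nginx::resource::server { 'default':
      listen_port => 80,
      www_root    => '/var/www/html',
    }
  }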

In this example we've done a couple of notable things:

  • We configured several nginx servers, then immediately passed important details — their name and IP address — to our load balancer config.
  • We used existing content from the Forge to quickly get our load balancer up and running.
  • We demonstrated using classes to structure our code and made it reusable, whether shared as a module or carried forward later into ongoing management with Puppet Enterprise.

This addition to Bolt makes it much easier to get started automating existing workflows thanks to how quickly you can leverage existing solutions from the Forge. We're excited to see what you do with it!

Get Started With Puppet Enterprise

Not using Puppet Enterprise yet? Get started with a free trial today. 



This blog was originally published on August 21, 2018 and has since been updated for accuracy and relevance.