December 10, 2020

Puppet Google Cloud Integrations Make GCP Management Easier


Puppet makes it easy to manage Google Cloud Platform (GCP) services. A series of Puppet Google Cloud modules are the key to making GCP management a breeze. Follow the instructions below and you'll be well on your way.

How to Install Puppet Google Cloud Integrations

Install the Puppet Google Cloud Platform Modules

These modules are dynamically built using a code-generation tool developed by Google to generate Puppet types and providers from API specifications. The modules released today allow GCP users to integrate management of their cloud resources into the same infrastructure-as-code workflows and practices they use when managing applications deployed to these resources.

Since we're getting started with "puppet apply", you won't need administrative privileges once all the system prerequisites are installed.

There are two Ruby gems the modules need in order to function: "googleauth" and "google-api-client". These can be installed into your Puppet installation using the "puppet resource" command, which provides a CLI-driven interface to Puppet.

$ sudo puppet resource package googleauth ensure=present provider=puppet_gem
$ sudo puppet resource package google-api-client ensure=present provider=puppet_gem

Now, to install the modules themselves, you can use the meta module by running:

$ puppet module install google/cloud

Or install the five published modules individually with:

$ puppet module install google/gcompute    # Google Compute Engine
$ puppet module install google/gcontainer  # Google Container Engine
$ puppet module install google/gdns        # Google Cloud DNS
$ puppet module install google/gsql        # Google Cloud SQL
$ puppet module install google/gstorage    # Google Cloud Storage

Get a Service Account and Enable APIs

Get a service account with privileges on the GCP resources you want to manage, and generate and download a key file. Ensure you have enabled the GCP APIs for the services you intend to use.

To enable flexibility and portability, and to avoid storing your personal credentials anywhere, all authentication and authorization to GCP services is done through service account credentials. A service account can be granted only the minimal set of permissions required to do the work at hand, limiting the risk associated with unauthorized action.

See Google's documentation to learn how to create and enable service accounts. Then look at how to assign the appropriate roles to the account and create and download keys to be used by Puppet for authentication to GCP.

Also make sure you have enabled the APIs for each of the GCP services you intend to use.
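As a rough sketch, all of the setup above can be done from the gcloud CLI. The project ID and service account name below are illustrative placeholders, and the role you bind should match the services you actually plan to manage:

```shell
# Illustrative project and service-account names -- substitute your own.
PROJECT=slice-cody
SA_NAME=engine-only

# Create the service account.
gcloud iam service-accounts create "$SA_NAME" --project "$PROJECT"

# Grant it only the role it needs (here, Compute Engine admin).
gcloud projects add-iam-policy-binding "$PROJECT" \
  --member "serviceAccount:${SA_NAME}@${PROJECT}.iam.gserviceaccount.com" \
  --role roles/compute.admin

# Generate and download a JSON key for Puppet to authenticate with.
gcloud iam service-accounts keys create "$HOME/${SA_NAME}.json" \
  --iam-account "${SA_NAME}@${PROJECT}.iam.gserviceaccount.com"

# Enable the API for each GCP service you intend to use.
gcloud services enable compute.googleapis.com --project "$PROJECT"
```

These commands require an authenticated gcloud session with permission to administer the project, so run them from an account that already has owner or IAM-admin rights.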

Define the Authentication Mechanism to GCP

NOTE: For the purposes of this getting-started guide, all of the code examples from here on belong in the same init.pp file.

This is the first required resource that you must define and it will directly leverage the service account you set up in the previous section.

gauth_credential { 'engine-only':
  provider => serviceaccount,
  path     => '/home/cody/engine-only.json',
  scopes   => ['https://www.googleapis.com/auth/compute'],
}
In this example I downloaded the key file created in the previous section to my home directory and renamed it "engine-only.json" for my own reference, because I assigned this service account permissions for only Google Compute Engine. The resource title is the more important part: resources defined later will reference that title to look up this credential and use its authentication information. In case you were wondering, you can define multiple of these with different names and credential pairs so that different resources or app teams can authenticate safely to the appropriate projects and services.
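For instance, a second credential scoped only to Cloud DNS could sit alongside the first. The title and key path below are illustrative:

```puppet
# Hypothetical second credential for a team that manages only Cloud DNS.
gauth_credential { 'dns-only':
  provider => serviceaccount,
  path     => '/home/cody/dns-only.json',
  scopes   => ['https://www.googleapis.com/auth/ndev.clouddns.readwrite'],
}
```

Resources that set credential => 'dns-only' would then authenticate with that key, fully independent of the Compute Engine credential.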

Define Your GCP Cloud Resources

At this point all the prerequisites are out of the way, so we can get to defining some actual GCP infrastructure. All of the available resources can be viewed in aggregate on the Puppet Forge. For today's blog post we'll do something everyone is familiar with: launching a number of Ubuntu virtual machines backed by persistent disks. You can find the entire example in an easy-to-download Gist on GitHub.

Before defining the actual virtual machines, make Puppet aware of the default VPC, region, zone, and machine type you want to operate with by defining gcompute_network, gcompute_region, gcompute_zone, and gcompute_machine_type resources.

gcompute_network { 'default':
  ensure     => present,
  project    => 'slice-cody',
  credential => 'engine-only',
}

gcompute_region { 'us-west1':
  project    => 'slice-cody',
  credential => 'engine-only',
}

gcompute_zone { 'us-west1-c':
  project    => 'slice-cody',
  credential => 'engine-only',
}

gcompute_machine_type { 'f1-micro':
  zone       => 'us-west1-c',
  project    => 'slice-cody',
  credential => 'engine-only',
}

Now five virtual machines take just a gcompute_address, a gcompute_disk, and a gcompute_instance resource inside a loop.

['one', 'two', 'three', 'four', 'five'].each |$vm| {
  gcompute_address { $vm:
    ensure     => present,
    region     => 'us-west1',
    project    => 'slice-cody',
    credential => 'engine-only',
  }

  gcompute_disk { $vm:
    ensure       => present,
    size_gb      => 50,
    source_image => gcompute_image_family('ubuntu-1604-lts', 'ubuntu-os-cloud'),
    zone         => 'us-west1-c',
    project      => 'slice-cody',
    credential   => 'engine-only',
  }

  gcompute_instance { $vm:
    ensure             => present,
    machine_type       => 'f1-micro',
    disks              => [
      {
        boot        => true,
        source      => $vm,
        auto_delete => true,
      },
    ],
    network_interfaces => [
      {
        network        => 'default',
        access_configs => [
          {
            name   => 'External NAT',
            nat_ip => $vm,
            type   => 'ONE_TO_ONE_NAT',
          },
        ],
      },
    ],
    zone               => 'us-west1-c',
    project            => 'slice-cody',
    credential         => 'engine-only',
  }
}

The loop above creates five static external addresses and five 50GB disks in us-west1-c, each derived from the Ubuntu LTS image family, and then attaches each disk to a corresponding f1-micro instance that leverages the default VPC for connectivity.

Run Puppet to Apply the Manifest

To bring your defined infrastructure online, you simply need to tell Puppet the name of the file that contains the code from earlier. Puppet will enforce the state you've described and make sure the five instances and disks are brought online. This is accomplished by running "puppet apply" and will produce log output similar to the following:

$ /opt/puppetlabs/bin/puppet apply -v init.pp
Info: Loading facts
Notice: Compiled catalog for base in environment production in 0.43 seconds
Info: Applying configuration version '1503431803'
Notice: /Stage[main]/Main/Gcompute_address[one]/ensure: created
Notice: /Stage[main]/Main/Gcompute_disk[one]/ensure: created
Notice: /Stage[main]/Main/Gcompute_instance[one]/ensure: created
Notice: /Stage[main]/Main/Gcompute_address[two]/ensure: created
Notice: /Stage[main]/Main/Gcompute_disk[two]/ensure: created
Notice: /Stage[main]/Main/Gcompute_instance[two]/ensure: created
Notice: /Stage[main]/Main/Gcompute_address[three]/ensure: created
Notice: /Stage[main]/Main/Gcompute_disk[three]/ensure: created
Notice: /Stage[main]/Main/Gcompute_instance[three]/ensure: created
Notice: /Stage[main]/Main/Gcompute_address[four]/ensure: created
Notice: /Stage[main]/Main/Gcompute_disk[four]/ensure: created
Notice: /Stage[main]/Main/Gcompute_instance[four]/ensure: created
Notice: /Stage[main]/Main/Gcompute_address[five]/ensure: created
Notice: /Stage[main]/Main/Gcompute_disk[five]/ensure: created
Notice: /Stage[main]/Main/Gcompute_instance[five]/ensure: created
Notice: Applied catalog in 73.84 seconds

If you wish to have your defined infrastructure continually enforced by Puppet, you can also add this code to a production Puppet Enterprise installation and have an agent validate, periodically or on demand, that all named instances have been created.
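For example, you could wrap the resources above in a class and assign it in your environment's site.pp so that an agent converges the infrastructure on every run. The node and class names below are illustrative:

```puppet
# site.pp -- hypothetical node assignment for continuous enforcement.
# profile::gcp_instances is a stand-in for a class containing the
# gauth_credential and gcompute_* resources defined earlier.
node 'gcp-admin.example.com' {
  include profile::gcp_instances
}
```

Any drift (for example, a manually deleted instance) would then be detected and corrected on the next agent run.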

Puppetize Your GCP Infrastructure

Now that you’ve given it a try, you’re ready to dive head first into integrating the management and definition of your cloud resources into your organization’s infrastructure-as-code practices.

These modules are aimed squarely at reducing the friction associated with building fluid, portable infrastructure so that everyone can reap the rewards of migrating their applications to and across the cloud.


This blog was originally published on August 21, 2017, and has since been updated for relevance and accuracy.
