GitHub Actions to Build Golden Images with HashiCorp Packer

In previous posts we have already shown multiple ways to use HashiCorp Packer to build Golden Images. In this post we show how to automate the process with GitHub Actions. We use HashiCorp Vault to store the secrets for Packer, Terraform in combination with Terratest to deploy the built images, and Mondoo to verify the result of the Packer build. Our images are published to HCP Packer to make them available and to maintain version information across different release channels (development, staging, production).

We are also going to add a few adjustments to make the GitHub workflow runnable with act, as already shown in this article.

Setup GitHub Actions

First of all, in our workflow we set up the tools we use by means of the appropriate GitHub Actions:

Vault GitHub Action

In order to access various resources we need to pull their secrets into our workflow. We store these values in HCP Vault and use the vault-action to fetch them and make them available to the subsequent steps. At the moment we build golden images in parallel for Azure and AWS. We also need access to the HCP platform to store information about our built images, and last but not least we provide security and compliance scan information through Mondoo. So we set up the following secrets via the Vault GitHub Action:

- name: Import Secrets
  id: import-secrets
  uses: hashicorp/vault-action@v2
  with:
    url: ${{ vars.VAULT_ADDRESS }}
    namespace: ${{ vars.VAULT_NAMESPACE }}
    method: approle
    path: github_ci
    roleId: ${{ secrets.VAULT_APPROLE_ID }}
    secretId: ${{ secrets.VAULT_APPSECRET_ID }}
    secrets: |
        secret/data/ci/aws accessKey | AWS_ACCESS_KEY_ID ;
        secret/data/ci/aws secretKey | AWS_SECRET_ACCESS_KEY ;

        secret/data/ci/hcp username | HCP_CLIENT_ID ;
        secret/data/ci/hcp password | HCP_CLIENT_SECRET ;

        secret/data/ci/mondoo password | MONDOO_AGENT_ACCOUNT ;

        secret/data/ci/azure subscriptionId | ARM_SUBSCRIPTION_ID ;
        secret/data/ci/azure tenantId | ARM_TENANT_ID ;
        secret/data/ci/azure clientId | ARM_CLIENT_ID ;
        secret/data/ci/azure clientSecret | ARM_CLIENT_SECRET ;

        secret/data/ci/ssh private ;
        secret/data/ci/ssh public ;

Python Setup

Within the provisioning phase of the Packer build we use Ansible playbooks to provision the image templates. So we have to set up a Python environment and make sure that all required dependencies for our Ansible playbooks are installed and available.

- uses: actions/setup-python@v5
  with:
    python-version: ${{ env.PYTHON_VERSION }}

- name: Install Packertools
  run: |
    python -m pip install --upgrade pip
    pip3 install -r requirements.txt
    ansible-galaxy install -r ansible/requirements.yaml

Packer within GitHub Actions

Because we control the build source via the environment variables CLOUD, DISTRIBUTION and PLAY, we have a wrapper in place that creates the Packer configuration file and the Packer variables file. This way we can build the same image for different clouds (AWS, Azure, VMware, ...) and distributions (e.g. Windows, Ubuntu, CentOS, ...) with the same set of playbooks.

Within these playbooks all secrets (host_vars and group_vars) are managed in a Key/Value secrets engine within HashiCorp Vault - this is described in detail in this post.
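
To illustrate the idea, a minimal sketch of such a wrapper could look like the following; the file layout under packer/ and the variable names are assumptions, not our exact implementation:

#!/usr/bin/env bash
# Hypothetical sketch of scripts/create_packer_config.sh: assemble a Packer
# template and variables file for the requested CLOUD / DISTRIBUTION / PLAY
# combination. All paths below are assumptions for illustration.
set -euo pipefail

: "${CLOUD:?CLOUD must be set (e.g. aws, azure)}"
: "${DISTRIBUTION:?DISTRIBUTION must be set (e.g. ubuntu, windows)}"
: "${PLAY:?PLAY must be set (name of the Ansible playbook)}"

CI_PROJECT_DIR="${CI_PROJECT_DIR:-$(pwd)}"
TARGET="${CI_PROJECT_DIR}/packer/current_${GITHUB_RUN_ID}"

# Combine the cloud-specific source block with the shared build definition.
cat "${CI_PROJECT_DIR}/packer/sources/${CLOUD}_${DISTRIBUTION}.pkr.hcl" \
    "${CI_PROJECT_DIR}/packer/build.pkr.hcl" > "${TARGET}.pkr.hcl"

# The variables file points the provisioner at the selected playbook.
cat > "${TARGET}.pkrvars.hcl" <<EOF
cloud        = "${CLOUD}"
distribution = "${DISTRIBUTION}"
playbook     = "ansible/${PLAY}.yaml"
EOF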

Packer Build

- name: Create Packer Configuration
  run: ./scripts/create_packer_config.sh

- name: Packer Build
  id: build
  run: |
    packer init "${CI_PROJECT_DIR}/packer/current_${GITHUB_RUN_ID}.pkr.hcl"

    packer build ${EXTRA_ARGS} \
      -var-file="${CI_PROJECT_DIR}/packer/current_${GITHUB_RUN_ID}.pkrvars.hcl" \
      "${CI_PROJECT_DIR}/packer/current_${GITHUB_RUN_ID}.pkr.hcl" | tee "${GITHUB_RUN_ID}.log"

    ./scripts/post-build.sh

The build step simply calls packer init to download all necessary plugins and then builds the generated configuration. The tee command saves the log file for later reference, and the post-build.sh script takes care of parsing that log and storing the build information for later use within the workflow.
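
As an illustration, a hedged sketch of what such a post-build script could do might look like this; the grep pattern and the output name are assumptions:

#!/usr/bin/env bash
# Hypothetical sketch of scripts/post-build.sh: parse the Packer log and
# expose the built image ID as a step output. The pattern below only covers
# AWS AMI IDs and is an assumption for illustration.
set -euo pipefail

LOG_FILE="${GITHUB_RUN_ID}.log"

# Extract the last AMI ID printed in the build log.
IMAGE_ID="$(grep -oE 'ami-[0-9a-f]+' "${LOG_FILE}" | tail -n1 || true)"

if [ -z "${IMAGE_ID}" ]; then
  echo "No image ID found in ${LOG_FILE}" >&2
  exit 1
fi

# Make the value available to later steps via GITHUB_OUTPUT.
echo "image_id=${IMAGE_ID}" >> "${GITHUB_OUTPUT}"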

Testing Packer Image

From the build step we get the names or IDs of the image templates that we want to test. The test-wrapper.sh script only checks whether tests can be started and pulls in the test configuration for the given cloud, distribution and play. The actual testing is done by Terratest, which deploys the image, and Mondoo, which scans the result. We have already published more information about that in a dedicated post here.

- name: Test Packer Image
  id: test
  run: ./scripts/test-wrapper.sh
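
A minimal sketch of what such a test wrapper could look like, assuming the Terratest code lives in a test/<cloud>/<distribution> directory; the directory layout and variable names are assumptions:

#!/usr/bin/env bash
# Hypothetical sketch of scripts/test-wrapper.sh: check whether a test
# configuration exists for the current CLOUD / DISTRIBUTION and, if so,
# run the Terratest suite against the freshly built image.
set -euo pipefail

TEST_DIR="test/${CLOUD}/${DISTRIBUTION}"

if [ ! -d "${TEST_DIR}" ]; then
  echo "No tests defined for ${CLOUD}/${DISTRIBUTION}, skipping."
  exit 0
fi

# Terratest is plain Go test code; hand over the image ID from the
# build step via an environment variable (name is an assumption).
export TEST_IMAGE_ID="${IMAGE_ID:?image ID from the build step required}"

( cd "${TEST_DIR}" && go test -v -timeout 45m ./... )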

Publishing Packer Image

After a successful test the images are published to the corresponding HCP Packer channel. The image is published to the latest channel when Packer builds it, and it is promoted to other channels with the following Terraform code:

data "hcp_packer_version" "src" {
  bucket_name  = local.hcp_bucket_name
  channel_name = var.hcp_channel_source
}

resource "hcp_packer_channel" "destination" {
  name        = var.hcp_channel_destination
  bucket_name = local.hcp_bucket_name
}

resource "hcp_packer_channel_assignment" "destination" {
  bucket_name         = local.hcp_bucket_name
  channel_name        = var.hcp_channel_destination
  version_fingerprint = data.hcp_packer_version.src.fingerprint
}

Within the scripting, terraform init and terraform apply are called. But because we are not managing the Terraform state, we first call terraform import to associate the existing hcp_packer_channel.destination and hcp_packer_channel_assignment.destination resources with their current values.
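
A hedged sketch of this import-then-apply sequence could look like the following; the environment variable names are assumptions, and the bucket_name:channel_name import IDs assume the format documented for the HCP provider:

#!/usr/bin/env bash
# Hypothetical sketch of the Terraform calls inside scripts/promote.sh.
# Since no remote state is kept, existing channel resources are imported
# before apply. Variable names and the import ID format are assumptions.
set -euo pipefail

terraform init

# Ignore import errors for resources that do not exist yet.
terraform import hcp_packer_channel.destination \
  "${HCP_BUCKET_NAME}:${HCP_CHANNEL_DESTINATION}" || true
terraform import hcp_packer_channel_assignment.destination \
  "${HCP_BUCKET_NAME}:${HCP_CHANNEL_DESTINATION}" || true

terraform apply -auto-approve \
  -var "hcp_channel_source=${HCP_CHANNEL_SOURCE}" \
  -var "hcp_channel_destination=${HCP_CHANNEL_DESTINATION}"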

- name: Publish Packer Image
  id: publish
  if: github.ref_type == 'tag'
  run: ./scripts/promote.sh

Execute if the job was cancelled

Sometimes the job is cancelled or newer commits are already waiting in the queue, and we want to execute some cleanup code in case our workflow run has already created resources.

So you may need to look up whether any Packer-created machines are still running, or whether your test code has created resources that need to be destroyed.

For example, within the Packer configuration you can define tags that mark these machines as Packer-related, and maybe also add your commit hash as a tag.
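
A possible sketch of such a cleanup for the AWS side could look like this; the tag keys and values are assumptions, not the tags of our real configuration:

#!/usr/bin/env bash
# Hypothetical sketch of scripts/cancel.sh for AWS: find instances still
# tagged as Packer builders for this commit and terminate them.
# The tag keys/values are assumptions for illustration.
set -euo pipefail

INSTANCE_IDS="$(aws ec2 describe-instances \
  --filters "Name=tag:CreatedBy,Values=packer" \
            "Name=tag:CommitHash,Values=${GITHUB_SHA}" \
            "Name=instance-state-name,Values=running,pending" \
  --query 'Reservations[].Instances[].InstanceId' \
  --output text)"

if [ -n "${INSTANCE_IDS}" ]; then
  echo "Terminating leftover Packer instances: ${INSTANCE_IDS}"
  # shellcheck disable=SC2086  # word splitting of the ID list is intended
  aws ec2 terminate-instances --instance-ids ${INSTANCE_IDS}
else
  echo "No leftover Packer instances found."
fi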

- name: Execute if the job was cancelled
  if: ${{ cancelled() }}
  run: ./scripts/cancel.sh

ACT enhancements

When reading through all of our blog posts, you will come across our act article, which shows how to run your GitHub workflows on your local machine. That way you do not need to define another method to execute the build steps when developing your playbooks and definitions.

act injects an environment variable ACT that can be used for this purpose.

- name: Install Act dependencies
  if: ${{ env.ACT != '' }}
  run: |
    apt-get update && apt-get install -y $MISSING_ACT_PACKAGES

Conclusion

This GitHub workflow automates the Packer image creation process in a secure and efficient manner. The integration with HashiCorp Vault ensures that sensitive information is handled securely, while the multi-cloud support, automated testing, and cleanup steps make it suitable for a wide range of use cases. Overall, the workflow strikes a balance between flexibility, security, and automation, making it a strong candidate for production CI/CD pipelines.
