Securing secrets is a crucial part of any infrastructure. In this article we show how to use HashiCorp Vault as a secrets store for HashiCorp Packer builds that use Ansible provisioners, with HCP Packer pipelines building the images.
We use Ansible in this process so that we can reuse the configuration-as-code knowledge we have already gained with it. This also lets us use our Ansible repository for Day 2 operations and maintain the deployed images with Ansible, instead of being forced into manual actions on the deployed images. This is especially useful when the images have to run for a longer time and need maintenance, e.g. in an on-premise datacenter, or when the same configuration as code is reused for bare-metal deployments.
HashiCorp Vault is a secrets store. It can hold secrets like passwords, certificates, and tokens, both for applications and for infrastructure tools such as Packer, Terraform, and Kubernetes. Pulling this data from Vault is therefore the next step in our journey of generating images with Packer.
Ansible itself ships a feature called Ansible Vault, which can encrypt sensitive data within Ansible playbooks. That is a fine way to protect secrets inside a playbook repository, but it does not help with Packer templates: Ansible Vault is an encryption mechanism, not a secrets store. For Packer we need an actual secrets store, and this is where HashiCorp Vault comes into play.
With HashiCorp Vault we can store secrets in a secure way and use them within our Packer templates. Ansible is written in Python, and the HashiCorp Vault Python client hvac forms the core of the community.hashi_vault collection. This collection can fetch secrets from HashiCorp Vault within Ansible playbooks; we just need a way to get these secrets injected into our plays.
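For orientation, a KV version 2 read inserts a `data` segment between the mount point and the secret path. A small helper sketching that path mapping (the mount name and secret path are simply the examples used later in this article):

```python
def kv2_read_path(mount_point: str, path: str) -> str:
    """KV v2 exposes reads under /v1/<mount>/data/<path>."""
    return f"/v1/{mount_point}/data/{path}"

# The secret used later in this article would be read from:
print(kv2_read_path("ansible", "group_vars/consul_nodes"))
# → /v1/ansible/data/group_vars/consul_nodes
```

The collection modules hide this detail behind the `engine_mount_point` and `path` parameters.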
We want to write a task that can be included multiple times to load secrets from HashiCorp Vault into our playbook. We use a Key/Value secrets engine (version 2) to store our secrets and assume in the following sample that this engine is mounted at ansible. We will use the following structure to access one dedicated secret:
```yaml
---
# snippets/get_vault_extra_vars.yml
- name: "load extra vars {{ vault_vars }}"
  when: vault_vars is defined
  register: vault_data
  ignore_errors: true
  delegate_to: localhost
  become: false
  no_log: "{{ lookup('ansible.builtin.env', 'CI') | length > 0 }}"
  community.hashi_vault.vault_kv2_get:
    retry_action: ignore
    engine_mount_point: 'ansible'
    path: "{{ vault_vars }}"
    token: "{{ vault_token }}"

- name: "update extra host facts {{ vault_vars }}"
  when: vault_data is defined and not vault_data.failed
  loop: "{{ vault_data.secret | dict2items }}"
  no_log: "{{ lookup('ansible.builtin.env', 'CI') | length > 0 }}"
  set_fact:
    "{{ item.key }}": "{{ item.value }}"
```
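The second task relies on Ansible's dict2items filter to loop over the secret's keys. The transformation it performs can be sketched in plain Python (the sample secret is hypothetical):

```python
def dict2items(data: dict) -> list:
    """Mimic Ansible's dict2items: turn a mapping into key/value records."""
    return [{"key": k, "value": v} for k, v in data.items()]

secret = {"consul_datacenter": "home", "consul_domain": "consul"}
print(dict2items(secret))
# → [{'key': 'consul_datacenter', 'value': 'home'}, {'key': 'consul_domain', 'value': 'consul'}]
```

Each loop item then feeds one set_fact call, so every top-level key of the Vault secret becomes a host fact.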
The first task loads data from HashiCorp Vault and the second task sets the data as facts. If we store structured data within our values, we need to handle that later when the actual secrets are used. The following data structure, which might be stored as ansible/group_vars/consul_nodes and used with the ansible-consul role, shows how your data might look. In this sample we need to process the consul_config_custom value as consul_config_custom | to_nice_json.
```json
{
    "consul_bind_address": "0.0.0.0",
    "consul_client_address": "0.0.0.0",
    "consul_config_custom": {
        "telemetry": {
            "disable_hostname": true,
            "prometheus_retention_time": "1h"
        }
    },
    "consul_datacenter": "home",
    "consul_domain": "consul",
    "consul_group_name": "consul_nodes",
    "consul_ports_grpc": "8502",
    "consul_version": "1.17.0"
}
```
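Ansible's to_nice_json filter is essentially pretty-printed JSON serialization. A sketch of the post-processing the consul_config_custom value needs, in plain Python:

```python
import json

consul_config_custom = {
    "telemetry": {
        "disable_hostname": True,
        "prometheus_retention_time": "1h",
    }
}

# to_nice_json renders the nested structure as indented JSON text,
# which is what a configuration file template expects instead of a
# Python/Jinja2 dict representation.
print(json.dumps(consul_config_custom, indent=4, sort_keys=True))
```

Without this step, templating the raw dict would emit Python-style quoting (single quotes, True instead of true) that Consul cannot parse.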
The task can be included multiple times to load data from different paths: for all hosts, for all groups, for the current host, and for the current groups of the host. The following snippet shows how to use the task to load data in such a way:
```yaml
---
# snippets/get_vault_secrets.yml
- name: "vault - load host_vars for all hosts"
  include_tasks: snippets/get_vault_extra_vars.yml
  vars:
    vault_vars: "host_vars/all"

- name: "vault - load host_vars for {{ inventory_hostname }}"
  include_tasks: snippets/get_vault_extra_vars.yml
  vars:
    vault_vars: "host_vars/{{ inventory_hostname }}"

- name: "vault - load group data for all hosts"
  include_tasks: snippets/get_vault_extra_vars.yml
  vars:
    vault_vars: "group_vars/all"

- name: "vault - load group for {{ inventory_hostname }}"
  include_tasks: snippets/get_vault_extra_vars.yml
  loop: "{{ ['group_vars'] | product(group_names) | map('join', '/') | list }}"
  vars:
    vault_vars: "{{ item }}"
```
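The last task builds one Vault path per group of the current host. The product/join filter chain corresponds to this Python expression (the group names are hypothetical):

```python
from itertools import product

group_names = ["consul_nodes", "docker_hosts"]  # hypothetical group memberships

# Equivalent of: ['group_vars'] | product(group_names) | map('join', '/') | list
paths = ["/".join(pair) for pair in product(["group_vars"], group_names)]
print(paths)
# → ['group_vars/consul_nodes', 'group_vars/docker_hosts']
```

Each resulting path is then passed into the include as vault_vars, so every group of the host gets its own secret lookup.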
And with a pre_tasks entry that includes this file via include_tasks, we can load the secrets for all hosts and groups that are relevant for a playbook:
```yaml
---
# docker.yaml playbook
- hosts: all
  gather_facts: true
  pre_tasks:
    - include_tasks: snippets/get_vault_secrets.yml

  roles:
    - role: "base"
    - role: "customize"
    - role: "docker"
```
What remains is setting up your pipeline automation or configuration tool to obtain a valid VAULT_TOKEN. This can be done with a Vault Agent or with a Vault AppRole, but it can also be implemented with hvac and Ansible:
```yaml
- name: "vault - get environment data"
  set_fact:
    vault_token: "{{ lookup('ansible.builtin.env', 'VAULT_TOKEN') }}"
    role_id: "{{ lookup('ansible.builtin.env', 'VAULT_ROLE_ID') }}"
    secret_id: "{{ lookup('ansible.builtin.env', 'VAULT_SECRET_ID') }}"

- name: "vault - check vault authentication credentials"
  # the env lookup always defines the variables (empty string when unset),
  # so we have to test for emptiness rather than definedness
  when: vault_token | length == 0 and ( role_id | length == 0 or secret_id | length == 0 )
  fail:
    msg: please check your vault credentials

- name: "vault - verify token information"
  when: vault_token is defined and vault_token | length > 0
  delegate_to: localhost
  become: false
  community.hashi_vault.vault_login:
    auth_method: token
    token: "{{ vault_token }}"

- name: "vault - use approle to login"
  when: secret_id is defined and role_id is defined and secret_id | length > 0 and role_id | length > 0
  block:
    - name: "vault - login by approle"
      delegate_to: localhost
      become: false
      register: login_data
      community.hashi_vault.vault_login:
        auth_method: approle
        mount_point: pipeline_approle
        role_id: "{{ role_id }}"
        secret_id: "{{ secret_id }}"

    - name: "vault - set vault token fact"
      set_fact:
        vault_token: "{{ login_data | community.hashi_vault.vault_login_token }}"
```
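The AppRole login above maps to a single POST against Vault's HTTP API at /v1/auth/&lt;mount_point&gt;/login. A minimal sketch of the request construction (address and credentials are placeholders), without actually sending it:

```python
import json

def approle_login_request(vault_addr: str, mount_point: str,
                          role_id: str, secret_id: str):
    """Build URL and JSON body for a Vault AppRole login POST."""
    url = f"{vault_addr}/v1/auth/{mount_point}/login"
    body = json.dumps({"role_id": role_id, "secret_id": secret_id})
    return url, body

url, body = approle_login_request(
    "http://127.0.0.1:8200", "pipeline_approle", "demo-role-id", "demo-secret-id")
print(url)
# → http://127.0.0.1:8200/v1/auth/pipeline_approle/login
```

The vault_login module sends this request via hvac and the response contains the client token that the vault_login_token filter extracts.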
We have shown how to use HashiCorp Vault as a secrets store for HashiCorp Packer provisioning in combination with Ansible, using the community.hashi_vault collection to fetch secrets from Vault inside Ansible playbooks. This setup keeps secrets outside of the git repository, in a tool designed to distribute secrets in a secure way. The secrets can also be maintained in their own iterations, because version 2 of the Key/Value secrets engine keeps a version history of every secret. So the security team can audit the rotation of secrets and the operations team can maintain them.