Vault PKI Secrets Engine: ACME Configuration with Terraform
Starting with version 1.14.0, the Vault PKI secrets engine supports the Automatic Certificate Management Environment (ACME) specification for issuing and renewing leaf server certificates.
HashiCorp has published a tutorial on Vault ACME configuration, but it is based purely on command-line statements. With HashiCorp Terraform, on the other hand, we have a tool available to express this setup as Infrastructure as Code (IaC).
Our example here is a direct mapping of the existing tutorial, broken up into several Terraform code samples.
We will review the PKI secrets engine's ACME functionality by deploying and configuring a Caddy web server and a Vault server. We are going to enable ACME support in a PKI secrets engine instance and configure Caddy to use Vault as its ACME server for automatic HTTPS.
For this setup we need three tools:
Docker: we deploy Caddy and Vault as containers.
curl: used to verify the interaction with the deployed servers.
Terraform: our core tool within this article to run and apply configuration changes.
We create the required containers using the Terraform Docker provider; a minimal provider setup might look like the sketch below. Afterwards, we define a dedicated network so that we can reference it from the container instances.
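A minimal provider declaration, assuming the widely used kreuzwerker/docker provider (the version constraint is an assumption; pin whatever suits your setup):

terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "~> 3.0"
    }
  }
}

# connects to the local Docker daemon by default
provider "docker" {}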
1resource "docker_network" "learn_vault" {
2 name = "learn_vault"
3 driver = "bridge"
4 ipam_config {
5 subnet = "10.1.1.0/24"
6 }
7}
8resource "docker_image" "caddy" {
9 name = "caddy:2.6.4"
10}
11resource "docker_image" "vault" {
12 name = "hashicorp/vault:1.14.2"
13}
Our goal is to deploy a dev mode Vault server container that we'll use for the tutorial.
❕ The dev mode server does not support TLS for non-loopback addresses and is used without TLS just for this tutorial. Vault should always be used with TLS in production deployments; that configuration requires a certificate file and key file on each Vault host.
The container definition matches the tutorial template, just expressed as Terraform code.
resource "docker_container" "vault" {
  name     = "learn-vault"
  image    = docker_image.vault.image_id
  hostname = "learn-vault"
  rm       = true
  command  = ["vault", "server", "-dev", "-dev-root-token-id=root", "-dev-listen-address=0.0.0.0:8200"]
  networks_advanced {
    name         = docker_network.learn_vault.name
    ipv4_address = "10.1.1.100"
  }
  host {
    host = "caddy-server.learn.internal"
    ip   = "10.1.1.200"
  }
  ports {
    internal = 8200
    external = 8200
  }
  capabilities {
    add = ["IPC_LOCK"]
  }
}
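Once applied, a quick way to check that the dev server is reachable; a sketch using Vault's health endpoint via the published port on localhost:

curl -s http://127.0.0.1:8200/v1/sys/health
# a healthy dev server reports "initialized": true and "sealed": false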
Caddy will also run in a container, but we define our own configuration to direct its certificate retrieval to our Vault instance. Because we need to spin up the containers before applying that configuration, the Caddy container will report failures until the Vault configuration is fully applied.
1resource "local_file" "caddyfile" {
2 content = <<EOF
3{
4 acme_ca http://10.1.1.100:8200/v1/pki_int/acme/directory
5}
6caddy-server {
7 root * /usr/share/caddy
8 file_server browse
9}
10EOF
11 filename = "${abspath(path.module)}/Caddyfile"
12}
13
14resource "local_file" "index" {
15 content = "Hello World"
16 filename = "${abspath(path.module)}/index.html"
17}
18
19resource "docker_container" "caddy" {
20
21 name = "caddy-server"
22 image = docker_image.caddy.image_id
23 hostname = "caddy-server"
24 rm = true
25 networks_advanced {
26 name = docker_network.learn_vault.name
27 ipv4_address = "10.1.1.200"
28 }
29 ports {
30 internal = 80
31 external = 80
32 }
33 ports {
34 internal = 443
35 external = 443
36 }
37 volumes {
38 host_path = local_file.caddyfile.filename
39 container_path = "/etc/caddy/Caddyfile"
40 }
41 volumes {
42 host_path = local_file.index.filename
43 container_path = "/usr/share/caddy/index.html"
44 }
45}
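With these resources in place, we spin up the containers the usual way, from the directory containing this configuration:

terraform init
terraform apply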
Because of Terraform's dependency management, we have to maintain the following configuration in a different folder (e.g. a subdirectory named config) and apply it after spinning up the containers:
terraform -chdir=config apply
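The configuration in config also needs its own Vault provider definition; a minimal sketch, assuming the dev server's root token and the published listen address from above:

provider "vault" {
  # dev mode server from the container definition above
  address = "http://127.0.0.1:8200"
  token   = "root"
}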
Also, this configuration is based on steps like those in the Build Your Own Certificate Authority (CA) tutorial. You are encouraged to complete the hands-on lab in that tutorial if you are unfamiliar with the PKI secrets engine.
1resource "vault_mount" "pki" {
2 path = "pki"
3 type = "pki"
4 max_lease_ttl_seconds = 87600 * 60
5}
6resource "vault_pki_secret_backend_root_cert" "root" {
7
8 backend = vault_mount.pki.path
9 type = "internal"
10 common_name = "learn.internal"
11 issuer_name = "root-2023"
12 ttl = "87600h"
13
14}
15resource "local_file" "root_ca_cert" {
16 content = vault_pki_secret_backend_root_cert.root.certificate
17 filename = "${path.module}/root_2023_ca.crt"
18}
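Before continuing, we can sanity-check the exported root certificate with openssl; run this from the parent directory, since the file is written to the config subdirectory:

openssl x509 -in config/root_2023_ca.crt -noout -subject -issuer -enddate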
Now, when following the command-line script, we run into the problem that there is no Terraform resource for the cluster configuration. To bypass this, the vault_generic_endpoint resource is available. In combination with the HashiCorp Vault API documentation we can build the cluster configuration.
1resource "vault_generic_endpoint" "root_config_cluster" {
2 depends_on = [vault_mount.pki]
3 path = "${vault_mount.pki.path}/config/cluster"
4 ignore_absent_fields = true
5 disable_delete = true
6
7 data_json = <<EOT
8{
9 "aia_path": "http://10.1.1.100:8200/v1/${vault_mount.pki.path}",
10 "path": "http://10.1.1.100:8200/v1/${vault_mount.pki.path}"
11}
12EOT
13}
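To double-check what the generic endpoint wrote, a quick read sketch (assuming VAULT_ADDR=http://127.0.0.1:8200 and VAULT_TOKEN=root are exported):

vault read pki/config/cluster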
For the next steps in the command-line script we again need to use the vault_generic_endpoint resource for the PKI engine's URL endpoint, even though a vault_pki_secret_backend_config_urls resource exists: that resource lacks the option to configure the enable_templating property.
1resource "vault_generic_endpoint" "root_config_urls" {
2 depends_on = [vault_mount.pki, vault_generic_endpoint.root_config_cluster]
3 path = "${vault_mount.pki.path}/config/urls"
4 ignore_absent_fields = true
5 disable_delete = true
6
7 data_json = <<EOT
8{
9 "enable_templating": true,
10 "issuing_certificates": "{{cluster_aia_path}}/issuer/{{issuer_id}}/der",
11 "crl_distribution_points": "{{cluster_aia_path}}/issuer/{{issuer_id}}/crl/der",
12 "ocsp_servers": "{{cluster_path}}/ocsp"
13}
14EOT
15}
Finally, for the Root CA, we need to create a role so that clients can consume the PKI secrets engine.
1resource "vault_pki_secret_backend_role" "server2023" {
2 backend = vault_mount.pki.path
3 name = "2023-servers"
4 no_store = false
5 allow_any_name = true
6}
The configuration of the Intermediate CA PKI secrets engine starts with the same steps and resources we already used for the Root CA configuration.
1resource "vault_mount" "pki_int" {
2 path = "pki_int"
3 type = "pki"
4 max_lease_ttl_seconds = 43800 * 60
5}
6resource "vault_generic_endpoint" "int_config_cluster" {
7 path = "${vault_mount.pki_int.path}/config/cluster"
8 ignore_absent_fields = true
9 disable_delete = true
10
11 data_json = <<EOT
12{
13 "aia_path": "http://10.1.1.100:8200/v1/${vault_mount.pki_int.path}",
14 "path": "http://10.1.1.100:8200/v1/${vault_mount.pki_int.path}"
15}
16EOT
17}
18resource "vault_generic_endpoint" "int_config_urls" {
19 depends_on = [vault_mount.pki_int, vault_generic_endpoint.int_config_cluster]
20 path = "${vault_mount.pki_int.path}/config/urls"
21 ignore_absent_fields = true
22 disable_delete = true
23
24 data_json = <<EOT
25{
26 "enable_templating": true,
27 "issuing_certificates": "{{cluster_aia_path}}/issuer/{{issuer_id}}/der",
28 "crl_distribution_points": "{{cluster_aia_path}}/issuer/{{issuer_id}}/crl/der",
29 "ocsp_servers": "{{cluster_path}}/ocsp"
30}
31EOT
32}
Afterwards, we need to create a certificate signing request (CSR) that gets signed by our own Root CA. This builds the certificate chain between the two PKI secrets engines.
1resource "vault_pki_secret_backend_intermediate_cert_request" "int" {
2 backend = vault_mount.pki_int.path
3 type = vault_pki_secret_backend_root_cert.root.type
4 common_name = "learn.internal Intermediate Authority"
5}
6resource "vault_pki_secret_backend_root_sign_intermediate" "int" {
7 backend = vault_mount.pki.path
8 csr = vault_pki_secret_backend_intermediate_cert_request.int.csr
9 common_name = vault_pki_secret_backend_intermediate_cert_request.int.common_name
10 issuer_ref = "root-2023"
11 format = "pem_bundle"
12 ttl = "43800h"
13}
14resource "vault_pki_secret_backend_intermediate_set_signed" "int" {
15 backend = vault_mount.pki_int.path
16 certificate = vault_pki_secret_backend_root_sign_intermediate.int.certificate
17}
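To confirm the chain is wired up, we can fetch it from the unauthenticated ca_chain endpoint of the intermediate mount:

curl -s http://127.0.0.1:8200/v1/pki_int/ca_chain
# prints the CA chain in PEM format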
So that clients can use this secrets engine, we need to create a role.
1data "vault_pki_secret_backend_issuers" "int" {
2 depends_on = [ vault_pki_secret_backend_intermediate_set_signed.int ]
3 backend = vault_mount.pki_int.path
4}
5resource "vault_pki_secret_backend_role" "learn" {
6 backend = vault_mount.pki_int.path
7 issuer_ref = data.vault_pki_secret_backend_issuers.int.keys[0]
8 name = "learn"
9 max_ttl = 720 * 60
10 allow_any_name = true
11 no_store = false
12}
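Before wiring up ACME, we can test the role by issuing a short-lived certificate directly (test.learn.internal is just an example name; VAULT_ADDR and VAULT_TOKEN exported as before):

vault write pki_int/issue/learn common_name=test.learn.internal ttl=24h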
For the final tasks, configuring our Intermediate CA to support ACME, we are again forced to use the vault_generic_endpoint resource to apply the secrets engine tuning parameters and to enable the ACME configuration.
1resource "vault_generic_endpoint" "pki_int_tune" {
2 path = "sys/mounts/${vault_mount.pki_int.path}/tune"
3 ignore_absent_fields = true
4 disable_delete = true
5 data_json = <<EOT
6{
7 "allowed_response_headers": [
8 "Last-Modified",
9 "Location",
10 "Replay-Nonce",
11 "Link"
12 ],
13 "passthrough_request_headers": [
14 "If-Modified-Since"
15 ]
16}
17EOT
18}
19resource "vault_generic_endpoint" "pki_int_acme" {
20 depends_on = [vault_pki_secret_backend_role.learn]
21 path = "${vault_mount.pki_int.path}/config/acme"
22 ignore_absent_fields = true
23 disable_delete = true
24
25 data_json = <<EOT
26{
27 "enabled": true
28}
29EOT
30}
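Once this is applied, the ACME directory that our Caddyfile points at should answer; a quick check via the published port:

curl -s http://127.0.0.1:8200/v1/pki_int/acme/directory
# returns JSON with the newNonce, newAccount and newOrder URLs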
Because our Caddy container was already running before we applied our configuration, it is still in a failure state (by default, Caddy retries every 60s), so we restart the container:
docker restart caddy-server
Now we can observe the new behavior in the container logs: Caddy picks up our own Vault ACME setup.
docker logs caddy-server
We can now try a request to the HTTPS-enabled Caddy server with curl. We'll need to specify the root CA certificate so that curl can validate the certificate chain.
curl \
  --cacert config/root_2023_ca.crt \
  --resolve caddy-server:443:127.0.0.1 \
  https://caddy-server
Example expected output:
Hello World
A successful response indicates that Caddy is now using automatic HTTPS with Vault as its ACME CA.
Converting the tutorials from the HashiCorp developer documentation into Terraform code is not that hard. But where some Terraform resources do not expose every option, we have to fall back on the vault_generic_endpoint resource with the help of the HashiCorp Vault API documentation.
Are you interested in our courses, or do you simply have a question that needs answering? You can contact us at any time! We will do our best to answer all your questions.