After creating our virtual machine templates in multiple clouds, testing them and validating that all variants behave as expected, there is just one more question: how do we distribute them?
There are two strategies for tackling this problem: either you build the image once and then copy it to all other regions and accounts, or you build and test the image in every region from the start.
If you go for the first strategy, you want to copy your existing template to other regions. For this use case, Terraform offers the aws_ami_copy resource, which copies a given image to a different region.
On the other hand, you might also have a staging/production separation of accounts. To solve this, you can use Terraform's aws_ami_launch_permission resource to update the launch permissions. In the example below we use a flag update_launch_permission that must be set to true before the launch permissions are updated; in our pipeline this flag is set to true once a merge request is merged into the default branch.
1provider "aws" {
2 alias = "source"
3}
4provider "aws" {
5 region = var.aws_ami_region
6 alias = "dest"
7}
8data "aws_region" "source" {
9 provider = aws.source
10}
11variable "aws_ami_region" {
12 type = string
13}
14variable "aws_ami_id" {
15 type = string
16}
17variable "update_launch_permission" {
18 type = bool
19 default = false
20}
21data "aws_ami" "source" {
22 provider = aws.source
23
24 filter {
25 name = "image-id"
26 values = [var.aws_ami_id]
27 }
28}
29resource "aws_ami_copy" "copy" {
30 provider = aws.dest
31
32 name = data.aws_ami.source.name
33 description = data.aws_ami.source.description
34 tags = data.aws_ami.source.tags
35 source_ami_id = var.aws_ami_id
36 source_ami_region = data.aws_region.source.name
37}
38
39resource "aws_ami_launch_permission" "source_launch" {
40 provider = aws.source
41 count = var.update_launch_permission ? 1 : 0
42
43 image_id = var.aws_ami_id
44 account_id = var.production_account_id
45}
46
47resource "aws_ami_launch_permission" "dest_launch" {
48 provider = aws.dest
49 count = var.update_launch_permission ? 1 : 0
50
51 image_id = aws_ami_copy.copy.id
52 account_id = var.production_account_id
53}
The other tactic is to build every image in every location/region and also test it in those locations. In this case you spend more time on building and testing, but you do not have to keep track of which locations the image needs to be copied to.
For this use case HashiCorp has a solution in development. At the time of writing, HCP Packer (part of the HashiCorp Cloud Platform) is in beta and can be used for free to test it and get a first look at the implementation. The implementation is still evolving, so the information published here might change once it goes into production.
Keep in mind that you have to define all those sources within a single Packer configuration. Each build of this configuration then gets a new version (iteration) in HCP Packer, and each build is associated with a certain commit. So if your base image receives an update, you also have to create a change in your repository to produce a new version in HCP Packer.
In our current example, we use the previous source definitions for Azure and AWS to build one common image name (bucket) in HCP Packer. The same approach can be used to define sources in the different regions you are using.
We replace the build block from the previous part with the one below. In this sample, HCP Packer is also set up as a common place for the templates across cloud platforms. Afterwards you can look up the cloud-specific image ID of these templates by querying HCP Packer via Terraform. A sketch of the referenced source definitions follows the build block.
build {
  hcp_packer_registry {
    bucket_name = "UbuntuDocker"
    description = "Customized Ubuntu 21.04 Image with docker deployment"
  }

  sources = [
    "source.amazon-ebs.core",
    "source.azure-arm.core"
  ]

  # Wait until cloud-init has finished before provisioning.
  provisioner "shell" {
    inline = ["while [ ! -f /var/lib/cloud/instance/boot-finished ]; do echo 'Waiting for cloud-init...'; sleep 1; done"]
  }

  provisioner "shell" {
    execute_command = "echo 'packer' | {{ .Vars }} sudo -S -E bash '{{ .Path }}'"
    script          = "packer/scripts/setup.sh"
  }

  provisioner "ansible-local" {
    clean_staging_directory = true
    playbook_dir            = "ansible"
    galaxy_file             = "ansible/requirements.yaml"
    playbook_files          = ["ansible/${var.playbook}.yml"]
  }

  provisioner "shell" {
    execute_command = "echo 'packer' | {{ .Vars }} sudo -S -E bash '{{ .Path }}'"
    script          = "packer/scripts/cleanup.sh"
  }
}
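The build block refers to source.amazon-ebs.core, source.azure-arm.core and var.playbook, which all come from the previous parts of this series. As a reminder, a minimal sketch of the AWS source and the playbook variable could look like the following; the region, filter values, instance type and SSH user are assumptions and not necessarily the exact values used in the series. The azure-arm source is analogous and omitted here.

variable "playbook" {
  type    = string
  default = "docker" # assumption: name of the Ansible playbook built in a previous part
}

# Minimal sketch of the AWS source referenced by the build block above.
source "amazon-ebs" "core" {
  region        = "us-east-1"
  instance_type = "t3.small"
  ssh_username  = "ubuntu"
  ami_name      = "ubuntu-docker-${formatdate("YYYYMMDDhhmmss", timestamp())}"

  source_ami_filter {
    most_recent = true
    owners      = ["099720109477"] # Canonical
    filters = {
      name                = "ubuntu/images/hvm-ssd/ubuntu-hirsute-21.04-amd64-server-*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
  }
}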
Within HCP Packer you can create channels that are used to distinguish between different stages of availability. From our point of view, this feature is great for building promotion pipelines with development, staging and production channels.
There is also a limitation in this area of the current beta implementation: you cannot automate the channel assignment of a certain image version. At the moment this has to be done interactively via the UI.
The last missing part is how to use the information HCP Packer generates. In our sample we assume that you have promoted the current image to a channel named production. You can then use the following code to query HCP Packer for the latest information about the customized Ubuntu template.
1data "hcp_packer_iteration" "ubuntu" {
2 bucket_name = "UbuntuDocker"
3 channel = "production"
4}
5
6data "hcp_packer_image" "ubuntu_us_east_1" {
7 bucket_name = "UbuntuDocker"
8 cloud_provider = "aws"
9 iteration_id = data.hcp_packer_iteration.ubuntu.ulid
10 region = "us-east-1"
11}
12
13resource "aws_instance" "app_server" {
14 ami = data.hcp_packer_image.ubuntu_us_east_2.cloud_image_id
15 instance_type = "t2.micro"
16 tags = {
17 Name = "Ubuntu Docker Custom HCP"
18 }
19}
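Since the same bucket also contains the Azure variant, the equivalent lookup for Azure might look like the sketch below. The region value and the way the image ID would be consumed by an azurerm resource are assumptions; also note that the hcp provider needs HCP credentials (for example via the HCP_CLIENT_ID and HCP_CLIENT_SECRET environment variables) to perform these lookups.

# Assumed Azure counterpart: look up the Azure build of the same iteration.
data "hcp_packer_image" "ubuntu_azure" {
  bucket_name    = "UbuntuDocker"
  cloud_provider = "azure"
  iteration_id   = data.hcp_packer_iteration.ubuntu.ulid
  region         = "East US" # assumption: the Azure location used in the previous part
}

# cloud_image_id holds the managed image resource ID, which could then be passed
# to an azurerm compute resource, e.g. as the source image of a virtual machine.
output "azure_image_id" {
  value = data.hcp_packer_image.ubuntu_azure.cloud_image_id
}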
HCP Packer is a nice addition for maintaining all the different image templates, versions and variants in a common place. With the ability to use channels and promote changes to certain follower channels, we get a manageable release pipeline for our image creation process.
In the current beta stage, we are just missing the ability to promote changes as Infrastructure as Code through CI/CD automation. But we are hopeful that this feature will follow.
Are you interested in our courses, or do you simply have a question that needs answering? You can contact us at any time! We will do our best to answer all your questions.
Contact us