In this part we will get started with Docker and build our iac-env Docker image, which will be the star of the show! When we have our iac-env Docker image we can immediately start using it to run infrastructure as code tools. At a high level, our iac-env Docker image will do this for us:
The important bits are:
- We start an iac-env Docker image container (we’ll write a script to help with this later).
- When iac-env starts, we are still in our directory, but within the context of the container. This means we can run any of the tools in the container against files in our directory, as though the tools were on our local machine.
- We exit the iac-env container and we are back to normal again.
That is really all there is to how iac-env would be started and have tools invoked upon it. For this experiment I opted to create a monolithic Docker image to run all the tools within. I convinced my brain that the reason for doing this is that it keeps everything simple and self contained in a single iac-env Docker image.
First up, if you don’t already have Docker installed on your computer, you will need to do this before going further. You can see how to install Docker at the official site here: https://docs.docker.com/get-docker. We only need the community version. Installing Docker is a bit more involved on Linux but it’s not too bad.
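Before moving on, it’s worth confirming that both the Docker client and the daemon are working. A quick check, using Docker’s public hello-world image:
$ docker --version
$ docker run --rm hello-world
If the hello-world container prints its greeting, Docker is good to go.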
Note: I will use the terms Docker image and Docker container - when I use image I’m referring to the base image that isn’t necessarily running. You can see all Docker images on your machine with docker images. When I use container I’m referring to an instance of one of our images which is actively running. You can see all running containers with docker ps and all containers with docker ps -a. Sometimes a container is in a stopped state and you will only see it with the -a flag.
We will be using a Docker feature to define our own network bridge as described in the official docs: https://docs.docker.com/network/bridge/. Using a custom bridge lets us have a network stack just for our own running Docker containers, and allows our containers to see other running containers on the local host environment using their running Docker container name to identify them.
Another way to have running containers ‘see’ each other on the local host is to use the --network host flag, but this only works on Linux! I’d like this experiment to run on Linux or macOS so we will need to use our own custom network - which is probably not a bad thing anyway.
To see the default Docker networks that are already available, enter the following:
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
c92ad1162f20 bridge bridge local
3d9e4475e40c host host local
fa4a3b23a0f4 none null local
To create our custom network enter the following command:
docker network create iac-env
Show the available Docker networks again:
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
c92ad1162f20 bridge bridge local
3d9e4475e40c host host local
17c399b0f644 iac-env bridge local
fa4a3b23a0f4 none null local
Notice we now have an iac-env network in the list. You only need to create this network once, but you must not forget to do it or our Docker containers won’t be able to talk to each other!
Note: As per the documentation https://docs.docker.com/network/network-tutorial-standalone/ once we have a Docker container running in our custom network, we can reach it by its container name. For example, if we were running a Jenkins container and a Bitbucket container both within the iac-env network, then from within that network we can reach Jenkins via http://jenkins and Bitbucket via http://bitbucket - Docker resolves the IP address of each container automatically!
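If you’d like to see that name resolution in action right away, here is a quick optional experiment using the public nginx and curlimages/curl images (the container name webserver is just for illustration):
docker run --rm --detach --name webserver --network iac-env nginx
docker run --rm --network iac-env curlimages/curl --silent http://webserver
docker stop webserver
The second command should print the nginx welcome page HTML, fetched purely via the container name.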
Next we can set up the files and folder structure to help us with the creation of our iac-env Docker image. Create a folder to house our setup scripts and stub out empty text files (for now) along the way, just like below:
.
└─+ setup
└─+ iac-env
├── Dockerfile
├── create-docker-image.sh
└─+ resources
├─+ docker
│ ├── bash.bashrc
│ ├── iac-env-help
│ ├─+ kitchen-setup
│ │ ├── main.tf
│ │ ├── kitchen.yml
│ │ └─+ test
│ │ ├─+ fixtures
│ │ │ └─+ terraform_fixture_module
│ │ │ └── main.tf
│ │ └─+ integration
│ │ └─+ kitchen_integration_suite
│ │ ├─+ controls
│ │ │ └── basic.rb
│ │ └── inspec.yml
│ ├─+ opt
│ │ └─+ iac-env
│ │ └── iac-env-help.txt
│ └── provision.sh
└── iac-env
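If you don’t feel like creating all of those by hand, the following shell commands will stub out the same structure shown above (run them from the folder where you want setup to live):
mkdir -p setup/iac-env/resources/docker/kitchen-setup/test/fixtures/terraform_fixture_module
mkdir -p setup/iac-env/resources/docker/kitchen-setup/test/integration/kitchen_integration_suite/controls
mkdir -p setup/iac-env/resources/docker/opt/iac-env
touch setup/iac-env/Dockerfile
touch setup/iac-env/create-docker-image.sh
touch setup/iac-env/resources/docker/bash.bashrc
touch setup/iac-env/resources/docker/iac-env-help
touch setup/iac-env/resources/docker/provision.sh
touch setup/iac-env/resources/docker/kitchen-setup/main.tf
touch setup/iac-env/resources/docker/kitchen-setup/kitchen.yml
touch setup/iac-env/resources/docker/kitchen-setup/test/fixtures/terraform_fixture_module/main.tf
touch setup/iac-env/resources/docker/kitchen-setup/test/integration/kitchen_integration_suite/inspec.yml
touch setup/iac-env/resources/docker/kitchen-setup/test/integration/kitchen_integration_suite/controls/basic.rb
touch setup/iac-env/resources/docker/opt/iac-env/iac-env-help.txt
touch setup/iac-env/resources/iac-env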
Our setup/iac-env folder will contain everything needed to construct our Docker image, including:
- Dockerfile: This will describe to Docker how to orchestrate the build of our image.
- create-docker-image.sh: This is a convenience script for us to kick off the Docker image build.
- resources/docker: This folder will be copied into the Docker image during its build and used internally to provision all the tools and assets.
- resources/iac-env: This will be a small utility script we can install on our local computer to make it super easy to start a new instance of our iac-env Docker image.
The remaining files will be explained as we go.
Edit setup/iac-env/Dockerfile with the following:
FROM ubuntu:bionic
LABEL description 'Infrastructure as Code - Environment (iac-env).'
ENV TF_PLUGIN_CACHE_DIR=/opt/iac-env/terraform-plugins
ENV CHEF_LICENSE=accept-silent
COPY resources/docker /tmp/resources
RUN /tmp/resources/provision.sh
This is the description for how to construct our Docker image. There are a few things to explain:
- ENV TF_PLUGIN_CACHE_DIR: This environment variable will be detected by the Terraform tool and overrides the place where it will look to find plugins. By doing this we can bake in our opinionated Terraform plugins and avoid having to round trip to the Internet to fetch plugins every time an instance of iac-env is started.
- ENV CHEF_LICENSE: This environment variable is needed to allow the Terraform Kitchen suite to work.
Other than that, we are simply copying the resources/docker folder into the temp folder of the new ubuntu:bionic image, then we instruct it to execute the provision.sh file inside itself which carries out all the gory details of installing the tools.
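Once the image is built later in this part, you can sanity check that those variables were baked in - the output should look something like:
$ docker run --rm iac-env:latest env | grep -E 'TF_PLUGIN_CACHE_DIR|CHEF_LICENSE'
TF_PLUGIN_CACHE_DIR=/opt/iac-env/terraform-plugins
CHEF_LICENSE=accept-silent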
Edit setup/iac-env/create-docker-image.sh with the following:
#!/usr/bin/env bash
pushd "$(cd "$(dirname "$0")" && pwd)"
docker build -t iac-env .
popd
Nothing amazing here, the only important part is the command which kicks off the Docker image build with the tag name of iac-env from the current folder (.):
docker build -t iac-env .
Mark the shell script as executable with chmod +x create-docker-image.sh.
Edit setup/iac-env/resources/iac-env with the following:
#!/usr/bin/env bash
docker run \
--rm \
--interactive \
--tty \
--network iac-env \
--user $(id -u):$(id -g) \
--volume /etc/passwd:/etc/passwd:ro \
--volume /etc/group:/etc/group:ro \
--volume "$HOME":"$HOME" \
--volume "$(pwd)":/iac-env \
--workdir=/iac-env \
--env USER \
--env HOME \
--env AWS_ACCESS_KEY_ID \
--env AWS_SECRET_ACCESS_KEY \
--env AWS_DEFAULT_REGION \
iac-env:latest
This script is how we will be able to type iac-env in any terminal session and have our Docker image started for us. We are starting iac-env in interactive mode, mapping the current user and home folders into the container. We also mount the current directory ($(pwd)) to an internal directory named /iac-env. Other than that, we are attaching a few environment variables that are needed by the container to know where the user’s home folder is and what (if any) AWS environment credentials exist.
Note: By default, a non-root user has no user profile or home folder in a Docker container. That’s why we need the additional volume mappings and environment variables to map the current user within the container.
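You can see the effect of the /etc/passwd mapping with a quick experiment against the stock ubuntu:bionic image (the UID and username shown will differ on your machine):
$ docker run --rm --user $(id -u):$(id -g) ubuntu:bionic whoami
whoami: cannot find name for user ID 1000
$ docker run --rm --user $(id -u):$(id -g) --volume /etc/passwd:/etc/passwd:ro ubuntu:bionic whoami
your-username
Without the mapping, the container has no idea who your user ID belongs to; with it, your username resolves normally.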
The following line associates the Docker container with the iac-env network which we created earlier:
--network iac-env
Mark the file as executable, then copy it into your /usr/local/bin folder - you can do this with:
sudo cp iac-env /usr/local/bin/iac-env
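Alternatively, the install command marks the permissions and copies the file in one step:
sudo install -m 755 iac-env /usr/local/bin/iac-env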
Make sure Docker is running on your computer, then try out your new iac-env shortcut:
$ iac-env
Unable to find image 'iac-env:latest' locally
docker: Error response from daemon: pull access denied for iac-env, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
Docker couldn’t find the iac-env:latest image, but that’s to be expected because we haven’t actually created it yet! With this convenience script we can start iac-env from any terminal session easily.
This bit is probably quite optional, but I had a bit of fun while doing it so I’ll include it. Edit setup/iac-env/resources/docker/bash.bashrc with the following and mark the file as executable:
#!/usr/bin/env sh
cat << EOF
----------------------------------------------------------------------
█ ▄▀█ █▀▀ ▄▄ █▀▀ █▄░█ █░█
█ █▀█ █▄▄ ░░ ██▄ █░▀█ ▀▄▀ v1.0.0
----------------------------------------------------------------------
https://github.com/MarcelBraghetto/iac-env
Working directory mounted at /iac-env
Enter 'iac-env-help' for help, 'exit' to leave.
EOF
We will be placing this file into our Docker image such that when starting iac-env we are greeted with this cool message :) It’s a pretty dorky thing to do but I like it!
We want to be able to print out some basic help information to a user so we will author a help text file that will be baked into the image.
Edit setup/iac-env/resources/docker/opt/iac-env/iac-env-help.txt with the following:
----------------------------------------------------------------------
.:: iac-env - help ::.
https://github.com/MarcelBraghetto/iac-env
----------------------------------------------------------------------
Welcome to Infrastructure as Code Environment - iac-env!
Your working directory is mounted as /iac-env
Available tools in iac-env:
- vi: Basic command line text editor, useful for odd jobs ...
- terraform: https://github.com/hashicorp/terraform
This runs the Terraform CLI tool.
Important: The 'TF_PLUGIN_CACHE_DIR' environment variable is set to
point at /opt/iac-env/terraform-plugins to avoid round tripping to
fetch plugins from a remote. This does mean that if other Terraform
plugins are required apart from the ones bundled into this environment
Docker image, the Docker image needs to be updated to include them and
a new Docker image version created.
If you don't want to use the plugin cache, you can unset or change
the environment variable:
$> unset TF_PLUGIN_CACHE_DIR
Useful commands - most require Terraform to be initialised first:
1. Initialise Terraform in your working dir:
$> terraform init
2. Run basic Terraform validation on your code:
$> terraform validate
3. To format all your Terraform code:
$> terraform fmt -recursive
4. To generate a resource graph:
$> terraform graph | dot -Tpng > graph.png
- tflint: https://github.com/terraform-linters/tflint
This is a Terraform linter which can detect problems in AWS configuration
code - problems that are not structural errors and that the default
Terraform 'validate' command won't catch.
- kitchen: https://github.com/newcontext-oss/kitchen-terraform
This package allows the running of Terraform Kitchen based test suites
which use InSpec and orchestrate the required Terraform commands to
perform them.
Useful commands - note that a valid Terraform Kitchen project structure
must exist for these to work:
1. Converge your current project ready to run validation:
$> kitchen converge
2. Run the Kitchen verification test suites in the project:
$> kitchen verify
3. Destroy the Kitchen test session:
$> kitchen destroy
- End of help -
This is simply a text file that will be printed on the screen if the user enters the iac-env-help command while inside the environment. We will be copying it into the image so it stays there.
To show the help text content we will write a small helper script that will be added to the Docker image. Edit setup/iac-env/resources/docker/iac-env-help with the following and mark the file as executable:
#!/usr/bin/env sh
less /opt/iac-env/iac-env-help.txt
When we build our Docker image, we are going to actually run a Terraform Kitchen project inside the Docker build itself. This proves that the image has been constructed correctly by exercising the Terraform and Terraform Kitchen tooling. I won’t go into huge detail about how to write Chef InSpec tests inside Terraform Kitchen (I am still learning it myself) but there are lots of resources on the interweb about it. Edit and enter the following into the files under the setup/iac-env/resources/docker/kitchen-setup folder:
kitchen-setup/kitchen.yml
---
# https://newcontext-oss.github.io/kitchen-terraform/getting_started.html
# https://www.rubydoc.info/github/newcontext-oss/kitchen-terraform/Kitchen/Driver/Terraform
driver:
  name: terraform
  root_module_directory: test/fixtures/terraform_fixture_module

# https://www.rubydoc.info/github/newcontext-oss/kitchen-terraform/Kitchen/Provisioner/Terraform
provisioner:
  name: terraform

# https://www.rubydoc.info/github/newcontext-oss/kitchen-terraform/Kitchen/Verifier/Terraform
verifier:
  name: terraform
  systems:
    - name: basic
      backend: local
      controls:
        - file_check

platforms:
  - name: terraform

suites:
  - name: kitchen_integration_suite
The kitchen.yml file defines the overall structure of our Terraform Kitchen test suite. In this case we will be performing a simple file system check, rather than an AWS integration. This should at least give us enough confidence that the testing tools are operating ok.
kitchen-setup/main.tf
terraform {
  required_providers {
    aws  = "~> 2.57.0"
    null = "~> 2.1.2"
  }
}

resource "null_resource" "create_file" {
  provisioner "local-exec" {
    command = "echo 'this is my first test' > foobar"
  }
}
This is a Terraform source file written in HCL - HashiCorp Configuration Language: https://github.com/hashicorp/hcl. This is what declares resources to provision against different environments. In this example we are using the null_resource, which just runs a shell command that prints a message into a local file named foobar. If we were to run the Terraform tooling over this script, the output would be the creation of the foobar file.
kitchen-setup/test/fixtures/terraform_fixture_module/main.tf
module "kitchen_terraform_test" {
source = "../../.."
}
This is a test fixture and represents a single scenario that tests can be run upon. In this case we have a Terraform module named kitchen_terraform_test, which effectively subclasses the Terraform code three directories above it (because of ../../..). Our test suite will execute this fixture when running its tests. In more advanced testing, the test module can override different aspects of the code it is subclassing to inject testing variables or configurations.
You may notice that in the kitchen.yml file we had the following line:
root_module_directory: test/fixtures/terraform_fixture_module
This is how the test module is associated with the test suite.
kitchen-setup/test/integration/kitchen_integration_suite/inspec.yml
---
name: default
This file is needed for the test suite to pick up the controls in the controls folder.
kitchen-setup/test/integration/kitchen_integration_suite/controls/basic.rb
# frozen_string_literal: true

control "file_check" do
  describe file('./test/fixtures/terraform_fixture_module/foobar') do
    it { should exist }
  end
end
InSpec tests are written in Ruby. The test we are writing above uses the file resource, for which the InSpec DSL documentation can be found here: https://www.inspec.io/docs/reference/resources/file.
You can find the documentation for all the other resource types that InSpec supports here: https://www.inspec.io/docs/reference/resources
For our test we are asserting that there is a file named foobar in the test fixture folder. We expect it to be there because when Terraform is run over the main.tf file in the test/fixtures/terraform_fixture_module folder, it should produce a file named foobar in the same folder.
Ok, this is the big one: we will author a shell script whose job is to programmatically install and configure all the tools we need inside our Docker image.
Important: In our Dockerfile we run COPY resources/docker /tmp/resources before the provisioning script - meaning we can safely assume that all our resource files are in /tmp/resources during the provisioning.
Edit setup/iac-env/resources/docker/provision.sh - I’ll go a section at a time so I can explain each part:
Print system information
Start off the script with a bit of system information printed out so we can see what the operating system looks like:
#!/usr/bin/env sh
set -e
# Utility script to automate the internal provisioning of the iac-env Docker image.
echo '----------------------------------------------------'
echo 'Operating system details:'
cat /etc/*release
Copy resource scripts
We will copy our help content, our welcome message script and the help command script from the resources into the appropriate places in the file system:
echo '----------------------------------------------------'
echo 'Copying iac-env helper files ...'
# Help content
cp -r /tmp/resources/opt/iac-env /opt
# This provides a nice welcome message.
cp /tmp/resources/bash.bashrc /etc/bash.bashrc
# This provides an 'iac-env-help' command.
cp /tmp/resources/iac-env-help /usr/local/bin/iac-env-help
chmod a+rx /usr/local/bin/iac-env-help
Install APT packages
The base ubuntu:bionic Docker image will need a few extra packages installed so our tooling can operate. A few of these packages will actually be removed at the end of the build as they are only needed temporarily:
echo '----------------------------------------------------'
echo 'Installing APT packages ...'
apt-get update
apt-get --yes --no-install-recommends install \
less=487-0.1 \
groff=1.22.3-10 \
curl=7.58.0-2ubuntu3.8 \
unzip=6.0-21ubuntu1 \
graphviz=2.40.1-2 \
vim-tiny=2:8.0.1453-1ubuntu1.3 \
ruby=1:2.5.1 \
ruby-dev=1:2.5.1 \
build-essential=12.4ubuntu1
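Pinning exact versions like this keeps the build deterministic, but it also means the build will break if Ubuntu’s archives drop one of these versions. If that happens, you can look up what is currently available - for example, to check curl:
docker run --rm ubuntu:bionic sh -c 'apt-get update -qq && apt-cache madison curl'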
Amazon AWS CLI
We will install the Amazon AWS command line tools so we can run them while inside our environment:
Note: We are pruning a few folders after installation to free up some space but you don’t strictly need to do this.
# https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-linux.html#cliv2-linux-install
echo '----------------------------------------------------'
echo 'Installing Amazon AWS CLI ...'
curl -o /tmp/awscliv2.zip https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip
unzip -q /tmp/awscliv2.zip -d /tmp
/tmp/aws/install
rm -rf /usr/local/aws-cli/v2/*/dist/aws_completer
rm -rf /usr/local/aws-cli/v2/*/dist/awscli/data/ac.index
rm -rf /usr/local/aws-cli/v2/*/dist/awscli/examples
Install Terraform
Terraform is installed by downloading a single binary file and placing it into our /usr/local/bin folder so it is available by default on the command line:
# https://www.terraform.io/docs/commands/index.html
echo '----------------------------------------------------'
echo 'Installing Terraform ...'
curl -o /tmp/terraform.zip https://releases.hashicorp.com/terraform/0.12.24/terraform_0.12.24_linux_amd64.zip
unzip -q /tmp/terraform.zip -d /usr/local/bin
Install Terraform plugins
This is where we start to get opinionated about our environment. We are going to preload two Terraform plugins - aws and null - at specific versions. We have also configured our environment through the Dockerfile via the TF_PLUGIN_CACHE_DIR environment variable to instruct Terraform to only look in the /opt/iac-env/terraform-plugins folder when resolving plugins.
The advantage of this is that the iac-env container forces the use of those specific plugins and avoids needing to download them from the internet every time Terraform is used.
The disadvantage is that if a project needs to use different plugins, or different versions of the baked-in plugins, they have to be added to a new version of the Docker image - though in my opinion this is actually more of an advantage as it enforces a strict deterministic view of the world :) It also means you could have versioned Docker images so older software can still use older images.
# https://www.terraform.io/docs/commands/cli-config.html
# Used by TF_PLUGIN_CACHE_DIR configured via the Dockerfile
echo '----------------------------------------------------'
echo 'Creating Terraform plugin cache directory ...'
mkdir -p /opt/iac-env/terraform-plugins/linux_amd64
echo '----------------------------------------------------'
echo 'Installing Terraform AWS Plugin ...'
curl -o /tmp/terraform-provider-aws.zip https://releases.hashicorp.com/terraform-provider-aws/2.57.0/terraform-provider-aws_2.57.0_linux_amd64.zip
unzip -q /tmp/terraform-provider-aws.zip -d /opt/iac-env/terraform-plugins/linux_amd64
echo '----------------------------------------------------'
echo 'Installing Terraform Null Resource Plugin ...'
curl -o /tmp/terraform-provider-null.zip https://releases.hashicorp.com/terraform-provider-null/2.1.2/terraform-provider-null_2.1.2_linux_amd64.zip
unzip -q /tmp/terraform-provider-null.zip -d /opt/iac-env/terraform-plugins/linux_amd64
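Once the image is built, you can confirm the plugins landed in the cache from inside an iac-env session - the binary names should look something like this:
bash-4.4$ ls /opt/iac-env/terraform-plugins/linux_amd64
terraform-provider-aws_v2.57.0_x4  terraform-provider-null_v2.1.2_x4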
TFLint
This is a third party tool that offers a static code analysis perspective not offered by the standard Terraform tool itself. It will detect problems that aren’t syntax errors but that represent incorrect values in configuration code - for example, an AWS instance type that doesn’t exist.
# https://github.com/terraform-linters/tflint
echo '----------------------------------------------------'
echo 'Installing TFLint ...'
curl -L -o /tmp/tflint.zip https://github.com/terraform-linters/tflint/releases/download/v0.15.4/tflint_linux_amd64.zip
unzip -q /tmp/tflint.zip -d /usr/local/bin
Terraform Kitchen
This is the test suite framework for running automated tests on our infrastructure code. Note that I am deliberately not using Ruby bundler here, since no other Ruby programs are being installed into the image. A direct gem install avoids forcing a user to have a Gemfile in their project and to type bundle exec kitchen every time - instead, a user can just enter kitchen to run the Terraform Kitchen tooling:
# https://kitchen.ci/docs/getting-started/introduction/
# https://github.com/newcontext-oss/kitchen-terraform
# https://newcontext-oss.github.io/kitchen-terraform/getting_started.html
echo 'Installing Kitchen - Terraform ...'
gem install rake --version 12.3.1 --no-ri --no-rdoc
gem install kitchen-terraform --version 5.3.0 --no-ri --no-rdoc
Clean up APT packages
At this point of the provisioning script we have installed all the tools we want so now is a good time to do some cleanup by removing any redundant packages or temporary files. I found that this step removed hundreds of megabytes from the final Docker image size:
echo '----------------------------------------------------'
echo 'Removing redundant build tools ...'
apt-get remove --yes --purge ruby-dev build-essential
apt-get autoremove --yes
apt-get clean
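Because the whole of provision.sh runs as a single RUN step in our Dockerfile, files removed here never make it into the final image layer at all. After the build you can inspect the per-layer sizes with:
docker history iac-env:latest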
Update file permissions
A few files we have in the /opt
folder won’t be usable by non root users at the moment so we need to mark them to allow this:
echo '----------------------------------------------------'
echo 'Marking files in /opt as accessible to all users ...'
chmod -R +rx /opt
Running verification Terraform Kitchen tests
I put this here so we can actually run a real Terraform Kitchen test suite as part of the Docker image build. In a way its like a test for the Docker image build itself - if the test suite passes it means that at least some of our core tooling is setup correctly and is working:
echo '----------------------------------------------------'
echo 'Running Terraform Kitchen test suite ...'
cd /tmp/resources/kitchen-setup
kitchen verify
kitchen destroy
Final clean up
We can now delete everything in the /tmp folder, which will remove all our provisioning resources and temporary files. There is no need to leave them in the final Docker image:
echo '----------------------------------------------------'
echo 'Removing provisioning resources ...'
rm -r /tmp/*
echo '----------------------------------------------------'
echo 'Done, iac-env:latest Docker image is ready to use!'
echo '----------------------------------------------------'
Mark setup/iac-env/resources/docker/provision.sh as executable, then navigate into the setup/iac-env folder and run:
./create-docker-image.sh
Grab a cup of coffee - it takes a little while to complete - and check out the awesome iac-env Docker image that is generated at the end! When the build is complete, have a look at your local Docker images like so:
$ docker images
REPOSITORY TAG IMAGE ID SIZE
iac-env latest 1541cb836ece 714MB
ubuntu bionic c3c304cb4f22 64.2MB
Note that we have an ubuntu:bionic image because that was the base image we used, and we have a shiny new iac-env:latest image too!
Now, at this point, you can try out the iac-env script we wrote earlier:
$ iac-env
----------------------------------------------------------------------
█ ▄▀█ █▀▀ ▄▄ █▀▀ █▄░█ █░█
█ █▀█ █▄▄ ░░ ██▄ █░▀█ ▀▄▀ v1.0.0
----------------------------------------------------------------------
https://github.com/MarcelBraghetto/iac-env
Working directory mounted at /iac-env
Enter 'iac-env-help' for help, 'exit' to leave.
bash-4.4$
Sweet huh? We are now inside iac-env in the current directory. You can enter iac-env-help to see the help content we made earlier. While you are here, try a few tools:
$ iac-env
----------------------------------------------------------------------
█ ▄▀█ █▀▀ ▄▄ █▀▀ █▄░█ █░█
█ █▀█ █▄▄ ░░ ██▄ █░▀█ ▀▄▀ v1.0.0
----------------------------------------------------------------------
https://github.com/MarcelBraghetto/iac-env
Working directory mounted at /iac-env
Enter 'iac-env-help' for help, 'exit' to leave.
bash-4.4$ aws --version
aws-cli/2.0.10 Python/3.7.3 Linux/4.19.76-linuxkit botocore/2.0.0dev14
bash-4.4$ terraform -v
Terraform v0.12.24
bash-4.4$ tflint -v
TFLint version 0.15.4
bash-4.4$ kitchen -v
Test Kitchen version 2.4.0
bash-4.4$
Enter exit to leave iac-env and return to your own computer.
Let’s do a small experiment now that we have our iac-env image. Copy the setup/iac-env/resources/docker/kitchen-setup folder to somewhere else on your computer - it totally doesn’t matter where. Open a new terminal session in your copied folder, start a new iac-env session, then run ls to show the files in the folder:
$ iac-env
bash-4.4$ ls
main.tf test
Now we will run the Terraform code to produce the foobar output file. This is a sequence of the following commands, which represent a typical Terraform lifecycle for provisioning what is declared in the Terraform code:
terraform init
terraform plan
terraform apply
bash-4.4$ terraform init
Initializing the backend...
Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "aws" (hashicorp/aws) 2.57.0...
- Downloading plugin for provider "null" (hashicorp/null) 2.1.2...
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
bash-4.4$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# null_resource.create_file will be created
+ resource "null_resource" "create_file" {
+ id = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
bash-4.4$ terraform apply
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# null_resource.create_file will be created
+ resource "null_resource" "create_file" {
+ id = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
null_resource.create_file: Creating...
null_resource.create_file: Provisioning with 'local-exec'...
null_resource.create_file (local-exec): Executing: ["/bin/sh" "-c" "echo 'this is my first test' > foobar"]
null_resource.create_file: Creation complete after 0s [id=7576542584539425055]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Neat huh? Let’s see if our foobar file was indeed created by Terraform:
bash-4.4$ ls
foobar main.tf terraform.tfstate test
bash-4.4$ cat foobar
this is my first test
Let’s run the test suite now: delete the foobar file and run kitchen verify - this will kick off the Terraform Kitchen project and run the tests in the test fixture, automating all the Terraform orchestration for us:
bash-4.4$ kitchen verify
-----> Starting Test Kitchen (v2.4.0)
-----> Creating <kitchen-integration-suite-terraform>...
$$$$$$ Verifying the Terraform client version is in the supported interval of < 0.13.0, >= 0.11.4...
$$$$$$ Reading the Terraform client version...
Terraform v0.12.24
$$$$$$ Finished reading the Terraform client version.
$$$$$$ Finished verifying the Terraform client version.
$$$$$$ Initializing the Terraform working directory...
Upgrading modules...
- kitchen_terraform_test in ../../..
Initializing the backend...
Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "null" (hashicorp/null) 2.1.2...
- Downloading plugin for provider "aws" (hashicorp/aws) 2.57.0...
Terraform has been successfully initialized!
$$$$$$ Finished initializing the Terraform working directory.
$$$$$$ Creating the kitchen-terraform-kitchen-integration-suite-terraform Terraform workspace...
Created and switched to workspace "kitchen-terraform-kitchen-integration-suite-terraform"!
You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.
$$$$$$ Finished creating the kitchen-terraform-kitchen-integration-suite-terraform Terraform workspace.
Finished creating <kitchen-integration-suite-terraform> (0m2.80s).
-----> Converging <kitchen-integration-suite-terraform>...
$$$$$$ Verifying the Terraform client version is in the supported interval of < 0.13.0, >= 0.11.4...
$$$$$$ Reading the Terraform client version...
Terraform v0.12.24
+ provider.aws v2.57.0
+ provider.null v2.1.2
$$$$$$ Finished reading the Terraform client version.
$$$$$$ Finished verifying the Terraform client version.
$$$$$$ Selecting the kitchen-terraform-kitchen-integration-suite-terraform Terraform workspace...
$$$$$$ Finished selecting the kitchen-terraform-kitchen-integration-suite-terraform Terraform workspace.
$$$$$$ Downloading the modules needed for the Terraform configuration...
- kitchen_terraform_test in ../../..
$$$$$$ Finished downloading the modules needed for the Terraform configuration.
$$$$$$ Validating the Terraform configuration files...
Success! The configuration is valid.
$$$$$$ Finished validating the Terraform configuration files.
$$$$$$ Building the infrastructure based on the Terraform configuration...
module.kitchen_terraform_test.null_resource.create_file: Creating...
module.kitchen_terraform_test.null_resource.create_file: Provisioning with 'local-exec'...
module.kitchen_terraform_test.null_resource.create_file (local-exec): Executing: ["/bin/sh" "-c" "echo 'this is my first test' > foobar"]
module.kitchen_terraform_test.null_resource.create_file: Creation complete after 0s [id=621622174594208428]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
$$$$$$ Finished building the infrastructure based on the Terraform configuration.
$$$$$$ Reading the output variables from the Terraform state...
$$$$$$ Finished reading the output variables from the Terraform state.
$$$$$$ Parsing the Terraform output variables as JSON...
$$$$$$ Finished parsing the Terraform output variables as JSON.
$$$$$$ Writing the output variables to the Kitchen instance state...
$$$$$$ Finished writing the output varibales to the Kitchen instance state.
$$$$$$ Writing the input variables to the Kitchen instance state...
$$$$$$ Finished writing the input variables to the Kitchen instance state.
Finished converging <kitchen-integration-suite-terraform> (0m5.39s).
-----> Setting up <kitchen-integration-suite-terraform>...
Finished setting up <kitchen-integration-suite-terraform> (0m0.00s).
-----> Verifying <kitchen-integration-suite-terraform>...
$$$$$$ Reading the Terraform input variables from the Kitchen instance state...
$$$$$$ Finished reading the Terraform input variables from the Kitchen instance state.
$$$$$$ Reading the Terraform output variables from the Kitchen instance state...
$$$$$$ Finished reading the Terraform output varibales from the Kitchen instance state.
$$$$$$ Verifying the systems...
$$$$$$ Verifying the 'basic' system...
Profile: default
Version: (not specified)
Target: local://
✔ file_check: File ./test/fixtures/terraform_fixture_module/foobar
✔ File ./test/fixtures/terraform_fixture_module/foobar is expected to exist
Profile Summary: 1 successful control, 0 control failures, 0 controls skipped
Test Summary: 1 successful, 0 failures, 0 skipped
$$$$$$ Finished verifying the 'basic' system.
$$$$$$ Finished verifying the systems.
Finished verifying <kitchen-integration-suite-terraform> (0m0.19s).
-----> Test Kitchen is finished. (0m9.73s)
The foobar file was created in the test/fixtures/terraform_fixture_module folder instead this time, because it was actually the Terraform main.tf file in that folder that the test was run upon.
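If you want to confirm this yourself, list the fixture folder from within the session - the output should look something like:
bash-4.4$ ls test/fixtures/terraform_fixture_module
foobar  main.tf  terraform.tfstate.d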
It’s good practice to destroy your local Terraform changes when you are done:
bash-4.4$ kitchen destroy
-----> Starting Test Kitchen (v2.4.0)
-----> Destroying <kitchen-integration-suite-terraform>...
$$$$$$ Verifying the Terraform client version is in the supported interval of < 0.13.0, >= 0.11.4...
$$$$$$ Reading the Terraform client version...
Terraform v0.12.24
+ provider.aws v2.57.0
+ provider.null v2.1.2
$$$$$$ Finished reading the Terraform client version.
$$$$$$ Finished verifying the Terraform client version.
$$$$$$ Initializing the Terraform working directory...
Initializing modules...
Initializing the backend...
Initializing provider plugins...
Terraform has been successfully initialized!
$$$$$$ Finished initializing the Terraform working directory.
$$$$$$ Selecting the kitchen-terraform-kitchen-integration-suite-terraform Terraform workspace...
$$$$$$ Finished selecting the kitchen-terraform-kitchen-integration-suite-terraform Terraform workspace.
$$$$$$ Destroying the Terraform-managed infrastructure...
module.kitchen_terraform_test.null_resource.create_file: Refreshing state... [id=621622174594208428]
module.kitchen_terraform_test.null_resource.create_file: Destroying... [id=621622174594208428]
module.kitchen_terraform_test.null_resource.create_file: Destruction complete after 0s
Destroy complete! Resources: 1 destroyed.
$$$$$$ Finished destroying the Terraform-managed infrastructure.
$$$$$$ Selecting the default Terraform workspace...
Switched to workspace "default".
$$$$$$ Finished selecting the default Terraform workspace.
$$$$$$ Deleting the kitchen-terraform-kitchen-integration-suite-terraform Terraform workspace...
Deleted workspace "kitchen-terraform-kitchen-integration-suite-terraform"!
$$$$$$ Finished deleting the kitchen-terraform-kitchen-integration-suite-terraform Terraform workspace.
Finished destroying <kitchen-integration-suite-terraform> (0m5.23s).
-----> Test Kitchen is finished. (0m6.69s)
Exit iac-env with the exit command.
I hope that demonstrates how we can now start using our infrastructure as code environment! Next up we will look at how to incorporate iac-env into a Jenkins build pipeline to allow our code to be highly automated.
Source code can be found at: https://github.com/MarcelBraghetto/iac-env
Continue to Part 3: Setup Bitbucket.
End of part 2