Terraform Pipeline in Azure DevOps
When I started my job at Meta Bytes, one of my first tasks was to design and implement the infrastructure for one of our projects. I took this as my opportunity to do as good an implementation of Terraform with Azure DevOps as I could. My experience was that there was no single guide for this, which is what I want to create here. You should be able to find all the information you need in this post, but I expect you to have basic knowledge of Azure to start with.
Preparation #
Authentication #
In order to get Terraform working in Azure Pipelines we need to authenticate with Azure. We can't use regular user authentication for this since we want it to run unattended. The solution is to create an app registration in Azure AD.
Go to the Azure AD portal and create a new app registration under "App registrations"; just enter a name. We do not need to enter any more information.
Now we need to create a secret in order to authenticate with our app registration. Do this under "Certificates & secrets" and then create a new "Client secret".
You will then get a menu where you can choose how long this secret is valid. Standard is 6 months, which means you will have to create a new secret within 6 months and update it everywhere it is used. If you do not, you will no longer be able to authenticate with Azure and will not be able to deploy any resources, so please take note of the expiration.
After this you will get the value of the secret, and this is the only time it will be visible to you. Copy it for now so we can store it in a secure place in the next step. You will also need your tenant ID and application/client ID, which you will find in the "Overview" menu of your new app registration.
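If you prefer the command line, the same app registration and secret can be created with the Azure CLI. A sketch, where the display name is just an example and `<appId>` is a placeholder for the ID returned by the first command:

```shell
# Create the app registration (the name is only an example)
az ad app create --display-name "terraform-pipeline"

# Create a service principal for it, using the appId returned above
az ad sp create --id <appId>

# Add a client secret valid for one year; copy the returned value now,
# it will not be shown again
az ad app credential reset --id <appId> --years 1
```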
Azure Resources #
Before we can deploy with Terraform we need somewhere to actually deploy to. I assume here that you already have a subscription to use, or that you set up a new one for testing. Then we need to assign the app registration the "Contributor" role on the subscription where we want to deploy resources with Terraform.
For an even better implementation we can create a custom role as described here. Using the custom role below in JSON format, the app registration is allowed all actions but cannot delete any role assignments. This ensures that we do not delete any existing permissions.
```json
{
  "properties": {
    "roleName": "Terraform Contributor",
    "description": "Let Terraform manage everything except delete for access.",
    "assignableScopes": [
      "/"
    ],
    "permissions": [
      {
        "actions": [
          "*"
        ],
        "notActions": [
          "Microsoft.Authorization/*/Delete"
        ],
        "dataActions": [],
        "notDataActions": []
      }
    ]
  }
}
```
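With the JSON above saved to a file, the custom role can be created and assigned from the Azure CLI as well. A sketch; the file name, `<appId>` and `<subscriptionId>` are placeholders for your own values:

```shell
# Create the custom role definition from the JSON file above
az role definition create --role-definition @terraform-contributor.json

# Assign it to the app registration on the target subscription
az role assignment create \
  --assignee "<appId>" \
  --role "Terraform Contributor" \
  --scope "/subscriptions/<subscriptionId>"
```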
Create Storage Account and Key Vault #
The last thing we need to prepare in the Azure Portal is some supporting resources for Terraform. We only need two resources for this: a Key Vault and a storage account. I prefer to keep these two resources in one resource group in the subscription we deploy to.
Start by creating the storage account with standard settings; I usually change replication to LRS for some cost savings. After it has been created, create a container called "terrastate". This is where Terraform will store its state files with all the information about the resources it manages. On every deployment you run, Terraform compares against the state file and uses it as a reference.
Then create a Key Vault, and after it has been created we need to give the app registration permission to read secrets. Go to "Access policies" and add a new access policy. Choose your app registration as the principal and select "Get" and "List" under secret permissions. Add the policy and click save in the "Access policies" menu.
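The same access policy can be set from the command line. A sketch, with `<keyVaultName>` and `<appId>` as placeholders:

```shell
# Allow the app registration to read secrets from the Key Vault
az keyvault set-policy \
  --name "<keyVaultName>" \
  --spn "<appId>" \
  --secret-permissions get list
```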
Now we just need to create the secrets that Terraform needs to run the pipeline and deploy some infrastructure for us. Create the secrets as in the table below; here you can also set the expiration of the client secret. That way the expiration information is saved in the Key Vault together with the secret. It also means that when the secret is near expiration, we can just create a new one and update it here, nothing else needed!
| Secret Name | Value |
|---|---|
| arm-client-id | Application/Client ID |
| arm-client-secret | Application/Client Secret |
| arm-subscription-id | Subscription ID |
| arm-tenant-id | Tenant ID |
| state-blob-account | Storage account name |
| state-blob-container | Container name in storage account |
| state-blob-file | terraform.tfstate |
| state-sas-token | SAS token from storage account |
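The secrets in the table can also be created with the Azure CLI. A sketch for a few of them; vault name and values are placeholders:

```shell
# Store the service principal credentials in the Key Vault
az keyvault secret set --vault-name "<keyVaultName>" --name "arm-client-id" --value "<appId>"
az keyvault secret set --vault-name "<keyVaultName>" --name "arm-client-secret" --value "<clientSecret>"
az keyvault secret set --vault-name "<keyVaultName>" --name "state-blob-file" --value "terraform.tfstate"
```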
Azure DevOps #
We will continue the work in Azure DevOps and create a new project, or use an existing one, for this deployment. The only thing we need to prepare here is the service connection, which is what Azure DevOps will use to authenticate with Azure. Go to Project Settings -> Pipelines -> Service connections, click "New service connection", then choose "Azure Resource Manager".
We now get some options for how we want to create this; there are multiple ways of setting up a service connection in Azure DevOps. The recommended choice in the portal is the automatic one, which creates everything needed for you. I recommend that we instead use the manual option with the app registration we have prepared. This gives us total control over the permissions Terraform will use.
In the last step we need to fill in the required fields about the subscription we will use and our app registration.
Service Principal ID = Application/Client ID
Service principal key = Client Secret
Folder Structure #
To structure our code I went with the folder layout below. This separates the Terraform code from the pipelines we use to run the Terraform configuration. This way, the pipelines are also version controlled together with the Terraform code.
```
TerraformTraining
│   README.md
│   pipeline-prod.yaml
│   pipeline-dev.yaml
│
└───terraform
    │   main.tf
    │   variables.tf
    │   appgw.tf
    │   network.tf
    │   ...
    │
    └───module
        │   main.tf
        │   ...
```
Variables #
In our pipeline we will need to reference some variables, since we do not want to store them in our version-controlled repository. Even when using a private repository like Azure DevOps we should be careful, since the contents will be distributed to every client that has cloned the repo.
So start by creating a variable group, which you will find in the left menu under Pipelines -> Library.
Name the variable group "KV_Variables", or something better if you have it; just note that you will then have to change the name in the pipeline code. Then enable the toggle "Link secrets from an Azure key vault as variables". Once we have done that, select our service connection as the Azure subscription, and then select the Key Vault we prepared. What we did here is use the secrets directly from the Key Vault, so there is no need to create them in Azure DevOps. We also used our service connection to limit and control the permissions used to access resources.
Approval Gate #
To control whether we actually want to apply the updates to our infrastructure, we want to implement an approval gate. We do this in Azure DevOps by creating an environment under Pipelines. We will then reference this environment in our YAML pipeline so the stage has to be approved before terraform apply runs.
Go to Environments, which you will find under Pipelines in the sidebar menu, and create one. I prefer to create one for every kind of environment for the project, for example production, test and development.
When you have created your environments, we can add some settings to control what is required for a stage to pass. What I am after here is an approval flow, and we get that by going to the settings for the environment and then "Approvals and checks".
Here we add "Approvals" and get a new menu where we can add users. These users are the ones who can approve or reject the deployment. We can also configure how many approvals are needed for it to go through, and whether you can approve a pipeline you triggered yourself. Disallowing that makes sure that someone else has to approve the pipeline and the changes that were made.
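In the pipeline, the approval gate comes into play by using a deployment job that targets the environment. A minimal sketch, assuming an environment named 'approvalgates-production':

```yaml
jobs:
  - deployment: terraform_apply
    environment: 'approvalgates-production'  # the environment carrying the approval check
    strategy:
      runOnce:
        deploy:
          steps:
            - checkout: self  # deployment jobs do not check out sources by default
```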
Pipeline #
Initial settings #
In our pipeline we start by defining some general settings for the entire pipeline. Here we specify which variable group we will use and what machine the pipeline will run on. For this pipeline we will use Microsoft-hosted images, but in an upcoming post we will create our own VM image. Using Microsoft-hosted agents has some limitations, and the biggest one for me was the time limit. You only get 60 minutes for a pipeline run, then it will time out. For bigger resources like Application Gateway and App Service Environment this will most likely not be enough.
```yaml
name: $(BuildDefinitionName)_$(date:yyyyMMdd)$(rev:.r)

pool:
  vmImage: 'ubuntu-latest'

variables:
  - group: KV_Variables
```
Terraform Init #
For every stage in our pipeline we have to run terraform init. Below is the snippet of code needed to do this. It initializes a working directory for the Terraform configuration and reads the backend we have specified, so we get the state file.
In the code below I am using bash to run terraform init. There are other ways to run Terraform in a pipeline; there are, for example, plugins you could use, but then we have a dependency on a third party. Using bash and the Terraform CLI from HashiCorp directly also gives us the same environment as we would have on our local machine when testing.
Here we also take our secrets from the variable group and add them as environment variables, which are then referenced in the terraform init command.
```yaml
- bash: |
    terraform init \
      -backend-config="storage_account_name=$TF_STATE_BLOB_ACCOUNT_NAME" \
      -backend-config="container_name=$TF_STATE_BLOB_CONTAINER_NAME" \
      -backend-config="key=$TF_STATE_BLOB_FILE" \
      -backend-config="sas_token=$TF_STATE_BLOB_SAS_TOKEN"
  displayName: Terraform Init
  workingDirectory: '$(System.DefaultWorkingDirectory)/TerraformTraining/terraform'
  env:
    TF_STATE_BLOB_ACCOUNT_NAME: $(state-blob-account)
    TF_STATE_BLOB_CONTAINER_NAME: $(state-blob-container)
    TF_STATE_BLOB_FILE: $(state-blob-file-dev)
    TF_STATE_BLOB_SAS_TOKEN: $(state-sas-token)
```
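For the -backend-config flags to have any effect, the Terraform configuration itself must declare an azurerm backend. A minimal sketch of what goes in main.tf; the block is intentionally empty since the pipeline supplies all values at init time:

```hcl
terraform {
  # Backend settings are left empty here; the pipeline passes the storage
  # account, container, key and SAS token via -backend-config at init time.
  backend "azurerm" {}
}
```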
Complete pipeline code #
Here is the complete YAML code for the pipeline.
```yaml
name: $(BuildDefinitionName)_$(date:yyyyMMdd)$(rev:.r)

pool:
  vmImage: 'ubuntu-latest'

variables:
  - group: KV_Variables

stages:
  - stage: Terraform_Validate
    jobs:
      - job: terraform_validate
        steps:
          - bash: ls -l
          - bash: |
              terraform init \
                -backend-config="storage_account_name=$TF_STATE_BLOB_ACCOUNT_NAME" \
                -backend-config="container_name=$TF_STATE_BLOB_CONTAINER_NAME" \
                -backend-config="key=$TF_STATE_BLOB_FILE" \
                -backend-config="sas_token=$TF_STATE_BLOB_SAS_TOKEN"
            displayName: Terraform Init
            workingDirectory: '$(System.DefaultWorkingDirectory)/terraform'
            env:
              TF_STATE_BLOB_ACCOUNT_NAME: $(state-blob-account)
              TF_STATE_BLOB_CONTAINER_NAME: $(state-blob-container)
              TF_STATE_BLOB_FILE: $(state-blob-file-dev)
              TF_STATE_BLOB_SAS_TOKEN: $(state-sas-token)
          - bash: terraform validate
            displayName: Terraform Validate
            workingDirectory: '$(System.DefaultWorkingDirectory)/terraform'
            env:
              ARM_SUBSCRIPTION_ID: $(arm-subscription-id)
              ARM_CLIENT_ID: $(arm-client-id)
              ARM_CLIENT_SECRET: $(arm-client-secret)
              ARM_TENANT_ID: $(arm-tenant-id)

  - stage: Terraform_Plan
    dependsOn: [Terraform_Validate]
    condition: succeeded('Terraform_Validate')
    jobs:
      - job: terraform_plan
        steps:
          - bash: |
              terraform init \
                -backend-config="storage_account_name=$TF_STATE_BLOB_ACCOUNT_NAME" \
                -backend-config="container_name=$TF_STATE_BLOB_CONTAINER_NAME" \
                -backend-config="key=$TF_STATE_BLOB_FILE" \
                -backend-config="sas_token=$TF_STATE_BLOB_SAS_TOKEN"
            displayName: Terraform Init
            workingDirectory: '$(System.DefaultWorkingDirectory)/terraform'
            env:
              TF_STATE_BLOB_ACCOUNT_NAME: $(state-blob-account)
              TF_STATE_BLOB_CONTAINER_NAME: $(state-blob-container)
              TF_STATE_BLOB_FILE: $(state-blob-file-dev)
              TF_STATE_BLOB_SAS_TOKEN: $(state-sas-token)
          - bash: terraform plan -input=false -var-file="../terraform/variables/dev.tfvars" -var="terraform_variable=$(az-var-name)"
            displayName: Terraform Plan
            workingDirectory: '$(System.DefaultWorkingDirectory)/terraform'
            env:
              ARM_SUBSCRIPTION_ID: $(arm-subscription-id)
              ARM_CLIENT_ID: $(arm-client-id)
              ARM_CLIENT_SECRET: $(arm-client-secret)
              ARM_TENANT_ID: $(arm-tenant-id)

  - stage: Terraform_Apply
    dependsOn: [Terraform_Plan]
    condition: succeeded('Terraform_Plan')
    jobs:
      - deployment: terraform_apply
        timeoutInMinutes: 0
        environment: 'approvalgates-production'
        strategy:
          runOnce:
            deploy:
              steps:
                - checkout: self
                - bash: |
                    terraform init \
                      -backend-config="storage_account_name=$TF_STATE_BLOB_ACCOUNT_NAME" \
                      -backend-config="container_name=$TF_STATE_BLOB_CONTAINER_NAME" \
                      -backend-config="key=$TF_STATE_BLOB_FILE" \
                      -backend-config="sas_token=$TF_STATE_BLOB_SAS_TOKEN"
                  displayName: Terraform Init
                  workingDirectory: '$(System.DefaultWorkingDirectory)/terraform'
                  env:
                    TF_STATE_BLOB_ACCOUNT_NAME: $(state-blob-account)
                    TF_STATE_BLOB_CONTAINER_NAME: $(state-blob-container)
                    TF_STATE_BLOB_FILE: $(state-blob-file-dev)
                    TF_STATE_BLOB_SAS_TOKEN: $(state-sas-token)
                - bash: terraform apply -input=false -var-file="../terraform/variables/dev.tfvars" -auto-approve -var="terraform_variable=$(az-var-name)"
                  displayName: Terraform Apply
                  workingDirectory: '$(System.DefaultWorkingDirectory)/terraform'
                  env:
                    ARM_SUBSCRIPTION_ID: $(arm-subscription-id)
                    ARM_CLIENT_ID: $(arm-client-id)
                    ARM_CLIENT_SECRET: $(arm-client-secret)
                    ARM_TENANT_ID: $(arm-tenant-id)
```
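The plan and apply steps reference a variable file at terraform/variables/dev.tfvars. What goes in it depends entirely on your configuration; a hypothetical example with made-up variable names:

```hcl
# variables/dev.tfvars - environment-specific values (names are examples)
location            = "westeurope"
resource_group_name = "rg-terraformtraining-dev"
environment         = "dev"
```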
Links #
Julie Ng - Terraform on Azure Pipelines Best Practices