If you’ve used Terraform before and want to jump straight to the meat of how to combine Terraform, Forge, and Cloudflare, skip to the section titled “Combining Terraform, Forge, and Cloudflare”.
Introduction
I have recently joined a project which required a transition from existing infrastructure to a brand new one.
One of the reasons for this was that we identified the existing infrastructure was overprovisioned and underutilized. It also lumped four different databases onto a single, very pricey DigitalOcean Droplet.
Furthermore, we really had no visibility into these databases, which wouldn’t have been the case if these were DigitalOcean-managed services.
When we join a project, we are always flexible in how we ease into existing processes and workflows. However, this project essentially had none. This meant we had a higher degree of freedom in terms of establishing new ways of working.
The process of managing infrastructure was the first candidate.
To date, the app had been deployed using Laravel Forge on a bunch of Forge-provisioned servers. The firewall configuration of the Droplets was manual. The domain configuration and updates were manual. All things infra were manual, so if I went into the server configuration, misclicked a firewall rule, and called it a day, you would have a hard time understanding what exactly broke and why. This is where Terraform shines.
Terraform automates infra as code (IaC)
Terraform is a well-known, mature product by HashiCorp for automating infrastructure with any cloud provider. It allows you to provision, change, and version resources in any environment. In this article, we will be working solely with DigitalOcean.
Why is using Terraform a good idea anyway? I am certain most of the readers will have the code review process implemented for the software they work on with their teams. Code reviews have a number of benefits, which are outside of the scope of this article.
You don’t let developers tinker with software without someone else looking at and deploying those changes, do you? That is exactly what frequently happens with infrastructure on smaller to medium size projects.
You need another API server? Jane will spin one up no problem. We need to open a firewall port for external parties to contact us? John will update the configuration in no time.
And what is the visibility of that to the rest of the team? Unless you have a document where you put these changes (a change log) or someone heard they’d be making those changes, the visibility is basically zero.
Terraform changes this. It uses IaC (Infrastructure as Code) to declaratively codify what the infrastructure should be. What would be the difference then?
Going back to our example, Jane opens a Pull Request with the new API resource added. John points out she incorrectly tagged the server, which would attach it to the web application load balancer instead of the API load balancer. Nice one, John.
Then John opens a pull request to update a firewall rule, but it’s not his day either. Jane immediately points out he’s opening an incorrect port, as the service runs on a different one. Nice catch, Jane.
This is a trivial example. But you can see how codifying your infrastructure helps in reviewing the changes (hence, raising awareness of them and preventing issues) and versioning them for future reference (for example: “the service health stats are much better; I can see they resized the Droplet vertically yesterday.”)
Forge versus Ansible
Why do we need Forge if we use Terraform? Wouldn’t Ansible be more appropriate to configure the provisioned servers?
And the answer is: quite possibly yes! However, there are many factors we have to take into consideration when making those decisions.
One of them is that Ansible, besides Terraform itself, would be yet another tool for the future engineering team to learn. Every time you add a tool, you add overhead. Ansible is nice because, like Terraform, it is declarative, but it is another tool we won’t dig into here.
Forge on the other hand does provide a nice UI for things like managing environment variable values, configuring daemons, schedulers, updating deploy scripts, and more.
I hope the benefits of using Terraform even when already using Forge are becoming evident. The benefit of using Forge on top of servers provisioned by Terraform is that Forge is a tool many are familiar with, and one that is easier to learn than Ansible.
Cloudflare’s benefits
It is the same story as one person clicking around in the DigitalOcean interface while the rest of the team has little or no visibility into what has changed. Why not treat Cloudflare as another deployment target of our IaC code?
If we declaratively codify the changes we would like to see in Cloudflare, they will go through code review just like any other software code, and they will be versioned.
I can also guarantee you that Terraform will be much faster to run against the Cloudflare API than manually applying the changes in the Cloudflare UI, especially when the change set is of significant size or repetitive.
Combining Terraform, Forge, and Cloudflare
Now that we have an understanding of the reasons this toolkit was chosen, let’s show how to get those tools to work together.
Terraform knows how to integrate with different services via “providers” and has good documentation on how to work with DigitalOcean and Cloudflare.
We will be making use of both of these providers in this article.
Terraform is open source, and custom providers can be implemented. It would be nice if there was a Laravel Forge provider that we could use directly instead of the DigitalOcean one. However, the only one I found on GitHub was four years old, undocumented, and very limited, so I’m guessing someone wrote it as an exercise or for their own use.
Start with a basic infrastructure
For the sake of keeping the example readable, we will stick to very basic infrastructure, but I am sure by the end of the article you will see how this can easily scale for more sophisticated infrastructure needs.
Let’s assume we want to deploy an API-type application. We need the following:
- An API server which:
- Is configured in Forge (and visible in the interface)
- Scales horizontally over time (possible to add more API servers to handle increasing traffic)
- Is secured with SSL (Let’s Encrypt)
- A load balancer which:
- Receives traffic before routing to the API server(s). This will also help with OS upgrades, which effectively means replacing an older API server with a newer one.
- A DigitalOcean managed MySQL service
- A managed Redis service
- A subdomain added to Cloudflare within a pre-existing Zone
Ideally, a server provisioned by Terraform should be immediately configured by Forge. Later on, we will explore an issue I had with this step and how I resolved it.
Kicking it off with Terraform
As you may know, Terraform has a notion of “state.” State describes the state of the resources as Terraform expects them to be. For instance, if you call a server “foo” and later manually update it to “bar” in DigitalOcean, Terraform will be able to detect this inconsistency.
There are different means of storing Terraform state. The simplest one is using local state. However, this means Terraform will have to be run from your computer.
One alternative is using external storage for state, such as DigitalOcean Spaces.
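A minimal sketch of that, using Terraform’s S3-compatible backend pointed at a Spaces bucket (the bucket name is a placeholder; the region argument is required by the backend but effectively ignored by Spaces):
terraform {
  backend "s3" {
    # DigitalOcean Spaces speaks the S3 protocol
    endpoint                    = "https://ams3.digitaloceanspaces.com"
    region                      = "us-east-1" # required by the s3 backend, ignored by Spaces
    bucket                      = "example-terraform-state"
    key                         = "production/terraform.tfstate"
    skip_credentials_validation = true
    skip_metadata_api_check     = true
  }
}
The Spaces access key pair is then supplied through the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.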
Terraform also has a free version of their Terraform Cloud offering. This gives you insights into terraform plan and terraform apply runs and allows it to integrate with GitHub Pull Requests, where Terraform can run speculative plans as part of the PR checks.
For now, let’s stick to the simplest option, which is local state. Let’s start by creating a main.tf file in an empty directory.
# main.tf
terraform {
required_version = "~> 1.1.7"
}
And run terraform init. You should see the following output:
$ terraform init
Initializing the backend…
Initializing provider plugins…
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see any changes that are required for your infrastructure. All Terraform commands should now work.
If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary.
As you can see, Terraform has very nice, descriptive, self-explanatory output. It suggests we try running terraform plan, so let’s do that.
$ terraform plan
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.
This is expected because there’s no infrastructure defined for Terraform to look at. Let’s get that fixed and start by defining the DigitalOcean provider. Update your main.tf file as follows:
# main.tf
terraform {
required_version = "~> 1.1.7"
required_providers {
digitalocean = {
source = "digitalocean/digitalocean"
version = "~> 2.0"
}
}
}
If you run terraform plan now, you will get the following error:
$ terraform plan
╷
│ Error: Inconsistent dependency lock file
│
│ The following dependency selections recorded in the lock file are inconsistent with the current configuration:
│ - provider registry.terraform.io/digitalocean/digitalocean: required by this configuration but no version is selected
│
│ To make the initial dependency selections that will initialize the dependency lock file, run:
│ terraform init
╵
In fact, this is in line with the initial message Terraform displayed:
If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary.
Let’s run terraform init again to initialize the DigitalOcean provider:
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding digitalocean/digitalocean versions matching "~> 2.0"...
- Installing digitalocean/digitalocean v2.19.0...
- Installed digitalocean/digitalocean v2.19.0 (signed by a HashiCorp partner, key ID F82037E524B9C0E8)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
[...]
You should include .terraform.lock.hcl in your version control repository, just like you would include npm’s package-lock.json or PHP’s composer.lock. This way, your team will have the same versions of providers installed when running Terraform commands.
To use the initialized provider in our main.tf file, we have to define it as such:
# main.tf
terraform {
[...]
}
# export DIGITALOCEAN_TOKEN
provider "digitalocean" {}
This provider will require a DigitalOcean API personal access token (PAT) to work. You can create a DO PAT in the API section of the DigitalOcean control panel. Note that for production use it is best practice not to create the PAT with your personal account, but with a machine user account of your company. Ensure the token also has the “write” permission assigned.
There are different ways to pass this token to the provider. You can use a variable, as described in the provider documentation, then invoke terraform plan -var="do_token=...".
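A minimal sketch of that variable-based approach (the do_token name is an example; sensitive requires Terraform 0.14 or newer):
# variables.tf
variable "do_token" {
  description = "DigitalOcean API token"
  sensitive   = true # keep the value out of plan/apply output
}

# main.tf
provider "digitalocean" {
  token = var.do_token
}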
I like to keep it clean and use an environment variable. The upsides of doing so are:
- Automatic inclusion of the token with every command run, as opposed to adding it manually
- Automatically avoiding the inclusion of the secret in the Bash command history as opposed to prefixing each Bash command with a single space to get the same effect
- The same approach can be used in Terraform Cloud
For this provider, it will look for the DIGITALOCEAN_TOKEN environment variable, so that is what I define in my terminal by running:
 export DIGITALOCEAN_TOKEN=...
Pay attention to the single space before the “export …” command, which may not be easily visible: this prevents the line from being stored in your Bash history (provided HISTCONTROL includes ignorespace, as it does by default on many distributions)!
Creating a VPC with Terraform
Let’s start by creating a VPC we will use. Append the following content to main.tf
# VPC
resource "digitalocean_vpc" "vpc" {
name = "production-vpc-ams3"
region = "ams3"
}
The first label after resource indicates the resource type. The second label is the resource name (of this type) in Terraform. Because resource names are scoped by resource type, they do not have to be unique between different resource types.
The name argument is what will show in the DigitalOcean UI for that VPC. The region argument indicates which region we would like to create the VPC in.
This resource would not be reusable between environments, as its name has a hardcoded environment name and region. We can parameterize it as follows:
resource "digitalocean_vpc" "vpc" {
name = "${var.prefix}-vpc-${var.region}"
region = var.region
}
To make this work, we have to create two files. The first one is to declare these variables:
# variables.tf
variable "region" {
description = "This is the DigitalOcean region where the application will be deployed."
}
variable "prefix" {
description = "This is the resource name prefix visible in DigitalOcean resource names"
}
These variables can be passed to Terraform on the CLI as mentioned before, -var="region=ams3" -var="prefix=production", but that’s not very convenient. Let’s store them in a file that gets automatically loaded by Terraform.
# terraform.auto.tfvars
region = "ams3"
prefix = "production"
Please ignore the indentation; I’m copy-pasting from the templates I wrote, and terraform fmt uses this style of equals-sign alignment when formatting alongside other entries in the file.
Finally, let’s create the VPC by first running terraform plan.
$ terraform plan
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# digitalocean_vpc.vpc will be created
+ resource "digitalocean_vpc" "vpc" {
+ created_at = (known after apply)
+ default = (known after apply)
+ id = (known after apply)
+ ip_range = (known after apply)
+ name = "production-vpc-ams3"
+ region = "ams3"
+ urn = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
Please review the note in the last line. We did not use the -out option with the plan command, and because the state can change between the moment we ran plan and the moment we run apply (say, because other people are also working with the infrastructure), Terraform makes it clear it cannot guarantee that what we saw here is exactly what will be executed when running apply.
Lucky for us, terraform apply shows the expected change set anyway before asking for confirmation.
The output of terraform plan looks good, so we can now run terraform apply. Terraform will ask us to confirm the execution. Only yes will be accepted to approve, so let’s type it in and confirm.
$ terraform apply
[The plan will be outputted here]
Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve.
Enter a value: yes
digitalocean_vpc.vpc: Creating...
digitalocean_vpc.vpc: Creation complete after 3s [id=fc552885-a529-48b1-8153-df607e7e3d19]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
At this point you can confirm your new VPC appears in the DigitalOcean UI.
We have taken it step-by-step with this first resource. By now you should understand the basics of how this works:
- we define the infrastructure change
- we plan to review what will change
- we apply the change
Moving on, we will go a bit faster for the next resources.
Creating an API server with Terraform
Let’s continue by adding an API server to the VPC we just created.
# rest of main.tf here
resource "digitalocean_droplet" "api" {
image = "ubuntu-20-04-x64"
name = "${var.prefix}-api-1"
region = var.region
size = "s-2vcpu-2gb"
monitoring = true
vpc_uuid = digitalocean_vpc.vpc.id
resize_disk = false
tags = [var.tag, "${var.tag}-api"]
droplet_agent = true
graceful_shutdown = true
}
Let’s review a few of those arguments.
- size is a Droplet size slug; you can find the official list in the DigitalOcean documentation or by running doctl compute size list if you have doctl available locally.
- vpc_uuid defines the VPC where the Droplet should be placed. This is also an example of how we can reference existing resources: digitalocean_vpc.vpc.id is the resource type, the resource name, and the id attribute that we can read from.
- tags is a list of custom tags, which will be useful when we point the load balancer traffic at the Droplets.
I am sure you have noticed one variable is missing from our files, so let’s add it:
# variables.tf
variable "tag" {
description = "This is the environment tag the resources in DigitalOcean are tagged with"
}
# terraform.auto.tfvars
tag = "production"
You can run terraform fmt to format your Terraform files. You can also run terraform validate after updating your files, to make sure there are no syntax errors like missing quotes.
As usual, run terraform plan, review the changes, then run terraform apply. You will see the API server being created in the DigitalOcean UI.
From this point on, we will skip mentioning the need to run the plan and apply commands, which you can run whenever you like while working through this exercise.
DigitalOcean has a nice way to bring resources together under designated projects (seen on the top left of the web UI), so let’s create a project for our test.
resource "digitalocean_project" "project" {
name = var.project_name
description = "${var.project_name} environment"
purpose = "Web Application"
environment = var.project_environment
resources = [
digitalocean_droplet.api.urn,
]
}
- resources is an array of uniform resource names (URNs) of the resources you want to put in the DigitalOcean project.
- environment is the deployment environment. In DigitalOcean, the supported values for this optional argument are: Production, Staging, and Development.
You can also add new project_name and project_environment variables, the same way we did for the tag variable earlier. The value of project_name will be what you see in your project list in the DigitalOcean UI. Apply the changes to see them reflected in DO.
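For completeness, those additions could look like this (the values are examples):
# variables.tf
variable "project_name" {
  description = "This is the project name visible in the DigitalOcean project list"
}
variable "project_environment" {
  description = "This is the DigitalOcean project environment (Production, Staging, or Development)"
}

# terraform.auto.tfvars
project_name        = "Acme API"
project_environment = "Production"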
Creating a load balancer with Terraform
Now that we have an API server (we’ll get to the Forge part, soon), let’s bring up a DigitalOcean Managed Load Balancer in front of it.
resource "digitalocean_loadbalancer" "api-lb" {
name = "${var.prefix}-api-lb"
region = var.region
forwarding_rule {
entry_port = 80
entry_protocol = "http"
target_port = 80
target_protocol = "http"
}
forwarding_rule {
entry_port = 443
entry_protocol = "https"
target_port = 443
target_protocol = "https"
tls_passthrough = true
}
healthcheck {
port = 80
protocol = "tcp"
}
redirect_http_to_https = false
disable_lets_encrypt_dns_records = true
vpc_uuid = digitalocean_vpc.vpc.id
droplet_tag = "${var.tag}-api"
}
This one is a bit longer, albeit pretty self-explanatory. We’ll touch on the more interesting points.
The second forwarding_rule has the tls_passthrough = true attribute, which indicates the load balancer should not terminate SSL. The SSL traffic will pass through the load balancer to the API server(s).
It is possible to configure SSL termination on the load balancer, but we won’t dive into it here.
If you purchased a subdomain wildcard certificate in Cloudflare, it makes a lot of sense to configure SSL termination as the traffic between your LB and your API server is secured within the VPC anyway (and Forge is not involved in SSL setup, unlike our case with Let’s Encrypt).
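For reference, a terminating rule could look roughly like this; a sketch only, where digitalocean_certificate.cert is a hypothetical certificate resource that is not part of our setup:
forwarding_rule {
  entry_port       = 443
  entry_protocol   = "https"
  target_port      = 80
  target_protocol  = "http"
  # The load balancer terminates SSL with this certificate and
  # forwards plain HTTP to the Droplets inside the VPC.
  certificate_name = digitalocean_certificate.cert.name
}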
- healthcheck defines which port and protocol should be used to determine the health of the attached API instances. You can additionally customize the healthcheck’s check interval, response timeout, etc. if you need to.
- droplet_tag is the custom tag that traffic is routed to (the destination). This is the key piece here: by using a tag, no additional load-balancer-specific configuration is needed when bringing more API servers up (see the sketch below).
Setting droplet_tag also allows you to disconnect an API server from the load balancer for whatever maintenance is necessary, by simply removing the tag from the Droplet.
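To illustrate, if we ever needed a second API server, it would be just another Droplet carrying the same tags; a sketch reusing the arguments of our api resource:
resource "digitalocean_droplet" "api-2" {
  image      = "ubuntu-20-04-x64"
  name       = "${var.prefix}-api-2"
  region     = var.region
  size       = "s-2vcpu-2gb"
  monitoring = true
  vpc_uuid   = digitalocean_vpc.vpc.id
  # The same "${var.tag}-api" tag means the load balancer routes traffic to it too.
  tags       = [var.tag, "${var.tag}-api"]
}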
At this point, we have an (unconfigured) API server attached to a managed load balancer.
Creating databases with Terraform
We are still missing managed databases, so let’s create them now.
# main.tf
resource "digitalocean_database_cluster" "mysql" {
name = "${var.prefix}-mysql"
engine = "mysql"
version = "8"
size = "db-s-1vcpu-2gb"
region = var.region
node_count = 2
tags = [var.tag]
private_network_uuid = digitalocean_vpc.vpc.id
}
resource "digitalocean_database_cluster" "redis" {
name = "${var.prefix}-redis"
engine = "redis"
version = "6"
size = "db-s-1vcpu-2gb"
region = var.region
node_count = 2
tags = [var.tag]
private_network_uuid = digitalocean_vpc.vpc.id
}
Don’t forget to add these to your digitalocean_project.project resource, so they show up neatly grouped in the DigitalOcean UI.
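The updated resources array could look like this (database clusters expose a urn attribute just like Droplets do):
resource "digitalocean_project" "project" {
  # ... name, description, purpose, and environment as before ...
  resources = [
    digitalocean_droplet.api.urn,
    digitalocean_database_cluster.mysql.urn,
    digitalocean_database_cluster.redis.urn,
  ]
}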
Again, we will use a tag to allow specific resources to access the managed databases (instead of adding them one by one to trusted sources).
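A minimal sketch of that tag-based setup using the digitalocean_database_firewall resource, shown here for the MySQL cluster (the Redis cluster would get an equivalent resource):
resource "digitalocean_database_firewall" "mysql" {
  cluster_id = digitalocean_database_cluster.mysql.id
  # Any Droplet carrying this tag (our API servers) becomes a trusted source.
  rule {
    type  = "tag"
    value = var.tag
  }
}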
Monitoring systems with Terraform
Let’s show how we can add a CPU usage alert to the API server:
# main.tf
resource "digitalocean_monitor_alert" "api-cpu-alert" {
alerts {
email = ["engineering@yourdomain"]
}
window = "5m"
type = "v1/insights/droplet/cpu"
compare = "GreaterThan"
value = 70
enabled = true
entities = [
digitalocean_droplet.api.id,
]
description = "${var.project_environment} API CPU usage alert"
}
This will send an email to engineering@yourdomain whenever CPU usage on any of the listed entities crosses the value threshold (in this case, 70%).
Easy, isn’t it? To monitor memory, we’d use type = "v1/insights/droplet/memory_utilization_percent". To monitor disk, we’d use type = "v1/insights/droplet/disk_utilization_percent".
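For instance, a memory alert for the same Droplet could look like this (the 80% threshold is an example):
resource "digitalocean_monitor_alert" "api-memory-alert" {
  alerts {
    email = ["engineering@yourdomain"]
  }
  window      = "5m"
  type        = "v1/insights/droplet/memory_utilization_percent"
  compare     = "GreaterThan"
  value       = 80
  enabled     = true
  entities    = [
    digitalocean_droplet.api.id,
  ]
  description = "${var.project_environment} API memory usage alert"
}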
Our infra is shaping up nicely. Let’s now point out the elephant in the room: up to this point, Forge was not involved at all. The next steps will explain how we add it to the mix.
Adding Forge to your Terraform setup
Our API server is in its pristine form. How can we have it configured by Forge?
One way is to open Forge, click “Create Server”, pick “Custom VPS” and fill out the necessary details of the newly created API server. But what is the fun in doing that manually?
When creating a Droplet, there is a way to instruct DigitalOcean to configure the server for us. We can set the user_data argument, which is then picked up by cloud-init.
There are two things we can pass to user_data. Our first option is a YAML file that is then handled by cloud-init. Alternatively, we can send a Bash script that cloud-init will execute for us after the server has been created. We can use this to trigger configuration of a server in Forge, the one just provisioned by Terraform.
To do this, we’ll need to do two things:
- instruct Terraform to set a Forge API token in the Bash script
- pass the rendered script into user_data
This can be achieved as follows (note the data "template_file" block at the top and the user_data argument in the api resource):
data "template_file" "configure-forge-api-server" {
template = file("configure-forge-api-server.sh")
vars = {
forge_api_token = var.forge_api_token
}
}
resource "digitalocean_droplet" "api" {
# ...
droplet_agent = true
graceful_shutdown = true
user_data = data.template_file.configure-forge-api-server.rendered
}
As usual, this will require us to define a forge_api_token variable, which you already know how to do.
The template_file data source above will look for the configure-forge-api-server.sh script in the current working directory and replace ${forge_api_token} with the value of the variable.
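As a side note, on Terraform 0.12 and newer the same can likely be achieved without the extra data source by using the built-in templatefile function; a sketch:
resource "digitalocean_droplet" "api" {
  # ...
  # Render the script and substitute ${forge_api_token} in one step.
  user_data = templatefile("${path.module}/configure-forge-api-server.sh", {
    forge_api_token = var.forge_api_token
  })
}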
Here’s how the configuration script can look:
#!/bin/bash
export DEBIAN_FRONTEND=noninteractive
# jq is used below to parse the Forge API response; it may not be preinstalled on a stock image.
apt-get update -qq && apt-get install -y -qq jq
cd /root
export PUBLIC_IP=$(curl -s http://169.254.169.254/metadata/v1/interfaces/public/0/ipv4/address)
export PRIVATE_IP=$(curl -s http://169.254.169.254/metadata/v1/interfaces/private/0/ipv4/address)
# Note the space at the beginning of the next line, because of how we pass the forge_api_token there.
 FORGE_RESPONSE=$(curl -X POST https://forge.laravel.com/api/v1/servers \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer ${forge_api_token}" \
-d "{\"provider\": \"custom\", \"type\": \"web\", \"name\": \"$HOSTNAME\", \"ip_address\": \"$PUBLIC_IP\", \"private_ip_address\": \"$PRIVATE_IP\", \"php_version\": \"php74\"}")
echo "*** Forge will now configure the web server ***"
echo "set -x" > /root/provision.sh
echo "$FORGE_RESPONSE" | jq -r '.provision_command' >> /root/provision.sh
# Firewall configured in DigitalOcean, so disable here.
echo "ufw disable" >> /root/provision.sh
chmod u+x /root/provision.sh
In case you are wondering what the magic IP 169.254.169.254 stands for, it is the Droplet metadata API. We get the Droplet’s public and private IPs from the metadata API. We then post to the Forge API what we mentioned before (creating, or rather configuring, a Custom VPS).
The last part of the script needs an explanation. The response from the Forge API contains a provision_command, which is basically a wget of a specific Bash script from Forge that should be executed on the server.
Initially, I attempted to eval this provision_command. And while things seemed to work fine (the server would show as provisioning in Forge, going through all the configuration steps), Forge seemed to be unreliable in reporting the progress towards the end of the process. It would complain it was not able to provision the server (unfortunately without any further details), even though I know it did provision the server, as indicated by the existence of the /root/.forge-provisioned file. Because the effort-to-value ratio was increasing, I decided to work around this as follows:
- create the server with Terraform
- execute project-specific, preliminary setup in the script passed to user_data
- initiate the Forge server configuration (so it shows up in Forge, but doesn’t actually start)
- drop a script on the server that should be run manually (from user_data)
While this is not ideal, as a manual step is required in this seemingly fully-automated process, perhaps with time we will be able to hash out this issue. For now, it does the job fine.
If you add the user_data argument to your Droplet, terraform plan or terraform apply will indicate this change requires destroying the old server and creating a new one.
The reason for this is that Terraform is not able to determine what custom actions may be contained in user_data, and therefore it cannot determine whether changing user_data changes the existing Droplet (or not).
Simply put, making any change to the script fed to user_data will trigger Droplet re-creation. To avoid this, you can add the following lifecycle section to the api resource definition:
resource "digitalocean_droplet" "api" {
[...]
user_data = data.template_file.configure-forge-api-server.rendered
lifecycle {
prevent_destroy = true
ignore_changes = [user_data]
}
}
This way, you can further tweak the script intended for user_data on newly created servers only, and not worry that Terraform will suggest destroying and recreating the server when it notices you have made changes.
This time, once we terraform apply, we can observe that the production-api-1 server shows up in Forge in the Building state. It doesn’t progress from there, though. We have to sign into the server for the first time as root (DigitalOcean emails you the details, and you will be required to change the root password the first time you SSH into the server) and simply run our custom ./provision.sh script, placed in the /root directory by the user_data script.
Note that cloud-init may take a moment to create the /root/provision.sh script if you include more custom setup in your configuration script. In that case, cloud-init will execute whatever you codified in user data and, finally, create the provision.sh script for you to run manually.
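If you want to be sure cloud-init has finished before looking for the script, you can check from the server itself:
# On the new server, as root: block until cloud-init has completed all stages
cloud-init status --wait
ls -l /root/provision.sh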
Once the provision.sh script is run manually, the server provisioning steps in Forge will start to progress. Voilà! This allows us to provision the infrastructure with Terraform, but still have the servers configured and then managed in Forge.
Consider removing provision.sh after it’s done running. Note that the user_data of a server cannot be changed once it is set, so the Forge token we used to call the Forge API will be stored in the DO metadata API forever for that Droplet.
If accessing the Forge token from within the server sounds like a security concern to you, remember your .env files also live on the server, so you’d have a much bigger concern with that instead.
You can hit me up at @mkarnicki on Twitter if you have suggestions on how to improve this! (I haven’t played with HashiCorp Vault in the context of Forge API tokens, yet.)
Setting up domains in Cloudflare and SSL with Let’s Encrypt
We have our infra in place and our API configured. It’s a good moment to configure our web application in Forge and activate a Let’s Encrypt SSL certificate on the site.
What is now missing is the domain that will point to the API load balancer.
Let’s first add the Cloudflare provider.
# main.tf
terraform {
[...]
required_providers {
[...]
cloudflare = {
source = "cloudflare/cloudflare"
version = "~> 3.0"
}
}
}
# under provider "digitalocean" {}
# export CLOUDFLARE_API_TOKEN
# export CLOUDFLARE_API_USER_SERVICE_KEY
provider "cloudflare" {}
As you may remember by now, it’s time to run terraform init again.
For my own reference, I add the environment variables that need exporting for a provider as comments. You only need the second env var (CLOUDFLARE_API_USER_SERVICE_KEY) if you would like to create an Origin CA certificate. We’ll be using Let’s Encrypt to keep things simple, but I encourage you to experiment with issuing an Origin CA certificate using Terraform’s Cloudflare provider.
This time, let’s find an existing resource instead of creating one from scratch. Let’s assume we own acme.com. We can define a data source as follows:
data "cloudflare_zones" "main_zone" {
filter {
name = "acme.com"
}
}
The filter will pick up a matching Zone, if one exists. Let’s now use this zone to create a subdomain for our API.
resource "cloudflare_record" "api" {
allow_overwrite = true
zone_id = lookup(data.cloudflare_zones.main_zone.zones[0], "id")
name = "api"
type = "A"
value = digitalocean_loadbalancer.api-lb.ip
ttl = 3600
proxied = false
}
Let’s start with the most interesting line, the zone_id. Our api record belongs in the acme.com zone, so we have to resolve its id. To achieve this, we use the lookup function to extract the id attribute of the first zone that matches our filter: data.cloudflare_zones.main_zone.zones[0].
Here are the other arguments:
- allow_overwrite explicitly allows Terraform to overwrite the record if one with a matching name already exists.
- value takes the IP of our load balancer, which we simply retrieve by explicitly pointing at the api-lb resource’s ip attribute.
- type and ttl indicate the record type (an A record in our case) and the DNS Time To Live value in seconds.
- proxied indicates whether Cloudflare should proxy the requests through their network, which would hide the API load balancer IP. Please note this will need to be true to work with Cloudflare’s Origin CA certificates, but unfortunately false to work with our Let’s Encrypt setup.
That’s it! Once you apply the changes, the api.acme.com record in the acme.com zone will be created in Cloudflare by Terraform and will point at your API load balancer.
With this setup, after you configure your api.acme.com site in Forge, you can simply add a Let’s Encrypt SSL certificate to it using the Forge UI.
Success!
This article does not make use of a number of features for simplicity’s sake. For example, our api resource name ${var.prefix}-api-1 has api-1 hardcoded, which is not ideal. Terraform has a way to indicate we want a given count of resources (say, count = 5), in which case the name would become ${var.prefix}-api-${count.index}. For small infrastructures, hardcoding a name like this lets you be more explicit about your specific resources, but it doesn’t scale well when you have a larger number of resources to manage.
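A sketch of the count-based variant, where api_count is a hypothetical variable (count.index is zero-based, hence the + 1 to keep the api-1 naming):
resource "digitalocean_droplet" "api" {
  count      = var.api_count
  image      = "ubuntu-20-04-x64"
  name       = "${var.prefix}-api-${count.index + 1}"
  region     = var.region
  size       = "s-2vcpu-2gb"
  monitoring = true
  vpc_uuid   = digitalocean_vpc.vpc.id
  tags       = [var.tag, "${var.tag}-api"]
}
Note that references elsewhere then change too, for example digitalocean_droplet.api[0].urn in the project resource.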
I believe we will still be able to refine how we use these tools together, especially the combination of Terraform and Forge.
I hope you’ve enjoyed reading this article. I highly encourage you to learn more about Terraform as it is a very powerful tool in the hands of a knowledgeable person.