Provisioning PostgreSQL Clusters with the Instaclustr Terraform Provider

HashiCorp Terraform is a well-known infrastructure automation tool, mostly targeting cloud deployments. Instaclustr (a part of NetApp’s CloudOps division and credativ’s parent company) provides a managed service for various data stores, including PostgreSQL. Managed clusters can be provisioned via the Instaclustr console, a REST API, or the Instaclustr Terraform Provider.

This first part of a blog series shows how to provision a PostgreSQL cluster using the Instaclustr Terraform Provider, whitelist the client IP address, and finally connect to the cluster via the psql command-line client.

Initial Setup

The only general requirement is an account for the Instaclustr Managed service. The web console is located at https://console2.instaclustr.com/. Next, a provisioning API key needs to be created, if not available already, as explained here.

Terraform providers are usually defined and configured in a file called provider.tf. For the Instaclustr Terraform Provider, this means adding it to the list of required providers and configuring the API key mentioned above:

terraform {
  required_providers {
    instaclustr = {
      source  = "instaclustr/instaclustr"
      version = ">= 2.0.0, < 3.0.0"
    }
  }
}

variable "ic_username" {
  type = string
}
variable "ic_api_key" {
  type = string
}

provider "instaclustr" {
    terraform_key = "Instaclustr-Terraform ${var.ic_username}:${var.ic_api_key}"
}

Here, ic_username and ic_api_key are defined as variables. They should be set in a terraform.tfvars file in the same directory:

ic_username       = "username"
ic_api_key        = "0db87a8bd1[...]"
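Alternatively, Terraform also reads variables from TF_VAR_-prefixed environment variables, which avoids keeping the API key in a file at all. A minimal sketch, reusing the placeholder values from above:

$ export TF_VAR_ic_username="username"
$ export TF_VAR_ic_api_key="0db87a8bd1[...]"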

As the final preparatory step, Terraform needs to be initialized, installing the provider:

$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding instaclustr/instaclustr versions matching ">= 2.0.0, < 3.0.0"...
- Installing instaclustr/instaclustr v2.0.136...
- Installed instaclustr/instaclustr v2.0.136 (self-signed, key ID 58D5F4E6CBB68583)

[...]

Terraform has been successfully initialized!
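To double-check the result, the terraform providers command prints the providers required by the configuration and where they were installed from:

$ terraform providers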

Defining Resources

Terraform resources define infrastructure objects, in our case a managed PostgreSQL cluster. Customarily, they are defined in a main.tf file, but any other file name can be chosen:

resource "instaclustr_postgresql_cluster_v2" "main" {
  name                    = "username-test1"
  postgresql_version      = "16.2.0"
  private_network_cluster = false
  sla_tier                = "NON_PRODUCTION"
  synchronous_mode_strict = false
  data_centre {
    name                         = "AWS_VPC_US_EAST_1"
    cloud_provider               = "AWS_VPC"
    region                       = "US_EAST_1"
    node_size                    = "PGS-DEV-t4g.small-5"
    number_of_nodes              = "2"
    network                      = "10.4.0.0/16"
    client_to_cluster_encryption = true
    intra_data_centre_replication {
      replication_mode           = "ASYNCHRONOUS"
    }
    inter_data_centre_replication {
      is_primary_data_centre = true
    }
  }
}

The above defines a 2-node cluster named username-test1 (referred to internally as main by Terraform) in the AWS US_EAST_1 region, using the PGS-DEV-t4g.small-5 instance size (2 vCores, 2 GB RAM, 5 GB data disk) for the nodes. The available test/developer instance sizes per cloud provider are:

Cloud Provider  Default Region  Instance Size                  Data Disk  RAM     CPU
AWS_VPC         US_EAST_1       PGS-DEV-t4g.small-5            5 GB       2 GB    2 Cores
AWS_VPC         US_EAST_1       PGS-DEV-t4g.medium-30          30 GB      4 GB    2 Cores
AZURE_AZ        CENTRAL_US      PGS-DEV-Standard_DS1_v2-5-an   5 GB       3.5 GB  1 Core
AZURE_AZ        CENTRAL_US      PGS-DEV-Standard_DS1_v2-30-an  30 GB      3.5 GB  1 Core
GCP             us-west1        PGS-DEV-n2-standard-2-5        5 GB       8 GB    2 Cores
GCP             us-west1        PGS-DEV-n2-standard-2-30       30 GB      8 GB    2 Cores

Other instance sizes and regions can be looked up in the console or in the node_size section of the Instaclustr Terraform Provider documentation.
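If the instance size should be easy to change, e.g. between test and production setups, it can be pulled out into a variable instead of being hard-coded. A minimal sketch (the variable name node_size is our choice, not mandated by the provider):

variable "node_size" {
  type    = string
  default = "PGS-DEV-t4g.small-5"
}

The data_centre block would then reference var.node_size instead of the literal string.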

Running Terraform

Before letting Terraform provision the defined resources, it is best practice to run terraform plan. This performs a dry run and makes it possible to review the expected actions before any actual infrastructure is created:

$ terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are
indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # instaclustr_postgresql_cluster_v2.main will be created
  + resource "instaclustr_postgresql_cluster_v2" "main" {
      + current_cluster_operation_status = (known after apply)
      + default_user_password            = (sensitive value)
      + description                      = (known after apply)
      + id                               = (known after apply)
      + name                             = "username-test1"
      + pci_compliance_mode              = (known after apply)
      + postgresql_version               = "16.2.0"
      + private_network_cluster          = false
      + sla_tier                         = "NON_PRODUCTION"
      + status                           = (known after apply)
      + synchronous_mode_strict          = false

      + data_centre {
          + client_to_cluster_encryption     = true
          + cloud_provider                   = "AWS_VPC"
          + custom_subject_alternative_names = (known after apply)
          + id                               = (known after apply)
          + name                             = "AWS_VPC_US_EAST_1"
          + network                          = "10.4.0.0/16"
          + node_size                        = "PGS-DEV-t4g.small-5"
          + number_of_nodes                  = 2
          + provider_account_name            = (known after apply)
          + region                           = "US_EAST_1"
          + status                           = (known after apply)

          + intra_data_centre_replication {
              + replication_mode = "ASYNCHRONOUS"
            }
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.
[...]
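As an aside, the plan can also be written to a file and applied exactly as reviewed later, ruling out any drift between planning and applying:

$ terraform plan -out=tfplan
$ terraform apply tfplan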

Here, we simply review the plan on screen; when it looks reasonable, it can be applied via terraform apply:

$ terraform apply

Terraform used the selected providers to generate the following execution plan. Resource actions are
indicated with the following symbols:
  + create
[...]
Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

instaclustr_postgresql_cluster_v2.main: Creating...
instaclustr_postgresql_cluster_v2.main: Still creating... [10s elapsed]
[...]
instaclustr_postgresql_cluster_v2.main: Creation complete after 5m37s [id=704e1c20-bda6-410c-b95b-8d22ef3f5a04]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

That is it! The PostgreSQL cluster is up and running after less than six minutes.
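The details the provider recorded about the cluster, including node addresses and status, can be inspected from the local state at any time:

$ terraform state show instaclustr_postgresql_cluster_v2.main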

IP Whitelisting

In order to access the PostgreSQL cluster, firewall rules need to be defined for IP whitelisting. Any network address block can be whitelisted; to allow access from the host running Terraform, a firewall rule for its public IP address can be created with the help of a service like icanhazip.com, appending the following to main.tf:

data "http" "myip" {
  url = "https://ipecho.net/plain"
}

resource "instaclustr_cluster_network_firewall_rules_v2" "main" {
  cluster_id = instaclustr_postgresql_cluster_v2.main.id

  firewall_rule {
    network = "${chomp(data.http.myip.response_body)}/32"
    type    = "POSTGRESQL"
  }
}
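Further address blocks, e.g. an office network, can be whitelisted alongside the first one. A sketch, assuming additional firewall_rule blocks can be added to the same resource, with the documentation prefix 192.0.2.0/24 as a placeholder:

  firewall_rule {
    network = "192.0.2.0/24" # placeholder, replace with the real address block
    type    = "POSTGRESQL"
  }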

Using the http Terraform provider also requires an update to the provider.tf file, adding it to the list of required providers:

terraform {
  required_providers {
    instaclustr = {
      source  = "instaclustr/instaclustr"
      version = ">= 2.0.0, < 3.0.0"
    }
    http = {
      source  = "hashicorp/http"
      version = "3.4.3"
    }
  }
}

This requires a re-run of terraform init, followed by another terraform apply:

$ terraform init

Initializing the backend...

Initializing provider plugins...
- Reusing previous version of instaclustr/instaclustr from the dependency lock file
- Finding hashicorp/http versions matching "3.4.3"...
- Using previously-installed instaclustr/instaclustr v2.0.136
- Installing hashicorp/http v3.4.3...
- Installed hashicorp/http v3.4.3 (signed by HashiCorp)

[...]
Terraform has been successfully initialized!
[...]

$ terraform apply

data.http.myip: Reading...
instaclustr_postgresql_cluster_v2.main: Refreshing state... [id=704e1c20-bda6-410c-b95b-8d22ef3f5a04]
data.http.myip: Read complete after 1s [id=https://ipv4.icanhazip.com]

Terraform used the selected providers to generate the following execution plan. Resource actions are
indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # instaclustr_cluster_network_firewall_rules_v2.main will be created
  + resource "instaclustr_cluster_network_firewall_rules_v2" "main" {
      + cluster_id = "704e1c20-bda6-410c-b95b-8d22ef3f5a04"
      + id         = (known after apply)
      + status     = (known after apply)

      + firewall_rule {
          + deferred_reason = (known after apply)
          + id              = (known after apply)
          + network         = "123.134.145.5/32"
          + type            = "POSTGRESQL"
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

instaclustr_cluster_network_firewall_rules_v2.main: Creating...
instaclustr_cluster_network_firewall_rules_v2.main: Creation complete after 2s [id=704e1c20-bda6-410c-b95b-8d22ef3f5a04]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Connecting to the Cluster

In order to connect to the newly provisioned PostgreSQL cluster, we need the public IP addresses of the nodes and the password of the administrative database user, icpostgresql. Both are retrieved by the Instaclustr Terraform Provider and stored in the Terraform state, by default in the local file terraform.tfstate. Since the state thus contains the password in plain text, one should either change the password after the initial connection, secure the host Terraform is run from, or store the Terraform state remotely.
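For the last option, storing the state remotely, a backend can be added to the existing terraform block in provider.tf. A minimal sketch assuming an existing S3 bucket (all names are placeholders):

terraform {
  backend "s3" {
    bucket = "my-terraform-state"           # placeholder bucket name
    key    = "instaclustr/postgres.tfstate" # placeholder object key
    region = "us-east-1"
  }
}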

The combined connection string can be set up as an output variable in an outputs.tf file:

output "connstr" {
  value = format("host=%s user=icpostgresql password=%s dbname=postgres target_session_attrs=read-write",
          join(",", [for node in instaclustr_postgresql_cluster_v2.main.data_centre[0].nodes:
               format("%s", node.public_address)]),
                      instaclustr_postgresql_cluster_v2.main.default_user_password
                )
  sensitive = true
}
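Along the same lines, the node addresses can be exposed as a separate output, which comes in handy for other tooling (the output name node_addresses is our choice):

output "node_addresses" {
  value = [for node in instaclustr_postgresql_cluster_v2.main.data_centre[0].nodes : node.public_address]
}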

After another terraform apply to populate the outputs, it is possible to connect to the PostgreSQL cluster without having to type or paste the default password:

$ psql "$(terraform output -raw connstr)"
psql (16.3 (Debian 16.3-1.pgdg120+1), server 16.2 (Debian 16.2-1.pgdg110+2))
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
Type "help" for help.

postgres=> 
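For a quick non-interactive smoke test, a single SQL statement can also be passed via psql's -c option (output omitted here):

$ psql "$(terraform output -raw connstr)" -c "SELECT version();"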

Conclusion

In this first part, the provisioning of an Instaclustr Managed PostgreSQL cluster with Terraform was demonstrated. The next part of this blog series will present a Terraform module that makes it even easier to provision PostgreSQL clusters, and will look at the input variables that can be set to customize the managed cluster further.

Instaclustr offers a 30-day free trial for its managed service, which allows provisioning clusters with development instance sizes, so you can sign up and try the above yourself today!
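And once a trial cluster is no longer needed, all resources defined above can be removed again with a single command:

$ terraform destroy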