As promised in “Terraform: how to init your infrastructure”, here is the second installment of instructions for deploying all your infrastructure with Terraform at Scaleway!

We are going to cover some new concepts of Terraform to help you manage your infrastructure easily. As we are continuously working on new products and our Terraform provider, I’m also excited to cover new products that will strengthen your architecture.

First, we will use our first repository as a base to set up our new infrastructure. We will transform our main directory into a sub-section to have multiple modules inside one big Scaleway module. This structure opens up the possibility of deploying all the infrastructure you need in one click. Then, if you don’t need a product (a database, for example), you simply don’t call that sub-module in your main.tf.

You can find this second and more developed repository right here.

Before starting on the project, you need to have an account and your credentials all set up. You’ll also need to install Terraform on the server you are using, or locally, along with the latest version of the Scaleway Terraform provider.
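As a reminder from the first installment, a minimal provider configuration might look like the sketch below. The zone and region values are only examples, and the credentials are expected in environment variables:

```hcl
terraform {
  required_version = ">= 0.13"
  required_providers {
    scaleway = {
      source  = "scaleway/scaleway"
      version = ">= 2.0.0"
    }
  }
}

# Credentials are read from the SCW_ACCESS_KEY, SCW_SECRET_KEY
# and SCW_DEFAULT_PROJECT_ID environment variables.
provider "scaleway" {
  zone   = "fr-par-1"
  region = "fr-par"
}
```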

First, what is a module?

HashiCorp’s documentation defines a module as a container for multiple resources that are used together. More specifically, a module is a small portion of your code that you put inside a specific directory to use again later. It will allow you to:

  • Give a clearer vision of your code and how to use it;
  • Deploy the same piece of code multiple times in seconds;
  • Call the same code with a different configuration (launching multiple servers with different sizing, for example).
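The third point could look like the following sketch: two calls to the same instance module, with a different sizing each time (the module path and variable name are illustrative and match the layout we build later in this article):

```hcl
# Same module, called twice with different sizings.
module "front" {
  source        = "./module/instance"
  instance_type = "DEV1-S"
}

module "back" {
  source        = "./module/instance"
  instance_type = "DEV1-L"
}
```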

In this tutorial, I will take one specific product (in this case, an instance) and turn it into a module. Then, we will deploy our infrastructure.

Second, what are we going to deploy?

We are going to deploy a basic infrastructure that allows you to build any kind of application: an instance, a database, a Kubernetes cluster, and a load balancer. We’ll wrap everything in a private network, behind a public gateway, for safety reasons. However, the integration of the Kubernetes cluster inside a VPC is not done yet, so it will be deployed outside of our private network.

Schema of the infrastructure we will deploy

A new architecture

The first change is that we will modify the tree structure of the repository. Instead of having everything on the same level and one file per product/feature, we are going to divide our resources into sub-directories, with the same layout each time.

We are going from a single flat directory to this:

├── Terraform_Module_Scaleway_Schema.png
├── module
│   ├── database
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   ├── variables.tf
│   │   └── versions.tf
│   ├── instance
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   ├── variables.tf
│   │   └── versions.tf
│   ├── kapsule
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   ├── variables.tf
│   │   └── versions.tf
│   ├── loadbalancer
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   ├── variables.tf
│   │   └── versions.tf
│   └── vpc
│       ├── main.tf
│       ├── outputs.tf
│       ├── variables.tf
│       └── versions.tf
├── terraform.tfvars

6 directories, 28 files

Although it looks more complicated at first glance, this is an easier and more scalable way to deploy and maintain your infrastructure. In fact, your root main.tf will only be used to call the resources that you need, by sourcing the right module.

Your variables and outputs will be linked to your dedicated resources in the right module. And if you need global variables, you will still have the root file.
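As an illustration, a module can expose a value through an output, which the root module then passes to another module. Assuming the VPC module declares something like this sketch:

```hcl
# module/vpc/outputs.tf (sketch)
output "private_network_id" {
  description = "ID of the private network, consumed by the other modules"
  value       = scaleway_vpc_private_network.scaleway_pn.id
}
```

The root module can then reference it as `module.vpc.private_network_id` and feed it to the instance, database, or load balancer module.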

Finally, if you want to deploy more/new resources, you simply have to create a new sub-directory and call it as many times as you need.

Turning a Terraform repository into a module

Let’s take a deep dive into our new main.tf, where all the logic is going to be gathered.

We went from this:

resource "scaleway_instance_ip" "public_ip" {}

resource "scaleway_instance_volume" "scw-instance" {
  size_in_gb = 30
  type       = "l_ssd"
}

resource "scaleway_instance_server" "scw-instance" {
  type  = "DEV1-L"
  image = "ubuntu_focal"
  tags  = ["terraform instance", "scw-instance"]

  ip_id                 = scaleway_instance_ip.public_ip.id
  additional_volume_ids = [scaleway_instance_volume.scw-instance.id]

  root_volume {
    # The local storage of a DEV1-L instance is 80 GB; subtract the 30 GB
    # additional l_ssd volume, so the root volume needs to be 50 GB.
    size_in_gb = 50
  }
}

to this:

module "instance" {
  source              = "./module/instance"
  instance_size_in_gb = var.instance_size_in_gb
  instance_type       = var.instance_type
  instance_image      = var.instance_image
  volume_size_in_gb   = var.volume_size_in_gb
  volume_type         = var.volume_type
  tags                = var.tags
  private_network_id  = module.vpc.private_network_id
}

module "database" {
  source = "./module/database"

  rdb_instance_node_type         = var.rdb_instance_node_type
  rdb_instance_engine            = var.rdb_instance_engine
  rdb_is_ha_cluster              = var.rdb_is_ha_cluster
  rdb_disable_backup             = var.rdb_disable_backup
  rdb_instance_volume_type       = var.rdb_instance_volume_type
  rdb_instance_volume_size_in_gb = var.rdb_instance_volume_size_in_gb
  rdb_user_root_password         = var.rdb_user_root_password
  rdb_user_scaleway_db_password  = var.rdb_user_scaleway_db_password
  instance_ip_addr               = module.instance.instance_ip_addr
  private_network_id             = module.vpc.private_network_id
  user_name                      = var.user_name
  zone                           = var.zone
  region                         = var.region
  env                            = var.env
}

module "kapsule" {
  source = "./module/kapsule"

  kapsule_cluster_version = var.kapsule_cluster_version
  kapsule_pool_size       = var.kapsule_pool_size
  kapsule_pool_min_size   = var.kapsule_pool_min_size
  kapsule_pool_max_size   = var.kapsule_pool_max_size
  kapsule_pool_node_type  = var.kapsule_pool_node_type
  cni                     = var.cni
  zone                    = var.zone
  region                  = var.region
  env                     = var.env
}

module "loadbalancer" {
  source = "./module/loadbalancer"

  lb_size            = var.lb_size
  inbound_port       = var.inbound_port
  forward_port       = var.forward_port
  forward_protocol   = var.forward_protocol
  private_network_id = module.vpc.private_network_id
  zone               = var.zone
  region             = var.region
  env                = var.env
}

module "vpc" {
  source = "./module/vpc"

  public_gateway_dhcp = var.public_gateway_dhcp
  public_gateway_type = var.public_gateway_type
  bastion_port        = var.bastion_port
  zone                = var.zone
  region              = var.region
  env                 = var.env
}


As you can see, we deleted the instance resources from our previous example to call all the modules. Our main.tf will be our “call center”: it calls all the modules we need to deploy our infrastructure.

If we compare it with our previous repository, we see the logic of the resources has been moved to module/instance/.

Also, I advise you to transform every argument into a variable. First, it adds granularity to your project: you will be able to deploy multiple resources easily, with different configurations. Second, it gives you an overview of your configuration at a glance.
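For example, instead of hardcoding the instance type, a variable with a type, a description, and a sensible default keeps the configuration readable. A sketch (variable names and defaults here are illustrative):

```hcl
# module/instance/variables.tf (sketch)
variable "instance_type" {
  description = "Commercial type of the instance"
  type        = string
  default     = "DEV1-L"
}

variable "tags" {
  description = "Tags associated with the instance"
  type        = list(string)
  default     = []
}
```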

The main.tf from your instance’s module will look quite similar to the previous version (I just added the private network to my instance).
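Concretely, the resources are the same as before, with the hardcoded values swapped for variables and the private network attached. A sketch of what module/instance/main.tf could look like:

```hcl
# module/instance/main.tf (sketch)
resource "scaleway_instance_ip" "public_ip" {}

resource "scaleway_instance_volume" "scw-instance" {
  size_in_gb = var.volume_size_in_gb
  type       = var.volume_type
}

resource "scaleway_instance_server" "scw-instance" {
  type                  = var.instance_type
  image                 = var.instance_image
  tags                  = var.tags
  ip_id                 = scaleway_instance_ip.public_ip.id
  additional_volume_ids = [scaleway_instance_volume.scw-instance.id]

  root_volume {
    size_in_gb = var.instance_size_in_gb
  }

  # Attach the instance to the private network created by the vpc module
  private_network {
    pn_id = var.private_network_id
  }
}
```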

Then, I simply applied the same logic to all my previous files and created a module for each product.

All the values are still in the tfvars, which will allow you to have a clear overview of what kind of resources you have deployed. And if you want to, for example, switch to a bigger database, you simply have to update this file. There’s no need to modify the structure of your architecture or touch the resource itself.
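For example, upgrading the database could be a one-line change in terraform.tfvars. The values below are only examples of what such an excerpt might contain:

```hcl
# terraform.tfvars (excerpt, example values)
rdb_instance_engine    = "PostgreSQL-14"
rdb_instance_node_type = "DB-GP-S" # switch to a bigger node type to scale up
rdb_is_ha_cluster      = true
```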


Deploying the VPC

Scaleway launched VPC at the end of 2021, and over the last six months we have continuously added features to our network ecosystem. Here is the example I used to deploy our VPC:

#Private Network creation
resource "scaleway_vpc_private_network" "scaleway_pn" {
  name = "${var.env}-private_network"
}

#DHCP
resource "scaleway_vpc_public_gateway_dhcp" "scaleway_dhcp" {
  subnet             = var.public_gateway_dhcp
  push_default_route = var.dhcp_push_default_route
}

#Public Gateway
resource "scaleway_vpc_public_gateway_ip" "scaleway" {}

resource "scaleway_vpc_public_gateway" "scaleway_pg" {
  name            = "${var.env}-public_gateway"
  type            = var.public_gateway_type
  ip_id           = scaleway_vpc_public_gateway_ip.scaleway.id
  bastion_enabled = var.bastion_enabled
  bastion_port    = var.bastion_port
}

resource "scaleway_vpc_gateway_network" "scaleway" {
  gateway_id         = scaleway_vpc_public_gateway.scaleway_pg.id
  private_network_id = scaleway_vpc_private_network.scaleway_pn.id
  dhcp_id            = scaleway_vpc_public_gateway_dhcp.scaleway_dhcp.id
  cleanup_dhcp       = var.cleanup_dhcp
  enable_masquerade  = var.enable_masquerade
  depends_on         = [scaleway_vpc_public_gateway.scaleway_pg, scaleway_vpc_public_gateway_ip.scaleway, scaleway_vpc_private_network.scaleway_pn]
}

With our current infrastructure, you can see we implemented our virtual private cloud:

  • You can deploy a private network for an instance, a load balancer, and a database.
  • Our Public Gateway provides three main components:
    1. DHCP (allocation of private IP addresses);
    2. NAT gateway (managing the ingress/egress traffic);
    3. SSH Bastion (secure management of your SSH keys).
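Once the bastion is enabled, you can typically reach an instance on the private network by jumping through the gateway. A sketch, assuming the bastion listens on port 61000 (the default) and with placeholder IPs:

```shell
# Jump through the SSH bastion to reach an instance on the private network
ssh -J bastion@<gateway-public-ip>:61000 root@<instance-private-ip>
```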

You can now implement a secure infrastructure thanks to the combination of a Public Gateway and Private Networks.

What’s next?

I did not go through all the code of this repository or every specific detail. For example, I didn’t discuss how to secure sensitive information or how to manage outputs across modules. The examples I gave speak for themselves.
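On the sensitive-information point, one mechanism worth knowing: Terraform lets you mark variables and outputs as sensitive, so their values are redacted from the CLI output. A small sketch, reusing a variable from our database module:

```hcl
variable "rdb_user_root_password" {
  description = "Root password of the database"
  type        = string
  sensitive   = true # redacted from plan/apply output
}
```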

However, you now have a fully functional module that allows you to rapidly deploy a fully usable infrastructure to launch your first project. At the end of the day, all you have to update is your tfvars. This repository can serve as a guideline for anyone looking to write and maintain their Terraform infrastructure.

More products and features are coming in the next few months. We’re preparing Terraform integration for Serverless, CaaS, and more new products in the near future, so stay tuned!
