
How to develop your own Terraform provider - Part 1

· ☕ 21 min read
🏷️
  • #terraform
    Welcome to my first post of “How to develop your own Terraform provider”.

    Terraform has become the de facto provisioning tool. Most of the time, there are Terraform providers for the cloud services that you use. But sometimes you want to manage resources with Terraform for which no provider exists. Then you have to develop your own.

    This series of posts shows you the “how-to” of developing your own (i.e. custom) Terraform provider with best practices. Throughout this series, I will develop a Terraform provider for Spinnaker: mercari/terraform-provider-spinnaker.

    This series targets Terraform 0.12; some features may not work with 0.11.

    What you can learn

    In this series of posts, you will learn …

    • How to define custom resources
    • Deep understanding of Terraform
    • Advanced features of Terraform providers

    Since this post focuses on Terraform, I will not explain the details about Go.

    What I am focusing on

    I am focusing on “how to develop a Terraform provider” and not “how to create a Terraform provider”. The difference between the two is whether it is practical or not: the former explains the actual process, while the latter only explains the method.

    In these posts I will focus on the process in detail, like “How does Terraform handle resource IDs?” or “How should I debug the Terraform provider?”. I will not explain all the options and functions of Terraform and Terraform plugins that are not widely used.

    Agenda

    In this series of posts, I will explain the following topics:

    1. How to define your first custom resource (this article)
      • Basic knowledge of Terraform plugins
      • Preparing a development environment
      • Implementing CRUD for your provider
    2. How to do Test-Driven Development (TDD)
      • Acceptance tests
      • Unit tests
    3. Advanced Schema features
      • Implementing an importer
      • Validation
      • Partial applies

    What’s Terraform provider?

    Terraform can be split into two main parts: Terraform Core and Terraform Plugins.

    Terraform consists of Terraform Core and Terraform Plugins

    Terraform Core uses remote procedure calls (RPC) defined by Terraform Plugins. Terraform Core is responsible for these functionalities:

    • Infrastructure as code
    • Resource state management
    • Construction of the resource graph
    • Planning the resource operations
    • Communicating with plugins over RPC

    Terraform Plugins, on the other hand, offer the RPC implementation for each specific service, such as AWS or GCP (providers), or such as bash (provisioners). Providers and provisioners together are called “plugins”. So what’s the difference?

    Kinds of Terraform plugins

    These plugins are categorized into three kinds.

    1. Built-in provisioners: installed by default in the Terraform binary
    2. Providers distributed by HashiCorp: automatically downloaded (if no cache exists)
    3. Third-party providers and provisioners: must be installed manually

    What we call a custom Terraform provider is the third kind. If you develop your own terraform-provider, you have to install it manually in each environment. That means you will also need a system to distribute your Terraform provider. I will explain this later in Distributing your Terraform provider.

    Terraform Plugins consist of Terraform Providers and Terraform Provisioners

    The providers distributed by HashiCorp are hosted in the terraform-providers GitHub organization.

    Providers

    Providers are the most common type of plugin. Providers communicate with a specific service through its APIs. For example, terraform-provider-aws (mostly) uses the Amazon Web Services (AWS) APIs. The provider is responsible for managing the resources' life cycles. Here, life cycle means the CRUD and state management of the resource.

    Create your own logic in your custom Terraform provider

    If the service you use has APIs, you can develop your own Terraform provider for it.

    Provisioners

    Provisioners are used to execute scripts on a local or remote machine as part of resource creation or destruction. Imagine Ansible.

    I guess you won’t use this very often, but here is one example that prints out the private IP address of an AWS EC2 instance.

    resource "aws_instance" "web" {
      # ...
    
      provisioner "local-exec" {
        command = "echo The server's IP address is ${self.private_ip}"
      }
    }
    

    Plugin name conventions

    For Terraform to find plugins, the name of the binary should be terraform-provider-<NAME>. Moreover, if you want to version the provider, the name should be terraform-provider-<NAME>_vX.Y.Z, with the version following semver.
    See details of Semantic Versioning 2.0.0 if you don’t know about it.

    This version suffix vX.Y.Z is used to specify which version of the plugin should be used during Terraform execution. Here is an example of specifying the version of terraform-provider-aws.

    terraform {
      required_providers {
        aws = "~> 1.0"
      }
    }
    

    If you want to see the information about the providers used in the current configuration, run terraform providers.
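As a rough illustration of the naming convention above (this is not Terraform's actual discovery code, and the helper name is hypothetical), the provider name and version can be split out of a binary name like this:

```go
package main

import (
	"fmt"
	"strings"
)

// splitPluginName is an illustrative sketch, not Terraform's real parser.
// It splits a plugin binary name such as "terraform-provider-aws_v1.2.3"
// into the provider name ("aws") and the semver string ("1.2.3").
func splitPluginName(binary string) (name, version string) {
	rest := strings.TrimPrefix(binary, "terraform-provider-")
	if i := strings.LastIndex(rest, "_v"); i >= 0 {
		return rest[:i], rest[i+2:]
	}
	return rest, "" // unversioned binary
}

func main() {
	name, version := splitPluginName("terraform-provider-aws_v1.2.3")
	fmt.Println(name, version) // aws 1.2.3
}
```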

    Plugin Locations

    As explained in the previous section, you don’t need to do anything to use built-in provisioners and providers distributed by HashiCorp. When you initialize Terraform with terraform init, Terraform will search the paths below for the plugins you need.

    But if you are going to use third-party providers and provisioners, you will have to store them in these paths for Terraform to find your plugins.

    | Path | When to use |
    |------|-------------|
    | `.` | Useful during your plugin development |
    | same directory as the `which terraform` result | For airgapped installations (like `terraform bundle`) |
    | `terraform.d/plugins/<OS>_<ARCH>/` | |
    | `.terraform/plugins/<OS>_<ARCH>/` | |
    | `~/.terraform.d/plugins/` | The user's plugins directory |
    | `~/.terraform.d/plugins/<OS>_<ARCH>/` | The user's plugins directory with explicit OS and architecture |

    Still, you can override these paths with the --plugin-dir=<PATH> option of the terraform init command.


    Now you know the basic terms and systems of Terraform plugins. From here, I will start explaining the details of a custom Terraform provider, the main topic of this post.


    Let’s start developing a custom Terraform provider.

    As explained above, Terraform plugins are either providers or provisioners. I’ll focus on developing a provider.

    0. Create a repository

    A Terraform provider’s name should be terraform-provider-<NAME>. Therefore, it is recommended to give the repository in your VCS the same name.

    1. Build a skeleton of your plugin

    As explained before, the plugins are loaded as binaries. To create a binary that Terraform can load as a plugin, implement the entry-point with the following code as main.go:

    package main
    
    import (
            "github.com/hashicorp/terraform-plugin-sdk/plugin"
            "github.com/hashicorp/terraform-plugin-sdk/terraform"
    )
    
    func main() {
            plugin.Serve(&plugin.ServeOpts{
                    ProviderFunc: func() terraform.ResourceProvider {
                            return Provider()
                    },
            })
    }
    

    When Terraform is executed, ProviderFunc will be called and your Terraform provider will be returned to Terraform. This is how your custom Terraform provider is imported into Terraform.

    2. Define the Provider Schema

    The Provider Schema describes “the features that the Terraform provider offers”. It will be the root object of your provider.

    Write your first Provider Schema like the following code as provider.go:

    package main
    
    import (
            "github.com/hashicorp/terraform-plugin-sdk/helper/schema"
    )
    
    func Provider() *schema.Provider {
    	return &schema.Provider{
    		Schema:       map[string]*schema.Schema{},
    		ResourcesMap: map[string]*schema.Resource{},
    	}
    }
    

    Best practice: in most providers, the definitions of the schema and provider are located in a /<NAME> directory. It is good to follow that convention and place provider.go at <NAME>/provider.go.

    Now your first terraform provider is created. Good job!

    2.0 Design your resource: Terraform provider design principles

    To give the developers who use your provider a good experience, you should follow the design principles from HashiCorp. See the details at this link: HashiCorp Provider Design Principles.

    2.0.1 Provider should focus on a single API or SDK

    A Terraform provider should focus on one single API

    A Terraform provider should not manage multiple APIs, for the reasons below.

    • It removes complexity from the provider
    • It restricts the scope of the provider
    • It keeps the focus on a single API

    2.0.2 Resource should represent a single API object/mapping/event

    One resource should represent a single API resource. You shouldn’t create an abstraction of the service’s APIs.

    2.0.3 Resource and attributes should match underlying API naming

    A Terraform resource should focus on one object in actual API

    Don’t rename the resources and attributes. Your naming should directly represent the actual API resource, for simplicity.

    2.0.4 API client package should be separated

    Separate the responsibility of the Terraform provider from the API client (RPC) logic.

    A Terraform provider should focus on the management of the resource

    2.1 What’s Provider Schema?

    What should you know about the Provider Schema? The main thing is that this schema is used as the root object. It defines:

    • What kinds of block types (data, resource, output, etc.) the provider has
    • The configuration of the provider (imagine the options of CLI commands)
    • The schema of the provider resource block

    Here are some examples of providers.

    Akamai:

    provider "akamai" {
        property {
          host = var.akamai_host
          access_token = var.akamai_access_token
          client_token = var.akamai_client_token
          client_secret = var.akamai_client_secret
        }
    }
    

    Okta:

    provider "okta" {
      org_name  = "dev-123456"
      base_url  = "okta.com"
      api_token = "xxxx"
    }
    

    Each provider has its own schema definition (property for Akamai; org_name, base_url, and api_token for Okta). The schema of the provider holds fundamental properties that will be used widely across your provider.

    2.2 Using sensitive data

    The Terraform provider is responsible for authentication and authorization so that you can access the APIs. Therefore, some sensitive data may be required, such as API tokens or client credentials.

    For those values, use Terraform variables and load them during execution.

    provider "okta" {
      org_name  = "dev-123456"
      base_url  = "okta.com"
      api_token = var.okta_api_token
    }
    
    variable "okta_api_token" {
      description = "API token for Okta"
      type = string
    }
    

    3. Prepare debug environment

    But how can we know that this provider can be imported and used without bugs? In this section, I will explain how to prepare for developing a provider.

    3.1 Create .gitignore

    To version control your provider, it is important to ignore what you don’t want to commit. Use gibo and execute the following commands:

    $ gibo dump Terraform > .gitignore
    $ gibo dump Go >> .gitignore
    

    And also, ignore the binary of your provider and add a debugging directory.

    $ basename $(git rev-parse --show-toplevel) >> .gitignore
    $ echo "terraform/" >> .gitignore
    

    Now you can develop without committing what shouldn’t be committed.

    3.2 Build your terraform provider binary

    Now, build a binary that Terraform imports.

    $ go build -o terraform-provider-<NAME>
    

    Since the repository name should be terraform-provider-<NAME> (the same as the provider’s name), go build is enough and you won’t need the -o option.

    Now you will see a binary named terraform-provider-<NAME>.

    3.3 Use that terraform provider

    You have no resources yet, but let’s import the initial Terraform provider to see that ProviderFunc works as expected.

    Since you ignored the terraform/ directory, we will use it for debugging. Create the directory and initialize it with a basic .tf file as follows:

    $ mkdir terraform
    

    provider.tf:

    provider "<NAME>" {}
    

    Replace <NAME> with your Terraform provider’s name. Then try to initialize with this command.

    $ terraform init terraform/
    

    It works, but nothing will be created because there is no resource schema yet.

    4. Define your first schema

    Now let’s define your resource schema.

    4.1 About the Resource Schema

    The Resource Schema is the main part of the schemas. It defines the life cycle of the resource.

    Let’s say you want to manage a GCE instance. On your first apply, Terraform will create the GCE instance.

    resource "google_compute_instance" "ga-instance" {
      name = "my-gce-instance"
    }
    

    After removing this block, Terraform will delete the instance. If you rename the instance, an UPDATE will happen. The behaviour of an UPDATE depends on the provider.
    It may patch your changes, or delete the old resource and create a new one. If you have sensitive resources, please check the provider’s documentation carefully.
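The lifecycle described above can be sketched as a tiny decision function. This is a deliberate simplification of Terraform Core's real diff logic (which also compares individual attributes, handles ForceNew replacement, and so on); the function name is hypothetical:

```go
package main

import "fmt"

// plan is a simplified sketch of how Terraform Core picks an operation:
// it compares whether the resource exists in the state and in the config.
func plan(inState, inConfig bool) string {
	switch {
	case !inState && inConfig:
		return "CREATE"
	case inState && !inConfig:
		return "DELETE"
	case inState && inConfig:
		return "UPDATE (or no-op if nothing changed)"
	default:
		return "no-op"
	}
}

func main() {
	fmt.Println(plan(false, true)) // first apply: CREATE
	fmt.Println(plan(true, false)) // resource removed from .tf: DELETE
}
```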

    4.3 Implement Resource Schema

    Resources should be registered in the *schema.Provider's ResourcesMap field, which takes a map[string]*schema.Resource like below.

    ResourcesMap: map[string]*schema.Resource{
    	"<resource_type>": resource<NAME>(),
    },
    

    This will map as below.

    resource "<resource_type>" "<any>" {
      ...Configuration in *schema.Resource
    }
    

    In most cases you should separate the files by block type (resource, data) and by resource type. For example, here we should create resource_<resource_name>.go and then create a function that returns the schema.
    Then, let’s implement the Resource Schema, *schema.Resource. *schema.Resource is a struct.

    type Resource struct {
    	Schema map[string]*Schema
    	SchemaVersion int
    	MigrateState StateMigrateFunc
    	StateUpgraders []StateUpgrader
    	Create CreateFunc
    	Read   ReadFunc
    	Update UpdateFunc
    	Delete DeleteFunc
    	Exists ExistsFunc
    	CustomizeDiff CustomizeDiffFunc
    	Importer *ResourceImporter
    	DeprecationMessage string
    	Timeouts *ResourceTimeout
    }
    

    There are two main parts: the schema and the CRUD definitions.

    4.3.1 Schema

    The Schema field of the resource defines its attributes. Following the design principles from HashiCorp, use the same names as the API for the attributes, like the following code:

    resource "<resource_type>" "<name_of_resource>" {
      ...schema
    }
    

    This is an example of how to define a schema.

    func resource<NAME>() *schema.Resource {
      return &schema.Resource{
        Schema: map[string]*schema.Schema{
                "name": &schema.Schema{
                        Type:     schema.TypeString,
                        Required: true,
                },
        },
      }
    }
    

    You can specify many kinds of schemas. For details, see the third post, Advanced Schema features.

    4.3.2 CRUD

    Now you have defined a schema, i.e. a resource object. The next step is the CRUD of that resource. CRUD covers the basic events of a resource and is the main part of a Terraform plugin. The CRUD operations define the communication between the service API and the Terraform state.

    The important point when starting to manage resources with Terraform is finding the ID of the resources. For instances, it might be an IP address or an instance ID given by the cloud provider. For users or teams, it might be a user ID or team ID. Always think about how Terraform will identify the resources.

    In most cases, the service will provide the ID. The service uses that ID to manage its own resources too, so you can use it. Before we start implementing, let’s investigate what kinds of IDs are used, to make sure we aren’t doing something wrong.

    Examples of the IDs used in Terraform providers distributed by HashiCorp (which means they are reviewed and trustworthy):

    | Provider | Resource | ID used in the Terraform provider |
    |----------|----------|-----------------------------------|
    | GCP | google_compute_instance | fmt.Sprintf("projects/%s/zones/%s/instances/%s", project, z, instance.Name) |
    | PagerDuty | pagerduty_user | User ID from the API |
    | Datadog | datadog_monitor | Monitor ID in Datadog |

    Every resource has an ID for Terraform to identify the actual resource.
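Composite IDs like the GCP one in the table can be built and parsed back with plain string handling. A hypothetical, stdlib-only sketch (the real provider's code differs):

```go
package main

import (
	"fmt"
	"strings"
)

// instanceID builds a GCP-style composite resource ID, mirroring the
// fmt.Sprintf pattern shown in the table above.
func instanceID(project, zone, name string) string {
	return fmt.Sprintf("projects/%s/zones/%s/instances/%s", project, zone, name)
}

// parseInstanceID is the inverse, e.g. for an importer. Hypothetical helper.
func parseInstanceID(id string) (project, zone, name string, err error) {
	parts := strings.Split(id, "/")
	if len(parts) != 6 || parts[0] != "projects" || parts[2] != "zones" || parts[4] != "instances" {
		return "", "", "", fmt.Errorf("unexpected ID format: %s", id)
	}
	return parts[1], parts[3], parts[5], nil
}

func main() {
	id := instanceID("my-proj", "us-central1-a", "my-gce-instance")
	fmt.Println(id) // projects/my-proj/zones/us-central1-a/instances/my-gce-instance
}
```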

    You will define each function for CRUD.

    type Resource struct {
      ...
    	Create CreateFunc
    	Read   ReadFunc
    	Update UpdateFunc
    	Delete DeleteFunc
    }
    
    Implement Create

    To create a resource, define a CreateFunc that invokes the service API and creates the resource. Then save the ID of that resource.

    func resourceCreate(d *schema.ResourceData, m interface{}) error {
      id := d.Get("id").(string)
      d.SetId(id)
      return resourceRead(d, m)
    }
    

    Here, *schema.ResourceData holds the values defined in the .tf file. Use d.SetId(id) to save the ID.

    Always call resourceRead in the last line to ensure that the resource is created and its state is saved.

    Note that in the snippet above there is no RPC to any service; it only shows how Terraform manages the resource ID. The actual function should look like this.

    func resourceCreate(d *schema.ResourceData, m interface{}) error {
      something := d.Get("something").(string)
      
      // API call to provider and get ID
      
      d.SetId(id)
      return resourceRead(d, m)
    }
    

    Then apply.

    $ terraform apply
    

    You will see a .tfstate file containing the ID you specified. The .tfstate holds the attribute values under resources[].instances[]. For example, the .tfstate file after applying

    resource "spinnaker_application" "my_app" {
        application = "keke-test"
        email       = "hoho@test.com"
    }
    

    will look like this.

    {
      "version": 4,
      "terraform_version": "0.12.20",
      "serial": 1,
      "lineage": "5e997bf1-5d56-5777-b913-7ff4f510ac99",
      "outputs": {},
      "resources": [
        {
          "mode": "managed",
          "type": "spinnaker_application",
          "name": "my_app",
          "provider": "provider.spinnaker",
          "instances": [
            {
              "schema_version": 0,
              "attributes": {
                "application": "keke-test",
                "email": "hoho@test.com",
                "id": "keke-test"
              },
              "private": "bnVsbA=="
            }
          ]
        }
      ]
    }
    
    Example of datadog_monitor resource's create function
    func resourceDatadogMonitorCreate(d *schema.ResourceData, meta interface{}) error {
    
    	client := meta.(*datadog.Client)
    
    	m := buildMonitorStruct(d)
    	m, err := client.CreateMonitor(m)
    	if err != nil {
    		return fmt.Errorf("error updating monitor: %s", err.Error())
    	}
    
    	d.SetId(strconv.Itoa(m.GetId()))
    
    	return resourceDatadogMonitorRead(d, meta)
    }
    

    ref: create function, terraform-provider-datadog

    Implement Read

    The ReadFunc is used to sync the local state with the actual state. This function should be read-only and mustn’t mutate any provider resource. Setting a blank ID tells Terraform that the resource no longer exists.

    func resourceRead(d *schema.ResourceData, m interface{}) error {
      client := m.(*MyClient)
      obj, ok := client.Get(d.Id())
      if !ok {
        d.SetId("")
        return nil
      }
    
      d.Set("id", obj.id)
      return nil
    }
    
    Example of datadog_monitor resource's read function
    func resourceDatadogMonitorRead(d *schema.ResourceData, meta interface{}) error {
    	client := meta.(*datadog.Client)
    
    	i, err := strconv.Atoi(d.Id())
    	if err != nil {
    		return err
    	}
    
    	m, err := client.GetMonitor(i)
    	if err != nil {
    		return err
    	}
    
    	thresholds := make(map[string]string)
    	for k, v := range map[string]json.Number{
    		"ok":                m.Options.Thresholds.GetOk(),
    		"warning":           m.Options.Thresholds.GetWarning(),
    		"critical":          m.Options.Thresholds.GetCritical(),
    		"unknown":           m.Options.Thresholds.GetUnknown(),
    		"warning_recovery":  m.Options.Thresholds.GetWarningRecovery(),
    		"critical_recovery": m.Options.Thresholds.GetCriticalRecovery(),
    	} {
    		s := v.String()
    		if s != "" {
    			thresholds[k] = s
    		}
    	}
    
    	thresholdWindows := make(map[string]string)
    	for k, v := range map[string]string{
    		"recovery_window": m.Options.ThresholdWindows.GetRecoveryWindow(),
    		"trigger_window":  m.Options.ThresholdWindows.GetTriggerWindow(),
    	} {
    		if v != "" {
    			thresholdWindows[k] = v
    		}
    	}
    
    	tags := []string{}
    	for _, s := range m.Tags {
    		tags = append(tags, s)
    	}
    	sort.Strings(tags)
    
    	log.Printf("[DEBUG] monitor: %+v", m)
    	d.Set("name", m.GetName())
    	d.Set("message", m.GetMessage())
    	d.Set("query", m.GetQuery())
    	d.Set("type", m.GetType())
    
    	d.Set("thresholds", thresholds)
    	d.Set("threshold_windows", thresholdWindows)
    
    	d.Set("new_host_delay", m.Options.GetNewHostDelay())
    	d.Set("evaluation_delay", m.Options.GetEvaluationDelay())
    	d.Set("notify_no_data", m.Options.GetNotifyNoData())
    	d.Set("no_data_timeframe", m.Options.NoDataTimeframe)
    	d.Set("renotify_interval", m.Options.GetRenotifyInterval())
    	d.Set("notify_audit", m.Options.GetNotifyAudit())
    	d.Set("timeout_h", m.Options.GetTimeoutH())
    	d.Set("escalation_message", m.Options.GetEscalationMessage())
    	d.Set("include_tags", m.Options.GetIncludeTags())
    	d.Set("tags", tags)
    	d.Set("require_full_window", m.Options.GetRequireFullWindow()) // TODO Is this one of those options that we neeed to check?
    	d.Set("locked", m.Options.GetLocked())
    
    	if m.GetType() == logAlertMonitorType {
    		d.Set("enable_logs_sample", m.Options.GetEnableLogsSample())
    	}
    
    	// The Datadog API doesn't return old timestamps or support a special value for unmuting scopes
    	// So we provide this functionality by saving values to the state
    	apiSilenced := m.Options.Silenced
    	configSilenced := d.Get("silenced").(map[string]interface{})
    
    	for _, scope := range getUnmutedScopes(d) {
    		if _, ok := apiSilenced[scope]; !ok {
    			apiSilenced[scope] = -1
    		}
    	}
    
    	// Ignore any timestamps in the past that aren't -1 or 0
    	for k, v := range configSilenced {
    		if v.(int) < int(time.Now().Unix()) && v.(int) != 0 && v.(int) != -1 {
    			// sync the state with whats in the config so its ignored
    			apiSilenced[k] = v.(int)
    		}
    	}
    	d.Set("silenced", apiSilenced)
    
    	return nil
    }
    

    ref: read function, terraform-provider-datadog

    Implement Update

    Update is called when you change an existing Terraform-managed resource. I guess the update function will be the most complicated part of CRUD. You have to implement the following things.

    • Send a PATCH (or equivalent update) request to the API
    • Update the state file
    Example of datadog_monitor resource's update function
    func resourceDatadogMonitorUpdate(d *schema.ResourceData, meta interface{}) error {
    	client := meta.(*datadog.Client)
    
    	m := &datadog.Monitor{}
    
    	i, err := strconv.Atoi(d.Id())
    	if err != nil {
    		return err
    	}
    
    	m.Id = datadog.Int(i)
    	if attr, ok := d.GetOk("name"); ok {
    		m.SetName(attr.(string))
    	}
    	if attr, ok := d.GetOk("message"); ok {
    		m.SetMessage(attr.(string))
    	}
    	if attr, ok := d.GetOk("query"); ok {
    		m.SetQuery(attr.(string))
    	}
    
    	if attr, ok := d.GetOk("tags"); ok {
    		s := make([]string, 0)
    		for _, v := range attr.(*schema.Set).List() {
    			s = append(s, v.(string))
    		}
    		sort.Strings(s)
    		m.Tags = s
    	}
    
    	o := datadog.Options{
    		NotifyNoData:      datadog.Bool(d.Get("notify_no_data").(bool)),
    		RequireFullWindow: datadog.Bool(d.Get("require_full_window").(bool)),
    		IncludeTags:       datadog.Bool(d.Get("include_tags").(bool)),
    	}
    	if attr, ok := d.GetOk("thresholds"); ok {
    		thresholds := attr.(map[string]interface{})
    		o.Thresholds = &datadog.ThresholdCount{} // TODO: This is a little annoying..
    		if thresholds["ok"] != nil {
    			o.Thresholds.SetOk(json.Number(thresholds["ok"].(string)))
    		}
    		if thresholds["warning"] != nil {
    			o.Thresholds.SetWarning(json.Number(thresholds["warning"].(string)))
    		}
    		if thresholds["critical"] != nil {
    			o.Thresholds.SetCritical(json.Number(thresholds["critical"].(string)))
    		}
    		if thresholds["unknown"] != nil {
    			o.Thresholds.SetUnknown(json.Number(thresholds["unknown"].(string)))
    		}
    		if thresholds["warning_recovery"] != nil {
    			o.Thresholds.SetWarningRecovery(json.Number(thresholds["warning_recovery"].(string)))
    		}
    		if thresholds["critical_recovery"] != nil {
    			o.Thresholds.SetCriticalRecovery(json.Number(thresholds["critical_recovery"].(string)))
    		}
    	}
    
    	if attr, ok := d.GetOk("threshold_windows"); ok {
    		thresholdWindows := attr.(map[string]interface{})
    		o.ThresholdWindows = &datadog.ThresholdWindows{}
    		if thresholdWindows["recovery_window"] != nil {
    			o.ThresholdWindows.SetRecoveryWindow(thresholdWindows["recovery_window"].(string))
    		}
    		if thresholdWindows["trigger_window"] != nil {
    			o.ThresholdWindows.SetTriggerWindow(thresholdWindows["trigger_window"].(string))
    		}
    	}
    
    	newHostDelay := d.Get("new_host_delay")
    	o.SetNewHostDelay(newHostDelay.(int))
    
    	if attr, ok := d.GetOk("evaluation_delay"); ok {
    		o.SetEvaluationDelay(attr.(int))
    	}
    	if attr, ok := d.GetOk("no_data_timeframe"); ok {
    		o.NoDataTimeframe = datadog.NoDataTimeframe(attr.(int))
    	}
    	if attr, ok := d.GetOk("renotify_interval"); ok {
    		o.SetRenotifyInterval(attr.(int))
    	}
    	if attr, ok := d.GetOk("notify_audit"); ok {
    		o.SetNotifyAudit(attr.(bool))
    	}
    	if attr, ok := d.GetOk("timeout_h"); ok {
    		o.SetTimeoutH(attr.(int))
    	}
    	if attr, ok := d.GetOk("escalation_message"); ok {
    		o.SetEscalationMessage(attr.(string))
    	}
    
    	silenced := false
    	configuredSilenced := map[string]int{}
    	if attr, ok := d.GetOk("silenced"); ok {
    		// TODO: this is not very defensive, test if we can fail non int input
    		s := make(map[string]int)
    		for k, v := range attr.(map[string]interface{}) {
    			s[k] = v.(int)
    			configuredSilenced[k] = v.(int)
    		}
    		o.Silenced = s
    		silenced = true
    	}
    	if attr, ok := d.GetOk("locked"); ok {
    		o.SetLocked(attr.(bool))
    	}
    	// can't use m.GetType here, since it's not filled for purposes of updating
    	if d.Get("type") == logAlertMonitorType {
    		if attr, ok := d.GetOk("enable_logs_sample"); ok {
    			o.SetEnableLogsSample(attr.(bool))
    		} else {
    			o.SetEnableLogsSample(false)
    		}
    	}
    
    	m.Options = &o
    
    	if err = client.UpdateMonitor(m); err != nil {
    		return fmt.Errorf("error updating monitor: %s", err.Error())
    	}
    
    	var retval error
    	if retval = resourceDatadogMonitorRead(d, meta); retval != nil {
    		return retval
    	}
    
    	// if the silenced section was removed from the config, we unmute it via the API
    	// The API wouldn't automatically unmute the monitor if the config is just missing
    	// else we check what other silenced scopes were added from API response in the
    	// "read" above and add them to "unmutedScopes" to be explicitly unmuted (because
    	// they're "drift")
    	unmutedScopes := getUnmutedScopes(d)
    	if newSilenced, ok := d.GetOk("silenced"); ok && !silenced {
    		retval = client.UnmuteMonitorScopes(*m.Id, &datadog.UnmuteMonitorScopes{AllScopes: datadog.Bool(true)})
    		d.Set("silenced", map[string]int{})
    	} else {
    		for scope := range newSilenced.(map[string]interface{}) {
    			if _, ok := configuredSilenced[scope]; !ok {
    				unmutedScopes = append(unmutedScopes, scope)
    			}
    		}
    	}
    
    	// Similarly, if the silenced attribute is -1, lets unmute those scopes
    	if len(unmutedScopes) != 0 {
    		for _, scope := range unmutedScopes {
    			client.UnmuteMonitorScopes(*m.Id, &datadog.UnmuteMonitorScopes{Scope: &scope})
    		}
    	}
    
    	return resourceDatadogMonitorRead(d, meta)
    }
    

    ref: updateFunc, terraform-provider-datadog

    The update function of mercari/terraform-provider-spinnaker was missing, so I implemented it.
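The heart of an update function is sending only what changed. Here is a stdlib-only sketch of that idea using plain maps; in a real provider, *schema.ResourceData's d.HasChange answers the same question per attribute, and the helper here is hypothetical:

```go
package main

import "fmt"

// changedFields returns the attributes whose values differ between the old
// and new sets — conceptually what d.HasChange answers key by key.
// A stdlib-only sketch, not the SDK API.
func changedFields(oldAttrs, newAttrs map[string]string) map[string]string {
	patch := map[string]string{}
	for k, v := range newAttrs {
		if oldAttrs[k] != v {
			patch[k] = v
		}
	}
	return patch
}

func main() {
	oldAttrs := map[string]string{"name": "web", "message": "hi"}
	newAttrs := map[string]string{"name": "web", "message": "hello"}
	fmt.Println(changedFields(oldAttrs, newAttrs)) // map[message:hello]
}
```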

    Implement Destroy

    The DeleteFunc is called when the terraform destroy command runs. As the name suggests, it deletes the real resource and removes it from the Terraform state file.

    func resourceServerDelete(d *schema.ResourceData, m interface{}) error {
            // Call API and delete
            d.SetId("")
            return nil
    }
    
    Example of datadog_monitor resource's delete function
    func resourceDatadogMonitorDelete(d *schema.ResourceData, meta interface{}) error {
    	client := meta.(*datadog.Client)
    
    	i, err := strconv.Atoi(d.Id())
    	if err != nil {
    		return err
    	}
    
    	if err = client.DeleteMonitor(i); err != nil {
    		return err
    	}
    
    	return nil
    }
    

    ref: delete function, terraform-provider-datadog

    4.4 Implementing import

    Besides CRUD, there are times when we need to start using Terraform after the actual resource has already been created. For that, we have to implement terraform import.

    The easiest way to implement the import feature is using ImportStatePassthrough like below:

    Importer: &schema.ResourceImporter{
    	State: schema.ImportStatePassthrough,
    },
    

    Or you can just call the read function of the CRUD. This works if you have d.Set() calls in your read function; it will create the Terraform state for you.

    4.5 Metadata of the provider

    Providers can pass metadata to the CRUD functions through the meta interface{} argument. This second argument will be the return value of ConfigureFunc in *schema.Provider. It is used to initialize your API client before the CRUD functions are called.

    In most cases, the ConfigureFunc will look like this.

    func providerConfigureFunc(data *schema.ResourceData) (interface{}, error) {
      // Initialize the client
      
    	return &client{
    		url: "https://example.com",
    	}, nil
    }
    

    This client can be used in every CRUD function by casting the meta argument back to *client.
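The cast itself is an ordinary Go type assertion on interface{}. A minimal stdlib-only sketch, with a hypothetical client type standing in for your API client:

```go
package main

import "fmt"

// client is a stand-in for your API client, as returned by the provider's
// configure function.
type client struct {
	url string
}

// useMeta mimics a CRUD function: the meta argument arrives as interface{}
// and must be asserted back to the concrete client type. The two-value
// form avoids a panic if the type is unexpected.
func useMeta(meta interface{}) (string, error) {
	c, ok := meta.(*client)
	if !ok {
		return "", fmt.Errorf("unexpected meta type %T", meta)
	}
	return c.url, nil
}

func main() {
	url, err := useMeta(&client{url: "https://example.com"})
	fmt.Println(url, err) // https://example.com <nil>
}
```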

    It is better to create the API client package in another repository to separate the responsibilities.

    4.6 Implement existence check (optional)

    It is better to implement an ExistsFunc, which checks the existence of the resource at a deeper level than ReadFunc.
    If this function returns false, Terraform will treat the resource as deleted.
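An existence check usually just asks the API whether the saved ID still resolves. Here is a stdlib-only sketch against an in-memory stand-in for the service (the names are hypothetical, not the SDK's):

```go
package main

import "fmt"

// fakeService stands in for a remote API; Exists would normally be an HTTP call.
type fakeService struct {
	resources map[string]bool
}

func (s *fakeService) Exists(id string) (bool, error) {
	return s.resources[id], nil
}

// existsCheck mirrors the shape of an ExistsFunc: return (false, nil) to
// tell Terraform the resource is gone, or an error if the check itself failed.
func existsCheck(svc *fakeService, id string) (bool, error) {
	if id == "" {
		return false, fmt.Errorf("empty resource ID")
	}
	return svc.Exists(id)
}

func main() {
	svc := &fakeService{resources: map[string]bool{"keke-test": true}}
	ok, _ := existsCheck(svc, "keke-test")
	fmt.Println(ok) // true
}
```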

    5. State management of the resource type

    Since only IDs are stored in the Terraform state so far, Terraform only tracks whether the resources exist. Let’s say you also want to manage a user’s name and email with Terraform. Use the Set method to store those resource fields in the state.

    d.Set("address", resp.Address)
    

    Next steps

    We developed our first simple custom Terraform provider. But there are many things we should enhance.

    • Better update strategy
    • Error handling
    • Implement other block types such as data sources
    • Add details of resource

    These topics will be discussed in the third post. In the next post, I will explain how to improve our development cycle of Terraform providers.

    See you soon ✋


    WRITTEN BY
    Keisuke Yamashita
    Site Reliability Engineer