
hashicorp/terraform-provider-consul 82

Terraform Consul provider

hashicorp/terraform-provider-nomad 42

Terraform Nomad provider

K-Jean/Urigin 0

Project for Polytech APP5 INFO - cloud

remilapeyre/atlantis 0

Terraform Pull Request Automation

remilapeyre/blaze 0

NumPy and Pandas interface to Big Data

remilapeyre/confd 0

Manage local application configuration files using templates and data from etcd or consul

remilapeyre/conftest 0

Write tests against structured configuration data using the Open Policy Agent Rego query language

remilapeyre/consul 0

Consul is a distributed, highly available, and data center aware solution to connect and configure applications across dynamic, distributed infrastructure.

push event remilapeyre/hashicat-aws

Rémi Lapeyre

commit sha 7dff3957f4ecee3232cf792d55a246bf9c097ad4

Replace hashicat -> hashidog (#1)


push time in 2 days

PR opened remilapeyre/hashicat-aws

Replace hashicat -> hashidog
+1 -1

0 comments

1 changed file

pr created time in 2 days

pull request comment hashicorp/hashicat-aws

Replace hashicat -> hashidog

Sorry about this, I opened this against the wrong repository while doing some tests :/

remilapeyre

comment created time in 2 days

PR opened hashicorp/hashicat-aws

Replace hashicat -> hashidog
+1 -1

0 comments

1 changed file

pr created time in 2 days

create branch remilapeyre/hashicat-aws

branch: hashidog

created branch time in 2 days

fork remilapeyre/hashicat-aws

Sample app for Terraform workshops

fork in 3 days

push event remilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha 2b4df8928e3aa9ab782540f73581d27a91445938

Use Nomad defaults


push time in 6 days

push event remilapeyre/terraform-provider-nomad

Julien Pivotto

commit sha 7ab9e76c7dacc2644b147541c451a7df43034339

Fix nomad_job_parser doc title


Luiz Aoqui

commit sha 517ba3bcba226abf135b09a5d839c8144da4d479

Merge pull request #152 from roidelapluie/patch-1 Fix nomad_job_parser doc title


Rémi Lapeyre

commit sha 06c2a38f24abe698af544c2128569b51961bbb05

Merge remote-tracking branch 'origin/master' into nomad-job-v2


push time in 6 days

push event remilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha 423984a084e0f25ddbdc5982fa7915f45d8807e9

Remove the constraint added by Nomad


push time in 6 days

push event remilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha ed24173d57abcbc4d596d8f96840f2003b4fffb7

Move all normalization in a new step


push time in 6 days

Pull request review comment hashicorp/terraform-provider-nomad

Add a nomad_job_v2 resource

+package nomad
+
+import "github.com/hashicorp/terraform-plugin-sdk/helper/schema"
+
+func getJobFields() map[string]*schema.Schema {
+	return map[string]*schema.Schema{
+		"namespace": {
+			Type:     schema.TypeString,
+			Default:  "default",
+			Optional: true,
+		},
+		"priority": {
+			Type:     schema.TypeInt,
+			Default:  50,
+			Optional: true,
+		},
+		"type": {
+			Type:     schema.TypeString,
+			Default:  "service",
+			Optional: true,
+		},
+		"region": {
+			Type:     schema.TypeString,
+			Default:  "global",

In theory, this should be fixed by 3bb15606299dd7a5a411be98590f643d1b52800f; I still need to write tests for it, though.

remilapeyre

comment created time in 6 days


push event hashicorp/terraform-provider-consul

Rémi Lapeyre

commit sha 9e244e1f398585f960f227c93490c7e44d3cb8af

Update changelog for 2.10.0 (#228)


push time in 10 days

push event hashicorp/terraform-provider-consul

Rémi Lapeyre

commit sha 4c02aa6811e8d8201eba7e35922869a78b665ec0

Add support for the node binding rule (#218) Closes https://github.com/terraform-providers/terraform-provider-consul/issues/217


push time in 10 days

PR merged hashicorp/terraform-provider-consul

Add support for the node binding rule (size/M)

Closes https://github.com/terraform-providers/terraform-provider-consul/issues/217

+70 -34

0 comments

2 changed files

remilapeyre

pr closed time in 10 days

issue closed hashicorp/terraform-provider-consul

[Feature] new binding rule type "node"

Terraform Version

Terraform v0.13.0, provider.consul v2.9.0

Affected Resource(s)

consul_acl_binding_rule

Enhancement

Support for NodeIdentities (GH-7970) was added in Consul 1.8.1.

As a part of applying NodeIdentities to a login token, a binding rule needs to have the type of "node" in order to match the node against a NodeIdentity. In the CLI this can be achieved through the following command

consul acl binding-rule create -method=some-jwt -bind-type=node -bind-name='agent-${value.nodename}'

Currently in Terraform, the following error is received when trying to create a binding rule with the node bind type:

Error: expected bind_type to be one of [service role], got node

  on main.tf line 80, in resource "consul_acl_binding_rule" "agent_binding":
  80: resource "consul_acl_binding_rule" "agent_binding" {

It'd be great if we could support this as a valid value to be passed into Terraform.
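For illustration, the Terraform equivalent of the CLI command above would look roughly like the sketch below once bind_type = "node" is accepted; the auth method and bind name are the placeholders from this report:

resource "consul_acl_binding_rule" "agent_binding" {
  auth_method = "some-jwt"
  bind_type   = "node"
  # "$$" keeps ${...} literal so Consul, not Terraform, expands it
  bind_name   = "agent-$${value.nodename}"
}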

closed time in 10 days

wolfmd

push event hashicorp/terraform-provider-consul

Rémi Lapeyre

commit sha 74e3bb60c6cd5950e666be48d53b56b291eb17b7

Correctly handles environment variables for the protocol scheme (#219) Closes https://github.com/terraform-providers/terraform-provider-consul/issues/215


push time in 10 days

PR merged hashicorp/terraform-provider-consul

Correctly handles environment variables for the protocol scheme (size/XS)

Closes https://github.com/terraform-providers/terraform-provider-consul/issues/215

+1 -1

0 comments

1 changed file

remilapeyre

pr closed time in 10 days

issue closed hashicorp/terraform-provider-consul

CONSUL_HTTP_SSL environment variable not recognised?

Hi there,

it seems the Consul provider does not recognise the CONSUL_HTTP_SSL environment variable.

Related issue from terraform: https://github.com/hashicorp/terraform/issues/23688 Initially I thought this was about terraform but it seems it's about the consul provider.

To be fully compatible with Ansible (consul_kv lookup plugin) I have to use the following environment variables:

env|grep -v TOKEN|grep CONSUL
CONSUL_HTTP_SSL=true
CONSUL_HTTP_ADDR=consul.example.com:443

Terraform Version

Terraform v0.13.0

Affected Resource(s)

  • consul_keys
  • anything from the consul provider, I guess

Terraform Configuration Files

terraform {
  backend "consul" {
    address = "consul.example.com"
    scheme  = "https"
    path    = "tfstate/terraform-provider-consul"
  }
}
  • terraform init successful
terraform {
  backend "consul" {
    address = "consul.bws.tmnet.zone"
    scheme  = "https"
    path    = "_tfstate/terraform-provider-consul"
  }
}

data consul_keys "consuldata" {
  key {
    name    = "foo"
    path    = "foo/bar"
    default = "bar"
  }
}
  • terraform init successful
terraform -v
Terraform v0.13.0
+ provider registry.terraform.io/hashicorp/consul v2.9.0
  • Now terraform starts to fail
terraform plan
data.consul_keys.consuldata: Refreshing state...

Error: Failed to get datacenter from Consul agent: Unexpected response code: 400 (Client sent an HTTP request to an HTTPS server.
)

  on terraform.tf line 9, in data "consul_keys" "consuldata":
   9: data consul_keys "consuldata" {
terraform {
  backend "consul" {
    address = "consul.example.com"
    scheme  = "https"
    path    = "_tfstate/terraform-provider-consul"
  }
}

provider "consul" {
  scheme = "https"
}

data consul_keys "consuldata" {
  key {
    name    = "foo"
    path    = "foo/bar"
    default = "bar"
  }
}

Adding consul provider configuration with scheme = "https" fixes the issue.

Expected Behavior

It should not be required to configure

provider "consul" {
  scheme = "https"
}

when CONSUL_HTTP_SSL is present with a boolean value of true.

Actual Behavior

Configuring the consul provider with scheme = "https" is required in HCL even if the CONSUL_HTTP_SSL variable is present with a boolean value of true.

References

  • https://github.com/hashicorp/terraform/issues/23688
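For illustration, once the provider honours CONSUL_HTTP_SSL (the fix merged above), the environment variables from this report should be enough on their own. A minimal sketch reusing the example address, with no explicit scheme:

# with CONSUL_HTTP_SSL=true and CONSUL_HTTP_ADDR=consul.example.com:443 exported
provider "consul" {}

data "consul_keys" "consuldata" {
  key {
    name    = "foo"
    path    = "foo/bar"
    default = "bar"
  }
}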

closed time in 10 days

transferkraM

push event hashicorp/terraform-provider-consul

Rémi Lapeyre

commit sha 37d9b8472bc290b2af614be783537ad84d7bb13b

Add import support to consul_intention (#225) Closes https://github.com/hashicorp/terraform-provider-consul/issues/222


push time in 10 days

PR merged hashicorp/terraform-provider-consul

Add import support to consul_intention

Closes https://github.com/hashicorp/terraform-provider-consul/issues/222

+18 -1

0 comments

3 changed files

remilapeyre

pr closed time in 10 days

issue closed hashicorp/terraform-provider-consul

[Feature Request] Support Intention imports

Hi there,

Error: resource consul_intention doesn't support import

it would be awesome if we could import consul_intention resources.

BR,

Javier
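With import support merged (#225 above), an existing intention could be adopted into state roughly as sketched below; the resource arguments and the intention ID are placeholders:

resource "consul_intention" "example" {
  source_name      = "api"
  destination_name = "db"
  action           = "allow"
}

# terraform import consul_intention.example <intention-id>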

closed time in 10 days

daktari

push event hashicorp/terraform-provider-consul

Rémi Lapeyre

commit sha 09c2814ceb1f0e43e49e4dc84e5136adb6b27ddb

Remove the flags attribute in consul_license (#227) Since this just removes an attribute it does not need a StateUpgrader. Closes https://github.com/hashicorp/terraform-provider-consul/issues/223


push time in 10 days

PR merged hashicorp/terraform-provider-consul

Remove the flags attribute in consul_license

Since this just removes an attribute it does not need a StateUpgrader.

Closes https://github.com/hashicorp/terraform-provider-consul/issues/223

+42 -43

0 comments

3 changed files

remilapeyre

pr closed time in 10 days

issue closed hashicorp/terraform-provider-consul

Applying enterprise consul_license resource results in error

While applying the consul_license enterprise resource, Terraform fails on read of the license schema. However, the license does get applied successfully according to Consul.

From what I can tell, this is due to a mismatched schema for some properties of the license response from Consul.

Terraform Version

Terraform v0.13.1-dev (testing w/ unrelated salt-masterless changes on https://github.com/hashicorp/terraform/pull/25944)

  • provider registry.terraform.io/hashicorp/consul v2.9.0

Affected Resource(s)

  • consul_license

Debug Output

2020/08/28 11:33:02 [DEBUG] module.consul_ent.consul_license.hce: apply errored, but we're indicating that via the Error pointer rather than returning it: failed to set 'flags': flags.modules: '' expected type 'string', got unconvertible type '[]interface {}'
2020/08/28 11:33:02 [ERROR] eval: *terraform.EvalApplyPost, err: failed to set 'flags': flags.modules: '' expected type 'string', got unconvertible type '[]interface {}'
2020/08/28 11:33:02 [ERROR] eval: *terraform.EvalSequence, err: failed to set 'flags': flags.modules: '' expected type 'string', got unconvertible type '[]interface {}'
2020-08-28T11:33:02.555-0400 [DEBUG] plugin: plugin process exited: path=.terraform/plugins/terraform.tucows.net/tucowsinc/openstackfwaasv2/0.2.0/darwin_amd64/terraform-provider-openstackfwaasv2 pid=89680
2020-08-28T11:33:02.555-0400 [DEBUG] plugin: plugin exited

Error: failed to set 'flags': flags.modules: '' expected type 'string', got unconvertible type '[]interface {}'

  on modules/consul_ent/main.tf line 7, in resource "consul_license" "hce":
   7: resource "consul_license" "hce" {

Expected Behavior

The consul license should be correctly applied and no error is returned from Terraform.

Actual Behavior

The consul license is applied successfully but Terraform errors.

Steps to Reproduce

Use a license with the largest feature set for Consul Enterprise v1.8.3+ent

  1. terraform apply

Important Factoids

Our enterprise Consul license has the highest tier of features.

References

N/A
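For reference, a minimal configuration exercising the resource would look roughly like this; the license file path is a placeholder, and the license contents are what previously triggered the flags read error:

resource "consul_license" "hce" {
  license = file("${path.module}/consul.hclic")
}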

closed time in 10 days

acornies

push event remilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha 843d16af52cf43bb088531f06bd58aa6bda6fd7e

Add test with constraint block


push time in 11 days

push event remilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha e256690b18cd9fc8a8d1121cc097802e7a873c8b

Fix handling of the spread attribute


push time in 11 days

push event remilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha 4ccac807c7c22914b23109407163aa27bba3a0f2

Add ForceNew to id, namespace and type


Rémi Lapeyre

commit sha 9218406fe061b639893bd972ab4db95f8d61097f

Set the default for a few attributes


push time in 11 days

push event remilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha 3bd9fe0ba6d3cacc06768287627e34a2607fc252

Fix diff for durations and defaults in template block


push time in 11 days

push event remilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha 57d5a8bc9c9ffcee1ea98703d9e8299dd917a132

Fix handling of consul_token and vault_token in resourceJobV2Read()


push time in 11 days

push event remilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha b6d6930123cf13b83225da7b300eac0c1c5e227b

Read port properly


push time in 11 days

push event remilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha 016b726ea84c29eec1d9f6a1802d44ecc36caa9e

Fix handling of the port block


push time in 11 days

Pull request review comment hashicorp/terraform-provider-nomad

Add a nomad_job_v2 resource

(review diff context, flattened and truncated in the source: the Go implementation of the nomad_job_v2 resource, including resourceJobV2Register, resourceJobV2Read, resourceJobV2Deregister, getJob and the helper functions that map the Terraform schema to the Nomad API structures)

Thanks!

remilapeyre

comment created time in 11 days


push event remilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha 3a7e53d8e1324da2e6f35443c1cce12ffd1388a2

Add a label attribute in the port block


push time in 11 days

push event remilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha 3bb15606299dd7a5a411be98590f643d1b52800f

Use provider region as default


push time in 11 days

push event hashicorp/terraform-provider-consul

Rémi Lapeyre

commit sha e1bf6240f038e81899dca11104cc5cb98a942d35

Make the warning in the consul_service documentation more visible (#229)


push time in 11 days

push event remilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha f8f59015818df03e9573033677902924933f81be

Add import support to nomad_job_v2


push time in 11 days

push event remilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha 9e1c753bab596e9c4336cf8da4a53e284a6ffb27

Fix type of local_bind_port and native


push time in 11 days

push event remilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha 686dd39ce48f1bb7cb2d70d8c8912d44b9b09c8b

Fix handling of ID and name


push time in 11 days

push event remilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha 92b92314c1684707c1f4d334740517604e0a09f4

Fix ID handling


push time in 11 days

push event remilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha 210f08bc93bc3406bc40492d4c4b8cce8f23f5cb

Fix typo


push time in 12 days

push event remilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha 8a6c01abb4e05a7cfac531e5faee584407f74c39

Add ID attribute


push time in 12 days

delete branch hashicorp/terraform-provider-nomad

delete branch: nomad-job-v2

delete time in 12 days

push event remilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha cf4fa6869854d8ea5f8405cef9dec3653d4cf844

Move the job definition in a block


push time in 12 days

create branch hashicorp/terraform-provider-nomad

branch: nomad-job-v2

created branch time in 12 days

push event remilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha 58278cc2657b3224782dfb8c7aeefe660b862521

Don't set consul_token and vault_token in resourceJobV2Read()


push time in 12 days

create branch remilapeyre/terraform-provider-consul

branch: external-service

created branch time in 12 days

Pull request review comment hashicorp/terraform-provider-nomad

Add a nomad_job_v2 resource

(review diff context, flattened and cut off in the source: the same nomad_job_v2 resource implementation as above, continuing through getNetworks, getServices, getLogConfig, getResources, getVault and the read helpers)
defaultValue) {+				ephemeralDisk = append(ephemeralDisk, disk)+			}+		}++		restart := make([]interface{}, 0)+		if g.RestartPolicy != nil {+			r := map[string]interface{}{+				"attempts": *g.RestartPolicy.Attempts,+				"delay":    g.RestartPolicy.Delay.String(),+				"interval": g.RestartPolicy.Interval.String(),+				"mode":     *g.RestartPolicy.Mode,+			}++			// The default depends on the job type+			var defaultValue map[string]interface{}+			_type := d.Get("type").(string)+			if _type == "service" {+				defaultValue = map[string]interface{}{+					"attempts": 2,+					"delay":    "15s",+					"interval": "30m0s",+					"mode":     "fail",+				}+			} else if _type == "batch" {+				defaultValue = map[string]interface{}{+					"attempts": 3,+					"delay":    "15s",+					"interval": "24h0m0s",+					"mode":     "fail",+				}+			} else if _type == "system" {+				defaultValue = map[string]interface{}{+					"attempts": 2,+					"delay":    "15s",+					"interval": "30m0s",+					"mode":     "fail",+				}

This is actually linked to https://github.com/hashicorp/terraform-plugin-sdk/issues/62. If the user did not set a `restart` policy, one is automatically set by Nomad and returned in the job spec. In this case Terraform gets a list of one element but does not expect any, and it perpetually creates a diff for this resource.

We had this issue in the Consul provider (https://github.com/hashicorp/terraform-provider-consul/pull/121 and https://github.com/hashicorp/terraform-provider-consul/issues/119) and the only solution I could find was to check whether the user set the attribute and whether the server sent back the default values for it, and to ignore the attribute in that case. It's cumbersome, but I'm not aware of a better solution at the moment. Does this make sense?
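
To illustrate, here is a minimal, self-contained sketch of that workaround; the readRestart helper and the default values are hypothetical and only mirror the pattern used in readGroups above:

package main

import (
	"fmt"
	"reflect"
)

// readRestart returns the value to store in state for the "restart" block.
// configured is what the user wrote in their configuration (an empty slice
// when the block is absent), server is what Nomad returned for the job.
func readRestart(configured []interface{}, server, defaults map[string]interface{}) []interface{} {
	// The user never wrote a restart block and Nomad returned the default
	// values: pretend the block does not exist so Terraform sees no diff.
	if len(configured) == 0 && reflect.DeepEqual(server, defaults) {
		return nil
	}
	return []interface{}{server}
}

func main() {
	defaults := map[string]interface{}{
		"attempts": 2,
		"delay":    "15s",
		"interval": "30m0s",
		"mode":     "fail",
	}

	// Nomad sent back the defaults but the user did not set the block:
	// nothing is written to state and no perpetual diff is created.
	fmt.Println(readRestart(nil, defaults, defaults))

	// The policy was changed out-of-band: the value is kept so Terraform
	// can report the drift.
	changed := map[string]interface{}{
		"attempts": 5,
		"delay":    "15s",
		"interval": "30m0s",
		"mode":     "fail",
	}
	fmt.Println(readRestart(nil, changed, defaults))
}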

remilapeyre

comment created time in 12 days

PullRequestReviewEvent

push eventremilapeyre/terraform-provider-nomad

James Rasell

commit sha b465ad7969c4ee73702c56a1b8109ce8dca48f91

resource/volume: add "mount_options" argument to volume resource.

view details

James Rasell

commit sha c4f6f0ae0b86323d4d6dc496d0aa9106b563b152

docs: update volume resource doc with mount_options argument.

view details

James Rasell

commit sha 7dee079299f3be5c5c2ba714eda2593c16c53deb

Merge pull request #147 from hashicorp/f-gh-145 resource/volume: add "mount_options" argument to volume resource.

view details

James Rasell

commit sha 84db7e5ebfef5656572a289c290e448c7d58ed25

changelog: add entry for GH-147

view details

James Rasell

commit sha d25bb448c7dcb233dc4f685ea6928c0a000798d6

Merge pull request #148 from hashicorp/f-147-changelog-entry changelog: add entry for GH-147

view details

Rémi Lapeyre

commit sha f48113f8658dbe8586d7c7fc9d552f5f6a17613c

Merge remote-tracking branch 'origin/master' into nomad-job-v2

view details

push time in 12 days

Pull request review commenthashicorp/terraform-provider-nomad

Add a nomad_job_v2 resource

+package nomad++import "github.com/hashicorp/terraform-plugin-sdk/helper/schema"++func getJobFields() map[string]*schema.Schema {+	return map[string]*schema.Schema{+		"namespace": {+			Type:     schema.TypeString,+			Default:  "default",+			Optional: true,+		},+		"priority": {+			Type:     schema.TypeInt,+			Default:  50,+			Optional: true,+		},+		"type": {+			Type:     schema.TypeString,+			Default:  "service",+			Optional: true,+		},+		"region": {+			Type:     schema.TypeString,+			Default:  "global",+			Optional: true,+		},+		"meta": {+			Type:     schema.TypeMap,+			Elem:     &schema.Schema{Type: schema.TypeString},+			Optional: true,+		},+		"all_at_once": {+			Type:     schema.TypeBool,+			Optional: true,+		},+		"datacenters": {+			Type:     schema.TypeList,+			Elem:     &schema.Schema{Type: schema.TypeString},+			Optional: true,+		},+		"name": {+			Type:     schema.TypeString,+			Optional: true,+		},+		"vault_token": {+			Type:     schema.TypeString,+			Optional: true,+		},+		"consul_token": {+			Type:     schema.TypeString,+			Optional: true,+		},+		"constraint": getConstraintFields(),+		"affinity":   getAffinityFields(),+		"spread":     getSpreadFields(),+		"group":      getGroupFields(),+		"parameterized": {+			Type:     schema.TypeList,+			Optional: true,+			MaxItems: 1,+			Elem: &schema.Resource{+				Schema: map[string]*schema.Schema{+					"meta_optional": {+						Type:     schema.TypeList,+						Elem:     &schema.Schema{Type: schema.TypeString},+						Optional: true,+					},+					"meta_required": {+						Type:     schema.TypeList,+						Elem:     &schema.Schema{Type: schema.TypeString},+						Optional: true,+					},+					"payload": {+						Type:     schema.TypeString,+						Optional: true,+					},+				},+			},+		},+		"periodic": {+			Type:     schema.TypeList,+			Optional: true,+			MaxItems: 1,+			Elem: &schema.Resource{+				Schema: map[string]*schema.Schema{+					"cron": {+						Type:     schema.TypeString,+						Optional: true,+					},+					"prohibit_overlap": {+						Type:     schema.TypeBool,+						Optional: true,+					},+					"time_zone": {+						Type:     schema.TypeString,+						Optional: true,+					},+				},+			},+		},+		"update": getUpdateFields(),+	}+}++func getSpreadFields() *schema.Schema {+	return &schema.Schema{+		Type:     schema.TypeList,+		Optional: true,+		Elem: &schema.Resource{+			Schema: map[string]*schema.Schema{+				"attribute": {+					Type:     schema.TypeString,+					Optional: true,+				},+				"weight": {+					Type:     schema.TypeInt,+					Optional: true,+				},+				"target": {+					Type:     schema.TypeList,+					Optional: true,+					Elem: &schema.Resource{+						Schema: map[string]*schema.Schema{+							"value": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"percent": {+								Type:     schema.TypeInt,+								Optional: true,+							},+						},+					},+				},+			},+		},+	}+}++func getConstraintFields() *schema.Schema {+	return &schema.Schema{+		Type:     schema.TypeList,+		Optional: true,+		Elem: &schema.Resource{+			Schema: map[string]*schema.Schema{+				"attribute": {+					Type:     schema.TypeString,+					Optional: true,+				},+				"operator": {+					Type:     schema.TypeString,+					Optional: true,+				},+				"value": {+					Type:     schema.TypeString,+					Optional: true,+				},+			},+		},+	}+}++func getAffinityFields() *schema.Schema {+	return &schema.Schema{+		Type:     schema.TypeList,+		Optional: true,+		Elem: &schema.Resource{+			Schema: map[string]*schema.Schema{+				"attribute": {+					Type:     
schema.TypeString,+					Optional: true,+				},+				"operator": {+					Type:     schema.TypeString,+					Optional: true,+				},+				"value": {+					Type:     schema.TypeString,+					Optional: true,+				},+				"weight": {+					Type:     schema.TypeInt,+					Optional: true,+				},+			},+		},+	}+}++func getVaultFields() *schema.Schema {+	return &schema.Schema{+		Type:     schema.TypeList,+		Optional: true,+		MaxItems: 1,+		Elem: &schema.Resource{+			Schema: map[string]*schema.Schema{+				"change_mode": {+					Type:     schema.TypeString,+					Default:  "restart",+					Optional: true,+				},+				"change_signal": {+					Type:     schema.TypeString,+					Optional: true,+				},+				"env": {+					Type:     schema.TypeBool,+					Optional: true,+				},+				"namespace": {+					Type:     schema.TypeString,+					Optional: true,+				},+				"policies": {+					Type:     schema.TypeList,+					Elem:     &schema.Schema{Type: schema.TypeString},+					Optional: true,+				},+			},+		},+	}+}++func getMigrateFields() *schema.Schema {+	return &schema.Schema{+		Type:     schema.TypeList,+		Optional: true,+		MaxItems: 1,+		Elem: &schema.Resource{+			Schema: map[string]*schema.Schema{+				"health_check": {+					Type:     schema.TypeString,+					Default:  "checks",+					Optional: true,+				},+				"healthy_deadline": {+					Type:     schema.TypeString,+					Default:  "5m0s",+					Optional: true,+				},+				"max_parallel": {+					Type:     schema.TypeInt,+					Default:  1,+					Optional: true,+				},+				"min_healthy_time": {+					Type:     schema.TypeString,+					Default:  "10s",+					Optional: true,+				},+			},+		},+	}+}++func getRescheduleFields() *schema.Schema {+	return &schema.Schema{+		Type:     schema.TypeList,+		Optional: true,+		MaxItems: 1,+		Elem: &schema.Resource{+			Schema: map[string]*schema.Schema{+				"attempts": {+					Type:     schema.TypeInt,+					Optional: true,+				},+				"interval": {+					Type:     schema.TypeString,+					Optional: true,+				},+				"delay": {+					Type:     schema.TypeString,+					Optional: true,+				},+				"delay_function": {+					Type:     schema.TypeString,+					Optional: true,+				},+				"max_delay": {+					Type:     schema.TypeString,+					Optional: true,+				},+				"unlimited": {+					Type:     schema.TypeBool,+					Optional: true,+				},+			},+		},+	}+}++func getResourcesFields() *schema.Schema {+	return &schema.Schema{+		Type:     schema.TypeList,+		Optional: true,+		MaxItems: 1,+		Elem: &schema.Resource{+			Schema: map[string]*schema.Schema{+				"cpu": {+					Type:     schema.TypeInt,+					Optional: true,+				},+				"memory": {+					Type:     schema.TypeInt,+					Optional: true,+				},+				"device": {+					Type:     schema.TypeList,+					Optional: true,+					Elem: &schema.Resource{+						Schema: map[string]*schema.Schema{+							"name": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"count": {+								Type:     schema.TypeInt,+								Optional: true,+							},+							"constraint": getConstraintFields(),+							"affinity":   getAffinityFields(),+						},+					},+				},+				"network": getNetworkFields(),+			},+		},+	}+}++func getLogsFields() *schema.Schema {+	return &schema.Schema{+		Type:     schema.TypeList,+		Optional: true,+		MaxItems: 1,+		Elem: &schema.Resource{+			Schema: map[string]*schema.Schema{+				"max_files": {+					Type:     schema.TypeInt,+					Optional: true,+				},+				"max_file_size": {+					Type:     schema.TypeInt,+					Optional: true,+				},+			},+		},+	}+}++func getServiceFields() *schema.Schema {+	return 
&schema.Schema{+		Type:     schema.TypeList,+		Optional: true,+		MaxItems: 1,+		Elem: &schema.Resource{+			Schema: map[string]*schema.Schema{+				"canary_meta": {+					Type:     schema.TypeMap,+					Elem:     &schema.Schema{Type: schema.TypeString},+					Optional: true,+				},+				"meta": {+					Type:     schema.TypeMap,+					Elem:     &schema.Schema{Type: schema.TypeString},+					Optional: true,+				},+				"name": {+					Type:     schema.TypeString,+					Optional: true,+				},+				"port": {+					Type:     schema.TypeString,+					Optional: true,+				},+				"tags": {+					Type:     schema.TypeList,+					Elem:     &schema.Schema{Type: schema.TypeString},+					Optional: true,+				},+				"canary_tags": {+					Type:     schema.TypeList,+					Elem:     &schema.Schema{Type: schema.TypeString},+					Optional: true,+				},+				"enable_tag_override": {+					Type:     schema.TypeBool,+					Optional: true,+				},+				"address_mode": {+					Type:     schema.TypeString,+					Optional: true,+				},+				"task": {+					Type:     schema.TypeString,+					Optional: true,+				},+				"check": {+					Type:     schema.TypeList,+					Optional: true,+					Elem: &schema.Resource{+						Schema: map[string]*schema.Schema{+							"address_mode": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"args": {+								Type:     schema.TypeList,+								Elem:     &schema.Schema{Type: schema.TypeString},+								Optional: true,+							},+							"command": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"grpc_service": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"grpc_use_tls": {+								Type:     schema.TypeBool,+								Optional: true,+							},+							"initial_status": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"success_before_passing": {+								Type:     schema.TypeInt,+								Optional: true,+							},+							"failures_before_critical": {+								Type:     schema.TypeInt,+								Optional: true,+							},+							"interval": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"method": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"name": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"path": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"expose": {+								Type:     schema.TypeBool,+								Optional: true,+							},+							"port": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"protocol": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"task": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"timeout": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"type": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"tls_skip_verify": {+								Type:     schema.TypeBool,+								Optional: true,+							},+							"check_restart": {+								Type:     schema.TypeList,+								Optional: true,+								MaxItems: 1,+								Elem: &schema.Resource{+									Schema: map[string]*schema.Schema{+										"limit": {+											Type:     schema.TypeInt,+											Optional: true,+										},+										"grace": {+											Type:     schema.TypeString,+											Optional: true,+										},+										"ignore_warnings": {+											Type:     schema.TypeBool,+											Optional: true,+										},+									},+								},+							},+	
					},+					},+				},+				"connect": {+					Type:     schema.TypeList,+					Optional: true,+					MaxItems: 1,+					Elem: &schema.Resource{+						Schema: map[string]*schema.Schema{+							"native": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"sidecar_service": {+								Type:     schema.TypeList,+								Optional: true,+								MaxItems: 1,+								Elem: &schema.Resource{+									Schema: map[string]*schema.Schema{+										"tags": {+											Type:     schema.TypeString,+											Optional: true,+										},+										"port": {+											Type:     schema.TypeString,+											Optional: true,+										},+										"proxy": {+											Type:     schema.TypeList,+											Optional: true,+											MaxItems: 1,+											Elem: &schema.Resource{+												Schema: map[string]*schema.Schema{+													"local_service_address": {+														Type:     schema.TypeString,+														Optional: true,+													},+													"local_service_port": {+														Type:     schema.TypeInt,+														Optional: true,+													},+													"config": {+														Type:     schema.TypeMap,+														Elem:     &schema.Schema{Type: schema.TypeString},+														Optional: true,+													},+													"upstreams": {+														Type:     schema.TypeList,+														Optional: true,+														MaxItems: 1,+														Elem: &schema.Resource{+															Schema: map[string]*schema.Schema{+																"destination_name": {+																	Type:     schema.TypeString,+																	Optional: true,+																},+																"local_bind_port": {+																	Type:     schema.TypeString,+																	Optional: true,+																},+															},+														},+													},+													"expose": {+														Type:     schema.TypeList,+														Optional: true,+														MaxItems: 1,+														Elem: &schema.Resource{+															Schema: map[string]*schema.Schema{+																"path": {+																	Type:     schema.TypeList,+																	Optional: true,+																	Elem: &schema.Resource{+																		Schema: map[string]*schema.Schema{+																			"path": {+																				Type:     schema.TypeString,+																				Optional: true,+																			},+																			"protocol": {+																				Type:     schema.TypeString,+																				Optional: true,+																			},+																			"local_path_port": {+																				Type:     schema.TypeInt,+																				Optional: true,+																			},+																			"listener_port": {+																				Type:     schema.TypeString,+																				Optional: true,+																			},+																		},+																	},+																},+															},+														},+													},+												},+											},+										},+									},+								},+							},+							"sidecar_task": {+								Type:     schema.TypeList,+								Optional: true,+								MaxItems: 1,+								Elem: &schema.Resource{+									Schema: map[string]*schema.Schema{+										"meta": {+											Type:     schema.TypeMap,+											Elem:     &schema.Schema{Type: schema.TypeString},+											Optional: true,+										},+										"name": {+											Type:     schema.TypeString,+											Optional: true,+										},+										"driver": {+											Type:     schema.TypeString,+											Optional: true,+										},+										"user": {+											Type:     
schema.TypeString,+											Optional: true,+										},+										"config": {+											Type:     schema.TypeMap,+											Elem:     &schema.Schema{Type: schema.TypeString},+											Optional: true,+										},+										"env": {+											Type:     schema.TypeMap,+											Elem:     &schema.Schema{Type: schema.TypeString},+											Optional: true,+										},+										"kill_timeout": {+											Type:     schema.TypeString,+											Optional: true,+										},+										"shutdown_delay": {+											Type:     schema.TypeString,+											Optional: true,+										},+										"kill_signal": {+											Type:     schema.TypeString,+											Optional: true,+										},+										"resources": getResourcesFields(),+										"logs":      getLogsFields(),+									},+								},+							},+						},+					},+				},+			},+		},+	}+}++func getGroupFields() *schema.Schema {+	return &schema.Schema{+		Type:     schema.TypeList,+		Required: true,+		MinItems: 1,+		Elem: &schema.Resource{+			Schema: map[string]*schema.Schema{+				"name": {+					Type:     schema.TypeString,+					Optional: true,+				},+				"meta": {+					Type:     schema.TypeMap,+					Elem:     &schema.Schema{Type: schema.TypeString},+					Optional: true,+				},+				"count": {+					Type:     schema.TypeInt,+					Optional: true,+				},+				"shutdown_delay": {+					Type:     schema.TypeString,+					Optional: true,+				},+				"stop_after_client_disconnect": {+					Type:     schema.TypeString,+					Optional: true,+				},+				"constraint": getConstraintFields(),+				"affinity":   getAffinityFields(),+				"spread":     getSpreadFields(),+				"ephemeral_disk": {+					Type:     schema.TypeList,+					Optional: true,+					MaxItems: 1,+					Elem: &schema.Resource{+						Schema: map[string]*schema.Schema{+							"migrate": {+								Type:     schema.TypeBool,+								Optional: true,+							},+							"size": {+								Type:     schema.TypeInt,+								Optional: true,+							},+							"sticky": {+								Type:     schema.TypeBool,+								Optional: true,+							},+						},+					},+				},+				"migrate":    getMigrateFields(),+				"network":    getNetworkFields(),+				"reschedule": getRescheduleFields(),+				"restart": {+					Type:     schema.TypeList,+					Optional: true,+					MaxItems: 1,+					Elem: &schema.Resource{+						Schema: map[string]*schema.Schema{+							"attempts": {+								Type:     schema.TypeInt,+								Optional: true,+							},+							"delay": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"interval": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"mode": {+								Type:     schema.TypeString,+								Optional: true,+							},+						},+					},+				},+				"service": getServiceFields(),+				"task":    getTaskFields(),+				"volume": {+					Type:     schema.TypeList,+					Optional: true,+					Elem: &schema.Resource{+						Schema: map[string]*schema.Schema{+							"name": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"type": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"source": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"read_only": {+								Type:     schema.TypeBool,+								Optional: true,+							},+						},+					},+				},+			},+		},+	}+}++func getTaskFields() *schema.Schema {+	return &schema.Schema{+		Type:     schema.TypeList,+		Required: true,+		MinItems: 1,+		Elem: &schema.Resource{+			Schema: map[string]*schema.Schema{+				"name": {+					
Type:     schema.TypeString,+					Optional: true,+				},+				"config": {+					Type:     schema.TypeString,+					Optional: true,+				},+				"kill_timeout": {+					Type:     schema.TypeString,+					Default:  "5s",+					Optional: true,+				},+				"shutdown_delay": {+					Type:     schema.TypeString,+					Default:  "0s",+					Optional: true,+				},+				"env": {+					Type:     schema.TypeMap,+					Elem:     &schema.Schema{Type: schema.TypeString},+					Optional: true,+				},+				"meta": {+					Type:     schema.TypeMap,+					Elem:     &schema.Schema{Type: schema.TypeString},+					Optional: true,+				},+				"driver": {+					Type:     schema.TypeString,+					Optional: true,+				},+				"kill_signal": {+					Type:     schema.TypeString,+					Optional: true,+				},+				"leader": {+					Type:     schema.TypeBool,+					Optional: true,+				},+				"user": {+					Type:     schema.TypeString,+					Optional: true,+				},+				"kind": {+					Type:     schema.TypeString,+					Optional: true,+				},+				"artifact": {+					Type:     schema.TypeList,+					Optional: true,+					Elem: &schema.Resource{+						Schema: map[string]*schema.Schema{+							"destination": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"mode": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"options": {+								Type:     schema.TypeMap,+								Elem:     &schema.Schema{Type: schema.TypeString},+								Optional: true,+							},+							"source": {+								Type:     schema.TypeString,+								Optional: true,+							},+						},+					},+				},+				"constraint": getConstraintFields(),+				"affinity":   getAffinityFields(),+				"dispatch_payload": {+					Type:     schema.TypeList,+					Optional: true,+					MaxItems: 1,+					Elem: &schema.Resource{+						Schema: map[string]*schema.Schema{+							"file": {+								Type:     schema.TypeString,+								Optional: true,+							},+						},+					},+				},+				"lifecycle": {+					Type:     schema.TypeList,+					Optional: true,+					MaxItems: 1,+					Elem: &schema.Resource{+						Schema: map[string]*schema.Schema{+							"hook": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"sidecar": {+								Type:     schema.TypeBool,+								Optional: true,+							},+						},+					},+				},+				"logs":      getLogsFields(),+				"resources": getResourcesFields(),+				"service":   getServiceFields(),+				"template": {+					Type:     schema.TypeList,+					Optional: true,+					Elem: &schema.Resource{+						Schema: map[string]*schema.Schema{+							"change_mode": {+								Type:     schema.TypeString,+								Default:  "restart",+								Optional: true,+							},+							"change_signal": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"data": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"destination": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"env": {+								Type:     schema.TypeBool,+								Optional: true,+							},+							"left_delimiter": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"perms": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"right_delimiter": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"source": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"splay": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"vault_grace": {+								Type:     
schema.TypeString,+								Optional: true,+							},+						},+					},+				},+				"vault": getVaultFields(),+				"volume_mount": {+					Type:     schema.TypeList,+					Optional: true,+					Elem: &schema.Resource{+						Schema: map[string]*schema.Schema{+							"volume": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"destination": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"read_only": {+								Type:     schema.TypeBool,+								Optional: true,+							},+						},+					},+				},+			},+		},+	}+}++func getNetworkFields() *schema.Schema {+	return &schema.Schema{+		Type:     schema.TypeList,+		Optional: true,+		MaxItems: 1,+		Elem: &schema.Resource{+			Schema: map[string]*schema.Schema{+				"mbits": {+					Type:     schema.TypeInt,+					Optional: true,+				},+				"mode": {+					Type:     schema.TypeString,+					Optional: true,+				},+				"port": {+					Type:     schema.TypeList,+					Optional: true,+					MaxItems: 1,+					Elem: &schema.Resource{+						Schema: map[string]*schema.Schema{+							"static": {+								Type:     schema.TypeInt,+								Optional: true,+							},+							"to": {+								Type:     schema.TypeString,+								Optional: true,+							},+							"host_network": {+								Type:     schema.TypeString,+								Optional: true,+							},+						},+					},+				},+				"dns": {+					Type:     schema.TypeList,+					Optional: true,+					MaxItems: 1,+					Elem: &schema.Resource{+						Schema: map[string]*schema.Schema{+							"servers": {+								Type:     schema.TypeList,+								Elem:     &schema.Schema{Type: schema.TypeString},+								Optional: true,+							},+							"searches": {+								Type:     schema.TypeList,+								Elem:     &schema.Schema{Type: schema.TypeString},+								Optional: true,+							},+							"options": {+								Type:     schema.TypeList,+								Elem:     &schema.Schema{Type: schema.TypeString},+								Optional: true,+							},+						},+					},+				},+			},+		},+	}+}++func getUpdateFields() *schema.Schema {+	return &schema.Schema{+		Type:     schema.TypeList,+		Optional: true,+		MaxItems: 1,+		Elem: &schema.Resource{+			Schema: map[string]*schema.Schema{+				"healthy_deadline": {+					Type:     schema.TypeString,+					Default:  "0s",+					Optional: true,+				},+				"min_healthy_time": {+					Type:     schema.TypeString,+					Default:  "0s",+					Optional: true,+				},+				"progress_deadline": {+					Type:     schema.TypeString,+					Default:  "0s",+					Optional: true,+				},+				"stagger": {+					Type:     schema.TypeString,+					Default:  "0s",+					Optional: true,+				},

Yes, I have not looked at what Nomad uses as defaults here. I think we may be obliged to use the same defaults as the Nomad API in the Terraform provider, though, for the reasons I gave in https://github.com/hashicorp/terraform-provider-nomad/pull/149#discussion_r487458932.

remilapeyre

comment created time in 12 days

PullRequestReviewEvent

Pull request review commenthashicorp/terraform-provider-nomad

Add a nomad_job_v2 resource

+package nomad++import "github.com/hashicorp/terraform-plugin-sdk/helper/schema"++func getJobFields() map[string]*schema.Schema {+	return map[string]*schema.Schema{+		"namespace": {+			Type:     schema.TypeString,+			Default:  "default",+			Optional: true,+		},+		"priority": {+			Type:     schema.TypeInt,+			Default:  50,

The Nomad API does not require a default for priority: if we don't set one, a default is used automatically, so we could set Computed: true here. The issue with doing that is that if an operator later manually changes the priority to something else, 75 for example, Terraform will be unable to detect that this change should be tracked and will not report a diff since the attribute is marked as Computed.

Does this make sense?
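
For illustration, a minimal sketch of the two options side by side, using the same helper/schema import this provider already uses; both field helpers are hypothetical names:

package nomad

import "github.com/hashicorp/terraform-plugin-sdk/helper/schema"

// priorityWithDefault tracks drift: a manually changed priority shows up as a diff.
func priorityWithDefault() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeInt,
		Optional: true,
		Default:  50,
	}
}

// priorityComputed lets Nomad pick the default, but a manual change to e.g. 75
// would never be reported because the attribute is Computed.
func priorityComputed() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeInt,
		Optional: true,
		Computed: true,
	}
}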

remilapeyre

comment created time in 12 days

PullRequestReviewEvent

push eventremilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha b4ac4d805646fb5abfc49f70a52455fd8adc3968

Don't reuse the isSet name

view details

push time in 12 days

push eventremilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha 2e780534615a5a1513dee15c738688229ef5508a

Don't reuse the duration name

view details

push time in 12 days

Pull request review commenthashicorp/terraform-provider-nomad

Add a nomad_job_v2 resource

+package nomad++import "github.com/hashicorp/terraform-plugin-sdk/helper/schema"++func getJobFields() map[string]*schema.Schema {+	return map[string]*schema.Schema{+		"namespace": {+			Type:     schema.TypeString,+			Default:  "default",+			Optional: true,+		},+		"priority": {+			Type:     schema.TypeInt,+			Default:  50,+			Optional: true,+		},+		"type": {+			Type:     schema.TypeString,+			Default:  "service",+			Optional: true,+		},+		"region": {+			Type:     schema.TypeString,+			Default:  "global",+			Optional: true,+		},+		"meta": {+			Type:     schema.TypeMap,+			Elem:     &schema.Schema{Type: schema.TypeString},+			Optional: true,+		},+		"all_at_once": {+			Type:     schema.TypeBool,+			Optional: true,+		},+		"datacenters": {+			Type:     schema.TypeList,+			Elem:     &schema.Schema{Type: schema.TypeString},+			Optional: true,+		},+		"name": {+			Type:     schema.TypeString,+			Optional: true,+		},+		"vault_token": {+			Type:     schema.TypeString,+			Optional: true,+		},+		"consul_token": {+			Type:     schema.TypeString,+			Optional: true,+		},+		"constraint": getConstraintFields(),+		"affinity":   getAffinityFields(),+		"spread":     getSpreadFields(),+		"group":      getGroupFields(),+		"parameterized": {+			Type:     schema.TypeList,

As the issue @lgfa29 linked explains, you can't have a complex resource in a schema.TypeMap attribute. If you try to do it, it may look like it's working, but all nested attributes get coerced to the map's element type. This is caught by https://github.com/bflad/tfproviderlint.
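
A short sketch of the difference, with illustrative field names rather than the exact ones from this PR:

package nomad

import "github.com/hashicorp/terraform-plugin-sdk/helper/schema"

// Flat string map: fine for something like "meta".
func metaField() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeMap,
		Elem:     &schema.Schema{Type: schema.TypeString},
		Optional: true,
	}
}

// Structured block: must be a list with a *schema.Resource element, not a map,
// otherwise the SDK silently coerces every nested value to the map's element type.
func parameterizedField() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeList,
		Optional: true,
		MaxItems: 1,
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				"payload": {Type: schema.TypeString, Optional: true},
				"meta_required": {
					Type:     schema.TypeList,
					Elem:     &schema.Schema{Type: schema.TypeString},
					Optional: true,
				},
			},
		},
	}
}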

remilapeyre

comment created time in 12 days

PullRequestReviewEvent

push eventremilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha 693fe1a7b2d30fd05171f2f79b7e0526191b1220

Add getListOfString() helper

view details

push time in 12 days

push eventremilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha af5b807cb6906dadac2de6ce383cff71a2b5148b

Use the helpers from funcs.go

view details

push time in 12 days

push eventremilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha a4a865aca07f1dc9705b254ce34f88dc1540ca0c

Rename resource_job_fields.go to resource_job_v2_fields.go

view details

push time in 12 days

push eventremilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha bf13fd2b6702a0db116868f80241c0d4735afb18

Simplify error checking in resourceJobV2Read() by using helper.StateWriter

view details

push time in 12 days

Pull request review commenthashicorp/terraform-provider-nomad

Add a nomad_job_v2 resource

+package nomad++import (+	"encoding/json"+	"fmt"+	"reflect"+	"strings"+	"time"++	"github.com/hashicorp/nomad/api"+	"github.com/hashicorp/terraform-plugin-sdk/helper/schema"+)++func resourceJobV2() *schema.Resource {+	return &schema.Resource{+		Schema: getJobFields(),+		Create: resourceJobV2Register,+		Update: resourceJobV2Register,+		Read:   resourceJobV2Read,+		Delete: resourceJobV2Deregister,+	}+}++func resourceJobV2Register(d *schema.ResourceData, meta interface{}) error {+	client := meta.(ProviderConfig).client+	job, err := getJob(d)+	if err != nil {+		return fmt.Errorf("Failed to get job definition: %v", err)+	}++	_, _, err = client.Jobs().Register(job, nil)+	if err != nil {+		return fmt.Errorf("Failed to create the job: %v", err)+	}++	d.SetId(d.Get("name").(string))++	return resourceJobV2Read(d, meta)+}++func resourceJobV2Read(d *schema.ResourceData, meta interface{}) error {+	client := meta.(ProviderConfig).client+	job, _, err := client.Jobs().Info(d.Id(), nil)+	if err != nil {+		if strings.Contains(err.Error(), "404") {+			d.SetId("")+			return nil+		}+		return fmt.Errorf("Failed to read the job: %v", err)+	}++	if err = d.Set("namespace", job.Namespace); err != nil {+		return fmt.Errorf("Failed to set 'namespace': %v", err)+	}+	if err = d.Set("priority", job.Priority); err != nil {+		return fmt.Errorf("Failed to set 'priority': %v", err)+	}+	if err = d.Set("type", job.Type); err != nil {+		return fmt.Errorf("Failed to set 'type': %v", err)+	}+	if err = d.Set("region", job.Region); err != nil {+		return fmt.Errorf("Failed to set 'region': %v", err)+	}+	if err = d.Set("meta", job.Meta); err != nil {+		return fmt.Errorf("Failed to set 'meta': %v", err)+	}+	if err = d.Set("all_at_once", job.AllAtOnce); err != nil {+		return fmt.Errorf("Failed to set 'all_at_once': %v", err)+	}+	if err = d.Set("datacenters", job.Datacenters); err != nil {+		return fmt.Errorf("Failed to set 'datacenters': %v", err)+	}+	if err = d.Set("name", job.Name); err != nil {+		return fmt.Errorf("Failed to set 'name': %v", err)+	}+	// if err = d.Set("vault_token", job.VaultToken); err != nil {+	// 	return fmt.Errorf("Failed to set 'vault_token': %v", err)+	// }+	// if err = d.Set("consul_token", job.ConsulToken); err != nil {+	// 	return fmt.Errorf("Failed to set 'consul_token': %v", err)+	// }+	if err = d.Set("constraint", readConstraints(job.Constraints)); err != nil {+		return fmt.Errorf("Failed to set 'constraint': %v", err)+	}+	if err = d.Set("affinity", readAffinities(job.Affinities)); err != nil {+		return fmt.Errorf("Failed to set 'affinity': %v", err)+	}+	if err = d.Set("spread", readSpreads(job.Spreads)); err != nil {+		return fmt.Errorf("Failed to set 'spread': %v", err)+	}

We had a few bugs in the Consul provider where we either had a typo in the attribute we wanted to set, were using the wrong type, or were doing something completely unsupported (e.g. d.Set("attr.0.foo", ...)), and adding those checks surfaced some of those bugs.

I like stateWriter a lot though, as the verbosity always bothered me; it's a great solution :) I stole it for the Consul provider too.
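
For reference, a rough approximation of the pattern, assuming a small wrapper that accumulates d.Set errors instead of forcing the caller to check each one; the actual helper.StateWriter in this provider may differ in its details:

package nomad

import (
	"fmt"

	"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
)

type stateWriter struct {
	d    *schema.ResourceData
	errs []error
}

// Set records any error from d.Set instead of returning it immediately.
func (sw *stateWriter) Set(key string, value interface{}) {
	if err := sw.d.Set(key, value); err != nil {
		sw.errs = append(sw.errs, fmt.Errorf("failed to set '%s': %v", key, err))
	}
}

// Error reports the accumulated errors once all attributes have been written.
func (sw *stateWriter) Error() error {
	if len(sw.errs) == 0 {
		return nil
	}
	return fmt.Errorf("%d errors while writing state: %v", len(sw.errs), sw.errs)
}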

remilapeyre

comment created time in 12 days

PullRequestReviewEvent

create barnchremilapeyre/terraform-provider-consul

branch : changelog-2.10

created branch time in 12 days

pull request commenthashicorp/terraform-provider-consul

Allow specifying TLS Certificates via PEM strings

Hi @lawliet89, thanks for the work you did to get this issue fixed. As I said in my previous message, the Terraform provider should behave like the CLI with regard to the environment variables. If you think you found a bug, please don't hesitate to report it in a new issue :)

lawliet89

comment created time in 12 days

push eventhashicorp/terraform-provider-consul

Yong Wen Chua

commit sha cd171cedbc5fcf6a839ce6edc68096574786445f

Allow specifying TLS Certificates via PEM strings in the provider configuration (#220) Closes https://github.com/hashicorp/terraform-provider-consul/issues/5

view details

push time in 12 days

issue closedhashicorp/terraform-provider-consul

Certificates as data for the Consul provider

This issue was originally opened by @cbarbour as hashicorp/terraform#8760. It was migrated here as part of the provider split. The original body of the issue is below.

<hr>

Description

It would be nice if the consul provider could accept certificates as data rather than as a file reference. This would simplify the process of generating self-signed certificates using the tls provider.

Currently, the only solution to both generate and consume the certificates from terraform is to write the certificates to disk using a local-exec provisioner, and then reference them from the provider.

Having a local-file provider would also simplify this process.

This approach also introduces a number of dependency ordering problems; the provider now depends on the TLS resources, but doesn't directly reference them, preventing Terraform from establishing dependency relationships between the provider and resources.

Code showing my current solution is provided below.

Terraform Version

0.7.3

Affected Provider

  • consul

Terraform Configuration Files

resource "tls_private_key" "consul_ca" {
  algorithm = "RSA"
}

resource "tls_self_signed_cert" "consul_ca" {
  is_ca_certificate = true
  key_algorithm = "RSA"
  private_key_pem = "${tls_private_key.consul_ca.private_key_pem}"
  validity_period_hours = 8760

  allowed_uses = [
    "cert_signing",
    "client_auth",
    "server_auth",
  ]

  subject {
    common_name = "${var.region}.compute.internal"
    organization = "${coalesce(var.cluster_name, var.subnet_id)}"
  }

  provisioner "local-exec" {
    command = "python -c 'print \"\"\"${self.cert_pem}\"\"\"' > ${path.root}/consul_ca_cert.pem"
  }

}

resource "tls_private_key" "consul_node" {
  algorithm = "RSA"

  provisioner "local-exec" {
    command = "python -c 'print \"\"\"${self.private_key_pem}\"\"\"' > ${path.root}/consul_private_key.pem"
  }
}

data "tls_cert_request" "consul_node" {
  key_algorithm = "RSA"
  private_key_pem = "${tls_private_key.consul_node.private_key_pem}"

  subject {
    common_name = "${var.region}.compute.internal"
    organization = "${coalesce(var.cluster_name, var.subnet_id)}"
  }
}

resource "tls_locally_signed_cert" "consul_node" {
  cert_request_pem = "${data.tls_cert_request.consul_node.cert_request_pem}"

  ca_cert_pem = "${tls_self_signed_cert.consul_ca.cert_pem}"
  ca_key_algorithm = "RSA"
  ca_private_key_pem = "${tls_private_key.consul_ca.private_key_pem}"
  validity_period_hours = 8760

  allowed_uses = [
    "client_auth",
    "server_auth",
  ]

  provisioner "local-exec" {
    command = "python -c 'print \"\"\"${self.cert_pem}\"\"\"' > ${path.root}/consul_cert.pem"
  }
}

data "template_file" "consul_tls" {
  template = "consul_ca_cert.pem"
  depends_on = [
    "tls_self_signed_cert.consul_ca",
    "tls_private_key.consul_node",
    "tls_locally_signed_cert.consul_node",
  ]
  vars {
    ca_file = "${path.root}/consul_ca_cert.pem"
    cert_file = "${path.root}/consul_cert.pem"
    key_file = "${path.root}/consul_private_key.pem"
  }
}

provider "consul" {
  address = "${aws_instance.master.public_ip}:8080"
  datacenter = "${coalesce(var.cluster_name, var.subnet_id)}"
  scheme = "https"

  # Use of templates is a nasty hack to force dependency ordering
  ca_file = "${data.template_file.consul_tls.vars.ca_file}"
  cert_file = "${data.template_file.consul_tls.vars.cert_file}"
  key_file = "${data.template_file.consul_tls.vars.key_file}"
}

References

  • GH3379

closed time in 12 days

hashibot

PR merged hashicorp/terraform-provider-consul

Allow specifying TLS Certificates via PEM strings documentation size/M
  • Instead of paths only
Running tool: /usr/bin/go test -timeout 30s -run ^TestResourceProvider_ConfigureTLSPem$

2020/08/18 17:20:21 [INFO] Initializing Consul client
2020/08/18 17:20:21 [INFO] Consul Client configured with address: 'demo.consul.io:80', scheme: 'https', datacenter: 'nyc3', insecure_https: 'false'
PASS
ok  	github.com/terraform-providers/terraform-provider-consul/consul	0.029s

+79 -11

5 comments

4 changed files

lawliet89

pr closed time in 12 days

pull request commenthashicorp/terraform-provider-consul

Allow specifying TLS Certificates via PEM strings

Side question: I notice that the environment variables supported here are different from what the Consul CLI supports. Is there a reason for that? Should we also support what the Consul CLI supports?

I'm not sure why there are some additional environment variables in the Terraform provider; they were added before I worked on the project. All environment variables supported by the Consul CLI should work because in https://github.com/hashicorp/terraform-provider-consul/blob/8f65aef6090652e665082ab4e0000923cf93ba17/consul/config.go#L27-L88 we call consulapi.DefaultConfig(), which reads the environment variables to set some attributes. It did happen that the defaults in the Terraform provider clashed with the environment variables and they were ignored in some cases, but we should have fixed those. Do you have one in particular in mind?
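
To make that concrete, here is a simplified sketch of the mechanism: the provider starts from consulapi.DefaultConfig(), which already honours the CLI's CONSUL_HTTP_ADDR, CONSUL_HTTP_TOKEN, and related variables, and then only overrides the fields the operator explicitly set. This is an illustration, not the provider's exact code:

package consul

import (
	consulapi "github.com/hashicorp/consul/api"
	"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
)

func clientConfig(d *schema.ResourceData) *consulapi.Config {
	// Environment variables are applied here, exactly like the Consul CLI.
	config := consulapi.DefaultConfig()

	// Explicit provider arguments win over the defaults and the environment.
	if address, ok := d.GetOk("address"); ok {
		config.Address = address.(string)
	}
	if token, ok := d.GetOk("token"); ok {
		config.Token = token.(string)
	}
	return config
}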

lawliet89

comment created time in 12 days

pull request commenthashicorp/terraform-provider-consul

Allow specifying TLS Certificates via PEM strings

@lawliet89, thanks for explaining. Can you try with the changes I pushed? I tested it and it should work but I would like to be sure.

lawliet89

comment created time in 16 days

push eventlawliet89/terraform-provider-consul

Rémi Lapeyre

commit sha 66d7dbf5b2c80bfe11d074e9bba3e914ff0231a7

Revert "Remove `ConflictsWith`" This reverts commit d697fdd63eaa4d8fee86f91f95e04042ba037d9e.

view details

Rémi Lapeyre

commit sha 0210c93a530c72e6f769760fdc219f24124de26d

Use nil as default in schema.EnvDefaultFunc()

view details

push time in 16 days

push eventhashicorp/terraform-provider-consul

Ryan Ooi

commit sha 8f65aef6090652e665082ab4e0000923cf93ba17

Fix typo in acl_token_secret_id datasource documentation (#226)

view details

push time in 16 days

PR merged hashicorp/terraform-provider-consul

Fix typo in acl_token_secret_id datasource documentation

It should be consul_acl_token_secret_id instead of consul_acl_token

Error: Reference to undeclared resource

  on consul_token.tf line 22, in output "consul_acl_token_secret_id":
  22:   value = data.consul_acl_token.read.encrypted_secret_id

A data resource "consul_acl_token" "read" has not been declared in the root
module.
+1 -1

0 comment

1 changed file

ryanooi

pr closed time in 16 days

PullRequestReviewEvent

push eventremilapeyre/terraform-provider-consul

Rémi Lapeyre

commit sha 3ffb1148987bccfcf260bb19c09df6a2228f8578

Remove flags from consul_license documentation

view details

push time in 16 days

PR opened hashicorp/terraform-provider-consul

Remove the flags attribute in consul_license

Since this just removes an attribute, it does not need a StateUpgrader.

Closes https://github.com/hashicorp/terraform-provider-consul/issues/223
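
For context, if the change had required a state migration, a StateUpgrader would look roughly like the sketch below; the function names and the old schema here are hypothetical:

package consul

import "github.com/hashicorp/terraform-plugin-sdk/helper/schema"

func resourceConsulLicense() *schema.Resource {
	return &schema.Resource{
		SchemaVersion: 1,
		StateUpgraders: []schema.StateUpgrader{
			{
				Version: 0,
				Type:    resourceConsulLicenseV0().CoreConfigSchema().ImpliedType(),
				Upgrade: upgradeConsulLicenseV0toV1,
			},
		},
		// Schema, Create, Read and Delete omitted for brevity.
	}
}

// resourceConsulLicenseV0 describes the old shape of the resource.
func resourceConsulLicenseV0() *schema.Resource {
	return &schema.Resource{
		Schema: map[string]*schema.Schema{
			"flags": {
				Type:     schema.TypeMap,
				Elem:     &schema.Schema{Type: schema.TypeString},
				Optional: true,
			},
		},
	}
}

// upgradeConsulLicenseV0toV1 drops the removed attribute from existing states.
func upgradeConsulLicenseV0toV1(rawState map[string]interface{}, meta interface{}) (map[string]interface{}, error) {
	delete(rawState, "flags")
	return rawState, nil
}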

+42 -42

0 comment

2 changed files

pr created time in 16 days

create barnchremilapeyre/terraform-provider-consul

branch : license-flags

created branch time in 16 days

pull request commenthashicorp/terraform-provider-consul

Allow specifying TLS Certificates via PEM strings

Hi @lawliet89, this looks mainly good to me, I just have one question: why did you remove the ConflictsWith you had set in the first place? It looks like they would actually be useful here.

lawliet89

comment created time in 17 days

push eventhashicorp/terraform-provider-consul

gfdsa

commit sha bc4fe43affc72dd6fc32fe3ca56b430d07d9b62e

Fix documentation of consul_node attributes (#224)

view details

push time in 17 days

PR merged hashicorp/terraform-provider-consul

Replace "service" with "node" in attributes description

Apparently this one was created from the consul_service doc and attributes were not corrected.

+2 -2

2 comments

1 changed file

gfdsa

pr closed time in 17 days

PullRequestReviewEvent

pull request commenthashicorp/terraform-provider-consul

Replace "service" with "node" in attributes description

Thanks for fixing this typo @gfdsa!

gfdsa

comment created time in 17 days

issue commenthashicorp/terraform-provider-consul

[Feature Request] Support Intention imports

Hi @daktari, this is indeed missing for consul_intention. I just opened a PR that will fix this, thanks for the feature request.

daktari

comment created time in 17 days

PR opened hashicorp/terraform-provider-consul

Add import support to consul_intention

Closes https://github.com/hashicorp/terraform-provider-consul/issues/222
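
For reference, a minimal sketch of how import support is typically wired up with the SDK's passthrough importer, so that `terraform import consul_intention.example <intention-id>` can populate the state; this may not match the exact implementation in the PR:

package consul

import "github.com/hashicorp/terraform-plugin-sdk/helper/schema"

func resourceConsulIntention() *schema.Resource {
	return &schema.Resource{
		Importer: &schema.ResourceImporter{
			// The ID given on the command line becomes d.Id(), and the
			// regular Read function fills in the rest of the attributes.
			State: schema.ImportStatePassthrough,
		},
		// Schema, Create, Read, Update and Delete omitted for brevity.
	}
}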

+18 -1

0 comment

3 changed files

pr created time in 17 days

create barnchremilapeyre/terraform-provider-consul

branch : consul-intention-import

created branch time in 17 days

issue commentvaexio/vaex

Vaex fails opening partitioned Parquet file

It seems like this is fixed on master now.

theophilechevalier

comment created time in 17 days

push eventremilapeyre/terraform-provider-nomad

Rémi Lapeyre

commit sha 309f00a71c0e0ff71bdfda1d217f771c23ba5fbd

Fix pronoun Co-authored-by: Luiz Aoqui <lgfa29@gmail.com>

view details

push time in 20 days

issue closedhashicorp/terraform-provider-hashicups

It is not clear how to use a local provider with Terraform 0.13

terraform init exits with an error when running the commands in the README because it tries to download the provider from the registry:

➜  src cd ~/go/src/github.com/hashicorp 
➜  hashicorp g clone https://github.com/hashicorp/terraform-provider-hashicups
Cloning into 'terraform-provider-hashicups'...
remote: Enumerating objects: 29, done.
remote: Counting objects: 100% (29/29), done.
remote: Compressing objects: 100% (28/28), done.
remote: Total 3073 (delta 8), reused 17 (delta 0), pack-reused 3044
Receiving objects: 100% (3073/3073), 71.44 MiB | 2.28 MiB/s, done.
Resolving deltas: 100% (640/640), done.
➜  hashicorp cd terraform
➜  terraform git:(consul-split-state) ../terraform-provider-hashicups/
➜  terraform-provider-hashicups git:(master) go build -o terraform-provider-hashicups
go: downloading github.com/hashicorp-demoapp/hashicups-client-go v0.0.0-20200508203820-4c67e90efb8e
go: downloading github.com/hashicorp/terraform-plugin-sdk/v2 v2.0.0-rc.2
go: downloading github.com/hashicorp/go-cty v1.4.1-0.20200414143053-d3edf31b6320
go: downloading github.com/mitchellh/go-testing-interface v1.0.4
➜  terraform-provider-hashicups git:(master) cd examples 
➜  examples git:(master) terraform init && terraform apply
Initializing modules...
- psl in coffee

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/hashicups...

Error: Failed to install provider

Error while installing hashicorp/hashicups: provider registry
registry.terraform.io does not have a provider named
registry.terraform.io/hashicorp/hashicups

It is not clear how to test a local build of a provider.

closed time in 22 days

remilapeyre

issue commentterraform-providers/terraform-provider-aws

force_new_deployment does not redeploy the ecs service

Hi folks 👋 Terraform needs some value to show a difference for the resource update function to run. If there are no updates, the update function (and the call to force new deployment) cannot be triggered.

We may be able to work around this by introducing a new "triggers" argument, similar to the API Gateway deployment resources, to allow operators to provide their own redeployment criteria outside a lack of ECS service configuration changes.

For the time being it would be possible to change the value of a tag to get this behavior, but Terraform tries to be smart in resourceAwsEcsServiceUpdate() and does not update the service when only the tags change. I think https://github.com/terraform-providers/terraform-provider-aws/blob/master/aws/resource_aws_ecs_service.go#L922-L929 could be changed to:

	conn := meta.(*AWSClient).ecsconn
	updateService := aws.Bool(d.Get("force_new_deployment").(bool))
	input := ecs.UpdateServiceInput{
		Cluster:            aws.String(d.Get("cluster").(string)),
		ForceNewDeployment: updateService,
		Service:            aws.String(d.Id()),
	}

so that users could change the value of a tag and get the service redeployed.

nbrys

comment created time in 23 days

create barnchremilapeyre/terraform-provider-nomad

branch : nomad-job-v2

created branch time in 24 days
