profile
Roger Peppe (rogpeppe), Heetch, Newcastle upon Tyne, UK

rogpeppe/gohack 705

Make temporary edits to your Go module dependencies

rogpeppe/godef 594

Print where symbols are defined in Go source code

mvdan/gogrep 436

Search for Go code using syntax trees

rogpeppe/go-internal 214

Selected Go-internal packages factored out from the standard library

myitcv/gobin 213

gobin is an experimental, module-aware command to install/run main packages.

rogpeppe/fastuuid 72

Fast generation of 192-bit UUIDs

dgryski/go-sequitur 33

Sequitur algorithm for recognizing lexical structure in strings

heetch/avro 30

Avro codec and code generation for Go

rogpeppe/godeps 27

Simple dependency locking tool for Go

cuelang/cuelang.org 19

Source for the https://cuelang.org site

issue opened dvyukov/go-fuzz

go-fuzz: Worker.crasherQueue can grow without bounds

I started investigating this because my go-fuzz process was OOM-killed within 30 minutes two times in a row.

After spending a while puzzling over the heap profile results (before I found the MemProfileRate=0 assignment), I acquired a reasonable profile and a few logs of what happened when it started using lots of memory.

The process in question does crash a lot (a restart rate of over 1/50), which is obviously a big contributor to the problem, but I believe that go-fuzz should continue without using arbitrary memory even in that situation.

Here's a screenshot of the heap profile from one such run (unfortunately I lost the profile from that run), where over 2GB of memory is kept around in Worker.crasherQueue:

[image: heap profile screenshot]

Although code inspection pointed towards crasherQueue as a possible culprit, I wasn't entirely sure that's what was happening until I reproduced the issue with a log statement added that showed the current size of the queue (including its associated data) whenever the queue slice is grown.

The final line that it printed before I dumped the heap profile was:

crasherQueue 0xc0000ca380 len 37171; space 465993686 (data 430941433; error 26019700; suppression 9032553)

That 466MB was 65% of the total current heap size of 713MB. In previous runs, I observed the total alloc size to rise to more than 8GB, although I wasn't able to obtain a heap profile at that time.

This problem does not always happen! It seems to depend very much on the current workload. It seems like it might be a starvation problem, because only one of the worker queues grows in this way.

Here's the whole log printed by that run: https://gist.github.com/rogpeppe/ad97d2c83834c24b0777a4009d71d120

The crasherQueue log lines were produced by this patch to the Worker.noteCrasher method:

+++ b/go-fuzz/worker.go
@@ -628,6 +628,15 @@ func (w *Worker) noteCrasher(data, output []byte, hanged bool) {
 	if _, ok := ro.suppressions[hash(supp)]; ok {
 		return
 	}
+	if len(w.crasherQueue) == cap(w.crasherQueue) {
+		totalData, totalError, totalSuppression := 0, 0, 0
+		for _, a := range w.crasherQueue {
+			totalData += len(a.Data)
+			totalError += len(a.Error)
+			totalSuppression += len(a.Suppression)
+		}
+		log.Printf("crasherQueue %p len %d; space %d (data %d; error %d; suppression %d)", w, len(w.crasherQueue), totalData+totalError+totalSuppression, totalData, totalError, totalSuppression)
+	}
 	w.crasherQueue = append(w.crasherQueue, NewCrasherArgs{
 		Data:        makeCopy(data),
 		Error:       output,

created time in 7 hours

create branch rogpeppe/go-fuzz-profiling

branch : alloc-issue

created branch time in 3 days

create branch rogpeppe/go-fuzz-profiling

branch : master

created branch time in 3 days

created repository rogpeppe/go-profile-issue

created time in 3 days

PR opened influxdata/telegraf

fix: plugins/parsers/influx: avoid ParseError.Error panic

A line ends at \n or \r\n, but not at a \r on its own. The ParseError.Error logic assumed that any carriage return terminates the line. This caused a crash when the buffer was big, there was a single \r character, and the error offset was after it: the code assumed that the offset is always before the end of the current line, which is a reasonable assumption only as long as the line-termination logic agrees between the Error method and the parsing code.
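By way of illustration, here's a minimal sketch of that line-boundary rule (a hypothetical helper, not the parser's actual code): only \n terminates a line, so a lone \r never does.

package parser

import "bytes"

// lineContaining returns the start and end offsets of the line containing
// offset in buf. A line ends at '\n' (a preceding '\r' belongs to "\r\n"),
// but a lone '\r' does not terminate it. Hypothetical helper for
// illustration only.
func lineContaining(buf []byte, offset int) (start, end int) {
	start = bytes.LastIndexByte(buf[:offset], '\n') + 1
	end = bytes.IndexByte(buf[offset:], '\n')
	if end == -1 {
		return start, len(buf)
	}
	return start, offset + end
}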

This bug was found by running go-fuzz.

Required for all PRs:

  • [x] Signed CLA.
  • [x] Has appropriate unit tests.
+17 -2

0 comment

2 changed files

pr created time in 4 days

create branch rogpeppe-contrib/telegraf

branch : rogpeppe-001-fix-parse-error

created branch time in 4 days

PR opened josharian/pct

add go.mod file
+7 -0

0 comment

2 changed files

pr created time in 4 days

create branch rogpeppe-contrib/pct

branch : rogpeppe-go.mod

created branch time in 4 days

push event rogpeppe-contrib/pct

Roger Peppe

commit sha 435cd0500bf3b60800bad6c20339610487821d94

add go.mod file

view details

push time in 4 days

issue opened josharian/pct

add go.mod file

You know it makes sense :)

created time in 4 days

issue opened cuelang/cue

cmd/cue: cue import unnecessarily escapes double quotes in multiline strings

commit ff306b79708eee06adaa73e6fb62449278727101

Running cue import on the following JSON produces CUE output that escapes the double-quotes even though there's no need to escape them unless there are three in a row:

{
	"x": "foo \n\"bar\""
}

This produces:

x: """
        foo 
        \"bar\"
        """

I'd expect to see:

x: """
        foo 
        "bar"
        """

created time in 5 days

issue opened cuelang/cue

cmd/cue: cue import formats multiline strings with space indentation

commit ff306b79708eee06adaa73e6fb62449278727101

Running cue export --out cue on the following JSON produces a multiline string that's indented with spaces rather than the usual tabs.

{
	"x": {
		"a": {
			"foo": "foo\nbar"
		},
		"b": 1
	}
}

I see this output:

x: {
	a: foo: """
        foo
        bar
        """
	b: 1
}

Note that foo and bar are preceded by spaces, but b: 1 is preceded by a single tab.

created time in 5 days

Pull request review comment influxdata/influxdb

feat: influxd upgrade command

[diff context: the new config-upgrade code in the upgrade package (upgradeConfig plus a configUpgrader type with backup, save, transform, parse, loadV1, init, convert, add and process methods, driven by rules from upgrade_config.properties)]

FWIW the transformation code can be a lot simpler if we assume that the new format is flat and that the rules specify all transformations:

I think something like this is sufficient:


func transform(m map[string]interface{}, rules map[string]string) map[string]interface{} {
	m1 := make(map[string]interface{})
	for old, new := range rules {
		val, ok := lookup(m, old)
		if ok {
			m1[new] = val
		}
	}
	return m1
}

func lookup(x map[string]interface{}, path string) (interface{}, bool) {
	for {
		elem := path
		rest := ""
		if i := strings.Index(path, "."); i != -1 {
			elem, rest = path[0:i], path[i+1:]
		}
		val, ok := x[elem]
		if rest == "" {
			return val, ok
		}
		child, ok := val.(map[string]interface{})
		if !ok {
			log.Printf("wrong type at %v", elem)
			return nil, false
		}
		path, x = rest, child
	}
}
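
For context, a hedged usage sketch (these rule keys are made up for illustration, though they mirror the v1/v2 option names used in the tests elsewhere in this review):

	rules := map[string]string{
		"http.bind-address":    "http-bind-address",
		"data.wal-fsync-delay": "storage-wal-fsync-delay",
	}
	v1conf := map[string]interface{}{
		"http": map[string]interface{}{"bind-address": ":8086"},
		"data": map[string]interface{}{"wal-fsync-delay": "100s"},
	}
	v2conf := transform(v1conf, rules)
	// v2conf now holds the flat 2.x keys:
	//   "http-bind-address":       ":8086"
	//   "storage-wal-fsync-delay": "100s"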
vlastahajek

comment created time in 5 days


issue closed heetch/avro

go2avro: best practice for interface{}?

Hi,

Some of my Go structures have properties typed as interface{}, and when generating I end up in this error case: https://github.com/heetch/avro/blob/master/gotype.go#L265-L267

And I'm not sure what you mean by this comment:

// TODO fill in from the writer schema.

How the "writer schema" could help? Did I miss something 🤔 ?

I'm interested to know what you recommend in this case. Should I use a basic string instead of interface{} and marshal everything as inline JSON beforehand? Or should I take all possible value structures (possibly hundreds...) and create a specific structure with each one as an optional property?

Thank you,

closed time in 6 days

sneko

issue comment heetch/avro

go2avro: best practice for interface{}?

This is related to issue #34.

TL;DR: avoid using go2avro and define an actual Avro schema and generate the Go types from that.

The problem is that Avro doesn't really have a concept of "anything". I had an idea that it might be possible, when we encounter an interface{} value (i.e. we don't have any notion of what's expected by the reader), to fill in that piece with the appropriate part of the schema that the data was originally written with.

I started implementing it, but ran into difficulties.

From my notes at the time:

Hmm, potential serious problem: say we've got a schema like this:
 {
	type: "record"
	name: "List"
	fields: [{
		name: "Item"
		type: "int"
	}, {
		name: "Next"
		type: ["null", "List"]
		default: null
	}]
 }

 We unmarshal into a Go type like this:

	type List struct {
		Item int
		Next interface{}
	}

 This means that we'll be using the same part of the VM program
 to unmarshal into different types, thereby invalidating one of
 our core assumptions.

I think it should be possible to implement this, but I'd need to implement the VM compilation directly rather than leaning on gogen-avro. That's part of the eventual plan anyway, but I'm afraid it won't happen very soon, as I haven't got the time to work on new features for this project currently, and that's a significant refactor.

I'm going to close this in favour of #34 for now.

sneko

comment created time in 6 days

push event influxdata/influxdb

Roger Peppe

commit sha c25d0dde65731948d3c940d9d86591152e446e96

fix: chronograf/organizations: avoid nil context with WithValue

view details

Roger Peppe

commit sha 62af80f41886eade50b3734e6774c51c7ccd91fd

Merge pull request #19597 from influxdata/rogpeppe-004-avoid-nil-context
fix: chronograf/organizations: avoid nil context with WithValue

view details

push time in 6 days

PR merged influxdata/influxdb

fix: chronograf/organizations: avoid nil context with WithValue

Under later Go versions, the tests panic because it's not OK to pass a nil Context to context.WithValue.


+19 -25

0 comment

4 changed files

rogpeppe

pr closed time in 6 days

push event influxdata/influxdb

Roger Peppe

commit sha 3803bd8e261cf248f6b8c8bc6ffa1719769c3b04

fix: storage: close PointsWriter when Engine is closed
The PointsWriter has a Close method which seems like it should be called when the Engine is shut down.

view details

Roger Peppe

commit sha 1bc54c7bfd7cafe7059725993a3b2301146beda7

Merge pull request #19598 from influxdata/rogpeppe-005-storage-close-pointswriter
fix: storage: close PointsWriter when Engine is closed

view details

push time in 6 days

PR merged influxdata/influxdb

fix: storage: close PointsWriter when Engine is closed

The PointsWriter has a Close method which seems like it should be called when the Engine is shut down.


+10 -133

0 comment

3 changed files

rogpeppe

pr closed time in 6 days

Pull request review comment influxdata/influxdb

fix: storage: close PointsWriter when Engine is closed

 func (e *Engine) Close() error {
 		retErr = multierror.Append(retErr, fmt.Errorf("error closing TSDB store: %w", err))
 	}
+	if err := e.pointsWriter.Close(); err != nil {
+		retErr = multierror.Append(retErr, fmt.Errorf("error closing points writer: %w", err))

done

rogpeppe

comment created time in 6 days


push event influxdata/influxdb

Roger Peppe

commit sha 3803bd8e261cf248f6b8c8bc6ffa1719769c3b04

fix: storage: close PointsWriter when Engine is closed
The PointsWriter has a Close method which seems like it should be called when the Engine is shut down.

view details

push time in 6 days

Pull request review comment influxdata/influxdb

feat: influxd upgrade command

[diff context: additions to pkg/fs — a DiskStatus type plus DirSize, CopyFile and CopyDir helpers; the comment below is on the CopyDir signature, which takes dirRenameFunc, dirFilterFunc and fileFilterFunc callbacks]

On reflection, I see that the filtering functionality isn't going to be available in an off-the-shelf package. But then it's probably quite specific to the upgrade functionality, and would probably be better kept local to that functionality rather than exported from a pkg/fs package that seems intended for general use.

vlastahajek

comment created time in 6 days


Pull request review comment influxdata/influxdb

feat: influxd upgrade command

[diff context: the upgrade package's NewCommand flag definitions, runUpgradeE, upgradeDatabases and setupAdmin; the comment below is on setupAdmin's "if canOnboard {" branch]

invert the condition and return early?
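
For concreteness, a sketch of the suggested shape (the error message here is made up, not taken from the PR):

	canOnboard, err := v2.onboardSvc.IsOnboarding(ctx)
	if err != nil {
		return nil, err
	}
	if !canOnboard {
		// hypothetical message; return whatever the current else branch reports
		return nil, errors.New("onboarding has already been completed")
	}
	// ... onboard the initial admin user ...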

vlastahajek

comment created time in 6 days

Pull request review comment influxdata/influxdb

feat: influxd upgrade command

[diff context: the config-upgrade code again; the comment below is on the process method's branch that logs entries with no matching rule as ignored]

Do we really want entries without an explicit transformation to be silently discarded? If we do, then is there any point in having the "explicitly removed" code path?

vlastahajek

comment created time in 6 days

Pull request review comment influxdata/influxdb

feat: influxd upgrade command

[diff context: the config-upgrade test — testConfigV1/testConfigV2 fixtures and TestMinimalConfigUpgrade, which begins by assigning the package-level options.target.boltPath and options.target.enginePath]

These side-effects seem a bit icky to me. At the least, the test should return the global state to its previous value at the end of the test. I wonder if it would be possible to do a more end-to-end test. The testscript package can be useful for that.
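
A rough sketch of what that could look like, assuming a testdata directory of txtar scripts (all names here are hypothetical):

package upgrade_test

import (
	"testing"

	"github.com/rogpeppe/go-internal/testscript"
)

func TestUpgradeScripts(t *testing.T) {
	// Each testdata/*.txtar script can set up a 1.x config file, run the
	// upgrade command, and cmp the generated 2.x config against a golden
	// file, without mutating package-level globals.
	testscript.Run(t, testscript.Params{Dir: "testdata"})
}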

vlastahajek

comment created time in 6 days

Pull request review comment influxdata/influxdb

feat: influxd upgrade command

[diff context: the upgrade package's configV1, optionsV1 (with checkDirs) and optionsV2 types; the comment below is on optionsV2's "retention string" field]
	retention          time.Duration

Then you won't have to go to the trouble of parsing it yourself.

vlastahajek

comment created time in 6 days

Pull request review comment influxdata/influxdb

feat: influxd upgrade command

[diff context: the config-upgrade code; the comment below is on upgradeConfigCommand's RunE, which does "log, _ := zap.NewProduction()"]

As we've got an error return here, maybe return an error if zap.NewProduction fails (otherwise you might risk a hard-to-understand panic further down the code)?
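
i.e. something like:

	RunE: func(cmd *cobra.Command, args []string) error {
		log, err := zap.NewProduction()
		if err != nil {
			return err
		}
		_, err = upgradeConfig(options.source.configFile, log)
		return err
	},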

vlastahajek

comment created time in 6 days

Pull request review comment influxdata/influxdb

feat: influxd upgrade command

[diff context: the pkg/fs additions again (DiskStatus, DirSize, CopyFile, CopyDir); the comment below is on the CopyDir helper]

The new code in this package doesn't seem to have any tests and is somewhat non-trivial. I've seen and written too many recursive directory copy functions in Go... Is there really no off-the-shelf directory copy function we can use? Failing that, it should be tested (and arguably put inside an internal directory so we don't have to worry about external consumers that might start importing it).

vlastahajek

comment created time in 6 days

Pull request review comment influxdata/influxdb

feat: influxd upgrade command

 func influxDirV1() (string, error) {
 
 	return dir, nil
 }
+
+// homeOrAnyDir retrieves user's home directory, current working one or just none.
+func homeOrAnyDir() string {
+	var dir string
+	u, err := user.Current()
+	if err == nil {
+		dir = u.HomeDir
+	} else if home := os.Getenv("HOME"); home != "" {
+		dir = home
+	} else if home := os.Getenv("USERPROFILE"); home != "" {
+		dir = home
+	} else {
+		wd, err := os.Getwd()
+		if err != nil {
+			dir = ""
+		} else {
+			dir = wd
+		}
+	}
+
+	return dir
+}
+
+func rawDurationToTimeDuration(raw string) (time.Duration, error) {

I don't see why this is needed when we already have time.ParseDuration and the flag package already directly supports time.Duration-typed flags.
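
For example, with the standard library alone (standalone sketch, not the influxdb cli package):

package main

import (
	"flag"
	"fmt"
)

func main() {
	// flag.Duration parses values like "30m" or "72h" via time.ParseDuration.
	retention := flag.Duration("retention", 0, "duration the bucket will retain data; 0 is infinite")
	flag.Parse()
	fmt.Println("retention:", *retention)
}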

vlastahajek

comment created time in 6 days

Pull request review comment influxdata/influxdb

feat: influxd upgrade command

[diff context: the config-upgrade test fixtures and the single TestMinimalConfigUpgrade test]

The transformation logic is complex enough that I think it deserves quite a few test cases, not just one.
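
For instance, a table-driven shape would make extra cases cheap to add (a sketch only; the case names and extra fixtures are hypothetical):

	func TestConfigUpgrade(t *testing.T) {
		cases := []struct {
			name string
			v1   string // 1.x config input
			v2   string // expected 2.x config
		}{
			{name: "minimal", v1: testConfigV1, v2: testConfigV2},
			// more cases: missing sections, array tables, unsupported options, ...
		}
		for _, c := range cases {
			c := c
			t.Run(c.name, func(t *testing.T) {
				// write c.v1 to a temp file, run upgradeConfig on it,
				// decode the generated .toml and compare with c.v2.
			})
		}
	}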

vlastahajek

comment created time in 6 days

Pull request review comment influxdata/influxdb

feat: influxd upgrade command

[diff context: generateSecurityScript, which locates the influx binary next to the current executable, checks the target buckets via a securityScriptHelper, and writes the security upgrade script to stdout or to options.target.securityScriptPath]

ISTM that the rest of this function might be more readably expressed as a template (using the text/template package) - one template for Windows and one for Linux. It would mean some duplication, so YMMV, but I think there's some merit in being able to read the text more-or-less straight as a script.

Here's how the Windows variant might look as a template (untested). I've assumed some of the more involved parts are factored out into template functions (e.g. generatePassword, accessArgs) so they can be shared between the templates.

{{- range $u := $.Users}}
{{- if $u.Admin}}
REM user {{$u.Name}} is 1.x admin, will not be upgraded automatically
set {{userVar $u.Name}}=no
{{- else if not $u.Privileges}}
REM user {{$u.Name}} has no 1.x privileges, will not be upgraded automatically
set {{userVar $u.Name}}=no
{{- else}}
REM user {{$u.Name}}
set {{userVar $u.Name}}=yes
{{- end}}
{{- end}}

REM
REM VARIABLES
REM

set INFLUX="{{$.Exe}}"
set INFLUX_TOKEN={{$.TargetToken}}

{{- if .IsFileOutput}}
set PATH=%PATH%;C:\WINDOWS\system32\wbem
for /f %%x in ('wmic os get localdatetime ^| findstr /b [0-9]') do @set X=%%x && set LOG=%~dpn0.%X:~0,8%-%X:~8,6%.log
{{end}}

REM
REM INDIVIDUAL USER UPGRADES
REM

{{range $u := $.Users}}
IF /I "%{{userVar $u.Name}}%" == "yes" (
    {{- if $u.Admin}}
    echo User {{$u.Name}} is 1.x admin and should be added and invited manually to 2.x if needed
    {{- else}}
    {{- $password := generatePassword}}
    echo Creating user {{$u.Name}} with password {{$password}} in {{$.TargetOrg}}
    {{$.Exe}} user create --name={{$u.Name}} --password={{$password}} --org={{$.TargetOrg}}{{with $args := accessArgs $u.Privileges}} ^
    && echo Creating authorization token... ^
    && {{$.Exe}} auth create --user={{$u.Name}} --org={{$.TargetOrg}} --description="{{$u.Name}}'s token" {{$args}}{{end}}
    {{- end}}
)
{{- end}}
{{- if $.IsFileOutput}}
echo.
echo Output saved to %LOG%
{{- end}}
vlastahajek

comment created time in 6 days

Pull request review commentinfluxdata/influxdb

feat: influxd upgrade command

+package upgrade++import (+	"github.com/BurntSushi/toml"+	"go.uber.org/zap"+	"io/ioutil"+	"path/filepath"+	"reflect"+	"strings"+	"testing"+)++var testConfigV1 = `### Welcome to the InfluxDB configuration file.++# Change this option to true to disable reporting.+reporting-disabled = false++# Bind address to use for the RPC service for backup and restore.+bind-address = "127.0.0.1:8088"++[meta]+  dir = "/db/influxdb/meta"++[data]+  dir = "/db/influxdb/data"+  wal-dir = "/db/influxdb/wal"+  wal-fsync-delay = "100s"+  index-version = "inmem"++[coordinator]+  max-select-point = 0++[retention]+  check-interval = "30m"++[shard-precreation]+  check-interval = "5m"++[monitor]+  store-enabled = true++[http]+  flux-enabled = false+  bind-address = ":8086"+  https-certificate = "/etc/ssl/influxdb.pem"+  https-private-key = ""++[logging]+  level = "debug"++[subscriber]++[[graphite]]+  enabled = true+  database = "graphite"++[[collectd]]++[[opentsdb]]++[[udp]]+  enabled = true+  bind-address = ":8089"++[continuous_queries]+  query-stats-enabled = true++[tls]+  min-version = "tls1.2"+  max-version = "tls1.3"+`++var testConfigV2 = `+reporting-disabled = false+bolt-path = "/db/.influxdbv2/influxd.bolt"+engine-path = "/db/.influxdbv2/engine"+http-bind-address = ":8086"+influxql-max-select-point = 0+log-level = "debug"+storage-retention-check-interval = "30m"+storage-shard-precreator-check-interval = "5m"+storage-wal-fsync-delay = "100s"+tls-cert = "/etc/ssl/influxdb.pem"+tls-key = ""+`++func TestMinimalConfigUpgrade(t *testing.T) {+	options.target.boltPath = "/db/.influxdbv2/influxd.bolt"+	options.target.enginePath = "/db/.influxdbv2/engine"+	f, err := ioutil.TempFile("", "influxdb-*.conf")+	if err != nil {+		t.Fatal(err)+	}+	_, err = f.WriteString(testConfigV1)+	if err != nil {+		t.Fatal(err)+	}+	if err = f.Close(); err != nil {+		t.Fatal(err)+	}+	configFile := f.Name()+	_, err = upgradeConfig(configFile, zap.NewNop())+	if err != nil {+		t.Fatal(err)+	}+	ext := filepath.Ext(configFile)+	configFileV2 := strings.TrimSuffix(configFile, ext) + ".toml"+	var actual, expected map[string]interface{}+	if _, err = toml.Decode(testConfigV2, &expected); err != nil {+		t.Fatal(err)+	}+	if _, err = toml.DecodeFile(configFileV2, &actual); err != nil {+		t.Fatal(err)+	}+	if len(expected) != len(actual) {

Why not just use DeepEqual on the entire value? (you could use cmp.Diff if you wanted more readable output)
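
For example, in place of the field-by-field comparison (a sketch; assumes github.com/google/go-cmp is available as a test dependency):

	if diff := cmp.Diff(expected, actual); diff != "" {
		t.Fatalf("unexpected upgraded config (-want +got):\n%s", diff)
	}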

vlastahajek

comment created time in 6 days

Pull request review commentinfluxdata/influxdb

feat: influxd upgrade command

+package upgrade++// This file contains code for v1 config file upgrade to v2.+// It supports basic restructuring (caveat: changing structure inside existing+// array or to new array may not work, but it is not required).+// Upgrade rules are in `upgrade_config.properties` file.++import (+	"bufio"+	"bytes"+	"errors"+	"fmt"+	"io"+	"io/ioutil"+	"os"+	"path/filepath"+	"strings"++	"github.com/BurntSushi/toml"+	"github.com/spf13/cobra"+	"go.uber.org/zap"+	"golang.org/x/text/encoding/unicode"+	"golang.org/x/text/transform"+)++// TODO for testing purposes+var upgradeConfigCommand = &cobra.Command{+	Use:   "upgrade-config",+	Short: "Upgrade InfluxDB 1.x config to 2.x",+	RunE: func(cmd *cobra.Command, args []string) error {+		log, _ := zap.NewProduction()+		_, err := upgradeConfig(options.source.configFile, log)+		return err+	},+}++// TODO for testing purposes+func init() {+	flags := upgradeConfigCommand.Flags()+	flags.StringVar(&options.source.configFile, "config-file", getV1ConfigPath(), "Path to 1.x config file")+}++// Backups existing config file and updates it with upgraded config.+func upgradeConfig(configFile string, log *zap.Logger) (*configV1, error) {+	configUpgradeProperties, err := AssetString("upgrade_config.properties")+	if err != nil {+		return nil, err+	}+	ext := filepath.Ext(configFile)+	cu := newConfigUpgrader(configUpgradeProperties, log)+	v1c, err := cu.loadV1(configFile)+	if err != nil {+		return nil, err+	}+	if ext == ".toml" { // if v1 config already has ".toml" extension, backup is needed+		err = cu.backup(configFile)+		if err != nil {+			return nil, err+		}+	}+	c, err := cu.transform(configFile)+	if err != nil {+		return nil, err+	}+	applyConfigOverrides(c)+	configFileV2 := strings.TrimSuffix(configFile, ext) + ".toml"+	err = cu.save(c, configFileV2)+	if err != nil {+		return nil, err+	}+	log.Info("config file upgraded",+		zap.String("1.x config", configFile),+		zap.String("2.x config", configFileV2))++	return v1c, nil+}++// Returns default 1.x config file path.+func getV1ConfigPath() string {+	if envVar := os.Getenv("INFLUXDB_CONFIG_PATH"); envVar != "" {+		return envVar+	}+	for _, path := range []string{+		os.ExpandEnv("${HOME}/.influxdb/influxdb.conf"),+		"/etc/influxdb/influxdb.conf",+	} {+		if _, err := os.Stat(path); err == nil {+			return path+		}+	}++	return ""+}++// private function used by `upgrade-config` command+func applyConfigOverrides(c config) {+	if options.target.enginePath != "" {+		c["engine-path"] = options.target.enginePath+	}+	if options.target.boltPath != "" {+		c["bolt-path"] = options.target.boltPath+	}+}++type properties map[string]string   // private type used by `upgrade-config` command+type table = map[string]interface{} // private type used by `upgrade-config` command+type config = table                 // private type used by `upgrade-config` command++// private type used by `upgrade-config` command+type configUpgrader struct {+	rules  properties+	config config+	log    *zap.Logger+}++// private function used by `upgrade-config` command+func newConfigUpgrader(rules string, log *zap.Logger) *configUpgrader {+	cm := &configUpgrader{+		log: log,+	}+	cm.init(rules)+	return cm+}++func (cu *configUpgrader) backup(path string) error {+	source, err := os.Open(path)+	if err != nil {+		return err+	}+	defer source.Close()++	backupFile := path + "~"+	if _, err := os.Stat(backupFile); !os.IsNotExist(err) {+		errMsg := fmt.Sprintf("upgrade: config file backup %s already exist", backupFile)+		return errors.New(errMsg)+	}++	destination, err := 
os.Create(backupFile)+	if err != nil {+		return err+	}+	defer destination.Close()+	_, err = io.Copy(destination, source)++	return err+}++func (cu *configUpgrader) save(c config, path string) error {+	buf := new(bytes.Buffer)+	if err := toml.NewEncoder(buf).Encode(&c); err != nil {+		return err+	}++	return ioutil.WriteFile(path, buf.Bytes(), 0666)+}++func (cu *configUpgrader) transform(path string) (config, error) {+	c, err := cu.parse(path)+	if err != nil {+		return nil, err+	}+	cu.config = make(config)+	cu.process(c, nil, -1)+	return cu.config, nil+}++func (cu *configUpgrader) parse(path string) (config, error) {+	bs, err := ioutil.ReadFile(path)+	if err != nil {+		return nil, err+	}++	// Handle any potential Byte-Order-Marks that may be in the config file.+	// This is for Windows compatibility only.+	// See https://github.com/influxdata/telegraf/issues/1378 and+	// https://github.com/influxdata/influxdb/issues/8965.+	bom := unicode.BOMOverride(transform.Nop)+	bs, _, err = transform.Bytes(bom, bs)+	if err != nil {+		return nil, err+	}++	var c config+	_, err = toml.Decode(string(bs), &c)+	return c, err+}++func (cu *configUpgrader) loadV1(path string) (*configV1, error) {+	bs, err := ioutil.ReadFile(path)+	if err != nil {+		return nil, err+	}++	// Handle any potential Byte-Order-Marks that may be in the config file.+	// This is for Windows compatibility only.+	// See https://github.com/influxdata/telegraf/issues/1378 and+	// https://github.com/influxdata/influxdb/issues/8965.+	bom := unicode.BOMOverride(transform.Nop)+	bs, _, err = transform.Bytes(bom, bs)+	if err != nil {+		return nil, err+	}++	var c configV1+	_, err = toml.Decode(string(bs), &c)+	return &c, err+}++func (cu *configUpgrader) init(rules string) {+	cu.rules = make(properties)+	scanner := bufio.NewScanner(strings.NewReader(rules))+	for scanner.Scan() {+		line := strings.Trim(scanner.Text(), " ")
		line := strings.TrimSpace(scanner.Text())
vlastahajek

comment created time in 6 days

Pull request review commentinfluxdata/influxdb

feat: influxd upgrade command

 var options = struct {  	// flags for target InfluxDB 	target optionsV2++	// verbose output+	verbose bool++	logPath string }{} -func init() {-	flags := Command.Flags()+func NewCommand() *cobra.Command {  	// source flags 	v1dir, err := influxDirV1() 	if err != nil { 		panic("error fetching default InfluxDB 1.x dir: " + err.Error()) 	} -	flags.StringVar(&options.source.metaDir, "v1-meta-dir", filepath.Join(v1dir, "meta"), "Path to 1.x meta.db directory")- 	// target flags 	v2dir, err := fs.InfluxDir() 	if err != nil { 		panic("error fetching default InfluxDB 2.0 dir: " + err.Error()) 	} -	flags.StringVar(&options.target.boltPath, "v2-bolt-path", filepath.Join(v2dir, "influxd.bolt"), "Path to 2.0 metadata")+	// os-specific+	var defaultSsPath string+	if runtime.GOOS == "windows" {+		defaultSsPath = filepath.Join(homeOrAnyDir(), "influxd-upgrade-security.cmd")+	} else {+		defaultSsPath = filepath.Join(homeOrAnyDir(), "influxd-upgrade-security.sh")+	}++	cmd := &cobra.Command{+		Use:   "upgrade",+		Short: "Upgrade a 1.x version of InfluxDB",+		Long: `+    Upgrades a 1.x version of InfluxDB by performing following actions:+      1. Reads 1.x config file and creates 2.x config file with matching options. Unsupported 1.x options are reported.+      2. Upgrades 1.x database files.+      3. Creates a script for creating 1.x users and their permissions. This scripts needs to be revised and run manually after starting 2.x.++    If config file is not available, 1.x db folder is taken as an input. +`,+		RunE: runUpgradeE,+	}++	opts := []cli.Opt{+		{+			DestP:   &options.source.dbDir,+			Flag:    "v1-dir",+			Default: v1dir,+			Desc:    "path to source 1.x db directory containing meta,data and wal sub-folders",+		},+		{+			DestP:   &options.verbose,+			Flag:    "verbose",+			Default: false,+			Desc:    "verbose output",+			Short:   'v',+		},+		{+			DestP:   &options.target.boltPath,+			Flag:    "bolt-path",+			Default: filepath.Join(v2dir, bolt.DefaultFilename),+			Desc:    "path for boltdb database",+			Short:   'm',+		},+		{+			DestP:   &options.target.enginePath,+			Flag:    "engine-path",+			Default: filepath.Join(v2dir, "engine"),+			Desc:    "path for persistent engine files",+			Short:   'e',+		},+		{+			DestP:    &options.target.userName,+			Flag:     "username",+			Default:  "",+			Desc:     "primary username",+			Short:    'u',+			Required: true,+		},+		{+			DestP:    &options.target.password,+			Flag:     "password",+			Default:  "",+			Desc:     "password for username",+			Short:    'p',+			Required: true,+		},+		{+			DestP:    &options.target.orgName,+			Flag:     "org",+			Default:  "",+			Desc:     "primary organization name",+			Short:    'o',+			Required: true,+		},+		{+			DestP:    &options.target.bucket,+			Flag:     "bucket",+			Default:  "",+			Desc:     "primary bucket name",+			Short:    'b',+			Required: true,+		},+		{+			DestP:   &options.target.retention,+			Flag:    "retention",+			Default: "",+			Desc:    "optional: duration bucket will retain data. 0 is infinite. 
Default is 0.",+			Short:   'r',+		},+		{+			DestP:   &options.target.token,+			Flag:    "token",+			Default: "",+			Desc:    "optional: token for username, else auto-generated",+			Short:   't',+		},+		{+			DestP:   &options.source.configFile,+			Flag:    "config-file",+			Default: getV1ConfigPath(),+			Desc:    "optional: Custom InfluxDB 1.x config file path, else the default config file",+		},+		{+			DestP:   &options.target.securityScriptPath,+			Flag:    "security-script",+			Default: defaultSsPath,+			Desc:    "optional: generated security upgrade script path",+		},+		{+			DestP:   &options.logPath,+			Flag:    "log-path",+			Default: filepath.Join(homeOrAnyDir(), "upgrade.log"),+			Desc:    "optional: Custom log file path",+		},+	} +	cli.BindOptions(cmd, opts) 	// add sub commands-	Command.AddCommand(v1DumpMetaCommand)-	Command.AddCommand(v2DumpMetaCommand)+	cmd.AddCommand(v1DumpMetaCommand)+	cmd.AddCommand(v2DumpMetaCommand)+	// TODO for testing purposes only+	cmd.AddCommand(upgradeConfigCommand)+	return cmd }  type influxDBv1 struct { 	meta *meta.Client }  type influxDBv2 struct {+	log         *zap.Logger 	boltClient  *bolt.Client 	store       *bolt.KVStore 	kvStore     kv.SchemaStore 	tenantStore *tenant.Store 	ts          *tenant.Service 	dbrpSvc     influxdb.DBRPMappingServiceV2+	bucketSvc   influxdb.BucketService 	onboardSvc  influxdb.OnboardingService 	kvService   *kv.Service 	meta        *meta.Client } -func runUpgradeE(cmd *cobra.Command, args []string) error {+func (i *influxDBv2) close() error {+	err := i.meta.Close()+	if err != nil {+		return err+	}+	err = i.boltClient.Close()+	if err != nil {+		return err+	}+	err = i.store.Close()+	if err != nil {+		return err+	}+	return nil+}++func runUpgradeE(*cobra.Command, []string) error { 	ctx := context.Background()+	config := zap.NewProductionConfig()+	config.OutputPaths = append(config.OutputPaths, options.logPath)+	config.ErrorOutputPaths = append(config.ErrorOutputPaths, options.logPath)+	log, err := config.Build()+	if err != nil {+		return err+	}+	log.Info("starting InfluxDB 1.x upgrade")++	checkParam := func(name, value string) error {+		if value == "" {+			return fmt.Errorf("empty or missing mandatory option %s", name)+		}+		return nil+	}++	if err := checkParam("username", options.target.userName); err != nil {+		return err+	}+	if err := checkParam("password", options.target.password); err != nil {+		return err+	}+	if err := checkParam("org", options.target.orgName); err != nil {+		return err+	}+	if err := checkParam("bucket", options.target.bucket); err != nil {+		return err+	}++	if options.source.configFile != "" {+		log.Info("upgrading config file", zap.String("file", options.source.configFile))+		if _, err := os.Stat(options.source.configFile); err != nil {+			return err+		}+		v1Config, err := upgradeConfig(options.source.configFile, log)+		if err != nil {+			return err+		}+		// TODO how to load and validate new Config???++		options.source.metaDir = v1Config.Meta.Dir+		options.source.dataDir = v1Config.Data.Dir+		options.source.walDir = v1Config.Data.WALDir+	} else {+		log.Info("no InfluxDB 1.x config file specified, skipping its upgrade")+	}++	if err := options.source.checkDirs(); err != nil {+		return err+	}  	v1, err := newInfluxDBv1(&options.source) 	if err != nil { 		return err 	}-	_ = v1 -	v2, err := newInfluxDBv2(ctx, &options.target)+	v2, err := newInfluxDBv2(ctx, &options.target, log) 	if err != nil { 		return err 	}-	_ = v2 -	// 1. 
Onboard the initial admin user-	// v2.onboardSvc.OnboardInitialUser()+	or, err := setupAdmin(ctx, v2)+	if err != nil {+		return err+	}+	options.target.token = or.Auth.Token -	// 2. read each database / retention policy from v1.meta and create bucket db-name/rp-name-	// newBucket := v2.ts.CreateBucket(ctx, Bucket{})-	//-	// 3. create database in v2.meta-	// v2.meta.CreateDatabase(newBucket.ID.String())-	// copy shard info from v1.meta+	db2BucketIds, err := upgradeDatabases(ctx, v1, v2, or.Org.ID, log)+	if err != nil {+		//remove all files+		log.Info("database upgrade error, removing data")+		if e := os.Remove(options.target.boltPath); e != nil {+			log.Error("cleaning failed", zap.Error(e))+		}++		if e := os.RemoveAll(options.target.enginePath); e != nil {+			log.Error("cleaning failed", zap.Error(e))+		}+		return err+	}++	if err = generateSecurityScript(v1, db2BucketIds, log); err != nil {+		return err+	}++	log.Info("upgrade successfully completed. Start service now")  	return nil } +func upgradeDatabases(ctx context.Context, v1 *influxDBv1, v2 *influxDBv2, orgID influxdb.ID, log *zap.Logger) (map[string][]string, error) {+	db2BucketIds := make(map[string][]string)+	targetDataPath := filepath.Join(options.target.enginePath, "data")+	targetWalPath := filepath.Join(options.target.enginePath, "wal")+	dirFilterFunc := func(path string) bool {+		base := filepath.Base(path)+		if base == "_series" ||+			(len(base) > 0 && base[0] == '_') || //skip internal databases+			base == "index" {+			return true+		}+		return false+	}+	// read each database / retention policy from v1.meta and create bucket db-name/rp-name+	// create database in v2.meta+	// copy shard info from v1.meta+	if len(v1.meta.Databases()) > 0 {+		// Check space+		log.Info("checking space")+		size, err := fs2.DirSize(options.source.dataDir)+		if err != nil {+			return nil, fmt.Errorf("error getting size of %s: %w", options.source.dataDir, err)+		}+		size2, err := fs2.DirSize(options.source.walDir)+		if err != nil {+			return nil, fmt.Errorf("error getting size of %s: %w", options.source.walDir, err)+		}+		size += size2+		v2dir := filepath.Dir(options.target.boltPath)+		diskInfo, err := fs2.DiskUsage(v2dir)

Rather than import this very OS-dependent code in order to do a fairly inaccurate check (sum of file sizes doesn't equate to used disk space), it might be best just to let the user decide for themselves and provide a "dry run" mode that just says how much new disk space will be needed or something.
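
As a rough sketch of the idea (the dry-run flag is hypothetical, it's not in this PR), the command could just report the space it expects to need and stop there:

	// Hypothetical --dry-run handling: report the space needed and exit,
	// instead of trying to work out how much free space the target disk has.
	size, err := fs2.DirSize(options.source.dataDir)
	if err != nil {
		return nil, fmt.Errorf("error getting size of %s: %w", options.source.dataDir, err)
	}
	walSize, err := fs2.DirSize(options.source.walDir)
	if err != nil {
		return nil, fmt.Errorf("error getting size of %s: %w", options.source.walDir, err)
	}
	if options.dryRun {
		log.Info("dry run: upgrade would copy approximately this much data",
			zap.Uint64("bytes", size+walSize))
		return nil, nil
	}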

vlastahajek

comment created time in 6 days

Pull request review commentinfluxdata/influxdb

feat: influxd upgrade command

 var options = struct {  	// flags for target InfluxDB 	target optionsV2++	// verbose output+	verbose bool++	logPath string }{} -func init() {-	flags := Command.Flags()+func NewCommand() *cobra.Command {  	// source flags 	v1dir, err := influxDirV1() 	if err != nil { 		panic("error fetching default InfluxDB 1.x dir: " + err.Error()) 	} -	flags.StringVar(&options.source.metaDir, "v1-meta-dir", filepath.Join(v1dir, "meta"), "Path to 1.x meta.db directory")- 	// target flags 	v2dir, err := fs.InfluxDir() 	if err != nil { 		panic("error fetching default InfluxDB 2.0 dir: " + err.Error()) 	} -	flags.StringVar(&options.target.boltPath, "v2-bolt-path", filepath.Join(v2dir, "influxd.bolt"), "Path to 2.0 metadata")+	// os-specific+	var defaultSsPath string+	if runtime.GOOS == "windows" {+		defaultSsPath = filepath.Join(homeOrAnyDir(), "influxd-upgrade-security.cmd")+	} else {+		defaultSsPath = filepath.Join(homeOrAnyDir(), "influxd-upgrade-security.sh")+	}++	cmd := &cobra.Command{+		Use:   "upgrade",+		Short: "Upgrade a 1.x version of InfluxDB",+		Long: `+    Upgrades a 1.x version of InfluxDB by performing following actions:+      1. Reads 1.x config file and creates 2.x config file with matching options. Unsupported 1.x options are reported.+      2. Upgrades 1.x database files.+      3. Creates a script for creating 1.x users and their permissions. This scripts needs to be revised and run manually after starting 2.x.++    If config file is not available, 1.x db folder is taken as an input. +`,+		RunE: runUpgradeE,+	}++	opts := []cli.Opt{+		{+			DestP:   &options.source.dbDir,+			Flag:    "v1-dir",+			Default: v1dir,+			Desc:    "path to source 1.x db directory containing meta,data and wal sub-folders",+		},+		{+			DestP:   &options.verbose,+			Flag:    "verbose",+			Default: false,+			Desc:    "verbose output",+			Short:   'v',+		},+		{+			DestP:   &options.target.boltPath,+			Flag:    "bolt-path",+			Default: filepath.Join(v2dir, bolt.DefaultFilename),+			Desc:    "path for boltdb database",+			Short:   'm',+		},+		{+			DestP:   &options.target.enginePath,+			Flag:    "engine-path",+			Default: filepath.Join(v2dir, "engine"),+			Desc:    "path for persistent engine files",+			Short:   'e',+		},+		{+			DestP:    &options.target.userName,+			Flag:     "username",+			Default:  "",+			Desc:     "primary username",+			Short:    'u',+			Required: true,+		},+		{+			DestP:    &options.target.password,+			Flag:     "password",+			Default:  "",+			Desc:     "password for username",+			Short:    'p',+			Required: true,+		},+		{+			DestP:    &options.target.orgName,+			Flag:     "org",+			Default:  "",+			Desc:     "primary organization name",+			Short:    'o',+			Required: true,+		},+		{+			DestP:    &options.target.bucket,+			Flag:     "bucket",+			Default:  "",+			Desc:     "primary bucket name",+			Short:    'b',+			Required: true,+		},+		{+			DestP:   &options.target.retention,+			Flag:    "retention",+			Default: "",+			Desc:    "optional: duration bucket will retain data. 0 is infinite. 
Default is 0.",+			Short:   'r',+		},+		{+			DestP:   &options.target.token,+			Flag:    "token",+			Default: "",+			Desc:    "optional: token for username, else auto-generated",+			Short:   't',+		},+		{+			DestP:   &options.source.configFile,+			Flag:    "config-file",+			Default: getV1ConfigPath(),+			Desc:    "optional: Custom InfluxDB 1.x config file path, else the default config file",+		},+		{+			DestP:   &options.target.securityScriptPath,+			Flag:    "security-script",+			Default: defaultSsPath,+			Desc:    "optional: generated security upgrade script path",+		},+		{+			DestP:   &options.logPath,+			Flag:    "log-path",+			Default: filepath.Join(homeOrAnyDir(), "upgrade.log"),+			Desc:    "optional: Custom log file path",+		},+	} +	cli.BindOptions(cmd, opts) 	// add sub commands-	Command.AddCommand(v1DumpMetaCommand)-	Command.AddCommand(v2DumpMetaCommand)+	cmd.AddCommand(v1DumpMetaCommand)+	cmd.AddCommand(v2DumpMetaCommand)+	// TODO for testing purposes only+	cmd.AddCommand(upgradeConfigCommand)+	return cmd }  type influxDBv1 struct { 	meta *meta.Client }  type influxDBv2 struct {+	log         *zap.Logger 	boltClient  *bolt.Client 	store       *bolt.KVStore 	kvStore     kv.SchemaStore 	tenantStore *tenant.Store 	ts          *tenant.Service 	dbrpSvc     influxdb.DBRPMappingServiceV2+	bucketSvc   influxdb.BucketService 	onboardSvc  influxdb.OnboardingService 	kvService   *kv.Service 	meta        *meta.Client } -func runUpgradeE(cmd *cobra.Command, args []string) error {+func (i *influxDBv2) close() error {+	err := i.meta.Close()+	if err != nil {+		return err+	}+	err = i.boltClient.Close()+	if err != nil {+		return err+	}+	err = i.store.Close()+	if err != nil {+		return err+	}+	return nil+}++func runUpgradeE(*cobra.Command, []string) error { 	ctx := context.Background()+	config := zap.NewProductionConfig()+	config.OutputPaths = append(config.OutputPaths, options.logPath)+	config.ErrorOutputPaths = append(config.ErrorOutputPaths, options.logPath)+	log, err := config.Build()+	if err != nil {+		return err+	}+	log.Info("starting InfluxDB 1.x upgrade")++	checkParam := func(name, value string) error {+		if value == "" {+			return fmt.Errorf("empty or missing mandatory option %s", name)+		}+		return nil+	}++	if err := checkParam("username", options.target.userName); err != nil {+		return err+	}+	if err := checkParam("password", options.target.password); err != nil {+		return err+	}+	if err := checkParam("org", options.target.orgName); err != nil {+		return err+	}+	if err := checkParam("bucket", options.target.bucket); err != nil {+		return err+	}++	if options.source.configFile != "" {+		log.Info("upgrading config file", zap.String("file", options.source.configFile))+		if _, err := os.Stat(options.source.configFile); err != nil {+			return err+		}+		v1Config, err := upgradeConfig(options.source.configFile, log)+		if err != nil {+			return err+		}+		// TODO how to load and validate new Config???++		options.source.metaDir = v1Config.Meta.Dir+		options.source.dataDir = v1Config.Data.Dir+		options.source.walDir = v1Config.Data.WALDir+	} else {+		log.Info("no InfluxDB 1.x config file specified, skipping its upgrade")+	}++	if err := options.source.checkDirs(); err != nil {+		return err+	}  	v1, err := newInfluxDBv1(&options.source) 	if err != nil { 		return err 	}-	_ = v1 -	v2, err := newInfluxDBv2(ctx, &options.target)+	v2, err := newInfluxDBv2(ctx, &options.target, log) 	if err != nil { 		return err 	}-	_ = v2 -	// 1. 
Onboard the initial admin user-	// v2.onboardSvc.OnboardInitialUser()+	or, err := setupAdmin(ctx, v2)+	if err != nil {+		return err+	}+	options.target.token = or.Auth.Token -	// 2. read each database / retention policy from v1.meta and create bucket db-name/rp-name-	// newBucket := v2.ts.CreateBucket(ctx, Bucket{})-	//-	// 3. create database in v2.meta-	// v2.meta.CreateDatabase(newBucket.ID.String())-	// copy shard info from v1.meta+	db2BucketIds, err := upgradeDatabases(ctx, v1, v2, or.Org.ID, log)+	if err != nil {+		//remove all files+		log.Info("database upgrade error, removing data")+		if e := os.Remove(options.target.boltPath); e != nil {+			log.Error("cleaning failed", zap.Error(e))+		}++		if e := os.RemoveAll(options.target.enginePath); e != nil {+			log.Error("cleaning failed", zap.Error(e))+		}+		return err+	}++	if err = generateSecurityScript(v1, db2BucketIds, log); err != nil {+		return err+	}++	log.Info("upgrade successfully completed. Start service now")  	return nil } +func upgradeDatabases(ctx context.Context, v1 *influxDBv1, v2 *influxDBv2, orgID influxdb.ID, log *zap.Logger) (map[string][]string, error) {+	db2BucketIds := make(map[string][]string)+	targetDataPath := filepath.Join(options.target.enginePath, "data")+	targetWalPath := filepath.Join(options.target.enginePath, "wal")+	dirFilterFunc := func(path string) bool {+		base := filepath.Base(path)+		if base == "_series" ||+			(len(base) > 0 && base[0] == '_') || //skip internal databases+			base == "index" {+			return true+		}+		return false+	}+	// read each database / retention policy from v1.meta and create bucket db-name/rp-name+	// create database in v2.meta+	// copy shard info from v1.meta+	if len(v1.meta.Databases()) > 0 {

How about inverting the condition and returning early when there are no databases? That would save a bunch of indented code.
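
i.e. something like (sketch):

	if len(v1.meta.Databases()) == 0 {
		log.Info("no databases found in 1.x meta store")
		return db2BucketIds, nil
	}
	// ... rest of upgradeDatabases, no longer nested inside the if ...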

vlastahajek

comment created time in 6 days

Pull request review commentinfluxdata/influxdb

feat: influxd upgrade command

 func newFileExistsError(path string) FileExistsError { func (e FileExistsError) Error() string { 	return fmt.Sprintf("operation not allowed, file %q exists", e.path) }++type DiskStatus struct {+	All   uint64+	Used  uint64+	Free  uint64+	Avail uint64+}++// DirSize returns total size in bytes of containing files+func DirSize(path string) (uint64, error) {+	var size uint64+	err := filepath.Walk(path, func(_ string, info os.FileInfo, err error) error {+		if err != nil {+			return err+		}+		if !info.IsDir() {+			size += uint64(info.Size())+		}+		return err+	})+	return size, err+}++// CopyFile copies the contents of the file named src to the file named+// by dst. The file will be created if it does not already exist. If the+// destination file exists, all it's contents will be replaced by the contents+// of the source file. The file mode will be copied from the source and+// the copied data is synced/flushed to stable storage.+func CopyFile(src, dst string) (err error) {+	in, err := os.Open(src)+	if err != nil {+		return+	}+	defer in.Close()++	out, err := os.Create(dst)+	if err != nil {+		return+	}+	defer func() {+		if e := out.Close(); e != nil {+			err = e+		}+	}()++	_, err = io.Copy(out, in)+	if err != nil {+		return+	}++	err = out.Sync()+	if err != nil {+		return+	}++	si, err := os.Stat(src)+	if err != nil {+		return+	}+	err = os.Chmod(dst, si.Mode())

Wouldn't this be better done at file creation time? How about using os.OpenFile instead of os.Create so you can specify the permissions when the file is created?
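
Something along these lines (sketch):

	si, err := os.Stat(src)
	if err != nil {
		return err
	}
	// create the destination with the source's permissions up front
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, si.Mode())
	if err != nil {
		return err
	}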

vlastahajek

comment created time in 6 days

Pull request review commentinfluxdata/influxdb

feat: influxd upgrade command

+package upgrade++// This file contains code for v1 config file upgrade to v2.+// It supports basic restructuring (caveat: changing structure inside existing+// array or to new array may not work, but it is not required).+// Upgrade rules are in `upgrade_config.properties` file.++import (+	"bufio"+	"bytes"+	"errors"+	"fmt"+	"io"+	"io/ioutil"+	"os"+	"path/filepath"+	"strings"++	"github.com/BurntSushi/toml"+	"github.com/spf13/cobra"+	"go.uber.org/zap"+	"golang.org/x/text/encoding/unicode"+	"golang.org/x/text/transform"+)++// TODO for testing purposes+var upgradeConfigCommand = &cobra.Command{+	Use:   "upgrade-config",+	Short: "Upgrade InfluxDB 1.x config to 2.x",+	RunE: func(cmd *cobra.Command, args []string) error {+		log, _ := zap.NewProduction()+		_, err := upgradeConfig(options.source.configFile, log)+		return err+	},+}++// TODO for testing purposes+func init() {+	flags := upgradeConfigCommand.Flags()+	flags.StringVar(&options.source.configFile, "config-file", getV1ConfigPath(), "Path to 1.x config file")+}++// Backups existing config file and updates it with upgraded config.+func upgradeConfig(configFile string, log *zap.Logger) (*configV1, error) {+	configUpgradeProperties, err := AssetString("upgrade_config.properties")+	if err != nil {+		return nil, err+	}+	ext := filepath.Ext(configFile)+	cu := newConfigUpgrader(configUpgradeProperties, log)+	v1c, err := cu.loadV1(configFile)+	if err != nil {+		return nil, err+	}+	if ext == ".toml" { // if v1 config already has ".toml" extension, backup is needed+		err = cu.backup(configFile)+		if err != nil {+			return nil, err+		}+	}+	c, err := cu.transform(configFile)+	if err != nil {+		return nil, err+	}+	applyConfigOverrides(c)+	configFileV2 := strings.TrimSuffix(configFile, ext) + ".toml"+	err = cu.save(c, configFileV2)+	if err != nil {+		return nil, err+	}+	log.Info("config file upgraded",+		zap.String("1.x config", configFile),+		zap.String("2.x config", configFileV2))++	return v1c, nil+}++// Returns default 1.x config file path.+func getV1ConfigPath() string {+	if envVar := os.Getenv("INFLUXDB_CONFIG_PATH"); envVar != "" {+		return envVar+	}+	for _, path := range []string{+		os.ExpandEnv("${HOME}/.influxdb/influxdb.conf"),+		"/etc/influxdb/influxdb.conf",+	} {+		if _, err := os.Stat(path); err == nil {+			return path+		}+	}++	return ""+}++// private function used by `upgrade-config` command+func applyConfigOverrides(c config) {+	if options.target.enginePath != "" {+		c["engine-path"] = options.target.enginePath+	}+	if options.target.boltPath != "" {+		c["bolt-path"] = options.target.boltPath+	}+}++type properties map[string]string   // private type used by `upgrade-config` command+type table = map[string]interface{} // private type used by `upgrade-config` command+type config = table                 // private type used by `upgrade-config` command++// private type used by `upgrade-config` command+type configUpgrader struct {+	rules  properties+	config config+	log    *zap.Logger+}++// private function used by `upgrade-config` command+func newConfigUpgrader(rules string, log *zap.Logger) *configUpgrader {+	cm := &configUpgrader{+		log: log,+	}+	cm.init(rules)+	return cm+}++func (cu *configUpgrader) backup(path string) error {+	source, err := os.Open(path)+	if err != nil {+		return err+	}+	defer source.Close()++	backupFile := path + "~"+	if _, err := os.Stat(backupFile); !os.IsNotExist(err) {+		errMsg := fmt.Sprintf("upgrade: config file backup %s already exist", backupFile)+		return errors.New(errMsg)+	}++	destination, err := 
os.Create(backupFile)+	if err != nil {+		return err+	}+	defer destination.Close()+	_, err = io.Copy(destination, source)++	return err+}++func (cu *configUpgrader) save(c config, path string) error {+	buf := new(bytes.Buffer)+	if err := toml.NewEncoder(buf).Encode(&c); err != nil {+		return err+	}++	return ioutil.WriteFile(path, buf.Bytes(), 0666)+}++func (cu *configUpgrader) transform(path string) (config, error) {+	c, err := cu.parse(path)+	if err != nil {+		return nil, err+	}+	cu.config = make(config)+	cu.process(c, nil, -1)+	return cu.config, nil+}++func (cu *configUpgrader) parse(path string) (config, error) {+	bs, err := ioutil.ReadFile(path)+	if err != nil {+		return nil, err+	}++	// Handle any potential Byte-Order-Marks that may be in the config file.+	// This is for Windows compatibility only.+	// See https://github.com/influxdata/telegraf/issues/1378 and+	// https://github.com/influxdata/influxdb/issues/8965.+	bom := unicode.BOMOverride(transform.Nop)+	bs, _, err = transform.Bytes(bom, bs)+	if err != nil {+		return nil, err+	}++	var c config+	_, err = toml.Decode(string(bs), &c)+	return c, err+}++func (cu *configUpgrader) loadV1(path string) (*configV1, error) {+	bs, err := ioutil.ReadFile(path)+	if err != nil {+		return nil, err+	}++	// Handle any potential Byte-Order-Marks that may be in the config file.+	// This is for Windows compatibility only.+	// See https://github.com/influxdata/telegraf/issues/1378 and+	// https://github.com/influxdata/influxdb/issues/8965.+	bom := unicode.BOMOverride(transform.Nop)+	bs, _, err = transform.Bytes(bom, bs)+	if err != nil {+		return nil, err+	}++	var c configV1+	_, err = toml.Decode(string(bs), &c)+	return &c, err+}++func (cu *configUpgrader) init(rules string) {+	cu.rules = make(properties)+	scanner := bufio.NewScanner(strings.NewReader(rules))+	for scanner.Scan() {+		line := strings.Trim(scanner.Text(), " ")+		if line == "" || strings.HasPrefix(line, "#") {+			continue+		}+		rule := strings.SplitN(line, "=", 2)+		if len(rule) == 2 {+			sourceKey := strings.Trim(rule[0], " ")+			targetKey := strings.Trim(rule[1], " ")+			cu.rules[sourceKey] = targetKey+		}+	}+}++func (cu *configUpgrader) convert(path []string) ([]string, bool) {+	fqn := strings.Join(path, ".")+	target, ok := cu.rules[fqn]+	if ok {+		if target == "" {+			return nil, true+		}+		return strings.Split(target, "."), true+	}+	return path, false+}++// flat copy ie. without values for maps and arrays+func (cu *configUpgrader) add(v interface{}, source []string, target []string, index int) {+	var c table+	c = cu.config+	for len(target) > 1 {+		n := target[0]+		u, ok := c[n]+		if !ok {+			u = make(table)+			c[n] = u+		}+		if uc, ok := u.(table); ok {+			c = uc+		}+		if uc, ok := u.([]table); ok {+			if uc[index] == nil {+				uc[index] = make(table)+			}+			c = uc[index]+		}+		target = target[1:]+	}+	n := target[0]+	switch vv := v.(type) {+	case table:+		c[n] = make(table)+	case []table:+		array := make([]table, len(vv))+		for i := 0; i < len(vv); i++ {+			array[i] = make(table)+		}+		c[n] = array+	default:+		c[n] = v+	}+}++func (cu *configUpgrader) process(c interface{}, path []string, index int) []string {

I found the logic in this function rather hard to understand. In particular, I don't really understand its contract. Why does it return a new path when the caller should know whether the last element needs to be popped or not? Some doc comments on the methods would help a lot, I think.

Also, ISTM that this logic won't be able to replace elements inside arrays, because it's not possible to know where the array(s) lie in the replacement path.

Here's a possible alternative approach, where the code is somewhat longer, but arguably separates the concerns a little better, and copes with transformations in arrays too: https://play.golang.org/p/iKZIfTm8BcQ

The above code does use a slightly different syntax for the rules file: if you want a path to be replaced within an array, you must put a * element in both the source path and the replacement path at the position where the array occurs. Also, it keeps entries that don't have an explicit transformation rule, but that could easily be changed.

YMMV :)

vlastahajek

comment created time in 6 days

Pull request review commentinfluxdata/influxdb

feat: influxd upgrade command

+package upgrade++import (+	"errors"+	"fmt"+	"math/rand"+	"os"+	"path/filepath"+	"regexp"+	"runtime"+	"sort"+	"strings"+	"time"++	"github.com/influxdata/influxdb/v2/v1/services/meta"+	"github.com/influxdata/influxql"+	"go.uber.org/zap"+)++// Generates security upgrade script.+func generateSecurityScript(v1 *influxDBv1, dbBuckets map[string][]string, log *zap.Logger) error {+	// create helper+	helper := &securityScriptHelper{+		log: log,+	}+	if err := helper.init(); err != nil {+		return err+	}++	// target org name+	var targetOrg = options.target.orgName++	// get `influx` absolute path+	exePath, err := filepath.Abs(os.Args[0])+	if err != nil {+		return err+	}+	influxExe := filepath.Join(filepath.Dir(exePath), "influx")++	// first check if target buckets exists in 2.x+	v1meta := v1.meta+	proceed := helper.checkDbBuckets(v1meta, dbBuckets)+	if !proceed {+		return errors.New("upgrade: there were errors/warnings, please fix them and run the command again")+	}++	// create output+	var output *os.File+	var isFo bool+	if options.target.securityScriptPath == "" {+		output = os.Stdout+	} else {+		output, err = os.OpenFile(options.target.securityScriptPath, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0777)+		if err != nil {+			return err+		}+		isFo = true+	}++	// script printing helper funcs+	scriptf := func(format string, args ...interface{}) {+		fmt.Fprintf(output, format, args...)+	}+	scriptln := func(text string) {+		fmt.Fprintln(output, text)+	}+	script := func() {+		fmt.Fprintln(output, "")+	}++	// generate the script+	var comment, set, fi, nl, join, influx string+	if helper.isWin() {+		comment = "REM"+		set = "set "+		if isFo {+			fi = ") >> %LOG% 2>&1"+		} else {+			fi = ")"+		}+		nl = "  echo."+		join = " ^\n  && "+		influx = "%INFLUX%"+		scriptln("@ECHO OFF")+		script()+	} else {+		comment = "#"+		set = ""+		if isFo {+			fi = "} 2>&1 | tee -a $LOG\nfi"+		} else {+			fi = "fi"+		}+		nl = "  echo"+		join = " && \\\n  "+		influx = "env INFLUX_TOKEN=$INFLUX_TOKEN $INFLUX"+		scriptln("#!/bin/sh")+		script()+	}+	for _, row := range helper.sortUserInfo(v1meta.Users()) {+		username := row.Name+		varname := helper.shUserVar(username)+		if row.Admin {+			if helper.isWin() {+				scriptf("%s user %s is 1.x admin, will not be upgraded automatically\n%s%s=no\n",+					comment, username, set, varname)+			} else {+				scriptf("%s%s=no %s user %s is 1.x admin, will not be upgraded automatically\n",+					set, varname, comment, username)+			}+		} else if len(row.Privileges) == 0 {+			if helper.isWin() {+				scriptf("%s user %s has no 1.x privileges, will not be upgraded automatically\n%s%s=no\n",+					comment, username, set, varname)+			} else {+				scriptf("%s%s=no %s user %s has no 1.x privileges, will not be upgraded automatically\n",+					set, varname, comment, username)+			}+		} else {+			if helper.isWin() {+				scriptf("%s user %s\n%s%s=yes\n", comment, username, set, varname)+			} else {+				scriptf("%s%s=yes %s user %s\n", set, varname, comment, username)+			}+		}+	}+	script()+	scriptln(comment)+	scriptf("%s VARIABLES\n", comment)+	scriptln(comment)+	script()+	if helper.isWin() {+		scriptf("set INFLUX=\"%s\"\n", influxExe)+		scriptf("set INFLUX_TOKEN=%s\n", options.target.token)+	} else {+		scriptf("INFLUX=%s\n", influxExe)+		scriptf("INFLUX_TOKEN=%s\n", options.target.token)+	}+	if isFo {+		if helper.isWin() {+			scriptln("set PATH=%PATH%;C:\\WINDOWS\\system32\\wbem")+			scriptln("for /f %%x in ('wmic os get localdatetime ^| findstr /b [0-9]') do @set X=%%x && set LOG=%~dpn0.%X:~0,8%-%X:~8,6%.log")+		} else {+			
scriptln("LOG=\"${0%.*}.$(date +%Y%m%d-%H%M%S).log\"")+		}+	}+	script()+	scriptln(comment)+	scriptf("%s INDIVIDUAL USER UPGRADES\n", comment)+	scriptln(comment)+	script()+	for _, row := range helper.sortUserInfo(v1meta.Users()) {+		username := row.Name+		if helper.isWin() {+			scriptf("IF /I \"%%%s%%\" == \"yes\" (\n", helper.shUserVar(username))+		} else {+			if isFo {+				scriptf("if [ \"$%s\" = \"yes\" ]; then\n{\n", helper.shUserVar(username))+			} else {+				scriptf("if [ \"$%s\" = \"yes\" ]; then\n", helper.shUserVar(username))+			}+		}+		if row.Admin {+			scriptln(nl)+			scriptf("  echo User %s is 1.x admin and should be added & invited manually to 2.x if needed\n", username)+			scriptf("  %s add & invite user %s in the InfluxDB 2.x UI\n", comment, username)+			scriptln(fi)+			script()+			continue+		}+		password := helper.generatePassword(8) // influx user create requires password+		tokenDesc := fmt.Sprintf("%s's token", username)+		readAccess := make([]string, 0)+		writeAccess := make([]string, 0)+		for database, permission := range row.Privileges {+			ids, ok := dbBuckets[database]+			if !ok || ids == nil || len(ids) == 0 { // db probably skipped+				continue+			}+			for _, id := range ids {+				switch permission {+				case influxql.ReadPrivilege:+					readAccess = append(readAccess, fmt.Sprintf("--read-bucket=%s", id))+				case influxql.WritePrivilege:+					writeAccess = append(writeAccess, fmt.Sprintf("--write-bucket=%s", id))+				case influxql.AllPrivileges:+					readAccess = append(readAccess, fmt.Sprintf("--read-bucket=%s", id))+					writeAccess = append(writeAccess, fmt.Sprintf("--write-bucket=%s", id))+				}+			}+		}+		var readBucketArgs, writeBucketArgs string+		if len(readAccess) > 0 {+			readBucketArgs = strings.Join(readAccess, " ")+		}+		if len(writeAccess) > 0 {+			writeBucketArgs = strings.Join(writeAccess, " ")+		}+		var cmds []string+		cmds = append(cmds, fmt.Sprintf("  echo Creating user %s with password %s in %s organization...", username, password, targetOrg))+		cmds = append(cmds, fmt.Sprintf("%s user create --name=%s --password=%s --org=%s", influx, username, password, targetOrg))+		if len(readAccess) > 0 || len(writeAccess) > 0 {+			cmds = append(cmds, "echo Creating authorization token...")+			cmds = append(cmds, fmt.Sprintf("%s auth create --user=%s --org=%s --description=\"%s\" %s %s",+				influx, username, targetOrg, tokenDesc, readBucketArgs, writeBucketArgs))+		}+		scriptln(nl)+		scriptln(strings.Join(cmds, join))+		scriptln(fi)+		script()+		// sample output per user:+		// ID			Name+		// 064b437f88377000	xyz1+		// ID			Token												User Name	User ID			Permissions+		// 064b485f4fe3a000	aysw_eMIF46WxKx8o0oz9bNmMVGL09AAg1Scoo1ynFJSW4uC2P3O8HyVy8NTISbNltNDFr7jAyQui3KS-ahpsQ==	xyz		064b435351b77000	[read:orgs/b20841b0f84c3b7e/buckets/8213da1997b3d89e write:orgs/b20841b0f84c3b7e/buckets/8213da1997b3d89e]+	}++	if isFo {+		if helper.isWin() {+			scriptln("type %LOG%")+			scriptln("echo.")+			scriptln("echo Output saved to %LOG%")+		} else {+			scriptln("echo")+			scriptln("echo Output saved to $LOG")+		}+		log.Info(fmt.Sprintf("security upgrade script saved to %s\n", options.target.securityScriptPath))+	}++	return nil+}++// private type used by `generate-security-script` command+type securityScriptHelper struct {+	shReg *regexp.Regexp+	log   *zap.Logger+}++func (h *securityScriptHelper) init() error {+	reg, err := regexp.Compile("[^a-zA-Z0-9]+")+	if err != nil {+		return fmt.Errorf("upgrade: error preparing security script: %v", err)+	}+	h.shReg = reg+	
return nil+}++func (h *securityScriptHelper) checkDbBuckets(meta *meta.Client, databases map[string][]string) bool {+	ok := true+	for _, row := range meta.Users() {+		for database := range row.Privileges {+			if h.skipDb(database) { // same check is done in upgradeDatabases()+				continue+			}+			ids, ok := databases[database]+			if !ok || ids == nil || len(ids) == 0 {+				h.log.Warn(fmt.Sprintf("warning: no buckets for database [%s] exist in 2.x\n", database))+				ok = false+			}+		}+	}++	return ok+}++func (h *securityScriptHelper) shUserVar(name string) string {+	return "UPGRADE_USER_" + h.shReg.ReplaceAllString(name, "_")+}++func (h *securityScriptHelper) sortUserInfo(info []meta.UserInfo) []meta.UserInfo {+	sort.Slice(info, func(i, j int) bool {+		return info[i].Name < info[j].Name+	})+	return info+}++func (h *securityScriptHelper) generatePassword(length int) string {+	lowerCharSet := "abcdefghijklmnopqrstuvwxyz"+	upperCharSet := "ABCDEFGHIJKLMNOPQRSTUVWXYZ"+	specialCharSet := "" //".!@#$%"+	numberSet := "0123456789"+	allCharSet := lowerCharSet + upperCharSet + specialCharSet + numberSet+	rand.Seed(time.Now().UnixNano())

math/rand generates deterministic pseudo-random numbers, so it's not suitable for generating a password; crypto/rand should be used instead. How about something like this?

// generatePassword generates a password of the given length.
func generatePassword(length int) (string, error) {
	encoding := base32.StdEncoding.WithPadding(base32.NoPadding)
	n := encoding.DecodedLen(length)
	if encoding.EncodedLen(n) < length {
		n++
	}
	buf := make([]byte, n)
	_, err := rand.Read(buf)
	if err != nil {
		// Or just panic, as this situation never happens in practice?
		return "", fmt.Errorf("random number generation failed: %v", err)
	}
	return encoding.EncodeToString(buf)[:length], nil
}
vlastahajek

comment created time in 6 days

Pull request review commentinfluxdata/influxdb

feat: influxd upgrade command

+package upgrade++// This file contains code for v1 config file upgrade to v2.+// It supports basic restructuring (caveat: changing structure inside existing+// array or to new array may not work, but it is not required).+// Upgrade rules are in `upgrade_config.properties` file.++import (+	"bufio"+	"bytes"+	"errors"+	"fmt"+	"io"+	"io/ioutil"+	"os"+	"path/filepath"+	"strings"++	"github.com/BurntSushi/toml"+	"github.com/spf13/cobra"+	"go.uber.org/zap"+	"golang.org/x/text/encoding/unicode"+	"golang.org/x/text/transform"+)++// TODO for testing purposes+var upgradeConfigCommand = &cobra.Command{+	Use:   "upgrade-config",+	Short: "Upgrade InfluxDB 1.x config to 2.x",+	RunE: func(cmd *cobra.Command, args []string) error {+		log, _ := zap.NewProduction()+		_, err := upgradeConfig(options.source.configFile, log)+		return err+	},+}++// TODO for testing purposes+func init() {+	flags := upgradeConfigCommand.Flags()+	flags.StringVar(&options.source.configFile, "config-file", getV1ConfigPath(), "Path to 1.x config file")+}++// Backups existing config file and updates it with upgraded config.+func upgradeConfig(configFile string, log *zap.Logger) (*configV1, error) {+	configUpgradeProperties, err := AssetString("upgrade_config.properties")+	if err != nil {+		return nil, err+	}+	ext := filepath.Ext(configFile)+	cu := newConfigUpgrader(configUpgradeProperties, log)+	v1c, err := cu.loadV1(configFile)+	if err != nil {+		return nil, err+	}+	if ext == ".toml" { // if v1 config already has ".toml" extension, backup is needed+		err = cu.backup(configFile)+		if err != nil {+			return nil, err+		}+	}+	c, err := cu.transform(configFile)+	if err != nil {+		return nil, err+	}+	applyConfigOverrides(c)+	configFileV2 := strings.TrimSuffix(configFile, ext) + ".toml"+	err = cu.save(c, configFileV2)+	if err != nil {+		return nil, err+	}+	log.Info("config file upgraded",+		zap.String("1.x config", configFile),+		zap.String("2.x config", configFileV2))++	return v1c, nil+}++// Returns default 1.x config file path.+func getV1ConfigPath() string {+	if envVar := os.Getenv("INFLUXDB_CONFIG_PATH"); envVar != "" {+		return envVar+	}+	for _, path := range []string{+		os.ExpandEnv("${HOME}/.influxdb/influxdb.conf"),+		"/etc/influxdb/influxdb.conf",+	} {+		if _, err := os.Stat(path); err == nil {+			return path+		}+	}++	return ""+}++// private function used by `upgrade-config` command+func applyConfigOverrides(c config) {+	if options.target.enginePath != "" {+		c["engine-path"] = options.target.enginePath+	}+	if options.target.boltPath != "" {+		c["bolt-path"] = options.target.boltPath+	}+}++type properties map[string]string   // private type used by `upgrade-config` command+type table = map[string]interface{} // private type used by `upgrade-config` command+type config = table                 // private type used by `upgrade-config` command++// private type used by `upgrade-config` command+type configUpgrader struct {+	rules  properties+	config config+	log    *zap.Logger+}++// private function used by `upgrade-config` command+func newConfigUpgrader(rules string, log *zap.Logger) *configUpgrader {+	cm := &configUpgrader{+		log: log,+	}+	cm.init(rules)+	return cm+}++func (cu *configUpgrader) backup(path string) error {+	source, err := os.Open(path)+	if err != nil {+		return err+	}+	defer source.Close()++	backupFile := path + "~"+	if _, err := os.Stat(backupFile); !os.IsNotExist(err) {+		errMsg := fmt.Sprintf("upgrade: config file backup %s already exist", backupFile)+		return errors.New(errMsg)+	}++	destination, err := 
os.Create(backupFile)+	if err != nil {+		return err+	}+	defer destination.Close()+	_, err = io.Copy(destination, source)++	return err+}++func (cu *configUpgrader) save(c config, path string) error {+	buf := new(bytes.Buffer)+	if err := toml.NewEncoder(buf).Encode(&c); err != nil {+		return err+	}++	return ioutil.WriteFile(path, buf.Bytes(), 0666)+}++func (cu *configUpgrader) transform(path string) (config, error) {+	c, err := cu.parse(path)+	if err != nil {+		return nil, err+	}+	cu.config = make(config)+	cu.process(c, nil, -1)+	return cu.config, nil+}++func (cu *configUpgrader) parse(path string) (config, error) {+	bs, err := ioutil.ReadFile(path)+	if err != nil {+		return nil, err+	}++	// Handle any potential Byte-Order-Marks that may be in the config file.+	// This is for Windows compatibility only.+	// See https://github.com/influxdata/telegraf/issues/1378 and+	// https://github.com/influxdata/influxdb/issues/8965.+	bom := unicode.BOMOverride(transform.Nop)+	bs, _, err = transform.Bytes(bom, bs)+	if err != nil {+		return nil, err+	}++	var c config+	_, err = toml.Decode(string(bs), &c)+	return c, err+}++func (cu *configUpgrader) loadV1(path string) (*configV1, error) {+	bs, err := ioutil.ReadFile(path)+	if err != nil {+		return nil, err+	}++	// Handle any potential Byte-Order-Marks that may be in the config file.+	// This is for Windows compatibility only.+	// See https://github.com/influxdata/telegraf/issues/1378 and+	// https://github.com/influxdata/influxdb/issues/8965.+	bom := unicode.BOMOverride(transform.Nop)+	bs, _, err = transform.Bytes(bom, bs)+	if err != nil {+		return nil, err+	}++	var c configV1+	_, err = toml.Decode(string(bs), &c)+	return &c, err+}++func (cu *configUpgrader) init(rules string) {+	cu.rules = make(properties)+	scanner := bufio.NewScanner(strings.NewReader(rules))+	for scanner.Scan() {+		line := strings.Trim(scanner.Text(), " ")+		if line == "" || strings.HasPrefix(line, "#") {+			continue+		}+		rule := strings.SplitN(line, "=", 2)+		if len(rule) == 2 {

If there's no = sign, perhaps it would be better to return an error rather than silently ignoring the problem?
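
For example (sketch; init would need to return an error, and newConfigUpgrader would need to propagate it):

		rule := strings.SplitN(line, "=", 2)
		if len(rule) != 2 {
			return fmt.Errorf("upgrade: invalid rule %q: expected source=target", line)
		}
		sourceKey := strings.TrimSpace(rule[0])
		targetKey := strings.TrimSpace(rule[1])
		cu.rules[sourceKey] = targetKey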

vlastahajek

comment created time in 6 days

Pull request review commentinfluxdata/influxdb

feat: influxd upgrade command

+package upgrade++// This file contains code for v1 config file upgrade to v2.+// It supports basic restructuring (caveat: changing structure inside existing+// array or to new array may not work, but it is not required).+// Upgrade rules are in `upgrade_config.properties` file.++import (+	"bufio"+	"bytes"+	"errors"+	"fmt"+	"io"+	"io/ioutil"+	"os"+	"path/filepath"+	"strings"++	"github.com/BurntSushi/toml"+	"github.com/spf13/cobra"+	"go.uber.org/zap"+	"golang.org/x/text/encoding/unicode"+	"golang.org/x/text/transform"+)++// TODO for testing purposes+var upgradeConfigCommand = &cobra.Command{+	Use:   "upgrade-config",+	Short: "Upgrade InfluxDB 1.x config to 2.x",+	RunE: func(cmd *cobra.Command, args []string) error {+		log, _ := zap.NewProduction()+		_, err := upgradeConfig(options.source.configFile, log)+		return err+	},+}++// TODO for testing purposes+func init() {+	flags := upgradeConfigCommand.Flags()+	flags.StringVar(&options.source.configFile, "config-file", "/etc/influxdb/influxdb.conf", "Path to config file")+}++// Backups existing config file and updates it with upgraded config.+func upgradeConfig(configFile string, log *zap.Logger) (*configV1, error) {+	configUpgradeProperties, err := AssetString("upgrade_config.properties")+	if err != nil {+		return nil, err+	}+	ext := filepath.Ext(configFile)+	cu := newConfigUpgrader(configUpgradeProperties, log)+	v1c, err := cu.loadV1(configFile)+	if err != nil {+		return nil, err+	}+	if ext == ".toml" { // if v1 config already has ".toml" extension, backup is needed+		err = cu.backup(configFile)+		if err != nil {+			return nil, err+		}+	}+	c, err := cu.transform(configFile)+	if err != nil {+		return nil, err+	}+	applyConfigOverrides(c)+	configFileV2 := strings.TrimSuffix(configFile, ext) + ".toml"+	err = cu.save(c, configFileV2)+	if err != nil {+		return nil, err+	}+	log.Info("config upgraded",+		zap.String("1.x config", configFile),+		zap.String("2.x config", configFileV2))++	return v1c, nil+}++func applyConfigOverrides(c config) {+	if options.target.enginePath != "" {+		c["engine-path"] = options.target.enginePath+	}+	if options.target.boltPath != "" {+		c["bolt-path"] = options.target.boltPath+	}+}++type properties map[string]string   // private type used by `upgrade-config` command+type table = map[string]interface{} // private type used by `upgrade-config` command+type config = table                 // private type used by `upgrade-config` command++// private type used by `upgrade-config` command+type configUpgrader struct {+	rules  properties+	config config+	log    *zap.Logger+}++// private function used by `upgrade-config` command+func newConfigUpgrader(rules string, log *zap.Logger) *configUpgrader {+	cm := &configUpgrader{+		log: log,+	}+	cm.init(rules)+	return cm+}++func (cu *configUpgrader) backup(path string) error {+	sourceFileStat, err := os.Stat(path)+	if err != nil {+		return err+	}++	if !sourceFileStat.Mode().IsRegular() {+		return errors.New("upgrade: '" + path + "' is not a regular file")+	}++	source, err := os.Open(path)+	if err != nil {+		return err+	}+	defer source.Close()++	backupFile := path + "~"++	// TODO testing only?+	if _, err := os.Stat(backupFile); !os.IsNotExist(err) {+		return fmt.Errorf("upgrade: config backup file %s already exist", backupFile)+	}++	destination, err := os.Create(backupFile)+	if err != nil {+		return err+	}+	defer destination.Close()+	_, err = io.Copy(destination, source)++	return err+}++func (cu *configUpgrader) save(c config, path string) error {+	buf := new(bytes.Buffer)+	if err := 
toml.NewEncoder(buf).Encode(&c); err != nil {+		return err+	}++	return ioutil.WriteFile(path, buf.Bytes(), 0666)+}++func (cu *configUpgrader) transform(path string) (config, error) {+	c, err := cu.parse(path)+	if err != nil {+		return nil, err+	}+	cu.config = make(config)+	cu.process(c, nil, -1)+	return cu.config, nil+}++func (cu *configUpgrader) parse(path string) (config, error) {+	bs, err := ioutil.ReadFile(path)+	if err != nil {+		return nil, err+	}++	// Handle any potential Byte-Order-Marks that may be in the config file.+	// This is for Windows compatibility only.+	// See https://github.com/influxdata/telegraf/issues/1378 and+	// https://github.com/influxdata/influxdb/issues/8965.+	bom := unicode.BOMOverride(transform.Nop)+	bs, _, err = transform.Bytes(bom, bs)+	if err != nil {+		return nil, err+	}++	var c config+	_, err = toml.Decode(string(bs), &c)+	return c, err+}++func (cu *configUpgrader) loadV1(path string) (*configV1, error) {+	bs, err := ioutil.ReadFile(path)+	if err != nil {+		return nil, err+	}++	// Handle any potential Byte-Order-Marks that may be in the config file.+	// This is for Windows compatibility only.+	// See https://github.com/influxdata/telegraf/issues/1378 and+	// https://github.com/influxdata/influxdb/issues/8965.+	bom := unicode.BOMOverride(transform.Nop)+	bs, _, err = transform.Bytes(bom, bs)+	if err != nil {+		return nil, err+	}++	var c configV1+	_, err = toml.Decode(string(bs), &c)+	return &c, err+}++func (cu *configUpgrader) init(rules string) {+	cu.rules = make(properties)+	scanner := bufio.NewScanner(strings.NewReader(rules))+	for scanner.Scan() {+		line := strings.Trim(scanner.Text(), " ")+		if line == "" || strings.HasPrefix(line, "#") {+			continue+		}+		rule := strings.SplitN(line, "=", 2)+		if len(rule) == 2 {+			sourceKey := strings.Trim(rule[0], " ")

This seems a bit surprising to me. I'd usually expect a "target" on the left hand side of a = sign.

That is, if I saw:

foo.bar=x.y

I think I'd expect foo.bar to be a path in the new config format and x.y to be a path in the old format. I might have misunderstood how it's working though.

vlastahajek

comment created time in 6 days

Pull request review commentinfluxdata/influxdb

feat: influxd upgrade command

(reviewed diff excerpt, package upgrade)

	// Handle any potential Byte-Order-Marks that may be in the config file.
	// This is for Windows compatibility only.
	// See https://github.com/influxdata/telegraf/issues/1378 and
	// https://github.com/influxdata/influxdb/issues/8965.

Nice useful comment, thanks!

vlastahajek

comment created time in 6 days

Pull request review comment influxdata/influxdb

feat: influxd upgrade command

(reviewed diff excerpt, package upgrade)

	type properties map[string]string   // private type used by `upgrade-config` command
	type table = map[string]interface{} // private type used by `upgrade-config` command
	type config = table                 // private type used by `upgrade-config` command

These comments aren't that helpful. Perhaps describe what the type is for instead of just saying that it's a private type?

vlastahajek

comment created time in 6 days

Pull request review comment influxdata/influxdb

feat: influxd upgrade command

(reviewed diff excerpt, package upgrade)

	var readBucketArgs, writeBucketArgs string
	if len(readAccess) > 0 {
		readBucketArgs = strings.Join(readAccess, " ")
	}
	if len(writeAccess) > 0 {
		writeBucketArgs = strings.Join(writeAccess, " ")
	}

		readBucketArgs := strings.Join(readAccess, " ")
		writeBucketArgs := strings.Join(writeAccess, " ")

(this is equivalent because the result of strings.Join on a nil slice is the empty string)
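
A quick standalone check of that claim (illustrative only, not part of the PR):

package main

import (
	"fmt"
	"strings"
)

func main() {
	var nilSlice []string
	empty := make([]string, 0)
	// Joining a nil or empty slice yields "", so the len(...) > 0 guards are redundant.
	fmt.Printf("%q %q\n", strings.Join(nilSlice, " "), strings.Join(empty, " ")) // "" ""
}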

vlastahajek

comment created time in 6 days

Pull request review comment influxdata/influxdb

feat: influxd upgrade command

(reviewed diff excerpt, package upgrade)

	func (h *securityScriptHelper) checkDbBuckets(meta *meta.Client, databases map[string][]string) bool {
		ok := true
		for _, row := range meta.Users() {
			for database := range row.Privileges {
				if h.skipDb(database) { // same check is done in upgradeDatabases()
					continue
				}
				ids, ok := databases[database]
				if !ok || ids == nil || len(ids) == 0 {
					h.log.Warn(fmt.Sprintf("warning: no buckets for database [%s] exist in 2.x\n", database))
					ok = false

You're not doing what you think here, as ok is shadowed. This function will always return true.
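
A minimal self-contained illustration of the shadowing (hypothetical names, not the PR's code): the `:=` inside the loop declares a new `ok`, so the later assignment never reaches the variable that is returned.

package main

import "fmt"

func allPresent(m map[string][]string, keys []string) bool {
	ok := true
	for _, k := range keys {
		ids, ok := m[k] // ":=" declares a new ok here, shadowing the outer one
		if !ok || len(ids) == 0 {
			ok = false // only this inner ok changes; the outer one stays true
		}
	}
	return ok // always returns true
}

func main() {
	fmt.Println(allPresent(map[string][]string{}, []string{"db0"})) // prints true despite the missing key
}

Renaming the inner variable (or predeclaring it and using plain =) avoids the shadow.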

vlastahajek

comment created time in 6 days

Pull request review comment influxdata/influxdb

feat: influxd upgrade command

(reviewed diff excerpt, package upgrade)

	// get `influx` absolute path
	exePath, err := filepath.Abs(os.Args[0])

	exePath, err := os.Executable()
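
For context, a sketch of the difference (not the PR's exact code): when the command is found via $PATH, os.Args[0] can be a bare name, and filepath.Abs then resolves it against the current working directory rather than the binary's real location; os.Executable reports the path of the running binary itself.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	exe, err := os.Executable() // real path of the running binary
	if err != nil {
		panic(err)
	}
	// Locate a sibling tool next to this binary, however the command was invoked.
	fmt.Println(filepath.Join(filepath.Dir(exe), "influx"))
}
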
vlastahajek

comment created time in 6 days

Pull request review comment influxdata/influxdb

feat: influxd upgrade command

(reviewed diff excerpt, package upgrade)

	if isFo {
		if helper.isWin() {
			scriptln("type %LOG%")

This is perhaps a debugging remnant - do we really want to print the whole log file to the console as well as saving it to the file?

vlastahajek

comment created time in 6 days

Pull request review comment influxdata/influxdb

feat: influxd upgrade command

(reviewed diff excerpt, package upgrade)

	password := helper.generatePassword(8) // influx user create requires password
	tokenDesc := fmt.Sprintf("%s's token", username)
	readAccess := make([]string, 0)
	writeAccess := make([]string, 0)

ISTM that you could just use a single variable (accessArgs ?) instead of two, and that would simplify the code below somewhat.
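
A sketch of that suggestion, reusing the identifiers from the reviewed diff (`accessArgs` is just the name floated above; this is not code from the PR):

// One slice collects both kinds of flag; AllPrivileges appends both.
var accessArgs []string
for database, permission := range row.Privileges {
	for _, id := range dbBuckets[database] {
		if permission == influxql.ReadPrivilege || permission == influxql.AllPrivileges {
			accessArgs = append(accessArgs, fmt.Sprintf("--read-bucket=%s", id))
		}
		if permission == influxql.WritePrivilege || permission == influxql.AllPrivileges {
			accessArgs = append(accessArgs, fmt.Sprintf("--write-bucket=%s", id))
		}
	}
}
// strings.Join(accessArgs, " ") then replaces the two separate argument strings.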

vlastahajek

comment created time in 6 days

Pull request review comment influxdata/influxdb

feat: influxd upgrade command

(reviewed diff excerpt, package upgrade)

	for database, permission := range row.Privileges {
		ids, ok := dbBuckets[database]
		if !ok || ids == nil || len(ids) == 0 { // db probably skipped
			continue
		}
		for _, id := range ids {

			for _, id := range dbBuckets[database] {

(looking a non-existent entry up in a map returns the zero value, and it's OK to range over the zero value of a slice)
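
Both properties are easy to confirm standalone (illustrative only):

package main

import "fmt"

func main() {
	dbBuckets := map[string][]string{"db0": {"bucket1"}}

	// A missing key yields the zero value for the element type: a nil slice.
	missing := dbBuckets["nope"]
	fmt.Println(missing == nil, len(missing)) // true 0

	// Ranging over a nil slice simply runs zero iterations, so no guard is needed.
	for _, id := range dbBuckets["nope"] {
		fmt.Println("never reached", id)
	}
}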

vlastahajek

comment created time in 6 days

Pull request review comment influxdata/influxdb

feat: influxd upgrade command

(reviewed diff excerpt, package upgrade)

	scriptln("set PATH=%PATH%;C:\\WINDOWS\\system32\\wbem")
	scriptln("for /f %%x in ('wmic os get localdatetime ^| findstr /b [0-9]') do @set X=%%x && set LOG=%~dpn0.%X:~0,8%-%X:~8,6%.log")

This piece of script is involved enough that I think it justifies a comment (and also perhaps laying out over multiple lines if that's possible).

vlastahajek

comment created time in 6 days

Pull request review comment influxdata/influxdb

feat: influxd upgrade command

(reviewed diff excerpt, package upgrade)

	// create output
	var output *os.File
	var isFo bool

I had to look quite carefully to understand what this meant. How about isFileOutput instead?

vlastahajek

comment created time in 6 days

Pull request review comment influxdata/influxdb

feat: influxd upgrade command

(reviewed diff excerpt, package upgrade)

	// target org name
	var targetOrg = options.target.orgName

	targetOrg := options.target.orgName
vlastahajek

comment created time in 6 days

Pull request review comment influxdata/influxdb

feat: influxd upgrade command

(reviewed diff excerpt, package upgrade)

	if helper.isWin() {
		scriptf("IF /I \"%%%s%%\" == \"yes\" (\n", helper.shUserVar(username))
	} else {
		if isFo {
			scriptf("if [ \"$%s\" = \"yes\" ]; then\n{\n", helper.shUserVar(username))
		} else {
			scriptf("if [ \"$%s\" = \"yes\" ]; then\n", helper.shUserVar(username))
		}
	}

I don't think it's necessary to use a different variant when there's file output. This is valid shell:

if true; then
     echo hello
fi 2>&1 | tee -a somefile
vlastahajek

comment created time in 6 days

PullRequestReviewEvent
PullRequestReviewEvent

push event rogpeppe/doorbell

Roger Peppe

commit sha 4c6da22962c2b81c5e1eff61f556005b3c4fb524

mcp23017: update with latest from tinygo-drivers PR

view details

Roger Peppe

commit sha e306d98ede54cb712bfc1b2d9a101feb2d6d8889

use one debouncer per button

view details

push time in 7 days

PR opened influxdata/influxdb

fix: storage: close PointsWriter when Engine is closed

The PointsWriter has a Close method which seems like it should be called when the Engine is shut down.
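
As a generic sketch of the pattern being proposed (hypothetical types, not the actual influxdb code):

package storagedemo

import "io"

// Engine stands in for a component that owns a PointsWriter-like resource.
type Engine struct {
	pointsWriter io.Closer
	// other fields elided
}

// Close releases the sub-components the engine owns when the engine
// itself is shut down.
func (e *Engine) Close() error {
	return e.pointsWriter.Close()
}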

+5 -0

0 comment

1 changed file

pr created time in 7 days

PR opened influxdata/influxdb

fix: chronograf/organizations: avoid nil context with WithValue

Under later Go versions, the tests panic because it's not OK to pass a nil Context to context.WithValue.
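
A minimal illustration of the panic and the fix (not the chronograf code itself):

package main

import (
	"context"
	"fmt"
)

type ctxKey string

func main() {
	// context.WithValue panics with "cannot create context from nil parent"
	// when given a nil parent; use context.Background() (or context.TODO())
	// as the root context instead.
	ctx := context.WithValue(context.Background(), ctxKey("org"), "default")
	fmt.Println(ctx.Value(ctxKey("org"))) // default
}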

+19 -25

0 comment

4 changed files

pr created time in 7 days

create branch influxdata/influxdb

branch : rogpeppe-005-storage-close-pointswriter

created branch time in 7 days

create branch influxdata/influxdb

branch : rogpeppe-004-avoid-nil-context

created branch time in 7 days

pull request comment influxdata/flux

fix: fixed exponent order of precedence to be above multiplicative operators

Could this break existing customers' query code? I wonder if it might be worth providing some way for people to vet their existing expressions for whether they've changed meaning (or even generate an error for some transition period - people could work around that by adding explicit brackets)
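
For illustration (assuming `^` previously bound no more tightly than `*`): an expression such as `2 ^ 2 * 3` would silently change from `2 ^ (2 * 3) = 64` to `(2 ^ 2) * 3 = 12`, which is exactly the kind of shift that explicit brackets or a vetting tool would catch.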

Bolladeen

comment created time in 9 days

issue comment influxdata/influxdb

cli write: Error: Failed to write data: bufio.Scanner: token too long

If one of the field values is larger than the limit, then presumably it's not possible to split the write? Maybe that's what's going on.
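
For reference, a sketch of where that error comes from in general (not the influx CLI's code): bufio.Scanner rejects any single token larger than its buffer limit (64KB by default) with bufio.ErrTooLong, and the limit can be raised with Scanner.Buffer.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	huge := strings.Repeat("x", 100*1024) // a single 100KB line

	s := bufio.NewScanner(strings.NewReader(huge))
	for s.Scan() {
	}
	fmt.Println(s.Err()) // bufio.Scanner: token too long

	s = bufio.NewScanner(strings.NewReader(huge))
	s.Buffer(make([]byte, 0, 64*1024), 1024*1024) // raise the max token size to 1MB
	for s.Scan() {
	}
	fmt.Println(s.Err()) // <nil>
}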

russorat

comment created time in 10 days

push event rogpeppe/genericdemo

Roger Peppe

commit sha e25e6c0ee65bb0f048e13be5e19bf71be16d7516

use new generics syntax; remove cruft

view details

push time in 10 days

started influxdata/telegraf

started time in 12 days

push event influxdata/influxdb

Roger Peppe

commit sha 3e4c4028e6f758d446a3b161160cbe80b03e29ed

fix: http: add required name to LabelCreateRequest

The label creation operation always requires a name, so make the OpenAPI specification reflect that.

view details

Roger Peppe

commit sha f1c5c7536970c47c5a66895d2bc6d119e847cd21

Merge pull request #19544 from influxdata/rogpeppe-002-label-create-requires-name

fix: http: add required name to LabelCreateRequest

view details

push time in 13 days

PR merged influxdata/influxdb

fix: http: add required name to LabelCreateRequest

The label creation operation always requires a name, so make the OpenAPI specification reflect that.

Also fix a couple of other minor issues with the OpenAPI specification in passing.

+4 -4

0 comment

1 changed file

rogpeppe

pr closed time in 13 days

started influxdata/influxdb

started time in 13 days

Pull request review comment rogpeppe/go-internal

testscript: add unix2dos command

(reviewed diff excerpt, package testscript)

	filename := ts.MkAbs(arg)
	data, err := ioutil.ReadFile(filename)
	if err != nil {
		ts.Fatalf("%s: %v", filename, err)

			ts.Check(err)

The error from ReadFile already has the filename in it.

twpayne

comment created time in 14 days

Pull request review comment rogpeppe/go-internal

testscript: add unix2dos command

(reviewed diff excerpt, package testscript)

	data = bytes.Join(bytes.Split(data, []byte{'\n'}), []byte{'\r', '\n'})

I wonder about:

data = bytes.ReplaceAll(data, []byte{'\n'}, []byte{'\r', '\n'})

which is both shorter and more efficient.

But also, I wonder if it would be better if this command was idempotent.

How about:

// (requires the "bufio" and "bytes" imports)
func unix2DOS(data []byte) []byte {
	var out []byte
	// bufio.ScanLines strips the trailing "\n" and any preceding "\r",
	// so this is idempotent: input that already has DOS line endings is unchanged.
	for scan := bufio.NewScanner(bytes.NewReader(data)); scan.Scan(); {
		out = append(out, scan.Bytes()...)
		out = append(out, '\r', '\n')
	}
	return out
}

That way you can call unix2DOS without thinking too much.

twpayne

comment created time in 14 days

PullRequestReviewEvent
PullRequestReviewEvent

push event rogpeppe/go-generics-experiment

Roger Peppe

commit sha 992e1722296ba2bfc433b46a3a4cacc5ae2c366f

remove build artifacts

view details

push time in 14 days

create branch rogpeppe/go-generics-experiment

branch : master

created branch time in 14 days

created repository rogpeppe/go-generics-experiment

created time in 14 days

push event rogpeppe-contrib/ajwerner-btree

Roger Peppe

commit sha a4ea600d0c22dcd397036ad4eb2a1acd5bcbac4a

remove type param from Node

view details

Roger Peppe

commit sha 30823f2c0799a86b86ed33ae2be7592bc3399083

fix for latest go2go version

view details

Roger Peppe

commit sha 43575fb8a3ea1c8478d9156896dd4ba90a263381

use new syntax; tests now panic :)

view details

push time in 14 days

PullRequestReviewEvent

issue comment cuelang/cue

trailing newline not allowed in \() string interpolation

aside from convenience, how would you justify a trailing comma in `"\(foo,)"`?

Convenience seems like a reasonable argument to me. After all, Go allowed trailing commas in function arg lists and other places for a similar reason. This doesn't seem so different to me.

rogpeppe

comment created time in 15 days

issue comment cuelang/cue

cmd/cue: unused import prevents definition

Yes, it seems like that was why the error was spurious. If it's fixed now, then great!

rogpeppe

comment created time in 15 days

issue comment golang/go

proposal: testing: t.Cleanup should run on panic

Cleanup definitely should run on panic. That was always part of the design (and the original code at least tested for that AFAIR) and if it doesn't, it should be considered a bug IMHO. Furthermore, even if one Cleanup function panics, the others should still run.

powerman

comment created time in 16 days

issue opened cuelang/cue

cmd/cue: yaml import fails on string containing zero code point

commit 456108de1c31cd5c61045ec20832c3723f2ae9b2

The following YAML fails on import:

b: "\0"

The error message is:

b: invalid string: invalid syntax:
    ./y.yaml:1:5

The syntax appears correct, and 0 is a valid Unicode code point, so this should be OK.

created time in 17 days

issue opened go-yaml/yaml

v3: string with leading tab isn't quoted correctly.

commit eeeca48fe7764f320e4870d231902bf9c1be2c08

If I marshal a string that starts with a tab, it isn't quoted correctly and cannot be read back in.

https://play.golang.org/p/OaXOsH0OgHe
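
A self-contained version of the playground reproducer (a sketch; assumes gopkg.in/yaml.v3):

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

func main() {
	in := "\thello"
	data, err := yaml.Marshal(in)
	if err != nil {
		panic(err)
	}
	fmt.Printf("marshalled as: %q\n", data)

	var out string
	// Reported bug: the emitted document cannot be unmarshalled back,
	// because the leading tab is not quoted/escaped correctly.
	if err := yaml.Unmarshal(data, &out); err != nil {
		fmt.Println("round trip failed:", err)
	} else {
		fmt.Printf("round trip ok: %q\n", out)
	}
}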

created time in 17 days

issue closed go-yaml/yaml

v3: documentation omissions

The docs don't mention (but should) that encoding.TextMarshaler and encoding.TextUnmarshaler implementations are respected when marshaling and unmarshaling values.

They probably should also mention that a MarshalYAML can return a *Node value if it wants fine control over marshaling behaviour. I don't think that it's possible to infer this from the docs alone.

closed time in 17 days

rogpeppe

issue opened go-yaml/yaml

v3: documentation omissions

The docs don't mention (but should) that encoding.TextMarshaler and encoding.TextUnmarshaler implementations are respected when marshaling and unmarshaling values.

They probably should also mention that a MarshalYAML can return a *Node value if it wants fine control over marshaling behaviour. I don't think that it's possible to infer this from the docs alone.

created time in 17 days

push event rogpeppe/godate

Roger Peppe

commit sha 58b236ecb5628c490e4f951067245d2b8296005c

adjust usage text slightly

view details

push time in 17 days

push event rogpeppe/godate

Roger Peppe

commit sha 93ce1f86bcc2e325c5240deb24f8f4171186a260

support time zone listing

view details

push time in 17 days

push event rogpeppe/godate

Roger Peppe

commit sha 613227d66fb1a779442e541680c917117dbfaa7e

support time zone listing

view details

push time in 17 days

PR opened influxdata/influxdb

fix: http: add required name to LabelCreateRequest

The label creation operation always requires a name, so make the OpenAPI specification reflect that.

+4 -4

0 comment

1 changed file

pr created time in 18 days

push event influxdata/influxdb

Roger Peppe

commit sha 3e4c4028e6f758d446a3b161160cbe80b03e29ed

fix: http: add required name to LabelCreateRequest

The label creation operation always requires a name, so make the OpenAPI specification reflect that.

view details

push time in 18 days

create branch influxdata/influxdb

branch : rogpeppe-002-label-create-requires-name

created branch time in 18 days

issue comment golang/go

testing: document rules for using TB

Personally, I think most of those rules should be relaxed. We could document that Fatalf just calls runtime.Goexit (and document that it won't actually abort the entire test). We could allow Logf to run after the test has ended (and document that the logged value won't appear if it has).

Having extra rules on the testing infrastructure makes it harder to diagnose the real problems, in my view, and sometimes non-conformance is accidental (for example if a goroutine that's calling Logf happens not to have been shut down properly before the test has completed).

dsymonds

comment created time in 18 days

issue opened cuelang/cue

cmd/cue: importing still uses old-style YAML

created time in 18 days

push event influxdata/influxdb

Roger Peppe

commit sha f8013ac12daba92a44359e12646329eec19506f9

fix: remove unnecessary replace directives from go.mod

From online discussion, the /dev/null replacement was "a hacky way to prevent people from re-introducing a dependency on the then-deprecated platform repo", and nothing in the module depends at all on the erroneously capitalized logrus repo:

% go list -m
github.com/influxdata/influxdb/v2
% go mod why -m github.com/Sirupsen/logrus
(main module does not need module github.com/Sirupsen/logrus)

Replace directives are potentially dangerous as they can change semantics for importers of public packages which won't inherit the same replace directives, so it's best to avoid them if possible.

view details

Roger Peppe

commit sha deb99b38850494524bfe86bc1bf38b42ccef391b

Merge pull request #19493 from influxdata/rogpeppe-001-remove-replace-directives

fix: remove unnecessary replace directives from go.mod

view details

push time in 19 days

PR merged influxdata/influxdb

fix: remove unnecessary replace directives from go.mod

From online discussion, the /dev/null replacement was "a hacky way to prevent people from re-introducing a dependency on the then-deprecated platform repo", and nothing in the module depends at all on the erroneously capitalized logrus repo:

% go list -m
github.com/influxdata/influxdb/v2
% go mod why -m github.com/Sirupsen/logrus
(main module does not need module github.com/Sirupsen/logrus)

Replace directives are potentially dangerous as they can change semantics for importers of public packages which won't inherit the same replace directives, so it's best to avoid them if possible.

+0 -4

1 comment

1 changed file

rogpeppe

pr closed time in 19 days

issue comment golang/go

proposal: testing: add TB.Setenv()

@mdlayher

IMHO fetching data from environment variables should only ever be done in package main, and then plumbed throughout the rest of your program using regular Go data types. I don't think this would encourage good code hygiene since environment variables are effectively globals.

Even if that's the case, it's still a good idea to test that code in main that fetches the data from environment variables, and for that, a Setenv primitive can be very helpful. Moreover, sometimes there's no way to plumb things through without breaking backward compatibility, and, bad practice or not, environment variables can be a pragmatic solution in that case.

Although I haven't used it a huge amount (I count 73 uses in my code base), I've found this primitive to be very useful at times. It's not entirely trivial to get right either, because code can be sensitive to whether an environment variable is present vs empty, so just calling os.Setenv with the old value isn't always sufficient.
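
A sketch of what such a helper has to get right (hypothetical helper, not the proposed API): restoring the variable means distinguishing "was unset" from "was set to the empty string", which os.LookupEnv makes possible.

package example // in a file ending in _test.go

import (
	"os"
	"testing"
)

// setenv sets an environment variable for the duration of the test and
// restores the previous state afterwards, preserving the distinction
// between "unset" and "set to the empty string".
func setenv(t *testing.T, name, value string) {
	t.Helper()
	old, wasSet := os.LookupEnv(name)
	if err := os.Setenv(name, value); err != nil {
		t.Fatal(err)
	}
	t.Cleanup(func() {
		if wasSet {
			os.Setenv(name, old)
		} else {
			os.Unsetenv(name)
		}
	})
}

func TestWithEnv(t *testing.T) {
	setenv(t, "SOME_VAR", "some value") // SOME_VAR is just an illustrative name
	// ... code under test that reads SOME_VAR ...
}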

FWIW almost all of the uses were to test code that was there precisely to get environment variables and turn them into configuration available to the rest of the system.

I support this proposal (with modifications to unset instead of reset when appropriate) particularly with the isParallel check, which isn't something that external code can do, and nicely guards against a potential pitfall.

sagikazarmark

comment created time in 20 days

Pull request review comment frankban/quicktest

Introduce top level Assert, Check and Patch functions

(reviewed diff excerpt)

	// Check runs the given check using the provided t and continues execution in
	// case of failure. For instance:
	//
	//     qt.Check(t, answer, qt.Equals, 42)
	//     qt.Check(t, got, qt.IsNil, qt.Commentf("iteration %d", i))
	//
	// Additional args (not consumed by the checker), when provided, are included as
	// comments in the failure output when the check fails.
	func Check(t testing.TB, got interface{}, checker Checker, args ...interface{}) bool {
		return check(checkParams{

return New(t).Check(got, checker, args...)

(and similarly for Assert)?

Then you can remove the extra check function too.

frankban

comment created time in 21 days

PullRequestReviewEvent
PullRequestReviewEvent