Maximillian von Briesen (mobyvb) · Atlanta · http://mobyvb.com · "I make stuff"

mobyvb/jamcircle 6

Jammin on the web

mobyvb/grandcentraljam 2

A web app for version control in music production

divonbriesen/devfoundry 1

for working with DevFoundry DubMeth

justinpotts/arewesortedyet 1

Is the @Mozilla ball pit sorted yet?

mobyvb/ball-pit-sorting 1

a useful, everyday script to help you sort ball pits

mobyvb/3750Prototype 0

A prototype for a VR thing for my design class (move along)

mobyvb/arewesortedyet 0

Is the @Mozilla ball pit sorted yet?

push event storj/storj

Moby von Briesen

commit sha 4910c696f90ca2b8078a0cd24cda18a852e7789d

ci,scripts/tests/rolling-upgrade: run rolling upgrade test on private jenkins Change-Id: Ic1c9f7539ee0ac371bcb856bdbcac2ff6c0ccc65

view details

push time in 25 days

push event storj/storj

Moby von Briesen

commit sha fa9d7836bde92b3f707fd2fdeb7ff7ce6daab7b1

ci,scripts/tests/rolling-upgrade: run rolling upgrade test on private jenkins Change-Id: Ic1c9f7539ee0ac371bcb856bdbcac2ff6c0ccc65

view details

push time in 25 days

push event storj/storj

Moby von Briesen

commit sha 59096b3987657b84016e802d4a14a7251f1428e0

ci,scripts/tests/rolling-upgrade: run rolling upgrade test on private jenkins Change-Id: Ic1c9f7539ee0ac371bcb856bdbcac2ff6c0ccc65

view details

push time in 25 days

push event storj/storj

Moby von Briesen

commit sha e9316c79e669e06de4393721667304f875d84a9a

ci,scripts/tests/rolling-upgrade: run rolling upgrade test on private jenkins Change-Id: Ic1c9f7539ee0ac371bcb856bdbcac2ff6c0ccc65

view details

push time in 25 days

push event storj/storj

Egon Elbre

commit sha 858f349a1a774044473b8e51692ea9f69ff4a154

lib/uplinkc: add user agent Change-Id: I6627851112823a72e4001b45d9a1b80ba76b8346

view details

Ivan Fraixedes

commit sha 46c8d2e9c72d68f956a6b810f115a7341d7a1cb9

private/testplanet: Wait until peer ends when closing it Close a peer didn't guarantee that the peer ended its services and we want that when a StopPeer method returns the peer service is actually finished. Change-Id: If97f41b7e404990555640c71e097ebc719678ae7

view details

Yingrong Zhao

commit sha c6854accdf1dea5cf2721291148dbff6c13c7b29

scripts: add test-versions stage to private Jenkins test-sim-versions.sh tests upgrading the satellite, storagenodes, and uplinks from the most recent release to master, and ensures that compatibility across all uplink versions since v0.15 is maintained. Change-Id: I80a54236d0eb2d681716caf4b825a883bdc25ef1

view details

Egon Elbre

commit sha 9e4d8331701a9b015b46a73d5bcb079c02983e11

private/testplanet: use default interval The default interval tries to balance: 1. ensure that most things run at least once during tests 2. ensure that they won't run over 10 times Change-Id: I911b57b595ffbef1963654bf4a42efad1534b058

view details

Andrew Harding

commit sha 62c58f4a9a98bca36509cc2e8f60780626cbad5b

satellite: consistent report range arguments This change updates the three satellite report commands that accept date ranges to parse and treat those dates uniformly. - End dates are now uniformly exclusive. Exclusive end dates helps operators avoid one-off errors on month boundaries, as in the operator does not have to remember how many days are in that month and can just run the report from the 1st (inclusive) through the 1st (exclusive). - Fixed the date range validity check which only failed if the start date came after the end date (it should have failed dates that were equal since the check happened after adjusting for inclusivity). Change-Id: Ib2ee1c71ddb916c6e1906834d5ff0dc47d1a5801

view details

Egon Elbre

commit sha ea455b6df0423637a5aa14898c1d6fd36f252181

all: remove code to default to grpc We have moved to drpc so we don't need to have code for building with grpc only. Change-Id: I55732314dca0d5b4ce1132b68de4186a15d91b21

view details

Qweder93

commit sha e47ec84deee72ce1241f8d50c21be3a2f909fe9e

storagenode notification service and api added Change-Id: I36898d7c43e1768e0cae0da8d83bb20b16f0cdde

view details

Ethan

commit sha b959ccbae6bb3fcc9e1c357414a4cf8127ff7b48

satellite/gracefulexit: Use proper rpc status codes for disqualified nodes and too many connections Change-Id: I41380026175e7678c7cd3d44211de8eb86ce4d0f

view details

Egon Elbre

commit sha 6fc009f6e447e16a2096a0308bc37efd0c201d47

uplink/eestream: move Pad to encryption package to break dependency to eestream Change-Id: I0c9bc3c65f161d79812196ac8285405e6be04c9e

view details

Yingrong Zhao

commit sha 6e71591b9b74bd49a74b63861d06ca63e50cc5d9

satellitedb;storagenodedb: remove unnecessary use of DB transactions in graceful exit Change-Id: Ief0a28c6750c130896b48bfebfbea7fb3caa810f

view details

Egon Elbre

commit sha 98243360d6873661107325710cfa42ac44cccf22

cmd/storagenode-updater: faster update test flate compression with default settings plays very poorly together with race, causing test to take a significant amount of time. Use pass-through compression to avoid the issue. Improves test from 2m45s to 17s. Change-Id: Iadf1381c538736d48e018164697bdfd3356e24b8

view details

Egon Elbre

commit sha 31fbdcc8f7967e23fbb1051ef0afe13ac5f304e3

pkg/encryption: better EncodingDecodingStress The test case wasn't testing all the combinations. But, testing all 3 byte combinations would take too long, instead test special values and +- 1 of them, with some additional noise characters. Change-Id: If53ff25863a1f27c534922bd399fbbbdfefda441

view details

Egon Elbre

commit sha acb4435a67d03c73974634ec27792b578955afa8

satellite/satellitedb: improve Cockroach migrate test Load schemas in parallel instead of one-by-one. Optimizes from 2m30s to 1m15s. Change-Id: I0bf6381a0ae99b44271fe55d4ee658683064c097

view details

ccase

commit sha 6f1eaef8d4d579cef54a8605a2e5de8483f5fb41

cmd/uplink: Pass -- in tests to avoid treating generated arg strings as flags. Change-Id: I41c50b9f645b57ddc8832b0fc92f1c6bfaf2de8d

view details

Egon Elbre

commit sha 006baa9ca650b15447c26cc9b1d3c619a3b50641

pkg/rpc: remove drpc aliases We need to split up pb package, which means we cannot have a core package that depends on them. Change-Id: I7f4f6fd82f89a51a9b2ad08bf2b1207253b8a215

view details

Egon Elbre

commit sha d55288cf68c22fd6755e27a75e3ad9bcee05d6e2

pkg/rpc: replace methods with direct calls to pb Change-Id: I8bd015d8d316a2c12c1daceca1d9fd257f6f57bc

view details

Egon Elbre

commit sha 3849b9a21a5e75510a40c6d3b481e948cf7266e9

pkr/rpc: remove RawConn Change-Id: I61bc10b82f178a16f27279b85b627553d122c174

view details

Isaac Hess

commit sha 7d1e28ea307d4778b141f3f97f8556bdfa5428c4

storagenode: Include trash space when calculating space used This commit adds functionality to include the space used in the trash directory when calculating available space on the node. It also includes this trash value in the space used cache, with methods to keep the cache up-to-date as files are trashed, restored, and emptied. As part of the commit, the RestoreTrash and EmptyTrash methods have slightly changed signatures. RestoreTrash now also returns the keys that were restored, while EmptyTrash also returns the total disk space recovered. Each of these changes makes it possible to keep the cache up-to-date and know how much space is being used/recovered. Also changed is the signature of PieceStoreAccess.ContentSize method. Previously this method returns only the content size of the blob, removing the size of any header data. This method has been renamed `Size` and returns both the full disk size and content size of the blob. This allows us to only stat the file once, and in some instances (i.e. cache) knowing the full file size is useful. Note: This commit simply adds the trash size data to the piece size data we were already collecting. The piece size data is not accurate for all use-cases (e.g. because it does not contain piece header data); however, this commit does not fix that problem. Now that the ContentSize (Size) method returns the full size of the file, it should be easier to fix this problem in a future commit. Change-Id: I4a6cae09e262c8452a618116d1dc66b687f59f85

view details
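
To illustrate the renamed `Size` method described in the commit above: returning both sizes lets a caller stat the file once and serve both the space-used cache (full size) and content-size consumers. The types below are hypothetical stand-ins, not the storagenode's real API:

```go
package main

import "fmt"

// blobInfo is a stand-in for a stored piece: a fixed-size header followed
// by the piece content.
type blobInfo struct {
	headerSize int64
	fullSize   int64 // size on disk, header included
}

// Size returns both the full on-disk size and the content size, so one
// stat serves every caller (the old ContentSize returned only the latter).
func (b blobInfo) Size() (full, content int64) {
	return b.fullSize, b.fullSize - b.headerSize
}

func main() {
	b := blobInfo{headerSize: 512, fullSize: 4608}
	full, content := b.Size()
	fmt.Println(full, content) // 4608 4096
}
```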

Kaloyan Raev

commit sha 7df3c9efc30991ce9fb1b30f796252a507ef66cc

cmd/uplink: use arguments in share command as allowed path prefixes Fixes Least Authority Issue F: https://storjlabs.atlassian.net/browse/V3-3409 If the --allowed-path-prefix flag is not set to the `share` command, any command arguments will be used as allowed path prefixes. This patch also improves the output of the `share` command to print the state of all restrictions, so users can confirm they match their intention. Change-Id: Id1b4df20b182d3fe04cb2196feea090975fce8b4

view details

Fadila

commit sha 115b8b0fc8ac6421d134be9c8fe1a7cf33c99e8e

storagenode/piecestore: delete several pieces in a single request This is part of the deletion performance improvement. See https://storjlabs.atlassian.net/browse/V3-3349 Change-Id: Idcd83a302f2bd5cc3299e1a4195a7e177f452599

view details

push time in 25 days

Pull request review comment storj/storj

notifications draft

 func (service *Service) IsOnline(node *NodeDossier) bool {
 	return time.Since(node.Reputation.LastContactSuccess) < service.config.Node.OnlineWindow
 }
+// IsOnline checks if a node is 'online' based on the collected statistics.
+func (service *Service) IsNew(node *NodeDossier) bool {
+	return time.Since(node.Reputation.LastContactSuccess) < service.config.Node.OnlineWindow

You need access to a NodeSelectionConfig (see FindStorageNodeWithPreferences below). The values are configured above the DB layer in satellite/overlay. Then you check that the total audit count and total uptime count are less than the values in that struct. I'm at an appointment right now but can talk more tomorrow.
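
A minimal sketch of the check the comment describes. All type and field names here are illustrative stand-ins for the overlay's real structs, and the exact combination of conditions is an assumption:

```go
package main

import "fmt"

// NodeSelectionConfig carries the thresholds configured above the DB layer
// in satellite/overlay (names illustrative).
type NodeSelectionConfig struct {
	AuditCount  int64
	UptimeCount int64
}

// Reputation is a stand-in for the per-node statistics in a NodeDossier.
type Reputation struct {
	TotalAuditCount  int64
	TotalUptimeCount int64
}

// isNew reports whether a node is still "new": per the comment, its total
// audit count and total uptime count are below the configured thresholds.
// (Whether "either below" should also count as new is up to the real
// config's semantics.)
func isNew(r Reputation, cfg NodeSelectionConfig) bool {
	return r.TotalAuditCount < cfg.AuditCount && r.TotalUptimeCount < cfg.UptimeCount
}

func main() {
	cfg := NodeSelectionConfig{AuditCount: 100, UptimeCount: 100}
	fmt.Println(isNew(Reputation{TotalAuditCount: 3, TotalUptimeCount: 7}, cfg))
}
```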

rikysya

comment created time in a month

delete branch storj/docs

delete branch: postgres

delete time in a month

Gollum event

PR opened storj/docs

Test-network.md: add simpler method for creating initial postgres database

To create the initial teststorj database, use a one-line docker command rather than suggesting logging into the postgres shell and running a SQL command.

+4 -4

0 comment

1 changed file

pr created time in a month

create branch storj/docs

branch: postgres

created branch time in a month

push event storj/storj

Moby von Briesen

commit sha 3da7bdd82f72d4fac5c5a6c7ba470b1e58ba7db2

add sleep (HACK)

view details

push time in 2 months

push event storj/storj

Moby von Briesen

commit sha 71869a3245923e1050413bb4d402529b84b65a7b

add sleep (HACK)

view details

push time in 2 months

push event storj/storj

Moby von Briesen

commit sha 123d229c92b5a8662309bad68a030763b65c9363

add port to createdb call

view details

push time in 2 months

push event storj/storj

Moby von Briesen

commit sha 9c480a0ee571f88a890e9c59fd2981b3b43d0a25

remove -it

view details

push time in 2 months

push event storj/storj

Moby von Briesen

commit sha afdc30be0243a04935abdff597540acdaf1b2e7a

remove postgres container on err (HACK)

view details

push time in 2 months

push event storj/storj

Moby von Briesen

commit sha 8f8f359e18d4948233baeeaec4c114dada3f12d6

remove postgres container on err (HACK)

view details

push time in 2 months

push event storj/storj

Moby von Briesen

commit sha 2ef4810bd396fe488ef602887cb6081fa099bea2

create db without psql

view details

push time in 2 months

push event storj/storj

Moby von Briesen

commit sha 3a5959e4bac13f246d2405b00e14c75566538812

remove `steps`

view details

push time in 2 months

push event storj/storj

Moby von Briesen

commit sha 0311f8d6ebd39adf0017152f87f1254d887665e5

start postgres in background

view details

push time in 2 months

push event storj/storj

Moby von Briesen

commit sha bcb515378f55dca28307d70d69cb080e3a5b8d7c

temporarily comment out slack messages + emails

view details

Moby von Briesen

commit sha ed8d66829d309f32752bd92d06b32673d1eefcaa

Merge branch 'green/automated-test-versions' of github.com:storj/storj into green/automated-test-versions

view details

push time in 2 months

push event storj/storj

Moby von Briesen

commit sha 5b74a0a8c96ec2fd22f6eb9de3c9c2fffb7a2a63

update jenkinsfile

view details

push time in 2 months

push event storj/storj

Moby von Briesen

commit sha e81cfdd4cf69b9410142980836e9704625f5c9d0

only setup uplink if directory does not exist

view details

Moby von Briesen

commit sha 97af2da313a4286a16c9ae4c4b6394ec5bf82ba6

Merge branch 'green/automated-test-versions' of github.com:storj/storj into green/automated-test-versions

view details

push time in 2 months

push event storj/storj

Moby von Briesen

commit sha 0e8db3c5104b6cf33f576216310819d0165d9690

add temporary test-versions-custom-ul-cfg

view details

Moby von Briesen

commit sha fe52607866e1a62386655fc0c69d2032747c4bf9

Merge branch 'green/automated-test-versions' of github.com:storj/storj into green/automated-test-versions

view details

push time in 2 months

Pull request review comment storj/storj

script: automated test for testing different version matches between peers

+// Copyright (C) 2019 Storj Labs, Inc.
+// See LICENSE for copying information.
+
+// +build ignore
+
+package main
+
+import (
+	"errors"
+	"fmt"
+	"io/ioutil"
+	"log"
+	"os"
+	"os/exec"
+	"strings"
+
+	"gopkg.in/yaml.v2"
+)
+
+var SkippedUplinkVersions = []string{
+	"v0.10.1",
+	"v0.10.0", "v0.10.2",
+	"v0.11.0", "v0.11.1", "v0.11.2", "v0.11.3", "v0.11.4", "v0.11.5", "v0.11.6", "v0.11.7",
+	"v0.12.0", "v0.12.1", "v0.12.2", "v0.12.3", "v0.12.4", "v0.12.5", "v0.12.6",
+	"v0.13.0", "v0.13.1", "v0.13.2", "v0.13.3", "v0.13.4", "v0.13.5", "v0.13.6",
+}
+
+type VersionsTest struct {
+	Stage1 *Stage `yaml:"stage1"`
+	Stage2 *Stage `yaml:"stage2"`
+}
+
+type Stage struct {
+	SatelliteVersion    string   `yaml:"sat_version"`
+	UplinkVersions      []string `yaml:"uplink_versions"`
+	StoragenodeVersions []string `yaml:"storagenode_versions"`
+}
+
+func main() {
+	if err := run(); err != nil {
+		log.Fatal(err)
+	}
+}
+
+func run() error {
+	if len(os.Args) < 3 {
+		return errors.New("Please provide path to script file and yaml file via command line")
+	}
+
+	scriptFile := os.Args[1]
+	yamlFile := os.Args[2]

Yingrong actually has modified the script to get the uplink, storagenode, and satellite so we don't need a config.yaml anymore. Basically we are planning on doing the following:

`latest_release` - the last actual release (not including the current/ongoing release)
`master` - the latest code (which the current/ongoing release will be based on)
`uplink_versions` - a list of uplink releases to test (e.g. `26.3, 25.3, 24.5, ... 15.4`)

`stage 1`
* set up a clean storj-sim based on the `latest_release`
* for every uplink in `uplink_versions`, upload three files (`bucket-v0.XX.X/file-#`)

`stage 2`
* start satellite on `master`
* start storagenodes 0-4 on `latest_release`
* start storagenodes 5-9 on `master`
* for every uplink in `uplink_versions`, download every file that was uploaded in `stage 1` and verify correctness

All the versions discussed above should be able to be acquired without having to maintain an actual set of versions somewhere.
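
The stage 1 naming scheme above can be sketched like so; the helper name and loop structure are assumptions, only the `bucket-v0.XX.X/file-#` pattern comes from the comment:

```go
package main

import "fmt"

// stage1Paths returns the object paths stage 1 uploads: for every uplink
// version, three files in a bucket named after that version. Stage 2 then
// downloads every one of these paths and verifies correctness.
func stage1Paths(uplinkVersions []string) []string {
	var paths []string
	for _, v := range uplinkVersions {
		for i := 1; i <= 3; i++ {
			paths = append(paths, fmt.Sprintf("bucket-%s/file-%d", v, i))
		}
	}
	return paths
}

func main() {
	for _, p := range stage1Paths([]string{"v0.26.3", "v0.15.4"}) {
		fmt.Println(p)
	}
}
```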

VinozzZ

comment created time in 2 months

delete branch storj/storj

delete branch: green/test-bad-migration

delete time in 2 months

PR closed storj/storj

satellite/satellitedb: add bad migration [cla-signed]

What: Make sure the migration test is failing on jenkins when it is supposed to.

Why: Jenkins should fail for this PR. But it is not failing.

Please describe the tests:

  • Test 1:
  • Test 2:

Please describe the performance impact:

Code Review Checklist (to be filled out by reviewer)

  • [ ] NEW: Are there any Satellite database migrations? Are they forwards and backwards compatible?
  • [ ] Does the PR describe what changes are being made?
  • [ ] Does the PR describe why the changes are being made?
  • [ ] Does the code follow our style guide?
  • [ ] Does the code follow our testing guide?
  • [ ] Is the PR appropriately sized? (If it could be broken into smaller PRs it should be)
  • [ ] Does the new code have enough tests? (every PR should have tests or justification otherwise. Bug-fix PRs especially)
  • [ ] Does the new code have enough documentation that answers "how do I use it?" and "what does it do?"? (both source documentation and higher level, diagrams?)
  • [ ] Does any documentation need updating?
  • [ ] Do the database access patterns make sense?
+483 -0

0 comment

2 changed files

mobyvb

pr closed time in 2 months

Pull request review comment storj/storj

script: automated test for testing different version matches between peers

+// Copyright (C) 2019 Storj Labs, Inc.
+// See LICENSE for copying information.
+
+// +build ignore
+
+package main
+
+import (
+	"errors"
+	"fmt"
+	"io/ioutil"
+	"log"
+	"os"
+	"os/exec"
+	"strings"
+
+	"gopkg.in/yaml.v2"
+)
+
+var SkippedUplinkVersions = []string{
+	"v0.10.1",
+	"v0.10.0", "v0.10.2",
+	"v0.11.0", "v0.11.1", "v0.11.2", "v0.11.3", "v0.11.4", "v0.11.5", "v0.11.6", "v0.11.7",
+	"v0.12.0", "v0.12.1", "v0.12.2", "v0.12.3", "v0.12.4", "v0.12.5", "v0.12.6",
+	"v0.13.0", "v0.13.1", "v0.13.2", "v0.13.3", "v0.13.4", "v0.13.5", "v0.13.6",
+}
+
+type VersionsTest struct {
+	Stage1 *Stage `yaml:"stage1"`
+	Stage2 *Stage `yaml:"stage2"`
+}
+
+type Stage struct {
+	SatelliteVersion    string   `yaml:"sat_version"`
+	UplinkVersions      []string `yaml:"uplink_versions"`
+	StoragenodeVersions []string `yaml:"storagenode_versions"`
+}
+
+func main() {
+	if err := run(); err != nil {
+		log.Fatal(err)
+	}
+}
+
+func run() error {
+	if len(os.Args) < 3 {
+		return errors.New("Please provide path to script file and yaml file via command line")
+	}
+
+	scriptFile := os.Args[1]
+	yamlFile := os.Args[2]

I am not sure if I remember the reason (@isaachess might), but I think it could have been that a yaml file would be easier for someone like Jens to maintain. We actually originally had the versions hardcoded into test-sim-versions.sh, then we wanted to split them out into a yaml file since maintaining them in bash was unsustainable. But importing a yaml file from bash is not trivial, so we created the go file to facilitate that. At that point, maybe we could have just forgotten about the yaml file entirely and done that stuff from Go. I don't have a strong preference one way or the other.

VinozzZ

comment created time in 2 months

push event storj/storj

Ivan Fraixedes

commit sha c400471bbc8ab7d81744df57bf4468dd559614cd

satellite/metainfo: Fix some docs comments Fix a documentation comment for one method and apply our code conventions to some that I stumbled. Change-Id: I3baf5d004a128dcd561c3e27c080aab345c64461

view details

Michal Niewrzal

commit sha b17d40ffd44b2dc43cd52ce2ec61a2bb63e86e5f

segment-reaper: delete command logic (#3660)

view details

Ivan Fraixedes

commit sha 8d49a99ad875cc973965cb6ae35fb8d04fca84e4

uplink/metainfo: Fix doc comments Fix some documentation comments in a few methods and apply our code style conventions to all of them in this source file of the package. Change-Id: I78186cb1e7d45020d6cf7bbea8a39d146ada1cb6

view details

Michal Niewrzal

commit sha 27462f68e9a80bb7a10aad19e297cf3e5b6d87b5

cmd/segment-reaper: add detecting and printing zombie segments (#3598)

view details

Michal Niewrzal

commit sha ffd570eb04524a5077adc9f6df87a4047c673cf3

uplink/metainfo: remove additional GetObject from download This change removes one additional metainfo.GetObject call during download. Change-Id: I86b4e5165f299069e3e18944fe79c697c6a514d3

view details

Nikolay Yurchenko

commit sha ea92c6860094e7e07411e0487c11df087087c6da

web/satelllite: default cc delete disabling (#3695)

view details

Vitalii Shpital

commit sha fa5288c254fc6bb9e60202dadd36b902006d2070

satellitedb: bucket search fixed (#3594)

view details

Nikolai Siedov

commit sha c6776ae6bbf5b211fc6346a179e301c3b750e577

error messages fixed (#3712)

view details

Vitalii Shpital

commit sha fe5cfeb61b28c95f2665dd994f7dfddbbfcd7548

web/storagenode: disk space chart's tooltip size fixed (#3684)

view details

Vitalii Shpital

commit sha 6ef359aba2ff67f5d1599789358dc2bb6a4d6f23

web/storagenode: blurred checks hint added for all satellites state (#3709)

view details

Egon Elbre

commit sha 56a3b62befd2ba5e808d2331eefe8c583fc6b180

satellite/satellitedb: ensure migration tests run (#3706) satellitedb migration tests ran against multiple base versions, however after the merging all the steps the base versions didn't exists anymore - which meant none of the migration tests were actually running.

view details

Maximillian von Briesen

commit sha 225aa93e1a097a1890b8db3b07e51ca9cdb73fb3

Merge branch 'master' into green/test-bad-migration

view details

push time in 2 months

Pull request review comment storj/storj

script: automated test for testing different version matches between peers

+- stage1:

I wonder if we can get jenkins to only run the test if this file has been updated. Maybe a question to ask Stefan.

VinozzZ

comment created time in 2 months

Pull request review comment storj/storj

script: automated test for testing different version matches between peers

+// Copyright (C) 2019 Storj Labs, Inc.
+// See LICENSE for copying information.
+
+// +build ignore
+
+package main
+
+import (
+	"errors"
+	"fmt"
+	"io/ioutil"
+	"log"
+	"os"
+	"os/exec"
+	"strings"
+
+	"gopkg.in/yaml.v2"
+)
+
+var SkippedUplinkVersions = []string{
+	"v0.10.1",
+	"v0.10.0", "v0.10.2",
+	"v0.11.0", "v0.11.1", "v0.11.2", "v0.11.3", "v0.11.4", "v0.11.5", "v0.11.6", "v0.11.7",
+	"v0.12.0", "v0.12.1", "v0.12.2", "v0.12.3", "v0.12.4", "v0.12.5", "v0.12.6",
+	"v0.13.0", "v0.13.1", "v0.13.2", "v0.13.3", "v0.13.4", "v0.13.5", "v0.13.6",
+}
+
+type VersionsTest struct {
+	Stage1 *Stage `yaml:"stage1"`
+	Stage2 *Stage `yaml:"stage2"`
+}
+
+type Stage struct {
+	SatelliteVersion    string   `yaml:"sat_version"`
+	UplinkVersions      []string `yaml:"uplink_versions"`
+	StoragenodeVersions []string `yaml:"storagenode_versions"`
+}
+
+func main() {
+	if err := run(); err != nil {
+		log.Fatal(err)
+	}
+}
+
+func run() error {
+	if len(os.Args) < 3 {
+		return errors.New("Please provide path to script file and yaml file via command line")
+	}
+
+	scriptFile := os.Args[1]
+	yamlFile := os.Args[2]
+
+	b, err := ioutil.ReadFile(yamlFile)
+	if err != nil {
+		return err
+	}
+
+	var tests []*VersionsTest
+	if err := yaml.Unmarshal(b, &tests); err != nil {
+		return err
+	}
+
+	var filteredUplinkTagList []string
+	for _, test := range tests {
+		filteredUplinkTagList, err = getVersions(test, filteredUplinkTagList)
+		if err != nil {
+			return err
+		}
+		if len(test.Stage1.UplinkVersions) < 1 {
+			test.Stage1.UplinkVersions = filteredUplinkTagList
+		}
+		if len(test.Stage2.UplinkVersions) < 1 {
+			test.Stage2.UplinkVersions = filteredUplinkTagList
+		}
+
+		if err := runTest(test, scriptFile); err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
+
+func runTest(test *VersionsTest, scriptFile string) error {
+	stage1SNVersions := formatMultipleVersions(test.Stage1.StoragenodeVersions)
+	stage2SNVersions := formatMultipleVersions(test.Stage2.StoragenodeVersions)
+	stage1UplinkVersions := formatMultipleVersions(test.Stage1.UplinkVersions)
+	stage2UplinkVersions := formatMultipleVersions(test.Stage2.UplinkVersions)
+	cmd := exec.Command(scriptFile, test.Stage1.SatelliteVersion, stage1UplinkVersions, stage1SNVersions, test.Stage2.SatelliteVersion, stage2UplinkVersions, stage2SNVersions)
+	cmd.Stdout = os.Stdout
+	cmd.Stderr = os.Stderr
+	return cmd.Run()
+}
+
+func formatMultipleVersions(snvs []string) string {
+	var s string
+	for i, snv := range snvs {
+		space := " "
+		if i == 0 {
+			space = ""
+		}
+		s = fmt.Sprintf("%s%s%s", s, space, snv)
+	}
+	return s
+}
+
+func getVersions(test *VersionsTest, filteredTagList []string) ([]string, error) {

nit - getVersions -> getUplinkVersions

VinozzZ

comment created time in 2 months

Pull request review comment storj/storj

script: automated test for testing different version matches between peers

+// Copyright (C) 2019 Storj Labs, Inc.
+// See LICENSE for copying information.
+
+// +build ignore
+
+package main
+
+import (
+	"errors"
+	"fmt"
+	"io/ioutil"
+	"log"
+	"os"
+	"os/exec"
+	"strings"
+
+	"gopkg.in/yaml.v2"
+)
+
+var SkippedUplinkVersions = []string{
+	"v0.10.1",
+	"v0.10.0", "v0.10.2",
+	"v0.11.0", "v0.11.1", "v0.11.2", "v0.11.3", "v0.11.4", "v0.11.5", "v0.11.6", "v0.11.7",
+	"v0.12.0", "v0.12.1", "v0.12.2", "v0.12.3", "v0.12.4", "v0.12.5", "v0.12.6",
+	"v0.13.0", "v0.13.1", "v0.13.2", "v0.13.3", "v0.13.4", "v0.13.5", "v0.13.6",
+}
+
+type VersionsTest struct {
+	Stage1 *Stage `yaml:"stage1"`
+	Stage2 *Stage `yaml:"stage2"`
+}
+
+type Stage struct {
+	SatelliteVersion    string   `yaml:"sat_version"`
+	UplinkVersions      []string `yaml:"uplink_versions"`
+	StoragenodeVersions []string `yaml:"storagenode_versions"`
+}
+
+func main() {
+	if err := run(); err != nil {
+		log.Fatal(err)
+	}
+}
+
+func run() error {
+	if len(os.Args) < 3 {
+		return errors.New("Please provide path to script file and yaml file via command line")
+	}
+
+	scriptFile := os.Args[1]
+	yamlFile := os.Args[2]
+
+	b, err := ioutil.ReadFile(yamlFile)
+	if err != nil {
+		return err
+	}
+
+	var tests []*VersionsTest
+	if err := yaml.Unmarshal(b, &tests); err != nil {
+		return err
+	}
+
+	var filteredUplinkTagList []string
+	for _, test := range tests {
+		filteredUplinkTagList, err = getVersions(test, filteredUplinkTagList)
+		if err != nil {
+			return err
+		}
+		if len(test.Stage1.UplinkVersions) < 1 {
+			test.Stage1.UplinkVersions = filteredUplinkTagList
+		}
+		if len(test.Stage2.UplinkVersions) < 1 {
+			test.Stage2.UplinkVersions = filteredUplinkTagList
+		}
+
+		if err := runTest(test, scriptFile); err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
+
+func runTest(test *VersionsTest, scriptFile string) error {
+	stage1SNVersions := formatMultipleVersions(test.Stage1.StoragenodeVersions)
+	stage2SNVersions := formatMultipleVersions(test.Stage2.StoragenodeVersions)
+	stage1UplinkVersions := formatMultipleVersions(test.Stage1.UplinkVersions)
+	stage2UplinkVersions := formatMultipleVersions(test.Stage2.UplinkVersions)
+	cmd := exec.Command(scriptFile, test.Stage1.SatelliteVersion, stage1UplinkVersions, stage1SNVersions, test.Stage2.SatelliteVersion, stage2UplinkVersions, stage2SNVersions)
+	cmd.Stdout = os.Stdout
+	cmd.Stderr = os.Stderr
+	return cmd.Run()
+}
+
+func formatMultipleVersions(snvs []string) string {
+	var s string
+	for i, snv := range snvs {
+		space := " "
+		if i == 0 {
+			space = ""
+		}
+		s = fmt.Sprintf("%s%s%s", s, space, snv)
+	}
+	return s
+}
+
+func getVersions(test *VersionsTest, filteredTagList []string) ([]string, error) {
+	if len(test.Stage1.UplinkVersions) > 0 && len(test.Stage2.UplinkVersions) > 0 || len(filteredTagList) > 0 {
+		return filteredTagList, nil
+	}
+	tags, err := exec.Command("bash", "-c", `git fetch --tags -q && git tag | sort | uniq | grep "v[0-9]"`).Output()
+	if err != nil {
+		return nil, err
+	}
+	stringTags := string(tags)
+	tagList := strings.Split(strings.TrimSpace(stringTags), "\n")
+	// skip specified versions if there's any
+	for _, tag := range tagList {
+		shouldSkip := false
+		for _, skip := range SkippedUplinkVersions {

nit - maybe better to make a SkippedUplinkMap[string]bool to make this process simpler
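
The map-based lookup suggested in the nit might look like this; a sketch with most of the skipped tags elided, not the PR's actual code:

```go
package main

import "fmt"

// skippedUplinkVersions as a set: membership becomes a single map lookup
// instead of a nested loop with a shouldSkip flag.
var skippedUplinkVersions = map[string]bool{
	"v0.10.0": true, "v0.10.1": true, "v0.10.2": true,
	// ... remaining skipped tags would follow here ...
}

// filterTags keeps only the tags that are not in the skip set.
func filterTags(tags []string) []string {
	var kept []string
	for _, tag := range tags {
		if skippedUplinkVersions[tag] {
			continue // one lookup replaces the inner loop
		}
		kept = append(kept, tag)
	}
	return kept
}

func main() {
	fmt.Println(filterTags([]string{"v0.10.1", "v0.15.0"}))
}
```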

VinozzZ

comment created time in 2 months

Pull request review comment storj/storj

script: automated test for testing different version matches between peers

+// Copyright (C) 2019 Storj Labs, Inc.
+// See LICENSE for copying information.
+
+// +build ignore
+
+package main
+
+import (
+	"errors"
+	"fmt"
+	"io/ioutil"
+	"log"
+	"os"
+	"os/exec"
+	"strings"
+
+	"gopkg.in/yaml.v2"
+)
+
+var SkippedUplinkVersions = []string{
+	"v0.10.1",
+	"v0.10.0", "v0.10.2",
+	"v0.11.0", "v0.11.1", "v0.11.2", "v0.11.3", "v0.11.4", "v0.11.5", "v0.11.6", "v0.11.7",
+	"v0.12.0", "v0.12.1", "v0.12.2", "v0.12.3", "v0.12.4", "v0.12.5", "v0.12.6",
+	"v0.13.0", "v0.13.1", "v0.13.2", "v0.13.3", "v0.13.4", "v0.13.5", "v0.13.6",
+}
+
+type VersionsTest struct {
+	Stage1 *Stage `yaml:"stage1"`
+	Stage2 *Stage `yaml:"stage2"`
+}
+
+type Stage struct {
+	SatelliteVersion    string   `yaml:"sat_version"`
+	UplinkVersions      []string `yaml:"uplink_versions"`
+	StoragenodeVersions []string `yaml:"storagenode_versions"`
+}
+
+func main() {
+	if err := run(); err != nil {
+		log.Fatal(err)
+	}
+}
+
+func run() error {
+	if len(os.Args) < 3 {
+		return errors.New("Please provide path to script file and yaml file via command line")
+	}
+
+	scriptFile := os.Args[1]
+	yamlFile := os.Args[2]
+
+	b, err := ioutil.ReadFile(yamlFile)
+	if err != nil {
+		return err
+	}
+
+	var tests []*VersionsTest
+	if err := yaml.Unmarshal(b, &tests); err != nil {
+		return err
+	}
+
+	var filteredUplinkTagList []string
+	for _, test := range tests {
+		filteredUplinkTagList, err = getVersions(test, filteredUplinkTagList)
+		if err != nil {
+			return err
+		}
+		if len(test.Stage1.UplinkVersions) < 1 {
+			test.Stage1.UplinkVersions = filteredUplinkTagList
+		}
+		if len(test.Stage2.UplinkVersions) < 1 {
+			test.Stage2.UplinkVersions = filteredUplinkTagList
+		}
+
+		if err := runTest(test, scriptFile); err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
+
+func runTest(test *VersionsTest, scriptFile string) error {
+	stage1SNVersions := formatMultipleVersions(test.Stage1.StoragenodeVersions)
+	stage2SNVersions := formatMultipleVersions(test.Stage2.StoragenodeVersions)
+	stage1UplinkVersions := formatMultipleVersions(test.Stage1.UplinkVersions)
+	stage2UplinkVersions := formatMultipleVersions(test.Stage2.UplinkVersions)
+	cmd := exec.Command(scriptFile, test.Stage1.SatelliteVersion, stage1UplinkVersions, stage1SNVersions, test.Stage2.SatelliteVersion, stage2UplinkVersions, stage2SNVersions)
+	cmd.Stdout = os.Stdout
+	cmd.Stderr = os.Stderr
+	return cmd.Run()
+}
+
+func formatMultipleVersions(snvs []string) string {
+	var s string
+	for i, snv := range snvs {
+		space := " "
+		if i == 0 {
+			space = ""
+		}
+		s = fmt.Sprintf("%s%s%s", s, space, snv)
+	}
+	return s
+}
+
+func getVersions(test *VersionsTest, filteredTagList []string) ([]string, error) {
+	if len(test.Stage1.UplinkVersions) > 0 && len(test.Stage2.UplinkVersions) > 0 || len(filteredTagList) > 0 {

I don't fully understand what this condition is checking and why

VinozzZ

comment created time in 2 months

Pull request review comment storj/storj

script: automated test for testing different version matches between peers

+// Copyright (C) 2019 Storj Labs, Inc.
+// See LICENSE for copying information.
+
+// +build ignore
+
+package main
+
+import (
+	"errors"
+	"fmt"
+	"io/ioutil"
+	"log"
+	"os"
+	"os/exec"
+	"strings"
+
+	"gopkg.in/yaml.v2"
+)
+
+var SkippedUplinkVersions = []string{

can we add a comment above this explaining why we skip these versions?

VinozzZ

comment created time in 2 months

Pull request review comment storj/storj

satellite/repair: fix repair concurrency

 type Config struct {

 //
 // architecture: Worker
 type Service struct {
-	log      *zap.Logger
-	queue    queue.RepairQueue
-	config   *Config
-	Limiter  *sync2.Limiter
-	Loop     sync2.Cycle
-	repairer *SegmentRepairer
+	log        *zap.Logger
+	queue      queue.RepairQueue
+	config     *Config
+	JobLimiter *semaphore.Weighted
+	Loop       sync2.Cycle
+	repairer   *SegmentRepairer
 }

 // NewService creates repairing service
 func NewService(log *zap.Logger, queue queue.RepairQueue, config *Config, repairer *SegmentRepairer) *Service {
 	return &Service{
-		log:      log,
-		queue:    queue,
-		config:   config,
-		Limiter:  sync2.NewLimiter(config.MaxRepair),
-		Loop:     *sync2.NewCycle(config.Interval),
-		repairer: repairer,
+		log:        log,
+		queue:      queue,
+		config:     config,
+		JobLimiter: semaphore.NewWeighted(int64(config.MaxRepair)),
+		Loop:       *sync2.NewCycle(config.Interval),
+		repairer:   repairer,
 	}
 }

 // Close closes resources
 func (service *Service) Close() error { return nil }

+// WaitForPendingRepairs waits for all ongoing repairs to complete.
+//
+// NB: this assumes that service.config.MaxRepair will never be changed once this Service instance
+// is initialized. If that is not a valid assumption, we should keep a copy of its initial value to
+// use here instead.
+func (service *Service) WaitForPendingRepairs() {
+	// No error return is possible here; context.Background() can't be canceled
+	_ = service.JobLimiter.Acquire(context.Background(), int64(service.config.MaxRepair))

why not take ctx as an arg to WaitForPendingRepairs and pass it into Acquire so that if we shut down the satellite the repairer does not deadlock?

thepaul

comment created time in 2 months

Pull request review comment storj/storj

satellite/satellitedb: ensure migration tests run

 func pgMigrateTest(t *testing.T, connStr string) {
 	snapshots, err := loadSnapshots(connStr)
 	require.NoError(t, err)

-	for _, snapshot := range snapshots.List {
-		base := snapshot
-		// versions 0 to 4 can be a starting point
-		if base.Version < minBaseVersion || maxBaseVersion < base.Version {
-			continue
-		}
-
-		t.Run(strconv.Itoa(base.Version), func(t *testing.T) {
-			log := zaptest.NewLogger(t)
-			schemaName := "migrate/satellite/" + strconv.Itoa(base.Version) + pgutil.CreateRandomTestingSchemaName(8)
-			connstr := pgutil.ConnstrWithSchema(connStr, schemaName)
+	base := snapshots.List[0]

That makes sense. But it looks like nothing else has changed - so even if snapshots.List only has one item in it, it would still run in an identical way with the old code. So how does this change fix the test to ensure it runs?

egonelbre

comment created time in 2 months

Pull request review comment storj/storj

satellite/satellitedb: ensure migration tests run

 func pgMigrateTest(t *testing.T, connStr string) {
 	snapshots, err := loadSnapshots(connStr)
 	require.NoError(t, err)

-	for _, snapshot := range snapshots.List {
-		base := snapshot
-		// versions 0 to 4 can be a starting point
-		if base.Version < minBaseVersion || maxBaseVersion < base.Version {
-			continue
-		}
-
-		t.Run(strconv.Itoa(base.Version), func(t *testing.T) {
-			log := zaptest.NewLogger(t)
-			schemaName := "migrate/satellite/" + strconv.Itoa(base.Version) + pgutil.CreateRandomTestingSchemaName(8)
-			connstr := pgutil.ConnstrWithSchema(connStr, schemaName)
+	base := snapshots.List[0]

I don't understand how this fixes the problem. Wouldn't the old code still attempt snapshots.List[0]?

egonelbre

comment created time in 2 months

PR opened storj/storj

satellite/satellitedb: add bad migration

What:

Why:

Please describe the tests:

  • Test 1:
  • Test 2:

Please describe the performance impact:

Code Review Checklist (to be filled out by reviewer)

  • [ ] NEW: Are there any Satellite database migrations? Are they forwards and backwards compatible?
  • [ ] Does the PR describe what changes are being made?
  • [ ] Does the PR describe why the changes are being made?
  • [ ] Does the code follow our style guide?
  • [ ] Does the code follow our testing guide?
  • [ ] Is the PR appropriately sized? (If it could be broken into smaller PRs it should be)
  • [ ] Does the new code have enough tests? (every PR should have tests or justification otherwise. Bug-fix PRs especially)
  • [ ] Does the new code have enough documentation that answers "how do I use it?" and "what does it do?"? (both source documentation and higher level, diagrams?)
  • [ ] Does any documentation need updating?
  • [ ] Do the database access patterns make sense?
+483 -0

0 comment

2 changed files

pr created time in 2 months

create branch storj/storj

branch : green/test-bad-migration

created branch time in 2 months

Pull request review comment storj/storj

satellite/gracefulexit: Add graceful exit completed/failed receipt verification to satellite CLI

 func cmdQDiag(cmd *cobra.Command, args []string) (err error) {
 	return w.Flush()
 }

+func cmdVerifyGracefulExitReceipt(cmd *cobra.Command, args []string) (err error) {
+	ctx, _ := process.Ctx(cmd)
+
+	identity, err := runCfg.Identity.Load()
+	if err != nil {
+		zap.S().Fatal(err)
+	}
+
+	// Check the receipt is not empty
+	nodeID, err := storj.NodeIDFromString(args[0])

if the first arg is nodeID, the usage description and minimum args need to be updated, right?

ethanadams

comment created time in 2 months

Pull request review comment storj/storj

satellite/metainfo, satellite/repair: add metric for download failed due to not enough pieces available

 func (ec *ECRepairer) Get(ctx context.Context, limits []*pb.AddressedOrderLimit,
 	limiter.Wait()

 	if successfulPieces < es.RequiredCount() {
+		mon.Meter("download_failed_not_enough_pieces_repair").Mark(1) //locked

can we update the error message below to include the segment path?

VinozzZ

comment created time in 2 months

push event storj/storj

Maximillian von Briesen

commit sha 1339252cbe3213750ea09f579d416e8bd1bf9f11

satellite/gracefulexit: refactor concurrency (#3624) Update PendingMap structure to also handle concurrency control between the sending and receiving sides of the graceful exit endpoint.

view details

push time in 2 months

delete branch storj/storj

delete branch : green/ge-concurrency-refactor

delete time in 2 months

PR merged storj/storj

Reviewers
satellite/gracefulexit: refactor concurrency (labels: Request Code Review, cla-signed)

What: Update PendingMap structure to also handle concurrency control between the sending and receiving sides of the graceful exit endpoint.

Why: We were using many different concurrency primitives/structs to manage the logic in the graceful exit endpoint and it was very difficult to understand. https://storjlabs.atlassian.net/browse/V3-3163

Please describe the tests:

  • Test 1:
  • Test 2:

Please describe the performance impact:

Code Review Checklist (to be filled out by reviewer)

  • [x] NEW: Are there any Satellite database migrations? Are they forwards and backwards compatible?
  • [x] Does the PR describe what changes are being made?
  • [x] Does the PR describe why the changes are being made?
  • [x] Does the code follow our style guide?
  • [x] Does the code follow our testing guide?
  • [x] Is the PR appropriately sized? (If it could be broken into smaller PRs it should be)
  • [x] Does the new code have enough tests? (every PR should have tests or justification otherwise. Bug-fix PRs especially)
  • [x] Does the new code have enough documentation that answers "how do I use it?" and "what does it do?"? (both source documentation and higher level, diagrams?)
  • [x] Does any documentation need updating?
  • [x] Do the database access patterns make sense?
+495 -163

0 comment

4 changed files

mobyvb

pr closed time in 2 months

Pull request review comment storj/storj

satellite/gracefulexit: refactor concurrency

 func (endpoint *Endpoint) handleSucceeded(ctx context.Context, stream processStr
 	if err != nil {
 		return Error.Wrap(err)
 	}
-	transferQueueItem, err := endpoint.db.GetTransferQueueItem(ctx, exitingNodeID, transfer.path, transfer.pieceNum)
+	transferQueueItem, err := endpoint.db.GetTransferQueueItem(ctx, exitingNodeID, transfer.Path, transfer.PieceNum)
 	if err != nil {
 		return Error.Wrap(err)
 	}

-	err = endpoint.updatePointer(ctx, transfer.originalPointer, exitingNodeID, receivingNodeID, string(transfer.path), transfer.pieceNum, transferQueueItem.RootPieceID)
+	err = endpoint.updatePointer(ctx, transfer.OriginalPointer, exitingNodeID, receivingNodeID, string(transfer.Path), transfer.PieceNum, transferQueueItem.RootPieceID)
 	if err != nil {
 		// remove the piece from the pending queue so it gets retried
-		pending.delete(originalPieceID)
+		deleteErr := pending.Delete(originalPieceID)

-		return Error.Wrap(err)
+		return Error.Wrap(errs.Combine(err, deleteErr))
 	}

 	var failed int64
 	if transferQueueItem.FailedCount != nil && *transferQueueItem.FailedCount >= endpoint.config.MaxFailuresPerPiece {
 		failed = -1
 	}

-	err = endpoint.db.IncrementProgress(ctx, exitingNodeID, transfer.pieceSize, 1, failed)
+	err = endpoint.db.IncrementProgress(ctx, exitingNodeID, transfer.PieceSize, 1, failed)
 	if err != nil {
 		return Error.Wrap(err)
 	}

-	err = endpoint.db.DeleteTransferQueueItem(ctx, exitingNodeID, transfer.path, transfer.pieceNum)
+	err = endpoint.db.DeleteTransferQueueItem(ctx, exitingNodeID, transfer.Path, transfer.PieceNum)
 	if err != nil {
 		return Error.Wrap(err)
 	}

-	pending.delete(originalPieceID)
+	err = pending.Delete(originalPieceID)
+	if err != nil {
+		return err

The error inside Delete is created with Error.New so I don't think we need to wrap it.

mobyvb

comment created time in 2 months

Pull request review comment storj/storj

private/testplanet: add a mock referral manager server into testplanet

+// Copyright (C) 2019 Storj Labs, Inc.
+// See LICENSE for copying information
+
+package testplanet
+
+import (
+	"os"
+	"path/filepath"
+
+	"storj.io/storj/pkg/pb"
+	"storj.io/storj/pkg/peertls/extensions"
+	"storj.io/storj/pkg/peertls/tlsopts"
+	"storj.io/storj/pkg/server"
+)
+
+// newReferralManager initializes a referral manager server
+func (planet *Planet) newReferralManager() (*server.Server, error) {
+	prefix := "referralmanager"
+	log := planet.log.Named(prefix)
+	referralmanagerDir := filepath.Join(planet.directory, prefix)
+
+	if err := os.MkdirAll(referralmanagerDir, 0700); err != nil {
+		return nil, err
+	}
+
+	identity, err := planet.NewIdentity()
+	if err != nil {
+		return nil, err
+	}
+
+	config := server.Config{
+		Address:        "127.0.0.1:0",
+		PrivateAddress: "127.0.0.1:0",
+
+		Config: tlsopts.Config{
+			RevocationDBURL:    "bolt://" + filepath.Join(referralmanagerDir, "revocation.db"),
+			UsePeerCAWhitelist: true,
+			PeerIDVersions:     "*",
+			Extensions: extensions.Config{
+				Revocation:          false,
+				WhitelistSignedLeaf: false,
+			},
+		},
+	}
+
+	var endpoints pb.ReferralManagerServer
+	if planet.config.Reconfigure.ReferralManagerServer != nil {

nit - I would add a comment saying // only create a referral manager server if testplanet was reconfigured with a custom referral manager endpoint

VinozzZ

comment created time in 2 months

push event storj/storj

Natalie Villasana

commit sha b7a8ffcdff0fa9b38da37fc6e448cc69ff5986a0

pkg/pb/referralmanager: update to add satellite ID to Get Tokens request (#3625)

view details

Moby von Briesen

commit sha f933662d849a0426f0aa4e10dbd520a2a40d75da

errcheck

view details

Moby von Briesen

commit sha 5beb32baba4e0d1fc52f6090e0624139e367dcce

Merge branch 'master' of github.com:storj/storj into green/ge-concurrency-refactor

view details

push time in 2 months

push event storj/storj

Moby von Briesen

commit sha f3fd02e236a9fe82d99c0f799d72c55885021147

return "finished error" from isFinishedPromise even if DoneSending has already been called

view details

push time in 2 months

push event storj/storj

Michal Niewrzal

commit sha d96df2691a977da9ccc461867b874c0f0a4aae8e

satellite/metainfo: improve Loop comments (#3595)

view details

Moby von Briesen

commit sha f11961d779b2825d0c359ef72e45a7f04f933502

rename Finish() to DoneSending()

view details

Moby von Briesen

commit sha 8696803509382085c8698fb56dbd1e07a3aaccad

Merge branch 'master' of github.com:storj/storj into green/ge-concurrency-refactor

view details

push time in 2 months

push event storj/storj

Rafael Antonio Ribeiro Gomes

commit sha 273977176186ac7beafb851655546502a062dcf3

storagenode: add bandwidth metrics (#3623) * storagenode: add bandwidth metrics * remove unecessary metric

view details

Moby von Briesen

commit sha e3c25456b259b842ba60c055069f44ff596f29e2

nit rename

view details

Moby von Briesen

commit sha 0201112b3b12a8aed867828764ebd271144efde8

Merge branch 'master' of github.com:storj/storj into green/ge-concurrency-refactor

view details

push time in 2 months

push event grafael/storj

JT Olio

commit sha 40012e5790bd880b3971c44311d759bc3a29c0ec

satellite/metainfo: continue instead of return (fixing my bad advice) Change-Id: I57ad3de8e3f705429bad98ce976879c4d5e905c9

view details

Yingrong Zhao

commit sha b995406ff9b2363b2224322151d50b787d9f6905

satellite/satellitedb: separate uuid creation from db layer (#3600)

view details

Nikolai Siedov

commit sha 6a4389d3e14d6dd190ebeb367be395e6a8f38042

satellite/console: apiKeys case-insensitive search added (#3621)

view details

Nikolay Yurchenko

commit sha 2030d676504d0af8a4dc7e8e017a76bc8a37ed08

web/satellite: charges summary fix (#3619)

view details

Yaroslav Vorobiov

commit sha c72c44356428eca030df76fedaa43ab1a9db1d0a

satellite/payments: add cents values to transaction info (#3614)

view details

Yaroslav Vorobiov

commit sha 87c7a2ff425565dab9a39f15bca359fe5af1e5b4

satellite/payments: token deposit accept cents (#3628)

view details

Matt Robinson

commit sha 976881f72bfaa2f2c61f332bfa6fe9a7a703ff12

satellite/console: Add security headers (#3615) * satellite/console: Add X-Frame-Options and Referrer-Policy security headers * Update to use CSP instead of XFO and include tardigrade.io * Make FrameAncestors a config option * Update satellite-config lock * Make help text for FrameAncestors better

view details

Maximillian von Briesen

commit sha 1c8f63f962f8d45ae16dc936eb242b53cb9da097

Merge branch 'master' into metric_bandwidth

view details

push time in 2 months

Pull request review comment storj/storj

storage: Improve doc comments delete methods

 func (dir *Dir) iterateStorageFormatVersions(ctx context.Context, ref storage.Bl
 }

 // Delete deletes blobs with the specified ref (in all supported storage formats).
+//
+// It doesn't return an error if the blob is not found by any reason or it
+// cannot be deleted at this moment and it's delayed.
 func (dir *Dir) Delete(ctx context.Context, ref storage.BlobRef) (err error) {
 	defer mon.Task()(&ctx)(&err)
 	return dir.iterateStorageFormatVersions(ctx, ref, dir.DeleteWithStorageFormat)
 }

-// DeleteWithStorageFormat deletes the blob with the specified ref for one specific format version
+// DeleteWithStorageFormat deletes the blob with the specified ref for one
+// specific format version. The method tries the following strategies, in order
+// of preference until one succeeds:
+//
+// * moves the blob to garbage dir.
+// * directly deletes the blob.
+// * push the blobs to queue for retrying later.
+//
+// It doesn't return an error if the piece isn't found by any reason.
// It doesn't return an error if the piece isn't found for any reason.
ifraixedes

comment created time in 2 months

Pull request review comment storj/storj

storage: Improve doc comments delete methods

 func (dir *Dir) iterateStorageFormatVersions(ctx context.Context, ref storage.Bl
 }

 // Delete deletes blobs with the specified ref (in all supported storage formats).
+//
+// It doesn't return an error if the blob is not found by any reason or it
// It doesn't return an error if the blob is not found for any reason or it
ifraixedes

comment created time in 2 months

Pull request review comment storj/storj

storage: Improve doc comments delete methods

 func (store *blobStore) StatWithStorageFormat(ctx context.Context, ref storage.B
 	return info, Error.Wrap(err)
 }

-// Delete deletes blobs with the specified ref
+// Delete deletes blobs with the specified ref.
+//
+// It doesn't return an error if the blob isn't found by any reason or it cannot
// It doesn't return an error if the blob isn't found for any reason or it cannot
ifraixedes

comment created time in 2 months

push event storj/storj

Moby von Briesen

commit sha b4f25e7103a8a825d210f1af28448bd8ee0f492a

linter

view details

push time in 2 months

push event storj/storj

Moby von Briesen

commit sha 857caf216798306c3eac690cd377232192e0e003

cancel context in error cases

view details

push time in 2 months

push event storj/storj

Moby von Briesen

commit sha 62c6e714b2f422b4265de3f888f9df14d54d8472

condense Delete + err returns

view details

Moby von Briesen

commit sha ee3af84e627b5b24cf5d2b1a56b67053d261be7f

add optional error to pendingMap.Finish

view details

Moby von Briesen

commit sha 85f003aff488e3d2a21752607c87c8be9010ed09

replace handleError with pending.Finish(err) calls

view details

push time in 2 months

push event storj/storj

Jess G

commit sha e9c3194c820464435bf00cebc4877b135665fae9

satellitedb: merge migration into one step (#3551) * merge migration * rm migration versions * rm unneeded migration test data * create index w/postgres + crdb compatible syntax * add default to offers.invitee_credit_duration_days * changes so that schema matches from master to branch * change to be crdb compatible * add check to confirm db version * mv version check to migration * update tests * add minversion to sadb migration, update tests * confirm min version for all dbs in a migration * add validate migration to sadb * fix lint err * rm min version check from migrate * change sadb check * hard code min db version * fix comment

view details

Rafael Antonio Ribeiro Gomes

commit sha da39c71d35fde96efc42b72478176bb957977870

storagenode: add new metric satellite.request (#3610) * storagenode: add new metric satellite.request * storagenode: metrics fixed * switch from Counter to Meter

view details

JT Olio

commit sha fe8d556b4ef04299cc83c0e2faf7eebd31f5634b

add sourcerer hall-of-fame to README https://github.com/sourcerer-io/hall-of-fame

view details

Vitalii Shpital

commit sha 16f0e998b16a84ea3b502152b981583063da7b32

web/satellite: project selection dropdown bug fixed (#3605)

view details

Yehor Butko

commit sha 9ca547acb1286443555a9eaf0bf8754f261fd727

web/satellite: project charges (#3611)

view details

Vitalii Shpital

commit sha 49694b2c7a50107f64dddcefdd572d0f074e181e

web/satellite: successful reset password page styling bug fixed (#3612)

view details

Vitalii Shpital

commit sha 61c8bcc9a6a6bcf456b94e805135bbd5491069e0

web/storagenode: egress chart implemented (#3574)

view details

Matt Robinson

commit sha b5707d1e5dbf613a10b9608fcad9dbd98f744c22

scripts: make update-tools.sh more verbose (#3572) * Make update-tools.sh more verbose * Was checking the wrong filehandle

view details

Kaloyan Raev

commit sha 6d728d6ea09789a04d5ebaacc2ddc73057183de0

storagenode/collect: delete piece 24 hours after expiration (#3613)

view details

Isaac Hess

commit sha 6aeddf2f53e46d7a560f533b013d49729b4a58ae

storagenode/pieces: Add Trash and RestoreTrash to piecestore (#3575) * storagenode/pieces: Add Trash and RestoreTrash to piecestore * Add index for expiration trash

view details

Ivan Fraixedes

commit sha c2e605e81ee9f04e3ffe425a6e857fe04930950e

satellite/metainfo: Don't return error in loop when path has less than 4 parts (#3616) * satellite/metainfo: Rollback path parts check in loop We have to rollback the changes applied in checking the rawPath parts from 4 to 3 because the production prointerDB is still storing buckets. * satellite/metainfo: Don't return path parts less 4 Don't return an error in the metainfo loop iterator when a path doesn't have 4 parts because it belongs to bucket metadata, not an actual object.

view details

JT Olio

commit sha 40012e5790bd880b3971c44311d759bc3a29c0ec

satellite/metainfo: continue instead of return (fixing my bad advice) Change-Id: I57ad3de8e3f705429bad98ce976879c4d5e905c9

view details

Yingrong Zhao

commit sha b995406ff9b2363b2224322151d50b787d9f6905

satellite/satellitedb: separate uuid creation from db layer (#3600)

view details

Nikolai Siedov

commit sha 6a4389d3e14d6dd190ebeb367be395e6a8f38042

satellite/console: apiKeys case-insensitive search added (#3621)

view details

Nikolay Yurchenko

commit sha 2030d676504d0af8a4dc7e8e017a76bc8a37ed08

web/satellite: charges summary fix (#3619)

view details

Yaroslav Vorobiov

commit sha c72c44356428eca030df76fedaa43ab1a9db1d0a

satellite/payments: add cents values to transaction info (#3614)

view details

Yaroslav Vorobiov

commit sha 87c7a2ff425565dab9a39f15bca359fe5af1e5b4

satellite/payments: token deposit accept cents (#3628)

view details

Matt Robinson

commit sha 976881f72bfaa2f2c61f332bfa6fe9a7a703ff12

satellite/console: Add security headers (#3615) * satellite/console: Add X-Frame-Options and Referrer-Policy security headers * Update to use CSP instead of XFO and include tardigrade.io * Make FrameAncestors a config option * Update satellite-config lock * Make help text for FrameAncestors better

view details

Moby von Briesen

commit sha 0c08592388aff75509c52858a459c19b260f0d9b

Merge branch 'master' of github.com:storj/storj into green/ge-concurrency-refactor

view details

Moby von Briesen

commit sha 121a7a1761a5f07d961defca635723c23eb7c32d

update documentation on pending map

view details

push time in 2 months

pull request comment storj/storj

pkg/pb/referralmanager: update to add satellite ID to Get Tokens request

Maybe instead of tokens, token_secrets would be clearer?

navillasa

comment created time in 2 months

PR opened storj/storj

satellite/gracefulexit: refactor concurrency (label: Request Code Review)

What: Update PendingMap structure to also handle concurrency control between the sending and receiving sides of the graceful exit endpoint.

Why: We were using many different concurrency primitives/structs to manage the logic in the graceful exit endpoint and it was very difficult to understand. https://storjlabs.atlassian.net/browse/V3-3163

Please describe the tests:

  • Test 1:
  • Test 2:

Please describe the performance impact:

Code Review Checklist (to be filled out by reviewer)

  • [ ] NEW: Are there any Satellite database migrations? Are they forwards and backwards compatible?
  • [ ] Does the PR describe what changes are being made?
  • [ ] Does the PR describe why the changes are being made?
  • [ ] Does the code follow our style guide?
  • [ ] Does the code follow our testing guide?
  • [ ] Is the PR appropriately sized? (If it could be broken into smaller PRs it should be)
  • [ ] Does the new code have enough tests? (every PR should have tests or justification otherwise. Bug-fix PRs especially)
  • [ ] Does the new code have enough documentation that answers "how do I use it?" and "what does it do?"? (both source documentation and higher level, diagrams?)
  • [ ] Does any documentation need updating?
  • [ ] Do the database access patterns make sense?
+429 -141

0 comment

4 changed files

pr created time in 2 months

push event storj/storj

Moby von Briesen

commit sha 7a8147f62136d07f1403bd60dc53c81b456692ba

use new pending map to handle concurrency in ge endpoint

view details

push time in 2 months

push event storj/storj

Moby von Briesen

commit sha 96bcce2b8de0932825194f3bf24843cbb9733df6

add pending finished promise

view details

Moby von Briesen

commit sha 1bb0b5db63562a1f087f066718cfd06f2142ca1b

update tests and fix pending map

view details

push time in 2 months

push event storj/storj

Moby von Briesen

commit sha 1f97e865cf36b97435d338f531d2eeee99646770

finish writing pending tests

view details

Moby von Briesen

commit sha b507274ed446847a59b950ba774411806adf0ccb

fix compilation issues

view details

push time in 2 months

Pull request review comment storj/storj

storagenode: add bandwidth metrics

 func (service *Service) AvailableBandwidth(ctx context.Context) (_ int64, err er
 		return 0, Error.Wrap(err)
 	}
 	allocatedBandwidth := service.allocatedBandwidth
-	return allocatedBandwidth - usage, nil
+	availableBandwidth := allocatedBandwidth - usage
+
+	mon.IntVal("allocated_bandwidth").Observe(allocatedBandwidth) //locked
+	mon.IntVal("usage_bandwidth").Observe(usage)                  //locked
+	mon.IntVal("available_bandwidth").Observe(availableBandwidth) //locked

If I were to guess, this is overall bandwidth for a month. Which means it would probably reset every month. But I'm not personally familiar with this area of the codebase.

grafael

comment created time in 2 months

Pull request review comment storj/storj

storagenode: add bandwidth metrics

 func (service *Service) AvailableBandwidth(ctx context.Context) (_ int64, err er
 		return 0, Error.Wrap(err)
 	}
 	allocatedBandwidth := service.allocatedBandwidth
-	return allocatedBandwidth - usage, nil
+	availableBandwidth := allocatedBandwidth - usage
+
+	mon.IntVal("allocated_bandwidth").Observe(allocatedBandwidth) //locked
+	mon.IntVal("usage_bandwidth").Observe(usage)                  //locked
+	mon.IntVal("available_bandwidth").Observe(availableBandwidth) //locked

I would recommend reporting allocatedBandwidth and availableBandwidth but getting rid of usage

grafael

comment created time in 2 months

Pull request review comment storj/storj

storagenode: add bandwidth metrics

 func (service *Service) AvailableBandwidth(ctx context.Context) (_ int64, err er
 		return 0, Error.Wrap(err)
 	}
 	allocatedBandwidth := service.allocatedBandwidth
-	return allocatedBandwidth - usage, nil
+	availableBandwidth := allocatedBandwidth - usage
+
+	mon.IntVal("allocated_bandwidth").Observe(allocatedBandwidth) //locked
+	mon.IntVal("usage_bandwidth").Observe(usage)                  //locked
+	mon.IntVal("available_bandwidth").Observe(availableBandwidth) //locked

Since any one of these can be derived from the other two, it should only be necessary to have two of them in queries. @benjaminsirb has told me in the past that the datascience team can just do calculations in the query if they need a piece of information that can be derived from others.

grafael

comment created time in 2 months

Gollum event (wiki edited)

PR opened storj/docs

style: add section on intentional metrics
+11 -0

0 comment

1 changed file

pr created time in 2 months

create branch storj/docs

branch : metrics

created branch time in 2 months

push event storj/storj

Yehor Butko

commit sha 9ca547acb1286443555a9eaf0bf8754f261fd727

web/satellite: project charges (#3611)

view details

Vitalii Shpital

commit sha 49694b2c7a50107f64dddcefdd572d0f074e181e

web/satellite: successful reset password page styling bug fixed (#3612)

view details

Vitalii Shpital

commit sha 61c8bcc9a6a6bcf456b94e805135bbd5491069e0

web/storagenode: egress chart implemented (#3574)

view details

Matt Robinson

commit sha b5707d1e5dbf613a10b9608fcad9dbd98f744c22

scripts: make update-tools.sh more verbose (#3572) * Make update-tools.sh more verbose * Was checking the wrong filehandle

view details

Maximillian von Briesen

commit sha 0c9755ae241567a5244865dbfff12794d45a09f1

Merge branch 'master' into orange/expiration-24hours

view details

push time in 2 months

push event storj/storj

Kaloyan Raev

commit sha f4a626bbc63f6379256bbe73ce81f570471c5505

storagenode-updater: re-enable auto-update storagenode-updater (#3588) storagenode-updater: re-enable auto-update for storagenode-updater

view details

Nikolai Siedov

commit sha 3fe518d5476d28ba1578d51c6434b7e4a3772432

satellite: added ability to inject stripe public key post build (#3560)

view details

Nikolay Yurchenko

commit sha fdcf328469a8fee6ee73ff82ecdd10d792df350d

short name field removed from registration page (#3584)

view details

Michal Niewrzal

commit sha ec41a51bbbe98cc930b10b091841b4ba891bb012

metainfo/loop: refactor iterator to be reusable (#3578)

view details

Michal Niewrzal

commit sha 8ea09d55561a0a5aa441a8d1237859818e60df98

cmd/segment-reaper: add iteration over DB (#3576)

view details

littleskunk

commit sha c52c7275ad84fc0aa3c66c8dc463154a7897864b

satellite/repair: reduce upload timeout (#3597)

view details

Natalie Villasana

commit sha 3a0b71ae5b8a0f2db7ecb5bdda3543cd0619a48f

pkg/pb/referralmanager: update RedeemTokenRequest, rm ReserveToken (#3593)

view details

Jeff Wendling

commit sha f3b20215b0d7e90d249c440b5883b5a3c6a2a6ad

pkg/{rpc,server,tlsopts}: pick larger defaults for buffer sizes these may not be optimal but they're probably better based on our previous testing. we can tune better in the future now that the groundwork is there. Change-Id: Iafaee86d3181287c33eadf6b7eceb307dda566a6

view details

Jeff Wendling

commit sha 53176dcb0ee9c0bccd11f04d830295b068913ffe

pkg/rpc/rpcstatus: do not depend on grpc/drpc build mode if your server is built to make drpc connections, clients can still connect with grpc. thus, your responses to grpc clients must still look the same, so we have to have all of our status wrapping include codes for both the drpc and grpc servers to return the right thing. Change-Id: If99fa0e674dec2e20ddd372a827f1c01b4d305b2

view details

Ivan Fraixedes

commit sha b5b4543df32f4a93219c0581481e1a3c2c7ace39

docs/blueprints: Deletion performance (#3520) Creates the blueprint document for changing the mechanism to delete pieces and improve its performance.

view details

Nikolai Siedov

commit sha 24318d74b3041685ef8c76f572a5531523a2a554

storagenode/console: show satellite url in satellite selection (#3602)

view details

Michal Niewrzal

commit sha 5964502ce0f1f4f9807545f4e38b76da6ed403ca

uplink/metainfo: remove GetObject from download Batch (#3596)

view details

Ivan Fraixedes

commit sha 46ad27c5f22a220d825ddbb86b11ad4b19d17a3b

docs/blueprints: Change adapt endpoint by creating (#3601) Replace a sentence which indicates that we're creating and new endpoint in the storage node side rather than adapting the current one.

view details

Ivan Fraixedes

commit sha 8e1e4cc342573bcb4b9037754ccd3eab56a0b0e0

piecestore: Fix invalid comment and typos (#3604)

view details

Maximillian von Briesen

commit sha 8653dda2b10a0fa702c91bc4ddceb33f3405c210

satellite/audit: do not contain nodes for unknown errors (#3592) * skip unknown errors (wip) * add tests to make sure nodes that time out are added to containment * add bad blobs store * call "Skipped" "Unknown" * add tests to ensure unknown errors do not trigger containment * add monkit stats to lockfile * typo * add periods to end of bad blobs comments

view details

Yaroslav Vorobiov

commit sha 3d94c3fc9909d5c0c9edfd89bd84ac65dd1635ee

satellite/payments: stripe client use satellite logger, give credits when applying transaction to balance (#3603)

view details

Jess G

commit sha e9c3194c820464435bf00cebc4877b135665fae9

satellitedb: merge migration into one step (#3551) * merge migration * rm migration versions * rm unneeded migration test data * create index w/postgres + crdb compatible syntax * add default to offers.invitee_credit_duration_days * changes so that schema matches from master to branch * change to be crdb compatible * add check to confirm db version * mv version check to migration * update tests * add minversion to sadb migration, update tests * confirm min version for all dbs in a migration * add validate migration to sadb * fix lint err * rm min version check from migrate * change sadb check * hard code min db version * fix comment

view details

Rafael Antonio Ribeiro Gomes

commit sha da39c71d35fde96efc42b72478176bb957977870

storagenode: add new metric satellite.request (#3610) * storagenode: add new metric satellite.request * storagenode: metrics fixed * switch from Counter to Meter

view details

JT Olio

commit sha fe8d556b4ef04299cc83c0e2faf7eebd31f5634b

add sourcerer hall-of-fame to README https://github.com/sourcerer-io/hall-of-fame

view details

Vitalii Shpital

commit sha 16f0e998b16a84ea3b502152b981583063da7b32

web/satellite: project selection dropdown bug fixed (#3605)

view details

push time in 2 months

Pull request review comment storj/storj

satellite/satellitedb: separate uuid creation from db layer

 func (s *Service) CreateUser(ctx context.Context, user CreateUser, tokenSecret R
 		if currentReward != nil {
 			var refID *uuid.UUID
 			if refUserID != "" {
-				refID, err = uuid.Parse(refUserID)
+				refID, err = uuid.Parse(refID)
 				if err != nil {
 					return Error.Wrap(err)
 				}
 			}
-			newCredit, err := NewCredit(currentReward, Invitee, u.ID, refID)
+			newCredit, err := NewCredit(currentReward, Invitee, u.ID, refToken)

where does refToken come from? Why this change?

VinozzZ

comment created time in 2 months

Pull request review comment storj/storj

satellite/satellitedb: separate uuid creation from db layer

 func (s *Service) CreateUser(ctx context.Context, user CreateUser, tokenSecret R
 		if currentReward != nil {
 			var refID *uuid.UUID
 			if refUserID != "" {
-				refID, err = uuid.Parse(refUserID)
+				refID, err = uuid.Parse(refID)

This change looks like a mistake to me - shouldn't it still be refUserID?

VinozzZ

comment created time in 2 months

Pull request review comment storj/storj

storagenode: add new metric satellite.request

 storj.io/storj/satellite/repair/checker."remote_segments_lost" IntVal
 storj.io/storj/satellite/repair/checker."remote_segments_needing_repair" IntVal
 storj.io/storj/satellite/satellitedb."audit_reputation_alpha" FloatVal
 storj.io/storj/satellite/satellitedb."audit_reputation_beta" FloatVal
+storj.io/storj/storagenode/contact."satellite_contact_request" Meter

Yeah I agree. I think we should also add it to the PR checklist. I can put that and updating the style doc on my todo list for tomorrow

grafael

comment created time in 2 months

create branch storj/storj

branch: green/ge-concurrency-refactor

created branch time in 2 months

Pull request review comment storj/storj

storagenode: add new metric satellite.request

 storj.io/storj/satellite/repair/checker."remote_segments_lost" IntVal
 storj.io/storj/satellite/repair/checker."remote_segments_needing_repair" IntVal
 storj.io/storj/satellite/satellitedb."audit_reputation_alpha" FloatVal
 storj.io/storj/satellite/satellitedb."audit_reputation_beta" FloatVal
+storj.io/storj/storagenode/contact."satellite_contact_request" Meter

Excluding the mon.Task() calls that are in basically every function across the codebase (and which probably make up the majority of the mon.X() calls you are referring to), my understanding after talking with Patrick is that any specific stat we added with intent should be in the lock file. We should eventually add locks to the existing metrics that aren't currently locked; I added some earlier today to audit metrics that were added before monkit.lock existed. What data science is concerned about is that when the code inevitably changes, metrics we added intentionally will silently go away, either because they were not tracked properly or because a refactor made the metric obsolete. So if we are deliberately adding a metric (mon.IntVal, mon.FloatVal, mon.Counter, mon.Meter, etc.), we should make sure to lock it.

grafael

comment created time in 2 months

Pull request review comment storj/storj

storagenode: add new metric satellite.request

 storj.io/storj/satellite/repair/checker."remote_segments_lost" IntVal
 storj.io/storj/satellite/repair/checker."remote_segments_needing_repair" IntVal
 storj.io/storj/satellite/satellitedb."audit_reputation_alpha" FloatVal
 storj.io/storj/satellite/satellitedb."audit_reputation_beta" FloatVal
+storj.io/storj/storagenode/contact."satellite_contact_request" Meter

I actually think we should be adding things here by default. I just had a brief convo with Patrick about it. The purpose of monkit.lock is to give us a dictionary of the stats we report to monkit, so that we don't accidentally remove reporting when code moves around. He couldn't think of any situation off the top of his head where we'd report a monkit stat from this repository and not want it in the lock file.

grafael

comment created time in 2 months
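As a concrete illustration of this policy, using the metric from the hunk above: a deliberately added stat is tagged in the code, and the generator writes a matching dictionary line into monkit.lock. The `Mark(1)` call shape here is an assumption about monkit's Meter API, not copied from the PR.

```go
// In storagenode/contact: a deliberately added metric, tagged so that
// `go generate ./scripts/check-monitoring.go` records it in monkit.lock.
mon.Meter("satellite_contact_request").Mark(1) //locked

// Resulting monkit.lock entry (the stat's dictionary line):
// storj.io/storj/storagenode/contact."satellite_contact_request" Meter
```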

Pull request review comment storj/storj

storagenode: add new metric satellite.request

 func (chore *Chore) Run(ctx context.Context) (err error) {
 		}
 
 		for _, satellite := range satellites {
+			mon.Counter("satellite_gracefulexit_request").Inc(1) //locked

Oh and you can also do .Inc(len(satellites)) instead of incrementing inside the loop. I didn't think about that before.

grafael

comment created time in 2 months

Pull request review comment storj/storj

storagenode: add new metric satellite.request

 func (chore *Chore) Run(ctx context.Context) (err error) {
 		}
 
 		for _, satellite := range satellites {
+			mon.Counter("satellite_gracefulexit_request").Inc(1) //locked

could also be worth having mon.IntVal("satellite_num_gracefulexits").Observe(len(satellites)) right after the ListGracefulExits call above.

grafael

comment created time in 2 months

Pull request review comment storj/storj

storagenode: add new metric satellite.request

 func (chore *Chore) Run(ctx context.Context) (err error) {
 			interval := initialBackOff
 			attempts := 0
 			for {
+
+				mon.Counter("satellite.request").Inc(1)

For future reference, you can click the refresh button next to a name in the reviewers panel to re-request someone's review, and they will be notified. Usually I will wait for that instead of comment replies.

grafael

comment created time in 2 months

Pull request review comment storj/storj

storagenode: add new metric satellite.request

 func (chore *Chore) Run(ctx context.Context) (err error) {
 
 	err = chore.Loop.Run(ctx, func(ctx context.Context) (err error) {
 		defer mon.Task()(&ctx)(&err)
+
+		mon.Counter("satellite.request").Inc(1)

Regarding "we'll need all graceful exits (even if there are no graceful exits)": if this is true, then the original position is where you want the metric, or you need a different metric in that position. The metric inside the loop will be triggered once per satellite (if 3 satellites are exiting, it will increment the counter 3 times, but if 0 are exiting, it won't increment the counter at all).

grafael

comment created time in 2 months

Pull request review comment storj/storj

satellite/satellitedb: separate uuid creation from db layer

 func (users *users) GetByEmail(ctx context.Context, email string) (_ *console.Us
 // Insert is a method for inserting user into the database
 func (users *users) Insert(ctx context.Context, user *console.User) (_ *console.User, err error) {
 	defer mon.Task()(&ctx)(&err)
-	userID, err := uuid.New()
-	if err != nil {
-		return nil, err
+
+	if user.ID.IsZero() {

If we implement the random generation at a lower level (in CreateUser in service.go), I don't think randomness validation is necessary since we will be sure of how it is being generated.

VinozzZ

comment created time in 2 months

Pull request review comment storj/storj

storagenode: add new metric satellite.request

 func (chore *Chore) Run(ctx context.Context) (err error) {
 			interval := initialBackOff
 			attempts := 0
 			for {
+
+				mon.Counter("satellite.request").Inc(1)

Also, I would suggest satellite_request instead of satellite.request since we have generally used underscores in other metric names.

grafael

comment created time in 2 months

Pull request review comment storj/storj

storagenode: add new metric satellite.request

 func (chore *Chore) Run(ctx context.Context) (err error) {
 
 	err = chore.Loop.Run(ctx, func(ctx context.Context) (err error) {
 		defer mon.Task()(&ctx)(&err)
+
+		mon.Counter("satellite.request").Inc(1)

Maybe it would be better to make this a different metric so the two can be distinguished from one another? Also, this placement will only increment the counter once for all graceful exits; in fact, it will increment the counter at every interval even when there are no graceful exits. You probably want to add the metric in the loop below (line 73ish).

grafael

comment created time in 2 months

Pull request review comment storj/storj

storagenode: add new metric satellite.request

 func (chore *Chore) Run(ctx context.Context) (err error) {
 			interval := initialBackOff
 			attempts := 0
 			for {
+
+				mon.Counter("satellite.request").Inc(1)

You can add //locked to the end of this line and then run go generate ./scripts/check-monitoring.go to add the stat to the monkit.lock file.

grafael

comment created time in 2 months

push event storj/storj

Ivan Fraixedes

commit sha 8e1e4cc342573bcb4b9037754ccd3eab56a0b0e0

piecestore: Fix invalid comment and typos (#3604)

view details

Moby von Briesen

commit sha da9da249875045798f0b7967ebf11c775733758f

add periods to end of bad blobs comments

view details

Moby von Briesen

commit sha 62274222f8ef21786c8f7cfae2eb09370ceeed6b

Merge branch 'master' of github.com:storj/storj into green/audit-classification

view details

push time in 2 months

push event storj/storj

Jeff Wendling

commit sha 53176dcb0ee9c0bccd11f04d830295b068913ffe

pkg/rpc/rpcstatus: do not depend on grpc/drpc build mode

if your server is built to make drpc connections, clients can still connect with grpc. thus, your responses to grpc clients must still look the same, so we have to have all of our status wrapping include codes for both the drpc and grpc servers to return the right thing.

Change-Id: If99fa0e674dec2e20ddd372a827f1c01b4d305b2

view details

Ivan Fraixedes

commit sha b5b4543df32f4a93219c0581481e1a3c2c7ace39

docs/blueprints: Deletion performance (#3520)

Creates the blueprint document for changing the mechanism to delete pieces and improve its performance.

view details

Nikolai Siedov

commit sha 24318d74b3041685ef8c76f572a5531523a2a554

storagenode/console: show satellite url in satellite selection (#3602)

view details

Michal Niewrzal

commit sha 5964502ce0f1f4f9807545f4e38b76da6ed403ca

uplink/metainfo: remove GetObject from download Batch (#3596)

view details

Ivan Fraixedes

commit sha 46ad27c5f22a220d825ddbb86b11ad4b19d17a3b

docs/blueprints: Change adapt endpoint by creating (#3601)

Replace a sentence which indicates that we're creating and new endpoint in the storage node side rather than adapting the current one.

view details

Moby von Briesen

commit sha e7ce1ebde92fdd27f34f01fd283d13738d92cae1

typo

view details

Moby von Briesen

commit sha 57b8ba2cfae22296a214a55c2f070372563a8c7b

Merge branch 'master' of github.com:storj/storj into green/audit-classification

view details

push time in 2 months

push event storj/storj

Moby von Briesen

commit sha a41a29aefb18b78d789c7a9c7a29e20d3db53463

add tests to ensure unknown errors do not trigger containment

view details

Moby von Briesen

commit sha b7679c9fabb3c304b5bbc0951908843460937bb4

add monkit stats to lockfile

view details

push time in 2 months

pull request comment storj/storj

satellite/satellitedb: separate uuid creation from db layer

@rikysya if we keep user ID creation in the DB layer, it will mean we cannot redeem a referral token in one step. We would have to verify and reserve the token, then create the user on the satellite, then finally register the new user's ID with the token on the referral manager. If we create the user ID outside of the DB layer, we can validate and redeem the referral token in one step, then create the user on the satellite. Then we do not have to worry about the token getting stuck in the middle of validation and redemption if there is some error creating the user.

I am not sure what a better solution is, but it would be nice if we could avoid two separate requests to the referral manager.

VinozzZ

comment created time in 2 months

push event storj/storj

Maximillian von Briesen

commit sha 2524cc5d42fdbf605690e0175c51c67815b41644

pkg/pb: remove unneeded proto imports (#3585)

view details

Jeff Wendling

commit sha ecd2ef4a21cc0aac4551fd1ea0407b8d324df689

all: build release fully dprc and test in mixed mode Change-Id: I3bded3edf25a0b113601c8b12ecf1337f596649b

view details

littleskunk

commit sha 8b3444e0883fbdd961f8f4a3b2b6432d83e53ea4

satellite/nodeselection: don't select nodes that haven't checked in for a while (#3567)

* satellite/nodeselection: dont select nodes that havent checked in for a while
* change testplanet online window to one minute
* remove satellite reconfigure online window = 0 in repair tests
* pass timestamp into UpdateCheckIn
* change timestamp to timestamptz
* edit tests to set last_contact_success to 4 hours ago
* fix syntax error
* remove check for last_contact_success > last_contact_failure in IsOnline

view details

Kaloyan Raev

commit sha f4a626bbc63f6379256bbe73ce81f570471c5505

storagenode-updater: re-enable auto-update storagenode-updater (#3588)

storagenode-updater: re-enable auto-update for storagenode-updater

view details

Nikolai Siedov

commit sha 3fe518d5476d28ba1578d51c6434b7e4a3772432

satellite: added ability to inject stripe public key post build (#3560)

view details

Nikolay Yurchenko

commit sha fdcf328469a8fee6ee73ff82ecdd10d792df350d

short name field removed from registration page (#3584)

view details

Michal Niewrzal

commit sha ec41a51bbbe98cc930b10b091841b4ba891bb012

metainfo/loop: refactor iterator to be reusable (#3578)

view details

Michal Niewrzal

commit sha 8ea09d55561a0a5aa441a8d1237859818e60df98

cmd/segment-reaper: add iteration over DB (#3576)

view details

littleskunk

commit sha c52c7275ad84fc0aa3c66c8dc463154a7897864b

satellite/repair: reduce upload timeout (#3597)

view details

Moby von Briesen

commit sha 64568a3119011a34ee3b4f6abe265aa7245eddb5

Merge branch 'master' of github.com:storj/storj into green/audit-classification

view details

Natalie Villasana

commit sha 3a0b71ae5b8a0f2db7ecb5bdda3543cd0619a48f

pkg/pb/referralmanager: update RedeemTokenRequest, rm ReserveToken (#3593)

view details

Jeff Wendling

commit sha f3b20215b0d7e90d249c440b5883b5a3c6a2a6ad

pkg/{rpc,server,tlsopts}: pick larger defaults for buffer sizes

these may not be optimal but they're probably better based on our previous testing. we can tune better in the future now that the groundwork is there.

Change-Id: Iafaee86d3181287c33eadf6b7eceb307dda566a6

view details

Moby von Briesen

commit sha bc1a7ff3329fa88fa72bfe71e3b35ea5c05d388f

add tests to make sure nodes that time out are added to containment

view details

Moby von Briesen

commit sha 28ea5a150e29b0f8bdcc4b31b99e4163df76e56b

add bad blobs store

view details

Moby von Briesen

commit sha 08d90d702cf8a9d2e7a34160a9f1726beda493ac

call "Skipped" "Unknown"

view details

Moby von Briesen

commit sha 3a373e45ffe644d1eb4bb07a85696ea740f1f079

Merge branch 'master' of github.com:storj/storj into green/audit-classification

view details

push time in 2 months

pull request comment storj/storj

satellite/satellitedb: separate uuid creation from db layer

Looks good but I'll hold off on approving until someone from the Kiev team has a chance to take a look.

VinozzZ

comment created time in 2 months

pull request comment storj/storj

pkg/pb/referralmanager: update RedeemTokenRequest, rm ReserveToken

Let's make sure to wait on #3600 before removing the reserve token stuff

navillasa

comment created time in 2 months

Pull request review comment storj/storj

pkg/pb/referralmanager: update RedeemTokenRequest, rm ReserveToken

 message GetTokensResponse {
  repeated bytes token = 1;
 }
 
-message ReserveTokenRequest {
-    bytes token = 1;
-    bytes redeeming_satellite_id = 2 [(gogoproto.customtype) = "NodeID", (gogoproto.nullable) = false];
-}
-
-message ReserveTokenResponse{}
-
 message RedeemTokenRequest {
     bytes token = 1;
     bytes user_id = 2;

I think it would make sense to change this to redeem_user_id and redeem_satellite_id just to eliminate any confusion on the referral manager side

navillasa

comment created time in 2 months

Pull request review comment storj/storj

satellite/satellitedb: separate uuid creation from db layer

 func (s *Service) CreateUser(ctx context.Context, user CreateUser, tokenSecret R
 	}
 
 	err = withTx(tx, func(tx DBTx) error {
+		if err != nil {

why is this error check here?

VinozzZ

comment created time in 2 months

Pull request review comment storj/storj

satellite/satellitedb: separate uuid creation from db layer

 func TestUsers(t *testing.T) {
 		require.NoError(t, err)
 
 		// create an user with no partnerID
+		userID, err := uuid.New()

no need for this if you are creating with testrand.UUID below

VinozzZ

comment created time in 2 months

Pull request review comment storj/storj

docs/blueprints: Deletion performance

# Deletion Performance

## Abstract

This document describes the design for improving deletion speed.

## Background

The current design requires uplinks to send deletion information to all the storage nodes. This ends up taking a considerable amount of time.

There are a few goals with regard to deletions:

- We need to ensure that uplinks are responsive.
- We need to ensure that storage nodes aren't storing too much garbage, since it reduces the overall capacity of the network.
- We need to ensure that satellites aren't storing too much garbage, since that increases the running cost of the satellite.

We traced the uplink while removing files of different sizes and obtained the following results. All times are in milliseconds; "Dialing" through "Begin delete segment" are satellite calls. "Begin delete segment" deletes the segment metadata and returns the list of order limits and the private key of each piece.

| File size | Inline | Total | Dialing | Information gathering (bucket, object, …) | Mark beginning of deletion & get list of segments | Begin delete segment | Storage nodes |
|-----------|--------|------:|--------:|------:|------:|------:|------:|
| 1 KiB | yes | 858 | 277 | 272 | 144 | 140 | 0 |
| 4 KiB | yes | 910 | 293 | 278 | 144 | 142 | 0 |
| 5 KiB | no | 1959 | 328 | 275 | 142 | 513 | 652 |
| 10 KiB | no | 2451 | 308 | 278 | 141 | 560 | 1134 |
| 2.2 MiB | no | 2643 | 325 | 285 | 149 | 560 | 1273 |
| 256 MiB | no | 7591 | 333 | 275 | 145 | 1539 | 6644 |

The trace graphs (SVG files) from which this table was extracted can be found in a [GitHub gist](https://gist.github.com/ifraixedes/b178035b53161cb391b67026b70cba52).

## Design

Delegate to the satellite the job of making the delete requests to the storage nodes.

The uplink will send the delete request to the satellite and wait for its response.

The satellite will send the delete requests to all the storage nodes that have segments and delete the segments from the pointerDB. Finally, it will respond to the uplink.

The satellite will communicate with the storage nodes:

- Using the current network protocol, RPC/TCP.
- The requests will be sent with long-tail cancellation, in the same way as upload.
- It won't send requests to offline storage nodes.
- It will send requests as concurrently as possible.

The satellite will use a backpressure mechanism to ensure that it remains responsive to the uplink. Garbage collection will delete the pieces on those storage nodes that didn't receive the delete requests.

The satellite could use a connectionless network protocol, like UDP, to send delete requests to the storage nodes. We discard this change for the first implementation and will consider it in the future if we struggle to accomplish the goals.

## Rationale

We found some alternative approaches. We list them here, presenting their advantages and trade-offs relative to the designed approach.

### (1) Uplink deletes pieces from the storage nodes more efficiently

As it currently does, the uplink would delete the pieces from the storage nodes, but with the following changes:

1. Reduce timeouts for delete requests.
1. Undeleted pieces will eventually get garbage collected, so we can allow some of them to get lost.
1. The uplink would send the request without waiting for the deletion to happen. For example, nodes could internally delete things asynchronously.
1. Send delete segment requests as concurrently as possible.

Additionally:

- We could change the transport protocol for delete requests to a connectionless protocol, like UDP, to reduce dialing time.
- We could probabilistically skip deleting pieces to minimize the number of requests. For example, we could send the deletion requests to only 50% of the pieces.

Advantages:

- No extra running and maintenance costs on the satellite side.

Disadvantages:

- Storage nodes will have more garbage than currently, because the uplink does not wait for storage nodes to confirm the operation.
- Storage nodes will have more garbage if we use a connectionless transport protocol.
- Storage nodes will have more garbage if we use a probabilistic approach.

### (2) Satellite deletes pieces from the storage nodes reliably

The uplink will communicate with the satellite as it currently does.

The satellite will take care of communicating with the storage nodes to delete the pieces, using RPC.

Advantages:

- The uplink deletion operation will be independent of the size of the file, guaranteeing that the uplink is always responsive.
- There is no risk of leaving garbage on the storage nodes when the deletion operation is interrupted.
- In general, the storage nodes will have less garbage from deletions.

Disadvantages:

- The satellite requires a new chore to delete the pieces from the storage nodes. The increase in network traffic, computation, and data to track the segments to delete will increase the running costs.
- The satellite will have another component, increasing operational costs such as monitoring, replication, etc.

### (3) Satellite deletes pieces from the storage nodes unreliably

The uplink will communicate with the satellite as it currently does.

The satellite will take care of communicating with the storage nodes to delete the pieces, using a connectionless protocol like UDP.

Advantages:

- The uplink deletion operation will be independent of the size of the file, guaranteeing that the uplink is always responsive.
- There is no risk of leaving garbage on the storage nodes when the deletion operation is interrupted.

Disadvantages:

- The satellite requires a new chore to delete the pieces from the storage nodes. The increase in network traffic, computation, and data to track the segments to delete will increase the running costs.
- The satellite will have another component, increasing operational costs such as monitoring, replication, etc.

### Conclusion

Alternative approach (1):

- Is similar to the current mechanism, but with some improvements toward the goals.
- Doesn't add more load to the satellite, but we cannot trust the uplink to delete the pieces or to report the non-deleted ones.

Alternative approaches (2) and (3) are similar.

Approach (2) has the advantage of guaranteeing less garbage left on the storage nodes, at the expense of requiring more network traffic.

Approach (3) requires less network traffic, but it may not require less computation, considering that we may need to encrypt the data sent over UDP.

Approach (3) may get rid of the satellite garbage faster.

Both approaches (2) and (3) present the problem of increasing garbage on the satellite side.

Taking one of these approaches will require a study of how to keep as little garbage as possible.

## Implementation

### (1) Storage nodes:

1. Adapt the protocol buffer definitions for the delete operation. The satellite doesn't have to send order limits; it only has to send the piece ID.
1. Adapt the delete endpoint to receive requests from the satellite.

### (2) Satellite:

1. Implement the delete request logic with a backpressure mechanism that confirms the operation only when a certain number of storage nodes confirm success. This is similar to what we do for upload long-tail cancellations.
1. Adapt the protocol buffer definitions for the delete operation. The current uplink RPC requests are:

    1. `BeginDeleteObject`: retrieves the _stream ID_.
    1. `ListSegments`: uses the _stream ID_ to retrieve the position index of all the segments.
    1. `BeginDeleteSegments`: uses the _stream ID_ and position index to retrieve a list of _addressed order limits_.

    Because the uplink won't send the delete requests to the storage nodes, the delete operation can be simplified into one satellite request. The satellite will respond, with an empty body, when the deletion ends.

### (3) Uplink:

1. Change the logic to not send delete requests to storage nodes.
1. The uplink `rm` command will wait until the satellite responds.

### Considerations

If we plan to release the feature in several steps:

1. Implement and independently release (1) and (2) without removing the logic of the current functionality.
1. Implement and release (3).
1. Announce when previous versions will stop working properly.
1. Remove the old delete logic from the storage node and satellite and release independently.

## Open issues (if applicable)

1. Should we track how much extra each storage node is storing due to unsent deletes? For storage nodes that accumulate too much garbage, we could send a garbage collection request outside of the normal schedule.
1. Discuss the backward incompatibility we may introduce when adapting the protocol buffer definitions on the storage node and satellite side. Should we reuse the current definitions as much as possible even though they aren't ideal?
1. How many storage nodes must confirm the deletion succeeded before the satellite returns a response to the uplink?

Why can't we just respond to the uplink as soon as we delete from metainfo, and wait for the storage node deletion responses independently? That way, from the uplink's perspective, the delete is instantaneous, and we do the deletion from the storage nodes behind the scenes.

ifraixedes

comment created time in 2 months

PR opened storj/storj

satellite/audit: do not contain nodes for unknown errors WIP

What: In some cases during audit (Verify + Reverify), skip nodes instead of containing them. We only want to contain nodes when the connection closes or times out during the request; in other error cases we just want to skip them.

Why: We do not want to risk disqualifying nodes for bugs that are not their fault. Until we are confident that there are no bugs in our code, we should not penalize nodes for unknown errors. This PR will allow us to re-enable disqualification and feel confident that any disqualified nodes deserve to be disqualified. https://storjlabs.atlassian.net/browse/V3-3024

Please describe the tests:

  • Test 1:
  • Test 2:

Please describe the performance impact:

Code Review Checklist (to be filled out by reviewer)

  • [ ] NEW: Are there any Satellite database migrations? Are they forwards and backwards compatible?
  • [ ] Does the PR describe what changes are being made?
  • [ ] Does the PR describe why the changes are being made?
  • [ ] Does the code follow our style guide?
  • [ ] Does the code follow our testing guide?
  • [ ] Is the PR appropriately sized? (If it could be broken into smaller PRs it should be)
  • [ ] Does the new code have enough tests? (every PR should have tests or justification otherwise. Bug-fix PRs especially)
  • [ ] Does the new code have enough documentation that answers "how do I use it?" and "what does it do?"? (both source documentation and higher level, diagrams?)
  • [ ] Does any documentation need updating?
  • [ ] Do the database access patterns make sense?
+35 -8

0 comment

3 changed files

pr created time in 2 months

create branch storj/storj

branch: green/audit-classification

created branch time in 2 months

Pull request review comment storj/storj

satellite/nodeselection: don't select nodes that haven't checked in for a while

 func BenchmarkOverlay(b *testing.B) {
 						Timestamp:  now,
 						Release:    false,
 					},
-				}, overlay.NodeSelectionConfig{
-					UptimeReputationLambda: 0.99,
-					UptimeReputationWeight: 1.0,
-					UptimeReputationDQ:     0,
-				})
+				},
+					now,
+					overlay.NodeSelectionConfig{
+						UptimeReputationLambda: 0.99,
+						UptimeReputationWeight: 1.0,
+						UptimeReputationDQ:     0,
+					})

nit - indentation

littleskunk

comment created time in 2 months

push event storj/storj

Maximillian von Briesen

commit sha 2524cc5d42fdbf605690e0175c51c67815b41644

pkg/pb: remove unneeded proto imports (#3585)

view details

push time in 2 months

delete branch storj/storj

delete branch: green/referral-pb

delete time in 2 months
