isaacs (isaacs) · npm, Inc. · Oakland CA · http://blog.izs.me
npm inventor, former CEO, now Chairman. Former Node BDFL. Author of JavaScripts. All opinions are my own. Literally all of them. I own them all. He/him.

indutny/caine 144

Friendly butler

isaacs/abbrev-js 137

Like ruby's Abbrev module

isaacs/async-cache 116

Cache your async lookups and don't fetch the same thing more than necessary.

dawsbot/config-chain 95

Handle configuration once and for all

isaacs/block-stream 46

A stream of fixed-size blocks

isaacs/.vim 41

My vim settings

dominictarr/stream-punks 40

discussion repo for streams

isaacs/back-to-markdown.css 19

Turns any markdown editor into a WYSIWYG editor

isaacs/ahyane 11

A blog engine that builds html from text files.

isaacs/blog.izs.me 8

Gatsby app that powers my blog

pull request comment npm/rfcs

acceptDependencies package.json field

So, this is implemented in arborist, and as things stand it will make its way into npm 7.x. It seems pretty uncontroversial, but I'm adding it to the agenda so we can at least officially say so in the RFC meeting.

coreyfarrell

comment created time in 7 hours

pull request comment npm/rfcs

RFC: Add staging workflow for CI and human interoperability

In your example, what happens if I've staged 1.2.4 and then I end up staging 2.0.0 and publishing that, and never publishing any 1.x above 1.2.3? What happens to those depending on the staged v1.2.4?

So, iiuc, the flow is:

  • publish ..., 1.2.2, 1.2.3
  • stage 1.2.4
  • someone does npm i --include-staged foo@1.2.4 (they'd have to specify a version; even with --include-staged, real published versions will be prioritized)
  • stage 2.0.0
  • promote 2.0.0

With the current v7.0.0-beta status quo (@npmcli/arborist and npm-pick-manifest), the user who installed the staged 1.2.4 will have https://registry.npmjs.org/foo/-/sha512-asdfasdfasdf/foo-1.2.4.tgz in their package-lock.json, and they'll just keep on fetching it.

One potential issue to be noted here is that their package.json will save ^1.2.4 in the dependencies object, so anyone installing their package will also have to use --include-staged or get an ETARGET error. I'm inclined to say this is a minor/acceptable footgun, since staged deps will be unlikely to be used in this case anyway. (And it's somewhat easily worked around; we could even pretty easily add a warning printed when installing a staged dep to say that publishing this package won't work until the staged dep is promoted.)
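For concreteness, a rough sketch of what that lockfile entry might look like (abbreviated, assuming the npm 7 lockfile format; the integrity value is a placeholder):

{
  "packages": {
    "node_modules/foo": {
      "version": "1.2.4",
      "resolved": "https://registry.npmjs.org/foo/-/sha512-asdfasdfasdf/foo-1.2.4.tgz",
      "integrity": "sha512-..."
    }
  }
}

while the package.json dependencies object would get "foo": "^1.2.4".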

One nice feature about this vs floating a git dep is that there's no added work to switch to the "real" version once it gets promoted. The dep will just start working without the --include-staged flag. (You'd still need to float a git dep if waiting on a PR or release, of course.)

In other words, while I can concede that it is important to inspect the artifact before promoting it, I don't think the standard install flow - flag or not - is the appropriate place to do it.

I mean, technically speaking, if we give people access to the staged tarball, we are giving them access to publish it. It feels a little overly paternalistic (not to mention more work for the CLI and registry) to require it to be a radically different flow.

I do think "install the package" is a common way that you'll want to go about inspecting a staged publish. In my own experience with situations where I want to get someone to poke at a possible publish, I'm most often saying something like "Can you npm i isaacs/foo#some-branch and see if that fixes your problem?" It's pretty inconvenient if I'd have to download the artifact, put it somewhere, then ask them to fetch it from there, etc.

  1. The --stage flag on install only works when installing a single package

(I'd rephrase this from "only works when installing a single package" to "only works for the packages explicitly listed in command line positional arguments". I'd assume that if I do npm i foo@1.2.3 bar@2.3.4 --include-staged it should give me staged versions of both foo and bar.)

This is a tricky one. It's not hard to do, necessarily (we'd just drop that from the options when resolving for any package that isn't in the explicitRequests list), but it does cut off the capability to test multiple staged deps all together, and makes the package.json issue worse.
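Purely as an illustration of that idea (not actual arborist code; includeStaged and explicitRequests are just the names used in this discussion):

// only keep the staged-resolution flag for specs named on the command line
const optionsFor = (spec, opts, explicitRequests) =>
  explicitRequests.has(spec.name)
    ? opts
    : { ...opts, includeStaged: false }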

What if I do actually require something that's only in a staged version? And I want to test that thing I'm doing in some other app?

For example, let's say that nyc stages a v16.0.0 release. It has some new features, and it's a big release, so I want to take some time and run a battery of analysis and tests as I'm developing the next major of tap that depends on it, and I want to do that in parallel with the nyc stage acceptance process.

It'd be nice if I could install the staged version, and explicitly save it as a dep. Then, do my work, get it where I want it, but I want to play test it in an app that needs some of that new functionality. So, I go over to that app, and run npm i tap@16.0.0 --include-staged. Should it be unable to fetch the staged nyc version? If so, we're asking people to think about metadeps, and it's a long-standing fundamental design principle that npm doesn't ask people to do that as a general rule.

  2. Installing from --stage is not reflected in the package.json or package-lock.json

This is much easier. We could say that --include-staged implies --save=false by default. I would want to split these two concerns apart, though, just for clarity.

Saving to the package.json automatically, yeah, that's a footgun. Because, then if I publish my package, and it's not staged, you still would have to do --include-staged to get my dep, which might not be what I want. (I should still be able to explicitly save it, by specifying --save on the command line.)

Not saving to the package-lock.json seems like a much bigger step to me. If the URL is accessible, and includes the content-address, then there's really no harm in letting it be there, and I'd be worried about quietly getting into a state where the lockfile doesn't reflect the files on disk, so devs are working on different systems without realizing it, the integrity is harder to inspect, etc. If it turns out that a staged publish is compromised, checked-in lockfiles are a pretty good way to detect the extent of the damage (just based on what we've seen with tracking down malware impact in published packages).
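For example, here's a quick hypothetical script (not an npm feature, and assuming the npm 7 lockfile format) for checking whether a given staged version made it into a project's lockfile:

const fs = require('fs')

// report every install location of name@version found in a package-lock.json
const findInLockfile = (file, name, version) => {
  const lock = JSON.parse(fs.readFileSync(file, 'utf8'))
  return Object.entries(lock.packages || {})
    .filter(([loc, meta]) =>
      loc.endsWith(`node_modules/${name}`) && meta.version === version)
    .map(([loc, meta]) => ({ location: loc, resolved: meta.resolved }))
}

console.log(findInLockfile('package-lock.json', 'foo', '1.2.4'))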

djsauble

comment created time in 8 hours

create branch isaacs/node

branch : npm-6.14.1

created branch time in 9 hours

PR opened nodejs/node

deps: upgrade npm to 6.14.1

6.14.1 (2020-02-26)

  • 303e5c11e hosted-git-info@2.8.7 Fixes a regression where scp-style git urls are passed to the WhatWG URL parser, which does not handle them properly. (@isaacs)

6.14.0 (2020-02-25)


+11248 -9577

0 comment

244 changed files

pr created time in 9 hours

create branch npm/node

branch : npm-6.14.1

created branch time in 9 hours

push event npm/cli

isaacs

commit sha 303e5c11e7db34cf014107aecd2e81c821bfde8d

hosted-git-info@2.8.7

view details

isaacs

commit sha 1de223bd2109f0789c03b2e669549bc15087f6fd

docs: changelog for 6.14.1

view details

isaacs

commit sha 3b9c13599a0af1bb0b4d80fc7a9b925e0b518d2c

6.14.1

view details

push time in 10 hours

PR merged npm/cli

Release v6.14.1

6.14.1 (2020-02-26)

  • 303e5c11e hosted-git-info@2.8.7 Fixes a regression where scp-style git urls are passed to the WhatWG URL parser, which does not handle them properly. (@isaacs)
+60 -26

1 comment

6 changed files

isaacs

pr closed time in 10 hours

push event npm/cli

isaacs

commit sha 303e5c11e7db34cf014107aecd2e81c821bfde8d

hosted-git-info@2.8.7

view details

isaacs

commit sha 1de223bd2109f0789c03b2e669549bc15087f6fd

docs: changelog for 6.14.1

view details

isaacs

commit sha 3b9c13599a0af1bb0b4d80fc7a9b925e0b518d2c

6.14.1

view details

push time in 10 hours

release npm/cli

v6.14.1

released time in 10 hours

created tag npm/cli

tag v6.14.1

the package manager for JavaScript

created time in 10 hours

push event npm/cli

isaacs

commit sha 3b9c13599a0af1bb0b4d80fc7a9b925e0b518d2c

6.14.1

view details

push time in 10 hours

Pull request review comment npm/libnpmaccess

fix: remove figgy-pudding

 cmd.lsCollaborators.stream = (spec, user, opts) => {     opts = user     user = undefined   }-  opts = AccessConfig(opts)   spec = npar(spec)   validate('OSO|OZO', [spec, user, opts])   const uri = `/-/package/${eu(spec.name)}/collaborators`-  return npmFetch.json.stream(uri, '*', opts.concat({-    query: { format: 'cli', user: user || undefined },-    mapJson (value, [key]) {-      if (value === 'read') {-        return [key, 'read-only']-      } else if (value === 'write') {-        return [key, 'read-write']-      } else {-        return [key, value]-      }-    }-  }))+  opts.query = { format: 'cli', user: user || undefined }+  opts.mapJSON = mapJSON

Not guaranteed to have a fresh opts object, so this should be:

opts = {
  ...opts,
  query: { format: 'cli', user: user || undefined },
  mapJSON
}
claudiahdz

comment created time in 10 hours

Pull request review comment npm/libnpmaccess

fix: remove figgy-pudding

 cmd.lsCollaborators = (spec, user, opts) => {     opts = user     user = undefined   }-  opts = AccessConfig(opts)-  return pwrap(opts, () => {-    return getStream.array(-      cmd.lsCollaborators.stream(spec, user, opts)-    ).then(data => data.reduce((acc, [key, val]) => {-      if (!acc) {-        acc = {}-      }-      acc[key] = val-      return acc-    }, null))+  return new Promise((resolve, reject) => {+    return cmd.lsCollaborators.stream(spec, user, opts)+      .collect()+      .then(data => {+        const collabList = data.reduce((acc, [key, val]) => {+          if (!acc) {+            acc = {}+          }+          acc[key] = val+          return acc+        }, null)+        return resolve(collabList)+      })+      .catch(reject)

No need to wrap in a promise. This whole bit can be just:

    return cmd.lsCollaborators.stream(spec, user, opts)
      .collect()
      .then(data => data.reduce((acc, [key, val]) => {
        if (!acc) {
          acc = {}
        }
        acc[key] = val
        return acc
      }, null))
claudiahdz

comment created time in 10 hours

Pull request review comment npm/libnpmaccess

fix: remove figgy-pudding

 cmd.lsCollaborators.stream = (spec, user, opts) => {     opts = user     user = undefined   }-  opts = AccessConfig(opts)   spec = npar(spec)   validate('OSO|OZO', [spec, user, opts])   const uri = `/-/package/${eu(spec.name)}/collaborators`-  return npmFetch.json.stream(uri, '*', opts.concat({-    query: { format: 'cli', user: user || undefined },-    mapJson (value, [key]) {-      if (value === 'read') {-        return [key, 'read-only']-      } else if (value === 'write') {-        return [key, 'read-write']-      } else {-        return [key, value]-      }-    }-  }))+  opts.query = { format: 'cli', user: user || undefined }+  opts.mapJSON = mapJSON+  return npmFetch.json.stream(uri, '*', opts) }  cmd.tfaRequired = (spec, opts) => setRequires2fa(spec, true, opts) cmd.tfaNotRequired = (spec, opts) => setRequires2fa(spec, false, opts)-function setRequires2fa (spec, required, opts) {-  opts = AccessConfig(opts)-  return new opts.Promise((resolve, reject) => {-    spec = npar(spec)-    validate('OBO', [spec, required, opts])-    const uri = `/-/package/${eu(spec.name)}/access`-    return npmFetch(uri, opts.concat({-      method: 'POST',-      body: { publish_requires_tfa: required },-      spec,-      ignoreBody: true-    })).then(resolve, reject)-  }).then(() => true)+function setRequires2fa (spec, required, opts = {}) {+  return new Promise((resolve, reject) => {+    try {+      spec = npar(spec)+      validate('OBO', [spec, required, opts])+      const uri = `/-/package/${eu(spec.name)}/access`+      return npmFetch(uri, {+        ...opts,+        method: 'POST',+        body: { publish_requires_tfa: required },+        spec,+        ignoreBody: true+      })+        .then(() => resolve(true))+        .catch(reject)+    } catch (err) {+      return reject(err)+    }

Simpler way:

const setRequires2fa = (spec, required, opts = {}) => Promise.resolve().then(() => {
  spec = npar(spec)
  validate('OBO', [spec, required, opts])
  const uri = `/-/package/${eu(spec.name)}/access`
  return npmFetch(uri, {
    ...opts,
    method: 'POST',
    body: { publish_requires_tfa: required },
    spec,
    ignoreBody: true
  })
})

Or to really golf it out, even:

const setRequires2fa = (spec, required, opts = {}) => Promise.resolve().then(() =>
  npmFetch(`/-/package/${eu(spec.name)}/access`, {
    ...opts,
    method: 'POST',
    body: { publish_requires_tfa: required },
    spec: npar(spec),
    ignoreBody: true
  }))
claudiahdz

comment created time in 10 hours

Pull request review comment npm/libnpmaccess

fix: remove figgy-pudding

 cmd.lsCollaborators.stream = (spec, user, opts) => {
     opts = user
     user = undefined
   }

else if (!opts) opts = {}
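In context, that'd look something like this (a sketch; the surrounding condition is paraphrased rather than copied from the source):

cmd.lsCollaborators.stream = (spec, user, opts) => {
  if (typeof user === 'object' && opts === undefined) {
    opts = user
    user = undefined
  } else if (!opts) {
    opts = {}
  }
  // ...rest of the function unchanged
}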

claudiahdz

comment created time in 10 hours

Pull request review comment npm/libnpmaccess

fix: remove figgy-pudding

 const npar = spec => {   }   return spec }+const mapJSON = (value, [key]) => {+  if (value === 'read') {+    return [key, 'read-only']+  } else if (value === 'write') {+    return [key, 'read-write']+  } else {+    return [key, value]+  }+}  const cmd = module.exports = {}  cmd.public = (spec, opts) => setAccess(spec, 'public', opts) cmd.restricted = (spec, opts) => setAccess(spec, 'restricted', opts)-function setAccess (spec, access, opts) {-  opts = AccessConfig(opts)-  return pwrap(opts, () => {-    spec = npar(spec)-    validate('OSO', [spec, access, opts])-    const uri = `/-/package/${eu(spec.name)}/access`-    return npmFetch(uri, opts.concat({-      method: 'POST',-      body: { access },-      spec-    }))-  }).then(res => res.body.resume() && true)+function setAccess (spec, access, opts = {}) {+  return new Promise((resolve, reject) => {+    try {+      spec = npar(spec)+      validate('OSO', [spec, access, opts])+      const uri = `/-/package/${eu(spec.name)}/access`+      return npmFetch(uri, {+        ...opts,+        method: 'POST',+        body: { access },+        spec+      })+        .then(() => resolve(true))+        .catch(reject)+    } catch (err) {+      return reject(err)+    }+  }) } -cmd.grant = (spec, entity, permissions, opts) => {-  opts = AccessConfig(opts)-  return pwrap(opts, () => {-    spec = npar(spec)-    const { scope, team } = splitEntity(entity)-    validate('OSSSO', [spec, scope, team, permissions, opts])-    if (permissions !== 'read-write' && permissions !== 'read-only') {-      throw new Error('`permissions` must be `read-write` or `read-only`. Got `' + permissions + '` instead')+cmd.grant = (spec, entity, permissions, opts = {}) => {+  return new Promise((resolve, reject) => {+    try {+      spec = npar(spec)+      const { scope, team } = splitEntity(entity)+      validate('OSSSO', [spec, scope, team, permissions, opts])+      if (permissions !== 'read-write' && permissions !== 'read-only') {+        throw new Error('`permissions` must be `read-write` or `read-only`. 
Got `' + permissions + '` instead')+      }+      const uri = `/-/team/${eu(scope)}/${eu(team)}/package`+      return npmFetch(uri, {+        ...opts,+        method: 'PUT',+        body: { package: spec.name, permissions },+        scope,+        spec,+        ignoreBody: true+      })+        .then(() => resolve(true))+        .catch(reject)+    } catch (err) {+      return reject(err)     }-    const uri = `/-/team/${eu(scope)}/${eu(team)}/package`-    return npmFetch(uri, opts.concat({-      method: 'PUT',-      body: { package: spec.name, permissions },-      scope,-      spec,-      ignoreBody: true-    }))-  }).then(() => true)+  }) } -cmd.revoke = (spec, entity, opts) => {-  opts = AccessConfig(opts)-  return pwrap(opts, () => {-    spec = npar(spec)-    const { scope, team } = splitEntity(entity)-    validate('OSSO', [spec, scope, team, opts])-    const uri = `/-/team/${eu(scope)}/${eu(team)}/package`-    return npmFetch(uri, opts.concat({-      method: 'DELETE',-      body: { package: spec.name },-      scope,-      spec,-      ignoreBody: true-    }))-  }).then(() => true)+cmd.revoke = (spec, entity, opts = {}) => {+  return new Promise((resolve, reject) => {+    try {+      spec = npar(spec)+      const { scope, team } = splitEntity(entity)+      validate('OSSO', [spec, scope, team, opts])+      const uri = `/-/team/${eu(scope)}/${eu(team)}/package`+      return npmFetch(uri, {+        ...opts,+        method: 'DELETE',+        body: { package: spec.name },+        scope,+        spec,+        ignoreBody: true+      })+        .then(() => resolve(true))+        .catch(reject)+    } catch (err) {+      return reject(err)+    }+  }) }  cmd.lsPackages = (entity, opts) => {-  opts = AccessConfig(opts)-  return pwrap(opts, () => {-    return getStream.array(-      cmd.lsPackages.stream(entity, opts)-    ).then(data => data.reduce((acc, [key, val]) => {-      if (!acc) {-        acc = {}-      }-      acc[key] = val-      return acc-    }, null))+  return new Promise((resolve, reject) => {+    return cmd.lsPackages.stream(entity, opts)+      .collect()+      .then(data => {+        const packageList = data.reduce((acc, [key, val]) => {+          if (!acc) {+            acc = {}+          }+          acc[key] = val+          return acc+        }, null)+        return resolve(packageList)+      })+      .catch(reject)   }) } -cmd.lsPackages.stream = (entity, opts) => {+cmd.lsPackages.stream = (entity, opts = {}) => {   validate('SO|SZ', [entity, opts])-  opts = AccessConfig(opts)   const { scope, team } = splitEntity(entity)   let uri   if (team) {     uri = `/-/team/${eu(scope)}/${eu(team)}/package`   } else {     uri = `/-/org/${eu(scope)}/package`   }-  opts = opts.concat({-    query: { format: 'cli' },-    mapJson (value, [key]) {-      if (value === 'read') {-        return [key, 'read-only']-      } else if (value === 'write') {-        return [key, 'read-write']-      } else {-        return [key, value]-      }-    }-  })-  const ret = new PassThrough({ objectMode: true })+  opts.query = { format: 'cli' }+  opts.mapJSON = mapJSON

I think we want to not mutate the opts here, because it hasn't been cloned. Safer to do:

opts = {
  ...opts,
  query: { format: 'cli' },
  mapJSON
}
claudiahdz

comment created time in 10 hours

Pull request review comment npm/libnpmaccess

fix: remove figgy-pudding

 cmd.lsCollaborators = (spec, user, opts) => {
     opts = user
     user = undefined
   }

Need a else if (!opts) opts = {} condition handled here as well.

claudiahdz

comment created time in 10 hours

Pull request review comment npm/libnpmaccess

fix: remove figgy-pudding

 const npar = spec => {   }   return spec }+const mapJSON = (value, [key]) => {+  if (value === 'read') {+    return [key, 'read-only']+  } else if (value === 'write') {+    return [key, 'read-write']+  } else {+    return [key, value]+  }+}  const cmd = module.exports = {}  cmd.public = (spec, opts) => setAccess(spec, 'public', opts) cmd.restricted = (spec, opts) => setAccess(spec, 'restricted', opts)-function setAccess (spec, access, opts) {-  opts = AccessConfig(opts)-  return pwrap(opts, () => {-    spec = npar(spec)-    validate('OSO', [spec, access, opts])-    const uri = `/-/package/${eu(spec.name)}/access`-    return npmFetch(uri, opts.concat({-      method: 'POST',-      body: { access },-      spec-    }))-  }).then(res => res.body.resume() && true)+function setAccess (spec, access, opts = {}) {+  return new Promise((resolve, reject) => {+    try {+      spec = npar(spec)+      validate('OSO', [spec, access, opts])+      const uri = `/-/package/${eu(spec.name)}/access`+      return npmFetch(uri, {+        ...opts,+        method: 'POST',+        body: { access },+        spec+      })+        .then(() => resolve(true))+        .catch(reject)+    } catch (err) {+      return reject(err)+    }+  }) } -cmd.grant = (spec, entity, permissions, opts) => {-  opts = AccessConfig(opts)-  return pwrap(opts, () => {-    spec = npar(spec)-    const { scope, team } = splitEntity(entity)-    validate('OSSSO', [spec, scope, team, permissions, opts])-    if (permissions !== 'read-write' && permissions !== 'read-only') {-      throw new Error('`permissions` must be `read-write` or `read-only`. Got `' + permissions + '` instead')+cmd.grant = (spec, entity, permissions, opts = {}) => {+  return new Promise((resolve, reject) => {+    try {+      spec = npar(spec)+      const { scope, team } = splitEntity(entity)+      validate('OSSSO', [spec, scope, team, permissions, opts])+      if (permissions !== 'read-write' && permissions !== 'read-only') {+        throw new Error('`permissions` must be `read-write` or `read-only`. 
Got `' + permissions + '` instead')+      }+      const uri = `/-/team/${eu(scope)}/${eu(team)}/package`+      return npmFetch(uri, {+        ...opts,+        method: 'PUT',+        body: { package: spec.name, permissions },+        scope,+        spec,+        ignoreBody: true+      })+        .then(() => resolve(true))+        .catch(reject)+    } catch (err) {+      return reject(err)     }-    const uri = `/-/team/${eu(scope)}/${eu(team)}/package`-    return npmFetch(uri, opts.concat({-      method: 'PUT',-      body: { package: spec.name, permissions },-      scope,-      spec,-      ignoreBody: true-    }))-  }).then(() => true)+  }) } -cmd.revoke = (spec, entity, opts) => {-  opts = AccessConfig(opts)-  return pwrap(opts, () => {-    spec = npar(spec)-    const { scope, team } = splitEntity(entity)-    validate('OSSO', [spec, scope, team, opts])-    const uri = `/-/team/${eu(scope)}/${eu(team)}/package`-    return npmFetch(uri, opts.concat({-      method: 'DELETE',-      body: { package: spec.name },-      scope,-      spec,-      ignoreBody: true-    }))-  }).then(() => true)+cmd.revoke = (spec, entity, opts = {}) => {+  return new Promise((resolve, reject) => {+    try {+      spec = npar(spec)+      const { scope, team } = splitEntity(entity)+      validate('OSSO', [spec, scope, team, opts])+      const uri = `/-/team/${eu(scope)}/${eu(team)}/package`+      return npmFetch(uri, {+        ...opts,+        method: 'DELETE',+        body: { package: spec.name },+        scope,+        spec,+        ignoreBody: true+      })+        .then(() => resolve(true))+        .catch(reject)+    } catch (err) {+      return reject(err)+    }+  }) }  cmd.lsPackages = (entity, opts) => {-  opts = AccessConfig(opts)-  return pwrap(opts, () => {-    return getStream.array(-      cmd.lsPackages.stream(entity, opts)-    ).then(data => data.reduce((acc, [key, val]) => {-      if (!acc) {-        acc = {}-      }-      acc[key] = val-      return acc-    }, null))+  return new Promise((resolve, reject) => {

No need to wrap in a promise here. So, the whole function just resolves to:

cmd.lsPackages = (entity, opts) => cmd.lsPackages.stream(entity, opts)
  .collect()
  .then(data => data.reduce((acc, [key, val]) => {
    if (!acc) {
      acc = {}
    }
    acc[key] = val
    return acc
  }, null)
claudiahdz

comment created time in 10 hours

Pull request review comment npm/libnpmaccess

fix: remove figgy-pudding

 const npar = spec => {   }   return spec }+const mapJSON = (value, [key]) => {+  if (value === 'read') {+    return [key, 'read-only']+  } else if (value === 'write') {+    return [key, 'read-write']+  } else {+    return [key, value]+  }+}  const cmd = module.exports = {}  cmd.public = (spec, opts) => setAccess(spec, 'public', opts) cmd.restricted = (spec, opts) => setAccess(spec, 'restricted', opts)-function setAccess (spec, access, opts) {-  opts = AccessConfig(opts)-  return pwrap(opts, () => {-    spec = npar(spec)-    validate('OSO', [spec, access, opts])-    const uri = `/-/package/${eu(spec.name)}/access`-    return npmFetch(uri, opts.concat({-      method: 'POST',-      body: { access },-      spec-    }))-  }).then(res => res.body.resume() && true)+function setAccess (spec, access, opts = {}) {+  return new Promise((resolve, reject) => {+    try {+      spec = npar(spec)+      validate('OSO', [spec, access, opts])+      const uri = `/-/package/${eu(spec.name)}/access`+      return npmFetch(uri, {+        ...opts,+        method: 'POST',+        body: { access },+        spec+      })+        .then(() => resolve(true))+        .catch(reject)+    } catch (err) {+      return reject(err)+    }+  }) } -cmd.grant = (spec, entity, permissions, opts) => {-  opts = AccessConfig(opts)-  return pwrap(opts, () => {-    spec = npar(spec)-    const { scope, team } = splitEntity(entity)-    validate('OSSSO', [spec, scope, team, permissions, opts])-    if (permissions !== 'read-write' && permissions !== 'read-only') {-      throw new Error('`permissions` must be `read-write` or `read-only`. Got `' + permissions + '` instead')+cmd.grant = (spec, entity, permissions, opts = {}) => {+  return new Promise((resolve, reject) => {+    try {+      spec = npar(spec)+      const { scope, team } = splitEntity(entity)+      validate('OSSSO', [spec, scope, team, permissions, opts])+      if (permissions !== 'read-write' && permissions !== 'read-only') {+        throw new Error('`permissions` must be `read-write` or `read-only`. Got `' + permissions + '` instead')+      }+      const uri = `/-/team/${eu(scope)}/${eu(team)}/package`+      return npmFetch(uri, {+        ...opts,+        method: 'PUT',+        body: { package: spec.name, permissions },+        scope,+        spec,+        ignoreBody: true+      })+        .then(() => resolve(true))+        .catch(reject)+    } catch (err) {+      return reject(err)     }-    const uri = `/-/team/${eu(scope)}/${eu(team)}/package`-    return npmFetch(uri, opts.concat({-      method: 'PUT',-      body: { package: spec.name, permissions },-      scope,-      spec,-      ignoreBody: true-    }))-  }).then(() => true)+  }) } -cmd.revoke = (spec, entity, opts) => {-  opts = AccessConfig(opts)-  return pwrap(opts, () => {-    spec = npar(spec)-    const { scope, team } = splitEntity(entity)-    validate('OSSO', [spec, scope, team, opts])-    const uri = `/-/team/${eu(scope)}/${eu(team)}/package`-    return npmFetch(uri, opts.concat({-      method: 'DELETE',-      body: { package: spec.name },-      scope,-      spec,-      ignoreBody: true-    }))-  }).then(() => true)+cmd.revoke = (spec, entity, opts = {}) => {+  return new Promise((resolve, reject) => {

I think this whole bit, with the try/reject and all, can be simplified to just replace pwrap(... with Promise.resolve().then(...

Then you can get rid of the then/catch on the npmFetch returned promise, and instead just return that promise. Anything thrown in the function will reject the promise.
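For example, cmd.revoke could collapse to something like this (a sketch assembled from the diff above, not a tested patch):

cmd.revoke = (spec, entity, opts = {}) => Promise.resolve().then(() => {
  spec = npar(spec)
  const { scope, team } = splitEntity(entity)
  validate('OSSO', [spec, scope, team, opts])
  const uri = `/-/team/${eu(scope)}/${eu(team)}/package`
  return npmFetch(uri, {
    ...opts,
    method: 'DELETE',
    body: { package: spec.name },
    scope,
    spec,
    ignoreBody: true
  })
}).then(() => true)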

claudiahdz

comment created time in 11 hours

PR opened npm/cli

Release v6.14.1

6.14.1 (2020-02-26)

  • 303e5c11e hosted-git-info@2.8.7 Fixes a regression where scp-style git urls are passed to the WhatWG URL parser, which does not handle them properly. (@isaacs)
+58 -24

0 comment

6 changed files

pr created time in 11 hours

push event npm/cli

isaacs

commit sha 303e5c11e7db34cf014107aecd2e81c821bfde8d

hosted-git-info@2.8.7

view details

isaacs

commit sha 1de223bd2109f0789c03b2e669549bc15087f6fd

docs: changelog for 6.14.1

view details

push time in 11 hours

create branch npm/cli

branch : release-v6.14.1

created branch time in 11 hours

created tag npm/hosted-git-info

tag v3.0.4

Provides metadata and conversions from repository urls for Github, Bitbucket and Gitlab

created time in 11 hours

push event npm/hosted-git-info

isaacs

commit sha 0835306193ca508d2b571ee8d608197a9a8b0448

fix: Do not pass scp-style URLs to the WhatWG url.URL Fix #60 PR-URL: https://github.com/npm/hosted-git-info/pull/63 Credit: @isaacs Close: #63 Reviewed-by: @isaacs

view details

isaacs

commit sha 8e0b0ec70b01e048a260abd68a703a6c912cd5af

chore(release): 3.0.4

view details

push time in 11 hours

PR closed npm/hosted-git-info

fix: Do not pass scp-style URLs to the WhatWG url.URL

Fix #60

+19 -4

1 comment

3 changed files

isaacs

pr closed time in 11 hours

issue closed npm/hosted-git-info

[BUG] Failed to parse git url

Hi, I think the new version just breaks the git URL parser. https://github.com/npm/hosted-git-info/blob/6f39e93bae9162663af6f15a9d10bce675dd5de3/index.js#L112

Now it fails to handle a dependency like this: git+ssh://git@git.unlucky.com:RND/electron-tools/some-tool#2.0.1

Can you release a hotfix for this? It breaks a lot of things and is hard to work around in projects.

node version: 12.0.0

@isaacs @darcyclarke

closed time in 11 hours

link89

PR closed npm/hosted-git-info

fix: do not use url.URL to support early node 6 and scp-style URLs

This gets rid of using url.URL, which fixes #60 and #61

This in theory could also be applied to the latest branch. While node v6 isn't supported in the latest branch, it may be a more elegant fix to #60 than leaving url.URL in place.

+5 -10

2 comments

3 changed files

billneff79

pr closed time in 11 hours

pull request comment npm/hosted-git-info

fix: do not use url.URL to support early node 6 and scp-style URLs

Ah, I'd already landed #62 when I saw this, sorry. I agree, this is a bit of a cleaner fix for legacy v6 node users, but since that node version is only barely supported anyway, I think it's fine either way. Closing this for now; if we get complaints about urlencoding of auth in URLs on node v6 below 6.12, we can revisit it :)

billneff79

comment created time in 11 hours

created tag npm/hosted-git-info

tag v2.8.7

Provides metadata and conversions from repository urls for Github, Bitbucket and Gitlab

created time in 11 hours

push event npm/hosted-git-info

isaacs

commit sha f2cdfcf33ad2bd3bd1acdba0326281089f53c5b1

fix: Do not pass scp-style URLs to the WhatWG url.URL Fix #60 (for the legacy branch)

view details

isaacs

commit sha 2d0bb6615ecb8f9ef1019bc0737aab7f6449641f

fix: Do not attempt to use url.URL when unavailable Fix #61 This should not be ported to the latest branch, as Node.js v6 support was dropped there anyway. PR-URL: https://github.com/npm/hosted-git-info/pull/62 Credit: @isaacs Close: #62 Reviewed-by: @isaacs

view details

isaacs

commit sha 7440afa859162051c191e55d8ecfaf69a193b026

chore(release): 2.8.7

view details

push time in 11 hours

push event npm/hosted-git-info

isaacs

commit sha bb123d285cac851110fc3b7a0055646f5341f656

fix: Do not attempt to use url.URL when unavailable Fix #61 This should not be ported to the latest branch, as Node.js v6 support was dropped there anyway.

view details

push time in 12 hours

PR opened npm/hosted-git-info

fix: Do not pass scp-style URLs to the WhatWG url.URL

Fix #60

+19 -4

0 comment

3 changed files

pr created time in 12 hours

PR opened npm/hosted-git-info

fix: Do not pass scp-style URLs to the WhatWG url.URL

Fix #60 (for the legacy branch)

+19 -4

0 comment

3 changed files

pr created time in 12 hours

create branch npm/hosted-git-info

branch : isaacs/fix-60-legacy

created branch time in 12 hours

create branch npm/hosted-git-info

branch : isaacs/fix-60

created branch time in 12 hours

issue comment npm/hosted-git-info

[BUG] Failed to parse git url

Confirmed, we'll take a look at this today.

link89

comment created time in 13 hours

issue comment npm/cli

[BUG] Fails to install npm packages from private github repos in docker since v6.11.0

Does it work when you run the git commands directly, not through npm?

Can you try executing git ls-remote -h -t https://github.com/user/private-repo with that ~/.netrc file in place?

What is the user account that this is running as? Is it possible that the echo command is adding things to a home directory that does not match the owner of the cwd where git is running? (You could figure this out by adding id and ls -l . to the RUN command.)

If the first command works fine, but the user running all this is root, and the owner of the directory is something other than root, then you'll have to put the .netrc in that user's home directory instead of in root's. The git command won't run as root; I'd expect it to run as the owner of the directory (the better to avoid root-owned files and folders in a local installation).

simonkotwicz

comment created time in 13 hours

pull request comment npm/libnpmhook

fix: remove figgy-pudding

LGTM!

claudiahdz

comment created time in 17 hours

PR opened npm/cli

WIP: using the de-figgy-pudding-ified npm-profile module

Integration of https://github.com/npm/npm-profile/pull/8

Don't land until that is landed and published and this is rebased to use the published version

+2371 -7846

0 comment

130 changed files

pr created time in a day

push event npm/cli

isaacs

commit sha 382582fc0c2b757562e3d98dda5d5daefda5e262

global installs, save in proper locations Updates arborist and pulls in npm-registry-fetch and pacote. Note that, as of this commit, Referer is not sent for registry requests by default.

view details

isaacs

commit sha cc212df3ad02b892cd2a12d7e49c26a81966f89d

wip: npm-profile, the de-figged version, floating from git

view details

isaacs

commit sha 73ef803c58f660cd1b4ad59334865d3d2454ba44

Use de-figged npm-profile

view details

push time in a day

push event npm/cli

isaacs

commit sha 382582fc0c2b757562e3d98dda5d5daefda5e262

global installs, save in proper locations Updates arborist and pulls in npm-registry-fetch and pacote. Note that, as of this commit, Referer is not sent for registry requests by default.

view details

push time in a day

created tag npm/arborist

tag v0.0.0-pre.11

npm's tree doctor

created time in a day

PR merged npm/arborist

further simplify adding new deps to the root

The previous revision of the buildIdealTree/reify add option replaced the package.json-esque data structure with a list of specs for each location in the package, since the name couldn't be known 100% of the time up front.

However, that still puts too much of the burden of placing deps onto the caller.

When you run npm install foo, if foo is already a devDependency, then the updated save spec gets added there, unless specified otherwise. Since we aren't guaranteed to know the name of the dep until we've resolved it, how is the caller (ie, the npm CLI) supposed to know where to put it in the object?

At the expense of a slight amount of flexibility, this changes things further so that the add option mirrors exactly the positional arguments passed to the npm install command: it's just a list of specs, that's it.

Two more options were added:

  • saveType
  • saveBundle

If the saveType is not set, then added deps are added wherever they already exist, or to dependencies if they're not already present. Otherwise, it should be set to one of the edge type values:

  • 'prod'
  • 'dev'
  • 'optional'
  • 'peer'
  • 'peerOptional'

If saveBundle is true, then newly added deps are added to the bundleDependencies list as well.
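In other words, usage ends up looking roughly like this (an illustrative sketch based on the description above; exact option handling may differ):

const Arborist = require('@npmcli/arborist')

const arb = new Arborist({ path: process.cwd() })
arb.reify({
  // just like the positional args to `npm install`
  add: ['abbrev@1', 'isaacs/foo#some-branch'],
  saveType: 'dev', // or 'prod', 'optional', 'peer', 'peerOptional', or unset
  saveBundle: false
}).then(tree => console.log('reified', tree.path))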

+405 -440

0 comment

13 changed files

isaacs

pr closed time in a day

push event npm/arborist

isaacs

commit sha 3457ac0cf2627515ca5e9bcaf9fbb96f7ddf3340

add a utility for updating a dep spec in a manifest

view details

isaacs

commit sha 640527939d734c0550aa9f648d4254b27dc591f4

further simplify adding new deps to the root The previous revision of the buildIdealTree/reify `add` option converted the package.json-esque data structure into instead passing a list of specs for each location in the package, since the name couldn't be known 100% of the time up front. However, that still puts too much of the burden of placing deps onto the caller. When you run `npm install foo`, if `foo` is already a devDependency, then the updated save spec gets added there, unless specified otherwise. Since we aren't guaranteed to know the _name_ of the dep until we've resolved it, how is the caller (ie, the npm CLI) supposed to know where to put it in the objecet? At the expense of a slight amount of flexibility, this changes things further so that the `add` option mirrors exactly the positional arguments passed to the `npm install` command: it's just a list of specs, that's it. Two more options were added: - saveType - saveBundle If the saveType is not set, then added deps are added wherever they already exist, or to `dependencies` if they're not already present. Otherwise, it should be set to one of the edge type values: - 'prod' - 'dev' - 'optional' - 'peer' - 'peerOptional' If `saveBundle` is true, then newly added deps are added to the `bundleDependencies` list as well.

view details

isaacs

commit sha af5315e296b29efe4091aabb3e1deb3b553f7299

perf: don't reload edges for non-dep node.package updates Considering that maybe that Node.package setter is more of a footgun than it's worth. It's nice to know that the deps will always be kept up to date, but throwing away and reloading edges just because you pulled in something tangential feels like it's bad-clever.

view details

isaacs

commit sha b3a311eae221802e6c55ee485abd9a0eff3fbb47

pacote@11.1.0 No longer sending 'Referer: undefined' on registry requests.

view details

isaacs

commit sha b6984de44cd4a9d3043da63a6e96e55088a39ad3

0.0.0-pre.11

view details

push time in a day

delete branch npm/arborist

delete branch : isaacs/no-referer

delete time in a day

push event npm/arborist

isaacs

commit sha b3a311eae221802e6c55ee485abd9a0eff3fbb47

pacote@11.1.0 No longer sending 'Referer: undefined' on registry requests.

view details

push time in a day

issue comment npm/pacote

[BUG] Some git commands are executed under destination directory's owner account

@x-yuri So, iiuc, it is fixed, just not in the commit that we thought fixed it?

x-yuri

comment created time in a day

issue comment npm/cli

[BUG] Failed to install npm package from git in docker since v6.11.0

@simonkotwicz That's a different issue. Can you please post a new issue describing the situation, ideally with a more fleshed-out reproduction case?

doochik

comment created time in a day

issue comment npm/minipass-fetch

[BUG] .size option is broken

Well, this is a reimplementation/fork of node-fetch, but I think in terms of intended API, they're both aiming to be an implementation of the WhatWG fetch() spec for Node.js. And WhatWG fetch() doesn't have a size option at all, so this is an extension of the spec either way. We're not strictly committed to matching node-fetch in every point. For example, minipass-fetch supports a trailers promise, which was dropped from the spec due to a lack of browser commitment, and never made it into node-fetch. (I might opt to drop it from minipass-fetch, since it's pretty niche anyway, but for now it's not hurting anyone.)

But you do make a good point, it would be simpler to switch back and forth between them if the options that do exist in both had the same semantics. And node-fetch came first. Ok, I'm convinced. In the use case where I'm passing size, it also passes size to cacache (where it does mean "exact size") so any errors on that will get caught, just in a different spot.
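So, roughly (a minimal usage sketch, assuming node-fetch-style semantics where size is a maximum body size; the URL is just an example):

const fetch = require('minipass-fetch')

fetch('https://registry.npmjs.org/abbrev', { size: 1024 * 1024 })
  .then(res => res.json())
  .then(doc => console.log(doc.name))
  .catch(er => console.error('too big, or some other failure:', er.message))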

Pizzacus

comment created time in a day

pull request comment npm/libnpmsearch

fix: remove figgy-pudding

LGTM!

claudiahdz

comment created time in a day

Pull request review comment npm/libnpmaccess

fix: remove figgy-pudding

   "name": "libnpmaccess",   "version": "3.0.2",   "description": "programmatic library for `npm access` commands",-  "author": {-    "name": "Kat Marchán",-    "email": "kzm@zkat.tech",-    "twitter": "maybekatz"-  },+  "author": "Kat Marchán <kzm@sykosomatic.org>",   "license": "ISC",   "scripts": {     "prerelease": "npm t",     "release": "standard-version -s",     "postrelease": "npm publish && git push --follow-tags",     "pretest": "standard",-    "test": "tap -J --100 test/*.js",-    "update-coc": "weallbehave -o . && git add CODE_OF_CONDUCT.md && git commit -m 'docs(coc): updated CODE_OF_CONDUCT.md'",-    "update-contrib": "weallcontribute -o . && git add CONTRIBUTING.md && git commit -m 'docs(contributing): updated CONTRIBUTING.md'"+    "test": "tap -J --100 test/*.js"   },   "devDependencies": {     "nock": "^9.6.1",     "standard": "^14.3.0",     "standard-version": "^7.0.0",-    "tap": "^12.7.0",-    "weallbehave": "*",-    "weallcontribute": "*"+    "tap": "^12.7.0"

Can bump this to latest, too.

claudiahdz

comment created time in 2 days

Pull request review comment npm/libnpmaccess

fix: remove figgy-pudding

 const npar = spec => {
   }
   return spec
 }
+const mapJson = (value, [key]) => {

Should be spelled mapJSON rather than mapJson.

claudiahdz

comment created time in 2 days

Pull request review comment npm/libnpmaccess

fix: remove figgy-pudding

 const npar = spec => {
   }
   return spec
 }
+const mapJson = (value, [key]) => {
+  if (value === 'read') {
+    return [key, 'read-only']
+  } else if (value === 'write') {
+    return [key, 'read-write']
+  } else {
+    return [key, value]
+  }
+}

 const cmd = module.exports = {}

 cmd.public = (spec, opts) => setAccess(spec, 'public', opts)
 cmd.restricted = (spec, opts) => setAccess(spec, 'restricted', opts)
 function setAccess (spec, access, opts) {

Should get a default value, ..., opts = {}) {

claudiahdz

comment created time in 2 days

Pull request review comment npm/libnpmaccess

fix: remove figgy-pudding

   "homepage": "https://npmjs.com/package/libnpmaccess",   "dependencies": {     "aproba": "^2.0.0",-    "figgy-pudding": "^3.5.1",     "get-stream": "^4.0.0",-    "npm-package-arg": "^6.1.0",+    "npm-package-arg": "^8.0.0",     "npm-registry-fetch": "^4.0.0"

Should bump this to 8.0.0.

claudiahdz

comment created time in 2 days

Pull request review comment npm/libnpmaccess

fix: remove figgy-pudding

   "homepage": "https://npmjs.com/package/libnpmaccess",   "dependencies": {     "aproba": "^2.0.0",-    "figgy-pudding": "^3.5.1",     "get-stream": "^4.0.0",

Need to remove get-stream stuff and replace with Minipass.

claudiahdz

comment created time in 2 days

Pull request review comment npm/libnpmpublish

fix: remove figgy-pudding

 function publish (manifest, tarball, opts) {
     }
     const spec = npa.resolve(manifest.name, manifest.version)
     // NOTE: spec is used to pick the appropriate registry/auth combo.
-    opts = opts.concat(manifest.publishConfig, { spec })
+    opts = {
+      algorithms: ['sha512'],
+      tag: 'latest',
+      ...opts,
+      ...manifest.publishConfig,

Oh.... this is interesting...

The publishConfig option in package.json traditionally used the config names that you'd put in npmrc or on the command line. (Ie, the css-case variants and some non-canonical names.)

This is relevant because it means that "publishConfig": { "tag": "next" } will set opts.tag instead of opts.defaultTag.

So, I think we have to add a method in here to convert any known aliases that are relevant options for publish. The alternative is that we switch from defaultTag to tag everywhere (which is a pretty extensive change), or have users put defaultTag in publishConfig instead of tag (which is a pretty big user footgun).
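Something along these lines, maybe (a hypothetical helper, just to show the shape):

// translate npmrc-style publishConfig names into the canonical option names
const fixPublishConfig = (publishConfig = {}) => {
  const out = { ...publishConfig }
  if (out.tag !== undefined && out.defaultTag === undefined) {
    out.defaultTag = out.tag
    delete out.tag
  }
  return out
}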

claudiahdz

comment created time in 2 days

Pull request review comment npm/libnpmpublish

fix: remove figgy-pudding

 function buildMetadata (spec, auth, registry, manifest, tardata, opts) {
     readme: manifest.readme || ''
   }

-  if (opts.access) root.access = opts.access
+  if (access) root.access = access

   if (!auth.token) {
     root.maintainers = [{ name: auth.username, email: auth.email }]
     manifest.maintainers = JSON.parse(JSON.stringify(root.maintainers))
   }

   root.versions[manifest.version] = manifest
-  const tag = manifest.tag || opts.tag
+  const tag = manifest.tag || _tag

Wait, what? You can put a tag field in package.json, and it'll use that?? I've been doing publishConfig: { tag: ... } forever like an animal. TIL!

claudiahdz

comment created time in 2 days

Pull request review comment npm/libnpmpublish

fix: remove figgy-pudding

 function buildMetadata (spec, auth, registry, manifest, tardata, opts) {
     readme: manifest.readme || ''

We might be able to stop doing this, since we only serve the readme that's in the actual tarball nowadays anyway. Would make publishes go faster.

claudiahdz

comment created time in 2 days

Pull request review comment npm/libnpmpublish

fix: remove figgy-pudding

 const ssri = require('ssri')
 const url = require('url')
 const validate = require('aproba')

-const PublishConfig = figgyPudding({
-  access: {},
-  algorithms: { default: ['sha512'] },
-  npmVersion: {},
-  tag: { default: 'latest' },
-  Promise: { default: () => Promise }
-})
-
 module.exports = publish
 function publish (manifest, tarball, opts) {

Need a default value for these function calls to preserve how the FP function would handle missing opts. , opts = {})

claudiahdz

comment created time in 2 days

Pull request review comment npm/libnpmpublish

fix: remove figgy-pudding

 function patchedManifest (spec, auth, base, opts) {
 }

 function buildMetadata (spec, auth, registry, manifest, tardata, opts) {
+  const { access, tag: _tag, algorithms } = opts

tag -> defaultTag here as well

claudiahdz

comment created time in 2 days

Pull request review comment npm/libnpmpublish

fix: remove figgy-pudding

 function buildMetadata (spec, auth, registry, manifest, tardata, opts) {
     readme: manifest.readme || ''
   }

-  if (opts.access) root.access = opts.access
+  if (access) root.access = access

   if (!auth.token) {
     root.maintainers = [{ name: auth.username, email: auth.email }]
     manifest.maintainers = JSON.parse(JSON.stringify(root.maintainers))
   }

   root.versions[manifest.version] = manifest
-  const tag = manifest.tag || opts.tag
+  const tag = manifest.tag || _tag

That is not even slightly documented in npm help package.json.

claudiahdz

comment created time in 2 days

Pull request review comment npm/libnpmpublish

fix: remove figgy-pudding

 function publish (manifest, tarball, opts) {       const metadata = buildMetadata(         spec, auth, reg, pubManifest, tardata, opts       )-      return npmFetch(spec.escapedName, opts.concat({++      return npmFetch(spec.escapedName, {+        ...opts,         method: 'PUT',         body: metadata,         ignoreBody: true-      })).catch(err => {+      }).catch(err => {         if (err.code !== 'E409') { throw err }-        return npmFetch.json(spec.escapedName, opts.concat({+        return npmFetch.json(spec.escapedName, {+          ...opts,           query: { write: true }-        })).then(+        }).then(           current => patchMetadata(current, metadata, opts)         ).then(newMetadata => {-          return npmFetch(spec.escapedName, opts.concat({+          return npmFetch(spec.escapedName, {+            ...opts,             method: 'PUT',             body: newMetadata,             ignoreBody: true-          }))+          })         })       })     })   }).then(() => true) }  function patchedManifest (spec, auth, base, opts) {+  const { npmVersion } = opts   const manifest = cloneDeep(base)   manifest._nodeVersion = process.versions.node

Should this be opts.nodeVersion? We allow overriding the effective node version for installation, but maybe we ought not to for publishing?

claudiahdz

comment created time in 2 days

Pull request review comment npm/libnpmpublish

fix: remove figgy-pudding

 function publish (manifest, tarball, opts) {     }     const spec = npa.resolve(manifest.name, manifest.version)     // NOTE: spec is used to pick the appropriate registry/auth combo.-    opts = opts.concat(manifest.publishConfig, { spec })+    opts = {+      algorithms: ['sha512'],+      tag: 'latest',

This should be defaultTag. (Moved to that canonical name to fit with the usage in npm-pick-manifest and friends.)

claudiahdz

comment created time in 2 days

pull request comment npm/libnpmhook

fix: remove figgy-pudding

One more thing, I think the README.md file needs to be updated. Remove the opts.Promise, and the outdated contribution boilerplate.

claudiahdz

comment created time in 2 days

Pull request review comment npm/libnpmhook

fix: remove figgy-pudding

   "license": "ISC",   "dependencies": {     "aproba": "^2.0.0",-    "figgy-pudding": "^3.4.1",     "get-stream": "^4.0.0",-    "npm-registry-fetch": "^3.8.0"+    "npm-registry-fetch": "^7.0.1"

Should be 8.0.0

claudiahdz

comment created time in 2 days

Pull request review comment npm/libnpmhook

fix: remove figgy-pudding

   "license": "ISC",   "dependencies": {     "aproba": "^2.0.0",-    "figgy-pudding": "^3.4.1",     "get-stream": "^4.0.0",-    "npm-registry-fetch": "^3.8.0"+    "npm-registry-fetch": "^7.0.1"   },   "devDependencies": {     "nock": "^9.6.1",     "standard": "^11.0.1",     "standard-version": "^4.4.0",-    "tap": "^12.0.1",-    "weallbehave": "^1.2.0",-    "weallcontribute": "^1.0.8"+    "tap": "^12.0.1"

Let's install latest tap here, as well.

claudiahdz

comment created time in 2 days

Pull request review comment npm/libnpmhook

fix: remove figgy-pudding

     "postrelease": "npm publish && git push --follow-tags",     "pretest": "standard",     "release": "standard-version -s",-    "test": "tap -J --100 --coverage test/*.js",-    "update-coc": "weallbehave -o . && git add CODE_OF_CONDUCT.md && git commit -m 'docs(coc): updated CODE_OF_CONDUCT.md'",-    "update-contrib": "weallcontribute -o . && git add CONTRIBUTING.md && git commit -m 'docs(contributing): updated CONTRIBUTING.md'"+    "test": "tap -J --100 --coverage test/*.js"

Once tap is updated to latest, this can just be tap test/*.js, or move any non-test stuff in test/ to test/fixtures and make this script just "test": "tap"
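Ie, something like this in the package.json (illustrative):

{
  "scripts": {
    "test": "tap"
  },
  "tap": {
    "check-coverage": true
  }
}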

claudiahdz

comment created time in 2 days

Pull request review comment npm/libnpmhook

fix: remove figgy-pudding

 cmd.rm = (id, opts) => {
 }

 cmd.update = (id, endpoint, secret, opts) => {
-  opts = HooksConfig(opts)
   validate('SSSO', [id, endpoint, secret, opts])
-  return fetch.json(`/-/npm/v1/hooks/hook/${eu(id)}`, opts.concat({
+  return fetch.json(`/-/npm/v1/hooks/hook/${eu(id)}`, {
+    ...opts,
     method: 'PUT',
     body: {endpoint, secret}
-  }, opts))
+  })
 }

 cmd.find = (id, opts) => {

Should get a default value in all of these, since they might not get an options object. cmd.update = (..., opts = {}) => { (Previously, the HooksConfig function would turn undefined into an object.)
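Eg, for cmd.update that'd look like this (a sketch based on the diff above):

cmd.update = (id, endpoint, secret, opts = {}) => {
  validate('SSSO', [id, endpoint, secret, opts])
  return fetch.json(`/-/npm/v1/hooks/hook/${eu(id)}`, {
    ...opts,
    method: 'PUT',
    body: { endpoint, secret }
  })
}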

claudiahdz

comment created time in 2 days

push event npm/arborist

isaacs

commit sha 2653a034f8afd14d7e04428fcde279bdd6cc6b5f

pacote@11.1.0

view details

push time in 2 days

create branch npm/cli

branch : isaacs/defig-profile

created branch time in 2 days

PR opened npm/cli

Isaacs/simplify supported node

Builds atop of https://github.com/npm/cli/pull/697, which is included in this.

+9 -17

0 comment

2 changed files

pr created time in 2 days

push event npm/cli

Kyle Getz

commit sha 55916b130ef52984584678f2cc17c15c1f031cb5

fix: check `npm.config` before accessing its members Sometimes, `npm.config` can be missing entirely, but there are several places where `npm.config.foo` is accessed blindly, resulting in these kinds of errors and stack traces: TypeError: Cannot read property 'get' of undefined at errorMessage (.../lib/utils/error-message.js:38:39) ... TypeError: Cannot read property 'loaded' of undefined at exit (.../lib/utils/error-handler.js:97:27) ... LBYL by checking `npm.config` first. Addresses a small part of #502. PR-URL: https://github.com/npm/cli/pull/508 Credit: @ Close: #508 Reviewed-by: @Darcy Clarke

view details

Sean Healy

commit sha fbb5f0e50e54425119fa3f03c5de93e4cb6bfda7

First take on breaking out the lifecycle hooks.

view details

Michael Perrotte

commit sha 284c1c055a28c4b334496101799acefe3c54ceb3

docs: updated scripts docs in using-npm section - A continuation of @seanhealy's work PR-URL: https://github.com/npm/cli/pull/729 Credit: @ Close: #729 Reviewed-by: @Darcy Clarke

view details

Netanel Gilad

commit sha 7d0cd65b23c0986b631b9b54d87bbe74902cc023

access: grant is ok with non-scoped PR-URL: https://github.com/npm/cli/pull/733 Credit: @ Close: #733 Reviewed-by: @Darcy Clarke

view details

Michael Perrotte

commit sha 85c79636df31bac586c0e380c4852ee155a7723c

feat: added script to update dist-tags PR-URL: https://github.com/npm/cli/pull/736 Credit: @ Close: #736 Reviewed-by: @Darcy Clarke

view details

Dave Nicolson

commit sha 1c272832d048300e409882313305c416dc6f21a2

Correct typo PR-URL: https://github.com/npm/cli/pull/787 Credit: @ Close: #787 Reviewed-by: @Darcy Clarke

view details

Ajay Narain Mathur

commit sha f6ff417767d52418cc8c9e7b9731ede2c3916d2e

updated script to say postinstall to show intention PR-URL: https://github.com/npm/cli/pull/936 Credit: @ Close: #936 Reviewed-by: @Darcy Clarke

view details

Vitaliy Markitanov

commit sha 373224b16e019b7b63d8f0b4c5d4adb7e5cb80dd

Update npm-publish.md Fixed wrong links in See also section PR-URL: https://github.com/npm/cli/pull/939 Credit: @ Close: #939 Reviewed-by: @Darcy Clarke

view details

Jordan Harband

commit sha 30f170877954acd036cb234a581e4eb155049b82

fund: support multiple funding sources See https://github.com/npm/rfcs/pull/68 PR-URL: https://github.com/npm/cli/pull/731 Credit: @ Close: #731 Reviewed-by: @Darcy Clarke

view details

Darcy Clarke

commit sha f14b594ee9dbfc98ed0b65c65d904782db4f31ad

chownr@1.1.4

view details

Darcy Clarke

commit sha 77044150b763d67d997f9ff108219132ea922678

npm-packlist@1.4.8

view details

Darcy Clarke

commit sha 1d112461ad8dc99e5ff7fabb5177e8c2f89a9755

npm-registry-fetch@4.0.3

view details

Darcy Clarke

commit sha a47fed7603a6ed31dcc314c0c573805f05a96830

readable-stream@3.6.0

view details

Darcy Clarke

commit sha 8b379b213dd23a27c8f3c119c9b123553d81cdd4

hosted-git-info@2.8.6

view details

Darcy Clarke

commit sha ad132702b7e1c8ed08702cf62f4b7c6ec6fa1893

docs: changelog for 6.14.0

view details

Jordan Harband

commit sha e34373f27a6b89b4a57b33d75da281343e9b5e9e

allow new majors of node to be automatically considered supported PR-URL: https://github.com/npm/cli/pull/697 Credit: @ljharb Close: #697 Reviewed-by: @isaacs

view details

isaacs

commit sha 83a7b4501f3e400163b662d6324386bbd18645ee

Use a package.json engines field to specify support That is what the `engines` in package.json is for, after all.

view details

push time in 2 days

create branch npm/cli

branch : isaacs/simplify-supported-node

created branch time in 2 days

pull request comment npm/rfcs

RFC: Add staging workflow for CI and human interoperability

I believe this RFC may help with the "one shot" publish issue we had in lerna, where we switched to using a tmp dist-tag.

If I'm understanding you correctly, this is about the case where you have a bunch of packages you want to publish all together in a sync'ed version?

This would actually be slightly safer than using a temp dist-tag, because you wouldn't have the race condition period where mismatched versions are published and installable.

Granted, the version referenced by the latest dist-tag has long been prioritized, so if 1.2.3 is on latest, and you've soft-published 1.2.4 on the tmp dist-tag, then anyone fetching 1.2.x will get 1.2.3 and not 1.2.4. And, if you're not installing against latest, and it's a tmp dist-tag floating on top of next or something, then you won't have that coverage, but obviously way fewer people will be fetching that version range in the first place.

It does bring up an interesting point about the promotion flow, though. Should it be somehow transactional across multiple packages and versions? How much of a race condition is acceptable in this case? Even if there is a window of mismatch, it's going to be much smaller than with actually uploading a bunch of tarballs, so maybe it's fine?

djsauble

comment created time in 2 days

Pull request review comment npm/libnpmsearch

fix: remove figgy-pudding

 'use strict'

-const figgyPudding = require('figgy-pudding')
 const getStream = require('get-stream')
 const npmFetch = require('npm-registry-fetch')

-const SearchOpts = figgyPudding({
-  detailed: { default: false },
-  limit: { default: 20 },
-  from: { default: 0 },
-  quality: { default: 0.65 },
-  popularity: { default: 0.98 },
-  maintenance: { default: 0.5 },
-  sortBy: {}
-})
-
 module.exports = search
 function search (query, opts) {
   return getStream.array(search.stream(query, opts))

I see, yeah, it should be a minipass object stream once npm-registry-stream is updated to latest.

claudiahdz

comment created time in 2 days

Pull request review comment npm/libnpmsearch

fix: remove figgy-pudding

 'use strict'

-const figgyPudding = require('figgy-pudding')
 const getStream = require('get-stream')
 const npmFetch = require('npm-registry-fetch')

-const SearchOpts = figgyPudding({
-  detailed: { default: false },
-  limit: { default: 20 },
-  from: { default: 0 },
-  quality: { default: 0.65 },
-  popularity: { default: 0.98 },
-  maintenance: { default: 0.5 },
-  sortBy: {}
-})
-
 module.exports = search
 function search (query, opts) {
   return getStream.array(search.stream(query, opts))

Probably something like: return search.stream(query, opts).collect() (assuming search.stream() returns a minipass.)
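
To be concrete, a minimal sketch of that shape, assuming search.stream() hands back a minipass object stream (this is an illustration, not the actual libnpmsearch code; the stream stand-in is fake):

'use strict'
const Minipass = require('minipass')

const search = (query, opts = {}) =>
  // minipass's .collect() resolves to an array of everything the stream
  // emits, which is what get-stream.array() was doing for us before.
  search.stream(query, opts).collect()

// Stand-in for the real streaming implementation, just so the sketch runs:
// emits a couple of fake result objects in object mode.
search.stream = (query, opts = {}) => {
  const results = new Minipass({ objectMode: true })
  results.write({ name: 'libnpmsearch', query })
  results.end({ name: 'pacote', query })
  return results
}

search('npm').then(objects => console.log(objects.length)) // => 2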

claudiahdz

comment created time in 2 days

Pull request review commentnpm/libnpmsearch

fix: remove figgy-pudding

 function searchStream (query, opts) {
         popularity: opts.popularity,
         maintenance: opts.maintenance
       },
-      mapJson (obj) {
+      mapJson: (obj) => {

Ah!! So THAT's why we had that feature in npm-registry-fetch.

It should be spelled mapJSON though.

claudiahdz

comment created time in 2 days

Pull request review commentnpm/libnpmsearch

fix: remove figgy-pudding

   "bugs": "https://github.com/npm/libnpmsearch/issues",   "homepage": "https://npmjs.com/package/libnpmsearch",   "dependencies": {-    "figgy-pudding": "^3.5.1",     "get-stream": "^4.0.0",     "npm-registry-fetch": "^4.0.0"

s/4.0.0/8.0.0/ :)

claudiahdz

comment created time in 2 days

Pull request review commentnpm/libnpmsearch

fix: remove figgy-pudding

   "bugs": "https://github.com/npm/libnpmsearch/issues",   "homepage": "https://npmjs.com/package/libnpmsearch",   "dependencies": {-    "figgy-pudding": "^3.5.1",     "get-stream": "^4.0.0",

Is this still used anywhere? Should be replaced with a minipass thing instead, if so.

claudiahdz

comment created time in 2 days

Pull request review commentnpm/libnpmsearch

fix: remove figgy-pudding

     "release": "standard-version -s",     "postrelease": "npm publish && git push --follow-tags",     "pretest": "standard",-    "test": "tap -J --100 test/*.js",-    "update-coc": "weallbehave -o . && git add CODE_OF_CONDUCT.md && git commit -m 'docs(coc): updated CODE_OF_CONDUCT.md'",-    "update-contrib": "weallcontribute -o . && git add CONTRIBUTING.md && git commit -m 'docs(contributing): updated CONTRIBUTING.md'"+    "test": "tap -J --100 test/*.js"

I think you can have this just be "test": "tap", and then add a section like:

{
  "tap": {
    "check-coverage": true
  }
}

And rename test/util to test/fixtures so that tap will ignore it.

claudiahdz

comment created time in 2 days

pull request commentnpm/rfcs

RFC: Add staging workflow for CI and human interoperability

You tested it before staging it, so no, imo that's not the whole point - the point is to separate "prepare the publishable artifact and securely transmit it" from "propagate it publicly, which requires extra auth".

But you're explicitly saying that "transmitting" it is not done securely. It doesn't require 2FA to stage a package. (And, it's likely a token that far more people have access to.) Then, later, I provide 2FA and a more restricted access token to promote it from staged to published (the secure step).

So I'm supposed to just put my stamp of approval on something without inspecting it first? That doesn't seem like a good idea.

djsauble

comment created time in 2 days

pull request commentnpm/rfcs

RFC: Add staging workflow for CI and human interoperability

While there are some potential footguns avoided by adding the restriction that only package owners can load staged packages, it adds significant complexity to both the server and client implementations, and (depending on how it's implemented) potentially adds a few even more concerning footguns. I hope, in this message, to make it clear that this is not feasible, so we can drop it from this discussion.

Let's say we do add the restriction that staged packages cannot be installed by non-owners. This can be implemented (as far as I can tell) in 5 different broad strategies, all of them with profound challenges.

  1. Strictly a client-side restriction. The CLI just won't select packages for installation from the stagedVersions set unless the current user is in the maintainers list.
  2. We exclude the stagedVersions object from the packument if the fetching user is not in the maintainers list.
  3. We block the download of a staged tarball URL if the user is not in the maintainers list.
  4. We use a different packument target entirely, and in order to download staged versions (even for inspection), one has to use a different packument endpoint.
  5. Staged published artifacts do not exist in any packument at all, and some other out-of-band means would be implemented to find out about them. (This is really a special case of 4, where "other packument target" might be a completely different non-packument endpoint.)

(1) is, of course, not a real restriction in any sort of security sense, but that might be fine. Ie, a staged package isn't really securely hidden, let's say, just restricted by a handshake agreement that prevents the CLI from trying to fetch it unless the user is an owner. Leaving aside the question of whether other npm clients would enforce the same restriction, the bigger challenge comes about when we consider that the npm-pick-manifest module would have to know, at install time, your npm username. Without making a fetch to the registry, it doesn't know that. We only store an opaque bearer token that tells the server who you are. By design, for security, your npmrc file does not contain your username. In order to get the username from a bearer token, you have to hit the /-/whoami endpoint. (This way, npmrc files containing formerly-valid logins don't leak any information if the token is revoked.)
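
Concretely, the lookup we'd be forced to add looks something like this (npm-registry-fetch and the /-/whoami endpoint are real; the wrapper is just a sketch of the extra round-trip):

const fetch = require('npm-registry-fetch')

// One authed request just to learn who the bearer token belongs to,
// before version selection could even begin.
async function currentUsername (opts) {
  const { username } = await fetch.json('/-/whoami', opts)
  return username
}

currentUsername({
  registry: 'https://registry.npmjs.org',
  token: process.env.NPM_TOKEN // the opaque bearer token from .npmrc
}).then(name => console.log(name))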

Having to hit a registry endpoint to be able to even select a package version for installation is pretty crummy. Authed requests are slower and more costly, so we try to minimize them as much as possible. Any unscoped packages bypass that part of the registry stack entirely, and the auth stack for scoped packages has a lot of shortcuts to quickly jump from a bearer token and action to an answer about whether that's allowed. What shows up in the maintainers list is a cached projection of the actual authorization set. (Also, we'd have to always fetch in fullMetadata mode, because corgis don't list maintainers.)

The fact that it's opt-in would help with the perf somewhat, but there's still the question of maintenance burden and complexity, since npm-registry-fetch now depends on more than just the packument as a discrete object. And once the staged package tarball url is in a lockfile, we'll have to track that it's staged (and, when doing that install, whether the user has the right to download that thing or not). It makes my head spin trying to work out the edge cases in this. If someone else wants to take it on in a fork or an alternative client, I'll be curious how it works out, but I'm not doing this in npm/cli. Too hard.

(2) imposes similar complexity, but on the server side. Remember that auth stack that we try to hit as little as possible? Now we need to keep 3 forms of the packument, and check the auth state on every packument request. I don't primarily work on that portion of the stack, but I'm confident that it would impose significant burden on us to do that, effectively making every package as costly to serve as a private package. (Which, on each individual fetch, is not a huge cost, but multiplied by ~70Bn package downloads a month, adds up quick.) So that's not gonna happen any time soon.

(3) suffers from essentially the same performance concern as (2), but on tarball urls instead of packument urls, which is potentially worse. The mitigating factor here that might make it less infeasible is that the staged tarball URLs themselves could have a discernible pattern (which is good for other reasons anyway), so we'd save having to make the check on every fetch. However, this one has the "even worse footgun" I alluded to. If you install something, and get a tarball url in your shrinkwrap, then someone else (who isn't a maintainer, or just isn't logged in) tries to download it, they get a 404. This is exactly the DX issue that got us down this road to begin with.

(4) and (5) -- ie, create a net new API surface for staged publishes -- are probably the least infeasible from an operational point of view, but also will take the longest to spec and deliver, and frankly, might never actually happen. Also, it's worth noting that "staged publishes that only a select few can install, from a completely different endpoint" is exactly what private registries provide. And, if the new registry endpoint for staged publishes isn't something you can npm install from (ie, option (5)), then it's pretty useless. The number one way to inspect a published npm package is to install it and run tests against it. (Ideally in CI, where a matrix of platform and engine versions can be tested in parallel.)

In all but (1), "staged publishes that only a select few can install" becomes a backdoor way to have a mix of private and public code at the same namespace entry, which can exist for both scoped and unscoped packages. If we were ever to offer something like that, we'd charge for it, because that's difficult to implement, costly to operate, and primarily serves the interests of corporate users. And (1) is a ton of work for me and my team that I'm not eager to have us do. Assuming that the public registry makes staged versions of public packages available to all, nothing's stopping any other registry implementation from doing this, of course. I'd love to have some data (or at least, anecdata) about how that works out. It might still not be a compelling enough case to get over the challenges at npm public registry scale, but it'd be more compelling than not having it, for sure.

I'm sorry. I don't think that's going to happen.

The good news is, just reducing the priority of staged versions below that of published versions solves these problems exceedingly simply, and gets rid of most of the hazards. And an implementation exists that supports it today, in the latest versions of npm-pick-manifest, pacote, and @npmcli/arborist. It's pretty much impossible to get a staged version by accident. You have to both opt into considering staged versions, and there has to be no published version that satisfies the requested range. This prevents the "floodgates to get a sip of water" issue entirely, because those versions will never be chosen if there is any other option. (It's like deprecation, but stronger.)
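
To spell that out, a quick sketch against npm-pick-manifest's includeStaged flag (the stagedVersions packument shape here follows the RFC draft, so treat it as illustrative):

const pickManifest = require('npm-pick-manifest')

const packument = {
  name: 'foo',
  'dist-tags': { latest: '1.2.3' },
  versions: {
    '1.2.3': { name: 'foo', version: '1.2.3' }
  },
  stagedVersions: {
    versions: {
      '1.2.4': { name: 'foo', version: '1.2.4' }
    }
  }
}

// A published version that satisfies the range always wins, even with the flag.
console.log(pickManifest(packument, '^1.2.0', { includeStaged: true }).version)
// => '1.2.3'

// Only an explicit request for a version that exists solely in stagedVersions,
// plus the opt-in flag, resolves to the staged manifest.
console.log(pickManifest(packument, '1.2.4', { includeStaged: true }).version)
// => '1.2.4'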

If "testing a staged version" is a non-goal, then I don't think this is worth doing, honestly. Why would you ever stage something, if not to test it before committing to it being published? Isn't that the whole point?

Here's what I'd like to focus on, which I see as the main remaining obstacles to this:

  • The dist.tarball url of staged packages. I like the content-address idea; it could be long-lived even when the "real" artifact url is used moving forward.
  • The handshake required to promote from staged to published. I'm thinking I don't want a situation where a bad actor stages a new thing right as you're about to promote it. The workflow has to be worked out there, though, because what you really want is to say "I want to promote this thing that I got 15 minutes ago", and you're unlikely to grab the integrity as you're installing it initially for inspection. Maybe provide the artifact itself and we check that it matches (see the sketch below)? What's the UX for "I'd like to start testing staged version XYZ, I'll promote it once I'm satisfied"?
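
Something like this is what I have in mind for the integrity side of that handshake; ssri is real, but the --promote/--integrity CLI surface is hypothetical and is exactly the part the RFC still needs to settle:

const ssri = require('ssri')
const fs = require('fs')

// Compute the sha512 integrity of the staged tarball you actually tested,
// so the promote step can refuse to publish anything else.
const integrityOf = (tarballPath) =>
  ssri.fromStream(fs.createReadStream(tarballPath), { algorithms: ['sha512'] })

integrityOf('./foo-1.2.4.tgz').then(integrity => {
  // Hypothetical UX: npm publish --promote foo@1.2.4 --integrity <this value>
  console.log(integrity.toString())
})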
djsauble

comment created time in 2 days

issue commentnpm/pacote

`before` doesn't work without `fullMetadata`

It helps that you posted a bug on a module that we're currently in the midst of editing for other reasons 😆

altano

comment created time in 2 days

PR opened npm/npm-profile

V5

What / Why

Bring this module into the modern era.

Fixes https://github.com/npm/statusboard/issues/77

+6691 -8549

0 comment

33 changed files

pr created time in 2 days

PR closed npm/npm-profile

allow uri-safe usernames

Related to https://github.com/npm/npm-user-validate/issues/12, email addresses are valid usernames in some systems.

Currently, even if the user validator supports email addresses (as uri-safe usernames), this will encode them again, turning % characters into %25.

This implementation will ensure backward compatibility with anything currently depending on it by decoding before re-encoding. Non-encoded strings will still be encoded, but already encoded strings will not be double encoded.

+2 -2

1 comment

1 changed file

jlambert121

pr closed time in 2 days

pull request commentnpm/npm-profile

allow uri-safe usernames

https://github.com/npm/npm-user-validate/pull/13#issuecomment-590641822

jlambert121

comment created time in 2 days

issue closednpm/npm-user-validate

Cannot use uri encoded string in username

This is a scenario being encountered in a private npm registry.

If your username is, for example, "me@github.com", then the non-url-safe error is thrown as expected. If you try to encode this yourself as "me%40github.com" the check just compares "me%40github.com" against "me%2540github.com" which fails unexpectedly. Anything beyond this would obviously be an infinite loop to nowhere.

A solution to solve this use case would be to decode the username before encoding it and then comparing the value. Can anyone think of any values that would not legitimately validate or that would unexpectedly pass using this solution?
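
The failure mode and the proposed fix are easy to see with plain Node, no npm-user-validate internals required:

const name = 'me%40github.com' // the user pre-encodes "me@github.com"

// What the check does today: re-encode and compare, which double-encodes.
console.log(encodeURIComponent(name))
// => 'me%2540github.com'  (no longer equal, so validation fails)

// Proposed: decode first, then encode, so already-encoded input is stable.
console.log(encodeURIComponent(decodeURIComponent(name)))
// => 'me%40github.com'  (matches the input)

// Plain, un-encoded input still gets encoded as before.
console.log(encodeURIComponent(decodeURIComponent('me@github.com')))
// => 'me%40github.com'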

closed time in 2 days

mharrington622

issue commentnpm/npm-user-validate

Cannot use uri encoded string in username

https://github.com/npm/npm-user-validate/pull/13#issuecomment-590641822

mharrington622

comment created time in 2 days

PR closed npm/npm-user-validate

allow uri-safe usernames (Closes #12)

Email addresses are currently not supported as valid usernames as part of npm login, although some systems use email addresses for usernames.

Converting an email address to URI-safe doesn't currently pass validation because the % character is translated to %25. This seems to be the simplest change that supports correctly URI-encoded usernames without breaking any of the existing tests.

This PR is blocked by https://github.com/npm/npm-profile/pull/6 as implementing this without that one will result in the inability to log in because the encoded username will be encoded again.

Closes #12

+7 -1

1 comment

2 changed files

jlambert121

pr closed time in 2 days

pull request commentnpm/npm-user-validate

allow uri-safe usernames (Closes #12)

I think the security ramifications of this are just too high. If you're using CouchDB for logins, don't use characters in your npm username that require url encoding. If you want to do something fancier, then implement WebLogin, so that you don't ever touch a username to begin with.

jlambert121

comment created time in 2 days

create branch npm/npm-profile

branch : v5

created branch time in 2 days

push eventnpm/arborist

isaacs

commit sha 80b800ac9cd966ef3aaa09d2a9a0267faf1c03de

pacote@11.1.0 No longer sending 'Referer: undefined' on registry requests.

view details

push time in 2 days

PR merged npm/pacote

update to referer-less npm-registry-fetch

Depends on https://github.com/npm/npm-registry-fetch/pull/25

Land that PR before merging this commit.

+105 -31

0 comment

13 changed files

isaacs

pr closed time in 2 days

push eventnpm/pacote

isaacs

commit sha 1294e5d95ea27bd326a9d0c765344c916b60e1cd

npm-registry-fetch@8.0.0 This drops support for the 'opts.refer' option, and no longer sends a referer header to registry requests.

view details

isaacs

commit sha a8de163ffe658caf9ff4c1432956e79b555125b1

Add CI and project settings

view details

isaacs

commit sha f5973f8949b85f8fe120dfd640b82a652432186d

Use canonical config names

view details

isaacs

commit sha 777ef25a0603bcf49abc175673161e3f6ed0c054

test: update to pass with new zlib impl in node 13.8

view details

isaacs

commit sha 9cc931e9696acd337af17d40dd13ada544d52588

Do not club this.fullMetadata after set initially Fix #30

view details

isaacs

commit sha cc6254584b4b5b62fbaf419975ec987c517ec2fd

test: remove Windows from CI for now It's just too flaky and hard to get paths filtered out of the snapshots properly.

view details

isaacs

commit sha d9f6a917ae0469f309435e78ec72669858e4d429

test: allow timeout error from MacOS on node v10 Not sure why Darwin on Node v10 is sometimes returning a core node error rather than a reconfigured fetch error, but allow it.

view details

isaacs

commit sha 622ed99d912ab0353e3a9193239148c1bed38981

11.1.0

view details

push time in 2 days

issue closednpm/pacote

`before` doesn't work without `fullMetadata`

The undocumented fullMetadata option is required to fetch times, and without this the before flag appears to not work.

Repro:

$ npm view safe-write-stream time 
{
  ...
  '1.0.4': '2016-02-07T00:21:03.628Z', <=== Expected when specifying before: 2017-01-01
  '1.0.5': '2017-02-09T03:26:22.308Z'
}

fullMetadata: undefined
node -e "require('pacote').manifest('safe-write-stream@^1.0.4', { before: new Date('2017-01-01') }).then(r => console.log(r.version));"
Result: 1.0.5 ❌

fullMetadata: true
node -e "require('pacote').manifest('safe-write-stream@^1.0.4', { fullMetadata: true, before: new Date('2017-01-01') }).then(r => console.log(r.version));"
Result: 1.0.4 ✔

closed time in 2 days

altano

created tagnpm/pacote

tagv11.1.0

npm fetcher

created time in 2 days

push eventnpm/pacote

isaacs

commit sha d9f6a917ae0469f309435e78ec72669858e4d429

test: allow timeout error from MacOS on node v10 Not sure why Darwin on Node v10 is sometimes returning a core node error rather than a reconfigured fetch error, but allow it.

view details

push time in 3 days

created tagnpm/write-file-atomic

tagv3.0.3

Write files in an atomic fashion w/configurable ownership

created time in 3 days

push eventnpm/write-file-atomic

Kevin Martensson

commit sha 2159ad2993cadac3597d9d242ef29ef2ae3b51f0

Don't error on `ENOSYS`, `EINVAL` or `EPERM` when changing ownership or mode Fixes #49. EDIT(isaacs): Rebased latest, added tests to get back to 100% coverage. PR-URL: https://github.com/npm/write-file-atomic/pull/55 Credit: @kevva Close: #55 Reviewed-by: @isaacs

view details

isaacs

commit sha eb8dff15f83f16be1e0b89be54fa80200356614a

3.0.3

view details

push time in 3 days

PR closed npm/write-file-atomic

Don't error on `ENOSYS`, `EINVAL` or `EPERM` when changing ownership

This adds back the same behaviour as graceful-fs which was removed in https://github.com/npm/write-file-atomic/commit/8c93fa326c0b792d3a6e40ccf296f572c61d9e08.

Fixes #49.

+94 -22

0 comment

2 changed files

kevva

pr closed time in 3 days
