
GoogleChrome/lighthouse 20478

Automated auditing, performance metrics, and best practices for the web.

GoogleChrome/lighthouse-ci 3429

Automate running Lighthouse for every commit, viewing the changes, and preventing regressions

GoogleChrome/chrome-launcher 653

Launch Google Chrome with ease from node.

eugeneware/jpeg-js 408

A pure javascript JPEG encoder and decoder for node.js

aslushnikov/tracium 60

A blessed Chrome Trace parser.

patrickhulce/cuzillion 6

'cuz there are still a zillion pages to check in 2020

patrickhulce/destiny-gun-damage 3

Calculator and database for destiny gun damage.

kumquatexpress/YelpHelp 1

Cool project that does cool things over the Yelp dataset.

patrickhulce/codereview 1

Web application to assist with grading homeworks and assessing code style.

dxu/PennApps2013S 0

fun stuff

PR closed GoogleChrome/lighthouse

TYPE: added code of conduct to project
Labels: cla: yes, waiting4reviewer


Summary: added code of conduct

+42 -0

1 comment

1 changed file

jivthesh

pr closed time in 3 hours

pull request comment GoogleChrome/lighthouse

TYPE: added code of conduct to project

this landed with https://github.com/GoogleChrome/lighthouse/pull/11212 already @jivthesh, was reopening this a mistake?

jivthesh

comment created time in 3 hours

issue closed GoogleChrome/lighthouse


Provide the steps to reproduce

  1. Run LH on <affected url>


What is the current behavior?

What is the expected behavior?

Environment Information

  • Affected Channels:
  • Lighthouse version:
  • Chrome version:
  • Node.js version:
  • Operating System:

Related issues

closed time in 3 hours

masa77736

issue closed GoogleChrome/lighthouse

asdf


Provide the steps to reproduce

  1. Run LH on <affected url>


What is the current behavior?

What is the expected behavior?

Environment Information

  • Affected Channels:
  • Lighthouse version:
  • Chrome version:
  • Node.js version:
  • Operating System:

Related issues

closed time in 3 hours

masa77736

issue comment GoogleChrome/lighthouse

suggest font-display optional if font is preloaded, to reduce layout shift

Thanks @xiaochengh that's great to know! Perhaps the associated codelab should be updated to follow that style as well then.

connorjclark

comment created time in 3 hours

issue comment GoogleChrome/lighthouse

Adding Network initiator details in network-requests audit

Seems reasonable to include, but it'd be relatively low priority for us to add given that far richer information is already available via the artifacts API. You're welcome to open a PR for it if you'd like to see it happen though :)

ssivanatarajan

comment created time in 3 hours

issue comment GoogleChrome/lighthouse

suggest font-display optional if font is preloaded, to reduce layout shift

This test illustrates the challenges of preloading Google Fonts :/

The font format referenced by Google Fonts changes subtly based on the UA, with woff served to desktop and woff2 to mobile (unclear why?)

connorjclark

comment created time in 19 hours

issue comment GoogleChrome/lighthouse

async css pattern results in recording duplicate network requests

At first blush this sounds like a Chromium bug. @connorjclark, was there a Lighthouse action item to take? :)

connorjclark

comment created time in a day

push event patrickhulce/third-party-web

Patrick Hulce

commit sha 46be8611b512be4bb3bad1fd39b054713a08d161

feat: products inherit from entities

view details

push time in 2 days

push event patrickhulce/third-party-web

Patrick Hulce

commit sha 0094a651ca2f8023bbf931335bd8fab3b184a781

fix: proper matomo company

view details

Patrick Hulce

commit sha cbce336621fbd8b0c0c93a077cb57d32ff44beaa

refactor: move to entities.js

view details

Patrick Hulce

commit sha 3f1a057a19ca592785ec1c57813e1ed82489d3c2

refactor: remove all JSON5 references

view details

Patrick Hulce

commit sha d8154dcffa6008b15945800b301b515ad98c6d87

chore: lint fixes

view details

Patrick Hulce

commit sha 107e6dd9bf0176a98d5d940d948059293bfeb0e1

refactor: use regex in entities.js

view details

Patrick Hulce

commit sha 15710c2fc8cad41a003f7b44a2a728887bcaacbf

feat: add nostats subset

view details

push time in 2 days

issue comment GoogleChrome/lighthouse

Establish an audit naming guide

the verbs in some of our audits (e.g. "uses-preload") ~~kill me~~ give me a hard time regularly since they don't share much with their titles.

😢 but, but, the uses-blah naming is exactly from the phase of trying to align the audits with their titles

okay, looks like it's okay: "uses" and "use".

I got lost here 😆 is uses- ok after all? 😃

In general I agree, I generally prefer the noun of the stuff/topic being surfaced, but for all the ones that are uses-X, the natural way to describe the thing being surfaced is the opposite direction, i.e. people never really talk about "non-responsive images", we just talk about "responsive images", so "uses-responsive-images" feels better than "non-responsive-images". I would ideally want to avoid a situation where we're consistent on nouns but then keep flip-flopping on whether it's the positive or negative form and get back into the title confusion situation all over again.

patrickhulce

comment created time in 2 days

issue comment GoogleChrome/lighthouse

suggest font-display optional if font is preloaded, to reduce layout shift

Also @paulirish I just saw

  1. the documented optimizations don't have much to do with preload. :/

What led you to believe this? From my observations even when the font is already at the exact same depth in the discovery tree such that you would normally never preload it, it still has the effect of the optional font actually getting used.

https://melodic-class.glitch.me/font-preload-optional.html

[screenshot]

If you're referring to the m83 changes having nothing to do with preload, I agree there, but at the very least they inspired the web.dev article and the work here :)

connorjclark

comment created time in 2 days

issue comment GoogleChrome/lighthouse

suggest font-display optional if font is preloaded, to reduce layout shift

I also don't think Lighthouse should tell devs to preload all their fonts.

  1. I don't see the need for utmost urgency with swap and fallback. If you're already OK with the layout shifts, then why create more early network contention with fonts? It's not really holding up any perf metrics, and the jank coming ~600ms later isn't a game-changer. It's a good idea if you don't have anything else going on, but a blanket "preload all of your fonts" would be quite bad advice for sites that use a lot of fonts.
  2. It's kind of a major pain to try to preload fonts coming from a third party at URLs you don't know are stable. Knowing this, I'd definitely prefer to be judicious with this advice and save it for when it has a large impact.
  3. Very related to 2, but uses-rel-preload has a 1st-party requirement for a reason, and violating it should have a significant benefit (like avoiding a situation where your font isn't used at all). If we'd like to revisit that 1st-party decision then we should modify the existing uses-rel-preload audit rather than add a separate audit entirely.

connorjclark

comment created time in 2 days

issue comment GoogleChrome/lighthouse

gcp fleet collection feedback

Go for any and all of it :) I personally like the confirmation, but can see the need for auto. Does your GCP account have a limit on instance count? As for --skip-trace-download: it lets you rerun the exact flow with lighthouse -A, which I think is pretty handy for debugging and reproducibility.

connorjclark

comment created time in 2 days

Pull request review comment GoogleChrome/lighthouse

core(renderer): display dash gauge for categories with entirely n/a audits

```diff
 class CategoryRenderer {
       percentageEl.title = Util.i18n.strings.errorLabel;
     }
-    if (this.hasApplicableAudits(category)) {
+    // Render a numerical score if the category has applicable audits, or no audits whatsoever.
+    if (category.auditRefs.length === 0 || this.hasApplicableAudits(category)) {
```

I agree it doesn't make sense, but I can imagine a company making a custom config where such a scenario happens. I'm confused though @connorjclark, I thought you just wanted this function to handle the 0-audit case; are you saying now we should ignore what happens when there are 0 audits?

saavannanavati

comment created time in 2 days

Pull request review comment GoogleChrome/lighthouse

new_audit: add jankless-font audit

```diff
 const defaultConfig = {
         {id: 'long-tasks', weight: 0, group: 'diagnostics'},
         {id: 'non-composited-animations', weight: 0, group: 'diagnostics'},
         {id: 'unsized-images', weight: 0, group: 'diagnostics'},
+        {id: 'jankless-font', weight: 0, group: 'diagnostics'},
```

Nope, I consider them breaking for LHCI which you might be remembering? But we've already added a few audits since 6.0 to best practices IIRC.

lemcardenas

comment created time in 2 days

Pull request review comment GoogleChrome/lighthouse

core(renderer): display dash gauge for categories with entirely n/a audits

```diff
 class CategoryRenderer {
       percentageEl.title = Util.i18n.strings.errorLabel;
     }
-    if (this.hasApplicableAudits(category)) {
+    // Render a numerical score if the category has applicable audits, or no audits whatsoever.
+    if (category.auditRefs.length === 0 || this.hasApplicableAudits(category)) {
```

we do? I don't have any context and am just jumping in out of surprise :) but if someone creates a category without any audits I would expect a gray "not applicable" rather than a red 0

saavannanavati

comment created time in 2 days

Pull request review comment GoogleChrome/lighthouse

new_audit: add jankless-font audit

```diff
 const defaultConfig = {
         {id: 'long-tasks', weight: 0, group: 'diagnostics'},
         {id: 'non-composited-animations', weight: 0, group: 'diagnostics'},
         {id: 'unsized-images', weight: 0, group: 'diagnostics'},
+        {id: 'jankless-font', weight: 0, group: 'diagnostics'},
```

I agree weight 1 makes sense, if we end up finding edge cases where this isn't a good idea then we can always reevaluate :)

lemcardenas

comment created time in 2 days

Pull request review comment GoogleChrome/lighthouse

new_audit: add jankless-font audit

```diff
 const defaultConfig = {
         {id: 'long-tasks', weight: 0, group: 'diagnostics'},
         {id: 'non-composited-animations', weight: 0, group: 'diagnostics'},
         {id: 'unsized-images', weight: 0, group: 'diagnostics'},
+        {id: 'jankless-font', weight: 0, group: 'diagnostics'},
```

so based on the confirmation in #11227 the advice in this audit is actually to improve the chances that the font gets used and doesn't really have any impact on the performance metrics or CLS (the layout shift doesn't happen in latest chrome whether you preload or not)

given that, I'm thinking it might better belong in "Best Practices - UX" group

lemcardenas

comment created time in 2 days

issue opened GoogleChrome/lighthouse

Establish an audit naming guide

Summary

With a lot of new audits recently we've had to discuss audit names several times. I think we should formalize this into specific guidance on naming and attempt to standardize our existing audit names as much as possible :)

Doing a quick review...

  • noun of the stuff/topic being surfaced e.g. dom-size, deprecations, non-composited-animations, unsized-images, doctype
    • def the most common, shoot for this?
  • phrase that describes the behavior the page should exhibit e.g. no-vulnerable-libraries, works-offline, external-anchors-use-rel-noopener, uses-http2
    • mostly complete the phrase "the page ..."
    • some complete the phrase "the page uses/has ..."
    • some are complete phrases on their own

created time in 2 days

issue closed GoogleChrome/lighthouse

Protocol Error: Could not obtain database names


Provide the steps to reproduce

  1. Opened Chrome DevTools, found Lighthouse tab, clicked Generate Report for facesofcovid.net.

(The version I loaded is from commit 7601d29 of https://github.com/LillianWight/FacesOfCOVID).

What is the current behavior?

First, I got the Protocol Timeout error (I reported the issue in the Pinned section). I cleared the site history, then ran LH again. This time I got the above issue. The full error is:

Protocol error (Network.emulateNetworkConditions): Could not obtain database names.

Channel: DevTools
Initial URL: https://facesofcovid.net/
Chrome Version: 84.0.4147.125
Stack Trace: Error: Protocol error (Network.emulateNetworkConditions): Could not obtain database names.
    at Function.fromProtocolMessage (devtools://devtools/remote/serve_file/@d0784639447f2e10d32ebaf9861092b20cfde286/lighthouse_worker/lighthouse_worker_module.js:1898:121)
    at eval (devtools://devtools/remote/serve_file/@d0784639447f2e10d32ebaf9861092b20cfde286/lighthouse_worker/lighthouse_worker_module.js:1417:246)

I ran LH again. This time I got:

[screenshot]

Environment Information

  • Affected Channels: DevTools
  • Lighthouse version: n/a - I'm new to development and I just used what was within the DevTools
  • Chrome version: Version 84.0.4147.125 (Official Build) (64-bit)
  • Node.js version:
  • Operating System: Windows 10

Related issues

Nothing other than what is described here.

closed time in 2 days

LillianWight

issue comment GoogleChrome/lighthouse

Protocol Error: Could not obtain database names

Great! I'm going to go ahead and close this issue then, but feel free to chime in if it continues to be problematic.

LillianWight

comment created time in 2 days

Pull request review comment GoogleChrome/lighthouse

new_audit: add large-javascript-libraries audit

```js
/**
 * @license Copyright 2020 The Lighthouse Authors. All Rights Reserved.
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
 * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
 */

/**
 * @fileoverview This audit checks a page for any large JS libraries with smaller alternatives.
 * These libraries can be replaced with functionally equivalent, smaller ones.
 */

'use strict';
/** @typedef {{repository: string, lastScraped: number|'Error', versions: Record<string, {gzip: number}>}} BundlePhobiaLibrary */
/** @typedef {{gzip: number, name: string, repository: string}} MinifiedBundlePhobiaLibrary */

/** @type {Record<string, BundlePhobiaLibrary>} */
const libStats = require('../lib/large-javascript-libraries/bundlephobia-database.json');

/** @type {Record<string, string[]>} */
const librarySuggestions = require('../lib/large-javascript-libraries/library-suggestions.js')
  .suggestions;

const Audit = require('./audit.js');
const i18n = require('../lib/i18n/i18n.js');

const UIStrings = {
  /** Title of a Lighthouse audit that provides detail on large Javascript libraries that are used on the page that have better alternatives. This descriptive title is shown to users when no known unnecessarily large libraries are detected on the page. */
  title: 'Avoids unnecessarily large JavaScript libraries',
  /** Title of a Lighthouse audit that provides detail on large Javascript libraries that are used on the page that have better alternatives. This descriptive title is shown to users when some known unnecessarily large libraries are detected on the page. */
  failureTitle: 'Replace unnecessarily large JavaScript libraries',
  /** Description of a Lighthouse audit that tells the user why they should care about the large Javascript libraries that have better alternatives. This is displayed after a user expands the section to see more. No character length limits. */
  description: 'Large JavaScript libraries can lead to poor performance. ' +
    'Prefer smaller, functionally equivalent libraries to reduce your bundle size.' +
    ' [Learn more](https://developers.google.com/web/fundamentals/performance/webpack/decrease-frontend-size#optimize_dependencies).',
  /** Label for a column in a data table. Entries will be names of large JavaScript libraries that could be replaced. */
  columnLibraryName: 'Library',
  /** [ICU Syntax] Label for the Large JavaScript Libraries audit identifying how many large libraries were found. */
  displayValue: `{libraryCount, plural,
    =1 {1 large library found}
    other {# large libraries found}
    }`,
};

const str_ = i18n.createMessageInstanceIdFn(__filename, UIStrings);

class LargeJavascriptLibraries extends Audit {
  /**
   * @return {LH.Audit.Meta}
   */
  static get meta() {
    return {
      id: 'large-javascript-libraries',
      title: str_(UIStrings.title),
      failureTitle: str_(UIStrings.failureTitle),
      description: str_(UIStrings.description),
      requiredArtifacts: ['Stacks'],
    };
  }

  /**
   * @param {LH.Artifacts} artifacts
   * @return {LH.Audit.Product}
   */
  static audit(artifacts) {
    /** @type {Array<{original: MinifiedBundlePhobiaLibrary, suggestions: MinifiedBundlePhobiaLibrary[]}>} */
    const libraryPairings = [];
    const detectedLibs = artifacts.Stacks.filter(stack => stack.detector === 'js');

    const seenLibraries = new Set();

    for (const detectedLib of detectedLibs) {
      if (!detectedLib.npm || !libStats[detectedLib.npm]) continue;

      const suggestions = librarySuggestions[detectedLib.npm];
      if (!suggestions) continue;

      if (seenLibraries.has(detectedLib.npm)) continue;
      seenLibraries.add(detectedLib.npm);

      let version = 'latest';
      if (detectedLib.version && libStats[detectedLib.npm].versions[detectedLib.version]) {
        version = detectedLib.version;
      }

      const originalLib = libStats[detectedLib.npm].versions[version];

      /** @type {Array<{name: string, repository: string, gzip: number}>} */
      const smallerSuggestions = [];
      for (const suggestion of suggestions) {
        if (libStats[suggestion].versions['latest'].gzip > originalLib.gzip) continue;
```

I think those are slightly separate discussions @saavannanavati.

I'm not sure we strictly need this continue because we should never add a suggestion to our set if it's bigger than the target library :) but it feels reasonable to leave in as a baseline check

saavannanavati

comment created time in 2 days

push event patrickhulce/third-party-web

Mikael Schirén

commit sha 3b67ed094c69a6cc27583b44415744753ada4443

feat: add Matomo entity (#105)

view details

push time in 2 days

PR merged patrickhulce/third-party-web

Adding Matomo entity

Tagged as analytics, but could also have tag-manager on the address. Should fix #48

+7 -1

0 comments

1 changed file

mikkeschiren

pr closed time in 2 days

Pull request review comment patrickhulce/third-party-web

feat: add more products

```diff
       'widget.intercom.io',
       'nexus-websocket-a.intercom.io',
     ],
+    products: [
+      {
+        name: 'Intercom Widget',
+        urlPatterns: [
+          '.*widget\\.intercom\\.io.*',
+          '.*js\\.intercomcdn\\.com/shim\\.latest\\.js'
+        ],
+        facades: [
+          {
```

good call, I'll do the JS refactor and publish

adamraine

comment created time in 2 days

push event patrickhulce/third-party-web

Adam Raine

commit sha 04792a97a0dbb1dab7033241cd12c873a0bb739e

feat: add more product coverage (#110)

view details

push time in 2 days

delete branch patrickhulce/third-party-web

delete branch: more-products

delete time in 2 days

PR merged patrickhulce/third-party-web

feat: add more products

Adds products with facades for:

  • Vimeo Embedded player
  • Intercom Widget
  • Help Scout Beacon
  • Facebook Messenger Customer Chat
  • Drift Live Chat
+91 -14

0 comments

2 changed files

adamraine

pr closed time in 2 days

Pull request review comment patrickhulce/third-party-web

feat: add more products

```diff
       'widget.intercom.io',
       'nexus-websocket-a.intercom.io',
     ],
+    products: [
+      {
+        name: 'Intercom Widget',
+        urlPatterns: [
+          '.*widget\\.intercom\\.io.*',
+          '.*js\\.intercomcdn\\.com/shim\\.latest\\.js'
+        ],
+        facades: [
+          {
```

the amount of duplication here suggests we might have gotten our schema wrong 😆

if we see this happen with more facades in the future we might want to extract these and have an ID for them or something
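
A rough sketch of that idea, with entirely hypothetical field names (this is not the current schema):

```js
// Hypothetical: define each facade once, keyed by an id...
const facades = {
  'lite-youtube': {
    name: 'Lite YouTube',
    repo: 'https://github.com/paulirish/lite-youtube-embed',
  },
};

// ...and let products reference facade ids instead of inlining the objects.
const product = {
  name: 'YouTube Embedded Player',
  urlPatterns: ['.*\\.youtube\\.com/embed/.*'],
  facadeIds: ['lite-youtube'],
};
```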

adamraine

comment created time in 2 days

Pull request review comment GoogleChrome/lighthouse

new_audit: add large-javascript-libraries audit

```js
/**
 * @license Copyright 2020 The Lighthouse Authors. All Rights Reserved.
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
 * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
 */

'use strict';
```

I think @brendankenny might have been hoping for a fileoverview in this file to explain how these suggestions were decided, what criteria we might use to add new ones, etc.

Maybe the description could be brief and link to the doc you produced where this discussion took place?

saavannanavati

comment created time in 2 days

Pull request review comment GoogleChrome/lighthouse

new_audit: add large-javascript-libraries audit

```js
/**
 * @license Copyright 2020 The Lighthouse Authors. All Rights Reserved.
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
 * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
 */

/**
 * @fileoverview This script generates a database of library statistics required for
 * the large-javascript-libraries audit. The data is scraped from BundlePhobia and
 * includes things like the library transfer size, and GitHub URL for each version of
 * a library. This script must be run ever so often to keep the database up-to-date.
 */

'use strict';

/* eslint-disable no-console */

/** @typedef {import('bundle-phobia-cli').BundlePhobiaLibrary} BundlePhobiaLibrary */

const fs = require('fs');
const path = require('path');
const getPackageVersionList = require('bundle-phobia-cli').fetchPackageStats.getPackageVersionList;
const fetchPackageStats = require('bundle-phobia-cli').fetchPackageStats.fetchPackageStats;
const databasePath = path.join(__dirname,
  '../lib/large-javascript-libraries/bundlephobia-database.json');

/** @type {Record<string, string[]>} */
const suggestionsJSON = require('../lib/large-javascript-libraries/library-suggestions.js')
  .suggestions;

/** @type {string[]} */
const largeLibraries = Object.keys(suggestionsJSON);

/** @type {string[]} */
const suggestedLibraries = Object.values(
  suggestionsJSON).reduce((arr, lib) => arr.concat(lib), []);

const totalLibraries = largeLibraries.length + suggestedLibraries.length;

/** @type {Record<string, {lastScraped: number | 'Error', repository: string, versions: any}>} */
let database = {};
if (fs.existsSync(databasePath)) {
  database = require(databasePath);
}

/**
 * Returns true if this library has been scraped from BundlePhobia in the past hour.
 * This is used to rate-limit the number of network requests we make to BundlePhobia.
 * @param {string} library
 * @return {boolean}
 */
function hasBeenRecentlyScraped(library) {
  if (!database[library]) return false;

  const lastScraped = database[library].lastScraped;
  if (lastScraped === 'Error') return false;

  return (Date.now() - lastScraped) / (1000 * 60 * 60) < 1;
}

/**
 * Returns true if the object represents valid BundlePhobia JSON.
 * The version string must not match this false-positive expression: '{number} packages'.
 * @param {any} library
 * @return {library is BundlePhobiaLibrary}
 */
function validateLibraryObject(library) {
  return library.hasOwnProperty('name') &&
    library.hasOwnProperty('size') &&
    library.hasOwnProperty('gzip') &&
    library.hasOwnProperty('description') &&
    library.hasOwnProperty('repository') &&
    library.hasOwnProperty('version') &&
    !library.version.match(/^([0-9]+) packages$/);
}

/**
 * Save BundlePhobia stats for a given npm library to the database.
 * @param {string} library
 * @param {number} index
 * @param {number} numVersionsToFetchLimit
 */
async function collectLibraryStats(library, index, numVersionsToFetchLimit) {
  console.log(`\n◉ (${index}/${totalLibraries}) ${library} `);

  if (hasBeenRecentlyScraped(library)) {
    console.log(`   ❕ Skipping`);
    return;
  }

  /** @type {Array<BundlePhobiaLibrary>} */
  const libraries = [];
  /** @type {'Error'|number} */
  let lastScraped = Date.now();

  const versions = await getPackageVersionList(library, numVersionsToFetchLimit);
  for (const version of versions) {
    try {
      const libraryJSON = await fetchPackageStats(version);
      if (validateLibraryObject(libraryJSON)) libraries.push(libraryJSON);
    } catch (e) {
      console.log(`   ❌ Failed to fetch stats | ${version}`);
      lastScraped = 'Error';
    }
  }

  for (let index = 0; index < libraries.length; index++) {
    const library = libraries[index];

    if (index === 0) {
      database[library.name] = {
        repository: library.repository,
        lastScraped,
        versions: {},
      };
    }

    if (index === 0 ||
      Math.abs(library.gzip - database[library.name].versions['latest'].gzip) > 512) {
```

```suggestion
      // Only include the version information if it's sufficiently different from latest.
      Math.abs(library.gzip - database[library.name].versions['latest'].gzip) > 512) {
```
saavannanavati

comment created time in 2 days

issue closed GoogleChrome/lighthouse-ci

Update the GitHub App to avoid deprecated events

posting a reminder for myself

We're contacting you regarding upcoming changes for your GitHub Apps:
  - Lighthouse CI

We no longer support two events which your GitHub Apps may rely on, "integration_installation" and "integration_installation_repositories".

These events can be replaced with the "installation" and "installation_repositories" events respectively.

The "integration_installation" and "integration_installation_repositories" events will be removed after October 1st, 2020.

Please visit https://developer.github.com/changes/2020-04-15-replacing-the-installation-and-installation-repositories-events for more information about suggested changes, brownouts, and removal dates.

closed time in 2 days

patrickhulce

issue comment GoogleChrome/lighthouse-ci

Update the GitHub App to avoid deprecated events

We don't actually use these, we're good here

patrickhulce

comment created time in 2 days

push event patrickhulce/jest-image-snapshot

Patrick Hulce

commit sha d946d41fd1e11a12416b3996a822d99dcc34b809

test CI fix

view details

push time in 2 days

push event patrickhulce/jest-image-snapshot

Patrick Hulce

commit sha 23e57bd47dc81e33c7c480d00107c9b3b1a72a7e

remove leftover merge artifact

view details

push time in 2 days

pull request comment americanexpress/jest-image-snapshot

feat: add obsolete snapshot reporting

Updated with feedback @anescobar1991 and added an end-to-end test for this, but I'm not sure what to do about coverage since the statements being covered are in child processes that aren't instrumented. Any guidance here?

patrickhulce

comment created time in 2 days

push event patrickhulce/jest-image-snapshot

Patrick Hulce

commit sha 2cfa894e6aa9d73106bef4ea08cc40fd06222439

feedback and cleanup

view details

Patrick Hulce

commit sha a37c32c2ccf1ec0545de142a76dbb0eae6012dc5

Merge branch 'list_obsolete' of github.com:patrickhulce/jest-image-snapshot into list_obsolete

view details

push time in 2 days

Pull request review comment americanexpress/jest-image-snapshot

feat: add obsolete snapshot reporting

```js
const fs = require("fs");
const path = require("path");

const TOUCHED_FILE_LIST_PATH = path.join(
  process.cwd(),
  ".jest-image-snapshot-touched-files"
);

const IS_ENABLED = !!process.env.JEST_IMAGE_SNAPSHOT_TRACK_OBSOLETE;

class ObsoleteReporter {
  static isEnabled() {
    return IS_ENABLED;
  }

  static markTouchedFile(filePath) {
    if (!IS_ENABLED) return;
    const fd = fs.openSync(TOUCHED_FILE_LIST_PATH, "as");
    fs.writeSync(fd, `${filePath}\n`);
    fs.closeSync(fd);
  }

  onRunStart() {
    if (!IS_ENABLED) return;
    if (fs.existsSync(TOUCHED_FILE_LIST_PATH)) {
      fs.unlinkSync(TOUCHED_FILE_LIST_PATH);
    }
  }

  onRunComplete(contexts, results) {
    if (!IS_ENABLED) return;
    const touchedFiles = Array.from(
      new Set(
        fs
          .readFileSync(TOUCHED_FILE_LIST_PATH, "utf-8")
          .split("\n")
          .filter((file) => file && fs.existsSync(file))
      )
    );
    const imageSnapshotDirectories = Array.from(
      new Set(touchedFiles.map((file) => path.dirname(file)))
    );
    const allFiles = imageSnapshotDirectories
      .map((dir) => fs.readdirSync(dir).map((file) => path.join(dir, file)))
      .reduce((a, b) => a.concat(b))
      .filter((file) => file.endsWith("-snap.png"));
    const obsoleteFiles = allFiles.filter(
      (file) => !touchedFiles.includes(file)
    );

    if (fs.existsSync(TOUCHED_FILE_LIST_PATH)) {
      fs.unlinkSync(TOUCHED_FILE_LIST_PATH);
    }

    console.log({
```

the original point was to just print the outdated files for the user to remove them manually, so yes a log would be necessary in that case.

for now, I've updated to automatically remove the files, but that feels slightly riskier IMO.

patrickhulce

comment created time in 2 days

Pull request review comment americanexpress/jest-image-snapshot

feat: add obsolete snapshot reporting

```diff
 function configureToMatchImageSnapshot({
     const snapshotsDir = customSnapshotsDir || path.join(path.dirname(testPath), SNAPSHOTS_DIR);
     const diffDir = customDiffDir || path.join(snapshotsDir, '__diff_output__');
     const baselineSnapshotPath = path.join(snapshotsDir, `${snapshotIdentifier}-snap.png`);
-
+    ObsoleteReporter.markTouchedFile(baselineSnapshotPath);
```

Sure 👍

patrickhulce

comment created time in 2 days

Pull request review comment americanexpress/jest-image-snapshot

feat: add obsolete snapshot reporting

```js
const fs = require("fs");
const path = require("path");

const TOUCHED_FILE_LIST_PATH = path.join(
  process.cwd(),
  ".jest-image-snapshot-touched-files"
);

const IS_ENABLED = !!process.env.JEST_IMAGE_SNAPSHOT_TRACK_OBSOLETE;

class ObsoleteReporter {
  static isEnabled() {
    return IS_ENABLED;
  }

  static markTouchedFile(filePath) {
    if (!IS_ENABLED) return;
    const fd = fs.openSync(TOUCHED_FILE_LIST_PATH, "as");
```

fd is fairly common for a file descriptor in my experience, but I can understand it's opaque for those that haven't seen the pattern before.

touchedListFileDescriptor?

patrickhulce

comment created time in 2 days

Pull request review comment americanexpress/jest-image-snapshot

feat: add obsolete snapshot reporting

```diff
 const path = require('path');
 const Chalk = require('chalk').constructor;
 const { diffImageToSnapshot, runDiffImageToSnapshot } = require('./diff-snapshot');
 const fs = require('fs');
+const ObsoleteReporter = require('./obsolete-reporter');
```

😆 Sounds good, any suggestions? SnapshotRemovalReporter?

patrickhulce

comment created time in 2 days

issue comment GoogleChrome/lighthouse

aria-valid-attr-value: allow numbers with comma as decimal separator

Thanks for filing @placetobejohan! Have you tried to search for answers in axe-core? The logic for the accessibility audits is all controlled by that project. We'll respect whatever they decide there.

placetobejohan

comment created time in 2 days

issue comment GoogleChrome/lighthouse

Working service worker, but Lighthouse does not recognize

Thanks for filing @a-tonchev! I can reproduce this; we'll have to look into it.

a-tonchev

comment created time in 2 days

issue comment GoogleChrome/lighthouse

LightHouse HTML report generated in Debian not rendering in browser

Thanks for filing @karthigeyan-saran! I know what's going on here. We'll work on a fix 👍

The source-location renderer needs to be able to handle URLs that aren't valid (because they don't create <a> elements)

https://github.com/GoogleChrome/lighthouse/blob/f08ab37e8942205c4bbf91d56609f06631c67ce9/lighthouse-core/report/html/renderer/details-renderer.js#L547-L549

karthigeyan-saran

comment created time in 2 days

issue closed GoogleChrome/lighthouse-ci

Something went wrong with recording the trace over your page load. Please run Lighthouse again. (NO_LCP)

I'm constantly getting this error in Lighthouse CI running in Headless Chrome in GitLab CI. The same works fine when tried locally. Due to this issue the performance number is always 0. Attaching a screenshot for reference.

[screenshot]

closed time in 2 days

RitikPatni

issue comment GoogleChrome/lighthouse-ci

Something went wrong with recording the trace over your page load. Please run Lighthouse again. (NO_LCP)

Thanks for filing @RitikPatni! We're tracking this over in https://github.com/GoogleChrome/lighthouse/issues/11180

RitikPatni

comment created time in 2 days

issue comment eugeneware/jpeg-js

SOI not found jpeg-js with react-native-image-picker

The file extension does not say anything about the file's actual contents, unfortunately.

If this is a JPEG that opens in the vast majority of other parsers or editors, then we would welcome a copy of the file to try it here. But 99/100 this error means the buffer passed to jpeg-js was malformed.
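
As a quick sanity check, a valid JPEG always begins with the SOI marker bytes 0xFF 0xD8, so you can verify the buffer before decoding. A minimal sketch (the base64 input is a placeholder):

```js
const jpeg = require('jpeg-js');

// A JPEG must begin with the Start Of Image (SOI) marker: 0xFF 0xD8.
function looksLikeJpeg(buf) {
  return buf.length > 2 && buf[0] === 0xff && buf[1] === 0xd8;
}

const jpegData = Buffer.from(base64String, 'base64'); // base64String: your image data
if (!looksLikeJpeg(jpegData)) {
  throw new Error('Not a JPEG: SOI marker missing, so the source data is malformed');
}
const rawImageData = jpeg.decode(jpegData); // {width, height, data}
```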

zuntilZ

comment created time in 2 days

issue comment GoogleChrome/lighthouse

Protocol Error: Could not obtain database names

Thanks for filing @LillianWight!

I'm not sure what is causing Protocol error (Network.emulateNetworkConditions): Could not obtain database names., from Chromium source that message is only raised in connection with IndexedDB which is very unrelated to Network.emulateNetworkConditions... I also couldn't reproduce it.

The NO_FCP error is much more common. Did you happen to background the tab while you waited for results? When using the DevTools Lighthouse (or doing any performance profiling really) the tab needs to be visible and kept in the foreground or it will be throttled/prevented from painting.

LillianWight

comment created time in 3 days

issue comment GoogleChrome/lighthouse

Improve BenchmarkIndex to land device targeting

So some good stuff came out of this and we might have finally accomplished the title of the issue :)

tl;dr - V8 team gave us advice on how to tweak our microbenchmark to be more resilient, a simple average of the two tweaked benchmarks now correlates with JS execution time better than any other JS benchmark tested, and they're even going to add it to their waterfall to be alerted about major changes to it 🎉

Root Cause

The bimodality appears to be caused by GC heuristics used by Chrome. The identified V8 CL changes the page size, which normally increases GC performance but in Chrome's slow path causes far more GC interruptions.

| Before | After (slow) | After (fast) |
| --- | --- | --- |
| 34% time spent in GC | 71% time spent in GC | 24% time spent in GC |

The Fix

Our benchmark creates a string of length 100k, which just pushes past a threshold that triggers this crazy slow GC path. By reporting the iterations on a shorter string of length 10k and dividing the resulting index by 10, we end up with nearly identical benchmark results to the fast path on our length-100k string, but now we always fall into the fast GC path :)

The Improvement

The allocation/GC-dependence of this benchmark was sort of a feature since cheap devices tend to struggle with memory ops, but V8 team suggested trying a benchmark that preallocates an array of 100k and just copies elements into it. By combining the results of this tweaked benchmark with our previous one, we actually get a new benchmark that correlates with JS execution time on sites better than every other web benchmark we've tested and is only beaten out by GeekBench 🎉
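
A minimal sketch of what the tweaked benchmark pair might look like (the function names and loop bodies here are illustrative assumptions, not Lighthouse's exact code):

```js
// Sub-benchmark 1: string concatenation, but on a 10k string instead of 100k
// so we stay under the slow-GC-path threshold; divide by 10 to keep the
// index comparable to the old 100k-string numbers.
function stringBenchmark(durationMs = 1000) {
  const start = Date.now();
  let iterations = 0;
  while (Date.now() - start < durationMs) {
    let str = '';
    while (str.length < 10000) str += 'lighthouse'; // below the slow-GC threshold
    iterations++;
  }
  return iterations / 10;
}

// Sub-benchmark 2 (per the V8 team's suggestion): preallocate an array of 100k
// and copy elements into it, so the hot loop does no allocation at all.
function arrayBenchmark(durationMs = 1000) {
  const arr = new Array(100000).fill(0); // preallocated once, reused every iteration
  const start = Date.now();
  let iterations = 0;
  while (Date.now() - start < durationMs) {
    for (let j = 0; j < arr.length; j++) arr[j] = j;
    iterations++;
  }
  return iterations;
}

// The combined index is a simple average of the two tweaked benchmarks.
const benchmarkIndex = (stringBenchmark() + arrayBenchmark()) / 2;
```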

I'll open a PR for the tweaked combo benchmark and we can continue with our previous plans :)

patrickhulce

comment created time in 3 days

started paulirish/lite-youtube-embed

started time in 3 days

Pull request review comment patrickhulce/third-party-web

Add get API to get product

```diff
+export interface IFacade {
+  name: string
+  repo: string
+}
+
+export interface IProduct {
+  name: string
+  urlPatterns: string[]
+  facades: IFacade[]
```

```suggestion
  facades?: IFacade[]
```
adamraine

comment created time in 3 days

Pull request review comment patrickhulce/third-party-web

Add get API to get product

```diff
       'img.youtube.com',
       'fcmatch.youtube.com',
     ],
+    products: [
+      {
+        name: 'YouTube Embedded Player',
+        urlPatterns: [
+          '.*\\.youtube\\.com/embed/.*',
+        ],
+        facades: [
+          {
+            name: 'Lite YouTube',
+            repo: 'https://github.com/paulirish/lite-youtube-embed#other-lite-embeds',
```

```suggestion
            repo: 'https://github.com/paulirish/lite-youtube-embed',
```
adamraine

comment created time in 3 days

Pull request review comment patrickhulce/third-party-web

Add get API to get product

```diff
+export interface IFacade {
+  name: string
+  repo: string
+}
+
+export interface IProduct {
+  name: string
+  urlPatterns: string[]
+  facades: IFacade[]
+}
+
 export interface IEntity {
   name: string
   company: string
   homepage?: string
   categories: string[]
   domains: string[]
+  products: IProduct[]
```

```suggestion
  products?: IProduct[]
```
adamraine

comment created time in 3 days

Pull request review comment patrickhulce/third-party-web

Add get API to get product

```diff
       'img.youtube.com',
       'fcmatch.youtube.com',
     ],
+    products: [
+      {
+        name: 'YouTube Embedded Player',
+        urlPatterns: [
+          '.*\\.youtube\\.com/embed/.*',
```

yep 👍

adamraine

comment created time in 3 days

Pull request review comment GoogleChrome/lighthouse

misc: add gcp fleet creation scripts

```diff
 DIRNAME="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
 LH_ROOT="$DIRNAME/../../.."
 cd $DIRNAME
 
-GCLOUD_USER=$(gcloud config get-value account | awk -F '@' '{print $1}')
+GCLOUD_USER=$(gcloud config get-value account | awk -F '@' '{gsub("[^a-z]","",$1); print $1}')
```

ha well you're welcome to propose an alternative that removes invalid characters :) (scripts wouldn't run because my gcloud user is patrick.hulce)

patrickhulce

comment created time in 3 days

Pull request review comment patrickhulce/third-party-web

Add get API to get product

```diff
 function getEntityInDataset(entityByDomain, entityByRootDomain, originOrURL) {
   return undefined
 }
 
+function getProductInDataset(entityByDomain, entityByRootDomain, originOrURL) {
+  const entity = getEntityInDataset(entityByDomain, entityByRootDomain, originOrURL);
+  const products = entity && entity.products;
+  if (!products) return undefined;
+  for (const product of products) {
+    for (const pattern of product.urlPatterns) {
+      const regex = new RegExp(pattern);
+      if (regex.test(originOrURL)) {
+        return product;
```

I think we should construct products such that only one matches.

the point of .products is supposed to be a disambiguation of an entity. I'm not sure what it would mean for multiple to match :)

adamraine

comment created time in 3 days

Pull request review comment patrickhulce/third-party-web

Add get API to get product

```diff
 function getEntityInDataset(entityByDomain, entityByRootDomain, originOrURL) {
   return undefined
 }
 
+function getProductInDataset(entityByDomain, entityByRootDomain, originOrURL) {
```

can you add all this to the typedefs? https://github.com/patrickhulce/third-party-web/blob/master/lib/index.d.ts

adamraine

comment created time in 3 days

Pull request review comment patrickhulce/third-party-web

Add get API to get product

```diff
       'img.youtube.com',
       'fcmatch.youtube.com',
     ],
+    products: [
+      {
+        name: 'YouTube Embedded Player',
+        urlPatterns: [
+          '.*\\.youtube\\.com/embed/.*',
```

this makes the strongest case yet for finally just giving up on JSON and moving this file to entities.js. I think I might do that before publish

adamraine

comment created time in 3 days

Pull request review comment patrickhulce/third-party-web

Add get API to get product

```diff
 describe('getEntity', () => {
   })
 })
 
+describe('getProduct', () => {
+  it('works on basic url', () => {
+    expect(getProduct('https://www.youtube.com/embed/alGcULGtiv8')).toMatchInlineSnapshot(`
```

we don't have any statistics here that need updating so let's use toEqual instead of a snapshot :)
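
For illustration, the assertion the reviewer is suggesting might look like this (the expected object is assumed from the YouTube product entry above, not taken from the actual test):

```js
expect(getProduct('https://www.youtube.com/embed/alGcULGtiv8')).toEqual({
  name: 'YouTube Embedded Player',
  urlPatterns: ['.*\\.youtube\\.com/embed/.*'],
  facades: [
    {name: 'Lite YouTube', repo: 'https://github.com/paulirish/lite-youtube-embed'},
  ],
});
```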

adamraine

comment created time in 3 days

push event GoogleChrome/lighthouse

Patrick Hulce

commit sha ec0b32d87e17503fecd18dba310fd78cc6f43bf3

usability tweaks

view details

push time in 3 days

push event GoogleChrome/lighthouse

Patrick Hulce

commit sha 64e8515c301fcb7841166324a84ea95bb3e87f54

misc: add gcp fleet creation scripts

view details

push time in 3 days

PR opened GoogleChrome/lighthouse

misc: add gcp fleet creation scripts

Summary: I'll leave this to you @connorjclark to tweak as you wanted, but this is the model that fit well in https://github.com/patrickhulce/dzl-lighthouse/tree/master/cwv/collection

+135 -7

0 comments

5 changed files

pr created time in 3 days

create branch GoogleChrome/lighthouse

branch : gcp_collect_fleet

created branch time in 3 days

issue comment GoogleChrome/lighthouse

suggest font-display optional if font is preloaded, to reduce layout shift

Got it, great! Thank you so much @xiaochengh !

connorjclark

comment created time in 3 days

issue comment GoogleChrome/lighthouse

suggest font-display optional if font is preloaded, to reduce layout shift

Awesome, thanks for chiming in @xiaochengh! It sounds like we should proceed as planned with your existing implementation in #11255 @lemcardenas 👍 (and perhaps add a note about allowing document.fonts.load in the future as well)

For clarity @xiaochengh, does preloading an optional font have any impact on the render cycle or prevent layout shifts? Or is it just about ensuring that the font is used?

Re-reading @housseindjirdeh's article, and from your and @tdresser's comments, it sounds like the layout shift issues specifically are avoided with all optional fonts in m83+ regardless of preload status?

connorjclark

comment created time in 3 days

issue comment GoogleChrome/lighthouse

Surface manifest and service worker URLs in JSON results

caused by lighthouse itself, I believe?

Our fetch of the manifest does not happen while we're recording devtools logs though, so I would assume this happens somehow as a result of us loading the page and there being a link[rel=manifest]

rviscomi

comment created time in 3 days

MemberEvent

Pull request review comment GoogleChrome/lighthouse

new_audit: add jankless-font audit

```js
/**
 * @license Copyright 2020 The Lighthouse Authors. All Rights Reserved.
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
 * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
 */
/**
 * @fileoverview
 * Audit that checks whether fonts that use `font-display: optional` were preloaded.
 */

'use strict';

const Audit = require('./audit.js');
const i18n = require('./../lib/i18n/i18n.js');
const FontDisplay = require('./../audits/font-display.js');
const PASSING_FONT_DISPLAY_REGEX = /^(optional)$/;
const NetworkRecords = require('../computed/network-records.js');

const UIStrings = {
  /** Title of a Lighthouse audit that provides detail on whether fonts that used `font-display: optional` were preloaded. This descriptive title is shown to users when all fonts that used `font-display: optional` were preloaded. */
  title: 'Fonts with `font-display: optional` are preloaded',
  /** Title of a Lighthouse audit that provides detail on whether fonts that used `font-display: optional` were preloaded. This descriptive title is shown to users when one or more fonts used `font-display: optional` and were not preloaded. */
  failureTitle: 'Fonts with `font-display: optional` are not preloaded',
  /** Description of a Lighthouse audit that tells the user why they should preload fonts if they are using `font-display: optional`. This is displayed after a user expands the section to see more. No character length limits. 'Learn More' becomes link text to additional documentation. */
  description: 'Preloading fonts that use `font-display: optional` can help reduce layout shifts and improve CLS. [Learn More](https://web.dev/optimize-cls/#web-fonts-causing-foutfoit)',
};

const str_ = i18n.createMessageInstanceIdFn(__filename, UIStrings);

class JanklessFontAudit extends Audit {
  /**
   * @return {LH.Audit.Meta}
   */
  static get meta() {
    return {
      id: 'jankless-font',
      title: str_(UIStrings.title),
      failureTitle: str_(UIStrings.failureTitle),
      description: str_(UIStrings.description),
      requiredArtifacts: ['devtoolsLogs', 'URL', 'CSSUsage'],
    };
  }

  /**
   * Finds which font URLs were attempted to be preloaded,
   * ignoring those that failed to be reused and were requested again.
   * @param {Array<LH.Artifacts.NetworkRequest>} networkRecords
   * @return {Set<string>}
   */
  static getURLsAttemptedToPreload(networkRecords) {
    const attemptedURLs = networkRecords
      .filter(req => req.resourceType === 'Font')
      .filter(req => req.isLinkPreload)
      .map(req => req.url);

    return new Set(attemptedURLs);
  }

  /**
   * @param {LH.Artifacts} artifacts
   * @param {LH.Audit.Context} context
   * @return {Promise<LH.Audit.Product>}
   */
  static async audit(artifacts, context) {
    const devtoolsLog = artifacts.devtoolsLogs[this.DEFAULT_PASS];
    const networkRecords = await NetworkRecords.request(devtoolsLog, context);

    // Gets the URLs of fonts where font-display: optional.
    const optionalFontURLs =
      FontDisplay.findFontDisplayDeclarations(artifacts, PASSING_FONT_DISPLAY_REGEX).passingURLs;

    // Gets the URLs of fonts attempted to be preloaded.
    const preloadedFontURLs =
      JanklessFontAudit.getURLsAttemptedToPreload(networkRecords);

    const results = Array.from(optionalFontURLs)
      .filter(url => !preloadedFontURLs.has(url))
      .map(url => {
        return {url: url};
      });

    /** @type {LH.Audit.Details.Table['headings']} */
    const headings = [
      {key: 'url', itemType: 'url', text: str_(i18n.UIStrings.columnURL)},
      // TODO: show the CLS that could have been saved if font was preloaded.
```

Yeah it would be awesome if we could but probably not feasible. I'm working on trying to come up with a way to evaluate CLS attribution based on the 3p CWV request-blocking experiments, but even that probably wouldn't help much in this font case (also from recent discussion in #11227 it seems like CLS might not be affected by the preload either way and this is mostly about whether the font is used at all?)

lemcardenas

comment created time in 3 days

issue comment GoogleChrome/lighthouse

suggest font-display optional if font is preloaded, to reduce layout shift

Alternatively @tdresser, if you were just phrasing your insider Chrome knowledge as a question to be nicer and not actually wondering what happens like we are, then please continue to share; your teachings are welcome! 😆

connorjclark

comment created time in 3 days

issue comment GoogleChrome/lighthouse

suggest font-display optional if font is preloaded, to reduce layout shift

That doesn't require the font to be preloaded, does it?

The linked article says specifically

Starting in Chrome 83, link rel="preload" and font-display: optional can be combined to remove layout jank completely. Optimizations have landed in Chrome 83 to entirely remove the first render cycle for optional fonts that are preloaded with <link rel="preload">.

I interpret those statements to mean this optimization that the first render cycle is skipped only applies if the font is preloaded.

Later on though it says...

Although it is not necessary to preload an optional font, it greatly improves the chance for it to load before the first render cycle, especially if it is not yet stored in the browser's cache.

@housseindjirdeh would you mind clarifying your intent?

My practical experiments suggest the optional font will only be used if <link rel="preload"> is used, even if the preload doesn't actually result in the request happening any earlier.
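
For reference, the combination under discussion looks roughly like this (the font URL is a placeholder):

```html
<!-- Preload hint for the font... -->
<link rel="preload" href="/fonts/my-font.woff2" as="font" type="font/woff2" crossorigin>
<style>
  /* ...paired with font-display: optional in the @font-face rule. */
  @font-face {
    font-family: 'My Font';
    src: url('/fonts/my-font.woff2') format('woff2');
    font-display: optional;
  }
</style>
```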

connorjclark

comment created time in 3 days

issue comment GoogleChrome/lighthouse-ci

Has anyone experienced some slowness on ligthhouse to fetch data?

Hm, so I guess everything is just dramatically slower for your instance if the recomputation time is >1.5 minutes and the cached time is ~20 seconds.

I'm not sure what could be causing that. We haven't really seen anything close to that slow for cached fetching of statistics.

I suppose we can add the indexes (which we should do anyway) and see if it goes away, but I'm not sure what the true root cause is there.

thiagosanches

comment created time in 3 days

issue closed GoogleChrome/lighthouse-ci

Is there a way to get build diff url?

I would like to post the build diff url to our slack after a build complete, but I either need the build diff url or the build.id to construct the url myself. Is there a way to get that information?

https://github.com/GoogleChrome/lighthouse-ci/blob/master/packages/cli/src/upload/upload.js#L456

closed time in 3 days

kmcrawford

issue comment GoogleChrome/lighthouse-ci

Is there a way to get build diff url?

Yep! The links to each build diff are written to the .lighthouseci/links.json file after every upload.

https://github.com/GoogleChrome/lighthouse-ci/blob/0c5db087c9f9c279dbaa2d86197f2c7948810e22/packages/cli/test/cli.test.js#L240-L248
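
So a post-upload script along these lines could forward the link to Slack. A sketch, assuming links.json maps each audited URL to its report/diff URL and that a Slack incoming webhook is configured (both are assumptions here):

```js
const fs = require('fs');

// Read the links written by `lhci upload`.
const links = JSON.parse(fs.readFileSync('.lighthouseci/links.json', 'utf8'));
const text = Object.entries(links)
  .map(([url, reportUrl]) => `<${reportUrl}|Lighthouse report for ${url}>`)
  .join('\n');

// Post to a Slack incoming webhook (global fetch requires Node 18+).
fetch(process.env.SLACK_WEBHOOK_URL, {
  method: 'POST',
  headers: {'Content-Type': 'application/json'},
  body: JSON.stringify({text}),
});
```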

kmcrawford

comment created time in 3 days

issue closed GoogleChrome/lighthouse-ci

build diff url

I would like to post the build diff url to our slack after a build complete, but I either need the build diff url or the build.id to construct the url myself. Is there a way to get that information?

https://github.com/GoogleChrome/lighthouse-ci/blob/master/packages/cli/src/upload/upload.js#L456

closed time in 3 days

kmcrawford

issue comment GoogleChrome/lighthouse-ci

build diff url

duplicate of https://github.com/GoogleChrome/lighthouse-ci/issues/411

kmcrawford

comment created time in 3 days

issue closed GoogleChrome/lighthouse

Empty robots.txt is reported as not valid

When a robots.txt of 0 bytes is created, e.g. with `touch robots.txt`, it's reported as:

robots.txt is not valid: Lighthouse was unable to download a robots.txt file

Site example: https://plurrrr.com/

closed time in 3 days

john-bokma

issue comment GoogleChrome/lighthouse

Empty robots.txt is reported as not valid

This is actually the same root issue as https://github.com/GoogleChrome/lighthouse/issues/4386 which is much broader and applies to many areas of Lighthouse. We'll de-dupe into there.

john-bokma

comment created time in 3 days

Pull request review comment GoogleChrome/lighthouse

new_audit: add large-javascript-libraries audit

+{+  "date-fns": {+    "repository": "https://github.com/date-fns/date-fns.git",+    "lastScraped": 1596239787705,+    "versions": {+      "2.15.0": {+        "gzip": 18463+      },+      "latest": {+        "gzip": 18463+      },+      "2.14.0": {+        "gzip": 18304+      },+      "2.13.0": {+        "gzip": 17928+      },+      "2.12.0": {+        "gzip": 17484+      },+      "2.11.1": {+        "gzip": 17450+      },+      "2.11.0": {+        "gzip": 17406+      },+      "2.10.0": {+        "gzip": 17385+      },+      "2.9.0": {+        "gzip": 17355+      },+      "2.8.1": {+        "gzip": 17166+      },+      "2.8.0": {+        "gzip": 17165+      }+    }+  },+  "luxon": {+    "repository": "https://github.com/moment/luxon.git",+    "lastScraped": 1596239802128,+    "versions": {+      "1.24.1": {

That's why I think we should store each library version separately

To be clear, we absolutely should. I'm just saying if the gzipped size of a specific version of the library is within a few bytes of the fallback then don't bother saving that specific version information.

saavannanavati

comment created time in 3 days

Pull request review comment GoogleChrome/lighthouse

new_audit: add large-javascript-libraries audit

+{+  "date-fns": {+    "repository": "https://github.com/date-fns/date-fns.git",+    "lastScraped": 1596239787705,+    "versions": {+      "2.15.0": {+        "gzip": 18463+      },+      "latest": {+        "gzip": 18463+      },+      "2.14.0": {+        "gzip": 18304+      },+      "2.13.0": {+        "gzip": 17928+      },+      "2.12.0": {+        "gzip": 17484+      },+      "2.11.1": {+        "gzip": 17450+      },+      "2.11.0": {+        "gzip": 17406+      },+      "2.10.0": {+        "gzip": 17385+      },+      "2.9.0": {+        "gzip": 17355+      },+      "2.8.1": {+        "gzip": 17166+      },+      "2.8.0": {+        "gzip": 17165+      }+    }+  },+  "luxon": {+    "repository": "https://github.com/moment/luxon.git",+    "lastScraped": 1596239802128,+    "versions": {+      "1.24.1": {

If we're defaulting to latest anyway, that would be equally error-prone, assuming that BundlePhobia's numbers are inaccurate.

That's kind of exactly my point, these sizes are all so hand-wavy anyway. Why are we trying to be accurate to the byte with library version information when the user will

we're throwing away data precision

Data precision that the end user of Lighthouse never sees ;)

we prove that BP isn't accounting for something like compressing.

I'm not sure I understand this. We already have proof. They're reporting the gzipped size, and websites exist that do not serve gzipped content (could be brotli or uncompressed).

saavannanavati

comment created time in 3 days

issue closed eugeneware/jpeg-js

SOI not found jpeg-js with react-native-image-picker

const Buffer = require('buffer').Buffer;
global.Buffer = Buffer;
const RNFS = require('react-native-fs');

ImagePicker.launchImageLibrary({mediaType: 'photo'}, (response) => {
  var base64 = response.data;
  var path = response.path;

  // check that the path exists and read the file again
  RNFS.exists(path).then((exists) => {
    if (exists) {
      RNFS.readFile(path, 'base64').then((res) => {
        // convert base64 to buffer
        var jpegData = Buffer.from(res, 'base64');

        // Error DECODE: SOI not found
        var result = jpeg.decode(jpegData);
      });
    }
  });
});

I'm using react-native, react-native-image-picker, jpeg-js, buffer, react-native-fs, and jsQR.

closed time in 3 days

zuntilZ

issue commenteugeneware/jpeg-js

SOI not found jpeg-js with react-native-image-picker

Thanks for filing @zuntilZ! That means the provided buffer is not a valid JPEG.
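For anyone hitting this: a valid JPEG always starts with the two-byte SOI (start-of-image) marker 0xFF 0xD8, so a quick sanity check before decoding can confirm what the picker actually returned (a minimal sketch; looksLikeJpeg is a hypothetical helper, not part of jpeg-js):

const jpeg = require('jpeg-js');

// SOI (start of image) must be the first two bytes of any valid JPEG.
function looksLikeJpeg(buffer) {
  return buffer.length >= 2 && buffer[0] === 0xff && buffer[1] === 0xd8;
}

const jpegData = Buffer.from(base64String, 'base64'); // base64String from the picker
if (looksLikeJpeg(jpegData)) {
  const {width, height, data} = jpeg.decode(jpegData);
} else {
  // The picker likely returned a non-JPEG (e.g. PNG or HEIC) or a truncated file.
}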

zuntilZ

comment created time in 3 days

Pull request review commentGoogleChrome/lighthouse

report: add per-resource breakdowns to the Third Party impact audit

 class ThirdPartySummary extends Audit {
   * @param {Array<LH.Artifacts.NetworkRequest>} networkRecords
   * @param {Array<LH.Artifacts.TaskNode>} mainThreadTasks
   * @param {number} cpuMultiplier
-   * @return {Map<ThirdPartyEntity, {mainThreadTime: number, transferSize: number, blockingTime: number}>}
+   * @return {{byEntity: Map<ThirdPartyEntity, Summary>, byURL: Map<string, Summary>, urls: Map<ThirdPartyEntity, string[]>}}
    */
-  static getSummaryByEntity(networkRecords, mainThreadTasks, cpuMultiplier) {
-    /** @type {Map<ThirdPartyEntity, {mainThreadTime: number, transferSize: number, blockingTime: number}>} */
-    const entities = new Map();
-    const defaultEntityStat = {mainThreadTime: 0, blockingTime: 0, transferSize: 0};
+  static getSummaries(networkRecords, mainThreadTasks, cpuMultiplier) {
+    /** @type {Map<string, Summary>} */ const byURL = new Map();
+    /** @type {Map<ThirdPartyEntity, Summary>} */ const byEntity = new Map();
+    const defaultStat = {mainThreadTime: 0, blockingTime: 0, transferSize: 0};
 
     for (const request of networkRecords) {
-      const entity = thirdPartyWeb.getEntity(request.url);
-      if (!entity) continue;
-
-      const entityStats = entities.get(entity) || {...defaultEntityStat};
-      entityStats.transferSize += request.transferSize;
-      entities.set(entity, entityStats);
+      const urlStat = byURL.get(request.url) || {...defaultStat};
+      urlStat.transferSize += request.transferSize;
+      byURL.set(request.url, urlStat);
     }
 
     const jsURLs = BootupTime.getJavaScriptURLs(networkRecords);
 
     for (const task of mainThreadTasks) {
-      const attributeableURL = BootupTime.getAttributableURLForTask(task, jsURLs);
-      const entity = thirdPartyWeb.getEntity(attributeableURL);
-      if (!entity) continue;
+      const attributableURL = BootupTime.getAttributableURLForTask(task, jsURLs);
 
-      const entityStats = entities.get(entity) || {...defaultEntityStat};
+      const urlStat = byURL.get(attributableURL) || {...defaultStat};
       const taskDuration = task.selfTime * cpuMultiplier;
       // The amount of time spent on main thread is the sum of all durations.
-      entityStats.mainThreadTime += taskDuration;
+      urlStat.mainThreadTime += taskDuration;
       // The amount of time spent *blocking* on main thread is the sum of all time longer than 50ms.
       // Note that this is not totally equivalent to the TBT definition since it fails to account for FCP,
       // but a majority of third-party work occurs after FCP and should yield largely similar numbers.
-      entityStats.blockingTime += Math.max(taskDuration - 50, 0);
-      entities.set(entity, entityStats);
+      urlStat.blockingTime += Math.max(taskDuration - 50, 0);
+      byURL.set(attributableURL, urlStat);
+    }
+
+    // Map each URL's stat to a particular third party entity.
+    /** @type {Map<ThirdPartyEntity, string[]>} */ const urls = new Map();
+    for (const [url, urlStat] of byURL.entries()) {
+      const entity = thirdPartyWeb.getEntity(url);
+      if (!entity) {
+        byURL.delete(url);
+        continue;
+      }
+      const entityStat = byEntity.get(entity) || {...defaultStat};
+      entityStat.transferSize += urlStat.transferSize;
+      entityStat.mainThreadTime += urlStat.mainThreadTime;
+      entityStat.blockingTime += urlStat.blockingTime;
+      byEntity.set(entity, entityStat);
+
+      const entityURLs = urls.get(entity) || [];
+      entityURLs.push(url);
+      urls.set(entity, entityURLs);
     }
 
-    return entities;
+    return {byURL, byEntity, urls};
+  }
+
+  /**
+   * @param {ThirdPartyEntity} entity
+   * @param {{byEntity: Map<ThirdPartyEntity, Summary>, byURL: Map<string, Summary>, urls: Map<ThirdPartyEntity, string[]>}} summaries
+   * @param {Summary} stats
+   * @return {Array<URLSummary>}
+   */
+  static getSubItems(entity, summaries, stats) {
+    const entityURLs = summaries.urls.get(entity) || [];
+    let items = entityURLs
+      .map(url => /** @type {URLSummary} */ ({url, ...summaries.byURL.get(url)}))
+      // Filter out any cases where byURL was missing entries.
+      .filter((stat) => stat.transferSize > 0)
+      // Sort by blocking time first, then transfer size to break ties.
+      .sort((a, b) => (b.blockingTime - a.blockingTime) || (b.transferSize - a.transferSize));
+
+    const runningSummary = {transferSize: 0, blockingTime: 0};
+    const minTransferSize = Math.max(MIN_TRANSFER_SIZE_FOR_SUBITEMS, stats.transferSize / 20);
+    const maxSubItems = Math.min(MAX_SUBITEMS, items.length);
+    const i = 0;
+    for (let i = 0;
+      i < maxSubItems && (items[i].blockingTime || items[i].transferSize > minTransferSize);
+      i++) {
+      runningSummary.transferSize += items[i].transferSize;
+      runningSummary.blockingTime += items[i].blockingTime;
+    }
+    if (!runningSummary.blockingTime || !runningSummary.transferSize) {
+      // Don't bother breaking down if there are no large resources.
+      return [];
+    }
+    // Only show the top N entries for brevity. If there is more than one remaining entry
+    // we'll replace the tail entries with single remainder entry.
+    items = items.slice(0, i);

Oh, I forgot this use of i, drat. Heh, how about storing numberOfItems on runningSummary to make it super explicit?
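Something like this, perhaps (a sketch of the suggestion, not the final code):

// Track the count on runningSummary so the slice below can't drift out of
// sync with a loop variable.
const runningSummary = {transferSize: 0, blockingTime: 0, numberOfItems: 0};
for (const item of items.slice(0, maxSubItems)) {
  if (!item.blockingTime && item.transferSize <= minTransferSize) break;
  runningSummary.transferSize += item.transferSize;
  runningSummary.blockingTime += item.blockingTime;
  runningSummary.numberOfItems++;
}
items = items.slice(0, runningSummary.numberOfItems);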

warrengm

comment created time in 4 days

push eventGoogleChrome/lighthouse

Patrick Hulce

commit sha f8eaf86f98ad04dac4c171413de89361dc6ce813

Apply suggestions from code review

Co-authored-by: Connor Clark <cjamcl@google.com>

view details

push time in 4 days

Pull request review commentGoogleChrome/lighthouse

core(tracehouse): split timeOrigin determination out of computeTraceOfTab

 class TraceProcessor {
     // Filter to just events matching the frame ID, just to make sure.
     const frameEvents = keyEvents.filter(e => e.args.frame === mainFrameIds.frameId);
 
-    // Our navStart will be the last frame navigation in the trace
-    const navigationStart = frameEvents.filter(this._isNavigationStartOfInterest).pop();
-    if (!navigationStart) throw this.createNoNavstartError();
-    const timeOriginEvt = navigationStart;
+    // Compute our time origin to use for all relative timings.
+    const timeOriginEvt = this.computeTimeOrigin(
+      {keyEvents, frameEvents, mainFrameIds},
+      timeOriginDeterminationMethod
+    );
+
+    // Compute the key frame timings for the main frame.
+    const frameTimings = this.computeKeyTimingsForFrame(frameEvents, {timeOriginEvt});
+
+    // subset all trace events to just our tab's process (incl threads other than main)

heh. this was just a move :)

patrickhulce

comment created time in 4 days

PR opened GoogleChrome/lighthouse

core(tracehouse): split timeOrigin determination out of computeTraceOfTab

Summary Another no-op tracehouse change that will make things easier for Fraggle Rock and other tracehouse consumers in the future. This PR splits the computeTraceOfTab method apart into a few components.

  • computeTimeOrigin, a way to choose how you want your time origin to be computed
  • computeKeyTimingsForFrame, the bulk of the previous trace-of-tab logic, given a specific frame and time origin
  • computeTraceEnd, a helper function to prevent that bug from sneaking into the two places we use it :)

It has one user-facing impact: a minor bug fix to our traceEnd computation. Previously we did not consider the maximum ts + dur time, so traceEnd changes by a few ms in some situations.
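Roughly, the fixed traceEnd computation looks like this (an illustrative sketch, not the verbatim source):

// The trace ends at the latest point any event *finishes*, so consider
// ts + dur rather than ts alone.
function computeTraceEnd(events, timeOriginEvt) {
  let maxTs = -Infinity;
  for (const event of events) {
    maxTs = Math.max(event.ts + (event.dur || 0), maxTs);
  }
  return {timestamp: maxTs, timing: (maxTs - timeOriginEvt.ts) / 1000};
}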

Related Issues/PRs ref https://github.com/GoogleChrome/lighthouse/issues/8984 https://github.com/GoogleChrome/lighthouse/pull/10325 https://github.com/GoogleChrome/lighthouse/issues/9519

+139 -34

0 comments

3 changed files

pr created time in 4 days

create branch GoogleChrome/lighthouse

branch : time_origin_method

created branch time in 4 days

push eventGoogleChrome/lighthouse

Connor Clark

commit sha 7a462a256865f47441ee4c2e29cae5ab823ed3b8

core(font-size): remove deprecated DOM.getFlattenedDocument (#11248)

view details

push time in 4 days

delete branch GoogleChrome/lighthouse

delete branch : rm-deprecated-getflattenddoc

delete time in 4 days

PR merged GoogleChrome/lighthouse

core(font-size): remove deprecated DOM.getFlattenedDocument cla: yes waiting4reviewer

Fixes #11210 (went with the option to resolve many nodes at once using the push-nodes-to-frontend method)
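For context, the "resolve many at once" approach looks roughly like this (a sketch assuming Lighthouse's driver.sendCommand wrapper; backendNodeIds is a hypothetical array gathered elsewhere, e.g. from a DOMSnapshot):

// Resolve many backend node IDs in a single CDP call instead of walking the
// deprecated DOM.getFlattenedDocument tree.
const {nodeIds} = await driver.sendCommand('DOM.pushNodesByBackendIdsToFrontend', {
  backendNodeIds,
});
// nodeIds[i] corresponds to backendNodeIds[i] and works with other DOM.* methods.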

Reduced the scope of the FontSize artifact by exporting only what we need, not an entire CrdpNode.

Test URL http://misc-hoten.surge.sh/lh-ui-location-font-size/ from https://github.com/GoogleChrome/lighthouse/pull/9354 shows that results are unchanged.


+178 -160

0 comments

6 changed files

connorjclark

pr closed time in 4 days

issue closedGoogleChrome/lighthouse

drop usage of deprecated DOM.getFlattenedDocument

we only use this in the font-size gatherer.

see https://chromium-review.googlesource.com/c/chromium/src/+/2335483

closed time in 4 days

connorjclark

Pull request review commentGoogleChrome/lighthouse

report: add per-resource breakdowns to the Third Party impact audit

 /**
  * @license Copyright 2017 The Lighthouse Authors. All Rights Reserved.
- * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
- * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
+ * Licensed under the Apache License, Version 2.0 (the License'); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

super nit: revert these license quotes

warrengm

comment created time in 4 days

Pull request review commentGoogleChrome/lighthouse

report: add per-resource breakdowns to the Third Party impact audit

 describe('Third party summary', () => {
         mainThreadTime: 127.15300000000003,
         blockingTime: 18.186999999999998,
         transferSize: 30827,
+        subItems: {
+          items: [
+            {
+              blockingTime: 18.186999999999998,
+              mainThreadTime: 127.15300000000003,
+              transferSize: 30827,
+              url: 'https://www.googletagmanager.com/gtm.js?id=GTM-Q5SW',
+            },
+          ],
+          type: 'subitems',
+        },
       },
       {
         entity: {
           text: 'Google Analytics',
           type: 'link',
           url: 'https://www.google.com/analytics/analytics/',
         },
-        mainThreadTime: 95.15600000000005,
+        mainThreadTime: 95.15599999999999,
         blockingTime: 0,
         transferSize: 20913,
+        subItems: {
+          items: [
+            {
+              blockingTime: 0,
+              mainThreadTime: 55.246999999999986,
+              transferSize: 12906,
+              url: 'https://www.google-analytics.com/analytics.js',
+            },
+            {
+              blockingTime: 0,
+              transferSize: 8007,
+              url: expect.any(String),
+            },
+          ],
+          type: 'subitems',
+        },
       },
     ]);
+    expect(results.details.items[1].subItems.items[1].url).toBeDisplayString('Other resources');

heh, I bet our users just love the ergonomics of working with subitems 😆

warrengm

comment created time in 4 days

Pull request review commentGoogleChrome/lighthouse

report: add per-resource breakdowns to the Third Party impact audit

(diff context: the same getSummaries/getSubItems hunk reconstructed above, here truncated at the runningSummary declaration under discussion)

🚲 🏠 runningSubitemTotals or subitemSummary maybe?

warrengm

comment created time in 4 days

Pull request review commentGoogleChrome/lighthouse

report: add per-resource breakdowns to the Third Party impact audit

(diff context: the same getSummaries/getSubItems hunk as above; this earlier revision ends with `let i = 0;` / `while (i < maxSubItems &&` in place of the for loop)

can I sell you on just making this a for loop and checking if runningSummary.transferSize === 0 instead of i === 0? :)

warrengm

comment created time in 4 days

push eventGoogleChrome/lighthouse

Snyk bot

commit sha d9b3caba902a54420b3cab4c128b69e0e0a115c0

deps(snyk): update snyk snapshot (#11046)

view details

Umar Hansa

commit sha 18b62824b154ce8d1d204a6a1279ca4e6aa3357f

core(hreflang): assert that the href is fully qualified (#11022)

view details

Paul Irish

commit sha 159cb8428cfb91452b1561ecaa0e8415e9eba742

tests(smoke): use caltrainschedule instead of polymer shop (#11052)

view details

Paul Irish

commit sha 3857461c18b2268c3dac6dc7185759088e93fe6d

docs: update architecture.md (#11040)

view details

Connor Clark

commit sha 62780a56cfd167018b55500e0ae60467776d76a3

misc(compare-runs): fix error when no lh-flags arg passed (#11015)

view details

Connor Clark

commit sha 1101b5e85e051458783fd8f0657dbac000bae49c

core(fetcher): ensure fetch doesn't cause unhandled promise (#11036)

Co-authored-by: Patrick Hulce <patrick.hulce@gmail.com>

view details

Brendan Kenny

commit sha 6c4bfeec80fcdd4f72541313bc3d994d707cbed0

misc: remove last extendedInfo in LH.Audit.Product (#11067)

view details

lemcardenas

commit sha aed2a642748c388c81549711ba2ca5cff4bb68e1

core(gather-runner): error on non-HTML (#11042)

view details

Adam

commit sha e37cd86c53a6b2c9521d8d4732b223dd48e9a8e5

docs(readme): update Foo integration (#11050)

view details

Brendan Kenny

commit sha 3acd4035e4e68d3fdd87b3e98c96ad9d4639e0f0

core(font-display): dedupe warnings by font origin (#11068)

view details

Connor Clark

commit sha de072589d07d867403adb31728210829a4c943b5

misc(release): tweaks (#11021)

view details

Emilio Garza

commit sha 170046afebec7f39913db120729e3e16dc3884e8

core(link-text): removing inicio from blocklist resolves #11026 (#11073)

view details

moli

commit sha 92996816625eb0bdf15b9b156978afedb6a69bd5

docs: fix typo in viewer readme for loading json from url (#11080)

view details

Connor Clark

commit sha 78a446e9b0fd74e3083499477d4f488a7d1ea510

core(legacy-javascript): make opportunity (#10881)

view details

Brendan Kenny

commit sha 64ac8f1c7db0fac53ab6280bc104e89ba697c0d3

tests: use latest windows image on appveyor (#11083)

view details

Connor Clark

commit sha 2a582f3f5da62626a1e9837e569b04949d23a2ff

tests(smoke): skip expectation with _chromeMajorVersion (#10976)

view details

jazyan

commit sha 74492bc37d8bea8edc8a8b33a1eff90dd3f2c8ab

i18n(import): new audit strings, KiB, and updated urls for 6.1 (#11082)

* add updated i18n strings
* remove legacy-javascript

view details

Connor Clark

commit sha 5acd30fde3055c5e623484b3648b814b028c9f40

core(is-on-https): add mixed-content resolution (#10975)

view details

Connor Clark

commit sha 95308d9b9f649f9597d1285f8cb2f105c8eb7fe5

core(cls): add back early shift events if they were ignored (#11079)

view details

Муравьёв Семён

commit sha d43a1c8e5d781856ee30e33bf9dce0b5a0754255

docs (architecture): correct link to description guidelines (#11089)

view details

push time in 4 days

push eventGoogleChrome/lighthouse

Patrick Hulce

commit sha a200502a5ab1b4be77a65d665b2f02f3bc48346d

adam feedback

view details

push time in 4 days

Pull request review commentGoogleChrome/lighthouse

core(tracehouse): add CPU trace profiler model

+/**
+ * @license Copyright 2020 The Lighthouse Authors. All Rights Reserved.
+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
+ */
+'use strict';
+
+/**
+ * @fileoverview
+ *
+ * This model converts the `Profile` and `ProfileChunk` mega trace events from the `disabled-by-default-v8.cpu_profiler`
+ * category into B/E-style trace events that main-thread-tasks.js already knows how to parse into a task tree.
+ *
+ * The CPU profiler measures where time is being spent by sampling the stack (See https://www.jetbrains.com/help/profiler/Profiling_Guidelines__Choosing_the_Right_Profiling_Mode.html
+ * for a generic description of the differences between tracing and sampling).
+ *
+ * A `Profile` event is a record of the stack that was being executed at different sample points in time.
+ * It has a structure like this:
+ *
+ *    nodes: [function A, function B, function C]
+ *    samples: [node with id 2, node with id 1, ...]
+ *    timeDeltas: [4125μs since last sample, 121μs since last sample, ...]
+ *
+ * Helpful prior art:
+ * @see https://cs.chromium.org/chromium/src/third_party/devtools-frontend/src/front_end/sdk/CPUProfileDataModel.js?sq=package:chromium&g=0&l=42
+ * @see https://github.com/v8/v8/blob/99ca333b0efba3236954b823101315aefeac51ab/tools/profile.js
+ * @see https://github.com/jlfwong/speedscope/blob/9ed1eb192cb7e9dac43a5f25bd101af169dc654a/src/import/chrome.ts#L200
+ */
+
+/**
+ * @typedef CpuProfile
+ * @property {string} id
+ * @property {number} pid
+ * @property {number} tid
+ * @property {number} startTime
+ * @property {Required<LH.TraceCpuProfile>['nodes']} nodes
+ * @property {Array<number>} samples
+ * @property {Array<number>} timeDeltas
+ */
+
+class CpuProfilerModel {
+  /**
+   * @param {CpuProfile} profile
+   */
+  constructor(profile) {
+    this._profile = profile;
+    this._nodesById = this._createNodeMap();
+    this._activeNodeArraysById = this._createActiveNodeArrays();
+  }
+
+  /**
+   * Initialization function to enable O(1) access to nodes by node ID.
+   * @return {Map<number, CpuProfile['nodes'][0]>}
+   */
+  _createNodeMap() {
+    /** @type {Map<number, CpuProfile['nodes'][0]>} */
+    const map = new Map();
+    for (const node of this._profile.nodes) {
+      map.set(node.id, node);
+    }
+
+    return map;
+  }
+
+  /**
+   * Initialization function to enable O(1) access to the set of active nodes in the stack by node ID.
+   * @return {Map<number, Array<number>>}
+   */
+  _createActiveNodeArrays() {
+    /** @type {Map<number, Array<number>>} */
+    const map = new Map();
+    /** @param {number} id @return {Array<number>} */
+    const getActiveNodes = id => {
+      if (map.has(id)) return map.get(id) || [];
+
+      const node = this._nodesById.get(id);
+      if (!node) throw new Error(`No such node ${id}`);
+      if (typeof node.parent === 'number') {
+        const array = getActiveNodes(node.parent).concat([id]);
+        map.set(id, array);
+        return array;
+      } else {
+        return [id];
+      }
+    };
+
+    for (const node of this._profile.nodes) {
+      map.set(node.id, getActiveNodes(node.id));
+    }
+
+    return map;
+  }
+
+  /**
+   * Returns all the node IDs in a stack when a specific nodeId is at the top of the stack
+   * (i.e. a stack's node ID and the node ID of all of its parents).
+   *
+   * @param {number} nodeId
+   * @return {Array<number>}
+   */
+  _getActiveNodeIds(nodeId) {
+    const activeNodeIds = this._activeNodeArraysById.get(nodeId);
+    if (!activeNodeIds) throw new Error(`No such node ID ${nodeId}`);
+    return activeNodeIds;
+  }
+
+  /**
+   * Generates the necessary B/E-style trace events for a single transition from stack A to stack B
+   * at the given timestamp.
+   *
+   * Example:
+   *
+   *    timestamp 1234
+   *    previousNodeIds 1,2,3
+   *    currentNodeIds 1,2,4
+   *
+   *    yields [end 3 at ts 1234, begin 4 at ts 1234]
+   *
+   * @param {number} timestamp
+   * @param {Array<number>} previousNodeIds
+   * @param {Array<number>} currentNodeIds
+   * @return {Array<LH.TraceEvent>}
+   */
+  _createStartEndEventsForTransition(timestamp, previousNodeIds, currentNodeIds) {
+    const startNodes = currentNodeIds
+      .filter(id => !previousNodeIds.includes(id))
+      .map(id => this._nodesById.get(id))
+      .filter(/** @return {node is CpuProfile['nodes'][0]} */ node => !!node);
+    const endNodes = previousNodeIds
+      .filter(id => !currentNodeIds.includes(id))
+      .map(id => this._nodesById.get(id))
+      .filter(/** @return {node is CpuProfile['nodes'][0]} */ node => !!node);
+
+    /** @param {CpuProfile['nodes'][0]} node @return {LH.TraceEvent} */
+    const createEvent = node => ({
+      ts: timestamp,
+      pid: this._profile.pid,
+      tid: this._profile.tid,
+      dur: 0,
+      ph: 'I',
+      name: 'FunctionCall-ProfilerModel',
+      cat: 'lighthouse',
+      args: {data: {callFrame: node.callFrame}},
+    });
+
+    /** @type {Array<LH.TraceEvent>} */
+    const startEvents = startNodes.map(createEvent).map(evt => ({...evt, ph: 'B'}));
+    /** @type {Array<LH.TraceEvent>} */
+    const endEvents = endNodes.map(createEvent).map(evt => ({...evt, ph: 'E'}));
+    return [...endEvents.reverse(), ...startEvents];
+  }
+
+  /**
+   * Creates B/E-style trace events from a CpuProfile object created by `collectProfileEvents()`
+   *
+   * @return {Array<LH.TraceEvent>}
+   */
+  createStartEndEvents() {
+    const profile = this._profile;
+    const length = profile.samples.length;
+    if (profile.timeDeltas.length !== length) throw new Error(`Invalid CPU profile length`);
+
+    /** @type {Array<LH.TraceEvent>} */
+    const events = [];
+
+    let timestamp = profile.startTime;
+    /** @type {Array<number>} */
+    let lastActiveNodeIds = [];
+    for (let i = 0; i < profile.samples.length; i++) {
+      const nodeId = profile.samples[i];
+      const timeDelta = Math.max(profile.timeDeltas[i], 1);
+      const node = this._nodesById.get(nodeId);
+      if (!node) throw new Error(`Missing node ${nodeId}`);
+
+      timestamp += timeDelta;
+      const activeNodeIds = this._getActiveNodeIds(nodeId);
+      events.push(
+        ...this._createStartEndEventsForTransition(timestamp, lastActiveNodeIds, activeNodeIds)
+      );
+      lastActiveNodeIds = activeNodeIds;
+    }
+
+    events.push(
+      ...this._createStartEndEventsForTransition(timestamp, lastActiveNodeIds, [])
+    );
+
+    return events;
+  }
+
+  /**
+   * Creates B/E-style trace events from a CpuProfile object created by `collectProfileEvents()`
+   *
+   * @param {CpuProfile} profile
+   * @return {Array<LH.TraceEvent>}
+   */
+  static createStartEndEvents(profile) {
+    const model = new CpuProfilerModel(profile);
+    return model.createStartEndEvents();
+  }
+
+  /**
+   * Merges the data of all the `ProfileChunk` trace events into a single CpuProfile object for consumption
+   * by `createStartEndEvents()`.
+   *
+   * @param {Array<LH.TraceEvent>} traceEvents
+   * @return {Array<CpuProfile>}
+   */
+  static collectProfileEvents(traceEvents) {
+    /** @type {Map<string, CpuProfile>} */
+    const profiles = new Map();
+    for (const event of traceEvents) {
+      if (event.name !== 'Profile' && event.name !== 'ProfileChunk') continue;
+      if (typeof event.id !== 'string') continue;
+
+      const cpuProfileArg = (event.args.data && event.args.data.cpuProfile) || {};
+      const timeDeltas =
+        (event.args.data && event.args.data.timeDeltas) || cpuProfileArg.timeDeltas;
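To make the conversion concrete, here is a hypothetical two-sample profile and the events the model would emit (illustrative only, not a test fixture from the PR):

// Stack is [A] at the first sample and [A -> B] at the second.
const profile = {
  id: '0x1', pid: 1, tid: 1, startTime: 1000,
  nodes: [
    {id: 1, callFrame: {functionName: 'A'}},
    {id: 2, callFrame: {functionName: 'B'}, parent: 1},
  ],
  samples: [1, 2],
  timeDeltas: [500, 250],
};

// CpuProfilerModel.createStartEndEvents(profile) yields, in order:
//   ph: 'B' for A at ts 1500   (first sample: [] -> [A])
//   ph: 'B' for B at ts 1750   (second sample: [A] -> [A, B])
//   ph: 'E' for B at ts 1750   (final transition: [A, B] -> [])
//   ph: 'E' for A at ts 1750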

Added a comment and tests for this case 👍

patrickhulce

comment created time in 4 days

Pull request review commentGoogleChrome/lighthouse

core(tracehouse): add CPU trace profiler model

(same cpu-profiler-model.js diff as above, truncated at the collectProfileEvents signature)

Sounds good, I'll codify the edge cases mentioned in https://github.com/GoogleChrome/lighthouse/pull/11072#discussion_r468824754 👍

patrickhulce

comment created time in 4 days
