Matt Robenolt mattrobenolt Sentry Carson City, NV https://mattrobenolt.com

brainsik/virtualenv-burrito 845

One command to have a working virtualenv + virtualenvwrapper environment.

disqus/django-bitfield 331

A BitField extension for Django Models

disqus/DISQUS-API-Recipes 265

Cook all the things!

disqus/django-db-utils 158

Utilities for your Django Database

disqus/disqus-python 151

Disqus API bindings for Python

disqus/disqus-php 146

Disqus API bindings for PHP

disqus/backbone.uniquemodel 95

Backbone.js plugin for ensuring unique model instances across your app

dcramer/piplint 60

Piplint validates your current environment against requirements files

disqus/channels 29

A demo of a modern forum powered by DISQUS

issue comment getsentry/sentry

Enable native Redis cluster

Yeah, this is very nontrivial to do, and we haven't had much motivation to do the work yet. We will eventually; it just hasn't been important enough for us so far.

jaketlarson

comment created time in 4 hours


push event getsentry/xds

Matt Robenolt

commit sha 318dcfcde08d1f3e3c68b17e84d218c275ce33e9

docs: namespace limitation no longer exists


push time in 2 days

delete branch getsentry/xds

delete branch : feat/all-namespaces

delete time in 2 days

push event getsentry/xds

Michal Kuffa

commit sha e3c3a6597438d8d1334bac8bac2fa1d9bc2381d5

Endpoints from all namespaces (#10) This PR removes the constraint for endpoints to be in the default namespace and collect/watch endpoints from all namespaces instead.


push time in 2 days

PR merged getsentry/xds

Endpoints from all namespaces

This PR removes the constraint for endpoints to be in the default namespace and collect/watch endpoints from all namespaces instead.

+124 -9

0 comment

5 changed files

beezz

pr closed time in 2 days


issue comment getsentry/onpremise

Short timeout in Snuba causing installation to fail.

Given that on-premise only works on a single machine, I'm not sure of a situation where you'd need a connect timeout of more than, say, 50ms. I think increasing it to 10s just masks the real problem: the system is so massively overloaded that it can't connect over localhost within 10 seconds. Even with a real network hop, there's no valid reason for it to be more than 1000ms.

alexanderilyin

comment created time in 2 days

pull request comment getsentry/sentry

fix(ads.js): Bring back ads.js for naive ad-blocker detection

Isn't this a direct conflict with the exclude_package_data entry?

BYK

comment created time in 3 days


push event getsentry/redash

Matt Robenolt

commit sha f177b7133b8ea345672a8d97348c2c07c85b5dd8

i hate it


push time in 8 days

push event getsentry/redash

Matt Robenolt

commit sha d5ca9932f41633d7ac64cdd7638c32b681da8b78

pin dependencies for python 2.7 to build


push time in 9 days

Pull request review comment getsentry/sentry

ref: sonarcloud scan rollup

 def call_script(client, keys, args):
             script[0].registered_client = None
         return script[0](keys, args, client)
+    call_script.__doc__ = u"""Executes {!r} as a Lua script on a Redis server.
+
+Takes the client to execute the script on as the first argument,
+followed by the values that will be provided as ``KEYS`` and ``ARGV``
+to the script as two sequence arguments.""".format(
+        path
+    )

Maybe a good alternative would be:

call_script.__doc__ = call_script.__doc__.format(path)

And keep the docstring in place so it stays together with the function, with the formatting applied afterward rather than inline.

joshuarli

comment created time in 9 days
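A minimal sketch of the suggestion in the review above. The function body and script path here are placeholders, not Sentry's real implementation:

```python
def call_script(client, keys, args):
    """Executes {!r} as a Lua script on a Redis server.

    Takes the client to execute the script on as the first argument,
    followed by the values that will be provided as ``KEYS`` and
    ``ARGV`` to the script as two sequence arguments."""
    return client, keys, args  # body elided in this sketch

# Format the docstring placeholder after the definition; the text
# itself stays attached to the function it documents.
path = "scripts/example.lua"  # hypothetical script path
call_script.__doc__ = call_script.__doc__.format(path)
```

Because the placeholder is `{!r}`, the formatted docstring contains the path's repr, quotes included.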


pull request comment getsentry/sentry

ref: sonarcloud scan rollup

What. Some of this stuff just looks wrong and changes behaviors.

joshuarli

comment created time in 10 days

delete branch mattrobenolt/redis-py-cluster

delete branch : fix-nameerror

delete time in 13 days

issue comment segmentio/topicctl

Support running `apply` and other commands with input from stdin

Just to add another bump for support here: a second use case for us would be a tool that generates the configs, which we could write to stdout and pipe into topicctl.

Something like: sentry config generate-topicctl | topicctl apply -

I think this would open the door to lots of possibilities for us.

mattrobenolt

comment created time in 14 days
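topicctl is written in Go, but the convention being requested is easy to sketch. Here it is in Python, with `load_config` as a hypothetical helper: a path of `-` means "read from stdin", the usual CLI idiom.

```python
import io
import sys

def load_config(path, stdin=None):
    # "-" conventionally means "read the document from stdin"
    # instead of opening a file on disk.
    if path == "-":
        return (stdin or sys.stdin).read()
    with open(path) as f:
        return f.read()
```

With that in place, `load_config("-", io.StringIO("meta: {}\n"))` returns the piped document unchanged.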

pull request comment getsentry/sentry

ref(sentry-apps): adds backfill for sentry app creator

Yeah, it'll be fine. Just hope none of them are mutated in another process, since that mutation is going to be clobbered by the .save() call.

scefali

comment created time in 16 days

pull request comment getsentry/sentry

ref(sentry-apps): adds backfill for sentry app creator

Maybe read how the queryset wrapper works. It's not just one query; it repeatedly runs the query. That's why I keep saying that with an unoptimized query the wrapper only makes things worse, and it's better not to use it for an expensive query. If the query is cheap, the queryset wrapper is a good improvement.

scefali

comment created time in 16 days
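The behavior described above — many small queries rather than one big one — is keyset pagination. A simplified stand-in (not Sentry's actual `RangeQuerySetWrapperWithProgressBar`) might look like this, with rows represented as bare ids for brevity:

```python
def range_iter(fetch, chunk_size=100):
    """Yield ids by repeatedly running a small query that resumes past
    the last id seen. ``fetch(after_id, limit)`` stands in for one SQL
    round trip (WHERE id > after_id ORDER BY id LIMIT limit). An
    expensive base query is re-paid on every chunk, which is why this
    only helps when the query itself is cheap."""
    last_id = 0
    while True:
        rows = fetch(last_id, chunk_size)
        if not rows:
            return
        for row in rows:
            yield row
        last_id = rows[-1]
```

Driving it with an in-memory list shows each chunk resuming where the previous one left off.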

pull request comment getsentry/sentry

ref(sentry-apps): adds backfill for sentry app creator

getsentry=# select count(*) from sentry_sentryapp;
 count
-------
  5323
(1 row)

getsentry=# select count(*) from sentry_sentryapp where date_deleted is NULL and creator_user_id is NULL;
 count
-------
  4617
(1 row)

I mean, honestly, just remove the Postgres filter; it'll be cheaper to iterate everything and filter in Python. The filter isn't doing much, and the query is more expensive with a filter that matches almost every row anyway.

scefali

comment created time in 16 days
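The "filter in Python" alternative from the comment above, sketched with plain dicts standing in for model rows (field names taken from the query being reviewed):

```python
def needs_backfill(row):
    # With ~87% of rows matching (4617 of 5323), the SQL WHERE clause
    # saves almost nothing; checking each row in Python is cheap.
    return row["date_deleted"] is None and row["creator_user_id"] is None
```

Each row fetched by the iterator is checked client-side, and non-matching rows are simply skipped.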

Pull request review comment getsentry/sentry

ci(gcb): Publish Py3 images with '-py3' suffix

 steps:
     docker push getsentry/sentry:$COMMIT_SHA
     docker tag us.gcr.io/$PROJECT_ID/sentry:$COMMIT_SHA getsentry/sentry:latest
     docker push getsentry/sentry:latest
+- name: 'gcr.io/cloud-builders/docker'
+  id: docker-push-py3
+  waitFor:
+    - e2e-test-py3
+  secretEnv: ['DOCKER_PASSWORD']
+  entrypoint: 'bash'
+  args:
+  - '-e'
+  - '-c'
+  - |
+    # Only push to Docker Hub from master
+    [ "$BRANCH_NAME" != "master" ] && exit 0
+    # Need to pull the image first due to Kaniko
+    docker pull us.gcr.io/$PROJECT_ID/sentry:$COMMIT_SHA-py3
+    echo "$$DOCKER_PASSWORD" | docker login --username=sentrybuilder --password-stdin
+    docker tag us.gcr.io/$PROJECT_ID/sentry:$COMMIT_SHA getsentry/sentry:$SHORT_SHA-py3

Then I clearly am not understanding how any of this works, haha

BYK

comment created time in 16 days


Pull request review comment getsentry/sentry

ci(gcb): Publish Py3 images with '-py3' suffix

 steps:
     docker push getsentry/sentry:$COMMIT_SHA
     docker tag us.gcr.io/$PROJECT_ID/sentry:$COMMIT_SHA getsentry/sentry:latest
     docker push getsentry/sentry:latest
+- name: 'gcr.io/cloud-builders/docker'
+  id: docker-push-py3
+  waitFor:
+    - e2e-test-py3
+  secretEnv: ['DOCKER_PASSWORD']
+  entrypoint: 'bash'
+  args:
+  - '-e'
+  - '-c'
+  - |
+    # Only push to Docker Hub from master
+    [ "$BRANCH_NAME" != "master" ] && exit 0
+    # Need to pull the image first due to Kaniko
+    docker pull us.gcr.io/$PROJECT_ID/sentry:$COMMIT_SHA-py3
+    echo "$$DOCKER_PASSWORD" | docker login --username=sentrybuilder --password-stdin
+    docker tag us.gcr.io/$PROJECT_ID/sentry:$COMMIT_SHA getsentry/sentry:$SHORT_SHA-py3

Nevermind, I see why.

BYK

comment created time in 16 days


Pull request review comment getsentry/sentry

ci(gcb): Publish Py3 images with '-py3' suffix

 steps:
     docker push getsentry/sentry:$COMMIT_SHA
     docker tag us.gcr.io/$PROJECT_ID/sentry:$COMMIT_SHA getsentry/sentry:latest
     docker push getsentry/sentry:latest
+- name: 'gcr.io/cloud-builders/docker'
+  id: docker-push-py3
+  waitFor:
+    - e2e-test-py3
+  secretEnv: ['DOCKER_PASSWORD']
+  entrypoint: 'bash'
+  args:
+  - '-e'
+  - '-c'
+  - |
+    # Only push to Docker Hub from master
+    [ "$BRANCH_NAME" != "master" ] && exit 0
+    # Need to pull the image first due to Kaniko
+    docker pull us.gcr.io/$PROJECT_ID/sentry:$COMMIT_SHA-py3
+    echo "$$DOCKER_PASSWORD" | docker login --username=sentrybuilder --password-stdin
+    docker tag us.gcr.io/$PROJECT_ID/sentry:$COMMIT_SHA getsentry/sentry:$SHORT_SHA-py3

Doesn't $COMMIT_SHA need a py3 suffix here?

BYK

comment created time in 16 days


Pull request review comment getsentry/sentry

ref(sentry-apps): adds backfill for sentry app creator

+# -*- coding: utf-8 -*-
+# Generated by Django 1.11.29 on 2020-10-06 17:57
+from __future__ import unicode_literals
+from sentry.utils.query import RangeQuerySetWrapperWithProgressBar
+
+from django.db import migrations
+
+def backfill_one(audit_log_entry, SentryApp):
+    name = audit_log_entry.data.get("sentry_app")
+    if name:
+        # find the sentry app for that org, name, that's not deleted
+        sentry_app = SentryApp.objects.filter(
+            name=name,
+            owner=audit_log_entry.organization,
+            date_deleted__isnull=True,
+            creator_user__isnull=True,
+        ).first()
+        # if there is a match and the user exists,
+        # update with the creator
+        user = audit_log_entry.actor
+        if sentry_app and user:
+            sentry_app.creator_user = user
+            sentry_app.creator_label = user.email or user.username
+            sentry_app.save()
+
+
+def backfill_sentry_app_creator(apps, schema_editor):
+    """
+    Backills the creator fields of SentryApp from
+    the audit log table
+    """
+    SentryApp = apps.get_model("sentry", "SentryApp")
+    AuditLogEntry = apps.get_model("sentry", "AuditLogEntry")
+
+    queryset = AuditLogEntry.objects.filter(event=113)  # sentry app add

The event=113 bit isn't going to help anything, FWIW, but the organization_id bit would be better. So yeah, I think tackling it from the other side and just getting AuditLogEntry records for the relevant org ids would be more efficient.

scefali

comment created time in 16 days


Pull request review comment getsentry/sentry

ref(sentry-apps): adds backfill for sentry app creator

+# -*- coding: utf-8 -*-
+# Generated by Django 1.11.29 on 2020-10-06 17:57
+from __future__ import unicode_literals
+from sentry.utils.query import RangeQuerySetWrapperWithProgressBar
+
+from django.db import migrations
+
+def backfill_one(audit_log_entry, SentryApp):
+    name = audit_log_entry.data.get("sentry_app")
+    if name:
+        # find the sentry app for that org, name, that's not deleted
+        sentry_app = SentryApp.objects.filter(
+            name=name,
+            owner=audit_log_entry.organization,
+            date_deleted__isnull=True,
+            creator_user__isnull=True,
+        ).first()
+        # if there is a match and the user exists,
+        # update with the creator
+        user = audit_log_entry.actor
+        if sentry_app and user:
+            sentry_app.creator_user = user
+            sentry_app.creator_label = user.email or user.username
+            sentry_app.save()
+
+
+def backfill_sentry_app_creator(apps, schema_editor):
+    """
+    Backills the creator fields of SentryApp from
+    the audit log table
+    """
+    SentryApp = apps.get_model("sentry", "SentryApp")
+    AuditLogEntry = apps.get_model("sentry", "AuditLogEntry")
+
+    queryset = AuditLogEntry.objects.filter(event=113)  # sentry app add

There's no index from what I can tell, so having the filter here is pretty detrimental. It has to be removed, with the filtering done in Python; we have to iterate all rows in the table anyway.

scefali

comment created time in 16 days


Pull request review comment getsentry/sentry

ref(sentry-apps): adds backfill for sentry app creator

+# -*- coding: utf-8 -*-
+# Generated by Django 1.11.29 on 2020-10-06 17:57
+from __future__ import unicode_literals
+from sentry.utils.query import RangeQuerySetWrapperWithProgressBar
+
+from django.db import migrations
+
+def backfill_one(audit_log_entry, SentryApp):
+    name = audit_log_entry.data.get("sentry_app")
+    if name:
+        # find the sentry app for that org, name, that's not deleted
+        sentry_app = SentryApp.objects.filter(
+            name=name,
+            owner=audit_log_entry.organization,

Use owner_id=audit_log_entry.organization_id to avoid a query.

scefali

comment created time in 16 days


Pull request review comment getsentry/sentry

ref(sentry-apps): adds backfill for sentry app creator

+# -*- coding: utf-8 -*-
+# Generated by Django 1.11.29 on 2020-10-06 17:57
+from __future__ import unicode_literals
+from sentry.utils.query import RangeQuerySetWrapperWithProgressBar
+
+from django.db import migrations
+
+def backfill_one(audit_log_entry, SentryApp):
+    name = audit_log_entry.data.get("sentry_app")
+    if name:
+        # find the sentry app for that org, name, that's not deleted
+        sentry_app = SentryApp.objects.filter(
+            name=name,
+            owner=audit_log_entry.organization,
+            date_deleted__isnull=True,
+            creator_user__isnull=True,
+        ).first()

Please don't use .first() ever. It's not what you want and will make the query slower.

scefali

comment created time in 16 days

Pull request review comment getsentry/sentry

ref(sentry-apps): adds backfill for sentry app creator

+# -*- coding: utf-8 -*-
+# Generated by Django 1.11.29 on 2020-10-06 17:57
+from __future__ import unicode_literals
+from sentry.utils.query import RangeQuerySetWrapperWithProgressBar
+
+from django.db import migrations
+
+def backfill_one(audit_log_entry, SentryApp):
+    name = audit_log_entry.data.get("sentry_app")
+    if name:
+        # find the sentry app for that org, name, that's not deleted
+        sentry_app = SentryApp.objects.filter(
+            name=name,
+            owner=audit_log_entry.organization,
+            date_deleted__isnull=True,
+            creator_user__isnull=True,
+        ).first()
+        # if there is a match and the user exists,
+        # update with the creator
+        user = audit_log_entry.actor
+        if sentry_app and user:
+            sentry_app.creator_user = user
+            sentry_app.creator_label = user.email or user.username
+            sentry_app.save()
+
+
+def backfill_sentry_app_creator(apps, schema_editor):
+    """
+    Backills the creator fields of SentryApp from
+    the audit log table
+    """
+    SentryApp = apps.get_model("sentry", "SentryApp")
+    AuditLogEntry = apps.get_model("sentry", "AuditLogEntry")
+
+    queryset = AuditLogEntry.objects.filter(event=113)  # sentry app add

Remove this filter unless you know for a fact there's an index on this. Otherwise, it'll take forever to run.

scefali

comment created time in 16 days


Pull request review comment getsentry/sentry-data-schemas

feat(packaging) Creates a python package to simplify importing the schema in python projects.

+import os
+import setuptools
+import shutil
+
+# Copies the schema in the module so that setuptools is able to
+# find the file and add it to the package.
+if os.path.isfile("../relay/event.schema.json"):
+    shutil.copyfile(
+        "../relay/event.schema.json",
+        "./sentry_data_schemas/event.schema.json"
+    )
+
+with open("README.md", "r") as fh:
+    long_description = fh.read()
+
+setuptools.setup(
+    name="sentry-data-schemas",
+    version="0.0.1",
+    author="Sentry",
+    license="BSL-1.1",
+    author_email="hello@sentry.io",

I think we have oss@sentry.io, @BYK?

fpacifici

comment created time in 16 days

Pull request review comment getsentry/sentry-data-schemas

feat(packaging) Creates a python package to simplify importing the schema in python projects.

+import os
+import setuptools
+import shutil
+
+# Copies the schema in the module so that setuptools is able to
+# find the file and add it to the package.
+if os.path.isfile("../relay/event.schema.json"):
+    shutil.copyfile(
+        "../relay/event.schema.json",
+        "./sentry_data_schemas/event.schema.json"
+    )
+
+with open("README.md", "r") as fh:
+    long_description = fh.read()
+
+setuptools.setup(
+    name="sentry-data-schemas",
+    version="0.0.1",
+    author="Sentry",
+    license="BSL-1.1",

Make sure the license is bumped here to match whatever y'all end up doing.

fpacifici

comment created time in 16 days

Pull request review comment getsentry/sentry-data-schemas

feat(packaging) Creates a python package to simplify importing the schema in python projects.

+import os
+import setuptools
+import shutil
+
+# Copies the schema in the module so that setuptools is able to
+# find the file and add it to the package.
+if os.path.isfile("../relay/event.schema.json"):
+    shutil.copyfile(
+        "../relay/event.schema.json",
+        "./sentry_data_schemas/event.schema.json"
+    )
+
+with open("README.md", "r") as fh:
+    long_description = fh.read()
+
+setuptools.setup(
+    name="sentry-data-schemas",
+    version="0.0.1",
+    author="Sentry",
+    license="BSL-1.1",
+    author_email="hello@sentry.io",
+    description="Sentry shared data schemas",
+    long_description=long_description,
+    long_description_content_type="text/markdown",
+    url="https://github.com/getsentry/sentry-data-schemas",
+    packages=setuptools.find_packages(),
+    include_package_data=True,
+    install_requires=[
+        "jsonschema-typed-v2==0.8.0"

I'd suggest making this less strict since it's a library. If we decide to bump to, say, 0.8.1 in sentry core, we'd have to bump here as well and tag a new release. I assume we can just leave this without any version qualifier, or at least use a range that's safe within semver.

fpacifici

comment created time in 16 days
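One hedged way to express the range suggested above (`~=` is pip's "compatible release" specifier; whether the 0.8.x band is the right choice is for the package authors to decide):

```python
setuptools.setup(
    # ...
    install_requires=[
        # Any 0.8.x release is acceptable, so a patch bump in
        # sentry core doesn't force a re-release of this package.
        "jsonschema-typed-v2~=0.8.0",
    ],
)
```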

Pull request review comment getsentry/sentry-data-schemas

feat(packaging) Creates a python package to simplify importing the schema in python projects.

+import os
+import setuptools
+import shutil
+
+# Copies the schema in the module so that setuptools is able to
+# find the file and add it to the package.
+if os.path.isfile("../relay/event.schema.json"):
+    shutil.copyfile(
+        "../relay/event.schema.json",
+        "./sentry_data_schemas/event.schema.json"
+    )
+
+with open("README.md", "r") as fh:
+    long_description = fh.read()
+
+setuptools.setup(
+    name="sentry-data-schemas",
+    version="0.0.1",
+    author="Sentry",
+    license="BSL-1.1",
+    author_email="hello@sentry.io",
+    description="Sentry shared data schemas",
+    long_description=long_description,
+    long_description_content_type="text/markdown",
+    url="https://github.com/getsentry/sentry-data-schemas",
+    packages=setuptools.find_packages(),
+    include_package_data=True,
+    install_requires=[
+        "jsonschema-typed-v2==0.8.0"
+    ],
+    python_requires='>=3.8',

I'm not going to be consuming this, but this seems pretty restrictive if we want to use it elsewhere, considering we hardly depend on Python 3.8 (only in Snuba), and neither our SDKs nor, for a while yet, our server could enforce this.

fpacifici

comment created time in 16 days


pull request comment Grokzen/redis-py-cluster

fix(client): Possible UnboundLocalError within execute finally block

The failure seems to be from a flaky test.

mattrobenolt

comment created time in 16 days


PR opened Grokzen/redis-py-cluster

fix(client): Possible UnboundLocalError within execute finally block

I'm not sure which cases trigger this, but within the finally block there was previously no guarantee that the variable connection was even defined.

In our case, we hit a SlotNotCoveredError, which cascaded into an UnboundLocalError from within the finally block.

This change explicitly declares the connection variable so it can safely be checked later if it needs to be freed.

+5 -3

0 comment

1 changed file

pr created time in 17 days

create branch mattrobenolt/redis-py-cluster

branch : fix-nameerror

created branch time in 17 days


Pull request review comment getsentry/sentry

feat(dev): Add pretty formatting option to devserver

+# -*- coding: utf8 -*-
+
+from __future__ import absolute_import, print_function
+
+import re
+
+# Sentry colors taken from our design system. Might not look good on all
+# termianl themes tbh
+COLORS = {
+    "white": (255, 255, 255),
+    "green": (77, 199, 13),
+    "orange": (255, 119, 56),
+    "red": (250, 71, 71),
+}
+
+SERVICE_COLORS = {
+    "server": (108, 95, 199),
+    "worker": (255, 194, 39),
+    "webpack": (61, 116, 219),
+    "cron": (255, 86, 124),
+    "relay": (250, 71, 71),
+}
+
+
+def colorize_code(pattern):
+    code = int(pattern.group("code"))
+    method = pattern.group("method")
+
+    style = (COLORS["red"], COLORS["white"])
+
+    if code >= 200 and code < 300:
+        style = (COLORS["green"], COLORS["white"])
+    if code >= 400 and code < 500:
+        style = (COLORS["orange"], COLORS["white"])
+    if code >= 500:
+        style = (COLORS["red"], COLORS["white"])
+
+    return u"{bg}{fg} {code} {reset} {method:4}".format(
+        bg="\x1b[48;2;%s;%s;%sm" % (style[0]),
+        fg="\x1b[38;2;%s;%s;%sm" % (style[1]),
+        reset="\x1b[0m",
+        code=code,
+        method=method,
+    )
+
+
+def colorize_reboot(pattern):
+    return u"{bg}{fg}[ RELOADING ]{reset} {info_fg}{info}".format(
+        bg="\x1b[48;2;%s;%s;%sm" % COLORS["red"],
+        fg="\x1b[38;2;%s;%s;%sm" % COLORS["white"],
+        info_fg="\x1b[38;2;%s;%s;%sm" % COLORS["white"],
+        reset="\x1b[0m",
+        info=pattern.group(0),
+    )
+
+
+def colorize_booted(pattern):
+    return u"{bg}{fg}[ UWSGI READY ]{reset} {info_fg}{info}".format(
+        bg="\x1b[48;2;%s;%s;%sm" % COLORS["green"],
+        fg="\x1b[38;2;%s;%s;%sm" % COLORS["white"],
+        info_fg="\x1b[38;2;%s;%s;%sm" % COLORS["white"],
+        reset="\x1b[0m",
+        info=pattern.group(0),
+    )
+
+
+def colorize_traceback(pattern):
+    return u"{bg}  {reset} {info_fg}{info}".format(
+        bg="\x1b[48;2;%s;%s;%sm" % COLORS["red"],
+        info_fg="\x1b[38;2;%s;%s;%sm" % COLORS["red"],
+        reset="\x1b[0m",
+        info=pattern.group(0),
+    )
+
+
+def monkeypatch_honcho_write(self, message):
+    name = message.name if message.name is not None else ""
+    name = name.rjust(self.width)
+
+    if isinstance(message.data, bytes):
+        string = message.data.decode("utf-8", "replace")
+    else:
+        string = message.data
+
+    # Colorize requests
+    string = re.sub(
+        r"(?P<method>GET|POST|PUT|HEAD|DELETE) (?P<code>[0-9]{3})", colorize_code, string
+    )
+    # Colorize reboots
+    string = re.sub(r"Gracefully killing worker [0-9]+ .*\.\.\.", colorize_reboot, string)
+    # Colorize reboot complete
+    string = re.sub(r"WSGI app [0-9]+ \(.*\) ready in [0-9]+ seconds .*", colorize_booted, string)
+    # Mark python tracebacks
+    string = re.sub(r"Traceback \(most recent call last\).*", colorize_traceback, string)

Python caches them internally.

EvanPurkhiser

comment created time in 17 days
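The "Python caches them internally" remark refers to CPython's module-level pattern cache: string patterns passed to re.sub and friends are compiled once and reused, so pre-compiling a fixed set of patterns rarely matters. The caching is observable:

```python
import re

# Both calls hit the module-level pattern cache, so the same compiled
# Pattern object comes back (a CPython implementation detail).
first = re.compile(r"(?P<code>[0-9]{3})")
second = re.compile(r"(?P<code>[0-9]{3})")
cached = first is second
```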


Pull request review comment getsentry/sentry

feat(dev): Add pretty formatting option to devserver

 def devserver(
     # A better log-format for local dev when running through honcho,
     # but if there aren't any other daemons, we don't want to override.
     if daemons:
-        uwsgi_overrides["log-format"] = '"%(method) %(status) %(uri) %(proto)" %(size)'
+        uwsgi_overrides["log-format"] = "%(method) %(status) %(uri) %(proto) %(size)"
     else:
-        uwsgi_overrides["log-format"] = '[%(ltime)] "%(method) %(status) %(uri) %(proto)" %(size)'
+        uwsgi_overrides["log-format"] = "[%(ltime)] %(method) %(status) %(uri) %(proto) %(size)"

It was intended to mirror the Apache log format. The chunk in quotes is the HTTP "request line".

EvanPurkhiser

comment created time in 17 days


Pull request review comment getsentry/sentry

ci(gcb): Fix test.sh exit code on fail

 steps:
     # The following trick is from https://stackoverflow.com/a/52400857/90297 with great gratuity
     echo '{"version": "3.4", "networks":{"default":{"external":{"name":"cloudbuild"}}}}' > docker-compose.override.yml
     ./install.sh
-    ./test.sh || docker-compose logs nginx web relay
+    if ! ./test.sh; then
+      echo "Test failed.";
+      docker-compose ps;
+      docker-compose logs nginx web relay;
+      exit -1;

Do the docs suggest exiting -1? That isn't even a valid exit code, and it actually wraps around to mean 255.

BYK

comment created time in 17 days


Pull request review comment getsentry/sentry

build(docker): Start building Python 3 images

 steps:
   ]
   timeout: 180s
 - name: 'us.gcr.io/$PROJECT_ID/sentry-builder:$COMMIT_SHA'
+  id: builder-run
   env: [
     'SOURCE_COMMIT=$COMMIT_SHA'
   ]
   timeout: 360s
 - name: 'gcr.io/kaniko-project/executor:v0.22.0'
+  id: runtime-image-py2
+  waitFor:
+    - builder-run
   args: [
     '--cache=true',
     '--build-arg', 'SOURCE_COMMIT=$COMMIT_SHA',
     '--destination=us.gcr.io/$PROJECT_ID/sentry:$COMMIT_SHA',
     '-f', './docker/Dockerfile'
   ]
   timeout: 300s
-# Smoke tests
+- name: 'gcr.io/kaniko-project/executor:v0.22.0'
+  id: runtime-image-py3
+  waitFor:
+    - builder-run
+  args: [
+    '--cache=true',
+    '--build-arg', 'SOURCE_COMMIT=$COMMIT_SHA',
+    '--build-arg', 'PY_VER=3.6',

+1 here; let's keep this strictly pinned to match. We can just bump the patch release, but I don't fully trust Docker Python images without strict pinning. Things like pip get upgraded in a new release, which is opaque to us until it breaks.

BYK

comment created time in 20 days


issue comment python-diamond/Diamond

Concern around lack of activity

Datadog

DStape

comment created time in 20 days

issue comment getsentry/sentry

Tracking which DSN generated an Event

We've tagged events with key_id in the raw event payload for a while now. We don't surface it in the UI, but the data has been there since https://github.com/getsentry/sentry/pull/6488

To me, that was the PR meant to close this issue. Exposing it in the UI would be a further product decision.

mattrobenolt

comment created time in 22 days


Pull request review comment getsentry/sentry

fix(files) Catch integrity errors caused by concurrency

 def delete_unreferenced_blobs(blob_ids):
         try:
             # Need to delete the record to ensure django hooks run.
             FileBlob.objects.get(id=blob_id).delete()
-        except FileBlob.DoesNotExist:
+        except (IntegrityError, FileBlob.DoesNotExist):
try:
  fb = FileBlob.objects.get(id=file_blob_id)
except FileBlob.DoesNotExist:
  pass
else:
  try:
    with transaction.atomic():
      fb.delete()
  except IntegrityError:
    pass

Would be the slightly better pattern here.

markstory

comment created time in a month


Pull request review comment getsentry/sentry

fix(files) Catch integrity errors caused by concurrency

 def delete_unreferenced_blobs(blob_ids):
         try:
             # Need to delete the record to ensure django hooks run.
             FileBlob.objects.get(id=blob_id).delete()
-        except FileBlob.DoesNotExist:
+        except (IntegrityError, FileBlob.DoesNotExist):

You might want to move the other database operations out of this transaction now, just to limit its scope, but it's probably not a big deal relatively.

markstory

comment created time in a month


Pull request review comment getsentry/sentry

fix(files) Catch integrity errors caused by concurrency

 def delete_unreferenced_blobs(blob_ids):
         try:
             # Need to delete the record to ensure django hooks run.
             FileBlob.objects.get(id=blob_id).delete()
-        except FileBlob.DoesNotExist:
+        except (IntegrityError, FileBlob.DoesNotExist):

You need to wrap this in an atomic block then, otherwise it’ll taint the transaction.

markstory

comment created time in a month


pull request comment getsentry/sentry

fix(api): Organization Slug Trailing Space

Yeah, it's more that an UPDATE in a single query over the entire organization table is going to lock the whole table. There's no index on what you're filtering on, so it's scanning every row.

Even if it's slower, it's better and safer here to use the ranged wrapper to iterate every row and do the check in Python.

mgaeta

comment created time in a month
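A sketch of the per-row approach described above, with hypothetical (id, slug) pairs standing in for organization rows fed by a ranged iterator:

```python
def rows_to_fix(rows):
    # rows is any iterable of (id, slug) pairs, e.g. produced by a
    # ranged queryset iterator. Only rows with stray whitespace are
    # returned, each to be saved individually, instead of issuing
    # one table-scanning, table-locking UPDATE.
    return [(pk, slug.strip()) for pk, slug in rows if slug != slug.strip()]
```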


pull request comment docker-library/docs

Add Sentry deprecation notice

I guess it's also worth adding that the support ticket I submitted did not get a response.

BYK

comment created time in a month

issue comment segmentio/topicctl

Support running `apply` and other commands with input from stdin

Yeah, for sure. And I didn't want to start a large refactor to support this without at least some communication first. It's easy enough to write to disk as an intermediate solution.

mattrobenolt

comment created time in a month

pull request comment certifi/gocertifi

Make raw PEM cert bundle string public

Maybe this would be better as a function that returns the private variable instead, just in case.

func Certificates() string {
  return pemcerts
}

Or similar. I'm not crazy about the name.

yosh

comment created time in a month

PR closed certifi/gocertifi

certifi: 2020-09-08

Update certif.go to 2020-09-08.

Run

$ date +%F-%Xz
2020-09-08-04:37:04+0900
$ go generate
+128 -149

1 comment

1 changed file

zchee

pr closed time in a month

pull request comment certifi/gocertifi

certifi: 2020-09-08

I just re-bumped it to the latest as of today, just in case.

Thanks! https://github.com/certifi/gocertifi/commit/2c3bb06c6054e133430498817d26ac003d08f020

zchee

comment created time in a month

created tag certifi/gocertifi

tag 2020.09.22

(Go Distribution) A carefully curated collection of Root Certificates for validating the trustworthiness of SSL certificates while verifying the identity of TLS hosts.

created time in a month

push event certifi/gocertifi

Matt Robenolt

commit sha 2c3bb06c6054e133430498817d26ac003d08f020

2020.09.22


push time in a month

issue opened segmentio/topicctl

Support running `apply` and other commands with input from stdin

I'm working towards building something that leverages topicctl for our Kafka cluster in Kubernetes. My plan is to have a simple wrapper around topicctl that listens to a Kubernetes ConfigMap containing cluster configs; on ConfigMap changes, it'd apply them to the cluster.

To better support that, it'd be nice to be able to pass everything through stdin. I looked briefly at the code thinking it'd be simple, but there are some assumptions about file system layout (discovering the cluster config.yaml), and multiple YAML documents aren't handled (#26).

I think ideally it'd be nice to pass one big document through stdin that contains the cluster config as well as the topics. Then our loop in Kubernetes would be:

while:
  configmap <- configmap.wait_for_change()
  configmap -> subprocess topicctl apply -

Am I just off base in wanting to use topicctl this way? It seems like a pretty obvious thing for us to do. We can work around the issues with an intermediate step that writes all the configs to disk first and then calls topicctl apply topics/*.yml or whatever, but that's an unnecessary step if we can pass through stdin.

Let me know your thoughts, we haven't started building anything for this yet, just brainstorming.

created time in a month
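The watch-and-apply loop in the issue above, sketched concretely in Python. The command here is a stand-in that just drains stdin; the real call would be `["topicctl", "apply", "-"]`, and the ConfigMap watch is assumed to happen elsewhere:

```python
import subprocess
import sys

def apply_config(config, command=None):
    # Pipe the rendered config document to the tool's stdin.
    command = command or [sys.executable, "-c", "import sys; sys.stdin.read()"]
    proc = subprocess.run(command, input=config.encode())
    return proc.returncode
```

Each time the ConfigMap changes, the wrapper would render the combined document and call `apply_config` with it.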


pull request comment getsentry/onpremise

fix(kafka): Enable auto topic creation

Yeah, I think this is all just fallout from upgrading the Kafka image beyond what we use, since apparently we did rely on this implicit behavior. We apparently don't maintain a canonical list of needed topics anywhere outside of Snuba yet.

BYK

comment created time in a month

issue opened segmentio/topicctl

Support for multiple yaml documents in topic config

It'd be nice to be able to apply a single file, say topics.yaml, containing multiple topics separated by the YAML document separator.

Such as:

meta:
  name: topic-default
  cluster: local-cluster
  environment: local-env
  region: local-region
  description: |
    Topic that uses default (any) strategy for assigning partition brokers.

spec:
  partitions: 3
  replicationFactor: 2
  retentionMinutes: 100
  placement:
    strategy: any
  settings:
    cleanup.policy: delete
    max.message.bytes: 5542880
---
meta:
  name: topic-static
  cluster: local-cluster
  environment: local-env
  region: local-region
  description: |
    Topic that uses static broker assignments.

spec:
  partitions: 10
  replicationFactor: 2
  retentionMinutes: 290
  placement:
    strategy: static
    staticAssignments:
      - [3, 4]
      - [5, 6]
      - [2, 1]
      - [2, 3]
      - [5, 1]
      - [2, 1]
      - [1, 3]
      - [2, 4]
      - [1, 3]
      - [2, 4]
---
meta:
  name: topic-in-rack
  cluster: local-cluster
  environment: local-env
  region: local-region
  description: |
    Topic that uses in-rack strategy for assigning brokers.

spec:
  partitions: 9
  replicationFactor: 2
  retentionMinutes: 100
  placement:
    strategy: in-rack

created time in a month

pull request comment getsentry/onpremise

fix(kafka): Enable auto topic creation

On one hand I'm OK with this; on the other hand, I'm not sure if this falls under the plans @getsentry/sns had to better handle migrations? A while back I added stuff in Snuba to explicitly define and manage topics.

There's also this tool that @b1naryth1ef was going to look into for Single Tenant, but we haven't investigated yet. https://github.com/segmentio/topicctl

I'm just not sure auto topic creation is a good long-term answer; if it were, I feel like we'd just do this in SaaS and ST and skip trying to manage anything manually.

BYK

comment created time in a month


pull request comment getsentry/sentry

Revert "fix(jira): adds content security policy"

Why not? Any context on what the issue was would be useful.

scefali

comment created time in a month


pull request comment getsentry/sentry

fix(jira): adds content security policy

Good clarification with a comment.

scefali

comment created time in a month
