Harry Waye (hazzadous), PostHog, London

hazzadous/cloth 1

EC2 tasks for Fabric

hazzadous/apex 0

Build, deploy, and manage AWS Lambda functions with ease (with Go support!).

hazzadous/apps-script-samples 0

Apps Script samples for Google Workspace products.

hazzadous/cabot 0

Self-hosted, easily-deployable monitoring and alerts service - like a lightweight PagerDuty

hazzadous/charts 0

Curated applications for Kubernetes

hazzadous/ClickHouse 0

ClickHouse® is a free analytics DBMS for big data

Pull request review comment PostHog/posthog

fix(retention): actor bug

 def retention(self, request: request.Request) -> response.Response:
                 {"message": "Could not retrieve team", "detail": "Could not validate team associated with user"},
                 status=400,
             )
-        filter = RetentionFilter(request=request)
+        filter = RetentionFilter(request=request, team=team)

non-blocking

We could make this non-optional; that way we'd catch any other places we're missing the arg. Unless of course it's not required. Also, I guess team is actually derived from request, so we could enforce the desired invariant by having RetentionFilter use the request-derived team directly.
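To illustrate the suggestion, a minimal, self-contained sketch of that invariant; the attribute lookup on the request is a placeholder, not how PostHog actually resolves the team:

# Hypothetical sketch only: team is always required, and when omitted it is
# derived from the request, so a call site that forgets the argument still
# ends up with the right team (or fails loudly).
class RetentionFilter:
    def __init__(self, request, team=None):
        # Placeholder for however the real code derives the team from the request.
        self.team = team if team is not None else getattr(request, "team", None)
        if self.team is None:
            raise ValueError("RetentionFilter could not determine a team")
        self.request = request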

EDsCODE

comment created 5 hours ago


Pull request review comment PostHog/charts-clickhouse

Make StorageClass installation optional in AWS/GCP

+import pytest
+
+from utils import cleanup_k8s, get_clickhouse_statefulset_spec, helm_install, wait_for_pods_to_be_ready
+
+HELM_INSTALL_CMD = """
+helm upgrade \
+    --install \
+    -f ../../ci/values/kubetest/test_clickhouse_persistence_disabled.yaml \
+    --timeout 30m \
+    --create-namespace \
+    --namespace posthog \
+    posthog ../../charts/posthog \
+    --wait-for-jobs \
+    --wait
+"""
+
+
+@pytest.fixture
+def setup(kube):
+    cleanup_k8s()
+    helm_install(HELM_INSTALL_CMD)
+    wait_for_pods_to_be_ready(kube)
+
+
+def test_volume_claim(setup, kube):
+    statefulset_spec = get_clickhouse_statefulset_spec(kube)
+
+    # Verify the spec.volumes configuration
+    volumes = statefulset_spec.template.spec.volumes
+    assert all(volume.config_map is not None for volume in volumes), "All the spec.volumes should be of type config_map"

Yeah, I'm not totally up to speed here, but I think validating that there is no PVC would be clearer?

Anyway, it's not a big deal
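For what it's worth, a rough sketch of what that check could look like, reusing the helpers and fixtures from the diff above; the attribute names follow the Kubernetes client objects, and this is only an illustration of the idea, not code from the PR:

from utils import get_clickhouse_statefulset_spec


def test_no_persistent_volume_claim(setup, kube):
    statefulset_spec = get_clickhouse_statefulset_spec(kube)

    # With persistence disabled there should be no PVC templates on the statefulset...
    assert not statefulset_spec.volume_claim_templates

    # ...and none of the pod volumes should be backed by a PVC.
    volumes = statefulset_spec.template.spec.volumes
    assert all(volume.persistent_volume_claim is None for volume in volumes)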

guidoiaquinti

comment created a day ago

Pull request review comment PostHog/posthog.com

self-host/deploy/upgrade-notes.md - add update info for v10

 If you didn’t make any customization to those, there’s nothing you need to do
 - drops support for Kubernetes 1.19 as it has reached end of life on 2021-10-28
 - adds support for Kubernetes 1.23 released on 2021-12-07
+
+### Upgrading from 9.x.x
+
+10.0.0 introduces two major changes:
+
+1. as of today we've been including additional `StorageClass` definition into our Helm chart when installing it on AWS or GCP platforms. Starting from this release, we will not do that anymore and we will rely on the cluster default storage class. If you still want to install those additional storage classes, simply set `installCustomStorageClass: true` in your `values.yaml`. If you are planning to use the default storage class, make sure you are running with our [requirement settings](https://posthog.com/docs/self-host/deploy/aws#cluster-requirements) (`allowVolumeExpansion` set to `true` and `reclaimPolicy` set to `Retain`).
+
+2. we have renamed few chart inputs in order to reduce confusion and align our naming convention to the industry standards:
+
+    - `clickhouseOperator.enabled` -> `clickhouse.enabled`

Can we raise an error within helm if we have any clickhouseOperator keys @guidoiaquinti?

guidoiaquinti

comment created a day ago


Pull request review comment PostHog/charts-clickhouse

Make StorageClass installation optional in AWS/GCP

+import pytest
+
+from utils import cleanup_k8s, get_clickhouse_statefulset_spec, helm_install, wait_for_pods_to_be_ready
+
+HELM_INSTALL_CMD = """
+helm upgrade \
+    --install \
+    -f ../../ci/values/kubetest/test_clickhouse_persistence_disabled.yaml \
+    --timeout 30m \
+    --create-namespace \
+    --namespace posthog \

Ah ok cool, in my test I did use a non-posthog namespace, but yeah, as long as the tests are passing and checking the right things I don't think it matters.

guidoiaquinti

comment created a day ago


Pull request review comment PostHog/charts-clickhouse

Make StorageClass installation optional in AWS/GCP

+import pytest
+
+from utils import cleanup_k8s, get_clickhouse_statefulset_spec, helm_install, wait_for_pods_to_be_ready
+
+HELM_INSTALL_CMD = """
+helm upgrade \
+    --install \
+    -f ../../ci/values/kubetest/test_clickhouse_persistence_disabled.yaml \
+    --timeout 30m \
+    --create-namespace \
+    --namespace posthog \
+    posthog ../../charts/posthog \
+    --wait-for-jobs \
+    --wait
+"""
+
+
+@pytest.fixture
+def setup(kube):
+    cleanup_k8s()
+    helm_install(HELM_INSTALL_CMD)
+    wait_for_pods_to_be_ready(kube)
+
+
+def test_volume_claim(setup, kube):
+    statefulset_spec = get_clickhouse_statefulset_spec(kube)
+
+    # Verify the spec.volumes configuration
+    volumes = statefulset_spec.template.spec.volumes
+    assert all(volume.config_map is not None for volume in volumes), "All the spec.volumes should be of type config_map"

non-blocking

Is `volume.config_map is not None` saying that persistence is disabled?

guidoiaquinti

comment created a day ago

Pull request review comment PostHog/charts-clickhouse

Make StorageClass installation optional in AWS/GCP

+import pytest
+
+from utils import cleanup_k8s, get_clickhouse_statefulset_spec, helm_install, wait_for_pods_to_be_ready
+
+HELM_INSTALL_CMD = """
+helm upgrade \
+    --install \
+    -f ../../ci/values/kubetest/test_clickhouse_persistence_disabled.yaml \
+    --timeout 30m \
+    --create-namespace \
+    --namespace posthog \

non-blocking

Worth putting these test cases into separate namespaces for better isolation?
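One possible shape for that, sketched against the install command in the diff above; the unique-namespace scheme here is only an illustration, not what the PR does:

import uuid

# Hypothetical variant: give each test module its own namespace so helm releases
# and leftover resources from one case can't leak into another.
NAMESPACE = f"posthog-test-{uuid.uuid4().hex[:8]}"

HELM_INSTALL_CMD = f"""
helm upgrade \\
    --install \\
    -f ../../ci/values/kubetest/test_clickhouse_persistence_disabled.yaml \\
    --timeout 30m \\
    --create-namespace \\
    --namespace {NAMESPACE} \\
    posthog ../../charts/posthog \\
    --wait-for-jobs \\
    --wait
"""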

guidoiaquinti

comment created a day ago


Pull request review comment PostHog/charts-clickhouse

Make StorageClass installation optional in AWS/GCP

+import pytest

Now that we have an actual kubetest, could we remove the test_something.py?

guidoiaquinti

comment created a day ago


pull request comment PostHog/charts-clickhouse

Make StorageClass installation optional in AWS/GCP

I've run through it and it looks like it:

  1. removes the StorageClass
  2. keeps the PV, and doesn't delete it after pod restart

https://gist.github.com/hazzadous/f078934a2367f3433c09c8d7a5c0ba13

guidoiaquinti

comment created a day ago

pull request comment PostHog/charts-clickhouse

Make StorageClass installation optional in AWS/GCP

Verified that if an AWS/GCP user doesn't read the release note and runs an upgrade with the default `installCustomStorageClass: false`, the custom StorageClass is not removed and/or volumes don't get deleted: TODO

@guidoiaquinti I'll test this on a staging environment; will doing it by hand do for now?

guidoiaquinti

comment created a day ago

Pull request review comment PostHog/posthog

Speed up lifecycle query

 def test_test_account_filters_with_groups(self):
             self.team,
         )
 
-        self.assertEqual(sorted(res["status"] for res in result), ["dormant", "new", "resurrecting", "returning"])
-        for res in result:
-            if res["status"] == "dormant":
-                self.assertEqual(res["data"], [0, -1, 0, 0, -1, 0, 0, 0])
-            elif res["status"] == "returning":
-                self.assertEqual(res["data"], [0, 0, 0, 0, 0, 0, 0, 0])
-            elif res["status"] == "resurrecting":
-                self.assertEqual(res["data"], [1, 0, 0, 1, 0, 0, 0, 0])
-            elif res["status"] == "new":
-                self.assertEqual(res["data"], [0, 0, 0, 0, 0, 0, 0, 0])
+        self.assertLifecycleResults(
+            result,
+            [
+                {"status": "dormant", "data": [0, -1, 0, 0, -1, 0, 0, 0]},
+                {"status": "new", "data": [0, 0, 0, 0, 0, 0, 0, 0]},
+                {"status": "resurrecting", "data": [1, 0, 0, 1, 0, 0, 0, 0]},
+                {"status": "returning", "data": [0, 0, 0, 0, 0, 0, 0, 0]},
+            ],
+        )
+
+    @snapshot_clickhouse_queries
+    def test_lifecycle_edge_cases(self):
+        # This test tests behavior when created_at is different from first matching event and dormant/resurrecting/returning logic

I would generally go for explicit and minimal to make it super obvious what behaviour I was checking, but that's personal preference so not a blocker

macobo

comment created 5 days ago


Pull request review comment PostHog/posthog

Speed up lifecycle query

-from datetime import datetime, timedelta
-from typing import Any, Callable, Dict, List, Tuple, Union
+from datetime import datetime
+from typing import Callable, Dict, List, Tuple
 
-from dateutil.relativedelta import relativedelta
 from django.db.models.query import Prefetch
-from rest_framework.exceptions import ValidationError
 from rest_framework.request import Request
 
 from ee.clickhouse.client import sync_execute
-from ee.clickhouse.models.action import format_action_filter
+from ee.clickhouse.models.entity import get_entity_filtering_params
 from ee.clickhouse.models.person import get_persons_by_uuids
-from ee.clickhouse.models.property import parse_prop_clauses
+from ee.clickhouse.queries.event_query import ClickhouseEventQuery
 from ee.clickhouse.queries.person_distinct_id_query import get_team_distinct_ids_query
+from ee.clickhouse.queries.person_query import ClickhousePersonQuery
 from ee.clickhouse.queries.trends.util import parse_response
-from ee.clickhouse.queries.util import get_earliest_timestamp, get_time_diff, get_trunc_func_ch, parse_timestamps
+from ee.clickhouse.queries.util import get_earliest_timestamp, parse_timestamps
 from ee.clickhouse.sql.trends.lifecycle import LIFECYCLE_PEOPLE_SQL, LIFECYCLE_SQL
-from posthog.constants import TREND_FILTER_TYPE_ACTIONS
 from posthog.models.entity import Entity
 from posthog.models.filters import Filter
+from posthog.models.filters.mixins.utils import cached_property
 from posthog.queries.lifecycle import LifecycleTrend
 
+# Lifecycle takes an event/action, time range, interval and for every period, splits the users who did the action into 4:
+#
+# 1. NEW - Users who did the action during interval and were also created during that period
+# 2. RESURRECTING - Users who did the action during this interval, but not one prior
+# 3. RETURNING - Users who did the action during this interval and prior one
+# 4. DORMANT - Users who did not do the action during this period but did an action the previous period
+#
+# To do this, we need for every period (+1 prior to the first period), list of person_ids who did the event/action
+# during that period and their creation dates.
 
-class ClickhouseLifecycle(LifecycleTrend):
-    def get_interval(self, interval: str) -> Tuple[Union[timedelta, relativedelta], str, str]:
-        if interval == "hour":
-            return timedelta(hours=1), "1 HOUR", "1 MINUTE"
-        elif interval == "day":
-            return timedelta(days=1), "1 DAY", "1 HOUR"
-        elif interval == "week":
-            return timedelta(weeks=1), "1 WEEK", "1 DAY"
-        elif interval == "month":
-            return relativedelta(months=1), "1 MONTH", "1 DAY"
-        else:
-            raise ValidationError("{interval} not supported")
 
+class ClickhouseLifecycle(LifecycleTrend):
     def _format_lifecycle_query(self, entity: Entity, filter: Filter, team_id: int) -> Tuple[str, Dict, Callable]:
-        date_from = filter.date_from
-
-        if not date_from:
-            date_from = get_earliest_timestamp(team_id)
-
-        interval = filter.interval
-        num_intervals, seconds_in_interval, _ = get_time_diff(interval, filter.date_from, filter.date_to, team_id)
-        interval_increment, interval_string, sub_interval_string = self.get_interval(interval)
-        trunc_func = get_trunc_func_ch(interval)
-        event_query = ""
-        event_params: Dict[str, Any] = {}
-
-        props_to_filter = [*filter.properties, *entity.properties]
-        prop_filters, prop_filter_params = parse_prop_clauses(props_to_filter, group_properties_joined=False)
-
-        _, _, date_params = parse_timestamps(filter=filter, team_id=team_id)
-
-        if entity.type == TREND_FILTER_TYPE_ACTIONS:
-            try:
-                action = entity.get_action()
-                event_query, event_params = format_action_filter(action)
-            except:
-                return "", {}, self._parse_result(filter, entity)
-        else:
-            event_query = "event = %(event)s"
-            event_params = {"event": entity.id}
+        event_query, event_params = LifecycleEventQuery(team_id=team_id, filter=filter).get_query()
 
         return (
-            LIFECYCLE_SQL.format(
-                interval=interval_string,
-                trunc_func=trunc_func,
-                event_query=event_query,
-                filters=prop_filters,
-                sub_interval=sub_interval_string,
-                GET_TEAM_PERSON_DISTINCT_IDS=get_team_distinct_ids_query(team_id),
-            ),
-            {
-                "team_id": team_id,
-                "prev_date_from": (date_from - interval_increment).strftime(
-                    "%Y-%m-%d{}".format(" %H:%M:%S" if filter.interval == "hour" else " 00:00:00")
-                ),
-                "num_intervals": num_intervals,
-                "seconds_in_interval": seconds_in_interval,
-                **event_params,
-                **date_params,
-                **prop_filter_params,
-            },
+            LIFECYCLE_SQL.format(events_query=event_query, interval_expr=filter.interval),

Can we assume that filter.interval has been validated and therefore is safe to include in the format?
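A hedged sketch of the kind of guard that would make that assumption explicit; the allowlist and where it would live are assumptions for illustration, not what the PR does:

# Hypothetical guard: only ever format a known-safe interval keyword into SQL.
ALLOWED_INTERVALS = {"hour", "day", "week", "month"}


def validated_interval(interval: str) -> str:
    if interval not in ALLOWED_INTERVALS:
        raise ValueError(f"Unsupported interval: {interval!r}")
    return interval


# e.g. LIFECYCLE_SQL.format(events_query=event_query, interval_expr=validated_interval(filter.interval))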

macobo

comment created 5 days ago

Pull request review comment PostHog/posthog

Speed up lifecycle query

 _LIFECYCLE_EVENTS_QUERY = """
-WITH person_activity_including_previous_period AS (
-    SELECT DISTINCT
-        person_id,
-        {trunc_func}(events.timestamp) start_of_period
+SELECT
+    person_id,
-    FROM events
-        JOIN ({GET_TEAM_PERSON_DISTINCT_IDS}) pdi
-            ON events.distinct_id = pdi.distinct_id
-
-    WHERE team_id = %(team_id)s AND {event_query} {filters}
-
-    GROUP BY
-        person_id,
-        start_of_period
-
-    HAVING
-        start_of_period <= toDateTime(%(date_to)s)
-        AND start_of_period >= toDateTime(%(prev_date_from)s)
+    /*
+        We want to put the status of each period onto it's own line, so we
+        can easily aggregate over them. With the inner query we end up with a structure like:

-), person_activity_as_array AS (
-    SELECT DISTINCT
-        person_id,
-        groupArray({trunc_func}(events.timestamp)) start_of_period
+        person_id  |  period_of_activity  | status_of_activity  | dormant_status_of_period_after_activity

-    FROM events
-        JOIN ({GET_TEAM_PERSON_DISTINCT_IDS}) pdi
-            ON events.distinct_id = pdi.distinct_id
+        However, we want to have something of the format:

-    WHERE
-        team_id = %(team_id)s
-        AND {event_query} {filters}
-        AND toDateTime(events.timestamp) <= toDateTime(%(date_to)s)
-        AND {trunc_func}(events.timestamp) >= toDateTime(%(date_from)s)
-
-    GROUP BY person_id
-), periods AS (
-    SELECT
-        {trunc_func}(toDateTime(%(date_to)s) - number * %(seconds_in_interval)s) AS start_of_period
-
-    FROM numbers(%(num_intervals)s)
-)
+        person_id  | period_of_activity          |  status_of_activity
+        person_id  | period_just_after_activity  |  dormant_status_of_period_after_activity

-SELECT
-    activity_pairs.person_id AS person_id,
-    activity_pairs.initial_period AS initial_period,
-    activity_pairs.next_period AS next_period,
-    if(
-        initial_period = toDateTime('0000-00-00 00:00:00'),
-        'dormant',
+        such that we can simply aggregate over person_id, period.
+    */
+    arrayJoin(
+        arrayZip(
+            [period, period + INTERVAL 1 {interval_expr}],
+            [initial_status, if(next_is_active, '', 'dormant')]
+        )
+    ) AS period_status_pairs,
+    period_status_pairs.1 as start_of_period,
+    period_status_pairs.2 as status
+FROM (
+    SELECT
+        person_id,
+        period,
+        created_at,
         if(
-            next_period = initial_period + INTERVAL {interval},
-            'returning',
+            dateTrunc(%(interval)s, created_at) = period,
+            'new',
             if(
-                next_period > earliest + INTERVAL {interval},
-                'resurrecting',
-                'new'
+                previous_activity + INTERVAL 1 {interval_expr} = period,
+                'returning',
+                'resurrecting'
             )
-        )
-    ) as status
-
-FROM (
-    /*
-         Get person period activity paired with the next adjacent period activity
-    */
-    SELECT
-        person_id,
-        initial_period,
-        min(next_period) as next_period
-
+        ) AS initial_status,
+        period + INTERVAL 1 {interval_expr} = following_activity AS next_is_active,
+        previous_activity,
+        following_activity
     FROM (
-        SELECT
-            person_id,
-            base.start_of_period as initial_period,
-            subsequent.start_of_period as next_period
-
-        FROM person_activity_including_previous_period base
-            JOIN person_activity_including_previous_period subsequent
-                ON base.person_id = subsequent.person_id
-
-        WHERE subsequent.start_of_period > base.start_of_period
+        SELECT
+            person_id,
+            any(period) OVER (PARTITION BY person_id ORDER BY period ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING) as previous_activity,

Love it! Does this mean we only have one JOIN on pdis/persons? That should be a massive saving on memory, I guess, and on speed.

macobo

comment created 5 days ago

Pull request review comment PostHog/posthog

Speed up lifecycle query

 def test_filter_test_accounts(self):
                 request,
             )
 
-    return TestLifecycle
-
-
-def _create_action(**kwargs):
-    team = kwargs.pop("team")
-    name = kwargs.pop("name")
-    action = Action.objects.create(team=team, name=name)
-    ActionStep.objects.create(action=action, event=name)
-    action.calculate_events()
-    return action
+        def assertLifecycleResults(self, results, expected):
+            sorted_results = [
+                {"status": r["status"], "data": r["data"]} for r in sorted(results, key=lambda r: r["status"])
+            ]
+            sorted_expected = list(sorted(expected, key=lambda r: r["status"]))
+            self.assertEquals(sorted_results, sorted_expected)
 
-class TestDjangoLifecycle(lifecycle_test_factory(Trends, Event.objects.create, Person.objects.create, _create_action)):  # type: ignore

non-blocking

Where did these go?

macobo

comment created 5 days ago

Pull request review comment PostHog/posthog

Speed up lifecycle query

 def test_test_account_filters_with_groups(self):
             self.team,
         )
 
-        self.assertEqual(sorted(res["status"] for res in result), ["dormant", "new", "resurrecting", "returning"])
-        for res in result:
-            if res["status"] == "dormant":
-                self.assertEqual(res["data"], [0, -1, 0, 0, -1, 0, 0, 0])
-            elif res["status"] == "returning":
-                self.assertEqual(res["data"], [0, 0, 0, 0, 0, 0, 0, 0])
-            elif res["status"] == "resurrecting":
-                self.assertEqual(res["data"], [1, 0, 0, 1, 0, 0, 0, 0])
-            elif res["status"] == "new":
-                self.assertEqual(res["data"], [0, 0, 0, 0, 0, 0, 0, 0])
+        self.assertLifecycleResults(
+            result,
+            [
+                {"status": "dormant", "data": [0, -1, 0, 0, -1, 0, 0, 0]},
+                {"status": "new", "data": [0, 0, 0, 0, 0, 0, 0, 0]},
+                {"status": "resurrecting", "data": [1, 0, 0, 1, 0, 0, 0, 0]},
+                {"status": "returning", "data": [0, 0, 0, 0, 0, 0, 0, 0]},
+            ],
+        )
+
+    @snapshot_clickhouse_queries
+    def test_lifecycle_edge_cases(self):
+        # This test tests behavior when created_at is different from first matching event and dormant/resurrecting/returning logic

What specific edge case is this? Could we spell it out explicitly? I'm guessing it's specifically that there will be no NEW status? Are there any other cases we're testing for here?

macobo

comment created 5 days ago

Pull request review comment PostHog/posthog

Speed up lifecycle query

 def get_people(
         from posthog.api.person import PersonSerializer
 
         return PersonSerializer(people, many=True).data
+
+
+class LifecycleEventQuery(ClickhouseEventQuery):
+    _filter: Filter
+
+    def get_query(self):
+        date_query, date_params = self._get_date_filter()
+        self.params.update(date_params)
+
+        prop_query, prop_params = self._get_props(self._filter.properties)
+        self.params.update(prop_params)
+
+        person_query, person_params = self._get_person_query()
+        self.params.update(person_params)
+
+        groups_query, groups_params = self._get_groups_query()
+        self.params.update(groups_params)
+
+        entity_params, entity_format_params = get_entity_filtering_params(
+            self._filter.entities[0], self._team_id, table_name=self.EVENT_TABLE_ALIAS
+        )
+        self.params.update(entity_params)
+
+        return (
+            f"""
+            SELECT DISTINCT
+                person_id,
+                toDateTime(dateTrunc(%(interval)s, events.timestamp)) AS period,
+                person.created_at AS created_at
+            FROM events AS {self.EVENT_TABLE_ALIAS}
+            {self._get_distinct_id_query()}
+            {person_query}
+            {groups_query}
+            WHERE team_id = %(team_id)s
+            {entity_format_params["entity_query"]}
+            {date_query}
+            {prop_query}
+        """,
+            self.params,
+        )
+
+    @cached_property
+    def _person_query(self):
+        return ClickhousePersonQuery(self._filter, self._team_id, self._column_optimizer, extra_fields=["created_at"],)
+
+    def _get_date_filter(self):
+        _, _, date_params = parse_timestamps(filter=self._filter, team_id=self._team_id)
+        params = {**date_params, "interval": self._filter.interval}
+        # :TRICKY: We fetch all data even for the period before the graph starts up until the end of the last period
+        return (
+            f"""
+            AND timestamp >= toDateTime(dateTrunc(%(interval)s, toDateTime(%(date_from)s))) - INTERVAL 1 {self._filter.interval}
+            AND timestamp < toDateTime(dateTrunc(%(interval)s, toDateTime(%(date_to)s))) + INTERVAL 1 {self._filter.interval}
+        """,
+            params,
+        )
+
+    def _determine_should_join_distinct_ids(self):

non-blocking

We specify a return type for _determine_should_join_persons; should we do the same here for consistency?
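Presumably something as small as this; the flag name below is a guess at what the method sets, not necessarily the real attribute:

def _determine_should_join_distinct_ids(self) -> None:
    # Lifecycle always needs person_ids, so always join on distinct_ids.
    # (`_should_join_distinct_ids` is a hypothetical attribute name here.)
    self._should_join_distinct_ids = True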

macobo

comment created 5 days ago


Pull request review comment PostHog/posthog

Speed up lifecycle query

 def test_test_account_filters_with_groups(self):
             self.team,
         )
 
-        self.assertEqual(sorted(res["status"] for res in result), ["dormant", "new", "resurrecting", "returning"])
-        for res in result:
-            if res["status"] == "dormant":
-                self.assertEqual(res["data"], [0, -1, 0, 0, -1, 0, 0, 0])
-            elif res["status"] == "returning":
-                self.assertEqual(res["data"], [0, 0, 0, 0, 0, 0, 0, 0])
-            elif res["status"] == "resurrecting":
-                self.assertEqual(res["data"], [1, 0, 0, 1, 0, 0, 0, 0])
-            elif res["status"] == "new":
-                self.assertEqual(res["data"], [0, 0, 0, 0, 0, 0, 0, 0])
+        self.assertLifecycleResults(
+            result,
+            [
+                {"status": "dormant", "data": [0, -1, 0, 0, -1, 0, 0, 0]},
+                {"status": "new", "data": [0, 0, 0, 0, 0, 0, 0, 0]},
+                {"status": "resurrecting", "data": [1, 0, 0, 1, 0, 0, 0, 0]},
+                {"status": "returning", "data": [0, 0, 0, 0, 0, 0, 0, 0]},
+            ],
+        )
+
+    @snapshot_clickhouse_queries
+    def test_lifecycle_edge_cases(self):
+        # This test tests behavior when created_at is different from first matching event and dormant/resurrecting/returning logic
+        with freeze_time("2020-01-11T12:00:00Z"):
+            Person.objects.create(distinct_ids=["person1"], team_id=self.team.pk)
+
+        journeys_for(
+            {
+                "person1": [
+                    {"event": "$pageview", "timestamp": datetime(2020, 1, 12, 12),},

Are all these events needed? If we are just concerned with the NEW case mentioned above, then would one event be sufficient for a minimal test case?
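For comparison, a minimal variant might look roughly like this; it assumes the same test class, fixtures, and helpers (journeys_for, freeze_time) as the diff above, and is only a sketch of the reviewer's suggestion:

    @snapshot_clickhouse_queries
    def test_lifecycle_created_at_before_range(self):
        # Person created before the queried range, with a single event inside it:
        # by the PR's definition they should not be counted as "new" in that period.
        with freeze_time("2020-01-11T12:00:00Z"):
            Person.objects.create(distinct_ids=["person1"], team_id=self.team.pk)

        journeys_for(
            {"person1": [{"event": "$pageview", "timestamp": datetime(2020, 1, 12, 12)}]}, self.team,
        )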

macobo

comment created 5 days ago


pull request comment PostHog/posthog

Speed up lifecycle query

omg look at those benchmarks @macobo

I'll have a look after lunch

macobo

comment created 5 days ago

PR closed PostHog/posthog

perf(lifecycle): use created_at as the earliest person activity [performance]

Previously we were using the first event that matched the filtering parameters. This could be expensive if there are lots of events/users and event filtering doesn't utilize sorting or index skipping much.

Instead we use created_at as the date of first activity, regardless of any filtering that may have been applied to events. Note that this may not be as selective as the query on events, but fingers crossed this is an outlier. Note that this change also diverges from the current functionality in that previously we would consider the first activity for a specific event type, but now created_at is implicitly the earliest activity of any event.

This PR doesn't handle the optimisation of further filtering the persons by any person filters that may be applied, to ensure the right-hand earliest JOIN is as small as it could be.

NOTE: I had to remove the Django model's auto_now_add=True since, by design, we cannot override it.

We need to set created_at to the time of the first event ingested; in tests it is set to datetime.now(), and passing this created_at in explicitly doesn't help as we can't override auto_now_add. In tests we're relying on created_at from the Postgres model to propagate this value to ClickHouse so it can be used for querying.
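For illustration, the kind of change being described; a sketch assuming a plain Django model field, not the actual field definition in the PR:

from django.db import models
from django.utils import timezone


class Person(models.Model):
    # Before: auto_now_add=True, which Django always overwrites on insert,
    # so tests (and backfills) could not supply their own created_at.
    # created_at = models.DateTimeField(auto_now_add=True)

    # After: a plain default, which can be overridden by passing created_at explicitly.
    created_at = models.DateTimeField(default=timezone.now)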

I'm making an assumption that plugin-server is setting this field correctly, which is a separate code path, so I don't think the changes I'm making here are actually used in production.

Refers to https://github.com/PostHog/posthog/issues/7382


+63 -57

1 comment

8 changed files

hazzadous

PR closed 6 days ago
