If you are wondering where the data on this site comes from, please visit https://api.github.com/users/hedgehog/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.

auxesis/cucumber-nagios 264

Systems testing plugin for Nagios with Cucumber + Webrat + Mechanize + Net::SSH

hedgehog/cucumber-nagios 10

systems testing plugin for Nagios with Cucumber + Webrat + Mechanize + Net::SSH

hedgehog/async-rack 4

Makes middleware that ships with Rack bullet-proof for async responses.

hedgehog/async_sinatra 2

A plugin for Sinatra to provide a DSL extension for using Thin for asynchronous responses

hedgehog/b3 2

A BigBlueButton library for the API and BDD

hedgehog/amazon-ec2 1

A Ruby Gem that gives you full access to several of the Amazon Web Services API from your Ruby/Ruby on Rails apps

hedgehog/apt 1

A Chef cookbook for apt

hedgehog/Aquarium 1

An AOP library for Ruby

hedgehog/aruba 1

CLI Steps for Cucumber

hedgehog/beam 1

Watch this space

PR opened ravendb/ravendb

RavenDB-16813: fix time series tombstones clean, fix test
+468 -15

0 comment

7 changed files

pr created time in 8 minutes

push event ravendb/ravendb

Arkadiusz Palinski

commit sha 1cfc2e8ca6e7574c7c2b5735739206b317f0edb6

RavenDB-16803 / RavenDB-16507 Fixing another case where a newly created read transaction could get an already disposed pager state of a scratch file. This time it applies to scratches from the recycle area. Those can be disposed by ScratchBufferPool.Free() call but then we need to have UpdateCacheForPagerStatesOfAllScratches() before the actual dispose.

view details

Arkadiusz Palinski

commit sha be27481a063f91b7e7620165834743018711ef83

RavenDB-16803 / RavenDB-16507 Fixing another case where a newly created read transaction could get an already disposed pager state of a scratch file. This time it applies to scratches from the recycle area. Those can be disposed by ScratchBufferPool.Free() call but then we need to have UpdateCacheForPagerStatesOfAllScratches() before the actual dispose.

view details

Arkadiusz Palinski

commit sha 168cedd0a0106c88cbdf611109a1ba4c0e63f778

Merge branch 'v5.1' of github.com:ravendb/ravendb into v5.2

view details

Arkadiusz Paliński

commit sha 8a9807549270764c3ef486c911b98e752d4fa937

Merge pull request #12312 from arekpalinski/v5.2
5.1 to 5.2 merge

view details

Danielle9897

commit sha d605e53a5f50bae48fb3e4e4dbf95f11cef0ea75

RavenDB-15216 Link to RQL documentation in Query->Syntax popup

view details

Danielle9897

commit sha efc81c7c8dc45fb59969263c6a64ef643047aa05

RavenDB-15883 Server uptime isn't refreshed by the server dashboard

view details

Danielle9897

commit sha ccd80be325437942faa3f09935ce338988faf476

RavenDB-15883 Server uptime isn't refreshed by the server dashboard - Fix review comment

view details

Danielle9897

commit sha 1f359b2d7a185b88416675eb4b800a803748ee2b

RavenDB-16503 Add max cores to cluster - value not stored in textbox when toggle state is changed

view details

Danielle9897

commit sha 5eef4095699c9c99f8bcfb59f294f6aa722b86cd

RavenDB-16114 Group fanout performance hint into single dialog

view details

Danielle9897

commit sha 8dcfe89d2ccc7fb57c11b37fd7d0e7f4bb6f79b6

RavenDB-16114 Group fanout performance hint into single dialog - Fix review comments

view details

Rafał Kwiatkowski

commit sha 1eb7b0db6c8377382d8986b69208d333e3a531d4

RavenDB-16550 Change reset index icon + add info in confirmation

view details

Karmel Indych

commit sha 80a2c9d9f58d65efea01a0ad83c8fddd1f2e6dcb

RavenDB-16821 move PendingRollingIndexException + add DeploymentMode to ToIndexDefinition

view details

Rafał Kwiatkowski

commit sha ee6b90b1bd771cc186805daca6287bbc858cd225

RavenDB-16584 Index throttling - Time scale displayed in uneven intervals (more significant at high values of `ThrottlingTimeInterval`)

view details

Danielle9897

commit sha e48ed213c92bb0f65f51be0518fc9c53491c2e8a

RavenDB-16727 Curl Command Dropdown (Import & Export)

view details

Danielle9897

commit sha 3c5c410173ff711949eb247e968769a523dbae8f

RavenDB-16727 Curl Command Dropdown (Import & Export) - Fix review comments

view details

Danielle9897

commit sha 016584bc7c99f5ee9fcfab901509da9383d829f8

RavenDB-16727 Curl Command Dropdown (Import & Export) - Fix review comments 2

view details

Grisha Kotler

commit sha 431c6cb17bcfd5642e06faf2d13651b265d196e1

RavenDB-16825 - convert from MemoryStream to FileStream if needed

view details

Grisha Kotler

commit sha 628b26a714f1c94108da3b27866477fb245be479

RavenDB-16825 - small refactoring

view details

aviv

commit sha bc915eb0d44db808cc4d2d5843d8d68ee479f60b

RavenDB-16385 : Go over all TS tests and replace `DateTime.Today` since it is sensitive to daylight saving

view details

Pawel Pekrol

commit sha 99ae510ba3090b4bcc0543a3a38d1259723b83b8

RavenDB-7070 use AlmostEquals when comparing doubles

view details
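The commit above swaps exact equality for an AlmostEquals helper when comparing doubles. As a minimal sketch of the underlying idea (not RavenDB's actual implementation), a tolerance-based comparison in Python:

```python
import math

def almost_equals(a: float, b: float, rel_tol: float = 1e-9, abs_tol: float = 1e-12) -> bool:
    """Compare two doubles with relative and absolute tolerances,
    since accumulated rounding error makes == unreliable."""
    return math.isclose(a, b, rel_tol=rel_tol, abs_tol=abs_tol)

# 0.1 + 0.2 is not exactly 0.3 in binary floating point
print(0.1 + 0.2 == 0.3)               # False
print(almost_equals(0.1 + 0.2, 0.3))  # True
```

The usual shape of such helpers combines a relative tolerance (for large magnitudes) with an absolute floor (for values near zero), which is what `math.isclose` does.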

push time in 2 hours

PR merged ravendb/ravendb

RavenDB-13053 Merge 5.2 into sharding
+7007 -2447

0 comment

394 changed files

karmeli87

pr closed time in 2 hours

push event ravendb/ravendb

Karmel Indych

commit sha 48298ae7e391c535915766950da3219524ed95da

RavenDB-16911 make sure database deletion is under r/w lock

view details

push time in 2 hours

push event libreswan/libreswan

Paul Wouters

commit sha add9388cef83e6137787d4bb16a6123951651bd8

testing: try out CodeQL Analysis from GitHub

view details

push time in 2 hours

Pull request review comment hanami/hanami

Use a class for the application settings

+# frozen_string_literal: true
+
+require "hanami/application/settings/dotenv_store"
+require "dotenv"
+
+RSpec.describe Hanami::Application::Settings::DotenvStore do
+  describe "#with_dotenv_loaded" do
+    let(:store) { described_class.new }
+
+    context "dotenv available" do
+      let(:dotenv) { spy(:dotenv) }
+
+      before do
+        allow(store).to receive(:require).and_call_original
+        stub_const "Dotenv", dotenv
+      end
+
+      it "requires and loads a range of dotenv files, specific to the current HANAMI_ENV" do
+        store.with_dotenv_loaded
+
+        expect(store).to have_received(:require).with("dotenv").ordered
+        expect(dotenv).to have_received(:load).ordered.with(
+          ".env.development.local",
+          ".env.local",
+          ".env.development",
+          ".env"
+        )
+      end
+
+      it "returns self" do
+        expect(store.with_dotenv_loaded).to be(store)
+      end
+
+      context "HANAMI_ENV is 'test'" do
+        before do
+          @hanami_env = ENV["HANAMI_ENV"]
+          ENV["HANAMI_ENV"] = "test"
+        end
+
+        after do
+          ENV["HANAMI_ENV"] = @hanami_env
+        end

@solnic with the new implementation where the environment is injected that's no longer needed :slightly_smiling_face:

waiting-for-dev

comment created time in 3 hours

issue comment drolbr/Overpass-API

Inconsistent error messages among output types

Actually, you are right: wget returns 400 Bad Request, which I didn't see because I was only looking at the output. But I found that curl also returns the syntax error, e.g. ... <p><strong style="color:#FF0000">Error</strong>: line 2: parse error: Key expected - ';' found. </p> ...

And the same also happens for a bbox out of bounds, e.g. with [bbox:-90,-180,90,280], so curl solved this for me.

Mashin6

comment created time in 3 hours

issue closed libreswan/libreswan

Migration to organization

Is it possible to migrate the personal account to an organization?

Converting:

  • https://help.github.com/articles/converting-a-user-into-an-organization/

It is possible to rename this account, create the organization, and move/transfer the repository to the organization.

Rename account:

  • https://help.github.com/en/enterprise/2.13/user/articles/changing-your-github-username

Create an organization:

  • https://help.github.com/en/github/setting-up-and-managing-organizations-and-teams/creating-a-new-organization-from-scratch

Transfer a repository:

  • https://help.github.com/en/github/administering-a-repository/transferring-a-repository

Like:

  • http://github.com/gnome
  • http://github.com/kde
  • ...

closed time in 3 hours

Neustradamus

issue comment libreswan/libreswan

Migration to organization

finally done :)

Neustradamus

comment created time in 3 hours

issue comment drolbr/Overpass-API

Short occurences of "File_Blocks_Index: Data file size does not match block size"

Could be a bug. The description so far does not suffice to pinpoint anything.

boldtrn

comment created time in 4 hours

Pull request review comment h2o/h2o

Add fuzzer for HTTP/3

+/*
+ * Copyright (c) 2021 Fastly, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#include <assert.h>
+#include <malloc.h>
+#include <stdbool.h>
+#include "khash.h"
+#include "quicly.h"
+#include "quicly/sendstate.h"
+#include "quicly/recvstate.h"
+#include "quicly_mock.h"
+
+KHASH_MAP_INIT_INT64(quicly_stream_t, quicly_stream_t *)
+
+mquicly_context_t mquicly_context;
+
+struct st_quicly_conn_t {
+    struct _st_quicly_conn_public_t super;
+    khash_t(quicly_stream_t) * streams;
+};
+
+struct st_quicly_send_context_t {
+};

Do we use this?

hfujita

comment created time in 4 hours

Pull request review comment h2o/h2o

Add fuzzer for HTTP/3

 void on_stream_destroy(quicly_stream_t *qs, int err)
         pre_dispose_request(stream);
     if (!stream->req_disposed)
         h2o_dispose_request(&stream->req);
+    /* in case the stream is destroyed before the buffer is fully consumed */
+    h2o_buffer_dispose(&stream->recvbuf.buf);

Nice catch! I think this is a memory leak that we have?

To give an example, I think we are leaking memory when we have an incomplete HTTP/3 frame in stream->recvbuf.buf then receive STREAM_RESET & STOP_SENDING.

hfujita

comment created time in 15 hours
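The review above points at a leak when a stream is destroyed while its receive buffer still holds an incomplete frame, and the fix disposes the buffer unconditionally in the destroy path. A language-agnostic sketch of that pattern (hypothetical names, not h2o's actual API):

```python
class RecvBuffer:
    """Toy stand-in for a manually managed receive buffer."""
    def __init__(self):
        self.data = bytearray()
        self.disposed = False

    def dispose(self):
        # Release the backing storage and mark the buffer dead.
        self.data = bytearray()
        self.disposed = True


class Stream:
    def __init__(self):
        self.recvbuf = RecvBuffer()

    def on_destroy(self):
        # Dispose even if the buffer was never fully consumed; otherwise
        # an incomplete frame left behind by STREAM_RESET / STOP_SENDING
        # would keep its memory alive after the stream is gone.
        if not self.recvbuf.disposed:
            self.recvbuf.dispose()


s = Stream()
s.recvbuf.data += b"\x01\x02"   # partial frame arrives
s.on_destroy()                  # stream torn down before consumption
print(s.recvbuf.disposed)       # True
```

The point is simply that cleanup in the destroy callback must not assume the happy path (full consumption) was taken.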

Pull request review comment h2o/h2o

Add fuzzer for HTTP/3

+/*
+ * Copyright (c) 2021 Fastly, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#include <assert.h>
+#include <malloc.h>
+#include <stdbool.h>
+#include "khash.h"
+#include "quicly.h"
+#include "quicly/sendstate.h"
+#include "quicly/recvstate.h"
+#include "quicly_mock.h"
+
+KHASH_MAP_INIT_INT64(quicly_stream_t, quicly_stream_t *)
+
+mquicly_context_t mquicly_context;
+
+struct st_quicly_conn_t {
+    struct _st_quicly_conn_public_t super;
+    khash_t(quicly_stream_t) * streams;
+};
+
+struct st_quicly_send_context_t {
+};
+
+static quicly_conn_t *create_connection(quicly_context_t *ctx, bool is_client, struct sockaddr *remote_addr,
+                                        struct sockaddr *local_addr)
+{
+    quicly_conn_t *conn = calloc(1, sizeof(*conn));
+    assert(conn != NULL);
+    conn->super.ctx = ctx;
+    if (is_client) {
+        conn->super.local.bidi.next_stream_id = 0;
+        conn->super.local.uni.next_stream_id = 2;
+        conn->super.remote.bidi.next_stream_id = 1;
+        conn->super.remote.uni.next_stream_id = 3;
+    } else {
+        conn->super.local.bidi.next_stream_id = 1;
+        conn->super.local.uni.next_stream_id = 3;
+        conn->super.remote.bidi.next_stream_id = 0;
+        conn->super.remote.uni.next_stream_id = 2;
+    }
+    conn->streams = kh_init(quicly_stream_t);
+
+    conn->super.local.address.sa = *local_addr;
+    conn->super.remote.address.sa = *remote_addr;
+
+    return conn;
+}
+
+int quicly_accept(quicly_conn_t **conn, quicly_context_t *ctx, struct sockaddr *dest_addr, struct sockaddr *src_addr,
+                  quicly_decoded_packet_t *packet, quicly_address_token_plaintext_t *address_token,
+                  const quicly_cid_plaintext_t *new_cid, ptls_handshake_properties_t *handshake_properties)
+{
+    *conn = create_connection(ctx, false, src_addr, dest_addr);
    *conn = create_connection(ctx, 0, src_addr, dest_addr);

I do not mind either way (I tend to believe that @hfujita is the owner of the file), but we might want to use 1 / 0 instead of true / false, as we use that in other places (e.g., the return value of quicly_is_blocked), and because it would be better to use the same convention throughout the project.

hfujita

comment created time in 4 hours

Pull request review comment ravendb/ravendb

RavenDB-16447: Wrap query with clauses dynamically

 public void AndAlso()
             tokens.AddLast(QueryOperatorToken.And);

indeed

omerlewitz

comment created time in 4 hours

issue closed drolbr/Overpass-API

How can an entrance or POI inherit the address when it is set on the building outline?

How can an entrance or POI inherit the address when it is set on the building outline? Please advise how to do this via http://overpass-turbo.eu/ That is, the address is set on the buildings, but it is missing on the entrances and POIs. In .csv format.

closed time in 4 hours

Sowa1980

issue closed drolbr/Overpass-API

How can I get the service data (software, reverter, comment) in .csv?

How can I get the service data: software, reverter, comment?

Please suggest an overpass-turbo.eu query to extract this data from an area

<tag k="source" v="Спутниковые снимки: Bing,."/> <tag k="created_by" v="JOSM/1.5 (17428 ru)"/> <tag k="created_by" v="reverter_plugin/35248;JOSM/1.5 (15553 ru)"/> <tag k="comment" v="Смоленская область правки"/>

closed time in 4 hours

Sowa1980

issue closed drolbr/Overpass-API

Unexpected slowness for some simple global counts

I'm missing some documentation on how to reason about the slowness of some simple-looking queries that count entities globally.

Times out:

[out:json];
(
  node["amenity"="restaurant"];
);
out count;

Works (in 15 seconds):

[out:json];
(
  node["amenity"="restaurant"]["diet:vegan"="only"];
);
out count;

Takes around 180 seconds:

[date:"2012-09-12T06:55:00Z"]
[out:json];
(
  node["amenity"="restaurant"]["diet:vegan"="only"];
);
out count;

closed time in 4 hours

tuukka

issue comment drolbr/Overpass-API

Unexpected slowness for some simple global counts

For a little bit of background: the query is in no way simple for the database, which has to read every restaurant in the world from disk for this purpose. The command out count exists to save on network bandwidth; the previous solution had been to use out ids and to count lines. Usually, this kind of question is well answered by Taginfo. Thus, it is unlikely that this kind of task will ever win a trade-off against other features.

tuukka

comment created time in 4 hours
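The comment explains that out count saves bandwidth compared with the older workaround of requesting out ids and counting the returned lines client-side. A small sketch of the difference, using made-up payloads that follow the general shape of Overpass JSON responses:

```python
import json

# Hypothetical `out ids` response: one element per matching node,
# so the payload grows linearly with the number of matches.
out_ids_response = json.loads(
    '{"elements": [{"type": "node", "id": 1},'
    ' {"type": "node", "id": 2},'
    ' {"type": "node", "id": 3}]}'
)

# Hypothetical `out count` response: a single count element,
# constant-size no matter how many nodes matched.
out_count_response = json.loads(
    '{"elements": [{"type": "count",'
    ' "tags": {"nodes": "3", "total": "3"}}]}'
)

# Old approach: transfer every id, then count client-side.
old_count = len(out_ids_response["elements"])

# New approach: the server sends only the total.
new_count = int(out_count_response["elements"][0]["tags"]["total"])

print(old_count, new_count)  # 3 3
```

Either way, the server still has to visit every matching element; the saving is only in what crosses the network.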

issue comment drolbr/Overpass-API

update_database - huge files consume the whole ram

Could you please pose your question again in different words? I have no idea what you mean here. Which source file is being imported, and with which command?

officer-merge

comment created time in 4 hours

issue comment drolbr/Overpass-API

is there a machine readable alternative to /api/status OR where is the content/logic of api/status defined?

The responsible source code is in src/overpass_api/dispatch/dispatcher_server.cc, line 342 and following. As I understand from the linked issue, the real problem here is that you might get the status from a different server than the one the query was sent to?

jannefleischer

comment created time in 4 hours

issue closed drolbr/Overpass-API

Inconsistent error messages among output types

Currently, when something goes wrong with a query, the API responds in different ways depending on which out: type was selected.

  1. A syntax error or an out-of-range bbox fails silently (csv, json, xml)
  2. A query that ran and timed out returns only the header (csv)
  3. When no slots are available and the query times out while waiting in the queue, it fails silently (csv, json, xml)

Particularly problematic is csv, because it gives no error messages at all.

closed time in 4 hours

Mashin6

issue comment drolbr/Overpass-API

Inconsistent error messages among output types

A syntax error should result in an HTTP 400 Bad Request. I do not understand what you mean by "bbox out of range". A timeout is indeed not signalled in csv, because there is no obvious place for error messages in csv. Load shedding is indicated by HTTP 504 if the server is overloaded, or HTTP 429 if the server has seen too many requests from the IP address. Please see the user's manual.

Mashin6

comment created time in 4 hours
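The status codes mentioned above can be handled explicitly on the client side. A minimal sketch (a hypothetical helper, not part of any Overpass client library) mapping them to readable messages:

```python
def classify_overpass_status(code: int) -> str:
    """Map the HTTP status codes described in the comment above
    to human-readable error descriptions."""
    messages = {
        400: "Bad Request: syntax error in the query",
        429: "Too Many Requests: too many requests from this IP address",
        504: "Gateway Timeout: server overloaded, load shedding",
    }
    if code == 200:
        return "OK"
    return messages.get(code, f"Unexpected status {code}")

print(classify_overpass_status(400))
print(classify_overpass_status(504))
```

Checking the status code first, rather than parsing the body, avoids the csv problem discussed in the issue, where the body itself carries no error information.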

Pull request review comment hanami/hanami

Use a class for the application settings

+# frozen_string_literal: true

Among other things, this file is unit testing the behaviour of this class when dotenv is or isn't around. I think it's important to have that covered. It's not an implementation detail for the sake of this class; it's actually the whole purpose of the class.

waiting-for-dev

comment created time in 4 hours

issue comment rethinkdb/rethinkdb

Is rethinkdb still dead?

@icnahom MongoDB, JaguarDB, GridDB

manyopensource

comment created time in 5 hours

issue comment rethinkdb/rethinkdb

Is rethinkdb still dead?

Just checking before going with a different database.

I'm curious to know what other database you are considering. I can't find a nice RethinkDB alternative that's completely free and easy to use.

manyopensource

comment created time in 5 hours

Pull request review comment ravendb/ravendb

RavenDB-16769

 protected override IEnumerator<ToOlapItem> ConvertTimeSeriesDeletedRangeEnumerat
         protected override int LoadInternal(IEnumerable<OlapTransformedItems> records, DocumentsOperationContext context, OlapEtlStatsScope scope)
         {
             var count = 0;
-
+            var uploadedFiles = new List<UploadInfo>();
+            var localFiles = new List<string>();
             var outerScope = scope.For(EtlOperations.LoadLocal, start: false);
-
-            foreach (var transformed in records)
+            try
             {
-                outerScope.NumberOfFiles++;
-                string localPath = null;
-                try
+                foreach (var transformed in records)
                 {
-                    string folderName, fileName, safeFolderName;
+                    outerScope.NumberOfFiles++;
+
                     using (outerScope.Start())
                     using (var loadScope = outerScope.For($"{EtlOperations.LoadLocal}/{outerScope.NumberOfFiles}"))
                     {
-                        localPath = transformed.GenerateFile(out folderName, out safeFolderName, out fileName);
+                        string localPath = transformed.GenerateFile(out UploadInfo uploadInfo);
+                        localFiles.Add(localPath);
 
-                        loadScope.FileName = fileName;
+                        loadScope.FileName = uploadInfo.FileName;
                         loadScope.NumberOfFiles = 1;
 
                         count += transformed.Count;
+
+                        if (AnyRemoteDestinations == false)
+                            continue;
+
+                        UploadToRemoteDestinations(localPath, uploadInfo, scope);
+                        uploadedFiles.Add(uploadInfo);
                     }
+                }
+            }
+            catch (Exception)
+            {
+                // an error occurred during the upload phase.
+                // try delete the uploaded files in order to avoid duplicates.
+                // we will re-upload these files in the next batch.
 
-                    if (AnyRemoteDestinations)
-                        UploadToServer(localPath, folderName, fileName, safeFolderName, scope);
+                foreach (var uploadInfo in uploadedFiles)
+                {
+                    TryDeleteFromRemoteDestinations(uploadInfo);
                 }
-                finally
+
+                foreach (var file in localFiles)

This is not how ETL works :)

aviv86

comment created time in 5 hours

PR opened ravendb/ravendb

RavenDB-13053 Merge 5.2 into sharding
+7007 -2447

0 comment

394 changed files

pr created time in 5 hours

pull request comment h2o/h2o

probe: add h2o:h3_packet_forward_ignored to capture ignored packet forwarding

Thank you for the changes. LGTM.

gfx

comment created time in 5 hours