If you are wondering where the data of this site comes from, please visit https://api.github.com/users/fishcakez/events. GitMemory does not store any data, but only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.

elixir-ecto/postgrex 833

PostgreSQL driver for Elixir

elixir-ecto/connection 242

Connection behaviour for connection processes

elixir-ecto/db_connection 234

Database connection behaviour

fishcakez/dbg 158

Tracing for Elixir

fishcakez/dialyze 82

Mix dialyzer task

fishcakez/core 43

Library for selective receive OTP processes

fishcakez/acceptor_pool 25

gen_tcp acceptor pool

fishcakez/backoff_supervisor 4

Backoff Supervisor

fishcakez/cliserv 3

Example TCP pool (half duplex and multiplex)

fishcakez/agent_telnet 2

Agent Appup Example

issue comment ninenines/cowboy

Possible Regression/Leak in Cowboy 2.9

I set up a trace on the following:

{:cowboy_telemetry_h, :init, 3},
{:cowboy_telemetry_h, :terminate, 3},
{MyApp.Tracer, :continue_trace, 3},
{MyApp.Tracer, :start_trace, 2},
{MyApp.Tracer, :finish_trace, 0}

MyApp.Tracer is our glue code between telemetry and Spandex.

After a bunch of munging of the data, most of the traces look like this:

#PID<0.25506.2215>      001624072079398090      cowboy_telemetry_h:init
#PID<0.25506.2215>      001624072079398124      Elixir.MyApp.Tracer:start_trace ["http.request" | _]
#PID<0.25506.2215>      001624072079430092      cowboy_telemetry_h:terminate
#PID<0.25506.2215>      001624072079472165      Elixir.MyApp.Tracer:finish_trace

I found a process stuck in this loop, though:

#PID<0.6986.2161>       001624072077862889      cowboy_telemetry_h:init
#PID<0.6986.2161>       001624072077862924      Elixir.MyApp.Tracer:continue_trace ["http.request" | _]
#PID<0.6986.2161>       001624072077880694      cowboy_telemetry_h:terminate
#PID<0.6986.2161>       001624072078261498      cowboy_telemetry_h:init
#PID<0.6986.2161>       001624072078261563      Elixir.MyApp.Tracer:continue_trace ["http.request" | _]
#PID<0.6986.2161>       001624072078279640      cowboy_telemetry_h:terminate
#PID<0.6986.2161>       001624072078469414      cowboy_telemetry_h:init
#PID<0.6986.2161>       001624072078469472      Elixir.MyApp.Tracer:continue_trace ["http.request" | _]
#PID<0.6986.2161>       001624072078550234      cowboy_telemetry_h:terminate
#PID<0.6986.2161>       001624072079042759      cowboy_telemetry_h:init
#PID<0.6986.2161>       001624072079042810      Elixir.MyApp.Tracer:continue_trace ["http.request" | _]
#PID<0.6986.2161>       001624072079098776      cowboy_telemetry_h:terminate
...

It never calls MyApp.Tracer:finish_trace even though cowboy_telemetry_h:terminate was called before init/continue_trace ran again (which would cause the error log).

I can imagine this is caused by one of a few things:

  • cowboy_telemetry isn't emitting the telemetry event on terminate
  • cowboy_telemetry is emitting the event, but our handler has disconnected because it crashed
  • Something is crashing between terminate and Spandex that would have caused the Cowboy process to die in 2.8 but is now being gracefully caught and the process is re-used in 2.9
jeffutter

comment created time in an hour

issue opened envoyproxy/envoy

Prioritize route tracing config over runtime

Currently, runtime settings such as global sampling are applied after all other settings, including per-route tracing config. This probably wasn't intended.

created time in 2 hours

pull request comment envoyproxy/envoy

adding version history file for v1.18.3

@phlax it seems there is again an issue with glint. The error is as follows; could you please advise on an editor setting to fix it?

2021-06-18T16:19:11.0257679Z ##[error]: TESTS FAILED:
2021-06-18T16:19:11.0258368Z ##[error]: main@ /source/ci/format_pre.sh :47 (glint)
2021-06-18T16:19:11.0258931Z ##[error]: Please fix your editor to ensure:
2021-06-18T16:19:11.0260121Z ##[error]:   - no trailing whitespace
2021-06-18T16:19:11.0260851Z ##[error]:   - no mixed tabs/spaces
2021-06-18T16:19:11.0261554Z ##[error]:   - all files end with a newline
2021-06-18T16:19:12.1375137Z ##[error]Bash exited with code '1'.

ankatare

comment created time in 2 hours

Pull request review comment elixir-ecto/ecto

Add Ecto.Enum.mappings/2

 defmodule Ecto.EnumTest do
                   %{
                     on_dump: %{bar: "baar", baz: "baaz", foo: "fooo"},
                     on_load: %{"baar" => :bar, "baaz" => :baz, "fooo" => :foo},
-                    values: [:foo, :bar, :baz],
+                    mappings: [foo: "fooo", bar: "baar", baz: "baaz"],

My reason for choosing a keyword list over a map is that the user inputs a keyword list when creating an Ecto.Enum field, so it seemed consistent to return exactly what the user provided. Yet, as you pointed out, this doesn't reflect the exact mappings used for on_dump and on_load (those are maps, and the last duplicate key would take precedence). I opened another PR that prevents the user from supplying keyword lists with duplicate keys.

v0idpwn

comment created time in 3 hours

PR opened elixir-ecto/ecto

Raise on duplicate values in Ecto.Enum

Duplicate values mean that only the last of the duplicates is taken into account, because the keyword list is cast into a map for the on_load and on_dump values. With this commit we can catch this scenario at compile time.

Pointed out by @wojtekmach on #3676

+55 -0

0 comment

2 changed files

pr created time in 3 hours

pull request comment envoyproxy/envoy

remove support for v2 UNSUPPORTED_REST_LEGACY

@htuch yeah... actually the build is failing locally. Will push soon.

ankatare

comment created time in 3 hours

Pull request review comment envoyproxy/envoy

Listener: reset the file event before initialize new one

 class HttpInspectorTest : public testing::Test {
   HttpInspectorTest()
       : cfg_(std::make_shared<Config>(store_)),
         io_handle_(std::make_unique<Network::IoSocketHandleImpl>(42)) {}
-  ~HttpInspectorTest() override { io_handle_->close(); }
+  ~HttpInspectorTest() override {
+    io_handle_->close();
+    filter_.reset();
+    EXPECT_EQ(false, io_handle_->isFileEventInitialized());

Sorry, I'm not sure I understand your comment. Do you mean I should find a way to call initializeFileEvent again, and then test the case using the ASSERT?

soulxu

comment created time in 4 hours

Pull request review comment envoyproxy/envoy

Listener: reset the file event before initialize new one

 using ConfigSharedPtr = std::shared_ptr<Config>;
 class Filter : public Network::ListenerFilter, Logger::Loggable<Logger::Id::filter> {
 public:
   Filter(const ConfigSharedPtr config);
+  ~Filter() override {
+    if (cb_) {
+      cb_->socket().ioHandle().resetFileEvents();

Yes, it could be owned by another filter. For example, if we enable the TLS inspector and the HTTP inspector at the same time, the TLS inspector succeeds, and then the HTTP inspector times out, both filters will reset the file event in their destructors. But it should be fine, since the reset can be executed multiple times.

soulxu

comment created time in 4 hours

issue closed envoyproxy/envoy

Extend post path normalization actions to other normalization transformations

Presently, only decoding %2F allows rejecting or redirecting requests. This needs to be extended to other operations.

Extends Issue #6589

Action item for CVE-2021-29492

closed time in 4 hours

yanavlasov

issue comment envoyproxy/envoy

Extend post path normalization actions to other normalization transformations

This issue has been automatically closed because it has not had activity in the last 37 days. If this issue is still valid, please ping a maintainer and ask them to label it as "help wanted" or "no stalebot". Thank you for your contributions.

yanavlasov

comment created time in 4 hours

issue comment envoyproxy/envoy

envoy access log clarification

This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or "no stalebot" or other activity occurs. Thank you for your contributions.

debbyku

comment created time in 4 hours

Pull request review comment envoyproxy/envoy

listener: refactor active tcp socket and active tcp listener

 #include "source/common/common/linked_object.h"
 #include "source/common/stream_info/stream_info_impl.h"
 #include "source/server/active_listener_base.h"
+#include "source/server/active_tcp_socket.h"
 
 namespace Envoy {
 namespace Server {
 
 struct ActiveTcpConnection;
 using ActiveTcpConnectionPtr = std::unique_ptr<ActiveTcpConnection>;
-struct ActiveTcpSocket;
-using ActiveTcpSocketPtr = std::unique_ptr<ActiveTcpSocket>;
 class ActiveConnections;
 using ActiveConnectionsPtr = std::unique_ptr<ActiveConnections>;

We still have the issue of TypedActiveStreamListenerBase and different collections. I would prefer that the two collections share a base class and eliminate the need for template base classes like this one.

lambdai

comment created time in 5 hours

Pull request review comment envoyproxy/envoy

listener: refactor active tcp socket and active tcp listener

 class ActiveStreamListenerBase : public ActiveListenerImplBase {
   /**
    * Create a new connection from a socket accepted by the listener.
    */
-  virtual void newConnection(Network::ConnectionSocketPtr&& socket,
-                             std::unique_ptr<StreamInfo::StreamInfo> stream_info) PURE;
+  void newConnection(Network::ConnectionSocketPtr&& socket,
+                     std::unique_ptr<StreamInfo::StreamInfo> stream_info) {
+    // Find matching filter chain.
+    const auto filter_chain = config_->filterChainManager().findFilterChain(*socket);
+    if (filter_chain == nullptr) {
+      RELEASE_ASSERT(socket->addressProvider().remoteAddress() != nullptr, "");
+      ENVOY_LOG(debug, "closing connection from {}: no matching filter chain found",
+                socket->addressProvider().remoteAddress()->asString());
+      stats_.no_filter_chain_match_.inc();
+      stream_info->setResponseFlag(StreamInfo::ResponseFlag::NoRouteFound);
+      stream_info->setResponseCodeDetails(
+          StreamInfo::ResponseCodeDetails::get().FilterChainNotFound);
+      emitLogs(*config_, *stream_info);
+      socket->close();
+      return;
+    }
+    stream_info->setFilterChainName(filter_chain->name());
+    auto transport_socket = filter_chain->transportSocketFactory().createTransportSocket(nullptr);
+    stream_info->setDownstreamSslConnection(transport_socket->ssl());
+    auto server_conn_ptr = dispatcher().createServerConnection(
+        std::move(socket), std::move(transport_socket), *stream_info);
+    if (const auto timeout = filter_chain->transportSocketConnectTimeout();
+        timeout != std::chrono::milliseconds::zero()) {
+      server_conn_ptr->setTransportSocketConnectTimeout(timeout);
+    }
+    server_conn_ptr->setBufferLimits(config_->perConnectionBufferLimitBytes());
+    RELEASE_ASSERT(server_conn_ptr->addressProvider().remoteAddress() != nullptr, "");
+    const bool empty_filter_chain = !config_->filterChainFactory().createNetworkFilterChain(
+        *server_conn_ptr, filter_chain->networkFilterFactories());
+    if (empty_filter_chain) {
+      ENVOY_CONN_LOG(debug, "closing connection from {}: no filters", *server_conn_ptr,
+                     server_conn_ptr->addressProvider().remoteAddress()->asString());
+      server_conn_ptr->close(Network::ConnectionCloseType::NoFlush);
+    }
+    newActiveConnection(*filter_chain, std::move(server_conn_ptr), std::move(stream_info));
+  }
+
+  virtual void newActiveConnection(const Network::FilterChain& filter_chain,
+                                   Network::ServerConnectionPtr server_conn_ptr,

Please add method comment.

lambdai

comment created time in 6 hours

Pull request review comment envoyproxy/envoy

listener: refactor active tcp socket and active tcp listener

 class ActiveStreamListenerBase : public ActiveListenerImplBase {
   /**
    * Create a new connection from a socket accepted by the listener.
    */
-  virtual void newConnection(Network::ConnectionSocketPtr&& socket,
-                             std::unique_ptr<StreamInfo::StreamInfo> stream_info) PURE;
+  void newConnection(Network::ConnectionSocketPtr&& socket,
+                     std::unique_ptr<StreamInfo::StreamInfo> stream_info) {
+    // Find matching filter chain.
+    const auto filter_chain = config_->filterChainManager().findFilterChain(*socket);
+    if (filter_chain == nullptr) {
+      RELEASE_ASSERT(socket->addressProvider().remoteAddress() != nullptr, "");
+      ENVOY_LOG(debug, "closing connection from {}: no matching filter chain found",
+                socket->addressProvider().remoteAddress()->asString());
+      stats_.no_filter_chain_match_.inc();
+      stream_info->setResponseFlag(StreamInfo::ResponseFlag::NoRouteFound);
+      stream_info->setResponseCodeDetails(
+          StreamInfo::ResponseCodeDetails::get().FilterChainNotFound);
+      emitLogs(*config_, *stream_info);
+      socket->close();
+      return;
+    }
+    stream_info->setFilterChainName(filter_chain->name());
+    auto transport_socket = filter_chain->transportSocketFactory().createTransportSocket(nullptr);
+    stream_info->setDownstreamSslConnection(transport_socket->ssl());
+    auto server_conn_ptr = dispatcher().createServerConnection(
+        std::move(socket), std::move(transport_socket), *stream_info);
+    if (const auto timeout = filter_chain->transportSocketConnectTimeout();
+        timeout != std::chrono::milliseconds::zero()) {
+      server_conn_ptr->setTransportSocketConnectTimeout(timeout);
+    }
+    server_conn_ptr->setBufferLimits(config_->perConnectionBufferLimitBytes());
+    RELEASE_ASSERT(server_conn_ptr->addressProvider().remoteAddress() != nullptr, "");
+    const bool empty_filter_chain = !config_->filterChainFactory().createNetworkFilterChain(
+        *server_conn_ptr, filter_chain->networkFilterFactories());
+    if (empty_filter_chain) {
+      ENVOY_CONN_LOG(debug, "closing connection from {}: no filters", *server_conn_ptr,
+                     server_conn_ptr->addressProvider().remoteAddress()->asString());
+      server_conn_ptr->close(Network::ConnectionCloseType::NoFlush);
+    }
+    newActiveConnection(*filter_chain, std::move(server_conn_ptr), std::move(stream_info));
+  }
+
+  virtual void newActiveConnection(const Network::FilterChain& filter_chain,

nit: protected:

This method shouldn't be accessed directly from outside the class.

lambdai

comment created time in 6 hours

pull request comment envoyproxy/envoy

[http] add llhttp parser implementation

Please see precheck format presubmit error.

/wait

2021-06-17T20:30:38.8262526Z ERROR: ./docs/root/version_history/current.rst:82: Version history not in alphabetical order (crash support vs admission control): please check placement of line
2021-06-17T20:30:38.8264348Z  * admission control: added :ref:`admission control <envoy_v3_api_field_extensions.filters.http.admission_control.v3alpha.AdmissionControl.rps_threshold>` option that when average RPS of the sampling window is below this threshold, the filter will not throttle requests. Added :ref:`admission control <envoy_v3_api_field_extensions.filters.http.admission_control.v3alpha.AdmissionControl.max_rejection_probability>` option to set an upper limit on the probability of rejection.. 
asraa

comment created time in 6 hours

issue closed erlang/rebar3

Transient crash when upgrading a custom compiler plugin

The issue

When upgrading a custom compiler plugin (it compiles JSON Schema v4 to Erlang), a transient crash (undef) happens. It occurs consistently when an upgrade actually happens, but works fine (and re-compiles fine) when you re-run.

I'll try and compile a minimal test case, if necessary.

Environment

Unfortunately, I'm not able to open-source the project yet, but I'm working on it. The plugin is a run-of-the-mill rebar_compiler behaviour; it does nothing weird and calls any plugin-specific code only in the compile callback -- otherwise it's pretty similar to e.g. rebar_compiler_yrl.

Rebar3 report
 version 3.15.1
 generated at 2021-04-20T07:58:25+00:00
=================
Please submit this along with your issue at https://github.com/erlang/rebar3/issues (and feel free to edit out private information, if any)
-----------------
Task: plugins
Entered as:
  plugins upgrade jsstyped_rebar
-----------------
Operating System: win32
ERTS: Erlang/OTP 23 [erts-11.2] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1]
Root Directory: c:/Program Files/erl-23.3
Library directory: c:/Program Files/erl-23.3/lib
-----------------
Loaded Applications:
bbmustache: 1.10.0
certifi: 2.5.3
cf: 0.3.1
common_test: 1.20
compiler: 7.6.7
crypto: 4.9
cth_readable: 1.5.1
dialyzer: 4.3.1
edoc: 0.12
erlware_commons: 1.4.0
eunit: 2.6
eunit_formatters: 0.5.0
getopt: 1.0.1
hipe: 4.0.1
inets: 7.3.2
kernel: 7.3
providers: 1.8.1
public_key: 1.10
relx: 4.4.0
sasl: 4.0.2
snmp: 5.8
ssl_verify_fun: 1.1.6
stdlib: 3.14.1
syntax_tools: 2.5
tools: 3.4.4

-----------------
Escript path: c:/Users/alex0/Documents/Sources/InnoChain/innobpapi/../rebar3
Providers:
  app_discovery as clean compile compile cover ct deps dialyzer do edoc escriptize eunit get-deps help install install_deps list lock new path pkgs release relup report repos shell state tar tree unlock update upgrade upgrade upgrade version xref

Current behaviour

Running the command after the ref has updated causes a crash.

DIAGNOSTIC=1 rebar3 plugins upgrade jsstyped_rebar
===> Load global config file c:/Users/alex0/.config/rebar3/rebar.config
===> Setting paths to [deps]
===> Compile (apps)
===> Setting paths to [plugins]
===> Setting paths to [deps]
===> Setting paths to [plugins]
===> Setting paths to [plugins]
===> Expanded command sequence to be run: []
===> Running provider: do
===> Expanded command sequence to be run: [{plugins,upgrade}]
===> Running provider: {plugins,upgrade}
===> Getting definition for package root from repo hexpm (#{api_url => <<"https://hex.pm/api">>,name => <<"hexpm">>,
         repo_name => <<"hexpm">>,repo_organization => undefined,
         repo_url => <<"https://repo.hex.pm">>,repo_verify => true,
         repo_verify_origin => true})
===> Hex get_package request failed: {ok,
                                             {403,
                                              #{<<"accept-ranges">> =>
                                                 <<"bytes">>,
                                                <<"age">> => <<"0">>,
                                                <<"connection">> =>
                                                 <<"keep-alive">>,
                                                <<"content-length">> =>
                                                 <<"243">>,
                                                <<"content-type">> =>
                                                 <<"application/xml">>,
                                                <<"date">> =>
                                                 <<"Tue, 20 Apr 2021 07:56:09 GMT">>,
                                                <<"server">> => <<"AmazonS3">>,
                                                <<"via">> => <<"1.1 varnish">>,
                                                <<"x-amz-id-2">> =>
                                                 <<"YoKrIbQztO4SKzdpN+lSLB0SleMjweFmTEbixfgsvtGD4xVIBAfNcMPFAmjub0VYrHt3yp3iqRg=">>,
                                                <<"x-amz-request-id">> =>
                                                 <<"TEWM59EZ2W52YHES">>,
                                                <<"x-cache">> =>
                                                 <<"MISS, MISS">>,
                                                <<"x-cache-hits">> =>
                                                 <<"0, 0">>,
                                                <<"x-served-by">> =>
                                                 <<"cache-dca17765-DCA, cache-fra19177-FRA">>,
                                                <<"x-timer">> =>
                                                 <<"S1618905370.711057,VS0,VE125">>},
                                              <<"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>TEWM59EZ2W52YHES</RequestId><HostId>YoKrIbQztO4SKzdpN+lSLB0SleMjweFmTEbixfgsvtGD4xVIBAfNcMPFAmjub0VYrHt3yp3iqRg=</HostId></Error>">>}}
===> Failed to update package root from repo hexpm
===> Failed to fetch updates for package root from repo hexpm
===> sh info:
        cwd: "c:/Users/alex0/Documents/Sources/InnoChain/innobpapi"
        cmd: git --version

===>    opts: []

===> Port Cmd: cmd /q /c git --version
Port Opts: [exit_status,
            {line,16384},
            use_stdio,stderr_to_stdout,hide,eof,binary]

===> sh info:
        cwd: "c:/Users/alex0/Documents/Sources/InnoChain/innobpapi"
        cmd: git fetch origin v011_support

===>    opts: [{cd,"c:/Users/alex0/Documents/Sources/InnoChain/innobpapi/_build/default/plugins/jsstyped_rebar"}]

===> Port Cmd: cmd /q /c git fetch origin v011_support
Port Opts: [{cd,"c:/Users/alex0/Documents/Sources/InnoChain/innobpapi/_build/default/plugins/jsstyped_rebar"},
            exit_status,
            {line,16384},
            use_stdio,stderr_to_stdout,hide,eof,binary]

===> sh info:
        cwd: "c:/Users/alex0/Documents/Sources/InnoChain/innobpapi"
        cmd: git log HEAD..origin/v011_support --oneline

===>    opts: [{cd,"c:/Users/alex0/Documents/Sources/InnoChain/innobpapi/_build/default/plugins/jsstyped_rebar"}]

===> Port Cmd: cmd /q /c git log HEAD..origin/v011_support --oneline
Port Opts: [{cd,"c:/Users/alex0/Documents/Sources/InnoChain/innobpapi/_build/default/plugins/jsstyped_rebar"},
            exit_status,
            {line,16384},
            use_stdio,stderr_to_stdout,hide,eof,binary]

===> Checking git branch v011_support for updates
===> Upgrading jsstyped_rebar (from {git,"git@gitlab.com:InnoChain/jsstyped_rebar.git",
                          {branch,"v011_support"}})
===> sh info:
        cwd: "c:/Users/alex0/Documents/Sources/InnoChain/innobpapi"
        cmd: git clone  git@gitlab.com:InnoChain/jsstyped_rebar.git .tmp_dir248202452179 -b v011_support --single-branch

===>    opts: [{cd,"c:/Users/alex0/AppData/Local/Temp"}]

===> Port Cmd: cmd /q /c git clone  git@gitlab.com:InnoChain/jsstyped_rebar.git .tmp_dir248202452179 -b v011_support --single-branch
Port Opts: [{cd,"c:/Users/alex0/AppData/Local/Temp"},
            exit_status,
            {line,16384},
            use_stdio,stderr_to_stdout,hide,eof,binary]

===> sh info:
        cwd: "c:/Users/alex0/Documents/Sources/InnoChain/innobpapi"
        cmd: rd /q /s "c:\\Users\\alex0\\Documents\\Sources\\InnoChain\\innobpapi\\_build\\default\\plugins\\jsstyped_rebar"

===>    opts: [{use_stdout,false},return_on_error]

===> Port Cmd: cmd /q /c rd /q /s "c:\\Users\\alex0\\Documents\\Sources\\InnoChain\\innobpapi\\_build\\default\\plugins\\jsstyped_rebar"
Port Opts: [exit_status,
            {line,16384},
            use_stdio,stderr_to_stdout,hide,eof,binary]

===> Moving checkout "c:/Users/alex0/AppData/Local/Temp/.tmp_dir248202452179" to "c:/Users/alex0/Documents/Sources/InnoChain/innobpapi/_build/default/plugins/jsstyped_rebar"
===> sh info:
        cwd: "c:/Users/alex0/Documents/Sources/InnoChain/innobpapi"
        cmd: robocopy /move /e "c:\\Users\\alex0\\AppData\\Local\\Temp\\.tmp_dir248202452179" "c:\\Users\\alex0\\Documents\\Sources\\InnoChain\\innobpapi\\_build\\default\\plugins\\jsstyped_rebar" 1> nul

===>    opts: [{use_stdout,false},return_on_error]

===> Port Cmd: cmd /q /c robocopy /move /e "c:\\Users\\alex0\\AppData\\Local\\Temp\\.tmp_dir248202452179" "c:\\Users\\alex0\\Documents\\Sources\\InnoChain\\innobpapi\\_build\\default\\plugins\\jsstyped_rebar" 1> nul
Port Opts: [exit_status,
            {line,16384},
            use_stdio,stderr_to_stdout,hide,eof,binary]

===> No upgrade needed for jsx
===> Compile (plugins)
===> Running hooks for compile in app jsstyped_rebar (c:/Users/alex0/Documents/Sources/InnoChain/innobpapi/_build/default/plugins/jsstyped_rebar) with configuration:
===>    {pre_hooks, []}.
===> run_hooks("c:/Users/alex0/Documents/Sources/InnoChain/innobpapi/_build/default/plugins/jsstyped_rebar", pre_hooks, compile) -> no hooks defined

===> Running hooks for erlc_compile in app jsstyped_rebar (c:/Users/alex0/Documents/Sources/InnoChain/innobpapi/_build/default/plugins/jsstyped_rebar) with configuration:
===>    {pre_hooks, []}.
===> run_hooks("c:/Users/alex0/Documents/Sources/InnoChain/innobpapi/_build/default/plugins/jsstyped_rebar", pre_hooks, erlc_compile) -> no hooks defined

===> Setting paths to [deps]
===> Analyzing applications...
===> Uncaught error in rebar_core. Run with DIAGNOSTIC=1 to see stacktrace or consult rebar3.crashdump
===> Uncaught error: undef
===> When submitting a bug report, please include the output of `rebar3 report "your command"`

Expected behaviour

Not crashing, probably -- this is such a weird issue.

closed time in 6 hours

ElectronicRU

issue comment erlang/rebar3

Transient crash when upgrading a custom compiler plugin

Unfortunately, yeah :( I think this can be closed for now, I wasn't able to find a smaller breaking example. Once I negotiate releasing the plugin code to open source land, I'll reopen this.

ElectronicRU

comment created time in 6 hours

Pull request review comment envoyproxy/envoy

upstream: update host's socket factory when metadata is updated.

 class HostDescriptionImpl : virtual public HostDescription,   Outlier::DetectorHostMonitorPtr outlier_detector_;   HealthCheckHostMonitorPtr health_checker_;   std::atomic<uint32_t> priority_;-  Network::TransportSocketFactory& socket_factory_;+  std::reference_wrapper<Network::TransportSocketFactory> socket_factory_;

This needs an ABSL_GUARDED_BY annotation as well? Should this use the same mutex as metadata?

cpakulski

comment created time in 6 hours

push event envoyproxy/envoy

Christoph Pakulski

commit sha 4533ea1897c278477836c72ad5a124148aff4d10

postgres: validate message syntax before parsing (#16575)

Commit Message: Validate postgres messages before parsing.
Additional Description: Introduced InSync and OutOfSync states in decoder. When decoder detects a wrongly formatted message, it stops parsing and moves to OutOfSync state. Continuing parsing after encountering message with wrong syntax may lead to interpreting random bytes as length of the message and possibly causing OOM.
Risk Level: Low
Testing: Added unit tests and run full regression tests.
Docs Changes: No.
Release Notes: No.
Platform Specific Features: Fixes #12340
Signed-off-by: Christoph Pakulski <christoph@tetrate.io>

view details

push time in 6 hours

PR merged envoyproxy/envoy

postgres: validate message syntax before parsing

Commit Message: Validate postgres messages before parsing.

Additional Description: Introduced InSync and OutOfSync states in the decoder. When the decoder detects a wrongly formatted message, it stops parsing and moves to the OutOfSync state. Continuing to parse after encountering a message with wrong syntax may lead to interpreting random bytes as the length of the message, possibly causing OOM.

Risk Level: Low
Testing: Added unit tests and ran full regression tests.
Docs Changes: No.
Release Notes: No.
Platform Specific Features: Fixes #12340

+960 -442

9 comments

10 changed files

cpakulski

pr closed time in 6 hours

issue closed envoyproxy/envoy

[postgres_proxy] assert failure with untrusted buffer when onData()

I'm currently working on a fuzz test (which generates random bytes for onData() and onWrite() to see whether we can crash Envoy) for network-level filters. When I was testing the postgres_proxy filter with some untrusted data, an assert failure occurred inside linearize: RELEASE_ASSERT(size <= length(), "Linearize size exceeds buffer size"); https://github.com/envoyproxy/envoy/blob/master/source/extensions/filters/network/postgres_proxy/postgres_decoder.cc#L201

This error only happens in the fuzzer or when the upstream server is in a bad state, so it is not security-critical for now. But I think we could deal with this error more gracefully, making the filter more robust to upstream errors and enabling the fuzzer to continue testing it.

My idea is to handle it like the other invalid-input cases in this file, i.e. return false; before calling linearize. The solution looks like this (from line 200):

auto bytesToRead = length - 4;
if (bytesToRead > data.length()) {
  return false;
}
message.assign(std::string(static_cast<char*>(data.linearize(bytesToRead)), bytesToRead));

This issue can be reproduced in a unit test by adding a case like the one below (test/extensions/filters/network/postgres_proxy/postgres_decoder_test.cc):

TEST_P(PostgresProxyFrontendEncrDecoderTest, AssertFailure) {
  std::string str_data;
  for (int i = 0; i < 8; i++) {
    str_data.push_back('\0');
  }
  Buffer::OwnedImpl data(str_data);
  decoder_->onData(data, false);
}

If anyone has a better idea, please share with me or make a pull request and link it here. Thanks! /cc @dio /cc @fabriziomello /cc @cpakulski

closed time in 6 hours

jianwen612

Pull request review comment envoyproxy/envoy

HTTP2 Proactive GOAWAY on Drain - Preamble

 template <typename... CallbackArgs> class CallbackManager {
   const std::shared_ptr<bool> still_alive_{std::make_shared<bool>(true)};
 };
 
+/**
+ * @brief Utility class for managing callbacks across multiple threads.
+ *
+ * Callback registration (via locks) and callback execution (via dispatchers and shared_ptr's) are
+ * thread-safe. However the main ThreadSafeCallbackManager instance is assumed to exist on a single
+ * thread during it's lifetime, the same as the provided dispatcher it is constructed with.
+ *
+ * @note This is not safe to use for instances in which the lifetimes of the threads registering
+ * callbacks is less than the thread that owns the callback manager due to an assumption that
+ * dispatchers registered alongside callbacks remain valid, even if the callback expires.
+ *
+ * @see CallbackManager for a non-thread-safe version
+ */
+class ThreadSafeCallbackManager {
+  struct CallbackHolder;
+  using CallbackListEntry = std::tuple<std::weak_ptr<CallbackHolder>, Event::Dispatcher&>;
+
+public:
+  using Callback = std::function<void()>;
+
+  /**
+   * @param dispatcher Dispatcher relevant to the thread in which the callback manager is
+   * created/managed
+   */
+  explicit ThreadSafeCallbackManager(Event::Dispatcher& dispatcher) : dispatcher_(dispatcher) {}
+
+  /**
+   * @brief Add a callback.
+   * @param dispatcher Dispatcher from the same thread as the registered callback. This will be used
+   *                   to schedule the execution of the callback.
+   * @param callback callback to add
+   * @return ThreadSafeCallbackHandlePtr a handle that can be used to remove the callback.
+   */
+  ABSL_MUST_USE_RESULT ThreadSafeCallbackHandlePtr add(Event::Dispatcher& dispatcher,
+                                                       Callback callback);

Going back to the code in the followup PR: https://github.com/envoyproxy/envoy/pull/16201/files#r650306747

I wonder if there's some potential for child callbacks to not need to use weak_ptr themselves to tell if it is safe to invoke the main body of the callback. Here's an idea:

  • Require that ThreadSafeCallbackHandlePtr be deleted in the dispatcher thread by calling ASSERT(dispatcher.isThreadSafe()); from the ThreadSafeCallbackHandlePtr destructor
  • Since ThreadSafeCallbackHandlePtr destruction and the callback both execute in the dispatcher thread, you can tell if the callback is still usable if a still_alive owned by the ThreadSafeCallbackHandlePtr is still alive.

I think this would also allow you to use unique_ptr instead of shared_ptr for ThreadSafeCallbackHandlePtr. Also, I think that would allow you to return unique_ptr from DrainManagerImpl::createChildManager

murray-stripe

comment created time in 7 hours

Pull request review comment envoyproxy/envoy

HTTP2 Proactive GOAWAY on Drain - Preamble

+#include "source/common/common/callback_impl.h"
+
+namespace Envoy {
+namespace Common {
+
+ThreadSafeCallbackHandlePtr ThreadSafeCallbackManager::add(Event::Dispatcher& dispatcher,
+                                                           Callback callback) {
+  Thread::LockGuard lock(lock_);
+  auto new_callback = std::make_shared<CallbackHolder>(*this, callback);
+  callbacks_.push_back(CallbackListEntry(std::weak_ptr<CallbackHolder>(new_callback), dispatcher));
+  // Get the list iterator of added callback handle, which will be used to remove itself from
+  // callbacks_ list.
+  new_callback->it_ = (--callbacks_.end());
+  return new_callback;
+}
+
+void ThreadSafeCallbackManager::runCallbacks() {
+  Thread::LockGuard lock(lock_);
+  for (auto it = callbacks_.cbegin(); it != callbacks_.cend();) {
+    auto& [cb, cb_dispatcher] = *(it++);
+
+    // sanity check cb is valid before attempting to schedule a dispatch
+    if (cb.expired()) {
+      continue;
+    }
+
+    cb_dispatcher.post([cb = cb] {
+      // Once we're running on the thread that scheduled the callback, validate the
+      // callback is still valid and execute
+      std::shared_ptr<CallbackHolder> cb_shared = cb.lock();
+      if (cb_shared != nullptr) {
+        cb_shared->cb_();
+      }
+    });
+  }
+}
+
+size_t ThreadSafeCallbackManager::size() const noexcept {
+  Thread::LockGuard lock(lock_);
+  return callbacks_.size();
+}
+
+void ThreadSafeCallbackManager::remove(typename std::list<CallbackListEntry>::iterator& it) {
+  Thread::LockGuard lock(lock_);
+  callbacks_.erase(it);
+}
+
+ThreadSafeCallbackManager::CallbackHolder::CallbackHolder(ThreadSafeCallbackManager& parent,
+                                                          Callback cb)
+    : cb_(cb), parent_dispatcher_(parent.dispatcher_), parent_(parent),
+      still_alive_(parent.still_alive_) {}
+
+ThreadSafeCallbackManager::CallbackHolder::~CallbackHolder() {
+  parent_dispatcher_.post([still_alive = still_alive_, &parent = parent_, it = it_]() mutable {
+    // We're running on the same thread the parent is managed on. We can assume there is
+    // no race between checking if alive and calling remove.
+    if (!still_alive.expired()) {
+      parent.remove(it);
+    }
+  });
+}

You may need some second opinions on some of these things, but here's an issue I see: if you can't guarantee that the parent is still around, you also risk ending up in a state where the parent dispatcher is already deleted. I would consider changing the code structure so the parent outlives the CallbackHolder, even if that implies use of shared_from_this and keeping a shared_ptr to ThreadSafeCallbackManager in the callback holder.

Some revamps of the Envoy shutdown sequence are needed in order to simplify some of these threading and object interactions.

murray-stripe

comment created time in 8 hours

delete branch elixir-ecto/db_connection

delete branch : josevalim-patch-1

delete time in 7 hours

pull request comment elixir-ecto/db_connection

Update spec

:green_heart: :blue_heart: :purple_heart: :yellow_heart: :heart:

josevalim

comment created time in 7 hours

push event elixir-ecto/db_connection

José Valim

commit sha 6850853977a08ca9bbc7fc5d9d2b07f028720a6c

Update spec (#242) Closes #241

view details

push time in 7 hours

PR merged elixir-ecto/db_connection

Update spec

Closes #241

+4 -2

0 comment

1 changed file

josevalim

pr closed time in 7 hours

issue closed elixir-ecto/db_connection

Typespec Inconsistency

The return value here is a 5-element tuple, rather than the 4-element tuple declared in the typespec:

The typespec:

https://github.com/elixir-ecto/db_connection/blob/4d7e789943f1c3a0b54b871f3807a04303e666dc/lib/db_connection/holder.ex#L50

The offenders:

https://github.com/elixir-ecto/db_connection/blob/4d7e789943f1c3a0b54b871f3807a04303e666dc/lib/db_connection/holder.ex#L57

https://github.com/elixir-ecto/db_connection/blob/4d7e789943f1c3a0b54b871f3807a04303e666dc/lib/db_connection/holder.ex#L77

closed time in 7 hours

isaacsanders

PR opened elixir-ecto/db_connection

Update spec

Closes #241

+4 -2

0 comment

1 changed file

pr created time in 7 hours

create branch elixir-ecto/db_connection

branch : josevalim-patch-1

created branch time in 7 hours