If you are wondering where the data of this site comes from, please visit https://api.github.com/users/envoyproxy/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.

envoyproxy/envoy 17986

Cloud-native high-performance edge/middle/service proxy

envoyproxy/protoc-gen-validate 1668

protoc plugin to generate polyglot message validators

envoyproxy/ratelimit 1314

Go/gRPC service designed to enable generic rate limit scenarios from different types of applications.

envoyproxy/go-control-plane 929

Go implementation of data-plane-api

envoyproxy/envoy-mobile 454

Client HTTP and networking library based on the Envoy project for iOS, Android, and more.

envoyproxy/data-plane-api 437

[READ ONLY MIRROR] Envoy REST/proto API definitions and documentation.

envoyproxy/envoy-filter-example 216

Example of consuming Envoy and adding a custom filter

envoyproxy/envoy-wasm 198

ATTENTION: The content of this repo has been merged into https://github.com/envoyproxy/envoy and future development is happening there.

envoyproxy/java-control-plane 196

Java implementation of an Envoy gRPC control plane

envoyproxy/nighthawk 177

L7 (HTTP/HTTPS/HTTP2) performance characterization tool

pull request comment envoyproxy/go-control-plane

Actually wire up Envoy logging to ALS for integration test cases

@alecholmez Do you mind taking a look? The change is a bit lengthier (e.g. touching all the bootstrap YAMLs) than I'd like it to be. However, due to https://github.com/envoyproxy/envoy/issues/3660, it seems the ALS cluster can only be configured as a static resource.

Let me know your thoughts. Thanks!

unicell

comment created time in 2 minutes

fork akshayjshah/envoy

Cloud-native high-performance edge/middle/service proxy

https://www.envoyproxy.io

fork in 10 minutes

Pull request review comment envoyproxy/envoy

oauth2: Allow to customize cookie names

 struct FilterStats {
   ALL_OAUTH_FILTER_STATS(GENERATE_COUNTER_STRUCT)
 };

+/**
+ * Helper structure to hold custom cookie names.
+ */
+struct CookieNames {
+  CookieNames(
+      const envoy::extensions::filters::http::oauth2::v3alpha::OAuth2Credentials::CookieNames&
+          cookie_names)
+      : CookieNames(cookie_names.bearer_token(), cookie_names.oauth_hmac(),
+                    cookie_names.oauth_expires()) {}
+
+  CookieNames(const std::string& bearer_token, const std::string& oauth_hmac,
+              const std::string& oauth_expires)
+      : bearer_token_(bearer_token.empty() ? "BearerToken" : bearer_token),
+        oauth_hmac_(oauth_hmac.empty() ? "OauthHMAC" : oauth_hmac),
+        oauth_expires_(oauth_expires.empty() ? "OauthExpires" : oauth_expires) {}
+
+  std::string bearer_token_;
+  std::string oauth_hmac_;
+  std::string oauth_expires_;

can they be const? :D

dio

comment created time in 18 minutes
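For reference, the change being asked for is mechanical: members that are only assigned in the member-initializer list can be declared const. A minimal sketch of the struct from the diff with that applied (one trade-off to note: const members make the struct non-copy-assignable):

    struct CookieNames {
      // Constructors unchanged: const members can still be initialized in the
      // member-initializer list; they just cannot be reassigned afterwards.
      CookieNames(const std::string& bearer_token, const std::string& oauth_hmac,
                  const std::string& oauth_expires)
          : bearer_token_(bearer_token.empty() ? "BearerToken" : bearer_token),
            oauth_hmac_(oauth_hmac.empty() ? "OauthHMAC" : oauth_hmac),
            oauth_expires_(oauth_expires.empty() ? "OauthExpires" : oauth_expires) {}

      const std::string bearer_token_;
      const std::string oauth_hmac_;
      const std::string oauth_expires_;
    };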


pull request comment envoyproxy/envoy

protos: add support for go_option package

Locally running ./tools/api/generate_go_protobuf.py is a useful way to debug these issues

remyleone

comment created time in 24 minutes

pull request comment envoyproxy/envoy

protos: add support for go_option package

I see some issues like:

path: undefined v3

After applying the rules_go update with this, I think there's some misunderstanding in the generated Go code about that, but you'll probably have some better ideas there.

remyleone

comment created time in 26 minutes

pull request comment envoyproxy/envoy

protos: add support for go_option package

So we need to update this. It turns out rules_go already bumps this, so now we have ourselves a circular dependency of PRs. I've been waiting on this for https://github.com/envoyproxy/envoy/pull/17438, but we'll have to combine them.

remyleone

comment created time in 37 minutes

pull request comment envoyproxy/envoy

protos: add support for go_option package

Looks like this hit this issue https://github.com/golang/protobuf/issues/1276 which has since been fixed there (by removing that false positive warning entirely).

remyleone

comment created time in 39 minutes

pull request comment envoyproxy/envoy

http: Add support for appending request/response headers only if they are absent

@mattklein123 I think the scope of this PR has become huge. Will it be okay to create separate smaller PRs to do the ExtAuthZ and RateLimit changes? I can also create a separate PR for Mutation changes. It'd be easier to get reviews on smaller PRs.

agrawroh

comment created time in 43 minutes

issue closed envoyproxy/envoy

Question: Can envoy mirror tcp traffic when acting as a tcp proxy?

I'm running envoy in tcp proxy mode. I know envoy can mirror traffic when acting in http mode, but not sure if that is possible in tcp proxy mode. I don't see any documentation which suggests mirroring is supported in tcp mode, but I still wanted to ask.

closed time in an hour

samitpal

issue comment envoyproxy/envoy

Question: Can envoy mirror tcp traffic when acting as a tcp proxy?

Ok, thanks! Closing the issue.

samitpal

comment created time in an hour

push event envoyproxy/envoy-mobile

Mike Schore

commit sha eae479913c0a646261ceb959fbc8c53a3075a4cc

remove hack

Signed-off-by: Mike Schore <mike.schore@gmail.com>

view details

push time in an hour

issue closed envoyproxy/ratelimit

Leverage ratelimit to ban IPs permanently

Hi,

I am using the envoyproxy ratelimit service in conjunction with Istio to perform global rate limiting in the istio-ingressgateway pods that sit in front of my infrastructure inside a Kubernetes cluster. My goal is to be able to get the IPs that are being rate limited, store them in Prometheus, and have some process that reads from Prometheus and applies logic such as banning those IPs permanently via a cloud service like Google Cloud Armor or similar.

I am using this action in my "rate_limits filter"

    rate_limits:
      - actions:
        - request_headers:
            descriptor_key: remote_address_second
            header_name: x-envoy-external-address
        - destination_cluster: {}

So this is what the entry in the Redis store looks like:

entrypoint-v1-entrypoint_remote_address_second_188.2.75.xx_destination_cluster_outbound|80||$this_is_the_destination_cluster_1627892892

But with statsd-prom-exporter activated in Prometheus I am able to see just this (there is no IP), so I can't have all the dimensions in the metric:

ratelimit.service.rate_limit.entrypoint-v1-entrypoint.remote_address_minute.destination_cluster_outbound|80||$this_is_the_destination_cluster.over_limit: 62

Is there any way I can "see" the rate limit actions in the statsd metric (:6070/stats), in this case the IP (header x-envoy-external-address)?

Is there any modification needed in this code to achieve that?

closed time in an hour

santinoncs

issue comment envoyproxy/ratelimit

Leverage ratelimit to ban IPs permanently

This issue has been automatically closed because it has not had activity in the last 37 days. If this issue is still valid, please ping a maintainer and ask them to label it as "help wanted" or "no stalebot". Thank you for your contributions.

santinoncs

comment created time in an hour

Pull request review comment envoyproxy/envoy

access logging: introduce critical ALS endpoint

 #pragma once

+#include <chrono>
 #include <memory>

+#include "envoy/common/time.h"
 #include "envoy/data/accesslog/v3/accesslog.pb.h"
 #include "envoy/event/dispatcher.h"
 #include "envoy/extensions/access_loggers/grpc/v3/als.pb.h"
 #include "envoy/grpc/async_client_manager.h"
 #include "envoy/local_info/local_info.h"
 #include "envoy/service/accesslog/v3/als.pb.h"
+#include "envoy/stats/stats_macros.h"
 #include "envoy/thread_local/thread_local.h"

+#include "source/common/common/linked_object.h"
+#include "source/common/grpc/buffered_async_client.h"
 #include "source/extensions/access_loggers/common/grpc_access_logger.h"

 namespace Envoy {
 namespace Extensions {
 namespace AccessLoggers {
 namespace GrpcCommon {

+static constexpr absl::string_view GRPC_LOG_STATS_PREFIX = "access_logs.grpc_access_log.";
+
+#define CRITICAL_ACCESS_LOGGER_GRPC_CLIENT_STATS(COUNTER, GAUGE)                                   \
+  COUNTER(critical_logs_message_timeout)                                                           \
+  COUNTER(critical_logs_nack_received)                                                             \
+  COUNTER(critical_logs_ack_received)                                                              \
+  GAUGE(pending_critical_logs, Accumulate)
+
+struct GrpcCriticalAccessLogClientGrpcClientStats {
+  CRITICAL_ACCESS_LOGGER_GRPC_CLIENT_STATS(GENERATE_COUNTER_STRUCT, GENERATE_GAUGE_STRUCT)
+};
+
+class GrpcCriticalAccessLogClient {
+public:
+  using RequestType = envoy::service::accesslog::v3::CriticalAccessLogsMessage;
+  using ResponseType = envoy::service::accesslog::v3::CriticalAccessLogsResponse;
+
+  struct CriticalLogStream : public Grpc::AsyncStreamCallbacks<ResponseType> {
+    explicit CriticalLogStream(GrpcCriticalAccessLogClient& parent) : parent_(parent) {}
+
+    // Grpc::AsyncStreamCallbacks
+    void onCreateInitialMetadata(Http::RequestHeaderMap&) override {}
+    void onReceiveInitialMetadata(Http::ResponseHeaderMapPtr&&) override {}
+    void onReceiveMessage(std::unique_ptr<ResponseType>&& message) override {
+      const auto& id = message->id();
+
+      switch (message->status()) {
+      case envoy::service::accesslog::v3::CriticalAccessLogsResponse::ACK:
+        parent_.stats_.critical_logs_ack_received_.inc();
+        parent_.stats_.pending_critical_logs_.dec();
+        parent_.client_->onSuccess(id);
+        break;
+      case envoy::service::accesslog::v3::CriticalAccessLogsResponse::NACK:
+        parent_.stats_.critical_logs_nack_received_.inc();
+        parent_.client_->onError(id);
+        break;
+      default:
+        return;
+      }
+    }
+    void onReceiveTrailingMetadata(Http::ResponseTrailerMapPtr&&) override {}
+    void onRemoteClose(Grpc::Status::GrpcStatus, const std::string&) override {
+      parent_.client_->cleanup();
+    }
+
+    GrpcCriticalAccessLogClient& parent_;
+  };
+
+  class InflightMessageTtlManager {
+  public:
+    InflightMessageTtlManager(Event::Dispatcher& dispatcher,
+                              GrpcCriticalAccessLogClientGrpcClientStats& stats,
+                              Grpc::BufferedAsyncClient<RequestType, ResponseType>& client,
+                              std::chrono::milliseconds message_ack_timeout)
+        : dispatcher_(dispatcher), message_ack_timeout_(message_ack_timeout), stats_(stats),
+          client_(client), timer_(dispatcher_.createTimer([this] { callback(); })) {}
+
+    ~InflightMessageTtlManager() { timer_->disableTimer(); }
+
+    void setDeadline(absl::flat_hash_set<uint32_t>&& ids) {
+      auto expires_at = dispatcher_.timeSource().monotonicTime() + message_ack_timeout_;
+      deadline_.emplace(expires_at, std::move(ids));
+      timer_->enableTimer(message_ack_timeout_);

what if the timer is already enabled? I imagine that if the flush interval is shorter than message_ack_timeout_, this setDeadline could be called repeatedly before the timer triggers, and as a result the callback would never run? (Maybe I'm worrying too much though :D)

Shikugawa

comment created time in 2 hours
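For context, Envoy's Event::Timer does expose an enabled() accessor, and enableTimer() on an already-armed timer resets it, which is exactly the postponement the reviewer describes. One way to address the concern is to arm the timer only when it is idle; a minimal illustrative sketch, not the PR's code, assuming callback() re-arms the timer for the next outstanding deadline:

    void setDeadline(absl::flat_hash_set<uint32_t>&& ids) {
      const auto expires_at = dispatcher_.timeSource().monotonicTime() + message_ack_timeout_;
      deadline_.emplace(expires_at, std::move(ids));
      // Arming the timer on every call would keep pushing the callback out
      // whenever setDeadline() is invoked more often than message_ack_timeout_.
      // Only arm it when no callback is pending.
      if (!timer_->enabled()) {
        timer_->enableTimer(message_ack_timeout_);
      }
    }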


Pull request review comment envoyproxy/envoy

access logging: introduce critical ALS endpoint

(Same diff hunk as in the previous review comment, continuing past setDeadline():)

+    }
+
+  private:
+    void callback() {
+      const auto now = dispatcher_.timeSource().monotonicTime();
+      std::vector<MonotonicTime> expired_timepoints;
+      absl::flat_hash_set<uint32_t> expired_message_ids;
+
+      // Extract timeout message ids.
+      auto it = deadline_.lower_bound(now);
+      while (it != deadline_.end()) {
+        for (auto&& id : it->second) {
+          expired_message_ids.emplace(id);
+        }
+        expired_timepoints.push_back(it->first);
+        ++it;
+      }
+
+      // Clear buffered message ids on the set of waiting timeout.
+      for (auto&& timepoint : expired_timepoints) {
+        deadline_.erase(timepoint);
+      }
+
+      // Restore pending messages to buffer due to timeout.
+      for (auto&& id : expired_message_ids) {
+        const auto& message_buffer = client_.messageBuffer();
+
+        if (message_buffer.find(id) == message_buffer.end()) {
+          continue;
+        }
+
+        auto& message = message_buffer.at(id);
+        if (message.first == Grpc::BufferState::PendingFlush) {

In what case do we face this branch? Maybe worth a comment.

Shikugawa

comment created time in 2 hours

Pull request review comment envoyproxy/envoy

access logging: introduce critical ALS endpoint

(Same diff hunk as above; this comment is anchored at the setDeadline() line:)

+    void setDeadline(absl::flat_hash_set<uint32_t>&& ids) {
+      auto expires_at = dispatcher_.timeSource().monotonicTime() + message_ack_timeout_;

nit

      const auto expires_at = dispatcher_.timeSource().monotonicTime() + message_ack_timeout_;
Shikugawa

comment created time in 2 hours

Pull request review comment envoyproxy/envoy

access logging: introduce critical ALS endpoint

(Same diff hunk as above; this comment covers the whole callback() body quoted in the earlier review comment, which ends:)

+      // Restore pending messages to buffer due to timeout.
+      for (auto&& id : expired_message_ids) {
+        const auto& message_buffer = client_.messageBuffer();
+
+        if (message_buffer.find(id) == message_buffer.end()) {
+          continue;
+        }
+
+        auto& message = message_buffer.at(id);
+        if (message.first == Grpc::BufferState::PendingFlush) {
+          client_.onError(id);
+          stats_.critical_logs_message_timeout_.inc();
+        }
+      }

I imagine this can be simplified and made less heavy, something like this:

      const auto now = dispatcher_.timeSource().monotonicTime();
      auto it = deadline_.lower_bound(now);
      const auto& message_buffer = client_.messageBuffer();
      while (it != deadline_.end()) {
        for (auto&& id : it->second) {
          const auto message = message_buffer.find(id);
          if (message == message_buffer.end()) {
            continue;
          }
          if (message->second.first == Grpc::BufferState::PendingFlush) {
            client_.onError(id);
            stats_.critical_logs_message_timeout_.inc();
          }
        }
        ++it;
      }
      deadline_.erase(deadline_.begin(), it);
Shikugawa

comment created time in 2 hours


Pull request review comment envoyproxy/envoy

TLS: add plumbing for getting SNI from the TLS socket

 const std::string& SslHandshakerImpl::subjectPeerCertificate() const {
   return cached_subject_peer_certificate_;
 }

+const std::string& SslHandshakerImpl::requestedServerName() const {

Should we populate the SNI into ConnectionInfoProvider? https://github.com/envoyproxy/envoy/blob/e7ebb0290d5527a002761d9d0aff51cb5f7c98cc/envoy/network/socket.h#L132-L135

The TLS inspector filter is doing the same thing; otherwise, we will have multiple places to access the SNI.

yanavlasov

comment created time in 2 hours
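For reference, a minimal sketch of what that suggestion could look like from the handshaker side, assuming a setter such as setRequestedServerName is exposed via the socket's connection info (the setter name mirrors what the TLS inspector needs and is an assumption here, not part of this diff; ssl() and socket stand in for the handshaker's SSL handle and connection socket):

    // Once the handshake has parsed the ClientHello, mirror the SNI into the
    // socket's connection info so all consumers read it from a single place.
    const char* sni = SSL_get_servername(ssl(), TLSEXT_NAMETYPE_host_name);
    if (sni != nullptr) {
      socket.connectionInfoProvider().setRequestedServerName(sni);
    }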


pull request comment envoyproxy/envoy

extension: new extensions for stateful session sticky

Some problems to be solved:

🤔 Should this new API be route level or filter level? In the current implementation, the new API is at the filter level, as part of the router filter API. But should it be part of the route?

🤔 I added a new extension to support different types of session state, and provided a cookie-based session state as a basic implementation, which will need maintainer support.

wbpcode

comment created time in 2 hours

started envoyproxy/protoc-gen-validate

started time in 2 hours

pull request comment envoyproxy/envoy

extension: new extensions for stateful session sticky

cc @mattklein123 @snowp

wbpcode

comment created time in 2 hours

pull request comment envoyproxy/envoy

extension: new extensions for stateful session sticky

CC @envoyproxy/api-shepherds: Your approval is needed for changes made to api/envoy/. envoyproxy/api-shepherds assignee is @adisuissa CC @envoyproxy/api-watchers: FYI only for changes made to api/envoy/.

Caused by: https://github.com/envoyproxy/envoy/pull/18207 was opened by wbpcode.

wbpcode

comment created time in 2 hours

PR opened envoyproxy/envoy

extension: new extensions for stateful session sticky

Commit Message: "extension: new extensions for stateful session sticky" Additional Description:

PR after #17848 and #17290.

In #17290, we added a new cross-priority host map for fast host searching. In #17848, we extended the LoadBalancerContext interface to provide an override host and to select the upstream host by that override host.

Finally, in this PR, we add a new API that allows users to extract session state from the request and change the result of load balancing by setting the override host value.

Related doc: https://docs.google.com/document/d/1IU4b76AgOXijNa4sew1gfBfSiOMbZNiEt5Dhis8QpYg/edit?usp=sharing
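As a rough illustration of the mechanism described above (all identifiers here are hypothetical, not the PR's actual API): a session-state extension extracts a pinned upstream address from the request, and the load balancer context reports it as the override host:

    // Hypothetical sketch, not the PR's actual code. If the request carried a
    // valid session cookie pinning e.g. "10.0.0.5:8080", report that address
    // so host selection short-circuits to it; otherwise return nullopt and
    // fall back to the configured load balancing policy.
    absl::optional<absl::string_view> overrideHostToSelect() const {
      if (session_address_.empty()) {
        return absl::nullopt;
      }
      return absl::make_optional<absl::string_view>(session_address_);
    }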

Risk Level: Mid. Testing: Added. Docs Changes: Waiting. Release Notes: Waiting. Platform Specific Features: N/A.

+782 -1

0 comment

28 changed files

pr created time in 2 hours

started envoyproxy/envoy

started time in 2 hours