
scalyr/kafka-connect-scalyr 9

Scalyr Kafka Connect Sink

GitSullied/scalyr-agent-2 1

The source code for Scalyr Agent 2, the daemon process Scalyr customers run on their servers to collect metrics and logs.

czerwingithub/karma-html2js-preprocessor 0

A Karma plugin. Convert HTML files into JS strings to serve them in a script tag.

czerwingithub/microservices-demo 0

Sample cloud-native microservices application composed of 10 tiers

czerwingithub/openssl 0

TLS/SSL and crypto library

czerwingithub/scalyr-agent-2 0

The source code for Scalyr Agent 2, the daemon process Scalyr customers run on their servers to collect metrics and logs.

czerwingithub/scalyr-aws-serverless 0

Holds AWS Lambda functions used to integrate various AWS services (CloudWatch Logs) with Scalyr

scalyr/scalyr-chef 0

Chef recipes for installing the Scalyr Agent

delete branch scalyr/scalyr-agent-2

delete branch: healthcheck_changelog

delete time in 6 days

push event scalyr/scalyr-agent-2

yanscalyr

commit sha cc57209c51a568113a614c2dc1c9802f5b36b1f8

Release notes and change log for health check (#612). Added documentation notes for the new health check feature.


push time in 6 days

Pull request review comment scalyr/scalyr-agent-2

Release notes and change log for health check

 ## 2.1.8 - TBD

+* The `status -v` and the new `status -H` command contain health check information and will have a return code
+ of `2` if the health check is failing.
+
+ The health check considers the time since the Agent last attempted to upload logs, these attempts don't need to

End the sentence at "upload logs" and start a new sentence with "These attempts".

yanscalyr

comment created time in 6 days

Pull request review comment scalyr/scalyr-agent-2

Release notes and change log for health check

 ## 2.1.8 - TBD

+* The `status -v` and the new `status -H` command contain health check information and will have a return code
+ of `2` if the health check is failing.
+
+ The health check considers the time since the Agent last attempted to upload logs, these attempts don't need to
+ succeed to be considered healthy. The default time before the Agent is considered unhealthy after not making any
+ attempts is `60.0` seconds, this can be changed with the `healthy_max_time_since_last_copy_attempt` configuration

End the sentence at "60.0 seconds." and start the new sentence with "This".

yanscalyr

comment created time in 6 days

Pull request review comment scalyr/scalyr-agent-2

Release notes and change log for health check

 Scalyr Agent 2 Changes By Release

 Packaged by Steven Czerwinski <czerwin@scalyr.com> on Aug 3, 2020 12:30 -0800
 --->

+Features:
+* `status -v` command now contains health check information, and will have a return code of `2` if the health check has failed. New optional flag for the `status` CLI command `-H` returns a short status with only health check info, and new configuration feature `healthy_max_time_since_last_copy_attempt` defines how many seconds is acceptable for the Agent to not attempt to send up logs before the health check should fail, defaulting to `60.0`. For more information, please refer to the release notes document.

End the sentence after "health check info" and start the next sentence with "A new configuration feature...".

yanscalyr

comment created time in 6 days

Pull request review comment scalyr/scalyr-agent-2

Release notes and change log for health check

 Scalyr Agent 2 Changes By Release

 Packaged by Steven Czerwinski <czerwin@scalyr.com> on Aug 3, 2020 12:30 -0800
 --->

+Features:
+* `status -v` command now contains health check information, and will have a return code of `2` if the health check has failed. New optional flag for the `status` CLI command `-H` returns a short status with only health check info, and new configuration feature `healthy_max_time_since_last_copy_attempt` defines how many seconds is acceptable for the Agent to not attempt to send up logs before the health check should fail, defaulting to `60.0`. For more information, please refer to the release notes document.

Put the word "The" in front of status -v.

yanscalyr

comment created time in 6 days

push event scalyr/scalyr-agent-2

czerwingithub

commit sha 4f4e42f9f1764eb0daed659aec2234f8ecdeeb64

Bug/duplicate upload due to race (#607). Fixed a somewhat subtle bug that would lead to a log file being uploaded multiple times. The bug is triggered when pipelined requests are enabled. Essentially, if the list of log processors changes between when an `/addEvents` request is created and when its response is processed, we could associate the wrong callback with the wrong log file. In some cases, this results in a log file being mistakenly recorded as "done". During the next scan, this "done" file would be noticed again, appearing to the Scalyr Agent as if it were a completely new log file, resulting in that log file being completely reuploaded. The fix is straightforward: use a key for mapping each callback to its log processor that remains stable even if the list of log processors changes.


push time in 6 days

delete branch scalyr/scalyr-agent-2

delete branch: bug/duplicateUploadDueToRace

delete time in 6 days

PR merged scalyr/scalyr-agent-2

Bug/duplicate upload due to race

Background

In Scalyr Agent release 2.1.6, we essentially turned on pipelining upload requests by default. Pipelining is when we start to prepare the next /addEvents upload request (i.e., reading log content, serializing the log bytes into a network request, etc.) while still waiting on the response from the previous request. This allows us to overlap the round-trip time of the first request with the construction of the second. As long as the first request is accepted by the server, we can immediately turn around and send the second request.
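
To make the overlap concrete, here is a minimal sketch of a pipelined upload loop (illustrative only, not the agent's actual code; `build_request` and `send_request` stand in for the real request construction and network send):

```python
import threading

def upload_loop(build_request, send_request):
    """Overlap building request N+1 with the round trip of request N.

    `build_request` returns the next request, or None when there is nothing
    left to upload; `send_request` sends one request and returns True if the
    server accepted it.  Both are stand-ins for the real agent machinery.
    """
    next_request = build_request()
    while next_request is not None:
        prepared = {}

        def prepare():
            # Runs while the current request is in flight.
            prepared["request"] = build_request()

        worker = threading.Thread(target=prepare)
        worker.start()
        accepted = send_request(next_request)  # round-trip time of request N
        worker.join()
        if not accepted:
            break  # stop pipelining once a request is rejected
        next_request = prepared["request"]
```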

The pipelining feature has been in our code base for several years now and has been used by customers who needed this performance optimization to maximize their upload throughput. By making it the default, we now have a larger number of customers using it. Unfortunately, it turned out there was a somewhat subtle bug in how we constructed pipelined requests that caused issues for customers with particular log patterns.

Bug

The essence of the bug is that we erroneously rely on the list of log processors staying the same from when we create an /addEvents request to when we receive the response for that request. That's because we keep a map of the callbacks that must be invoked for all of the log files with content in that request, keyed by the index of the log processor associated with each log file in the processor list. If the processor list changes between when we create the request and when we receive its response, then we will mistakenly associate the wrong callbacks with the wrong log processors (log files).

In non-pipelined requests, this is not an issue because that state can't really change between when the request is created and when we get the response: no other work is done while that request is in flight. However, with pipelined requests, the second request is created before the first request's response is processed. If processing the first request's response results in changes to the log processor list (such as removing a log processor because its log file has been deleted or become stale), then the second request's callbacks will be invoked on the wrong processors.

Symptoms

This bug manifested in a very puzzling manner. The end result was that a given log file would be reuploaded to Scalyr from the beginning (i.e., byte offset zero). This was caused by a fairly complex chain of events. At the core, because the wrong callbacks were being invoked for the wrong log file, a given log file would be marked as "done" even though it wasn't really done. Typically, a log file is only considered done when it has been deleted from the file system (otherwise, we are always expecting new bytes to show up in the log file). So, when the Scalyr Agent recorded a log file as done, but then saw it again when it next scanned for new log files -- it would assume it was a completely new log file. After all, the old one had been "deleted" from the file system (or so it thought). This resulted in the Scalyr Agent reuploading the log file from the start.

Solution

The solution is straightforward. Instead of mapping which callbacks are associated with which LogProcessors based on their position in the LogProcessor list, we use a more stable key. In particular, LogProcessors already have a unique id. We use that as the key to the callback dictionary. Since this key is stable even when the LogProcessor list changes, the callbacks can no longer be mixed up.
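
To illustrate the fix with a minimal sketch (invented names, not the agent's actual code): keying the callback map by list position breaks when the list changes while a request is in flight, whereas keying by a stable unique id does not.

```python
class LogProcessor(object):
    _next_id = 0

    def __init__(self, path):
        self.path = path
        LogProcessor._next_id += 1
        # Stable for the lifetime of the processor, unlike its list position.
        self.unique_id = LogProcessor._next_id

# Fragile: keyed by index. If a processor is removed while a request is in
# flight, indices shift and callbacks fire against the wrong processors.
def callbacks_by_index(processors, make_callback):
    return dict((i, make_callback(p)) for i, p in enumerate(processors))

# Stable: keyed by unique id. Insertions/removals in the list don't matter.
def callbacks_by_id(processors, make_callback):
    return dict((p.unique_id, make_callback(p)) for p in processors)
```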

+142 -18

2 comments

4 changed files

czerwingithub

pr closed time in 6 days

Pull request review comment scalyr/scalyr-agent-2

Bug/duplicate upload due to race

 def __init__(
         # happens).  However, this is how the old agent worked and the UI relies on it, so we just keep the old system
         # going for now.
         self.__thread_name = "Lines for file %s" % file_path
+
+        # Note: thread id is also used as a unique identifier for LogFileProcessors (see #CT-107)
+        # So even if "thread_id" is no longer used by the Scalyr API, we still need it as a unique
+        # integer identifier for the LogFileProcessor
         self.__thread_id = LogFileProcessor.generate_unique_id()

Yeah, Imron and I had that exact same mini-debate. We agreed this is good enough for now.

czerwingithub

comment created time in 6 days

Pull request review comment scalyr/scalyr-agent-2

Bug/duplicate upload due to race

 Scalyr Agent 2 Changes By Release
 =================================

-## 2.1.8 "TBD" - July 10, 2020
+## 2.1.8 "Titan" - August 3, 2020

 <!---
-Packaged by Steven Czerwinski <czerwin@scalyr.com> on Jul 20, 2020 08:30 -0800
+Packaged by Steven Czerwinski <czerwin@scalyr.com> on Aug 3, 2020 12:30 -0800
 --->

+Bug fixes:
+* Fixed race condition in pipelined requests which could lead to duplicate log upload, especially for systems with a large number of inactive log files.  Log files would be reuploaded from their start over short period of time (seconds to minutes).

Good point. I'd added that note to this comment.

czerwingithub

comment created time in 6 days

push event scalyr/scalyr-agent-2

Steven Czerwinski

commit sha 0a45d33718dfaf5b2d295e74868eaf67368527bc

Address review comments. Added notes to the change log to identify which customer configurations could trigger the bug. Also added a comment to the code to help explain the importance of a stable mapping.


push time in 6 days

pull request comment scalyr/scalyr-agent-2

Bug/duplicate upload due to race

@ArthurKamalov @Kami I'd appreciate it if one of you could take a look at this. This is the fix for the duplicate log upload issue and we are looking to cut a release on Monday. Thanks.

czerwingithub

comment created time in 7 days

PR opened scalyr/scalyr-agent-2

Bug/duplicate upload due to race


+139 -18

0 comments

4 changed files

pr created time in 7 days

create branch scalyr/scalyr-agent-2

branch: bug/duplicateUploadDueToRace

created branch time in 7 days

Pull request review comment scalyr/scalyr-agent-2

Fix pyinstaller spec monitors

 ACTIVESTATE SOFTWARE INC. ("ACTIVESTATE") IS WILLING TO LICENSE THE SOFTWARE ONL

    a. You are granted worldwide, perpetual, paid up, royalty free, non-exclusive rights to install and use the Software subject to the terms and conditions contained herein.

-   b. You may: (i) copy the Software for archival purposes, (ii) copy the Software for personal use purposes, (iii) use, copy, and distribute the Software solely for Your organization's internal non-production use and or internal business operation purposes including copying the Software to other workstations inside Your organization, (iv) redistribute parts of the Software outside of Your organization only as part of a Wrapped Application utilizing executable generators such as PerlApp, Perl2Exe, PAR, TclApp, py2app, or py2exe. Any copy must contain the original Software's proprietary notices in unaltered form. "Wrapped Application" means a single-file executable wherein all binary components are encapsulated in a single binary however You may not expose the base programming language as a scripting language within your own application program to end users.
+   b. You may: (i) copy the Software for archival purposes, (ii) copy the Software for personal use purposes, (iii) use, copy, and distribute the Software solely for Your organization's internal non-production use and or internal business operation purposes including copying the Software to other workstations inside Your organization, (iv) redistribute parts of the Software outside of Your organization only as part of a Wrapped Application utilizing executable generators such as PerlApp, Perl2Exe, PAR, TclApp, py2app, py2exe or PyInstaller. Any copy must contain the original Software's proprietary notices in unaltered form. "Wrapped Application" means a single-file executable wherein all binary components are encapsulated in a single binary however You may not expose the base programming language as a scripting language within your own application program to end users.

I don't think we should be changing the text of another library's LICENSE. We should revert this change. After all, they do say you cannot alter the notices in any form.

In practice, we don't need to have PyInstaller listed explicitly here anyway because they say "executable generators such as " -- it is not meant to be an exhaustive list.

ArthurKamalov

comment created time in 7 days

Pull request review comment scalyr/scalyr-agent-2

Fix pyinstaller spec monitors

 from PyInstaller.utils.hooks import collect_submodules

 block_cipher = None

-windows_monitors = ['windows_process_metrics', 'windows_system_metrics', 'windows_event_log_monitor']
+windows_monitors = [

This is a duplicate of what we do here: https://github.com/scalyr/scalyr-agent-2/blob/master/setup.py#L78

I suspect there's not an easy way to have both setup.py and this file point at a common definition of this. If that's true, can you at least add comments in both places referencing the other location so we know we have to update both?

You should know I also created a static pylint checker that makes sure the one in setup.py is up to date. So, if we ever get rid of setup.py, we should change that checker to look at this location. You could add that to the comment. See this: https://github.com/scalyr/scalyr-agent-2/blob/master/pylint_plugins/py2exe_checker.py#L44

ArthurKamalov

comment created time in 9 days

push event scalyr/scalyr-agent-2

Steven Czerwinski

commit sha 360478f4175b12b20b5889561d72de2fb3772ffc

Added testcase for bug causing duplicate log upload issue


push time in 10 days

Pull request review comment scalyr/scalyr-aws-serverless

fix permissions

+## 1.0.9 - July 23, 2020
+
+Bug fixes:
+* Fixed issue causing "RequestEntityTooLarge" exceptions when updating the Lambda policy in accounts
+with a large number of log groups. Previously, the each log group was given permission to invoke the Lambda.
+Now, all log groups within the AWS account are allowed to invoke the Lambda..

Remove second period.

ArthurKamalov

comment created time in 16 days

Pull request review comment scalyr/scalyr-aws-serverless

fix permissions

+## 1.0.9 - July 23, 2020
+
+Bug fixes:
+* Fixed issue causing "RequestEntityTooLarge" exceptions when updating the Lambda policy in accounts
+with a large number of log groups. Previously, the each log group was given permission to invoke the Lambda.

remove "the" before "each".

ArthurKamalov

comment created time in 16 days

Pull request review comment scalyr/scalyr-aws-serverless

fix permissions

+## 1.0.9 - July 23, 2020
+
+Bug fixes:
+* Previously, the permissions for the log groups were added one by one

I would write this as

Fixed issue causing "RequestEntityTooLarge" exceptions when updating the Lambda policy in accounts with a large number of log groups. Previously, each log group was given permission to invoke the Lambda. Now, all log groups within the AWS account are allowed to invoke the Lambda.

ArthurKamalov

comment created time in 17 days

Pull request review comment scalyr/scalyr-aws-serverless

fix permissions

+## 1.0.9 - July 23, 2020
+
+Bug fixes:
+* Previously, the permissions for the log groups were added one by one
+ and in case if amount of group was large enough it would cause permission size limitation error.
+ Now the permission to invoke Streamer lambda is added for all log groups.
+
+Misc:
+* Added documentation for the manual test deployment.
+
+## 1.0.8 - July 02, 2020
+
+* Update documentation with new config option, plus release and development notes
+
+## 1.0.7 - March 24, 2020
+
+* Simple formatting changing the order of execution for sampling and timestamp
+* Add tests

Let's also add:

## 1.0.6 - January 28, 2020

Feature:
* Added sampling_rules and redaction_rules to the available log group options. Allows sampling and redacting log lines before being sent to Scalyr.
ArthurKamalov

comment created time in 17 days

Pull request review comment scalyr/scalyr-aws-serverless

fix permissions

+## 1.0.9 - July 23, 2020
+
+Bug fixes:
+* Previously, the permissions for the log groups were added one by one
+ and in case if amount of group was large enough it would cause permission size limitation error.
+ Now the permission to invoke Streamer lambda is added for all log groups.
+
+Misc:
+* Added documentation for the manual test deployment.
+
+## 1.0.8 - July 02, 2020
+
+* Update documentation with new config option, plus release and development notes

How about:

Feature:
* Added new `attributes` log option allowing you to add fixed attributes to all events belonging to a log group.
ArthurKamalov

comment created time in 17 days

Pull request review comment scalyr/scalyr-aws-serverless

fix permissions

+## 1.0.9 - July 23, 2020
+
+Bug fixes:
+* Previously, the permissions for the log groups were added one by one
+ and in case if amount of group was large enough it would cause permission size limitation error.
+ Now the permission to invoke Streamer lambda is added for all log groups.
+
+Misc:
+* Added documentation for the manual test deployment.
+
+## 1.0.8 - July 02, 2020
+
+* Update documentation with new config option, plus release and development notes
+
+## 1.0.7 - March 24, 2020
+
+* Simple formatting changing the order of execution for sampling and timestamp

How about:

Feature:
* Added `prefix_timestamp` log group option which will prefix all log lines with the event timestamp.
ArthurKamalov

comment created time in 17 days

Pull request review comment scalyr/scalyr-agent-2

CT-109: Unmatched group error

 def process_line(self, input_line):
         modified_it = False

-        for redaction_rule in self.__redaction_rules:
-            (input_line, redaction) = self.__apply_redaction_rule(
-                input_line, redaction_rule
-            )
-            modified_it = modified_it or redaction
+        try:
+            for redaction_rule in self.__redaction_rules:
+                (input_line, redaction) = self.__apply_redaction_rule(
+                    input_line, redaction_rule
+                )
+                modified_it = modified_it or redaction
+        except re.error as e:
+            if e.message == "unmatched group":  # pylint: disable=no-member
+                log.error(
+                    "Error while getting the default gateway: %s. Please make sure any redaction rules only reference groups that are guaranteed to match.",
+                    six.text_type(e.message),  # pylint: disable=no-member
+                    limit_once_per_x_secs=300,
+                    limit_key="redaction_unmatched_group",
+                )
+            else:
+                log.error(
+                    "Error while applying redaction rule: %s",
+                    six.text_type(e.message),  # pylint: disable=no-member

Let's also add in the text of the redaction rule here.

yanscalyr

comment created time in 18 days

Pull request review comment scalyr/scalyr-agent-2

CT-109: Unmatched group error

 def process_line(self, input_line):
         modified_it = False

-        for redaction_rule in self.__redaction_rules:
-            (input_line, redaction) = self.__apply_redaction_rule(
-                input_line, redaction_rule
-            )
-            modified_it = modified_it or redaction
+        try:
+            for redaction_rule in self.__redaction_rules:
+                (input_line, redaction) = self.__apply_redaction_rule(
+                    input_line, redaction_rule
+                )
+                modified_it = modified_it or redaction
+        except re.error as e:
+            if e.message == "unmatched group":  # pylint: disable=no-member
+                log.error(
+                    "Error while getting the default gateway: %s. Please make sure any redaction rules only reference groups that are guaranteed to match.",

Default gateway? I think you mean something like "Error while executing redaction rules".

Also, let's include the text of the redaction rule as well here.

yanscalyr

comment created time in 18 days

Pull request review comment scalyr/scalyr-aws-serverless

fix permissions

 def process_create_log_group_event(event):
         else:
             LOGGER.info(f"Loaded LogGroupOptions: " + json.dumps(log_group_options))

+        lambda_add_permission(os.environ['DESTINATION_ARN'], os.environ['AWS_ACCOUNT_ID'],os.environ['AWS_REGION'])

Do we need to do this in response to a "create log group" event? Since we already added a permission during stack creation to let all log groups invoke this lambda, we should be fine (I think).

ArthurKamalov

comment created time in 18 days

Pull request review comment scalyr/scalyr-aws-serverless

fix permissions

 def lambda_add_permission(log_group_name, destination_arn, account_id, region):
         LAMBDA.add_permission(
             FunctionName=destination_arn,
             # Uses the seed to generate a reproducible alphanumeric string
-            StatementId=uuid5(UUID_SEED, log_group_name).hex,
+            StatementId=uuid5(UUID_SEED, STREAMER_FUNCTION_PERMISSION_NAME).hex,
             Action='lambda:InvokeFunction',
             Principal=f"logs.{region}.amazonaws.com",
-            SourceArn=f"arn:aws:logs:{region}:{account_id}:log-group:{log_group_name}:*",
+            SourceArn=f"arn:aws:logs:{region}:{account_id}:*",
             SourceAccount=account_id
         )
     except LAMBDA.exceptions.ResourceConflictException as e:
         # The statement id provided already exists, we must have added the permission in a previous run
         LOGGER.info(f"Warning, lambda permission already exists. No action taken: {e}")
     except:
-        LOGGER.exception(f"Error adding lambda permission: {log_group_name}")
+        LOGGER.exception(f"Error adding lambda permission: {STREAMER_FUNCTION_PERMISSION_NAME}")
         raise
     else:
-        LOGGER.info(f"Added lambda permission: {log_group_name}")
+        LOGGER.info(f"Added lambda permission: {STREAMER_FUNCTION_PERMISSION_NAME}")

-def lambda_remove_permission(log_group_name, destination_arn):
+def lambda_remove_permission(destination_arn):
     """Removes the lambda:InvokeFunction permission statement from the Streamer Lambda
     fuction, the logGroup will no longer be allowed to deliver logEvents

Remove references to logGroup in this documentation.

ArthurKamalov

comment created time in 18 days

Pull request review comment scalyr/scalyr-agent-2

AGENT-386: Health check functionality for Kubernetes.

 def generate_status(self, warn_on_rate_limit=False):
             for entry in self.__log_matchers:
                 result.log_matchers.append(entry.generate_status())

+            if self.__config.enable_health_check:
+                result.health_check_result = "Good"
+                if (
+                    time.time()
+                    > self.__last_attempt_time

Yeah, I agree w/ Tomaz. It doesn't make sense to not always have this on. My apologies for not also thinking of that earlier.

yanscalyr

comment created time in a month

Pull request review comment scalyr/scalyr-agent-2

AGENT-386: Health check functionality for Kubernetes.

 def register_for_status_requests(self, handler):
         """
         self.__status_handler = handler

+    def register_for_health_check(self, handler):

I don't think you should need this anymore.

yanscalyr

comment created time in a month

Pull request review comment scalyr/scalyr-agent-2

AGENT-386: Health check functionality for Kubernetes.

 def __init__(self):
         self.__termination_handler = None
         # The method to invoke when status is requested by another process.
         self.__status_handler = None
+        # The method to invoke when health check is requested by another process.
+        self.__health_check_handler = None

I don't think you should need this anymore.

yanscalyr

comment created time in a month

Pull request review comment scalyr/scalyr-agent-2

AGENT-386: Health check functionality for Kubernetes.

 def register_for_status_requests(self, handler):
         """
         pass

+    def register_for_health_check(self, handler):

I don't think you should need this anymore.

yanscalyr

comment created time in a month

Pull request review comment scalyr/scalyr-agent-2

AGENT-386: Health check functionality for Kubernetes.

 def __run(self, controller):
         # a file because a user has run the 'detailed_status' command.
         self.__controller.register_for_status_requests(self.__report_status_to_file)

+        # Register handler for when we get an interrupt signal.  That indicates we should dump the status to
+        # a file because a user has run the 'detailed_status' command.
+        self.__controller.register_for_health_check(self.__report_health_to_file)

Do you need this anymore?

yanscalyr

comment created time in a month

Pull request review comment scalyr/scalyr-agent-2

AGENT-386: Health check functionality for Kubernetes.

 _SERVICE_DESCRIPTION_ = "Collects logs and metrics and forwards them to Scalyr.com"

 # A custom control message that is used to signal the agent should generate a detailed status report.
 _SERVICE_CONTROL_DETAILED_REPORT_ = win32service.SERVICE_USER_DEFINED_CONTROL - 1
+# A custom control message that is used to signal the agent should generate a health check report.
+_SERVICE_CONTROL_HEALTH_CHECK_ = win32service.SERVICE_USER_DEFINED_CONTROL - 2

I don't think you should need this anymore. We should just use the normal status -v signal.

yanscalyr

comment created time in a month

Pull request review comment scalyr/scalyr-agent-2

AGENT-386: Health check functionality for Kubernetes.

 def __calculate_overall_stats(

         return result

-    def __report_status_to_file(self):
-        # type: () -> str
+    def __report_status_to_file(self, health_check=False):

I don't think you really need to pass in health_check anymore, right? The format should already be set to JSON and so we should just return back the normal JSON status object.

yanscalyr

comment created time in a month

Pull request review comment scalyr/scalyr-agent-2

AGENT-386: Health check functionality for Kubernetes.

 def __report_status_to_file(self):

         return final_file_path

+    def __report_health_to_file(self):

I don't believe you should need this anymore.

yanscalyr

comment created time in a month

Pull request review comment scalyr/scalyr-agent-2

AGENT-386: Health check functionality for Kubernetes.

 def strip_domain_from_default_server_host(self):
         """
         return self.__get_config().get_bool("strip_domain_from_default_server_host")

+    @property

Add test cases to configuration_test.py for your two new options here.

yanscalyr

comment created time in a month

Pull request review comment scalyr/scalyr-agent-2

AGENT-386: Health check functionality for Kubernetes.

 def __detailed_status(self, data_directory, status_format="text"):
             )
             return 1

+        return_code = 0
         fp = open(status_file)
         for line in fp:
             print(line.rstrip())
+            if health_check and line.rstrip() != "Health check: Good":

I think, even if we are doing status -v (so not an explicit health check), we should return a non-zero status code if the health check has failed. We should also return a different return code, like 2. That way, scripts can tell the difference between the agent not running and a failed health check.

Of course, if we do status -v and the agent does not have health checks enabled, then we shouldn't return a status code based on the health check. We could do that by only emitting the health check information in the status -v output if the health check is enabled. Then, in this code, we just look for a line that starts with Health check and see if it is good or bad... setting some variable to True or False depending on the status. If we don't find any line that starts with Health check then we set that variable to None.

We should also be sure this handles the JSON format for the status correctly as well.
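
A minimal sketch of that parsing logic (names assumed, not the actual patch):

```python
def health_from_status_lines(lines):
    # None means no "Health check" line was found (health check not enabled).
    healthy = None
    for line in lines:
        if line.rstrip().startswith("Health check"):
            healthy = line.rstrip() == "Health check: Good"
    return healthy

def status_return_code(lines):
    healthy = health_from_status_lines(lines)
    if healthy is False:
        return 2  # distinct code so scripts can tell this from "agent not running"
    return 0      # healthy, or health check not enabled
```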

yanscalyr

comment created time in a month

Pull request review comment scalyr/scalyr-agent-2

AGENT-386: Health check functionality for Kubernetes.

 def stop(self):
         default=False,
         help="For status command, prints detailed information about running agent.",
     )
+    parser.add_option(
+        "-H",
+        "--health_check",
+        action="store_true",
+        dest="health_check",
+        default=False,
+        help="For status command, prints health check status. Return code will be 0 for a passing check, and 1 for failing",

What happens if the health check option is not enabled for the agent process? We should print a helpful message when they invoke scalyr-agent-2 status -H.

yanscalyr

comment created time in a month

Pull request review comment scalyr/scalyr-agent-2

AGENT-386: Health check functionality for Kubernetes.

 def generate_status(self, warn_on_rate_limit=False):
             for entry in self.__log_matchers:
                 result.log_matchers.append(entry.generate_status())

+            if self.__config.enable_health_check:
+                result.health_check_result = "Good"
+                if (
+                    time.time()
+                    > self.__last_attempt_time
+                    + self.__config.healthy_max_time_since_last_copy_attempt
+                ):
+                    result.health_check_result = (

Can you add in a unit test that tests this method? In particular, verify it returns a bad message if the last copy attempt has been exceeded.

yanscalyr

comment created time in a month

Pull request review comment scalyr/scalyr-agent-2

AGENT-386: Health check functionality for Kubernetes.

 def generate_status(self, warn_on_rate_limit=False):
             for entry in self.__log_matchers:
                 result.log_matchers.append(entry.generate_status())

+            if self.__config.enable_health_check:
+                result.health_check_result = "Good"
+                if (
+                    time.time()
+                    > self.__last_attempt_time

What is __last_attempt_time initialized to before it actually has made an attempt? None? We will need to handle that.

yanscalyr

comment created time in a month

Pull request review comment scalyr/scalyr-agent-2

AGENT-386: Health check functionality for Kubernetes.

 def __detailed_status(self, data_directory, status_format="text"):
             fp.write(status_format)

         # Signal to the running process.  This should cause that process to write to the status file
-        result = self.__controller.request_agent_status()
+        if health_check:
+            result = self.__controller.request_agent_health_check()

This also should allow you to remove a bunch of the changes you made related to registering and handling the new health check handler.

yanscalyr

comment created time in a month

Pull request review comment scalyr/scalyr-agent-2

AGENT-386: Health check functionality for Kubernetes.

 def stop_agent_service(self, quiet):
         pass

     def request_agent_status(self):
-        """Invoked by a process that is not the agent to request the current agent dump the current detail
+        """Invoked by a process that is not the agent to request the current agent dump the current health

I think you meant to leave this as "detail".

yanscalyr

comment created time in a month

Pull request review comment scalyr/scalyr-agent-2

AGENT-386: Health check functionality for Kubernetes.

 def __detailed_status(self, data_directory, status_format="text"):
             fp.write(status_format)

         # Signal to the running process.  This should cause that process to write to the status file
-        result = self.__controller.request_agent_status()
+        if health_check:
+            result = self.__controller.request_agent_health_check()

I don't think we should have a separate signal for the health check. Rather, the health check should just use the normal -v signal but be selective about what it emits to stdout.

Most likely, the "right" way to do this is to have the health check request the verbose status in JSON and then it just parses that JSON looking for the health check field. It returns success or not based on that.
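
A minimal sketch of that approach (the `health_check_result` field name matches the generate_status diff earlier in this thread; everything else is assumed):

```python
import json

def health_check_return_code(status_json_text):
    status = json.loads(status_json_text)
    result = status.get("health_check_result")
    if result is None:
        return 0  # health check info absent; don't fail the command on it
    return 0 if result == "Good" else 2
```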

yanscalyr

comment created time in a month

push event scalyr/scalyr-agent-2

yanscalyr

commit sha f79a4dbfd5920955f47ee0c4084d46de5c8f7088

Add a release note about possibly not having expected configuration in custom docker images. (#593)


Steven Czerwinski

commit sha fda1682111fc4e075e034e378ba6dd1a5cc20edf

Merge branch 'release' of github.com:scalyr/scalyr-agent-2


push time in a month

push event scalyr/scalyr-agent-2

Steven Czerwinski

commit sha dee1fbd2146072188b624d5b7a7456f31da186ab

Update release notes to mention move to py2installer


Jenkins Automation

commit sha 1f4c9964ba68148a28bb4a1f1afee25a0d72d842

Agent release 2.1.7


yanscalyr

commit sha 4706f3994fcb0f50bd1030f64ebeee633fc647b6

Mention increase in memory usage in the "Rama" release notes


Steven Czerwinski

commit sha 7e50025373e1a70e87a2e8cb95e863cf28c1786e

Merge branch 'release' of github.com:scalyr/scalyr-agent-2


push time in a month

push event scalyr/scalyr-agent-2

czerwingithub

commit sha d244540f1ce1ed262fc10764baceea07319ec1c1

Update release notes to mention move to py2installer (#583)

view details

push time in a month

delete branch scalyr/scalyr-agent-2

delete branch: updateReleaseNotes

delete time in a month

create branch scalyr/scalyr-agent-2

branch: updateReleaseNotes

created branch time in a month

pull request comment scalyr/scalyr-agent-2

Update VERSION, CHANGELOG, and RELEASE_NOTES for version 2.1.7

And, add a mention of the bug fix for the duplicate K8s logs on restart as you mentioned in Slack.

yanscalyr

comment created time in 2 months

Pull request review comment scalyr/scalyr-agent-2

Update VERSION, CHANGELOG, and RELEASE_NOTES for version 2.1.7

 Scalyr Agent 2 Changes By Release
 =================================

+## 2.1.7 "Serenity" - June 24, 2020
+
+<!---
+Packaged by Yan Shnayder <yan@scalyr.com> on Jun 24, 2020 16:30 -0800
+--->
+
+Features:
+* New configuration option `k8s_logs` allows configuring of Kubernetes logs similarly to the `logs` configuration option. Please see the [RELEASE_NOTES](https://github.com/scalyr/scalyr-agent-2/blob/master/RELEASE_NOTES.md) for more details.

Finally, I think you can change the hyperlink to: https://github.com/scalyr/scalyr-agent-2/blob/master/RELEASE_NOTES.md#217-serenity---june-24-2020

yanscalyr

comment created time in 2 months

Pull request review comment scalyr/scalyr-agent-2

Update VERSION, CHANGELOG, and RELEASE_NOTES for version 2.1.7

 Scalyr Agent 2 Changes By Release
 =================================

+## 2.1.7 "Serenity" - June 24, 2020
+
+<!---
+Packaged by Yan Shnayder <yan@scalyr.com> on Jun 24, 2020 16:30 -0800
+--->
+
+Features:
+* New configuration option `k8s_logs` allows configuring of Kubernetes logs similarly to the `logs` configuration option. Please see the [RELEASE_NOTES](https://github.com/scalyr/scalyr-agent-2/blob/master/RELEASE_NOTES.md) for more details.

Also say "configuration but matches based on Kubernetes pod, namespace, and container name" instead of "configuration option" after logs.

yanscalyr

comment created time in 2 months

Pull request review comment scalyr/scalyr-agent-2

Update VERSION, CHANGELOG, and RELEASE_NOTES for version 2.1.7

 Scalyr Agent 2 Changes By Release
 =================================

+## 2.1.7 "Serenity" - June 24, 2020
+
+<!---
+Packaged by Yan Shnayder <yan@scalyr.com> on Jun 24, 2020 16:30 -0800
+--->
+
+Features:
+* New configuration option `k8s_logs` allows configuring of Kubernetes logs similarly to the `logs` configuration option. Please see the [RELEASE_NOTES](https://github.com/scalyr/scalyr-agent-2/blob/master/RELEASE_NOTES.md) for more details.

I would tweak this and say "New configuration feature " instead of "New configuration option".

yanscalyr

comment created time in 2 months

pull request comment scalyr/scalyr-agent-2

CT-25 - add support for k8s_logs in configuration file

Ok, everything looks good. However, we should check out why the smoke-k8s test is failing. It looks like the unittest-35-windows failure might be an executor problem -- it says all of the tests have passed but it fails to shut down the tests properly.

imron

comment created time in 2 months

Pull request review comment scalyr/scalyr-agent-2

CT-25 - add support for k8s_logs in configuration file

 def query_stats(self):
         return self.query_api("/stats/summary")

+class K8sConfigBuilder(object):
+    """Builds log configs for containers based on config snippets found in the `k8s_logs` field of
+    the config file.
+    """
+
+    def __init__(
+        self,
+        k8s_log_configs,
+        logger,
+        rename_no_original,
+        rename_logfile=None,
+        parse_format="json",
+    ):
+        """
+        @param k8s_log_configs: The config snippets from the configuration
+        @param logger: A scalyr logger
+        @param rename_no_original: A bool, used to prevent the original log file name from being added to the attributes.
+        @param rename_logfile: A value for renaming a logfile - can contain variable substitutions
+        @param parse_format: The parse format of this log config
+        """
+
+        if rename_logfile is None:
+            rename_logfile = "/${container_runtime}/${container_name}.log"
+
+        self.__k8s_log_configs = k8s_log_configs
+        self._logger = logger
+        self.__rename_logfile = rename_logfile
+        self.__rename_no_original = rename_no_original
+        self.__parse_format = parse_format
+
+    def _check_match(self, element, name, value, glob):
+        """
+        Checks to see if we have a match against the glob for a certain value
+        @param element: The index number of the element in the k8s_config list
+        @param name: A string containing the name of the field we are evaluating (used for debug log purposes)
+        @param value: The value of the field to evaluate the glob against
+        @param glob: A string containing a glob to evaluate
+        """
+        result = False
+        if glob is not None and value is not None:
+            # ignore this config if value doesn't match the glob
+            if fnmatch.fnmatch(value, glob):
+                result = True
+            else:
+                self._logger.log(
+                    scalyr_logging.DEBUG_LEVEL_2,
+                    "Ignoring k8s_log item %d because %s '%s' doesn't match '%s'"
+                    % (element, name, value, glob,),
+                )
+        return result
+
+    def get_log_config(self, info, k8s_info, parser):
+        """
+        Creates a log_config from various attributes and then applies any `k8s_logs` configs that
+        might apply to this log
+        @param info: A dict containing docker information about the container we are creating a config for
+        @param k8s_info: A dict containing k8s information about hte container we are creating a config for

s/hte/the/

imron

comment created time in 2 months

Pull request review comment scalyr/scalyr-agent-2

CT-25 - add support for k8s_logs in configuration file

 def query_stats(self):
         return self.query_api("/stats/summary")

+class K8sConfigBuilder(object):
+    """Builds log configs for containers based on config snippets found in the `k8s_logs` field of
+    the config file.
+    """
+
+    def __init__(
+        self, k8s_log_configs, logger, rename_no_original, parse_format="json"
+    ):
+        """
+        @param k8s_log_configs: The config snippets from the configuration
+        @param logger: A scalyr logger
+        @param rename_no_original: A bool, used to prevent the original log file name from being added to the attributes.
+        @param parse_format: The parse format of this log config
+        """
+        self.__k8s_log_configs = k8s_log_configs
+        self._logger = logger
+        self.__rename_no_original = rename_no_original
+        self.__parse_format = parse_format
+
+    def _check_match(self, element, name, value, glob):
+        """
+        Checks to see if we have a match against the glob for a certain value
+        @param element: The index number of the element in the k8s_config list
+        @param name: A string containing the name of the field we are evaluating (used for debug log purposes)
+        @param value: The value of the field to evaluate the glob against
+        @param glob: A string containing a glob to evaluate
+        """
+        result = False
+        if glob is not None and value is not None:
+            # ignore this config if value doesn't match the glob
+            if fnmatch.fnmatch(value, glob):
+                result = True
+            else:
+                self._logger.log(
+                    scalyr_logging.DEBUG_LEVEL_2,
+                    "Ignoring k8s_log item %d because %s '%s' doesn't match '%s'"
+                    % (element, name, value, glob,),
+                )
+        return result
+
+    def get_log_config(
+        self, info, k8s_info, container_attributes, parser, rename_logfile,
+    ):
+        """
+        Creates a log_config from various attributes and then applies any `k8s_logs` configs that
+        might apply to this log
+        @param info: A dict containing docker information about the container we are creating a config for
+        @param k8s_info: A dict containing k8s information about hte container we are creating a config for
+        @param container_attributes: A set of attributes to add to the log config of this container
+        @param parser: A string containing the name of the parser to use for this log config
+        @param rename_logfile: A string containing the name to use for the renamed log file
+        @return: A dict containing a log_config, or None if we couldn't create a valid config (i.e. log_path was empty)
+        """
+
+        # Make sure we have a log_path for the log config
+        path = info.get("log_path", None)
+        if not path:
+            return None
+
+        # Build up the default config that we will use
+        result = {
+            "parser": parser,
+            "path": path,
+            "parse_format": self.__parse_format,
+            "attributes": container_attributes,
+            "rename_logfile": rename_logfile,
+            "rename_no_original": self.__rename_no_original,
Part of the reason I'm pushing on these questions is that I don't think it's a good user experience if overriding some options for the log file forces you to change something you may not want to change.

Take the rename_logfile config option. By default, k8s logs are renamed to "/${CONTAINER_RT}/${CONTAINER_NAME}". However, if you override the log config for a particular log to set a parser, there's no way for you to also get the same "/${CONTAINER_RT}/${CONTAINER_NAME}" name. That's because you have to specify some value for the rename_logfile config option, but you can't specify one that will provide the exact same behavior. Now, we can take the approach you took and just have that one be the default if you don't specify rename_logfile. But that seems like almost a hidden behavior.

Instead, we should just officially say the default for the K8s log file rename is "/${CONTAINER_RT}/${CONTAINER_NAME}". We actually already support using template strings for filename substitution. We do have container name... we just don't have container runtime. So, if we add that, then we could make the default be a template string that doesn't change on a per-container basis. Yes, the value of the string is different after the substitution, but the default value is the same.

What to do for the parser is a little tougher. Part of me thinks we should just have a different user-visible way for the customer to specify the parser, since it has a different meaning anyway. Call it default_parser instead of parser. We document default_parser as the parser to use if there is no parser specified for the container via an annotation. That's different than parser which means "always use this value as the parser".

What do you think about those ideas? Use default_parser and have a templated-value for the default rename_logfile. We just would have to add in the runtime name as a templated variable.
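
A minimal sketch of the templated default (the template string matches the `rename_logfile` default in the K8sConfigBuilder diff above; the function name is assumed):

```python
from string import Template

DEFAULT_RENAME_LOGFILE = "/${container_runtime}/${container_name}.log"

def rename_logfile_for(container_runtime, container_name,
                       template=DEFAULT_RENAME_LOGFILE):
    # One default template for every container; only the substituted
    # per-container values vary.
    return Template(template).substitute(
        container_runtime=container_runtime,
        container_name=container_name,
    )
```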

imron

comment created time in 2 months

Pull request review comment scalyr/scalyr-agent-2

CT-25 - add support for k8s_logs in configuration file

 def query_stats(self):
         return self.query_api("/stats/summary")

+class K8sConfigBuilder(object):
+    """Builds log configs for containers based on config snippets found in the `k8s_logs` field of
+    the config file.
+    """
+
+    def __init__(
+        self, k8s_log_configs, logger, rename_no_original, parse_format="json"
+    ):
+        """
+        @param k8s_log_configs: The config snippets from the configuration
+        @param logger: A scalyr logger
+        @param rename_no_original: A bool, used to prevent the original log file name from being added to the attributes.
+        @param parse_format: The parse format of this log config
+        """
+        self.__k8s_log_configs = k8s_log_configs
+        self._logger = logger
+        self.__rename_no_original = rename_no_original
+        self.__parse_format = parse_format
+
+    def _check_match(self, element, name, value, glob):
+        """
+        Checks to see if we have a match against the glob for a certain value
+        @param element: The index number of the element in the k8s_config list
+        @param name: A string containing the name of the field we are evaluating (used for debug log purposes)
+        @param value: The value of the field to evaluate the glob against
+        @param glob: A string containing a glob to evaluate
+        """
+        result = False
+        if glob is not None and value is not None:
+            # ignore this config if value doesn't match the glob
+            if fnmatch.fnmatch(value, glob):
+                result = True
+            else:
+                self._logger.log(
+                    scalyr_logging.DEBUG_LEVEL_2,
+                    "Ignoring k8s_log item %d because %s '%s' doesn't match '%s'"
+                    % (element, name, value, glob,),
+                )
+        return result
+
+    def get_log_config(
+        self, info, k8s_info, container_attributes, parser, rename_logfile,
+    ):
+        """
+        Creates a log_config from various attributes and then applies any `k8s_logs` configs that
+        might apply to this log
+        @param info: A dict containing docker information about the container we are creating a config for
+        @param k8s_info: A dict containing k8s information about hte container we are creating a config for
+        @param container_attributes: A set of attributes to add to the log config of this container
+        @param parser: A string containing the name of the parser to use for this log config
+        @param rename_logfile: A string containing the name to use for the renamed log file
+        @return: A dict containing a log_config, or None if we couldn't create a valid config (i.e. log_path was empty)
+        """
+
+        # Make sure we have a log_path for the log config
+        path = info.get("log_path", None)
+        if not path:
+            return None
+
+        # Build up the default config that we will use
+        result = {
+            "parser": parser,
+            "path": path,
+            "parse_format": self.__parse_format,
+            "attributes": container_attributes,
+            "rename_logfile": rename_logfile,
+            "rename_no_original": self.__rename_no_original,
+        }
+
+        # If we don't have any k8s information then we don't match against k8s_log_configs
+        if k8s_info is None:
+            return result
+
+        # Now apply log configs
+        for i, config in enumerate(self.__k8s_log_configs):
+            # We check for glob matches against `k8s_pod_glob`, `k8s_namespace_glob` and `k8s_container_glob`
+
+            # Check for the pod glob
+            pod_glob = config.get("k8s_pod_glob", None)
+            pod_name = k8s_info.get("pod_name", None)
+            if not self._check_match(i, "pod_name", pod_name, pod_glob):

Ah, right... I missed that part. Ok, never mind about my suggestion.

imron

comment created time in 2 months

Pull request review comment scalyr/scalyr-agent-2

CT-25 - add support for k8s_logs in configuration file

 def query_stats(self):
         return self.query_api("/stats/summary")

+class K8sConfigBuilder(object):
+    """Builds log configs for containers based on config snippets found in the `k8s_logs` field of
+    the config file.
+    """
+
+    def __init__(
+        self, k8s_log_configs, logger, rename_no_original, parse_format="json"
+    ):
+        """
+        @param k8s_log_configs: The config snippets from the configuration
+        @param logger: A scalyr logger
+        @param rename_no_original: A bool, used to prevent the original log file name from being added to the attributes.
+        @param parse_format: The parse format of this log config
+        """
+        self.__k8s_log_configs = k8s_log_configs
+        self._logger = logger
+        self.__rename_no_original = rename_no_original
+        self.__parse_format = parse_format
+
+    def _check_match(self, element, name, value, glob):
+        """
+        Checks to see if we have a match against the glob for a certain value
+        @param element: The index number of the element in the k8s_config list
+        @param name: A string containing the name of the field we are evaluating (used for debug log purposes)
+        @param value: The value of the field to evaluate the glob against
+        @param glob: A string containing a glob to evaluate
+        """
+        result = False
+        if glob is not None and value is not None:
+            # ignore this config if value doesn't match the glob
+            if fnmatch.fnmatch(value, glob):
+                result = True
+            else:
+                self._logger.log(
+                    scalyr_logging.DEBUG_LEVEL_2,
+                    "Ignoring k8s_log item %d because %s '%s' doesn't match '%s'"
+                    % (element, name, value, glob,),
+                )
+        return result
+
+    def get_log_config(
+        self, info, k8s_info, container_attributes, parser, rename_logfile,
+    ):
+        """
+        Creates a log_config from various attributes and then applies any `k8s_logs` configs that
+        might apply to this log
+        @param info: A dict containing docker information about the container we are creating a config for
+        @param k8s_info: A dict containing k8s information about hte container we are creating a config for
+        @param container_attributes: A set of attributes to add to the log config of this container
+        @param parser: A string containing the name of the parser to use for this log config
+        @param rename_logfile: A string containing the name to use for the renamed log file
+        @return: A dict containing a log_config, or None if we couldn't create a valid config (i.e. log_path was empty)
+        """
+
+        # Make sure we have a log_path for the log config
+        path = info.get("log_path", None)
+        if not path:
+            return None
+
+        # Build up the default config that we will use
+        result = {
+            "parser": parser,
+            "path": path,
+            "parse_format": self.__parse_format,
+            "attributes": container_attributes,
+            "rename_logfile": rename_logfile,
+            "rename_no_original": self.__rename_no_original,
If I understand this correctly, parser could either be docker or a value from the pod label. rename_logfile will always be a fixed format (a combination of the CRI and the container name) that just varies based on the actual container specifics. And attributes is always just the K8s attributes that we add in.

Is that correct?

I wonder if we should respect the parser from the k8s pods if there is a configuration that matches this pod. I worry that it might be confusing to the customer if we try to merge in information from the various configuration mechanisms we have. Part of me wants to say "pick one and stick with it" because otherwise it is confusing.

What do you think?

imron

comment created time in 2 months
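For orientation, here is a hedged usage sketch of the K8sConfigBuilder shown above. All input values are invented for illustration; inside the agent these arguments come from the Kubernetes monitor and a scalyr logger, not the hand-built dicts and stdlib logger used here.

    import logging

    # Hypothetical inputs; a real run is driven by the k8s monitor.
    builder = K8sConfigBuilder(
        k8s_log_configs=[{"k8s_pod_glob": "frontend-*", "parser": "frontendParser"}],
        logger=logging.getLogger("sketch"),
        rename_no_original=True,
    )
    log_config = builder.get_log_config(
        info={"log_path": "/var/log/containers/frontend-abc.log"},
        k8s_info={
            "pod_name": "frontend-abc",
            "pod_namespace": "default",
            "k8s_container_name": "frontend",
        },
        container_attributes={"pod_name": "frontend-abc"},
        parser="docker",
        rename_logfile="/docker/frontend.log",
    )
    # If the snippet matches and snippet keys win the merge, log_config["parser"]
    # would come back as "frontendParser", which is exactly the precedence
    # question raised in the comment above.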

Pull request review comment scalyr/scalyr-agent-2

CT-25 - add support for k8s_logs in configuration file

 def query_stats(self):
         return self.query_api("/stats/summary")

+
+class K8sConfigBuilder(object):
+    """Builds log configs for containers based on config snippets found in the `k8s_logs` field of
+    the config file.
+    """
+
+    def __init__(
+        self, k8s_log_configs, logger, rename_no_original, parse_format="json"
+    ):
+        """
+        @param k8s_log_configs: The config snippets from the configuration
+        @param logger: A scalyr logger
+        @param rename_no_original: A bool, used to prevent the original log file name from being added to the attributes.
+        @param parse_format: The parse format of this log config
+        """
+        self.__k8s_log_configs = k8s_log_configs
+        self._logger = logger
+        self.__rename_no_original = rename_no_original
+        self.__parse_format = parse_format
+
+    def _check_match(self, element, name, value, glob):
+        """
+        Checks to see if we have a match against the glob for a certain value
+        @param element: The index number of the element in the k8s_config list
+        @param name: A string containing the name of the field we are evaluating (used for debug log purposes)
+        @param value: The value of the field to evaluate the glob against
+        @param glob: A string containing a glob to evaluate
+        """
+        result = False
+        if glob is not None and value is not None:
+            # ignore this config if value doesn't match the glob
+            if fnmatch.fnmatch(value, glob):
+                result = True
+            else:
+                self._logger.log(
+                    scalyr_logging.DEBUG_LEVEL_2,
+                    "Ignoring k8s_log item %d because %s '%s' doesn't match '%s'"
+                    % (element, name, value, glob,),
+                )
+        return result
+
+    def get_log_config(
+        self, info, k8s_info, container_attributes, parser, rename_logfile,
+    ):
+        """
+        Creates a log_config from various attributes and then applies any `k8s_logs` configs that
+        might apply to this log
+        @param info: A dict containing docker information about the container we are creating a config for
+        @param k8s_info: A dict containing k8s information about the container we are creating a config for
+        @param container_attributes: A set of attributes to add to the log config of this container
+        @param parser: A string containing the name of the parser to use for this log config
+        @param rename_logfile: A string containing the name to use for the renamed log file
+        @return: A dict containing a log_config, or None if we couldn't create a valid config (i.e. log_path was empty)
+        """
+
+        # Make sure we have a log_path for the log config
+        path = info.get("log_path", None)
+        if not path:
+            return None
+
+        # Build up the default config that we will use
+        result = {
+            "parser": parser,
+            "path": path,
+            "parse_format": self.__parse_format,
+            "attributes": container_attributes,
+            "rename_logfile": rename_logfile,
+            "rename_no_original": self.__rename_no_original,
+        }
+
+        # If we don't have any k8s information then we don't match against k8s_log_configs
+        if k8s_info is None:
+            return result
+
+        # Now apply log configs
+        for i, config in enumerate(self.__k8s_log_configs):
+            # We check for glob matches against `k8s_pod_glob`, `k8s_namespace_glob` and `k8s_container_glob`
+
+            # Check for the pod glob
+            pod_glob = config.get("k8s_pod_glob", None)
+            pod_name = k8s_info.get("pod_name", None)
+            if not self._check_match(i, "pod_name", pod_name, pod_glob):

How about modifying _check_match to take in both the k8s_info and config objects, and then just pass in k8s_pod_glob. That way you could do the gets for the pod_glob and pod_name in that method. It would mean passing in one more argument than you are now, but it would eliminate 6 duplicate lines in this code block. You could then also write this block as a single if statement:

  if (self._check_match(i, "pod_name", ...) or
      self._check_match(i, "k8s_container_name", ...) or
      self._check_match(i, "pod_namespace", ...)):
      # match found.

Avoiding the sprinkling of continue statements throughout a body like this is usually better for readability.

imron

comment created time in 2 months
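A minimal sketch of that refactor, under stated assumptions: the class and method names (MatcherSketch, matches), the glob_key parameter, and the choice to treat a missing glob or value as a pass are all illustrative, not the code under review.

    import fnmatch


    class MatcherSketch(object):
        """Illustration of the suggested _check_match shape; not the PR's code.
        The logger can be any logging.Logger-style object."""

        def __init__(self, logger):
            self._logger = logger

        def _check_match(self, element, name, glob_key, k8s_info, config):
            # Do the gets inside the helper so each call site is a single line.
            glob = config.get(glob_key, None)
            value = k8s_info.get(name, None)
            if glob is None or value is None:
                return True  # nothing to match against, so don't skip this config
            if fnmatch.fnmatch(value, glob):
                return True
            self._logger.debug(
                "Ignoring k8s_log item %d because %s '%s' doesn't match '%s'",
                element, name, value, glob,
            )
            return False

        def matches(self, i, k8s_info, config):
            # One compound condition instead of three check-then-continue blocks.
            return (
                self._check_match(i, "pod_name", "k8s_pod_glob", k8s_info, config)
                and self._check_match(i, "pod_namespace", "k8s_namespace_glob", k8s_info, config)
                and self._check_match(i, "k8s_container_name", "k8s_container_glob", k8s_info, config)
            )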

Pull request review comment scalyr/logstash-output-scalyr

Hard code in a working cert to avoid SSL issues

 In the above example, the Logstash pipeline defines a file input that reads from

 - Path to SSL bundle file.
-`config :ssl_ca_bundle_path, :validate => :string, :default =>  "/etc/ssl/certs/ca-bundle.crt"`
+`config :ssl_ca_bundle_path, :validate => :string, :default => nil`

Ok, I'm going to suggest a slightly different approach here. What if we leave this config option as-is? Instead, we add an "append Scalyr cert" option (you can come up with a better name). If that is true (and it defaults to true), then we copy what is in ssl_ca_bundle_path and append the embedded Scalyr cert.

That way we can rely both on the system certs and our embedded certificate. This might be necessary if they are proxying our traffic through an SSL proxy on their end... It also future-proofs us a bit, since the Scalyr CA cert expires in 4 years.

yanscalyr

comment created time in 2 months
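The shipped plugin is Ruby, but the idea is language-agnostic. A rough Python sketch, where the append_scalyr_cert option name, the helper, and the placeholder PEM text are all assumptions rather than the plugin's API:

    import tempfile

    # Placeholder PEM text; the real plugin would embed the actual Scalyr CA cert.
    EMBEDDED_SCALYR_CERT = (
        "-----BEGIN CERTIFICATE-----\n"
        "...placeholder, not a real certificate...\n"
        "-----END CERTIFICATE-----\n"
    )


    def effective_ca_bundle(ssl_ca_bundle_path, append_scalyr_cert=True):
        """Return a path to the CA bundle to hand to the TLS layer.

        If append_scalyr_cert is true (the proposed default), copy the
        configured bundle and append the embedded cert so both the system
        CAs and the Scalyr CA are trusted, even behind an SSL proxy.
        """
        if not append_scalyr_cert:
            return ssl_ca_bundle_path
        combined = tempfile.NamedTemporaryFile(mode="w", suffix=".crt", delete=False)
        with combined:
            if ssl_ca_bundle_path:
                with open(ssl_ca_bundle_path) as bundle:
                    combined.write(bundle.read())
                combined.write("\n")
            combined.write(EMBEDDED_SCALYR_CERT)
        return combined.name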

Pull request review comment scalyr/logstash-output-scalyr

Hard code in a working cert to avoid SSL issues

 In the above example, the Logstash pipeline defines a file input that reads from

 - Path to SSL bundle file.
-`config :ssl_ca_bundle_path, :validate => :string, :default =>  "/etc/ssl/certs/ca-bundle.crt"`
+`config :ssl_ca_bundle_path, :validate => :string, :default => nil`

Document what nil means. (If nil, then we use an embedded certificate that will verify scalyr.com.)

yanscalyr

comment created time in 2 months

Pull request review comment scalyr/logstash-output-scalyr

Hard code in a working cert to avoid SSL issues

 def multi_receive(events)

     while !multi_event_request_array.to_a.empty?
       begin
         multi_event_request = multi_event_request_array.pop
-        @client_session.post_add_events(multi_event_request[:body])
-        sleep_interval = 0
-        result.push(multi_event_request)
+        if !multi_event_request.nil?

Can you add a comment as to why this might be nil?

yanscalyr

comment created time in 2 months

Pull request review comment scalyr/scalyr-agent-2

CT-25 - add support for k8s_logs in configuration file

 def __get_last_request_for_log(self, path):

         return scalyr_util.seconds_since_epoch(result)

-    def __create_log_config(self, parser, path, attributes, parse_format="raw"):
-        """Convenience function to create a log_config dict from the parameters"""
+    def __create_log_config(
+        self, info, k8s_info, container_attributes, parser, rename_logfile
+    ):
+        """
+        Creates a log_config from various attributes and then applies any `k8s_logs` configs that
+        might apply to this log
+        @return: A dict containing a log_config, or None if we couldn't create a valid config (i.e. log_path was empty)
+        """

-        return {
+        # Make sure we have a log_path for the log config
+        path = info.get("log_path", None)
+        if not path:
+            return None
+
+        # Build up the default config that we will use
+        result = {
             "parser": parser,
             "path": path,
-            "parse_format": parse_format,
-            "attributes": attributes,
+            "parse_format": self.__parse_format,
+            "attributes": container_attributes,
+            "rename_logfile": rename_logfile,
         }

+        # This is for a hack to prevent the original log file name from being added to the attributes.
+        if self.__use_v2_attributes and not self.__use_v1_and_v2_attributes:
+            result["rename_no_original"] = True

I think we want to apply this hack implicitly to any log configuration we return.

imron

comment created time in 2 months

Pull request review comment scalyr/scalyr-agent-2

CT-25 - add support for k8s_logs in configuration file

 def __get_last_request_for_log(self, path):

         return scalyr_util.seconds_since_epoch(result)

-    def __create_log_config(self, parser, path, attributes, parse_format="raw"):
-        """Convenience function to create a log_config dict from the parameters"""
+    def __create_log_config(
+        self, info, k8s_info, container_attributes, parser, rename_logfile
+    ):
+        """
+        Creates a log_config from various attributes and then applies any `k8s_logs` configs that
+        might apply to this log
+        @return: A dict containing a log_config, or None if we couldn't create a valid config (i.e. log_path was empty)
+        """

-        return {
+        # Make sure we have a log_path for the log config
+        path = info.get("log_path", None)
+        if not path:
+            return None
+
+        # Build up the default config that we will use
+        result = {
             "parser": parser,
             "path": path,
-            "parse_format": parse_format,
-            "attributes": attributes,
+            "parse_format": self.__parse_format,
+            "attributes": container_attributes,
+            "rename_logfile": rename_logfile,
         }

+        # This is for a hack to prevent the original log file name from being added to the attributes.
+        if self.__use_v2_attributes and not self.__use_v1_and_v2_attributes:
+            result["rename_no_original"] = True
+
+        # If we don't have any k8s information then we don't match against k8s_log_configs
+        if k8s_info is None:
+            return result
+
+        # Now apply log configs
+        for i, config in enumerate(self.__k8s_log_configs):
+            # We check for glob matches against `k8s_pod_glob`, `k8s_namespace_glob` and `k8s_container_glob`
+
+            # Check for the pod glob
+            pod_glob = config.get("k8s_pod_glob", None)
+            pod_name = k8s_info.get("pod_name", None)
+            if pod_glob is not None and pod_name is not None:

When you make the new abstraction, let's see if you can remove this duplicate code here (this check is pretty much repeated for each attribute).

imron

comment created time in 2 months

Pull request review comment scalyr/scalyr-agent-2

CT-25 - add support for k8s_logs in configuration file

 def __get_last_request_for_log(self, path):

         return scalyr_util.seconds_since_epoch(result)

-    def __create_log_config(self, parser, path, attributes, parse_format="raw"):
-        """Convenience function to create a log_config dict from the parameters"""
+    def __create_log_config(
+        self, info, k8s_info, container_attributes, parser, rename_logfile
+    ):
+        """
+        Creates a log_config from various attributes and then applies any `k8s_logs` configs that
+        might apply to this log
+        @return: A dict containing a log_config, or None if we couldn't create a valid config (i.e. log_path was empty)
+        """

-        return {
+        # Make sure we have a log_path for the log config
+        path = info.get("log_path", None)
+        if not path:
+            return None
+
+        # Build up the default config that we will use
+        result = {
             "parser": parser,
             "path": path,
-            "parse_format": parse_format,
-            "attributes": attributes,
+            "parse_format": self.__parse_format,
+            "attributes": container_attributes,
+            "rename_logfile": rename_logfile,
         }

+        # This is for a hack to prevent the original log file name from being added to the attributes.
+        if self.__use_v2_attributes and not self.__use_v1_and_v2_attributes:
+            result["rename_no_original"] = True
+
+        # If we don't have any k8s information then we don't match against k8s_log_configs
+        if k8s_info is None:
+            return result
+
+        # Now apply log configs
+        for i, config in enumerate(self.__k8s_log_configs):
+            # We check for glob matches against `k8s_pod_glob`, `k8s_namespace_glob` and `k8s_container_glob`
+
+            # Check for the pod glob
+            pod_glob = config.get("k8s_pod_glob", None)
+            pod_name = k8s_info.get("pod_name", None)
+            if pod_glob is not None and pod_name is not None:
+                # ignore this config if pod_name doesn't match the glob
+                if not fnmatch.fnmatch(pod_name, pod_glob):
+                    self._logger.log(
+                        scalyr_logging.DEBUG_LEVEL_2,
+                        "Ignoring k8s_log item %d because pod_name '%s' doesn't match '%s'"
+                        % (i, pod_name, pod_glob,),
+                    )
+                    continue
+
+            # Check for the namespace glob
+            namespace_glob = config.get("k8s_namespace_glob", None)
+            pod_namespace = k8s_info.get("pod_namespace", None)
+            if namespace_glob is not None and pod_namespace is not None:
+                # ignore this config if pod_namespace doesn't match the glob
+                if not fnmatch.fnmatch(pod_namespace, namespace_glob):
+                    self._logger.log(
+                        scalyr_logging.DEBUG_LEVEL_2,
+                        "Ignoring k8s_log item %d because pod_namespace '%s' doesn't match '%s'"
+                        % (i, pod_namespace, namespace_glob,),
+                    )
+                    continue
+
+            # Check for the k8s container name glob
+            container_glob = config.get("k8s_container_glob", None)
+            k8s_container = k8s_info.get("k8s_container_name", None)
+            if container_glob is not None and k8s_container is not None:
+                # ignore this config if k8s_container doesn't match the glob
+                if not fnmatch.fnmatch(k8s_container, container_glob):
+                    self._logger.log(
+                        scalyr_logging.DEBUG_LEVEL_2,
+                        "Ignoring k8s_log item %d because k8s_container_name '%s' doesn't match '%s'"
+                        % (i, k8s_container, container_glob,),
+                    )
+                    continue
+
+            self._logger.log(
+                scalyr_logging.DEBUG_LEVEL_2,
+                "Applying k8s_log config item %d.  Matched pod_name ('%s', '%s'), pod_namespace ('%s', '%s') and k8s_container_name ('%s', '%s')"
+                % (
+                    i,
+                    pod_name,
+                    pod_glob,
+                    pod_namespace,
+                    namespace_glob,
+                    k8s_container,
+                    container_glob,
+                ),
+            )
+            # We have the first matching config.  Apply the log config and break
+            # Note, we can't just .update() because otherwise the attributes dict
+            # may get overridden, plus we also need to exclude `path`
+            for key, value in six.iteritems(config):
+                # Ignore `path` so people can't override it

Hmm.. so you are essentially performing a merge with the default attributes on every match you get. When you move this to an abstraction, I would advocate you merge in the default attributes during the instance creation so that you don't have to do this over and over again.

imron

comment created time in 2 months
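One hedged way to realize "merge during instance creation" (all names here are invented; per-container fields such as path and parser would still be applied per call):

    class MergedSnippets(object):
        """Illustration only: merge each k8s_logs snippet over the shared
        defaults once, at construction, instead of on every matched container."""

        def __init__(self, k8s_log_configs, shared_defaults):
            self._merged = []
            for snippet in k8s_log_configs:
                merged = dict(shared_defaults)
                for key, value in snippet.items():
                    if key == "path":
                        continue  # never let a snippet override the log path
                    if key == "attributes":
                        # Merge attribute dicts instead of replacing them wholesale.
                        attributes = dict(merged.get("attributes", {}))
                        attributes.update(value)
                        merged["attributes"] = attributes
                    else:
                        merged[key] = value
                self._merged.append(merged)

        def merged_config(self, index):
            # The per-container code picks the first matching snippet index and
            # then only fills in the container-specific fields.
            return dict(self._merged[index])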

Pull request review comment scalyr/scalyr-agent-2

CT-25 - add support for k8s_logs in configuration file

 def __get_last_request_for_log(self, path):

         return scalyr_util.seconds_since_epoch(result)

-    def __create_log_config(self, parser, path, attributes, parse_format="raw"):
-        """Convenience function to create a log_config dict from the parameters"""
+    def __create_log_config(
+        self, info, k8s_info, container_attributes, parser, rename_logfile
+    ):
+        """
+        Creates a log_config from various attributes and then applies any `k8s_logs` configs that
+        might apply to this log
+        @return: A dict containing a log_config, or None if we couldn't create a valid config (i.e. log_path was empty)
+        """

-        return {
+        # Make sure we have a log_path for the log config
+        path = info.get("log_path", None)
+        if not path:
+            return None
+
+        # Build up the default config that we will use
+        result = {
             "parser": parser,
             "path": path,
-            "parse_format": parse_format,
-            "attributes": attributes,
+            "parse_format": self.__parse_format,
+            "attributes": container_attributes,
+            "rename_logfile": rename_logfile,
         }

+        # This is for a hack to prevent the original log file name from being added to the attributes.
+        if self.__use_v2_attributes and not self.__use_v1_and_v2_attributes:
+            result["rename_no_original"] = True
+
+        # If we don't have any k8s information then we don't match against k8s_log_configs
+        if k8s_info is None:
+            return result
+
+        # Now apply log configs
+        for i, config in enumerate(self.__k8s_log_configs):

Ok, I would suggest essentially breaking this method out as a small abstraction for testing purposes. I think its initialization should take container_attributes, parser, rename_logfile, and rename_no_original (either true or false).

imron

comment created time in 2 months

push event scalyr/scalyr-agent-2

Jenkins Automation

commit sha 67460c81933c44d57ee3a2a272b79de4c9fc7205

Agent release 2.1.6

view details

Steven Czerwinski

commit sha a84820cb17b1e4d445628c8198c9005a3b12f246

Merge branch 'release' of github.com:scalyr/scalyr-agent-2

view details

push time in 2 months

push event scalyr/scalyr-agent-2

czerwingithub

commit sha 78f4d921e74306bfdf42174ebcd11e5d715652eb

Fix/undo sonar cloud changes (#566)

* Revert "Fix small style related issues detected by Sonar Cloud (#563)" This reverts commit b789d75a762c2bf8d20caa60d7f8a4edbf19712e.
* Revert "Update sonar cloud settings. (#562)" This reverts commit 04688522a66c2ae054c32947ff617592f8e352d3.
* Revert "Add SonarCloud config, various small code fixes (#560)" This reverts commit adc6b59cea560f1ac24b8fcc1860acc31e534412.

view details

push time in 2 months

delete branch scalyr/scalyr-agent-2

delete branch : fix/undoSonarCloudChanges

delete time in 2 months

PR merged scalyr/scalyr-agent-2

Fix/undo sonar cloud changes

Rolling back recent changes suggested by Sonar because they are breaking the Windows build. We need to publish the release; we can add these changes back to the master branch once the release is pushed.

+97 -136

1 comment

55 changed files

czerwingithub

pr closed time in 2 months

PR opened scalyr/scalyr-agent-2

Fix/undo sonar cloud changes

Rolling back recent changes suggested by Sonar because they are breaking the Windows build. We need to publish the release; we can add these changes back to the master branch once the release is pushed.

+97 -136

0 comments

55 changed files

pr created time in 2 months

create branch scalyr/scalyr-agent-2

branch : fix/undoSonarCloudChanges

created branch time in 2 months

push event scalyr/scalyr-agent-2

czerwingithub

commit sha 5ad9d16cafc48112c88be171d75608f5520976fa

Update RELEASE_NOTES.md (#565) Tweak documentation about maximum send rate enforcement to provide some specific examples.

view details

push time in 2 months

delete branch scalyr/scalyr-agent-2

delete branch : docs/releaseTweaks

delete time in 2 months

PR merged scalyr/scalyr-agent-2

Tweaking maximum send rate release notes

Provide some specific examples.

+6 -1

0 comments

1 changed file

czerwingithub

pr closed time in 2 months

PR opened scalyr/scalyr-agent-2

Tweaking maximum send rate release notes

Provide some specific examples.

+6 -1

0 comments

1 changed file

pr created time in 2 months

create branch scalyr/scalyr-agent-2

branch : docs/releaseTweaks

created branch time in 2 months

Pull request review comment scalyr/scalyr-agent-2

AGENT-406: Add rate limiting related changes to CHANGELOG and RELEASE_NOTES

 Scalyr Agent 2 Changes By Release

 Packaged by Arthur Kamalov <arthur@scalyr.com> on Jun 4, 2020 13:30 -0800
--->

+Features:
+* New configuration option `max_send_rate_enforcement` allows setting a limit on the rate at which the Agent will upload log bytes to Scalyr. You may wish to set this if you are worried about bursts of log data from problematic files and want to avoid getting charged for these bursts.
+* New default overrides for a number of configuration parameters that will result in a higher throughput for the Agent. If you were relying on the lower throughput as a makeshift rate limiter we recommend setting the new `max_send_rate_enforcement` configuration option to an acceptable rate.

End the sentence with: or "legacy" to maintain the current behavior. See the RELEASE_NOTES for more details. And make that last part a link.

yanscalyr

comment created time in 2 months

Pull request review comment scalyr/scalyr-agent-2

AGENT-406: Add rate limiting related changes to CHANGELOG and RELEASE_NOTES

 # Release Notes

+## 2.1.6 "Rama" - June 4, 2020
+
+* There are a number of new default overrides to increase Agent throughput:
+
+  ```
+  "max_allowed_request_size": 5900000
+  "pipeline_threshold": 0
+  "min_request_spacing_interval": 0.0
+  "max_request_spacing_interval": 5.0
+  "max_log_offset_size": 200000000
+  "max_existing_log_offset_size": 200000000
+  ```
+
+  Increased throughput may result in a larger amount of logs uploaded to Scalyr if the Agent has been skipping logs
+  before this upgrade, and as a result a larger bill.
+
+  If you are interested in avoiding higher throughput, these options are tied to the new `max_send_rate_enforcement`,
+  you can disable these overrides by setting `max_send_rate_enforcement` to `"legacy"`.
+  If you want to set a rate value for `max_send_rate_enforcement` but still disable the overrides you need to set
+  `disable_max_send_rate_enforcement_overrides` to `true`.
+
+* `max_send_rate_enforcement` defaults to `"unlimited"`, which will not rate limit at all and have the above overrides
+  in effect. This option accepts a rate value of a format `"<rate><unit_numerator>/<unit_denominator>"`.
+
+  `<rate>` Accepts an integer or float value.
+
+  `<unit_numerator>` Accepts one of bytes (`B`), kilobytes (`KB`), megabytes (`MB`), gigabytes (`GB`), and terabytes
+  (`TB`). It also takes into account kibibytes (`KiB`), mebibytes (`MiB`), gibibytes (`GiB`), and tebibytes
+  (`TiB`). To avoid confusion it only accepts units in bytes with a capital `B`, and not bits with a lowercase `b`.
+
+  `<unit_denominator>` Accepts a unit of time, one of seconds (`s`), minutes (`m`), hours (`h`), days (`d`), and weeks
+  (`w`).

And one more note which says "Note, this will rate limit in terms of raw log bytes uploaded to Scalyr, which may not be the same as charged log volume if you have additional fields and other enrichments turned on."

yanscalyr

comment created time in 2 months
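The rate format quoted above is mechanical enough that a sketch may help. The parser below is illustrative only (the agent's real implementation is not shown here); the accepted units are the ones listed in the notes:

    import re

    _BYTE_UNITS = {
        "B": 1, "KB": 10 ** 3, "MB": 10 ** 6, "GB": 10 ** 9, "TB": 10 ** 12,
        "KiB": 2 ** 10, "MiB": 2 ** 20, "GiB": 2 ** 30, "TiB": 2 ** 40,
    }
    _TIME_UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400, "w": 604800}


    def parse_rate(value):
        """Return bytes per second, e.g. parse_rate("2MB/s") -> 2000000.0."""
        match = re.match(r"^([\d.]+)([KMGT]i?B|B)/([smhdw])$", value)
        if match is None:
            raise ValueError("Invalid rate value: %s" % value)
        rate, numerator, denominator = match.groups()
        return float(rate) * _BYTE_UNITS[numerator] / _TIME_UNITS[denominator]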

Pull request review comment scalyr/scalyr-agent-2

AGENT-406: Add rate limiting related changes to CHANGELOG and RELEASE_NOTES

 # Release Notes

+## 2.1.6 "Rama" - June 4, 2020
+
+* There are a number of new default overrides to increase Agent throughput:
+
+  ```
+  "max_allowed_request_size": 5900000
+  "pipeline_threshold": 0
+  "min_request_spacing_interval": 0.0
+  "max_request_spacing_interval": 5.0
+  "max_log_offset_size": 200000000
+  "max_existing_log_offset_size": 200000000
+  ```
+
+  Increased throughput may result in a larger amount of logs uploaded to Scalyr if the Agent has been skipping logs
+  before this upgrade, and as a result a larger bill.
+
+  If you are interested in avoiding higher throughput, these options are tied to the new `max_send_rate_enforcement`,
+  you can disable these overrides by setting `max_send_rate_enforcement` to `"legacy"`.
+  If you want to set a rate value for `max_send_rate_enforcement` but still disable the overrides you need to set
+  `disable_max_send_rate_enforcement_overrides` to `true`.
+
+* `max_send_rate_enforcement` defaults to `"unlimited"`, which will not rate limit at all and have the above overrides

I would start this with: "The `max_send_rate_enforcement` option defaults ..."

yanscalyr

comment created time in 2 months

Pull request review comment scalyr/scalyr-agent-2

AGENT-406: Add rate limiting related changes to CHANGELOG and RELEASE_NOTES

 # Release Notes

+## 2.1.6 "Rama" - June 4, 2020
+
+* There are a number of new default overrides to increase Agent throughput:
+
+  ```
+  "max_allowed_request_size": 5900000
+  "pipeline_threshold": 0
+  "min_request_spacing_interval": 0.0
+  "max_request_spacing_interval": 5.0
+  "max_log_offset_size": 200000000
+  "max_existing_log_offset_size": 200000000
+  ```
+
+  Increased throughput may result in a larger amount of logs uploaded to Scalyr if the Agent has been skipping logs
+  before this upgrade, and as a result a larger bill.
+
+  If you are interested in avoiding higher throughput, these options are tied to the new `max_send_rate_enforcement`,
+  you can disable these overrides by setting `max_send_rate_enforcement` to `"legacy"`.
+  If you want to set a rate value for `max_send_rate_enforcement` but still disable the overrides you need to set

Remove this sentence. `disable_max_send_rate_enforcement_overrides` is meant to be a hidden option.

yanscalyr

comment created time in 2 months

Pull request review comment scalyr/scalyr-agent-2

AGENT-406: Add rate limiting related changes to CHANGELOG and RELEASE_NOTES

 # Release Notes

+## 2.1.6 "Rama" - June 4, 2020
+
+* There are a number of new default overrides to increase Agent throughput:
+
+  ```
+  "max_allowed_request_size": 5900000
+  "pipeline_threshold": 0
+  "min_request_spacing_interval": 0.0
+  "max_request_spacing_interval": 5.0
+  "max_log_offset_size": 200000000
+  "max_existing_log_offset_size": 200000000
+  ```
+
+  Increased throughput may result in a larger amount of logs uploaded to Scalyr if the Agent has been skipping logs
+  before this upgrade, and as a result a larger bill.
+
+  If you are interested in avoiding higher throughput, these options are tied to the new `max_send_rate_enforcement`,

Replace with:

If you are interested in relying on the legacy behavior, you may set the `max_send_rate_enforcement` option to `legacy` either by setting it in your `agent.json` configuration file, or by setting the `SCALYR_MAX_SEND_RATE_ENFORCEMENT` environment variable to `legacy`.  
yanscalyr

comment created time in 2 months

pull request comment scalyr/scalyr-agent-2

AGENT-397: Improved logging for investigating skipped bytes

That part of the code is reached after calling send_events here:

https://github.com/scalyr/scalyr-agent-2/blob/f96346854742f7e565d098549d5b30ba8cdd6810/scalyr_agent/copying_manager.py#L785

The method we call in the blocking_time section is the callback returned by that call, which did the compression; the callback only reads the response.

Ok, thanks for verifying that. Sgtm.

yanscalyr

comment created time in 2 months

Pull request review comment scalyr/scalyr-agent-2

AGENT-397: Improved logging for investigating skipped bytes

 def run(self):

                                 pipeline_time = 0.0

                             # Now block for the response.
+                            blocking_response_time_start = time.time()
                             (result, bytes_sent, full_response) = get_response()
+                            blocking_response_time_end = time.time()

Sorry, I meant removing it from total_blocking_response_time. I'd like to try to have the blocking_response_time just be the time we are blocking on the server response. How hard would that be to do?

yanscalyr

comment created time in 2 months
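A sketch of what "just the server wait" could look like, assuming a hypothetical post_fn hook standing in for the agent's request machinery:

    import time
    import zlib


    def timed_post(serialized_events, post_fn):
        """Compress outside the timed region, then time only the blocking wait
        on the server response.  post_fn is an assumption, not the agent API."""
        body = zlib.compress(serialized_events)  # compression cost excluded
        start = time.time()
        response = post_fn(body)  # the only thing measured is this blocking call
        blocking_response_time = time.time() - start
        return response, blocking_response_time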

Pull request review comment scalyr/scalyr-agent-2

AGENT-397: Improved logging for investigating skipped bytes

 def __verify_main_config_and_apply_defaults(

         self.__verify_or_set_optional_bool(
             config, "disable_leak_bandwidth_stats", False, description, apply_defaults
         )
+        self.__verify_or_set_optional_bool(
+            config,
+            "disable_leak_copy_manager_stats",

Those were status messages put in to help look specifically for memory leak issues.

yanscalyr

comment created time in 2 months

Pull request review comment scalyr/scalyr-agent-2

AGENT-397: Improved logging for investigating skipped bytes

 def run(self):

                                 pipeline_time = 0.0

                             # Now block for the response.
+                            blocking_response_time_start = time.time()
                             (result, bytes_sent, full_response) = get_response()
+                            blocking_response_time_end = time.time()

Note that I think this might also include the compression time. We should look into that.

yanscalyr

comment created time in 2 months

Pull request review comment scalyr/scalyr-agent-2

AGENT-397: Improved logging for investigating skipped bytes

 def __init__(self):

         # The total number of log bytes copied to the Scalyr servers.
         self.total_bytes_copied = 0
+        # The number of bytes that still need to be sent to the Scalyr servers.

There are some existing tests in agent_status_test.py. You should add a few here, mainly to verify that when you add status objects together, the right fields get summed up.

yanscalyr

comment created time in 2 months

Pull request review comment scalyr/scalyr-agent-2

AGENT-397: Improved logging for investigating skipped bytes

 def __verify_main_config_and_apply_defaults(

         self.__verify_or_set_optional_bool(
             config, "disable_leak_bandwidth_stats", False, description, apply_defaults
         )
+        self.__verify_or_set_optional_bool(
+            config,
+            "disable_leak_copy_manager_stats",

I don't think you mean leak here.

yanscalyr

comment created time in 2 months

Pull request review comment scalyr/scalyr-agent-2

AGENT-397: Improved logging for investigating skipped bytes

 def __calculate_overall_stats(self, base_overall_stats):

             result.rss_size,
         ) = self.__controller.get_usage_info()

+        if copy_manager_warnings:
+            result.avg_bytes_copied_rate = (
+                result.total_bytes_copied - self.__last_total_bytes_copied
+            ) / self.__config.copying_manager_stats_log_interval
+            result.avg_bytes_produced_rate = (

You could define two variables here called total_bytes_produced and last_total_bytes_produced. It'd probably make the code a little more readable. Alternatively, add a comment here that says you compute total produced by adding bytes_skipped, bytes_copied, and bytes_pending.

yanscalyr

comment created time in 2 months
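A sketch of the first suggestion with named intermediates; the skipped/pending field names follow the discussion above, but the `last` snapshot object is hypothetical:

    def compute_avg_rates(result, last, interval):
        # Total produced = skipped + copied + still pending.
        total_bytes_produced = (
            result.total_bytes_skipped
            + result.total_bytes_copied
            + result.total_bytes_pending
        )
        last_total_bytes_produced = (
            last.total_bytes_skipped
            + last.total_bytes_copied
            + last.total_bytes_pending
        )
        result.avg_bytes_copied_rate = (
            result.total_bytes_copied - last.total_bytes_copied
        ) / interval
        result.avg_bytes_produced_rate = (
            total_bytes_produced - last_total_bytes_produced
        ) / interval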

Pull request review comment scalyr/scalyr-agent-2

AGENT-397: Improved logging for investigating skipped bytes

 def __log_bandwidth_stats(self, overall_stats):

             )
         )

-    def __calculate_overall_stats(self, base_overall_stats):
+    def __log_copy_manager_stats(self, overall_stats):
+        """Logs the copy_manager_status message that we periodically write to the agent log to give copying manager
+        stats.
+
+        This includes such metrics as the amount of times through the main look and time spent in various sections.

s/look/loop/

yanscalyr

comment created time in 2 months

Pull request review comment scalyr/scalyr-agent-2

AGENT-397: Improved logging for investigating skipped bytes

 def __run(self, controller):

         last_overall_stats_report_time = time.time()
         # We only emit the bandwidth stats once every minute.  Track when we last reported it.
         last_bw_stats_report_time = last_overall_stats_report_time
+        # We only emit the copying_manager stats once every 5 minutes.  Track when we last reported it.
+        last_copy_manager_stats_report_time = last_overall_stats_report_time

To improve readability here, let's declare a variable called current_time, set it to time.time(), and use it in all of the above statements. It looks a little odd reading this line without that context.

yanscalyr

comment created time in 2 months
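The suggested cleanup, sketched with the proposed variable name (the surrounding loop is the agent's and is not reproduced here):

    import time

    current_time = time.time()
    last_overall_stats_report_time = current_time
    # We only emit the bandwidth stats once every minute.  Track when we last reported it.
    last_bw_stats_report_time = current_time
    # We only emit the copying_manager stats once every 5 minutes.  Track when we last reported it.
    last_copy_manager_stats_report_time = current_time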

Pull request review comment scalyr/scalyr-agent-2

Prepare documentation for 2.1.6 release

 Packaged by Arthur Kamalov <arthur@scalyr.com> on Jun 4, 2020 13:30 -0800

 Minor updates
 * Default value for `max_line_size` has been raised to 49900. If you have this value in your configuration you may wish to not set it anymore to use the new default.

+Bug fix

Instead of "fix" put "fixes"

ArthurKamalov

comment created time in 2 months

pull request comment scalyr/scalyr-agent-2

CT-52: Increase default max_line_size

Actually, @yanscalyr, can you do me a favor and add a note about the change of default for this in CHANGELOG.md? It should include a note to customers that if they have set this value in their own configuration files, they may wish to stop setting it in order to use the new default.

yanscalyr

comment created time in 2 months

push event scalyr/scalyr-loadgen

yanscalyr

commit sha 65b28915d2730896071c6cd55ec0fbea77852720

Update apiVersion in the telemetry daemonset yaml

view details

czerwingithub

commit sha dc3a75b4bd49f81be83322a5145f78807118cdd9

Merge pull request #2 from scalyr/api-version Update apiVersion in the telemetry daemonset yaml

view details

push time in 2 months

push event scalyr/scalyr-agent-2

Jenkins Automation

commit sha 0cc903ecd4746ae559804bc53acd3a335eb71838

Agent release 2.1.5

view details

Steven Czerwinski

commit sha 8a4333b548c5d15392ba7e177414857395c3543c

Fix release name

view details

push time in 2 months
