If you are wondering where the data on this site comes from, please visit https://api.github.com/users/atheriel/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.
Aaron Jacobs atheriel Crescendo Technology Toronto, Canada http://unconj.ca

atheriel/helm-twitch 17

An Emacs package to interact with Twitch.tv via Helm.

atheriel/evil-ledger 16

More Evil in ledger-mode.

atheriel/accessibility 12

A Python C Extension that wraps the Accessibility API for Mac OS X.

atheriel/httpproblems 4

Report errors from R APIs using "Problem Details" (RFC 7807)

atheriel/fluent-bit-rsconnect 2

A Fluent Bit filter plugin for use with RStudio Connect.

atheriel/grunge 2

A coherent noise library in its very early stages.

atheriel/cranlift 1

Serve CRAN-like Repositories as RESTful APIs

atheriel/galleryscraper 1

A simple little web scraper for pulling images off a "gallery"-style page, written in Python.

atheriel/atom-language-rust 0

Rust language support in Atom

atheriel/cancensus 0

R wrapper for calling CensusMapper APIs

create branch atheriel/xrprof

branch : flamegraph-tools

created branch time in 25 days

fork atheriel/kubectl-flame

Kubectl plugin for effortless profiling on kubernetes

fork in 25 days

issue comment markvanderloo/tinytest

Support for `expect_match()`

Looks great, thanks!

atheriel

comment created time in a month

push event johnmyleswhite/log4r

Aaron Jacobs

commit sha 788e0b2427df406eb4a03e6bb4d8488dde0ee058

Bump to development version 0.3.2.9000.

This prevents version ambiguity when the package is both built from source -- e.g. by R-universe -- and available on CRAN.

view details

Aaron Jacobs

commit sha 28100e23fa06af83cad93066b6c1c47a0d583de4

Bump the Roxygen version.

view details

Aaron Jacobs

commit sha 64f3fc2976ba1d6d08d029a8c8cacde2bc376b06

Use strict R headers.

view details

Aaron Jacobs

commit sha a70e450542e26f5493658dc95de2ef6cabeb86c2

Add rlog and loggit to the performance vignette.

view details

Aaron Jacobs

commit sha 922b6df3818937d0f68f3709e7e168d6cee82021

Fix accidental file truncation when append = FALSE. Fixes #17.

view details

push time in a month

issue closed johnmyleswhite/log4r

Only last log saved to file when `append = FALSE` to `file_appender`

Thanks for this easy-to-use package.

I want my log file to be overwritten every time I run a script. When I specify the append = FALSE option to file_appender in a script with multiple logging calls and run it with Rscript, only the last log is recorded in the log file --- though I want all logs in the script to be recorded. Is that intended? I'm a novice at logging, so please forgive me if I'm asking a stupid question.

Minimal example (R 4.0.3): Name the following as test.R.

library(log4r)
logfile <- "tmp.log"
logger <- logger(appenders = file_appender(logfile, append = FALSE))

info(logger, "Execution started")
p <- 1 + 2 + 3
info(logger, paste0("Result is ", p))
info(logger, "Execution completed")

Run Rscript test.R. Resulting tmp.log:

INFO  [2020-12-19 17:02:41] Execution completed

closed time in a month

shgoke
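The behaviour in this issue comes from reopening the log file in truncating mode on every write, so each message wipes out the previous one. A minimal C sketch of that pattern and its fix (an illustration only; log4r's actual appender is R code):

```c
#include <stdio.h>

/* Buggy appender pattern: reopening the file in "w" mode truncates it on
 * every call, so only the last message survives. */
void log_buggy(const char *path, const char *msg)
{
    FILE *fp = fopen(path, "w");   /* "w" truncates each time */
    if (!fp)
        return;
    fprintf(fp, "%s\n", msg);
    fclose(fp);
}

/* Fixed pattern: truncate once when the appender is created... */
void appender_init(const char *path)
{
    FILE *fp = fopen(path, "w");
    if (fp)
        fclose(fp);
}

/* ...then append every subsequent message. */
void log_fixed(const char *path, const char *msg)
{
    FILE *fp = fopen(path, "a");   /* "a" preserves earlier messages */
    if (!fp)
        return;
    fprintf(fp, "%s\n", msg);
    fclose(fp);
}

/* Count newline-terminated records in a log file. */
int count_lines(const char *path)
{
    FILE *fp = fopen(path, "r");
    int c, n = 0;
    if (!fp)
        return -1;
    while ((c = fgetc(fp)) != EOF)
        if (c == '\n')
            n++;
    fclose(fp);
    return n;
}
```

With the buggy pattern, three writes leave a one-line file, matching the single "Execution completed" line in the report above; with the fixed pattern, all three lines survive.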

delete branch atheriel/fluent-bit

delete branch : fix-segfault-with-unset-upload-id

delete time in a month

started skeeto/growable-buf

started time in a month

delete branch atheriel/fluent-bit

delete branch : log-onigmo-alloc-failures

delete time in 2 months

push event atheriel/fluent-bit

Stephen Lee

commit sha e93fe4dcfed6ed7bca9390610fa1610d0dfc4de1

out_s3: added sequential index feature

Specify $INDEX in s3_key_format to add an index that increments every time a file is uploaded. Sequential indexing is compatible with all other s3_key_format features, so $UUID and $TAG can be specified at the same time as $INDEX. When $INDEX is specified but $UUID is not, a random string will not be appended to the end of the key name. If Fluent Bit crashes or fails to upload, the index is preserved in the filesystem in a metadata file located in ${store_dir}/index_metadata and reused on startup. This configuration option uses native C functions, so it is incompatible with multi-threading. Tested through unit testing and various input plugins (exec, random, etc).

Signed-off-by: Stephen Lee <sleemamz@amazon.com>

view details
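The $INDEX substitution described above can be illustrated with a short C sketch (a simplification for illustration only; the real flb_get_s3_key() handles many more formatters such as $UUID and $TAG):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Replace the first occurrence of "$INDEX" in an s3_key_format string with
 * the current sequence number. Simplified stand-in for the real formatter. */
void expand_index(const char *format, uint64_t seq_index,
                  char *out, size_t out_len)
{
    const char *marker = strstr(format, "$INDEX");
    if (!marker) {
        /* No $INDEX in the format: copy it through unchanged. */
        snprintf(out, out_len, "%s", format);
        return;
    }
    snprintf(out, out_len, "%.*s%llu%s",
             (int)(marker - format), format,
             (unsigned long long)seq_index,
             marker + strlen("$INDEX"));
}
```

With a format of logs/app-$INDEX.gz and a sequence number of 7, this yields logs/app-7.gz; persisting and reloading seq_index (the ${store_dir}/index_metadata file in the commit above) is what makes the index survive restarts.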

Stephen Lee

commit sha e3d0c602afaeee65935a460488f89414018ea291

aws_util: added index recognition for flb_get_s3_key

In flb_get_s3_key, if $INDEX is specified in the format string, flb_get_s3_key will replace $INDEX with the unsigned 64-bit integer seq_index.

Signed-off-by: Stephen Lee <sleemamz@amazon.com>

view details

Stephen Lee

commit sha e5bcfc29a07bea5ecf97b9298c56e17e9d6338f8

tests: internal: added unit tests for flb_get_s3_key Signed-off-by: Stephen Lee <sleemamz@amazon.com>

view details

Stephen Lee

commit sha 5d9125e9b6c968d5eaf41b1ef3d94da1bb8005d5

out_s3: added static file path configuration option

By default, when a dynamic key formatter like $UUID is not specified in s3_key_format, a UUID is automatically appended to the end of the key. If static_file_path is set to true, this behavior is disabled. This patch has been tested using test configuration files with various input plugins (exec, random, etc) as well as valgrind.

Signed-off-by: Stephen Lee <sleemamz@amazon.com>

view details

Richard Burakowski

commit sha 84602e00f2e9f64dd7a3d7b47d64bcfa0d8031e7

out_loki: avoid passing mp_sbuf twice

This is a follow-on from PR #3839. That issue possibly came about because the same msgpack_sbuffer is being passed twice in function calls: both directly as mp_sbuf and indirectly as part of mp_pck, leading to issues if the relationship isn't clear. This change avoids passing mp_sbuf and instead just uses mp_pck.

Signed-off-by: Richard Burakowski <richard.burakowski@gmail.com>

view details

Leonardo Alminana

commit sha ef5fb2c677b0321cd9e243cd7a2786cd4688c249

output_thread: fixed multiple initialization of local_thread_instance in emulated TLS Signed-off-by: Leonardo Alminana <leonardo@calyptia.com>

view details

Leonardo Alminana

commit sha 2383c7b5e35e34e24a3116153f70558fe92ce592

network: enable transport layer protocol selection for dns Signed-off-by: Leonardo Alminana <leonardo@calyptia.com>

view details

Leonardo Alminana

commit sha 56a347c0a8271299684c50517e3fe95acf969c65

network: refactored the async DNS client

Instead of using a custom event, the event type is handled by the event loop, which allows us to control whether or not to invoke the callback depending on the lookup context status. Moreover, it keeps us from accessing an already released lookup context in the case that there are two events for the lookup socket (read and close), because we do the cleanup process at the end of the event loop cycle.

Signed-off-by: Leonardo Alminana <leonardo@calyptia.com>

view details

Leonardo Alminana

commit sha 27dca0391246c66488de2a254f65dbed4aa989de

network: added async DNS UDP timeout management

Fixed some error codes and added socket type tracking to determine if we need to use the artificial timeout mechanism or not. Note: when a UDP timeout is detected we cannot rely on socket shutdown as a way to trigger the cleanup mechanism, so we need to remove the socket from the event loop, flag the error condition, resume the coroutine, and release the lookup context right there in the timeout handler.

Signed-off-by: Leonardo Alminana <leonardo@calyptia.com>

view details

Leonardo Alminana

commit sha 6cf049db89683d96b3c74048e811679cba821711

style: modified conditionals to match the coding style Signed-off-by: Leonardo Alminana <leonardo@calyptia.com>

view details

Leonardo Alminana

commit sha 66b919aece60f2a449e01605837557ade67bd798

output_thread: fixed the event offset for the lookup context Signed-off-by: Leonardo Alminana <leonardo@calyptia.com>

view details

Leonardo Alminana

commit sha b114afd95eadae289691589b3da7a17be7dfe755

engine: fixed the event offset for the lookup context Signed-off-by: Leonardo Alminana <leonardo@calyptia.com>

view details

Leonardo Alminana

commit sha c11bc652e69a8550f224a62c6b44ca871dbd7e1c

network: fixed the event offset for the lookup context Signed-off-by: Leonardo Alminana <leonardo@calyptia.com>

view details

Leonardo Alminana

commit sha 8d64f29e5021c4575d1cf510405c329c4cf408d1

upstream: renamed the dns mode option Signed-off-by: Leonardo Alminana <leonardo@calyptia.com>

view details

Leonardo Alminana

commit sha 02adc353d51b5b62e90e8a90358acb2ffeda3cc9

network: moved the response event back to the base of the lookup context and removed the string comparison used to determine which dns mode we are using

Signed-off-by: Leonardo Alminana <leonardo@calyptia.com>

view details

Stephen Lee

commit sha 06507a272c6231f2979d44a3a17ca957a3c234c2

out_s3: fixed potential segfault on file discard

In the upload queue logic, if we exceed the acceptable number of upload errors in a row, we remove the entry from the upload queue. However, instead of immediately moving on to the next entry, we try to set upload_time. This will cause a segfault. Adding a continue after removing the entry fixes this issue.

Signed-off-by: Stephen Lee <sleemamz@amazon.com>

view details
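The fix described in this commit is the classic unlink-then-continue pattern for removing nodes while walking a list. A hedged sketch with hypothetical simplified types (not the real out_s3 queue code):

```c
#include <stdlib.h>

#define MAX_UPLOAD_ERRORS 3

/* Simplified stand-in for an out_s3 upload queue entry. */
struct upload_entry {
    int error_count;
    long upload_time;
    struct upload_entry *next;
};

/* Prepend a new entry with the given error count. */
struct upload_entry *queue_push(struct upload_entry *head, int error_count)
{
    struct upload_entry *e = calloc(1, sizeof(*e));
    if (!e)
        return head;
    e->error_count = error_count;
    e->next = head;
    return e;
}

/* Walk the queue, discarding entries that failed too often.
 * Returns the number of entries kept. */
int process_queue(struct upload_entry **head)
{
    struct upload_entry **link = head;
    int kept = 0;

    while (*link) {
        struct upload_entry *entry = *link;
        if (entry->error_count >= MAX_UPLOAD_ERRORS) {
            *link = entry->next;  /* unlink the failing entry */
            free(entry);
            continue;             /* the missing 'continue': without it, the
                                     assignment below would touch freed memory */
        }
        entry->upload_time = 12345;  /* placeholder for the real bookkeeping */
        kept++;
        link = &entry->next;
    }
    return kept;
}
```

Without the continue, the loop body falls through to entry->upload_time on a just-freed entry, which is exactly the use-after-free segfault the commit describes.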

Fujimoto Seiji

commit sha 544fa896a5c362a979cd251deb990061b9c2aa96

out_s3: add Apache Arrow support

Apache Arrow is an efficient columnar data format that is suitable for statistical analysis and popular in the machine learning community. https://arrow.apache.org/

With this patch merged, users can now specify 'arrow' as the compression type like this:

[OUTPUT]
    Name            s3
    Bucket          some-bucket
    total_file_size 1M
    use_put_object  On
    Compression     arrow

which makes Fluent Bit convert the request buffer into Apache Arrow format before uploading.

Signed-off-by: Fujimoto Seiji <fujimoto@ceptord.net>
Reviewed-by: Sutou Kouhei <kou@clear-code.com>
Reviewed-by: Wesley Pettit <wppttt@amazon.com>

view details

Aaron Jacobs

commit sha 398cd06c0499e7963927b082ae304db7e0c47a85

out_s3: fix NULL dereference when upload_id is unset

When upload_id is not set in create_multipart_upload() (which can occur in some failure modes), subsequent retries will segfault when checking this ID in complete_multipart_upload(). This commit simply checks that the upload ID has been set and returns the usual error code instead of crashing if not. It also logs an error message for the user. Discussed in #3838.

Signed-off-by: Aaron Jacobs <aaron.jacobs@crescendotechnology.com>

view details

push time in 2 months

Pull request review comment fluent/fluent-bit

out_s3: fix NULL dereference when upload_id is unset

 int complete_multipart_upload(struct flb_s3 *ctx,
     struct flb_http_client *c = NULL;
     struct flb_aws_client *s3_client;
+    if (!m_upload->upload_id) {
+        return -1;
+    }

I've added one, let me know if it's not an appropriate format.

atheriel

comment created time in 2 months
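The guard in the diff above follows a common defensive pattern: check the pointer before any code that would dereference it. A minimal sketch with a hypothetical simplified struct (the real function takes several more arguments):

```c
#include <stdio.h>

/* Simplified stand-in for the multipart upload state: upload_id stays NULL
 * if create_multipart_upload() failed earlier. */
struct multipart_upload_sketch {
    char *upload_id;
};

/* Return the usual error code instead of dereferencing a NULL upload_id. */
int complete_multipart_upload_sketch(struct multipart_upload_sketch *m_upload)
{
    if (!m_upload->upload_id) {
        fprintf(stderr, "[error] upload_id is unset; cannot complete upload\n");
        return -1;
    }
    /* ... would build the CompleteMultipartUpload request from upload_id ... */
    return 0;
}
```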

PullRequestReviewEvent

push event atheriel/fluent-bit

Richard Burakowski

commit sha 3e64f24ad95c7341820aafef9a2e24c4d4a65050

out_loki: delay mp_sbuf->data dereference (#3796)

msgpack_pack_str_body will return with a new mp_sbuf->data if mp_sbuf->data was resized with realloc().

Signed-off-by: Richard Burakowski <richard.burakowski@gmail.com>

view details

Stephen Lee

commit sha 9173a44e2509c980625138ca1d1031b9c9a8a597

out_s3: added data ordering preservation feature

Normally, when an upload request fails, there is a high chance for the last received chunk to be swapped with a later chunk, resulting in data shuffling. This feature prevents this shuffling by using a queue logic for uploads. By default, this feature is turned on.

Signed-off-by: Stephen Lee <sleemamz@amazon.com>

view details

Stephen Lee

commit sha 082baa7dc29d43c3173e90e225461cf6739b5383

out_s3: log_key configuration option implemented

By default, the whole log record will be sent to S3. If you specify a key name with this option, then only the value of that key will be sent to S3. For example, if you are using the Fluentd Docker log driver, you can specify log_key log and only the log message will be sent to S3. If the key is not found, that record is skipped. This patch has been tested using test configuration files and various input plugins (random, exec, etc). The resulting output, as expected, only contained values of the specified log_key.

Signed-off-by: Stephen Lee <sleemamz@amazon.com>

view details

Eduardo Silva

commit sha 5451e40a1edac22460576a1f37f4149778001b51

multiline: fix states rules handling Signed-off-by: Eduardo Silva <edsiper@gmail.com>

view details

Eduardo Silva

commit sha 7fe3676b00c92604a172a246fd8239e5ba4d96c3

tests: internal: multiline: extend tests for issue #3817 Signed-off-by: Eduardo Silva <edsiper@gmail.com>

view details

Eduardo Silva

commit sha 382e9d0aa45c116cc5cdd30aed809f316e5ad959

filter_multiline: flush before return and new option 'debug_flush' Signed-off-by: Eduardo Silva <edsiper@gmail.com>

view details

Eduardo Silva

commit sha 655e17d0f47b75a5e8c692c101ab60706216b6ea

in_tail: add custom keys to multiline payload Signed-off-by: Eduardo Silva <edsiper@gmail.com>

view details

Aaron Jacobs

commit sha 5d51473810ea125b82d4e5b4668df76830ad5ede

http_client: log allocation failures for request headers

Calls to flb_realloc() normally log allocation failures with flb_errno(). This commit adds this missing logging. This should help disambiguate errors in flb_http_do() that are due to memory issues as opposed to HTTP-level issues.

Signed-off-by: Aaron Jacobs <aaron.jacobs@crescendotechnology.com>

view details

richardburakowski

commit sha ac7d4f555fd823ed97a7f12f485ec822a572a75c

Merge branch 'fluent:master' into issue-3796

view details

Takahiro Yamashita

commit sha 96cf62ae538bcd7a592c859a4628822970686845

Merge pull request #3839 from richardburakowski/issue-3796

out_loki: delay mp_sbuf->data dereference (#3796)

view details

Jesse Rittner

commit sha 3174392886de4ce04b1282c3106cab6acbabe158

lib: fix race between flb_start and flb_destroy Signed-off-by: Jesse Rittner <rittneje@gmail.com>

view details

Eduardo Silva

commit sha 0a15e1b404f05d19d7b74f4943b0f10aa3227f92

lib: cmetrics: upgrade to v0.1.6 Signed-off-by: Eduardo Silva <edsiper@gmail.com>

view details

Eduardo Silva

commit sha b0c4efd65f9d1c0e2b89b779ccf705121c0122ca

out_prometheus_remote_write: concatenate cmetrics buffers

This patch generates a concatenation of multiple cmetrics buffers when received on the flush callback. Serializing the payloads works just fine (tested with the Grafana cloud service). In addition, I've added some debug callbacks for testing, e.g.:

[2021/07/25 20:35:31] [debug] [output:prometheus_remote_write:...0] cmetrics msgpack size: 289736
[2021/07/25 20:35:31] [debug] [output:prometheus_remote_write:...0] cmetric_id=0 decoded 0-72434 payload_size=41486
[2021/07/25 20:35:31] [debug] [output:prometheus_remote_write:...0] cmetric_id=1 decoded 72434-144868 payload_size=41486
[2021/07/25 20:35:31] [debug] [output:prometheus_remote_write:...0] cmetric_id=2 decoded 144868-217302 payload_size=41486
[2021/07/25 20:35:31] [debug] [output:prometheus_remote_write:...0] cmetric_id=3 decoded 217302-289736 payload_size=41486
[2021/07/25 20:35:31] [debug] [output:prometheus_remote_write:...0] final payload size: 165944
[2021/07/25 20:35:32] [ info] [output:prometheus_remote_write:...0] prometheus-us-central1.grafana.net:443, HTTP status=200
[2021/07/25 20:35:32] [debug] [output:prometheus_remote_write:...0] http_post result FLB_OK

Signed-off-by: Eduardo Silva <edsiper@gmail.com>

view details

Aaron Jacobs

commit sha 4af07e4e9edfbc9a2611c9d187434d73dc4d1b30

out_s3: fix NULL dereference when upload_id is unset

When upload_id is not set in create_multipart_upload() (which can occur in some failure modes), subsequent retries will segfault when checking this ID in complete_multipart_upload(). This commit simply checks that the upload ID has been set and returns the usual error code instead of crashing if not. It also logs an error message for the user. Discussed in #3838.

Signed-off-by: Aaron Jacobs <aaron.jacobs@crescendotechnology.com>

view details

push time in 2 months

pull request comment fluent/fluent-bit

regex: log allocation failures in Onigmo

@nokute78 Rebased to fix the out-of-date branch.

atheriel

comment created time in 2 months

push event atheriel/fluent-bit

Eduardo Silva

commit sha 0a15e1b404f05d19d7b74f4943b0f10aa3227f92

lib: cmetrics: upgrade to v0.1.6 Signed-off-by: Eduardo Silva <edsiper@gmail.com>

view details

Eduardo Silva

commit sha b0c4efd65f9d1c0e2b89b779ccf705121c0122ca

out_prometheus_remote_write: concatenate cmetrics buffers

This patch generates a concatenation of multiple cmetrics buffers when received on the flush callback. Serializing the payloads works just fine (tested with the Grafana cloud service). In addition, I've added some debug callbacks for testing, e.g.:

[2021/07/25 20:35:31] [debug] [output:prometheus_remote_write:...0] cmetrics msgpack size: 289736
[2021/07/25 20:35:31] [debug] [output:prometheus_remote_write:...0] cmetric_id=0 decoded 0-72434 payload_size=41486
[2021/07/25 20:35:31] [debug] [output:prometheus_remote_write:...0] cmetric_id=1 decoded 72434-144868 payload_size=41486
[2021/07/25 20:35:31] [debug] [output:prometheus_remote_write:...0] cmetric_id=2 decoded 144868-217302 payload_size=41486
[2021/07/25 20:35:31] [debug] [output:prometheus_remote_write:...0] cmetric_id=3 decoded 217302-289736 payload_size=41486
[2021/07/25 20:35:31] [debug] [output:prometheus_remote_write:...0] final payload size: 165944
[2021/07/25 20:35:32] [ info] [output:prometheus_remote_write:...0] prometheus-us-central1.grafana.net:443, HTTP status=200
[2021/07/25 20:35:32] [debug] [output:prometheus_remote_write:...0] http_post result FLB_OK

Signed-off-by: Eduardo Silva <edsiper@gmail.com>

view details

Aaron Jacobs

commit sha 8dcf76e6f7ad21abfe69cf27a5e9acafd5bb695e

regex: log allocation failures in Onigmo

Calls to flb_malloc() normally log allocation failures with flb_errno(). This commit adds the same logging for allocation failures in the Onigmo library. This should help disambiguate errors in flb_regex_do() that are due to memory issues as opposed to actual regular expression match failures.

Signed-off-by: Aaron Jacobs <aaron.jacobs@crescendotechnology.com>

view details

push time in 2 months

delete branch atheriel/fluent-bit

delete branch : http-client-alloc-failure-logging

delete time in 2 months

issue comment fluent/fluent-bit

Segmentation fault when uploading logs to s3

I believe there were some connection-related improvements in the 1.7.x series that could have fixed this; it's probably worth upgrading to 1.7.9 or the 1.8.x releases.

lucastt

comment created time in 2 months

PR opened fluent/fluent-bit

out_s3: fix NULL dereference when upload_id is unset

When upload_id is not set in create_multipart_upload() (which can occur in some failure modes seen in #3838), subsequent retries will segfault when checking this ID in complete_multipart_upload().

This PR simply checks that the upload ID has been set and returns the usual error code instead of crashing if not. I don't have a way to reproduce the errors that could lead to this state, unfortunately.

Discussed in #3838.


Enter [N/A] in the box, if an item is not applicable to your change.

Testing
Before we can approve your change, please submit the following in a comment:

  • [N/A] Example configuration file for the change
  • [N/A] Debug log output from testing the change
  • [N/A] Attached Valgrind output that shows no leaks or memory corruption was found

Documentation

  • [N/A] Documentation required for this feature

Fluent Bit is licensed under Apache 2.0; by submitting this pull request I understand that this code will be released under the terms of that license.

+4 -0

0 comment

1 changed file

pr created time in 2 months

create branch atheriel/fluent-bit

branch : fix-segfault-with-unset-upload-id

created branch time in 2 months

issue comment fluent/fluent-bit

s3 signv4 issue after upgrading to 1.8.2

The actual segfault is due to a NULL dereference when the upload ID is uninitialized, which is a bug in its own right. But fixing that won't resolve whatever bug is causing your signing issue.

maciekm

comment created time in 2 months

issue comment fluent/fluent-bit

Nested fields are not removed with modify filter

This is probably a dupe of #2152, which had some progress made recently in #3695.

wiegandf

comment created time in 2 months

issue comment fluent/fluent-bit

Kubernetes filter regex failures can cause a denial of service

Unfortunately the Kubernetes exclude annotation doesn't work in this case (it was my first thought), because the error happens before we get any pod metadata at all (including annotations).

But more generally:

Maybe you do want the fluent bit logs though, but you just want to avoid the ddos loop scenario..?

I'd like to avoid ignoring other issues surfaced in Fluent Bit's logs, yes.

After some more thought: what is particular about this case is that once the regex fails, it seems to continue failing indefinitely. I don't think you want to warn on every log record if the underlying problem is the same, so it may be better to teach the Kubernetes filter plugin to cache tags failing to match the regex and warn only once -- after which it will silently ignore them.

atheriel

comment created time in 2 months
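The warn-once idea suggested in that comment could look something like the following sketch (a hypothetical helper, not part of the Kubernetes filter today; a production version would bound memory and copy the tag strings):

```c
#include <string.h>

#define MAX_CACHED_TAGS 64

/* Tags whose regex match has already failed and been warned about. */
const char *failed_tags[MAX_CACHED_TAGS];
int n_failed_tags = 0;

/* Return 1 the first time a tag fails to match (caller should warn),
 * 0 on subsequent failures (caller should stay silent). */
int should_warn(const char *tag)
{
    int i;

    for (i = 0; i < n_failed_tags; i++)
        if (strcmp(failed_tags[i], tag) == 0)
            return 0;            /* already warned; silently ignore */

    if (n_failed_tags < MAX_CACHED_TAGS)
        failed_tags[n_failed_tags++] = tag;  /* caller must keep tag alive */
    return 1;
}
```

This trades a small cache for not flooding the logs: the first failing record for a tag produces a warning, and every later record with the same tag is dropped quietly, breaking the feedback loop described in the issue.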

delete branch atheriel/fluent-bit

delete branch : http-client-malformed-response-logging

delete time in 2 months

issue comment fluent/fluent-bit

Multiple Regex entries for grep filter not working

Still an active issue.

cokegen

comment created time in 2 months

PR opened fluent/fluent-bit

http_client: warn when flb_http_do() fails due to malformed data

Most consumers of flb_http_do() do not log much information when it fails (e.g. [warn] http_do=-1), resulting in difficult debugging.

This commit simply adds a warning when the HTTP response could not be processed correctly (i.e. it is malformed in some way).

This should help disambiguate errors in flb_http_do() (such as the one discussed in #3301).



+3 -0

0 comment

1 changed file

pr created time in 2 months

push event atheriel/fluent-bit

Aaron Jacobs

commit sha a99fd924b761998ccf3db1eb7ad6ae1f400a5045

http_client: warn when flb_http_do() fails due to malformed data

Most consumers of flb_http_do() do not log much information when it fails (e.g. "[warn] http_do=-1"), resulting in difficult debugging. This commit simply adds a warning when the HTTP response could not be processed correctly (i.e. it is malformed in some way). This should help disambiguate errors in flb_http_do().

Signed-off-by: Aaron Jacobs <aaron.jacobs@crescendotechnology.com>

view details

push time in 2 months

create branch atheriel/fluent-bit

branch : http-client-malformed-response-logging

created branch time in 2 months

PR opened fluent/fluent-bit

http_client: log allocation failures for request headers

Calls to flb_realloc() normally log allocation failures with flb_errno(). This PR adds this missing logging for when the request header buffer cannot be resized.

This should help disambiguate errors in flb_http_do() that are due to memory issues as opposed to HTTP-level issues.
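The pattern this PR adds can be sketched as follows, using plain realloc()/perror() in place of fluent-bit's flb_realloc()/flb_errno() wrappers (a simplified illustration, not the real http_client code):

```c
#include <stdio.h>
#include <stdlib.h>

/* Grow a header buffer, logging the failure before returning the usual
 * error code so out-of-memory is distinguishable from HTTP-level errors. */
int grow_header_buf(char **buf, size_t new_size)
{
    char *tmp = realloc(*buf, new_size);
    if (!tmp) {
        perror("realloc");  /* the previously missing logging */
        return -1;          /* caller still sees the same error code */
    }
    *buf = tmp;
    return 0;
}
```

Note the temporary pointer: assigning realloc()'s result directly to *buf would leak the original buffer on failure.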



+1 -0

0 comment

1 changed file

pr created time in 2 months

create branch atheriel/fluent-bit

branch : http-client-alloc-failure-logging

created branch time in 2 months

PR opened fluent/fluent-bit

regex: log allocation failures in Onigmo

Calls to flb_malloc() and friends normally log allocation failures with flb_errno(). This PR adds the same logging for allocation failures in the Onigmo library.

This should help disambiguate errors in flb_regex_do() that are due to memory issues as opposed to actual regular expression match failures.



+1 -0

0 comment

1 changed file

pr created time in 2 months

create branch atheriel/fluent-bit

branch : log-onigmo-alloc-failures

created branch time in 2 months