Jaime Soriano Pastor (jsoriano) · Software Engineer at @elastic (Beats) · Berlin/Teruel

jsoriano/compose-elastic-mysql 2

Elastic, Kibana and metricbeat with mysql

elastic/sarama 1

Sarama is a Go library for Apache Kafka 0.8, and up.

jsoriano/docker-librarian-puppet 1

Dockerfiles for base images that use librarian-puppet to be provisioned

elastic/badger 0

Fast key-value DB in Go.

jsoriano/aptly 0

aptly - Debian repository management tool

jsoriano/awscreds 0

Script to convert output of AWS STS to credentials file

jsoriano/beaglebone-tools 0

Tools for beaglebone

jsoriano/beats 0

:tropical_fish: Beats - Lightweight shippers for Elasticsearch & Logstash

jsoriano/bosun 0

Time Series Alerting Framework

jsoriano/cfdev 0

A fast and easy local Cloud Foundry experience on native hypervisors, powered by LinuxKit with VPNKit

pull request comment elastic/beats

Add more Golang tests to filestream input

Pinging @elastic/integrations-services (Team:Services)

kvch

comment created time in 6 minutes

pull request comment elastic/beats

Add more Golang tests to filestream input

This pull request doesn't have a Team:<team> label.

kvch

comment created time in 6 minutes

PR opened elastic/beats

Reviewers
Add more Golang tests to filestream input

What does this PR do?

Increase the test coverage of filestream input.

Related issues

Requires #22788

+1211 -247

0 comments

15 changed files

pr created time in 6 minutes

Pull request review comment elastic/beats

Refactoring of filestream input backend

```diff
 func (r *resource) stateSnapshot() state {
 		TTL:     r.internalState.TTL,
 		Updated: r.internalState.Updated,
 		Cursor:  cursor,
+		Meta:    r.cursorMeta,
 	}
```

We might want to change this copyWithNewKey:

  • Pending operations cannot update the copy (there is no reference to the new resource). Therefore the copy must be marked in sync and pending ops should be ignored (set pendingCursor to nil and internalInsync to true). Alternatively we might have to think about introducing some shared update-state that is aware of active copies, but this might be overkill for now.
  • The entry is New, but subject to cleanup due to the current value of TTL. Would it make sense to set Updated to time.Now()? I'm on the fence here. Maybe it makes sense to keep the old Updated state, as the actual source we read from did not change. Is Updated also updated by pending operations after an ACK? In that case we might want to set Updated: time.Now() if the current resource has pending operations, but keep the older timestamp if the resource is "inactive".
kvch

comment created time in 10 minutes

fork nimeacuerdo/marathon

Cross-platform test runner written for Android and iOS projects

https://malinskiy.github.io/marathon/

fork in 15 minutes

issue comment elastic/beats

script processor support for Heartbeat

Pinging @elastic/uptime (Team:Uptime)

arpit1305

comment created time in 23 minutes

push event elastic/beats

Noémi Ványi

commit sha 3a1d1aee6d2f144fbf57f16713f870972169a73b

Add support for reading from UNIX datagram sockets (#22699)

## What does this PR do?

This PR adds support for reading from UNIX datagram sockets, both from the `unix` input and the `syslog` input. A new option named `socket_type` is added to select the type of the socket. Available options are: `stream` and `datagram`.

## Why is it important?

A few applications that send logs over Unix sockets use datagrams, not streams. From now on, Filebeat can accept input from these applications as well.

Closes #18632

view details

push time in 32 minutes

PR merged elastic/beats

Reviewers
Add support for reading from UNIX datagram sockets Team:Services

What does this PR do?

This PR adds support for reading from UNIX datagram sockets, both from the unix input and the syslog input. A new option named socket_type is added to select the type of the socket. Available options are: stream and datagram.

Why is it important?

A few applications that send logs over Unix sockets use datagrams, not streams. From now on, Filebeat can accept input from these applications as well.
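Based on the description above, enabling the new behavior might look like this (a sketch: `socket_type` and its values are stated in the PR; the input type and socket path are illustrative):

```yaml
filebeat.inputs:
- type: unix
  path: /var/run/myapp.sock   # illustrative socket path
  socket_type: datagram       # new option: "stream" or "datagram"
```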

Checklist

  • [x] My code follows the style guidelines of this project
  • [x] I have commented my code, particularly in hard-to-understand areas
  • [x] I have made corresponding changes to the documentation
  • [ ] I have made corresponding changes to the default configuration files
  • [x] I have added tests that prove my fix is effective or that my feature works
  • [x] I have added an entry in CHANGELOG.next.asciidoc or CHANGELOG-developer.next.asciidoc.

Related issues

Closes #18632

+880 -392

7 comments

27 changed files

kvch

pr closed time in 32 minutes

issue closed elastic/beats

Add support for ~UDP~ Datagram unix socket input

Describe the enhancement:

There is support for Unix socket inputs in Filebeat from version 7.8.

This is done using protocol.unix as described here.

This creates a stream (TCP-style) socket, but e.g. rsyslog assumes a datagram (UDP-style) socket.

For reference, the ticket where support was added: https://github.com/elastic/beats/issues/10934

Describe a specific use case for the enhancement or feature:

Use with any syslog engine that outputs to a UDP-style (datagram) socket.

closed time in 32 minutes

phlax

issue comment elastic/beats

script processor support for Heartbeat

This issue doesn't have a Team:<team> label.

arpit1305

comment created time in 36 minutes

issue opened elastic/beats

script processor support for Heartbeat

Currently the "script" processor is not supported for Heartbeat, unlike Metricbeat. Script processors are very useful when we want to enrich data with complex conditional logic before sending it to Logstash.

Can we add support for it to Heartbeat, please?
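For context, this is roughly what the requested processor looks like where it is already supported (e.g. Metricbeat); the script body and field names here are illustrative, not part of the request:

```yaml
processors:
  - script:
      lang: javascript
      source: >
        function process(event) {
          // illustrative enrichment with conditional logic
          if (event.Get("monitor.status") === "down") {
            event.Put("alert.level", "critical");
          }
        }
```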

created time in 36 minutes

push event shirou/gopsutil

Lomanic

commit sha 07887a9e9ff29f218bdc92463f16ea3a1dfbed2b

[mem][linux] Add mocked test for VirtualMemory() and fix SReclaimable SUnreclaim retrieval

view details

Lomanic

commit sha cd25417bd7c08d6621184581fe576696910ef7b8

[mem][linux] Fix #1002 only try to parse /proc/meminfo numeric values on fields we're interested in

view details

shirou

commit sha 478eb4c76adf46eb175d1c729e25d0993ee418f3

Merge pull request #1004 from Lomanic/issue1002 [mem][linux] Fix #1002 only try to parse /proc/meminfo numeric values on fields we're interested in

view details

push time in 39 minutes

PR merged shirou/gopsutil

[mem][linux] Fix #1002 only try to parse /proc/meminfo numeric values on fields we're interested in package:mem v3

Also add mocked VirtualMemory() tests.

Fixes #1002
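The essence of the fix can be sketched as follows: parse the numeric value only inside the switch cases for fields we are interested in, so unknown or multi-valued lines (such as the `Mem:`/`Swap:` header rows emitted by some embedded kernels) no longer abort parsing. This is a standalone sketch of the approach, not the actual gopsutil code:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseMeminfo extracts a few fields of interest from /proc/meminfo
// content. The numeric value is only parsed inside the cases we care
// about, so unknown or multi-valued lines (such as the "Mem:"/"Swap:"
// header rows on some kernels) are simply skipped instead of aborting
// the whole parse with an error.
func parseMeminfo(raw string) (map[string]uint64, error) {
	out := make(map[string]uint64)
	for _, line := range strings.Split(raw, "\n") {
		fields := strings.SplitN(line, ":", 2)
		if len(fields) != 2 {
			continue
		}
		key := strings.TrimSpace(fields[0])
		value := strings.TrimSpace(strings.Replace(fields[1], " kB", "", -1))
		switch key {
		case "MemTotal", "MemFree", "MemAvailable", "Buffers", "Cached":
			// Only now do we require the value to be numeric.
			t, err := strconv.ParseUint(value, 10, 64)
			if err != nil {
				return out, err
			}
			out[key] = t * 1024
		}
	}
	return out, nil
}

func main() {
	// The multi-valued "Mem:" row is ignored instead of causing an error.
	raw := "Mem:  260579328 136073216 124506112\nMemTotal:         254472 kB\nMemFree:          121588 kB"
	m, err := parseMeminfo(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(m["MemTotal"], m["MemFree"]) // 260579328 124506112
}
```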

+686 -10

0 comments

8 changed files

Lomanic

pr closed time in 39 minutes

issue closed shirou/gopsutil

`fillFromMeminfoWithContext` cannot parse `/proc/meminfo` with multi-valued fields

Describe the bug

The function fillFromMeminfoWithContext cannot parse /proc/meminfo on a particular configuration that has multi-valued fields in /proc/meminfo.

To Reproduce

I found this running the stock configuration of Telegraf on the router, and traced it back to just running:

mem.VirtualMemory()

Expected behavior

Memory statistics should be parsed.

Environment (please complete the following information):

  • [ ] Windows: [paste the result of ver]
  • [x] Linux: [paste contents of /etc/os-release and the result of uname -a]
  • [ ] Mac OS: [paste the result of sw_vers and uname -a]
  • [ ] FreeBSD: [paste the result of freebsd-version -k -r -u and uname -a]
  • [ ] OpenBSD: [paste the result of uname -a]

This is specifically DD-WRT on kernel 4.4 running on an armel router:

root@DD-WRT:~# uname -a
Linux DD-WRT 4.4.187 #650 SMP PREEMPT Tue Aug 6 11:38:46 +04 2019 armv7l DD-WR

Additional context

Here are the complete contents of /proc/meminfo on this machine:

root@DD-WRT:~# cat /proc/meminfo
        total:    used:    free:  shared: buffers:  cached:
Mem:  260579328 136073216 124506112        0  4915200 94064640
Swap:        0        0        0
MemTotal:         254472 kB
MemFree:          121588 kB
MemShared:             0 kB
Buffers:            4800 kB
Cached:            91860 kB
SwapCached:            0 kB
Active:           106236 kB
Inactive:           8380 kB
MemAvailable:     210156 kB
Active(anon):      17956 kB
Inactive(anon):        0 kB
Active(file):      88280 kB
Inactive(file):     8380 kB
Unevictable:           0 kB
Mlocked:               0 kB
HighTotal:        131072 kB
HighFree:          66196 kB
LowTotal:         123400 kB
LowFree:           55392 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:         17992 kB
Mapped:            37884 kB
Shmem:                 0 kB
Slab:               9076 kB
SReclaimable:       2700 kB
SUnreclaim:         6376 kB
KernelStack:         624 kB
PageTables:          396 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:      127236 kB
Committed_AS:      24968 kB
VmallocTotal:    1949696 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB

I checked a few other ARM systems as well as an amd64 system and do not see the first two lines on any of them.

closed time in 39 minutes

don-code

Pull request review comment shirou/gopsutil

[mem][linux] Fix #1002 only try to parse /proc/meminfo numeric values on fields we're interested in

```diff
 func fillFromMeminfoWithContext(ctx context.Context) (*VirtualMemoryStat, *Virtu
 		value := strings.TrimSpace(fields[1])
 		value = strings.Replace(value, " kB", "", -1)
-		t, err := strconv.ParseUint(value, 10, 64)
-		if err != nil {
-			return ret, retEx,err
-		}
 		switch key {
 		case "MemTotal":
+			t, err := strconv.ParseUint(value, 10, 64)
+			if err != nil {
+				return ret, retEx, err
+			}
 			ret.Total = t * 1024
 		case "MemFree":
+			t, err := strconv.ParseUint(value, 10, 64)
+			if err != nil {
+				return ret, retEx, err
+			}
 			ret.Free = t * 1024
 		case "MemAvailable":
+			t, err := strconv.ParseUint(value, 10, 64)
+			if err != nil {
+				return ret, retEx, err
+			}
 			memavail = true
 			ret.Available = t * 1024
 		case "Buffers":
+			t, err := strconv.ParseUint(value, 10, 64)
+			if err != nil {
+				return ret, retEx, err
+			}
 			ret.Buffers = t * 1024
 		case "Cached":
+			t, err := strconv.ParseUint(value, 10, 64)
+			if err != nil {
+				return ret, retEx, err
+			}
 			ret.Cached = t * 1024
 		case "Active":
+			t, err := strconv.ParseUint(value, 10, 64)
+			if err != nil {
+				return ret, retEx, err
+			}
 			ret.Active = t * 1024
 		case "Inactive":
+			t, err := strconv.ParseUint(value, 10, 64)
+			if err != nil {
+				return ret, retEx, err
+			}
 			ret.Inactive = t * 1024
 		case "Active(anon)":
+			t, err := strconv.ParseUint(value, 10, 64)
+			if err != nil {
+				return ret, retEx, err
+			}
 			retEx.ActiveAnon = t * 1024
 		case "Inactive(anon)":
+			t, err := strconv.ParseUint(value, 10, 64)
+			if err != nil {
+				return ret, retEx, err
+			}
 			retEx.InactiveAnon = t * 1024
 		case "Active(file)":
+			t, err := strconv.ParseUint(value, 10, 64)
+			if err != nil {
+				return ret, retEx, err
+			}
 			activeFile = true
 			retEx.ActiveFile = t * 1024
 		case "Inactive(file)":
+			t, err := strconv.ParseUint(value, 10, 64)
+			if err != nil {
+				return ret, retEx, err
+			}
 			inactiveFile = true
 			retEx.InactiveFile = t * 1024
 		case "Unevictable":
+			t, err := strconv.ParseUint(value, 10, 64)
+			if err != nil {
+				return ret, retEx, err
+			}
 			retEx.Unevictable = t * 1024
 		case "WriteBack":
+			t, err := strconv.ParseUint(value, 10, 64)
+			if err != nil {
+				return ret, retEx, err
+			}
 			ret.WriteBack = t * 1024
 		case "WriteBackTmp":
+			t, err := strconv.ParseUint(value, 10, 64)
+			if err != nil {
+				return ret, retEx, err
+			}
 			ret.WriteBackTmp = t * 1024
 		case "Dirty":
+			t, err := strconv.ParseUint(value, 10, 64)
+			if err != nil {
+				return ret, retEx, err
+			}
 			ret.Dirty = t * 1024
 		case "Shmem":
+			t, err := strconv.ParseUint(value, 10, 64)
+			if err != nil {
+				return ret, retEx, err
+			}
 			ret.Shared = t * 1024
 		case "Slab":
+			t, err := strconv.ParseUint(value, 10, 64)
+			if err != nil {
+				return ret, retEx, err
+			}
 			ret.Slab = t * 1024
-		case "Sreclaimable":
+		case "SReclaimable":
+			t, err := strconv.ParseUint(value, 10, 64)
+			if err != nil {
+				return ret, retEx, err
+			}
 			sReclaimable = true
 			ret.Sreclaimable = t * 1024
-		case "Sunreclaim":
+		case "SUnreclaim":
```

Wow, I didn't notice, sorry. It happened while doing the replacement on v3.

Lomanic

comment created time in 42 minutes

pull request comment elastic/beats

system/socket: Add ip_local_out alternative

:green_heart: Build Succeeded


Build stats

  • Build Cause: Pull request #22787 opened

  • Start Time: 2020-11-30T12:44:58.218+0000

  • Duration: 21 min 21 sec

Test stats :test_tube:

Test Results: 0 failed, 228 passed, 33 skipped, 261 total

:green_heart: Flaky test report

Tests succeeded.


adriansr

comment created time in an hour

pull request comment elastic/beats

system/socket: Add ip_local_out alternative

This pull request doesn't have a Team:<team> label.

adriansr

comment created time in an hour

pull request comment elastic/beats

system/socket: Add ip_local_out alternative

Pinging @elastic/security-external-integrations (Team:Security-External Integrations)

adriansr

comment created time in an hour

PR opened elastic/beats

system/socket: Add ip_local_out alternative Auditbeat Team:Security-External Integrations bug review

What does this PR do?

This PR adds a new function alternative, __ip_local_out, for selecting a proper ip_local_out function, and fixes the guess_ip_local_out logic in order to account for this new function.

The new order of precedence is:

  • ip_local_out_sk (kernels before 3.16)
  • __ip_local_out (for kernels where ip_local_out calls are inlined)
  • ip_local_out (all others).

Why is it important?

On some systems, the socket dataset fails to start, printing the following error:

unable to guess one or more required parameters: guess_ip_local_out failed: timeout while waiting for event

This is caused by Auditbeat expecting a kprobe set to ip_local_out to trigger, but it never does. The reason is that calls to this function might have been inlined. In those cases we need to attach the kprobe to __ip_local_out instead.
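The selection logic amounts to attaching the kprobe to the first symbol in the precedence list that is actually available on the running kernel. A hypothetical sketch (not the actual Auditbeat code; in practice symbol availability would be read from /proc/kallsyms):

```go
package main

import (
	"errors"
	"fmt"
)

// pickKprobeSymbol returns the first candidate symbol that exists on the
// running kernel. Candidates are ordered by precedence, e.g.:
//   ip_local_out_sk (kernels before 3.16),
//   __ip_local_out  (kernels where ip_local_out calls are inlined),
//   ip_local_out    (all others).
func pickKprobeSymbol(candidates []string, kallsyms map[string]bool) (string, error) {
	for _, sym := range candidates {
		if kallsyms[sym] {
			return sym, nil
		}
	}
	return "", errors.New("no suitable symbol found")
}

func main() {
	// Simulate a kernel where calls to ip_local_out were inlined.
	syms := map[string]bool{"__ip_local_out": true}
	sym, err := pickKprobeSymbol([]string{"ip_local_out_sk", "__ip_local_out", "ip_local_out"}, syms)
	if err != nil {
		panic(err)
	}
	fmt.Println(sym) // __ip_local_out
}
```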

Checklist


  • [ ] My code follows the style guidelines of this project
  • [ ] I have commented my code, particularly in hard-to-understand areas
  • [ ] I have made corresponding changes to the documentation
  • [ ] I have made corresponding change to the default configuration files
  • [ ] I have added tests that prove my fix is effective or that my feature works
  • [ ] I have added an entry in CHANGELOG.next.asciidoc or CHANGELOG-developer.next.asciidoc.

Related issues

Relates #18755

+16 -7

0 comments

2 changed files

pr created time in an hour

issue comment elastic/beats

Add event rate quota per Cloud Foundry organization

Starting to think a bit about what the configuration for such a processor might look like, here's an initial proposal (not married to any of the setting names, of course):

```yaml
processors:
- rate_limiter:
    global: "500/m"       # optional, but either global or by_field or both must be specified
    by_field:             # optional, but either global or by_field or both must be specified
    - field: foo.bar      # required
      value: "56/s"       # optional, but either value or values or both must be specified
      values:             # optional, but either value or values or both must be specified
      - baz: "4500/h"     # required
```

The way the above example configuration would be interpreted is:

  • For events that have foo.bar == "baz", a rate limit of 4500 events per hour will be applied.
  • For all other events that have a foo.bar field present, a rate limit of 56 events per second will be applied.
  • For all other events, a rate limit of 500 events per minute will be applied.

This configuration would allow complex rate limiting policies to be expressed while keeping simple ones simple. For example, to enforce a rate limit of 500 events per second for events from the Cloud Foundry org acme, the configuration would look like:

```yaml
processors:
- rate_limiter:
    by_field:
    - field: cloudfoundry.org.name
      values:
      - acme: "500/s"
```

I'm deliberately leaving aside the choice of rate limiting algorithm (fixed window, sliding window, token bucket, leaky bucket) for now. Just trying to focus on the configuration UX first.

WDYT @jsoriano?

jsoriano

comment created time in an hour

pull request comment elastic/beats

Add support for reading from UNIX datagram sockets

/test filebeat

kvch

comment created time in an hour

issue comment elastic/beats

Make metricbeat sql module more versatile

This usage (getting data from an SQL table in the form of logs) would fit better in Filebeat than in Metricbeat, as the spirit is closer to how log collection operates (a single instance of each record) than to metrics (a collection of sensor readings at a certain point in time).

dagwieers

comment created time in an hour

issue comment elastic/beats

Enabled MADV_DONTNEED in systemd and docker files

Pinging @elastic/integrations-services (Team:Services)

urso

comment created time in 2 hours

issue comment elastic/beats

Enabled MADV_DONTNEED in systemd and docker files

This issue doesn't have a Team:<team> label.

urso

comment created time in 2 hours

PR closed elastic/beats

[CI][DO NOT MERGE]test: try to guess what happen Team:Automation


What does this PR do?


Why is it important?


Checklist


  • [ ] My code follows the style guidelines of this project
  • [ ] I have commented my code, particularly in hard-to-understand areas
  • [ ] I have made corresponding changes to the documentation
  • [ ] I have made corresponding change to the default configuration files
  • [ ] I have added tests that prove my fix is effective or that my feature works
  • [ ] I have added an entry in CHANGELOG.next.asciidoc or CHANGELOG-developer.next.asciidoc.

Author's Checklist


  • [ ]

How to test this PR locally


Related issues


Use cases


Screenshots


Logs


+21 -695

0 comments

1 changed file

kuisathaverat

pr closed time in 2 hours

issue opened elastic/beats

Enabled MADV_DONTNEED in systemd and docker files

The Go runtime defaults to MADV_FREE. In that case pages are still accounted to the Beat, even after the pages have been "returned" to the OS. The kernel will eventually reclaim these pages, but RSS memory usage is reported as very high (which it is not, necessarily). This seems to mess with the OOM killer at times.

As a workaround, one can tell the runtime to use MADV_DONTNEED via GODEBUG="madvdontneed=1". We should introduce this environment variable in the systemd file, init file, and Docker file by default.
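Until that change lands, the workaround can be applied manually, for example with a systemd drop-in (unit and file names illustrative):

```ini
# /etc/systemd/system/metricbeat.service.d/override.conf
[Service]
Environment=GODEBUG=madvdontneed=1
```

For Docker, the equivalent would be passing `-e GODEBUG=madvdontneed=1` to `docker run`.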

created time in 2 hours

started heetch/confita

started time in 3 hours

started Qihoo360/pika

started time in 3 hours

issue comment elastic/beats

Override Default Filebeat Module Pipeline

Thanks for that @ycombinator, I'll have a look at those links.

I take your point re. posting here in a GitHub issue. I asked a similar question here - https://discuss.elastic.co/t/specify-module-pipeline-in-kubernetes-annotation/256802 - but it's not getting a lot of traction :|

PBarnard

comment created time in 3 hours
