JustinAzoff/bro-pdns 170

Passive DNS collection using Bro

JustinAzoff/asnlookup 27

IP Address to ASN/prefix/owner/cc lookup server

JustinAzoff/bannerscanner 11

simple tcp port scanner + banner grabber

JustinAzoff/bro-statsd-plugin 9

Statsd client for Bro.

csirtgadgets/csirtg-fm-v1 8

The FASTEST way to Consume Threat Intelligence

JustinAzoff/bro-react 8

react stuff

corelight/json-tcp-lb 6

line based tcp load balancing proxy.

JustinAzoff/asnlookup-client-python 3

Python client for asnlookup server

corelight/zeek-smb-clear-state 2

reduce amount of tracked smb state

issue closed zeek/zeek

Why doesn't Zeek pick up HTTP requests on localhost?

To reproduce, on Ubuntu 20.04 as root:

apt-get update -y
apt-get install -y --no-install-recommends nginx
systemctl start nginx

Install Zeek to PREFIX=/opt/zeek (zeek version 4.0.4).

Create $PREFIX/etc/node.cfg:


Run zeekctl deploy. Confirm that zeekctl status shows it as running.

Then tail -f $PREFIX/logs/current/*.log

In another shell, curl http://localhost or curl

No http.log is created at all. Why not?

Zeek does appear to see the connection otherwise, as conn.log shows


(though weirdly resp_ip_bytes is 0; I don't know why either.)

I can even do for i in $(seq 100); do curl; done and get 100 200 OK responses. Still no http.log shows up. I can see in Wireshark that a capture of interface lo clearly shows the HTTP request from src=, dest=.

What am I missing here?

closed time in 2 days


issue comment zeek/zeek

Why doesn't Zeek pick up HTTP requests on localhost?


Do you have a reporter.log? It should have this in it.

Your interface is likely receiving invalid TCP checksums, most likely from NIC checksum offloading.  By default, packets with invalid checksums are discarded by Zeek unless using the -C command-line option or toggling the 'ignore_checksums' variable.  Alternatively, disable checksum offloading by the network adapter to ensure Zeek analyzes the actual checksums that are transmitted.


redef ignore_checksums = T;

to /opt/zeek/share/zeek/site/local.zeek

that will fix it.


comment created time in 2 days

issue opened edenhill/librdkafka

Reduced consumer performance, from rd_kafka_max_block_ms?


I've been digging into a consumer performance issue where consumer throughput maxes out around 100k-200k events/sec, even though CPU usage is very low. I initially observed this issue by recording a perf profile of the consumer and viewing the resulting profile in FlameScope. That showed a very curious pattern: the consumer was busy for about 200ms, but then did nothing for almost a full second.


I next ran the consumer using full debugging and processed the output with a small script looking for gaps in the output. That zeroed in on the following:

%7|1640796556.691|FETCHADD|app#consumer-1| [thrd:]: Removed mytopic [0] from fetch list (0 entries, opv 2): queued.min.messages exceeded
%7|1640796557.625|FETCH|app#consumer-1| [thrd:]: Topic mytopic [0] in state active at offset 89651296895 (0/100000 msgs, 0/65536 kb queued, opv 2) is fetchable
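The "small script looking for gaps" isn't shown; a minimal reconstruction of the idea (in Python, with an assumed 0.5 s threshold — not the original script) might look like:

```python
import re

# Hypothetical reconstruction of the gap-finding script; the 0.5 s
# threshold is an assumption, not a value from the original report.
GAP_THRESHOLD = 0.5

def find_gaps(lines, threshold=GAP_THRESHOLD):
    """Return (prev_ts, ts, delta) triples for consecutive librdkafka
    debug lines (e.g. "%7|1640796556.691|...") more than threshold
    seconds apart."""
    gaps = []
    prev = None
    for line in lines:
        # librdkafka debug output leads with "%<level>|<unix ts>|"
        m = re.match(r'%\d\|(\d+\.\d+)\|', line)
        if not m:
            continue
        ts = float(m.group(1))
        if prev is not None and ts - prev > threshold:
            gaps.append((prev, ts, ts - prev))
        prev = ts
    return gaps
```

Run over the two lines above, it flags the ~0.93 s gap between FETCHADD and FETCH.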

I wasn't able to determine whether there is a configuration option that controls this sleep, or some other option that could be changed to improve this behavior. In the meantime, making the following change increases the performance 3-4x:

    sed -i 's/rd_kafka_max_block_ms = 1000/rd_kafka_max_block_ms = 100/' src/rdkafka_broker.c;

How to reproduce

Starting point: everything stock from current master (2b76b65212e5efda213961d5f84e565038036270). I have also reproduced this on 1.5 and 1.6.2.

$ ./examples/rdkafka_performance  -C -p 0 -b localhost:9092  -t conn  -B 20000
[usual error snip]
% 1780000 messages (1024509505 bytes) consumed in 1004ms: 1771431 msgs/s (1019.58 MB/s)
% 3520000 messages (2026011227 bytes) consumed in 2013ms: 1748239 msgs/s (1006.24 MB/s)
% 5240000 messages (3016732823 bytes) consumed in 3013ms: 1738783 msgs/s (1001.04 MB/s)
% 6980000 messages (4018372979 bytes) consumed in 4020ms: 1736128 msgs/s (999.49 MB/s)
% 8720000 messages (5021771004 bytes) consumed in 5027ms: 1734583 msgs/s (998.93 MB/s)
% 10460000 messages (6024357351 bytes) consumed in 6030ms: 1734587 msgs/s (999.02 MB/s)
% 11440000 messages (6589884954 bytes) consumed in 6599ms: 1733379 msgs/s (998.49 MB/s)

So, great performance, everything is fine.

Now, modify the consumer so it does a bit of work for each message:

--- a/examples/rdkafka_performance.c
+++ b/examples/rdkafka_performance.c
@@ -301,6 +301,13 @@ static void msg_consume(rd_kafka_message_t *rkmessage, void *opaque) {
         cnt.offset = rkmessage->offset;
         cnt.bytes += rkmessage->len;
+        for(int i=0; i < 10;i++) {
+            char *body = strdup(rkmessage->payload);
+            size_t mylen = strlen(body);
+            free(body);
+            if (mylen == 123123)
+                printf("this will never happen");
+        }

After making this change, the performance is now:

% 360000 messages (207195922 bytes) consumed in 1636ms: 219982 msgs/s (126.61 MB/s)
% 540000 messages (310813898 bytes) consumed in 2683ms: 201194 msgs/s (115.80 MB/s)
% 720000 messages (414425623 bytes) consumed in 3730ms: 193023 msgs/s (111.10 MB/s)
% 900000 messages (518041197 bytes) consumed in 4731ms: 190234 msgs/s (109.50 MB/s)
% 1060000 messages (610139475 bytes) consumed in 5731ms: 184943 msgs/s (106.45 MB/s)
% 1220000 messages (702210869 bytes) consumed in 6734ms: 181162 msgs/s (104.27 MB/s)

However, it's not slow because it's busy; it's slow because it is idle. If you run top -d .1 and look at the CPU usage, it will look like this


Now, if we do the

sed -i 's/rd_kafka_max_block_ms = 1000/rd_kafka_max_block_ms = 100/' src/rdkafka_broker.c;

the performance is much better:

% 540000 messages (310813898 bytes) consumed in 1014ms: 532100 msgs/s (306.27 MB/s)
% 1080000 messages (621647739 bytes) consumed in 2019ms: 534679 msgs/s (307.76 MB/s)
% 1640000 messages (943918308 bytes) consumed in 3032ms: 540738 msgs/s (311.23 MB/s)
% 2200000 messages (1266391974 bytes) consumed in 4041ms: 544334 msgs/s (313.34 MB/s)
% 2780000 messages (1600059472 bytes) consumed in 5072ms: 548100 msgs/s (315.47 MB/s)
% 3340000 messages (1922388203 bytes) consumed in 6081ms: 549243 msgs/s (316.13 MB/s)
% 3920000 messages (2256312784 bytes) consumed in 7110ms: 551324 msgs/s (317.34 MB/s)
% 4480000 messages (2578838504 bytes) consumed in 8126ms: 551310 msgs/s (317.35 MB/s)

and the CPU stays busy:


It's likely still pausing for upwards of 100ms, but that is short enough to keep the system busy.


IMPORTANT: We will close issues where the checklist has not been completed.

Please provide the following information:

  • [x] librdkafka version (release number or git tag): 2b76b65212e5efda213961d5f84e565038036270
  • [x] Apache Kafka version: 2.0.0
  • [x] librdkafka client configuration: defaults from ./examples/rdkafka_performance
  • [x] Operating system: custom
  • [x] Provide logs (with debug=.. as necessary) from librdkafka
  • [x] Provide broker log excerpts
  • [ ] Critical issue

created time in 23 days

issue comment JustinAzoff/zeek-jemalloc-profiling

Cannot get profiling to work

This one is ok:

Warning: no plugin found in /opt/zeek/lib64/zeek/plugins/packages/zeek-jemalloc-profiling/

that file is not a zeekctl plugin, you can just remove it from that directory.

I just updated the pkg to prevent that from being installed there.

when I perform zeekctl jeprof.check it says jemalloc profiling enabled: False.

If you are getting that, it is because your build of jemalloc does not have profiling enabled. You'll need to rebuild jemalloc with --enable-prof.


comment created time in a month

push event JustinAzoff/zeek-jemalloc-profiling

Justin Azoff

commit sha d512563eca9a41912d6eef9543e8aa2236508066

prevent zkg from installing process as a plugin

Move things around so that zkg doesn't install

view details

push time in a month

issue comment zeek/zeek

Coredump: 4.1.0 - Due to memory exhaustion on the logger node

but RAM remained static at 29G.

That's probably normal if you are not using jemalloc. I think the glibc allocator doesn't return freed memory back to the OS. jemalloc normally does, though I believe there are some corner cases: if the system is using swap, for instance, it won't try to do that.


comment created time in a month

issue opened zeek/package-manager

Improve caching when creating multiple bundles

We have some automation that puts together multiple bundles with different sets of packages. Lately this has gotten to be quite slow. It appears that this is because git information isn't always cached.

I created a test manifest that fully pins each package to a specific commit, but there looks to be a missed opportunity to use cached package information:

here's how I am testing:

[bundle]
 = 6222dfc6948cb567e6f54b8b0e2a1f83c3b9a052
 = b462f27066ea878c91fd604387f3d585e2fef325
 = 381851e9d100406b8a5aec07884bf55feb1017b4
 = 454691a54356373d7af8324c008516f8f5725f27

then I run

zkg -vvv bundle --force ./bundles/test.bundle --manifest ./test-faster.manifest

Each run the output looks like

$time zkg -vvv bundle --force ./bundles/test.bundle --manifest ./test-faster.manifest 
2021-12-16 14:52:45 DEBUG    init Manager version 2.12.0
2021-12-16 14:52:45 DEBUG    found source clone of "zeek" at /Users/justin/.zkg/clones/source/zeek
2021-12-16 14:52:46 DEBUG    getting info on ""
2021-12-16 14:52:46 DEBUG    checked out "", branch/version "6222dfc6948cb567e6f54b8b0e2a1f83c3b9a052"
2021-12-16 14:52:46 DEBUG    getting info on ""
2021-12-16 14:52:47 DEBUG    checked out "", branch/version "b462f27066ea878c91fd604387f3d585e2fef325"
2021-12-16 14:52:47 DEBUG    getting info on ""
2021-12-16 14:52:48 DEBUG    checked out "", branch/version "381851e9d100406b8a5aec07884bf55feb1017b4"
2021-12-16 14:52:48 DEBUG    getting info on ""
2021-12-16 14:52:49 DEBUG    checked out "", branch/version "454691a54356373d7af8324c008516f8f5725f27"
2021-12-16 14:52:49 DEBUG    getting info on ""
2021-12-16 14:52:50 DEBUG    checked out "", branch/version "6222dfc6948cb567e6f54b8b0e2a1f83c3b9a052"
2021-12-16 14:52:50 DEBUG    getting info on ""
2021-12-16 14:52:50 DEBUG    checked out "", branch/version "b462f27066ea878c91fd604387f3d585e2fef325"
2021-12-16 14:52:50 DEBUG    getting info on ""
2021-12-16 14:52:51 DEBUG    checked out "", branch/version "381851e9d100406b8a5aec07884bf55feb1017b4"
2021-12-16 14:52:51 DEBUG    getting info on ""
2021-12-16 14:52:52 DEBUG    checked out "", branch/version "454691a54356373d7af8324c008516f8f5725f27"
Bundle successfully written: ./bundles/test.bundle

real	0m9.381s

I figure this can't work if you used = master as the version, but pinning to a specific commit should be cacheable, right?
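The caching opportunity could be sketched like this (illustrative Python, not zkg's actual API — the names here are made up): info for a package pinned to an exact commit is immutable, so it can be memoized per (repo, commit) pair.

```python
# Sketch of the missed caching opportunity: a fully pinned commit never
# changes, so its package info only needs to be fetched once per run.
# package_info and fetch are illustrative names, not zkg functions.
_info_cache = {}

def package_info(repo, commit, fetch):
    """Return info for (repo, commit), calling fetch only on a miss."""
    key = (repo, commit)
    if key not in _info_cache:
        _info_cache[key] = fetch(repo, commit)
    return _info_cache[key]
```

With something like this, the second "getting info" pass over the same four commits would be free.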

created time in a month


issue opened corelight/cve-2021-44228

LOG4J_RCE tag is added to all http log entries

    add c$http$tags[LOG4J_RCE];

is in the wrong place and is being added to ALL http logs.

created time in a month

issue comment zeek/zeek

Reduce redundant string allocations

Yeah, it's a little improvement, but it does cut down on a bunch of allocations. The tricky part is that a ton of the baselines change.


comment created time in a month

issue comment zeek/zeek

half duplex http connections can cause unbounded state growth

Oh, yes, I forgot about this. No rush. I wanted to look into that comment:

+       #If there is a status code, don't expire it.
+       #Can I just return -1 here to remove the timer?

I'm not sure if there is a way for expire_func to return something to indicate that the item should never expire. Triggering every 5 minutes and doing nothing works; it just isn't as clean.


comment created time in a month

issue comment zeek/zeek

Some input related errors do not include the file/line information

Ah, the filename and the line itself are probably good enough. The Location is just going to be the script that has the add_input call with the filename in question.


comment created time in 2 months

issue comment zeek/zeek

Some input related errors do not include the file/line information

The errors like this one are pretty good:

EventErrorEvent, Could not convert line 'name\x09127.0.0.1' of ../input.log to Val. Ignoring line., Reporter::WARNING

It contains the line that failed to load and which file it was seen in. If it could have the line number of the input file that could also be nice, but not critical.

Knowing the name of the input file that failed to load is usually enough context: from the filename being loaded you can infer which script is responsible and what you need to fix.


comment created time in 2 months

push event JustinAzoff/pcap_simplify

Johanna Amann

commit sha 409d6f767dad8488a80ff1a4a6afcafdcaee211c

Fix module path

view details


commit sha 50371b6b8a2a2ad5d14112f89713b31d6359e6b7

Merge pull request #1 from 0xxon/topic/johanna/fix-module-path Fix module path

view details

push time in 2 months

pull request comment JustinAzoff/pcap_simplify

Fix module path

Oh, yes, that definitely needs to not be typoed!


comment created time in 2 months

PR merged JustinAzoff/pcap_simplify

Fix module path

Fix typo in the module path that can cause some complaints from Go.

+1 -1

0 comment

1 changed file


pr closed time in 2 months

issue comment zeek/zeek

SQLite writer painfully slow

This is likely fsync overhead. Are you on HDDs rather than SSDs? If you run zeek in a tmpfs directory (check mount | grep tmpfs), does it run much faster?

The inserts can't really all be run in a single transaction, but they should probably be batched. There was some work being done to rework the zeek logging API to be more batch friendly, but I don't recall the status of that.

In the meantime there are a few options. If you are processing a ton of pcaps, you can probably do all the processing under tmpfs and then move the resulting databases to persistent storage when complete. If you don't have enough free memory to use tmpfs, you could disable fsync in SQLite. As long as you aren't worried about a sudden power failure during processing, there isn't much risk involved. Something like WAL mode could also be an improvement.


comment created time in 2 months

issue comment zeek/zeek

Missing HTTP host + uri in http.log

Does this help?


comment created time in 2 months

issue opened zeek/zeek

Some input related errors do not include the file/line information

We currently have a test that shows this issue: btest/scripts/base/frameworks/input/invalidset.zeek

The baseline is this (repeated twice; just showing the first 4):

EventErrorEvent, Could not convert line 'name\x09127.0.0.1' of ../input.log to Val. Ignoring line., Reporter::WARNING
EventErrorEvent, Error while reading set or vector, Reporter::WARNING
EventErrorEvent, Invalid value for subnet:, Reporter::WARNING
EventErrorEvent, Skipping input with missing non-optional value, Reporter::WARNING

The first error is good: it mentions the invalid line and the file it comes from. However, the other errors don't do this. "Skipping input with missing non-optional value" and "Error while reading set or vector" can make it very difficult to track down the root cause.

It is easier to see this if the test is modified to have only the first line in the input file:

@TEST-START-FILE input.log
#separator \x09
#fields i       s
name    -

In that case, the btest outputs only this:

warning: Skipping input with missing non-optional value

created time in 2 months

issue comment JustinAzoff/dstat-raspberry-pi

dstat for raspberry pi constantly crashing

I don't think that error is caused by this script. Unfortunately the error doesn't say what 'name' is when it crashes. All this script does is read the value in /sys/devices/virtual/thermal/thermal_zone0/temp and divide it by 1000, and that shouldn't result in a value of 'infinity'.

maybe try running something like

dstat -whatever-flags-you-are-using ; cat /sys/devices/virtual/thermal/thermal_zone0/temp

to see what that file contains when dstat crashes.

You could also try running a second instance of dstat without the raspberry pi plugin to see if that one crashes as well.
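What the plugin reads can be sketched as (the path is from the comment above; the function itself is illustrative, not the plugin's actual code):

```python
# Read the CPU temperature the same way described above: the kernel
# exposes millidegrees Celsius as an integer, so divide by 1000.
def read_cpu_temp(path="/sys/devices/virtual/thermal/thermal_zone0/temp"):
    with open(path) as f:
        raw = f.read().strip()
    return int(raw) / 1000.0
```

If the file ever contained something non-numeric, this would raise ValueError rather than produce 'infinity', which is why the crash probably originates elsewhere.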


comment created time in 2 months

issue comment nturley3/zeek-http-suspect-data-exposure

Regex Performance Improvement

Yes, there's also another optimization that can be done to avoid using http_entity_data.

You can see an example of that here

Using a file analyzer allows you to remove the analyzer after enough data has been seen, so you can avoid running a regex on an entire 20GB download.
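The idea can be illustrated in Python (a sketch of the technique, not Zeek code; the 4096-byte cutoff is an arbitrary assumption): stop feeding data to the matcher once enough has been seen.

```python
import re

MAX_INSPECT = 4096  # assumed cutoff; pick whatever prefix size you need

def scan_prefix(chunks, pattern):
    # Analogue of removing the file analyzer after N bytes: only the
    # first MAX_INSPECT bytes are ever handed to the regex, no matter
    # how large the stream is.
    seen = 0
    parts = []
    for chunk in chunks:
        take = chunk[: MAX_INSPECT - seen]
        parts.append(take)
        seen += len(take)
        if seen >= MAX_INSPECT:
            break  # "remove the analyzer": stop consuming data
    return re.search(pattern, b"".join(parts)) is not None
```

A 20GB stream costs the same as a 4KB one, because everything past the cutoff is never examined.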


comment created time in 2 months

issue comment zeek/zeek

HTTP HOST Value Bug for IPv6

Note we have the same regex here:

though in that context it's probably fine to strip the port, but it still has the v6 issue.

In a way this is similar to how http_request has original_URI and unescaped_URI. From a normalization standpoint most people probably want the header with the port stripped; there's id.resp_p after all. From a malware standpoint it could be useful to know if an http client is sending an extraneous port.
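For illustration, a port-stripping normalization that keeps IPv6 literals intact could look like this (a Python sketch, not the regex Zeek uses):

```python
import re

# Strip a trailing :port from an HTTP Host header value without
# mangling bracketed IPv6 literals. A bare (unbracketed) IPv6 address
# is left alone, since its colons are not a port separator.
_HOST_RE = re.compile(r'^(\[[^\]]*\]|[^:]*)(:\d+)?$')

def strip_port(host):
    m = _HOST_RE.match(host)
    return m.group(1) if m else host
```

The first alternative consumes a whole bracketed literal before the optional port; anything with multiple unbracketed colons (a bare v6 address) fails to match and passes through untouched.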


comment created time in 2 months

issue comment zeek/zeek

Coredump at /usr/local/src/zeek-4.1.0/src/

Was this maybe out of memory? The crash is inside tcmalloc.


comment created time in 3 months