hacktobeer (Google, Netherlands): Having fun at Google

google/turbinia 519

Automation and Scaling of Digital Forensics Tools

log2timeline/dftimewolf 176

A framework for orchestrating forensic collection, processing and data export

hacktobeer/cloud-forensics-utils 0

Python library to carry out DFIR analysis on the Cloud

hacktobeer/commando-vm 0

Complete Mandiant Offensive VM (Commando VM), a fully customizable Windows-based pentesting virtual machine distribution. commandovm@fireeye.com

hacktobeer/dftimewolf 0

A framework for orchestrating forensic collection, processing and data export

hacktobeer/forseti-security 0

Forseti Security

hacktobeer/gcat 0

A fully featured backdoor that uses Gmail as a C&C server

hacktobeer/go-panasonic 0

CLI client and golang package to control your air conditioner through the Panasonic Comfort Cloud.

issue comment google/turbinia

Log or correlate node names

Add the below to the worker and server pod YAML configuration files, and as far as I know the node name will then be available in the pod as an environment variable.

env:
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
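
On the Turbinia side, reading that variable could look like the sketch below (a minimal illustration, assuming the downward-API snippet above is applied; the fallback and log format here are my own, not Turbinia's actual logger setup):

import logging
import os

# NODE_NAME is injected by the Kubernetes downward API snippet above;
# outside of Kubernetes, fall back to the local node name.
node_name = os.environ.get('NODE_NAME', os.uname().nodename)

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s [' + node_name + '] %(levelname)s %(message)s')
logging.getLogger(__name__).info('worker starting on %s', node_name)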

On Fri, Jan 14, 2022, 23:03 Wajih Yassine ***@***.***> wrote:

Some thoughts on how we can expose/correlate the node name to the logging is

  • Update the environment variables of the K8s deployment script to pass the node name into the pod as an environment variable for Turbinia to read. This environment variable will only be available for Kubernetes setups, however.
  • Mount the /etc/hostname path from the node and read that file to grab the node name. For non-K8s environments, this would be redundant with calling uname.nodename.
  • When logger.setup is initialized, append the pod name to a text file stored in the TMP_RESOURCE_DIR path and create a new variable like RESOURCE_POD_LIST noting the file name. This fix may be the most straightforward, since we would not have to retrieve the node name and add it to the logging; we would just need to SSH into a node and grep for the pod name to do the correlation.


aarontp

comment created time in 2 days

PullRequestReviewEvent

PR opened google/turbinia

Store e2e worker/server logs.

Gather and store e2e worker and server logs after a run for easy debugging. Fixes #857.

+4 -0

0 comment

1 changed file

pr created time in 3 days

create branch hacktobeer/turbinia

branch : e2e-debug-logs

created branch time in 3 days

issue comment google/turbinia

e2e tests: Gather worker/server logs

Getting the actual log files from the workers/server is a bit more complicated, as we run a docker image (worker/server) in a GCE Container-Optimized OS instance and these get created dynamically each time a test runs. A smarter way would be to dump both server and worker logs from Stackdriver, as Container-Optimized OS stdout/stderr is logged there anyway.
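
A rough sketch of that dump with the google-cloud-logging Python client (the project name, time window, and cos_containers log name are assumptions for illustration, not what the e2e scripts actually do):

from google.cloud import logging  # pip install google-cloud-logging

client = logging.Client(project='my-gcp-project')
# Container-Optimized OS forwards container stdout/stderr to Cloud
# Logging; filter on the instance logs within the test run's window.
log_filter = (
    'resource.type="gce_instance" '
    'logName:"cos_containers" '
    'timestamp>="2022-01-14T00:00:00Z"')
for entry in client.list_entries(filter_=log_filter):
    print(entry.timestamp, entry.payload)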

aarontp

comment created time in 3 days

issue comment google/turbinia

e2e tests: Gather worker/server logs

--debug-logs has been added to all e2e tests.

Will fetch worker/server logs in a post-test task.

aarontp

comment created time in 3 days

PullRequestReviewEvent

issue comment google/timesketch

ELASTIC_HOST/PORT still needed for docker-compose setup to work

Let's move to Slack. Last comments on this:

  • opensearch does not need ports defined in the configuration, as it does not need to be exposed to the host (i.e. outside of the docker-compose network).
  • if you can connect to opensearch from inside one of the timesketch containers, it all works networking-wise.

On Wed, Jan 12, 2022, 18:18 Mark Hallman ***@***.***> wrote:

hacktobeer, thanks so much. That does help fill in some of the holes in my Docker networking knowledge. I was on that path, but the issue is that the ping command is not in these containers; neither is apt or apt-get. I'm not sure what the base image is that is being used. These containers are different from the timesketch/elasticsearch containers that do have ping. All of the container-to-container networking can be verified in those containers because the tools are there. Ideas on other approaches to test container-to-container networking?

docker exec -u root -it opensearch /bin/bash
bash-4.2# ping timesketch-web
bash: ping: command not found
bash-4.2#

Back to the issue at hand. We know that the docker-compose.yml needs to have the ports added to the opensearch section. I did that, but that still does not fix the problem. My simple test is: can I reach opensearch from my browser at http://localhost:9200? I cannot, even after making the ports change to the docker-compose.yml and restarting all the containers (docker-compose down && docker-compose up -d).

From all the network data that I collected, I can't find anything glaringly wrong. Thoughts?

Since this is more conversational, what do you think about moving this to the Open Source DFIR Slack Workspace?


hacktobeer

comment created time in 5 days

issue comment google/timesketch

ELASTIC_HOST/PORT still needed for docker-compose setup to work

Okay, let's try to filter out some info.

  1. docker compose creates its own network; host names are auto-resolved per the service name in the docker-compose config.
  2. docker compose instances can connect to other instances without having to define ports/expose definitions. 'ports' is for exposure to the external host; 'expose' is only for documentation (docker ps etc.) and has no effect on actual networking (except for a few edge cases).

So in a docker-compose setup you can have a container (opensearch) start a service listening on port 9200, and all other containers are able to connect to it using e.g. nc opensearch 9200. No need to define any ports/expose in the configuration.

You can test this by getting a shell in one of the containers (e.g. docker exec -ti [container_id] sh) and nc-ing/pinging any of the other containers by name; for containers that lack those tools, see the check sketched below.
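
A minimal connectivity check, assuming Python is available inside the container (the service name and port are the ones from this thread):

import socket

# 'opensearch' resolves via Compose's embedded DNS on the shared network.
try:
    with socket.create_connection(('opensearch', 9200), timeout=5):
        print('TCP connection to opensearch:9200 OK')
except OSError as exc:
    print('Connection failed:', exc)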

Hope that clarifies some networking things. See https://docs.docker.com/compose/networking/#multi-host-networking

hacktobeer

comment created time in 5 days

issue opened google/timesketch

ELASTIC_HOST/PORT still needed for docker-compose setup to work

Describe the bug

When following the quick start guide below to install Timesketch, it will not work, as it still tries to look up ELASTIC_HOST and ELASTIC_PORT.

To Reproduce

Steps to reproduce the behavior:

  1. Follow https://github.com/google/timesketch/blob/master/docs/guides/admin/install.md
  2. Login and create a New Investigation -> 500 internal server error
  3. Check the worker log for the error below
[2022-01-11 11:59:19,838] timesketch.app/ERROR Exception on /api/v1/sketches/1/ [GET]
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1949, in full_dispatch_request
    rv = self.dispatch_request()
<...cut...>
  File "/usr/local/lib/python3.8/dist-packages/opensearchpy/connection/http_urllib3.py", line 136, in __init__
    super(Urllib3HttpConnection, self).__init__(
  File "/usr/local/lib/python3.8/dist-packages/opensearchpy/connection/base.py", line 155, in __init__
    if ":" in host:  # IPv6
TypeError: argument of type 'NoneType' is not iterable
  4. Add ELASTIC_HOST and ELASTIC_PORT to timesketch.conf and restart the timesketch containers (see the sketch below).
  5. Create a new investigation and see it succeed.
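
The workaround in step 4 amounts to something like the following in timesketch.conf (the file uses Python syntax; the host value here is an example for the docker-compose setup, not authoritative):

# Point Timesketch at the OpenSearch container explicitly.
# 'opensearch' is the docker-compose service name; adjust if yours differs.
ELASTIC_HOST = 'opensearch'
ELASTIC_PORT = 9200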

Expected behaviour

I expect the setup guide to give me a working Timesketch setup ;)

Desktop (please complete the following information):

  • OS: MacOS
  • Browser: Chrome
  • Version: 96.0.4664.110

created time in 6 days

PullRequestReviewEvent
PullRequestReviewEvent

push event google/turbinia

alimez

commit sha 7e01fba3cfad5cafc7d736a2df9f0faf6fb44f34

Google cloud bulk processing bug fix (#969) Set optional parameters to None in googleclouddisk job to prevent argparse error.

view details

push time in 7 days

PR merged google/turbinia

Google cloud bulk processing bug fix

googleclouddisk evidence type fails to initiate as two args are not in its namespace. This PR fixes that.

+5 -0

0 comment

1 changed file

alimez

pr closed time in 7 days

PullRequestReviewEvent
PullRequestReviewEvent

Pull request review comment google/turbinia

Close results on early task setup failures

 def process_result(self, task_result):
     ...
     Returns:
       TurbiniaJob|None: The Job for the processed task, else None
     """
+    if task_result.successful is None:

I think we want to measure if this happens often; can you add a metric for this? This file already has some metrics defined, so it should be easy to add: https://github.com/google/turbinia/blob/72c918a59ff9b89691cd11715b6aef109140d241/turbinia/task_manager.py#L57
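
Something along these lines, mirroring the existing prometheus_client definitions in task_manager.py (the metric name and description here are made up for illustration):

from prometheus_client import Counter

# Hypothetical metric; incremented in process_result() when
# task_result.successful is None.
turbinia_result_success_invalid = Counter(
    'turbinia_result_success_invalid',
    'Number of task results with an undefined success status')

# ...then inside process_result():
#   if task_result.successful is None:
#     turbinia_result_success_invalid.inc()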

aarontp

comment created time in 10 days

PullRequestReviewEvent

PR opened google/turbinia

Upgrade Celery to address security vulnerability

Upgrade Celery to >=5.0.0 to address the vulnerability below. https://github.com/google/turbinia/security/dependabot/requirements.txt/celery/open

Also: this new Celery version does not require the 'celery' parameter on startup and depends on a newer version of vine.

+3 -3

0 comment

2 changed files

pr created time in 10 days

push event hacktobeer/turbinia

hacktobeer

commit sha d31cac1c98686e9dc01672e6728c712fec2b8b44

New celery version does not require celery parameter

view details

push time in 10 days

create branch hacktobeer/turbinia

branch : celery-culn

created branch time in 10 days

push event google/turbinia

hacktobeer

commit sha a67e463e359c5ce448f1a59c2cd796b858dff8a7

Remove server (-S) switch (#962) * Remove -S/--server references to prepare for PR #939

view details

push time in 17 days

PR merged google/turbinia

Remove server (-S) switch

To prepare for the removal of the server (-S) switch in #939 this PR removes all references to the switch in documentation, helper tools and startup scripts.

+10 -14

0 comment

7 changed files

hacktobeer

pr closed time in 17 days

PR opened google/turbinia

Remove server (-S) switch

To prepare for the removal of the server (-S) switch in #939 this PR removes all references to the switch in documentation, helper tools and startup scripts.

+10 -14

0 comment

7 changed files

pr created time in 18 days

create branch hacktobeer/turbinia

branch : remove-server-swtich

created branch time in 18 days

issue closed google/turbinia

Phone Numbers in Israel

Hi! I translated a Hebrew Wikipedia page to English, describing in detail the structure and history of all numbers and prefixes in Israel. Anybody interested, please write to mguttman4@gmail.com and I'll send the document to him/her.

closed time in 24 days

mguttman

push eventgoogle/turbinia

Diana Kramer

commit sha 2a9bbef543e1ecff6dc7b7be25fd7488e5f87ebc

Prepend the request ID to the output path (#946) Prepend the request ID to the output path so all output is saved in a request ID folder.

view details

push time in a month

PR merged google/turbinia

Prepend the request ID to the output path

Fix #914

+17 -10

4 comments

3 changed files

dianakramer

pr closed time in a month

issue closed google/turbinia

Results should be stored grouped per request

Currently Turbinia stores results in a flat folder structure with a folder per task. It would be a lot more convenient if output was stored in a structure with a folder per request. Example: https://github.com/google/turbinia/commit/b9bf9dcaac2cbadb217278c77a5b90d4d286760b

closed time in a month

hacktobeer

pull request comment google/turbinia

Prepend the request ID to the output path

@dianakramer Thank you for this contribution and congratulations on your first Google org PR!

dianakramer

comment created time in a month
