google/timesketch 1823

Collaborative forensic timeline analysis

google/turbinia 519

Automation and Scaling of Digital Forensics Tools

google/docker-explorer 379

A tool to help forensicate offline docker acquisitions

google/cloud-forensics-utils 194

Python library to carry out DFIR analysis on the Cloud

log2timeline/dftimewolf 176

A framework for orchestrating forensic collection, processing and data export

GoogleCloudPlatform/security-response-automation 101

Take automated actions against threats and vulnerabilities.

aarontp/artifacts 0

Digital Forensics Artifact Repository

aarontp/binplist 0

Binary property list (plist) parser

aarontp/cloud-forensics-utils 0

Python library to carry out DFIR analysis on the Cloud

release google/turbinia

20220113

released time in 3 days

created tag google/turbinia

tag 20220113

Automation and Scaling of Digital Forensics Tools

created time in 3 days

issue comment google/turbinia

Log or correlate node names

FYI @wajihyassine

aarontp

comment created time in 3 days

issue opened google/turbinia

Log or correlate node names

Now that we are using GKE with scaling, nodes go up and down frequently. This leaves a large number of log files that are easily attributed to a pod, but it is not easy to see which node those pods were running on after the node has been turned down, which can make debugging difficult. We should find a way to log the node name in addition to the pod name, or find a way to correlate the two after the fact. It's possible there is a simple kubectl command that can do this as well.
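
A minimal sketch of the correlation idea (not part of the issue text), assuming the official kubernetes Python client; the function name and namespace are illustrative:

from kubernetes import client, config


def log_pod_to_node_mapping(namespace='default'):
  """Sketch: print each pod together with the node it is scheduled on."""
  # Assumes a local kubeconfig; inside a cluster use config.load_incluster_config().
  config.load_kube_config()
  api = client.CoreV1Api()
  for pod in api.list_namespaced_pod(namespace).items:
    # pod.spec.node_name is the node the pod was scheduled on.
    print('{0!s} -> {1!s}'.format(pod.metadata.name, pod.spec.node_name))

Capturing this mapping periodically, or at task start, would let logs be tied back to a node even after it has been scaled down.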

created time in 3 days

delete branch google/turbinia

delete branch : dfvfs-none-fix

delete time in 4 days

push event google/turbinia

Aaron Peterson

commit sha dad6c3c59b761295877721a70e07f66e7a4cdb5c

Update preprocessor path_spec handling (#971)

push time in 4 days

PR opened google/turbinia

Update preprocessor path_spec handling
+3 -4

0 comments

1 changed file

pr created time in 4 days

create branch google/turbinia

branch : dfvfs-none-fix

created branch time in 4 days

push event google/turbinia

Wajih Yassine

commit sha 05ad566e690f093d6f21e2ce4043a9f209de5c41

add dfimagetools to requirements.txt (#970)

push time in 5 days

PullRequestReviewEvent

Pull request review comment google/turbinia

Adding group_name and reason fields

 def main():
       required=False)
   parser_rawdisk.add_argument(
       '-n', '--name', help='Descriptive name of the evidence', required=False)
+  parser_rawdisk.add_argument(

Hi @youjitwo! Welcome to Turbinia, and thanks/congrats on your first PR :)

I know this PR is still in draft mode, but I just wanted to make a high-level comment here that these flags are not specific to any piece of evidence, so we could probably add them to the base flags instead of each sub-command. See --request_id as a related example. Thanks!
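
For illustration only (not the actual turbiniactl code), the suggested pattern with argparse looks roughly like this: flags registered on the base parser apply to every sub-command, so they only need to be declared once.

import argparse

parser = argparse.ArgumentParser(description='Illustrative CLI sketch')
# Request-level flags live on the base parser and are shared by all sub-commands.
parser.add_argument('--group_name', required=False, help='Group name for the request')
parser.add_argument('--reason', required=False, help='Reason for the request')

subparsers = parser.add_subparsers(dest='command')
# Evidence-specific flags stay on the relevant sub-command.
parser_rawdisk = subparsers.add_parser('rawdisk')
parser_rawdisk.add_argument(
    '-n', '--name', help='Descriptive name of the evidence', required=False)

# Base flags are passed before the sub-command name.
args = parser.parse_args(['--reason', 'case-123', 'rawdisk', '--name', 'disk1'])
print(args.group_name, args.reason, args.name)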

youjitwo

comment created time in 6 days

PullRequestReviewEvent

delete branch aarontp/turbinia

delete branch : empty-evidence-fix

delete time in 9 days

push event google/turbinia

Aaron Peterson

commit sha 34316bfdea36860505763d45e511b54f9b622146

Close results on early task setup failures (#966)

* Update test to check for success
* Check specifically for None
* Add server side check for invalid success status
* splelling
* check for status prior to updating it
* Set result in test
* mock proper object
* check create_results call args
* Clarify log message
* Typo
* Add metric for invalid success status

push time in 9 days

PR merged google/turbinia

Close results on early task setup failures

The TurbiniaTaskResult.successful status was not getting updated properly upon early failures, which meant that the client thought the Task was still running in this case. Calling close() on the result will make sure that the worker log gets saved as well.

Also adding a server-side check for this condition that forcefully sets successful=False, which allows us to recover in case there are other places where the results aren't closed properly.
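
A hypothetical, simplified sketch of the server-side check described here (not the actual Turbinia code):

import logging

log = logging.getLogger(__name__)


def check_task_result(task_result):
  """Sketch: force a definite status on results that were never closed.

  If a task fails during early setup, successful can still be None at this
  point, which the client would otherwise read as a still-running task.
  """
  if task_result.successful is None:
    log.warning('Task result has no success status, setting successful=False')
    task_result.successful = False
  return task_result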

+27 -2

0 comments

3 changed files

aarontp

pr closed time in 9 days

PR merged google/turbinia

Add linux, macOS recipes, new bodyfile job (#957)

Changes summary

This pull request proposes to create a new Turbinia job (FileSystemTimelineJob) and associated task (FileSystemTimelineTask). The purpose of this job is to generate a file system timeline in bodyfile format using log2timeline/dfimagetools (list_file_entries).

A new BodyFile evidence class is created and is used as input to Plaso so it can process the generated bodyfile.

Two additional recipes are created for Linux and MacOS triage with Plaso and FileSystemTimelineJob.
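
For context (not part of the PR text): the bodyfile format referred to above is the Sleuth Kit's pipe-delimited timeline format, which is also what fls and dfimagetools' list_file_entries emit. A representative, made-up entry looks like:

MD5|name|inode|mode_as_string|UID|GID|size|atime|mtime|ctime|crtime
0|/etc/passwd|131078|r/rrw-r--r--|0|0|1042|1641859200|1641859200|1641859200|1641859200

The four trailing fields are Unix timestamps, which is what Plaso consumes when it turns the bodyfile into a timeline.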

Related Issues

Fixes #957 Fixes #958 Fixes #959

+372 -59

2 comments

17 changed files

jleaniz

pr closed time in 9 days

push event google/turbinia

Juan Leaniz

commit sha e5d4ce0313642dd6f1388c43beba8f61d28e9fd4

Add linux, macOS recipes, new bodyfile job (#957) (#964)

* Add linux, macOS recipes, new bodyfile job (#957)
* style and docstring fixes
* docstring fix
* fix imports and docstring
* add unit tests
* fix imports
* modify plaso file filters
* revert .devcontainer changes
* fix docstring
* changes to address review comments
* fix typo
* fix import order

push time in 9 days

issue closed google/turbinia

Create a FLS Job

Create a job for Sleuth Kit's fls to timeline a given filesystem. Note that some filesystems, such as XFS, may not work, so ensure that the Evidence type has a filter for the targeted OS types.
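
Purely as an illustration of what such a job would wrap (the issue was ultimately closed by the dfimagetools-based FileSystemTimelineJob above), a direct fls invocation producing a bodyfile could look roughly like this; the paths are made up:

import subprocess

image_path = '/evidence/disk.raw'      # hypothetical input image
bodyfile_path = '/tmp/disk.bodyfile'   # hypothetical output location

# fls -r recurses into directories; -m / emits bodyfile (timeline) lines
# with "/" as the mount-point prefix.
with open(bodyfile_path, 'w') as output:
  subprocess.run(
      ['fls', '-r', '-m', '/', image_path], stdout=output, check=True)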

closed time in 9 days

wajihyassine

issue closed google/turbinia

Create a default recipe for processing macos artifacts with Plaso

Create a default recipe, similar to the Windows Triage recipe, that will process a subset of important artifacts for macOS hosts (e.g. /var/log, persistence locations, user home directories), which would speed up processing time.

closed time in 9 days

wajihyassine

issue closed google/turbinia

Create a default recipe for processing Linux artifacts with Plaso

Create a default recipe, similar to the Windows Triage recipe, that will process a subset of important artifacts for Linux hosts (e.g. /var/log, crontabs, user home directories), which would speed up processing time.

closed time in 9 days

wajihyassine

PullRequestReviewEvent

Pull request review comment google/turbinia

Add linux, macOS recipes, new bodyfile job (#957)

# -*- coding: utf-8 -*-
# Copyright 2022 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Task to run dfimagetools FileEntryLister on disk partitions."""

from __future__ import unicode_literals

import os

from turbinia import TurbiniaException
from turbinia.workers import TurbiniaTask
from turbinia.evidence import EvidenceState as state
from turbinia.evidence import BodyFile

if TurbiniaTask.check_worker_role():
  try:
    from dfvfs.helpers import volume_scanner
    from dfimagetools import file_entry_lister
  except ImportError as exception:
    message = 'Could not import libraries: {0!s}'.format(exception)
    raise TurbiniaException(message)


class FileSystemTimelineTask(TurbiniaTask):

  REQUIRED_STATES = [state.ATTACHED]

  def run(self, evidence, result):
    """Task to execute (dfimagetools) FileEntryLister.

    Args:
        evidence (Evidence object):  The evidence we will process.
        result (TurbiniaTaskResult): The object to place task results into.

    Returns:
        TurbiniaTaskResult object.
    """
    bodyfile_output = os.path.join(self.output_dir, 'file_system.bodyfile')
    output_evidence = BodyFile(source_path=bodyfile_output)
    number_of_entries = 0

    # Set things up for the FileEntryLister client. We will scan all
    # partitions in the volume.
    volume_scanner_options = volume_scanner.VolumeScannerOptions()
    volume_scanner_options.partitions = ['all']

    # Create the FileEntryLister client and generate the path specs
    # for all available partitions.
    entry_lister = file_entry_lister.FileEntryLister()

I chatted with jleaniz offline and most of this is directly adapted from the upstream script, so it probably doesn't need additional review.

jleaniz

comment created time in 9 days

PullRequestReviewEvent

Pull request review comment google/turbinia

Close results on early task setup failures

 def process_result(self, task_result):
     Returns:
       TurbiniaJob|None: The Job for the processed task, else None
     """
+    if task_result.successful is None:

Good idea. Done.

aarontp

comment created time in 9 days

PullRequestReviewEvent

push event aarontp/turbinia

Aaron Peterson

commit sha a5146f565ece4a2ac9240d153d492001504254c7

Add metric for invalid success status

push time in 9 days

push event google/turbinia

hacktobeer

commit sha 4559d4af9267762a8995cfc415099a80a22ee4d3

Upgrade Celery to address security vulnerability (#967)

* Add storage_file parameter to log2timeline
* Upgrade celery due to vulnerability warning.
* New celery version does not require celery parameter

push time in 10 days

PR merged google/turbinia

Upgrade Celery to address security vulnerability

Upgrade Celery to >=5.0.0 to address the vulnerability below: https://github.com/google/turbinia/security/dependabot/requirements.txt/celery/open

Also: this new celery version does not require the 'celery' parameter on startup and depends on a newer version of vine.
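
Illustratively, the corresponding requirements.txt pin would look something like the following (the exact specifiers in the PR may differ):

# Celery 5 also pulls in a newer vine release as a dependency.
celery>=5.0.0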

+3 -3

0 comments

2 changed files

hacktobeer

pr closed time in 10 days
