google/docker-explorer 379

A tool to help forensicate offline docker acquisitions

log2timeline/dftimewolf 176

A framework for orchestrating forensic collection, processing and data export

google/GiftStick 114

1-Click push forensics evidence to the cloud

sa3eed3ed/artifacts 0

Digital Forensics Artifact Repository

sa3eed3ed/dftimewolf 0

A framework for orchestrating forensic collection, processing and data export

sa3eed3ed/timesketch 0

Collaborative forensic timeline analysis

sa3eed3ed/turbinia 0

Automation and Scaling of Digital Forensics Tools

Pull request review comment: google/cloud-forensics-utils

Add BigQuery Jobs functionality and tests

+# -*- coding: utf-8 -*-
+# Copyright 2021 Google Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""Google BigQuery functionalities."""
+
+from typing import TYPE_CHECKING, List, Dict, Any, Optional
+from libcloudforensics.providers.gcp.internal import common
+
+if TYPE_CHECKING:
+  import googleapiclient.discovery
+
+_BIGQUERY_API_VERSION = 'v2'
+
+class GoogleBigQuery:
+  """Class to call Google BigQuery APIs.
+
+  Attributes:
+    project_id: Google Cloud project ID.
+  """
+
+  def __init__(self, project_id: Optional[str] = None) -> None:
+    """Initialize the GoogleBigQuery object.
+
+    Args:
+      project_id: Optional. Google Cloud project ID.
+    """
+
+    self.project_id = project_id
+
+  def GoogleBigQueryApi(self) -> 'googleapiclient.discovery.Resource':
+    """Get a Google BigQuery service object.
+
+    Returns:
+      A Google BigQuery service object.
+    """
+
+    return common.CreateService('bigquery', _BIGQUERY_API_VERSION)
+
+  def ListBigQueryJobs(self) -> List[Dict[str, Any]]:
+    """List jobs of Google BigQuery within a project.
+
+    Returns:
+      List[Dict[str, Any]]: List of jobs.

No need to add type here

      List of jobs.
dianakramer

comment created 14 days ago
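
The diff above is cut off before the body of ListBigQueryJobs, so purely as an illustration of how such a listing call could look against the BigQuery v2 API through the discovery-based service returned by GoogleBigQueryApi() (this is an assumption, not necessarily the PR's actual implementation):

  def ListBigQueryJobs(self) -> List[Dict[str, Any]]:
    """List jobs of Google BigQuery within a project.

    Returns:
      List of jobs.
    """
    # Hypothetical sketch: call the BigQuery v2 jobs.list endpoint.
    bigquery_jobs = self.GoogleBigQueryApi().jobs()
    request = bigquery_jobs.list(projectId=self.project_id, allUsers=True)
    response = request.execute()
    return response.get('jobs', [])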

Pull request review comment: google/cloud-forensics-utils

Add BigQuery Jobs functionality and tests

 def __init__(self,
     self._cloudresourcemanager = None  # type: Optional[cloudresourcemanager_module.GoogleCloudResourceManager]
     self._serviceusage = None  # type: Optional[serviceusage_module.GoogleServiceUsage]
     # pylint: enable=line-too-long
+    self._bigquery = None  # type: Optional[cloudsql_module.GoogleBigQuery]
    self._bigquery = None  # type: Optional[bigquery_module.GoogleBigQuery]
dianakramer

comment created 18 days ago

Pull request review comment: google/cloud-forensics-utils

Add BigQuery Jobs functionality and tests

 def GKEEnumerate(args: 'argparse.Namespace') -> None:
         enumeration.ToJson(namespace=args.namespace), sys.stdout, indent=2)
   else:
     enumeration.Enumerate(namespace=args.namespace)
+
+
+def ListBigQueryJobs(args: 'argparse.Namespace') -> None:
+  """List the BigQuery jobs of a Project.
+
+  Args:
+    args (argsparse.Namespace): Arguments from ArgumentParser.
    args: Arguments from ArgumentParser.
dianakramer

comment created 18 days ago

Pull request review comment: google/cloud-forensics-utils

Add BigQuery Jobs functionality and tests

+# -*- coding: utf-8 -*-
+# Copyright 2021 Google Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""Google BigQuery functionalities."""
+
+from typing import TYPE_CHECKING, List, Dict, Any, Optional
+from libcloudforensics.providers.gcp.internal import common
+
+if TYPE_CHECKING:
+  import googleapiclient.discovery
+
+
+class GoogleBigQuery:
+  """Class to call Google BigQuery APIs.
+
+  Attributes:
+    project_id: Google Cloud project ID.
+  """
+  BIGQUERY_API_VERSION = 'v2'

I know you followed the convention in other files, but here it makes more sense to make this a module-level variable rather than a class variable; you can then use it directly by name rather than via self.BIG... Also, if it will only be used in this module, you should prefix it with "_".
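
The first revision of the file shown earlier already reflects this suggestion; a minimal sketch of the difference, with the class contents abbreviated:

from libcloudforensics.providers.gcp.internal import common

# Module-level, module-private constant (leading underscore) instead of a
# class attribute.
_BIGQUERY_API_VERSION = 'v2'


class GoogleBigQuery:
  """Class to call Google BigQuery APIs."""

  def GoogleBigQueryApi(self) -> 'googleapiclient.discovery.Resource':
    # Referenced directly by name, no self.BIGQUERY_API_VERSION needed.
    return common.CreateService('bigquery', _BIGQUERY_API_VERSION)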

dianakramer

comment created 18 days ago

Pull request review comment: google/cloud-forensics-utils

Add BigQuery Jobs functionality and tests

+# -*- coding: utf-8 -*-
+# Copyright 2021 Google Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""Google BigQuery functionalities."""
+
+from typing import TYPE_CHECKING, List, Dict, Any, Optional
+from libcloudforensics.providers.gcp.internal import common
+
+if TYPE_CHECKING:
+  import googleapiclient.discovery
+
+
+class GoogleBigQuery:
+  """Class to call Google BigQuery APIs.
+
+  Attributes:
+    project_id: Google Cloud project ID.
+  """
+  BIGQUERY_API_VERSION = 'v2'
+
+  def __init__(self, project_id: Optional[str] = None) -> None:
+    """Initialize the GoogleBigQuery object.
+
+    Args:
+      project_id (str): Optional. Google Cloud project ID.
+    """
+
+    self.project_id = project_id
+
+  def GoogleBigQueryApi(self) -> 'googleapiclient.discovery.Resource':
+    """Get a Google BigQuery service object.
+
+    Returns:
+      googleapiclient.discovery.Resource: A Google BigQuery service object.

Same here and everywhere: no need to re-add types.

dianakramer

comment created 18 days ago

Pull request review comment: google/cloud-forensics-utils

Add BigQuery Jobs functionality and tests

+# -*- coding: utf-8 -*-
+# Copyright 2021 Google Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""Google BigQuery functionalities."""
+
+from typing import TYPE_CHECKING, List, Dict, Any, Optional
+from libcloudforensics.providers.gcp.internal import common
+
+if TYPE_CHECKING:
+  import googleapiclient.discovery
+
+
+class GoogleBigQuery:
+  """Class to call Google BigQuery APIs.
+
+  Attributes:
+    project_id: Google Cloud project ID.
+  """
+  BIGQUERY_API_VERSION = 'v2'
+
+  def __init__(self, project_id: Optional[str] = None) -> None:
+    """Initialize the GoogleBigQuery object.
+
+    Args:
+      project_id (str): Optional. Google Cloud project ID.

Adding the type to the docstring is redundant if you already use type annotations; you can remove these from all the docstrings and keep them only for class attributes.
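
For illustration, the __init__ from the diff with the redundant type removed from its docstring (an excerpt based only on what is visible above):

from typing import Optional


class GoogleBigQuery:
  """Class to call Google BigQuery APIs."""

  def __init__(self, project_id: Optional[str] = None) -> None:
    """Initialize the GoogleBigQuery object.

    Args:
      project_id: Optional. Google Cloud project ID.
    """
    # Optional[str] in the signature already documents the type, so the
    # docstring no longer repeats "(str)".
    self.project_id = project_id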

dianakramer

comment created 18 days ago

issue opened: google/timesketch

Missing Columns in CSV export of a sketch

Describe the bug
Missing columns in the CSV export of a sketch when exporting a timeline, which leads to losing some of the important info.

To Reproduce
Steps to reproduce the behavior:

  1. In a timeline, click export as CSV

Expected behavior
The exported CSV should contain the full set of fields for every entry, especially since they all end up in the same file; where entries have different data fields, the missing values should be filled with NaNs.
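
A minimal sketch of that expectation, using pandas purely for illustration (whether Timesketch's exporter uses pandas internally is an assumption): building a table from entries with differing fields yields the union of all columns, with the missing values as NaN.

import pandas as pd

# Hypothetical timeline entries with differing data fields.
events = [
    {'datetime': '2021-10-01T10:00:00', 'message': 'login', 'source_ip': '10.0.0.1'},
    {'datetime': '2021-10-01T10:05:00', 'message': 'file write', 'file_path': '/tmp/x'},
]

# The union of all fields becomes the column set; values absent from an entry
# become NaN (empty cells once written out to CSV).
frame = pd.DataFrame(events)
frame.to_csv('timeline_export.csv', index=False)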

created a month ago

issue opened: google/timesketch

Searching for a word with non-ASCII char/s returns no result although there is a match

Describe the bug
Searching for a word with non-ASCII char(s) (e.g. "déjeuner") returns no results although there is a match in the sketch.

To Reproduce
Steps to reproduce the behavior:

  1. Go to any sketch with entries containing non-ASCII chars
  2. Search for a word in the sketch that has one of these chars
  3. No results are returned

Expected behavior
Return the matches.
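
One common way a non-ASCII query can miss a visually identical match is Unicode normalization (precomposed vs. decomposed forms); whether that is the cause of this Timesketch behavior is only a guess, but the effect is easy to show in plain Python:

import unicodedata

# "déjeuner" can be encoded as precomposed (NFC) or decomposed (NFD) Unicode;
# the two render identically but do not compare equal.
query = 'd\u00e9jeuner'     # NFC: single "é" code point
indexed = 'de\u0301jeuner'  # NFD: "e" followed by a combining acute accent

print(query == indexed)                                # False
print(unicodedata.normalize('NFC', indexed) == query)  # True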

created a month ago

Pull request review comment: google/docker-explorer

Show the container exposed ports

 def __init__(self, docker_directory, container_id, docker_version=2):
     self.log_path = container_info_dict.get('LogPath', None)
 
+  def _GetConfigValue(
+      self, configuration, key, default_value=None,
+      ignore_container_config=False):
+    """Returns the value of a configuration key in the parsed container file.
+
+    Args:
+      configuration(dict): the parsed state from the config.json file.
+      key(str): the key we need the value from.
+      default_value(object): what to return if the key can't be found.
+      ignore_container_config(bool): whether or not to ignore the container's
+        specific configuration (from the ContainerConfig) key.
+
+    Returns:
+      object: the extracted value.
+    """
+    image_config = configuration.get('Config', None)
+    if not image_config:
+      return default_value
+    config_value = image_config.get(key, default_value)
+
+    if not ignore_container_config:
+      # If ContainerConfig has a different value for that key, return this one.
+      container_config = configuration.get('ContainerConfig', None)
+      if container_config:
+        if key in container_config:
+          return container_config.get(key, default_value)
+
+    return config_value

    if not ignore_container_config:
      # If ContainerConfig has a different value for that key, return this one.
      container_config = configuration.get('ContainerConfig', None)
      if container_config:
        if key in container_config:
          return container_config.get(key, default_value)

    return image_config.get(key, default_value)

Since config_value isn't used anywhere except in the return, you can drop the intermediate variable.
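
Putting the suggestion together, the helper could read as follows (a sketch, with the original docstring shortened to one line):

  def _GetConfigValue(
      self, configuration, key, default_value=None,
      ignore_container_config=False):
    """Returns the value of a configuration key in the parsed container file."""
    image_config = configuration.get('Config', None)
    if not image_config:
      return default_value

    if not ignore_container_config:
      # If ContainerConfig has a different value for that key, prefer it.
      container_config = configuration.get('ContainerConfig', None)
      if container_config and key in container_config:
        return container_config.get(key, default_value)

    return image_config.get(key, default_value)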

rgayon

comment created 2 months ago

Pull request review comment: log2timeline/dftimewolf

`gcp_forensics_gke`

 def _FindDisksToCopy(self) -> List[compute.GoogleComputeDisk]:
      return disks_to_copy
 
-modules_manager.ModulesManager.RegisterModule(GoogleCloudCollector)
+class GKEDiskCopier(GoogleCloudCollector):
+
+  def SetUp(self,
+            analysis_project_name: str,
+            remote_project_name: str,
+            remote_cluster_name: str,
+            remote_cluster_zone: str,
+            workload_name: Optional[str]=None,
+            workload_namespace: Optional[str]=None,
+            incident_id: Optional[str]=None,
+            zone: str='us-central1-f',
+            create_analysis_vm: bool=True,
+            boot_disk_size: float=50,
+            boot_disk_type: str='pd-standard',
+            cpu_cores: int=4,
+            image_project: str='ubuntu-os-cloud',
+            image_family: str='ubuntu-1804-lts') -> None:
+    """Sets up a GKE disk collector.
+
+    This method creates and starts an analysis VM in the analysis project and
+    selects nodes whose boot disks will be copied from the remote cluster.
+
+    If both the workload_name and workload_namespace are specified, only the
+    nodes supporting the workload's pods will be copied. If they are not
+    specified, all the nodes' disks will be copied to the analysis VM.
+
+    If analysis_project_name is not specified, analysis_project will be same
+    as remote_project.
+
+    Args:
+      analysis_project_name (str): Optional. name of the project that contains
+          the analysis VM. Default is None.
+      remote_project_name (str): name of the remote project where the disks
+          must be copied from.
+      remote_cluster_name (str): The name of the cluster to copy disks from.
+      remote_cluster_zone (str): The zone of the cluster to copy disks from.
+      workload_name (Optional[str]): Optional. The name of Kubernetes workload
+          whose node disks to copy.
+      workload_namespace (Optional[str]): Optional. The namespace of the
+          Kubernetes workload whose node disks to copy.
+      incident_id (Optional[str]): Optional. Incident identifier on which the
+          name of the analysis VM will be based. Default is None, which means
+          add no label and format VM name as
+          "gcp-forensics-vm-{TIMESTAMP('%Y%m%d%H%M%S')}".
+      zone (Optional[str]): Optional. GCP zone in which new resources should
+          be created. Default is us-central1-f.
+      create_analysis_vm (Optional[bool]): Optional. Create analysis VM in
+          the analysis project. Default is True.
+      boot_disk_size (Optional[float]): Optional. Size of the analysis VM boot
+          disk (in GB). Default is 50.
+      boot_disk_type (Optional[str]): Optional. Disk type to use.
+          Default is pd-standard.
+      cpu_cores (Optional[int]): Optional. Number of CPU cores to
+          create the VM with. Default is 4.
+      image_project (Optional[str]): Optional. Name of the project where the
+          analysis VM image is hosted.
+      image_family (Optional[str]): Optional. Name of the image to use to
+          create the analysis VM.
+    """
+    # Check GKE cluster
+    cluster = gke.GkeCluster(remote_project_name, remote_cluster_zone,
+                             remote_cluster_name)
+
+    if workload_name and workload_namespace:
+      # Both workload name and namespace were specified, select nodes from the
+      # cluster's workload
+      workload = cluster.FindWorkload(workload_name, workload_namespace)
+      if not workload:
+        self.ModuleError('Workload not found.', critical=True)
+      nodes = workload.GetCoveredNodes()
+    elif workload_name or workload_namespace:
+      # Either workload name or workload namespace was given, but not both
+      self.ModuleError(
+          'Both the workload name and namespace must be supplied.',
+           critical=True)
+      return
+    else:
+      # Nothing about a workload was specified, handle the whole cluster
+      nodes = cluster.ListNodes()
+
+    # Initialize fields and set up analysis VM
+    self._SetUpProjects(analysis_project_name, remote_project_name, zone)
+    self._SetUpAnalysisVm(incident_id, create_analysis_vm, boot_disk_size,
+                          boot_disk_type, cpu_cores, image_project,
+                          image_family)
+
+    # Queue selected node's boot disks
+    for node in nodes:
+      disk = self.remote_project.compute.GetInstance(node.name).GetBootDisk()

Can you add a check for whether there are other, non-boot disks attached to the node (i.e. Persistent Volumes), and print a log message saying something like: "this disk is attached as a Persistent Volume to node X and will not be acquired"?
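
A rough sketch of what that check could look like in the loop from the diff above, assuming the instance object exposes GetInstance(), GetBootDisk() and ListDisks() as elsewhere in this collector, and that disk objects expose a name attribute; the logging call and the queueing helper are placeholders, not the module's actual API:

    # Queue the selected nodes' boot disks, logging any other attached disks
    # (e.g. Persistent Volumes) that will not be acquired.
    for node in nodes:
      instance = self.remote_project.compute.GetInstance(node.name)
      boot_disk = instance.GetBootDisk()
      for disk_name in instance.ListDisks():
        if disk_name != boot_disk.name:
          self.logger.info(  # placeholder logging call
              'Disk {0:s} is attached as a Persistent Volume to node {1:s} '
              'and will not be acquired.'.format(disk_name, node.name))
      self._QueueBootDisk(boot_disk)  # placeholder for the existing queueing step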

zkck

comment created 2 months ago

Pull request review comment: log2timeline/dftimewolf

`gcp_forensics_gke`

 def _FindDisksToCopy(self) -> List[compute.GoogleComputeDisk]:
      return disks_to_copy
 
-modules_manager.ModulesManager.RegisterModule(GoogleCloudCollector)
+class GKEDiskCopier(GoogleCloudCollector):
+
+  def SetUp(self,
+            analysis_project_name: str,
+            remote_project_name: str,
+            remote_cluster_name: str,
+            remote_cluster_zone: str,
+            workload_name: Optional[str]=None,
+            workload_namespace: Optional[str]=None,
+            incident_id: Optional[str]=None,
+            zone: str='us-central1-f',
+            create_analysis_vm: bool=True,
+            boot_disk_size: float=50,
+            boot_disk_type: str='pd-standard',
+            cpu_cores: int=4,
+            image_project: str='ubuntu-os-cloud',
+            image_family: str='ubuntu-1804-lts') -> None:
+    """Sets up a GKE disk collector.
+
+    This method creates and starts an analysis VM in the analysis project and
+    selects nodes whose boot disks will be copied from the remote cluster.
+
+    If both the workload_name and workload_namespace are specified, only the
+    nodes supporting the workload's pods will be copied. If they are not
+    specified, all the nodes' disks will be copied to the analysis VM.
+
+    If analysis_project_name is not specified, analysis_project will be same
+    as remote_project.
+
+    Args:
+      analysis_project_name (str): Optional. name of the project that contains

Here and everywhere else: since we have type annotations, adding the type to the docstring is redundant; you should remove all of these.

zkck

comment created 2 months ago

Pull request review comment: log2timeline/dftimewolf

`gcp_forensics_gke`

 def _GetDisksFromInstance(
       return list(remote_instance.ListDisks().values())
     return [remote_instance.GetBootDisk()]
 
-  def _FindDisksToCopy(self) -> List[compute.GoogleComputeDisk]:
+  def _FindDisksToCopy(self,
+                       remote_instance_name: str,
+                       disk_names: List[str],
+                       all_disks: bool) -> List[compute.GoogleComputeDisk]:

Add args to the docstring.

zkck

comment created 2 months ago

Pull request review comment: log2timeline/dftimewolf

`gcp_forensics_gke`

 def _GetDisksFromInstance(
       return list(remote_instance.ListDisks().values())
     return [remote_instance.GetBootDisk()]
 
-  def _FindDisksToCopy(self) -> List[compute.GoogleComputeDisk]:
+  def _FindDisksToCopy(self,
+                       remote_instance_name: str,
+                       disk_names: List[str],
+                       all_disks: bool) -> List[compute.GoogleComputeDisk]:
     """Determines which disks to copy depending on object attributes.

Add an Args section to the docstring.
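
A possible shape for that Args section, based only on the parameter names and annotations visible in the diff above (the wording is illustrative, and the types stay out of the docstring per the other review comments):

  def _FindDisksToCopy(self,
                       remote_instance_name: str,
                       disk_names: List[str],
                       all_disks: bool) -> List[compute.GoogleComputeDisk]:
    """Determines which disks to copy depending on object attributes.

    Args:
      remote_instance_name: Name of the remote instance to get disks from.
      disk_names: Names of the specific disks to copy.
      all_disks: Whether to copy all disks attached to the instance.

    Returns:
      The disks to copy to the analysis project.
    """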

zkck

comment created 2 months ago

Pull request review comment: log2timeline/dftimewolf

`gcp_forensics_gke`

 jinja2==3.0.1; python_version >= '3.6'
 jmespath==0.10.0; python_version >= '2.6' and python_version not in '3.0, 3.1, 3.2, 3.3'
 jsonschema==3.2.0
 kombu==4.6.11; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'
+libcloudforensics==20211014

To include the latest changes:

libcloudforensics==20211027
zkck

comment created 2 months ago

push event: sa3eed3ed/artifacts

Said Eid

commit sha 45120fa69cf0b6638f5455db67af205a2a462e67

change source type to path

pushed 2 months ago

Pull request review comment: ForensicArtifacts/artifacts

Updating WindowsEventLogs artifact to include more event log files

 doc: Windows Event logs.
 sources:
 - type: ARTIFACT_GROUP
- type: PATH
sa3eed3ed

comment created 2 months ago
