Mike Cohen (scudette) @Velocidex - Australia

google/rekall 1698

Rekall Memory Forensic Framework

google/rekall-profiles 93

Public Profile Repository for Rekall Memory Forensics.

aff4/pyaff4 32

The Python implementation of the AFF4 standard.

CCXLabs/CCXDigger 30

The CyberCX Digger project is designed to help Australian organisations determine if they have been impacted by certain high profile cyber security incidents. Digger provides threat hunting functionality packaged in a simple-to-use tool, allowing users to detect certain attacker activities; all for free.

botherder/volatility 23

An advanced memory forensics framework

rekall-innovations/rekall-test 2

Rekall Test Repository

rekall-innovations/rekall-capstone 1

A distribution of capstone geared towards building in a Python environment.

scudette/amc 1

Apache Media Center

scudette/dfrwseu-2015 1

Characterization of the Windows Kernel version variability for accurate Memory analysis.

aff4/aff4-snappy 0

Python bindings for Google's snappy library

create branch Velocidex/go-ntfs

branch : usn

created branch time in 2 days

issue comment Velocidex/WinPmem

BSOD on Windows 10 with VSM

Regarding the issue Viviane refers to with the difficulty of signing going forward: it is a real issue, and these kinds of policies were proposed by Microsoft in the past, but they always backtracked on them when people complained.

Regardless, it seems that once the driver is signed and timestamped it should continue working into the future (even past the policy change date). See this quote from the policy:

https://docs.microsoft.com/en-us/windows-hardware/drivers/install/deprecation-of-software-publisher-certificates-and-commercial-release-certificates#what-will-happen-to-my-existing-signed-driver-packages

The issue only affects our ability to release a bugfix or adapt the driver to a new kernel release - which this project does so rarely that it might not be a real problem (we used the previous signed driver for about 4 years and only needed to re-release it recently).

From reading the policy document it appears that attestation signing will continue working for Windows 10 - the only problem is running on older Windows versions. Hopefully, by the time we have an issue, those older versions will no longer be a concern anyway.

michaelafry

comment created time in 2 days

issue comment Velocidex/WinPmem

BSOD on Windows 10 with VSM

Just to clarify: rc2 solves the BSOD in the default setting (which is the PTE method). There is probably no reason for anyone to deliberately switch to the other methods, but Vivian committed the fix to those methods anyway, just in case.

Please report any issues with rc2.

michaelafry

comment created time in 2 days

issue opened Velocidex/velociraptor

Make client search table infinite

Currently the search table only shows 50 clients. This is unintuitive, as users expect to keep scrolling to see more clients.

The main difficulty in implementation is that search is implemented as a scan across the label index, and since machines check in all the time, the order of the index is not deterministic - so if we request another 50 clients from the server, we are likely to get some of the same ones.

One option is to sort by client id to get a stable order, but this means the initial view will take longer while we sort all the clients.

Another option is to maintain a cache of clients in a sorted file and just use that to satisfy the query. The file can be refreshed occasionally. The disadvantage here is that new clients do not appear for a while.

Another solution is to partition the index into a B-tree so it is always sorted.

This is not an easy problem to solve well... we need to think about it.
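One way to get a stable order, as described above, is cursor-based pagination over sorted client IDs. The sketch below is illustrative only - the `page` helper and the client ID format are invented for this example, not Velociraptor's actual API:

```go
package main

import (
	"fmt"
	"sort"
)

// page returns up to limit client IDs that sort strictly after the cursor.
// Because the order is derived from the IDs themselves, repeated scans of a
// changing index never return duplicates across pages.
func page(ids []string, cursor string, limit int) []string {
	sorted := append([]string(nil), ids...)
	sort.Strings(sorted)
	out := []string{}
	for _, id := range sorted {
		if id > cursor {
			out = append(out, id)
			if len(out) == limit {
				break
			}
		}
	}
	return out
}

func main() {
	// Index order changes between scans as clients check in.
	scan1 := []string{"C.3", "C.1", "C.2", "C.5", "C.4"}
	scan2 := []string{"C.5", "C.4", "C.1", "C.2", "C.3"}

	p1 := page(scan1, "", 2)            // first page
	p2 := page(scan2, p1[len(p1)-1], 2) // next page, despite reordering
	fmt.Println(p1, p2)
}
```

Because the sort key is the client ID itself, a client checking in between requests reshuffles the index but not the page boundaries, so no client appears twice - at the cost of sorting on the initial view, as noted above.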

created time in 3 days

issue comment Velocidex/velociraptor

GUI improvements

  • [ ] Add a close mark to the label selection in the remove-label button.
scudette

comment created time in 3 days

issue closed Velocidex/velociraptor

New collection GUI resets defaults when the selection is empty

Currently the selection is reset when it is an empty string. This means the user cannot erase the text box completely, because it immediately refills with the default.

We should only fill in the default if the value is undefined, not when it is an empty string.
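The fix hinges on distinguishing "undefined" from "deliberately empty". A minimal sketch in Go, using a nil pointer for the undefined case (the `effectiveValue` helper is invented for illustration; the actual GUI code is JavaScript):

```go
package main

import "fmt"

// effectiveValue applies the default only when the value is undefined (nil),
// never when the user has cleared it to an empty string.
func effectiveValue(value *string, def string) string {
	if value == nil { // undefined: fall back to the default
		return def
	}
	return *value // an empty string is kept, so the user can clear the box
}

func main() {
	empty := ""
	fmt.Printf("%q\n", effectiveValue(nil, "default"))    // "default"
	fmt.Printf("%q\n", effectiveValue(&empty, "default")) // ""
}
```

In JavaScript terms this corresponds to checking `value === undefined` rather than treating any falsy value (including `""`) as missing.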

closed time in 3 days

scudette

issue comment Velocidex/velociraptor

New collection GUI resets defaults when the selection is empty

fixed in 0.5.1-1

scudette

comment created time in 3 days

issue closed Velocidex/velociraptor

Resources Misspelled for New Hunt Wizard

When configuring a new hunt through the Wizard, "Specify Resorces" should be "Specify Resources".

Version 0.5.1

Thanks, Eric

closed time in 3 days

animal704

issue comment Velocidex/velociraptor

Resources Misspelled for New Hunt Wizard

fixed in 0.5.1-1

animal704

comment created time in 3 days

push event Velocidex/velociraptor

Vitaliy

commit sha c996f88e757fcf31a0a66d3e5e7dac7053797d11

Added OIDC provider for SSO to the config wizard (#700)

view details

push time in 3 days

PR merged Velocidex/velociraptor

Added OIDC provider for SSO to the config wizard

OIDC provider added as an option to the config wizard. Related to #692.

+23 -1

0 comments

1 changed file

vitaliy0x1

pr closed time in 3 days

push event Velocidex/velociraptor

Mike Cohen

commit sha b8ca2841731a7e4d9cd264b82d10d2d857ef9102

Update keyboard shortcuts help screen. (#699)

view details

push time in 4 days

delete branch Velocidex/velociraptor

delete branch : keys2

delete time in 4 days

push event Velocidex/velociraptor

Michael Cohen

commit sha 1825fdc4766135d33651a36980852bd6df2ae3b8

.

view details

push time in 4 days

push event Velocidex/velociraptor

Michael Cohen

commit sha db7e026ef0dacc84912f238f950cbf5ba8ebb7c9

try this

view details

push time in 4 days

push event Velocidex/velociraptor

Michael Cohen

commit sha a88554b1ca7aafaf9678428aad783fec696a1b57

fix test

view details

push time in 4 days

push event Velocidex/velociraptor

Michael Cohen

commit sha ddf30dcbe169d38fe0caa7902037b82b48984991

fix test

view details

push time in 5 days

PR opened Velocidex/velociraptor

Update keyboard shortcuts help screen.
+23 -14

0 comments

2 changed files

pr created time in 5 days

create branch Velocidex/velociraptor

branch : keys2

created branch time in 5 days

issue comment Velocidex/velociraptor

Windows.Forensics.SRUM artifact fails to parse database records

Hi Chris - thanks for the debugging work! This is perfect. You can see that there is a problem with joining up the VCNs in the NTFS parser. The issue is really here:

Run 1 (*parser.MappedReader):
  Mapping 9060352 -> 9060352 with *parser.RangeReader
    RangeReader with 1 runs:
    Run 0 (*parser.MappedReader):
      Mapping 0 -> 6602752 with *parser.PagedReader

This run corresponds to the VCN but it is of length 0! So it gets messed up.

I submitted a fix and I also added a recorder feature to the ntfs.exe tool - you can get the latest version from https://github.com/Velocidex/go-ntfs/actions

The idea with the recorder is that we can replay only the relevant sectors from a disk image into the parser. This way we can collect test cases for interesting NTFS artifacts very cheaply (that is, without storing a full disk image - most recordings are around 20-30kb). The recorder saves each cluster the parser touches into a cache directory, and anyone can then replay the same thing. For example, to record:

ntfs.exe stat \\.\c: 68310 --record dirname

Then running this on another system (even on Linux) should produce the exact same output:

ntfs stat /dev/null 68310 --record dirname

It just parses the MFT entry, but as a side effect it saves all the clusters it touches. We can then store them in the repo and run tests against the same data - the parser will read the recorded clusters from the repo, so we can test for regressions and fix bugs.

See https://github.com/Velocidex/go-ntfs/tree/master/tests/large_file_small_init for an example of how it works.

Anyway, if you can, check out the latest CI build and see if it works any better. If it still has issues, I would appreciate it if you also added the --record flag to the above commands and sent me the clusters - then I can add them to the repo as a test as well.

chris-counteractive

comment created time in 5 days

push event Velocidex/velociraptor

Mike Cohen

commit sha 62314e860316a66339b760588329a4f058a111a0

Fix for NTFS upload of sparse files. (#698)

view details

push time in 5 days

delete branch Velocidex/velociraptor

delete branch : ntfsfix

delete time in 5 days

PR opened Velocidex/velociraptor

Fix for NTFS upload of sparse files.
+14 -14

0 comments

7 changed files

pr created time in 5 days

create branch Velocidex/velociraptor

branch : ntfsfix

created branch time in 5 days

delete branch Velocidex/go-ntfs

delete branch : binaries

delete time in 5 days

push event Velocidex/go-ntfs

Mike Cohen

commit sha f05bbe6e01fe1346df83e835b81779f8f626b68d

Build binaries in CI (#33)

view details

push time in 5 days

PR merged Velocidex/go-ntfs

Build binaries in CI
+8 -0

0 comments

2 changed files

scudette

pr closed time in 5 days

PR opened Velocidex/go-ntfs

Build binaries in CI
+8 -0

0 comments

2 changed files

pr created time in 5 days

create branch Velocidex/go-ntfs

branch : binaries

created branch time in 5 days

push event Velocidex/go-ntfs

Mike Cohen

commit sha 3b32c3277f377615e2ac9bcd94f8d84d36b539b5

Added recorder and test suite (#32)

view details

push time in 5 days

delete branch Velocidex/go-ntfs

delete branch : testing

delete time in 5 days

PR merged Velocidex/go-ntfs

Added recorder and test suite

Fixed bug in VCN composition

+936 -69

0 comments

35 changed files

scudette

pr closed time in 5 days

push event Velocidex/go-ntfs

Michael Cohen

commit sha 406fc7e1547c255e418d5289a730ce3649cf3427

fix test

view details

push time in 5 days

push event Velocidex/go-ntfs

Michael Cohen

commit sha b99bc690660633a836cad370b6ce8f29e28ded95

fix test

view details

push time in 5 days

PR opened Velocidex/go-ntfs

Added recorder and test suite

Fixed bug in VCN composition

+929 -69

0 comments

35 changed files

pr created time in 5 days

create branch Velocidex/go-ntfs

branch : testing

created branch time in 5 days

push event Velocidex/velociraptor

Matthew Green

commit sha dd407752435988a58394d135cb9f999efca378bf

Add sinkhole (#697)

view details

push time in 6 days

PR merged Velocidex/velociraptor

Add sinkhole
+228 -7

2 comments

4 changed files

mgreen27

pr closed time in 6 days

pull request comment Velocidex/velociraptor

Add sinkhole

Cool this is awesome - just sync the golden file so we pass the tests :+1:

mgreen27

comment created time in 6 days

issue comment Velocidex/velociraptor

Windows.Forensics.SRUM artifact fails to parse database records

I just wanted to update this bug with my investigation of the ESE parser. To double check it, I collected the SRUM artifact and also saw that the network table was returning 0 rows.

I grabbed ese2csv.exe and extracted the tables from the raw file which was uploaded, and it saw 176 rows in the {DD6636C4-8929-4683-974E-22C046A43763} table. I then used the ESE parser tool directly (https://github.com/Velocidex/go-ese), dumped the catalog (eseparser catalog srudb.dat), and then dumped the actual table (eseparser dump srudb.dat '{DD6636C4-8929-4683-974E-22C046A43763}').

This produced the correct number of rows, so the parser appears to be working fine. The artifact, however, returns no rows - because it filters them with a regex: https://github.com/Velocidex/velociraptor/blob/2dffd815e5de3bdd6499e65fe347ffdd1f6d21cc/artifacts/definitions/Windows/Forensics/SRUM.yaml#L81

The filter looks for the App as resolved via the AppId from the SruDbIdMapTable, but on my system all the entries have AppId 1 and UserId 2, and these do not have a name in the SruDbIdMapTable. So I think the issue is that the default regex requires at least one character to match (it is .), but the App resolves to an empty string. Setting ExecutableRegex to .* allows all rows to be displayed - on my system this field is simply not set to anything sensible for this table (the other tables contain normal values).

chris-counteractive

comment created time in 6 days

created tag Velocidex/go-ese

tag v0.1.0

Go implementation of an Extensible Storage Engine parser

created time in 6 days

release Velocidex/go-ese

v0.1.0

released time in 6 days

Pull request review comment Velocidex/velociraptor

Add sinkhole

Queries:
  # Test Sinkhole remediation - output should be only default artifact entry
  - SELECT * FROM Artifact.Windows.Remediation.Sinkhole()

  # Test rolling back sinkhole - output none
  - SELECT * FROM Artifact.Windows.Remediation.Sinkhole(RestoreBackup="True")

Should we run another query here to ensure the file is rolled back (maybe hash before and after)?

mgreen27

comment created time in 6 days

Pull request review comment Velocidex/velociraptor

Add sinkhole

name: Windows.Remediation.Sinkhole
description: |
   **Apply a Sinkhole via Windows hosts file modification**
   This content will apply modifications to the Windows hosts file by a
   configurable lookup table.
   During application, the configuration is backed up and used as the base for
   subsequent changes.

   Parameters:
   HostsFile - path to hosts file
   HostsFileBackup - name to backup original hosts file. If reapplying policy.
   this is the configuration used as base.
   CommentPrefix -  prefix to add to description in hosts file comments.
   RestoreBackup - checkbox to enable restoration of backup hosts file.
   SinkholeTable - table of Domains to add to or modify in hosts file.

   NOTE:
   Modifying the hosts file may cause network communication issues. I have
   disabled any sinkhole settings on the Velociraptor configuration but there
   are no rail guards on other domains. Use with caution.

author: Matt Green - @mgreen27

required_permissions:
  - EXECVE

type: CLIENT

parameters:
  - name: HostsFile
    default: C:\Windows\System32\drivers\etc\hosts
  - name: HostsFileBackup
    default: C:\Windows\System32\drivers\etc\hosts.velociraptor.backup
  - name: CommentPrefix
    default: "Velociraptor sinkhole"
  - name: RestoreBackup
    description: "Restore hosts file backup"
    type: bool
  - name: SinkholeTable
    type: csv
    default: |
        Domain,Sinkhole,Description
        mega.co.nz,127.0.0.1,MEGASync file sharing

sources:
  - precondition:
      SELECT OS From info() where OS = 'windows'

    query: |
      -- Extract sink hole requirements from table
      LET changes = SELECT
                Domain,
                Sinkhole,
                if(condition=Description,
                  then= CommentPrefix + ': ' + Description,
                  else= CommentPrefix) as Description
            FROM parse_csv(filename=SinkholeTable, accessor='data')

      -- Check for backup to determine if sinkhole applied
      LET check_backup = SELECT FullPath FROM stat(filename=HostsFileBackup)

      -- Backup old config
      LET backup = copy(filename=HostsFile,dest=HostsFileBackup)

      -- Restore old config
      LET restore = SELECT * FROM chain(
            a=copy(filename=HostsFileBackup,dest=HostsFile),
            b={
                SELECT *
                FROM if(condition=RestoreBackup,
                    then={
                        SELECT *
                        FROM execve(argv=['cmd.exe', '/c',
                            'del','/F',HostsFileBackup])
                    })
            })

      -- Write hosts file
      LET write(DataBlob) = copy(filename=DataBlob,dest=HostsFile,accessor='data')

      -- FlushDNS
      LET flushdns = SELECT *
        FROM execve(argv=['cmd.exe', '/c','ipconfig','/flushdns'])

      -- Find existing entries to modify
      LET existing = SELECT
            parse_string_with_regex(
            string=Line,
            regex=[
                "^\\s+(?P<Resolution>[^\\s]+)\\s+" +
                "(?P<Hostname>[^\\s]+)\\s*\\S*$"
            ]) as Record,
            Line
        FROM parse_lines(filename=HostsFile)
        WHERE
            Line
            AND NOT Line =~ '^#'

      -- Parse a URL to get domain name.
      LET get_domain(URL) = parse_string_with_regex(
           string=URL, regex='^https?://(?P<Domain>[^:/]+)').Domain

      -- extract Velociraptor config for policy
      LET extracted_config <= SELECT * FROM foreach(
          row=config.server_urls,
            query={
                SELECT get_domain(URL=_value) AS Domain
                FROM scope()
            })

      -- Set existing entries to sinkholed values
      LET find_modline = SELECT * FROM foreach(row=changes,
            query={
                SELECT
                    format(format='\t%v\t\t%v\t\t# %v',
                    args=[Sinkhole,Domain,Description]) as Line,
                    Domain,
                    'modification' as Type
                FROM existing
                WHERE
                    Record.Hostname = Domain
                    AND NOT Domain in extracted_config.Domain
                GROUP BY Line
            })

      -- Add new hostsfile entries
      LET find_newline = SELECT * FROM foreach(row=changes,
            query={
                SELECT
                    format(format='\t%v\t\t%v\t\t# %v',
                        args=[Sinkhole,Domain,Description]) as Line,
                    Domain,
                    'new entry' as Type
                FROM scope()
                WHERE
                    NOT Domain in find_modline.Domain
                    AND NOT Domain in extracted_config.Domain
            })

      -- Determine which lines should stay the same
      LET find_line= SELECT
                Line,
                Record.Hostname as Domain,
                'old entry' as Type
            FROM existing
            WHERE
                NOT Domain in find_modline.Domain
                AND NOT Domain in find_newline.Domain

      -- Add all lines to staging object
      LET build_lines = SELECT Line FROM chain(
            a=find_modline,
            b=find_newline,
            c=find_line
      )

      -- Join lines from staging object
      LET HostsData = join(array=build_lines.Line,sep='\r\n')

      -- Force start of backup or restore if applicable
      LET backup_restore <= if(condition= RestoreBackup,
                then= if(condition= check_backup,
                        then= restore,
                        else= log(message='Can not restore hosts file as backup does not exist.')),
                else= if(condition= check_backup,
                        then={
                            SELECT * FROM chain(

If this artifact is collected twice, it does not overwrite the backup of the original hosts file, right? This is the correct behaviour - we just need to document it better.

mgreen27

comment created time in 6 days

Pull request review comment Velocidex/velociraptor

Add sinkhole

name: Windows.Remediation.Sinkhole
description: |
   **Apply a Sinkhole via Windows hosts file modification**
   This content will apply modifications to the Windows hosts file by a
   configurable lookup table.
   During application, the configuration is backed up and used as the base for
   subsequent changes.

   Parameters:
   HostsFile - path to hosts file
   HostsFileBackup - name to backup original hosts file. If reapplying policy.
   this is the configuration used as base.
   CommentPrefix -  prefix to add to description in hosts file comments.
   RestoreBackup - checkbox to enable restoration of backup hosts file.
   SinkholeTable - table of Domains to add to or modify in hosts file.

   NOTE:
   Modifying the hosts file may cause network communication issues. I have
   disabled any sinkhole settings on the Velociraptor configuration but there
   are no rail guards on other domains. Use with caution.

author: Matt Green - @mgreen27

required_permissions:
  - EXECVE

type: CLIENT

parameters:
  - name: HostsFile
    default: C:\Windows\System32\drivers\etc\hosts
  - name: HostsFileBackup
    default: C:\Windows\System32\drivers\etc\hosts.velociraptor.backup
  - name: CommentPrefix
    default: "Velociraptor sinkhole"
  - name: RestoreBackup
    description: "Restore hosts file backup"

If this is ticked, does it ignore all the other parameters and just restore?

Should this be a different artifact? One to set and one to restore?

mgreen27

comment created time in 6 days

Pull request review comment Velocidex/velociraptor

Add sinkhole

name: Windows.Remediation.Sinkhole
description: |
   **Apply a Sinkhole via Windows hosts file modification**
   This content will apply modifications to the Windows hosts file by a
   configurable lookup table.
   During application, the configuration is backed up and used as the base for
   subsequent changes.

   Parameters:
   HostsFile - path to hosts file

Maybe add these as descriptions to the parameters?

mgreen27

comment created time in 6 days

Pull request review comment Velocidex/velociraptor

Add sinkhole

name: Windows.Remediation.Sinkhole
description: |
   **Apply a Sinkhole via Windows hosts file modification**
   This content will apply modifications to the Windows hosts file by a
   configurable lookup table.
   During application, the configuration is backed up and used as the base for
   subsequent changes.

   Parameters:
   HostsFile - path to hosts file
   HostsFileBackup - name to backup original hosts file. If reapplying policy.
   this is the configuration used as base.
   CommentPrefix -  prefix to add to description in hosts file comments.
   RestoreBackup - checkbox to enable restoration of backup hosts file.
   SinkholeTable - table of Domains to add to or modify in hosts file.

   NOTE:
   Modifying the hosts file may cause network communication issues. I have
   disabled any sinkhole settings on the Velociraptor configuration but there
   are no rail guards on other domains. Use with caution.

author: Matt Green - @mgreen27

required_permissions:
  - EXECVE

type: CLIENT

parameters:
  - name: HostsFile
    default: C:\Windows\System32\drivers\etc\hosts
  - name: HostsFileBackup
    default: C:\Windows\System32\drivers\etc\hosts.velociraptor.backup
  - name: CommentPrefix
    default: "Velociraptor sinkhole"
  - name: RestoreBackup
    description: "Restore hosts file backup"
    type: bool
  - name: SinkholeTable
    type: csv
    default: |
        Domain,Sinkhole,Description
        mega.co.nz,127.0.0.1,MEGASync file sharing

sources:
  - precondition:
      SELECT OS From info() where OS = 'windows'

    query: |
      -- Extract sink hole requirements from table
      LET changes = SELECT
                Domain,
                Sinkhole,
                if(condition=Description,
                  then= CommentPrefix + ': ' + Description,
                  else= CommentPrefix) as Description
            FROM parse_csv(filename=SinkholeTable, accessor='data')

      -- Check for backup to determine if sinkhole applied
      LET check_backup = SELECT FullPath FROM stat(filename=HostsFileBackup)

      -- Backup old config
      LET backup = copy(filename=HostsFile,dest=HostsFileBackup)

Should this be <= ?

mgreen27

comment created time in 6 days

PullRequestReviewEvent

push event Velocidex/velociraptor

Mike Cohen

commit sha 2dffd815e5de3bdd6499e65fe347ffdd1f6d21cc

Reset artifact parameter default only if undefined. (#695)

view details

push time in 7 days

delete branch Velocidex/velociraptor

delete branch : bugfix

delete time in 7 days

create branch Velocidex/velociraptor

branch : bugfix

created branch time in 7 days

issue opened Velocidex/velociraptor

New collection GUI resets defaults when the selection is empty

Currently the selection is reset when it is an empty string. This means the user cannot erase the text box completely, because it immediately refills with the default.

We should only fill in the default if the value is undefined, not when it is an empty string.

created time in 8 days

issue comment Velocidex/velociraptor

Windows.Forensics.SRUM artifact fails to parse database records

Thanks for reporting the GUI bug - I will open an issue for it - definitely funny :-).

So to recap, there seem to be two issues: the first is the NTFS parsing, which seems to have some problems extracting the file; the second is the ESE parsing and the comparison with other tools.

It is very possible that there are gaps in the ESE parser, and we can nail it down by applying the external tools to the same file. I believe that when we upload the file using the auto accessor it will use the file accessor, because the file is not generally locked for reading, so I expect the file to be uploaded perfectly correctly (through the API).

So there are two separate issues. If you can share the ESE file (even privately), I can see what the difference in processing with ese2csv.exe is.

For the NTFS issue we need more low-level information about how the NTFS parser is parsing the file. The parser has a standalone tool here: https://github.com/Velocidex/go-ntfs

You can get the parse of each MFT entry like this:

ntfs.exe stat \\.\c: 81812 --verbose

This shows all the runs in each VCN entry (so in your case we have 4 MFT entries: 19153, 59934, 329708 and 24121).

You can also look at how the NTFS parser reconstructs the file using the runs command.

So for example, in your case I would expect 4 top-level readers (one for each VCN), each broken into smaller readers for each run. This is how the NTFS parser works - it builds a reader tree which maps each block in the file to a reader responsible for it (which might in turn map to a run or a sparse null reader).

So it would be really nice to see the output of the runs command, which is much more detailed than the Velociraptor output, and then maybe the output of the stat command on the 5 entries (the VCN ones and the original one).
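The reader-tree idea can be sketched as a mapping from logical file ranges to the reader responsible for each range. The types and names below are invented for illustration, not the actual go-ntfs structures:

```go
package main

import "fmt"

// Mapping describes one node of the reader tree: a range of the logical
// file backed either by a data run on disk or by a sparse (zero) region.
type Mapping struct {
	FileOffset int64
	Length     int64
	DiskOffset int64 // -1 marks a sparse run
}

// readerFor returns the mapping responsible for a logical file offset.
func readerFor(runs []Mapping, off int64) Mapping {
	for _, m := range runs {
		if off >= m.FileOffset && off < m.FileOffset+m.Length {
			return m
		}
	}
	return Mapping{DiskOffset: -1} // unmapped regions read as zeros
}

func main() {
	runs := []Mapping{
		{FileOffset: 0, Length: 4096, DiskOffset: 81920},
		{FileOffset: 4096, Length: 4096, DiskOffset: -1}, // sparse hole
		{FileOffset: 8192, Length: 4096, DiskOffset: 204800},
	}
	fmt.Println(readerFor(runs, 5000).DiskOffset) // -1: falls in the hole
	fmt.Println(readerFor(runs, 9000).DiskOffset) // 204800
}
```

A zero-length run, as in the output quoted earlier in this thread, breaks this scheme: it claims responsibility for a range that covers no bytes, which is why the VCN composition went wrong.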

Thanks again!

chris-counteractive

comment created time in 8 days

created tag Velocidex/go-ntfs

tag v0.1.1

An NTFS file parser in Go

created time in 8 days

release Velocidex/go-ntfs

v0.1.1

ntfs.exe 4.24MB

released time in 8 days

push event Velocidex/velociraptor

Vitaliy

commit sha 8d51461b2565e9f6ebf74c849c42357b909768c6

Added support of OpenID Connect for authentication. (#692)

view details

push time in 8 days

PR merged Velocidex/velociraptor

Added support of OpenID Connect for authentication.

To authenticate by OIDC, a config file must include the following fields:

authenticator:
    type: OIDC
    oidc_issuer: <Issuer URL>
    oauth_client_id: <Client ID>
    oauth_client_secret: <Client Secret>

The oidc_issuer (OIDC Issuer) field has to contain a URL that exposes the /.well-known/openid-configuration endpoint (e.g. https://accounts.google.com).

Example:

authenticator:
   type: OIDC
   oidc_issuer: https://your-org-name.okta.com
   oauth_client_id: FtK9q1GpDM3rl5JKunTW
   oauth_client_secret: lmnD39m4Egu_0LjNHxxYS-yd77pb0SPqVWVEwajM
+927 -750

2 comments

6 changed files

vitaliy0x1

pr closed time in 8 days

pull request comment Velocidex/velociraptor

Added support of OpenID Connect for authentication.

This is very cool - we might be able to fold all the other OAuth providers into this one?

We also need to add an option to the config wizard to populate this type of authenticator.

vitaliy0x1

comment created time in 8 days

issue comment Velocidex/velociraptor

GUI improvements

  • [ ] Timestamps should have a tooltip showing how long ago they were (making it easier to see e.g. "last week", "3 days ago", etc.).
scudette

comment created time in 8 days

created tag Velocidex/velociraptor

tag v0.5.1

Digging Deeper....

created time in 8 days

push event Velocidex/velociraptor

Mike Cohen

commit sha 44ab4811f79a4eca3c89c3d9df003319e8ee4ff8

Remove old Angular GUI (#691)

view details

push time in 8 days

delete branch Velocidex/velociraptor

delete branch : remove

delete time in 8 days

PR merged Velocidex/velociraptor

Remove old Angular GUI
+78 -34726

0 comments

303 changed files

scudette

pr closed time in 8 days

PR opened Velocidex/velociraptor

Remove old Angular GUI
+78 -34726

0 comments

303 changed files

pr created time in 8 days

create branch Velocidex/velociraptor

branch : remove

created branch time in 8 days

push event Velocidex/velociraptor

Mike Cohen

commit sha 0bd0a04db98c2c90a9c10c1aecd4c0dd43eb7de2

Added progress indication for downloading tools. (#689)

view details

push time in 9 days

delete branch Velocidex/velociraptor

delete branch : tools_progress

delete time in 9 days

push event Velocidex/velociraptor

Michael Cohen

commit sha 15fc850ff009241b93fa6d7b95441c87ef76f3e3

GUI tweaks.

view details

push time in 9 days

push event Velocidex/velociraptor

Michael Cohen

commit sha e4dbc9e1b149fb6007f714fbaaaa72637ce97ab0

Fixed notebook manipulations.

view details

push time in 9 days

push event Velocidex/velociraptor

Michael Cohen

commit sha a023d43d6a5a02941cacbd11f1ea73e60887e3b9

Default artifacts to file accessor.

view details

push time in 9 days

issue comment Velocidex/velociraptor

Windows.Forensics.SRUM artifact fails to parse database records

Thanks for the thorough analysis!

So there are two moving parts here: extracting the file using raw NTFS parsing is one step, and parsing the ESE structures is the second. The tricky thing is that it is hard to compare, because the SRUM db is always changing - if we extracted it with one tool and then another, there is no guarantee they would be the same anyway. Maybe if we did it really quickly there is a better chance they are the same?

From your analysis above it looks more like the NTFS extraction is somehow going wrong, since you get more results with the file accessor than with ntfs. What would be helpful is the output of istat.exe for the srumdb file.

I just tried this on my system and I found that the srumdb file is in fact sparse, which might add to the confusion.

You can tell because the file size is not exactly the same as the uploaded bytes. If you export the file via the "prepare a zip" method, Velociraptor will pack the uploaded file as-is and also attach the index for it.

It does this because the file might be huge but really sparse (e.g. the USN log file). OTOH, if you download the file directly from the upload tab, it will pad the sparse areas.
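The difference between the two export modes can be sketched as follows (the `pad` helper and `Range` index type are invented for illustration, not Velociraptor's actual code): the packed upload stores only the data that was read plus an index of ranges, and padding reconstructs the full-size file by zero-filling the sparse ranges.

```go
package main

import "fmt"

// Range describes one entry of the sparse index attached to an upload.
type Range struct {
	Offset, Length int64
	IsSparse       bool
}

// pad reconstructs the full-size file from the packed data and its index,
// emitting zeros for every sparse range.
func pad(packed []byte, index []Range) []byte {
	var out []byte
	pos := int64(0)
	for _, r := range index {
		if r.IsSparse {
			out = append(out, make([]byte, r.Length)...) // zero fill
		} else {
			out = append(out, packed[pos:pos+r.Length]...)
			pos += r.Length
		}
	}
	return out
}

func main() {
	// 4 bytes of real data describing an 8-byte logical file with a hole.
	packed := []byte("AABB")
	index := []Range{
		{Offset: 0, Length: 2},
		{Offset: 2, Length: 4, IsSparse: true},
		{Offset: 6, Length: 2},
	}
	full := pad(packed, index)
	fmt.Println(len(full), string(full[0:2]), string(full[6:8]))
}
```

This is why the packed file size differs from the logical size: only 4 bytes are stored for an 8-byte file, and the padded download restores the original length.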

It is quite safe to use the file accessor instead of ntfs these days, and we should probably do this for most artifacts - the "file" accessor drops back to the "ntfs" accessor when the file is locked (the ese file is locked internally but not from an OS perspective). Using the OS APIs to copy the file is faster and safer, but that said, we still want to track down any problems with the ntfs parser when we can see them.

Could you please check the output of istat and then repeat with the following (the MFT id can be seen in fls.exe, or in Velociraptor with the MFT parser):

velociraptor.exe query "SELECT * FROM parse_ntfs_ranges(device='\\\\.\\c:', inode='81812-128-3')"


chris-counteractive

comment created time in 9 days

push event Velocidex/velociraptor

Michael Cohen

commit sha b2e1108467ae5ea40290a19d4f6dbb511329bf5a

Fix test

view details

push time in 9 days

push event Velocidex/velociraptor

Michael Cohen

commit sha 51de5500c2ae363642795f073138aabe1b92275a

fixed test

view details

push time in 9 days

create branch Velocidex/velociraptor

branch : tools_progress

created branch time in 9 days

issue comment Velocidex/WinPmem

The request could not be performed because of an I/O device error (4.0rc1)

Does this crash occur with the -2 flag? Using PTE mode?

When you say "trying to access memory" - are you trying to take an image or are you trying to use the driver in your own code?

If so, the only stable access method is PTE mode, which you will need to switch to using the required ioctl. This is the only mode that captures the Hyper-V page faults correctly. Viviane added those same checks for the other modes in a later PR, but that is not present in the signed driver.

igorrogov

comment created time in 9 days

issue comment Velocidex/velociraptor

GUI improvements

  • [ ] Copy hunt to rerun the same hunt again.
scudette

comment created time in 9 days

issue comment Velocidex/velociraptor

GUI improvements

  • [ ] be able to serve from a different base path
scudette

comment created time in 9 days

push event Velocidex/velociraptor

Mike Cohen

commit sha f6327063f2cef241e7f2319b5a433a430e4a0ce9

Value JSON formatting in tables. (#688)

view details

push time in 10 days

delete branch Velocidex/velociraptor

delete branch : value

delete time in 10 days

PR opened Velocidex/velociraptor

Value JSON formatting in tables.
+90 -10

0 comment

5 changed files

pr created time in 10 days

create branch Velocidex/velociraptor

branch : value

created branch time in 10 days

issue comment Velocidex/velociraptor

GUI improvements

  • [ ] Add delete client button
  • [ ] Have a UI to edit hunt descriptions
scudette

comment created time in 10 days

issue closed Velocidex/velociraptor

Add links to flow id and client id

Velociraptor internally uses flow ids and client ids to denote flows and clients. It would be nice if we could click on any flow id in a table and be directed to the flow in question, without having to copy and paste it.

closed time in 10 days

scudette

issue comment Velocidex/velociraptor

Add links to flow id and client id

This is now done in the new UI

scudette

comment created time in 10 days

issue closed Velocidex/velociraptor

Linux: default accessor should not open pipes for reading.

On Linux a named pipe is just a file - when yara scanning, hashing or uploading, depending on the VQL, the client will attempt to open and read this file, and may get stuck waiting for data (until the query timeout).

While VQL can explicitly check with WHERE Mode.IsRegular, this is not intuitive and many people might forget.

The default accessor should avoid opening non regular files for reading. In order to read pipes, users should use the file_links accessor.
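That behaviour can be sketched in Python (illustrative only - safe_open_for_read is a hypothetical helper, not a Velociraptor API): stat the file first and refuse anything that is not a regular file, which is essentially what a WHERE Mode.IsRegular clause does in VQL:

```python
import os
import stat

def safe_open_for_read(path):
    # Refuse to open anything that is not a regular file: named
    # pipes, sockets and device nodes can block a reader forever.
    # lstat() also avoids following symlinks, roughly matching the
    # distinction drawn between the file and file_links accessors.
    st = os.lstat(path)
    if not stat.S_ISREG(st.st_mode):
        return None
    return open(path, "rb")
```

A hashing or scanning loop can then simply skip any path for which this returns None, instead of blocking until the query timeout.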

closed time in 10 days

scudette

issue comment Velocidex/velociraptor

Linux: default accessor should not open pipes for reading.

Fixed in the latest

scudette

comment created time in 10 days

issue closed Velocidex/velociraptor

Delete button doesn't work on errored artifacts

Version: 0.4.9 on Windows 10, tested in Chrome, new Edge, and Firefox

Steps to reproduce:

  1. Start the gui using velociraptor gui
  2. Run a collection that errors out due to size (e.g., a KapeFiles collection that exceeds the byte count). Let this run until it errors, displaying a "!" in the collected artifacts list.
  3. Try to delete the collection using the trashcan button.
  4. Observe that nothing happens 😃

Notes:

  1. Deleting successfully completed collected artifacts appears to work fine (those with a check mark ✔️).
  2. If you add a few successful artifacts after the errored one (say, a few ListDirectory items from the VFS browser), then try to delete it, it jumps to the top of the list, apparently "refreshed" in the index to appear newer in the sort order.
  3. Did not test with "normal" installation, just under the gui command.
  4. No errors show in the JS console; it looks like that functionality is implemented with a POST to the ArchiveFlow API with the client ID and flow ID. Not sure of the best way to see debugging info from those calls ... happy to help with more info if pointed in the right direction.
  5. It seems to be the error state, not the specific artifact, that matters. You can repro with a simpler, smaller artifact like PsList and limit the row constraint to something smaller than the normal result set (100 worked in testing).

Apologies if this is a known issue, couldn't find reference to it in the issues list or discord. Thanks!

closed time in 10 days

chris-counteractive

issue comment Velocidex/velociraptor

Delete button doesn't work on errored artifacts

The new GUI has real delete - i.e. the data is really nuked from disk. It should work.

chris-counteractive

comment created time in 10 days

issue comment Velocidex/velociraptor

Rewrite GUI in React

Done

scudette

comment created time in 10 days

issue closed Velocidex/velociraptor

Rewrite GUI in React

The current GUI was borrowed from GRR in the early days - it is written in AngularJS, which is dated and unsupported.

We need to rewrite it in something more modern - React seems to be the most popular framework at this time, so it is a good choice.

closed time in 10 days

scudette

issue closed Velocidex/velociraptor

`artifact_definitions` always errors due to undeclared tools

Running the following query:

SELECT * FROM artifact_definitions(deps=False) LIMIT 1 

will always error, since the artifact_definitions function causes the tools defined in all VQL to be pulled down to the server. This errors out because the following two artifacts declare tools which are meant to be uploaded using the velociraptor tool upload command, but don't yet exist:

  1. https://github.com/Velocidex/velociraptor/blob/4bcd98db3f1d5107ba8e0fbfe05c3c04e449887f/artifacts/definitions/Windows/Search/Yara.yaml#L11
  2. https://github.com/Velocidex/velociraptor/blob/10ce7f1cc2485d3dbe976dbe6eb944250f0914d6/artifacts/definitions/Admin/Client/Upgrade.yaml#L13

Here is the query I ran.

And here is the frontend stdout log showing the tools automatically being downloaded.

Is it possible to make it so that the artifact_definitions function doesn't cause Velociraptor to pull down the declared tools? Or maybe just have it emit all the artifacts on the server when the names parameter is empty, without actually validating them (since they are already on the server)?

closed time in 10 days

CR-OmerYampel
more