google/syzkaller 3893

syzkaller is an unsupervised coverage-guided kernel fuzzer

google/fscrypt 622

Go tool for managing Linux filesystem encryption

ebiggers/libdeflate 491

Heavily optimized library for DEFLATE/zlib/gzip compression and decompression

google/adiantum 431

Adiantum and HPolyC specification and test vectors

ebiggers/ntfs-3g-system-compression 78

NTFS-3G plugin for reading "system compressed" files

ebiggers/avl_tree 72

High performance C implementation of AVL trees

google/fscryptctl 72

Small C tool for Linux filesystem encryption

ebiggers/xpack 25

Experimental compression format (unmaintained, do not use!)

ebiggers/wimlib 23

Mirror of git://wimlib.net/wimlib: Library supporting the Windows Imaging Format (WIM). Please file issues on the official forums (https://wimlib.net/forums/viewforum.php?f=1) rather than here.

ebiggers/fat-fuse 8

Simple readonly FUSE driver for FAT filesystems

push event ebiggers/libdeflate

Eric Biggers

commit sha 241091e8a7604e4c8dfcfd56e50c65985ebc5c9b

deflate_compress: fix up GET_NUM_COUNTERS()

GET_NUM_COUNTERS() actually returns a value about 4 times higher than intended, due to missing parentheses; ((num_syms) + 3 / 4) is supposed to be (((num_syms) + 3) / 4), or equivalently DIV_ROUND_UP(num_syms, 4) as it was in my original code from wimlib commit 394751ae1302.

This value affects the performance of Huffman code generation. But oddly enough, I'm measuring that the actual behavior results in better performance than the intended behavior. This could be due to differences between DEFLATE and LZX (the compression format I had originally tuned this value to), or a different CPU, or both. Moreover, I can't make it significantly faster by trying other values.

So, make GET_NUM_COUNTERS() intentionally expand to simply num_syms. Omit the rounding up to a multiple of 4, which doesn't seem helpful.
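As an aside for readers following along, the precedence issue described in this commit message can be reproduced with a minimal standalone sketch; the two macro variants below are hypothetical names for the expressions quoted in the message, not libdeflate's actual code:

```c
#include <assert.h>

/* Hypothetical macro names; the expressions are the ones quoted in the
 * commit message above. */

/* Unintended form: '/' binds tighter than '+', so "3 / 4" is 0 and the
 * macro expands to just num_syms. */
#define GET_NUM_COUNTERS_UNINTENDED(num_syms) ((num_syms) + 3 / 4)

/* Intended form: num_syms divided by 4, rounded up, i.e.
 * DIV_ROUND_UP(num_syms, 4). */
#define GET_NUM_COUNTERS_INTENDED(num_syms) (((num_syms) + 3) / 4)

int main(void)
{
	assert(GET_NUM_COUNTERS_UNINTENDED(288) == 288); /* ~4x too high */
	assert(GET_NUM_COUNTERS_INTENDED(288) == 72);
	return 0;
}
```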

Eric Biggers

commit sha 90e7211d39fcc1d14d48365ea20e668d10931535

deflate_compress: include bit reversal in gen_codewords()

This way we don't have to iterate through all the codewords again.
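Some context on the bit reversal: DEFLATE packs Huffman codewords into the bitstream starting from each codeword's most significant bit, while the rest of the stream is written LSB-first, so compressors typically reverse each codeword's bits once up front. A minimal, generic sketch of that operation (not libdeflate's actual implementation):

```c
#include <assert.h>

/* Reverse the low 'len' bits of 'codeword'.  A generic illustration of the
 * bit reversal mentioned above, not libdeflate's actual code. */
static unsigned reverse_codeword(unsigned codeword, unsigned len)
{
	unsigned reversed = 0;

	while (len--) {
		reversed = (reversed << 1) | (codeword & 1);
		codeword >>= 1;
	}
	return reversed;
}

int main(void)
{
	/* 0b0011 reversed over 4 bits is 0b1100; 0b0110 maps to itself. */
	assert(reverse_codeword(0x3, 4) == 0xC);
	assert(reverse_codeword(0x6, 4) == 0x6);
	return 0;
}
```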

Eric Biggers

commit sha 194f6a447fe7f2f4318be970a11c5fc2fc67b179

deflate_compress: factor out deflate_write_match()

Reduce code duplication between deflate_write_sequences() and deflate_write_item_list().

Eric Biggers

commit sha 80364d70c6796ac565c5cd682cba13075611c532

deflate_compress: add deflate_write_bits() helper function

Eric Biggers

commit sha 3cc3608e9c340e4996dff3d0633acf2ec537e12a

deflate_compress: improve some comments

push time in 20 hours

delete branch ebiggers/libdeflate

delete branch : compress

delete time in 20 hours

PR merged ebiggers/libdeflate

More deflate_compress cleanups
+151 -188

0 comment

1 changed file

ebiggers

pr closed time in 20 hours

PR opened ebiggers/libdeflate

More deflate_compress cleanups
+151 -188

0 comment

1 changed file

pr created time in 21 hours

push event ebiggers/libdeflate

Eric Biggers

commit sha f9b957c8b026c7dba3e0c7c9883444e3089daace

deflate_compress: fix up GET_NUM_COUNTERS()

GET_NUM_COUNTERS() actually returns a value about 4 times higher than intended, due to missing parentheses; ((num_syms) + 3 / 4) is supposed to be (((num_syms) + 3) / 4), or equivalently DIV_ROUND_UP(num_syms, 4) as it was in my original code from wimlib commit 394751ae1302.

This value affects the performance of Huffman code generation. But oddly enough, I'm measuring that the actual behavior results in better performance than the intended behavior. This could be due to differences between DEFLATE and LZX (the compression format I had originally tuned this value to), or a different CPU, or both. Moreover, I can't make it significantly faster by trying other values.

So, make GET_NUM_COUNTERS() intentionally expand to simply num_syms. Omit the rounding up to a multiple of 4, which doesn't seem helpful.

Eric Biggers

commit sha f1e3555f6f1093cccc43e7b1dc6363d4bd7df5f2

deflate_compress: include bit reversal in gen_codewords()

This way we don't have to iterate through all the codewords again.

Eric Biggers

commit sha 49265730648a84bd7657ec2826f0bbc288aa09cc

deflate_compress: factor out deflate_write_match()

Reduce code duplication between deflate_write_sequences() and deflate_write_item_list().

Eric Biggers

commit sha e2a66c1b22a66933f1fadfbd65c4cbb4ae430b81

deflate_compress: add deflate_write_bits() helper function

Eric Biggers

commit sha ef8f2e3948b90b4df05e737c7015cf7bbc27c38c

deflate_compress: improve some comments

push time in 21 hours

create branch ebiggers/libdeflate

branch : compress

created branch time in 21 hours

push event google/fscrypt

Eric Biggers

commit sha 6eb31650b4dc42cd0a40a962a0d513eeb827d9f5

cli-tests: fix broken test

I'm not sure how this passed the GitHub checks.

Eric Biggers

commit sha 8f619f9478ef5d3d616908bd2b30cb85d8034020

Merge pull request #341 from google/fix-test

cli-tests: fix broken test

Eric Biggers

commit sha bf17c3e80daa975ac15d6146964ca294327d8fd9

filesystem: add back the mountsByPath map

Add back the mountsByPath map, which indexes all Mounts by mountpoint. This is needed again. To avoid confusion, also rename two local variables named mountsByPath. mountsByPath won't contain nil entries, so also make AllFilesystems() use it instead of mountsByDevice. This shouldn't change its behavior.

Update https://github.com/google/fscrypt/issues/339

Eric Biggers

commit sha 65a445d4d01c09f43676180d779abbff0de40f1e

filesystem: add back canonicalizePath()

Restore the canonicalizePath() function from before commit f2eb79fb5fb10275c014b55c13e28ff02d3b70a8, since it's needed again.

Update https://github.com/google/fscrypt/issues/339

Eric Biggers

commit sha 9ae233b1ab8f6db4e5367aa8c556cbb3d97e7ca0

filesystem: make FindMount() fall back to search by path

This is needed to make FindMount() work on btrfs filesystems.

Update https://github.com/google/fscrypt/issues/339

Eric Biggers

commit sha 7d7fd8ba74e973c3e9644e2548ca565537344f6b

filesystem: fall back to path-only links if UUID cannot be determined

This is needed to allow creating protector links to btrfs filesystems.

Update https://github.com/google/fscrypt/issues/339

push time in 2 days

delete branch google/fscrypt

delete branch : fix-test

delete time in 2 days

push event google/fscrypt

Eric Biggers

commit sha 6eb31650b4dc42cd0a40a962a0d513eeb827d9f5

cli-tests: fix broken test

I'm not sure how this passed the GitHub checks.

Eric Biggers

commit sha 8f619f9478ef5d3d616908bd2b30cb85d8034020

Merge pull request #341 from google/fix-test

cli-tests: fix broken test

push time in 2 days

PR merged google/fscrypt

cli-tests: fix broken test

I'm not sure how this passed the GitHub checks.

+1 -1

0 comment

1 changed file

ebiggers

pr closed time in 2 days

PR opened google/fscrypt

cli-tests: fix broken test

I'm not sure how this passed the GitHub checks.

+1 -1

0 comment

1 changed file

pr created time in 2 days

create branch google/fscrypt

branch : fix-test

created branch time in 2 days

issue comment google/fscrypt

Can't encrypt ext4 filesystem if root is btrfs subvolume

I couldn't find a great solution, but https://github.com/google/fscrypt/pull/340 should fix this.

srd424

comment created time in 3 days

PR opened google/fscrypt

Allow the root directory to be a btrfs filesystem
+128 -29

0 comment

3 changed files

pr created time in 3 days

create branch google/fscrypt

branch : fix-btrfs

created branch time in 3 days

delete branch google/fscrypt

delete branch : broken-links

delete time in 3 days

push event google/fscrypt

Eric Biggers

commit sha 5ae7da4ee6582099de5d1b14733f8d58f4dc2816

filesystem: store mountpoint in link files as a fallback

Currently, linked protectors use filesystem link files of the form "UUID=<uuid>". These links get broken if the filesystem's UUID changes, e.g. due to the filesystem being re-created, even if the ".fscrypt" directory is backed up and restored.

To prevent links from being broken (in most cases), start storing the mountpoint path in the link files too, in the form "UUID=<uuid>\nPATH=<path>\n". When following a link, try the UUID first, and if it doesn't work try the PATH. While it's possible that the path changed too, for login protectors (the usual use case of linked protectors) this won't be an issue as the path will always be "/".

An alternative solution would be to fall back to scanning all filesystems for the needed protector descriptor. I decided not to do that, since relying on a global scan doesn't seem to be a good design. It wouldn't scale to large numbers of filesystems, it could cross security boundaries, and it would make it possible for adding a new filesystem to break fscrypt on existing filesystems. And if a global scan was an acceptable way to find protectors during normal use, then there would be no need for link files in the first place.

Note: this change is backwards compatible (i.e., fscrypt will continue to recognize old link files) but not forwards-compatible (i.e., previous versions of fscrypt won't recognize new link files).

Fixes https://github.com/google/fscrypt/issues/311
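Since fscrypt itself is written in Go, here is a purely illustrative C sketch of the fallback logic described above, assuming a link file containing exactly the "UUID=<uuid>\nPATH=<path>\n" lines; uuid_is_mounted() is a hypothetical stand-in for the real device lookup:

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for "is a filesystem with this UUID mounted?";
 * here it always fails so the PATH fallback is exercised. */
static bool uuid_is_mounted(const char *uuid)
{
	(void)uuid;
	return false;
}

/* Parse the link-file contents and report which lookup would be used:
 * the UUID first, then the stored mountpoint path as a fallback. */
static void follow_link(const char *contents)
{
	char uuid[64] = "", path[256] = "";

	for (const char *line = contents; line && *line;
	     line = strchr(line, '\n') ? strchr(line, '\n') + 1 : NULL) {
		sscanf(line, "UUID=%63[^\n]", uuid);
		sscanf(line, "PATH=%255[^\n]", path);
	}
	if (uuid[0] && uuid_is_mounted(uuid))
		printf("following link by UUID=%s\n", uuid);
	else if (path[0])
		printf("falling back to PATH=%s\n", path);
	else
		printf("broken link: no usable UUID or PATH\n");
}

int main(void)
{
	follow_link("UUID=12345678-abab-ffcd-1234-123456789012\nPATH=/\n");
	return 0;
}
```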

Eric Biggers

commit sha fac30865c04de8f4698776e94dd86c7a88fd5da2

Merge pull request #337 from google/broken-links

filesystem: store mountpoint in link files as a fallback

push time in 3 days

PR merged google/fscrypt

filesystem: store mountpoint in link files as a fallback

Currently, linked protectors use filesystem link files of the form "UUID=<uuid>". These links get broken if the filesystem's UUID changes, e.g. due to the filesystem being re-created even if the ".fscrypt" directory is backed up and restored.

To prevent links from being broken (in most cases), start storing the mountpoint path in the link files too, in the form "UUID=<uuid>\nPATH=<path>\n". When following a link, try the UUID first, and if it doesn't work try the PATH. While it's possible that the path changed too, for login protectors (the usual use case of linked protectors) this won't be an issue as the path will always be "/".

An alternative solution would be to fall back to scanning all filesystems for the needed protector descriptor. I decided not to do that, since relying on a global scan doesn't seem to be a good design. It wouldn't scale to large numbers of filesystems, it could cross security boundaries, and it would make it possible for adding a new filesystem to break fscrypt on existing filesystems. And if a global scan was an acceptable way to find protectors during normal use, then there would be no need for link files in the first place.

Note: this change is backwards compatible (i.e., fscrypt will continue to recognize old link files) but not forwards-compatible (i.e., previous versions of fscrypt won't recognize new link files).

Fixes https://github.com/google/fscrypt/issues/311

+195 -54

0 comment

5 changed files

ebiggers

pr closed time in 3 days

issue closed google/fscrypt

[Feature] try to find /.fscrypt directories in the case of a broken UUID link

Currently, users can create "linked" protectors that refer to a protector on a different filesystem. This is most commonly used to encrypt a directory on a non-root filesystem with a user's login protector (which is stored on the root filesystem). These links are stored in protectors/<protector-id>.link files and have the format UUID=<filesystem-uuid>.

Right now, if a link is broken we just return an error: cannot follow filesystem link ... no device with UUID. As an enhancement, if we detect a broken link, we could search all the mounted filesystems for a compatible .fscrypt directory. Then we could use such a directory if we find it (for unlocking or for fscrypt status). We could also output a warning advising the user on how to fix the issue. Something like:

broken link detected
To fix run "echo -n UUID=12345678-abab-ffcd-1234-123456789012 > /mnt/.fscrypt/protectors/128347210983421.link"

closed time in 3 days

josephlr

delete branch google/fscrypt

delete branch : remove-protector-from-policy

delete time in 3 days

push event google/fscrypt

Eric Biggers

commit sha 57be034ce4700fb07c10b771628c1c63d8483d09

cli-tests: add helper functions to get protector descriptors

Eric Biggers

commit sha 6ebd5a54eae2dfb16b66da649e75848fe6030b7f

cmd/fscrypt: don't load protector in remove-protector-from-policy

Make remove-protector-from-policy work even if the protector cannot be loaded (for example, due to having been deleted already).

Fixes https://github.com/google/fscrypt/issues/258
Fixes https://github.com/google/fscrypt/issues/272

Eric Biggers

commit sha 7813af71eba05166e0c2f7056e094ca8756fbe8e

Merge pull request #338 from google/remove-protector-from-policy

cmd/fscrypt: don't load protector in remove-protector-from-policy

push time in 3 days

PR merged google/fscrypt

cmd/fscrypt: don't load protector in remove-protector-from-policy

Make remove-protector-from-policy work even if the protector cannot be loaded (for example, due to having been deleted already).

Fixes https://github.com/google/fscrypt/issues/258
Fixes https://github.com/google/fscrypt/issues/272

+108 -24

0 comment

8 changed files

ebiggers

pr closed time in 3 days

issue closed google/fscrypt

Destroying an in-use protector should be forbidden?

With v0.2.9 you can successfully "metadata destroy --protector X" even though the protector is currently in use with a policy, and it then becomes impossible to remove-protector-from-policy because "protector metadata for X not found on filesystem Y".

Arguably, either the destroy should be forbidden OR all uses of the protector should be removed at the same point.

And a small bugette: if you attempt to unlock a directory which is using a policy which is "protected" by such a non-existent protector (in addition to one or more real, existing protectors) then you get a printf output with uninstantiated %placeholders:

The available protectors are:
1 - login protector for someuser
2 - custom protector "newprot"
NOTE: %d of the %d protectors failed to load. You may need to mount a linked filesystem. Run with --verbose for more information.Enter the number of protector to use:

Other than this... great work! Thanks :)

closed time in 3 days

johnsutton

issue closed google/fscrypt

Can not change login protector

Removing login protector fails.

➜ fscrypt status /mnt
ext4 filesystem "/mnt" has 2 protectors and 1 policy

PROTECTOR         LINKED   DESCRIPTION
e9c9ed7ea8188b59  Yes (/)  login protector for kamen
eb043cdbd9a92c9d  No       custom protector "transferprot"

POLICY            UNLOCKED  PROTECTORS
1b2353ac3ff97803  Yes       e9c9ed7ea8188b59, eb043cdbd9a92c9d

➜ fscrypt metadata remove-protector-from-policy --protector=/mnt:e9c9ed7ea8188b59 --policy=/mnt:1b2353ac3ff97803 --verbose
2020/10/24 03:56:07 parsed flag: mountpoint="/mnt" descriptor=e9c9ed7ea8188b59
2020/10/24 03:56:07 Reading config from "/etc/fscrypt.conf"
2020/10/24 03:56:07 creating context for "kamen"
2020/10/24 03:56:07 mnt_dir /run/snapd/ns/ufw.mnt: not a directory
2020/10/24 03:56:07 getting mnt_dir: /run/user/123/gvfs: permission denied
2020/10/24 03:56:07 mnt_dir /run/snapd/ns/snap-store.mnt: not a directory
2020/10/24 03:56:07 mnt_dir /run/snapd/ns/keepassxc.mnt: not a directory
2020/10/24 03:56:07 found ext4 filesystem "/mnt" (/dev/sdc8)
2020/10/24 03:56:07 Getting protector e9c9ed7ea8188b59
2020/10/24 03:56:07 could not read metadata at "/mnt/.fscrypt/protectors/e9c9ed7ea8188b59"
fscrypt metadata remove-protector-from-policy: filesystem /mnt: descriptor e9c9ed7ea8188b59: could not find metadata

I created an encrypted system on one machine with a login protector and then moved to another machine. I need to make a new login protector; meanwhile, I created a custom protector to get by. The problem is that I cannot remove it on either the new machine or the old one. I will lose the old machine in a few hours, so if it is needed then this is urgent, so please help.

closed time in 3 days

kamentomov

issue closed google/fscryptctl

Execution

How do I run this code, exactly? Any help with some instructions, please.

closed time in 3 days

jahidhasanlinix

issue comment google/fscryptctl

Execution

It's explained in the README file.

Please also note that most users should use fscrypt (https://github.com/google/fscrypt) instead of fscryptctl.

jahidhasanlinix

comment created time in 3 days

issue comment google/fscrypt

Can't encrypt ext4 filesystem if root is btrfs subvolume

fscrypt intentionally ignores some mountpoints. For example, if both a whole filesystem and a subtree of the same filesystem are mounted, then the subtree will be ignored, as fscrypt metadata should only be stored in the filesystem's root directory. This might be interacting badly with the btrfs use case where the root directory is a subvolume and the whole btrfs filesystem is mounted somewhere that is not the root directory. However, I thought that btrfs uses different device numbers for each subvolume, which would avoid this issue. I'll need to test it to figure out what's going on.
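A minimal sketch of one way to test the device-number question raised above: stat() two mountpoints (say, the root subvolume and the whole-filesystem mount) and compare their st_dev fields. This is just a standalone check, not anything fscrypt does verbatim:

```c
#include <inttypes.h>
#include <stdio.h>
#include <sys/stat.h>

/* Print the device number of two paths and whether they match, e.g. the
 * root directory (a btrfs subvolume) and the whole-filesystem mountpoint. */
int main(int argc, char **argv)
{
	struct stat st1, st2;

	if (argc != 3) {
		fprintf(stderr, "usage: %s PATH1 PATH2\n", argv[0]);
		return 1;
	}
	if (stat(argv[1], &st1) != 0 || stat(argv[2], &st2) != 0) {
		perror("stat");
		return 1;
	}
	printf("%s: st_dev=%ju\n", argv[1], (uintmax_t)st1.st_dev);
	printf("%s: st_dev=%ju\n", argv[2], (uintmax_t)st2.st_dev);
	printf("same device number: %s\n",
	       st1.st_dev == st2.st_dev ? "yes" : "no");
	return 0;
}
```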

srd424

comment created time in 3 days

push event ebiggers/fsverity-utils

Eric Biggers

commit sha bdf36751928501fd61aba08220be6d971f9d15c7

Makefile: fix a typo

Signed-off-by: Eric Biggers <ebiggers@google.com>

Eric Biggers

commit sha 61493fd18b6799b5b79cc84fd79073f4b6edf6d8

Clarify the purpose of built-in signatures

Signed-off-by: Eric Biggers <ebiggers@google.com>

push time in 3 days

delete branch ebiggers/libdeflate

delete branch : len_t

delete time in 6 days

push event ebiggers/libdeflate

Eric Biggers

commit sha dc64ccfb2556e7f2a9aee7e120bd83b4a31bf9dc

deflate_decompress: remove len_t typedef

Solaris already defines 'len_t' in <sys/types.h>, which causes a build error. This typedef isn't important, so just remove it and use u8 directly.

Fixes https://github.com/ebiggers/libdeflate/issues/159

push time in 6 days
