Vivek Goyal (rhvgoyal), Red Hat Inc.

projectatomic/container-storage-setup 146

Service to set up storage for Docker and other container systems

rhvgoyal/docker-mount 4

Tools to mount/unmount docker images

rhvgoyal/linux 3

Linux kernel source tree

rhvgoyal/virtiofs-tests 3

A placeholder for virtio-fs tests

rhvgoyal/moby 1

Docker - the open-source application container engine

rhvgoyal/anaconda-docs 0

Some documentation related to anaconda

rhvgoyal/atomic 0

Atomic Run Tool for installing/running/managing container images.

rhvgoyal/cli 0

The Docker CLI

issue comment: kata-containers/runtime

Virtiofs performance issue with huge files

@ganeshmaharaj are you running virtiofsd with xattr enabled (-o xattr)? I removed that, and even with cache=always I am now getting around 250MB/s with random writes.

I narrowed it down further, and it looks like the primary slowdown is with cache=always and randwrite only; randreads are just fine. A lot of this slowdown seems to be coming from calls to file_remove_privs(). It results in a lot of getxattr() calls, and that means an extra round trip to the server for every write.
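For context, file_remove_privs() in the kernel strips setuid/setgid and probes the security.capability xattr before a write proceeds; over virtiofs that probe becomes a request to virtiofsd. A small userspace sketch of the same lookup (Linux-only; the helper name is mine, not anything from the thread):

```python
import errno
import os
import tempfile

def has_capability_xattr(path: str) -> bool:
    """Probe the security.capability xattr, as file_remove_privs() does
    in-kernel before letting a write proceed."""
    try:
        os.getxattr(path, "security.capability")
        return True
    except OSError as e:
        # ENODATA: no capability set (the common case); ENOTSUP: the
        # filesystem does not support this xattr namespace at all.
        if e.errno in (errno.ENODATA, errno.ENOTSUP):
            return False
        raise

# An ordinary temp file carries no capability, so the probe is a
# negative lookup, which is the extra per-write round trip seen over
# virtiofs when xattr support is enabled.
fd, path = tempfile.mkstemp()
os.close(fd)
print(has_capability_xattr(path))
os.remove(path)
```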

ganeshmaharaj

comment created time in 19 hours

issue comment: kata-containers/runtime

Virtiofs performance issue with huge files

I might have pasted the 9p, cache=loose numbers wrong previously. This one should be correct.

9p, cache=loose

READ: bw=39.1MiB/s (41.0MB/s), 39.1MiB/s-39.1MiB/s (41.0MB/s-41.0MB/s), io=3070MiB (3219MB), run=78471-78471msec
WRITE: bw=13.1MiB/s (13.7MB/s), 13.1MiB/s-13.1MiB/s (13.7MB/s-13.7MB/s), io=1026MiB (1076MB), run=78471-78471msec
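As an aside, fio prints binary units with the decimal equivalent in parentheses; the conversion behind those paired numbers is just:

```python
def mib_s_to_mb_s(mib_per_s: float) -> float:
    """Convert MiB/s (2**20 bytes/s) to MB/s (10**6 bytes/s)."""
    return mib_per_s * 2**20 / 10**6

# The 9p cache=loose numbers above: 39.1 MiB/s read, 13.1 MiB/s write.
print(round(mib_s_to_mb_s(39.1), 1))  # 41.0
print(round(mib_s_to_mb_s(13.1), 1))  # 13.7
```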

ganeshmaharaj

comment created time in 2 days

issue comment: kata-containers/runtime

Virtiofs performance issue with huge files

@ganeshmaharaj I have tried this in a VM and my observations are as follows.

  • I am not seeing the slowdown you are seeing. In fact, cache=none is fastest with virtiofs. Surprisingly, cache=always slows it down; I am not sure why and will have to debug. Even dax seems to slow it down. Part of the reason is that this is random IO, and part of it is that this is an old dax code base and some of the locking improvements might not be here.

Anyway, I tried the following configurations with the fio command you sent, and below are my observations.

9p (cache=none)

Run status group 0 (all jobs):
READ: bw=43.1MiB/s (45.2MB/s), 43.1MiB/s-43.1MiB/s (45.2MB/s-45.2MB/s), io=3070MiB (3219MB), run=71260-71260msec
WRITE: bw=14.4MiB/s (15.1MB/s), 14.4MiB/s-14.4MiB/s (15.1MB/s-15.1MB/s), io=1026MiB (1076MB), run=71260-71260msec

9p (cache=loose)

READ: bw=43.1MiB/s (45.2MB/s), 43.1MiB/s-43.1MiB/s (45.2MB/s-45.2MB/s), io=3070MiB (3219MB), run=71182-71182msec
WRITE: bw=14.4MiB/s (15.1MB/s), 14.4MiB/s-14.4MiB/s (15.1MB/s-15.1MB/s), io=1026MiB (1076MB), run=71182-71182msec

virtiofs (cache=none)

Run status group 0 (all jobs):
READ: bw=211MiB/s (222MB/s), 211MiB/s-211MiB/s (222MB/s-222MB/s), io=3070MiB (3219MB), run=14530-14530msec
WRITE: bw=70.6MiB/s (74.0MB/s), 70.6MiB/s-70.6MiB/s (74.0MB/s-74.0MB/s), io=1026MiB (1076MB), run=14530-14530msec

virtiofs (cache=none, dax)

READ: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=3070MiB (3219MB), run=29219-29219msec
WRITE: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=1026MiB (1076MB), run=29219-29219msec

virtiofs (cache=always)

Run status group 0 (all jobs):
READ: bw=44.9MiB/s (47.0MB/s), 44.9MiB/s-44.9MiB/s (47.0MB/s-47.0MB/s), io=3070MiB (3219MB), run=68423-68423msec
WRITE: bw=14.0MiB/s (15.7MB/s), 14.0MiB/s-14.0MiB/s (15.7MB/s-15.7MB/s), io=1026MiB (1076MB), run=68423-68423msec

virtiofs (cache=always, dax)

READ: bw=99.0MiB/s (104MB/s), 99.0MiB/s-99.0MiB/s (104MB/s-104MB/s), io=3070MiB (3219MB), run=30998-30998msec
WRITE: bw=33.1MiB/s (34.7MB/s), 33.1MiB/s-33.1MiB/s (34.7MB/s-34.7MB/s), io=1026MiB (1076MB), run=30998-30998msec

In summary, cache=none is very fast for this workload, dax is in the middle, and cache=always seems very slow, almost the same as 9p.

I will spend some time figuring out why cache=always is slow. Otherwise, in the rest of the cases we are much faster than 9p (at least in my tests).

I am using a 5.8-rc4 kernel from the virtio-fs-dev branch on GitLab and qemu 5.0 from its virtio-fs-dev branch.
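To put those runs side by side, the read bandwidths can be reduced to ratios against 9p (numbers copied from the results above; the dict layout is mine):

```python
# Read bandwidth (MiB/s) from the fio runs quoted above.
results_mib_s = {
    "9p cache=none": 43.1,
    "virtiofs cache=none": 211.0,
    "virtiofs cache=none dax": 105.0,
    "virtiofs cache=always": 44.9,
}

baseline = results_mib_s["9p cache=none"]
for name, bw in results_mib_s.items():
    # Speedup relative to the 9p baseline.
    print(f"{name}: {bw / baseline:.2f}x vs 9p")
```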

ganeshmaharaj

comment created time in 2 days

issue comment: kata-containers/runtime

With virtio-fs and cache enabled, deleting a mapped file from host doesn't allow it to be recreated in the container to be saved back to host again which could result in data loss.

If cache=none fixes it, we should close this issue. cache=always will not work because host and guest metadata will get out of sync, and we don't have any notification mechanism yet to notify the guest.

eadamsintel

comment created time in 2 days

issue comment: kata-containers/tests

virtiofs: container get stuck running blogbench

I see virtiofs dax is enabled. Is it reproducible with dax disabled? There is a chance the issue is in dax memory range reclaim in the guest leading to a deadlock. Also, are there any messages in the guest console after the hang (like a lockdep warning)?

devimc

comment created time in 2 days

push event: rhvgoyal/misc

Vivek Goyal

commit sha c0b41646fbf82b97ebaa5a8e8f672b9a61356f77

read-large: Add helper to read large file one page at a time

Read a few bytes from the beginning of each 2MB page in a file.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>

push time in 7 days

push event: rhvgoyal/misc

Vivek Goyal

commit sha c55c76776815faa763ccd6b84656d05fb10e5093

write-large: Add a helper to write a string to each page of file

Write a string to the start of each 2MB page of a file.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>

push time in 7 days

pull request comment: containers/storage

Store the pvcreate --metadatasize option in storage.conf

@mikemccracken What's lvm pv metadata? Is this driver not about creating a thin pool? If not, can you please give an idea of what this driver is doing? I thought we were creating a thin pool and then carving out thin devices for containers. Why do I need to specify metadata size for regular lvm? Doesn't lvm maintain that by itself?

rhatdan

comment created time in 15 days

pull request comment: containers/storage

Store the pvcreate --metadatasize option in storage.conf

@rhatdan The default of 1K metadata size seems too small. container-storage-setup takes 0.1% of the free space in the volume group, so for a 100G volume group the metadata size will be 100MB. You might want to use the same logic instead of a fixed-size default.
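That sizing rule is simple to state in code; a sketch of it (the helper name is hypothetical; the 0.1%-of-free-space and 1K-default figures are from the comment above):

```python
def pv_metadata_size_bytes(vg_free_bytes: int, floor: int = 1024) -> int:
    """0.1% of the volume group's free space, never below the 1K default."""
    return max(int(vg_free_bytes * 0.001), floor)

# For a 100G volume group this yields 100MB rather than the fixed 1K default.
print(pv_metadata_size_bytes(100 * 10**9))  # 100000000
```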

rhatdan

comment created time in 15 days

pull request comment: containers/storage

devmapper: increase LVM PV metadatasize

Agreed, this should be configurable by the user. container-storage-setup had it as part of the config file as well, where the user could override the defaults.

mikemccracken

comment created time in 16 days

push event: rhvgoyal/misc

Vivek Goyal

commit sha 33dc9d0f5b62e524bf06ce86e9446b238a4e1ef1

fsync: Open file O_RDWR


Vivek Goyal

commit sha 1085d459c19e95616d5e99432f6c5bd8a0fd2051

fsyncdir: Add a helper to fsync a directory

push time in 21 days

push event: rhvgoyal/misc

Vivek Goyal

commit sha 9afcf048a495150dcaf56b31dc972ba9764152cb

fsync: Fix a bug in the error message; display the correct file name

push time in 21 days

push event: rhvgoyal/misc

Vivek Goyal

commit sha 692b296e9f472b424a3067e18602a3373c632ba0

Add a helper to call syncfs


push time in 21 days

push event: rhvgoyal/misc

Vivek Goyal

commit sha ec8ec84de909b5de803348fdb46800f70c0b4a40

virtiofs: Add pjdfstests to the list of testsuites to run


push time in 23 days

push event: rhvgoyal/misc

Vivek Goyal

commit sha 714b71709f3d3e19ddd15af0d864f1c678603379

virtiofs: Add more test description to testing requirements

Add details of how to run xfstests for overlayfs on top of virtiofs.

push time in a month

push event: rhvgoyal/misc

Vivek Goyal

commit sha f8859284a50227c50bdf81a30a9badec2bec2320

virtiofs: Add a test document

Create a document which describes what tests to run to make sure virtiofs is sane and we have not broken anything, either in the kernel or in virtiofsd.

push time in a month

create branch: rhvgoyal/linux

branch: asyncpf-error-v1

created branch time in a month

push event: rhvgoyal/linux

Vivek Goyal

commit sha ccf0aa161f7a213938bb410af4425aa7c2b324a3

overlayfs: Fix redirect traversal on metacopy dentries

Amir pointed me to the metacopy test cases in unionmount-testsuite, and I decided to run "./run --ov=10 --meta". It failed while running the test "rename-mass-5.py".

The problem is with absolute redirect traversal on an intermediate metacopy dentry. We do not store intermediate metacopy dentries, and we also skip the current layer and move on to lookup in the next layer. But at the end of the loop we have logic to reset "poe" and the layer index if the currently looked up dentry has an absolute redirect. We skip all that, and that means lookup in the next layer will fail.

Following is a simple test case to reproduce this.

- mkdir -p lower upper work merged lower/a lower/b
- touch lower/a/foo.txt
- mount -t overlay -o lowerdir=lower,upperdir=upper,workdir=work,metacopy=on none merged
# Following will create absolute redirect "/a/foo.txt" on upper/b/bar.txt.
- mv merged/a/foo.txt merged/b/bar.txt
# Unmount overlay and use upper as lower layer (lower2) for next mount.
- umount merged
- mv upper lower2
- rm -rf work; mkdir -p upper work
- mount -t overlay -o lowerdir=lower2:lower,upperdir=upper,workdir=work,metacopy=on none merged
# Force a metacopy copy-up.
- chown bin:bin merged/b/bar.txt
# Unmount overlay and use upper as lower layer (lower3) for next mount.
- umount merged
- mv upper lower3
- rm -rf work; mkdir -p upper work
- mount -t overlay -o lowerdir=lower3:lower2:lower,upperdir=upper,workdir=work,metacopy=on none merged
# ls merged/b/bar.txt
ls: cannot access 'bar.txt': Input/output error

The intermediate lower layer (lower2) has metacopy dentry b/bar.txt with absolute redirect "/a/foo.txt". We skipped the redirect processing at the end of the loop which sets poe to roe and sets the appropriate next lower layer index, and that meant lookup failed in the next layer.

Fix this by continuing the loop for any intermediate dentries. We still do not save these in the lower stack. With this fix applied, unionmount-testsuite "./run --ov=10 --meta" now passes.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>

push time in a month

push event: rhvgoyal/linux

Vivek Goyal

commit sha e804c16c28f732c9aad20649995c7cb05d1a9613

overlayfs: Simplify setting of origin for index lookup

overlayfs can keep an index of copied-up files and directories, and it seems to serve two primary purposes. For regular files, it avoids breaking lower hardlinks over copy up. For directories, it seems to be used for various error checks.

During ovl_lookup(), we look up the index using the lower dentry in many cases. That lower dentry is called "origin", and following is a summary of the current logic.

If there is no upperdentry, always look up the index using the lower dentry. For regular files this helps avoid breaking hard links over copy up, and for directories it seems to be just error checks.

If there is an upperdentry, then there are 3 possible cases.

- For directories, the lower dentry is found in two ways: one is regular path-based lookup in the lower layers, and the second is using the ORIGIN xattr on the upper dentry. First verify that the path-based lookup lower dentry matches the one pointed to by the upper ORIGIN xattr. If yes, use this verified origin for index lookup.

- For regular (non-metacopy) files, there is no path-based lookup in the lower layers, as lookup stops once we find the upper dentry. So there is no origin verification. If there is an ORIGIN xattr present on upper, use that to look up the index; otherwise don't.

- For regular metacopy files, again the lower dentry is found using path-based lookup as well as the ORIGIN xattr on upper. Path-based lookup is continued in this case to find the lower data dentry for the metacopy upper. So, like directories, we only use a verified origin. If the ORIGIN xattr is not present (either because the lower did not support file handles or because this is a hardlink copied up with index=off), then don't use the path-lookup-based lower dentry as origin. This is the same as the regular non-metacopy file case.

Suggested-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Reviewed-by: Amir Goldstein <amir73il@gmail.com>

Vivek Goyal

commit sha f70e1021165498bb52176aeb9e3815b118daf052

overlayfs: ovl_lookup(): Use only uppermetacopy state

Currently we use a variable "metacopy" which signifies that the dentry could be either uppermetacopy or lowermetacopy. Amir suggested that we can move code around and use d.metacopy in such a way that we don't need lowermetacopy and can keep just uppermetacopy. So this patch replaces "metacopy" with "uppermetacopy". It also moves some code a little higher to keep reading simpler.

Suggested-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>

Vivek Goyal

commit sha 9c526520d2786c246bf7607adc1696cf2b1d54e5

overlayfs: Initialize OVL_UPPERDATA in ovl_lookup()

Currently ovl_get_inode() initializes the OVL_UPPERDATA flag, and for that it has to call ovl_check_metacopy_xattr() and check whether the metacopy xattr is present. yangerkun reported that sometimes the underlying filesystem might return -EIO, and in that case the error handling path does not clean up properly, leading to various warnings. Running generic/461 with an ext4 upper/lower layer sometimes triggers the bug as below (linux 4.19):

[ 551.001349] overlayfs: failed to get metacopy (-5)
[ 551.003464] overlayfs: failed to get inode (-5)
[ 551.004243] overlayfs: cleanup of 'd44/fd51' failed (-5)
[ 551.004941] overlayfs: failed to get origin (-5)
[ 551.005199] ------------[ cut here ]------------
[ 551.006697] WARNING: CPU: 3 PID: 24674 at fs/inode.c:1528 iput+0x33b/0x400
...
[ 551.027219] Call Trace:
[ 551.027623] ovl_create_object+0x13f/0x170
[ 551.028268] ovl_create+0x27/0x30
[ 551.028799] path_openat+0x1a35/0x1ea0
[ 551.029377] do_filp_open+0xad/0x160
[ 551.029944] ? vfs_writev+0xe9/0x170
[ 551.030499] ? page_counter_try_charge+0x77/0x120
[ 551.031245] ? __alloc_fd+0x160/0x2a0
[ 551.031832] ? do_sys_open+0x189/0x340
[ 551.032417] ? get_unused_fd_flags+0x34/0x40
[ 551.033081] do_sys_open+0x189/0x340
[ 551.033632] __x64_sys_creat+0x24/0x30
[ 551.034219] do_syscall_64+0xd5/0x430
[ 551.034800] entry_SYSCALL_64_after_hwframe+0x44/0xa9

One solution is to improve error handling and call iget_failed() if an error is encountered. Amir thinks that this path is a little intricate and there is no real need to check and initialize OVL_UPPERDATA in ovl_get_inode(). Instead, the caller of ovl_get_inode() can initialize this state. This also avoids double checking of the metacopy xattr lookup in ovl_lookup() and ovl_get_inode().

OVL_UPPERDATA is an inode flag, so I was a little concerned that initializing it outside ovl_get_inode() might have races. But this is a one-way transition: once a file has been fully copied up, it can't go back to being a metacopy file again, and that seems to help avoid races. So as of now I can't see any races w.r.t. OVL_UPPERDATA being set wrongly.

So move the setting of OVL_UPPERDATA inside the callers of ovl_get_inode(). ovl_obtain_alias() already does it, so the only two callers now left are ovl_lookup() and ovl_instantiate().

Reported-by: yangerkun <yangerkun@huawei.com>
Suggested-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Reviewed-by: Amir Goldstein <amir73il@gmail.com>

push time in a month

push event: rhvgoyal/linux

Vivek Goyal

commit sha 62c76ebf1f30394df27f4b01050f3fb16b8908a5

overlayfs: Simplify setting of origin for index lookup

overlayfs can keep an index of copied-up files and directories, and it seems to serve two primary purposes. For regular files, it avoids breaking lower hardlinks over copy up. For directories, it seems to be used for various error checks.

During ovl_lookup(), we look up the index using the lower dentry in many cases. That lower dentry is called "origin", and following is a summary of the current logic.

If there is no upperdentry, always look up the index using the lower dentry. For regular files this helps avoid breaking hard links over copy up, and for directories it seems to be just error checks.

If there is an upperdentry, then there are 3 possible cases.

- For directories, the lower dentry is found in two ways: one is regular path-based lookup in the lower layers, and the second is using the ORIGIN xattr on the upper dentry. First verify that the path-based lookup lower dentry matches the one pointed to by the upper ORIGIN xattr. If yes, use this verified origin for index lookup.

- For regular (non-metacopy) files, there is no path-based lookup in the lower layers, as lookup stops once we find the upper dentry. So there is no origin verification. If there is an ORIGIN xattr present on upper, use that to look up the index; otherwise don't.

- For regular metacopy files, again the lower dentry is found using path-based lookup as well as the ORIGIN xattr on upper. Path-based lookup is continued in this case to find the lower data dentry for the metacopy upper. So, like directories, we only use a verified origin. If the ORIGIN xattr is not present (either because the lower did not support file handles or because this is a hardlink copied up with index=off), then don't use the path-lookup-based lower dentry as origin. This is the same as the regular non-metacopy file case.

Suggested-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>

Vivek Goyal

commit sha 9148278231ec333067da12093c10613e9a1807ab

overlayfs: ovl_lookup(): Use only uppermetacopy state

Currently we use a variable "metacopy" which signifies that the dentry could be either uppermetacopy or lowermetacopy. Amir suggested that we can move code around and use d.metacopy in such a way that we don't need lowermetacopy and can keep just uppermetacopy. So this patch replaces "metacopy" with "uppermetacopy". It also moves some code a little higher to keep reading simpler.

Suggested-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>

Vivek Goyal

commit sha 1617975d8743f7951adab5911b9ec63936225e5a

overlayfs: Initialize OVL_UPPERDATA in ovl_lookup()

Currently ovl_get_inode() initializes the OVL_UPPERDATA flag, and for that it has to call ovl_check_metacopy_xattr() and check whether the metacopy xattr is present. yangerkun reported that sometimes the underlying filesystem might return -EIO, and in that case the error handling path does not clean up properly, leading to various warnings. Running generic/461 with an ext4 upper/lower layer sometimes triggers the bug as below (linux 4.19):

[ 551.001349] overlayfs: failed to get metacopy (-5)
[ 551.003464] overlayfs: failed to get inode (-5)
[ 551.004243] overlayfs: cleanup of 'd44/fd51' failed (-5)
[ 551.004941] overlayfs: failed to get origin (-5)
[ 551.005199] ------------[ cut here ]------------
[ 551.006697] WARNING: CPU: 3 PID: 24674 at fs/inode.c:1528 iput+0x33b/0x400
...
[ 551.027219] Call Trace:
[ 551.027623] ovl_create_object+0x13f/0x170
[ 551.028268] ovl_create+0x27/0x30
[ 551.028799] path_openat+0x1a35/0x1ea0
[ 551.029377] do_filp_open+0xad/0x160
[ 551.029944] ? vfs_writev+0xe9/0x170
[ 551.030499] ? page_counter_try_charge+0x77/0x120
[ 551.031245] ? __alloc_fd+0x160/0x2a0
[ 551.031832] ? do_sys_open+0x189/0x340
[ 551.032417] ? get_unused_fd_flags+0x34/0x40
[ 551.033081] do_sys_open+0x189/0x340
[ 551.033632] __x64_sys_creat+0x24/0x30
[ 551.034219] do_syscall_64+0xd5/0x430
[ 551.034800] entry_SYSCALL_64_after_hwframe+0x44/0xa9

One solution is to improve error handling and call iget_failed() if an error is encountered. Amir thinks that this path is a little intricate and there is no real need to check and initialize OVL_UPPERDATA in ovl_get_inode(). Instead, the caller of ovl_get_inode() can initialize this state. This also avoids double checking of the metacopy xattr lookup in ovl_lookup() and ovl_get_inode().

OVL_UPPERDATA is an inode flag, so I was a little concerned that initializing it outside ovl_get_inode() might have races. But this is a one-way transition: once a file has been fully copied up, it can't go back to being a metacopy file again, and that seems to help avoid races. So as of now I can't see any races w.r.t. OVL_UPPERDATA being set wrongly.

So move the setting of OVL_UPPERDATA inside the callers of ovl_get_inode(). ovl_obtain_alias() already does it, so the only two callers now left are ovl_lookup() and ovl_instantiate().

Reported-by: yangerkun <yangerkun@huawei.com>
Suggested-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>

push time in 2 months

create branch: rhvgoyal/linux

branch: ovl-data-flag-fix

created branch time in 2 months

push event: rhvgoyal/misc

Vivek Goyal

commit sha cc90b6ef51f200c7fe6cf962e445555199ddfd3a

virtiofs: Add a test for futimens()

Add a test to modify the timestamps of a file.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>

push time in 2 months

push event: rhvgoyal/linux

Vivek Goyal

commit sha 29eb77236b8c5daa0d68e57d3b084c513f37cb16

virtiofs: Do not use fuse_fill_super_common() for fuse device installation

fuse_fill_super_common() allocates and installs one fuse_device. Hence virtiofs allocates and installs all fuse devices by itself except one. This makes the logic a little twisted. There does not seem to be any real need for virtiofs not to allocate and install all fuse devices itself, so opt out of fuse device allocation and installation when calling fuse_fill_super_common().

Regular fuse still wants fuse_fill_super_common() to install the fuse_device. It needs to protect against races where two mounters are trying to mount fuse using the same fd; in that case one will succeed while the other will get -EINVAL. virtiofs does not have this issue because sget_fc() resolves the race w.r.t. multiple mounters, and only one instance of virtio_fs_fill_super() should be in progress for the same filesystem.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>

push time in 2 months

push event: rhvgoyal/linux

Vivek Goyal

commit sha ce4c514b0a2ad6a39678921b0394c886b4c19831

fuse, virtiofs: Do not alloc/install fuse device in fuse_fill_super_common()

As of now, fuse_fill_super_common() allocates and installs one fuse device. Filesystems like virtiofs can have more than one filesystem queue and can have one fuse device per queue. Given that fuse_fill_super_common() only handles one device, virtiofs allocates and installs fuse devices for all queues except one. This makes the logic a little twisted and hard to understand. It is probably better not to do any device allocation/installation in fuse_fill_super_common() and let the caller take care of it instead.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>

push time in 2 months

create branch: rhvgoyal/linux

branch: fuse-dev-install

created branch time in 3 months

create branch: rhvgoyal/linux

branch: ovl-setattr-fix

created branch time in 3 months

issue comment: kata-containers/runtime

Unable to start docker in docker with virtio-fs

@dadux I just tested the virtio-fs-dev branch, and I am able to mount overlayfs on top of virtiofs without any issues. The only warning I get at mount time is "overlayfs: upper fs does not support tmpfile". You can ignore that warning, as overlayfs will work even without tmpfile support from the underlying fs.

awprice

comment created time in 3 months

issue comment: kata-containers/runtime

Unable to start docker in docker with virtio-fs

@dadux You need to use virtio-fs-dev branch. That has the latest code.

https://gitlab.com/virtio-fs/linux/-/tree/virtio-fs-dev

Also make sure virtiofsd is running with "-o xattr" option enabled.

awprice

comment created time in 3 months

push event: rhvgoyal/misc

Vivek Goyal

commit sha f58cf471e5b590526b1c56ebb20414418f970c23

Add a test to trigger get_user_pages()

Test triggering get_user_pages() on a virtiofs dax-mmapped file.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>

push time in 3 months

push event: rhvgoyal/misc

Vivek Goyal

commit sha 481619f766cf227d69059c7b421c64da65a41053

virtiofs: Move read-mcsafe.c and write-mcsafe.c into the virtiofs dir

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>

push time in 3 months

pull request comment: containers/storage

overlay: recreate workdir on UpdateLayerIDMap

So this is in the context of setting up a new overlay mount, that is, the old upper directory is gone and a fresh upper directory is being set up. If yes, then it makes sense to start with a clean work/ directory as well.

giuseppe

comment created time in 3 months
