
Mounting a volume subpath with many files in it leaks node storage

What happened:

We have a cronjob copying files from a volume to a backup volume. The backup volume has lots of files in it (92288 at the moment).

The cronjob mounts both volumes with a subPath (a rough sketch of the mounts is shown after the list):

  • source volume has ~100 files to be backed up -- storage is azure-files
  • backup volume has 92k files (the number of files seems to be the issue) -- storage is azure-files
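
For reference, the relevant part of the pod spec looks roughly like the sketch below (container command, volume names, claim names, and subPath values are placeholders, not our real ones):

```yaml
# Illustrative excerpt of the CronJob's pod template -- names and paths are placeholders
containers:
  - name: backup
    image: busybox
    command: ["sh", "-c", "cp -r /src/. /backup/"]
    volumeMounts:
      - name: source-vol
        mountPath: /src
        subPath: app-data        # ~100 files, azure-files
      - name: backup-vol
        mountPath: /backup
        subPath: daily           # ~92k files, azure-files
volumes:
  - name: source-vol
    persistentVolumeClaim:
      claimName: source-pvc
  - name: backup-vol
    persistentVolumeClaim:
      claimName: backup-pvc
```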

Each time we mount the volume, the underlying node loses 130MB of disk storage. Since our cronjob runs every minute, the node fills up at a rate of roughly 7.8GB per hour (130MB × 60 runs). I'm ok with the cronjob taking up temporary space, but these 130MB seem to be lost.

Some kind of garbage collection seems to happen at some point, which frees a bit of storage on the node, but we had to increase the OS disk size to accommodate this leak.


What you expected to happen:

I expect mounting volumes not to consume space on the nodes. Taking up space while the pod is running would be OK, but once the pods are garbage-collected the space should be reclaimed.

How to reproduce it (as minimally and precisely as possible):

Create a volume containing ~100k files and a cronjob that mounts that volume and runs every minute. The cronjob itself can do nothing ('sleep 1').

The cron will run every minute and leak some amount of node storage on each run.
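
A minimal repro manifest would look roughly like this (the PVC name and subPath directory are placeholders; the PVC is assumed to already contain ~100k files):

```yaml
# Minimal repro sketch -- PVC name and subPath directory are assumptions
apiVersion: batch/v1beta1        # batch/v1beta1 on 1.15; batch/v1 on newer clusters
kind: CronJob
metadata:
  name: subpath-leak-repro
spec:
  schedule: "* * * * *"          # run every minute
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: noop
              image: busybox
              command: ["sleep", "1"]
              volumeMounts:
                - name: data
                  mountPath: /data
                  subPath: many-files   # directory holding ~100k files
          volumes:
            - name: data
              persistentVolumeClaim:
                claimName: many-files-pvc
```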

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version): 1.15.11
  • Cloud provider or hardware configuration: Azure
  • OS (e.g: cat /etc/os-release): AKS Ubuntu
  • Kernel (e.g. uname -a):
  • Install tools:
  • Network plugin and version (if this is a network-related bug):
  • Others:

Comment from Gui13:

Not stale, we still have the issue! /unlabel stale
