CernVM / CVM-1625

Cache cleanup on big files suboptimal with larger chunk sizes



    • Type: Improvement
    • Resolution: Fixed
    • Priority: Medium
    • Affects Version/s: CernVM-FS 2.5.2
    • Fix Version/s: None
    • Component/s: CVMFS
    • Labels: None
    • Environment: x86_64-slc6-gcc48-opt


Whenever a cvmfs client reads a file larger than kBigFile (currently hard-coded to 25 MB), it first makes sure there is at least that much free space in the cache, running a small synchronous cache cleanup if needed. The problem is that when the file chunk size is larger than kBigFile and the repository contains many files bigger than that, there will be many of these small synchronous cleanups, while the asynchronous cleanup won't get triggered until the relatively rare times that smaller files get put into the cache. I think there should be some way around that, for example: turn kBigFile into a settable client parameter; or encode the chunk size in the catalog and automatically raise the limit above the chunk size, at least when the chunk size is reasonable compared to the size of the cache. Or perhaps there should be a way to trigger the asynchronous cleanup after a synchronous one, in order to clean up all the way down to the low water mark as if the synchronous cleanup hadn't happened.
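For illustration, the current trigger logic described above can be sketched roughly as follows. This is a minimal sketch, not the actual cvmfs implementation: the constant kBigFile is the threshold named in the report, but the function and enum names are hypothetical, and the threshold conditions are simplified.

```cpp
#include <cassert>
#include <cstdint>

// Threshold from the report: files above this size get the special
// "ensure space first" treatment. 25 MB, currently hard-coded.
const uint64_t kBigFile = 25ULL * 1024 * 1024;

enum CleanupKind { kNone, kSmallSynchronous, kAsynchronous };

// Hypothetical sketch: decide which cleanup a fetch triggers, given the
// size of the object being fetched and the current free cache space.
CleanupKind CleanupForFetch(uint64_t object_size, uint64_t free_space) {
  if (object_size >= kBigFile) {
    // Big object: a small synchronous cleanup frees just enough space
    // for this one object, no more.
    return (free_space < object_size) ? kSmallSynchronous : kNone;
  }
  // Small object: only this path can kick off the asynchronous cleanup
  // that goes all the way down to the low water mark.
  return (free_space < kBigFile) ? kAsynchronous : kNone;
}
```

With a chunk size above kBigFile, every chunk of every large file takes the first branch, so a repository dominated by large files produces a stream of small synchronous cleanups and the low-water-mark cleanup on the second branch rarely fires.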





              Assignee: Jakob Blomer (jblomer)
              Reporter: Dave Dykstra (dwd)

