CernVM / CVM-1524

Make cvmfs_server check easily automatable



    • Type: Improvement
    • Status: Closed
    • Priority: Medium
    • Resolution: Fixed
    • Fix Version: CernVM-FS 2.9
    • Component: CVMFS
    • Environment: x86_64-slc6-gcc48-opt


      I recently learned that cvmfs_server check on a RAL stratum 1 found numerous missing chunks, and since then I have been running cvmfs_server check -i on a stratum 1 at UNL that is only a month old; it has found numerous corrupted files (the filesystem is locally attached ZFS, but I don't know whether that's relevant). So I would like to make it easy to run cvmfs_server check on all repositories. These are my ideas:

      1. Add a "last_check" field to .cvmfs_status.json after a successful check. (Separately I would add a cvmfs-servermon test that looks for very old last_check timestamps, maybe 2 or 3 months; I expect checking everything to take weeks, but I will find out.)
      2. Add a cvmfs_server check -a option that runs on all repositories, ordered by oldest successful check (alphabetically among repositories that have never had one). It should continue even if a check fails, and it should write its output to /var/log/cvmfs/checks.log. It should also create a lock file so that only one instance can be running at a time, so it can be initiated daily from cron. It should have a configuration variable specifying the minimum amount of time between -a checks.
      3. Support running on both release managers and stratum 1s.
      4. Add a cvmfs_server check -d option that, on a stratum 1, re-downloads files that have problems from the stratum 0.
      5. Demote the "inspecting catalog" messages to debug messages; they are not very helpful.
      6. Print statistics at the end of each repository's check.
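The staleness monitoring in idea 1 could look something like the sketch below. The "last_check" field name, its timestamp format (output of date, like the existing "last_snapshot" field), and the 60-day threshold are all assumptions taken from the proposal, not an existing interface; a sample status file is created locally so the snippet is self-contained.

```shell
# Hypothetical cvmfs-servermon-style staleness test for "last_check",
# assuming the field holds a date(1)-formatted UTC timestamp.
status=$(mktemp)
cat > "$status" <<'EOF'
{"last_snapshot": "Mon Mar  1 12:00:00 UTC 2021", "last_check": "Mon Feb  1 12:00:00 UTC 2021"}
EOF

# Extract the last_check value without depending on jq.
last_check=$(sed -n 's/.*"last_check": *"\([^"]*\)".*/\1/p' "$status")
# Age in whole days (GNU date).
age_days=$(( ($(date +%s) - $(date -d "$last_check" +%s)) / 86400 ))
if [ "$age_days" -gt 60 ]; then
    echo "WARNING: last successful check was $age_days days ago"
fi
rm -f "$status"
```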

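One plausible reading of the ordering in idea 2 — repositories that have never had a successful check first, alphabetically, then the rest from oldest to newest last check — can be sketched in Python. The input shape (repository name mapped to last-check epoch seconds, None for never checked) is invented for illustration:

```python
def check_order(last_checks):
    """Return repository names in the order check -a would process them."""
    # Never-checked repositories first, alphabetically.
    never = sorted(r for r, t in last_checks.items() if t is None)
    # Then the rest, oldest successful check first.
    checked = sorted((r for r, t in last_checks.items() if t is not None),
                     key=lambda r: last_checks[r])
    return never + checked

repos = {"atlas.cern.ch": 1600000000,
         "cms.cern.ch": None,
         "sft.cern.ch": 1500000000}
print(check_order(repos))  # → ['cms.cern.ch', 'sft.cern.ch', 'atlas.cern.ch']
```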

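The single-instance lock in idea 2 could be built on flock(1). In this sketch the cvmfs_server invocation is replaced by a placeholder command and the lock path is hypothetical; a cron entry would simply call the wrapper daily:

```shell
# Hypothetical wrapper ensuring only one "check -a" runs at a time.
lockfile=$(mktemp -u)   # stand-in for e.g. a lock under /var/run

run_exclusive() {
    (
        # Fail immediately if another instance holds the lock.
        flock -n 9 || { echo "skipped: another check -a is running"; exit 0; }
        "$@"
    ) 9> "$lockfile"
}

run_exclusive echo "check -a started"
```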
              Assignee: Dave Dykstra (dwd)
              Reporter: Dave Dykstra (dwd)