Details
- Type: Bug
- Status: Open
- Priority: High
- Resolution: Unresolved
- Affects Version: CernVM-FS 2.6.3
- None
- Environment:
[root@cmsrm-wn145 (wn)~]# rpm -qa | grep cvmfs
cvmfs-2.6.3-1.el6.x86_64
cvmfs-config-egi-2.4-2.3.obs.el6.noarch
[root@cmsrm-wn145 (wn)~]# uname -a
Linux cmsrm-wn145.roma1.infn.it 2.6.32-754.25.1.el6.x86_64 #1 SMP Tue Dec 17 13:08:11 CST 2019 x86_64 x86_64 x86_64 GNU/Linux
[root@cmsrm-wn145 (wn)~]# cat /etc/redhat-release
Scientific Linux release 6.10 (Carbon)
- Bug report
- Severity: 5 - Blocker
- Platform: x86_64-slc6-gcc62-opt
Description
This has been an ongoing problem for more than a year. The node has 48 cores; typically, when more than 40 jobs are running (one per core), we run into the "too many levels of symbolic links" error:
[root@cmsrm-wn145 (wn)~]# ls /cvmfs/cms.cern.ch/
ls: cannot access /cvmfs/cms.cern.ch/: Too many levels of symbolic links
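When the error appears, a few client-side checks could help establish whether the fuse process behind /cvmfs/cms.cern.ch is still alive. This is only a sketch using the standard cvmfs_config tools; the repository name is taken from the example above:

# Can the client (re)open the repository at all?
cvmfs_config probe cms.cern.ch

# Detailed client status: fuse process PID, open file descriptor counters, I/O errors
cvmfs_config stat -v cms.cern.ch

# Is the cvmfs2 fuse process for the repository still running?
pgrep -lf 'cvmfs2 .*cms.cern.ch'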
Once the number of jobs decreases, it is possible to mount the directory again. In the past I noticed that the problem could be related to the number of open files (the per-user limits are shown below; a check against the cvmfs2 process itself is sketched after the ulimit output):
[root@cmsrm-wn145 (wn)~]# su - pilcms45
[pilcms45@cmsrm-wn145 ~]$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 515173
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 2048
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 1024
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
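To test the open-files hypothesis directly, the file-descriptor usage of the cvmfs2 process itself could be compared with its configured limit. A minimal sketch, assuming the repository name from above; CVMFS_NFILES is the client parameter that sets the cvmfs2 descriptor limit in /etc/cvmfs/default.local, and 65536 is only an example value:

# Count the descriptors currently held by the cvmfs2 process serving cms.cern.ch
ls /proc/$(pgrep -f 'cvmfs2 .*cms.cern.ch' | head -1)/fd | wc -l

# Compare with the client's own counters (used vs. maximum file descriptors)
cvmfs_config stat cms.cern.ch

# If the limit turns out to be too low, it could be raised and the config reloaded:
#   echo 'CVMFS_NFILES=65536' >> /etc/cvmfs/default.local
#   cvmfs_config reload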