CernVM-FS Software Distribution Service
Installation & Configuration
The CernVM File System 1 (CernVM-FS) provides a scalable, reliable and low-maintenance software distribution service:
- Read-only file system to deliver scientific software to client nodes
- Uses standard HTTP protocol, enabling a variety of web caches
- Files and directories are hosted on standard web servers
- Ensures data authenticity/integrity over untrusted connections
- Optimized for small files that are frequently opened/read as a whole
Clients mount a virtual file system to /cvmfs/$repo.$domain
- POSIX read-only file system in user space (FUSE module)
- Loads data on demand as soon as directories/files are accessed (see the example below)
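For illustration, simply accessing a path below /cvmfs triggers the automounter; a minimal sketch, using the alice.cern.ch repository from the examples further below:
# accessing the repository path triggers the autofs/FUSE mount
ls /cvmfs/alice.cern.ch
# show the resulting mount and cache usage
df -h /cvmfs/alice.cern.ch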
Create/update a CernVM-FS repository on a Release Manager Machine:
- Mounts a CernVM-FS repository in read/write mode
- Software updates installed to a writable scratch area
- Changes in the scratch area are merged into the CernVM-FS repository
- Merge/publish are atomic operations controlled by the user (see the sketch below)
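A minimal sketch of this cycle on the Release Manager Machine (the repository name sw.example.org is a placeholder; the actual commands are shown in the Publishing section below):
# open a transaction, the writable scratch area becomes available under /cvmfs
cvmfs_server transaction sw.example.org
# install or update software inside the repository mount
cp -r /tmp/new-software /cvmfs/sw.example.org/
# atomically merge the scratch area into the repository and publish a new revision
cvmfs_server publish sw.example.org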
Development repositories are available on GitHub 2
- …CernVM-FS’s documentation 3 on Read The Docs
- …pre-built Linux packages 4 are provided by the project
Add the CernVM-FS Yum package repository to a node 5:
dnf install -y epel-release
dnf install -y https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest.noarch.rpm
Clients
Commands to check the client status…
systemctl status autofs # Automounter status
journalctl -u autofs # print logs
cvmfs_config probe # check the status of the configuration
cvmfs_config chksetup # check the CVMFS configuration
cvmfs_talk -i $repo revision # check the catalog revision number
# list available repositories on a server
curl http://$server/cvmfs/info/v1/repositories.json
Installation
# install the package
dnf install -y cvmfs
# configure FUSE
echo user_allow_other > /etc/fuse.conf
# configure automount
echo '/cvmfs /etc/auto.cvmfs' > /etc/auto.master.d/cvmfs.autofs
# CVMFS default configuration
cat <<EOF >/etc/cvmfs/default.local
CVMFS_DEFAULT_DOMAIN=gsi.de
CVMFS_HTTP_PROXY=http://proxy01.example.org:3128;http://proxy02.example.org:3128;DIRECT
CVMFS_QUOTA_LIMIT=51200 # MB
CVMFS_MAX_TTL=15
EOF
# configure stratum 0
cat <<EOF >/etc/cvmfs/domain.d/example.org.conf
CVMFS_KEYS_DIR=/etc/cvmfs/keys/example.org
CVMFS_SERVER_URL="http://cvmfs-s0.example.org/cvmfs/@fqrn@"
EOF
# deploy the shared public key for the domain
mkdir /etc/cvmfs/keys/example.org
cat <<EOF >/etc/cvmfs/keys/example.org/example.org.pub
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwK1G5ScR02rvc7nZCo4m
#....
-----END PUBLIC KEY-----
EOF
systemctl enable --now autofs
Configuration
Configuration files and directories:
# …configure FUSE with option `user_allow_other`
/etc/fuse.conf
# …enable the mount directory `/cvmfs /etc/auto.cvmfs`
/etc/auto.master.d/cvmfs.autofs
# …default client configuration
/etc/cvmfs/default.conf
/etc/cvmfs/default.d/*.conf # package specific
# …local client configuration (overrides *.conf)
/etc/cvmfs/default.local
# …domain specific configuration
/etc/cvmfs/domain.d/*.conf
# …repository specific configuration
/etc/cvmfs/config.d/*.conf
# domain-specific sub directories with public keys
/etc/cvmfs/keys/**/*.pub
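As an illustration, a repository-specific override could look like the following (hypothetical file; the repository name is taken from the examples below):
cat > /etc/cvmfs/config.d/alice.cern.ch.local <<EOF
# bypass the site proxies for this repository only
CVMFS_HTTP_PROXY="DIRECT"
EOF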
Adding Repositories
Enable a CernVM-FS repository:
cat > /etc/cvmfs/default.local <<EOF
CVMFS_REPOSITORIES=alice.cern.ch,alice-ocdb.cern.ch
CVMFS_HTTP_PROXY="DIRECT"
EOF
# check configuration
cvmfs_config chksetup
cvmfs_config showconfig $repo
cvmfs_config probe
# mount a CVMFS file-system
systemctl restart autofs
# check the mount
ls /cvmfs/alice.cern.ch
Proxies
CernVM-FS uses a dedicated HTTP proxy configuration, independent from system-wide settings. Instead of a single proxy, CernVM-FS uses a chain of load-balanced proxy groups. The CernVM-FS proxies are set by the CVMFS_HTTP_PROXY
parameter and should include a list of appropriate HTTP proxies:
# list the active proxy
>>> cvmfs_talk -i $repo proxy info
Load-balance groups:
[0] http://10.20.0.228:3128 (proxy01.example.org, +2h)
Active proxy: [0] http://10.20.0.228:3128
Following is an example configuration:
cat > /etc/cvmfs/default.local <<EOF
CVMFS_REPOSITORIES=alice.cern.ch,alice-ocdb.cern.ch
CVMFS_DEFAULT_DOMAIN=gsi.de
CVMFS_HTTP_PROXY=http://proxy01.example.org:3128;http://proxy02.example.org:3128;DIRECT
CVMFS_QUOTA_LIMIT=51200 # MB
CVMFS_MAX_TTL=15
EOF
Proxies within the same group are separated by a pipe character (|), while groups are separated from each other by a semicolon character (;). Note that it is possible for a proxy group to consist of only one proxy. For proxies that use a DNS round-robin entry, where a single host name resolves to multiple IP addresses, CVMFS automatically transforms the name into a load-balanced group internally, so you should use the host name and a semicolon. In order to limit the number of individual proxy servers used from a round-robin DNS entry, set CVMFS_MAX_IPADDR_PER_PROXY. This can also limit the perceived “hang duration” while CernVM-FS performs fail-overs. 6
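As a sketch of this syntax (the proxy host names are placeholders): two proxies form one load-balanced group, a single backup proxy forms a second group, and DIRECT is the last resort:
CVMFS_HTTP_PROXY="http://proxy01.example.org:3128|http://proxy02.example.org:3128;http://backup-proxy.example.org:3128;DIRECT"
# optionally limit how many IP addresses of a round-robin DNS entry are used per proxy (example value)
CVMFS_MAX_IPADDR_PER_PROXY=2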
Name Service Switch
If SSSD is used to look up AutoFS maps, NSS (Name Service Switch) must be configured to resolve local automount configurations before sending a lookup to LDAP (via SSSD).
The automount database in /etc/nsswitch.conf needs the following query precedence:
automount: files sss
Changes to this configuration require a restart of SSSD and AutoFS:
>>> systemctl restart sssd.service autofs.service
# check if the mount point is mapped correctly
>>> automount -m | grep cvmfs
Mount point: /cvmfs
map: /etc/auto.cvmfs
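A minimal sketch to enforce this precedence, assuming an automount line already exists in /etc/nsswitch.conf:
# put local files before sss for the automount database
sed -i 's/^automount:.*/automount: files sss/' /etc/nsswitch.conf
systemctl restart sssd.service autofs.service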
Upgrade
Hotpatching and reloading…
- …allows reloading most of the code without unmounting the file system.
- …the currently active code is unloaded and the newly installed binaries are loaded
# ...first column indicates the client version
cvmfs_config stat
# ...list the client versions
cvmfs_talk version
# ...reload client on a repository
cvmfs_config reload $repo_name
# ...more details on the client modules
cvmfs_talk hotpatch history
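A typical client upgrade therefore looks roughly like this (sketch, using the package name from the installation section):
# upgrade the client package…
dnf upgrade -y cvmfs
# …and hot-reload the client code without unmounting the repositories
cvmfs_config reload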
Servers
Install the required packages
dnf install -y cvmfs cvmfs-server
# install an HTTP server
dnf -y install httpd \
&& systemctl enable httpd \
&& systemctl start httpd
# open the firewall (if required)
firewall-cmd --permanent --add-service=http \
&& firewall-cmd --reload
Create a Repository
Use the cvmfs_server command to create a new repository:
cvmfs_server list # list available repos
cvmfs_server mkfs $repo # create a new repository
cvmfs_server rmfs $repo # remove repository
cvmfs_server info
- Mounts the repository under /cvmfs/$repo (read/write access)
- Repository names resemble a DNS scheme (by convention)
- Globally unique name that indicates where and by whom the content is published
- Names include [A-Za-z0-9]-_. (max 60 characters)
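For example, creating and listing a repository might look like this (sketch; repository name and owner are placeholders):
# create a repository whose transactions are owned by the user "softadmin"
cvmfs_server mkfs -o softadmin sw.example.org
cvmfs_server list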
Repository Keys
Create backups of signing key files in /etc/cvmfs/keys
Repositories use two sets of keys:
/etc/cvmfs/keys/$repo.key # Repository key
/etc/cvmfs/keys/$repo.pub # Distributed to clients, verifies authenticity
/etc/cvmfs/keys/$repo.masterkey # Signs the repository key
- Signatures are only good for 30 days by default
- Run cvmfs_server resign again before they expire
- Convenient to share the master key between all repositories in a domain
- Keep the master key especially safe from being stolen
- Supports the ability to store the master key in a smartcard (Yubikey)
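A possible way to keep the whitelist signature fresh (sketch; the repository name is a placeholder):
# re-sign the 30-day whitelist before it expires
cvmfs_server resign sw.example.org
# e.g. as a weekly cron job in /etc/cron.d/cvmfs-resign:
# 0 3 * * 1 root /usr/bin/cvmfs_server resign sw.example.org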
Configuration
/etc/cvmfs/repositories.d/$repo/server.conf # Server configuration file
/etc/cvmfs/repositories.d/$repo/client.conf # Client configuration file
/srv/cvmfs/$repo # Repository storage location
/srv/cvmfs/$repo/.cvmfspublished # Manifest file of the repository
/srv/cvmfs/$repo/.cvmfswhitelist # Trusted repository certificates
/srv/cvmfs/$repo/data # Content Addressable Storage (CAS)
Internal state of the repository:
/var/spool/cvmfs/$repo # Server spool area
/var/spool/cvmfs/$repo/cache # Client cache
/var/spool/cvmfs/$repo/rdonly # Client mount point
/var/spool/cvmfs/$repo/scratch # Writable union file system
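To get a feel for these files, the manifest can be inspected directly on the server (sketch):
# the manifest is a small, mostly plain-text file describing the current root catalog
head /srv/cvmfs/$repo/.cvmfspublished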
Publishing
cvmfs_server transaction $repo
# install content
echo "foobar" > /cvmfs/$repo/foobar && rm /cvmfs/$repo/new_repository
cvmfs_server publish $repo
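If a transaction should be discarded instead of published, it can be aborted (not part of the example above):
# throw away all changes in the scratch area and close the transaction (-f skips the confirmation prompt)
cvmfs_server abort -f $repo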
Debugging
Check the log files
# typically…
grep cvmfs2 /var/log/messages
# …systemd way
SYSTEMD_LESS=FRXMK journalctl -p err _COMM=cvmfs2
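For deeper problems a verbose debug log can be enabled on the client (sketch; the log path is an arbitrary choice):
# write a verbose debug log and reload the client to activate it
echo 'CVMFS_DEBUGLOG=/tmp/cvmfs-debug.log' >> /etc/cvmfs/default.local
cvmfs_config reload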
Revisions
Each update to a CVMFS repository is indicated by a new revision…
- …in general all clients should be on the latest revision 7.
- …check the current revision with attr or cvmfs_talk:
attr -g revision /cvmfs/$repo
# …or
cvmfs_talk -i $repo revision
Check the revision of all configured CVMFS repositories on nodes:
date ; crush -b '
repos=$(cvmfs_config showconfig | grep REPOSITORY_NAME | cut -d= -f2)
for repo in $repos
do
ls /cvmfs/$repo >/dev/null
echo $repo $(cvmfs_talk -i $repo revision)
done
'
IO Errors
Users identify I/O errors by the following error message on file access:
cannot open path/to/example/file.txt: Input/output error
If possible, simply drain the node and reboot; otherwise use the following commands from top to bottom. Test if the I/O errors resolve by repeatedly accessing the file in between each step:
- cvmfs_config reload $repo helps under some circumstances
- cvmfs_talk -i $repo remount typically doesn't work
- umount /cvmfs/$repo (requires no open file handles)
- …do not use umount -fl since it will block a clean re-mount
- …remount will zero all error counters
- …sometimes it requires multiple re-mounts to resolve IO problems
- do not use cvmfs_config killall, it kills all processes accessing all repositories
- …be very careful using this in a production environment!
List IO errors on mounted CVMFS repositories:
date ; crush -B '
# get the list of mounted CVMFS repos
repos=$(df | grep ^cvmfs2 | tr -s " " | cut -d" " -f6 | cut -d/ -f3)
for repo in $repos
do
# check if IO errors are not zero
cd /cvmfs/$repo && \
err=$(cvmfs_config stat -v $repo | grep "Errors: [1-9]") && \
echo $repo $err
sleep 1
done
'
Get a list of all nodes with IO errors:
date ; crush -B '
repos=$(df | grep ^cvmfs2 | tr -s " " | cut -d" " -f6 | cut -d/ -f3)
for repo in $repos
do
cvmfs_config stat -v $repo | grep "Errors: [1-9]" >/dev/null && \
echo error
done | uniq
'
Connection Issues
If a connection to a CVMFS server via the configured proxies is having issues, error messages similar to the following will be printed to /var/log/messages:
>>> grep cvmfs /var/log/messages | grep manifest
... cvmfs2: ($repo) failed to download repository manifest (6 - proxy connection problem)
…check at scale with a command similar to:
date ; crush -b -- "
journalctl -u autofs \
| grep '$repo.*failed.*manifest' \
| cut -d' ' -f1-2,6 | sort -k1,2 | uniq -c
"
Make sure to exclude problems on the HTTP proxies and the repository servers…
- …sometimes a restart of autofs.service is required if the mount is not working…
- …at scale make sure not to restart unless necessary
findmnt -T /cvmfs/$repo \
|| cd /cvmfs/$repo \
|| systemctl restart autofs
Verify a clean mount at scale:
date ; crush -b -- "ls /cvmfs/$repo >/dev/null"
DNS Resolution
DNS resolution of the proxies is cached by CVMFS…
- …updates to a proxy IP address will be indicated in the logs
- …the client rotates automatically to the next proxy if the active instance fails
... cvmfs2: (alice.cern.ch) switching proxy from http://131.225.188.246:3126 to http://10.20.1.203:3128
... cvmfs2: (alice.cern.ch) DNS entries for proxy proxy01.example.org changed, adjusting
... cvmfs2: (alice.cern.ch) recovered from offline mode
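The proxy and server state can also be queried manually with cvmfs_talk (sketch; sub-command availability may vary with the client version):
# list the configured Stratum servers and their state
cvmfs_talk -i $repo host info
# re-probe the servers and reorder them by availability
cvmfs_talk -i $repo host probe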
Use curl to download a repository manifest 8 to test if a CVMFS server is accessible:
curl -OL http://cvmfs-server.example.org/cvmfs/$repo/.cvmfspublished
Footnotes
1. CernVM-FS Project Page, https://cernvm.cern.ch/fs
2. CernVM-FS Source Code, GitHub, https://github.com/cvmfs/cvmfs
3. CernVM-FS Documentation, https://cvmfs.readthedocs.io/en/stable
4. CernVM-FS Linux Packages, https://ecsft.cern.ch/dist/cvmfs
5. CernVM-FS Core Packages, Installation Documentation, https://cernvm.cern.ch/portal/filesystem/downloads
6. CernVM-FS Client Configuration - Proxy List, https://cvmfs.readthedocs.io/en/stable/cpt-configure.html#proxy-lists
7. CernVM-FS Repository Update Propagation, https://cvmfs.readthedocs.io/en/stable/cpt-repo.html#repository-update-propagation
8. CernVM-FS Repository Manifest (.cvmfspublished), https://cvmfs.readthedocs.io/en/stable/cpt-details.html#repository-manifest-cvmfspublished