Andrew File System


AFS in the OSI layer model
Application      AFS file service, AFS volserver, VLDB, PTDB, BDB, UBIK
Session          Rx
Transport        UDP
Network          IP
Network access   Ethernet, Token Ring, FDDI, ...

The Andrew File System (AFS) is a network protocol for distributed network file systems. It enables horizontal scalability of secondary storage and integrates client-side caching. Compared to classic network file systems such as NFS, it has the advantage that secondary storage expansions and server migrations are completely transparent from the user's point of view. This is achieved by an additional abstraction layer between the file namespace and the data objects of the AFS.

Concept

In addition to file sharing, the AFS protocol also includes protocols and databases for name resolution of users and groups. Using the operating system's own user and group namespaces is not intended, either on the AFS client or on the server side. As a rule, an administrator will synchronize the AFS users and groups against a central directory. Authentication takes place using "tokens" derived from Kerberos tickets.
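
On an OpenAFS client the usual login sequence looks roughly as follows; a minimal sketch, assuming the Kerberos realm MEINE.ZELLE and the user bert that are used as examples later in this article:

kinit bert@MEINE.ZELLE   # obtain a Kerberos ticket
aklog                    # derive an AFS token from the Kerberos ticket
tokens                   # show the AFS tokens held by the current session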

The various functions required for AFS (client, file server, database server) run in separate processes, usually on separate machines.

A local cache on AFS clients with guaranteed integrity reduces the load on file servers and the latency on clients. AFS is suitable for LAN and WAN operation. A network-wide cache consistency guarantee is built into the protocol. Authentication is performed on the server side. The AFS file namespace is "multi-user capable" on the client side: all users can use the same paths (as with NFS), but act on the server side with their correct rights. Access rights are defined via ACLs, but only per directory. AFS makes it possible to set up a centrally administered, uniform file namespace on all clients of a cell with little effort. AFS servers usually run under Linux, Solaris or AIX, but other Unix variants are supported as server platforms. All currently available AFS server processes run in user space.

There are various programs that implement AFS as a protocol. AFS clients are available for a variety of operating systems, usually free of license fees. High-performance AFS servers are available free of charge for Linux and other Unix operating systems. AFS servers with special functions are available commercially.

AFS supports manually triggered data replication. It is not practical in AFS to replicate data frequently (e.g. once per minute).

Structure of the AFS

Independent administrative units in AFS are called cells. A cell comprises one or more database servers and one or more file servers. AFS clients are loosely assigned to a "home cell", but are not (as is the case with Windows domain membership, for example) tied to it via a shared secret. Data partitions that contain instances of volumes reside on the file servers. Volume instances can contain symbolic links to other volumes (also in other AFS cells). These "volume mount points" are the transition points between volumes in the file namespace. One volume (usually root.afs) is mounted by the AFS client at a defined location (/afs under Unix) in the file system and forms the root of this AFS namespace; cycles in the directory structure are also possible thanks to the symbolic links.
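
Volume mount points are created and inspected with the fs command-line tool; a hedged sketch, assuming a cell named meine.zelle and a volume named software.2:

fs lsmount /afs/meine.zelle/software               # show which volume is mounted at this point
fs mkmount /afs/meine.zelle/software2 software.2   # create a new mount point for the volume software.2
fs rmmount /afs/meine.zelle/software2              # remove the mount point again (the volume itself is untouched)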

Cells

There are numerous AFS cells around the world - especially in larger institutions such as universities. Cells are managed independently and can also be public. Public cells have the following properties:

  • All AFS database servers and AFS file servers have public IP addresses
  • The cell's database servers must be made public (either through an entry in a special file shipped with OpenAFS or through publication in the DNS ).

Cells can also trust one another, which allows users of one cell to be granted rights in ACLs of AFS directories of another cell. This trust is realized through the corresponding Kerberos mechanisms.

Volumes

The term volume in the context of AFS stands for two things:

  • An entry in the VLDB (Volume Database) that points to different instances of a volume on one or more file servers of the same AFS cell.
  • An object on a file server that contains directories, files, and references to other volumes. To distinguish it better, this article uses the term volume instance or instance for such an object.

Volumes and volume instances are managed only by the administrator. They have an adjustable maximum size. This works in a similar way to a quota, but applies to the volume and not to individual users. There are four types of volume instances:

RW instances
Instances that can be read and written to - e.g. users' home directories. Such an instance usually exists for each volume; the other instance types are optional.
RO instances
Read-only copies of RW instances, created manually. Several RO instances can be created from each RW instance and distributed across different file servers. Such instances are used for data that rarely changes - e.g. software directories or structural directories that contain, for example, the user home directories. AFS clients find a working RO instance on their own via the VLDB. The existence of several RO instances makes the data of a volume redundant; AFS clients make this redundancy transparent to users. The administrator can manually arrange for the current state of the corresponding RW instance to be replicated into all RO instances of the same volume (see the command sketch after this list).
Backup instances
This instance type always resides on the same data partition as the assigned RW instance; it is stored differentially relative to the RW instance (copy-on-write), which is why such an instance cannot replace a physical backup.
Temporary clones
Such instances are created, for example, when volumes are moved between file servers. Without such temporary clones, write access to the RW instance would have to be blocked for the sake of data integrity for as long as the corresponding operation is running. These instances are used only internally by AFS.
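
A typical replication workflow with the vos administration tool might look as follows; a sketch in which the server names fs1.meine.zelle and fs2.meine.zelle, the partitions and the volume name are placeholders:

vos create fs1.meine.zelle /vicepa software    # create the RW instance on partition /vicepa
vos addsite fs1.meine.zelle /vicepa software   # register an RO site on the same server
vos addsite fs2.meine.zelle /vicepb software   # register a second RO site on another file server
vos release software                           # replicate the current RW state into all RO instances
vos backup software                            # create or refresh the copy-on-write backup instance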

The file server keeps statistics for all volume instances, in which accesses are recorded according to read / write, local network / other network and some other criteria. OpenAFS file servers also have a mode to output extensive logging information about accesses to instances - optionally directly to other programs (via pipe).

File server

AFS file servers contain one or more data partitions, which in turn contain volume instances. In principle, the AFS network protocol does not prescribe the format in which the volumes are stored on the data partitions. What all AFS implementations have in common, however, is that the file structure of the AFS namespace cannot be recognized by looking directly at a partition on the file server.

It is therefore also not possible to export the data partitions directly via another file sharing protocol.

RW instances can be moved between servers during productive operation - read and write access to the instance's data remains possible. This allows file servers to be maintained without losing access to the data stored on them.
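
Such a move is a single vos command; a hedged sketch with placeholder server and partition names:

vos move home.bert fs1.meine.zelle /vicepa fs2.meine.zelle /vicepb
# syntax: vos move <volume> <source server> <source partition> <target server> <target partition>;
# read and write access to home.bert remains possible while the move is running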

In today's most widely used AFS implementation (OpenAFS), the file server consists of several processes (some of which consist of several threads ):

  • fileserver - this serves the requests from AFS clients for files in the AFS namespace.
  • volserver - this server process is mainly used by administrators. It provides functions that affect entire volume instances (e.g. cloning volumes, switching volumes on or off, sending volumes over the network, ...)
  • salvager - the salvager checks and repairs AFS's own administrative structures on the data partitions of a file server. This is necessary after a crash, for example (and then happens automatically), to ensure the consistency of the stored data.

Since AFS is just a protocol, a file server can also, for example, hide a tape robot that stores AFS files on tertiary storage media (e.g. MR-AFS).

File servers can have multiple IP addresses. AFS clients simply switch to the next one when one file server network interface fails. For this reason, clients regularly test the reachability of all file server network interfaces they are dealing with.

Database server

The database servers are networked with one another and manage two or more databases. The following are mandatory:

  • PTDB (Protection DataBase) - manages the users of the cell and user groups. A special feature is that users can, to a certain extent, create and edit groups themselves and use them in ACLs in AFS (see the command sketch after this list). Note: this database is not a directory service for user data such as home directories, e-mail addresses or passwords.
  • VLDB (Volume DataBase) - keeps track of the volumes (see the section on volumes) on the file servers. It also stores the list of IP addresses assigned to each file server.
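
Self-service group management uses the pts tool; a minimal sketch, assuming the users bert and ernie, a directory ~/shared and the example cell meine.zelle used elsewhere in this article:

pts creategroup bert:friends                  # a user-owned group (prefix "bert:")
pts adduser -user ernie -group bert:friends   # add a member
pts membership bert:friends                   # list the members
fs setacl -dir /afs/meine.zelle/home/bert/shared -acl bert:friends rl   # grant the group read access via an ACL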

The following databases are also common:

  • BDB (Backup DataBase) - manages the tapes that special AFS server processes write to as part of backups.
  • KDB (Kerberos DataBase) - this database manages user passwords (more precisely, Kerberos keys). The protocol used between the AFS client and the KDB server is, however, a predecessor of the now outdated Kerberos v4 protocol. Newly established cells today usually use a Kerberos-v5-based server that is operated independently of the AFS databases.

Each database is managed by one process per database server. The UBIK protocol is used for this coordination: write access to the AFS databases remains possible as long as more than half of the database servers can be reached via the network; for read access, a single reachable database server suffices. With 5 database servers, for example, one could be migrated to a new machine and the failure of a second one would still not cost write access. When the failed database servers are online again, they automatically resynchronize their data with one another.

The synchronization mechanism requires the internal clocks of the database servers to be closely synchronized. If the clocks of any two database servers differ by more than 10 s, the database blocks write access.

Database servers are the only machines an AFS client needs to know in order to access a given cell. This can be done either via a local file (CellServDB) or via the Domain Name System (via the AFSDB resource record).
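
Both mechanisms are simple text entries; a hedged sketch using the example cell meine.zelle, placeholder host names and addresses from the documentation range 192.0.2.0/24:

# excerpt from a CellServDB file (one line per database server):
>meine.zelle        #Example cell
192.0.2.10          #db1.meine.zelle
192.0.2.11          #db2.meine.zelle
192.0.2.12          #db3.meine.zelle

# the equivalent announcement in the DNS (zone file notation):
meine.zelle.  IN  AFSDB  1  db1.meine.zelle.
meine.zelle.  IN  AFSDB  1  db2.meine.zelle.
meine.zelle.  IN  AFSDB  1  db3.meine.zelle.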

Other server processes

The bosserver runs on all AFS servers. Similar to the init process on Unix systems, it manages a list of processes that have to run on a server. The running processes then identify an AFS server as a database server, a file server or both (not recommended). This list and a few other things can be managed over the network.
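
Querying and controlling the bosserver is done with the bos tool; a minimal sketch in which the server name is a placeholder:

bos status fs1.meine.zelle -long   # list the process instances supervised by the bosserver
bos restart fs1.meine.zelle -all   # restart all supervised server processes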

In some AFS cells, so-called update servers and update clients are used, which update other server software (e.g. file server processes) if necessary.

A so-called butc runs on AFS tape controllers (i.e. AFS backup servers) in order to receive data from file servers and store it on tape or on hard disks.

Network protocol

Nowadays AFS works exclusively over UDP, but with Rx there is an abstraction layer that in principle also allows other transports such as TCP - there are plans to implement exactly this for OpenAFS.

In authenticated mode (i.e. whenever a user has logged in), the Rx protocol always signs traffic and usually also encrypts it. This applies, for example, to transfers between the AFS client and the AFS file server.

AFS is very sensitive to firewalls. The following (UDP) ports must be open between servers and clients as well as between the servers themselves:

  • For AFS in general: 7000, 7001, 7002, 7003, 7005, 7007
  • If the AFS backup system is used, then additionally: 7021, 7025–7032
  • If Kerberos5 is used, then additionally: 88

Apart from currently unknown security vulnerabilities, all these ports are considered secure and can therefore also be reached via the Internet.
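
As an illustration, opening these ports on a Linux server with iptables might look as follows; a sketch, not a complete firewall policy:

# general AFS ports (UDP) listed above:
iptables -A INPUT -p udp -m multiport --dports 7000,7001,7002,7003,7005,7007 -j ACCEPT
# Kerberos v5, if used:
iptables -A INPUT -p udp --dport 88 -j ACCEPT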

AFS works with fixed port numbers and therefore has no problems with common NAT routers.

Security

The security of AFS rests on the fact that every AFS server (database and file servers) holds a symmetric key (shared secret) that is uniform across the cell. This key is also known to the Kerberos server and can therefore be used to reliably authenticate users. The key is 56 bits wide and therefore no longer state of the art.

Data transfers are signed with a session key that is also 56 bits wide and, if required, encrypted with AFS's own algorithm called fcrypt.

In the case of anonymous access to the AFS (i.e. whenever a user does not have an AFS token), there is no way for the client to securely authenticate the file server, which means that neither the integrity nor the confidentiality of data transfers can be guaranteed.

Weaknesses

If a file server is compromised and the cell key falls into the hands of an attacker, the attacker can act with superuser rights on all file servers, read the data of all users and also modify it. DFS, the "former successor" of AFS, addressed this problem; for AFS there is still no solution in sight.

The narrow key width is also a problem and makes brute-force attacks possible. Because session keys are used, the risk is still comparatively low and cannot be compared with the weakness of WEP, for example.

The missing integrity check for anonymous accesses is a critical weak point, since the most common AFS client variant, OpenAFS, uses a shared cache. Files fetched anonymously from the file server are therefore also returned to logged-in AFS users when they access them. If no countermeasures are taken, an attacker can bypass the integrity check for logged-in users with little effort. This vulnerability is not critical for single-user machines on which users only work authenticated; multi-user systems, however, are particularly at risk. No practical attack is currently known.

Countermeasures

The following organizational measures must be taken against the problem of the cell-wide uniform key:

  • Harden AFS servers rigorously and run only the most essential services on them
  • Keep all AFS servers in locked rooms and restrict access to those responsible for the server
  • Store AFS keys in an encrypted file system. The security gain of this measure has decreased significantly due to more recent findings about possible physical attacks on DRAM modules

Only a new implementation of the security layer of the RPC protocol used (Rx) can help against the narrow key width. There are companies that offer AFS programming services and address such problems for a fee. Regular key changes reduce the risk of successful brute-force attacks.

To rule out the described attacks against the integrity of transmitted data, anonymous AFS access must be prevented on the respective client. This is practicable only on machines to which normal users have no authenticated access (shell accounts, FTP, WebDAV, ...). On such a computer, all services must always work with a token; cron jobs must not be forgotten either.
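
One common approach for services and cron jobs - an assumption here, since it is not part of AFS itself - is the k5start utility from the kstart package, which obtains a Kerberos ticket from a keytab and runs aklog to acquire an AFS token; the keytab path and job name below are placeholders:

# crontab entry: run a nightly job with a fresh AFS token derived from a service keytab
0 3 * * *  k5start -q -f /etc/keytabs/afs-job.keytab -U -t -- /usr/local/sbin/nightly-afs-job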

File system semantics

To keep things simple, the file namespace in AFS is usually built up by the administrator without cycles. However, this cannot be guaranteed as soon as users are given the right to create volume mount points or to change rights. This can be a problem for backup software, for example.

The file system recognizes three object types:

  • Directories - these contain files, other directories and mount points, plus an ACL that regulates access rights.
  • Files - in modern AFS cells (e.g. from OpenAFS 1.4), files can be larger than 2 GB if client and server support this. They have exactly one data stream and the common Unix metadata such as user ID and group ID; the Unix permission bits, however, are not used for authorization. Multiple hard links to a file can exist, but only within the same directory.
  • Symbolic links - these work as one is used to from Unix. Links whose target has a special form are interpreted by the AFS client as volume mount points; the contents of the root directory of another volume are then mounted in their place.

The administrator of a cell defines the namespace by linking volumes together in a well-structured manner. Starting from the standard volume root.cell, one reaches, for example, volumes such as homedirectories, software, projects and temp. In the homedirectories volume, further volumes named home.ernie, home.bert, ... are mounted. The path to Bert's home directory then looks like this, for example:

/afs/meine.zelle/home/bert
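
Creating such a home directory is a matter of a few commands; a sketch in which the file server, partition and quota are placeholders:

vos create fs1.meine.zelle /vicepa home.bert -maxquota 5000000   # create the RW volume; quota in 1-KB blocks
fs mkmount /afs/meine.zelle/home/bert home.bert                  # create the volume mount point "bert" in the homedirectories volume
fs setacl -dir /afs/meine.zelle/home/bert -acl bert all          # give the user full rights on his home directory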

Hints:

  • The path to a directory or file says nothing about the file server that is being accessed. The same applies to mount points.
  • The volumes traversed along a path are not necessarily apparent from the path itself, but they can be determined, for example, from the volume mount points.

Under operating systems that do not know the concept of symbolic links (e.g. Windows), these appear as directories in the AFS file namespace. Newer Windows clients contain extensions to represent such links as junction points, plus shell extensions to deal with them.

The AFS protocol supports network-wide file locks, but only advisory locks (flock()), not byte-range locks (lockf()). The OpenAFS Windows client from version 1.5.10 can emulate byte-range locks locally: local applications on the client machine can use such locks, but the AFS client uses simple advisory locks to lock the corresponding files on the file server.
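
Advisory locks can be taken with the normal Unix mechanisms and are visible network-wide; a hedged sketch using the flock(1) utility on a file in AFS (path and command are placeholders):

flock --nonblock /afs/meine.zelle/home/bert/job.lock -c run-batch-job
# flock() takes an advisory lock that other AFS clients also see;
# byte-range locks via lockf()/fcntl() are not propagated to the file server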

The free or used space reported for the mounted AFS (under Unix) is a fantasy figure. In a distributed network file system, free or used space can in principle only be determined per directory. The Windows client is able to report the free space per directory back to applications.

AFS clients

AFS clients are computers (e.g. workstations) that can access the AFS file namespace. A kernel extension is necessary for this under Unix operating systems. This is done either via a generic file system driver such as FUSE (Arla AFS client) or via a more comprehensive AFS-specific kernel module (OpenAFS). In both cases, additional userspace processes are required to support the kernel drivers. The OpenAFS Windows client is based on a redirector developed for AFS that cooperates with a Windows service.

Cache

AFS clients (also called cache managers) are able to cache large amounts of data from file servers; not entire files but chunks of adjustable size are stored. The optimal size of such a cache depends on the usage pattern and can be many gigabytes.

Cache integrity is guaranteed by AFS. A file fragment stored by the cache manager is valid until the corresponding AFS server actively invalidates it. For RW instances this happens, for example, when the corresponding file is modified by another AFS client; for RO instances, for example, when the administrator triggers replication.

Only read operations are actively cached. Write accesses are buffered as well, but when a file opened for writing is closed, the close() call blocks until all data has been written to the file server.

With OpenAFS for Unix, the cache is persistent. After a restart, cache integrity is re-established by comparing the modification time stamps of files with the file server. Because the cache is persistent, using huge caches to increase speed also makes sense in local networks.

Under Windows, the cache consists of a single file that is accessed via memory mapping. The maximum size of the virtual address space (4 GB on a 32-bit system) is therefore an insurmountable limit for the cache size.

Under OpenAFS for Unix systems, the cache consists of many files in a directory. Increased demands are made on the file system in which this directory resides.

OpenAFS also allows the use of main memory (RAM) instead of a directory on the hard disk for the cache (option afsd -memcache).
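
On Unix clients the cache is configured in the cacheinfo file and via afsd options; a hedged example in which the cache directory and the sizes are placeholders:

# cacheinfo file (location varies by distribution, e.g. /usr/vice/etc/cacheinfo):
# AFS mount point : cache directory : cache size in 1-KB blocks
/afs:/var/cache/openafs:2000000

# alternatively, start the cache manager with a RAM cache instead of a disk cache:
afsd -memcache -blocks 200000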

Supported Platforms

AFS is supported on many platforms. This is easier to achieve for AFS server processes than for AFS clients, since the servers require no kernel extensions. There are various projects that implement the AFS protocol in whole or in part - here is a non-exhaustive list:

Platform / implementation   OpenAFS client | OpenAFS server | Arla (client) | MR-AFS (server) | Hostafs (server) | kAFS (client)
Linux, kernel 2.4           1.2.11  V  ?  ?  E.
Linux, kernel 2.6           V  V  V  V (*)  E.  V (*)
Windows 3.11, 3.1 and 3.0
Windows 98, ME              1.2.2b
Windows 2000, XP            V  V  E.
Windows Vista               V  V  ?
macOS 10.1                  1.2.7  1.2.7  0.35.9
macOS 10.2                  1.2.10  1.2.10  0.35.10
macOS 10.3 (Panther)        1.4.1  1.4.1  0.37
macOS 10.4 (Tiger)          V  V  V
macOS 10.5 (Leopard)        V  V  V
Solaris before 2.0          ?  ?  ?  ?
Solaris 2.0-2.6             V (*)  V  ?  V (*)
Solaris from 2.6            V  V  E.  V (*)
FreeBSD 5.x                 V
FreeBSD 6.x
NetBSD                      V
OpenBSD 4.8                 V  V  V
AIX 5.1, 5.2, 5.3           V  V
AIX 6.1
SGI Irix 6.5                ?  ?
HP-UX 11i                   ?  ?

Legend:

Symbol   Meaning
V        The corresponding port is actively maintained and further developed. The use of the corresponding AFS implementation on this platform is therefore generally recommended.
E.       This port is experimental and not recommended for productive use.
version  This port was actively maintained in the past, but there are no longer any current packages. The last available version number is given.
?        This port is officially supported, but no more detailed information about its quality is available.
(*) Be sure to read the section on the relevant implementation!

For AFS servers, the recommended platforms should be used whenever possible. There are, for example, experimental AFS server versions for Windows and old AFS server versions for IRIX, but these are neither officially supported nor free of errors.

Transarc AFS and its successors include an NFS server (translator) in the client, which can give platforms for which there is no AFS client access to the AFS namespace via NFS. However, it is only known for Solaris that a current OpenAFS client still supports this. In principle, any server process running in user space (e.g. Samba, a userspace NFS server, WebDAV, ...) should be able to re-export files from AFS without problems. Without special adjustments to the server software, however, only anonymous access is possible.

AFS implementations

AFS, Transarc-AFS, IBM-AFS

AFS was originally a university project at Carnegie Mellon University and comprised a client and a server implementation. It was later marketed by the Transarc company under the name Transarc AFS. Transarc was bought by IBM, and AFS was then marketed under the name IBM AFS. In 2000, IBM released AFS under an open source license (IBM Public License); it has been called OpenAFS ever since and is actively developed. Numerous Transarc and IBM AFS servers are, however, still in use worldwide.

OpenAFS

OpenAFS is the most actively maintained AFS implementation.

The main focus of OpenAFS development is currently on the servers.

Basically, the AFS server depends only slightly on the host operating system. For example, it should be possible to compile and run the server (which usually works entirely in user space) even on an older version of Linux. Exceptions are server versions that make special modifications to the host file system (so-called inode servers); these require additional kernel modules and are practically no longer used for new AFS installations.

Supported client platforms are:

  • Linux . Since the kernel module required for the client is open source and does not require kernel patches, it can be compiled for any Linux distribution .
  • Windows 2000 and Windows XP
  • macOS 10.4 (Tiger)
  • AIX
  • Solaris. Warning: OpenAFS client support for Solaris prior to 2.6 will be removed from the OpenAFS development branch; OpenAFS 1.4, however, continues to support Solaris 2.0 and higher.

Clients for older platforms - e.g. older Windows versions - can be found among the old OpenAFS releases.

DCE/DFS

As part of the DCE standard, the distributed file system DFS was developed as the successor to AFS. It offers, among other things, the following advantages:

  • One secret key per server, not per cell as with AFS
  • ACLs per file and not just per directory

Despite its backward compatibility with AFS, DFS was unsuccessful because its use was tied to high license fees.

Arla

The Arla project came about at a time when there was no free AFS implementation and Transarc AFS required license payments. It was developed at KTH as open source software, independently of the "AFS mainstream" (AFS ... OpenAFS). So far there is only a client implementation, but it covers some platforms not supported by OpenAFS.

MR-AFS

MR-AFS (Multi-Resident AFS) was developed as a commercial evolution of Transarc AFS. MR-AFS's strength is that its file server is able to migrate files from the AFS namespace to tertiary storage (tapes, optical media, ...). The file servers write to an HSM file system and leave the actual migration decisions to the HSM software. Normal AFS clients can exchange data with MR-AFS servers. MR-AFS consists exclusively of server software. MR-AFS-specific functions are, for example, built into the OpenAFS command line tools. The future of MR-AFS is uncertain, as its only developer has already retired.

Hostafs

Hostafs is a small AFS server implementation that aims to disguise normal directories as volumes and share them via AFS. In this way, CD-ROMs, for example, can be made available in AFS. However, Hostafs does not provide any access protection mechanisms such as ACLs - all shares are readable by everyone.

kAFS

This AFS implementation consists of a client implemented as a Linux kernel module; it is part of the standard Linux kernel. However, the client is not intended for productive AFS operation, but rather, for example, for booting over the network when the administrator really wants to keep everything in AFS. It has no way of performing authenticated access to AFS, supports only read access and only talks to file servers. The latter means that the file server to be used must be specified explicitly - the module cannot ask the vlserver for it.

YFS

Due to dissatisfaction with the organizational mechanisms governing the further development of the AFS protocol, some OpenAFS developers are working on a commercial fork of OpenAFS called YFS. This fork can handle both the AFS protocol and the massively improved YFS protocol. There is currently no official release (as of January 2013).

A look into the future

At the Rechenzentrum Garching, an AFS server with OSD (object storage) support is under development, together with corresponding client modifications that are being incorporated into the OpenAFS client. The metadata (access rights, timestamps, directory structures) are still managed by the AFS server, but the data reside on so-called object storage servers, with which the client then communicates directly. In this way, files can, for example, be striped across several servers and be read and written much faster.

Restrictions, limits

  • Due to the callback principle (clients are actively informed about changes on the server), AFS cannot work reliably across NAT routers. As a rule of thumb, "there must be no NAT router in between" has to hold for every possible pair of computers in an AFS cell - from version 1.4.1, OpenAFS copes better with IP NAT.
  • AFS works exclusively with IPv4 . Support for IPv6 would require changes to the schemas of the AFS database as well as to the RPCs of the database servers.
  • The AFS client is not designed for extremely large amounts of data. This is due to the organization of the cache manager, which can manage file chunks of exorbitant size, but cannot efficiently manage very many of them, regardless of their size. This restriction only applies to OpenAFS clients prior to version 1.4.0.
  • Under Unix operating systems, the widespread OpenAFS client uses the GIDs (Unix group IDs) 0x7f00 to 0xbf00. Using these groups for other purposes is a security risk.
  • AFS does not support network-wide byte range locks. The OpenAFS Windows client simulates byte range locks locally. A similar function will soon also be available for the OpenAFS Linux client.
  • Each computer can be a server and/or client for exactly one AFS cell. It is not possible to serve several AFS cells from one AFS server in the way a WWW server serves several sites; of course, nothing speaks against server virtualization. Clients can exchange data with any number of cells at the same time, regardless of their home cell.
  • Only the object types directory, file, symbolic link and volume mount point (a special form of symbolic link) exist in the AFS namespace. Pipes, device files and sockets are not supported.
  • A maximum of 254 file servers are allowed per cell.
  • 255 data partitions are supported per file server.
  • The block size in the AFS is 1 kbyte and cannot be changed.
  • Per data partition, 4 tebibytes (2^32 blocks × block size) can be used without problems under OpenAFS file servers. Some RPCs of the file server return invalid values if this limit is exceeded; from file server version 1.6.2 onwards, this is no longer a problem for regular users.
  • Volumes can have a maximum size of 2^31−1 blocks (around 2 tebibytes). This restriction is of little consequence, because the goal should always be to keep volumes easily movable - i.e. small. Since OpenAFS 1.4.0, larger volumes are also possible, but the maximum quota that can be set is still 4 TiB.
  • Volume names can be a maximum of 22 characters long (not including instance extensions such as .readonly and .backup).
  • AFS directories are static data structures with a maximum capacity of 64435 entries (dentries). The number of possible entries is reduced if one or more entries have names longer than 15 characters.
  • Each ACL (positive and negative ACLs independent of each other) of a directory can have a maximum of 20 entries.
  • AFS does not offer automatic replication. Data is written to the RW instance and possibly copied later - manually or script-controlled - into the RO instances. There can only ever be one RW instance per volume.

Various programs run into problems with these restrictions when they are executed in AFS.

Other restrictions

  • AFS is not suitable for storing databases.
  • AFS is not suitable as a mail server backend. There are examples of AFS cells in which mail is delivered directly into users' home directories, but this is technically demanding. In addition, such solutions scale poorly with many users, and the benefit is minimal.

Administration effort

Setup vs. operation

Setting up an AFS cell is much more difficult than, for example, creating an SMB share or an NFS export. The cryptographic protection of authentication using Kerberos requires a certain amount of effort that is independent of the size of the cell. In addition, the design of the cell takes time.

AFS shows its advantages in the following situations:

  • when scalability is important (keyword: exponential growth of the data set)
  • when the data set is already extremely large. AFS cells with hundreds of terabytes are no problem.
  • when security is more important.
  • when users need a high degree of flexibility in assigning rights
  • when a lot needs to be automated. AFS can be completely controlled via command line tools.
  • when cross-platform access to data is mandatory. AFS covers the Unix, macOS and Windows platforms.

Once the AFS cell is up and running, the work of the AFS administrator is limited to upgrading and, if necessary, replacing servers. The administration effort is then extremely low in relation to the number of users and the amount of storage. Cells with many terabytes of data and several thousand users can, under certain circumstances, be run with a single administrator position.

Overhead for normal users

The effort for users should not be underestimated: per-directory ACLs are unfamiliar, and ACLs in general are a concept that is only slowly gaining ground, especially in the Unix world.

It has proven to be a sensible strategy to provide AFS home directories with certain standard paths that express the level of protection (e.g. ~/public, ~/secret) and thus, apart from exceptional cases, to keep users away from ACLs.
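
Such standard paths can be prepared once by the administrator or a script; a hedged sketch for the home directory used as an example earlier in this article:

fs setacl -dir /afs/meine.zelle/home/bert/public -acl system:anyuser rl    # world-readable
fs setacl -dir /afs/meine.zelle/home/bert/secret -acl system:anyuser none  # remove all rights for anonymous users
fs listacl /afs/meine.zelle/home/bert/secret                               # show the resulting ACL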

However, since user documentation should not be abstract, an AFS administrator will usually write their own for their cell and take local peculiarities into account.

Backup

Many manufacturers of backup solutions do not support AFS. The reasons for this are varied. However, a custom backup solution can be put together comparatively quickly in the form of a few shell scripts.
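
A minimal sketch of such a script, in which the volume prefix, the dump date and the target directory are assumptions:

#!/bin/sh
# refresh the copy-on-write backup instances of all volumes whose names start with "home.":
vos backupsys -prefix home.
# dump the changes of one backup instance since a given date into a file:
vos dump -id home.bert.backup -time "01/15/2013" -file /backup/home.bert.dump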

