
Sun Network File System

The architecture of Sun NFS follows the abstract model defined in
the preceding section. All implementations of NFS support the NFS protocol
– a set of remote procedure calls that provide the means for clients to
perform operations on a remote file store. The NFS protocol is operating
system–independent but was originally developed for use in networks of
UNIX systems. The NFS server module resides in the kernel on each
computer that acts as an NFS server. Requests referring to files in a remote
file system are translated by the client module to NFS protocol operations
and then passed to the NFS server module at the computer holding the
relevant file system. The NFS client and server modules communicate using
remote procedure calls. The RPC system can be configured to use either UDP
or TCP, and the NFS protocol is compatible with both. The RPC interface to the NFS
server is open: any process can send requests to an NFS server; if the
requests are valid and they include valid user credentials, they will be acted
upon.
• Virtual file system •
NFS provides access transparency: user programs can issue file
operations for local or remote files without distinction. Other distributed
file systems may be present that support UNIX system calls, and if so, they
could be integrated in the same way. The integration is achieved by a virtual
file system (VFS) module, which has been added to the UNIX kernel to
distinguish between local and remote files and to translate between the
UNIX-independent file identifiers used by NFS and the internal file
identifiers normally used in UNIX and other file systems. In addition, VFS
keeps track of the filesystems that are currently available both locally and
remotely, and it passes each request to the appropriate local system
module (the UNIX file system, the NFS client module or the service module
for another file system). The file identifiers used in NFS are called file
handles. A file handle is opaque to clients and contains whatever
information the server needs to distinguish an individual file. In UNIX
implementations of NFS, the file handle is derived from the file’s i-node
number by adding two extra fields: a filesystem identifier, which specifies
the filesystem in which the file is stored, and an i-node generation number,
which is incremented each time an i-node is reused, so that a handle
referring to a file that has since been deleted can be detected as stale.
(The i-node number of a UNIX file is a number that serves to identify and
locate the file within the file system in which the file is stored.)
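
As a rough sketch in C (the struct and field names below are our own, not
those of any particular implementation; clients treat the whole value as an
opaque block):

    #include <stdint.h>

    /* A sketch of an NFS file handle as derived from a UNIX i-node. */
    struct nfs_fhandle {
        uint32_t filesystem_id;    /* identifies the filesystem on the server */
        uint32_t inode_number;     /* locates the file within that filesystem */
        uint32_t inode_generation; /* incremented each time an i-node is
                                      reused, so that a handle to a deleted
                                      file can be detected as stale */
    };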

• Client integration •
The NFS client module plays the role described for the client module
in our architectural model, supplying an interface suitable for use by
conventional application programs. But unlike our model client module, it
emulates the semantics of the standard UNIX file system primitives
precisely and is integrated with the UNIX kernel. It is integrated with the
kernel and not supplied as a library for loading into client processes so that:
• user programs can access files via UNIX system calls without
recompilation or reloading;
• a single client module serves all of the user-level processes, with a
shared cache of recently used blocks (described below);
• the encryption key used to authenticate user IDs passed to the
server (see below) can be retained in the kernel, preventing impersonation
by user-level clients.
The NFS client module cooperates with the virtual file system in each
client machine. It operates in a similar manner to the conventional UNIX file
system, transferring blocks of files to and from the server and caching the
blocks in the local memory whenever possible. It shares the same buffer
cache that is used by the local input-output system.
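
A minimal C sketch may make this dispatch structure clearer. The types and
names below are illustrative only and do not reproduce the real Solaris or
Linux kernel interfaces:

    #include <stddef.h>

    struct vattr;   /* file attributes; details omitted in this sketch */
    struct vnode;   /* the kernel's filesystem-independent file object */

    /* Each mounted filesystem supplies a table of operations; the VFS
       routes every request through it, so local and remote files share
       the same system-call path. */
    struct vnode_ops {
        int (*read)(struct vnode *vn, void *buf, size_t len, long offset);
        int (*write)(struct vnode *vn, const void *buf, size_t len, long offset);
        int (*getattr)(struct vnode *vn, struct vattr *attrs);
    };

    struct vnode {
        const struct vnode_ops *ops; /* UNIX file system ops or NFS client
                                        ops, chosen at mount time */
        void *fs_private;            /* i-node for a local file; NFS file
                                        handle for a remote one */
    };

    /* A user-level read() arrives here unchanged whether the file is local
       or remote: access transparency comes from this single indirection. */
    static int vfs_read(struct vnode *vn, void *buf, size_t len, long offset)
    {
        return vn->ops->read(vn, buf, len, offset);
    }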

• Access control and authentication •


Unlike the conventional UNIX file system, the NFS server is stateless
and does not keep files open on behalf of its clients. So the server must
check the user’s identity against the file’s access permission attributes
afresh on each request, to see whether the user is permitted to access the
file in the manner requested. The Sun RPC protocol requires clients to send
user authentication information (for example, the conventional UNIX 16-bit
user ID and group ID) with each request and this is checked against the
access permission in the file attributes. These additional parameters are not
shown in our overview of the NFS protocol; they are supplied automatically
by the RPC system.
In its simplest form, there is a security loophole in this access-control
mechanism. An NFS server provides a conventional RPC interface at a
well-known port on each host, and any process can behave as a client, sending
requests to the server to access or update a file. The client can modify the
RPC calls to include the user ID of any user, impersonating the user without
their knowledge or permission. This security loophole has been closed by
the use of an option in the RPC protocol for the DES encryption of the user’s
authentication information. More recently, Kerberos has been integrated
with Sun NFS to provide a stronger and more comprehensive solution to the
problems of user authentication and security; we describe this below.
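
As a rough illustration of the per-request check, the following C fragment
shows credentials of the AUTH_UNIX style being tested against a file's
permission bits afresh on each call. The structure and function names are
ours, and a real server must also map remote user IDs and consider
supplementary groups:

    #include <stdbool.h>
    #include <sys/stat.h>   /* permission bits: S_IRUSR, S_IRGRP, S_IROTH */
    #include <sys/types.h>

    /* Credentials carried with every RPC request: the stateless server
       retains nothing between calls, so they arrive each time. */
    struct request_cred {
        uid_t uid;   /* conventional UNIX user ID  */
        gid_t gid;   /* conventional UNIX group ID */
    };

    /* Check a read request against the file's access permission attributes. */
    static bool may_read(const struct stat *attrs, const struct request_cred *cred)
    {
        if (cred->uid == attrs->st_uid)
            return (attrs->st_mode & S_IRUSR) != 0;  /* owner  */
        if (cred->gid == attrs->st_gid)
            return (attrs->st_mode & S_IRGRP) != 0;  /* group  */
        return (attrs->st_mode & S_IROTH) != 0;      /* others */
    }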

• NFS server interface •


A simplified representation of the RPC interface provided by NFS
version 3 servers (defined in RFC 1813 [Callaghan et al. 1995]) is shown in
Figure 12.9. The NFS file access operations read, write, getattr and setattr
are almost identical to the Read, Write, GetAttributes and SetAttributes
operations defined for our flat file service model (Figure 12.6). The lookup
operation and most of the other directory operations defined in Figure 12.9
are similar to those in our directory service model (Figure 12.7). The file and
directory operations are integrated in a single service; the creation and
insertion of file names in directories is performed by a single create
operation, which takes the text name of the new file and the file handle for
the target directory as arguments. The other NFS operations on directories
are create, remove, rename, link, symlink, readlink, mkdir, rmdir, readdir and
statfs. They resemble their UNIX counterparts with the exception of readdir,
which provides a representation-independent method for reading the
contents of directories, and statfs, which gives status information on
remote file systems.
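
To convey the flavour of the interface, here is a C-style paraphrase of
three of these operations, simplified in the manner of Figure 12.9. These
signatures are illustrative; the authoritative definitions are the XDR
specifications in RFC 1813:

    #include <stddef.h>

    typedef struct { unsigned char opaque[32]; } fhandle; /* opaque file
                                             handle; size illustrative */
    struct fattr;  /* file attributes: type, size, owner, timestamps, ... */

    /* Look up 'name' in the directory identified by dirfh, returning the
       file's handle and attributes. */
    int nfs_lookup(fhandle dirfh, const char *name,
                   fhandle *fh, struct fattr *attr);

    /* Read up to 'count' bytes starting at 'offset' in the file fh. */
    int nfs_read(fhandle fh, long offset, size_t count,
                 void *data, struct fattr *attr);

    /* Create the file and insert its name in directory dirfh in a single
       operation, as described above. */
    int nfs_create(fhandle dirfh, const char *name, const struct fattr *initattr,
                   fhandle *newfh, struct fattr *attr);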

• Mount service •
The mounting of subtrees of remote filesystems by clients is
supported by a separate mount service process that runs at user level on
each NFS server computer. On each server, there is a file with a well-known
name (/etc/exports) containing the names of local filesystems that are
available for remote mounting. An access list is associated with each
filesystem name indicating which hosts are permitted to mount the
filesystem. Clients use a modified version of the UNIX mount command to
request mounting of a remote filesystem, specifying the remote host’s
name, the pathname of a directory in the remote filesystem and the local
name with which it is to be mounted. The remote directory may be any
subtree of the required remote filesystem, enabling clients to mount any part
of the remote filesystem. The modified mount command communicates
with the mount service process on the remote host using a mount protocol.
This is an RPC protocol and includes an operation that takes a directory
pathname and returns the file handle of the specified directory if the client
has access permission for the relevant filesystem. The location (IP address
and port number) of the server and the file handle for the remote directory
are passed on to the VFS layer and the NFS client.
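
For illustration, a server's /etc/exports might contain entries like these
(the pathnames, hostnames and option syntax are invented, in the style of
modern exports(5) implementations):

    /export/home    client1.example.org(rw)  client2.example.org(ro)
    /export/docs    *.example.org(ro)

A client could then mount a subtree with a command of the form
mount -t nfs server.example.org:/export/home/alice /home/alice (again
hypothetical); the mount protocol returns the file handle for the named
directory, which the client's VFS layer records against the local mount
point for use on subsequent accesses.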

• Automounter •
The automounter was added to the UNIX implementation of NFS in
order to mount a remote directory dynamically whenever an ‘empty’ mount
point is referenced by a client. The original implementation of the
automounter ran as a user-level UNIX process in each client computer. Later
versions (called autofs) were implemented in the kernel for Solaris and
Linux. We describe the original version here.

• Server caching •
Caching in both the client and the server computer is an
indispensable feature of NFS implementations if they are to achieve
adequate performance. In conventional UNIX systems, file pages,
directories and file attributes that have been read from disk are retained in a
main memory buffer cache until the buffer space is required for other pages.
If a process then issues a read or a write request for a page that is already in
the cache, it can be satisfied without another disk access. Read-ahead
anticipates read accesses and fetches the pages following those that have
most recently been read, and delayed-write optimizes writes: when a page
has been altered (by a write request), its new contents are written to disk
only when the buffer page is required for another page. To guard against loss
of data in a system crash, the UNIX sync operation flushes altered pages to
disk every 30 seconds. These caching techniques work in a conventional
UNIX environment because all read and write requests issued by user-level
processes pass through a single cache that is implemented in the UNIX
kernel space. The cache is always kept up-to-date, and file accesses cannot
bypass the cache.
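
A minimal C sketch of the delayed-write policy, with invented names and a
hypothetical write_block_to_disk() helper standing in for the disk driver:

    #include <stdbool.h>
    #include <string.h>

    void write_block_to_disk(long block_no, const char *data); /* hypothetical
                                                          driver entry point */

    struct buffer {
        long block_no;
        bool dirty;        /* altered since last written to disk */
        char data[4096];
    };

    /* A write updates only the in-memory buffer and marks it dirty; no
       disk I/O happens yet (delayed write). */
    static void buffer_write(struct buffer *b, int off, const char *src, int len)
    {
        memcpy(b->data + off, src, len);
        b->dirty = true;
    }

    /* The new contents reach the disk only when the buffer is needed for
       another page, or when the periodic sync (every 30 seconds in
       conventional UNIX) flushes altered pages. */
    static void buffer_reclaim(struct buffer *b)
    {
        if (b->dirty)
            write_block_to_disk(b->block_no, b->data);
        b->dirty = false;
    }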
• Client caching •
The NFS client module caches the results of read, write, getattr,
lookup and readdir operations in order to reduce the number of requests
transmitted to servers. Client caching introduces the potential for different
versions of files or portions of files to exist in different client nodes, because
writes by a client do not result in the immediate updating of cached copies
of the same file in other clients. Instead, clients are responsible for polling
the server to check the currency of the cached data that they hold.
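The check is typically based on timestamps: an entry validated within a
freshness interval t is used without contacting the server, and otherwise
the file's last-modification time on the server is fetched (with getattr)
and compared with the time recorded when the entry was cached. A C sketch
of this test, with illustrative names and a hypothetical
server_getattr_mtime() RPC stub:

    #include <stdbool.h>
    #include <time.h>

    time_t server_getattr_mtime(void); /* hypothetical getattr RPC stub */

    struct cache_entry {
        time_t Tc; /* when this entry was last validated with the server   */
        time_t Tm; /* the file's modification time recorded at that moment */
    };

    /* Valid if validated recently (within the freshness interval t), or
       if a fresh getattr shows the file unchanged on the server. */
    static bool entry_is_valid(struct cache_entry *e, time_t t)
    {
        time_t now = time(NULL);
        if (now - e->Tc < t)
            return true;                      /* recently checked: trust it */
        time_t server_Tm = server_getattr_mtime();
        e->Tc = now;                          /* just revalidated */
        if (server_Tm == e->Tm)
            return true;                      /* unchanged on the server */
        e->Tm = server_Tm;
        return false;                         /* stale: refetch the data */
    }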
