The Network File System (NFS) was developed by Sun Microsystems as a way of sharing file systems between several hosts connected by a TCP/IP network. The sharing is client-server in nature: one host exports its file system to the others. Although the serving host is the master, the client nodes can write to the file system as if it were on a physically connected local disk.

History

Sun developed this with the idea of it becoming a de facto standard for Unix file sharing - a venture in which they were completely successful. NFS was adopted by all the Unix vendors as part of their standard offerings.

Digital included it in the TCP/IP extension to the VMS operating system called UCX (which would later be renamed "TCP/IP Services for OpenVMS"). Getting a mounted, VMS file-structured volume to look like a Unix file system, and vice versa, was no mean feat. Security was not much of an issue, as VMS offers access control lists and proxy accounts.

More recently, PC software vendors have written software which connects as an NFS client and emulates a Windows drive mapping. There are several issues with this, including reliability and security. Security is an issue because PCs identify themselves by a machine name, not by a uid and gid. Solving this requires an extra daemon process (typically pcnfsd) on the NFS serving host to authenticate - translating machine names into users and groups.

The problems with running NFS clients on PCs have led to the popularity of Samba, an alternative solution which puts a daemon process on the Unix box to emulate Windows (SMB) networking.

Known problems with NFS

  • Reliability
    There are problems if the server is rebooted without rebooting the clients. Any client processes trying to access file systems via NFS will, at best, fail with I/O errors; at worst, they will hang. Network connectivity problems can cause the same results, especially if the traffic is funnelled through a single piece of hardware which then fails, such as a router.
  • Security
    Besides the PC problems listed above, there are other issues. If the passwd files on the Unix boxes are out of sync, then someone coming in on one box appears as a different user on another box, because NFS identifies users by numeric uid rather than by name (see the illustration after this list). Widespread deployment of NIS for authentication solves this problem.
  • Concurrent update
    Confusion can occur if processes on more than one box try to write to the same file at the same time. Unix normally handles this when the processes are on the same box. Also, the flock mechanism is not guaranteed to work across an NFS mount.
  • File information
    Because it is not a real local file system, it does not behave quite like one. Information on free space is limited, and symlinks tend to misbehave over NFS.
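
For example (host names, uid, and paths hypothetical), the same NFS file can appear to belong to different users on boxes whose passwd files disagree:

    boxa$ ls -ln /mnt/shared/report.txt             # numeric view: owner is uid 1001
    -rw-r--r--  1 1001  100   5120 Jun  3 10:12 /mnt/shared/report.txt

    boxa$ ls -l /mnt/shared/report.txt              # boxa's passwd maps 1001 to alice
    -rw-r--r--  1 alice users 5120 Jun  3 10:12 /mnt/shared/report.txt

    boxb$ ls -l /mnt/shared/report.txt              # boxb's passwd maps 1001 to bob
    -rw-r--r--  1 bob   users 5120 Jun  3 10:12 /mnt/shared/report.txt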

A few things to add to ponder's otherwise excellent writeup:

  • Reliability
    NFS was designed to be stateless on the server side, so that clients would be nearly immune to server reboots. When I first started doing system administration, back when X11 required 10 hours to compile, I ran the compile on an NFS client (a faster CPU than the server) from the server's disk (faster and bigger disks than the client), while reconfiguring the server. I rebooted the server a couple of times, and the compile did not get a single burp.

    So, in my view, NFS at best fails to fail and hangs the client while the server is rebooting, and at worst gets an I/O error. If you didn't want your clients to hang when the server is unavailable, you should have mounted with the intr option, which lets the user SIGINT out of an NFS disk wait.
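
    For example (server name and paths hypothetical), an interruptible hard mount looks like this, as a command or as an /etc/fstab entry:

        # Hard mount, but SIGINT can break out of a stuck NFS disk wait:
        mount -t nfs -o hard,intr server:/export/src /mnt/src

        # The equivalent /etc/fstab entry:
        server:/export/src  /mnt/src  nfs  hard,intr  0  0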

  • Security
    NFS was designed for Unix-to-Unix file sharing. Since the standard security model under Unix is that root is (mostly) trusted, if you trust the client machine to mount your disk writable at all, you trust it to do the security too. No, NFS is not particularly appropriate for Unix-to-Microsoft-Windows file sharing, as the security models are different.
  • Concurrent update
    Concurrent update over NFS works about the same as concurrent update on a local Unix disk -- i.e., unpredictably when done to the same byte(s), but otherwise not a problem. The (not stateless) lockd and statd daemons were added to assist with locking. Don't use flock -- it is obsolete; use POSIX fcntl instead, which works fine over NFS, as sketched below.
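
    As a sketch of that approach (file path hypothetical), POSIX advisory locking through fcntl looks like this in C; lockd is what makes these locks work across an NFS mount:

        /* Sketch: POSIX advisory locking via fcntl(2).  Unlike the obsolete
           flock, these locks work over NFS (serviced by lockd).
           The file path is hypothetical. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("/mnt/nfs/shared.dat", O_RDWR);
            if (fd < 0) { perror("open"); return 1; }

            struct flock fl = {0};
            fl.l_type   = F_WRLCK;   /* exclusive write lock */
            fl.l_whence = SEEK_SET;
            fl.l_start  = 0;         /* from the start of the file... */
            fl.l_len    = 0;         /* ...0 means "to end of file" */

            if (fcntl(fd, F_SETLKW, &fl) == -1) {  /* block until granted */
                perror("fcntl(F_SETLKW)");
                return 1;
            }

            /* ... read, modify, and write the file under the lock ... */

            fl.l_type = F_UNLCK;     /* release the lock */
            fcntl(fd, F_SETLK, &fl);
            close(fd);
            return 0;
        }
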
  • File information
    File information under NFS is identical to file information on UFS, except that it is cached, and therefore possibly out of date.
  • Symbolic Links
    Symbolic links under NFS work exactly like they do on a normal filesystem. However, you must realize that the filesystem view over NFS may differ from machine to machine, as the filesystem could be mounted in a different place. Absolute links (starting with a "/") might not refer to the same file on every machine; use relative links (sometimes containing "../") if you want that, but be careful not to back up beyond the root of the export. This problem is not limited to NFS -- it also occurs if you move a filesystem's mount point, for instance in a miniroot or rescue disk environment.
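
    To illustrate (paths hypothetical), suppose the server exports /export/data and a client mounts it at /mnt/data:

        server$ ln -s /export/data/v2 /export/data/current.abs   # absolute target
        server$ ln -s v2 /export/data/current.rel                # relative target

        client$ ls /mnt/data/current.abs   # dangles: the client has no /export/data
        client$ ls /mnt/data/current.rel   # fine: resolves to /mnt/data/v2
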
This is not to say that NFS is flawless or that the above are not flaws; just that most of the "flaws" mentioned above are design criteria that were intended from the start. Some real flaws in NFS are:
  • Reliability
    A more serious reliability problem is the stale NFS filehandle, which occurs either when the remote mount point is changed (and the server rebooted) or when the file in question is deleted on the server -- but what did you expect in these cases anyway?
  • Security and statelessness
    NFS filehandles were originally based on the inode number. Unix file permissions are partly based on the permissions of parent directories. Since the server is stateless, it doesn't know how the client opened the file, or even whether it still has it open, so all kinds of attacks can be based on this. Workarounds include randomizing inode numbers on the parent filesystem, encrypting them, hashing them, or serializing the filehandles. Many of these solutions require the server to keep a mapping between the generated filehandle and the real filename - so much for statelessness. Even so, several schemes have been used to fix this while still keeping the original design criterion of transparent server reboots.
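
    As an illustration only (the layout is a sketch, not any particular implementation), a traditional server might pack a filehandle roughly like this; the generation counter is the usual fix for inode reuse, making handles to deleted files go stale instead of silently reaching a new file:

        /* Illustrative sketch only: a filehandle as a small opaque blob
           that the client echoes back to the server with every request. */
        #include <stdint.h>

        struct fhandle_sketch {
            uint32_t fsid;        /* which exported filesystem */
            uint32_t inode;       /* inode number of the file */
            uint32_t generation;  /* bumped when the inode is reused, so an
                                     old handle is detected as stale rather
                                     than reaching the wrong file */
        };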
