Thursday, January 24, 2013

NFS: How to remove stale entries in 'showmount -a' ?

The showmount -a command lists all NFS clients (by IP address or hostname) along with the exports they have mounted.

If you run an NFS server, you may notice that stale entries build up in the showmount listing over time. This is because an entry remains until the client explicitly unmounts the export. If the client crashes or otherwise disappears without unmounting, its entry stays in the showmount output indefinitely. If you don't believe me, read the following excerpt from man rpc.mountd:

The rmtab File

The rpc.mountd daemon registers every successful MNT request by adding an entry to the /var/lib/nfs/rmtab file. When receiving a UMNT request from an NFS client, rpc.mountd simply removes the matching entry from /var/lib/nfs/rmtab, as long as the access control list for that export allows that sender to access the export.
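To make this concrete, here is a rough sketch of what the rmtab bookkeeping looks like. The client addresses and export path below are hypothetical, and the file is copied to /tmp so it can be inspected safely; on a live server the real file is /var/lib/nfs/rmtab, where each line has the form client:export:refcount.

```shell
# Hypothetical rmtab contents (one line per client/export pair).
# Addresses and paths are made up for illustration.
cat > /tmp/rmtab.example <<'EOF'
192.168.1.50:/srv/nfs/data:0x00000001
192.168.1.77:/srv/nfs/data:0x00000001
EOF

# List the clients currently recorded for the export --
# this is essentially what feeds 'showmount -a':
cut -d: -f1 /tmp/rmtab.example
```

If 192.168.1.77 crashed and never sent a UMNT request, its line would simply stay in the file.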

Clients can discover the list of file systems an NFS server is currently exporting, or the list of other clients that have mounted its exports, by using the showmount(8) command. showmount(8) uses other procedures in the NFS MOUNT protocol to report information about the server's exported file systems.

Note, however, that there is little to guarantee that the contents of /var/lib/nfs/rmtab are accurate. A client may continue accessing an export even after invoking UMNT. If the client reboots without sending a UMNT request, stale entries remain for that client in /var/lib/nfs/rmtab.

Removing these stale entries is easy: just clean out the rmtab file. In recent versions of SUSE Linux Enterprise Server (e.g. SLES11-SP2), the file is located at /var/lib/nfs/rmtab. If you can't find it there, consult your vendor's NFS server documentation. (For example, on AIX the file is located at /etc/rmtab.)

Edit the file, remove the stale entries, and then run showmount -a again. You may need to restart the NFS server for the change to take effect; on SLES11-SP2 this wasn't necessary.
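The cleanup can be sketched as below. This works on a scratch copy in /tmp with a made-up stale client (192.168.1.77) so nothing here touches a live system; on a real server you would apply the same backup-then-filter steps to /var/lib/nfs/rmtab itself, as root.

```shell
# Scratch copy standing in for /var/lib/nfs/rmtab (contents hypothetical):
RMTAB=/tmp/rmtab.demo
cat > "$RMTAB" <<'EOF'
192.168.1.50:/srv/nfs/data:0x00000001
192.168.1.77:/srv/nfs/data:0x00000001
EOF

# Always keep a backup before editing:
cp "$RMTAB" "$RMTAB.bak"

# Drop every entry belonging to the stale client:
grep -v '^192.168.1.77:' "$RMTAB.bak" > "$RMTAB"

# Only the live client's entry remains:
cat "$RMTAB"
```

Afterwards, re-run showmount -a to confirm the stale client is gone, restarting the NFS server if the old listing persists.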


  1. Great Article, this was exactly what I was looking for and explains what I am seeing on our data domains


  2. Fixed an issue and got us out of a jam - thanks!