NFS, the Network File System, is a common method for sharing files over a network from a Unix host. In this blog, we’ll go over how to create NFS exports (i.e., shares) on a CentOS 8 host, as well as how to mount those exports from a Linux client.
NFS exports are supported on most Linux distributions, although the specific packages required may differ by distro (e.g., CentOS/RHEL vs. Ubuntu). However, the overall workflow and methodology are the same.
To start, you need a Linux host to use as an NFS server (where you have sudo or root rights) as well as a Linux client to test mounting the export after server-side configuration.
Before creating NFS exports, you need to install some prerequisites and ensure the proper ports are open on your NFS server’s firewall (as well as any firewalls in the network). In this example, we’ll assume “firewalld” is the firewall installed on the server.
Run the following commands to open ports for nfs, mountd, and rpc-bind, followed by a reload of the firewalld service:
sudo firewall-cmd --permanent --zone=public --add-service=nfs
sudo firewall-cmd --permanent --zone=public --add-service=mountd
sudo firewall-cmd --permanent --zone=public --add-service=rpc-bind
sudo firewall-cmd --reload
Next, we need to install the NFS utilities on the server (which pull in rpcbind, the portmapper service) and then enable/start the related services:
sudo yum install -y nfs-utils
sudo systemctl enable nfs-server.service rpcbind.service
sudo systemctl start nfs-server.service rpcbind.service
With those commands successfully run, the firewall, prerequisite packages, and services on the NFS server are fully configured.
We can now start creating NFS exports. First, create a folder you’d like to export to clients (or you can use an existing folder):
sudo mkdir -p /nfs_example
Next, we’ll configure the /etc/exports file to allow the previously created folder to be exported to NFS clients. Run sudo nano /etc/exports to open the exports file in the Nano text editor, and then add the following line to export the folder:
/nfs_example <client_ip_address>(rw,sync,no_subtree_check)
The first part of the line is the path for the folder we’d like to export. After that, we specify a single client, or subnet of clients (e.g., 192.168.1.0/24), by IP address that will have access to the export. Finally, we add flags to the export such as:
rw: Allow clients to read and write to the exported folder.
sync: The server will only respond to the client write requests once the transaction has been written to disk. This protects against data loss in the event the NFS server or the network goes down mid-transaction. The alternative to this setting, async, does not protect against data loss in these scenarios (although async does offer performance gains over sync).
no_subtree_check: Subtree checking forces the NFS server to verify that a file is still available in an exported tree during each client request. While useful when only a portion of a volume is exported, in most NFS scenarios it’s best to disable subtree checking, via this flag, to improve performance.
Now that we’ve configured our exports file, we can refresh our current NFS exports with the following command on the server:
sudo exportfs -ra
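If exportfs complains about a malformed entry, a quick format check on the new line can help narrow down the typo. The following is only a sketch: the check_line helper and its regex are illustrative approximations, not the full exports(5) grammar.

```shell
#!/bin/sh
# Sketch: loosely validate an /etc/exports-style line before reloading exports.
# check_line and its regex are illustrative, not the full exports(5) grammar.
check_line() {
    echo "$1" | grep -Eq '^/[^ ]+ +[^ ]+\([a-z0-9_,=]+\)$'
}

if check_line '/nfs_example 192.168.1.0/24(rw,sync,no_subtree_check)'; then
    echo "looks ok"
else
    echo "malformed"
fi
```

The subnet above is a hypothetical example; substitute your own export path and client specification.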
To confirm our folder is now exported, run the following command to verify the server’s current NFS export list:
showmount -e localhost
>> showmount -e localhost
Export list for localhost:
/nfs_example <client_ip_address>
With the NFS export visible in the output of the showmount command, we’re now ready to mount the export on a client.
Before we mount the export, we need to install a prerequisite client-side package:
sudo yum install -y nfs-utils
Next, verify the client can communicate with the NFS server over the network and see its NFS exports:
showmount -e <nfs_server_ip_address>
Note that in each client NFS mount command, a DNS resolvable hostname can be used in place of an IP address.
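If you plan to mount by hostname, it’s worth confirming the client actually resolves that name before troubleshooting NFS itself. A minimal sketch using getent, where localhost stands in for your NFS server’s hostname:

```shell
#!/bin/sh
# Sketch: confirm the NFS server's hostname resolves before mounting by name.
# "localhost" below is a stand-in for your server's DNS-resolvable hostname.
resolves() {
    getent hosts "$1" >/dev/null
}

if resolves localhost; then
    echo "hostname resolves"
else
    echo "hostname does not resolve; fall back to the IP address"
fi
```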
Now create an empty folder to use as a mount point for the NFS export, and mount the export to it:
sudo mkdir -p /mnt/nfs_mount
sudo mount -t nfs <nfs_server_ip_address>:/nfs_example /mnt/nfs_mount
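Mounting over a directory that’s already a mount point can stack mounts confusingly, so scripts often check the mount table first. A minimal sketch, where the is_mounted helper is illustrative and the actual mount command is only echoed:

```shell
#!/bin/sh
# Sketch: only mount the export if the target directory isn't already a mount.
# is_mounted is an illustrative helper; it greps the mount table in /proc.
is_mounted() {
    grep -qs " $1 " /proc/mounts
}

if is_mounted /mnt/nfs_mount; then
    echo "/mnt/nfs_mount is already mounted"
else
    # In a real script this would run the mount itself:
    echo "would run: mount -t nfs <nfs_server_ip_address>:/nfs_example /mnt/nfs_mount"
fi
```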
Depending on which versions of NFS your server and client support, you may need to specify the NFS version in the client’s mount command. For example, the following mount command specifies NFSv3:
sudo mount -t nfs -o vers=3 <nfs_server_ip_address>:/nfs_example /mnt/nfs_mount
If you don’t receive any error messages, then you should now be able to read and write to the mounted NFS export from the client.
You can also confirm the export has been mounted using output from the client’s df -h command.
>> df -h
Filesystem Size Used Avail Use% Mounted on
<nfs_server_ip_address>:/nfs_example 36G 5.1G 31G 15% /mnt/nfs_mount
Alternatively, the mount command provides more specific info regarding the mount type than df -h:
>> mount | grep nfs
<nfs_server_ip_address>:/nfs_example on /mnt/nfs_mount type nfs (rw,...,vers=3,...)
If the mount command returns errors, you can add the -v (verbose) flag, which may output additional info to help with troubleshooting:
sudo mount -v -t nfs <nfs_server_ip_address>:/nfs_example /mnt/nfs_mount
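To make the mount persist across client reboots, a line can be added to the client’s /etc/fstab. A sketch using the same placeholder server address and paths as above; the _netdev option tells the system to wait for networking before attempting the mount:

```
<nfs_server_ip_address>:/nfs_example  /mnt/nfs_mount  nfs  defaults,_netdev  0  0
```

After editing /etc/fstab, running sudo mount -a will mount any entries that aren’t already mounted.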
Overall, configuring and mounting NFS exports is a simple process. NFSv4 can be a bit more advanced due to its inclusion of access control lists (ACLs); however, the traditional NFS workflow we’ve discussed here only takes a few minutes to get up and running.
Linux hosts are often used for some of the most critical functions in an organization’s infrastructure. With that said, monitoring NFS exports, understanding how they’re used, observing who accesses them, and verifying how they’re secured are critical workflows.
StealthAUDIT, a full-fledged data access governance solution, includes preconfigured and customizable jobs for auditing, analyzing, and reporting on Unix/Linux hosts, including specific workflows for monitoring and securing NFS exports.
From over-provisioned user access to weak passwords to high-risk open exports/shares, it only takes one compromised host or user for attackers to move laterally and escalate to admin rights in your domain.
IDENTIFY THREATS. SECURE DATA. REDUCE RISK. Find out how Stealthbits can simplify monitoring, detection, and remediation in your organization here.
Dan Piazza is a Technical Product Manager at Stealthbits, now part of Netwrix, responsible for PAM, file systems auditing and sensitive data auditing solutions. He has worked in technical roles since 2013, with a passion for cybersecurity, data protection, automation, and code. Prior to his current role he worked as a Product Manager and Systems Engineer for a data storage software company, managing and implementing both software and hardware B2B solutions.
Adopting a Data Access Governance strategy will help any organization achieve stronger security and control over their unstructured data. Use this free guide to help choose the best solution available today!