
Access denied error in NFS for root account

Learn how to resolve access denied issues on NFS mount points. Understand how root access is limited in NFS and when the no_root_squash option should be used.


Current setup

An access denied error appears on NFS share mount points when you attempt to create a file or directory, even though the rw option is set while exporting.

I have a directory named mydata that is exported from the NFS server. My /etc/exports file looks like this –

root@kerneltalks # cat /etc/exports
/mydata     10.0.2.34(rw,sync)

I mounted it on the NFS client client1 successfully. I am able to read all data within this directory from the NFS client.

root@client1 # mount kerneltalks:/mydata /nfs_data
root@client1 # ls -lrt /nfs_data

Issue

I am not able to create a file or directory in the NFS mount even though the rw option is set. I tried creating files and directories, and I get an access denied error.

root@client1 # cd /nfs_data

root@client1 # touch testfile
touch: cannot touch ‘testfile’: Access denied

root@client1 # mkdir testdir
mkdir: cannot create directory ‘testdir’: Access denied

Solution

By default, NFS prevents remote root users from gaining root-level privileges on its exports. It assigns the privileges of the nfsnobody user to remotely logged-in root users. This is what happened here: even though the rw option is set, since we are accessing the mount as the root user, we are not able to write any data to the export.

This is called squashing root privileges to normal ones. It guards against accidental writing or modification of data on exports. You can also set the all_squash option, which squashes the privileges of all remote users, including root, to the normal user nfsnobody.
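If you want to check which squash option is currently in effect, exportfs -v on the server lists each export with its active options. The output below is a representative sketch (the exact flags vary by distro and NFS version); with the export above it shows the default root_squash:

root@kerneltalks # exportfs -v
/mydata         10.0.2.34(rw,sync,wdelay,root_squash,no_subtree_check)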

For our issue, we have to set the no_root_squash option on the export so that the remote root user retains root privileges and is able to write to the exported directory.

I changed my /etc/exports as below:

root@kerneltalks # cat /etc/exports
/mydata     10.0.2.34(rw,sync,no_root_squash)

I re-exported the directory using exportfs. Re-exporting does not require clients to un-mount the exported directories. It also avoids an NFS server restart while picking up the new configuration.

root@kerneltalks # exportfs -ra

That’s it! Now I am able to create files and directories in the exported directory on NFS client.

root@client1 # cd /nfs_data
root@client1 # touch testfile
root@client1 # mkdir testdir
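To confirm that root squashing is no longer in effect, check the ownership of the newly created files. With no_root_squash they should be owned by root rather than nfsnobody (the listing below is illustrative):

root@client1 # ls -l /nfs_data
drwxr-xr-x 2 root root 4096 Jan  1 10:00 testdir
-rw-r--r-- 1 root root    0 Jan  1 10:00 testfile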

Conclusion

When you are using NFS mount points with the root account on the client side, export them with the no_root_squash option. This will ensure you don’t face access-related issues on NFS mount points.

mount.nfs: requested NFS version or transport protocol is not supported

Troubleshoot the error ‘mount.nfs: requested NFS version or transport protocol is not supported’ and learn how to resolve it.


This is another troubleshooting article aimed at a specific error and how to solve it. In this article, we will see how to resolve the error ‘mount.nfs: requested NFS version or transport protocol is not supported’ seen on an NFS client while trying to mount an NFS share.

# mount 10.0.10.20:/data /data_on_nfs
mount.nfs: requested NFS version or transport protocol is not supported

Sometimes you see the error mount.nfs: requested NFS version or transport protocol is not supported when you try to mount an NFS share on an NFS client. There are a couple of reasons you might see this error:

  1. NFS services are not running on NFS server
  2. NFS utils not installed on the client
  3. NFS service hung on NFS server

NFS services on the NFS server can be down or hung for multiple reasons, like high server utilization or a server reboot.


Solution 1:

To get rid of this error and successfully mount your share, follow the steps below.

Log in to the NFS server and check the NFS services status.

[root@kerneltalks]# service nfs status
rpc.svcgssd is stopped
rpc.mountd is stopped
nfsd is stopped
rpc.rquotad is stopped

In the above output, you can see that the NFS services are stopped on the server. Start them.

[root@kerneltalks]# service nfs start
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS mountd: [ OK ]
Starting NFS daemon: [ OK ]
Starting RPC idmapd: [ OK ]

You might need to check the nfs-server or nfsserver service instead, depending on your Linux distro.
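On systemd-based distros (RHEL/CentOS 7 and later, for example), the service name is typically nfs-server, and the equivalent check and start would look something like this:

[root@kerneltalks]# systemctl status nfs-server
[root@kerneltalks]# systemctl start nfs-server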

Now try to mount the NFS share on the client. You will be able to mount it using the same command we saw earlier!

Solution 2:

If that doesn’t work for you, then try installing the nfs-utils package on the client (and make sure it is installed on the server as well), and you should get past this error.
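A minimal sketch for RHEL/CentOS-style systems is below; on Debian/Ubuntu the client-side package is named nfs-common instead:

# yum install nfs-utils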

Solution 3:

Open the file /etc/sysconfig/nfs and check the below parameters:

# Turn off v4 protocol support
#RPCNFSDARGS="-N 4"
# Turn off v2 and v3 protocol support
#RPCNFSDARGS="-N 2 -N 3"

Removing the hash (#) from an RPCNFSDARGS line turns off support for the specified NFS versions, so clients using those versions won’t be able to connect to the NFS server to mount shares. If any of these lines are uncommented on your server, try commenting them out again, restart the NFS server service, and then retry the mount on the client.
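You can also verify from the client which NFS versions the server is actually offering using rpcinfo (shipped with the rpcbind package; 10.0.10.20 is the example server from earlier, and the output is a representative sketch):

# rpcinfo -p 10.0.10.20 | grep nfs
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs

If the version your client requests is missing from this list, the server side is where it needs to be fixed.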

Let us know in the comments below if you have faced this error and solved it by any other method. We will update our article with your information to keep it current and help the community!

How to resolve mount.nfs: Stale file handle error

Learn how to resolve mount.nfs: Stale file handle error on the Linux platform. This is a Network File System error that can be resolved from the client or server end.

When you are using the Network File System in your environment, you must have seen the mount.nfs: Stale file handle error at times. This error denotes that the NFS share cannot be mounted because something has changed since the last known good configuration.

Rebooting the NFS server, NFS processes not running on the client or server, or a share not being properly exported on the server can all be reasons for this error. Moreover, it’s irritating when this error appears for a previously mounted NFS share, because that means the configuration part is correct since it mounted earlier. In such a case, one can try the following commands:

Make sure the NFS services are running fine on both the client and the server.

#  service nfs status
rpc.svcgssd is stopped
rpc.mountd (pid 11993) is running...
nfsd (pid 12009 12008 12007 12006 12005 12004 12003 12002) is running...
rpc.rquotad (pid 11988) is running...

If the NFS share is currently mounted on the client, then un-mount it forcefully and try to remount it on the NFS client. Check whether it is properly mounted with the df command and by changing directory inside it.

# umount -f /mydata_nfs
# mount -t nfs server:/nfs_share /mydata_nfs
# df -k
------ output clipped -----
server:/nfs_share 41943040  892928  41050112   3% /mydata_nfs

In the above mount command, server can be the IP address or hostname of the NFS server.

If you are getting an error like the below while forcefully un-mounting:

# umount -f /mydata_nfs
umount2: Device or resource busy
umount: /mydata_nfs: device is busy
umount2: Device or resource busy
umount: /mydata_nfs: device is busy

Then you can check which processes or users are using that mount point with the lsof command, like below:

# lsof |grep mydata_nfs
lsof: WARNING: can't stat() nfs file system /mydata_nfs
      Output information may be incomplete.
su         3327      root  cwd   unknown                                                   /mydata_nfs/dir (stat: Stale NFS file handle)
bash       3484      grid  cwd   unknown                                                   /mydata_nfs/MYDB (stat: Stale NFS file handle)
bash      20092  oracle11  cwd   unknown                                                   /mydata_nfs/MPRP (stat: Stale NFS file handle)
bash      25040  oracle11  cwd   unknown                                                   /mydata_nfs/MUYR (stat: Stale NFS file handle)

You can see in the above example that 4 PIDs are using files on the said mount point. Try killing them to free the mount point. Once done, you will be able to un-mount it properly.
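For example, using the PIDs from the lsof output above (verify each PID before killing it, and try a plain kill first, escalating to kill -9 only for processes that refuse to exit):

# kill 3327 3484 20092 25040
# kill -9 3327 3484 20092 25040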

Sometimes the mount command still gives the same error. Then try mounting after restarting the NFS service on the client using the below command.

#  service nfs restart
Shutting down NFS daemon:                                  [  OK  ]
Shutting down NFS mountd:                                  [  OK  ]
Shutting down NFS quotas:                                  [  OK  ]
Shutting down RPC idmapd:                                  [  OK  ]
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]

Also read: How to restart NFS step by step in HPUX

Even if this didn’t solve your issue, the final step is to restart the services on the NFS server. Caution! This will disconnect all NFS shares exported from the NFS server, and all clients will see their mount points disconnect. This step is where 99% of you will get your issue resolved. If not, then the NFS configuration must be checked, especially if you changed the configuration and started seeing this error afterward.
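On the server, that restart looks like the below (same sysvinit style as the rest of this post; on systemd distros it would be systemctl restart nfs-server), optionally followed by re-exporting the shares:

[root@kerneltalks]# service nfs restart
[root@kerneltalks]# exportfs -ra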

Outputs in the above post are from a RHEL 6.3 server. Drop us your comments related to this post.