
Re: [Xen-API] NFS Tuning - How To



Hi!

Thanks, you give me a chance to share my experience of struggling with poor NFS performance in XCP 1.6.x.

First of all, you just need to check the mount options used by the NFS client in Xen.

You should see something like

#mount
/dev/sda1 on / type ext3 (rw)
none on /proc type proc (rw)
none on /sys type sysfs (rw)
none on /dev/pts type devpts (rw)
none on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
10.1.0.42:/var/storage1/ISO on /var/run/sr-mount/0f61bf48-650b-41e6-183f-6231f763468f type nfs (rw,soft,timeo=133,retrans=2147483647,tcp,acregmax=1,acdirmax=1,addr=10.1.0.42)
#

In my case you see the final result, which works rather well for me.
But you may find the noac option inside the brackets instead: yes, there it is!
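A quick way to check is to test the options string for noac. A sketch (the options string below is an example copied from my mount output above; on a real host take it from the `mount` output for your SR):

```shell
# Check an NFS mount-options string for 'noac' (example string; on a real
# host take it from the `mount` output for your storage repository).
opts="rw,soft,timeo=133,retrans=2147483647,tcp,acregmax=1,acdirmax=1"
case ",$opts," in
  *,noac,*) echo "noac is set" ;;
  *)        echo "noac not set" ;;
esac
```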

Open /opt/xensource/sm/nfs.py and jump to line 66.
There you will see code something like

options += ',noac' # CA-27534

You just need to comment out this line of code and reattach your NFS or NFS ISO storage to check the performance.
In our case the VM protection policy archiving process to NFS storage became 10x faster.
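If you prefer sed over an editor, something like this should work. It is demonstrated here on a sample line only; on a real host you would run the same sed against /opt/xensource/sm/nfs.py (and keep a backup first):

```shell
# Demonstrate the edit on a sample line: sed prefixes any line containing
# 'noac' with a Python comment marker '#'.
printf "%s\n" "    options += ',noac' # CA-27534" \
  | sed "/noac/s/^/#/"
```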

You can do even more and google "noac CA-27534".

I am sure you will find a patch from about a year ago:

https://github.com/mcclurmc/xcp-storage-managers/pull/1

-    options += ',noac' # CA-27534
+
+    # Attribute caching can lead to stale data, so we can try to minimize these
+    # problems by reducing the caching time. Going right down to zero does not
+    # remove race conditions (rather just makes them more unlikely) while
+    # introducing some serious performance penalties for the NFS filesystem.
+    # So we get 99% of the way there without the massive performance penalties
+    # by going for 1 second.
+    # The correct solution to cases where you need up-to-date information is to
+    # use file locking primitives, as the lock operation will force the
+    # getattr() and ensure no concurrent operations.
+    options += ',acregmax=1'
+    options += ',acdirmax=1'

The noac option suddenly shows up in XCP 1.6.x as well as in XS 6.x.

I think the developers could explain why the patch mentioned above was not pulled into the final version.

The noac option mentioned above is definitely not present in XS 5.6. I did not check XCP 1.4.9.

Hope the above will help.

Finally, I want to note that you have to edit nfs.py manually on every member of the pool.
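If you have root SSH between the hosts, a small loop saves doing this by hand on each member. The host names below are hypothetical, and this is shown as a dry run that only prints the commands; drop the echo to actually run them:

```shell
# Dry run: print the command that would comment out any 'noac' line in
# nfs.py on each pool member. sed -i.bak keeps a backup copy on each host.
for host in xcp-host1 xcp-host2 xcp-host3; do
  echo "ssh root@$host \"sed -i.bak '/noac/s/^/#/' /opt/xensource/sm/nfs.py\""
done
```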

2013/1/4 Juan Lorenzana <juan@xxxxxxxxxxxxxx>

Okay, so I have two Dell PowerEdge servers and we are using a NAS solution called TrueNAS by IX Systems.  This is basically a ZFS version 28 implementation for the NAS solution.

 

Anyway, when we mount our NFS storage repositories over the 10GB interfaces on the Dell PowerEdge servers (the NAS also has 10GB), we get horrible NFS performance.  If I mount a CIFS repository, we get blazing fast performance.

 

So my question is how would I go about tuning the different NFS options to do various tests.  A quick scan on Google produced nothing on how to do this.  My thoughts are to either pass NFS options in Xen Center under the
Advanced Options input box or edit the nfs.py file.  I tried using the Advanced Options but I do not think that worked.

 

I am currently running XCP 1.5 but plan on migrating to 1.6 in the next few months.  I would really like to tune the NFS first on 1.5 so that I can compare performance to 1.6. 

 

Any pointers to try to improve NFS performance?

 

I am getting about 6MB/s write and 20MB/s read on the NFS storage repository when doing iozone tests inside a VM.  With a 10GB interface I should be in the hundreds.  If I do an iozone test on the TrueNAS directly, I get about 600MB/s on average.  When doing iozone testing on a VM running Windows using CIFS from the same storage solution, we are getting close to 180 MB/s, so I know the issue is not the storage repository; I am guessing it has to do with NFS tuning.


Any help is appreciated.  Thanks.

 

Juan

 

 


_______________________________________________
Xen-api mailing list
Xen-api@xxxxxxxxxxxxx
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api




--
WBR

Sergey Kruchatov

 

