Synology in the Homelab – NFS 3 vs 4.1

March 6, 2018

Recently, I purchased a Synology 1817+ to use as storage for my homelab. Previously, I was using an older, much slower Synology RS812, as well as running vSAN on my single host. While vSAN performance was great, I ran into multiple issues trying to run it on a single host with FTT=0 and Force Provisioning set to On: cloning, restores, and similar operations would fail because I was using vSAN in an unsupported manner. So, enter the Synology 1817+!

iSCSI or NFS?

When I got the NAS online, I had to decide whether to go block or file. After reading many reviews, it seems the block implementation is not quite up to par – so much so that most people were getting better performance using a single 1Gb/s link for NFS than using 4 Ethernet ports with round-robin multipathing on iSCSI. I decided to go with NFS over a single link. At the time, I was using a RAID 1 pair of 6TB SATA disks with a pair of 200GB SSDs for read/write cache, and was able to saturate the 1Gb/s link without issue.

New SSD Added

After some searching, I purchased 4 Crucial MX500 500GB SSDs and added them to the Synology, filling out the rest of the 8 bays. I now had 2 volumes set up:

  • 1.5TB SSD RAID 5 volume
  • 6TB RAID 1 volume with 200GB read/write cache assigned

Now I had plenty of storage to run my VMs and two different tiers of storage to choose from: the all-flash volume or the hybrid volume, and both performed quite well! I knew, however, that they could perform even better. 10Gb/s will be added to the homelab in the near future, but I wanted to get what I could out of what I had. So NFS 4.1 seemed like the answer, since it supports multipathing.

NFS 4.1 Considerations

First things first: Synology officially supports NFS 4, not NFS 4.1, and VMware requires NFS 4.1 to use multipathing. After some reading, though, I found that my 1817+ did in fact have 4.1 support – it was just disabled. Here are the steps to enable NFS 4.1 on a Synology NAS:

  • Enable SSH in the Synology Control Panel, under Terminal & SNMP
  • SSH into the box with your admin credentials
  • Run sudo vi /usr/syno/etc/rc.sysv/S83nfsd.sh
  • Change line 90 from "/usr/sbin/nfsd $N" to "/usr/sbin/nfsd $N -V 4.1"
  • Save and exit vi
  • Restart the NFS service with sudo /usr/syno/etc/rc.sysv/S83nfsd.sh restart
  • Run sudo cat /proc/fs/nfsd/versions to verify that 4.1 now shows up

Important! Since NFS 4.1 isn’t officially supported by Synology yet, this is not a persistent change. It WILL survive reboots, but it WILL NOT survive version upgrades. So every time you apply a new Synology OS update, you need to follow the above steps to re-enable 4.1. However, since my lab is running all the time, I don’t update the Synology OS unless there is a specific need.

Also, you can’t “upgrade” an NFS 3 datastore to NFS 4.1 – you either have to storage vMotion machines onto a new 4.1 share, or unmount the datastore and re-mount it as 4.1 – the latter of which caused me issues.
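If you go the storage vMotion route, it scripts easily. Here is a minimal PowerCLI sketch of that approach – the vCenter address and datastore names are hypothetical placeholders, not my actual lab names:

    # Connect to vCenter first (hypothetical address)
    Connect-VIServer -Server "vcenter.lab.local"

    # Storage vMotion every VM off the old NFS 3 datastore onto the new NFS 4.1 one
    # "NFS3-Flash" and "NFS41-Flash" are hypothetical datastore names
    $target = Get-Datastore -Name "NFS41-Flash"
    Get-VM -Datastore (Get-Datastore -Name "NFS3-Flash") |
        ForEach-Object { Move-VM -VM $_ -Datastore $target -Confirm:$false }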

Last, NFS 4.1 does NOT support VAAI, so cloning and similar operations will read and write data across the LAN instead of offloading the task to the NAS.

NFS 3 vs 4.1 performance

In my single host, I have 4 NICs, 3 of them dedicated to NFS networks. Those 3 NICs are crossover-cabled directly to the Synology 1817+. In vSphere, the host has 3 vmkernel ports, one on each subnet, tied to the proper NIC to reach the matching subnet on the NAS. For NFS 3, only the first NIC is used. For NFS 4.1, I entered 3 server IPs for the NFS datastore to utilize multipathing over the 3 dedicated NICs (the sketch below shows how both mounts can be scripted). All testing was done against the all-flash volume, not the hybrid volume.
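For reference, mounting the export each way can be scripted with PowerCLI’s New-Datastore cmdlet, where -FileSystemVersion selects 4.1 and multiple -NfsHost entries provide the extra paths. This is a minimal sketch under assumed names – the host name, server IPs, export path, and datastore names are all hypothetical:

    # Host name, server IPs, export path, and datastore names are hypothetical
    $vmhost = Get-VMHost -Name "esxi01.lab.local"

    # NFS 3: a single server IP, so a single NIC carries the traffic
    New-Datastore -Nfs -VMHost $vmhost -Name "NFS3-Flash" `
        -NfsHost "10.0.1.10" -Path "/volume2/vmware"

    # NFS 4.1: three server IPs, one per subnet, for multipathing
    New-Datastore -Nfs -VMHost $vmhost -Name "NFS41-Flash" `
        -FileSystemVersion "4.1" `
        -NfsHost @("10.0.1.10","10.0.2.10","10.0.3.10") `
        -Path "/volume2/vmware"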

Deploying a Single VM from a Template

I have a Windows Server 2016 template, about 26GB consumed, that I use constantly for deploying new VMs into my lab, and with no VAAI in NFS 4.1, I thought it would be a good place to start.

NFS 3

The clone operation for the template took only 1m48s, thanks to VAAI – zero network traffic, and a lot of disk activity on the Synology.

NFS 4.1

Cloning took longer here, but not by much, thanks to the ability to utilize multiple NICs: 2m44s.

Deploying Multiple (4) VMs from a Template

I noticed that I wasn’t even using 2Gb/s of bandwidth, let alone getting close to the 3Gb/s that would saturate my 3 links. I wanted to see if the single operation was the limit and whether I could push more by cloning more, so I used a PowerCLI script (along the lines of the sketch below) to deploy 4 VMs simultaneously from the template.
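A minimal sketch of such a script, with hypothetical template, host, and datastore names, would look something like this – the key is -RunAsync, which returns immediately so all four clones run in parallel:

    # Kick off 4 clone operations in parallel with -RunAsync
    # Template, host, and datastore names are hypothetical examples
    $template = Get-Template -Name "Win2016-Template"
    $vmhost   = Get-VMHost -Name "esxi01.lab.local"
    $ds       = Get-Datastore -Name "NFS41-Flash"

    $tasks = 1..4 | ForEach-Object {
        New-VM -Name "CloneTest-$_" -Template $template -VMHost $vmhost `
            -Datastore $ds -RunAsync
    }

    # Block until all four clones complete
    Wait-Task -Task $tasks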

NFS 3

Deploying all 4 VMs (highlighted) at the same time took longer, 3m30s, but again used no network resources, and the NAS was able to push writes over 800MB/s!

NFS 4.1

Deployment took much longer here, 10m30s, as the network was the bottleneck, even though we were getting speeds of 250MB/s utilizing multiple NICs. Because we had to use the network instead of VAAI offload, disk throughput suffered by more than 50%!

IOmeter testing

Finally, I went with the ol’ go-to, IOmeter. I always head over to vmktree.org/iometer and download the presets. I ran the 100% Read and Real Life tests over both NFS 3 and NFS 4.1. While NFS 3 was the clear winner in cloning operations, thanks to the offloading capability of VAAI integration, NFS 4.1 was able to achieve higher IO rates for “normal” VM operations, thanks to the multiple NICs.

Clearly, the single NIC was a bottleneck for both the 100% Read test AND the Real Life test. While we didn’t get a full 3Gb/s worth of speed (~330MB/s), we got awfully close with the 100% Read test over NFS 4.1! NFS 4.1 is the winner in the IOmeter testing, able to achieve higher IOPS and higher throughput while maintaining lower latency.

Conclusions?

To be honest, it’s not as clear cut as I would have liked, and I was torn on whether to stick with NFS 3 or move to NFS 4.1. The obvious answer is to get a 10Gb/s NIC for the Synology and the host – that’s in the future, but months away. If VAAI were available on NFS 4.1, it would be the protocol to choose, since it already achieves higher IO, and cloning operations would be just as fast as NFS 3 was above.

For my environment, I deploy VMs from template quite often – maybe 6 VMs per week. I also have a large static footprint of VMs that don’t get removed and re-provisioned. But I do a lot of software deployment and VM replication & backup testing, which require higher throughput and higher IO. So, for those reasons, I decided to stay on NFS 4.1 and just wait a little longer when deploying templates.

Here’s to hoping Synology will release NFS 4.1 officially, and VMware will enable VAAI over 4.1!
