vSphere VSA RAID requirements relaxed!

Today VMware announced on its blog that the RAID 10 requirement will be dropped, and that RAID 5 and RAID 6 will be supported as well.

See the blog post here, and the updated documentation here.

So, this means that you will be able to use more of your local storage!

Previously, with RAID 10 on each host and the VSA mirroring the two ESXi servers' local storage, you could only use 25% of your raw storage.

See my previous posts on the vSphere 5 VSA (vSphere Storage Appliance).

Now you can see a bit of a difference in usable storage. Let's say we have six 500 GB drives (3 TB of raw space) in each of our two ESXi servers, 6 TB of raw disk in total. (Let's ignore formatting overhead and just use nice round numbers.)

Here are the differences:

RAID 10: 3 TB total space (1.5 TB per ESXi server). With RAID 1 between the VSAs, you are left with 1.5 TB usable.

RAID 6: 4 TB total space (2 TB per ESXi server). With RAID 1 between the VSAs, you are left with 2 TB usable.

RAID 5: 5 TB total space (2.5 TB per ESXi server). With RAID 1 between the VSAs, you are left with 2.5 TB usable.

So, assuming disk performance isn't an issue, you have the chance to use a little over 40% of your raw storage (2.5 TB of the 6 TB of raw disk) instead of 25%. In this scenario, we gained an additional 1 TB of usable space.
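As a quick sanity check on those numbers, here is a minimal Python sketch of the same math. The helper and the constants are my own illustration (one drive's worth of capacity lost to RAID 5 parity, two to RAID 6, half the drives to RAID 10 mirroring, then the VSA's RAID 1 halving the result); it is not VMware tooling.

```python
# Rough capacity math for a two-node VSA cluster: local RAID on each host,
# then a RAID 1 mirror between the two VSA nodes, which halves the result.
# Names and layout are my own illustration, not anything from VMware tooling.

DRIVE_GB = 500
DRIVES_PER_HOST = 6
HOSTS = 2

def local_raid_gb(level):
    """Usable capacity of one host's local RAID set, in GB."""
    raw = DRIVE_GB * DRIVES_PER_HOST
    if level == "RAID 10":
        return raw // 2               # mirrored pairs: half the drives
    if level == "RAID 6":
        return raw - 2 * DRIVE_GB     # two drives' worth of parity
    if level == "RAID 5":
        return raw - DRIVE_GB         # one drive's worth of parity
    raise ValueError(level)

raw_total = DRIVE_GB * DRIVES_PER_HOST * HOSTS   # 6000 GB across both hosts

for level in ("RAID 10", "RAID 6", "RAID 5"):
    across_hosts = local_raid_gb(level) * HOSTS
    usable = across_hosts // 2        # VSA RAID 1 mirror between the hosts
    print(f"{level}: {across_hosts} GB after local RAID, "
          f"{usable} GB usable ({100 * usable / raw_total:.0f}% of raw)")
```

Running it reproduces the figures above: 1.5 TB, 2 TB and 2.5 TB usable, or 25%, 33% and roughly 42% of the 6 TB of raw disk.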

BTW, I highly recommend setting your controller cache to 100% write if you are going to use RAID 5 or RAID 6.

This Post Has 2 Comments

  1. Peter Kuczynski

    Hey Tim,
    I caught your well-written post while searching for write cache settings on an HP DL360 G6 controller for VMware's VSA appliance.
    I have a customer that I recently migrated to 5.1, with two hosts and four drives each in RAID 5.
    I set the cache ratio to the default of 25% read and 75% write.
    Can you let me know how and why you arrived at your recommendation of a 0% read / 100% write cache ratio?
    Thanks ; )

    1. Tim

      Sure! A while back I did some testing and did not see a performance benefit from the read cache when using RAID 5. When information is read, it is read from multiple spindles, so in a six-disk RAID 5 we have six disks to read from, each at its maximum transfer speed. The odds of any given block already sitting in the 25% of cache we dedicated to reads are slim, so staging it from disk into cache doesn't save any time. When writing, however, it is much quicker to write to the cache and then let the controller flush to the disks and calculate parity. Since you receive essentially no benefit from the read cache, I choose to dedicate all of it to the write cache, where the benefits are obvious.
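
      To put rough numbers behind that, here is a back-of-the-envelope sketch. The 512 MB controller cache is a hypothetical figure for illustration only, and the uniform-random-read assumption is mine; the 2.5 TB dataset matches the RAID 5 example in the post.

      ```python
      # Back-of-the-envelope estimate of how often a random read would already
      # be sitting in the controller's read cache. The 512 MB cache size is a
      # hypothetical figure; 2.5 TB matches the RAID 5 example in the post.

      CACHE_MB = 512                # hypothetical controller cache size
      READ_SHARE = 0.25             # read portion under the default 25/75 split
      DATASET_GB = 2500             # usable RAID 5 capacity from the example

      read_cache_gb = CACHE_MB * READ_SHARE / 1024
      hit_rate = read_cache_gb / DATASET_GB   # uniform random access assumption

      print(f"Read cache: {read_cache_gb:.3f} GB in front of {DATASET_GB} GB")
      print(f"Chance a random read is already cached: {hit_rate:.4%}")
      ```

      With numbers like that, essentially every read goes to the spindles anyway, while every write can be acknowledged from cache, which is why the whole cache is better spent on writes.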
