Veeam Architect – Proxy Transport Modes


Veeam, the leader in virtualization backups, has three different ways of gathering your production data and transporting it to the Backup Repository.  In this post, we will look at all three methods, show the data path for each, and see some of the benefits of each mode.  First, let’s take a look at how the backup process works.

How Veeam backs up virtual machines:

Veeam can back up both VMware and Hyper-V virtual environments.  For the sake of simplicity, we will focus only on a VMware environment in this post.  By using supported API calls, Veeam processes a virtual machine backup (extremely simplified) as follows:

  1. Request vCenter to snapshot the virtual machine
  2. Read the blocks from the VMDKs that were just snapped
  3. Deduplicate and compress those blocks
  4. Send data to the Backup Repository into a VBK (or other) file
  5. Request vCenter to remove the snapshot
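The five steps above can be sketched as a toy Python simulation.  To be clear, this is purely illustrative: the function and variable names are made up, and real Veeam proxies do this through VMware's APIs, not Python.  Here the "repository" is just a dict keyed by block hash, which also shows where deduplication (step 3) fits in:

```python
import hashlib
import zlib

def backup_vm(blocks, repository):
    """Toy walk-through of the five backup steps for one VM's blocks."""
    # 1. Request vCenter to snapshot the VM (simulated by freezing the block list)
    snapshot = list(blocks)
    for block in snapshot:
        # 2. Read each block from the just-snapped VMDK
        # 3. Deduplicate (skip blocks already in the repository) and compress
        digest = hashlib.sha1(block).hexdigest()
        if digest in repository:
            continue
        # 4. Send the compressed block to the Backup Repository (a VBK-like store)
        repository[digest] = zlib.compress(block)
    # 5. Request vCenter to remove the snapshot (simulated cleanup)
    del snapshot
    return len(repository)

repo = {}
unique = backup_vm([b"A" * 4096, b"B" * 4096, b"A" * 4096], repo)
print(unique)  # 2 -- the duplicate "A" block was deduplicated away
```

The dedupe check in step 3 is why a repository full of mostly-identical VMs takes far less space than the sum of the source disks.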

By utilizing proxy servers (physical, virtual, or both) running the data mover service, we can process backups in parallel, and in a few different ways.
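As a rough illustration of that parallelism (again hypothetical, nothing Veeam-specific), think of each worker thread below as one proxy's data mover, so several VMs are processed at once instead of serially:

```python
from concurrent.futures import ThreadPoolExecutor

def process_vm(vm_name):
    # Stand-in for one proxy data mover: read the blocks, deduplicate
    # and compress them, and ship them to the repository.
    return f"{vm_name}: done"

vms = ["web01", "db01", "app01", "app02"]

# Two "proxies" (worker threads) process two VMs at a time.
with ThreadPoolExecutor(max_workers=2) as proxies:
    results = list(proxies.map(process_vm, vms))

print(results)
```

Adding proxies is how Veeam scales out: more proxies means more concurrent VM tasks, up to the limits of your storage and network.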

Hot Add Mode:

First of all, Hot Add mode requires virtual proxies.  This is due to the nature of how it handles the data transfer.  In this mode, the snapshotted VM’s hard disks are hot-added (mounted) to the proxy VM.  The virtual machine we are backing up is unaffected, as it is running off of the snapshot delta (-000001.vmdk) file, not the original VMDK.  The proxy VM reads the blocks from the virtual hard drive, deduplicates the data, and sends it out its virtual NIC, through the virtual switch, out to the physical LAN, and on to the Backup Repository.  This is the preferred mode in a lot of environments, as it scales easily, and – thanks to 9.5 – is now even faster at processing data!


Direct Storage Mode:

Previously known as Direct SAN mode, this was renamed Direct Storage when Direct NFS support was added recently!  With Direct Storage mode, we utilize a physical proxy that has a connection to the storage network: for example, a Backup and Replication server with a Fibre Channel HBA installed, zoned to the storage array, with access to the LUNs used by VMware for datastores.  The Veeam server (or proxies) read the VMDK blocks directly from the storage, bypassing the hypervisor completely.  This means that there is no additional load on the ESXi host, as it is not processing or transporting any of the backup data.  In an NFS environment, the proxies would need access to mount the NFS share that VMware uses for storage.
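To make "bypassing the hypervisor" concrete, here is a toy comparison of the two data paths as described above (the hop labels are illustrative, not official Veeam terminology):

```python
# Hops traversed when the proxy reads directly from storage:
direct_storage_path = [
    "storage array (FC LUN or NFS share)",
    "physical proxy (HBA / NFS mount)",
    "Backup Repository",
]

# Hops traversed when data is pulled through the hypervisor instead:
network_path = [
    "storage array",
    "ESXi host",
    "management interface",
    "proxy",
    "Backup Repository",
]

# Direct Storage never touches the ESXi host, so the host carries
# no backup I/O at all.
print("ESXi host" in direct_storage_path)  # False
print("ESXi host" in network_path)         # True
```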


NBD Mode (Network Mode):

The last of the transport methods is NBD, or Network mode.  With NBD, extra load is put onto the ESXi servers.  All data is processed by the ESXi host, which reads the blocks from the VMDK files and, using the management interface, sends that data out to the Veeam proxy.  There the data is deduplicated and sent to the repository.  In a 1Gb/s environment, this method will most likely be too slow.  At 10Gb/s and above, you can bypass the API calls required for Hot Add mode (to mount the VMDK) and shorten the backup time.  However, due to ESXi throttling, you will not be able to utilize the full bandwidth of the management interface.  Also, by default, the other two transport modes will fail back to NBD if they are unable to access the data via the selected method.
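That fallback behavior can be sketched as simple selection logic.  The function name and mode strings below are invented for illustration; the actual decision happens inside Veeam's transport engine:

```python
def pick_transport(selected_mode, can_access):
    """Return the transport actually used: the selected mode when the
    proxy can reach the data that way, otherwise fall back to NBD."""
    # NBD only needs the ESXi management interface, so it serves as
    # the universal fallback for the other two modes.
    if selected_mode != "nbd" and not can_access(selected_mode):
        return "nbd"
    return selected_mode

# A proxy with no SAN zoning cannot use Direct Storage, so it falls back:
print(pick_transport("directstorage", lambda mode: False))  # nbd
# A virtual proxy that can hot-add the disks keeps its selected mode:
print(pick_transport("hotadd", lambda mode: True))          # hotadd
```

This is why a misconfigured Direct Storage proxy often "works" but runs slowly: the jobs silently fail back to NBD.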



Final Thoughts:

So, which mode is best for your environment?  Well, that depends.  The design of your storage, your network, and even the repository type all factor into which transport method to use.  In general, Direct Storage will offer the highest throughput of backup data to the repository.  Hot Add mode also offers high throughput, but puts a small load on the ESXi host the virtual proxy resides on.  Finally, NBD mode puts the highest load on the hypervisor, but can be a good design, or sometimes even the only design, for some datacenters.

Be sure to watch the whiteboard session below as well!
