The x86 architecture provides four privilege levels: Ring 0, 1, 2 and 3. Operating systems and applications use this hierarchy to manage access to the computer hardware. In a physical environment the OS runs in Ring 0 (most privileged) and user applications run in Ring 3 (least privileged), as described in Figure 1.
In a virtualised environment, however, the hypervisor runs directly on top of the physical hardware, so a Guest OS can no longer run in Ring 0, which is now occupied by the hypervisor itself. Further complicating the situation, certain sensitive instructions can only be virtualised if they are executed at Ring 0. To circumvent this problem, VMware introduced binary translation techniques that allow the Virtual Machine Monitor (VMM), residing in the VMkernel, to run in Ring 0. A Guest OS can then execute these sensitive instructions via the VMM. This is described in Figure 2.
Commonly, Fully Virtualised SCSI Adapters, Para Virtual SCSI (pvSCSI) Adapters and VM Direct Path IO can be used to give a VM access to storage. Which of these options to pick for a given VM is what this blog tries to describe.
Virtual SCSI Adapters:
Virtual SCSI Adapters are in some cases referred to as VMware SCSI Disk Drivers or Fully Virtualised SCSI Adapters. They are the default adapters used on a standard VM. The available types of Virtual SCSI Adapter are BusLogic Parallel, LSI Logic Parallel and LSI Logic SAS. A good article with details about which Virtual SCSI Adapter to use for a given Guest OS is also available here. To keep this simple, vSphere automatically picks a compatible Virtual SCSI Adapter depending on which Guest OS is selected during VM creation. “Each VMM has to partition and share the CPU, memory and I/O devices to successfully virtualize the system.” The advantage of Virtual SCSI Adapters is that they provide a platform for multiple VMs to share the same storage resource simultaneously; of course, with this option, storage space and storage bandwidth are shared by all the competing VMs. Figure 3 describes how data flows to and from a VM and its associated storage. It is important to note that in this approach a Guest OS may not always be aware that it is running inside a VM.
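As a side note, the adapter type chosen for a VM ends up as a setting in the VM’s .vmx configuration file. The snippet below is a minimal sketch, assuming the first SCSI controller; the value "lsilogic" is just one possibility, with "buslogic" and "lsisas1068" being the other common values:

```
scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"
```

Editing this by hand is rarely necessary, since vSphere sets it automatically when the adapter is selected during VM creation.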
Para Virtual SCSI (pvSCSI) Adapters:
Paravirtual SCSI adapter is arguably a misleading term, as all virtual hardware is to some degree paravirtual; what distinguishes a pvSCSI adapter is that the guest OS uses a special driver to communicate directly with the hypervisor. PvSCSI adapters offer greater throughput and efficiency with lower CPU utilisation. Storage used by VMs with pvSCSI adapters can also be shared by multiple VMs. As shown in Figure 4, paravirtualisation involves modifying the OS kernel to replace non-virtualisable instructions with hypercalls that communicate directly with the virtualisation layer. The hypervisor also provides hypercall interfaces for other critical kernel operations such as memory management, interrupt handling and timekeeping. pvSCSI is best suited for environments in which guest applications are very I/O intensive. While adding a SCSI adapter to a VM, choose “VMware Paravirtual” to enable paravirtualisation. Because a paravirtualised Guest OS must be hypervisor-aware and carry custom guest OS device drivers, its compatibility and portability are poor. VMware Tools provides optimised virtual device drivers that in turn help with paravirtualisation. An example of a paravirtualised I/O device driver is the VMxnet network driver.
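In .vmx terms, switching a controller to pvSCSI comes down to the adapter type setting. A minimal sketch, again assuming the first SCSI controller:

```
scsi0.present = "TRUE"
scsi0.virtualDev = "pvscsi"
```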
Requirements to enable pvSCSI:
- Only supported on VMs with hardware version 7.
- Supported on the following Guest OSs – Windows Server 2003 and 2008, RHEL 5 and 6, SUSE 11 SP1, Ubuntu 10.04 and Linux distributions with kernel 2.6.33
- Supports NAS, iSCSI and FC storage.
- vSphere 4.1 now also supports booting a Guest OS from disks attached to pvSCSI adapters.
- VMware tools must be installed.
- On Windows VMs using pvSCSI adapters, a third-party SCSI driver must be installed during Guest OS installation by pressing the F6 key when prompted.
- Use of pvSCSI adapters to connect to DAS is not recommended by VMware.
- FT is not supported on VMs using pvSCSI adapters.
- Hot-adding a pvSCSI adapter is not supported.
- Hot adding or removing disks requires a bus rescan from within the guest.
- VMs using pvSCSI adapters may not experience performance gains on disks with snapshots or if memory on the ESX host is overcommitted.
Virtual Machines Communications Interface (VMCI):
VMCI is a new interface that provides high-speed, efficient communication between a VM’s Guest OS and its host server, or between multiple virtual machines on the same host. It is independent of the guest networking stack; instead, VMCI maintains a communication path via the host’s memory. This feature provides a connection that is typically 20 times as fast as a 1 Gbps network connection. Once VMCI is enabled on a VM, it uses VMCI sockets to communicate with other VMs. VMCI sockets support both connection-oriented (TCP-like) and connectionless (UDP-like) protocols. VMCI and its benefits are seen only if the VMCI Sockets APIs are used by applications running inside the VMs; however, the code modification needed to adopt the VMCI Sockets API is minimal.
- VMs must be on the same host.
- Virtual machine hardware version 7 is needed.
- VMware Tools must be installed on the VMs.
- VMs must be running an application that supports VMCI.
On a VM with VMCI enabled, features such as VMotion, DRS, HA and FT may not work at all or may be ineffective. VMware has officially stated that VMCI communication from one VM to another will be deprecated in the next major release; VM-to-host VMCI communication, however, will continue to be supported.
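Enabling VMCI through the vSphere client corresponds to a device flag in the VM’s .vmx file. The sketch below assumes device slot 0; the unrestricted flag, which is meant to allow VM-to-VM VMCI traffic, is an assumption here and may not be required for guest-to-host communication:

```
vmci0.present = "TRUE"
vmci0.unrestricted = "TRUE"
```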
VM Direct Path I/O:
Hardware independence is one of the basic advantages that virtualisation offers, but it can also be a problem when one needs to connect a specific hardware device to a VM. Normal I/O virtualisation adds a small delay to every I/O operation, which can become a bottleneck for VMs that need high bandwidth; furthermore, hardware resources are shared among one or more VMs. NPIV is available as one solution, allowing external storage to be masked directly to virtual storage adapters created on a per-VM basis. VM Direct Path I/O can also fill this void by providing an option to dedicate specific hardware devices to specific VMs. The feature is available for USB devices and is fully supported on network devices; at present, support for storage adapters using VM Direct Path I/O is experimental. As described in Figure 5, storage Device A is completely dedicated to VM A and Device C is dedicated to VM B. Direct Path I/O helps to cut latency and hence improves the performance of devices such as Host Bus Adapters. Only a handful of such devices are officially supported by VMware, although in practice pretty much any PCI(e) device can be connected using this technology.
To use the feature, it first has to be enabled in the BIOS of the physical server. Then the device has to be defined as a Direct Path IO device using the vSphere client: go to “Configuration” – “Advanced Settings” (Hardware section), click “Configure Pass-through” and select the device in question. Next, the identified device should be added to a VM that is powered off: “Edit Settings” of the VM – select the PCI Device and assign it to the VM. Power on the VM, log in to the Guest OS and, if required, install the device driver. The device is then ready to use. A good article describing how to configure VM Direct Path IO is available here.
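When the PCI device is assigned via “Edit Settings”, the vSphere client records the assignment in the VM’s .vmx file. The sketch below is illustrative only; the vendor ID, device ID and PCI address shown are hypothetical values, which the client fills in automatically for the selected device:

```
pciPassthru0.present = "TRUE"
pciPassthru0.vendorId = "0x8086"
pciPassthru0.deviceId = "0x10fb"
pciPassthru0.id = "04:00.0"
```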
VM Direct Path I/O Requirements:
- New-generation CPUs supporting Intel VT-d or AMD IOMMU (Input Output Memory Management Unit) are required; remember to enable the feature in the BIOS as well.
- The storage, network or USB device itself must also be compatible.
- Supported on vSphere 4.1
VM Direct Path I/O Limitations:
- By enabling VM Direct Path IO on a VM the following features are automatically disabled for the VM:
- Fault Tolerance
- Storage VMotion
- Distributed Resource Scheduler (DRS)
- High Availability (HA)
- VM Snapshots
- Suspending a VM
- VMs have to be ver. 7 or newer
- A VM Direct Path IO device is dedicated for a VM and cannot be used by any other VM.
- A maximum of two VM Direct Path IO devices can be attached to a single VM.
VMware offers several storage I/O options. Fully Virtualised SCSI adapters should ideally be used by VMs with lower disk I/O workloads or average bandwidth needs. Para Virtual SCSI (pvSCSI) adapters can be used for medium to high disk I/O workloads; they offer better performance than fully virtualised adapters and, more importantly, a storage device can still be shared by more than one VM, unlike with VM Direct Path IO. However, Guest OS compatibility and the availability of paravirtual device drivers for the OS should be considered. VM Direct Path IO offers a great option for dedicating storage to a specific VM; in other words, it is the best option for performance, as the hardware is dedicated to a single VM. But one should weigh all factors, including the hardware requirements and the features that would be disabled, before actually using VM Direct Path IO.
Some sections of this document were created using the official VMware icon and diagram library. Copyright © 2010 VMware, Inc. All rights reserved. VMware does not endorse or make any representations about third party information included in this document, nor does the inclusion of any VMware icon or diagram in this document imply such an endorsement.