Optimized for Virtualization
Intel Virtualization Technology for Connectivity (Intel VT for Connectivity) is a suite of hardware assists that improve overall system performance by lowering the I/O overhead in a virtualized environment. This optimizes CPU usage, reduces system latency and improves I/O throughput. Intel VT for Connectivity includes the following:
• Virtual Machine Device Queues (VMDq)
• Intel I/O Acceleration Technology (Intel I/OAT)
Multi-port adapters are important in a virtualized environment because they provide redundancy and data connectivity for the applications and workloads running in virtual machines. Given slot limitations, it is recommended that a virtualized physical server have at least six GbE ports to satisfy its I/O demands.
Virtual Machine Device Queues (VMDq)
VMDq reduces I/O overhead created by the hypervisor in a virtualized server by performing data sorting and coalescing in the network silicon. This technology utilizes multiple queues in the network controller. As data packets enter the network adapter, they are sorted and packets traveling to the same destination (or virtual machine) get grouped together in a single queue. The packets are then sent to the hypervisor, which directs them to their respective virtual machines. Relieving the hypervisor of packet filtering and sorting improves overall CPU usage and throughput levels.
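The sorting step described above can be illustrated with a small conceptual sketch. This is hypothetical software code for explanation only; in VMDq the sorting and coalescing happen in the network silicon, and the queue mapping shown here is an assumed simplification:

```python
# Conceptual sketch of VMDq-style sorting (illustrative only; the real
# work is done in NIC hardware, not in software like this).
from collections import defaultdict

def sort_into_queues(packets, vm_queue_map):
    """Group packets into per-VM queues by destination MAC address.

    packets: list of (dst_mac, payload) tuples arriving at the adapter
    vm_queue_map: mapping of a VM's MAC address -> its queue id
    """
    queues = defaultdict(list)
    for dst_mac, payload in packets:
        # Packets for an unknown destination fall back to queue 0,
        # which the hypervisor's virtual switch must handle itself.
        queue_id = vm_queue_map.get(dst_mac, 0)
        queues[queue_id].append(payload)
    return queues

packets = [("aa:bb", "p1"), ("cc:dd", "p2"), ("aa:bb", "p3")]
queues = sort_into_queues(packets, {"aa:bb": 1, "cc:dd": 2})
print(queues[1])  # ['p1', 'p3']
```

Because each queue already holds packets for exactly one VM, the hypervisor can forward a whole queue without inspecting individual packets, which is where the CPU savings come from.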
The PCIe Intel Gigabit adapter provides improved performance with the next-generation VMDq technology, which includes features like loop-back functionality for inter-VM communication, priority-weighted bandwidth management and doubling the number of data queues per port from four to eight. It also supports multicast and broadcast data on a virtualized server.
Intel I/O Acceleration Technology
Intel I/O Acceleration Technology (Intel I/OAT) provides a set of features that accelerate data movement across the platform, from networking devices to the chipset and processors, helping to improve system performance and application response times. These features include multiple queues, Direct Cache Access (DCA), MSI-X, Low-Latency Interrupts, Receive-Side Scaling (RSS) and others. Using multiple queues and receive-side scaling, a DMA engine moves data using the chipset instead of the CPU. DCA enables incoming data to be pre-fetched into the processor cache, thereby avoiding cache misses and improving application response times.
MSI-X helps load-balance I/O interrupts across multiple processor cores, and Low-Latency Interrupts can give certain data streams a non-modulated path directly to the application. RSS directs interrupts to a specific processor core based on a hash of the packet's address fields.
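The RSS steering just described can be sketched in a few lines. This is an assumed simplification: real adapters use a Toeplitz hash with a configurable key, whereas `zlib.crc32` merely stands in as a deterministic hash here, and the table size and core count are illustrative:

```python
# Simplified sketch of Receive-Side Scaling (RSS): a hash of the flow's
# address tuple indexes an indirection table, which selects the CPU core
# whose queue (and MSI-X vector) receives the packet.
import zlib

# 128-entry indirection table spreading flows over 4 cores (illustrative).
INDIRECTION_TABLE = [0, 1, 2, 3] * 32

def rss_core(src_ip, dst_ip, src_port, dst_port):
    """Pick the core for a flow from a hash of its address tuple."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    h = zlib.crc32(key)  # real NICs use a Toeplitz hash instead
    return INDIRECTION_TABLE[h % len(INDIRECTION_TABLE)]

# All packets of the same flow hash identically, so they always land on
# the same core, keeping that flow's data warm in one core's cache.
core = rss_core("10.0.0.1", "10.0.0.2", 12345, 80)
```

Keeping a flow pinned to one core is the point: the connection state stays in that core's cache, and interrupt load still spreads across cores because different flows hash to different table entries.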
Single-Root I/O Virtualization (SR-IOV)
For mission-critical applications, where dedicated I/O is required for maximum network performance, users can assign a dedicated virtual function port to a VM. The controller provides direct VM connectivity and data protection across VMs using SR-IOV. SR-IOV technology enables the data to bypass the software virtual switch and delivers near-native performance. It assigns either physical or virtual I/O ports to individual VMs directly.
This technology is best suited for applications that demand maximum I/O throughput and the lowest latency, such as database, storage and financial applications. The PCI-SIG SR-IOV capability is a mechanism for devices to advertise their ability to be directly assigned to multiple virtual machines. It enables the partitioning of a PCI function into many virtual interfaces for the purpose of sharing the resources of a PCIe device in a virtual environment. These virtual interfaces are called virtual functions, and each can support a unique and separate data path for I/O-related functions within the PCI Express hierarchy. Using SR-IOV with a networking device, the bandwidth of a single port (function) can be partitioned into smaller slices that are allocated to specific VMs or guests via a standard interface.
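The partitioning described above can be sketched conceptually. The helper below is hypothetical, not a real driver API; in practice the NIC hardware enforces the split, and the 2:1:1 weighting is an assumed example:

```python
# Conceptual sketch (hypothetical helper): SR-IOV partitions one physical
# port into virtual functions (VFs), each holding a slice of the port's
# bandwidth that can be assigned directly to a specific VM.
def partition_port(port_gbps, vf_weights):
    """Split a port's bandwidth across VFs in proportion to each weight."""
    total = sum(vf_weights.values())
    return {vf: port_gbps * weight / total for vf, weight in vf_weights.items()}

# One 1 GbE port split across three guests, weighted 2:1:1.
slices = partition_port(1.0, {"vf0": 2, "vf1": 1, "vf2": 1})
print(slices)  # {'vf0': 0.5, 'vf1': 0.25, 'vf2': 0.25}
```

On Linux, VFs are typically created by writing the desired count to the physical function's `sriov_numvfs` sysfs attribute; each VF then appears as its own PCIe function that can be passed through to a guest, bypassing the software virtual switch entirely.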
End-to-End Wired Security
The Intel adapter provides authentication and encryption for IPsec and LinkSec. LinkSec support is designed into the network adapter hardware, so the adapter is ready to provide LinkSec functionality as devices that support the technology become available.
IPsec provides data protection between the end-point devices of a network communication session. The IPsec offload feature is designed to offload authentication and encryption of certain types of IPsec traffic while still delivering near line-rate throughput and reduced CPU utilization.
LinkSec is an IEEE standard feature that provides data protection in the network. The IEEE 802.1AE and IEEE 802.1af protocols provide hop-by-hop data protection between two network devices along the path between the host and destination. Both network devices must support the LinkSec technology; the devices can be servers, switches and routers.