A hypervisor is software that allows a computer to run more than one operating system. This takes place through a process known as virtualization: the software “tricks” each operating system into thinking that it is the only one accessing a given hardware resource. The hypervisor sits between the physical hardware and the operating systems, presenting a virtual hardware environment to every OS running on the machine. Hypervisors also manage the flow of information between the virtualized hardware, the physical hardware, and software. Hypervisors have found acceptance in both enterprise and consumer settings.
Usually, a computer's OS runs in a highly privileged state with full access to the machine's hardware, while applications run at lower privilege levels granted by the OS. These levels of hardware access privilege are referred to as rings, with the operating system traditionally running in ring 0 and user applications in rings 1-3. This model is useful, but it breaks down when trying to run more than one OS on a single machine. The hypervisor solves this problem through virtualization.
Hypervisors create simulated sets of hardware known as virtual machines. Each virtual machine contains a guest OS, while the hypervisor itself runs on the physical hardware, or “bare metal.” Hypervisors use sophisticated techniques to manage guest OSes and ensure that they do not collide with one another. Because an OS inside a virtual machine cannot tell that it has been stripped of ring 0 privileges, it attempts restricted instructions it is no longer permitted to execute. In this scenario, the hypervisor intercepts the instruction before it reaches the hardware, executes it in a manner that causes no harm to the other OSes running on the machine, and returns the result to the originating OS. This technique, known as trap-and-emulate, is fundamental to virtualization.
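The trap-and-emulate idea above can be sketched as a toy dispatcher. This is a deliberately simplified model, not how a real hypervisor works internally (real trapping happens in hardware and microcode); all class and instruction names here are hypothetical:

```python
# Toy model of trap-and-emulate dispatch. All names are hypothetical;
# a real hypervisor traps privileged instructions in hardware, not Python.

PRIVILEGED = {"read_timer", "set_page_table"}  # instructions that must be trapped

class Guest:
    def __init__(self, name):
        self.name = name
        self.shadow_state = {"page_table": None}  # per-guest virtual copy of state

class Hypervisor:
    def __init__(self):
        self.hardware_timer = 1000  # shared physical resource

    def execute(self, guest, instruction, operand=None):
        """Guests issue instructions as if touching hardware; privileged
        ones are trapped and emulated against per-guest shadow state."""
        if instruction not in PRIVILEGED:
            return "executed directly"              # safe: run on the real CPU
        # Trap: emulate so one guest cannot disturb the others.
        if instruction == "read_timer":
            return self.hardware_timer              # read-only pass-through
        if instruction == "set_page_table":
            guest.shadow_state["page_table"] = operand  # update this guest only
            return "emulated"

hv = Hypervisor()
g1, g2 = Guest("vm1"), Guest("vm2")
hv.execute(g1, "set_page_table", 0xAB)
print(g1.shadow_state["page_table"])  # vm1 sees its own page table
print(g2.shadow_state["page_table"])  # vm2 is unaffected
```

The key point the sketch illustrates is that the privileged instruction never reaches shared hardware: each guest's effects land in its own shadow state.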
The leading advantage of hypervisor virtualization is efficiency: virtualization allows a data centre to extract more work from each piece of hardware. Where a user once had a dual quad-core machine running one server instance and one application, virtualization makes it possible to run 32 virtual servers on that same box, each hosting its own application.
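The consolidation arithmetic behind that example can be made explicit. The 4:1 VM-per-core ratio below is an assumption chosen to reproduce the figure in the text, not vendor guidance:

```python
# Back-of-envelope consolidation math for the example above.
# The vms_per_core ratio is an illustrative assumption.
sockets, cores_per_socket = 2, 4           # "dual quad-core"
physical_cores = sockets * cores_per_socket
vms_per_core = 4                           # assumed consolidation ratio
virtual_servers = physical_cores * vms_per_core
print(virtual_servers)  # 32 virtual servers on one physical box
```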
Another advantage is server consolidation. This remedies the inability to isolate applications on a single physical box: each virtual machine behaves as though it owns all the resources, which provides a quick way of separating operating systems between virtual hosts. Additionally, virtualized hardware lets administrators avoid the constant worry about drivers and other hardware-related issues that come with new devices and new servers.
Data services such as snapshots, cloning, and replication are moving out of storage systems and into the hypervisor. For example, VMware includes replication in its vSphere Essentials Plus, Standard, Enterprise, and Enterprise Plus editions. Replication in the hypervisor is an ideal fit for small and medium-sized businesses because of its cost and complexity advantages, and hypervisor-based data services change the way such businesses select storage hardware by enabling them to implement data replication. The first advantage of hypervisor-based replication is that it is VM-aware: even if multiple VMs share the same volume, individual VMs can be selected for replication, substantially improving storage efficiency and simplifying capacity requirements at the replication site. A second advantage is that it can replicate to any storage device, giving the administrator considerable flexibility in choosing storage solutions both locally and for disaster recovery. Since large enterprises often run storage solutions from several vendors, the flexibility to replicate to a single vendor at the DR site is valuable.
Cost effectiveness
Replication is often an expensive add-on sold by storage vendors, and it typically requires the purchase of a second, identical system. There is also the extra margin the vendor earns by integrating the hardware with the software, and the cost of the second storage system plus support and maintenance pushes the price higher still. Using the replication utility built into the hypervisor reduces these costs significantly: there is no separate “value add” option to purchase, hypervisor replication is ready to use, and a secondary site can be established with a single cost-effective virtual machine as the DR target.
Disadvantages
Despite the continual movement of data services into the hypervisor, hardware still matters in a number of circumstances. Storage vendors, for instance, must move beyond the software features now common in hypervisors, such as replication, snapshots, and cloning. First, storage should not be evaluated on the cheapest per-GB cost alone; evaluation should also weigh capabilities such as data protection, performance, and ease of use. Hypervisor-bundled storage often ties the user to a single vendor, which can force the user to buy a storage system fully populated with drives even when that capacity is not required. Users therefore either buy smaller systems that they outgrow or buy extra capacity up front, missing out on falling capacity prices over time; both options cost more in the long run. Hardware-based solutions remain attractive because they offer the flexibility the user requires and can be scaled up as the user grows.
Data protection is another issue with hypervisor virtualization. Some applications require isolation in order to operate effectively, and running one server per application, as in a traditional data centre, spreads out the risk and eases the DR concerns administrators have about server failures.
Applications on hypervisors run slower than on bare hardware, hurting performance. Beyond that, microkernelized hypervisors require a parent OS to be installed so that the hypervisor can provide hardware access to the guest OSes. This configuration leaves the whole system vulnerable to downtime: if the host system crashes, all guest OSes go down with it.
Availability has also proven to be a serious concern for Hyper-V environments. The critical security updates provided by vendors require the system to reboot, which in turn requires all the virtual machines to be migrated to another node in the cluster, introducing downtime in the process. This is detrimental for mission-critical operations.
Comparison
Hyper-V, ESXi and Virtage
Hyper-V and ESXi are undoubtedly the market's leading virtualization products. VMware is the industry leader, with a long history of helping organizations deploy virtual environments successfully. Microsoft's Hyper-V is catching up and integrates with most Windows solutions; it forms, for instance, an integral part of the Windows 8 and Windows Server 2012 operating systems, providing familiar interfaces and features that dramatically simplify virtualizing Microsoft workloads and managing virtual environments. Hitachi's Virtage, a collaborative product of Hitachi and Red Hat, offers the ability to run multiple instances of server virtualization software on top of LPARs. The product suits multi-tenant server virtualization environments, with the ability to isolate the effects of load-level fluctuations and faults.
Microsoft's Hyper-V, VMware's ESXi, and Hitachi's Virtage all target enterprise deployments, and each offers features that let users start small and add capabilities as the virtualization environment grows.
Windows Server 2012 introduces Hyper-V Replica, giving Hyper-V unlimited host-to-host replication of virtual machines without requiring shared storage. VMware's ESXi offers VM migration, with the ability to move running virtual machines between hosts without affecting applications or users.
Contrast
The main difference between Hitachi's Virtage, Microsoft's Hyper-V, and VMware's ESXi is their virtualization class. Virtage is an embedded, hardware-based virtualization technology in which the hypervisor is embedded in the blade server's firmware. According to Hitachi, this results in a server that is easy to virtualize and eliminates the thorny driver issues associated with the two software-based hypervisors. Hyper-V and ESXi are classified as bare-metal hypervisors: software systems that run directly on the host's hardware to control it and manage guest operating systems. Virtage is Hitachi's original server logical partitioning functionality in BladeSymphony servers.
While Hyper-V and ESXi are suited to both small and large enterprises, Hitachi's Virtage virtualization technology targets large, established organizations alone. The company has yet to offer a solution for small enterprises, since its technology is hardware-based.
Pricing
Hyper-V is free for Windows Server 2008 and Windows Server 2012 users, apart from System Center management costs, and Microsoft's System Center manages physical as well as virtual environments, covering both Hyper-V and VMware. Hyper-V provides free migration capabilities via Live Migration in Windows Server; with VMware ESXi, vMotion incurs additional charges in both the Standard and Foundation editions. For instance, comparing five servers, Microsoft charges a System Center management fee of $7,520 plus System Center Configuration Manager, System Center Operations Manager, and System Center Data Protection Manager costs of $580 each, bringing the total to $9,260. ESXi, by contrast, charges a vCenter plus two-year SA cost of $7,318 and a two-processor Infrastructure Enterprise license plus two-year SA cost of $42,125, bringing the total to $49,443, far higher than Microsoft's. Hitachi's Virtage, embedded in BladeSymphony, is not separate software that can be purchased and installed on arbitrary hardware.
The configuration maximums for the three products are broadly similar, with some variation in Hitachi's Virtage since it is hardware-based. Hyper-V and ESXi share the same host characteristics: 320 logical processors, 4 TB of physical memory, 64 virtual CPUs per VM, a maximum virtual disk of 64 TB, and 1 TB of memory per VM. Since Virtage is a hardware partitioning system provided via firmware, it is available only on Hitachi's BladeSymphony 200, 500, and 320 server products.
Performance and availability
With Hyper-V Server 2012, it is possible to create highly available Hyper-V clusters and leverage advanced features such as VM live migration, storage migration, VM replication, and network virtualization. As the number of VMs grows, management can be scaled by deploying System Center 2012. VMware, by contrast, has a central management system: while the VMware ESXi hypervisor alone virtualizes servers, vCenter Server is required to implement high availability and activate features such as live migration, replication, and distributed virtual switching. These capabilities become available on a sliding scale across the higher vSphere editions. Hitachi's Virtage technology loads the hypervisor into the firmware of the service processor located on the blade server's chassis. This allows a single workload to span several blade systems in the same chassis while providing low-level access to the hardware layer, which increases performance and offers protection from many types of security attacks.
The Virtage technology works with Red Hat Enterprise Linux and Microsoft Windows Server 2003 without any additional configuration. According to the company, this compatibility increases reliability, transparency, and performance, a feature absent from VMware and Hyper-V servers.
Effect of virtualization on the total cost of ownership
Before virtualization, IT ran one application per physical server, so cost per server was a quick way to evaluate costs; the relationship was one-to-one. With virtualization, many applications run on each physical server (each in its own virtual machine), giving a many-to-one relationship, so cost-per-server comparisons are no longer practical. A more realistic way to compare costs is cost per application, because it reflects the actual cost of running all the applications needed to sustain business processes. Virtualization ultimately reduces the overall total cost of ownership because it increases application-per-server density, and density matters greatly in a many-to-one relationship. Specifically, virtualization provides the following features, which improve the total cost of ownership in an organization.
Memory oversubscription
Multiple layers of technology, including page sharing, reclamation of unused memory, and memory compression, make utilization of physical RAM more efficient. Memory oversubscription is useful whether applied continuously or only during periods of peak demand.
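Page sharing, one of the techniques listed above, can be illustrated with a toy deduplication model. Real hypervisors do this per 4 KiB page inside the VMM with copy-on-write semantics; the class and method names below are hypothetical:

```python
import hashlib

# Toy illustration of transparent page sharing, one technique behind
# memory oversubscription. Identical guest pages are backed by a single
# physical copy, keyed by a content hash.
class PageSharer:
    def __init__(self):
        self.store = {}      # content hash -> single shared physical copy
        self.mappings = {}   # (vm, page_no) -> content hash

    def map_page(self, vm, page_no, contents: bytes):
        h = hashlib.sha256(contents).hexdigest()
        self.store.setdefault(h, contents)   # identical pages stored once
        self.mappings[(vm, page_no)] = h

    def physical_pages_used(self):
        return len(self.store)

ps = PageSharer()
zero_page = b"\x00" * 4096
ps.map_page("vm1", 0, zero_page)
ps.map_page("vm2", 0, zero_page)             # deduplicated against vm1's copy
ps.map_page("vm2", 1, b"unique" * 100)
print(ps.physical_pages_used())  # 2 physical copies back 3 guest pages
```

Zero-filled pages, as in the example, are the classic win for this technique: freshly booted guests map many of them, so they collapse to one physical copy.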
DRS Resource pools
Dynamic load balancing of virtual machines across a cluster gives applications the resources they need when they need them. This acts as a safety net that allows administrators to run individual servers at high utilization levels while still meeting service-level agreements.
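The core placement idea can be sketched in a few lines. Real DRS weighs CPU, memory, and affinity rules across the cluster; this toy uses a single load metric, and the host names are made up:

```python
# Minimal sketch of the placement decision behind dynamic load balancing:
# put each new VM on the currently least-loaded host in the cluster.
def place_vm(hosts, vm_load):
    """hosts: dict of host name -> current load fraction. Returns chosen host."""
    target = min(hosts, key=hosts.get)   # least-loaded host wins
    hosts[target] += vm_load             # account for the new VM's load
    return target

cluster = {"esx1": 0.50, "esx2": 0.20, "esx3": 0.35}
chosen = place_vm(cluster, 0.10)
print(chosen)  # the least-loaded host, esx2
```

Repeating this decision continuously, and migrating VMs when hosts drift out of balance, is what turns the one-shot placement above into the "safety net" the text describes.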
Logical resource pools
Virtualization provides the ability to divide host clusters into pools of CPU, memory, networking, and storage resources and to assign those pools to business units so they can be managed independently, without wastefully dedicating hardware.
High performance
Virtualization gives users the ability to meet the I/O requirements of additional virtual machines by dynamically allocating more resources and larger processing time slices to increase performance. In addition, virtualization platforms such as vSphere deliver high I/O throughput and can handle the I/O requirements of many virtual machines requesting virtual resources.
Being able to run more applications on fewer physical servers directly lowers the total cost of ownership by reducing hardware, software, power, cooling, and space costs. Total cost of ownership is derived by comparing the total cost of software, hardware, and infrastructure against the number of virtual machines per host; the result is the cost per application, which is considerably lower in virtualized environments than in stand-alone server settings.
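The cost-per-application comparison described above can be made concrete with a small worked example. All dollar figures below are illustrative assumptions, not quoted prices:

```python
# Hedged sketch of the cost-per-application metric. Every figure here is
# a made-up input chosen only to show the shape of the comparison.
def cost_per_app(server_cost, infra_cost, license_cost, apps_per_server):
    """Total cost of one server spread across the applications it hosts."""
    return (server_cost + infra_cost + license_cost) / apps_per_server

# One app per physical box (pre-virtualization, one-to-one):
physical = cost_per_app(8000, 2000, 1000, apps_per_server=1)
# Ten apps per host (virtualized, many-to-one; licenses cost more per host):
virtualized = cost_per_app(8000, 2000, 4000, apps_per_server=10)
print(physical, virtualized)  # 11000.0 1400.0
```

Even with higher per-host licensing, the denominator (application density) dominates, which is the point the section makes about density in a many-to-one relationship.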
Implementing hypervisors also affects aspects of administration such as training and configuration. Virtualization undoubtedly blurs the lines between network administrators, security teams, server administrators, and storage engineers, so proper integration is needed to ensure the various teams align with the new operational order, which may take the form of re-training and re-organization. The network and server security landscape changes drastically, meaning new protocols for traffic monitoring must be introduced, and management protocols change as well: when a virtual server is shifted to a different physical host, accounting for the change becomes complex and administrators need to be re-trained accordingly.