Disadvantages of Hyper-V virtualization. Review of virtualization technologies for VPS. Flexibility to manage the entire infrastructure


The purpose of this article is to study the performance of 1C on virtual machines running on the ESXi and Hyper-V platforms. For clarity of evaluation, a hardware server was added to the tests as a reference sample.

The idea for this study arose from observing performance problems with 1C configurations that use managed forms in a virtual environment. Where, for example, the UT 10.3 configuration worked perfectly in a virtual environment, UT 11.0 slowed down sharply and caused user dissatisfaction, even though there were no resource bottlenecks: 1C simply worked slowly. The only solution that helped in most cases was to move the DBMS and 1C:Enterprise server roles to a physical server, and this naturally entails higher costs, reduced fault tolerance, and so on. At the moment, most companies still continue to run 1C in a virtual environment; many of them complain about poor performance but do not allocate funds for physical servers, hoping that 1C will optimize the product for virtualization or that virtualization itself will get better.

The second prerequisite for this study was Microsoft's announcement of support for 2nd-generation virtual machines in the hypervisor built into Windows Server 2012 R2. Accordingly, it became interesting how these new machines would behave in general and with 1C in particular, since their description is quite appealing: they are free of the emulation of legacy devices such as IDE, BIOS, I/O ports, interrupt controllers, and so on. When the guest OS of such a virtual machine boots, it detects that it is running in a virtual environment and communicates directly with the hypervisor. Installing, loading and rebooting the operating system is much faster than on first-generation machines.

To conduct the study, four different configurations based on the 1C:Enterprise 8.3 platform (8.3.5.1186) were selected:

  • Trade Management 11 (UT 11);
  • ERP 2.0;
  • Enterprise Accounting;
  • SCP.

And four platforms on which their performance was measured:

  • VMware ESXi 5.1;
  • physical server;
  • Windows Server 2012 R2 Hyper-V (Gen 1);
  • Windows Server 2012 R2 Hyper-V (Gen 2).

The hardware server's resources were as follows:

  • CPU – Intel Core i5-3330 (3.0 GHz);
  • RAM – 16 GB DDR3;
  • storage – 240 GB Intel SSD.

The hypervisors were deployed on the same hardware platform, with 8 GB of RAM and 4 virtual processors allocated. All databases except ERP 2.0 are real databases the company works with, filled with real data. MSSQL 2012 was used as the DBMS. The operating system on both the physical server and the virtual machines was Windows Server 2012 R2. The 1C:Enterprise application server and the DBMS were installed on the same server and operated in shared-memory mode.

Practical testing and test results

Trade Management 11

Typical operations | Hyper-V 2012 (1st gen VM) | Hyper-V 2012 (2nd gen VM) | VMware ESXi | Hardware server
Configuration startup time, sec | 58 | 61 | 20 | 18
Finance report – cash flow analysis (per year), sec | 9 | 5 | 2.5 | 5
Marketing – marketing and planning reports – ABC/XYZ product analysis, sec | 30 | 41 | 20 | 27
Sales – sales reports – revenue and cost of sales, sec | 20 | 15 | 7.5 | 7
Procurement – inventory and purchase reports – warehouse balance statements, sec | 14 | 8 | 7 | 10
Warehouse – warehouse reports – list of goods in warehouses, sec | 24 | 3 | 1 | 2
Warehouse – movement of goods, sec | 9 | 15 | 6 | 3
Re-posting Q3 2014 (3,381 documents), sec | 3252 | 2987 | 1436 | 2003

Table 1 – Results of measurements of the speed of operations in the Trade Management 11 configuration


Diagram 1 – Re-posting the quarter

ERP 2.0
Typical operations | Hyper-V 2012 (1st gen VM) | Hyper-V 2012 (2nd gen VM) | VMware ESXi | Hardware server
Gilev test score, 8.3 (higher is better) | 17.12 | 17.12 | 25 | 24.15
Configuration startup time, sec | 64 | 65 | 36 | 40
Marketing and planning – marketing and planning reports | 1 | 1 | 2 | 1
Marketing and planning – marketing and planning reports – customer dependency analysis (CDA) | 1 | 1 | 1 | 1
Sales – status of securing orders | 1 | 2 | 1 | 2
Purchasing – inventory and purchase reports – balances of goods accepted on commission (year) | 2 | 6 | 1 | 2
Warehouse – warehouse reports – list of goods in warehouse (year) | 2 | 2 | 1 | 2
Warehouse – order for internal consumption | 1.5 | 1.5 | 1 | 2
Production – production report – condition of operating facilities | 1 | 1 | 1 | 2
Salary – salary reports – payslip for employees for the year | 21 | 22 | 16.5 | 22
Finance – financial statements – cash flow analysis (year) | 1.5 | 1.5 | 1 | 2
Finance – financial result – month-end closing | 135 | 140 | 121 | 158
Budgeting – budgeting reports – turnover sheet by budget items (year) | 22 | 9 | 6 | 7
International financial accounting – IFA report – subconto analysis (year) | 2 | 5 | 1 | 2

All values except the Gilev test score are times in seconds.

* The best results of the practical test are highlighted in green.


Table 2 – Results of measurements of the speed of ERP 2.0 configuration operations



Diagram 2 – Gilev test score (8.3)

Enterprise Accounting

Typical operations | Hyper-V 2012 (1st gen VM) | Hyper-V 2012 (2nd gen VM) | VMware ESXi | Hardware server
Configuration startup time, sec | 8 | 19 | 9.4 | 11
Accounting, taxes, reporting – posting report (year) | 3 | 8 | 3 | 5
Directories and accounting settings – account turnover (year) | 10 | 3 | 1 | 2
Directories and accounting settings – analysis of invoices for the year | 2 | 2 | 1 | 2
Directories and accounting settings – balance sheet (year, all indicators) | 2 | 2 | 1 | 2

* The best results of the practical test are highlighted in green.


Table 3 – Results of measurements of the speed of operations in the Enterprise Accounting configuration

SCP

Typical operations | Hyper-V 2012 (1st gen VM) | Hyper-V 2012 (2nd gen VM) | VMware ESXi | Hardware server
Configuration startup time, sec | 44 | 30 | 20.9 | 30
Finance report – cash flow analysis (for the year) | 3 | 2 | 0.5 | 1
Reports – sales – ABC/XYZ analysis (year) | 76 | 92 | 73 | 80
Reports – costs – cost allocation analysis (year) | 27 | 31 | 16 | 22
Reports – procurement – plan vs. actual procurement analysis (year) | 6 | 8 | 5.3 | 10
Reports – inventory – goods in warehouses (year) | 2 | 1 | 1 | 1
Buyer's order | 1 | 1 | 1 | 1
Restoring the tax accounting sequence (simplified taxation system) | 5 | 4 | 1 | 1

* The best results of the practical test are highlighted in green.


Table 4 – Results of measurements of the speed of operations in the SCP configuration



Diagram 3 – Time to launch configurations on various platforms in seconds

Conclusions

  1. Hyper-V virtual machines of the first and second generations are practically indistinguishable. Their performance differed in a number of tests, but it is impossible to say with certainty which generation works better with 1C, since each generation alternately showed the better result. It is not worth migrating to second-generation machines in the hope of increasing 1C performance.
  2. The performance measurements on VMware were unexpected. In most cases 1C on a virtual machine ran faster than on the hardware platform, sometimes with simply incredible superiority: for example, re-posting a quarter in the UT 11 configuration on the hardware server took about 40% longer than on the ESXi virtual machine. The Hyper-V virtual machines lagged behind ESXi by more than 108% and 126% for the 2nd and 1st generations, respectively. Most likely this is explained by VMware's better hardware drivers compared to Microsoft's equivalents. It is also possible that ESXi caches data and therefore processes information faster.

The next stage of the research is to deploy production 1C databases on virtual machines under the ESXi hypervisor and to collect user feedback after some time. That will be the most telling indicator for deciding whether this hypervisor really is that good for running 1C in a virtual environment.


Lately, users hear more and more about the concept of "virtualization". Using it is considered cool and modern, but not every user clearly understands what virtualization is in general and in specific cases. Let's try to shed light on this issue, focusing on server virtualization systems. These technologies are cutting-edge today because they offer many advantages in terms of both security and administration.

What is virtualization?

Let's start with the simplest thing: the definition of the term itself. We note right away that on the Internet you can find and download manuals on this subject, such as the "Server Virtualization for Dummies" reference book in PDF format. But when studying such material, an unprepared user may run into a large number of incomprehensible definitions. Therefore, we will try to clarify the essence of the issue, so to speak, on our fingers.

First of all, when considering server virtualization technology, let's focus on the initial concept. What is virtualization? Following simple logic, it is not difficult to guess that the term describes the creation of an emulated likeness of some physical or software component. In other words, it is an interactive (virtual) model that does not exist in reality. However, there are some nuances here.

Main types of virtualization and technologies used

The fact is that the concept of virtualization has three main directions:

  • virtualization of presentations;
  • virtualization of applications;
  • virtualization of servers.

The best simple example of the first is the use of so-called terminal servers, which provide users with their computing resources. The user's program is executed on the server, and the user sees only the result. This approach reduces the system requirements for the user's terminal, whose configuration may be outdated and unable to cope with the required calculations.

For applications, such technologies are also used quite widely. For example, this could be the virtualization of a 1C server. The essence of the process is that the program runs on one isolated server, and a large number of remote users gain access to it. The software package is updated from a single source, and the entire system gains the highest level of security.

Finally, server virtualization implies the creation of an interactive computing environment that completely replicates the real configuration of its "hardware" counterparts. What does this mean? By and large, that on one computer you can create one or more additional ones that will work in real time as if they really existed (server virtualization systems will be discussed in more detail a little later).

In this case, it does not matter at all which operating system is installed on each such terminal. By and large, this has no effect on either the main (host) OS or the virtual machine. It is similar to the interaction of computers with different operating systems on a local network, except that here the virtual terminals may not be connected to each other at all.

Equipment selection

One of the clear and undeniable advantages of virtual servers is the reduction of the material costs of creating a fully functional hardware and software structure. For example, suppose there are two programs that each require 128 MB of RAM for normal operation but cannot be installed on the same physical server. What to do in this case? You can purchase two separate servers with 128 MB each and install the programs separately, or you can buy one server with 256 MB of RAM, create two virtual servers on it, and install one application on each.

If anyone has not caught on yet: in the second case RAM is used more rationally, and the material costs are significantly lower than when purchasing two independent devices. But it does not stop there.

Security benefits

As a rule, a server structure implies several devices performing specific tasks. For security reasons, system administrators install Active Directory domain controllers and Internet gateways not on one server, but on different ones.

In the event of an external intrusion attempt, the gateway is always attacked first. If a domain controller is installed on the same server, the likelihood of damage to the AD databases is very high. In the case of targeted actions, attackers can take possession of all of this. And recovering data from a backup copy is quite a troublesome task, even though it takes relatively little time.

Approaching the issue from the other side, we can note that server virtualization allows you to bypass installation restrictions and quickly restore the desired configuration, because the backup can be stored within the virtual machine itself. True, it is believed that server virtualization based on Windows Server (Hyper-V) looks less reliable in this respect.

In addition, the issue of licensing remains quite controversial. So, for example, for Windows Server 2008 Standard it is possible to run only one virtual machine, for Enterprise - four, and for Datacenter - a generally unlimited number (and even copies).

Administration issues

The advantages of this approach, beyond security and cost reduction, should first of all be appreciated by the system administrators who maintain these machines and local networks, even when virtualizing servers with Windows Server.

Creating system backups becomes much simpler. Usually a backup requires third-party software, and restoring from optical media or even over the Internet takes longer than the disk subsystem can deliver. A virtual server, on the other hand, can be cloned in just a couple of clicks, and a working system can then be quickly deployed even on "clean" hardware, after which it will run without failures.

In VMware vSphere, server virtualization allows you to create and save so-called snapshots of a virtual machine: special images of its state at a certain point in time. They can be represented as a tree structure inside the machine itself, so restoring a virtual machine to working order is much easier. You can select restore points arbitrarily, rolling the state back and then forward again (Windows systems can only dream of this).
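For illustration, here is a minimal Python sketch that creates such a snapshot through the pyVmomi SDK; the host, credentials, and VM name are hypothetical placeholders, and certificate verification is disabled only for brevity in a lab setting.

```python
# Minimal snapshot sketch using the pyVmomi SDK (pip install pyvmomi).
# Host, credentials, and VM name are placeholders, not real values.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # skip certificate checks, lab use only
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Walk the inventory and pick the VM by its display name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "test-vm")
    # Take a point-in-time snapshot without dumping guest memory.
    task = vm.CreateSnapshot_Task(name="before-change",
                                  description="state before maintenance",
                                  memory=False, quiesce=False)
    print("Snapshot task started:", task.info.key)
finally:
    Disconnect(si)
```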

Server virtualization programs

If we talk about software, there are a huge number of applications that can be used to create virtual machines. In the simplest case, native tools of Windows systems are used, with the help of which server virtualization can be performed (Hyper-V is a built-in component).

However, this technology also has some disadvantages, so many people prefer software packages like VMware, VirtualBox, QEMU or even MS Virtual PC. Although these applications have different names, the principles of working with them do not differ much (except in details and certain nuances). Some of these applications can also virtualize Linux servers, but those systems will not be considered in detail here, since most of our users still work with Windows.

Server virtualization on Windows: the simplest solution

Starting with Windows 8 on the desktop (and Windows Server 2008 on the server side), Windows includes a built-in component called Hyper-V, which makes it possible to create virtual machines using the system's own tools, without third-party software.

As in any other application of this level, in this package you can model the future machine by specifying the hard drive size, the amount of RAM, the presence of optical drives, and the desired characteristics of the graphics or sound chip: in general, everything that is present in the hardware of a regular server terminal.

But here you need to pay attention to enabling the module itself: Hyper-V server virtualization cannot be performed without first enabling this component in the Windows system.

In some cases, it may be necessary to activate support for the corresponding technology in the BIOS.
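As a sketch of how the enabling step can be automated, the following Python fragment drives the standard DISM tool; it assumes an elevated prompt on a recent Windows version, and uses the commonly known feature name rather than anything specific to this article.

```python
# Enable the Hyper-V feature on Windows with the built-in DISM tool.
# Run from an elevated (administrator) prompt; a reboot is needed afterwards.
import subprocess

result = subprocess.run(
    ["dism.exe", "/Online", "/Enable-Feature",
     "/FeatureName:Microsoft-Hyper-V", "/All"],
    capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    print("DISM reported an error:", result.stderr)
```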

Use of third party software products

Nevertheless, even with the built-in means of virtualizing Windows servers, many experts consider this technology somewhat ineffective and even overly complicated. It is much easier to use a ready-made product in which similar actions are performed with automatic selection of parameters, and the virtual machine has greater capability and flexibility in management, configuration and use.

We are talking about software products such as Oracle VirtualBox and VMware Workstation (VMware vSphere), among others. For example, a VMware virtualization server can be set up so that the computer analogues created inside the virtual machine work separately, independently of one another. Such systems can be used in training, for testing software, and so on.

By the way, it is worth noting separately that when testing software in a virtual machine environment you can even use programs infected with viruses: they will show their effect only in the guest system and will not affect the main (host) OS in any way.

As for the process of creating a computer inside the machine, in VMware vSphere, as in Hyper-V, server virtualization is based on a "Wizard". However, compared to Windows systems the process looks somewhat simpler, since the program can offer templates or automatically calculate the required parameters of the future computer.
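To give an idea of what such a wizard produces under the hood, here is a rough pyVmomi sketch that creates an empty VM shell; the datastore name, inventory layout, and sizes are assumptions for illustration only.

```python
# Rough sketch: create an empty VM shell through pyVmomi.
# Datastore name, inventory layout, and sizes are illustrative assumptions.
from pyVmomi import vim

def create_vm(si, name="demo-vm"):
    content = si.RetrieveContent()
    datacenter = content.rootFolder.childEntity[0]             # first datacenter
    pool = datacenter.hostFolder.childEntity[0].resourcePool   # first host/cluster pool
    spec = vim.vm.ConfigSpec(
        name=name,
        memoryMB=2048,               # 2 GB of RAM
        numCPUs=2,                   # 2 virtual CPUs
        guestId="otherGuest64",      # generic 64-bit guest type
        files=vim.vm.FileInfo(vmPathName="[datastore1] " + name),
    )
    # Returns a vim.Task; completion can be awaited by polling task.info.state.
    return datacenter.vmFolder.CreateVM_Task(config=spec, pool=pool)
```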

The main disadvantages of virtual servers

But no matter how many advantages server virtualization gives a system administrator or an end user, such programs also have significant disadvantages.

Firstly, you can't jump over your own head. That is, a virtual machine uses the resources of the physical server (computer), not in full but in a strictly limited amount. Thus, for the virtual machine to work properly, the initial hardware configuration must be powerful enough. On the other hand, buying one powerful server will still be much cheaper than purchasing several with lower specifications.

Secondly, although it is believed that several servers can be combined into a cluster so that if one of them fails you can "move" to another, this cannot be achieved here in Hyper-V. And that looks like a clear disadvantage in terms of fault tolerance.

Thirdly, moving resource-intensive DBMSs or systems like Mailbox Server, Exchange Server and the like into the virtual space remains clearly controversial: in that case noticeable slowdowns will be observed.

Fourthly, for such an infrastructure to operate correctly, it cannot use only virtual components. In particular, this applies to domain controllers: at least one of them must be "hardware" and reachable on the network from the start.

Finally, fifthly, server virtualization carries another danger: failure of the physical host and its operating system entails the automatic shutdown of all related components. This is the so-called single point of failure.

Summary

However, despite some disadvantages, such technologies clearly have more advantages. If you look at the question of why server virtualization is needed, there are several main aspects:

  • reducing the amount of hardware equipment;
  • reduction of heat generation and energy consumption;
  • reduction of material costs, including the purchase of equipment, payment for electricity, acquisition of licenses;
  • simplification of maintenance and administration;
  • the ability to “migrate” the OS and the servers themselves.

Actually, the advantages of using such technologies are much greater. Although there may seem to be some serious disadvantages, with proper organization of the entire infrastructure and the necessary controls in place to ensure smooth operation, such situations can be avoided in most cases.

Finally, for many the question of choosing software and of the practical implementation of virtualization remains open. But here it is better to turn to specialists for help, since our task here was solely a general introduction to server virtualization and to the feasibility of implementing such a system.

Today, probably every administrator has wondered what virtual machines are and how they can be used in their enterprise. Many probably already use virtual machines as their main servers, and today we will figure out what the benefit of server virtualization is; it is actually very significant.

First, let's figure out what virtual machines are. In server virtualization the key piece is called a hypervisor (a virtualization environment): software that emulates your hardware and thereby makes it possible to create a separate platform, one might say a separate computer, inside your computer, onto which you can accordingly install (almost) any other operating system.

Today we will talk specifically about server virtualization; virtualization on a home computer was covered in the article on the VirtualBox virtual machine, since these are two completely different topics.

Now let's move on to all the advantages of using virtual machines in your organization as servers.

Pros of virtual machines

1. Space in the server room

The first advantage to note is that space is simply freed up in your server room or in the office where the servers stand. With virtual servers, far less room is required: only one or two powerful servers are needed.

2. Reduce noise and energy consumption

If we have a reduced number of physical servers, we have a corresponding reduction in power consumption, heat dissipation and, of course, a reduction in noise. This, by the way, can serve as a good reason for introducing virtual machines.

3. Cost reduction

Another good reason to organize virtualization in your enterprise is the fact that it will cost you much less than if you bought physical servers. This is a powerful argument for management!

4. Possibility of dedicating servers for “old” OS and software

There is no need to allocate a separate server for specific software or operating systems. In other words, suppose you use old software, or old operating systems on which that oldest software runs, and you cannot give it up because it is needed for production processes. Here a hypervisor comes to the rescue: you simply create a virtual machine instance, install the OS and the programs you need into it, and do without a separate physical server.

5. Reduced labor costs for data backup

Another significant advantage, in my opinion, is that when using virtual machines you only need to back up one physical server, or just the virtual hard disk files on which all the guest operating systems live. It seems to me that backing up one server is much easier than, for example, 10 servers! And if you look more closely at any hypervisor (VMware or Hyper-V), they have many different functions, including backup, replication of virtual machines, and much more.
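As a toy illustration of the "just copy the disk files" idea, here is a naive Python sketch; the paths are hypothetical, the guests should be shut down first, and a real setup would rely on VSS-aware backup or the hypervisor's own replication instead.

```python
# Naive backup sketch: copy Hyper-V virtual disk files to a backup share.
# Paths are hypothetical; guests should be shut down first (real setups
# would rely on VSS-aware backup or the hypervisor's replication instead).
import shutil
from pathlib import Path

SRC = Path(r"C:\Hyper-V\Virtual Hard Disks")
DST = Path(r"\\backup-server\vm-backups")

for vhd in SRC.glob("*.vhdx"):
    target = DST / vhd.name
    print(f"Copying {vhd} -> {target}")
    shutil.copy2(vhd, target)  # copy the disk file with its metadata
```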

6. Flexibility in managing the entire infrastructure

Another advantage is the centralized management of these virtual machines: you connect to the management console or open a snap-in on the host server and can easily, for example, reboot any virtual server. Remember how long it takes to reboot a physical server? Rebooting a guest operating system is much faster.
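As a small example of such centralized control, the sketch below restarts a guest on a remote Hyper-V host by calling the standard Restart-VM cmdlet from Python; the VM and host names are placeholders.

```python
# Restart a guest VM on a remote Hyper-V host from one management point.
# VM and host names are placeholders; the Hyper-V PowerShell module must
# be available, and the caller needs admin rights on the host.
import subprocess

def restart_vm(vm_name: str, host: str = "hv-host01") -> None:
    subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         f"Restart-VM -Name '{vm_name}' -ComputerName '{host}' -Force"],
        check=True)

restart_vm("file-server")
```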

7. Fault tolerance increases

In other words, if something happens inside a virtual machine, you can simply, and most importantly quickly, restore the virtual machine's hard drive from the archive. And how long would that take on a physical server? Longer, I think. If someone asks, "What if the host itself fails?": we have an archive of all the virtual machines, so we only need to install the operating system (or the bare-metal hypervisor directly) on a new server, add the hypervisor role in the case of Hyper-V, and restore all the virtual machine disks; you don't even need to configure anything! Now imagine how many manipulations a failed physical server would require before it works the way it did before.

8. Reduced equipment wear

Another advantage: if some part of your physical server fails, for example the power supply burns out or a hard drive is damaged, you have to buy a replacement or, if you have spares in stock, swap it in; in the case of virtual machines, this is simply unnecessary.

9. Hardware scalability

Now it should be noted that if you suddenly decide, say, to add RAM to all your servers (a server upgrade, so to speak), you have to open them all up and insert the hardware, and if they are mounted in a rack, unscrew the whole thing, and so on; you must admit, that is stressful. In the case of virtual machines, you add physical memory to ONE server, and on the virtual machines the change is not just easy but very easy: literally a few clicks and you are done. By the way, this also applies to other parameters, such as hard drive size, the number of network adapters, and others.
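A sketch of those "few clicks", done from Python through the standard Hyper-V cmdlets; the VM name and sizes are placeholders, and the comments note the usual assumptions about what can be changed live.

```python
# Resize a Hyper-V guest from the host via the standard cmdlets.
# The VM name and sizes are placeholders; changing the vCPU count requires
# the VM to be off, while memory can often be adjusted on newer hosts.
import subprocess

for command in [
    "Set-VMMemory -VMName 'app-server' -StartupBytes 8GB",
    "Set-VMProcessor -VMName 'app-server' -Count 4",
]:
    subprocess.run(["powershell", "-NoProfile", "-Command", command],
                   check=True)
```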

10. Dynamic infrastructure

We could have noted this point first, but never mind. The point is that with server virtualization you can expand your server fleet quickly; you get, so to speak, a dynamic infrastructure. Want a couple of new servers in operation? Add them. Want them gone? Remove them. With physical servers this does not work: adding one means finding a place to put it, planning the funds, the actual purchase, delivery, and so on.

Conclusion

From all this we have learned why you should use virtualization in your enterprise and why it is so profitable and convenient; I think you have probably understood it all yourself. Personally, I think that soon virtually everyone will prefer virtual servers, both small and large organizations; in fact, server virtualization is already actively used by large enterprises today. So I think everyone will soon be using virtual machines to some extent.

Server hypervisors

Now let's talk about the implementation of all this, i.e. about the products with which you can implement server virtualization. Two very popular hypervisors come to mind, of course: VMware ESX (or ESXi) and, also of course, Microsoft Hyper-V. These products come both as standalone systems, for example Microsoft Hyper-V Server 2008, and as components included in an operating system, in our case Microsoft Windows Server 2008 (VMware likewise ships both as a server system and as ordinary software installed on an existing OS).

In the case of a server system you simply connect to it through a console; in the case of a virtual machine installed on an OS, you open the necessary snap-in, or launch the corresponding console in the case of VMware.

These hypervisors support many types of guest operating systems, especially VMware, though Hyper-V is not far behind; it simply has fewer officially supported OSes, which means no vendor support for the rest, although almost anything can be installed.

If we talk about disadvantages, oddly enough they exist too: to implement virtualization effectively in your organization you need to purchase a powerful server, or several powerful servers if you have a large enterprise.

But if you have a small organization, then a not-very-powerful server will do; just understand that you will install fewer guest operating systems on it. By the way, when choosing a server for virtual machines, pay attention to the amount of RAM: the more, the better! In other words, the amount of RAM determines how many guest OSes you can install. For a small business that needs, say, only two or three virtual servers, a host with 8 gigabytes of RAM will do. You choose how much RAM each virtual server gets: in our case, for example, we give 2 gigabytes to the host and create three virtual servers with 2 gigabytes each, which uses up all our 8 gigabytes. Alternatively, you can create two virtual servers, say the first with 4 gigabytes and the second with 2. In general, you decide what you need.
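The sizing arithmetic from this paragraph can be written down as a trivial Python check; the VM names are hypothetical, and the figures follow the 8 GB example above.

```python
# Quick sanity check that the planned guests fit into the host's RAM.
# Figures follow the example in the text: an 8 GB host, 2 GB kept for it.
HOST_RAM_GB = 8
HOST_OS_RESERVE_GB = 2
guests = {"dc01": 2, "file01": 2, "app01": 2}  # hypothetical VM names

available = HOST_RAM_GB - HOST_OS_RESERVE_GB
needed = sum(guests.values())
print(f"Available for guests: {available} GB, requested: {needed} GB")
if needed > available:
    print("The plan does not fit: shrink the VMs or add RAM")
else:
    print(f"The plan fits with {available - needed} GB to spare")
```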

That is basically all I wanted to tell you about virtualization. If you are going to implement virtual servers, be sure to plan everything and study the license agreement of the product you prefer.

I jumped a little there from one aspect to another. =)

Look...

You are right that for a small office it comes down to either building a cluster or accepting a single point of failure in the form of one physical server running the hypervisor. It's stupid to argue with that. Moreover, even with a cluster there is in most cases still a single point of failure: the storage where the data physically resides. Simply because replicated SANs and the like are generally not a solution anyone discusses for small and medium-sized businesses; there, prices run into hundreds of thousands of dollars for the storage systems alone, plus licenses.

The caveat is that there are three main options:

  • You have a hypervisor and N virtual machines on it
  • You have N physical servers
  • You have one physical server with one operating system (without virtualization), and everything is installed on that OS.

In the case of the third option (the worst one), you have problems a priori. You can't predict the load, you have no security as such (because you probably have to give users access to a server that is also a domain controller), and your applications affect one another. An example from real life: 1C gobbled up 100% of the CPU, and everything stopped working, simply because everything lived on one instance of the OS.

The second option usually leads to the purchase of several very cheap (relatively speaking) computers that are proudly called "servers". I've seen this many times. They are essentially client computers with slightly more resources and a server OS on them, and their reliability is to match: they are simply not designed for permanent operation under load. I'm not even talking about the quality of the components and the assembly, with all the consequences. If you can buy several branded servers (as many as you need), you are lucky, and most workers in "small businesses" are fiercely jealous of you.

Well, the first option. If you only need to buy one server, you can almost always justify a larger budget for it, explaining that buying it once will eliminate the need to purchase new servers for, say, the next two years. And you will be able to buy a server from a normal manufacturer (HP, DELL, etc.) with a proper hardware RAID, a component base of normal quality, and so on. The plus is that it will have normal warranty support. If you use an appropriate RAID level, you are protected from data loss if a disk (or even two) fails, and a failed disk will be replaced under warranty. Everything else will be replaced under warranty too (although the "rest" fails much less often in decent servers; over many years I remember only a couple of cases of component failure). But again, you will be spared the search for "the same motherboard", because the warranty covers everything.

That is, reliability is significantly higher, and there are fewer risks.

Everything written after "It is enough to buy one sufficiently powerful server" relates to the second issue: the compatibility of applications and their mutual influence on each other. This is a problem much more often than the reliability of the equipment itself. You will be able to recover your data from a backup copy (you do make backup copies, right?) in the event of an equipment failure. But in many cases you will not be able to solve the problem of incompatible and mutually interfering software without buying a new server, that is, without financial investment.

Which risk is higher: hardware failure or software incompatibility? Given a proper backup copy, which is worse: a burned-out server, or a misbehaving program that interferes with the work of the others and that you cannot get rid of (say, because some department needs that software for its work)?

Virtualization is not a silver bullet; it will not solve all problems at once. And it doesn’t need to be implemented simply because it exists. But you shouldn’t give it up without considering all the advantages.

I hope this is clearer.

The performance of modern computers has long exceeded the everyday needs of most organizations and individual users. And more and more often, instead of several servers, the rack space is taken by a single one, which is then "cut" into several machines. Choosing the hardware usually poses no problems, but choosing the virtualization system is harder.

VMware ESXi

Anyone who has worked with virtual machines since the turn of the century knows VMware products well; they were popular for their functionality and performance.

Even today you can often find VMware Workstation and VMware Player on desktops. The latter appeared as an answer to MS Virtual PC and is a free version of Workstation. It works under an installed OS, so it is not entirely suitable for an industrial environment. For installation on bare metal, VMware offers ESXi: an independent product that serves as the basis for installing guest OSes and, together with VMware vSphere, a tool for building a virtual infrastructure and managing virtual resources (for details see the article "Virtual Sphere" in ][ 08.2010). Essentially, ESXi is a heavily stripped-down Linux-based system containing a hypervisor (VMkernel) and management consoles: vCLI (vSphere CLI), PowerCLI (a PowerShell interface to vCLI), SSH, and DCUI (Direct Console User Interface).
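The SSH console mentioned above can also be scripted; here is a minimal sketch using the paramiko library, where the host name and credentials are placeholders and SSH is assumed to be enabled on the host.

```python
# Query an ESXi host over its SSH console (pip install paramiko).
# Host name and credentials are placeholders; SSH must be enabled on the host.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("esxi01.example.local", username="root", password="secret")
try:
    _, stdout, stderr = client.exec_command("esxcli system version get")
    print(stdout.read().decode())
finally:
    client.close()
```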

Previously, ESXi was considered the "little brother" in the VMware product line, being a free and stripped-down version of ESX. But the time of ESX has passed: the next versions of VMware vSphere will support only ESXi (the alternative name VMware vSphere Hypervisor has also been proposed), and all of ESX's advantages over ESXi have come to naught. So the developers recommend switching to ESXi.

The main difference between ESXi and ESX is the architecture. ESX is based on a full-fledged version of Linux, on which you can install your applications if necessary. VMware agents work through COS (Console OS), that is, through an additional layer. As a result, we have a larger distribution size: ~2 GB compared to 350 MB for ESXi (only 70 MB is installed on the hard drive).

In ESXi, the agents work directly in the VMkernel; where necessary, third-party modules (monitoring, drivers) are also brought into the hypervisor. Fewer layers mean more reliability and security and less opportunity for attack.

The distribution can be written to a flash drive or even embedded in the server firmware. Because of certain peculiarities, the official list of hardware compatible with ESXi (clck.ru/9xlp) is smaller than that of ESX, which also supports older servers, but it will grow over time. In addition, volunteers maintain an unofficial list of computers that run VMware ESXi, the ESXi Whitebox HCL (clck.ru/9xnD). Systems on that list are used at your own risk, but usually there are no problems.

The product from VMware is distinguished by its support for a large number of guest operating systems. There's a lot of stuff here - Windows, Linux, Solaris, FreeBSD, Netware and many others, the whole list is available on the website.

The functionality of the latest ESXi releases has been brought up to the capabilities of ESX: integration with Active Directory has appeared (any account is validated against the directory), along with advanced memory-management functions (unused resources are released), cooperation with VMware vStorage VMFS/Storage VMotion and SAN storage systems, traffic prioritization, and the VMsafe Security API technology. Flexible resource distribution allows you to hot-add a CPU, RAM or hard drive (including resizing the current disk without a reboot).
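A hot-add of CPU and RAM through the vSphere API might look like the following pyVmomi fragment; it assumes an already-resolved vim.VirtualMachine object (as in the earlier snapshot sketch) and a VM with hot-add enabled in its settings.

```python
# Hot-add resources to a running VM through the vSphere API.
# 'vm' is an already-resolved vim.VirtualMachine; hot-add must be
# enabled in the VM's settings for this to succeed while powered on.
from pyVmomi import vim

def grow_vm(vm, cpus: int = 4, memory_mb: int = 8192):
    spec = vim.vm.ConfigSpec(numCPUs=cpus, memoryMB=memory_mb)
    return vm.ReconfigVM_Task(spec=spec)
```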

Installing the distribution on bare metal is very simple (the standard option from a drive or via PXE), and starting from version 4.1, scripts are supported that automate the installation of the software, network configuration, and connection to vCenter Server. ESXi management and backup are integrated via the vSphere API.

A special converter is also important: VMware vCenter Converter (vmware.com/products/datacentervirtualization/converter), which allows you to use MS Virtual Server, Virtual PC and Hyper-V images in ESXi, as well as images of physical servers and disk partitions created by programs such as Acronis True Image, Norton Ghost and others.

In addition, the free VMware Go web service (go.vmware.com) can also help with ESXi deployment, allowing you to test a physical server for compatibility, install ESXi and create new VMs.

MS Hyper-V

This is the virtualization technology from MS, the final version of which was released in the summer of 2008. With the release of Win2k8 R2, Hyper-V received new features: Live Migration and dynamic memory, and a number of tools and the hardware support were improved.

Hyper-V is built on the principle of a hypervisor with a microkernel and "communicates" directly with the server hardware at Ring -1. This reduces overhead and thus achieves high operating speed. It is offered in two flavors: as a Windows Server 2k8/R2 role (available in both the full version and Server Core) or as a separate solution for installation on bare metal, MS Hyper-V Server 2008 R2 (microsoft.com/hyper-v-server). The latter is distributed free of charge (it does not require a Client Access License); a license is only needed for Windows guests. Essentially, it is a stripped-down version of Server Core with a single preinstalled role (which cannot be changed) and limited management tools.

Besides licensing, there are other differences between the versions of Hyper-V, but the free version has everything needed to build a virtualization server, including support for Live Migration, server consolidation, and node clustering.

The server on which MS Hyper-V Server is installed can have 1 TB of RAM and up to 8 CPUs, which is quite enough for the tasks of a small and medium-sized organization.
Officially supported guests are the 32- and 64-bit versions of Windows XP SP3, Vista SP2, 2k3 SP1, 2k8, and Linux (SLES and RHEL). But on the Internet you can find a dozen guides describing the successful operation of other *nix systems: Ubuntu, FreeBSD and so on. For installation it is recommended to pick Linux distributions with kernel 2.6.32+, which added Hyper-V support (the LinuxIC components, distributed by MS under the GPL). True, only Win2k8 guests can be configured with 4 vCPUs.

To install MS Hyper-V Server, you will need a computer with an x64 CPU that supports Intel VT or AMD-V technologies, and at least 1 GB of RAM.

To manage large arrays of virtual servers, MS offers a separate product, System Center Virtual Machine Manager 2008 (SCVMM 2008), which has tools for P2V (Physical to Virtual) and V2V server conversion (from VMware). Again, only Windows is listed as supported for P2V. Therefore, to migrate a server running Linux, you have to take the long path: VMware vCenter Converter -> ESXi -> SCVMM -> Hyper-V. This process does not always go smoothly, especially for distributions that are not officially supported.

In this case, it is safer to install the system cleanly and then transfer the data from a backup. Instead of SCVMM, this bundle can use the free VMDK2VHD (vmtoolkit.com/files), Citrix XenConvert, Quest vConverter (


