Building a cluster from ordinary computers. Desktop clusters and installing the Cluster service user account


Introduction

A server cluster is a group of independent servers managed by the Cluster service that work together as a single system. Server clusters are created by combining multiple servers running Windows® 2000 Advanced Server or Windows 2000 Datacenter Server to provide high availability, scalability, and manageability for resources and applications.

The purpose of a server cluster is to ensure continuous user access to applications and resources in cases of hardware or software failures or planned equipment shutdowns. If one of the cluster servers is unavailable due to a failure or shutdown for maintenance, information resources and applications are redistributed among the remaining available cluster nodes.

For cluster systems, the term "high availability" is preferable to "fault tolerance", since fault-tolerance technologies imply a higher level of hardware resilience to external influences and more extensive recovery mechanisms. As a rule, fault-tolerant servers use a high degree of hardware redundancy plus specialized software that allows operation to be restored almost immediately after any individual software or hardware failure. These solutions are significantly more expensive than cluster technologies, because organizations are forced to pay for additional hardware that sits idle most of the time and is used only when a failure occurs. Fault-tolerant servers are used for applications that handle high volumes of high-value transactions, for example in payment processing centers, at ATMs, or on stock exchanges.

Although the Cluster service does not guarantee uptime, it does provide a level of availability sufficient for most mission-critical applications. The Cluster service can monitor applications and resources, automatically detect failure conditions and recover the system when they occur. This provides more flexible workload management within the cluster and increases the availability of the system as a whole.

The main benefits obtained by using the Cluster service are:

  • High availability. If any node fails, the Cluster service transfers control of resources such as hard disks and network addresses to an active cluster node. When a software or hardware failure occurs, the cluster software restarts the failed application on a functioning node, or moves the entire load of the failed node to the remaining functioning nodes. Users may notice only a short delay in service.
  • Failback. The Cluster service automatically redistributes the workload in the cluster when a failed node becomes available again.
  • Manageability. Cluster Administrator is a snap-in that you can use to manage the cluster as a single system, as well as to manage applications. Cluster Administrator presents a transparent view of how applications are running, as if they were running on a single server. You can move applications to different servers within the cluster by dragging and dropping cluster objects with the mouse, and you can move data in the same way. This method can be used to distribute server workload manually, or to offload a server before stopping it for scheduled maintenance. In addition, Cluster Administrator lets you remotely monitor the state of the cluster and of all its nodes and resources.
  • Scalability. To ensure that cluster performance can always keep up with increasing demands, the Cluster service has scaling capabilities. If the overall cluster performance becomes insufficient to handle the load generated by clustered applications, additional nodes can be added to the cluster.

This document contains instructions for installing the Cluster service on servers running Windows 2000 Advanced Server or Windows 2000 Datacenter Server. It does not describe the installation and configuration of clustered applications; it only walks you through the process of installing a simple two-node cluster.

System requirements for creating a server cluster

The following checklists will help you prepare for installation. Step-by-step installation instructions follow these lists.

Software requirements

  • The Microsoft Windows 2000 Advanced Server or Windows 2000 Datacenter Server operating system installed on all servers in the cluster.
  • An installed name resolution service, such as Domain Name System (DNS), Windows Internet Naming System (WINS), HOSTS files, etc.
  • A terminal server for remote cluster administration. This requirement is not mandatory and is recommended only for ease of cluster management.

Hardware Requirements

  • The hardware requirements for a cluster node are the same as for installing the Windows 2000 Advanced Server or Windows 2000 Datacenter Server operating systems. These requirements can be found on the Microsoft catalog search page.
  • The cluster hardware must be certified and listed on the Microsoft Hardware Compatibility List (HCL) for the Cluster service. The latest version of this list can be found by searching the Windows 2000 Hardware Compatibility List in the Microsoft catalog under the search category "Cluster".

Two HCL-compliant computers, each with:

  • A hard disk with a bootable system partition and the Windows 2000 Advanced Server or Windows 2000 Datacenter Server operating system installed. This drive must not be connected to the shared storage bus discussed below.
  • A separate PCI Fibre Channel or SCSI controller for connecting the external shared storage device. This controller must be present in addition to the boot disk controller.
  • Two PCI network adapters installed in each computer in the cluster.
  • An HCL-listed external disk storage device that is connected to all nodes in the cluster. It will act as a cluster disk. Configuration using hardware RAID arrays is recommended.
  • Cables for connecting a common storage device to all computers. Refer to the manufacturer's documentation for instructions on configuring storage devices. If the connection is made to the SCSI bus, you can refer to Appendix A for additional information.
  • All equipment on the cluster computers must be completely identical. This will simplify the configuration process and eliminate potential compatibility issues.

Requirements for setting up network configuration

  • A unique NetBIOS name for the cluster.
  • Five unique static IP addresses: two for the private network adapters, two for the public network adapters, and one for the cluster itself (an example address plan follows this list).
  • A domain account for the Cluster service (all cluster nodes must be members of the same domain).
  • Each node must have two network adapters - one for connecting to the public network, one for intra-cluster communication of nodes. Configuration using one network adapter for simultaneous connection to public and private networks is not supported. Having a separate network adapter for the private network is required to comply with HCL requirements.
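For illustration, one possible address plan, consistent with the example addresses used later in this guide, is shown below. The cluster IP address and NetBIOS name here are placeholders; substitute free values from your own public subnet.

    Node 1, public network adapter:      172.16.12.12
    Node 2, public network adapter:      172.16.12.14
    Node 1, private network adapter:     10.1.1.1
    Node 2, private network adapter:     10.1.1.2
    Cluster IP address (placeholder):    172.16.12.20
    Cluster NetBIOS name (placeholder):  MyCluster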

Shared Storage Disk Requirements

  • All shared storage disks, including the quorum disk, must be physically attached to the shared bus.
  • All disks connected to the shared bus must be accessible from each node. This can be checked during the installation and configuration of the host adapter. For detailed instructions, refer to the adapter manufacturer's documentation.
  • SCSI devices must be assigned unique target SCSI IDs, and terminators must be installed correctly on the SCSI bus, according to the manufacturer's instructions.
  • All shared storage drives must be configured as basic disks (not dynamic).
  • All shared storage drive partitions must be formatted with the NTFS file system.

It is highly recommended to combine all shared storage drives into hardware RAID arrays. Although not required, creating fault-tolerant RAID configurations is key to protecting against disk failures.

Cluster installation

General Installation Overview

During the installation process, some nodes will be shut down and some will be rebooted. This is necessary in order to ensure the integrity of the data located on disks connected to the common bus of the external storage device. Data corruption can occur when multiple nodes simultaneously attempt to write to the same disk that is not protected by the cluster software.

Table 1 will help you determine which nodes and storage devices should be enabled at each stage of installation.

This guide describes how to create a two-node cluster. However, if you are setting up a cluster with more than two nodes, you can use the column value "Node 2" to determine the state of the remaining nodes.

Table 1. Sequence of turning on devices when installing a cluster

Step | Node 1 | Node 2 | Storage device | Comment
Setting network settings | On | On | Off | Make sure that all storage devices connected to the shared bus are turned off. Turn on all nodes.
Setting up shared drives | On | Off | On | Turn off all nodes. Power on the shared storage device, then power on the first node.
Checking the configuration of shared drives | Off | On | On | Turn off the first node, turn on the second node. Repeat for nodes 3 and 4 if necessary.
Configuring the first node | On | Off | On | Turn off all nodes; turn on the first node.
Configuring the second node | On | On | On | After successfully configuring the first node, power on the second node. Repeat for nodes 3 and 4 if necessary.
Completing the installation | On | On | On | At this point, all nodes should be turned on.

Before installing the cluster software, you must complete the following steps:

  • Install the Windows 2000 Advanced Server or Windows 2000 Datacenter Server operating system on each computer in the cluster.
  • Configure network settings.
  • Configure shared storage drives.

Complete these steps on each node in the cluster before installing the Cluster service on the first node.

To configure the Cluster service on a Windows 2000 server, your account must have administrator rights on each node. All cluster nodes must be either member servers or controllers of the same domain. Mixed use of member servers and domain controllers in a cluster is unacceptable.

Installing the Windows 2000 operating system

To install Windows 2000 on each cluster node, refer to the documentation that you received with your operating system.

This document uses the naming structure from the manual "Step-by-Step Guide to a Common Infrastructure for Windows 2000 Server Deployment". However, you can use any names.

Before you begin installing the Cluster service, you must log in as an administrator.

Configuring network settings

Note: At this point in the installation, turn off all shared storage devices, and then turn on all nodes. You must prevent multiple nodes from accessing a shared storage device at the same time until the Cluster service is installed on at least one node and that node is powered on.

Each node must have at least two network adapters installed - one to connect to the public network, and one to connect to the private network consisting of cluster nodes.

The private network adapter provides inter-node communication, reporting of the current state of the cluster, and management of the cluster. Each node's public network adapter connects the cluster to the public network consisting of client computers.

Make sure all network adapters are physically connected correctly: private network adapters are connected only to other private network adapters, and public network adapters are connected to public network switches. The connection diagram is shown in Figure 1. Perform this check on each cluster node before proceeding to configure the shared storage disks.

Figure 1: Example of a two-node cluster

Configuring a Private Network Adapter

Complete these steps on the first node of your cluster.

  1. Right-click My Network Places and select Properties.
  2. Right-click the Local Area Connection 2 icon.

Note: Which network adapter serves the private network and which the public one depends on how the network cables are physically connected. In this document we assume that the first adapter (Local Area Connection) is connected to the public network, and the second adapter (Local Area Connection 2) is connected to the cluster's private network. In your case this may be reversed.

  3. Click Status. The Local Area Connection 2 Status window shows the connection status and its speed. If the connection is in the disconnected state, check the cables and connections and fix the problem before continuing. Click Close.
  4. Right-click the Local Area Connection 2 icon again, select Properties, and click Configure.
  5. Select the Advanced tab. The window shown in Figure 2 will appear.
  6. For private network adapters, the speed must be set manually instead of using the default value. Specify your network speed in the drop-down list; do not use the "Auto Sense" or "Auto Select" values, since some network adapters drop packets while determining the connection speed. To set the network adapter speed, specify the actual value for the Connection Type or Speed parameter.

Figure 2: Advanced network adapter settings

All cluster network adapters connected to the same network must be configured identically and use the same values for parameters such as Duplex Mode, Flow Control, Connection Type, etc. Even if different network equipment is used on different nodes, the values of these parameters must be the same.

  7. Select Internet Protocol (TCP/IP) in the list of components used by the connection.
  8. Click Properties.
  9. Select Use the following IP address and enter the address 10.1.1.1. (For the second node, use the address 10.1.1.2.)
  10. Set the subnet mask to 255.0.0.0.
  11. Click Advanced and select the WINS tab. Select Disable NetBIOS over TCP/IP. Click OK to return to the previous menu. Perform this step only for the private network adapter.

Your dialog box should look like Figure 3.

Figure 3: Private Network Connection IP Address
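If you prefer the command line, roughly the same addressing can be applied with netsh. This is only a sketch: the connection name and addresses are the ones assumed in this guide, the exact netsh syntax on Windows 2000 may differ slightly (check netsh interface ip set address /?), and the WINS/NetBIOS setting from the last step still has to be made in the GUI.

    netsh interface ip set address "Local Area Connection 2" static 10.1.1.1 255.0.0.0
    netsh interface ip show config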

Configuring a public network adapter

Note: If a DHCP server is running on the public network, the IP address for the public network adapter can be assigned automatically. However, this method is not recommended for cluster node adapters. We strongly recommend assigning permanent IP addresses to all public and private network adapters of the nodes. Otherwise, if the DHCP server fails, access to the cluster nodes may become impossible. If you are forced to use DHCP for the public network adapters, use long address lease terms; this ensures that the dynamically assigned address remains valid even if the DHCP server is temporarily unavailable. Always assign permanent IP addresses to the private network adapters. Remember that the Cluster service recognizes only one network interface per subnet. If you need help assigning network addresses in Windows 2000, refer to the operating system's built-in Help.

Renaming network connections

For clarity, we recommend renaming your network connections. For example, you can rename the connection Local Area Connection 2 to Connect to the cluster's private network. This makes it easier to identify the networks and assign their roles correctly.

  1. Right-click the Local Area Connection 2 icon.
  2. In the context menu, select Rename.
  3. Enter Connect to the cluster's private network in the text field and press ENTER.
  4. Repeat steps 1-3 and rename the Local Area Connection connection to Connect to a public network.

Figure 4: Renamed network connections

The renamed network connections should look like Figure 4. Close the Network and Dial-up Connections window. The new network connection names are automatically replicated to the other cluster nodes when they are turned on.

Verifying network connections and name resolution

To check the operation of the configured network equipment, follow these steps for all network adapters on each node. To do this, you must know the IP addresses of all network adapters in the cluster. You can get this information by running the command ipconfig on each node:

  1. Click Start, select Run, and type cmd in the text box. Click OK.
  2. Type the command ipconfig /all and press ENTER. You will see the IP configuration of every network adapter on the local machine.
  3. If a command prompt window is not already open, repeat step 1.
  4. Type the command ping ipaddress, where ipaddress is the IP address of the corresponding network adapter on the other node. For example, assume that the network adapters have the following IP addresses:
Node | Network connection name | Network adapter IP address
1 | Connecting to a public network | 172.16.12.12
1 | Connecting to the cluster's private network | 10.1.1.1
2 | Connecting to a public network | 172.16.12.14
2 | Connecting to the cluster's private network | 10.1.1.2

In this example you need to run the commands ping 172.16.12.14 and ping 10.1.1.2 from node 1, and the commands ping 172.16.12.12 and ping 10.1.1.1 from node 2.

To check name resolution, run the command ping, using the computer's name as an argument instead of its IP address. For example, to check name resolution for the first cluster node named hq-res-dc01, run the command ping hq-res-dc01 from any client computer.
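With the addresses from the table above, the check run from node 1 might look like this (output abbreviated); each ping should report replies rather than time-outs:

    ipconfig /all
    ping 172.16.12.14
    ping 10.1.1.2
    ping hq-res-dc01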

Checking domain membership

All cluster nodes must be members of the same domain and must have network connectivity to a domain controller and a DNS server. The nodes can be configured as domain member servers or as controllers of the same domain. If you decide to make one of the nodes a domain controller, then all other nodes in the cluster must also be configured as domain controllers of the same domain. This guide assumes that all nodes are domain controllers.

Note: For links to additional documentation on configuring domains, DNS, and DHCP services in Windows 2000, see Related Resources at the end of this document.

  1. Right-click My Computer and select Properties.
  2. Select the Network Identification tab. The System Properties dialog box shows the full computer name and domain name. In our example, the domain is called reskit.com.
  3. If you have configured the node as a member server, you can join it to the domain at this point. Click Properties and follow the instructions to join the computer to the domain.
  4. Close the System Properties and My Computer windows.

Creating the Cluster service account

You must create a separate domain account under which the Cluster service will run. The installer will ask you for the Cluster service credentials, so the account must be created before installing the service. The account should not belong to any individual domain user and should be used exclusively for running the Cluster service. (A command-line sketch for creating the account appears at the end of this section.)

  1. Click Start, select Programs / Administrative Tools, and run the Active Directory Users and Computers snap-in.
  2. Expand the reskit.com domain if it is not already expanded.
  3. Select Users in the tree.
  4. Right-click Users and, from the context menu, select New, then User.
  5. Enter a name for the Cluster service account as shown in Figure 5 and click Next.

Figure 5: Adding a Cluster User

  6. Select the User cannot change password and Password never expires check boxes. Click Next, then click Finish to create the user.

Note: If your administrative security policy does not allow passwords that never expire, you will need to update the password and configure the Cluster service on each node before it expires.

  7. Right-click the Cluster user in the right pane of the Active Directory Users and Computers snap-in.
  8. In the context menu, select Add members to a group.
  9. Select the Administrators group and click OK. The new account now has administrator privileges on the local computer.
  10. Close the Active Directory Users and Computers snap-in.
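As a hedged command-line alternative to the snap-in steps above, the built-in net commands can create the account and grant it administrator rights. The user name Cluster and the password below are examples; the password options from step 6 (user cannot change password, password never expires) still have to be set in Active Directory Users and Computers.

    net user Cluster StrongPassw0rd /add /comment:"Cluster service account" /domain
    net localgroup Administrators Cluster /add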

Configuring shared storage drives

Warning: Make sure that at least one of the cluster nodes is running Windows 2000 Advanced Server or Windows 2000 Datacenter Server and that the Cluster service is configured and running. Only after that should you boot Windows 2000 on the other nodes. If these conditions are not met, the cluster disks may become corrupted.

To begin setting up shared storage drives, turn off all nodes. After that, turn on the shared storage device, then turn on node 1.

Quorum disk

The quorum disk is used to store checkpoints and recovery log files of the cluster database, providing cluster management. We make the following recommendations for creating a quorum disk:

  • Create a small partition (at least 50 MB in size) to use as a quorum disk. We generally recommend creating a quorum disk of 500 MB in size.
  • Dedicate a separate disk to the quorum resource. Because failure of the quorum disk would cause the entire cluster to fail, we strongly recommend using a hardware RAID array.

During the Cluster service installation process, you will be required to assign a letter to the quorum drive. In our example we will use the letter Q.

Configuring shared storage drives

  1. Right-click My Computer and select Manage. In the window that opens, expand the Storage category.
  2. Select Disk Management.
  3. Make sure that all shared storage drives are formatted as NTFS and have the status Basic. If you connect a new disk, the disk signing and upgrade wizard starts automatically. When the wizard starts, click Update to continue; after this the disk will be identified as Dynamic. To convert the disk back to basic, right-click Disk # (where # is the number of the disk you are working with) and select Revert to Basic Disk.

  4. Right-click the Unallocated area next to the corresponding disk.

  5. Select Create Partition.
  6. The Create Partition Wizard starts. Click Next twice.
  7. Enter the desired partition size in megabytes and click Next.
  8. Click Next, accepting the suggested default drive letter.
  9. Click Next to format and create the partition.

Assigning drive letters

After the data bus, disks, and shared storage partitions are configured, you must assign drive letters to all partitions on all disks in the cluster.

Note: Mount points are a file system feature that lets you mount a file system onto an existing directory without assigning a drive letter. Mount points are not supported by clusters. Any external drive used as a cluster resource must be partitioned into NTFS partitions, and those partitions must be assigned drive letters.

  1. Right-click the desired partition and select Change Drive Letter and Path.
  2. Select a new drive letter.
  3. Repeat steps 1 and 2 for all shared storage drives.

Figure 6: Disk partitions with assigned letters

At the end of the procedure, the Computer Management snap-in window should look like Figure 6. Close the Computer Management snap-in.

Checking operation and access to disks

  1. Click Start, select Programs / Accessories, and run Notepad.
  2. Type a few words and save the file under the name test.txt by selecting Save As from the File menu. Close Notepad.
  3. Double-click the My Documents icon.
  4. Right-click the test.txt file and select Copy from the context menu.
  5. Close the window.
  6. Open My Computer.
  7. Double-click the shared storage drive partition.
  8. Right-click and select Paste.
  9. A copy of the test.txt file should appear on the shared storage drive.
  10. Double-click the test.txt file to open it from the shared storage drive. Then close the file.
  11. Select the file and press Del to delete the file from the cluster disk.

Repeat the procedure for all disks in the cluster to ensure that they are accessible from the first node.

Now turn off the first node, turn on the second node, and repeat the steps in the section Checking operation and access to disks. Follow the same steps on all additional nodes. Once you are sure that all nodes can read and write to the shared storage drives, turn off all nodes except the first one and continue with the next section.

Today the business processes of many companies are completely tied to information technology. As organizations grow more dependent on their computer networks, the availability of services at any time and under any load becomes critically important. A single computer can provide only a basic level of reliability and scalability; the maximum level can be achieved by combining two or more computers into a single system - a cluster.

Why do you need a cluster?

Clusters are used in organizations that need round-the-clock, uninterrupted availability of services, where any interruption is undesirable or unacceptable. They are also useful where a load surge is possible that the main server could not handle on its own: additional hosts, which normally perform other tasks, help absorb it. For a mail server processing tens or hundreds of thousands of messages a day, or for a web server serving online stores, the use of a cluster is highly desirable. To the user such a system remains completely transparent - the whole group of computers looks like a single server. Using several, even cheaper, computers provides significant advantages over a single fast server: even distribution of incoming requests; increased fault tolerance, since when one element fails its load is picked up by the other systems; scalability; convenient maintenance and replacement of cluster nodes; and much more. Failure of a node is detected automatically and its load is redistributed, all of which remains invisible to the client.

Win2k3 features

Generally speaking, some clusters are designed to improve data availability, others to achieve maximum performance. In the context of this article we are interested in MPP (Massive Parallel Processing) clusters, in which the same application runs on several computers, providing scalability of services. There are several technologies that allow the load to be distributed among several servers: traffic redirection, address translation, DNS Round Robin, and the use of special programs that work at the application level, such as web accelerators. In Win2k3, unlike Win2k, clustering support is included out of the box, and two types of clusters are supported, differing in the applications they target and the kind of data they handle:

1. NLB (Network Load Balancing) clusters - provide scalability and high availability of services and applications based on the TCP and UDP protocols, combining up to 32 servers with the same data set into one cluster, all running the same applications. Each request is executed as a separate transaction. They are used for working with rarely changing data sets, such as WWW, ISA, Terminal Services and other similar services.

2. Server clusters - can combine up to eight nodes; their main task is to keep applications available in the event of a failure. A server cluster consists of active and passive nodes. The passive node sits idle most of the time, acting as a standby for the main node. For individual applications it is possible to configure several active servers and distribute the load between them. Both nodes are connected to a single data store. A server cluster is used for working with large volumes of frequently changing data (mail, file and SQL servers). Such a cluster can consist only of nodes running the Enterprise or Datacenter editions of Win2k3 (the Web and Standard editions do not support server clusters).

Microsoft Application Center 2000 (and only that product) offered one more kind of cluster - CLB (Component Load Balancing), which makes it possible to distribute COM+ applications across several servers.

NLB clusters

When load balancing is used, a virtual network adapter is created on each node, with its own IP and MAC address independent of the real one. This virtual interface presents the cluster as a single node; clients address it precisely by this virtual address. All requests are received by every node of the cluster, but are processed by only one of them. The Network Load Balancing Service runs on all nodes and, using a special algorithm that does not require any data exchange between nodes, decides whether a given node needs to process a particular request or not. The nodes exchange heartbeat messages that indicate their availability. If a host stops sending heartbeats, or a new node appears, the remaining nodes start the convergence process and redistribute the load. Balancing can be implemented in one of two modes:

1) unicast - unicast mode, in which the MAC address of the virtual cluster adapter is used instead of the physical MAC. In this case the cluster nodes cannot exchange data with each other by MAC address, only over IP (or through a second adapter not associated with the cluster);

2) multicast - multicast mode, in which the cluster's MAC address is a multicast address while each node keeps its own physical MAC, so the nodes can still communicate with one another directly.

Only one of these modes should be used within a single cluster.

Several NLB clusters can be configured on one network adapter by specifying port rules. Such clusters are called virtual. This makes it possible to assign, for each application, node or IP address, specific computers within the primary cluster, or to block traffic for a particular application without affecting traffic for other programs running on the same node. Conversely, an NLB component can be bound to several network adapters, which lets you configure a number of independent clusters on each node. You should also be aware that server clusters and NLB cannot be set up on the same node, because they work with network devices differently.

An administrator can build a kind of hybrid configuration that has the advantages of both methods, for example by creating an NLB cluster and setting up data replication between the nodes. But replication is performed not continuously, only from time to time, so the information on different nodes will differ for a while.

That is enough theory, although one could talk about building clusters for a long time, listing the possibilities and ways of scaling out and giving various recommendations and options for specific implementations. Let's leave those subtleties and nuances for self-study and move on to the practical part.

Setting up an NLB cluster

No additional software is required to set up an NLB cluster; everything is done with the tools available in Win2k3. To create, maintain and monitor NLB clusters, use the "Network Load Balancing Manager" component, found under "Administrative Tools" in the "Control Panel" (the NLBMgr command). Since the "Network Load Balancing" component is installed as a standard Windows network driver, NLB can also be installed through the "Network Connections" component, where the corresponding item is available. But it is better to use only the first option: simultaneous use of NLB Manager and "Network Connections" may lead to unpredictable results.

NLB Manager lets you configure and manage several clusters and nodes at once, from one place.

It is also possible to install an NLB cluster on a computer with a single network adapter bound to Network Load Balancing, but in that case, in unicast mode, the NLB Manager on this computer cannot be used to control other nodes, and the nodes themselves cannot exchange information with each other.

Now start the NLB Manager. We do not have any clusters yet, so the window that appears contains no information. Select "New" from the "Cluster" menu and start filling in the fields of the "Cluster Parameters" window. In the "Cluster IP parameters" fields, enter the value of the cluster's virtual IP address, the subnet mask and the full name. The value of the virtual MAC address is set automatically. Just below, select the cluster operating mode: unicast or multicast. Pay attention to the "Allow remote control" checkbox - in all its documentation Microsoft strongly recommends not enabling it, to avoid security problems. Instead, you should use the manager or other remote-control tools, such as the Windows Management Instrumentation (WMI) toolkit. If you do decide to use it, take all appropriate measures to protect the network, additionally closing UDP ports 1717 and 2504 at the firewall.

After filling in all the fields, click "Next". In the "Cluster IP Addresses" window, add, if necessary, the additional virtual IP addresses that this cluster will use. In the next window, "Port Rules", you can set up load balancing for a single port or a group of ports, for all or for selected IP addresses, over the UDP or TCP protocols, and also block access to the cluster on specific ports (which does not replace a firewall). By default the cluster processes requests on all ports (0-65535); it is better to limit this list, adding only what is really necessary. Although, if you do not want to bother, you can leave everything as is. By the way, in Win2k, by default all traffic directed to the cluster was processed only by the node with the highest priority; the remaining nodes were brought in only when the main one failed.

For example, for IIS you only need to enable ports 80 (http) and 443 (https). Moreover, you can arrange things so that, for example, secure connections are processed only by certain servers on which the certificate is installed. To add a new rule, click "Add"; in the dialog box that appears, enter the host IP address or, if the rule applies to everyone, leave the "All" checkbox selected. In the "From" and "To" fields of the port range, set the same value - 80. The key field is "Filtering Mode" - it specifies who will process the request. Three options define the filtering mode: "Multiple nodes", "Single node" and "Disable this port range". Selecting "Single node" means that traffic directed to the selected IP (of a computer or of the cluster) with the specified port number will be processed by the active node with the lowest priority indicator (more on that below). Selecting "Disable..." means that such traffic will be discarded by all cluster members.

In the "Multiple nodes" filtering mode you can additionally specify the client affinity option, which directs traffic from a given client to the same cluster node. There are three options: "None", "One" or "Class C". Choosing the first means that any request can be answered by an arbitrary node; it should not be used, however, if the rule selects the UDP protocol or "Both". With the remaining options, client affinity is determined by the specific IP address or by the class C network range.

So, for our port 80 rule, let's choose the "Multiple nodes - Class C" option. We fill in the rule for port 443 in the same way, but use "Single node", so that the client is always answered by the main node with the lowest priority. If the manager detects an incompatible rule, it displays a warning message and, in addition, a corresponding entry is written to the Windows event log.

Next, connect to the node of the future cluster by entering its name or real IP, and specify the interface that will be connected to the cluster network. In the "Node parameters" window, select the priority from the list, specify the network settings, and set the initial state of the node (running, stopped, paused). The priority also serves as a unique node identifier: the lower the number, the higher the priority. The node with priority 1 is the master server, which receives packets first and acts as the routing manager.

The "Save state after restarting the computer" checkbox ensures that after a failure or reboot this node is automatically brought back into operation. After you click "Finish", an entry for the new cluster appears in the Manager window; for now it contains a single node.
Adding the next node is just as easy. Select "Add node" or "Connect to existing" from the menu, depending on which computer you are connecting from (whether it is already part of the cluster or not). Then specify the name or address of the computer in the window; if you have sufficient rights to connect, the new node will be joined to the cluster. At first the icon next to its name will look different, but once the convergence process completes it will be the same as for the first computer.

Since the manager displays the properties of the nodes as of the moment it connected, to see the current state select the cluster and choose "Refresh" in the context menu. The manager will connect to the cluster and show the updated data.

After installing the NLB cluster, do not forget to change the DNS record so that name resolution now returns the cluster IP.
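To double-check the state of the cluster from any node, you can also use the wlbs command-line utility that ships with the NLB component; a quick sketch:

    wlbs query      (shows the state of the local host and the list of converged nodes)
    wlbs display    (dumps the current cluster parameters and port rules)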

Changing server load

In this configuration all servers are loaded evenly (except with the "Single node" option). In some cases it is necessary to redistribute the load, placing most of the work on one of the nodes (for example, the most powerful one). The cluster's rules can be changed after they have been created: in the context menu that appears when you click the cluster name, select the "Cluster Properties" item. All the settings we discussed above are available there. The "Node Properties" menu item provides slightly more options. In "Node parameters" you can change the priority value for a specific node. In "Port rules" you cannot add or delete a rule - that is available only at the cluster level - but by choosing to edit a specific rule you get the opportunity to adjust some settings. Thus, with the "Multiple nodes" filtering mode set, the "Load estimation" item becomes available, allowing the load to be shifted toward a specific node. By default "Equal" is selected, but in "Load estimation" you can specify a different load value for a specific node, as a percentage of the total cluster load. If the "Single node" filtering mode is enabled, a new parameter, "Handling priority", appears in this window. Using it, you can arrange for traffic to one port to be processed first by one node of the cluster, and traffic to another port by another node.

Event logging

As mentioned earlier, Network Load Balancing records all cluster actions and changes in the Windows event log. To see them, select "Event Viewer - System"; NLB entries appear with the source WLBS (from Windows Load Balancing Service, as this service was called in NT). In addition, the manager window displays the latest messages containing information about errors and about all configuration changes. By default this information is not saved. To have it written to a file, select "Options -> Log Settings", check the "Enable logging" box and specify a file name. The new file will be created in a subdirectory of your account under Documents and Settings.
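If you prefer the command line, the WLBS entries can also be pulled out of the system log with the eventquery.vbs script included with Win2k3; this is a sketch, and the filter syntax is worth verifying against eventquery /? on your server:

    cscript %SystemRoot%\system32\eventquery.vbs /l system /fi "source eq wlbs"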

Setting up IIS with replication

A cluster is all very well, but without a service it makes no sense, so let's add IIS (Internet Information Services). The IIS server is included in Win2k3, but to minimize the possibility of attacks on the server it is not installed by default.
There are two ways to install IIS: through the Control Panel or with the server's role management wizard. Let's look at the first one. Go to "Control Panel - Add or Remove Programs" and select "Add/Remove Windows Components". Now go to the "Application Server" item and, under the IIS services, check everything that is needed. By default the server's working directory is \Inetpub\wwwroot. Once installed, IIS can already serve static documents.
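Once IIS is running on every node, a quick way to confirm that pages are actually being served through the cluster is to request one via the virtual address from a client machine; the address below is a placeholder for whatever cluster IP you configured:

    ping <cluster-IP>
    telnet <cluster-IP> 80      (then type GET / HTTP/1.0 and press ENTER twice)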

First of all, decide what components and resources you will need. You will need one main node, at least a dozen identical compute nodes, an Ethernet switch, a power distribution unit and a rack. Determine the wiring and cooling capacity, as well as the amount of space you will need. Also decide what IP addresses you want to use for the nodes, what software you will install, and what technologies you will need to create parallel computing power (more on this below).

  • Although hardware is expensive, all the programs presented in the article are distributed free of charge, and most of them are open source.
  • If you want to find out how fast your supercomputer could theoretically be, use this tool:

Assemble the nodes. You will need to build the compute nodes yourself or purchase pre-assembled servers.

  • Choose server chassis that make the best use of space, power and cooling.
  • Or you can "recycle" a dozen or so used, somewhat outdated servers; even if they weigh more than their components combined, you will save a decent amount. All processors, network adapters and motherboards must be the same for the computers to work well together. Of course, don't forget about RAM and hard drives for each node, as well as at least one optical drive for the main node.
  • Install the servers in the rack. Start at the bottom so that the rack is not top-heavy. You'll need a friend's help - assembled servers can be very heavy, and sliding them into the rails that hold them in the rack is quite a challenge.

    Install an Ethernet switch next to the rack. It's worth configuring the switch right away: set the size of jumbo frames to 9000 bytes, set the static IP address that you chose in step 1 and turn off unnecessary protocols such as SMTP.

    Install a power distribution unit (PDU). Depending on the maximum load your nodes can draw, you may need 220-volt power for a high-performance computer.

  • When everything is installed, proceed to configuration. Linux is, in fact, the go-to system for high-performance computing (HPC) clusters - not only is it ideal as a scientific computing environment, but you also don't have to pay to install the system on hundreds or even thousands of nodes. Imagine how much it would cost to install Windows on all the nodes!

    • Start by installing the latest BIOS version for the motherboard and the manufacturer's management software, which must be the same on all servers.
    • Install your preferred Linux distribution on all nodes; for the main node, use a distribution with a graphical interface. Popular choices: CentOS, OpenSuse, Scientific Linux, RedHat and SLES.
    • The author highly recommends using Rocks Cluster Distribution. In addition to installing all the software and tools needed for a cluster, Rocks provides an excellent method for quickly "migrating" multiple copies of a system to similar servers using PXE boot and Red Hat's "Kick Start" procedure.
  • Install the message passing interface, resource manager and other required libraries. If you did not install Rocks in the previous step, you will have to manually install the necessary software to set up the parallel computing logic.

    • To get started you will need a portable batch system (PBS), for example Torque Resource Manager, which allows you to split and distribute tasks across multiple machines.
    • Add the Maui Cluster Scheduler to Torque to complete the installation.
    • Next you need to install a message passing interface, which individual processes on the different nodes need in order to share data. Open MPI is the simplest option (a small end-to-end test of this stack is sketched after this list).
    • Don't forget about multi-threaded math libraries and the compilers that will build your programs for distributed computing. Did I mention you should just use Rocks?
  • Connect computers into a network. The main node sends tasks for calculation to slave nodes, which in turn must return the result back, and also send messages to each other. And the faster all this happens, the better.

    • Use a private Ethernet network to connect all the nodes into a cluster.
    • The main node can also act as NFS, PXE, DHCP, TFTP and NTP servers when connected to Ethernet.
    • You must separate this network from the public ones to ensure that packets do not overlap with others on the LAN.
  • Test the cluster. The last thing you should do before giving users access to the computing power is to test its performance. The HPL (High Performance Linpack) benchmark is a popular option for measuring computing speed in a cluster. You need to compile it from source with the highest degree of optimization your compiler allows for the architecture you chose.

    • You should, of course, compile with all the optimization settings available for the platform you chose. For example, when using an AMD CPU, compile with Open64 and optimization level -O3.
    • Compare your results with TOP500.org to pit your cluster against the 500 fastest supercomputers in the world!
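Before running HPL, it is worth doing a small end-to-end smoke test of the message-passing stack mentioned above. The sketch below assumes Open MPI's mpicc and mpirun are on the PATH and that hostfile is a plain text file you create with one node name or IP per line; adjust the names to your setup.

    cat > hello.c <<'EOF'
    #include <mpi.h>
    #include <stdio.h>
    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);                  /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */
        printf("hello from rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }
    EOF
    mpicc hello.c -o hello
    mpirun --hostfile hostfile -np 8 ./hello

If every rank prints its line, the compiler, the MPI library and password-less access between the nodes are all working.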
Working on a single machine is no longer fashionable - or, building a cluster at home

1. Introduction

Many of you have several Linux machines on your local network whose processors are idle practically all the time. Many have also heard of systems in which such machines are combined into a single supercomputer. But few have actually tried running such experiments at work or at home. Let's try to put together a small cluster. By building a cluster you can genuinely speed up part of your workload - for example, compilation, or the simultaneous execution of several resource-hungry processes. In this article I will try to show how, without much effort, you can combine the machines of your local network into a single cluster based on MOSIX.

2. How, what and where

MOSIX is a patch for the Linux kernel plus a set of utilities that allows processes on your machine to move (migrate) to other nodes of the local network. You can get it at HTTP://www.mosix.cs.huji.ac.il; it is distributed as source code under the GPL license. Patches exist for all kernels of the stable Linux branch.

3. Installing the software

At the start of the installation I recommend downloading from the MOSIX site not only MOSIX itself but also the accompanying utilities - mproc, mexec and others.
The MOSIX archive contains an installation script, mosix_install. Be sure to unpack the kernel sources into /usr/src/linux-*.*.* - for example, as I did, into /usr/src/linux-2.2.13 - then run mosix_install and answer all of its questions, telling it your boot manager (LILO), the path to the kernel sources and the run levels.
When configuring the kernel, enable the CONFIG_MOSIX, CONFIG_BINFMT_ELF and CONFIG_PROC_FS options. All of these options are described in detail in the MOSIX installation guide.
Installed? Well then - reboot your Linux with the new kernel, whose name will look very much like mosix-2.2.13.
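If you end up building the kernel by hand rather than letting mosix_install drive everything, the classic sequence for a 2.2.x kernel looks roughly like this; a sketch only, using the source path and version from this article:

    cd /usr/src/linux-2.2.13
    make menuconfig        # enable CONFIG_MOSIX, CONFIG_BINFMT_ELF, CONFIG_PROC_FS
    make dep
    make bzImage
    make modules
    make modules_install
    # copy arch/i386/boot/bzImage into place, add an entry for it to /etc/lilo.conf, then:
    lilo
    reboot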

4. Configuration

A freshly installed MOSIX has no idea which machines are on your network or which of them it should talk to, but configuring this is very simple. If you have just installed mosix, and your distribution is SuSE- or RedHat-compatible, go to the /etc/rc.d/init.d directory and run the command mosix start. On the first run this script asks you to configure MOSIX and launches a text editor to create the file /etc/mosix.map, which contains the list of the nodes of your cluster. If you have only a few machines and their IP addresses follow one another in sequence, the map can be written like this:

1 10.152.1.1 5

Here the first parameter is the number of the starting node, the second is the IP address of the first node, and the last is the number of nodes counting from the current one. In other words, our cluster now has five nodes, whose IP addresses end in 1, 2, 3, 4 and 5.
Another example:

Node number | IP | Number of nodes from this one
1 | 10.150.1.1 | 1
2 | 10.150.1.55 | 2
4 | 10.150.1.223 | 1

With this configuration we get the following layout:
IP of node 1: 10.150.1.1
IP of node 2: 10.150.1.55
IP of node 3: 10.150.1.56
IP of node 4: 10.150.1.223
Now you need to install MOSIX on every machine of the future cluster and create the same /etc/mosix.map configuration file on each of them.
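One simple way to push the same map to the remaining nodes and restart MOSIX on each of them is a small shell loop; this sketch assumes the node addresses from the second example above, that you have root ssh/scp access to them, and that the init script accepts restart (otherwise run stop and then start):

    for ip in 10.150.1.55 10.150.1.56 10.150.1.223; do
        scp /etc/mosix.map root@$ip:/etc/mosix.map
        ssh root@$ip /etc/rc.d/init.d/mosix restart
    done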

After mosix is restarted, your machine will already be working as part of the cluster, which you can see by starting the monitor with the mon command. If you see only your own machine in the monitor, or do not see any machines at all, then, as they say, you have to dig - most likely the error is in /etc/mosix.map.
Well, you can see the nodes, but you have not won yet. What next? The next step is very simple :-) - build the utilities for working with the modified /proc from the mproc package. In particular, this package contains a nice modification of top called mtop, which adds the ability to display the node, sort by node, move a process from the current node to another one, and set the minimum node CPU load above which processes start migrating to other MOSIX nodes.
Run mtop, pick any non-sleeping process you like (I recommend starting bzip) and boldly press the "g" key on your keyboard; at the prompt, enter the PID of the process chosen as the victim, and then the number of the node you want to send it to. After that, watch the output of the mon command - that machine should start taking on the load of the selected process.
mtop itself shows, in the #N field, the number of the node where each process is running.
But that is not all - surely you don't want to send processes to other nodes by hand? I didn't. MOSIX has decent built-in load balancing inside the cluster, which distributes the load more or less evenly across all nodes. Here, however, we have to do a little work. First I will describe how to do the fine tuning (tune) of two cluster nodes, during which MOSIX obtains information about the speed of the processors and of the network.
Remember once and for all: tune can be run only in single-user mode. Otherwise you will either get a not entirely correct result, or your machine may simply hang.
So, we run tune. After switching the operating system to single-user mode, for example with init 1 or init S, run the prep_tune script, which brings up the network interfaces and starts MOSIX. Then, on one of the machines, run tune, give it the number of the other node to tune against, and wait for the result - the utility will ask you to enter six numbers obtained by running tune -a <node> on the other node. The operation then has to be repeated on the other node with tune -a <node>, and the resulting six numbers entered on the first node. After such tuning, the file /etc/overheads should appear on your system, containing the information MOSIX needs in the form of numeric data. If for some reason tune could not create it, simply copy the file mosix.cost from the current directory to /etc/overheads; that helps ;-).
When tuning a cluster of more than two machines, use the tune_kernel utility, which also ships with MOSIX. It lets you configure the cluster in a simpler and more familiar way, by answering a few questions and tuning two machines of the cluster at a time.
By the way, from my own experience I can say that while setting up a cluster it is better not to load the network; on the contrary, pause all active operations on the local network.

5. Managing the cluster

There is a small set of commands for managing a cluster node, among them:

mosctl - control over the node. Lets you change node parameters such as block, stay, lstay, delay and so on.
Let's look at a few of this utility's parameters:
stay - stops the migration of processes from the current machine to other nodes. Cancelled with the nostay or -stay parameter.
lstay - forbids only local processes from migrating; processes that arrived from other machines may continue to do so. Cancelled with nolstay or -lstay.
block - prevents remote/guest processes from running on this node. Cancelled with noblock or -block.
bring - brings back all processes of the current node that are running on other machines of the cluster. This parameter may not take effect until the migrated process receives an interrupt from the system.
setdelay - sets the time after which a process starts to migrate.
After all, if a process runs for less than a second, there is no point in moving it to other machines on the network. It is exactly this time that is set with the mosctl utility and the setdecay parameter. Example:
mosctl setdecay 1 500 200
sets the time before moving to other nodes to 500 milliseconds if the process was started as slow, and 200 milliseconds for fast processes. Note that the slow parameter must always be greater than or equal to the fast parameter.

mosrun - runs an application in the cluster. For example, mosrun -e -j5 make will start make on node 5 of the cluster, and all of its child processes will also run on node 5. There is one rather significant nuance, though: if the child processes finish faster than the delay set with the mosctl utility, the process will not migrate to other nodes of the cluster. mosrun has quite a few other interesting parameters; you can read about them in detail in its manual page (man mosrun).

mon - as we already know, the cluster monitor, which displays in pseudo-graphic form the load of each working node of your cluster and the amount of free and used memory on the nodes, and prints a lot of other equally interesting information.

mtop - a version of the top command modified for use on cluster nodes. It displays dynamic information about the processes started on the given node and about the nodes to which your processes have migrated.

mps - likewise a modified version of the ps command, with one more field added: the number of the node to which a process has migrated.

In my view those are all the main utilities. In fact, of course, you can get by even without them, for example by using /proc/mosix to control the cluster: besides the basic information about the node's settings, the processes started from other nodes and so on, you can also change some of the parameters there.

6. Experimenting

Unfortunately, I did not manage to make a single process run simultaneously on several nodes. The most I achieved while experimenting with the cluster was using another node to run resource-hungry processes.
Let's look at one example.
Suppose two machines (two nodes) are working in the cluster, one with number 1 (a Celeron 366), the other with number 5 (a PIII 450). We will experiment on node 5; node 1 was idle at the time. ;-)
So, on node 5 we start the crark utility to brute-force the password of a rar archive. Anyone who has tried working with such utilities knows that the password search "eats" up to 99 percent of the CPU. Well then - after starting it we observe that the process stays on this node, node 5. Reasonable: this node's performance is almost twice that of node 1.
Next we simply started a build of kde 2.0. Looking at the process table, we see that crark has successfully migrated to node 1, freeing up the CPU and memory (yes, yes - memory is freed in exactly the same way) for make. And as soon as make finished its work, crark returned to its home node 5.
An interesting effect occurs if crark is started on the slower node 1. There we observe practically the opposite result: the process immediately migrates to node 5, the faster one, and it comes back only when the owner of the fifth computer starts doing something with the system.

7. Using the cluster

Finally, let's work out why and how we can use a cluster in everyday life.
First of all, remember once and for all: a cluster pays off only when your network contains a fair number of machines that frequently sit idle and you want to use their resources, for example for building KDE or for any other serious workloads. With a cluster of 10 machines you can simultaneously compile up to 10 heavy programs in the same C++, or crack some password without interrupting that process for a second, regardless of the load on your own computer. And in general, it is simply interesting ;-).

8. Conclusion

In conclusion, I want to say that this article does not cover all the capabilities of MOSIX, simply because I have not gotten to them yet. If I do - expect a sequel. :-)
