Organization of corporate networks based on VPN: construction, management, security


Introduction

1. Design of modern corporate networks

2. Main characteristics of corporate computer networks

2.1 Network performance

2.2 Bandwidth

2.3 Reliability

2.4 Network manageability

2.5 Compatibility or integrability

2.6 Extensibility and scalability

2.7 Transparency and support for different types of traffic

3. Organization of corporate networks

4. Stages of organizing computer networks

5. The role of the Internet in corporate networks

5.1 Potential dangers associated with connecting a corporate network to the Internet

5.2 Software and hardware-software methods of protection

Conclusion

Bibliography

Introduction

Our country is moving towards general computerization. The scope of application of computers in the national economy, science, education, and everyday life is expanding rapidly, and their production is growing, from powerful mainframes to PCs, minicomputers and microcomputers. But the capabilities of individual computers are limited, so there is a need to unite them into a single network, connecting them with large computers and computing centers where databases and data banks are located, where calculations of varying complexity can be carried out within a limited time, and where stored information can be obtained.

Today even a small organization with only a few computers cannot imagine its work without a computer network.

Combining stand-alone computers into a network made it possible to obtain a number of advantages, including shared use of expensive supercomputers, peripheral equipment, and software.

The network has given users access to a large number of diverse resources, the opportunity to communicate and relax, to browse the Internet, to make free calls to other countries, to participate in trading on stock exchanges, to earn money, and so on.

The effective work of firms, companies, enterprises, and higher and secondary educational institutions today can no longer be achieved without technical means that optimize production and learning processes, document flow, and office work.

At the present stage of the development and use of corporate networks, issues such as assessing the performance and quality of corporate networks and their components, and optimizing existing or planned corporate networks, have become especially important.

The performance and throughput of a corporate network are determined by a number of factors: the choice of servers and workstations, communication channels, network equipment, the network data transfer protocol, the network operating systems and the operating systems of workstations and servers and their configurations, the division of database files among the servers on the network, the organization of a distributed computing process, and the protection, maintenance and restoration of operability in the event of faults and failures, etc.

The task set in this course work is to characterize corporate computer networks and their organization.

To achieve this goal, the following objectives are addressed in the course work:

1. Examine the design of modern corporate networks.

2. Identify the main characteristics of corporate computer networks:

3. Network performance

4. Bandwidth

5. Reliability

6. Network manageability

7. Compatibility or integrability

8. Extensibility and scalability

9. Transparency and support for different types of traffic

10. Describe the organization of corporate networks.

11. Highlight the stages of organizing computer networks.

12. Description of the network being developed

13. Development of an addressing scheme

14. Selection of active equipment

15. Selecting switches

16. Selecting routers

17. Determine the role of the Internet in corporate networks:

18. Potential dangers associated with connecting a corporate network to the Internet:

19. Software and hardware-software methods of protection

1. Design of modern corporate networks

A corporate network is a network whose main purpose is to support the operation of the specific enterprise that owns it. The users of a corporate network are exclusively the employees of that enterprise.

A corporate network is a communications system owned and/or operated by an organization in accordance with that organization's rules. A corporate network differs from the network of, say, an Internet provider in that the rules for allocating IP addresses, working with Internet resources, and so on are uniform for the entire corporate network, whereas a provider controls only the backbone sections of its network, allowing its customers to manage their own portions of the network independently; these portions may either be part of the provider's address space or be hidden behind one or several provider addresses by the network address translation mechanism.

A corporate network is viewed as a complex system consisting of several interacting layers. At the base of the pyramid representing the corporate network lies a layer of computers - the centers for storing and processing information - and a transport subsystem (Fig. 1) that ensures reliable transmission of information packets between the computers.

Fig. 1. Hierarchy of corporate network layers

A layer of network operating systems runs on top of the transport subsystem; it organizes the work of applications on each computer and makes that computer's resources available for shared use through the transport subsystem.

Various applications run on top of the operating systems, but because of the special role of database management systems, which store the core corporate information in structured form and perform basic search operations on it, this class of system applications is usually allocated to a separate layer of the corporate network.

At the next level there are system services that, using the DBMS as a tool for searching for the required information among the millions and billions of bytes stored on disks, provide users with this information in a form convenient for decision-making, and also perform some information-processing procedures that are common to enterprises of all types. These services include the WWW service, e-mail systems, groupware systems and many others.

The upper level of the corporate network is occupied by special software systems that implement tasks specific to a given enterprise or type of enterprise. Examples of such systems include banking automation systems, accounting systems, computer-aided design systems, process control systems, and so on.

The ultimate purpose of the corporate network is embodied in these top-level application programs, but for them to operate successfully the subsystems of all the other layers must, of course, perform their functions correctly.

2. Main characteristics of corporate computer networks

A number of requirements are imposed on corporate computer networks (intranets), as on other types of computer networks. The main requirement is that the network fulfill its primary function: providing users with the possibility of access to the shared resources of all computers connected to the network. All other requirements - performance, reliability, fault tolerance, security, manageability, compatibility, extensibility, scalability, transparency and support for different types of traffic - are subordinate to this main task.

2.1 Network performance

Network performance is one of the main properties of corporate networks; it is provided by the ability to parallelize work among several network elements. Network performance is measured using two types of indicators: time indicators, which evaluate the delay the network introduces when exchanging data, and throughput indicators, which reflect the amount of information transmitted by the network per unit of time. These two types of indicators are mutually inverse, and knowing one of them, the other can be estimated.

To evaluate network performance, its main characteristics are used:

· response time;

· throughput;

· transmission delay and delay variation.

Response time is used as the time characteristic of network performance. The term "response time" can be used in a very broad sense, so in each particular case it is necessary to clarify what is meant by it. In general, response time is defined as the interval between the occurrence of a user request to some network service and the receipt of the result of that request, as shown in Fig. 2.1.

Fig. 2.1. Response time: the interval between request and result

Obviously, the meaning and significance of this indicator depend on the type of service being accessed, on which user is accessing which server, and on the current state of other network elements: the load on the segments through which the request passes, the load on the server, and so on.

The response time consists of several components:

· the time to prepare the request on the client computer;

· the time to transmit the request between the client and the server through network segments and intermediate communication equipment;

· the request processing time on the server;

· the time to transfer the results from the server to the client;

· the processing time of the results received from the server on the client computer.

Below are some examples of how a response time metric can be defined, illustrated in Fig. 2.2.

Fig. 2.2. Network performance indicators

In the first example, response time is understood as the time that passes from the moment the user accesses the FTP service to transfer a file from server 1 to client computer 1 until the transfer completes. This time clearly has several components; significant contributions come from the time to process the file-transfer request on the server, the time to process the file fragments received in IP packets on the client computer, and the time to transmit the packets between the server and the client computer over Ethernet within a single coaxial segment.

To assess network performance more accurately, it makes sense to exclude from the response time the components that correspond to non-network data processing: searching for the required information on the disk, writing it to the disk, and so on. The time remaining after these reductions can be considered another definition of network response time, at the application level.

Variants of this criterion are response times measured under different but fixed network states:

1. Completely unloaded network. The response time is measured under conditions when only client 1 accesses server 1, that is, there is no other activity on the network segment connecting server 1 with client 1: it carries only the frames of the FTP session whose performance is being measured. Traffic may circulate in other network segments, as long as its frames do not reach the segment in which the measurements are taken. Since a completely unloaded segment is rare in a real network, this version of the indicator has limited applicability: good values only show that the software and hardware of the two nodes and the segment are capable of working under light load.

2. Loaded network. This is the most interesting case: testing the FTP service for a specific server and client while other nodes and services are also operating on the network. However, measuring performance criteria under such conditions has its own difficulties: there can be far too many load variants in the network, so measurements must be taken under some typical operating conditions. Since network traffic is bursty and its characteristics vary significantly with the time of day and the day of the week, determining a typical load is a laborious procedure requiring long-term measurements on the network. If the network is only being designed, estimating the typical load is even more difficult.

In the second example, the network performance criterion is the delay between the transmission of an Ethernet frame to the network by the network adapter of client computer 1 and its arrival at the network adapter of server 3. This criterion also belongs to the "response time" class, but corresponds to a service of the lower, data link layer. Because Ethernet is a datagram protocol that does not establish connections and has no notion of a "response", the response time here means the time it takes a frame to travel from the source node to the destination node. The frame transmission delay in this case includes the time the frame propagates along the initial segment, the time the switch transfers it from segment A to segment B, the time the router transfers it from segment B to segment C, and the time the repeater transfers it from segment C to segment D. Criteria related to the lower network layers characterize the quality of the network's transport services well and are more informative for network integrators because they contain no information, redundant for them, about the operation of upper-layer protocols.

When network performance is assessed not for individual pairs of nodes but for all nodes in the aggregate, two types of criteria are applied: weighted-average and threshold.

The weighted-average criterion is built from the response times of all (or some) clients when interacting with all (or some) servers of the network for a specific service, that is, an expression of the form:

(Σᵢ Σⱼ Tᵢⱼ) / (n × m),

where Tᵢⱼ is the response time of the i-th client when contacting the j-th server, n is the number of clients, and m is the number of servers. If averaging is also performed over services, one more summation is added to the expression - over the number of services considered. Optimizing the network by this criterion consists of finding parameter values at which the criterion takes its minimum value, or at least does not exceed a specified number.

The threshold criterion reflects the worst-case response time over all admissible combinations of clients, servers and services:

maxᵢⱼₖ Tᵢⱼₖ,

where i and j have the same meaning as in the first case, and k indicates the type of service. Optimization can aim either at minimizing the criterion or at reaching a specified value considered sufficient from a practical point of view.
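As an illustration of the two criteria, the following sketch computes them for a small hypothetical matrix of measured response times (all names and values are invented for the example):

```python
# Sketch: weighted-average and threshold response-time criteria
# (hypothetical measurements; names and values are illustrative only).
response_times = {                      # T[i, j] = response time of client i to server j, seconds
    ("client1", "server1"): 0.12,
    ("client1", "server2"): 0.30,
    ("client2", "server1"): 0.18,
    ("client2", "server2"): 0.25,
}

n_clients = len({i for i, _ in response_times})
m_servers = len({j for _, j in response_times})

# Weighted-average criterion: (sum over i, j of T_ij) / (n * m)
average_criterion = sum(response_times.values()) / (n_clients * m_servers)

# Threshold criterion: worst case over all client/server pairs
threshold_criterion = max(response_times.values())

print(f"average = {average_criterion:.3f} s, worst case = {threshold_criterion:.3f} s")
```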

2.2 Bandwidth

Bandwidth (throughput) reflects the amount of data transmitted by the network or part of it per unit of time. A distinction is made between average, instantaneous and maximum throughput.

Average throughput is calculated by dividing the total amount of data transferred by the time of transmission, and a fairly long time interval is selected - an hour, a day or a week.

Instantaneous throughput differs from average throughput in that a very small time interval is selected for averaging - say, 10 ms or 1 s.

The maximum throughput is the highest instantaneous throughput recorded during the tracking period.
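A minimal sketch of how these three indicators can be computed from traffic samples (the byte counts and the 1-second sampling interval below are assumptions for the example, not values from the text):

```python
# Sketch: average, instantaneous and maximum throughput from traffic samples.
samples = [125_000, 250_000, 90_000, 310_000, 180_000]   # bytes observed in each interval (hypothetical)
interval = 1.0                                           # sampling interval, seconds

instantaneous = [8 * b / interval for b in samples]      # bit/s within each interval
average = 8 * sum(samples) / (interval * len(samples))   # bit/s over the whole observation period
maximum = max(instantaneous)                             # highest instantaneous value recorded

print(f"average = {average / 1e6:.2f} Mb/s, maximum = {maximum / 1e6:.2f} Mb/s")
```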

The main task for which any network is built is the rapid transfer of information between computers. Consequently, criteria related to the capacity of a network or part of a network perfectly reflect the quality of the network’s performance of its main function.

There are as many ways of defining criteria of this type as there are for the "response time" class. The options may differ in the chosen unit of measurement of transmitted information, in the nature of the data considered (user data only, or user data together with service data), in the number of measurement points, and in the method of averaging the results for the network as a whole. Let us look at the different ways of constructing a throughput criterion in more detail.

Criteria that differ in the unit of measurement of transmitted information. Packets (or frames, later these terms will be used as synonyms) or bits are traditionally used as a unit of measurement of transmitted information. Therefore, throughput is measured in packets per second or bits per second.

Since computer networks operate on the principle of packet (frame) switching, measuring the amount of transmitted information in packets makes sense, especially as the throughput of communication equipment operating at the data link and higher layers is also most often measured in packets per second. However, because packet size is variable (this is true of all protocols except ATM, which has a fixed cell size of 53 bytes), measuring throughput in packets per second involves some uncertainty: packets of what protocol and what size are meant? Most often, packets of the Ethernet protocol are meant, as the most common one, with the minimum frame size of 64 bytes. Minimum-length packets are chosen as the reference because they create the heaviest operating mode for communication equipment: the processing performed on each incoming packet depends only weakly on its size, so per unit of transferred information, packets of minimum length require considerably more processing operations than packets of maximum length.

Measuring throughput in bits per second (for local networks, speeds measured in millions of bits per second - Mb/s are more typical) gives a more accurate estimate of the speed of transmitted information than when using packets.

Criteria that differ in how service information is taken into account. Any protocol has a header that carries service information and a data field that carries the information considered user data for that protocol. For example, in a minimum-size Ethernet frame, 46 bytes (out of 64) form the data field, and the remaining 18 bytes are service information. When measuring throughput in packets per second it is impossible to separate user information from service information, but with bit-based measurement it is possible.
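For the minimum-size Ethernet frame described above, the shares of user and service information can be computed directly (a trivial sketch using only the figures from the text):

```python
# Sketch: share of user data in a minimum-size Ethernet frame (figures from the text).
frame_size = 64          # bytes, minimum Ethernet frame
user_data = 46           # bytes of payload in that frame
overhead = frame_size - user_data

print(f"user data: {user_data / frame_size:.1%}, service information: {overhead / frame_size:.1%}")
# user data: 71.9%, service information: 28.1%
```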

If throughput is measured without separating user and service information, the task of choosing a protocol or protocol stack for a given network cannot be posed correctly. Even if replacing one protocol with another yields a higher raw network throughput, this does not mean that the network will work faster for end users: if the share of service information per unit of user data differs between the protocols, the nominally slower variant of the network may turn out to be the better choice.

If the protocol type does not change when setting up the network, then you can apply criteria that do not separate user data from the general flow.

When testing network throughput at the application level, it is easiest to measure throughput in terms of user data. To do this, simply measure the time it takes to transfer a file of known size between the server and the client and divide the file size by that time. Measuring total throughput, including service information, requires special tools: protocol analyzers, or SNMP or RMON agents built into operating systems, network adapters or communication equipment.
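A minimal sketch of such an application-level measurement (the URL is a hypothetical test file; any file-transfer service, FTP or HTTP, would be measured the same way):

```python
# Sketch: application-level throughput = transferred file size / transfer time.
import time
import urllib.request

url = "http://server.example/testfile.bin"     # hypothetical test file location

start = time.monotonic()
with urllib.request.urlopen(url) as response:
    data = response.read()                     # download the whole file
elapsed = time.monotonic() - start

throughput_mbps = 8 * len(data) / elapsed / 1e6
print(f"user-data throughput: {throughput_mbps:.2f} Mb/s")
```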

Criteria that differ in the number and location of measurement points. Throughput can be measured between any two nodes or points of the network, say between client computer 1 and server 3 in the example shown in Fig. 2.2. The resulting throughput values will differ, even under the same network operating conditions, depending on which two points the measurements are taken between. Because a large number of user computers and servers operate on the network simultaneously, a complete picture of network throughput is given by the set of throughputs measured for different combinations of interacting computers - the so-called traffic matrix of the network nodes. There are special measurement tools that record the traffic matrix for every network node.

Because data on its way to the destination node usually passes through several intermediate processing stages, the throughput of an individual intermediate network element - a channel, a segment or a communication device - can also be considered a performance criterion.

Knowing only the overall throughput between two nodes does not show how it could be increased, because the overall figure does not reveal which of the intermediate stages of packet processing slows the network down the most. Data on the throughput of individual network elements is therefore useful when deciding how to optimize the network.

In the example under consideration, packets on the path from client computer 1 to server 3 pass through the following intermediate network elements:

Segment A → Switch → Segment B → Router → Segment C → Repeater → Segment D.

Each of these elements has a certain throughput, so the total network throughput between computer 1 and server 3 equals the minimum of the throughputs of the route's elements, and the transmission delay of a single packet (one of the definitions of response time) equals the sum of the delays introduced by each element. To increase the throughput of such a composite path, attention should first be paid to the slowest elements; in this case that element is most likely the router.
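A small sketch of this bottleneck rule for a composite path (the per-element throughput and delay figures are hypothetical, chosen only so that the router dominates, as the text suggests):

```python
# Sketch: composite-path characteristics from per-element figures (hypothetical values).
# Path throughput is limited by the slowest element; one-packet delay is the sum of element delays.
elements = {                 # (throughput in Mb/s, delay in ms)
    "segment A": (100, 0.05),
    "switch":    (95,  0.01),
    "segment B": (100, 0.05),
    "router":    (40,  0.50),
    "segment C": (100, 0.05),
    "repeater":  (100, 0.002),
    "segment D": (100, 0.05),
}

path_throughput = min(t for t, _ in elements.values())    # bottleneck element
path_delay = sum(d for _, d in elements.values())          # total per-packet delay

print(f"path throughput = {path_throughput} Mb/s, one-packet delay = {path_delay:.3f} ms")
```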

The overall network throughput can be defined as the average amount of information transmitted between all network nodes per unit of time, measured either in packets per second or in bits per second. When a network is divided into segments or subnets, the total network throughput equals the sum of the subnets' throughputs plus the throughput of the inter-segment or inter-network links.

Transmission delay is defined as the delay between the moment a packet arrives at the input of some network device or part of the network and the moment it appears at the output of this device.

2.3 Reliability

Reliability is the ability to operate correctly for an extended period of time. This quality has three components: reliability proper (freedom from failures), availability, and serviceability.

Improving reliability consists of preventing faults and failures through the use of electronic circuits and components with a high degree of integration, reducing interference levels, using circuits in lighter operating modes, ensuring proper thermal conditions for their operation, and improving the methods of assembling equipment. Reliability is measured by the failure rate and the mean time between failures. The reliability of networks, as distributed systems, is largely determined by the reliability of their cable systems and switching equipment - connectors, cross-connect panels, switching cabinets, etc. - which provide the actual electrical or optical connectivity between individual nodes.

Increasing availability means suppressing, within certain limits, the impact of faults and failures on system operation by means of error monitoring and correction tools, as well as means of automatically restoring the circulation of information in the network after a fault is detected. Increasing availability is, in effect, a struggle to reduce system downtime.

The criterion for assessing availability is the availability factor, which equals the proportion of time the system is in a working state and can be interpreted as the probability that the system is operational. The availability factor is calculated as the ratio of the mean time between failures to the sum of that value and the mean time to repair. Systems with high availability are also called fault-tolerant systems.
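As a minimal illustration (the MTBF and MTTR figures below are hypothetical, not taken from the text), the availability factor and the downtime it implies can be computed as follows:

```python
# Sketch: availability factor from MTBF and MTTR (hypothetical figures).
mtbf_hours = 2000.0      # mean time between failures
mttr_hours = 0.5         # mean time to repair

availability = mtbf_hours / (mtbf_hours + mttr_hours)
downtime_per_year_hours = (1 - availability) * 24 * 365

print(f"availability = {availability:.5f}, expected downtime ≈ {downtime_per_year_hours:.1f} h/year")
```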

The main method for increasing availability is redundancy, on the basis of which various options for fault-tolerant architectures are implemented. Computer networks include a huge number of elements of different types, and to ensure fault tolerance, redundancy is needed across all key network elements.

If we consider the network only as a transport system, then redundancy must exist for all backbone routes of the network, that is, routes that are common to a large number of network clients. Such routes are traditionally routes to corporate servers - database servers, Web servers, mail servers, etc. Consequently, in order to organize fault-tolerant operation, all network elements through which such routes pass must be reserved: there must be backup cable connections that can be used if one of the main cables fails, all communication devices on the main routes must either be implemented according to a fault-tolerant scheme with redundancy of all its main components, or for the entire communication device there must be a backup similar device.

Switching from the primary connection to the backup, or from the primary device to the backup, can occur either automatically or manually, with the participation of the administrator. Clearly, automatic switchover improves the system availability factor, because downtime is then considerably shorter than with human intervention. To perform automatic reconfiguration procedures, the network needs intelligent communication devices as well as a centralized management system that helps the devices detect network failures and respond to them appropriately.

A high degree of network availability can be ensured when procedures for testing the operability of network elements and switching to backup elements are built into the communication protocols. An example of such a protocol is FDDI, in which the physical links between nodes and hubs are tested continuously and, if they fail, the links are automatically reconfigured using the secondary, backup ring.

There are also special protocols that support network fault tolerance, for example the Spanning Tree protocol, which performs an automatic transition to redundant links in a network built on bridges and switches.

There are different gradations of fault-tolerant computer systems, which include computer networks. Here are some generally accepted definitions:

· high availability - characterizes systems built from conventional computing technology, using redundant hardware and software, with a recovery time of 2 to 20 minutes;

· fault tolerance - a characteristic of systems that have redundant hardware for all functional units, including processors, power supplies, input/output subsystems and disk subsystems, and whose recovery time after a failure does not exceed one second;

· continuous availability - the property of systems that also recover within one second but, unlike fault-tolerant systems, eliminate not only downtime caused by failures but also planned downtime for system upgrades or maintenance, with all such work carried out online. An additional requirement for continuous-availability systems is the absence of degradation: the system must maintain a constant level of functionality and performance regardless of any failures that occur.

Fundamental to reliability theory are the problems of reliability analysis and synthesis. The first consists of calculating quantitative reliability indicators of an existing or designed system in order to determine whether it meets the requirements. The goal of reliability synthesis is to ensure the required level of system reliability.

To assess the reliability of complex systems, the following set of characteristics is used:

· Availability - indicates the proportion of time during which the system can be used. Availability can be improved by introducing redundancy into the system design. For a network to be considered highly reliable, it must at a minimum have high availability, it must ensure the preservation of data and protect it from corruption, and it must maintain data consistency (for example, if several copies of data are stored on several file servers for reliability, their identity must be continuously maintained).

· Security - the ability of the system to protect data from unauthorized access.

· Fault tolerance. In networks, fault tolerance refers to the ability of a system to hide the failure of its individual elements from the user. In a fault-tolerant system, the failure of one of its elements leads only to a slight reduction in the quality of its operation (graceful degradation) rather than to a complete stop, and the system as a whole continues to perform its functions;

· Probability of delivering a packet to the destination node without distortion.

· Along with this characteristic, other indicators can be used:

· probability of packet loss;

· probability of distortion of an individual bit of transmitted data;

· ratio of lost packets to delivered ones.

The reliability of any corporate network rests on the reliability of its communication network (CN); however, ensuring high reliability is not an end in itself but a means of achieving maximum network performance. The level of reliability at which the CN performance indicator is maximized is optimal for it. This level is determined by many factors, including the purpose of the system, its structure, the losses caused by the loss of a service request, the control algorithms used, the reliability of the system's elements, their cost, operating data, and so on. The optimal level of CN reliability is determined at the design stage of the higher-order system of which the CN is a subsystem.

At the stage of operating an existing CN, the required level of reliability is first sought using the network's internal resources, without introducing structural redundancy, and comes down to forming, for each communicating pair of nodes, a set of routes that provides the required level of reliability.

The formation of a set of routes is carried out iteratively, and at each step, for the set formed at the beginning of this step, the probability of a successful session is calculated. If this probability is not less than the required one, the process ends.

The formation of the initial set of routes can be carried out using two methods:

- The first method: the user himself includes routes selected on the basis of some criterion, for example past experience of using them.

- The second method is used when the user cannot form this set independently. In this case a certain number of suitable routes (usually no more than ten) is selected, from which the user chooses a subset at his discretion. If the reliability indicator of the subnetwork formed in this way is lower than required, additional suitable routes (perhaps just one) are selected from the remaining set, the resulting connectivity probability is evaluated, and so on.

2.4 Network manageability

Network manageability is the ability to centrally monitor the state of the network's main elements, to identify and resolve problems that arise during network operation, to analyze performance, and to plan network development. In other words, it is the ability of maintenance personnel to interact with the network in order to assess the performance of the network and its elements, configure parameters, and change how the network operates.

A good management system monitors the network and, when it detects a problem, initiates an action, corrects the situation, and notifies the administrator of what happened and what steps were taken. At the same time, the management system must accumulate data that can be used to plan network development.

The control system must be independent from the manufacturer and have a convenient interface that allows you to perform all actions from one console.

The International Organization for Standardization (ISO) has defined the following five management categories that a network management system should include:

· Configuration management. Within the boundaries of this category, the parameters that determine the state of the network are established and managed;

· Failure handling. There is identification, isolation and correction of network problems;

· Accounting management. The main functions are recording and reporting information on the use of network resources;

· Performance management. This is where you review and control the speed at which the network transmits and processes data;

· Security management. The main functions are controlling access to network resources and protecting the information circulating in the network.

2.5 Compatibility or integrability

Compatibility, or integrability, means that the network can incorporate a wide variety of software and hardware: different operating systems supporting different communication protocol stacks can coexist in it, and it can run hardware and applications from different manufacturers.

A network consisting of elements of different types is called heterogeneous, and if a heterogeneous network operates without problems, it is said to be integrated.

2.6 Extensibility and scalability

Extensibility means the ability to add individual network elements (users, computers, applications, services) relatively easily, to increase the length of network segments, and to replace existing equipment with more powerful equipment. It is important to note that ease of extension can sometimes be ensured only within rather narrow limits. For example, an Ethernet local network built on a single segment of thick coaxial cable is easily extensible in the sense that new stations can be connected without difficulty. However, such a network has a limit on the number of stations: it should not exceed 30-40. Although the segment physically allows more stations to be connected (up to 100), doing so usually reduces network performance sharply. The presence of such a limitation is a sign of poor scalability despite good extensibility.

Scalability means that the network allows the number of nodes and the length of links to be increased within very wide limits without degrading network performance. Ensuring scalability requires the use of additional communication equipment and a specially structured network.

For example, a multi-segment network built using switches and routers and having a hierarchical connection design has excellent scalability. Such a network can include several thousand computers and at the same time provide all network users with the necessary quality of service.

2.7 Transparency and support for different types of traffic

Transparency is the property of a network that hides the details of its internal structure from users, thereby simplifying their work on the network.

Network transparency is achieved when the network appears to users not as a collection of individual computers interconnected by a complex system of cables, but as a single conventional computer with time-sharing.

Support for different types of traffic is a key characteristic of a network that determines its capabilities. The main types of traffic are:

· computer data traffic;

· traffic of multimedia data representing speech and video in digital form.

Networks that carry both types of traffic can be used to organize video conferences, video-based training and entertainment, and so on. Such networks are significantly more complex in their software and hardware and in the organization of their operation than networks that carry only computer data or only multimedia traffic.

Computer data traffic is characterized by a very uneven intensity of messages entering the network in the absence of strict requirements for the synchronization of delivery of these messages. All computer communication algorithms, corresponding protocols and communication equipment were designed specifically for this “pulsating” nature of traffic. The need to transmit multimedia traffic requires fundamental changes to both protocols and equipment. Today, virtually all new protocols provide support for multimedia traffic to one degree or another.

3. Organization of corporate networks

When developing a corporate network, every measure must be taken to minimize the volume of transmitted data. In other respects, the corporate network should not impose restrictions on which applications transfer information over it or how they process it.

Applications here mean both system software - databases, mail systems, computing resources, file services, and so on - and the tools with which the end user works.

The main tasks of a corporate network are the interaction of system applications located in different nodes and access to them by remote users.

The first task that must be solved when creating a corporate network is organizing the communication channels. Whereas within one city it is possible to rely on renting dedicated lines, including high-speed ones, when it comes to geographically distant nodes the cost of renting channels becomes simply astronomical, while their quality and reliability are often quite low. Fig. 3.1 shows an example of a corporate network that includes local and regional networks, public access networks and the Internet.

A natural solution to this problem is to use existing global networks. In this case it is enough to provide channels from the offices to the nearest network nodes; the global network takes on the task of delivering information between the nodes. Even when creating a small network within one city, one should keep in mind the possibility of subsequent expansion and use technologies compatible with existing global networks. Often the first, or even the only, such network that comes to mind is the Internet.

Fig. 3.1. Integration of different communication channels into a corporate network.

Fig. 3.2 shows several local network topologies.

Fig. 3.2. Methods of connecting computers into a network.

Every network, even the smallest, must have an administrator (supervisor): the person (or group of people) who configures the network and ensures its smooth operation. The administrator's responsibilities include:

· distribution of information among working groups and between specific users;

· creation and support of a universal data bank;

· protecting the network from unauthorized penetration, and protecting information from damage, etc.

If we touch on the technical aspect of building a local computer network, we can highlight the following elements:

· Interface board in user computers. This is a device for connecting a computer to a common LAN cable.

· Cabling. With the support of special cables, physical communication is organized between local network devices.

· Local network protocols. In general terms, protocols are programs that provide data transport between devices connected to a network. Fig. 3.3 schematically shows the principle of operation of any protocol, whether in a local network or on the Internet:

Fig. 3.3. Principle of data transmission over the network.

Network operating system. This is a program that is installed on a file server and serves to provide an interface between users and data on the server.

· File server. It is used to store and host programs and data files that are used for collective access by users.

· Network printing. It allows many local network users to collectively use one or more printing devices.

· Local network protection. Network security is a set of methods used to protect data from damage from unauthorized access or any accident.

· Bridges, gateways and routers. They allow networks to be connected to each other.

· Local network management. This is all that relates to the manager’s tasks listed earlier.

The core function of any local network is sharing information among particular employees, while ensuring two things:

1. Any information must be protected from unauthorized use. That is, any employee should work only with the information to which he has rights, no matter what computer he logged into the network on.

2. Working on the same network and using the same data transmission equipment, network users must not interfere with one another. There is such a thing as network load: the network should be built so that it does not fail and works reasonably quickly for any realistic number of users and requests.

4. Stages of organizing computer networks

A computer network is best represented as a three-level hierarchical model, which includes the following levels of hierarchy:

- core level;

- distribution level;

- access level.

The core level is responsible for high-speed transmission of network traffic; the primary purpose of its nodes is packet switching. Accordingly, technologies that interfere with fast packet switching, such as access lists or policy-based routing, should not be deployed on core-level devices.

At the distribution level, route summarization and traffic aggregation take place. Route summarization means representing several networks as one larger network with a shorter mask. Summarization makes it possible to reduce the routing tables in core-level devices and to isolate the changes that occur inside the larger network.
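A small sketch of route summarization using Python's standard ipaddress module (the four department prefixes below are hypothetical examples, not taken from the addressing tables later in the text):

```python
# Sketch: summarizing several contiguous networks into one route with a shorter mask.
import ipaddress

department_nets = [
    ipaddress.ip_network("10.32.32.0/24"),
    ipaddress.ip_network("10.32.33.0/24"),
    ipaddress.ip_network("10.32.34.0/24"),
    ipaddress.ip_network("10.32.35.0/24"),
]

# collapse_addresses merges adjacent prefixes into the shortest covering set
summary = list(ipaddress.collapse_addresses(department_nets))
print(summary)        # [IPv4Network('10.32.32.0/22')]
```

Upstream routers then need only the single /22 entry instead of four /24 entries, and changes inside the /22 stay hidden from them.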

The access level is necessary to generate network traffic and control access to the network. Access level routers are used to connect individual users (access servers) or individual local networks to a global computer network.

When designing a computer network, two requirements must be met: structure and redundancy.

The first requirement means that the network must have a certain hierarchical design. Above all this concerns the addressing scheme, which must be designed so that subnets can be summarized. This reduces the routing tables and hides topology changes from routers at higher levels.

Redundancy refers to the creation of backup routes. Redundancy improves network reliability, but at the same time it complicates addressing.

Description of the network being developed

A mixed topology was chosen, combining the hierarchical star, ring, and full-mesh ("each with each") topologies.

The core level consists of the organization's three central offices, located in different cities. The routers of these nodes - core routers A, B and C - are interconnected over a wide-area IP-VPN MPLS service, forming a ring network core with redundant paths. A group of servers and router X are connected to each of the core routers through a switch, forming a demilitarized zone through which Internet access is provided. The corporate servers are connected to core router B via a switch. The functions of the distribution layer are performed by the active core-level devices. The campus networks that make up the access layer are connected to each core-level router using campus routers over the same IP-VPN MPLS wide-area service. Each campus consists of three buildings, and the total number of workplaces in them is determined by the assignment.

The access-layer router installed on each campus connects to the local network through the campus switch. The campus servers and the building switches are connected to the same switch, and workgroup switches are connected to the building switches. The topology of the designed network is shown in Fig. 4.1.

Fig. 4.1. Topology of the designed network

Addressing scheme development

The addressing scheme is developed in accordance with the hierarchical principle of computer network design.

The addressing scheme must allow address aggregation: the addresses of lower-level networks must fall within the range of the higher-level network, which has a shorter mask. In addition, room for expanding the address space must be provided at every tier of the hierarchy.

The network is divided into three regions. Each region contains no more than 50 campuses. Each campus has no more than 10 departments, each of which is assigned a subnet. At the lower level of the hierarchy there are host addresses; in the entire division there are no more than 200 hosts.

To distribute addresses within the designed corporate network, we use the 10.0.0.0 private range, which has the largest capacity (24 bits of host address space).
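A sketch of how such a hierarchical plan can be derived programmatically with Python's ipaddress module; the prefix lengths chosen here (/11 per region, /18 per campus, /24 per department) are illustrative assumptions, while the course work's own plan is given in Tables 4.1-4.4:

```python
# Sketch: carving the 10.0.0.0/8 range into regions, campuses and departments
# (prefix lengths are illustrative assumptions, not the plan from the tables).
import ipaddress

corporate = ipaddress.ip_network("10.0.0.0/8")

regions = list(corporate.subnets(new_prefix=11))        # blocks large enough for a region
campuses = list(regions[1].subnets(new_prefix=18))      # campuses inside the second region
departments = list(campuses[0].subnets(new_prefix=24))  # one /24 per department (<= 254 hosts)

print(regions[1])         # 10.32.0.0/11
print(campuses[0])        # 10.32.0.0/18
print(departments[1])     # 10.32.1.0/24
```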

The division of bits in the IP address of the designed corporate network is shown in Fig. 4.2 and in table 4.1.

Fig. 4.2. Distribution of bits in the IP address

Table 4.1. Distribution of bits in the IP address

The regional address ranges are shown in Table 4.2, and the campus addresses for the second region in Table 4.3 (addresses for the other regions are constructed similarly); the addresses of the departments of the first campus of the second region are shown in Table 4.4. Examples of host addresses are given in Table 4.5. The remaining addresses are calculated similarly.

Table 4.2. Region address ranges

Binary code

Address range

10.32.0.1 - 10.63.255.254/12

10.64.0.1 - 10.95.255.254/12

10.96.0.1-10.127.255.254/12

10.128.0.1 - 10.143.255.254/12

Table 4.3. Campus address ranges for region two

Binary code

Address ranges

10.32.33.1 - 10.32.42.254

10.32.65.1 - 10.32.74.254

10.32.97.1-10.32.106.254

10.38.65.1-10.38.74.254

Table 4.4. Department address ranges for the second region of the first campus

Subdivision

Binary code

Address range

10.32.33.1 - 10.32.33.254

10.32.34.1 - 10.32.34.254

10.32.35.1-10.32.35.254

10.32.42.1-10.32.42.254

Table 4.5. Host Address Examples

Table 4.6. Service network addresses

Selecting active equipment

Active equipment is selected in accordance with the requirements of the designed network, taking into account the type of equipment (switch or router) and its characteristics: the number and type of interfaces, the supported protocols, and the throughput. The following must be selected:

- network core routers;

- campus routers;

- Internet access routers;

- campus switches;

- building switches;

- workgroup switches.

Selecting Switches

Workgroup switches are used to directly connect computers to a network. Switches in this group are not required to have high switching speeds, routing support, or other additional functions.

Enterprise switches are used to combine workgroup switches into one network. Because traffic from many users passes through these switches, they are required to have a high switching speed. These switches also perform the functions of routing traffic between virtual subnets.

Selecting Routers

Core routers are designed to quickly route all data flows coming from the lower tiers of the network hierarchy. These are modular routers with high-speed interface modules.

Internet access routers connect small local networks to the public network. These are small modular routers with interfaces for both the local and the public network. In addition to routing packets, such devices perform additional functions such as traffic filtering, VPN organization, and so on.

5. The role of the Internet in corporate networks

Looking inside the Internet, we see that information passes through a large number of entirely independent and mostly non-commercial nodes connected by the most diverse channels and data networks. The rapid growth of services provided on the Internet leads to overloading of nodes and communication channels, which sharply reduces the speed and reliability of information transfer. At the same time, Internet service providers bear no responsibility for the functioning of the network as a whole, and communication channels develop very unevenly, mainly where the state considers it necessary to invest in them. In addition, the Internet ties its users to a single protocol, IP (Internet Protocol). This is fine as long as standard applications that work with this protocol are used; using other systems over the Internet turns out to be difficult and expensive.

...


Corporate information network

“A corporate network is a network whose main purpose is to support the operation of a specific enterprise that owns the network. Users of the corporate network are only employees of this enterprise." The primary purpose of a corporate network is to provide comprehensive information services to enterprise employees, in contrast to a simple local network, which provides only transport services for transmitting information flows in digital form.

Information flows are crucial in the modern world. Today no one needs convincing that the successful operation of any corporate structure requires a reliable and easily managed information system. Every enterprise has internal communications that support interaction between management and its structural divisions, and external relations with business partners, other enterprises, and authorities. Both the external and the internal communications of an enterprise can be regarded as informational. At the same time, an enterprise can be viewed as an organization of people united by common goals, and various mechanisms are used to help achieve those goals. One such mechanism is effective production management, based on obtaining information, processing it, making decisions, and communicating them to those who carry them out. The most important part of management is decision making, and making the right decision requires complete, timely and reliable information.

The completeness of information is characterized by its volume, which must be sufficient for making a decision. Information must be timely, i.e. the state of affairs must not change while it is being transmitted and processed. The reliability of information is determined by the degree to which its content corresponds to the objective state of affairs. At the workplace of an enterprise manager or employee, information must arrive in a form that facilitates its perception and processing. But how can a high-quality information system be organized at minimal cost? Which equipment should be preferred?

A significant part of the telecommunications equipment market is occupied by hardware designed to provide corporate structures with internal communication and data transfer services, and these concepts cover a fairly wide range of modern services. Using modern PBX technologies, it is possible to deploy a digital network with integrated ISDN services, give users access to databases and the Internet, organize a micro-cellular communication system of the DECT standard, and introduce video conferencing or intercom modes.

Modern PBXs use digital technologies and a modular design, are relatively reliable, provide the full range of basic functions (call routing, administration, etc.), and allow additional equipment to be connected, such as voice mail and billing systems.

Any organization is a collection of interacting elements (divisions), each of which may have its own structure. The elements are interconnected functionally, i.e. they perform certain types of work within a single business process, and also informationally, exchanging documents, faxes, written and oral instructions, and so on. In addition, these elements interact with external systems, and this interaction can likewise be both informational and functional. This holds for almost any organization, whatever its type of activity: a government agency, a bank, an industrial enterprise, a commercial firm, and so on.

Such a general view of the organization allows us to formulate some general principles for constructing corporate information systems, i.e. information systems throughout the organization.

A corporate network is a system that provides information transfer between the various applications used within the corporation. It is the network of an individual organization. More specifically, a corporate network can be any network that operates over TCP/IP and uses Internet communication standards, together with service applications that deliver data to network users. For example, a company can set up a Web server to publish announcements, production schedules and other official documents, which employees then access using Web browsers.

Web servers of a corporate network can provide users with services similar to Internet services, for example, working with hypertext pages (containing text, hyperlinks, graphics and sound recordings), providing the necessary resources upon requests from web clients, as well as accessing databases.

A corporate network is, as a rule, geographically distributed, i.e. it unites offices, divisions and other structures located at a considerable distance from each other. The principles by which a corporate network is built are quite different from those used when creating a local network, chiefly because the wide-area channels connecting its sites are relatively slow and expensive. This limitation is fundamental, and when designing a corporate network all measures should be taken to minimize the volume of transmitted data. In other respects, the corporate network should not impose restrictions on which applications transfer information over it or how they process it. An example of a corporate network is shown in Figure 9.

The process of creating a corporate information system

We can highlight the main stages of the process of creating a corporate information system:

Conduct an information survey of the organization;

Based on the survey results, select the system architecture and the hardware and software for its implementation;

Based on the survey results, select and/or develop the key components of the information system, including:

Corporate database management system;

Automation system for business operations and document flow;

Electronic document management system;

Special software;

Decision support systems.

When designing the organization's corporate information network, one should be guided by the principles of consistency, standardization, compatibility, development and scalability, reliability, security, and efficiency.

The principle of consistency implies that when designing and creating a CIS, its integrity must be maintained by creating reliable communication channels between subsystems.

The principle of standardization provides for the use of standard equipment and materials that comply with international ISO standards, FCC, State Standards of the Republic of Kazakhstan.

Figure 9. Example of a corporate network

The principle of compatibility, directly related to the principle of standardization, ensures the compatibility of equipment, interfaces and data transfer protocols across the organization and the global network.

The principle of development (scalability), or openness, of a CIS means that even at the design stage the CIS should be created as an open system that allows subsystems and components to be added, improved, and updated, and other systems to be connected. The system will evolve by adding new subsystems and components, modernizing existing ones, and replacing the computing equipment in use with more advanced hardware.

The principle of reliability means duplicating important subsystems and components in order to ensure uninterrupted operation of the CIS, and keeping a stock of materials and equipment for prompt repair and replacement.

The principle of security implies using software, hardware, and organizational measures when constructing a CIS that prevent unauthorized access to equipment and information by external and internal parties who have not been granted access.

The principle of efficiency is to achieve a rational ratio between the costs of designing and creating a CIS and the benefits obtained from its practical implementation and operation. The economic rationale for creating and implementing a CIS is to provide effective and prompt information exchange between the divisions of the organization for resolving production, financial, and economic issues, which shows up as reduced spending on telephone calls and postal correspondence.

We will analyze the specific implementation of the above later at the stage of designing the computer information network of the organization under study.

Introduction. From the history of network technologies

The concept of "Corporate networks". Their main functions

Technologies used in creating corporate networks

Structure of the corporate network. Hardware

Methodology for creating a corporate network

Conclusion

List of used literature

Introduction.

From the history of network technologies.

The history and terminology of corporate networks are closely tied to the origins of the Internet and the World Wide Web. It is therefore worth recalling how the very first network technologies appeared and led to today's corporate (departmental), territorial, and global networks.

The Internet began in the 1960s as a project of the US Department of Defense. The growing role of the computer created a need both to share information between different buildings and local networks and to keep the overall system functioning when individual components fail. The Internet is based on a set of protocols that let independently administered networks route and forward information to each other; if one network node is unavailable for some reason, the information reaches its final destination through other nodes that are in working order at that moment. The protocol developed for this purpose is the Internet Protocol (IP); the acronym TCP/IP refers to the protocol suite built around it.

Since then, the IP protocol became the accepted way of sharing information within the military departments. Because many of their projects were carried out by research groups at universities around the country, and the method of exchanging information between heterogeneous networks proved very effective, the use of this protocol quickly spread beyond the military. It began to be used by NATO research institutes and European universities. Today the IP protocol, and therefore the Internet, is a universal global standard.

In the late eighties, the Internet faced a new problem. At first, the information consisted either of e-mail messages or of simple data files, and suitable protocols had been developed for transferring them. Then a whole series of new file types emerged, usually grouped under the name multimedia, containing images and sound as well as hyperlinks that let users navigate both within a single document and between different documents containing related information.

In 1989, the elementary particle physics laboratory of the European Center for Nuclear Research (CERN) launched a project whose goal was to create a standard for transmitting this kind of information over the Internet. Its main components were formats for multimedia and hypertext files and a protocol for retrieving such files over the network. The file format was named HyperText Markup Language (HTML); it was a simplified version of the more general Standard Generalized Markup Language (SGML). The request-servicing protocol was called HyperText Transfer Protocol (HTTP). In outline, it works like this: a server running a program that implements the HTTP protocol (an HTTP daemon) sends HTML files in response to requests from Internet clients.

These two standards formed the basis for a fundamentally new way of accessing computer information. Standard multimedia files can now not only be fetched on a user's request but can also exist and be displayed as parts of another document. Since a file contains hyperlinks to other documents that may reside on other computers, the user can reach that information with a single click of the mouse. This essentially removes the complexity of accessing information in a distributed system.

Multimedia files in this technology are traditionally called pages; a page is also the unit of information sent to the client machine in response to each request. The reason is that a document usually consists of many separate parts interconnected by hyperlinks. This division lets the user decide which parts to view, saves time, and reduces network traffic. The software product the user works with directly is usually called a browser (from the word browse) or a navigator. Most browsers can automatically retrieve and display a particular page containing links to the documents the user accesses most often. This page is called the home page, and there is usually a separate button for reaching it. Every non-trivial document is usually provided with a similar page, analogous to the table of contents of a book; since this is where one usually starts reading a document, it is often called a home page as well. In general, then, a home page is an index of sorts, an entry point to information of a certain kind, and its name usually identifies that section, for example, Microsoft Home Page.

On the other hand, each document can be reached from many other documents. The entire space of documents linking to one another on the Internet is called the World Wide Web (abbreviated WWW or W3). The document system is completely distributed, and an author has no way to trace all the links to his document that exist on the Internet. The server providing access to a page may log everyone who reads it, but not those who link to it. The situation is the reverse of the printed world: many research fields publish periodic indexes of articles on a topic, yet it is impossible to track everyone who reads a given document. Here we know who has read (accessed) a document, but not who has referred to it. Another interesting consequence is that with this technology it becomes impossible to keep track of all the information available through the WWW: information appears and disappears continuously, with no central control.
However, this is nothing to be afraid of; the same thing happens in the world of printed matter. We do not try to hoard old newspapers when fresh ones arrive every day, and little is lost by not doing so.
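To make the request/response exchange described above concrete, the following sketch fetches a hypothetical page with Python's standard urllib module and prints the HTML it receives; the host name and path are invented for illustration.

```python
# Sketch of an HTTP client fetching an HTML page, assuming Python 3's
# standard urllib; intranet.example.com and /index.html are hypothetical.
from urllib.request import urlopen

URL = "http://intranet.example.com/index.html"    # hypothetical intranet page

with urlopen(URL) as response:
    print(response.status)                        # e.g. 200 if the daemon served the page
    html = response.read().decode("utf-8", errors="replace")

# The body is ordinary HTML markup: text, hyperlinks to other documents,
# and references to embedded multimedia files.
print(html[:200])
```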

Client software products that receive and display HTML files are called browsers. The first graphical browser, Mosaic, was written at the University of Illinois, and many modern browsers are based on it. Thanks to the standardization of protocols and formats, however, any compatible software product can be used. Browsers exist for most major client systems with graphical windowing, including MS Windows, Macintosh, X Window and OS/2. There are also browsers for operating systems without windowing; they display the text fragments of the documents to which access is provided.

The availability of browsers on such disparate platforms is very important. The operating environments on the author's machine, the server, and the client are independent of one another. Any client can retrieve and view documents created with HTML and related standards and delivered through an HTTP server, regardless of the environment in which they were created or where they came from. HTML also supports forms and feedback functions, which means the user interface for both querying and retrieving data goes beyond point-and-click.

A number of companies, including Amdahl, have written interfaces between HTML forms and legacy applications, creating a universal front-end user interface for the latter. This makes it possible to write client-server applications without worrying about coding at the client level; in fact, programs are already emerging that treat the browser as the client. An example is Oracle's WOW interface, which replaces Oracle Forms and Oracle Reports. Although this technology is still very young, it has the potential to change the landscape of information management in the same way that semiconductors and microprocessors changed the world of computers. It lets us split functions into separate modules and simplify applications, taking us to a new level of integration that better aligns business functions with the operation of the enterprise.

Information overload is the curse of our time, and the technologies created to alleviate the problem have only made it worse. This is not surprising: just look at the contents of the trash bins (physical or electronic) of an ordinary information worker. Even leaving aside the inevitable heaps of advertising junk in the mail, most of the information is sent to such an employee simply "in case" he needs it. Add the "untimely" information that will most likely be needed later, and you have the main contents of the trash can. An employee will typically store half of the information that "might be needed" and all of the information that will probably be needed in the future. When the need actually arises, he has to deal with a bulky, poorly structured archive of personal information, with the added difficulty that it is stored in files of different formats on different media. The advent of photocopiers made the situation with "might be needed" information even worse: the number of copies keeps growing instead of shrinking. E-mail only aggravated the problem. Today a "publisher" of information can create a personal mailing list and, with a single command, send an almost unlimited number of copies "in case" they are needed. Some of these distributors realize that their lists are poor, but instead of correcting them they put a note at the top of the message reading something like: "If you are not interested..., delete this message." The letter still clogs the mailbox, and the recipient still has to spend time looking at it and deleting it. The exact opposite of "might be useful" information is "timely" information, or information for which there is actual demand. Computers and networks were expected to help with this type of information, but so far they have not coped with it. Previously, there were two main methods of delivering timely information.

With the first of them, information was distributed across applications and systems. To gain access to it, the user had to learn and then constantly go through many complicated access procedures, and once access was granted, each application required its own interface. Faced with such difficulties, users usually simply gave up on receiving timely information: they might master access to one or two applications, but had no capacity left for the rest.

To solve this problem, some enterprises tried to accumulate all distributed information on a single central system. As a result, the user got a single access method and a single interface. However, since all of the enterprise's requests were processed centrally, these systems grew and became ever more complex. More than ten years have passed, and many of them are still not filled with information because of the high cost of entering and maintaining it. There were other problems as well: the complexity of such unified systems made them difficult to modify and to use. The management tools for such systems were developed to support discrete transaction-processing data, but over the past decade the data we deal with has become much more complex, which makes information support harder. The changing nature of information needs, and how hard change is in this area, has turned these large, centrally managed systems into a bottleneck for enterprise-level requests.

Web technology offers a new approach to on-demand information delivery. Because it supports authoring, publication, and management of distributed information, the new technology does not lead to the same complexities as the older centralized systems. Documents are created, maintained, and published directly by their authors, without asking programmers to create new data-entry forms and reporting programs. With browsers, the user can access and view information from distributed sources and systems through a simple, unified interface without any idea of which servers are actually being accessed. These simple technological changes will revolutionize information infrastructures and fundamentally change how our organizations operate.

The main distinguishing feature of this technology is that control of the flow of information passes from its creator to its consumer. If the user can easily retrieve and review information as needed, it no longer has to be sent "just in case". The publishing process can now be decoupled from automatic distribution; this applies to forms, reports, standards, meeting schedules, sales support materials, training materials, timetables, and the host of other documents that tend to fill our trash bins. For the system to work, what is needed, as noted above, is not only a new information infrastructure but also a new approach, a new culture: as creators of information we must learn to publish it without broadcasting it, and as consumers we must learn to identify and monitor our own information needs, actively and efficiently obtaining information when we need it.

The concept of "Corporate networks". Their main functions.

Before talking about private (corporate) networks, we need to define what these words mean. Lately the phrase has become so widespread and fashionable that it has begun to lose its meaning. In our understanding, a corporate network is a system that ensures the transfer of information between the various applications used in the corporation. Starting from this completely abstract definition, we will consider various approaches to creating such systems and try to fill the concept of a corporate network with concrete content. We also assume that the network should be as universal as possible, that is, allow existing and future applications to be integrated at the lowest possible cost and with the fewest restrictions.

A corporate network is, as a rule, geographically distributed, uniting offices, divisions, and other structures located at a considerable distance from each other; its nodes are often located in different cities and sometimes in different countries. The principles by which such a network is built differ markedly from those used when creating a local network, even one covering several buildings. The main difference is that geographically distributed networks use fairly slow leased communication lines (today tens or hundreds of kilobits per second, sometimes up to 2 Mbit/s). If, when creating a local network, the main costs go to purchasing equipment and laying cable, then in geographically distributed networks the most significant cost element is the rent for the channels, which grows rapidly with the quality and speed of data transmission. This limitation is fundamental, and when designing a corporate network every measure should be taken to minimize the volume of transmitted data. Beyond that, the corporate network should not restrict which applications transfer information over it or how they process it.

By applications we mean both system software - databases, mail systems, computing resources, file services, etc. - and the tools with which the end user works. The main tasks of a corporate network are the interaction of system applications located in various nodes and access to them by remote users.

The first problem that has to be solved when creating a corporate network is the organization of communication channels. If within one city you can count on renting dedicated lines, including high-speed ones, then when moving to geographically distant nodes, the cost of renting channels becomes simply astronomical, and their quality and reliability often turn out to be very low. A natural solution to this problem is to use already existing wide area networks. In this case, it is enough to provide channels from offices to the nearest network nodes. The global network will take on the task of delivering information between nodes. Even when creating a small network within one city, you should keep in mind the possibility of further expansion and use technologies that are compatible with existing global networks.

Often the first, or even the only, such network that comes to mind is the Internet.

Using the Internet in corporate networks

Depending on the tasks being solved, the Internet can be considered at different levels. For the end user it is primarily a worldwide system for providing information and mail services. The combination of new technologies for accessing information, united by the concept of the World Wide Web, with the cheap and widely available global communication system of the Internet has in effect given birth to a new mass medium, often simply called the Net. Anyone who connects to this system perceives it simply as a mechanism that gives access to certain services; how that mechanism is implemented is of no concern to the user.

When the Internet is considered as the basis for a corporate data network, an interesting thing emerges: the Net turns out not to be a single network at all, but exactly what its name says, an interconnection of networks. If we look inside the Internet, we see that information flows through many completely independent and mostly non-commercial nodes, connected through a wide variety of channels and data networks. The rapid growth of services provided on the Internet overloads nodes and communication channels, which sharply reduces the speed and reliability of information transfer. At the same time, Internet service providers bear no responsibility for the functioning of the network as a whole, and communication channels develop extremely unevenly, mainly where the state considers it worth investing. Accordingly, there are no guarantees about the quality of the network, the speed of data transfer, or even simply the reachability of your computers. For tasks where reliability and guaranteed delivery time are critical, the Internet is far from the best solution. In addition, the Internet ties users to one protocol, IP. That is fine when standard applications working with this protocol are used; using any other systems over the Internet turns out to be difficult and expensive. If you need to provide mobile users with access to your private network, the Internet is also not the best solution.

It would seem that there should be no great problem here: there are Internet service providers almost everywhere; take a laptop with a modem, dial in, and work. However, a provider in, say, Novosibirsk has no obligations to you if your contract is with a provider in Moscow. It receives no money from you for its services and, of course, will not grant you access to the network. You would either have to conclude a separate contract with it, which is hardly reasonable for a two-day business trip, or place long-distance calls from Novosibirsk to Moscow.

Another Internet problem that has been widely discussed lately is security. For a private network, it seems natural to protect the transmitted information from prying eyes. The unpredictability of the paths information takes between many independent Internet nodes not only increases the risk that some overly curious network operator will copy your data to his disk (technically this is not difficult), but also makes it impossible to determine where a leak occurred. Encryption tools solve the problem only partially, since they apply mainly to mail, file transfer, and the like; solutions that encrypt information in real time at an acceptable speed (for example, when working directly with a remote database or file server) are hard to obtain and expensive. Another aspect of the security problem again stems from the decentralization of the Internet: there is no one who can restrict access to the resources of your private network. Since it is an open system in which everyone sees everyone, anyone can try to get into your office network and gain access to data or programs. There are, of course, means of protection (commonly called firewalls; the Russian term, borrowed from German, literally means "fire wall"). They should not, however, be considered a panacea, any more than antivirus programs are: any protection can be broken if the gain justifies the cost of breaking it. It should also be noted that a system connected to the Internet can be put out of action without any intrusion into your network: there are known cases of unauthorized access to the management of network nodes, or simply of exploiting features of the Internet architecture to disrupt access to a particular server. Thus, the Internet cannot be recommended as the basis for systems that require reliability and confidentiality. Connecting a corporate network to the Internet makes sense when you need access to the enormous information space that is actually what people call the Net.

A corporate network is a complex system that includes thousands of diverse components: computers of various types, from desktops to mainframes, system and application software, network adapters, hubs, switches and routers, and the cabling system. The main task of system integrators and administrators is to ensure that this cumbersome and very expensive system copes as well as possible with processing the flows of information circulating between the employees of the enterprise and lets them make timely and rational decisions that keep the enterprise alive amid fierce competition. And since life does not stand still, the content of corporate information, the intensity of its flows, and the methods of processing it are constantly changing. The most recent example of a dramatic change in the technology of automated processing of corporate information is in plain sight: it is associated with the unprecedented growth in the popularity of the Internet over the last two or three years. The changes the Internet has brought are many-sided. The WWW hypertext service has changed the way information is presented to people by gathering all the popular kinds of information on its pages: text, graphics, and sound. Internet transport, inexpensive and accessible to almost all enterprises (and, through telephone networks, to individual users), has significantly simplified the task of building a territorial corporate network, while at the same time bringing to the fore the task of protecting corporate data as it travels through a public network with a user population in the millions.

Technologies used in corporate networks.

Before setting out the basics of the methodology for building corporate networks, it is necessary to provide a comparative analysis of technologies that can be used in corporate networks.

Modern data transmission technologies can be classified according to data transmission methods. In general, there are three main methods of data transfer:

circuit switching;

message switching;

packet switching.

All other methods of interaction are essentially evolutionary developments of these. If data transmission technologies are pictured as a tree, for example, the packet switching branch splits into frame switching and cell switching. Recall that packet switching technology was developed more than thirty years ago to reduce overhead and improve the performance of existing data transmission systems. The first packet switching technologies, X.25 and IP, were designed to cope with poor-quality links. As link quality improved, it became possible to use a protocol such as HDLC for information transmission, which found its place in Frame Relay networks. The desire for greater performance and technical flexibility drove the development of SMDS technology, whose capabilities were later extended by the standardization of ATM. One parameter by which technologies can be compared is the guarantee of delivery: X.25 and ATM guarantee reliable delivery of packets (the latter using the SSCOP protocol), while Frame Relay and SMDS operate in a mode where delivery is not guaranteed. A technology may also ensure that data reaches the recipient in the order in which it was sent; otherwise the order must be restored at the receiving end. Packet-switched networks may require a connection to be established in advance or may simply hand data to the network; in the first case, both permanent and switched virtual connections can be supported. Other important parameters are the presence of flow control mechanisms, a traffic management system, mechanisms for detecting and preventing congestion, and so on.

Technologies can also be compared on criteria such as the efficiency of addressing schemes or routing methods. The addressing used may be geographic (a telephone numbering plan), network-wide, or hardware-specific. The IP protocol, for example, uses a 32-bit logical address assigned to networks and subnets. The E.164 addressing scheme is an example of a scheme based on geographic location, and the MAC address is an example of a hardware address. X.25 technology uses the Logical Channel Number (LCN), and switched virtual connections in this technology use the X.121 addressing scheme. In Frame Relay technology, several virtual links can be multiplexed into one physical link, each virtual link being identified by a DLCI (Data-Link Connection Identifier) carried in every transmitted frame. A DLCI has only local significance: the sender may identify a virtual channel by one number while the recipient identifies it by a completely different one. Switched virtual connections in this technology rely on the E.164 numbering scheme. ATM cell headers contain VCI/VPI identifiers, which change as cells pass through intermediate switching systems; switched virtual connections in ATM can use the E.164 or AESA addressing scheme.
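As a small illustration of the 32-bit logical addressing mentioned above, the following sketch uses Python's standard ipaddress module; the address and subnet values are arbitrary examples, not taken from any network described here.

```python
# Illustration of 32-bit IP logical addressing with Python's standard
# ipaddress module; the address and subnet below are arbitrary examples.
import ipaddress

subnet = ipaddress.ip_network("192.168.10.0/24")   # a network/subnet assignment
host = ipaddress.ip_address("192.168.10.37")       # one host's logical address

print(int(host))        # the same address as a single 32-bit integer
print(host in subnet)   # True: the host belongs to this subnet
print(subnet.netmask)   # 255.255.255.0
```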

Packet routing in a network can be done statically or dynamically, and the routing mechanism can either be standardized for a specific technology or left to the implementation. Examples of standardized solutions include the dynamic routing protocols OSPF and RIP for IP. For ATM technology, the ATM Forum has defined PNNI, a protocol for routing requests to establish switched virtual connections, whose distinctive feature is that it takes quality-of-service information into account.
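As a rough sketch of what static routing amounts to, the fragment below performs a longest-prefix-match lookup over a hand-written routing table using Python's ipaddress module; all prefixes and next-hop names are invented for illustration.

```python
# Minimal sketch of static routing: longest-prefix match over a fixed table.
# All prefixes and next-hop names are made up for illustration.
import ipaddress

ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"):   "to-headquarters",
    ipaddress.ip_network("10.20.0.0/16"): "to-branch-office",
    ipaddress.ip_network("0.0.0.0/0"):    "default-gateway",
}

def next_hop(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # most specific prefix wins
    return ROUTES[best]

print(next_hop("10.20.5.1"))  # to-branch-office
print(next_hop("10.1.2.3"))   # to-headquarters
print(next_hop("8.8.8.8"))    # default-gateway
```

Dynamic routing protocols such as RIP or OSPF fill in a table of this kind automatically by exchanging reachability information between routers.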

The ideal option for a private network would be to create communication channels only where they are needed and to carry over them whatever network protocols the applications require. At first glance this looks like a return to leased lines, but there are data network technologies that make it possible to organize channels that appear only at the right time and in the right place. Such channels are called virtual, and a system connecting remote resources with virtual channels can naturally be called a virtual network. Today there are two main classes of virtual network technology: circuit-switched and packet-switched networks. The first include the ordinary telephone network, ISDN, and a number of other, more exotic technologies. Packet-switched networks include X.25, Frame Relay and, more recently, ATM. It is too early to speak of using ATM in geographically distributed networks; the other types of virtual networks are widely used, in various combinations, in building corporate information systems.

Circuit-switched networks provide the subscriber with several communication channels, each with a fixed bandwidth per connection. The familiar telephone network provides one communication channel between subscribers; if you need to increase the number of simultaneously available resources, you have to install additional telephone lines, which is very expensive. Even setting aside the low quality of communication, the limit on the number of channels and the long connection setup time rule out telephone communication as the basis of a corporate network; for connecting individual remote users, however, it is quite convenient and often the only available method.

Another example of a circuit-switched virtual network is ISDN (Integrated Services Digital Network). ISDN provides digital channels of 64 kbit/s each, over which both voice and data can be transmitted. A Basic Rate Interface connection includes two such channels plus a control channel at 16 kbit/s (a combination designated 2B+D). A larger number of channels can be used, up to thirty (Primary Rate Interface, 30B+D), but this proportionally increases the cost of equipment and communication channels, as do the rental and usage fees. In general, the limits ISDN places on the number of simultaneously available resources mean that this type of communication is convenient mainly as an alternative to the telephone network. In systems with a small number of nodes, ISDN can also serve as the main network protocol. One should just keep in mind that ISDN access is still the exception rather than the rule in our country.
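The aggregate rates of these interfaces follow directly from the channel structure; a one-line check in Python, assuming the D channel of a PRI runs at 64 kbit/s, as is usual for 30B+D:

```python
# Aggregate ISDN interface rates, in kbit/s, from the channel structure above.
# Assumes the PRI D channel runs at 64 kbit/s, as is usual for 30B+D.
B, D_BRI, D_PRI = 64, 16, 64

bri = 2 * B + D_BRI    # Basic Rate Interface, 2B+D
pri = 30 * B + D_PRI   # Primary Rate Interface, 30B+D

print(bri)   # 144 kbit/s
print(pri)   # 1984 kbit/s
```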

An alternative to circuit-switched networks is packet-switched networks. With packet switching, one communication channel is shared in time by many users, much as on the Internet. Unlike the Internet, however, where each packet is routed separately, packet-switched networks of this kind require a connection to be established between end resources before information can be transmitted. Once the connection is established, the network "remembers" the route (the virtual channel) over which information is to be carried between the subscribers and keeps it until it receives a signal to tear the connection down. To the applications running over a packet-switched network, virtual circuits look like ordinary communication lines; the only difference is that their throughput and the delays they introduce vary with the network load.
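A toy model of what a switch "remembers" for each virtual circuit might look like the following; the circuit identifier and node names are invented purely for illustration.

```python
# Toy model of a packet switch's virtual-circuit table: the route is fixed
# when the connection is set up and reused for every subsequent packet.
# Circuit identifiers and node names are invented for illustration.
circuits = {}  # circuit id -> list of nodes the packets will traverse

def setup(circuit_id, route):
    circuits[circuit_id] = route      # "remember" the virtual channel

def send(circuit_id, packet):
    route = circuits[circuit_id]      # every packet follows the stored route
    print(f"{packet!r} forwarded along {' -> '.join(route)}")

def teardown(circuit_id):
    del circuits[circuit_id]          # forget the route when the call ends

setup(17, ["office-A", "switch-1", "switch-2", "office-B"])
send(17, "payload-1")
send(17, "payload-2")
teardown(17)
```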

The classic packet switching technology is the X.25 protocol. Nowadays it is fashionable to wrinkle one's nose at these words and say: "expensive, slow, outdated." Indeed, there are today practically no X.25 networks using speeds above 128 kbit/s. However, the X.25 protocol includes powerful error-correction facilities that ensure reliable delivery of information even over poor lines, and it is widely used where high-quality communication channels are unavailable; in our country that means almost everywhere. The price of this reliability is the demands placed on the speed of the network equipment and relatively large, but predictable, delays in delivering information. At the same time, X.25 is a universal protocol capable of carrying almost any type of data.

Applications that use the OSI protocol stack run "natively" over X.25 networks. These include systems based on the X.400 (e-mail) and FTAM (file exchange) standards, among others, and there are tools for implementing interaction between Unix systems over OSI protocols. Another standard capability of X.25 networks is communication through ordinary asynchronous COM ports. Figuratively speaking, an X.25 network extends the cable plugged into a serial port, bringing its connector to remote resources; thus almost any application that can be accessed through a COM port can easily be integrated into an X.25 network. Examples include not only terminal access to remote host computers such as Unix machines, but also interaction of Unix computers with each other (cu, uucp), systems based on Lotus Notes, cc:Mail, MS Mail, and so on.

To interconnect LANs at nodes attached to an X.25 network, there are methods for packaging ("encapsulating") local-network packets into X.25 packets; part of the service information is not transmitted, since it can be unambiguously reconstructed on the recipient's side. The standard encapsulation mechanism is the one described in RFC 1356. It allows different local network protocols (IP, IPX, etc.) to be carried simultaneously over one virtual connection. This mechanism (or the older, IP-only RFC 877 implementation) is present in virtually all modern routers. There are also methods for carrying other communication protocols over X.25, in particular SNA, used in IBM mainframe networks, as well as a number of proprietary protocols of various manufacturers.

Thus, X.25 networks offer a universal transport mechanism for transferring information between virtually any applications. Different types of traffic travel over one communication channel without "knowing" anything about each other. When LANs are interconnected over X.25, separate parts of a corporate network can be isolated from one another even if they use the same communication lines. This makes it easier to solve the security and access-control problems that inevitably arise in complex information structures, and in many cases removes the need for complex routing mechanisms by shifting that task to the X.25 network.

Today there are dozens of public global X.25 networks; their nodes are located in nearly all major business, industrial and administrative centers. In Russia, X.25 services are offered by Sprint Network, Infotel, Rospak, Rosnet, Sovam Teleport and a number of other providers. In addition to connecting remote nodes, X.25 networks always provide access facilities for end users.
To connect to any X.25 network resource, a user needs only a computer with an asynchronous serial port and a modem, and there are no problems with access authorization at geographically remote nodes. First, X.25 networks are fairly centralized: by concluding an agreement with, for example, the Sprint Network company or its partner, you can use the services of any Sprintnet node, and those are thousands of cities all over the world, including more than a hundred in the former USSR. Second, there is a protocol for interaction between different networks (X.75), which also covers billing issues; so if your resource is connected to an X.25 network, it can be reached both from your provider's nodes and through nodes of other networks, that is, from virtually anywhere in the world.

From a security standpoint, X.25 networks offer a number of attractive features. First of all, because of the very structure of the network, the cost of intercepting information in an X.25 network is high enough to serve as good protection in itself. The problem of unauthorized access can also be tackled quite effectively by means of the network itself. If any risk of information leakage, however small, is unacceptable, then encryption must of course be used, including in real time. Today there are encryption tools designed specifically for X.25 networks that work at fairly high speeds, up to 64 kbit/s; such equipment is produced by Racal, Cylink and Siemens, and there are also domestic developments created under the auspices of FAPSI.

The disadvantage of X.25 technology is a number of fundamental speed limitations. The first is tied precisely to its elaborate correction and recovery capabilities, which introduce delays and demand a great deal of processing power from X.25 equipment; as a result it simply cannot keep up with fast communication lines. Although there is equipment with 2 Mbit/s ports, the speed it actually provides does not exceed 250 - 300 kbit/s per port. On the other hand, for modern high-speed communication lines the X.25 correction machinery is redundant, and much of the equipment's capacity is wasted when it is used. The second feature that makes X.25 networks look slow concerns the encapsulation of LAN protocols (primarily IP and IPX): all other things being equal, LAN interconnection over X.25 is, depending on network parameters, 15 - 40 percent slower than HDLC over a leased line, and the worse the communication line, the greater the performance loss. Here again we are dealing with obvious redundancy: the LAN protocols have their own correction and recovery facilities (TCP, SPX), yet when X.25 networks are used the work is done a second time, at the cost of speed.
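A back-of-the-envelope way to read the 15 - 40 percent figure quoted above; the 128 kbit/s line rate here is an illustrative assumption, not a figure from the text:

```python
# Rough effective-throughput estimate for LAN traffic encapsulated in X.25,
# using the 15-40 % slowdown quoted above; the 128 kbit/s line rate is an
# illustrative assumption.
line_rate_kbit = 128

for slowdown in (0.15, 0.40):
    effective = line_rate_kbit * (1 - slowdown)
    print(f"{slowdown:.0%} slowdown -> about {effective:.0f} kbit/s effective")
```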

It is on these grounds that X.25 networks are declared slow and obsolete. But before declaring any technology obsolete, one should specify for which applications and under what conditions. Over low-quality communication lines, X.25 networks are quite effective and offer significant advantages in price and capabilities over leased lines. And even if one counts on a rapid improvement in communication quality - a necessary condition for X.25 to become obsolete - an investment in X.25 equipment will not be lost, since modern equipment includes the ability to migrate to Frame Relay technology.

Frame Relay networks

Frame Relay technology emerged as a means of realizing the advantages of packet switching on high-speed communication lines. The main difference between Frame Relay networks and X.25 is that they eliminate error correction between network nodes; the task of restoring the information flow is left to the users' terminal equipment and software. This naturally requires fairly high-quality communication channels: it is generally considered that successful Frame Relay operation requires a channel bit error probability no worse than 10^-6 to 10^-7, i.e. no more than one corrupted bit per several million, whereas the quality provided by ordinary analog lines is usually one to three orders of magnitude lower.

The second difference is that today almost all Frame Relay networks implement only the permanent virtual circuit (PVC) mechanism. This means that when connecting to a Frame Relay port you must determine in advance which remote resources you will have access to. The packet-switching principle - many independent virtual connections in one communication channel - remains, but you cannot select the address of an arbitrary network subscriber: all the resources available to you are defined when the port is configured. Frame Relay technology is therefore convenient for building closed virtual networks over which other, routed protocols are carried. "Closed" means that the virtual network is completely inaccessible to other users of the same Frame Relay network. In the USA, for example, Frame Relay networks are widely used as Internet backbones, yet a private network can use Frame Relay virtual circuits on the same lines as Internet traffic and be completely isolated from it.

Like X.25, Frame Relay provides a universal transmission medium for virtually any application. Its main field of application today is the interconnection of remote LANs, in which case error correction and recovery are performed by the LAN transport protocols - TCP, SPX, and so on. The overhead of encapsulating LAN traffic in Frame Relay does not exceed two to three percent. Methods for encapsulating LAN protocols in Frame Relay are described in RFC 1294 and RFC 1490; RFC 1490 also defines the transmission of SNA traffic over Frame Relay. The ANSI T1.617 Annex G specification describes the use of X.25 over Frame Relay networks; in that case all the addressing, correction and recovery functions of X.25 are preserved, but only between the end nodes that implement Annex G, and the permanent connection over the Frame Relay network looks like a "straight wire" carrying X.25 traffic. The X.25 parameters (packet and window size) can then be chosen to obtain the lowest possible delays and speed loss when encapsulating LAN protocols.

The absence of the error correction and complex packet switching mechanisms characteristic of X.25 lets Frame Relay carry information with minimal delays. In addition, a prioritization mechanism can be enabled that gives the user a guaranteed minimum information transfer rate on a virtual channel. This capability allows Frame Relay to be used for delay-critical traffic such as real-time voice and video. This comparatively new capability is growing in popularity and is often the main argument for choosing Frame Relay as the basis of a corporate network.
It should be remembered that Frame Relay services are today available in our country in no more than a dozen and a half cities, whereas X.25 is available in roughly two hundred. There is every reason to believe that as communication channels develop, Frame Relay technology will become more widespread, primarily where X.25 networks exist now. Unfortunately, there is no single standard describing the interaction of different Frame Relay networks, so users are tied to a single service provider; if geographic coverage must be expanded, one can connect at a single point to the networks of several suppliers, with a corresponding increase in cost. There are also private Frame Relay networks operating within one city or using long-distance, usually satellite, dedicated channels. Building private networks on Frame Relay reduces the number of leased lines needed and allows voice and data transmission to be integrated.
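To put the 10^-6 to 10^-7 error-rate requirement into perspective, a small calculation; the 2 Mbit/s line rate is an assumption used only for illustration:

```python
# Expected corrupted bits per second on a line, given a bit error rate (BER).
# The 2 Mbit/s rate is an illustrative assumption; the BER values are the
# Frame Relay thresholds quoted above plus a figure typical of a poor line.
line_rate_bit_s = 2_000_000

for ber in (1e-6, 1e-7, 1e-4):   # Frame Relay thresholds vs. a poor analog line
    errors_per_second = line_rate_bit_s * ber
    print(f"BER {ber:g}: about {errors_per_second:g} bad bits per second")
```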

Structure of the corporate network. Hardware.

When building a geographically distributed network, all the technologies described above can be used. For connecting remote users, the simplest and most affordable option is ordinary telephone communication; where possible, ISDN can be used. For connecting network nodes, global data networks are used in most cases. Even where dedicated lines can be laid (for example within one city), packet switching technologies reduce the number of communication channels required and, importantly, ensure the system's compatibility with existing global networks.

Connecting a corporate network to the Internet is justified if you need access to its services. Using the Internet as a data transmission medium is worthwhile only when other methods are unavailable and financial considerations outweigh the requirements of reliability and security. If you will use the Internet only as a source of information, it is better to use dial-on-demand technology, i.e. a connection method in which the link to the Internet node is established only on your initiative and for the time you need it. This dramatically reduces the risk of unauthorized entry into your network from outside. The simplest way to provide such a connection is to dial in to the Internet node over a telephone line or, if possible, over ISDN. Another, more reliable way to provide connection on demand is to use a leased line and the X.25 protocol or, much preferably, Frame Relay: in that case the router on your side should be configured to drop the virtual connection when there is no data for a certain time and to re-establish it only when data appears on your side. The widespread connection methods using PPP or HDLC do not offer this possibility.

If you want to publish your information on the Internet - for example, to run a WWW or FTP server - the on-demand connection is not applicable. In this case you should not only restrict access with a firewall but also isolate the Internet server from other resources as much as possible. A good solution is to use a single point of connection to the Internet for the whole geographically distributed network, whose nodes are interconnected by X.25 or Frame Relay virtual channels. Then access from the Internet is possible only to that single node, while users at the other nodes reach the Internet through an on-demand connection.
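The dial-on-demand idea above boils down to an idle timer around the link; a minimal sketch of that logic follows, where the timeout value and the connect/disconnect placeholders are invented and stand in for whatever the actual router would do.

```python
# Minimal sketch of the dial-on-demand idle-timeout logic described above.
# The timeout value and the connect/disconnect placeholders are invented.
import time

IDLE_TIMEOUT = 120  # seconds of silence before dropping the connection

class OnDemandLink:
    def __init__(self):
        self.connected = False
        self.last_activity = 0.0

    def send(self, data):
        if not self.connected:
            print("establishing virtual connection...")  # dial / set up the VC
            self.connected = True
        self.last_activity = time.time()
        print(f"sending {len(data)} bytes")

    def tick(self):
        # Called periodically: drop the link after IDLE_TIMEOUT of inactivity.
        if self.connected and time.time() - self.last_activity > IDLE_TIMEOUT:
            print("idle timeout, tearing down connection")
            self.connected = False

link = OnDemandLink()
link.send(b"request")
link.tick()
```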

For transferring data within a corporate network, it is also worth using the virtual channels of packet-switching networks. The main advantages of this approach - versatility, flexibility, security - were discussed in detail above. Both X.25 and Frame Relay can be used as the virtual network when building a corporate information system; the choice between them is determined by the quality of the communication channels, the availability of services at the connection points and, not least, by financial considerations. Today, long-distance Frame Relay costs several times more than X.25, but higher data transfer speeds and the ability to carry data and voice simultaneously may be decisive arguments in its favour. In those parts of the corporate network where leased lines are available, Frame Relay is preferable: it allows both LAN interconnection and Internet access, as well as the applications that traditionally require X.25, and telephone communication between the nodes can run over the same network.

For Frame Relay it is better to use digital communication channels, but even over physical lines or voice-frequency channels a perfectly workable network can be built by installing the appropriate channel equipment. Good results are obtained with Motorola 326x SDC modems, which have unique data correction and compression capabilities in synchronous mode; at the price of small added delays they significantly improve the quality of the channel and achieve effective speeds of 80 kbit/s and higher. On short physical lines, short-range modems providing fairly high speeds can also be used, although these require high-quality lines, since short-range modems perform no error correction. RAD short-range modems are widely known, as is PairGain equipment, which achieves 2 Mbit/s on physical lines about 10 km long.

To connect remote users to the corporate network, the access nodes of X.25 networks can be used, as well as your own communication nodes. In the latter case the required number of telephone lines (or ISDN channels) must be allocated, which can be prohibitively expensive; if many users must be connected simultaneously, using the access nodes of an X.25 network may be cheaper, even within a single city.

A corporate network is a rather complex structure using various types of communication, communication protocols and methods of connecting resources. From the point of view of ease of construction and manageability, one should aim for uniform equipment from a single manufacturer. However, practice shows that no single supplier offers the most effective solutions for every problem that arises. A working network is always a compromise: either a homogeneous system that is suboptimal in price and capabilities, or a combination of products from different manufacturers that is harder to install and manage. Below we look at network-building products from several leading manufacturers and give some recommendations for their use.

All data transmission network equipment can be divided into two large classes -

1. peripheral, which is used to connect end nodes to the network, and

2. backbone (trunk), which implements the main functions of the network (channel switching, routing, etc.).

There is no clear boundary between these types - the same devices can be used in different capacities or combine both functions. It should be noted that backbone equipment is usually subject to increased requirements in terms of reliability, performance, number of ports and further expandability.

Peripheral equipment is a necessary component of any corporate network. The functions of backbone nodes can be taken over by a global data transmission network to which the resources are connected; as a rule, backbone nodes appear as part of a corporate network only when leased communication channels are used or when the organization creates its own access nodes. Peripheral equipment of corporate networks, in terms of the functions it performs, can also be divided into two classes.

Firstly, these are routers, which are used to connect homogeneous LANs (usually IP or IPX) through global data networks. In networks that use IP or IPX as the main protocol - in particular, on the Internet - routers are also used as backbone equipment that ensures the joining of various communication channels and protocols. Routers can be implemented either as stand-alone devices or as software based on computers and special communication adapters.

The second widely used type of peripheral equipment is gateways, which enable applications running in different types of networks to interact. Corporate networks primarily use OSI gateways, which connect LANs to X.25 resources, and SNA gateways, which provide connectivity to IBM networks. A full-featured gateway is always a hardware-software complex, since it must provide the necessary software interfaces.

Cisco Systems Routers

Among routers, the best known are probably the products of Cisco Systems, which implement a wide range of tools and protocols used in LAN interconnection. Cisco equipment supports a variety of connection methods, including X.25, Frame Relay and ISDN, allowing quite complex systems to be built. In addition, the Cisco router family includes excellent remote access servers for local networks, and in some configurations gateway functions are partially implemented (what Cisco calls Protocol Translation).

The main application area for Cisco routers is complex networks using IP or, less commonly, IPX as the main protocol. In particular, Cisco equipment is widely used in Internet backbones. If your corporate network is designed primarily to interconnect remote LANs and requires complex IP or IPX routing across heterogeneous links and data networks, Cisco equipment will most likely be the optimal choice. Facilities for working with Frame Relay and X.25 are implemented in Cisco routers only to the extent needed to interconnect LANs and access them; if you want to build your system around packet-switched networks, Cisco routers can serve only as purely peripheral equipment, with many routing functions redundant and the price correspondingly too high. The most interesting devices for corporate networks are the Cisco 2509 and Cisco 2511 access servers and the new Cisco 2520 series, whose main application is remote user access to LANs over telephone lines or ISDN with dynamic assignment of IP addresses (DHCP).

Motorola ISG Equipment

Among equipment designed for X.25 and Frame Relay, the most interesting products are those of the Motorola Information Systems Group (Motorola ISG). Unlike the backbone devices used in global data networks (Northern Telecom, Sprint, Alcatel, etc.), Motorola equipment can operate completely autonomously, without a special network management center, and its range of capabilities important for corporate networks is much wider. Particularly noteworthy are its extensive hardware and software upgrade options, which make it easy to adapt the equipment to specific conditions. All Motorola ISG products can operate as X.25/Frame Relay switches and multi-protocol access devices (PAD, FRAD, SLIP, PPP, etc.), support Annex G (X.25 over Frame Relay), and provide SNA protocol conversion (SDLC/QLLC/RFC 1490). Motorola ISG equipment can be divided into three groups, differing in hardware set and field of application.

The first group, designed to work as peripheral devices, is the Vanguard series. It includes the Vanguard 100 (2-3 ports) and Vanguard 200 (6 ports) serial access nodes, as well as the Vanguard 300/305 routers (1-3 serial ports plus an Ethernet/Token Ring port) and the Vanguard 310 ISDN router. In addition to their communication capabilities, the Vanguard routers carry IP, IPX and AppleTalk over X.25, Frame Relay and PPP. Naturally, the gentleman's set required of any modern router is also supported: the RIP and OSPF protocols, filtering and access restriction tools, data compression, and so on.

The next group of Motorola ISG products comprises the Multimedia Peripheral Router (MPRouter) 6520 and 6560, which differ mainly in performance and expandability. In the basic configuration the 6520 and 6560 have, respectively, five and three serial ports plus an Ethernet port; on the 6560 all ports are high-speed (up to 2 Mbit/s), while on the 6520 three ports run at up to 80 kbit/s. The MPRouter supports all communication protocols and routing capabilities available on Motorola ISG products. Its main feature is the ability to install a variety of expansion cards - serial-port cards, Ethernet/Token Ring ports, ISDN cards, an Ethernet hub - which is what the word Multimedia in the name refers to. The most interesting capability of the MPRouter is voice over Frame Relay: special boards can be installed that connect ordinary telephone or fax machines, as well as analog (E&M) and digital (E1, T1) PBXs, and the number of simultaneously served voice channels can reach two dozen or more. Thus, the MPRouter can serve simultaneously as a voice and data integration device, a router and an X.25/Frame Relay node.

The third group of Motorola ISG products is backbone equipment for global networks. These are expandable devices of the 6500plus family, with a fault-tolerant design and redundancy, intended for building powerful switching and access nodes. Various combinations of processor and I/O modules allow high-performance nodes with 6 to 54 ports. In corporate networks, such devices can be used to build complex systems with a large number of connected resources.

It is interesting to compare Cisco and Motorola routers. One could say that for Cisco routing is primary and communication protocols are merely a means of transport, while Motorola focuses on communication capabilities and treats routing as one more service implemented on top of them. In general, the routing capabilities of Motorola products are more limited than those of Cisco, but they are quite sufficient for connecting end nodes to the Internet or to a corporate network.

The performance of Motorola products, all other things being equal, is perhaps even higher, and at a lower price. Thus the Vanguard 300, with a comparable feature set, turns out to be roughly one and a half times cheaper than its closest counterpart, the Cisco 2501.

Eicon Technology Solutions

In many cases it is convenient to use solutions from the Canadian company Eicon Technology as peripheral equipment for corporate networks. The basis of Eicon's solutions is the universal communication adapter EiconCard, which supports a wide range of protocols: X.25, Frame Relay, SDLC, HDLC, PPP, ISDN. The adapter is installed in one of the computers on the local network, which then becomes a communication server. This computer can still be used for other tasks, because the EiconCard has a sufficiently powerful processor and its own memory and can process network protocols without loading the host. Eicon software makes it possible to build both gateways and routers based on the EiconCard under almost any operating system on the Intel platform. Here we will look at the most interesting of them.

The Eicon family of solutions for Unix includes the IP Connect router, the X.25 Connect gateway and SNA Connect. All of these products can be installed on a computer running SCO Unix or UnixWare. IP Connect allows IP traffic to be carried over X.25, Frame Relay, PPP or HDLC and is compatible with equipment from other manufacturers, including Cisco and Motorola. The package includes a firewall, data compression tools and SNMP management tools. The main application of IP Connect is connecting Unix-based application servers and Internet servers to a data network. Naturally, the same computer can also serve as a router for the entire office in which it is installed. Using an Eicon router instead of a purely hardware device has several advantages. First, it is easy to install and use: from the operating system's point of view, an EiconCard with IP Connect installed looks like just another network card, so setting up and administering IP Connect is fairly simple for anyone familiar with Unix. Second, connecting the server directly to the data network reduces the load on the office LAN and provides a single point of connection to the Internet or to the corporate network without installing additional network cards and routers. Third, this "server-centric" solution is more flexible and extensible than traditional routers. There are further benefits when IP Connect is used together with other Eicon products.

X.25 Connect is a gateway that allows LAN applications to communicate with X.25 resources. This product connects Unix users and DOS/Windows and OS/2 workstations to remote e-mail systems, databases and other systems. It should be noted, by the way, that Eicon gateways are today perhaps the only widespread product on our market that implements the OSI stack and allows connection to X.400 and FTAM applications. In addition, X.25 Connect makes it possible to connect remote users to a Unix machine and to terminal applications on local network stations, as well as to organize interaction between remote Unix computers over X.25. Using standard Unix capabilities together with X.25 Connect, protocol conversion can be implemented, i.e. translation of Unix Telnet access into an X.25 call and vice versa. It is also possible to connect a remote X.25 user to the local network, and accordingly to the Internet, using SLIP or PPP. In principle, similar protocol translation capabilities exist in Cisco routers running the IOS Enterprise software, but that solution is more expensive than the combination of Eicon and Unix products.

Another product mentioned above is SNA Connect, a gateway designed for connecting to IBM mainframes and the AS/400. It is typically used together with user software (5250 and 3270 terminal emulators and APPC interfaces), also produced by Eicon.

Analogues of the solutions discussed above exist for other operating systems: NetWare, OS/2, Windows NT and even DOS. Particularly worth mentioning is Interconnect Server for NetWare, which combines all of the above capabilities with remote configuration and administration tools and a client authorization system. It includes two products: Interconnect Router, which routes IP, IPX and AppleTalk and is, in our view, the most successful solution for interconnecting remote Novell NetWare networks, and Interconnect Gateway, which among other things provides powerful SNA connectivity.

Another Eicon product designed for the Novell NetWare environment is WAN Services for NetWare, a set of tools that allow NetWare applications to be used over X.25 and ISDN networks. Used together with NetWare Connect, it lets remote users connect to the LAN via X.25 or ISDN and also provides X.25 access out of the LAN. WAN Services for NetWare can be shipped together with Novell's MultiProtocol Router 3.0; this bundle is called Packet Blaster Advantage. A Packet Blaster ISDN is also available, which works not with the EiconCard but with ISDN adapters also supplied by Eicon; various connection options are possible: BRI (2B+D), 4BRI (8B+D) and PRI (30B+D).

For Windows NT, the WAN Services for NT product is intended. It includes an IP router, tools for connecting NT applications to X.25 networks, support for Microsoft SNA Server, and tools for remote users to access the local network over X.25 via Remote Access Server. To connect a Windows NT server to an ISDN network, an Eicon ISDN adapter can also be used together with the ISDN Services for Netware software.

Methodology for building corporate networks.

Now that we have listed and compared the main technologies that a developer can use, let's move on to the basic issues and methods used in network design and development.

Network requirements.

Network designers and network administrators always strive to ensure that three basic network requirements are met:

scalability;

performance;

controllability.

Good scalability is necessary so that both the number of users on the network and the application software can be changed without major effort. High network performance is required for the normal operation of most modern applications. Finally, the network must be manageable enough to be reconfigured to meet the organization's ever-changing needs. These requirements reflect a new stage in the development of network technologies: the stage of creating high-performance corporate networks.

The uniqueness of new software and technologies complicates the development of enterprise networks. Centralized resources, new classes of programs, different principles of their application, changes in the quantitative and qualitative characteristics of the information flow, an increase in the number of concurrent users and an increase in the power of computing platforms - all these factors must be taken into account in their entirety when developing a network. Nowadays there are a large number of technological and architectural solutions on the market, and choosing the most suitable one is a rather difficult task.

In modern conditions, for proper network design, development and maintenance, specialists must consider the following issues:

o Change of organizational structure.

When implementing a project, you should not “separate” software specialists and network specialists. When developing networks and the entire system as a whole, a single team of specialists from different fields is needed;

o Use of new software tools.

It is necessary to become familiar with new software at an early stage of network development so that the necessary adjustments can be made in a timely manner to the tools planned for use;

o Research different solutions.

It is necessary to evaluate various architectural decisions and their possible impact on the operation of the future network;

o Checking networks.

It is necessary to test the entire network or parts of it in the early stages of development. To do this, you can create a network prototype that will allow you to evaluate the correctness of the decisions made. This way you can prevent the emergence of various kinds of bottlenecks and determine the applicability and approximate performance of different architectures;

o Selection of protocols.

To choose the right network configuration, you need to evaluate the capabilities of different protocols. It is important to determine how network operations that optimize the performance of one program or software package may affect the performance of others;

o Selecting a physical location.

When choosing a location to install servers, you must first determine the location of the users. Is it possible to move them? Will their computers be connected to the same subnet? Will users have access to the global network?

o Calculation of critical time.

It is necessary to determine the acceptable response time of each application and the likely periods of maximum load. It is important to understand how emergency situations can affect network performance and to determine whether a reserve is needed to keep the enterprise running continuously (a rough estimate of this kind is sketched after this list);

o Analysis of options.

It is important to analyze the different uses of software on the network. Centralized storage and processing of information often creates additional load at the center of the network, and distributed computing may require the strengthening of local workgroup networks.
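As a rough illustration of the "critical time" check mentioned in the list above, the following Python sketch estimates whether a single WAN link can keep the response time within an acceptable limit at peak load. All figures (link capacity, reply size, request rate) are illustrative assumptions, not values from the text, and queueing effects are deliberately ignored.

# Hedged estimate: can the link serve peak load within the acceptable response time?
link_mbps = 2.0                 # assumed WAN link capacity, Mbit/s
response_kbytes = 50            # assumed size of a typical reply to a client
peak_requests_per_sec = 4       # assumed peak request rate
acceptable_response_sec = 1.0   # acceptable response time

offered_mbps = peak_requests_per_sec * response_kbytes * 8 / 1000   # offered load
utilization = offered_mbps / link_mbps
transfer_sec = response_kbytes * 8 / (link_mbps * 1000)             # time to send one reply

print(f"peak link utilization: {utilization:.0%}")
verdict = "OK" if transfer_sec <= acceptable_response_sec and utilization < 1 else "link too slow"
print(f"transfer time per reply: {transfer_sec:.2f} s ({verdict})")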

Today there is no ready-made, debugged universal methodology, following which you can automatically carry out the entire range of activities for the development and creation of a corporate network. First of all, this is due to the fact that there are no two absolutely identical organizations. In particular, each organization is characterized by a unique leadership style, hierarchy, and business culture. And if we take into account that the network inevitably reflects the structure of the organization, then we can safely say that no two identical networks exist.

Network architecture

Before you begin building a corporate network, you must first determine its architecture and its functional and logical organization, and take into account the existing telecommunications infrastructure. A well-designed network architecture helps evaluate the feasibility of new technologies and applications, serves as a foundation for future growth, guides the choice of network technologies, helps avoid unnecessary costs, reflects the interconnection of network components, and significantly reduces the risk of incorrect implementation. The network architecture forms the basis of the technical specification for the network being created. It should be noted that network architecture differs from network design: it does not, for example, define the exact schematic diagram of the network and does not regulate the placement of network components. The network architecture determines, for example, whether some parts of the network will be built on Frame Relay, ATM, ISDN or other technologies. The network design, on the other hand, must contain specific instructions and parameter estimates, for example the required throughput, the actual bandwidth, the exact placement of communication channels, and so on.

There are three aspects, three logical components, in the network architecture:

principles of construction;

network templates;

technical positions.

Design principles are used in network planning and decision making. Principles are a set of simple statements that describe in sufficient detail all aspects of building and operating the deployed network over a long period of time. As a rule, the principles are derived from the corporate goals and basic business practices of the organization.

The principles provide the primary link between corporate development strategy and network technologies. They serve to develop technical positions and network templates. When developing a technical specification for a network, the principles of constructing a network architecture are set out in a section that defines the general goals of the network. The technical position can be viewed as a target description that determines the choice between competing alternative network technologies. The technical position clarifies the parameters of the selected technology and provides a description of a single device, method, protocol, service provided, etc. For example, when choosing a LAN technology, speed, cost, quality of service, and other requirements must be taken into account. Developing technical positions requires in-depth knowledge of networking technologies and careful consideration of the organization's requirements. The number of technical positions is determined by the given level of detail, the complexity of the network and the size of the organization. The network architecture can be described in the following technical terms:

Network transport protocols.

What transport protocols should be used to transfer information?

Network routing.

What routing protocol should be used between routers and ATM switches?

Quality of service.

How will the ability to choose the quality of service be achieved?

Addressing in IP networks and addressing domains.

What addressing scheme should be used for the network, including registered addresses, subnets, subnet masks, forwarding, etc.?

Switching in local networks.

What switching strategy should be used in local area networks?

Combining switching and routing.

Where and how should switching and routing be used, and how should they be combined?

Organization of a city network.

How should branches of an enterprise located, say, in the same city communicate?

Organization of a global network.

How should enterprise branches communicate over a global network?

Remote access service.

How do users of remote branches gain access to the enterprise network?

Network templates are a set of models of network structures that reflect the relationships between network components. For example, for a particular network architecture a set of templates is created to reveal the network topology of a large branch or of the wide area network, or to show the distribution of protocols across layers. Network templates illustrate the network infrastructure described by the full set of technical positions; in a well-designed network architecture, the templates correspond as closely as possible, in level of detail, to the technical positions. In essence, a network template is a description of the functional diagram of a network section with specific boundaries. The following main network templates can be distinguished: for the global network, for a metropolitan network, for the central office, for a large branch of the organization, and for a department. Other templates can be developed for sections of the network that have any special features.

The described methodological approach is based on studying a specific situation, considering the principles of building a corporate network in their entirety, analyzing its functional and logical structure, developing a set of network templates and technical positions. Various implementations of corporate networks may include certain components. In general, a corporate network consists of various branches connected by communication networks. They can be wide area (WAN) or metropolitan (MAN). Branches can be large, medium and small. A large department can be a center for processing and storing information. A central office is allocated from which the entire corporation is managed. Small departments include various service departments (warehouses, workshops, etc.). Small branches are essentially remote. The strategic purpose of a remote branch is to place sales and technical support services closer to the consumer. Customer communications, which significantly impact corporate revenue, will be more productive if all employees have the ability to access corporate data at any time.

At the first step of building a corporate network, its proposed functional structure is worked out. The quantitative composition and status of offices and departments are determined. Either the need to deploy the organization's own communication network is justified, or a service provider capable of meeting the requirements is chosen. The functional structure is developed taking into account the financial capabilities of the organization, its long-term development plans, the number of active network users, the applications being run and the required quality of service. The development is based on the functional structure of the enterprise itself.

The second step is to determine the logical structure of the corporate network. The logical structures differ from each other only in the choice of technology (ATM, Frame Relay, Ethernet...) for building the backbone, which is the central link of the corporation’s network. Let's consider logical structures built on the basis of cell switching and frame switching. The choice between these two methods of transmitting information is made based on the need to provide guaranteed quality of service. Other criteria may be used.

The data transmission backbone must satisfy two basic requirements.

o The ability to connect a large number of low-speed workstations to a small number of powerful, high-speed servers.

o Acceptable speed of response to customer requests.

An ideal backbone should offer highly reliable data transmission and a well-developed management system. By a management system we mean, for example, the ability to configure the backbone taking into account all local features and to maintain reliability at such a level that the servers remain available even if some parts of the network fail. The listed requirements will probably narrow the choice down to several technologies, and the final choice of one of them rests with the organization itself. You need to decide what matters most: cost, speed, scalability or quality of service.

The logical structure based on cell switching is used in networks carrying real-time multimedia traffic (video conferencing and high-quality voice transmission). At the same time, it is important to assess soberly how necessary such an expensive network really is (on the other hand, even expensive networks are sometimes unable to satisfy certain requirements). If such a network is not justified, then the logical structure of a frame-switched network should be taken as the basis. The logical switching hierarchy, combining two levels of the OSI model, can be represented as a three-level scheme:

The lower level is used to combine local Ethernet networks,

The middle layer is either an ATM local network, a MAN network, or a WAN backbone communication network.

The top level of this hierarchical structure is responsible for routing.

The logical structure makes it possible to identify all possible communication routes between individual sections of the corporate network.

Backbone based on cell switching

When cell switching technology is used to build the network backbone, the interconnection of all workgroup-level Ethernet switches is carried out by high-performance ATM switches. Operating at Layer 2 of the OSI reference model, these switches transmit 53-byte fixed-length cells instead of variable-length Ethernet frames. This networking concept implies that each workgroup-level Ethernet switch must have an ATM segmentation-and-reassembly (SAR) output port that converts variable-length Ethernet frames into fixed-length ATM cells before forwarding the data to the backbone ATM switch.
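To make the fixed-cell principle concrete, here is a minimal Python sketch of segmentation. It is an illustration only, not a model of a real AAL5 SAR function: the 5-byte header contents (VPI/VCI and other fields) are omitted and the padding is simplified, both of which are assumptions of this sketch.

# Hedged sketch: splitting a variable-length Ethernet frame into 53-byte ATM cells
# (5-byte header + 48-byte payload). Header fields and AAL trailers are omitted.
CELL_PAYLOAD = 48           # bytes of user data carried by each ATM cell
CELL_HEADER = bytes(5)      # placeholder 5-byte cell header

def segment_frame(frame: bytes) -> list[bytes]:
    """Split one frame into 53-byte cells, zero-padding the last cell."""
    cells = []
    for offset in range(0, len(frame), CELL_PAYLOAD):
        chunk = frame[offset:offset + CELL_PAYLOAD].ljust(CELL_PAYLOAD, b"\x00")
        cells.append(CELL_HEADER + chunk)
    return cells

frame = bytes(1514)                                      # a maximum-size Ethernet frame
cells = segment_frame(frame)
print(len(cells), "cells of", len(cells[0]), "bytes")    # 32 cells of 53 bytes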

For wide area networks, backbone ATM switches are capable of connecting remote regions. Also operating at Layer 2 of the OSI model, these WAN switches can use T1/E1 links (1.544/2.048 Mbit/s), T3 links (45 Mbit/s) or SONET OC-3 links (155 Mbit/s). To provide city-wide communications, a MAN can be deployed using ATM technology. The same ATM backbone can be used for communication between telephone exchanges; in the future, within the client/server telephony model, these exchanges may be replaced by voice servers on the local network. In that case, the ability of ATM networks to guarantee quality of service becomes very important when organizing communication with client personal computers.

Routing

As already noted, routing is the third and highest level in the hierarchical structure of the network. Routing, which operates at Layer 3 of the OSI reference model, is used to organize communication sessions, which include:

o Communication sessions between devices located in different virtual networks (each network is usually a separate IP subnet);

o Communication sessions that pass through wide-area or metropolitan networks.

One strategy for building a corporate network is to install switches at the lower levels of the overall network. Local networks are then connected using routers. Routers are required to divide a large organization's IP network into many separate IP subnets. This is necessary to prevent "broadcast explosion" associated with protocols such as ARP. To contain the spread of unwanted traffic across the network, all workstations and servers must be divided into virtual networks. In this case, routing controls communication between devices belonging to different VLANs.
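As a simple illustration of dividing a large organization's IP network into separate subnets (one per virtual network), the following Python sketch uses the standard ipaddress module. The address block, prefix length and VLAN names are illustrative assumptions, not values from the text.

# Hedged sketch: carving one corporate address block into per-VLAN /24 subnets.
import ipaddress

corporate_block = ipaddress.ip_network("10.10.0.0/16")     # assumed corporate block
vlans = ["engineering", "sales", "servers", "management"]  # assumed virtual networks

subnets = corporate_block.subnets(new_prefix=24)           # /24 = 254 usable hosts each
plan = {name: next(subnets) for name in vlans}

for name, net in plan.items():
    print(f"{name:12s} {net}  mask {net.netmask}  usable hosts {net.num_addresses - 2}")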

Such a network consists of routers or routing servers (the logical core), a network backbone based on ATM switches, and a large number of Ethernet switches located on the periphery. With the exception of special cases, such as video servers that connect directly to the ATM backbone, all workstations and servers must be connected to the Ethernet switches. This way of building the network localizes internal traffic within workgroups and prevents such traffic from being pumped through the backbone ATM switches or routers. The Ethernet switches are aggregated by ATM switches, usually located in the same equipment room. It should be noted that several ATM switches may be required to provide enough ports for all the Ethernet switches. As a rule, 155 Mbit/s links over multimode fiber-optic cable are used in this case.

Routers are placed away from the backbone ATM switches, since they need to be moved off the paths of the main communication sessions. This design makes routing optional, depending on the type of communication session and the type of traffic on the network. Routing should be avoided when transmitting real-time video information, as it can introduce unwanted delays. Routing is also not needed for communication between devices located in the same virtual network, even if they are located in different buildings of a large enterprise.

In addition, even where routers are required for certain communications, placing them away from the backbone ATM switches minimizes the number of routing hops (a routing hop is the portion of the network from a user to the first router or from one router to another). This not only reduces latency but also reduces the load on the routers. Routing has become widespread as a technology for connecting local networks in a global environment. Routers provide a variety of services designed for multi-level control of the transmission channel. These include a common addressing scheme (at the network level) that is independent of how lower-layer addresses are formed, as well as conversion of one link-layer frame format into another.

Routers decide where to forward incoming data packets based on the network-layer address information those packets contain. This information is extracted, analyzed and compared with the contents of the routing tables to determine to which port a particular packet should be sent. The link-layer address is then derived from the network-layer address if the packet is to be delivered to a segment such as Ethernet or Token Ring.

In addition to processing packets, routers simultaneously update routing tables, which are used to determine the destination of each packet. Routers create and maintain these tables dynamically. As a result, routers can automatically respond to changes in network conditions, such as congestion or damage to communication links.
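The forwarding decision itself amounts to a longest-prefix match against the routing table. The sketch below shows this lookup in Python; the table entries and port names are invented for illustration and do not come from the text.

# Hedged sketch: choosing an outgoing port by the longest matching prefix.
import ipaddress

routing_table = [
    (ipaddress.ip_network("10.10.1.0/24"), "ethernet0"),   # assumed local VLAN
    (ipaddress.ip_network("10.10.0.0/16"), "atm1"),        # assumed corporate backbone
    (ipaddress.ip_network("0.0.0.0/0"),    "serial0"),     # default route to the WAN
]

def lookup(destination: str) -> str:
    """Return the port of the most specific route that matches the destination."""
    addr = ipaddress.ip_address(destination)
    matches = [(net, port) for net, port in routing_table if addr in net]
    return max(matches, key=lambda entry: entry[0].prefixlen)[1]

print(lookup("10.10.1.7"))       # ethernet0 (the local VLAN wins over the /16)
print(lookup("10.10.40.2"))      # atm1
print(lookup("198.51.100.1"))    # serial0 (only the default route matches)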

Determining a route is quite a difficult task. In a corporate network, ATM switches must function in much the same way as routers: information must be exchanged based on the network topology, available routes, and transmission costs. The ATM switch critically needs this information to select the best route for a particular communication session initiated by end users. In addition, determining a route is not limited to just deciding on the path along which a logical connection will pass after generating a request for its creation.

The ATM switch can select new routes if for some reason the communication channels are unavailable. At the same time, ATM switches must provide network reliability at the router level. To create an expandable network with high cost efficiency, it is necessary to transfer routing functions to the network periphery and provide traffic switching in its backbone. ATM is the only network technology that can do this.

To select a technology, you need to answer the following questions:

Does the technology provide adequate quality of service?

Can it guarantee the quality of service?

How expandable will the network be?

Is it possible to choose a network topology?

Are the services provided by the network cost-effective?

How effective will the management system be?

The answers to these questions determine the choice. But, in principle, different technologies can be used in different parts of the network. For example, if certain areas require support for real-time multimedia traffic or a speed of 45 Mbit/s, ATM is installed there. If a section of the network requires interactive processing of requests that does not tolerate significant delays, Frame Relay should be used, provided such services are available in that geographic area (otherwise you will have to resort to the Internet).

Thus, a large enterprise may connect to the network via ATM, while branch offices connect to the same network via Frame Relay.

When creating a corporate network and selecting network technology with appropriate software and hardware, the price/performance ratio must be taken into account. It's hard to expect high speeds from cheap technologies. On the other hand, it makes no sense to use the most complex technologies for the simplest tasks. Different technologies should be properly combined to achieve maximum efficiency.

When choosing a technology, the type of cabling system and the required distances should be taken into account, as well as compatibility with already installed equipment (significant cost savings can be achieved if the already installed equipment can be incorporated into the new system).

Generally speaking, there are two ways to build a high-speed local network: evolutionary and revolutionary.

The first way is based on extending the tried-and-true frame transmission technology. Within this approach, the speed of the local network can be increased by upgrading the network infrastructure, adding new communication channels and changing the method of packet transmission (which is what switched Ethernet does). A typical Ethernet network shares its bandwidth: the traffic of all users competes for the entire bandwidth of the network segment. Switched Ethernet creates dedicated paths, giving users a real bandwidth of 10 Mbit/s each.
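The arithmetic behind this difference is simple; the short Python sketch below compares the per-user bandwidth of a shared segment with that of a switched port. The user count and the 10 Mbit/s segment speed are illustrative assumptions, and collisions and protocol overhead are ignored.

# Hedged sketch: per-user bandwidth on shared vs. switched Ethernet.
SEGMENT_MBPS = 10        # classic Ethernet segment speed
users = 25               # assumed number of active stations

shared_per_user = SEGMENT_MBPS / users    # everyone competes for one segment
switched_per_user = SEGMENT_MBPS          # each switch port gets the full rate

print(f"shared Ethernet:   {shared_per_user:.1f} Mbit/s per user (best case)")
print(f"switched Ethernet: {switched_per_user:.1f} Mbit/s per user")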

The revolutionary path involves the transition to radically new technologies, for example, ATM for local networks.

Extensive practice in building local networks has shown that the main issue is quality of service. This is what determines whether the network can work successfully (for example, with applications such as video conferencing, which are increasingly used around the world).

Conclusion.

Whether or not to have your own communication network is a “private matter” for each organization. However, if building a corporate (departmental) network is on the agenda, it is necessary to conduct a deep, comprehensive study of the organization itself, the problems it solves, draw up a clear document flow chart in this organization and, on this basis, begin to select the most appropriate technology. One example of building corporate networks is the currently widely known Galaktika system.

List of used literature:

1. M. Shestakov. "Principles of building corporate data networks". Computerra, No. 256, 1997.

2. Kosarev, Eremin. "Computer systems and networks". Finance and Statistics, 1999.

3. Olifer V. G., Olifer N. D. "Computer networks: principles, technologies, protocols". St. Petersburg, 1999.

4. Materials from the site rusdoc.df.ru.

A corporate network is a network whose main purpose is to support the operation of a specific enterprise that owns this network. Users of the corporate network are employees of this enterprise. Depending on the scale of the enterprise, as well as the complexity and variety of tasks being solved, department networks, campus networks and corporate networks (that is, a large enterprise network) are distinguished.

Department networks are networks used by a relatively small group of employees working in one department of the enterprise.

The main purpose of a department network is to share local resources such as applications, data, laser printers, and modems. Typically, departmental networks have one or two file servers, no more than thirty users and are not divided into subnets (Fig. 55). Most of an enterprise's traffic is localized in these networks. Departmental networks are usually created on the basis of one network technology - Ethernet, Token Ring. Such a network is characterized by one or at most two types of operating systems. A small number of users makes it possible for departments to use peer-to-peer network operating systems such as Microsoft's Windows.



There is another type of network close to department networks: workgroup networks. These are very small networks of up to 10-20 computers. The characteristics of workgroup networks are practically no different from those of department networks. Properties such as network simplicity and homogeneity are most pronounced here, while department networks can in some cases approach the next largest type of network, the campus network.

Campus networks get their name from the English word "campus" (student town). It was on university campuses that the need often arose to combine several small networks into one large network. Today the name is no longer associated only with student campuses; it is used to designate the networks of any enterprises and organizations.

The main feature of campus networks is that they combine many networks of different departments of one enterprise within a single building or within one territory covering an area of several square kilometers (Fig. 56). However, campus networks do not use global connections. The services of such a network include interaction between the departmental networks, access to shared enterprise databases, and access to shared fax servers, high-speed modems and high-speed printers. As a result, the employees of each department gain access to some of the files and network resources of other departments. An important service provided by campus networks is access to corporate databases regardless of what type of computer they reside on.

It is at the campus network level that the problems of integrating heterogeneous hardware and software arise. The types of computers, network operating systems and network hardware may differ from department to department, which makes campus networks harder to manage. Administrators of such networks must be more highly qualified, and the means of day-to-day network management must be more advanced.

Corporate networks are also called enterprise-scale networks, which corresponds to the literal translation of the term "enterprise-wide network". Enterprise-scale (corporate) networks connect a large number of computers across all the territories of a single enterprise. They may be intricately interconnected and cover a city, a region or even a continent. The number of users and computers can run into the thousands, and the number of servers into the hundreds; the distances between the networks of individual territories can be such that global connections become necessary (Fig. 57).




To connect remote local networks and individual computers, a corporate network uses a variety of telecommunications facilities, including telephone channels, radio channels and satellite links. A corporate network can be thought of as "islands" of local networks "floating" in a telecommunications environment. An indispensable attribute of such a complex and large-scale network is a high degree of heterogeneity: it is impossible to satisfy the needs of thousands of users with the same type of hardware. A corporate network necessarily uses various types of computers, from mainframes to personal computers, several types of operating systems and many different applications. The heterogeneous parts of the corporate network should work as a single whole, providing users with the most convenient and simple access possible to all the necessary resources.

The emergence of the corporate network is a good illustration of the well-known philosophical postulate about the transition from quantity to quality. When the individual networks of a large enterprise with branches in different cities and even countries are combined into a single network, many quantitative characteristics of the combined network exceed a certain critical threshold beyond which a new quality begins. Under these conditions, the existing methods and approaches to solving the traditional problems of smaller-scale networks turned out to be unsuitable for corporate networks. Tasks and problems came to the fore that in the networks of workgroups, departments and even campuses were either of secondary importance or did not arise at all.

In local networks consisting of 10-20 computers and roughly the same number of users, the account data needed to control access is kept in the local database of each computer whose resources the users must be able to reach; that is, credentials are retrieved from that computer's local account database, and on this basis access is either granted or denied.

But if there are several thousand users on the network, each of whom needs access to several dozen servers, this solution obviously becomes extremely inefficient: the administrator must repeat the operation of entering each user's credentials several dozen times (once per server), and the user himself must repeat the logon procedure every time he needs access to the resources of another server. The solution to this problem for a large network is to use a centralized directory service whose database stores the accounts of all users. The administrator enters a user's data into this database once, and the user performs the logon procedure once, not for a separate server but for the entire network. As the scale of the network grows, the requirements for its reliability, performance and functionality grow as well. Ever-increasing volumes of data circulate across the network, and the network must keep that data safe and secure as well as accessible. All this leads to corporate networks being built on the most powerful and diverse equipment and software.
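The administrative saving from a centralized directory is easy to quantify. The Python sketch below compares per-server account databases with a single directory; the user and server counts are illustrative assumptions.

# Hedged sketch: how many account entries an administrator must maintain.
users = 3000      # assumed number of users
servers = 50      # assumed number of servers

per_server_entries = users * servers   # each account repeated on every server
directory_entries = users              # one entry per user in a central directory

print(f"per-server databases: {per_server_entries:,} entries to maintain")
print(f"central directory:    {directory_entries:,} entries to maintain")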

Of course, corporate computing networks have their own problems. These problems are mainly associated with organizing effective interaction between individual parts of a distributed system.

Firstly, there are difficulties associated with software: operating systems and applications. Programming for distributed systems differs fundamentally from programming for centralized systems. Thus a network operating system, while performing all the functions of managing local computer resources, must also solve the numerous tasks of providing network services. Developing network applications is complicated by the need to organize the joint operation of their parts running on different machines. Much effort also goes into ensuring the compatibility of the software installed on the network nodes.

Secondly, many problems are associated with transporting messages over communication channels between computers. The main objectives here are to ensure reliability (so that transmitted data is neither lost nor corrupted) and performance (so that data exchange occurs with acceptable delays). In the overall cost structure of a computer network, the costs of solving these "transport issues" make up a significant share, whereas in centralized systems such problems are absent altogether.

Thirdly, there are security issues that are much more difficult to resolve on a computer network than on a stand-alone computer. In some cases, when security is especially important, it is better to avoid using the network altogether.

However, in general, the use of local (corporate) networks gives the enterprise the following opportunities:

Sharing expensive resources;

Improving communications;

Improving access to information;

Fast and high-quality decision making;

Freedom in the territorial placement of computers.

A corporate network (enterprise network) is characterized by:

Scale – thousands of user computers, hundreds of servers, huge volumes of data stored and transmitted over communication lines, many different applications;

High degree of heterogeneity – the types of computers, communications equipment, operating systems and applications differ;

Using global connections – branch networks are connected using telecommunications means, including telephone channels, radio channels, and satellite communications.


