Main characteristics of the processor: the computer's central device, and what each of its parameters affects.


The central processing unit (CPU) is a microcircuit that executes machine instructions (program code) and is the main part of a computer's hardware or of a programmable logic controller. It is sometimes called a microprocessor or simply a processor.

The composition of software and the principles of its connection with the hardware. The relationship between a computer's structure and its functions rests on the principle of modular design: a computer consists of a set of modules, that is, devices and blocks that implement complete functions and are structurally independent of one another. The main advantage of this principle is the ability to upgrade a computer by adding new, improved functional modules. Physically, the modules are connected by cables.

The combination of methods, algorithms and hardware for exchanging information between units is called the technical interface of a computer. The interface is an important part of computer architecture. Its most important parameters are the width of the transmitted word and the speed of exchange, as well as compliance with a generally accepted standard. Care must be taken to distinguish the technical interface from the user interface: the technical interface defines the rules for device compatibility, while the user interface defines how the user interacts with the system and controls its operation.

The history of the development of processor production technology fully corresponds to the history of the development of element base production technology.

The first stage, covering the period from the 1940s to the late 1950s, was the creation of processors from electromechanical relays, ferrite cores (memory devices) and vacuum tubes. These were installed in special connectors on modules assembled into racks. A large number of such racks, connected by conductors, together made up the processor. Their distinctive features were low reliability, low performance and high heat generation.

The principle of modular structure is implemented in the backbone (bus) architecture. To make the use of both the computer as a whole and its individual devices more convenient and efficient, the classical computer scheme was modified: it was divided into a central (processing) part and a peripheral part. Peripheral devices are connected to the central part through input-output channels that control the exchange of information. I/O channels are independent processors specialized in data exchange.

The result is a hierarchical computer structure: at the top level of the hierarchy are the central processor and main memory; one level down are the input-output channels; lower still are the interface device controllers; and at the lowest level are the devices themselves. Such structures are typical of high-performance multi-user computers.

The second stage, from the mid-1950s to the mid-1960s, was the introduction of transistors. The transistors were mounted on boards close in appearance to modern ones and installed in racks. As before, a typical processor consisted of several such racks. Performance and reliability increased, and energy consumption decreased.

Fig. 6. Hierarchical architecture of a universal computer. Bus (backbone) computer architecture. The mini- and microcomputer class, which includes personal computers, has a simpler structure and a small number of main devices. Its architecture is based on a common interface: the interface bus, a shared connecting channel. The bus architecture has no strict hierarchy; all devices exchange data over the bus. The order of data exchange is determined by the following rules.

– Data is transmitted word by word.
– The width of the interface must equal the length of the machine word.
– At any one time, data is exchanged by one pair of devices.
– Data, commands, addresses and control signals placed on the common interface bus are accompanied by the address of the device to which the information is directed.
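The exchange rules above can be modeled in miniature. The Python sketch below is purely illustrative: the 16-bit word width, the device addresses and the class names are assumptions, not part of the text. It shows the essential points — one word at a time, one sender-receiver pair, and every word accompanied by the address of the receiving device.

```python
# Toy model of a common interface bus: at each moment exactly one word
# travels from one sender to one addressed receiver.
WORD_BITS = 16                              # interface width = machine word length

class Bus:
    def __init__(self):
        self.devices = {}                   # device address -> received words

    def attach(self, address):
        self.devices[address] = []
        return address

    def transfer(self, word, address):
        assert 0 <= word < 2 ** WORD_BITS   # data must fit one machine word
        self.devices[address].append(word)  # only the addressed device listens

bus = Bus()
cpu, memory = bus.attach(0x01), bus.attach(0x02)
bus.transfer(0x2A, memory)                  # CPU sends one word to memory
print(bus.devices[memory])                  # -> [42]
```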

The third stage, which began in the mid-sixties, was the use of microcircuits. Initially, microcircuits with a low degree of integration were used, containing simple transistor and resistor assemblies, then, as technology developed, microcircuits were used that implemented individual elements of digital circuitry (first elementary switches and logical elements, then more complex elements - elementary registers, counters, adders), and later microcircuits appeared containing functional blocks of the processor - a microprogram device, an arithmetic-logical device, registers, devices for working with data and command buses.

Fig. 7. Bus computer architecture. Topology of computer systems. By topology, the following types of computer-system architecture are distinguished: centralized computer systems, personal computer systems, and distributed computer systems.

Centralized computer systems. In a centralized computer system, one computer performs the computations of many users. This role is usually played by a general-purpose mainframe or a supercomputer; it can also be a minicomputer acting as a server.

The fourth stage was the creation of a microprocessor, in which all the main elements and blocks of the processor were physically located on one chip. In 1971, Intel created the world's first 4-bit microprocessor, the 4004, intended for use in microcalculators. Gradually, almost all processors began to be produced in the microprocessor format. For a long time, the only exceptions were low-volume processors, hardware optimized for solving special problems (for example, supercomputers or processors for solving a number of military problems), or processors that had special requirements for reliability, speed, or protection from electromagnetic pulses and ionizing radiation. Gradually, with the reduction in cost and spread of modern technologies, these processors are also beginning to be manufactured in the microprocessor format.

Such systems are used by banks and other large, geographically dispersed companies with centralized databases. Personal computer systems. The basic idea of a personal computer system is that the computer should serve as an individual's own tool for work and productivity at all times. A personal computer system is effective for individual work: preparing texts and presentations, working with spreadsheets, performing simple design tasks. Such systems are also useful for small businesses and institutions.

Their advantages are:
– greater flexibility in individual tasks;
– less impact on other employees.
Their disadvantages are:
– it is difficult to distribute work among individual users;
– hardware and software are duplicated.
Distributed computer systems. Data and other system resources, such as applications and processors, can be local or reside on any other node in the system. Their advantages are:
– a greater opportunity to distribute work, information and other resources rationally;

The first commercially available microprocessor was the 4-bit Intel 4004. It was succeeded by the 8-bit Intel 8080 and the 16-bit 8086, which laid the foundation for the architecture of all modern desktop processors. Because 8-bit memory modules were widespread, the 8088 was released, a variant of the 8086 with an 8-bit memory bus. It was followed by its modification, the 80186. The 80286 processor introduced a protected mode with 24-bit addressing, which allowed up to 16 MB of memory to be used. The Intel 80386 processor appeared in 1985 and introduced an improved protected mode and 32-bit addressing, which allowed up to 4 GB of random-access memory to be used and supported a virtual memory mechanism. This line of processors is built on a register computing model.
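The memory limits quoted above follow directly from the address width: n address bits can select 2^n distinct byte addresses. A quick arithmetic check (only the bit counts come from the text; the helper name is my own):

```python
# n address bits can distinguish 2**n byte addresses.
def addressable_bytes(bits):
    return 2 ** bits

assert addressable_bytes(24) == 16 * 1024**2   # 80286: 24-bit -> 16 MB
assert addressable_bytes(32) == 4 * 1024**3    # 80386: 32-bit -> 4 GB
print(addressable_bytes(24), addressable_bytes(32))  # -> 16777216 4294967296
```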

– work can continue even if some system nodes are down or unavailable for other reasons.
Their disadvantages are:
– it is more difficult to organize data protection;
– maintaining uniform standards is difficult.
Modern methods of organizing distributed computer systems include the following.

Client-server architecture. In a client-server architecture, the various computers on a network act as clients or servers. Client computers request services, such as printing or data retrieval; these requests are sent to specific server computers, which perform the requested work. The client is joined to a server by a network whose purpose is not only to connect the two but also to manage the connection: the network passes the client's request to the appropriate server and returns the results of the work.
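As a minimal sketch of this request-reply pattern, the Python fragment below runs a toy server in a background thread and lets a client send it one request. The "service" (upper-casing the request text) and the loopback address are illustrative assumptions, not anything from the text.

```python
# Minimal client-server exchange over a TCP socket on the local machine.
import socket
import threading

def server(sock):
    conn, _ = sock.accept()           # wait for a client to connect
    with conn:
        request = conn.recv(1024)     # receive the client's request
        conn.sendall(request.upper()) # perform the "work" and send the result

# The network layer: a listening socket on the loopback interface.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
listener.listen(1)
threading.Thread(target=server, args=(listener,), daemon=True).start()

# The client sends a request and waits for the server's reply.
client = socket.create_connection(listener.getsockname())
client.sendall(b"print report")
reply = client.recv(1024)
client.close()
print(reply)                          # -> b'PRINT REPORT'
```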

In parallel, microprocessors are being developed that take the stack computing model as a basis.

In modern computers, processors are made as a compact module (about 5 × 5 × 0.3 cm) that is inserted into a ZIF socket. Most modern processors are implemented as a single semiconductor chip containing millions, and more recently even billions, of transistors. In the first computers, processors were bulky units, sometimes occupying entire cabinets or even rooms, and were made from a large number of individual components.

The client-server architecture allows computer work to be organized on a modular basis, i.e. the system can be divided into modules, functional components of the overall structure. This architecture is widespread because it offers a wide range of technical options and flexible use of equipment.

It also offers a greater ability to combine and coordinate hardware and software from different manufacturers. Equivalent-node (peer) architecture. The equivalent-node architecture is an important alternative to client-server architecture for small computer networks. Here each workstation can communicate directly with any other workstation on the network without using the services of a dedicated server. Problems can arise, however, with keeping copies of the same documents consistent across the connected computers.

Initially, the developers are given a technical specification, on the basis of which decisions are made about the architecture of the future processor, its internal structure and its manufacturing technology. Various groups are tasked with developing the corresponding functional blocks of the processor and ensuring their interaction and electromagnetic compatibility. Because the processor is essentially a digital automaton that fully obeys the principles of Boolean algebra, a virtual model of the future processor is built using specialized software running on another computer. This model is used to test the processor, execute elementary commands and significant amounts of code, work out the interaction of the device's various blocks, carry out optimization, and look for the errors that are inevitable in a project of this scale.

The technical specifications of a processor deserve attention not only when assembling a new desktop computer or upgrading an old one. Processors are also the main component of laptops, so even without a desktop computer it is worth knowing which parameters matter. When choosing a processor, it is important to consider which motherboard it will be used with.

Processor speed also depends on the number of cores: the more cores, the more work the processor can do in parallel. Two- and four-core processors are currently the most common, but models with six, eight or ten cores can be bought; decide in advance whether you need such a powerful processor, since these models are expensive. In addition, many software vendors publish minimum system requirements, so if you want to install a very demanding program or computer game, check whether the CPU has enough cores for it to run smoothly.

After this, a physical model of the processor is built from gate-array chips and microcircuits containing the elementary functional blocks of digital electronics. On it, the electrical and timing characteristics of the processor are checked, the processor architecture is tested, the correction of discovered errors continues, and electromagnetic compatibility issues are worked out (for example, at a clock frequency of 10 GHz, conductor sections just 7 mm long already act as transmitting or receiving antennas).

Another very important parameter when choosing a new processor is its clock frequency. It is measured in gigahertz and indicates how many operation cycles the processor performs per second. The higher this parameter, the faster and more smoothly the whole computer will run.

When pairing a processor with a motherboard, you should pay attention to the bus frequency, measured in megahertz. It shows how quickly the processor exchanges data with the rest of the computer and is an extremely important parameter in combination with the motherboard. Remember that the technical parameters of all computer components must be balanced: an expensive and powerful video card will also require a processor with matching specifications.

Then begins the collaborative phase between circuit engineers and process engineers who, using specialized software, convert the electrical circuit embodying the processor architecture into a chip topology. Modern computer-aided design systems can, in general, produce a set of photomask templates directly from the electrical circuit diagram. At this stage, technologists try to implement the technical solutions laid down by the circuit designers, taking into account the available technology. This stage is one of the longest and most difficult in development and sometimes requires compromises on the part of the circuit designers, who may have to abandon some architectural solutions. Note that a number of custom-chip manufacturers (foundries) offer developers (design centers or fabless companies) a compromise solution in which, at the processor design stage, the developers use libraries of elements and blocks provided by the manufacturer and standardized for the existing technology (standard cells). This imposes a number of restrictions on architectural solutions, but the technological adjustment stage effectively comes down to assembling standard building blocks. In general, fully custom microprocessors are faster than processors built from existing libraries.

As with all other computer components, it is important to note which socket the processor uses to connect to the motherboard. It may even turn out that you cannot upgrade an old computer simply by installing a new processor: other socket types were used previously, and processors with old socket types were discontinued long ago. Therefore, before you plan to buy a processor, be sure to check whether it fits your computer. Also be sure to consider what your computer will be used for.

After all, you don't want to pay a lot of money for a component that won't be used to its full potential. And although there is plenty of information about processors on the Internet, it is best to put your questions to qualified professionals so that you buy exactly the product you need.

The next step is to create a prototype of the microprocessor chip. In the manufacture of modern ultra-large-scale integrated circuits, lithography is used. Layers of conductors, insulators and semiconductors are applied in turn to the substrate of the future microprocessor (a thin disc of monocrystalline silicon or sapphire) through special masks containing slits. The corresponding substances are evaporated in a vacuum and deposited through the openings of the mask onto the processor chip. Sometimes etching is used, when an aggressive liquid corrodes the areas of the crystal not protected by the mask. About a hundred processor dies are formed on one substrate at a time. The result is a complex multilayer structure containing from hundreds of thousands to billions of transistors. Depending on how it is connected, a transistor works in the microcircuit as a transistor, resistor, diode or capacitor; creating these elements separately on a microcircuit is, in general, unprofitable. After the lithography procedure is completed, the substrate is sawn into individual dies. Thin gold conductors are soldered to the contact pads (made of gold) formed on them, connecting the die to the contact pads of the chip package. Then, in the general case, the heat sink of the die and the chip's cover are attached.

Anyone can choose a good computer that is right for them. True, you cannot know everything when making a choice. People are individual and choose very differently, so rather than trying to figure everything out, it is important to know exactly what you want, what your computer will be used for, and its main purpose. At first this may seem difficult, but be patient and you will find that it is not so hard. Below you will find many answers that will help you avoid mistakes when purchasing. First of all, you must understand that every computer is made up of individual components.

Then begins the testing stage of the processor prototype, during which its compliance with the specified characteristics is checked and any remaining undetected errors are sought. Only after this is the microprocessor put into production. And even during production, the processor is constantly being optimized as technology improves, new design solutions appear, and errors are found.

With correctly chosen parameters, these parts can be combined into a reliable device. A desktop computer consists of: a case, a power supply, a motherboard, a processor, main memory, hard drives, video, audio, network and other cards, an optical drive, and internal and external connectors. The essence of choosing a desktop computer comes down to selecting the processor, motherboard, RAM, video card, hard drive and power supply.

This is a prerequisite for a good purchase. But first of all, you must choose a processor and motherboard. Step 1: select a processor. The processor is one of the most important components: an integrated microchip that controls the flow of basic processes in a computer. It directly affects your computer's performance. The main characteristics of a processor are its series, socket type, number of cores, internal (cache) memory, operating frequency and system bus frequency. In addition, the processor must always be compatible with the RAM and the motherboard.

It should be noted that, in parallel with the development of universal microprocessors, sets of peripheral computer circuits are developed for use with the microprocessor; motherboards are created on their basis. The development of a microprocessor support set (chipset) is a task no less complex than the creation of the microprocessor chip itself.

Over the past few years, there has been a tendency to transfer some of the chipset components (memory controller, PCI Express bus controller) to the processor. See System on a Chip for more details.

In the early 1970s, breakthroughs in LSI and VLSI (large-scale integrated circuit and very large-scale integrated circuit, respectively) technology made it possible to house all the necessary CPU components in a single semiconductor device. So-called microprocessors appeared. Now the words microprocessor and processor have practically become synonymous, but then this was not the case, because conventional (large) and microprocessor computers coexisted peacefully for at least another 10-15 years, and only in the early 1980s did microprocessors supplant their older brothers. It must be said that the transition to microprocessors later made it possible to create personal computers, which have now penetrated almost every home.

The first microprocessor, the Intel 4004, was introduced on November 15, 1971 by Intel Corporation. It contained 2,300 transistors, ran at a clock speed of 92.6 kHz (the datasheet states that an instruction cycle lasts 10.8 microseconds, while Intel's promotional materials say 108 kHz) and cost $300.
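The two figures quoted are consistent with each other: frequency is the reciprocal of period, so a 10.8 µs instruction cycle corresponds to roughly 92.6 thousand cycles per second. A one-line arithmetic check:

```python
# Frequency is the reciprocal of period: f = 1 / T.
cycle_time = 10.8e-6                  # seconds per instruction cycle
frequency_khz = 1 / cycle_time / 1000 # cycles per second, in kHz
print(round(frequency_khz, 1))        # -> 92.6
```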

Over the years, microprocessor technology has developed many different microprocessor architectures. Many of them (in expanded and improved form) are still in use today. For example, Intel x86, which first developed into 32-bit IA-32, and later into 64-bit x86-64 (which Intel calls EM64T). x86 processors were initially used only in IBM personal computers (IBM PCs), but are now increasingly used in all areas of the computer industry, from supercomputers to embedded solutions. You can also list such architectures as Alpha, POWER, SPARC, PA-RISC, MIPS (RISC architecture) and IA-64 (EPIC architecture).

Most processors currently in use are Intel-compatible, that is, they have a set of instructions and programming interfaces implemented in Intel processors.

The most popular processors today are produced by Intel, AMD and IBM. Intel's processors include: the 8086, i286 (called a "two" in computer slang), i386 ("three"), i486 ("four"), Pentium ("stump"; also "second stump", "third stump", etc., and names are reused: the Pentium III is called a "three", the Pentium 4 a "four"), Pentium II, Pentium III, Celeron (a simplified version of the Pentium), Pentium 4, Core 2 Quad, Core i7, Xeon (a series of processors for servers), Itanium, Atom (a series of processors for embedded technology), etc. AMD's line includes x86-architecture processors (analogues of the 80386 and 80486, the K6 family and the K7 family: Athlon, Duron, Sempron) and x86-64 processors (Athlon 64, Athlon 64 X2, Phenom, Opteron, etc.). IBM processors (POWER6, POWER7, Xenon, PowerPC) are used in supercomputers, seventh-generation video game consoles and embedded technology; they were previously used in Apple computers.

According to IDC, at the end of 2009, Intel's share of the microprocessor market for PCs, laptops and servers was 79.7%, and AMD's share was 20.1%.

In the next 10-20 years, the physical basis of processors will most likely change, because the manufacturing process will reach its physical limits. The replacements may be:

Optical computers - in which instead of electrical signals, streams of light (photons, not electrons) are processed.

Quantum computers, the operation of which is entirely based on quantum effects. Currently, work is underway to create working versions of quantum processors.

Molecular computers are computing systems that use the computational capabilities of molecules (mainly organic ones). Molecular computers exploit the idea of computing through the arrangement of atoms in space.

The development of microprocessors in Russia is carried out by CJSC MCST and NIISI RAS. Also, the development of specialized microprocessors focused on the creation of neural systems and digital signal processing is carried out by the Scientific and Technical Center "Module" and the State Unitary Enterprise Scientific and Production Center "ELVEES". A number of microprocessor series are also produced by Angstrem OJSC.

NIISI develops processors of the KOMDIV series based on the MIPS architecture. Process technology: 0.5 µm and 0.3 µm; SOI (silicon on insulator).

KOMDIV32 (1890VM1T), including the KOMDIV32-S version (5890VE1T), hardened against space environment factors (ionizing radiation)

KOMDIV64, KOMDIV64-SMP

Arithmetic coprocessor KOMDIV128

STC Modul has developed and offers microprocessors of the NeuroMatrix family:

1998, 1879VM1 (NM6403) - a high-performance specialized digital signal processor with a vector-pipeline VLIW/SIMD architecture. Manufacturing technology - 500 nm CMOS, clock frequency 40 MHz.

2007, 1879VM2 (NM6404) - modification of 1879VM1 with a clock frequency increased to 80 MHz and 2 Mbit RAM located on the processor chip. Manufacturing technology - 250 nm CMOS.

2009, 1879VM4 (NM6405) - a high-performance digital signal processor with a vector-pipeline VLIW/SIMD architecture based on a patented 64-bit NeuroMatrix processor core. Manufacturing technology - 250 nm CMOS, clock frequency 150 MHz.

Thanks to a number of hardware features, microprocessors in this series can be used not only as specialized digital signal processors, but also for creating neural networks.

State Unitary Enterprise SPC ELVEES develops and produces microprocessors of the Multicore series, whose distinctive feature is asymmetric multi-core design. A single chip physically contains one RISC CPU core with the MIPS32 architecture, which acts as the system's central processor, and one or more cores of a specialized accelerator processor for floating- and fixed-point digital signal processing, ELcore-xx (ELcore = Elvees's core), based on the Harvard architecture. The CPU core is the master in the chip configuration and executes the main program. It is given access to the resources of the DSP core, which is slave to the CPU core. The chip's CPU supports the Linux 2.6.19 kernel or the QNX 6.3 (Neutrino) hard real-time OS.

2004, 1892VM3T (MC-12) - single-chip microprocessor system with two cores. Central processor - MIPS32, signal coprocessor - SISD core ELcore-14. Manufacturing technology - CMOS 250 nm, frequency 80 MHz. Peak performance 240 MFLOPs (32 bits).

2004, 1892VM2Ya (MC-24) - single-chip microprocessor system with two cores. Central processor - MIPS32, signal coprocessor - SIMD core ELcore-24. Manufacturing technology - CMOS 250 nm, frequency 80 MHz. Peak performance 480 MFLOPs (32 bits).

2006, 1892VM5YA (MC-0226) - single-chip microprocessor system with three cores. Central processor - MIPS32, 2 signal coprocessors - MIMD core ELcore-26. Manufacturing technology - CMOS 250 nm, frequency 100 MHz. Peak performance 1200 MFLOPs (32 bits).

2008, NVCom-01 (“Navicom”) - single-chip microprocessor system with three cores. Central processor - MIPS32, 2 signal coprocessors - MIMD DSP cluster DELCore-30 (Dual ELVEES Core). Manufacturing technology - CMOS 130 nm, frequency 300 MHz. Peak performance 3600 MFLOPs (32 bits). Designed as a telecommunications microprocessor, it contains a built-in 48-channel GLONASS/GPS navigation function.

As a promising project of SPC ELVIS, MC-0428 is presented - the MultiForce processor - a single-chip microprocessor system with one central processor and four specialized cores. Manufacturing technology - CMOS 130 nm, frequency up to 340 MHz. Peak performance is expected to be at least 8000 MFLOPs (32 bits).

OJSC Angstrem produces (but does not develop) the following series of microprocessors:

1839 - 32-bit VAX-11/750 compatible microprocessor kit of 6 chips. Manufacturing technology - CMOS, clock frequency 10 MHz.

1836VM3 - 16-bit LSI-11/23-compatible microprocessor. Software compatible with DEC PDP-11. Manufacturing technology - CMOS, clock frequency 16 MHz.

1806VM2 - 16-bit LSI-11/2-compatible microprocessor. Software compatible with DEC's LSI-11. Manufacturing technology - CMOS, clock frequency 5 MHz.

1876VM1 - 32-bit RISC microprocessor. Manufacturing technology - CMOS, clock frequency 25 MHz.

Among Angstrem's own developments, we can note the single-chip 8-bit RISC microcomputer Theseus.

The MCST company has developed and put into production a family of universal SPARC-compatible RISC microprocessors with 130 nm and 350 nm design rules and frequencies from 150 to 500 MHz (for more details, see the articles on the MCST-R series and on the Elbrus-90micro computing systems based on them). The Elbrus VLIW processor with the original ELBRUS architecture has also been developed and is used in Elbrus-3M1 complexes. The main consumers of Russian microprocessors are military-industrial enterprises.

In Soviet times, one of the most popular chipsets, thanks to its simplicity and clarity, was the KR580 microprocessor kit, used for educational purposes - a set of chips similar to the Intel 82xx series. It was used in domestic computers such as the Radio-86RK, YuT-88 and Mikrosha.

Modern CPUs implemented as individual microcircuits (chips) that provide all the features characteristic of this class of device are called microprocessors. Since the mid-1980s, microprocessors have practically replaced all other types of CPUs, and as a result the term has come to be perceived as an ordinary synonym for the word "microprocessor". Yet this is not quite accurate: even today the central processing units of some supercomputers are complex assemblies built from large-scale (LSI) and ultra-large-scale (VLSI) integrated circuits.

Initially, the term central processing unit described a specialized class of logical machines designed to run complex computer programs. Due to the fairly close correspondence of this purpose to the functions of the computer processors that existed at that time, it was naturally transferred to the computers themselves. The use of the term and its abbreviation in relation to computer systems began in the 1960s. The design, architecture and implementation of processors have changed several times since then, but their main executable functions remain the same as before.

Early CPUs were created as unique components for unique, even one-of-a-kind, computer systems. Later, from the expensive method of developing processors designed to run one single or several highly specialized programs, computer manufacturers moved to mass production of standard classes of multi-purpose processor devices. The trend towards standardization of computer components arose during the era of rapid development of semiconductor elements, mainframes and minicomputers, and with the advent of integrated circuits it became even more popular. The creation of microcircuits made it possible to further increase the complexity of CPUs while simultaneously reducing their physical size. The standardization and miniaturization of processors has led to the deep penetration of digital devices based on them into everyday human life. Modern processors can be found not only in high-tech devices such as computers, but also in cars, calculators, mobile phones and even in children's toys. Most often they are represented by microcontrollers, where, in addition to the computing device, additional components are located on the chip (program and data memory, interfaces, input/output ports, timers, etc.). Modern computing capabilities of a microcontroller are comparable to personal computer processors of ten years ago, and more often than not even significantly exceed their performance.

Most modern personal computer processors are generally based on some version of the cyclic sequential processing process invented by John von Neumann.

J. von Neumann came up with a scheme to build a computer in 1946.

The most important steps in this process are outlined below. In different architectures and for different instructions, additional steps may be required. For example, arithmetic instructions may require additional memory accesses to read operands and write results. A distinctive feature of the von Neumann architecture is that instructions and data are stored in the same memory.

Execution cycle stages:

– the processor places the number stored in the program counter register on the address bus and issues a read command to the memory;

– the number placed on the bus serves as an address for the memory; the memory, having received the address and the read command, places the contents stored at that address on the data bus and signals readiness;

– the processor receives a number from the data bus, interprets it as a command (machine instruction) from its instruction set, and executes it;

– if the last instruction was not a branch instruction, the processor increments the number stored in the program counter by one (assuming each instruction is one word long); as a result, the address of the next command is formed there.
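The cycle just described can be sketched as a toy simulator. The instruction set below (LOAD/ADD/STORE/JUMP/HALT) and the memory layout are inventions for illustration, not any real architecture; the point is that instructions and data share one memory, and the program counter drives the fetch-decode-execute loop.

```python
# Toy von Neumann machine: fetch from memory at pc, decode, execute,
# then either increment pc or replace it (for a branch).
def run(memory):
    pc = 0                           # program counter
    acc = 0                          # accumulator register
    while True:
        op, arg = memory[pc]         # fetch and decode one instruction
        if op == "LOAD":             # acc <- memory[arg]
            acc = memory[arg]
            pc += 1
        elif op == "ADD":            # acc <- acc + memory[arg]
            acc += memory[arg]
            pc += 1
        elif op == "STORE":          # memory[arg] <- acc
            memory[arg] = acc
            pc += 1
        elif op == "JUMP":           # branch: pc is replaced, not incremented
            pc = arg
        elif op == "HALT":           # stop command ends the process
            return acc

# Program: compute 2 + 3 and store the result in cell 7.
program = [
    ("LOAD", 5), ("ADD", 6), ("STORE", 7), ("HALT", 0),
    ("JUMP", 0),                     # unreachable; shown for completeness
    2, 3, 0,                         # data lives in the same memory
]
print(run(program))                  # -> 5
```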

This cycle is executed invariably, and it is this cycle that is called a process (hence the name of the device).

During the process, the processor reads a sequence of commands contained in memory and executes them. This sequence of commands is called a program and represents the processor's algorithm. The order in which commands are read changes if the processor reads a jump command: the address of the next command may then be different. Another example of a change in the process is the receipt of a stop command or a switch to interrupt-handling mode.

CPU commands are the lowest level of computer control, so the execution of each command is inevitable and unconditional. No check is made to ensure that the actions performed are acceptable; in particular, the possible loss of valuable data is not checked. In order for the computer to perform only valid actions, the commands must be properly organized into the required program.

The speed of transition from one stage of the cycle to another is determined by the clock generator. The clock generator produces pulses that serve as rhythm for the central processor. The frequency of the clock pulses is called the clock frequency.
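As a small illustration of the last point, the duration of one clock cycle is simply the reciprocal of the clock frequency:

```python
# Cycle time as a function of clock frequency: T = 1 / f.
# With f expressed in GHz, 1 / f gives nanoseconds per cycle.
def cycle_time_ns(freq_ghz):
    return 1.0 / freq_ghz

print(cycle_time_ns(1.0))  # 1 GHz -> 1.0 ns per clock cycle
print(cycle_time_ns(4.0))  # 4 GHz -> 0.25 ns per clock cycle
```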

A pipelined architecture was introduced into the central processor to improve performance. Executing each command typically requires a number of similar operations, for example: fetching the command from RAM, decoding the command, addressing an operand in RAM, fetching the operand from RAM, executing the command, and writing the result to RAM. Each of these operations is associated with one stage of the pipeline. For example, a MIPS-I microprocessor pipeline contains four stages:

– receiving and decoding instructions,

– addressing and fetching an operand from RAM,

– performing arithmetic operations,

– saving the result of the operation.

After the k-th pipeline stage finishes its work, it immediately starts working on the next command. If we assume that each pipeline stage spends one unit of time on its work, then executing a command on a pipeline of n stages will take n units of time; however, in the most optimistic case, the result of each subsequent command will be obtained after each additional unit of time.

Indeed, in the absence of a pipeline, executing a command takes n units of time (since the command still needs to be fetched, decoded, etc.), and executing m commands takes n·m units of time; when using a pipeline (in the most optimistic case), executing m commands takes only n + m − 1 units of time.
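These timing estimates can be checked with a short calculation (an idealized model, assuming one time unit per stage and no stalls or flushes):

```python
# Time to execute m instructions on an n-stage pipeline (ideal case).
def no_pipeline(n, m):
    return n * m        # each instruction passes all n stages alone

def pipelined(n, m):
    return n + m - 1    # first result after n units, then one result per unit

n, m = 4, 10            # e.g. the 4-stage MIPS-I pipeline from the text
print(no_pipeline(n, m))  # -> 40 time units without pipelining
print(pipelined(n, m))    # -> 13 time units with an ideal pipeline
```

As m grows, the pipelined throughput approaches one instruction per time unit regardless of n.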

Factors that reduce pipeline efficiency:

– pipeline downtime, when some stages are not used (for example, addressing and fetching an operand from RAM is not needed if the instruction operates with registers);

– waiting: if the next command uses the result of the previous one, then it cannot begin execution before the previous one completes (this is overcome by using out-of-order execution);

– clearing the pipeline when a branch command hits it (this problem can be smoothed out using branch prediction).

Some modern processors have more than 30 stages in the pipeline, which increases processor performance, but leads to a lot of downtime (for example, in the event of an error in conditional branch prediction). There is no consensus on the optimal pipeline length: different programs may have significantly different requirements.

Superscalar architecture is the ability to execute several machine instructions in one processor cycle by increasing the number of execution units. The advent of this technology led to a significant increase in performance. At the same time, there is a limit to the growth in the number of execution units beyond which performance practically stops growing and the execution units sit idle. A partial solution to this problem is, for example, Intel's Hyper-Threading technology.

CISC processors (complex instruction set computer - computing with a complex instruction set): a processor architecture based on a complex set of instructions. Typical representatives of CISC are microprocessors of the x86 family (although for many years these processors have been CISC only in their external instruction set: at the beginning of the execution process, complex commands are broken down into simpler micro-operations executed by a RISC core).

RISC processors (reduced instruction set computer - computing with a reduced set of instructions): a processor architecture built on a simplified instruction set, characterized by fixed-length instructions, a large number of registers, register-to-register operations, and the absence of indirect addressing. The RISC concept was developed by John Cocke of IBM Research, and the name was coined by David Patterson.

Simplifying the instruction set is intended to shorten the pipeline, which avoids delays in conditional and unconditional branch operations. A homogeneous set of registers simplifies the work of the compiler when optimizing executable program code. In addition, RISC processors have lower power consumption and heat dissipation.

Among the first implementations of this architecture were the MIPS, PowerPC, SPARC, Alpha, and PA-RISC processors. In mobile devices, ARM processors are widely used.

MISC processors (minimum instruction set computer - computing with a minimum set of instructions): a further development of the ideas of Chuck Moore's team, who believes that the principle of simplicity, the original principle of RISC processors, faded into the background too quickly. In the heat of the struggle for maximum performance, RISC has caught up with and surpassed many CISC processors in complexity. The MISC architecture is based on a stack computing model with a small number of instructions (approximately 20-30).

VLIW processors (very long instruction word): a processor architecture with explicit computational parallelism built into the instruction set. VLIW is the basis for the EPIC architecture. The key difference from superscalar CISC processors is that in the latter a part of the processor (the scheduler) is responsible for loading the execution units, and only a very short time is available for this, while for a VLIW processor the compiler schedules work onto the computing devices, and much more time is available (so the quality of scheduling and, accordingly, performance should theoretically be higher). An example of a VLIW processor is the Intel Itanium.

Multi-core processors contain several processor cores in one package (on one or more chips).

Processors designed to run a single copy of the operating system on multiple cores represent a highly integrated implementation of multiprocessing.

The first multi-core microprocessor was IBM's POWER4, which appeared in 2001 and had two cores.

In October 2004, Sun Microsystems released the dual-core UltraSPARC IV processor, which consisted of two modified UltraSPARC III cores. At the beginning of 2005, the dual-core UltraSPARC IV+ was created.

On November 14, 2005, Sun released the eight-core UltraSPARC T1, with each core running 4 threads.

On January 5, 2006, Intel introduced the first dual-core processor on a single chip, the Core Duo, for the mobile platform.

On September 10, 2007, native (single-chip) quad-core AMD Opteron processors for servers were released, codenamed AMD Opteron Barcelona during development. On November 19, 2007, the quad-core AMD Phenom processor for home computers went on sale. These processors implement the new K8L (K10) microarchitecture.

In October 2007, the eight-core UltraSPARC T2 went on sale, with each core running 8 threads.

In March 2010, AMD released the world's first 12-core x86 server processors, the Opteron 6100 series.

At the moment, dual-, quad- and six-core processors are widely available, in particular the Intel Core 2 Duo on the 65 nm Conroe core (later on the 45 nm Wolfdale core) and the Athlon 64 X2 based on the K8 microarchitecture. In November 2006, the first quad-core Intel Core 2 Quad processor based on the Kentsfield core was released, which is an assembly of two Conroe dies in one package. The descendant of this processor was the Intel Core 2 Quad on the Yorkfield core (45 nm), architecturally similar to Kentsfield but with a larger cache and higher operating frequencies.

AMD took its own path, manufacturing quad-core processors on a single die (unlike Intel, whose first quad-core processors were effectively two dual-core dies glued together). Despite the progressiveness of this approach, the company's first "quad-core", called the AMD Phenom X4, was not very successful. Its lag behind contemporary competitor processors ranged from 5 to 30 percent or more, depending on the model and the specific tasks.

By the 1st-2nd quarter of 2009, both companies had updated their lines of quad-core processors. Intel introduced the Core i7 family, consisting of three models operating at different frequencies. The main highlights of this processor are the use of a triple-channel memory controller (DDR3) and a technology emulating eight cores (useful for some specific tasks). In addition, thanks to a general optimization of the architecture, it was possible to significantly improve processor performance in many types of tasks. The weak side of the Core i7 platform is its excessive cost, since installing this processor requires an expensive motherboard based on the Intel X58 chipset and a triple-channel set of DDR3 memory, which at the moment also carries a high price.

AMD, in turn, presented the Phenom II X4 processor line. When developing it, the company took its mistakes into account: the cache size was increased (clearly insufficient in the first Phenom), and production was moved to a 45 nm process technology, which made it possible to reduce heat generation and significantly increase operating frequencies. Overall, the AMD Phenom II X4 is on a par in performance with the previous generation of Intel processors (the Yorkfield core) and lags quite significantly behind the Intel Core i7. However, with the release of the AMD Phenom II X6 Black Thuban 1090T processor, the situation changed significantly in AMD's favor. This processor is priced at the level of the Intel Core i7 930, yet can compete with the Intel Core i7 line in performance. Its full 6 cores are well suited to complex multi-threaded tasks.

Caching is the use of additional high-speed memory (a cache) to store copies of blocks of information from main (RAM) memory that are likely to be accessed in the near future.

There are caches of levels 1, 2 and 3 (designated L1, L2 and L3, from Level 1, Level 2 and Level 3). The level 1 cache has the lowest latency (access time) but a small size; in addition, first-level caches are often made multi-ported. Thus, AMD K8 processors could perform a simultaneous 64-bit write and read, or two 64-bit reads, per clock cycle; the AMD K8L could perform two 128-bit reads or writes in any combination; Intel Core 2 processors can perform a 128-bit write and read per clock cycle. An L2 cache typically has significantly higher access latency, but it can be made much larger. The level 3 cache is the largest and quite slow, but it is still much faster than RAM.
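The caching idea can be illustrated with a toy model (not a description of any real processor's cache): a small fully associative cache with LRU replacement in front of slow main memory. Real L1/L2/L3 caches are set-associative and operate on cache lines, but the hit/miss behavior is the same in spirit:

```python
from collections import OrderedDict

# A toy fully associative cache with LRU (least recently used)
# replacement, standing in front of a "slow" main memory.
class Cache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()   # address -> cached value
        self.hits = self.misses = 0

    def read(self, address, main_memory):
        if address in self.lines:
            self.hits += 1
            self.lines.move_to_end(address)    # mark as most recently used
            return self.lines[address]
        self.misses += 1                       # slow access to main memory
        value = main_memory[address]
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)     # evict the least recently used line
        self.lines[address] = value
        return value

ram = {a: a * 10 for a in range(100)}
cache = Cache(capacity=4)
for a in [1, 2, 1, 3, 1, 2]:   # repeated addresses are served from the cache
    cache.read(a, ram)
print(cache.hits, cache.misses)  # -> 3 3
```

The repeated accesses to addresses 1 and 2 hit the cache; this locality of reference is exactly what makes caching pay off.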

The Harvard architecture differs from the von Neumann architecture in that program code and data are stored in separate memories. In such an architecture, many programming techniques are impossible (for example, a program cannot modify its own code during execution, and memory cannot be dynamically redistributed between program code and data); on the other hand, the Harvard architecture allows more efficient operation with limited resources, so it is often used in embedded systems.

Parallel architecture. The von Neumann architecture has the disadvantage of being sequential. No matter how large a mass of data needs to be processed, every byte of it must pass through the central processor, even when the same operation is to be performed on all bytes. This effect is called the von Neumann bottleneck.

To overcome this drawback, processor architectures called parallel have been and are being proposed. Parallel processors are used in supercomputers.

Possible options for parallel architecture can be (according to Flynn's classification):

SISD - one command stream, one data stream;

SIMD - one command stream, many data streams;

MISD - multiple command streams, one data stream;

MIMD - many command streams, many data streams.
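The difference between the first two Flynn classes can be sketched schematically. The Python code below only models the idea: real SIMD is implemented by vector hardware (e.g. SSE/AVX instructions or GPUs), not by an interpreter loop:

```python
# Conceptual illustration of SISD vs SIMD from Flynn's classification.

data = [1, 2, 3, 4]

# SISD: one instruction stream, one data stream - the operation is
# issued separately for every data element.
sisd_result = []
for x in data:
    sisd_result.append(x * 2)

# SIMD: one instruction stream, many data streams - a single "vector"
# operation is applied to all elements at once. The function below
# stands in for one hardware vector instruction.
def simd_multiply(vector, scalar):
    return [x * scalar for x in vector]

simd_result = simd_multiply(data, 2)
print(sisd_result == simd_result)  # -> True: same result, different execution model
```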

For digital signal processing, especially with limited processing time, specialized high-performance microprocessors with parallel architecture are used.

The first x86 processors consumed a tiny (by modern standards) amount of energy, a fraction of a watt. An increase in the number of transistors and an increase in the clock frequency of processors led to a significant increase in this parameter. The most productive models require up to 130 watts or more. The energy consumption factor, which was insignificant at first, now has a serious impact on the evolution of processors:

– improving production technology to reduce consumption, searching for new materials to reduce leakage currents, lowering the supply voltage of the processor core;

– the appearance of sockets for processors with a large number of contacts (more than 1000), most of which are intended to power the processor; thus, for processors using the popular LGA775 socket, the number of main power contacts is 464 (about 60% of the total);

– a change in processor layout: the processor die moved from the inside of the package to its top surface for better heat transfer to the cooling system's heatsink;

– integration of temperature sensors and an overheating protection system into the chip, which reduces the processor frequency or even stops it if the temperature increases unacceptably;

– the appearance in the latest processors of intelligent systems that dynamically change the supply voltage and the frequency of individual processor blocks and cores, and turn off unused blocks and cores;

– the emergence of energy-saving modes to “sleep” the processor at low load.

TASK 2

248.615 = F8.9D70A (16) = 370.47270 (8) = 11111000.10011 (2)

248 ÷ 16 = 15, remainder 8; the quotient 15 is the hex digit F, so 248 = F8 (16)

0.615 * 16 =9.84

0.84*16 =13.44

0.44*16= 7.04

0.04*16 = 0.64

0.64*16 = 10.24

248 ÷ 8 = 31, remainder 0; 31 ÷ 8 = 3, remainder 7; so 248 = 370 (8)

0.615*8 =4.92

0.92*8 = 7.36

0.36*8 = 2.88

0.88*8 = 7.04

0.04*8 = 0.32

248 ÷ 2 = 124, remainder 0

124 ÷ 2 = 62, remainder 0

62 ÷ 2 = 31, remainder 0

31 ÷ 2 = 15, remainder 1

15 ÷ 2 = 7, remainder 1

7 ÷ 2 = 3, remainder 1

3 ÷ 2 = 1, remainder 1

1 ÷ 2 = 0, remainder 1

Reading the remainders from last to first: 248 = 11111000 (2)

0.615*2 = 1.23

0.23*2 = 0.46

0.46*2 = 0.92

0.92*2 = 1.84

0.84*2 = 1.68

322.549 = 142.8C8B4 (16) = 502.43105 (8) = 101000010.10001 (2)

322 ÷ 8 = 40, remainder 2; 40 ÷ 8 = 5, remainder 0; so 322 = 502 (8)

0.549*8= 4.392

0.392 * 8 = 3.136

0.136 * 8 = 1.088

0.088 * 8 = 0.704

0.704 * 8 = 5.632

322 ÷ 16 = 20, remainder 2; 20 ÷ 16 = 1, remainder 4; so 322 = 142 (16)

0.549*16 =8.784

0.784 * 16 =12.544

0.544*16 =8.704

0.704*16 =11.264

0.264*16 = 4.224

322 ÷ 2 = 161, remainder 0

161 ÷ 2 = 80, remainder 1

80 ÷ 2 = 40, remainder 0

40 ÷ 2 = 20, remainder 0

20 ÷ 2 = 10, remainder 0

10 ÷ 2 = 5, remainder 0

5 ÷ 2 = 2, remainder 1

2 ÷ 2 = 1, remainder 0

1 ÷ 2 = 0, remainder 1

Reading the remainders from last to first: 322 = 101000010 (2)

0.549 *2 = 1.098

0.098*2 = 0.196

0.196*2 = 0.392

0.392*2 = 0.784

0.784 *2 = 1.568

11001100.10101 =204.65625 (10) = 314.52 (8) =CC.A8 (16)

11110001.11101= 241.90625 (10) = 361.72 (8) =F1.E8 (16)

2.462E+03 = 2462

7.355E-02 = 0.07355

5.526E+04 = 55260

1.254E-01 = 0.1254
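The worked conversions above can be verified with a short script that applies the same method: repeated division for the integer part and repeated multiplication for the fractional part:

```python
# Convert a decimal number to a given base, keeping frac_digits
# digits after the point (matching the hand calculations above).
def to_base(value, base, frac_digits=5):
    digits = "0123456789ABCDEF"
    integer, frac = int(value), value - int(value)
    int_part = ""
    while integer:                   # repeated division for the integer part
        integer, r = divmod(integer, base)
        int_part = digits[r] + int_part
    frac_part = ""
    for _ in range(frac_digits):     # repeated multiplication for the fraction
        frac *= base
        frac_part += digits[int(frac)]
        frac -= int(frac)
    return (int_part or "0") + "." + frac_part

print(to_base(248.615, 16))  # -> F8.9D70A
print(to_base(248.615, 8))   # -> 370.47270
print(to_base(248.615, 2))   # -> 11111000.10011
print(to_base(322.549, 16))  # -> 142.8C8B4

# And the reverse direction for the binary-to-decimal tasks:
print(int("11001100", 2) + int("10101", 2) / 2**5)  # -> 204.65625
```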

TASK 3

1) all files whose names begin with “pr” and contain no more than three characters

Mask: pr?.* (in classic DOS masks, a trailing "?" stands for at most one additional character, so the name contains no more than three characters)

all files

Mask: *.*

2) order.txt from the Setup folder

D:\Setup\order.txt

alladdin.exe from the Games folder

D:\Mguk\Games\alladdin.exe
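The masks from item 1 can be checked with Python's fnmatch module. Note one difference from classic DOS masks: in fnmatch, "?" matches exactly one character, so a two-character name like "pr" is not matched by "pr?.*":

```python
import fnmatch

# Hypothetical file names chosen to exercise the masks from TASK 3.
files = ["pr.txt", "pr1.txt", "pro.doc", "press.txt", "order.txt"]

# Names of exactly three characters starting with "pr"
# ("pr.txt" is excluded because fnmatch's "?" cannot match nothing).
print(fnmatch.filter(files, "pr?.*"))  # -> ['pr1.txt', 'pro.doc']

# All files.
print(fnmatch.filter(files, "*"))      # -> the whole list
```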

TASK 4

To complete this assignment, you need to develop an advertising sheet on the given topic in the Microsoft Word word processor. The document must contain:

– text;

– decorative (WordArt-style) text;

– a drawing;

– a table.

On the topic: “Advertising sheet for a book publishing house with a form for ordering books by mail.”






Subscribe to the free catalog and get a discount on every second purchase

Coupon for free quarterly catalog

FULL NAME. ___________________________

Age___________________________

Type of your activity____________

Your interests_____________________

Your literary preferences______________________________

Purchase amount | Discount amount

up to 1000 rub. | ____________


The most important component of any computer is its processor (microprocessor): a software-controlled information processing device made in the form of one or more large-scale or very-large-scale integrated circuits.

The processor includes the following components:

§ control unit - generates and supplies to all PC elements, at the right moments, certain control signals (control pulses) determined by the specifics of the operation being performed and by the results of previous operations;

§ arithmetic logic unit (ALU) - designed to perform all arithmetic and logical operations on numeric and symbolic information;

§ coprocessor - an additional unit needed for complex mathematical calculations and for working with graphics and multimedia programs;

§ general-purpose registers - high-speed memory cells used mainly as various counters and as pointers into the PC address space; using them can significantly increase the speed of the program being executed;

§ cache memory - a high-speed memory block for short-term storage, recording and output of the information being processed at a given moment or used in calculations; it improves processor performance;

§ data bus- interface system that implements data exchange with other PC devices;

§ clock generator (pulse generator);

§ interrupt controller.

The main characteristics of the processor are:

Clock frequency - the number of elementary operations (clock cycles) that the processor performs in one second. Clock frequency is measured in megahertz (MHz) or gigahertz (GHz). The higher the clock frequency, the faster the processor runs. This statement holds within one generation of processors, because different processor models require different numbers of clock cycles to perform the same actions.

Bit depth - the number of binary digits (bits) of information processed (or transmitted) in one clock cycle. Bit depth also determines the number of binary digits the processor can use to address RAM.
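The second point is easy to quantify: with n address bits, the processor can distinguish 2^n addresses. A quick check (assuming byte-addressable memory, one byte per address):

```python
# Addressable memory as a function of the number of address bits: 2**n.
def addressable_bytes(address_bits):
    return 2 ** address_bits

print(addressable_bytes(16))           # -> 65536 (64 KiB)
print(addressable_bytes(32) // 2**30)  # -> 4 (a 32-bit address covers 4 GiB)
```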

Processors are also characterized by: the type of processor core (the production technology, defined by the feature size of the smallest elements of the microprocessor); the frequency of the bus on which they operate; the cache size; membership in a particular family (as well as generation and modification); the "form factor" (the standard for the device's design and appearance); and additional features (for example, the presence of a special set of "multimedia instructions" designed to speed up work with graphics, video and sound).

Today, almost all desktop IBM PC-compatible computers use processors from two main manufacturers (two families): Intel and AMD.

Over the entire history of the IBM PC, there have been eight main generations in the Intel microprocessor family (from the i8088 to the Pentium IV). In addition, Intel has produced and continues to produce offshoot generations of Pentium processors (Pentium Pro, Pentium MMX, Intel Celeron, etc.). Generations of Intel microprocessors differ in speed, architecture, form factor, etc., and various modifications are produced within each generation.

A competitor to Intel microprocessors today is the AMD family of microprocessors: Athlon, Sempron, Opteron (Shanghai), Phenom.

Intel and AMD microprocessors are not interchangeable (although both are IBM PC-compatible and support the same programs) and require appropriate motherboards and, sometimes, memory.

For Macintosh (Apple) PCs, their own processor families are produced.


