It's summer outside, a thunderstorm is rolling past, a warm humid wind is blowing, and next to me on the test bench sits a brand-new heating stove of sorts, blowing hot air my way after carrying off the 280 watts of heat the card puts out, every last one of them.

I like hi-end accelerators because if you pull one out of the computer right after a test session (with some swearing and blowing on burnt fingers) and pack it into all the protective bags that shield this miracle of precision engineering from damage, then even after an hour in transit the product is still warm, as if fresh from the factory or the bakery. Sometimes even hot. Those hundreds of watts are nothing like unscrewing a light bulb and carrying it in your pocket.

Within our department, work sometimes requires us to pass video cards to one another for tests and other research, and occasionally you receive an accelerator that is still almost hot inside its packaging: it simply didn't have time to cool down during the car ride... :)

So: games still demand more FPS, people demand prettier graphics, and accelerators demand ever more power, which means cooler companies will stay in business for a long time, inventing ever more sophisticated ways to pull heat off these fire-breathing square dragons and push it out of the case (or sometimes just into the case, so everything bakes in there together). Soon we will be putting the same huge 24-pin connectors on video cards that we use to power motherboards. Three-slot video cards, which will probably need special mounting in the case, are already being talked about. Yes, the process node keeps shrinking, but video cards keep growing and growing, because ever more is asked of them.

Poor Nvidia has built another monster, just as it did in 2006 with the G80. The chip is very expensive (every parameter says so), and judging by the first information, very few cards will reach sale after the announcement, which points to a low yield of usable chips. At the same time, to curb demand, the price of the GTX 280 has been raised sky-high. Why poor? Because the situations in 2006 and now are different. Back then there was a real need for new super-powerful cards, and the G80 delivered a truly revolutionary breakthrough; now we get another twenty or thirty percent over... the 9800 GTX. Not even over the 9800 GX2. We will show everything in detail below, although there are also tests where the GTX 280 is the outright leader. And while the G80 (8800 GTX) once sold like hot cakes in winter, demand this time will clearly not be as high. Then again, considering how few cards there will be at launch, Nvidia seems afraid of exactly that demand, so the price has been set at 650 US dollars, which is plainly illogical given that even the 9800 GX2 costs less.

Well, to temper the intrigue with some practice, let's move on to studying the card itself. Readers have already been through the theoretical part and know that inside a square roughly 3 cm on a side lies the embodiment of Nvidia engineers' boldest ideas, requiring almost one and a half billion transistors; now let's see what it all looks like.

Boards

  • GPU: GeForce GTX 280 (GT200)
  • Interface: PCI-Express x16
  • GPU operating frequencies (ROPs/Shaders): 600/1300 MHz (nominal - 600/1300 MHz)
  • Memory operating frequencies (physical (effective)): 1100 (2200) MHz (nominal - 1100 (2200) MHz)
  • Memory bus width: 512bit
  • Number of vertex processors: -
  • Number of pixel processors: -
  • Number of universal processors: 240
  • Number of texture processors: 80 (BLF/TLF)
  • Number of ROPs: 32
  • Dimensions: 270x100x33 mm (the last value is the maximum thickness of the video card).
  • PCB color: black
  • RAMDACs/TMDS: placed in a separate NVIO chip.
  • Output Jacks: 2xDVI (Dual-Link/HDMI), TV-out.
  • VIVO: No
  • TV-out: integrated into the GPU.
  • Multiprocessor support: SLI (Hardware).
Comparison with reference design, front view
Reference Nvidia GeForce GTX 280 1024MB PCI-E
Comparison with reference design, rear view
Reference Nvidia GeForce GTX 280 1024MB PCI-E
Reference Nvidia GeForce 9800 GTX card

Obviously, this is a completely new design, unlike anything Nvidia has released before, since the PCB carries a 512-bit memory bus. That means 16 memory chips on the board, which in turn required double-sided mounting (8 chips per side). The card therefore remains long, and the PCB is very expensive. Recall also that Nvidia has again split the GPU's functions, moving all the blocks responsible for display output into a separate NVIO chip, as was done with the G80 (8800 GTX/Ultra).

Above are the GPU and that same NVIO chip. The GPU die is clearly much smaller than it appears: it is covered by a heat spreader, but one can still imagine the area of a core that holds almost 1.5 billion transistors.

Now about the cooler. The cooling system is not fundamentally different from the one we saw on the GeForce 8800 GTS 512, and the cooler's shape is the same: the heatsink simply grew longer to match the card, and a plate was added at the back to cool the memory chips on the reverse side. The whole thing is assembled so that the covers form one large common heatsink (the back and front covers snap together, so disassembling the card and removing the cooler presents certain difficulties, and some experience is needed to expose the card without damaging it). The engineers clearly drew on their experience building the 9800 GX2 with the same latches.

We remind you once again of an important point: the accelerator is 270 mm long, like the 8800 GTX/Ultra, so the case must have enough room for such a design. Note also that the shroud keeps a constant width along its entire length, so the motherboard must be free of ports or tall capacitors not just behind the PCI-E x16 slot but also within 30 mm alongside it.

Video cards of this series are equipped with a jack for feeding an audio stream from a sound card, which is then carried out over HDMI (via a DVI-to-HDMI adapter); that is, the card has no audio codec of its own and takes the signal from an external sound card. So if this function matters to you, make sure the card ships with the audio cable needed for it.

We also note that the accelerator is powered through TWO connectors, one 6-pin and one 8-pin, so check that the bundle includes an adapter for the 8-pin power plug.

The card has a TV-out socket with a unique connector; outputting an image to a TV over either S-Video or RCA requires the special adapters supplied with the card.

Connection to analog monitors with d-Sub (VGA) inputs is made through DVI-to-d-Sub adapters. DVI-to-HDMI adapters are also supplied (recall that these accelerators can carry both video and audio to an HDMI receiver), so there should be no problems with such monitors either.

Maximum resolutions and frequencies:

  • 240 Hz Max Refresh Rate
  • 2048 × 1536 × 32bit @ 85 Hz Max - via analog interface
  • 2560 × 1600 @ 60Hz Max - via digital interface (all DVI sockets with Dual-Link)

As for these cards' ability to play MPEG2 (DVD-Video), we studied that question back in 2002, and little has changed since. Depending on the movie, CPU load during playback on modern video cards stays below 25%.

As for HDTV playback, a similar study has been conducted and is available separately.

We conducted a temperature study using the RivaTuner utility (author A. Nikolaychuk AKA Unwinder) and obtained the following results:

It is worth noting how far the frequencies drop in 2D mode (left marker in the screenshot): down to 100(!) MHz on the shader domain and the memory! This cuts the card's power consumption to about 110 W, whereas under full 3D load the accelerator consumes the whole 280 W. At the same time core temperature reaches 80 degrees, which is within the norm, especially since the cooler stays quiet. In this respect the card is flawless; it simply demands a very powerful PSU. We trust everyone understands that below 700 W there is no point in even trying.

Since the card is supplied in OEM form as a sample, we are not talking about the delivery kit.

Installation and drivers

Test bench configuration:

  • Computer based on Intel Core2 (775 Socket)
    • processor Intel Core2 Extreme QX9650 (3000 MHz);
    • Zotac 790i Ultra motherboard with Nvidia nForce 790i Ultra chipset;
    • RAM 2 GB DDR3 SDRAM Corsair 2000MHz (CAS (tCL)=5; RAS to CAS delay (tRCD)=5; Row Precharge (tRP)=5; tRAS=15);
    • hard drive WD Caviar SE WD1600JD 160GB SATA.
    • power supply Tagan TG900-BZ 900W.
  • operating system Windows Vista 32bit SP1; DirectX 10.1;
  • Dell 3007WFP monitor (30").
  • ATI drivers version CATALYST 8.5; Nvidia versions 175.16 (9xxx series) and 177.34 (GTX 280).

VSync is disabled.

Synthetic tests

The synthetic test packages we use can be downloaded here:

  • D3D RightMark Beta 4 (1050), with a description at 3d.rightmark.org
  • D3D RightMark Pixel Shading 2 and D3D RightMark Pixel Shading 3: tests of pixel shaders versions 2.0 and 3.0
  • RightMark3D 2.0 with a brief description:

RightMark3D 2.0 requires the MS Visual Studio 2005 runtime package installed, as well as the latest DirectX runtime update.

Synthetic tests were carried out on the following video cards:

  • Nvidia GeForce GTX 280 with standard parameters (further GFGTX280)
  • Nvidia GeForce 9800 GX2 with standard parameters (further GF9800GX2)
  • Nvidia GeForce 9800 GTX with standard parameters (further GF9800GTX)
  • Nvidia GeForce 8800 Ultra with standard parameters (further GF8800U)
  • RADEON HD 3870 X2 with standard parameters (further HD3870X2)
  • Radeon HD 3870 with standard parameters (further HD3870)

These particular models were chosen for comparison with the GeForce GTX 280 for the following reasons: the GeForce 9800 GX2 as the fastest dual-chip card of the previous generation, the GeForce 9800 GTX as the fastest single-chip one, and the older GeForce 8800 Ultra in order to see the difference in throughput and gauge the impact of the architectural improvements. The comparison with the RADEON HD 3870 and HD 3870 X2 is interesting because they are currently the fastest single-chip and dual-chip solutions from AMD.

Direct3D 9: Pixel Filling Tests

The test determines the peak texture sampling performance (texel rate) in FFP mode for a different number of textures applied to one pixel:

As usual, not all video cards produce values close to the theoretical ones. Synthetic results most often fall short of theory; cards based on the G80 and RV670 come closest, missing it by only 10-15%. The Nvidia cards with improved TMUs do not reach their theoretical maximum in our old test at all: no improvement is visible in the GT200, and both the G92 and the GT200 fetch only about 32 texels per clock from 32-bit textures with bilinear filtering, well short of their theoretical capabilities. Then again, our outdated test may be to blame.
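For reference, here is the simple arithmetic behind the theoretical peaks these tests chase, using the published unit counts and clocks and assuming one bilinear texel per TMU and one pixel per ROP per clock:

  • texel rate: 80 TMUs × 602 MHz ≈ 48.2 gigatexels/s for the GTX 280, versus 64 TMUs × 675 MHz ≈ 43.2 gigatexels/s for the 9800 GTX;
  • pixel fill rate: 32 ROPs × 602 MHz ≈ 19.3 gigapixels/s for the GTX 280, versus 16 ROPs × 675 MHz ≈ 10.8 gigapixels/s for the 9800 GTX.

So in pure texturing the new card's theoretical edge over the 9800 GTX is modest, while in pixel fill it is nearly double; that is worth keeping in mind when reading the graphs.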

Even so, the GeForce GTX 280 sits too close to the GeForce 9800 GTX, and with a single texture it even loses to the GeForce 8800 Ultra despite its higher memory bandwidth! In such cases the cards are limited by that very bandwidth... With a larger number of textures per pixel the capabilities of the ROP units show more fully, and under harder conditions the GT200 card becomes the fastest (if we discount the incorrect result of the dual-chip Nvidia card). The new product outperforms AMD's dual-chip card in all tested modes. Now let's look at the fill rate test:

The second synthetic test measures fill rate, and here we see the same picture, but now accounting for the number of pixels written to the frame buffer. It is strange that with 0 and 1 overlaid textures the GeForce GTX 280 scores so low; in these modes performance is usually limited by memory bandwidth and by the number and clock speed of the ROPs, and on both counts the new solution is fine...

Otherwise everything repeats the previous test: only with a large number of textures per pixel does the GeForce GTX 280 slightly outperform its closest rivals, although its lead should have been stronger.

Direct3D 9: Geometry Processing Speed Tests

Let's look at a couple of peak geometry tests, starting with the simplest vertex shader, which shows maximum triangle throughput:

All modern chips are based on unified architectures, their universal execution units in this test are occupied only with geometric work, and the solutions show high results that clearly depend not on the peak performance of the unified units, but on the performance of other units, for example, triangle setup.

Actually, the results once again confirm that AMD chips process geometry faster than Nvidia chips, and that dual-chip solutions in AFR mode effectively double the frame rate. The GeForce GTX 280 loses to the dual-chip cards, leads the G80 solution, and is on par with the fastest single-chip G92 card, so this test appears to depend chiefly on GPU clock speed. Interestingly, across the different modes the GT200's behavior resembles the G80's rather than the G92's.

We have dropped the intermediate geometry tests with one light source and move straight to the most complex geometric problem, with three light sources, including variants with static and dynamic branching:

In this variant the difference between the AMD and Nvidia solutions is more visible, and the gap has widened slightly. The GeForce GTX 280 posts the best result among Nvidia cards, slightly ahead of the GeForce 9800 GTX and 8800 Ultra everywhere except the FFP test, which no longer interests anyone. Overall the new chip does well in these geometry tests, but in real applications the universal shader processors are mostly busy with pixel work, whose performance we examine next.

Direct3D 9: Pixel Shader Tests

The first group of pixel shaders that we are considering is very simple for modern video chips; it includes various versions of pixel programs of relatively low complexity: 1.1, 1.4 and 2.0.

The tests are too simple for modern architectures and do not show their true strength. This is clearly seen in the first two tests (Wood and Psychodelic), the results of which are the same for almost all solutions. In addition, in simple tests, performance is limited by the speed of texture samples, which is evident from the weak results of the RADEON HD 3870 X2, which showed results at the level of single-chip Nvidia solutions.

In more complex tests, the GeForce GTX 280 shows good results, outperforming both the top-end G92 card and the G80 card. Moreover, as the complexity of the task increases, the gap between the GT200 and previous chips clearly grows. Although the card does not match the dual-chip 9800 GX2 in any of the tests. Let's look at the test results of more complex pixel programs of intermediate versions:

The "Water" test of procedural water rendering, which depends heavily on texturing speed, uses dependent fetches from deeply nested textures, so the cards line up strictly by texturing speed, as in the very first graph. The lone RADEON entry, even as a dual-chip card, trails all the G92, G80 and GT200 solutions. The card we are reviewing today loses only to the dual-chip 9800 GX2 and leads its single-chip brethren, exactly as theory predicts.

The second test is more computationally intensive and clearly suits the R6xx and GT200 architectures, with their larger number of compute units. Here the AMD solution takes first place, followed by Nvidia's dual-chip card, and the GeForce GTX 280 trails them only slightly. A decent result: in this test the GT200 is 1.7 times faster than a single G92, just as Nvidia wrote in its presentations. The 9800 GX2's SLI efficiency, on the other hand, is clearly lacking here.

Direct3D 9: New Pixel Shader Tests

These DirectX 9 pixel shader tests are even more difficult and fall into two categories. Let's start with the simpler version 2.0 shaders:

  • Parallax Mapping: a texture mapping method familiar from most modern games, described in detail in an earlier article
  • Frozen Glass: a complex procedural frozen-glass texture with controllable parameters

There are two variants of these shaders: those with a focus on mathematical calculations, and those with a preference for sampling values ​​from textures. Let's consider mathematically intensive options that are more promising from the point of view of future applications:

The standings in the Frozen Glass test differ from previous results. Although this is a math test that should depend on shader-unit clock speed, the GeForce GTX 280 leads the 9800 GTX only slightly, and the dual-chip 9800 GX2 is far ahead of both. Apparently performance is limited not only by math but also by texture fetch speed. The RADEON HD 3870 X2 posts the weakest result.

In the second test, "Parallax Mapping", the AMD solution is noticeably stronger, though still behind the best Nvidia cards; this time it loses only to the new card and the dual-chip solution. The TMU and on-chip cache improvements show in the GTX 280's result: it beats the dual-chip RADEON and falls only slightly behind the similar solution built on two G92s. Now let's consider the texture-heavy variants of these tests, where G92-based cards should look relatively better:

The situation has changed a little; we see a clear emphasis on performance in the speed of texture units. In all tests, the GeForce GTX 280 is significantly ahead of the AMD solution and slightly ahead of all its single-chip counterparts. But ahead of everyone is the two-chip GeForce 9800 GX2. It should be noted that for all solutions, shader options with a large number of mathematical calculations work 1.5-2 times faster compared to their “texture” options.

Let's look at the results of two more tests of pixel shaders version 3.0, the most complex of our pixel shader tests for Direct3D 9. The tests differ in that they heavily load both the ALU and texture modules; both shader programs are complex, long, and include a large number of branches:

  • Steep Parallax Mapping: a much "heavier" variant of the parallax mapping technique, also described in the article mentioned above
  • Fur: a procedural shader that renders fur

Although AMD solutions execute complex version 3.0 pixel shaders with many branches efficiently, the GeForce 9800 GTX scores on par with the dual-chip RV670 card. This can be explained by the G9x's faster bilinear texture fetches and by the higher resource-utilization efficiency of its scalar architecture compared with AMD's superscalar one.

The dual-chip GeForce 9800 GX2 nearly doubles that performance and leads both tests, while the GeForce GTX 280 under review logically lands in the middle between them. One would like a bigger gap between the GT200 and the G92, of course... at least 1.6-1.7 times.

Direct3D 10: PS 4.0 pixel shader tests (texturing, loops)

The new version of RightMark3D 2.0 includes two familiar PS 3.0 tests for Direct3D 9, which have been rewritten for DirectX 10, as well as two more completely new tests. The first pair added the ability to enable self-shadowing and shader supersampling, which further increases the load on video chips.

These tests measure the performance of pixel shaders running in cycles, with a large number of texture samples (in the heaviest mode, up to several hundred samples per pixel!) and a relatively small ALU load. In other words, they measure the speed of texture samples and the efficiency of branches in the pixel shader.

The first pixel shader test is Fur. At the lowest settings it takes 15 to 30 texture samples from the height map and two samples from the main texture. The "High" effect detail mode raises the count to 40-80 samples, enabling "shader" supersampling raises it to 60-120, and "High" combined with SSAA gives the heaviest case: 160 to 320 samples from the height map.

Let's first check the modes without supersampling; they are relatively simple, and the ratio between the cards' results in "Low" and "High" modes should stay roughly the same.

The results in “High” were almost one and a half times lower than in “Low”. Otherwise, Direct3D 10 tests of procedural fur rendering with a large number of texture samples again show a huge advantage of Nvidia solutions over AMD. Performance in this test depends not only on the number and speed of TMU blocks, but also on the fill rate and bandwidth. A comparison of the results of the GeForce 9800 GTX and 8800 Ultra indicates this.

The hero of this review, the GeForce GTX 280, does very well here: it trails the dual-chip GeForce 9800 GX2 only slightly while beating the single-chip G92 solution by 60-70%. Let's look at the same test with shader supersampling enabled, which quadruples the work; perhaps memory bandwidth and fill rate will matter less there:

Enabling supersampling theoretically quadruples the load, but Nvidia cards lose a little more speed than AMD ones, which narrows the gap, so the HD 3870 and its X2 variant move up a bit. The advantage of the Nvidia cards has not gone away, though; it remains overwhelming.

Otherwise, as the complexity of the shader and the load on the video chip increases, the difference between the GeForce GTX 280 and all other Nvidia cards grows very strongly. Now the new GTX is 2.5 times faster than the old one! This is what it means to have an architecture redesigned to run the most complex shaders. Even the dual-chip 9800 GX2 is defeated by a large margin.

The second test, which measures the performance of complex pixel shaders with loops with a large number of texture samples, is called Steep Parallax Mapping. At low settings it uses 10 to 50 texture samples from the height map and three samples from the main textures. When you enable heavy mode with self-shadowing, the number of samples doubles, and supersampling quadruples this number. The most complex test mode with supersampling and self-shadowing selects from 80 to 400 texture values, that is, eight times more than the simple mode. Let's first check simple options without supersampling:

This test is even more interesting from a practical standpoint, because varieties of parallax mapping have long been used in games, and heavy variants like our steep parallax mapping appear in some projects, for example Crysis and Lost Planet. Besides supersampling, our test lets you enable self-shadowing, which roughly doubles the load on the video chip; this mode is called "High".
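To make the sample-count arithmetic tangible, here is a toy sketch of the technique written in CUDA C (our own illustration, not RightMark's actual HLSL shader; the procedural height() function stands in for a real height-map fetch). Every loop step costs one fetch, so doubling the step count for self-shadowing and quadrupling it for supersampling multiplies the per-pixel work exactly as described above:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Stand-in for a height-map texture fetch.
__device__ float height(float u, float v) {
    return 0.5f + 0.5f * sinf(u * 37.0f) * cosf(v * 29.0f);
}

// March along the view ray through the height field; each loop
// iteration is one "texture sample" of the kind the test counts.
__global__ void steep_parallax(float* out, int w, int h, int steps) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;
    float u = x / (float)w, v = y / (float)h;
    float du = 0.02f / steps, dv = 0.013f / steps;
    float depth = 1.0f, dz = 1.0f / steps;
    for (int i = 0; i < steps; ++i) {
        if (height(u, v) >= depth) break;  // ray hit the surface
        u += du; v += dv; depth -= dz;
    }
    out[y * w + x] = depth;  // the offset depth stands in for the shaded result
}

int main() {
    const int w = 256, h = 256;
    float* d_out;
    cudaMalloc(&d_out, w * h * sizeof(float));
    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    steep_parallax<<<grid, block>>>(d_out, w, h, 50);  // 50 steps ~ the test's upper bound
    cudaDeviceSynchronize();
    cudaFree(d_out);
    puts("done");
    return 0;
}
```

A second, similar march toward the light source would add the self-shadowing term, which is why that option roughly doubles the load.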

The situation of the previous test was repeated. Although AMD solutions were previously strong in Direct3D 9 parallax mapping tests, in the updated D3D10 version without supersampling they cannot cope with our task at the level of Geforce video cards. In addition, enabling self-shadowing causes a larger performance drop on AMD products compared to the difference for Nvidia solutions.

Without supersampling, the GeForce GTX 280 we are reviewing today starts to outpace everyone, including the GeForce 9800 GX2, and in heavy mode it beats the 9800 GTX and 8800 Ultra by more than two times. Let's see what enabling supersampling changes; in the previous test it hit Nvidia cards harder.

With supersampling and self-shadowing enabled the task gets much harder; turning both options on raises the load almost eightfold, causing a large performance drop, and the relative standings shift somewhat. Supersampling has the same effect as before: the AMD cards improve relative to the Nvidia solutions. The HD 3870 still trails every GeForce, but the dual-chip X2 is almost on par with the 8800 Ultra and 9800 GTX.

As for comparing the GeForce GTX 280 with the previous tops built on a single G80 or G92 chip, it beats both by a factor of 2-3! And in High mode the new card is also well ahead of the dual-chip G92 board. Once again an excellent result, showing how well the GT200 copes with such complex tasks.

Direct3D 10: PS 4.0 Pixel Shader Tests (Compute)

The next couple of pixel shader tests contain a minimum number of texture fetches to reduce the performance impact of the TMU units. They use a large number of arithmetic operations, and they measure precisely the mathematical performance of video chips, the speed of execution of arithmetic instructions in a pixel shader.

The first math test is Mineral, a complex procedural texturing test that uses only two texture fetches and 65 sin and cos instructions.
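As an illustration of that kind of load (a minimal sketch in CUDA C, not the actual test shader), the kernel below keeps every thread busy with a chain of sin/cos operations and almost no memory traffic, so throughput is bounded by the shader ALUs rather than by the TMUs or memory:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// ALU-bound kernel: one thread per "pixel", a long chain of transcendental
// math, and a single store at the end so the compiler keeps the loop.
__global__ void alu_burn(float* out, int n, int iters) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float x = i * 1e-4f;
    for (int k = 0; k < iters; ++k)
        x = sinf(x) + cosf(x * 0.5f);
    out[i] = x;
}

int main() {
    const int n = 1 << 20;
    float* d_out;
    cudaMalloc(&d_out, n * sizeof(float));
    alu_burn<<<(n + 255) / 256, 256>>>(d_out, n, 65);  // ~65 sin/cos, like "Mineral"
    cudaDeviceSynchronize();
    cudaFree(d_out);
    puts("done");
    return 0;
}
```

Doubling iters to 130 mimics the heavier "Fire" test discussed below.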

When analyzing our synthetic results in the past, we repeatedly noted that in computationally heavy tasks the modern AMD architecture often showed better than its Nvidia competitor. But times change: in a contest between the RADEON HD 3870 and any GeForce, AMD's solution now loses. The dual-chip HD 3870 X2 does well, though (thanks to AFR), running almost level with the dual-chip GeForce 9800 GX2.

But today we care about the GeForce GTX 280's performance, and it is simply excellent: the card on the new GT200 chip nearly catches the dual-chip cards of the previous generation, leading the "old" GeForce 8800 Ultra and the "almost new" GeForce 9800 GTX by 60-70%, which roughly matches the difference in raw shader power, unit count times clock frequency.

The second compute test is called Fire, and it is even heavier on the ALUs: just one texture fetch, with the number of sin and cos instructions doubled to 130. Let's see what changes under the higher load:

In general, in this test the rendering speed is clearly limited by the performance of the shader units. Since the release of the RADEON HD 3870 X2, the bug in AMD drivers has been corrected, the result of AMD's solutions has become consistent with theory, and now the RADEON HD 3870 in this test shows speed even higher than all GeForce 8800 and 9800.

Except for the GeForce GTX 280, that is: it is more than 1.5 times faster than its single-chip Nvidia predecessors, again close to the theoretical difference in shader performance. The leader is the dual-chip RADEON HD 3870 X2, and with the arrival of new AMD solutions the palm in math tests will quite likely pass to them.

Direct3D 10: Geometry Shader Tests

The RightMark3D 2.0 package has two geometry shader speed tests. The first, "Galaxy", uses a technique similar to the "point sprites" of earlier Direct3D versions: it animates a particle system on the GPU, with the geometry shader building four vertices from each point to form a particle. Similar algorithms should see wide use in future DirectX 10 games.

Changing the balancing in geometry shader tests does not affect the final rendering result, the final image is always exactly the same, only the methods of processing the scene change. The “GS load” parameter determines in which shader the calculations are performed: vertex or geometry. The number of calculations is always the same.

Let's look at the first version of the Galaxy test, with calculations in the vertex shader, for three levels of geometric complexity:

Now the fun begins, because Nvidia promised improved geometry shader efficiency. The graph shows, however, that the first test makes poor use of those capabilities, so we will have to wait for the second. The ratio of speeds across the different geometric complexities is roughly the same for all cards: performance tracks the number of points, with FPS dropping by about half at each step. The task is not very hard for modern video cards, and a limit imposed by stream-processor power is not evident; the test is also constrained by memory bandwidth and fill rate, though to a lesser degree.

The Geforce GTX 280 shows results on par with the dual-chip RADEON HD 3870 X2, which is more than twice as fast as the single HD 3870. In terms of speed among its peers from Nvidia, the results of the announced card are exactly between the single card based on the G92 chip and the dual-chip version. Overall, not too bad, although I would like to achieve the performance of the 9800 GX2. Perhaps, when some of the calculations are transferred to the geometry shader, the situation will change, let’s see:

The difference between the considered test options is small; no significant changes occurred. All Nvidia video cards show almost the same results when changing the GS load parameter, which is responsible for transferring part of the calculations to the geometry shader. But the results of both AMD video cards have increased slightly, and the RADEON HD 3870 is already less behind, and the dual-chip HD 3870 X2 is even slightly ahead of the Geforce GTX 280. Let's see what changes in the next test, which assumes a greater load on geometry shaders...

“Hyperlight” is the second test of geometry shaders, demonstrating the use of several techniques at once: instancing, stream output, buffer load. It uses dynamic geometry creation using dual buffer rendering, as well as the new Direct3D 10 stream output feature. The first shader generates the direction of the rays, the speed and direction of their growth, this data is placed in a buffer, which is used by the second shader for drawing. For each point of the ray, 14 vertices are built in a circle, up to a million output points in total.

A new type of shader programs is used to generate “rays”, and with the “GS load” parameter set to “Heavy” also to draw them. That is, in the “Balanced” mode, geometry shaders are used only to create and “grow” rays, the output is carried out using “instancing”, and in the “Heavy” mode, the geometry shader is also involved in output. First we look at the easy mode:

The relative results in the different modes match the load: in every case performance scales well and stays close to theory, by which each successive "Polygon count" level should be twice as slow. This time the GeForce 9800 GX2's performance sank somewhere very, very deep; perhaps new drivers will change that. Both AMD cards also trail all of the Nvidia solutions.

Comparing all the boards based on the G80, G92 and GT200, we can see that the test is bound by something other than memory bandwidth, fill rate or processing power: all the cards come out nearly equal. It is somewhat surprising that in heavy mode the GT200 loses slightly to the G92... The numbers may change in the next diagram, in a test that uses geometry shaders more actively. It will also be interesting to compare the "Balanced" and "Heavy" results against each other.

Well, here is what we have been waiting for! For the first time in the geometry tests, the speed ratio between the GT200 and everyone else has changed the way Nvidia's engineers intended when they addressed the shortcomings of previous architectures. The GeForce GTX 280 is more than twice as fast as the GeForce 9800 GTX and 8800 Ultra, and it also leads the dual-chip RADEON HD 3870 X2. It would probably have beaten the 9800 GX2 fairly even without the help of the latter's driver problems in this test.

As for comparing the two modes, the single-chip AMD card gets no help even from the fact that Nvidia cards (except the new GT200 one) lose performance when output switches from "instancing" to the geometry shader: for all GeForce cards on G92 and G80 chips, speed in "Balanced" mode is higher than in "Heavy" mode, and higher than the RADEON HD 3870's as well. Meanwhile the picture rendered in the different modes does not differ visually.

The behavior of the GeForce GTX 280 in "Balanced" and "Heavy" is far more interesting: this is the first Nvidia video card to gain performance from moving part of the calculations to the geometry shader in this test. Once again Nvidia has clearly worked on its mistakes, as it has more than once before. Certain others should learn from that instead of stepping on the same rake generation after generation...

Direct3D 10: Texture fetch speed from vertex shaders

The Vertex Texture Fetch tests measure the speed of a large number of texture fetches from the vertex shader. The two tests are similar in essence, and the ratio between the cards' results in "Earth" and "Waves" should be roughly the same. Both build displacement mapping on texture fetch data; the only significant difference is that "Waves" uses conditional branches while "Earth" does not.

Let's look at the first "Earth" test, first in the "Effect detail Low" mode:

Judging by earlier studies, the results of this test depend heavily on memory bandwidth, and the simpler the mode, the greater its influence. That shows clearly in the relative results of the GeForce 9800 GTX and GeForce 8800 Ultra: in the simple mode the latter wins thanks to its clear bandwidth advantage, in the middle modes the results converge, and in the hardest mode they are almost equal.

The dual-chip 9800 GX2 does not really pull ahead, although the HD 3870 X2 shows a twofold gain over the HD 3870; the likely culprit is driver shortcomings, or more precisely the AFR mode. Still, even the GeForce 8800 Ultra beats the HD 3870 X2, and today's GeForce GTX 280 secures the formal lead. Let's look at the same test with an increased number of texture fetches:

The situation has not changed too much; in easy mode the GTX 280 continues to lead, but in difficult mode the 9800 GX2 is already ahead. However, the GeForce GTX 280 is still faster than both competitors from AMD and is slightly ahead of its single-chip counterparts in the GeForce 8 and 9 lines. Like last time, as the task becomes more complicated, the results of the cards become more compact.

Let's look at the results of the second test of texture fetches from vertex shaders. The Waves test has a smaller number of samples, but it uses conditional jumps. The number of bilinear texture samples in this case is up to 14 (“Effect detail Low”) or up to 24 (“Effect detail High”) per vertex. The complexity of the geometry changes similarly to the previous test.

The "Waves" test is kinder to AMD's products: the single-chip model of the RADEON HD 3800 family looks good, leading the G92-based solution in light mode and trailing it only slightly in heavy mode. Clearly, speed here depends not so much on TMU power as on bandwidth and fill rate, since even the dual-chip card with two G92s scored at the level of the previous-generation GeForce 8800 Ultra. Our hero, the GeForce GTX 280, leads everyone in the lightest mode but yields to the dual-chip RADEON in the other two. Let's consider the second version of the same test:

There are few changes, but as test complexity grew, the RADEON HD 3800 series cards improved slightly relative to the Nvidia cards, which lost a bit more speed. All the other conclusions hold: speed is limited mostly by bandwidth, more strongly in light mode, while in heavy mode TMU power and dual-chip scaling start to matter, so the 9800 GX2 catches the GTX 280 and the HD 3870 X2 pulls clear ahead. In the VTF tests the AMD boards' standing has improved noticeably; we used to find that Nvidia solutions coped better with vertex texture fetch tests, but now the situation is different.

Conclusions on synthetic tests

Based on the synthetic results for the GeForce GTX 280 and for the other cards from both major GPU makers, we can conclude that the new Nvidia solution is very powerful. In synthetics it is significantly faster than the single-chip versions of the previous generation, sometimes twice as fast or more, and it often competes on equal terms with dual-chip products. This was made possible by the improved GT200 architecture with its increased number of ALU, TMU and ROP execution units. All the modifications and refinements let the reviewed card post excellent results throughout the synthetic tests.

The higher speed comes not only from more execution units but also from architectural improvements over the G8x/G9x: higher efficiency and computational performance, which matter for current and future applications full of complex shaders of every type. Changes touched nearly every block of the GT200; the shader processors, texture units, ROPs and much more all became more capable.

In addition to modifications aimed at further increasing performance, Nvidia also paid attention to eliminating annoying shortcomings in the G8x/G9x. Thanks to this, video cards based on the GT200 chip show better results in very complex shaders, and especially complex geometry shaders with on-the-fly geometry creation. This is the first video chip from Nvidia that received a performance boost from moving part of the calculations to the geometry shader in one of our synthetic tests. And it’s even more pleasant that the company itself uses our test for internal purposes.

Overall, the new GeForce GTX 280 is well balanced, especially for future applications that lean harder on shaders. It has plenty of every kind of execution unit, a very wide memory bus and hence high memory bandwidth, and the optimal amount of local video memory for a high-end solution. The solution has few technical shortcomings; the only thing we would wish for is a somewhat higher clock speed for the chip in general and the shader domain in particular, but that is more a question for the process technology...

The next part of our article covers tests of Nvidia's new solution in modern gaming applications. Those results should roughly follow the conclusions drawn from the synthetics, adjusted for the greater influence of fill rate and memory bandwidth: rendering speed in games depends more on texturing speed and fillrate than on ALU and geometry power. Judging by the synthetics, we can expect the GeForce GTX 280's speed in games to land somewhere between the GeForce 9800 GTX and the 9800 GX2, closer to the latter; that is, on average the GT200 should be 60-80% faster than the G92.

The power supply for the test bench was provided by the company TAGAN
Dell 3007WFP monitor for test benches provided by the company

Today's article looks at the world's most modern and most powerful graphics chip from NVIDIA, codenamed GT200, and the video adapter based on it, the GeForce GTX 280. We will try to cover all of its most interesting features, innovations and differences from previous chips, and also test its performance under equal conditions and compare it with competitors.

Background

But not all at once: let's go back in time a little and trace the history of graphics chip development. It is no secret that for many years two companies have competed in the graphics card market: ATI (since bought by AMD and now selling under the AMD Radeon brand) and NVIDIA. There are smaller players too, such as VIA with its S3 Chrome chips or Intel with its integrated video adapters, but the confrontation between ATI (AMD) and NVIDIA has always set the tone. And notably, the fiercer this confrontation, let's not be afraid to call it a cold war, the faster scientific and technical progress moved and the more the end users, that is, you and I, benefited. One mechanism in the fight for users' wallets is the technical superiority of one manufacturer's products; another is pricing policy and the price/features ratio. Often the second mechanism proves far more effective than the first.

When one side clearly outmatches its competitor technically, the other has no choice but to counter with even more advanced technology or to "play with prices" on existing products. A clear example of the price game is the rivalry between Intel and AMD in central processors: after the Core 2 architecture was announced, AMD could not offer anything more advanced and, to avoid losing market share, had to cut its processor prices.

But there are examples of another kind. At one time ATI released the very successful X1000 family, which arrived at the right moment and won over a great many users; plenty of people still run cards like the Radeon X1950. NVIDIA had no worthy answer then, and ATI managed to knock NVIDIA out of the game for about half a year. But credit is due to the Californian engineers: before long they produced a fundamentally new technological solution, the G80 chip with unified processors. That chip remained the true flagship for a long time, returned the palm to the Californian company and brought ordinary users unprecedented gaming performance. What happened next? Nothing: ATI (by then under the AMD brand) could not create anything more powerful. Its R600 chip failed in many respects, forcing the Canadian company to keep cutting prices, and the lack of competition in the performance segment let NVIDIA relax, since there were no rivals anyway.

The release of a new flagship

Everyone interested in 3D graphics had long awaited a real update of the G80 architecture. Rumors about the next generation of chips always abounded, and some were later confirmed, but in 2007 we got only a minor architectural refresh in the form of G92-based solutions. All the cards released on those chips are quite good for their market segments; they cut the cost of powerful solutions and made them less demanding of power and cooling, but enthusiasts kept waiting for a full update. Meanwhile, AMD released updated products based on the RV670, which brought it some success.

But the development of the gaming industry, new powerful games like Crysis, forced both companies to develop new graphics chips. Only their goals were different: AMD’s main goal was to fight for lost market share, minimize production costs and provide productive solutions at reasonable prices, while NVIDIA had a goal to maintain technological leadership and demonstrate the fantastic performance of its chips.

Today we have the opportunity to examine in detail the result of one company's work: the most productive, most modern GT200 chip from NVIDIA, presented on June 17, 2008.

Technical details

Architecturally the GT200 has much in common with the G8x/G9x: the new chip took the best from them and added numerous improvements. Let's move on to the features of the new solutions.

Graphics accelerator GeForce GTX 280

  • chip code name GT200;
  • 65 nm technology;
  • 1.4 billion (!) transistors;
  • unified architecture with an array of common processors for stream processing of vertices and pixels, as well as other types of data;
  • hardware support for DirectX 10.0, including the shader model – Shader Model 4.0, geometry generation and recording of intermediate data from shaders (stream output);
  • 512-bit memory bus, eight independent controllers 64-bit wide;
  • core frequency 602 MHz (GeForce GTX 280);
  • ALUs run at more than double the core frequency: 1.296 GHz (GeForce GTX 280);
  • 240 scalar floating-point ALUs (integer and floating-point formats, FP support for 32-bit and 64-bit precision within the IEEE 754(R) standard, two MAD+MUL operations per clock);
  • 80 texture addressing and filtering units (as in G84/G86 and G92) with support for FP16 and FP32 components in textures;
  • the possibility of dynamic branches in pixel and vertex shaders;
  • 8 wide ROP blocks (32 pixels) with support for antialiasing modes up to 16 samples per pixel, including with FP16 or FP32 frame buffer format. Each block consists of an array of flexibly configurable ALUs and is responsible for generating and comparing Z, MSAA, and blending. Peak performance of the entire subsystem is up to 128 MSAA samples (+ 128 Z) per clock cycle, in the mode without color (Z only) – 256 samples per cycle;
  • recording results from up to 8 frame buffers simultaneously (MRT);
  • all interfaces (two RAMDAC, Dual DVI, HDMI, DisplayPort, HDTV) are integrated on a separate chip.

Reference graphics card specifications: NVIDIA GeForce GTX 280

  • core frequency 602 MHz;
  • universal processor frequency 1296 MHz;
  • number of universal processors 240;
  • number of texture blocks – 80, blending blocks – 32;
  • effective memory frequency 2.2 GHz (2*1100 MHz);
  • memory type GDDR3;
  • memory capacity 1024 MB;
  • memory bandwidth 141.7 GB/s;
  • theoretical maximum fill rate 19.3 gigapixels/s;
  • theoretical texture sampling speed up to 48.2 gigatexels/s;
  • two DVI-I Dual Link connectors, output resolutions up to 2560x1600 are supported;
  • dual SLI connector;
  • PCI Express 2.0 bus;
  • TV-Out, HDTV-Out, DisplayPort (optional);
  • power consumption up to 236 W;
  • two-slot design;
  • original suggested price $649.

Separately, note that the GeForce GTX 200 family does not support DirectX 10.1. The stated reason: when developing the new family, after consulting with partners, it was decided to focus not on support for the still little-used DirectX 10.1 but on improving the chips' architecture and performance.

The GeForce GTX 280 architecture has undergone many changes compared to the GeForce 8800 GTX and Ultra video cards:

  • The number of computing cores has been increased by 1.88 times (from 128 to 240).
  • The number of simultaneously executed threads has been increased by 2.5 times.
  • The maximum length of complex shader code has been doubled.
  • The accuracy of floating point calculations has been doubled.
  • Geometric calculations are performed much faster.
  • The memory capacity is increased to 1 GB, and the bus is increased from 384 to 512 bits.
  • Increased memory buffer access speed.
  • Improved internal chip connections between different blocks.
  • Improved Z-cull optimizations and compression to ensure less performance hit at high resolutions.
  • Supports 10-bit color depth.

Here is the main diagram of the GT200 chip:

Key Architectural Features of CUDA

Since the announcement of the Core 2 architecture and its triumphal march, it has become fashionable for developers to advertise not just product names but also the names of the architectures behind them. NVIDIA is no exception, actively promoting CUDA (Compute Unified Device Architecture), a computing architecture aimed at solving complex problems in the consumer, business and technical spheres, in any data-intensive application, using NVIDIA GPUs. The advantage of this approach is the enormous superiority of graphics chips over modern central processors in such tasks, by an order of magnitude or even two. The drawback is immediately apparent as well: special software has to be developed for it. Incidentally, NVIDIA runs a competition for software developers working with the CUDA architecture.

The GT200 video chip was developed with an eye to its active use in computing tasks using CUDA technology. In the so-called computational mode, the new video chip can be imagined as a programmable multiprocessor with 240 computing cores, built-in memory, random write and read capabilities, and a gigabyte of dedicated memory with high bandwidth. As NVIDIA says, in this mode, the GeForce GTX 280 turns an ordinary PC into a small supercomputer that provides almost teraflop speeds, which is useful for numerous scientific and applied tasks.
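A minimal CUDA program makes this "programmable multiprocessor" view concrete (our own sketch, not NVIDIA sample code): the host launches a grid of thousands of lightweight threads, and the hardware spreads them across the chip's streaming multiprocessors:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One thread per array element; the hardware schedules the blocks
// across the chip's streaming multiprocessors (240 scalar cores on GT200).
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;               // a million elements
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));
    cudaMemset(x, 0, n * sizeof(float)); // zero-fill just for the sketch
    cudaMemset(y, 0, n * sizeof(float));
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // 4096 blocks of 256 threads
    cudaDeviceSynchronize();
    cudaFree(x);
    cudaFree(y);
    puts("done");
    return 0;
}
```

Each thread here handles one element, and the same source scales to more cores without changes; that property is exactly what lets CUDA applications run across the whole GeForce line.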

A fairly large number of the most demanding tasks can be transferred from the CPU to the GPU using CUDA, and at the same time it will be possible to obtain a noticeable performance increase. The picture shows examples of the use of CUDA in real tasks, and provides figures showing the multiplicity of performance gains of the GPU compared to the CPU.

As you can see, the tasks are very diverse: video data transcoding, molecular dynamics, astrophysical simulations, financial simulations, image processing in medicine, etc. Moreover, the gains from transferring calculations to the video chip turned out to be about 20-140-fold. Thus, the new video chip will help speed up many different algorithms if they are transferred to CUDA.

One of the everyday applications of GPU calculations can be considered the transcoding of videos from one format to another, as well as the encoding of video data in the corresponding editing applications. Elemental completed the task of moving encoding to the GPU in its RapidHD application, yielding the following numbers:

The most powerful GPU GeForce GTX 280 performs excellently in this task; the speed increase compared to the fastest central processor is more than 10 times. Encoding a two-minute video took 231 seconds on the CPU and only 21 seconds on the GT200. It is important that the use of a GPU made it possible to achieve this task not just in real time, but even faster!

However, intensive computing using modern graphics cards is not new, but with the advent of the GeForce GTX 200 family of graphics processors, NVIDIA expects a significant increase in interest in CUDA technology.

From the point of view of CUDA technology, the new GeForce GTX 280 graphics chip is nothing more than a powerful multi-core (hundreds of cores!) processor for parallel computing.

NVIDIA PhysX

This is perhaps the most interesting aspect of the new NVIDIA video adapters for ordinary users, although it applies not only to the new GT200-based solutions but to all video cards of the GeForce 8 and GeForce 9 families.

In modern games, well-implemented physical interactions play an important role; they make games more engaging. Physics calculations are demanding, and the algorithms behind them are computationally intensive. For a long time these calculations ran only on central processors; then hardware physics accelerators from Ageia appeared, and although they never saw wide adoption, they noticeably revived activity in this market. Still, only a few enthusiast gamers could afford such an accelerator.

But everything changed when NVIDIA bought Ageia and with it all the know-how behind PhysX. The know-how specifically, since the hardware devices themselves did not interest NVIDIA. To the company's credit, it took the right course and adapted the PhysX physics engine to its CUDA architecture, so now every owner of a video card with this architecture gets hardware acceleration of in-game physics simply by updating the drivers.

Paired with a powerful video chip, PhysX can deliver many new effects: dynamic smoke and dust, cloth simulation, liquid and gas simulation, weather effects and so on. According to NVIDIA itself, the new GeForce GTX 280 cards can run PhysX workloads 10 or more times faster than quad-core processors. PhysX support is currently implemented in more than 150 games.

Improved power management technology

The new video chip features improved power management compared to the previous generation of NVIDIA chips. It dynamically adjusts the frequencies and voltages of GPU blocks according to their load and can partially power some blocks down. As a result the GT200 cuts idle consumption to about 25 watts, very low for a GPU of this class. The solution supports four operating modes:

  • idle or 2D mode (about 25 W);
  • HD/DVD video playback mode (about 35 W);
  • full 3D mode (up to 236 W);
  • HybridPower mode (about 0 W).

To gauge the load, the GT200 uses special blocks that analyze the data flows inside the GPU. Based on their readings, the driver dynamically selects the appropriate performance mode and sets frequency and voltage, optimizing both power consumption and heat output.

So much for the innovations and features: in this respect NVIDIA achieved its goal by delivering a genuinely new graphics chip. The second goal remains, proving superiority in performance. To that end we will examine the GT200 as embodied in a finished video card, test it, and measure all the power built into it against the previous generation's flagships and the competitors' solutions.

Video card on NVIDIA GeForce GTX 280

Having stirred up interest in the graphics accelerator, let's move directly to its review, testing, comparison and, of course, overclocking. But first, the specification once more, this time of a ready-made serial accelerator.

  • Manufacturer: ASUS
  • Name: ENGTX280/HTDP/1G/A
  • Graphics core: NVIDIA GeForce GTX 280 (G200-300-A2)
  • Pipelines: 240 unified streaming processors
  • Supported APIs: DirectX 10.0 (Shader Model 4.0), OpenGL 2.1
  • Core (shader domain) frequency, MHz: 602 (1296), reference clocks
  • Memory volume (type), MB: 1024 (GDDR3)
  • Memory frequency (effective), MHz: 1100 (2200), reference clocks
  • Memory bus: 512-bit
  • Bus standard: PCI Express 2.0 x16
  • Maximum resolution: up to 2560 x 1600 in Dual-Link DVI mode; up to 2048 x 1536 at 85 Hz via analog VGA; up to 1080i via HDTV-Out
  • Outputs: 2x DVI-I (2x VGA via adapters), TV-Out (HDTV, S-Video and Composite)
  • HDCP support: yes
  • HD video decoding: yes (H.264, VC-1, MPEG2 and WMV9)
  • Drivers: fresh drivers can be downloaded from the support site or the GPU manufacturer's website

The video card comes in a fairly large double cardboard box. Unlike the packaging of previous top-end accelerators, though, this one is slightly smaller and lacks a plastic handle; apparently ASUS has started saving on cardboard.

One side of the box still opens like a book, telling the buyer about the graphics accelerator's extreme capabilities and the company's proprietary technologies.

On the back of the box, besides a list of the card's general capabilities and bundled software, the minimum requirements for a system hosting the ASUS ENGTX280/HTDP/1G/A are carefully spelled out. The most interesting and critical part is the recommendation of at least a 550 W power supply capable of delivering up to 40 A on the 12 V rail. The PSU must also provide the required number of power connectors for the adapters to plug into.

The correct power supply diagram for the video card is also shown nearby. Please note that for the 8-pin connector an adapter is used from two 6-pin PCI Express, and not from a pair of peripheral ones, as could be seen earlier when installing AMD/ATI accelerators. Considering the power consumption of the GeForce GTX 280, you will have to approach the power supply more carefully.

Inside the colorful and informative cover, i.e. outer box, there is a completely black inner one, which in turn is divided into several more separate boxes and niches that accommodate the entire set.

The package is more than sufficient for full use of the accelerator and, in addition to the video adapter itself, includes:

  • two disks with drivers, utilities and an electronic version of the user manual;
  • paper guide for quick installation of the video card;
  • branded “leather” mouse pad;
  • branded folder for disks;
  • adapter from two Molex connectors (peripheral power) to a 6-pin PCI Express power connector;
  • adapter from two 6-pin PCI Express connectors to an 8-pin power connector;
  • 8-pin power connector extension;
  • 8-pin to 6-pin PCI Express adapter;
  • adapter from S-Video TV-Out to component HDTV-Out;
  • adapter from DVI to VGA.

The video card on the GeForce GTX 280 has the same dimensions as the accelerators on the NVIDIA GeForce 9800 GX2, and it is even similar in appearance to the NVIDIA GeForce 9800 GTX, when looking at the front part, which is completely hidden under the “familiar” cooling system. In general, approximately the same engineers were involved in the development of all these accelerators and their coolers, so the external similarity is not surprising.

Let us immediately note that it does not matter at all who is the final seller of the accelerator; top-end video cards are produced directly by NVIDIA itself at the production facilities of its partners. The final distributors are only involved in packaging ready-made accelerators and can only count on the opportunity to flash their proprietary BIOS, slightly overclock the video card, or replace the cooler with an alternative one.

The reverse side of the video card is now hidden behind a metal plate, which, as it turned out during disassembly, plays the role of a heatsink for the memory chips, now located on both sides of the printed circuit board.

On top of the video card, almost at the very edge, there are connectors for connecting additional power. Having a power consumption of up to 236 W, the accelerator requires reliable power supply, which is provided by one 6-pin PCI Express connector and one 8-pin connector, as on the dual-chip GeForce 9800 GX2.

Hidden under a rubber plug next to the power connectors is an SPDIF digital audio input, which should provide mixing of the audio stream with video data when using the HDMI output.

On the other side, also under the plug, there is a double SLI connector, which provides support for 3-Way SLI and allows you to build a computer with an incredibly powerful video system.

Two DVIs are responsible for image output, which can be converted to VGA or HDMI using adapters, as well as TV-Out with HDTV support. Next to the TV output connector, near the heated air outlets, there is a video card power indicator that displays its current status.

Under the cooling system there is a printed circuit board, which in many ways resembles previous top-end solutions on the G80 (for example, GeForce 8800 Ultra), only now, due to the increase in video memory to 1 GB, the chips are located on both sides of the printed circuit board and are not so dense. Plus, the power system has been strengthened to ensure the operation of such a powerful accelerator.

The main consumer of electricity is the NVIDIA G200-300 chip of the second revision, which is called the GeForce GTX 280. It contains 240 unified stream processors operating at a clock frequency of 1296 MHz, while the rest of the core operates at 602 MHz. Data exchange with video memory is carried out via a 512-bit bus. This GPU is capable of delivering incredible graphics performance, but it contains no display input/output logic of its own.

A separate NVIO2 chip is responsible for all inputs and outputs, and its location “far” from the main processor should eliminate interference and noise, providing an excellent image even on analog monitors.

Hynix products are used as memory chips. The microcircuits have an operating voltage of 2.05 V and an access time of 0.8 ns, i.e. they are rated for video memory operation at an effective frequency of up to 2200 MHz, and on this card the memory operates at practically that frequency.

Let's talk separately about the cooler. The cooling system has a design familiar to NVIDIA and occupies the expansion slot adjacent to the video card, ensuring the removal of heated air outside the case.

It is interesting to note that not only the aluminum radiator plates, but also the entire cooler body are responsible for heat removal, which is clearly visible from the connection of the heat pipes to it. Therefore, ventilating the video card in any convenient way can provide a noticeable improvement in its temperature. And few owners of this “hot monster” will be able to avoid thoughts about improving cooling. Even a short serious load on the video card causes the turbine to spin up to a maximum of 1500 rpm, which noticeably disrupts acoustic comfort. But even this does not relieve the accelerator from significant heating.

In a closed, well-ventilated case, the GPU temperature exceeded 100°C, and the air blown out by the cooling system suggested that NVIDIA should have introduced this GPU not in summer but closer to winter, so that the buyer of a very expensive accelerator could at least save on heating.

To prevent the video card from overheating, we had to open the case and point a household fan at it; this reduced the GPU temperature by 14 degrees and the temperature of the whole card by 9 degrees. It was in this position that all tests and subsequent overclocking were carried out. But with the case open, the stock cooler seemed even a little louder.

But in the absence of a 3D load, the temperature of the video card drops significantly, helped by an additional reduction in operating frequencies and voltage: in 2D mode the video card consumes about 200 W less. This also allows the cooler's turbine to spin more slowly, making it almost silent.

During testing, we used Video Card Test Stand No. 1.


Among single-chip accelerators, the solution based on the NVIDIA GeForce GTX 280 undoubtedly occupies the leading position, but the ASUS ENGTX280/HTDP/1G/A does not always outperform dual-chip accelerators and multi-GPU configurations built from previous-generation cards, especially when paired with a processor that is not the most powerful.

Dual-core versus quad-core processor

What will be the benefit of using a more powerful processor, such as a quad-core one? It is these processors that are now often recommended to owners of high-performance video cards.

In order to check whether a quad-core processor would be preferable, we replaced the Intel Core 2 Duo E6300 @2800 with an Intel Core 2 Quad Q9450 @2800.

[Table: per-test results with the Intel Core 2 Duo E6300 @2800 and the Intel Core 2 Quad Q9450 @2800, plus the productivity gain, %]

As you can see, there is indeed a performance increase on a quad-core processor, and sometimes quite a lot, but it is in high resolutions, for which expensive video cards are bought, that the acceleration is the least.

Intel Core 2 Quad vs AMD Phenom X4

Another frequently voiced recommendation for configuring a high-performance gaming system is a preference for Intel processors as the faster ones. Well, let's check in practice how much slower a gaming system based on an AMD Phenom X4 processor will actually be, if it is slower at all.

To compare under equal conditions, we overclocked the AMD Phenom X4 9850 Black Edition processor to 2.8 GHz, which is easy to do simply by raising the multiplier, and ran a series of tests on the new ASUS M3A32-MVP DELUXE/WIFI-AP platform. The RAM worked in DDR2-800 mode with the same timings as on the system with the Intel Core 2 Quad Q9450 processor.

[Table: AMD Phenom X4 9850 @2800 vs Intel Core 2 Quad Q9450 @2800, performance difference in %, for Serious Sam 2 (Maximum Quality, AA4x/AF16x), Call of Juarez (Maximum Quality, with and without AA4x/AF16x), Prey (Maximum Quality, AA4x/AF16x) and Crysis (Maximum Quality, with and without AA4x/AF16x), fps]

So, at the same clock speeds, a system with an Intel Core 2 Quad processor does turn out to be somewhat faster than one with an AMD Phenom X4. At the same time, the higher the resolution and the greater the image-quality requirements, the smaller Intel's lead. Of course, a buyer of the most expensive and fastest video card is unlikely to save on the processor and motherboard, but in other circumstances we would not say “definitely Intel Core 2 Quad”; we would suggest carefully weighing the options for systems with processors from both AMD and Intel.

Overclocking

To overclock the video card, we used the RivaTuner utility, while, as noted above, the case was open, and an additional flow of fresh air to the video card was provided by a household fan.

As a result of overclocking, the raster domain frequency rose to 670 MHz, which is 70 MHz (+11.67%) above the default value. Overclocking of the shader domain turned out slightly better: its frequency increased by 162 MHz (+12.5%) relative to the default. But the memory overclock exceeded all expectations: stable operation was achieved at an effective frequency of almost 2650 MHz, which is 430 MHz (+19.5%) above nominal. We note the excellent overclocking potential of the tested accelerator, especially its video memory.
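For reference, the arithmetic behind the quoted figures, taking the nominal values of roughly 600, 1296 and 2214 MHz used above:

    raster domain:  600 + 70   = 670 MHz;   70 / 600   = +11.7%
    shader domain:  1296 + 162 = 1458 MHz;  162 / 1296 = +12.5%
    memory (eff.):  2214 + 430 = 2644 MHz;  430 / 2214 = +19.4%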

Now let's see how overclocking a single video card affects performance:

[Table: performance at standard frequencies vs the overclocked video card, plus the productivity gain in %, for Serious Sam 2, Call of Juarez, Prey and Crysis (all at Maximum Quality, AA4x/AF16x), fps]

Only in the heaviest video modes will you see a performance gain from overclocking, a result that was quite predictable. At the same time, it is fair to note that in almost all tests the processor became the limiting factor. Still, we will not insist that owners of NVIDIA GeForce GTX 280 accelerators buy only the fastest processors, because even with a dual-core processor running at 2.8 GHz, or perhaps less, you can play almost any game quite comfortably at the highest settings and in high resolutions; in such conditions you can even see a gain from overclocking. But of course, if possible, you shouldn't skimp on the processor when you haven't skimped on the video card and power supply.

Conclusions

We have to admit that all video cards based on the GeForce GTX 280 are today the most powerful single-chip graphics accelerators, capable of providing sufficient performance in any modern game. But, on the other hand, these are also the most expensive modern video cards, the most demanding of their power supply and, in general, the most “gluttonous” and hot. In other words, the GeForce GTX 280 turned out to be “the most” in every respect, both good and bad.

We speak about GeForce GTX 280 accelerators in general, although the hero of this review is the ASUS ENGTX280/HTDP/1G/A, because most of them are identical reference samples, differing from each other only in stickers, bundle and packaging. So, choosing a GeForce GTX 280 from ASUS, the buyer gets an expanded bundle with a couple of branded bonuses and a wide network of service centers, but otherwise no advantage over competitors' offerings.

Advantages:

  • very high performance in gaming applications;
  • DirectX 10.0 (Shader Model 4.0) and OpenGL 2.1 support;
  • support for NVIDIA CUDA and NVIDIA PhysX technologies;
  • support for 3-Way SLI technology;
  • good overclocking potential.

Flaws:

  • the cooling system occupies 2 slots, is not particularly efficient and is uncomfortably noisy;
  • quite high cost of the graphics accelerator.

We express our gratitude to LLC PF Service (Dnepropetrovsk) for the video card provided for testing.

When writing this article, materials from http://www.ixbt.com/ were used.


However, first, about the technical features. Being a logical development of the GeForce 8 and GeForce 9 series, which represented the first generation of NVIDIA's unified visual computing architecture, the new GeForce GTX 200 family products are based on the second generation of this architecture.

The NVIDIA GeForce GTX 280 and 260 GPUs are the largest and most complex graphics chips ever created - no joke, 1.4 billion transistors each! The top solution is the GeForce GTX 280, with 240 shader processors, 80 texture processors and support for up to 1 GB of video memory. Detailed characteristics of the GeForce GTX 280 and GeForce GTX 260 chips are shown in the table below.

NVIDIA GeForce GTX 280 and GTX 260 Specifications

                                            GeForce GTX 280          GeForce GTX 260
  Graphics core                             GT200                    GT200
  Process technology                        65 nm                    65 nm
  Number of transistors                     1.4 billion              1.4 billion
  Graphics clock (dispatcher, texture
  modules and ROPs)                         602 MHz                  576 MHz
  Clock frequency of processor modules      1296 MHz                 1242 MHz
  Number of processor modules               240                      192
  Memory clock (frequency / data)           1107 / 2214 MHz          999 / 1998 MHz
  Memory interface width                    512-bit                  448-bit
  Memory bus bandwidth                      141.7 GB/s               111.9 GB/s
  Memory                                    1024 MB                  896 MB
  Number of ROP modules                     32                       28
  Number of texture filtering modules       80                       64
  Texture filtering performance             48.2 Gigatexels/s        36.9 Gigatexels/s
  HDCP support                              Yes                      Yes
  HDMI support                              Yes (DVI-HDMI adapter)   Yes (DVI-HDMI adapter)
  Interfaces                                2 x Dual-Link DVI-I, 1 x 7-pin HDTV (both)
  RAMDAC                                    400 MHz                  400 MHz
  Bus                                       PCI Express 2.0          PCI Express 2.0
  Form factor                               Two slots                Two slots
  Power connector configuration             1 x 8-pin + 1 x 6-pin    2 x 6-pin
  Maximum power consumption                 236 W                    182 W
  GPU temperature limit                     105 °C                   105 °C

In fact, the modern graphics core of the GeForce GTX 200 family can be thought of as a universal chip that supports two different modes - graphics and computing. The architecture of the GeForce 8 and 9 family chips is usually described as a scalable processor array (SPA). The GeForce GTX 200 family is based on a modified and improved SPA architecture, consisting of a number of so-called texture processing clusters (TPCs) in graphics mode, which act as thread processing clusters in parallel-computing mode. Each TPC module consists of an array of streaming multiprocessors (SMs), and each SM contains eight processor cores, also called stream processors (SPs) or thread processors (TPs). Each SM also includes texture filtering processors, used for graphics mode and for various filtering operations in compute mode. Below is the block diagram of the GeForce GTX 280 in traditional graphics mode.

When the chip switches to compute mode, a hardware thread manager (shown at the top of the diagram) distributes the threads among the TPCs.

A closer look at a TPC cluster shows local (shared) memory attached to each SM; each processor core of an SM can exchange data with the other cores of that SM through this shared memory, without having to access the external memory subsystem.
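As a minimal CUDA illustration of this point (our own sketch, not NVIDIA's code): each thread block executes on a single SM, and its threads combine partial results through on-chip __shared__ memory, touching external memory only once per block.

    // Block-level sum reduction: the threads of one block (running on one SM)
    // cooperate through on-chip shared memory; only the final per-block result
    // is written out to external (global) memory.
    __global__ void blockSum(const float* in, float* blockResults, int n) {
        __shared__ float partial[256];              // per-SM on-chip memory

        int tid = threadIdx.x;
        int i = blockIdx.x * blockDim.x + tid;
        partial[tid] = (i < n) ? in[i] : 0.0f;      // stage data on chip
        __syncthreads();

        // tree reduction performed entirely in shared memory
        for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
            if (tid < stride)
                partial[tid] += partial[tid + stride];
            __syncthreads();
        }

        if (tid == 0)
            blockResults[blockIdx.x] = partial[0];  // one global write per block
    }

    // launched, for example, as: blockSum<<<numBlocks, 256>>>(in, out, n);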

Thus, NVIDIA's unified shader and compute architecture uses two different computing models: the TPCs operate as MIMD (multiple instruction, multiple data), while each SM computes in SIMT (single instruction, multiple thread) fashion, a development of SIMD (single instruction, multiple data). As for general characteristics, compared to previous generations of chips, the GeForce GTX 200 family has the following advantages:

  • Ability to process three times more data streams per unit of time
  • New design of the command scheduler, with 20% increased texture processing efficiency
  • 512-bit memory interface (384-bit for previous generation)
  • Optimized z-sampling and compression process to achieve better performance results at high screen resolutions
  • Architectural improvements to improve shadow processing performance
  • Full-speed frame buffer blending (versus half-speed on the 8800 GTX)
  • Double the command buffer for improved computational performance
  • Double the number of registers for faster processing of long and complex shaders
  • Double-precision floating-point support in accordance with the IEEE 754R standard
  • Hardware support for 10-bit color space (DisplayPort only)
This is the list of the main characteristics of the new chips:
  • NVIDIA PhysX support
  • Microsoft DirectX 10, Shader Model 4.0 support
  • Support for NVIDIA CUDA technology
  • PCI Express 2.0 bus support
  • GigaThread technology support
  • NVIDIA Lumenex engine
  • 128-bit floating point (HDR)
  • OpenGL 2.1 support
  • Dual Dual-link DVI support
  • Supports NVIDIA PureVideo HD technology
  • NVIDIA HybridPower technology support
It is separately noted that DirectX 10.1 is not supported by the GeForce GTX 200 family. The reason given is that, when developing the new family, after consultations with partners it was decided to focus not on support for DirectX 10.1, which is still in little demand, but on improving the architecture and performance of the chips.

Built on a suite of physics algorithms, NVIDIA PhysX technology is a powerful physics engine for real-time computation. Currently, PhysX support is implemented in more than 150 games. Combined with a powerful GPU, the PhysX engine provides a significant increase in physics computing power, especially for effects such as explosions with flying dust and shrapnel, characters with complex facial animation, new types of weapons with fantastic effects, realistically worn or torn fabrics, and fog and smoke that dynamically flow around objects.

Another important innovation is the new energy-saving modes. Thanks to the 65 nm process technology and new circuit solutions, more flexible and dynamic control of power consumption was achieved: consumption of the GeForce GTX 200 family in standby or 2D mode is about 25 W; when playing a Blu-ray or DVD movie, about 35 W; and under full 3D load the TDP does not exceed 236 W. The GeForce GTX 200 graphics chip can be disabled completely thanks to HybridPower support on motherboards with nForce chipsets with integrated graphics (for example, nForce 780a or 790i); low-intensity graphics work is then simply handled by the GPU integrated into the motherboard. In addition, GPUs of the GeForce GTX 200 family have special power-control modules that turn off GPU units not currently in use.

The user can configure a system with two or three GeForce GTX 200 family video cards in SLI mode on motherboards based on the corresponding nForce chipsets. In traditional standard SLI mode (with two video cards), an increase of roughly 60-90% in gaming performance is claimed; 3-way SLI promises the maximum number of frames per second at the highest screen resolutions.

The next innovation is support for the new DisplayPort interface with resolutions above 2560 x 1600 and a 10-bit color space (previous GeForce generations had internal support for 10-bit data processing, but output only 8-bit RGB component color).

As part of the announcement of the GeForce GTX 200 family, NVIDIA offers a completely new view of the roles of the central and graphics processors in a modern, balanced desktop system. Such an optimized PC, built on the concept of heterogeneous computing (processing a stream of heterogeneous tasks of different types), has, according to NVIDIA experts, a much more balanced architecture and significantly greater computing potential. What is meant is the combination of a central processor of relatively moderate performance with the most powerful graphics card, or even an SLI system, which allows peak performance to be achieved in the most demanding games and 3D and media applications. In other words, the concept can be briefly formulated as follows: the central processor takes on service functions, while the burden of heavy computation falls on the graphics system. Approximately the same conclusions (though more complex and numerically substantiated) are drawn in our series of articles on the dependence of performance on key system components; see "Processor dependence of a video system. Part I - Analysis", "Processor dependence of the video system. Part II - Impact of CPU cache size and RAM speed", "Bot addiction, or why 3D games need a powerful CPU", and "Processor dependence of the video system. Transition region. The 'critical' point of CPU frequency".

Intensive computing on graphics cards is not new in itself, but with the arrival of the GeForce GTX 200 family NVIDIA expects a significant increase in interest in CUDA technology. CUDA (Compute Unified Device Architecture) is a computing architecture aimed at solving complex problems in the consumer, business and technical spheres - in any data-intensive applications using NVIDIA GPUs. From the point of view of CUDA, the new GeForce GTX 280 graphics chip is nothing less than a powerful multi-core (hundreds of cores!) processor for parallel computing. As stated above, the graphics core of the GeForce GTX 200 family supports both graphics and compute modes. In compute mode, the same GeForce GTX 280 turns into a programmable multiprocessor with 240 cores and 1 GB of dedicated memory - a kind of dedicated supercomputer with teraflop-class performance, which significantly increases efficiency in applications that parallelize well, such as video encoding and scientific computing. GPUs of the GeForce 8 and 9 families were the first on the market to support CUDA; more than 70 million of them have been sold, and interest in the CUDA project is constantly growing. You can learn more about the project and download the files needed to get started on NVIDIA's website. Independent users of CUDA technology report substantial gains in computational performance.
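To give a taste of what this looks like in practice, here is a minimal CUDA sketch of our own (the kernel and names are illustrative, not taken from NVIDIA's materials): a trivial kernel that roughly a million lightweight threads execute in parallel, one array element each, while the hardware scheduler spreads them across the 240 stream processors.

    #include <cuda_runtime.h>

    // Each GPU thread processes one array element: y[i] = a * x[i] + y[i].
    __global__ void saxpy(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;                 // ~1 million elements
        float *x, *y;
        cudaMalloc(&x, n * sizeof(float));     // allocate on the video card
        cudaMalloc(&y, n * sizeof(float));
        // ... fill x and y with data (omitted) ...
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // launch ~1M threads
        cudaDeviceSynchronize();               // wait for the GPU to finish
        cudaFree(x);
        cudaFree(y);
        return 0;
    }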

To summarize our brief examination of the architectural and technological improvements implemented in the new generation of NVIDIA GPUs, we will highlight the main points. The second generation of unified visual computing architecture featured in the GeForce GTX 200 family is a significant improvement over the previous generations of GeForce 8 and 9.

Compared to the previous leader, the GeForce 8800 GTX, the new flagship GeForce GTX 280 has 1.88 times more processor cores; it can process approximately 2.5 times more threads per chip; it has double the register file size and support for double-precision floating-point calculations; it supports 1 GB of memory on a 512-bit interface; and it is equipped with a more efficient command manager, improved communication between chip elements, an improved Z-buffer and compression module, support for a 10-bit color palette, and so on. For the first time, a new generation of chips - the GeForce GTX 200 - is positioned from the outset not only as a powerful 3D graphics accelerator but also as a serious computing solution for parallel calculations. It is expected that GeForce GTX 280 video cards with 1 GB of memory will appear in retail at a price of about $649, while products based on the GeForce GTX 260 with 896 MB of memory will cost about $449 (or even $399). How closely the recommended prices match real retail will become clear very soon, since by all accounts the announcement of the GeForce GTX 200 family is by no means a "paper" one: solutions based on these chips have been announced by many NVIDIA partners, and new products will appear on shelves in the very near future. Now let's move on to the description of the first GeForce GTX 280 video card that arrived in our laboratory, and to the results of its testing.

General characteristics

Video card type

Modern video adapters can be divided into three classes, which will determine the performance and cost of the video card: budget, business class and top models. Budget cards aren't too expensive, but they won't allow you to play modern, resource-demanding games. Business class models will allow you to play all modern games, but with restrictions on image resolution, frame rate and other parameters. Top models give you the opportunity to play the most advanced games with maximum quality.

GPU: NVIDIA GeForce GTX 280

Interface

The type of slot in which the video card is installed; data is exchanged between the video card and the motherboard through it. When choosing a video card, you must proceed from which slot your motherboard uses. The most common types of video card connection are AGP, PCI-E 16x and PCI-E 1x.

  • Interface: PCI-E 16x 2.0
  • GPU codename: GT200
  • Process technology: 65 nm
  • Number of monitors supported: 2
  • Maximum resolution: 2560 x 1600

Specifications

GPU frequency

The frequency of the GPU largely determines the performance of the video system. However, as the processor frequency increases, so does its heat dissipation, which is why modern high-performance video systems need a powerful cooling system that takes up extra space and often creates considerable noise in operation.

  • GPU frequency: 602 MHz
  • Shader unit frequency: 1296 MHz
  • Video memory capacity: 1024 MB
  • Video memory type: GDDR3
  • Video memory frequency: 2210 MHz
  • Video memory bus width: 512 bit
  • RAMDAC frequency: 400 MHz
  • SLI/CrossFire mode support: yes
  • 3-Way SLI support: yes

Connection

  • Connectors: TV-out, component output; HDCP support

Math block

  • Number of universal processors: 240

Shader version

Shaders are microprograms that allow effects such as metallic shine, water surfaces, realistic volumetric fog, all kinds of object deformation, motion blur and so on to be reproduced. The higher the shader version, the more capabilities the video card has for creating special effects.

  • Shader version: 4.0
  • Number of texture units: 80
  • Number of rasterization (ROP) blocks: 32
  • Maximum degree of anisotropic filtering: 16x

What is the most important component of a gaming computer? Some will answer the processor, others will say the RAM, but whoever names the video card will be right. It is this component that is responsible for high-quality output of graphics, and the more powerful the adapter, the greater the chance of seeing all the "beauties" the game's creators intended. In its day the GTX 280 offered the highest performance and could unleash a game's full potential. One caveat: this card is built to chase high frame rates rather than extra image-quality features, which for most gamers is hardly a drawback. Let's take a closer look at this interesting adapter, but first a few words about the manufacturer.

A little about the company

NVIDIA is one of the most famous manufacturers of graphics accelerators. The company was founded in 1993 and initially produced only chips and logic sets for computers, but over time it mastered the production of high-performance graphics adapters. At the very beginning of its life, NVIDIA was almost the only company producing products of this kind; the well-known AMD then dealt only with processors. GeForce series video cards have become legendary, and now every self-respecting gamer dreams of a top-end NVIDIA video card, though not everyone can afford one. The latter statement, however, no longer applies to our today's hero, the GeForce GTX 280.

Today there is constant competition between adapters from NVIDIA and similar products from AMD. The latter lag in performance and energy efficiency, but cost noticeably less. Nevertheless, top-end NVIDIA video cards are deservedly popular among professionals and gaming enthusiasts, offering excellent picture quality and high frame rates. The company's success in video cards is truly phenomenal. But let's get on with the review: our hero today is the GTX 280, a fine video card for a home computer. Let's take a closer look at it.

Lyrical digression

Don't forget that the GTX 280 was released back in 2008, so comparing it with modern products makes little sense: the performance would be incomparable. Today this adapter is interesting precisely as an inexpensive, simple solution for a home computer intended for work. The card easily handles Full HD video and undemanding games (even some modern ones), but you shouldn't count on running World of Tanks at maximum graphics settings.

Of course, with the help of overclocking (the card does support it) and some tweaks, you can get the adapter to run modern gaming hits. But bear in mind that without a good cooling system this is risky, and the video adapter will wear out faster than it should. It is therefore better to stick to normal mode and not overload the video card. With that said, let's move on to the device's technical characteristics and try to understand what was innovative about it at the time of release.

Specifications

The video card is based on a chip made using 65 nm technology. What does that mean? It means that roughly twice as many transistors fit into the same die area as with the previous-generation process, which helps keep the chip's size and power consumption in check and eases the heat-dissipation problem. But that concerns the physical parameters; what about the other characteristics? The graphics core clock speed is 602 MHz. This is a very important parameter: many people believe that the "coolness" of an adapter depends on the amount of memory, but in fact the operating frequency has a far greater effect on performance, and here it is quite decent. The memory interface width is 512 bits, and the resulting bus throughput is about 141.7 GB/s (gigabytes, not gigabits, per second). This means the GeForce GTX 280 can move data many times faster than its predecessors.
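That figure is easy to verify from the bus width and the effective memory clock (about 2214 MHz, as quoted in the specification table earlier):

    512 bit / 8 = 64 bytes per transfer
    64 bytes × 2214 MHz ≈ 141.7 GB/s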

Now about what inexperienced users pay special attention to: the amount of memory. It is 1 gigabyte. Is that a lot or a little? As already mentioned, it does not affect performance as much as the operating frequency does, though having memory in reserve never hurts; a gigabyte is enough for comfortable work with ordinary everyday tasks. Among other things, the video card supports SLI technology and connects via a PCI Express 2.0 slot, so it will suit the vast majority of computers, despite its "rarity". Now let's look at the main interfaces of the GeForce GTX 280.

Connection interfaces

So how do you connect a monitor to this video card? Note that there is no trace of a VGA socket here: older monitors using that standard can be connected only through an adapter. The NVIDIA GeForce GTX 280 video card has just two DVI connectors. Connection via HDMI is also possible, although only a very early version of that standard is supported. Nevertheless, even this modest set of interfaces is enough to connect any monitor. Another feature is the SLI connector, which allows two identical video cards to be combined using NVIDIA's SLI technology (not to be confused with AMD's CrossFire). In 2008 this pairing of two adapters was a strong selling point, and it greatly increases performance.

Many users complain that this card supports only an extremely old version of HDMI, but do not forget that the adapter was released in 2008, when current revisions simply did not exist. For its age, the adapter is equipped with a perfectly good set of interfaces and connectors, and if you need to connect something else, adapters and splitters are always available; nobody forbids it, and the card's design allows it. And now a little about the various modifications of this video card.

Analogues from MSI

The MSI GeForce GTX 280 differs from the "original" only in being slightly overclocked: the effective operating frequency is raised to 1500 MHz, which yields somewhat higher performance. The cooling system has also been completely redesigned, making heat removal more efficient. There are noticeable changes in design as well: the card is clad in a modern plastic shroud in MSI's brand colors, and only a small nameplate indicates that an NVIDIA chip is inside.

Incidentally, cards from MSI are durable, which cannot be said of adapters from some other manufacturers. According to users, these video cards last longer, even when they have been overclocked - and there are plenty of enthusiasts of that pastime in our latitudes. As for price, these adapters belong to the middle segment.

Analogues from Zotac

This company is known for turning stock versions of video cards into real works of art and greatly increasing their performance. Thus, the Zotac GeForce GTX 280 can give even modern entry-level video cards a run for their money. Zotac's engineers managed not only to raise the bus frequency considerably but also to expand the amount of onboard memory. Naturally, such changes required a radical redesign of the cooling system, and here it is genuinely effective.

Zotac produces top-end video cards with overclocking headroom, so at the moment they remain relevant, especially the factory-overclocked copies. The beauty of these adapters is that their cooling system lets you experiment with overclocking without worrying too much about the consequences - a real find for geeks and gamers. The price of these models is, of course, to match: you certainly won't buy them for pennies, but the user knows exactly what he is paying for.

Analogues from Palit

This company takes an adapter from a well-known manufacturer, adds a few branded "goodies" and sells it as its own product. There is nothing special about the Palit GeForce GTX 280: performance remains at the reference level, as does the amount of onboard memory. The only real innovation is the cooling system, along with noticeable changes in design - perhaps the only thing the company worked hard on. In general, Palit's card is the same NVIDIA board in different colors.

Cards from Palit are the cheapest in our overview. They earn plenty of praise, but roughly as many negative reviews: most users complain about extremely low "survivability", making these the least reliable adapters around. The price, naturally, reflects this, and many choose this option precisely because of the reasonable cost.

What games will run on this adapter?

It is impossible to answer this question in one sentence; it all depends on the graphics settings. Modern titles will not run at ultra settings, and you cannot count on medium ones either, but low settings are quite feasible on the GeForce GTX 280 1GB. For example, World of Tanks shows a stable 50 frames per second on this card at minimum graphics settings - not bad at all - and with finer tuning a comfortable 60 FPS is achievable. World of Warcraft: Mists of Pandaria easily produces anywhere from 70 to 120 frames per second, naturally at minimum graphics settings. The best results, however, come with games from the card's own year, 2008: if a game's recommended system requirements list something like "NVIDIA GeForce GTX 280 series or higher" in the video card column, you can safely run it at maximum graphics settings. Beautiful pictures and excellent performance are guaranteed.

User's Manual

Usually this "paper" comes with the video adapter, in the box along with the card itself. If you are using a modification from another manufacturer (for example, the Palit GeForce GTX 280), the instructions may instead be on the disk with the drivers and software. The manual contains all the information about the card and comprehensive explanations of what to connect where, all in Russian with an adequate translation - a sign that the company values its reputation and treats customers decently, whatever country they are from.

Of course, a paper manual is preferable, since it is not always possible to run the disk. But even without instructions it is clear how to connect the video card; even a child can handle it. The instructions on the disk are more complete, though, and come with fairly extensive multimedia material in which everything is explained step by step. If the user has a laptop or a second computer, it is better to study the necessary information directly from the disk.


