3D computer graphics requires considerable processing power and a large amount of memory, and until late 1995 3D acceleration was found only on a small number of high-end products. Its target applications were high-end rendering packages, supporting engines such as Intel’s 3DR and Silicon Graphics’ OpenGL. Then the advent of higher-clocked Pentiums, coupled with the mass acceptance of the PC as a viable games system and growing interest in virtual reality, created a market for cheap 3D acceleration.
The earliest attempts at 3D accelerators were failures. They were slower than conventional GUI accelerators under Windows, and performed poorly in DOS at a time when most PC games ran on that platform. The main problem was software support: with 32-bit games consoles on the rise, the quantity and quality of titles available for the 3D cards were poor.
Attitudes changed when Microsoft threw its weight behind DirectX, enhancing Windows 95 as a multimedia platform, and the 3D phenomenon really took off during 1997, when sales of 3D graphics chips exceeded 42 million, up from 16 million the previous year. By 2000 that number had grown to more than 140 million – almost a tenfold increase in sales over a four-year period.
A graphics chip, whether it’s dedicated to 3D or a dual-purpose 2D/3D chip, takes the bulk of the load off the CPU and performs the rendering of the image itself. All of this rendering, or drawing, is accomplished through the graphics pipeline in two major stages: geometry and rendering. The geometry stage, performed by the CPU, handles all polygon activity and converts the 3D spatial data into 2D screen co-ordinates. The rendering stage, handled by the 3D hardware accelerator, manages all the memory and pixel activity and prepares the result for painting to the monitor.
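As a rough illustration of that split, the minimal C sketch below shows the kind of arithmetic the geometry stage performs: transforming a vertex and projecting it onto a 2D screen. The vertex structures, the 640×480 screen size and the focal-length value are illustrative assumptions, not any particular card’s API.

```c
#include <stdio.h>

/* Minimal sketch of the geometry stage: transform a 3D vertex and
   project it to screen space. Names and the 640x480 screen size are
   illustrative assumptions, not a real graphics API. */

typedef struct { float x, y, z; } Vec3;

/* 4x4 row-major transform matrix (world * view), assumed already built. */
typedef struct { float m[4][4]; } Mat4;

static Vec3 transform(const Mat4 *t, Vec3 v)
{
    Vec3 r;
    r.x = t->m[0][0]*v.x + t->m[0][1]*v.y + t->m[0][2]*v.z + t->m[0][3];
    r.y = t->m[1][0]*v.x + t->m[1][1]*v.y + t->m[1][2]*v.z + t->m[1][3];
    r.z = t->m[2][0]*v.x + t->m[2][1]*v.y + t->m[2][2]*v.z + t->m[2][3];
    return r;
}

/* Simple perspective projection onto a 640x480 screen. */
static void project(Vec3 v, float focal, int *sx, int *sy)
{
    *sx = (int)(320.0f + focal * v.x / v.z);   /* screen centre + scaled x/z   */
    *sy = (int)(240.0f - focal * v.y / v.z);   /* y flipped: screen y grows down */
}

int main(void)
{
    Mat4 identity = {{{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}}};
    Vec3 corner = { 1.0f, 1.0f, 5.0f };        /* a cube corner in view space */
    int sx, sy;

    Vec3 viewSpace = transform(&identity, corner);
    project(viewSpace, 256.0f, &sx, &sy);
    printf("screen position: (%d, %d)\n", sx, sy);
    return 0;
}
```

In a purely software pipeline the CPU repeats this for every vertex of every polygon in the frame; the rendering stage then fills in the pixels of the triangles those screen points define.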
For a brief period, the only way a PC user could have access to 3D acceleration was via an add-on card that worked alongside a conventional 2D card. The latter was used for day-to-day Windows computing, and the 3D card only kicked in when a 3D game was launched. As 3D capability rapidly became the norm, these 3D-only cards were supplanted by cards with dual 2D/3D capability.
These 2D/3D combo cards combine standard 2D functions and 3D acceleration on a single card, and represent the most cost-effective solution for most gamers. Almost all modern graphics cards have some kind of dedicated 3D acceleration, but their performance varies widely. For the serious gamer, or those who already have a 2D card and want to upgrade to 3D, there remains the option of a dedicated 3D add-on card.
Handling the various 3D rendering techniques involves complex calculations which stretch a CPU’s capabilities. Even with dedicated 3D accelerators performing many of the functions described above, the CPU remains the biggest bottleneck to better graphics. The main reason is that the CPU still handles most of the geometry calculations – working out the position of every vertex of every polygon before the accelerator can produce the filtered, mip-mapped, bump-mapped and anti-aliased pixels that appear on-screen. With current 3D accelerators spewing out over 100 million pixels per second, keeping pace is beyond even the fastest CPUs, and the 3D accelerator literally has to wait for the CPU to finish its calculations.
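To get a feel for the scale of the problem, the back-of-envelope calculation below is a hypothetical sketch: the average triangle size and per-vertex cost are assumptions, not measurements, while the 100 million pixels per second is the figure quoted above.

```c
#include <stdio.h>

/* Back-of-envelope estimate of the geometry workload needed to keep a
   100 Mpixel/s accelerator busy. All scene figures are assumptions. */
int main(void)
{
    double pixels_per_sec   = 100e6;  /* accelerator fill rate              */
    double pixels_per_tri   = 50.0;   /* assumed average triangle size      */
    double verts_per_tri    = 3.0;
    double flops_per_vertex = 100.0;  /* rough transform + lighting cost    */

    double tris_per_sec  = pixels_per_sec / pixels_per_tri;
    double verts_per_sec = tris_per_sec * verts_per_tri;
    double flops_needed  = verts_per_sec * flops_per_vertex;

    printf("triangles/s: %.0f\n", tris_per_sec);    /* ~2 million   */
    printf("vertices/s:  %.0f\n", verts_per_sec);   /* ~6 million   */
    printf("FLOP/s:      %.0f\n", flops_needed);    /* ~600 million */
    return 0;
}
```

On those deliberately rough assumptions, geometry alone calls for several hundred million floating-point operations per second before the CPU has done any game logic at all – which is why the accelerator ends up waiting.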
There are two very different ways of getting around this problem. The 3D-hardware manufacturers advocate the use of a dedicated geometry processor, which takes over the geometry calculations from the main CPU. For the processor manufacturers, on the other hand, this is the least acceptable solution, because once geometry processors become standard on graphics boards, only a mediocre CPU is needed for the remaining tasks, such as running the operating system and monitoring devices. Their response has been to boost the 3D performance of their CPUs with specialised instruction sets – Katmai New Instructions (KNI, later known as SSE) in the case of Intel, and 3DNow! in the case of AMD.
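Both instruction sets rely on the same idea: packing several single-precision values into one wide register and operating on them all at once. The sketch below is a minimal illustration using Intel’s SSE intrinsics (assuming a compiler and CPU that support them); it performs four floating-point additions with a single instruction.

```c
#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics (the instruction set KNI became) */

int main(void)
{
    /* Four x-coordinates and four x-translations, processed together. */
    __m128 xs     = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
    __m128 offset = _mm_set_ps(0.5f, 0.5f, 0.5f, 0.5f);

    __m128 moved = _mm_add_ps(xs, offset);   /* one instruction, four adds */

    float out[4];
    _mm_storeu_ps(out, moved);
    printf("%.1f %.1f %.1f %.1f\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```

Applied to the geometry stage, the same trick lets a vertex-transform loop process four values per instruction rather than one, which is where the claimed 3D speed-up comes from.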
The problem, however, is that in the longer term even the performance increase provided by these new MMX-style instructions appears insufficient to cope with the brute power of the new generation of 3D accelerator. Furthermore, most users – even gamers – do not upgrade their systems regularly and have CPUs which are relatively slow. Given this, dedicated geometry processors appeared to offer the best solution.
nVidia was first to market with a mainstream Graphics Processing Unit (GPU) in the autumn of 1999, its GeForce 256 chip having the hitherto unique ability to perform transform and lighting (T&L) calculations in hardware. Since these calculations are highly repetitive – the same set of instructions is performed millions of times per second – they’re a prime candidate for hardware acceleration. A dedicated engine can be optimised for the necessary mathematical functions, making it fairly simple to create an efficient, purpose-focused silicon design – and one that is capable of far outperforming the CPU’s efforts at the same tasks. Furthermore, off-loading the T&L functions to the GPU allows the main CPU to concentrate on other demanding processing aspects, such as real-time physics and artificial intelligence.
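In outline, the per-vertex work a T&L engine repeats endlessly looks something like the loop below – a hedged sketch in C with an assumed vertex layout, a stand-in rotation matrix and a single directional light, not a description of the GeForce 256’s actual pipeline.

```c
#include <math.h>
#include <stdio.h>

/* Sketch of a transform-and-lighting loop: the per-vertex work a T&L
   engine repeats millions of times per second. The vertex layout, the
   3x3 rotation matrix and the single directional light are assumptions. */

typedef struct { float x, y, z; } Vec3;

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

int main(void)
{
    /* 3x3 rotation (90 degrees about Z) standing in for the world transform. */
    float m[3][3] = {{0,-1,0},{1,0,0},{0,0,1}};
    Vec3 lightDir = { 0.0f, 0.0f, 1.0f };          /* light along +Z */

    Vec3 positions[2] = {{1,0,0},{0,1,0}};
    Vec3 normals[2]   = {{1,0,0},{0,0,1}};

    for (int i = 0; i < 2; ++i) {
        /* Transform: rotate position and normal into world space. */
        Vec3 p = positions[i], n = normals[i], tp, tn;
        tp.x = m[0][0]*p.x + m[0][1]*p.y + m[0][2]*p.z;
        tp.y = m[1][0]*p.x + m[1][1]*p.y + m[1][2]*p.z;
        tp.z = m[2][0]*p.x + m[2][1]*p.y + m[2][2]*p.z;
        tn.x = m[0][0]*n.x + m[0][1]*n.y + m[0][2]*n.z;
        tn.y = m[1][0]*n.x + m[1][1]*n.y + m[1][2]*n.z;
        tn.z = m[2][0]*n.x + m[2][1]*n.y + m[2][2]*n.z;

        /* Lighting: simple Lambertian (diffuse) intensity. */
        float intensity = dot(tn, lightDir);
        if (intensity < 0.0f) intensity = 0.0f;

        printf("vertex %d -> (%.1f, %.1f, %.1f), light %.2f\n",
               i, tp.x, tp.y, tp.z, intensity);
    }
    return 0;
}
```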
Comprising nearly 23 million transistors – more than twice the complexity of the Pentium III microprocessor – the GeForce 256 GPU is capable of delivering an unprecedented 15 million sustained polygons per second and 480 million pixels per second and supports up to 128MB of frame buffer memory.
- How Do Computers Make Pictures?
- Graphic Card Resolution
- Graphic Card Colour Depth
- Graphic Card Components
- Graphic Card Memory
- Graphic Card Driver Software
- 3D Accelerated Graphic Cards
- Graphic Card Geometry
- 3D Rendering
- FSAA Graphic Card Technology
- Digital Graphic Cards
- DVI Graphic Cards
- HDCP Technology
- Graphic Card HDMI Ports
- Graphic Card Display Port
- Unified Display Special Interest Group
- DirectX
- OpenGL technology
- Direct3D
- Talisman
- Fahrenheit Graphic Cards
- SLI Technology
- CrossFire Graphic Cards