SHIP DATE: August 19th, 2002*
The Radeon 9700 and its older, faster brother, the Radeon 9700 Pro, are the second- and third-best cards ATI has to offer right now (the Radeon 9800 Pro recently took the crown), and they'll slip another rung down the ladder soon enough. The card the 9700 Pro replaced as top dog was the GeForce4 Ti 4600, nVidia's top card until the release of the rather disappointing GeForce FX 5800 Ultra (the "dustbuster"), which, tardy as it was, managed to overtake the 9700 Pro by a small margin in many tests. The 9700 Pro excels at antialiasing, anisotropic filtering, and generally anything that involves image quality; clearly, if you're paying this much, you don't want to see if you can get 500FPS, you want to see purtyness. So the 9700 Pro is functionally superior. ATI, of course, has nothing on nVidia when it comes to pretentious names.
By way of technical data, this card certainly has a pageful, much like all of its predecessors. I'm not exactly sure how to introduce it all properly, so once more I'll say to hell with linear paragraph format and just start listing things, skipping some of the less important ones:
GPU
The GPU is sort of the graphics card's processor. In fact, that's exactly what it is. GPU stands for Graphics Processing Unit (or Graphical Processing Unit, it doesn't matter either way, really), and it basically does the majority of the video card's calculations.
- R300 core
R300 is the name of the core, just as current P4s are Northwoods and the GeForce FX 5800 is NV30. RV250 is the core for the Radeon 9000 (and Pro), R200 is the core for the famous Radeon 8500, and so on. The R300 core meets the Vertex Shader 2.0 and Pixel Shader 2.0 specifications, which basically give software designers more flexibility and speed to play with. The core also boasts four vertex shader pipelines, which give it a lot of horsepower in that area. HyperZ III and Smoothvision 2.0 are features of the core that will be gone over later.
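If "vertex shader" means nothing to you: it's a tiny program the GPU runs on each vertex that passes through it, most commonly to transform positions. A minimal Python sketch of the concept-- purely illustrative, with made-up values, and obviously nothing like how the hardware expresses it:

    # Toy illustration of what a vertex shader does: run a small program
    # (here, a 4x4 matrix transform) on every vertex independently.
    # All values below are invented for the example.
    def vertex_shader(position, mvp):
        """Transform one (x, y, z, w) vertex by a model-view-projection matrix."""
        return tuple(
            sum(mvp[row][col] * comp for col, comp in enumerate(position))
            for row in range(4)
        )

    identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
    print(vertex_shader((1.0, 2.0, 3.0, 1.0), identity))  # (1.0, 2.0, 3.0, 1.0)

Having four pipelines just means the card can chew through four such programs at once.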
- 325MHz clock for Pro, 275MHz clock for regular
A lot of people were worried about the 325MHz clock, because they felt it was sure to make for painfully low yield; that is, that ATI would have too many parts incapable of pulling such a high speed, and thus have to raise the price. It seems ATI has done its homework in this area, though, and part yield is more than acceptable despite the 150 nanometer process on which the card is built. It's also worth noting that the core clocks are slightly out of sync with the memory clocks-- 325MHz should be paired with 650MHz DDR, and 275MHz with 550MHz DDR. However, that would make for rather low-yield memory, and it doesn't matter too much anyway.
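For the curious, the "out of sync" arithmetic boils down to a doubling, since DDR transfers data twice per clock. A throwaway Python snippet using the clocks quoted above:

    # Core/memory clock pairing from the paragraph above. DDR moves data
    # twice per clock, so "effective" MHz = 2x the real memory clock.
    for name, core_mhz, shipped_ddr_mhz in [("9700 Pro", 325, 620),
                                            ("9700", 275, 540)]:
        ideal_ddr_mhz = core_mhz * 2  # a perfectly matched effective DDR clock
        print(f"{name}: {core_mhz}MHz core wants {ideal_ddr_mhz}MHz DDR, "
              f"ships with {shipped_ddr_mhz}MHz")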
MEMORY
Every card needs memory. After all, the days of dipping into system memory are long gone, as increasingly complex textures insist upon 64MB of memory (and 128 is handy)-- and even if the hit to system memory weren't bad enough, video cards are usually equipped with much faster memory than the motherboard can offer, and the GPU can access onboard memory much faster to boot.
- 128MB DDR (620MHz for Pro, 540MHz for regular)
128MB is more or less standard with top-of-the-line cards these days, and increasingly useful for turning the eye candy up. Pretty damned fast, too; 620MHz DDR (okay, 310MHz, but 620 EFFECTIVE!) is nothing to laugh at. DDR is unrelated to Dance Dance Revolution, but this writeup would probably be none the worse for having every instance of "DDR" replaced with "Dance Dance Revolution."
- 256-bit memory interface
The nice thing about a 256-bit memory interface is that it doubles the effective memory bandwidth available compared to a 128-bit one. Sure, it has no effect on the DDR's speed, but fast DDR is only part of the memory bandwidth equation, and a 256-bit memory interface is a lot easier to implement than 1240MHz (620MHz real) DDR. Before this, cards had 128-bit memory interfaces (never mind the Matrox Parhelia, it sucked), and for some bizarre reason the GeForce FX 5800 Ultra has a 128-bit interface too, with 500MHz (1GHz effective) DDR2 to compensate-- when a 256-bit interface with plain old 250MHz (500MHz effective) DDR would have delivered the same bandwidth a lot more cheaply, sigh. One benchmark showed an almost 20% decrease in performance when a Radeon 9700 (not Pro) was software-crippled to a 128-bit memory interface.
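If you want the math spelled out: theoretical bandwidth is just bus width (in bytes) times effective memory clock. A quick sketch using the published clocks:

    # Theoretical memory bandwidth = bus width in bytes x effective DDR clock.
    def bandwidth_gb_per_s(bus_bits, effective_mhz):
        return bus_bits / 8 * effective_mhz * 1e6 / 1e9

    print(bandwidth_gb_per_s(256, 620))   # Radeon 9700 Pro: ~19.8 GB/s
    print(bandwidth_gb_per_s(128, 620))   # same DDR on a 128-bit bus: ~9.9 GB/s
    print(bandwidth_gb_per_s(128, 1000))  # GeForce FX 5800 Ultra: 16.0 GB/s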
MISC.
Marketing features, standards being conformed to, and that type of thing.
- AGP 8x
What's AGP 8x, you ask? Well, it works like this. AGP 1x offers bandwidth of 266MB/sec, which quite frankly isn't always enough. AGP 2x offers 533MB/sec, and 4x-- up until recently, the fastest needed-- offers 1066MB/sec. 8x, thus, offers 2133MB/sec. So is it really needed? Well... no, not really, right now. But it will be, so better early than late. By the way, AGP 1.0 effectively means older AGP, AGP 2.0 means modern 4x AGP, and AGP 3.0 is the new 8x standard. In other words, AGP 1x is ridiculously slow, AGP 2x is extremely slow, AGP 4x is quite slow, and AGP 8x is fairly slow. Meh.
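The pattern is just doubling from AGP 1x's base rate (a 32-bit bus at roughly 66.66MHz), which a tiny Python snippet makes obvious:

    # AGP bandwidth doubles with each speed grade.
    base_mb_per_s = 66.66 * 4  # AGP 1x: 32-bit (4-byte) bus at ~66.66MHz
    for speed in (1, 2, 4, 8):
        print(f"AGP {speed}x: {int(base_mb_per_s * speed)}MB/sec")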
- HyperZ III
Eat your heart out, Marketing. HyperZ III isn't just a fancy marketing word meaning something relatively simple like "fast memory" or whatever; it's a rather involved feature of the R300 cards (actually, all the Radeons have had Z-compression and such-- this is just the newest iteration! Or was.) meant to save on memory bandwidth. Remember, fast memory is pricey and memory bandwidth is at a premium to begin with, so manufacturers have to come up with increasingly elaborate ways to save on it. The first piece of HyperZ III is "Z-compression," which compresses (losslessly, naturally) data going to the Z buffer by up to 4:1 in optimal situations. The second is "Fast Z Clear," which allows the buffer to empty itself more quickly than it normally could. The third is "Hierarchical Z," and it makes a lot of sense as a concept: it simply doesn't bother working on pixels that won't be seen in the final image. After all, why do the calculations if nobody will ever see the result?
- Smoothvision 2.0
Smoothvision 2.0 is an update of Smoothvision, which is a fancypants name for an FSAA and anisotropic filtering package. The anisotropic filtering bit is an updated version of the R200 (Radeon 8500)'s AF (AF is short for anisotropic filtering, by the way), and it allows for a "quality" mode, which can operate in conjunction with trilinear filtering-- at the cost of a little performance, naturally.
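The simplest flavor of FSAA-- supersampling-- is easy to sketch: render at a higher resolution than the screen, then average blocks of samples down to the final image. (Smoothvision 2.0 actually uses cleverer multisampling, but the goal is the same.) A toy Python version, with an invented "rendered" image standing in for actual rendering:

    # Toy supersampling FSAA: "render" at 2x resolution, then average each
    # 2x2 block down to one output pixel. Pixel values here are invented.
    def downsample_2x(hires):
        h, w = len(hires), len(hires[0])
        return [[(hires[y][x] + hires[y][x + 1] +
                  hires[y + 1][x] + hires[y + 1][x + 1]) / 4
                 for x in range(0, w, 2)]
                for y in range(0, h, 2)]

    # A hard black/white edge at 2x resolution...
    hires = [[0, 0, 255, 255],
             [0, 0, 255, 255],
             [0, 255, 255, 255],
             [0, 255, 255, 255]]
    print(downsample_2x(hires))  # [[0.0, 255.0], [127.5, 255.0]] -- edge smoothed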
As for performance, well... suffice it to say the card certainly qualified as top of the line when it came out, and as of the time of writing it still represents the effective high end. The GeForce FX 5800 Ultra is marginally superior, but it also sounds like a banshee armed with a tornado-shooting dustbuster, to use an overkill of a simile, and it eats a PCI slot besides. The Radeon 9800 Pro is marginally superior in all respects, but it's a lot pricier. The GeForce FX 5900 Ultra should kill it, except it doesn't really exist yet. If you really must know how it performs, there are benchmarks all over the intarweb. I'd feel guilty about stealing them, so instead I provide links to some of the better sets of benchmarks:
http://www.anandtech.com/video/showdoc.html?i=1683 is AnandTech's review of the Radeon 9700 Pro, and it has some very interesting and useful benchmarks.
http://www6.tomshardware.com/graphic/20021218/ is a Tom's Hardware Guide benchmark series featuring the Radeon 9700 (regular and Pro), comparing their performance to older cards on a modern system. I'm not very fond of THG, but I don't see how they could screw THIS up.
* Ship date's more important than release date. Too many evil paper launches these days.
Oh, yes-- and full screen anti-aliasing is now quite useful on current hardware. Back In The Day, though, it wasn't: you could just up the resolution instead, FSAA carried a huge performance hit, and it really didn't look all that great anyway. Things are much better now.