
Back in 1996, home PC graphics cards were strictly 2D; 3D hardware was limited to high-end graphics workstations.

NEC were one of the first companies to develop an affordable 3D card aimed at the PC games market, with their PowerVR chipset. The card quickly gained a reputation for being technically impressive - if you had a fast enough CPU to run alongside it, and if you could find games that supported it. At the time, Direct3D was in its infancy, so NEC implemented their own API, and any game that wanted to take advantage of these cards had to have large sections of code completely rewritten.

At roughly the same time, however, 3DFX released their Voodoo card with its Glide API, which proved much more popular with developers. As a result, people bought the 3DFX card that their favourite games required, and the PowerVR was quickly forgotten.

The second-generation PowerVR2 chipset was released in 1998, where again it lost out in the PC market to 3DFX's latest offering, the Voodoo 2. However, it received a major boost when Sega selected it to sit alongside the Hitachi SH4 CPU in the Dreamcast console.

In a console environment, with games written specifically for the PowerVR2, the power of the chipset became clear. Graphically advanced games which looked good even compared to their PC counterparts were released right up until the end of Dreamcast production in 2001.

Returning to the PC market, the Kyro chipset was essentially PowerVR3 - an attempt to keep up with 3DFX, ATI and nVidia's latest 'all in one' 2D and 3D cards. Unfortunately, it continued the trend of failing to offer high enough performance at a low enough price to match the competition.

Which brings us to today. The Kyro II is the latest PowerVR-based chipset, and it competes with the nVidia GeForce and ATI Radeon GPU architectures. By all accounts it performs well, and with DirectX now providing programmers a single programming interface to all three cards, it might finally have a chance of succeeding.

The PowerVR family is also the best known (if not the first and only) GPU family to use tile based rendering, which supposedly reduces bandwidth needs by drawing only what's visible, where other GPUs waste bandwidth drawing what's hidden. Also on the way for PowerVR4 are internal true color (32-bit color quality at 16-bit bandwidth), bump mapping and eight-layer multitexturing.
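To get a feel for the bandwidth claim, here is a rough back-of-envelope comparison. All the numbers below (resolution, frame rate, average overdraw, 16-bit colour and depth) are illustrative assumptions for the sake of the arithmetic, not specifications of any PowerVR chip:

```python
width, height, fps = 640, 480, 60
overdraw = 3.0              # assumed average depth complexity of the scene
bytes_color, bytes_z = 2, 2  # assumed 16-bit colour and 16-bit depth buffers

# Conventional immediate-mode renderer: every covered fragment costs a
# Z read, a Z write and a colour write to external memory.
immediate = width * height * fps * overdraw * (bytes_color + 2 * bytes_z)

# Tile-based deferred renderer: hidden surfaces are rejected on chip, so
# external memory sees roughly one colour write per visible pixel.
deferred = width * height * fps * bytes_color

print(immediate / deferred)  # ratio of external bandwidth used
```

Under these made-up numbers the conventional renderer uses 9x the external memory bandwidth - which is the whole argument for drawing only what's visible.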

VideoLogic's PowerVR PCX1 was advertised as a "3dfx Upgrade". Quirky Brits. It didn't even support bilinear filtering, though perspective correction was all over the box. Crap, really.

PCX2 was the second iteration; it upped the clock and added bilinear filtering. It still lacked many blending modes - it only had 4 bits of intensity for light mapping (fugly) - but in single-texture games the filtering was arguably better than 3dfx's offerings. And lordy, was it fast for its time.

The reason it needed a fast processor was that the driver had to convert the scene data games gave it into a language (crazy moon language, apparently) that the video card could understand.

PowerVR Series 2 was next. I love how they forgot about their first chip. Oh well, nVidia did it too with the truly bizarre NV1. Unlike the PCX2, it was not a big bag of suck when it came to blending modes. More importantly, especially for average framerates in complex scenes, the PVR2 introduced variable tile sizes. Most importantly, especially for the PC game space, the card converted standard 3D data to its bizarre moon language on chip, not in the driver (yay). PC development, however, was sidetracked for Sega's Dreamcast, which chose the PowerVR Series 2 for its innards. They chose it for its low memory usage and low production costs.

Break for a moment: tile-based rendering. Okay, in an extremely general nutshell: tile-based (region-based) rendering is an entirely different way to think about 3D rendering. Rather than drawing back to front and using the Z-buffer to figure out (roughly) what to texture and fill and whatnot, a region of the display is picked out and processed on its own. Things that are not visible are not drawn at all. Very cool. It solves the easy side, though, not the hard side: the entire scene must still be transformed by the system processor. Only the drawing (rendering) of the scene is sped up.
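The two-pass idea - first sort geometry into screen regions, then resolve visibility within each region before shading anything - can be sketched in a few lines of hypothetical, hugely simplified Python. Real PowerVR hardware does the visibility test per pixel, in silicon; here depth-sorting whole triangles stands in for it:

```python
TILE = 32  # illustrative tile edge in pixels; real hardware tile sizes vary

def bin_into_tiles(triangles, width, height):
    """Pass 1: drop each triangle into every tile its bounding box touches.
    A triangle is modelled as (min_x, min_y, max_x, max_y, depth)."""
    bins = {}
    for tri in triangles:
        x0, y0, x1, y1, _depth = tri
        for ty in range(y0 // TILE, y1 // TILE + 1):
            for tx in range(x0 // TILE, x1 // TILE + 1):
                bins.setdefault((tx, ty), []).append(tri)
    return bins

def resolve_tile(tris):
    """Pass 2: within one tile, only the nearest opaque surface gets
    shaded and written out; everything behind it is never drawn."""
    return min(tris, key=lambda t: t[4]) if tris else None
```

For example, binning a large far triangle and a small near one into a 64x64 screen puts both in tile (0, 0), and `resolve_tile` shades only the nearer of the two - the hidden one costs no fill at all.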
Think of it this way: the integer part of the 3D rendering is potentially much faster, but the floating-point part is untouched. While both are problems, the polygon count limitations of the Dreamcast are due to the 200 MHz SH4 processor, not the PowerVR Series 2. Keep in mind, however, that translucent polygons in view pretty much negate the bandwidth advantage in that region of the screen. Also of note: the Hitachi SH4, programmed well, is capable of about 1.4 GFLOPS. I don't know if anyone could reach that peak, but it is technically possible. The SH4 is capable of processing 4 simultaneous 32-bit floating-point calculations (hence the "128 bit" on the box of the Dreamcast). It is a weird chip too, though: 64-bit memory bus, 32-bit address space, and a 16-bit instruction set (!) to conserve memory.
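That peak figure can be sanity-checked with some back-of-envelope arithmetic. The SH4 has an FIPR instruction (a four-element inner product: four multiplies plus three adds, i.e. seven floating-point operations); assuming, optimistically, that one FIPR can issue every cycle at 200 MHz:

```python
clock_hz = 200e6         # SH4 clock speed
flops_per_fipr = 4 + 3   # FIPR: four multiplies plus three adds
peak = clock_hz * flops_per_fipr

print(peak / 1e9)  # peak throughput in GFLOPS
```

That lands on 1.4 GFLOPS, the commonly quoted figure - a theoretical ceiling you'd only approach with carefully scheduled vector code.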

Course, this isn't a Dreamcast or SH4 node. Moving along...

By the time the PVR2 was available for the PC, no one cared. Had it been released when the Dreamcast was launched in Japan, it would have been the best card on the market. It was really a bad mamma jamma for a 100 MHz oddball. The drivers kinda sucked, but the image quality was great. Plus, a few demo applications really showed its potential. In particular, a D3D app showing 52 cards from a deck falling to the ground (meant to test fill rate) listed the card at 1 gigapixel. Astounding for a 100 MHz card, and it showed that they were indeed onto something. Again, nobody cared.

The PVR3 was released, upping the clock again and making the moon-language interpreter even faster. You may have heard of it as the Kyro/Kyro II. The Kyro II was the same chip with a higher clock speed.

I love it when a company has the balls to step in a different direction than the rest of the planet. They have been proven technically right, in my humble opinion. They just need to solve the other half of the equation and put in a T&L processor.

My prediction for the future? They will die an agonizing death. nVidia bought 3dfx only for GigaPixel. GigaPixel only had one thing - an idea for the internal rendering structure on a chip. Basically, it made the external interfaces look completely normal (in theory) and made the renderer tile based. When nVidia goes tile based, cash in your chips; they've already got the T&L part licked. I'll miss those guys. *sniff*

If you feel like voting this down, please tell me why. I don't understand much of this; it feels like a clique to me ;)
