
Sunday, 1 January 2023

The (Almost) Definitive 486DX/50 Article

(Source: Infoworld, 25th May 1992, p56)

We need to start at the beginning, because understanding the timeline of the 486 - Intel's 4th generation x86 CPU - is key to understanding the story of the DX/50. (I also can't not start at the beginning, so "I'm sorry" or "you're welcome!" depending on how you feel about that.) This entire article is actually a complete accident. It was supposed to be the preface to the article I wanted to write, which was everything I found out from benchmarking and comparing a DX/50 against its closest competitors. I have, at least, made this a separate article, so if you just want to look at the findings of my tests, you can go to the other one (once it's completed I'll add the link). I'm not going to get massively technical here - look at the datasheet for that - but I will try to get as much information as possible in one place while clarifying dates and clearing up misconceptions, because there are a few.

The 486 Revolution


Intel initially finalised the specifications of their next-gen 486 in June 1987[1], with a somewhat vague arrival date of 1990. Speculation about the chip continued for the next couple of years until Chicago's Comdex Spring in April of 1989, when Intel finally revealed the first samples of their new CPU, priced at $950 in quantities of 1,000[2]. Actual systems including the 486 weren't expected until Q4 that year when production would begin in full, though IBM showed a prototype system at the end of April. The 486 was actually a pretty big leap ahead of its predecessor, despite few new instructions being added, thanks to some architectural enhancements. Through the use of a more precise 1 micron manufacturing process, Intel were able to integrate a co-processor into the die (most 386 systems did not ship with a co-processor[2]). It also used the RISC-inspired technique of pipelining, which allows a new instruction to be worked on before the previous one has completed - this was a step closer to the superscalar design of the future Pentium. The other significant enhancement was internal cache memory. Cache was introduced with the 386 and was external SRAM at this time. While it provided improved RAM access times, there was still some latency. By integrating 8K of cache onto the die of the CPU itself, access to the data and instructions stored there was effectively instantaneous. This internal cache was also called 'level 1', while the external SRAM was called 'level 2'. Some early or cheap 486 systems didn't include any level 2 cache at all.

These enhancements meant a 486 was about twice as fast as a 386 running at the same clock speed.

(Source: Infoworld, 8th May 1989, p38)

The first 486 system to be announced was Apricot's VX FT server, as early as June, with the initial 25MHz model[3]. It's hard to imagine a DX/25 being at the heart of a high-end server today, but remember that this was the fastest x86 CPU in the world at the time. It wasn't all plain sailing, however: the first teething troubles appeared as early as October that year, when it became apparent there were bugs in the FPU of the B4 stepping of the chip[4]. This delayed systems for at least a month, but this coincided nicely with Comdex Fall so everyone got to show their systems off anyway.

Volume shipping of the fixed 486 began in early December 1989[5] and the first 33MHz systems began appearing in February '90, the month that a second bug was discovered in systems with the new EISA bus[6]. April 1991 then saw the release of the co-pro-less 20MHz 486SX and complementary 487 co-processor upgrade (technically a DX with a different pinout - Intel never made a discrete co-pro for the 486) and the first mention of their 50MHz model, said to be capable of 40 million instructions per second (MIPS)[7]. Considering that the DX/20 was capable of 20 MIPS, it's pretty impressive that Intel were planning to deliver double the power within a couple of years. By pricing the SX model at less than half the cost of the existing DX models, Intel was aggressive in its efforts to establish the 486 as the mainstream desktop CPU. This was somewhat ambitious, as most users around this time were contemplating upgrading their 286 to a 386, and AMD were selling the best-value chips on that platform. For context, 386-based systems accounted for half of all PC unit sales in 1992, but by 1993 the 486 took over with 66% of sales, so Intel's strategy ultimately worked[8]. What helped was that they had no competition at that time. Aside from a very specific agreement with Cyrix, Intel was, for the first time, not legally obliged to license its 486 designs to anyone else, as had been the case with the 8088, 80286 and 80386. This did land them with a number of antitrust lawsuits, however, which usually only arose when they tried to sue a competitor.

(Source: Computerworld)

50MHz... But Not From Intel


Hilariously, the first 50MHz 486 showed up in October 1990, a full eight months before Intel even announced their official model[9]. In fact, at least two cheeky companies - Velox and Cambion - sold Peltier-based products that upped the voltage and cooled a DX/33 effectively enough that it could be overclocked to 50MHz[10]. I think this is one of the earliest extreme overclocking products I've ever heard of. There was talk that Intel might sue them considering that their product would be in direct competition with the real deal, but I don't think there was a huge amount of buyer confidence in running chips outside their spec. Certainly makes me wonder if the performance would match a genuine chip.

Business Only


Intel officially announced their 50MHz model at PC Expo in June 1991 (on the same day AMD announced their 25MHz 386SX clone!), and it was received with enthusiasm by some system builders and caution by others[11]. With the Q4 arrival date some way off on the horizon, a few OEMs preferred to focus on selling the existing DX/33, while others had plans for multi-CPU systems. Intel also announced a custom cache module to optimise their newest CPU and help reduce 'design issues', though it's unclear how many manufacturers actually adopted it - cost was cited as a significant issue - and some manufacturers chose to design their own cache solution instead. It was clear from the off that this CPU was going to be used predominantly, if not exclusively, in servers and mainframe-class systems, and there were two big reasons for this:
  1. Cache: a 50MHz bus requires at least 20ns SRAM for its external cache, and this was pretty expensive in 1991. As an example, 64KB of slower cache suitable for a 25MHz system would set you back $150 in August of '91, and 256K was considered an optimal amount, so that's $600 alone (the equivalent of $1,300 today). Goodness knows how much 20ns chips would have cost. By comparison, DRAM was $100 per megabyte at the time, so normal-speed SRAM was 24 times the cost[12] (see the cost sketch just after this list). Complementary components, such as video and I/O, would also need to be 'server grade' and that meant EISA, SCSI, etc. Matching the 50MHz bus speed wasn't so much of an issue because the local bus was in its infancy at this time, and didn't really mature until late 1993.
  2. The FCC, at least in the US market. All computers produce electromagnetic emissions capable of causing interference and must be tested to conform to certain limitations before they can be authorised for sale. The two main categories for computers are class A (business) and class B (residential). Class B is much more stringent and, therefore, more costly to meet. System manufacturers anticipated that the 50MHz frequency of the DX would make it harder to meet class B, so they didn't bother wasting the time and money. This meant that it would not be authorised for sale to individuals, making it a de facto 'business only' CPU[13].
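To make the cache arithmetic from point 1 concrete, here's a quick sketch in Python. The prices are the ones quoted above; the per-megabyte SRAM figure is just the $150/64KB price scaled up, so treat it as illustrative rather than a real 1991 quote:

```python
# Rough 1991 cost comparison of cache SRAM vs. DRAM, using the figures
# quoted above. Prices are advertised/approximate, not exact market rates.
SRAM_COST_PER_64KB = 150.0   # $150 for 64KB of 25MHz-grade cache SRAM
DRAM_COST_PER_MB = 100.0     # ~$100 per megabyte of DRAM

sram_per_mb = SRAM_COST_PER_64KB * (1024 / 64)        # scale 64KB price to 1MB
optimal_cache_cost = SRAM_COST_PER_64KB * (256 / 64)  # the 'optimal' 256KB

print(f"SRAM per MB: ${sram_per_mb:,.0f}")                             # $2,400
print(f"256KB cache: ${optimal_cache_cost:,.0f}")                      # $600
print(f"SRAM/DRAM cost ratio: {sram_per_mb / DRAM_COST_PER_MB:.0f}x")  # 24x
```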
Some desktop suppliers of the latest chip actually considered it to be 'too powerful', which definitely contributed to the general view that the DX/50 would be limited to only the most high-end business applications[14]. The speed bump wasn't the only improvement made in the 50MHz model. It also received a JTAG test access port (TAP) and, although a reason for this isn't documented, it's likely it was so that manufacturers could test the bus stability of their systems with the new CPU.

(Source: Infoworld)

Clock-Doubling on the Horizon


The DX/50's release was overshadowed somewhat by plans for a 66MHz model due to arrive in autumn the following year (plus talk of the next-gen P5[13]). This is actually the first time 'clock-doubling' was publicly mentioned by Intel, though it wasn't called that at the time - this new method of having an internal clock that was double the speed of the external clock was initially named the 'double clutch', 'dual clock' or 'clock doubler' depending on which magazine you were reading.
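For the unfamiliar, here's a minimal sketch of what clock-doubling actually means: the core simply runs at a fixed multiple of the unchanged external bus clock, so existing 25MHz and 33MHz board designs carry over. The chip list is just the well-known examples (and yes, the DX4, despite its name, was clock-tripled):

```python
# Clock-doubling in a nutshell: the core runs at a fixed multiple of the
# external (bus) clock, so the motherboard never sees more than 25/33MHz.
def core_clock(bus_mhz: float, multiplier: int) -> float:
    return bus_mhz * multiplier

# Well-known examples; the real 'DX2/66' bus is 33.33MHz (100/3).
parts = [("DX2/50", 25.0, 2), ("DX2/66", 33.3, 2), ("DX4/100", 33.3, 3)]
for name, bus, mult in parts:
    print(f"{name}: {bus}MHz bus x {mult} = {core_clock(bus, mult):.1f}MHz core")
```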

NCR, among others, had big plans for Intel's new CPU. Their 3600 system was designed with massively parallel processing in mind, to deal with 'large transaction processing operations' for big retail clients like K Mart. This computer used thirty-two 486DX/33 CPUs along with the same number of multi-CPU cards, each with two to eight of the new 50MHz chips, for up to 288 processors in total[15]. This was due for release in December '91 and was a scalable design, costing between $850,000 and $8,000,000. Obviously this wasn't going to be running Windows - a new flavour of UNIX was to be developed specifically for this system. NCR are the people that invented WiFi alongside AT&T, and were one of the earliest successful adopters of SCSI, amongst other industry-shaping innovations. If you only looked at the stories from publications focusing on desktop computing, you would think the DX/50 was a flop, but that was far from the truth. Unfortunately I've been unable to obtain historical sales figures for each 486 chip, so I can't back this up with data - but there is plenty of anecdotal evidence such as reviews and adverts.

Too Hot to Handle


In August '91, not long after volume production began, Intel had to cease production of the DX/50 because initial shipments were getting too hot and shutting systems down. This was attributed to Intel's existing suite of tests being inadequate when testing the 50MHz CPU[16]. It's pretty well known today that the DX/50 was the first 486 that really required a heatsink, not that Intel was telling anyone that - quite the opposite, in fact (can't find the reference right now). It's quite plausible that system builders simply weren't installing sufficient cooling, or indeed any. Either way, Intel and Dell, the only suppliers selling systems equipped with the chip at that time, said they would not be recalling systems and hadn't experienced complaints from their customers[17].

1.0 Micron or 0.8 Micron?


I've seen multiple people saying that the first stepping of the DX/50 was manufactured using a 1.0 micron process and then moved to 0.8 micron because of overheating problems, but I've found no evidence whatsoever to support this. There is even a discussion over at CPU World about an engineering sample of the chip that was apparently the earliest example of a die-shrunk DX/50 but, again, no sources are given. Every available press article I've read, however, talks about the 0.8 micron process being a necessary factor in the speed increase, as was the additional metal layer it facilitated, which reduced the vertical size of the die[18]. Changing the layer construction is a significant effort and isn't the kind of thing you casually do between steppings, as far as I know. Even the 486DX datasheet doesn't elaborate, only stating that the CHMOS IV and V processes were used in the making of the 25, 33 and 50MHz chips, but not which ones applied. It's possible that a 1.0 micron initial run of chips only made it into the hands of OEMs, but if this did happen then it's likely that they were all recalled and destroyed. Intel themselves stated that there was "virtually no chance" that anyone would encounter one of these chips as a result[19]. These early samples also lacked the enamelled Intel logo. I would love to come across one of these, because one way to prove it definitively would be to compare an early chip with a later one and look at the gold cap on the bottom of the CPU - a chip made with the 1.0 micron process will need a larger cap than one made using 0.8 micron. I have not yet found anyone who has made such a comparison so, for now, it's completely unproven that the DX/50's manufacturing process changed between steppings.

Left: a 1.0 micron DX/33. Right: a 0.8 micron DX/50.

Weirdly, one publication mentioned Intel demonstrating a 100MHz chip[18]. I know what you're thinking - they must mean a clock-doubled model, except they go on to talk about DX2s separately in the next paragraph, so they're either confused or Intel really did plan to go for a 100MHz front side bus. Such a thing seems absurd on so many levels, however. The same article suggested that a DX2/40 would be part of the future line-up, and another mag speculated about the possibility of a DX2/100[19].

DX2 & OverDrive


At least one supplier in Feb '92 said that Intel had effectively 'hit the wall' with the DX/50[20], just as mainstream systems had begun shipping. Vendors had also begun turning their attention to the OverDrive range of CPUs - due to be announced in March '92 - which would be a 'drop in' upgrade allowing users to increase the speed of their existing systems without having to change any of the other components. As an aside, system vendors did not like Intel selling CPUs directly to consumers if it meant those users were delaying the purchase of a new PC for a couple more years. Intel's CPUs were not retail parts and were usually only sold through OEMs as part of systems. The OverDrive range effectively marked Intel's entry into the retail sector, and that may explain why they were so expensive - $699 for the DX2/50 when it was released and $549 for the 33 and 40MHz models. It was a strategy to deter most users from upgrading, while placating those that wanted to pay for more speed without spending thousands on a new PC. As such, relatively few are around these days.

Just as the DX/50 was hitting its stride, the DX2 line was officially announced in March '92. Sold initially as an OEM part running at 50MHz internally[21] (25MHz externally), the consumer OverDrive part was due to follow at a later date. System manufacturers loved it because they could ship existing systems unmodified with the new chip installed for a claimed 70% performance improvement and no FCC pain-in-the-ass[22].

(Source: PC Mag, 15th Sep '92, p154)

To see how the DX/50's production issues affected system vendors, we had to wait until June '92 for PC Magazine's group test of 19 of the first DX/50 machines. This featured systems that were shipping to customers by January that year, 7 months after the chip's initial announcement and 3 months after the 'good' DX/50 became available. The findings were positive: systems scored 30% higher than the previous fastest 486, the DX/33, with a typical spec of 8MB RAM, a 300MB hard disk, SVGA graphics, plus DOS 5.0 & Windows 3.0. The DX/50's place as a server-grade CPU was confirmed - though it was claimed that the new DX2/50's raw performance was 'virtually identical'[23], its lack of I/O performance meant it was intended for the masses, and not seen as a threat to the DX. The linked article also provides a really good explanation of the technical challenges vendors faced in creating 50MHz systems, on page 116, and a look at the production process.
"Intel's step up from the 33MHz 486DX chip to a 50-MHz version required additional technical refinements. Both chips contain essentially the same processor logic, with a math coprocessor, an 8K memory cache, and 1.2 million transistors. In order to coax faster performance from the 486, the 50-MHz part uses a submicron (0.8 of a micron), three-layer chip design (versus the 33-MHz's 1 micron, 2-layer approach)." (p116)
PC Magazine's group test in September 1992, however, pitted the DX and DX2s against each other to provide a direct performance comparison[24].
"Just how important is an external, secondary processor cache? So important that PC Magazine Labs found DX2s with a well-designed 128K (or larger) cache run at 96 percent the processor performance level of a 50-MHz 486DX ... [T]he difference is almost unnoticeable if you're not performing memory-intensive tasks..." (p115)
"The average memory performance for all the DX2s was 6,314 kilobytes per second (compared with 8,017 KBps for the DX/50s)--78 percent of the performance of true 50-MHz systems ... Under most applications, PCs utilising the clock-doubler technology should perform nearly on the same level as a true 50-MHz 486 when the DX2 chip is used with an appropriately designed external cache." (p127)
(Source: Infoworld, 17th Sep '92, p32)

The Competition Heats Up


(It literally did - the DX2 ran a lot hotter than the DX). It seems that the writing was on the wall. Already it was obvious that the imminent 66MHz DX2 was likely to pose a threat once it arrived. Systems built around the DX2/66 began to show up in November, but it wasn't as clear cut as you would think. Infoworld's group test of 17 PCs shows why[25]. Firstly, Intel priced the DX2/66 $100 higher than the DX/50, so it wasn't a no-brainer. Secondly, they acknowledged a 9% performance increase over 50MHz systems but said:
"[S]ynchronising the timing between internal and external operations can be tricky. It's possible that some operations, particularly those that involve memory- or disk-intensive access, may actually be slower with the DX2/66 than with the DX/33." (p114)
Personally I think that's a bit of a bonkers thing for a tech rag to say, especially when they haven't actually provided any numbers to back it up. Tricky timing issues? May be slower? What they're probably referring to is wait states undermining the internal speed of the DX2, but it's almost as if Intel asked them to create a veil of uncertainty around this new 'too good to be true' CPU. Except no, because Intel themselves were publicly discounting the DX/50 back in August[26].
"[D]esktop users can expect to see a 30 percent improvement in performance ... when compared with Intel's true 50-MHz 486 processor ... the DX2/66 requires less engineering expertise and uses less costly components than a true 50-MHz-based system does ... many of the design issues that appeared with the 50-MHz system have now disappeared." (p32)
These were the words of Anand Chandrasekher, product marketing manager for the DX2/66, so either he was pissing off a lot of people at Intel who were behind the DX/50, or Intel had chosen to cut their losses and move on to this more reliable technology.

(Source: PC Mag, 26th Jan '93, p183)

Infoworld and others noticed that systems with at least 128K of cache would perform better than those with 64K or none. While it may be inconceivable 30 years later that OEMs were selling 486 systems with little or no level 2 cache, cost - or, rather, value - was a huge deal and if consumers didn't really understand or appreciate the relevance of cache, then they wouldn't consider it a worthy investment.

The biggest reason Infoworld could give for choosing a DX2/66 over a DX/50 was the price: the average 50MHz system was around $2,000 more. This is unsurprising as such systems were usually servers and would therefore be better-equipped, with features like more cache, SCSI, greater storage and caching drive controllers. They also cite local bus performance as a factor because the VESA local bus was brand new at the time and didn't support 50MHz operation yet, along with all the other proprietary local buses. The bottom line here is that there was very little emphasis on comparing the actual performance of Intel's top-end chips at this point, almost like they were apples and oranges.

The DX/50 Marches On


Moving forward to January 1993, PC Magazine conducted a massive test of 74 systems that showed the DX2/66 in its stride. Local bus graphics was still in its infancy, but system vendors had learned quickly how to get the most out of the new CPU[27]. You would think that everyone was ready to dethrone the DX/50 by this point, but apparently not.
"[F]ast video performance not only makes PCs run faster, it makes them seem faster still. We've rarely seen that demonstrated so well as with these 66-MHz DX2s. While their average performance is only about 15 percent to 20 percent ahead of the 50-MHz 486DXs that were our previous speed champs--a margin of improvement that does not stir much interest in most PC users--the fast video subsystems of these machines often makes them appear far faster." (p122)
They've basically discounted a measurable performance improvement as... perceived? Insignificant? At this point I'm confused - surely most people would say that a 20% improvement had some merit? They seem to double down in their article on local bus video in the same issue, saying that "a local-bus 486DX/50 should outperform a local-bus 486DX2/66 because the clock-doubled DX2/66's local bus stops at 33MHz." Again, no numbers to back this up; lots of 'should' and 'may' being thrown around. Yet ask anyone today which system will perform better and the DX/50 will quickly be discounted for multiple reasons, despite very few people properly testing their systems from what I've seen (yes, that's a dig). What about GUI performance? Hard drive transfers? There are many factors that can be compared.

Pentium Cometh


(Source: PC Mag, 11th May '93, p215)

In the meantime, on March 22 of '93, Intel unveiled their Pentium CPU and really put the cat amongst the pigeons[28]. Competitors, such as IBM and Cyrix, had only just started to release their own 486 clones (under license from Intel) and suddenly (well, not suddenly - the P5 had been talked about since June '91, remember?) Intel had released their next-gen chip. At $900 in volume purchases, it was never going to threaten the healthy 486 market, but it did make the DX/50 Intel's third-fastest chip, behind the DX2/66.

PC Magazine did a group test of servers in May 1993, relatively late in the life of the DX/50, and found that "systems with 486DX/50 processors should have a performance advantage over 486DX2/66 systems in reading data from disk cache memory"[29]. These magazines really overuse the term 'should'. Anyway, of 9 servers tested, 3 were equipped with the DX/50, and the 'editor's choice', the Zenith Z-Server 450DE, was one of them, scoring 10 to 15 percent faster on the I/O throughput tests than its competitors. Not bad, all things considered.

In July 1993, the DX/50 reached its second birthday. PC Magazine ran an article on The Perfect PC and, to show how things had moved on in the last year, an entry-level system was considered to have a 486SX rather than a 386[30]. They also labelled the DX2/66 as "the value choice for power users".
"In the hierarchy of computing, comparing a 50-MHz DX to a 66-MHz DX2 is problematic. The DX2 is better at processor-intensive tasks (calculation- and graphic-intensive programs); the plain DX is faster at memory intensive applications. Because 50-Mhz motherboards are difficult to design and manufacture, many PC makers have focused their attention on the 486DX2/66 and orphaned the 486DX/50." (Page 127)
It was true - Red Hill Technology shared their own experience of the DX/50: they only sold one, and had to underclock it to 40MHz just to get it to run without issues[31]. Systems featuring the DX/50 were still being produced in December of 1993, according to PC Magazine's 486 buyers' guide[32], though out of 89 lines being produced by the various manufacturers, only 4 offered a DX/50 option, while 83 featured the DX2/66. Taking a look at the distribution of scores, the DX/50 put in a strong showing.

How the chips stack up (PC Magazine, 7th Dec 1993, p182)

Although it's no surprise that the sole Pentium system in the group murdered the competition, what's interesting is that, while the highest-performing DX2/66 scores about 47, the slowest only manages 27, producing an average of 39. The DX/50, meanwhile, is sitting happily at about 36, therefore outperforming a fair number of 66MHz machines, and all of the DX2/50s. This just goes to show how important system design is in getting the most out of a CPU. Either way, it was clear that the DX/50 was on the way out, and the release of the clock-tripled DX4 only cemented it. Although not completely: a DX2/100 was actually rumoured around the time the DX4 (then called the DX3) was announced in July '93, which obviously would have had a 50MHz front side bus and would have been intended to drop into systems that had been designed for the DX/50[33]. Sadly nothing more was heard about this CPU, though the DX4 went on to successfully fill the gap between the DX2/66 and the Pentium.

So that's the full story. I think it's kind of sad that CPUs don't get a 'final day' when their manufacturer stops production, celebrates their achievements and sends them out to pasture. I think the DX/50 deserved such a send-off, but we will probably never know when the last one was produced.



As a celebration of its life, I want to discover, once and for all, where it outperformed its clock-doubled rival, where it didn't fare so well, and what the final score is. It will be a bit like what I did with the Tualatin vs Pentium 4 tests, except I'll be doing them myself rather than scraping them from other sources[34]. It's worth noting, in the context of modern-day retro benchmarking, that there were multiple versions of the DX2/66. The main model, the P24, originally had the 'write through' scheme on its internal cache, just like the DX. Late in 1994, however, the P24D was released, which used the faster 'write back' scheme. It's important, when benchmarking, to ensure you are comparing like-for-like. The P24D DX2/66 is going to wipe the floor with the DX/50 every time because it was not its contemporary - it was released over 3 years later[35].
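To illustrate why the write scheme matters so much, here's a deliberately simplified toy model. Real cache behaviour depends on line sizes, allocation policy and eviction patterns, but the basic economy is this:

```python
# Toy model: count external bus writes for n stores that all hit one cached
# line. Write-through sends every store to the bus; write-back just dirties
# the line and writes it out once, on eviction.
def bus_writes(stores: int, policy: str) -> int:
    if policy == "write-through":
        return stores   # every store goes out over the bus
    if policy == "write-back":
        return 1        # one write when the dirty line is eventually evicted
    raise ValueError(policy)

n = 1000
for policy in ("write-through", "write-back"):
    print(f"{policy}: {bus_writes(n, policy)} bus writes for {n} stores")
```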

Footnotes


The official model number of the DX/50 is A80486DX-50.

According to CPU World, there were 10 steppings of the 486DX/50: Q0209, Q302 and SXE69 are engineering samples and SX408, SX409, SX518, SX546, SX547, SX705, SX710 went into full production. These are displayed alphabetically, so it's often hard (or impossible) to work out where engineering samples fit in with the other versions chronologically. I have an SX518 and SX710 and both seem to behave very similarly. 

I have found at least two examples of people overclocking a DX/50 to 60 or 66MHz respectively, with varying success and little detail. It makes sense for this to be possible given that the DX/50 was the first of the 0.8 micron CPUs, which should theoretically have given it some headroom - chips pushed out late in a manufacturing process's life have historically had more trouble reaching higher frequencies (looks at the 1.13GHz Coppermine). It was an uncommon practice to overclock such a chip, though.

As ever, if you have enjoyed reading this, please consider buying me a coffee via Ko-Fi. I try my best to make sure that everything I write is historically accurate by citing primary sources and weaving together some kind of story from everything, but if you do think I've got something wrong, please comment below with a source and I'll be really happy to make corrections. I really enjoy writing about this stuff and it can be really time consuming! Also, I promise to give 20% of every donation to the Internet Archive, without which this article (and others I've written) wouldn't be possible. Thanks for reading :)

References

1 Intel Finalizes Specifications for 80486 Chip (Infoworld, 15th June 1987, p.6)
2 Hot New Chips (Infoworld, 8th May 1989, p.35)
3 Hold Onto Your Hat (And Your Wallet) (Byte Magazine, August 1989, p.8)
4 486 Bugs Derail PC Vendors' Plans (Infoworld, 30th October 1989, p.1)
5 Intel Ships 80486 Chip in Volume (Infoworld, 4th December 1989, p.3)
6 Intel Identifies Bug in 486 Systems (Infoworld, 5th February 1990, p.21)
7 Intel Bids For High-End PC Market With Aggressively Priced 486SX (Infoworld, 29th April 1991, p.1)
8 U.S. Industrial Outlook 1994 (Department of Commerce, p.26-18)
9 Everex Set To Show 50-MHz PC (Computerworld, 8th October 1990, p.4)
10 Computers Hot Up With Cooler Chips (New Scientist, 29th June 1991)
11 Intel Launches 50-MHz 486 at PC Expo (Infoworld, 1st July 1991, p.79)
12 MIS Computer Systems Advert (Byte Magazine, September 1990, p.308)
13 Intel Sets Sights on 586 (Infoworld, 1st July 1991, p.1)
14 Fast 486s Too Much Too Soon (Computerworld, 3rd June 1991, p.1)
15 Users Wait for 3600 Reality (Computerworld, 17th June 1991, p.27)
16 Faulty Chip Test Suites... (Infoworld, 26th August 1991, p.1)
17 Chip Choice Outnumber User Needs (Computerworld, 26th August 1991, p.1)
18 The 50MHz 486 is Just One of Several... (PC Magazine, 24th September 1991, p.37)
19 50-MHz 486-Based PCs (PC Magazine, 16th June 1992, p.116)
20 Vendors Race to Support Intel's Clock-Doubling Dual-Speed Chips (Infoworld, 10th February 1992, p.1)
21 Clock-Doubler Blows Into Town (Infoworld, 9th March 1992, p.3)
22 Intel Rolls Out Its OverDrive Processor (Infoworld, 25th May 1992, p.27)
23 486/50: The New Performance Leader (PC Magazine, 16th June 1992, p.113)
24 Intel Ups the Ante with Its 50-MHz DX2 (PC Magazine, 15th September 1992, p.111)
25 66-MHz 486DX2 Computers (Infoworld, 16th November 1992, p.114)
26 Manufacturers Hop Aboard Intel's DX2/66 Bandwagon (Infoworld, 17th August 1992, p.32)
27 DX2/66: The New Speed Limit (PC Magazine, 17th January 1993, p.111)
28 Intel Launches Rocket in a Socket (Byte Magazine, May 1993, p.92)
29 To Serve and Protect (PC Magazine, 11th May 1993, p.179)
30 The Perfect System (PC Magazine, July 1993, p.123)
31 The Red Hill CPU Guide: 386DX-40 and Competitors (The Red Hill Hardware Guide)
32 486 Buyers Guide (PC Magazine, 7th December 1993, p.108)
33 Faster 486 Could Overlap Pentium (Computerworld, 19th June 1993, p.1)
34 The (Almost) Definitive Pentium III Tualatin Article (The Brassic Gamer Blog)
35 Intel to Revamp DX2 With Faster Cache (Infoworld, 27th June 1994, p.1)

1995-12-04 Clock Multiplier (Google Patents)
1992-05-08 Clock Multiplication Circuit and Method (Google Patents)
1987-10-27 Computer Element Performance Enhancer (Google Patents)
1991-07-01 Intel Unveils 50-MHz Chip (Computerworld)
1991-06-17 Cold Shoulder (Computerworld)
1990-10-16 Pushing It To The Limit Dept. (PC Magazine)
1991-04-22 Cache's Motherboard, Icecap Module, Boost 33MHz CPU to 50MHz (Infoworld)
1990-11-05 Supercool Chip (Computerworld)
1991-08-05 Dual-Clock 486SX: Bonanza For Intel? (Computerworld)
1992-04-06 Cyrix gains customers for its 486 chip (Infoworld)
486DX Datasheet (DosDays)

History


2/1/23: First version published

Sunday, 4 December 2022

The (Almost) Definitive AGP Article

Contents

3D Takes Centre Stage
PCI: The Limitations
Bringing AGP Into Being
The Players
The First AGP Graphics Card
The First AGP Motherboard
Implementation
Early Benchmarks
Card Controversy
AGP 2.0 aka 4x
Competition & Litigation
AGP's Legacy
Footnote
References

First AGP graphics chipset: Cirrus Logic Laguna3D-AGP (3 Feb 1997)
First card with AGP connector: Trident 3DImàge 975 (24 Mar 1997)
First AGP graphics card: ATI 3D Rage Pro
First AGP Pentium motherboard: FIC PA-2012 (2 Nov 1997)
First AGP Pentium II motherboard: FIC KL-6011 (26 Aug 1997)

The first thing to understand about AGP is that it's not exciting. In fact it's deeply, deeply boring, but it is also absolutely pivotal to the success of the gaming-focused PC industry that we know today. Technically it wasn't unique or even revolutionary: it was just an extension of the existing 66MHz PCI 2.1 specification. The only real difference was that it had some enhancements designed specifically to handle textured 3D graphics and provided bandwidth exclusivity. In reality it's a bit trickier than that, of course, so let's take an in-depth look at how it came to be, as this is the slightly more interesting angle.

Note: this article is about consumer PC graphics at the 'affordable' end of the spectrum, with a focus on gaming. I'm also not getting heavily into the technical workings of AGP, as this is very well documented elsewhere. The Introducing AGP article from the September 23rd issue of PC Magazine is a good primer, while Anand Lal Shimpi's AGP Explained article goes into more technical depth. There's also the official Intel docs listed at the bottom of this page. The scope of this article is an historical perspective.

3D Takes Centre Stage

Prior to 1996, smooth 3D games on consumer PCs were only possible through some fiendishly inventive coding and a good CPU. If you wanted to play Doom, you needed a 486. Oh, you want to play Quake now? Buy a Pentium. Although out of the reach of most bedroom gamers in 1994, a Pentium-based system sporting a PCI graphics card was the nearest you could get to gaming graphics nirvana on a PC. The mainstream-focused Pentium 75 hit the market just before Christmas '94 and the following months saw hundreds of thousands of families purchasing their first IBM-compatible PC.

It was when RAM prices began to fall in 1996 that 3Dfx Interactive brought their Voodoo technology to the platform, establishing the PC as a legit gaming machine for the first time. Gamers could at last enjoy arcade-style high res, 3D, texture-mapped graphics in their PC games. But the limitations of the PCI bus were already being pushed, specifically the bandwidth between the system memory and the expansion cards themselves, such as graphics and storage. In an interview late in 1997, just after AGP came to market, John Carmack said:

"The biggest problem we have on 3Dfx right now is texture paging. Not triangle rate, not fill rate, it’s textures."

Scott Randolph, programmer of the Falcon 4.0 engine at Spectrum Holobyte, eloquently made the case for AGP at the developer conference in May 1996. He claimed that software 3D rendering typically used 80% of the CPU's processing time and filling pixels 'overwhelms' the PCI bus. The point was made that a Pentium running at 90MHz with dedicated 3D hardware outpaced one running at 166MHz using software rendering. While dedicated 3D hardware had solved this problem, it was still being held back by the system architecture itself.

PCI: The Limitations

Polygonal graphics had been used quite early in the PC's history, but it was the rise of texture mapping that had started saturating bandwidth. Devices using the PCI bus had to share bandwidth, which made it very difficult to guarantee graphics throughput. This negatively affected the speed of copying textures, for example, from main memory into the graphics RAM. By providing a dedicated connection to the system bus, AGP provided higher speed and non-shared bandwidth. The theory was that this would make it possible to keep the textures in system RAM, saving the time of transferring them over the bus and saving the cost of fast RAM on the video card. Additionally, the data and address buses were not multiplexed as on PCI, providing a further performance bump. Surprisingly, for what was high-end consumer tech at the time, one of the main motivations for AGP was cost saving. From version 1.0 of the AGP spec, published 31 Jul 1996:

"...3D rendering data structures may be effectively shifted into main memory, relieving the pressure to increase the cost of the local graphics memory ... Reducing costs by moving graphics data to main memory is the primary motivation for the A.G.P."

While the theoretical throughput of PCI was 133MB/s, cards were realistically operating in the 50-60MB/s range, while rendering texture-mapped polygons at 60Hz, 8 bits per pixel, with mipmapping demanded 96MB/s. AGP, with its 200MB/s specification, was designed to solve this problem with room to spare. Even without AGP-specific features, an AGP slot behaved like a 66MHz PCI slot. Intel obviously decided that just creating a dedicated PCI bus for graphics (like Micron PC's Samurai chipset did) wasn't going to be enough.
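As a back-of-envelope check on those numbers: the display mode and texel-fetch count below are my own assumptions, picked to show the shape of the calculation rather than the spec's exact derivation, but they land in the right ballpark:

```python
# Back-of-envelope for the bandwidth figures above. The display mode,
# texel-fetch count and overheads are assumptions for illustration.
width, height, fps = 640, 480, 60   # assumed display mode
bytes_per_pixel = 1                 # 8 bits per pixel, as quoted
texels_per_pixel = 4                # assumed: bilinear, mipmapped sampling

pixels_per_sec = width * height * fps                     # ~18.4M pixels/s
frame_writes = pixels_per_sec * bytes_per_pixel           # ~18.4MB/s to screen
texture_reads = pixels_per_sec * bytes_per_pixel * texels_per_pixel

total = (frame_writes + texture_reads) / 1e6
print(f"~{total:.0f}MB/s of bus traffic")   # ~92MB/s, near the quoted 96MB/s
print("vs. realistic PCI throughput of 50-60MB/s")
```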

Bringing AGP Into Being

The first revision of the AGP specification was completed by Intel in December 1995 and was first announced at the 1996 Windows Hardware Engineering Conference (WinHEC) in April, followed by the inaugural AGP Developers Conference during May 30-31. Presentations (which can be downloaded here) were given by a number of people involved in the technology:

  • Mike Aymar, VP and General Manager of Desktop Products Group at Intel Corporation gave an overview of the tech. The main point of his presentation seemed to be about getting arcade-quality 3D graphics onto volume PCs, so it was obviously recognised as a growth market.
  • Dean McCarron of Mercury Research talked about the history of graphics hardware on PCs, from MDA/CGA right up to software-rendered 3D and the challenges being faced by 3D hardware.  Intel's endorsement of Direct3D was very apparent during the AGP briefings, with the alliance being referred to as "great 3D".
  • Paul Richardson, Technical Evangelist at Microsoft, talked a lot about operating system support and the bandwidth limitations of PCI.
  • Jay Torborg, Director of Graphics and Multimedia at Microsoft, gave a technical run-down on DirectX and emphasised the importance of polygons/s and pixels/s rates, their importance to the performance of 3D graphics, and the role AGP would play in guaranteeing these.
  • Michael J. Allen, Jeff Lauruhn and Norman Rasmussen of Intel discussed the electrical, mechanical, and protocol aspects respectively in incredible detail. They also emphasised the increased data rate of AGP 2x, one of the key upgrades from PCI.
  • Finally, John Davies, the Director of Marketing for Consumer Desktop Products at Intel, gave attendees a bit of a pep talk. He promised them that, if hardware and software vendors worked closely together, they could grow the 3D graphics sector by a factor of 10 by 1998, with a target of arcade-quality graphics on the PC by the 2nd half of 1997. The Accelerated Graphics Port Implementors Forum was created to aid this, so that developers and OEMs could interact, access technical resources and attend workshops.

After releasing version 1.0 of the AGP specification on 31 Jul, hardware and software developers began racing in earnest to produce the first working samples.

The Players

Cirrus Logic not only claimed to have produced the first AGP chip to be sampled (i.e. physically sent to card manufacturers so they could integrate it in their products) with their GD5465-based Laguna3D-AGP on 3 Feb, but also to have had the first card demonstrated under Microsoft's Memphis operating system - a beta release of Windows 98 running DirectX 5.0 - a month later. Part of the reason Cirrus Logic were so quick off the mark was that their existing PCI-based GD5464 chip already supported storing textures in system memory. Cards based on the new chip were available to buy on 26 Aug, also being notable as some of the earliest cards to be equipped with Rambus's emerging RDRAM tech. According to Mercury Research, the performance of these cards was 'glitchy' at best, which is possibly why the chipset wasn't more widely adopted as the year went on. Here's a quote from their quarterly publication Accelerating PC Graphics from Q1 of 1997:

"The 5465 was announced in early 1997, and is Cirrus’ (and quite likely the industry’s) first ... AGP component. The device extends the 5464 with the AGP port and support for system-memory textures in the AGP environment."


Trident, the 3rd largest graphics chip manufacturer at the time, claimed to have had "the fastest mainstream 3D graphics accelerator on the market today" in their 3DImàge 975. Their efforts to make an AGP version were announced at CeBIT, held in Hannover, Germany on 28 Feb. Trident claimed it was the first chip to 'meet the basic requirements' of the technology in a press release dated 24 Mar, which means they were the first manufacturer to produce a working card with an AGP connector. Technically this was just a cosmetic change to their existing PCI design, so it doesn't take the title as the 'real' first card. Jaton, the "largest" PC expansion card manufacturer of that time, announced they would use the 975 chipset on 3 Jun. The 985, a faster version supporting AGP 2x, appeared soon afterwards and was being "sampled in limited quantities" at the end of August. This card also employed RDRAM. When FIC's socket 7 AGP board was released at the beginning of Nov, this card and ATI's effort were apparently the only ones to support 2x mode.

3Dlabs threw their hat into the ring on 24 Mar, announcing that their GLINT Gamma chip was 'expected to be the first AGP-compliant graphics product' but, as we know, Cirrus Logic's Feb announcement puts paid to this claim. They announced their PCI Permedia 2 card in May and an AGP variant was used by FIC to benchmark their KL-6011 board at the end of Aug. Shipments of the card were officially announced on 18 Aug but Tom's Hardware cite Diamond's Fire GL 1000 Pro as "one of the first AGP cards available" so they can't be ruled out.

NVIDIA were still very much a fledgling company at the time. Following the failure of their NV1 architecture, when Direct3D made its quadratic texture mapping technology irrelevant, they were understandably nervous about going 'all in' with AGP. They did promise to release "the industry's first 3D multimedia accelerator [on] ... AGP systems" at Intel's Visual Computing Day, announcing their RIVA 128 chipset at WinHEC a month later. Although their chip was the choice of some of the first AGP cards, such as Diamond's Viper V330 and STB's Velocity 128, these were AGP 1x models so neither qualifies as the first card. This moment marks the beginning of a somewhat prosperous future for the firm, however.

Rendition had built a reputation for producing impressive and innovative 3D hardware since their V1000 chipset was unleashed in 1996. The PCI card's integration of both 2D and 3D capabilities set it apart from 3Dfx's Voodoo, and earned it the honour of being the first card to support Quake in hardware. Their Vérité V2200 chipset was also demonstrated at WinHEC, with review hardware appearing by 5 Aug, but also limited to AGP 1x.

S3 were expected to be strong players, having dominated the consumer 2D marketplace with their Trio64 line of chipsets and holding 43% of the market in Q3 of 1997. Their Virge/GX2 chip was unconvincingly announced at WinHEC on 8 Apr and didn't see anything like the adoption rates that other manufacturers enjoyed. Like everyone else, they showed off physical hardware at Comdex that November, but they can't be ruled out either due to very early mentions.

ATI's 3D Rage Pro also lays claim to being the first AGP graphics chip to be demonstrated at 2x mode, debuting at Intel's Visual Computing Day on 24 Mar, nearly a month after Cirrus Logic's product was officially released. This is backed up by the June issue of Wave Magazine, so it could be argued to be the first 'legitimate' AGP graphics card based on all these factors. This was still the case in November, according to Thomas Pabst of Tom's Hardware.

Matrox were in pole position to take a stranglehold on the business 3D graphics market, with their Millennium II card widely considered to be the best performer around mid '97. They seem to be the only company that showed no interest whatsoever in being 'first', which is unsurprising considering their reputation for high performance backed by rock-solid drivers. They announced the AGP variant of the Millennium II - boasting their MGA-2164W processor - in September with full optimisation for the new bus. The card was widely adopted by system integrators, many of which were demonstrated at Comdex Fall that year.

Number 9 also sunk their teeth in, not just producing cards but also a chip of their own. They announced sampling of their AGP-supporting Ticket To Ride chip on 15 May, which is pretty early but not early enough. Their Revolution 3D card based on this chip was officially released to retail on 25 Aug, to coincide with Intel's release of their motherboard chipset, which would have been a pretty solid shout for 'first card' were it not for ATI's efforts.

Intel developed a graphics chip in conjunction with Lockheed-Martin's Real3D division called the i740 (codenamed Auburn) in parallel with the 440LX chipset. It was intended to showcase AGP at its best and had no on-board texture memory at all. They managed to produce a prototype in time for Comdex '97 but didn't officially announce the card until February '98. It's almost as if they delayed it deliberately so that other chip makers would push the technology instead, as there were significant fears that Intel was trying to take over the 3D hardware market, as they had done with motherboard chipsets.

What of 3Dfx, I hear you ask? It's difficult to get into this subject without bringing up the whole sorry story of how things went down, but I'll try to be brief. It seems that AGP threw a bit of a spanner in the works for them in '97. Their hardware was designed to be scalable from the beginning, a feature no other graphics chip on the platform could boast. Given that their Voodoo2 chipset was announced on 3 Nov, they had obviously spent '97 developing their PCI product while everyone else was chasing AGP. Although it was compatible with the new interface, the Voodoo2's key feature was SLI (Scanline Interleave), which allowed two cards to be linked together to increase performance. With AGP being a single-slot deal, only Quantum 3D produced an AGP Voodoo2, with their eye-wateringly expensive Obsidian card, while everyone else had to use up three PCI slots (including the requisite 2D card) for the same thing. Although the V2 took the industry by storm with its performance when it was released, the competition was moving so rapidly that things didn't work out well for them at all. 3Dfx never took AGP seriously, even with later products, so they do not belong in this story.

Related to the downfall of 3Dfx was NEC's PowerVR chipset. Their PCX2 card was released in April '97, and their next-gen chip saw a delayed release on the PC after it was adopted by Sega for their ill-fated Dreamcast console, a contract 3Dfx lost. As such, they made no overtures towards AGP whatsoever at this stage of development. VGA stalwarts Oak, meanwhile, released an AGP version of their WARP 5 at some point during Q4, after initially releasing the accelerator at E3 in July '97 without a single mention of the emerging tech.

The First AGP Graphics Card

Identifying the very first AGP card (which was the original motivation behind this article) depends on your criteria. First you have the graphics chip itself. Once a design has been finalised, it is 'sampled', i.e. manufactured in limited quantities and sent to graphics card makers such as Diamond or STB (unless you're Matrox or ATI, who made their own boards). Once evaluation hardware and drivers have been produced, these are tested, refined, and eventually made available for sale through retail channels. Some existing graphics chips could be easily adapted to AGP, although many of the first AGP cards were simply PCI models with an AGP connector. This somewhat undermined AGP's performance because, to take full advantage of the new bus, the additional features of AGP had to be supported, and this usually required designing a brand new chip. This made significant demands on the finances and resources of the industry's chip makers and may have actually broken a few of them.

There are a couple of key dates in identifying the first board and the first is Intel's Visual Computing Day, on 24 Mar 1997. Here, cards were announced from ATI, Cirrus Logic, S3, Trident, and 3Dlabs, and hardware was demonstrated by each. How well that hardware was working (they would have been engineering samples) is another question. If we fast forward to Mercury Research's report in November (reflecting tests performed around September), they included working boards from #9, ATI, Matrox, 3Dlabs, S3 and NVIDIA. Only ATI, S3 and 3Dlabs are common to both lists, but ATI's card was the only one capable of 2x mode.

The boring truth of the situation is that a board doesn't really exist until it's available at retail, as anything prior to this would be engineering samples or evaluation hardware. These don't really count because they are like ghosts. This brings us to key date no. 2, which was the official launch of the 440LX chipset on 25 August 1997. If we look at the press releases on or around that date, we can see that a number of cards were released simultaneously:

  • NVIDIA announced that Diamond and STB would be using their RIVA 128 chip.
  • Cirrus Logic announced volume shipping of their Laguna3D board.
  • ATI announced two different AGP incarnations of their 3D Rage Pro in the XPERT@Work and XPERT@Play boards.

These three chip makers take the joint 'first AGP card' crown officially, but no one likes a tie, so ATI are announced as unofficial winners based on the fact that theirs was the only AGP 2x card. Trident are out of the running because they had only just started sampling their 3DImàge 985 chipset (though the 975 was the first card with a physical AGP connector), while 3Dlabs were in the process of shipping their Permedia II to board manufacturers for development at this time. Matrox didn't announce their AGP Millennium II until September.

The First AGP Motherboard

This is somewhat easier. Although VIA's AGP-capable VP3 chipset was announced in June, the first motherboard chipset to appear was Intel's. The specs for their 440LX chipset were unveiled in Jul 1997, with motherboard details released a month later. On 26 Aug, the day Intel's chipset hardware itself was formally released, it was FIC's KL-6011 that was the first AGP board to retail, despite Intel's own AL440LX and NX440LX boards being announced that month.

An interesting point to note is that Intel didn't produce a chipset for the socket 7 platform, perhaps intending to leave other CPU manufacturers such as AMD and Cyrix - who couldn't use Intel's patented slot 1 - out in the cold. VIA Technologies came to save the day with their VP3 chipset. Pre-production versions of FIC's PA-2012 motherboard, released on 2 Nov, were the first to use it.

Implementation

Remember that the primary driver behind AGP was cost-saving. The idea was supposed to be that, by storing textures in system RAM and transferring them as required into a comparatively small - but fast - cache on the graphics card, large amounts of 'expensive' video RAM weren't needed. The standard amount of video RAM on most cards was 4MB, although the type of RAM varied between EDO, SGRAM and RDRAM. By the time AGP cards were fully developed, memory prices had dropped far enough that board manufacturers were quite happy to store textures on the card instead. AGP cards that were simply PCI cards adopting the new connector took advantage of the dedicated bus and increased frequency, but not any of the AGP-specific features such as 2x data transfers and side-band addressing.
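A quick worked example of why 4MB was so constraining once textures entered the picture. The buffer sizes are illustrative assumptions; real cards also had to find room for z-buffers and other working storage:

```python
# How little room 4MB of mid-'97 video RAM leaves for textures once the
# framebuffer is paid for. Sizes are illustrative, not from any real card.
VRAM = 4 * 1024 * 1024            # a typical 4MB card
framebuffer = 800 * 600 * 2 * 2   # double-buffered 800x600 at 16bpp
texture = 256 * 256 * 2           # one 256x256 16bpp texture (128KB)

remaining = VRAM - framebuffer
print(f"Left after framebuffer: {remaining // 1024}KB "
      f"(~{remaining // texture} such textures)")   # ~2221KB, ~17 textures
```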

From a software perspective, buggy drivers were an issue for a number of cards, as we'll see, and they worked in conjunction with Intel's VGARTD.VXD (virtual device driver) which enabled AGP feature support in Windows 95. This support became official as part of OEM Service Release (OSR) 2.1, which was released by Microsoft on 26 Aug '97. Note that the public version of DirectX at the time was 2.0a.

Although most cards were officially available to buy at Comdex Fall that year, some appeared sooner, though these were mostly pre-release hardware.

Early Benchmarks

The earliest, publicly-available benchmarks of AGP cards I've been able to find come from FIC, in an evaluation of their KL-6011 board on 25 Aug '97. They ran tests using the DirectX-based 3D WinBench 97, under Windows 95 on a Pentium II 300MHz system. Initial results indicated that AGP performance was vastly superior to PCI, with AGP versions of the Permedia II and Rage Pro scoring 170 and 179 respectively, while a PCI Matrox Mystique scored 55.7.

Something that was common among desktop 3D cards of the time was that certain 3D features, such as fog tables, were either not implemented or were implemented badly. Such advanced features were the reserve of high-end cards such as ELSA's legendary GLoria line. This was demonstrated by benchmark results showing the unrefined nature of AGP-based hardware and software at the time.

The Cirrus Logic and Trident offerings (both using RAMBUS memory, incidentally) barely cross the line, while ATI's card does a respectable job but is outperformed across the board by the Permedia II. ATI's card may have been the first to market, but it wasn't the most credible product at the time.

Anandtech's review of the Diamond Viper V330 (using NVIDIA's RIVA 128) at the end of September saw it outscore the Rage Pro's 191 in 3D WinBench 97 with a score of 280. Chalk one up to NVIDIA.

Here we see results of a test of high-end, 300MHz Pentium II-based systems in the 23 Sep issue of PC Magazine, again using the 3D Winmark 97 AGP graphics benchmark. They used an AST Bravo machine to compare the PCI and AGP 2x versions of the ATI 3D Rage Pro and found framerates to be 1.85fps and 18.6fps respectively, which is a jaw-dropping improvement. As you can see in the graphic above, the Dell, Gateway and Micron computers all used 3D cards based on NVIDIA's RIVA 128 chip and significantly outperformed the competition in 3D tests (except for the AGP-specific one, which the Gateway couldn't perform). The AST used ATI's Rage, while the NEC sported Number Nine's Revolution.

Tom's Hardware put a bunch of AGP cards to the test at the end of October, and NVIDIA's chip, yet again, came out on top by some margin. ATI's card couldn't participate at the time because of driver issues, so there is no direct comparison at this point. The Permedia II put in a respectable showing though, particularly in OpenGL tests, while Number Nine's card languished behind. A more extensive round-up came at the beginning of November.

Mercury Research's Q4 review (Nov '97) of 3D accelerators told a similar story, using the more recent 3D WinBench 98 suite (featuring DirectX 5) and a 300MHz-based Pentium II system. Although they tested 28 cards, only 6 were AGP capable.

As you can see, the RIVA 128 tops the results again by a decent margin, but not a million miles away from the Rage Pro. Clearly the benchmark wasn't optimised for AGP, as the scores from each respective PCI equivalent (not shown) were faster, but by a negligible margin.

One factor Mercury struggled with in particular was drivers. This was particularly apparent with the 3Dlabs offering from Diamond, where the AGP card scored lower (317) than the PCI offering (352).

Performance was predicted to improve dramatically over time as drivers improved, but this is where we get into the first mention of controversy. In their Q4 report, Mercury explained that some vendors were expressing concern that competitors' drivers were, perhaps, 'too good'. This was allegedly achieved by sacrificing compatibility and accuracy for speed, with the conclusion that:

"The atypical performance of the “good” drivers raises the suspicion that something unusual is taking place."

This is something we'll come back to.

Once he had got all the drivers he needed, Thomas Pabst ran another group test of 3D cards in November. The results are reproduced here:

Yet again, NVIDIA's RIVA 128 is still dominating the scores, as it is used by the top 3 cards. Also observe that there is no discernible difference between the AGP and PCI models, which again shows that the test is not AGP-optimised. We do see that the inclusion of ATI's Rage Pro puts it ahead of the Diamond Fire GL (powered by the Permedia II), but some way behind the leaders.

The coup de grâce comes in the form of PC Magazine's insanely detailed group test of graphics cards at the beginning of December. Like we didn't know it already, NVIDIA takes the crown (again), and the same patterns as before are repeated.

I feel kinda sorry for Trident, S3 and Oak with their pathetic offerings tacked onto the end of the 'proper' results table. While ATI seem very much to have been the first to get a card to market, there is no doubt that NVIDIA won the battle of whose card was best quite early on. I think 3Dlabs should also get a shout for their early success and stable offering in the development of the Permedia II, although they're not around anymore to celebrate it.

Card Controversy

But there's a problem. Remember what Mercury Research said about something 'unusual' taking place? In Feb of '98, Tom's Hardware reported on this mystery, with some very intriguing findings. Notice how every benchmark I've shared has been in the same program: 3D WinBench. It seems that ATI had optimised their drivers to perform outstandingly well in this high-profile and widely-used test suite in an attempt to mask the hardware's deficiencies in real world applications such as games. 

AGP 2.0 aka 4x and the Pro Variant

One potentially confusing aspect of AGP is the naming convention. Only three versions of the spec (1.0, 2.0 and 3.0) were released before it was superseded by PCIe, but each new version brought a doubling of bandwidth, and graphics cards and motherboards were usually advertised by the speed they supported rather than the spec version. AGP 1.0 covered both the 1x and 2x speeds, AGP 2.0 added 4x and AGP 3.0 added 8x, hence the confusion. Each time the spec changed, so did the signalling voltage, so new cards couldn't be used in old slots and vice versa, although there was such a thing as a 'universal' slot. This has been expertly documented elsewhere.
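The arithmetic behind those speed grades is straightforward. Here's a minimal sketch, assuming the well-known 66MHz AGP base clock and 32-bit data path, showing how the spec versions, transfer modes and signalling voltages line up:

```c
/* AGP spec version vs transfer mode, signalling voltage and peak
 * theoretical bandwidth (66MHz base clock, 32-bit / 4-byte bus). */
#include <stdio.h>

int main(void)
{
    const double base_mhz  = 66.67;  /* AGP base clock */
    const int    bus_bytes = 4;      /* 32-bit data path */
    const struct { const char *spec; int mult; const char *volts; } modes[] = {
        { "AGP 1.0", 1, "3.3V" },
        { "AGP 1.0", 2, "3.3V" },
        { "AGP 2.0", 4, "1.5V" },
        { "AGP 3.0", 8, "0.8V" },
    };
    for (int i = 0; i < 4; i++)
        printf("%s %dx (%s signalling): ~%.0f MB/s peak\n",
               modes[i].spec, modes[i].mult, modes[i].volts,
               base_mhz * bus_bytes * modes[i].mult);
    return 0;
}
```

Running it yields the familiar figures of ~266MB/s at 1x up to ~2.1GB/s at 8x, which is why the mode number, rather than the spec version, was the figure printed on the box.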

Development of AGP 4x was announced at Intel's Visual Computing Day and was expected to be added to the official spec by the end of the year, but version 2.0 wasn't officially published until May '98. Not long after this, the specification for AGP Pro was released: an extension of standard AGP intended for workstations, providing up to four times the power (110W) of a standard slot. It was also adopted by Apple in their Power Mac G4 range, albeit not until October 1999; they added an additional power connector to their boards to support their Apple Display Connector (ADC) monitors via a single cable.

New AGP cards are still available today, although they are somewhat utilitarian.

Competition & Litigation

AGP was intended as an open standard from the beginning. Intel believed, and stated numerous times, that a "reciprocal royalty-free license" would be the only way to foster rapid development of the platform. It became apparent through a 2001 court case, Intel Corporation v. VIA Technologies, Inc., that Intel probably only wanted this openness on Intel-based systems. Without wanting to get into it too much (again), Intel's competitors had been using their CPU socket designs since the '70s. Intel brought an end to this with the introduction of the Pentium II's slot 1 in summer '97, leaving the likes of AMD, Cyrix and IDT with socket 7 (I'm ignoring socket 8). It wasn't until 28 May '98 that AMD officially introduced their competing CPU, the K6-2. In order to support higher clock speeds, AGP and other technical improvements, AMD relied on 3rd parties such as SiS, ALi and VIA to design and fabricate motherboard chipsets for their new CPU, as they didn't make chipsets themselves. These partners extended the socket 7 specification, and the 'Super 7' platform was born.

Intel didn't appear to intentionally obstruct these companies from making AGP work with their competitors' CPUs, but it's possible they simply didn't expect it to happen. If so, they weren't far wrong, as AGP was plagued with reliability issues on Super 7 until the Athlon's slot A platform was introduced. Intel's suit against VIA was the first evidence that its open licensing of AGP was disingenuous: Intel claimed that only the "baseline" features of AGP were licensed royalty-free, and in particular that the new 'fast write' feature (which lets the CPU send data directly to the graphics card, bypassing main memory) was excluded, in an attempt to at least stifle performance on 3rd-party motherboards. Unfortunately for them, the court rejected their semantic argument and dismissed the case.

AGP's Legacy

There really isn't much more to say about it all. What took place in 1997 was AGP almost single-handedly setting the 3D gaming graphics industry on the path to the success we recognise today. Prior to that, there were disparate efforts by many competing parties (led largely by 3Dfx), a bunch of proprietary APIs and a general headache for games developers and users alike. With Windows 98, Microsoft's DirectX platform began to be taken more seriously, and the efforts of the graphics card manufacturers unified PC gaming around Windows, rather than DOS, as the dominant gaming operating system. Say what you like about Intel and Microsoft's dominance of the platform - their efforts in bringing arcade-quality games to the PC were a resounding success, especially considering how hard it had been, historically, to establish new hardware standards (*cough* EISA *cough* MCA *cough* VLB).

Although there were a total of 38 competitors vying for a piece of the 3D pie in 1998, we know how many chipset makers came out the other side: two. ATI were eventually acquired by AMD in 2006, and NVIDIA hired pretty much every 3D graphics expert in the industry, most of them coming from SGI's OpenGL dev team.

Footnote

If you have enjoyed reading this, please consider buying me a coffee via Ko-Fi. I try my best to make sure that everything I write is historically accurate by citing primary sources (over 100 in this article) and weaving together some kind of story from everything, but if you do think I've got something wrong, please comment below with a source and I'll be really happy to make corrections. I really enjoy writing about this stuff, and it can be really time-consuming! Also, I promise to give 20% of every donation to the Internet Archive, without which this article (and others I've written) wouldn't be possible. Thanks for reading :)

References

1996
28 Mar: Intel to roll out new 3D technology. Source: c|net
2 Apr: Intel runs graphics faster for cheaper. Source: c|net
9 May: ATI Joins AGP Implementors Forum. Source: ATI
30 May: Intel host AGP Developers Conference. Source: CaseText
30 May: PDFs from the AGP Developers Conference. Source: AGP Developers Forum
Jul: Avoid the traffic jam. Source: CGW
31 Jul: AGP 1.0 specification released by Intel. Source: Intel (PDF)

1997
3 Feb: Cirrus Logic samples industry's first AGP 3D graphics controller. Source: Cirrus Logic
28 Feb: At CeBIT '97 Trident showcases AGP technologies. Source: Trident
24 Mar: Trident's 3DImàge 975 now supports Intel's AGP. Source: Trident
24 Mar: 3Dlabs reveal GLINT Gamma chipset. Source: 3Dlabs
24 Mar: Leading graphics chip companies announce AGP-Enabled products. Source: Intel
24 Mar: 3D Rage Pro first chip to demonstrate support for AGP 2x. Source: ATI
24 Mar: NVIDIA announce first AGP card. Source: NVIDIA
24 Mar: Intel outlines 3-D graphics plan. Source: InfoWorld
31 Mar: Intel AGP spec to speed up graphics. Source: InfoWorld
Apr: PCI version of nVidia's RIVA 128 is released. Source: IEEE Computer Society
8 Apr: S3 announce ViRGE/GX at WinHEC. Source: S3
8 Apr: NVIDIA and SGS-Thomson announce RIVA 128. Source: NVIDIA
13 May: Matrox Millennium II "shatters all previously held Graphics Winmark records". Source: Matrox
15 May: Number Nine unveils "Ticket To Ride"
18 May: Permedia 2 announced. Source: 3Dlabs
2 Jun: VIA, not Intel, to be first with AGP Chip Set. Source: InfoWorld
19 Jun: Company perspective - ATI. Source: Wave Report
19 Jun: Product brief for Warp 5. Source: Oak
Jun: Developers begin receiving samples of Intel's i740 graphics chipset. Source: Gamasutra
Jul: Version 1 of 440LX chipset specification released. Source: Intel 440LX Datasheet (PDF)
1 Aug: Chipset guide - Via VP3. Source: Anandtech
Aug: Specifications for Intel's AL440LX motherboard. Source: Intel (PDF)
Aug: Specifications for Intel's NX440LX motherboard. Source: Intel (PDF)
5 Aug: Review of the Rendition Vérité V2200. Source: Anandtech
11 Aug: AGP and on-chip cache will speed graphics performance. Source: InfoWorld
11 Aug: Rendition's V2200 top in 3D benchmarks. Source: Rendition
11 Aug: Accelerating PC Graphics '97 Published. Source: Mercury Research
16 Aug: Intel AL440LX motherboard (266MHz). Source: spec.org
25 Aug: ATI's 3D RAGE PRO chips announced. Source: ATI
25 Aug: Intel Introduces 440LX AGPset. Source: Intel
25 Aug: FIC KL-6011 benchmarking report. Source: FIC
25 Aug: Jaton to use Trident 3DImàge 985 chip. Source: Jaton
26 Aug: Trident sampling limited quantities of 3DImage985. Source: Trident
26 Aug: Cirrus Logic ships Laguna3D-AGP card. Source: Cirrus Logic
26 Aug: FIC first to release 440LX board with the KL-6011. Source: ZDNet
27 Aug: Chaintech release the 6LTM. Source: ZDNet
28 Aug: 3DLabs' Permedia 2 announced, with AGP support. Source: EE Times
2 Sep: OEMs cautious about releasing 440LX boards. Source: SCMP
12 Sep: Matrox announce Millennium II AGP. Source: Matrox
14 Sep: Review of the Diamond Viper 330. Source: Anandtech
23 Sep: Review of 300MHz Pentium II systems. Source: PC Mag
28 Sep: AGP explained. Source: Anandtech
28 Sep: AGP performance explained. Source: Anandtech
Oct: Specifications for RIVA 128 chip released. Source: SGS-Thomson (PDF)
21 Oct: Review of the STB RIVA 128. Source: Anandtech
27 Oct: Group test of AGP graphics cards. Source: Tom's Hardware
28 Oct: Review of Pentium II boards with Intel's 440LX chipset. Source: Tom's Hardware
29 Oct: Review of FIC's PA-2012, with the RIVA 128. Source: Tom's Hardware
2 Nov: FIC officially release the PA-2012, the first AGP-capable socket 7 board. Source: Tom's Hardware
3 Nov: 3Dfx interactive announces the revolutionary Voodoo2 graphics chipset. Source: 3Dfx
6 Nov: STB's Velocity 128 dominates reviews. Source: STB
9 Nov: 3D accelerator card reviews. Source: Tom's Hardware
17 Nov: Quantum3D announces support for new Voodoo2 chipset. Source: Quantum 3D
18 Nov: Matrox Millennium II AGP achieves wide-spread acceptance with top SIs. Source: Matrox
20 Nov: Intel show prototype i740 at Comdex Fall. Source: ZDNet
25 Nov: Interview with John Carmack. Source: Boot Magazine Archives
2 Dec: Graphics accelerators. Source: PC Mag
8 Dec: Intel aiming for business desktop systems with i740. Source: InfoWorld

1998
Jan: Version 2 of 440LX chipset specification released. Source: Intel 440LX Datasheet (PDF)
12 Feb: Intel announce i740 graphics chipset. Source: Intel
12 Feb: Intel enter 3D graphics market to stiff competition. Source: Computer Business Review
14 Feb: 3D Winbench 98 - Only a misleading benchmark? Source: Tom's Hardware
10 Mar: Review of Real3D Starfighter graphics card. Source: Anandtech
17 Mar: Test results of the most popular video accelerators. Source: IXBT Labs
24 Mar: Voodoo2 dominates independent 3D PC accelerator tests. Source: 3Dfx
23 Apr: Maximising AGP Performance 2.1. Source: Intel
4 May: Version 2.0 of AGP spec released. Source: archived at University of California (PDF) and Intel
28 May: AMD introduce K6-2 processor at E3. Source: AMD
25 Jul: < $99 2D/3D accelerators square off. Source: coolcomputing
Aug: AGP Pro 1.0 specs published. Source: Intel
2 Sep: Voodoo2 Accelerator Review. Source: Tom's Hardware
8 Sep: 3Dfx wins best PC hardware award. Source: 3Dfx

1999
Apr: AGP Pro version 1.1 specs. Source: Intel
Aug: AGP Pro version 1.1a specs. Source: Intel

2000
22 Nov: AGP 8x announced. Source: Eurogamer

2001
20 Mar: Intel Corp. v. VIA Technologies, Inc. Source: Casetext (Nov 2001, Feb 2003)

2002
Sep: Version 3.0 of AGP spec released. Source: Intel (PDF)

Which version of Windows 95 Supports AGP? at Computer Hope
How to Determine the Version of Windows 95/98/Me in Use at Microsoft
The Story of the 3dfx Voodoo 1 at Fabian Sanglard's Site
AGP Implementors Forum archives from 1997, 1999 and 2002
AGP Overview at Intel from 1998 and 2002
AGP Tutorial at Intel
AGP Design Guide 1.0 at Intel
AGP Design Guide 1.5 at Intel
AL440LX Motherboard at Intel
82440LX Overview at Intel
Community at Iceteks
i740 Information Site at Intel
Intel i740 Specifications at retronn.de (PDF)
Coming Soon Magazine
AGP Compatibility for Sticklers at Playtool
Technology Evolution at AMD

1997 Press Release Archives
Rendition - STB - Cirrus Logic - NVIDIA - Matrox - 3Dlabs - Trident - PowerVR - 3Dfx - S3 - #9 - ATI - Oak - Intel - AMD

1998 Press Release Archives
3Dfx

History

1/1/23: Added 'first card with AGP connector' details. Thanks to yjfy on Twitter.
4/12/22: Initial version published.