
Friday, 2 October 2020

The (Almost) Definitive Pentium III Tualatin Article



From Deschutes to Coppermine
1GHz... and Beyond?
Enter Tualatin
The Big Differences
Coppermine T
Chipsets & Motherboards
The 440BX Hack

Pentium III Coppermine die shot. Source: Intel

Intel's Tualatin CPU is not well understood, despite a resurgence in interest in the last few years. Some say you can run it on a Pentium II motherboard. Some say it's the architecture that became Intel's current Core range. Some say no one pronounces it properly. Some even say it was faster than the first Pentium 4s. Let's find out the truth once and for all. But first, see how good your existing knowledge is with a game of True or False!

1. You can easily tell the difference between a Coppermine and a Tualatin because there's no integrated heat spreader on a Coppermine.

2. Although released 6 months after the Pentium 4, Tualatin was the first Intel CPU to use the 0.13µm manufacturing process.

3. Tualatin CPUs cannot be used on standard Pentium III motherboards because they use AGTL+ signalling instead of AGTL.

4. Intel's controversial Processor Serial Number feature, introduced with the Pentium III, was not implemented in Tualatin CPUs following privacy concerns.

5. Intel won the GHz war by releasing their 1GHz Pentium III a month before AMD's Athlon.

6. The Coppermine CPU is so-called because it uses copper interconnects instead of aluminium.

7. The 1.4GHz Tualatin-based Celeron was so good at overclocking, you could get it to run at 2.6GHz.

8. Although not in the Xeon family, the Tualatin could be used in configurations of up to 8 CPUs.

(answers at the very end)

From Deschutes to Coppermine

Although we're familiar with CPU codenames today, they were a relatively new thing when Tualatin was revealed in 2001. It was the final incarnation of the Pentium III and came at a time when Intel was making great strides in revolutionising its CPU designs, mostly due to strong competition from AMD. The first Pentium III, dubbed Katmai, arrived in Feb '99 using a similar package (SECC2) to the Pentium II (SECC). Although it introduced SSE instructions and brought noticeable performance improvements over the Deschutes Pentium II (both were made on the 0.25µm manufacturing process), it was an incremental release with more similarities than differences. Coppermine's technical enhancements were key to Tualatin's success, however, and that's what we're going to look at in more detail.


This is a necessary paragraph but I'll try to keep it brief. Cache is a very small but very fast area of memory that holds frequently used instructions and data, so that the CPU doesn't have to go running off to the relatively slow RAM each time. It was first introduced on 386-equipped PCs, was about 64KB in size, and resided on the motherboard. One of the improvements that came with the 486 was to put a small amount (8KB) of cache for both data and instructions on the CPU die itself. This was dubbed level 1 cache, while a larger amount remained on the motherboard (level 2), the size of which depended on how much RAM was installed. Both these amounts were increased with the introduction of the Pentium, and the L1 cache was split into separate areas for data and instructions (8KB each).

The Pentium Pro's CPU die and L2 cache combined on the same package

The most relevant change came with the Pentium Pro, where up to 1MB of L2 cache was located in the CPU package itself. This enabled it to run at the same speed as the CPU, instead of being limited by the speed of the motherboard. A tiny fault in either component, however, would cause a CPU to be discarded, resulting in low yields, high prices and relatively scarce availability. The Pentium II was quickly introduced, modifying this arrangement by placing the L2 cache and the CPU on a shared PCB instead, resulting in the Slot 1 form factor and higher yields, but half the cache speed. The same package was used for the Katmai Pentium III. Coppermine (Oct '99) was such a revolution because the move to a 0.18µm fabrication process made full-speed, on-die level 2 cache possible. Intel dubbed it Advanced Transfer Cache, which connected to the CPU via a 256-bit bus. According to this source, Tualatins were able to cache the full 64GB of addressable RAM (thanks to Standard Def Steve on Vogons).

1GHz... and Beyond?

Coppermine was a very successful product for Intel but it wasn't all plain sailing. This was a time when CPU frequency (known better as 'speed') was everything, and AMD had stolen a march on Intel by releasing their 1GHz Athlon first. The Pentium III's Processor Serial Number feature had caused more consternation than excitement too, but the really rocky period was still to come. Although clock speeds on desktop CPUs had climbed from 300MHz to 1000MHz in the space of 18 months, the ceiling was hit in a big way. In a brilliant show of tech journalism, HardOCP, AnandTech and Tom's Hardware put the initial 1.13GHz Coppermine through their standard testing regimes in July 2000, found serious instability, and forced Intel into a recall. It took about a month for Intel to admit the problem, however, with the Pentium 4 being announced during this period. Following a redesign, the recalled CPU was eventually re-released as the cD0 stepping, but it was clear that scalability was going to be an issue at clock speeds over 1GHz, despite Intel previously stating at the 1GHz mark, "there's plenty of headroom left…"

If you want to read more about the enhancements introduced with Coppermine, here's a full write-up from Thomas Pabst.

Enter Tualatin

Intel introduced chips using their new 0.13µm process in Feb 2001. Initially touted as mobile processors, this new Pentium III core was dubbed Tualatin, with some relatively minor differences from Coppermine, at least on the surface: Data Prefetch Logic to make better use of the L2 cache, and a modified Voltage Regulation Module (VRM) spec (more on this later) bringing lower power consumption. Additionally, the extra real estate on the die made 512KB of cache possible. If you compare the two die shots you can see the differences quite clearly. You could even call Tualatin the Pentium 3.5 (III.V would be funnier). Six months later, server and desktop variations of the chip were announced.

So how do you say it? There are some weird pronunciations out there but, if you want to hear it from the horse's mouth, it's too-ALL-uh-tin. It's quite a well-known fact that the chip was named after one of Intel's manufacturing facilities in the Tualatin Valley, Oregon.


There are 4 types of Tualatin that need to be discussed because that's one of the most confusing aspects of this CPU: server, desktop, Celeron and mobile. Here's a very brief summary of the distinguishing features:

Server (aka Pentium III-S): 133MHz, 512KB cache.
Desktop (aka Pentium III): 133MHz, 256KB cache.
Celeron (aka Tualeron): 100MHz, 256KB cache.
Mobile: 100/133MHz, 512KB cache, SpeedStep.

Despite some sources saying otherwise, Data Prefetch Logic was included with all models as it was an integral feature of the Tualatin design. The principal focus of this article is the server version, because that's the one that produced all these killer benchmarks people talk about. Yes, the Celeron version (aka Tualeron) was cheap and could overclock quite well, so it was discussed a lot at the time. They're pretty versatile performers but, given that the cost difference isn't so much of an issue these days, you may as well plump for the full 512KB model if you can. Even better, if you get the server version and the right board, you can run multiple CPUs.

Intel chips of the Pentium III era are pretty easy to identify if you just use the Sample Specification (sSpec or S-Spec) code: a five-digit code displayed on a label on the CPU package. This code is unique to a batch of CPUs and will give you all the info you need if you look it up. If for some reason this code is obscured, there are still ways to work it out, but no single clue on its own will tell you what the CPU is. Here's how to identify a desktop Tualatin:

IHS: the presence of the heat spreader only tells you it's a late Coppermine, Tualatin or Tualeron.
Model: the server version of the chip is labelled Pentium III-S, Celeron is obvious, but if it says Pentium III, you need to check the voltage.
Voltage: if it says 1.45, it's a Tualatin core, whatever the model. 1.7 or more means Coppermine.
Cache: only the server version has 512KB cache, everything else is 256KB.
FSB: Tualerons are labelled 100, everything else is 133.

Pentium III-S Tualatin Die Shot. Source: Intel

The Big Differences

On the surface, there aren't many differences, so we need to zoom into the die itself. Hilariously, Intel doesn't bother with things like brand names in its datasheets, as that would obviously be absurd. The product 'name' they use for the full Tualatin is more of a definition:

Intel® Pentium® III Processor with 512KB L2 Cache at 1.13GHz to 1.40GHz

As compared to the plain old Coppermine, which is described thus:

Pentium® III Processor for the PGA370 Socket at 500 MHz to 1.13 GHz

Architecturally, the increased L2 cache really was the biggest new feature and is the thing emphasised most heavily in the datasheet. The Processor Serial Number feature was retired very quietly, with emphasis instead being placed on the newly-introduced Data Prefetch Logic defined as:

add[ing] functionality that anticipates the data needed by the application and pre-loads it into the Advanced Transfer Cache, further increasing processor and application performance.

Both the desktop and server Tualatins were available at frequencies of 1GHz, 1.13GHz, 1.26GHz and 1.4GHz, with a 1.2GHz variation exclusive to the desktop version. Compare this to the 20 Coppermine models and you get an idea of how long each family was around. One aspect I find hilarious is the CPUID Intel chose to 'distinguish' the two chips. Coppermine was allocated 068xh. Tualatin's was 06Bxh. Yeah. B vs 8. Whether the increment came from a pre-defined process or was chosen deliberately, you'd think they could have made it less ambiguous. Very odd.
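For the curious, those signatures can be pulled apart with a few bit shifts. Here's a quick illustrative sketch of how the family/model/stepping fields are packed into the value a P6-era CPU returns (the two stepping values below are examples I've picked, not an exhaustive list):

```python
# Decoding a P6-era CPUID signature (the EAX value from CPUID leaf 1):
# bits 11-8 hold the family, 7-4 the model, 3-0 the stepping.
def decode_signature(eax):
    """Split a CPUID signature into (family, model, stepping)."""
    return (eax >> 8) & 0xF, (eax >> 4) & 0xF, eax & 0xF

coppermine = 0x0686  # family 6, model 8  -> the 068xh range
tualatin   = 0x06B1  # family 6, model 11 -> the 06Bxh range

print(decode_signature(coppermine))  # (6, 8, 6)
print(decode_signature(tualatin))    # (6, 11, 1)
```

So the '8 vs B' the article grumbles about is just model 8 vs model 11 in the four-bit model field.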

It has been well publicised that, despite its name, Coppermine used aluminium interconnects on the CPU die, with Intel claiming that the design was 'purely transistor limited'. When designing Tualatin, engineers made the shift to copper, which makes sense considering its greater headroom for higher frequencies. Maybe this had a bigger impact than they thought it would.


The biggest difference between Tualatin and its predecessors is electrical and is what, ultimately, made them incompatible. This was the transceiver logic they were designed to use, which is most simply defined as 'the voltages used to send signals between chips that are connected to each other on the motherboard via the traces'. (If you want a more in-depth technical description, check out the patent describing the technology.) Gunning Transceiver Logic (GTL) had been invented in 1990 and a variation of this (GTL+) was used on the Pentium Pro. The Pentium II introduced Assisted GTL+, which continued to be used on the Pentium III but was unsuitable for Tualatin. The use of AGTL instead made the CPU incompatible with existing motherboards because of the change in voltages. Fortunately this didn't require a full layout change to the boards themselves, just some updates to the chipset and pin allocations, plus changes to the VRM.

The basic difference is that the Tualatin bus runs at 1.25V instead of Coppermine's 1.5V. This means Tualatin CPUs have an additional pin (for a total of 5) allocated for voltage identification. Depending on the logical state (hi or lo) of these pins, the CPU's core voltage can be set automatically. The additional pin makes twice as many voltage steps available (1.05V to 1.825V in 32x 0.025V steps) compared to Coppermine (1.3V to 2.05V in 16 steps). As mentioned, the VRM design guidelines changed from 8.4 to 8.5 to account for this more detailed level of voltage control.
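To see why that extra VID pin matters, here's a toy model of the two ranges quoted above. Note this is a simplified linear mapping purely for illustration; the real VID bit encodings are defined in Intel's VRM 8.4/8.5 guidelines and aren't a straight count-up like this:

```python
# Illustrative only: models the quoted voltage ranges as simple linear
# steps to show why one extra VID pin doubles the resolution.
def vid_range(pins, v_min, step):
    """All selectable core voltages for a given number of VID pins."""
    return [round(v_min + i * step, 3) for i in range(2 ** pins)]

coppermine_vids = vid_range(4, 1.30, 0.05)    # 16 steps, 1.30V .. 2.05V
tualatin_vids   = vid_range(5, 1.05, 0.025)   # 32 steps, 1.05V .. 1.825V

print(len(coppermine_vids), coppermine_vids[0], coppermine_vids[-1])  # 16 1.3 2.05
print(len(tualatin_vids), tualatin_vids[0], tualatin_vids[-1])        # 32 1.05 1.825
```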

Coppermine T

Interestingly, Tualatin wasn't the only Pentium III to use AGTL; it was just the only one to use it exclusively. Just to make things really confusing, Intel released a revised Coppermine, dubbed Coppermine T, that could use both AGTL and AGTL+. This meant it would work on motherboards that supported either. These models were clocked at 866MHz, 933MHz, 1GHz, and 1.13GHz. As an aside, 1.13GHz might, at first glance, seem like an odd frequency to jump to, but actually it's 1000MHz plus 133MHz, which makes perfect sense.


Intel had lost a lot of ground in the server marketplace thanks to competition from more thermally-efficient CPUs, particularly Transmeta's Crusoe. Thermal performance greatly improved with Tualatin and it was widely reported to be intended for server blade applications.

The 1.13GHz Coppermine had a Thermal Design Power of 37.5W, compared to 27.9W for the Tualatin part running at the same frequency. Even the 1.4GHz model ran lower, at 31.2W, so that's quite a significant improvement. There was absolutely no headroom for overclocking the 1GHz+ Coppermines, but it was certainly a possibility on Tualatin and was done frequently.

Chipsets & Motherboards

Official adaptors existed, such as the PowerLeap PL-iP3/T, that could adapt a Tualatin processor to a technically incompatible slot 1 motherboard. There were also interposers that could be used on socket 370 boards. These adaptors made the necessary modifications between the CPU pins and the socket to compensate for the change from AGTL+ to AGTL. They were too expensive at the time for most users ($169+) and are almost impossible to find today, so I'm going to assume you don't possess such a unicorn. In most cases, official Tualatin chipsets were existing models but with a 'T' suffix to indicate support for the new CPU.

Much as I would love to also provide a list of compatible chipsets and motherboards for you to peruse, that's beyond the scope of this article. The Vogons Wiki has an excellent list for you to refer to.

The 440BX Hack

Getting Tualatins working on older motherboards was a massive challenge, however. At the time, the 440BX was hailed as Intel's finest ever chipset. It was introduced in 1998, supporting slot 1 boards initially, with socket 370 following later. It achieved legendary status, as board makers were able to provide bus speeds of 133MHz and higher, well beyond its 100MHz specification. This is the chipset that exposed the overclocking abilities of the relatively cheap Celeron 300A CPU, with enthusiasts consistently running it at 450MHz (a 50% increase).

After studying the datasheets and the VRM design guide, an individual called Nightcat from Taiwan was able to work out a way to run a Tualatin CPU on a BX-based board, even though it didn't support AGTL. The Tualatin specs required voltage steps of 0.025V, but it was discovered that the 0.05V steps possible on most 440BX-based motherboards were sufficient.

Key to performing this hack on slot 1 boards was the need for a 'slocket' or 'slot-ket', the nickname given to a device that takes a vanilla socket 370 CPU and adapts it to fit into slot 1. Such adaptors are almost impossible to find these days, but were once widely available from a number of manufacturers. There was no guarantee, however, that the slocket you had acquired would work with the mod. The hack was eventually tried on pretty much every Pentium III motherboard people could lay their hands on.

Hacking a Tualatin. Source: James Anderson

Following one of the popular guides, you can see just how rough and ready this hack is. I mean you have to connect two pins on nearly opposite sides of the CPU together and then insulate three of them, requiring widening of the corresponding holes on the socket so that they fit. Some people opted to remove the pins and I can see the motivation but no. It's a hack in the truest sense of the term, but impressive considering it defeated (enabled?) Intel's engineering. It's also testament to the 440BX chipset that even the introduction of an incompatible CPU couldn't yet render it obsolete.


The Pentium Pro was arguably Intel's first exclusively 'enterprise' CPU, but the Xeon name was introduced to distinguish server versions of the Pentium II from their desktop counterparts. This continued through the development of the Pentium III. 

Tualatin was introduced at a time when Intel was in multiprocessing trouble: Foster-core P4 Xeons were not widely sold and performed badly. Meanwhile, the fastest PIII Xeon was a paltry 1GHz, so there was no performance leap available for existing customers. It appears that the server version of Tualatin was intended as a stop-gap solution and it certainly did the job, effortlessly outclassing the 1.7GHz Foster Xeon at most tasks while matching the 1.8GHz Prestonia Xeon as well. Not wanting to completely sabotage the Pentium 4's reputation, Intel didn't create a full Xeon version of Tualatin: Xeons could be used in configurations of up to 8 CPUs, while Tualatin systems were limited to 2 processors.


This is the question everyone seems to want an answer to. You may have already read claims that the 1.4GHz Tualatin can outperform 1st gen Pentium 4s. This is something that is routinely reported as fact today, but very few people can point you in the direction of a reliable source. Fortunately reviewers benchmarked the shit out of new CPUs when they came out so there's plenty of evidence to refer to if you can find it. There are also crappy articles that provide no sources or evidence but seem to be cited by people trying to prove a point. I'm not that lazy, so it's time to ask "are the claims true?".

Mostly. Hot Hardware's tests of a 1.2GHz Tualatin saw it beat a 1.8GHz Pentium 4 in Winstone tests, mostly because they're not RAM intensive. It was beaten at encoding tasks, however. Considering the 50% difference in frequency, it still put up a good fight. The big surprise came when they were able to overclock the FSB to 163MHz, resulting in 1.47GHz from the Tualatin (a 22% increase). This enabled it to beat the P4 hands-down in 3DMark 2001 and score a mere 1.6FPS lower than the Pentium 4 in Quake III. Pretty impressive.
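The maths behind that overclock is simple enough to sketch: core clock is FSB × multiplier, so raising the FSB raises the core proportionally. The multiplier of 9 is an assumption on my part (a 1.2GHz rating at a 133MHz bus implies 9x), since Tualatins were multiplier-locked and only the FSB could be raised:

```python
# Sanity-checking Hot Hardware's overclock numbers.
multiplier = 9                  # assumed: 1200MHz / 133MHz ≈ 9
stock = 1200                    # nominal 1.2GHz rating
overclocked = 163 * multiplier  # 163MHz FSB -> 1467MHz ≈ 1.47GHz
gain = (overclocked - stock) / stock

print(overclocked, f"{gain:.0%}")  # 1467 22%
```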

Kinda. Tweakers demonstrated that the Tualatin kicked a 1.7GHz Pentium 4's butt at raw calculations, but lost out big in tests measuring RAM data rate. Additionally, AnandTech's extensive benchmarks demonstrated that the P4 outperformed its predecessor in nearly every test, with AMD's 1.2GHz Athlon being the actual threat. That test only compared against a stock 1GHz PIII, though, so it's inconclusive. I needed more data. In IXBT's comparison against the 1.7GHz P4 Xeon, Tualatin came out on top in 7 out of 17 tests.

The most decisive results came from Tom's Hardware, who did a group test of 65 CPUs in 2003. I took results from the CPUs of relevance in order to demonstrate how the Tualatin actually performed against the Pentium 4 and made some nice charts. The 1.3GHz Tualatin has a (!) next to it because it's simulated by taking an average of both Tualatins. This was done because I wanted to provide some kind of clock-matched comparison to the lowest-clocked P4. All the P4s are Willamette core (0.18µm) except the 2.0GHz Celeron, which is Northwood (0.13µm, like the Tualatins).

The Tualatins were tested on an Asus TUSL2-C equipped with SDRAM, the lower-clocked P4s used an Asus P4T with RDRAM, and the 1.8GHz P4 and Celerons used an Asus P4PE with DDR RAM. Graphics cards used were the closely-matched ATI Radeon 9700 Pro and GeForce 4 Ti 4600 on all systems.

I'm going to keep score between the 1.5GHz P4 and the 1.4GHz Tualatin to see who the overall winner is at the end.

In DirectX 8 tests, the results are astounding. The Pentium 4's architecture actually seems to decelerate 3D graphics, with the 1.4GHz PIII-S smashing the 1.8GHz P4 in both tests. Clearly the 512KB cache helps, as the others have 256KB or less, but there must be some kind of serious issue between AGP and the Pentium 4 here for such a big effect. My guess would be that it's related to the double-edged sword of the P4's long instruction pipeline. Without getting into it too far, the P4 has the capacity to predict up to 20 instructions in advance, which is great until one of them turns out to be a 'bad guess', causing the entire queue to be flushed. Oof. The Pentium III's pipeline is half as long, one of the biggest differences between the two architectures.
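You can model the cost of those bad guesses with some back-of-an-envelope arithmetic. The branch frequency and misprediction rates below are illustrative guesses of mine, not measured figures; the point is only that the flush penalty scales with pipeline depth:

```python
# A toy model of why a deep pipeline hurts on branchy code: every
# mispredicted branch flushes the pipeline, costing roughly one cycle
# per stage. All figures are illustrative.
def cycles_per_instruction(depth, branch_rate, mispredict_rate):
    """Base CPI of 1 plus the average flush penalty per instruction."""
    return 1 + branch_rate * mispredict_rate * depth

p6 = round(cycles_per_instruction(10, 0.20, 0.10), 3)  # PIII-style depth
p4 = round(cycles_per_instruction(20, 0.20, 0.10), 3)  # NetBurst-style depth

print(p6, p4)  # 1.2 1.4
```

Same branch behaviour, but the deeper pipe pays double for every misprediction.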

Tualatin: 2, Pentium 4: 0

I read this chart wrong at first: lower is better! When it comes to encoding video, the 1.4GHz Tualatin performs admirably. Although raw clock speed appears to be the biggest factor in this test (with the 2GHz Celeron winning out), a large amount of cache seems to compensate hugely, with the 1.2GHz PIII beating the 1.8GHz P4. Maybe the software wasn't yet optimised for the P4's SSE2 instructions. Either way, this is exactly the kind of task the Pentium 4 was designed to be better at.

Tualatin: 3, Pentium 4: 0

It's a different story with MP3 encoding, with a linear graph showing that raw speed and a long pipeline take the crown, but by a narrow margin.

Tualatin: 3, Pentium 4: 1

Another graphics-heavy test here. You can see the almost linear improvement between the group of PIIIs and the P4s respectively but, again, the P4 loses out big to the gang of Tualatins. Raw speed helps out the Celeron, but the big cache of the 1.4GHz Tualatin compensates to put it miles ahead of the 1.5GHz P4. Shocker!

Tualatin: 4, Pentium 4: 1

Here's where the P4 wins every time: memory. The quad-pumped, 400MHz front side bus really demonstrates the speed advantage the next-gen CPUs hold over the Tualatin with its lowly 133MHz bus. Even the Celerons manage to put on a good show.

Tualatin: 5, Pentium 4: 2

The OpenGL test seems to hand the graphics advantage back to the Pentium 4, with a very linear representation of performance. It also demonstrates just how much the lack of cache on the Celerons cripples what is theoretically a 'fast' CPU. The Tualeron truly was the last good Celeron and Pentium 4 generation ones really can be thrown in a fire. They're so pathetic I'm tempted to remove a point!

Tualatin: 5, Pentium 4: 3

An interesting spread of results here, identical in shape to the PCMark test. Again, raw speed (I really struggle to use that phrase) hands a win to the 2GHz Celeron, with the 1.4GHz Tualatin not far behind, beating everything else. Shockingly the 1.2GHz Tualatin beats the 1.5GHz P4. That shouldn't happen really, should it?

Tualatin: 6, Pentium 4: 3

The P4s get a bit of pride back as they whack their FLOPS out for all to see, but there's almost nothing in it between the lower-speed participants. The Tualatins still manage to beat their successors, though. This is getting embarrassing.

Tualatin: 7, Pentium 4: 3

Of course another RAM benchmark leaves the Pentium IIIs eating the P4s' dust. Sad times.

Tualatin: 7, Pentium 4: 4

Sysmark's real-world benchmarks seem to be optimised fully for the P4 architecture, with another linear spread. You'd be nuts to run XP on a Pentium III anyway, so it's not really a fair or relevant comparison.

Tualatin: 7, Pentium 4: 5

Ah, a classic file compression benchmark certainly emphasises the benefit of a large cache. The blistering performance of the 1.4GHz Tualatin suggests it probably didn't access the RAM much at all and, once again, the 1.2GHz Tualatin beats three of the P4s.

Final Score Tualatin: 8, Pentium 4: 5

While these tests show Tualatin out-performing the Pentium 4 with raw calculations and DirectX games, anything slightly memory intensive sees the P4 come out in front with its superior front side bus. Despite this, Intel made any advantage the Tualatin might have had at the time completely irrelevant by pricing it well above the P4. The joke was on Intel in the end, however, as they inadvertently made the P4 irrelevant and impossible to recommend because of its high price and reliance on RDRAM. This eventually changed, with DDR RAM becoming the standard and ever-increasing clock speeds (and temperatures!). What a shit show.


Tualatin caused some big waves when it hit the market, probably more than Intel had anticipated. I often wonder if its architects had any idea that it would become one of the most important designs in Intel's history. With hindsight NetBurst, the architecture used in the Pentium 4, was the wrong road from day one. Although it was a completely separate architecture from Tualatin, we kind of need to look at it in detail to understand what happened next. That's an article for another day.

Thanks for reading. See you on Twitter and YouTube.


Report: Online Shopping Fraud Bites Merchants, Not Buyers (4 Dec 1998)
Serial Number Alone Won't Violate Privacy Concerns at InfoWorld (22 Feb 1999)
Intel's Pentium III Case at Berkeley University (Spring 1999)
Intel's New Weapon: The Coppermine at Tom's Hardware (25 Oct 1999)
AMD hits 1 GHz with new Athlon microprocessor! at ITPro Today (5 Mar 2000)
It's official: AMD hits 1,000MHz first at ZDNet (6 Mar 2000)
Intel Pentium III 1GHz at Anandtech (8 Mar 2000)
Intel Pentium III 1GHz Review at Sharky Extreme (8 Mar 2000)
Intel Pentium III 1.13GHz Review at Sharky Extreme (31 Jul 2000)
Intel to detail Pentium 4 at CNN (21 Aug 2000)
Intel Officially Launches Pentium 4 at IDF at Electronics Weekly (23 Aug 2000)
Intel Admits Problems With 1.13GHz Pentium III at Tom's Hardware (28 Aug 2000)
Intel Pentium 4 1.4GHz & 1.5GHz at AnandTech (20 Nov 2000)
Intel Thinks Small in Mobile Spotlight At ComputerWorld (28 Feb 2001)
The Microarchitecture of the Pentium 4 Processor at Intel Technology Journal (Q1 2001)
Intel Samples Tualatin Processors at EE Times (16 May 2001)
SiS635T: A Tualatin Twilight at Aces Hardware (30 May 2001)
Tualatin PIII Outclasses Pentium 4 at The Inquirer (11 Jul 2001)
The Celeron of The Future at AnandTech (30 Jul 2001)
Intel Fights Back With Tualatin Chip at InfoWorld (30 Jul 2001)
Intel's "Tualatin" Pentium III 1.20GHz Processor at Hot Hardware (31 Jul 2001)
Two For The Price of One at InfoWorld (10 Sep 2001)
Last Passing Maneuver at Tom's Hardware (19 Sep 2001)
The New Pentium III - Codename Tualatin at Insane Hardware (1 Nov 2001)
Intel to demo 0.13-micron Pentium 4 at CNN (5 Nov 2001)
The Little Celeron That Could (9 Nov 2001)
Q3 2001 Industry Update at Real World Technologies (7 Oct 2001)
Intel makes strides with Tualatin chips at CNet (2 Jan 2002)
Running Tualatin On CuMine MB w/o Powerleap at Overclockers (17 Feb 2002)
Tualatin on a BX mobo works. No adaptor required at Gamers HQ (25 Feb 2002)
Tualatin In A BX Board at Overclockers (30 Apr 2002)
Tualatin in the Asus P3B-F by Mathias Rufer (6 Oct 2002)
Benchmark Marathon: 65 CPUs from 100 MHz to 3066 MHz at Tom's Hardware (7 Feb 2003)
Intel's Centrino CPU (Pentium-M): Revolutionizing the Mobile World at AnandTech (12 Mar 2003)
Pentium M Dual Core in January 2006 - Summary of the Intel Mobile Roadmap at AnandTech (6 Oct 2005)
The Tualatin Story at OS/2 Museum (6 Apr 2013)
Tualatin Celeron vs Willamette Celeron at Vogons (4 Mar 2017)
Powerleap PL-iP3/T Slot 1 to Socket 370 Slocket Adapter at Ancient Electronics (14 Sep 2018)
Fastest Tualatin Chipset / Best Pentium III Motherboard at Vogons (31 Dec 2018)
Wikipedia article
GTL Patent at Google
AGTL Patent at Google
Pentium Pro Datasheet at dexsilicium
Pentium II Datasheet at dexsilicium
Coppermine Datasheet at Intel
Intel VRM8.5 Design Guidelines
Pentium III Processor Specification Update at chipdb
Die Shots at
Intel 440BX at Wikipedia
Intel 440BX Datasheet at Intel
Overclocking Computers with Intel Celeron (Tualatin) at Flylib
Overclocking Computers with Intel Pentium III (Tualatin) at Flylib
Tualatin based Intel Celeron 1.2 GHz for Socket 370 at IXBT Labs
Intel Xeon Processor Review at IXBT Labs
Intel P6 Comparison with charts + additional CPUs at Vogons
PowerLeap PL-iP3/T Slot1 to Tualatin Adaptor at PowerLeap
Server Tualatin Review at IXBT Labs
List of Socket 370 motherboards at Vogons Wiki
Pentium M and Core Interlude at QDPMA
The father of Centrino, Pentium M and Core 2 leaves Intel after 33 years
Intel's 90nm Pentium M 755: Dothan Investigated at AnandTech (21 Jul 2004)
Intel Technology Journal Q2, 1999 at Intel

True or False Answers

1. False. Late model Coppermine PIIIs also used the FCPGA2 package with the IHS. It was officially introduced because of the need for more efficient heat dissipation at frequencies near 1GHz. There is also anecdotal evidence that, without it, the Pentium III die could be chipped during heatsink application on the FCPGA package.

2. True. Tualatin was Intel's first 0.13µm CPU. Initial Pentium 4s and Xeons still used 0.18µm.

3. False. It's a bit counter-intuitive, but Tualatin uses AGTL signalling.

4. True. Although Intel encouraged board manufacturers to turn the feature off by default in the BIOS, the threat of litigation from the EU was enough for them to remove the serial number when Tualatin was being designed.

5. False. AMD released their 1GHz Athlon mere days prior to Intel's release.

6. False. Although copper was thought to be necessary to achieve higher clock speeds, Intel found a way to ensure that unreliable aluminium interconnects wouldn't be a problem. Tualatin, however, used copper.

7. True. Some dude called Gradus managed to overclock a Tualatin Celeron 1.4GHz to 2.651GHz using nitrogen cooling and an Asus ST6-RAID motherboard.

8. False. Multiprocessing was a feature that Intel made exclusive to the Xeon line so you could only use up to 2x CPUs in a Tualatin-based system.


Monday, 10 August 2020

Kick-Ass For 2009: AM3 System Build [Part One]



The Motherboard
The Graphics Cards
Everything Else
System Complete

The ASUS Republic of Gamers Crosshair IV Formula. Source: IXBT Labs

There is an accompanying video to this, which was enormously long and originally included far too many pedestrian details. I think I cut more footage than I used. I was brutal. The blog is the place for organised information. And there will probably be quite a lot, because this build was over a year in the making and it's taken me so long to finish this that it's out of date now. Also, so the article isn't ridiculously long, it's split into 2 parts: part one is for the research and part two is for the build. Let's get into it.


I'll keep this bit brief, but a bit of back story is important because the graphics card I already had was central to this whole thing. The last time I purchased a whole PC new was in 1999 and I've been making incremental upgrades ever since. Never being able to afford a top of the range graphics card (except that time I bought an ATi Radeon 8500 - ah, simpler times!), I have always tried to get good value for money. But I have failed a number of times. When attempting to upgrade the 8500 to a card that could better handle GTA III, I purchased a 9200SE or something, thinking that a higher number meant better performance. No. I was caught out (never again!) by ATi's crappy marketing, and this probably represents the first occasion graphics card model names ceased to make logical sense. Although the 9200 did indeed succeed the 8500, it was an inferior version of the same chipset. I sent it straight back. I then did a bit more research and settled on nVidia's GeForce 8500GT, itself a 'value' card. My low-budget purchasing style had begun.

I'm not sure how long it was until I lost patience with that card, but I distinctly remember buying GTA IV when it came out and the 8500 failed abjectly. It took me a while, but I eventually managed to buy a suitable replacement in 2009: a Radeon HD4850 with 512MB DDR3 memory. Although the cards share a number of similarities, the 4850 has twice the RAM, double the bus width, higher clocks and 5 times the pixel and texture rates owing to 5 times as many transistors.

The Sapphire Radeon HD4850. Source:

It was just under £80 in a clearance sale on Amazon and, although it was released a year after the 8500GT, it was over a year old by the time I bought it, which meant it still performed well but was cheaper than its successors. I was using this card in my main PC alongside an overclocked Core 2 Quad 9550 CPU, but the itch to upgrade had become impossible to ignore. I would have loved to have chucked an additional 4850 into my motherboard for a bit of extra grunt, but the chipset didn't support CrossFire. Instead I made a pre-emptive strike and purchased an additional 4850 because they could now be had for £9.99 on eBay. Now all I needed was a motherboard, and this marked the beginning of the process that ended up with my kick-ass system. Muhahahahahaaa.

The Motherboard

Asus Crosshair IV Formula. Retail New: £190. I paid: £30. Saving: £160.

But it wasn't that simple. Do you have any idea how many motherboards there are out there? Millions. Starting a process to find 'the one' is an absolute nightmare, especially when the board you own is from 2008 and you haven't been keeping up to date with technological improvements. The problem with having years between upgrades is that the amount of research that's required is absurd. It was now 2016 and I was broke. Intel had reached the 6th generation of its Core architecture and AMD's Excavator was a laughing stock. Neither of those was going to be an option, and vintage computing had definitely become my thing; I was far too used to getting stuff for free / cheap and never spent enough time playing new games to justify the outlay, never mind the fact I lacked the earnings to support such a habit. Brassic, I was.

So I started small. What chipsets supported Crossfire? How would I find a motherboard that would allow me to use my existing CPU and graphics card, plus add a new one and support a future CPU upgrade? This chart was a good place to start. Intel's X48 and P45 chipsets seemed appealing, but searches for boards using them came back expensive. I was looking to spend about £30 and they were all £70+ for the board alone. Additionally, these tied me to Socket 775, which had a low ceiling performance-wise. So I started looking at AMD models instead. This would, of course, require a CPU upgrade as well, but AMD CPUs of the time performed very well and weren't as expensive as Intel equivalents, provided I could find a cheap enough board. Also, Crossfire was an AMD thing and Intel boards tended to support nVidia's SLI technology instead. Some did both, but they were expensive and rare.

The trouble with these keyword-based saved searches on eBay is that they don't work well for things like motherboards, especially when you don't have a specific model in mind. Let's say I use 'Computers' as the category and 790, 890 and 990 as the keywords, representing the chipset names. Unless the chipset name is included in the motherboard model name, you're relying on the seller to include it in the description, and the kind of seller who would do that usually knows what they have, making it harder to find a bargain. Unless I went through the massive pain in the arse of setting up a search for every single motherboard model known to man with the chipsets I wanted, all I could do was sit back, wait, and strike quickly when something came up.

Gigabyte's GA-MA790XT-UD4P (almost). Source: Ali Express

And wait, I did. It was July 2016 when the first bargain presented itself, a Gigabyte GA-MA790XT-UD4P. It was £20 buy-it-now so I jumped straight in there. My existing system had a Gigabyte board and they're fine, but not at all exciting. At this point I needn't say any more because I fucked it up properly. It's an AM3 board and I tested it with an AM2 CPU by accident (they all look the same!!!). You can drop an AM3 CPU into an AM2 board and it'll work fine, but you can't do it the other way round. Except you physically can: the sockets are mechanically compatible but not electrically compatible. I knew this, but I had picked out the wrong CPU. I didn't notice until I attempted to remove the CPU and felt more resistance than you would normally expect. Once I'd removed it I ended up with a broken CPU and a socket with a burn and a missing contact. I tried my absolute hardest to fix it because this was actually quite a nice board but, alas, I failed. Who knows - maybe it was faulty already. I was heartbroken. It would have been a capable board.

Christmas came and went and 2016 became 2017. I bid on a number of boards as the months went by, but all went over my very slim budget. My next bite came from a different fishing line (a result from a different, unrelated saved search) in February and was described, literally, as 'Job lot computer parts...unknow condition'. While the description was piss poor (and badly spelled), the photos were more revealing and I spotted a motherboard that might just fit the bill. At £7.50 for the lot, I couldn't go wrong. The board in question was an Asus M4A79T Deluxe and a quick bit of research showed me that this was a quality board featuring the 790FX chipset. I was so excited to receive this board that I had it installed in its case and all connected up before I'd even tested it. Obviously it didn't work, and no amount of repairing was going to fix this thing. The chokes for the voltage regulator literally crumbled into nothing when I tried to replace them, so some kind of serious corrosion had taken place and done the board in. Disappointing.

I had to wait until July for the next hit but holy crap it was a good one. Again, it was buy-it-now and it was listed as 'ASUS Crosshair IV Formula AMD 890FX Mainboard ATX Socket AM3' so I got lucky with the title. Although the user had listed the board as untested, it was worth the risk for the price. It came, it worked, I was stoked.

The Graphics Cards

2x Sapphire Radeon HD 4850 X2. Retail New: £259.97 each (£519.94). I paid: £15 each (£30). Saving: £489.94.

I thought I had decided on my graphics cards, but finding the ill-fated Gigabyte motherboard had rejuvenated my thinking around the project and I'd hit on a new idea: QuadFire. It was ridiculous but, as soon as I thought of it, I couldn't get the idea out of my head. I became obsessed. The key moment came when I discovered Sapphire's HD 4850 X2 - two 4850s on one card. I could actually have used one of these in my existing system, giving me Crossfire from a single PCIe slot, but now I could have two of these cards, meaning four GPUs. This was insane to me, and enough time had passed that these cards were almost as cheap as a single 4850. Additionally, this card was nearly as fast as its big brother, the HD 4870 X2, which was, at the time, the fastest card on the planet. By March I was the owner of two 4850 X2s. The dream was becoming a reality.

Sapphire's ridiculous Radeon HD 4850 X2. Source: MadboxPC


AMD Phenom II X4 955 Black Edition. Retail New: £213.79. I paid: £25. Saving: £188.79.
Phanteks PH-TC12DX Cooler. Retail New: £38. I paid: £14. Saving: £24.

I'd also been doing a lot of research into CPU options and two main candidates had emerged for this system: the quad-core Phenom II X4 955 Black Edition and the Phenom II X6 1055T. Both had become legendary for different reasons, with the X4 being the flagship CPU when it was released in April 2009 and famed for its overclocking abilities, while the X6... well, it has 6 cores. Whichever one of these I chose was probably going to be the most expensive component. It was still going to be dramatically cheaper than when new, and would still perform respectably today with complementary hardware. Interestingly, my existing Core 2 Quad 9550 was very comparable at the time, but lacked the unlocked multiplier. Very clear memories came back to me of reading articles online when these CPUs were released, marvelling at the idea of having a hexa-core CPU or a Black Edition model. I was literally realising a dream by building this system.

AMD's Phenom II X6 1055T CPU. Source: techradar

Two weeks after buying the graphics cards, I was bidding on both a 1055T and a 955BE simultaneously. What I expected to happen was that I would win one or neither of the auctions. Instead I won both. Oops. But at £45 for the 6-core and £25 for the 4-core I think I did okay. The plan was to try out both CPUs, benchmark the hell out of them, and decide which one to use based on that. I could then sell the spare one. As for cooling, air would suffice for my needs and I already had a pretty good solution in my Phanteks PH-TC12DX, which has a massive heatsink with a fan blowing on either side. I originally purchased this in summer 2015 for overclocking purposes on my previous system, so it's well over spec for this one.

The Power Supply

Corsair TX850W. Retail New: £114.99. I paid: £28. Saving: £86.99

From the moment the QuadFire idea came to me, it was clear I was going to need a serious PSU. Nothing I had was going to cut it, mostly because of the number of PCIe power connectors I was going to need. Sure, I could use adaptors but, at best, I would have an unstable system and, at worst, I could destroy it all. 850W was going to be more than enough, and a few days of research and eBay searches led me to purchase a used Corsair TX850W. It's not at all modular, but features like that always command a premium.
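For a rough sense of why 850W was comfortable, here's a back-of-an-envelope power budget. The TDP figures are approximations based on published board/chip ratings rather than anything I measured, so treat them as ballpark numbers only:

```shell
GPUS=$((2 * 230))    # two HD 4850 X2 cards at roughly 230W board power each
CPU=125              # Phenom II X4 955 BE is a 125W TDP part
REST=100             # motherboard, RAM, drives and fans (generous estimate)
echo "$((GPUS + CPU + REST))W peak"   # prints "685W peak"
```

Even flat out, that leaves over 150W of headroom, which matters because power supplies tend to run most efficiently (and most reliably) when they're not pinned at full load.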

Everything Else

Fractal Design Define R3. Retail New: £89.99. I paid: £25. Saving: £64.99.
Samsung 840 Pro SSD. Retail New: £185.10. I paid: £0. Saving: £185.10.
Hitachi DeskStar 7200rpm 1TB. Retail New: £45. I paid: £0. Saving £45.

The case was actually the first thing I had bought. I hadn't bought a new one for years and I found a second hand Fractal Design Define R3 on Gumtree. It was 'arctic white' but it was £25 and had all the features I was looking for: PSU at the base, lots of drive bays, excellent air flow, clean appearance, great cable routing and two 5.25" bays at the front. What a bargain.

System storage was taken care of by a Samsung 256GB SSD someone gave me for free. Although it was released in late 2012, it's a SATA3 part, so it matches the hardware. Further storage was taken care of by a 1TB SATA3 hard disk I already had and a random SATA optical drive, which happened to be ASUS-branded. The SSD was important because storage is always a bottleneck. With the 890FX chipset comes the SB850 south bridge and 6Gbps SATA speeds. Blazing!

If I'm going to get this system properly overclocked to gain me some more performance, I'm going to need proper RAM. At the time of the build I chucked in whatever DDR3 RAM I had, but this will be a future upgrade.

System Complete

The specs of this system are such that nearly every component was top-of-the-line for 2009. Only the cooler isn't period correct, as it dates from 2013, and I've mentioned the loophole with the SSD. Here are the final numbers:

Total New Retail: £1,352
Total I Paid: £140
Total Saving: £1,015

For 2009, this is a truly awesome system. Scan's Gaming PC of The Month for April 2020 rocked a Ryzen 5 and RTX 2060 graphics for little more than a grand, so the fact it only cost me £140.25, a saving of £1014.70, demonstrates a number of things:

- Buying the latest hardware is a mug's game.
- Building a system that's 7 years old is loads of fun and rewarding.
- You can have a system that was, at some point, completely awesome. You can do the same thing every 7 years.

In the next part of this article I'll do a bit of a gallery of the system itself and the change of direction that I decided to take with the graphics cards. Thanks for reading, find me on Twitter.

Home Page


Crossfire Compatibility Chart at AMD [archived]
Asus Crosshair IV Formula specs at Asus
Asus Crosshair IV Formula review at IXBT Labs
Gigabyte GA-MA790XT-UD4P at Gigabyte

Asus M4A79T Deluxe at Asus
Quick Reminder... at Ars Technica
Radeon HD 4850 X2 specs at Sapphire [archived]
Radeon HD 4850 X2 review at techpowerup [archived]
Radeon HD 4850 X2 review at Overclockers Club
Recommended Power Supplies at Overclockers
Power Supply Calculator at Coolermaster
AMD Phenom II X4 955 BE specs at AMD [archived]
AMD Phenom II X4 955 BE review at Trusted Reviews
AMD Phenom II X4 955 BE review at Tom's Hardware
Comparison of 955BE and Q9550 at CPU World
Comparison of 955BE and 1055T at CPU World
Phanteks PH-TC12DX specs at Phanteks
Phanteks PH-TC12DX review at Hardware Secrets [archived]
Corsair TX850W specs at Corsair
Corsair TX850W Review at JonnyGURU [archived]
Fractal Design Define R3 specs at Fractal Design
Fractal Design Define R3 review at Anandtech
Samsung 840 Pro specs at Samsung
Samsung 840 Pro review at The Guru of 3D

Friday, 26 June 2020

Recovering Data For Free From a Mac (HFS+) Partition That Won't Mount in macOS Catalina And Later

Home Page

Despite me nearly writing an entire article just now about the folly of storing data anywhere, I'm going to try to keep focused on this one problem, which can be solved relatively simply. In fact, I'll keep it super simple for those who know what they're doing:

TL;DR if your Mac hard drive won't mount and Disk Utility can't fix it, grab a PC and boot it into Parted Magic from the Ultimate Boot CD (UBCD). Attach the problem drive and try to mount it using the drive icon in the bottom left corner of the desktop. If it does, copy the files onto a different drive. There. Done.

Don't Trust Hard Drives

That's the title of the article I wanted to write but here it's just a heading.

Although there is a massive list of potential failures that a hard drive is prone to, developing an error that results in the data being there but inaccessible is one of the least fatal, but most frustrating, problems. This article deals specifically with problems on Mac OS X, but also applies to other operating systems and can be adapted quite easily.

When you attach a storage device to a computer, the operating system takes a look at the file system in use. If it's recognised, the file system will be 'mounted' by the OS and you will be able to interact with it. Ideally you would have read & write access to the drive but, in some cases, you will have read-only access, i.e. you can open and copy files but you can't make any changes to them. For example, Windows does not recognise Mac drives at all without installing additional (non-free) software, whereas Windows drives can at least be read on Macs. It all depends on what file systems are in use. For Windows it's usually NTFS and for Mac it's usually HFS+, because these are their 'native' file systems, but there are many others out there just to make things more complicated for you: FAT, FAT32, NTFS, APFS, HFS, HFS+, EXT, VFS, XFS, NFS, exFAT, ReFS, JFS and WBFS are just some of them.

Some drives have multiple 'partitions' on them and each can have a different file system. If you've ever installed a USB loader on a Wii console, you might have a drive with a FAT partition and a WBFS partition. Without extra software, you'll usually only be able to see the FAT partition.

Anyway, if you insert a disk with an unrecognised file system, you'll usually get a message explaining this. Most of the time you will simply be asked if you want to format the disk. Unless you've just installed a new, blank drive, you don't want to do this. You might even think that 'formatting' the drive will mean you can access it. No. Be cautious at all times because...

You are only ever one click away from losing your data!

Explaining all of the pitfalls involved requires me to write the whole other article I'm not currently writing. Trust me though, I'm an expert because I've made all the mistakes at some point in the past. Luckily I've also recovered from nearly all of them. Think of fire: it's pretty dangerous and it's very easy to start one. It takes a lot more effort to put one out, unless you have all the necessary safety equipment to hand. Think the same when it comes to data: easy to destroy, much harder to recover.

Mounting (Or Not)

So it's normal for an operating system to not mount a partition that's unsupported, but there are occasions where a filesystem that would normally be recognised becomes unrecognisable. This is usually due to the header data on the drive becoming corrupted in some way. If the metadata (data about the data) cannot be found, this results in an error state. You won't normally get a message telling you this though - all that will happen is that, when you plug the drive in, it doesn't show up.

This happened to me recently. I don't know what specifically caused the problem with this drive, but it had been behaving weirdly beforehand anyway. SMART data for the drive also indicated it was failing, so that's nice.

The background is that I have a MacBook Pro with a 256GB SSD as its internal drive (an upgrade I installed after the performance of the original drive tanked - featured in this video on YouTube). This doesn't leave me with much space for data storage, so I've been using a 1TB external hard drive in a caddy connected via USB. One day it got unplugged by accident while rendering a video and wouldn't mount when I plugged it back in. The first thing to do on a Mac when this happens is to open the Disk Utility that comes with Mac OS.

On the left you can see the list of physical drives attached to the computer.

- The Samsung at the top is the SSD, with partitions for the system and data within it. This is labelled as 'internal' so that the user knows what is what. Labelling your various partitions is important in this regard as it saves a bunch of confusion and avoids potentially embarrassing mistakes, too.
- The 'Mass Storage Device' is a hard drive that's connected externally and has a 'Restore' partition in it, which is mounted so I can access it from the Finder.
- The last drive shows a 'Time Machine' partition, but it's greyed out. This means it's not mounted. This is the problem drive.

At this point the first thing to try is to select the problem partition and then click the First Aid button.

More often than not, this process will fix the problem. In this instance it did not. If it doesn't, this usually implies a deeper problem with the drive. If the file system is having issues then you need to get the files off, as getting them fixed usually requires reformatting the drive. The specific errors I got were:

File system check exit code is 8.
File system verify or repair failed. : (-69845)

The first message is a general status code meaning that the file system could not be checked and is corrupted somehow. The second error is amusing because, if you type it into Google verbatim with the minus sign attached, you're telling Google NOT to include 69845 in the results - so search without the minus sign unless you want to exclude it. You may have read elsewhere that booting into single-user mode and running fsck (the Unix 'file system check' utility) manually can fix it, but macOS Catalina won't let you do that. You could try it on another Mac with an older OS, because external volumes in macOS and OS X tend to be HFS+, which has been supported in OS X forever. But let's say that didn't work. If you happen to own one of the (not free) file recovery tools available for Mac OS, such as Disk Drill, DiskWarrior, Stellar, Data Rescue or TechTool Pro, then you're in luck. Paying upwards of $80 for such a piece of software, however, is a bit of a risk, especially if it doesn't even work in the end - these apps have a specific 'no refunds' policy for that reason.
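For the record, on an older Mac that does still allow it, the single-user mode route looks roughly like this. The disk identifier is just an example - note your volume's real identifier (via diskutil list) from a normal boot beforehand:

```shell
# Reboot holding Cmd+S to drop into single-user mode, then:
/sbin/fsck_hfs -fy /dev/disk2s2
# -f forces a full check even if the volume is marked clean
# -y answers 'yes' to every repair prompt
```

If it reports that the volume appears to be OK, you're done; if it's still failing after a couple of runs, it's time for the Parted Magic route below.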

The Ultimate Boot CD

Instead, there are a number of free tools we can use, but for this specific problem we're interested in Parted Magic in particular. This is an operating system in itself; more specifically, it's a version of Linux designed with data recovery and diagnosis in mind. It's jam-packed with every free and open-source tool out there and it's really, really useful. It doesn't run on a Mac, however, so you're going to need a PC and, if you don't own one, it's time to call on one of your not-so-cool friends - hopefully you're on good terms with them. Although the latest version of Parted Magic itself costs (not very much) money, the free version included with the Ultimate Boot CD works just as well for most purposes. You're going to need to download it, install it on a USB thumb drive (or CD if you're into pain and time wasting), and boot your computer with it.

There are many other sources of information out there telling you how this is done so I'm not going there. Here's the link to the UBCD website. Come back when you're ready to boot a PC with it.

Parted Magic

I used a PC laptop to run the UBCD. I also removed the internal hard drive from said laptop and replaced it with the drive I'm having problems with because a) it eliminates the risk of me messing up anything on the PC laptop and b) it means the problem drive is hooked up directly to the SATA bus, which is quicker than USB in most cases. You're also going to need a spare hard drive of the same or larger capacity for recovering any data onto. I attached this via a USB caddy but, if you're using a desktop PC, this can also be attached internally.

Some PCs are fussy about UBCD and I've had situations where it just won't boot. All you can really do in this situation is to try another PC. Sorry.

Once you're on the desktop, click on the drive icon which is second from the bottom-left. This will open a window showing physical drives and their mountable file systems. Yes, Linux can mount Mac partitions. Even better, it can mount them with read and write access. In this specific case, /dev/sda3 is the internal drive and this is indicated by the icon on the left. It's the only internal drive on this computer so I know it's the right one.

Make things as simple for yourself as possible when working with multiple drives.

You also want to mount the drive you're recovering to. I had already purchased a nearly identical drive to replace the problem one because I think it's got a mechanical issue, too. I also partitioned it on my Mac laptop before plugging it into the PC.

Now open the File Manager from the desktop and you will see your mounted partitions on the left. Choose 'New Window' from the 'File' menu to give you two side-by-side and then it's a simple case of drag-and-drop.
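If you'd rather use a terminal than drag-and-drop, the whole rescue boils down to a handful of commands. The device names here are purely illustrative (on my machine the problem partition happened to be /dev/sda3) - always confirm yours with lsblk first:

```shell
lsblk -o NAME,FSTYPE,SIZE,LABEL   # identify the HFS+ partition and the destination drive
mkdir -p /mnt/rescue /mnt/target
mount -t hfsplus -o ro /dev/sda3 /mnt/rescue   # problem drive, mounted read-only for safety
mount /dev/sdb1 /mnt/target                    # the drive you're recovering onto
cp -av /mnt/rescue/. /mnt/target/              # copy everything across, preserving attributes
```

Mounting the damaged volume read-only means nothing you do during the copy can make the corruption any worse.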

I couldn't quite believe it was this simple but, if it's a file system error rather than a mechanical or electrical one, then this could save you a bunch of time and money. For other, more complicated problems, there's another article in the works.

I hope this helps.

Monday, 8 June 2020

Every Mobile Phone I Ever Owned (Well, Almost)

Home Page

The iconic Motorola RAZR V3, brandished by Dr. Wilson ('All In', Season 2, House M.D.). ©Fox.

I was watching House M.D. earlier. One episode (featuring a female teenage stalker of celebrities) reminded me of one of my all-time favourite phones: the Motorola RAZR V3.

Mobile phones are much like computers for me in the sense that they conjure up a level of nostalgia unlike almost anything else. But that's me and technology all over: reading articles on Engadget about the latest slabs of technology, spending my hard-earned cash and picking up (or, more recently, having delivered) a shiny box with all sorts of goodies inside - ah, the unboxing experience! Phones have had this down for decades now.

After watching the aforementioned episode, I trawled eBay for at least thirty minutes looking to establish a collection of all my previous phones. I still possess a few, but even those that are missing could be acquired for £10-£20 in acceptable condition. Considering that I acquired my first phone in 1998, my experiences represent 'a brief history of mobile phones', as I've upgraded almost every single year since then, and pored over every option available on an annual basis. I only ever bought the newest, most exciting shit.

Something that will make this exercise particularly interesting is that I kept a diary from 2005 until about 2010. During that time, I recorded my impressions of each new phone the day I upgraded, so I'll be able to include these insights.

Where It Started

Owning a mobile phone in 1998 was no mean feat. They were very much still considered a business tool rather than a consumer necessity, mostly because the use of one required a credit agreement. This came at a cost of around £40 a month for the most basic call plan, which covered instalments for the phone hardware and line rental, and usually included free weekend and evening calls to local landlines. The need to be a legal adult (18-years-old in the UK) to sign this contract put 'mobiles' out of many people's reach, unless you could talk mummy and daddy into getting one for you.

I had just left school after A-Levels and started working full-time in a local computer store - the Upminster branch of catalogue computer retailer Special Reserve (another story for another time). For the first year or two that I possessed a mobile, only one of my mates had one - the rest of the numbers I had on my SIM were landlines. Yes, this was a time when even those with a mobile had to endure the humbling experience of speaking to a girl's parents when they answered the phone instead. Texting (or SMSing, as it was clumsily called by most people at the time) was also something of a novelty: one- or two-line screens and a numerical keypad weren't well-suited to typing messages and, as it was no more convenient than making a phone call and no cheaper either, why would you bother with the faff? How things have changed...

It was the introduction of pay-as-you-go (PAYG) that caused the explosion in mobile phone popularity. Suddenly anyone, anywhere could get a mobile phone. You bought yourself a phone (usually a bad, cheap one), and you bought 'credit' that enabled you to make calls and send messages, usually at a higher rate than contract phones. Mobiles were no longer targeted at just business. The limitless customer base led to increased sales, which led to more investment in the networks, meaning increased coverage, reliability and quality of service. This also made more money available for R&D budgets and handsets improved almost overnight. Just look at the difference between my first and second phones to see what I mean.

Almost as soon as I started the aforementioned job, I was straight down the one2one store (formerly Mercury) in Romford. Having chosen my first phone, I was able to choose my mobile number and, amazingly, I've had the same one ever since!

Ericsson PF768

Source: Wikimedia
Year: 1998. I don't even remember what the other phone choices were at the time and, to be honest, I don't think I cared - I was just happy to have a mobile phone! It's also the only phone I've owned that had an antenna.

Advantages: small, lightweight.

Best feature: the 'flip', which covered the keys. It had a spring, making the action look quite flash at the time.

Worst feature: the 'flip', which eventually broke as it was so flimsy. Without it, the phone had a habit of turning itself off in your pocket.

Fate: it broke. I binned it when I upgraded. I don't think I was even curious enough to bother with dismantling it.

Nokia 3210

Source: Wikimedia
Year: 1999. This phone is an absolute legend. I simply needed to replace the old phone and didn't yet know about 'upgrades'. It was a nice surprise to get this as my first 'free' phone. It felt like I'd won the lottery! Obviously I was paying for it through my contract, plus the salesperson did a great job of talking me into purchasing insurance ("it's got a bigger screen - makes it more at risk of breaking..."). That was the last time I fell for that con.

Advantages: bigger screen for texting, and changeable covers! I had a bunch of different covers for mine, ending up with translucent blue. It looked amazing with the case all lit up and was a very cool phone at the time. I think everyone had one.

Best feature: the legendary game, Snake!

Worst feature: it was a bit big.

Fate: was stolen from my jacket pocket in a club. Remains the only phone I've owned to be nicked.

Nokia 8210

Source: author
Year: 2000. Although it was upgrade time I seem to remember this phone costing me something like £200 as it wasn't free with the contract. This is easily the most I've ever paid for a phone, but that's how desirable it was at the time.

Advantages: did everything the 3210 did but was much smaller and incredibly light. It remains the smallest, lightest phone I've ever owned. I think it's also the first phone I used as a modem with a laptop in order to connect to the Internet, thanks to its infrared port. I also used this to sync my contacts with my Psion Revo. GPRS didn't exist yet, however, so it was like dial-up but mobile.

Best feature: aside from its size, it would be the changeable covers again, so I never got bored of it. Black looked quite good.

Worst feature: towards the end of its life the screen started to fail which made it pretty useless.

Fate: I've still got it and it works pretty well, surprisingly.

Ericsson T68

Source: coolsmartphone
Year: 2001. I remember that the choice of phones at the time was pretty poor - nothing stood out in terms of looks or features. I think this was the first time I went to get a new phone not knowing which one to get. This one turned out to be a nice surprise.

Advantages: very gadgety. Could do MMS, had a colour screen and Bluetooth for contacts sync and GPRS Internet, and could send and receive photos. There was also a camera add-on you could get, but it was expensive and hard to find. Plus the quality was appalling. Still, it was a technological first.

Best feature: the WAP browser. London Underground had a WAP page - living in London and being able to look up train times and sports results on the move was a Godsend.

Worst feature: it was a pretty ugly phone, I thought, and when the vibration failed it became annoying.

Fate: I've still got this one too, but the back of it is turning to goo.

Sony Ericsson T610

Source: snupps
Year: 2002. Perhaps the first time I got really excited in anticipation of a phone being released. I hounded the Carphone Warehouse salesman for about a week so I could get hold of it the day it came out.

Advantages: did everything the T68 could, plus had a bigger, better screen and a built-in camera. In fact, I believe it was one of the first consumer mobile phones to have this feature, and at the time it raised many questions over privacy. It was therefore the first phone with which you could take a pic of yourself doing something and send it to a friend immediately (if their phone was compatible).

Best feature: definitely the camera. I've still got all the photos I took with it (sorry, no nudes). This is the genesis of modern mobile phones.

Worst feature: nothing springs to mind. I could gripe about the passive-matrix display but that wouldn't even make sense to most people.

Fate: I fell asleep on the train one night coming back from London and overshot my station by a few stops. I woke up with a jolt, jumped off the train and forgot my phone.

Sony Ericsson T630

Year: 2003. I don't remember getting this phone. Obviously it was out of necessity and must have been near enough to upgrade time because it was released Q4, apparently.

Advantages: it was essentially a (slightly) improved version of the T610, and still seemed to be the best thing going at the time. I continued taking photos and it was pretty much the peak of the consumer phone experience at the time.

Best feature: the improved active matrix LCD.

Worst feature: didn't improve much on its predecessor so it was actually quite boring as upgrades go. I dearly wanted the P900 at the time, but I'd just bought a house, so I was limited to a free upgrade.

Fate: I left this one in a taxi. Oops.

Motorola RAZR V3

Source: spiria
Year: 2004. I have always had two basic tenets when it came to choosing mobile phones: 1) never buy a flip phone, 2) never buy Motorola. The RAZR broke both these rules because another rule I tend to abide by is to get the latest, hottest, generally bestest phone available, and the RAZR V3 was most certainly that.

Advantages: big, pretty screen; second, smaller screen to identify callers; very thin and very stylish; could record video; had a (relatively) high resolution camera. It was also the first phone I was able to load videos and music on from elsewhere (mp3 ringtones).

Best feature: recording video. It was a brand new feature for a phone as far as I remember, but the V3 didn't support it out of the box. I had to hack the damn thing (another first) to enable it, but that was fun in itself.

Worst feature: can't think of one - I liked this phone A LOT.

Fate: I dropped it and the glass protecting the secondary screen cracked. Fortunately this was toward the end of its tenure.

Sony Ericsson W810i

Source: author
Year: 2005. Another phone I got as soon as it was available on the high street. It was definitely the best thing going at the time. The W800 had established the Walkman brand on mobiles, and this took it to the next level.

Advantages: Memory Stick storage, 2-megapixel camera with flash and autofocus, it's a Walkman, FM radio, web browser, changeable themes, email - just such an amazing gadget!

Best feature: undoubtedly the camera. Having such a good quality camera in your pocket is great when you've got kids because it's easy to get high quality pics anytime, anywhere. Prior to this I would have to carry a separate camera with me for even basic shots but this thing was so good I really didn't need to. I've taken some of my best photos on this.

Worst feature: really, I can't think of one - this phone rocked. I was even able to upgrade the software and unlock it so it could be used on other networks.

Fate: I've still got it. I've dropped it a million times, but the build quality is great.

T-Mobile MDA Compact III

Source: GSM Arena
Year: 2006. Also known as the HTC P3300 (codename: Artemis), this was my first 'smartphone'. This is a weird, accidental, 'parallel phone' because I actually used this purely for business (I needed a proper calendar for work that I could sync with Outlook) and continued to use the W810i for personal stuff. I used an upgrade cycle to get it for free, and only because it was the best one my network provider (T-Mobile) offered. There were better alternatives out there.

Advantages: touchscreen, albeit a resistive one, so nearly impossible to use with a finger. Lots of apps.

Best feature: actually I don't know. It was a good personal organiser but I actually hated this as a phone.

Worst feature: the full-fat HTC version of this had GPS and WiFi but the T-Mobile version had these disabled for some maddening reason. Also the camera is really quite woeful to the point of being unusable. Plus it's a brick. I cracked the screen once when it was in my pocket, but it was easy and cheap to repair.

Fate: I've still got this and recently hacked it, upgrading it to Windows Mobile 6.5, which is supposed to be better geared towards finger use. Unfortunately the screen has such low resolution and so little sensitivity that it doesn't in any way enhance the user experience on this hardware. Worst 'phone' I've ever had.

Blackberry Curve 8320

Source: Amazon
Year: 2007. With the W810i I found myself using the Internet and instant messaging (like Yahoo chat) increasingly, given its improved web browsing capabilities and, with the dawn of genuinely affordable always-on internet tariffs, my focus on this upgrade was specific: mobile internet. The iPhone had literally just become widely available, but wasn't available on my network. So I went into an independent mobile retailer and asked what the best mobile was for Internet. Hilariously the salesman dismissed the iPhone as "a gimmick"! His scorn was palpable. So, despite being something of an Apple fan but never having actually used the iPhone, I tended to agree. He recommended the Blackberry based on my needs (big screen, Wi-Fi, physical QWERTY keyboard) and, as a result, I changed networks from T-Mobile (formerly one2one) to Orange to achieve this.

Advantages: built-in camera (of passable quality) with flash; expandable memory via MMC; Wi-Fi; 'proper' Internet browser; SMS, email and MMS delivered to the same inbox; instant messaging; customisable themes; app store; Apple iTunes synchronisation; updatable operating system.

Best feature: push email was the big innovation that owning a Blackberry brought, i.e. email that arrives immediately rather than only when a send & receive happens. Oh, and Wi-Fi Internet access on the move - fantastic.

Worst feature: the web browser. While the browsing experience was much better than on the W810i, it still couldn't render any site as intended - it was very much 'mobile internet' rather than 'desktop internet'. Disappointing. It also came with an 18-month contract, so I got really bored of it towards the end.

Fate: its plasticky build quality resulted in it eventually disintegrating. Good riddance. 2nd worst phone.

Nokia N900

Source: author
Year: 2009. This phone (and I use the term loosely) piqued my interest during September, when I still had 3 months to run on my contract. I saw it on Engadget in a video that demonstrated the phone's ability to run a Super Nintendo emulator, hook up to a television to show the action, and use a Wiimote as the controller. Genius. Absolute genius. If there's one thing I like about gadgets, it's the ability to make them do stuff they're not supposed to. The N900 is basically a computer in your hand, so anything a computer can do, this can, plus a lot more. It's so hacky it can run Android, OS X and whatever else the lunatics that own these things have managed to get working.

Advantages: high resolution screen, physical keyboard, Wi-Fi and 3G for data, infrared, FM receiver and transmitter (listen to your music collection in the car), 32GB internal storage (expandable via MMC), Linux-based operating system (Maemo 5 / MeeGo), many applications, can play music and videos of all formats (I watched Lost on it in glorious 720p while I had my hand stitched up in hospital), GPS, 5 megapixel camera with autofocus and flash, front-facing camera for video calls, touchscreen, instant messaging (Yahoo, MSN, Skype, Google Talk, ICQ, etc.), email. Oh and it's a phone.

Best feature: the web browsing experience. This device finally fulfils the promise I was seeking when I bought the Blackberry: it comes with a proper desktop web browser. I could watch YouTube videos, listen to radio stations (FM and Internet), watch TV via iPlayer and such, access my bank account, play Flash games - everything. Best of all, though, because it's open source anyone can write programs for it, and it came with the pledge of being improved on a regular basis via software updates.

Worst feature: it doesn't do MMS, which is absurd for a phone. And, okay, it's a bit of a brick, but when you consider it's an mp3 player, games console, radio, laptop, phone and camera all rolled into one, it was a price worth paying imho.

Fate: I used this phone passionately for 2 years until I was bought a Samsung Galaxy SII, my first Android phone (I'm actually in the process of writing a dedicated article about the N900 because of its historical significance to Nokia's downfall).

2011 Onwards

That's it. That's the history of my phones. Of course, I continued with the upgrades after the N900, but it's all very boring after this point. Once you've gone Android or iOS, you're simply upgrading to the best available phone at the upgrade point. Sure, you get a shinier screen, more storage, a faster CPU, the newest version of Android and a better camera, but it's all quite uneventful. Unboxing is still great in its own way, but phones are boring now and very little research needs to be done. They can all do the same things with very little variation. The N900 maintains its crown as the best phone I ever owned.

Currently I'm rocking a Huawei Mate 20 Pro, which has the most incredible camera I've ever used. The super macro mode is fantastic and I make all my YouTube videos with it. I'm never without it and it's the digital umbilicus that keeps me connected to social media. If you wanna get in touch you'll find me on Twitter @brassicGamer. Thanks for reading.