Investigating the rather odd behavior of the Microsoft OS/2 1.21 disk driver led me to Compaq and their EXTDISK.SYS driver. While experimenting with various setups, I realized that DOS versions older than 5.0 do not support more than two hard disks exposed by the system’s BIOS, and will in fact quite likely hang early during boot-up if there are “too many” hard disks.
This seems to have been one of the many things that “everyone knew” back in the day, similar to the fact that DOS versions older than 3.3 may hang while booting from disks with significantly more than 17 sectors per track.
As was the case with the “too many sectors per track” problem, the issue with “too many hard disks” was missed for years simply because no one had a PC with more than two hard disks. This was a technical rather than architectural limitation. While the IBM PC/XT and PC/AT BIOS implementations were limited to two hard disks, the INT 13h interface as such was not.
In the days of full-height 5¼” drives, it simply was not feasible to install more than two hard disks into a PC, especially when a 5¼” floppy drive was also required. Even the big IBM PS/2 Model 80 (1987) with a tower case could only house two full-height 5¼” drives. There might also be trouble with the power supply, as the PC hard disks of the time were not designed for staggered spin-up and a standard AT power supply might have trouble spinning up four drives at the same time.
Sure, there were half-height hard disks, but who wanted four drives in the first place? People who needed to maximize the storage capacity… and the most obvious way to do that was buying a large capacity drive, which in the 1980s was inevitably a full-height 5¼” monster. Like my 1988-model 650 MB ESDI drive, for example.
Yes, there were solutions like the NetWare DCB which supported many drives, but those were only usable by NetWare and did not expose the drives via INT 13h.
Two things happened circa 1988. One was Compaq releasing the Deskpro 386/25 with an expansion unit option, a system which supported up to four AT-style hard disks (that is, the expansion unit housed up to two ESDI drives accessible via the PC/AT hard disk programming interface, which may be called WD1003 or WD1010 or several other things). The other development was Adaptec releasing the AHA-1540/1542 SCSI HBA, and there were perhaps other SCSI vendors as well.
Compaq supported up to four hard disks, Adaptec in theory up to seven. In any case, it is apparent that both companies ran into the same problem with DOS, and solved it in a very similar manner.
Compaq simply did not expose the drives in the expansion unit through the BIOS at all. DOS users needed the EXTDISK.SYS driver, and users of other operating systems (such as OS/2 or NetWare) needed a custom driver.
Adaptec was in a more complicated situation. The AHA-154x was an add-on card which could be installed in a PC/AT compatible machine (the AHA-154x did not work in older systems because it was a bus-mastering adapter) that already had one or two AT-style drives. The AHA-154x BIOS keeps the total number of hard disks to a maximum of two. In practice that means that if there are two SCSI hard disks attached to an AHA-154x (which also includes the AHA-154xA and AHA-154xB, but not necessarily newer models), the Adaptec BIOS may add zero, one, or two drives to the system, depending on how many hard disks are already installed. In any case, the total won’t be greater than two.
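To make the capping logic concrete, here is a minimal sketch of how an option ROM could implement the described behavior. It is not Adaptec’s actual BIOS code; it only assumes the standard hard disk count byte that PC BIOSes keep at 0040:0075h in the BIOS Data Area, which tells a ROM how many INT 13h hard disks are already registered.

; Illustrative sketch only -- not Adaptec's BIOS code.
; The byte at 0040:0075h in the BIOS Data Area holds the number of hard
; disks already registered by INT 13h providers that initialized earlier.
        mov     ax, 40h
        mov     es, ax
        mov     al, [es:75h]    ; hard disks already present (0, 1, or 2)
        mov     ah, 2
        sub     ah, al          ; slots left before the DOS-safe limit of two
        jbe     add_none        ; already at two: expose no SCSI disks
        cmp     ah, cl          ; CL = SCSI hard disks found during the bus scan
        jbe     have_count
        mov     ah, cl          ; fewer SCSI disks than free slots
have_count:                     ; AH = number of SCSI disks to hook into INT 13h
        add     al, ah
        mov     [es:75h], al    ; updated total, never more than two
add_none: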
For DOS users, Adaptec offered a combination of ASPI4DOS.SYS (ASPI driver for the AHA-154x) plus ASPIDISK.SYS (DOS hard disk device driver). Adaptec’s ASPIDISK.SYS was functionally very similar to Compaq’s EXTDISK.SYS and allowed DOS users (especially users of DOS 4.x and older) to utilize more than two hard disks.
DOS Bug
The bug is quite visible in the MS-DOS 4.0 source code. In MSINIT.ASM (IO.SYS/IBMBIO.COM module), DOS calls INT 13h/08h and stores the number of disks in the HNUM variable. No attempt is made to validate the value returned by INT 13h.
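For reference, this is what that query looks like at the INT 13h level. The fragment below is not the MS-DOS source, just a sketch of the documented BIOS call; the hnum label is a hypothetical stand-in for the HNUM variable mentioned above.

; Sketch of the BIOS query MSINIT.ASM performs -- not the MS-DOS source.
        mov     ah, 08h         ; INT 13h function 08h: get drive parameters
        mov     dl, 80h         ; of the first fixed disk
        int     13h
        ; On return, DL holds the number of fixed disks known to the BIOS.
        ; DOS 2.x-4.x stores the value as-is, with no check against the
        ; two drives the rest of the code can actually handle.
        mov     [hnum], dl      ; hypothetical stand-in for DOS's HNUM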
Further down in MSINIT.ASM, DOS sets up the hard disks, calling the SETHARD routine for each drive, but it will not set up more than two. Trouble will start near the SETIT label, where the DRVMAX variable may end up with a number much higher than the number of drives that SETHARD was run on.
Eventually, disaster strikes in the $SETDPB routine in the DOS kernel. The code near the LOG2LOOP label attempts to calculate the cluster shift for the FAT file system, but gets stuck in an endless loop because the BPB for a drive was never initialized and contains zeros.
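The failure is easy to see in miniature. The fragment below is not the actual $SETDPB code, only a sketch of the same kind of log2 loop: with a sane BPB the sectors-per-cluster value is a nonzero power of two and the loop terminates, but with an all-zero BPB the carry flag is never set and the loop never exits.

; Sketch of a cluster-shift (log2) loop -- not the actual $SETDPB code.
; AL = sectors per cluster from the BPB; the shift count is returned in CL.
compute_shift:
        xor     cl, cl
log2loop:
        shr     al, 1           ; move the lowest bit into the carry flag
        jc      found           ; bit found: CL now holds log2 of the input
        inc     cl
        jmp     log2loop        ; AL was 0: carry is never set, endless loop
found:
        ret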
This bug is present in every DOS version with hard disk support before 5.0, that is, in DOS 2.0 up to and including DOS 4. In my experiments, all these DOS versions hang when booting on a machine that exposes four BIOS drives. MS-DOS 4.01 from April 1989 still hangs, and so does Russian MS-DOS 4.01 from February 1990.
It is clear that the bug went unnoticed or at least unfixed for a number of years simply because PCs with more than two hard disks were extremely rare to nonexistent.
DOS 5.0
It is likely that DOS 4.0 (1988) was released just before PCs with multiple hard disks became a thing. By the time Microsoft started working on DOS 5.0 in earnest in 1990, EXTDISK.SYS and ASPIDISK.SYS were certainly well established, and the problem must have been known.
MS-DOS 5.00.224 Beta from June 1990 (the oldest DOS 5.0 beta I could test) does not suffer from the bug described above, and shows four hard disks exposed by the BIOS in FDISK.
Further related work was done in August 1990 with the following comment:
M011 8/07/90 CAS msinit.asm rewrote lots of spaghetti code
msbio1.asm used for initializing hard drive
partitions. fixed bugs 2141, 2204,
1866 and 1809 and prepared for
zenith/etc. support
The above is an excerpt from an MS-DOS 5.0 OAK (OEM Adaptation Kit). The first entry in the relevant file (MSBIO.TAG) is dated 7/17/90, which leaves open the question of who actually fixed the problem with more than two hard disks and when, since it must have been fixed by June 1990.
There is another rather curious data point:
The above screenshot shows that Japanese IBM DOS 4.05/V does not hang and FDISK correctly shows four hard disks. Here’s the boot screen of said DOS version:
This shows that the fix made it into at least some DOS 4.x code base. However, the system files in IBM DOS J4.05/V are dated October 1990, decidedly newer than the MS-DOS 5.00.224 Beta.
SCSI HBAs
In any case, the fix was well known to SCSI HBA vendors. Starting with the AHA-154xC, Adaptec offered an option for “BIOS Support for More Than 2 Drives (MS-DOS(R) 5.0 and above)”. When this option was disabled, the BIOS kept the total number of hard disks to no more than two, just like the AHA-154xB and earlier. When enabled, the Adaptec BIOS would expose all the hard disks it found as BIOS drives 80h, 81h, 82h, 83h, 84h, etc.
BusLogic adapters offered a more or less identical setting to solve the identical problem.
When this setting was enabled, DOS 5.0 and later no longer needed ASPIDISK.SYS or any other vendor specific driver. DOS itself could directly use the BIOS to access all hard disks in the system (limited by the number of available drive letters).
I believe clone BIOSes with support for more than two IDE hard disks generally only started appearing around 1994, and they assumed (not unreasonably) that the user would be installing DOS 5.0 or later. In the worst case, the BIOS could usually be set up to not detect the 3rd and/or 4th hard disk. It was the SCSI HBAs that were prepared to deal with trouble.
APAR IR86346
Completely by accident, the puzzle of the DOS fix was solved while I was looking for something totally unrelated. In an IBM announcement letter from October 1990, the following sentence jumped out at me: “DOS 3.3 and 4.0 support up to two fixed disks in a system. DOS 4.0 supports up to seven fixed disks when corrective service diskette (CSD) #UR29015 is installed.”
I happened to have CSD UR29015 on hand, so I looked at the included documentation. The README file states: “APAR IR86346 requires DOS 4.0 to be installed with NO MORE THAN two fixed disk drives before installing corrective service. Once corrective service is installed, you can attach the additional fixed disk drives.”
In the APARLIST file there’s a table which includes the following entry:
CSD      APAR     KEYWORD  COMPONENT  ABSTRACT
-------  -------  -------  ---------  --------
...
UR27164  IR86346  ABEND    IBMBIO     DOS 4.0 hangs with more than 2 hardfiles
...
Yep, that’s exactly the observed problem! With more than two hard disks, DOS 4.0 and older simply hang.
There’s also a table of fix releases in the same file (excerpted):
...
CSD UR25066 05/10/89
IFD UR25788 06/07/89
IFD UR27164 09/25/89
IFD UR27749 10/11/89
...
CSD UR27164 (which was the first to include the fix for APAR IR86346) was released on September 25, 1989. The previous CSD from June 1989 did not include the fix.
The documentation does not lie and with CSD UR29015 applied, IBM DOS 4.0 has no trouble booting up and seeing four hard disks:

That clarifies the timeline a lot. MS-DOS 4.01 from April 1989 could not possibly contain the fix. IBM fixed the bug sometime in Summer 1989, which is why IBM DOS J4.05/V includes the fix. Microsoft’s Russian MS-DOS 4.01 was likely branched before mid-1989 and the fix was never applied.
And this also explains why the earliest MS-DOS 5.0 betas don’t have the problem with more than two hard disks, even though there is no record of Microsoft fixing it. Because Microsoft didn’t—IBM did, a few months before the work on MS-DOS 5.0 started.
The only minor remaining mystery is who opened APAR IR86346. It could have been an external customer, although both the Adaptec AHA-154x HBA and the Compaq Deskpro 386/25 were designed to protect against DOS hanging. Then again, perhaps some other SCSI HBA was not quite so careful and could trigger the hang with multiple hard disks.
It is also possible that the bug was discovered and internally reported when IBM was working on its own SCSI adapters, released in March 1990 together with the first wave of PS/2 machines with SCSI drives.
Actually, no — Linus was always going to make something Unix-y. Because of the whole GNU thing, remember? Linus wrote an OS kernel, not a whole OS. Without GCC, without a pile of GNU tools to run, without XFree86, there would have been no Linux.
And yes, Windows 3.x compatibility was super important, and anyone besides Microsoft was going to have a pretty hard time (IBM got very close with Win-OS/2).
I don’t think that win3.x compatibility was a worthy goal, and the win32 API didn’t even exist in complete form in 1990-91. And you can see, with Wine as an example, what it means to chase win32 compatibility.
Anyway, it’s hard for a hobbyist to clone or implement a general-purpose OS kernel in a reasonable time frame without good, detailed books about the cloned OS’s architecture and without source code examples. So there was no other choice but Unix.
Designing a new kernel used to be a common part of Comp Sci courses. Plenty of new GUIs have been introduced with most of the hobbyist ones running out of steam after simple programs like solitaire games were developed. To get any significant programs written tends to require third parties and those are hard to get inspired without a sizable user base to target.
NT happened to fit what an OS in the 90s needed to be and the mistakes of IBM and the various Unix vendors gave MS the opportunity to get it established.
Win16 became the major application target by 1992. Before that, it was unclear whether any GUI would break out of relative niche status. MS wanted Win16 support in as many places as possible because it took years to even get that many programs on the market and starting the process again would put software that much further behind.
> I don’t think that win3.x compatibility was a worthy goal, and the win32 API didn’t even exist in complete form in 1990-91. And you can see, with Wine as an example, what it means to chase win32 compatibility.
But the win32 API is never in complete form, even now.
In 1991, the win32 API already existed and was being used in NT development.
Re “Windows NT being a better designed OS” – there’s an interesting article on the topic I once came across: https://blogsystem5.substack.com/p/windows-nt-vs-unix-design
Regarding SMP, NT was running SMP just fine on Intel systems by 1994, although it was relatively useless – few Win32 apps existed, things on x86 machines were generally more I/O or memory bus speed bound than CPU bound, and Intel SMP machines were expensive.
Microsoft put a lot of their SMP work into their MIPS and Alpha versions since it was assumed that would be where the future would be, particularly for SMP systems. We all know that was a future that didn’t happen.
SMP Intel systems did exist before that; there was one from DEC I used that seemed like it was circa 1992 or 1993 that had 3 486DX/33’s in it and ran SCO SVR3.2. That was a much more sensible use case since the machine had lots of users on it attached via terminals, developing software which would run on much more modest single-CPU DECpcs in the field. I am unclear on if the machine was truly SMP or was actually asymmetric.
Speaking of SMP:
Was the HP Vectra XU 5/90 perhaps the first “desktop” SMP x86 computer that was somewhat widely sold?
It seems like it was sold at a price that made it reasonable to buy just as a single processor computer, with one socket empty.
(These are somewhat weird, as their BIOS reports the configuration in a way that causes trouble running Linux. I’ve never dived into it, but I think they report PCI cards as if they were EISA cards or something like that.) NT4 runs great on them and they are perhaps the only Pentium 1 class PCs where running Winamp causes no noticeable slowdown.
It is a funny thing, NT was 100% SMP ready back in ’93, but it took 10+ years for SMP to really go mainstream in the form of hyperthreading and multi-core CPUs. Intel didn’t help by making SMP a premium feature for a very long time.
I suspect that Win9x was a big factor as well, because as long as the vast majority of users ran Win9x, what was the point of developing great SMP-friendly apps that no one cared about?
This is a case where I’m not sure if Win9x prevented the adoption of SMP hardware or if the scarcity and expense of SMP hardware kept Win9x viable for longer than it should have been.
There were two problems that kept NT with SMP from taking hold.
I have the Micron variation on the dual socket Pentium 90 motherboard. By the time it had been validated, the Pentium 150 was available for less. Dual sockets were often poor values until the clock speeds finally plateaued.
Windows 9x stayed around not just for games. WordPerfect released WP 7 for Win9x 6 months to a year before WP 7 was made compatible with NT. Similar issues existed with common bonus software not working on NT even when the main application did. XP was well established before nearly all new software could be expected to run on an NT system. Conveniently, that was about the time dual core started becoming mainstream and corporate refresh cycles permitted the upgrades.
@Richard Wells:
Oh, the WP issue is wild!
How common was it that business software didn’t run on NT? What type of business software would that be? Was that kind of questionably coded Windows 3.x software disguised as Win9x software?
I remember having different (remote installation) packages for both OSes, where the installation paths and the combined runtime libraries were different; maybe it was also a thing because 95 didn’t enforce security on the folder level (like writing to system32) or user management (in NT you had the “All Users” and “username” folders for your settings, Start menu and so on, where 95 just let you do all things at all times… a setup program that didn’t know about that was likely to fail on NT).
I still use some legacy apps where the current security settings are sometimes problematic – GoldWave 4.03, for example, by default insists on putting its goldwave.ini into the Windows folder, but then it only runs when elevated. It is possible to put the file into an AppData folder (C:\ProgramData, I think) – but it doesn’t know about that, and I have to do it manually.
>> I wonder what Compaqs plan was when they bought DEC?
Nobody really knows that. I guess it was somehow Eckhard Pfeiffer’s plan for world domination. Just like BMW bought Rolls-Royce and Mercedes married Chrysler at that time. But just like the others, it turned out to be a deal that got them into serious trouble, without a clear target (other than to get “real big”).
>> NT as “revolutionary”: Well, it clearly built on the OS/2 foundation and the stability goals behind it, and owes as much to Gordon Letwin (the multithreading philosophy of OS/2) as to Dave Cutler, but it also added Windows 3.1 compatibility at exactly the right time, plus multi-user security, and that pretty much made it usable for a lot of use cases.
Now, is it just a (near-complete) rewrite of OS/2 2.x where all the 16-bit code was left out and multi-user was added? Or is it a “new” system? In any case, it’s the one system that worked well, because it combined all the positive aspects and put them into a usable (and, available!) product. So in that regard, the comparison with the iPhone (which WAS revolutionary, even though all its technology had also been seen before) isn’t a bad one.
>> Win95 as a problem for SMP: I think that SMP was not something a normal user needed during the 90s. We had some dual-CPU machines running NT, but that didn’t necessarily make them faster than a stinkin’ cheap single Pentium running ’95. You needed special use cases to justify the expense. For the mainstream, it was only around 2005 that you really had so many background processes and animated web sites eating CPU power that a normal user could profit from more than one CPU core. XP/2000 were well established at that point and 95 had already been left behind.
So, no, multiple processors just were too damn expensive for their (comparatively) modest advances to take off in the 90’s – and that surely was reason enough to go into the GHz race with Netburst (and Itanium as a side project).
NT was written from scratch, but it was strongly influenced by the developers’ previous work on VMS and OS/2. The code was new, the concepts and ideas behind it much less so.
Developers (if no one else) could have always profited from SMP, compiling on multiple cores in parallel really helps. Many (though far from all) CPU-heavy tasks like compression can also be parallelized. But, again, of course no one bothered because so few systems could have used that in the Win9x days.
@_RGtech:
Re all those mergers in the 90’s:
It’s funny that once the Eastern Bloc planned economies were in most places switched over to capitalism, capitalism tried the thing that seems to have been one of the biggest mistakes of the Eastern Bloc, i.e. the mergers into larger and larger colossus companies, like the East German VEB Kombinat that absorbed everything somewhat within the same field.
Yes… the seed for those mergers was planted in the 80’s (“Greed is good!”), and then there were all these new opportunities and formerly unexposed markets. Didn’t really work out though.
Regarding all-new code in NT: I’d bet that some OS/2 parts were at least used in the initial phase (when NT was still known as “OS/2 3.0”), just to have something to run or test the new code on… but in the end, it doesn’t matter. The relevant part is: the combination of _all_ its features was new, advanced enough, and still usable, at the right point in time. That was the one revolutionary thing that led to its widespread use today (together with permanent refinements, like XP’s multimedia capabilities).
But even if NT had utterly failed, I still don’t think OS/2 would’ve saved the day… not in the hands of IBM alone. Possibly we could have switched to Linux, maybe some well-made commercial version or even one from MS, after the Win9x line faded out (perhaps it would have had a few years more if NT hadn’t existed, but not much longer).
And the mentioned developers _would_ have had a good justification for a dual PPro system back then. They surely didn’t use Win95, except for testing 🙂
But the big sales numbers always came from “normal” office users, and even if those would have used only NT or OS/2 with SMP back in the day, they still would not have bought those expensive multi-processor machines (not even thought about it). So I don’t see how the SMP technique in general would have benefited from a non-existing Win95. It just wasn’t the time for it. (Ten years later, the world had changed…)
My point was simply that Win9x actively prevented the adoption of SMP. Prior to Windows XP, SMP required a non-mainstream OS, which kept SMP a niche (mostly server only) feature. This in turn kept SMP systems rare and expensive and further slowed down their adoption.
SMP didn’t have to be expensive, as shown e.g. by the ABIT BP6. But Win9x pretty much forced it to be a “premium” feature with a premium price.
As far as I’m aware, NT was really written from scratch, starting with the executable format, development tools, bootloaders, and everything. This was very different from OS/2 2.0 which was more like OS/2 1.2 with paging and a 32-bit API and multiple DOS boxes. This was also reflected in the development time.
Regarding “expensive” – you still paid for a second CPU. And even if your standard basic OS had supported that, who on earth (except for the aforementioned developers, with good enough reasons) would have wanted to spend money on that second CPU for little benefit*, when it was way better spent on more RAM? The bog-standard office user certainly sometimes wished for more power, but that usually came down to hard disk performance (even if swapping was never necessary, but that was kind of rare) or the local network/server… nothing a second CPU would have solved. So it mostly just didn’t matter whether the OS was capable of SMP. Thus it stayed a niche product.
* = we tried it out, really. Sure, for example zipping a very big file didn’t slow down your machine, but how often did you do that?
The best theoretical case we found was for the (not really official) CD-burner PC… but then we planned for an all-SCSI setup, which was made obsolete by the advent of UDMA66.
Then again, we were only “normal” IT support staff… no developers here 🙂
There were a number of relatively budget dual Slot/Socket i820 motherboards about a year after the Abit design. The Iwill DS133-R is one example that turned up in a quick search. That was one of the better i820 boards but only supported a total of 256 MB of RDRAM. Not quite enough memory for anything that would benefit from a second processor.
The Abit took advantage of a mistake by Intel not restricting SMP capability in the more budget offerings. The Abit was not without its own problems. Two processors tightly packed produced enough heat to make throttling a possibility.
I remember the whole 2000 – 2004 period as having so many problems that the companies I worked for were very cautious in introducing new hardware. 2005 had dual core machines that were reliable and 2006 meant getting dual core on the desktop. Intel did have a single core Core2 Celeron SKU but that saved all of $5 over its dual core counterpart.
I will forever remember the P4 era as “the dark ages” in computing.
Bad CPUs, bad and confusing naming schemes, bad heating problems, bad capacitors… laptops were somehow better except for a few P4-M and PM-4 models, and those with “hairline cuts” on the traces that only worked when you “warped” your laptop.
Fun times. No collectibles.
The P4 was also the start of confusing sockets vs. processor names and whatnot, which led to the utter mess of the “Core-whatnot” naming scheme.
The P4 was available with either Socket 478 or 775, and IIRC the first processors that stopped being called Pentium 4 were also made for Socket 775, and then Intel just seemingly randomly named their products with no easy way to correlate name, age, performance, socket and whatnot.
MiaM asked: “How common was it that business software didn’t run on NT”
I don’t remember business software. Before Windows NT 4.0, applications written for the Windows 9x shell would not run in NT.
But the really big problem was that Windows NT uses Unicode internally and Windows 9x does not, so string handling between the two systems is problematic, if I remember correctly.
About multiprocessors: the first multiprocessor PCs were proprietary or special configurations (as Michal showed in another article about OS/2 SMP). SMP only started to become “standard” with the MPS (MultiProcessor Specification) in 1994-1995 according to Wikipedia; later ACPI supplanted it (I don’t have a date), but I think it was not mainstream until the Pentium 4 HT and Athlon 64 X2.
Also the applications had to use multiple processes or multithreading to take advantage of it, which was not the case for a long time, with exceptions for server software like Microsoft SQL Server. There was little to gain if Microsoft Word ran a thread for printing or spell checking, for example.
And yes, probably Windows 9x prevented adoption of SMP.
I’d add RDRAM to the list of bad and confusing things. And CPU packaging with exposed silicon that could be damaged.
I think the cooling problems were to a large extent caused by the rapid change — from Pentium II/III processors that had 15-30W TDP, we suddenly went to CPUs with 60-130W TDP, and that’s simply a completely different ballgame. People needed to learn how to design and build such systems.
But those dark ages also gave us the Pentium M and Opteron… so it wasn’t all bad.
@_RGTech:
I don’t believe too many parts of OS/2 were used in the initial phase of the NT work. Dave Cutler actively hated OS/2 and was making sure his new system was neither OS/2- nor x86-tainted. He also didn’t get along well with Gordon Letwin. I think this is quite well presented in “Showstopper”.
Of course, once the NT team grew big enough and took over former OS/2 developers, some parts of the new system began to look similar. You don’t want to code the same thing in two completely different ways just for the sake of it, after all.
And from what I remember, OS/2 1.x was used as a coding/building platform for WinNT in the early stages of development. Just because it was so much more stable and capable than DOS. Only at some point did WinNT become stable and capable enough to self-host its development (and Cutler encouraged dog-fooding the product).
Okay, that’s a definition I can follow.
@MiaM: There also was Socket 423 (that was the one using RD-RAM). And the naming issues didn’t end with the desktops (what kind exactly was a Celeron D/Pentium D/Pentium Dual-Core again? Which one of those had two cores?). Aside from the Centrino brand (which was just “an intel CPU and an intel Wifi card”, but didn’t say which CPU) we also had Pentium 4 Mobile/Mobile Pentium 4/Pentium 4-M/Pentium-M… (-> who wants to be a millionaire? put these in the correct order of their launch dates!)
@Michal: Right, the Pentium M was also a product of that time, and survived. But that wasn’t the one with the big expectations back then – more like a low-end CPU on life support and only for mobiles, until a better P4 could replace it there. (Which didn’t happen.)
For the Opteron, I’m not sure about those – they also weren’t the big thing then. AMD64 on the other hand, yes, that was a future-proof technology and showed Intel how to do 64 bit. (But just like CPUID and NX, it took a few generations until it really worked, and got used :))
The whole first-generation DDR computers weren’t something I would want to have again.
Yeah the Pentium M was kind of “oops the P4 is too much of a space heater, we need something else for laptops” thing. But I remember that when we got a ThinkPad prototype with Pentium M, we quickly found out that at “only” 1.6 GHz (or whatever it was exactly), it was quite a good performer, and wouldn’t fry eggs at the same time.
Similar with Opterons, at “only” 1.2 GHz the early Opteron we had was quite a decent performer. And while Opterons took a while to spread, AMD64 was the future.
With the P4, Intel may have been a victim of “fighting the last war”. The P6 core scaled from 150 MHz to almost 1.5 GHz within about 5 years. Intel thought they could do the same with the P4 and go up to 10 GHz. But they just hit a wall and found that going past 3 GHz was veeeery difficult, and going from 3 to 4 GHz was massively harder than they expected.
FWIW Socket 423 was not RDRAM exactly, although in practice it mostly was. It used Intel’s P4 FSB and the memory controller was in the northbridge, not CPU. There were some Socket 423 boards that used the Intel 845 chipset and DDR.
The Pentium M wasn’t the first response to the P4 heat issue. The return of the Coppermine P3 followed by Tualatin was the initial correction. Intel was lucky to be large enough to have teams working on Itanium and P4 while still having resources to put into the whole P3 to Pentium M effort.
The Pentium 4 Tejas would have been a reasonable performer at the top end. The mass market chips had the problem of needing a lot more power for no benefit. After the California energy crisis, that was not a good path. It did seem like everyone had figured out that a higher clocked Pentium M was to be the future two years before Intel finally did.
@Fernando:
I’m 99% sure that software compiled for Win9x would use the non-Unicode versions of the APIs when running under NT. There have been a few edge cases where this could cause problems; IIRC at least one of them has been discussed in a post on this blog, but that might have been an error in the SDK rather than in the actual OS implementation.
What applications were written for the Win9x shell? It seems like mostly utilities that would “hide” in the systray corner would have trouble running on NT 3.5x.
My impression, with the anecdotal evidence of a single larger corporation, is that companies kept Windows 3.x until NT4 came out, and skipped Win9x. For any larger company where an IT department had to keep hundreds of computers running, they would want to either be able to just run a script that formats and copies a full Win3.x setup to the local disk of a computer, or run an OS that protects itself from the user fiddling around and breaking things.
Btw, at my anecdotal-evidence corporation, a few “regular” users got NT 3.51 a short while before NT4 was rolled out. For the most part it seemed to work fine, except that when users were given local admin rights there was no protection against installing drivers intended for Windows 3.x, breaking things. In particular I remember someone installing the supplied disks with Windows 3.x drivers for a printer on an NT 3.51 box, breaking the printing system totally, and IIRC after spending ages the IT department just gave up and either reinstalled NT 3.51 or designated that computer as one of the first to get NT 4.
=============
Re SMP:
I would think that as soon as users started to actually run more than one program at a time SMP would increase performance.
It would be interesting to see any tests of this, i.e. run NT4 in an emulator with one or two CPUs and run those benchmarks that afaik use parts of the code from common business applications. I assume that this was done back in the day too, but I don’t know what results are available online. I assume you’d have to dig through various computer magazines from that era.
=============
Oh, I had forgotten about Socket 423. In my memory 478 was the one for RDRAM and 775 for everything else.
Re the core-whatnot naming scheme: At the time “Core 2” came out it sounded like it would be the dual core version, and “duo” perhaps hinted at hyper threading. But it probably wasn’t.
While we are at it bashing naming, I’d like to add that Klamath, the code name for Pentium II, sounds like an experience that gives you chlamydia (which is spelled with K rather than Ch in some languages). AMD Sempron isn’t great either, as at least in Sweden Semper is a brand for baby food.
I wonder if the Honda Fitta debacle was what finally made companies aware of that they need to check if their product names causes trouble in some part of the world. (Honda found out before actually releasing the car that “Fitta” is slang in Swedish for cunt, but afaik they had already presented the name to the press and whatnot).
The explanations on Wikipedia don’t help either:
https://en.wikipedia.org/wiki/Athlon_64
“The Athlon 64 is a ninth-generation, AMD64-architecture microprocessor produced by Advanced Micro Devices (AMD), released on September 23, 2003.[1] It is the third processor to bear the name Athlon, and the immediate successor to the Athlon XP.[2] The Athlon 64 was the second processor to implement the AMD64 architecture (after the Opteron) and the first 64-bit processor targeted at the average consumer”.
I.E. so it’s a 9th generation something, 3rd thing with a particular name, and the 2nd implementing something.
Its successor was the Athlon II, and its predecessors were the Athlon XP and just the Athlon. Clear as mud.
Also the Xeon series from Intel is really clear as mud.
Honda Fitta? Never heard of it. But Mitsubishi Pajero… yeah, that. And Lada Nova. Didn’t really work in Spanish-speaking countries 🙂
As for the sockets and names, I think we can agree that there was absolutely no system behind it in those years. The Core Solo/Duo and Core 2 naming was problematic back then (Core Duo was really rare, so using the number 2 to add a generation was not the best idea), but in hindsight I can understand that. The succeeding Core i line then has shown that a systematic naming scheme helps sell things, even to the point where AMD (itself not really good at naming, with all the Athlon-x2-II-64-FX-XP subversions and the Duron-Sempron-Opteron-Phenom brands) basically copied it for its Ryzen.
I’m not sure why Intel thought it would be necessary to add a “Pro” designation now, and I still despise the extended use of “Pentium” and “Celeron” as their low-low-budget crippleware (which is nearly as confusing as the Xeons… you just can’t see the generational relationship to the corresponding Core-i-line), but well, here we are again.
(I really didn’t know that there were S423-based DDR systems… as the S423 was a rare beast in itself, those couldn’t have been many! I only got one lone Willamette processor out from the trash in all those years – way less than Pentium Pro 1M-Versions, 80186s, or even the Slot-1-Tualatins! And yeah, that was from an RDRAM system.)
Re SMP: Sure, dual processing did have benefits with “real” multitasking (where one process really *did* heavy calculations, which wasn’t an issue for most “normal” users with a few office applications and the occasional database). But I still say: it mostly didn’t matter. As long as the hard drives were slow and RAM expensive (thus: swapping to disk), a second CPU wasn’t worth it for most users.
It changed over the years – the web was slowly getting stronger and instead of bandwidth, you were more and more limited by the rendering abilities of your CPU; your antivirus had to scan just about everything including temporary internet files for 100.000 signatures; you had more and more background processes and utilities and surely your permanent mailbox connection; and not least you had the ability to use standby and resume to proceed where you left off. But in the late 90’s, when Win9x still was a thing? Nah. Few people opened more than 5 windows back then (even on NT or 2000), and closed everything daily when they went home. If printing a document slowed down your system, you took the time to go to the printer, maybe replace your cup of coffee, wait for the printout to finish, and walked back. Loading programs or opening files or connecting to your database on the other side *were* reasons to complain, but then we’re back to RAM and I/O.
BTW, for years my company (a big one!) had only Windows NT4 and 95 in use (depending on your needed connectivity or mobility; NT didn’t fly on laptops and 9x wasn’t the best choice for heavy networking). 3.x died out about 1997, and W2K took until 2001 to get validated. The typical lifecycle was 3 years, so the last 9x/NT systems were replaced directly with XP in 2003/04. So much for “skipping W9x” 😉