DOS Memory, Managers & Extenders, Part I

To understand why the maddeningly complex world of DOS memory managers and extenders came to be, it’s necessary to understand the evolution of the PC platform. Even though memory managers and DOS extenders reached their peak on 32-bit 386 and later systems, the foundation for their existence was laid with the original IBM PC.

To recap, the IBM PC, released in 1981, was built around the Intel 8088 chip. The PC was designed as a short-lived stopgap product and was meant to be simple and cheap, rather than extensible and future-proof. Of course, it was precisely the low price and relative simplicity that made the PC into a major force, taking everyone, including IBM, by surprise.

The Intel 8088 (and 8086) processor was a late 1970s design, introduced in 1979. It had a 20-bit address bus, which meant it could address up to 1MB RAM. That was far more than the 64KB which the typical 8-bit microprocessors of the day (Z80, 6502) could address. In fact the choice of a 16-bit CPU was a major factor in the IBM PC’s success. The first PCs were sold with 16, 32, or 64KB RAM. Memory was very expensive and IBM initially did not even offer any expansion options to fully populate the PC’s address space.
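
The 8088's 16-bit registers reached the 20-bit address space through segmented addressing, with a physical address formed from a segment and an offset. A one-line illustration in C (the function name is mine, purely for illustration):

```c
/* Real-mode address translation on the 8086/8088: the 16-bit segment
   is shifted left by four bits and added to the 16-bit offset, giving
   a 20-bit physical address (hence the 1MB limit). */
unsigned long linear_address(unsigned segment, unsigned offset)
{
    return ((unsigned long)segment << 4) + offset;
}
```

For example, segment B800h with offset 0000h yields physical address B8000h, the start of color text-mode video memory.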

Because the 1MB address space of the PC was seemingly insanely large, the designers did not worry about conserving it. The first 640KB (addresses 00000h-9FFFFh) were reserved for regular (so-called base or conventional) memory, directly usable by applications. Again, it should be understood that for several years, PCs could not even be upgraded to have 640KB RAM. The next 128KB (A0000h-BFFFFh) were reserved for video memory, followed by 128KB (C0000h-DFFFFh) reserved for miscellaneous adapter cards. The last 128KB (E0000h-FFFFFh) were reserved for the system ROM, which included the BIOS and built-in BASIC (the latter often not included on clones).

For a while, the address space was big enough and additional memory could be accessed directly without any tricks. But that did not last long. By the mid-1980s, users of memory-hungry applications such as spreadsheets (Lotus 1-2-3) started running out of memory and out of address space, meaning that more memory could not be added to the system. A new solution was required.

In 1984, IBM released the PC/AT, built around the Intel 286 processor. The 286 had 24 address lines and could directly address up to 16MB RAM—although the early PC/AT models were sold with 256KB or 512KB RAM, still under the 640KB limit. However, the PC/AT could be expanded to 3MB RAM from the beginning. Unfortunately, there was a catch. To address more than 1MB RAM, the 286 had to run in so-called protected mode. But DOS could not run in protected mode and the CPU had to be in so-called real mode, compatible with the earlier 8088 processor—including the 1MB addressing limit. Thus the AT was not a solution.

DOS requiring real mode wasn’t the only problem. In 1985, 286-based systems were relatively rare compared to the millions of IBM PC and PC/XT systems and clones already in use at the time. Users of those systems wanted more memory but were not willing to replace the entire system. A successful solution had to be compatible with the vast installed base of existing systems.

Expanded Memory

In 1985, the LIM EMS (Lotus/Intel/Microsoft Expanded Memory Specification) was created. The initial release was called EMS 3.0, for reasons that may be lost to history. EMS, or expanded memory, required a special memory board and a software driver which provided a uniform interface to applications. EMS utilized the well-known technique of bank switching, which used a fixed “window” in the processor’s address space to selectively map a portion of memory on the EMS board. The initial EMS specification allowed for up to 4MB of expanded memory, which would have been extremely expensive at the time. Before the end of 1985, an updated EMS 3.2 specification was released, with support for up to 8MB of expanded memory.
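
To give a flavor of the programming model, here is a minimal sketch of the EMS application interface on INT 67h, in Borland-style real-mode C. The function numbers (41h get page frame, 43h allocate, 44h map, 45h deallocate) come from the LIM specification; everything else, including the skipped check for the EMM driver's presence, is simplified for illustration:

```c
#include <dos.h>
#include <stdio.h>

int main(void)
{
    union REGS r;
    unsigned frame_seg, handle;
    char far *page;

    /* Function 41h: get the segment of the 64KB page frame window. */
    r.h.ah = 0x41;
    int86(0x67, &r, &r);
    if (r.h.ah != 0) return 1;      /* AH nonzero = EMM error */
    frame_seg = r.x.bx;

    /* Function 43h: allocate one 16KB logical page; handle in DX. */
    r.h.ah = 0x43;
    r.x.bx = 1;
    int86(0x67, &r, &r);
    if (r.h.ah != 0) return 1;
    handle = r.x.dx;

    /* Function 44h: map logical page 0 at physical page 0 of the frame. */
    r.x.ax = 0x4400;                /* AH=44h, AL = physical page 0 */
    r.x.bx = 0;                     /* logical page 0 */
    r.x.dx = handle;
    int86(0x67, &r, &r);
    if (r.h.ah != 0) return 1;

    /* The mapped page is now ordinary addressable memory. */
    page = (char far *)MK_FP(frame_seg, 0);
    page[0] = 42;
    printf("EMS page mapped at %04X:0000\n", frame_seg);

    /* Function 45h: release the handle and its pages. */
    r.h.ah = 0x45;
    r.x.dx = handle;
    int86(0x67, &r, &r);
    return 0;
}
```

Once function 44h returns, the mapped 16KB page behaves like ordinary memory within the page frame. The catch, as noted below, is that only a 64KB window's worth of expanded memory is addressable at any one time.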

The EMS technology had the virtue of being usable on both old PCs and the newer 286-based systems. In the late 1980s, some AT-compatible chipsets (notably the Chips & Technologies NEAT chipset) even provided EMS-compatible hardware without the need for a separate EMS board.

But expanded memory wasn’t without problems. Bank switching was relatively fast since memory did not need to be copied, but it significantly complicated software design because the large memory was not accessible all at once. In addition, running code out of EMS was difficult.

Extended Memory

Meanwhile, the PC/AT and compatible systems could use more than 640KB RAM, but in a completely different manner. In the early 286-based systems, additional memory was also installed in the form of adapter boards, but those were much simpler than EMS boards. The AT memory boards simply presented RAM on the system bus, without any additional logic. The challenge was using that memory from DOS.

IBM’s PC DOS 3.0, shipped with the PC/AT, came with a VDISK.SYS driver which allowed the use of so-called extended memory (memory above 1MB, not accessible from real mode). At the time, that was about all DOS users could do with any memory above 1MB. Of course most of the early PC/ATs didn’t have any extended memory, so the issue was initially somewhat academic.
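
Beyond the copy service described a little further below, about the only other BIOS facility was a size query, INT 15h function 88h, which returned the amount of extended memory in KB. A minimal sketch (Borland-style C; the function name is mine):

```c
#include <dos.h>

/* Ask the AT BIOS how much extended memory is installed
   (INT 15h, function 88h); AX returns the size in KB above 1MB. */
unsigned extended_memory_kb(void)
{
    union REGS r;
    r.h.ah = 0x88;
    int86(0x15, &r, &r);
    return r.x.ax;
}
```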

As mentioned earlier, the CPU had to be in protected mode before accessing memory above 1MB. That was a problem because the Intel 286 processor was designed to switch from real to protected mode, but not back. After a reset, the CPU always started in real mode. IBM solved the mode switching challenge by adding special circuitry which could reset the CPU without disturbing the rest of the system state (such as memory contents or device states).
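
In outline, software triggered that circuitry by storing a resume address in the BIOS data area, recording a shutdown reason in CMOS, and asking the keyboard controller to pulse the CPU's reset line. The sketch below is a minimal illustration in Borland-style real-mode C (assuming a large memory model; the function names are mine, and the shutdown code 0Ah is one of several documented values), not production code:

```c
#include <dos.h>

extern void far resume_point(void);   /* where execution continues */
typedef void (far *resume_fn)(void);

void reset_cpu_to_real_mode(void)
{
    /* 1. Store the resume address at 0040:0067 in the BIOS data area. */
    resume_fn far *vector = (resume_fn far *)MK_FP(0x40, 0x67);
    *vector = resume_point;

    /* 2. Set the CMOS shutdown status byte (register 0Fh) so the BIOS
       resumes via the 40:67 vector instead of doing a cold boot. */
    outportb(0x70, 0x0F);
    outportb(0x71, 0x0A);

    /* 3. Have the keyboard controller pulse the CPU reset line
       (command FEh written to port 64h). */
    outportb(0x64, 0xFE);

    for (;;)
        ;   /* spin until the reset takes effect */
}
```

A real implementation would also disable interrupts and save the stack pointer before pulling the trigger.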

In Intel’s defense, it must be noted that the 286 was introduced in 1982 and wasn’t designed with DOS in mind. Intel could not imagine why anyone would run a 286 in the crippled real mode, rather than almost immediately switching to protected mode and staying there.

To use extended memory, DOS-based software like VDISK.SYS had to go through a number of steps. First, the BIOS would be called to perform a copy operation between extended and base memory. The BIOS would switch to protected mode and perform the copy, moving up to 64KB at a time. Next, the BIOS had to reset the CPU, perform a limited re-initialization of the processor, and finally return to the caller. This process obviously wasn’t particularly fast. However, it was still blazingly fast compared to the alternative of using a disk instead of RAM.
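
The copy service in question was INT 15h, function 87h. The caller supplied a table of segment descriptors telling the BIOS where to copy from and to; the sketch below (Borland-style real-mode C, illustrative function name) builds the table per the documented layout and performs one move of up to 8000h words:

```c
#include <dos.h>
#include <string.h>

/* Call the PC/AT BIOS block-move service (INT 15h, function 87h).
   The caller builds a six-entry descriptor table; the BIOS fills in
   the dummy, GDT, CS and SS entries itself, so only the source
   (entry 2) and target (entry 3) descriptors are set up here. */
static int bios_block_move(unsigned long src, unsigned long dst,
                           unsigned words)   /* words <= 8000h (64KB) */
{
    unsigned char gdt[48];   /* six 8-byte descriptors */
    union REGS r;
    struct SREGS s;

    memset(gdt, 0, sizeof gdt);

    /* Source descriptor: limit FFFFh, 24-bit linear base, access
       byte 93h (present, ring 0, writable data segment). */
    gdt[16] = 0xFF; gdt[17] = 0xFF;
    gdt[18] = (unsigned char)src;
    gdt[19] = (unsigned char)(src >> 8);
    gdt[20] = (unsigned char)(src >> 16);
    gdt[21] = 0x93;

    /* Target descriptor, same layout. */
    gdt[24] = 0xFF; gdt[25] = 0xFF;
    gdt[26] = (unsigned char)dst;
    gdt[27] = (unsigned char)(dst >> 8);
    gdt[28] = (unsigned char)(dst >> 16);
    gdt[29] = 0x93;

    segread(&s);
    s.es   = FP_SEG((void far *)gdt);   /* ES:SI -> descriptor table */
    r.x.si = FP_OFF((void far *)gdt);
    r.h.ah = 0x87;
    r.x.cx = words;
    int86x(0x15, &r, &r, &s);

    return r.x.cflag ? r.h.ah : 0;   /* 0 on success, BIOS status on error */
}
```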

A worse problem was that extended memory was entirely unmanaged. The BIOS provided no mechanism to allocate or free extended memory, only to copy to and from it. The designers of the PC/AT assumed that an advanced OS (such as XENIX) would manage the memory… but almost all users ended up running DOS, and DOS definitely didn’t qualify as an advanced OS. That created problems when more than one application wanted to use a portion of extended memory at the same time.

For a while (1985-1986), the deficiencies of extended memory weren’t pressing issues. Expanded memory was much more useful than extended memory, since it sidestepped the problem of non-existent extended memory management. Many memory boards (such as Intel’s Above Board 286) could provide extended memory, but were typically configured to present EMS to the system instead.

References:

  • Extending DOS, Duncan et al., Addison-Wesley, 1990; ISBN 0-201-55053-9
  • DOS and Windows Protected Mode, Al Williams, Addison-Wesley, 1993; ISBN 0-201-63218-7

27 Responses to DOS Memory, Managers & Extenders, Part I

  1. I found this little gem you may like, as the 80386 kicked off the DOS extender market by proving it was possible…

    http://www.youtube.com/watch?v=XFgFWdxHILc

    It’s an hour-long lecture entitled “The Intel 80386 Business Case” …

  2. michaln says:

    That was highly interesting. Thanks for the link! At the time the Deskpro 386 was released (Sep ’86), reports said that IBM hadn’t ordered a single 80386 yet. I wonder if that was because of the lack of a second source. If so, the Deskpro 386 was even more significant than people realize.

  3. I think that would be the case, now that I think about it; that old book, “Upgrading & Repairing PCs”, made a big deal about OS/2 and Intel second sourcing…

  4. Oops, I was going to add that even IBM eventually got a second source to manufacture the i386, but that wasn’t until much later… even my PS/2 80 had an Intel i386. Although the PS/2 didn’t ‘answer’ the Compaq Deskpro, but instead cemented IBM as an obsolete path (unless you were into Token Ring LANs, mainframes, and AS/400s / RS/6000s).

    Indeed the Deskpro is significant as it was the first time a ‘vendor’ extended the ‘open’ standard that is the IBM PC.

    And the other thing from that video was the importance of backwards compatibility… which is why the Intel Core i7 will outsell in one day more units than the Itanium.

  5. michaln says:

    It’s extremely ironic that Intel of all companies failed to understand the importance of backwards compatibility with the Itanium. If the marketplace were all about good design, features, and performance, Intel would be just another unimportant chip maker and not the money machine it is today.

    The PS/2 was modern and innovative hardware, but wasn’t managed well and probably was ahead of its time. IBM, too, failed to appreciate the value of backwards compatibility. Intel learned from that lesson and all the early PCI systems had both PCI and ISA slots. What was IBM thinking with no ISA slots and floppy drives different from what everyone else had…

  6. It must have been some deep-seated theme in 1988; remember the NeXT? No floppy drive, optical disks, networking, sound, Unix for users, and a monochrome display?!

    At the same time NeXT really didn’t have an installed base to piss off… although at the same time it didn’t stop people from manufacturing SCSI floppy drives (wtf?) and even selling software IBM PC emulation (SoftPC!) for the NeXT. The desire to run 1-2-3 was pretty strong, even if Lotus seemed to have not only dropped the ball but thrown it out of an airplane.

    It’s pretty obvious to say today, but in 1988 the bigger vendor of PCs was the ‘clones’, when all combined. Although IBM started the standard, everyone else basically got to push the ‘ball’. And yes, who wants MCA when you can get an ISA 486, or then a VL-ISA board, then PCI/ISA on the way to PCI (ignoring the 3.3V vs 5V issue). I’m amazed at how many lessons were lost on the Itanium, including the idea that the compiler can ‘fix’ the software, not the chip. It’s about as bad as the forever-deep pipeline of the P4 that Intel couldn’t fix, so the ‘Core’ series chips are based on the PIII design. They had a lot of dead ends in that period, but nobody learned anything from the iAPX 432, Intel i960, and even the Motorola 88010.

    What is kind of interesting right now is that backwards compatibility is getting to the point of impossibility when it comes to software emulation (Wine, anyone?) while we’ve watched the rise of full system emulation go from a hobbyist toy, to a bad joke, and now to a major backbone of production systems. Even Microsoft has a gimped VM running XP with some Win-OS/2-esque video driver allowing for seamless windows on their new x86_64 platform…

    I don’t think it’d be impossible to launch a new OS; Linux has devoured too much attention, but it wouldn’t be impossible for someone to make a usable OS and use VMs to bring in legacy applications. But the lesson of OS/2 is that if you run legacy applications too well, nobody will write native.

    But at the same time I’m sure there would be some developer-happy world that doesn’t involve the hell that is X11, or Win32, but at the same time honors 0,0 as the top left… Not that I’m about to clone OS2KRNL, or PM for that matter. But if I were, I’d have something like DOSBox to run 16-bit and 32-bit OS/2, and I’d be busy in the 64-bit space.

  7. Yuhong Bao says:

    “In Intel’s defense, it must be noted that the 286 was introduced in 1982 and wasn’t designed with DOS in mind. ”
    Yea, the real blame IMO lies with Microsoft, who wasted the three years after 1982 designing a real mode multitasking OS before finally realizing that it was a mistake. By then direct hardware access was common, which made running most DOS apps in protected mode impossible. It certainly didn’t help that MS designed the DOS EXE format to be dependent on 8086-style segmented addressing despite the fact that the 80286 was already out by the time it was designed (DR got this right from the beginning with their CMD format).

  8. michaln says:

    Microsoft had XENIX, which could be easily adapted to run in protected mode. And indeed it was, by 1984. On the other hand, how many 286-based computers were there in 1983? Why would anyone waste time developing a mass-market OS specifically for them?

    Also, welcome to my blog! I’ve been waiting for you to show up.

  9. Yuhong Bao says:

    Yea, the frustrating thing is that UNIX was on the ball within two years or so after the release of the 286/386, compared to the DOS/Windows/OS/2 mess. To be honest, it certainly helped that it had no DOS compatibility constraints whatsoever.

  10. Yuhong Bao says:

    And there is another major screw-up MS made, in 1991. MS giving up on the 32-bit OS/2 2.0 ~9 months before release was also not a great idea. (Google “OS/2 Microsoft Munchkins” for example)

  11. michaln says:

    “Screw-up” is relative. Microsoft dumping OS/2 was obviously not good for OS/2, and arguably bad news for the industry as a whole (because it delayed the widespread use of a 32-bit OS by many years). But for Microsoft, it was a smart strategic move. Yes, Microsoft abandoned its OS/2 customers and played very nasty, but MS was rewarded by becoming the most valuable company in the industry and practically owning the PC software market. That’s not a screw-up, at least not from Microsoft’s perspective.

  12. Yuhong Bao says:

    But the two blunders combined…

  13. cozappz says:

    Hi there, I’m a little bit late to this discussion 😉
    So basically here we have Intel, which is a little bit too early into the future (i960, PPro, Itanium), and M$, which is a little bit into the past: Win95 appeared ages after the i386.
    These are the facts.
    We can speculate that Intel ventures into the future as a means to keep a number of aces up its sleeve, as opposed to M$, which is reactive to the market, whatever that means.

    P.S. Thank you for your blog, it really adds value to the ’net, just like Raymond Chen’s The Old New Thing.

  14. Yuhong Bao says:

    michaln: Well, DR was much more of a threat to MS than IBM was, even back in 1990. Yea, I am thinking of the lawsuits, especially how they ended up being extended to Win95.

  15. michaln says:

    DR was never more than a potential threat. DR itself never had the power to directly affect Microsoft’s bottom line and had to rely on the cooperation of third parties. IBM on the other hand had the resources (not necessarily will!) to take direct action against Microsoft. Like, say, licensing DR DOS, or aggressively selling PC DOS to OEMs post-divorce.

    Look at what the lawsuit did to Microsoft. As it was being sued in the 1990s, it kept growing exponentially, and by the time the lawsuits were over, Microsoft was more powerful than ever even though it in theory lost.

  16. Yuhong Bao says:

    “IBM on the other hand had the resources (not necessarily will!) to take direct action against Microsoft. Like, say, licensing DR DOS, or aggressively selling PC DOS to OEMs post-divorce.”
    I know, but that happened after the divorce. That is why I say “even back in 1990”.

  17. michaln says:

    Fair enough. In 1990 and earlier, Microsoft “only” tried not to upset IBM, because they knew Microsoft needed IBM a lot more than IBM needed Microsoft.

  18. Yuhong Bao says:

    BTW, I think the time in 1992 when IBM gave up on DR DOS and began developing what became PC DOS 6.1 would have been a good time for MS to try to reestablish the JDA. By then, IBM OS/2 2.0 GA was already released and Chicago was likely in the planning stages. It would not have been difficult for MS to add a 32-bit OS/2 subsystem to NT.

  19. Yuhong Bao says:

    It would have been a bit of a mess for developers, but better than the attacks MS made on OS/2 later on.

  20. michaln says:

    Given that Microsoft had just ditched the 32-bit OS/2 API in NT and switched to Win32, it seems like they would be less than keen on re-adopting the 32-bit OS/2 interface. More importantly, Microsoft wanted to be firmly in control of the API and 100% own it. Yes, it was bad for 3rd party developers (after Microsoft had been urging them to develop for OS/2 for years), but good for Microsoft.

  21. Yuhong Bao says:

    Notice the mention of Citrix. I wonder how well that went.

  22. Yuhong Bao says:

    “More importantly, Microsoft wanted to be firmly in control of the API and 100% own it.”
    Why? Was IBM’s SOM the problem, or something else?

  23. …is there going to be a Part II?

  24. Michal Necasek says:

    Probably. Maybe. If I can find what I started writing a long time ago 🙂

  25. _RGTech says:

    “So basically here we have Intel, which is a little bit too early into the future (i960, PPro, Itanium), and M$, which is a little bit into the past: Win95 appeared ages after the i386.
    These are the facts.”
    I’d like to think a little different about this.

    MS started Windows (1.0) in 1983 as an advanced, futuristic attempt for which the then-current hardware was not sufficient! The same can be said about the other GUI attempts of the mid-1980s: the first Mac was underpowered and didn’t have enough RAM, let alone multitasking abilities. AmigaOS was arguably better than both, but also really in need of more RAM. TOS… well, I don’t have any experience with it, but as history tells, it wasn’t even top 3.
    But Win 1 (barely) worked and was kinda inexpensive, which was more than could be said about VisiOn or TopView.

    Despite the market failure, they pressed on to develop Windows 2, again as a stopgap product until the (similar) PM for the designated DOS successor OS/2 was ready. History tells us that it didn’t work out: IBM wanted to keep OS/2 286-ready for the majority of their upcoming PS/2 range, while Microsoft wanted to skip the “brain-dead” 286 altogether. This led to NT 3 in 1993: fully 32-bit and relying on the 386. But again, it needed more resources than the average user could afford. Thus it made sense to keep on with the low-end OS development (Windows/386 showed multitasking capabilities in 1987 which were on par with the Amiga, Win 3.1 added TrueType and multimedia in 1992, Win32s expanded the code base, Win95 used this foundation and added a new GUI, but still ran on 4MB systems!) until the average hardware was good enough to run an NT-based OS.
    And the plans to keep on with both 16-bit Windows (or later, hybrid Windows) and NT date way back to the early ’90s.
    That’s not what I’d call “reactive to the market”. THAT was a long-range plan.

    And for Intel… well, I think they had a mindset similar to IBM’s with their PS/2 range: “We are the leader, so everyone will follow us.”
    Except they don’t.
