Forward Compatibility, Landmines

Several years ago, after attempting to get a very old 286 version of Xenix running in a VM, I concluded that it was probably incompatible with 386 and later processors. Recently I revisited this issue and examined the problem in detail.

The operating system in question is IBM Personal Computer XENIX 1.0. It’s historically significant, not so much because it was the second OS licensed by IBM from Microsoft, but rather because it was the first protected-mode OS available for a PC (the IBM PC/AT, to be exact). IBM PC XENIX 1.0 was finalized around October 1984, just a few months after the IBM PC/AT (“Salmon”) was introduced (August 1984). The PC/AT and PC XENIX 1.0 were in fact announced on the same day.

This flavor of Xenix is quite picky about the hardware it runs on. It was designed to run on the first-generation PC/AT with a 20 MB fixed disk, and has trouble even on later IBM PC/AT models (different hard disks, EGA).

But the reason why IBM PC XENIX 1.0 can’t run on a 386 is different. It’s related to the way the OS manages the segment descriptor tables, and it says a lot about how long it took Intel to learn to manage the x86 architecture in a forward-compatible manner.

Forward vs. Backward Compatibility

Everyone is familiar with backward compatibility—in this context, the ability of newer x86 processors to run software written for older x86 processors. Backward compatibility is sometimes difficult to achieve, but it is a well-defined problem, because the behavior of older processors (and software) is known.

Forward compatibility is in some ways the inverse: it entails designing the x86 architecture in an extensible manner. It is far more difficult to achieve because the future is, by definition, unknown. However, over the years, certain practices and techniques emerged which make forward compatibility, if not guaranteed, then at least much more manageable.

On the CPU level, the key elements are CPUID bits which identify the availability (or not) of individual features, as well as control registers which must often be used to explicitly enable new features.
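
For illustration, here is that feature-detection pattern as a minimal C sketch, using GCC/Clang’s <cpuid.h> on x86 (the SSE2 bit is an arbitrary example; any feature flag works the same way):

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned eax, ebx, ecx, edx;

        /* Ask for the basic feature flags (CPUID leaf 1). */
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
            puts("CPUID leaf 1 not supported");
            return 1;
        }
        /* EDX bit 26 advertises SSE2; only use the feature if it is set. */
        printf("SSE2 %savailable\n", (edx & (1u << 26)) ? "" : "not ");
        return 0;
    }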

Another—perhaps less obvious—ingredient is enforcing correct usage of reserved bits. That is, reserved bits are typically MBZ (Must Be Zero) and attempts to set them will cause faults (usually general protection faults). In other words, even if a bit has no function in a given CPU, it must be left unset. This strategy ensures that when previously reserved bits are assigned a function, existing software will not suddenly trigger the new (and unanticipated) behavior.

Although this approach may seem obvious, it’s clearly not; it’s something Intel had to learn by mistake.

IBM PC XENIX 1.0 Descriptor Usage

IBM PC XENIX 1.0 is an excellent case in point. Being a protected-mode OS, it manages a GDT (Global Descriptor Table) as well as a per-process LDT (Local Descriptor Table). Recall that descriptor tables contain the segment base address, limit, and attributes.

The descriptor format is very similar on the 286 and 386; the 386 version is an extension of the 286 one. In both cases, 8 bytes are reserved for a descriptor, but on the 286, only 6 bytes are used—the base address is only 24-bit, not 32-bit, the limit field is shorter, and there are fewer flag bits. The 286 documentation available at the time PC XENIX 1.0 was written (e.g. the 1983 iAPX 286 Operating System Writer’s Guide) very clearly states that the last word is “reserved for iAPX 386” and “must be zero”.
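
To make the layouts concrete, here is a sketch of the two formats as C structs (the field layout follows the published documentation; the struct and field names are mine):

    #include <assert.h>
    #include <stdint.h>

    /* 286 descriptor: 8 bytes reserved in the table, only 6 used. */
    struct desc286 {
        uint16_t limit;     /* segment limit, bits 0-15 */
        uint16_t base_lo;   /* base address, bits 0-15 */
        uint8_t  base_mid;  /* base address, bits 16-23 */
        uint8_t  access;    /* type and attribute bits */
        uint16_t reserved;  /* "reserved for iAPX 386" -- must be zero */
    };

    /* 386 descriptor: the same 8 bytes, with the last word now assigned. */
    struct desc386 {
        uint16_t limit_lo;    /* limit, bits 0-15 */
        uint16_t base_lo;     /* base, bits 0-15 */
        uint8_t  base_mid;    /* base, bits 16-23 */
        uint8_t  access;      /* type and attribute bits */
        uint8_t  limit_flags; /* limit bits 16-19 plus new flags (e.g. G, D) */
        uint8_t  base_hi;     /* base, bits 24-31 */
    };

    static_assert(sizeof(struct desc286) == 8, "descriptor slots are 8 bytes");
    static_assert(sizeof(struct desc386) == 8, "descriptor slots are 8 bytes");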

Unfortunately, the CPU did not enforce that, which allowed the Xenix developers to create a wonderful landmine bug: not detectable on a 286 without great effort, but ready to blow up spectacularly on a 386.

There are at least two basic issues. The Xenix kernel uses the mmudescr routine to build a descriptor; this routine only writes 6 bytes, not 8. That wouldn’t be a problem per se, but various other routines (such as getxfile, which loads an executable) use the copyseg routine to copy 8 bytes when moving descriptors around. In various places, the kernel builds a descriptor on the stack, writing only 6 bytes, and then copies 8 bytes to the GDT/LDT, with the last word containing whatever garbage happened to be on the stack.
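
A hypothetical C reconstruction of that pattern might look like this; the real kernel code is 286 assembly, and only the routine names mmudescr and copyseg come from Xenix:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Builds a data-segment descriptor but, like mmudescr, fills in only
     * 6 of the 8 bytes. */
    static void build_descr(uint8_t d[8], uint32_t base, uint16_t limit)
    {
        d[0] = limit & 0xff;         d[1] = limit >> 8;
        d[2] = base & 0xff;          d[3] = (base >> 8) & 0xff;
        d[4] = (base >> 16) & 0xff;  d[5] = 0x92;  /* present, writable data */
        /* d[6] and d[7] -- the word "reserved for iAPX 386" -- never written */
    }

    int main(void)
    {
        uint8_t stack_slot[8], gdt_entry[8];

        memset(stack_slot, 0xAB, sizeof stack_slot); /* leftover "stack garbage" */
        build_descr(stack_slot, 0x10000, 0xFFFF);    /* writes only 6 bytes */
        memcpy(gdt_entry, stack_slot, 8);            /* copyseg-style 8-byte copy */

        /* Harmless on a 286, which ignores the last word; on a 386 the 0xAB
         * bytes become extra limit/flag bits and base bits 24-31. */
        printf("reserved word copied into the GDT: %02x %02x\n",
               gdt_entry[6], gdt_entry[7]);
        return 0;
    }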

This does not even necessarily produce invalid descriptors. It can happen that the descriptor ends up being valid, but with a base which points far beyond the end of available RAM. That of course makes the segment unusable, although in a way that does not immediately trigger faults.

Lessons Learned

It’s difficult to say whether Intel was aware of this particular case; it’s entirely plausible that it was, or at least that it knew of some analogous example. The 386 documentation says: “All the descriptors used by the 80286 are supported by the 80386 as long as the Intel-reserved word (last word) of the 80286 descriptor is zero.” That strongly suggests the problem was known.

It was of course far too late to fix the 286 design—that would have broken existing software. Had the 286 enforced the rules, though, it would have been much more difficult to blame Intel when software which explicitly broke them didn’t work on the 386.

It would seem that it took Intel surprisingly long to learn from these problems—for example, it wasn’t until the Pentium that the CPU prevented writing reserved bits in the CR4 register or setting reserved bits in page tables. It is also possible that Intel deliberately did not enforce the rules so as not to break some existing software, although that is no excuse for not enforcing correct usage of reserved page-table bits on the 386, when the paging structures first appeared.

In the end, Intel no doubt realized that the biggest beneficiary of a forward-compatible x86 architecture is Intel. If software incorrectly sets reserved bits, Intel either can’t use those bits or existing software fails on new processors, sometimes in very unpleasant ways.

That’s not to say software developers are blameless. In some or perhaps most cases, they are simply sloppy (see the Xenix example above) and unintentionally set reserved bits. In other cases, they might be too clever for their own good, for example using reserved bits in page tables for their own purposes (I am currently not aware of any real-world example, but some likely exist).

Enforcing the rules ends up helping everybody. It doesn’t cause any difficulties for programmers, just helps them write correct code. It helps users because their software is more likely to run on future processors. And it avoids a few bad headaches for Intel.


32 Responses to Forward Compatibility, Landmines

  1. random lurker says:

    Interesting to hear that the x86 architecture was theoretically marred by such forward incompatibilities. This was a much bigger problem on the Motorola 68000 side, where the address bus was initially 24-bit. Some developers would use the top 8 bits of the 32-bit pointer for housekeeping (flags), or fill it with other non-zero bits. Then the 68020 came along and as the address bus was extended to 32 bits, suddenly some software would go haywire. This was a problem on both the Amiga and Macintosh platforms.

    Wikipedia on the 68000 address bus problems

    When the x86 architecture was extended to 64 bits by AMD, the virtual address space was initially only 48 bits wide. They were aware of the potential issues and cleverly designed the memory layout so that valid addresses sit at both the bottom and the top of the “64-bit” address space, counting up and down 2^47 bytes’ worth (leaving a gigantic “hole” in the middle). The idea is that nobody is crazy enough to jam their own data into pointer bits that fall within the unused “hole”, and even if they are, the processor will raise a fault (see the sketch below). When the address space is extended, there will hopefully be no software incompatibilities.

    Wikipedia on the AMD64 memory addressing layout
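
    A minimal sketch of that canonicality check in C, assuming 48 implemented virtual address bits (an address is canonical exactly when sign-extending it from bit 47 leaves it unchanged):

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Canonical check for 48 implemented virtual address bits:
         * bits 63..48 must all equal bit 47. */
        static bool is_canonical48(uint64_t va)
        {
            /* Shift the 48-bit address to the top, sign-extend it back
             * down, and see whether anything changed. */
            return (uint64_t)(((int64_t)(va << 16)) >> 16) == va;
        }

        int main(void)
        {
            printf("%d\n", is_canonical48(0x00007FFFFFFFFFFFull)); /* 1: top of the low half */
            printf("%d\n", is_canonical48(0xFFFF800000000000ull)); /* 1: bottom of the high half */
            printf("%d\n", is_canonical48(0x0001000000000000ull)); /* 0: inside the hole */
            return 0;
        }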

  2. Michal Necasek says:

    The problem with non-enforced reserved bits wasn’t really the worst… much worse design flaws were things like unprivileged instructions leaking privileged information (SMSW, SGDT, SIDT) or instructions quietly dropping bits (POPF in protected mode). The 20-bit address wraparound in the 8086 was probably the most expensive problem to fix though.

    And yes, the canonical address concept in the AMD64 architecture is smart… learning from past mistakes 🙂
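
    To illustrate the first point, here is a minimal C sketch (GCC/Clang x86-64 inline assembly) of the SGDT leak; note that on recent CPUs where the OS enables UMIP, this finally faults instead:

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            /* SGDT stores a limit:base pair to memory, and the 286/386
             * allowed it from unprivileged code. */
            struct __attribute__((packed)) { uint16_t limit; uint64_t base; } gdtr;

            __asm__ volatile ("sgdt %0" : "=m" (gdtr));
            printf("GDT base %#llx, limit %#x\n",
                   (unsigned long long)gdtr.base, gdtr.limit);
            return 0;
        }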

  3. dosfan says:

    Weren’t packed EXE files (created by EXEPACK or LINK /EXEPACK) the biggest cause of HMA wraparound problems, which were addressed by the DOS 5+ LOADFIX command?

    Out of curiosity, what did XENIX 1.0 do that caused it to not run on later PC AT models?

  4. Morten Andersen says:

    As a programmer I’m always wary of the reserved bits. Often it is not even specified how software should handle them in all situations. I usually try to preserve the existing values, i.e., read the current value of the register, update the bits I want, then write everything else back as it was. That might help make the program more compatible with future architectures, or easier to port if those bits gain some significance in new operating modes later on. I then trust that whoever defines those future bits will make sure that whatever they do will be compatible with old programs.

    The strategy doesn’t always work… Some registers read back differently than they are written, and some bits work like a “toggle”. So it also requires some judgement based on knowledge of the platform in question and the type of register. I wish data sheets would be clearer on how they want developers to manage reserved bits.
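
    In C, that read-modify-write pattern looks roughly like this (a sketch; the register and CTRL_MODE_MASK are hypothetical, and as noted above it is only safe for plain read/write registers, not write-to-clear or toggle bits):

        #include <stdint.h>
        #include <stdio.h>

        /* Hypothetical memory-mapped 32-bit control register in which only
         * bits 0-1 are defined; everything else is reserved. */
        #define CTRL_MODE_MASK 0x3u

        static void set_mode(volatile uint32_t *ctrl_reg, uint32_t mode)
        {
            uint32_t v = *ctrl_reg;        /* read the current value... */
            v &= ~CTRL_MODE_MASK;          /* ...clear only our field... */
            v |= mode & CTRL_MODE_MASK;    /* ...insert the new setting... */
            *ctrl_reg = v;                 /* ...write everything else back as it was */
        }

        int main(void)
        {
            uint32_t fake_reg = 0xDEADBEE0u;   /* stand-in for real hardware */
            set_mode(&fake_reg, 1);
            printf("%#x\n", fake_reg);         /* 0xdeadbee1: reserved bits untouched */
            return 0;
        }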

  5. Michal Necasek says:

    I couldn’t find any evidence that EXEPACK existed before the PC/AT was released. Even if it had, it’d be solvable in software à la LOADFIX. There must have been something else.

    As for what XENIX 1.0 did, I’m aware of two things: One, it could detect an EGA but didn’t set it up correctly, resulting in a garbled screen. Two, it had a built-in hard disk parameter table which only included 13 types and did not quite match what was in the BIOS (it did of course work for the 20MB standard AT hard disk).

  6. Michal Necasek says:

    Modern data sheets (e.g. from Intel) are usually pretty clear about which reserved bits must be zero and which bits must be preserved. Without that, it’s indeed hard to manage them.

  7. Richard Wells says:

    The alternative solution for these sorts of problems is to have flags that force CPUs to work exactly like an earlier CPU, rather than relying on reserved words to allow a single structure to have multiple uses.

    Programmers often follow the sample code instead of reading through the documentation. Going through 250 pages to winnow out the few parts that matter for software design is hard. Intel’s K286 kernel might show whether the descriptors had the reserved word forced to zero. I’m not curious enough to spend $100+ on the few purported copies of K286 that are on offer.

    The Initialization Module sample does clear the reserved word on a temporary descriptor in Copy_EPROM_DAT after the descriptor is changed to a data segment, but none of the descriptor-copying code clears the reserved word. Copy-with-fill will zero out the remainder of the segment but won’t check the reserved word in any descriptor. It could be that I am too tired to be reading macro assembler, but it does not seem that Intel was concerned about making sure the reserved word held no value other than zero.

    EXEPACK.EXE had to exist before the AT was introduced. The AT was August 1984 and the first shipping release of MS C v3 with EXEPACK was April 1985. Getting software written, debugged, and manuals printed is going to take more than 8 months. Wikipedia claims usage (internal to MS) in early 1984, but that claim is unsourced.

  8. dosfan says:

    LINK 3.0 (dated 11/14/1984) from MASM 3.0 did not support /EXEPACK, and the EXEPACK utility didn’t appear with MASM until MASM 4.0 (1985), so there is no reason to assume that EXEPACK appeared prior to the introduction of the PC AT. Also, there is no way it would have taken 8 months to develop EXEPACK or its equivalent LINK code. Software back then didn’t take as long to develop since it was smaller and simpler. Heck, the development cycle for PC DOS 7.0 was just under 11 months, and that was in 1994; the kernel and base DOS modifications were finished in less time. Finally, I wouldn’t trust anything technical on Wikipedia, as the information there is not peer-reviewed by people with actual credentials in the specific topics.

  9. Yuhong Bao says:

    Not to mention that MS certainly had access to “Salmon” PC/AT prototypes around the time too.

  10. Richard says:

    awesome history lesson

  11. Michael Burke says:

    The 24-bit to 32-bit address problem was also a key issue with System/360 migration in the IBM world. Very often, programmers would use the top 8 bits for flags and passed parameters when the addressing maximum was 16 MB (24 bits).

  12. comex says:

    > The idea is that nobody is crazy enough to jam their own data into pointer bits that are defined by the unused “hole”, and even if they are, the processor will raise a fault. When the address bus is extended, there will hopefully be no software incompatibilities.

    If only that “no” were anywhere near absolute… I’ve seen a C++ class in some popular codebase that uses the upper 16 bits of a pointer as a tag and simply masks them out when performing memory operations, although it was long enough ago that I don’t remember which codebase and I can’t seem to find it. More commonly, JavaScript VMs use “NaN boxing”, where a 64-bit value is either a double or another value stored in the representation space of NaN doubles. That value is frequently a 48-bit pointer…

    http://wingolog.org/archives/2011/05/18/value-representation-in-javascript-implementations
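
    A rough sketch of the NaN-boxing idea from that article (the tag constant and helper names are illustrative, not any particular VM’s; the pointer case relies on 48-bit user addresses):

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        /* A NaN bit pattern with these top bits set is one the x86 FPU never
         * generates on its own, so it can safely carry a payload. */
        #define BOX_TAG 0xFFFC000000000000ull

        static uint64_t box_double(double d)   /* plain doubles are stored as-is */
        {
            uint64_t bits;
            memcpy(&bits, &d, sizeof bits);
            return bits;
        }

        static uint64_t box_pointer(void *p)   /* 48-bit pointer in the NaN payload */
        {
            return BOX_TAG | (uint64_t)(uintptr_t)p;
        }

        static void *unbox_pointer(uint64_t v)
        {
            return (void *)(uintptr_t)(v & 0x0000FFFFFFFFFFFFull);
        }

        static int is_pointer(uint64_t v)
        {
            return (v & BOX_TAG) == BOX_TAG;
        }

        int main(void)
        {
            int x = 42;
            uint64_t d = box_double(3.14), p = box_pointer(&x);
            printf("%d %d %d\n", is_pointer(d), is_pointer(p),
                   *(int *)unbox_pointer(p));   /* prints: 0 1 42 */
            return 0;
        }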

  13. Michal Necasek says:

    Using the top bits of a pointer for storage is dangerous and almost guaranteed to break sooner or later. On the other hand, the link you gave also describes using the lowest bits of a pointer for storage, and that works well. All it takes is ensuring that the pointer points to memory aligned on a specific boundary. This is extremely common in hardware as well, where addresses require strict alignment and low bits of registers are “recycled” as control bits.
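
    For illustration, a minimal sketch of the low-bit scheme (assuming 8-byte-aligned allocations; the helper names are made up):

        #include <assert.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* With 8-byte alignment, the three low bits of every valid pointer
         * are zero and can be "recycled" to hold a small tag. */
        #define TAG_MASK 0x7ull

        static uint64_t tag_ptr(void *p, unsigned tag)
        {
            uint64_t v = (uint64_t)(uintptr_t)p;
            assert((v & TAG_MASK) == 0 && tag <= TAG_MASK);
            return v | tag;
        }

        int main(void)
        {
            void *p = aligned_alloc(8, 64);          /* low bits guaranteed free */
            uint64_t tagged = tag_ptr(p, 5);

            printf("tag=%llu\n", (unsigned long long)(tagged & TAG_MASK));
            free((void *)(uintptr_t)(tagged & ~TAG_MASK)); /* mask the tag out before use */
            return 0;
        }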

  14. Michal Necasek says:

    Who knows what documentation the guys porting Xenix to the 286 actually worked with, and how much code came straight from Intel. They may well have been working on the code before the official docs were even available. And yes, it’s unrealistic to expect that anyone would go over the finished code with the docs in hand and double-check that everything is done right. Which is exactly why reserved bits need to be enforced in hardware 🙂

    EXEPACK to me looks like something that Mark Zbikowski (or whoever) wrote over a weekend, certainly nothing that would take months. Even if it had been used internally at Microsoft before Aug ’84, that wouldn’t be a reason to add hardware and firmware to the AT, it’d be only a reason for Microsoft to fix their code before shipping. There must have been something else, something already on the market before the AT came out. Unfortunately finding copies of software from the era for analysis is really, really hard.

  15. random lurker says:

    @comex OH GOD WHY :O

    I am fully in support of inflicting actual physical pain on these people. We aren’t even that far from needing more than 48 bits of address space. You can already fit 1 TiB of memory onto a single motherboard (Supermicro X10DAi… slap two Haswell-Xeons at 18 cores / 36 threads each onto that beast and you’ve got an insane platform for VM stuff). We’re going to hit 256 TiB around the year 2030. And you can bet your ass software developed today will be in use in 2030. The short-sightedness of some people….

  16. Michal Necasek says:

    1TB is old hat 🙂 Servers with support for 4TB RAM are available today, and that’s already 42 bits of address space.

    But this kind of software is going to break far sooner than 2030. It will break as soon as CPUs extend the supported virtual address space and change the definition of a canonical address. I would expect that to happen years before machines with 256TB show up, quite possibly before 2020.

    If we’re lucky, the authors of these hacks will live to regret the error of their ways and will be forced to fix it 🙂

  17. dosfan says:

    Packed EXE files have an ‘RB’ signature in them which is probably the initials of whoever wrote EXEPACK.

  18. Morten Andersen says:

    random lurker’s post got me thinking of another x86 classic: the A20 address line! Again, this was a lack of forward compatibility in the design. Forcing compatibility back then required a hack: making the keyboard controller control a gate to mask out the A20 line until software explicitly requested otherwise. What is more surprising is that this hack is alive and well to this day; in fact it has become more and more elaborate over the years to account for internal CPU caches and other constraints. But it’s a fact that even a modern OS running on the latest shiny x64 machine starts off with A20 disabled and will be programming the keyboard controller to enable it. Or rather, it will think it is programming the keyboard controller (using the old register specification), but it is probably talking to some logic in the chipset emulating the old behavior, which will send back a special bus message to the CPU, telling it to raise the (somewhat virtual) A20 gate! All this happening just in case some program might depend on obscure 16-bit wraparound behavior in a 64K region on a 36-year-old CPU. I like it 😉 I’m wondering if the next step will be to remove even the current A20 bus message from the CPU and move the A20 gate control into an MSR controllable by an SMM handler, which can trap attempts by the OS to enable/disable the A20 gate and update the MSR accordingly.

    But this got me thinking: what was that old piece of software that depended on the old 8086 “wrapping around” when a seg:offset pair exceeded the 1 MB limit? Was it just to be absolutely certain everything was compatible, even in the case of obscure programs (a noble goal), or were there known pieces of software that wouldn’t work without the hack?

  19. Yuhong Bao says:

    @Morten Andersen: I think Intel finally removed A20M support in their last few CPU generations.

  20. dosfan says:

    Modern chipsets support fast A20 toggling via port 92h bit 1 (based upon the PS/2 method) so software doesn’t have to go through the keyboard controller to toggle A20.

    Off the top of my head, the only thing that relied on the 20-bit address wraparound was the packed-file decompression code used in packed EXE files created by LINK /EXEPACK or the EXEPACK utility. Originally this wasn’t a problem, but when DOS 5 came along and started using the HMA (DOS=HIGH) it became a major problem, which is what the LOADFIX command is for. Why did the EXEPACK code use it? Likely it saved some bytes in the decompression code, or someone simply thought they were being clever. At the time the 386 hadn’t come out yet and the concept of the HMA was still several years away (HIMEM was introduced in 1988).

    By the way HIMEM.SYS ultimately supported 17 different methods of toggling A20 (AT keyboard controller, PS/2 and 15 others) for various old machines. PC DOS 7 HIMEM also adds fast A20 support.
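
    For illustration, a sketch of that fast A20 method in C (GCC/Clang x86 inline assembly; this needs I/O privilege, e.g. ring 0 or iopl(3) on Linux, and real implementations such as HIMEM are 16-bit assembly, not C):

        #include <stdint.h>

        static inline uint8_t inb(uint16_t port)
        {
            uint8_t v;
            __asm__ volatile ("inb %1, %0" : "=a" (v) : "Nd" (port));
            return v;
        }

        static inline void outb(uint16_t port, uint8_t v)
        {
            __asm__ volatile ("outb %0, %1" : : "a" (v), "Nd" (port));
        }

        /* Port 92h bit 1 gates A20; bit 0 is fast reset, so a careless write
         * that sets it reboots the machine. Preserving the unrelated bits is
         * exactly the reserved-bit discipline this post is about.
         * Call from ring 0, or after iopl(3) on Linux. */
        static void enable_a20_fast(void)
        {
            uint8_t v = inb(0x92);
            if (!(v & 0x02))                              /* already enabled? */
                outb(0x92, (uint8_t)((v | 0x02) & ~0x01));
        }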

  21. Richard Wells says:

    Address wrapping was a very important part of 8080 programming. Most famous was a Jump to address 0 (zero) in order to force the stack to start at the top of RAM addresses. With the A20 wrap, code placed in ROM (in segment F) will be in the same segment as the bottom of the RAM. This should have made porting code from the 8080 to the 808x much simpler since the only appreciable change is skipping the paragraph used for CPU restart*.

    Yeah, about 15 years later, the combination of EXEPACK, HMA, and DOS in HMA all wound up creating the problems listed. Who expected any CPU model to remain in production for that long in 1976? Note: EXEPACKed applications work perfectly when launched on a DOS 3 or 4 system where Windows 286 is using the HMA with HIMEM.SYS (since DOS can’t).

    * One of the support chips also used that paragraph but I am forgetting which one.

  22. Michal Necasek says:

    The CP/M compatible CALL 5 interface also relied on the address wraparound (those infamous “not a vector” bits in interrupt vectors 30h/31h). And that was at least a semi-documented interface supported in all DOS versions. But figuring out what (if anything) depended on it is not so easy.

    I should also add that I have seen x86 software which depends on 32-bit linear address wraparound (Coherent, cough cough) which occurs when segment base + offset goes past 4GB (but not past the segment limit of course). That kind of wraparound is emulated by the AMD64 architecture, although it isn’t terribly well documented.

  23. John Elliott says:

    From Who needs the address wraparound, anyway?: The CP/M-compatible CALL 5 entry point in MSDOS uses it. There’s some thought that the first DOS versions of WordStar used CALL 5, but I don’t think anyone’s come up with a copy of WordStar from 1982-83 to prove it.

  24. Yuhong Bao says:

    @Michal Necasek: Yea, the funny thing is that when they added PAE they decided to do it at the paging level.

  25. ender says:

    > I should also add that I have seen x86 software which depends on 32-bit linear address wraparound (Coherent, cough cough) which occurs when segment base + offset goes past 4GB (but not past the segment limit of course).

    I remember reading that one of the original Xbox security features was defeated because they forgot about address wraparound (apparently AMD CPUs don’t support it, and fault, and Xbox was originally developed for AMD, then later switched to Intel).

  26. MortenAndersen says:

    Thanks for the info… and the link to the previous article on the subject. It was a hack after all 😉

    @yuhong: how else would they enhance the physical address space? Software is supposed to keep using 32-bit pointers; paging gives a way to map those 32-bit addresses to 64-bit ones. Do you mean why they didn’t add it at the segmentation level? That wouldn’t have been enough, because the >32-bit addresses resulting from translation via the segment selector would then have to be mapped via page tables that would also produce >32-bit addresses. Hence, involving the segmentation level means you have to make changes at both the segmentation and paging levels, and the latter becomes more complex because the address to be looked up would also be bigger. The far simpler design is to make the change at the paging level only.

  27. Antoni Sawicki says:

    Just my 2¢ on this particular version of Xenix. First of all, it doesn’t boot on a real 286 either. I have tried on 3 different machines. Here is a screenshot from one of them: http://virtuallyfun.superglobalmegacorp.com/wordpress/wp-content/uploads/2012/12/msxen2.png . Unfortunately none of them are original IBMs. Secondly, owning original floppy disks for both IBM Xenix and the disks that Microsoft distributed to OEMs, I can tell you that both are binary identical. In other words, the OS came straight from Microsoft and was unchanged by IBM except for a blue disk label.

  28. Michal Necasek says:

    You’re not trying to boot an originally 1.2MB floppy image in a 1.44MB drive, are you? Because that produces exactly that iinit panic message.

  29. Antoni Sawicki says:

    It was booting from HxC. Hm maybe I’ll try again.

  30. ForOldHack says:

    The manual for Microsoft Xenix lists a couple of 286s that Xenix boots on: an IBM AT and, as I remember, an HP Vectra. Check the documentation.

  31. Michal Necasek says:

    286 XENIX ran on a number of AT compatibles and not-so-compatibles (like the HP Vectra). Texas Instruments and Sperry come to mind. As long as the CPU was a 286, it was fine, and the hardware needed to be either sufficiently compatible with IBM’s or the OEM needed to tweak XENIX a bit. Both approaches were used.

    Later on SCO tried to make their XENIX more generic and cover lots more hardware, so that OEM versions weren’t really necessary.

  32. Vikki McDonough says:

    @random lurker: Fortunately, newer x86-64 CPUs use a 52-bit address space (and ones with a 56-bit one are in the pipeline), for this exact reason.
