As previously mentioned, IBM’s OS/2 1.0 and 1.1 used the 286 LOADALL instruction, even on 386 and later processors which do not support it. This was typically solved by BIOS emulation. Now there’s more information about how OS/2 uses LOADALL.
Tracing OS/2 showed that LOADALL was used to implement the PhysToVirt DevHlp (Device Helper) API. The PhysToVirt function was documented in the OS/2 DDK; its function was to create a virtual address mapping for a (contiguous) buffer in physical memory.
PhysToVirt was used by device drivers when they needed to map physical memory. This might be memory on a device or a buffer in system memory provided by the caller. Since the driver might execute in the context of any process, it could not necessarily use “normal” pointers. PhysToVirt created a temporary mapping (selector) pointing to the given physical memory. The documentation naturally made no mention of LOADALL, but it provided a very clear hint. The relevant paragraph is worth quoting in full:
“The device driver must not enable interrupts or change the returned segment register (ES or DS) before it has finished using the returned value. The value returned in the segment register has no physical meaning, so the caller of PhysToVirt should have no reason to examine it. While the pointer(s) generated by PhsToVirt are in use, the device driver may call only for another PhysToVirt. It may not call any other DevHlp routines, since they may not preserve the special ES/DS values.”
This text fairly clearly explains (without explicitly spelling it out) that the selector value did not correspond to the selector base, and that reloading the selector would change the base. Such a situation is more or less completely unnecessary in protected mode (a temporary GDT selector would do the job), but it would have been very useful in real mode if the selector was obtained through LOADALL or a temporary switch to protected mode (386+ only, obviously).
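The difference between the real-mode addressing rule and a LOADALL-installed base can be sketched with a little arithmetic. This is only a model, not actual driver code; the magic selector and base values are made up for illustration:

```python
# Real mode derives a segment base purely from the selector value
# (base = selector * 16). A LOADALL-installed "magic" base obeys no
# such rule, so any reload of the segment register snaps the base
# back to the real-mode value and the mapping is gone.

def real_mode_base(selector: int) -> int:
    """Base the CPU computes when a segment register is loaded in real mode."""
    return (selector << 4) & 0xFFFFF  # 20-bit physical address on the 286

# Hypothetical magic mapping set up via LOADALL: the selector value is
# meaningless, the base points into extended memory (here above 1 MB).
magic_selector = 0x0040
magic_base = 0x110000

# As long as ES is not reloaded, accesses go through the magic base,
# which no real-mode selector value could ever produce on a 286...
assert magic_base != real_mode_base(magic_selector)

# ...but the moment the driver (or an interrupt handler) reloads ES,
# the base reverts to the real-mode rule:
print(hex(real_mode_base(magic_selector)))  # 0x400, not 0x110000
```

This is exactly why the DDK warned drivers not to touch ES/DS and not to enable interrupts while the pointer was in use.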
The PhysToVirt documentation cautions that the API might switch to protected mode, but that does not appear to actually happen. Perhaps this was an earlier specification, or it might happen in specific cases.
With both OS/2 1.0 and 1.1, LOADALL was very often used if the DOS session was active and there was any disk transfer in the background (triggered by a concurrently executing OS/2 session), at least with a non-busmastering device such as a standard AT fixed disk drive.
The disk driver source code is available in the OS/2 DDK, and there is a PhysToVirt call (through a wrapper called PhysToVirtESDI) right before executing REP INSW to read from a WD1003-style data port. What is of course not visible in the DDK is that the PhysToVirt call might execute LOADALL and continue executing in real mode, but with a “magic” ES selector. On a 386+, the same effect could be achieved through documented means.
Again, it’s not clear whether IBM’s OS/2 1.0 and 1.1 intentionally used LOADALL on 386 systems. It would not happen on IBM’s ABIOS-based (PS/2) 386s.
Early LOADALL Users
OS/2 was not the first user of LOADALL. It’s not clear who was, but there is very good evidence (in the Windows 2.x BAK, for example) that Microsoft used LOADALL as early as 1986. It’s also apparent from source code comments that Microsoft was given preferential treatment and received LOADALL documentation from Intel.
IBM on the other hand never wrote code to use LOADALL. IBM’s VDISK.SYS strictly used the BIOS interface for extended memory copying. Newer software then generally used XMS and let HIMEM.SYS deal with LOADALL and similar trickery.
Microsoft used LOADALL in RAMDRIVE.SYS (a clone of IBM’s VDISK.SYS) as well as SMARTDRV.SYS which had been cloned from RAMDRIVE. Note that HIMEM.SYS was comparatively speaking a latecomer. SMARTDRV development started in May 1986, and RAMDRIVE is even older, dating back to May 1985 (part of Microsoft’s effort to create a shrink-wrapped PC DOS equivalent). On the other hand, the development of HIMEM.SYS only started in April 1988.
Later users refined the LOADALL technique to allow interrupts during memory copies. As the OS/2 DDK documentation cautions, an interrupt is highly likely to destroy the magic selector created by LOADALL, and if the memory copy were continued after the interrupt completed, it would lead to uncontrolled memory corruption. How did Microsoft avoid that?
The trick was to make sure that an interrupt handler would not only destroy the magic selector but also change CS in a controlled fashion. The LOADALL caller (HIMEM.SYS, for instance) would change the base of the ES and/or DS selector, but it would also set up CS such that the base didn’t change while the selector value did. If an interrupt occurred, execution would not return to the interrupted instruction but rather fall into a “safety net” that would restart the memory copy.
For example, if (real mode) CS was initially 1234h and the corresponding base was 12340h, after LOADALL the base would remain 12340h but the selector value might be changed to 1235h. If an interrupt occurred, the IRET would force CS to be reloaded under real mode rules, and the CS base would change to 12350h. The safety net code would execute LOADALL again and continue the memory copy.
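The address arithmetic of this safety net can be modeled in a few lines. This is only a sketch of the mechanism described above, using the selector values from the example; the IP value is an arbitrary illustration:

```python
# Model of the CS "safety net": LOADALL leaves the CS base alone but
# bumps the CS selector value by one, so an IRET (which reloads CS
# under real-mode rules) shifts the effective code address by 16 bytes,
# into restart code rather than back into the interrupted copy loop.

def real_mode_base(selector: int) -> int:
    return (selector << 4) & 0xFFFFF

old_cs, new_cs = 0x1234, 0x1235        # selector before/after LOADALL
base_after_loadall = real_mode_base(old_cs)  # 0x12340, left unchanged
ip = 0x0100                            # some IP inside the copy loop

# While copying: physical address of the executing instruction.
running_at = base_after_loadall + ip

# After an interrupt, IRET restores CS=0x1235 and the same IP, but the
# CS base is now recomputed from the selector under real-mode rules:
resumed_at = real_mode_base(new_cs) + ip

# Execution lands 16 bytes further on, inside the safety net that
# executes LOADALL again and restarts the memory copy.
print(resumed_at - running_at)  # 16
```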
More or less the same method would also be applicable to 386+ processors, which can switch to protected mode, reload segment selectors and bases, and return to real mode. However, the 386 version of HIMEM.SYS used a much more straightforward method which relied on extending the selector limits (big real mode); an interrupt in the middle of a memory copy would simply trigger an exception that could be handled and the copy continued.
LOADALL is a prime example of an undocumented feature which was so frequently relied upon by major software that it became an integral part of the architecture. A 286 clone without LOADALL support would be seriously incompatible with MS-DOS as well as with OS/2, which would likely make it extremely difficult to sell.