Windows 3.x VDDVGA

While working on my Windows 3.x display driver, I ran into a vexing problem. In Windows 3.1 running in Enhanced 386 mode, I could start a DOS session and switch it to a window. But an attempt to set a mode in the DOS window (e.g. MODE CO80) would destroy the Windows desktop, preventing further drawing from happening properly. It was possible to recover by using Alt+Enter to switch the DOS window to full screen again and then returning to the desktop, but obviously that wasn’t going to cut it.

Oddly enough, this problem did not exist in Windows 3.0. And in fact it also didn’t exist in Windows 3.1 if I used the Windows 3.0 compatible VDDVGA30.386 VxD shipped with Windows 3.1 (plus the corresponding VGA30.3GR grabber).

There was clearly some difference between the VGA VDD (Virtual Display Driver) in Windows 3.0 and 3.1. The downside of the VDD is that its operation is not particularly well explained in the Windows DDK documentation. The upside is that the source code of VDDVGA.386 (plus several other VDD variants) was shipped with the Windows 3.1 DDK.

First I tried to find out what was even happening. Comparing the bad/good VGA register state, I soon enough discovered that the sequencer register contents changed, switching from chained to planar mode. This would not matter if the driver used the linear framebuffer to access video memory, but for good reasons it uses banking and accesses video memory through the A0000h aperture.
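
For the record, the chained/planar distinction is visible in the sequencer’s Memory Mode register (index 04h), where bit 3 is the Chain-4 bit. A minimal sketch of the check, assuming a 16-bit DOS compiler such as Open Watcom where inp()/outp() live in conio.h:

    #include <conio.h>   /* inp()/outp() in 16-bit DOS compilers */
    #include <stdio.h>

    /* Read a VGA sequencer register: write the index to port 3C4h,
       then read the data from port 3C5h. */
    static unsigned char read_seq(unsigned char index)
    {
        outp(0x3C4, index);
        return (unsigned char)inp(0x3C5);
    }

    int main(void)
    {
        /* Memory Mode register (index 04h); bit 3 set = Chain-4 (chained). */
        unsigned char mem_mode = read_seq(0x04);

        printf("Sequencer Memory Mode = %02Xh (%s)\n", mem_mode,
               (mem_mode & 0x08) ? "chained" : "planar");
        return 0;
    }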

But how could that even happen? The VDD is meant to virtualize VGA registers and not let DOS applications touch the real hardware. Something had to be very wrong.

I also suspected that the problem was likely caused by my driver doing something wrong, or perhaps not doing something necessary to correctly set up the VDD. The Video 7 sample driver that I based my code on was intended to work with its own custom VDD, not with VDDVGA; judging from the source code in the Windows 3.1 DDK, I suspect that V7VDD.386 was effectively forked from the Windows 3.0 VGAVDD and at most slightly updated for Windows 3.1. That might also explain why my driver worked with VDDVGA30.386 but not with the newer VDDVGA for Windows 3.1 (VDDVGA.386 is normally built into WIN386.EXE and does not exist as a separate file, although a standalone VDDVGA.386 can be used).

After poking through the VDDVGA source code for a while, I realized that it almost certainly wasn’t register access from a DOS session leaking through. It was the VDD itself!

And I also found that the missing link was a small section of code that was explained as “Call VDD to specify latch address” in the Windows 3.1 VGA driver. It is protected-mode service entry point 0Ch in VGAVDD, and it’s called VDDsetaddresses in the VGA display driver (VGA.ASM) but DspDrvr_Addresses in the VDD (VMDAVGA.INC).

The Windows 3.1 DDK does not appear to document the DspDrvr_Addresses function, although given the inconsistent naming, it’s difficult to be entirely certain.

At the same time, I tried to approach the problem from a different angle. The Windows 3.1 DDK does document a set of INT 2Fh calls, some of them with promising descriptions, such as “Save Video Register State (Interrupt 2Fh Function 4005h)” and the corresponding “Restore Video Register State (Interrupt 2Fh Function 4006h)”.

But there I hit the opposite problem. Even though the DDK documents those functions, and the VGA display driver implements the 4005h/4006h callbacks, I could not find any code in the VDD calling those functions! And the debugger showed no sign that anyone else was calling them, either.

Note: It is possible that the save/restore registers INT 2Fh callbacks were specified for OS/2. Indeed the OS/2 2.1 DDK defines INT2F_SYSSAVEREGS (0x4005) and INT2F_SYSRESTOREREGS (0x4006) in the virtual video device driver source code… but again there is no sign of those being used in the code.

There is also “Enable VM-Assisted Save/Restore (Interrupt 2Fh Function 4000h)” and “Disable VM-Assisted Save/Restore (Interrupt 2Fh Function 4007h)”. The VGA and Video 7 display drivers call these functions and name them STOP_IO_TRAP and START_IO_TRAP. And VGAVDD.386 really does implement these in VDD_Int_2F (the INT 2Fh intercept in VGAVDD.386). Interestingly, STOP_IO_TRAP corresponds to “VM knows how to restore the screen” logic, and START_IO_TRAP naturally corresponds to “VM doesn’t know how to restore the screen”.
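
The calls themselves are trivial; only the AX value matters. A minimal sketch of how a display driver might issue them, written as 16-bit C with int86() from dos.h rather than the assembly the real drivers use, and assuming it runs in the System VM under Enhanced mode:

    #include <dos.h>

    /* INT 2Fh function numbers per the Windows 3.1 DDK documentation. */
    #define ENABLE_VM_SAVE_RESTORE   0x4000   /* STOP_IO_TRAP in the drivers  */
    #define DISABLE_VM_SAVE_RESTORE  0x4007   /* START_IO_TRAP in the drivers */

    /* "VM knows how to restore the screen": ask the VDD to stop trapping
       video I/O in this VM; the driver saves/restores state itself. */
    static void stop_io_trap(void)
    {
        union REGS r;
        r.x.ax = ENABLE_VM_SAVE_RESTORE;
        int86(0x2F, &r, &r);
    }

    /* "VM doesn't know how to restore the screen": re-enable I/O trapping. */
    static void start_io_trap(void)
    {
        union REGS r;
        r.x.ax = DISABLE_VM_SAVE_RESTORE;
        int86(0x2F, &r, &r);
    }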

But how does that make any sense? Why would the hardware access from the Windows display driver ever be trapped?

Why Oh Why?

Although I could not find any explanation in the DDK documentation, eventually I realized what the reason had to be: Windows/386 (aka Win386).

Windows/386 was essentially an add-on for Windows 2.x, adding the ability to pre-emptively multitask DOS sessions. Only, in the Windows 2.x days, Windows itself was effectively one of those DOS sessions.

That is, Windows 2.x display drivers had (almost) no clue about Win386. That only came with Windows 3.0. Therefore the Win386 VDD had to manage Windows itself as just another DOS session, save and restore all EGA/VGA registers, and also manage video memory contents. In fact in the “normal” Windows 2.x Adaptation Guide, there is almost no mention of Win386 (there was a separate development kit for Win386 which covered virtual device drivers).

I/O trapping was especially important on EGA adapters, which did not have readable registers. As a consequence, it was impossible to query the current EGA hardware state; the only way to know it was to shadow the state of the EGA registers as they were written.
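
Conceptually, the shadowing looks something like the following sketch (my illustration, not DDK code): every write goes through a wrapper that records the value, and “reading” a register returns the recorded copy, because the EGA hardware itself cannot be asked.

    #include <conio.h>

    #define SEQ_INDEX_PORT 0x3C4
    #define SEQ_DATA_PORT  0x3C5

    /* Last value written to each EGA sequencer register (indexes 0-4). */
    static unsigned char seq_shadow[5];

    /* All register writes go through here so the shadow stays current. */
    static void write_seq(unsigned char index, unsigned char value)
    {
        if (index < 5)
            seq_shadow[index] = value;  /* the hardware won't tell us later */
        outp(SEQ_INDEX_PORT, index);
        outp(SEQ_DATA_PORT, value);
    }

    /* "Reading" a write-only register returns the shadowed copy. */
    static unsigned char read_seq_shadow(unsigned char index)
    {
        return (index < 5) ? seq_shadow[index] : 0;
    }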

Windows 2.x display drivers did implement one interesting piece of functionality, switching Windows to/from the background. This was not at all intended for Win386 but rather for OS/2 (that is, OS/2 1.x, at least initially). The switching was implemented in the display driver by hooking INT 2Fh and watching for focus switch notifications.

In Windows 3.0, Enhanced 386 mode implemented the previously OS/2-only INT 2Fh callbacks that indicated switching out of and back to the Windows desktop. On the way out, the display driver could restore some kind of sane VGA state, and on the way back to the desktop it could re-establish the necessary hardware register state. In addition, the display driver could force a redraw of the entire screen, which avoided the need to save any video memory (which was good, because the video memory could be relatively big).
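
If memory serves, those are the DDK’s “Notify Background Switch” (4001h) and “Notify Foreground Switch” (4002h) calls. A much simplified real-mode illustration of the hooking mechanism, using Microsoft C style _dos_getvect()/_chain_intr() rather than the 16-bit driver assembly:

    #include <dos.h>

    #define NOTIFY_BACKGROUND  0x4001   /* desktop is being switched away */
    #define NOTIFY_FOREGROUND  0x4002   /* desktop is coming back         */

    static void (_interrupt _far *old_int2f)();

    /* Watch INT 2Fh for the screen switch notifications; everything else
       is passed down the chain untouched. */
    static void _interrupt _far int2f_hook(unsigned es, unsigned ds,
        unsigned di, unsigned si, unsigned bp, unsigned sp,
        unsigned bx, unsigned dx, unsigned cx, unsigned ax)
    {
        if (ax == NOTIFY_BACKGROUND) {
            /* put the hardware into a sane state for whoever comes next */
        } else if (ax == NOTIFY_FOREGROUND) {
            /* reprogram the registers and force a full desktop repaint  */
        }
        _chain_intr(old_int2f);         /* never swallow the notification */
    }

    void hook_int2f(void)
    {
        old_int2f = _dos_getvect(0x2F);
        _dos_setvect(0x2F, int2f_hook);
    }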

Unfortunately I don’t have the Windows 3.0 DDK (and no one else seems to, either) so I can’t look at the 3.0 VDDVGA source code. But it’s clear that whereas Windows 2.x display drivers knew very little about Win386, Windows 3.0 drivers typically have some level of cooperation with the VDD through the INT 2Fh interface.

Windows 3.1 VDDs

In Windows 3.1, Microsoft added a whole new level of complexity to VDDs. Namely, video memory can be paged. Microsoft article KB80901 states the following:

In Windows version 3.1, the standard virtual display device (VDD) for VGA is modified to demand page video memory. Thus, you can run graphical MS-DOS-based applications in a window or in the background on VGA systems. This VDD must track video memory usage, so it is not compatible with any of the super VGA display drivers that must access more than 256 kilobytes (K) of video memory. To run these display drivers, a user must use either the VDD provided by the display adapter manufacturer or the VDDVGA30.386, which is included with Windows version 3.1. Demand paging of video memory may break TSRs that worked with Windows version 3.0. The difference is that the VDD virtualizes access to video memory; in Windows version 3.0, the display driver had full reign over memory.

I am not entirely certain why Microsoft did that. It seems to add a lot of complexity in return for not a lot.

The Windows 3.1 VDDVGA.386 introduced a new concept of ‘CRTC VM’ and ‘MemC VM’, that is, the VM that owns the graphics card’s CRT controller (what is displayed on the screen) and the VM that owns the graphics card’s memory controller, i.e. what is read from and written to video memory.

In the typical case, the CRTC VM is also the MemC VM; that can be the Windows desktop (aka System VM) or a full-screen DOS box. Things get interesting for windowed DOS boxes. The desktop remains the CRTC owner because the desktop is what needs to be displayed. But a DOS box can temporarily become a MemC VM, directly accessing video memory.

Needless to say, this gets quite complicated. VDDVGA.386 needs to save the old MemC VM state, merge the new MemC VM state with it and update the hardware registers, let the DOS box execute, and then restore the original MemC VM state before the System VM can do any drawing to the Windows desktop.
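
In other words, the VDD juggles two owners of one piece of hardware. A purely conceptual sketch of the dance, with names of my own invention rather than anything from the VDDVGA sources:

    /* Per-VM shadow of the VGA state; the real VDD tracks much more. */
    typedef struct {
        unsigned char seq[5];     /* sequencer / memory controller state */
        unsigned char crtc[25];   /* CRT controller state                */
        /* ... graphics controller, attribute controller, DAC, VRAM ...  */
    } VM_STATE;

    static VM_STATE system_vm, dos_box;
    static VM_STATE *crtc_vm = &system_vm;  /* owns what is displayed    */
    static VM_STATE *memc_vm = &system_vm;  /* owns video memory access  */

    static void program_memc(const VM_STATE *s)
    {
        /* write s->seq etc. to the actual hardware registers */
        (void)s;
    }

    /* A windowed DOS box temporarily becomes the MemC VM while the
       desktop (System VM) stays the CRTC VM. */
    static void run_windowed_dos_box(void)
    {
        VM_STATE *old_owner = memc_vm;

        program_memc(&dos_box);   /* merge the DOS box state in          */
        memc_vm = &dos_box;
        /* ... the DOS box executes, touching video memory directly ...  */
        program_memc(old_owner);  /* restore before the desktop draws    */
        memc_vm = old_owner;
    }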

As far as I can tell, of the drivers shipped with Windows 3.1 only VDDVGA.386 has this complexity. None of the other VDDs, including the Video 7 specific V7VDD.386, implement this logic. As mentioned above, I strongly suspect that the Video 7 VDD in the Windows 3.1 DDK (source code in VDDV7VGA directory) is actually very close to the Windows 3.0 VDDVGA.386, and thus to the Windows 3.1 VDDVGA30.386.

It’s a Tie

Needless to say, the register saving/restoring logic in VDDVGA.386 is quite fiddly and difficult to debug. In the end I have not been able to find out why register changes “leak through” to the System VM (i.e. Windows desktop). I found out where in the code that happens, but not why, or how to prevent it.

What I did find is that the DspDrvr_Addresses function does not at all do what the comments suggest. The function is supposedly used “to specify latch address” in video memory. Closer examination of the Windows 3.1 VGA display driver showed that while it does define a byte for the latches and sends its address to the VDD, the display driver itself does nothing with that byte.

But even more interesting is that VDDVGA.386 does not use the latch byte either. Instead, VDDVGA.386 assumes that the latch byte lives somewhere very close to the end of the video memory used by the display driver, and expects that any following pages can be used by the VDD. (That logic likely comes from the Windows 2.x EGA/VGA drivers.)

A corollary is that passing 0FFFFh as the latch byte address to the VDD (something that SVGA256.DRV does) tells VDDVGA.386 that there is no video memory to share. In that situation, VDDVGA.386 does not try any hair-raising schemes to modify the VGA register state behind the display driver’s back.
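
For what it’s worth, the way a display driver reaches that VDD service in the first place is the standard Win386 “Get Device API Entry Point” call (INT 2Fh, AX=1684h) with the VDD’s device ID of 0Ah. A Borland-style sketch; the register convention for actually calling service 0Ch with the latch address is in VGA.ASM and is not reproduced here:

    #include <dos.h>    /* int86x(), segread(), MK_FP() */

    #define GET_DEVICE_API  0x1684   /* Win386: Get Device API Entry Point */
    #define VDD_DEVICE_ID   0x000A   /* the virtual display device         */

    typedef void (far *VDD_API)(void);

    /* Returns the VDD's protected-mode API entry point, or NULL (0:0)
       if no VDD API is present. */
    static VDD_API get_vdd_api(void)
    {
        union REGS   r;
        struct SREGS s;

        r.x.ax = GET_DEVICE_API;
        r.x.bx = VDD_DEVICE_ID;
        r.x.di = 0;
        segread(&s);
        s.es = 0;
        int86x(0x2F, &r, &r, &s);

        /* The driver would then call service 0Ch (DspDrvr_Addresses)
           through this pointer, passing the latch byte address, or
           0FFFFh to tell VDDVGA.386 there is no video memory to share. */
        return (VDD_API)MK_FP(s.es, r.x.di);
    }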

It’s not perfect either. The system does survive MODE CO80 in a windowed DOS box without trouble, but starting (in a window) a DOS application which uses multiple pages of video memory triggers an interesting warning:

[Screenshot: A disturbing but seemingly harmless warning]

The warning appears to be harmless. Once it’s dismissed, the application works fine. The warning also only pops up the first time the application is started (in the same windowed DOS box). It’s not ideal, but it’s something I can live with.

I consider this fighting VDDVGA.386 to a draw. I am not impressed with the Windows 3.1 DDK documentation—it omits certain things while documenting other things that appear to be fictional. That said, the actual DDK source code saves the day, at least in the video area, because it is possible to see more or less all of the code involved.

And the Windows 3.0 DDK would be really nice to have.


59 Responses to Windows 3.x VDDVGA

  1. Joshua Rodd says:

    The architecture of Enhanced Mode Windows 3.x (and Windows/386) was interesting, but also infuriating. Essentially, Microsoft made an 80386 virtual machine hypervisor which then hosted 8086 virtual machines. Any device which might be accessed by more than one VM needed to have a VxD which would intercept these accesses and virtualise the device.

    In Windows 3.x, Microsoft extended the concept to both hosted 8086 virtual machines and then a 16-bit protected mode “VM” (aka the System VM) which, nonetheless, had to have access to the hardware controlled by a VxD if there was any chance another VM (aka a DOS window) would access the same hardware.

    Worse still, the concept of a “device” extended to things like network components and DOS itself, so you end up with an 80386 hypervisor which essentially virtualises DOS. Windows 9x didn’t depart from this architecture and indeed took it to its logical conclusion, but at least on Windows 9x, writing drivers was significantly easier; pre-9x VxDs are a confusing mess of assembler and hooked interrupts.

  2. Michal Necasek says:

    “Infuriating” is a good way to put it 🙂 Windows/386 was a real mode multitasker, and Windows 3.0 added protected mode support to it (for both the Windows “system VM” and DOS VMs). The huge complication was DOS sitting underneath all of it, both running the system VM and also running in multiple DOS VMs. And in the Windows 3.0/3.1 days this was extra complicated because Enhanced 386 mode wasn’t the only way to run Windows. So you needed VxDs to support Enhanced mode, but also not-VxDs for Standard mode. Which led some software vendors to say “sorry, we do Enhanced 386 mode only”, and Windows 3.11 for Workgroups made that official.

    My experience is that Standard and Real mode are quite close, and writing Windows code that runs as either real or 16-bit protected is not that hard. But VxDs are a whole different world. And Microsoft definitely didn’t make it easier with giant wads of heavily macro-ized assembler code.

    Windows 9x was, as you say, architecturally the same, but because the basic assumption was that the machine lived in the VxD world, many things were much easier. Microsoft could for example support NT style minidrivers, and over time the driver architecture got a lot better because there was no funny non-32-bit case to worry about. Even though, especially for storage, it was still possible to have to fall back on real-mode DOS drivers. Pretty it was not.

  3. SweetLow says:

    >It was possible to recover by using Alt+Enter to switch the DOS window to full screen again and then returning to the desktop, but obviously that wasn’t going to cut it.

    Hmm, I smell something interesting. The same problem and workaround as with VBEMP display driver under Windows 9x…

  4. Michal Necasek says:

    It is quite possibly the same underlying problem, the video BIOS forcing the graphics card out of high-res mode by writing to some registers that aren’t trapped/virtualized.

  5. Richard Wells says:

    Alt+Enter should correct for video gremlins. When the program switches into a window from full screen, the 386 grabber collects information about the current video mode (including what becomes SetPaintFnt, rather critical for text mode applications in a window) and then displays it. In theory, it would be possible to catch these problems with a utility that sends the correct message to the grabber and forces an UpdateScreen instead of switching to full screen and back. I think some Win 3.1 drivers did trap a lot more and force more frequent updates. The modern solution is to virtualize everything but that is not very practical when the target machine is a 386/16 with 4MB of RAM.

    The underlying DOS with additional virtual machines containing their own copy of DOS was the only way to keep within the very tight memory constraints. Desqview-386 used a very similar approach for the same reasons.

    I did work on a VXD back in 1991. The ratio of frustration to successful code was extremely unfavorable. One depressing aspect was how little memory could be used by a VXD before the system transformed into a sluggish swapping nightmare.

  6. Yuhong Bao says:

    I wonder if part of the reason why they decided to demand page video memory is to make Windows 3.1 run better on 2MB to 4MB systems.

  7. MT says:

    I am curious about the virtual driver model and VM used in the early versions of Windows. Why did Windows run in a VM-style setup with “virtual” drivers? Would this not have negatively impacted performance on less powerful computers? If we ignore the use of DOS boxes, it seems to me that Windows would simply work as any other protected-mode operating system. Switch to pmode, run the programs as a process and use drivers to abstract the hardware. A driver is simply a piece of software that implements a standard interface specified by the operating system, such as a GDI-like interface. It works with the corresponding hardware using proprietary mechanisms to shield the operating system from hardware-specific details. However, the job of the driver is not to virtualize.

    So to clarify, I am confused about the role of drivers when running Windows 3.1. If a program calls a GDI function, isn’t it just processed in Windows, passed down to a driver which then programs the hardware? So, nothing “virtual” about it at least at the hardware level. Windows itself will do a type of virtualization for instance by implementing the “windows” concept where every program runs in a window and can thus share one monitor and one graphics card. But this obviously isn’t due to the driver offering any virtualization of the hardware – it’s just how the o/s and applications tend to use it.
    When it comes to DOS boxes, it would make sense that the Windows VM monitor would need modules for virtualizing hardware that DOS programs tend to program directly. For example, a VGA module so a DOS program can think it’s programming a VGA card. But, this isn’t really a driver – this job can be done once by the o/s. The module will in turn invoke Windows calls to carry out the virtualization, and such calls may well end up in a vendor-specific driver (to get the things on the screen), but the vendor driver doesn’t know it’s related to virtualization.

    So my question maybe boils down to: When running plain Windows applications, were there any hardware virtualization at all going on or did things work in a classic driver setup as I described above (program calls become call to the driver which programs the hardware).
    Did each hardware vendors really have to write a driver that in addition to the above fully virtualized their hardware? And if that’s the case, was this solely for the purpose of DOS boxes or did it also play a role for normal Windows programs?
    It would make sense that the vendor could also write what I called virtualization modules above so that their specific hardware (with a proprietary interface Windows knew nothing about) could be available in a DOS box, if it happened that DOS programs tended to program such hardware directly. But this is a separate job from a driver and isn’t really a driver at all. It’s a plugin into the Windows virtualizer. But in Windows all drivers are called VxD or virtual drivers, and people talk about a “System VM” for Windows programs, so that is where my confusion is.

  8. Michal Necasek says:

    Early versions of Windows did not use any virtualization whatsoever. Microsoft did an excellent job of confusing things by talking about the Windows “virtual machine”, but that was really what we’d call a hardware abstraction layer and device drivers these days.

    True virtualization came with Windows/386 (1987), which was effectively an add-on to Windows 2.0. As the name suggests, it was 386 only, so it didn’t negatively affect performance on PCs and ATs because it didn’t run on them at all. Windows/386 added the ability to multitask DOS applications. It did nothing for Windows applications, the thing that changed was that Windows itself ran in a VM and could be time-sliced with DOS sessions. Note that in the Windows 2.x days, Windows/386 was available as one product, and plain Windows (later Windows/286) was another product. Windows/386 was Windows/286 plus DOS multitasker.

    Windows 3.1 can still run in Standard mode. When it does, there’s no virtualization. There are Windows drivers for display, keyboard, mouse, etc. and those directly control the hardware.

    Windows 3.1 in Enhanced 386 mode (aka Win386) adds a virtualization layer underneath. Hardware devices are virtualized to support multiple VMs, but you have to keep in mind that those aren’t only DOS boxes, Windows itself is a VM too. So the Windows drivers (for display, keyboard, mouse, etc.) are also subject to virtualization.

    So in Windows 3.1, there are not only VxDs. There are also still all the “regular” Windows drivers (usually with .drv extension). To a lesser or greater extent those drivers work with VxDs, but they are largely independent and do not need VxDs to operate.

    Once the Enhanced 386 mode was the only thing left, VxDs were attractive for many software vendors because they were 32-bit code, unlike 16-bit Windows drivers. It was possible to shove a lot of the driver functionality into a VxD, and there were things like network stacks in VxDs. It was confusing, because a VxD could do more or less anything, and with Standard mode gone there was no clear delineation what should be in a VxD and what shouldn’t.

    Over time, Microsoft moved more functionality into VxDs, like the DOS file system in Windows 3.11 for Workgroups and disk drivers in Windows 9x. But even in Windows 9x not everything was a VxD, notably Windows 9x display drivers were quite close to Windows 3.x display drivers. There was a very clear evolution from DOS + Windows 1.0 to Windows Me, slowly moving more and more functionality into the 32-bit VxD world, but the basic Win386 architecture remained.

  9. MT says:

    Hi thanks but what I’m unsure about is probably this part “Windows 3.1 in Enhanced 386 mode (aka Win386) adds a virtualization layer underneath. Hardware devices are virtualized to support multiple VMs, but you have to keep in mind that those aren’t only DOS boxes, Windows itself is a VM too. So the Windows drivers (for display, keyboard, mouse, etc.) are also subject to virtualization.”. What does it mean that hardware devices are virtualized and not just for DOS boxes but for Windows itself? Does this mean that when running in Win 3.1 Enhanced 386 mode with no DOS boxes and a Windows program wants to draw something on the screen, virtualization is involved at some level (i.e. some code will behave as if writing to hardware but it’s actually picked up by a driver/VxD somewhere that intercepts and then presumably does the real drawing)?

  10. Michal Necasek says:

    You understood that correctly, except display drivers are not the best example. Because they usually tell the VDD to not virtualize anything in the system VM (and instead the display driver is responsible for saving/restoring any state). But in general that is how it works, for example the Windows keyboard driver (keyboard.drv) goes through the virtual keyboard driver (vkd.386). The VxD does whatever is required to let Windows itself and DOS sessions think they own the keyboard/mouse/display/whatever.

  11. MT says:

    Ok I think this is the heart of the matter. Ok so display drivers are a bad example… but what if we take sound cards? Let’s say I am a vendor of a sound card with some hardware interface. I want to make this available to Windows programs. Do I need to write both a VxD for completely virtualizing my own hardware, and then a .drv driver for actually making Windows able to know how to use the (now virtually exposed) hardware, which will then be manipulating the virtual hardware exposed by the VxD/.386?

  12. Richard Wells says:

    Note that under Windows for Workgroups 3.11, running in standard mode was still an available option for troubleshooting (win /d:t). Having drivers that work in standard mode was still a requirement.

    The following excerpt from Writing Windows Drivers by Dan Norton applies to performance concerns with drivers under enhanced mode. Please note that the book was written before Win 3.1, so the later changes to the VxD model could not be mentioned.

    “… you should know that virtual access to the hardware can slow things down substantially. Instead of having your standard driver poke at virtual I/O ports, you may wish to give your driver direct access to the ports in enhanced mode. Alternatively, you may wish to have a virtual device driver control direct access to the hardware, handling device interrupts and buffering directly. This way your standard driver is not bogged down with virtual I/O access.”

    Disclaimer: I was an employee of Mr. Norton prior to the book’s publication.

  13. Yuhong Bao says:

    Which is why the amount of work done (and memory used) in a VxD should be kept to a minimum.

  14. MT says:

    @Richard: If a vendor were to do this (“you may wish to give your driver direct access to the ports in enhanced mode”) – what would one do? Make a VxD that actually didn’t virtualize anything and thereby let the .drv driver through?

    To summarize in general – it seems virtualization really was generally in play in this model. And it’s all due to the need to support DOS boxes, because if it wasn’t for those there would be only one VM, thus no need for VM’s or .386/VxD’s *, only the ordinary drivers knowing their hardware (“.drv” files). This could potentially have given Windows a performance benefit? It’s strange to me that Microsoft selected this way of supporting DOS boxes, incurring an overhead for general Windows operation (and requiring vendors to supply two drivers, at least in some cases) – as opposed to optimizing for Windows apps, and give .drv drivers direct access. DOS boxes would be handled by a v86 based virtualizer that would itself contain the “drivers”/modules necessary to emulate the PC hardware – and would rely on the normal Windows API’s for drawing on screen, make sounds, keyboard etc. which would eventually go down to the .drv driver. I guess it shows how important DOS boxes must have been or maybe it just happened to be a good model initially.

    *) Though based on Michal’s response it’s clear that VxDs were later used to implement just any normal driver, not necessarily related to virtualization.

  15. Richard Wells says:

    @MT: Direct access requires the VXD to lock the device and then hand off control to the only driver permitted.

    Windows 3.x had standard mode for those that wanted the maximum speed of Windows drivers without the overhead of trying to keep DOS applications running in the background. 386 mode was a rapidly implemented method of improving old app support without trying to create a whole new protected mode OS. An “almost OS/2 2.0” would have taken almost as much work to develop as the OS/2 kernel itself did. There were other DOS multitaskers with a complete protected mode kernel which were unsuccessful since the memory demands were higher and more software wouldn’t work.

    VxDs did change as memory capacity increased so buffering access to devices was possible. Some VxD and driver pairs look similar to the design concepts used in miniport drivers. Undocumented Windows also pushed the idea of using VxDs for flat memory access. A natural thought for a DOS extender developer, but consideration of the risks of ring 0 access and the needs of multitasking got short shrift.

  16. MT says:

    @Richard: Thanks a lot for the info. For low-bandwidth stuff (keyboard, mouse) I doubt virtualization would make much of a difference. For video it would, but apparently that was treated as a special case. I don’t know about sound cards, it seems performance could be hampered by virtualization there as well… maybe they were treated specially too. True virtualization would require the VxD to do mixing of the incoming sound I guess, and this is also what happens in a modern o/s (in the o/s itself, not the driver) for “application-level” virtualization of the sound resource.

    For what it’s worth I always found Windows 95 performance to be excellent. I was anxious when upgrading my aging 486/25 with 8MB RAM and feared it would be slow as a dog, but everything seemed faster and more snappy than with Win 3.1. Even booting was quite fast. I suppose the 32-bit drivers have played a big role there.
    On the other hand, OS/2 was at the time hailed as the “true” 32-bit o/s with preemptive multitasking etc. I for one wasn’t that impressed with the performance (sorry to say on this forum) – it seemed less responsive and more sluggish in many tasks. Booting was slow too, and with periods of no activity. Maybe the drivers for OS/2 weren’t as good (though my machine was an IBM PS/1 2133 tower). I think perhaps the RAM was a problem and also OS/2 was likely faster at things I didn’t use (server stuff, network, true multitasking etc.)

  17. Michal Necasek says:

    Yes, DOS boxes were very important. Think about it like this — imagine it’s 1988 and you’re trying to do office or engineering work with Windows. There just weren’t that many Windows apps, and users required DOS applications. Windows/386 was, if anything, a way to get users to spend more time running Windows by letting them run DOS programs concurrently.

  18. r34jinkai says:

    @MT Please note that VxD technology wasn’t exclusively for DOS box support. It was also used to leverage BIOS and DOS code to drive hardware which had no specific Windows driver but was supported by BIOS or DOS drivers. Think about cheap IDE/ATA stuff supported directly by INT 13h in the BIOS, or network hardware which at the time had only Novell ODI compatible DOS-only drivers.

    Later in the game it was used to port NT code over to the Win95 and 98 architecture. First as source code compatibility with NT storage miniports… then DirectX… and very late in the game, a very minimal set of the NT kernel running as a VxD and offering a subset of the WDM driver architecture (USB, ACPI, BDA, FireWire and multimedia kernel streaming support in Win98 use this, for example). This broad compatibility between DOS, Windows and NT is the reason why the Win9x architecture lasted that long.

  19. Richard Wells says:

    For those wondering about the costs of virtualizing the display, InfoWorld Feb 13, 1989 has a comparison review of several 386 multitaskers. Page 55 has some performance numbers; one that deserves highlighting is the Windows 386 Drafix/123 benchmark which takes twice as long if the apps are run in a window as opposed to full screen. Desqview performed much better in that task but running in a window was still slower than full screen.

    The multitasking market wasn’t all that big but Quarterdeck had about 100 million dollars in sales (roughly 1/10th of what MS had across all product lines) in the early 90s.

  20. SweetLow says:

    >disk drivers in Windows 9x
    In Windows 3.1x too (BLOCKDEV). Windows 9x has the second version (IOS). And you have a good article about that, “How to please WDCTRL”: http://www.os2museum.com/wp/how-to-please-wdctrl/

  21. Michal Necasek says:

    Yes, I wrote about the Win95 IOS too.

    It’s just that in Windows 3.x, the VxD disk driver was entirely optional and probably not present on many machines (given that WDCTRL was quite picky, and not every OEM shipped a custom driver, though many did). In Windows 95, on a freshly installed system, the 32-bit “native” disk driver was almost certainly used.

  22. SweetLow says:

    >In Windows 95, on a freshly installed system, the 32-bit “native” disk driver was almost certainly used.

    Try to install any 9x on modern hardware 🙂 Definitely there is NOT any 32-bit disk driver, but unlike NT (XP or later) the OS still works.

    >not every OEM shipped a custom driver, though many did

    And I used one of these drivers once upon a time. Multi-function card on VLB with an IDE part… Unlike WDCTRL, the driver correctly processed the drive’s geometry translation in the BIOS and high ATA transfer modes.

  23. MT says:

    It’s really fascinating. I remember on Windows 3.1 reading in a magazine how to enable the native driver which was apparently not enabled by default. It did involve changing some things in .ini files including this wdctrl… so is this correctly understood that the native driver would never be enabled in Windows 3.1 unless one did this? I wonder how many users found out to do this 😉

  24. MT says:

    Now we are at the question of drivers – something I’ve always wondered: How did the hardware detection in Windows 95 work for non PNP devices? It seemed it ran through a lot of detection routines to try and detect the hardware but how was it implemented in practice… did it simply load all the vxd’s it knew about (from the INF database) or did driver manufacturers provide a special module that would be loaded for the purpose of auto-detection etc. If so, what was the API? Or were the detection routines for legacy hardware built into Windows 95 and not extendable by 3rd party drivers?

  25. SweetLow says:

    >did it simply load all the vxd’s it knew about (from the INF database)
    No. The one exception that uses such a method (load all) is the IOS (it really does load ALL .vxd files in its folder).

    >detection routines for legacy hardware built into Windows 95
    Yes. Executed on OS setup and Add Hardware (detection of new hardware phase).

    >did driver manufacturers provide a special module that would be loaded for the purpose of auto-detection etc
    >not extendable by 3rd party drivers?
    Yes. In such cases you have to use custom setup or prompt user to make correct choice – .INF and device configuration.

  26. Michal Necasek says:

    Sure, but installing Win9x on hardware 10 or 20 years newer than the OS is not exactly fair 🙂

    My vague memory is that some vendors provided Windows 3.x 32-bit disk drivers as a way to get benefit from multi-sector transfers and the faster ATA-2/ATA-3 transfer speeds even on machines where the BIOS didn’t do that.

  27. Michal Necasek says:

    The WDCTRL driver was enabled by default on some systems, but the detection was relatively picky and I believe many systems didn’t pass the tests even though the driver worked fine.

  28. Richard Wells says:

    Western Digital shipped their EIDE drives with Ontrack and included FastDisk updated to support ATA-2.

  29. r34jinkai says:

    @MT Detection for Non-PnP hardware in Win95+ (yes, including NT from Win2k onwards) basically is a fixed database with all the IO ports and memory addresses for all the non-PnP hardware known to MS (on their HCL list), and the detection routine pokes all these known addresses to check for known answers on them. It logs the detection progress (via the Detlog.txt file), so if the detection routine crashes because the system or the hardware installed didn’t like a certain port poked in a specific way, that address gets marked and the routine doesn’t try to poke that port/address again. This is also the reason why this routine/wizard has a BIG WARNING that it can crash the system.

    This routine can’t be extended. If the system doesn’t detect your specific non-PnP card, it asks for “manual” installation, where it asks for a specific INF file, and asks you to set your IRQ/IO/DMA/MemAddress from the sets included in your INF file. This specific non-PnP path is controlled by the DDInstall.FactDef sections in your INF file and SETUPAPI routines.

  30. MiaM says:

    MT: Afaik there was no need to edit ini files; to enable WDCTRL (and/or to enable the FAT file system VxD) you just checked the “32-bit disk access” (and/or “32-bit file access”) check box.

    Btw, re the 11 year old post on WDCTRL: Is it really correct that it compares two different ways to determine the disk size/parameters? I have a gut feeling that it rather compares the size/parameters for the partition. I’m writing this as IIRC I’ve had success in using 32-bit disk access on way larger disks simply by only having a single partition that fits below the 5xxMB limit. Back in the day it might have been reasonable to do this if you only lost a few percent of the disk size (and I btw suspect that the “only a few percent” thing was a reason for many vendors selling 540MB disks for a while). Later on, any disk size loss probably doesn’t matter at all when setting up a vintage computer; like, who cares if 95 percent of a 10GB disk is lost with a 500MB partition for running Windows 3.11.

  31. Michal Necasek says:

    I went by my reading of the WDCTRL source code. You’re welcome to take a look at it and correct me, it’s on the Windows 3.1 DDK.

    The checkbox for 32-bit disk access (not 32-bit file access!) was AFAIK only there if Windows decided that WDCTRL might work, not always.

    A partition under the ~504M limit would only work if the geometry also matches. The fact that the partition is small would not help if logical and physical CHS geometry differ.

  32. MiaM says:

    Interesting, I would have assumed that it would always be there.

    For the use case with a larger disk in a vintage computer with a smaller partition I might have switched off LBA/address translation in the BIOS setup. Perhaps I just had the custom disk size parameters set to the 5xx limit.

    Btw I might be doing something wrong, but it seems like the rss feed for comments doesn’t update correctly. Latest comment I see with this URL is over a week old :O
    https://www.os2museum.com/wp/?feed=comments-rss2

  33. Michal Necasek says:

    It was entirely possible to set up the BIOS to use “compatible” geometry for the drive. More or less all ATA drives I’ve seen will accept any halfway sane geometry, so it’s entirely possible to convince a 10GB drive to act like an old ~500M drive where logical and physical geometry matches.

    I’ve had trouble with RSS feeds not updating many times in the past, but right now I still see the comments in RSS (Feedbro extension in Firefox).

  34. MiaM says:

    Do you know which of the feeds you use in Feedbro/Firefox?

    Side track: Perhaps you could do a blog post about the blog, to keep comments re such technical issues in the comment section there and not clutter up other blog posts?

  35. Michal Necasek says:

    I have the article and comment feeds set up. The comment feed (http://www.os2museum.com/wp/comments/feed/) definitely sees new content. A separate blog post might make sense, but you started it 🙂

  36. Josh Rodd says:

    Windows 3.0, OS/2 2.0, and NT 3.1 all developed pretty much the same model around “virtualising” VGA for DOS boxes. I say this for these reasons:

    – OS/2 would only virtualise VGA if you use the plain VGA driver, and had some remarks about “OS/2 cannot virtualise VGA without the assistance of VGA hardware”. If you had an SVGA driver for PM, the DOS boxes would not be able to display VGA graphics modes in a window.

    – NT worked the same way.

    – And… Windows 3.0 also was the same way, although we know the exact details as to why, in Michal’s post above.

    In my opinion, the virtualisation involved was much too complex, and ended up being too much work to try to do for any other graphics cards. It was also basically pointless, since (for example) very little DOS software would try to access a hardware 8514/A, so why virtualise it?

    On a VGA, the technology stack ends up looking like this in Windows 3.0 Enhanced Mode, NT 3.1, and OS/2 2.0:

    – The system runs with a 32-bit overall hypervisor with 16-bit subsystems and drivers underneath that. In NT, the only 16-bit subsystems would be the NTVDM, WoW, and OS/2 subsystems. In OS/2 2.0, quite a bit of the OS was 16-bit, plus the Windows 3.0 compatibility layer and OS/2 MVDM. In Windows 3.0, almost all of the OS was 16-bit, but still had a 32-bit hypervisor.

    – The 32-bit hypervisor contains a VGA “virtual device driver”. What this really means is that this driver knows how to:

    – Save and restore the full state of a VGA (including various undocumented modes).
    – If told that a client application, whether a DOS VDM or the main system’s video driver, has exclusive control of the display, allow that driver to directly access the VGA hardware.
    – If told that a client application is running in the background, it fully virtualises whatever that application does, buffering its video display off-screen in addition to maintaining the state of a “virtual” VGA.
    – When switching to and from such an application, the “virtual” VGA is copied to the real one.
    – If told a client application is running inside of a window, it virtualises a VGA and then reflects whatever that virtual VGA is doing inside a graphical window, which it in turn renders using the actual system display driver. This logic is quite complex, and was never implemented fully for any combination other than VGA-compatible hardware, a VGA virtual device driver, and a VGA system display driver.

    There are remnants of this driver architecture that remain in SVGA or 8514 drivers for OS/2 (and NT and Windows 3.0). Text modes are fully virtualised and work correctly. If my memory does not fail me, a virtual CGA device was also provided. But EGA and VGA modes required the full VGA stack. (A CGA is much, much easier to emulate than an EGA or VGA.) With SVGA, XGA, 8514, etc. drivers, if a program switches to an EGA or VGA mode, the DOS application is suspended.

    Part of this architecture also included a “grabber” for Windows 3.0 to allow cutting and pasting of graphical screens. On OS/2 and NT, this was implemented generically and didn’t require a special driver.

    When running Windows 3.0 in standard mode on OS/2, the VGA virtual device lets Windows take over control whether in full screen or “seamless” mode. OS/2 shipped special Windows 3.0 display drivers which had direct access to the hardware and bypassed any virtual device driver, but also cooperated with the main system display driver to avoid display corruption. In 386 Enhanced mode, Windows on OS/2 functions as a 32-bit (and 16-bit) DPMI client and avoids loading any virtual device drivers. OS/2 implemented enough in the supplied 32-bit virtual device drivers to make Windows display drivers work properly, however. Win-OS/2 did not try to implement DOS boxes in Windows, which made this a lot simpler.

    The Video 7 driver sources look like a half-hearted attempt to implement the same thing, but then at some point someone realised virtualising an SVGA was a waste of time, and the performance of a windowed VGA DOS application was quite poor, and really didn’t have much use outside of cutting and pasting. Implementing a grabber that could read from a suspended DOS session worked well enough. OS/2 and NT ended up doing the same thing.

    In looking at the OS/2 DDK sources, a lot of effort went into trying to make this implementation as high performing as possible, such as directly mapping VGA memory for a windowed DOS application if the window was maximised and in the “right” spot on the screen. In my view, this was a lot of wasted effort.

  37. Michal Necasek says:

    I suspect the performance optimizations were not so worthless when you ran the code on a 25 MHz 386 or something along those lines.

    One detail re Win16 grabbers: The grabber concept was not specific to Win386, even Windows 2.0 already had them (I don’t remember if Windows 1.0 had them or not). The grabbers applied to Real and Standard mode Windows 3.x, so architecturally they could not depend on a VDD. Windows 3.x had separate 386 grabbers that did work with VDDs, but the idea of a non-386 grabber did not make sense on OS/2 or NT.

    I believe Win16, OS/2, and NT also have slightly different approaches to executing video BIOS in DOS boxes, but I do not recall all the details exactly. All I know for sure is that Windows 3.x runs the real video BIOS but adds optimizations for text output, bypassing the video BIOS.

  38. John Elliott says:

    There are four grabbers in Windows 1.0: CGA, Hercules, EGA mono and EGA colour.

  39. Josh Rodd says:

    The reason I say it was a wasted effort is because windowed VGA graphics on my 386DX 20MHz was painfully slow. It really wasn’t usable for anything other than cut and paste, and that didn’t need the complex virtualisation – just a grabber. The grabbers on Windows worked fine in real or standard mode.

    OS/2 would run the host video BIOS in MVDMs. I suspect it had its own speedier text routines but I’ve never really run OS/2 on a machine with lousy BIOS (i.e. a CGA). NT was quite a bit more virtualised and supplied its own BIOS, which made it much closer to a true VM hypervisor type of model.

    An underappreciated aspect of this was how compatible both OS/2 and Windows 3.0 were with a wide variety of display adapters and applications written for them. If you were in full screen mode, your old DOS apps would just plain work, regardless of what they were doing to the display hardware.

  40. Michal Necasek says:

    Yes, from what I understand, especially running windowed, NT took over the video BIOS functionality. In OS/2, the real video BIOS was generally used, and that was also the case in Windows 3.x. But in Windows 3.x there was the weirdness that Windows applications (including protected-mode ones) could also call INT 10h. There are some interesting comments about that in the Video 7 driver source.

    Re video performance — my experience is that on ISA machines, the card can make a huge difference. It’s hard to know exactly what Microsoft targeted, but I doubt they would have added all the complexity if it really made no difference.

  41. MiaM says:

    Re performance: Even if the speed would be glacially slow, there is still a use case for running a graphical DOS application windowed. The particular use case is applications that more or less show a static image and you want to reference that image while doing something in any other application. Like for example show a map/diagram of something and perhaps enter a search term and have things highlighted.

    Btw, the RSS link only shows comments from jan 24 this year. Seems like there is some weird cache thing going on on the server?

  42. Michal Necasek says:

    Yes, you’re right that even not-very-fast graphical applications might be useful sometimes.

    And you’re right that the RSS feed wasn’t updated, and yet the Feedbro extension in Firefox showed the latest comments. I thought I’d see if and when the feeds stopped updating, but no.

  43. Josh Rodd says:

    MiaM,

    OS/2 would “freeze” the app but display the VGA content in a window when using eg XGA PM drivers, which was useful for your described purpose. I’m betting once it was clear that was the primary use case, work on virtualising complex display adapters stopped.

    The VGA code was already written and done so it just kind of persisted in legacy mode. If I recall correctly, it still existed in the NTVDM right up through modern 32-bit versions of Windows. Pretty much every display adapter vendor had to make sure their cards would work with Windows 9x and NT’s generic VGA driver, or the OS couldn’t install.

    OS/2’s XGA driver offered a “640x480x16” mode which would restore full VGA function without requiring changing display drivers. This was the stated solution when an IBM customer needed to run VGA software. Interestingly, the XGA virtual device driver set fully supported this and you could run XGA software in a DOS full screen session and then switch to PM and run VGA software in a DOS window.

    In the present era things have come full circle with VMs like VirtualBox, VMware, and DOSBox fully virtualising a VGA (or SVGA) and then rendering on the host display’s GUI, complete with “seamless” mode for running Windows.

  44. Chris M. says:

    It’s been a while, but I recall OS/2’s VDM being quite a bit more capable than Windows’ when it came to windowed DOS applications. For example, it could run Mode 13h programs windowed, which NT and 9x couldn’t do. Off hand, I recall NT also had problems with full screen VBE modes too.

    OS/2 also supported pass-thru to a sound card which NT didn’t really support at all until XP (very limited) or the release of VDMSound.

  45. Yuhong Bao says:

    Correction: I found that Win95 can run Mode 13h programs windowed.

  46. Yuhong Bao says:

    Interestingly it would still do so while running with the plain VGA driver, which is limited to 16 colors

  47. Yuhong Bao says:

    (Windows 3.1 on the other hand would not allow you to run any Mode 13h applications in a window, even with the VDDVGA driver)

  48. GL1zdA says:

    Is there any documentation on how this changed in Windows 95? After this post, I fell into another rabbit hole, and I’m torturing 95 and 3.1 with graphics in windowed DOS boxes, and it looks like they enhanced it in 95. I can run games like Duke Nukem 3D or Raptor in a window. I have also tested CompuShow with animated GIFs, and it looks like it’s trying to do something even when run in the background (games stop once they lose focus, maybe because of sound?).

  49. Random says:

    The Windows95 DDK/SDK is on Winworld, as are the different MSDN cds. That should be pretty complete? Certainly more so than it is for Windows 3.0!

  50. Michal Necasek says:

    Yes, but not really. The Win95 DDK is easy enough to find, but Microsoft changed the architecture somewhat, implementing common core VDD functionality and asking HW vendors to only supply a relatively small mini-VDD. Only the mini-VDD source is on the DDK. This parallels the situation with the display drivers where the DIB Engine source code never made it to the DDK.

    There’s quite possibly something in the DDK documentation, maybe even vaguely accurate, but I don’t believe the actual source code implementing the added functionality is available.
