The floppy controller evolution

The floppy subsystem in PCs hadn’t mutated over time quite as much as, say, the hard disk subsystem, but prior to its extinction in the early 21st century, the floppy disk controller (FDC) did evolve noticeably.

In the original IBM PC (1981) and PC/XT (1983), the FDC was physically located on a separate diskette adapter card. The FDC itself was a NEC µPD765A or a compatible part, such as the Intel 8272A. It’s worth mentioning that nearly all floppy controllers supported up to four drives, but systems with more than two drives were extremely rare. The reason was that only two drives were supported per cable, while 99.99% of all systems only provided a single floppy cable connector.

The original FDC only supported two I/O ports: the read-only Main Status Register (MSR) and the Data Register, used for both reading and writing. The adapter card added another port, the Digital Output Register (DOR), used primarily for drive selection and controlling drive motors.

The MSR was mapped at port 3F4h, data port at 3F5h, and the DOR at 3F2h. All other ports in the 3F0h-3F7h range were unused. The FDC provided commands for reading, writing, and formatting disks, positioning drive heads, and several control commands. The FDC read and write commands were not able to automatically position the drive heads, which meant that the seek command had to be explicitly used before moving to a new cylinder, and at most one cylinder could be read or written with a single command.

One of the less obvious aspects of FDC programming is that there are floppy motor control bits and separate drive selection bits, but they are not quite independent. The motor control bits can be used without restriction and in any combination. However, the drive select signal is only active when the corresponding motor on bit is set. This arrangement presumably helps ensure that the drive head cannot be moved when the floppy media isn’t spinning.
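The interplay between the motor and select bits can be illustrated with a minimal sketch. This models the classic DOR bit layout (bits 0-1 drive select, bit 2 clears the controller reset, bit 3 gates IRQ/DMA, bits 4-7 enable the drive motors); the helper function is hypothetical, not part of any real driver:

```python
# Sketch of the classic Digital Output Register (DOR, port 3F2h) layout.
DOR_RESET_OFF = 0x04   # bit 2: controller is held in reset while this is 0
DOR_IRQ_DMA   = 0x08   # bit 3: gate IRQ 6 and DMA channel 2

def dor_select(drive, extra_motors=()):
    """Build a DOR value that selects `drive` and spins up its motor.

    The select bits only take effect when the matching motor-on bit is
    set, so the motor for the selected drive is always enabled here.
    Additional motors may be left spinning independently.
    """
    if not 0 <= drive <= 3:
        raise ValueError("the FDC interface supports drives 0-3")
    motors = 1 << (4 + drive)
    for m in extra_motors:
        motors |= 1 << (4 + m)
    return motors | DOR_IRQ_DMA | DOR_RESET_OFF | drive
```

For example, selecting drive 0 with its motor on yields the familiar value 1Ch.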

The original floppy subsystem was fine for double-density 5¼” floppies, but needed upgrading for the high-density drives introduced in the IBM PC/AT (1984). The FDC was moved together with the hard disk controller onto a single adapter card. That accounts for a very peculiar arrangement where bits in port 3F7h were shared between the floppy and hard disk controllers.

For reading and writing data, DMA transfers were used. DMA channel 2 and interrupt line (IRQ) 6 were reserved for the floppy controller. The FDC also supported an alternative method of data transfer without DMA, using only the data port. However, the non-DMA method was very rarely used.


The important changes in the PC/AT were the addition of the write-only Configuration Control Register (CCR) and the read-only Digital Input Register (DIR), both at port 3F7h. The CCR selected the data rate used to communicate between the floppy controller and the drive. The original double-density media used a 250 kbit/s rate, while the high-density media used 500 kbit/s. Without setting the data rate correctly, the media could not be read or written. The data rate setting was implemented in a very simple (or crude?) way: to achieve 500 kbit/s, the FDC was simply clocked at twice the speed. As a side effect, the various timeout values programmed into the FDC were only half as long and had to be set up accordingly. It should be noted that the FDC itself was still a NEC µPD765A or compatible; all changes were implemented on the adapter card outside of the FDC.
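The CCR rate encoding can be sketched as a small table. This assumes the usual AT-style encoding in bits 0-1 of port 3F7h; the 1 Mbit/s encoding only exists on later (82077-class) controllers:

```python
# AT-style CCR (port 3F7h) data rate encoding, bits 0-1.
# The 1,000 kbit/s value is only meaningful on later enhanced FDCs.
CCR_RATE = {500: 0b00, 300: 0b01, 250: 0b10, 1000: 0b11}

def ccr_for_rate(kbits):
    """Return the CCR bits selecting the given data rate in kbit/s."""
    try:
        return CCR_RATE[kbits]
    except KeyError:
        raise ValueError(f"unsupported data rate: {kbits} kbit/s")
```

The 300 kbit/s rate exists to read 250 kbit/s double-density media in a high-density 5¼” drive spinning at 360 rpm instead of 300 rpm.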

A welcome if not strictly necessary addition was the change line support. It should be noted that change line support is a feature of the floppy drive, not the controller or adapter; the diskette adapter simply makes the change line signal from a drive accessible to software. The change line signal is active if a floppy is removed from a drive, and deactivated when a disk is inserted and the drive head stepper motor receives a pulse. In older systems, software had to assume that the diskette may have been changed almost at any time; in the PC/AT, an inactive change line signal meant that the medium had remained in the drive and there was no need to re-check for media change.
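The software side of the change line check is trivial; on the PC/AT the signal is reflected in bit 7 of the DIR at port 3F7h. A minimal sketch of the decision, with a hypothetical helper name:

```python
DIR_DISK_CHANGE = 0x80  # bit 7 of the Digital Input Register (port 3F7h)

def media_may_have_changed(dir_value):
    """True if the change line is active, i.e. cached media information
    (geometry, filesystem state) must be considered stale.

    The signal is cleared again by stepping the drive head after a new
    diskette has been inserted.
    """
    return bool(dir_value & DIR_DISK_CHANGE)
```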

The IBM PS/2

The next update came with the first generation IBM PS/2 systems (1987). The early PS/2 systems used a so-called Type 1 controller, which was still based around the same NEC µPD765A compatible FDC, but with further modifications on the adapter side—although technically there was no adapter anymore, as the floppy subsystem was now part of the system board. The PS/2 floppy interface only supported up to two drives; attaching four drives was not even theoretically possible.

The PS/2 systems typically came equipped with the new 1.44 MB high-density 3½” drives, but from a software perspective the drives were no different from high-density 5¼” drives (other than a higher number of sectors per track, 18 vs. 15). However, there were software-visible differences in the form of several new registers. Status Registers A and B (SRA and SRB) mapped at ports 3F0h and 3F1h, respectively, were both read-only and provided read-outs of various signals on the drive interface. The SRA and SRB registers were intended primarily for diagnostic purposes and were not needed in normal operation. The DIR (at 3F7h) was no longer shared with the hard disk controller, and a bit was added to detect the ‘high density’ drive signal.

Enhanced FDCs

The second generation PS/2 systems (1990) introduced a new Type 2 controller. The Type 2 controllers were the direct forerunners of all modern diskette controllers and supported four data rates: 250, 300, 500, and 1,000 kbit/s. Those data rates allowed a controller to support both 5¼” and 3½” drives, either double-density (360K and 720K) or high-density (1.2M and 1.44M). The Type 2 was also the first to use a substantially different FDC with new commands and capabilities, typically an Intel 82077 variant.

The updated FDC supported a 16-byte FIFO to improve transfer speeds on fast systems. There was a new implied seek mode to avoid sending explicit seek commands to the controller. Support for up to four drives was re-introduced, and a new register was added: the write-only Datarate Select Register (DSR) at 3F4h, which allowed setting the data rate as well as the write pre-compensation values. The new Configure command was used to enable the new FDC features, and a Dumpreg command could be used to read the FDC state.
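The Configure command's byte sequence can be sketched as follows. This assumes the 82077-style layout (opcode 13h, a zero byte, a configuration byte, and a pre-compensation start track); the helper and its defaults are illustrative, not taken from any particular driver:

```python
# Assemble the 4-byte 82077-style Configure command sequence.
def configure_bytes(implied_seek=True, fifo=True, fifo_threshold=8,
                    disable_polling=True, precomp_track=0):
    """Configuration byte layout: bit 6 = EIS (implied seek),
    bit 5 = EFIFO (note: 1 DISABLES the FIFO), bit 4 = POLL
    (1 disables drive polling), bits 0-3 = FIFO threshold minus one.
    """
    if not 1 <= fifo_threshold <= 16:
        raise ValueError("FIFO threshold is 1-16 bytes")
    cfg = (fifo_threshold - 1) & 0x0F
    if implied_seek:
        cfg |= 0x40
    if not fifo:
        cfg |= 0x20
    if disable_polling:
        cfg |= 0x10
    return bytes([0x13, 0x00, cfg, precomp_track])
```

Note the inverted sense of the FIFO enable bit, a common source of bugs: writing a 1 there turns the FIFO off, preserving µPD765A-compatible behavior.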

A Type 3 controller appeared in some newer PS/2 systems (1991) with support for extra density 3½” diskettes with 2.88M formatted capacity. A new perpendicular recording mode was used which required support both in the drive and the controller. Newer variants of the Intel 82077 FDC supported perpendicular mode. Even though the 2.88M drives turned out to be too little, too late, and were never widely adopted, the support for perpendicular recording remained in many floppy controller chips.

Latter Day FDCs

In the meantime, the Intel 82077SL FDC added support for power management. Just like other chips with the SL suffix, it was aimed at the emerging laptop market. The Intel 82077AA offered an almost complete floppy adapter on a chip and supported three different modes of operation: AT compatible, PS/2 compatible, and (PS/2) Model 30 compatible.

The 82077AA also supported tape drives attached directly to the FDC. Irwin, QIC-40, and other floppy-interface tape drives could thus be used as an alternative to tape drives with a SCSI interface.

The Intel 82078 FDC was perhaps the pinnacle of FDC development, with support for up to four drives, tape drives with up to 2 Mbit/s transfer rate, perpendicular mode, AT, PS/2, and Model 30 compatibility, drive and media detection, power management, and a few other improvements.

As often happens, many of the advanced features of the 82078 went unused. The FDC, for example, offered advanced drive and media detection features, but the floppy drives on the market were too varied and not always correctly connected to the FDCs. As a result, hardware drive and media detection was not reliable, and BIOSes and operating systems kept relying on the tried and true software methods which worked with any AT-compatible floppy controller.

Super I/O chips

The last stage of FDC evolution was the so-called super I/O chips integrating numerous “legacy” hardware functions. The FDC was invariably one of those functions, accompanied by a selection of serial and parallel ports, game port, keyboard controller (KBC), etc. One of the early such chips was the Intel 82091AA Advanced Integrated Peripheral (AIP). The FDC part of the AIP was in fact slightly less capable than the earlier standalone 82078 FDC: support for 2 Mbit/s tape drives was removed because it unnecessarily complicated the chip design and there were almost no tape drives on the market capable of taking advantage of the faster data rate.

Most PC compatibles of the post-PS/2 era utilized super I/O chips from National Semiconductor, Winbond, ITE, or SMSC, rather than from Intel—even on Intel motherboards. In the early 2000s, the super I/O chips moved to the LPC (Low Pin Count) interface, far away from the modern high-speed buses.

There were no notable changes to the FDC interface after the early 1990s. A typical PC with no 2.88M drive didn’t need more than the standard PC/AT FDC interface with support for additional data rates, although enhanced controllers with FIFOs were standard (to reliably handle the higher data rates). By the end of the first decade of the 21st century, PCs with floppy drives were very rare, although as of 2011 there are still motherboards with floppy support on the market. Floppy drives were almost fully displaced by USB-based storage media, for reasons which are too obvious to list here.

Ready or not?

The original NEC µPD765A provided a mechanism to determine whether drives were “ready”, that is, a diskette was inserted and the drive door closed. The FDC would poll all drives and trigger an interrupt whenever the ready state for any drive changed. IBM did not use this mechanism—there was no ready signal on the floppy drive cable (or the drive), and the diskette adapter card was built such that the FDC thought the drives were always ready by tying the ready signal permanently high. As a consequence, software could never be certain whether a diskette had been changed or not. As mentioned above, the IBM PC/AT solved the same problem differently, using the change line which software could poll. Newer FDC chips don’t have any ready input pins, but still emulate one aspect of drive polling: After FDC reset, the ready state of all drives is considered to be changed, an interrupt is generated, and software must issue the Sense Interrupt command four times to clear the interrupt. That’s extra work for a non-functional feature. Only the last generation FDCs allow the polling emulation to be turned off.
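The post-reset drain sequence can be sketched against a toy model of the FDC. The `ToyFDC` class below is entirely hypothetical and only emulates the polling quirk: after reset, all four drives report a ready-state change via ST0 values C0h through C3h, and further Sense Interrupt commands return the invalid-command status 80h:

```python
SENSE_INTERRUPT = 0x08  # FDC Sense Interrupt Status command opcode

class ToyFDC:
    """Minimal stand-in for an FDC, modeling only reset polling emulation."""
    def __init__(self):
        # After reset, all four drives report "ready state changed":
        # ST0 bits 7-6 = 11, bits 1-0 = drive number.
        self.pending = [(0xC0 | drive, 0) for drive in range(4)]

    def command(self, opcode):
        assert opcode == SENSE_INTERRUPT
        if self.pending:
            return self.pending.pop(0)   # (ST0, present cylinder number)
        return (0x80, 0)                 # invalid command status

def drain_reset_interrupts(fdc):
    """Issue Sense Interrupt four times to clear the post-reset state."""
    return [fdc.command(SENSE_INTERRUPT) for _ in range(4)]
```

Skipping this drain leaves the interrupt pending and confuses subsequent command sequencing, which is why even minimal floppy drivers perform it.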

This cannot possibly fail

Another signal IBM got rid of was “drive fault”. This signal shared an input pin on the original NEC µPD765A with the TRK0 signal. A drive could indicate a fault during command execution, which would terminate a command and be reflected in the ST0 and ST3 status bytes.

Even though IBM duly documented the relevant FDC status register bits in the Technical Reference, the drives IBM used never generated the fault signal (and there’s no mention of such signal in the drive or adapter documentation). Newer integrated FDCs, such as the National Semiconductor DP8473, reflect this and their documentation shows bit 7 of ST3 (which reflected the fault signal in the original NEC µPD765A) as always zero.

DMA vs. PIO transfers

The floppy subsystem in PC compatibles always used DMA for transferring data, although the FDC also supports Programmed I/O (PIO) transfers with no DMA involvement. In the first PCs, DMA was used because the PIO method prevented the CPU from doing anything else and placed very tight timing constraints on the system. Tying up the CPU wasn’t a problem, since the BIOS did not allow the user to do anything else while waiting for a floppy transfer to complete anyway. However, the timing constraints were such that all other interrupts would probably have to be disabled while a floppy read or write was in progress, and even then there might be overruns or underruns.

Once 386 and 486 systems showed up, the situation was reversed. The CPUs were fast enough to comfortably handle the floppy transfers, and FIFOs helped make the transfers more reliable (both DMA and non-DMA). But tying up the CPU was now a serious issue, especially for multi-tasking systems. DMA was now a convenient way to handle a relatively slow transfer as a background task while the CPU was busy doing something else. By the late 1990s, the floppy subsystem was typically the only user of the old-style DMA (using channel 2) in a PC.
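The DMA side of a floppy transfer boils down to programming the 8237 controller's mode register for channel 2. A minimal sketch of the mode byte, assuming single-transfer mode with no auto-initialization (the helper name is illustrative):

```python
# 8237 DMA mode-register values for floppy transfers on channel 2.
DMA_CHANNEL      = 2     # bits 1-0: channel select
MODE_SINGLE      = 0x40  # bits 7-6 = 01: single transfer mode
XFER_TO_MEMORY   = 0x04  # bits 3-2 = 01: "write" transfer (device -> memory)
XFER_FROM_MEMORY = 0x08  # bits 3-2 = 10: "read" transfer (memory -> device)

def floppy_dma_mode(disk_read):
    """Mode byte for a floppy read (into memory) or write (from memory).

    Note the 8237's naming is memory-centric: a disk READ is a DMA
    "write" transfer, and vice versa.
    """
    xfer = XFER_TO_MEMORY if disk_read else XFER_FROM_MEMORY
    return MODE_SINGLE | xfer | DMA_CHANNEL
```

This yields the classic mode bytes 46h (disk read) and 4Ah (disk write) seen in floppy driver code.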

Which drive is it?

IBM subverted the NEC µPD765A design in more ways than just the ready signal mentioned above. The FDC was built to control up to four drives, and was intended to support seek/recalibrate operations as non-exclusive tasks, potentially seeking on all drives at once. The FDC required drive selection bits for almost every command, specifying the drive which the operation applied to. An overlapped seek would select one drive, send a step pulse, select the next drive, send a step pulse, etc.

However, IBM did not connect the drive selection bits of the FDC. Instead, the Digital Output Register (DOR) was solely in control of drive selection (and motor control). As a consequence, the drive selection bits programmed into the FDC did not matter—only the DOR determined which drive, if any, was selected. This arrangement made it impossible to use overlapped seeks because the FDC could not rotate the drive selection. This design (apart from the drive ready signal tied high) also precluded the use of drive polling, which likewise required the FDC to change drive selection.

Who do you believe?

Due to the above-mentioned idiosyncrasies, it is remarkably difficult to find accurate and thorough FDC documentation. The original NEC µPD765A and Intel 8272A datasheets are detailed, but may be outright confusing if the PC floppy adapter design is not taken into account. Newer, more integrated FDCs (Intel 82077, National Semiconductor DP8473) take the PC-style designs into account, but do not always point out the differences from the original, and tend to be less detailed overall, concentrating more on the features that were not present in the original µPD765A FDC at all (such as FIFOs, perpendicular recording, or power management).

An excellent (if perhaps unexpected) source of information is the IBM PC/XT/AT Technical Reference. The text is not particularly enlightening or detailed, but the included logic diagrams explain much of what the documentation leaves unsaid. With the logic diagrams and datasheets for the components that IBM used, it is usually possible to piece together the full picture which is not explained anywhere in the available documentation.

This entry was posted in PC architecture.

8 Responses to The floppy controller evolution

  1. Yuhong Bao says:

    “The DIR (at 3F7h) was no longer shared with the hard disk controller and a bit was added to detect the ‘high density’ drive signal.”
    Which caused trouble when ATA support was added to the Lacuna planar. IBM ended up having to add a switch:

  2. Yuhong Bao says:

    As ATA was based on the original AT hard drive/floppy card. Most clones ended up sticking with the AT mode.

  3. Michael Kjörling says:

    “The Type 2 controllers were the direct forerunners of all modern diskette controllers and supported four data rates: 250, 300, 500, and 1,000 Mbit/s.”

    I believe you meant to write kbit/s there, rather than Mbit/s. 1,000 Mbit/s is on the same order as that a modern 7200 rpm HDD can muster for sequential I/O.

  4. Michal Necasek says:

    Yes, clearly a typo, now corrected. Thanks for noticing!

  5. MiaM says:

    Perhaps a side track:

    At some point back in the days I had a 286 motherboard but no HD drives, only various DD drives and for some reason I wanted to use more than two drives. Using a controller from an old IBM PC or XT I still couldn’t get more than two drives working, even with DRIVER.SYS. The same floppy controller card and the same dos version and config.sys file worked in an IBM XT motherboard. It must have been some BIOS thing on that 286 motherboard. (It was one of the later 286 motherboards, with iirc C&T chipset and an early AMI bios).

So the next question is which BIOSes did support more than two drives?

  6. Michal Necasek says:

    To be honest I’ve never seen a BIOS with explicit built-in support for more than 2 drives. Did you actually re-jumper the second adapter to not respond at the default I/O port?

  7. MiaM says:

I only used one adapter, the standard one from an IBM 5150 PC or 5160 XT which has one internal connector for the first two drives and one external 37-pin d-sub connector for the last two drives. The hardware worked fine in an 5160 XT but not on my 286 motherboard. I can’t remember if I tried using only the floppy adapter without anything else (than a display adapter) or if I only tried using this setup together with my hard drive adaptor (which at that time btw was some RLL card without onboard FDC).

    I’ve never seen any BIOS where you can configure more than two drives, but on the XT motherboard you could use the third (and most likely the fourth) drive with a DEVICE=DRIVER.SYS… line in CONFIG.SYS. That didn’t work with the 286 motherboard.

    Therefore I think we should consider the ability to configure X number of drives in setup (or with switches on a PC/XT), and the ability to support selecting more than the first two drives via the BIOS API as two separate “handle more than two drives” things.

  8. techfury90 says:

    I have, but it’s not a PC. NEC PC-98s support 4 drives. Hell, mine even has an option in the CMOS setup to swap 1/2 with 3/4.
