Deeper Into ATA History

While looking for something completely unrelated (namely the Rock Ridge extensions to ISO 9660), I came across a cache of old X3T9 committee documents from 1990. In retrospect I’m a little surprised that I hadn’t found these earlier, since the archive appears to have been published on one of the Walnut Creek CD-ROMs circa 1994, but I’m not sure how long it’s been online.

What’s interesting is that the Walnut Creek archive appears to overlap with the X3T9.2 archive that has been available for a long time, but it contains numerous documents that the X3T9.2 archive does not. Notably there’s a directory with CAM Committee documents. While the CAM Committee’s primary objective was to define a Common Access Method (CAM) for software accessing SCSI devices, an effort that ultimately went nowhere, the CAM Committee also started a rather more successful side project, the AT Attachment (ATA) standard.

The archive is far from complete, but it does include one complete ATA draft, revision 2.1 from June 11, 1990. That’s one revision older than the oldest ATA draft I was aware of until now, which is revision 2.2 from August 1990. The rev 2.1 draft is provided as a ZIP archive containing WordStar files, which is excellent for seeing exactly how the draft was edited (and the WordStar files include a couple of editorial comments that do not show up in the printed version), but the downside is that getting from WordStar to PDF was not entirely trivial. In the end I was able to produce a PDF of ATA Rev 2.1 in 2-up format that’s quite similar to the scanned documents in the X3T9.2 archive.

Even better, the Walnut Creek archive includes what appears to be the very first and quite incomplete ATA standard draft from March 30, 1989. Said draft also provides a hint as to why a SCSI-oriented committee started ATA in the first place: the early ATA drafts also included a specification of EATA (Extended AT Attachment), a SCSI pass-through mode for ATA devices (completely separate from, and much older than, ATAPI).

Sadly the initial draft—which is so old that it’s called DAD, for Disk ATBus Definition, rather than ATA—does not include the EATA sections. In the next oldest currently available draft (revision 2.1), EATA had already been removed again. ATA revisions 1.x appear to have included the SCSI pass-through functionality defined by EATA.

EATA was the brainchild of DPT (Distributed Processing Technology), one of the larger SCSI HBA vendors. An overview of EATA can be found here. I don’t believe anyone besides DPT implemented EATA, but the idea behind it was quite interesting.

CHM’s oral history of Dal Allan describes how EATA was created by DPT and desired by Quantum, but WD successfully fought to remove it from the standard for cost reasons, only to implement the same idea (SCSI pass through over ATA) as ATAPI a couple of years later.

The first ATA draft from March 1989 notably already defines the IDENTIFY DRIVE command as well as READ/WRITE MULTIPLE, but there is no sign of DMA support yet. The DASP signal for letting drive 0 detect drive 1 was also already defined, although the details were refined many times since then.
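The IDENTIFY DRIVE data block is a good illustration of how little the basics changed over the years: later ATA standards still define it as 256 16-bit words, with the ASCII fields (such as the model number in words 27–46) packed two characters per word, high byte first. The following is a minimal sketch of decoding such a field; the drive name used is purely a placeholder.

```python
# Sketch: decoding an ASCII field from IDENTIFY DRIVE data.
# Per the ATA standards, words 27-46 hold a 40-character model string,
# packed two ASCII characters per 16-bit word, high byte first.

def decode_ata_string(words, first, last):
    """Decode an ASCII field spanning IDENTIFY words [first, last]."""
    chars = []
    for w in words[first:last + 1]:
        chars.append(chr((w >> 8) & 0xFF))  # high byte = first character
        chars.append(chr(w & 0xFF))         # low byte = second character
    return ''.join(chars).strip()           # fields are space-padded

# Hypothetical example: pack a model string the way a drive would report it.
model = 'QUANTUM FIREBALL'.ljust(40)        # placeholder name, space-padded
identify = [0] * 256                        # IDENTIFY returns 256 words
for i in range(20):                         # fill words 27-46
    identify[27 + i] = (ord(model[2 * i]) << 8) | ord(model[2 * i + 1])

print(decode_ata_string(identify, 27, 46))  # QUANTUM FIREBALL
```

The byte swapping within each word is a perennial source of confusion for IDENTIFY parsers, which is why the strings come out scrambled ("UQNAUT M...") when a tool gets it wrong.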

Finding the very first ATA draft is something I doubted would ever happen. Now I wonder if the revision 1.x ATA drafts might eventually turn up, too.

The list of early ATA drafts on this site has now been appropriately updated.

This entry was posted in IDE, PC history, Standards.

17 Responses to Deeper Into ATA History

  1. Richard Wells says:

    That early draft is so SCSI, it is painful. Specific bits for 5 MB/s, 10 MB/s, or 20 MB/s don’t make sense for new interface chips.

    EATA is somewhat different from ATAPI. EATA had a goal of making drives capable of 33 MB/s, which the ATA standard didn’t implement until ATA-4 in 1998. Those drives would have been very expensive in 1990. ATAPI was geared to relatively slow devices with a mass of built-in commands.

  2. Michal Necasek says:

    And yet the specific megabit speeds were something that came from ESDI, not SCSI. I’m not sure why ATA even adopted that because it made absolutely zero difference to software. Probably simply because “it was already there”.

    Yes, the hardware goals of EATA and ATAPI were very different, but the software approach (SCSI commands over ATA) is rather similar. And I do think that the lack of a common hardware/software SCSI interface was a major, major impediment to SCSI adoption.

  3. Yuhong Bao says:

    Ultra DMA/33 dates back to PIIX4 in 1997, BTW.

  4. Chris M. says:

    SCSI was never going to see widespread adoption as long as it remained expensive for no good reason. Plenty of non-PC systems adopted it in the 80s and 90s, only to turn to IDE later on due to costs.

  5. Michal Necasek says:

    Yes, but a major reason for that was the lack of standardization. SCSI was more costly even if you didn’t consider the purchase price. Basically big OEMs like Compaq made sure that IDE was easy and cheap to integrate. That never happened with SCSI, in part because the vendors liked to sell SCSI gear as a “premium” product even in cases when there was no real difference between IDE and SCSI drive models, but only in part. (In other cases of course SCSI really was the premium product, like 10k/15k RPM drives.)

    The standardization and ubiquity of IDE relegated SCSI to a niche product. Some SCSI vendors were probably just fine with that, but in the end that’s why SCSI vanished from mass market PCs.

    Anyway what I’m trying to say is that yes, some of the SCSI cost was just marketing (“no good reason” as you say), but even without that, SCSI had significantly higher integration costs due to lack of standardization. The move from SCSI to IDE in Macs, Amigas, etc. is certainly telling.

  6. Richard Wells says:

    The first Mac with IDE, the Quadra 630, shows up on the lists of the worst Macs ever. The Amiga 600 was frequently panned. The late 90s IDE drives had most of the improvements that were available to SCSI thanks to cheaper chips and memory. That left SCSI for markets like SQL where spindles for RAID and additional spindles for transaction logging were necessary. Knowing what the cheapest components available are capable of influences design. DPT’s goal of getting hard drive manufacturers to produce cheap IDE drives capable of hitting a high speed mode when attached to a DPT card was doomed to failure.

    Early SCSI had to be expensive since many of the adapters had comparatively fast processors. Offloading all the hard drive interaction to a 68000 processor leaves a lot more performance available to the system CPU. SCSI cards maintained a substantial advantage over IDE for tasks that required steady data transfers like CD writing and tape backup until RAM prices dropped enough to allow every drive a large enough cache to handle the inevitable CPU blockage delaying IDE use.

  7. Michal Necasek says:

    Yes, I remember all the dire warnings for people using ATAPI CD burners. They would typically be using Windows 9x, which was about the worst possible basis to begin with, and the conventional wisdom was “close everything you can and don’t even look at the computer until it’s done burning while you’re busy praying that it’ll work”.

    DPT’s EATA had one huge advantage which was perhaps lost on other SCSI HBA vendors but really shouldn’t have been: Thanks to the ability to use the ATA protocol, an OS did not have to have specific drivers. It was basically a fallback, similar to VGA for graphics cards. This was the Achilles’ heel of SCSI, if your OS did not have the right driver, you were completely and utterly dead in the water. A great way to prevent adoption really.

    What’s very interesting is the different drive vendors’ strategies. For example Quantum in the late 1980s and early to mid-1990s built largely identical ATA and SCSI drives. You could get a ProDrive or a Fireball or a 2.5″ Go Drive with ATA or SCSI interface, but the drive itself was the same and even the firmware was very similar. Seagate did the same in the 1980s, but after they acquired Imprimis, they had a completely separate SCSI drive line with almost zero overlap. Elite, Hawk, 1990s Barracuda, Cheetah — no ATA equivalents. Which is kind of interesting because Imprimis actually did build the same drives with SCSI/ESDI/ATA interfaces, but that all went away.

    Later on (perhaps mid-1990s) the markets really diverged, and drives designed for RAID were near exclusively SCSI, while for example SCSI laptop drives went completely extinct. And of course 10 years later the opposite happened, with SATA drives increasingly showing up in lower end RAID setups.

  8. Nils S. says:

    >“close everything you can and don’t even look at the computer until it’s done burning
    > while you’re busy praying that it’ll work”.

    Hehe I still do it like this. Not even writing on the disc before burning…

  9. Yuhong Bao says:

    And remember the days when ATA DMA could be the difference between coasters and a successful burn. The original IBM PC used the 8237 for the floppy for a similar reason.

  10. Richard Wells says:

    It wasn’t just ATAPI CD-R under Win9x that caused problems; the Mac systems with issues had IDE hard drives and SCSI CD writers. Systems where all the drives used SCSI had good results.

    Quantum did its part in showing that IDE was unusable in a business system with the production of the Bigfoot line; drives so slow that they would have been considered poor performers a decade earlier. Coupling a weak CPU with limited memory and a very slow hard drive that made virtual memory an excuse to get coffee did not result in a desirable machine.

  11. Vlad Gnatov says:

    Just to provide a counterexample: I burned hundreds if not thousands of CDs in the ’90s and lost maybe 10. I used ATAPI CD drives with Linux/*BSD plus J. Schilling’s cdrecord. It used CAM from the beginning and provided pretty good buffer underrun protection, so there was no need to be extra cautious. Heh, once I forgot about a CD-R burning in progress and started to compile Mozilla in parallel.

  12. Vlad Gnatov says:

    …nevertheless, the CD was burned successfully.

  13. Richard Wells says:

    cdrecord didn’t show up until 1996. Unless someone was an early adopter or purchased a used CD writer, the writer would have 1 MB to 4 MB of buffer. Many of the CD recording guidelines were developed for drives with 256 KB of buffer, of which only 64 KB could be used when writing audio files. Some drives even had firmware bugs that only permitted 64 KB of buffer to be used on any write. Buffer underruns are a lot more likely to happen if the system has about 1/3 of a second of buffer instead of more than 10 seconds.
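The arithmetic behind those figures is easy to check. A minimal sketch, assuming the nominal CD rate of 75 sectors per second at 1x, with 2,352-byte raw sectors for audio:

```python
# Rough buffer-depth arithmetic for CD writers: a CD spins out 75 sectors
# per second at 1x, and an audio sector is 2,352 bytes, so 1x audio is
# about 172 KB/s (the 2,048-byte data mode works out to 150 KB/s).

def buffer_seconds(buffer_kb, speed, sector_bytes=2352):
    """How long a drive's buffer lasts if the host stops feeding it."""
    rate_kb_s = speed * 75 * sector_bytes / 1024  # KB per second at `speed`x
    return buffer_kb / rate_kb_s

# 64 KB of usable buffer at 1x: about a third of a second of slack.
print(round(buffer_seconds(64, 1), 2))    # 0.37
# 4 MB of buffer at 2x: roughly twelve seconds.
print(round(buffer_seconds(4096, 2), 1))  # 11.9
```

So a mid-1990s guideline written for a 64 KB audio buffer really was dealing with about 1/3 of a second of slack, while a later 4 MB drive could ride out a ten-second stall.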

  14. Vlad Gnatov says:

    >cdrecord didn’t show up until 1996.
    I vaguely remember using cdrecord in 1995, but I might be mistaken. In general, yes,
    I’m talking about the 1995/96 time period; before that, CD-Rs were too uncommon and too
    expensive for the xUSSR, so ArVid was used instead.

    >Unless someone was an early adopter or purchased a used cd writer, the writer
    >would have 1 MB to 4 MB of buffer.
    Again, I might misremember, but the old drives with 1x writing speed had a 256K buffer and
    were indeed prone to buffer underruns; they were already obsolete in 1995, though.
    The drives with 2x writing speed had a 512K–1M buffer, and 4x drives had about 2M.

    >Buffer underruns are a lot more likely to happen if the system has about 1/3
    >of a second of buffer instead of more than 10 seconds.
    AFAIR, it was recommended to set the cdrecord buffer to at least 2x the internal
    CD drive buffer, but no more than 1/3 of RAM, because that buffer was mlock(2)’ed.

    Overall, it may be said that the old IDE (ATAPI) CD writers had issues and
    disadvantages, but practically all of them could be effectively
    mitigated in software (by the OS and a good CD recording program).

  15. Nils S. says:

    Another “funny thing” about CD writers:

    Somewhere around 2001/2002 I had my first one, a double-speed HP writer. I got it from my parents’ old computer along with some money for 50 blank CD-Rs.
    I went to “Müller” (a chain of shops in the south of Germany where you can buy music CDs, perfume, paper and pencils, etc.) and they asked me: “What do you need *50* CDs for? No one has dozens of gigabytes of personal data. It must be for piracy.”
    Even today, whenever I buy a box of blank CDs I get a strange feeling and brace myself for prying questions.

  16. rasz_pl says:

    Speaking of ATA/ATAPI: recently on Vogons I stumbled upon an ATA/ATAPI2SD emulator https://www.youtube.com/watch?v=fFdFw1K25Js (STM32 + MAX II CPLD) that implements an impressive list of IDE devices (HDD/CD/CD changer/Zip). Clearly a passion project; the first emulated CD-ROM (Mitsumi FX400E) on the list is the author’s first drive ever. So far there are only two short videos showing the functionality and no word on releasing it in any physical form :(. Maybe you can contact the author for an early review sneak peek sample.

  17. ender says:

    Hah, I remember those Müller CD-Rs, though here (in Slovenia) nobody cared how many you bought (and I’m pretty sure I was buying cakes of 100 CDs in 2002).
