After more or less accidentally coming across a BBS listing of various high-capacity floppy formatting programs, I began wondering: How much data can really be stored on a diskette in a PC floppy drive? And what’s the relationship between formatted and unformatted capacity? When I started doing the math, I realized that the problem is both simpler and more complex than I had thought. And that one megabyte is not like another.
Note: This discussion is limited to 3½” high-density floppies, by far the most common format, unless otherwise noted.
2.0 MB Unformatted Capacity
Since floppies store essentially analog signals, how is their theoretical capacity calculated? There are no addressable memory cells like those in RAM chips, so how does one arrive at 2 MB? The math is actually remarkably straightforward and has little to do with the medium and everything to do with the floppy controller (FDC) and drive.
There are several constants which determine the unformatted capacity: 80 tracks (really cylinders), 2 sides, 500 kbps, and 300 rpm.
A standard FDC reads and writes 3½” HD media at 500 kbps using MFM (Modified Frequency Modulation). In other words, every second the controller can read or write 500 kilobits of actual data.
A standard 3½” floppy drive rotates at 300 rpm, that is 5 revolutions per second (300 / 60). Given a 500 kbps data rate, the FDC can record exactly 100 (500 / 5) kbit (kilobits), or 12,500 bytes, on a single track. At 80 cylinders and 2 sides, that’s 80 * 2 * 12,500 or exactly 2,000,000 bytes (2 MB) of data.
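The same arithmetic can be written down in a few lines; this is just a sketch using the standard 3½” HD parameters quoted above:

```python
# Unformatted capacity of a 3.5" HD floppy, from first principles.
DATA_RATE = 500_000   # bits per second (MFM, HD media)
RPM = 300             # nominal drive rotational speed
CYLINDERS = 80
SIDES = 2

revs_per_sec = RPM / 60                      # 5 revolutions per second
bits_per_track = DATA_RATE / revs_per_sec    # 100,000 bits per track
bytes_per_track = bits_per_track / 8         # 12,500 bytes per track
capacity = CYLINDERS * SIDES * bytes_per_track
print(int(capacity))  # 2000000 bytes, i.e. exactly 2.0 MB unformatted
```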
1.44 MB Formatted Capacity, Or Is It?
Once a diskette is formatted, some of the bits and bytes on every track are used to store sector IDs or CRCs, and some space is left unused to give the FDC a bit of breathing space in the form of gaps (especially necessary when writing data). Exactly how much capacity is “lost” to the necessary overhead determines the formatted capacity.
The standard PC format of 3½” HD media uses 18 sectors (512-byte sectors that is) per track. That is 9,216 bytes or 9 KB of user-accessible data. At 80 tracks and two sides, that’s 80 * 2 * 9,216 or 1,474,560 bytes. If one megabyte is defined as one million bytes, that would be 1.475 MB. If one megabyte is defined as 1024 * 1024 bytes, that would be 1.406 MB.
So where does 1.44 MB come from? Well, it uses a definition of megabyte not normally used anywhere else in the industry: The capacity is 1,440 kilobytes, or 1.44 MB if one accepts this strange definition of a megabyte as 1,000 KB (1,024,000 bytes).
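A quick sketch makes the three competing megabytes explicit:

```python
# Formatted capacity of the standard 1.44 MB layout, expressed three ways.
bytes_total = 80 * 2 * 18 * 512       # tracks * sides * sectors * sector size
print(bytes_total)                    # 1474560
print(bytes_total / 1_000_000)        # 1.47456   "MB" = 10^6 bytes
print(bytes_total / (1024 * 1024))    # 1.40625   "MB" = 2^20 bytes
print(bytes_total / 1024 / 1000)      # 1.44      "MB" = 1,024,000 bytes
```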
Now this has an interesting corollary: It is commonly said that a 3½” floppy with 2.0 MB unformatted capacity has 1.44 MB formatted capacity. As explained above, those figures use different definitions of megabyte!
More, More, More!
Over the years, a number of people realized that the standard PC floppy format is quite conservative. No wonder: with 2.0 MB unformatted capacity going down to only 1.475 MB formatted (using the same definition of megabyte), about 26% of the capacity is “wasted” on all the overhead.
There are two basic methods of increasing storage capacity: Storing more tracks on a disk and reducing the format overhead. These are not mutually exclusive.
The most extreme form of squeezing more tracks on a disk is formatting 48 tpi (tracks per inch) 5¼” media as 96 tpi in high-density drives. This doubles the number of tracks on a disk and thus doubles the capacity. In my experience this method is unreliable, probably in part because it exceeds the rated capacity of the media. It’s also limited to 5¼” drives and, worst of all, the result is still not nearly as good as an actual high-density (1.2 MB standard formatted capacity) disk.
A less extreme method is adding just a few tracks, usually increasing the capacity from 80 to 82 or 84 tracks. This is not a problem for the medium because the disk is coated with magnetic substrate uniformly; if a disk can reliably hold 80 tracks, it can also hold 84. Software can typically deal with such disks without modification as well.
But there is a catch: Not every drive can move the read/write head to the 82nd or 84th track; this is a mechanical limitation. That makes this method problematic—it increases the capacity by only up to 5% at the risk of making the floppy unusable in some systems. The upshot is that formatting floppies to 82 or 84 tracks makes sense for local backups, but cannot be used for distribution media.
The last and most interesting method involves reducing the format overhead. The goal is not to squeeze more bits onto the medium than there’s officially space for, but rather recover some of the format overhead and turn it into user-accessible data. Hence media reliability is not affected at all. The biggest obstacle is the FDC, which is simultaneously too smart and not smart enough.
The most straightforward approach is simply increasing the number of (standard 512-byte) sectors per track by reducing the size of the gaps between sectors. This has the major advantage that special drivers are usually not required; software which can handle 15 or 18 sectors per track can usually handle 20 or 21 just as well.
A widely used representative of this method is Microsoft’s DMF, which uses 21 sectors per track for a total capacity of 21 * 0.5 * 2 * 80 or 1,680 KB on a standard 3½” HD diskette.
This is still relatively far from the theoretical ideal, the unformatted capacity (DMF utilized about 86% of the unformatted capacity). The problem is that there is fixed per-sector overhead (IDs, checksums) which cannot be eliminated. The way to reduce the overhead is to use bigger sectors, which have a better natural ratio of usable data to overhead. And that’s where the real fun starts.
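Taking the 2,000,000-byte unformatted capacity as the baseline, the utilization of the two formats works out as follows (a sketch, assuming only the sector count varies):

```python
# Fraction of the raw 2,000,000-byte capacity that becomes user data.
UNFORMATTED = 2_000_000

def utilization(sectors_per_track, sector_size=512, cylinders=80, sides=2):
    formatted = cylinders * sides * sectors_per_track * sector_size
    return formatted / UNFORMATTED

print(f"standard 18-sector: {utilization(18):.1%}")  # 73.7%
print(f"DMF 21-sector:      {utilization(21):.1%}")  # 86.0%
```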
There are two major problems with using larger sectors. The first is a software problem: The BIOS and operating systems usually cannot handle non-standard sector sizes, which requires special drivers and thus hassle for users. The second is a hardware problem: The FDC only supports sectors whose size is a power of two, which creates a whole host of new complications.
To get the per-sector overhead to the absolute minimum, there would have to be a single sector per track. But as explained above, the unformatted capacity of a single track of a 3½” HD floppy is 12,500 bytes, which very inconveniently falls right between the two closest possible sector sizes, 8 KB and 16 KB. 8 KB is less than even the conservative standard 1.44 MB format (which provides 9 KB per track), and 16 KB cannot possibly fit onto a single track.
Most people have never heard of the FORMAT1968/READ1968 utilities written in 1992 by Oliver Fromme, better known as the author of the popular HD-Copy utility. The FORMAT1968 utility claimed to squeeze 1,968 KB of data onto a standard 3½” HD floppy.
The FORMAT1968 utility used three 4 KB sectors per track (12 KB per track) and 82 tracks (really cylinders), which gave 12 * 2 * 82 or 1,968 KB.
The reason why this utility remained unknown is that it didn’t work on many systems. The problem is that not all drives run at exactly 300 rpm; even manufacturer specifications typically allow 1-2% slower or faster speed. And if a drive rotates faster, the capacity goes down—because the FDC has less time to read or write the data on each track.
A drive that spins 1.5% faster (304.5 rpm) could only store about 98,522 bits or 12,315 bytes per track. That’s not enough to store 3 sectors holding 12,288 bytes of data due to the required per-sector overhead (which is at least about 30 bytes per sector plus required gaps). A similar problem would occur if the FDC processed data at a rate slightly slower than 500 kbps. Of course if the drive rotated slightly slower, there would be more room on the disk… but that cannot be assumed to be the case.
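The sensitivity to rotational speed is easy to model. A sketch, assuming the nominal 500 kbps data rate and treating drive speed as the only variable:

```python
# Raw bytes available on one track as a function of drive speed.
def bytes_per_track(rpm, data_rate=500_000):
    # bits per revolution = data rate / revolutions per second
    return data_rate / (rpm / 60) / 8

print(int(bytes_per_track(300.0)))   # 12500 bytes at nominal speed
print(int(bytes_per_track(304.5)))   # 12315 bytes at 1.5% fast -- not enough
                                     # for three 4 KB sectors plus overhead
```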
Using bigger (8 KB) sectors is out of the question (one is not enough, two can’t fit), and using smaller sectors (2 KB or 1 KB) only makes the sector overhead worse.
Mix and Match
The only way to reduce the sector overhead and still be able to actually read and write the disks on all systems is to use a mix of sector sizes. One 8 KB sector may be used because that provides the lowest relative overhead. The question is then how to utilize most of the remaining slightly less than 4 KB of available space.
The approach chosen by XDF uses one sector each in 8 KB, 2 KB, 1 KB, and 512 byte sizes. That adds up to 11.5 KB per track or 1,840 KB (11.5 * 2 * 80) per disk. That is about 94.2% of the unformatted capacity, rather better than the 73.7% utilization of the standard 1.44 MB format. For reasons noted above, XDF only used the standard 80 tracks per side. With 82 tracks, it would have gotten up to 1,886 KB of user-accessible storage.
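The XDF per-track sum, and the 82-track variant mentioned above, in a few lines:

```python
# XDF packs four power-of-two sectors onto each track.
XDF_SECTORS = [8192, 2048, 1024, 512]   # one sector of each size per track

per_track = sum(XDF_SECTORS)            # 11,776 bytes = 11.5 KB
for cylinders in (80, 82):
    kb = per_track * 2 * cylinders // 1024
    print(cylinders, kb, "KB")          # 80 -> 1840 KB, 82 -> 1886 KB
```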
1,886 KB is in fact the capacity provided by the 2M utility by Ciriaco García de Celis (using 82 tracks/cylinders), even though that utility uses a slightly different physical format. The major difficulty with this approach is convincing standard FDCs to format such disks. The solution was previously described in the XDF article (formatting with 128-byte sectors but supplying sector IDs indicating larger sectors).
Goodbye, Sectors! Well, Almost…
The 2MGUI (GUI in this case stands for Guinness, not Graphical User Interface) utility (by the same author as 2M) went further and reduced the sector overhead to the barest minimum: One sector per track. Obviously, since the FDC does not support arbitrary sector lengths, some trickery must have been involved.
Reading arbitrary-length sectors is in fact not particularly difficult. As long as the sector length stored in the sector ID on the medium is longer than the actual length, the DMA controller can be programmed for the desired length and all requested bytes will be transferred. The read command will presumably fail (because it won’t find a valid CRC), but that can be solved by manually calculating a checksum and storing it in the sector’s data field.
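The checksum in question is the CCITT 16-bit CRC (polynomial x¹⁶ + x¹² + x⁵ + 1, preset to all ones) used by the FDC. A minimal bit-by-bit sketch of the computation a driver could perform in software; note that the real controller also folds the sync and address-mark bytes into its CRC, which a driver doing this for real would have to account for:

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """CRC-16/CCITT (polynomial 0x1021, preset 0xFFFF), computed bit by bit."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            # Shift left; XOR in the polynomial when the top bit falls out.
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

print(hex(crc16_ccitt(b'123456789')))  # 0x29b1, the standard check value
```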
Writing such sectors is considerably more difficult. 2MGUI formats each track with one nominally 128-byte sector whose ID indicates a length of 16 KB (for HD floppies) or 32 KB (for ED media). That’s the easy part, and it explains how a single over-size sector can take up an entire track.
Writing the sector data is tricky because even if the FDC receives less than a full sector’s worth of data, it will keep writing zeros until the end of the sector and will calculate and write a CRC. Since writing a full 16 KB sector would overwrite the sector’s header and the first few thousand bytes of data at the beginning of the track, 2MGUI resets the FDC before that can happen.
2MGUI can operate in non-DMA mode where the data is fed to the FDC directly byte by byte. This is relatively straightforward because the FDC is reset after writing the desired number of bytes. In DMA mode, the 8254 PIT (Programmable Interval Timer) is used to precisely measure how long it takes to write the desired amount of data. Once the time elapses, the FDC is reset.
Regardless of whether non-DMA or DMA mode is used, interrupts must be disabled for the entire duration of track write (at least 200 milliseconds), in the former case to avoid underruns and in the latter case to avoid overwriting the beginning of the sector/track.
This is why 2MGUI remained a proof-of-concept utility and why its documentation mentions that it is not suitable for use in multi-tasking environments. However, all the negatives aside, it is highly likely that 2MGUI truly reaches the limit of floppy capacity achievable on PC hardware.
As to why 2MGUI works at all, understanding the FDC operation provides the answer. A read command starts delivering data to the host as soon as it finds the requested sector ID, which 2MGUI does provide. The fact that the end of the sector is missing does not in any way affect the data stored before the cut-off. The same is true for writing sectors—the fact that the write command is forcibly terminated before it completes does not in any way impact the data written before the command was aborted.
The maximum capacity achievable with 2MGUI cannot be generally stated because it depends on the hardware used. It can be over 2,000,000 bytes when using 82 tracks and slower-rotating drives.
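A rough upper bound illustrates how 82 tracks and a slightly slow drive push past 2,000,000 bytes. This sketch ignores the small fixed per-track overhead (gaps, the single sector ID, sync bytes), and the 297 rpm figure is simply an assumed drive running 1% slow:

```python
# Approximate 2MGUI-style capacity: one giant sector per track, so
# nearly all of the raw track length becomes user data.
def raw_capacity(rpm, cylinders, data_rate=500_000):
    bytes_per_track = data_rate / (rpm / 60) / 8
    return 2 * cylinders * bytes_per_track

print(int(raw_capacity(300, 80)))  # 2000000 -- the nominal unformatted figure
print(int(raw_capacity(297, 82)))  # 2070707 -- 82 tracks, drive 1% slow
```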
It is an interesting fact that any 2MGUI disk (at least one with a sufficient gap) should be readable in any system, but it may not be writable. The reason for this is that the FDC data separator locks onto the data rate actually delivered by the drive, and can thus handle somewhat more (or less) than the nominal 500 kbps. On the other hand, when writing data, the nominal data rate is used, and systems with faster-spinning drives simply won’t be able to write as many bytes per track.
All standard data formats, and even extended data formats like 2M or XDF, are designed with enough safety margin to be readable and writable on all systems. Their purpose was always to increase storage capacity, not implement a new form of copy protection.
1968 by Christoph Hochstätter, author of FDFORMAT? Didn’t you mean Oliver Fromme, author of QPV?
And the “GUI” in 2MGUI meant Guinness, but indeed it was a joke, as the author was a member of an association called “Grupo Universitario de Informática”.
I wouldn’t trust these non-standard floppy drive formats. During the development of PC DOS 7.0, early versions of XDF actually killed some floppy drives. Changes were made before shipping, but that certainly left a bad impression, to say the least. Of course IBM was intent on using XDF because one less diskette was used than if it hadn’t been; though personally I would’ve dropped all of the Windows 3.x stuff instead. Suffice it to say I never used XDF on any system I owned.
Yes, I meant Oliver Fromme, author of HD-Copy. I don’t know how I managed to mix up those two (must have looked up something about FDFORMAT).
There’s absolutely nothing bad that XDF does. It does not run the hardware or media beyond the spec; all it does is reduce the format overhead.
However, if IBM tested the use of more than 80 tracks (which XDF does not use, but perhaps tried to use) then I certainly believe that some hardware could have been damaged.
I don’t remember the details as I didn’t work on it but earlier versions of XDF definitely killed several floppy drives. IBM worked with the XDF developer who I think was contracted by IBM.
Yes, Roger Ivey. As I said, in the form released by IBM I just don’t see how XDF could possibly harm either the hardware or the disks. But if IBM was experimenting with 82- or 84-track formats then all bets are off.
What I think happened is that IBM tried using more than 80 tracks, killed a few drives in the process, and concluded that no, more than 80 tracks can not be used for distribution media.
It is possible to push a floppy controller into a mode in which disks cannot be accessed without forcing a reset. Indeed, the Linux fdutils documentation lists many cases where manual resets are necessary. fdutils is an interesting read as it lists lots of problems with odd formats. Are there any seemingly excess controller resets in XDF drivers?
PC DOS 7 has a list of controllers that cannot handle XDF disks, including the Compaticard IV and GSI controllers and some other cards using the GSI BIOS. GSI controllers were commonly bundled with 2.88 MB drives for upgrades. An annoying tradeoff: either the ability to install IBM software off XDF diskettes, or the use of 2.88 MB or floptical drives.
Buy proper quad-density media and 96 tpi floppies work great. About a year ago I worked on reading disks from a Sharp system (720 kB 5.25″) and every disk was fully readable. Not bad for 30-year-old disks.
I wonder if those GSI controllers were not truly NEC 765 compatible. As far as I know, most FDC vendors (National Semiconductor, Intel) licensed the NEC 765 core and the controllers were highly compatible. But if someone developed their own workalike, they may not have bothered implementing the features not commonly used in PCs, and then they wouldn’t work with XDF.
You’re right about quad-density floppies, those work just fine at 96tpi (I used them too, they always worked perfectly). Unfortunately most people wanted to get more out of 48tpi DD media, and that was rather less reliable.
There’s one other option, though I’m not sure if the FDC would support it: varying the speed of the disk depending on which track you are reading. Right now the disks spin at a constant speed, so each track has the same amount of data. That means the further out from the center of the disk you go, the more has to be skipped to keep the same amount of data (or you can view the data as having been stretched out, take your pick).
So the idea would be to adjust the speed of the disk so that the area moving under the heads is always moving at the same speed, no matter where on the disk it is.
Of course, that being said, a number of older disk formats used exactly this (the C64 comes to mind), and the problem is that it really makes things unreliable: a high probability that a disk written in one drive cannot be read on any other.
No, the PC FDC does not support that. It would be nice if it did as it would much better utilize the storage medium.
CD-ROMs of course use CLV (Constant Linear Velocity) by varying the rotational speed of the medium, and all modern hard disks store more data on the outer (longer) tracks without changing the rotational speed. But with PC floppy drives that just isn’t an option because neither the FDC data rate nor the drive speed can be adjusted, and the data rate is in fact tied to the medium type (the reason why HD floppies cannot be easily formatted as DD).
I should add that only the 5.25″ C64 drives used the recording method with variable number of sectors per track. The 3.5″ drives (1581) wrote fairly standard 800 KB MFM-coded disks that can be read on PCs.
I think IBM was stuck having to accept drives with no speed changes because of limitations in American microdisk manufacture. There was a perception that by 1987 American manufactured 3.5″ diskettes had enough quality to work reasonably well with most drives but not those used in Apple Macintoshes. If that perception matched reality, then the “Buy American” provisions in government contracts would have killed bids including variable speed drives.
Source: USITC Publication 2170 March 1989 page A-81. Google books has a PDF version of this. Other parts are interesting too despite widespread redactions.
I have my doubts as to the overall benefits of variable speed drives. Squeezing an extra 10% to 20% capacity on a disk with flexible motors increased cost a lot. IBM had jumped early onto the idea of cheap hard drives. Better value for the money plus simpler software and no specialty chip design work.
The motors in floppy disk drives are already speed-regulated (as I understand it, needed for precise rotational speed); the change in hardware would probably be very small. It would make the software more complex.
But probably it was a case of IBM already having it; I don’t know if the 5¼″ disks are physically compatible with other IBM systems.
Many floppy drives had fixed speed motors which is cheap. Some went with dual speed motors that could be jumpered and were more expensive. The drives with software control of the motor were even more expensive. IIRC, the Apple Mac Sony drive was about $600 while the budget drives from other manufacturers that couldn’t change drive speed were about $150 in 1986. IBM volume and no Apple markup would lead me to expect prices of $300 for 3.5″ drive with multiple rotation rates. The increased storage per disk would save money for users that buy about 500 floppy disks per drive.
Calculation based on 720kB versus 800kB with 3.5″ DSDD priced at $3 per disk.
Have to agree with that… variable-speed drives just don’t sound like a good value for the money.
Going back to common use in 1987, the extra 10% in storage on a floppy might have been tempting when working with larger files, back when the floppy was the primary method of transferring data between non-networked computers. I think most of those machines didn’t have hard drives, so the floppy was it. How large were PageMaker’s files?
A newspaper advertisement taking up a quarter page, done in PageMaker, could easily exceed 1 MB stored on disk. The 400 kB and 800 kB Mac formats rapidly became insignificant. Publishing companies used hard disks with SyQuest or Bernoulli drives to take all the files to a printer. 1989 saw the introduction of 1.4 MB drives which still handled 400 kB and 800 kB disks. There were about another 5 years of maintaining support for earlier formats before Apple changed to stock PC drives able to read 1.4 MB Mac plus some PC formats but not older Mac formats, followed by Apple famously dropping floppies altogether.
Sony built their 3.5″ drive to be able to change rotation rate. In 1983, the choices of 3.5″ drive boiled down to Sony or Sony. Apple used 5 different zones with changes in rotation. Apple had 2 slightly different implementations which differed from the 6 zoned Sony format. Cost Apple more money later to retain compatibility.
3.5″ Sony drives (single-sided, double-density) were offered with multiple capacities:
HP 9121: 270 kB
Sony: 160 kB and 320 kB
Atari ST and IBM PC style: 360 kB. Note the IBM JX 360 kB 3.5″ disk is double-sided but only uses 40 tracks.
Cheaper fixed-rotation drives showed up in 1985 or 1986, which IBM used as double-sided 720 kB drives in the Convertible and for the PC/XT/AT and the JX once IBM permitted use of all 80 tracks. IBM could certainly have gone to a fancier drive and diskette controller for the PS/2 line. I doubt that would have translated into sales worth the increased cost.
The problem is that 10% just isn’t enough to make a difference. Most people’s files would be either small enough to fit on the smaller floppy or too big to fit on a 10% larger one. If the capacity increase were 50% or 100%, that might be a different story.
Not dual rotational speed, but dual data rate formats are possible, and indeed Ciriaco’s formatter implemented this idea. The goal was to format “double-density” media as high-(quad-)density on the longer, exterior tracks.
I love this article, thank you so much! It was a great source of nostalgia-info for me! 🙂
In the early 1990s, as a student, I was obliged to save money wherever possible, and so I used the READ1968 format widely! What a bad idea from today’s perspective!!
Now I am in a severe state of “nostalgic moods” and try to recover some old gems from the past, but I cannot do this because of that bad old decision! 😮
Hopefully I am not offending anyone if I ask: Is anybody out there who still has a working copy of the READ1968 driver for DOS?
I don’t know where else to go. My research on many, many DOS archives was not successful. I only found Oliver Fromme’s HD-Copy on Simtel mirrors. This copy program can format up to 1.72 MB, but cannot help me in reading the 1,968 KB disks.
I would be thankful for any help or hint to a download location!
After I have found the software and made it run in DOSBox, the hunt for a 100% spec-compliant floppy will be the next milestone! As I read here in the article, it will be as tough as finding the software itself… ;-)))
Thank you all in advance!!
It was a bit tricky to track this down. Several sites have an archive called 144TO196.ZIP which contains FORM1968 but for some reason READ1968 is missing. But I think this archive has what you want: http://files.programmersheaven.com/utils/file/144TO2_0.ZIP That’s READ1968.COM version 0.98, I hope it’s good enough.
Incidentally, I highly doubt you’ll get anywhere with DOSBox. You’ll probably need a real DOS system for this, though Linux might actually work too (not sure but I know it supports a lot of odd floppy formats).
The drive should not matter too much. If you’re only trying to read 1968-formatted disks, more or less any drive ought to work because everything will likely be within tolerance. It’s writing such floppies that needs the right kind of drive that doesn’t rotate too fast.
Wow, you are ingenious!!! 🙂 I would never have thought of finding it in other packages like the one you posted!! Thank you very much!
Ok, the doubts about DOSBox are valid… maybe I’d better dig some old hardware out of the attic! Hope it will still be working! But this approach might even give me a better chance of having a “compatible” floppy drive.
Unless you are annoyed by such postings, I will tell you if I was successful! 🙂 🙂 🙂
I don’t mind at all, I’d love to hear if you were successful. The floppies are likely to be the weak point.
I want to know the capacity of a floppy disk.
Suppose a disk has 5,268 tracks, 256 sectors per track, and 512 bytes per sector; what is the capacity of the disk?
That doesn’t sound like any floppy disk I know.
>variable-speed drives just don’t sound like a good
From an electronics perspective, _every floppy drive ever_ is already variable-speed. In order to provide accurate speed, every drive needs feedback: the motor controller reads data from a speed encoder (coil/Hall effect sensor). Implementing arbitrary variable speed is trivial; all you need is one more pin with an external reference frequency.
Older floppies used DC motors with, for example, an LM2917 frequency-to-voltage converter, or a reference clock generator (NE555) plus comparator (LM393); newer ones had a custom BLDC controller plus comparator. There are even mods for the Amiga to slow an HD drive to 150 rpm (adding another sensor, doubling the reference frequency, etc.).
Upselling this as a super duper special feature was indeed Sony’s marketing genius.
Moving the reference frequency into the floppy controller would actually make floppy drives cheaper (no need for the NE555 in older designs).
Another problem with using more tracks is that the extra tracks actually push the recording density over the magnetic media spec, since there is less disk material per track near the centre of the disk.
Commodore 8-bit 5.25″ drives varied the bit rate in the controller, so the drive motor is the same as in a drive with equal capacity on every track.
The first 5.25″ diskettes and drives were only rated for 35 tracks, but soon both diskettes and drives moved to 40 tracks. If you look at pictures of some really old diskettes, you can see that the hole for the read/write head is actually smaller (IIRC).
However, Commodore designed their first 8-bit 5.25″ drive at a time when 35-track diskettes were still common. Somehow this hardware seems to have survived every evolutionary step at least up to the 1541. Even though the drive mechanics easily supported 40 tracks, the electronics sadly didn’t support a bit rate that could use tracks 36–40 without using the magnetic media out of spec.
The Amiga 1000 was released in spring 1985 and had a single-speed 80-track DD 3.5″ disk drive. I don’t know who the manufacturer was (probably not Sony), but that shows that such drives were manufactured at that time.
I think it’s strange that old CD-ROM drives actually rotated the media at different speeds when reading different parts of the disc. It must have cost a lot in terms of seek time.
P.S. It’s interesting that nowadays there is software that can read Amiga 880 KB 80-track DD disks with PC hardware. For many years it was believed to be impossible. Then someone came up with the idea of using two drives: start reading a gigantic “sector” from one drive and switch drives while the controller is reading, thus tricking the controller into reading an Amiga-formatted disk, which doesn’t have a PC FDC-compatible header. Then someone found some way to trick a controller into starting to read without having to use another disk. I’m not sure how that trick works.
While we’re at it: some 20–25 years ago, someone made a program that could read Commodore 8-bit style disks (specifically 1541, but probably others too) on an Amiga equipped with a 5.25″ drive (non-standard; almost every Amiga used 3.5″ drives) that was trimmed down from 300 to 280 rpm (even more non-standard; usually only adjustable on those old belt-driven drives).
The Amiga controller could switch between MFM (used in most disk formats) and GCR (used by Apple and 8-bit Commodore). You can tell that the Amiga has some history when early versions of the hardware reference manual (IIRC) talk about using GCR to read Apple II disks.
Sorry for going semi off-topic but still a lot related to the topic 🙂
Fun stuff. Thanks for writing it up. I wonder how that Amiga disk read trick works.
This reminds me that I may have an idea how to copy Lotus copy-protected disks of the late 1980s (that is, how to copy them before they’re personalized and the copy protection removed). But it’s so simple that I’m reluctant to believe no one thought of it 20+ years ago 🙂
Someone probably did, there were several specialized disk copying programs back in the 1980s like CopyWrite and Disk Explorer by Quaid Software and Copy II PC by Central Point Software. There was also a Copy II PC option board for the really nasty copy protection.
Anything with additional hardware is a completely different kettle of fish. I guess I should try to track down Copy II PC and/or the Quaid Software products. Might be tricky.
Optical drives could be made to use Constant Angular Velocity; see DVD-RAM for an example. The improvement to seeks didn’t result in sales. Spiral-tracked floppy drives like the Mitsumi QuickDisk had the same problems as CD-ROMs, but they were only relatively cheap until production volume drove down prices on regular floppy drives.
For Selimreza: The maximum theoretical capacity of a floppy drive using a variation of standard floppy controllers would be about 16 MB. That is 256 tracks, each with 64 kB (the limit of DMA). There was an option to have more than 256 tracks, but stepping between 256-track regions looks prone to issues. As it is, 256 tracks would require about 400 tpi on a 3.5″ disk, and that is narrow enough to probably need servo tracks and the use of hard drive style interfaces. The 1.8″ 1.44 MB drive proposed in the mid-90s must have had about 300 tpi, but information about it is practically nonexistent.
In short, the ED (2.88MB) floppy drive is close to the limits of what a PC based floppy controller can be used with.
A convoluted story, but some of these tricky formats can ruin some drives if they are not made to handle them. Somewhere I read that Toshiba made drives that could do the DMF format, and after playing around with FDFORMAT I can attest that it’s true. Further, up until Windows Millennium, one could make one’s own set of 95 or 98 installation floppies by copying the CAB files over to those exact-sized floppies, and it WORKED. MS didn’t sell floppy sets for 98 because you could always roll your own at home. The exact parameters are 84 tracks and 21 sectors/track, 32 KB cluster size, one root directory sector, and two FAT sectors, for 1,802,240 bytes.
fdformat a: c64 d1 t:84 n:21 h:2 = 1,802,240 free bytes. The 98SE 1,760 KB CAB file size is an exact match to the last byte. FDFORMAT will require the FDREAD support program on some systems as well.
>The math is actually remarkably straightforward and has little to do with the medium
Actually, the medium plays quite a significant role (though not quite as important as with 5.25″ media).
Higher-density floppies use a higher-coercivity medium than lower-density floppies, which requires a stronger magnet to write but allows the written magnetic bits to take up less space on the disk. Attempting to format a lower-density disk at a density too much higher can succeed initially, but lead to the disk gradually erasing itself, as the strong magnetisation applied to the bits by the drive operating in higher-density mode starts to affect the magnetisation of the adjacent bits. However, the difference in coercivity is smaller between different densities of 3.5″ disk (for instance, double-density 3.5″ disks use a 665-oersted medium, 3.5″ HD disks a 720-oersted one, and ED media 900 oersteds) than between different densities of 5.25″ disk (300 oersteds, the same as that used for all 8-inch floppies, for SD and DD; 600 for HD), so this problem is considerably less severe for 3.5″ floppies than for 5.25″ floppies.
>No, the PC FDC does not support that. It would be nice if it did as it would much better utilize the storage medium.
>CD-ROMs of course use CLV (Constant Linear Velocity) by varying the rotational speed of the medium, and all modern hard disks store more data on the outer (longer) tracks without changing the rotational speed. But with PC floppy drives that just isn’t an option because neither the FDC data rate nor the drive speed can be adjusted, and the data rate is in fact tied to the medium type (the reason why HD floppies cannot be easily formatted as DD).
Couldn’t one indirectly get a variable-speed drive by varying the voltage (or would it be the current?) fed to the drive motor?
You might be able to get a CLV drive like that, but with standard PC hardware there’s no way to control the drive motor beyond a simple on/off switch. The drive interface does not provide any more control.
When I wrote that the math has little to do with the medium, I didn’t mean to imply that media of various densities can be freely intermixed; I simply meant that given (say) a HD drive and a HD medium, the medium does not influence the capacity calculation. The drive (rotational speed) and FDC do. The medium only plays a role when the recording is marginal and better vs. worse media makes the difference between readable and unreadable disks.
From what I recall, the additional problem with 5.25″ media was that DD drives wrote wider tracks than HD drives, and that further complicated interchange, beyond the significant difference in coercivity.
My experience with 3.5″ media is that formatting a DD disk as HD works poorly, but formatting a HD disk as DD works well.
The track width is really a question of 48 vs. 96 tpi, or 35/40 vs. 80 tracks.
Specifically on IBM-compatible PCs there were almost no 80-track DD 5.25″ drives, so HD/DD and 40/80 tracks get intermixed.
/Captain Obvious 🙂
>From what I recall, the additional problem with 5.25″ media was that DD drives wrote wider tracks than HD drives, and that further complicated interchange, beyond the significant difference in coercivity.
Score a second point for the FORMAT /4 switch…