A few weeks ago, an interesting question cropped up: How fast is a PS/2 keyboard? That is to say, how quickly can it send scan codes (bytes) to the keyboard controller?
One might also ask, does it really matter? Sure enough, it does. As it turns out, the Borland Turbo Pascal 6.0 run-time, and probably a few related versions, handle keyboard input in a rather unorthodox way. The run-time installs its own INT 9/IRQ 1 handler (keyboard interrupt) which reads port 60h (keyboard data) and then chains to the original INT 9 handler… which reads port 60h again, expecting to read the same value.
That is a completely crazy approach, unless there is a solid guarantee that the keyboard can’t send a new byte of data before port 60h is read the second time. The two reads are done more or less back to back, with interrupts disabled, so not much time can elapse between the two. But there is still some window during which the keyboard might send further data. So, how quickly can a keyboard do that?
The theoretical answer lies in the IBM PS/2 Technical Reference and other similar manuals. The PS/2 keyboard communicates using a fairly straightforward serial protocol, with separate clock and data lines. At least on the keyboard controller side, the protocol is implemented in software (that is, microcontroller ROM).
The protocol uses one start bit, 8 data bits, a parity bit, and a stop bit. That is 11 bits total. If we consider the best case (or is it worst case?) scenario, the keyboard controller is infinitely fast and only the time to transfer those eleven bits matters. The PS/2 Keyboard and Auxiliary Device Controller reference for an unknown reason only specifies the timings for auxiliary devices, but PS/2 keyboards should behave the same.
IBM gives the CLK inactive period as 30-50 μs, and the CLK active period likewise as 30-50 μs. The time to transfer one bit is then 60-100 μs. That is a bit rate of roughly 10-16.67 kbit/s, which at 11 bits per byte translates to about 900 to 1,500 bytes per second. In other words, it takes 660 to 1,100 μs to transfer one byte (scan code) from the keyboard.
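The arithmetic above can be sketched in a few lines, starting from the documented CLK half-periods:

```python
# Sketch of the timing arithmetic above, using IBM's documented
# CLK active/inactive periods of 30-50 microseconds each.
BITS_PER_BYTE = 11  # start + 8 data + parity + stop

def byte_time_us(clk_half_period_us):
    """Time to transfer one byte, given one CLK half-period in microseconds."""
    bit_time = 2 * clk_half_period_us   # inactive period + active period
    return bit_time * BITS_PER_BYTE

print(byte_time_us(30))  # fastest allowed clock: 660
print(byte_time_us(50))  # slowest allowed clock: 1100
```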
The absolute best/worst case would then be 660 μs; that particular clock starts counting when the host reads the keyboard data from port 60h the first time. To put it differently, after reading from port 60h once, software has at least 660 μs to read from port 60h again without worrying that a new byte might have arrived. In reality the time is likely longer because the keyboard controller is not infinitely fast and the keyboard is probably not communicating at the maximum allowed speed.
In CPU terms, 660 μs (well over half a millisecond) is a long time and lots of instructions executed, even on a very slow PC. The only scenario where the Borland keyboard logic might get upset would be a long NMI or SMI blocking execution between the two port accesses. But a system that can just “lose” half a millisecond or more of CPU time arguably has serious issues already.
Note: XT keyboards are not considered here. Those use a somewhat different protocol, with different timings.
To determine whether the theory has any bearing on reality, I set out to measure how fast an actual PS/2 keyboard sends data. The host cannot measure this precisely, because it does not “see” the bits on the keyboard wire, but it is possible to estimate the speed well enough.
The method is to measure how fast keyboard interrupts can occur. The limiting factor would typically be the human pressing the keys. Instead of asking the user to randomly bang the keys and hope they’re pressed in quick enough succession, it is much better to press keys that generate extended scan codes in Scan Set 2. That is, two-byte scan codes with an E0h prefix and a data byte. These sequences are generated by the keyboard in response to a single press and thus represent the true top speed at which the keyboard sends data.
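The analysis side of this method can be illustrated offline. A minimal sketch, assuming the interrupt handler has already logged (timestamp, scan code) pairs; the sample log below is made up for illustration:

```python
# Offline sketch of the measurement idea: given (timestamp_us, scancode)
# pairs logged by a keyboard interrupt handler, find the delay between
# each E0h prefix and the byte that follows it.
def e0_delays(log):
    delays = []
    for (t1, b1), (t2, b2) in zip(log, log[1:]):
        if b1 == 0xE0:
            delays.append(t2 - t1)
    return delays

# Hypothetical log: Right Ctrl press (E0 1D) and release (E0 9D),
# as seen by the host after Scan Set 2 -> Scan Set 1 translation.
log = [(0, 0xE0), (1000, 0x1D), (150000, 0xE0), (152000, 0x9D)]
print(e0_delays(log))  # [1000, 2000]
```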
Measurements were taken on an IBM ThinkPad 760XL, using its built-in keyboard. The system in question has a 166 MHz Pentium MMX processor; the TSC was used for maximum accuracy.
The initial results were surprising. It took about one millisecond for the byte after the E0h prefix to arrive when pressing a key… but two milliseconds when releasing the key. What?!
On closer look, that is perfectly logical. The keyboard by default uses Scan Set 2 (AT style), which the keyboard controller translates to Scan Set 1 (XT style). For example for the Right Ctrl key, the host sees an E0 1D sequence when the key is pressed, and E0 9D when the key is released; that’s Scan Set 1. But the keyboard in fact sends data in Scan Set 2 format, and the Right Ctrl key press sequence is E0 14, whereas the key release sequence is E0 F0 14. That is, the key release sequence consists of three bytes instead of two, and for the host to see one byte after the E0 prefix when releasing the key, the keyboard has to send two bytes (the conversion is done by the keyboard controller). Therefore, the delay between the E0 prefix and the next byte seen by the host when releasing a key is about twice as long as the delay for a key press.
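The translation logic described above can be sketched as follows. This is not the real 8042 firmware, just a minimal illustration covering the Right Ctrl case; the real controller uses a full translation table, of which TABLE here is a one-entry excerpt:

```python
# Minimal sketch of 8042-style Scan Set 2 -> Scan Set 1 translation.
TABLE = {0x14: 0x1D}   # Set 2 Ctrl make code -> Set 1 Ctrl make code

def translate(set2_bytes):
    out = []
    break_next = False
    for b in set2_bytes:
        if b == 0xE0:          # extended prefix passes through unchanged
            out.append(b)
        elif b == 0xF0:        # break prefix: set bit 7 of the next code
            break_next = True
        else:
            code = TABLE[b]
            out.append(code | 0x80 if break_next else code)
            break_next = False
    return out

# Right Ctrl press: two bytes in, two bytes out.
print([hex(x) for x in translate([0xE0, 0x14])])        # ['0xe0', '0x1d']
# Right Ctrl release: three bytes in, only two bytes out.
print([hex(x) for x in translate([0xE0, 0xF0, 0x14])])  # ['0xe0', '0x9d']
```

The press and release sequences produce the same number of bytes on the host side, but the keyboard has to transmit one extra byte for the release, which is why the measured E0-to-next-byte delay doubles.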
Now the results make much more sense: It takes about a millisecond for the keyboard to send a byte, which is less than the longest theoretical time of 1.1 ms but more than the shortest theoretical time of 0.66 ms. In other words, well within the theoretical range. So maybe the Borland run-time is not completely crazy after all.
What About USB?
The method used for testing PS/2 keyboard speed can also be applied to USB keyboards with “legacy” support, although it will obviously measure very different things. In a test system with an Intel DZ68DB board and Core i7-3770 CPU, the elapsed time between the E0h prefix and the next byte was close to 16 milliseconds. More or less identical results were obtained on an Intel DX48BT2 board with a Core 2 Extreme QX9770 CPU.
It should be noted that there are several ways of providing PS/2 keyboard (and mouse) compatibility with USB keyboards (and mice), and I did not investigate which one the tested Intel boards use. What’s notable is that compared to a PS/2 keyboard, the delays are much longer (16 ms vs. 1 or 2 ms) and that there is no difference between key presses and key releases, because there is no Scan Set 2 to Scan Set 1 translation behind the scenes.
As to why 16 milliseconds—that appears to be an artifact of the particular (common) BIOS USB keyboard support implementation. All keyboard-related processing is done with a 16ms interval, including conversion of a USB key event into two (or more) scan codes seen by the keyboard interrupt handler.
The upshot is that a USB keyboard in legacy mode has a much less precise response than a USB keyboard with native drivers; in this particular setup, it will take up to 16 milliseconds for a key press or release to be reported, or longer if the key sends a multi-byte response.
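As a back-of-the-envelope sketch of that worst case, assuming (as measured above) a fixed 16 ms service interval with one scan code delivered per interval:

```python
# Rough model of the legacy USB worst case described above: a fixed
# 16 ms service interval, one scan code byte delivered per interval.
# The numbers are assumptions based on the measurements in the article.
INTERVAL_MS = 16

def worst_case_latency_ms(scancode_bytes):
    # Up to one full interval before the key event is noticed,
    # plus one interval per additional byte of the sequence.
    return INTERVAL_MS * scancode_bytes

print(worst_case_latency_ms(1))  # plain key: 16
print(worst_case_latency_ms(2))  # E0-prefixed key: 32
```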
I remember you could send an “error” (byte 0xFE I think) back to the controller, which actually caused it to pretend that nothing happened, and reassert the IRQ. Unless there is some buffer involved though (which I do not remember), that of course won’t change the possibility of losing a scan code if the keyboard sent one before the OS rereads the pending one.
Seems like a “start” got replaced by a “stop” by accident here:
“The protocol uses one stop bit, 8 data bits, a parity bit, and a stop bit.”
The 0FEh command is ‘resend’, that’s probably what you mean. The ‘resend’ command should actually cause the keyboard to resend the last byte, which might look like the interrupt was simply reasserted, but the keyboard really should re-transmit the byte unless the keyboard controller is cheating.
There is a buffer involved, the DBB (Data Bus Buffer) register in the controller. The controller won’t start receiving the next byte from the keyboard until the previous byte is read. That’s why Borland’s method works, because the clock won’t start ticking until the first scan code is actually read by the interrupt handler (and the second read should then follow very very quickly).
Oops, thanks, fixed.
‘At least on the keyboard controller side, the protocol is implemented in software (that is, microcontroller ROM).’
On the 3270PC and AT keyboards (which use the same protocol) it’s implemented in the 8048 microcontroller ROM at the keyboard end. It’s probably the same for PS/2 keyboards, but the microcontroller in those is a 6805 and as far as I’m aware no-one’s dumped it.
The keyboard controller doesn’t always implement the protocol in software. There’s also JETkey, the self-proclaimed “fastest keyboard BIOS” ASIC. It might be interesting to benchmark one of those if you can get your hands on it.
Unfortunately, it probably isn’t a drop-in substitute for the 8042 in your test system.
The Borland peculiarity was also the topic of these discussions a decade ago:
And it was not only Turbo Pascal 6.
Yes, I’m aware that there are keyboard controllers that are not 8042 derivatives (e.g. Holtek HT6542B) or not even microcontrollers (like that JETkey chip). I don’t know how exactly they behave.
What was discussed back then were several separate problems, though the first one of them was the Borland run-time issue with reading the scan code twice that you described at the time.
The Windows 1.0 grabbers (CGA.GRB etc) appear to use the same technique to check for the user pressing PrintScreen: Hook INT9 and read port 60h to check for the PrintScreen keystroke. If it’s a different keystroke jump to the previous INT9; if it’s PrintScreen, call the previous INT9 and then handle PrintScreen.
Thanks for that. It’s not just Windows 1.x, Windows 2.x has similar code in all the standard grabbers (CGA/Hercules/EGA mono/VGA+EGA). However, in Windows 2.x the read from port 60h is conditional and only done when the BIOS shift state in the BDA indicates that Alt is being pressed. Which means the double read from port 60h won’t be done most of the time. Windows 1.x is different and reads from port 60h unconditionally in the IRQ 9 handler.
You can see the code even in the Windows 3.1 DDK (ifdef’ed out).
Some USB gaming keyboards work at a 1000 Hz polling rate. How does that compare to a PS/2 keyboard? Which would be faster? Thanks
That’s a really good question. I don’t think there’s a really good answer.
1000 Hz polling rate only means that the “best worst case” is 1 millisecond. That is, if you hit the key such that you just miss one polling window, it will take at least one millisecond until the next one. The question is whether the keyboard electronics is infinitely fast (more or less) and how much bandwidth is available on the USB. If there is, say, audio or video isochronous traffic, the keyboard data won’t be received immediately and may be delayed by some fraction of a millisecond.
The PS/2 wire protocol is much more predictable. There is no polling rate, but it takes some time to send each byte (let’s say 0.6 msec). I don’t know how amenable PS/2 keyboards/controllers are to overclocking, i.e. if they might significantly raise the bit rate.
My guess, without actually measuring anything, is that a USB keyboard polling at 1000 Hz behaves about the same as a PS/2 keyboard, likely with lower average latency but perhaps higher worst case latency. What I seriously doubt is that a human can tell the difference.
Once the USB is busy, or a PS/2 mouse competes for bandwidth with the keyboard, all bets are off.
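For what it’s worth, the comparison above can be put into numbers. All of these are assumptions, not measurements: a USB keyboard polled at 1000 Hz (1 ms windows, ignoring bus contention) and a PS/2 keyboard taking roughly 1.1 ms per scan code byte at the slowest documented clock:

```python
# Rough latency comparison for the discussion above; all numbers are
# assumptions, not measurements.
def usb_worst_ms(poll_hz):
    # Worst case: the key press just misses a polling window.
    return 1000.0 / poll_hz

def ps2_worst_ms(bytes_sent, ms_per_byte=1.1):
    # PS/2 has no polling window, but each byte takes time on the wire.
    return bytes_sent * ms_per_byte

print(usb_worst_ms(1000))  # 1.0
print(ps2_worst_ms(2))     # E0-prefixed key: 2.2
```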
There is a difference between polling the keyboard matrix and the communication between the computer and the microcontroller in the keyboard.
Even a PS/2 keyboard needs to poll the keyboard matrix. (Smart implementations of the hardware/software can at least detect that a key is pressed before it starts to scan the keyboard, so the time from pressing a key until it is detected is the same every time).