Hacker News

Yet if you check how fast it renders a character to the screen, it will almost certainly be faster.

We've made trade-offs in the computer space: input latency and screen-rendering latency have suffered badly at the hands of throughput and protocol agnosticism (USB et al.).



The latency issues are dealt with, but you have to accept the RGB LEDs that come with gaming things.


Not really. Here's what a 70s/80s PC and OS had to do to print a single character in response to user input (simplified):

- Poll the keyboard matrix for a key press.
- Convert the key press coordinate to ASCII.
- Read the location of the cursor.
- Write one byte to RAM.
- Results will be visible on the next screen refresh.
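The steps above can be sketched as a toy Python model. Everything here is illustrative: the matrix map, the screen dimensions, and the stand-in `scan_matrix` are assumptions, not real 8-bit firmware, but the shape of the work (poll, translate, poke one byte into memory-mapped video RAM) is the point.

```python
# Toy model of the 70s/80s path. MATRIX_TO_ASCII and scan_matrix are
# hypothetical stand-ins for the real hardware scan.

MATRIX_TO_ASCII = {(1, 3): ord('A')}  # (row, col) -> character code

screen_ram = bytearray(40 * 25)  # memory-mapped text buffer, one byte per cell
cursor = 0

def scan_matrix():
    """Pretend poll: return the (row, col) of a pressed key, or None."""
    return (1, 3)  # stand-in for strobing rows and reading columns

def handle_keypress():
    global cursor
    key = scan_matrix()            # 1. poll the keyboard matrix
    if key is None:
        return
    ch = MATRIX_TO_ASCII[key]      # 2. convert the coordinate to ASCII
    screen_ram[cursor] = ch        # 3-4. write one byte at the cursor position
    cursor += 1                    # video hardware shows it on the next refresh

handle_keypress()
print(chr(screen_ram[0]))  # → A
```

There is no queue, no handshake, no task switch: the byte lands in video memory and the display hardware picks it up on the next scanout, which is where the near-zero latency comes from.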

A modern PC and OS would do something more like this:

- The keyboard's microcontroller polls the keyboard matrix for a key press.
- Convert the key press location to an event code.
- Signal to the host USB controller that there is data to send.
- Wait for the host to accept.
- Transfer the data to the USB host.
- Have the USB controller validate that the data was correctly received.
- Begin DMA from the USB controller to a RAM buffer.
- Wait for the RAM to be ready.
- Transfer the data to RAM.
- Raise an interrupt to the CPU.
- Wait for the CPU to receive the interrupt.
- Task switch to the interrupt handler.
- Decode the USB packet.
- Pass it to the USB keyboard driver.
- Convert the USB keyboard event to an OS event.
- Determine which processes get to see the key press event.
- Add the event to the process's event queue.
- Task switch to the process.
- Read the new event.
- Filter the key press through all the libraries wrapping the OS's native event system.
- Read the location of the cursor.
- Ask the toolkit library to draw a character.
- Tell the windowing system to draw a character.
- Figure out which font glyph corresponds to that character.
- See if it's been cached; rasterize the glyph if not.
- Draw the character to the window texture.
- Signal to the compositor that a region of the screen needs to be redrawn.
- Create a GPU command list.
- Have the GPU execute the command list.
- Page flip.
- Results will be visible on the next screen refresh.
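One slice of that pipeline can be made concrete: the point where the OS input subsystem hands the event to userspace. Here is a sketch that decodes a Linux evdev `input_event` from raw bytes; the struct layout (`timeval` + type/code/value) assumes a 64-bit Linux build, and the sample bytes are packed synthetically here rather than read from a real `/dev/input` device.

```python
import struct

# Layout of struct input_event on 64-bit Linux:
# long tv_sec, long tv_usec, __u16 type, __u16 code, __s32 value.
EVENT_FMT = "llHHi"
EV_KEY, KEY_A = 0x01, 30   # event type "key", key code for the A key

# Pretend this arrived via /dev/input/eventX after the USB transfer, DMA,
# interrupt and driver steps above (synthetic bytes for illustration):
raw = struct.pack(EVENT_FMT, 1700000000, 123456, EV_KEY, KEY_A, 1)

sec, usec, etype, code, value = struct.unpack(EVENT_FMT, raw)
if etype == EV_KEY and value == 1:     # value 1 means key press
    print(f"key {code} pressed at {sec}.{usec:06d}")
```

And this is just one handoff; everything before it (USB transactions, interrupts) and after it (toolkit, compositor, GPU) adds its own buffering and scheduling delay.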

I could drag this out longer and go into more detail, but I don't really feel like it.

I'm sure people who actually work on implementing these things can find inaccuracies with this, but it should give an idea how much more work and handshaking between components is being done now than in the 70's/80's. Switching to gaming hardware isn't enough to get down to ye olde latencies.


We have 1000x-faster machines to handle that. Notepad is perfectly responsive, but most apps do crazy side gunk that interferes with typing.


Except we don't, and this really is faster on several older machines: https://danluu.com/input-lag/


That’s not true: polling of USB keyboards, multi-process scheduling, and rendering through the various translation layers add latency, quite a lot actually. You really notice it if you type on a C64 today. The machine feels instant. Obviously it’s too anemic to do real work on, and that’s kind of my point. We traded a lot of latency for a lot of throughput in other areas.


I know, there's input lag, USB driver fiddling, kernel queues, font processing, kerning, glyph harfbuzzing or whatever, blitting, compositing, rendering, FreeSync/G-Sync, and then waiting for the liquid crystals to deform and so on.

Yet there is 1000 Hz USB polling, and there are optimizations for all of the above, so e2e lag might soon be solved, and then if we're lucky it'll percolate down to consumer stuff eventually.
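A back-of-the-envelope calculation shows why the polling rate matters. Two of the stages discussed above are easy to bound: the USB polling interval and the wait for the next display refresh. The worst-case added delay from each is one full interval (the average is half that); the rate pairs below are just illustrative configurations.

```python
# Worst-case latency (ms) contributed by USB polling plus display refresh,
# before any of the software pipeline even starts drawing.

def interval_ms(hz):
    """One full period of a device polled/refreshed at `hz`, in milliseconds."""
    return 1000.0 / hz

for poll_hz, refresh_hz in [(125, 60), (1000, 60), (1000, 240)]:
    worst = interval_ms(poll_hz) + interval_ms(refresh_hz)
    print(f"{poll_hz} Hz USB + {refresh_hz} Hz display: up to {worst:.1f} ms")
```

At the old 125 Hz default with a 60 Hz panel that's up to about 24.7 ms from just these two stages; 1000 Hz polling with a 240 Hz panel cuts it to about 5.2 ms, which is why gaming hardware helps even though it doesn't touch the rest of the pipeline.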



