The average human reaction time is about 250ms. The latency you'd save is, on average, unnoticeable by comparison, and in exchange you get the appearance of stuttering and corruption from tearing.
The meme was "the eye can't see more than 25fps anyway", i.e. a frame time of 40ms. It's a meme because it's of course incorrect. There is a huge difference between how long it takes us to acquire and react to what our retina is sensing (on the order of hundreds of ms) and how quickly we can track a moving/evolving shape (<5ms).
Here you are saying that latency of 250ms is unnoticeable, which is utter nonsense.
Your comment has nothing to do with the conversation. The reason to have low latency when typing text is so you can correct mistakes. That requires the full response time. There's no moving or evolving shapes. Maybe proofread your own comments before throwing around accusations of "utter nonsense".
Are you saying that latency on the order of 250ms when editing text is unnoticeable?
Yeah, that is utter nonsense. Just try it for yourself instead of pulling statements like that out of thin air. You would notice a difference of tens of ms even when editing text. Why do you think people cannot stand writing in VSCode, for example?
Re: moving/evolving shapes, I did not think I had to clarify that the brain is a massively parallel system with multiple modes of operation. Editing text does not require you to reprocess all visual signals from scratch, because that is not how the visual cortex works. The perceived latency when editing text is between pressing a key and your brain telling you "my eyes have detected a change on the screen. I will assume that it is the result of me pressing a key". It does NOT take 250ms to make this type of assumption, and this is basically how our vision operates. It's a prediction engine, not a CCD sensor.
>Why do you think people cannot stand writing with VSCode for example?
Which people? Every recent study I've seen shows VSCode as the most popular code editor by a large margin. Maybe latency isn't as important as you think?
>Are you saying that latency in the order of 250ms when editing text is unnoticeable?
No. Sorry for the info dump here, but I'm going to make it absolutely clear so there's no confusion. The latency of the entire system is the latency of the human operator plus the latency of the computer. My statement is that, even assuming you have a magical computer that computes frames and displays pixels faster than the speed of light, the absolute lower bound on this system's latency for the average person is 250ms. You only see lower average response times in extreme cases like pro athletes: so basically, not computer programmers, who spend far more time thinking about problems and going to meetings than they spend typing.
Now let's go back to reality: with a standard 60Hz monitor, the theoretical latency added by display synchronization is at most about 16.67ms (one refresh period). That's the theoretical MAXIMUM, assuming the software is fully optimized and renders as fast as possible, your OS has realtime guarantees so it doesn't preempt the rendering thread, and the display hardware adds no latency of its own. So at most, you could reduce the total system latency by about 6% by optimizing the software. You can't save any more than that.
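To put numbers on the 60Hz case, here's a quick back-of-envelope check (assuming the 250ms average reaction time used above; the ~6% matches the claim):

```python
# Back-of-envelope numbers for the 60 Hz case, assuming the 250 ms
# average human reaction time cited above.
human_ms = 250.0
frame_ms = 1000.0 / 60  # worst-case latency added by vsync at 60 Hz

print(round(frame_ms, 2))  # 16.67

# Best-case saving from eliminating that delay entirely, relative to
# the 250 ms human latency: roughly 6-7%.
print(round(100 * frame_ms / human_ms, 1))  # 6.7
```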
However, none of those things are true in practice. Making the renderer use damage tracking everywhere significantly complicates the code, and it may not even be usable in some situations like syntax highlighting, where the entire document state may need to be recomputed after typing a single character. Every PC operating system can add significant, unpredictable lag in the driver. Display hardware using a scanline-based protocol still has significant vblank periods. Adding these up, you might sometimes measure around 1ms of savings from doing things this way, in exchange for massively complicating your renderer, and with a high standard deviation, meaning you will likely perceive the total latency as HIGHER because of all the stuttering. That's less than 1% of the total latency in the system, and it's not even consistent or perceptible.
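To illustrate the syntax-highlighting problem with damage tracking, here's a toy sketch (hypothetical renderer logic, not taken from any real editor) of how a single keystroke can invalidate far more than the line you edited:

```python
# Toy sketch of why damage tracking fights syntax highlighting.
# Hypothetical renderer logic, not from any real editor.

def damaged_lines(doc, edited_line):
    """Return the range of lines that must be redrawn after an edit."""
    line = doc[edited_line]
    # A single character can open a construct that recolors the rest
    # of the file, e.g. an unterminated block comment or string.
    if "/*" in line or '"' in line:
        return range(edited_line, len(doc))  # damage = everything below
    return range(edited_line, edited_line + 1)  # damage = just this line

doc = ["int x = 1;", "int y = 2;", "int z = 3;"]
print(len(list(damaged_lines(doc, 0))))  # 1: a plain edit redraws one line

doc[1] = "int y = 2; /*"  # one keystroke opens a block comment
print(len(list(damaged_lines(doc, 1))))  # 2: everything below is now stale
```

A real highlighter is incremental rather than this crude, but the failure mode is the same: the damage region is data-dependent and can blow up to the whole viewport.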
Now instead consider you've got a 360Hz monitor. The latency added by display synchronization drops to a maximum of about 2.78ms, i.e. you save roughly 14ms compared to the 60Hz display. That's a CONSISTENT ~5% reduction in total latency against the old monitor, as long as the software can keep up with it. Optimizing your software for this improves it in every other situation too, whereas the other solution could make things worse. And if it doesn't make things worse, it could only save another theoretical 1%, and in a barely perceptible way. It just doesn't make sense to optimize for that last <1% when it's mostly caused by hardware limitations, nobody actually cares about it, and people are happy to use VSCode anyway without any of this.
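Same back-of-envelope math for the monitor upgrade (again assuming the 250ms human latency from above; the ~5% matches the claim):

```python
# Comparing the 60 Hz and 360 Hz cases, assuming the 250 ms average
# human reaction time from above.
human_ms = 250.0
frame_60 = 1000.0 / 60    # worst-case vsync delay at 60 Hz
frame_360 = 1000.0 / 360  # worst-case vsync delay at 360 Hz

print(round(frame_360, 2))  # 2.78

# Consistent saving from the faster monitor, as a share of the total
# (human + display) latency: about 5%.
saving = 100 * (frame_60 - frame_360) / (human_ms + frame_60)
print(round(saving, 1))  # 5.2
```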
So again, you can avoid these accusations of "utter nonsense" when it's clear you're arguing against something that I never said.
>The perceived latency when editing text is between pressing a key and your brain telling you "my eyes have detected a change on the screen.
Your brain still needs to actually process what was typed. Prediction isn't helping you type at all; if it did, the latency wouldn't matter anyway. And unless you're just writing boilerplate code, you have to stop and think many, many times while coding too.