If you spent on a PC today what people paid for one in 1983 (literally the same dollar amount, without any inflation adjustment) you probably wouldn't notice anything being perceptually slower.
Like the first Mac retailed for $2500 US. Go spend $2500 on a PC today, you'll have a great time.
Granted, economies of scale make this kind of a dumb argument. But it has a bit of truth to it. People are just less willing to spend as much on their machines, as well as push much more limited platforms like mobile to their limits. We should definitely deal with that as developers, don't get me wrong - but not having to deal with the optimizations they dealt with 40 years ago doesn't make me unhappy.
I have a top-of-the-line Intel processor that’s less than 2 years old (launched, not bought), a 970 Evo Pro that’s one of the fastest drives around, and 32 GB of RAM (don’t remember the speed, but it was and is supposed to be super fast).
Explorer takes a second or two to launch. The ducking start menu takes a moment and sometimes causes the entire OS to lock up for a second.
The twitter rant is spot on.
There’s so much supposed value-add BS that the core usage scenarios go to shit.
And this is coming from a Product Manager. :-)
Anyway, the referencing problem is painful. I feel it often. Google Maps or Apple Maps: try to plan a vacation and mark interesting places on it to identify the best location to stay. Yup, gotta use that memory. Well, isn’t that one of the rules of UX design - don’t make me think?
Regarding OSes: storage has gotten so much faster, and CPUs haven't, that storage drivers and file systems are now the bottleneck. We need fewer layers of abstraction to compensate. The old assumption that IO is super slow is no longer accurate.
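One rough way to see how much of modern IO cost is software rather than hardware is to time reads that never touch the disk at all. This is my own crude sketch, not a proper benchmark; absolute numbers vary hugely by OS, filesystem, and cache state, but the per-call cost it reports is almost entirely software stack, since the data is served from the page cache:

```python
import os
import time

# Crude demonstration: time many small reads of a file that is almost
# certainly sitting in the page cache. No physical IO happens, yet each
# read still pays the full software cost of the layers above the disk.
PATH = "scratch.bin"
SIZE = 1024 * 1024  # 1 MiB scratch file

with open(PATH, "wb") as f:
    f.write(os.urandom(SIZE))

N = 10_000
with open(PATH, "rb") as f:
    start = time.perf_counter()
    for i in range(N):
        f.seek((i * 4096) % (SIZE - 4096))
        f.read(4096)
    elapsed = time.perf_counter() - start

os.remove(PATH)
print(f"{N} cached 4 KiB reads: {elapsed * 1e6 / N:.2f} us per call")
```

Compare the per-call figure against the advertised latency of a modern NVMe device and the share eaten by software becomes obvious.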
I'm writing this on an AMD Phenom II, running Debian and StumpWM, that's over 10 years old. I've upgraded the hard drive to an SSD, and the memory from 8 GB to 16 GB (4 GB DIMMs were very expensive when I first built it), and it's as fast as can be.
My work computer is much newer, has twice as much memory and a newer Intel processor, and I really can't tell the difference except for CPU bound tasks that run for a long time, like compiling large projects.
Have to voice my agreement. Linux is an expensive investment but so very much worth it. Each time my colleagues complain about their computers it is because of Windows. I count myself lucky to have Linux as my only desktop and the skill to maintain it. I run an ancient i5 2500k with 8GB RAM and SSD. All the games I play work fine on Steam Proton. I still have to figure out how in the world Reddit on Firefox manages to completely lock the system up, with looping audio and frozen cursor. Nothing else causes that fault.
> I still have to figure out how in the world Reddit on Firefox manages to completely lock the system up, with looping audio and frozen cursor. Nothing else causes that fault.
Fellow X220 user here... a solution for this exact problem where the system runs out of memory and then you sit there staring and waiting until it churns around long enough until it can do stuff again is to run earlyoom[0].
It will kill off the firefox process (or whichever is the main memory hog) early, which is also annoying but less so than having to wait minutes until you can use your computer again.
My previous two laptops came with Windows installed (XP and Win7), which I happily used for a year or two until, despite my best efforts, the whole system got crudded up and slow, and at some point got a teensy bit crashy - or maybe I got a virus - and that's when I put Linux on the thing. That easily gave the device a few more years of useful life. (My current laptop also came with Windows, but I just backed up the factory image, wiped it, and installed Linux right away.)
Anyway, while every version[0] of Windows I have used has become inescapably crudded up and slower over time, on Linux, even the old laptop, the only thing that got slower over time was the web browser. Which has mostly to do with webpages becoming heavier.
[0] Actually since Win95, because I can't remember if this also happened on Windows 3.11 and the like.
I don't feel like Linux is an expensive investment for everyone.
I am a first-year CS student. When I got my first laptop recently, I went crazy and installed Debian (I had some prior experience with the command line), but it didn't work very well on the laptop. All DEs except Enlightenment (yeah, I even tried it) had lots of display-related glitches due to the cheap hardware.
Then I moved on and installed Fedora. Nothing to tweak from the CLI. Just changed a few settings from the GUI, and peace of mind even on relatively obscure hardware.
It has been vastly simplified and worth it for anyone in IT / CS related fields.
Very satisfying to read about the new start menu. Every year or two I'll wonder how things are in Windows land, install a fresh copy of Windows 10, open the start menu and wait. Oh, there it is!
This is one of the things that made me ditch Windows when it came out, but I was pretty sure they would have fixed it now. Now I'm convinced Windows 10 is part of an authoritarian experiment in getting populations to gradually submit to a worse quality of life.
I have been told I have to update my Windows 7 system at work starting next year and I'm already dreading it. Too much software is Windows-only to switch to something else.
Privately it was so easy to ditch. (I still have it on dual-boot but rarely use it, and so every second or third boot it needs to update for long minutes while I wait. Meanwhile, Mint updates the kernel during operation while I barely notice at all.)
That comparison's a bit ridiculous, considering how much more a modern system is doing and making possible. I think all it shows is that latencies under 200ms are widely regarded as acceptable. What latencies are observed if you run an OS of comparable simplicity to the Apple 2e's on a modern machine?
If you ran an OS that was exactly as simple as the Apple 2e, the Apple 2e would still win.
Modern hardware introduces a significant amount of latency. It's important to differentiate throughput from latency: a modern computer would crush a 2e in throughput a million times over, maybe more, but that doesn't mean its pipeline is shorter.
Why? Can nothing be done about it short of literally going back to 90s era hardware?
I see latency as a silent killer, of sorts. For instance, if you introduce a tiny bit of mouse latency, users won't notice the additional latency, but they will sense that their mouse doesn't feel quite as good. Give them a side by side comparison, and I bet most will be able to tell you the mouse with slightly less latency feels better.
This extends to everything. Video games with lower latency appear to have better, smoother controls. Calls with less latency result in smoother, more natural conversations. Touch screens with less latency feel more natural and responsive.
(I only have anecdotal evidence of this, but I am absolutely convinced of it.)
> I think all it shows is that latencies under 200ms are widely regarded as acceptable.
They are literally not. At all. You're way off. For anyone who cares about latency, you gotta be sub 50ms at least. For anyone doing generic not latency-sensitive work, maybe you can get away with 100ms, but that's stretching it.
200-250ms is the (purposefully built-in) latency with which an autocomplete may appear while typing. Not the latency for a single character or mouse click!
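To make that concrete: the autocomplete delay is typically a deliberate debounce, not inherent slowness - the popup waits until you pause typing rather than firing on every keystroke. A toy sketch of the pattern (in Python for brevity; the `Debouncer` name and API are mine, not any particular UI toolkit's):

```python
import threading
import time

class Debouncer:
    """Run fn only after `delay` seconds pass with no new trigger --
    the pattern behind the deliberate autocomplete delay."""

    def __init__(self, fn, delay=0.25):
        self.fn = fn
        self.delay = delay
        self._timer = None

    def trigger(self, *args):
        if self._timer is not None:
            self._timer.cancel()  # a new keystroke resets the clock
        self._timer = threading.Timer(self.delay, self.fn, args)
        self._timer.start()

# Simulate three rapid keystrokes; only the last one fires the callback.
results = []
d = Debouncer(results.append, delay=0.05)
for ch in "abc":
    d.trigger(ch)
time.sleep(0.2)  # wait out the debounce window
print(results)  # prints ['c']
```

The point being: that 200-250ms is a design decision layered on top, which is a different animal from per-keystroke input latency.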
Where do you get 200ms latency anyway? That's a lot.
> ... you probably wouldn't notice anything being perceptually slower
I disagree. I have such a PC (64 GB of RAM, Quadro GPU, SSD, etc.) and I absolutely do notice things being slow, even things like Word, Excel, and VS Code, let alone resource-intensive professional software.
A more expensive PC does very little to address the latency issues at play here; the problems are very much not a lack of processor speed, GPU speed, or even SSD speed (most of the time).
I know from experience: the most godlike PC you can possibly build does virtually nothing to make common applications less laggy.
Modern-day tools such as Slack, VS Code, and other Electron and browser-based apps do bring a fair amount of lag into day-to-day work.
The common denominator there is browser tech & I think that will improve with time. And network-delivered services like Google Maps & Wikipedia are best compared to CD and DVD-ROM based services like MapPoint and Encarta, which had their own latency and capacity challenges.
In the meantime, you can still use tools like vim for low-latency typing. And it’s kind of interesting to see a Java GUI (IDEA) perform as well as it [has](https://pavelfatin.com/typing-with-pleasure/).
I get your point, but I don't agree on the anecdotal front. I haven't used fewer than 4c/8t and 16 GB of high-speed RAM in probably 5 years, and the only "common" applications that I notice going slow on me (except on the occasions I'm booting older/slower hardware) are things like IDEs and absurdly large spreadsheets. Even Electron apps are snappy for me, and I haven't had issues with them (GitKraken, VS Code, and Slack are daily drivers for me).
Browser-based apps are a shitshow though, but I figure that's mostly out of anyone's control. I chalk that up to the browser being fundamentally a poor place for most applications, even ones that are tightly coupled to a server backend.
Problem is, it's perceptual. Go back and use Windows XP: it's a complete nightmare lagfest under any appreciable CPU load compared to Windows 7+ (all UI rendering was done directly by the CPU, with no GPU acceleration except for a few minor things).
I will try to explain - going back to XP may not take you back far enough.
I recently read a history of early NT development, and then installed NT4 in a VM to play with, choosing a FAT disk. It is /extremely/ responsive. Much more so than the host OS, Windows 10.
The NT4 and 95 shells were tight code. They were replaced a few years later by the more flexible "Active Desktop". This was less responsive.
In later releases, Windows started to incorporate background features, such as automatic file indexing. File indexing is IO intensive and hammers your CPU cache.
When I was regularly using NT4 (years ago), I had an impression that there was some overhead caused by registry searches. If this was ever a thing, improvements in raw computing power have conquered it.
If anyone else wants to try, NT4 and VC++ cost me next to nothing on Amazon. For a good editor, get MicroEMACS. Python 2.3 works. (Don't let it near an open network.)
I do recall being a luddite in upgrading from Office 2003 to 2010 (rip, '07 on Vista) and rued the day that it became permanent. It did get better though.
That's more a statement on Adobe's quality of engineering than computing. CC is awful software and I hope their engineers are embarrassed by its deployment for the fantastic platform that is their creative tooling.
I would argue that it’s because of vast resources.
Both Adobe’s, and their customers.
At a certain level, when a graphic designer complains that Photoshop is too slow, they don’t push back against Adobe for optimizing poorly, they just buy a new computer.
Most software surprises me with how much time it needs to start. Games are particularly slow, I guess because they need to load the entire engine and all assets before they can even show you a game menu, but even very simple applications can be slow.
On the laptop I'm typing this on, Windows Explorer often takes several seconds to open.
Games tend to be optimized for framerates so it makes sense to sacrifice load times for that. Of course there are plenty of games that are just badly optimized and could improve both framerates and load times.
Games on old machines also used to have loading screens, remember? They actually sometimes took quite a bit longer than, say, starting Photoshop on a modern machine. I don't think games are a very useful comparison in this context; it's more about utility and application software.
> "People are just less willing to spend as much on their machines"
And why should they? Today's smartphones are much more powerful than the most powerful supercomputer of 1983. Computers have been powerful enough for most practical purposes for years, which means most people select on price rather than power. And then a new OS or website comes along and decides you've got plenty of power to waste on unnecessary nonsense.
The first Mac was 3.5-inch-disk based, IIRC. I remember test-driving it and was kind of shocked at that price, since it felt slower than my Commodore 64 with a hard drive (the tape drive was so slow, but cheap!) or my next computer, an Atari ST with a hard drive. Of course, the disk access/read/write speed was the dominating speed factor.
I seriously doubt there is a huge difference in how fast I can access files, scan memory, or iterate through a loop, which is what has a huge impact on perceptual latency.
NVMe drives will drastically improve file access times over SATA drives. You will find these on newer, pricier machines, but if your mobo has a slot, use it, because the drives are fairly cheap.
Going from low-clocked memory to high-clocked memory can cost a bit of money (last I looked, it was like a 30-50% premium going from 2666 to 3200 to 3600 MHz). As well, if you're comfortable, tightening the CAS timings on your memory can yield a noticeable improvement in memory-bound applications. I personally measured a 25% performance increase once my memory profile for 3200 was set correctly (mostly a Ryzen thing), and I just upgraded to 3600 and haven't tested, but in my larger projects with tons of in-memory data I'm noticing improvements.
Iterating over a loop can be a world of difference depending on what is happening in the loop and what vector instructions your CPU supports, and how well it is supported. As well as your CPU's clock, L1/L2 cache sizes... basically everything.
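The "it depends on everything" point is easy to demonstrate even without dropping to native code. A quick sketch in Python (so compiler auto-vectorization isn't even in play; the gap here comes purely from where the per-iteration work happens):

```python
import time

def timed(fn):
    # Run fn once and return (result, wall-clock seconds).
    start = time.perf_counter()
    result = fn()
    return result, time.perf_counter() - start

N = 1_000_000
data = list(range(N))

def explicit_loop():
    # Every iteration goes through the bytecode interpreter.
    total = 0
    for x in data:
        total += x
    return total

def builtin_sum():
    # Same iteration count, but the loop runs inside CPython's C code.
    return sum(data)

loop_total, t_loop = timed(explicit_loop)
sum_total, t_sum = timed(builtin_sum)
print(f"explicit loop: {t_loop:.4f}s  builtin sum: {t_sum:.4f}s")
```

Same N iterations, same result, typically several times apart in wall-clock time - and that's before cache behavior or SIMD even enter the picture.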
I have used a computer with NVMe, higher clock speed, better caches, more RAM…the works (but the computers I’ve personally owned only have had some of those ;)). They’re faster, yes, but fractionally for short latencies. Typing a character on one and waiting for it to show up is not significantly better on the other.
> People are just less willing to spend as much on their machines,
Please stop blaming the consumers, they have very little freedom of choice.
> as well as push much more limited platforms like mobile to their limits.
I don't think anyone has really pushed any recent smartphone to its limits. I haven't checked whether any demoparty has had a smartphone compo, but if they haven't, then yeah, nobody has really tried.
The C64, Amiga and early x86 PCs have been pushed to their limits though, squeezing out every drop of performance. And there still exist C64 scene weirdos that work to make these machines perform the unimaginable.
Smartphones haven't been around long enough, and have been continuously replaced by slightly better versions, so nobody has really had time to find out what those machines are capable of.
> but not having to deal with the optimizations they dealt with 40 years ago doesn't make me unhappy.
I used to have to deal with such optimizations and I totally get that. It's freeing and I occasionally have to remind myself what it means that I don't have to worry about using a megabyte more memory because machines have gigabytes. Except that a megabyte is pretty huge if you know how to use it.
But not having to deal with the optimizations also means that new developers never learn these optimizations, and they will be forgotten. And that's bad, because there's still a place for them: like 95% of the code doesn't matter, but for that 5% of performance-critical stuff... if you just learned the framework, then you're stuck and your app's gonna suck.
It's kinda weird to optimize code nowadays though. At least if you're writing JS. It's not like optimizing C or machine code at all. If you're not measuring performance, 99% sure you'll waste time optimizing the wrong thing. Sometimes it feels like I'm blindly trying variations on my inner loop because sometimes there is little rhyme or reason to what performs better (through the JIT). Tip for anyone in this situation: disable the anti-fingerprinting setting in your browser, which fuzzes the timing functions. It makes a huge difference for the accuracy and repeatability of your performance measurements. Install Chromium and only use it for that, if you worry about the security.
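Same lesson applies in any language: measure before touching the inner loop. A minimal sketch of that workflow in Python (in a browser you'd reach for performance.now() or the Performance panel instead of timeit; the two string-building variants here are just illustrative stand-ins for whatever your hot path is):

```python
import timeit

# Two candidate implementations of the same inner loop. Intuition about
# which wins is often wrong under a JIT (or any optimizer), so measure.
def concat(parts):
    s = ""
    for p in parts:
        s += p
    return s

def join(parts):
    return "".join(parts)

parts = ["x"] * 10_000

t_concat = timeit.timeit(lambda: concat(parts), number=100)
t_join = timeit.timeit(lambda: join(parts), number=100)
print(f"concat: {t_concat:.4f}s  join: {t_join:.4f}s")
```

Whichever way the numbers come out on your runtime, that's the answer - not whatever the code "looks" like it should do.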