Test your product on a crappy laptop (css-tricks.com)
527 points by dredmorbius on Dec 19, 2021 | 307 comments


There are three kinds of web developers:

- Those who don't care about performance, as long as it loads within a few seconds on their own beefed up system on the LAN. This article is for them, and we can only hope they will listen.

- Those who care about performance, and work for people who care. They are already doing great work (or are about to), producing those rare low-friction, high-speed, content-is-king sites which we all love.

- Those who care about performance, but work for people who don't. Since the developer doesn't get to decide what to work on, they can either get cracking on feature #357, work on performance on the sly at the risk of losing their job, or quit. Not much of a choice, really, unless you have some other-worldly lax schedule and minimal oversight. And don't forget, if you are allowed to work on performance, the burden is on you to prove how much faster things are with your changes (which can be really hard to show conclusively) and that your N days are worth more than Bob working N days on that sexy ticket #357.

Substitute "security", "UX", or anything else you want for "performance", it works the same way.


In my personal experience, even getting an average web dev to touch a slower Windows machine is mission impossible. The best you can hope for is "ewww, this sucks so much and it's ugly, why would anyone do this" before they arrogantly walk back to their $3000+ MacBook to shovel more JS libraries into the website. I've literally not seen a single rockstar JavaScripter willingly test things on Windows, much less attempt to use a low/midrange machine to see if their code works well.


Come to Europe, where most of those designers will be using Windows machines issued by IT to everyone in the building anyway.

And the only Apple gear will be iPhones and iPads used by upper management.

Yes, I am also aware there are plenty of cases that aren't like that, especially for the fortunate ones living in tier 1 EU countries.


> Come to Europe, most of those designers will be using Windows machines issued by IT to everyone on the building anyway.

That's not my experience at all. I would not make such a generalization about Europe, which includes very different cultures and work environments.


Exactly because I expected such replies, I wrote the last sentence, which you forgot to read.


I did read it. You still made an untrue generalisation and only tried to narrow it down in the last sentence. Why not just write something like "I live in <your country> and here..."


Because I have lived in several European countries and am slowly approaching 50; actually, you also failed to do that.

We can also start discussing formal logic regarding how to properly express generalization.


(Waiting for a company-issued M1 Pro MBP)


I mean, can you really blame them? I frequently play tech support for many of my family members running Windows 10 without an SSD, and it's so slow it's like pulling teeth.

Running Windows 10 with an SSD and 8+ GB of RAM is a different story and a way better experience.


Especially since IT departments see Windows as the “budget” option for worker drones who don’t need much other than a web browser and Office.

The other side benefit is that Macs are way harder to manage, so IT depts don't actually bother: you get admin access and are told to turn on FileVault, instead of the nightmare that is opinionated GPOs.

So when it comes to my option of a MacBook Pro or the cheapest functional Dell Latitude bought in bulk I’m gonna suddenly care a whole lot about testing in Safari.

Apple set themselves up really well as an escape hatch from overbearing IT, so it's not surprising people with the opportunity take it. Devs get the treatment that was previously reserved for the C-suite.


Unpopular opinion: Most web developers don't have a grasp of computer science or assembly, and act like computing resources are free. OTOH, obsessing about performance to the exclusion of usability is equally insane. Nuance, knowledge, and metrics are what's needed.


Idk how CS or Assembly plays into this. I’m a webdev. Self taught. Your points about metrics and nuance are accurate but I fail to see how those are related to CS as a whole.

If stuff is legitimately slow, we should make it faster. Doesn’t need to be more complicated than that imo.


The typical CS curriculum has a pretty large focus on performance. Most of the assignments given in my university had constraints on time, memory, and CPU consumption, so performance was heavily emphasized.

This has carried over quite well into my work life. Things like "big O", understanding the mechanics of different kinds of data structures, and their tradeoffs inform all my code. They don't take up a large part of my active thinking, but I will routinely make the more performant decision while weighing readability, "grok"-ability, and whether the approach is a common one.

I find that university forced me to learn the unattractive bits of computing which, had I been self-taught, I likely would've glossed over rather than spending several months on.

Yes, if stuff is legitimately slow, make it faster. But things typically _become_ slow; it's a creep. As time progresses it slows down, and by the time it's slow you may not have a product manager who thinks _now_ is the time to dedicate resources towards speeding it up.


I don't think assembly has any relevance, and the thing with CS that's relevant is almost entirely with complexity theory. This sounds fancy, but it basically boils down to recognizing O(n^2) vs O(n) vs O(n log n), which in practice can be reduced to "use the right datastructure" and "use the right query".
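As a concrete sketch of what "use the right datastructure" buys you, here's the classic O(n^2)-to-O(n) fix; the deduplication task and function names are invented for illustration, not something from the thread:

```python
# Hypothetical example: deduplicating ids while preserving order.
# The only difference between the two versions is the datastructure
# used for the membership test.

def dedupe_quadratic(ids):
    seen = []                 # list: 'in' scans every element, O(n) per check
    out = []
    for i in ids:
        if i not in seen:     # O(n) lookup -> O(n^2) overall
            seen.append(i)
            out.append(i)
    return out

def dedupe_linear(ids):
    seen = set()              # set: hash-based 'in' is O(1) on average
    out = []
    for i in ids:
        if i not in seen:     # O(1) lookup -> O(n) overall
            seen.add(i)
            out.append(i)
    return out
```

Both return the same result; the second just stops being a problem at a million elements, which is exactly the "right datastructure, right query" point.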


Frankly, I think far more performance problems nowadays lie in things like insane SVG/CSS animations for every possible loading spinner than in developers not knowing their O notations.


To be fair though, SVG/CSS animations are super fast and efficient compared to anything javascript-related.


> Unpopular opinion

This is HackerNews. Hating bloated websites is a time-honoured tradition here.

> Most web developers don't have a grasp of computer science or assembly, and act like computing resources are free

It's worth separating these two. Plenty of web developers have computer science degrees, but if they're paid to quickly churn out bloated websites, that's what they'll do.

> Nuance, knowledge, and metrics are what's needed.

If the aim is to improve the performance of software/websites, what's needed is a userbase that's less forgiving of bloat.


Thanks for this.

In addition it’s easy to forget until you learn a new technology that: first you learn how to solve a problem, then you learn all the ways to solve a problem, finally you learn how to best solve a problem.

This means that you need a good understanding to make performance focused solutions, it can simply be the case that most teams are learning to get there.


While I agree with the premise, I've seen this mindset get out of hand and stress people out. My anecdote involves a co-worker looking at our Azure metrics at all times of the day and having a mini panic attack when requests drop off or we get an uptick of a few hundred milliseconds in response time. Most of this is due to scaling out and resolves itself in a short amount of time. Regardless, he's convinced something else is causing it and is taking measures that actively make our code worse and less understandable, all in the name of "performance".


Don't act on assumptions, verify.


"This article is for them, and we can only hope they will listen."

By my reckoning: everybody assumes that this article is for somebody else, and that's the problem. Assuming the problem is either developers who don't care or developers who don't have enough agency to act on it is easy because we can say "I care, and I have agency, so I'm not part of the problem."

Hanlon's razor applies here, but the related incompetence stems from competent people doing things outside their areas of expertise rather than being fundamentally incompetent. The developer discussion focusing almost exclusively on performance reinforces that. We favor our strongest mental models when reasoning about problems— when you're a hammer, everything looks like a nail.

But we often contribute to or create things outside our direct areas of expertise (often reluctantly, because we're the last people to touch the code before it hits production). We might not even realize how far outside they are, though. In my experience, this testing reveals far fewer performance problems than interface design and front-end implementation problems: touch targets that are nearly impossible to use on burner smartphones, sidebars that clobber content, top menu bars that wrap between breakpoints, weird behavior on non-widescreen landscape-orientation devices, poor keyboard (and therefore screen reader) navigation, etc.

I believe this discussion illustrates the importance of thoughtful and skillfully-applied UX principles, where the data would ideally come from real users operating as they normally would... but maybe that's just the nail this particular hammer is hunting for! XD


There is a maxim that testing something is a great way to make your superiors care about it.

Even filming your site loading on a normal-powered device or a mobile connection and showing them the video may be enough to change their opinions. But yeah, some people won't change, no matter what.


I agree that testing on crappy hardware is a good thing, BUT...

I think the industry has a problem selling underpowered computers in general. AFAIK it's the non-pros that need a reasonably powerful computer. Maybe "powerful" is the wrong word, but not the crappy $300 Windows laptop running on a Celeron or whatever the latest is.

My dad got some ~$500 HP all-in-one desktop and it's so underpowered as to be unusable. It takes 4-5 minutes to boot, and launching any app takes what feels like half a minute. Just typing, you can feel the machine struggle. You could argue that if all software (and the OS) were optimized it might not be underpowered, but there is no world where all software is optimized.

I see a similar problem all the time with people giving up an old computer. My friend had some 2007-ish Mac and was thinking of giving it to a friend for their kids. Maybe some hacker kid would find a use for it, but for most non-geeks an old PC won't run current software, the current OS, or current browsers with vulnerabilities fixed and modern standards. Zoom, YouTube, etc. are probably not going to be good experiences on such a machine.

Basically it's my opinion that non-techies should always get a relatively new and reasonably powered machine, but sadly they don't know that, so they get led to a crappy underpowered machine and then have a crappy, frustrating experience using it.

I don't have a solution; it's only an observation. In my own family, when they let me buy them a machine, on a scale of 1 to 10 I aim for ~7 in terms of power, with say 8-16 GB of RAM and a reasonably powerful processor: at least an i5, not an i3, N-series, or Celeron.


Even if you cut all the crap, it'd still be pretty hard to get away from the ground truth of which hardware is cheaper.


Progressive enhancement, but as you said, reality doesn't always allow or account for that approach.

Unless FAANG companies suddenly make it trendy again to care about the low-fidelity experience.


Explain Gmail.


Thanks for reminding me why I use HTML Gmail instead.

https://mail.google.com/mail/u/0/h/


It surely loads faster than the speed of light, but with all the keyboard navigation missing, it takes 20x more time to go through and archive/delete/reply to each email. Can't have everything, it seems. :/


I find the idea that the keyboard navigation is a measurable part of the site amusing.


It's actually not faster than the current modern UI for me.


It's significantly faster for me on my i3-5010U - 5s for the modern UI and 1s for the HTML UI, unloaded. During startup, or when I'm doing something computationally expensive in the background, that 5s turns into 10s, so that difference becomes even more noticeable.


What specs does your computer/device have?


Intel Skylake Core i5-6300u.


1. Please consider using a proper mail client:

https://en.wikipedia.org/wiki/Comparison_of_email_clients

2. Please consider avoiding Google (and Microsoft, Yahoo and Apple) as your email provider, due to their mass surveillance practices, both for commercial ad targeting and for US government political policing. There are several other reasonable webmail providers, if you really can't give up the web interface.


Basically no desktop client works as well as the gmail web app (especially when it comes to search).

I use the mail client on iOS out of principle but still end up opening the gmail app when I actually need to find something.

A state actor will have less trouble breaking into a mail server I run, and I don’t really believe too much in the idea that other countries apart from the US are immune to police overreach.

Signed: someone who has multiple email addresses, has tried all the fancy alternatives (fastmail etc) and who desperately wants this stuff to work because having 3 web tabs to check email is annoying.


> A state actor will have less trouble breaking into a mail server I run

Only if they have a good reason to target you, personally.

But if we are talking about mass surveillance, attacking a large provider is vastly *cheaper*: spend one million dollars on 0-days to gain access to a million mailboxes, or one billion to intercept submarine cables and backdoor CPUs and gain access to a billion mailboxes.

There is no way to attack somebody's personal, custom mail server on a budget of $1 per mailbox.


> Basically no desktop client works as well as the gmail web app (especially when it comes to search).

I would actually make the opposite statement, i.e. generally, webmail clients are significantly inferior to desktop clients in most respects: Feature set, responsiveness, flexibility etc. And of course, being potentially somewhat secure, as opposed to guaranteed no-security with webmail.

As for search functionality - I'm not sure you're right, but I'll grant you that some webmail providers, like Google, provide speedy search.


At work I use the macOS desktop edition of Outlook for my O365 account, and I've found searching to be pretty good and quite fast. I'm pretty sure it just searches the local cache, hence the speed. It's also good at suggesting email addresses when I type someone's name, so I can usually find emails from a certain person just by typing the first characters of their name and pressing return when it finds the correct address.


Outlook on Mac uses Spotlight search internally.


KMail (on KDE on Linux desktop) search is instant. Pity the Akonadi framework under it needs more love to iron out its bugs. I suffer web GMail only when not at my workstation.


Yeah my problem has been fast search that just is wrong. Like even exact keyword searches.

I’m sure it’s something about my configurations but every time I mess around to figure it out it doesn’t work.

Generally my use of email is “look at stuff from right now” or “look at stuff from 6+ months ago”

(Here “wrong” means “does not find the email I am looking for”. Usually a failure mode of just not showing much of anything)


If you’re on mac mimestream is really good:

https://mimestream.com/


Back when I could use emacs for email, I was quite happy. Unfortunately, I have advanced protection turned on in my Gmail account, so I haven't figured out how to get that back yet. App passwords aren't allowed on my account.

I have considered moving email accounts, but also don't have enough motivation for all that entails.


I think you can get most of the benefits of the enhanced protection by setting specific features yourself instead of using the one "do everything" flag.

I have 2FA required on regular logins for example, which can be a Yubikey or authenticator app token. At the same time, I have app passwords, used by my emacs-based email setup. Mu4e gives me instant search-based access to my saved email, and pretty good security elsewhere. Is that an option?


It is tempting. I like the stupid heavy locked down nature of no app passwords, for the time being.


That's a valid choice too, to be sure. As long as you know there are options.


> I have the advanced protection on in my Gmail account

Well, Google has your email, to read and use and pass on, so I'm not quite sure what this "advanced protection" can mean.


I don't understand this comment. Google clearly cares about the performance of that product, since it's the fastest webmail I've used by a long shot. Or maybe you care more about performance than almost everyone, and use Mutt or something else which can be configured to be faster with a few days' worth of effort.


I have to use Gmail for work. I've found Gmail to be pretty slow nowadays. From HN comments on other threads, I'm not the only one to have noticed this. It even occasionally fails to load and just hangs at the loading screen, requiring a refresh and fingers crossed that the second try succeeds in loading my inbox.

When I return back to my personal email using FastMail the difference is like night and day it loads fast and 99% of the time it works on the first try. I also occasionally use Outlook's webmail for work (as I work across different systems with some on Google and some on O365) and even Outlook webmail is generally faster than Gmail despite my main Outlook mailbox being much larger than my less-used Gmail mailboxes (although it is nowhere as fast as FastMail).

I remember the early days of Gmail -- it was a fantastic product in its heyday better than any other webmail client at the time. The quality has sadly dropped a huge amount especially in recent years to the point there are now viable alternatives. FastMail is one such alternative I have been very impressed with. Even Outlook's webmail is worth a look although the UX could do with some work.


Indeed, I have Outlook, Yandex and Gmail, and Gmail seems to be the worst of them all.

I cannot understand all the praise people give it; is it simply brand loyalty?


It used to be amazing - a pretty full featured mail application, snappy in 2005 on much less sophisticated browser runtimes. It’s relatively a piece of shit now.


Second this, but to elaborate: it's likely because they're using Chrome, where it runs fast, and other browsers where it doesn't. From my perspective, everything Google does is just an effort to get you stuck in their ecosystem.

Everything about Gmail is very confusing to me; I have no love for Outlook (it's absolutely garbage) but in garbage rankings, Outlook is a bit more functional than Gmail is. Outlook has a more complete business eco-system vision I think than Gmail does, and there is a lot better integration with the other elements of Microsoft's ecosystem than Chrome has. For example, when I attach a Sharepoint/OneDrive doc to an email in Outlook, it offers to let me set the permissions for the recipients automatically. This doesn't make up for Sharepoint/OneDrive's awful permissions handling in general, but this element is convenient at least. (Again, I cannot stress how frustrating it is to even __see__ what permissions someone has for a Sharepoint/OneDrive document as regardless of your screen resolution/browser size, you're confined to a few centimeter width sized window, never mind that sometimes saving just outright refuses to work, and bulk-adding persons is even worse)

Gmail has awful design in many places; I don't like that a simple option like FWD is hidden behind the ... menu. I don't like that selecting an email from the list in gmail results in the menu buttons suddenly expanding. Even worse, the location of the reply button when the menu buttons are contracted is the same as the location of the "Mark as Spam" button when expanded; on browsers that aren't chrome, the display lag is enough that I can move my cursor far faster than the menu buttons expand and what __was__ the reply button location is now the "mark as spam" button location.

I don't like that it takes a good 5+ seconds to load the basic email list on a 100 Mbit connection. I don't like that logging into Gmail logs me into every other Google platform (e.g., YouTube) and I start getting spam about different channels or reminders to "engage" with various channels/social media. I don't like that I have to install a .dpkg for Mac to use Gchat video when other apps (e.g., Telegram, Skype, WhatsApp) are just a regular application that I can remove with drag/drop. I really don't like how "foreign" the Gmail/Google theming is with basically everything on MacOS and even on Android, or how few controls over basic UI functionality I get. (I really struggled to find in the Playstore how to monitor the progress of a download...)

Gmail and Google at large feels like an ecosystem that Google just expects everyone to buy into regardless of the platform/experience, and I truly cannot understand their intended way of handling the UI most of the time. The underlying methods of categorizing/indexing the information of course is fine, but actually interacting with it is frustrating for me compared to every Google product contemporary. As much as I dislike Outlook, I'd rather use it than Gmail. I'd rather use basically any chat program as opposed to GChat/Whatever it's called now (it's still really bad whatever the new product is called). Even Google itself I find myself fighting with Google's preferred results instead of what I'm actually wanting (without uBlock the first results are tons of ad spam that have nothing to do with the searches, and the first few results are usually some SEO'd result).

There is an "okay" experience with google if you go all in on it, but they certainly seem to bank on the idea that you'll do that in order to get access to Google results. The rest of the ecosystem is so clunky that it's really undesirable to use for me.


That hasn't been true for a long time now. Gmail used to be fast. Nowadays, it takes a long time to load (loading megabytes of JS before displaying anything other than a big logo), a long time to search, and a noticeable amount of time to even display an email after you click on it. I think you might be relying on some older memories.


I agree that they care. I don't agree that it is fast.

It is comical how long it takes to load on my internet connection. Has gotten better, but that is on my connection side.

My favorite crazy moment lately is just how long it takes to bring up the compose window on a fresh page load.

It used to be great, mind. Not sure what I'm getting with the additional load times.


It used to be fast. Now it's so slow it has a loading screen.


What do you mean?


The gmail prototype was made using 20% time. I'd call that "no oversight" as per the third point.

Moreover, when the developer in question demoed his work, the reaction from Brin (famously) was something akin to "there's 400msec delay, fix it". Which puts us clearly in the realm of the second scenario.


Gmail is an AdTech-funded, no-charge email service operated by Google, a popular internet services subsidiary of Alphabet Inc.


Group 3, clearly. With a bit of the "minimal oversight" thing.


The search box is that way ^


> British soldiers in World War I were equipped with a Brodie helmet, a steel hat designed to protect its wearer from overhead blasts and shrapnel while conducting trench warfare. After its deployment, field hospitals saw an uptick in soldiers with severe head injuries.

> Because of the rise in injuries, British command considered going back to the drawing board with the helmet’s design. Fortunately, a statistician pointed out that the dramatic rise in hospital cases was because people were surviving injuries that previously would have killed them—before the introduction of steel the British Army used felt or leather as headwear material.

I've seen this same story, except with warplanes during the war. The story goes that an Allied air force tried to improve the percentage of planes that returned from bombing raids, so they inspected returning planes, found the places where they had holes, and added extra layers of steel to those areas for the next run; but this had no effect on the percentage of planes that returned.

That is, until a "statistician" realized that all the places where they found holes were actually parts of the plane that could be hit while the plane still survived and returned. They then started adding extra steel to the parts where they found no holes or damage, assuming that if those parts were hit, the plane would be shot down and not return. After this change they saw a dramatic increase in the number of planes returning from a raid.

Does anyone know the true source of these stories?

Regardless, it really opened my eyes to how changing your perspective on the cause of the problem can help find the best solution, and to this day I still think of this story when solving a problem.
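The perspective shift in the story can be sketched as a toy simulation; the two zones and the loss probabilities below are invented for illustration, not historical data:

```python
# Toy simulation of the bomber story (made-up numbers, purely illustrative).
# Hits land uniformly across two zones, as Wald assumed; hits to the "engine"
# zone down the plane far more often, so the holes visible on RETURNING planes
# under-represent the zone that most needs armor.
import random

random.seed(0)
P_DOWN = {"engine": 0.8, "fuselage": 0.1}   # assumed loss probability per hit

taken = {"engine": 0, "fuselage": 0}        # hits actually taken by all planes
observed = {"engine": 0, "fuselage": 0}     # hits visible on returning planes

for _ in range(100_000):
    zone = random.choice(["engine", "fuselage"])  # uniform hit distribution
    taken[zone] += 1
    if random.random() > P_DOWN[zone]:            # plane survives and returns
        observed[zone] += 1

# 'taken' comes out roughly equal per zone, while 'observed' shows far fewer
# engine hits: the zone with the FEWEST observed holes is the one to armor.
print(taken)
print(observed)
```

Counting only the survivors inverts the right answer, which is exactly the trap the commanders in the story fell into.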


We apply this kind of thinking at work: early on we focused all our energy fixing user problems or adding features that were requested. We realised it is actually a small number of users who volunteer to report to us what's wrong or missing (outside of regular crash/feature tracking) - they are the ones that care enough to make an effort, and it's a small percentage. So we started campaigning to engage users we never heard from, to understand what problems they had that we weren't solving, to get them energised enough to report to us. It's been very successful, engagement has gone up significantly.


Way back in the day I was a lone IT guy in a distributor/wholesaler, and part of that was creating line of business apps to do some real heavy lifting, extending existing ones, and creating utilities to tie other apps together. In a tiny company of maybe 40 people, where even an underpaid IT guy was still an extravagance (but was adding huge value).

Periodically I'd ask to shadow someone for a day or two. At some point I was even asking if I could do their task and have them watch me to confirm I was doing it correctly.

What you would learn from this was amazing. All sorts of inefficiencies that people would quietly accept, because it was still way better than the previous state.

Then a week or two later I'd roll out the update, and get huge thank yous from accounting, or sales, or the warehouse because I completely trivialized some previous common task. What used to take 45 seconds now takes 2 because all the work is done for them 99.9% of the time and they just need to confirm it's correct (or correct the odd edge case). The computer became this increasingly magical tool they loved more and more.


The number of hacks and workarounds that users will come up with without complaining to the dev team is always astounding. One time I shadowed someone who was using one of our internal tools that spit out pdfs, and at some point along the line he realized he wanted something extra included on them but didn't think to contact the dev team. As part of his daily work flow, he was taking data from one of the program's temp files and editing the pdf to add it. He had apparently been doing this for about a year. When I saw this, I spent 10 minutes updating the report template to add it and ended up receiving thank you emails from people I'd never talked to.

So yes, find a user and shadow them.


This makes sense. The sort of users that leave any kind of feedback or even hang around long enough for you to gather statistics are those who think the product is pretty good, but there are just these small issues that they hope will get fixed. The users who think your product is complete crap try it, stop using it very early and never leave feedback or statistics for you to analyze.


Did those users actually report different issues altogether from the original ones, or did they just report more of the same kinds of issues?


Yes, there's a large set of Venn diagrams we weren't connecting, and this gives us pretty easy access to greater coverage. The harder part becomes messaging and marketing as we target more use cases.


What mechanisms did you use to get users providing feedback? Popups asking or something?


Good old email segmenting. It's become a real powerhouse for us. Figure out what groups of people you have, talk with a few to figure out what they want, build it, and target those who match. We're not a startup, however, so this is more of a scale-up strategy, or one for those who already have access to a large number of users who trust them; otherwise you'll just be sending your emails to spam.


What are some tactics you use to get these customers amped up?


Taking the train over the Rockies they tell a story about Westinghouse[1] inverting the logic on the pneumatic brakes. When first introduced they failed open, meaning the train could move. This was convenient on the flats, but if the train was stopped on a grade and the pressure failed... down the hill you go. So Westinghouse made them fail closed: no pressure, no rolling down the hill. IIRC it saved lives.

[1] https://en.wikipedia.org/wiki/George_Westinghouse


You are thinking of Abraham Wald: https://en.wikipedia.org/wiki/Abraham_Wald


Note that the Wald story may not have actually happened: http://www.ams.org/publicoutreach/feature-column/fc-2016-06

(But yes, Abraham Wald is the statistician that the story is about.)


Aaaaaactually... in the postscript he sort of walks back everything. But since the article has mathematical and educational merit on its own, I guess he maintained it.

"My indignation at how the internet dealt with Wald's work was overblown. Stephen Stigler (son of George, and a statistician at the University of Chicago) called my attention to a note by W. Allen Wallis himself in which he mentions Wald's work explicitly in connection with survivorship bias. Wallis' original article in the Journal of the American Statistical Association was followed by two very brief comments and then by a further 'rejoinder' of a bit more than one page. Towards the end of it he says, "The military was inclined to provide protection for those parts that on returning planes showed the most hits. Wald assumed, on good evidence, that hits in combat were uniformly distributed over the planes. It follows that hits on the more vulnerable parts were less likely to be found on returning planes than hits on the less vulnerable parts, since planes receiving hits on the more vulnerable parts were less likely to return to provide data. From these premises, he devised methods for estimating vulnerability of various parts."

Amazing article btw. Thanks. This is what makes HN so unique.


Yes, and here's some excellent meta-research on the "measles plane": https://counting.substack.com/p/its-that-ds-meme-plane-with-...


Yes, thank you. Now I know where this came from!



That plane with the red measles spots has become a joke meme on Twitter. Here's the Wikipedia page on it to start you off on your deep dive:

https://en.wikipedia.org/wiki/Survivorship_bias


There's an old tech blog post about improving YouTube page weight: it paradoxically worsened their long-tail stats.

Once they sliced the data geographically, they found that the increase was all in places like Siberia, where previously the site had been unusable.

https://blog.chriszacharias.com/page-weight-matters


I always feel these stories are nonsense, that no one can be that stupid. Then I think about the people I've dealt with over the past 20 years, and I fully believe these scenarios.


> That is until a "statistician" realized that all of the places on the plane that they found holes in were actually parts of the plane that could get hit and survive and return. They then started adding extra steel to the parts of the planes they found no holes or damage assuming that if those parts were hit the plane would get shot down and not return. After this change they started seeing a dramatic increase in the amount of planes returning from a raid.

This is highly likely to be grade-A BS because it would be quite obvious to literally everyone flying an aircraft. It only makes sense because the reader isn't given a full minute to think about it.


Oh God yes.

Back in the 20th Century, I was on a team for a major web browser. They were attempting to extend web browser technologies to crawl and cache web sites. The idea was to enable complex web experiences in a time when Internet service was analog telephone, 9600 baud.

We worked for months on that. At the office. In Silicon Valley. On $6000 Compaq computers.

When I returned home via my two-hour commute, I would try it: dial up and crawl, overnight, and surf around on the train to work the next day.

I was the only person on the team to actually do this.

I could understand that; after a 12-hour day of intense times at the office, the last thing most of us wanted was more of it, as soon as we got home.

A year later, I was at an ad agency. Big clients, national brands. Everyone there used the very latest Macintosh machines. Those could be like $8000 each. Beautiful work in Photoshop, then stuffed into a web browser.

They had a room with a few PCs, but nobody went in there.


1999. I had 10 Mb/s at work and 56 kb/s at home, which is about 200 times slower: the same difference as between 1 Gb/s fiber and 5.6 Mb/s ADSL.

I used the same laptop and had very different experiences. Sites like the current HN worked much better at home. Large images didn't load fast there. Of course developers didn't create very large JPEGs or 10 MB JavaScript files back then but we had the same kind of problems, scaled down to smaller sizes.


I'm sorry, but I don't understand the point you are trying to make here either with regards to the original topic or even how the second part of your story relates to the first.


Devs are lazy and won't give up their luxury Macs, even for testing.


That's like 10% of the comment - I feel like some point was trying to be made with the dialup caching product but it went over my head.

Also back then, Macs weren't so much a luxury like they are today (this is pre-Jobs-return) -- they were just the de facto standard in the creative fields. DTP and Photoshop were strictly better on the Mac up until about that point, and it would be a number of years before PCs eroded the entrenched Apple dominance in that field. Much like UNIX workstations still had a lock on CAD/CAM/engineering in those days that was rapidly eroding to NT PCs. $8000 was the price for a well-equipped workstation whether it was Wintel, Apple, or UNIX (well, a bit more for those). As alluded to, the big issue with Internet publishing was taking into account a 56K dialup vs a corporate T1, rather than hardware differences.


That's the same point. When every professional is using $8000 Macs or PC workstations but your customers are on C64s and Amigas, you'd want to run some tests on those machines as well.


My point did get muddled there in telling a story.

It's that the teams were putting all this effort using quite literally millions of dollars of tech infrastructure, to create tech products to be delivered to very limited computers and slow networks.

And the real point: the end product was never seen on the target hardware by those building the things. No clue.

The web browser company assembled a usability lab, would get volunteer people to come in and try things out. But the feedback from that group was submitted as a report to a management group that was two layers above us.

Chain the coders and designers to $350 laptops, at least one day a week.


It's not just developers, and it's not just about speed, it's also designers / UX bods who need to heed this. My decently powered work laptop from job 1 has a fairly rubbish screen, which means that a lot of the subtle hover effects, 1 pixel light grey on lighter grey divisions and skinny fonts either aren't visible or are incredibly difficult to see. A good trick if you're working on a large, hi res screen is to have a small and crappy second monitor hooked up if you're doing any UI work so you can see what it will actually look like to a lot of your users.


It's because to many, aesthetics > function. This drives me INSANE. If you make a UX that most people can't use for reasons a, b, and c, who CARES how pretty it looks? This is also true for accessibility, especially for older people who are more likely to have reading issues and presbyopia. Everything is designed for 22 year olds with perfect vision and the best hardware.

It's often on the marketing and exec teams. I had to be very firm in some of our UX decisions because yes, it does look a little less polished this way, but a lot of our audience is elderly and they need to be able to read what we're giving them.

And this isn't even getting into how many sites break if you have any accessibility shortcuts or defaults set up on your browser. I'm slightly visually impaired and have pages set to load at 150% zoom (on a 4k monitor) and a minimum font size of 12. It's absurd how many sites that breaks.


At least where I have worked, UX people are evaluated by whether the PM is happy when approving it. Even for myself as a dev, the part of my job people care about is the rate tickets move from "Selected for development" to "Done" while still passing tests.

You will need to change the minds of PMs.


Luckily, I don't work in tech (I do tech work in a non-tech company/non tech team [I'm the only one with any coding or cs skills]), and this right here is why. I work in communications and own the marketing decisions for my department so I generally have strong decision rights when I do UI/UX/design work. Downside is I don't make nearly as much, but I definitely prefer the respect and autonomy.


I work in web development at a major, well-known company. My top of mind is exactly what the article is reflecting: how will this product/feature perform on a craptop/low power ChromeOS device/cheap smartphone that isn't even sold in the U.S.? I can say first hand that our designers do not care about this. I have never heard any concerns around this even so much as uttered. I think part of this is due to a rigid 'design system' that, if adjusted to better accommodate for our craptop, etc. users, would require updating many other products to keep consistent. Thus, the problem is handwaved away with "that problem is only reflected on x% of DAU."

There is so much design/development guidance around screen width, breakpoints, contrast ratio, accessibility and so forth, but so little around "real world" testing on low-tier devices. Every designer is using a 5K display with perfect color and definition, or an iMac with a super fast processor and tons of memory; our users are not.


This is one area that really grinds my gears: the trend in UI design to reduce contrast because full black on full white text is too contrasty. Whatever minor improvement in readability on their quality 3000:1 contrast ratio monitor pales in comparison to the loss in readability on a 200:1 trash TN display. Even on a good display at low brightness low contrast design choices can be infuriating (e.g. phone at min brightness in pitch dark room)


I honestly don't get why we just don't have options. That's what we did back in the Dark Ages for things like resolutions. Let people set whether they want high or low contrast.

I also feel this re: dark and light modes, as somebody whose astigmatism makes dark mode LESS usable than light mode. I get it, dark mode is awesome for most people. Please don't make it mandatory.


Will echo this. It’s astounding how bad the worst laptop screens are, and how many machines that you’d think have decent screens don’t (like many Thinkpad models, which used to and still might by default come with some of the worst TN panels I’ve ever seen).

I once had to do some photoshop work on one such machine and it was nigh unusable because many details weren’t or were only barely visible. So while I wouldn’t recommend making a screen like that one’s primary screen, yes absolutely test on it periodically.


This is done quite often in audio production.

The Auratones, which mimic real-world consumer equipment (i.e. mono, low bass, and a rolloff at the top), had a revival because of that.

Another company offers speakers that correlate to the ones in TVs, for audio engineers in the movie industry. The body is made from the same material as TVs and they have no parallel walls.


The same survivorship bias analogy goes for usage of Firefox and JavaScript in general. "Our users don't block JS and are 99% on Chrome." Because the tech-savvy users just block your surveillance stuff.


But we don't care about people who block our ads anyway


Even more than a crappy laptop, please please test your application on bad/degraded networks. So many apps completely glitch and break down in bad network environments.


I've used toxiproxy [1] to imitate various network problems (slowness, lost packets, dropping connections, etc). It works pretty well, and is even amenable to running during functional / integration tests.

[1]: https://github.com/shopify/toxiproxy


This looks amazing


This.

2 rules:

- Always indicate that your application is performing a network action (showing a spinner, disabling forms/buttons while submitting, etc.)

- Always catch errors (including timeouts) and give some form of feedback to the user when something went wrong.

Try letting your local development backend return a 500, timeout or a 4xx error every now and then, and check if your frontend handles this in a graceful manner. It should at least give some feedback to the user that the operation failed.

You can emulate a slow internet connection in Chrome DevTools, though I don't find the experience very accurate. You can also just add a sleep() call somewhere in your local backend. Maybe inject a sleep() into your acceptance environment, and let the test team work through the scenarios with that sleep() call in place.

I've seen so many frontends that don't catch errors, and just show the spinner indefinitely. Or worse, show nothing at all. This is extremely confusing to less technical people.
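The second rule can be sketched as a small wrapper: every network call gets a deadline and a user-visible failure path. This is a sketch only; `showErrorBanner` is a hypothetical stand-in for your real "something went wrong" feedback:

```javascript
// Stand-in for real UI feedback (toast, banner, inline message, ...).
const showErrorBanner = (msg) => console.error(msg);

// Wrap any promise with a deadline so the UI can never spin forever.
function withTimeout(promise, ms) {
  let timer;
  const deadline = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms} ms`)), ms);
  });
  return Promise.race([promise, deadline]).finally(() => clearTimeout(timer));
}

async function fetchJson(url, ms = 5000) {
  try {
    const res = await withTimeout(fetch(url), ms);
    // fetch() does not reject on 4xx/5xx, so check explicitly.
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return await res.json();
  } catch (err) {
    showErrorBanner(`Request failed: ${err.message}`); // never a silent spinner
    throw err;
  }
}
```

The caller's catch block is then the natural place to tear down the spinner and re-enable the form.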


> Always catch errors (including timeouts) and give some form of feedback to the user when something went wrong.

The worst offender I hit regularly is Google Meet. I use ADSL most of the day because it works just fine for most of what I do, which is either local work, ssh, or looking up technical resources on the net. Even Google Meet works fine (I don't use a cam). Until I share my screen, which contains nothing but text in full screen. Then Google Meet will forever freeze the shared picture after a couple of minutes, without telling anybody so. It's ridiculous that they cannot handle this reasonably; you don't need 30 fps to share a screen of slowly changing text. It's completely user-hostile that there is no message for either the sharer or any participant that sharing has frozen and will never recover. This is paid usage of Google Meet.


Having used a range of video conferencing systems, your description of Meet lines up with pretty much every Meet session I've been involved with. I'll meet the same group of people on Zoom another time and it usually works much better on Zoom.

Meet = glitchy low-res videos, freezing screen sharing, etc. even on high-quality fibre connections.

Zoom = smooth high-res video and generally usable screen sharing (with minor glitches that usually sort themselves out in seconds) even on low-quality connections.

The difference is like night and day, no joke...


On 4G, which gives me 20 to 40 Mbit/s uplink, I have no major issues with Google Meet. Sharing terminals with a red-on-black font does not work well. Whether Zoom works better I don't know from my own experience; I hardly ever use it because my employer has a Google subscription.

Well, sharing terminals over telcos is backwards anyway. We did that 20 years ago over 56 kbit/s modem lines using VNC. And the compression was lossless, the result pixel perfect.


It's frustrating that this is so easy to do in one click using the browser's in-built developer tools, and yet no dev team spends their time on it (or even know that it exists).


The built-in tools are a good start, but they don't simulate most network issues and don't do anything about existing WebSocket connections.

On Windows I used clumsy, which was much more realistic. On some tablets, I used my microwave oven, or my feet to walk away from the Wi-Fi.


I can't find a source, but I heard that Google used to purposely slow down their corporate network on some days to force all teams to dogfood test their software on slow networks. Or maybe they just offered an alternate slow Wi-Fi network to make testing easier, not forcing everyone to use a slow network.

But forcing managers to use a slow network would be an effective way to get them to prioritize performance. If using a slow network is a choice, they wouldn't.


Idk about the main network, but they do have a separate "Google-2g" network you can connect to that has degraded performance for testing purposes. Maybe they force people to use it when testing apps?


The Google-2g network is what I must have heard about. Thanks!


However, there may be another dark side to the story.

Almost a decade back I talked to a game dev, asking him why the game they released demanded such high hardware requirements. The two reasons he offered were: 1) It costs money to produce and maintain content such as Level of Detail assets to enable support for low-end devices, and 2) People who cannot afford a good computer usually also cannot afford to pay for the game.

I think it's also somewhat fitting in this case: the industry only focuses on making money out of the people who can, or potentially can, afford the service, and leaves out those who can't. The user-end requirement is just an implicit barrier that allows companies to generate rainbow farts.

No, I'm not criticizing those companies that actually employ such dark patterns. After all, everyone wants money. However, this kind of practice can be costly as well, in the form of 1) bad UX for paying users, and 2) higher cost of bandwidth (for both the user and the company).


Then Roblox and Minecraft came along and showed everyone there was a large audience of kids on crap PCs and Chromebooks who will also play games and have parents who will spend for them.


This. My kids play Minecraft on the Amazon Kids tablets. They started with an older version with only 1.5 GB of RAM, and the game worked well enough for them to enjoy it. It worked for many years and only had problems with large outdoor areas. Now they use the newer 3 GB kids version and it works really well. I am very happy not to need to buy them 400€ tablets for this game.

(Btw, I developed a medical DICOM viewer for large CT and MRI data on a 10-year-old Dell laptop. The viewer was always fast on customer machines. If it would not work on my machine, I tuned the software until it did ;) )


There always were, there just wasn't the payment infrastructure to hoover up our parents' money in the 90s. I had to behave at computer shows and talk nicely to the adults about building computers before I was bought a video game (or make my parents drive me to a game store). Also they weren't full of micro-transactions.

If there had been ways to buy games online + digital downloads, I would genuinely have feared for my parents' pocketbooks.


I grew up in the 80s and would regularly get magazines on the weekly grocery shop with the demo cassette on them. The argument isn’t that videogames only just started to exist and that kids only just started to play them.

It’s not really comparable because there wasn’t the ubiquity or range of compute power there is now. Modern games tend to nestle at the top of it as the parent comment suggests. There is a whole market of people that want to play games that have bad hardware. And my point is that there was a sea change about a decade ago in that respect that the wider industry is only just catching up to.


There were tons and tons of games similar to Minecraft with low system requirements. Also, Minecraft was originally written in Java, so it has the same issues as above.


The difference is that Minecraft and Roblox were phenomenally successful and demonstrate the addressable market exists. Making games for low-spec hardware is one thing. Making games that people with low-spec computers want to play is another.


I imagine the problem with game development is that, for 99.95% of the game development cycle, you are creating only internal builds for fellow developers on workstations.

And this is game development, the industry famous for over-scoping everything because it's a winner-take-all economy. Followed by death marches to implement said over-scopes.

Doing anything special for the bottom quartile of the hardware distribution goes out the window.

Realistically I just think this pattern is true of every domain.

95% of software is made up of throwaway prototypes, where thoroughly thought out engineering isn't necessary because the business case demands minimum viable product for very good reasons.


> 2) People who cannot afford a good computer usually also cannot afford to pay for the game.

I wonder if they had any kind of proof of this. I would think that there are plenty of kids who could get their parents to pay five bucks for the game, but not 500 bucks to buy a fancy new computer.


If you look at the flip side, people who can afford an expensive gaming PC can probably also afford a game, which isn't nearly as expensive.


Willingness and ability to pay for one thing doesn't mean willingness to pay for other things. There are plenty of cheapskate rich people and people who consider themselves poor on six figure salaries.


Yea, the way this worked out for me as a kid was that I needed all the money I got to upgrade my PC and wouldn't have any left over for buying games so I just pirated these games.

Anyway, without numbers that's just a theory and might not hold. Anecdotally a lot of people are worrying about whether they can play games on their budget build. And they would rather buy games than spend a ton on a faster PC. And I believe consoles sell for a similar reason: they are cheap. It's the actual games that console gamers pay decent money for.

(Of course the PC game market is now extremely saturated and there are deep discounts & giveaways constantly somewhere)


Kids who can't afford fancy computers grow up into rich adults who have no nostalgia towards your game franchise


Like exposure, nostalgia doesn't pay the bills.


Notably, this is one reason why WoW managed to hit such a wide audience and become so successful. The art design made the game look amazing, but the hardware requirements were so low you could run it on a potato. Its contemporary, Everquest 2, managed to look uglier while being much harder to run.


Gmail has nearly 50 MB of JavaScript that thrashes I/O so hard that clicking the search bar and typing too soon will cause it to skip letters (because the search bar is probably some Angular-powered abomination).

It is and was a regression from the previous design which was fast even on 3G connections.

It's driven by Google engineers running their bloatware on i9 MBPs with 64gb of RAM. Oh Gmail is slow? Have you tried not being poor?


They do still support the basic HTML version:

https://mail.google.com/mail/u/0/h/


Is that really the previous design, though? I remember being presented with this button before and ending up on an extremely bare-bones version of Gmail (like something built for 2G feature-phone browsers) that didn't have support for changing mail settings, etc.


They seem to have disabled the option to make that the default view. Or rather, the option is still there but it doesn't do anything, always opening the bloated interface.


Hey now, you can make plenty of fast Angular apps; there should never be a case where you skip letters. It's all about how you use the tools available. There are dozens of barebones websites that recursively SELECT one row at a time from a remote database, and those are just as painful.


To those who test their work on crappy laptops,

Thank you.

Sincerely,

Craptop user

All work I do is tested on a crappy low-powered computer because that is what I use to do work. If it is not fast on these underpowered computers, then it is not usable (for me). That means on anything better than a craptop, generally, performance can only improve. A similar principle applies with shell scripts. I prefer to use the ("crappy") scripting shell as the interactive shell. That means scripts generally run fast under all conditions, and I never have to worry about scripts not working due to use of interactive-shell (e.g., Bash) features.

Target the lowest common denominator. That's how I work. No overpowered workstation with multiple monitors, or expensive "developer" laptop.


There are lots of good options for sub-$200 laptops at Walmart right now. Chromebooks or little Windows machines.

Every web dev should use one.

I am fascinated by the low-end options; you learn a lot about a system when it's pushed to its limits. Or maybe you learn even more about your pain threshold...

I've heard it said that it's more difficult to make a $10,000 car than it is to make a $1,000,000 car. I don't recall the source; via Horace Dediu maybe.

My first job out of college was at a spaceship company. One of the most senior engineers told me of the footrest he had designed for early 747 airliners. First Class. He considered it his greatest work.

That footrest was quite literally more difficult than rocket science.


> I've heard it said that it's more difficult to make a $10,000 car than it is to make a $1,000,000 car.

It is, but probably for a different reason than you'd think. In automotive, the difficulty is not designing the car, but the facility to mass-produce them. Elon found out the hard way. In his words: "it's trivial to design the machine, but it's hard to design the machine that builds the machine". He is currently facing the same challenge with the production of the Raptor engines for Starship.

A million-dollar, low-volume, "hand made" car can be built by a small team with relatively few resources. Operating a factory that produces thousands of affordable cars per day takes tens of thousands of people through the entire supply chain.


producing something in a very high volume consistently is difficult. my SO's father works in plastics manufacturing. they make tiny plastic housings used in cable connectors and tiny plastic balls for machine bearing seals.

both of these products have surprisingly high specification, and producing hundreds of thousands of them each day is a serious challenge.


Elon (and the other shareholders) didn't do shit; he just owns the facilities and the rights to reap the profits.


I have been fascinated by this question ever since seeing an all-Apple school employ MacBooks as a minimum bar. Using Samsung DeX, and now budget ARM Chromebooks, I have found most websites do work even on MediaTek and Qualcomm chips, and I design my own experiences to work there accordingly. The Lenovo Duet is an incredible device; it's sad how poorly optimized some of the heavier websites are, and you can't do anything about it.

I find the bias towards powerful desktops as the only place any apps are written and tested ridiculous, and something we can work to correct long term with progressive web apps.


I work in Education and I have to admit things are just easier in the Apple ecosystem. Things like AirPlay and AirDrop etc are lifesavers for many of the less tech-inclined staff in schools who can actually manage to use these (and other) tools. It's not just about websites and apps--there are accessible QOL features macOS offers that simply aren't present or are packaged up in a non-obvious way in ChromeOS or Windows.


Often the trade-off on those sub-$200 laptops is a really poor screen and speakers and other things that make for a crappy experience.


Crappy phones, too! At one of my previous jobs there was a "device lab" where they had all sorts of phones and laptops for anyone in the company to come in and use and test things on. They had various generations of iPhones and iPads, high-end and low-end Android phones, some Windows laptops and an old MacBook. The room was open to all and freely accessible by anyone. I saw UX, dev, QA, product managers, and even the occasional person from sales testing the site in there.

Every time I went in there and used an old, slow phone it reminded me to consider the person using that device for real. It's so easy to forget when you're working on your giant monitor with a super fast computer and a high end phone in your pocket.


We had that too and our QA team made good use of it.


There is a hidden feature in Chrome Devtools to apply a CPU slowdown...

Simply head to Devtools > Performance tab > Gear icon > CPU > 6x slowdown.

That isn't a perfect representation of an old laptop (doesn't throttle GPU operations or trigger low memory behaviours), but it's a good start. As a bonus, it makes your fans spin up wildly whenever loading a page!


Not only CPU but slow networks and mobile screen sizes too. These are all very helpful for quick sanity checks and reproducing races.


Lighthouse also emulates slow-ish web on a mobile device. Though I would take its results with a grain of salt: it can show near-perfect green scores for a website that takes 3 seconds to render


Unless you mean that the menu is hidden behind the gear icon, calling it a hidden feature is a stretch.

https://developer.chrome.com/docs/devtools/device-mode/#cpu


Sufficiently deep in menus that I suspect most people here on HN would consider themselves familiar with devtools, yet wouldn't have been aware of this option.


Also available in Firefox Developer Tools. Just click on `No Throttling` and there's a menu of network speed options.


Similarly, I often leave throttling on... a very nice way to tame the dopamine rush :)


I believe that most "web developers" have absolutely no idea how to build a website using bare HTML+CSS. Actually, I've worked with "web developers" who barely touch code, instead using all sorts of tools and frameworks without fully understanding what they are doing. That's also why websites look more and more alike. And that's the silent majority of the industry.


There is no way to be a modern web developer without knowing everything you need for raw HTML/CSS, unless you are talking about Squarespace or basic WordPress work.

Things like react are purely extra knowledge needed over the top of the basic website building blocks.


Back when I was consulting pretty much every customer I visited had people who weren't proficient with the technology they used. The corporate world is particularly prone to this. You would work with a team of ten and find two people were doing the bulk of the work. I doubt this has changed since then.


OK, HTML maybe, because HTML if anything is easier than it was in the 90s/00s (no more tables, or divs everywhere).

But in my experience, 80% of React developers could not lay out a page with raw CSS. I am not even talking about anything "fancy" like transitions or animations. They just import libs and frameworks and customize a bit.


Websites look like each other because of Jakob's law[1]. Not sure how good the web developers you work with are if they don't understand HTML/CSS/JS.

[1] https://lawsofux.com/jakobs-law/


As someone who came up during the late 90s and coded many a page in apps that were barely more than Notepad, I often have the same sentiment. (I get the same feelings about the backend and writing SQL) However, the web then was very simple. I wonder if I was a 24 year old today, trying to get a handle on all the various front-end frameworks, all the security vulnerabilities, and having to target mobile screens and giant 4k displays at the same time, if I wouldn't consider the basics low-hanging fruit to abstract away.


As I suggested in my other comment, how many backend developers don't know SQL? I would also suggest that highly skilled cabinet installers aren't always master carpenters.


> As I suggested in my other comment, how many backend developers don't know SQL?

I mean that isn't acceptable to me either.


I still encounter websites/UIs that are laggy on my high-end workstation; I cannot imagine how unusable those websites would be on a low-end laptop.

The funny thing is, most of the time those websites have no complex requirements. It's mostly news sites with things like a COVID map embed, but the map has resolution to the meter for country boundaries... A few-KB image that opens the interactive map on click would work just the same.

And for website with heavier requirements (web app), there are two camps, those which are excellent (like onshape, that thing is amazing, or draw.io) or painfully slow to the point I had to ditch them (notion, clickup).

I have a feeling that many of those laggy websites could do some key optimizations to make everything an order of magnitude faster/lighter.


Quite interestingly, while building a WebGL application I had a slightly reverse experience. While it worked surprisingly fine on some older potato laptops, the experience on MacBooks with retina screens was consistently problematic. That was due to the sheer resolution that had to be rendered for each frame. It took quite a bit of optimisation work to improve it.
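A common mitigation for this (an assumption on my part, not necessarily what this commenter did) is to cap the device pixel ratio used for the canvas backing store, so a retina display at dpr 2 doesn't pay 4x the fragment cost of a 1x panel:

```javascript
// Sketch: compute a backing-store size with the device pixel ratio
// clamped to a budget (1.5 here) instead of using it raw.
function backingStoreSize(cssWidth, cssHeight, devicePixelRatio, maxDpr = 1.5) {
  const dpr = Math.min(devicePixelRatio, maxDpr);
  return {
    width: Math.round(cssWidth * dpr),
    height: Math.round(cssHeight * dpr),
  };
}

// In the browser you'd apply it roughly like this (hypothetical
// `canvas` and `gl` variables):
//   const { width, height } = backingStoreSize(
//     canvas.clientWidth, canvas.clientHeight, window.devicePixelRatio);
//   canvas.width = width;
//   canvas.height = height;
//   gl.viewport(0, 0, width, height);
```

CSS still scales the canvas to its layout size, so the image looks slightly softer on retina but renders far fewer fragments per frame.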


> You can run Linux from a Windows subsystem to run most development tooling.

To be honest, this feels like the author never used a "craptop". WSL, even with a quad-core Haswell laptop and 8 GB, is pushing it. It's better to RDP / develop on one machine and run on another.

Having said that, it's important to always test on low-specced machines. And not just for websites (we test our C++ code on some 2009 machines).


I think it says more about your tooling than the hardware if it can't run well on 4 cores and 8 GB of RAM.


Microsoft was famous for testing the Mac version of Office on base model 60MHz PowerPC machines


Apparently they had whole labs full of banks of machines in many common configurations to test their software. However now the "insiders" get the privilege of doing it for them for free under the guise of shaping the product direction.

Windows Updates are drip fed out to classes of machine so they can limit the damage when they hit a class that inevitably fails because it was never tested before rolling out to production on that hardware.

I appreciate they can never test every configuration but based on what they used to do it's clear they identified a need for this kind of testing in the past so what's changed? Why don't they need to do this testing anymore? Why make your users do it for you, for free, no less.


> Why make your users do it for you, for free, no less.

I think you answered your own question.


They did the same for Windows 95: developers were given 386s with 4 MB of RAM to test on, because that was the minimum requirement for Windows 95.

And in the end, it ran -- just about -- on those specs.


Did later editions (e.g. 95b) still run on the 386? I remember running 95b on a Pentium 166 MMX with 32MB of RAM and even that was a challenge at times!


OSR2 ran pretty nicely on a 486 DX2 with 16 MB of RAM.


The same with Dave Cutler, the lead on Windows NT, who made sure everyone dogfooded NT as soon as possible.


I often test my Android apps on a Nexus 5. Though its battery is so bad it sometimes shuts off while booting, even when plugged in. I do wonder if they're still selling batteries for this thing.


Don’t forget to implement real user metrics like Sentry. I add in timing checks for critical features that log out to my analytics and have a regular habit of analyzing the data.

Your user base greatly influences what kind of performance optimization you should be doing. In some cases it’s worth it, but if your primary user has a fast machine, it’s generally not worth it. Reminds me of the time when I had a project manager who insisted on optimizing page load time for a heavy mapping application.

And if your app is a simple shopping cart or a few forms, you probably don’t need a JavaScript frontend framework. I can’t tell you how many times I’ve removed all the unnecessary React and Vue and increased performance by at least an order of magnitude.
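The timing checks described above can be sketched roughly like this; all the names here (`timed`, `reportMetric`) are made up, and a real setup would forward the numbers to Sentry or your own analytics endpoint:

```javascript
// Wrap a critical feature in a timer and hand the duration to an analytics
// hook. `reportMetric` is a hypothetical callback; swap in whatever your
// RUM/analytics pipeline expects.
function timed(name, fn, reportMetric) {
  return function (...args) {
    const start = Date.now();
    try {
      return fn.apply(this, args);
    } finally {
      // Runs even if fn throws, so failures are timed too.
      reportMetric({ name, ms: Date.now() - start });
    }
  };
}
```

In a browser, one option is to batch these measurements and flush them with `navigator.sendBeacon`, so reporting survives page unloads.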


Try it on a low end smartphone under $50 as well. (Some are under $30)

Most things are painful on one of those. Be the exception


It's not just performance here: colour grading is really important too.

Many years ago I had a fascinating incident with a client. We were building a custom skin for our product for them, and they had quite a striking brand identity that we tried to match. When we showed them what we had built they complained that it was illegible.

Eventually, after visiting their office, we found that they were all working on cheap Windows laptops attached to horribly configured monitors - and our site really was illegible. We had done our design work on an iMac!


Indeed. Buy a crappy 15 year old LCD TV with HDMI and hook it up. Most of the really cheap Chinese models have terrible gamma that means half the colors look the same.


I highly recommend setting (non-HDR) screens to sRGB color mode or at least gamma, to have at least decent color reproduction.


Yes, do this if you want to actually use the screen properly. But do not do this if you want to test how awful your content looks on other people's setups.


Most people in large cities in the US and Europe are not aware that many people don't connect to the internet regularly, nor use fast laptops. Always make software that works without internet or a fast processor. Software that works on a crappy laptop with a slow connection, or no connection at all, is the only software I'll use or buy.


An old colleague of mine worked on a project bringing internet to rural communities in East Africa.

His reasoning about network usability was always useful, especially considering jitter and latency.

Also, it is surprising how usable some internet connectivity is compared to none at all. Having a shitty ADSL-based setup is far better than no internet at all.


Even if your users all use high end laptops, this is still a good idea. Making your site fast enough on a craptop will make it blazing fast and delightful for everyone else. It’s like how baseball players warm up with weighted bats.


Test it?! I develop it on a crappy laptop!


Fitting, because that EOY 2021 header makes the website unbearably slow when I have the browser maximized on a 4K screen. I guess the CSS-Tricks people should read the blog post too.


There’s a similar sentiment in the music/audio mastering world; test your mix on crappy speakers.


I had a similar experience building a NAS on a SoC.

Suddenly things like transport or at rest encryption are no longer perceived to be free but really limit your performance.

It was the first time I actually had to be conscious about choosing appropriate ciphers and hashing algos rather than just using the defaults.


What config did you end up setting with, if you happen to have the info easily to hand? I might be able to speed up SSHing to some of the slower hardware I have here.

Also, I'm very curious what SoC this was.


It is/was a RockPro64 with 4GB of RAM.

The biggest issue I had was with ZFS native encryption, which uses aes-256-gcm by default.

The SoC has AES support, but with GCM the hashing function is not AES-based, so I had to change to aes-256-ccm to benefit from the AES support.

For sshd I severely limit the available MACs and ciphers to use chacha20, and I force that on the client side as well.

I will post the relevant lines tomorrow.

WireGuard also costs much more CPU than I am used to.


Very useful to know. Thanks for the reply!


> apps like virus scanners that are resource-intensive and difficult to remove.

My brother bought a premium Lenovo laptop. He's not very computer literate (he's a lawyer).

I cautioned him to run "Add / Remove Programs", remove the McAfee trial, and turn on the built-in Windows virus scanner. I told him I'd talk him through it. Of course he didn't.

A year later when I visited him (we live on opposite ends of the country / world depending on the time of year) I saw that he simply uses his computer with giant red McAfee warnings popping up all the time: "WARNING YOUR COMPUTER MAY BE UNPROTECTED", with a link to buy McAfee.


At the beginning of the pandemic, my team worked on optimizations as priority number one, as we got a lot of new users with terrible configurations who wanted to do a lot more than what that hardware was ever intended for.

I managed to find a crappy laptop to do some testing and work on optimizations, and it wasn't pretty. No development tool would work on it. My recommendation is thus: aim low, but not the lowest. You'll still be able to measure impact of your changes, and you'll be able to collect some data and other measurements about what is slow.


Dev laptop need not be your test laptop. You should still test on that machine even if you can't develop on it. And I have a feeling this test also indicates a lack of optimization in the dev tools as well.


If you don't have a crappy laptop handy, you can test slow connections with Network Link Conditioner (if you have a Mac) and small screens with any browser's responsive design mode.


Chrome and Firefox devtools (probably others too) also have network throttling to simulate slow connections.


HN is so full of rich silicon valley assholes, they've never even had to make do with a cheap laptop. They need an article like this to remind them how the proles live.


Now if only they had decent desktop screens...

I have gigabit internet and powerful desktop, with enough ram... But still they don't develop for my 1440p screen, but some crappy tablets...


Analytics provides a lot of insight into that. Not so much CPU/RAM/disk speeds


In my experience, those who don't care about performance almost always happen to be bad developers. I've interviewed plenty of those.

I've worked with machines so underpowered you can notice execution time difference between minified and non-minified code. And where CSS animations are considerably slower than GIFs. Oh yeah.

Right now I am with people who are pretty good at architecture but kind of suck at the pieces that power their designs. It's an ego problem. A big one, at that.


Please. I beg you to buy a 2GB of transfer cell plan from a cheap carrier and try your sites via tethering. If you can only load your SPA a couple of times before your line gets disconnected, you've fucked up your design.

I had a plan with 35GB of transfer, and the modern web meant I was having to restart my plan every single week. Some of us live in internet deserts where there is no hope of getting a wired connection (downtown Chicago, in my case).


I don't think I've ever seen a site that would use up 2GB of data being loaded "a couple of times". What sort of site would do that?


Netflix ;-)


> https://twitter.com/davatron5000/status/1429866381831544836?...

As a business, creating ideal scenarios for everyone is difficult. Mostly I would focus on the major target audience which pays me the most, and add optimization to the roadmap when I want to experiment and test that market for my product/service. Running a business is complicated as is.

For SaaS businesses focusing on tech companies this wouldn't be an issue. For example, using Figma requires a good machine: a Speedometer score of 80 should suffice, and below that things aren't so great. But most tech companies using Figma will have good machines. Yes, there is an accessibility issue for newcomers and college students who cannot use Figma due to a crappy laptop, but small services businesses like mine even help them buy a new laptop for work.


The article omitted an important aspect: craptops have crappy screens with terrible contrast. It makes some elements really hard to discern, even though they look fantastic on your development machine.

The screen size can also have an impact. Developers rarely use 1366x768 monitors, or even low DPI displays.

There's also internet speed. Wired internet is fast, but people use your website on crappy hotel wifi, or in the underground.


This also applies to desktops. Where contrast and brightness can be anywhere and they might even be placed in direct sunlight...


On the Google campus there are at least two wifi networks available, one regular one that provides good connectivity, and one that provides degraded connectivity simulating a 2G cellphone connection. Of course one can simulate this easily with a little effort but I found it a nice touch to make this available to everyone at the push of a button.


I actually find Chrome's developer tools quite handy for this: for example, emulating slow Internet connections, checking your Lighthouse score, simulating different geographies, being able to adjust between many responsive device dimensions etc. Super worth it to get familiar with the whole host of tools that come installed with your browser.


Reminds me of game designer Jason Rohrer who developed his games on an old laptop that the previous owner wanted to throw away. A link from 2010: https://usesthis.com/interviews/jason.rohrer/


Raspberry pi works well for this


I second this! A Raspberry Pi 400 is often my primary web browsing device, while my workstation is busy with other things. There's no reason why a website should overwhelm it.


this, and old mobile phones/tables


I go one step further, all software I make is developed on Raspberry 2 (2W, JavaSE server) and 4 (7W, C/OpenGL client).

That way I don't have to worry about missing any performance beat!

Since energy prices are guaranteed to increase forever, I'm surprised this is not default behaviour.


I do it sometimes on a Pinebook Pro. It's like raspberry pi but with a solid aluminium body, a good keyboard and a good enough screen.


> Wealthy individuals can, and do, use lower-power technology.

Surprisingly true. I've seen quite a few fairly wealthy people use some janky Android phones. Their reasons to use them can be best summarized as personal preferences, but the fact remains.


>> Powerful devices can become circumstantially slowed by multiple factors

The slowest device we test with is actually a very expensive Chromebook with severe thermal throttling issues. Using our WebGL app that runs fine on a low-end Chromebook tablet will bring this expensive Chromebook to its knees within minutes. It throttles down to like 600 MHz or something.

The worst part is that every tech reviewer seems to have one of these things in a drawer somewhere that they pull out for testing. The usual duty pattern of Chromebooks is very bursty, e.g. page loads, but sustained GPU usage just crushes this particular device.


Any recommendations for a solid, representative crappy laptop? $100 xSomething Thinkpad off Ebay? Or is there something more representative of today.


T500/W500, 4GB of RAM.

It's something like 13 years old, and ~$60.

It can run almost every website that hasn't been web-dev'ed into oblivion.

It balks at anything with memory leaks and inefficient CPU/GPU load.

It can run YouTube/Facebook/Twitter (surprisingly, for the last).

It can't run https://github.com/


> It can't run https://github.com/

It… what? I don’t know about Facebook, but GitHub is a good deal lighter than YouTube and Twitter. GitHub is one of the extremely few websites developed by a large number of people that aren’t atrociously resource-heavy and JavaScript-dependent.


I meant the very front-page/the landing page/i.e. https://github.com/

It cannot run it at all, because of all the graphics nonsense going on.


Fairly impressed that it runs Facebook. Out of all the websites I use, I find Facebook to be the slowest (presumably due to its heavy use of JS). This is despite most other websites performing fine on my personal 2017-era X1 Carbon and work 2017-era MacBook. I'd have thought it would fall off the rails on a 13-year-old machine!


Buy a 5-year-old laptop off eBay, yes. In good working order, but with a slow CPU, little RAM, and a mediocre screen (not even FHD). A ThinkPad is a fine choice because they are usually made to last, so it won't keep breaking on you.


I'd go older. There's no shortage of 5-10 year old machines out in the wild. Plenty of very cheap t420 and x220s available for less than $100 on eBay.


5 years is not that old, I believe.


Yeah, I agree. I use a 10-year-old desktop and still don't consider it to be old.


While this is good for testing during development, a crappy laptop with a fresh install of Windows will be far less bad than a crappy laptop with a 6-year-old, full-of-crap Windows.

I suggest doing the final test of your site on:

* A friend's old laptop, on their wifi, still logged in as them, and with them clicking the buttons while you watch over their shoulder.

* The same, but a friend's old Android phone, preferably with 'Samsung Browser'.


Yeah, a T-series ThinkPad. We use them at work as dumb terminals for forms, COVID compliance nonsense, and random data entry points. They are surprisingly durable and still pretty solid.


Not just for that. I use a T460 with an old i5 as my daily driver. Runs like a charm. Helps that it runs Linux but it was OK with Windows 10 too.


Try the shittiest laptop at Walmart. Like an HP stream.

Remember the low res 1366x768 screen.


> Remember the low res 1366x768 screen.

The low... um. One of us is in a bubble. I'm typing this on a laptop with a 1280x800 screen and it's perfectly fine. Like, not super spacious but I code and browse the web on this box and I don't have any problems. (To be clear, it might well be me in the bubble; this laptop is over a decade old)


It’s low-res relatively speaking, as it is roughly the lowest resolution still in use. Mid-range laptops these days come with 1080p displays, and high end is somewhere at or under 4K.


Ah, you're right; if I look on Best Buy and Newegg I can buy new machines below 1080p but they're the very, very bottom of the barrel ($100 Chromebooks and those weird "technically a laptop" off-brand Android... things). Well, here's to a crisper future:) (That apparently arrived while I was distracted)


I don’t know if bubble is the right term here. WXGA (1366x768) is more common, and more fubar, as the horizontal resolution may vary between 1360 and 1372.

The point is to feel the pain. By any modern sensibility both resolutions are pretty awful, and experiencing it on a $200 consumer laptop is icing on the cake. Although I have clients who routinely test applications on ancient 800x600 displays due to legacy constraints.


Trying small displays is essential!

My software engineer teammates get specially approved 4K displays and “engineering” laptops. People who use the apps are often restricted to “business” laptops. Conference rooms still use old projectors that are even lower resolution. We have no control over the displays our customers use.

I’ve been teaching my engineers to use the responsive design tools in their browsers. For UI components that render differently based on @media queries, it’s been helpful to add stories for those breakpoints to our Storybook component library.
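For the Storybook approach, a sketch of what pinning breakpoints can look like; the component, title, and viewport widths are invented, and it assumes the viewport addon that ships with Storybook's essentials:

```javascript
// Hypothetical Storybook CSF story file. ProductCard and the breakpoint
// widths below are made-up examples; adjust to your own @media queries.
import { ProductCard } from './ProductCard'; // hypothetical component

// Viewports matching our CSS breakpoints.
const breakpointViewports = {
  phone:       { name: 'Phone',        styles: { width: '375px',  height: '667px' } },
  cheapLaptop: { name: 'Cheap laptop', styles: { width: '1366px', height: '768px' } },
};

export default {
  title: 'Layout/ProductCard',
  component: ProductCard,
  parameters: { viewport: { viewports: breakpointViewports } },
};

// One story per breakpoint, pinned via defaultViewport, so every visual
// review (and any visual regression run) covers the small screens too.
export const Phone = {
  parameters: { viewport: { defaultViewport: 'phone' } },
};

export const CheapLaptop = {
  parameters: { viewport: { defaultViewport: 'cheapLaptop' } },
};
```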


When preparing to present to a remote audience, check how the slides will look for people connecting with a phone.

When recording a demo video, try to record at a lower resolution like 720p to save bandwidth and make it easier for people with small screens to watch.


Even a low-end modern system only takes you so far back in time.

Keep in mind that hardware improves roughly in line with Moore's Law. The gradation from low-end to mid-market might only be a few years of development.

The 10-year-old system is probably going to be much worse than a current low-end buy.


Try it first.

Moore’s law hasn’t been kind to x86 laptops, and the tricks used to speed things up are missing from low end celeron processors and binned SSDs. The cheap hardware and poor touchpads add to the charm.

I would bet that the 10 year old Lenovo would be surprisingly better than the shitty 2021 device.


CPU clock speeds haven't increased.

There's been SSDs, increased cache, increased memory speed, and more cores.

You might want to go back a bit more than 10 years, but yes, even a reasonably high-grade older system in my experience really dogs out on the current Web.


Just see what's on offer at your local thrift shop.

I don't recommend garage sales, however. People tend to overvalue their electronics. I tried to buy a radio that was worth maybe $20 from some guy. He wouldn't part with it for less than $150 because it reminded him of his father. That's nice, but I'm not paying $130 for your memories.


Could it be that he was forced to sell it?


Maybe. His wife was standing right there. But I'm still not going to pay $130 for sentimentality that I cannot share.


You were not supposed to buy it, that was the point.


Just go to Walmart or Best Buy or your local equivalent and buy a cheap laptop, something that’s $600 tops and feels like the plastic is just barely held together.


That would get you a reasonably fast processor and probably at least 8GB of RAM. That's not as low-end as many of your customers may be using.


$600 would be a prime dev laptop for me. I use the cheapest, non-chromebook, I can find then throw FreeBSD or Linux on it. I prefer desktops and spend much more money when I build one of those.


The old target was a 2008-2010 MacBook Pro, one with a Core 2 Duo CPU


One time I was working in a co-working space with terrible internet (<5 Mbps). It was OK until I had to order lunch from Sweetgreen: their site was so CSS/JS-heavy that I couldn't even load the page to order my salad. Made me laugh at how ridiculous it was.


I use cheap Acers; for $150 they run Windows 10 slowly enough to count as craptops. They also run Linux just fine, because Linux has less overhead. Under Windows 10, these Acer laptops spend a huge amount of CPU time just struggling to keep up with the OS itself.


I understand you started with "cheap" but don't generalize brands like this:

> Under Windows 10, these Acer laptops spend a huge amount of CPU time just struggling to keep up with the OS itself.

My colleague has an Acer laptop with an i9 processor, an RTX 3080, and 64GB of RAM running on RAID NVMe drives. It's screamingly fast.


I have one I bought new for $300 on Amazon in the summer. It's my main Visual Studio development PC. I have it hooked up with a second monitor. Honestly, it runs everything perfectly under Windows 10. The only upgrade I performed was to add some extra RAM.


I would substitute the term 'netbook'. Several brands make them, along with Acer.


My personal rule of thumb for native development is to almost always run with -O0 and ASAN / UBSAN enabled on my dev computer. If the software is too slow that way it will be too slow on low-end machines (Pi 3) at -O3 -flto -march=native.


It should be pointed out that you can simulate slower internet speeds in your Chrome dev tools. The others (display/RAM/CPU) may be easier to test on a craptop, but a VM could also simulate resource constraints.


Mind you, VM constraints are not the same as physical CPU contention, and in some cases this could be a bad test (although unlikely for web developers).


I test all of my apps on the oldest still-supported iPhone (currently the iPhone 6s) and optimize accordingly; it carries over to Android performance (cross-platform apps) pretty nicely as well.


Crap phones too. My phone is four years old and is borderline unusable because most websites don't bother to load all the way. Most apps have trouble loading too.


If you want to write fast software, use a slow computer.



Would advise testing on iOS 10 devices as well; many websites don't work well on an old iPad 1 for no good reason.


I like the idea of a testing Tuesday, but I'm not sure how many companies set this up or what it actually entails.


I just turn on throttling in the network tab of the Firefox/Chrome developer tools. Plenty realistic.


That throttles the internet bandwidth, not CPU, memory, disk reads/writes or any other constraints that lower-priced hardware will have.


It’s true but as an approximation it’s pretty good no?


CPU core count, CPU frequency, available memory, and disk read/write speed all affect performance too.

CPU cores and available memory can be limited via a VM; CPU frequency and disk speeds can't. A 5400RPM disk vs an SSD/NVMe drive is a HUGE difference. If the machine is swapping often on a 5400RPM disk, that's a significant slowdown.


It's not only about a poor network connection but also a poor CPU and little RAM; having a website visually lag or take multiple frames to render is not a great experience, and having a top-of-the-line desktop can easily leave you unaware of those issues.

There is something very pleasing about everything running smoothly without any hiccups.


The point they make is more than valid.

I don't like the attitude of calling them craptops. A gigabyte of memory and 10 Mbit/sec are enough to meet 99.9% of user needs for web browsing. If sites were implemented in a resource-aware manner, of course.

Instead of looking down on craptops I would look down on people introducing 4K and using many gigabytes of memory for personal computing. Even worse, making the masses use it. It's irresponsible greed and short-sighted capitalism that ruins the planet.


10Mbit/s is actually a bit high to my mind... 256kbit/s should be reasonable.


> A gigabyte of memory and 10 Mbit/sec are more than enough to meet 99.9% of user needs for web browsing. If sites were implemented in a resource-aware manner, of course.

The devs need that just for Slack sadly.


Crappy laptop with grandma who doesn't speak English natively, and with 8-year-olds.


I've always wondered why all computers don't have a "slow down" setting.


They do, kind of: it's the energy-save mode.

On Linux (besides the GPU), you'd set /sys/devices/system/cpu/cpu$i/cpufreq/scaling_max_freq to cpuinfo_min_freq, for all $i. I always look for the minimum (idling) CPU frequency before buying a device, but this info is almost never available. My laptop has a min freq of 800 MHz, but I would like to go even lower, to better test low-performance devices and limit energy usage. In web dev, you can simply use the Chrome dev tools' CPU throttling, though.


There are two ways you can go lower: the cpu cgroup, and cooling_deviceN. I'm still working on memorizing the first approach :) but it works well just about everywhere; the second is simpler but employs intel-specific hardware-level throttling which may or may not behave usefully (it may have been the cause of a couple of mystery soft hangs on an old Pentium box I have here, presumably things are less glitchy now given how frequently laptops throttle nowadays).

While the cgroup approach is (like cpufreq) doable by poking around in /sys/fs/cgroup (you mkdir new directories to create cgroups), cgroup-tools makes it a tad more straightforward by making the steps less verbose. (Besides cgroup-tools, "unshare" and "nsenter" ship with util-linux and can issue the syscalls necessary to start a process in a given cgroup, which you can't do with pure bash.)

The setup is always the same - you create a new "cpu" cgroup (here named "cpulimit"), then configure CFS (completely fair scheduler, I think? I thought there were multiple schedulers... maybe this only applies if using CFS? I think CFS is the default everywhere) with a period and quota. I think the period is used to derive an internal tick rate. The quota is a fraction of the period and the ratio (yay you get to do the math yourself) represents how much CPU the task gets to eat. I think the ratio applies across all the CPUs. I have no idea what happens if you bring PID-level CPU affinity into the equation. Maybe you can select which CPUs are enabled for the cgroup, and the math applies to whatever's enabled. Haven't answered any of that yet. In any case:

  # cgcreate -g cpu:cpulimit
  # cgset -r cpu.cfs_period_us=1000000 cpulimit
  # cgset -r cpu.cfs_quota_us=100 cpulimit
A fairly straightforward demonstration: in one terminal run

  # cgexec -g cpu:cpulimit yes | pv -l > /dev/null
while in another terminal rerun the last `cgset` with quotas of 1000, 10000, etc, and watch the output rate go up and down. (In this case 100 is a good starting value, but anything more complicated than printf(); in a loop will probably finish launching in 2023 if started with a quota of 100.)

The nice thing is that the cgroup happily sits in the background until explicitly deleted (and systemd leaves everything it didn't create alone) and you can just poke at its values anytime. Network cgroups can probably do interesting fun things to traffic as well (oh yeah, network namespaces = discrete iptables/nftables per namespace).

Lastly, the cooling_deviceN entries are in /sys/class/thermal, and have cur_state and max_state. YMMV; setting cur_ to max_ may well take several minutes to undo (very much the case on older systems at least) - maybe try that on a throwaway-able session. :D (Think "Task Manager (Not Responding)"...)


Thanks for your elaborate notes! This is helpful information.

When I tried your commands, on Arch via libcgroup-git, `cgcreate -g cpu:cpulimit` only results in `cgcreate: can't create cgroup cpulimit: Cgroup, requested group parameter does not exist`, for some reason. But this is not a support ticket, I have not researched this at all yet. But cgroups only limit some processes anyway, never the entire core(s) - so it seems one could also simply use cpulimit [1] instead which emulates by sending SIGSTOP and SIGCONT.

About cooling_deviceN: while this does limit CPU functionality, it seems to just set `scaling_max_freq` to an appropriate value, throttling because the fans are disabled. No more useful than setting the frequency manually, I presume.

[1] https://github.com/opsengine/cpulimit


*Makes this a tiny support ticket* :)

A bit of cursory googling around for that error didn't find anything particularly insightful, surprisingly. Most of the references were extremely obscure.

The only consistent theme I saw was "cgroups is not loaded or broken", but that doesn't make sense: systemd depends on cgroups, IIUC. And cgroups itself was introduced in the 2.6.x era. Honestly the only idea I can think of is asking on unix.stackexchange.com or #archlinux on irc.libera.chat.

I'm curious what syscall failures `strace -o cgcreate.txt -s999 -v -f cgcreate -g cpu:cpulimit` might reveal.

I'm also very interested to know whatever the root cause ends up being!

TIL that the cooling_deviceN trick does that on some systems. On the boxes I have here it basically slows everything to a crawllllll, especially if turned up to 11.

I've always looked at `cpulimit` as somewhat of an awkward hack. Yes, it works, but it's like bit-banging vs hardware I/O, or CPU vs GPU, or rapid polling instead of push/async. I kind of squint plaintively at it a bit. If it was all I had in an 11th hour situation then sure, but if I was deploying something to production I wanted to forget about? Eeeeeehh....


Absolutely agree with this, doubly so if your target market is international


Bonus stage. Develop your product on a crappy laptop.


just throw more cpu at the problem, this is not my idea btw so don't quote me

I learnt it from my colleagues, they are the engineers


"It’s also no secret that the average size of a website is huge, and it’s only going to get larger."

This attitude seems common, that websites are just going to be fat and get fatter. That's a lie.

Not if you, as a web developer, stop the bloat. The power is in your hands, you just need to use it.

No, you don't need that 8 MB hero image. No, you don't need that bloated framework. No, you don't need those 37 tracking libraries. No, you don't need a "Read More" button hiding most of the content. No, you don't need dynamically loaded clickbait at the bottom of the page. No, you don't need an autoplaying video. No, you don't need to load a script to inject text into a copy operation. No, you don't need 87 cookies. No, you don't need to load random unvetted code from a third-party advertiser. No, you don't need the page text to fade in on scroll. No, you don't need any of that.

In case you forgot, https://idlewords.com/talks/website_obesity.htm


More often than not, I feel these decisions are out of the developer's hands.

As a dev, I want to use tree shaking to make my bundles small and optimize to make my website lighthouse score 100 across the board.

What happens? Marketing needs tag manager installed and then proceeds to async load 50 tracking scripts in the background.

Design wants the large hero images because on the high-res devices they use it looks slightly pixelated, and when you push back, a quick screenshot to the PM guarantees the ticket is added to the backlog.

Point being, I'm not sure claiming that devs hold the power is particularly constructive, as there is a large portion of non-techies who have their own priorities to push.



I've had the exact situation kisamoto is talking about happen to me countless times, and I can probably count on one hand the number of times I've won the battle over images. Sadly, the folks calling the shots on what images to use, and exactly how they need to look, do not care that I'm using responsive imagery to serve our users images with appropriate file sizes and reasonable resolutions, and to ensure crops have editor-selected focal points.

When I say these folks get upset over slight pixelization, I'm not even talking about pixelization where everyone can tell it's a low quality image. I'm talking about situations where people are upset they can't read an iPhone screen someone is holding in the background when they are viewing it on their 4K display (note: that particular phone isn't the focal point of the shot, nor does it have anything to add to it; it's just someone staged back there to make the shot feel more alive, mind you!).

It's disheartening because I do want to deliver the best experience possible and try to do everything in my power to accomplish it, but sometimes I lose out and folks force their megabyte imagery.


We need to figure out a better way to manage images that have multiple qualities. Something that can take connection into account.
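One hedged sketch of doing that today: the (non-standard, Chromium-only) Network Information API exposes an `effectiveType` estimate and a `saveData` flag that can drive the choice. The variant names below are hypothetical:

```javascript
// Map a connection estimate to an image variant. effectiveType comes from
// navigator.connection (Chromium only; undefined elsewhere), saveData from
// the user's data-saver setting. Variant names 'low'/'medium'/'high' are
// made up for this sketch.
function pickImageVariant(effectiveType, saveData) {
  if (saveData) return 'low';
  switch (effectiveType) {
    case 'slow-2g':
    case '2g': return 'low';
    case '3g': return 'medium';
    default:   return 'high'; // '4g', or API unavailable
  }
}

// Browser usage (guarded, since the API may be absent):
//   const c = navigator.connection || {};
//   img.src = `/hero-${pickImageVariant(c.effectiveType, c.saveData)}.jpg`;
```

Since `navigator.connection` is missing in Firefox and Safari, the fallback branch should be whatever you consider a safe default.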


The technical worker who codes up the web site and its backend does have a say, but not a lot of it. Product and sales people have a stronger say, because they optimize for the money the web site produces.

Explaining to them how much worse the bloat makes the site's experience, showing how many pages are abandoned half-loaded because the user lost patience and navigated away, and how the competitor's site loads faster, may help convince them to keep the experience reasonably slim.


This.

Asking for web sites that don't have 27 trackers, enormous hero images, a full copy of the framework of the week, etc. is like asking for native games under Linux: the money isn't there to justify the added hassle and support costs.


The whole point of the article is that poor optimisation makes you lose a big part of your audience. The money is there, you just don't notice it.


And as a dev, I make none of those decisions.


Also as a dev, I don’t really care. It’s not my money on the line, it’s the company’s. If the site sucks and users don’t like it, the company loses profits and will be forced to change.

The only thing I take a stand on is when the product is actively harmful and immoral. Just being shit or slow is not my problem.


Trouble is that your users are generally in a similar situation, not getting to choose what tools they use. By producing bad software, you’re making their life harder personally.


As a dev, I don't produce bad software.

Users of the features we develop are the ones that produce bad websites.


I agree. The few times I've cared fell on deaf ears.


Dangerously based response, Gigachad.


Cool bonus: if you skip a lot of that stuff you also won't need a cookie banner, and your page will load faster and feel better.


But… marketing put the 8MB hero image there using their CMS. The other marketing people put the 37 trackers there using Google Tag Manager.

There’s also mystery JS that slows down everything and breaks random parts of our app - that’s your Chrome extensions :)


You don't need those 37 tracking libraries, but management does.


They don't need them either. It doesn't matter what they think; they truly do not need them. It's time to stop this.


Good luck explaining that to them.


I will if I want to target that kind of user. Personally I much prefer targeting people who have nice laptops.

Poor people are frequently low-LTV customers unless you’re getting paid by someone else (say, the government), and even in that case you only have to please the buyer, not the user.

So maybe if I was building like payday loans or something.


If you build B2B apps whose customers are small businesses, you may be surprised at just how low-grade some of their hardware is. Think of a small mail and parcel store (many of my employer's customers are exactly that).

Additionally, apps designed to be used in the field by top-tier customers often fail basic requirements. (Funny how many apps and sites fall apart on my iPhone 12 Pro in places where the connection drops down to LTE)


Big businesses too. At my second job, my employer provides services to pharma, medical, and biosciences corps, and our users are almost invariably on terrible hardware despite being senior scientists heading up labs.


You're obviously being facetious, but probably most HN users really do feel this way.


Haha, I am glad I entertained but no irony was intended. I understand why you believed I was being facetious, though.


I'd come to the realisation a while ago that, deliberate or otherwise, limiting website usability or accessibility to recent / high-performance kit is quite possibly an effective market-segmentation tool in a space where physical location (e.g., an up-market high-street address) is not a viable differentiator.

See: https://news.ycombinator.com/item?id=27410503

I'm not a fan of this, mind. Just aware that it's a possibility. And you're giving voice to that as a deliberate choice.


I know many successful, highly paid individuals who just don't care about or understand technology enough to have decent hardware. Even if you buy premium hardware, it will get old and slow after a decade.


Or, do not endorse people's bad buying patterns.

I do care about crappy networks and load times. But there are laptops with worse performance than three-year-old tablets, and there seems to be no end to what scummy companies will sell at the low end.

At some point down the crappy pit, there's a line where it's no longer my problem.


It's not that hard to write optimized software if you know what you're doing. The problem is, developers these days routinely prioritize their own experience over their users'. They'll add a huge library just to use one function from it. They'll stack abstraction layers they don't understand on top of their platform just so their code looks "beautiful". They'll use their platform in a suboptimal way (for example, moving a DOM element by animating position instead of transform). And so on.
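To make the "huge library for one function" point concrete: people will pull in all of lodash just for something like chunk, when a few lines do the job. A simplified sketch (no input validation):

```javascript
// A hand-rolled replacement for lodash's chunk(): split an array
// into groups of the given size, with the last group holding the rest.
function chunk(arr, size) {
  const out = [];
  for (let i = 0; i < arr.length; i += size) {
    out.push(arr.slice(i, i + size));
  }
  return out;
}

chunk([1, 2, 3, 4, 5], 2); // → [[1, 2], [3, 4], [5]]
```

With tree-shaking bundlers the cost of the import is smaller than it used to be, but the habit of reaching for a dependency before thinking is the real issue.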

I mean, I know a guy who recently got into frontend development with React. I did the backend for an app he was building, and I had to explain to him what an XMLHttpRequest is so he could send me one. It just blows my mind that there are people who legitimately write code with some framework but don't know the basics of the language they're writing in and/or the platform the whole thing runs on.


What condescending bullshit.


What do you even mean? A website has no business having minimum system requirements. It's a hypertext document with some optional macros here and there.


Absolutely trivial to demonstrate to be false.

I can use websites for videoconferencing. Streaming video and audio comes with nontrivial load.

I can use websites to compress images, to compute tabulated data on various Excel clones, remotely control devices or play games.

This isn't 1995, and we're way past indulging the lazy user choices that gave us a decade of Internet Explorer nightmares. The web is much more than that, and you need to know where to draw the line on your product's minimum requirements.


> I can use websites for videoconferencing.

> I can use websites to compress images, to compute tabulated data on various Excel clones, remotely control devices or play games.

You sure can, but you probably shouldn't. These use cases are much better served by native apps. Shoehorning hypertext documents with macros into being applications will never come close to proper applications, in terms of both UX and performance.

I really wish we would undo many of the "advancements" of the web technology. This scope creep needs to stop, yesterday.

I wish there to be an answer to a simple question: when is a web browser finished?


What are you talking about? App delivery through the browser is what enabled the low-cost, zero-friction startup world you see today, and we're all better for it.


I don't want "low cost zero friction startup world", I want completed products that actually work, ffs.


We've been there: 1 in 20 households could afford to tech up, and even fewer could spare anything beyond the most essential software. Acquiring a CAD or Photoshop license was a significant percentage of yearly earnings, and renting platforms was unheard of.

you say you do, but you don't.


Heh, buying software for personal use? Maybe I'm too Russian to understand that. We pirate everything for personal use. Only companies pay for licenses. And allegedly, Adobe and such were fine with that, up until shareholders decided they need to monetize more aggressively to "grow".


"If it isn't broken, don't fix it"

There's an argument that rampant consumption is a toxic buying pattern. If we are upgrading our systems just to use bloated webpages, webpages that offer nothing of substance over their counterparts from 20 years ago, then it is worth asking what we are really gaining.

In any transaction it is important to distinguish between what is being sold, what is actually delivered and the utility provided.



