Maybe you are jumping to a conclusion too quickly.
How do you know what was really going on?
"He created a requirement that every developer commit two new tests to the code base every workday" seems like a stupid requirement if you do not control the quality of the tests.
The same big corporation I wrote about above had a goal of 80% code coverage reported on dashboards.
I saw people writing tests just to run lines of code, without effectively testing anything.
Other people were "smarter" and completely excluded low-coverage folders and modules from coverage measurement altogether.
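As a hypothetical sketch of that trick (the comment names no specific tool; this assumes Python's coverage.py), hiding low-coverage code is a one-line config change:

```ini
# .coveragerc -- hypothetical example
# Excluding low-coverage modules makes the dashboard number go up
# without a single new test being written.
[run]
omit =
    legacy/*
    plugins/untested_loader.py
```

The dashboard then reports coverage only over the code that was already well tested.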
Code coverage percentage numbers on a dashboard are a risky business. They can give you a false sense of confidence, because you can have 100% code coverage and still be plagued by a multitude of bugs if you do not test what the code is supposed to do.
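A minimal Python sketch of how that happens (hypothetical code, not from the original story): the test below gives the function 100% line coverage while completely missing the bug.

```python
def apply_discount(price, percent):
    # Bug: the discount is added instead of subtracted.
    return price + price * percent / 100

def test_apply_discount():
    # Executes every line, so coverage reports 100%,
    # but asserts nothing -- the bug sails through.
    apply_discount(100, 20)

test_apply_discount()
```

Run under a coverage tool, this "test" pushes the metric to 100% without checking a single behavior.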
Code coverage helps you see where you have untested code, and if it is very low (e.g., below 50%) it tells you that you need more tests. A high code coverage percentage is desirable but should not be a target.
The real problem is again the culture.
A culture where it is OK to leave critical parts of the code untested. A large part of the solution here is helping people understand the consequences of low code coverage.
For example, collect experiences and, during retrospectives, point out where tests saved the day (or where a test would have), so people can see how tests may save them a lot of frustration.
But again, when you give people a target and it is the only thing they care about, people find a way to hit it.
There are a thousand reasons why reasonable people might have found themselves in that position. Maybe they inherited a code base after an acquisition or from some outside consultancy who didn't do a great job. Maybe management made a rational business decision to ship something that would make enough money to keep the company going and knowingly took on the tech debt that they would then have some chance of fixing before the company failed. Maybe it actually had very high numbers from coverage tools but then someone realised that a relatively complex part of the code still wasn't being tested very thoroughly.
If a team has identified a weakness in testing and transparently reported it, presumably with the intention of making it better, then why would we assume that setting arbitrary targets based on some metric with no direct connection to the real problem would help them do that?
If a team does not have automated tests but still manages to deliver working software, maybe tests are not adding as much value as the VP thinks?
The most important things are feature delivery and integration tests, not automated unit tests where you test getters and setters with mock dependencies -- absolutely useless busywork.
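For illustration (hypothetical code, not from the thread), this is the kind of unit test being criticized: it mocks a dependency just to exercise a trivial getter/setter pair, so it verifies nothing about real behavior.

```python
from unittest.mock import Mock

class OrderService:
    def __init__(self, repository):
        self._repository = repository  # never exercised by the test below
        self.status = "new"

    def set_status(self, status):
        self.status = status

    def get_status(self):
        return self.status

def test_get_set_status():
    # A mocked dependency and a trivial assertion: this "test" can
    # only fail if attribute assignment itself stops working.
    service = OrderService(repository=Mock())
    service.set_status("shipped")
    assert service.get_status() == "shipped"

test_get_set_status()
```

It pads the test count and the coverage number while leaving the repository interaction, the part that can actually break, untested.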
Tests aren't exclusively about asserting current behavior -- they also help you determine drift over time and explicitly mention the implicit invariants that people are assuming.
Chesterton's fence isn't saying that the fence/test isn't necessary. It's saying you need to take the time to understand the broader context rather than make a knee-jerk assumption. To be clear: just because developers don't see the need for better testing doesn't mean more testing isn't needed. But it may indicate the VP didn't do a good job of explaining why, which leads to the gamesmanship shown in the story.
Schedule isn't always the most important thing either. It's possible that delivering the software just means you've been rolling the dice and getting lucky. The Boeing 737 MAX scenario gives a concrete example of where delivery was paramount. It's a cognitive bias to assume that "since nothing bad has happened yet, it must mean it's good practice".
The culture led to fake tests instead of adding the tests that were legitimately lacking.
Is that so different from saying they weren't acting like adults?
The phrasing could be called dismissive, but I give it a pass because it was mimicking the phrasing of the post it replied to. The underlying sentiment doesn't seem wrong to me.
> Code coverage percentage numbers on a dashboard are a risky business. They can give you a false sense of confidence, because you can have 100% code coverage and still be plagued by a multitude of bugs if you do not test what the code is supposed to do.
Or, conversely, I've been in charge of some really awkward-to-test code... which ended up being really reliable. Plugin loading, config loading (this was before you Spring Boot'ed everything in Java). We had almost no tests in that context, because testing the edge cases there would've been a real mess.
But at the same time, if we messed up, no dev environment and no test environment would work anymore at all, and we would know. Very quickly. From a lot of sides, with a lot of anger in there. So we were fine.
Insufficient test coverage doesn't necessarily mean lack of self-discipline. It can also stem from project management issues (too much focus on features/too little time given for test writing).
An adult would have started to write the tests themselves so they'd understand what was going on around them. You don't just frown at people and hope for the best.