
Most of the time when a team is writing low quality software it's not really a choice. No one made a conscious decision not to write tests, not to do PR reviews, or not to refactor. It's actually that the developers are not capable of writing tests, reviewing code, or refactoring to a sufficiently useful level that it's worthwhile. If you've come into the industry and joined a company that doesn't do those things then you've never learned those skills.

Where Martin Fowler says you'll see the benefit of high quality code in a few weeks he's making the assumption that the team is capable of writing high quality code but choosing not to, whereas it's actually more likely to be the case that the team would need to go away and learn how to write high quality code before they can start, including things like learning how to write testable code in the first place. That is a much bigger and much more time-consuming problem.

The article is absolutely 100% correct that high quality code lets you go faster but it ignores the root cause of the problem - developers have been writing low quality code for so long that unlearning all the bad habits and actually getting better is a huge undertaking.



It's important to note that having high test coverage doesn't make code good. Unit tests will actually make bad code even worse because it will be even more difficult to change the underlying logic (because the tests lock all the poor implementation details into place).

Tests have nothing to do with code quality. All they do is verify that the code works. I would argue that the simpler and therefore the better your code is, the less you need to rely on tests to verify that it works. Fewer edge cases means fewer tests.

I'm a big fan of integration tests though because they lock down the code based on high level features and not based on implementation details. If you ever have to rewrite a decent portion of a system (e.g. due to changing business requirements) it is deeply satisfying if your integration tests are still passing afterwards (e.g. with only minor changes to the test logic to account for the functionality changes).


I see this opinion a lot from people who haven't seen tests and code written by people experienced with TDD. The tests should not end up that coupled to the code. When you refactor every time the tests are green, listen to the feedback from the tests and code, and have the skills to spot refactoring opportunities, the implementation structure and the test structure end up somewhat different.

Oftentimes people seem to equate unit testing with a 1:1 correspondence of test and implementation with high coupling between the two. These sort of tests resist refactoring, rather than enabling it. With good tests you can pivot the implementation and tests independently.
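To make the distinction concrete, here's a hypothetical sketch in Python; `PriceCalculator` and its internals are invented names, not from any real codebase.

```python
# Hypothetical sketch: the same class tested two ways.

class PriceCalculator:
    def __init__(self):
        self._log = []                       # internal implementation detail

    def total(self, prices, discount=0.0):
        self._log.append(("total", prices))  # internal implementation detail
        return round(sum(prices) * (1 - discount), 2)

# Implementation-coupled test: pins down private state, so renaming or
# removing _log breaks it even though observable behaviour is unchanged.
def test_coupled():
    calc = PriceCalculator()
    calc.total([10.0])
    assert calc._log == [("total", [10.0])]

# Behaviour-focused test: states only the user-visible result, so the
# internals can be rewritten freely while it keeps passing.
def test_behaviour():
    calc = PriceCalculator()
    assert calc.total([10.0, 20.0]) == 30.0
    assert calc.total([100.0], discount=0.25) == 75.0

test_coupled()
test_behaviour()
```

The second style is the one that lets implementation and tests pivot independently: delete `_log` tomorrow and `test_behaviour` still passes.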

Recommend https://www.youtube.com/watch?v=EZ05e7EMOLM and https://vimeo.com/83960706 on TDD


In my experience, your statement is true when writing library code or tests that don't need to mock lots of objects.

Unfortunately, unit testing becomes highly coupled when testing classes in the standard web architecture. A service class you're testing can depend on other service classes, a DAO, and potentially other web services, so you're left mocking all those other classes if you want to create a unit test instead of an integration test. Since the external dependencies have been mocked out, the unit test is now highly coupled to the implementation, and it's a PITA to change either the test or the code. I suspect that's why OP prefers integration testing, as it helps keep the test less coupled to the implementation.


In my experience, if your tests require lots of mocks then that's a sign that IO is coupled too tightly to application logic. Refactoring your code so this isn't the case isn't always obvious, but it's a breath of fresh air and really cleans up the interfaces.


One problem with decoupling IO is that you still somehow need to get the data deep down into those places where it's needed by your application logic. That means you end up either:

1. Passing each individual little piece of data separately down the call stack with bloated method signatures containing laundry lists of data that seemingly have nothing to do with some of the contexts where they appear.

2. Combining pieces of data into larger state-holding types which you pass down the call stack, adding complexity to tests which now need mocks.
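The two options might look like this in a rough Python sketch; all the names (`render_invoice`, `BillingContext`, the fields) are invented for illustration.

```python
from dataclasses import dataclass

# Option 1: each piece of data threaded separately through the signature.
# Callers several frames up must carry tax_rate and locale they never use.
def render_invoice(customer_name, tax_rate, currency, locale, line_items):
    subtotal = sum(line_items)
    return f"{customer_name}: {subtotal * (1 + tax_rate):.2f} {currency}"

# Option 2: a state-holding context type passed down instead.
# Signatures shrink, but tests now need to construct (or mock) the context.
@dataclass
class BillingContext:
    customer_name: str
    tax_rate: float
    currency: str
    locale: str

def render_invoice_ctx(ctx: BillingContext, line_items):
    subtotal = sum(line_items)
    return f"{ctx.customer_name}: {subtotal * (1 + ctx.tax_rate):.2f} {ctx.currency}"
```

Neither option is free: option 1 bloats every intermediate signature, option 2 makes each function's real data dependencies invisible from its signature.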

I think one of the toughest parts of day-to-day software engineering is dealing with this tension when you have complex modules that need to pass a lot of state around. It's easier and cleaner to pull stuff out of global state or thread contexts or IO, but that makes it harder to test. More often than I would like to admit, I ask myself whether a small change really needs an automated test, because those shiny tests that we adore so much sometimes complicate the real application code a lot.

If anyone has thoughts on how they approach this problem (which don't contain the words "dynamic scoping" :P) I'd love to read them.


This is my experience as well. I learned the lesson the one time I was allowed to write unit tests at work. It was on an existing code base without tests. I had to significantly refactor code to make it testable, and one of the lessons I learned from the experience is to isolate I/O from the main business logic that I'm testing.

In the pre-test code, the functions were littered with PrintConsole statements that would take a string and a warning level (the Console was an object that was responsible for printing strings on a HW console). I made sure my main business logic was never aware of the Console object. I made an intermediate/interface class that handled all I/O, and mocked that class. Instead, the function now had LogMessage, LogWarning, LogError functions of the interface class that took a string. The function had no idea where these messages could go - it could go to the console, it could be logged to a file, it could be sent as a text message. It didn't care.

Now when we needed to make changes to how things were printed, none of our business logic functions, nor their tests, were impacted. In this case at least, attempting to unit test led to less coupled code.
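A minimal Python sketch of the refactor described above; `MessageSink`, `process_order`, and the method names are invented stand-ins for the real classes.

```python
# Business logic depends on this interface, never on the console.
class MessageSink:
    def log_message(self, text): ...
    def log_warning(self, text): ...
    def log_error(self, text): ...

# One concrete sink; others (file, SMS, ...) could be swapped in freely.
class ConsoleSink(MessageSink):
    def log_message(self, text): print(f"INFO: {text}")
    def log_warning(self, text): print(f"WARN: {text}")
    def log_error(self, text): print(f"ERROR: {text}")

def process_order(order_total, sink: MessageSink):
    # The business logic has no idea where these messages end up.
    if order_total < 0:
        sink.log_error("negative total")
        return False
    sink.log_message("order accepted")
    return True

# In tests, a recording fake replaces the console entirely:
class RecordingSink(MessageSink):
    def __init__(self): self.lines = []
    def log_message(self, text): self.lines.append(("msg", text))
    def log_warning(self, text): self.lines.append(("warn", text))
    def log_error(self, text): self.lines.append(("err", text))

sink = RecordingSink()
assert process_order(-5, sink) is False
assert sink.lines == [("err", "negative total")]
```

Changing how things are printed now only touches `ConsoleSink`; the business logic and its tests never notice.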


What if most applications are mostly IO and have little application logic? Business applications are fancy looking CRUD a lot of the time.


That’s a good insight. It applies to side effects in general, for instance setState in react.


And usually with good TDD acceptance in your team, people automatically write more testable code, because they're too lazy to write tightly coupled code that needs many mocks.


... and no doubt the ratio of application/domain/pure logic to external services interaction varies tremendously by project and by industry, which is likely what leads to such a variety of opinions on the subject.


I would consider needing to mock a lot of objects to write your test a form of design feedback. An indication that our design could be improved. Perhaps the code under test has too many responsibilities, we're missing an abstraction, boundaries are in the wrong place, too many side effects.

One of the downsides of modern mocking frameworks being so easy to use is that it's less obvious when we're doing too much of it.

If we test drive the behaviour, our first failing test of a single behaviour won't involve many collaborators. If it does, we're probably trying to test more than one thing at once. At some point as we add tests we may add more collaborators. If we refactor at each step, we should be asking ourselves what's going wrong.

Testing more than one class at the same time doesn't make it an integration test. Arbitrarily restricting a unit to map to a single method or a class is a good way to ensure that your test code is tightly coupled to the implementation.


> Testing more than one class at the same time doesn't make it an integration test. Arbitrarily restricting a unit to map to a single method or a class is a good way to ensure that your test code is tightly coupled to the implementation.

But at least if you restrict your units to a single method, you have a chance of getting somewhat complete tests. If you're testing multiple classes with several methods each as a unit, the number of possible code paths is so huge that you know you cannot possibly test more than a small part of the possibilities.


This doesn't have to be the case.

If you TDD your implementation then it's all covered by tests. If you refactor as part of the TDD process then you may factor out other classes and methods from the implementation. These are still covered by the same tests but don't have their own microtests.


If you cannot write a simple test for your code, it is a good indication that you need to change the code, not the test.


The video seems to support all my points. "Adding a new class is not the trigger for writing tests. The trigger is implementing a requirement."

A test which covers a class is a unit test. A requirement is typically a feature. To test a feature, you usually need integration tests because a feature usually involves multiple classes.


>Tests have nothing to do with code quality.

I didn't downvote your comment but I vehemently disagree. Mission-critical code such as NASA flight guidance, avionics, and low-level libraries like SQLite depend on a suite of tests to maintain software quality. (I wrote a previous comment on this.[0])

We also want the new software that commands self-driving cars to have thousands of tests that cover as many scenarios as possible. I don't have inside knowledge of either Waymo or Tesla but it seems like common sense to assume those software programmers rely on a massive suite of unit tests to stress test their cars' decision algorithms. One can't write software with that level of complexity that has life-and-death consequences without relying on numerous tests at all layers of the stack. Yes, the cars will still have bugs and will sometimes make the wrong decision but their software would be worse without the tests.

High quality software relies on both lower-level unit tests and higher-level integration tests. Or put another way, both "black box" and "white box" testing strategies are used.

[0] https://news.ycombinator.com/item?id=15592392


Isn't this disagreement basically the same point made by Martin about different kinds of quality? SQLite's tests don't say the code is architected well and reusable and modular and blah blah blah; they say that it works. When people talk about the quality of NASA code or SQLite, that feels more like external quality than internal quality.


The 100% MC/DC testing in SQLite does not force the code to be well-architected, but it does help us to improve the architecture.

(1) The 100% branch test coverage requirement forces us to remove unreachable code, or else convert that code into assert() statements, thereby helping to remove cruft.

(2) High test coverage gives us freedom to refactor the code aggressively without fear of breaking things.

So, if your developers are passionate about long term maintainability, then having 100% MC/DC testing is a big big win. But if your developers are not interested in maintainability, then forcing a 100% MC/DC requirement on them does not help and could make things worse.
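A loose Python analogy for point (1) above (SQLite itself does this in C with `assert()`, which compiles out of release builds); the function names here are invented.

```python
def classify(n: int) -> str:
    if n > 0:
        return "positive"
    if n < 0:
        return "negative"
    return "zero"

# Before: a defensive branch that no test can ever reach, so 100% branch
# coverage is impossible and the cruft is flagged forever.
def sign_label_before(n: int) -> str:
    label = classify(n)
    if label not in ("positive", "negative", "zero"):
        raise RuntimeError("impossible")  # unreachable by construction
    return label

# After: the same invariant restated as an assertion. It documents intent
# (and in C, assert() disappears under NDEBUG) instead of sitting around
# as untestable defensive code.
def sign_label(n: int) -> str:
    label = classify(n)
    assert label in ("positive", "negative", "zero")
    return label
```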


>Isn't this disagreement basically the same point made by Martin about different kinds of quality?

M Fowler's comment about "tests" was also made in the context of internal quality. He mentions "cruft" as the buildup of bad internal code that the customer can't see:

>[...] the best teams both create much less cruft but also remove enough of the cruft they do create that they can continue to add features quickly. They spend time creating _automated tests_ so that they can surface problems quickly and spend less time removing bugs.


I think what he means is that just because you have tests (and even high code coverage) doesn't mean that your code is high quality. They're correlated, but I've actually seen code with high test coverage... whose tests never made any assertions.
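That failure mode is easy to illustrate with an invented example: a "test" that executes every line (100% coverage) but can never fail.

```python
def divide(a, b):
    return a / b  # latent bug: no handling of b == 0

# Coverage tools report 100% for this test, yet it verifies nothing.
def test_divide_no_assert():
    divide(10, 2)   # line executed, result discarded
    divide(-9, 3)   # line executed, result discarded

# Same coverage number, but this one actually checks behaviour.
def test_divide_with_assert():
    assert divide(10, 2) == 5.0
    assert divide(-9, 3) == -3.0

test_divide_no_assert()
test_divide_with_assert()
```

Both tests produce the same coverage metric; only the second one is worth anything, which is exactly why coverage alone says little about quality.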


>tests [...] doesn't mean that your code is high quality. They're correlated,

Yes, if they're correlated, that contradicts the absolutist statement of "Tests have nothing to do with code quality."

Trying to improve code correctness is directly affecting code quality.


So self-driving systems are based on machine learning, and thus do not have regular (deterministic) unit tests. They will mainly be tested on past data, but the end results are always probabilistic. I.e. no one (not even Musk himself) knows how the car would behave when it sees something it was not trained on.


I've always wondered whether there's a bunch of conditional statements constraining the output of the probabilistic model. That would seem like the logical thing to do, but I'm not that familiar with ML to know whether such thing is needed or not.


I think that unit tests make sense for safety-critical systems but still in those cases, my point would be that it's better to add them near the end of the project once the code has settled.


Skilled carpenters use hammers. That doesn't mean a hammer can't cause a lot of damage if used incorrectly.


Re: your last point, I recently rewrote parts of a billing system full of hairy logic and edge cases (and bugs). The initial MVP consisted of exactly replicating the existing invoicing logic. Due to the general complexity of the problem domain, I found myself rethinking and rewriting large portions of the system as I grew more familiar with the (undocumented, naturally) business requirements. In some cases I'd throw out entire modules and associated unit tests. After a while, I started relying more on integration tests which simply compared generated invoices between the two systems (and/or against golden files.)

Having these made it extremely easy to refactor large portions of the system quickly without needing to refactor unit tests. (I still wrote unit tests, just less of them, more focused on the stabler parts of the system.) This has loosened the grip of the "every function must have a unit test" mantra in my mind, which... I dunno, somewhere along the way sort of became simply assumed.

Some caveats to note, however. A) The code had minimal external dependencies (postgres). B) The integration tests ran very quickly against a local postgres database, only slightly slower than unit tests might, providing a quick dev feedback loop. C) While internally, the system was rather complex, the output was not. It was a simple CSV file that's trivial to parse/compare.

Thus, I wouldn't overgeneralize from the above. In cases where there are lots of external dependencies, integration tests are slow, or where evaluating the test results is more tricky (ie, you need Selenium or whatnot), this approach wouldn't be as feasible.
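A rough sketch of the golden-file comparison described above; `generate_invoice` and the CSV layout are invented placeholders for the real billing output.

```python
import csv
import io

def generate_invoice(line_items):
    """Stand-in for the rewritten billing system's CSV output."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["description", "amount"])
    for desc, amount in sorted(line_items):
        writer.writerow([desc, f"{amount:.2f}"])
    writer.writerow(["total", f"{sum(a for _, a in line_items):.2f}"])
    return buf.getvalue()

def assert_matches_golden(output: str, golden: str):
    # Compare row by row so a failure points at the first differing line.
    got_lines, want_lines = output.splitlines(), golden.splitlines()
    assert len(got_lines) == len(want_lines), "row count differs"
    for i, (got, want) in enumerate(zip(got_lines, want_lines)):
        assert got == want, f"line {i}: {got!r} != {want!r}"

# The golden file would normally be checked in next to the test.
golden = "description,amount\r\nhosting,20.00\r\nsupport,5.50\r\ntotal,25.50\r\n"
assert_matches_golden(
    generate_invoice([("support", 5.5), ("hosting", 20.0)]),
    golden,
)
```

Because the comparison is against the output artifact rather than internal calls, the invoice logic can be rewritten wholesale without touching this test.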


Most of this is a series of false choices. In fact

- tests can help show code quality improvements do not break anything

- you can have integration tests and unit tests at the same time; in fact, it is more of a spectrum than two rigid categories

- it's often possible to have simple code and test it

Generally speaking the more specific the question, the less controversial the choices are. It's typically not all that interesting to argue about how to test a particular algorithm, data structure, or service.

The hard part in all of this, from an engineering perspective, is just talking to folks, promoting good teamwork, actually showing the value of less obvious things (a passing test suite), and knowing what to do when technology choices become toxic to the product or team.


Most interesting refactorings change the boundaries and count of abstractions, which usually does break unit tests.

Unit tests are great at the leaves of the call graph, and things which are almost leaves because their dependencies aren't at any real risk of change. The further into the stack you go, the more brittle they get.


The point is that I use all kinds of tests all the time. It works fine. You aren't required to join a tribe and debate about abstract test styles.

Look at the current problem and come up with good answers to the questions:

- How do we know it works?

- How will we know it still works in a year?

...you don't always need the best answers, even. Most projects should start with honest answers and work from there.


> It's important to note that having high test coverage doesn't make code good.

Sure, but low test coverage doesn't make it good either. Coverage is a metric and like any metric, it (1) needs to be assessed with judgement and (2) becomes useless when it's used as a target rather than a measurement.

> Tests have nothing to do with code quality. All they do is verify that the code works.

Well to start, Fowler notes a distinction between external and internal quality. External quality is "does it work from the end user perspective?", which can be verified by tests -- you note integration tests in this role (acceptance tests, feature tests, user tests, behaviour tests, whatever you choose to call them). In the external quality case, verifying that the code works is a large fraction of quality.

Your argument, I think, is that internal quality is unaffected by testing. I don't agree: in my experience the needs of simple testing create constant design pressure on the production code, most of which makes it easier to create future changes.

Though as noted at the top of the thread: expertise still matters. Writing better tests and better production code are skills.


> I'm a big fan of integration tests though because they lock down the code based on high level features and not based on implementation details.

I've found this to be a dangerous mindset. Integration tests are great, but they need a solid foundation of unit tests. Integration tests are slow, difficult to root-cause, complex to write and maintain, and also generally don't test all the various corners of the system.

Testing is a pyramid, with unit tests at the bottom and integration tests somewhere in the middle. If your unit tests are based in implementation details, as you say, then that's probably a sign that a refactoring is in order (would love to be less blunt but it's tough with the absence of more details).


> Tests have nothing to do with code quality. All they do is verify that the code works. I would argue that the simpler and therefore the better your code is, the less you need to rely on tests to verify that it works. Fewer edge cases means fewer tests.

While I won't argue that tests verify that the code works, the assertion that tests have nothing to do with code quality based on that premise is incorrect, and here's why.

Some of the main types of poorly written code are 1) brittle code, which breaks easily when things are changed, such as dependency changes or changes in I/O, and 2) unreadable code, which decreases accurate understanding of what the code does and causes incorrect assumptions to be made, which yields bugs.

Unit tests, over time, raise the alarm to these types of code smells. While a test might not yield much info for a short time after it is written, when the code ages and has to stand up to the test of changing code/environment around it, well written tests WILL highlight parts of the code that can be considered poorly written due to the two criteria above.


> Unit tests will actually make bad code even worse because it will be even more difficult to change the underlying logic (because the tests lock all the poor implementation details into place).

This statement is patently false, unless for some reason a project includes unit tests themselves as the production code, which would be highly unusual.

At most, unit tests must be refactored along with the code, but that's the standard operating procedure.


This seems to assume that the tests will be higher quality than the problematic code in the first place. It’s actually commonplace to see tests coupled to internal implementation details of production code, which makes refactoring very hard.

The idea of TDD (mostly lost to hype and consultants) is that you change _EITHER_ the tests or the code in each operation. This allows you to use one as a control against the other. If you change both, you prove that different tests pass against different code, which is substantially less useful. Unfortunately if tests are coupled to internal state, getting code to even compile without modifying both sides of the production/test boundary is difficult after a refactor.


> This seems to assume that the tests will be higher quality than the problematic code in the first place.

If the problem lies with problematic code then tests are not the problem. At most they're just yet another way that problematic code affects the problem.

Let's put it this way: would the problems go away if the tests were ripped out?


I actually just finished a ticket related to this. It took me significantly longer than necessary because I also had to go through all the poorly written tests.


But while you're building a new system/subsystem, it doesn't make sense to write unit tests for units of code which have a high likelihood of being deleted 1 month from now due to evolving project requirements.

It's like if you were building a smartphone; it wouldn't make sense to screw all the internal components into place one by one unless you were sure that all the components would end up fitting perfectly together inside the case. While building the prototype, you may decide to move some components around, downsize some components, trim some wires and remove some low-priority components to make room for others. In this case, unit tests are like screws.


The problem is that prototypes end up being production code in the real world. Writing tests is about managing risk. You should write some basal level of unit tests as you go to validate your logic. That basal level is determined by the team's or individual's tolerance for risk.


Who said anything about production? Haven't you seen requirements change even before the first prototype is ready? I had a meeting literally today where the client's CFO and head accountant threw out my team's week of work because they forgot about key requirements (and this has happened three times this year).


> Unit tests will actually make bad code even worse because it will be even more difficult to change the underlying logic

Objectively false: if not having tests is better than having tests, then delete the tests. Instant improvement.

This fact leads to the conclusion that the value of having tests is greater than or equal to the value of not having tests, in all cases.


Yes, I have instantly improved code by removing micro-level solipsistic tests that were tightly coupled to the implementation. These tests made it much slower to improve the quality of the code and had zero benefit, because they only tested that the code did what it did, not what it is supposed to do.


You are missing the effect of loss-aversion and the cognitive bias towards keeping a test even if it adds negative value.

Once a test has been added, it will tend to stick around even if it is worse than no test.


Good point, and I agree. Sunk cost fallacy and all that. I wouldn't consider those "objective", but I don't think it's worth arguing over the definition of that word when I think we otherwise agree.

I also would agree that sometimes time has been wasted creating too many tests. Perhaps that time could have been spent to greater effect.

I also think that even if, in retrospect, a test is very tightly coupled and specific to one implementation, that test still might have revealed bugs and may have helped the original author. If that test is now a burden, throw it away.


You appear to be attempting some sort of reductio ad absurdum, but in many cases work on a change to the software starts with deleting all the related tests, because they're going to be irrelevant and changing them isn't worth the extra work.

That the deletion is necessary means the tests apparently did make the code a bit worse.

Also all the time they were in while the code wasn't being changed, they made running the application's tests slower.


>No one made a conscious decision not to write tests, not to do PR reviews, or not to refactor

I know several small offshore software contractor firms that actively turn down cries for help from their developers for tests and refactors, for "budget reasons", all the time. Their clients usually don't know any better and later go on to pay off the technical debt in support fees.


> It's actually that the developers are not capable of writing tests, reviewing code, or refactoring to a sufficiently useful level that it's worthwhile.

Or maybe it's that management pushed new features far higher up the priority list than "making code more maintainable". That has been the case in most places where I have worked.


I've worked three different places where I attempted to implement automated testing strategies, failed twice and succeeded once. If you subscribe to the anecdata model of knowing things, here goes:

In the first company, there was a strong culture of testing but no strong culture of teaching. I did not last long there and I did not succeed at implementing even basic automated testing. Everyone was very busy in their own roles, and as a co-op employee nobody would show me how to test. I was a not-yet-graduated Computer Science student who honestly didn't know about unit testing frameworks, or Selenium, or whatever. If you give me a giant Waterfall document about requirements, and a giant spreadsheet to fill up with noughts and crosses, with little to no additional direction about the software, how it's tested, or how it even works, then you're going to have a bad time.

Second company there was a strong culture of quality, but not of testing. We were also a two-man developer shop, so there was very little time for teaching and testing. I was expected to learn on my own, and avoid spending time on learning things my boss already knew on our behalf. I accepted broken code from him all day long and made it work.

To be honest, that's where I learned to do good work and not break stuff, and we never invested heavily in test suites. We also almost never built anything above-average in complexity, and when we did, it actually wasn't very long before the boss left, and I was on my own to support it. In the next few years that guy wrote a book about how to dig out from this situation, when your software is successful and needs to change, but doesn't have any tests.

(He says it wasn't a very good book, but from my perspective it's something that was meant to be read preventatively. Even though it reads more like a step-by-step manual, you should hope that you never have to follow these terrible, terrible steps. If you are starting a new project and still have a chance to keep test coverage at acceptable levels as you go, I'd maybe recommend reading it, so you know what you're in for if you make the bad decision and your software becomes successful anyway. I have a coupon code if you really want to know, but I digress... the short version is, you've got to test everything before you change anything.)

In this last role, I have succeeded at implementing automated testing. (But at what cost?) The company supports the idea of spending time on testing. My direct supervisors were all willing to wait the extra week or two to see what I came up with, and, understanding the benefits of testing, in retrospect it was always considered time well spent. Nobody was really in a position to teach, but fortunately I had tons of experience at trying and failing before, so this time with the right support structure I was able to get it right for the most part on my own, with help from docs and the internet. (It helps a lot that browser testing tools and other testing tools have all evolved a lot in the last 10 years too; they are objectively better now than they were when I had that first job, and no support.)

In summary, I'd say it's necessary to have all three - time to learn, actual support from above for delays when "this seems like something that shouldn't take this long" ultimately appears, and an actual operational need to build automated testing, which is not always granted depending on your team size, design, and need for growing complexity.

It is possible to build a widget that works, and never changes again. In this case, spending time on a test suite may be a waste. I have found as I've grown more successful and work with more people, that it happens a lot less often than it used to.


Thanks for the insight. In my last role I attempted to introduce automated testing and had support and some success, but no buy-in from the rest of the team meant I ended up "owning" the test suite.


The best suggestion I've heard recently is, when someone writes a flaky test, that person needs to be the one who fixes it. (If you write flaky tests and I fix them, I learn how to not write flaky tests, and you keep on writing them, blissfully unaware of the pain that they cause every day.)

If only one person is writing tests, that's a problem you won't have, but what's worse... I think you have it worse.


I share this sentiment. I tend to categorize cruft code into two different categories, and in most cases I have encountered, both stem mostly from programmer inability rather than time constraints.

The first kind is what I would presume is the most common one. It undoubtedly shows up if you have unregulated feature growth in a codebase with a low feeling of code ownership. Grunt programmers, or drive-by feature development teams, shoehorn in new code to fulfill their requirement. This leads to the normal degenerate codebase: modules are thousands of lines long, functions are hundreds of lines of deep, staircase-like if-statement logic, spaghetti dependencies, promiscuous state-sharing, etc. The classic ball of mud.

The second kind of cruft is the one where someone tried to be clever above their ability and created heavy abstractions that are an ill fit for the problem at hand. Signs of this are overuse of complicated language constructs like inheritance, metaclasses, runtime inspection, etc. The style can lead to verbose, boilerplate-heavy code that overshadows the business logic. I tend, in my frustration, to call this abstraction wankery.

In the ball-of-mud pattern, the programmers often lack the ability to properly form the abstractions needed to sort out the mess, and they are aware of that, resigning themselves to trying to fulfill the task at hand without breaking the existing fragile mess. The grunt might be well versed in the business domain and have programming as a side skill. The drive-by coder does not have the motivation to understand the whole messy codebase, so they make the minimal change and try not to break anything in the process.

The abstraction wankery is driven by other things, usually a second-system effect. The junior programmer has some code under his belt and is trying to level up his skills; a smooth-talking architect with little insight into the business domain is cargo-culting some new fad; etc. This style is usually well received by management: they hear the buzzwords and it sounds good to their ears. It can take some time until the house of cards falls; usually a requirement comes along that does not fit the abstraction and unexpectedly takes an exorbitant amount of time to implement, or maybe a deep-rooted bug requires fixes that ripple through the whole codebase.

When the cleaners are finally sent in, the big ball of mud can usually be shaped up by incrementally applying the standard refactoring techniques until structure starts to show. The abstraction mess can be much more difficult to clean up: incremental improvements are harder, and sometimes a rewrite of the code is required, leading to a much more noticeable lack of feature velocity than the ball-of-mud fix.


>I tend to categorize cruft code in two different categories, and in most cases I have encountered both stem mostly from programmer inability rather than time constraints.

This is only natural. Where the customer doesn't value software quality they will hire cheap (not very good / not very experienced) developers.

My personal experience is that at the beginning of my career customers neither expected nor wanted quality - prioritizing speed of delivery above virtually all else - and I felt engaged in a perpetual struggle to "make" them understand. As I grew more experienced, I found that customers/employers simply expected quality to take precedence over speed and required no convincing.

IME any attempt to "convince" the customer/employer that code quality was important was a waste of time. It's better simply to get them to decide their desired level on a rolling basis and act accordingly - and to find somebody else to work for if the answer isn't to your liking.


> This is only natural. Where the customer doesn't value software quality they will hire cheap (not very good / not very experienced) developers.

Well, those abstraction wankers usually do not come cheap. So from the customer's standpoint they have hired experienced and well-paid programmers, but the result is still crap.


Yep, one of the catch-22s of tech is that you need good people to recognize good people. I think that's why tech tends to congregate in hubs in spite of the lack of any real intrinsic need for it to do so.


> IME any attempt to "convince" the customer/employer that code quality was important was a waste of time. It's better simply to get them to decide their desired level on a rolling basis and act accordingly - and to find somebody else to work for if the answer isn't to your liking.

Exactly, you have to work within the context of the culture.


> No one made a conscious decision not to write tests, not to do PR reviews, or not to refactor.

Then you've never worked at a company like my current one. The developers all very much want to do those things, but are forced not to by management that is suspicious of the value of these things - no matter how many times avoidable bugs pop up, or massive refactors become unavoidable before new functionality can be added and management gets "I told you so"-ed. Tests are regularly postponed to follow-up issues that mysteriously never make it into sprints, and preventative refactoring is a non-starter.

What the developers want or are capable of is meaningless in a situation where they have no leverage over how they spend their time.


At an old job, they were using an older XML marshalling library that hadn't been maintained in years, but it was everywhere in the code. Well, it finally got to the point where the thing just stopped working (some schema it couldn't parse), so somebody suggested that we "refactor" the code to use the more standard, well-supported JAXB. Since we're talking about a couple-million-line code base that parsed XML literally everywhere, the refactoring ended up taking months (mind you, this was something without which we were completely dead in the water). However, it got into management's collective hive mind that refactoring = several months of no new features, so they prohibited anything called refactoring.
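One way to keep a migration like that from touching millions of lines is to hide the marshalling library behind a single seam, so call sites never import it directly. A hedged sketch (Python's stdlib `xml.etree` standing in for the library, names made up):

```python
import xml.etree.ElementTree as ET

# The one place that knows which XML library is in use.
# Swapping the parser later is a change to this function,
# not to every call site in the code base.
def parse_customer(xml_text: str) -> dict:
    root = ET.fromstring(xml_text)
    return {"id": root.get("id"), "name": root.findtext("name")}

doc = '<customer id="42"><name>Ada</name></customer>'
customer = parse_customer(doc)  # {'id': '42', 'name': 'Ada'}
```

It doesn't help retroactively, of course - the months-long slog was the price of not having that seam to begin with.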


I can't speak for everywhere, but it's generally not that hard to get another software job. Places like that tend to have few people with both experience and the ambition to improve things. The turnover rate in general is usually high.

Anyway, developers tend to have plenty of leverage, in switching jobs or teams if nothing else.


Why are you asking for permission to do the right thing?


Probably the time-tracking bait-and-switch. As a developer, you have to account for every hour you spend on "company time", and you have to forecast those hours ahead of time (usually at the beginning of a "sprint"). You've got a little bit of wiggle room to sneak important things (like refactoring code, writing tests, reading documentation) in between the forecasted tasks, but not days. I guess your employer hasn't gotten "agile" (which is management-speak for "not agile") yet.


I am curious about your example. When you guys tell management about the best practices, what is their response? What is their reasoning for not doing PR reviews, tests, etc.?


"In the real world...", "Nobody cares about how it works as long it works...", etc. This is a company where the top level management 1) thinks it knows best about everything, 2) micromanages everything, and 3) trusts absolutely no one to do their jobs. The CEO has a software degree but never really worked as a programmer professionally and is as obsessed with exclusively shipping features as any MBA, and it flows down from there.

When you hire subject-matter experts to do professional work and then refuse to believe that they might know more than you about how to do that work, you're going to have deep dysfunction.


"We don't need the gold-plated version, we just need this to work and we need to ship it YESTERDAY"

If you try to explain, they just say things like "don't write bad code so you won't need tests and refactoring".

My new rule of thumb is: don't work for companies without a technical co-founder.


I agree having technical leadership makes a huge difference. Still, most developers don't realize that if they push back hard enough, a lot of those loud "done yesterday" requirements dissolve. Non-technical people always ask for more than they expect they'll get.


True, but at some point, it's just exhausting. Every conversation is an exercise in pushing back.


At my current company it’s basically the roadmap.


While there is certainly a subset of developers who lack the ability to write solid code, more often they aren't given the opportunity to do so. Too often company deadlines, resource shortages, and selling prototypes as complete products box in the development teams. Product and sales want constant new features, and most companies don't enable a culture of investing in the product with refactoring, reviews, and other best practices. Developers are seen as a cost center, not the critical cog in the machine. If you fix this, you will get better-quality code.


> they aren't given the opportunity to do so

I see this as well - in a lot of ways, it's an (accidental?) outcome of JIRA-driven project management. The project manager's job is to squeeze as much productivity out of the developers as possible, so they do so by having you account for every hour you plan to spend and what you're going to spend it on. Then they start looking at what can be cut, and the stuff that's not "mission critical" gets cut. What's frustrating is that this ends up being a Pyrrhic victory.


I would add that, due to the heavy business emphasis in the industry, the level of apprenticeship has dropped very low. Instead of letting experienced engineers guide newcomers, it is often the case that junior but agreeable and lower-paid engineers take the lead. This way, it takes much longer to learn the craft.


>Most of the time when a team is writing low quality software it's not really a choice. No one made a conscious decision not to write tests, not to do PR reviews, or not to refactor.

And then:

>That is a much bigger and much more time-consuming problem.

Right there is the choice.

Most comments here are blaming management. I've worked in a team where the team themselves were the ones opposing it. Yes, like you said, they worked for years without doing all this. But then management actually gave them leeway to spend time learning it and implementing it - on the job, but it was left to the developers to figure out how to learn it. They could learn it solo or form learning sessions - whatever they wanted.

Only a few developers took advantage of the opportunity. And the rest who hadn't then actively opposed changing the code for testing.

People generally don't want to change. In this case, it definitely was a choice.


Few studies have been done to prove that one way of coding is better than another. That's not to say there isn't a consensus on what leads to good code (tests are a good thing). I prefer a functional style, but someone else might prefer Java lasagna style. The ones who say their style is better don't have any scientific footing at all. We programmers love to say that we are so clever, but we never actually try to do any studies. I hope this will change with time.


There have been some papers published about languages and unsurprisingly, FP languages come out as the most time inefficient while Java comes last, even after C.

I’m not sure how they controlled for experience/skill as the Java developer is probably not as skilled as the FP person, but even so the results imply that choosing a programming language is a big deal, just as Paul Graham has espoused over and over again.


> FP languages come out as the most time inefficient while Java comes last

I'm confused - you say that FP languages are the most time inefficient - so they're the worst? That means that you're saying that Java is the most time efficient/the best? I'd be curious to see the paper.


Sorry I meant “efficient”


Ah - ok, that makes a lot more sense (and fits with what else I've seen on the topic).


Interesting, I’ve read some, but in all but one the sample sizes were waaaaaay too small (around 20). Do you have any links?


Excellent point. It isn't to say that the developers cannot learn modern practices. I know this is how I worked when I first started, and I am a moderately capable developer. It is an undertaking though.

Is going through that gauntlet fairly universal? Onboarding is almost always a pain for the individual if you are hiring outside of large tech companies. Why is our default coding style not compatible with team programming?



