
I see this opinion a lot from people who haven't seen tests and code written by people experienced with TDD. The tests should not end up that coupled to the code. The implementation structure and the test structure end up somewhat different when you refactor every time the tests are green, listen to the feedback from the tests and code, and have the skills to spot the refactoring opportunities.

Oftentimes people seem to equate unit testing with a 1:1 correspondence between test and implementation, with high coupling between the two. These sorts of tests resist refactoring rather than enabling it. With good tests you can pivot the implementation and tests independently.

Recommend https://www.youtube.com/watch?v=EZ05e7EMOLM and https://vimeo.com/83960706 on TDD



In my experience, your statement is true when writing library code or tests that don't need to mock lots of objects.

Unfortunately, unit testing becomes highly coupled when testing classes in the standard web architecture. A service class you're testing can depend on other service classes, a DAO, and potentially other web services, so you're left mocking all those other classes if you want to create a unit test instead of an integration test. Since the external dependencies have been mocked out, the unit test is now highly coupled to the implementation, and it's a PITA to change either the test or the code. I suspect that's why OP prefers integration testing, as it helps keep the tests less coupled to the implementation.
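
For illustration, here's roughly what that tends to look like (the classes and the Mockito/JUnit setup are invented for this sketch, not taken from any real codebase):

    import static org.mockito.Mockito.*;
    import org.junit.jupiter.api.Test;

    // Hypothetical collaborators, just enough to make the sketch compile.
    interface OrderDao { void save(String customerId, String sku, int qty); }
    interface PaymentService { String charge(String customerId, long cents); }
    interface InventoryClient { boolean reserve(String sku, int qty); }

    class OrderService {
        private final OrderDao dao;
        private final PaymentService payments;
        private final InventoryClient inventory;

        OrderService(OrderDao dao, PaymentService payments, InventoryClient inventory) {
            this.dao = dao; this.payments = payments; this.inventory = inventory;
        }

        String placeOrder(String customerId, String sku, int qty) {
            if (!inventory.reserve(sku, qty)) throw new IllegalStateException("out of stock");
            String receipt = payments.charge(customerId, 1_000L * qty);   // say $10 per item
            dao.save(customerId, sku, qty);
            return receipt;
        }
    }

    class OrderServiceTest {
        @Test
        void placeOrderChargesAndSaves() {
            OrderDao dao = mock(OrderDao.class);
            PaymentService payments = mock(PaymentService.class);
            InventoryClient inventory = mock(InventoryClient.class);
            when(inventory.reserve("sku-1", 2)).thenReturn(true);
            when(payments.charge("cust-1", 2_000L)).thenReturn("receipt-1");

            new OrderService(dao, payments, inventory).placeOrder("cust-1", "sku-1", 2);

            // The assertions restate the implementation's call sequence, so
            // restructuring placeOrder breaks the test even when the
            // observable behaviour is unchanged.
            verify(inventory).reserve("sku-1", 2);
            verify(payments).charge("cust-1", 2_000L);
            verify(dao).save("cust-1", "sku-1", 2);
        }
    }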


In my experience, if your tests require lots of mocks then that's a sign that IO is coupled too tightly to application logic. Refactoring your code so this isn't the case isn't always obvious, but it's a breath of fresh air and really cleans up the interfaces.
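
A small made-up sketch of what that refactoring buys you: once the rule is a pure function over plain values, the test needs no mocks at all, and the IO (loading the account, sending the reminder) lives in a thin shell that's covered separately.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // Pure policy: no DAO, no HTTP client, nothing to mock.
    final class LateFeePolicy {
        static long lateFeeCents(long balanceCents, int daysOverdue) {
            if (daysOverdue <= 30 || balanceCents <= 0) return 0;
            return Math.min(balanceCents / 100, 5_000);   // 1% of balance, capped at $50
        }
    }

    class LateFeePolicyTest {
        @Test
        void chargesOnePercentCappedAtFiftyDollars() {
            assertEquals(0, LateFeePolicy.lateFeeCents(10_000, 10));        // not overdue enough
            assertEquals(100, LateFeePolicy.lateFeeCents(10_000, 45));      // 1% of $100
            assertEquals(5_000, LateFeePolicy.lateFeeCents(1_000_000, 45)); // capped
        }
    }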


One problem with decoupling IO is that you still somehow need to get the data deep down into those places where it's needed by your application logic. That means you end up either:

1. Passing each individual little piece of data separately down the call stack with bloated method signatures containing laundry lists of data that seemingly have nothing to do with some of the contexts where they appear.

2. Combining pieces of data into larger state-holding types which you pass down the call stack, adding complexity to tests which now need mocks.
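
To make option 2 concrete (all names invented):

    import java.time.Instant;

    // The boundary gathers what the deep code needs into one state-holding
    // type and passes that down instead of five separate parameters.
    record PricingContext(String tenantId, String currency, Instant now,
                          double taxRate, boolean grandfatheredPlan) {}

    final class SubscriptionPricer {
        // Three levels down the call stack, only the context is in scope.
        static long monthlyTotalCents(PricingContext ctx, long baseCents) {
            long discounted = ctx.grandfatheredPlan() ? baseCents * 80 / 100 : baseCents;
            return Math.round(discounted * (1 + ctx.taxRate()));
        }
    }

As long as the context holds only plain data like this, tests can just construct it; the pain really starts when it also carries live collaborators (a repository, a clock service, an HTTP client), because then everything below it needs mocks or fakes.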

I think one of the toughest parts of day-to-day software engineering is dealing with this tension when you have complex modules that need to pass a lot of state around. It's easier and cleaner to pull stuff out of global state or thread contexts or IO, but that makes it harder to test. More often than I would like to admit, I ask myself whether a small change really needs an automated test, because those shiny tests that we adore so much sometimes complicate the real application code a lot.

If anyone has thoughts on how they approach this problem (which don't contain the words "dynamic scoping" :P) I'd love to read them.


This is my experience as well. I learned the lesson the one time I was allowed to write unit tests at work. It was on an existing code base without tests. I had to significantly refactor code to make it testable, and one of the lessons I learned from the experience is to isolate I/O from the main business logic that I'm testing.

In the pre-test code, the functions were littered with PrintConsole statements that took a string and a warning level (the Console was an object responsible for printing strings on a HW console). I made sure my main business logic was never aware of the Console object. I made an intermediate/interface class that handled all I/O, and mocked that class. Instead of calling the Console directly, the function now called the LogMessage, LogWarning, and LogError functions of the interface class, each taking a string. The function had no idea where these messages would go - to the console, to a file, or as a text message. It didn't care.
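
Roughly what that looked like (reconstructed from memory with made-up names, not the actual code):

    // Minimal stand-in for the original HW console wrapper.
    class Console {
        static final int INFO = 0, WARNING = 1, ERROR = 2;
        void printConsole(String text, int level) { System.out.println(level + ": " + text); }
    }

    // The small interface the business logic depends on instead.
    interface MessageSink {
        void logMessage(String text);
        void logWarning(String text);
        void logError(String text);
    }

    // One implementation forwards to the console; others could write to a
    // file or send a text message. The business logic never knows which.
    class ConsoleSink implements MessageSink {
        private final Console console;
        ConsoleSink(Console console) { this.console = console; }
        public void logMessage(String text) { console.printConsole(text, Console.INFO); }
        public void logWarning(String text) { console.printConsole(text, Console.WARNING); }
        public void logError(String text)   { console.printConsole(text, Console.ERROR); }
    }

    class TransferJob {
        private final MessageSink sink;                 // mocked in the unit tests
        TransferJob(MessageSink sink) { this.sink = sink; }
        void run(int retries) {
            if (retries > 3) sink.logWarning("retry count unusually high: " + retries);
            // ... the actual business logic under test ...
            sink.logMessage("transfer complete");
        }
    }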

Now when we needed to make changes to how things were printed, none of our business logic functions, nor their tests, were impacted. In this case at least, attempting to unit test led to less coupled code.


What if most applications are mostly IO and have little application logic? Business applications are fancy looking CRUD a lot of the time.


That’s a good insight. It applies to side effects in general, for instance setState in React.


And usually, once TDD is well accepted in a team, people automatically write more testable code, because they're too lazy to write tightly coupled code that needs many mocks.


... and no doubt the ratio of application/domain/pure logic to external services interaction varies tremendously by project and by industry, which is likely what leads to such a variety of opinions on the subject.


I would consider needing to mock a lot of objects to write your test a form of design feedback - an indication that our design could be improved. Perhaps the code under test has too many responsibilities, we're missing an abstraction, boundaries are in the wrong place, or there are too many side effects.

One of the downsides of modern mocking frameworks being so easy to use is that it's less obvious when we're doing too much of it.

If we test drive the behaviour, our first failing test of a single behaviour won't involve many collaborators. If it does, we're probably trying to test more than one thing at once. At some point, as we add tests, we may add more collaborators. If we refactor each time, we should be asking ourselves what's going wrong when that happens.

Testing more than one class at the same time doesn't make it an integration test. Arbitrarily restricting a unit to map to a single method or a class is a good way to ensure that your test code is tightly coupled to the implementation.
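
For example (entirely made up): a behaviour-level test of "orders over $100 get 10% off" that happens to exercise three small classes together, with no mocks, so any of them can be split, merged, or renamed without touching the test.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    record Item(String name, long priceCents) {}

    class Basket {
        private final java.util.List<Item> items = new java.util.ArrayList<>();
        void add(Item item) { items.add(item); }
        long subtotalCents() { return items.stream().mapToLong(Item::priceCents).sum(); }
    }

    class BulkDiscount {
        long apply(long subtotalCents) {
            return subtotalCents >= 10_000 ? subtotalCents * 90 / 100 : subtotalCents;
        }
    }

    class PriceCalculator {
        private final BulkDiscount discount;
        PriceCalculator(BulkDiscount discount) { this.discount = discount; }
        long total(Basket basket) { return discount.apply(basket.subtotalCents()); }
    }

    class DiscountBehaviourTest {
        @Test
        void ordersOverOneHundredDollarsGetTenPercentOff() {
            Basket basket = new Basket();
            basket.add(new Item("keyboard", 8_000));
            basket.add(new Item("mouse", 4_000));

            long totalCents = new PriceCalculator(new BulkDiscount()).total(basket);

            // Asserts on the outcome, not on which class did what.
            assertEquals(10_800, totalCents);   // 12_000 minus 10%
        }
    }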


> Testing more than one class at the same time doesn't make it an integration test. Arbitrarily restricting a unit to map to a single method or a class is a good way to ensure that your test code is tightly coupled to the implementation.

But at least if you restrict your units to a single method, you have a chance of getting somewhat complete tests. If you're testing multiple classes with several methods each as a unit, the number of possible code paths is so huge that you know you cannot possibly test more than a small part of the possibilities.


This doesn't have to be the case.

If you TDD your implementation then it's all covered by tests. If you refactor as part of the TDD process then you may factor out other classes and methods from the implementation. These are still covered by the same tests but don't have their own microtests.


If you cannot write a simple test for your code, it is a good indication that you need to change the code, not the test.


The video seems to support all my points. "Adding a new class is not the trigger for writing tests. The trigger is implementing a requirement."

A test which covers a class is a unit test. A requirement is typically a feature. To test a feature, you usually need integration tests because a feature usually involves multiple classes.



