> I can no longer conceive of writing software without using test driven development. I can't imagine not having a comprehensive suite of unit tests to back up my development.
I don't understand this. This makes a lot of sense when you make complex enterprise products that depend on unstable third-party libraries, but for a personal project it just inflates the time you need to get the thing done. I just code exactly what I need and check if it works, and can't imagine dedicating time to writing an additional layer of code to automate this checking. Even when I code pure functions, which are a lot easier to test.
Lately I've been writing some 3D graphics code in Rust for fun. Almost twenty years ago, in high school, I used to write similar code in C++ without relying on any sort of automated testing – and damn, with the amount of math involved, it's so nice to just write and run some rudimentary sanity-checking tests as you code! Back then I relied on constantly building and running some interactive test app and trying to figure out from its behavior whether my matrix calculations had a sign error somewhere.
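That kind of sanity check can be a couple of lines. A minimal sketch in Python (the `rotate` function here is hypothetical, standing in for the kind of matrix math in question):

```python
import math

def rotate(point, angle):
    # 2D rotation applied to a point; a sign error here is exactly
    # the kind of bug a one-line sanity check catches immediately.
    x, y = point
    return (x * math.cos(angle) - y * math.sin(angle),
            x * math.sin(angle) + y * math.cos(angle))

# Rudimentary sanity check: rotating (1, 0) by 90 degrees should
# give (0, 1), not (0, -1).
x, y = rotate((1, 0), math.pi / 2)
assert abs(x) < 1e-9 and abs(y - 1) < 1e-9
```

Run at import time, a check like this fails the moment a sign flips, instead of showing up as subtly wrong behavior in an interactive app.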
A solid type system helps a lot in making refactoring a less fearsome endeavour, but a reasonable test coverage on top of static checking makes it even more so.
Also, automatic perf tests that keep track of regressions and improvements are just awesome.
Sure, this also makes sense when it is about math. But what I write is mostly about managing files on disk, querying databases, processing text and XML, and some web frontend work. Occasionally even communicating over a COM port. And when I need math, I do it with NumPy. These tasks seem rather easy to implement and rather hard to test.
Sometimes mocking, even to check error paths, isn't a lot of work. It depends.
A lot of the time, designing your interfaces to support mock objects more easily improves them in other ways. If you pass in a file object instead of a filename, for example, not only does it become easier to pass in a mock object that raises an error, it also becomes easier to pass in a GzipFile object or an HTTPResponse object, and presto: your function now handles compressed data and data stored on a web server.
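As a sketch of that idea, here is a hypothetical `count_lines` function that takes a file-like object rather than a filename; the names are made up, but the pattern is the one described above:

```python
import gzip
import io

def count_lines(f):
    # Accept any file-like object rather than a filename, so callers
    # can pass a real file, an in-memory buffer, a GzipFile, or a
    # mock that raises an error on read.
    return sum(1 for _ in f)

# An in-memory buffer works just like a file on disk:
assert count_lines(io.StringIO("a\nb\nc\n")) == 3

# And so does compressed data, with no change to the function:
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
    gz.write(b"a\nb\n")
buf.seek(0)
assert count_lines(gzip.GzipFile(fileobj=buf, mode="rb")) == 2
```

The error-path test falls out of the same design: pass in an object whose read method raises, no filesystem involved.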
import socket
import requests
import nose.tools
from unittest.mock import patch

# call_api is the code under test; it calls requests.get internally.
with patch.object(requests, "get") as get_mock:
    get_mock.side_effect = socket.error
    with nose.tools.assert_raises(socket.error):
        inject_exception.call_api("http://example.com")
If your functions have lots of hidden dependencies and side effects, they're hard to test.
If you split concerns properly and keep your glue/IO code separated from the decision-making/business-rules/logic, mocking is quite trivial, and there are advantages other than just testability.
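A minimal sketch of that split, with hypothetical names: the decision about which files are stale is a pure function, and the IO is injected glue.

```python
def files_to_delete(entries, max_age_days):
    # Pure decision logic: given (name, age_in_days) pairs, decide
    # which files are stale. No filesystem access, trivially testable.
    return [name for name, age in entries if age > max_age_days]

def cleanup(directory, max_age_days, scan, delete):
    # Thin glue: scan() and delete() do the actual IO and are passed
    # in, so a test can supply fakes without touching the disk.
    for name in files_to_delete(scan(directory), max_age_days):
        delete(name)

# Testing the logic needs no mocks at all:
assert files_to_delete([("a.log", 10), ("b.log", 2)], 7) == ["a.log"]

# Testing the glue needs only trivial fakes:
deleted = []
cleanup("/tmp", 7, scan=lambda d: [("old.log", 30)], delete=deleted.append)
assert deleted == ["old.log"]
```

The non-testing advantage is the same as with the file-object example: the logic now works on any source of entries, not just a directory scan.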
I've been on both sides, heavy testing and little to no testing. IMHO it really depends on the system use case.
Tests have advantages, but they also hinder your agility by increasing the costs of the initial version and of making major changes.
Hard to update? Good tests!
Well understood problem sphere and product? Good tests!
High cost of failure? Good tests!
MVP with no actual current users? No TDD.
Exploring the product space and update cost is low? No TDD.
> I don't understand this. This makes a lot of sense when you make complex enterprise products that depend on unstable third-party libraries, but for a personal project it just inflates the time you need to get the thing done.
I hate tests with a passion, but sometimes I'll write hundreds even for a tiny personal project - when that tiny personal project is going to be at the core of another thing and I want to make absolutely sure that it is working correctly in every way.
Like that time I wrote a process manager for node.js applications because I realized PM2 is a buggy inconsistent mess with race conditions all the way down. Gotta make sure mine works better than that...
In one of my projects the "additional layer of code to automate this checking" looks like this:
def ok(a, b): assert a == b, (a, b)
from .util import ok
ok(Parse(b'.25').do(0, real), (3, ('real', 0.25)))
ok(Parse(b'1.5').do(0, real), (3, ('real', 1.5)))
ok(Parse(b'-1.5').do(0, real), (4, ('real', -1.5)))
ok(Parse(b'+1.5').do(0, real), (4, ('real', +1.5)))
If this results in an exception at import time, I sometimes have to comment out one or more of these tests while I fix parts of the bug.
I mean, I do do some manual testing. Sometimes it's easier to bang on a function and decide whether the output looks right than to figure out what the results should be from first principles. But it doesn't take very many repetitions of a manual test of something like the above before it's the manual testing inflating the time needed, not adding another ok() line.
This is especially true for debugging. Adding automated tests is often a faster way to debug than to add print statements or to step through things in a debugger. Sometimes I take the tests out afterwards, but more commonly I leave them in. And with things like Hypothesis and QuickCheck, tests can have a much higher strength-to-weight ratio than simple tests like the above.
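The idea behind Hypothesis and QuickCheck can be sketched without either library, using only the stdlib random module: state a property and check it against many generated inputs. The run-length encoder here is a made-up stand-in for the code under test.

```python
import random

def encode(s):
    # Toy run-length encoder: "aaab" -> [("a", 3), ("b", 1)].
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def decode(runs):
    return "".join(ch * n for ch, n in runs)

# Property: decoding an encoding returns the original string, checked
# against hundreds of random inputs rather than a handful of
# hand-picked ones.
rng = random.Random(0)
for _ in range(500):
    s = "".join(rng.choice("ab") for _ in range(rng.randrange(20)))
    assert decode(encode(s)) == s
```

A real property-testing library adds the parts this sketch lacks: smarter input generation and automatic shrinking of failing cases to a minimal example.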
It's also a breath of fresh air when I'm trying to understand what some code is supposed to do, or how to call it, and I come across a one-line test like the above.
Speaking of how to call things, sometimes I do test-driven development so that my internal APIs don't suck. And sometimes I don't and wish I had. There are a couple of lines of code in the same project that say this:
And, you know, that's just shitty, shitty code, because that was a shitty way to design that interface. And maybe if I'd written that code test-first, instead of just charging ahead madly implementing the first thing that came to mind, I would have realized that when I was writing the test, before writing the implementation. Now, if I want the interface to suck less, I have 700 lines of crap to refactor to the new interface, which probably means either throwing it away and starting from scratch, or incrementally refactoring it while maintaining both interfaces for a day or two. And that's inflating the time I need to get the thing done.
It would have been less work to fix it at design time, the way my friend Aaron taught me: first, write some application code as if the ideal library existed to write it with; then, write the library code that makes it run. And that's just so much easier when that application code is just a unit test.
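In miniature, with hypothetical names, that process looks like this: write the test as the ideal usage first, then write the code that satisfies it.

```python
# Step 1: write the call you wish you could make -- this is the test,
# and it is also the design of the interface.
def test_retry():
    calls = []
    def flaky():
        calls.append(1)
        if len(calls) < 3:
            raise IOError("transient")
        return "ok"
    assert retry(flaky, attempts=5) == "ok"
    assert len(calls) == 3  # succeeded on the third try

# Step 2: write the library code that makes that call work.
def retry(fn, attempts):
    for i in range(attempts):
        try:
            return fn()
        except IOError:
            if i == attempts - 1:
                raise

test_retry()
```

Because the usage was written first, `retry` takes a plain callable and a count, rather than whatever shape the first implementation idea would have suggested.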
However, as with most things, I disagree with Robert Martin's point of view on this. I can easily conceive of writing software without TDD, or even without tests. In fact, I do it frequently. Tests are often valuable, and TDD is sometimes valuable, even in weekend personal projects. But they aren't always worth the cost.