Tuesday 27 May 2014

Growing Object-Oriented Software, Guided by Tests

A few weeks ago I finished reading the famous book
Growing Object-Oriented Software, Guided by Tests, by Steve Freeman and Nat Pryce.

In this post, I try to summarize what I think is the most interesting content.

The book presents the interesting approach of starting a new project with a Walking Skeleton: a tiny implementation of the system that performs a small end-to-end function.
The point of the walking skeleton is to help us understand the requirements well enough to propose and validate a broad-brush system structure.
In most Agile projects, there’s a first stage where the team is doing initial analysis, setting up its physical and technical environments, and otherwise getting started. This is usually called Iteration Zero.


One important task for iteration zero is to use the walking skeleton to test-drive the initial architecture.
Working through our first end-to-end test will force some of the structural decisions we need to make, such as packaging and deployment.
We always have a single command that reliably compiles, builds, deploys, and tests the application, and we run it repeatedly. We only start coding once we have an automated build and test working.
Other interesting points are about the effects of this level of initial exploration:
A sense of urgency will help the team to strip the functionality down to the absolute minimum sufficient to test their assumptions.
We often found it worth writing a small amount of ugly code and seeing how it falls out. It helps us to test our ideas before we’ve gone too far, and sometimes the results can be surprising. The important point is to make sure we don’t leave it ugly.
The book recommends the following things:

  • Put the tests in a separate component to make sure that tests drive the code through its public interfaces
  • Write long unit test names that tell what a unit does
  • Write lots of little methods to keep each layer of code at a consistent level of abstraction
  • Use the Single Responsibility Principle as a heuristic for breaking up complexity
  • Don't be shy of creating new types
  • Passing around complex generic types is a form of duplication; it's a hint that there is a domain concept that should be extracted into a type (see the sketch after this list)
  • Try to minimize the time when the code does not compile by keeping changes incremental
  • Consider defining a runtime exception called Defect to throw when the code reaches a condition that could only be caused by a programming error rather than a failure in the runtime environment
  • Choose good names and change them over time to reflect new learning about the domain
  • It is better to define domain types that wrap built-in types, including collections (as in the sketch below)
  • Learn how to divide requirements up into incremental slices, always having something working, always adding just one more feature
  • Refactor constantly as part of the TDD cycle
  • Write tests that are readable and flexible. 
  • Write tests that read like a declarative description of what is being tested.
  • Use the TestDox convention (invented by Chris Stevenson), where each test name reads like a sentence, with the target class as the implicit subject
  • Don't worry about long names in tests as they are only called through reflection
  • Synchronize frequently with the source code repository—up to every few minutes—so that if a test fails unexpectedly it won’t cost much to revert your recent changes and try another approach.
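As a sketch of the "wrap built-in types" and "Defect" points above (the AuctionBids name and its behaviour are my own invention, not an example from the book):

    import java.util.HashMap;
    import java.util.Map;

    // A runtime exception for conditions that can only be caused by a
    // programming error, not by a failure in the runtime environment.
    class Defect extends RuntimeException {
        Defect(String message) { super(message); }
    }

    // Instead of passing a raw Map<String, Integer> around, wrap it in a type
    // named after the domain concept. The repeated generic signature disappears,
    // and behaviour that belongs to the concept gets an obvious home.
    class AuctionBids {
        private final Map<String, Integer> highestBidByBidder = new HashMap<>();

        void record(String bidder, int amount) {
            if (amount <= 0) {
                throw new Defect("bid amounts must be positive, got " + amount);
            }
            highestBidByBidder.merge(bidder, amount, Math::max);
        }

        int highestBidOf(String bidder) {
            return highestBidByBidder.getOrDefault(bidder, 0);
        }
    }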
Test Driven Development combines testing, specification and design into one holistic activity. Difficulty in testing might imply that we need to change our test code, but often it's a hint that our design ideas are wrong and that we ought to change the production code.
We’ve found that the qualities that make an object easy to test also make our code responsive to change. The trick is to let our tests drive our design.
TDD is about testing code, verifying its externally visible qualities such as functionality and performance. TDD is also about feedback on the code’s internal qualities: the coupling and cohesion of its classes, dependencies that are explicit or hidden, and effective information hiding—the qualities that keep the code maintainable.
One important goal the authors have is structuring code to make the boundaries of objects clearly visible.
An object should only deal with values and instances that are either local—created and managed within its scope—or passed in explicitly.
The book describes, through examples, a way of doing TDD that relies heavily on mocks. I have personally rarely used this style, and I tend to prefer the more traditional TDD approach presented by Kent Beck in his book TDD By Example.

The reason they love this style is that it emphasizes how objects communicate, rather than what they are, so that they end up with types and roles defined more in terms of the domain than of the implementation.

They also say that:
  • There is no point in writing mocks for values (which should be immutable anyway). Just create an instance and use it.
  • Mock concrete classes only if you have no other option; this often leads you to extract an interface that reflects something about the domain
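To show what this style looks like, here is a minimal sketch using jMock 2, the mocking library written by the book's authors. The names echo the book's auction sniper example, but the simplified classes are mine; the test name also illustrates the TestDox convention mentioned above:

    import org.jmock.Expectations;
    import org.jmock.Mockery;
    import org.junit.Test;

    public class AuctionSniperTest {

        // A role interface named after the domain: what the sniper tells the world.
        interface SniperListener {
            void sniperLost();
        }

        // Minimal production class, just enough to drive the test.
        static class AuctionSniper {
            private final SniperListener listener;
            AuctionSniper(SniperListener listener) { this.listener = listener; }
            void auctionClosed() { listener.sniperLost(); }
        }

        private final Mockery context = new Mockery();
        private final SniperListener listener = context.mock(SniperListener.class);
        private final AuctionSniper sniper = new AuctionSniper(listener);

        // TestDox-style name: reads like a sentence with the class as implicit subject.
        @Test public void reportsLostWhenAuctionCloses() {
            context.checking(new Expectations() {{
                oneOf(listener).sniperLost(); // the communication we expect
            }});

            sniper.auctionClosed();

            context.assertIsSatisfied(); // fails if the expected call never happened
        }
    }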

In terms of dependencies, they insist on dependencies being passed in through the constructor, while notifications and adjustments can be set to defaults and reconfigured later.
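A small sketch of that convention, with hypothetical names of my own: the required dependency arrives through the constructor, while a notification starts as a safe default and can be reconfigured later:

    interface PaymentGateway { void charge(String account, int amount); }

    interface AuditListener {
        void recorded(String event);
        AuditListener NONE = event -> {}; // do-nothing default (a null object)
    }

    class AuctionHouse {
        private final PaymentGateway payments;            // dependency: required, so a constructor argument
        private AuditListener audit = AuditListener.NONE; // notification: defaults to the null object

        AuctionHouse(PaymentGateway payments) {
            this.payments = payments;
        }

        // Notifications and adjustments can be reconfigured after construction.
        void setAuditListener(AuditListener audit) {
            this.audit = audit;
        }

        void settle(String account, int amount) {
            payments.charge(account, amount);
            audit.recorded("charged " + account);
        }
    }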

They strongly believe in the principle of "Tell, Don't Ask".

For me, the best piece of advice in the book is the Test Data Builder pattern. I have started to use this pattern heavily in my team at Red Gate Software, and our tests are now far more readable and flexible. I really recommend having a look at the pattern to see if it can be a good fit for you.
We find that test data builders help to reduce duplication and keep tests expressive and resilient to change.
First, they wrap up most of the syntax noise when creating new objects. Second, they make the default case simple, and special cases not much more complicated. Third, they protect the test against changes in the structure of its objects.
We can write test code that’s easier to read and in which errors are easier to spot, because each builder method identifies the purpose of its parameter.
Combined with factory methods and test scaffolding, test data builders help us write more literate, declarative tests that describe the intention of a feature, not just a sequence of steps to drive it.
We can even use higher-level tests to communicate directly with non-technical stakeholders, such as business analysts.
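Here is a minimal sketch of the pattern; the Order and OrderBuilder names are hypothetical rather than taken from the book:

    // Minimal domain class for the sketch.
    class Order {
        final String customer;
        final int quantity;
        final boolean giftWrapped;

        Order(String customer, int quantity, boolean giftWrapped) {
            this.customer = customer;
            this.quantity = quantity;
            this.giftWrapped = giftWrapped;
        }
    }

    // The builder supplies sensible defaults, so each test states only what it cares about.
    class OrderBuilder {
        private String customer = "a customer";
        private int quantity = 1;
        private boolean giftWrapped = false;

        static OrderBuilder anOrder() { return new OrderBuilder(); }

        OrderBuilder from(String customer) { this.customer = customer; return this; }
        OrderBuilder withQuantity(int quantity) { this.quantity = quantity; return this; }
        OrderBuilder giftWrapped() { this.giftWrapped = true; return this; }

        Order build() { return new Order(customer, quantity, giftWrapped); }
    }

A test then reads like its intent, for example Order order = anOrder().withQuantity(3).giftWrapped().build(); and if Order later grows a new constructor parameter, only the builder changes, not every test that creates an order.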
Another thing I totally agree on is the importance of good test diagnostics.
The last thing we should have to do is crack open the debugger and step through the tested code to find the point of disagreement.
We’ve learned the hard way to make tests fail informatively. If a failing test clearly explains what has failed and why, we can quickly diagnose and correct the code. Then, we can get on with the next task.
The easiest way to improve diagnostics is to keep each test small and focused and give tests readable names.
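As a small illustration of mine, using the Hamcrest matchers the book relies on throughout: a matcher-based assertion reports both the expectation and the actual value, so the failure itself explains the disagreement:

    import static org.hamcrest.MatcherAssert.assertThat;
    import static org.hamcrest.Matchers.hasItem;

    import java.util.List;

    public class DiagnosticsExample {
        public static void main(String[] args) {
            List<String> bidders = List.of("alice", "bob");

            // On failure the matcher describes itself and the actual value, roughly:
            //   Expected: a collection containing "carol"
            //        but: mismatches were: [was "alice", was "bob"]
            assertThat(bidders, hasItem("carol"));
        }
    }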
Sometimes tests are a little fragile, especially when you write assertions on strings.
One interesting effect of trying to write precise assertions against text strings is that the effort often suggests that we’re missing an intermediate structure object. Most of the code would be written in terms of this intermediate object, a structured value that carries all the relevant fields.
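A hypothetical illustration: instead of pinning every test to one big formatted string, introduce the structured value the assertions were hinting at and push formatting to the edge:

    // Fragile: couples every test to the exact formatting.
    //   assertEquals("Order 42: 3 x widget for alice", display.renderedText());

    // Sturdier: the intermediate structure carries the relevant fields.
    class OrderSummary {
        final int orderId;
        final int quantity;
        final String product;
        final String customer;

        OrderSummary(int orderId, int quantity, String product, String customer) {
            this.orderId = orderId;
            this.quantity = quantity;
            this.product = product;
            this.customer = customer;
        }

        // Formatting lives in one place; most code and tests use the fields instead.
        String render() {
            return "Order " + orderId + ": " + quantity + " x " + product + " for " + customer;
        }
    }

Tests can then make precise assertions on individual fields, and only one test needs to pin down the rendered text.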
The book closes by discussing the challenges of testing multithreaded code.
Their solution is to make the system under test deterministic. This also means that we are no longer exercising the entire system, so some slower system testing is required to increase fidelity.
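One common way to get that determinism in unit tests (a sketch of mine in the spirit of the book's approach, with hypothetical names) is to hide threading behind the standard java.util.concurrent.Executor interface and substitute a synchronous implementation when testing:

    import java.util.concurrent.Executor;

    class AuctionSearch {
        private final Executor executor;
        private final Runnable onFinished;

        AuctionSearch(Executor executor, Runnable onFinished) {
            this.executor = executor;
            this.onFinished = onFinished;
        }

        void search(String keyword) {
            executor.execute(() -> {
                // ... search for the keyword ...
                onFinished.run();
            });
        }
    }

    // Production wiring uses real threads, e.g. Executors.newFixedThreadPool(4).
    // In a unit test, Runnable::run executes each task inline on the calling
    // thread, so the test observes a completely deterministic sequence:
    //   AuctionSearch search = new AuctionSearch(Runnable::run, () -> {});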


