Gil Zilberfeld is a lean-agile software consultant who applies agile principles in development teams. In his interview with A1QA, Gil shares practical tips on what to expect and what actions to take when introducing unit testing into the development life cycle.
Hi Gil, can you please tell us a little bit about yourself, your professional experience and expertise?
Hi, and thank you for the interview.
I’ve been around software all my life. For over 20 years I’ve been developing software, testing it, managing teams of developers and testers, and implementing and improving development practices. I’ve also done a lot of product management, marketing, support and pre-sales work.
Currently, I’m a consultant, helping development teams in all areas: from implementing new development practices to fleshing out stories, creating testing strategies and rolling out MVPs. I also help them in applying Scrum and Kanban methodologies (usually somewhere in between, depending on what they need).
You are the author of the Everyday Unit Testing book. What is the main purpose of your book?
In the introduction to the book I ask: “Why another unit testing book?”. The problem I’ve found is that there are people who have merely installed a unit test framework and feel they already understand unit testing. When they bump into real-world problems, when they need knowledge and skills on top of the tools, they decide: “Unit testing is a nice idea, but it won’t work here.” The book tries to get people over this hump.
The purpose is to let people who have already taken their first steps in unit testing know what to expect, both when handling legacy code and when rolling out the process in the smoothest way possible.
How to choose the appropriate testing strategy?
There is no single answer, but there are ways to think about the problem you’re trying to solve. I usually start with risks. How complex is the system and what do we know about it? How well do the people who are going to write and test it know the business domain? Is the feature we’re adding going into bug-ridden code? Or is it a completely new piece of code?
Different answers can lead to different strategies.
We also need to take the development process into account. In a test-at-the-end process, which usually means no time for testing, we need to make sure at least those risks are covered. In agile methodologies, this is an iterative process, where we focus on testing the new functionality while not neglecting the overall risks. An iterative process is good, because it allows us to learn from testing the current increment and re-plan the next one.
Then we go down to the feature or story level. Here, too, there are things we need to focus on, because we don’t have enough time to test everything. We need to decide what we can automate and where to use exploratory testing.
Finally, there are heuristics we can apply – SFDEPOT, for example. It lets us create a map of possible testing options, then pick those we can fit into the time we have, and where we can learn the most.
All this applies to the entire testing strategy. Within it, unit testing comes into play: the more unit tests we have, the more attention can be paid to integration, functional and exploratory testing.
What are the problems that beginners usually face when it comes to unit testing?
Unit testing and test driven development (TDD) seem to be easy. The tools are quite simple. The examples make sense. Usually there is great documentation and/or community to help. The bar for entry looks very low.
And then you try to write tests for your 10-year-old legacy code. Suddenly, none of the simple examples fit. It seems risky to change the code in order to make it testable, which is often required. Tests take a lot of time to write, and since you’re a beginner, the tests are not good enough: some are unreadable, and some are even useless.
On the other hand, there is no immediate reward. The tests need to run for some time before they actually catch someone breaking the code, and only then do we see their value.
So the problem is not technical. It’s about not seeing the value while still paying the cost. The people who persevere, and believe strongly enough that it will pay off, will see the costs shrink in the end.
Why is unit testing necessary in web development?
It’s not necessary, as long as you don’t have bugs. Nothing is really necessary if you have alternatives.
If you have alternatives for regression suites that run quickly, don’t require a web server or other dependencies and point you to where the problem is – you probably have a great alternative. Use it.
Other types of tests are very good to have, but you need unit tests for the quickest feedback, hopefully on the developer machine before checking the code in.
How does unit testing impact the development process?
As I’ve said, there’s an upfront payment, but the dividends are greater. The beginning is hard, because your legacy code puts up resistance, and you have to bend it to your will. That means that developing features with tests can initially take even twice as long.
But after a while, that’s what development begins to look like. It takes significantly less time to write tests, both because you’re better, and the code becomes more testable. And since you run the tests all the time, bugs are found very quickly (not three months into the manual testing cycle). As you add more tests, you trust your automation suite more, get fewer bugs and the testers get more time for exploratory testing.
So there’s a big impact on the whole process. If you let it.
What should be tested in unit tests?
Small pieces of code. These could be single methods, or a combination of methods that do an operation or two. Having bigger tests, or more code to test doesn’t mean that the tests are not valuable, though.
Good unit tests run quickly regardless of when and where you run them and can point to the offending code when they fail.
Specifically, we want to test where bugs appear: mainly conditionals (if/else), forks in the code (switch/case) or a combination of both. We’d also like to test boundary cases that are easy to reach at the unit level, but hard to drive the whole application into. Think of testing the error handling for a communication break. You won’t be cutting the communication line every time you run the tests, so checking how the error handling code behaves is much easier at the unit level, with some mocking.
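As a minimal sketch of the communication-break example, using Python’s `unittest.mock` (the function and method names here are hypothetical, invented for illustration):

```python
from unittest.mock import Mock

# Hypothetical code under test: fetch_status() reads from some
# transport and must degrade gracefully when the connection breaks.
def fetch_status(transport):
    try:
        return transport.read_status()
    except ConnectionError:
        return "offline"

# Simulate the communication break with a mock instead of
# cutting a real line.
transport = Mock()
transport.read_status.side_effect = ConnectionError("link down")
assert fetch_status(transport) == "offline"

# And the happy path, for contrast.
transport.read_status.side_effect = None
transport.read_status.return_value = "ok"
assert fetch_status(transport) == "ok"
```

The error path runs in milliseconds on every test run, something no end-to-end setup could do cheaply.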
What is the right way to deal with legacy code?
Rewrite it, of course!
It usually comes to that. The code becomes more and more complex until we decide we need to rewrite it.
But the professional answer is “incrementally”. There’s a lot of experience in the field on how to take small parts of the code, extract them at very low risk and add a few tests for them. While it may not look feasible, and definitely looks risky, simplifying a legacy code base is achievable if you take it one step at a time. That means that when you work on a feature or fix a bug, you add a test. If that test requires small modifications to the code, make them. If it requires a big, risky change – decide and move on. Overall you’ll see that most of the code can move, piece by piece, to a simpler, testable design.
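One common low-risk step is extracting a decision buried in a legacy method into its own function and pinning it with tests. A sketch, with entirely hypothetical names and rules:

```python
# Hypothetical legacy scenario: discount logic was tangled with I/O
# inside checkout(). Step one extracts the pure decision...
def discount_rate(total, is_member):
    # The extracted conditional: the risky part worth pinning with tests.
    if is_member and total > 100:
        return 0.10
    if total > 200:
        return 0.05
    return 0.0

# ...so the legacy caller shrinks to a thin wrapper around it:
def checkout(cart, printer):
    total = sum(cart)
    rate = discount_rate(total, is_member=True)
    printer(f"pay: {total * (1 - rate):.2f}")

# A few cheap unit tests now cover the extracted logic.
assert discount_rate(150, True) == 0.10
assert discount_rate(250, False) == 0.05
assert discount_rate(50, False) == 0.0
```

Each such extraction is small enough to review easily, and the tests guard the next step.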
Always do a code and test review. Having more eyes on the problem and the solution will leave the code (and tests) in a better state.
How can a tester choose the right kind of test?
I don’t think there is one right test. The tester (or the developer for that matter) needs to understand the risk, the business domain, and what can go wrong. Maybe a test is not needed at all?
Here’s an example: the code gets a pointer that may be null as a parameter. This is a possible crash risk. But if you know more about the calling code, you know that in the current state there’s no possibility of a null value reaching our code. So why write a test? Should the tester spend time trying to reach this “impossible” state?
We have a finite amount of time for testing, so we should focus on the important things. A good tester needs to apply all their knowledge and decide where to put the effort, based on the strategy we talked about.
When should Test Driven Development (TDD) be used?
Tough question. TDD proponents will say: on all code. But that’s not completely right. There are many cases where test-first can help, like fixing a bug or even adding a feature to existing legacy code. TDD on top of test-first (that’s the design part) is much easier and more beneficial with new code.
That covers most cases. So why are most people not doing it?
Again, the problem is not with the technique or the tools. TDD is about discipline: going from red to green means adding just a small bit of code to make the test pass, or even writing the test first. Most developers lack that discipline. While it can be enforced, it usually isn’t. Much in the way of “we need to release, so we’ll skip testing”, quality usually takes a back seat to producing code. Sad but true.
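One red-to-green increment can be sketched in a few lines of Python (the `slugify` function and its behavior are hypothetical, chosen only to illustrate the cycle):

```python
# Step 1 (red): this test is written first and fails,
# because slugify() does not exist yet.
def test_spaces_become_dashes():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): just enough code to make the test pass.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

# Step 3 (refactor): tidy up while the test keeps you honest.
test_spaces_become_dashes()  # now passes
```

The discipline is in stopping at “just enough”: no speculative features, only what the failing test demands.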
What tools are generally used in unit testing and TDD?
There are two main families of tools. The first is unit testing frameworks, which basically provide three features:
- Test identification – how the framework tells a test from a regular function. Sometimes it’s also called test registration, because the user adds a test to the framework.
- Execution and reporting – we want to run all (or some) tests and get their result.
- Assert APIs – a way to define the acceptance criteria for tests.
Frameworks contain a few more features today, but that’s it basically.
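Those three features can be seen in miniature with Python’s built-in `unittest` (the test class and its contents are a made-up example):

```python
import unittest

class TestMath(unittest.TestCase):
    # Test identification: methods named "test_*" in a TestCase
    # subclass are what the framework registers as tests.
    def test_addition(self):
        # Assert API: the acceptance criterion for this test.
        self.assertEqual(2 + 2, 4)

# Execution and reporting: load the tests, run them, and get
# a result object with pass/fail counts.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestMath)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Other frameworks (JUnit, NUnit, pytest and so on) package the same three features with different syntax.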
The second family is what we call “mocking” tools. These range in capabilities, based on language and technology. Mocks are essential when working in legacy code.
A mocking framework has two uses:
- Changing the behavior of the dependency, so you can test the code in isolation
- Testing the interaction between the code and the dependency
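Both uses can be shown with `unittest.mock` (the `greet`/`mailer` names are hypothetical):

```python
from unittest.mock import Mock

# Hypothetical code under test: greet() depends on a user object
# and a mailer collaborator.
def greet(user, mailer):
    name = user.get_name()
    mailer.send(name, "Welcome!")

user = Mock()
mailer = Mock()

# Use 1: change the dependency's behavior, so the code runs in isolation.
user.get_name.return_value = "Dana"

greet(user, mailer)

# Use 2: test the interaction between the code and the dependency.
mailer.send.assert_called_once_with("Dana", "Welcome!")
```

The first use isolates the unit; the second verifies the contract between the unit and its collaborator.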
Thanks, Gil, for sharing your thoughts and ideas. We hope to talk to you again.