Involving the QA Team in the Agile Process: Interview with Matthew Heusser
Matthew Heusser is a managing consultant at Excelon Development. Matt has deep experience in software testing, project management, development, writing, and systems improvement. His extensive network of contacts in these fields has enabled him to put together a diversified, high-level team of experts at Excelon.
Matthew was lead organizer for the initial Great Lakes Software Excellence Conference, a regional event that continues today. He organized the Agile-Alliance Sponsored Workshop on the Technical Debt Metaphor, and recently published a leading position paper on the subject for Better Software magazine. Matthew served on the board of directors for the Association for Software Testing, and was the testing track chair for the Agile Conference in 2013 and 2014.
a1qa: Matthew, involving a QA team in the Agile process is always challenging. Nevertheless, it is always worth trying, and we hope you agree. Do you have your own method of introducing Agile to a QA team, or some leadership tricks you follow?
Matthew Heusser: Well, I’d say that the ideal Agile process moves from what (requirements) to done (in production), and done includes “running”, or production support. When you start to cut involvement back, you tend to decrease throughput and quality.
Introducing Agile into QA consulting, well, I see two or three big pieces to that. I’d start with introducing the idea of iterations, or sprints. This brings up the problem of regression-testing, or release candidate testing, in a day or two, preferably less. This isn’t a problem for some teams; they work on twenty to a hundred small pieces that can be deployed independently. For others, we have to talk about strategy.
My preference is to talk to the whole delivery team about Agile adoption. Typically I find that it is possible for the developers to scale back the work to small chunks. Even in an ancient system, the programmers can slice the work thin enough to get done in two weeks – ‘just’ adding a column to a report, and so on. The challenge is the regression testing. But if we ‘just’ added a column to a report, we can do a much more risk-adjusted test run. So one ‘aha’ moment is that regression testing does not have to mean re-running a specific test plan; instead it can mean varying the test approach based on risk.
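To make the risk-adjusted regression idea concrete, here is a minimal sketch of how a team might tag automated checks by feature area and risk, so that a change which ‘just’ adds a column to a report triggers only the relevant slice of the suite. This is illustrative only, not tooling from the interview; the marker names and the report function are made up.

```python
# A minimal, illustrative sketch of risk-tagged regression checks with pytest markers.
# After a small, report-only change, run just the relevant slice:
#   pytest -m "reports and high_risk" test_reports.py
# Before a major release, run the whole suite:
#   pytest test_reports.py
# (Register the marker names in pytest.ini to avoid "unknown marker" warnings.)
import pytest


def build_sales_report(region):
    # Stand-in for the real application code under test.
    return {"title": "Sales by Region", "columns": ["Region", "Units", "Discount"]}


@pytest.mark.reports
@pytest.mark.high_risk
def test_new_column_appears_in_report():
    report = build_sales_report(region="EMEA")
    assert "Discount" in report["columns"]


@pytest.mark.reports
@pytest.mark.low_risk
def test_report_title_unchanged():
    report = build_sales_report(region="EMEA")
    assert report["title"] == "Sales by Region"


@pytest.mark.billing
@pytest.mark.high_risk
def test_billing_untouched_by_report_change():
    # Not selected by the narrow "reports and high_risk" run above.
    assert build_sales_report(region="EMEA")["columns"]
```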
The second big issue to tackle is story-testing – breaking a huge requirements document into chunks small enough to be actionable and yet meaningful. With traditional development, we tend to assume an 80-page specification implies deep thought. When teams chunk work and look for examples, we can often push disagreements and misunderstandings upstream, before coding begins, in a story kickoff. This is critical to actually getting testing done in tight time boxes – you just don’t have time for the back-and-forth bickering over what the software should do. So I see story kickoff, or “shift left”, or whatever you’d like to call it, as another ‘aha’ moment for agile testing.
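As a sketch of what looking for examples at story kickoff can produce, the snippet below turns agreed examples into a small executable check. The shipping-fee story and its numbers are hypothetical, chosen only to show the shape of the practice.

```python
# A hypothetical sketch: examples agreed at story kickoff, captured as data
# and run as tests so disagreements surface before coding begins.
import pytest


def shipping_fee(order_total, destination):
    # Stand-in implementation written to satisfy the kickoff examples.
    if destination != "US":
        return 9.99
    return 0.00 if order_total >= 100.00 else 4.99


# Each row is one concrete example the team discussed at kickoff:
# (order_total, destination, expected_fee)
KICKOFF_EXAMPLES = [
    (25.00, "US", 4.99),    # small domestic order pays the flat fee
    (100.00, "US", 0.00),   # free shipping at or above $100
    (25.00, "CA", 9.99),    # international orders pay the higher rate
]


@pytest.mark.parametrize("total,destination,expected", KICKOFF_EXAMPLES)
def test_shipping_fee_matches_kickoff_examples(total, destination, expected):
    assert shipping_fee(total, destination) == pytest.approx(expected)
```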
a1qa: Lean software testing is one of your interests. Still, lean testing is an offshoot of Agile. So what makes lean software testing different from other Agile approaches?
Matthew Heusser: Lean Testing, as I practice and teach it, provides a theory for why Agile software works – with small batches, limited work in progress, moving toward one-piece flow, and so on. It also provides concrete measures and metrics that improve team performance even when gamed.
Dare I say it, those are two areas where I see Agile falling down. Scrum or XP introduced as sets of rules without theory tend to fail, and Agile ‘metrics’ like Velocity can easily be gamed to the detriment of the system.
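For a rough idea of what concrete measures of flow can look like, here is a small sketch that computes lead time and throughput from finished work items. It is an illustration of flow-style measurement, not necessarily the exact metrics taught in Matthew’s lean testing material, and all the dates are invented.

```python
# An illustrative sketch of flow measurements: lead time per work item and
# throughput per week across a set of finished items.
from dataclasses import dataclass
from datetime import date
from statistics import median


@dataclass
class WorkItem:
    started: date
    finished: date


def lead_time_days(item):
    return (item.finished - item.started).days


def summarize(items):
    lead_times = [lead_time_days(i) for i in items]
    span_weeks = max(
        (max(i.finished for i in items) - min(i.started for i in items)).days / 7,
        1.0,
    )
    return {
        "median_lead_time_days": median(lead_times),
        "throughput_per_week": round(len(items) / span_weeks, 2),
    }


# Invented data: three stories finished by one team.
finished = [
    WorkItem(date(2015, 3, 2), date(2015, 3, 6)),
    WorkItem(date(2015, 3, 3), date(2015, 3, 10)),
    WorkItem(date(2015, 3, 9), date(2015, 3, 11)),
]
print(summarize(finished))  # {'median_lead_time_days': 4, 'throughput_per_week': 2.33}
```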
a1qa: Let’s talk about testing in Scrum. It’s a well-known fact that Scrum has a number of pros: it saves time and delivers a quality product on schedule. But what about the cons? There is a viewpoint that Scrum is good only for small, fast-moving projects, since it works well only with a small team, and that the methodology needs experienced team members. Do you agree with that? And if you have faced these challenges, how did you manage to handle them?
Matthew Heusser: Well, first of all, you have to understand the problem Scrum was introduced to solve – requirements churning so fast that no one could get anything done. If you don’t have that problem – if, for example, your team can turn around code in a day – then off-the-shelf Scrum might actually slow you down!
Here’s why – Scrum tends to prescribe iterations of a fixed length, with the standard now around two weeks. You figure out all the stories for those two weeks, the technical team commits to them, and the product management team goes off to figure out what to do next sprint. If your team already does just-in-time requirements, and that is not a problem, then you’ve increased the batch size. Lean methods wouldn’t think of that as an improvement.
As for the argument that Scrum only works with small teams, I would agree – at least to the extent that Scrum was designed for small teams and doesn’t tell you what to do with big ones. But let me push back on that a little, at least to say that large teams organized by function don’t work, or at least, don’t work well.
For example, I worked on one project that you might consider medium-sized – perhaps sixty people full-time for two years. The teams were organized functionally. Every day you’d have a test standup, a frontend standup, a backend standup, a scrum of scrums, an operations standup … who knows. I didn’t even have exposure to all the standups! Nor did I want it.
Scrum would prescribe cross-functional teams. Instead of eight functional teams of eight people each, we could have had eight feature teams with eight people each. This would have required a little infrastructure work so that the teams didn’t step on each other, but that would have been good design work anyway.
a1qa: And finally, applying Lean software testing, Scrum, or any other methodology, can testers reach 100% application/system quality and make the system highly efficient? Or is QA an endless process?
Matthew Heusser: As long as a new version of Internet Explorer, or Chrome, or Firefox, or Safari, or iOS, or Android, can introduce a change to JavaScript that ‘breaks’ existing functionality, I doubt we’ll have 100% application/system quality any time soon. Of course, you can lock down the browser, and only support a specific version of IE. You can lock down the language, and only support English US and English UK keyboards, and so on. Apple controlled everything on the original version of the iPod, from hardware to OS to software. In those environments, you can get a lot closer to 100% correct.

For example, a few years ago I wrote some Electronic Data Interchange (EDI) software that ‘just’ converted from one EDI format to another in plain text. That software was certainly fit for use; I am not aware of a single problem in production, ever. Yet a multi-gigabyte file with lines of text that were too long would probably crash it.
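As an aside on that last point, the sketch below shows one way such a converter might guard against pathological input by streaming the file line by line and rejecting over-long lines up front. It is a hypothetical illustration, not the EDI code described in the interview.

```python
# A hypothetical sketch, not the EDI converter from the interview: stream a
# plain-text file line by line and reject over-long lines early instead of
# letting a pathological multi-gigabyte file exhaust memory.
MAX_LINE_LENGTH = 10_000  # illustrative limit


def convert_segment(line):
    # Stand-in for the real format-to-format translation.
    return line.upper()


def convert_file(source_path, target_path):
    with open(source_path, "r", encoding="ascii") as src, \
         open(target_path, "w", encoding="ascii") as dst:
        for line_number, line in enumerate(src, start=1):
            if len(line) > MAX_LINE_LENGTH:
                raise ValueError(
                    f"line {line_number} exceeds {MAX_LINE_LENGTH} characters"
                )
            dst.write(convert_segment(line))
```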
So there may be a few domains where it may be possible to get close to 100%. For the most part, for what I do, I can be most valuable in the domains that offer risk – because riskier domains are ones where risk management is called for.
Time pressure? Uncertainty? Ambiguity? Sign me up, because it is often those domains where a tester can add the most value.
Matthew, thanks for sharing your viewpoint and experience. We’ll be glad to see you and talk to you again.
You can follow Matthew Heusser on Twitter and LinkedIn, and read his blog.