
Test specialists are essential members of software teams. Interview with Alan Page

18 December 2014
Interviews
Article by a1qa

Alan Page has been a software tester for nearly 20 years. He was the lead author of the book “How We Test Software at Microsoft”, contributed chapters to Beautiful Testing and Experiences of Test Automation: Case Studies of Software Test Automation, and recently published The A Word, a collection of essays on test automation. He also writes about a variety of software engineering subjects on his blog at http://angryweasel.com/blog.
Alan joined Microsoft as a member of the Windows 95 team, and since then has worked on a variety of Windows releases, Internet Explorer, Office Lync, and Xbox One. Alan also served for two years as Microsoft’s Director of Test Excellence.

a1qa: Do you think Data-Driven Quality is a new wave of testing or an approach that QA engineers should learn to improve software?

Alan Page: These days I struggle to figure out where the lines are between testing and just making good software in sensible and efficient ways.

Data-driven quality (DDQ) is not necessarily a new idea. Software teams have used data for decades in an attempt to understand software quality. We used data from test results or code coverage to make an educated guess on whether the software was ready to ship.

The difference with DDQ today is that we want to use data generated by thousands (or millions) of users to understand what’s working, what’s not working, and how the product is being used. Many testers are concerned with being the voice of the customer, or a customer advocate – but I don’t care what kind of rock-star-ninja-superhero tester you are, there is absolutely nobody better at acting like a customer than the actual customer. Every major web service – Facebook, Twitter, Amazon, Netflix, etc. – relies heavily on analysis of what customers are doing and how they are using the service. But DDQ isn’t just for services. When doing web testing for Xbox, for example, we gathered a lot of data on how users were interacting with the Xbox Live service, but we also frequently used information gathered from the console itself to help us identify a variety of potential and real issues.

At Microsoft, we’ve used Windows Error Reporting (WER) and the Customer Experience Improvement Program for years. The opt-in is that little checkbox that says, “Do you want to send usage information to Microsoft?”, or, “Do you want to join the Customer Experience Improvement Program?”. Those programs have provided a lot of insight into customer usage.

Today, we’re releasing updates to many web services at least once a day. While bigger changes may get extra scrutiny, many changes go from developer desktop to users within minutes. Deployment and monitoring systems give us the ability to automatically analyze for regressions in performance, functionality, or error rates, and either immediately roll back the change or deploy it to more servers for additional coverage. We can also look at huge sets of data to gather insights on search patterns, matchmaking in Xbox Live, or just about anything we can imagine in order to get a more accurate understanding of how customers are using our software.
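As an illustration of what such an automated check might look like – a minimal sketch, not Microsoft’s actual tooling; the metric names and thresholds below are invented – a canary analysis step could compare a new deployment’s telemetry against the current baseline and decide whether to roll back or widen the rollout:

```python
# Minimal sketch of an automated canary check. The Metrics fields,
# thresholds, and decision rules are illustrative assumptions, not
# any real deployment system's API.

from dataclasses import dataclass


@dataclass
class Metrics:
    requests: int
    errors: int
    p95_latency_ms: float

    @property
    def error_rate(self) -> float:
        return self.errors / self.requests if self.requests else 0.0


def canary_decision(baseline: Metrics, canary: Metrics,
                    max_error_delta: float = 0.005,
                    max_latency_ratio: float = 1.2) -> str:
    """Compare canary telemetry to the baseline and decide what to do."""
    if canary.error_rate > baseline.error_rate + max_error_delta:
        return "rollback"  # error rate regressed beyond tolerance
    if canary.p95_latency_ms > baseline.p95_latency_ms * max_latency_ratio:
        return "rollback"  # latency regressed beyond tolerance
    return "promote"       # healthy: deploy to more servers


# Example: error rate jumping from 0.1% to 2% triggers a rollback.
baseline = Metrics(requests=100_000, errors=100, p95_latency_ms=180.0)
canary = Metrics(requests=5_000, errors=100, p95_latency_ms=190.0)
print(canary_decision(baseline, canary))  # -> rollback
```

In a real pipeline the metrics would come from a monitoring system and the thresholds would be tuned per service; the point is simply that the promote-or-rollback decision can be made automatically.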

DDQ is definitely an approach that’s here to stay – and on some teams, analyzing (and generating) customer data is a sensible activity for testers to take on. I don’t know if you want to call it a testing approach or an approach to software engineering in general, but using data to understand how customers are using the software – or where they are finding problems – is an approach that is here to stay.

a1qa: Combined engineering – isn’t it the end of the software testing field?

Alan Page: It’s funny (or sad?) how many testers freak out at the idea of a team without separate disciplines for testers and developers (aka “combined engineering”).

Combined engineering doesn’t mean that everyone on the team does the exact same thing. In fact, for a discipline-free engineering team to work, you need team members with a breadth of skills. Combined engineering, done well, promotes efficiency and (in my experience) quality software by eliminating the traditional walls between test and development and by encouraging more collaboration and team-owned quality. In practice, every successful combined engineering team I’ve seen has had plenty of people who were doing testing activities. Sometimes these teams have people dedicated to the testing activity, but most often, I see the test activities taken on by Generalizing Specialists (team members who have specialties, such as testing, but can contribute at some level in a number of areas).

I repeatedly see programmers take on work like measuring performance or writing stress and load tests that span the application (work often done by the test team). There’s also plenty of room on a combined engineering team for testers, but they tend to do better, and contribute more, when they have a breadth of skills they can pull out of their bag of tricks as needed. When you only do one thing, you’re often a bottleneck on a discipline-free engineering team.

For a personal example, most of my software work at Microsoft for the last 10-12 years has been in the role of a Generalizing Specialist. I’m definitely a test specialist, and people respect my ability to see the product as a whole, and to provide team leadership as needed. But I also can (and do) dive deep when needed. I led a code quality initiative for the Xbox console, identified a nice-sized pile of code hygiene issues, and worked with feature teams to get those bugs fixed (and in many cases, fixed them myself). I’ve also played the role of tool-smith from time to time and led exploratory testing efforts as well.

I know numerous people in software who have roles similar to mine. Some are developers, some are testers, and some, like me, are just software engineers. I still think there’s plenty of need for people who understand – and who are passionate about – software testing. I just don’t think it’s required (or necessary) for them to work on a test team.

a1qa: It is an eternal question: should a tester have engineering skills?

Alan Page: I often apply this bit of advice to this sort of question: “The answer to any sufficiently complex question is, it depends”. In this case, I think it depends mostly on what kind of tester you want to be. If you want to test high-performance file systems or network protocols, I think it’s going to be difficult to do without programming. Higher up the stack – say, a multi-platform email client – I think the same answer applies. I would also say that knowing about programming, or platform architecture, or how to use analysis and monitoring tools can never hurt someone’s ability to be a better tester.

I think it’s essential for a tester to have a passion for learning. The great testers I know just aren’t satisfied when their options or knowledge are limited. I started in testing without knowing anything about coding. I was good at finding bugs, and even better at finding bugs that were important enough to fix. I was also hugely curious, and wanted to learn more about software and IT. I learned networking and managed my company’s network and servers. Then I began to learn programming so I could write tools and debug difficult issues. Learning programming helped my career, but even more importantly, I think continuous learning, and the breadth of skills it gives me, has helped my career the most.

If a tester doesn’t want to learn programming or just has a hard time with it, that’s fine – but then I think they should learn how their software is deployed, or learn how to aggregate customer feedback from user forums, or learn how to use more tools to help their testing. I think the career options can be limited for testers who have a limited breadth of skills, and I think a tester’s career options and growth opportunities relate directly to the number of holes they can fill on a team. A lot of teams need (and value) a great non-technical tester, but given a choice between a great tester and a great tester who can write tools or bring up a new server when needed, I think most teams will take the second choice every time.

a1qa: How do you see the future of testing?

Alan Page: This question is so hard to answer in a world where I don’t think we can come close to agreeing on what testing is in the first place. I’m also not very good at predicting things (which is one reason I roll my eyes when I hear managers ask for detailed estimates…but that’s a different rant). At the risk of pulling my hair out – or pissing off the masses (or more likely, both), let me dump my thoughts and see where they go. But first, one more caveat. I can make a pretty safe bet that some people will read my thoughts on the future of testing and say, “that will never happen in a million years”, while others will read the same words and say, “that’s not the future, we do almost all of that today.” So, with that said, let me ramble a bit.

The not-so-controversial future is that programmers owning code quality will be ubiquitous. Programmers will write their own unit, functional, and acceptance tests, and they’ll use tools (e.g. static analysis and code coverage) as needed to ensure that they catch most of the early mistakes themselves (vs. the inefficient code-to-bug-to-fix-to-regression ping-pong match many teams play today). As I alluded to above, this is already happening in numerous organizations.
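As a deliberately trivial illustration of that workflow – the function, test, and module names below are invented, not from any real codebase – a programmer might ship a change together with its own unit test and check coverage locally (assuming the pytest-cov plugin is installed):

```python
# A small example of a developer-owned unit test (pytest style).
# The function under test is invented for illustration.

def normalize_score(raw: float, max_raw: float) -> float:
    """Scale a raw score into the range [0.0, 1.0], clamping outliers."""
    if max_raw <= 0:
        raise ValueError("max_raw must be positive")
    return min(max(raw / max_raw, 0.0), 1.0)


def test_normalize_score_scales_and_clamps():
    assert normalize_score(50, 100) == 0.5
    assert normalize_score(150, 100) == 1.0  # clamped high
    assert normalize_score(-5, 100) == 0.0   # clamped low


# Coverage check, assuming the pytest-cov plugin (module name is
# hypothetical):
#   pytest --cov=scoring test_scoring.py
```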

The future gets scary, however, when organizations think that code quality is the same as product quality. We have all seen (some of us first-hand) wonderfully functioning software that doesn’t provide any customer value; or cool new software that isn’t compatible with other software it needs to work with. There are activities that still need to happen in order to take software on the long journey from working to valuable. Programmers owning code quality and some aspects of testing doesn’t mean that additional quality work isn’t necessary.

What happens to get there varies. For many teams, the next step is to deploy to customers – maybe all of them, or maybe a portion of them – and then gather data on what’s working and what’s not working, capture any crash information or performance degradation, and automatically make a choice on whether to deploy to more users or roll back to the last good version if necessary.
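One common shape for that step is a staged (ring or percentage) rollout. Here is a hypothetical sketch – deploy_to() and is_healthy() are stand-ins for a real deployment system and telemetry pipeline, not any specific product:

```python
# Hypothetical staged rollout: expose a new build to growing slices
# of users, checking health telemetry before each widening step.
# deploy_to() and is_healthy() are stand-ins for a real deployment
# system and monitoring pipeline.

import time

ROLLOUT_STAGES = [0.01, 0.05, 0.25, 1.0]  # fraction of users


def deploy_to(fraction: float) -> None:
    print(f"deploying new build to {fraction:.0%} of users")


def is_healthy() -> bool:
    # In practice: query crash counts, error rates, and performance
    # counters from telemetry and compare them against the old build.
    return True


def staged_rollout() -> bool:
    for fraction in ROLLOUT_STAGES:
        deploy_to(fraction)
        time.sleep(1)  # stand-in for a real soak/observation window
        if not is_healthy():
            deploy_to(0.0)  # roll back to the last good version
            return False
    return True


if __name__ == "__main__":
    staged_rollout()
```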

Is what I just described testing? One could say that a lot of what a team looks for in production is the same thing many test teams look for in automated test runs today…but one could also say that it’s just analysis tools and deployment systems. In my opinion, it doesn’t matter. Roles are going to get fuzzier or go away, and I think that’s good – and nothing to be afraid of.

Deploying daily (or hourly) builds directly to production automatically is not always feasible – even for web services or sites (financial institutions, for example, are not likely to approve of this approach for many parts of their systems). I also doubt any user would tolerate an entire new operating system every day, or even a new file system driver. But I predict that every organization – from banks to major software companies – will find a way to get customer usage data from some set of customers frequently, and use that data to learn, to improve software faster, and to release updates and new features frequently.

A hidden question here, beyond what testing looks like in the future, is what the testers of today will do in the future. I find it funny that the people who seem to get the most outraged about testing evolving are testers. While I see a future where testing roles may go away, I also see a future where test specialists (i.e. critical thinkers who see the product as a whole and help the team focus on the most critical issues) are essential members of most software teams.

Regardless of whether you agree with my future, I think the trends of more frequent releases and wide scale development of web services and apps dictate that the future of testing will change. Time will tell.

Alan, thank you for sharing your viewpoint and experience. We hope to talk to you again to cover a few more interesting issues.
