
A story by an anti-bottleneck tester. Interview with Eric Jacobson

15 January 2015, by a1qa

Eric Jacobson has been testing software for 14 years. He entered the IT industry teaching end-user software courses, but a development team at Lucent Technologies convinced him to become a tester. Later, Eric became lead tester of Turner Broadcasting’s traffic system, responsible for generating billions of dollars annually via ad placement. After managing other testers at Turner, he accepted a position as Principal Test Architect at Atlanta-based Cardlytics.

Eric is a highly rated conference speaker and has been posting his thoughts on improving testing at www.testthisblog.com nearly every week since 2007. He also enjoys playing clawhammer banjo, woodworking, caving, and spending time with his son, daughter, and wife.

a1qa: In one of your posts you mentioned that resolving tests as “PASS” or “FAIL” keeps you from actual testing. So what is actual testing?

Eric Jacobson: The “actual testing” I was referring to is the open-ended investigation part of software testing. I was contrasting it with “checking”. It took me years to notice, but the way I documented tests was actually impeding my testing. I’d been stuck in a mindset that test cases must resolve to “PASS” or “FAIL”… for both test planning and exploratory testing sessions.

This is a big deal! I suspect most testers are stuck in the same trap. It’s like a kind of test-documentation language blindness. Even if a test began as an investigation, my brain still framed it to resolve as “PASS” or “FAIL”. For example, a test that began as “I wonder if I can submit an order with no line items” would get documented as “Submit button should not be active unless line items are populated.” The former might generate better testing while the latter might limit creativity. The big mind shift here is NOT to box yourself in by forcing every test instruction to resolve to PASS or FAIL. As a tester, it’s a very liberating idea.

What if instead, you just list the trigger to perform an open-ended investigation? If you have a list of these investigation triggers, you can resolve them as “DONE”. The assumption is they were completed and any bugs or issues have been otherwise shared.
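To make the idea concrete, here is a minimal sketch (the structure and names are hypothetical, not from the interview) of how an investigation trigger might be recorded so that it resolves to DONE rather than PASS/FAIL:

```python
# Hypothetical sketch: a session log entry for an open-ended
# investigation trigger. It resolves to DONE once the investigation
# is complete; bugs and issues are shared separately, not encoded
# as a PASS/FAIL verdict.
from dataclasses import dataclass, field

@dataclass
class InvestigationTrigger:
    charter: str                  # the open-ended question that starts the session
    done: bool = False            # DONE means "investigated", not "passed"
    notes: list[str] = field(default_factory=list)  # findings shared elsewhere

session = InvestigationTrigger(
    charter="I wonder if I can submit an order with no line items."
)
session.notes.append(
    "Submit button stayed active with zero line items; raised with the team."
)
session.done = True

print(session.done)  # True
```

The point of the structure is that nothing forces the tester to pre-judge the outcome: the charter stays an open question, and the record only says the investigation happened.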

a1qa: You questioned the testers’ “power to declare something as a bug”. Isn’t it a tester’s job?

Eric Jacobson: Ha ha! Well kind of. But I think our role as testers can be improved upon with a slight variation.

How about, a tester’s job is to raise the possibility that something may be a problem? Raising the possibility forces a magical little thing called a “conversation” to take place. The conversation might be, “oh that’s nasty, log a bug!” or “that sounds like a problem John just noticed, talk to him” or “actually, it’s by design, the requirements are stale”. A conversation might reduce rejected bugs, duplicate bugs, or bugs prematurely fixed by eager programmers. The conversation provides us testers with feedback about what is or isn’t important. From there we can adjust.

The conversation might be inconvenient, especially for lazy testers or introverted product managers. Beyond that, it’s hard to find disadvantages. This didn’t occur to me until I read one of Michael Bolton’s clever thought reversals. He asked whether testers should have the power to delete bug reports. Most would answer no, that’s something the team or the stakeholders should decide. I agree. I would rather have a bug report repository that accurately reflects all known threats to the product. If we don’t want testers independently making decisions to remove bug reports, maybe we should use the same level of scrutiny for putting things into the bug repository.

a1qa: Is there any ideal testing approach or model that you would recommend following to optimize a tester’s work?

Eric Jacobson: No. …did I pass? Was that a trick question? My context-driven-test approach mentors just sighed with relief.

a1qa: “Anti-Bottleneck Tester”, who is this person?

Eric Jacobson: I used that term in a favorite blog post. I probably need a better term. An anti-bottleneck tester is a tester who makes decisions and suggestions that help development teams deliver software quicker, without letting its quality suffer. This is the tester I strive to be. It’s a far cry from the tester I was 13 years ago. I was a quality cop. “You can’t ship until I stamp it Certified for Production”. I was the bottleneck…and proud of it.

We messed up big time and gave ourselves a bad reputation. Some of us are still doing it. I just heard a story about a tester who said it would take three days to test a change to a GUI control’s default value. Nobody is impressed by that answer. My response would have been three minutes.

These days, programmers are moving so fast that they don’t have time to wait on testers who don’t bother learning development technologies or refining test practices. I love thinking of ways to keep up. I did a talk at a couple of US test conferences titled “You May Not Want To Test It”. The basic premise was: instead of testing everything because of a factory-like process, consider testing only where you can add value. I listed 10 patterns of things testers might want to “rubber stamp” instead of spending time testing. For example: production bugs that can’t get any worse, subjective UI changes, and race conditions too technical for some testers to set up. These are all things best tested by non-testers, and a tester who sees that and delegates the work is able to spend more time testing things where their skills can be more effective.

Another anti-bottleneck tester practice is to suggest compromises that enable on-time shipping, such as giving the users the option of shipping with the bug. I also think testers should spend less time logging trivial bugs and more time hunting for non-trivial bugs. The urge to log trivial bugs is probably left over from the ancient, infamous bug count metric.

a1qa: If reporting trivial bugs is a waste of time, does it mean QA engineers should skip them?

Eric Jacobson: It might. First of all, we are talking about “trivial” bugs here. So by definition, the threat to the product is trivial. What does trivial mean? If the product-under-test has hundreds of bugs, some are probably trivial and may never get fixed. If the product-under-test has 10 bugs, there may not be any trivial bugs.

This will sound crazy but I’ll say it anyway. I think testers are more likely to hurt their reputations by logging trivial bugs than by missing non-trivial bugs. Logging trivial bugs reflects poorly on your testing skills, especially if you miss non-trivial bugs.

At my previous job I ran bug triage meetings. I hate to say it, but here is what typically happened. Three or four times in each meeting, we would read a bug report that made the team laugh at how trivial it was. Someone would always say, “Ha ha, who logged that?” There was nothing worse than looking at the history and seeing a tester’s name…another tester obsessed with perfecting cosmetic stuff on the GUI while the support line rings all day because the product keeps timing out.

a1qa: What does quality mean to you personally?

Eric Jacobson: As a user, what comes to mind is the integrity of the development team. And by development team I mean testers, programmers, and product owners. Do they have a reputation of being transparent about problems, eager to fix problems, interested in listening and responding to users? If the answer is yes, I’m likely to forgive production bugs and continue using the product.

As a tester, this means it might be more important to react quickly to problems in the field than to hold out for “perfect software”. My father, a small business owner, taught me the customer is always right. When the needs of the customer, like time to market and new functionality, outweigh my personal tester concerns about certain bug fixes, I try to weigh everything. However, first impressions are important. They say an audience decides whether they like a presenter during the first five seconds. If this is true of software, we had better get the core stuff right. As the context-driven-testing principle says,

 “Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.”

Eric, thank you for sharing your viewpoint and ideas. We hope to talk to you again and discuss a few more topics.
