As we approach the end of 2023, it’s time to reflect on the wealth of knowledge shared during a1qa’s online roundtables.

Let’s cut to the chase!

Unveiling the importance of a1qa’s roundtables for IT leaders

Recognizing the paramount importance of fostering a dynamic exchange of QA insights and best practices, a1qa hosts a series of monthly online roundtables designed for top executives.

These exclusive sessions bring together diverse IT experts to deliberate on topical QA issues, such as quality engineering trends, test automation, and shift-left testing principles.

Roundup of 2023 a1qa’s sessions

The first quarter roundtables overview

During this period, participants discussed three relevant topics — “A practical view on QA trends for 2023,” “How to get the most of test automation,” and “Dev+QA: constructive cooperation on the way to project success.”

Analyzing QA trends helps business executives proactively shape their QA strategies, ensuring they stay in sync with the industry’s evolving landscape. Automation, in turn, assists them in accelerating an IT product’s delivery, enhancing its quality, and reducing operational expenditure.

Also, the attendees talked about the best moment for QA to step into the SDLC and methods to make communication between Dev and QA more efficient.

The second quarter roundtables overview

This period was marked by three vibrant conversations:

  1. “QA for complex software: tips for enhancing the quality” — IT peers shared the challenges they encounter when testing sophisticated systems and the ways to overcome them.
  2. “How to release a quality product within a limited budget” — C-level reps exchanged practical experience on mapping software quality expectations to a QA strategy and optimizing QA costs.
  3. “How to improve QA processes with shift-left testing principles” — participants discussed how shifting QA workflows left allows businesses to identify and fix defects early on while speeding up the release of top-quality applications.

The third quarter roundtables overview

“A closer look at the field of automated testing” took center stage during the third quarter, emphasizing how to derive more value from test automation supported by AI and behavior-driven development.

The fourth quarter roundtables overview

During the last quarter of 2023, IT executives engaged in two insightful conversations — “How to organize testing and increase confidence when starting a new project” and “Rough deadlines: how to deliver better results in less time.”

At the October event, the attendees revealed the best QA approach to choose to be confident in a project’s success from the outset, optimize ROI, and reduce business risks. The November roundtable helped the participants voice their ideas and share real-life cases on meeting tight deadlines without compromising software quality.

Thanks for being part of our roundtables in 2023!

To sum up

Our journey through the diverse and insightful roundtable discussions hosted throughout 2023 by a1qa professionals with in-depth QA and software testing expertise is a testament to the company’s commitment to fostering knowledge, collaboration, and innovation in the ever-evolving IT landscape.

From exploring emerging QA trends to delving into the nuances of automated testing, each session has played a pivotal role in helping IT executives shape future strategies.

Need support in refining the quality of your IT solutions? Reach out to a1qa’s team.

To mark World Quality Day, celebrated every second Thursday in November, let’s embark on a journey into 6 reasons why businesses should take exceptional care of software quality.

So, without further ado!

Why companies shouldn’t neglect the quality of IT products

Reason #1. Enhanced brand reputation

Consider this example: a company has released an eCommerce solution that frequently goes down during pre-holiday season sales due to the influx of shoppers, resulting in cart abandonment and lost transactions. Unhappy buyers do not bring any profit and leave negative reviews that instantly go viral and influence the opinions of potential clients.

Let’s also take a look at another case. Users flock to a streaming platform in anticipation of an enjoyable and uninterrupted viewing journey, but encounter persistent navigation glitches, buffering issues, and video freezing mid-playback. Results? Reputational harm, requiring the company to invest in significant software quality improvements.

To prevent such situations, I always suggest that businesses incorporate QA processes from the initial SDLC stages. This way, they identify errors earlier and release high-end applications that provide positive and reliable customer experiences.

A solid reputation allows an organization to stand out among its competitors and create a favorable brand image. Moreover, satisfied clients are more likely to make repeat purchases, driving business revenue.

Reason #2. Reduced post-release expenditure

Identifying and eliminating defects at the development phase is much cheaper than addressing them post-launch. If a buggy product gets into the hands of end users, it may require costly emergency fixes. For example, a critical vulnerability discovered after going live may demand immediate action, incurring unforeseen patching and incident response expenses.

If the fault appears in a financial application, the system may charge incorrect fees. This may result in compensation claims or even worse, regulatory fines.

In addition, relying on quality control allows businesses to prevent extra rework expenses, such as costly architectural changes to the software.

Reason #3. Improved customer retention and satisfaction

QA plays a pivotal role in revealing and rectifying app bugs before they reach the end user. Thus, businesses ensure a seamless and trouble-free experience for clients while meeting or even exceeding their expectations. Later, satisfied customers become loyal brand advocates, recommending the organization’s IT products to others and contributing to business growth.


Reason #4. Reinforced cybersecurity

In an era marked by the growing complexity of digital threats, companies can’t afford to overlook the paramount importance of software cybersecurity. A data breach or a privacy incident can erode confidence and tarnish the company’s reputation.

With QA at the core of their business strategies, they:

  • Uncover security concerns
  • Ensure high protection of confidential data (end-user information, financial records, addresses, e-mails) and prevent its compromise
  • Strengthen relationships with customers, boost their trust, and reduce churn rates
  • Avoid disruption of business operations, downtime, and revenue loss
  • Adhere to industry regulations, remain compliant, and avert costly legal consequences.

Reason #5. Accelerated software delivery

High-quality software is a catalyst for speeding up time-to-market, as it streamlines development processes and minimizes delays associated with bug fixes and rework.

It allows businesses to respond to market demands more efficiently, ultimately enabling them to capture opportunities faster.

Reason #6. Simplified development processes and facilitated introduction of new features

When quality is a central focus, software architecture and design are typically more robust and flexible. This means the existing codebase is less likely to present conflicts when companies integrate new features into their IT solutions.

Moreover, rigorous QA practices help identify and resolve potential bugs in new functionality early in the SDLC, reducing the risk of post-launch problems. This approach prevents costly rework and user dissatisfaction as well as minimizes disruptions.

Who can help you reach software quality excellence

While many businesses have in-house QA teams, 92% of G2000 companies opt for IT outsourcing. Here’s what they gain:

  1. Domain-specific expertise. External specialists possess extensive QA and technical knowledge and a deep understanding of the latest QA methodologies, helping set up efficient QA workflows and enhance software quality.
  2. Cost reduction. Businesses avoid expenses associated with hiring, educating, and maintaining an internal QA team, such as salaries, equipment, and infrastructure.
  3. Focus on core competencies. By entrusting the QA function to third-party experts, companies allocate their resources, time, and talent toward their main activities, such as software development or customer engagement. They enhance productivity and excel in their key areas of expertise, ultimately driving growth.
  4. Scalability and flexibility. As business requirements change, QA outsourcing can easily adapt to accommodate evolving needs. It provides flexibility, allowing businesses to scale their testing efforts up or down as needed.

Summing up

The six reasons we’ve explored in this article underscore the profound impact of IT product quality on businesses and their ability to thrive in a competitive landscape. I hope you found it useful.

If you need professional support to release high-end applications and attain the desired business goals, contact a1qa’s team.

On a final note, I would like to extend my sincere congratulations to the global IT community on World Quality Day!

Thank you for your tireless work and diligence in ensuring that software products meet the highest quality standards and help businesses grow.

The role of IT leaders has changed significantly due to rapid tech advancements and ever-evolving user expectations. Nevertheless, they must still continuously facilitate business growth, drive digital transformation, and foster innovation.

As part of the a1qa tech voice series, today we discuss true IT leadership with Alina Karachun, Account director at a1qa, who has 10+ years of experience in quality assurance and software testing. At a1qa, Alina is responsible for providing exceptional experiences for clients, increasing their satisfaction, and building and nurturing long-term relationships with customers from the Fortune 1000 list and Deloitte Fast 500 winners.

So, let’s jump in!

These days, creativity is essential for both executives and their teams. Alina, please share your effective way to maintain and nurture your team’s creativity.

I would say brainstorming sessions are one of my favorite ways to empower my team’s creativity. You bring together people with different backgrounds and expertise and receive fresh viewpoints.

The thing is that good preparation makes these meetings effective. To avoid unbalanced conversations, make sure all members contribute to the talk, no one dominates the session, and everyone has time to express their thoughts. To prevent awkward silence, announce the brainstorming in advance so that employees can prepare for it.

Do you agree that ethical leadership can help executives thrive? How does it manifest?

Oh, definitely. When leaders are guided by ethical principles, demonstrate integrity, and make decisions considering the well-being of all stakeholders including teams, they reinforce their reputation among employees, customers, and investors. We all know that credibility is key for establishing long-term relationships.

I honestly believe when they create a positive environment where everyone feels valued and heard, it helps attract and retain talents.

For ethical leadership, I suppose developing your emotional intelligence is really important so you can treat each member fairly. Sometimes, it requires setting up new — more transparent — processes, allowing top managers to control the progress of tasks, praise those who deserve it, and ethically motivate people who didn’t show good results.

According to the American Institute of Stress, 83% of US employees experience work-related stress. The same is true for IT teams dealing with tight deadlines, urgent tasks, and long to-do lists. What’s one way a technical leader can help them “ecologically” handle stress and pressure?

I’m a firm believer in the power of happiness, so my advice is to look after your team’s happiness. Happy, cohesive IT teams do more for a project’s success than anything else. When the project is finished, put the stress behind you: meet up, support each other, go hiking together, for example.

And work-life balance, of course, is critical. Your team will become more productive and better engaged in the workflows if they feel they have a good balance.

This also helps the company reduce turnover and gain a competitive edge in attracting and retaining the best talent.

But make sure you set realistic daily goals and the workloads are feasible.

Fair and just-in-time feedback may help a lot in such situations. How to make it a team habit?

I believe that clear and constructive feedback can move mountains even in situations that seem critical and hopeless. Many times, it has helped me improve team performance, enhance collaboration between all members, and reduce stress levels. And the result? It positively impacted business outcomes.

To encourage your employees to share feedback regularly, I think it’s necessary to explain and show its value for personal and professional growth, for a particular project, and for the entire organization in the long run.

People will be open to expressing their thoughts on processes, tasks, and challenges. However, this requires really well-established communication channels, such as one-on-one sessions, team syncs, or anonymous feedback surveys.

Critically, make sure all team members feel psychologically safe and comfortable when exchanging their feedback without worrying about negative consequences and judgment.

Alina, the last quick question — what one soft skill is essential for IT executives?

Hmm, great question. Oprah Winfrey once said, “Leadership is about empathy,” and I couldn’t agree more.

First, it helps me better understand end-user needs and ensure positive experiences. If we put ourselves in their shoes, we better recognize their needs, figure out the defects they face and their root causes.

Second, it allows you to manage your technical team effectively, foster an inclusive work environment, and boost productivity and job satisfaction.

And of course, since IT leaders interact with customers, product owners, stakeholders, etc., empathy facilitates prioritizing their pain points.

So essentially, empathy allows you to make more informed and effective decisions, build a more cohesive team, and establish strong, trust-based relationships.

Alina, thank you so much for providing actionable insights into IT leadership! We are looking forward to more interviews with you!

Stay tuned for the next a1qa tech voice installment with a1qa’s top executives.

To optimize your QA costs, accelerate software releases, and increase ROI with QA, reach out to a1qa’s team.

The article by Nadya Knysh, Managing director at a1qa, North America, was published on LinkedIn.

Current technologies, such as AI/ML, IoT, quantum computing, and intelligent automation, help leaders accelerate business growth, increase ROI, and stay ahead of the curve, provided they are implemented correctly and companies have assessed all possible implications for both the organization and their end users.

Today, we will discuss with Nadya Knysh, Managing director at a1qa, North America, 6 topical technologies, the challenges businesses should be ready for when applying them, and the ways to overcome them.

Let’s get started!

AI/ML. Businesses spend millions, even billions, on AI/ML enhancement and development. In your opinion, which industry will be most impacted by AI/ML and why?

“I believe that healthcare will experience the biggest and most unexpected benefits from extending ML/AI technologies application. Healthcare is an industry where decision-making is happening not just on a daily basis but on a second-by-second basis for billions of patients. There are two additional factors here.

Another factor is the cost of a mistake — and we are talking about people’s lives here. Again, having additional information on protocols and diagnostics can help avoid such mistakes, especially for rare cases.”

Intelligent automation. When implemented correctly, it provides companies with powerful benefits. However, not all businesses succeed in that. What’s the main cause of this?

“Intelligent automation has proven itself as a successful approach to re-invent many businesses and their operations. I think the most common barrier when implementing it is the idea that it’s easy and quick. Unfortunately, in many cases, it’s not. And in quite a few, it won’t be 100% the same as it was before with a human being.

One will most likely have to structure (and simplify) business processes before automation, and others will have to learn to live with that. The implementation will probably be in stages, and one will have to work with a machine and a human being at the same time. Perhaps you’ll have to go through a cultural change in the company and the acceptance of the new approach to doing things may not always be that easy.

My main advice here is to make sure you plan thoroughly: start with uncomplicated tasks — they are easy to implement and adopt. While working on simple operations, review more complex, multi-step processes: can those be simplified? Can they be broken into smaller pieces? You’ll likely have to work with people who are not tech-savvy too — be patient and listen.”

IoT devices for home. This technology is evolving by leaps and bounds. What new IoT features can we witness in the future?

“It may sound a little futuristic and maybe scary too, but my guess is that the next big step for in-home IoT is human habits learning and analysis. As for today, you can easily find an oven that you can start pre-heating on your way home or a washer that will begin working with an app click while you are sitting on your couch.

With AI development and big data that can be collected, I can see your oven connecting to your car. And your car sends a signal to your oven to turn on as it knows when you are on the way home and what the traffic situation is. Or your oven knows that you normally have a glass of wine at happy hour on Friday night and will turn on an hour later on Fridays. The accuracy of such behavior predictions is very questionable, of course, but some patterns are easier to manage and may be a good start for some individuals. So, who knows?”

Blockchain. What industry can it bring the most transparency to?

“This may be quite an obvious answer, but anything involving banking and finance. It’s not a surprise that money-related fraud is on the rise, whether it’s traditional banking, online payments, or crypto.

We all know the downsides of blockchain and specifically the performance of such a technological solution, however, I believe that the performance can be improved significantly in the near future. With this in mind, we’ll see more blockchain-based alternatives to the third-party services that we are used to now (e.g., PayPal, Square, or Stripe). Such services will not only bring additional security but also convenience to customers, different currency support, and international transactions.”

Feel free to read the full article here.

The dedicated team (DTM for short) is an engagement model widely used in the world of outsourcing, the software testing industry being no exception. Because it provides skilled, technology-oriented experts who delve deep into business processes, many clients are getting interested in QA outsourcing.

Today, we decided to raise your awareness of this model’s peculiarities and summarize the specific issues one should know before deciding on the DTM.

Sound interesting? Let’s start.

What’s the dedicated team?

Just like time & material (T&M) or fixed price (FP), the dedicated team is a business model. Its essence lies in providing the customer with an extension of their in-house team. The scope of work, team structure, and payment terms are specified in the client–service provider agreement.

By choosing this model, the client can shift their focus to business-critical competencies, cutting expenses on micromanagement as well as on searching for, hiring, and training new QA specialists.

The team fully commits to the needs of the customer and the vision of business and product, while the QA vendor provides its administrative support, monitors the testing environment and infrastructure, measures KPIs, and proposes improvements.

When to choose the dedicated team?

Before making a step toward DTM, you might ask yourself whether it is reasonable in this particular case. Here are five signs that you need a QA dedicated team:

  1. You aim to keep QA costs to the minimum.
  2. The project requirements are changeable.
  3. You have no intention to train or manage your in-house QA team.
  4. Your project has significant scalability potential.
  5. You are interested in building long-lasting relationships with the QA vendor.

Dedicated team model at a1qa 

Vitaly Prus, a1qa Head of testing department with extensive experience in managing Agile/SAFe teams, knows how to set up and maintain successful DTMs for both large corporations and startups.

Vitaly, is the DTM popular among a1qa clients? 

It is. In fact, the DTM is the most popular engagement model at a1qa. Suffice it to say, about 60% of the ongoing projects are running this model.

DTM clients: who are they? 

Traditionally, the model is more appreciated by clients located in the US and Europe.

From my experience, customers choose the dedicated team when they want to extend their in-house crew but have no time to hire or no resources to train new QA talents.

Compared to the fixed price or T&M models, the DTM is about people, I would say. When opting for a dedicated team, most clients seek not just additional testing hands but rather a pool of motivated specialists who will commit to the project, flexibly adapt to changing business demands, stay proactive, and do their best to make the final solution just perfect.

The client needs a specialist to communicate with. So the personality of the dedicated team members really matters.

And I can’t help but stress that the DTM is mainly used for long-term projects. For example, we currently maintain teams that have been providing software testing services for 5, 7, and 10 years already. You see? The dedicated team is literally about dedication.

Could you list the top 3 main advantages that are valuable for the client?

Besides the commitment (which is the result of the model’s nature), I would name transparency of the process and the opportunity to control all the workflows. At a1qa, we also provide smart team scalability by rapidly adjusting to clients’ demands to expand or decrease the team size.

As for cooperation modalities, our teams can perform their duties on-site at the client’s premises, though many of our customers have their dedicated teams operate remotely.

Furthermore, during the global outbreak, we continue helping our customers stay confident in the top-level quality of their IT applications. We offer to apply a work-from-home scenario for the whole team or particular members to mitigate health risks that could hinder effective work processes.

We can also combine both models to achieve greater success.

In addition, we offer a vast pool of specialists with diversified tech skills and industry-centric expertise (manual and automated testers, security testers, UX testers across telecom, BFSI, eCommerce, and more). Our clients also get an independent software quality evaluation along with a variety of testing means and access to the latest technological achievements.

Drop us a line to discuss whether DTM is the right decision for your business.

What are the key factors for DTM success?

The success of any dedicated team depends on how well a service provider takes care of its resources and on the quality of the infrastructure and environment provided.

At a1qa, we’ve built a well-structured approach to setting up dedicated teams considering all clients’ requirements. Our customers highly rate the work of our crews.

In addition, we continuously strengthen QA expertise and leverage innovations in proprietary R&Ds and Centers of Excellence. Our passionate a1qa talents are willing and ready to get and expand QA knowledge in our QA Academy providing a unique approach to the educational process. All of this complies with the standards of a1qa culture of excellence taking our processes up to the next level of work quality.

Does it take long to set up the right team? 

Some clients trust our project managers and rely on our choice. Others are more attentive to this matter: they take part in all interviews and check the CVs of all candidate engineers. On average, it takes from a week to a month to set up a team that is ready to start.

If a team requires 10 engineers, we usually recommend assigning only 2-3 specialists at the very start and gradually expanding the team as the project grows. This proves much more effective than setting up a team of 10 software testers from the very beginning.

Besides, we offer to assign a QA manager who will take charge of QA tasks and activities to get the maximum value from the DTM.

We can also offer a try-before-you-buy option if a client is not sure that a proposed candidate fully meets requirements. It’s like a test drive for QA specialists: if there are any doubts, clients have some time to form their opinion and decide whether to continue cooperation with the team players or not.

How can clients be sure that the team delivers the expected results?

At the start of the work, define relevant metrics to track the team’s success. Use KPIs to ensure the professionals deliver the required results on time.

As I said before, you can take full control over your team in whatever way you prefer. Ask the participants to conduct daily stand-ups and provide regular status reports to tailor the communication process to your demands.

What is the billing process?

The dedicated team is paid for on a monthly basis, and the pricing process is quite simple. The billing sum depends on the team’s composition, size, and skillset.

When discussing the model with a client, we warn them about downtime expenditures: if downtime occurs and the team has no tasks to perform, the client keeps paying for this time as well.

But as a matter of practice, QA and software teams hardly ever have idle time. Even if the team has technical issues blocking the testing work (e.g. test servers or defect tracking system on the client’s side is temporarily unavailable), they can proactively suggest and do some tasks useful for the project, like preparing test data files, etc.

Summing up

Let’s summarize. In short, the dedicated team model can help achieve the needed goals and contribute to business success in the market. When working with your team (yes, it is fully yours), you can get a range of benefits:

  • Full commitment to your project needs and methodology.
  • Adjustment to your time zone.
  • Opportunity to interview all specialists.
  • Complete control over all project workflows.
  • Comprehensive reporting and smooth cooperation.
  • Rapid resources onboarding.
  • Long-term value through accumulated expertise and knowledge retention.

Regardless of the industry, business need, or software product, you can have a decent crew working in the most suitable way: be it remote, on-site, or mixed collaboration. During global outbreaks, we offer clients the option of having their team members work from home so that health issues don’t disrupt the work process.

Contact the a1qa experts to get a dedicated team that brings the best possible QA solutions to your business.

In the last decade, the pace of QA has changed dramatically. Now, large companies onboard armies of QA engineers and train their engineers to care about quality.

However, this leads to an increased time to market. Why? How can we make QA more efficient?

We took up the above-mentioned topics with Dileep Marway, a passionate QA engineer in the past and now Head of QA at the Economist Group – our long-term partner, which provides analysis on international business and world affairs – who has been eagerly contributing to the QA community for over 13 years.

Get prepared for the journey called “Shifting QA left.”

At present, companies with software products introduce QA services to the agile lifecycle. From your point of view, what is the main mistake that organizations make while adopting QA practices?

Today, the software industry is changing at a breakneck pace, so delivering a flawless product is becoming ever more crucial. This is why companies understand that introducing QA practices into their business strategies is not a wish – it is a necessity.

In my view, companies make one core mistake – they depend solely on the QA team for assuring quality. Quality assurance should be the responsibility of the whole team, and we should advise engineers on how to care about quality. The bottom line is to change the mindset of IT representatives (C-level included) so they understand the importance of taking the time and care that quality demands.

Your opinion: why hasn’t test automation replaced manual testing completely?

I’m not sure that it should. It goes without saying that there are several cases when manual testing simply cannot be replaced by automation (while conducting UX usability tests, UI testing, and many more). But I want to highlight another crucial juncture.

I do not actually separate manual testers from test automation engineers. Let me explain why.

If you specialize in performance or test automation, that’s your specialism, but you are still an engineer who understands the code. This is also about the team members and their willingness to grow professionally, develop their skills, and become more rounded in their roles. A squad should contribute to a common goal, and there should not be silos in the team that mean we cannot release a product because we depend on a particular skillset.

Another step is to make clear that there is no value in automating everything. The challenge now is to do it correctly and wisely to deliver the most value.

Who is ultimately responsible for software quality in the company? Why?

Each and every member of the project team or squad is in charge of quality. All team players have to be on the same page and care about the one paramount thing – delivering a top-notch product to the market.

This invariably results in a calmer and friendlier social and psychological climate in the crew. Being respectful to each other, being humorous and proactive – all these qualities help everyone in the team contribute tremendously to the outcome.

How has Agile methodology changed the attitude of IT representatives to the QA process?

Agile development practices have prompted the shift of testing left and significantly changed the world of QA. Some time ago, software products weren’t even shipped to end users for years until the business need was validated.

Agile drives long-drawn-out processes toward continuous delivery, while QA helps build high-quality products under these conditions at an extremely high speed.

What is behind the term “shift-left testing”? Emphasize the benefits it can bring to the product and business in general as well as its main challenges.

Shift-left testing offers countless advantages, but let me highlight the clearest one: if you begin conducting checks as soon as possible, delivery speed won’t be constrained by mistakes that slipped in earlier in the lifecycle.

Nevertheless, you shouldn’t forget about the challenging nature of this concept: it is of high importance to meticulously plan all testing and development activities as well as prepare your Dev team to add QA to their skillset. This includes better unit testing and actually contributing to the automation product.

Please share a few tips on how companies can leverage continuous testing (CT) to enhance the quality and speed of delivery.

By adopting test automation at each stage of the development lifecycle, CT helps define the business risks associated with each potential release version, so bugs are fixed at the right time.

Tests should conduct smart checks and use modern technologies to get the most out of automation. They should also be maintained, revisited, and updated so that every team player understands: if the tests show 100% passed entries, the build can go live; if there is a failed check, a defect needs to be fixed. This allows us to release continually and puts quality at the heart of every role in the team.
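The release gate described here can be sketched in a few lines. This is only an illustration; the result format and function names are assumptions, not anyone’s actual tooling:

```python
# Minimal sketch of a release gate driven by automated check results.
# The result format (list of dicts with "name"/"status") is an
# illustrative assumption.

def can_release(results):
    """Return True only if there are results and every check passed."""
    return bool(results) and all(r["status"] == "passed" for r in results)

def failed_checks(results):
    """List the checks that must be fixed before the build can go live."""
    return [r["name"] for r in results if r["status"] != "passed"]

if __name__ == "__main__":
    run = [
        {"name": "login_smoke", "status": "passed"},
        {"name": "checkout_regression", "status": "failed"},
    ]
    if can_release(run):
        print("100% passed - the build can go live")
    else:
        print("Fix before release:", failed_checks(run))
```

In a CI pipeline, a gate like this would sit between the automated test stage and deployment, turning the “100% passed” rule into an enforced policy rather than a convention.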

What similarities and differences can you name between the approaches of CT and shift-left testing?

To my mind, these concepts share more commonalities than differences. Both of them are considered to be vital approaches in the world of QA.

Continuous testing entails all processes starting from shift-left testing, so they do not exclude each other. When implemented properly, they can help launch a strong product several times faster.

There are differences, though: shift-left testing focuses on checks conducted at the earliest opportunity, while the engineer is still creating the product. CT deals with a more mature software version and can be carried out at every SDLC stage.

Hence, the issues unearthed while performing continuous testing and shift-left testing can be very different.

How can the company bring these concepts together to test continuously and shift the testing processes left?

To combine them effectively, one should first realize why they should be merged and then identify the main goal. Try to coach engineers on quality processes at code level, put the engineer back into QA engineer, and get the whole team to contribute to the automation product.

Dileep, thanks for sharing your opinion with us!

In October, a1qa was highlighted in the Top 10 of QA/Testing Solution Providers compiled by The Technology Headlines. Here we cite the interview with Dmitry Tishchenko that was first published in the October edition of the magazine.

The principles on which a company builds its foundation are quite important when defining a business strategy and working towards a common goal.

Many entrepreneurs ignore this fact when they first start their businesses and look forward to establishing a profitable company without taking the time to build the solid foundation that takes businesses to the next level. As a result, their endeavors to establish themselves and make a profit fail, and worst of all, many companies realize this only in their fifth or tenth year.

Guided by the core principles of constant improvement, long-term view, and mutual trust and backed by a spirited team, a1qa is a QA and testing solution provider that serves over 500 global customers, including Fortune 500 companies.

Having started out as a software testing company in 2003, a1qa has successfully completed 1,500 projects and contributed greatly to the success of its clients in various industries. By applying its core principles to business strategy and process management, the company has been certified to the ISO 9001 and 27001 standards with no difficulties.

“As the company grew, we witnessed new institutions and practices arise on its basis. For example, QA Academy, a proprietary education center for software engineers that primarily covered a1qa’s talent needs, was later turned into a self-supporting business unit,” says Dmitry.

“With ten centers of excellence at a1qa that specialize in different focus areas, including performance testing, security testing, and test automation, the company continuously builds up its expertise and accumulates experience that makes it stand out in the market.

As a constantly developing company, the main challenges we’ve been facing are the necessity to increase performance and improve delivery quality,” says Dmitry.

“To solve this challenge, the company applies a quantitative approach to management and decides on the best practices to serve every single project. Over time, the company has optimized the learning curve and turned it into one of its competitive advantages, called smart scalability.”

Keeping pace with the ever-changing QA industry

“Speed and flexibility are critical to any IT project today. There is a great variety of agile modifications tailored to any context. This trend will stay topical in the nearest future and will require QA vendors to be ready to adapt to any agile variation,” Dmitry affirms.

From the technological point of view, he also points out the evolution of IoT and AR/VR.

“Requests we get mainly involve support on pilot projects and require the development of new testing methods, which is challenging from the technological and processual sides,” adds Dmitry.

When asked about a1qa’s best-of-breed test automation service, Dmitry says it echoes the principles of lean manufacturing. “It is a multi-step activity that starts with an analysis of automation penetration and an evaluation of the expected effects, after which the technical solution is prototyped. Then a pilot launch of tests is performed and efficiency parameters are measured. Only then is the solution scaled.”

To maintain a high level of automation, the company has also introduced an analytical system to measure and compare the effects of automation. It ensures the application of the best practices for different projects.

“We also develop our own automation frameworks. Our main principle here can be articulated as Keep It Simple. The primary objective of any framework is to deliver the effect as fast as possible.

For example, we can deploy an automation environment for testing a web project within a couple of hours,” explains Dmitry.

The company’s core competency lies in its professional engineers, who proactively react to the changing needs of the QA market and adopt the latest technological solutions. Specific domain knowledge, world-class processes, and the professional skills to deliver quality services are the factors that differentiate a1qa from its peers in the market.

a1qa’s service line is made up of several layers: core services, value-added services, and QA consulting.

The core services include performance, security, and compatibility testing, which align with clients’ needs and can be built into their in-house processes. The value-added services are designed to improve productivity and add value for customers; this line includes benchmarking, baseline testing, test automation, and many more.

QA consulting services make up the third layer. Based on its expertise in testing processes and methodologies, as well as its experience in the global delivery of QA services to the world’s leading companies, a1qa works with its clients to enable stronger testing processes and superior software quality, which, in turn, helps companies trim budgets in a timely manner.

In the near future, the company plans to keep helping customers increase productivity and gain maximum value through its services. To this end, a1qa is working hard to diversify its consulting offerings and apply new engagement models such as TaaS. By bringing consulting engagements into its portfolio, the company would be competing not only with QA companies but also with huge consulting enterprises.

In terms of geographical expansion, the company plans to open new locations to become geographically closer to potential customers.

“Right now we have offices in the US, the CIS, and the UK. We are getting closer to our clients, and we want to propose an alternative to them. They should get the idea that a small or mid-size company can handle the tasks they usually assign to large-scale vendors, and it will be a cost-effective and low-risk option,” Dmitry concludes.

a1qa has made a 14-year journey in the software testing and quality assurance market. The company’s core services include the full cycle of testing, test automation, and QA audit.

Although software testing plays a central role in a1qa’s business, it’s not the only service on offer. The company invests in the development of new services as well, going beyond software testing.

One such service is business analysis. To find out why and how the SQA company has grown to establish its own BA Department, we talked to Anton Trizna, Head of the Department.

Anton, tell us a bit how you got to where you are today.

I started my career as a software testing engineer at a1qa, testing applications with complex business logic. Sometimes, I documented requirements and prepared requirements specifications for the software. To join the BA team at a1qa, I underwent professional training and obtained a diploma in BA. Years later, I was promoted to Head of the Department.

If you were asked to outline the business analysts’ role in a software development project, how would you do this?

We talk a lot (laughing). In fact, we bridge business units with IT departments to make sure the developed software will meet the requirements. I’m proud to note that despite its novelty, our service at a1qa has already earned a positive reputation with such clients as Gazprom Neft, Aeroflot, etc.

Do all the clients realize the importance of BA in the project?

Unfortunately, many clients underestimate the value of BA in a project, but building accurate requirements is the basis for implementing high-quality software. The more fully the system requirements are specified, the less rework, and the fewer extra costs, will be required in the final stages of development.

How many BA specialists are there onboard at a1qa?

Currently, our BA team consists of 12 people, and we are planning to expand the staff soon. Our new BAs go through an educational program during their probation period and pass a final exam to meet our standards. The training program is based on real-world BA tasks modelled to evaluate a candidate’s knowledge and skills, to teach basic BA techniques, and to track the candidate’s progress.

What business domains do you specialize in?

We’ve gained vast experience across multiple domains, including such specific areas as oil mining and production.

What are the strong sides of your team?

Among the main advantages of a1qa’s business analysts are quick adaptation to the client’s conditions and close interaction at all stages of project delivery. Our specialists often go on business trips to hold onsite meetings with a client, investigate business processes, and elicit requirements, establishing effective communication with stakeholders.

Let’s touch upon your work. What will the client get ordering the BA service at a1qa?

The most common scenario of engaging our specialists looks like this: the customer wishes to solve some business problem by developing an IT solution. They have some idea of how it may work, but that’s it. Here come the BAs. We elicit requirements, then analyze, document, and test them. Upon the client’s request, we can also participate in user acceptance testing and verify whether the solution has been developed as required.

Speaking precisely, our activities usually include the following steps:

  • Requirements elicitation, analysis and specification
  • Creating static and dynamic UI prototypes
  • Analysis of “as is” state of the business processes and modelling of the “to be” state
  • Consultations on the selection and implementation of an IT system that provides an optimal business solution
  • Requirements implementation control on all stages of the project
  • Requirements management: evaluating change requests for IT solutions and documentation update
  • Support of rework during the development and testing stages
  • Review of a project or technical documentation, requirements testing

Could you please elaborate on the requirements testing service?

Our BA Department offers assistance in software requirements testing. Requirements testing is the process of validating requirements against such quality criteria as completeness, correctness, consistency, feasibility, unambiguity, and so on. The purpose of this activity is to find defects in the requirements specification in order to prevent them from leaking into production.

Based on our experience in preparing requirements specifications, we identify potential defects that may be found in the documented requirements and devise the approach to the requirements testing.
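One narrow slice of requirements testing, scanning statements for ambiguous wording, lends itself to a simple automated sketch. The term list and function below are hypothetical illustrations; real reviews also judge completeness, consistency, and feasibility, which need human analysis:

```python
# Illustrative sketch of one automatable requirements-testing check:
# flagging ambiguous, untestable wording. The term list is a small
# hypothetical sample, not an exhaustive standard.

AMBIGUOUS_TERMS = {"fast", "user-friendly", "etc", "appropriate", "easy"}

def find_ambiguities(requirement: str) -> list:
    """Return the ambiguous terms found in a requirement statement."""
    words = {w.strip(".,;:()").lower() for w in requirement.split()}
    return sorted(words & AMBIGUOUS_TERMS)

reqs = [
    "The report page shall load within 2 seconds for 1000 concurrent users.",
    "The UI shall be fast and user-friendly.",
]
for r in reqs:
    issues = find_ambiguities(r)
    print("OK" if not issues else f"Ambiguous: {issues}", "-", r)
```

The first requirement passes because it states a measurable threshold; the second is flagged because “fast” and “user-friendly” cannot be verified against acceptance criteria.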

When testing, QA specialists provide customers with regular reports on the delivered work and state of quality. What about BA? What artifacts do you prepare?

Among the artefacts that we prepare are the following:

  • Vision & Scope Document, or Business Requirements Document, that covers all business objectives and high-level business requirements and defines the scope of the future solution.
  • User Requirements Document that describes user requirements using Use Cases or User Stories.
  • Software Requirements Specification at different levels of detail, in which functional and non-functional requirements are described.

If agile methodologies are implemented, BAs create Epics and decompose them into User Stories that are prioritized in the backlog and handed to the dev team.

How do you keep up with latest trends and practices?

Our BA team aims to keep up to date with new BA techniques and tools. We apply tools that are new to the market to check their efficiency and applicability in communication, modelling, prototyping, etc. Our BAs attend specialized conferences and seminars to improve their knowledge and skills and try their hand at sharing their experience. Every year, our specialists participate in Analyst Days, the largest international conference on system and business analysis in Eastern Europe.

Thank you, Anton.

Get in touch with a1qa today to learn how the combination of software testing and business analysis will take your business to new heights.

Test automation is one of the key areas of a1qa’s expertise. In 2013, the Test Automation Center of Excellence was established. We talked to Dmitry Bogatko, Head of the CoE, to find out what tasks the CoE solves today and how it contributes to the overall development of the service.

Dmitry, Test Automation CoE at a1qa was put together almost 5 years ago. What was the idea of the undertaking?

The Test Automation CoE was established alongside other centers of excellence, six altogether. In 2013, it became clear that the company had accumulated huge experience in multiple areas, and this experience needed systematization.

Precisely speaking, there were four main objectives that formed the basis for the Centers’ establishment:
1. Increase quality and speed up the processing of clients’ requests
2. Increase the scope of custom-tailored services
3. Foster employees’ professional development
4. Build up expertise to rapidly improve the area of competence

And how are these goals achieved? In other words, what type of work is performed by CoE?

First of all, we train engineers, providing them with the necessary skills. For example, a client wants their manual QA engineer to write automated tests. If the client is ready to pay for this, the project manager forwards the training request to the CoE and we start working on it.

Why should the client want to pay for the engineer’s education? Isn’t it more effective to onboard a test automation engineer?

As a matter of fact, it’s more cost-efficient to teach a manual engineer to create automated tests than to add a test automation engineer to the team and wait until he or she dives into the project and gets familiar with the system’s specifics. So training requests are not rare. By the way, we conduct both personal and group trainings. Various companies that need test automation resources apply to a1qa for training. While developing training programs, we take into account the client’s domain, the type of software to be tested automatically, the number of trainees, etc.

We also provide support for ongoing projects that involve test automation. CoE experts consult the teams on technical solutions and help solve nontrivial tasks. If a client requests an audit of a developed automation solution, we evaluate the approach, review the code, and come up with remediation recommendations.

What’s more, we may also step in at the pre-sales stage whenever documentation is required to demonstrate the needed expertise. For example, we prepare presentations, case studies of completed projects, or white papers to prove our excellence.

Of course, we accumulate the experience we gain and describe it in our corporate Confluence knowledge base. Troubleshooting guidelines, descriptions and analyses of specific tools, how-to articles, and many more – any automation engineer can find what he or she needs there.

Test automation is a popular trend in the QA market. New tools, approaches, methodologies are rapidly evolving. How do your experts manage to stay aware?

We appreciate our engineers’ desire to learn new tools and approaches and encourage their participation in in-field conferences. However, as all CoE experts are practicing engineers assigned to project work, there is little time left for research.

As a rule, we ask our specialists to conduct an investigation so they can competently reply to a customer’s request.

There are over 70 test automation engineers at a1qa today. How many of them take part in CoE?

Today, 11 highly skilled engineers are members of the CoE.

Can anyone become a member of CoE? What are the entering requirements?

In theory, any a1qa automation engineer who devotes at least 20 hours a month to CoE tasks – be it documentation creation, framework development, or training – can enter the CoE.

It’s required to know one of the programming languages (Java, C#, or Python), to have experience in automating testing for any type of software (web/standalone/integrated), and to be able to read technical documentation in English.

If an employee meets these criteria, we invite him or her for an interview. A successful interview and the consent of the immediate manager guarantee accession to the CoE.

It’s worth noting that we strive to keep CoE activities in line with project work. If an engineer has been working with the Git suite of tools, we won’t check this skill during the interview. If there is no practical evidence of some expertise, we’ll ask the person to complete a test task.

Do you perform appraisals of test automation engineers at a1qa?

Right. We appraise all automation engineers against established criteria, ranking them junior, middle, or senior. After the appraisal, an employee knows where he or she stands in the company and what the essentials for further progress are.

I’d also like to mention the Knowledge Pack, which defines the skillset of each grade mentioned above. For example, a manager knows that every junior test automation engineer can write end-to-end tests for a web application with Selenium WebDriver.

If there is a request to automate testing for a desktop application, a middle specialist will be assigned, as he or she will definitely possess the right skillset (according to the “Middle” grade).

Last but not least: as the CoE mainly includes highly skilled engineers with years of experience, we regularly delegate automation framework development tasks to them. Based on the developed framework, clients’ employees then write automated tests.

Sometimes there is no need to create an automation solution from scratch; the client only needs to enhance or debug an existing one.

CoE experts also provide consulting support as regards automation.

What challenges do you face in managing the CoE workflow?

The main challenge is to find the right balance and let CoE tasks be solved without compromising project work. All CoE experts are engaged in real-world projects on a daily basis, and it may be hard to pull them away to prepare documentation or consult a client’s team on an issue. To address this challenge, we reallocate resources and try to prepare documents when the workload is moderate.

And the very last question. How do you manage to double-job as the Head of CoE and project manager?

At a1qa, CoE activities are closely related to project ones: both have outlined goals, tasks, plans, and a team. Project work helps me keep CoE activities up to date, while CoE management allows me to estimate project tasks more accurately and make more effective decisions.

Thank you, Dmitry, for your time. We wish the Center of Excellence many interesting projects to come!

To learn more about the test automation service at a1qa, visit this page. If you have any questions, ask them here and we’ll get back to you in no time.

In the second part of the interview, Adam Knight speaks on the combination of exploratory testing and automated regression checking in testing Big Data systems. If you’ve missed the first part of the interview with Adam on recent changes in testing, you can find it here.

Adam Knight is a passionate tester eagerly contributing to the testing community. He is an active blogger and regularly presents at testing events such as Agile Testing Days, UKTMF and STC meetups, and EuroSTAR.

Adam, you specialize in testing Big Data software using an exploratory testing approach. Why do you find it necessary to do exploratory testing?

It is not so much that I find exploratory testing necessary. Rather, I would say that, in my experience, it has been the most effective approach available to me in testing the business intelligence systems I have worked on.

My preferred approach for testing when working on such systems is pretty much:

  • Perform a human assessment of the product or feature being created.
  • At the same time automate checks around behavior relevant to that assessment to provide some confidence in that behavior.
  • If at some point the checks indicate a different behavior than expected, then reassess.
  • If you become aware that the product changes in a way that causes you to question the assessment, reassess.

I believe that exploratory testing is the most effective testing approach for rapidly performing a human assessment of a product, both initially but particularly in response to the discovery of unexpected behavior or identification of new risks, where you may not perhaps have a defined specification to work from, as per the last two points here.
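The assess/check/reassess loop described in the steps above can be sketched roughly as follows; the check names and behaviors are hypothetical, and this stands in for whatever real automation harness a team uses:

```python
# Rough sketch of the assess/check/reassess loop: each check pairs an
# observation function with the behavior recorded at assessment time.
# A divergence means the human assessment should be revisited, not
# simply that "a test failed".

def run_checks(checks):
    """Return the names of checks whose observed behavior differs
    from the behavior recorded during the human assessment."""
    return [name for name, (observe, expected) in checks.items()
            if observe() != expected]

checks = {
    # behavior confirmed during the initial assessment
    "row_count_after_import": (lambda: 1000, 1000),
    # behavior has drifted since the assessment was made
    "query_latency_class": (lambda: "slow", "fast"),
}

needs_reassessment = run_checks(checks)
print("Reassess:", needs_reassessment)
```

The point of the sketch is the output’s meaning: a non-empty list is a trigger for a fresh exploratory assessment of that area, exactly as steps three and four describe.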

The combination of exploratory testing and automated regression checking is powerful in testing Big Data systems, as well as many other types of software.

What peculiarities of exploratory testing can you mention?

That’s an interesting question. I’m not sure I would describe it as a peculiarity, however one characteristic of exploratory testing that I believe makes it most effective is the inherent process of learning within an exploratory approach. As I describe in response to the previous question, testing can often come in response to the identification of an unexpected behavior or newly identified risk.

The characteristic of exploratory approaches is that they will incrementally target testing activity around the areas of risk within a software development, thereby naturally focusing the effort of the tester where problems are most apparent. This helps to maximize the value of testing time, which is a valuable commodity in many development projects.

How can a tester find the right balance between exploratory and scripted testing?

It isn’t always up to the tester. I am a great believer in autonomy of individuals and teams and allowing people to find their own most effective ways of working. Many organizations, however, don’t adhere to this mentality and believe in dictating approaches as corporate standard.

Many testers I’ve spoken to or interviewed in such environments often perform their own explorations covertly in addition to the work required to adhere to the imposed standards.

For those who are in a position where they do have some control over the approach they adopt, the sensible answer for me is to experiment and iterate. I’d advocate an approach of experimentation and learning to find the right balance across your testing, whether scripted vs exploratory, manual vs automated, or any other categorization you care to apply.

What are the main difficulties when it comes to Big Data testing?

The challenge of testing a Big Data product was one that I really relished, and in my latest role I’m still working with business intelligence and analytics. When I was researching the subject of Big Data one thing that became apparent to me was that Big Data is a popular phrase with no clear definition.

The best definition that I could establish was that it relates to quantities of data that are too large in volume to manage and manipulate in ways that were sufficiently established to be considered ‘traditional’.

The difficulty in testing is then embedded in the definition. Many of the problems that Big Data systems aim to solve are problems which present themselves in the testing of these systems.

Issues such as not having enough storage space to back up your test data, or not being able to manage the data on a single server, affect testing just as they do with production data systems.

Typically those responsible for testing huge data systems won’t have access to the capacity, or the time to test at production levels – some of the systems I worked on would take 8 high specification servers running flat out for 6 months to import enough data to reach production capacity.

We simply didn’t have the time to test that within Agile sprints. The approach that I and my teams had to adopt in these situations was to develop a deep understanding of how the system worked, and how it scaled.

Any system designed to tackle a big data problem will have in-built layers of scalability to work around the need to process all of the data in order to answer questions on it. If we understand these layers of scalability, whether they be metadata databases, indexes or file structures, then it is possible to gain confidence in the scalability of each without necessarily having to test the whole system at full production capacity each time.
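As a hedged illustration of gaining confidence in one scalability layer with a sample instead of full production volume, consider a hypothetical hash-based sharding layer: rather than importing months of data, one can verify with a modest key sample that the placement scheme distributes load evenly, since the placement logic itself does not change with scale:

```python
# Sketch only: "shard_for" is a hypothetical hash-mod placement
# scheme standing in for whatever partitioning layer a real Big Data
# system uses. The idea is to test the layer's behavior on a sample,
# not to test the whole system at production capacity.

from collections import Counter

def shard_for(key: str, n_shards: int) -> int:
    """Assign a key to a shard."""
    return hash(key) % n_shards

def distribution_skew(keys, n_shards):
    """Ratio of the largest shard's load to the ideal even share;
    1.0 means perfectly balanced."""
    counts = Counter(shard_for(k, n_shards) for k in keys)
    ideal = len(keys) / n_shards
    return max(counts.values()) / ideal

sample = [f"record-{i}" for i in range(10_000)]
skew = distribution_skew(sample, n_shards=8)
assert skew < 1.25, "sharding layer is unbalanced"
print(f"max shard holds {skew:.2f}x the even share")
```

A balanced result on the sample gives confidence that the layer will stay balanced at production volume, which is exactly the surgical, layer-by-layer confidence Adam describes.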

So Big Data testing is all about understanding and being surgical with your testing; taking a brute-force approach to performance and scale testing on that kind of system is not an option.

Thanks for sharing your viewpoint with us.

If you want to learn more from Adam, visit his blog a-sisyphean-task.com.

Adam Knight is a passionate tester eagerly contributing to the testing community. He is an active blogger and regularly presents at testing events such as Agile Testing Days, UKTMF and STC meetups, and EuroSTAR.

Hi Adam, can you please tell us about yourself, your current occupation and expertise?

For the last couple of months, I’ve been working as an agile consultant establishing the Product Owner role in a company called River, which specializes in employee and retailer engagement solutions.

Prior to that I spent a number of years working on a database product called RainStor, where I was responsible for testing, technical support and technical documentation as well as performing many of the day to day product owner duties.

You have considerable experience in testing. How has testing changed since you started your career?

The main change that I see is the visibility of software testing outside of the organization that testers work in. When I started working in IT, testing was very much perceived as something done by people drafted in from the business, rather than necessarily being a valuable and skilled occupation in its own right.

The testers were often very isolated individuals or teams within their organizations, relying on strict formal doctrine and rather dry reference books. The advent of social media has had a massive impact on testers in allowing the creation of groups and forums in which people can discover and communicate with other people doing similar jobs.

In response to this question I could talk about changes such as new techniques, new automation methods and new tools, however people have been creating tools to solve testing problems for years.

What has really changed the face of testing more than anything else is our ability to share these tools and ideas more easily with others.

Some experts say that in future almost all manual testing should be automated. What do you think about this trend of increased automation?

I’m a great believer in automation. I don’t believe that an agile approach to development is possible without some level of test automation. The use of such approaches does, however, need to be combined with an appreciation of the information that the automation provides you with.

Essentially most automation is a detection system, alerting if some unexpected outcome arises from a check performed after a series of actions. Somewhere a human is still required to design relevant alerts, understand the implications of those alerts being triggered and take the appropriate course of action should that happen.

One of the principles that I hold to with test automation is that the automation should do more than just perform the checks. The automation provides an opportunity to gather information on the activity as it is executed which can help a human to investigate and diagnose should the checks throw up an unexpected result.

Is it possible to achieve 100% test automation?

If you don’t do any other tests than automated ones, and you exclude a huge amount of human critical activity from your definition of testing, then you could argue that it is possible.

It would be a weak argument however given the fragility of the assumptions around it.

No matter how much automation you create, the process of creating automated checks is a human sapient and often exploratory testing activity which in itself is not automated.

Something that is apparent to anyone working in testing, but difficult to explain to some outside of testing, is that 100% testing is not possible.

Even on the simplest of software applications there will be an infinite range of possible tests that we could perform. The amount of testing we therefore undertake is some finite subset of this, and 100% automation would only ever be 100% automation of this subset.

A great number of organizations are currently adopting agile methods and DevOps principles. How will testing change in the nearest future?

Roles are blurring. I think that we are moving away from the idea of well-defined roles. Both the agile methods and DevOps that you mention have in them an expectation of individuals with a range of skills and capabilities, rather than viewing development as a factory production line with a series of very separate roles owning different stages of the process.

I’ve never been particularly good at seeing future trends, however I see testing and product ownership responsibilities merging, with testing providing the information to allow the product owner to make educated decisions.

What are the main benefits of agile testing?

I would suggest that a really good question would be ‘what are the main benefits of agile to testing?’

Agile approaches have huge benefits for testing. The short timeboxes mean that it is hard to accumulate a huge ‘ball of mud’ of development over the course of many weeks before handing it to the testing team, so testers are able to test incrementally on smaller pieces of development, making both discovery and diagnosis significantly easier.

The main benefit of an agile approach for testing in my opinion lies in the point at which testers are included in the conversation.

The “Three Amigos” is a phrase that encapsulates these benefits. The fact that we have a phrase in common use that exemplifies the need to have testers involved in early conversations on the scope and value of new features is huge.

How do you see the ideal testing process of the future?

I think that far too much time is spent by testers on simple functional testing. I think that the ideal testing process of the future, or even today, involves the developer taking responsibility for the testing of the functional adherence of their developed software to the model that they are working to.

The role of the specialist tester can then be about validating how this solution model maps to the problem domain: what situations or events could cause the solution to fail, what environmental factors could cause the solution to fail, and what software factors could cause the environment to fail.

I called my blog “A Sisyphean Task” because many folks perceive software testing as such, performing the same activity over and over again just as Sisyphus was doomed to do rolling a rock up a hill in Greek mythology. Where the interest, and value, lies for me is when we have sufficient confidence in our basic functionality to start exploring the deeper characteristics of the software and its operating environment. That’s where the really interesting testing happens.

What sources of information can you recommend to testers who want to develop their practical abilities and technical skills?

For years I’ve recommended looking to other roles in the organization to learn skills from. I’ve learned a huge amount from DBAs, System Administrators, Security experts, network guys and support teams, not to mention developers.

People also need to be proactive. Most of the new practical skills that I have developed in the last few years have been the result of spending my own time researching interesting new projects.
I learned Ruby scripting through developing a SQL generation tool in my evenings. I learned Java in my own time to create a multi-threaded test harness.

If you see a need, take some time to research a solution yourself. Then take it to your manager and demonstrate the value of what you have done. Not only will you develop new skills but you won’t do your career progression any harm either.

Thanks for sharing your viewpoint with us.

If you want to learn more from Adam, visit his blog.

Thanh Huynh is a tester and creator of an online community for QA and software testing specialists. Thanh helps new testers do a better job and believes that the key quality for testers is the ability to ask the right questions.

Hi Thanh, can you please tell us a little bit about yourself, your professional experience and major interests?

First off, I would like to thank a1qa. I’m very honored to have been invited to do the interview.

Well, I’m a tester, a husband, and the father of a baby boy. I’m currently working at Datalogic Vietnam as a full-time tester. I’ve been in software testing for 10 years and have worked as both a tester and a test lead. Besides my 9-to-5 job, I also run AskTester, a Q&A site where QA consultants can come and ask their questions. I also blog my ideas on AskTester to help new testers do better testing. When I’m not testing, I spend time with my little son.

You have recently published A Complete Guide to Becoming a Software Tester. What was your reason for writing this article?

2016 marks my 10th year in software testing. It’s been a long and interesting journey, and when I look back, I see how lucky I was when I started in software testing. Things have changed a lot since then. Technology has changed, the software testing market has changed, competitiveness has changed, and you can’t just rely on luck to start your career in software testing these days. I wrote this step-by-step guide to help those who want to start their career in software testing.

If you haven’t read it yet, check it out. I have received a lot of good feedback about that single post.

How would you define a good tester?

If you look up “characteristics of a good tester” on Google, you will find a long list of attributes people expect from a good tester. While those are valid attributes, I would like to put focus on one particular skill: Ability to ask good questions.

Software testing is not just about confirming things; it’s a process of exploring and exercising the system to discover potential problems. You can achieve that by asking good questions of the system under test, yourself, your customers, your product owner, your manager, your colleagues, etc.

Besides asking good questions, I also believe that curiosity and critical thinking are important.

How can a person get his/her first job in software testing?

First off, you have to figure out if software testing is for you or not. Basically, you have to find answers to these questions:

  • Why do you want to start your career in software testing?
  • What do you know about software testing?
  • Is it something you want to stick with?

You don’t have to have clear answers to these questions now; the idea is to see whether software testing is something you want to do or not.

Secondly, you should learn things in the right way. There are many options for learning software testing. You can take an online course, pursue certification, do self-study, join a local workshop, or get one-on-one coaching. Choose what fits you best.

Finally, you will have to search and apply for a full-time job. Here’s the order to do this right:

  • Get networking
  • Write a CV that works
  • Prepare for a successful interview

These are basically 3 phases I recommend to get your first job in software testing.

I know it’s not easy, but we don’t expect things to be easy, right?

What are job and career opportunities for testers?

According to the World Quality Report 2015-2016, the budget for QA and testing has risen to an average of 35% of total IT spend. It means that software testing is in demand and most organizations still have budgets for it.

It depends on where you live, but in my opinion, there are always opportunities for testers as long as people develop software.

You believe that testers question things. What is the key to asking good questions?

One of the keys to asking good questions is to seek the answer yourself. Ask yourself the question first and try to find your own answers. Don’t be lazy. Too often, people ask questions they could answer themselves. Also, gathering as much information as possible beforehand will help your questions get answered well, too.

Are there any manual testing trends that you follow?

Yes, I do, but it’s not really a trend. I subscribe to the context-driven testing approach and am still learning about it.

What online resources can new testers use to learn valuable information about testing?

These days you can probably find and learn everything online. Here are a few ideas new testers can use to learn about software testing:

  • Subscribe and follow software testing blogs. Some famous blogs you should know are:
    Developsense.com (Michael Bolton) – Read the interview with Michael in a1qa blog 
    Satisfice.com (James Bach) – Read the interview with James in a1qa blog
  • Learn the basics:
    Softwaretestinghelp.com
    Stickyminds.com
  • Read online magazines:
    TestingCircus.com
    TeatimeWithTester.com
    TestingTrapezeMagazine.com

Don’t get overwhelmed by the list. Just pick one or two that interest you the most and stick with them.

What is the main advice you can give for those who are new to software testing?

There’s nothing wrong with being new to software testing; you will learn things along the way. However, here are a few things new testers should focus on so that they can get a head start in their career:

#1: Learn the basics

When you’re new, don’t get distracted by advanced techniques, tools or trends. You need to learn the basics first. If you don’t build a good foundation, learning advanced stuff may do more harm than good.

#2: Expose yourself

Don’t be afraid of being a “newbie”. Be confident enough to ask questions. The more you ask, the more you know. Remember this:

“The only stupid question is the question that is never asked” – Ramon Bautista

However, to receive the most from the answers, you need to ask the right questions. Believe me, if you can practice and be good at this skill, you will improve your testing significantly.

#3: Work hard

When you are new, you need to work really hard to learn things. I don’t believe in the so-called “shortcut”. It’s all about working hard now to become a great tester later.

Thank you, Thanh, for your time and consideration.

You can ask more questions to Thanh in AskTester community.

Lisa Crispin is an agile testing practitioner and consultant, co-author of two books. In 2012 Lisa was voted the Most Influential Agile Testing Professional Person, an award program sponsored by the Agile Testing Days conference.

Hi Lisa, can you please tell us about yourself, your current occupation and expertise?

I’ve been a tester on agile teams since 2000. I started out my career as a programmer/analyst, and I got interested in testing in the early 90s. Currently I’m a tester on the Pivotal Tracker Team, our product is an agile project tracking tool.

My recent experience is in testing SaaS products, both from a UI and API perspective, and I’ve done a bit of testing on our iOS product as well.

I like to share my experiences with others and learn new ideas, so I do a lot of writing and presenting at conferences. Janet Gregory and I have published two books: Agile Testing: A Practical Guide for Testers and Teams, and More Agile Testing: Learning Journeys for the Whole Team.

A large number of organizations are currently moving from a traditional cycle to agile development. How will this process influence testing?

To succeed with delivering valuable software over the long term, the whole team must take responsibility for quality, planning and executing testing activities. Our mindset has to shift from finding bugs after coding to preventing bugs from occurring in the first place. That means that testers, programmers, designers, product experts, database experts, operations experts, everyone has to collaborate continuously. We have to think about testing even before we think about coding.

What are the main characteristics of testing on agile projects?

A whole-team approach to quality, test-obsessed developers, guiding development with testing both at the unit and acceptance test level, getting a shared understanding of the product and each new feature before we start testing and coding.

Since we’re doing TDD (test-driven development) and ATDD/BDD/SBE (see below for definitions), agile teams have fully automated regression testing, leaving plenty of time for testing practitioners to do the vital exploratory testing so that we learn as much as possible about our software. For these approaches to succeed, the team needs a culture that supports learning and experimentation. You can’t focus on delivering software fast – you have to focus on delivering the best possible quality software.

What skills should a tester have for testing in agile?

Curiosity, a love of learning, an agile mindset, an attitude of “I’ll do whatever is needed to help”, technical awareness, the ability to communicate and collaborate with customer and technical team members.

Can you give some examples of testing approaches on agile projects?

Successful agile projects typically use some form of guiding development with customer-facing tests, whether it’s behavior-driven development (BDD), acceptance test-driven development (ATDD), or specification by example (SBE). Those are all similar. We start by getting the whole team together to talk about a new product or release, identify the backbone or “walking skeleton”, and thinking about how we’ll test it.

We use techniques like personas and story mapping (as in Jeff Patton’s terrific book) to build shared understanding and identify thin end-to-end slices of value. We get product experts, testers, developers, designers and others as needed to have conversations about each user story, specify rules and examples of desired behavior with techniques such as Matt Wynne’s example mapping. Developers code test-first, both at the unit and acceptance level. They may do exploratory testing of each story before they finish. Testing experts do exploratory testing at a feature level. They help the team discuss all the different quality attributes, both functional and non-functional, and make sure appropriate testing activities are done.

What tools are commonly used for testing in agile?

If you mean automation frameworks, libraries and drivers, teams should first decide how they want their tests to look, and then identify a tool that supports that vision.

Test automation is vital, to free up time for exploratory testing. But conversations and collaboration are far more important than tools.

Automating regression tests is generally a must, but we need more. We need to start with a shared understanding of each new product, release, feature and story. As I mentioned, story mapping is a helpful technique. We need both testing and business analysis skills. We need to explore the code before and after it is delivered to learn whether it really solves problems for customers.

How can testing be completed in short iterations?

The whole team takes responsibility for testing, and includes testing activities when planning each feature and story. No story is done until it’s tested. TDD and BDD/ATDD/SBE help accomplish this goal. Close collaboration between testing experts and development team members helps make sure testing doesn’t lag behind.

A common problem with new agile teams is “over-committing”. They’re so keen to please their customers that they plan too many stories in an iteration. Then, to avoid disappointing their customer, they skip or put off testing activities. Instead, teams should plan to do less work, but do it as well as they possibly can. When you build a strong code and test base by focusing on quality, you’ll be able to deliver faster in the future.

Try a “lean” approach where you limit the number of stories being worked on at any given time, focus on finishing one story at a time and then moving on to the next.

What are the main principles of an agile tester?

Janet and I devote an entire chapter to this in Agile Testing. Here are our ten principles for an agile tester:

  • Provide continuous feedback.
  • Deliver value to the customer.
  • Enable face-to-face communication.
  • Have courage.
  • Keep it simple.
  • Practice continuous improvement.
  • Respond to change.
  • Self-organize.
  • Focus on people.
  • Enjoy.

What documentation is necessary on agile projects?

You need the documentation you need. The Agile Manifesto values working software over comprehensive documentation, but we still value documentation. Janet and I find that automated regression tests created as a result of BDD/ATDD/SBE provide excellent living documentation of how the system behaves.

The tests always have to pass, so they must be kept up to date. But you may need additional documentation for users, regulatory agencies, team memory. Each team should decide what is needed. Talk this over with your business experts and auditors too.

How is it possible to create appropriate documentation?

Using executable acceptance tests, as mentioned in the last question, is one terrific way to create documentation. You can even generate documentation: my team generates our API documentation, including examples taken from actual tests.

In my experience, most teams need to hire a good technical writer to help create useful, long-lived documentation.

What can you recommend to agile testers?

Keep learning. There are plenty of online resources and communities where you can learn and practice new ideas and skills. There are so many great testing conferences these days. Read, practice, experiment!

What do you think the future will bring to agile?

I don’t try to predict the future, but it’s still my experience that it is difficult to find a tester with the right attitude and mindset to contribute to an agile team. At the same time, more and more delivery teams realize the importance of testing activities such as exploratory testing and BDD/ATDD/SBE. Some teams, including my own, are adapting by hiring the few top-notch testers who can transfer their testing skills to the developers on the team.

Testing activities are done by everyone on the team. This doesn’t mean we don’t need testers, but it does mean we need testers who are good at helping their teammates learn good testing skills.

Thank you, Lisa, for devoting your time and sharing your thoughts with us. Hope to talk to you again.

To learn more interesting things about Lisa Crispin visit her website.

Gil Zilberfeld is a lean-agile software consultant, applying agile principles in development teams. In his interview to a1qa Gil covers practical tips on what to expect and what actions to take when introducing unit testing to the development life cycle.

Hi Gil, can you please tell us a little bit about yourself, your professional experience and expertise?

Hi, and thank you for the interview.

I’ve been around software all my life. For over 20 years I’ve been developing software, testing it (including some test automation services), managing teams of developers and testers, and implementing and improving development practices. I’ve also done a lot of product management, marketing, support and pre-sales.

Currently, I’m a consultant, helping development teams in all areas: from implementing new development practices to fleshing out stories, creating testing strategies and rolling out MVPs. I also help them in applying Scrum and Kanban methodologies (usually somewhere in between, depending on what they need).

You are the author of the Everyday Unit Testing book. What is the main purpose of your book?

In the introduction to the book I ask: “Why another unit testing book?” The problem I’ve found is that there are people who have merely installed a unit test framework, and they feel like they already understand unit testing. When they bump into real-world problems, when they need knowledge and skills on top of the tools, they decide: “Unit testing is a nice idea, but it won’t work here.” The book tries to get people over this hump.

The purpose is to let people, who have already taken their first steps in unit testing, know what to expect in both handling legacy code and how to roll out the process in the smoothest way possible.

How to choose the appropriate testing strategy?

There is no single answer, but there are ways to think about the problem you’re trying to solve. I usually start with risks. How complex is the system and what do we know about it? How well do the people who are going to write and test it know the business domain? Is the feature we’re adding in bug-ridden code? Or is it a completely new piece of code?

Different answers can lead to different strategies.

We also need to take the development process into account. In a test-at-the-end process, which usually means no time for testing, we need to make sure at least those risks are covered. In agile methodologies, this is an iterative process, where we focus on testing additional functionality while not neglecting the overall risks. An iterative process is good because it allows us to learn from testing the current increment and re-plan the next one.

Then we go down to the feature or story level. Here, too, there are things we need to focus on, because we don’t have enough time to test everything. We need to see what we can automate and where to use exploratory testing.

Finally, there are heuristics we can apply. SFDPOT (“San Francisco Depot”: structure, function, data, platform, operations, time), for example. It lets us create a map of possible testing options, then pick those we can fit into the time we have and where we can learn the most.

All this applies to the entire testing strategy. Within it, unit testing comes into play: the more unit tests we have, the more attention can be paid to integration, functional and exploratory testing.

What are the problems that beginners usually face when it comes to unit testing?

Unit testing and test driven development (TDD) seem to be easy. The tools are quite simple. The examples make sense. Usually there is great documentation and/or community to help. The bar for entry looks very low.

And then you try to write tests for your 10-year-old legacy code. Suddenly, none of the simple examples fit. It seems risky to change the code in order to make it testable, which is often necessary. Tests take a lot of time to write, and since you’re a beginner, the tests are not good enough or readable, and some are even useless.

On the other hand, there is no immediate reward. The tests need to run for some time before they actually detect that someone broke the code, and only then do we see their value.

So the problem is not technical. It’s about not seeing the value while still paying the cost. People who persevere, and believe strongly enough that it will pay off, will see the costs shrink in the end.

Why is unit testing necessary in web development?

It’s not necessary, as long as you don’t have bugs. Nothing is really necessary if you have alternatives.

If you have alternatives for regression suites that run quickly, don’t require a web server or other dependencies and point you to where the problem is – you probably have a great alternative. Use it.

Other types of tests are very good to have, but you need unit tests for the quickest feedback, hopefully on the developer machine before checking the code in.

How does unit testing impact the development process?

As I’ve said, there’s an upfront payment, but the dividends are greater. The beginning is hard, because your legacy code puts up resistance and you have to bend it to your will. That means that developing features with tests sometimes initially takes even twice as long.

But after a while, that’s what development begins to look like. It takes significantly less time to write tests, both because you’re better, and the code becomes more testable. And since you run the tests all the time, bugs are found very quickly (not three months into the manual testing cycle). As you add more tests, you trust your automation suite more, get fewer bugs and the testers get more time for exploratory testing.

So there’s a big impact on the whole process. If you let it.

What should be tested in unit tests?

Small pieces of code. These could be single methods, or a combination of methods that do an operation or two. Having bigger tests, or more code to test doesn’t mean that the tests are not valuable, though.

Good unit tests run quickly regardless of when and where you run them and can point to the offending code when they fail.

Specifically, we want to test where bugs appear: mainly conditionals (if/else), forks in the code (switch/case), or a combination. We’d also like to test boundary cases that are easy to test at the unit level but hard to drive the whole application into. Think of testing the error handling for a communication break. You won’t be cutting the communication line every time you run the tests, so checking how the error-handling code behaves is much easier at the unit level, with some mocking.
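As a minimal sketch of that last point, here is how a communication break can be simulated at the unit level with a mock instead of a real broken line. The `fetch_status` function and the transport dependency are hypothetical names invented for this illustration:

```python
from unittest.mock import Mock

# Hypothetical unit under test: wraps a transport and must
# degrade gracefully when the connection drops.
def fetch_status(transport):
    try:
        return transport.send("STATUS")
    except ConnectionError:
        return "OFFLINE"  # the error-handling branch we want to exercise

# Simulate the communication break: the mock raises on every call,
# so the error path runs without touching any real network.
broken = Mock()
broken.send.side_effect = ConnectionError("link down")
assert fetch_status(broken) == "OFFLINE"

# The happy path is just as easy to pin down.
healthy = Mock()
healthy.send.return_value = "OK"
assert fetch_status(healthy) == "OK"
```

Driving the application into a genuine outage on every test run would be slow and flaky; the mock reaches the same branch in microseconds.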

What is the right way to deal with legacy code?

Rewrite it, of course!

It usually comes to that. The code becomes more and more complex until we decide we need to rewrite it.

But the professional answer is “incrementally”. There’s a lot of experience in the field on how to take small parts of the code, extract them at very low risk, and add a few tests for them. While it may not look feasible, and definitely looks risky, simplifying a legacy code base is possible if you take it one step at a time. That means that when you work on a feature or fix a bug, you add a test. If that test requires small modifications, make them. If it requires a big, risky change, decide and move on. Overall you’ll see that most of the code can move, piece by piece, to a simpler, testable design.

Always do a code and test review. Having more eyes on the problem and the solution will leave the code (and tests) in a better state.

How can a tester choose the right kind of test?

I don’t think there is one right test. The tester (or the developer for that matter) needs to understand the risk, the business domain, and what can go wrong. Maybe a test is not needed at all?

Here’s an example: the code gets a pointer that may be null as a parameter. This is a possible crash risk. But if you know more about the calling code, you know that in the current state there’s no possibility of a null value reaching our code. So why write a test? Should the tester spend time trying to reach this “impossible” state?

We have a finite amount of time for testing, and so we should focus on important things. A good tester needs to apply all his knowledge and decide where to put the effort, based on the strategy we talked about.
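To make the trade-off concrete, here is a hypothetical sketch in Python, where the analogue of a null pointer is `None`. The `display_name` function and `User` class are invented for this illustration:

```python
# Hypothetical code under test: crashes if user is None.
class User:
    def __init__(self, name):
        self.name = name

def display_name(user):
    return user.name.upper()  # AttributeError if user is None

# The normal case is clearly worth a test.
assert display_name(User("ada")) == "ADA"

# The "impossible" case: if every current caller guarantees a
# non-None user, this test only documents a state that cannot occur.
# Whether it is worth keeping depends on the risk that a future
# caller breaks that assumption, not on coverage for its own sake.
try:
    display_name(None)
    crashed = False
except AttributeError:
    crashed = True
assert crashed
```

The point is not that the second test is wrong, but that writing it is a deliberate risk-based decision rather than a reflex.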

When should Test Driven Development (TDD) be used?

Tough question. TDD proponents will say: on every piece of code. But that’s not completely right. There are many cases where test-first can help, like fixing a bug or even adding a feature to existing legacy code. TDD on top of test-first (that’s the design part) is much easier and more beneficial with new code.

That covers most cases in code. So why are most people not doing it?

Again, the problem is not with the technique or the tools. TDD is about discipline. Going from red to green means adding just a small bit of code to make the test pass, or even writing the test first. Most developers lack that discipline. While it can be enforced, it usually isn’t. Much in the way of “we need to release it, so we’ll skip testing”, quality usually takes a back seat to producing code. Sad but true.
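The red-green discipline can be sketched in a few lines. This is a deliberately tiny hypothetical example; the `slugify` function is invented for the sketch:

```python
# Step 1 (red): the test is written before the implementation exists,
# so running it at this point would fail with a NameError.
def test_slug():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): add just enough code to make the test pass,
# and nothing more.
def slugify(text):
    return text.lower().replace(" ", "-")

# Step 3 (refactor) would come next, with the test staying green
# throughout. Here we simply confirm green:
test_slug()
```

The discipline lies in resisting the urge to write more production code than the failing test demands.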

What tools are generally used in unit testing and TDD?

There are two types of tools, and every language and technology has at least a couple. The first family is the xUnit test frameworks. Examples are JUnit in Java, NUnit in .NET, Jasmine in JavaScript, and GoogleTest in C++. These simple tools contain three elements of testing:

  • Test identification – how the framework tells a test from a regular function. Sometimes it’s also called test registration, because the user adds a test to the framework.
  • Execution and reporting – we want to run all (or some) tests and get their result.
  • Assert APIs – a way to define the acceptance criteria for tests.

Frameworks contain a few more features today, but that’s basically it.
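The three elements map directly onto any member of the xUnit family. As a sketch, here is how they look in Python’s built-in `unittest` (the test case and its contents are invented for the example):

```python
import unittest

class CalculatorTest(unittest.TestCase):
    # Test identification: the framework discovers methods whose
    # names start with "test" inside TestCase subclasses.
    def test_addition(self):
        # Assert API: defines the acceptance criteria for the test.
        self.assertEqual(1 + 1, 2)

    def test_division_by_zero_is_reported(self):
        with self.assertRaises(ZeroDivisionError):
            1 / 0

# Execution and reporting: the runner executes every discovered
# test and reports pass/fail results.
if __name__ == "__main__":
    unittest.main(exit=False, verbosity=2)
```

Swap the names and syntax and the same three elements reappear in JUnit, NUnit, Jasmine, or GoogleTest.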

The second family is what we call “mocking” tools. These range in capabilities, based on language and technology. Mocks are essential when working in legacy code.

A mocking framework has two uses:

  • Changing the behavior of the dependency, so you can test the code in isolation
  • Testing the interaction between the code and the dependency

Since I’ve mentioned a couple of test frameworks, here are some mocking frameworks that complement them: Mockito in Java, FakeItEasy in .NET, Sinon.js in JavaScript, and Google Mock in C++.
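Both uses can be shown with Python’s standard-library `unittest.mock`, a close cousin of the frameworks above. The `alert_if_low` function and its `notifier` dependency are hypothetical names for this sketch:

```python
from unittest.mock import Mock

# Hypothetical code under test: sends an alert through a
# notifier dependency when the balance is low.
def alert_if_low(balance, notifier):
    if balance < 100:
        notifier.send(f"Low balance: {balance}")
        return True
    return False

notifier = Mock()

# Use 1: change the dependency's behavior so the code runs
# in isolation, without a real notification service.
notifier.send.return_value = None
assert alert_if_low(50, notifier) is True

# Use 2: verify the interaction between the code and the dependency.
notifier.send.assert_called_once_with("Low balance: 50")

notifier.reset_mock()
assert alert_if_low(500, notifier) is False
notifier.send.assert_not_called()
```

The first use isolates the unit; the second checks that it talks to its collaborator correctly, which is exactly what matters most in legacy code.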

Thanks, Gil, for sharing your thoughts and ideas. We hope to talk to you again.

For more information about Gil and his professional activities, visit his blog and check out the book.

In the second part of the interview, Daniel Knott clarifies the difference between the traditional test pyramid and a mobile one. He also talks on smartwatch app testing, which is the main subject of his recently published book. If you’ve missed the first part of the interview with Daniel on modern trends and challenges in mobile testing, you can find it here.

Daniel Knott is a well-known mobile expert, speaker at various conferences and a blog author. He has been working in the field of software development and testing since 2003.

Daniel, in your book Hands-On Mobile App Testing you write about a testing pyramid. Can you tell our readers about this pyramid?

Sure, in chapter 5 of my book, “Mobile Test Automation and Tools”, I wrote about test automation pyramids. I think every software tester and developer knows the testing pyramid created by Mike Cohn. In this pyramid the foundation is unit tests, the middle layer is integration tests, and at the top of the pyramid are end-to-end tests.

The size of each layer indicates the number of automated tests that should be written at that level. There should be lots of unit tests, because these tests are small and fast to execute. Then write many integration tests to test the integration of smaller units. Finally, write some end-to-end tests to see whether the system works as expected through every layer.

However, this is just a theoretical model and mere guidance for software developers and testers. From my point of view, this pyramid is not applicable to mobile apps and the mobile environment.

Can you please clarify this? What is a mobile test pyramid and how is it different from the traditional testing pyramid?

In most cases mobile apps are used on the move. Users may be walking around or commuting to work by car, train or plane.

In all these environments they have different internet connections and face other factors, like weather conditions, that may influence how the mobile app functions. Because of this, I think the traditional testing pyramid is outdated for mobile.

From my point of view, the foundation of the mobile pyramid must be manual testing! Manual testing is the key to a successful mobile app. It is important to test the app in the environment where the potential customer will use it. Simulating the environment is a bad idea and will lead to apps of really poor quality.

The next layer of the mobile pyramid is end-to-end tests covering all components of the app: backend, network and user interface.

The next layer is beta testing – another manual testing layer, but with real customers. Try to establish a beta testing community with real customers to get real feedback before going live to 100% of your customers.

And at the top of the pyramid are unit tests. This might sound really strange, but writing unit tests for mobile apps is not as easy as for backend or web applications. There are so many APIs and sensors that can be used by an app, and it is really difficult and time-consuming to mock all those interfaces to write efficient unit tests.

However, I created the mobile test pyramid to be dynamic, like the whole mobile world. The pyramid may not fit every app, and therefore it is important to have a flexible model, too.

Daniel, this year you’ve published another book – Smartwatch App Testing. In this book you provide readers with an overview of the different smartwatch platforms with a focus on design guidelines, input methods, connectivity, manufacturer as well as software features. Why have you decided to cover this topic?

It was never my goal to write this eBook in the first place. My initial intention was to learn something new in the mobile sector. I was really curious to learn more about smartwatches. I got my very first one, a Samsung Gear S2, at the end of 2015. I really liked it, and soon I was interested in learning more about watchOS, Pebble OS and Android Wear.

Luckily, I had the chance to use each system for a while and try it out. And as a blog author, I decided to write a series about it on my blog to share my findings with the community. At the end of the series, I thought it might be a good idea to publish it on LeanPub as a small eBook.

What are the peculiarities of smartwatch app testing?

Most smartwatch platforms are really stupid in terms of features and functionality without a paired mobile device. So one of the main challenges is the connectivity between the watch and the paired device in order to exchange data.

Another important point is the usability of the smartwatch apps. Smartwatches offer only a very small screen to show the information to the users and because of that, the UI must be well designed in order to perform the required actions efficiently.

There are new ways of interacting with the watch, e.g. with new wrist gestures or voice commands. During my exploration I focused on all smartwatch platforms and identified four key areas when it comes to smartwatch app testing:

  • Design
  • Usability/Interaction Design
  • Functionality
  • Connectivity

What advice can you give to mobile testers?

Try out many things, install many apps on your private device, use them and try to learn from other apps. Stay hungry and keep up to date with new trends in mobile.

Thanks for sharing your viewpoint with us. We will be glad to see you and talk to you again.

If you want to learn more from Daniel, visit his blog Adventures in QA. You may also find helpful his books Hands-On Mobile App Testing or Smartwatch App Testing.

Daniel Knott is a well-known mobile testing expert specializing in test automation. Apart from his daily work, he’s also a blogger and author of several books on software testing. This time we’ll talk about modern trends and challenges in mobile testing.

Hi Daniel, can you please tell us about yourself, your current occupation and expertise?

Sure, my name is Daniel Knott and I am a mobile test engineer at XING AG, a social network for business contacts founded in Hamburg, Germany. I started my career in 2003 as a trainee at IBM, where I was already involved in a number of software development projects. Soon software testing became my passion and I started a career as a software tester for web applications. Later on I grabbed the opportunity to move into the mobile development and testing area.

When I started testing mobile applications in 2010, everything was completely new for me. I started from scratch, studying the relevant tools and processes, and I really enjoyed the freedom at that time. Since 2010 I have been working in the mobile testing field.

Currently I am a member of the Android platform team at XING, where our most essential challenge is to scale mobile development and testing across the whole company. To date, we have grown from 1 central mobile team to 7 mobile development teams contributing to the master branch of our iOS and Android apps. Every two weeks we submit a new app version to the app stores of the mobile platforms. And with this scaling, many new challenges came up that we in the team have to handle.

My current task is to help the web software testers become mobile testers. Therefore, I created an in-house training to teach them all necessary skills they need for their daily work. Besides that, I coordinate the releases and test our apps both manually and in an automated way. So there is lots of work to do and more things will come up in the near future. No day is like the other and that is what I really like.

In your view, what are the main challenges in mobile testing?

For me, the main challenges in mobile testing are:

  • customers
  • mobile fragmentation
  • test automation

Customers are challenging because you need to know them. If you don’t know who your customers are, you risk developing the wrong features. To handle the customer challenge in mobile, it is very important to collect some data about your target audience: the devices they use, their habits and usage patterns.

If this is the first time you release an app, watch out for competitors and find out information about the target customers. If you have a web application that is already used by the customers and you want to provide them with a mobile experience, use the collected data from the web to create mobile personas to identify possible features. If you have apps in different app stores you can add tracking to your apps to gather information about your target customers.

With this knowledge you can also handle the second challenge in mobile testing, the fragmentation. If you know what devices are used by your customers, you can concentrate on developing and testing only on these devices.

A nice way to handle the device fragmentation is to create so-called device groups. For example, create three groups: A, B and C. In group A, add the most popular devices with the highest priority. Include the less popular devices in group B and the least popular in group C. With the help of these groups, every mobile developer and tester can reduce the effort of testing on many devices.
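The device-group idea can be sketched in a few lines of Python; the device names, usage shares and thresholds below are made up purely for illustration:

```python
# Hypothetical usage-share data collected from app analytics (percent).
device_share = {
    "Galaxy S7": 24.0,
    "Pixel": 18.5,
    "Nexus 5X": 9.0,
    "Moto G4": 4.2,
    "Galaxy S4 mini": 1.1,
}

def assign_groups(share, a_threshold=10.0, b_threshold=3.0):
    """Bucket devices into priority groups A/B/C by usage share."""
    groups = {"A": [], "B": [], "C": []}
    for device, pct in sorted(share.items(), key=lambda kv: -kv[1]):
        if pct >= a_threshold:
            groups["A"].append(device)
        elif pct >= b_threshold:
            groups["B"].append(device)
        else:
            groups["C"].append(device)
    return groups

groups = assign_groups(device_share)
# → {'A': ['Galaxy S7', 'Pixel'], 'B': ['Nexus 5X', 'Moto G4'],
#    'C': ['Galaxy S4 mini']}
```

In practice group A would then be tested on every release, B on major releases, and C only occasionally, which is exactly the effort reduction Daniel describes.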

The last challenge every mobile tester needs to handle is mobile test automation. Mobile test automation tools are not as mature as their web counterparts, but the tools are getting better and better. There are so many great mobile testing tools out there, and the cool thing is most of them are open source and free to use. When choosing a tool, the company must decide which of the available tools fits best into its software development environment, as there is no universal solution in the field.

Are there any trends in mobile testing that influence the way you work?

There are so many trends coming up in mobile at the moment, but none of them influences the way I work. That may change tomorrow, though, so it’s important to be ready for new things and for changes.

However, there is something that impacts my daily work, and this is the software testing community. There are so many great software testers out there who share their knowledge with the community to help others become better, and this is great. There is always someone you can ask, e.g. via Twitter or Slack, about a problem or solution to get almost immediate feedback.

What skills are vital for mobile testers?

The same skills every software tester must have. Apps are also software but with a different focus. Nevertheless, skills like strong communication, curiosity, critical thinking and creativity are vital for any mobile tester. There are so many ways to go in mobile testing that these skills are really important in order to become successful.

What is more, it is very important to be a constant learner. The mobile world is changing really fast and it is crucial to keep up with the market and the new features that are coming up every month.

Thank you, Daniel, for sharing your ideas with us.

This is the first part of the interview with Daniel Knott, read the second part here on our blog soon. If you want to learn more from Daniel, visit his blog.

Maaret Pyhäjärvi is a tester extraordinaire specializing in exploratory testing. She is a software specialist with soft spots for hands-on testing, helping teams grow and building successful products and businesses.

She’s been working with software since 1995 in various roles and delivers talks as a popular speaker in Finland as well as internationally. She works as a tester at Granlund and trains testing on the side through Altom.

After finishing her PhD in Physics at Stockholm University, Christin Wiedemann started working as a software developer for the Swedish consulting company HiQ.

Christin soon discovered that software testing was more interesting and challenging than development and subsequently joined the Swedish test company AddQ Consulting. At AddQ, she worked as a tester, test lead and trainer, giving courses on agile testing, test design and exploratory testing throughout Europe.

Christin developed a course on exploratory testing, and is a co-creator of the exploratory testing approach xBTM. Christin currently lives in Vancouver, where she joined Professional Quality Assurance (PQA) Ltd. in 2011. In her current role as Chief Scientist, she drives PQA’s research and method development work. She continues to use her scientific background and pedagogic abilities to develop her own skills and those of others.

If you missed the first part of the interview, you can read it here.

a1qa: How will tools be used to test the IoE?

Paul Gerrard: It seems inevitable to me that although there will always be a need and opportunity to do manual testing, a much larger proportion of testing will have to be performed by tools than we are currently used to. The tools will need to execute very large numbers of tests. The challenge is not that we need tools. The challenge will be, “how do we design the hundreds, thousands or millions of tests that we need to feed the tools?”

So for example, suppose we need to test a smart city system that tracks the movement of vehicles and the usage of car parking in a town. We’ll need a way of simulating the legal movements of cars around the town along roads that contain other cars that are following their own journey. The cars must be advised of their nearest car space and the test system must direct cars to use the spaces that they have been advised of. Or not. We could speculate on some heuristics that guide our test system to place cars where their owners want them. But…. life happens and our simulation might not reflect the vagaries of the weather, pedestrians, drivers and the chaotic nature of traffic in cities.

Now, not every system will be as complex as this. But the devices now being field-tested in homes, hospitals and public places all have their nuances, complications and unexpected events. A healthcare application that warns you of upcoming doctor’s appointments, schedules your prescribed drugs and monitors your vital signs could trigger hundreds or thousands of scenarios.

Increasingly, small simple systems will interact with other systems and become more complex entities that need testing. It’s only going one way.

a1qa: Why do we need a ‘New Model for Testing’?

Paul Gerrard: The current perspectives, styles or schools of testing will not accommodate emerging approaches to software development such as continuous delivery, or new technologies such as Big Data, the Internet of Things and pervasive computing. These approaches require new test strategies, approaches and thinking. Our existing models of testing (staged, scripted, exploratory, agile, interventionist) are mostly implementations of testing in specific contexts.
Our existing models of testing are not fit for purpose – they are inconsistent, controversial, partial, proprietary and stuck in the past. They are not going to support us in the rapidly emerging technologies and approaches especially testing the IoE.

I have proposed an underlying model of testing that is context-neutral, and I have tried to shed some light on what this might be by postulating the Test Axioms, for example. The Axioms are an attempt to identify a set of rules or principles that govern all testing. Some people who have used them think they work well. They don’t change the world; they just represent a set of things to think about – that’s all. But if you accept them as true, then you can avoid the quagmire of debates about scripted versus unscripted testing, the merits and demerits of (current) certifications, or the value of testing and so on.

The New Model for Testing is an extension to this thinking. The model represents the thought-processes that I believe are going on in my own head when I explore and test. You might recognize them and by doing so, gain a better insight into how you test too. I hope so. As George Box said, ‘essentially, all models are wrong, but some are useful’. This model might be wrong, but you might find it useful. If you do find it useful, let me know. If you think it’s wrong, please let me know how I might improve it.

The New Model of Testing attempts to model how testers think and you can see a full description of the model, the thinking behind it and some consequences here.

a1qa: The New Model mentions Test Logistics. What is that?

Paul Gerrard: When tests are performed on-the-fly, based on mental models, the thought processes are not visible to others; the thinking might take seconds or minutes. At the other extreme, complex systems might have thousands of things to test in precise sequence, in complicated, expensive, distributed technical environments with the collaboration of many testers, technicians and tool-support, taking weeks or months to plan and apply.

Depending on the approach used, very little might be written down or large volumes of documentation might be created. I call the environmental challenges and documentary aspect ‘test logistics’. The environmental situation and documentation approach is a logistical, not a testing challenge. The scale and complexity of test logistics can vary dramatically. But the essential thought processes of testing are the same in all environments.

So, for the purpose of the model, I am going to ignore test logistics. Imagine, that the tester has a perfect memory and can perform all of the design and preparation in their head. Assume that all of the necessary environmental and data preparations for testing have been done, magically. Now, we can focus on the core thought processes and activities of testing.

The model assumes an idealized situation (like all models do), but it enables us to think more clearly about what testers need to think about.

a1qa: Can you summarise what the New Model says?

Paul Gerrard: At the most fundamental level, all testing can be described in this way:

1. We identify and explore sources of knowledge to build test models
2. We use these models to challenge and validate the sources of knowledge
3. We use these models to inform (development and) testing.

I make a distinction between exploration and testing. The main difference from the common view is that I will use the term Exploration to mean the elicitation of knowledge about the system to be tested from sources of knowledge.

I have hinted that by excluding the logistical activities from the New Model, then the processes can be both simplified and possibly regarded as universal. By this means, perhaps the core testing skills of developers and testers might coalesce. Testing logistics skills would naturally vary across organisations, but the core testing skills should be the same.

From the descriptions of the activities in the exploration and testing processes, it is clear that the skills required to perform them are somewhat different from the traditional view of testing as a staged activity performed exclusively by independent test teams. Perhaps the New Model suggests a different skills framework. As a challenge to the status quo, I have put together a highly speculative list of skills that might be required.

I hope the model stimulates new thinking and discussion in this field.

a1qa: What is the future for testing and testers?

Paul Gerrard: Of course, that’s a huge question and all we can do is speculate on what might happen. My suggestions below are partly informed by what I have seen in the technology and testing markets, and by friends and colleagues who have shared their experiences with me.

Certification?

Love them or hate them, the certification schemes are not going away. The market is there and there are plenty of training providers willing to fulfill the need. Their popularity varies by country, and the credibility of the schemes varies by geography too. I would argue that the certified syllabuses map directly to what I call logistics, so their value is limited. This is being recognized by more and more practitioners and training providers.

There is hope that people recognize the limited value of certified training and invest in broader skills training. The New Model suggests what these skills might be. The existing organisations are crippled by their scale and won’t be improving their output any time soon. I believe that better certification schemes will only emerge when a motivated, informed group of people choose to make a stand and create them.

Manual vs automated testing?

There is a curious position that some people are taking. Some folks say that testing performed by tools is easier, and less valuable, less significant or less effective than testing crafted by people, particularly when conducted in an improvisational style. This is not a stance that can be justified, I think. Most of the software on the planet doesn’t have a user interface at all, so it has to be tested using tools, and naturally these tests must be scripted in some way.

Web services, for example, might be simple to test, but can also be devilishly complex. The distinction of ‘manual’ and ‘automated’ testing in terms of ease, effectiveness and value is fatuous. The New Model attempts to identify the critical thinking activities of testing. What I have called the ‘application of tests’ may be performed by tools or people, but ONLY people can interpret the outcomes of a test. And that is all I have to say.

I should add that, more and more, test automation in a DevOps environment is seen as a source of data for analysis just like production systems, so analysis techniques and tools are another growing area. I wrote a paper that introduces ‘Test analytics’ here.

a1qa: Should testers learn how to write code?

Paul Gerrard: I have a simple answer – yes.

Now it is possible that your job does not require it. But the trend in the US and Europe is for job ads to specify coding skills and other technical capabilities. More and more, you will be required to write your own utilities, download, configure and adapt open source tools or create automated tests or have more informed conversations with technical people – developers. New technical skills do not subtract from your knowledge, they only add to it. Adding technical skills to your repertoire is always a positive thing to do.

If you are asked to take a programming or data analytics course, take that opportunity. If no one is asking you to acquire technical skills, then suggest it to your boss and volunteer.

a1qa: What does ‘Shift-Left’ mean for testers?

Paul Gerrard: It seems like every company is pursuing what is commonly called a ‘shift-left’ approach. It could be that test teams and testers are removed from projects and developers pick up the testing role. Perhaps testers (at least the ‘good’ ones) are being embedded in the development teams. Perhaps testers are morphing into business or systems analysts. At any rate, what is happening is that the activities, or rather the thinking activities, of testing are being moved earlier in the development process.

This is entirely in line with what testing leaders have advocated for more than thirty years. Testers need to learn how to ‘let go’ of testing. Testing is not a role or a stage in a project; it is an activity. Shift-left is a shift in thinking, not people. Look at this as an opportunity, not a threat.

‘Test early, test often’ used to be a mantra that no one followed. It now seems to be flavour of the month. My advice is don’t resist it. Embrace it. Look for opportunities to contribute to your teams’ productivity by doing your testing thinking earlier. The left hand side of the New Model identifies these activities for you (enquiring, modeling, challenging and so on). If your test team is being disbanded, look at it as an opportunity to move your skills to the left.

There is something of a bandwagon for technical people advocating continuous delivery, DevOps and testing/experimenting in production. It seems hard for testers to fit into this new way of working, but again, look for the opportunity to shift left and contribute earlier. Although this appears to be a technical initiative, I think it is driven more by our businesses. I learned recently that marketing budgets are often bigger than company IT budgets nowadays. Think about that.

Marketers may be difficult stakeholders to deal with sometimes, but their power is increasing, they want everything and they want it now. The only way to feed their need for new functionality is to deliver in continuous, small, frequent increments. If you can figure out how you can add value to these new ways, speak up and volunteer. Of course large, staged projects will continue to exist, but the pressure to go ‘continuous’ is increasing and opportunities in the job market require continuous, DevOps and analytics skills more and more. Embrace the change, don’t resist it.

I wish you the best of luck in your leftwards journey.

Reach Paul via Twitter and Linkedin.

Paul, thank you for sharing your views and experience. We hope to talk to you again.

If you missed the first part of the interview, read it here.

a1qa: Why do you claim “Look at the Big Picture” as one of the seven success factors?

Janet Gregory: Sometimes teams work really well, delivering and testing stories as planned. All is going really well, but when the customer sees the finished product, they are not happy, or they find some major issues.

This problem often occurs when the team forgets that there is a larger feature that the stories are part of. The product backlog may be a list of stories that have lost their context. A feature is the business capability the customer really wants. That feature is broken up into many stories and unless teams are constantly looking at the real problem (the business need), they can end up delivering the wrong thing. The other part to this success factor is the system as a whole.

There are impacts to that system that a single story may not take into account. Testers looking at the big picture can often see those impacts and help identify some of the issues early, preventing delays later.

a1qa: Programmers as testers: why programming skills can be a plus for a tester?

Janet Gregory: This is a very controversial question these days. Many people take that to read that testers must be able to code production code. I do not. I think it is definitely a plus to be able to read and understand code so that testers can discuss risks, tests and design with the programmers.

I also think programming skills are good for helping with test automation. The term that Lisa and I use in More Agile Testing: Learning Journeys for the Whole Team, is ‘technical awareness’. It is a phrase I first heard from Lynn McKee and I took it to heart. Technical awareness might mean programming skills, it might mean database knowledge or perhaps more about embedded dev.

Technical awareness is context sensitive so what is an important skill on one team, may not have as much importance on another. Testers should strive to learn what is important to add value to the team they are working with, and yes… that might include programming skills. To end this question, I will say emphatically, I do not think testers need to be able to code production code. That does not mean they are not capable, but there is so much other value they offer to the team.

a1qa: You teach a 3-day Agile Testing Course. What is the most difficult thing about teaching Agile?

Janet Gregory: I’m not sure teaching agile is a problem. Most people get the concepts fairly easily. Putting it into practice is another matter. In my course, I try to get the attendees to really experience what that means. I teach the theory, but then work through a case study with exercises so that the participants really experience what that theory means.

Those are usually the ‘ah ha’ moments for them. The difficulty is often in letting go of what they had considered best practices for many years. When a student comes up to me after class and says, “it now all falls into place,” all of the struggles during the class were worth the effort. Those ‘light bulb’ moments make it all worthwhile for me.

Janet, thank you for sharing your ideas and experience. We hope to talk to you again.

Paul Gerrard is a consultant, teacher, author, webmaster, programmer, tester, conference speaker, rowing coach and publisher. He has conducted consulting assignments in all aspects of software testing and quality assurance, specialising in test assurance.

He has presented keynote talks and tutorials at testing conferences across Europe, the USA, Australia, South Africa and occasionally won awards for them. Educated at the universities of Oxford and Imperial College London, he is a Principal of Gerrard Consulting Limited, the host of the UK Test Management Forum and the Programme Chair for the 2014 EuroSTAR testing conference. 

In 2010 he won the EuroSTAR Testing Excellence Award and in 2013 he won the inaugural TESTA Lifetime Achievement Award. He’s been programming since the mid-1970s and loves using the Python programming language.

a1qa: How did you get into Software Testing?

Paul Gerrard: My first job in 1980 was as a graduate civil engineer, but on the first day the boss said, “We have no work for you at the moment, but we’ve had a new computer delivered. Here’s the manuals – go figure out what we can do with it.” It was a small office, so I rapidly became the office computer expert, and the bug bit. I’ve been working in the software and testing business ever since 1981.

After a couple of job moves in the early 80s, I ended up working for Mercury Communications – the first competitor to British Telecom in the newly deregulated telecom market in the UK.

For about three years, I ran a small team that was embedded in a company HQ. We shared an office with our users and had dedicated on-site customers. Although we worked on ‘green screens’ connected to central computers, we prototyped and ran paper walkthroughs of screen designs, and we released software every few days. We used source code management tools, and even automated build and test tools. We relied on our users to test quite a bit and got a reputation in the IS department for ‘going native’. I would say that we were lean and somewhat agile.

In 1992, I joined a software testing services firm, Systeme Evolutif (SE) in London. The SE business was founded on three core services – consultancy in software testing, testing training and a small amount of outsourcing. Having helped to organise the first EuroSTAR conference in 1993 with Dorothy Graham and SQE from the US, we managed the whole conference in 1994 and 1995. We have contributed to the BCS SIGIST, the British Standard BS 7925, the ISEB Testing Certificate schemes, the early years of the DSDM Consortium, and since 2004 we have hosted the UK Test Management Forum.

In 2007, I bought the SE business and re-branded it Gerrard Consulting, and we are active in UK and international testing conferences. I also have a non-executive role on the Technology Advisory Board of TestPlant Limited.

a1qa: Why do you think the Internet of Things is important?

Paul Gerrard: The Internet of Things or the Internet of Everything (I’ll use “IoE” as shorthand) is the most exciting change in our industry since client/server came on the scene in the late 1980s. I think client/server was important because the Internet, Web services, mobile computing are all essentially client/server implementations. Some would argue that Object-Oriented analysis, design and development are significant, but I think these affect only programmers and designers. Agile is significant – but I think it is transitional. Continuous delivery and DevOps are bringing factory automation processes into software and are perhaps the most appropriate approach for IoE implementations. After Waterfall and Agile, Continuous Delivery and DevOps are becoming ‘the third way’.

The Internet of Everything, if it develops in the way that some forecasters predict, will affect everyone on the planet. Estimates of the number of connected devices on the IoE range from 50 billion devices to 700 billion devices over the next 5 to 20 years. No one knows of course, but the expectation is that we are on a journey that will increase the scale of the internet by one hundred times. This will take some testing! But how on earth will we approach this challenge?

a1qa: What will we need to test for in the IoE?

Paul Gerrard: Last year, I was commissioned to write an article series on Testing the Internet of Everything. The first two articles (as well as a lot of other papers and articles) can be found here. These two papers set the scene and the scale of the challenge, but also introduce a seven-layer architecture that might help people to understand the features that must exist to make the IoE happen. The second paper also sets out at a high level some of the risks that we must address.

I am writing the third instalment – on IoE Test Strategy – right now. I have to tell you that figuring out how we test what you might say is every system component that exists now and forever is proving quite difficult! I won’t be solving all of the testing challenges in the next few weeks. But what I can perhaps do is suggest some of the dimensions, some of the influences, and some of the opportunities of the IoE testing problem. I will try and set the scene for what we have to think about here.

Scale: The obvious first challenge is scale of course, but that is primarily a logistical problem for implementers. Of course, there will be the scalability challenges at various levels of the architecture and we’ll have to do some large-scale load, performance and stress testing.

Hardware-Level functionality: The lowest level devices are sophisticated, but essentially perform simple functions like sensing the value of something or changing the setting, position or speed of something. These devices are packaged into objects that will need testing in isolation and most of this will be performed by manufacturers.

Object and Server level functionality: The vast majority of functionality that needs testing will reside on local hubs and aggregators and data-centre-based server infrastructure. Internet-based and native mobile apps will deliver the data, visualisations and control over other aspects of the architecture. Architectures will range from simple web-apps to systems with ten, twenty or more complex sub-systems.

Mobile objects: Testing static objects is one thing, but testing objects that move is another. Mobile objects move in and out of the range of networks, they roam across networks, and the environmental conditions at different locations vary and may affect the functionality of the object itself. Our sources of data, and the data itself, will be affected by the location and movement of devices. Mobile devices will drift into and out of our network range, but also into and out of other, not necessarily friendly, networks. Power, interference, network strength, roaming and jamming issues will all have an effect.

Moving networks: Some objects move and carry with them their own local network. A network that moves will encounter other networks that interfere or may introduce a rogue or insecure network into your vicinity and pose a security problem. Cars, buses, trains, aeroplanes, ships, shopping trolleys, trash trucks, hot-dog stalls, tractors – almost anything that moves – might carry with them their own networks or join foreign networks as they encounter them.

Network security risks at multiple levels: Rogue devices that enter your network coverage area might eavesdrop or inject fake data. Rogue access points might hijack your users’ connections and data. Vulnerable points at all levels in your architecture are prone to attack. There are security risks at all levels of your network architecture that will need to be addressed.

Device registration, provisioning, failure and security: Devices may be fixed in location but the initial registration and provisioning (configuration) are likely to be automatic. More complicated scenarios arise where devices move in and out of range of a network or transition between networks. Needless to say, low-power devices fixed in perhaps remote locations are prone to power failures, snow, heat, cold, vandals, animals, thieves and so on. Power-down, power-up and automated authentication, configuration and registration processes will need to be tested.

Collaboration confusion: Mobile, moving devices will collaborate with fixed devices and with each other in more and more complex ways and in large numbers. For example, in a so-called Smart City, cars will collaborate as a crowd to decide optimum routes so that every car gets to its destination efficiently. But however these resources are controlled, car park spaces will become available and unavailable randomly, so the optimisation algorithm must cope with rapidly changing situations. At the same time, these services must not confuse public services, commercial vehicle drivers, private car drivers and passengers. Managing the expectations of users will be a particular challenge.

Integration at all levels: Integration of physical devices or software components will exist at every level. Integration of data will encompass flows of data that must be correctly filtered, validated, queued, transmitted and accepted. Many IoE devices will be sensors chirping a few bytes of data periodically, but many will also be software components, servers, actuators, switches, monitors, trip-ups, heaters, lifts, cars, planes and factory machinery. The consistency, timeliness, reliability and, of course, safety of control functions will be a major consideration. Industry standards in safety-related application domains are likely to have a role. However, for now, much of the legislation and standardisation that will be required does not yet exist.

Big Data – logistics: Needless to say, much of the data collected by devices will end up in a database somewhere. Some data will be transactional, but most of it will be collected by sensors in remote locations or connected to moving objects. A medium-sized factory might collect as much as a terabyte (1,000,000,000,000 bytes) per day. Performance and reliability requirements might mean this data must be duplicated several times over. Wherever it is stored (and for however long), a very substantial data storage service will be part of the system to be tested.
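To put that logistics challenge in perspective, here is a back-of-the-envelope sketch of the storage such a system would need. The replication factor and retention window are illustrative assumptions, not figures from the interview:

```python
# Back-of-the-envelope storage estimate for an IoE data pipeline.
# Only the 1 TB/day figure comes from the text; the rest are assumptions.

TB = 10**12  # bytes in a terabyte (decimal)

daily_ingest_bytes = 1 * TB   # one medium-sized factory, per the text
replication_factor = 3        # assumed: data duplicated for reliability
retention_days = 90           # assumed retention window

total_bytes = daily_ingest_bytes * replication_factor * retention_days
print(f"Storage needed: {total_bytes / TB:.0f} TB ({total_bytes / 10**15:.2f} PB)")
# → Storage needed: 270 TB (0.27 PB)
```

Even with these modest assumptions, a single factory demands hundreds of terabytes of managed, testable storage; multiply by thousands of sites and the scale of the test problem becomes clear.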

Big-Data – Analysis and visualisation: Analyses of data will not be limited simply to tabulated reports. The disciplines of data science and visualisation are advancing rapidly, but these rely on timely, accurate and consistent data; they rely on data acquisition, filtering, merging, integration and reconciliations of data from many sources, many of which will never be under your control. Often, data will be sparse or collected infrequently or at random times. Data will need to be statistically significant, smoothed, extrapolated and analysed with confidence.

Personal and corporate privacy: The privacy of data – what is captured, what is transmitted, what is shared with trusted or unknown third parties or stolen by crooks and misused – is probably people’s most pressing concern (and also a barrier to exploitation of the IoE). The current legal framework (e.g. the Data Protection and privacy laws in the UK) may not be sufficient to protect our personal or corporate privacy. Hackers and crooks are one threat, but a central government listening to all our data and creating personalised data envelopes to track crooks and terrorists will probably end up tracking all citizens, not just suspects. Your government may be seen to be a villain in this unfolding story.

Wearables and Embedded: Right now, there is a heavy focus on wearable devices such as training and activity trackers and heart monitors that connect with apps on mobile phones, whose data can be integrated with Google Maps to show training routes, for example. Other devices such as smart watches, clothing and virtual reality headsets are emerging. But there are increasing numbers of applications where the device is not worn but embedded in your body. Healing chips, cyber pills, implanted birth control, smart dust and the ‘verified self’ (used to ID every human on the planet) are all being field tested. It will be hard not to call human beings ‘things’ on the internet before too long. Will we need to hire thousands of testers with devices embedded in their bodies? Surely not. But testing won’t be as simple as firing off thousands of identical messages to servers using the functional and performance test tools we have today. Something much more sophisticated will be required.

a1qa: How will IoE change the way we test?

Paul Gerrard: Let me suggest that there have been some quantum leaps in the complexity of our systems and the IoE is the next step on our journey. Let me recount the history of our software testing journey. I know testing started before we had ‘green screens’, but it is a convenient starting point.

  1. Screen based applications running on dumb terminals were the norm up to the mid-1980s or so (and of course are still present in many companies). Each screen typically had a single entry point and a single exit point, the content of the screen and data input was limited to text. Life was relatively simple – but systems had many screens.
  2. With the advent of GUI applications, it became possible to have many windows active at the same time. There were many ways in and ways out. The content of screens could now include graphics, sounds and video. Windows applications had to deal with events – triggered by the user clicking anywhere on screen or from other windows or applications resident in the GUI environment.
  3. At the same time as GUIs arrived, client/server became the architecture of choice for most applications. The scope of an application was no longer limited to workstations but might involve many connected servers collaborating over (often flaky) middleware. Performance and reliability became more prominent risks.
  4. The emergence of the internet, although hailed as a revolution, is really an instance of client/server. What was different, though, was the exposure of private networks, functionality and data to the public internet. Security and privacy became much bigger concerns.
  5. We are in the thick of the ‘Mobile Revolution’ right now. The democratisation of software means applications are available anytime and anywhere on varied devices and configurations in numbers far beyond our ability to test comprehensively. The proliferation of apps and devices and our enthusiasm for this expansion knows no bounds.
  6. And now, on top of all of the technologies above, we expect that the network of connected ‘things’ will increase the scale of the internet by perhaps 100 times. These devices range in sophistication from 50 cent components to buildings, ships, cars and aeroplanes. And that’s the point – IoE brings all of the challenges I mentioned earlier ‘plus all of the above’ with the full range of sophistication.

The IoE potentially brings a new level of complexity, but it also brings new levels of scale. The non-functional risks are reasonably well known. What is new is the need to do functional testing and simulation at scale.
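What "simulation at scale" might look like in practice can be sketched with a few lines of code: thousands of virtual sensors "chirping a few bytes of data periodically", with each message run through the kind of validation an ingest back-end under test would apply. The device count, message format and validation rule here are invented for illustration:

```python
# Minimal sketch of functional testing by simulation: many virtual sensors
# emit small readings, and a validator stands in for the back-end's ingest
# checks. All names, ranges and message formats are illustrative assumptions.
import random
from dataclasses import dataclass

@dataclass
class Reading:
    device_id: int
    sequence: int
    value: float  # e.g. a temperature sample

def simulate_device(device_id: int, n_readings: int, rng: random.Random):
    """Yield a stream of readings from one virtual sensor."""
    for seq in range(n_readings):
        yield Reading(device_id, seq, round(rng.uniform(-20.0, 40.0), 1))

def validate(reading: Reading) -> bool:
    """A stand-in for the system under test's ingest validation rules."""
    return -40.0 <= reading.value <= 85.0 and reading.sequence >= 0

rng = random.Random(42)  # seeded so the simulation run is repeatable
devices = 1000           # scale this up to stress the pipeline
accepted = sum(
    validate(r)
    for d in range(devices)
    for r in simulate_device(d, n_readings=10, rng=rng)
)
print(f"{accepted} of {devices * 10} simulated messages accepted")
```

A real test harness would distribute the simulated devices across machines and inject faults (dropped messages, out-of-order sequences, out-of-range values), but the principle is the same: the test population is generated, not hand-crafted.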

Read the second part of the interview here.

Janet Gregory is an agile testing coach and process consultant with DragonFire Inc. She is the co-author with Lisa Crispin of Agile Testing: A Practical Guide for Testers and Agile Teams and More Agile Testing: Learning Journeys for the Whole Team.

She is also a contributor to 97 Things Every Programmer Should Know. Janet specializes in showing agile teams how testers can add value in areas beyond critiquing the product; for example, guiding development with business-facing tests. Janet works with teams to transition to agile development, and teaches agile testing courses and tutorials worldwide. She contributes articles to publications such as Better Software, Software Test & Performance Magazine and Agile Journal, and enjoys sharing her experiences at conferences and user group meetings around the world.
For more about Janet’s work and her blog, visit www.janetgregory.ca. You can also follow her on Twitter @janetgregoryca.

a1qa: You have been working in agile for many years. Why did you choose agile as your specialization?

Janet Gregory: I’m not sure I chose agile as much as I fell into it… at least at the beginning. My first introduction to XP (eXtreme Programming) was by a development manager. I was interviewing for the job of QA manager, and one of his questions was whether I had heard of XP.

At that time XP was very new, and I had only very limited knowledge, but we worked together trying to implement some of the practices, and we were actually pretty successful, although I didn’t realize it at the time. As many other companies did at that time, we suffered layoffs and I went on to my next adventure. Probably the only QA person in Calgary doing so at the time, I landed a tester contract with a team that was actively practicing XP.

At the end of that year, I knew I never wanted to go back to a phased and gated project as a tester. Being a tester on that agile team was so much more fulfilling than anything I had previously done that I decided to concentrate only on agile teams. For the next eight years, I worked with teams transitioning to agile – usually in a dual role of tester/coach. In 2009, Lisa Crispin and I published Agile Testing: A Practical Guide for Testers and Agile Teams to help as many testers and teams as we could make the transition more smoothly.

a1qa: What are the most common pitfalls you face when introducing teams to an agile environment?

Janet Gregory: I think the most common problem I see is that testing activities are not completely integrated into the process. The team still views testing as an afterthought, and stories are not created with testability in mind. It is hard for programmers to start thinking differently about how they code so that each story can truly be ‘Done’.

I often get the call from organizations to come and help the testers keep up, and train them to understand how to do that. Most often, the problem is related to how the stories are delivered. For example, the stories may be too big and all of them are delivered at the end of the iteration at once. I call this a mini-waterfall. Teams haven’t really changed how they work; they just do it in smaller chunks.

To be continued…

Read the first part of the interview here.

a1qa: Let’s contrast sketchnoting with “usual” notes. What makes sketchnoting so appealing to you? How can sketchnotes help with meetings, workshops and conferences?

Zeger Van Hese: I am notoriously bad at writing quickly – my handwriting doesn’t really allow it. When I do try to write as quickly as I can, the result is less than legible. Written notes just don’t do it for me during something fast-paced like presentations or meetings. When my friend Ruud Cox introduced me to sketchnotes a couple of years ago – something he is really good at – I had to give it a try.

Sketchnotes have many advantages for me: the focus is more on central ideas and concepts, and I can draw concepts much faster than I can write out entire sentences. Drawing keeps my brain engaged during the whole presentation, where my attention would wander quickly otherwise. A drawback is that I am totally drained after a day of taking notes like that – that’s the price you pay when prolonging your attention span from 15 minutes to 6 hours, I guess. The visual aspect is also great for recall: one quick glance at a sketchnote brings back vivid memories of what was captured. I think that is because you don’t have to read whole passages of text to get what it’s about, while pictures are like a fast pass into your brain. Sketchnotes also have a great social aspect: a quick snapshot makes them sharable, and people at conferences seem to appreciate these visual summaries.

a1qa: Being a regular speaker at conferences (EuroSTAR, STAREAST…), you often talk about managing focus while testing. Is it about tricks or management policy?

Zeger Van Hese: Finding focus in these busy times is a hot topic that seems to hit home with many. There are quite a few tricks for doing focused work: the (10+2)*5 procrastination hack, for instance, is ideal when working on logical, easily divisible tasks. Neil Fiore also has some good pointers on focusing and dealing with procrastination in his book “The Now Habit”. Focusing is important, of course, but in my talk “Testing in the age of distraction – the importance of (de)focus in testing”, I make clear that it’s not only about focusing – defocusing is equally important. Focus is a paradox – it has distraction built into it. The two are symbiotic; they are like the yin and yang of consciousness.

This is something that managers still have a hard time grasping: people always assume that you get more done when you are consciously paying attention to a problem. After all, that’s what it means to be “working on something”. But if you are trying to solve complex problems, you need to give yourself a real break. Whenever creative thinking is needed, our mind performs the best when we’re defocused.

When I ask people when or where they get their best ideas, similar answers come up. Most people will tell me “in the shower”, “in the car”, “while running”, “while walking in nature”, “when thinking about other stuff”. This came as no surprise, since I have gotten my best ideas either when running or during long commutes in the car. The thing is: ideas typically don’t occur when we are focused on tasks. They happen when the mind starts wandering. You could say that mind-wandering promotes creativity: it is the perfect condition for creative thought.

A lot of people look down upon defocusing, thinking that staying focused is the only way to get things done. But if you look at professional athletes, you will notice that rest days are an explicit part of their training programs. An athlete’s body absolutely needs to recuperate and recover in order to come out stronger. Everyone accepts that athletes rest their bodies, since they are in such a physical line of work. Is it such a crazy idea that testers, as knowledge workers, plan for rest as well to let our brains recuperate and to rejuvenate our thinking?

To put this in a testing context: we need defocus as much as we need focus. To test effectively, we need to be able to switch between different thinking styles: creative and critical thinking. To think critically, we need to be focused. To think creatively, we need to embrace defocus. Being able to manage your focus is a key skill in software testing.

Zeger, thanks for sharing your views and ideas. We hope to talk to you again.

Reach Zeger on Twitter and read his blog

Zeger Van Hese has a background in Commercial Engineering and Cultural Science. He started his professional career in the motion picture industry but switched to IT in 1999. A year later he got bitten by the software testing bug (pun intended) and has never been cured since. He has a passion for exploratory testing, testing in agile projects and, above all, continuous learning from different perspectives. Zeger considers himself a lifelong student of the software testing craft. He was program chair of Eurostar 2012 and co-founder of the Dutch Exploratory Workshop on Testing (DEWT). He muses about testing on his TestSideStory blog and is a regular speaker at national and international conferences.

a1qa: You are known for relating outside things to testing. How can testers benefit from art or any outside perspectives?

Zeger Van Hese: Ah yes, test side stories… it’s a strange thing, really. It was never my explicit mission to bring seemingly unrelated areas into testing, but it is a pattern I first saw emerging in many stories on my blog. Later on, the same happened on my quest for interesting presentation topics: each time I followed my energy, I ended up in weird places – subject-wise, that is (although there have been times when I thought I got beamed into a surrealist novel).

It started as a disposition to think about testing that way, but later on it dawned on me that testing also benefits from such outside perspectives.

I think the main thing that diverse perspectives can bring to quality assurance consultants is diversity – more specifically, team diversity and diversity of ideas. This diversity gives our testing a requisite variety that has the potential to make our testing more effective. If we build diversity into our teams, each team member will lack some skills, but the team as a whole will have them all. The broader the range of cultural and experiential backgrounds, the more diverse the ways in which people will analyze the software, and the more problems they will find. Diversity is a critical asset in testing, not something to be avoided. Intellectual diversity has another great benefit: it is a major enabler of innovation.

In November last year I presented a keynote at EuroSTAR, “Everything is connected: Exploring diversity, innovation, leadership”, in which I investigated the relationship between those three central concepts of the conference theme. A lot has been written about the link between diversity and innovation, but the contradictions are striking: some studies show that diversity is good for a team – that it leads to better performance, creativity and innovation – while equally compelling ones reach the opposite conclusion – that it leads to chaos and friction in the workplace. Nancy Adler sheds some light on this paradox in her book “International Dimensions of Organizational Behavior”: she found that the creativity of a team does not depend on the presence or absence of diversity, but rather on how well that diversity is managed. When managed well, diversity becomes an asset for the team; when ignored, it causes process problems that diminish performance. But that is a leadership issue – a different story altogether.

As you mentioned, one of those specific subjects I have thought and presented about in the past is the link between art and testing (“Artful Testing”). As an art aficionado, I got the idea after reading two books that proudly carry the term “art” in their title: Glenford Myers’ book “The Art of Software Testing” (in which I was surprised to find that the word “art” does not appear even once throughout the whole book, except on the title page) and “Artful Making” by Rob Austin and Lee Devin (which addressed software development and its resemblance to art). I could not help but wonder: what about “artful testing” – can the fine arts in any way support or complement our testing efforts? As a matter of fact, they can. Testers can benefit from studying art and learning to look at it, since this largely resembles what we do when we are testing: thoughtfully looking at software. The tools used by art critics can also be a valuable addition to the tester toolbox: demystification, deconstruction, binary opposites, the notions of connotation/denotation – they can all be applied to testing, enabling testers to become software critics. Testers can learn from artists too, more specifically from the way they look at the world. Different art movements look at the same things in very different ways. What if testers emulated those different ways of looking at the world and transposed them to software?

My main heuristic throughout this all is to follow my energy – I go wherever my curiosity and interests lead me. And this means that I am currently balancing two main themes in my head for the coming year: skepticism and visual thinking.

a1qa: Skepticism and visual thinking – what testing angles can be found there?

Zeger Van Hese: I study subjects in order to draw testing lessons from them, and in the case of skepticism, it has been a fascinating journey so far: philosophy, religion, science and pseudo-science, critical thinking… oh, the places I’ve seen! Sometimes I feel like a kid in a toy store. I’ve always felt that testers should strive to be professional skeptics (as James Bach puts it, skepticism is not about rejecting belief, but about rejecting certainty), keeping a wary eye and a skeptical mind. It’s natural that our clients want to become more confident in the system under test, but we should make clear to them that at no point can we promise absolute certainty. Since we cannot really prove that the software *works*, perhaps we should focus on the ways in which it fails or might fail. We should be the ones still doubting when everyone else isn’t seeing any problems: “What am I missing here? What am I not seeing?”. I’d much rather work in the questioning and information business than in the confirmation business.

The visual thinking angle on testing is still very much a work in progress. It sparked my interest after I became aware of being a visual/spatial learner. I’ve always been a doodler. When my attention wanders, I start drawing: mostly doodles, random scribbles, sometimes little sketches – you should see my notes after yet another two-hour meeting. Sure, it is a great way for my brain to stay engaged and avoid dozing off. But I think this kind of behaviour reveals something more fundamental: drawing is my preferred way of thinking; it is “thinking on paper”. I need to visualize things in order to understand or think deeply about them – and that’s what doodling is: deep thinking in disguise.

Now I realize that I can put doodling or drawing to good professional use. There are a couple of ways in which testing can benefit from visualization. Firstly, it is a simple and powerful tool for innovating and solving sticky problems. When drawing, you also tap into all four learning modalities at the same time (visual, auditory, kinesthetic, and tactile). I’d say that anything that helps me learn and helps me think better is a good addition to my tester toolbox.

Visualization also plays an important role in testing with regards to modelling, which lies at the heart of testing. We use mental models all the time: requirements documents, flow charts, state transition diagrams: these are all tangible models of some aspect of the software. During testing, we also construct mental models of the software. We use them to guide our testing. Mental models are by definition volatile and intangible, so materializing them by capturing and visualizing in drawings can make them a good basis for discussion, investigation and understanding.
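One way to make such a mental model tangible, as the answer above suggests, is to capture a state transition diagram as data and use it to reason about which paths through the software are worth testing. The states, events and transitions below are invented purely for illustration:

```python
# A tiny sketch of a tangible model: a state transition diagram captured as
# data, then used to check event sequences. All states and events are
# hypothetical examples, not drawn from any real system.

transitions = {
    ("logged_out", "login"): "logged_in",
    ("logged_in", "open_cart"): "cart",
    ("cart", "checkout"): "paid",
    ("logged_in", "logout"): "logged_out",
}

def run(events, state="logged_out"):
    """Walk the model; return the final state, or None if a step is invalid."""
    for event in events:
        state = transitions.get((state, event))
        if state is None:
            return None  # the model forbids this path – worth discussing
    return state

print(run(["login", "open_cart", "checkout"]))  # a path the model allows: paid
print(run(["login", "checkout"]))               # a path the model forbids: None
```

Drawing the same diagram on a whiteboard and walking through it as code are two sides of the same activity: once the model is explicit, gaps and forbidden paths become visible and discussable.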

Over the last year, I have also been trying my hand at sketchnoting as a way of capturing talks, ideas and presentations. Yet another way of visualizing spoken or written content.

Read the second part of the interview here

Software is increasingly becoming an integral part of modern telecom equipment, which has put a greater emphasis on companies finding qualified employees to keep up with this growing demand.

RCR Wireless News: Can you provide an overview of a1qa’s current workforce?

Svetlana Pravdina (SP): Since its inception back in 2002, a1qa has experienced rapid growth year after year. For example, in 2011 we grew by almost 182%. Currently, a1qa has around 450 full-time QA engineers. They are spread across our global locations in Austin, Texas (headquarters); in several representative offices across the U.S. (Connecticut, Massachusetts, etc.); and in Europe, the Middle East and Africa.

RCR Wireless News (RCRWN): Can you provide some details into what a typical day is like for an a1qa employee?

SP: We realize that a flexible schedule increases labor effectiveness and helps us respond more effectively to our customers’ needs. Thus, for a long time, we’ve been following an approach where our engineers are free to work whenever they feel most productive, as long as they consider our clients’ timetables. They have no strict schedules unless communication with clients requires them to be in the workplace during certain hours. All working processes are organized in a way that allows every person to do their best job. Everyone from interns to top management has their own schedule.

Nevertheless, the daily routine is pretty much the same: getting assignments from project managers; performing manual or automated testing; communicating within the team; writing bug reports; and reading professional websites to stay up to date on the latest happenings within the industry. Manual testers and “automators” have almost identical working days. The only difference is that “automators” are usually not assigned to several projects at a time.

RCRWN: How important are employee-training programs for a1qa?

SP: a1qa considers employee-training programs extremely important. The way we see it, an employee cannot achieve optimal professional growth without the proper support from the company. Therefore, we’ve decided to support this view in quite a fundamental way by establishing seven “Centers of Competence” within our company, covering proficiencies and competencies in mobile testing, integration, security, usability, test automation, etc. Each Center of Competence manages its own timetable of events, providing seminars, workshops, roundtables and master classes. Every a1qa employee can join any event to extend their testing skill set and obtain deeper knowledge. Launching those centers was definitely a challenge for us, and it is also a challenge to keep them functioning at a proper level. However, the positive feedback we receive from our employees makes it worth the effort.

RCRWN: What issues have been the most difficult for a1qa to deal with in terms of attracting/keeping employees?

SP: During the last decade, we all witnessed a growing demand for qualified QA engineers and testers. Still, software testing and QA are not part of an academic education (a B.Sc. in SQA sounds good, right?). The point is that at a certain moment we found ourselves in a closed loop of hunting for testers that basically got us nowhere. That’s when the idea of establishing our proprietary QA Academy was born. The main goal of the academy was to ensure a constant inflow of QA specialists to a1qa – and, of course, to contribute to the SQA industry in general. Two years ago, our QA Academy opened its doors to its first students, offering a variety of courses in domain-specific testing, software test methodologies, agile testing, security testing, mobile testing, tools and instruments, communication and teamwork, SQA in distributed environments, metrics, personal efficiency and more. Now we select the top graduates of the QA Academy, offering them a three-month internship with a1qa. This helps us recruit new employees who already have at least basic QA and testing knowledge. Besides the QA Academy, we also offer generous benefits, such as gyms and pools, family support, medical insurance, business trips and a chance to build a good career within the company. Still, the company is not protected from losing good specialists. If someone intends to leave, potential losses are estimated, and if the specialist is really valuable to the company, we do our best to retain them.

RCRWN: What workforce positions has a1qa found the most difficult to fill?

SP: Our QA Academy was established to ensure company growth and to fill vacancies. While general QA engineers are taught there, automated testing always needs “special” engineers, able both to test and to code. This is the rarest type of tester you can expect to find easily on the market. In the past, the company experienced a shortage of those specialists. As the company grew, it started developing automators in each department itself. For that reason, the company established internal courses on automated testing, which now provide us with the automators we need.

RCRWN: What impact has the rise in telecommunication operators looking toward software solutions had at a1qa?

SP: We have witnessed this trend for many years, and indeed the rise has been significant. Like many other businesses, we have been keeping an eye on the telco market. Anticipating the industry’s growth and everything it implies, we started building up our own telecom testing team, signing our first telco-testing contract with European telecom operator EMT back in 2005. Our telecom team has grown to be the biggest department within a1qa; it accounts for about a quarter of all our engineers. After offering testing services to the telecom industry for nine years, our portfolio in this domain has expanded. Now, in addition to pure testing, we offer consulting, assistance in building effective QA departments for telcos, professional training and more.

RCRWN: It looks like a1qa has a rigorous hiring policy. Can you explain the reasoning behind the process and its importance for the company?

SP: We care a lot about the image we’ve been building for the last 12 years. One of the main values of a1qa is to put quality of services above all else. So yes, we focus a lot of attention on the kind of specialists we hire. Regardless of where the potential employee came from – from another company or from our QA Academy – they still have to go through our six-stage selection process in order to become an a1qa employee. Even if the company lacks some specialists, we never fill vacancies with random people. Quality of service always begins with the quality of people delivering that service.

The interview was published on RCR Wireless News.

Rob Lambert is the author of “Remaining Relevant”, a book about remaining relevant and employable in today’s testing world, published on LeanPub. Rob’s mission is to inspire testers to achieve great things in their careers and to take control of their own learning and self-development. Rob is a veteran engineering manager building a forward-thinking, creative and awesome team at NewVoiceMedia.

He is a serial blogger about all things testing. He’s on Twitter at @rob_lambert.

Rob is an advocate for many important social causes, is obsessed with technology in society and has written a number of books about testing, customer excellence and community building. Rob has also written a range of other test-related content, such as The Blazingly Simple Guide To Web Testing, The Problems With Testing, The Diary of a Test Manager and many others. You can find them on his blog. He is married with three kids and lives in historic Winchester, UK.

1. Do you believe coding is an essential skill for testers?

I don’t believe coding is an essential skill for testers, but it could be a valuable skill. Being able to code opens up new possibilities for testers – such as automating repetitive tasks, creating test data or using code to interrogate a database. Learning to code can also be a great personal challenge.

In a collaborative environment, where testers are working closely with programmers, it can also be helpful if the tester understands code, or can at least read it. The same is true if the programmer understands how to do good testing.

The job market is also becoming populated with testers who can code. Whether we like it or not, many hiring managers are looking for testers who can do other activities as well as testing, such as coding. This makes these candidates more appealing.

That’s not to say we should all run out and learn to code, but it’s important to understand how your peers and the market are moving.

Coding is like any other skill: it can be useful, or not. You could also learn how to become a manager, how to write good technical documents, or about systems thinking or DevOps. These are all valuable too.

If coding will help you become a better tester, or help your company achieve its goals, or it’s interesting to you – then go for it. If it won’t help you or your company and it’s not interesting then learn something else.

The important point is that you need to keep learning; it is the learners who stand out, no matter what topic they choose to learn.

2. Why do you think the product under test can break a person as a tester?

I don’t choose to work on products I don’t enjoy and don’t have some affinity to. If I don’t believe in the product I’m not likely to bring the best of myself to the job.

When I meet testers who “hate” testing, and I meet a lot of them, I ask them what their favourite software product or service is. I then ask them to imagine what it would be like to help build and test that product or service.

It’s at this point that most people realise that it’s not the act of software testing that they “hate”. It may be the product or service they are testing and often the company they are working for.

If you find a service you believe in and a company you want to see succeed then you’ll ride out the highs and the lows that come with any job.

I love nothing more than helping people find the best aspects of their current role – or helping them leave that role to find something altogether more fulfilling.

Usually this more fulfilling role is a testing role working on a service or product they enjoy using and have an affinity towards.

The product or service you are helping to build really can influence the way you feel about software testing.

3. Lately, you have been blogging a lot about recruitment. So here goes a question: how do you hire a good tester when you get a ton of CVs?

I’ve been blogging about recruitment as it’s a passion I have and it’s a great challenge to take on. Growing a team is an epic challenge and you learn a lot about yourself. It also provides deep insights into what really makes someone a good fit for your team.

In the early days of recruiting I used adverts on job boards. This resulted in a massive influx of CVs, most of which weren’t relevant to the advertised position. It was painful and ineffective. It also showed how little effort most people put into their job applications. It’s this experience that inspired me to write Remaining Relevant and Employable, my book about getting hired.

I no longer advertise testing roles on job boards and instead use my network and a trusted recruiter to find the right people.

The job markets and recruitment processes are changing rapidly. Social media and communication tools have made it much easier for people to connect.

Remote working has meant that hiring managers can now find the best candidate, rather than the best candidate that lives within commuting distance of the office.

Recruiters and hiring managers are now approaching outstanding candidates directly. This is the approach I prefer.

But how do you become a standout candidate? In a nutshell: keep building your network, learn skills that are valuable to the marketplace, communicate your skills and keep job-hunting.

Gone are the days of reviewing 100+ CVs for an open position. Now I approach standout individuals and hope to attract them to work with us. Another positive side effect of this is that standout people want to come and work with standout people. And this means recruiting becomes a whole lot more effective.

4. You help companies to make the transition from waterfall to agile. What is the most difficult part of this process?

The transition from Waterfall to Agile is fraught with many challenges. Some of these challenges will be unique to that company, but there are three common challenges that most companies face.

The first challenge is understanding, and then communicating, the reasons why the company is moving to agile.

Agile is not an end goal in itself; it’s a mechanism for solving other problems such as rapid delivery to production, spiking new products or services etc.

Those who adopt agile “because everyone else is” will likely fail. Understanding the real purpose for bringing in agile is the key to a successful roll out. Remembering this purpose throughout the transition is also important, as there will likely be setbacks and hardship.

The second challenge is shortening the feedback loops between coding something and that code being used in production. This is hard and often involves organisational restructuring (such as DevOps), new technology and a new way of thinking about software development.

The third challenge is providing the right coaching, training and communication to the business. There will be resistance and there will likely be political uprisings. Change of an epic nature is hard and it pushes people to resist. Coaching, training and strong leadership are essential to making it happen. Good management, a strong sense of purpose and clear communication can help to mitigate this, but expect the ride to be bumpy.

If you have a genuine business reason to adopt agile, a vision, some goals, the right people and strong leadership then it’s entirely possible to make the move to agile relatively smoothly.

5. Do you expect any software trends to intensify in 2015?

I think the trend in 2015 will be the continued need for many companies to shorten their release cycles to remain competitive. Market conditions are forcing many companies to improve the speed at which they deliver value to their customers through new features, bug fixes and new platform support.

Software as a Service (SaaS) business models are appearing in almost every industry and with this model comes increasing pressure to retain customers through great service delivery. In SaaS models unhappy customers often find it easier to leave and move to the competition.

This change will probably lead companies down the road to agile development approaches. This will include shorter release cycles and a more collaborative approach to building software.

We’ll likely hear a lot more about continuous delivery, agile and DevOps and how testing fits within these models. This, I believe, will be a good thing for the test industry, but it will pose many new challenges we’ve yet to uncover.

Rob, thank you for sharing your viewpoint and ideas. We hope to talk to you again and cover a few more interesting issues.

James is particularly interested in links between testing, auditing, governance and compliance. He spent 14 years working for a large UK insurance company, then nine years with a big IT services supplier working with large clients in the UK and Finland. He has been self-employed for the last eight years.

1. James, you’ve tried so many IT fields. Can you explain why you switched to auditing?

I worked for a big insurance company. They had just re-organized their Audit department. One of the guys who worked there knew me and thought I’d be well suited to audit. I was a developer who had moved on into a mixture of programming and systems analysis. However, I had studied accountancy at university and spent a couple of years working in accountancy and insurance investment, so I had a wider business perspective than most devs. I think that was a major reason for me being approached.

I turned down the opportunity because I was enjoying my job and I wanted to finish the project I was responsible for. The Audit department kept in touch with me and I gradually realised that it would be a much more interesting role than I’d thought. A couple of years later another opportunity came up at a time when I was doing less interesting work so I jumped at the chance. It was a great decision. I learned a huge amount about how IT fitted into the business.

2. As a person with an audit background, do you think standards improve software testing or block it?

They don’t improve testing. I don’t think there’s any evidence to support that assertion. The most that ISO 29119 defenders have come up with is the claim that quality assurance consultants can now do good testing using the standard. That’s arguable, but even if it is true it is a very weak defence for making something a standard. It’s basically saying that ISO 29119 isn’t necessarily harmful.

I wouldn’t have said that ISO 29119 blocks testing. It’s a distraction from testing because it focuses attention on the documentation rather than the real testing. An auditor should expect three things: a clear idea of how testing will be performed, evidence that explains what testing was done, and an explanation of the significance of the results.

ISO 29119, and the previous testing standard IEEE 829, emphasize heavy advance documentation and deal pitifully with the final reports. Auditors should expect an over-arching test strategy saying “this is our approach to testing in this organization”. They should also expect an explanation of how that strategy will be interpreted for the project in question.

Detailed test case specifications shouldn’t impress auditors any more than detailed project plans would convince anyone that the project was successful. ISO 29119 says that “test cases shall be recorded in the test case specification” and “the test case specification shall be approved by the stakeholders”.

That means that if testers are to be compliant with the standard they have to document their planned testing in detail, then get the documents approved by many people who can’t be expected to understand all that detail. Trying to comply with the standard will create a mountain of unnecessary paper. As I said, it’s a distraction from the real work.

3. You started the campaign “Stop 29119”. Tell us a few words about the standard.

I don’t claim that I started the campaign. The people who deserve most credit for that are probably Karen Johnson and Iain McCowatt, who responded so energetically to my talk at CAST 2014 in New York.

ISO 29119 is an ambitious attempt, in ISO’s words, “to define an internationally-agreed set of standards for software testing that can be used by any organization when performing any form of software testing.”

The full standard will consist of five documents: glossary, processes, documentation, techniques and, finally, keyword-driven testing. So far the first three documents have been issued, i.e. the glossary, processes and documentation. The fourth document, on test techniques, is due to be issued any time now. The fifth, on keyword-driven testing, should come out in 2015.

The campaign has called on ISO to withdraw the standard. However, I would happily settle for damaging its credibility as a standard for “any organization when performing any form of software testing”. That aim is more than just being ambitious. It stretches credulity.

4. Testing standards are beneficial for testing (hope you agree): they introduce new practices and can school the untutored. Still, what is wrong with the 29119 standard?

The content of ISO 29119 is very old-fashioned. It is based on a world view from the 1970s and 1980s that confused rigour and professionalism with massive documentation. It really is the last place to go to look for new ideas. Newcomers to testing should be encouraged to look elsewhere for ideas about how to perform good testing.

Testing standards can be beneficial in a particular organization. They may even be beneficial in industries that have specific needs, such as medical devices and drugs, and financial services. However, they have to be very carefully written and they must maintain a clear distinction between true standards and overly prescriptive guidance. ISO 29119 fails to make the distinction. It is far too detailed and prescriptive.

The three documents that have been issued so far add up to 89,000 words over 270 pages. That’s as long as many novels. In fact it’s as long as George Orwell’s “Animal Farm” plus Erich Maria Remarque’s “All Quiet on the Western Front” combined. It’s almost exactly the same length as Orwell’s “1984” and Jane Austen’s “Persuasion”.

That is ridiculously long for a standard. The Institute of Internal Auditors’ “International Standards for the Professional Practice of Internal Auditing” runs to only 26 pages and 8,000 words. The IIA’s standards are high level statements of principle, covering all types of auditing. More detailed guidance about how to perform audits in particular fields is published separately. That guidance doesn’t amount to a series of “you shall do x, y & z”. It offers auditors advice on potential problems, and gives useful tips to guide the inexperienced. The difference between standards and guidance is crucial, and ISO blurs that distinction.

The defenders of ISO 29119 argue that tailored compliance is possible; testers don’t have to follow the full standard. There are two problems with that. The first is that tailored compliance requires agreement from all of the stakeholders for all of the tasks that won’t be performed and documents that won’t be produced. There are hundreds of mandatory tasks and documents, so even tailored compliance imposes a huge bureaucratic overhead. The second problem is that tailored compliance will look irresponsible. The marketing of the standard appeals to fear. Stuart Reid has put it explicitly.

“Imagine something goes noticeably wrong. How easy will you find it to explain that your testing doesn’t comply with international testing standards? So, can you afford not to use them?”

Anyone who is motivated by that to introduce ISO 29119 is likely to believe that full compliance must be safer and more responsible than tailored compliance. The old IEEE 829 test documentation standard also permitted tailored compliance. That wasn’t the way it worked out in practice. Organizations which followed the standard didn’t tailor their compliance and produced far too much wasteful documentation. ISO should have thought more carefully about how they would promote the standard and what the effects of their appeal to fear might be.

5. And in the end, what are the results of your campaign?

It’s hard to say what the results are. No-one seriously expected that ISO would roll over and withdraw the standard. I did think that ISO would make a serious attempt to defend it, and to engage with the arguments of the Stop 29119 campaigners. That hasn’t happened. The result has been that when people search for information about ISO 29119 they can’t fail to find articles by Stop 29119 campaigners. They will find nothing to refute them. I think that damages ISO’s credibility. ISO is now caught in a bind. It can ignore the opposition, and therefore concede the field to its opponents. Or it can try to engage in debate and reveal the lack of credible foundations of the standard.

I think the campaign has been successful in demonstrating that the standard lacks credibility in a very important part of the testing profession and therefore lacks the consensus that a standard should enjoy. I hope that if the campaign keeps going then it will prevent many organizations from forcing the standard onto their testers and thus forcing them to do less effective and efficient testing. Sometimes it feels like Stop 29119 is very negative, but if we can persuade people not to adopt the standard then I think that makes a positive contribution towards more testers doing better testing.

James, thank you for sharing your viewpoint and ideas. We hope to talk to you again and cover a few more interesting issues.

Eric Jacobson has been testing software for 14 years. He entered the IT industry teaching end-user software courses but a development team at Lucent Technologies convinced him to become a tester. Later, Eric became lead tester of Turner Broadcasting’s traffic system, responsible for generating billions of dollars annually via ad placement. After managing other testers at Turner, he accepted a position as Principal Test Architect at Atlanta-based Cardlytics.

Eric is a highly rated conference speaker and has been posting his thoughts to improve testing on www.testthisblog.com, nearly every week since 2007. He also enjoys playing clawhammer banjo, woodworking, caving, and spending time with his son, daughter and wife.

a1qa: In one of your posts you mentioned that resolving tests by “PASS” or “FAIL” prevents you from actual testing. Then what is actual testing?

Eric Jacobson: The “actual testing” I was referring to is the open-ended investigation part of software testing. I was contrasting it with “checking”. It took me years to notice, but the way I documented tests was actually impeding my testing. I had been stuck in a mindset that test cases must resolve to “PASS” or “FAIL”… for both test planning and exploratory testing sessions.

This is a big deal! I suspect most testers are stuck in the same trap. It’s like a test documentation language blindness. Even if a test began as an investigation, my brain still framed it to resolve as “PASS” or “FAIL”. For example, a test that began as “I wonder if I can submit an order with no line items” would get documented as “Submit button should not be active unless line items are populated.” The former might generate better testing, while the latter might limit creativity. The big mindshift here is NOT to box yourself in by forcing every test instruction to resolve to PASS or FAIL. As a tester, it’s a very liberating idea.

What if instead, you just list the trigger to perform an open-ended investigation? If you have a list of these investigation triggers, you can resolve them as “DONE”. The assumption is they were completed and any bugs or issues have been otherwise shared.
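
Eric's distinction — a scripted check that must resolve to PASS/FAIL versus an investigation trigger that resolves to DONE — can be sketched in code. This is a hypothetical model for illustration only, not a real tool; the class and field names are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Check:
    """A scripted check: it is forced to resolve to PASS or FAIL."""
    description: str
    passed: bool = False

    @property
    def status(self):
        return "PASS" if self.passed else "FAIL"

@dataclass
class Charter:
    """An investigation trigger: it resolves to DONE once explored.
    Bugs and issues found along the way are shared separately."""
    trigger: str
    notes: list = field(default_factory=list)
    done: bool = False

    def complete(self, *notes):
        self.notes.extend(notes)
        self.done = True

    @property
    def status(self):
        return "DONE" if self.done else "OPEN"

check = Check("Submit button should not be active unless line items are populated.", passed=True)
charter = Charter("I wonder if I can submit an order with no line items.")
charter.complete("Button disabled as expected; keyboard Enter still submits - raised with the team.")

print(check.status)    # resolves to PASS or FAIL
print(charter.status)  # resolves to DONE, with findings shared elsewhere
```

The charter carries its findings as free-form notes rather than a verdict, which is exactly the open-endedness the PASS/FAIL framing squeezes out.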

a1qa: You questioned the testers’ “power to declare something as a bug”. Isn’t it a tester’s job?

Eric Jacobson: Ha ha! Well kind of. But I think our role as testers can be improved upon with a slight variation.

How about, a tester’s job is to raise the possibility that something may be a problem? Raising the possibility forces a magical little thing called a “conversation” to take place. The conversation might be, “oh that’s nasty, log a bug!” or “that sounds like a problem John just noticed, talk to him” or “actually, it’s by design, the requirements are stale”. A conversation might reduce rejected bugs, duplicate bugs, or bugs prematurely fixed by eager programmers. The conversation provides us testers with feedback about what is or isn’t important. From there we can adjust.

The conversation might be inconvenient, especially for lazy testers or introverted product managers. Beyond that, it’s hard to find disadvantages. This didn’t occur to me until I read one of Michael Bolton’s clever thought reversals. He asked whether testers should have the power to delete bug reports. Most would answer no, that’s something the team or the stakeholders should decide. I agree. I would rather have a bug report repository that accurately reflects all known threats to the product. If we don’t want testers independently making decisions to remove bug reports, maybe we should use the same level of scrutiny for putting things into the bug repository.

a1qa: Is there any ideal testing approach or model that you would recommend following to optimize a tester’s work?

Eric Jacobson: No. …did I pass? Was that a trick question? My context-driven-test approach mentors just sighed with relief.

a1qa: “Anti-Bottleneck Tester”, who is this person?

Eric Jacobson: I used that term in a favorite blog post. I probably need a better term. An anti-bottleneck tester is a tester who makes decisions and suggestions that help development teams deliver software quicker, without letting its quality suffer. This is the tester I strive to be. It’s a far cry from the tester I was 13 years ago. I was a quality cop. “You can’t ship until I stamp it Certified for Production”. I was the bottleneck…and proud of it.

We messed up big time and gave ourselves a bad reputation. Some of us are still doing it. I just heard a story about a tester who said it would take three days to test a change to a GUI control’s default value. Nobody is impressed by that answer. My response would have been three minutes.

These days, programmers are moving so fast, they don’t have time to wait on testers who don’t bother learning development technologies or refining test practices. I love thinking of ways to keep up. I did a talk at a couple of US test conferences titled “You May Not Want To Test It”. The basic premise was, instead of testing everything because of a factory-like process, consider only testing where you can add value. I listed 10 patterns of things testers might want to “rubber stamp” instead of spending time testing. For example, production bugs that can’t get any worse, subjective UI changes, race conditions too technical for some testers to set up. These are all things best tested by non-testers, and a tester who sees that and delegates the work is able to spend more time testing things where their skills can be more effective.

Another anti-bottleneck tester practice is to suggest compromises that enable on-time shipping such as, let’s give the users the option of shipping with the bug. I also think testers should spend less time logging trivial bugs and more time hunting for non-trivial bugs. The urge to log trivial bugs is probably left over from the ancient infamous bug count metric.

a1qa: If reporting trivial bugs is a waste of time, does it mean QA engineers should skip them?

Eric Jacobson: It might. First of all, we are talking about “trivial” bugs here. So by definition, the threat to the product is trivial. What does trivial mean? If the product-under-test has hundreds of bugs, some are probably trivial and may never get fixed. If the product-under-test has 10 bugs, there may not be any trivial bugs.

This will sound crazy but I’ll say it anyway. I think testers are more likely to hurt their reputations by logging trivial bugs than by missing non-trivial bugs. Logging trivial bugs reflects poorly on your testing skills, especially if you miss non-trivial bugs.

At my previous job I ran bug triage meetings. I hate to say it but here is what typically happens. Three or four times in each meeting, we read a bug report that made the team laugh at how trivial it was. Someone always said, “Ha ha, who logged that?”. There was nothing worse than looking at the history and seeing a tester’s name…another tester obsessed with perfecting cosmetic stuff on the GUI while the support line rings all day because the product keeps timing out.

a1qa: What does quality mean to you personally?

Eric Jacobson: As a user, what comes to mind is the integrity of the development team. And by development team I mean testers, programmers, and product owners. Do they have a reputation of being transparent about problems, eager to fix problems, interested in listening and responding to users? If the answer is yes, I’m likely to forgive production bugs and continue using the product.

As a tester, this means it might be more important to react quickly to problems in the field than to hold out for “perfect software”. My father, a small business owner, taught me the customer is always right. When the needs of the customer, like time to market and new functionality, outweigh my tester’s concerns about certain bug fixes, I try to weigh everything. However, first impressions are important. They say an audience decides whether they like a presenter during the first five seconds. If this is true of software, we had better get the core stuff right. As the context-driven-testing principle says,

 “Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.”

Eric, thank you for sharing your viewpoint and ideas. We hope to talk to you again and discuss a few more topics.

Alan Richardson has more than twenty years of professional IT experience, working as a programmer and at every level of the testing hierarchy from tester through head of testing. Author of the books “Selenium Simplified” and “Java For Testers”. Alan also has created online training courses to help people learn Technical Web Testing and Selenium WebDriver with Java. He works as an independent consultant, helping companies improve their use of automation, agile, and exploratory technical testing. Alan posts his writing and training videos on SeleniumSimplified.com, EvilTester.com, JavaForTesters.com, and CompendiumDev.co.uk.

– Do you agree that exploratory testing is more a mindset than a methodology, as James Bach and Cem Kaner say?

– I do agree that it is more a mindset than a methodology. I can’t point at any mandated ‘rules of exploratory testing’ document and say “only if ye do such that is written here shall ye be of the Exploratory”.

Exploratory Testing requires each tester to learn how they currently test, identify how they can improve how they test, and how they can apply their unique skills and experience in their exploratory testing. It is a uniquely personal approach and as such ‘mindset’ offers a far better word to encourage ownership of the testing process than methodology.

Methodology suggests that ‘some other’ authority knows best. Exploratory Testing, with its emphasis on learning from the actions and testing that you personally take, can’t fit into that schema.

Those testers I know that excel at Exploratory testing, do so, because they have made it their own. They have looked around at the information that other people have shared about exploratory testing. They have tried it. They have analysed what worked for them and what didn’t. They have adjusted. They have incorporated techniques, approaches, analogies, and skills into their testing that they recognised as valuable to them. And they continue to experiment with, and explore, new approaches and nuances.

Exploratory testing is a learning by doing approach. Therefore we should expect everyone doing it to learn something different and approach it subtly (or sometimes radically) differently.

When I talk to testers I very often talk about taking responsibility for their testing. And by that I mean owning their approach, so that they can justify it using their words, and from their experience, and from the experiments that they have conducted during their testing. I think that is how you build experience, attitude, learning, belief systems; and all of that, I can see falling under the banner of ‘mindset’.

– Selenium WebDriver is a tool that immensely accelerates testers’ work. Still, what are the main difficulties that users face while applying the tool?

– I think that Selenium WebDriver offers testers the best open source browser automation library available today. And that when a robust set of domain-specific automation libraries has been built by the team, it can allow team members to rapidly automate the functional flows through their application. And that allows teams to build sets of methods which they can use to check the application for unexpected deviations in behaviour. They can also rapidly build up ad hoc code which exercises the application until the application achieves a specific state that we can manually test from. The team can also more easily add data into the system, and generally automate flows through a web application.
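
The domain-specific libraries Alan describes are often built in the page-object style: tests call intent-level methods rather than raw element locators. Below is a minimal, hypothetical sketch, with a stub driver standing in for a real Selenium WebDriver so the example is self-contained; the page, URL and locators are all invented for illustration:

```python
class StubDriver:
    """Stands in for a real Selenium WebDriver so the sketch runs anywhere."""
    def __init__(self):
        self.visited = []
        self.fields = {}

    def get(self, url):
        self.visited.append(url)

    def type(self, locator, text):
        self.fields[locator] = text

class LoginPage:
    """A domain-specific wrapper: callers express intent ('log in as...'),
    while the locators and page mechanics stay hidden in one place."""
    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get("https://example.com/login")
        return self

    def login_as(self, username, password):
        self.driver.type("#username", username)
        self.driver.type("#password", password)
        return self

driver = StubDriver()
LoginPage(driver).open().login_as("tester", "s3cret")
print(driver.visited)
```

Because only `LoginPage` knows the locators, a redesigned login form means changing one class rather than every test that logs in.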

A common difficulty people have when they start using Selenium WebDriver is viewing it as a tool. I view Selenium WebDriver as a library, and like any other library that we use when programming, we have to know how to program to be able to use the functions in the library. So programming knowledge is required. And that can prove a difficulty for people.

The Selenium eco-system does provide tools, like the Selenium IDE or Selenium Builder, which allow recording of actions with the automatic generation of scripts. But unless you know programming, you don’t know how to effectively tailor those scripts and wean yourself off the recording tool.

Other difficulties people have, relate to the web technology itself, simply because there is a lot to learn if we really want to master automation.

Because we use Selenium WebDriver to automate the browser, we have to understand some of the technology associated with how browsers work; the DOM, CSS, JavaScript, etc. We have to understand how the specific pages we are testing work, so that we know how to synchronise effectively with the state of the page so that our automation works consistently.
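
The synchronisation Alan mentions usually comes down to polling: re-check the page state until a condition holds or a timeout expires. This is the pattern behind Selenium's explicit waits such as WebDriverWait. Here is a minimal, library-free sketch of that pattern; the simulated "element" and timings are invented for illustration:

```python
import time

def wait_until(condition, timeout=5.0, poll=0.1):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.
    This is the core idea behind explicit waits like WebDriverWait."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError(f"condition not met within {timeout}s")

# Simulate a page element that only 'appears' after a short delay,
# as dynamically populated HTML often does.
appeared_at = time.monotonic() + 0.3
element = wait_until(lambda: "search-box" if time.monotonic() >= appeared_at else None)
print(element)
```

Waiting on a condition like this, rather than sleeping for a fixed interval, is what keeps automation consistent across fast and slow page loads.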

So I find that, over time, to effectively work with Selenium WebDriver, we have to learn: programming, CSS selectors, HTML, DOM, dynamic HTML population, synchronisation strategies, browser developer tools, software design, continuous integration, environment maintenance, etc. (the list could go on).

I don’t think these learning topics are unique to Selenium WebDriver. I think they are important for any browser automation. But I think some automation technology and tools use a sales tactic of pretending that their users will not need this learning, but they will.

You can achieve good and effective results with minimum levels of knowledge in these areas, but to really make your automation work, you may have to learn things that other testers may not immediately need to learn.

I could continue to list many more things that people find difficult because I do receive a lot of emails from people learning Selenium WebDriver and the issues they are having.

Very often someone on the web has already faced, and identified a workaround to, their issue. So sometimes the difficulty people have is that their google-fu isn’t quite effective enough to find the existing answers out there on the web that will help them.

– You do a lot of webinars. Is it your passion?

– I need to do more webinars. I don’t do enough of them.

I like to share experience. Partly because I learn when I do that, because I have to codify it in a different way to communicate it to others. Partly because you can reach more people over time, with a webinar, than through a conference talk.

A conference talk will reach several hundred people, at a single event, and that is it over, done. I might release the slides and a blog post, but it doesn’t have the power of the presentation behind it.

With a webinar, and online training, I can reach thousands of people. And they can encounter it at the point when they are ready for it.

We also give the viewer more control: they can pause it, rewind it, or skip sections if I’m wittering on about something unimportant to them. Webinars and videos put more control in the hands of the viewer, than in the hands of the presenter, and that allows the viewer more control over their learning process.

Also the commenting systems online give people the opportunity to think about questions, ask them whenever they want to, and have more of a discussion than the conference format often allows.

Webinars also allow me to ‘demonstrate’ stuff more than I can in a conference talk, and I think it is very important for people to see things in action. Because it shows what is possible, and that can help change a belief or a mindset about what ‘testing’ means, or about what is acceptable in testing. And they can model the behaviour shown and achieve the same result.

And I guess I’m passionate about that.

– Will it be beneficial for the workflow if a team combines exploratory testing with agile techniques?

– Exploratory testing allows us to test at speed, often with minimal information. It encourages learning through investigation and conversation, and we can collaboratively make decisions about which risks to explore in detail and which to perform less work on. Every Agile process I’ve worked in has required the associated testing process to have those attributes.

I have worked on projects where people have expressed the opinion that the Acceptance Criteria on a story is good enough to assess the completed code against. So the team then build confirmatory Acceptance Automation – possibly using BDD, or ATDD, or by building Gherkin feature files or xUnit tests to cover the criteria. And when they all run green the team consider the Story done.

This is a confirmatory process.

We also need a questioning process. “Did you mean X or Y?”, “Did you only mean X?”, “What if Z happens?”, “What about…?” etc. etc.

Some of those questions may occur during the discussions as the story is coded, and some of the answers may even be added as Acceptance Criteria.

But I’ve seen too many stories where questions were not asked.

Even when the questions were asked, the answers weren't written as acceptance criteria, so the confirmatory automation did not cover them – and sometimes the functionality didn't cover the criteria effectively, but no one knew until the software was released.

An exploratory testing process can ask those and other questions.

But the main reason I think we have to combine exploratory testing with agile techniques is because all testing is exploratory. So if you want to actually have testing on your Agile project, you have to explore.

– Alan, thank you for sharing your ideas and experience. We hope to talk to you again.

Alan Page has been a software tester for nearly 20 years. He was the lead author on the book “How We Test Software at Microsoft”, contributed chapters for Beautiful Testing and Experiences of Test Automation: Case Studies of Software Test Automation, and recently published a collection of essays on test automation in The A Word. He also writes about a variety of software engineering subjects on his blog at http://angryweasel.com/blog.
Alan joined Microsoft as a member of the Windows 95 team, and since then has worked on a variety of Windows releases, Internet Explorer, Office Lync and Xbox One. Alan also served for two years as Microsoft’s Director of Test Excellence.

a1qa: Do you think Data-Driven Quality is a new wave of testing or an approach that QA engineers should learn to improve software?

Alan Page: These days I struggle to figure out where the lines are between testing and just making good software in sensible and efficient ways.

Data-driven quality (DDQ) is not necessarily a new idea. Software teams have used data for decades in an attempt to understand software quality. We used data from test results or code coverage to make an educated guess on whether the software was ready to ship.

The difference with DDQ today is that we want to use data generated by thousands (or millions) of users to understand what's working, what's not working, and how the product is being used. Many testers are concerned with being the voice of the customer, or a customer advocate – but I don't care what kind of rock-star-ninja-superhero tester you are, there is absolutely nobody better at acting like a customer than the actual customer. Every major web service – Facebook, Twitter, Amazon, Netflix, etc. – relies heavily on analysis of what customers are doing and how they are using the services. But DDQ isn't just for services. When doing web testing for Xbox, for example, we gathered a lot of data on how users were interacting with the Xbox Live service, but we also frequently used information gathered from the console to help us identify a variety of potential and real issues.

At Microsoft, we’ve used the Windows Error Reporting (WER) and the Customer Experience program for years. The Customer Experience program opt-in is that little checkbox that says, “Do you want to send usage information to Microsoft”, or, “Do you want to join the Customer Experience Improvement Program”. Those programs have provided a lot of insight into customer usage.

Today, we’re releasing updates to many web services at least once a day. While bigger changes may get extra scrutiny, many changes go from developer desktop to users within minutes. Deployment and monitoring systems give us the ability to automatically analyze for regressions in performance, functionality, or error rates and either immediately roll back the change, or deploy the change to more servers for additional coverage. We can also look at huge sets of data to gather insights on search patterns, match making in Xbox Live, or just about anything we can imagine in order to get a more accurate understanding of how customers are using our software.
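
A toy sketch of that kind of automated go/no-go decision (the function, threshold and numbers are my own illustration, not Microsoft's actual tooling): compare a newly deployed build's error rate against the baseline and choose between rolling back and widening the rollout.

```python
def canary_decision(baseline_errors, baseline_requests,
                    canary_errors, canary_requests,
                    max_relative_increase=0.10):
    """Decide whether a partial ('canary') deployment should be
    rolled back or expanded, based on its error rate relative
    to the currently deployed baseline."""
    baseline_rate = baseline_errors / baseline_requests
    canary_rate = canary_errors / canary_requests
    # Roll back if the new build's error rate is >10% worse.
    if canary_rate > baseline_rate * (1 + max_relative_increase):
        return "rollback"
    return "expand"

# Canary error rate 0.09% vs. baseline 0.05%: pull the change.
print(canary_decision(50, 100_000, 9, 10_000))  # rollback
```

A real system would monitor many more signals (latency, crash rates, business metrics) and expand in stages, but the shape of the decision is the same.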

DDQ is definitely an approach that's here to stay – and on some teams, analyzing (and generating) customer data is a sensible activity for testers to take on. I don't know if you want to call it a testing approach or an approach to software engineering in general, but using data to understand how customers are using the software – or where they are finding problems – is here to stay.

a1qa: Combined engineering – isn't it the end of the software testing field?

Alan Page: It’s funny (or sad?) how many testers freak out at the idea of a team without separate disciplines for testers and developers (aka “combined engineering”).

Combined engineering doesn't mean that everyone on the team does the exact same thing. In fact, for a discipline-free engineering team to work, you need team members with a breadth of skills. Combined engineering, done well, promotes efficiency and (in my experience) quality software by eliminating the traditional walls between test and development and by encouraging more collaboration and team-owned quality. In practice, every successful combined engineering team I've seen has had plenty of people doing testing activities. Sometimes these teams have people dedicated to the testing activity, but more often I see the test activities taken on by Generalizing Specialists (team members who have specialties, such as testing, but can contribute at some level in a number of areas).

I repeatedly see programmers take on work around measuring performance or writing stress and load tests that span the application (work often done by the test team). There's also plenty of room on a combined engineering team for testers, but they tend to do better, and contribute more, when they have a breadth of skills they can pull out of their bag of tricks as needed. When you only do one thing, you're often a bottleneck on a discipline-free engineering team.

For a personal example, most of my software work at Microsoft for the last 10-12 years has been in the role of a Generalizing Specialist. I'm definitely a test specialist, and people respect my ability to see the product as a whole and to provide team leadership as needed. But I also can (and do) dive deep when needed. I led a code quality initiative for the Xbox console, identified a nice-sized pile of code hygiene issues, and worked with feature teams to get those bugs fixed (and in many cases, fixed them myself). I also played the role of tool-smith from time to time and led exploratory testing efforts as well.

I know numerous people in software who have roles similar to mine. Some are developers, some are testers, and some, like me, are just software engineers. I still think there’s plenty of need for people who understand – and who are passionate about software testing. I just don’t think it’s required (or necessary) for them to work on a test team.

a1qa: It is an eternal question: should tester have engineering skills?

Alan Page: I often apply this bit of advice to this sort of question; “The answer to any sufficiently complex question is, it depends”. In this case, I think it depends mostly on what kind of tester you want to be. If you want to test high performance file systems or network protocols, I think it’s going to be difficult to do without programming. Higher up the stack, I think the same answer is appropriate for a multi-platform email client. I would also say that knowing about programming or platform architecture or how to use analysis or monitoring tools can never hurt someone’s ability to be a better tester.

I think it’s essential for a tester to have a passion for learning. The great testers I know just aren’t satisfied when their options or knowledge are limited. I started in testing without knowing anything about coding. I was good at finding bugs, and even better at finding bugs that were important enough to fix. I was also hugely curious, and wanted to learn more about software and IT. I learned networking and managed my company’s network and servers. Then I began to learn programming so I could write tools and debug difficult issues. Learning programming helped my career, but even more importantly, I think continuous learning, and the breadth of skills that gives me has helped my career the most.

If a tester doesn’t want to learn programming or just has a hard time with it, that’s fine – but then I think they should learn how their software is deployed; or learn how to aggregate customer feedback from user forums; or learn how to use more tools to help their testing. I think the career options can be limited for testers who have a limited breadth of skills, and I think a tester’s career options and growth opportunities relate directly to the number of holes they can fill on a team. A lot of teams need (and value) a great non-technical tester, but given a choice between a great tester, and a great tester who can write tools or bring up a new server when needed, I think most teams will take the second choice every time.

a1qa: How do you see the future of testing?

Alan Page: This question is so hard to answer in a world where I don’t think we can come close to agreeing on what testing is in the first place. I’m also not very good at predicting things (which is one reason I roll my eyes when I hear managers ask for detailed estimates…but that’s a different rant). At the risk of pulling my hair out – or pissing off the masses (or more likely, both), let me dump my thoughts and see where they go. But first, one more caveat. I can make a pretty safe bet that some people will read my thoughts on the future of testing and say, “that will never happen in a million years”, while others will read the same words and say, “that’s not the future, we do almost all of that today.” So, with that said, let me ramble a bit.

The not-so-controversial future is that programmers owning code quality will be ubiquitous. Programmers will write their own unit, functional, and acceptance tests, and they'll use tools (e.g. static analysis and code coverage) as needed to ensure that they catch most of the early mistakes themselves (vs. the inefficient code-to-bug-to-fix-to-regression ping-pong match many teams play today). As I alluded to above, this is already happening in numerous organizations.

The future gets scary, however, when organizations think that code quality is the same as product quality. We have all seen (some of us first-hand) wonderfully functioning software that doesn’t provide any customer value; or cool new software that isn’t compatible with other software it needs to work with. There are activities that still need to happen in order to take software on the long journey from working to valuable. Programmers owning code quality and some aspects of testing doesn’t mean that additional quality work isn’t necessary.

What happens to get there varies. For many teams, the next step is to deploy to customers – maybe all of them, or maybe a portion of them – and then gather data on what's working and what's not, capture any crash information or performance degradation, and automatically decide whether to deploy to more users or roll back to a good version if necessary.

Is what I just described testing? One could say that a lot of what a team looks for in production is the same thing many test teams look for in automated test runs today…but one could also say that it’s just analysis tools and deployment systems. In my opinion, it doesn’t matter. Roles are going to get fuzzier or go away, and I think that’s good – and nothing to be afraid of.

Deploying daily (or hourly) builds directly to production automatically is not always feasible – even for web services or sites (financial institutions, for example, are not likely to approve of this approach for many parts of their systems). I also doubt any user would tolerate an entire new operating system every day, or even a new file system driver. But I predict that every organization – from banks to major software companies – will find a way to get customer usage data from some set of customers frequently, and use that data to learn, to improve software faster, and to release updates and new features frequently.

A hidden question here, beyond what testing looks like in the future, is what the testers of today do in the future. I find it funny that the people who seem to get the most outraged about testing evolving are testers. While I see a future where testing roles may go away, I also see a future where test specialists (critical thinkers who see the product as a whole and help the team focus on the most critical issues) are essential members of most software teams.

Regardless of whether you agree with my future, I think the trends of more frequent releases and wide scale development of web services and apps dictate that the future of testing will change. Time will tell.

Alan, thank you for sharing your viewpoint and experience. We hope to talk to you again to cover a few more interesting issues.

With more than thirty years in the software industry, Lloyd Roden has worked as a Developer, Test Analyst and Test Manager for a variety of different organizations. From 1999 to 2011 Lloyd worked as a consultant/partner within Grove Consultants. In 2011 he set up Lloyd Roden Consultancy, an independent training and consultancy company specializing in software testing. 

Lloyd's passion is to enthuse, excite and inspire people in the area of software testing, and he has enjoyed opportunities to speak at various conferences throughout the world, including STAREast, STARWest, EuroSTAR, AsiaSTAR, Belgium Testing Days, Nordic Testing Days, HUSTEF, CzechTest and Better Software, as well as Special Interest Groups in software testing in several countries. Lloyd Roden was Programme Chair for both the tenth and eleventh EuroSTAR conferences and won the European Testing Excellence award in 2004.

a1qa: You worked as a developer. Why did you switch into testing?

Lloyd Roden: When I started my career over 30 years ago, there were no "testers", just developers, and I had studied computer science and development as part of my degree course. It was a natural step to join an organization as a developer/programmer. I spent 5 years as a developer, but I would have to admit I did not enjoy it. It was a great starting point for me, and I appreciate the fact that I specialized in development all those years ago and am now able to reflect back on those development days. However, I soon realized that my passion was to break the code rather than build it.

I left the organization to join a company in QA outsourcing, which had both developers and testers and I decided to move into testing as a Senior Test Analyst because that was my passion. That was 29 years ago and I have never regretted that move. I love every aspect of testing and it provides such a variety of opportunities: system testing, developing automation scripts, exploratory testing, specializing in non-functional testing – such as security, performance and usability. The list is endless 🙂

a1qa: Lloyd, you say that QA engineers can't sprint all the time. Why is it important to have slack in projects?

Lloyd Roden: I am noticing that companies are demanding more and more from their employees, who are working harder and longer than ever before. Statistics show that the working week has increased by up to 22% in some industries. With the real threat of outsourcing, insourcing and offshoring, staff are becoming anxious and so work all those extra hours. I have also seen a trend within the Agile community of teams going from "Sprint" to "Sprint" without any slack. The net result is that staff are burning out and taking more sick leave, despite the fact that they enjoy working on Agile projects. Companies that have included a slack period have experienced an increase in the productivity of their staff. This seems a bizarre reality – getting people to work less means that they are more productive. The problem lies with management, in that they will not take the step to try this out. If only they would try this concept with their teams – they might be pleasantly surprised.

On a similar issue, we should not get our staff to multi-task too much, as this also reduces productivity. People are not "fungible" – money is "fungible". If I have £100 and divide it across three banks – £50 in bank A, £20 in bank B and £30 in bank C – the net result is that I still have £100. However, this is not true of people: if we say a person should work 50% on project A, 20% on project B and 30% on project C, the net result is a loss of up to 30% of productivity through task switching. The problem is that we don't work 70% – we work 130% – working late to write that report or responding to our emails on the train home!
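
Lloyd's fungibility arithmetic, written out with his own numbers (a trivial sketch, just to make the contrast concrete):

```python
# Money is fungible: splitting 100 pounds across three banks loses nothing.
banks = {"A": 50, "B": 20, "C": 30}
assert sum(banks.values()) == 100

# People are not: a person split 50/20/30 across three projects
# loses up to 30% of capacity to task switching (Lloyd's figure).
allocations = [0.50, 0.20, 0.30]
switching_loss = 0.30
effective_capacity = sum(allocations) - switching_loss
print(f"effective output: {effective_capacity:.0%}")  # effective output: 70%
```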

a1qa: Let's compare Scripted and Exploratory testing. Can you name the pros and cons?

Lloyd Roden: I am a firm believer in Exploratory Testing. I can only find 3 reasons to perform detailed scripted testing:

  1. When you want someone else to run the tests
  2. When you want to automate the tests
  3. When you have a legal reason to provide a detailed audit of the tests

But even with some of these reasons, the argument is weak. For instance, when you want someone else to run the tests, why are we restricting their ability to test the product, limiting their skills and expertise? Writing detailed scripts is time-consuming, a maintenance nightmare and a boring task – both to write and to execute.

Exploratory testing provides greater coverage, increased creativity and better bug detection when performed well. It needs to be managed and this can be done by providing session based exploratory testing. Even greater effectiveness can be achieved with paired exploratory testing sessions.

I have an easy formula – the more documentation we produce for testing, the less test execution we can perform and therefore less bug detection will take place. If we insist on producing detailed scripts then don’t be surprised if our experienced testers leave the organization.

a1qa: A Jedi Tester, who is this person?

Lloyd Roden: I am an avid fan of Star Wars 🙂 And this phrase came to mind some years ago. A Jedi will learn the ways of the force. A Jedi tester will learn and master the ways of testing:

  • When and when not to use automation
  • When to use the appropriate test design techniques
  • Mastering the test design techniques and using them effectively during exploratory testing sessions
  • To work alongside other “Jedi” testers – to be trained in the skills of testing

When my father was alive – he was a carpenter by trade and a skilled craftsman – he mastered the tools of the trade, learned how to use them and respected them. He also trained me as his apprentice. The problem with testing is that we do not learn from older professions, which have apprenticeships. Instead we employ new testers and say, "so go ahead and test then". Adopting an apprenticeship programme that trains testers to be "Jedi" testers is a passion of mine.

a1qa: You've worked with a variety of industries. Where do you think it is most difficult to implement testing?

Lloyd Roden: This is a hard question to answer, because every industry has its challenges, whether we work on safety-critical systems or at companies producing games devices. I think our expectations are increasing when it comes to quality systems. We were happy to live with a few bugs years ago – it was an accepted norm. But today we want perfection, and that comes with a price tag. One of the other challenges we face today is the sheer increase in the complexity of the systems we are developing and testing. The problem is that complexity brings an increase in bugs, and often this complexity is not needed. Testers should challenge complexity at every opportunity – perhaps a simple solution would be much better for the company.

Lloyd, thank you for answering our questions and sharing your viewpoint. We hope to talk to you again.

If you missed the first part of the interview, find it here

a1qa: How can social sciences help testers, from your viewpoint?

Huib Schoots: Many people see IT as a technical craft or as an exact or physical science. Computers work with bits and bytes, and the products we create are perceived as technical products – to a certain degree that is true, of course. But there is much more to it: software is created by humans, for humans, as a solution to a problem our clients have. The problem the software solves can be anything: making work easier and faster, operating machines, or selling products via the Internet, to name a few. Creating software is often mistaken for factory work, like building cars on an assembly line. Software development is research and development: every project has its unique context, solves a unique problem, and the team consists of unique people, creating unique team dynamics every time.

People perceive that computers always do exactly the same thing. This gets reflected in the way they think about testing: a bunch of repeatable steps to see if the requirements are met. But is that really what testing is all about? I like to link testing to the social sciences as well. Testing is not only about technical computer stuff; it is also about human aspects and social interaction.

The seven basic principles of the Context-Driven School tell us that people, working together, are the most important part of any project's context; that good software testing is a challenging intellectual process; and that only through judgement and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products. In these principles there is a lot of non-technical stuff that has a major influence on my work as a tester. Anthropology teaches us how people live and interact, and something about culture; it also teaches us about qualitative research, which is very useful for testers to learn about. Didactics helps us learn and teach: acquiring new knowledge, behaviours, skills, values or preferences. Sociology teaches us empirical investigation and critical analysis and gives insight into human social activity. Psychology is the study of the mind and behaviour and helps testers understand individuals and groups. Insight into how our brain – the most important testing tool – works is essential to overcome all kinds of biases.

More about this can be found in a series of blog posts I wrote.

a1qa: Involving newbies in testing is almost an art. How do you do it successfully without turning testing into a dull job?

Huib Schoots: Interesting question. It sounds to me that this is perceived as very difficult. There are several aspects to this: recruiting people who want to become testers, teaching them to do excellent testing and making your job exciting.

Advocating for testing, enthusing people and teaching testing is something I take pride in. I am very passionate and put a lot of energy in teaching people testing. That is why I speak at conferences, work as a consultant, teacher and coach. This is also why I offer free Skype coaching.

Testing to me is never a dull job. It is playing to learn, exploring new stuff, crime-scene investigation, solving puzzles, working with many different people and helping people create beautiful solutions to complex problems. Maybe I need to explain what I think testing is. Testing is evaluating a product by learning about it through exploration and experimentation. Testers light the way by collecting information to inform decisions by the people who make the product or decide about it. As a tester, people pay me to learn – how cool is that? Michael Bolton recently tweeted an interesting series of what testing can be. He collected his tweets in a must-read blog post.

Traditionally, testers take a three-day class where they learn "the basics" of testing. These classes do not prepare new testers to do proper testing, let alone encourage them to become excellent in their profession. For example: in these three-day courses, all exercises are done on paper without actually testing a single piece of software. This makes me wonder if there are classes where developers are trained without actually coding. These courses focus on dull processes and artifacts instead of the more exciting and fun parts of our craft. I recently started using books by Keri Smith to teach software testing. In her captivating guided journals, readers are encouraged to explore their world as both artists and scientists by observing, collecting, documenting, analyzing, and comparing. "How to be an explorer of the world" is a must-read!

I believe that the stuff in those traditional classes is way too easy and can be learned by reading a book. Oh and please don’t give me the common language argument. It doesn’t help and is dangerous. Read what Michael Bolton says about it here and here. Real testing skills cannot be learned by reading books. And I talk about skills like: modeling, creating experiments, reporting, storytelling, critical thinking, context analysis and many more.

Huib, thank you for the interview and the interesting ideas you've shared. We hope to talk to you again and cover a few more interesting topics.

Reach Huib on Twitter and read his blog

Huib Schoots is an expert in the field of context-driven and exploratory testing from Den Bosch in the Netherlands. He works for Improve Quality Services, a provider of consultancy and training in the field of testing, where he shares his passion for testing through consultancy, coaching, training, and giving presentations on a variety of test subjects. With almost twenty years of experience in IT and software testing, he is experienced in different testing roles. He has experience in testing, test management, test improvement, agile implementation, line and project management.

Huib is curious and very passionate and tries to read everything ever published on software testing. His goal is to make testing more fun by combining agile, context-driven testing and human aspects and help people grow. He is a regular speaker at international conferences like EuroStar, Agile Testing Days, Let’s Test, CAST, Belgium Testing Days, Nordic Testing Days, Test Automation Day, TestBash and many more.

Huib is a member of TestNet, AST and ISST, a black-belt in the Miagi-Do School of software testing and co-author of a book about the future of software testing. He maintains a blog on magnifiant.com and tweets as @huibschoots. Besides testing he loves to play trombone, read, travel, take photos, brew beer, scuba dive and play golf. He also enjoys puzzles, video games and strategic board games.

a1qa: Huib, you launched the series of articles "Why I am context-driven". Can you explain why context-driven testing made your testing more personal, as you say?

Huib Schoots: The series of articles was actually launched by DEWT, a group of context-driven testers that I am part of. We wrote them to share personal stories about why context-driven testing is important to us. In 2010, six other Dutch testers and I founded DEWT (Dutch Exploratory Workshop on Testing) at my kitchen table. A few weeks earlier, James Bach had challenged German and Dutch testers in a blog post to become more active in the context-driven community. We accepted his challenge and founded DEWT to get together with like-minded testers, explore web app testing and our profession in general, get inspired, have geeky conversations about our craft of software testing, and learn. We started organizing peer conferences and meet-ups to learn more about context-driven testing.

Context-driven testers believe that testing is a challenging intellectual process that needs a lot of practice. There is no standard or recipe for testing since the value of any practice depends on its context. A huge part of the context is me. A lot depends upon what I do and how I do it. I learned that it was perfectly okay to do stuff my way. Context-driven testing in general and Rapid Software Testing specifically made my testing more personal by teaching me loads of new stuff and it encouraged me to develop my own style. I stopped using templates and standards and started to develop my own ways of testing. Heuristics now help me do my testing. My testing is all about personal skill development: learning and practicing!

a1qa: You run Skype coaching sessions. Why did you decide to take this up, and how does it work?

Huib Schoots: Skype coaching is a way of coaching testers using the text chat of Skype. If a tester wants to learn specific skills he or she can add me on Skype (my Skype ID: “huibschoots”) stating he or she wants to do coaching sessions. A Skype coaching session is a one-on-one interaction using the text message system of Skype. A coaching session is aimed at improving skills and learning. It is the tester that comes up with the solution. I try to help the “student” with the Socratic method. Often the tester gets a task or exercise during the session. After the exercise there is plenty of time to debrief.

Two years ago I contacted Anne-Marie Charrett on Skype. We talked about coaching, and she was very helpful and gave me things to think about. Later I started doing Skype coaching with James Bach. He challenged me, and we talked about several topics like reporting, test coverage and test automation. These sessions made me realize how powerful Skype coaching really is. I also did some sessions with Ilari Henrik Aegerter to learn more about Skype coaching before I started doing it myself as a coach.

If you really want to learn, you need to invest a lot of time. I believe in continuous learning by deliberate practice, and I put that into practice. We learn by making mistakes, preferably in a safe environment. We learn from feedback and evaluations. Coaching can boost your learning – not only for newcomers; for anyone who wants to learn and develop, a coach adds value. Antony Marcano wrote a nice article in which he says: "One thing that I notice is that while the teams are being coached, they do amazing things. They are more happy, more productive, fast to improve as if there are no limits to what they can achieve".

Skype coaching gives me the chance to practice my coaching and teaching skills. Sometimes I try new exercises with students. And I help a person to become better, which is very rewarding.

a1qa: Huib, you say there is no agile testing, but testing in agile context. Why do you think so?

Huib Schoots: Testing is testing! It doesn’t matter in what context you test. So agile testing really means testing in an agile context. Testing is not different in any other context. This is important because I meet many people who think they have to do completely different testing because it is agile now…

Testers in an agile context have to deal with some differences compared to more traditional (or waterfall) contexts. They work in short sprints, in an iterative and incremental way, which needs a different test strategy. There is more focus on teamwork and collaboration: many testers are used to working in TEST teams, while in an agile context they often work in DEVELOPMENT teams. They have to deal with less certainty, since change is common. They have to practice continuous critical thinking: testers need to help the team by thinking critically about impact and risks. Traditional testers used to do that upfront, while writing documents like master test plans. In an agile context, testers have to do it continuously throughout the project: in refinement or grooming sessions, daily standups, planning sessions, etc.

Does this change the way we test? I don't think so! We still do the same things, but now we collaborate more, have less time to prepare, and there is even more urgency to automate our regression checks.

This is the first part of the interview with Huib Schoots, read the second part here on our blog.

Rik Marselis is one of Sogeti’s most senior Management Consultants in the field of Testing and Quality Assurance. He uses his 34 years of experience in IT to bring fit for purpose solutions to organizations, having assisted many businesses around the world to improve their IT processes and establish a successful test approach.

Rik has trained and coached many testing professionals, ranging from novices with an ISTQB Foundation training to experienced testers with TMap NEXT, TPI NEXT and ISTQB Advanced training courses. He has also developed a number of training courses, for example on Agile testing, and he regularly contributes to the testing profession through research and development.
Rik is a fellow in Sogeti’s research network, SogetiLabs, and has contributed to a variety of articles and books on quality and testing. His most recent books are “The PointZERO Vision”, “Quality Supervision” and “TMap HD”.

Since testing and quality are his hobby, Rik also contributes to the testing world at large: he is currently the chairman of TestNet, the independent association of software testers in the Netherlands, with over 1,700 members. Rik is an experienced presenter at conferences throughout Europe. His presentations are always appreciated for their liveliness, his ability to keep the talks serious but light, and his use of practical examples with humorous comparisons.

a1qa: Do you think testing is more about tools, techniques or maybe people?

Rik Marselis: In my opinion if testing goes wrong it’s because of failing people and when it goes well it’s because of skilled people. So the difference is made by people. But of course skilled people will apply techniques and tools at the right moment, depending on the situation, the time available and (very important) the level of quality-risk that the system under test brings.

Nowadays, with the Agile application lifecycle model gaining more and more followers, tooling is rapidly becoming more and more important. If you need to deliver working software of which the quality level has been proven to be fit for purpose, it is almost impossible to do the necessary testing without tools.

But testers should keep in mind the old saying, “a fool with a tool is still a fool”. Some testers seem to think that as soon as they have learned to use testing tools, they can forget their manual testing skills. But hey, let’s take a metaphor: a carpenter uses a hammer to bang nails into wood. Then he starts using screws and still uses his hammer, which doesn’t work well. He starts using a screwdriver, but gets tired quickly. He learns about a modern tool, the battery-powered screwdriver, is very happy, and swears never to work without his new tool any more. And then he needs to put a nail into the wood… (Have you ever tried to bang a nail into wood using a battery-powered screwdriver? It does work somewhat, but you’ll ruin your tool ;-))

a1qa: You have greatly contributed to the TMap method. What makes it special compared to others?

Rik Marselis: TMap is a testing method that was first published in 1995 (almost 20 years ago) but actually goes back to 1987 when a first handbook of testing was created.
TMap is often compared with ISTQB, and some people have even thought that ISTQB and TMap were competing, but now people agree that ISTQB and TMap complement each other. ISTQB is a good framework for testing, it tells you what products you need to create (for example a test report, test plan and test cases) and what standards you might use (my favorite is IEEE829, especially for test plan topics).

But ISTQB doesn’t say a lot about HOW to do your testing.
And that’s where TMap comes in. The TMap method gives you a set of possible actions you can take to create the necessary testing products. All focussed on giving the relevant stakeholders insight into the quality of the information system and into the product risks that remain at the moment a decision to go live has to be taken.

Last month (October 2014) we introduced the latest update of TMap, called TMap HD. HD stands for Human Driven, to underline the importance of people in testing. TMap HD actually has five elements: People, Simplify, Integrate and Industrialize are four elements that together, when filled in properly, lead to the fifth element, “confidence”: confidence in the quality of the system and confidence that the remaining risks are acceptable.

TMap HD is part of the TMap Suite. An important role in the TMap Suite is played by the website, www.TMap.net which contains the background knowledge a tester needs. Using building blocks the tester adapts his approach to the situation at hand. By implementing this adaptiveness the tester can use TMap to his advantage in any situation, ranging from traditional waterfall projects to state-of-the-art DevOps teams.

a1qa: Do you think in modern QA there’s a need for reviews?

Rik Marselis: When using the word “testing” all too often people only think of “banging the keys on the keyboard”, the dynamic testing.

If however we talk about structured testing, as a proper way to measure quality and report about risks, then we need to create a test strategy that has both static testing and dynamic testing. And it is a common wisdom (at least amongst skilled testers) that reviewing is a much more cost-effective quality-measure than dynamic testing.

In a recent project we investigated, for every defect that was reported, the so-called “escape phase”: the phase in the application lifecycle where the defect could have been found (which was often much earlier than where it actually was found). In many cases a defect could easily have been found by static testing, even by doing easy informal reviews, but certainly by performing more thorough technical reviews or inspections. This way we could calculate how much money could be saved by doing static testing, thus convincing the stakeholders to spend a little more time in the early stages of a project and prevent a lot of lost time at the end of their project.
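The kind of calculation Rik describes can be sketched roughly as follows. This is an illustrative model only: the phase order, cost multipliers, and defect data below are assumptions for demonstration, not figures from the actual project.

```python
# Illustrative "escape phase" cost model: estimate the extra cost incurred
# because a defect escaped from the phase where it could have been found
# to the phase where it was actually found. All numbers are assumed.

PHASES = ["requirements", "design", "coding", "testing", "production"]
COST = {"requirements": 1, "design": 2, "coding": 5, "testing": 20, "production": 100}


def escape_cost(escape_phase: str, found_phase: str) -> int:
    """Extra cost units incurred because the defect escaped to a later phase."""
    assert PHASES.index(found_phase) >= PHASES.index(escape_phase)
    return COST[found_phase] - COST[escape_phase]


# Three reported defects as (escape_phase, found_phase) pairs.
defects = [
    ("requirements", "testing"),
    ("design", "production"),
    ("coding", "testing"),
]
savings = sum(escape_cost(escaped, found) for escaped, found in defects)
print(f"Potential savings from earlier static testing: {savings} cost units")
```

Summing the per-defect differences gives the stakeholder-facing number: how much could have been saved had each defect been caught in its escape phase.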

a1qa: Quality and testing is your profession and a hobby. Is test training a hobby too, or a contribution to testers’ education?

Rik Marselis: QA testing is a real profession. To be able to do good testing, someone needs knowledge and skills. I always find a lot of pleasure in helping people extend their knowledge and skills. What I, in my role as trainer and coach, always try to do is make people aware of what is needed to become a better tester. This is a much nicer goal than just studying to pass an exam and get a certificate. Some people in the world of testing don’t like certification. And I can agree to some extent that a certificate in itself doesn’t prove that someone is a good tester. But I have experienced, in over 10 years of being a trainer for testing courses, that people who know they have to do an exam pay much more attention during a training course and in this way are able to extend their testing skills much more than people who only join a training course on a “sit-back-and-listen” basis.

By using real-life examples and humorous comparisons I try to make my training courses entertaining so that the trainees will really feel involved and thus, hopefully, get just as excited about the testing profession as I am.

a1qa: And in the end, you have been in testing for so many years. Is there any test approach or technique that is “close to your heart”?

Rik Marselis: Exploratory testing is my favorite. However, I often come across testers who say they do exploratory testing but actually don’t do anything more than ad hoc testing. To me, the difference between ad hoc testing and exploratory testing is that in exploratory testing the tester puts a lot of thinking and preparation into what he does. The only difference with purely scripted testing is that preparation and execution are combined in a test session. Good exploratory testing has a charter (which is an assignment, maybe even a very tiny test plan) and is preferably done by two people (one subject matter expert and one testing professional). Also, in exploratory testing the tester must be able to use coverage types, because when you come across a boundary, for example, a skilled tester will do boundary value analysis on the fly.

By combining exploratory testing with coverage types the tester actually will be able to make a well-founded statement about the level of quality, the benefits that are achieved and the remaining risks in the system under test.
Because that’s what testing is all about: measuring quality and reporting about risks and benefits.
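The boundary value analysis Rik mentions can be made concrete with a small sketch. This is a hypothetical illustration (the age range and the validation function are invented for the example, not taken from TMap): for an inclusive numeric range, the classic boundary inputs are the values on and just either side of each edge, because that is where off-by-one defects cluster.

```python
# Hypothetical system under test: a field that accepts ages 18..65 inclusive.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 65


def boundary_values(low: int, high: int) -> list:
    """Derive the classic boundary test inputs for an inclusive range:
    just below, on, and just above each boundary."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]


if __name__ == "__main__":
    for value in boundary_values(18, 65):
        print(value, is_valid_age(value))
```

Six targeted inputs (17, 18, 19, 64, 65, 66) exercise both edges of the range; a tester doing this “on the fly” during an exploratory session applies the same derivation mentally the moment a boundary appears.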

Thanks for this interview. I trust that when testers across the globe work together we can assist each other to enrich our profession and make the work more effective, efficient and, above all, more fun!

Rik, thank you for the great interview and for sharing your experience. We hope to talk to you again and cover other interesting topics.

After two years with Microsoft’s localized products, Rikard spent 11 years with Spotfire, producer of interactive data visualization products. The generalized learnings resulted in the free e-book The Little Black Book On Test Design. He has spent the last three years as a consultant, with lots of teaching at companies and in higher vocational studies programs. Teaching is a learning experience, with the number one testing question: What is important?

He is a regular at (inter)national conferences, with five appearances at EuroSTAR. He is a member of the think-tank The Test Eye, co-author of Software Quality Characteristics, and co-organizer of SWET, the Swedish Workshop on Exploratory Testing. Rikard currently works as a test consultant at LearningWell in Sweden.

a1qa: Rikard, exploratory testing is your primary sphere of interest. Can you name the biggest trap of this testing approach?

Rikard Edgren: An important part of exploratory testing is freedom and responsibility. It is easy as a tester to get lost in the freedom and find interesting things, while letting go of the responsibility of getting test coverage that is appropriate for the current testing missions. This is definitely not easy; it requires a good understanding of the software and the situation, hands-on test performance skills, together with careful observation and communication of results in recipient-friendly manners.

Not sure I can name this, but let’s call it The Fun Trap – you do testing that is fun, but you might not provide the information that really is needed. The solution is to work on your skills: rapid learning, critical thinking, understanding what is important, testing with a variety in methods, collaborating, and more…

a1qa: Test strategy is an inseparable part of any testing project though it is unique for every situation. Still, what is a good test strategy for you?

Rikard Edgren: Good question. I have one short answer, published here: a good test strategy is specific, practical, justified, diverse, resource efficient, reviewable, anchored, changeable, erroneous.

I have a long answer about how to get there in a 52-page book, Den Lilla Svarta om Teststrategi, but it is only available in Swedish. It is my own version of James Bach’s Heuristic Test Strategy Model, which is in English and which I recommend to all testers.

A typical mistake for testers is to not understand the testing mission, to just “test the product”. You will test in different ways depending on if you are looking for all/important bugs; if broad test coverage is needed for decision support; if you need to adhere to standards or regulations; if your information is to be used by other parties like support or end users.

In short, understand which information people need, and create a good test strategy by using efficient test methods, usually found by experimenting in many ways, and getting more and more experiences from unique situations.

a1qa: Testing education is also one of your current focus areas. Can a tester do a good job just spotting defects without learning something new? Do you agree with the statement that performing tests day by day is basically enough to become an experienced and qualified QA engineer?

Rikard Edgren: Testing is about learning, so without learning it won’t be a good job.

Performing tests day by day can be a good method, that’s how I started myself. But you need freedom, so you can try out many different ways of testing. You probably also need peers who add new ideas and challenge yours. You need honest reflection, but also hard, boring work when required.

But testing day by day, in the same way, won’t probably increase your skills much.

Also, people learn different things in different ways, and a variety of learning methods is recommended (reading, discussing, going to conferences/classes, working in other roles, teaching, etc.). My own best school was several years of lunch discussions with my peers, where we reflected on our work and brainstormed new solutions to our problems. And learning-wise, it probably wasn’t the results that mattered; it was the discussions, where we had to explain our thinking and understand others’ thinking.

a1qa: No doubt, self-education is an essential part of any professional field, though it seems really hard to inspire people to learn. What would you advise a manager to do to stimulate team members to learn?

Rikard Edgren: I don’t really believe in stimulating or motivating people. I think each person’s intrinsic motivation is the key to learning. So the manager should first make sure that they don’t de-motivate people. Second thing can be to show by examples; that’s something that inspires many.

Learning is such an essential part of software testing, so if you don’t like to learn new things, it might not be the right profession.

Rikard, thanks a lot for the interview and sharing your thoughts. We hope to talk to you again.

Torbjörn was chairman of SAST Stockholm for three years and is now chairman of a local chapter of SAST. He has written the book Essential Software Test Design as well as many blog posts and an e-book on visualisation. He believes in lifelong learning, visualization and team power and is currently studying UX.

a1qa: Torbjörn, can you please explain how mapping user experience improves software development?

Torbjörn Ryber: If we want to solve the real user problem, we really need to understand how users think, feel and act. I believe that we are far too eager to find solutions before we have understood enough, and thereby end up not solving the real problem, or at least solving it in a sub-optimal way.

a1qa: Since you do UX consulting, you must have a personal viewpoint on whether it is essential to persuade developers to think of usability. In other words, what are the most effective ways of making people care about usability?

Torbjörn Ryber: I do believe that systems are built in order to solve people’s problems. And in order to solve the problem with the user in mind we must have solutions that are effective, efficient and satisfying. Much research shows that using IT-systems makes people sick – I do not think that is OK. I do not want to be known as someone that builds things that make people break down and cry because they have to spend hours every day trying to find out what they are supposed to do in that stupid system. I talk about this constantly not only to developers but to everyone involved in the process.

The fact is that not even the customers understand the importance of user experience and are not willing to pay for it. The Swedish site has a lot of excellent material that I think helps people understand the importance of usability.

a1qa: Usability testing can be performed applying different methods. Which method do you think is the most efficient?

Torbjörn Ryber: I have done testing for a large part of my 20 years in the IT-business and one of the major reasons I have started with UX is that usability testing used to come really late in a project, it was like the last thing you did before production. Guess how eager the developers were to make last minute changes based on the results of those tests. So I really like the agile approach of testing early and testing often.

As for method, I have had most success with think-aloud sessions where real users try to solve real problems. I try to keep it easy, short and to the point. Three users per function area will give you a pretty good idea of what the major problems are. The goal is that the test actually takes place and that we do something about the problems we find. I like to have developers and designers watch the sessions live or recorded; that is a real learning experience.

Most important of all – we cannot test quality into the product, we need to work hard in the beginning to understand the users and design accordingly. We will fail repeatedly so it is much better to fail early and often when it is cheap. Steve Krug has written a wonderful little book on how to get started called Rocket Surgery Made Easy. His method is a great start. Here is a picture of a session inspired by Krug.

a1qa: Where do you think usability testing will bring QA in several years? Might it entirely change the approach to development?

Torbjörn Ryber: I really do hope that UX – User experience – will be understood as the powerful and important tool it is. It is much more than just usability. A central value that testers bring is to validate that we have built the right system. We have been far too focused on verifying single functions and complicated rules. That is only a part of what we should do. I see the tester role being more specialized – some will be working with UX, others with automation.

The important thing is that we bring value to our work. It is a bit absurd how much focus many people spend on regurgitating old testing folklore about testing phases, stages, templates and roles instead of trying to keep up with the multitude of brilliant new ideas there are today. The certifications for testers and requirements analysts seem to have forgotten the progress of the last twenty years or so. Take, for example, the power of visualization. That is a key part of my own work, regardless of whether I am doing interaction design, requirements analysis or test design.

Books like Dan Roam’s Back of the Napkin and Mike Rohde’s The Sketchnote Handbook are great places to start, not to forget my own piece of work, Essential Software Test Design, which has a strong focus on visually modelling the problems. If you are interested in reading more about this, I keep a blog. There is so much more to say than fits into this rather short interview.

Torbjörn thanks a lot for the interview and the viewpoint you shared. We hope to talk to you again.

Karen N. Johnson is an independent software test consultant.  Her client work is often centered on helping organizations at an enterprise level. In recent years, she has helped companies transitioning to Agile software development. While focused on software testing and predominantly working with the testers throughout an organization, Karen helps teams and organizations improve quality overall. Her professional activities include speaking at conferences both in the US and internationally. Karen is a contributing author to the book, Beautiful Testing by O’Reilly publishers. She is the co-founder of the WREST workshop, the Workshop for Regulated Software Testing. She has published numerous articles; she blogs and tweets about her experiences. Find her on Twitter as @karennjohnson  (note the two n’s) and her website.

a1qa: Karen, you are an independent consultant. Can you explain why you chose this field?

Karen N. Johnson: My last full-time position was unrewarding and steeped in office politics. I wanted a change. I wanted more changes and more variety and thought if I was working as a consultant or a contractor I would naturally encounter changes more often. I also wanted more control over the work I had by being able to choose contracts. I had heard from other independent consultants that they encountered less office politics, as they were not in the same office day after day with the same people having to deal with some of the petty politics that can build up over time. All of this has ended up being true.

Usually the biggest fear or concern I hear from people who would like to work as an independent consultant is their fear of having an uneven income, followed by the fear of not knowing what will come next. All of this is also true. So I advise people who ask me: what is your tolerance level for these challenges? If you have some doubts about your tolerance for uneven income and unknown project workloads, as well as not knowing what clients you will be working with, then realize these things before attempting to work as an independent consultant.

Both full-time employee positions and being an independent consultant have pros and cons. Figuring out which of these situations you are better equipped to handle – and what your current family life is like are important factors in choosing what is right for you at that time in your life.

For me, I would work as a full-time employee again; it would just have to be the right situation.

a1qa: Starting a new consulting assignment, what challenges do you face most often?

Karen N. Johnson: Great question. The two most likely challenges I’m walking into are:
1) Something is already not working well and there is pressure to get things figured out and figured out fast; and

2) Working as a consultant means people expect you to ascertain what the issues are and how to remedy those issues quickly. When a fulltime employee is hired, there is a sense of “Well, let them come up to speed and don’t expect instant fixes,” but this is never the situation for a consultant. A number of times, I have had Vice Presidents and Directors debrief me by the end of my first day, and they are not looking for insights on the obvious issues; they want more, they expect more. What is interesting to me is that after having worked at more places for shorter periods of time, often issues are easier for me to discern more quickly. In other words, the more I consult, the faster I am able to make assessments and provide valuable feedback. Frankly, this is the part I enjoy tremendously – being able to work with executives and rapidly clue into the issues in their environment and advise and work to remedy those issues. I hadn’t realized this when I ventured out to work independently but I am keenly aware of this now: I truly enjoy helping and assisting executives and as it turns out, I’m pretty good at doing that.

As a side note, if you don’t think you would enjoy these types of time and insight pressures, don’t think about working as a consultant. Working as a contractor — someone who comes onsite and handles project work — might be a better fit, but consulting and contracting are not the same. I happen to work as both, or at times even as a bit of a “hybrid” between a consultant and a contractor.

To clarify I use the following as distinctions between the terms “consultant” and “contractor.” A consultant is hired to advise and provide guidance while a contractor is hired to accomplish specific work. This is not to say a consultant might not complete a certain body of work but they are primarily being sought to advise and they are being sought as the specific person that they are – for example, I want to hire Karen Johnson to come and advise (or consult) us on our test strategy. This is a consulting situation. Contrast that situation with: We need to hire someone who can come in and build a test strategy, direct the team and get this product shipped. In this second scenario, I may advise an executive of insights but this is not the primary reason I’m being brought in. Also note, I am not being brought in by my name or who I am – I just happen to be a good fit for the project work. This situation describes contracting.

Those are my definitions; others may distinguish the roles differently. When I meet other people who work independently, I ask them: What type of engagements do you work on? What is your business model? I’m less interested in titles than I am in understanding how someone works and under what conditions I might consider referring work or recommending them.

a1qa: You are interested in continuous learning. If a junior software tester asks you “How can I become an ideal tester?”, what would you advise?

Karen N. Johnson: I do get asked with some frequency: what should I learn next? It is a question that amuses me as I have no immediate answer or advice without knowing someone’s background or knowing what they are working on or want to be working on. But there are some comments I can make.

What background or knowledge do you believe you are missing? What could you learn more about to be more effective at the work you are doing? Let me give an example: When I was working in performance testing, I needed to acquire more math and statistical knowledge (or at least I believed I did), in order to do a better analysis of the data. I knew that, I knew that instinctively. I also knew that I had a “thin” background in math and statistics so this wasn’t so surprising to me. I was good at performance testing because of my ability to build up and simulate a production load and I was good at looking through weblogs to figure out how to model production. But my limited math background was a handicap. In that situation, I didn’t need more performance tool training; I needed more math and statistics. So you need to dig in deep in your background and figure out where those gaps are – this is what makes learning a personal journey. In software testing, it is not a case of “read these 20 books and you will have all you need.” Instead, it depends on what you are testing and what your background is and finding the gaps where you can strengthen your knowledge.

a1qa:You have been involved in the software testing industry for many years and have extensive experience. From your personal point of view how has the field of software testing changed? What tendencies you observe?

Karen N. Johnson: Years ago, I felt isolated, as I was the only software tester I knew — anywhere. When I stumbled into (at the time) STQE magazine (which is now Better Software), I was happy to find other testers and a profession that I had joined but didn’t know existed. In the beginning of my career, the software testing conferences and workshops were the only places to connect with other testers. Today, there is Twitter and about a half dozen or more monthly newsletters. I see the testing community as having grown and matured as well as having become better connected. I don’t need to travel to reach colleagues in far-flung places, and with Twitter I can reach hundreds if not thousands of testers and other professionals within minutes if not seconds. There are also many more women in software testing than there were two decades ago. And on the whole I see the community as a much more globally connected profession.

Karen, thanks for the interview and sharing your thoughts. We’ll be glad to talk to you again.

Justin Rohrman is a consulting software tester with a day job. He has been performing software testing in various industries and capacities for close to 10 years now. Outside of his day job, Justin is an assistant or lead instructor for three of the four BBST courses offered by AST, a frequent facilitator for Weekend Testing Americas, a Miagi-Do black belt, and a writer. He writes often for ITKnowledgeExchange, StickyMinds, and his own personal blog.

Justin is currently serving on the board of the Association for Software Testing as VP of Education and has an active role in the BBST program and WHOSE.
You can find Justin on Twitter or on his personal website.

a1qa: Justin, in your article, “My First Thirty Days as a Test Lead: Expectations and Reality” you describe observations of being a test lead. Can you summarize it all in some tips to follow when entering a new testing team? 

Justin Rohrman: Much of that article was focused on clarifying expectations. Mainly, what your boss expects of you, the lead; what the group of testers you are working with expect and need from you; and also what you need from that group of people.

As for general tips on joining new test teams, and products, focus on the skills you have and how those can be useful to the new team. A lot of the time when you are joining a new project, you will lack some domain expertise. By domain expertise, I mean your understanding of the industry your product is designed for and the problems it is designed to solve. Domain expertise is something that can usually be picked up pretty quickly in most situations. Testing skill however takes a long time to develop and can be transferred between different projects.

a1qa: You are VP of Education at the Association for Software Testing. Tell us how you came to be on the board and what you are hoping to do as VP of Education.

Justin Rohrman: I started out with AST around 2008 because I was looking for serious hands-on training for software testing. There were, and still are, very few good options for that. I had heard great things about the BBST class series. I had a fantastic experience taking the classes and eventually met Michael Larsen at CAST 2012 in San Jose, where he was leading an EdSig meeting and recruiting BBST instructors.

It was a whirlwind of getting more and more involved in BBST and AST from there. This year a few people asked me if I would be interested in running for the board and everything just fell into place from there.

As VP of Education, I am involved in all of the education offerings by AST. Right now, this includes the BBST class series and WHOSE. Both of these are very active. BBST classes are consistently full and wait listed, so we are currently exploring ways to offer more classes so we can meet the student demand. In December of 2013 a group came together in Cleveland, OH to begin working on a skills workbook for software testing. AST will be releasing that incrementally throughout the year, and it will continue to grow and change as a living document.

The education offerings from AST will continue to grow and evolve over time, I’m really excited to be involved in that.

a1qa: Measurements and social sciences are among your special interests. Still, if measurements in software QA testing are important for KPIs, effort estimation, etc., then how can social sciences be applied in the testing process? What’s the benefit?

Justin Rohrman: I don’t think measurement programs and KPIs are as important to software testing or software in the general sense as most companies assume. Also the amount of misunderstanding around measurement mostly renders them meaningless at best and damaging at worst. The things we are trying to measure in software are social in nature; namely capacity and performance, and quality.

Studying the origins of measurement in social science, specifically experimental psychology, can shed light on where we are making mistakes. Many of the problems we have with measurement, such as understanding reliability and validity problems, were studied in depth as the fields of psychology and sociology were being developed. Software testing is a social science focusing on the relationships between people and software environments; we should understand certain aspects of our origins.

There is also some value in studying lean which is partly about management moving back to directly observing the work and making decisions from what they observe. Lean values managers being in the gemba, the place where work is happening, rather than in an office creating and studying measures. This is sort of what was happening when qualitative research was being developed in the early 1900s.

There are great anecdotes from the history of this development in the books Reliability and Validity in Qualitative Measurement, and Constructing the Subject.

a1qa: And a few things about you and testing. Are you an adherent of some testing approach, strategy or technique? What’s your favorite field in QA?

Justin Rohrman: I am a student of the context driven school of testing. You can read about the principles of that here. The basics are: we view testing as an intellectually challenging activity, and there are no best practices, only good practices in context.

Justin, thanks for the interview and the thoughts you’ve shared. We hope to talk to you again.

Matthew Heusser is a managing consultant at Excelon Development. Matt has deep experience in software testing, project management, development, writing, and systems improvement. His extensive network of contacts in these fields has enabled him to put together a diversified, high-level team of experts at Excelon.

Matthew was lead organizer for the initial Great Lakes Software Excellence Conference, a regional event that continues today. He organized the Agile-Alliance Sponsored Workshop on the Technical Debt Metaphor, and recently published a leading position paper on the subject for Better Software magazine. Matthew served on the board of directors for the Association for Software Testing, and was the testing track chair for the Agile Conference in 2013 and 2014.

a1qa: Matthew, involving a QA team in an Agile process is always challenging. Nevertheless, it is always worth trying, we hope you agree. Maybe you have your own method of introducing Agile to a QA team, or some leadership tricks you follow?

Matthew Heusser: Well, I’d say that the ideal Agile process moves from what (requirements) to done (in production), and done includes “running”, or production support. When you start to cut involvement back you tend to decrease throughput and quality.

Introducing Agile into QA consulting, well, I see two or three big pieces to that. I’d start with introducing the idea of iterations, or sprints. This brings up the problem of regression-testing, or release candidate testing, in a day or two, preferably less. This isn’t a problem for some teams; they work on twenty to a hundred small pieces that can be deployed independently. For others, we have to talk about strategy.

My preference is to talk to the whole delivery team about Agile adoption. Typically I find that it is possible for the developers to scale back the work to small chunks. Even in an ancient system, the programmers can slice the work thin enough to get done in two weeks, ‘just’ adding a column to a report, and so on. The challenge is the regression testing. But if we ‘just’ added a column to a report, we can do a much more risk-adjusted test run. So one ‘aha’ moment is that regression testing does not have to mean re-running a specific test plan, and instead can mean varying the test approach based on risk.
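That “risk-adjusted test run” idea can be sketched as a simple selection rule. Everything below (test names, risk scores, the time budget) is a hypothetical illustration, not Matthew’s actual method:

```python
# Hypothetical sketch: regression testing as a risk-adjusted test run
# instead of re-running a fixed plan. All names, scores, and the time
# budget below are invented for illustration.

def select_regression_tests(tests, changed_areas, budget):
    """Rank tests by impact x failure likelihood, boosted for areas
    touched by the change, then keep what fits the time budget."""
    def risk(test):
        boost = 2.0 if test["area"] in changed_areas else 1.0
        return test["impact"] * test["failure_rate"] * boost

    selected, cost = [], 0
    for test in sorted(tests, key=risk, reverse=True):
        if cost + test["minutes"] <= budget:
            selected.append(test["name"])
            cost += test["minutes"]
    return selected

suite = [
    {"name": "report_columns", "area": "reports", "impact": 3, "failure_rate": 0.4, "minutes": 10},
    {"name": "login_flow", "area": "auth", "impact": 5, "failure_rate": 0.1, "minutes": 30},
    {"name": "export_csv", "area": "reports", "impact": 2, "failure_rate": 0.3, "minutes": 15},
]

# A change that 'just' added a column to a report: spend the
# half-hour budget on the report tests first.
print(select_regression_tests(suite, {"reports"}, budget=30))
# -> ['report_columns', 'export_csv']
```

The point of the sketch is the ranking step: a change that touches only reports pushes the report tests to the top of the list, so the fixed regression plan never needs to be replayed in full.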

The second big issue to tackle is story-testing – breaking a huge requirements document into chunks small enough to be actionable and yet meaningful. With traditional development, we tend to assume an 80-page specification implies deep thought. When teams chunk work and look for examples, we can often push disagreements and misunderstanding upstream, before coding begins, in a story kickoff. This is critical to actually getting testing done in tight time boxes — you just don’t have time for the back and forth bickering over what the software should do. So I see story kickoff, or “shift left”, or whatever you’d like to call it, as another ‘aha’ moment for agile-testing.

a1qa: Lean software testing is one of your interests. Still, lean testing is an offshoot of Agile. So what makes lean software testing different from other Agile approaches?

Matthew Heusser: Lean Testing, as I practice and teach it, provides theory for why Agile software works – with small batches, limited work in progress, moving toward one-piece flow, and so on. It also provides concrete measures and metrics that improve team performance even when gamed.

Dare I say it, those are two areas where I see Agile falling down. Scrum or XP introduced as sets of rules without theory tend to fail, and Agile ‘metrics’ like Velocity can be easily gamed to the detriment of the system.

a1qa: Let’s talk about testing in Scrum. It’s a well-known fact that Scrum has a number of pros: saving time and delivering a quality product on schedule. But what about the cons? There is a viewpoint that Scrum is good for small, fast-moving projects, as it works well only with small teams, and that this methodology needs experienced team members only. Do you agree with that? And if you have faced these challenges, how did you manage to handle them?

Matthew Heusser: Well, first of all, you have to understand the problem Scrum was introduced to solve – requirements churn so fast that no one could get anything done. If you don’t have that problem, if, for example, your team can turn around code in a day – then off-the-shelf scrum might actually slow you down!

Here’s why – Scrum tends to prescribe iterations of a fixed length, with a standard now around two weeks. You figure out all the stories for those two weeks, the technical team commits to them, and the product management team goes to figure out what to do next sprint. If your team already does just in time requirements, and that is not a problem, then you’ve increased the batch size. Lean methods wouldn’t think of that as an improvement.

As for the argument that scrum only works with small teams, I would agree – at least to the extent that scrum was designed for small teams and doesn’t tell you what to do with big ones. But let me push back on that a little, at least to say that large teams organized by function don’t work, or at least, don’t work well.

For example, I worked on one project that you might consider medium – perhaps sixty people full-time for two years. The teams were organized functionally. Every day you’d have a test standup, a frontend standup, a backend standup, a scrum of scrums, an operations standup … who knows. I didn’t even have exposure to all the standups! Nor did I want it.

Scrum would prescribe cross-functional teams. Instead of eight functional teams of eight people each, we could have had eight feature teams with eight people each. This would have required a little infrastructure work so that the teams didn’t step on each other, but that would have been good design work anyway.

a1qa: And finally, applying Lean software testing, Scrum or any other methodology, can testers reach 100% application/system quality and make it highly efficient? Or is QA an endless process?

Matthew Heusser: As long as a new version of Internet Explorer, or Chrome, or FireFox, or Safari, or iOS, or Android, can introduce a change to javascript that ‘breaks’ existing functionality, then I doubt we’ll have 100% application/system quality any time soon. Of course, you can lock down the browser, and only support a specific version of IE. You can lock down the language, and only support English US and English UK keyboards, and so on. Apple controlled everything on the original version of the iPod, from hardware to OS to software. In those environments, you can get a lot closer to 100% correct. For example, a few years ago I wrote some Electronic Data Interchange (EDI) software that ‘just’ converted from one EDI format to another in plain text. That software was certainly fit for use; I am not aware of a single problem in production, ever. Yet a multi-gigabyte file with lines of text that were too long would probably crash it.

So there may be a few domains where it may be possible to get close to 100%. For the most part, for what I do, I can be most valuable in the domains that offer risk – because riskier domains are ones where risk management is called for.

Time pressure? Uncertainty? Ambiguity? Sign me up, because it is often those domains where a tester can add the most value.

Matthew, thanks for sharing your viewpoint and experience. We’ll be glad to see you and talk to you again.

You can follow Matthew Heusser on Twitter and LinkedIn, and read his blog.

In the previous post we talked to Parimala about Mobile testing and UX testing. In case you missed this part of the interview you can read it here. This time we talk about crowdtesting and its pros and cons.

a1qa: You recently moved to an organization which offers Crowdtesting services in addition to Offshore Testing Services. Can you talk about Crowdtesting and also explain how Managed Crowdtesting is different from regular Crowdtesting?

Parimala Hariprasad: Crowdtesting or Crowdsourced testing is an on-demand software testing service, delivered through highly skilled & qualified, geographically distributed professionals over a secure private platform. In a way, this can be considered a type of software testing outsourcing, where you delegate testing responsibilities not to a single vendor, but to a range of them simultaneously.

Managed Crowdtesting
A qualified project manager, who is typically a proven community leader, designs or reviews the test strategy and approves or amends it to cater to the customer’s specific testing requirements. Each project includes an explanation and access to a forum where bugs and issues are discussed and additional questions can be asked. Testers submit documented bug reports and are rated based on the quality of their reports. The amount the testers earn increases as their rating increases. The community combines aspects of collaboration and competition, as members work to find solutions to the stated problem.
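As a purely hypothetical sketch of that rating-based reward mechanism (the base fee and the multiplier formula are invented for illustration and are not drawn from any real crowdtesting platform):

```python
# Hypothetical sketch of a rating-based payout, as described above:
# testers earn more as the quality rating of their bug reports grows.
# The base fee, tiers and multipliers are invented for illustration.

BASE_PAYOUT = 10.0  # assumed base fee per accepted bug report

def payout(rating, accepted_reports):
    """Scale the per-report fee by the tester's average report rating (1-5)."""
    if not 1 <= rating <= 5:
        raise ValueError("rating must be between 1 and 5")
    multiplier = 0.5 + 0.5 * rating  # 1.0x at rating 1, 3.0x at rating 5
    return round(BASE_PAYOUT * multiplier * accepted_reports, 2)

print(payout(rating=5, accepted_reports=4))  # top-rated tester: 120.0
print(payout(rating=2, accepted_reports=4))  # lower-rated tester, same volume: 60.0
```

The exact formula varies by platform; the sketch only shows the incentive structure, where report quality, not report volume alone, drives earnings.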

a1qa: What do you think about using Crowdtesting as an augmented approach to testing?

Parimala Hariprasad: Every organization maintains a certain ethos: a comfortable work culture, a talent pool of people and scores of technical debt. Software testing is one of the least important areas to spend money on for many organizations, be it 50 years ago, today or even 50 years later, because, according to the customer, software testing consumes money, it doesn’t bring money. In such a situation, selling software testing solutions bundled in different packages to customers as “the most innovative solution of the century” no longer makes sense.

The need of the hour is to pitch testing models and solutions as an augmented approach to testing.

Scenario 1 – An organization which employs traditional testing methodologies approaches you for testing.
An organization, let’s say, has a mature testing process in place and also has a Test Center of Excellence. How do you add value here? It’s important to understand the needs of the customer, identify the pain points they are going through as a result of not testing or of doing poor testing, and pitch a model that fits them best. The customer might take a couple of test cycles to gauge whether that model works well or not.
In this case, if bringing in a fresh pair of eyes, or several of them, helps, then adding a few new team members to the organization or team to initiate testing is the way to go. If it needs to be done on a larger scale, Crowdtesting can be an option. Note that it is not the only option, but one of the options.

Scenario 2 – An organization is looking for diversity in test configurations and devices.
Consider a large organization whose web or mobile applications are accessed from different operating systems, browsers and browser versions, multiple mobile devices, different platforms like Android, iOS and Windows, several manufacturers, and different screen sizes and resolutions. From a cost and time perspective, organizations often find it hard to test on such a variety of configurations. This context is well suited to crowdtesting, where a professional testing community works in a Bring Your Own Device (BYOD) model and tests the application, giving broader device/platform coverage.

Scenario 3 – An organization wants to solve its regression testing problem.
Many legacy applications need regression testing. While new features are conceptualized and implemented, keeping existing features from breaking is a big pain. This risk is further aggravated by the number of operating systems, browsers, mobile devices and other test configurations. Such applications are a great fit for Crowdtesting, where the crowd can regress on a variety of platforms and test configurations within a short period of time.

a1qa: What are the advantages and disadvantages of Crowdtesting?

Parimala Hariprasad: Crowdtesting has its advantages and disadvantages. It is up to organizations and their customers how they use it and derive value from it.

Advantages

  • Representative scenarios from the real user base
  • Tight feedback loop with rapid feedback processing and agility
  • Comprehensiveness in use cases, platforms, tools, browsers, testers, etc. that is very hard to replicate
  • Cost efficiency
  • Diversity among the pool of testers lends to extensive testing
  • Reduced time to test, time to market and total cost of ownership as most defects can be identified in relatively short time, which leads to significant reductions in maintenance costs

Disadvantages

  • Governance efforts around security, exposure and confidentiality when offering a community project to a wide user base for testing
  • Project management challenges that stem from the testers’ diverse backgrounds, languages and experience levels
  • Quality assurance efforts to verify and improve bug reports, identify and eliminate bug duplicates and false alarms
  • Equity and equality constraints in the reward mechanism, with remuneration as a function of the quality of contributions that meet a prescribed minimum standard

a1qa: What does the future hold for crowdtesting?

Parimala Hariprasad: Crowdsourced testing clearly has its advantages and limitations. It cannot be considered a panacea for all testing requirements, and the power of the crowd should be diligently employed. The key to great Crowdtesting is to use it prudently, depending on the tactical and strategic needs of the organization that seeks Crowdsourced testing services. It is important for the organization to embrace the correct model, identify the target applications, implement Crowdtesting, run it for a few test cycles, monitor test results and custom-design the model to suit its needs.

a1qa: And finally, what does Software Quality mean to you?

Parimala Hariprasad: I follow Jerry Weinberg’s definition of Quality, “Quality is value to some person”, which was further refined as “Quality is value to some person who matters” by James Bach and Michael Bolton.

I was recently talking to a friend who wanted users to install his mobile app and travel from point A to point B in 10 different cities across the world. At first the requirement sounds weird and generates laughs. However, for this guy, getting this done means the world, as it helps him analyze how his product behaves in different contexts and weather conditions, amidst other environmental and technical factors. For this customer, value is actually seeing whether his product works or not, so he can make the necessary amends.

Value, in my opinion, is defined by the customer based on:

  1. The pain points the customer is facing
  2. How much the customer is willing to pay to remove those pain points

As we move farther and ahead in technology, it is becoming critical to pay attention to the value the customer is looking for. Offering good value means offering good quality. This applies not just to software testing, but to any craft where “Value” is valued!

“Quality is about How People Feel”

Parimala, thanks for sharing your experience and thoughts. We’ll be glad to talk to you again.

You can read Parimala’s blog.

Parimala Hariprasad spent her youth studying people and philosophy. By the time she got to work, she was able to put those learnings to help train skilled testers. She has worked as a tester for close to 12 years in domains like CRM, Security, e-Commerce and Healthcare. Her expertise lies in test coaching, test delivery excellence and creating great teams which ultimately fired her because the teams became self-sufficient. She has experienced the transition from Web to Mobile and emphasizes the need for Design Thinking in testing. She frequently rants on her blog, Curious Tester.

a1qa: Your most recent work has been extensively in the mobile apps testing space. What, according to you, is important while devising a Test Strategy for Mobile Apps?

Parimala Hariprasad: Mobile apps testing groundwork begins by understanding the customer and the apps to be tested. The better we know these two, the better the test strategy will be. A high-level test strategy includes understanding business goals, release goals, mobile personas, platforms to test on and testing for competitiveness of the apps. Once the test strategy is ready, tests in each area can be planned and executed for good coverage. Testers must plan for surprise platforms that are problematic but weren’t tested thoroughly enough. Whenever we optimize and narrow our focus, we run the risk of missing something important. Mitigation plans are important to have for known risks.

Fishing net heuristic
Is your test strategy good enough? Every element in a test strategy is a fishing net. Ask, ‘What kind of fishing net do you use?’, ‘Does it catch small fish?’, ‘Does it deal with sharks?’, ‘Do you have just one kind of net or more?’. Remember, which type of sea creature you catch depends on the type of net you use! The fishing net is a powerful heuristic to assess whether a test strategy is good enough.

I don’t have time to create a strategy
Abraham Lincoln once said, “Give me six hours to chop down a tree and I will spend the first four sharpening the axe”. So what if you don’t have the time to test? Even if you have an hour to test, you must spend time creating a strategy first, because the less time you have to test, the more effective your testing must be. These words from Jonathan Kohl keep coming back to me whenever my team feels time pressure to complete testing.

a1qa: Faster release of mobile apps is always a risk to any organization. What do you think app owners should do to decrease post-release risks, and how can testing help?

Parimala Hariprasad: Post-release risks can be mitigated well if testing is backed by a powerful test strategy and is context-driven. There are several techniques to gather information after the release of apps.

Real world testing
Hiring testers and users to test in real-world conditions with respect to location, network types, network speeds and so forth. Such feedback has a high chance of finding problems that occur only in real-world conditions and corner-case scenarios.

App store reviews
Studying reviews and comments by users on the app store is a goldmine of information about how the app can become better in subsequent releases. App discoverability and user engagement are key metrics to track for increasing app store ratings. Testers can assimilate these inputs and come up with a ‘Recommendations’ report from the user perspective.

Social media analytics
What people say about the released app on social media is a good way to assess how users feel about the app in general. There are several tools on the market that collect social media reports about the app from different networks and provide that information to stakeholders. Analytics gives great visibility into user distribution in the real world. Based on this information, testing can focus on platforms/configurations that were not previously covered. Additionally, analytics data can be used to improve the test strategy for subsequent releases.

Competitor analysis
The released app can be compared against competitor apps to test its strength and stickiness. A better approach might be to take the app to users of competitor apps and gather feedback at all levels.

Recently, there was an instance where missing out on testing a so-perceived trivial flow cost the organization refunds to many of its users. Until then, the testing team involved did not know the importance of that flow.

Once the apps are released, information about the quality, or the lack of it, keeps flowing in all directions. It’s important for testers to work outside the testing team with tech support personnel, sales/marketing teams and product owners to listen to the feedback coming in. The underlying message is, ‘Testers need to keep listening in all directions.’

a1qa: Mobile Market has millions of devices today. How do you choose which devices to test on? Can you describe your approach?

Parimala Hariprasad: I like Jonathan Kohl’s approach to choosing mobile devices. According to him, there are four basic approaches to select from an ocean of devices:

  1. Singular approach: test on one device type, either because that is all our team plans to support, or because it is the most popular device in a device family, with one operating system, using one cellular carrier. A “problem child” device that reveals lots of problems is the best bet in this approach.
  2. Proportional approach: which devices, and how many, to test requires research based on web/mobile traffic, analytics data or user data. For example, if existing historical data shows 50% Android mobile traffic, 45% Apple iOS mobile traffic, and the remaining 5% other handset types, this data can be used to prioritize testing on Android and iOS devices.
  3. Shotgun approach: for a mass-market app, we may need to support all sorts of devices, with no self- or customer-imposed restrictions on devices. This has the highest risk, because there are many, many platform combinations out there. Problem devices and research data, as in the proportional approach above, are good places to start.
  4. Outsourced approach: there are various services you can use to supplement your own test devices with basic testing on devices that other people own and have set up. Formally this can be done using remote device access services, which allow you to install software and control a device remotely over the web to do basic functional tests. You can also use crowdsourcing services that manage people with different device types in different locations and parts of the world to do testing on their phones.
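The proportional approach amounts to simple arithmetic; here is a minimal sketch, assuming traffic percentages like the 50/45/5 split mentioned above (the function name and the device budget are illustrative):

```python
# Minimal sketch of the proportional approach: split a fixed device
# budget according to observed traffic percentages, handing any
# leftover devices to the platforms with the largest remainders.
# Traffic numbers and the budget are hypothetical.

def allocate_devices(traffic_percent, total_devices):
    """Distribute total_devices proportionally to integer traffic percentages."""
    floors = {p: total_devices * pct // 100 for p, pct in traffic_percent.items()}
    leftover = total_devices - sum(floors.values())
    by_remainder = sorted(traffic_percent,
                          key=lambda p: total_devices * traffic_percent[p] % 100,
                          reverse=True)
    for p in by_remainder[:leftover]:
        floors[p] += 1
    return floors

traffic = {"Android": 50, "iOS": 45, "other": 5}  # percent of observed traffic
print(allocate_devices(traffic, total_devices=20))
# -> {'Android': 10, 'iOS': 9, 'other': 1}
```

Integer percentages keep the arithmetic exact; in practice the shares would come from analytics data, as the proportional approach describes.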

Despite these approaches, organizations still face the brunt of setting up mobile device labs, maintaining them and keeping them stocked with the latest devices over time. I handle this challenge with a collective approach:

  • In-house Mobile Device Lab with access to most popular devices based on device models, platforms, countries, mobile app types and user types
  • Online Mobile Device Lab with access to millions of devices across the world through in-house or external remote-access device providers
  • Simulators/emulators for quick, basic tests [trust these at your own risk, but there are good tools on the market which are close to being real]
  • BYOD approach where millions of users across the globe can be invited to complement other mobile device labs using crowdtesting

a1qa: Success of any product depends on positive user experience. Usability testers apply various techniques to enhance user experience. One of these techniques is paper prototyping. How helpful is it and what are its weak sides?

Parimala Hariprasad: Paper Prototyping is a technique adopted from design thinking world. In this technique, a tester wears a designer’s hat and re-designs prototypes of screens or pages. Testers take existing applications (Web or Mobile), view page by page or screen by screen, understand the design and perform basic tests on Design, UI and Business Logic for each design.

Designers create prototypes anyway, so why re-invent the wheel? Because testers gather vast experience over time by testing multiple products or applications in a variety of domains. For example, testers might say, ‘This button must be in this position’ or ‘This UI element must be in this color’ or ‘Remove this UI element as it is redundant in my experience’. This feedback is driven by testers’ knowledge of different applications, domains and industries. A step forward from here would be to incorporate these decisions and create fresh prototypes of the applications, which can then be reviewed by designers, developers and product owners for further discussion.

An advanced approach towards paper prototyping can be to design two different prototypes, show it to a group of users and gather feedback on which was a better hit with users. Going to stakeholders with such information helps testers build credibility.

What are the weak sides of Paper Prototyping?
Paper Prototyping has its weaknesses:

1. Ideas are tester dependent and may not represent an ideal user at all times
2. Users involved in getting feedback may not represent a holistic sample of users

In the next post Parimala will touch upon the topic of crowdtesting.

In the previous post we started a discussion with a widely known software testing specialist Michael Bolton. Today we continue and cover the issues of automation and rapid software testing.

1. Michael, as one of the Rapid Software Testing course creators, can you please explain what rapid testing actually is? Is it a philosophy, an approach, a set of skills?

Rapid Testing is all three of those things. We describe rapid testing as a mindset and a skill set on how to do testing quickly, inexpensively, expertly, and credibly. It requires deep thinking about products, problems, models, processes, tools, and interactions between people. It is an approach that focuses on reducing waste and helping to speed up the project through investigation and exploration throughout development. It also requires the tester to develop skills in many domains: critical thinking skills, scientific thinking skills, using heuristics; emphasizing the use of lightweight, flexible tools; framing testing; applying oracles; identifying coverage and the gaps in it; and reporting on all of those things.

The premises of rapid software testing are listed here; a framework for describing rapid software testing is here.

2. To automate or not to automate? Is test automation the panacea of the quality assurance world?

Is automating research the panacea in the academic world? If you look at expert researchers, they use computerized tools all the time. But nobody talks about automating research.

There is no such thing as automated testing, any more than there is automated research or automated programming. There are some tasks within programming that can be automated, but we don’t call compiling “automated programming”.

Instead of thinking about “automated testing”, try thinking about tool-assisted testing. If you do that, you’ll be more likely to think about how tools can help you perform all kinds of tasks within testing: generating data; visualizing data; statistical analysis; monitoring or probing the state of the system; recording; presenting data; helping with exploring high-speed, high-volume, long sequence testing; the list goes on and on.
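As one small example of tool-assisted testing in that spirit, a tool can generate high-volume input data and flag surprises while a human decides what matters. The function under test and the idempotence oracle below are invented for illustration:

```python
# Sketch of tool-assisted (not "automated") testing: a tool generates
# high-volume input data and flags surprises for a human to judge.
# The function under test and the oracle here are illustrative only.

import random

def normalize_whitespace(text):
    """Toy function under test: collapse runs of whitespace."""
    return " ".join(text.split())

def probe(runs=1000, seed=42):
    """Generate random inputs; collect cases where a cheap oracle
    (normalizing twice should equal normalizing once) fails."""
    rng = random.Random(seed)
    alphabet = "ab \t\n"
    surprises = []
    for _ in range(runs):
        s = "".join(rng.choice(alphabet) for _ in range(rng.randint(0, 12)))
        once = normalize_whitespace(s)
        if normalize_whitespace(once) != once:
            surprises.append(s)  # a human decides whether this matters
    return surprises

print(len(probe()))  # prints 0: the toy function is idempotent, no surprises
```

The tool does the high-volume generation and checking; the tester still chooses the oracle, judges any surprises, and decides what happens next, which is exactly the division of labor described above.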

Checking, of course, can be automated, but you can’t automate the processes that surround the performance of the check: modeling and identifying risk; identifying a way to check for outputs and outcomes that would expose problems; programming and encoding those checks; deciding when to run the checks; and setting up mechanisms to launch them. After the check has been performed, humans make decisions about whether the check has revealed important information or might have missed something important. And after that, humans make decisions on what happens next. With the exception of the actual execution of the check, we need humans at every step. None of that stuff can be or will be performed by machinery anytime soon.

So, use tools by all means. Don’t limit your concept of what they can do for you. But beware: a tool amplifies whatever you are. If you’re a thoughtful and critical tester, tools can help sharpen your testing. But if you don’t apply tools thoughtfully and skillfully and critically — if you’re a bad tester — the tools will help you to do bad testing faster and worse than ever.

Most importantly, remember that the tester and his or her skills — not the tools, not the process model, not the documentation—are at the center of testing.

Michael, thanks for sharing your experience and ideas. We’ll be glad to see you and talk to you again.

You can follow Michael on Twitter.

Michael Bolton is a true classic of software testing.

One fact says it all: he has over 20 years of experience in the computer industry testing, developing, managing, and writing about software. Michael has delivered many workshops, tutorials, and conference presentations on Rapid Software Testing and other aspects of testing methodology on all five continents.

He has been a regular columnist in Better Software Magazine (formerly Software Testing and Quality Engineering) since 2005. He was an invited participant at the 2003 and 2005-2009 Workshops on Teaching Software Testing in Melbourne, Florida (hosted by Cem Kaner, James Bach, and Scott Barber) and a founding member (with Fiona Charles) of the annual Toronto Workshops on Software Testing. He is also the Program Chair for TASSQ, the Toronto Association of System and Software Quality.

We are happy to announce that today Michael is an honorable guest on our blog, and we start by discussing the specifics of quality and testability.

a1qa: Having over 20 years of experience in the computer industry testing, developing, managing, and writing about software, what is “quality” for you? Can a product/system be estimated as having a 100% level of quality?

My notion of quality comes from Jerry Weinberg: “quality is value to some person(s)” (Quality Software Management, Vol. 1: Systems Thinking; also available as two e-books, “How Software is Built” and “Why Software Gets in Trouble”). James Bach and I have added “who matters” to make explicit some of the things that Jerry presents in the book. When you think of quality as “value to some person who matters”, it should be absolutely clear that evaluating the quality of something starts with a decision about whose values matter.

Quality is always relative to some person. A product can have terrible problems in it, and yet still satisfy some people all of the time and most people most of the time. The idea of quantifying quality at all seems quite silly to me; what would it mean to have a 77% level of quality for a product? What would it mean to have a 100% level of quality? Until you can establish that, and until you can establish a scale that applies to widely differing people with widely differing sets of values, the number isn’t going to tell you anything meaningful. Numbers won’t do the trick there; you need a qualitative description.

What you can do, as a professional working in software testing outsourcing, is to identify important problems in the product that threaten its value such that the product owner (who is ultimately the person who makes decisions about quality on a project) might be unwilling to ship it. But that’s not usually a question of numbers to any serious degree. Instead, it’s a collection of qualitative criteria framed in terms of stories. Both testing and reporting problems are built on the idea of composing, editing, narrating, and justifying stories about our products. Sometimes the stories can be illustrated by numbers, but most of the time stories are about problems that would cause loss or harm or annoyance, loss of value, to somebody who matters. (See “Braiding the Stories (Test Reporting Part 2)”.)

a1qa: Is “testability” just a trendy word, or a product attribute that simplifies the tester’s task?

Testability is not just a trendy word. Indeed, it’s one of the quality criteria that I referred to above. In addition to reporting on problems with the product (we call those bugs), the tester must also report on issues — anything that threatens the value of the testing work; anything that makes testing harder or slower. A product that is not very testable will give more time and more opportunity for the bugs to survive below our levels of awareness. There are other dimensions of testability too: figuring out how we know a problem when we see one, and identifying aspects of the tester, or the project, or our notion of quality. There’s more on that here.

In the next post we continue the discussion with Michael and touch upon Rapid testing and test automation.

Rainar Ütt is Head of Quality Assurance at InnoGames. He considers his major task to be developing an integrated approach that enables the company to bring high-quality software to market faster. He explains, “The job at InnoGames is an appealing task in a very dynamic market environment. I like the international appeal of the company and I am looking forward to contributing further to the success of the company, as I believe that high quality standards are one of the key factors for online games.”

Rainar is also a frequent speaker. He recently participated in “Iqnite 2014,” the International Software Testing Conference in Dusseldorf, presenting his report “Quality management in online games development.” The presentation gives an overview of how online games are developed and tested, where the bottlenecks are, and how to survive with quality management in such environments. a1qa, together with Rainar Ütt, decided to take a closer look at testing issues in the gaming industry.

a1qa: The online gaming industry (according to global statistics) is at the peak of its popularity nowadays. The worldwide video game marketplace will reach $111 billion in 2015, up from $79 billion in 2012, according to Gartner, Inc. Do you consider this a long-term trend?

Rainar Ütt: The online gaming industry has been growing as more and more people get online and computer hardware gets faster, so you don’t need specialized hardware anymore. Whether it has reached the top, I cannot predict: in the ’80s there were consoles, then PC games; in the 2000s browser games took over, and the future is obviously mobile. As technology becomes more affordable and more people come online, this trend will definitely continue.

a1qa: It is well known that quality assurance is a critical component of game development, though the video game industry has no standard methodology. Why is quality so important in gaming, and how would you evaluate the scale of trouble undetected defects can cause compared to other fields?

Rainar Ütt: Of course you cannot compare online gaming to software development for the healthcare, automotive, or space industries, where quality assurance affects human lives. In game development, the worst effects software defects might cause are related to privacy, data protection, and security. Could you imagine someone stealing millions of credit card details from the database, or taking control of your computer through a cross-site scripting vulnerability on a game provider’s website?

There is also the case when your online game is simply not responding fast enough. Studies have shown that if a web request takes more than two seconds, customers would rather go to a competitor than stay, which is why web app testing has become crucial for ensuring performance.
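The two-second threshold translates naturally into an automated check. The sketch below is a simplified illustration (the sample latencies and the helper name are invented; a real suite would pull timings from a load-testing or monitoring tool such as JMeter): it computes what share of sampled response times stays within the budget.

```python
# Hypothetical latency samples in seconds; in practice these would come
# from a load-testing or monitoring tool rather than a hard-coded list.
samples = [0.4, 0.9, 1.2, 2.6, 0.7, 1.8, 3.1, 0.5]

BUDGET = 2.0  # the two-second response-time budget cited above


def within_budget(latencies, budget=BUDGET):
    """Return the fraction of requests that met the response-time budget."""
    return sum(1 for t in latencies if t <= budget) / len(latencies)


share = within_budget(samples)
print(f"{share:.0%} of sampled requests responded within {BUDGET:.0f}s")
```

A check like this can gate a release: if the in-budget share drops below an agreed service level, the build fails.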

Or when, due to data inconsistency or a functional problem, another player gains an advantage over you that should have been yours. In online gaming, defects in your system have a really strong effect on your business. In banking, if your online bank has defects, you can still retain your customers, because not all of them use the online bank. So I would rank the importance a little bit higher than in a bank.

a1qa: Since game QA is considered less technical than general software QA, some people assume game testing is pure fun, while we know this is not the case. As you are right in the middle of such an amazing industry, could you describe the way games are usually tested at InnoGames? How do you think this process might be enhanced?

Rainar Ütt: Yes indeed, game QA is often seen as just playing games and reporting defects. At InnoGames we keep a healthy balance between game testing and technical QA. At the very beginning of a game project, concepts are developed and changed rapidly; often a concept is discarded completely. Therefore, at this stage there is less focus on the technical aspects of QA, and it is important to develop acceptance criteria and checklists for functional testing.

Later, when the architecture evolves and fewer conceptual changes are introduced, it is important to start looking at the quality of individual components and the integration between them. For core user stories, we also introduce automated GUI tests, which run at least nightly after the automated unit and integration tests have finished.

Of course, there is still manual testing involved as well: when new features are being developed, a tester gives feedback through exploratory testing, among other testing types. Testers also write acceptance criteria, or, to put it another way, information for developers on how they plan to test a feature, so that developers make sure to do basic testing on it before it gets integrated.

We usually start cross-functional and non-functional testing a few months before launch. The architecture of online games is usually quite simple on the backend side, so this process is fairly straightforward.

Both our game testers and technical testers work as part of the development teams: each product development team has game testers and technical QA leads embedded in it, based on its needs.

a1qa: What are the most effective testing tools you have been using for game QA? Do you think test automation is applicable to online game development as well? In other words, is it possible to automate some aspects of this process?

Rainar Ütt: On one hand, online games are games; on the other hand, they are software. To be more specific, we are dealing with mobile and web applications, and there is a great variety of free and commercial tools available. Our best experience so far has been with Selenium. Our customer base is over 120 million worldwide, so cross-browser and cross-market testing is a really hot topic for us. We use the Thucydides framework, which gives us data-driven testing and reporting capabilities. We have created a small core framework that takes care of configuring Selenium and integrating with different tools, for example reporting into our test management tool (TestLink) and launching tests on a cloud testing service provider. Our automation architect maintains the framework, and all of our technical QAs write Selenium tests in Java.
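The data-driven pattern Rainar describes can be sketched independently of Thucydides or Selenium: one parameterized check runs against a table of test data, so covering a new market means adding a row, not a new test. The markets, locales, and function under test below are invented stand-ins; a real suite would drive a browser session instead of a pure function.

```python
CASES = [
    # (market, locale, expected_currency) -- one row per market under test
    ("de", "de_DE", "EUR"),
    ("us", "en_US", "USD"),
    ("uk", "en_GB", "GBP"),
]


def currency_for_market(market):
    """Stand-in for the behaviour under test (e.g. a page's displayed currency)."""
    return {"de": "EUR", "us": "USD", "uk": "GBP"}[market]


def run_data_driven_suite(cases):
    """Run the same check for every data row and collect any mismatches."""
    failures = []
    for market, locale, expected in cases:
        actual = currency_for_market(market)
        if actual != expected:
            failures.append((market, locale, expected, actual))
    return failures


print("failures:", run_data_driven_suite(CASES))
```

The payoff is maintenance: the checking logic lives in one place, while the cross-market coverage lives in data.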

Of course, Selenium cannot cover everything, especially frontend technologies such as Flash, which will remain one of the major technologies in browser game development. For Flash we use a combination of two tools: Genie, a tool from Adobe that can interact with objects in Flash, and Sikuli, a screenshot-recognition tool. For mobile end-to-end tests there was unfortunately nothing on the market we could use, so we are in the process of building our own tool that can interact with cocos2d-x and integrating it into a commonly used testing tool, either MonkeyTalk or SeeTest.

For API testing we use SoapUI, and for cross-functional testing, JMeter. We have also built our own tools for measuring the frame rate of mobile apps, and we improved the JMeter WebSocket plugin.

Automation is definitely possible for testing online games; sometimes getting started is a little difficult because of people’s skills. We’ve been through that process at InnoGames: when I started here, we only had manual functional QA, which got involved at the very end of the development process. Within a year we managed to rearrange the way QA works: instead of a separate team, we introduced QA as part of the development teams and became more technical. We ran several workshops where we learned about automation, trained our Java skills and data-driven testing, and eventually developed an Android application. For those saying automation is not possible in game development, I think it is rather a question of mindset.

a1qa: As a1qa has some experience testing online games like Roads of Rome and Islands Tribe, we know exactly how popular they can become when properly tested. Can you give us examples of the extent to which effective quality management influences the success video games achieve on the market?

Rainar Ütt: There are several factors that make a game launch successful. You have to have a brilliant game concept, find the right marketing channels, and, of course, ensure quality.

As mentioned above, if your product’s response time is too slow, if you cannot keep players interested for more than two rounds, if the player data isn’t safe, if the graphics aren’t awesome, or if customers cannot have it in their local language, the product won’t be successful. Another aspect is predictability: knowing when your product is going to launch. Good timing for a product or feature launch is important, as is keeping the deadline. For example, launching an Easter event in midsummer wouldn’t bring the same number of customers as launching it at Easter.

Rainar, thanks for sharing your experience and ideas. We’ll be glad to see you and talk to you again.

You can follow Rainar on Twitter.

Today, Steve Rowe is a Principal Test Manager at Microsoft.

What is amazing is that he has worked at this global company since 1997, i.e. for 17 years(!), which is so rare in the modern world of software testing.

Having started on the DirectXMedia team testing multimedia components in IE4, he is currently responsible for test development for the Windows Runtime in Windows 8. Previously, Steve also worked on Media Center, media pipelines (DirectShow and Media Foundation), DVD, video rendering, and audio.

In addition, Steve is a curious tester who keeps a popular blog presenting his “Ruminations on Computing – Programming, Test Development, Management and More…”. a1qa is glad to discuss the challenges and lessons Steve has learned during his long testing career.

a1qa: Steve, since you’ve been working for Microsoft for the last 17 years, you must be the right person to ask about the development and role of software testing in big global corporations like Microsoft. In your opinion, has the situation changed during the last decades, and have testers become more influential in the overall software development process?

Steve Rowe (S.R.): The role of the tester has changed a lot since I began. When I started, most testing was done manually. Development would implement some feature and then we would go pound on it by clicking buttons or writing scripts. At some point we realized how expensive it was to have people pushing the same buttons every day. For software like Windows, the testing doesn’t stop when we ship. Windows XP was actively supported for over a decade. That means someone pressing the same buttons every day for 10 years. So we had to innovate.

We moved into a phase where we automated the button clicking. Software Test Engineers gave way to Software Development Engineers in Test. Testing became less about exploring the software and more about writing tools to exhaustively cover the surface. This is an expensive proposition up front, but the cost can be amortized across the life of the software. This had the side effect of distancing the tester from the experience of the user. It was easy to write software that was correct, but hard to use. We had to innovate again.

The newest phase of testing seems to be about reconnecting to the user. The internet allows a real time connection with users. No longer do we have to try to envision what a user might do, it is now possible to observe what they are actually doing. Data science and machine learning can provide a lot of insight about what is working well and what needs to be fixed. A/B testing can be used to help determine what solution works best. Websites and services figured this out first. Big, packaged software is late to this party, but is starting to come along. I suspect testing will do much more in this direction over the next several years.
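The A/B comparison Steve mentions usually reduces to comparing conversion rates between two user groups. As a hedged illustration (the counts are made up, and real experiments need careful design beyond a single statistic), a two-proportion z-test is one common way to judge whether an observed difference is more than noise:

```python
from math import sqrt


def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic for comparing two conversion rates, using a pooled proportion."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se


# Hypothetical experiment: variant B converted 220 of 2000 users
# against variant A's 180 of 2000.
z = two_proportion_z(180, 2000, 220, 2000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests significance at the 5% level
```

In practice a product team would feed real telemetry into a calculation like this, then ship the winning variant.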

a1qa: It is interesting that you have worked for one employer for so long, which is not so common in the modern testing world. What impresses you the most about Microsoft? What is the most important lesson you have learned while working there?

S.R.: Microsoft was one of the earlier companies to really embrace testing. They developed the SDET role and really supported it. At some companies test or QA is relegated to a second-tier status. Not so at Microsoft where we had an equal seat at the table. The willingness to embrace testing, to put some of our best and brightest on the task, and to provide a full career path for test is one of the things that impresses me about Microsoft.

As for what I’ve learned: one of the most important lessons is the power of saying ‘No’. Carefully consider what you can do. Plan for it. And then commit to only that. There are too many good ideas to do them all. The criteria for accepting new work can’t be, “Is this a good idea?” Most of the time that answer will be yes. Instead it has to be, “Is this better than what we’re already doing?” If it is, take the lowest-priority items off your plate and add the new thing. Always clear space for the new work. You won’t magically be able to find time in an already full schedule.

Overcommitting may feel like the right thing to do. It may feel like it will get you ahead because that is what your manager wants, but over the long term, it will hurt you and the product. You’ll work really hard and still ship late. It is best to set realistic expectations up front.

a1qa: As a software tester you are working with the Windows 8 generation, which includes thousands of new APIs, from new UI controls to new ways of interacting with devices. How was Windows 8 different from other projects you’ve worked on, and what was the biggest challenge you had to solve?

S.R.: I’ve worked in and around Windows for my whole career at Microsoft. I started working on DirectXMedia and DirectShow which were the multimedia APIs in Windows 98. For Windows 8 we created a brand new type of application, the Windows Store app, and a whole new API surface for it called Windows Runtime. The magic of Windows Runtime was the ability to write an API once, in say C++, and then have it automatically projected into three different languages: C++/CX, C#/.Net, and JavaScript. This means three times the surface to test. This projection was free. As in free puppies. It didn’t cost much up front, but it was a lot of work.

The challenge was figuring out how to test all three projections without hiring 3x the number of testers. We did this through better policies and better tooling. We determined what kind and how much testing needed to be done in each language. We wrote tools that could scour the API surface and catalog it and other tools that could determine what had been tested already and what had not. We wrote tools that leveraged the metadata and automatically wrote tests for the APIs. We had a dashboard that could show us the status of every API in the SDK so we could find the weak points and shore them up.
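The internal Windows Runtime tooling Steve describes is not public, but the underlying idea, mechanically enumerating an API surface from metadata so that nothing goes untested, can be illustrated with Python’s `inspect` module against any module (the standard library’s `json` is used here purely as an example):

```python
import inspect
import json  # an arbitrary module whose public surface we catalog


def catalog_api(module):
    """Map each public callable in a module to its signature string."""
    surface = {}
    for name, obj in inspect.getmembers(module, callable):
        if name.startswith("_"):
            continue  # skip private members
        try:
            surface[name] = str(inspect.signature(obj))
        except (TypeError, ValueError):
            surface[name] = "(signature unavailable)"
    return surface


api = catalog_api(json)
print(f"{len(api)} public callables; e.g. loads{api['loads']}")
```

A dashboard like the one Steve mentions could then diff such a catalog against the list of APIs that already have tests, exposing the weak points.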

Thanks for being our guest and for sharing your viewpoint. We will be glad to see you and talk to you again.

Dmitry Tishchenko has been patching code for the past nine years, and says automated testing algorithms and agile development can help businesses anticipate rapid change that has become the bane of the industry.

Interview by Techweekeurope

Techweekeurope: What has been your favorite project so far?

Dmitry Tishchenko: My favorite project so far is actually the implementation of a KPI-based management system at a1qa. This system covers the business aspects of our company as well as testing itself. I’m addicted to measuring things: only once an activity is measured can we say that we fully control it.

For testing activities, this measurement approach means growing from quality control into real quality assurance.

Techweekeurope: What tech were you involved with ten years ago?

Dmitry Tishchenko: Ten years is a lifetime for the IT industry. During the last decade, the complexity of software, the value anticipated by business, and end users’ expectations have changed dramatically. Rapid change in IT has become the norm, and it will continue this way, possibly with even more intensity and drive.

Techweekeurope: What tech do you expect to be using in ten years’ time?

Dmitry Tishchenko: One of the most important factors to emerge during the last decade is mobility. Ten years ago we were focusing on heavy desktop systems, but now even corporate software is ready to be ported to mobile platforms. The mobile world is investigating better ways (including UI and hardware: different screen resolutions and sizes, number of buttons, etc.) to interact with users. I believe that moving corporate systems to the mobile environment while ensuring compatibility and security is one of the top priorities in the IT world.
Another point is the increasing number of transactions and the complexity of the software’s integration paths. I am 100 percent sure that performance, the ability to process large amounts of data more quickly, and increasing integration complexity are challenges all of us are going to face in the near future. They will push the hardware market as well as the software.

I also expect that existing software solutions will be fragmented into smaller parts, with a goal to target and address particular and precise tasks.

Techweekeurope: Who’s your tech hero?

Dmitry Tishchenko: I believe that heroes are the individuals who make a breakthrough, whose inventions significantly improve results, both through the solution itself and through the process of applying it. In terms of process, my heroes are the Agile founders. Agile principles have significantly improved the flexibility of the IT development process and its ability to respond quickly to change.

Techweekeurope: Who’s your tech villain?

Dmitry Tishchenko: I cannot give you any particular names here, but I’m far from being fond of the people who do not respect Intellectual Property rights in our industry.

Techweekeurope: What is your favorite technology ever made?

Dmitry Tishchenko: There are a lot of great tools which help testers do a better job. But there are also tools which have an impact on the whole industry. I believe that one such tool is the Selenium project. It increased the flexibility of automated tests, made them quicker, and much, much more.

We at a1qa use Selenium widely. Over the last three years, we managed to improve our ROI timeline by 50 percent (10 percent of that due to the fact that we’ve adopted Selenium).

Techweekeurope: What is your budget outlook going forward?

Dmitry Tishchenko: With a constantly growing competition across all industries and verticals, businesses are becoming more and more IT dependent in order to keep up with consumers’ requirements and expectations, which are growing exponentially.
So nowadays, even if a business is not focused on IT, the share of IT in the business is constantly growing. As a result, the budget for technology is increasing. On one hand, that’s fine considering the penetration of technology into people’s lives. On the other hand, it’s quite a challenge for non-IT companies to deal with and invest in something that is not their core business.

Techweekeurope: Apart from your own, which company do you admire most and why?

Dmitry Tishchenko: I admire companies that bring innovation to the masses. Although it’s great to find new ways to optimize the production process, it is many times more significant to have an impact on the lives of ordinary people. In our company, we contribute to this ultimate goal by improving the quality of software. Following this concept, I really admire Apple for what they do, how they do it, and their dedication to end users’ satisfaction.

Techweekeurope: What is the greatest challenge for an IT company/department today?

Dmitry Tishchenko: The market is highly competitive and IT companies must be able to deliver their products or services quickly. So I believe that the main challenge is to continue optimizing time and costs, and increasing the effectiveness of software development life-cycle, as well as quality of the software.

Techweekeurope: To cloud or not to cloud?

Dmitry Tishchenko: To cloud, of course! Many people consider “the cloud” to be a technology. The way I see it, it’s more of a new level of abstraction. It helps people to use more and more IT services in a convenient and user-friendly way, which does not require special technical background. As a result, it has turned out to be an accelerator for the entire IT industry, setting a new direction. Software QA, as an IT industry vertical, has also been impacted by the “cloud” revolution, reflected in the TaaS (Testing-as-a-Service) concept.

Living in Switzerland, Franco Martinig has always tried to reconcile his interest in software development with literature. In parallel with his consulting activities, in 1993 he created Methods & Tools magazine, a free PDF publication that covers all aspects of software development, and in 2009 the website SoftwareTestingMagazine.com, which presents software testing knowledge, resources, and news.

a1qa: Franco, today you edit several reputable software development magazines and websites. This role gives you a “helicopter view” of everything happening in the world of software development and software quality assurance. What comes to mind first when you are asked about tendencies and trends in software testing? Where are we going today?

Franco Martinig (F.M.): I mostly worked in teams where people performed multiple activities: requirements, development, testing. This fosters the developer’s knowledge of the business, which I think is an important asset. It is also an organization that limits the loss of information that occurs when you have a business analyst talking to the end users and then transmitting information to a developer and a tester. I recognize, however, that it can sometimes be difficult for developers to “break” their own code. Tests performed by other people are important for getting an external view of your application; testing just needs to be done by other people to avoid bias. I am, however, always surprised when people argue that unit testing your own code is useless. It is always easier and cheaper to catch a mistake in freshly produced code than to discover what went wrong later in the development cycle.

One of the current trends in software is the adoption of Agile approaches that should improve team collaboration and responsibility. The frequent delivery of working software is a challenge for software development teams and a true Agile “behavior” can be difficult to implement. Agile software testing techniques are helping to improve the quality of developed software and the visibility of software testing. Test-Driven Development (TDD) and Behavior-Driven Development (BDD) are two techniques that use tests as a support and validation for the coding and requirements activities. The adoption of these techniques is also made easier by the development of solid open source software testing tools for unit testing (the xUnit family or various mock frameworks) and functional testing (Selenium, Cucumber, etc.).
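To make the TDD side of this concrete, here is a minimal xUnit-family example using Python’s `unittest` (the function and its tests are invented for illustration): in TDD the tests would be written first, seen to fail, and the function then implemented just far enough to make them pass.

```python
import unittest


def normalize_username(raw):
    """Trim whitespace and lowercase a login name."""
    return raw.strip().lower()


class NormalizeUsernameTest(unittest.TestCase):
    def test_strips_and_lowercases(self):
        self.assertEqual(normalize_username("  Alice "), "alice")

    def test_already_normalized_input_is_unchanged(self):
        self.assertEqual(normalize_username("bob"), "bob")


# Run the suite programmatically so the example is self-contained.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(NormalizeUsernameTest)
result = unittest.TextTestRunner().run(suite)
```

The test doubles as executable documentation of the requirement, which is exactly the validation role for tests described above.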

a1qa: We know you were deeply involved in the development process in the past, working for banks. What do you think of developer testing practices specific to the financial industry?

F.M.: In Switzerland there has been an important shift from solutions developed internally to the implementation of packaged banking software. In most banks, the main package rarely covers all the functional requirements, so banks deal with multiple suppliers and need to integrate their products. The increased number of stakeholders in this type of project, and the fact that testers now have to adopt a black-box approach, create important issues, as it can be more difficult to determine where some bugs come from. Suppliers have a natural tendency to blame other people for the problems… just like we often do as software developers ;O). Another trend I notice is the increased difficulty of getting access to actual customer data when you try to understand production bugs. I understand the fear of seeing personal data leak to the outside world, but this situation makes it harder and slower to understand which specific customer configuration was at the origin of a bug and how to fix it.

a1qa: How do you see the role of independent testing companies (like a1qa) on this overall software testing map? In other words, where do you stand in the internal testing versus independent QA debate?

F.M.: At the beginning of my career, software mainly targeted “internal” end users, and intimate knowledge of the system and the business is an advantage when testing that type of application. Now software is used by “external” users through websites and mobile apps. I think this is where providers of software testing services have a major role, as they can propose structured testing of these systems with a “neutral” customer vision, something harder to find inside the organization. If you provide products or services on a global basis, you also need people who can assess your product from different parts of the world. Does your application work the same in Moscow, Dubai, Bombay, or Rio? Finally, these companies can also provide expertise in software testing techniques and tools that can improve the execution of testing by internal teams.

Thanks for being our guest and for sharing your viewpoint. We will be glad to see you and talk to you again.

Follow Franco on Twitter.

With over 18 years of IT experience with various platforms and technologies, Scott Moore has tested some of the largest applications and infrastructures in the world. He has mentored and developed testing services for Big Five services firms, top insurance companies, and major financial institutions in the US. He is a Certified Instructor and Certified Product Consultant in HP’s LoadRunner and Performance Center products. In October 2010, Scott became President and CEO of Northway Solutions Group, and he currently holds HP certifications for ASE, ASC, and CI. It is a special treat for a1qa to talk over performance and load testing with Scott.

a1qa: Scott, you have devoted your career to performance & load testing. Based on your experience, what issues typically arise when you start a new project?

Scott Moore (S.M.): I started my career in load testing back in 1998. I found a passion for it, and it is something I love to do. I started Loadtester in 2004 and merged with AdvancedQA in 2010 to form Northway Solutions Group. Among all the testing disciplines, we still focus on performance testing, but we also offer services around functional automation, implementing QA processes, and training, mostly for the HP software solutions.

While it is different from client to client, there are a few common things I see. There are those companies that already understand the importance of performance testing and understand where it fits in the software development lifecycle. In those cases, it is easy to jump in and get the project done with minimal issues. They generally have good requirements, test environments that match production, and respond quickly when you need something to complete a task. Unfortunately, this is not the experience very often.

Then there is the company that has no experience with performance testing. They don’t know what they don’t know. They usually have their heart in the right place, and want to do the right thing, but they don’t know what the right thing is. In those cases, they generally listen to us as trusted advisors and we’re eventually able to get testing completed successfully. However, there is a lot of hand holding, and the projects normally take longer than expected. Many times the people you are working with are dealing with things they never had to think about before. As long as they are willing to do what it takes to get the job done, we can be successful.

The last type of company is somewhere in between. They think they already have everything in place and have a lot of approvals and political issues to deal with. They may have a defined process for QA in general, but it usually isn’t performance testing friendly. Not all players on the team are as involved as they should be, and there is generally a lot of confusion. This is probably the most common type of company that we work with. In some cases, it can be hard to be successful.

Typical issues that arise would be a poor understanding of performance testing tasks and deliverables, invalid test environments, bad or insufficient test data, bad or undefined performance requirements, unresolved functional defects in the application under test, and access restrictions for monitoring the environment under load. I thought that by now companies would have learned and some standards would have been put in place, but we face some of the same issues on projects today as we did in the late ’90s.

a1qa: Based on your experience, can you remember concrete cases when load testing saved a lot of money and reduced risks for the client?

S.M.: Yes! Some specific examples would be:

1. We found a single outdated DLL file that had caused a company to over-purchase hardware by $500,000.00. Only two days of performance-testing investigation saved them that much.
2. We located a bottleneck in a single file with poorly performing code that was shared by four other applications in the company. Correcting that shared code fixed all five applications, saving thousands of dollars in development troubleshooting time alone.
3. In another project, testing results isolated five lines of code in an application that, when rewritten, accounted for a 1000% performance improvement, saving hundreds of thousands in hardware purchases and software development time.
4. One client had a load balancer with outdated firmware. When it was updated, end-user page times went from 30 seconds to 3 seconds.

I have also seen it work the other way. I have seen companies go into revolt after finding out the cost of a load testing exercise would be $20,000.00. They decided to risk it and roll out their application without a performance test. They lost $2 million in clients the first year because of performance problems. I could share story after story about this. When I hear the words “performance testing is too expensive”, I always have to ask, “Compared to what? What will an hour of downtime in production cost in real dollars? What will it cost your reputation? What would a bad review in the iTunes store translate into in terms of dollars?” At the end of the day, the business has to weigh the costs versus the risks. It’s their decision.

We try not to take projects without success being defined up front in the statement of work. There needs to be a good pre-qualification as to WHY performance testing services are needed and what is expected from the exercise. If you can’t go into a project knowing you can be successful, then why do it? Many times I see testers blindly going through their tasks to test an application without knowing why they are doing it and what everyone hopes they will accomplish. If you cannot go into a project knowing that you will add value by reducing the risks of deploying an application, or if you cannot demonstrate that finding bottlenecks before end users will save quantifiable amounts of money, then you really should not engage. When done properly, performance testing usually pays for itself in the first round of test execution when something unexpected is found.

a1qa: Do you consider HP LoadRunner to be the best solution for performance testing?

S.M.: Overall, I do. Especially for the enterprise customer. It could be overkill for a small shop that only needs end-user timings for web/HTTP traffic and web services calls on a small website, or is doing basic “science experiment” testing on the developer side. With the new version 12 of LoadRunner, there is a free 50-virtual-user license. I think this opens the product up to smaller shops that don’t have a huge virtual user requirement but still want to use a standardized load testing product.

a1qa: Why would this tool be better than others on the market?

S.M.: “Better” really depends on context and the situation at hand. Are there other products worthy of being used instead of LoadRunner? Sure, depending on your requirements and budget. There are three main reasons I prefer LoadRunner. The first is protocol support. No other product has as many transport protocols as LoadRunner. It allows you to test web applications, XenApp-deployed applications, mainframe “green screen” applications, ERP and CRM packaged applications like SAP, Oracle, and Siebel, and many more with the same product and the same basic process. No other product on the market that I know of has this level of flexibility across applications. For a large enterprise, this is critical.

Secondly, the ability to correlate end-user timings with all of the native LoadRunner and SiteScope monitors gives the performance engineer a lot of data to pinpoint where bottlenecks arise in real time. The monitors are integrated into the tool once configured, so you don’t have to rely on third-party solutions or additional resources until you home in on a specific tier or problem spot. Again, I don’t know of another product that supports as many technologies as LoadRunner does. In the enterprise, you never know what you will run across, so this is very beneficial.

The third thing I like about LoadRunner is the Analysis engine. This component takes end user timings and correlates the data from the monitors gathered during test execution, storing them all in one place. This gives the performance engineer a powerful tool for sorting and displaying data in a way that makes sense for technical and non-technical roles involved in the testing project. I think this really separates LoadRunner from other products.

All of that said, performance engineering is a skill that should be tool agnostic. The process should work regardless of the tool, but some tools make it easier for the engineer to deliver than others. In my experience, I have found that to be true with LoadRunner. Many times the tool is blamed as the problem, when it may be the tool not being used properly. Whichever product is used, as long as you get the trustworthy results you need to be successful, that is the “better” one.

a1qa: Over the last few years, more companies are choosing “cloud” solutions for performance testing to avoid paying for equipment that is idle. Some companies say this allows them to deploy a test environment more quickly. Would you agree this is the best approach when performance testing is less frequent?

S.M.: Generating load to and from the cloud has been a hot topic. It further complicates the requirements from a testing point of view. Is it a private or public cloud, or a mixture? Is it testing against production or a production-like environment? I understand the argument that having a production-like testing environment and a full-featured load testing lab can be expensive compared to doing everything as a service. However, it needs to be in context. Enterprise shops are usually testing all the time for multiple projects and require a test environment as close to production as possible, with a lot of controls in place to ensure that test results are accurate. Any and all variables that can be removed should be. It’s not easy to spin up a complex SAP environment on the fly, whether it is in the cloud or not. For those companies with a traditional three-tier LAMP-stack website on virtual servers, it may make sense to deploy and test against a “like” environment as needed in the cloud, and then shut it down when idle to save some money. The same issues with the application under test that I mentioned earlier concerning environment, functionality, data, and monitoring access still exist whether you generate load to or from the cloud. Version 12 of LoadRunner now supports generating load from generators configured in Amazon EC2. Other products have this capability. The correct approach is still situational.

Unfortunately, in some cases we’re seeing a shift away from a more controlled testing process to an uncontrolled (almost whimsical) approach to performance testing in the name of “Agile”. There is an attitude that we can just spin up some virtual machines or test against production like we’re kicking the tires on a car and seeing what the results are. We get a few graphs and that satisfies the business that things are OK. Then everyone wonders why there are still performance issues in production. In my opinion, regardless of whether you test your application continuously or once a year, the most important decision about the approach you decide on should be one that enables the most realistic load tests and eliminates the most risks by removing as many variables and unknowns as possible. If we’re looking to the cloud for a solution because it will accomplish the same thing cheaper and faster without changing our process, that is great. If it is just thinking “well, we don’t have the time and money to do it the right way, so let’s just spin something up, throw some load at it and see what happens”, then I think that is a poor excuse.

a1qa: Scott, thanks for sharing your point of view. We will be glad to see you as our guest again.

Follow Scott on Google+, Facebook and Twitter.

Today a1qa continues interviewing Paul Carvalho, a software coach and practitioner. You can check out part 1 of the interview, published on Thursday. This time we try to cover testing tools and Agile methodologies.

a1qa: We have been discussing the role of electronic tools and automated methods with many previous interviewees. Frankly, the opinions differed. So, what do you think: are they evil, or a panacea?

Paul Carvalho (P.C.): I don’t believe electronic tools will ever be the remedy for all diseases. I think that impression comes from the current statistical distribution of the general QA/Tester population. That is, we see a large percentage of testers focused solely on doing functional/black-box/system testing, so that is what most people (e.g. managers, developers, general public) think of as Testing. Therefore, automated/electronic tool vendors service that field, and that is where the “human vs. automation” debate festers.

Always remember that machines can’t think. The most important tools in your QA/Testing arsenal will always be your brains and your collaboration with others. Computers do what they are told, and nothing more, and that is a very narrow (and potentially dangerous) way to see the world.

If the majority of QA/Testers are doing work that can be better and more reliably performed by machines, then that testing/checking should be automated. That is the logical argument. But of course, we are talking about people here, so all this emotional stuff filters into the arguments and pollutes the logic.

I have no problem automating people out of a job. The reality is that if you do something that can be better performed by a machine, you need to take a good long look at what value you think you are contributing to the project and team. Is this an evil point of view? No. Humans should do creative, imaginative work, not boring, repetitive tasks that they are likely to mess up because, well, they are human.

The pinnacle of test automation on a development project may be found on a team following the Continuous Delivery model. The developers are responsible for automating all the risky parts of building software (including the majority of tests) so that it becomes boring and routine. This is a good thing. We don’t want software releases to be risky or stressful. We want them to be easy and quick so that we can release often and with confidence, and have our weekends free.

There is an opportunity here though. The Continuous Delivery practitioners say that we need some good thinkers (e.g. exploratory testers) to help the team see beyond the traditional automated checks. Good testers are amazing thinkers. They are valued and highly respected by developers who recognize their contributions above the usual boring functional testing work.

Good testers don’t fear automation replacing them any more than a carpenter fears being replaced by a tool belt. Automation and electronic tools present opportunities for testers and teams. Opportunities to grow and advance to new levels of quality confidence that are otherwise unattainable with manual methods alone.

Should testers learn about electronic tools that can help them do their jobs more quickly and efficiently? Yes. Do all testers have to become programmers? No. Do tool vendors service all our needs? No. Will they ever? No.
I do expect tools to get better — more useful and powerful. I also expect we will have fewer testers in the future as most of the boring, repetitive types of testing work will be automated by default. We’re not there yet, so there is still a wide range of skills, interests and opinions about how these tools help and hinder us in delivering high quality software.

a1qa: Paul, you spent a lot of time researching and teaching Agile and exploratory testing methods. In your view, are these two connected somehow? Do exploratory testing skills help one become a better Agile tester?

P.C.: Yes. Becoming a good agile tester means becoming more adaptable and collaborative with your team members. I sometimes describe this as a two-stage process.

First, it’s important to know how to think and to really understand your particular specialty. That is, as a tester, I expect you to understand how to:

  1. think of good questions;
  2. design tests to uncover interesting things;
  3. look at your results to understand potential gaps and weaknesses, and [very importantly];
  4. communicate what you learn to others.

I see Exploratory Testing (ET) as a way that covers all these things, allowing for flexibility within each of those elements. I know some people who have a more narrow interpretation of ET and see it as a random, haphazard testing activity.

The next stage in becoming a good Agile Tester is to take Testing to the next level. You need to stop thinking like an *independent* contributor, and start thinking like a team player. New tools and techniques emerge when this happens.

For example, when I coach teams and give workshops I often hear the phrase: “how can I test this?” That is the wrong question. The right question is something like: “how will we test this?” or “how can we ensure the customer is happy when we deliver this to them?”

Note the difference in words. It’s not about you anymore. Everyone on an agile team has a special skill, a piece of the puzzle. It’s only through good collaborative communication, and iterative work and improvement that you will reap the agile benefits and rewards.

As an Agile Tester, you need to stop focusing on yourself and your testing contribution. Instead, focus on helping others on the team to learn what you know so that they can do it too. Start looking for new opportunities and ways to test things together, and you will find a powerful new Testing force that you can’t achieve on traditional waterfall-type projects.

As an Agile Tester, you need to be a great tester. Your team members expect you to bring Testing expertise to the group. Without ET, you may find yourself struggling to apply familiar/traditional/waterfall testing approaches in ways that don’t help very much. You will be the weakest link.

And knowing ET alone isn’t enough either. I have seen good testers who couldn’t wait to get back to their desks to “find all the bugs that the devs missed.” Sigh. This is not a whole-team mindset to Quality, and also doesn’t help very much in the long run.

If you want to be a successful Agile Tester, learn to be a great tester and a great collaborative team problem-solver. You need both for success.

a1qa: Paul, thanks for sharing your point of view. We will be glad to see you as our guest again.

Follow Paul on Twitter and subscribe to his blog.

Paul Carvalho, like everyone a1qa has interviewed, is really passionate about Testing and enjoys mentoring and helping others develop better testing skills and knowledge. His colleagues call Paul “The Quality Doctor”. With over 25 years working with teams on computers, software and technology across various domains, he specializes in systems analysis, collaborative problem solving, test design and exploratory testing. During his long career Paul contributed to the success of BlackBerry, AGFA, and many scientific and research centers.

As a software testing coach and practitioner, Paul helps testers and teams grow and adapt to complement development efforts – to increase collaboration, effectiveness and efficiency, and to provide value to projects. He is a regular speaker at testing workshops and conferences, blogs occasionally, loves scripting with Ruby, and enjoys discovering new ways to break systems and applications.

“Have concerns about your Quality, QA, Testing or integrating with Agile? Let’s talk” – this is what Paul proposes in his blog. a1qa accepted the invitation.

a1qa: Paul, during your impressive career you have worked in a variety of industries, including financial services, healthcare, telecommunications, engineering and many others. Which industry was the most challenging and demanded the most QA skills, patience and diligence?

Paul Carvalho (P.C.): About 15 years ago, I first heard the Jerry Weinberg quote “all problems are people problems.” I hadn’t met Jerry yet, and so I struggled a bit with that phrase at the time. I was rising in my career, pursuing learning from anyone and anything related to software testing and QA. For reference, getting good information and training was harder in the late ’90s than it is today.

I dived into quality assurance consulting and it was like magic. I couldn’t wait to show other testers what I learned and how powerful these new techniques were. I was working in the Healthcare industry at the time. I learned two valuable lessons while working in that company:

1. Not all testers cared to learn test techniques or think of their jobs as a profession. Blasphemy! Surely all testers would want to learn these magical, scientific thinking tools?! Alas, I was wrong.

2. The majority of the problems we encountered in getting products out the door had nothing to do with software testing techniques, processes or tools. Most of the problems came from people and relationships.

It was then that I truly understood that “QA” involves a lot more than testing, and that there were whole new areas of skills that I wanted or needed to work on beyond the test techniques that I was proud of.

I would say that the Healthcare environment was the most challenging and interpersonally complex, the most rigorous in regulatory terms, and demanded more QA skills, patience and diligence than any other field I have worked in. I have a lot of respect for the professionals who continue to work in these intense environments over many years.
On a side note, I met Jerry Weinberg years later and he said to me after hearing a particular story I told: “All problems are people problems, especially the technical problems.” I now fully understand and appreciate that statement. That’s one of the reasons I like the Agile community so much – every coach I meet treats all problems as people problems. I have learned so many more new techniques and approaches for problem solving from the Agile community.

a1qa: How responsive were people in the industries you mentioned to specific testing principles and requirements?

Paul Carvalho (P.C.): Different industries and technologies have different requirements for testing. I think each industry is responsive to testing principles and approaches, but I see the problem as more complex than that.

There is no “one right way” to do testing. In each situation I needed to learn what the project and customer needs were so that I could efficiently provide that information. Sometimes I would focus on exploratory testing; sometimes it might be test/check automation. Once, my main role was “Beta Test Manager” and I focused on managing a community of people worldwide to provide feedback on products that weren’t released yet. These were regular people, not testers, so it was certainly a challenge (for me) to keep the participants motivated and provide good feedback to us so we could make the products better.

In every company, I got the sense that Testing is important (in some way). The more you move from company to company, or industry to industry, the more flexible and adaptable you need to be in the types of approaches you use. I would say that moving around has helped me appreciate what “context-driven” means more than someone who has only ever worked in one place.

The first time you go into a company and try your favourite technique, tool or approach and watch it crash and burn, is very humbling. You adapt, learn and grow. You become wiser and better. Let go of your personal preferences and focus on what other people need. Challenges to specific testing principles often happen when testers refuse to let go.

Read the continuation of the interview next Tuesday.

Follow Paul on Twitter and subscribe to his blog.

Raimond Sinivee says: “Context-driven testing is my cup of tea…” But his area of interest is much wider than that, broadly covering tools-based testing and test management as well. Today Raimond is a council member of the Estonian Testing Board and a member of the Association for Software Testing. Raimond has spoken at every Peers of Estonian Software Testers peer conference. He is one of the founding organizers and a content owner of Nordic Testing Days.

Raimond Sinivee started his career at Estonia’s biggest mobile telecommunications company, EMT AS, as a tester, working on everything from stored procedures to web UI application testing and creating tools for testing. Currently he applies these testing concepts working for the major VoIP provider Skype.
a1qa has the honor to ask Raimond some questions.

a1qa: Raimond, you are an adherent of context-driven testing, right? Some time ago we interviewed James Bach, who actually stood at the origins of this methodology. James suggests a program working well for one person in a given situation might prove inadequate or inappropriate for another person or context. Has context-driven testing proved its effectiveness for Skype, which is a really client-oriented technology?

Raimond Sinivee (R.S.): I value the principles of context-driven testing. The first two principles are that the value of any practice depends on its context, and that there are good practices in context, but no best practices. This means what you’ve just said: in web app testing there can be no silver bullet or a script that works in all situations. Context to me means all the aspects of the situation, taking into account people, background, available resources and goals, among other things. For example, I avoid theoretical questions in interviews that ask what you would do. I haven’t seen a person who could give enough context to really explain all the aspects that matter, and I would not actually know how the person would behave. I can ask behavioral questions, because the person can explain the context: he or she has experienced it, and that is a specific example we can learn from.

Skype has a lot of different teams working on different client applications. Skype is committed to platform diversity, which means Skype also makes clients for Android, iOS, Mac OS X, etc. The platform specifics are context to the test teams, and not all teams test exactly the same way. Similar teams share knowledge and tools, but every team uses the specific testing that matches its context. I work on the Skype plug-in for Outlook.com, OneDrive.com and Facebook.com, and I use web-based testing, which differs from Skype desktop application testing in many ways, such as the tools we use and the platforms we need to cover.
Context-driven testing says: “Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products,” and I think that is a great responsibility. I think this way of thinking attracts smart, thinking people, and it’s an honor to work with them.

Skype uses SCRUM for software development project management and I think Context-Driven-Testing matches the values of Agile very well.

People value these principles, and context-driven testing makes it easy to react to the situation. I create test plans so that I can change them quickly, and they are not fragile to the changes I want to stay open to. I use test ideas and objectives and don’t use detailed scripts for test planning. Another team at Skype could use a different level of detail in their test plans if that matches their context.

I think that context-driven testing gives the needed flexibility and lets people make smart decisions. The company does not need to state that it uses context-driven testing; everybody can do it by themselves, like I do at Skype, because not every team uses this definition or has to completely agree with it.

a1qa: Raimond, according to your vision “Testing is context driven learning process providing new, fast, relevant and trustworthy feedback to all stakeholders about the state of the software.” What is the role of testing tools in that process? I mean both debugging tools as well as complex systems supporting test management activities.

R.S.: First of all, I think that using a debugger is a sign of problems in the coding process, and I use it as rarely as possible. It suggests to me that the team is not using test-driven development, or is doing it wrong, because unit tests should give that information anyway. I do use it for crash debugging, to make my bug reports easier to understand and to follow the way coders approach these kinds of issues.

I haven’t used complex systems for supporting test management activities, because by definition they are complicated, and I want to make my life easier. I’d rather use lightweight, easy tools such as readable documents tailored to their readers.

The role of testing tools is to speed up feedback and to help check aspects that are humanly impossible or hard to achieve. I use tools for things that I’m not able to do, or that a tool can do faster.
I use tools to collect information to make decisions. For example, I test a video calling application and use tools to make a lot of calls to measure different kinds of resource usage. The tool does not return a pass/fail answer; it returns a numerical answer, and I make a decision based on the context. A value may be reasonable in one situation and not in another, depending on the change or the environment state. Harry Robinson has good examples about this in his blog.
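A minimal sketch of that kind of measurement tool (hypothetical, in Python): it runs an operation repeatedly and reports raw numbers such as median latency and peak memory, deliberately returning figures rather than a pass/fail verdict, so the engineer can judge them in context.

```python
import statistics
import time
import tracemalloc

def measure(operation, runs=50):
    """Run an operation repeatedly and report raw numbers, not pass/fail.

    The engineer interprets the figures in context: a median that is
    acceptable for one release may be a regression for another.
    """
    timings = []
    tracemalloc.start()
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        timings.append(time.perf_counter() - start)
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {
        "runs": runs,
        "median_s": statistics.median(timings),
        "p95_s": sorted(timings)[int(runs * 0.95) - 1],
        "peak_kib": peak_bytes / 1024,
    }

# Stand-in for "making a call": here we just build a payload in memory.
report = measure(lambda: [x * x for x in range(10_000)])
print(report)
```

A real version would drive the application under test instead of a lambda; the shape of the output, numbers for a human decision, is the point of the sketch.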

a1qa: Coming to test automation, let’s discuss the famous testing pyramid concept. How realistic is it to follow all the stages if we are not developing a new product, but implementing automation in a product that has already been released? How likely is it that the API layer of the pyramid will merge into the third one and become UI?

R.S.: This also depends on the plans for the application and where the company’s priorities lie. Having no, or a low number of, unit or API tests hints that the application would be hard to test on levels other than the UI, and that means it needs to be refactored. It becomes a trade-off: would refactoring add value, or would the changes not give much benefit because, for example, we don’t plan to change the application and it’s in maintenance mode?

I don’t believe in creating unit tests after the fact of coding, because the programmer would then need to break the code to show that the test would find the issue if it existed. I have experienced that unit tests written after coding just conform to the code and bring very little value compared to tests that were seen failing because they were written before the code.
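Raimond’s point about seeing a test fail first can be sketched with Python’s built-in unittest (the slugify helper is hypothetical): in the red step the test is written and watched failing against missing behaviour, and only then is the minimal green implementation added.

```python
import unittest

# Red step (done first): the test class below was written before slugify
# existed and was watched failing -- proof that it can actually detect the
# missing behaviour. A test written only after the code tends to mirror
# the implementation instead of checking it.
def slugify(title):
    # Green step: the minimal implementation that makes the test pass.
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_joins_words(self):
        self.assertEqual(slugify("Nordic Testing Days"), "nordic-testing-days")

# Run the suite; exit=False keeps the script alive after the test run.
unittest.main(argv=["tdd-sketch"], exit=False, verbosity=0)
```

The value is not in the final assertion but in the order of events: a test seen failing once is evidence it can fail again when the behaviour regresses.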

I think that the testing pyramid shows the trade-offs and differences between the different levels of testing, and it is a guideline for refactoring or building an application, not something only for testing teams to follow. It’s very likely that we can have only end-to-end UI tests if the application is not testable from the API level and no changes will be made to the system under test to increase testability.
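The testability trade-off behind the pyramid can be sketched as follows (hypothetical names, in Python): a business rule buried inside a UI handler can only be exercised end to end, while the same rule extracted into a plain function becomes reachable by the fast tests at the bottom of the pyramid.

```python
# Refactoring for testability: the discount rule used to live inside a UI
# event handler, so the only way to check it was to drive the whole
# interface end to end. Extracted into a plain function, it becomes a
# seam that unit- and API-level tests can reach directly.
def discount_rate(order_total, loyal_customer):
    """Pure business rule, trivially testable at the bottom of the pyramid."""
    if order_total >= 100:
        return 0.15 if loyal_customer else 0.10
    return 0.05 if loyal_customer else 0.0

def render_checkout_banner(order_total, loyal_customer):
    """Thin UI layer: formatting only, all decisions delegated downward."""
    rate = discount_rate(order_total, loyal_customer)
    return f"Your discount today: {rate:.0%}"

# The rule is now checked without any UI in the loop.
assert discount_rate(120, loyal_customer=True) == 0.15
assert render_checkout_banner(120, loyal_customer=True) == "Your discount today: 15%"
```

Without such a seam, every check of the rule would have to climb to the top of the pyramid, which is exactly the situation Raimond describes for untestable released products.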

a1qa: Raimond, since you are one of the Nordic Testing Days conference organizers, could you tell us a few words about this event? What goals are you setting for it? Who do you expect to be the key speakers this year? And finally, why should people attend?

R.S.: The main aims are to educate people and motivate them to learn more about testing. Secondly, we want to raise the awareness of the testing discipline in the region and make it a conscious career choice. Third, we aim to create a network and strengthen the bonds between people, so that they share their experiences and communicate more easily.

Our key speakers for this year are Matthew Heusser (USA), a writer, blogger and vocal test professional; Anto Veldre (EST) from CERT; and Jevgeni Kabanov (EST) from ZeroTurnaround. The final keynote speaker will be announced in the upcoming weeks.

People should attend because we offer great lessons drawn from people’s experiences, along with practical tutorials, workshops and tracks.
Attendees have told us that they have really learned something practical, and we are gaining popularity, with more people every year creating better networking possibilities.

Reach Raimond Sinivee on Twitter.

Today a1qa continues interviewing Joel Montvelisky, an international software expert and popular testing blogger. You can check out part 1 of the interview, published on Tuesday this week. Joel suggests: “The main link for all my tasks is the fact they are centered around the world of Testing.” So this time we try to touch on various testing tools and the ways they are implemented.

a1qa: A lot of new testing methods and tools are appearing today. Is it worth an ordinary tester’s time to go through them all in order to be more efficient?

J.: Imagine you or someone in your family is sick. Now imagine you are looking for a doctor to treat your sick family member.

Given the choice between two doctors which one would you choose: the doctor that is up to date with all the new diseases, diagnostic methods, treatments and drugs; or the doctor that constantly sticks to the same methods time and time again almost regardless of the patient standing in front of him?

Now let me ask you another question. Given the choice between another two doctors: the one that is busy all the time in his study reading all the latest medical publications and can tell you everything about the latest treatments, or the one that is busy all day in the hospital treating actual patients and learning from practical experience. Which doctor would you choose then?

As you can see, the world is not black and white and there are no easy choices. This is also true for quality assurance consultants.

As professionals we need to make sure to keep our “virtual tool-box” full and up to date with as many useful techniques, methodologies and tools as we can. But we also need to make sure we know how to choose the appropriate tool for each challenge and how to use it correctly.

You need to reach a balance: keep up to date with the latest professional information, but don’t let this “theoretical learning” take the place of constantly practicing and improving your testing techniques.

a1qa: You have developed a special complex testing tool that helps manage the QA process. It gathers testing and product-related information from testers, developers, product managers, etc. Why do you think such a tool is necessary? Some people say that systems like these complicate the testing process a bit…

J.: It is true that I work at PractiTest, and I am part of the team that constantly gathers practical information from our users in order to make our solution better. But on the other hand, a very large part of our effort goes into making our system simple. In fact, the objective of PractiTest is to simplify testing.

As you correctly put it, we are constantly gathering information from many sources in the organization and outside of it.

We need to use this information in order to understand what to test and how to create the correct scenarios to test this.

Then we need to gather additional information that will tell us under what environments and configurations to run our tests.

And finally, we need to go ahead and run these tests, reporting the results in actionable and timely ways to all our stakeholders. Most of the time we will need to perform this operation a number of times until the product reaches the level of stability that allows us to release it to the field.

This process is not simple; on top of that, there are many factors and updates constantly affecting and shifting our priorities. Without a system such as PractiTest it is very difficult to manage the process and provide the information we need to deliver.

It is true, there are many tools out there that aim to manage the testing process, and most of them are really hard to install, configure, maintain and operate. But there is also a small number of systems (of which PractiTest is one) that have managed to achieve the goal of managing the testing process without making it even more complex than it was.

Subscribe to the blog.

For the past fifteen years, Joel Montvelisky has been a tester, QA manager, and consultant, working with both small and large organizations in the Silicon Valley and worldwide. Joel believes that testers should always strive to provide more QA intelligence value to their organizations and to expand both professionally and technically. As principal solution architect at PractiTest, he works with hundreds of teams worldwide, gathering knowledge and experience on the changing roles and the challenges testers face in today’s dynamic reality. Joel is a blogger, Co-Founder & Editor of Think Testing, the first Testing Magazine for QA Professionals in Israel, and a collaborator in multiple QA communities and testing media.

Joel is a cosmopolitan. Born in Costa Rica, he now lives in Israel with his wife, three kids and a dog. He is a Testing Professional, and that explains a lot. “The main link for all my tasks is the fact they are centered around the world of Testing” – that is how he describes his life.

a1qa: Joel, when you say “quality”, what do you mean by that? Do you see yourself as a technical specialist who is merely a bug-finder, or do you prefer to be considered a customer’s advocate to some extent? What is your place on this scale?

Joel (J.): I believe QA needs to have a purpose, and that purpose cannot be solely detecting bugs. Let me even go further than that: I strongly believe that finding bugs is mainly the responsibility of developers. They must have the processes and tools (e.g. functional and technical reviews, unit tests, etc.) that allow them to find these bugs by themselves.

Of course we as testers should help developers in this task by guiding and teaching them, but in the most logical way, if a developer wrote (a piece of code that introduced) a bug, then he or she should be responsible for finding and correcting this bug.

Our direct responsibility on the bug-catching front should be to serve as an additional line of defense that finds any final issues left in the product. But in the same sense that you shouldn’t drive carelessly just because you are wearing a seat belt, developers cannot code carelessly just because we will test the product after they complete their work.

OK, so if we are not in charge of catching bugs, then what is our purpose within the process?

As testers we are in charge of Project and Product Intelligence, or, simply put, QA Intelligence.

QA Intelligence is the information and, more importantly, the “visibility” that allows our stakeholders to make their decisions.

What stakeholders and what decisions? This will vary from project to project. In one project it may be the Product Manager who wants to know if the system will answer the needs of the End User. In another project it can be the Marketing and Sales teams that need to know if the project will meet its timelines, and so decide when to schedule their marketing campaigns and sales accordingly. And in another project it can be the End Users themselves, who want to know if the system has been developed correctly before paying the vendor for it.

Even though we are called testers, we are actually in the business of information. Testing is only one of the means (maybe the most common one) that we use in order to achieve our objectives.

a1qa: Based on your experience, can you name the most common questions customers have asked during your career? Could you share some of the communication tricks that help QA people communicate with customers? In other words, how do you translate from Testing into the customer’s language and make yourself understood?

J.: Tell me who your customers are and I will tell you what they want to learn…

But be very careful: most of the time your customers will not necessarily be your End Users.

Having said that, you need to start by mapping the customers of your testing, or, as I explained previously, of the information you are gathering as part of your testing.

Still, to answer your question, most customers will want to know two main things:

First of all, they will want to know if all is working according to plan. And they are referring to timelines, functionality, performance, etc. So you should not base your answer on features alone, but take into account all aspects of your project and its timelines.

The second thing they will want to know, in case “not all is going according to plan” (and this is usually the case), is what part of the project is not progressing according to plan.

This sounds simple, right?
No!

1. Politics and your sense of camaraderie will get in your way.
2. Not all that is going on in your project is actually known to you.
3. Things change all the time.
and
4. YOU might be part of the problem.

So take that into account when trying to answer your customers’ questions.

Maybe most importantly, it is not only about what information you provide, but how and when you provide it.

As you asked: in what language and at what time should you provide the information in order for it to be useful and understood by your customers?

When should this information be provided? When it is USEFUL and when it can HELP your customers to make their decisions.

How should this information be transmitted? In the language they are familiar with! Technical people want to hear technical stuff, and business people want to hear business stuff (mainly because they don’t really understand the language of the other side). There is no way around it. It is not enough to provide the information in the form in which you found it. You will also need to provide it in the language they understand.

In the continuation of the interview we’ll talk with Joel about tester skills and the tools he uses. Reach Joel on Twitter, and subscribe to his blog.

This is the second part of the interview with Shrinivas Kulkarni, international software expert and popular blogger. If you haven’t read it yet, please check out part 1 of the interview, published on Tuesday this week. Shrini suggests that “Test automation [is] probably the most misunderstood concept in the field of software Testing…” Today we continue discussing test automation from different angles and approaches.

a1qa: Can you give a concrete example of a positive business result from test automation, from your experience or one you have heard about? (How much money was saved, how much time to market was accelerated, or some other benefit.)

S.K.: As I said before, automation done “properly” is going to increase skilled testers’ productivity and efficiency. Concrete examples of positive results of automation should, in my opinion, relate mainly to these terms. Automation in the IT services world is a big money-making enterprise. There are automation tool makers, consultants, and automation developers that make up the commercial equation of automation. When automation is done using discretionary expenditures, the money spent needs to be justified in commercial terms that executives are required to sign off on. This is where the problems start and we slip into the realm of manipulation, politics, personal motivations, biases, likes and dislikes, etc. I have held very skeptical views of claims about automation such as ROI, cycle time reduction, and faster time to market, as these terms are too vague when applied to automation and often depend on several parameters outside the control of testing.

For example, if someone claims that they reduced cycle time thanks to automation, here is how I scrutinize the claim. First of all, the whole idea of cycle time is often not clearly stated: are they referring only to the testing cycle time, or to the overall development or release cycle? If we consider the testing cycle alone, how do we mark the start and end of a test cycle? A test cycle is a provisional construct devised as a convenient unit, and its definition changes from project to project. Even within any given definition of a testing cycle, what does the end of the test cycle signify: release of the product? Often, the end of one testing cycle throws up a bunch of bugs for developers to fix, and that needs more testing. Unless you see a testing cycle at whose end there are zero new bugs or zero outstanding bugs to worry about, the end does not technically mean anything about the release of the product. Releasing the product at the end of a testing cycle is a business decision based on the risk appetite of the stakeholders and business/market considerations. Automation, testing, or even the development team cannot do anything if executives decide to release a product to meet specific business goals. My point here is that the effectiveness and success of automation need to be measured at a different level: the tester’s level, not the business level. Cycle time, or testing cycle time, is an example of a parameter dictated at the business/stakeholder level.

The same argument goes for “faster time to market”.

My recommendation is to measure the success of automation by capturing how it makes skilled human testers more productive and efficient. As you go higher toward the business stakeholder level, the picture gets blurry and many parameters that are out of the control of the testing or automation teams come into play. This impacts the decision-making process about automation, and marketing persuasion prevails over critical thinking. Thus, when you hear phrases like ROI, cycle time reduction, or money saved through automation, take a step back and subject them to critical thinking. Do not just take them for granted. The devil is in the details.

a1qa: What are the typical, most common mistakes made while implementing test automation?

S.K.: One big mistake often committed is to treat automation as a magic wand solving any problem related to testing – even reducing or eliminating testing itself. I have often advised teams not to accept automation as a solution when there is less than the required time for testing. To managers and many executives, automation is often sold as a means or tool for when there is insufficient time for testing. This is the wrong way of using automation. I have yet to see a real and practical example of automation compensating for a lack of time for testing.

Another big mistake, related to the above, is giving very little or no consideration to testing and how automation fits into it. If you are doing bad testing, throwing some automation at it causes worse results. James Bach, in his famous article on automation (Test Automation Snake Oil), lists a number of reckless assumptions about testing that automation folks often make – for example, treating testing as a mere sequence of actions. If you allow people (consultants and tool vendors) to trivialize testing, you are giving them the freedom to sell anything under the name of automation. You cannot automate testing, as testing is a fundamentally human cognitive process, but such a process can be enhanced by the use of tools. Good automation has deep grounding in good testing: find out, for your context, what that good testing is, and then ask how automation assists the process. It is a pity that many tool vendors market their tools as ones that do the testing, and the user community (from testers to test managers) accepts such claims.
You need to be wary of the marketing pitch that comes along with automation. Apply critical thinking to marketing claims such as “faster automation,” “easy to use,” and “quick ROI,” and assess the feasibility of each of these claims in the context of your project, company, and industry.

The most tempting marketing pitch by tool vendors is “automation for non-technical users”. Non-technical users (meaning those who do not write code) can create tests but cannot write or maintain code. So the rhetoric of non-technical automation is an oxymoron. Watch out for this.

Another big problem we need to deal with in automation is tools that claim to be “scriptless” or “codeless”. If you regard automation as software development, how can there be scriptless or codeless development? I have used several tools that start off as “purely scriptless”. When we raise questions about support for programming constructs like looping, decision control, or exception handling, the tool starts opening up into the programming domain. Eventually they end up halfway: one part is scriptless (to market the tool to execs as “easy to use”) and the other part supports programming needs. Such tools, in my opinion, are better avoided.
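The point about programming constructs can be made concrete. Below is a minimal, hypothetical sketch in Python (the `validate_discount` function and its discount rule are invented for illustration): even this trivial data-driven check needs a loop, a branch, and exception handling – exactly the constructs that force “scriptless” tools to leak back into programming.

```python
# Hypothetical system under test: 10% off orders over 100,
# negative totals are rejected.
def validate_discount(order_total):
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    return order_total * 0.9 if order_total > 100 else order_total

def run_checks(cases):
    """Data-driven runner: loop over cases, branch on the expected
    value, and trap errors instead of aborting the whole run."""
    results = []
    for total, expected in cases:
        try:
            actual = validate_discount(total)                      # exercise the code
            results.append(("PASS" if actual == expected else "FAIL", total))
        except ValueError:
            results.append(("ERROR", total))                       # exception handling
    return results

print(run_checks([(50, 50), (200, 180.0), (-5, None)]))
# [('PASS', 50), ('PASS', 200), ('ERROR', -5)]
```

A record-and-play tool can capture one pass through this flow; expressing the loop over cases and the error path is where real code becomes unavoidable.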

In summary (how to spot and avoid these mistakes):

1. A tool cannot do testing – humans do. There is no such thing as automated testing: testing cannot be automated, but it can be enhanced by tools.
2. Automation for non-technical users is an oxymoron (like a waterproof towel).
3. Avoid scriptless automation tools if you can.
4. Beware of the marketing pitches that come with automation – watch out for phrases like “cycle time reduction” and “faster time to market”. They are traps.

a1qa: Who is a test automator, in your view? Is he an under-developer or an over-tester? How do you check aptitude for test automation? Developers may face a motivation problem, while testers’ technical level is not always good enough.

S.K.: For anyone to be an automator, software development and programming skills in one formal, full programming language are a must. It is possible that a tester has such skills and can be an effective automator. Among these skills are the ability to understand and decipher software code and the ability to update and modify it. The common description of a tester (a manual tester) is someone who cannot write or understand code. This has to change. IT service providers have been charging a premium for a so-called “automation engineer” who is nothing but a tester with some training in automation tools (in most cases, with no programming skills). A rejected developer cannot perform well as an automator, as the key skill of programming and design is missing, and needless to say such a rejected developer would not have invested in becoming a good tester. How much or how deeply a tester should know programming is a question that depends on the context of the project, company, and domain.

There is no one-size-fits-all technical bar for an automator. A good automator has a balanced mix of testing skills and programming skills, so that one can blend them in developing and maintaining an automation solution that enhances the efficiency and productivity of testing. For all you know, in the future there might not be any specific role called “automator”: a tester or a developer would create and maintain the automation solution. Accordingly, there would be a need for skill enhancement for the developer (in testing) and the tester (in programming), or we might just pair a developer and a tester for the purpose. Interesting times are ahead.

a1qa: Thank you again, Shrini. I wish you the best in your work, and thanks for contributing something that is important and instructive to all curious testers!

Reach Shrini on LinkedIn and Twitter, and read his blog.

Throughout his career Shrini has worked at companies like the Microsoft Global Delivery Center and iGATE Global Solutions, and is currently engaged in activities at Barclays Technology Centre India. With over 16 years of testing experience, he has worked in many roles covering the entire spectrum of IT: software developer, project lead, SQA lead, test lead, test manager, and solution architect.

According to him, “Test automation is probably the most misunderstood concept in the field of software Testing…” a1qa invited Shrini for an interview to discuss this topic.

a1qa: Shrini, we are grateful that you accepted our invitation. The first question goes to the core of the automation process. Are there any established practices for when we should definitely start test automation and when it is worth stopping? Could you please share your viewpoint on where this border lies in test automation?

Shrinivas Kulkarni (S.K.): Before I answer this question, let me give some background. When it comes to automation, one thing that I have not heard much talk about is the industry or domain in which the testing and automation are happening. I have studied the industry as a consultant working for IT service providers and as an automation developer for software development companies for the last 10-15 years. The beliefs, practices, and expectations around automation in these business domains (and hence the testing and automation they need) are very, very different.

Coming to your question of when to start and when to end automation, I think one needs to take into account the type of business or industry where the automation is used. Traditionally, software product companies never had (and I think that is true even now) a specific thing called automation or an automation team. It was all part of development. In the present world, companies like Google and Microsoft have taken the idea of automation to a level where engineers in these companies can almost never distinguish between writing code and testing. Testing = writing some code to test another piece of code. This is how product companies view automation. Here there is no clear start or end of automation: it is all part of the lifecycle and happens continuously throughout. The Agile movement eradicated any trace of a separate automation cycle, effort, or start and end through the practices of XDD, where X stands for any favorite letter such as “B” or “T”.
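The idea that “Testing = writing some code to test another piece of code” fits in a few lines. This is a hypothetical sketch in Python; the `slugify` function under test is invented for the example, and the tests are themselves just more code:

```python
# Code under test (invented for this example): turn a title into a
# lowercase, hyphen-separated URL slug.
def slugify(title):
    return "-".join(title.lower().split())

# The "tests": plain assertion functions, in the style a pytest-like
# runner would discover by their test_ prefix.
def test_spaces_become_hyphens():
    assert slugify("Hello World") == "hello-world"

def test_extra_whitespace_is_collapsed():
    assert slugify("  Agile   Testing ") == "agile-testing"

if __name__ == "__main__":
    test_spaces_become_hyphens()
    test_extra_whitespace_is_collapsed()
    print("all checks passed")
```

From the engineer’s point of view there is no boundary here: the function and its checks are written, reviewed, and run the same way.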

a1qa: How does the IT world actually see automation?

S.K.: In the world of IT service providers, or the offshore tech centres of banks and insurance companies, automation takes on a whole new meaning. It is seen as a means to cut costs and reduce the time and effort of testing. It is funded and tracked as a separate activity (at times totally disjointed from the core dev/test teams). How did it become this way? In a world driven by outsourcing, IT service providers are constantly in need of broadening and deepening their services to continuously generate revenue. This often causes situations (sometimes artificially) where they need to look at testing, automation, and other related services in another way. First comes independent testing that is disconnected from the development team. IT service providers essentially operate in this model. The more separate and disjointed the services, the more revenue.

How do IT service providers look at automation? Mostly as a means of cutting testing costs; this is the primary and most popular way of selling automation. On the surface, it might look plausible that sensible automation, aligned with the development lifecycle, can help reduce the cost of testing and improve the value derived from it. But the way IT service companies sell and approach automation has, over a period of a decade or so, created more concerns than solutions and benefits. The start and end of automation in such conditions depend on many factors, such as the availability of funds, the technology roadmap of products, the status of testing, etc. In my opinion, the IT service industry should adopt a model similar to that of software product companies: treat automation as a continuous process running alongside the development cycle, with no definite start and end. But then they have to figure out how to sell such a long-running automation service that costs more, even though they sell it as a cost-saving method. That is a puzzle, a difficult one. Thus these companies have chosen to downplay the issue.

a1qa: In your opinion, what indicators can prove the effectiveness of a test automation implementation? And which of them can be applied to the people involved in those processes?

S.K.: One sign of successful automation is its widespread usage. Any automation that is outdated or cumbersome to maintain and use will be put aside by testers and developers alike. Make no mistake: the primary aim of automation is to assist testing in coverage, speed, and filling gaps and shortcomings. Effective automation helps human testers be more productive and efficient. Contrary to popular belief, effective automation provides broader assistance to skilled human testing in the areas where a machine or computer program does things better than a human. Successful automation makes more people embrace and promote it.

Cem Kaner defined automation, or automated testing, as a tool that assists testing. You would be surprised to see that many software applications, such as MS Excel, can assist so much in testing that Excel can at times be used as an automation tool. There are similar examples from the non-Windows world. The bottom line is to ask, “What can enhance my productivity and efficiency as a tester?” and then pick tools that help in that specific direction. Anything that helps testing is an automation tool. Many specialists in our industry don’t think of automation like this, and that is a problem…
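In the same spirit as the Excel example, a tool-assist can be a few lines of throwaway code. Here is a hypothetical Python sketch that generates boundary-value inputs for a numeric field (the 1–100 range is invented for illustration); it automates nothing end to end, yet it clearly helps the tester, which by Kaner’s definition makes it an automation tool:

```python
# Classic boundary-value candidates around an inclusive [lo, hi] range:
# just below, at, and just above each edge.
def boundary_values(lo, hi):
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# E.g. for a hypothetical field that accepts 1..100:
print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```

The tester still decides what to do with these inputs and how to judge the results; the script only speeds up the mechanical part.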

This Thursday we continue the conversation with Shrinivas Kulkarni about test automation.

You can check his blog here.

Markus Gärtner is a popular German testing blogger and the MIATPP 2013 Award winner. This award is decided by an annual online survey of the entire Agile testing community from all over the world. We invited Markus to be the next guest in the series of interviews with the most popular QA people that we started last week.

Though Markus is rather young, he has already gained popularity as a well-known Agile tester, consultant, trainer, coach, and regular presenter at Agile and testing conferences. He calls himself a software craftsman and was around when the Software Craftsmanship Manifesto was created.

Markus specializes in acceptance test-driven development, exploratory testing, and skillful manual testing, covering the whole development cycle from idea and vision, through coding, and finally on to delivery and maintenance. One of his favorite areas is the telecom sphere, which we touch on below.

a1qa: Markus, thanks a lot for being so responsive to our query. a1qa is a company engaged in telecom testing. What would you suggest to a QA specialist planning to work with telecommunications software? What should he start with?

Markus Gärtner (M.): Telecommunications is a large field. When you start working in that field, you are probably not confronted with all of it, but only a part. When working on a telecom testing project, I would suggest starting by learning enough about that particular detail until you become bored. Then you can easily extend your knowledge from there.

For example, I started with the business administration part a few years ago. I dived deep enough into it to create and maintain a testing suite for that purpose. Over time I noticed that we needed to address more and more concerns about real-time rating or billing as well. That was when I started to dive deeper into those topics.

Of course, there are also fields which I know nothing about, like low-level network traffic from the SS7-stack, the differences between various network types, and so on. I maintain the illusion that I can easily dive into those topics if I ever need to.

a1qa: Let’s talk a bit about OSS/BSS projects for operators. It is a well-known fact that OSS/BSS solution replacement is one of the most complicated projects for operators. It can cause lots of trouble and take a long time to succeed. What is your opinion about the main challenges operators face during the OSS/BSS integration process? Is it possible to minimize them with a QA team?

M.: Right now, I would argue from a background with domain-driven design. OSS and BSS appear to be different views on the problem space. So, I would model them in different bounded contexts, then I would pick a dedicated QA team for each, and afterwards ask another QA team for the integration of those two parts.

However, I think operators face different problems in this space. OSS/BSS forms a circular dependency. That usually means that you need to have feedback loops between the two teams. I would strive to resolve the dependency by designating a primary player, for example OSS, and then have BSS depend on OSS once it is stable.

I don’t know how a QA team can help with those problems. They can certainly make you aware of these types of problems. The bigger problem I usually see is that people are not listening to the transparency those QA teams can deliver. That’s a missed opportunity to improve something.

a1qa: Do you agree that the telecom sphere is becoming more and more virtual, heavily influenced by IP and VoIP? What main trends do you foresee in telecom development for the next decade, and what is the role of QA in this process?

M.: I am bad at foreseeing anything – just like anyone else. We are all guessing. Some guess better than others.

For QA, virtualization has the advantage that test systems become available more easily and more quickly. Of course, there is a risk that the system will perform differently than production, but we know pretty well how to overcome that risk.

I am not sure about the next decade, though. Yesterday I overheard a conversation at a client in the telecom business that the market will become saturated. At that point mobile phone vendors will probably pivot to other fields, like fitness wristbands, watches, or integrating the smartphone with the connected home environment, including TV and heating. I think some of this is already happening.

Then again, who knew fifteen years ago that SMS would be a killer feature? And that we would move past it in 10 more years? I certainly didn’t. I was more of a late adopter when it comes to mobile phones. So don’t listen too closely to me :).

a1qa: Thanks again, Markus, for the privilege of interviewing you. We were pleased to share your viewpoint and will definitely be glad to have you as our guest again.

You can check Markus’s blog here and follow him on Twitter.

With this blog post a1qa starts a series of interviews with famous people of the software testing and quality assurance industry. James Bach honored us by being the first. He is an expert who speaks at prestigious conferences and an ardent trainer who strives to bring benefit to his students.

James Bach is a person for whom testing is not only a job… it’s his life. He previously worked for Apple and Borland; now he leads Satisfice Inc., a teaching and QA consulting company. James is the one behind the Exploratory Testing concept and the Context-Driven School of Testing. His testing blog breaks all popularity records across the globe. We are delighted to host James on our blog; see James’s blog.

a1qa: James, thank you very much for being our guest. The last post on your blog was about measuring testing quality. It is no secret that developers and project managers usually want to know how good their code is. And test activities often result in a test report from the responsible QA engineer with a quality analysis and quality estimates. How do you estimate quality? Is it a verbal or numeric attribute for you, or a gut feeling?

James Bach (J.B.): All reasonable people assess quality of software the same way: weighing the evidence in their minds. It’s what juries do, it’s what voters do, it’s what political and military leaders do, it’s what everyone does when assessing anything complicated and important. Anyone who tries to specify software quality substantially in terms of numeric attributes is either joking, incompetent, or a fraud. Management is primarily responsible for weighing the evidence about the product. There are no metrics that can take this responsibility away from management. If they attempt to shift their responsibility onto metrics then responsible people must resist that.

My job as a tester (I prefer not to say “QA”) is to collect, arrange, filter, and present the evidence that management needs. I might use metrics as part of that evidence. What metrics I choose is a decision that is completely dependent on the situation. However, I use metrics with skepticism. Numbers are too attractive to people who want decisions to be simple. I never let a metric go unsupervised into the hands of management. And I almost never use any metrics associated with test cases (there are a few, rare exceptions), since they mean nothing. I never reduce quality to any set of numbers.

a1qa: What are your expectations with regards to QA and trends you foresee in testing? Will test automation ever replace manual QA?

J.B.: Skilled testers have always been a tiny minority. Now, at least, there is a good international community to support the growth of skilled testers. But the community of serious testers is under constant attack from different quarters. Some Agilists are bored by testing and want all testers to be programmers. Many old factory-style testing consultants are threatened by the new wave of intellectual testers, which is why they have conspired to create the new ISO testing standard that was obsolete long before it was even officially created. Certain large companies famously deny they even have testers.
I think that the days of poorly skilled testers who follow test cases have largely passed. Those people will get less and less work.
As for automation, well, testing cannot be automated. There is no such thing as automated testing. I realize that your company publishes blog posts that speak of automation, but that’s not testing – that is fact checking. Unless fact checking is guided and designed and managed by people who know how to test, it will remain an expensive waste of time. Today, most of what is called test automation delivers very little value relative to the cost.
I am a tester. Of course I’m interested in facts, but I also have the skills to analyze and interpret evidence. I know how to design experiments. Those skills cannot be automated, although I do use tools to help me test.

a1qa: The QA-to-Dev ratio of 1 to 3: do you believe in it? What is your advice on QA team size? What factors do you consider when setting up a team?

J.B.: Tester/Developer ratios don’t mean anything.
My advice is to hire or train fantastic testers until the critical testing work is getting done quickly enough. I generally start by hiring one tester and go from there. This is the same way developers are hired.
One factor that matters to me in building a team is diversity. I want different kinds of people, with different temperaments and educational backgrounds.
Another factor is service. Testing is a service role. People who test should embrace that. Testers, like lawyers and doctors, are not in charge of their clients’ lives. Instead they serve them. A tester who doesn’t enjoy being of service should go into some other line of work. A tester should be thinking “how can I help?”

a1qa: Who would you recommend to consider QA as a career? How would you recommend training newcomers and senior engineers? How can you learn new things if you have been in this sphere for a really long time?

J.B.: I would not recommend any career to anyone. It’s like recommending that two strangers get married. They should fall in love, first. I like testing. It’s good for someone who likes to solve puzzles and work on different things each day.
I cannot quickly tell you how to train testers. I’ve worked many years on that, and there is too much to say. I will make one suggestion: do not get ISTQB certification. It’s a waste of time and it merely gives money to a corrupt organization.

a1qa: It seems there is hardly an event where Agile, Scrum, or Kanban are not being discussed. So is Agile for you a silver bullet or a pop trend? What is your strategy for selecting an approach?

J.B.: Agile is a fairly vague collection of aspirations, habits, and expectations. Skill, motivation, and social relationships are what make projects succeed.
I’m a problem solver. My way of selecting an approach is to understand the problem and my role and my available tools to solve it. Then I do what solves that problem. It’s called the Context-Driven way.

a1qa: Thanks a lot, James, for sharing your insightful views! Obviously, one post is not enough to discuss all the topics of interest. We hope we will have an opportunity to talk more and meet in the future.

Reach James Bach on Twitter.
