
How to decrease post-release risks. Interview with Parimala Hariprasad. Part I

Parimala Hariprasad has worked as a tester for close to 12 years in domains like CRM, Security, e-Commerce and Healthcare. Her expertise lies in test coaching, test delivery excellence and building great teams that ultimately "fired" her once they became self-sufficient.
25 September 2014
Interviews
Article by a1qa

Parimala Hariprasad spent her youth studying people and philosophy. By the time she got to work, she was able to put those learnings to use training skilled testers. She has experienced the transition from Web to Mobile and emphasizes the need for Design Thinking in testing. She frequently rants on her blog, Curious Tester.

a1qa: Your most recent work has been extensively in mobile apps testing space. What according to you is important while devising a Test Strategy for Mobile Apps?

Parimala Hariprasad: Mobile apps testing groundwork begins with understanding the customer and the apps to be tested. The better we know these two, the better the test strategy will be. A high-level test strategy includes understanding business goals, release goals, mobile personas, platforms to test on, and testing the apps for competitiveness. Once the test strategy is ready, tests in each area can be planned and executed for good coverage. Testers must plan for surprise platforms that turn out to be problematic but weren't tested thoroughly enough. Whenever we optimize and narrow our focus, we run the risk of missing something important. It is important to have mitigation plans for known risks.

Fishing net heuristic
Is your test strategy good enough? Every element in a test strategy is a fishing net. Ask: 'What kind of fishing net do you use?', 'Does it catch small fish?', 'Does it deal with sharks?', 'Do you have just one kind of net or more?'. Remember, which type of sea creature you catch depends on the type of net you use! The fishing net is a powerful heuristic for assessing whether a test strategy is good enough.

I don’t have time to create a strategy
Abraham Lincoln once said, "Give me six hours to chop down a tree and I will spend the first four sharpening the axe." So what if you don't have time to test? Even if you have only an hour to test, you must spend time creating a strategy first, because the less time you have to test, the more effective your testing must be. These words from Jonathan Kohl keep coming back to me whenever my team feels time pressure to complete testing.

a1qa: Faster release of mobile apps is always a risk to any organization. What do you think app owners should do to decrease post-release risks and how testing can help?

Parimala Hariprasad: Post-release risks can be mitigated well if testing is context-driven and backed by a powerful test strategy. There are several techniques to gather information after an app's release.

Real world testing
Hiring testers and users to test in real-world conditions with respect to location, network types, network speeds and so forth. Such feedback is highly likely to find problems that might occur only in real-world conditions and corner-case scenarios.
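The condition dimensions mentioned above can be enumerated systematically so that no combination is silently skipped. Below is a minimal Python sketch; the dimension values are entirely hypothetical examples, and real ones would come from the app's actual audience:

```python
from itertools import product

# Hypothetical real-world condition dimensions; actual values depend on the app's users.
locations = ["urban", "suburban", "rural"]
network_types = ["wifi", "4g", "3g", "edge"]
network_speeds = ["fast", "degraded"]

# Enumerate every combination to build a real-world test condition matrix.
test_conditions = [
    {"location": loc, "network": net, "speed": spd}
    for loc, net, spd in product(locations, network_types, network_speeds)
]

print(len(test_conditions))  # 3 * 4 * 2 = 24 condition combinations
```

Even a small matrix like this makes it obvious how quickly real-world combinations multiply, which is why corner cases get missed without a deliberate plan.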

App store reviews
Studying user reviews and comments on the app store is a goldmine of information about how the app can become better in subsequent releases. App discoverability and user engagement are key metrics to measure in order to increase app store ratings. Testers can assimilate these inputs and come up with a 'Recommendations' report from the user's perspective.
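As a rough illustration of mining reviews for such a 'Recommendations' report, here is a small Python sketch; the reviews and the keyword list are invented for the example, and a real pipeline would pull reviews from the store's export or API:

```python
from collections import Counter
import re

# Hypothetical app store reviews; in practice these would be exported from the store.
reviews = [
    "App crashes every time I open the camera",
    "Love the design but it crashes on login",
    "Battery drain is terrible after the update",
    "Crashes constantly, please fix",
]

# Count recurring complaint keywords to prioritize items in the report.
complaint_terms = {"crash", "crashes", "battery", "login", "drain"}
keywords = Counter()
for review in reviews:
    for word in re.findall(r"[a-z]+", review.lower()):
        if word in complaint_terms:
            keywords[word] += 1

print(keywords.most_common(3))
```

Even a crude frequency count like this surfaces which complaints recur, giving the recommendations some evidence behind them rather than anecdote.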

Social media analytics
What people say about the released app on social media is a good way to assess how users feel about the app in general. There are several tools in the market that collect reports about the app from different social media platforms and provide that information to stakeholders. Analytics gives great visibility into user distribution in the real world. Based on this information, testing can focus on platforms and configurations that were not previously covered. Additionally, analytics data can be used to improve the test strategy for subsequent releases.
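The idea of using analytics to find coverage gaps can be sketched as a simple set difference; the platform names, shares and the 5% significance threshold below are assumptions for illustration only:

```python
# Hypothetical platform distribution from post-release analytics vs. what was tested.
analytics_share = {
    "Android 14": 0.38,
    "Android 13": 0.22,
    "iOS 17": 0.30,
    "iOS 16": 0.07,
    "other": 0.03,
}
tested_platforms = {"Android 14", "iOS 17"}

# Flag any platform with meaningful real-world share that testing never covered.
gaps = {
    platform: share
    for platform, share in analytics_share.items()
    if platform not in tested_platforms and share >= 0.05
}

print(gaps)  # {'Android 13': 0.22, 'iOS 16': 0.07}
```

The gaps feed directly back into the next release's test strategy, which is the feedback loop described above.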

Competitor analysis
The released app can be compared against competitor apps to assess its strengths and stickiness. A better approach might be to take the app to users of competitor apps and gather their feedback at all levels.

Recently, there was an instance where skipping tests for a flow perceived as trivial cost the organization refunds to many of its users. Until then, the testing team involved did not know the importance of that flow.

Once an app is released, information about its quality, or the lack of it, keeps flowing in from all directions. It's important for testers to work outside the testing team with tech support personnel, sales and marketing teams, and product owners to listen to the feedback coming in. The underlying message is: 'Testers need to keep listening in all directions.'

a1qa: Mobile Market has millions of devices today. How do you choose which devices to test on? Can you describe your approach?

Parimala Hariprasad: I like Jonathan Kohl's approach to choosing mobile devices. According to him, there are four basic approaches to selecting from an ocean of devices:

  1. Singular approach: test on one device type, either because that is all our team plans to support, or because it is the most popular device in a device family, with one operating system, using one cellular carrier. A 'problem child' device that reveals lots of problems is the best bet in this approach.
  2. Proportional approach: deciding which devices, and how many, to test requires research based on web/mobile traffic, analytics data or user data. For example, if historical data shows 50% Android mobile traffic, 45% Apple iOS mobile traffic, and the remaining 5% other handset types, this data can be used to prioritize testing on Android and iOS devices.
  3. Shotgun approach: for a mass-market app, we may need to support all sorts of devices, with no self-imposed or customer-imposed restrictions. It carries the highest risk, because there are many, many platform combinations out there. Problem devices and research data, as in the proportional approach above, are good places to start.
  4. Outsourced approach: various services can supplement your own test devices with basic testing on devices that other people own and have set up. Formally, this can be done using remote device access services, which allow you to install software and control a device remotely over the web to do basic functional tests. You can also use crowdsourcing services that manage people with different device types in different locations and parts of the world to do testing on their phones.
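The proportional approach lends itself to a simple effort split. Here is a minimal Python sketch using the traffic figures from the example above; the 40-hour testing budget is a hypothetical assumption:

```python
# Traffic split from the proportional approach example in the text.
traffic_share = {"android": 0.50, "ios": 0.45, "other": 0.05}
total_test_hours = 40  # hypothetical testing budget

# Allocate testing effort in proportion to observed mobile traffic.
allocation = {
    platform: round(total_test_hours * share, 1)
    for platform, share in traffic_share.items()
}

print(allocation)  # {'android': 20.0, 'ios': 18.0, 'other': 2.0}
```

In practice the raw split would then be adjusted for known problem devices, as the singular and shotgun approaches suggest.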

Despite the above approaches, organizations bear the brunt of setting up mobile device labs, maintaining them and keeping them stocked with the latest devices over time. I handle this challenge by using a collective approach:

  • In-house Mobile Device Lab with access to most popular devices based on device models, platforms, countries, mobile app types and user types
  • Online Mobile Device Lab with access to millions of devices that can be accessed from across the world through in-house or external remote access mobile device organizations
  • Simulators / Emulators for quick, basic tests (trust these at your own risk, though there are good tools in the market that come close to the real thing)
  • BYOD approach where millions of users across the globe can be invited to complement other mobile device labs using crowdtesting

a1qa: Success of any product depends on positive user experience. Usability testers apply various techniques to enhance user experience. One of these techniques is paper prototyping. How helpful is it and what are its weak sides?

Parimala Hariprasad: Paper prototyping is a technique adopted from the design thinking world. In this technique, a tester wears a designer's hat and re-designs prototypes of screens or pages. Testers take existing applications (Web or Mobile), go through them page by page or screen by screen, understand the design, and perform basic tests on design, UI and business logic for each screen.

Designers create prototypes anyway, so why re-invent the wheel? Because testers gather vast experience over time by testing multiple products and applications in a variety of domains. For example, testers might say, 'This button must be in this position', 'This UI element must be in this color' or 'Remove this UI element, as it is redundant in my experience'. This feedback is driven by testers' knowledge of different applications, domains and industries. A step forward from here would be to incorporate these decisions and create fresh prototypes of the applications, which can then be reviewed by designers, developers and product owners for further discussion.

An advanced approach to paper prototyping is to design two different prototypes, show them to a group of users and gather feedback on which is a better hit with users. Going to stakeholders with such information helps testers build credibility.

What are the weak sides of Paper Prototyping?
Paper Prototyping has its weaknesses:

1. Ideas are tester-dependent and may not represent an ideal user at all times.
2. Users involved in giving feedback may not represent a holistic sample of users.

In the next post Parimala will touch upon the topic of crowdtesting.
