As 2023 draws to a close, it’s time to pause and reflect on the wealth of knowledge shared during a1qa’s online roundtables.

Let’s cut to the chase!

Unveiling the importance of a1qa’s roundtables for IT leaders

Recognizing the paramount importance of fostering a dynamic exchange of QA insights and best practices, a1qa hosts a series of monthly online roundtables designed for top executives.

These exclusive sessions bring together diverse IT experts to deliberate on topical QA issues, such as quality engineering trends, test automation, and shift-left testing principles.

Roundup of a1qa’s 2023 sessions

The first quarter roundtables overview

During this period, participants discussed three relevant topics — “A practical view on QA trends for 2023,” “How to get the most of test automation,” and “Dev+QA: constructive cooperation on the way to project success.”

Analyzing QA trends helps business executives proactively shape their QA strategies, keeping them in sync with the industry’s evolving landscape, while automation helps them accelerate an IT product’s delivery, enhance its quality, and reduce operational expenditure.

The attendees also talked about the best moment for QA to step into the SDLC and about methods to make communication between Dev and QA more efficient.

The second quarter roundtables overview

This period was marked by three vibrant conversations:

  1. “QA for complex software: tips for enhancing the quality” — IT peers shared the challenges they encounter when testing sophisticated systems and the ways to overcome them.
  2. “How to release a quality product within a limited budget” — C-level reps exchanged practical experience on mapping software quality expectations to a QA strategy and optimizing QA costs.
  3. “How to improve QA processes with shift-left testing principles” — participants discussed how shifting QA workflows left allows businesses to identify and fix defects early on while speeding up the release of top-quality applications.

The third quarter roundtables overview

“A closer look at the field of automated testing” took center stage during the 3rd quarter, emphasizing how to derive more value from test automation supported by AI and behavior-driven development.

The fourth quarter roundtables overview

During the last quarter of 2023, IT executives engaged in two insightful conversations — “How to organize testing and increase confidence when starting a new project” and “Rough deadlines: how to deliver better results in less time.”

At the October event, the attendees explored which QA approach to choose to be confident in a project’s success from the outset, optimize ROI, and reduce business risks. The November roundtable helped the participants voice their ideas and share real-life cases on meeting tight deadlines without compromising software quality.

Thanks for being part of our roundtables in 2023!

To sum up

The diverse and insightful roundtable discussions hosted throughout 2023 by a1qa’s professionals with in-depth QA and software testing expertise are a testament to the company’s commitment to fostering knowledge sharing, collaboration, and innovation in the ever-evolving IT landscape.

From exploring emerging QA trends to delving into the nuances of automated testing, each session has played a pivotal role in helping IT executives shape future strategies.

Need support in refining the quality of your IT solutions? Reach out to a1qa’s team.

Generating a drastic amount of data, the Internet is something of a Pandora’s box. IDC predicts that the worldwide data ecosystem will grow 3.8-fold and reach 175 ZB by 2025. Wow, right?

With that, storing data and processing it accurately become much more challenging. Hence the need to apply novel tools for big data scenarios.

Let’s take a trip back in time. Nearly a decade ago, forward-thinking companies included big data initiatives in their strategies. Today, 96% of big data efforts yield tangible results and help strengthen business continuity.

How did they succeed in addressing big data issues? They introduced a big data strategy with big data testing at the core.

Let’s delve into each step that enables error-free data handling and explore the benefits companies get by applying QA.

Step 1. Pre-define big data testing strategy and its objectives

Step 2. Consider big data testing essentials

Step 3. Perform mission-critical testing types

Step 1. Pre-define big data testing strategy and its objectives

A McKinsey report indicates that data-driven companies are:

  • 23x more likely to acquire new customers
  • 6x more likely to retain customers
  • 19x more likely to be profitable.

A comprehensive big data strategy is one of the keys to such business prosperity. Defining QA activities beforehand helps achieve the 5 core data traits — accuracy, completeness, reliability, relevance, and timeliness.

With QA, organizations ensure high data quality and consistency while properly forecasting market requirements and effectively analyzing customers’ expectations.

Once companies have tested the big data architecture, its components, and their interaction with each other, they can optimize the budget spent on data storage.

What’s more, well-structured data and its timely processing help build effective business strategies and make sound decisions while reaching desired outcomes.

Step 2. Consider big data testing essentials

Databases, internal ERP/CRM systems, weblogs, social media — these and many other sources transfer information to big data systems.

Data comes in three forms: structured, semi-structured, and unstructured. As unstructured data prevails, it is getting more difficult to collect and store due to complex conversion processes. Only 0.5% of unstructured data across the globe is analyzed and used today.

Source: www.analyticsinsight.net 

To verify that data is processed accurately, a good strategy is to follow these three stages (a code sketch follows the list):

  • Data ingestion testing. To check that data is pulled into the system correctly, corresponds to the original values, and is extracted to the right location.
  • Data processing testing. To dodge any data discrepancy by verifying the business logic of ingested data and comparing the output against the input. Test automation, if used, helps facilitate verification and shorten testing time.
  • Validating the outputs. To test further data transmission to other, more specific DBs that track customers’ feedback, internal processes, financial reports, etc., and to check the transformation logic as well as matching key-value pairs.
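
To make the first two stages concrete, here is a minimal Python sketch of an ingestion check; the source CSV file, the SQLite staging table “events”, and its columns are hypothetical placeholders rather than part of any specific big data stack.

```python
# A minimal ingestion check: the CSV source, the SQLite database,
# and the "events" table are hypothetical placeholders.
import csv
import sqlite3

def check_ingestion(csv_path: str, db_path: str) -> None:
    with open(csv_path, newline="") as f:
        source_rows = list(csv.DictReader(f))

    conn = sqlite3.connect(db_path)
    (target_count,) = conn.execute("SELECT COUNT(*) FROM events").fetchone()

    # Completeness: every source record must reach the target.
    assert len(source_rows) == target_count, (
        f"row count mismatch: {len(source_rows)} source vs {target_count} target"
    )

    # Accuracy: spot-check that ingested values match the originals.
    for row in source_rows[:100]:
        stored = conn.execute(
            "SELECT value FROM events WHERE event_id = ?", (row["event_id"],)
        ).fetchone()
        assert stored is not None, f"missing record {row['event_id']}"
        assert stored[0] == row["value"], f"value mismatch for {row['event_id']}"

check_ingestion("source_events.csv", "warehouse.db")
```

On a real cluster, the same count-and-spot-check logic would typically run on a distributed engine rather than SQLite.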

Step 3. Perform mission-critical testing types

High adoption of big data programs across enterprises is pushing the uptake of big data testing and proper data management. With that, the big data testing market is expected to grow from $20.56 billion in 2019 at a CAGR of 8.53% during 2020-2025.

As the worldwide volume of information rises exponentially, organizations face issues with defining test approaches for structured and unstructured data, configuring suitable test environments, and ensuring data integrity and security.

To navigate these and other critical big data challenges, we suggest including the following checks in the QA strategy:

End-to-end testing

To eliminate duplicates, inconsistent information, non-corresponding values, and overall poor data quality, and to ensure continuous data availability, QA engineers perform end-to-end testing: they validate the business logic and layers of the big data app and ascertain there are no missing values.

Integration testing

QA specialists verify that the interaction between each of the thousands of modules, sections, and units is well-tuned, avoiding errors that could affect the entire data storage.

Architecture testing

When a system processes intensive workloads round the clock, it’s vital to check that the big data app has a proper architecture that doesn’t provoke performance degradation, node failures, high data latency, or a need for expensive data maintenance.

Performance testing

Enormous data sets — and little time to process them. That means QA specialists verify that big data systems can withstand a heavy load as well as receive and handle voluminous information at short notice. Performance testing engineers check how fast each system component consumes various data forms, processes acquired files, and retrieves them.
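
For a flavor of what such a check can look like in practice, below is a rough Python sketch that pushes concurrent traffic at an ingestion endpoint and reports tail latency; the URL, payload shape, and volumes are hypothetical, and production-grade runs would normally rely on a dedicated load-testing tool such as JMeter.

```python
# A rough throughput/latency probe; the ingestion endpoint and the
# request volume are hypothetical.
import json
import time
from concurrent.futures import ThreadPoolExecutor
from urllib import request

def send_event(i: int) -> float:
    payload = json.dumps({"event_id": i, "value": "x" * 256}).encode()
    req = request.Request(
        "http://localhost:8080/ingest",  # placeholder endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    start = time.perf_counter()
    request.urlopen(req).read()
    return time.perf_counter() - start

# 50 concurrent senders pushing 5,000 events in total.
with ThreadPoolExecutor(max_workers=50) as pool:
    latencies = sorted(pool.map(send_event, range(5_000)))

print(f"p95 latency: {latencies[int(len(latencies) * 0.95)]:.3f}s")
```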

Cybersecurity testing

When handling sizable volumes of customers’ sensitive data, it’s pivotal to minimize the risks of increasingly sophisticated cyberattacks. To imitate cybercriminals’ behavior under real-life conditions and prevent data leakage, QA engineers execute penetration testing, which helps ensure the system’s resistance to viruses, malware, and other kinds of tampering.

Test automation

“I’ve covered the entire big data system with manual testing.” Sounds like a science fiction episode, right? This is why test automation helps reduce human error and free up time and effort for high-priority tasks. The thing to remember is that not every check can be automated: if a feature needs to be checked frequently and isn’t likely to change in the coming weeks, it’s worth automating.
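
As an illustration of a check that meets these criteria, here is a minimal pytest sketch of a stable, frequently repeated data-quality test; the “events” table and its columns are hypothetical.

```python
# A minimal pytest suite for stable data-quality rules; the
# "events" table and its columns are hypothetical.
import sqlite3
import pytest

@pytest.fixture
def conn():
    connection = sqlite3.connect("warehouse.db")
    yield connection
    connection.close()

def test_no_null_keys(conn):
    # A stable business rule: every event must carry an identifier.
    (nulls,) = conn.execute(
        "SELECT COUNT(*) FROM events WHERE event_id IS NULL"
    ).fetchone()
    assert nulls == 0

def test_timestamps_not_in_future(conn):
    # Ingested events should never be stamped later than 'now'.
    (future,) = conn.execute(
        "SELECT COUNT(*) FROM events WHERE created_at > datetime('now')"
    ).fetchone()
    assert future == 0
```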

Summing up

To respond to today’s high pace of the IT market and the drastically growing amount of data, companies are actively introducing big data initiatives.

When applying comprehensive big data testing, organizations are more likely to succeed while accurately predicting customers’ behavior patterns, effectively making business decisions, and strengthening their competitive advantage. 

Feel free to get hold of the a1qa team to get professional QA support for your big data solution.

Some years ago, companies focused on optimizing operational processes while leaving work on procurement, personnel, customer relationships, and more in the background. Given the gravity of both internal and external activities, improving all in-house operations has become key to maintaining a competitive position in the market.

ERP systems have become a pervasive means of improving business processes. Statista indicates that worldwide ERP software market revenue will reach $43 billion by 2021.

Depending on their needs and goals, companies opt for various ERP systems. Acumatica is a common platform among small and mid-sized organizations. However, implementing it is not enough to be confident in data integrity and stable operation. By applying software testing, companies can make sure the system runs like clockwork.

Otherwise, adverse consequences may emerge. For instance, a software error such as a missed notification to an employee might disrupt an appointment with a client. In turn, such issues affect the entire business and may lead to reputational decline.

How to avoid such cases? Read on to explore the role of QA in an ERP system’s flawless operation and effective performance.

What happens if ERP systems aren’t tested?

These platforms supervise all processes within the company — from procurement and delivery to financial transactions. They cover a great amount of information about products, employees, and customers.

With the advent of new technologies, many companies are shifting to cloud storage. As Panorama’s survey on ERP system implementation and support shows, over 60% of ERP software, including Acumatica, works with cloud technologies.

ERP usage stats
Source: Panorama Consulting Group

When implementing such a system, it is vital to ensure safe data migration to the cloud, as the data may be confidential. Due to possible vulnerabilities, the software is highly susceptible to cyber incidents, up to and including intellectual property theft. Therefore, security is one of the essential issues under consideration.

With massive blocks of information, businesses should maintain data integrity and accuracy to prevent future inconsistencies. Otherwise, errors may affect, for example, purchase volumes, which in turn impact the budget.

Data storage plays a pivotal role in introducing an Acumatica ERP system. Erroneous data entry can impede business processes and require extra time to resolve the issue.

The format and structure of the system’s data should be taken care of in advance; companies use big data technologies to address this challenge. Proper operation is another concern: make sure the information is distributed among the corresponding databases assigned to particular activities.

Since ERP software like Acumatica works with other platforms and browsers, integration should be set up appropriately. It’s important to check compatibility to avoid malfunctions or a complete lack of functioning. When adding corporate software, also verify its interaction with all modules of the system.

ERP solutions process numerous activities every day and may operate 24/7. Due to such frequent and extensive usage, the server can become overloaded. So, companies should examine the system’s response to a heavy load: whether data is saved after recovery, whether any information is deleted during a failure, and more.

Therefore, to leverage the Acumatica system and other ERP software with confidence, pushing fear aside, you need to attend to its reliability and avert all possible failure scenarios.

Holistic approach to ERP systems testing

Despite the differences in internal process management systems, a1qa’s experts recommend performing thorough testing of ERP software that covers all aspects and risks.

Functional testing

Once QA engineers have studied the documentation and business logic of the system, they proceed with testing activities. Specialists verify the entire functionality against the requirements and identify defects. Before new functionality is released, the QA team performs regression testing to check that the changes didn’t affect previously developed features. To make sure bugs are fixed, they conduct defect validation.

For Acumatica systems and other ERP solutions, it is crucial to check correct data storage both during migration and in the system itself. So, alongside functional tests, QA engineers review the data: proper distribution across databases, correct usage, and compliance of the information with the previous storage.

Security testing

According to Panorama’s survey, about 30% of respondents are concerned about the risk of data breaches when introducing an ERP system. Two concerns prevail: lack of information about cloud solutions (16%) and potential data loss (9%).

Security testing can help protect the ERP system from such cyber incidents. Harnessing penetration testing, experts simulate the actions of malicious users, thereby checking the system for vulnerabilities.

Integration testing

In most cases, companies integrate ERP systems with ready-made software, which increases the risk of malfunction and makes the system’s behavior unpredictable. a1qa’s experts advise performing integration testing to identify defects and ensure stable operation of the platforms.

Moreover, you may embed additional functionality, such as an electronic signature, in Acumatica systems and other ERP solutions. Here, QA specialists check how the digital signature works with various documents, who can sign them, and what statuses the signed papers acquire.

Performance testing

A large number of continuously executed data operations across numerous ERP modules can cause the server to restart or crash. Through load testing, one can evaluate the behavior of the system under the expected load. Stress testing determines the peak number of simultaneous sessions and evaluates the stability of the software product.

When executing performance testing, a1qa’s specialists use a behavioral approach, simulating end-user actions and setting test conditions as close as possible to real ones.

Test automation

Engineers write automated tests for frequent checks whose business logic rarely changes, such as regression cases. This saves testing time and shortens iterations.

Besides, the execution speed of automated tests exceeds that of manual checks. With the large data sets of the Acumatica system and other ERP solutions, automated tests detect errors faster and minimize the human factor.

Test automation also allows QA engineers to focus on performing the types of testing that are only executed manually, such as UI, UX, exploratory, and ad-hoc testing. A sketch of a typical automated regression check follows below.
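
To illustrate, here is a minimal Selenium sketch of such an automated regression check; the URL, element IDs, and credentials are hypothetical placeholders, not Acumatica’s real identifiers.

```python
# A sketch of a UI regression autotest; the URL, element IDs, and
# credentials are placeholders, not Acumatica's real identifiers.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://erp.example.com/login")
    driver.find_element(By.ID, "username").send_keys("qa_user")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "sign-in").click()

    # Regression check: after login, the dashboard must load and
    # show the sales orders module.
    dashboard = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "sales-orders"))
    )
    assert dashboard.is_displayed(), "sales orders module not visible"
finally:
    driver.quit()
```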

Effective QA for an ERP system

The effectiveness of QA is affected by a number of factors, including the methodology used on the project. The most pervasive approach in the IT industry is Agile.

The main reasons for implementing Agile methods include accelerating time-to-market, managing rapidly changing priorities, improving productivity, and more.

However, the introduction and testing of ERP systems require a different tactic. One of the best options is combining long-term planning in the traditional Waterfall style with short-term planning and task tracking using Agile practices. This scheme allows teams to achieve the desired results by combining strategic objectives with adaptability.

A team with the necessary skill set is another ingredient of effective testing. When onboarding specialists on a project, it is essential to run an introductory course so that QA engineers get acquainted with the requirements and business logic and can quickly grasp the ERP system’s principles.

There are two ways to attract QA talent: organize an in-house testing team or turn to an outsourcing company. If you want to focus on higher-priority tasks, hiring a dedicated team is one way out.

Therefore, a well-defined approach and testing strategy, backed by a skilled QA team, pave the way for the planned outcomes and for efficient testing with minimal costs.

In conclusion

In a highly competitive IT market, companies are forced to optimize not only production processes, but also all internal operations by implementing ERP systems.

Thorough testing is a go-to means of ensuring software soundness and stable operation, as compromising on quality may lead to repercussions for process management, the budget, and the reputation of the business.

A comprehensive QA bundle — functional, performance, security, integration testing, and test automation — allows detecting software defects before go-live, eliminating them, and maintaining a competitive advantage.

Need help with quality assurance of ERP systems? Get in touch with us to have a consultation with a1qa’s experts.

The healthcare sector was definitely not ready for the unforeseeable situation that arose in February, forcing clinicians to reimagine their attitude toward novel technologies in medicine. Undoubtedly, innovations have become an inextricable part of the human experience.

People’s values are shifting, and digital-age technology models are increasingly out of sync with them. Despite broadly benefitting from technology, people are expressing concerns about its usage and focus. They are turning to healthcare more than ever to get answers about their day-to-day worries.

But how can one provide the ground for safer use of eHealth tools and ensure consumer confidence? Welcome to our article, which focuses on the following:

  • Topical healthcare innovations
  • Why it is crucial to deliver high-quality eHealth apps
  • How to do it effectively.

Let’s get started.

Digital medicine trends

Gone are the days of mass services. Now, medical centers focus on personalized treatment, as every person experiences the same disease differently. Genetic information, access to big data, and Internet of Medical Things technology unlock the customer-centric approach.

People had to leverage non-standard solutions to surmount the hurdles of an unprecedented situation. During lockdown, they interacted at a distance to solve any issue, including receiving medical care. Thanks to telecom technologies, the number of virtual visits to medical institutions has increased more than 15-fold.

Smartphone usage has grown as well. Statista reports that 70% of respondents prefer using mobile devices; apps have emerged to collect symptom data, notify users about risk zones, and provide information about their contacts’ health condition.

For years, artificial intelligence has been one of the most pervasive technologies. According to the World Quality Report 2019-2020, overall investment in AI in the healthcare sector is higher than average, and more medical organizations are planning to introduce it.

Today, the innovation helps identify diseases, select the necessary treatment, create optimal drug formulas, and much more. Frost & Sullivan estimates that AI in the healthcare sector will grow by 40% per year, with the market approaching $6.6 billion by 2021.

More and more medical institutions are switching to electronic systems that store a large amount of patients’ information. Big data technology helps organize all the data and fit it into one format that is available to any healthcare center.

Machine learning algorithms are able to predict the means of treatment for patients. Novel technologies are paving the way for an all-embracing grasp of health; as a result, people will have ample opportunities to choose suitable medical care.

Considering all of the above, the global eHealth market is forecast to exceed $630 billion, as Statista shows.

eHealth market
Source: Statista

The gravity of quality assurance in eHealth

In line with global digitalization, healthcare institutions are increasingly developing IT solutions and implementing sophisticated new technologies. Only thorough testing before going live can guarantee unhindered software operation under real conditions.

Noteworthy is that every mistake in a medical solution triggers adverse consequences that threaten human well-being. For example, an accidental swap of health test results may cause inappropriate treatment or no treatment at all, while incorrectly configured clinical equipment can distort condition parameters, likewise affecting the patient’s health.

With the shift to electronic document management, the stakes for information security in medical centers are rising: much of the personal data involved falls under doctor-patient confidentiality.

Like all applications, medical solutions are at risk of cyber attacks and malicious data usage. From time to time, we hear unpleasant stories of customer information being shared with third parties, including credit agencies, advertising companies, and private organizations. Yes, that sounds like a big problem.

Every information system in the healthcare sector should be developed in compliance with the security and confidentiality requirements defined by GDPR in the European Union and HIPAA and COPPA in the USA. These measures allow users, for example, to ask for any information erasure or explanation of the reasons why a particular piece of data needs to be stored.

Thus, timely testing prevents possible bottlenecks and shortcomings in IT solutions, thereby avoiding data leaks, customer concerns, and deterioration of the institution’s reputation.

QA in eHealth: effective testing

Agile has been a mainstream development approach in the healthcare sector for the last two or three years, the World Quality Report 2019-2020 indicates. However, businesses still confront challenges in its usage, especially in providing an appropriate level of test automation.

Given the great responsibility for human well-being and the number of issues that arise during development, healthcare applications need thorough testing. The choice of services depends on the IT solution’s peculiarities and business goals.

Functional testing

The importance of functional checks is reinforced by a1qa’s success story about providing QA services to a leader in medical equipment production.

The specialists were responsible for testing a system used for collecting, storing, and processing session data from blood transfusion devices. When the solution’s safety class was estimated according to the IEC 62304 standard, it was assigned Class C: death or serious injury is possible.

Given the severity class of this eHealth solution, it was vital to onboard professional QA engineers, who first had to pass an introductory course. After achieving satisfactory results, they verified all functionality for compliance with the requirements and the IEC 62304 standard.

During functional testing, the team performed smoke tests to ensure the absence of critical issues that could impede further activities, and carried out new feature testing. Regression testing and defect validation helped ensure the changes didn’t affect previously developed units.

Performance testing

A large amount of data involves multiple operations that may affect system stability. Therefore, a1qa’s experts recommend checking the software’s response to a heavy load and identifying the peak number of simultaneous sessions before the product goes into production.

a1qa helped ensure flawless operation of the medical system by applying a user behavior approach. Simulating real users’ actions, the experts gauged the speed of the system’s reaction to operations and analyzed its behavior under a given load.

Testing big data

Integrity and completeness are the cornerstones of the secure patient data transfer processes ubiquitous in medical institutions.

a1qa has experience in ensuring data accuracy for a corporation that provides professional services to pharmaceutical companies. The main challenges were verifying the complex business logic of databases composed for customers and handling a great volume of work. Despite these issues, the specialists worked out an approach for streamlined big data testing and improved product quality.

Test automation

Healthcare IT solutions require completely accurate testing. Alongside optimizing QA processes, test automation helps minimize the human factor, namely errors. However, this testing type is not a silver bullet. The World Quality Report indicates that companies developing eHealth products face challenges with test automation implementation due to a lack of appropriate skills and tools and to hindrances with testing environments.

To move on, organizations should leave the purely manual process behind and introduce a strategy of lean automation. That means a case is worth automating if the feature is checked frequently and isn’t likely to change within a few weeks.

In a nutshell

A new era of medical care has paved the way for novel core concepts, including information technologies, a patient-centric approach, and process improvement. But the one thing that remains static is that errors are inadmissible.

Innovations have turned into assistants that help save lives. eHealth solutions need comprehensive and effective testing to match doctors’ excellent performance.

Wish to get high-quality healthcare software products? Get hold of us, and a1qa’s experts can help you address the challenge.

Annually, retail experts draw up lists of trends expected to shape the year ahead. However, changes in the global context have reshaped the retail industry’s progress in 2020, putting technological development first and, as a result, the quality of software solutions.

In this article, we talk about six key retail trends for 2020 and explain how QA helps follow them accurately.

Increasing internal processes speed

In the USA alone, retail sales were expected to grow by 2.0% to $5.574 trillion in 2020. Still, the global outbreak, with its poor predictability, is making unexpected and unpleasant amendments to the retail industry’s development.

How can a company speed up in-house processes to win the competition and enhance success rates? Take business operations online and help customers go online too.

Let’s take an example. Retail giants like Amazon have set high standards for the speed of accepting and processing user requests: it takes less than a day from a click on the site to unpacking an order.

Global studies testify to how vital it is to decrease the time consumers spend on getting what they want. More and more end users are ready to pay extra for time saved, raising the bar of expectations.

However, some things don’t change: as you can see below, page load speed has barely changed in 10 years.

Page load time
Source: Httparchive report

In an unstable situation, it is essential to go beyond expectations and do the utmost to level up the customer experience. With the growing needs of digital consumers, it’s high time to adapt products in the online space to end users’ expectations and implement an individual approach.

Here is an idea of how. If the speed of goods delivery depends heavily on logistics, including multi-level collaboration across many services, the development of logistics as a service, and more, then why not optimize this process and create platforms that can ensure high-paced delivery?

This is where one can see the ever-evolving need to ascertain the flawless functioning of the software, identify performance bottlenecks, and ensure that the IT solution can cope with the required load. To deliver a truly top-tier app, you can go for full-cycle testing encompassing the right mix of required testing types like cross-browser, usability, mobile application, migration testing, etc.

Undergoing digital transformation

In recent years, this trend has intensified its influence as companies adopt new IT apps and fundamentally reorganize internal processes.

Retail companies that have already made a transition to the digital environment and adapted their businesses’ processes to the online space, embrace a larger part of the target audience. An additional advantage for many of them was the development of proprietary software products like mobile applications, digital assistants, etc.

Why isn’t it highly risky now? For instance, the share of mobile traffic in retail is only growing. A 2019 study showed that 46% of U.S. users surveyed used mobile apps to search for additional information about a purchase or service and made at least one purchase in the previous month. And with the rising use of mobile devices throughout the outbreak, the importance of assuring their quality is rising multifold to deliver debugged software that astonishes end users. At a1qa, we conduct checks on real mobile devices from a 300+ device fleet to take all software versions into account.

Using Big data for analysis

Big data helps reveal relevant patterns and trends in users’ purchasing behavior, accurately predict the best prices, and plan sales activities. By investing in applying this technology, retail companies can understand their consumers better and deliver more personalized products by changing pricing strategies almost instantly and rapidly responding to market changes.

With this, retailers can avoid constant price reductions in a bid to be first in the market.

Consider that working with unstructured datasets containing vast arrays of information involves evaluating their quality to get the expected business results and build solid strategies. In this case, big data software testing can help ensure fail-safe performance, high data integrity, security, and user-centricity of your IT architecture.

Implementing AR/VR technologies

Virtual and augmented reality have already been enhancing the online customer experience for a couple of years. Today, the development of VR- and AR-based solutions is most relevant, as people around the world tend to spend more time at home. For example, IKEA uses AR-based features in its application so that users can place virtual furniture elements at home.

Such innovations serve as an example of frictionless commerce when the user takes a minimum of actions, and the process of choosing and buying a product is simplified.

However, the flip side of virtual reality is high user expectations. In a short time, really great applications attract hundreds of thousands of new users. AR/VR testing helps protect sensitive user data against cyberattacks, ensure high user-centricity of the UX and UI, ascertain that the software product can work under high loads, and more.

Localizing software product

Localization is especially relevant when scaling the retail ecosystem with the advent of new markets.

Many companies not only translate sites into other languages but also create independent digital platforms that take into account the cultural and local characteristics of the region. It is not surprising that the majority of users prefer websites in their native language. Since poor-quality adaptation of the resource can lead to end-user dissatisfaction, the solution is to conduct localization testing, which covers compatibility with regional standards, GUI compliance, and uniformity of lexical and visual style.

Automating business processes

Reducing routine tasks frees up companies’ resources. Take Amazon Go, whose 26 stores successfully operate without cashiers.

In the coming decades, self-checkout systems are expected to become very common, and this is just one example of how processes can be automated.

During the crisis, take some time to review which repetitive processes can be automated, and address them now to exit the unstable situation with new prospects.

Automation can also become part of the quality assurance process. Test automation is a wise business decision and a long-term investment in the future: it helps repeat checks while minimizing the human factor and conduct testing at scheduled times with minimal load on the server.

In uncertain times, by following these trends and being forward-thinking, businesses in the retail industry can grow more actively. One of the most important vectors of efficient companies’ development is the adoption of technology and the progress of new IT solutions.

We can already say that the spread of the pandemic has accelerated the formation of a highly integrated digital retail network. Experts believe we should expect the share of online transactions to accelerate from 12% to 25-30% in the United States and from 25% to 60% in China.

However, the success of the software product in the market largely depends on its quality. QA can help maintain quality at the level that is necessary for users in the face of growing competition.

Would you like to evaluate the quality of your retail software product? Contact a1qa experts to do it right.

Digital consumers are impatient when it comes to fulfilling their wishes as quickly as possible. Companies that process such requests faster than competitors enter their list of top brands.

By being recommended to other customers, these businesses grow actively and obtain the desired outcomes, including increased market share, cost reduction, profit growth, etc.

For the digital consumer, it does not matter how the company grants these wishes. But many businesses have already realized that adaptation to the requirements of customers is easier when going through the process of digital transformation.

Today, we will focus on what trends and strategic amendments can help the company pass it with fewer difficulties.

Digital transformation strategy components

Each successful digitalization story begins with creating a strategy. It is no longer enough to invest only in the implementation of new technologies, e.g. connecting social networks to a website or creating a chatbot.

Digital transformation implies a significant change in the business model as well as the mindset, from the product itself up to customer service.

Before pursuing digitalization, make sure you have considered customer experience and adaptation to the upcoming changes.

Boosting customer experience

Step-by-step work on managing CX increases the satisfaction and loyalty of current and potential clients and reduces the risk of their outflow.

Digital transformation rethinks the customer experience paradigm. Now, companies should invest in technology that helps accumulate, analyze, and apply clients’ data.

Rethinking the CX model

In addition, any company that is entering the global market should be aware of the importance of generating end-user loyalty concerning its software product. Internationalization and localization of software can ensure successful adaptation of the application to work around the globe. Have a look at what is important to consider when testing such an IT solution in the article by the a1qa expert.

Quick adaptability to the new conditions

Broadly speaking, adaptability is the speed of businesses’ changes. The company has to clearly understand its plans in the market and follow the latest trends. But introducing innovation without a clear understanding of the benefits to a business can be really harmful.

Adaptability also refers to locally tailoring a product to the various formats of its usage. For example, not all companies have adapted their websites to mobile phone screens, although mobile traffic has exceeded desktop traffic since 2017.

According to a study by Oxford Economics and SAP, 93% of senior executives surveyed believe that digitalizing a business is critical to survive in the market.

Cutting-edge technologies as the basis of digital transformation

For some time, market leadership can be preserved without adopting innovations, but not for long. Read on about the technologies that can help you stay afloat and ahead of competitors.

Internet of things

The internet of things (IoT) has become a new stage in the development of the digital world. The main feature of IoT is that there are fewer people online compared to the number of things. According to a Gartner study, the number of things connected to the Internet in 2020 will be over 21 billion items.

IoT connects the objects around us to a global network, where they exchange information and work without human intervention. How can IoT technology benefit a business?

  • Helps keep track of all business assets. Sensor control systems and detectors quickly identify problems while the system independently takes measures to eliminate them.
  • Rapidly identifies problems, reducing potential business profit losses.
  • Generates online analytical reports.

Within the IoT trend, digital twinning is used to digitally reflect a real physical object, process, or system, indicate how to increase its efficiency, track its technical health, and create new technologies.

Here is a success story of how this concept was applied on one of our projects.

a1qa was contacted by a company that develops, manufactures, sells, and services analytical equipment for the scientific community and wanted to ensure high quality levels. The system under test consisted of three components: the main processing center, the lab, and real water quality indication devices connected to the lab.

To conduct performance testing, the QA specialists would have had to launch the whole laboratory with hundreds of computers and devices. To reduce testing time and save the QA budget, a1qa specialists developed a simulator of the real devices, which helped mitigate risks and accelerate time to market.

Cloud technologies

In 2009, cloud technology represented 5% of the global IT market ($17 billion). By 2014, business investments in cloud technologies amounted to over $175 billion. It is no secret that this indicator continues to rise.

Cloud technologies provide convenient network access to the information fund and allow several teams to work on a project at the same time.

According to the forecasts of the international research and consulting company IDC, cloud services will be actively used in 2020 and after. This can allow companies to work anywhere and anytime.

Within five years, more than half of businesses are expected to build 90% of their applications on cloud and microservices. IDC encourages them to think about it now and start working with open-source software communities.

Artificial intelligence

Artificial intelligence (AI) has significantly improved the quality of business processes by quickly managing large amounts of information, accelerating the pace of goods production and task execution, and improving the product-user experience.

For humans, the technology has a familiar and understandable embodiment: the voice assistant. Starbucks is a good example of a company that uses AI to work with clients. The cloud-based virtual assistant Alexa has become a waiter for the Starbucks chain; a user may request: “Alexa, let Starbucks make my coffee.”

AI-based digitization requires serious financial investments. To avoid wasting money, the process of introducing AI technology should begin with defining business goals.

Machine learning

Machine learning (ML) is one of the most sought-after technical areas for business. The main idea of ML lies in the self-training process based on a given algorithm.

This technology helps create new lines of goods and services faster, increase the attractiveness of products for clients, and identify patterns of user behavior.

How can this technology be used? For example, some telecom companies have learned to predict customers’ desire for a service using machine learning: the client receives an offer before directly applying for it. The user saves time, and the company gains profit.

Big data

By now, people have generated some 40-44 zettabytes of information, a figure expected to increase tenfold by 2025, according to the Data Age 2025 report.

The concept of predictive analytics is closely related to big data, helping identify patterns and algorithms in it. It is especially relevant for e-commerce brands, allowing them to analyze information about customers’ behavior and identify the likelihood of future purchases.

This year, predictive analytics is expected to be an investment for improving customer experience technologies.

Fast and accurate information processing creates new business opportunities. Still, it is important to remember that working with big data always involves information security issues: a data leak can result in million-dollar losses for companies and invaluable damage to their reputation.

To protect your brand from such major losses, you can turn to our experts to conduct accurate big data testing.

Blockchain

The analytical company Gartner named practical blockchain one of the strategic trends for 2020. For now, the technology scales poorly and is adopted in small, experimental projects. According to the experts, practical blockchain will become fully scalable by 2023.

Since blockchain helps reduce costs, increase the speed of money transactions, and provide more secure data transfer between transaction participants, analysts recommend thinking about implementing it in businesses in 2020.

Technologies used for digitalization

Bottlenecks of digital transformation

It is worth realizing that digital transformation remains a multi-level and multifaceted process. Introducing advanced technologies is an investment in the future, but it brings the required profits only when paired with the search for new business solutions. Nevertheless, digitalization has its downsides.

Alongside over-enthusiasm at the start of the transformation, an incorrect ROI definition can be a key mistake. The world has already seen ambitious projects launched by companies seeking to become digitalization leaders that resulted in money loss.

Another common mistake is when a business creates a new division and turns it into a deeply integrated company: innovations designed for good ROI become a financial burden in the present situation.

What other mistakes should be avoided besides the two named above? For example, starting the journey of transformation for the sake of transformation, merely to support the trend. It is also imprudent to forgo a unified strategy and use tools and approaches chaotically.

The process of ensuring the quality of software products is equally important. Timely testing helps release bug-free software and take care of customers’ loyalty, providing the necessary business outcomes.

Conclusion

Being an ongoing process, digital transformation is rapidly gaining momentum. Companies have to select the needed types and tools based on the specifics of their business. By using new technologies, brands continue to grow and gain a competitive advantage in the market.

However, the path of digital transformation includes risks as well. If a company neglects to build a coherent strategy, choose the right toolset, and ensure software quality, it can lose profits.

Soon, businesses may struggle to exist successfully outside the digital space. Step-by-step implementation of this process can help them stay on the wave.

Are you thinking about integrating the digital transformation into your business? Request an expert consultation to do it right.

Software testing has expanded substantially beyond the manual approach of the 1980s. As the aims of testing activities change, QA experts have to adjust expeditiously to the numerous transformations of the software testing sphere.

The testing discipline will keep evolving. Accordingly, we’ve rounded up the top 11 tendencies that will determine the future of testing in 2019 and beyond.

Here’s what we suppose QA professionals need to focus on to stay ahead of top technology progress.

Internet of Things testing

IoT is one of the fastest developing technologies in the modern world. The latest World Quality Report (WQR) revealed that the number of IT respondents that somehow deal with IoT had risen from 83% in 2017 to 93% in 2018.

IoT devices and applications with an internet connection are to be tested for security, usability, and performance. Most IoT developments include technologies such as Near Field Communication (NFC), Bluetooth, and Radio Frequency Identification (RFID) to connect and enable communication. All these make IoT gadgets vulnerable to network-related threats, which should also be recognized by QA engineers.

Artificial intelligence in testing

According to Gartner’s 2018 CIO Survey, 1 in 25 CIOs has implemented artificial intelligence in their company. Google, Facebook, and Microsoft spend billions on artificial intelligence and machine learning initiatives.

Obviously, AI will grow further and it has its own role in testing as well.

AI can definitely streamline the process and make it smarter. AI-powered software testing can recognize the code changes, analyze them, and launch tests to make sure there are no mistakes. As of today, AI is widely used in test automation.

But in the future, with the adoption of AI-powered testing, manual testers will be able to move beyond their routine tasks and perform more exploratory testing, thus reducing costs and bringing more value to the business.

In general, AI will change the profession of software testers and turn them all into test automation specialists.

But of course, this won’t happen overnight and the impact of AI on software testing is yet to be observed.

Increased adoption of Agile and DevOps practices

In DevOps, software testing starts at the very beginning of the software development lifecycle. As a result, most defects can be recognized at the earliest stages, and a high-quality application makes it to the market sooner. This approach enables Continuous Delivery and Continuous Integration.

No surprise that 30% of the WQR respondents claimed these methods to be a significant aspect of their current IT business strategy.

There’s nothing path-breaking about saying that the Agile and DevOps adoption tendency will keep on gaining momentum in 2019.

Big Data is getting bigger

Data can be very beneficial to organizations. Given its proper quality, of course.

Volume, velocity, variety – these are the 3 V’s that characterize big data. Considering the exponential growth of big data generated, software testing engineers will have to continue keeping their eyes on its quality.

With the European Union’s General Data Protection Regulation having come into effect on May 25, 2018, more attention should be given to data privacy. And while GDPR is only focused on Europe, many companies outside it stated they would change their data policies accordingly to keep good relationships with their customer base.

Test automation (yes, again!)

Test automation has been the key trend in testing for more than 15 years already. It is hardly surprising that the purpose of QA automation has fundamentally changed: the point is to make a high-quality product as opposed to merely saving resources.

68% of the World Quality Report respondents said test automation improved test coverage, up 17% from the previous year and up 28% from 2016.

In other words, the contribution of QA automation in companies is increasing. It has undeniable pros: cost savings, defect removal, and expanded testing transparency. Test automation helps guarantee that high-grade software is delivered.

And as test automation helps guarantee top-notch software quality, its tools will be used further to perform both functional and non-functional tests. Testing engineers will concentrate their time and efforts on running experiments and exploratory tests rather than performing routine testing.

a1qa has developed an open-source framework, Aquality Automation. See its main benefits in the short overview of the presentation given by a test automation engineer at the 9th traditional a1qa conference.

The open-source way

Manual testing will stay

Even though test automation is becoming more popular, manual testing still has much to offer the industry. There are still spheres like design and usability that require manual effort. So yes, manual testing will stay with us for a while.

Performance engineering & performance testing

We’ve heard it multiple times that very soon performance engineering will replace performance testing. What’s the difference between them?

Performance testing is about preparing and executing tests, while performance engineering is about understanding how all parts of the system work together and designing its best performance.

However, performance testing is not falling sharply behind performance engineering. According to the World Quality Report, performance testing conducted in cloud environments has grown by 14% since 2016.

Delivery cycles will get shorter

DevOps, test automation, and constant improvements in communication flow have one common goal: speeding up releases.

In pursuit of a proper place in the market and high-quality software, organizations enlarge budgets to shorten delivery processes and quicken releases.

Of course, this puts (and will continue to put in 2019) additional pressure on QA departments, making them find imperfections and supply finished products more frequently.

Open-source tools will prevail

Easily accessible, resilient, and free of charge – open-source products are precious and extremely helpful for IT business.

True, they don’t by themselves give a sense of security. However, frequent usage by the community helps discover and eliminate bugs faster than you can imagine.

Cloud will get more popular

The WQR survey mentions only 27% of all applications are non-cloud based. Today cloud computing is the groundwork for other tendencies like DevOps and IoT.

The public cloud is becoming more popular: its share among cloud types has grown by 3% since 2017.

The tendency goes further: respondents prefer to use different cloud service providers, so we see multi-cloud popularity growing.

Running tests in the cloud has many benefits: minimal effort required (you don’t need your own infrastructure to perform mobile and web testing), simple accessibility, and high versatility.

Security testing becomes more crucial

With the broad use of smartphones, tablets, computers, and other devices, people have got used to relying on them for transactions. This has made security testing more crucial for every company in order to keep shared or accessed data safe and deter security violations.

The survey states that security testing has grown by 10% since 2016. Since the confrontation between security and privacy continues to grow, this type of testing will remain an urgent necessity for many companies.

Summing up

Forewarned is forearmed. Considering all these tendencies, organizations and businesses have the time and opportunity to implement industry best practices, create unique QA approaches, and ensure the impeccable quality of their solutions.

Business Intelligence software is a set of technologies aimed at enabling executives, managers, and analysts to make better and faster decisions.

Let’s consider an example of the online store.

Customers visit the website, navigate its pages, make decisions, and add the selected goods to the cart. Meanwhile, they don’t realize that every step is registered, analyzed, and, alongside other steps, used to make business decisions.

Ecommerce owners try to get maximum information about the consumer: gender, age, country of origin, purchases, money spent on this or that item, payment methods preferred, etc. The data is further organized, restructured, and provided to business users (managers, marketing specialists, etc.) to help them make well-grounded and informed decisions.

What is the architecture of business intelligence software?

A company may prefer to make use of an off-the-shelf BI solution or invest in developing its own. The market today offers a wide range of BI software of varying complexity and functionality.

However, any of them consists of three obligatory blocks:

  1. Data loading and transformation system
  2. Data warehouse system
  3. Reporting and data visualization system

Data comes to the BI system from a number of different sources. The first is JSON files: text files with lines in JSON format that record the subset of website activity that is of interest to us. Information may also be entered by the company’s employees (goods descriptions, prices, etc.). Finally, information may be extracted from marketing campaigns.

DWH and business intelligence testing is divided into several phases.

Certainly, a well-grounded business decision can be made only if the source data is reliable and error-free. Thorough testing helps guarantee it.

ETL testing

In data warehousing, ETL refers to pulling data out (Extract) and placing it in a DWH system in an organized format (Transform and Load). Testing engineers verify that the data moves from the source to the target repository and that the transformation rules have been applied as required.

ETL testing will help to:

1) Make sure no data is lost.

The data may be lost at any of the stages:

  • On the website itself. For example, the user made a purchase but no data entered the DWH.
  • On the way from the website to the DWH. Before the information gets to the DWH system it may be stored in a cloud using Amazon services, for example.
  • Inside the DWH while moving from one level to another.

Software engineers will check the entire process and detect the bottlenecks.

2) Verify key-value pairs.

The records are stored and retrieved using a key that uniquely identifies a record and is used to quickly find the data within the database. Testers should verify that the key corresponds to the record and that the data can be processed correctly. Otherwise, the data loss may be significant.
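
A key check of this kind can be expressed as a couple of queries. Below is a hedged Python sketch, assuming hypothetical “orders” (fact) and “customers” (dimension) tables in SQLite.

```python
# A key integrity check; the "orders" and "customers" tables are
# hypothetical.
import sqlite3

conn = sqlite3.connect("warehouse.db")

# Orphan keys are fact rows that can never be joined or retrieved.
orphans = conn.execute(
    """
    SELECT o.order_id, o.customer_key
    FROM orders o
    LEFT JOIN customers c ON c.customer_key = o.customer_key
    WHERE c.customer_key IS NULL
    """
).fetchall()
assert not orphans, f"{len(orphans)} orders reference missing customer keys"

# Keys must also be unique, or lookups return ambiguous records.
(dupes,) = conn.execute(
    "SELECT COUNT(*) - COUNT(DISTINCT customer_key) FROM customers"
).fetchone()
assert dupes == 0, f"{dupes} duplicate customer keys found"
```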

3) Check that user session parameters are recorded correctly: start and end of the session, its duration, user profile data, and the user’s activity during the session.

4) Make sure the SQL functionality works correctly.

For example, on one of the projects, a1qa engineers came across the following defect: the DATEDIFF function, which returns the difference between two date values based on the interval specified, calculated only whole hours. As a result, a session that lasted 11 minutes was recorded as an hour.
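
A regression test that recomputes durations from the raw timestamps catches this class of defect. Here is a hedged Python sketch, assuming a hypothetical “sessions” table that stores raw start/end timestamps next to a precomputed duration column.

```python
# Recompute session durations from raw timestamps; the "sessions"
# table and its columns are hypothetical.
import sqlite3
from datetime import datetime

conn = sqlite3.connect("warehouse.db")
rows = conn.execute(
    "SELECT session_id, started_at, ended_at, duration_minutes FROM sessions"
)
fmt = "%Y-%m-%d %H:%M:%S"
for session_id, started_at, ended_at, stored in rows:
    delta = datetime.strptime(ended_at, fmt) - datetime.strptime(started_at, fmt)
    expected = int(delta.total_seconds() // 60)
    # An 11-minute session recorded as a whole hour fails here.
    assert stored == expected, (
        f"session {session_id}: stored {stored} min, recomputed {expected} min"
    )
```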

5) Verify calculation accuracy.

Data warehousing is all about information and calculations. The calculations should be performed correctly.

In such a summary table, user activity is measured and totaled: the ‘Total’ value should equal the sum of the values in all columns. And it’s the tester’s job to check it.
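Such a check is easy to automate. A sketch with illustrative column names:

```python
# Each row aggregates one user's activity; 'total' must equal the
# sum of the other columns. Column names are illustrative.
rows = [
    {"page_views": 12, "clicks": 7, "purchases": 1, "total": 20},
    {"page_views": 4,  "clicks": 2, "purchases": 0, "total": 7},  # defect
]
for i, row in enumerate(rows):
    expected = row["page_views"] + row["clicks"] + row["purchases"]
    if row["total"] != expected:
        print(f"row {i}: total={row['total']}, expected {expected}")
```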

6) Exclude data duplication.

At any stage of processing, data entries can be duplicated. If the customer purchased one item but the system created two records, that’s an error that must be caught.
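Detecting full-record duplicates is a check in the same spirit; a sketch with made-up order records:

```python
from collections import Counter

# One purchase must produce exactly one record downstream.
purchases = [
    ("order-1001", "SKU-7", "2018-01-15T10:55:03Z"),
    ("order-1001", "SKU-7", "2018-01-15T10:55:03Z"),  # duplicated entry
    ("order-1002", "SKU-9", "2018-01-15T11:02:41Z"),
]
duplicates = [rec for rec, n in Counter(purchases).items() if n > 1]
print(duplicates)  # [('order-1001', 'SKU-7', '2018-01-15T10:55:03Z')]
```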

Report testing

The QA engineer will verify data sorting, exporting, and delivery into the report. This phase also includes usability testing of the reports, with a focus on various data formats (time, currency, etc.).

Load / Performance testing

As the company grows, the data volume will grow as well. So it’s vitally important to validate the system performance and scalability and define its load limits.

What are the main features of the DWH/BI applications?

Data testing is a complicated topic, and up to now there is no single testing methodology: every data testing project is unique.

The main features of the DWH/BI solutions that should be taken into account are the following:

  1. Complex architecture and business logic
  2. Large volume of heterogeneous data
  3. Multiple data sources
  4. Constantly growing data scope
  5. Changing business requirements

How to choose a DWH and BI testing team?

To perform efficient testing of data processing and storage, a tester is expected to:

  • have a clear idea of the ETL process
  • possess good understanding of database principles
  • know BI/DWH concepts and technologies
  • adapt to dynamic software requirements
  • be able to ‘talk’ to business users.

What is more, it will be a big plus if the QA team can participate in the design and requirements testing phase. This will reduce the number of late improvements and ensure project readiness within the set time and budget.

Timely and unbiased BI and DWH testing will guarantee information accuracy and reporting efficiency. Valid and correct data will help you make better decisions to fulfill your needs. 

Need help? Book a free consultation with an a1qa expert.

Recent years have brought a lot of innovation: technology has moved far forward, and the progress is visible to the naked eye. All these changes will inevitably impact software development.

And as always, businesses will want high-quality products launched as early as possible. In today’s blog post, we share the prominent QA trends for 2018 to help you shape your plans for assuring impeccable software quality.

#1. Increasing role of DevOps and Agile

DevOps provides for close collaboration between the development team and operations staff throughout all stages of product creation. According to the World Quality Report 2017-2018, about 88% of companies used DevOps principles in 2017, an obvious majority. Together, DevOps and Agile give you a smooth and fast development process and minimize the time and money spent on the product.

‘Applying DevOps and Agile will give you and your clients such long-term benefits as accelerated time to market, outage reduction, increased quality, and faster reaction to changes and defects.

Moreover, today SAFe (Scaled Agile Framework) as an Agile for large teams is becoming more and more widespread. If we talk about our own experience at a1qa, we see that clients want to have QA engineers who are able to provide both manual and automated testing – cross-functional QA specialists, so to say. 

That’s convenient for both QA vendors and their clients. The former benefit from having one person who can perform multiple tasks and grow as a professional in various testing areas. As for the clients, they don’t need to spend additional time on knowledge transfer and communication’, says Vitaly Prus, Head of Agile Testing Department at a1qa.

#2. Ongoing trend on test automation

Test automation is a great method to shorten the software lifecycle. Every client is eager to accelerate time to market and cut the costs of the whole process.

‘However, automation should be applied wisely. If it’s an end in itself, there is no reason to use it. For example, fast changes in the product can make automation unnecessary and unreasonable. If the customer wants to automate the testing process, it’s always worth estimating its practicability and figuring out whether there is even a slight possibility of negative returns’, comments Maxim Chernyak, Head of a1qa Test Automation & Performance Lab.

However, test automation is under-exploited now as only parts of the QA process are automated. According to the World Quality Report 2017-2018, the average level of automation is about 16%.

#3. Open source tools

Today a large share of IT companies embrace open source tools for testing, which are easy to adopt. Moreover, they keep pace with technology and offer great testing opportunities. You will definitely benefit from these solutions, as your expenses will cover only the actual work of your QA team.

#4. Security testing

Security today is of crucial importance for any product or system. Given the increased popularity of IoT technologies, security testing has become an inalienable part of product development. Security and penetration testing services are worth using, as hackers will continue seeking access to IoT devices for destructive purposes.

a1qa pays a lot of attention to security testing to ensure that personal data is protected at the highest possible level.

‘IoT device security is a pain in the neck for the developers of smart devices in 2018. It is reinforced by hackers’ interest in routers, cameras, and other smart devices available through the Internet. Several botnets, which were used for DDoS attacks on various corporations, appeared in 2017. In addition, attacks are becoming more complex and sophisticated: the first versions of botnets simply harvested usernames and passwords, but now they are able to compromise a device without knowing either‘, Alexey Abramovich, Head of a1qa Security Testing Department, comments on the trend.

#5. Big Data testing

The expansion of the Internet of Things (IoT) is closely tied to big data, as laptops, home devices, and various sensors and machines generate huge amounts of data on a daily basis. IoT evolution, like the digital revolution in general, plays a great role in the Big Data world.

Big Data testing will be in great demand in the near future. Big data system testing should also become easier as machine-learning models grow more sophisticated and learn to cope with a greater variety of data.

#6. Mobile testing

The number of smartphone users is increasing every year and is expected to surpass the 5 billion mark by 2019, which will boost mobile development and testing.

People tend to use their mobile devices for the activities they used to perform on their PCs. Considering the variety of services entrusted to smartphones, customer experience and app functionality become the most important things to check before the final release of the product.

‘As the number of mobile devices constantly grows, the number of mobile applications grows exponentially. Mobile applications are no longer just an additional customer acquisition channel; they are becoming the leading one. As for trends, they are driven by new technologies and innovations. For example, mobile games remain popular, and AR technology will definitely increase the number of mobile games on the market in the near future. Apple, Facebook, and Google use this technology far beyond the GameDev sphere.

Another incontestable trend is blockchain technology, which generated a great deal of discussion in 2017. The technology is in high demand as it opens new opportunities and growth for businesses. However, we should not forget other popular technologies, such as IoT, cloud-based applications, and e-commerce, which are still cutting-edge’, Pavel Novik, Head of a1qa Mobile and System Application Testing Department, shares his thoughts.

#7. Performance testing vs. performance engineering

Today, we are moving from Performance Testing to Performance Engineering.

To amplify the chances for a successful release of the app on the market, user experience and performance issues must become the most significant things to consider throughout the entire development process.

‘DevOps and Agile practices couldn’t but influence QA involvement. More and more often, the QA performance team collaborates with the development team, the functional testing team, and the business stakeholders. This gives an opportunity to move from simple performance tests to a deeper understanding of how all parts of the system work together. The use of tried-and-true practices and techniques during each phase of the software development lifecycle enables the performance team to improve software speed and robustness and ensure optimum performance given the business goal, which is the main objective of Performance Engineering‘, says Mihail Urbanovich, a1qa Performance Testing Manager.

We hope this roundup of trends will help you make smart plans for assuring the high quality of your products.

The year 2017 is just around the corner, so it’s high time to recall the most common QA requests of 2016. Based on them, we give our recommendations to software testing vendors for running successful testing in 2017.

Embrace test automation as integral part of testing

Test automation is still the best way to speed time to market, quickly test changes and not delay deployment.

“Today we observe the booming growth of test automation as a trend. How does it manifest? Well, obviously, clients have become more aware of the goals and advantages of automation. The number of automation service requests for projects with QA already in place also keeps growing.

Over the last six months, we’ve received a significant number of requests to automate the testing of desktop applications (mainly for Windows). They still don’t outnumber web app testing requests, but the gap is closing fast. The driving force is the emergence of high-quality toolsets that make it possible to solve complicated issues that were hard to tackle before.

Mobile apps test automation has also grown in demand and this trend will likely keep gaining speed.

The open-source Docker project is used more and more often to speed up the deployment of test environments. Docker makes it possible to scale the automation of various activities related to software development, deployment, and testing.

And last but not least, test automation owes its popularity to the opportunity to deliver complex, high-quality solutions that are not limited to sets of automated tests alone.

Now we offer automated solutions that are integrated with testing and bug tracking systems and make it possible to analyze test results. Summing up, test automation is becoming an integral part of continuous testing,” says Sergey Hamzatov, a1qa Test Automation Engineer.

Develop new service lines

Alongside traditional services, we constantly research and develop new services to meet specific QA demands, for example, baseline testing. It makes it possible to evaluate the current quality level of any IT product and propose a roadmap for improving it.

Another non-conventional service is QA consulting, which is rather popular among customers who want to build QA processes or improve current testing strategies in-house, instead of regularly outsourcing testing and managing distributed teams.

Instill out of the box thinking approach

More and more often, we have to deal with assuring the quality of various IoT developments. They require testers to become real users for a while and try the most unthinkable scenarios. What we recommend is to start thinking out of the box.

How can a professional manual tester who runs routine tests regularly become more creative? There are some useful pieces of advice that might be of help to any tester:

  • Find out what the software under test is not expected to do. Try those things out.
  • Make ‘what if’ the leading question of your research. Say you are in the middle of testing an Apple Watch: how will it act if the iPhone it is paired to runs out of battery?
  • If the system allows you to do something, do it, even when everything suggests you shouldn’t.
  • If possible, take the system (or device) under test out of your working premises and try it in a real environment.

Get ready for testing Big Data applications

Large companies often ask for comprehensive strategies to test big data systems that are too big in scope to be processed in traditional ways. Here again, test automation comes to our aid: it is one of the best means for testing big data apps.

Give priority to security testing

Security has been and remains probably the most important aspect of any IT strategy. Nowadays, we are getting ready to handle an increase in the systematic testing of all applications (mobile, web, desktop).

The cost of a mistake also increases, as users are now less forgiving of broken security. To stop the vulnerability trend, users, mobile app developers, and testers should join their efforts. Users shouldn’t overshare their personal data and have to become smarter downloaders; developers must strive for 100% code security, while testing engineers should identify threats to the app and help develop countermeasures.

We hope that our recommendations will help shape your future plans and start an efficient and productive 2017.

In the second part of the interview, Adam Knight speaks on the combination of exploratory testing and automated regression checking in testing Big Data systems. If you’ve missed the first part of the interview with Adam on recent changes in testing, you can find it here.

Adam Knight is a passionate tester eagerly contributing to the testing community. He is an active blogger and regularly presents at testing events such as Agile Testing Days, UKTMF, STC Meetups, and EuroSTAR.

Adam, you specialize in testing Big Data software using an exploratory testing approach. Why do you find it necessary to do exploratory testing?

It is not so much that I find exploratory testing necessary. Rather, I would say that, in my experience, it has been the most effective approach available to me in testing the business intelligence systems that I have worked on.

My preferred approach for testing when working on such systems is pretty much:

  • Perform a human assessment of the product or feature being created.
  • At the same time automate checks around behavior relevant to that assessment to provide some confidence in that behavior.
  • If at some point the checks indicate a different behavior than expected, then reassess.
  • If you become aware that the product changes in a way that causes you to question the assessment, reassess.

I believe that exploratory testing is the most effective testing approach for rapidly performing a human assessment of a product, both initially but particularly in response to the discovery of unexpected behavior or identification of new risks, where you may not perhaps have a defined specification to work from, as per the last two points here.

The combination of exploratory testing and automated regression checking is a powerful one in testing Big Data systems, as well as many other types of software.

What peculiarities of exploratory testing can you mention?

That’s an interesting question. I’m not sure I would describe it as a peculiarity, however one characteristic of exploratory testing that I believe makes it most effective is the inherent process of learning within an exploratory approach. As I describe in response to the previous question, testing can often come in response to the identification of an unexpected behavior or newly identified risk.

The characteristic of exploratory approaches is that they will incrementally target testing activity around the areas of risk within a software development, thereby naturally focusing the effort of the tester where problems are most apparent. This helps to maximize the value of testing time, which is a valuable commodity in many development projects.

How can a tester find the right balance between exploratory and scripted testing?

It isn’t always up to the tester. I am a great believer in the autonomy of individuals and teams and in allowing people to find their own most effective ways of working. Many organizations, however, don’t adhere to this mentality and believe in dictating approaches as a corporate standard.

Many testers I’ve spoken to or interviewed in such environments often perform their own explorations covertly in addition to the work required to adhere to the imposed standards.

For those who are in a position where they do have some control over the approach that they adopt then the sensible answer for me to this question is to experiment and iterate. I’d advocate an approach of experimentation and learning to find the right balance across your testing, whether scripted vs exploratory, manual vs automated or any other categorization you care to apply.

What are the main difficulties when it comes to Big Data testing?

The challenge of testing a Big Data product was one that I really relished, and in my latest role I’m still working with business intelligence and analytics. When I was researching the subject of Big Data one thing that became apparent to me was that Big Data is a popular phrase with no clear definition.

The best definition that I could establish was that it relates to quantities of data that are too large in volume to manage and manipulate in ways that were sufficiently established to be considered ‘traditional’.

The difficulty in testing is then embedded in the definition. Many of the problems that Big Data systems aim to solve are problems which present themselves in the testing of these systems.

Issues such as not having enough storage space to back up your test data, or not being able to manage the data on a single server, affect testing just as they do with production data systems.

Typically those responsible for testing huge data systems won’t have access to the capacity, or the time to test at production levels – some of the systems I worked on would take 8 high specification servers running flat out for 6 months to import enough data to reach production capacity.

We simply didn’t have the time to test that within Agile sprints. The approach that I and my teams had to adopt in these situations was to develop a deep understanding of how the system worked, and how it scaled.

Any system designed to tackle a big data problem will have in-built layers of scalability to work around the need to process all of the data in order to answer questions on it. If we understand these layers of scalability, whether they be metadata databases, indexes or file structures, then it is possible to gain confidence in the scalability of each without necessarily having to test the whole system at full production capacity each time.

So Big Data testing is all about understanding and being surgical with your testing; taking a brute force approach to performance and scale testing on that kind of system is not an option.

Thanks for sharing your viewpoint with us.

If you want to learn more from Adam, visit his blog a-sisyphean-task.com.

The article by Alexander Panchenko was published on EE Times; you can read the full version here.

Big Data

Furthermore, Big Data is growing at a rapid pace: social networks, mobile devices, data from measuring devices, and business information are just a few of the sources capable of generating huge amounts of information. But Big Data is accompanied by “bad” data. For companies, unstructured and broken (bad) data means wrong, costly decisions. Testing can quickly locate bad data and provide a holistic view of overall data health. This ensures that the data extracted from the sources remains intact, by analyzing and quickly pinpointing any differences in Big Data at every touch point.

Moving on to the future of security testing itself, its role in the overall testing structure will undoubtedly grow. In the future, a great deal of vulnerabilities (including critical ones) will still be found in off-the-shelf software. Here, we are talking not only about site content management systems, but also about data encryption protocols (Heartbleed) and command shells (Shellshock) that have existed for quite a long time. For example, all Bash shells are exposed to Shellshock, which means this vulnerability has existed for almost 25 years. Such cases are good examples of the risks of open source library usage: source code that was last subjected to security analysis a long time ago. Furthermore, classic vulnerabilities won’t disappear. They include XSS (cross-site scripting), injections, authentication and authorization flaws, etc.

To sum up security testing: the number of checks is increasing, the testing toolkit is developing, and more automated utilities (both static and dynamic) are appearing. Despite these facts, not all companies are concerned about the security level of their products, but they should be. The question of user information confidentiality is an acute one. This adds to the argument for the necessity of security testing as an indicator of the development of the whole industry.

The center of all these trends is still a human, a QA engineer. He/she has to serve as a so-called “universal soldier,” but with deeper and more extensive knowledge. QA engineers should not only be specialists in testing, but also have experience in the domains in which their projects fall.

All IT and software development trends directly affect quality assurance. Such characteristics as mobility, flexibility, reliability, availability, and scalability, which are relevant for IT sector solutions, are automatically transferred to software testing. The future will bring demand for security testing engineers and automation testers, but they, in turn, will face challenges more complicated than the ones we face today. What’s more, the QA engineer of the future should be prepared to upgrade their coding skills as much as developers do, or even more. The effective use of specialists, standardization of processes, and increased automation levels are the main factors that can lead to higher cost efficiency. Furthermore, the focus will be on non-functional testing, in particular on security and performance testing.

In case you missed the second part of the article please find it here.

Testing is the process of executing a program to detect defects. Rational Unified Process, the generally accepted methodology for iterative software development, presupposes a complete test pass on each iteration. Testing not only new code but also code written during previous iterations is called regression testing.

It’s advisable to use automated tools when performing this type of testing to simplify the tester’s work. “Automation is a set of measures aimed at increasing the productivity of human labor by replacing part of this work with the work of machines.” Software test automation thus becomes part of the testing process.

The requirements formulation process is the most important process in software development. The V-model is a convenient model for developing information systems; it has become the standard for government and defense projects in Germany.

The basic principle of the V-model is that each stage of application development and requirements refinement should have a corresponding testing task. One of the challenges of this model is system and acceptance testing.

Typically, this type of testing is performed according to the black box strategy and is difficult to automate, because automated tests have to use the application interface rather than an API. “Capture and replay” is one of the most widely used technologies for web application test automation under the black box strategy today. With this technology, the testing tool records the user’s actions in an internal language and generates automated tests.
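For comparison, here is the kind of script such a tool would record, written by hand with Selenium; the URL and element IDs are assumptions for illustration:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Replays a recorded user scenario: open the login page, fill the
# form, submit, and check the result. All locators are illustrative.
driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("tester")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    assert "Welcome" in driver.page_source  # the replayed verification
finally:
    driver.quit()
```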

Practice shows that the development of automated tests is most effective when it is carried out using modern software development methods: it is necessary to analyze code quality and to extract duplicated test code into a library, which must itself be documented and tested. All this requires a significant investment of time, and the tester should have developer skills.

Thus, questions arise: how to combine user action recording with the manual development of automated tests, how to organize verification of the automated tests themselves, and whether an application and its automated tests can be developed in parallel following test-driven development (TDD).

There are systems capable of determining the set of tests that must be performed first. Such systems let engineers manually associate automated tests with changes in the source files of the application under test. However, the connection between the sources and the tests can also be expressed in terms of conditional probabilities.

The probabilistic networks used in artificial intelligence could also be useful for defining these relations automatically, based on the statistics of test results. By using such networks we can link interface operations and test data, which will reduce the complexity of automation.
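A minimal sketch of this idea: estimate P(test fails | file changed) from historical runs and rank tests accordingly. The history data and all names are invented for illustration:

```python
from collections import defaultdict

# (changed_file, tests_that_failed_afterwards) pairs from past runs.
history = [
    ("cart.py", {"test_checkout"}),
    ("cart.py", {"test_checkout", "test_totals"}),
    ("search.py", set()),
]

change_counts = defaultdict(int)
fail_counts = defaultdict(lambda: defaultdict(int))
for changed_file, failed_tests in history:
    change_counts[changed_file] += 1
    for test in failed_tests:
        fail_counts[changed_file][test] += 1

def rank_tests(changed_file):
    # Order tests by estimated P(test fails | this file changed).
    probs = {test: n / change_counts[changed_file]
             for test, n in fail_counts[changed_file].items()}
    return sorted(probs, key=probs.get, reverse=True)

print(rank_tests("cart.py"))  # ['test_checkout', 'test_totals']
```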

Get the full article here.

Big Data is a big topic in software development today, and quality assurance consulting is no exception. When it comes to practice, software testers may not yet fully understand what Big Data exactly is. What testers do know is that you need a plan for testing it.

The problem here is the lack of a clear understanding of what to test and how deep a tester should go. There are some key questions that must be answered before going down this path. Since most Big Data lacks a traditional structure, what does Big Data quality look like? And what are the most appropriate software testing tools?

As a software tester, it is imperative to first have a clear definition of Big Data. Many of us improperly believe that Big Data is just a large amount of information. This is a completely wrong approach. For example, a 2-petabyte Oracle database alone doesn’t constitute a Big Data situation, just a high-load one. To be very precise, Big Data is a series of approaches, tools, and methods for processing high volumes of structured and (most importantly) unstructured data. The key difference between Big Data and “ordinary” high-load systems is the ability to create flexible queries.

The Big Data trend first appeared five years ago in the U.S., when Google researchers announced their achievement in the scientific journal Nature: without any medical test results, they were able to track the spread of flu in the U.S. by analyzing the volume of Google search queries related to influenza-like illness in a population.

Today, Big Data can be described by three “Vs”: Volume, Variety and Velocity. In other words, you have to process an enormous amount of data of various formats at high speed. The processing of Big Data, and, therefore its software testing process, can be split into three basic components.

The process is illustrated below by an example based on the open source Apache Hadoop software framework:

  • Uploading the initial data to the Hadoop Distributed File System (HDFS).
  • Execution of Map-Reduce operations.
  • Rolling out the output results from the HDFS.

Uploading the initial data to HDFS

In this first step, the data is retrieved from various sources (social media, web logs, social networks, etc.) and uploaded to HDFS, where it is split into multiple files:

  • Verify that the required data was extracted from the original system and there was no data corruption (a sketch of such an upload check follows after this list).
  • Validate that the data files were uploaded to the HDFS correctly.
  • Check the files partition and copy them to different data units.
  • Determine the most complete set of data that needs to be checked. For a step-by-step validation, you can use such tools as Datameer, Talend or Informatica.
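As an illustration of the first two checks, here is a minimal sketch that compares record counts before and after the upload; the paths are assumptions, and the standard `hdfs` command-line client is expected to be on PATH:

```python
import subprocess

def hdfs_line_count(path):
    # Concatenate the uploaded files and count records;
    # the HDFS path pattern is illustrative.
    result = subprocess.run(["hdfs", "dfs", "-cat", path],
                            capture_output=True, check=True)
    return len(result.stdout.splitlines())

source_count = sum(1 for _ in open("clickstream.log", "rb"))
hdfs_count = hdfs_line_count("/data/raw/clickstream/*")
assert source_count == hdfs_count, "records lost or corrupted during upload"
```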

Execution of map-reduce operations

In this step, you process the initial data using a Map-Reduce operation to obtain the desired result. Map-reduce is a data processing concept for condensing large volumes of data into useful aggregated results:

  • Check the required business logic on a standalone unit and then on a set of units.
  • Validate the Map-Reduce process to ensure that the “key-value” pair is generated correctly (see the sketch after this list).
  • Check the aggregation and consolidation of data after performing “reduce” operation.
  • Compare the output data with initial files to make sure that the output file was generated and its format meets all the requirements.
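The key-value check from the list above can be prototyped outside the cluster. A self-contained sketch using a word count, the canonical Map-Reduce example:

```python
from itertools import groupby
from operator import itemgetter

def mapper(line):
    # Map phase: emit a (key, value) pair per word.
    return [(word, 1) for word in line.split()]

def reducer(pairs):
    # Reduce phase: aggregate values per key.
    pairs = sorted(pairs, key=itemgetter(0))
    return {key: sum(value for _, value in group)
            for key, group in groupby(pairs, key=itemgetter(0))}

lines = ["big data big results", "big deal"]
pairs = [kv for line in lines for kv in mapper(line)]
assert all(isinstance(k, str) and v == 1 for k, v in pairs)  # key-value check
print(reducer(pairs))  # {'big': 3, 'data': 1, 'deal': 1, 'results': 1}
```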

The most appropriate language for verifying the data is Hive. Testers prepare requests in the Hive (SQL-style) Query Language (HQL) and send them to HBase to verify that the output complies with the requirements. HBase is a NoSQL database that can serve as the input and output for Map-Reduce jobs.
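A sketch of such a verification from Python, assuming the PyHive client as one possible way to submit HQL; the host, table, and expected count are placeholders:

```python
from pyhive import hive  # one possible Hive client; an assumption here

conn = hive.Connection(host="hive-server", port=10000, username="qa")
cursor = conn.cursor()
cursor.execute(
    "SELECT count(*) FROM output_events WHERE event_date = '2018-01-15'"
)
(actual,) = cursor.fetchone()  # compare the output with the requirements
assert actual == 1000000, f"unexpected record count: {actual}"
```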

You can also use other Big Data processing programs as an alternative to Map-Reduce. Frameworks like Spark or Storm are good examples of substitutes for this programming model, as they provide similar functionality and are compatible with the Hadoop ecosystem.

Rolling out the output results from HDFS

This final step includes unloading the data generated in the second step and loading it into the downstream system, which may be a repository for report generation or a transactional analysis system for further processing:

  • Conduct an inspection of data aggregation to make sure the data has been loaded into the required system and was not distorted.
  • Validate that the reports include all the required data, and that all indicators refer to concrete measures and are displayed correctly.

Testing data in a Big Data project can be obtained in two ways: copying actual production data or creating data exclusively for testing purposes, the former being the preferred method for software testers. In this case, the conditions are as realistic as possible, and it becomes easier to work with a larger number of test scenarios. However, not all companies are willing to provide real data, preferring to keep some information confidential. In that case, you must create the testing data yourself or request artificial data. The main drawback of this scenario is that artificial business scenarios created from limited data inevitably restrict testing; only real users will be able to detect the remaining defects.

As speed is one of Big Data’s main characteristics, performance testing is mandatory. A huge volume of data and an infrastructure similar to the production one are usually created for performance testing. Furthermore, if acceptable, data is copied directly from production.

To determine performance metrics and detect errors, you can use, for instance, a Hadoop performance monitoring tool. Performance testing tracks fixed indicators like operating time and capacity, as well as system-level metrics like memory usage.

To be successful, Big Data testers have to learn the components of the Big Data ecosystem from scratch. Since the market has not yet created fully automated testing tools for Big Data validation, the tester has no option but to acquire the same skill set as the Big Data developer in the context of leveraging Big Data technologies like Hadoop. This requires a tremendous mindset shift both for testers and for testing units within organizations. To be competitive, companies should invest in Big Data-specific training and in developing automation solutions for Big Data validation.

In conclusion, Big Data processing holds much promise for today’s businesses. If you apply the right test strategies and follow best practices, you will improve Big Data testing quality, which will help to identify defects in early stages and reduce overall cost.

You can also read the article on Computer Technology Review.
