As we approach the culmination of 2023, it’s time to pause and reflect on the wealth of knowledge shared during a1qa’s online roundtables.

Let’s cut to the chase!

Unveiling the importance of a1qa’s roundtables for IT leaders

Recognizing the paramount importance of fostering a dynamic exchange of QA insights and best practices, a1qa hosts a series of monthly online roundtables designed for top executives.

These exclusive sessions bring together diverse IT experts to deliberate on topical QA-related issues, such as quality engineering trends, test automation, and shift-left testing principles.

Roundup of a1qa’s 2023 sessions

The first quarter roundtables overview

During this period, participants discussed three relevant topics — “A practical view on QA trends for 2023,” “How to get the most of test automation,” and “Dev+QA: constructive cooperation on the way to project success.”

Analyzing QA trends helps business executives proactively shape their QA strategies, ensuring they are in sync with the industry’s evolving landscape, while automation assists them in accelerating an IT product’s delivery, enhancing its quality, and reducing operational expenditure.

Also, the attendees talked about the best moment for QA to step into the SDLC stages and methods to make the communication between Dev and QA more efficient.

The second quarter roundtables overview

This period was marked by three vibrant conversations:

  1. “QA for complex software: tips for enhancing the quality” — IT peers shared the challenges they encounter when testing sophisticated systems and the ways to overcome them.
  2. “How to release a quality product within a limited budget” — C-level reps exchanged practical experience on mapping software quality expectations to a QA strategy and optimizing QA costs.
  3. “How to improve QA processes with shift-left testing principles” — participants discussed how shifting QA workflows left allows businesses to identify and fix defects early on while speeding up the release of top-quality applications.

The third quarter roundtables overview

“A closer look at the field of automated testing” took center stage during the 3rd quarter, emphasizing how to derive more value from test automation supported by AI and behavior-driven development.

The fourth quarter roundtables overview

During the last quarter of 2023, IT executives engaged in two insightful conversations — “How to organize testing and increase confidence when starting a new project” and “Rough deadlines: how to deliver better results in less time.”

At the October event, the attendees revealed the best QA approach to choose to be confident in a project’s success from the outset, optimize ROI, and reduce business risks. The November roundtable helped the participants voice their ideas and share real-life cases on meeting tight deadlines without compromising software quality.

Thanks for being part of our roundtables in 2023!

To sum up

Our journey through 2023’s diverse and insightful roundtable discussions, hosted by a1qa professionals with in-depth QA and software testing expertise, has been a testament to the company’s commitment to fostering knowledge, collaboration, and innovation in the ever-evolving IT landscape.

From exploring emerging QA trends to delving into the nuances of automated testing, each session has played a pivotal role in helping IT executives shape future strategies.

Need support in refining the quality of your IT solutions? Reach out to a1qa’s team.

Digitalization is pushing many companies to develop state-of-the-art software products faster than competitors do to grow their businesses. And this movement towards the creation of new technical capabilities is bearing fruit. Web services perform hundreds of diverse tasks to optimize production processes, which saves companies’ resources and provides end users with more useful options.

Take, for example, world-renowned manufacturers, retailers, and marketers of goods that introduce new technologies allowing them to remotely monitor the condition of the machines creating their products. Experts can study a problem at the factory remotely and make recommendations on troubleshooting. This reduces the number of business trips, which saves companies money.

Other businesses go even further and develop their digital projects to create IT products. They increase the recognition of the company and attract a new audience.

Let’s take a look at one success story. A media company owning a magazine-format newspaper decided to make a digital shift to the online space. To meet the expectations of customers from all over the world and provide them with a quick and stable digital version of the solution, the client turned to performance testing, a series of checks on the system’s responsiveness to high load. The results of these tests helped adjust the software product to the end users’ needs. The news portal has continued to work smoothly since the launch of the online version.

However, this update process could have failed if the newspaper’s website hadn’t withstood the influx of new users. How can such incidents be avoided? The solution is to collect more information about how an IT product works in different conditions and under different circumstances.

Performance testing, which helped achieve the business goal in the above-mentioned success story, needs no special introduction. In this article, we will highlight its main types and discuss which metrics to take into account for tracking the flawless functioning of the system.

Components of performance testing

As the saying goes, whoever owns the information owns the world. In our case, those who own the information are likely to develop their web product more successfully.

The stable and smooth functioning of the software depends on an understanding of the state of the server, systems, and infrastructure. Up-to-date performance data allows the QA team to respond quickly to software problems and solve them.

What software indicators are analyzed throughout the performance testing process?

  • Behavior under expected load
  • Work with different configurations
  • Stability over time
  • Scalability features
  • Productivity with increasing data volume
  • Capacity limits

Let’s discuss each of these indicators in more detail.

  • Behavior under expected load (load testing)

Implementation of this performance testing type helps evaluate and predict the behavior of the system under real-life load conditions when multiple users utilize it simultaneously.

If you don’t know how your web product works under a specific expected load, then how can you be sure that it will function as required every day?
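To make the idea concrete, here is a minimal, hypothetical sketch of load generation with concurrent virtual users; real projects use dedicated tools (JMeter, Gatling, and the like), and `send_request` below is a stub that simulates server work rather than a real HTTP call:

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

def send_request() -> float:
    """Stub for a real HTTP call; simulates 50-150 ms of server work
    and returns the observed latency in seconds."""
    started = time.perf_counter()
    time.sleep(random.uniform(0.05, 0.15))
    return time.perf_counter() - started

def run_load_test(virtual_users: int, requests_per_user: int) -> list:
    """Fire requests from `virtual_users` concurrent workers and
    collect all observed latencies."""
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        futures = [pool.submit(send_request)
                   for _ in range(virtual_users * requests_per_user)]
        return [f.result() for f in futures]

latencies = run_load_test(virtual_users=10, requests_per_user=5)
print(f"{len(latencies)} requests, worst latency {max(latencies):.3f}s")
```

Comparing the collected latencies against the expected daily load profile is exactly what load testing formalizes.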

  • Work with different configurations (configuration testing)

A web service can run on different devices and platforms. Configuration testing helps predict the behavior of a software product operating with complex combinations of software and hardware configurations. It allows you to detect defects in the web service in a timely manner and eliminate them before go-live.

  • Stability over time (stability testing)

This verification type allows you to evaluate the stability of the software product over a certain period. Indeed, the fact that the system does not fail right now is no confirmation of the web service’s impeccability. Stability testing defines the ability of your software product to work over a long period of use without crashing at any moment.

  • Scalability features (scalability testing)

The goal of scalability testing is to verify that a system can adapt to changing conditions, such as increasing processing power or changing the architecture. Scalability checks verify that your application is ready for critical situations, such as user traffic scaling up or down.

  • Productivity with increasing data volume (volume testing)

This type of performance testing is necessary if you are wondering how the web service behaves when the volume of user and system data stored in the database increases. Volume testing is perfect for long-running projects and is conducted to ensure the system can cope with large amounts of data. It also reveals such bottlenecks as insufficient memory resources, incorrect data storage, and data loss.

  • Capacity limits (stress testing)

The main goal of this type of verification is to figure out the maximum load that a system can handle without failures. You can also evaluate whether the system remains available when processor time, memory, and network bandwidth change under the increased load.

Stress testing also helps define the time the software needs to return to its normal state after stress conditions. In addition, one can make sure that valuable data is saved before a crash and that the crash does no harm to the tested system.
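A rough sketch of the ramp-up logic behind stress testing, with a simulated system whose capacity (400 users) and error model are invented for illustration:

```python
def system_under_test(concurrent_users: int) -> float:
    """Return the observed error rate at a given load (simulated:
    the hypothetical system degrades past 400 concurrent users)."""
    capacity = 400
    if concurrent_users <= capacity:
        return 0.0
    return min(1.0, (concurrent_users - capacity) / capacity)

def find_capacity_limit(step: int = 50, max_error_rate: float = 0.05) -> int:
    """Increase load step by step; report the last load level that
    kept the error rate within the acceptable threshold."""
    load = step
    while system_under_test(load) <= max_error_rate:
        load += step
    return load - step

print("Capacity limit:", find_capacity_limit(), "concurrent users")
```

In a real stress test, `system_under_test` would be replaced by an actual load run against the application at each step.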

Information analysis methods

All these performance testing types help identify possible defects and eliminate them promptly. They also reveal what is equally important to know about your software product: its bottlenecks and the real level of software quality.

When choosing indicators for assessing the quality, you can focus on the following factors:

  • Conditions for the information collection and analysis
  • Accuracy of data analysis
  • Importance of specific criteria.

A realistic assessment of software quality is based on metrics, i.e. measurements of particular characteristics. For example, valuable indicators for web services include the speed and completeness of page loading, factors for evaluating user convenience, and more.

We suggest considering these valuable indicators:

  • Response time
  • Concurrent users
  • Requests per second
  • Transactions per second
  • Error rate.

What does each of these indicators mean, and why should it be considered when evaluating the quality of a software product? Let’s specify.

  • Response time 

This is the time it takes the server to complete a user request. Lengthy page or image loading may indicate poor web service performance. The optimal value of this parameter depends on the characteristics of the product and other external factors, but the average time for a request should not exceed 3 seconds.

When evaluating the response time, it is worth considering not only average values but also peak ones, which can help identify non-obvious system bottlenecks.

For example, the average page loading time for a mobile application is 2 seconds. This is an acceptable indicator. But a large image increases the download time to 10 seconds. This is a peak indicator that identifies the performance bottleneck of a web product.

Realizing such details will help you better understand the capabilities of the software product.
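The arithmetic behind the example above can be sketched in a few lines; the samples here are made up, and the p95 calculation is a simple index-based approximation:

```python
import statistics

# Hypothetical page-load samples in seconds: mostly fast pages, plus one
# image-heavy page that drags the peak up without moving the average much.
samples = [1.8, 2.1, 1.9, 2.0, 2.2, 1.7, 2.0, 1.9, 2.1, 10.0]

average = statistics.mean(samples)
peak = max(samples)
p95 = sorted(samples)[int(len(samples) * 0.95) - 1]  # crude percentile

print(f"average {average:.2f}s, p95 {p95:.2f}s, peak {peak:.2f}s")
```

The average here still looks tolerable, while the peak value immediately points at the outlier worth investigating.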

At the traditional summer professional conference, our performance testing experts covered the topic of optimized page loading speed. The picture below shows cherry-picked points from the presentation.

Page loading a1qa

  • Concurrent users

Do you know how many virtual users can use your web service in parallel?

This metric allows you to identify problems with the inability to work under high load. It is important to consider this indicator on the eve of a planned influx of users. After all, the inability to cope with the requests of many users can lead to disruption of the regular operation of the application, while possible freezes or incorrect query execution can cause the opposite effect: an outflow of users.

  • Requests per second 

The number of successful requests per second is another important measure of web application performance. This metric takes into account all possible options for the interaction of the user and the software product (information search, loading or unloading of data, etc.).

After all, a low number of requests processed per second may indicate unsatisfactory server operation.

  • Transactions per second

In this case, we are talking about operations that are executed as a single chain. Examples of a transaction are the transfer of money from one account to another, adding a product to the cart, or subscribing to a newsletter.

This indicator reflects the number of completed transactions for a specific time. The metric reveals the maximum user load.

For the user, a low number of transactions per second means slower completion of specific tasks. The timely detection of defects in this area will make the software product more reliable.

  • Error rate

The error rate indicates the frequency of errors in the working software product. This metric depends on the number of concurrent users and queries.

An increase in the number of errors during the testing process indicates a problem state of the system. This takes the software product beyond stable operation.

The error rate is not a universal or perfectly accurate indicator. Still, we can say with confidence that an error rate close to 0 confirms the high quality of the software product.
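For illustration, computing the error rate from a batch of response status codes is straightforward (the status data below is hypothetical):

```python
# Hypothetical HTTP status codes collected during a test run:
# 970 successes, 20 server errors, 10 "service unavailable" responses.
statuses = [200] * 970 + [500] * 20 + [503] * 10

errors = sum(1 for code in statuses if code >= 400)
error_rate = errors / len(statuses)

print(f"error rate: {error_rate:.1%}")
```

Tracking how this value changes as concurrent users and queries grow is what turns it into a useful signal.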

These are significant but not the only factors for evaluating the effectiveness of a software product. It may seem that measuring more factors makes the project better, but it doesn’t.

There are two extremes:

  • Attempts to measure every issue of the project
  • Ignoring quality indicators.

Both approaches in their pure form are ineffective. It is important to control the software product quality and refer to the necessary and relevant indicators.

Summarizing

Evaluation of software quality is a key requirement for its successful development. A series of tests that assess the behavior of the service under high load or load scaling can form a complete picture of the product. Metrics help evaluate the system’s performance and accurately identify system bottlenecks that may entail difficulties in using the IT product.

Do you know everything about your web application performance? Get a free consultation with the a1qa experts on your software quality issues.

Do you have strong confidence in successful customer experience (CX) on your website? If you’re not sure of the answer, this article can be helpful for you.

In most cases, when people hear about performance testing, they think about server-side optimization. Let’s imagine your server is fully tested for performance, all back-end bottlenecks are fixed, and the system can handle the desired load.

At the same time, when the user navigates through your website, he or she doesn’t care about the possible server load and the response time. The only thing users care about in terms of performance is how fast they can book tickets, read an article, or put some long-awaited goods into a basket.

Even with a flawlessly configured and optimized application server, the client side is the key to improved CX. The best way to find this key is to perform client-side testing.

Based on the client-side performance testing results, you can optimize the layout of the pages and speed up the application’s full load time in the GUI.

Let’s look at real-world examples of how client-side performance can strongly affect CX.

Client-side testing essentials

Almost everyone has once faced the need to find flight tickets. With popular destinations, it is a common thing when several airlines are flying to the same place. With a large choice of carriers, the final decision made by a user is based not only on the best price/time combination but also on the speed and smoothness of the website. Page load time and time until UI is interactive are crucial for positive CX.

According to Google research (www.thinkwithgoogle.com), if page load time takes 1–3 seconds, the probability of bounce (when a user leaves the first page without further interaction) increases by 32%, whilst page load time of 10 s increases the bounce probability by 123%.

To test from user perspective, one should think like a user and behave like a user. Customers don’t care about the response time from the server, they just want the page to load faster.

Even if an application handles thousands of concurrent users and gets responses from the server in less than a second, it doesn’t mean the page will return results to the users fast too.

To back this up with data, we conducted client-side testing of the five biggest European airlines, compared performance from the end-user perspective, measured the basic performance metrics, and identified the applications’ bottlenecks. We tested the main functionality of the official airlines’ websites: the main page, user profile, and flight search.

Test environments

Basic metrics you need to consider include:

  • First Meaningful Paint is the point when the primary page content becomes visible.
  • Speed Index is the average time at which the page displays its visible parts.
  • Time to Interactive is a metric determining the interactivity of the page based on network and JS activity.
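For instance, these metrics can be pulled programmatically from a Lighthouse JSON report (`lighthouse --output=json`); the audit keys below follow Lighthouse’s report schema, while the report content is a stubbed sample:

```python
import json

# Stubbed fragment of a Lighthouse JSON report. In a real run, this
# would be the file produced by `lighthouse <url> --output=json`.
report_json = """
{
  "audits": {
    "first-meaningful-paint": {"numericValue": 2100.0},
    "speed-index":            {"numericValue": 3400.0},
    "interactive":            {"numericValue": 5200.0}
  }
}
"""

report = json.loads(report_json)
for key in ("first-meaningful-paint", "speed-index", "interactive"):
    ms = report["audits"][key]["numericValue"]  # values are in milliseconds
    print(f"{key}: {ms / 1000:.1f} s")
```

Automating this extraction makes it easy to track the metrics across builds instead of reading each report by hand.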

Test results

We gathered the metrics using the Lighthouse tool. The graphs below present the values for clearer visual comparison:

Main page

User profile page

Search flights page

Based on the test results and evaluations obtained with the Lighthouse tool, the applications’ ranking is the following:

  • https://www.turkishairlines.com/
  • https://www.britishairways.com/
  • https://www.klm.com/
  • https://www.ryanair.com/
  • https://www.airfrance.fr/.

However, even the leader in this comparison, the Turkish Airlines website, takes more than 5 seconds to become fully interactive; the others fare even worse. Twenty seconds until the page is fully accessible for the user is something one cannot afford in the competitive web application market.

How to enhance client-side performance

Running your website through the Lighthouse tool and reviewing the page load metrics is half the battle. What’s left is decreasing the page load time. Let’s go through the most common client-side performance issues.

Common client-side performance pitfalls are:

  • Unused CSS
  • Render-blocking resources
  • Unoptimized image compression and formats.

Another bottleneck is JavaScript (JS) execution time. To reduce this parameter, one can consider the following steps:

  • JS code minification
  • JS code compression
  • Unused JS code removal
  • JS code caching.
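As a toy illustration of why minification and compression pay off, the snippet below strips comments and indentation from a repetitive "bundle" and gzip-compresses it; the minifier is deliberately crude and not suitable for real JS:

```python
import zlib

# A toy "bundle": the same small function repeated, as real bundles
# often contain highly repetitive code. The JS itself is illustrative.
source = """
// toggles the mobile navigation menu
function toggleMenu(menuId) {
    var menu = document.getElementById(menuId);
    menu.style.display = (menu.style.display === 'none') ? 'block' : 'none';
}
""" * 20

# Crude "minification": drop // comments and leading/trailing whitespace.
# (Real minifiers parse the code; this would break JS containing URLs.)
minified = "".join(
    line.split("//")[0].strip() for line in source.splitlines()
)
compressed = zlib.compress(minified.encode())

print(len(source), len(minified), len(compressed))
```

Even this naive pass shrinks the payload substantially, and compression collapses the repetition further, which is exactly the effect minification plus gzip/Brotli delivers on real bundles.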

Once the project gets rid of all unused resources, compresses images and JS code, and properly orders resource loading, the page load time can decrease dramatically, leaving end users satisfied and encouraged to continue their journey on the website.

However, client-side testing is not a performance magic wand.

Server response time still matters: if the webpage is ready to render fast but doesn’t get any response, CX is affected strongly.

If you face slow server response time, here are possible solutions:

  • Optimize the server application logic. If you use a server framework, it may have recommendations on how to do this right.
  • Optimize database queries. Together with query optimization, consider migrating to a faster database.
  • Upgrade the server hardware to get more memory and CPU.

Afterword

Client-side testing is key to smooth CX. With the increasing number of elements on pages, as well as the high number of competitors in the market, it is vital to ensure the website loads and becomes interactive fast. Client-side testing has its advantages:

  • Can be automated or just used to test the most significant pages
  • Doesn’t require a separate testing environment
  • Can be conducted with or without load
  • Doesn’t take a lot of time
  • Doesn’t require special skills or consume a lot of resources.

Have a look at the summarized report submitted by a QA professional at the traditional a1qa summer conference. It highlights how applying test automation best practices helped fast-track the quality assurance process on a real-life project.

Automating client-side testing

However, client-side testing is not something that will resolve all your performance issues. It identifies front-end performance bottlenecks but works the best way when combined with server-side performance testing.

Book a free consultation with the a1qa professionals to see how we can boost the performance of your software product.

Before solving any problem, the trouble spots should be detected; only then can measures be taken to fix the situation.

Often, our customers complain about the slow loading of their websites and ask us to test the system performance. Our most frequent response is the following: first, you need to test the client-side of the application. Once you’ve determined and eliminated all bottlenecks on this side, you can proceed with the performance testing.

What is the difference between performance testing and client-side testing?

Every web application consists of a client and server side. The client-side, or front-end, implements the user interface, generates requests to the server and processes the responses received from the server.

The server-side, or back-end, receives a request from the client, performs calculations, accesses the database, and generates a web page requested by the user. The page is sent to the user using the HTTP protocol, for example.

In some cases, slow loading can be caused by the server. However, usually the client-side is to blame. And timely testing helps detect the pain points.

Essentially, client-side testing is functional testing of the user interface.

This type of testing gives answers to the following questions:

  1. How long does it take to display a web page?
  2. How fast will the server reply be delivered to the user?
  3. Should the website content be optimized?
  4. Does the caching process work properly?
  5. Are there any issues with loadable resources?

Usually, the frontend optimization roadmap is the following:

  • Compress the website content.
  • Apply both client and server caching.
  • Get rid of the data that isn’t used but still loaded by a subquery. Oftentimes, there are 10 JavaScript libraries loaded with only one being used.

  • Configure cookies settings properly.
  • Store static data on a separate CDN server. This way, a user from the UK will get images from a server geographically located closer to them, and as a result, the loading time will decrease.
  • Simplify and optimize JavaScript on the client-side.
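To illustrate the caching point from the roadmap above, here is a minimal sketch of conditional responses with ETags; real web servers and frameworks implement this mechanism for you, and the function names here are illustrative:

```python
import hashlib
from typing import Optional

def make_etag(body: bytes) -> str:
    """Derive a validator token from the response body."""
    return '"%s"' % hashlib.md5(body).hexdigest()

def respond(body: bytes, if_none_match: Optional[str]):
    """Return (status, body, headers). If the client's cached ETag
    still matches, answer 304 Not Modified with an empty body."""
    etag = make_etag(body)
    if if_none_match == etag:
        return 304, b"", {"ETag": etag}  # client cache is still fresh
    return 200, body, {"ETag": etag, "Cache-Control": "max-age=3600"}

page = b"<html>...</html>"
status, _, headers = respond(page, None)              # first visit
status2, body2, _ = respond(page, headers["ETag"])    # revisit
print(status, status2)
```

On the revisit, the body is never resent, which is precisely how client caching cuts loading time for repeat visitors.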

Logically, there is no sense in conducting performance testing until the client side has been enhanced (which is within the scope of front-end developers’ responsibility). The server may be as fast as light, but the user experience will still suffer from slow loading.

Performance testing as the second step

Performance testing checks the server side (back end) of the application. It is performed bypassing the graphical user interface: the load is served via direct queries to the server. A competent QA team can also prepare scripts that emulate how the interface interacts with the server. This helps imitate the concurrent work of as many users as needed. As a result, it becomes clear how the server copes with the production load.

This type of testing will provide answers to the following questions:

  1. How fast does the app perform?
  2. How many concurrent users can it handle?
  3. Is there a place for server performance optimization?
  4. Are there any issues that manifest themselves under the load?
  5. Can the system resources be scaled?
  6. Does the app meet the performance criteria?

In fact, there are various types of performance testing, each helping to find answers to specific questions.

Stress testing

Used to determine the upper limits of the application’s capacity. We recommend performing this type of testing when traffic is expected to increase significantly, for example, during the holiday sales season.

Load testing

Involves testing the app with a constant load served over a continuous period of time. Usually the load is applied for 4-8 hours; however, the period can stretch to days or more, depending on the performance expectations. While the system runs under load, the necessary metrics are collected and checked against the set requirements.

As a result, the app behavior is validated under production use.

Volume testing

Refers to testing the app with a certain amount of data. You’ll learn how the system behavior will change should the data volume (database size, interface file size, etc.) increase.

Soak testing

The duration of the load can vary depending on the goals and capabilities of the project, reaching up to seven days or more. As a result, you will see how the performance of the system changes over a long period of time under load, for example, during a week. Will the level of performance decrease? Is the application capable of withstanding a stable load without critical failures?

Scale testing

Allows you to find out how the performance will change (if at all) when new hardware resources are added.

Web application testing types: it’s all about the load

By now, you should understand that client-side testing and performance testing are different activities, with the main difference being the applied load. If you need to find out why the website loads slowly with a single visitor on it, request client-side testing. If you’re interested in discovering how a user from Mallorca will experience your app, client-side testing will also be of help.

On the other hand, if you want to learn how the website will function with, let us say, 1,000 concurrent visitors from different locations, this is a task for performance testing experts. They will use the right tools and technologies, analyze real usage situations, and provide tangible results along with a roadmap to take your web solution from good to great!

Before a software product release, it’s vital to perform a series of tests in order to check its quality. However, often project managers don’t consider one crucial testing type, which is accessibility testing.

Accessibility testing is the process of checking whether the product is usable by people with disabilities.

In this post, we will elucidate its relevance as well as core standards and guidelines applied for such a process.

Why is accessibility testing vital?

First and foremost, it can help your business grow. If the app is truly accessible both to able-bodied people and to those with disabilities, it will surely increase loyalty towards the brand. In America alone, 67% of all people with disabilities (56 million) engage in digital experiences. Can you grasp the scale?

Moreover, revenue will grow proportionally to the size of the audience.

Who can benefit from accessible content?

  • People with deafness / hearing loss
  • Blind and partially sighted people
  • People with dyslexia and other cognitive issues
  • Individuals with multiple combinations of disabilities
  • Aging people who undergo age-related changes.

WCAG 2.0: ACCESSIBILITY TESTING FRAMEWORK

The basic recommendations for the testing of accessibility are stated in a special document called Web Content Accessibility Guidelines (WCAG) 2.0.
It’s not a stand-alone paper and is supplemented by a number of guidelines. They provide additional information in order to introduce a full outline on accessibility testing.

WCAG 2.0 standard is intended to provide accessible content to people with numerous disabilities, from visual to neurological, including their innumerable combinations.

Content accessibility deeply depends on four key principles: perceptibility, operability, understandability, and robustness. Let us have a closer look at each point.

Core principles to bear in mind

To be easily accessible by the audience, the web content should be:

1. Perceivable

The content and UI should be presented in the most suitable way for perceiving depending on this or that disability. Perceptibility can be reached on four levels.

Text-alternatives

Each non-text element should be presented as text that can be elaborated into an appropriate form (Braille, special symbols, or voicing). Take CAPTCHA, a tool that helps identify human beings, for instance. This point is of high importance: the more alternative forms of this tool appear on the page, the easier it is for people with different disabilities to perceive it.
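Part of this check can be automated; as a minimal sketch, the script below flags `<img>` elements that lack an `alt` attribute (the sample markup is hypothetical):

```python
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Collect the src of every <img> tag without an alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if "alt" not in attr_map:
                self.missing_alt.append(attr_map.get("src", "?"))

# Hypothetical page fragment: one image has a text alternative, one doesn't.
page = '<img src="logo.png" alt="Company logo"><img src="chart.png">'

checker = AltChecker()
checker.feed(page)
print("images missing alt text:", checker.missing_alt)
```

Such automated scans only cover the mechanical part of the guideline; whether the alt text is actually meaningful still needs a human review.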

Media

Diverse scenarios for media content will help people with hearing loss encounter fewer hurdles, especially if a sign language is used for prerecorded content.

Adaptable

This point is about using content that maintains its sense and structure after transformation into various forms. For instance, if the meaning depends on the content sequence, it’s vital to ensure that the order is set up correctly.

Distinguishable

People with disabilities will surely benefit from the ability to see and hear the content clearly, for instance, if there are no images of text. Such images should be applied only exceptionally, when they are necessary for the layout or play a key role in preserving the meaning.

2. Operable

This principle presupposes that people with disabilities should face no obstacles while using and navigating the app. This section also covers several points.

Keyboard accessibility

The whole app functionality should be operable by means of a keyboard, significantly simplifying users’ activities.

Enough time

A tool that allows users to pause an update or scrolling will help them study the content without haste.

Harmful design elements

Here we are talking about elements on a web page that flash; the requirement is that content flash no more than three times a second. It’s a completely reasonable requirement, as flashing content may cause people discomfort or even harm their eyesight.

Navigation

If app users have an opportunity to skip content blocks that repeat across pages, navigation won’t be a serious bottleneck anymore.

3. Understandable

The majority of bottlenecks can be offset if people with disabilities fully understand the content and UI. The list of requirements here includes three vital points.

Readable content

This point, for instance, presupposes the use of a special mechanism that allows users to receive an expanded version of each abbreviation.

Predictability

For instance, if any context changes are run only after the user’s approval, people with disabilities are unlikely to get confused.

Input assistance

Content handling will be much easier for users if they can automatically locate and correct input errors.

4. Robust

The crux of this matter is that the content should be compatible with the existing user applications.

WHO IS PROFICIENT ENOUGH TO PERFORM ACCESSIBILITY VERIFICATIONS?

In order to carry out accessibility testing successfully, it’s vital to find a professional and experienced team that:

  • Strives to introduce quality
  • Knows necessary tools specific to each mobile platform (for instance, Android, iOS, Windows Phone)
  • Always has an eye for state-of-the-art techniques implemented for such kind of tests.

SUMMING UP

Accessibility testing is a vital constituent of prerelease product preparation. The more attention is dedicated to such tests, the more users will strive to use your product regularly.

Do you want to check your application’s accessibility in accordance with the WCAG 2.0 standard? Address the a1qa specialists, who will make sure your product reaches its entire target audience.

Web application testing service is a general term that denotes different types of testing.

The main goal of any testing endeavor is to detect where there are faults/bottlenecks in your software that may cause harm to your business and find possible ways to prevent them.

In this 5-minute read, we’ll help you understand what each of these terms means and how they help you get what you want most: certainty of success in your IT project.

Three areas of concern that web application testing addresses

1. Does your app do what it is expected to do?

Functional Testing is the process of evaluating the behavior of the application to determine if all the functions perform as you expect them to perform. Examples of functional behavior include everything from limiting access to authorized users to accurately processing all transactions and correctly logging out.

Functional testing can be performed in different ways: using formal test cases or by means of exploratory testing techniques.

2. Will your app function correctly on all browsers and devices that your customers use?

Compatibility or Cross-Browser Testing is the process of evaluating the behavior of the app in a variety of configurations that include numerous browsers, screen resolutions, and operating systems.

Examples of proper Cross-Browser Testing may include testing on the latest versions of Chrome, Firefox, MS Edge, and Safari and on Windows 7, 8, and 10. It’s advisable to test several recent versions, as not all users install updates as soon as they are released.
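
As a rough illustration, a cross-browser test matrix can be generated as the Cartesian product of the browsers and OS versions you decide to support. The lists below are examples, not recommendations:

```python
# Build a cross-browser/cross-OS test matrix as a Cartesian product.
# The browser and OS lists are illustrative examples only.
from itertools import product

browsers = ["Chrome", "Firefox", "MS Edge", "Safari"]
operating_systems = ["Windows 7", "Windows 8", "Windows 10"]

# Each (browser, OS) pair is one configuration that a test runner
# (for example, a Selenium grid) would execute the suite against.
configurations = [
    {"browser": b, "os": o} for b, o in product(browsers, operating_systems)
]

for cfg in configurations:
    print(f"run suite on {cfg['browser']} / {cfg['os']}")
```

Pruning unsupported pairs (for example, Safari on Windows) and limiting each browser to its latest two or three versions keeps the matrix realistic without exploding its size.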

3. Will your web solution survive with a lot of users at the same time? Or will it crash?

Load or Performance Testing is another type of testing that determines the performance limits of the app. The typical final report by QA engineers will include the following:

  • Statistics on the response time from the server for the most crucial transactions
  • Diagrams that show the dependence of the app performance on the number of concurrent users
  • Data about the maximum possible number of concurrent users that would allow the system to cope with the load
  • Information on the system stability and its ability to cope with the continuous load
  • Error statistics
  • Conclusions on the system performance in general, its performance bottlenecks
  • Recommendations for improving the system performance.
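
As a toy sketch of how such statistics are gathered, a harness can fire a batch of concurrent transactions and summarize the response times. Here `simulate_transaction` is a stand-in for a real HTTP request, and the numbers are purely illustrative:

```python
# Minimal load-test harness: run transactions concurrently and
# collect the response-time statistics a QA report summarizes.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def simulate_transaction(_: int) -> float:
    """Stand-in for a real server call; returns elapsed seconds."""
    start = time.perf_counter()
    time.sleep(0.01)  # pretend the server takes ~10 ms to respond
    return time.perf_counter() - start

CONCURRENT_USERS = 20
with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    response_times = list(pool.map(simulate_transaction, range(CONCURRENT_USERS)))

report = {
    "min_s": min(response_times),
    "avg_s": statistics.mean(response_times),
    "max_s": max(response_times),
    "p95_s": sorted(response_times)[int(0.95 * len(response_times)) - 1],
}
print(report)
```

A real run would replace the sleep with actual requests against the system under test and ramp the user count up until the bottlenecks described above appear.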

Check out how the a1qa web app testing team ran full-cycle testing and ensured the quality of the online movie website.

Other risks that web app testing helps to mitigate

The list of questions that a team of professional QA engineers answers can be extended. Depending on your type of business and your risk appetite, there are other reasons to test your app.

1. Can unauthorized users access the app?

Security and Penetration Testing is the process of determining how and under what circumstances the app can be hacked. Security testing engineers employ a number of techniques to perform a thorough analysis and assess the level of the app’s security.

Moreover, if the app uses customers’ personal data, it’s vital to make sure the password policies are strong enough.

2. Is your web application properly adapted to the cultural and linguistic peculiarities of the target regions?

Obviously, the localized product creates more business opportunities. Localization Testing is the process of verifying localization quality.

Localization testers will deal with the following:

  • Content and UI elements translation
  • Date and time formats
  • Currency
  • Color schemes, symbols, icons, and other graphic elements that can be misinterpreted in various regions
  • Legal requirements of various regions that should be taken into account.

Actually, the latter point lies in the scope of responsibility of both Localization and Compliance Testing.

3. Does your app comply with the rules and regulations your business is subject to?

Compliance testing is the process that verifies the app’s behavior against these rules.

An example is compliance with the Web Content Accessibility Guidelines (WCAG), which should be considered when developing web products accessible to people with disabilities.

5 Questions to help you make the right choice

We hope that now you understand the purpose of every testing type. However, it can be still a difficult task to make the right choice and select one or several of them that will help your project.

Here’s a list of five quick questions. If you make your selection based on the answers to them, your chances of choosing the right testing type and the best QA vendor to perform it are high.

  1. What is the goal for your software development project?
  2. What are the project constraints?
  3. What are the top 3 risks for the project delivery?
  4. What strategy does the QA provider recommend considering the goals and constraints?
  5. What does the provider recommend to mitigate the risks?

Web application testing can be messy and complex, but it can also be safe and reliable when you understand your options and select the services that are most valuable for your business.

a1qa provides on-demand web app testing services to help you get to market faster and delight your customers. Contact us now and get an obligation-free consultation.

What is POS?

A POS (point of sale) is a software-hardware combination designed to bring the merchant’s entire ecosystem together.

Visually, it is a computer connected to a number of peripheral devices. The computer collects all information on sales transactions and inputs it into the storage system.

Why is it so important to ensure quality of POS solutions through rigorous testing?

Retail is a highly competitive business. A good point of sale can make a significant difference: it increases process efficiency by eliminating unnecessary work. If the POS doesn’t function as expected, it’s likely to cause significant trouble:

  • More man-hours will be needed to process and correct unreliable and slow checkouts.
  • Risks of incorrect records and employee theft or misconduct will go up.
  • Cost control will become more challenging.
  • Sales reports will provide erroneous information, preventing informed business decisions.
  • Promotions, discounts, and coupons will be hard to track.
  • Business may lose loyal customers due to incorrect loyalty member data.

Therefore, it’s clear that any POS solution should be reliable, easily scalable and maintainable, highly performing, secure, and customizable. Ensuring all this demands a strong focus on testing the solution properly before deployment.

Throughout POS testing, a QA team should bear in mind:

  • Positive and negative scenarios: To prevent any issues at the customer end, test cases should be designed covering every positive and negative scenario (invalid PIN, expired card, etc.).
  • Connected devices: Peripheral devices connected to a POS (a barcode scanner or a cash drawer) may cause some issues that should be considered by a professional QA team.
  • PCI compliance: Electronic payment is the basis of any POS solution. Hence, a good POS system should meet the globally accepted security standard such as PCI compliance to protect cardholders’ data and integrity.
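
The positive/negative scenario idea can be sketched as a table-driven test. Note that `validate_payment` and its rules are hypothetical, for illustration only – they are not a real POS API:

```python
# Table-driven positive and negative scenarios for a card-payment check.
# validate_payment is a hypothetical stand-in, not a real POS API.
from datetime import date

def validate_payment(pin: str, expiry: date, today: date) -> str:
    if not (pin.isdigit() and len(pin) == 4):
        return "invalid PIN"
    if expiry < today:
        return "expired card"
    return "approved"

TODAY = date(2024, 1, 1)
cases = [
    ("1234", date(2030, 1, 1), "approved"),      # positive scenario
    ("12ab", date(2030, 1, 1), "invalid PIN"),   # negative: malformed PIN
    ("1234", date(2020, 1, 1), "expired card"),  # negative: expired card
]

for pin, expiry, expected in cases:
    assert validate_payment(pin, expiry, TODAY) == expected
print("all scenarios behaved as expected")
```

A real suite would extend the table with every negative path the payment terminal can hit (declined authorization, connection loss, partial refunds, and so on).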

Conducting POS testing

A rigorous test scenario aimed at ensuring the high quality of POS solutions should include the points given below:

1. Cashier activities: This should be the starting point in every POS testing project.

These are the main points that should be checked by a QA engineer:

  • Correctness of the entry of items purchased
  • Correctness of the Total applied
  • Validity of discount coupons/gift cards
  • Total and closing figures match.

2. Sales: Regular sales, sales with a credit/debit card, management of returns and exchanges, quantities, and prices.

3. Discounts and promotions applied.

4. Loyalty member data: The system should track who the customers are, what kind of purchases they make, and how often. An effective POS system will also remind you to e-mail your customers prior to their birthday, for example.

5. Performance of the system: How long does it take to send a request and get a response applying all transaction rules?

6. The smart terminal’s ability to read various kinds of cards.


Testing POS software applications

There are apps for every retail need on the market: accounting apps, eCommerce solutions, employee management, etc. Their objective is to automate and streamline aspects of the retail workflow. By synchronizing data with any of these apps, merchants get an opportunity to manage stocktaking and wastage data, increase customer loyalty, collect feedback, etc.

But once again.

Each and every app should be tested.

Usually, it will take functional and integration testing to ensure the app delivers its value. Functional testing will check the app from the functional requirements standpoint, while integration testing will detect any issues related to the app connection to other business software solutions.

When testing apps for POS terminals, one has to cover all the scenarios. A good tester should be able to get into the shoes of a cashier serving customers and check all the flows.

It’s also important to differentiate between the functionality of the app and that of the POS. To do this, good test documentation is needed and the team should be aware of all business and user requirements.

Check out the case study on how a1qa ensured the high quality of POS apps prior to their release.

It’s vital to assess the system’s performance with any app installed. A slow and unresponsive POS solution will spoil any business. If an app slows down request handling, it should be uninstalled and replaced with one that doesn’t degrade performance.

Summing up

If you’re engaged in a retail business, evaluation of your POS solution is a vital step to ensure an acceptable level of quality in all essential areas, such as reliability, security, and recovery management.

Contact a1qa Retail Testing Center of Excellence to learn what QA solution will fit your POS needs best.

Transformation of the billing solution

The billing system is a vital element in any telecom network. High-quality billing solutions predetermine great customer service and operator’s stellar reputation in the market.

A traditional billing system is network-derived and serves as a tool to calculate fees for service usage (mainly voice and SMS). However, customers’ needs are changing, and the realities of the digital economy press telecom operators to transform their business models and billing solutions.

A redesigned billing solution should have all the features to generate complex offerings and value-added services, operate in real time (no user wants to exceed their data cap while watching a video), and be agile in terms of services and products.

The transformation process is a long journey in terms of development and quality assurance work that should go unnoticed by customers. To this end, the fees and terms of service provisioning should stay the same, as even a slight increase in fees or a calculation mistake will deteriorate the customer experience and increase claims.

Telecom data migration testing

Ensuring the high quality of telco software is a key area of a1qa’s expertise.

When testing data migration to the new solution, our company applies a combination of testing types. Yet, the final one is Parallel Testing (also called Back-to-back Testing).

This article provides insights into what we believe needs to be considered and actioned as part of planning and executing successful Parallel Testing in the telecom industry.

What is Parallel Testing?

From the telecom industry’s point of view, Parallel Testing is a strategy to verify the quality of data migration from the existing system to the target one. Testing is performed on the same data with both systems running side by side. The results are compared, and any mismatches are analyzed.

It is expected that, in the end, any transaction for the migrated clients will have the same effect when performed in the legacy (old) system and the target (new) one.

In our context, the effect means the same fees charged for the usage of the same services, and identical calculation and reflection of payments on the customer’s balance sheet.

Any discovered discrepancy is a potential defect in software configuration, migration process, or functionality.
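
The comparison step can be sketched roughly as follows; the charge records below are invented for illustration:

```python
# Compare per-customer charges produced by the legacy and target billing
# systems for the same transactions; any mismatch is a potential defect.
legacy_charges = {"cust-1": 12.50, "cust-2": 8.00, "cust-3": 21.30}
target_charges = {"cust-1": 12.50, "cust-2": 8.10, "cust-3": 21.30}

discrepancies = {
    cust: (legacy_charges[cust], target_charges[cust])
    for cust in legacy_charges
    if abs(legacy_charges[cust] - target_charges[cust]) > 1e-9
}

for cust, (old, new) in discrepancies.items():
    print(f"{cust}: legacy={old:.2f}, target={new:.2f} -> analyze")
```

In practice, the comparison scripts run over millions of records and every flagged pair is traced back to configuration, migration, or functionality issues.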

What business processes are tested?

Parallel Testing verifies that the following critical business processes, which handle a large scope of migrated data, work as expected:

  • Cash payments processing
  • Online and offline calls processing
  • Balance forwarding, payments, and fee adjustments processing
  • Fee calculation
  • One-time charges calculation
  • Change of the data plan
  • Service enabling/disabling
  • Data packets activating/deactivating
  • SIM card replacement
  • Billing
  • Remuneration calculation

Setting up environment for Parallel Testing

Setting up the right test environment will ensure testing success. The following components are required for Parallel Testing:

  • A testbed with a target system
  • An environment for data comparison and analysis

The data for the legacy system are collected from the production environment.

Before testing starts, at least one billing plan with all its products should be set up on the testbed. Mapping tables with all products and customer attributes should be developed.

On the testbed of the target system, there should be a stable version of the latest release that has passed system and acceptance testing.

An additional test environment will help to accomplish the following tasks:

  • Copy operating results of the business processes under test (fees, payments, bonuses, accounts, post-migration data records, etc.)
  • Launch scripts for comparison and save results
  • Analyze discrepancies with the help of the supporting subject tables

Two phases of Parallel Testing

Parallel Testing is performed in two phases:

  • Preliminary phase
  • Regular phase

In the preliminary phase, various kinds of defects are detected and eliminated: poor product mapping, incomplete transfer of client attributes, poor synchronization of data between billing subsystems, and functionality flaws.

Scripts for results comparison and data analysis will also be debugged in this stage.

Finally, the testing team should get ready for discrepancies analysis before launching regular tests.

Once the preliminary round of testing is over, the regular phase begins.

The main goal of the regular round is to detect and eliminate any remaining defects of the kinds mentioned above.

The difference between the two rounds lies in the amount of client data under test. In the preliminary round, engineers take only a small portion of the clients to be migrated. In the regular phase, all clients are taken.

By the way, in some cases, it’s possible to omit the preliminary phase.

Dry run testing phase

In either phase (preliminary or regular), Parallel Testing is performed immediately after a Dry Run iteration.

Dry Run provides the scope of clients that can be migrated to the new system.

For example, the project requirements may define that the clients with a debt in the balance sheet can’t migrate until the debt is paid off.

So in fact, Dry Run is the preparation of data for Parallel Testing.

Once the testing is over, all discrepancies are analyzed and the reasons for them are examined. If necessary, defects are reported to the bug tracking system.

After that, the discrepancy statistics correlated with business processes are collected. The discrepancies’ impact on the overall workflow is estimated and described.

All the results are presented in the final report.

All the defects that have been detected in the previous stages of Parallel Testing are validated while executing system test cases. However, their elimination should be confirmed in the next stage of Parallel Testing for the same scope of data and products.

Summing up

Parallel Testing is a supplementary type of data migration testing. Due to its relatively high cost, we recommend launching parallel tests once system testing, which detects the majority of defects, is over.

The advantage of Parallel Testing is that it provides wide coverage of both the subscriber base and the configuration of the company’s products, because real data are taken from the production environment and processed en masse.

In addition, Parallel Testing detects defects that were overlooked during system testing and brings the financial and reputational risks of data migration down to a minimum.

Finally, we’d like to note that this type of testing can be useful not only for telecom solutions but also for testing the migration of large volumes of data of any type.

Contact us to get more information on how our services can help your software deliver the expected value to your business.

Integration testing does not frequently grab headlines in the Information Technology news. The scale of its defects is definitely not as critical as that of security defects.

Also, when planning a software release, business stakeholders rarely ask for integration testing, giving priority to functional testing, cross-browser and cross-platform testing, or software localization testing to meet the demands of an international audience.

However, it does not seem right to underestimate the importance of integration testing, as it is one of the primary keys to a solid product release.

What is integration testing?

Integration testing is the phase in software testing in which individual software modules are combined and tested as a group.

The first thing that comes to mind is software integration with payment systems. No doubt, assuring the quality of payment flows is an important aspect to test, but not the only one. Today, business relies on a large number of software solutions like websites and ERP, CRM, and CMS systems. Smooth communication among them all guarantees proper handling of users’ requests, service delivery efficiency, and overall business success.

In this blog post, we are going to demonstrate what systems might be tested in a QA project and the possible challenges engineers will have to overcome.

Integration testing: project review

Client

A representative of a popular English-language magazine (available in print and digital formats) turned to a1qa to perform full-scale testing of the website.

Product under test

Apart from the website functionality, the team was to check the Subscription Portal that was an integral part of the website and consisted of a few components. This module was of prime concern, as the business relied on it for revenue.

The subscription function was implemented with the help of the following software solutions:

  • The open-source CMS system eZ publish that performed subscription data filtering (type of subscription, subscription period, discounts applied, etc.).
  • The website through which a user interacted with the system.
  • Salesforce CRM software. It stored all users and subscription data. An additional plugin allowed the client’s team to manage the subscription acquisition, create new types and review the existing ones.
  • Zuora SaaS software to process billing and payment flows.
  • Mule ESB service bus to enable data exchange between the components.
  • The database as a BI tool.
  • Salesforce Marketing Cloud software for online marketing.
  • The Drupal CMS that replaced eZ publish. At the time, it contained registered users’ data and served as a tool for publishing articles, video, and audio content.

The subscription workflow is the following:

  1. User’s data is gathered.
  2. A user is provided with a possibility to subscribe after filling out the personal and payment information forms.
  3. The subscription order is handled by a third-party contractor.

Project goal

The client was planning to free the process from third-party involvement. For this purpose, it was important to make sure that the developed system could function properly on its own.

Testers’ task

The a1qa team was to ensure that the whole system made up of the above-mentioned components was able to solve the necessary tasks.

a1qa integration testing strategy

  1. Key business processes covered by the system were defined: buying, cancelling, freezing, and renewing a subscription, changing the billing information, etc.
  2. Test documentation was developed with all possible variations in mind. In the project context, variations are all possible flows (e.g., a subscription can be cancelled by a client, or automatically if the payment was rejected by a bank). The documentation was to include checks such as whether the subscription could be completed successfully for all products within each business process.
  3. Testing included a systematic execution of every business process from the start (where it was initiated) through all the transitional steps and to the final business process (or processes), checking that all the data was transferred correctly and the expected outcome happened.

Most processes included data transferring from one module (most commonly Salesforce) to the rest.

If the starting point was not SF, the information went from the starting module to MuleESB, and then to SF. After that it was spread to the rest of the modules (again, via MuleESB).

All in all, integration testing took about 40% of all the a1qa efforts.
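
The data flow described above can be sketched as a toy propagation check. The module names follow the project description, but the code itself is an illustration, not the project’s actual test suite:

```python
# Toy model of the data flow: a record enters at a starting module,
# travels via the bus ("MuleESB") to the CRM ("SF"), and is fanned out
# to the remaining modules; the check asserts every module received it.
def propagate(record: dict, start: str, targets: list[str]) -> dict:
    received = {start: record}
    if start != "SF":
        received["MuleESB"] = record  # starting module -> bus
        received["SF"] = record       # bus -> CRM
    for module in targets:
        received.setdefault(module, record)  # fan-out via the bus
    return received

targets = ["SF", "Zuora", "Drupal", "MarketingCloud"]
order = {"user": "jane@example.com", "subscription": "digital-annual"}

result = propagate(order, start="Drupal", targets=targets)
assert all(result[m] == order for m in targets)  # data arrived intact
print("record reached every module")
```

Each real integration test follows this pattern: trigger a business process at its entry point, then verify the same data at every module it should have reached.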


Challenges

Surprisingly, the majority of the integration testing difficulties were caused by poor requirements elicitation at the very start of the project. The poor-quality requirements caused defects and overall system instability.

What was the problem? Initially, the requirements were prepared by developers and looked like a number of User Stories in JIRA, containing only headings without any explanation.
The a1qa team initiated changes in the requirements preparation process. Description and Acceptance Criteria became mandatory fields for every Story. Subtasks were also created, with a clear definition of who was responsible for their fulfillment.

Integration testing: to automate or not to automate?

Test automation is a complicated question that requires detailed consideration of all the pros and cons.

Integration testing automation needs an even more detailed approach. On the one hand, automated scripts reduce QA time. On the other hand, automated tests are effective only when dealing with consistent or, at least, predictable data sets.

With subscription, it is not often the case – the data is updated regularly and randomly. Therefore, the testing was mostly manual.

Only at the later stages of the project was automation put into practice. Which test cases were automated? The key business processes were selected. Each business process had a number of variations documented. Only the test cases that covered the most stable business processes were automated.

With such an approach, automation guaranteed maximum coverage at optimized costs.

Results

The project is still ongoing; even now, it is possible to conclude that the system works properly. While each component performs its function separately, together they help reach the goal of non-stop business process operations important to the client’s business.

Bottom line

For a project with complex business logic, integration testing can’t be neglected.

For effective testing and the detection of defects and flaws, the QA team must:

1) Understand the structure of the product, knowing how all the modules interact;

2) Know the specific aspects of the project. This is important for preparing good test cases, analyzing tests, and choosing between manual and automated testing techniques.

In autumn 2017, the OWASP project published the updated Top 10 list of web application vulnerabilities. The Top 10 is produced with the goal of empowering web developers, security testing teams, and web product owners to ensure the apps they build are secure against the most critical flaws. This time, the data for the Top was submitted by 23 contributors covering 114,000 applications of all kinds, which makes the Top 10 an impartial source of AppSec information.

As security testing is one of a1qa’s most in-demand services, we couldn’t pass the Top 10 release by. After analyzing the changes and novelties, we invite you to go through the main ones and learn what they mean for the state of information security.

If you develop apps, ensure their quality, run penetration tests, or own the web app to run business – keep reading.

How has the OWASP Top 10 changed?


In general, the OWASP Top 10 has welcomed three newcomers and retired two entries that no longer pose such a severe threat. Aleksey Abramovich, Head of the a1qa Security Testing Department, has commented on the recent changes.

New entries on the Top

XML External Entity, Insecure Deserialization, and Insufficient Logging and Monitoring are the newcomers to the list.

“Together with the growing complexity of web solutions, there is constant growth in the variety of data and the servers that generate it. Nevertheless, it’s not a rare case when new solutions are based on legacy principles that don’t always align with best practices. A good and illustrative example is a simple server command to extract critical data. An insecure XML processor may process the command without suspecting unauthorized access:

<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE foo [
<!ELEMENT foo ANY >
<!ENTITY xxe SYSTEM "file:///etc/passwd" >]>
<foo>&xxe;</foo>

In such an easy way, the intruder may gain access to the user list. The next possible step is an attempt to dig up passwords and get information from databases to gain control over the app,” comments Aleksey.
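
As a quick illustration of the kind of check a security engineer might run, here is the same payload fed to Python’s standard-library parser. The behavior shown assumes modern CPython, where `xml.etree.ElementTree` does not fetch external entities:

```python
# Feed the XXE payload to a parser that does not resolve external
# entities; instead of leaking /etc/passwd, parsing fails safely.
import xml.etree.ElementTree as ET

XXE_PAYLOAD = (
    '<?xml version="1.0" encoding="ISO-8859-1"?>'
    "<!DOCTYPE foo ["
    "<!ELEMENT foo ANY >"
    '<!ENTITY xxe SYSTEM "file:///etc/passwd" >]>'
    "<foo>&xxe;</foo>"
)

try:
    ET.fromstring(XXE_PAYLOAD)
    print("parsed -- this parser may be vulnerable to XXE")
except ET.ParseError as err:
    print(f"rejected: {err}")
```

Running payloads like this against every XML entry point is a cheap way to confirm whether an app’s parser resolves external entities at all.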

Another notable change since 2013 is that the Insecure Direct Object References category merged with Missing Function Level Access Control into the Broken Access Control category, which occupies the fifth spot in the released version.

“I guess the two items have been merged because, when exploiting either of them, the attacker has one goal in mind – to gain unauthorized access to the system and private accounts and manipulate the system as desired. That’s why they were merged into one category,” Aleksey assumes.

What vulnerabilities have left the top?

Two of the vulnerabilities – CSRF and Unvalidated Redirects and Forwards – found no place on the list. Based on the OWASP data, they have dropped to the 13th and 25th spots respectively. What does this mean?

“Correct redirects and cross-site request handling play a very important role when advertising is involved or websites span multiple domains. Today, online ads are crucial for millions of web businesses. That’s why testing the security of redirects is vital. The owner of the resource should be sure that users’ data will be secured and that they won’t get redirected to a maliciously crafted link or third-party web page. Today, developers take serious measures to make sure users stay within their domain or are redirected correctly.”

Injections are still No.1

Despite the changes mentioned above (three new vulnerabilities and two retirements), some vulnerabilities have stayed on the Top 10 since 2010, and once again Injection tops the list. How come, with the development of security practices, there are still gaps that abusers can exploit? Aleksey Abramovich answers the question.

“Since 2013, security awareness has really been on the rise. Numerous secure coding best practices, data cleansing tools, and web tokens to secure apps have been introduced. Unfortunately, all this didn’t make apps safer for users, and the most common security flaws remain the same.

Injection is still ranked No. 1, and it’s easy to explain why. There are many types of injections – SQL injections are probably the most common – but all of them are highly destructive, easy to perform, and therefore responsible for a large number of public disclosures and security breaches.

Any injection attack occurs when unvalidated input comes from outside the system and is embedded into the input stream. The variety of entry points is huge. Again, this type of attack is rather easy to exploit and can be carried out through any kind of query. More often, injections are found in legacy code, but at times developers introduce them when coding, and the consequences can be very dramatic for the app owner.”

Penetration testing as a way to identify vulnerabilities

Certainly, the weaknesses on the list are only the most common ones. Security checks should stretch far beyond them. However, checking the app against them is a good way to find the most common flaws that have to be fixed and to improve the app’s security.

Will your website pass the OWASP Top 10 test? Order a vulnerability scan by a1qa security experts.

If you are not happy with how well your e-commerce store is selling and have no idea what to do about it, here is a list of issues, composed by Elena Yakimova, Head of the Web Testing Department at a1qa, that are likely to prevent your customers from making a purchase.

1. Your website is underperforming

According to a study conducted by the University of London, 90% of users would stop using an app if it underperforms. Performance – whether it’s pages taking too long to load, browsing being slow and difficult, or pictures that won’t display – is the top frustration of respondents.

2. The functionality of the website doesn’t respond to the customer’s needs

Despite the large variety of e-commerce websites and products they offer, all of them share the core functionality: search, list of products with the detailed description, product range filters and sorting, shopping cart, customers’ reviews. If any of this is missing or doesn’t work as expected, the customer will stumble and feel dissatisfied.

3. The payment process takes too long and orders can’t be easily returned

Do you ask your customers to register before making a purchase? How long does it take them to complete the transaction? Is there a try-before-you-buy option? Do you offer various payment methods?

All these issues are very important and should be taken into consideration when developing a customer journey map. Keep in mind: today’s users aim to get things done as quickly as possible.

4. The content of the website isn’t adapted for the target audience

60% of consumers feel more positive about a brand after consuming content from it. Obviously, if the content isn’t localized to the customer’s region, it will pose extra difficulties, and the user experience will suffer.

5. The website doesn’t work in the client’s browser / device / operating system

Customers will use different browsers (Chrome, Firefox, Opera, IE, etc.), devices (desktops, laptops, smartphones, etc.), and operating systems (Windows, UNIX, Linux, Mac, etc.) to access your e-commerce website. To satisfy them all, you have to make sure the website runs smoothly on any combination.

By the way, mobile users are five times more likely to abandon the site if it’s not optimized for mobile. 83% say a seamless experience across all devices is somewhat or very important.

Obviously, the stakes are high for any business that depends on its website or mobile app. Turning to professional testers may help get rid of numerous shortcomings. Tech-savvy testing engineers will perform comprehensive testing to make your website deliver maximum value.

How can solid testing help? 

  1. A few seconds’ delay between clicking a link and the page rendering can impair the use of the website. Performance testing will help answer the following questions:
  • Is the product ready for launch?
  • What is the maximum load the system can stand?
  • Why is the system’s performance low?
  • What are the bottlenecks?
  • Can the system stand the everyday workload?
  • How many concurrent users can the system handle?

Specialists will use performance testing tools to measure the average performance, detect all reasons that prolong the final presentation of your web page to the customer, and find out whether your website will survive the peak loads. By the way, in case you’ve missed it: here is the article on testing e-commerce before the peak loads. It’s well worth a read too.

  2. Every user completes a journey before clicking the Pay button. To make this journey successful, a UX testing specialist will get into the customer’s shoes, complete all the steps, and check them for inconveniences or ambiguities.
  3. No e-commerce website functions without payments. After all, this is what allows users to purchase the desired items. Different payment types should be verified, e.g., credit card, bank transfer, PayPal, etc. Also, QA engineers should check whether credit card details are stored securely and no data leakage will occur.
  4. Localization testing is an indispensable part of e-commerce testing. If you think localization is just translation, you’re only partially right. Taxes, product returns and refunds, financial transactions, and currencies must all be localized. And again, software localization testing specialists should take responsibility.
  5. Comprehensive cross-browser testing and adaptation for mobile are highly important for any e-commerce project. A professional software testing team will set up the right environment to test the product against relevant software and hardware combinations.

As you see, there’s a lot of work required to create and maintain an e-commerce website that will generate income. To make your customers come back, it’s vital to provide them with a user-friendly, fast, and informative e-commerce. Don’t neglect testing. It will help reveal all the drawbacks and timely eliminate them.

a1qa has great experience in assuring the quality of websites. For example, here is the case study of how the a1qa team assured the quality of the UK's biggest fashion and home goods online store.

Contact a1qa today to make your e-commerce endeavor successful!

We’re about to do one horrible thing. Just imagine that you’ve developed a piece of software: secure, safe, and easily accessible. It performs well across multiple devices and browsers and is localized for various cultural regions. It went live, and massive ads among the target audience ensured downloads and installations. However, weeks later the users quit and uninstalled the product. What went wrong?

There are many possible answers, but one of them, the topic of today’s post, is the failure to meet user requirements. The best software in the world, with responsive design, vivid graphics, and rich functionality, will have no value if it doesn’t live up to end users’ expectations. This is true without exception.

User requirements really matter

A user requirement is elicited by answering the following question: “What does the user need the software or product to perform?”

The only way to understand what makes a product easy to use is to elicit user requirements and answer such questions as:

  • In what manner will the users interact with the product?
  • What functions are important for users to accomplish their goals?
  • What are we trying to achieve with this or that functionality?

Some typical requirement categories and examples:

  • Interface: Interface elements should be clear and easily understood by all users. The user should be able to make a purchase by sending a request from the ordering system to the warehouse management system.
  • Performance: The system must be able to process 200 transactions per second at peak load. It must take no more than 1 second to scroll one page in a 40-page document.
  • Accessibility: All information included in the system can be perceived by a user with restricted or no vision or hearing. Instructions and descriptions should be provided for all accessibility features.
  • Security: The system should validate all input data. The system should identify any user trying to access it.

However, requirements too frequently turn out to be ambiguous, which leads to their loose interpretation by the IT team. Requirement errors then propagate through the SDLC and grow into defects. And this is disastrous.

These defects will probably be detected after implementation, when they cost around 10 times more to resolve. So it is imperative to elicit and define requirements concisely, so that the expected results are transparent and there are no unpleasant surprises after going live.

Professional approach to deal with requirements

Businesses are steadily relying on the expertise of requirements subject matter experts (SMEs) to lead them to product success, by eliciting and validating software requirements.

It’s a current tendency to allocate these responsibilities to business analysts who serve both the business and IT teams and help elicit clear and measurable requirements that satisfy both the stakeholders and the end users. QA consultants then design test cases based on these concise requirements. Each test scenario has pre-defined acceptance criteria and simulates an aspect of the product’s functionality by capturing all steps in sequence.

Once the software is developed according to all elicited and documented user requirements, there is only one step left before shipping the product – user acceptance testing (UAT).
If carried out competently, UAT will alert the IT team to any remaining gaps. If UAT is mismanaged, defects become extremely expensive to fix.

In UAT, software is tested from the user’s perspective or with users’ direct participation.

4 main UAT challenges to overcome

1) Offloading UAT to functional testers

Due to a lack of competent resources, customers often assign UAT to their functional test teams. However, in this case the whole idea of UAT is compromised. While functional testers check that the product functions properly, business analysts focus on the end users’ requirements and verify the product against them.

Solution: Assign a professional team with the domain knowledge to perform UAT.

2) Poor communication

It’s not rare for communication between distributed teams to be impeded. If software developers, testers, business analysts, and other people involved can’t discuss issues in time, the smallest ambiguity in defect reports can delay fixing.

Solution: Proper communication is a must for team collaboration. The UAT team should also use user-friendly tools to log defects and validate fixes.

3) New requirements from stakeholders

No one can stop the business from setting new requirements while the product is being tested. Sometimes stakeholders ask for testing time to cover new requirements in the upcoming release, and they do not always take into account the time this will require.

Solution: Project management must take timely decisions on these last-second changes so as not to jeopardize the release.

4) UAT planning

As we’ve already mentioned above, UAT takes place after the regression testing of the whole product is completed. It’s the last chance for the team to check whether the product fits the purpose of its development. Generally, it’s the most critical and vulnerable time period before the release. Delays (which are not rare) in testing or development will reduce UAT time.

Solution: Ideally, UAT planning should be performed during the requirements analysis phase. Firstly, real use cases have to be identified for further execution. Secondly, they have to be communicated to the QA team to help them prepare test cases and set up a proper test environment.

Bottom line

Any software project may send your effort and money down the drain if the product fails to meet user expectations. If conducted properly by experts with deep knowledge of the project and its objectives, UAT will help discover critical functionality and system vulnerabilities as well as avoid huge rework costs.

Over to you

How do you verify the quality of the product before release? What is your approach to UAT? Do you invite any real users to participate?

After Thanksgiving Thursday, Black Friday and Cyber Monday are coming. According to Deloitte’s annual survey, more consumers than ever are planning to shop online for gifts this season, with many retailers bringing their deals from brick-and-mortar stores to their websites.

a1qa specialists give last-minute recommendations on website QA testing that help check whether your website is ready for the increased number of visitors.

Here are the things you can do right now to prepare for holiday shoppers and not miss out on post-Thanksgiving revenue.

Performance management

  • Plan auto scaling of resources.

Auto scaling is a cloud computing technique that helps handle unexpected traffic spikes.

Don’t wait until the influx of traffic crashes your website. It’s better to predict the peak days in advance and allocate, for example, 10 extra virtual machines, then scale back down, so that back-end capacity follows the traffic fluctuations.
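A toy capacity-planning sketch of this idea; the requests-per-VM throughput, baseline fleet size, and headroom factor below are invented for illustration and should come from your own load-test results:

```python
import math

def desired_capacity(predicted_rps, rps_per_vm=100, baseline=5, headroom=1.2):
    """Pick how many VMs to run for a predicted request rate.

    rps_per_vm, baseline, and headroom are illustrative numbers;
    take yours from your own load tests, not from this sketch.
    """
    needed = math.ceil(predicted_rps * headroom / rps_per_vm)
    return max(baseline, needed)

# A quiet weekday vs. a Black Friday peak:
print(desired_capacity(300))   # baseline fleet is enough
print(desired_capacity(1200))  # scale out for the spike
```

Cloud providers' auto-scaling services apply essentially this rule continuously, driven by live metrics instead of a forecast.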

  • Increase the website capacity.

Use analytics to compare the current and the planned system peak loads. Ask your web hosting provider about traffic limitations and how much it would cost to use an upgraded server for the given period of time. As a rule, a better server helps handle increased traffic and sustained volume.

  • Optimize speed and improve client-side performance.

Statistically, the average time for a website to display is 3.9 seconds. Your visitors may not want to wait longer to see your discount offers, especially if they are shopping on mobile devices. So make sure that you have optimized the website speed. This can be achieved by improving client-side performance: shrink the CSS, minify the JavaScript, compress the images. There are many free online tools, such as PageSpeed Insights and WebPagetest.org, that point out the places to improve and give recommendations.
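As a quick way to see why compression matters, the sketch below estimates the bytes saved by gzip-style compression of a repetitive CSS asset (Python's zlib is used here as a stand-in for the gzip your web server would apply):

```python
import zlib

def gzip_savings(text: str) -> float:
    """Fraction of bytes saved by compressing an asset (zlib ~ gzip)."""
    raw = text.encode("utf-8")
    packed = zlib.compress(raw, level=9)
    return 1 - len(packed) / len(raw)

# Repetitive CSS, as real stylesheets tend to be, compresses extremely well.
css = ".btn { color: red; margin: 0; }\n" * 200
print(f"saved {gzip_savings(css):.0%} of the transfer size")
```

In practice you enable gzip or Brotli in the server configuration rather than compressing by hand, but the payoff is the same: fewer bytes on the wire, faster display.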

Data security

Security issues are also highly important as shoppers want to be sure that their payment data will be kept safe.

  • Scan the website for viruses.

Very often website owners think about making their resource safer only after the worst has happened: traffic has fallen and users have caught a virus. Don’t wait for this. Scan the website using one of the free online tools (VirusTotal, for example). It will help detect errors, malware, spyware, etc. It’s a tedious process, but it usually gets the job done.

  • Enable SSL.

SSL is a commonly used protocol that secures data transmission on the Internet. If SSL is enabled, there’ll be a lock symbol in the address bar. The lock is associated with security, and your visitors will know that their confidential data is safe and there is no risk of leakage.

If you haven’t enabled SSL encryption yet, ask your web hosting provider what SSL certificates they offer and what services come with them.

  • Check the web server basic configuration.

Errors in server configuration often result in major security vulnerabilities and may be the root cause of confidential data leakage. Pay attention to the following: How does the server report errors? Are any confidential files (backup copies, for example) available in the root directory? Explore your HTTP security headers and ensure you are keeping up with best practices.
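The security-header check can be automated: given the headers a server returned, flag the common security headers that are missing. The baseline list below is a common recommendation, not an exhaustive one:

```python
# Headers worth verifying; a common baseline, not an exhaustive list.
EXPECTED_HEADERS = [
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "Content-Security-Policy",
]

def missing_security_headers(response_headers: dict) -> list:
    """Return the baseline security headers absent from a response."""
    present = {name.lower() for name in response_headers}
    return [h for h in EXPECTED_HEADERS if h.lower() not in present]

# Illustrative response headers captured from a server:
headers = {
    "Content-Type": "text/html",
    "X-Frame-Options": "DENY",
}
print(missing_security_headers(headers))
```

Run such a check against every environment, not just production; misconfigurations tend to creep in on staging servers first.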

By taking these urgent measures, you’ll get better performance from your e-commerce website and earn well-deserved profit.

However, it’s always better to take such important steps in advance and engage a professional QA team. They will run full-cycle testing of your web resource while you count the holiday profit. Sounds great?

Happy Thanksgiving and Biggest-Ever Sales!

The reality is that most of us haven’t heard about A/B tests, despite regularly participating in them.

In this article, Diana Ring, a specialist of the a1qa Agile Testing Department, describes the main principles of A/B testing and the QA engineer’s role in getting true-to-life test results.

A/B testing is a marketing tool used to evaluate and enhance the efficiency of a web resource. To get a better understanding of the notion, it’s vital to answer two questions:

  • What is the main objective of A/B testing?
  • How is it conducted?

Let’s imagine we own an online store, and one day we decide to introduce some changes to the user interface to get a significant gain in sales. We may want to highlight the BUY button and make it red, assuming it’ll become more visible to the customer. However, our decision is a mere hypothesis that has to be backed up with numbers: the target audience may well disagree with us. To get the desired effect, we need to prove our hypothesis with facts, and this can be achieved by running A/B tests.

The key goal of A/B tests is to attract more users and maximize the outcome of interest.

The goal is achieved by creating variations of experimental pages. As the name implies, we take the original page (A) and modify it somehow, thus getting a B version of the page. Then the user’s behavior on both of the pages is compared. In such a way, the users unconsciously vote for the better variation.

A/B tests may be monovariant (split) tests, where the two variations of the page differ in only one element (button color, section title, or font). Otherwise, the pages should look the same.

There is also multivariate testing, where the pages differ in a number of ways. The total number of variations created may be up to four.

A/B testing process

  • First the hypothesis is generated. You assume that some modification will be better than the current version and will increase website or online store visitor-to-lead conversion rate. Conversion rate is the proportion of visitors to a website who take action to go beyond a casual content view or website visit.
  • Based on the hypothesis, experimental versions of the page are created. It may be done with the help of external A/B testing services that allow for creating and editing website pages.
  • All website visitors are randomly divided into groups. The number of groups should be equal to the number of variations. Every group is assigned to see only one variation. The traffic is also allocated in a random manner. Obviously, one user shouldn’t view different variations of the page. To ensure this, the users should be identified (using cookies, for example). Sometimes the test page may be shown within a specific segment of the customer base. In this case the segmentation approach is applied to include multiple customer attributes – for example, age and gender – to get more accurate test results. The test usually runs for a couple of weeks.
  • During the test run, metrics are gathered and the conversion rate is measured for every page variation, based on the total number of target actions completed. Every project has its own target action: for an e-commerce website it will be a product purchase, whereas for other projects it may be registration completion, e-mail signups, clicking a banner advertisement, etc. In all cases, the conversion rate determines which variation performs better.
  • Users’ interaction with every variation is measured and analyzed. Based on the results of the data collected, it’s possible to determine whether the change had a positive, negative or no effect at all. The variation bringing positive effect should further be implemented on the website.
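A common way to satisfy the "one user, one variation" rule mentioned above is deterministic bucketing: hash the cookie or user id instead of picking a variation at random on every visit. A minimal sketch:

```python
import hashlib

def assign_variation(user_id: str, variations=("A", "B")) -> str:
    """Deterministically map a user to one variation.

    Hashing the cookie/user id (instead of random.choice) guarantees
    the same visitor always sees the same page version, while the
    hash's uniformity keeps the traffic split roughly even.
    """
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return variations[int(digest, 16) % len(variations)]

# The same user always lands in the same bucket:
assert assign_variation("user-42") == assign_variation("user-42")
```

A/B platforms implement essentially this mechanism under the hood, usually salting the hash with an experiment id so that buckets differ between experiments.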

This is the step-by-step algorithm of running A/B experiments. What is the software testing engineer’s role in this process?

Immediately after the variations are created, the tests run in the testing environment, and QA engineers verify the functionality of every variation, its compliance with the website design and functional requirements, and the accuracy of data collection.

How can the A/B tests be optimized?

To run A/B tests and make a sound decision, there are a number of tools available online. The most popular of them are the following:

  • Google Analytics Content Experiments
  • BestABtest
  • Visual Website Optimizer
  • Unbounce
  • Evergage
  • Optimizely

Using A/B testing software allows for creation of test pages, making the desired changes, setting up tests, allocating the website traffic and collecting the relevant data.

Let’s look closer at Optimizely, an easy-to-use platform for running split and multivariate A/B tests. It supports target audience segmentation, creation of new experiments, and gathering of experiment results.

On the main page there’s a list of all currently active A/B tests, the number of users interacting with each test, and the dates of its launch and modification.

The test description contains information on how many experiments have been launched, the traffic allocation, the test goals, and the target audience. The goals may differ: completing an action (adding the product to the shopping cart), reaching a set session length, a total number of purchases, or just clicking a link. During the test runs, we, the testing engineers, should verify that data is collected properly.

To check all A/B variations, we edit the traffic allocation: first we assign 100% of users to see page A, then page B, etc.

The Optimizely page with test results displays the losing and winning variations; the conversion rate is also calculated there.
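Before declaring a winner, it's worth checking that the difference in conversion rates is statistically significant rather than noise. A two-proportion z-test is a standard way to do this; the visitor and conversion numbers below are invented for illustration:

```python
import math

def conversion_z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B result.

    conv_* = number of conversions, n_* = visitors per variation.
    |z| > 1.96 is roughly significant at the usual 95% confidence level.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variation A: 200 purchases out of 10,000 visitors (2.0%);
# variation B: 260 out of 10,000 (2.6%).
z = conversion_z_score(200, 10_000, 260, 10_000)
print(f"z = {z:.2f}, significant: {abs(z) > 1.96}")
```

A/B platforms report significance for you, but running the arithmetic yourself is a useful sanity check, especially when the sample sizes differ between variations.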

Finally, it’s worth saying that it’s not easy to predict users’ behavior on a website. Running A/B tests can be very helpful, as they let you check any hypothesis, and there’ll be no need to argue over page modifications. Kick off the experiment and wait for your customers to participate.

If your variation is a winner – great! Now see if you can apply learnings from the test on other pages of your website. If the test has generated a negative result, don’t panic. Use the experiment as a learning experience and draw conclusions.

You may try to run A/B tests on your own or turn to an experienced QA company that will complete the test, analyze the results, and present you with the experiment data.

Test and convert your visitors into customers!

Wearable devices are tech gadgets connected to a smartphone and carried in the user’s pocket or worn on the body, creating constant, seamless, hands-free access to the smartphone. As wearable devices rapidly advance in technology and functionality, more and more real-time applications use the data gathered by a wearable device: most commonly these are health and fitness apps that track the user’s physical activities, helping them stay fit.

In this article, Veronica Yanchenco, a QA specialist at a1qa, shares her experience of testing apps for wearables and offers solutions to the difficulties a software tester might face.

The article looks at a real-life example of the mobile application testing services we delivered for one much-discussed project. The project includes a website and a mobile app that help users keep a healthy lifestyle by motivating and rewarding them. For instance, a user shares another fitness achievement and gets points that can be spent on purchasing real fitness goods. Initially, wearable devices were integrated with the website; later, integration of the mobile app with Apple Watch was added, and the author has been working on it.

Complex test coverage

Too many wearable devices from different manufacturers – that’s a challenge. Obviously, it’s time- and effort-consuming to test on all the devices on the market, and supporting an additional vendor means additional development and testing integration that may be too costly for the customer. How can you pick the right devices to deliver holistic testing?

Solution: To test on the right devices, consult usage statistics and, if possible, the preferences of the target audience.

Absence of emulators

To get the required data, you’ll need to produce it yourself, i.e. run, walk, raise your pulse rate, take a nap, etc. Keep in mind that as soon as you reconnect the device to another smartphone, all the obtained data will be lost and you’ll have to generate it again.

Solution: Think outside the box. If you need to produce thousands of steps (which was our case), try making movements that increase the step count or imitate them. In some cases, shaking the hand holding the smartphone might help.

Dependence on third-party applications

Integration issues may also block your testing process. If the application you use to receive data has shortcomings, it might impact the whole system. It is also necessary to take into account that wearable devices don’t always collect data correctly; the accuracy of readings may be far from perfect.

Solution: If you have found that measurement accuracy suffers, figure out how it affects the application’s behavior and answer the following questions:

  • Is it feasible to handle the inaccurate data and prevent it from getting into the app?
  • Is it possible to change the business logic of the app to prevent improper use of the data, for example, to get an in-app reward? If such loopholes exist in the application, they may bring losses to the customer.

Multiple user scenarios

Some user scenarios can be predicted (i.e. the data coming in gradually during a day, or two devices running simultaneously), while others can hardly be foreseen. At times, it is difficult to guess where your user will log into the app and how that will impact data collection.

Solution: Step into your user’s shoes or become a user yourself: use the devices in your daily life, monitoring how they react to the changing usage environment.

To overcome any challenge you might face, stay creative and use your imagination to come up with nontrivial testing solutions, as our team did in the following real cases:

Testing on towels

On one of the projects we discovered that if you put on your Apple Watch, open the heart rate screen, then take the watch off and place it on a towel or a pair of jeans (the fabric really matters), the pulse rate will reach about 200 beats per minute. Too much for a towel, agreed?

When the customer learnt about our finding, he asked us to check how this behavior would affect the application functionality. We tried to reproduce the scenario but had neither textured fabric nor towels in the office. Do you know what helped us? A stuffed toy! When we placed the Apple Watch on it, the pulse reached 200 beats per minute.

The question arises: could this happen in real life?

In fact, quite easily: a user finishes a training session in the gym, takes the wrist-worn device off, and puts it on a pair of jeans or a towel. However, it’s very difficult to foresee this and include it in a testing scenario.

#10000000 steps

On another project, a user complained that the app wouldn’t synchronize the completed steps. After studying the problem, we found that this happens when the number of steps contains more than seven digits, i.e. is at least 10,000,000. Note that when the app is launched for the first time, it uploads all the steps taken over the past 90 days.

After simple calculations (10,000,000 steps over 90 days), we got about 111,000 steps per day, which is approximately 80-90 kilometers (50-55 miles). Is that realistic? To answer this question, we decided to find out how much real users walk per day. Here the popular Instagram service turned out to be helpful, as many people eagerly share their step counts in their accounts.

Looking through multiple posts marked with the hashtags #steps, #activity, and #healthapp, we came to the conclusion that the average number of daily steps lay between 7,000 and 10,000. The most active users can take up to 20,000, but it’s next to impossible to complete more than 35,000 steps.

Another possible approach is to search for a more precise hashtag, #10000000 in our case. When we worked on the project, there was only one photo with that hashtag. Judging by the caption, someone had walked across the Great Wall of China; unfortunately, no time period was specified. In any case, it is unlikely that anyone could cover 80-90 kilometers (50-55 miles) a day for 90 consecutive days. The bug was fixed, but the question of who managed to get 10,000,000 steps in a day, and how, is still open.
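A simple plausibility filter like the sketch below could catch such outliers before they break synchronization. The 50,000-step daily ceiling is an illustrative assumption derived from the observations above, not a number from the project:

```python
# Illustrative ceiling, comfortably above the ~35,000-step daily
# maximum observed in the Instagram survey described above.
MAX_PLAUSIBLE_DAILY_STEPS = 50_000

def accept_daily_steps(steps: int) -> bool:
    """Reject step counts no real user could plausibly produce in a day."""
    return 0 <= steps <= MAX_PLAUSIBLE_DAILY_STEPS

print(accept_daily_steps(9_500))     # an average user's day
print(accept_daily_steps(111_000))   # the 10,000,000-in-90-days case
```

Whether rejected values should be dropped, capped, or flagged for review is a business-logic decision the team and the customer have to make together.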

Summing up, I can say that testing on wearable devices may be quite challenging, but never boring. After all, how can one get bored when running tests on a towel?

For any company, data confidentiality is a matter of high importance. A leak of clients’ usernames and passwords or a loss of system files may result in great financial expense and destroy the reputation of even the most trustworthy organization. An article by Vadim Kulish, security testing engineer.

Considering all the potential risks, companies spend big money embedding the latest security technologies to prevent unauthorized access to valuable data.

But have you ever considered that, besides sophisticated hacking attacks, there are simple ways to uncover files that weren’t effectively protected? In this article we’ll focus on Google search operators that can be used to get more specific search results or to detect sensitive information.

Let’s start from the beginning.

One can hardly imagine Internet surfing without search engines such as Google, Bing, and others. Search engines index vast numbers of web pages to make them available for browsing.

Google search operators

When you search in Google, you can include search operators in the entry field to narrow or broaden your search. The most commonly used of them are the following:

* site: returns results from certain sites or domains

E.g.: If you enter site:example.com you’ll get all info in Google related to the example.com website.

* filetype: searches for exact file type

E.g.: The entry filetype:php site:example.com will provide you with the list of php-files from the website example.com.

* inurl: searches for specific text in the indexed URL

E.g.: The entry site:example.com inurl:admin will search for the administration panel on the website.

* intitle: searches for query terms in the page’s title

E.g.: The entry example.com intitle:”Index of” will return documents from the website example.com that mention the phrase “Index of” in their titles.

* cache: searches in Google cache

E.g.: cache:example.com will show Google’s cached version of the page instead of the current one.

Unfortunately, web crawlers are not able to determine the type and degree of information confidentiality. Therefore, they treat a blog article published for a wide audience and a database backup copy stored in the web server root directory, never intended for third-party view, exactly the same.

Thanks to this and the search operators, hackers manage to detect vulnerabilities of web resources, information leaks (backup copies and web application error text), and hidden resources, such as an administration panel left open without authentication and authorization mechanisms.

Types of information that can be detected by search engines and may be potentially interesting to hackers include the following:

* Third-level domains of the explored resource

Third-level domains can be found using the site: keyword. For example, the query site:*.example.com will return all third-level domains of example.com. Such queries enable you to detect hidden management resources, release management systems, and other applications with a web interface.

* Hidden files on a web server

When searching, you may happen to view various parts of the web application. To find them, use the query filetype:php site:example.com. It can reveal previously unavailable functionality in the application, as well as other information about the app.

* Backup copies

Backup copies may be found with the filetype: keyword. Backup copies are usually stored with file extensions such as bak, tar.gz, or sql. For instance: site:*.example.com filetype:sql. Backup copies often contain logins and passwords for admin interfaces, as well as user data and the source code of your website.

* Errors of the web application

Error text may contain various data about the app’s system components (web server, database, web application platform). This information is always very interesting to hackers because it allows them to learn more about the target system and refine the attack. For instance: site:example.com “warning” “error”.

* Login and password

Cracking a web application may reveal a large amount of users’ sensitive data. The query filetype:txt “login” “password” will let you find files with usernames and passwords. Likewise, you can check whether your email or any account has been compromised: just run the query filetype:txt “user_name_or_email”.
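For a quick self-audit, such queries can be generated for your own domain. The sketch below follows common dork syntax and should only be used against resources you are authorized to test:

```python
def build_audit_queries(domain: str) -> list:
    """Generate Google dork queries for auditing your own site.

    Run them only against resources you are authorized to test.
    """
    return [
        f"site:*.{domain}",                  # third-level domains
        f"filetype:php site:{domain}",       # hidden script files
        f"site:*.{domain} filetype:sql",     # exposed backup copies
        f'site:{domain} "warning" "error"',  # leaked error text
    ]

queries = build_audit_queries("example.com")
print("\n".join(queries))
```

Feeding each query into the search engine and reviewing what it returns is a cheap first pass; anything sensitive that shows up should be removed from the web root and de-indexed.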

The combinations of keywords and search strings used to detect confidential information are commonly named Google Dorks.

They are collected in the public Google Hacking Database. Now any company representative, whether a CEO, a developer, or a webmaster, can learn what type of sensitive data was detected with a given query. All dorks are broken down by category to make searching more convenient.

Google Dorks leaving mark in the history of hacking

Finally, here are some cases of how Google Dorks helped attackers gain access to sensitive but poorly protected information:

#1. Leakage of confidential documents on the bank’s website

During a security analysis of the official bank site, a large number of PDF documents was detected. All documents were found with the query “site:bank-site filetype:pdf“. Interestingly, it turned out that the documents contained the floor plans of the bank’s branch premises across the country. For sure, that information would be very interesting to bank robbers.

#2. Cardholders’ data search

When breaking into online stores, attackers very often gain access to users’ payment data. To make this info public, violators use public services that are indexed by Google. Sample query: “Card Number” “Expiration Date” “Card Type” filetype:txt.

With all this in mind, we recommend that you check the security of your website to prevent dubious activities related to your resource.

But we advise you to look beyond the basic checks. Engage security testing specialists to conduct a comprehensive analysis of your software product. After all, it’s better and cheaper to prevent data loss than to repair the damage incurred.

Today a large number of web applications are tested in Internet Explorer. Despite all its disadvantages, it has been the default Microsoft Windows browser since 1995. The brand-new Windows 10 has recently introduced a modern default browser: Microsoft Edge. What should you expect from this innovation, and how is it likely to impact web app testing?

Microsoft Edge

Microsoft developers have done their best to create a browser different from the unpopular Internet Explorer. In the future, Microsoft Edge is intended to replace Internet Explorer, but the current version of Windows 10 has both browsers installed.

Microsoft Edge has a number of new features, such as a Markup tool for creating notes on pages, a reading mode and reading list, Cortana assistance, etc. The new browser has inherited some features from IE, but it has an upgraded architecture.

Let’s have a closer look at Microsoft Edge’s new and peculiar features.

Home page and search bar

The first page is not an actual web page but a customizable home page. It is split into multiple sections containing the latest news, weather, and other information you need.

The Microsoft Edge search bar doubles as the address bar. It allows you to search for information or instantly view the weather forecast, stock quotes, definitions, calculation results, etc.

Settings

Unlike Internet Explorer, Microsoft Edge doesn’t have awkward settings that are unclear to common users. The Microsoft Edge menu contains the minimum options required for web surfing and easy use, e.g. the Cortana assistant, theme customization, favorites, etc.

Notes right on the web page

Microsoft Edge is the only browser that allows you to draw or type directly on web pages. Add your ideas to the page content and share them with others or save them in OneNote.

To use the Notes tool, click the Pencil icon on the toolbar. You can add comments, highlight text, or clip out parts of the page. If you have a touch screen, you’ll certainly like this feature.

Share function

Not all web pages have dedicated buttons for sharing an article via email, OneNote, or social networks. Microsoft Edge has a solution: simply click the Share button on the toolbar and select how you want to share.

Reading mode and reading list

Microsoft Edge aims to make reading as comfortable as possible. In reading mode, you can read the text without needless elements such as banners, ads, etc., and the text is fitted to the size of your screen. In addition, any web page or PDF file can be saved to the reading list for later.

Reading mode is not a new feature for browsers: extensions for Google Chrome, Mozilla Firefox, Opera, and others leave only the article’s text and pictures, removing all other elements from the page.

Cortana assistant

Microsoft Edge lets you act quickly. The Cortana personal assistant gives instant access to common actions, such as booking tickets or hotel rooms, reading reviews, etc., right from the current page. All you need to do is say “Hey, Cortana” and make a request.

Cortana is one of the notable advantages of Windows 10 over Apple’s OS X. It competes with the well-known Siri, which is integrated into the iOS, watchOS, and tvOS operating systems. For Russian-speaking countries, Cortana has one major benefit: complete Russian language support.

Adobe flash player

Microsoft Edge has a built-in Adobe Flash Player, which means you don’t need to install or upgrade it manually. Moreover, you can turn it off in the browser settings.

Extensions

One of the first extensions compatible with Microsoft Edge is Bing Translator, which provides automatic translation of websites, even in reading mode. Unfortunately, this feature will only be enabled after the final version of Windows 10 is released, via an update through the Windows Store.

Microsoft’s developers set out to create a new browser, and they succeeded: Microsoft Edge is unlike any other browser on the present-day market.

Is Microsoft Edge good enough to replace the unpopular Internet Explorer, or will it just become IE’s next version? Any integrated system solution will always have its audience. The current version of Microsoft Edge is unstable and contains a number of bugs; some of them have already been reported and fixed. Windows developers are aware of the situation and eager to improve the browser, and anyone is welcome to send feedback.

Windows 10 is currently considered the most secure Windows yet. The new operating system has a variety of built-in features to protect against viruses, spyware, and malware. All the security tools update automatically, maintaining the ability to withstand the latest threats. Let’s review these tools and their implications for security testers.

Multi-factor authentication

One of the most interesting solutions for securing accounts is multi-factor authentication, which requires you to pass several stages to access information. In Windows 10, the first factor is the device itself.

The second factor is your personal identification number or biometrics. If your credentials are stolen, they alone will not be enough for attackers to access your data: they will also need your physical device (mobile phone, computer, tablet, etc.).

Access tokens protection

Access tokens, which are generated when you authenticate, are becoming a target of attacks more and more frequently. Once hackers obtain these tokens, they can access your information even without your credentials. Windows 10 secures access tokens with an architectural solution that stores them in a safe container running on Hyper-V technology.

Windows Hello

How often do you need to enter a password? We bet very often. Windows Hello uses your biometrics (iris, face, or fingerprint) instead of passwords, giving you instant, secure access to Windows 10. Windows captures and recognizes your biometrics using a fingerprint scanner or a camera that supports the feature, and welcomes you by name when you sign in. It is a quick and safe way to use your device without any passwords. Thanks to automatic, free updates, you will keep receiving new features for as long as your device is supported.

Microsoft Edge

What do you usually do after signing in? Most people open a browser. Microsoft Edge is a completely new browser that lets you surf the Internet more easily and safely: it runs in a browsing sandbox that keeps web content isolated from your private information and data. Read the detailed Microsoft Edge review here.

SmartScreen filter and Windows Defender

SmartScreen protects you from phishing sites that are aimed at stealing your passwords and sensitive information. SmartScreen analyzes all visited sites and blocks potentially dangerous ones. This technology also protects you from downloading malware.

Microsoft SmartScreen provides perimeter protection while Windows Defender fights against advanced malware. Windows Defender does not require complicated settings and protects your software from the latest threats. Windows Defender instantly analyzes the information and responds to threats in a moment.

Parental control

Windows 10 allows you to connect your PC to your children’s local or Microsoft account and take care of their security. Parental control blocks websites, applications and games for adults, regulates the time when the device is used, and provides activity reports.

Data protection

Most of us need to access corporate applications and documents while on a business trip or at home. Remote access carries risks associated with the VPN connection, especially if you use your own device. Windows 10 gives IT specialists the ability to manage VPN permissions: administrators can define access to specific applications and manage them with Master Data Management solutions.

Windows 10 goes back to the roots of Windows 7 and borrows some elements from competitors. It has remained just as functional but has become easier to use.

Windows 10 will influence the following testing activities:

  • Installation testing (a new operating system presupposes some peculiarities).
  • Compatibility testing (an upgraded application programming interface can affect desktop applications; web applications are not likely to be affected).
  • Security testing (new features involve new vulnerabilities).

Application security is a critical concern for any business, regardless of its sphere. Even if you are not involved in web application testing in any way, it is useful to know the most common vulnerabilities and security problems an application can have. Moreover, it is vital to be able to prevent or deal with them.

The Open Web Application Security Project (OWASP) is an open project dedicated to web application security. The organization is not affiliated with any software development company and promotes the thoughtful use of security technologies.

TOP 10 most significant vulnerabilities

  • A1 – Injection
  • A2 – Broken Authentication and Session Management
  • A3 – Cross-Site Scripting (XSS)
  • A4 – Insecure Direct Object References
  • A5 – Security Misconfiguration
  • A6 – Sensitive Data Exposure
  • A7 – Missing Function Level Access Control
  • A8 – Cross-Site Request Forgery (CSRF)
  • A9 – Using Components with Known Vulnerabilities
  • A10 – Unvalidated Redirects and Forwards

OWASP has compiled this comprehensive Top Ten list of the most dangerous web application vulnerabilities and provides recommendations for handling those flaws.

The purpose of the Top Ten list is to raise awareness of application security by identifying the most critical risks to organizations. It is referenced by a range of standards, tools, and organizations, including MITRE, PCI DSS, DISA, the FTC, and many others.
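To make the top entry, A1 – Injection, concrete, here is a minimal sketch in Python using the standard-library `sqlite3` module. The table, column, and payload are invented for illustration; the point is the contrast between string concatenation and a parameterized query.

```python
import sqlite3

# Hypothetical schema for demonstration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"  # a classic injection payload

# VULNERABLE: user input is concatenated straight into the query text,
# so the payload rewrites the WHERE clause and matches every row.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE name = '%s'" % user_input
).fetchall()

# SAFE: a parameterized query treats the payload as a literal value,
# so it matches nothing.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(vulnerable))  # 1 – the payload matched the whole table
print(len(safe))        # 0 – the payload was treated as plain text
```

Parameterized queries (or an ORM that uses them) are the standard remedy OWASP recommends for injection flaws.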

The article by Aleksey Abramovich was published in Pipeline Magazine. 

According to the Ponemon Institute, a single denial-of-service (DoS) attack costs the owner of a website, including those associated with telecom organizations, more than $166,000 on average.

Every year, DoS attacks are becoming more expensive as site owners increasingly intensify protection against them. But who is implementing these DoS attacks and for what reason? And what is the best way to be prepared to protect your website?

The birth of DoS attacks

The first DoS attack was registered in September 1996. The attack was addressed to the site Panix.com – the first Internet provider in New York.

However, this first DoS attack was not as impressive as one carried out by a 15-year-old Canadian using the handle ”MafiaBoy,” who attacked a range of commercial sites in 2000. One by one, he caused the collapse of the largest American portals – eBay.com, Amazon.com and Yahoo.com – thus demonstrating the vulnerability of those websites at that time.

The next wave of DoS attacks occurred in 2010, when hackers broke, one by one, the largest e-commerce systems: PayPal, Visa, and MasterCard. Every year, DoS attacks continue to gain power. To give an idea: in 2005, the “strength” of an average DoS attack was no more than 100 Gbit/s, while this year’s most powerful attack has already reached 400 Gbps.

Both attacking and defending sides spend billions of dollars on this ongoing “virtual war.” In order to understand how to defend against DoS attacks, you must first be aware of their nature.

DoS attacks: types and possible protection

We have established that a DoS attack is a malicious activity that blocks a site from use by legitimate users. A subset of this is the distributed denial-of-service (DDoS) attack, which involves multiple computers under single management. DoS attacks usually target one of three resources, which we’ll cover below.

Channel attacks – a traffic emergency

The most complicated and expensive DoS attacks for hackers and, thus, the most dangerous for the site owner, are attacks on the server network connection. Using a botnet or vulnerable servers, an attacker can generate a huge stream of malicious traffic to the website of the chosen victim.

Because of the large amount of “extra” load in the channel, legitimate users have problems while connecting, or can even be blocked. This situation can be compared to a traffic emergency where cars are stuck in a jam; the traffic is being caused by the DoS attack, and those “stuck in traffic” are the users trying to connect to the site.

The prevention of such attacks is quite expensive because of the need to purchase special detection and blocking equipment; otherwise, third parties, such as providers and data centers, must be engaged. The most effective solution is to create a “black hole” outside the server (on-site at the provider) to “catch” the traffic; imagine additional road lanes designated to offload the traffic jam. Black holes are activated at the start of a DoS attack: while all traffic is routed into the black hole, the owner of the website buys time to move the application to another IP address.

Attack on the CPU time – confused pupil

In an attack on CPU time, the criminal’s target is to max out the server’s CPU load. Imagine a student who is given 10 tasks to solve instead of one within the limited time allocated for a lesson: unable to concentrate on any single one, they eventually cannot do what is required.

The best defense against this type of attack is special software (for example, a web application firewall, or “WAF”) installed on the server that filters out potentially dangerous requests.
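As a rough illustration of what “filtering out potentially dangerous requests” means, here is a toy WAF-style check in Python. Real WAFs such as ModSecurity use far richer rule sets; the patterns and parameter names below are purely illustrative.

```python
import re

# Illustrative known-dangerous patterns (a real WAF rule set is much larger).
DANGEROUS_PATTERNS = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),  # SQL injection probe
    re.compile(r"(?i)<script\b"),              # reflected XSS probe
    re.compile(r"\.\./"),                      # path traversal probe
]

def is_request_allowed(params: dict) -> bool:
    """Return False if any request parameter matches a dangerous pattern."""
    for value in params.values():
        for pattern in DANGEROUS_PATTERNS:
            if pattern.search(value):
                return False
    return True

print(is_request_allowed({"q": "cheap flights"}))            # True
print(is_request_allowed({"q": "1 UNION SELECT password"}))  # False
```

A filter like this sits in front of the application and rejects malicious requests before they consume expensive CPU time.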

Read the full article here

The goal of an attack on CPU time is to drive the server’s CPU to 100% load. One option is to make the server process a large amount of information from a database, which takes considerable effort.

A good example of an attack on CPU time is the SSL Renegotiation DoS attack. To perform it, a client repeatedly renegotiates its connection with the server over the SSL protocol, and the server needs about 15 times more processing power than the client to handle each request.

How to protect the server from this attack?

For security testers, the first step is to turn off the SSL renegotiation mechanism. Attacks on CPU time are very dangerous for all kinds of services and can result in serious damage. Software installed on your server that filters out potentially dangerous requests is a good security measure.
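In Python, for instance, disabling client-initiated renegotiation on a TLS server context looks roughly like this. This is a sketch, not a hardening guide: the `OP_NO_RENEGOTIATION` flag requires Python 3.7+ built against OpenSSL 1.1.0h or newer, so the code guards for its presence.

```python
import ssl

# Create a server-side TLS context.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

# Refuse client-initiated renegotiation where the OpenSSL build supports
# the flag (OpenSSL >= 1.1.0h, Python >= 3.7).
if hasattr(ssl, "OP_NO_RENEGOTIATION"):
    context.options |= ssl.OP_NO_RENEGOTIATION

# TLS 1.3 removed renegotiation entirely, so requiring a modern minimum
# protocol version is a complementary mitigation.
context.minimum_version = ssl.TLSVersion.TLSv1_2
```

The same idea applies to web server configuration (e.g. renegotiation settings in your TLS terminator); the key point is that the server simply refuses the expensive renegotiation handshake.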

The goal of attacks on the server’s memory is to completely fill the internal memory, a hard disk, or a critical region of memory; once that happens, the server becomes unavailable. The classic example is the TCP SYN flood attack, whose main objective is to create multiple half-open connections and so block access to the server. To achieve this, the attacker sends a stream of TCP segments with the SYN flag set.

With a correct setup, the information about the new connection is encoded into the server’s response sent back to the connection initiator, and when the server receives the client’s reply, it verifies the connection information contained in the packet. This security technique is called SYN cookies: the server takes the IP address and port from the incoming packet, transforms the data into a number, and places it in the Sequence Number field; the client then echoes that value plus one back in its acknowledgment.
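The SYN cookie idea can be sketched in a few lines of Python. This is a toy model: the real Linux implementation also encodes the MSS and a timestamp into the 32-bit value, and the secret, addresses, and ports below are invented.

```python
import hashlib
import time

SECRET = b"server-side-secret"  # hypothetical per-server secret

def syn_cookie(client_ip: str, client_port: int, counter: int) -> int:
    """Derive a 32-bit initial sequence number from the client's address,
    a slowly-changing counter, and a secret -- no per-connection state."""
    data = f"{client_ip}:{client_port}:{counter}".encode()
    digest = hashlib.sha256(SECRET + data).digest()
    return int.from_bytes(digest[:4], "big")

def verify_ack(client_ip: str, client_port: int, counter: int, ack: int) -> bool:
    """A legitimate client's ACK equals the server's sequence number + 1,
    so the server can recompute the cookie instead of storing it."""
    expected = (syn_cookie(client_ip, client_port, counter) + 1) % 2**32
    return ack % 2**32 == expected

counter = int(time.time()) // 60          # changes once a minute
seq = syn_cookie("203.0.113.7", 51515, counter)
print(verify_ack("203.0.113.7", 51515, counter, seq + 1))  # True
```

Because the server stores nothing for half-open connections, a SYN flood can no longer exhaust its memory: only clients that complete the handshake with a valid cookie get a real connection entry.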

The TCP FIN flood attack is similar to the SYN flood but creates a number of half-closed connections. Here, once a connection is established, a TCP segment with the FIN flag is sent to close it. The server responds with a FIN+ACK segment and waits for acknowledgment; if the acknowledgment never arrives, memory fills up. To fix the problem you need a patch from the product owner.

“Slow” attacks on the HTTP protocol work somewhat differently. The Apache web server is the most vulnerable to them.

The attacker acts in the following way: they connect to the server and start receiving or sending data extremely slowly, opening multiple such connections until denial of service is reached. As a result, the server’s memory fills up because of the great number of data streams the Apache server keeps open.

To protect the server, define minimum and maximum time limits for sending a request and receiving the answer; if a client exceeds the limit, the connection is closed. Specially designed modules can be used to set up this protection.

In conclusion, automated systems are really important today: they manage various critical processes. Denial-of-service attacks can lead to unexpected results, so protection against DoS/DDoS attacks is more critical than ever and will not become a minor concern in the future.

Testing is the process of executing a program to detect defects. The generally accepted methodology for iterative software development, the Rational Unified Process, presupposes a complete test run on each development iteration. Testing not only new code but also code written during previous iterations is called regression testing.

It is advisable to use automated tools when performing this type of testing to simplify the tester’s work. Automation is a set of measures aimed at increasing the productivity of human labor by replacing part of this work with the work of machines. The automation of software testing thus becomes part of the testing process itself.

The requirements formulation process is the most important process in software development. The V-Model is a convenient model for developing information systems; it has become the standard for government and defense projects in Germany.

The basic principle of the V-Model is that each stage of application development and requirements refinement should have a corresponding testing task. One of the challenges of this development model is system and acceptance testing.

Typically, this type of testing is performed according to the black-box strategy and is difficult to automate, because automated tests have to use the application interface rather than an API. “Capture and replay” is one of the most widely used technologies for black-box web application test automation today: the testing tool records the user’s actions in its internal language and generates automated tests from them.

Practice shows that developing automated tests is most effective when modern software development methods are applied: code quality must be analyzed, and duplicated test code must be factored into libraries that are themselves documented and tested. All of this requires a significant investment of time, and the tester needs developer skills.

Thus, several questions arise: how to combine user-action recording with manually developed automated tests, how to organize the verification of automated tests themselves, and whether it is possible to develop the application and its automated tests in parallel, following the test-driven development (TDD) methodology.

There are systems capable of determining which tests must be run first. Such systems require manually associating automated tests with changes in the source files of the application under test. However, the connection between source files and tests can instead be expressed in terms of conditional probabilities.

Probabilistic networks used in artificial intelligence could also help define these relations automatically, based on statistics of test results. Using such networks, we can link interface operations and test data, which reduces the complexity of automation.
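The simplest version of this idea can be sketched without any network at all: estimate P(test fails | file changed) from historical results and run the highest-probability tests first. The file names, test names, and history below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical history of (changed_file, test_name, failed) observations.
history = [
    ("billing.py", "test_invoice", True),
    ("billing.py", "test_invoice", True),
    ("billing.py", "test_login",   False),
    ("billing.py", "test_invoice", False),
    ("auth.py",    "test_login",   True),
]

def failure_probabilities(changed_file):
    """Estimate P(test fails | changed_file) from past runs."""
    runs = defaultdict(lambda: [0, 0])  # test -> [failures, total runs]
    for f, test, failed in history:
        if f == changed_file:
            runs[test][0] += int(failed)
            runs[test][1] += 1
    return {t: fails / total for t, (fails, total) in runs.items()}

probs = failure_probabilities("billing.py")
ranked = sorted(probs, key=probs.get, reverse=True)
print(ranked)  # tests most likely to catch a defect in this change come first
```

A full probabilistic network would additionally model dependencies between files and between tests, but even this frequency estimate already prioritizes regression tests automatically instead of relying on hand-maintained mappings.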

Get the full article here.

Today everyone knows about DDoS/DoS attacks. The buzz has long spread beyond security testing and is now everywhere: on the Internet, on TV, etc. You don’t have to be an experienced user to face the consequences of these attacks: when you visit, for example, a favorite news portal and see the “service is unavailable” message, there it is – a DDoS/DoS attack.

Even knowing what the letters DDoS/DoS stand for, do you understand what they mean? DoS means Denial of Service: a malicious activity aimed at making a website unavailable to legitimate users. DDoS means Distributed Denial of Service: the same attack performed by several computers at once.

The most frequent motives for carrying out DDoS/DoS attacks are:

  • Political protest
  • Unfair competition
  • Blackmailing
  • Personal issues

The first DoS attack was registered in September 1996. A few days after it – on September 19th – Carnegie Mellon University’s computer emergency response team published a brochure about DDoS/DoS attacks. In 1999–2000, when the largest portals such as eBay, Amazon, and Yahoo couldn’t handle the attacks, word of them spread around the world. Several years later, in 2010, DDoS attacks became notorious again: hackers started using them for political protests and attacked the biggest e-commerce systems, like PayPal, Visa, and MasterCard. Recently, attacks have become more powerful still: in 2013, hackers performed a 300 Gbps attack, while in 2014 they reached a record 400 Gbps.

DoS attack: behind the scenes

Today all attacks can be divided into three big categories depending on the target:

  • Server network connection attacks
  • Server CPU time attacks
  • Server memory attacks

Server network connection attacks target server bandwidth. Hackers generate a huge stream of spurious traffic toward the target, using a botnet or vulnerable Internet servers such as open DNS resolvers. The attack can be compared to a traffic jam with an ambulance stuck in it, where the ambulance represents the connections of legitimate clients.

From a technical viewpoint, channel attacks are very difficult. They shouldn’t run for a long period, otherwise they can harm the hacker’s own system. Protection against these attacks is also very pricey: a company has to buy expensive equipment to detect and block the attacks, or turn to a third-party security provider.

Still, users should understand that this kind of attack is unlikely to be aimed at a small blog, because the cost of performing it would outweigh the result.

In the next blog post we’ll cover the topics of server CPU time and server memory attacks.

Previously, we covered the things that must be tested and checked, but some things can be skipped in software testing. Indeed, there are a few.

What can be skipped

There is no need to check basic columns and content types in site collections: the columns and content types in libraries usually differ from those in the gallery and change independently of the site columns. You can also skip testing lists, libraries, and standard page templates that won’t be used in your application. System pages can be seen only by administrators, so there is no need to check them. The same goes for standard web parts, especially when they don’t fit the standard application page.

In the process of web application testing you can get puzzled: it can be difficult to tell whether a page is standard or customized. In fact, it is easy to find out: create a standard SharePoint site collection and compare the columns. Usually, every application is developed on the basis of standard site collections; if the application doesn’t need the standard fields, they are usually hidden from users.

Program restrictions of SharePoint platform

SharePoint has lots of restrictions, either static or configurable. Static restrictions are those that cannot be exceeded structurally; configurable ones can be raised to meet certain requirements. Being aware of these restrictions helps you avoid registering non-issues as defects, the so-called “SharePoint defects”.

When you work with lists and libraries, remember that 250 MB is the default maximum file size, though it can be expanded to 2 GB. Another notable limit of the user interface is that you can select 100 items simultaneously and open 10 documents in different file formats at the same time. When working with pages, keep in mind that one page can include up to 25 web parts.

As for the security restrictions, SharePoint allows including users and Active Directory groups in one SharePoint group. Each group is limited to 5,000 members, and each user can be a member of up to 5,000 groups.

There are also some restrictions for Excel: for example, the maximum size of a workbook is 10 MB. One more thing about SharePoint: the Datasheet view is available only in Internet Explorer, as it requires ActiveX.

Finally, there are many things not touched on here that you will meet when working with SharePoint, but the points covered above should help you understand the specifics of testing SharePoint applications.

In the previous post we learned what SharePoint is and how it can be used, and started discussing the components that need to go through web application testing.

So, you are to test:

  • Name, description, group. It is good when the updated content is included in the group;
  • The columns included in a content type define what metadata the content can carry and what the content’s purpose is. Check that all the columns are descendants of the standard content type, to avoid problems when the content type is updated;
  • Check automatic workflows, if they are included in your project.

Remember to check the settings of the libraries and lists that will be used in the application to store documents and information (including those used in web parts). The following items should be checked:

  • Navigation settings: check that the library or the list is visible in the website navigation;
  • Versioning settings: define whether added documents are moderated, whether documents can be edited, whether draft copies are created and, finally, who can view all of that. The check-out setting belongs here too, as it prevents simultaneous editing of documents;
  • Advanced settings: define whether the documents of the library are included in the search results. Advanced settings are also in charge of creating new folders in the libraries, documents opening, new document templates recognition;
  • Audience targeting settings: the option allows to use targeting for library documents;
  • Permissions for this document library: you can control it only if the library rights or the document should be unique. Otherwise, when a user gets rights to the website s/he gets the rights to the library as well;
  • Content types: check whether all the necessary content types have been added and which of them is set as the default;
  • Check also the views of the list and library content, in case the application displays them awkwardly.

Pay special attention to the Versioning, Advanced, and Audience targeting settings, as they control document circulation, search, and the library views in the web parts.

Verify that the page layouts and design have all the necessary controls and comply with the design. There also should be no problems with viewing the page in the full screen mode or editing layout. Check that everything is in the right area and functions well.

Test how the websites get created on the base of the Site templates. The settings are to be correct; all the lists and the libraries should be displayed.

Apart from that, check the settings of the web parts: use plenty of test data and check the web parts with documents created for different groups. After installing the application, verify that the necessary user groups have no problems with permissions by checking under different accounts with different rights. Since search is used so often, check the availability of the search fields and profiles.

Though the checklist is quite long and some items must be completed, there is also a list of points that can be skipped, keeping in mind the restrictions of the platform. This topic will be covered in the next post.

What is SharePoint?

In fact, it is a content management system combined with a well-developed document management system. SharePoint’s document management capabilities are quite impressive and it handles those tasks perfectly, but using it as a content management system takes a lot of effort. SharePoint is often used to develop corporate intranet portals that ease employee interaction.

This web-oriented platform for teamwork and document management was developed and launched by Microsoft. In essence, it is a unified communication center and a universal data storage. The solution can be used for a corporate web portal to store and share various documents and specialized applications.

Data in SharePoint is organized in the form of lists (tasks, discussions, calendars) and document libraries. The functionality includes a number of web parts, which are in fact control elements used to display and edit the lists. These web parts are placed on pages published on the portals, and users access them via a browser. To give more technical detail, SharePoint is an ASP.NET 2.0 application that uses IIS to serve the web pages and SQL Server to store the data.

What to test?

When starting web application testing, you should know the specifics of SharePoint applications, since you have to test not only the functionality but also the platform.

Site Columns and Site Column Gallery testing is obligatory and goes without saying. A site column is a user-managed attribute; it can serve as a fragment of metadata in lists or content types. Columns are added to websites or lists, and you can also reference them in different content types.

Checking these attributes helps avoid potential defects in the application. If the project includes several site collections, each of them can use its own columns; in that case, check each set separately. Remember to pay attention to the column’s name, data type, the group the column belongs to, and its settings.

Apart from that, concentrate on the site content types, which are reusable sets of parameters. Content types provide centralized management of metadata and of the behavior of documents, items, and folders. Again, if the product you test uses its own content types, they have to be tested for every site collection.

Things that are to be checked and skipped will be touched upon in the next blog posts.

Julia Liber is head of Telecom and Web application testing department at a1qa . In this role, she manages the Internet applications and telecom systems testing team and provides consulting for wireless operators. She also assists with organizing the testing process and the acceptance phase for modification or new billing solution implementations.

Would you say testing in OSS/BSS is a trivial task? Definitely not.

It usually takes at least a couple of days just to test the functionality. While implementing a new system or subsystem, or changing functionality, there are many questions to answer. Which product should be chosen for testing? Which tariff plan? What kinds of charges should be checked, and what types of subscribers used?

It is obviously impossible to cover all possible options and combinations during functional testing, so it is necessary to select the most important products and services.

At this point, the thought comes to mind to turn to the good old engineering approach of back-to-back testing, which is based on the law of large numbers. The point is simple: It is necessary to compare system behavior using the same data. Imagine that we have two environments:

  1. Production — your live system serving subscribers
  2. Testing — your environment intended for testing.

First, at a specific date/time, usually after a billing period has been completed, we transfer copies of user data (migration data) and the product catalog (configuration data) to the testing stand. At the end of the reporting period (month or week, depending on the timing of the invoice), a copy of the input data for each transaction (payments, charges, maintenance fees) is loaded into the testing environment.

Next, the output data from both environments is processed and placed on a comparison server. Then a specially developed script checks whether the number of transactions and the charges from both environments match. The matched and unmatched records, the results of this comparison, become the input data for the testers.
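The comparison step can be sketched in a few lines. The subscriber IDs, charge amounts, and tolerance below are invented; a real script would read transaction dumps from both environments and compare per-record fields defined by your OSS/BSS schema.

```python
# Hypothetical per-subscriber charge totals from the two environments.
production = {"sub-001": 12.50, "sub-002": 7.00, "sub-003": 3.25}
testing    = {"sub-001": 12.50, "sub-002": 7.10}  # sub-003 failed to load

matched, mismatched, missing = [], [], []
for sub, charge in production.items():
    if sub not in testing:
        missing.append(sub)        # record did not load at all
    elif abs(testing[sub] - charge) < 0.01:
        matched.append(sub)        # environments agree
    else:
        mismatched.append(sub)     # charge amounts diverge

print(len(matched), len(mismatched), len(missing))  # 1 1 1
```

The `missing` and `mismatched` buckets are exactly the discrepancy lists the testers then group by root cause, as described below.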

What’s next? The testing team goes to work. Testers analyze the records behind any discrepancies between the two sets of results: the causes of the differences are identified, the records are grouped by cause, and we get results in the following format:

Numerical discrepancies:

  • 5 percent of records could not be loaded due to functional defect No. 1
  • 3.5 percent of records could not be loaded due to functional defect No. 2
  • 1 percent of records could not be loaded due to functional defect No. 3
  • 0.5 percent of records could not be loaded due to functional defect No. 4

Discrepancies in the amount of write-offs:

  • 7 percent of records are processed improperly because of a defect in configuration No. 5
  • 1 percent of records are processed improperly because of a defect in configuration No. 5
  • 0.5 percent of records are processed improperly because of a defect in configuration No. 5
  • 0.1 percent of records are processed improperly because of a defect in configuration No. 5

What is the outcome of this approach?

  1. It provides complete coverage of the product catalog, through the activities of real users. The system tests exactly what is used by subscribers;
  2. It checks the quality of the configuration and system migration, as well as the most critical functionality for OSS/BSS parts: rating, billing and payments;
  3. It helps clearly prioritize defects that are present in the system based on the needs of the business. Let’s say, it is more important to correct defect No. 1, compared to defects No. 3 or No. 4, since defect No. 1 does more damage to the business;
  4. Because testing takes place on a large volume of data, this is a good way to test how well the new version of the system can withstand real-world loading.

Of course, there are limitations to this approach.

First, the data comparison always depends on what type of OSS/BSS you use: you will need to develop a unique script for your system to compare the data and select records to analyze. Second, ideally the test environment must match the production environment; otherwise, there is a risk of missing the deadline because the test environment processes transactions too slowly.
