When checking for bugs and disruptions, quality assurance (QA) testers need to look beyond an application’s individual elements and consider its overall architecture and the connections between its components. After all, a monolithic application is built differently from one based on microservices, and you have to adjust your testing approach accordingly. Read on to find out why you should pay attention to architecture when testing microservice-based apps – and what can happen if you don’t examine error-prone situations.
What is a microservice architecture?
As Amazon Web Services (AWS) puts it, “Microservices are an architectural and organizational approach to software development where software is composed of small independent services that communicate over well-defined APIs.” Each service has its own code base and serves a specific function, which makes an app easier to update, test and scale.
This approach contrasts with a monolithic architecture, where an application is developed as a unified system with a single code base. To make any changes or updates to a monolithic app, a development team has to build and deploy a new version of the entire server-side application. In the long term, this can result in products that are difficult to scale, take longer to update and are less reliable.
That’s why a microservice architecture is usually recommended for large, complex systems, such as applications that serve thousands of users from different countries, and constantly evolving solutions. It comes as no surprise, then, that companies such as Netflix, Amazon, Uber, Spotify and Zalando use a microservice architecture in their own product and service development. Microservices enable developers to speed up or change specific components without having to rebuild and deploy an entire app.
For example, in one of the projects I’ve worked on, a feature enabled users to log in, have their documents copied from different systems into one place, add metadata and save the files to a designated folder. After some time, the process changed on the client’s side and the files no longer needed to be copied. Instead of reworking the entire system, we only had to change one service.
Another advantage of microservices is their higher resilience to errors and crashes. Since each service functions independently, when one of them suffers an outage, the system doesn’t crash the way it might in the case of a monolith. Instead, the application can continue working with the other functioning services while developers fix the malfunctioning component. Microservices are also a good foundation for building continuous integration/continuous delivery (CI/CD) pipelines. Releasing changes and updates can be more flexible when you can alter select components instead of an entire solution.
However, when testing apps, testers have to take the specifics of microservices into account to make sure their testing approach covers microservice-related scenarios.
Why do you need to run non-functional testing on microservice-based apps?
When developing software, you test its functionalities to make sure they enable the required user actions. In other words, you check whether your solution functions the way it’s supposed to. That’s called functional testing.
But there’s more to an app than just ensuring it does what the documentation says. To deliver a truly outstanding product, you have to think holistically about the entire system and pay attention to its environment, not just individual elements.
That’s where non-functional testing comes in. Non-functional testing focuses on aspects that impact user experience but aren’t covered by functional testing, such as performance, security, usability and scalability. By checking the app against these requirements, you can ensure that end users get a high-quality product that doesn’t break under traffic or disappoint with buggy performance.
In a microservice-based application, services are independent from each other and have separate responsibilities that shouldn’t overlap. At the same time, services have to effectively communicate with each other for a smooth user experience. One service shouldn’t bottleneck this communication network and slow down the performance of an entire app. Testers have to make sure that all elements of an app cooperate correctly and don’t negatively affect users’ experience.
Thinking holistically about a microservice architecture helps you mitigate the risk that communication between services will be disrupted when you introduce a change into the system, e.g., by speeding up one specific service to handle increased traffic.
Read also: Service Mesh – A New, Safe Architecture to Enhance Your Microservices
How microservices impact testing approaches
Testing scalability
Microservices give applications a scalability that you can control to accommodate changing demands, such as traffic spikes. To deal with high traffic, DevOps or infrastructure engineers can add instances of a service or increase the resources of a specific pod. They can also scale down resources for components that consume less.
When managed well, the scalability of microservices helps avoid scenarios where communication between services, or between services and a database, gets overloaded. Communication has to be adapted to the amount of data each service can process and to a client’s requirements, which include budget limits as well as real data and user traffic. Solutions such as an autoscaler can help with these issues.
When increasing the processing power of a service, testers need to ensure that this strategy won’t worsen the app’s overall functioning. Testers run performance tests with load-testing tools like JMeter, Flood.io or Gatling to check whether an app will crash under heavy traffic. Based on a client’s requirements, they make sure that the app can handle expected usage at the required speed (e.g., that it can process a set number of files per minute), as in the sketch below.
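For illustration, here is a minimal load-test sketch written with Locust, an open-source load-testing tool similar to those listed above. The endpoints, payloads and request mix are hypothetical placeholders for a client’s real requirements, not part of any specific project.

```python
# A minimal Locust load-test sketch. Endpoints and payloads are hypothetical;
# adjust the request mix and target rates to match the client's requirements.
from locust import HttpUser, task, between


class FileProcessingUser(HttpUser):
    # Each simulated user waits 1-3 seconds between requests.
    wait_time = between(1, 3)

    @task(3)
    def upload_file(self):
        # Hypothetical endpoint that queues a file for copying.
        self.client.post("/files", json={"name": "report.pdf", "source": "crm"})

    @task(1)
    def check_status(self):
        # Hypothetical endpoint that reports how many files are still queued.
        self.client.get("/files/status")
```

You would typically run such a script with a command like locust -f loadtest.py --host https://your-app.example.com, ramp up the number of simulated users, and watch whether response times and error rates stay within the agreed limits.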
Through performance tests, QA testers can identify services that might disrupt data and user flows. They can also check whether an app performs well when different services operate at different speeds in various scenarios. To anticipate these hypothetical, bug-prone situations, it’s crucial to understand the dependencies between services and how the app is built.
In large-scale applications, it is almost guaranteed that the infrastructure will fail at some point. This is why testers should also focus on an application’s resiliency. Chaos Monkey, a tool created by Netflix and now used at major tech companies, randomly terminates instances. The goal is to ensure the solution is resilient to instance failures – the system should be able to withstand turbulent conditions in production. This pushes teams to build in the redundancy needed to survive incidents without impacting users.
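To make the idea concrete, here is a minimal chaos-style sketch in the spirit of Chaos Monkey, written with the official Kubernetes Python client. The namespace and label selector are hypothetical, and such an experiment should only ever be run against a test environment you are allowed to break.

```python
# A minimal chaos-style experiment: terminate one random pod of a service and
# then verify (outside this script) that the app keeps working. Namespace and
# label selector are hypothetical placeholders.
import random

from kubernetes import client, config


def kill_random_pod(namespace="staging", label_selector="app=filecopy-service"):
    config.load_kube_config()  # use the current kubectl context
    v1 = client.CoreV1Api()

    pods = v1.list_namespaced_pod(namespace, label_selector=label_selector).items
    if not pods:
        return None

    victim = random.choice(pods)  # pick one instance at random
    v1.delete_namespaced_pod(victim.metadata.name, namespace)
    return victim.metadata.name


if __name__ == "__main__":
    print(f"Terminated pod: {kill_random_pod()}")
```

After each such termination, testers check that requests are still served, that no data is lost and that the orchestrator brings a replacement pod back up.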
Testing idempotency
Idempotency in microservices means that processing the same input more than once produces the same result as processing it once. If you apply idempotency and combine it with retry-type mechanisms, a process should end with the same output even when an error occurs at some stage. Additionally, if one service encounters an error, it shouldn’t affect the app as a whole.
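A minimal sketch of what this can look like in code is shown below, assuming a hypothetical copy_file operation and an in-memory store standing in for a real database. The idempotency check means a retried call cannot produce a duplicate copy.

```python
# A minimal sketch of an idempotent operation combined with a simple retry loop.
# copy_file and the in-memory `processed` store are hypothetical stand-ins for
# a real service call and a real database.
import time

processed = {}  # idempotency key (file id) -> result of the first successful copy


def copy_file(file_id: str) -> str:
    # Stand-in for the real copy operation (e.g., an HTTP call to FileCopyService).
    return f"copied:{file_id}"


def copy_file_idempotently(file_id: str) -> str:
    # If this file was already copied, return the previous result instead of
    # copying it again - repeating the call doesn't change the outcome.
    if file_id in processed:
        return processed[file_id]
    result = copy_file(file_id)
    processed[file_id] = result
    return result


def copy_with_retries(file_id: str, attempts: int = 3, delay: float = 1.0) -> str:
    # Transient errors are retried; because the operation is idempotent,
    # retries can't create duplicate copies.
    for attempt in range(1, attempts + 1):
        try:
            return copy_file_idempotently(file_id)
        except ConnectionError:
            if attempt == attempts:
                raise
            time.sleep(delay)
```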
Because of the different communication dependencies present in a microservice architecture, there’s a risk that some issues connected to idempotency will go unnoticed. Testers should be aware of disruptions that can occur in the communication between services so that they can minimize them and create an app that’s resilient to architectural disturbances. What proves useful here is, again, an in-depth knowledge of the goals and functions of each service.
To check for different disruptions, testers consider various scenarios and examine how a service behaves when something goes wrong. Common situations include pods restarting, or being added and removed as a result of scaling a service.
Take, for example, an app that is supposed to automatically copy files. When testing this app, you’d have to account for a situation where a service that reads file names and their locations (ReadingService) sends information about 100 files to copy to the service responsible for making copies (FileCopyService). As the ReadingService sends messages about the files to the FileCopyService’s queue, the FileCopyService starts processing them and its central processing unit (CPU) and memory usage increase. In response, additional FileCopyService pods are spun up.
When the ReadingService runs out of files to queue, the FileCopyService’s additional pods won’t have files to copy, so an autoscaler will start removing the extra pods. But there’s a risk that a pod that is in the middle of copying a file will get removed, or that a network connection error will occur. In that case, the file has left the queue but hasn’t been copied, because something happened to the pod before it completed the operation. Testers need to know what should happen next – e.g., the message about the file should go back to the queue so another pod can process it successfully, as in the sketch below.
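One way to get that behaviour is to acknowledge a message only after the copy has succeeded. The sketch below assumes a RabbitMQ queue and the pika client; the queue name and copy_file function are hypothetical, and the original project may have used a different broker.

```python
# A minimal consumer sketch: the message is acknowledged only AFTER the copy
# succeeds, so if the pod dies or the copy fails mid-operation, the broker
# returns the message to the queue for another pod to process.
import pika


def copy_file(file_name: str) -> None:
    # Stand-in for the real copy operation performed by FileCopyService.
    print(f"copied {file_name}")


def on_message(channel, method, properties, body):
    try:
        copy_file(body.decode())
        channel.basic_ack(delivery_tag=method.delivery_tag)  # done - remove from queue
    except Exception:
        # Copy failed: put the message back so another pod can retry it.
        channel.basic_nack(delivery_tag=method.delivery_tag, requeue=True)


connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="files-to-copy", durable=True)
channel.basic_consume(queue="files-to-copy", on_message_callback=on_message)
channel.start_consuming()
```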
There are, of course, many other error-prone situations to consider. For example, when several pods process files from a queue, you can never be sure in what order the files will be copied. Out of those 100 files, the message that arrived first in the queue might not be processed first, because the pods are busy. In another example, when a firewall is set up for only one of the pods, the app will only process every other file. There’s also a risk that services will enter a deadlock as each waits for the other to process the file. You have to look out for any issues resulting from these different situations.
Performance tests help you identify these unwanted problems so you can start thinking about solutions. It’s recommended to implement a health check to monitor the health status of pods. But whatever solution you decide on, you will also have to test it to make sure it doesn’t cause other bugs.
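As an illustration of such a health check, below is a minimal sketch of an endpoint that an orchestrator or a test can poll, written with Flask. The queue_connection_ok check is a hypothetical placeholder for whatever dependency the service actually relies on.

```python
# A minimal health-check endpoint sketch. The dependency check is a
# hypothetical placeholder (e.g., pinging the message broker).
from flask import Flask, jsonify

app = Flask(__name__)


def queue_connection_ok() -> bool:
    # Stand-in for a real check of the service's critical dependency.
    return True


@app.route("/health")
def health():
    if queue_connection_ok():
        return jsonify(status="ok"), 200
    return jsonify(status="degraded"), 503
```

An orchestrator can then restart or replace pods whose health endpoint starts failing, and testers can verify that this replacement happens without losing queued work.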
Ensure high performance of microservice-based apps with non-functional testing
The severity of risks resulting from different scenarios depends on the function of an affected service. However, even relatively small bugs can ruin an otherwise excellent user experience or an app’s performance and usability.
Non-functional testing helps you make sure that each element of a microservice-based app works correctly, both as an individual unit and as a whole network of services. It enables you to identify bottlenecks, maintain high performance and ensure that a solution meets a client’s requirements. Through testing scalability and idempotency scenarios, you ensure that an app will function properly in various situations, depending on current user demands.
A cross-functional development team in which engineers, architects and testers collaborate closely is the best guarantee that a solution will meet your expectations and attract users. Companies looking to engineer products from scratch or modernize existing solutions turn to Software Mind to benefit from our diverse talent and expertise with a range of technologies. Use the form below and our experts will explain how we can accelerate your software development.
About the author
Monika Mazur
Test Automation Engineer
A Test Automation Engineer with 5 years’ experience in test automation and quality assurance and a passion for high-quality software, Monika designs testing processes and creates automated test solutions. An Applied Computer Science graduate, she has a track record of driving testing efficiency and delivery processes, including automating manual invoicing for a financial services project.