Performance Testing
We do the uncool stuff so that you don't have to...
The performance-testing space is unique. On a new project, performance testing is usually not regarded as very cool. On an existing project it's something that needs doing periodically (especially around future capacity), but it usually falls to some unfortunate soul who has to fit it in alongside their usual responsibilities.
We like uncool. We have the experience and tools to make it attractive.
How do we do it?
We usually come on board for the requirements-gathering and build stages, and after a successful test execution and results analysis you'll have the option of on-demand support. These are the phases we typically go through:
Gather requirements
You will typically want to find out whether your system(s) meet certain performance requirements, now or in the future. The first step is a discussion about what those requirements actually are. Perhaps as a business you have figured out that if a transaction takes longer than 5 seconds, the user is more likely to abandon it. Or maybe a regulation or contract binds you to an SLA (Service Level Agreement).
Setting targets for a requirement is often a bit arbitrary, so it's worth questioning where they came from and why they matter. This process reveals the relative priority of each criterion, which in turn informs your test plan. We always ask why a requirement is in place; if there is no good answer, chances are it's not a real requirement.
It is a good idea to document performance requirements in the form of a user story:
- As a user I want the product search page to return results within 1 second so that I don't switch to another service.
- As a business we want any page to load within 1 second for up to 1,000 concurrent users so that potential customers don't switch to another service.
The advantage of a user story is that it neatly encapsulates the what, the who and the why of a feature in a single readable sentence.
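A story like this can also be made directly executable. As a minimal sketch, here is how the 1-second search target might be expressed in a Locust script (the /search endpoint, query string and failure message are placeholders, not a real client's API):

```python
from locust import HttpUser, task, between

class SearchUser(HttpUser):
    # Simulated think time between user actions.
    wait_time = between(1, 3)

    @task
    def search(self):
        # The 1-second target from the user story becomes an explicit pass/fail check.
        with self.client.get("/search?q=widgets", catch_response=True) as resp:
            if resp.elapsed.total_seconds() > 1.0:
                resp.failure("exceeded the 1 s requirement")
            else:
                resp.success()
```

Run the same script with 1,000 simulated users (e.g. `locust --users 1000`, pointing `--host` at the system under test) and it also exercises the business-level story above.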
Build & Run
We work with several industry-standard tools such as JMeter, Gatling and Locust.io, as well as proprietary ones (especially ORQA and Datascade, the best automation tools you've never heard of).
In our experience, generating the test data for a capacity test is the biggest challenge, and we've developed proprietary tools to help with it. We have tools for generating synthetic day-zero data as well as data for future-volume tests, and we can also use open-source tools to extrapolate from existing data sets. The basic idea is sketched below.
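While our own generators are proprietary, the synthetic approach can be illustrated with nothing but the Python standard library; the customer schema and value pools here are entirely made up for the sake of the example:

```python
import csv
import random
import uuid
from datetime import date, timedelta

# Hypothetical day-zero schema: a customer table for a volume test.
NAMES = ["Alice", "Bob", "Chen", "Dana", "Ed"]
SEGMENTS = ["retail", "business", "premium"]

def synthetic_customers(n, start=date(2020, 1, 1)):
    for _ in range(n):
        yield {
            "customer_id": str(uuid.uuid4()),
            "name": random.choice(NAMES),
            "segment": random.choice(SEGMENTS),
            # Spread sign-up dates over ~3 years so queries hit realistic ranges.
            "signed_up": (start + timedelta(days=random.randint(0, 1000))).isoformat(),
        }

with open("customers.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["customer_id", "name", "segment", "signed_up"])
    writer.writeheader()
    writer.writerows(synthetic_customers(100_000))
```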
The data problem is especially acute in financial-services settings, where the data can be date-dependent. It's a challenge we have successfully met several times in the past.
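One common way to handle date-dependent data (an illustration of the general technique, not our proprietary tooling) is to re-base every date column of an extract relative to the test day, preserving the intervals the system relies on. A minimal Pandas sketch, assuming a hypothetical trades.csv:

```python
import pandas as pd

# Load an anonymised extract with its date columns parsed as datetimes.
trades = pd.read_csv("trades.csv", parse_dates=["trade_date", "settlement_date"])

# Shift all dates by the gap between the extract's latest trade and today,
# preserving intervals (e.g. T+2 settlement) that the system depends on.
offset = pd.Timestamp.today().normalize() - trades["trade_date"].max()
for col in ["trade_date", "settlement_date"]:
    trades[col] = trades[col] + offset

trades.to_csv("trades_rebased.csv", index=False)
```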
Results and Reporting
Results are typically collected in a local database, from which we can pull statistics at either a granular or an aggregate level. Exactly what we store depends on your requirements. Report generation is flexible: as long as we have the data, any type of report can be produced regularly or ad hoc.
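The store itself can be as simple as a SQLite file. Here's a sketch; the schema, run ID and sample row are illustrative assumptions, not a fixed format:

```python
import sqlite3

db = sqlite3.connect("results.db")
db.execute("""
    CREATE TABLE IF NOT EXISTS samples (
        run_id     TEXT,
        endpoint   TEXT,
        started_at TEXT,
        elapsed_ms REAL,
        success    INTEGER
    )
""")

# Each load-test sample becomes one row.
db.execute(
    "INSERT INTO samples VALUES (?, ?, ?, ?, ?)",
    ("run-42", "/search", "2024-01-01T10:00:00", 512.0, 1),
)
db.commit()

# Aggregate report: request count, mean latency and error rate per endpoint.
for row in db.execute("""
    SELECT endpoint,
           COUNT(*)            AS requests,
           AVG(elapsed_ms)     AS mean_ms,
           1.0 - AVG(success)  AS error_rate
    FROM samples
    WHERE run_id = 'run-42'
    GROUP BY endpoint
"""):
    print(row)
```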
Analyse and Solve
If we find issues, the results we gather are only the start of the journey. We usually use Python (NumPy + Pandas) or Datascade to load, transform and make sense of the underlying data. Once we've identified a possible area of concern, we'll set up additional monitoring on the problem resource (database, network, etc.) to understand the issue better, and then propose a solution. We've done this countless times, so we're fairly confident we can help.
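For the Pandas side, a typical first pass looks something like this (the file and column names are assumptions about an exported results set):

```python
import pandas as pd

# Load raw samples and look for slow endpoints.
samples = pd.read_csv("samples.csv", parse_dates=["started_at"])

# Per-endpoint latency profile; p95 is often more telling than the mean.
profile = samples.groupby("endpoint")["elapsed_ms"].agg(
    requests="count",
    mean_ms="mean",
    p95_ms=lambda s: s.quantile(0.95),
)
print(profile.sort_values("p95_ms", ascending=False))
```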
Repeat
Once all of the above is done, the whole cycle can be repeated as often as you need. We're on standby to help.